
Binomial Distribution - must satisfy Bernoulli trials

1. There are only two possible outcomes for each trial
2. The probability of success is the same for each trial
3. The outcomes of different trials are independent
4. There is a fixed number n of Bernoulli trials conducted
P(X = k) = b(k; n, p) = (n choose k) p^k (1−p)^(n−k),  k = 0, 1, 2, 3, …, n
where (n choose k) is the number of ways to select k objects from a set of n objects
Cumulative binomial distribution (probability of at most x successes):
B(x; n, p) = Σ_{k=0}^{x} b(k; n, p)  for x = 0, 1, 2, 3, …, n
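A quick way to evaluate these binomial formulas numerically is with scipy.stats; the values n = 10, p = 0.3, k = 4 below are arbitrary illustration values, not from the notes.

# Binomial pmf and cdf via scipy.stats (n, p, k are arbitrary example values)
from scipy.stats import binom

n, p, k = 10, 0.3, 4

pmf = binom.pmf(k, n, p)   # P(X = k) = (n choose k) p^k (1-p)^(n-k)
cdf = binom.cdf(k, n, p)   # B(k; n, p) = sum of b(j; n, p) for j = 0..k

print(f"P(X = {k})  = {pmf:.4f}")
print(f"P(X <= {k}) = {cdf:.4f}")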
Normal Distribution
f(x; μ, σ²) = 1/(σ√(2π)) · e^(−(x−μ)² / (2σ²))
Standard Normal distribution
Φ(z) = P(Z ≤ z) = 1/√(2π) ∫_{−∞}^{z} e^(−t²/2) dt
Standard Normal Probabilities
Φ(z) = P(Z ≤ z)
Φ(b) − Φ(a) = P(a < Z ≤ b)
Standardized random variable
Z = (X − μ)/σ
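As a sketch of how these standard normal formulas are used in practice, scipy.stats.norm evaluates Φ directly; the numbers (z = 1.96, a = −1, b = 1, and the X ~ N(100, 15²) example) are assumptions chosen only for illustration.

# Standard normal probabilities with scipy.stats.norm (example values only)
from scipy.stats import norm

print(norm.cdf(1.96))              # Phi(1.96) = P(Z <= 1.96), about 0.975

a, b = -1.0, 1.0
print(norm.cdf(b) - norm.cdf(a))   # P(a < Z <= b) = Phi(b) - Phi(a), about 0.683

mu, sigma, x = 100.0, 15.0, 130.0  # hypothetical X ~ N(mu, sigma^2)
z = (x - mu) / sigma               # standardized random variable
print(z, norm.cdf(z))              # P(X <= 130) under these assumed parameters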
Expected Values
E(X) = μ
E(aX + b) = aE(X) + b

V(x̄) = σ²/n
SD(x̄) = σ/√n
Central Limit Theorem
If x̄ is the mean of a sample of size n taken from a population having mean μ and finite
variance σ², then Z = (x̄ − μ)/(σ/√n) is a random variable whose distribution function
approaches that of the standard normal distribution as n approaches infinity.
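A minimal simulation sketch of the theorem, assuming an exponential population with mean 2 and standard deviation 2 and a sample size of 50 (both arbitrary choices): the standardized sample means behave approximately like a standard normal variable.

# Central Limit Theorem demo: standardized means of a skewed population
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 2.0                 # mean and sd of the exponential(scale=2) population
n, reps = 50, 10_000

samples = rng.exponential(scale=2.0, size=(reps, n))
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

print(z.mean(), z.std())             # close to 0 and 1
print(np.mean(z <= 1.96))            # close to Phi(1.96), about 0.975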
Confidence Intervals
Maximum error of estimate  E = zα/2 √(p(1−p)/n)
Sample size determination  n = p(1−p)·[zα/2/E]²
n = (1/4)·[zα/2/E]²
Confidence Intervals
Large-sample CI for μ:  x̄ − zα/2·σ/√n < μ < x̄ + zα/2·σ/√n
or  x̄ − zα/2·s/√n < μ < x̄ + zα/2·s/√n
Small-sample CI for μ:  x̄ − tα/2·s/√n < μ < x̄ + tα/2·s/√n
CI = x̄ ± tα/2·s/√n
t = (x̄ − μ)/(s/√n)
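A small-sample interval can be computed as sketched below; the data vector and α = 0.05 are hypothetical, and the large-sample case would swap in zα/2 from norm.ppf.

# Small-sample confidence interval for mu (hypothetical data)
import numpy as np
from scipy.stats import t, norm

x = np.array([4.9, 5.1, 5.0, 4.8, 5.3, 5.2, 4.7, 5.0])
xbar, s, n = x.mean(), x.std(ddof=1), len(x)
alpha = 0.05

t_crit = t.ppf(1 - alpha / 2, df=n - 1)          # t_{alpha/2, n-1}
half_width = t_crit * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)      # xbar +/- t_{alpha/2} * s / sqrt(n)

z_crit = norm.ppf(1 - alpha / 2)                 # z_{alpha/2} for the large-sample CI
print(z_crit)                                    # about 1.96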
Analysis of Variance
Source of variation | Degrees of freedom | Sum of Squares | Mean Squares          | F
Treatments          | k − 1              | SS(Tr)         | MS(Tr) = SS(Tr)/(k−1) | MS(Tr)/MSE
Error               | N − k              | SSE            | MSE = SSE/(N−k)       |
Total               | N − 1              | SST            |                       |
SSE=SST-SS(Tr)
C = T.²/N
N = Σ_{i=1}^{k} n_i, the total of all the sample sizes
T. = Σ_{i=1}^{k} T_i, where T_i = Σ_{j=1}^{n_i} y_ij is the i-th treatment total; T. is the grand total of all observations
SST = Σ_{i=1}^{k} Σ_{j=1}^{n_i} y_ij² − C, every observation squared and summed, minus C
SS(Tr) = Σ_{i=1}^{k} (T_i²/n_i) − C, each treatment total squared and divided by its sample
size, summed, minus C
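One way to sanity-check these sum-of-squares formulas is to compute them directly and compare the resulting F with scipy.stats.f_oneway; the three treatment samples below are made-up numbers.

# One-way ANOVA: SS(Tr), SSE, SST by the formulas above, checked against scipy
import numpy as np
from scipy.stats import f_oneway

groups = [np.array([12.0, 15.0, 14.0, 11.0]),
          np.array([18.0, 20.0, 17.0]),
          np.array([14.0, 13.0, 15.0, 16.0, 12.0])]

k = len(groups)
N = sum(len(g) for g in groups)            # N = sum of the n_i
T_dot = sum(g.sum() for g in groups)       # grand total T.
C = T_dot**2 / N                           # correction term

SST = sum((g**2).sum() for g in groups) - C
SS_Tr = sum(g.sum()**2 / len(g) for g in groups) - C
SSE = SST - SS_Tr

F = (SS_Tr / (k - 1)) / (SSE / (N - k))    # MS(Tr) / MSE
print(F, f_oneway(*groups).statistic)      # both values should agree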
Confidence Intervals
Large-sample confidence interval for the difference of two proportions
x₁/n₁ − x₂/n₂ ± zα/2 √( (x₁/n₁)(1 − x₁/n₁)/n₁ + (x₂/n₂)(1 − x₂/n₂)/n₂ )
Confidence limits for a + bx₀
(a + bx₀) ± tα/2 · s_e · √( 1/n + (x₀ − x̄)²/Sxx ); the number of degrees of freedom for tα/2 is
n − 2
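A sketch of the two-proportion interval with hypothetical counts (x₁ = 48 of n₁ = 200, x₂ = 30 of n₂ = 180) follows.

# Large-sample CI for the difference of two proportions (hypothetical counts)
import numpy as np
from scipy.stats import norm

x1, n1 = 48, 200
x2, n2 = 30, 180
alpha = 0.05

p1, p2 = x1 / n1, x2 / n2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = norm.ppf(1 - alpha / 2)                   # z_{alpha/2}

print((p1 - p2) - z * se, (p1 - p2) + z * se)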
Probability Density Function
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
d/dx F(x) = f(x)
f(x) ≥ 0 for all x
∫_{−∞}^{∞} f(x) dx = 1
E(X) = ∫_{−∞}^{∞} x·f(x) dx
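These properties can be verified numerically for a specific density; the exponential pdf f(x) = λe^(−λx) with λ = 0.5 below is a standard textbook choice, not a density from the notes.

# Numerical check of pdf properties for an assumed exponential density
import numpy as np
from scipy.integrate import quad

lam = 0.5
f = lambda x: lam * np.exp(-lam * x)            # f(x) >= 0 on x >= 0

total, _ = quad(f, 0, np.inf)                   # integral of f(x) dx over the support = 1
F_at_2, _ = quad(f, 0, 2)                       # F(2) = P(X <= 2)
EX, _ = quad(lambda x: x * f(x), 0, np.inf)     # E(X) = integral of x f(x) dx = 1/lam

print(total, F_at_2, EX)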
Probability
P(AB) = P(A) + P(B) – P(AB)
Confidence interval for the difference of two means
x̄ − ȳ ± zα/2 √( s₁²/n₁ + s₂²/n₂ )
Hypothesis Tests
μ₁ − μ₂ = δ, normal populations, σ₁ and σ₂ not known, n₁, n₂ ≥ 30
Alternative Hypothesis | Reject null hypothesis if
μ₁ − μ₂ < δ            | Z < −zα
μ₁ − μ₂ > δ            | Z > zα
μ₁ − μ₂ ≠ δ            | Z < −zα/2 or Z > zα/2

μ₁ − μ₂ = δ, normal populations, σ₁ = σ₂, two-sample t test

Alternative Hypothesis | Reject null hypothesis if
μ₁ − μ₂ < δ            | t < −tα
μ₁ − μ₂ > δ            | t > tα
μ₁ − μ₂ ≠ δ            | t < −tα/2 or t > tα/2

 = 0, lg sample
Alternative Hypothesis Reject null hypothesis if
 < 0 Z < -za
 > 0 Z > za
 <> 0 Z < -za/2 or Z > za/2

μ = μ₀, normal population, σ unknown, one-sample t test

Alternative Hypothesis | Reject null hypothesis if
μ < μ₀                 | t < −tα
μ > μ₀                 | t > tα
μ ≠ μ₀                 | t < −tα/2 or t > tα/2
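And the one-sample case, again with hypothetical data, μ₀ = 10, and α = 0.05:

# One-sample t test for mu = mu0 with sigma unknown
import numpy as np
from scipy.stats import ttest_1samp, t

x = np.array([9.8, 10.4, 10.1, 9.7, 10.6, 10.2, 9.9])
mu0, alpha = 10.0, 0.05

res = ttest_1samp(x, popmean=mu0)
t_crit = t.ppf(1 - alpha / 2, df=len(x) - 1)

# Two-sided alternative mu != mu0: reject H0 if |t| > t_{alpha/2, n-1}
print(res.statistic, t_crit, abs(res.statistic) > t_crit)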
Poisson Distribution - model for counts that do not have an upper bound
λ = np
f(x; λ) = λ^x e^(−λ) / x!,  λ > 0
Σ_{x=0}^{∞} f(x; λ) = 1