Econometrics 1: Lecture 2
Outline
Random Variables and Their Probability Distributions
Joint Distributions, Conditional Distributions, and Independence
Features of Probability Distributions
Features of Joint and Conditional Distributions
The Normal and Related Distributions
Random Variables
What is a random variable?
A random variable is one that takes on numerical values and has an outcome that is determined by an experiment.
Example: the number of heads appearing in 10 flips of a coin. Before we flip the coin 10 times, we do not know the value of this random variable.
Random Variables
What about qualitative events?
Random variables are always defined to take on numerical values, even when they describe qualitative events.
Example: tossing a single coin.
Outcomes: T, H
Random variable: X(T) = 0, X(H) = 1.
Bernoulli (or binary) random variable?
A random variable that can only take on the values zero and one.
Discrete Random Variables
How do we define a discrete random variable?
One that takes on only a finite or countably infinite number of values.
"Countably infinite" means that even though an infinite number of values can be taken on, those values can be put in a one-to-one correspondence with the positive integers.
Example: a Bernoulli random variable.
Discrete Random Variables
What do we need to completely describe the behavior of a discrete random variable?
Any discrete random variable is completely described by listing its possible values and the associated probability that it takes on each value.
Discrete Random Variables
PDF?
The probability density function (pdf) of X summarizes the information concerning the possible outcomes of X and the corresponding probabilities:
$f(x_j) = p_j, \quad j = 1, 2, \dots, k$
$0 \le f(x_j) \le 1$
$f(x_1) + f(x_2) + \dots + f(x_k) = 1$
Continuous Random Variables
A continuous random variable takes on a continuum of values, so probabilities are computed from its pdf $f(x)$:
$P(a \le X \le b) = \int_{a}^{b} f(x)\,dx, \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1$
Continuous Random Variables
Cumulative distribution function (cdf)?
$F(x) \equiv P(X \le x)$
Two important properties:
For any number c, $P(X > c) = 1 - F(c)$.
For any numbers $a < b$, $P(a < X \le b) = F(b) - F(a)$.
Continuous Random Variables
For a continuous random variable X,
$P(X \ge c) = P(X > c)$
$P(a < X < b) = P(a \le X \le b) = P(a \le X < b) = P(a < X \le b)$
Joint Distributions and Independence
Let X and Y be discrete random variables. Then (X, Y) has a joint distribution.
Joint pdf of (X, Y), $f_{X,Y}(x, y)$?
$f_{X,Y}(x, y) = P(X = x, Y = y)$
How can we define independence using the joint pdf?
X and Y are independent if and only if $f_{X,Y}(x, y) = f_X(x) f_Y(y)$ for all x and y.
Joint Distributions and Independence
Example: airline reservations
$Y_i = 1$ if customer i appears, and $Y_i = 0$ otherwise.
$Y_i$: Bernoulli random variable (probability of success $= \theta$)
X: the total number of customers showing up.
$X = Y_1 + Y_2 + \dots + Y_n$, where the $Y_i$ are independent.
Calculate $f_X(x)$?
$f_X(x) = \binom{n}{x} \theta^x (1 - \theta)^{n - x}, \quad x = 0, 1, 2, \dots, n$
The name of X's distribution?
The Binomial distribution.
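As a quick check of this pmf, a minimal Python sketch comparing the formula against simulated sums of independent Bernoulli draws; n, theta, the seed, and the sample size are arbitrary illustrative choices.

```python
import numpy as np
from math import comb

n, theta = 10, 0.8                    # hypothetical example: 10 customers, 80% show-up rate
rng = np.random.default_rng(0)

# Exact binomial pmf: f_X(x) = C(n, x) * theta^x * (1 - theta)^(n - x)
pmf = [comb(n, x) * theta**x * (1 - theta)**(n - x) for x in range(n + 1)]

# Simulate X = Y_1 + ... + Y_n with independent Bernoulli(theta) draws
draws = rng.binomial(1, theta, size=(100_000, n)).sum(axis=1)
freq = np.bincount(draws, minlength=n + 1) / draws.size

for x in range(n + 1):
    print(f"x={x}: formula={pmf[x]:.4f}  simulated={freq[x]:.4f}")
```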
Conditional Distributions
Conditional pdf?
$f_{Y|X}(y \mid x) \equiv \dfrac{f_{X,Y}(x, y)}{f_X(x)}$
What is the conditional pdf if X and Y are independent?
$f_{Y|X}(y \mid x) = f_Y(y)$, and vice versa.
Intuition?
Knowledge of the value taken on by X tells us nothing about the probability that Y takes on various values.
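To make the definition concrete, a minimal sketch with a small hypothetical joint pmf; the table values are made up for illustration.

```python
import numpy as np

# Hypothetical joint pmf of (X, Y), X in {0, 1} (rows), Y in {0, 1, 2} (columns)
joint = np.array([[0.10, 0.20, 0.10],
                  [0.20, 0.30, 0.10]])

fx = joint.sum(axis=1)                  # marginal pmf of X
fy = joint.sum(axis=0)                  # marginal pmf of Y

# Conditional pmf f_{Y|X}(y|x) = f_{X,Y}(x, y) / f_X(x), computed row by row
cond_y_given_x = joint / fx[:, None]
print(cond_y_given_x)
print(cond_y_given_x.sum(axis=1))       # each row sums to 1

# Independence check: is the joint pmf the product of the marginals?
print(np.allclose(joint, np.outer(fx, fy)))   # False here, so X and Y are dependent
```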
Features of Probability Distributions
Measures of central tendency?
Expected value (mean)
Median
Mode
Features of Probability Distributions
Measures of variability or spread?
Variance
Standard Deviation
Features of Probability Distributions
Measures of association between two random
variables?
Covariance
Correlation Coefficient
Conditional Expectation
Conditional Variance
The Expected Value
X is a discrete random variable, $E(X)$?
$E(X) = x_1 f(x_1) + x_2 f(x_2) + \dots + x_k f(x_k) \equiv \sum_{j=1}^{k} x_j f(x_j)$
X is a continuous random variable, $E(X)$?
$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx$
The Expected Value
If X is a random variable, we can create a new random variable g(X).
X is a discrete random variable, $E[g(X)]$?
$E[g(X)] = \sum_{j=1}^{k} g(x_j) f_X(x_j)$
X is a continuous random variable, $E[g(X)]$?
$E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$
The Expected Value
Is E[g(X)] equal to g[E(X)]?
For a nonlinear function g(X), in general $E[g(X)] \ne g[E(X)]$.
Example?
X is a discrete random variable:
$X = 1$ with probability $\tfrac{1}{3}$, $X = 0$ with probability $\tfrac{1}{3}$, $X = -1$ with probability $\tfrac{1}{3}$.
$g(X) = X^2$
$E(X) = \tfrac{1}{3}(1) + \tfrac{1}{3}(0) + \tfrac{1}{3}(-1) = 0 \;\Rightarrow\; g(E(X)) = 0$
$E[g(X)] = E(X^2) = \tfrac{1}{3}(1)^2 + \tfrac{1}{3}(0)^2 + \tfrac{1}{3}(-1)^2 = \tfrac{2}{3}$
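A small sketch verifying the example's arithmetic exactly with rational numbers; the three-point distribution is the one on the slide.

```python
from fractions import Fraction

# The slide's distribution: P(X = x) = 1/3 for x in {-1, 0, 1}, and g(X) = X^2
support = [1, 0, -1]
p = Fraction(1, 3)

e_x = sum(p * x for x in support)        # E(X) = 0
g_of_e = e_x ** 2                        # g(E(X)) = 0
e_g = sum(p * x**2 for x in support)     # E[g(X)] = 2/3

print(e_x, g_of_e, e_g)                  # 0 0 2/3 -> E[g(X)] != g[E(X)]
```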
Properties of Expected Value
For any constant c, $E(c) = c$.
For any constants a and b, $E(aX + b) = aE(X) + b$.
If $a_1, \dots, a_n$ are constants and $X_1, \dots, X_n$ are random variables, then
$E(a_1 X_1 + \dots + a_n X_n) = a_1 E(X_1) + \dots + a_n E(X_n)$
Proof?
$E(a_1 X_1 + \dots + a_n X_n) = \int \dots \int (a_1 x_1 + \dots + a_n x_n)\, f_{X_1,\dots,X_n}(x_1, \dots, x_n)\,dx_1 \dots dx_n$
$= a_1 \int x_1 \left[ \int \dots \int f_{X_1,\dots,X_n}(x_1, \dots, x_n)\,dx_2 \dots dx_n \right] dx_1 + \dots$
The bracketed integral is just the marginal pdf $f_{X_1}(x_1)$, so the first term is $a_1 E(X_1)$, and likewise for the others:
$= a_1 E(X_1) + \dots + a_n E(X_n)$
As a special case, with all $a_i = 1$:
$E\left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} E(X_i)$
The expected value of a sum is the sum of the expected values.
Properties of Expected Value
$X \sim \text{Binomial}(n, \theta)$, $E(X) = ?$
$X = Y_1 + Y_2 + \dots + Y_n$, where each $Y_i \sim \text{Bernoulli}(\theta)$, so
$E(X) = E(Y_1) + \dots + E(Y_n) = n\theta$.
The Median
X is continuous, the median of X?
The value m such that half of the area under the pdf lies to the left of m and half lies to the right: $F(m) = \tfrac{1}{2}$.
The Median
X is discrete, the median of X?
If X takes on an odd, finite number of values, the median is obtained by ordering the possible values of X and then selecting the value in the middle.
If X takes on an even number of values, there are really two median values; sometimes these are averaged to get a unique median value.
Variance and Standard Deviation
For a random variable X, $Var(X)$?
$Var(X) = \sigma^2 \equiv E\{[X - E(X)]^2\}$
$\sigma^2 = E(X^2 - 2X\mu + \mu^2) = E(X^2) - 2\mu^2 + \mu^2 = E(X^2) - \mu^2$
Standard deviation?
$sd(X) \equiv +\sqrt{Var(X)}$
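A quick simulation check of the shortcut formula $Var(X) = E(X^2) - \mu^2$; the distribution, seed, and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200_000)   # arbitrary sample for illustration

mu = x.mean()
var_def = ((x - mu) ** 2).mean()          # definition: E[(X - E(X))^2]
var_short = (x ** 2).mean() - mu ** 2     # shortcut:  E(X^2) - mu^2

print(var_def, var_short)                 # both approximately sigma^2 = 4
```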
Properties of Variance
The variance of a constant?
The variance of any constant is zero, and if a random variable has zero variance, then it is essentially constant.
For any constants a and b:
$Var(aX + b) = a^2 Var(X)$
Proof?
$Var(aX + b) = E\{[aX + b - E(aX + b)]^2\}$
$= E\{[aX + b - aE(X) - b]^2\} = E\{[aX - aE(X)]^2\}$
$= E\{a^2 [X - E(X)]^2\} = a^2 E\{[X - E(X)]^2\} = a^2 Var(X)$.
Standardizing a Random Variable
X is a random variable with mean $\mu$ and standard deviation $\sigma$:
$Z \equiv \dfrac{X - \mu}{\sigma}, \quad E(Z) = ?, \quad Var(Z) = ?$
$Z = \dfrac{1}{\sigma} X - \dfrac{\mu}{\sigma}$
$E(Z) = \dfrac{1}{\sigma}\mu - \dfrac{\mu}{\sigma} = 0$
$Var(Z) = \left(\dfrac{1}{\sigma}\right)^2 \sigma^2 = 1$
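A minimal sketch of standardization in action; the gamma distribution here is an arbitrary choice, emphasizing that the result does not require normality.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=2.0, size=100_000)   # any distribution works

z = (x - x.mean()) / x.std()     # Z = (X - mu) / sigma
print(z.mean(), z.std())         # approximately 0 and 1
```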
Covariance
Let $\mu_X = E(X)$ and $\mu_Y = E(Y)$. $Cov(X, Y)$?
$Cov(X, Y) = \sigma_{XY} \equiv E[(X - \mu_X)(Y - \mu_Y)]$
$Cov(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = E[(X - \mu_X)Y] = E[X(Y - \mu_Y)] = E(XY) - \mu_X \mu_Y$
Covariance
Intuition of covariance?
If $\sigma_{XY} > 0$, then, on average, when X is above its mean, Y is also above its mean.
If $\sigma_{XY} < 0$, then, on average, when X is above its mean, Y is below its mean.
Covariance measures the amount of linear dependence between two random variables. A positive covariance indicates that the two variables move in the same direction, while a negative covariance indicates that they move in opposite directions.
Properties of Covariance
If X and Y are independent, then $Cov(X, Y) = ?$
Independence implies $E(XY) = E(X)E(Y)$, so $Cov(X, Y) = 0$.
Properties of Covariance
Example: X is a discrete random variable,
$X = 1$ with probability $\tfrac{1}{3}$, $X = 0$ with probability $\tfrac{1}{3}$, $X = -1$ with probability $\tfrac{1}{3}$.
$Y = X^2$.
$E(X) = 0$, $E(X^2) = \tfrac{2}{3}$, $E(X^3) = 0$. So
$Cov(X, Y) = E(XY) - E(X)E(Y) = E(X^3) - E(X)E(X^2) = 0$
But X and Y clearly are dependent.
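A simulation check of this example; the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.choice([-1, 0, 1], size=500_000)   # the slide's three-point distribution
y = x ** 2                                  # Y is a deterministic function of X

print(np.cov(x, y)[0, 1])                   # approximately 0: uncorrelated...
print(np.corrcoef(x, y)[0, 1])              # ...yet Y is completely determined by X
```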
Properties of Covariance
For any constants $a_1, b_1, a_2$, and $b_2$,
$Cov(a_1 X + b_1, a_2 Y + b_2) = a_1 a_2 Cov(X, Y)$
Proof?
$Cov(a_1 X + b_1, a_2 Y + b_2)$
$= E[(a_1 X + b_1)(a_2 Y + b_2)] - E(a_1 X + b_1)E(a_2 Y + b_2)$
$= E(a_1 a_2 XY + a_1 b_2 X + a_2 b_1 Y + b_1 b_2) - [a_1 E(X) + b_1][a_2 E(Y) + b_2]$
$= a_1 a_2 E(XY) + a_1 b_2 E(X) + a_2 b_1 E(Y) + b_1 b_2 - [a_1 a_2 E(X)E(Y) + a_1 b_2 E(X) + a_2 b_1 E(Y) + b_1 b_2]$
$= a_1 a_2 [E(XY) - E(X)E(Y)] = a_1 a_2 Cov(X, Y)$.
Properties of Covariance
Cauchy-Schwarz inequality:
$|Cov(X, Y)| \le sd(X)\,sd(Y)$
Correlation Coefficient
The covariance between two random variables depends on the units of measurement. The correlation coefficient removes this dependence:
$Corr(X, Y) = \rho_{XY} \equiv \dfrac{Cov(X, Y)}{sd(X)\,sd(Y)}$
Properties of Correlation Coefficient
$-1 \le Corr(X, Y) \le 1$
Proof?
$|Corr(X, Y)| = \dfrac{|Cov(X, Y)|}{sd(X)\,sd(Y)} \le \dfrac{sd(X)\,sd(Y)}{sd(X)\,sd(Y)} = 1$ by the Cauchy-Schwarz inequality,
so $-1 \le Corr(X, Y) \le 1$.
Properties of Correlation Coefficient
If $Corr(X, Y) = 0$, what can we say about the relationship between X and Y?
There is no linear relationship between X and Y.
Properties of Correlation Coefficient
For any constants $a_1, b_1, a_2$, and $b_2$ with $a_1 a_2 > 0$,
$Corr(a_1 X + b_1, a_2 Y + b_2) = Corr(X, Y)$.
Variance of Sums of Random Variables
For constants a and b, $Var(aX + bY)$?
$= a^2 Var(X) + b^2 Var(Y) + 2ab\,Cov(X, Y)$.
If X and Y are uncorrelated, then
$Var(X + Y) = Var(X) + Var(Y)$
The random variables $\{X_1, \dots, X_n\}$ are pairwise uncorrelated if
$Cov(X_i, X_j) = 0$ for all $i \ne j$.
Variance of Sums of Random Variables
If $\{X_1, \dots, X_n\}$ are pairwise uncorrelated random variables and $\{a_i : i = 1, \dots, n\}$ are constants, then $Var(a_1 X_1 + \dots + a_n X_n)$?
$= a_1^2 Var(X_1) + \dots + a_n^2 Var(X_n)$
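As a numerical check of the two-variable formula, a minimal sketch with an arbitrary construction of correlated X and Y; the constants, seed, and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 2.0, -3.0

# Build correlated X and Y from a shared component (arbitrary construction)
common = rng.normal(size=300_000)
x = common + rng.normal(size=300_000)
y = 0.5 * common + rng.normal(size=300_000)

lhs = np.var(a * x + b * y)
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * np.cov(x, y, ddof=0)[0, 1]
print(lhs, rhs)   # approximately equal
```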
Conditional Expectation
Y is continuous, $E(Y \mid X = x) = ?$
$E(Y \mid X = x) = \int y\, f_{Y|X}(y \mid x)\,dy$
$E(Y \mid X = x)$ is just some function of x.
Properties of Conditional Expectation
For any function $c(X)$, $E[c(X) \mid X] = c(X)$.
Example: $E(X^2 \mid X = x) = x^2$.
$E[a(X)Y + b(X) \mid X] = a(X)E(Y \mid X) + b(X)$.
Example: $E(XY + 2X^2 \mid X) = X E(Y \mid X) + 2X^2$.
Properties of Conditional Expectation
If X and Y are independent, then $E(Y \mid X) = E(Y)$.
Proof?
Suppose X and Y are continuous:
$E(Y \mid X = x) = \int y\, f_{Y|X}(y \mid x)\,dy = \int y\, \dfrac{f_{X,Y}(x, y)}{f_X(x)}\,dy$
Since X and Y are independent, $f_{X,Y}(x, y) = f_X(x) f_Y(y)$, so
$E(Y \mid X = x) = \int y\, f_Y(y)\,dy = E(Y)$

Law of Iterated Expectations (L.I.E.): $E_X[E(Y \mid X)] = E(Y)$.
$E_X[E(Y \mid X)] = E_X\left[\int y\, f_{Y|X}(y \mid x)\,dy\right] = \int y \left[\int f_{X,Y}(x, y)\,dx\right] dy = \int y\, f_Y(y)\,dy = E(Y)$.
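A small sketch verifying the law of iterated expectations on a hypothetical discrete joint pmf; the table values are made up for illustration.

```python
import numpy as np

# Hypothetical joint pmf over X in {0, 1} (rows) and Y in {1, 2, 3} (columns)
joint = np.array([[0.10, 0.15, 0.25],
                  [0.20, 0.20, 0.10]])
y_vals = np.array([1.0, 2.0, 3.0])

fx = joint.sum(axis=1)            # marginal pmf of X
cond = joint / fx[:, None]        # f_{Y|X}(y|x)
e_y_given_x = cond @ y_vals       # E(Y|X = x) for each x: a function of x

print(e_y_given_x)
print(fx @ e_y_given_x)           # E_X[E(Y|X)]
print(joint.sum(axis=0) @ y_vals) # E(Y): matches, illustrating the L.I.E.
```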
Properties of Conditional Expectation
$E[E(Y \mid X, Z) \mid X] = E(Y \mid X)$.
Proof?
Suppose X, Y, and Z are continuous:
$E[E(Y \mid X, Z) \mid X] = E_Z\left[\int y\, \dfrac{f_{X,Y,Z}(x, y, z)}{f_{X,Z}(x, z)}\,dy \,\Big|\, X\right]$
$= \int \left[\int y\, \dfrac{f_{X,Y,Z}(x, y, z)}{f_{X,Z}(x, z)}\,dy\right] f_{Z|X}(z \mid x)\,dz$
$= \iint y\, \dfrac{f_{X,Y,Z}(x, y, z)}{f_{X,Z}(x, z)} \cdot \dfrac{f_{X,Z}(x, z)}{f_X(x)}\,dy\,dz$
$= \int y \left[\dfrac{1}{f_X(x)} \int f_{X,Y,Z}(x, y, z)\,dz\right] dy$
$= \int y\, \dfrac{f_{X,Y}(x, y)}{f_X(x)}\,dy = \int y\, f_{Y|X}(y \mid x)\,dy = E(Y \mid X)$.
Properties of Conditional Expectation
If $E(Y \mid X) = E(Y)$, then $Cov(X, Y) = 0$.
Proof?
$Cov(X, Y) = E(XY) - E(X)E(Y)$
By the L.I.E., $E(XY) = E_X[E(XY \mid X)] = E_X[X E(Y \mid X)]$
Since $E(Y \mid X) = E(Y)$, $E(XY) = E_X[X E(Y)] = E(Y) E_X(X) = E(X)E(Y)$
$\Rightarrow Cov(X, Y) = 0$.
Properties of Conditional Expectation
How about the converse of this property?
The converse is not true: if X and Y are uncorrelated, $E(Y \mid X)$ could still depend on X. The earlier example with $Y = X^2$ illustrates this: $Cov(X, Y) = 0$, yet $E(Y \mid X) = X^2$.
Conditional Variance
The variance of Y, conditional on $X = x$?
$Var(Y \mid X = x) = E\{[Y - E(Y \mid X = x)]^2 \mid X = x\} = E(Y^2 \mid x) - [E(Y \mid x)]^2$
The Normal Distribution
$X \sim N(\mu, \sigma^2)$, $f(x) = ?$
$f(x) = \dfrac{1}{\sigma\sqrt{2\pi}} \exp\left(-\dfrac{(x - \mu)^2}{2\sigma^2}\right), \quad -\infty < x < \infty$
where $\mu = E(X)$ and $\sigma^2 = Var(X)$.
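A minimal sketch evaluating this formula directly and comparing it against scipy.stats.norm; the parameters and evaluation points are arbitrary.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0    # arbitrary illustrative parameters

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) computed directly from the formula."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5, 7, 5)
print(normal_pdf(x, mu, sigma))
print(norm.pdf(x, loc=mu, scale=sigma))   # identical values
```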
The Normal Distribution
The normal distribution is sometimes called the Gaussian distribution.
In some cases, a variable can be transformed to achieve normality. A popular transformation is the natural log.
If X is a positive random variable, such as income or the price of a good, and $Y = \log(X)$ has a normal distribution, then we say that X has a lognormal distribution.
The Standard Normal Distribution
$Z \sim N(0, 1)$
$f(z) = \dfrac{1}{\sqrt{2\pi}} \exp\left(-\dfrac{z^2}{2}\right), \quad -\infty < z < \infty$
Properties of Normal Distribution
If $X \sim \text{Normal}(\mu, \sigma^2)$, then
$\dfrac{X - \mu}{\sigma} \sim \text{Normal}(0, 1)$
If $X \sim \text{Normal}(\mu, \sigma^2)$, then
$aX + b \sim \text{Normal}(a\mu + b,\, a^2\sigma^2)$
Properties of Normal Distribution
If X and Y are jointly normally distributed, they are independent if and only if $Cov(X, Y) = 0$.
If $Y_1, \dots, Y_n$ are independent $\text{Normal}(\mu, \sigma^2)$ random variables, then their average satisfies
$\bar{Y} \sim \text{Normal}\left(\mu, \dfrac{\sigma^2}{n}\right)$
The Chi-Square Distribution
Let $Z_i$, $i = 1, 2, \dots, n$, be independent random variables, each distributed as standard normal. Define
$X = \sum_{i=1}^{n} Z_i^2$
$X \sim \chi_n^2$, $E(X) = ?$
$Z_i \sim NID(0, 1) \Rightarrow E(Z_i^2) = 1$, $E(Z_i^4) = 3$.
$E(X) = E\left(\sum_{i=1}^{n} Z_i^2\right) = \sum_{i=1}^{n} E(Z_i^2) = \sum_{i=1}^{n} 1 = n$.
The Chi-Square Distribution
$X \sim \chi_n^2$, $Var(X) = ?$
Since the $Z_i^2$ are independent,
$Var(X) = Var\left(\sum_{i=1}^{n} Z_i^2\right) = \sum_{i=1}^{n} Var(Z_i^2)$
$= \sum_{i=1}^{n} \{E(Z_i^4) - [E(Z_i^2)]^2\} = \sum_{i=1}^{n} (3 - 1) = 2n$.
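A simulation check of these two moments; the degrees of freedom, seed, and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 7                                    # arbitrary degrees of freedom

z = rng.standard_normal(size=(200_000, n))
x = (z ** 2).sum(axis=1)                 # X = sum of n squared standard normals

print(x.mean(), x.var())                 # approximately n = 7 and 2n = 14
print((x >= 0).all())                    # a chi-square draw is always non-negative
```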
The Chi-Square Distribution
A chi-square random variable is always non-negative.
The chi-square distribution is not symmetric about any point.
The t Distribution
$Z \sim N(0, 1)$, $X \sim \chi_n^2$, Z and X are independent:
$T = \dfrac{Z}{\sqrt{X/n}}$
T has a t distribution with n degrees of freedom.
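A minimal sketch constructing T from its definition and comparing simulated quantiles with scipy's exact t quantiles; the degrees of freedom and sample size are arbitrary.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(6)
n = 5                                         # arbitrary degrees of freedom

z = rng.standard_normal(500_000)
x = (rng.standard_normal((500_000, n)) ** 2).sum(axis=1)   # independent chi-square(n)
samples = z / np.sqrt(x / n)                  # T = Z / sqrt(X/n)

qs = [0.05, 0.5, 0.95]
print(np.quantile(samples, qs))               # simulated quantiles
print(t.ppf(qs, df=n))                        # exact t(n) quantiles: close agreement
```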
The t Distribution
The pdf of the t distribution has a shape similar to that of the standard normal distribution, except that it is more spread out and therefore has more area in the tails.
The F Distribution
$X_1 \sim \chi_{k_1}^2$, $X_2 \sim \chi_{k_2}^2$, $X_1$ and $X_2$ are independent:
$F = \dfrac{X_1 / k_1}{X_2 / k_2}$
F has an F distribution with $(k_1, k_2)$ degrees of freedom.
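A minimal sketch constructing F from two independent chi-square variables and comparing its simulated mean with scipy's exact value; $k_1$, $k_2$, the seed, and the sample size are arbitrary choices.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(7)
k1, k2 = 3, 10                               # arbitrary degrees of freedom

x1 = (rng.standard_normal((300_000, k1)) ** 2).sum(axis=1)   # chi-square(k1)
x2 = (rng.standard_normal((300_000, k2)) ** 2).sum(axis=1)   # chi-square(k2)
samples = (x1 / k1) / (x2 / k2)              # F = (X1/k1) / (X2/k2)

print(samples.mean())                        # approximately k2/(k2 - 2) = 1.25
print(f.mean(k1, k2))                        # exact mean from scipy
```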