Lecture5-Estimating the Linear Conditional Mean Model II - Annotated
Xiaoxia Shi
11/12/2018
E[Y|X] = X'β.  (1)

Define U = Y − E[Y|X]. The conditional mean model above can be equivalently written as

Y = X'β + U,  E[U|X] = 0.  (2)

The different notation does not make it a different model: both forms describe the conditional mean function E[Y|X = x].
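As a quick numerical illustration (not part of the lecture), model (2) can be simulated and β recovered by OLS. The design below — a constant plus one standard normal regressor, and the specific values of β and n — is an arbitrary choice for the sketch:

```python
# A minimal simulation sketch of model (2): Y = X'beta + U with E[U|X] = 0.
# All concrete choices (beta, n, the distribution of x) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = np.array([1.0, 2.0])              # true coefficients (intercept, slope)

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])     # regressors, first column is a constant
U = rng.normal(size=n)                   # E[U|X] = 0 by construction
Y = X @ beta + U

# OLS: beta_hat = (X'X)^{-1} X'Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)                          # close to [1.0, 2.0]
```

With n this large, beta_hat lands within a few thousandths of the true β, previewing the consistency result discussed below.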
Consistency
Asymptotic Normality
Stacking the observations, the conditional variance matrix is diagonal:

Var(Y1, ..., Yn | X1, ..., Xn) = diag( Var(Y1|X1), Var(Y2|X2), ..., Var(Yn|Xn) ).
Var (Y |X ) = σ2 .
Thus, the Gauss-Markov Theorem is really more fun than meaningful for us.
Var (Y |X ) = Var (U |X ).
Transform the model by dividing both sides by σ(Xi), and then use the OLS estimator based on this transformed model. The resulting estimator is the GLS estimator, which here takes the form of a weighted least squares (WLS) estimator.
Thus β̃_WLS = ( ∑_{i=1}^n σ(Xi)^{-2} Xi Xi' )^{-1} ∑_{i=1}^n σ(Xi)^{-2} Xi Yi.
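A minimal numerical sketch of the WLS formula above, under the assumption that the skedastic function σ(X) is known. The particular choice σ²(x) = 1 + x², and all other concrete values, are illustrative, not from the lecture:

```python
# Sketch of the WLS/GLS estimator, assuming the skedastic function sigma(X)
# is known (here sigma(x)^2 = 1 + x^2, an illustrative assumption).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
beta = np.array([1.0, 2.0])

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
sigma = np.sqrt(1.0 + x**2)              # known conditional std. dev. of U
U = sigma * rng.normal(size=n)           # heteroskedastic errors, E[U|X] = 0
Y = X @ beta + U

# beta_WLS = (sum sigma_i^{-2} Xi Xi')^{-1} sum sigma_i^{-2} Xi Yi,
# i.e. OLS on the model with both sides divided by sigma(Xi).
w = sigma**-2
XtWX = (X * w[:, None]).T @ X            # sum of sigma_i^{-2} Xi Xi'
XtWY = (X * w[:, None]).T @ Y            # sum of sigma_i^{-2} Xi Yi
beta_wls = np.linalg.solve(XtWX, XtWY)
print(beta_wls)
```

The weighting is exactly OLS after dividing the transformed equation through by σ(Xi), which restores homoskedasticity in the transformed errors.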
The OLS estimator also possesses some nice large sample properties.
Consistency
That means the law of large numbers (LLN) can be applied to each element of the matrix n^{-1} ∑_{i=1}^n Xi Xi' and of the vector n^{-1} ∑_{i=1}^n Xi Yi.
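The element-wise LLN can be seen numerically. In the sketch below x is standard normal with a constant, so E(XX') is the 2×2 identity matrix (an illustrative design, not from the lecture):

```python
# Illustrating the element-wise LLN for n^{-1} sum Xi Xi':
# the sample average matrix approaches E[XX'] as n grows.
import numpy as np

rng = np.random.default_rng(2)
# For X = (1, x)' with x ~ N(0,1): E[XX'] = [[1, E x], [E x, E x^2]] = I.
E_XX = np.eye(2)

for n in (100, 10_000, 1_000_000):
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    avg = X.T @ X / n                    # n^{-1} sum Xi Xi'
    print(n, np.abs(avg - E_XX).max())   # max deviation shrinks with n
```

The printed maximum deviation falls roughly at the 1/√n rate as n increases, consistent with the LLN applied entry by entry.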
Asymptotic Normality
We want to use the central limit theorem (CLT) on this. The first step in applying the CLT is always to subtract the true value:
β̂ − β = (Xn' Xn)^{-1} Xn' Un = ( n^{-1} ∑_{i=1}^n Xi Xi' )^{-1} ( n^{-1} ∑_{i=1}^n Xi Ui ).
We have shown

n^{-1} ∑_{i=1}^n Xi Xi' →p E(XX')

n^{-1/2} ∑_{i=1}^n Xi Ui →d N(0, Var(XU)).
Therefore,

√n (β̂ − β) = ( n^{-1} ∑_{i=1}^n Xi Xi' )^{-1} ( n^{-1/2} ∑_{i=1}^n Xi Ui )

and so

√n (β̂ − β) →d N( 0, E(XX')^{-1} Var(XU) E(XX')^{-1} ).
Since E[XU] = 0, Var(XU) = E[U² XX'], and by the law of iterated expectations E[U² XX'] = E[σ²(X) XX'], where σ²(X) = Var(U|X). Therefore,

√n (β̂ − β) →d N( 0, E(XX')^{-1} E[σ²(X) XX'] E(XX')^{-1} ).
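The sandwich form E(XX')^{-1} E[U² XX'] E(XX')^{-1} suggests the usual heteroskedasticity-robust (White-type) plug-in estimator: replace population moments with sample averages and the unobserved U with OLS residuals. A hedged sketch with an illustrative data-generating process:

```python
# Sketch of the sample-analogue (White-type) estimator of the sandwich
# asymptotic variance E(XX')^{-1} E[U^2 XX'] E(XX')^{-1}.
# The data-generating process below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
beta = np.array([1.0, 2.0])

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
U = np.sqrt(1.0 + x**2) * rng.normal(size=n)   # heteroskedastic errors
Y = X @ beta + U

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
uhat = Y - X @ beta_hat                        # residuals stand in for U

Q_inv = np.linalg.inv(X.T @ X / n)             # (n^{-1} sum Xi Xi')^{-1}
meat = (X * uhat[:, None]**2).T @ X / n        # n^{-1} sum uhat_i^2 Xi Xi'
avar = Q_inv @ meat @ Q_inv                    # sample analogue of the sandwich

se = np.sqrt(np.diag(avar) / n)                # robust standard errors for beta_hat
print(se)
```

Dividing the estimated asymptotic variance of √n(β̂ − β) by n converts it into standard errors for β̂ itself.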
Is the following statement true or false? If false, give two reasons why.