PS3_Answer

The document discusses the calculation of standard errors and t-test statistics for instrumental variable (IV) estimates, highlighting the differences between t-tests and Wald tests. It also covers essential conditions for unbiasedness and consistency of OLS estimators, the implications of omitted variable bias, and the use of instrumental variables. Additionally, it explains the limitations of R-squared in IV estimation and the relevance condition for choosing instruments.

Uploaded by

fanrenaz

1 Comprehension Questions

1. Obtain the standard errors of the IV estimate.


Based on the formula Var(β̂n) = Ân⁻¹ B̂n Ân⁻¹, we can perform the following
calculations:

Inverse matrix:

Ân⁻¹ = diag(1, 2)

Variance-covariance matrix:

Var(β̂n) = diag(1, 2) · diag(100, 400) · diag(1, 2) = diag(100, 1600)

Standard errors:

SE(β̂1) = √100 = 10

SE(β̂2) = √1600 = 40
Thus, the standard error of β̂1 is 10, and the standard error of β̂2 is 40.
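These calculations can be reproduced numerically. The sketch below (using numpy; the matrices Ân⁻¹ and B̂n are taken from the problem as given) applies the sandwich formula and extracts the standard errors:

```python
import numpy as np

# Inputs from the problem: A_n^{-1} = diag(1, 2) and B_n = diag(100, 400).
A_inv = np.diag([1.0, 2.0])
B_n = np.diag([100.0, 400.0])

# Sandwich formula: Var(beta_hat) = A_n^{-1} B_n A_n^{-1}
var_beta = A_inv @ B_n @ A_inv

# Standard errors are the square roots of the diagonal entries.
se = np.sqrt(np.diag(var_beta))
print(var_beta)  # diagonal entries 100 and 1600
print(se)        # [10. 40.]
```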

2. Obtain the t-test statistics using the IV estimate.


To calculate the t-test statistics using the IV estimate, we use the following
formula:

t = β̂i / SE(β̂i)

where β̂i is the IV estimate for the i-th parameter and SE(β̂i) is the standard
error of the estimate.
From the problem: β̂1 = 1, β̂2 = 2, and the standard errors are SE(β̂1) = 10,
SE(β̂2) = 40.
t-statistic for β̂1:

tβ̂1 = β̂1 / SE(β̂1) = 1/10 = 0.1

t-statistic for β̂2:

tβ̂2 = β̂2 / SE(β̂2) = 2/40 = 0.05

The t-statistic for β̂1 is 0.1. The t-statistic for β̂2 is 0.05.
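As a quick numerical check (a minimal numpy sketch, with the estimates and standard errors as given above):

```python
import numpy as np

beta_hat = np.array([1.0, 2.0])  # IV estimates beta1_hat, beta2_hat
se = np.array([10.0, 40.0])      # standard errors from part 1

# Elementwise ratio gives both t statistics at once.
t_stats = beta_hat / se
print(t_stats)  # [0.1  0.05]
```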

3. We want to test the following hypotheses using the Wald test.

The Wald test statistic for the null hypothesis H0: β2 = 0 is

W = β̂2² / Var(β̂2)
where β̂2 = 2 (the IV estimate of β2) and Var(β̂2) = SE(β̂2)² = 40² = 1600. Hence

W = β̂2² / Var(β̂2) = 2²/1600 = 4/1600 = 0.0025
The Wald test statistic is 0.0025.

4. Compare the t-test and the Wald test.


1. t-test: the t-test statistic for β̂2 was calculated as

tβ̂2 = β̂2 / SE(β̂2) = 2/40 = 0.05

The t-test typically follows a t-distribution under the null hypothesis, but for
large samples, it converges to the standard normal distribution.
2. Wald test:
2. Wald test:

W = β̂2² / Var(β̂2) = 4/1600 = 0.0025

The Wald test statistic follows a chi-squared distribution with 1 degree of
freedom under the null hypothesis.
Relationship between the t-test and the Wald test: for a single restriction,

W = t²

Here W = (0.05)² = 0.0025, matching the direct calculation. Both tests carry
the same information; they are simply referred to different reference
distributions: the t statistic to the t distribution (standard normal in large
samples), and the Wald statistic to the chi-squared distribution.
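The identity W = t² can be checked directly; a minimal sketch with the numbers from this problem:

```python
beta2_hat = 2.0
se_beta2 = 40.0

t_stat = beta2_hat / se_beta2        # 0.05
wald = beta2_hat**2 / se_beta2**2    # 0.0025

# For a single restriction the Wald statistic is the squared t statistic
# (compared up to floating-point rounding).
print(abs(wald - t_stat**2) < 1e-15)  # True
```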

2 Textbook Questions
a. Essential conditions for unbiasedness and consistency of the OLS estimator b
1. Unbiasedness:
- Linearity: the model must be linear in parameters:

yi = β1 + β2 xi2 + β3 xi3 + ϵi

- Exogeneity: the error term ϵi must satisfy

E[ϵi | xi2, xi3] = 0

- No perfect multicollinearity: the independent variables must not be perfectly collinear.

Under these conditions, E[b] = β, meaning the OLS estimator is unbiased.
2. Consistency:
- Exogeneity: as in the unbiasedness condition, E[ϵi | xi2, xi3] = 0.
- No perfect multicollinearity.
- Large sample size: as N → ∞, b →p β (convergence in probability).
- Finite variance: the independent variables and the error term must have finite second moments.
3. Difference between unbiasedness and consistency:
- Unbiasedness: The estimator b satisfies E[b] = β in finite samples.
- Consistency: As the sample size N → ∞, b converges to β. Consistency does
not require b to be unbiased in small samples, but it must approach the true
parameter as N increases.
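Consistency can be illustrated by simulation. The sketch below uses a hypothetical data-generating process with true slope β2 = 2 (all names and coefficients are illustrative) and shows the OLS slope concentrating around the true value as N grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(n):
    # Simulate y = 1 + 2*x + eps with exogenous x, then fit OLS.
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    return np.linalg.solve(X.T @ X, X.T @ y)[1]

for n in (100, 10_000, 1_000_000):
    print(n, ols_slope(n))  # estimates tighten around 2 as n increases
```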

b. Consistency as moment conditions and method of moments estimator
1. Moment conditions:
Consistency requires exogeneity, which gives:

E[xi2 ϵi ] = 0, E[xi3 ϵi ] = 0

Substitute ϵi = yi − β1 − β2 xi2 − β3 xi3 to get:

E[xi2 (yi − β1 − β2 xi2 − β3 xi3 )] = 0

E[xi3 (yi − β1 − β2 xi2 − β3 xi3 )] = 0


2. Method of moments:
The sample analogs of the moment conditions are

(1/N) Σ_{i=1}^N xi2 (yi − β1 − β2 xi2 − β3 xi3) = 0

(1/N) Σ_{i=1}^N xi3 (yi − β1 − β2 xi2 − β3 xi3) = 0

Solving these (together with the analogous sample condition for the intercept,
from E[ϵi] = 0) yields the MoM estimates.
3. Comparison with OLS:
The MoM estimator coincides with the OLS estimator: the OLS normal equations
(the first-order conditions of least squares) are exactly these sample moment
conditions.
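The equivalence can be verified numerically: solving the sample moment conditions means solving the normal equations, which is what OLS does. A sketch on simulated data (the coefficients 1, 2, 3 and the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data for yi = b1 + b2*xi2 + b3*xi3 + eps_i
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
y = 1.0 + 2.0 * x2 + 3.0 * x3 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x2, x3])

# Method of moments: solve X'(y - X b) = 0, i.e. the normal equations (X'X) b = X'y.
b_mom = np.linalg.solve(X.T @ X, X.T @ y)

# OLS via least squares gives the same answer.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b_mom, b_ols))  # True
```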

c. Examples of nonzero correlation between xi3 and ϵi
1. Omitted variable bias: if a relevant variable that is correlated with xi3
is excluded from the model, it is absorbed into the error term, inducing
correlation between xi3 and ϵi.

2. Simultaneity: if xi3 is determined simultaneously with yi, then ϵi, which
is part of yi, will be correlated with xi3.

d. Inferences with OLS when cov(xi3 , ϵi ) ̸= 0


No, it is not possible to make valid inferences using OLS in this case. The OLS
estimator becomes biased and inconsistent when cov(xi3 , ϵi ) ̸= 0. Adjusting
standard errors does not solve the fundamental issue of bias.

e. Instrumental variable and new moment condition


An instrumental variable (IV), say zi , satisfies:

E[zi ϵi ] = 0

The moment condition E[xi3 ϵi] = 0, which no longer holds, is replaced by

E[zi (yi − β1 − β2 xi2 − β3 xi3)] = 0

This leads to a new estimator for β, called the IV estimator, which is consistent
even when cov(xi3 , ϵi ) ̸= 0.
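A simulation makes the contrast concrete. In the sketch below (a hypothetical data-generating process; the coefficients and the instrument z are illustrative), xi3 is endogenous, OLS is biased for β3, and the IV estimator built from the moment conditions recovers the true value β3 = 3:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.normal(size=n)    # instrument: shifts x3, unrelated to eps
x2 = rng.normal(size=n)   # exogenous regressor
eps = rng.normal(size=n)
x3 = 0.8 * z + 0.6 * eps + rng.normal(size=n)  # endogenous: cov(x3, eps) != 0
y = 1.0 + 2.0 * x2 + 3.0 * x3 + eps

X = np.column_stack([np.ones(n), x2, x3])
Z = np.column_stack([np.ones(n), x2, z])  # instrument matrix: z replaces x3

b_ols = np.linalg.solve(X.T @ X, X.T @ y)  # biased for beta3
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)   # solves the IV moment conditions

print(b_ols[2], b_iv[2])  # OLS estimate sits well above 3; IV is close to 3
```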

f. Smaller R2 with IV estimator


The IV estimator usually yields a smaller R² because, unlike OLS, it does not
minimize the sum of squared residuals, so its fitted values explain less of the
variation in y. This is not a defect: R² measures in-sample fit rather than
consistency, so it is not a reliable measure of model adequacy when using IV.
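In fact, with the same regressors and R² defined as 1 − SSR/SST, the OLS coefficients always attain the weakly largest R², so the IV fit can only do as well or worse. A sketch on a hypothetical endogenous design (names and coefficients illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

z = rng.normal(size=n)
eps = rng.normal(size=n)
x = 0.8 * z + 0.6 * eps + rng.normal(size=n)  # endogenous regressor
y = 1.0 + 3.0 * x + eps

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

b_ols = np.linalg.solve(X.T @ X, X.T @ y)
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

def r2(b):
    resid = y - X @ b
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

# OLS minimizes SSR, so no other coefficient vector can beat its R^2.
print(r2(b_ols) >= r2(b_iv))  # True
```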

g. Why not use zi = xi2 as an instrument for xi3 ?


Even if E[xi2 ϵi] = 0, xi2 cannot be used as an instrument for xi3: it already
appears as a regressor, so the moment condition E[xi2 ϵi] = 0 is already being
exploited, and reusing it provides no additional information to identify β3
(the rank condition for identification fails). Using xi2² as an instrument
instead can work, but only if xi2² is correlated with xi3 (relevance) and
uncorrelated with ϵi (exogeneity).
