
Intro. Econometric Theory (Fall 06/07)
Nese Yildiz

Problem Set 2 - Solution


1. a) By symmetry of X and Y,
$$E(Y) = E(X) = \int_0^1\!\!\int_0^1 x(x+y)\,dx\,dy = \int_0^1 x^2\,dx + 0.5\int_0^1 x\,dx = \frac{1}{3} + \frac{1}{4} = \frac{7}{12},$$
$$E(XY) = \int_0^1\!\!\int_0^1 xy(x+y)\,dx\,dy = 2\int_0^1\!\!\int_0^1 x^2 y\,dx\,dy = \int_0^1 x^2\,dx = \frac{1}{3}.$$
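As a quick numerical sanity check (not part of the original solution), these moments can be approximated by a midpoint Riemann sum of the density $f_{X,Y}(x,y) = x + y$ over the unit square:

```python
# Numerical check of E(X) = 7/12 and E(XY) = 1/3 for the density
# f(x, y) = x + y on the unit square (midpoint Riemann sum).
n = 400
h = 1.0 / n
EX = EXY = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        w = (x + y) * h * h   # density times cell area
        EX += x * w
        EXY += x * y * w

print(round(EX, 4))   # close to 7/12 ≈ 0.5833
print(round(EXY, 4))  # close to 1/3 ≈ 0.3333
```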
b) Note that $f_{X|Y}(x \mid 0.5+x) = \frac{f_{X,Y}(x,\,0.5+x)}{\int_0^{0.5} f_{X,Y}(x,\,0.5+x)\,dx}$ if $0 < x < 0.5$ and is equal to 0 otherwise. Moreover,
$$\int_0^{0.5} f_{X,Y}(x,\,0.5+x)\,dx = \int_0^{0.5} (2x+0.5)\,dx = \frac{1}{2}.$$
Thus,
$$E(X \mid Y = 0.5+X) = 2\int_0^{0.5} x(2x+0.5)\,dx = 4\int_0^{0.5} x^2\,dx + \int_0^{0.5} x\,dx = \frac{4}{24} + \frac{1}{8} = \frac{7}{24}.$$
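The conditional mean can likewise be checked numerically (a sketch, not part of the original solution), integrating $x$ against the normalized conditional density $2(2x+0.5)$ on $(0, 0.5)$:

```python
# Numerical check of E(X | Y = 0.5 + X) = 7/24 using the normalized
# conditional density 2*(2x + 0.5) on (0, 0.5) (midpoint Riemann sum).
n = 4000
h = 0.5 / n
mass = mean = 0.0
for i in range(n):
    x = (i + 0.5) * h
    dens = 2 * (2 * x + 0.5)  # normalized conditional density
    mass += dens * h
    mean += x * dens * h

print(round(mass, 4))  # total mass, close to 1
print(round(mean, 4))  # close to 7/24 ≈ 0.2917
```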

2.
$$\begin{aligned}
E(Z) = P(Z=1) &= P(Y=1)P(Z=1 \mid Y=1) + P(Y=0)P(Z=1 \mid Y=0)\\
&= P(Y=1)\,0.7 + P(Y=0)\,0.3\\
&= 0.7[P(X=1)P(Y=1 \mid X=1) + P(X=0)P(Y=1 \mid X=0)]\\
&\quad + 0.3[P(X=1)P(Y=0 \mid X=1) + P(X=0)P(Y=0 \mid X=0)]\\
&= 0.7[0.4 \cdot 0.5 + 0.6 \cdot 0.4] + 0.3[0.4 \cdot 0.5 + 0.4 \cdot 0.4]\\
&= 0.7(0.2 + 0.24) + 0.3(0.2 + 0.16) = 0.308 + 0.108 = 0.416.
\end{aligned}$$
The last sentence was not clear. It was supposed to be about conditional distributions.
$$\begin{aligned}
E(Z \mid X=1) = P(Z=1 \mid X=1) &= \frac{P(Z=1, X=1)}{P(X=1)}\\
&= \frac{P(Z=1, X=1, Y=1)}{P(X=1)} + \frac{P(Z=1, X=1, Y=0)}{P(X=1)}\\
&= \frac{P(Z=1 \mid X=1, Y=1)P(Y=1, X=1)}{P(X=1)}\\
&\quad + \frac{P(Z=1 \mid X=1, Y=0)P(Y=0, X=1)}{P(X=1)}\\
&= \frac{P(Z=1 \mid Y=1)P(Y=1, X=1)}{P(X=1)}\\
&\quad + \frac{P(Z=1 \mid Y=0)P(Y=0, X=1)}{P(X=1)}\\
&= P(Z=1 \mid Y=1)P(Y=1 \mid X=1) + P(Z=1 \mid Y=0)P(Y=0 \mid X=1)\\
&= 0.44\,P(Y=1 \mid X=1) + 0.36\,P(Y=0 \mid X=1) = 0.5(0.44 + 0.36) = 0.4.
\end{aligned}$$
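The algebraic decomposition above can be verified by brute-force enumeration of a joint distribution (a sketch, not part of the original solution; the specific probabilities below are illustrative, not necessarily the problem's):

```python
# Check of E(Z|X=1) = sum_y P(Z=1|Y=y) P(Y=y|X=1) by enumerating a
# joint distribution. Illustrative probabilities; Z is assumed
# conditionally independent of X given Y, as in the derivation above.
pX1 = 0.4                        # P(X=1)
pY1_given_X = {1: 0.5, 0: 0.4}   # P(Y=1|X=x)
pZ1_given_Y = {1: 0.7, 0: 0.3}   # P(Z=1|Y=y)

def p_xyz1(x, y):
    """Joint probability P(X=x, Y=y, Z=1)."""
    px = pX1 if x == 1 else 1 - pX1
    py = pY1_given_X[x] if y == 1 else 1 - pY1_given_X[x]
    return px * py * pZ1_given_Y[y]

lhs = sum(p_xyz1(1, y) for y in (0, 1)) / pX1   # P(Z=1|X=1) directly
rhs = sum(pZ1_given_Y[y] * (pY1_given_X[1] if y == 1 else 1 - pY1_given_X[1])
          for y in (0, 1))                      # decomposition
print(round(lhs, 6), round(rhs, 6))  # the two sides agree
```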

3. We can rewrite the given equations as
$$\begin{pmatrix} U \\ V \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} X \\ Y \end{pmatrix} - \begin{pmatrix} 1 \\ 3 \end{pmatrix},$$
which is equivalent to
$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} U+1 \\ V+3 \end{pmatrix}.$$
Thus,
$$\begin{pmatrix} X \\ Y \end{pmatrix} \sim N\!\left(\begin{pmatrix} 5 \\ 4 \end{pmatrix}, \begin{pmatrix} 4\rho + 5 & 3(1+\rho) \\ 3(1+\rho) & 2(1+\rho) \end{pmatrix}\right).$$
Then
$$E(Y \mid X) = E(Y) + \frac{Cov(X,Y)}{Var(X)}(X - E(X)) = 4 + \frac{3(1+\rho)}{4\rho + 5}(X - 5),$$
$$Var(Y \mid X) = Var(Y) - \frac{Cov^2(X,Y)}{Var(X)} = 2(1+\rho) - \frac{9(1+\rho)^2}{4\rho + 5} = (1+\rho)\,\frac{8\rho + 10 - 9 - 9\rho}{4\rho + 5} = \frac{(1+\rho)(1-\rho)}{4\rho + 5}.$$
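Assuming, as in the problem, that $U$ and $V$ are standard normal with correlation $\rho$, the mean vector and covariance entries above can be confirmed by Monte Carlo for a sample value of $\rho$ (a sketch, not part of the original solution):

```python
import random

# Monte Carlo check of E(X)=5, E(Y)=4, Var(X)=4*rho+5, Cov(X,Y)=3(1+rho)
# with X = 2U + V + 5 and Y = U + V + 4, for an illustrative rho.
random.seed(0)
rho = 0.3
n = 200_000
xs, ys = [], []
for _ in range(n):
    u = random.gauss(0, 1)
    # V = rho*U + sqrt(1-rho^2)*W gives Corr(U, V) = rho
    v = rho * u + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
    xs.append(2 * u + v + 5)
    ys.append(u + v + 4)

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

print(mx, my)    # should be close to 5 and 4
print(var_x)     # should be close to 4*rho + 5 = 6.2
print(cov_xy)    # should be close to 3*(1 + rho) = 3.9
```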
4. $Var(U \mid V) = E(U^2 \mid V) - [E(U \mid V)]^2$. Therefore, $E(U^2 \mid V) = Var(U \mid V) + [E(U \mid V)]^2$. In addition, $Var(U \mid V) = 1 - \rho^2$ and $E(U \mid V) = \rho V$. As a result, $E(U^2 \mid V) = 1 - \rho^2 + \rho^2 V^2$. The best linear predictor of $U^2$ given $V$ equals $E(U^2) + \frac{Cov(U^2, V)}{Var(V)}(V - E(V))$.
Note $E(U^2) = Var(U) = 1$. Similarly, since $E(V) = 0$, $Cov(U^2, V) = E(U^2 V) = E_V\!\left[V\,E(U^2 \mid V)\right]$, by the law of iterated expectations. Then
$$E_V\!\left[V\,E(U^2 \mid V)\right] = E_V\!\left[(1-\rho^2)V + \rho^2 V^3\right] = 0,$$
since $E(V) = E(V^3) = 0$. Therefore, the best linear predictor of $U^2$ given $V$ equals $1 + 0 \cdot V = 1$.
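A Monte Carlo sketch (not part of the original solution) confirms that $Cov(U^2, V) = 0$ for standard bivariate normal $(U, V)$, so the best linear predictor of $U^2$ given $V$ is the constant $E(U^2) = 1$; the value of $\rho$ below is illustrative:

```python
import random

# Monte Carlo check that E(U^2) = 1 and Cov(U^2, V) = E(U^2 V) = 0 for
# standard bivariate normal (U, V) with illustrative correlation rho.
random.seed(1)
rho = 0.6
n = 400_000
u2 = u2v = 0.0
for _ in range(n):
    u = random.gauss(0, 1)
    v = rho * u + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
    u2 += u * u
    u2v += u * u * v

print(u2 / n)   # E(U^2), should be close to 1
print(u2v / n)  # Cov(U^2, V), should be close to 0
```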
5.
$$E g(X \mid m, s) = 0 \iff E\begin{pmatrix} X - m \\ (X - m)^2 - s \end{pmatrix} = 0 \iff E(X) = m \ \text{and} \ E(X - m)^2 = s,$$
which means that $m = \mu$ and $s = \sigma^2$.
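These moment conditions can be checked on a simulated normal sample, where the sample analogues of $m$ and $s$ recover $\mu$ and $\sigma^2$ (a sketch, not part of the original solution; parameter values are illustrative):

```python
import random

# Check that the moment conditions E(X - m) = 0 and E((X - m)^2 - s) = 0
# are solved by m = mu and s = sigma^2, via sample averages.
random.seed(3)
mu, sigma = 2.0, 1.5
n = 200_000
xs = [random.gauss(mu, sigma) for _ in range(n)]
m = sum(xs) / n                        # solves the first condition
s = sum((x - m) ** 2 for x in xs) / n  # solves the second condition
print(m, s)  # close to mu = 2.0 and sigma^2 = 2.25
```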


6. a) Writing $x = (1, X_2, X_3)'$ with $X_3 = \alpha_1 + \alpha_2 X_2$,
$$E\begin{pmatrix} 1 & X_2 & X_3 \\ X_2 & X_2^2 & X_2 X_3 \\ X_3 & X_2 X_3 & X_3^2 \end{pmatrix} = E\begin{pmatrix} 1 & X_2 & (\alpha_1 + \alpha_2 X_2) \\ X_2 & X_2^2 & X_2(\alpha_1 + \alpha_2 X_2) \\ (\alpha_1 + \alpha_2 X_2) & X_2(\alpha_1 + \alpha_2 X_2) & (\alpha_1 + \alpha_2 X_2)^2 \end{pmatrix}.$$
Since
$$\alpha_1\begin{pmatrix} 1 \\ X_2 \\ (\alpha_1 + \alpha_2 X_2) \end{pmatrix} + \alpha_2\begin{pmatrix} X_2 \\ X_2^2 \\ X_2(\alpha_1 + \alpha_2 X_2) \end{pmatrix} = \begin{pmatrix} (\alpha_1 + \alpha_2 X_2) \\ X_2(\alpha_1 + \alpha_2 X_2) \\ (\alpha_1 + \alpha_2 X_2)^2 \end{pmatrix},$$
the matrix inside the expectation is not invertible for any realization of $X_2$.
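The singularity survives taking expectations, since the third column of $E(xx')$ remains the same linear combination of the first two. A small numerical sketch (not part of the original solution; the coefficients and moments below are arbitrary) confirms the determinant is zero:

```python
# Check that E(x x') is singular when X_3 = a1 + a2*X_2: the third
# column is a1*(first column) + a2*(second column), so det = 0.
# a1, a2 and the moments m1 = E(X2), m2 = E(X2^2) are arbitrary.
a1, a2 = 1.5, -2.0
m1, m2 = 0.3, 1.2

e13 = a1 + a2 * m1                              # E(X3)
e23 = a1 * m1 + a2 * m2                         # E(X2 X3)
e33 = a1 * a1 + 2 * a1 * a2 * m1 + a2 * a2 * m2 # E(X3^2)
M = [[1,   m1,  e13],
     [m1,  m2,  e23],
     [e13, e23, e33]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # zero up to floating-point rounding
```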





b) Let $\tilde{x} = Ax$, where $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$. Then
$$E(\tilde{x}\tilde{x}^T) = \begin{pmatrix} 1 & E(X_2) \\ E(X_2) & E(X_2^2) \end{pmatrix},$$
and
$$\tilde{\beta} = \left[E(\tilde{x}\tilde{x}^T)\right]^{-1} E(\tilde{x}Y) = \begin{pmatrix} 1 & E(X_2) \\ E(X_2) & E(X_2^2) \end{pmatrix}^{-1}\begin{pmatrix} E(Y) \\ E(X_2 Y) \end{pmatrix}.$$
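The moment-matrix formula can be illustrated on simulated data (a sketch, not part of the original solution; the model $Y = b_0 + b_1 X_2 + \text{error}$ and its parameter values are hypothetical), solving the 2x2 system by hand:

```python
import random

# OLS coefficients from sample moments via the 2x2 matrix
# [[1, E(X2)], [E(X2), E(X2^2)]] and the vector [E(Y), E(X2 Y)].
random.seed(2)
b0, b1 = 1.0, 2.5  # hypothetical true coefficients
n = 100_000
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [b0 + b1 * x + random.gauss(0, 1) for x in x2]

m1 = sum(x2) / n                              # E(X2)
m2 = sum(x * x for x in x2) / n               # E(X2^2)
my = sum(y) / n                               # E(Y)
mxy = sum(x * v for x, v in zip(x2, y)) / n   # E(X2 Y)

# Invert [[1, m1], [m1, m2]] explicitly and apply it to (my, mxy).
det = m2 - m1 * m1
beta0 = (m2 * my - m1 * mxy) / det
beta1 = (mxy - m1 * my) / det
print(beta0, beta1)  # close to b0 = 1.0 and b1 = 2.5
```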
