Problem Set 2 - Solution
Nese Yildiz
b) Note that
\[
f_{X|Y}(x \mid 0.5 + x) = \frac{f_{X,Y}(x,\, 0.5 + x)}{\int_0^{0.5} f_{X,Y}(x,\, 0.5 + x)\,dx}
\]
if $0 < x < 0.5$ and is equal to $0$ otherwise. Moreover,
\[
\int_0^{0.5} f_{X,Y}(x,\, 0.5 + x)\,dx = \int_0^{0.5} (2x + 0.5)\,dx = \frac{1}{2}.
\]
Thus,
\[
E(X \mid Y = 0.5 + X) = 2\int_0^{0.5} x(2x + 0.5)\,dx
= 4\int_0^{0.5} x^2\,dx + \int_0^{0.5} x\,dx
= \frac{4}{24} + \frac{1}{8} = \frac{7}{24}.
\]
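As a quick arithmetic check (not part of the original solution), the expectation above can be evaluated exactly with Python's `fractions` module:

```python
from fractions import Fraction

# E(X | Y = 0.5 + X) = 2 * integral_0^{1/2} x(2x + 1/2) dx.
# The integrand 2x(2x + 1/2) = 4x^2 + x is integrated term by term:
half = Fraction(1, 2)
term1 = Fraction(4, 3) * half**3   # antiderivative 4x^3/3 at x = 1/2, i.e. 4/24
term2 = half**2 / 2                # antiderivative x^2/2 at x = 1/2, i.e. 1/8
expectation = term1 + term2
print(expectation)                 # 7/24
```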
2.
\begin{align*}
E(Z) = P(Z=1) &= P(Y=1)P(Z=1 \mid Y=1) + P(Y=0)P(Z=1 \mid Y=0)\\
&= 0.7\,P(Y=1) + 0.3\,P(Y=0)\\
&= 0.7\,[P(X=1)P(Y=1 \mid X=1) + P(X=0)P(Y=1 \mid X=0)]\\
&\quad + 0.3\,[P(X=1)P(Y=0 \mid X=1) + P(X=0)P(Y=0 \mid X=0)]\\
&= 0.7\,[0.4 \cdot 0.5 + 0.6 \cdot 0.4] + 0.3\,[0.4 \cdot 0.5 + 0.4 \cdot 0.4]\\
&= 0.7\,(0.2 + 0.24) + 0.3\,(0.2 + 0.16) = 0.308 + 0.108 = 0.416.
\end{align*}
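The total-probability computation can be verified numerically; the sketch below simply re-enters the probabilities exactly as they appear in the display above:

```python
# Law of total probability with the values used in the solution:
# P(Z=1|Y=1) = 0.7, P(Z=1|Y=0) = 0.3, and the bracketed sums for P(Y=1), P(Y=0).
p_y1 = 0.4 * 0.5 + 0.6 * 0.4   # 0.44
p_y0 = 0.4 * 0.5 + 0.4 * 0.4   # 0.36
p_z1 = 0.7 * p_y1 + 0.3 * p_y0
print(round(p_z1, 3))          # 0.416
```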
The last sentence was not clear. It was supposed to be about conditional distributions.
\begin{align*}
E(Z \mid X=1) = P(Z=1 \mid X=1) &= \frac{P(Z=1,\, X=1)}{P(X=1)}\\
&= \frac{P(Z=1,\, X=1,\, Y=1)}{P(X=1)} + \frac{P(Z=1,\, X=1,\, Y=0)}{P(X=1)}\\
&= \frac{P(Z=1 \mid X=1, Y=1)\,P(Y=1,\, X=1)}{P(X=1)}\\
&\quad + \frac{P(Z=1 \mid X=1, Y=0)\,P(Y=0,\, X=1)}{P(X=1)}\\
&= \frac{P(Z=1 \mid Y=1)\,P(Y=1,\, X=1)}{P(X=1)} + \frac{P(Z=1 \mid Y=0)\,P(Y=0,\, X=1)}{P(X=1)}\\
&= P(Z=1 \mid Y=1)\,P(Y=1 \mid X=1) + P(Z=1 \mid Y=0)\,P(Y=0 \mid X=1)\\
&= 0.44\,P(Y=1 \mid X=1) + 0.36\,P(Y=0 \mid X=1) = 0.5\,(0.44 + 0.36) = 0.4.
\end{align*}
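A one-line numerical check of the conditional expectation, re-using the values from the last line of the derivation:

```python
# E(Z | X=1) = sum over y of P(Z=1|Y=y) * P(Y=y|X=1), with the solution's values.
p_z1_given_y = {1: 0.44, 0: 0.36}   # P(Z=1 | Y=y) as used in the solution
p_y_given_x1 = {1: 0.5, 0: 0.5}     # P(Y=y | X=1)
e_z_given_x1 = sum(p_z1_given_y[y] * p_y_given_x1[y] for y in (0, 1))
print(round(e_z_given_x1, 3))       # 0.4
```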
3. We have
\[
\begin{pmatrix} U \\ V \end{pmatrix}
= \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
\begin{pmatrix} X \\ Y \end{pmatrix}
- \begin{pmatrix} 1 \\ 3 \end{pmatrix},
\]
which is equivalent to
\[
\begin{pmatrix} X \\ Y \end{pmatrix}
= \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} U + 1 \\ V + 3 \end{pmatrix}.
\]
Thus,
\[
\begin{pmatrix} X \\ Y \end{pmatrix}
\sim N\!\left( \begin{pmatrix} 5 \\ 4 \end{pmatrix},\;
\begin{pmatrix} 4\rho + 5 & 3(1+\rho) \\ 3(1+\rho) & 2(1+\rho) \end{pmatrix} \right).
\]
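The covariance matrix follows from the affine transformation rule $\Sigma_{XY} = A\,\Sigma_{UV}\,A^{T}$. A numerical spot-check with numpy, assuming $U$ and $V$ are standard normal with correlation $\rho$ (the value $\rho = 0.3$ below is an illustrative choice, not from the problem):

```python
import numpy as np

rho = 0.3  # illustrative value; the solution keeps rho symbolic
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
cov_uv = np.array([[1.0, rho],
                   [rho, 1.0]])   # Var(U) = Var(V) = 1, Cov(U, V) = rho

cov_xy = A @ cov_uv @ A.T          # covariance of (X, Y)' = A (U+1, V+3)'
expected = np.array([[4 * rho + 5, 3 * (1 + rho)],
                     [3 * (1 + rho), 2 * (1 + rho)]])
print(np.allclose(cov_xy, expected))   # True
```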
Then
\[
E(Y \mid X) = E(Y) + \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(X)}\,(X - E(X))
= 4 + \frac{3(1+\rho)}{4\rho + 5}\,(X - 5),
\]
and
\[
\operatorname{Var}(Y \mid X) = \operatorname{Var}(Y) - \frac{\operatorname{Cov}^2(X,Y)}{\operatorname{Var}(X)}
= 2(1+\rho) - \frac{9(1+\rho)^2}{4\rho + 5}
= (1+\rho)\,\frac{8\rho + 10 - 9 - 9\rho}{4\rho + 5}
= \frac{(1+\rho)(1-\rho)}{4\rho + 5}.
\]
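The algebraic simplification of $\operatorname{Var}(Y \mid X)$ can be spot-checked for a few values of $\rho$ (a verification sketch, not part of the solution):

```python
# Check Var(Y|X) = Var(Y) - Cov^2/Var(X) against the simplified form
# (1+rho)(1-rho)/(4*rho+5) for several illustrative values of rho.
for rho in (-0.5, 0.0, 0.3, 0.9):
    var_x = 4 * rho + 5
    var_y = 2 * (1 + rho)
    cov_xy = 3 * (1 + rho)
    lhs = var_y - cov_xy**2 / var_x
    rhs = (1 + rho) * (1 - rho) / (4 * rho + 5)
    assert abs(lhs - rhs) < 1e-12
print("simplification verified")   # prints: simplification verified
```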
4. $\operatorname{Var}(U \mid V) = E(U^2 \mid V) - [E(U \mid V)]^2$. Therefore, $E(U^2 \mid V) = \operatorname{Var}(U \mid V) + [E(U \mid V)]^2$. In addition, $\operatorname{Var}(U \mid V) = 1 - \rho^2$ and $E(U \mid V) = \rho V$. As a result, $E(U^2 \mid V) = 1 - \rho^2 + \rho^2 V^2$. The best linear predictor of $U^2$ given $V$ equals $E(U^2) + \frac{\operatorname{Cov}(U^2, V)}{\operatorname{Var}(V)}\,(V - E(V))$. Note that, by the law of iterated expectations, $E(U^2) = E[E(U^2 \mid V)] = E(1 - \rho^2 + \rho^2 V^2) = 1 - \rho^2 + \rho^2 \operatorname{Var}(V) = 1$. Similarly, since $E(V) = 0$,
\[
\operatorname{Cov}(U^2, V) = E(U^2 V) = E[V\,E(U^2 \mid V)] = E[(1 - \rho^2)V + \rho^2 V^3] = 0,
\]
because the first and third moments of $V \sim N(0,1)$ are zero. Therefore, the best linear predictor of $U^2$ given $V$ equals the constant $1$.
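A Monte Carlo sketch supports the conclusion that $\operatorname{Cov}(U^2, V) = 0$, so the best linear predictor of $U^2$ is constant. This is my own illustration, assuming $(U, V)$ bivariate standard normal; $\rho = 0.6$ and the seed are arbitrary choices:

```python
import numpy as np

# Simulate (U, V) bivariate standard normal with correlation rho.
rng = np.random.default_rng(0)
rho = 0.6
n = 1_000_000
v = rng.standard_normal(n)
u = rho * v + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print(np.cov(u**2, v)[0, 1])   # close to 0: Cov(U^2, V) = 0
print((u**2).mean())           # close to 1: E(U^2) = 1
```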
5.
\[
E\,g(X;\, m, s) = 0 \iff E\begin{pmatrix} X - m \\ (X - m)^2 - s \end{pmatrix} = 0 \iff E(X) = m \text{ and } E(X - m)^2 = s.
\]
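These two moment conditions are solved by the sample mean and the divide-by-$n$ sample variance. A simulated sketch (distribution, parameters, and variable names below are my own choices, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)   # illustrative sample

m_hat = x.mean()                     # solves the empirical E(X - m) = 0
s_hat = ((x - m_hat) ** 2).mean()    # solves the empirical E[(X - m)^2 - s] = 0

# Empirical analogues of the two moment conditions, evaluated at the estimates:
g1 = (x - m_hat).mean()
g2 = ((x - m_hat) ** 2 - s_hat).mean()
print(abs(g1) < 1e-10, abs(g2) < 1e-10)   # True True
```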
With $X_3 = 1 + 2X_2$,
\[
E\begin{pmatrix} 1 \\ X_2 \\ X_3 \end{pmatrix}\begin{pmatrix} 1 & X_2 & X_3 \end{pmatrix}
= E\begin{pmatrix} 1 & X_2 & X_3 \\ X_2 & X_2^2 & X_2 X_3 \\ X_3 & X_2 X_3 & X_3^2 \end{pmatrix}
= E\begin{pmatrix} 1 & X_2 & (1 + 2X_2) \\ X_2 & X_2^2 & X_2(1 + 2X_2) \\ (1 + 2X_2) & X_2(1 + 2X_2) & (1 + 2X_2)^2 \end{pmatrix}.
\]
Since
\[
\begin{pmatrix} 1 \\ X_2 \\ (1 + 2X_2) \end{pmatrix}
+ 2\begin{pmatrix} X_2 \\ X_2^2 \\ X_2(1 + 2X_2) \end{pmatrix}
= \begin{pmatrix} (1 + 2X_2) \\ X_2(1 + 2X_2) \\ (1 + 2X_2)^2 \end{pmatrix},
\]
the matrix inside the expectation is not invertible for any realization of $X_2$.
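The column dependence can be confirmed numerically: substituting $X_3 = 1 + 2X_2$ makes the determinant zero for every value of $x_2$ (a verification sketch):

```python
import numpy as np

# With x3 = 1 + 2*x2, the third column equals col1 + 2*col2 for every x2,
# so the matrix (and hence its expectation) is singular.
for x2 in (-1.0, 0.0, 0.7, 3.0):
    x3 = 1 + 2 * x2
    M = np.array([[1.0, x2, x3],
                  [x2, x2**2, x2 * x3],
                  [x3, x2 * x3, x3**2]])
    assert abs(np.linalg.det(M)) < 1e-9
print("singular for every x2 checked")   # prints: singular for every x2 checked
```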
b) Let $\tilde{x} = Ax$, where $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$. Then
\[
E(\tilde{x}\tilde{x}^{T}) = \begin{pmatrix} 1 & E(X_2) \\ E(X_2) & E(X_2^2) \end{pmatrix},
\]
and the coefficient vector of the best linear predictor of $Y$ given $\tilde{x}$ equals
\[
[E(\tilde{x}\tilde{x}^{T})]^{-1} E(\tilde{x} Y)
= \begin{pmatrix} 1 & E(X_2) \\ E(X_2) & E(X_2^2) \end{pmatrix}^{-1}
\begin{pmatrix} E(Y) \\ E(X_2 Y) \end{pmatrix}.
\]
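As a numerical illustration, the moment-based coefficients $[E(\tilde{x}\tilde{x}^{T})]^{-1}E(\tilde{x}Y)$ coincide with least squares on $(1, X_2)$. All simulated quantities below (data-generating coefficients, seed, sample size) are my own choices, not from the problem:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x2 = rng.normal(size=n)
y = 1.5 + 0.8 * x2 + rng.normal(size=n)   # illustrative linear model

xt = np.column_stack([np.ones(n), x2])    # rows are (1, X2)
moment_matrix = xt.T @ xt / n             # sample analogue of E(x~ x~')
moment_vector = xt.T @ y / n              # sample analogue of E(x~ Y)
beta_moments = np.linalg.solve(moment_matrix, moment_vector)

beta_lstsq, *_ = np.linalg.lstsq(xt, y, rcond=None)
print(np.allclose(beta_moments, beta_lstsq))   # True
```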