
Tutorial Week 2: Solutions

Statistical Inference: STAT3013/4027/6027

1. a. The sample space for 𝑈 is 𝑆𝑈 = {0, 1, 2, 3, 4}:

𝑝𝑈 (0) = 𝑃 (𝑈 = 0) = 𝑃 (𝑋 = 0, 𝑌 = 0) = 0.1
𝑝𝑈 (1) = 𝑃 (𝑈 = 1) = 𝑃 (𝑋 = 0, 𝑌 = 1) + 𝑃 (𝑋 = 1, 𝑌 = 0) = 0.1 + 0.25 = 0.35
𝑝𝑈 (2) = 𝑃 (𝑈 = 2) = 𝑃 (𝑋 = 0, 𝑌 = 2) + 𝑃 (𝑋 = 1, 𝑌 = 1) + 𝑃 (𝑋 = 2, 𝑌 = 0) = 0.2 + 0 + 0.05 = 0.25
𝑝𝑈 (3) = 𝑃 (𝑈 = 3) = 𝑃 (𝑋 = 1, 𝑌 = 2) + 𝑃 (𝑋 = 2, 𝑌 = 1) = 0.2 + 0.05 = 0.25
𝑝𝑈 (4) = 𝑃 (𝑈 = 4) = 𝑃 (𝑋 = 2, 𝑌 = 2) = 0.05
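The same pmf can be checked mechanically by summing the joint table over the anti-diagonals x + y = u. A minimal Python sketch (the dictionary simply transcribes the joint pmf from the question):

```python
# Joint pmf P(X = x, Y = y) transcribed from the question's table.
joint = {
    (0, 0): 0.10, (0, 1): 0.10, (0, 2): 0.20,
    (1, 0): 0.25, (1, 1): 0.00, (1, 2): 0.20,
    (2, 0): 0.05, (2, 1): 0.05, (2, 2): 0.05,
}

def pmf_of_sum(joint):
    """pmf of U = X + Y: accumulate probability over pairs with x + y = u."""
    p_u = {}
    for (x, y), p in joint.items():
        p_u[x + y] = p_u.get(x + y, 0.0) + p
    return p_u

p_u = pmf_of_sum(joint)
for u in sorted(p_u):
    print(u, round(p_u[u], 4))  # matches the hand calculation above
```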
b. The marginal distribution for 𝑋 is:

𝑝𝑋 (0) = 𝑃 (𝑋 = 0) = 𝑃 (𝑋 = 0, 0 ≤ 𝑌 ≤ 2) = 0.1 + 0.1 + 0.2 = 0.4
𝑝𝑋 (1) = 𝑃 (𝑋 = 1) = 𝑃 (𝑋 = 1, 0 ≤ 𝑌 ≤ 2) = 0.25 + 0 + 0.2 = 0.45
𝑝𝑋 (2) = 𝑃 (𝑋 = 2) = 𝑃 (𝑋 = 2, 0 ≤ 𝑌 ≤ 2) = 0.05 + 0.05 + 0.05 = 0.15

The marginal distribution for 𝑌 is:

𝑝𝑌 (0) = 𝑃 (𝑌 = 0) = 𝑃 (0 ≤ 𝑋 ≤ 2, 𝑌 = 0) = 0.1 + 0.25 + 0.05 = 0.4
𝑝𝑌 (1) = 𝑃 (𝑌 = 1) = 𝑃 (0 ≤ 𝑋 ≤ 2, 𝑌 = 1) = 0.1 + 0 + 0.05 = 0.15
𝑝𝑌 (2) = 𝑃 (𝑌 = 2) = 𝑃 (0 ≤ 𝑋 ≤ 2, 𝑌 = 2) = 0.2 + 0.2 + 0.05 = 0.45

Clearly, 𝑋 and 𝑌 are not independent as 𝑃 (𝑋 = 0, 𝑌 = 0) = 0.1 ≠ 𝑃 (𝑋 = 0) × 𝑃 (𝑌 = 0) = 0.4 × 0.4 = 0.16.
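The marginals and the product check can also be sketched in a few lines of Python (the dictionary transcribes the joint pmf from the question):

```python
# Marginals of X and Y by summing the joint pmf over the other variable.
joint = {
    (0, 0): 0.10, (0, 1): 0.10, (0, 2): 0.20,
    (1, 0): 0.25, (1, 1): 0.00, (1, 2): 0.20,
    (2, 0): 0.05, (2, 1): 0.05, (2, 2): 0.05,
}
p_x = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in range(3)}
p_y = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in range(3)}

# Independence would require P(X=x, Y=y) = P(X=x) P(Y=y) in every cell;
# it already fails at (0, 0): 0.1 versus 0.4 * 0.4 = 0.16.
fails_at_00 = abs(joint[(0, 0)] - p_x[0] * p_y[0]) > 1e-9
```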

c. Using the multiplication rule for independent random variables we have:

                  Values of 𝑌1
Values of 𝑋1      0        1        2
     0           0.16     0.06     0.18
     1           0.18     0.0675   0.2025
     2           0.06     0.0225   0.0675

d. Similar to part (a), the pmf of 𝑈1 is calculated as:

𝑝𝑈1 (0) = 𝑃 (𝑈1 = 0) = 𝑃 (𝑋1 = 0, 𝑌1 = 0) = 0.16
𝑝𝑈1 (1) = 𝑃 (𝑈1 = 1) = 𝑃 (𝑋1 = 0, 𝑌1 = 1) + 𝑃 (𝑋1 = 1, 𝑌1 = 0) = 0.06 + 0.18 = 0.24
𝑝𝑈1 (2) = 𝑃 (𝑈1 = 2) = 𝑃 (𝑋1 = 0, 𝑌1 = 2) + 𝑃 (𝑋1 = 1, 𝑌1 = 1) + 𝑃 (𝑋1 = 2, 𝑌1 = 0) = 0.18 + 0.0675 + 0.06 = 0.3075
𝑝𝑈1 (3) = 𝑃 (𝑈1 = 3) = 𝑃 (𝑋1 = 1, 𝑌1 = 2) + 𝑃 (𝑋1 = 2, 𝑌1 = 1) = 0.2025 + 0.0225 = 0.225
𝑝𝑈1 (4) = 𝑃 (𝑈1 = 4) = 𝑃 (𝑋1 = 2, 𝑌1 = 2) = 0.0675

This is different from the pmf of 𝑈 , despite the equality of the marginal distributions of the components of 𝑈1 and 𝑈 . Thus, the joint distribution is important in determining the distribution of the sum (or of any multi-variable function) of random variables.
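Parts (c) and (d) can be checked mechanically: under independence the joint pmf is the outer product of the marginals, and the pmf of 𝑈1 follows as in part (a). A Python sketch:

```python
# Multiplication rule: under independence p(x, y) = p_X(x) * p_Y(y).
p_x = {0: 0.40, 1: 0.45, 2: 0.15}
p_y = {0: 0.40, 1: 0.15, 2: 0.45}
joint1 = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

# pmf of U1 = X1 + Y1, accumulating probability over anti-diagonals.
p_u1 = {}
for (x, y), p in joint1.items():
    p_u1[x + y] = p_u1.get(x + y, 0.0) + p
```

The resulting pmf {0: 0.16, 1: 0.24, 2: 0.3075, 3: 0.225, 4: 0.0675} differs from that of 𝑈 even though the marginals agree.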
e. Let’s compute the following:

𝐸[𝑋] = ∑ (possibility × probability) = ∑_{𝑥=0}^{2} 𝑥 𝑃 (𝑋 = 𝑥)
= (0 × 0.4) + (1 × 0.45) + (2 × 0.15) = 0.75

𝐸[𝑌 ] = ∑ (possibility × probability) = ∑_{𝑦=0}^{2} 𝑦 𝑃 (𝑌 = 𝑦)
= (0 × 0.4) + (1 × 0.15) + (2 × 0.45) = 1.05

𝐸[𝑌²] = ∑_{𝑦=0}^{2} 𝑦² 𝑃 (𝑌 = 𝑦)
= (0² × 0.4) + (1² × 0.15) + (2² × 0.45) = 1.95

𝑉 [𝑌 ] = 𝐸[𝑌²] − (𝐸[𝑌 ])²
= 1.95 − 1.05² = 0.8475
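The same moments drop out of a few lines of Python (marginals transcribed from part (b)):

```python
# Marginal pmfs of X and Y from part (b).
p_x = {0: 0.40, 1: 0.45, 2: 0.15}
p_y = {0: 0.40, 1: 0.15, 2: 0.45}

e_x = sum(x * p for x, p in p_x.items())        # E[X]   = 0.75
e_y = sum(y * p for y, p in p_y.items())        # E[Y]   = 1.05
e_y2 = sum(y ** 2 * p for y, p in p_y.items())  # E[Y^2] = 1.95
var_y = e_y2 - e_y ** 2                         # V[Y]   = 0.8475
```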
f. We note that 𝐸(𝑋|𝑌 = 𝑦) = ∑_{𝑥=0}^{2} 𝑥 𝑃 (𝑋 = 𝑥|𝑌 = 𝑦), where 𝑃 (𝑋 = 𝑥|𝑌 = 𝑦) is the conditional pmf of 𝑋 given 𝑌 = 𝑦, which we can calculate for all possible pairs (𝑥, 𝑦) as:

𝑃 (𝑋 = 0|𝑌 = 0) = 𝑃 (𝑋 = 0, 𝑌 = 0)/𝑃 (𝑌 = 0) = 0.1/0.4 = 0.25
𝑃 (𝑋 = 1|𝑌 = 0) = 𝑃 (𝑋 = 1, 𝑌 = 0)/𝑃 (𝑌 = 0) = 0.25/0.4 = 0.625
𝑃 (𝑋 = 2|𝑌 = 0) = 𝑃 (𝑋 = 2, 𝑌 = 0)/𝑃 (𝑌 = 0) = 0.05/0.4 = 0.125

𝑃 (𝑋 = 0|𝑌 = 1) = 𝑃 (𝑋 = 0, 𝑌 = 1)/𝑃 (𝑌 = 1) = 0.1/0.15 = 0.667
𝑃 (𝑋 = 1|𝑌 = 1) = 𝑃 (𝑋 = 1, 𝑌 = 1)/𝑃 (𝑌 = 1) = 0/0.15 = 0
𝑃 (𝑋 = 2|𝑌 = 1) = 𝑃 (𝑋 = 2, 𝑌 = 1)/𝑃 (𝑌 = 1) = 0.05/0.15 = 0.333

𝑃 (𝑋 = 0|𝑌 = 2) = 𝑃 (𝑋 = 0, 𝑌 = 2)/𝑃 (𝑌 = 2) = 0.2/0.45 = 0.444
𝑃 (𝑋 = 1|𝑌 = 2) = 𝑃 (𝑋 = 1, 𝑌 = 2)/𝑃 (𝑌 = 2) = 0.2/0.45 = 0.444
𝑃 (𝑋 = 2|𝑌 = 2) = 𝑃 (𝑋 = 2, 𝑌 = 2)/𝑃 (𝑌 = 2) = 0.05/0.45 = 0.111

𝐸(𝑋|𝑌 = 0) = (0 × 0.25) + (1 × 0.625) + (2 × 0.125) = 0.875
𝐸(𝑋|𝑌 = 1) = (0 × 0.667) + (1 × 0) + (2 × 0.333) = 0.667
𝐸(𝑋|𝑌 = 2) = (0 × 0.444) + (1 × 0.444) + (2 × 0.111) = 0.667

We can see:

𝐸[𝐸[𝑋|𝑌 ]] = ∑_{𝑦=0}^{2} 𝐸[𝑋|𝑌 = 𝑦] 𝑃 (𝑌 = 𝑦)
= (0.875 × 0.4) + (0.667 × 0.15) + (0.667 × 0.45) = 0.75
= 𝐸[𝑋]
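The tower property 𝐸[𝐸[𝑋|𝑌 ]] = 𝐸[𝑋] can be verified numerically; a sketch using the exact conditional probabilities rather than the rounded values:

```python
# Joint pmf P(X = x, Y = y) transcribed from the question's table.
joint = {
    (0, 0): 0.10, (0, 1): 0.10, (0, 2): 0.20,
    (1, 0): 0.25, (1, 1): 0.00, (1, 2): 0.20,
    (2, 0): 0.05, (2, 1): 0.05, (2, 2): 0.05,
}
p_y = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in range(3)}

def cond_exp_x(y):
    """E(X | Y = y) = sum_x x * P(X = x, Y = y) / P(Y = y)."""
    return sum(x * joint[(x, y)] for x in range(3)) / p_y[y]

# Averaging the conditional expectations over the marginal of Y
# recovers E[X] = 0.75, as the hand calculation shows.
tower = sum(cond_exp_x(y) * p_y[y] for y in range(3))
```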

2. We calculate the moment generating function as follows:


𝑀𝑋 (𝑡) = 𝐸[exp(𝑋𝑡)] = ∫_{−∞}^{∞} exp(𝑡𝑥) (1/(𝜎√(2𝜋))) exp{−(𝑥 − 𝜇)²/(2𝜎²)} 𝑑𝑥
= ∫_{−∞}^{∞} (1/(𝜎√(2𝜋))) exp{−(𝑥² − 2𝜇𝑥 + 𝜇² − 2𝜎²𝑡𝑥)/(2𝜎²)} 𝑑𝑥
= ∫_{−∞}^{∞} (1/(𝜎√(2𝜋))) exp{−(𝑥² − 2(𝜇 + 𝜎²𝑡)𝑥 + 𝜇²)/(2𝜎²)} 𝑑𝑥
= ∫_{−∞}^{∞} (1/(𝜎√(2𝜋))) exp{−(𝑥² − 2(𝜇 + 𝜎²𝑡)𝑥 + 𝜇² + (𝜇 + 𝜎²𝑡)² − (𝜇 + 𝜎²𝑡)²)/(2𝜎²)} 𝑑𝑥
= ∫_{−∞}^{∞} (1/(𝜎√(2𝜋))) exp{−(𝑥² − 2(𝜇 + 𝜎²𝑡)𝑥 + (𝜇 + 𝜎²𝑡)² − 2𝜇𝜎²𝑡 − 𝜎⁴𝑡²)/(2𝜎²)} 𝑑𝑥
= exp{−(−2𝜇𝜎²𝑡 − 𝜎⁴𝑡²)/(2𝜎²)} ∫_{−∞}^{∞} (1/(𝜎√(2𝜋))) exp{−(𝑥 − (𝜇 + 𝜎²𝑡))²/(2𝜎²)} 𝑑𝑥
= exp{−(−2𝜇𝜎²𝑡 − 𝜎⁴𝑡²)/(2𝜎²)} × 1
= exp{𝜇𝑡 + (1/2)𝜎²𝑡²},

where the integral equals 1 because the remaining integrand is the N(𝜇 + 𝜎²𝑡, 𝜎²) density.
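A quick Monte Carlo sanity check of the closed form (the parameter values here are arbitrary, chosen only for illustration):

```python
import math
import random

# X ~ N(mu, sigma^2); compare the sample average of exp(t X) with the
# closed form M_X(t) = exp(mu t + sigma^2 t^2 / 2) derived above.
mu, sigma, t = 1.0, 2.0, 0.3
random.seed(0)
n = 200_000
mc = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n
closed = math.exp(mu * t + 0.5 * sigma ** 2 * t ** 2)
```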

3. First, we note that 𝑌 = 𝑋² implies that 𝑋 = √𝑌 and 𝑑𝑥/𝑑𝑦 = 0.5𝑦^{−1/2}. So, using the change of variable formula, we have (for 𝑦 > 0):
a.

𝑓𝑌 (𝑦) = 𝑓𝑋 (𝑥 = √𝑦) |𝑑𝑥/𝑑𝑦|
= (2/√(2𝜋)) exp{−(√𝑦)²/2} × 0.5𝑦^{−1/2}
= (1/√(2𝜋𝑦)) exp{−𝑦/2}


b. Note: Γ(1/2) = √𝜋, so we have:

𝑓𝑌 (𝑦) = (1/(Γ(1/2)√(2𝑦))) exp{−𝑦/2}
= (1/(2^{1/2} Γ(1/2))) 𝑦^{1/2−1} exp{−𝑦/2}

This is a 𝜒2 distribution with 1 degree of freedom.
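A simulation check of parts (a)-(b): if 𝑍 ~ N(0, 1) then 𝑋 = |𝑍| has the half-normal density used above, so 𝑌 = 𝑋² = 𝑍² should follow the 𝜒²₁ distribution. A sketch comparing the empirical cdf at 1 with the exact value 2Φ(1) − 1:

```python
import math
import random

# Draw Z ~ N(0,1) and square it; Y = Z^2 should be chi-squared(1).
random.seed(1)
n = 100_000
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

emp = sum(y <= 1.0 for y in ys) / n                 # empirical P(Y <= 1)
phi = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))  # standard normal cdf at 1
theory = 2.0 * phi - 1.0                            # P(chi2_1 <= 1) ~ 0.6827
```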


• Note: since the square of a standard normal random variable 𝑍 has a chi-squared distribution with 1 degree of freedom, and 𝑍² = |𝑍|², we see that 𝑋 has the same distribution as |𝑍|. Thus, the name of the distribution of 𝑋 arises from the fact that it is the absolute value of a standard normal random variable or, more colorfully, 𝑋 has a normal distribution which is “folded” over at the origin.
