Tutorial 9 Memo
Question 1:
a) 𝑃 = 𝑍𝑋 + (1 − 𝑍)𝜇 = 0.7(65000) + 0.3(66000) = 65300 √
b) 𝑍 must go up √ (a larger sample of direct data makes the direct data more reliable, so more weight goes to 𝑋)
c) 𝑍 must go down √ (equivalently, 1 − 𝑍 must go up, since the collateral data are now more reliable)
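The arithmetic in (a) can be checked with a short Python sketch (𝑍, 𝑋 and 𝜇 are the values given in the question):

```python
# Q1(a): credibility premium P = Z*X + (1 - Z)*mu, with the values from the question.
Z = 0.7        # credibility factor
X = 65000      # mean of the insurer's own (direct) data
mu = 66000     # collateral mean
P = round(Z * X + (1 - Z) * mu, 2)
print(P)  # 65300.0
```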
Question 2:
a)
Prior: 𝜃 ~ 𝑃𝑎(𝜉, 𝛾) i.e. 𝑝(𝜃) ∝ 𝜃^{−𝛾−1} 𝐼_(𝜉,∞)(𝜃) √
𝑋𝑖 ~ 𝑈(0, 𝜃) i.e. 𝑝(𝑥𝑖|𝜃) = 𝜃^{−1} for 0 < 𝑥𝑖 < 𝜃, and 0 otherwise
Likelihood function: 𝑙(𝜃|𝒙) = 𝜃^{−𝑛} if 𝜃 > 𝑥𝑖 for all 𝑖 (equivalently 𝜃 > 𝑀), and 0 otherwise, i.e. 𝑙(𝜃|𝒙) = 𝜃^{−𝑛} 𝐼_(𝑀,∞)(𝜃) √ where 𝑀 = max{𝑥1, …, 𝑥𝑛}
Posterior: 𝑝(𝜃|𝒙) ∝ 𝑙(𝜃|𝒙)𝑝(𝜃) = 𝜃^{−𝑛} 𝐼_(𝑀,∞)(𝜃) · 𝜃^{−𝛾−1} 𝐼_(𝜉,∞)(𝜃) = 𝜃^{−(𝛾+𝑛)−1} 𝐼_(𝑀,∞)(𝜃) 𝐼_(𝜉,∞)(𝜃) ∝ 𝜃^{−𝛾′−1} 𝐼_(𝜉′,∞)(𝜃) √
where 𝛾′ = 𝛾 + 𝑛 and 𝜉′ = max{𝜉, 𝑀}; we can show 𝐼_(𝑀,∞)(𝜃) 𝐼_(𝜉,∞)(𝜃) = 𝐼_(𝜉′,∞)(𝜃), i.e. 𝜃|𝒙 ~ 𝑃𝑎(max{𝜉, 𝑀}, 𝛾 + 𝑛) = 𝑃𝑎(𝜉′, 𝛾′)
i) The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜃|𝒙) = 𝛾′𝜉′/(𝛾′ − 1) i.e. (𝛾 + 𝑛)(max{𝜉, 𝑀})/(𝛾 + 𝑛 − 1) √
ii) The Bayes estimate under absolute error loss is the posterior median 𝑀𝑒𝑑𝑖𝑎𝑛(𝜃|𝒙) = 𝜉′ · 2^{1/𝛾′} i.e. (max{𝜉, 𝑀}) · 2^{1/(𝛾+𝑛)} √
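The two Pareto formulas above can be sanity-checked by Monte Carlo. This is only a sketch: the prior parameters and data below are illustrative assumptions, not values from the question.

```python
import random

# Q2(a): theta | x ~ Pa(xi', gamma') with xi' = max(xi, M), gamma' = gamma + n.
xi, gamma = 2.0, 3.0            # hypothetical Pareto prior parameters
x = [1.4, 3.1, 2.7, 0.9, 2.2]   # hypothetical U(0, theta) observations
n, M = len(x), max(x)
xi_p, gamma_p = max(xi, M), gamma + n

post_mean = gamma_p * xi_p / (gamma_p - 1)   # Bayes estimate under squared error loss
post_median = xi_p * 2 ** (1 / gamma_p)      # Bayes estimate under absolute error loss

# Monte Carlo check via the Pareto inverse CDF: theta = xi' * U**(-1/gamma')
random.seed(0)
draws = sorted(xi_p * random.random() ** (-1 / gamma_p) for _ in range(200_000))
mc_mean = sum(draws) / len(draws)
mc_median = draws[len(draws) // 2]
print(post_mean, post_median)
```

The empirical mean and median of the simulated posterior should agree with the closed-form estimates to a few decimal places.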
b)
Prior: 𝜃 ~ 𝑁(𝜃0, 𝜑0) i.e. 𝑝(𝜃) ∝ exp{−(𝜃 − 𝜃0)²/(2𝜑0)} √
𝑋𝑖|𝜃 ~ 𝑁(𝜃, 𝜑) i.e. 𝑝(𝑥𝑖|𝜃) = (2𝜋𝜑)^{−1/2} exp{−(𝑥𝑖 − 𝜃)²/(2𝜑)}
Likelihood function: 𝑝(𝒙|𝜃) = 𝑝(𝑥1|𝜃) ⋯ 𝑝(𝑥𝑛|𝜃)
= (2𝜋𝜑)^{−1/2} exp{−(𝑥1 − 𝜃)²/(2𝜑)} × ⋯ × (2𝜋𝜑)^{−1/2} exp{−(𝑥𝑛 − 𝜃)²/(2𝜑)}
∝ exp{−∑(𝑥𝑖 − 𝜃)²/(2𝜑)} √
Posterior: p(θ|x) ∝ p(θ)p(x|θ)
∝ exp{−(𝜃 − 𝜃0)²/(2𝜑0)} exp{−∑(𝑥𝑖 − 𝜃)²/(2𝜑)}
= exp{−(𝜃² − 2𝜃0𝜃 + 𝜃0²)/(2𝜑0)} × exp{−∑(𝑥𝑖² − 2𝑥𝑖𝜃 + 𝜃²)/(2𝜑)}
= exp{−(𝜃² − 2𝜃0𝜃 + 𝜃0²)/(2𝜑0)} × exp{−(∑𝑥𝑖² − 2𝜃∑𝑥𝑖 + 𝑛𝜃²)/(2𝜑)}
∝ exp{−(𝜃² − 2𝜃0𝜃)/(2𝜑0) − (𝑛𝜃² − 2𝜃∑𝑥𝑖)/(2𝜑)}
= exp{−(𝜃²/2)(1/𝜑0 + 𝑛/𝜑) + 𝜃(𝜃0/𝜑0 + ∑𝑥𝑖/𝜑)}
∝ exp{−𝜃²/(2𝜑1) + 𝜃𝜃1/𝜑1} where 𝜑1 = (1/𝜑0 + 𝑛/𝜑)^{−1} and 𝜃1 = 𝜑1(𝜃0/𝜑0 + ∑𝑥𝑖/𝜑)
∝ exp{−𝜃²/(2𝜑1) + 𝜃𝜃1/𝜑1 − 𝜃1²/(2𝜑1)} = exp{−(𝜃² − 2𝜃𝜃1 + 𝜃1²)/(2𝜑1)}
= exp{−(𝜃 − 𝜃1)²/(2𝜑1)} √ i.e. 𝑝(𝜃|𝒙) = (2𝜋𝜑1)^{−1/2} exp{−(𝜃 − 𝜃1)²/(2𝜑1)} i.e. 𝜃|𝒙 ~ 𝑁(𝜃1, 𝜑1)
i) The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜃|𝒙) = 𝜃1 √ i.e. 𝜑1(𝜃0/𝜑0 + ∑𝑥𝑖/𝜑)
ii) The Bayes estimate under absolute error loss is the posterior median = posterior mean 𝜃1 √ i.e. 𝜑1(𝜃0/𝜑0 + ∑𝑥𝑖/𝜑)
Credibility form (for (i) and (ii), as both have the same answer):
𝜃1 = 𝜑1(𝜃0/𝜑0 + ∑𝑥𝑖/𝜑) = (𝜃0/𝜑0 + ∑𝑥𝑖/𝜑)/(1/𝜑0 + 𝑛/𝜑) = (𝜃0/𝜑0 + ∑𝑥𝑖/𝜑) · 𝜑0𝜑/(𝜑 + 𝑛𝜑0)
= 𝜃0 · 𝜑/(𝜑 + 𝑛𝜑0) + ∑𝑥𝑖 · 𝜑0/(𝜑 + 𝑛𝜑0)
= 𝜃0 · (𝜑/𝑛)/(𝜑/𝑛 + 𝜑0) + 𝑥̄ · 𝜑0/(𝜑/𝑛 + 𝜑0)   (using ∑𝑥𝑖 = 𝑛𝑥̄)
= 𝑍𝑥̄ + (1 − 𝑍)𝜃0 √ where 𝑍 = 𝜑0/(𝜑/𝑛 + 𝜑0) √
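The algebra above can be verified numerically: the direct posterior-mean formula and its credibility form must agree. The parameter values and data below are illustrative assumptions only.

```python
# Q2(b): posterior mean theta1 versus its credibility decomposition.
theta0, phi0 = 10.0, 4.0         # hypothetical prior mean and prior variance
phi = 9.0                        # hypothetical (known) sampling variance
x = [12.1, 9.8, 11.4, 10.9]      # hypothetical N(theta, phi) sample
n = len(x)
xbar = sum(x) / n

phi1 = 1 / (1 / phi0 + n / phi)                 # posterior variance
theta1 = phi1 * (theta0 / phi0 + sum(x) / phi)  # posterior mean

Z = phi0 / (phi / n + phi0)                     # credibility factor
cred = Z * xbar + (1 - Z) * theta0
print(theta1, cred)  # equal up to float rounding
```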
c)
Prior: 𝜃 ~ 𝐺𝑎𝑚𝑚𝑎(𝛼0, 𝛽0) i.e. 𝑝(𝜃) ∝ 𝜃^{𝛼0−1} 𝑒^{−𝛽0𝜃} √
𝑋𝑖|𝜃 ~ 𝐸𝑥𝑝(𝜃) i.e. 𝑝(𝑥𝑖|𝜃) = 𝜃𝑒^{−𝜃𝑥𝑖}
Likelihood function: 𝑝(𝒙|𝜃) = ∏𝑖 𝜃𝑒^{−𝜃𝑥𝑖} = 𝜃^𝑛 𝑒^{−𝜃∑𝑥𝑖} √
Posterior: 𝑝(𝜃|𝒙) ∝ 𝑝(𝒙|𝜃)𝑝(𝜃) ∝ 𝜃^𝑛 𝑒^{−𝜃∑𝑥𝑖} 𝜃^{𝛼0−1} 𝑒^{−𝛽0𝜃} = 𝜃^{𝛼0+𝑛−1} 𝑒^{−(𝛽0+∑𝑥𝑖)𝜃} √
i.e. 𝜃|𝒙 ~ 𝐺𝑎𝑚𝑚𝑎(𝛼0 + 𝑛, 𝛽0 + ∑𝑥𝑖), with 𝛼′ = 𝛼0 + 𝑛 and 𝛽′ = 𝛽0 + ∑𝑥𝑖
The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜃|𝒙) = 𝛼′/𝛽′ i.e. (𝛼0 + 𝑛)/(𝛽0 + ∑𝑥𝑖) √
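A minimal numerical sketch of this gamma–exponential update, with purely illustrative prior parameters and data:

```python
# Q2(c): Gamma(alpha0, beta0) prior with Exp(theta) data gives
# theta | x ~ Gamma(alpha0 + n, beta0 + sum(x)).
alpha0, beta0 = 2.0, 1.5          # hypothetical prior parameters
x = [0.4, 1.2, 0.7, 2.3, 0.9]     # hypothetical Exp(theta) sample
n = len(x)
a_post = alpha0 + n               # alpha' = alpha0 + n
b_post = beta0 + sum(x)           # beta'  = beta0 + sum(x)
bayes_est = a_post / b_post       # posterior mean (squared error loss)
print(bayes_est)
```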
d)
Prior: 𝜃 ~ 𝐺𝑎𝑚𝑚𝑎(𝑎, 𝑏) i.e. 𝑝(𝜃) ∝ 𝜃^{𝑎−1} 𝑒^{−𝑏𝜃} √
𝑋𝑖|𝜃 ~ 𝐺𝑎𝑚𝑚𝑎(𝛼, 𝜃) i.e. 𝑝(𝑥𝑖|𝜃) = (𝜃^𝛼/𝛤(𝛼)) 𝑥𝑖^{𝛼−1} 𝑒^{−𝜃𝑥𝑖} ∝ 𝜃^𝛼 𝑒^{−𝜃𝑥𝑖}
Likelihood function: 𝑝(𝒙|𝜃) ∝ 𝜃^{𝑛𝛼} 𝑒^{−𝜃∑𝑥𝑖} √
Posterior: 𝑝(𝜃|𝒙) ∝ 𝑝(𝜃)𝑝(𝒙|𝜃) ∝ 𝜃^{𝑎−1} 𝑒^{−𝑏𝜃} 𝜃^{𝑛𝛼} 𝑒^{−𝜃∑𝑥𝑖} = 𝜃^{𝑎+𝑛𝛼−1} 𝑒^{−(𝑏+∑𝑥𝑖)𝜃} √
i.e. 𝜃|𝒙 ~ 𝐺𝑎𝑚𝑚𝑎(𝑎 + 𝑛𝛼, 𝑏 + ∑𝑥𝑖), with 𝛼′ = 𝑎 + 𝑛𝛼 and 𝛽′ = 𝑏 + ∑𝑥𝑖
The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜃|𝒙) = 𝛼′/𝛽′ i.e. (𝑎 + 𝑛𝛼)/(𝑏 + ∑𝑥𝑖) √
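Because the update is conjugate, processing the data in batches must give the same posterior as one pass. The sketch below checks this; all parameter values and data are illustrative assumptions.

```python
# Q2(d): Gamma(a, b) prior with Gamma(alpha, theta) data gives
# theta | x ~ Gamma(a + n*alpha, b + sum(x)).
def update(a, b, alpha, xs):
    """One conjugate update: returns the posterior parameters (a', b')."""
    return a + len(xs) * alpha, b + sum(xs)

a, b, alpha = 3.0, 2.0, 1.5      # hypothetical prior parameters and known shape
x = [0.8, 1.1, 0.5, 1.9]         # hypothetical Gamma(alpha, theta) sample
one_shot = update(a, b, alpha, x)
batched = update(*update(a, b, alpha, x[:2]), alpha, x[2:])
post_mean = one_shot[0] / one_shot[1]   # (a + n*alpha) / (b + sum(x))
print(one_shot, post_mean)
```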
e)
Prior: 𝜃 ~ 𝐼𝑛𝑣-𝐺(𝛼, 𝜆) i.e. 𝑝(𝜃) ∝ (1/𝜃)^{𝛼+1} exp[−𝜆/𝜃] √
𝑝(𝑥𝑖|𝜃) ∝ (1/𝜃) exp[−𝑥𝑖²/𝜃]
Likelihood function: 𝑙(𝜃|𝒙) = 𝑝(𝒙|𝜃) ∝ ∏ (1/𝜃) exp[−𝑥𝑖²/𝜃] = (1/𝜃)^𝑛 exp[−∑𝑥𝑖²/𝜃] √
Posterior: 𝑝(𝜃|𝒙) ∝ 𝑝(𝜃)𝑝(𝒙|𝜃) ∝ (1/𝜃)^{𝛼+1} exp[−𝜆/𝜃] (1/𝜃)^𝑛 exp[−∑𝑥𝑖²/𝜃] = (1/𝜃)^{𝑛+𝛼+1} exp[−(𝜆 + ∑𝑥𝑖²)/𝜃] √
i.e. 𝜃|𝒙 ~ 𝐼𝑛𝑣-𝐺(𝛼 + 𝑛, 𝜆 + ∑𝑥𝑖²), with 𝛼′ = 𝛼 + 𝑛 and 𝜆′ = 𝜆 + ∑𝑥𝑖²
The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜃|𝒙) = 𝜆′/(𝛼′ − 1) i.e. (𝜆 + ∑𝑥𝑖²)/(𝛼 + 𝑛 − 1) √
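The inverse-gamma posterior mean can be cross-checked by simulation, using the fact that if 𝐺 ~ Gamma(shape 𝛼′, rate 𝜆′) then 1/𝐺 ~ Inv-G(𝛼′, 𝜆′). The prior parameters and data are illustrative assumptions.

```python
import random

# Q2(e): Inv-G(alpha, lam) prior with p(x_i | theta) ∝ (1/theta) exp(-x_i^2/theta)
# gives theta | x ~ Inv-G(alpha + n, lam + sum x_i^2).
alpha, lam = 3.0, 2.0          # hypothetical prior parameters
x = [1.1, 0.4, 0.9]            # hypothetical observations
n = len(x)
alpha_p = alpha + n
lam_p = lam + sum(xi ** 2 for xi in x)
post_mean = lam_p / (alpha_p - 1)   # Bayes estimate under squared error loss

# Monte Carlo check: gammavariate takes (shape, scale), so scale = 1/rate
random.seed(1)
draws = [1 / random.gammavariate(alpha_p, 1 / lam_p) for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
print(post_mean)
```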
Question 3:
Prior: 𝜋 ~ 𝐵𝑒𝑡𝑎(𝑎, 𝑏) i.e. 𝑝(𝜋) ∝ 𝜋^{𝑎−1}(1 − 𝜋)^{𝑏−1} √
𝑋|𝜋 ~ 𝐵𝑖𝑛(20, 𝜋) i.e. 𝑝(𝑥|𝜋) = (20 choose 𝑥) 𝜋^𝑥 (1 − 𝜋)^{20−𝑥} ∝ 𝜋^𝑥 (1 − 𝜋)^{20−𝑥}
Likelihood function: 𝑙(𝜋|𝑥) = 𝑝(𝑥|𝜋) ∝ 𝜋^𝑥 (1 − 𝜋)^{20−𝑥} √ (the likelihood is a single binomial term, as we only have one observation)
Posterior: 𝑝(𝜋|𝑥) ∝ 𝑝(𝜋)𝑙(𝜋|𝑥) ∝ 𝜋^{𝑎−1}(1 − 𝜋)^{𝑏−1} 𝜋^𝑥 (1 − 𝜋)^{𝑛−𝑥} = 𝜋^{𝑎+𝑥−1}(1 − 𝜋)^{𝑏+𝑛−𝑥−1} √ (writing 𝑛 = 20)
i.e. 𝜋|𝑥 ~ 𝐵𝑒𝑡𝑎(𝑎 + 𝑥, 𝑏 + 𝑛 − 𝑥), with 𝑎′ = 𝑎 + 𝑥 and 𝑏′ = 𝑏 + 𝑛 − 𝑥
a) The Bayes estimate under squared error loss is the posterior mean 𝐸(𝜋|𝑥) = 𝑎′/(𝑎′ + 𝑏′) i.e. (𝑎 + 𝑥)/(𝑎 + 𝑏 + 𝑛) √
b) The Bayes estimate under absolute error loss is the posterior median 𝑀𝑒𝑑𝑖𝑎𝑛(𝜋|𝑥) ≈ (𝑎′ − 1/3)/(𝑎′ + 𝑏′ − 2/3) i.e. (𝑎 + 𝑥 − 1/3)/(𝑎 + 𝑏 + 𝑛 − 2/3) √
Credibility form: (for (a)):
𝐸(𝜋|𝑥) = (𝑎 + 𝑥)/(𝑎 + 𝑏 + 𝑛) = 𝑎/(𝑎 + 𝑏 + 𝑛) + 𝑥/(𝑎 + 𝑏 + 𝑛) = ((𝑎 + 𝑏)/(𝑎 + 𝑏 + 𝑛)) · (𝑎/(𝑎 + 𝑏)) + (𝑛/(𝑎 + 𝑏 + 𝑛)) · (𝑥/𝑛) = 𝑍𝑦̄ + (1 − 𝑍)𝐸(𝜋) √
where 𝑍 = 𝑛/(𝑎 + 𝑏 + 𝑛) √, 𝑦̄ = 𝑥/𝑛 with 𝑥 = ∑_{𝑖=1}^{20} 𝑦𝑖 and 𝑌𝑖 ~ 𝐵𝑒𝑟𝑛𝑜𝑢𝑙𝑙𝑖(𝜋)
Credibility form: (for (b)):
𝑀𝑒𝑑𝑖𝑎𝑛(𝜋|𝑥) ≈ (𝑎 + 𝑥 − 1/3)/(𝑎 + 𝑏 + 𝑛 − 2/3) = (𝑎 − 1/3)/(𝑎 + 𝑏 + 𝑛 − 2/3) + 𝑥/(𝑎 + 𝑏 + 𝑛 − 2/3)
= ((𝑎 + 𝑏 − 2/3)/(𝑎 + 𝑏 + 𝑛 − 2/3)) · ((𝑎 − 1/3)/(𝑎 + 𝑏 − 2/3)) + (𝑛/(𝑎 + 𝑏 + 𝑛 − 2/3)) · (𝑥/𝑛)
= 𝑍𝑦̄ + (1 − 𝑍)𝑀𝑒𝑑𝑖𝑎𝑛(𝜋) √ where 𝑍 = 𝑛/(𝑎 + 𝑏 + 𝑛 − 2/3) √, 𝑦̄ = 𝑥/𝑛 with 𝑥 = ∑_{𝑖=1}^{20} 𝑦𝑖 and 𝑌𝑖 ~ 𝐵𝑒𝑟𝑛𝑜𝑢𝑙𝑙𝑖(𝜋)
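The credibility decomposition of the posterior mean in (a) can be checked numerically. The prior parameters and the observed count below are illustrative assumptions.

```python
# Q3: Beta(a, b) prior with X | pi ~ Bin(20, pi) gives pi | x ~ Beta(a + x, b + n - x).
a, b, n = 2.0, 3.0, 20           # hypothetical prior parameters; n = 20 trials
x = 7                            # hypothetical observed number of successes
post_mean = (a + x) / (a + b + n)                   # Bayes estimate, squared error loss
approx_median = (a + x - 1/3) / (a + b + n - 2/3)   # Bayes estimate, absolute error loss

Z = n / (a + b + n)              # credibility factor for the posterior mean
ybar = x / n                     # sample proportion of the n Bernoulli trials
cred = Z * ybar + (1 - Z) * a / (a + b)             # Z*ybar + (1 - Z)*E(pi)
print(post_mean, cred)  # equal up to float rounding
```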
Question 4:
Question 5:
𝐸[𝑚(𝜃)] ≈ 3732 √
𝐸[𝑠²(𝜃)] ≈ 2173324.96 √
𝑣𝑎𝑟[𝑚(𝜃)] ≈ 3756942 − 2173324.96/6 = 3394721.173 √
𝑍 = 𝑛/(𝑛 + 𝐸[𝑠²(𝜃)]/𝑣𝑎𝑟[𝑚(𝜃)]) = 6/(6 + 2173324.96/3394721.173) = 0.9036 √
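The Question 5 arithmetic can be reproduced directly from the estimates quoted above:

```python
# Q5: empirical Bayes (Buhlmann) credibility factor from the memo's estimates.
n = 6                           # observations per risk
E_m = 3732                      # E[m(theta)] estimate (the collateral mean; not needed for Z)
E_s2 = 2173324.96               # E[s^2(theta)] estimate
var_between = 3756942           # sample variance of the risk means
var_m = var_between - E_s2 / n  # var[m(theta)] estimate
Z = n / (n + E_s2 / var_m)      # credibility factor
print(round(var_m, 3), round(Z, 4))  # 3394721.173 0.9036
```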