7 Bayesian Analysis
p (high | high sales forecast) = 0.18/0.63 = 0.2857
p (medium | high sales forecast) = 0.42/0.63 = 0.6667
p (low | high sales forecast) = 0.03/0.63 = 0.0476
Steps to get posterior probability (6/6)
(1) Construct a tree with branches representing all the possible
events which can occur and write the prior probabilities for these
events on the branches.
(2) Extend the tree by attaching to each branch a new branch
which represents the new information which you have obtained. On
each branch write the conditional probability of obtaining this
information given the circumstance represented by the preceding
branch.
(3) Obtain the joint probabilities by multiplying each prior
probability by the conditional probability which follows it on the
tree.
(4) Sum the joint probabilities.
(5) Divide the ‘appropriate’ joint probability by the sum of the joint
probabilities to obtain the required posterior probability.
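These five steps translate directly into a short calculation. Below is a minimal Python sketch of the procedure (function and variable names are illustrative, not from the slides), applied to Example 1, where the assistant forecasts high sales:

```python
def posteriors(priors, likelihoods):
    """Steps 3-5: joint probabilities, their sum, and the posteriors."""
    joints = {s: priors[s] * likelihoods[s] for s in priors}   # step 3: prior x conditional
    total = sum(joints.values())                               # step 4: sum of the joints
    return {s: j / total for s, j in joints.items()}           # step 5: divide by the sum

# Steps 1-2: priors on the branches, conditionals on the attached branches
priors = {"high": 0.2, "medium": 0.7, "low": 0.1}
lik_high_forecast = {"high": 0.9, "medium": 0.6, "low": 0.3}   # p(forecast h | state)

print(posteriors(priors, lik_high_forecast))
# {'high': 0.2857..., 'medium': 0.6667..., 'low': 0.0476...}  -- matches the slide
```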
Example 2: assistant’s forecast is medium (1/3)
Prior probability:
A company’s sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be medium.
Posterior probability:
What should be the sales manager’s revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | medium sales forecast)=?
p (medium| medium sales forecast)=?
p (low | medium sales forecast)=?
Example 2: assistant’s forecast is medium (2/3)
The accuracy of the assistant’s ‘medium’ forecast:
By examining the track record of the assistant’s
forecasts she is able to obtain the following
probabilities:
p(fore m) = 0.2×0.05 + 0.7×0.2 + 0.1×0.1 = 0.01+0.14+0.01 = 0.16
Example 3: assistant’s forecast is low (1/3)
Prior probability:
A company’s sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be low.
Posterior probability:
What should be the sales manager’s revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | low sales forecast)=?
p (medium| low sales forecast)=?
p (low | low sales forecast)=?
Example 3: assistant’s forecast is low (2/3)
The accuracy of the assistant’s ‘low’ forecast:
By examining the track record of the assistant’s
forecasts she is able to obtain the following
probabilities:
p(fore l) = 0.2×0.05 + 0.7×0.2 + 0.1×0.6 = 0.01+0.14+0.06 = 0.21
Summary of Examples 1-3:
Accuracy of the assistant’s forecast:
(1) Conditional Probabilities if real sales level is high:
p (forecast h| H) = 0.9
p (forecast m| H) = 0.05
p (forecast l |H) = 0.05
(2) Conditional Probabilities if real sales level is medium:
p (forecast h|M) = 0.6
p (forecast m|M) = 0.2
p (forecast l |M) = 0.2
(3) Conditional Probabilities if real sales level is low:
p (forecast h|L) = 0.3
p (forecast m|L) = 0.1
p (forecast l |L) = 0.6
Resulting posterior probabilities (from the probability trees):
p(fore h) = 0.63: p(H|fore h) = 0.2857, p(M|fore h) = 0.6667, p(L|fore h) = 0.0476
p(fore m) = 0.16: p(H|fore m) = 0.0625, p(M|fore m) = 0.8750, p(L|fore m) = 0.0625
p(fore l) = 0.21: p(H|fore l) = 0.0476, p(M|fore l) = 0.6667, p(L|fore l) = 0.2857
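The same sketch reproduces Examples 2 and 3. Reusing the illustrative posteriors() function and priors defined earlier, with the ‘medium’ and ‘low’ forecast conditionals from the summary above:

```python
lik_medium_forecast = {"high": 0.05, "medium": 0.2, "low": 0.1}  # p(forecast m | state)
lik_low_forecast    = {"high": 0.05, "medium": 0.2, "low": 0.6}  # p(forecast l | state)

print(posteriors(priors, lik_medium_forecast))  # 0.0625, 0.8750, 0.0625
print(posteriors(priors, lik_low_forecast))     # 0.0476, 0.6667, 0.2857
```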
7.2 Bayes’ theorem
If the events A and B are not independent, the multiplication rule is:
p (A and B) = p (A) × p (B|A)
so that
p (B|A) = p (A and B) / p (A)
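Writing the multiplication rule in both directions and equating the two gives Bayes’ theorem (a standard derivation, added here for completeness):

$$p(A \text{ and } B) = p(A)\,p(B|A) = p(B)\,p(A|B)
\quad\Rightarrow\quad
p(B|A) = \frac{p(A|B)\,p(B)}{p(A)}$$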
Conditional Probability:
In the past when sales turned out to be high the forecast had
correctly predicted high sales on 75% of occasions.
p (high forecast | sales turned out to be high) = 0.75
However, in seasons when sales turned out to be low the
forecast had wrongly predicted high sales on 20% of
occasions.
p (high forecast | sales turned out to be low) = 0.2
Example 4: Posterior probability (7/9)
Example 4: Decision tree with posterior probability (8/9)
Example 4: Decision matrix with posterior probability (9/9)
Table 1: ABC’s forecast track record vs. real sales level
                    forecast
Real sales level    h    m    l    Sum
H                  18    1    1     20
M                   5   40    5     50
L                   3    3   24     30
Sum                26   44   30    100

Question: How to figure out conditional probabilities according to Table 1?
Conditional Probabilities for ABC
(1) Conditional Probabilities if real sales level is high:
p (forecast h|H)=18/20=0.9
p (forecast m|H)=1/20=0.05
p (forecast l |H)=1/20=0.05
p(fore h) = 0.2×0.9 + 0.5×0.1 + 0.3×0.1 = 0.18+0.05+0.03 = 0.26
(2) How to adjust prior probabilities if forecast is m
We have the following conditional probabilities:
p (forecast m|H)=1/20=0.05
p (forecast m|M)=40/50=0.8
p (forecast m|L)=3/30=0.1
p(fore m) = 0.2×0.05 + 0.5×0.8 + 0.3×0.1 = 0.01+0.40+0.03 = 0.44
(3) How to adjust prior probabilities if forecast is l
We have the following conditional probabilities:
p (forecast l|H)=1/20=0.05
p (forecast l|M)=5/50=0.1
p (forecast l|L)=24/30=0.8
p(fore l) = 0.2×0.05 + 0.5×0.1 + 0.3×0.8 = 0.01+0.05+0.24 = 0.30
How HP predicts ABC’s forecast result before paying for the forecast
                    forecast
Real sales level    h    m    l    Sum
H                  18    1    1     20
M                   5   40    5     50
L                   3    3   24     30
Sum                26   44   30    100
Prior probabilities: p(H) = 0.2, p(M) = 0.5, p(L) = 0.3 (the row frequencies of Table 1).
Posterior probabilities for each possible forecast:
p(fore h) = 0.26: p(H|fore h) = 0.18/0.26 = 0.692, p(M|fore h) = 0.05/0.26 = 0.192, p(L|fore h) = 0.03/0.26 = 0.115
p(fore m) = 0.44: p(H|fore m) = 0.01/0.44 = 0.023, p(M|fore m) = 0.40/0.44 = 0.909, p(L|fore m) = 0.03/0.44 = 0.068
p(fore l) = 0.30: p(H|fore l) = 0.01/0.30 = 0.033, p(M|fore l) = 0.05/0.30 = 0.167, p(L|fore l) = 0.24/0.30 = 0.80
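Because the priors (0.2, 0.5, 0.3) equal Table 1’s row frequencies (20/100, 50/100, 30/100), every posterior is simply a column proportion of the table. A minimal Python sketch of this shortcut (names are illustrative):

```python
counts = {  # rows: real sales level; columns: ABC's forecast
    "H": {"h": 18, "m": 1,  "l": 1},
    "M": {"h": 5,  "m": 40, "l": 5},
    "L": {"h": 3,  "m": 3,  "l": 24},
}

for fore in ("h", "m", "l"):
    col_total = sum(row[fore] for row in counts.values())   # 26, 44, 30
    print(f"p(fore {fore}) = {col_total / 100:.2f}")
    for state, row in counts.items():                       # posterior = cell / column total
        print(f"  p({state}|fore {fore}) = {row[fore]}/{col_total}"
              f" = {row[fore] / col_total:.3f}")
```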
How much should HP pay for ABC before knowing its forecast result?
The approach: compare how the different forecast results improve HP’s EMV.

HP’s EMV without the forecast:
MU: 55×0.2 + 10×0.5 − 15×0.3 = 11.5
BD: 25×0.2 + 30×0.5 + 10×0.3 = 23 (best)
BA: 40×0.2 + 20×0.5 + 5×0.3 = 19.5

HP’s EMV with the forecast (using the posterior probabilities above):
If the forecast is h (p(fore h) = 0.26): MU = 38.255 (best), BD = 24.21, BA = 32.10
If the forecast is m (p(fore m) = 0.44): MU = 9.28, BD = 28.5 (best), BA = 19.4
If the forecast is l (p(fore l) = 0.30):
MU: 55×0.033 + 10×0.167 − 15×0.80 = −8.515
BD: 25×0.033 + 30×0.167 + 10×0.80 = 13.835 (best)
BA: 40×0.033 + 20×0.167 + 5×0.80 = 8.66
EMV with forecast = 0.26×38.255 + 0.44×28.5 + 0.30×13.835 = 26.6368

Improvement of EMV: 26.6368 − 23 = 3.6368 (EVII)
So 3.6368 is the most HP should pay ABC for its forecast.
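The whole EVII (and EVPI) calculation fits in a few lines. A sketch using the slides’ numbers (helper names are illustrative; branch EMVs here use the rounded posteriors, so the total matches the slide’s 26.6368 only up to rounding):

```python
priors = {"H": 0.2, "M": 0.5, "L": 0.3}
payoffs = {  # payoff to HP for each decision and real sales level
    "MU": {"H": 55, "M": 10, "L": -15},
    "BD": {"H": 25, "M": 30, "L": 10},
    "BA": {"H": 40, "M": 20, "L": 5},
}

def best_emv(probs):
    """EMV of the best decision given state probabilities."""
    return max(sum(probs[s] * pay[s] for s in probs) for pay in payoffs.values())

emv_without = best_emv(priors)  # BD: 23

posterior = {  # forecast -> (posterior over states, p(forecast))
    "h": ({"H": 0.692, "M": 0.192, "L": 0.115}, 0.26),
    "m": ({"H": 0.023, "M": 0.909, "L": 0.068}, 0.44),
    "l": ({"H": 0.033, "M": 0.167, "L": 0.800}, 0.30),
}
emv_with = sum(p * best_emv(post) for post, p in posterior.values())
print(f"EVII = {emv_with - emv_without:.4f}")  # ~3.65 (slide: 3.6368)

# EVPI: with perfect information HP takes the best payoff in every state
ev_perfect = sum(priors[s] * max(pay[s] for pay in payoffs.values()) for s in priors)
print(f"EVPI = {ev_perfect - emv_without}")    # 29 - 23 = 6
```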
The expected value of perfect information (EVPI)

With perfect information, HP would pick the best decision for each sales level: MU when sales are high (payoff 55), BD when medium (30) and BD when low (10).
The EMV with perfect information is: 55×0.2 + 30×0.5 + 10×0.3 = 29
Improvement of EMV with perfect information: 29 − 23 = 6 (EVPI)
Fig. 2-3 Completed decision tree (pay-off and probability)
Accuracy of New Information
Suppose the forecast accuracy of ABC improves as follows:
                    forecast
Real sales level    h    m    l    Sum
H                  19    1    0     20
M                   2   46    2     50
L                   2    1   27     30
Sum                23   48   29    100
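Reusing the illustrative best_emv() helper from the sketch above, the improved table can be fed through the same pipeline. The priors (0.2, 0.5, 0.3) still match the row frequencies, so the posteriors are again column proportions:

```python
new_counts = {  # the improved track record
    "H": {"h": 19, "m": 1,  "l": 0},
    "M": {"h": 2,  "m": 46, "l": 2},
    "L": {"h": 2,  "m": 1,  "l": 27},
}
emv_with_new = 0.0
for fore in ("h", "m", "l"):
    col = {s: row[fore] for s, row in new_counts.items()}
    total = sum(col.values())                # 23, 48, 29
    emv_with_new += (total / 100) * best_emv({s: n / total for s, n in col.items()})
print(f"EVII with improved forecast = {emv_with_new - 23:.2f}")  # 27.80 - 23 = 4.80
```

As expected, the sharper forecast raises EVII from 3.6368 toward the EVPI ceiling of 6.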
7.6 Examples for Bayesian analysis
Example 7: North Holt Farm
Example 8:
Example 7: North Holt Farm (1/13)
A year ago a major potato producer suffered
serious losses when a virus affected the crop at
the company’s North Holt farm.
Since then, steps have been taken to eradicate
the virus from the soil and the specialist who
directed these operations estimates, on the basis
of preliminary evidence, that there is a 70%
chance that the eradication program has been
successful.
Example 7: North Holt Farm (2/13)
The manager of the farm now has to decide
on his policy for the coming season and he
has identified two options:
(1) He could go ahead and plant a full crop of
potatoes. If the virus is still present an estimated
net loss of $20 000 will be incurred. However, if
the virus is absent, an estimated net return of $90 000 will be earned.
(2) He could avoid planting potatoes at all and
turn the entire acreage over to the alternative
crop. This would almost certainly lead to net
returns of $30 000.
Example 7: Decision matrix without perfect information (3/13)

Situation of virus   Probability   Plant potatoes   Plant alternative
Virus present        0.3           −$20 000         $30 000
Virus absent         0.7            $90 000         $30 000
Compute EVwPI
The best alternative with a favorable market is to build a large plant, with a payoff of $200,000. In an unfavorable market the choice is to do nothing, with a payoff of $0.
EVwPI = ($200,000)(0.5) + ($0)(0.5) = $100,000
Compute EVPI = EVwPI − max EMV = $100,000 − $40,000 = $60,000
The most we should pay for any information is $60,000.
Thompson’s Decision Tree (3/10)
Figure 3.3: Thompson’s decision tree. Payoffs and EMVs with p(favorable market) = p(unfavorable market) = 0.5:
Construct large plant (node 1): EMV = (0.5)($200,000) + (0.5)(−$180,000) = $10,000
Construct small plant (node 2): EMV = (0.5)($100,000) + (0.5)(−$20,000) = $40,000
Do nothing: $0
The alternative with the best EMV is selected: construct the small plant (EMV = $40,000).
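Folding back a decision tree like Figure 3.3 is mechanical: chance nodes take probability-weighted averages and the decision node takes the best branch. A minimal sketch (structure and names are illustrative):

```python
alternatives = {  # each alternative is a chance node: (probability, payoff) pairs
    "Construct large plant": [(0.5, 200_000), (0.5, -180_000)],
    "Construct small plant": [(0.5, 100_000), (0.5, -20_000)],
    "Do nothing":            [(1.0, 0)],
}

def chance_node(branches):
    """EMV of a chance node: probability-weighted payoff."""
    return sum(p * payoff for p, payoff in branches)

emvs = {name: chance_node(b) for name, b in alternatives.items()}
best = max(emvs, key=emvs.get)        # decision node: pick the branch with the best EMV
print(emvs)                           # large: 10000, small: 40000, nothing: 0
print("Best alternative:", best)      # Construct small plant
```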
Thompson’s Complex Decision Tree: Using Sample Information (4/10)
Thompson Lumber has two decisions to make,
with the second decision dependent upon the
outcome of the first:
First, whether or not to conduct their own marketing
survey, at a cost of $10,000, to help them decide
which alternative to pursue (large, small or no plant)
The survey does not provide perfect information
Then, to decide which type of plant to build.
Note that the $10,000 cost was subtracted from each of the first 10 branches: the $190,000 payoff was originally $200,000, and the −$10,000 was originally $0.
Thompson’s Complex Decision Tree (5/10)

Conduct market survey (cost $10,000):
Favorable survey results (0.45), EMV = $106,400:
  Large plant: (0.78)($190,000) + (0.22)(−$190,000) = $106,400 (best)
  Small plant: (0.78)($90,000) + (0.22)(−$30,000) = $63,600
  No plant: −$10,000
Negative survey results (0.55), EMV = $2,400:
  Large plant: (0.27)($190,000) + (0.73)(−$190,000) = −$87,400
  Small plant: (0.27)($90,000) + (0.73)(−$30,000) = $2,400 (best)
  No plant: −$10,000
EMV of conducting the survey = (0.45)($106,400) + (0.55)($2,400) = $49,200

Do not conduct survey:
  Large plant: (0.50)($200,000) + (0.50)(−$180,000) = $10,000
  Small plant: (0.50)($100,000) + (0.50)(−$20,000) = $40,000 (best)
  No plant: $0

Best strategy: conduct the survey; overall EMV = $49,200.
Expected Value of Sample Information (10/10)
Thompson wants to know the actual value of doing the survey.
EVSI = (expected value with sample information, assuming no cost to gather it) − (expected value of best decision without sample information)
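A worked instance using the figures from the tree above: the $49,200 EMV of the survey branch already has the $10,000 survey cost deducted, so the cost is added back before comparing with the $40,000 no-survey EMV:

$$\mathrm{EVSI} = (\$49{,}200 + \$10{,}000) - \$40{,}000 = \$19{,}200$$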