3 Limiting Probabilities
In this chapter, we discuss the long-term behaviour of Markov chains. In particular, we would like to know the fraction of time that the Markov chain spends in each state as n becomes large. More specifically, we would like to determine the limiting behaviour of

π(n) = ( P(Xn = 0)  P(Xn = 1)  · · · )

as n → ∞. To better understand the subject, we will first look at an example and then provide a general analysis.
Example. Let

P =  0.7  0.3
     0.4  0.6

Calculate P^2, P^4, P^8 and P^16.
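As a sketch of the computation (assuming NumPy is available), repeated squaring produces P^2, P^4, P^8 and P^16, and the rows can be seen converging to a common limiting row:

```python
import numpy as np

# Transition matrix of the example (state space {0, 1})
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

Pk = P.copy()
for k in (2, 4, 8, 16):
    Pk = Pk @ Pk              # squaring the previous power gives P^2, P^4, P^8, P^16
    print(f"P^{k} =\n{Pk}")
```

By P^16 the two rows agree to many decimal places; this convergence of the rows to a common vector is the phenomenon the rest of the chapter explains.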
Example 14. Consider the earlier example in which we assume that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β:

P =  α      1 − α
     β      1 − β

If we say that the state is 0 when it rains and 1 when it does not rain, then, by Equation (3.0.1), find the limiting probabilities π0 and π1.
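Solving the two balance equations by hand gives π0 = β/(1 + β − α). The following sketch (NumPy assumed; the values α = 0.7, β = 0.4 are illustrative, not from the text) checks this closed form against a direct linear solve:

```python
import numpy as np

alpha, beta = 0.7, 0.4           # illustrative values, not from the text
P = np.array([[alpha, 1 - alpha],
              [beta,  1 - beta]])

# Solve pi = pi P together with pi0 + pi1 = 1:
# keep one balance equation and append the normalization row.
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))

# Closed form from the balance equations
pi0 = beta / (1 + beta - alpha)
print(pi, pi0)                   # pi[0] should match pi0
```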
Remark. Suppose that πj = lim_{n→∞} Pij^n exists and is independent of the initial state i. Conditioning on the state at time n gives

P{X_{n+1} = j} = Σ_{i=0}^{∞} P{X_{n+1} = j | Xn = i} P{Xn = i} = Σ_{i=0}^{∞} Pij P{Xn = i}
Letting n → ∞, and assuming that we can bring the limit inside the summation, leads to
Dr. L.S. Nawarathna, Department of Statistics & Computer Science, UoP
πj = Σ_{i=0}^{∞} Pij lim_{n→∞} P{Xn = i} = Σ_{i=0}^{∞} Pij πi
It can be shown that πj, the limiting probability that the process will be in state j at time n, also equals the long-run proportion of time that the process will be in state j. The limiting probabilities are therefore found by solving the balance equations πj = Σi πi Pij together with the normalizing condition Σj πj = 1. If a solution exists then it is unique, and πj equals the long-run proportion of time that the Markov chain is in state j. If the chain is aperiodic, then πj is also the limiting probability that the chain is in state j.
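As a sketch of how these equations can be solved numerically (NumPy assumed; the function name is ours), replace one of the redundant balance equations with the normalizing condition and solve the resulting linear system:

```python
import numpy as np

def stationary(P):
    """Limiting probabilities of a finite, irreducible, aperiodic chain (assumed)."""
    n = P.shape[0]
    # pi (I - P) = 0 has rank n-1; drop one row and append sum(pi) = 1.
    A = np.vstack([(np.eye(n) - P.T)[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# e.g. the two-state matrix from the first example
print(stationary(np.array([[0.7, 0.3], [0.4, 0.6]])))   # approx [0.571, 0.429]
```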
Example 15. Consider the earlier example in which the mood of an individual is modelled as a three-state Markov chain with the transition probability matrix given there. Find the limiting probabilities.
Example 16. An organization has N employees, where N is a large number. Each employee has one of three possible job classifications and changes classifications (independently) according to a Markov chain with transition probabilities

P =  0.7  0.2  0.1
     0.2  0.6  0.2
     0.1  0.4  0.5

What proportion of employees are in each classification in the long run?
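One way to sketch the computation for this chain (NumPy assumed) is to take the left eigenvector of P for eigenvalue 1 and normalize it to sum to one; for this matrix the answer works out exactly to (6/17, 7/17, 4/17):

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Left eigenvectors of P are eigenvectors of P transpose.
vals, vecs = np.linalg.eig(P.T)
v = vecs[:, np.argmin(np.abs(vals - 1))].real   # eigenvector for eigenvalue 1
pi = v / v.sum()                                # normalize to a probability vector
print(pi)                                       # exact answer: (6/17, 7/17, 4/17)
```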
Example 17. (A Model of Class Mobility) A problem of interest to sociologists is to determine the proportion of society that has an upper- or lower-class occupation. One possible mathematical model is to assume that transitions between the social classes of successive generations in a family can be regarded as transitions of a Markov chain. That is, we assume that the occupation of a child depends only on his or her parent's occupation. Let us suppose that such a model is appropriate and that the transition probability matrix is given by
0.45 0.48 0.07
P = 0.05 0.70 0.25
0.01 0.50 0.49
That is, for instance, we suppose that the child of a middle-class worker will attain an upper-, middle-, or lower-class occupation with respective probabilities 0.05, 0.70, 0.25. Find the limiting probabilities π0 , π1 and π2 .
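A quick numerical sketch (NumPy assumed): raising P to a high power makes every row approach the limiting vector (π0, π1, π2), so any row of a large power can be read off as the answer:

```python
import numpy as np

P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

Pn = np.linalg.matrix_power(P, 64)   # a high power; the chain has long since mixed
pi = Pn[0]                           # any row works once the rows have converged
print(pi.round(3))                   # approx (0.062, 0.623, 0.314)
```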
Remark. If the initial state is chosen according to the limiting probabilities, that is, if P{X0 = j} = πj , j ≥ 0, then P{Xn = j} = πj for all n, and the process is said to be stationary.
Example 18. Coin 1 comes up heads with probability 0.6 and coin 2 with probability 0.5. A coin is continually flipped until it comes up tails, at which time that coin is put aside and we start flipping the other one.
(a) What proportion of flips use coin 1?
(b) If we start the process with coin 1, what is the probability that coin 2 is used on the fifth flip?
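A sketch for this example (NumPy assumed): take the state to be the coin used on the current flip; a coin is kept after heads and swapped after tails, and both parts reduce to small matrix computations:

```python
import numpy as np

P = np.array([[0.6, 0.4],    # coin 1: heads 0.6 -> keep it, tails 0.4 -> switch
              [0.5, 0.5]])   # coin 2: heads 0.5 -> keep it, tails 0.5 -> switch

# (a) For a two-state chain, pi_1 = P[1,0] / (P[0,1] + P[1,0]) = 5/9.
pi1 = P[1, 0] / (P[0, 1] + P[1, 0])

# (b) Starting with coin 1, the coin used on flip 5 is governed by P^4.
p_b = np.linalg.matrix_power(P, 4)[0, 1]
print(pi1, p_b)
```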
3.2 Mean time spent in transient states
Consider a finite-state Markov chain and number the states so that T = {1, 2, . . . , t} is the set of transient states. For i, j ∈ T , let sij denote the expected number of time periods that the chain spends in state j, given that it starts in state i. Since it is impossible to go from a recurrent state to a transient state, skj = 0 when k is a recurrent state. Let S denote the matrix of values sij , i, j = 1, . . . , t. That is,
S =  s11  s12  . . .  s1t
     s21  s22  . . .  s2t
      .    .           .
      .    .           .
     st1  st2  . . .  stt
Conditioning on the initial transition yields

S = I + P_T S

where I is the t × t identity matrix and P_T is the matrix of transition probabilities restricted to the transient states. Hence (I − P_T) S = I, so

S = (I − P_T)^{−1}

That is, the quantities sij , i ∈ T , j ∈ T , can be obtained by inverting the matrix I − P_T.
Example 19. Consider the gambler's ruin problem with p = 0.4 and N = 7. Starting with 3 units, determine
(a) the expected amount of time the gambler has 5 units.
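A sketch of part (a) (NumPy assumed): the transient states are the fortunes 1 through 6, P_T holds the win/lose probabilities among them, and s_{3,5} is the entry of S = (I − P_T)^{−1} for starting fortune 3 and visited fortune 5:

```python
import numpy as np

p, N = 0.4, 7
t = N - 1                          # transient states are fortunes 1..6
PT = np.zeros((t, t))
for i in range(t):                 # row i corresponds to fortune i + 1
    if i + 1 < t:
        PT[i, i + 1] = p           # win one unit
    if i > 0:
        PT[i, i - 1] = 1 - p       # lose one unit
S = np.linalg.inv(np.eye(t) - PT)

# s_{3,5}: expected number of periods with 5 units, starting from 3
print(S[2, 4])                     # fortune f sits at index f - 1
```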
3.3 Probability that the Markov chain ever makes a transition into state j
given that it starts in state i
For i ∈ T , j ∈ T , the quantity fij , equal to the probability that the Markov chain ever makes a transition into state j given that it starts in state i, can be obtained from

fij = (sij − δij) / sjj

where δij equals 1 when i = j and 0 otherwise. This follows from the relation

sij = δij + fij sjj

since fij sjj is the expected number of additional time periods spent in state j given that state j is eventually entered from state i.
Example 20. In Example 19, what is the probability that the gambler ever has a fortune of 1?
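A sketch for this example (NumPy assumed): with the same S matrix as in Example 19, the formula fij = (sij − δij)/sjj gives the probability of ever having a fortune of 1 starting from 3; numerically it comes out near 0.88:

```python
import numpy as np

p, N = 0.4, 7
t = N - 1                          # transient states are fortunes 1..6
PT = np.zeros((t, t))
for i in range(t):                 # row i corresponds to fortune i + 1
    if i + 1 < t:
        PT[i, i + 1] = p           # win one unit
    if i > 0:
        PT[i, i - 1] = 1 - p       # lose one unit
S = np.linalg.inv(np.eye(t) - PT)

# f_{3,1} = s_{3,1} / s_{1,1}  (delta term is 0 since 3 != 1)
f_31 = S[2, 0] / S[0, 0]
print(f_31)                        # roughly 0.88
```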
Example 21. A college offers a 4-year degree program. Each year, a student repeats the year, progresses to the next year, or drops out / graduates, with different probabilities depending on the year he or she is currently in. The probabilities are described by the transition matrix P:

          Y1     Y2     Y3     Y4     D/G
    Y1    0.2    0.7    0      0      0.1
    Y2    0      0.15   0.8    0      0.05
P = Y3    0      0      0.1    0.85   0.05
    Y4    0      0      0      0.05   0.95
    D/G   0      0      0      0      1

where D/G denotes the drop-out/graduation state.
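The transient-state machinery of this chapter applies directly here: treating Y1–Y4 as transient and D/G as absorbing, S = (I − P_T)^{−1} gives the expected number of years a new student spends in each year of the program. A sketch, assuming NumPy:

```python
import numpy as np

# Transition probabilities among the transient states Y1..Y4 only
PT = np.array([[0.20, 0.70, 0.00, 0.00],   # Y1
               [0.00, 0.15, 0.80, 0.00],   # Y2
               [0.00, 0.00, 0.10, 0.85],   # Y3
               [0.00, 0.00, 0.00, 0.05]])  # Y4
S = np.linalg.inv(np.eye(4) - PT)

# Row Y1: expected years a new student spends in Y1..Y4;
# the row sum is the expected total time enrolled.
print(S[0].round(2), "total:", S[0].sum().round(2))
```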