
Chapter 3

Limiting Probabilities
In this chapter, we would like to discuss the long-term behaviour of Markov chains. In particular, we would like to know the fraction of time that the Markov chain spends in each state as $n$ becomes large. More specifically, we would like to study the distributions
$$\pi^{(n)} = \begin{pmatrix} P(X_n = 0) & P(X_n = 1) & \cdots \end{pmatrix}$$
as $n \to \infty$. To better understand the subject, we will first look at an example and then provide a general analysis.
 
Example. Let
$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}.$$
Calculate $P^2$, $P^4$, $P^8$ and $P^{16}$.
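A minimal numerical sketch of this computation (repeated squaring; the printout format is my own):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Square repeatedly to obtain P^2, P^4, P^8, P^16.
M = P
for k in [2, 4, 8, 16]:
    M = M @ M
    print(f"P^{k} =\n{M}")
```

The rows rapidly become identical, approaching $(4/7,\, 3/7) \approx (0.571,\, 0.429)$.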

In fact, it seems that $P_{ij}^n$ is converging to some value (as $n \to \infty$) that is the same for all $i$. In other words, there seems to exist a limiting probability that the process will be in state $j$ after a large number of transitions, and this value is independent of the initial state $i$.


Theorem. For an irreducible ergodic Markov chain, $\lim_{n \to \infty} P_{ij}^n$ exists and is independent of $i$. Furthermore, letting
$$\pi_j = \lim_{n \to \infty} P_{ij}^n, \quad j \ge 0,$$
then $\pi_j$ is the unique nonnegative solution of
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0, \qquad \sum_{j=0}^{\infty} \pi_j = 1. \tag{3.0.1}$$

Example 14. Consider the earlier Example in which we assume that if it rains today, then it will rain tomorrow with probability $\alpha$; and if it does not rain today, then it will rain tomorrow with probability $\beta$:
$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}.$$
If we say that the state is 0 when it rains and 1 when it does not rain, then using Equation (3.0.1), find the limiting probabilities $\pi_0$ and $\pi_1$.
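A worked sketch using (3.0.1) (this fills in the solution, which the notes leave as an exercise): the balance equation for state 0 and the normalization condition read
$$\pi_0 = \alpha \pi_0 + \beta \pi_1, \qquad \pi_0 + \pi_1 = 1.$$
Substituting $\pi_1 = 1 - \pi_0$ gives $\pi_0 (1 - \alpha + \beta) = \beta$, hence
$$\pi_0 = \frac{\beta}{1 - \alpha + \beta}, \qquad \pi_1 = \frac{1 - \alpha}{1 - \alpha + \beta}.$$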
Remark. Given that $\pi_j = \lim_{n \to \infty} P_{ij}^n$ exists and is independent of the initial state $i$,
$$P\{X_{n+1} = j\} = \sum_{i=0}^{\infty} P\{X_{n+1} = j \mid X_n = i\}\, P\{X_n = i\} = \sum_{i=0}^{\infty} P_{ij}\, P\{X_n = i\}.$$
Letting n → ∞, and assuming that we can bring the limit inside the summation, leads to



$$\pi_j = \sum_{i=0}^{\infty} P_{ij} \lim_{n \to \infty} P\{X_n = i\} = \sum_{i=0}^{\infty} P_{ij}\, \pi_i.$$

It can be shown that $\pi_j$, the limiting probability that the process will be in state $j$ at time $n$, also equals the long-run proportion of time that the process will be in state $j$.

If the Markov chain is irreducible, then there will be a solution to
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0, \qquad \sum_{j=0}^{\infty} \pi_j = 1$$
if and only if the Markov chain is positive recurrent.

If a solution exists then it will be unique, and $\pi_j$ will equal the long-run proportion of time that the Markov chain is in state $j$. If the chain is aperiodic, then $\pi_j$ is also the limiting probability that the chain is in state $j$.

Example 15. Consider the earlier Example in which the mood of an individual is modelled as a three-state Markov chain having transition probability matrix
$$P = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}.$$
In the long run, what proportion of time is the process in each of the three states?
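A minimal numerical sketch for solving the system (3.0.1) (the helper name `stationary` is my own; it replaces the redundant balance equations with the normalization constraint and solves by least squares):

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the row 1^T pi = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
print(stationary(P))  # [21/62, 23/62, 18/62] ≈ [0.339, 0.371, 0.290]
```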

Example 16. An organization has $N$ employees, where $N$ is a large number. Each employee has one of three possible job classifications and changes classifications (independently) according to a Markov chain with transition probabilities
$$P = \begin{pmatrix} 0.7 & 0.2 & 0.1 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}.$$
What percentage of employees are in each classification?
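Reusing the `stationary` helper (and import) sketched above, a quick numerical check of my own:

```python
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
print(stationary(P))  # [6/17, 7/17, 4/17] ≈ [0.353, 0.412, 0.235]
```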

Example 17. (A Model of Class Mobility) A problem of interest to sociologists is to determine the proportion of society
that has an upper or lower-class occupation. One possible mathematical model would be to assume that transitions
between social classes of the successive generations in a family can be regarded as transitions of a Markov chain. That
is, we assume that the occupation of a child depends only on his or her parent's occupation. Let us suppose that such a
model is appropriate and that the transition probability matrix is given by
$$P = \begin{pmatrix} 0.45 & 0.48 & 0.07 \\ 0.05 & 0.70 & 0.25 \\ 0.01 & 0.50 & 0.49 \end{pmatrix}.$$

That is, for instance, we suppose that the child of a middle-class worker will attain an upper, middle, or lower-class occupation with respective probabilities 0.05, 0.70, 0.25. Find the limiting probabilities $\pi_0$, $\pi_1$ and $\pi_2$.
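Again reusing the `stationary` helper from above (my own numerical check; values rounded):

```python
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])
print(stationary(P))  # ≈ [0.062, 0.623, 0.314]
```

So, in the long run, roughly 6% of society holds an upper-class occupation, 62% a middle-class one, and 31% a lower-class one.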

3.1 Stationary Probabilities


The long-run proportions $\pi_j$, $j \ge 0$, are often called stationary probabilities. The reason is that if the initial state is chosen according to the probabilities $\pi_j$, $j \ge 0$, then the probability of being in state $j$ at any time $n$ is also equal to $\pi_j$. That is, if
$$P\{X_0 = j\} = \pi_j, \quad j \ge 0,$$
then
$$P\{X_n = j\} = \pi_j \quad \text{for all } n, \; j \ge 0.$$


Proof. By induction on $n$. The claim holds for $n = 0$ by the choice of initial distribution. Supposing it is true for $n - 1$,
$$P\{X_n = j\} = \sum_i P\{X_n = j \mid X_{n-1} = i\}\, P\{X_{n-1} = i\} = \sum_i P_{ij}\, \pi_i \;\; \text{(by the induction hypothesis)} = \pi_j \;\; \text{(by Equation (3.0.1))}.$$
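As a quick numerical sanity check of this stationarity property (a minimal sketch; $P$ and its stationary distribution $\pi = (4/7,\, 3/7)$ are taken from the two-state example at the start of the chapter):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4/7, 3/7])   # stationary distribution of P

# If X0 is drawn from pi, then the distribution of Xn is pi @ P^n.
for n in [1, 5, 25]:
    print(n, pi @ np.linalg.matrix_power(P, n))  # stays [0.5714..., 0.4285...] for every n
```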

Example 18. Coin 1 comes up heads with probability 0.6 and coin 2 with probability 0.5. A coin is continually flipped until it comes up tails, at which time that coin is put aside and we start flipping the other one.
(a) What proportion of flips use coin 1?
(b) If we start the process with coin 1, what is the probability that coin 2 is used on the fifth flip?
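One standard way to set this up (the modelling step is my own, since the notes leave it to the reader) is a two-state chain whose state records which coin is currently being flipped: the chain stays put when the current coin lands heads and switches when it lands tails. A minimal numerical sketch:

```python
import numpy as np

# State 0: coin 1 is being flipped; state 1: coin 2 is being flipped.
P = np.array([[0.6, 0.4],    # coin 1: heads 0.6 (keep it), tails 0.4 (switch)
              [0.5, 0.5]])   # coin 2: heads 0.5 (keep it), tails 0.5 (switch)

# (a) Long-run proportion of flips using coin 1: solve pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
print(pi[0])  # 5/9 ≈ 0.5556

# (b) Starting with coin 1, the fifth flip comes after 4 transitions,
# so we want the (coin 1 -> coin 2) entry of P^4.
print(np.linalg.matrix_power(P, 4)[0, 1])  # ≈ 0.4444
```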

3.2 Mean Time Spent in Transient States


Consider now a finite-state Markov chain and suppose that the states are numbered so that $T = \{1, 2, \ldots, t\}$ denotes the set of transient states. Let
$$P_T = \begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1t} \\ \vdots & & \vdots \\ P_{t1} & P_{t2} & \cdots & P_{tt} \end{pmatrix}.$$
Note that since $P_T$ specifies only the transition probabilities from transient states into transient states, some of its row sums are less than 1. For transient states $i$ and $j$, let $s_{ij}$ denote the expected number of time periods that the Markov chain is in state $j$, given that it starts in state $i$. Let
$$\delta_{i,j} = \begin{cases} 1 & \text{when } i = j \\ 0 & \text{otherwise.} \end{cases}$$
Conditioning on the initial transition, we obtain
$$s_{ij} = \delta_{i,j} + \sum_{k=1}^{t} P_{ik} s_{kj}. \tag{3.2.1}$$

The sum in (3.2.1) runs only over transient states because it is impossible to go from a recurrent state to a transient one, implying that $s_{kj} = 0$ whenever $k$ is a recurrent state. Let $S$ denote the matrix of values $s_{ij}$, $i, j = 1, \ldots, t$. That is,
$$S = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1t} \\ \vdots & & \vdots \\ s_{t1} & s_{t2} & \cdots & s_{tt} \end{pmatrix}.$$

In matrix notation, Equation (3.2.1) can be written as
$$S = I + P_T S,$$
where $I$ is the identity matrix of size $t$. Because the preceding equation is equivalent to $(I - P_T) S = I$, we obtain, upon multiplying both sides by $(I - P_T)^{-1}$,
$$S = (I - P_T)^{-1}.$$
That is, the quantities $s_{ij}$, $i \in T$, $j \in T$, can be obtained by inverting the matrix $I - P_T$.

Example 19. Consider the gambler's ruin problem with $p = 0.4$ and $N = 7$. Starting with 3 units, determine
(a) the expected amount of time the gambler has 5 units,

(b) the expected amount of time the gambler has 2 units.
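A minimal numerical sketch (assuming the usual gambler's ruin setup: states $0, \ldots, 7$, win one unit with probability $p = 0.4$, absorbing barriers at 0 and 7, transient states $T = \{1, \ldots, 6\}$):

```python
import numpy as np

p, N = 0.4, 7
t = N - 1  # transient states 1, ..., 6

# P_T: transitions among the transient states only.
PT = np.zeros((t, t))
for i in range(t):
    if i + 1 < t:
        PT[i, i + 1] = p        # win a unit
    if i - 1 >= 0:
        PT[i, i - 1] = 1 - p    # lose a unit

S = np.linalg.inv(np.eye(t) - PT)  # S = (I - P_T)^{-1}

# Row/column k of S corresponds to fortune k + 1.
print(S[2, 4])  # (a) expected time with 5 units, starting from 3: ≈ 0.9228
print(S[2, 1])  # (b) expected time with 2 units, starting from 3: ≈ 2.3677
```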

3.3 Probability that the Markov chain ever makes a transition into state j
given that it starts in state i
For $i \in T$, $j \in T$, let $f_{ij}$ denote the probability that the Markov chain ever makes a transition into state $j$ given that it starts in state $i$. Conditioning on whether state $j$ is ever entered gives $s_{ij} = \delta_{i,j} + f_{ij}\, s_{jj}$, where $s_{jj}$ is the expected number of additional time periods spent in state $j$ given that it is eventually entered. Solving for $f_{ij}$,
$$f_{ij} = \frac{s_{ij} - \delta_{i,j}}{s_{jj}}.$$

Example 20. In Example 19, what is the probability that the gambler ever has a fortune of 1?
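A minimal sketch of my own, rebuilding the matrix $S$ from Example 19 and applying the formula above (fortune $k$ sits at index $k - 1$):

```python
import numpy as np

# S for the gambler's ruin chain of Example 19 (p = 0.4, N = 7).
p, t = 0.4, 6
PT = np.diag([p] * (t - 1), 1) + np.diag([1 - p] * (t - 1), -1)
S = np.linalg.inv(np.eye(t) - PT)

# f_{3,1} = (s_{3,1} - delta_{3,1}) / s_{1,1}, with delta_{3,1} = 0.
print(S[2, 0] / S[0, 0])  # ≈ 0.8797
```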
Example 21. A college offers a 4-year degree program. Each student repeats a year, progresses to the next year, or drops out / graduates, with different probabilities depending on which year he or she is currently in. The probabilities are described by the transition matrix $P$:

          Y1    Y2    Y3    Y4    D/G
    Y1    0.2   0.7   0     0     0.1
    Y2    0     0.15  0.8   0     0.05
P = Y3    0     0     0.1   0.85  0.05
    Y4    0     0     0     0.05  0.95
    D/G   0     0     0     0     1

(D/G = Drop Out / Graduation state)

1. Find the transient class of the Markov chain.

2. If annual tuition is $10,000 for years 1 & 2, and $12,000 for years 3 & 4, how much is the average student expected to pay from start to end, i.e. from Y1 until drop-out or graduation?

3. What proportion of freshmen make it to Y4? (A numerical sketch of all three parts follows below.)
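A minimal numerical sketch of parts 1-3 (the transient class is {Y1, Y2, Y3, Y4}, since D/G is absorbing; the indexing and variable names are my own):

```python
import numpy as np

# P_T: transitions among the transient states Y1..Y4 only.
PT = np.array([[0.2, 0.7,  0.0,  0.0 ],
               [0.0, 0.15, 0.8,  0.0 ],
               [0.0, 0.0,  0.1,  0.85],
               [0.0, 0.0,  0.0,  0.05]])

S = np.linalg.inv(np.eye(4) - PT)   # s_{ij}: expected years spent in Yj from Yi

# 2. Expected total tuition starting from Y1: weight years by annual tuition.
tuition = np.array([10_000, 10_000, 12_000, 12_000])
print(S[0] @ tuition)               # ≈ $43,599

# 3. Proportion of freshmen who ever reach Y4: f_{Y1,Y4} = s_{14} / s_{44}.
print(S[0, 3] / S[3, 3])            # = 7/9 ≈ 0.778
```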
