Applied Probability Problem Sheets PS2
Suppose there is a thief walking through the vaults and you can model the location of the thief,
i.e. the corresponding vault number, at each point in time by a homogeneous Markov chain denoted by
(Xn)n∈{0,1,2,...}.
If the thief is in vault 1 (together with the policeman), he will run out of vault 1 to vault 2 with
probability one. If the thief is in vault 4 (with the treasure), he will stay there forever. If the thief
is in vault 2 or 3, he will either go left with probability 1/2 or he will go right with probability 1/2.
Moreover, assume that the thief is not very clever, so he might return to vault 1 (with the policeman)
several times. Also, suppose the policeman never manages to catch the thief and jail him (even if they
are in the same vault).
(a) State the transition probabilities for this Markov chain.
(b) Suppose the thief starts his journey in vault 1. What is the expected number of moves required
until the thief reaches the treasure chest? Justify your answer carefully.
Solution:
(a) The transition probabilities are p12 = 1, p21 = p23 = 1/2, p32 = p34 = 1/2, p44 = 1, and pij = 0 otherwise.
(b) Let T = min{n ≥ 0 : Xn = 4}. Set νi = E(T|X0 = i) for i ∈ E = {1, 2, 3, 4}. We need to find ν1.
We proceed recursively: Clearly, ν4 = 0. Similarly to the Gambler’s ruin problem, we
condition on the outcome of the first move. Also, we apply the law of the total conditional
expectation, the Markov property and time-homogeneity:
ν3 = E(T|X0 = 3) = Σ_{x1=1}^{4} E(T|X0 = 3, X1 = x1) P(X1 = x1|X0 = 3).
MATH60045/MATH70045 Applied Probability Problem sheets: Autumn 2022
Similarly,
ν1 = E(T|X0 = 1) = Σ_{x1=1}^{4} E(T|X0 = 1, X1 = x1) P(X1 = x1|X0 = 1).
We have ν1 = 1 + ν2, ν3 = 1 + (1/2)ν2 and
ν2 = 1 + (1/2)(ν1 + ν3) = 1 + (1/2)(1 + ν2) + (1/2)(1 + (1/2)ν2) = 2 + (3/4)ν2 ⇔ ν2 = 8.
Hence, ν1 = 9 (and also ν3 = 5). I.e. the expected number of moves required until the thief
reaches the treasure chest is 9.
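The first-step equations above can be cross-checked numerically. The following sketch (assuming NumPy is available; the variable names are ours, not the problem sheet's) solves ν = 1 + Qν on the transient states {1, 2, 3}, where Q is the restriction of the transition matrix to those states:

```python
import numpy as np

# Restriction of the transition matrix to the transient states {1, 2, 3};
# state 4 is absorbing, so nu_4 = 0 and its column can be dropped.
Q = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# First-step analysis: nu = 1 + Q nu, i.e. (I - Q) nu = 1.
nu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(nu)  # [9. 8. 5.], i.e. nu_1 = 9, nu_2 = 8, nu_3 = 5
```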
Let us check some of the details required in the above computations: e.g. in the calculations above, we claimed that
E(T|X0 = 1, X1 = 2) = 1 + ν2,
and similar results appeared subsequently. To see that the stated results hold, we can spell out the details of the computation as follows:
E(T|X1 = 2) = Σ_{t=0}^{∞} t P(T = t|X1 = 2) = Σ_{t=1}^{∞} t P(T = t|X1 = 2)   (next replace t by t + 1)
= Σ_{t+1=1}^{∞} (t + 1) P(T = t + 1|X1 = 2) = Σ_{t=0}^{∞} (t + 1) P(T = t + 1|X1 = 2)
= Σ_{t=0}^{∞} t P(T = t + 1|X1 = 2) + Σ_{t=0}^{∞} P(T = t + 1|X1 = 2),
where we used the time-homogeneity of the Markov chain in the second equality.
For the second term, we have
Σ_{t=0}^{∞} P(T = t + 1|X1 = 2) = Σ_{t=1}^{∞} P(T = t|X1 = 2) = Σ_{t=0}^{∞} P(T = t|X1 = 2) = 1,
where we replaced t by t − 1 in the first equality and used in the second equality the fact that P(T = 0|X1 = 2) = 0, which holds since p42 = 0.
Solution:
(a) Let P be the doubly stochastic transition matrix.
• Then
Σ_{i∈E} pij(n) = Σ_{i∈E} Σ_{k∈E} pik(n − 1) pkj = Σ_{k∈E} ( Σ_{i∈E} pik(n − 1) ) pkj,
where we used the CK equations. Now we can prove by induction that P^n is also doubly stochastic for all n ∈ N.
Suppose j ∈ E is not positive recurrent. Then pij(n) → 0 as n → ∞ for all i ∈ E. Then 1 = Σ_{i∈E} pij(n) → 0 (we can interchange limit and sum since we have a finite chain). This is a contradiction! Hence all states are positive recurrent.
• In addition assume the chain is irreducible and aperiodic; then pij(n) → πj, where π is the unique stationary distribution. Since P is doubly stochastic we get for π := (1/K, . . . , 1/K) that πi ≥ 0, Σ_{i∈E} πi = 1 and
Σ_{i∈E} πi pij = (1/K) Σ_{i∈E} pij = 1/K = πj.
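The two bullet points can be illustrated numerically with a small made-up doubly stochastic matrix (the matrix below is our example, not from the sheet); a sketch assuming NumPy:

```python
import numpy as np

# A 3x3 doubly stochastic matrix: every row and every column sums to 1.
P = np.array([[0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2]])
K = P.shape[0]

pi = np.full(K, 1.0 / K)        # candidate stationary distribution (1/K, ..., 1/K)
print(np.allclose(pi @ P, pi))  # True: the uniform distribution is stationary

# P^n stays doubly stochastic (e.g. columns of P^2 still sum to 1):
print(np.allclose(np.linalg.matrix_power(P, 2).sum(axis=0), np.ones(K)))  # True
```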
(b) We only need to show that the chain cannot be positive recurrent.
Suppose the chain is positive recurrent. Then according to Theorem 3.9.8 there exists a positive root of the equation xP = x, which is unique up to a multiplicative constant. Since P is doubly stochastic, we can take x = 1 (the vector of 1’s). Since the root x is unique up to a multiplicative constant, any stationary distribution would have to be a multiple of 1; but 1 cannot be normalised to a probability distribution on an infinite state space. Hence there cannot exist a stationary distribution and therefore the chain is null recurrent or transient.
Exercise 2-16: The following question is adapted from Grimmett & Stirzaker (2001b,a), Problem 6.15.7:
Let {Xn }n∈N0 be a recurrent irreducible Markov chain on the state space E with transition matrix P,
and let x be a positive solution of the equation x = xP.
(a) Show that
qij(n) = (xj/xi) pji(n),   i, j ∈ E, n ∈ N,
defines the n-step transition probabilities of a recurrent irreducible Markov chain on E whose first-passage probabilities are given by
gij(n) = (xj/xi) lji(n),   i ≠ j, n ∈ N,   (2.1)
where lji(n) = P(Xn = i, Tj ≥ n|X0 = j) and Tj = min{m ∈ N : Xm = j}.
Solution:
(a) We observe that qij(n) ≥ 0 for all i, j ∈ E, n ∈ N0. Also,
Σ_{j∈E} qij(n) = Σ_{j∈E} (xj/xi) pji(n) = (1/xi) Σ_{j∈E} xj pji(n) = xi/xi = 1,
for all i ∈ E, n ∈ N0, since x = xP implies x = xP(n). Hence Q(1) is the transition matrix of a Markov chain {Yn}n∈N0,
say, and Q(n) = Q^n. The chain {Yn}n∈N0 is also recurrent since
Σ_{n=1}^{∞} qii(n) = Σ_{n=1}^{∞} (xi/xi) pii(n) = Σ_{n=1}^{∞} pii(n) = ∞,
for all i ∈ E. Also, {Yn}n∈N0 is irreducible since i → j for {Yn}n∈N0 whenever j → i for {Xn}n∈N0, and {Xn}n∈N0 is irreducible.
Next we compute the first-passage probabilities of {Yn}n∈N0, which we denote by gij(n) for i ≠ j. We conduct a proof by induction. The claim is true for n = 1, since we have
gij(1) = qij(1) = (xj/xi) pji(1) = (xj/xi) lji(1).
Now suppose the claim is true for n ∈ N. Then, by equation (3.9.5) in the lecture notes,
lji(n + 1) = Σ_{r∈E: r≠j} pri ljr(n).
Hence,
(xj/xi) lji(n + 1) = (xj/xi) Σ_{r∈E: r≠j} pri ljr(n) = Σ_{r∈E: r≠j} qir (xj/xr) ljr(n) = Σ_{r∈E: r≠j} qir grj(n) = gij(n + 1),
where we applied the law of total probability, the Markov property and time-homogeneity in the last step. More precisely, we used that
fij(n + 1) = Σ_{r: r≠j} pir frj(n),   for i ≠ j, n ∈ N.
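The construction of Q from P and x can be sanity-checked numerically. The sketch below (assuming NumPy; the 3-state P is an arbitrary irreducible example of ours, not from the exercise) builds qij = (xj/xi) pji from a positive solution of x = xP and checks that Q is stochastic and that Q^n has the claimed form:

```python
import numpy as np

# An arbitrary irreducible transition matrix (illustrative only).
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])

# A positive solution of x = xP: the left Perron eigenvector of P.
w, V = np.linalg.eig(P.T)
x = np.abs(np.real(V[:, np.argmax(np.real(w))]))  # entries of one sign; scale irrelevant

# Reversed chain: q_ij = (x_j / x_i) * p_ji.
Q = (x[None, :] / x[:, None]) * P.T
print(np.allclose(Q.sum(axis=1), 1.0))  # True: Q is a stochastic matrix

# n-step version: q_ij(n) = (x_j / x_i) * p_ji(n), here for n = 3.
lhs = np.linalg.matrix_power(Q, 3)
rhs = (x[None, :] / x[:, None]) * np.linalg.matrix_power(P, 3).T
print(np.allclose(lhs, rhs))  # True
```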
(b) We sum (2.1) over n and obtain for the left-hand side (LHS):
LHS = Σ_{n=1}^{∞} gij(n) = gij = 1,
by Exercise 1-13. And for the right-hand side (RHS), we obtain
RHS = Σ_{n=1}^{∞} (xj/xi) lji(n) = (xj/xi) Σ_{n=1}^{∞} lji(n) = (xj/xi) ρi(j).
Hence xi = xj ρi(j) for all i, j ∈ E with i ≠ j. Without loss of generality assume that 0 ∈ E; then xi = x0 ρi(0) for all i ∈ E. Hence x is unique up to a multiplicative constant.
Exercise 2-17: Let T be a nonnegative integer-valued random variable on a probability space (Ω, F, P) and let A ∈ F be an event with P(A) > 0. Show that
E(T|A) = Σ_{n=1}^{∞} P(T ≥ n|A).
Solution: We use the definition of the conditional expectation and the fact that m = Σ_{n=0}^{m−1} 1 to deduce that
E(T|A) = Σ_{m=0}^{∞} m P(T = m|A) = Σ_{m=0}^{∞} Σ_{n=0}^{m−1} P(T = m|A) = Σ_{n=0}^{∞} Σ_{m=n+1}^{∞} P(T = m|A)
= Σ_{n=0}^{∞} P(T ≥ n + 1|A) = Σ_{n=1}^{∞} P(T ≥ n|A),
where the interchange of the order of summation is justified since all terms are nonnegative.
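The tail-sum identity can also be checked on a concrete distribution. A sketch (our own example: T geometric with parameter 1/3 on a truncated support; the conditioning on A is suppressed since it plays no structural role), using exact rational arithmetic:

```python
from fractions import Fraction

# pmf of T ~ Geometric(1/3) on {1, ..., 199} (truncated; exact fractions).
p = Fraction(1, 3)
pmf = {m: p * (1 - p) ** (m - 1) for m in range(1, 200)}

# Direct expectation: sum over m of m * P(T = m).
mean_direct = sum(m * q for m, q in pmf.items())

# Tail-sum formula: sum over n >= 1 of P(T >= n).
mean_tails = sum(sum(q for m, q in pmf.items() if m >= n) for n in range(1, 200))

print(mean_direct == mean_tails)  # True: both sums agree exactly
```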
• Specify the communicating classes and determine whether they are transient or recurrent;
• Decide whether or not they have a unique stationary distribution;
• Find a stationary distribution for each of them and show that it is not unique where appropriate.
Solution:
(a) • We have three transient classes {0}, {1}, {2} and one closed recurrent class {3} (the state 3 is absorbing).
• Since we have only one closed communicating class C := {3} on a finite state space, there is a unique stationary distribution!
• Compute the unique stationary solution πC = πC PC. Here PC = 1 and πC = 1. Now we set π = (0, 0, 0, 1).
(b) • We have 2 closed recurrent classes C1 = {0, 1} and C2 = {2, 3}.
• Since we have 2 closed communicating classes C1 = {0, 1} and C2 = {2, 3} on a finite
state space, there is a stationary solution, but it is not unique!
• We get π = (a, a, b, b) for a, b ≥ 0 with 2(a + b) = 1.
(c) • There is one closed recurrent class C = {0, 1, 2} and one transient class {3}.
• Since there is only one closed communicating class C = {0, 1, 2} on a finite state space, there is a unique stationary solution.
• Solve πC = πC PC. Then we obtain πC = (1/4, 5/12, 1/3) and π = (1/4, 5/12, 1/3, 0).
Exercise 2-19: Consider a discrete-time homogeneous Markov chain (Xn)n∈N0 with state space E = {1, 2, 3, 4, 5, 6, 7, 8} and transition matrix given by

        ( 0     0     0.5   0     0     0     0     0.5 )
        ( 0     0     0.5   0.5   0     0     0     0   )
        ( 0     0     1     0     0     0     0     0   )
    P = ( 0     0.25  0     0.75  0     0     0     0   )
        ( 0     0     0.5   0     0     0.5   0     0   )
        ( 0     0     0     0     0.5   0     0.5   0   )
        ( 0     0     0     0     0     1     0     0   )
        ( 1     0     0     0     0     0     0     0   ).
Solution:
(a) The transition diagram is the directed graph on the states 1–8 with an arrow i → j labelled pij for every nonzero entry of P; for example, 1 → 3 and 1 → 8 each with probability 0.5, 4 → 4 with probability 0.75, and the loop 3 → 3 with probability 1. [Diagram omitted.]
(b) We have a finite state space with four communicating classes: The classes T1 = {1, 8}, T2 =
{2, 4}, T3 = {5, 6, 7} are not closed and hence transient. The class C1 = {3} is finite and
closed and hence positive recurrent.
(c) According to a theorem from lectures, this Markov chain has a unique stationary distribution π since it has exactly one closed communicating class in a finite state space. For the transient states we know from the lectures that πi = 0 for i = 1, 2, 4, 5, 6, 7, 8. Hence π = (0, 0, 1, 0, 0, 0, 0, 0) is the unique stationary distribution.
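This can be confirmed numerically. A sketch assuming NumPy, with P the matrix from the exercise (states 1, …, 8 stored as indices 0, …, 7):

```python
import numpy as np

# Transition matrix from Exercise 2-19.
P = np.array([
    [0,    0,    0.5, 0,    0,   0,   0,   0.5],
    [0,    0,    0.5, 0.5,  0,   0,   0,   0  ],
    [0,    0,    1,   0,    0,   0,   0,   0  ],
    [0,    0.25, 0,   0.75, 0,   0,   0,   0  ],
    [0,    0,    0.5, 0,    0,   0.5, 0,   0  ],
    [0,    0,    0,   0,    0.5, 0,   0.5, 0  ],
    [0,    0,    0,   0,    0,   1,   0,   0  ],
    [1,    0,    0,   0,    0,   0,   0,   0  ],
])

pi = np.zeros(8)
pi[2] = 1.0  # pi = (0, 0, 1, 0, 0, 0, 0, 0): all mass on the absorbing state 3
print(np.allclose(pi @ P, pi))  # True: pi is stationary

# The chain is eventually absorbed in state 3 from every starting state:
print(np.allclose(np.linalg.matrix_power(P, 500)[:, 2], 1.0))  # True
```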
(d) For each communicating class, we pick one state, e.g.
T3 : f66 (1) = 0, f66 (2) = 0.5 × 0.5 + 0.5 × 1 = 3/4, f66 (n) = 0, ∀n ≥ 3 ⇒ f66 = 3/4.
Also f77 (1) = 0, f77 (2) = 0.5, f77 (3) = 0, f77 (4) = 0.5³ = 1/8, . . ., i.e. f77 (n) = 0 for odd n and f77 (n) = 0.5^{n−1} for even n. Hence f77 = Σ_{k=1}^{∞} 0.5^{2k−1} = 2/3.
C1 : f33 (1) = 1, f33 (n) = 0, ∀n ≥ 2 ⇒ f33 = 1.
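The return probabilities f66 = 3/4 and f77 = 2/3 can be checked by simulation. A sketch restricted to the relevant states {3, 5, 6, 7} (the helper `returns` is our own, not from the lectures):

```python
import random

# One-step moves among the states reachable from T3 = {5, 6, 7};
# state 3 is absorbing and ends the excursion.
step = {5: [(3, 0.5), (6, 0.5)], 6: [(5, 0.5), (7, 0.5)], 7: [(6, 1.0)]}

def returns(start, trials=200_000, rng=random.Random(1)):
    """Estimate f_ii = P(ever return to `start` | X0 = `start`)."""
    hits = 0
    for _ in range(trials):
        states, probs = zip(*step[start])
        state = rng.choices(states, probs)[0]   # first step away from `start`
        while state not in (start, 3):          # walk until return or absorption
            states, probs = zip(*step[state])
            state = rng.choices(states, probs)[0]
        hits += state == start
    return hits / trials

print(returns(6), returns(7))  # close to 3/4 = 0.75 and 2/3 ≈ 0.667
```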
Exercise 2-20: Suppose we have a Markov chain with finite state space E, i.e. K = |E| < ∞, and
transition matrix P. Suppose for some i ∈ E that
where we used the CK equations. Note that we have used the finiteness of E to justify the inter-
change of summation and limit operations.
Exercise 2-21: Consider a discrete-time homogeneous Markov chain (Xn)n∈N0 with state space E = {1, 2} and transition matrix given by

    P = ( 1/2  1/2 )
        ( 1/4  3/4 ).
(b) Find
lim_{N→∞} (1/(N + 1)) E( Σ_{n=0}^{N} e^{Xn} ).
Solution:
(a) We can read off from the transition matrix that this Markov chain is irreducible and aperiodic
with a finite state space. Hence we conclude from the lectures that there exists a unique sta-
tionary distribution π = (π1 , π2 ) and that the limiting distribution is given by the stationary
distribution.
We derive the stationary distribution: π = πP, π1 + π2 = 1 ⇔ (1/2)π1 + (1/4)π2 = π1, π1 + π2 = 1 ⇔ (1/4)π2 = (1/2)π1, π1 + π2 = 1 ⇔ π2 = 2π1, π2 = 1 − π1 ⇔ π1 = 1/3, π2 = 2/3. Hence,
lim_{n→∞} P(Xn = 1) = 1/3 and lim_{n→∞} P(Xn = 2) = 2/3.
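The stationary distribution can be verified numerically (a sketch assuming NumPy):

```python
import numpy as np

P = np.array([[0.5,  0.5],
              [0.25, 0.75]])

# Left eigenvector of P for eigenvalue 1, normalised to sum to 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print(pi)  # [0.33333333 0.66666667], i.e. pi = (1/3, 2/3)

# The rows of P^n converge to pi (limiting distribution = stationary distribution):
print(np.linalg.matrix_power(P, 50))
```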
(b) Using the linearity of the expectation, we have
E( Σ_{n=0}^{N} e^{Xn} ) = Σ_{n=0}^{N} E(e^{Xn}) = Σ_{n=0}^{N} Σ_{k=1}^{2} e^k P(Xn = k) = Σ_{k=1}^{2} e^k Σ_{n=0}^{N} P(Xn = k).
Hence we have
lim_{N→∞} (1/(N + 1)) E( Σ_{n=0}^{N} e^{Xn} ) = Σ_{k=1}^{2} e^k lim_{N→∞} (1/(N + 1)) Σ_{n=0}^{N} P(Xn = k)
= Σ_{k=1}^{2} e^k lim_{n→∞} P(Xn = k)   (by the hint)
= e π1 + e² π2 = (1/3) e + (2/3) e².
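The Cesàro limit can be checked numerically; the sketch below (assuming NumPy, and starting the chain in state 1, although the limit does not depend on the initial distribution) compares a truncated average against e/3 + 2e²/3:

```python
import numpy as np

P = np.array([[0.5, 0.5], [0.25, 0.75]])
e = np.exp(1)

mu = np.array([1.0, 0.0])   # distribution of X_0 (start in state 1)
N = 5000
total = 0.0
for n in range(N + 1):
    total += mu @ np.array([e, e**2])   # E e^{X_n} = e P(X_n = 1) + e^2 P(X_n = 2)
    mu = mu @ P                         # distribution of X_{n+1}

print(total / (N + 1))       # close to the limit
print(e / 3 + 2 * e**2 / 3)  # ≈ 5.8321
```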