Homework_Week5_solutions

The document discusses various exercises related to Markov chains, including finding transition matrices, eigenvalues, and expected return times. It covers specific scenarios such as a three-state chain, a particle moving on a hexagon, and a hidden Markov chain with defined transition probabilities. The solutions involve calculating probabilities, expected visits, and absorption probabilities, referencing a textbook for further details.


Exercise 1: Consider the three-state chain with the following transition diagram: state 1 moves to state 2 with probability 1; state 2 stays at 2 or moves to 3, each with probability 1/2; state 3 moves to 1 or to 2, each with probability 1/2. Find a general formula for $p_{11}^{(n)}$.
Solution: The transition matrix and its square are
$$
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 \end{pmatrix},
\qquad
P^2 = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix}.
$$
We look for the eigenvalues of $P$ by solving $\det(\lambda I - P) = 0$, that is
$$
\det(\lambda I - P) = \frac{1}{4}\left(4\lambda^3 - 2\lambda^2 - \lambda - 1\right) = \frac{1}{4}(\lambda - 1)\left(4\lambda^2 + 2\lambda + 1\right) = 0\,.
$$
Note that to factor the third-degree polynomial we used the known fact that $\lambda = 1$ is always an eigenvalue of $P$. It follows that the eigenvalues are
$$
\lambda \in \Big\{1,\ \tfrac{1}{4}\big({-1} - i\sqrt{3}\big),\ \tfrac{1}{4}\big({-1} + i\sqrt{3}\big)\Big\}\,.
$$
Since $re^{i\alpha} = r\cos\alpha + ir\sin\alpha$, we can rewrite them in polar form by equating real and imaginary parts — note that $r(\lambda) = \sqrt{(\Re\lambda)^2 + (\Im\lambda)^2}$ and $\alpha = \arctan(\Im\lambda/\Re\lambda) \pm \pi\,\mathbf{1}\{\Re\lambda < 0\}$ — obtaining
$$
\lambda \in \Big\{1,\ \tfrac{1}{2}e^{-i2\pi/3},\ \tfrac{1}{2}e^{i2\pi/3}\Big\}\,,
$$
and also
$$
\lambda^n \in \Big\{1,\ \tfrac{1}{2^n}e^{-i2n\pi/3},\ \tfrac{1}{2^n}e^{i2n\pi/3}\Big\}\,,
$$
which, written in terms of trigonometric functions, is
$$
\lambda^n \in \Big\{1,\ \tfrac{1}{2^n}\big(\cos(2n\pi/3) - i\sin(2n\pi/3)\big),\ \tfrac{1}{2^n}\big(\cos(2n\pi/3) + i\sin(2n\pi/3)\big)\Big\}\,.
$$
So we can write $p_{11}^{(n)}$ as a linear combination of the $n$-th powers of the eigenvalues, that is
$$
p_{11}^{(n)} = a' + \frac{b'}{2^n}\big(\cos(2n\pi/3) - i\sin(2n\pi/3)\big) + \frac{c'}{2^n}\big(\cos(2n\pi/3) + i\sin(2n\pi/3)\big)\,.
$$
However, $a'$, $b'$ and $c'$ can be complex, while the result has to be real for every $n \ge 0$. Therefore, with $a, b, c \in \mathbb{R}$ and $a' = a$, $b' = (b + ic)/2$, $c' = (b - ic)/2$, we have
$$
p_{11}^{(n)} = a + \frac{b}{2^n}\cos(2n\pi/3) + \frac{c}{2^n}\sin(2n\pi/3)\,.
$$
To solve for the three unknowns, looking at the $(1,1)$ entry of $P^0$, $P$ and $P^2$, we have
$$
\begin{cases}
p_{11}^{(0)} = 1 = a + b \\[3pt]
p_{11}^{(1)} = 0 = a + \frac{b}{2}\cos(2\pi/3) + \frac{c}{2}\sin(2\pi/3) = a - \frac{b}{4} + \frac{c\sqrt{3}}{4} \\[3pt]
p_{11}^{(2)} = 0 = a + \frac{b}{4}\cos(4\pi/3) + \frac{c}{4}\sin(4\pi/3) = a - \frac{b}{8} - \frac{c\sqrt{3}}{8}
\end{cases}
$$
whose solution is $a = 1/7$, $b = 6/7$, and $c = \frac{2\sqrt{3}}{21}$. It follows that
$$
p_{11}^{(n)} = \frac{1}{7} + \left(\frac{1}{2}\right)^n \left(\frac{6}{7}\cos(2n\pi/3) + \frac{2\sqrt{3}}{21}\sin(2n\pi/3)\right)\,.
$$
See also Example 1.1.6, p. 6, in
[N’97] J.R. Norris (1997). Markov Chains. Cambridge University Press.
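The closed form can be sanity-checked against direct matrix powers; a minimal numerical sketch (using numpy, with $P$ as above):

```python
import numpy as np

# Transition matrix of the three-state chain of Exercise 1
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0]])

def p11(n):
    """Closed-form p11^(n) derived above."""
    return 1/7 + 0.5**n * ((6/7) * np.cos(2*n*np.pi/3)
                           + (2*np.sqrt(3)/21) * np.sin(2*n*np.pi/3))

# The (1,1) entry of P^n matches the formula for every n
for n in range(12):
    assert abs(np.linalg.matrix_power(P, n)[0, 0] - p11(n)) < 1e-12
```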
Exercise 2: A particle moves on the six vertices of a hexagon in the following way: at each step the particle is equally likely to move or to stay put; if it moves, it is equally likely to go to each of its two adjacent vertices. Let i be the initial vertex occupied by the particle and o the vertex opposite to i.

1. Describe the Markov chain model and classify its states and their periods.

2. Calculate the expected number of steps until the particle returns to i.

3. Calculate the expected number of visits to o until the first return to i.

4. Calculate the expected number of steps until the first visit to o.
Solution:

1. Label the vertices $1, \ldots, 6$ around the hexagon, with $i = 1$ and $o = 4$. The transition matrix of the Markov chain is
$$
P = \begin{pmatrix}
1/2 & 1/4 & 0 & 0 & 0 & 1/4 \\
1/4 & 1/2 & 1/4 & 0 & 0 & 0 \\
0 & 1/4 & 1/2 & 1/4 & 0 & 0 \\
0 & 0 & 1/4 & 1/2 & 1/4 & 0 \\
0 & 0 & 0 & 1/4 & 1/2 & 1/4 \\
1/4 & 0 & 0 & 0 & 1/4 & 1/2
\end{pmatrix},
$$
and the transition diagram is the hexagon with a self-loop of probability 1/2 at each vertex and probability 1/4 along each edge in each direction.
The chain is irreducible, aperiodic and positive recurrent.


By symmetry the unique stationary distribution, π, is the uniform one.

2. Let $R_i = \min\{n > 0 : X_n = i\}$. We have that
$$
E_i[R_i] = \frac{1}{\pi_i} = 6\,.
$$
3. We know that $x_j = E_i\big[\sum_{n=1}^{R_i} \mathbf{1}\{X_n = j\}\big]$ is the unique invariant measure with $x_i = 1$. It follows that $x = 6\pi$ and therefore
$$
E_i\left[\sum_{n=1}^{R_i} \mathbf{1}\{X_n = o\}\right] = x_o = 1\,.
$$

4. To compute the expected number of steps until the first visit to o, we can, by symmetry, lump together vertices at the same distance from i and consider the following simplified chain on states $(i, 1, 2, o)$, where $o$ is an absorbing state:
$$
\begin{pmatrix}
1/2 & 1/2 & 0 & 0 \\
1/4 & 1/2 & 1/4 & 0 \\
0 & 1/4 & 1/2 & 1/4 \\
0 & 0 & 0 & 1
\end{pmatrix}.
$$
Writing $m_s$ for the mean number of steps to reach $o$ from state $s$, we have
$$
\begin{cases}
m_i = 1 + \frac{1}{2}m_i + \frac{1}{2}m_1 \\[3pt]
m_1 = 1 + \frac{1}{4}m_i + \frac{1}{2}m_1 + \frac{1}{4}m_2 \\[3pt]
m_2 = 1 + \frac{1}{4}m_1 + \frac{1}{2}m_2
\end{cases}
$$
From the first and the last equations, we have
$$
\begin{cases}
m_i = 2 + m_1 \\[3pt]
m_2 = 2 + \frac{1}{2}m_1
\end{cases}
$$
and substituting these into the middle one gives $m_1 = 16$. It follows that $m_2 = 10$ and $m_i = 18$.
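The three equations above are just $(I - Q)m = \mathbf{1}$ for the transient block $Q$ of the simplified chain; a quick numerical cross-check (a sketch using numpy):

```python
import numpy as np

# Transient part Q of the simplified chain, states ordered (i, 1, 2);
# the absorbing state o is dropped from the matrix
Q = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.25, 0.50]])

# Mean number of steps to absorption in o from each transient state
m = np.linalg.solve(np.eye(3) - Q, np.ones(3))
# m = [m_i, m_1, m_2] = [18, 16, 10]
```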

Exercise 3: Let $\{X_n\}_{n\ge 0}$ be a HMC on $I = \{0, 1, 2, \ldots\}$ with the following transition probabilities:
$$
p_{00} = p_{10} = 1/4\,, \qquad p_{i,i-1} = 1/4 \ \ (i \ge 2)\,,
$$
$$
p_{01} = p_{11} = 3/4\,, \qquad p_{i,i+1} = 3/4 \ \ (i \ge 2)\,.
$$
Assume that the chain starts in state $3 \in I$ at time 0.

1. Find the probability of ever visiting state $0 \in I$, that is $P_3(T_0 < \infty)$.

2. Compute $\lim_{n\to\infty} P_i(X(n) = j \mid T_0 < \infty)$, for $i, j \ge 0$.

3. Compute $\lim_{n\to\infty} p_{ij}^{(n)}$, for $i, j \ge 0$.

Solution: The transition diagram is a birth-and-death chain on $\{0, 1, 2, \ldots\}$: every state $i \ge 2$ moves one step to the right with probability 3/4 and one step to the left with probability 1/4, while states 0 and 1 both go to 0 with probability 1/4 and to 1 with probability 3/4.
1. Let $u_i = P_i(T_0 < \infty)$, with $u_0 = u_1 = 1$, and let $h = P_i(T_{i-1} < \infty)$ for $i \ge 2$ (which does not depend on $i$). Since $\{0, 1\}$ is a closed class, which is positive recurrent, we have $u_i = P_i(T_1 < \infty)$. Looking at state $2 \in I$, we have
$$
P_2(T_1 < \infty) = 1 \times p_{21} + P_3(T_1 < \infty) \times p_{23}\,.
$$
Since, starting from $X(0) = 3$, $\{T_1 < \infty\} \subset \{T_2 < \infty\}$, we have
$$
P_3(T_1 < \infty) = P_3(T_1 < \infty,\, T_2 < \infty) = P_3(T_1 < \infty \mid T_2 < \infty)\, P_3(T_2 < \infty) = P_2(T_1 < \infty)\, P_3(T_2 < \infty)\,,
$$
where in the last equality we used the strong Markov property on $\{X(T_2) = 2\}$. Therefore
$$
h = \frac{1}{4} + \frac{3}{4}h^2\,,
$$
which admits two solutions, $h = 1$ and $h = \frac{1}{3}$, and we choose the minimal one, that is $h = 1/3$. It follows that $u_i = h^{i-1} = 3^{1-i}$ for $i \ge 2$, and in particular
$$
P_3(T_0 < \infty) = P_3(T_1 < \infty) = 1/9\,.
$$
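Since the map $h \mapsto \frac{1}{4} + \frac{3}{4}h^2$ is increasing, iterating it from 0 converges monotonically to the minimal non-negative root; a short numerical check:

```python
# Fixed-point iteration for h = 1/4 + (3/4) h^2, started from 0:
# converges monotonically to the minimal non-negative root h = 1/3
h = 0.0
for _ in range(200):
    h = 0.25 + 0.75 * h * h

u3 = h ** 2  # P_3(T_0 < infinity) = h^2 = 1/9
```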

2. We know that, once it has entered the closed recurrent class $\{0, 1\}$, the chain, restricted to that class, behaves as a two-state HMC, and the limiting distribution there is the stationary one, as we computed in class when we studied the two-state HMC. Therefore, conditioning on the chain visiting state $0 \in I$ in finite time, we have
$$
P(X(\infty) = 0 \mid T_0 < \infty) = \pi_0\,, \quad P(X(\infty) = 1 \mid T_0 < \infty) = \pi_1\,, \quad P(X(\infty) = j \mid T_0 < \infty) = 0\,, \ j \ge 2\,.
$$
Solving the system
$$
\begin{cases}
\frac{1}{4}\pi_0 + \frac{1}{4}\pi_1 = \pi_0 \\[3pt]
\frac{3}{4}\pi_0 + \frac{3}{4}\pi_1 = \pi_1 \\[3pt]
\pi_0 + \pi_1 = 1
\end{cases}
$$
we get $(\pi_0, \pi_1) = (\frac{1}{4}, \frac{3}{4})$ and therefore
$$
P(X(\infty) = 0 \mid T_0 < \infty) = \frac{1}{4}\,, \quad P(X(\infty) = 1 \mid T_0 < \infty) = \frac{3}{4}\,, \quad P(X(\infty) = j \mid T_0 < \infty) = 0 = \pi_j\,, \ j \ge 2\,.
$$
3. By the previous results, we have, for $i \in \{0, 1\}$,
$$
\lim_{n\to\infty} p_{i0}^{(n)} = \pi_0 = \frac{1}{4}\,, \qquad \lim_{n\to\infty} p_{i1}^{(n)} = \pi_1 = \frac{3}{4}\,, \qquad \lim_{n\to\infty} p_{ij}^{(n)} = 0\,, \ j \ge 2\,,
$$
and for $i \notin \{0, 1\}$,
$$
\lim_{n\to\infty} p_{i0}^{(n)} = P_i(X(\infty) = 0 \mid T_0 < \infty)\, P_i(T_0 < \infty) = u_i \pi_0 = \frac{3^{1-i}}{4}\,,
$$
$$
\lim_{n\to\infty} p_{i1}^{(n)} = P_i(X(\infty) = 1 \mid T_0 < \infty)\, P_i(T_0 < \infty) = u_i \pi_1 = \frac{3^{2-i}}{4}\,,
$$
$$
\lim_{n\to\infty} p_{ij}^{(n)} = 0\,, \ j \ge 2\,.
$$
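These limits can be checked numerically on a finite truncation of the chain. The truncation level N = 400 and the choice to make the top state absorbing are assumptions of this sketch (they mimic mass escaping to infinity), not part of the exercise:

```python
import numpy as np

# Truncated version of the Exercise 3 chain on {0, ..., N};
# the top state N is made absorbing to mimic escape to infinity
N = 400
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[1, 0] = 0.25
P[0, 1] = P[1, 1] = 0.75
for i in range(2, N):
    P[i, i - 1] = 0.25
    P[i, i + 1] = 0.75
P[N, N] = 1.0

Pn = np.linalg.matrix_power(P, 2000)
# Row 3 should approach (3^{-2}/4, 3^{-1}/4, 0, ...) = (1/36, 1/12, 0, ...)
```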

Exercise 4: Let $\{X_n\}_{n\ge 0}$ be a HMC on $I = \{0, 1, 2, \ldots\}$ with the following transition probabilities, for $i \ge 1$:
$$
p_{i,i+1} = p_i\,, \qquad p_{i,i-1} = q_i\,.
$$
For $i = 1, 2, \ldots$ we have $0 < p_i = 1 - q_i < 1$, while $p_{00} = 1$, making 0 an absorbing state. Calculate the absorption probability starting from $i \ge 1$, that is $P_i(T_0 < \infty)$.
Solution: See Example 1.3.4 (Birth-and-death chain), p. 16, in
[N’97] J.R. Norris (1997). Markov Chains. Cambridge University Press.
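For reference, the formula in that example reads: with $\gamma_0 = 1$ and $\gamma_j = (q_1 \cdots q_j)/(p_1 \cdots p_j)$, the minimal solution is $P_i(T_0 < \infty) = \sum_{j \ge i} \gamma_j / \sum_{j \ge 0} \gamma_j$ when the series converges, and $1$ otherwise. A numerical sketch with a hypothetical constant choice $p_i = 3/4$ (my assumption for illustration; then $\gamma_j = (1/3)^j$ and the formula reduces to $P_i(T_0 < \infty) = (1/3)^i$):

```python
# Birth-and-death absorption probability via the gamma-series formula
# (Norris, Example 1.3.4). Constant p_i = 3/4 is an illustrative assumption.
p, q = 0.75, 0.25
N = 200  # truncation of the series; gamma_j = (q/p)^j decays geometrically

gammas = [(q / p) ** j for j in range(N)]
total = sum(gammas)

def absorb_prob(i):
    """P_i(T_0 < infinity) = sum_{j >= i} gamma_j / sum_{j >= 0} gamma_j."""
    return sum(gammas[i:]) / total

# For constant p > 1/2 this reduces to (q/p)^i, e.g. absorb_prob(3) ~ 1/27
```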
