
Discrete Time Markov Chain (Part 2)

Noha Youssef

The American University in Cairo


[email protected]

Table of Contents

Classification of States

Hitting Times and Hitting Probabilities

Examples

Classification of States
Definition 1
A state j is accessible from state i if pij^(n) > 0 for some n ≥ 0.

Definition 2
States i and j communicate if j is accessible from i and i is accessible from j. Notation: i ←→ j. Communication is an equivalence relation:
- reflexivity: i always communicates with i;
- symmetry: if i communicates with j, then j communicates with i;
- transitivity: if i communicates with j and j communicates with k, then i communicates with k.
Two states that communicate are said to belong to the same equivalence class, and the state space S is partitioned into a certain number of such classes.
Continued

Definition 3
A communicating class of states is closed if it is not possible to leave that class. That is, the communicating class C is closed if pij = 0 whenever i ∈ C and j ∉ C.

Definition 4
A state i is said to be absorbing if the set {i} is a closed class. Equivalently, state i is absorbing if pii = 1.

Example 1
Consider the Markov chain below. It is assumed that when there is
an arrow from state i to state j then pij > 0. Find the equivalence
classes for this Markov chain.

Solution: Class 1 = {1, 2}, Class 2 = {3, 4}, Class 3 = {5} (state 5 is a class on its own), Class 4 = {6, 7, 8}.
Continued

Definition 5
The chain is irreducible if the state space S forms a single
communicating class, i.e. all states communicate with each other.

Definition 6
A state is said to be recurrent if, any time that we leave that
state, we will return to that state in the future with probability
one. On the other hand, if the probability of returning is less than
one, the state is called transient. For any state i, we define

fii = P (Xn = i for some n ≥ 1 | X0 = i).

State i is recurrent if fii = 1 and it is transient if fii < 1.

Example 2

Consider the chain on states 0, 1, 2 and transition matrix

    P = [ 0.5  0.5   0
          0.5  0.25  0.25
          0    0.3   0.7  ].

Is this chain irreducible?


Solution
Since 0 ↔ 1 and 1 ↔ 2, all states communicate. This is an irreducible chain.
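The accessibility and irreducibility definitions can be checked mechanically. Below is a minimal sketch (not part of the lecture) that tests the Example 2 chain for irreducibility by depth-first search on the transition graph: j is accessible from i exactly when there is a directed path i → j along positive-probability transitions.

```python
# Sketch: checking irreducibility of the Example 2 chain by testing
# mutual reachability of all states in the transition graph.

def accessible(P, i):
    """Return the set of states accessible from state i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_irreducible(P):
    """The chain is irreducible iff every state is accessible from every state."""
    n = len(P)
    return all(accessible(P, i) == set(range(n)) for i in range(n))

P = [[0.5, 0.5, 0.0],
     [0.5, 0.25, 0.25],
     [0.0, 0.3, 0.7]]

print(is_irreducible(P))  # True: 0 <-> 1 and 1 <-> 2, so all states communicate
```

The same helper can be reused to list communicating classes: i and j communicate exactly when each lies in the other's accessible set.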
Example 3
Consider the chain on states 0, 1, 2, 3 and transition matrix

    P = [ 0.5   0.5   0     0
          0.5   0.5   0     0
          0.25  0.25  0.25  0.25
          0     0     0     1    ].
Is this chain irreducible? It is not: it has three classes, {0, 1}, {2} and {3}.
Which states are recurrent and which are transient?
- State 0: in order never to return to 0 we would need to go to state 1 and stay there forever. But the probability of staying at 1 for n steps is 0.5^n, which tends to 0 as n → ∞. This implies that f00 = 1, i.e., 0 is recurrent.
- State 1: by the same argument as for state 0, this state is recurrent.
- State 2: the only way to return to 2 is to do so in one step, so f22 = 0.25, and 2 is transient.
- State 3: it is recurrent since it is an absorbing state.
Definition 7
Remark: recurrence as defined above concerns only the first return to the state.
Consider a discrete-time Markov chain and let V be the total number of visits to state i.
a) If i is a recurrent state, then P (V = ∞ | X0 = i) = 1.
b) If i is a transient state, then V | X0 = i ∼ Geometric(1 − fii).
c) State i is recurrent if Σ_{n=0}^{∞} pii^(n) = ∞.
d) State i is transient if Σ_{n=0}^{∞} pii^(n) < ∞.
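The series criterion in (c)-(d) can be illustrated numerically with matrix powers. This is a sketch, not part of the lecture, and truncating the series at a finite N is only suggestive of divergence; applied to the Example 3 chain it shows the partial sums for the recurrent states growing without bound while the transient state's sum settles.

```python
# Illustration of the criterion: state i is recurrent iff sum_n p_ii^(n) diverges.
# We truncate the series at N; for the Example 3 chain, states 0, 1, 3 are
# recurrent and state 2 is transient with f22 = 0.25.

import numpy as np

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.25, 0.25, 0.25, 0.25],
              [0.0, 0.0, 0.0, 1.0]])

def partial_sums(P, N):
    """Return sum_{n=0}^{N} diag(P^n), the truncated series of return probabilities."""
    total = np.zeros(P.shape[0])
    Pn = np.eye(P.shape[0])          # P^0 = I
    for _ in range(N + 1):
        total += np.diag(Pn)
        Pn = Pn @ P
    return total

print(partial_sums(P, 200))
# States 0, 1, 3 grow roughly linearly in N (recurrent);
# state 2's sum settles near 4/3 = 1/(1 - f22) (transient).
```

The limit 1/(1 − fii) for a transient state is exactly the expected total number of visits, consistent with the geometric law in (b).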
Continued

Definition 8
A state i is periodic with period d if d is the smallest integer such that pii^(n) = 0 for all n which are not multiples of d. In case d = 1, the state is said to be aperiodic. If periodic states i and j are in the same class, then their periods are the same.

Figure: Periodic states
Continued

The period of a state i is the smallest integer d satisfying the following property: pii^(n) = 0 whenever n is not divisible by d. The period of i is denoted d(i). If pii^(n) = 0 for all n > 0 then we let d(i) = ∞.
a) If d(i) > 1 we say that state i is periodic.
b) If d(i) = 1 we say that state i is aperiodic.
c) All states in the same communicating class have the same period: if i ↔ j then d(i) = d(j).
d) A class is said to be periodic if its states are periodic.
e) Similarly, a class is said to be aperiodic if its states are aperiodic.
f) Finally, a Markov chain is said to be aperiodic if all of its states are aperiodic.

Markov chains with different periods

Example 4

Back to the graph in Example 1


a) Is Class 1={state 1,state 2} aperiodic?
b) Is Class 2={state 3,state 4} aperiodic?
c) Is Class 4={state 6,state 7,state 8} aperiodic?
Solution
a) Class 1 = {state 1, state 2} is aperiodic since it has a self-transition, p22 > 0.
b) Class 2 = {state 3, state 4} is periodic with period 2.
c) Class 4 = {state 6, state 7, state 8} is aperiodic. For example, note that we can go from state 6 back to state 6 in two steps (6-7-6) and in three steps (6-7-8-6). Since gcd(2, 3) = 1, we conclude that state 6 and its class are aperiodic.
Example 5

For the Markov chain given above, answer the following questions:
How many classes are there? For each class, mention if it is recurrent
or transient.
Solution
There are three classes: Class 1 consists of one state, state 0, which is recurrent. Class 2 consists of two states, states 1 and 2, both of which are transient. Finally, Class 3 consists of one state, state 3, which is recurrent. Once you enter state 0 or state 3, you never leave it; we call such states absorbing states. There are two absorbing states here, and the process will eventually get absorbed in one of them.
Example 6

Consider the Markov chain with state space {1, 2, 3, 4, 5} and transition matrix

    P = [ 0  1  0  0  0
          0  0  1  0  0
          1  0  0  0  0
          0  0  0  0  1
          0  0  0  1  0 ].

{1, 2, 3} is a class of period 3 and {4, 5} is a class of period 2.

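The period d(i) = gcd{n ≥ 1 : pii^(n) > 0} can be computed directly from matrix powers. The sketch below (not from the lecture; the cutoff N is a practical assumption that suffices for a finite chain) recovers the periods of the Example 6 chain.

```python
# Sketch: period of state i as the gcd of observed return times n <= N
# with P^n[i, i] > 0.

from math import gcd
import numpy as np

def period(P, i, N=50):
    """gcd of all return times n <= N with P^n[i, i] > 0 (0 if no return seen)."""
    d = 0
    Pn = np.eye(P.shape[0])
    for n in range(1, N + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = gcd(d, n)
    return d

P = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

print([period(P, i) for i in range(5)])  # [3, 3, 3, 2, 2]
```

As the definition predicts, the period is constant within each communicating class.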
Example 7
Consider the Markov chain with state space {1, 2, 3} and transition matrix

    P = [ 0  0.4  0.6
          1  0    0
          1  0    0   ].

a) Find the communicating classes.


b) Classify the communicating classes as transient or recurrent.
c) Find the period d of each communicating class.
Solution
a) All the states communicate, i.e. the chain is irreducible.
b) It is a finite irreducible Markov chain, hence all the states are recurrent.
c) The period d of each state is 2.
Hitting Times

Let (Xn , n ≥ 0) be a Markov chain with state space S and transition


matrix P and let A be a subset of the state space S (notice that A
need not be a class).
We are interested in the probability that the Markov chain X reaches a state in A.
Definition 9
The hitting time HA is the first time the chain X 'hits' the subset A: HA = min{n ≥ 0 : Xn ∈ A}, i.e. the minimum n at which the chain enters the target set.

Definition 10
Hitting probability: hiA = P (HA < ∞ | X0 = i) = P (Xn ∈ A for some n ≥ 0 | X0 = i), i ∈ S.
Hitting Times

Remarks.
- The time HA to hit a given set A might be infinite (if the chain never hits A).
- By convention, if X0 = i and i ∈ A, then HA = 0 and hiA = 1.
- If A is an absorbing set of states (i.e. there is no way for the chain to leave the set A once it has entered it), then the probability hiA is called an absorption probability. A particular case that will be of interest to us is when A is a single absorbing state.
Hitting Probabilities

The hitting probability describes the probability that the Markov


chain will ever reach some state or set of states. Often it can be found
by common sense, but in general the equations below are needed.
Let A be some subset of the state space S. (A need not be a
communicating class: it can be any subset required, including the
subset consisting of a single state: e.g. A = {4}). The hitting
probability from state i to set A is the probability of ever reaching
the set A, starting from initial state i. We write this probability as
hiA . Thus

hiA = P (Xt ∈ A for some t ≥ 0|X0 = i).

19 / 31
Example 8

Let set A = {1, 3}. The hitting probability for set A is:
- 1 starting from states 1 or 3 (we are starting in set A, so we hit it immediately);
- 0 starting from states 4 or 5 (the set {4, 5} is a closed class, so we can never escape to set A);
- 0.3 starting from state 2 (we could hit A at the first step, with probability 0.3, but otherwise we move to state 4 and get stuck in the closed class {4, 5}, with probability 0.7).
Continued

We can summarize all the information from the example above in a vector of hitting probabilities:

    hA = (h1A , h2A , h3A , h4A , h5A ) = (1, 0.3, 1, 0, 0).

When A is a closed class, the hitting probability hiA is called the absorption probability.
Theorem of Hitting Probabilities

The vector of hitting probabilities hA = (hiA : i ∈ S) is the minimal non-negative solution to the following equations:

    hiA = 1                       for i ∈ A,
    hiA = Σ_{j∈S} pij hjA         for i ∉ A.

The 'minimal non-negative solution' means that:
1. the values {hiA} collectively satisfy the equations above;
2. each value hiA is ≥ 0 (non-negative);
3. given any other non-negative solution to the equations above, say {giA} with giA ≥ 0 for all i, we have hiA ≤ giA for all i (minimality).
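The minimal non-negative solution can be found by fixed-point iteration started from zero: set hiA = 1 on A and 0 elsewhere, then repeatedly apply hiA ← Σ_j pij hjA for i ∉ A. Starting from zero is what selects the minimal solution. Below is a sketch (not from the notes), applied to the four-state chain of the closing example; its transition matrix is inferred from the equations given there, with states 1 and 4 absorbing and relabelled 0..3.

```python
# Sketch: hitting probabilities as the limit of a fixed-point iteration
# started from zero, which converges to the minimal non-negative solution.

import numpy as np

def hitting_probabilities(P, A, iters=500):
    """Iteratively compute h_iA = P(hit A | X0 = i); A is a set of state indices."""
    n = P.shape[0]
    h = np.zeros(n)
    h[list(A)] = 1.0
    for _ in range(iters):
        new = P @ h              # h_i <- sum_j p_ij h_j for every state
        new[list(A)] = 1.0       # states in A always have h = 1
        h = new
    return h

# states 0..3 correspond to states 1..4 of the closing example
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

print(hitting_probabilities(P, {3}))  # ≈ [0, 1/3, 2/3, 1]
```

Note that state 0 is absorbing and outside A, so its equation h0 = h0 has many solutions; the iteration from zero keeps h0 = 0, the minimal one, exactly as the theorem requires.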
Proof

Part 1:
Let us first prove that hA is a solution of the system of equations. If i ∈ A, then hiA = 1. If i ∉ A, then

    hiA = P (Xt ∈ A for some t ≥ 1 | X0 = i)
        = Σ_{j∈S} P (Xt ∈ A for some t ≥ 1 | X1 = j) P (X1 = j | X0 = i)   (partition rule)
        = Σ_{j∈S} pij hjA.                                                 (definitions)

The trick in such proofs is to condition on the first step: express the probability in terms of a transition probability and a conditional probability.
Proof

Part 2:
Let {giA} be any non-negative solution of the equations. For i ∉ A,

    giA = Σ_{j∈S} pij gjA = Σ_{j∈A} pij + Σ_{j∉A} pij gjA
        = Σ_{j∈A} pij + Σ_{j∉A} pij ( Σ_{k∈A} pjk + Σ_{k∉A} pjk gkA )
        = P (X1 ∈ A | X0 = i) + P (X2 ∈ A, X1 ∉ A | X0 = i) + Σ_{j∉A} Σ_{k∉A} pij pjk gkA.
Proof

The last term on the right-hand side is non-negative, so

    giA ≥ P (X1 ∈ A or X2 ∈ A | X0 = i).

This procedure can be iterated further and gives, for any n ≥ 1:

    giA ≥ P (X1 ∈ A or X2 ∈ A or · · · or Xn ∈ A | X0 = i).

Letting n → ∞, we obtain

    giA ≥ P (Xn ∈ A for some n ≥ 1 | X0 = i) = hiA ,

which completes the proof.
Expected Hitting Times

In this part we study how long it takes to get from state i to A.


Definition 1
Let A be a subset of the state space S. The hitting time of A is
the random variable HA , where HA = min{t ≥ 0 : Xt ∈ A}. HA
is the time taken before hitting set A for the first time. The hitting
time HA can take values 0, 1, 2, · · · , and ∞. If the chain never hits
set A, then HA = ∞.

The hitting time is also called the reaching time. If A is a closed


class, it is also called the absorption time.

Continued

The mean hitting time for A, starting from state i, is

miA = E(HA |X0 = i).

Theorem 1
The vector of expected hitting times mA = (miA : i ∈ S) is the minimal non-negative solution to the following equations:

    miA = 0                        for i ∈ A,
    miA = 1 + Σ_{j∉A} pij mjA      for i ∉ A.

If A = {i} then mii is called the mean return time.
Continued

Note that the process takes 1 step to reach some state j at time 1, after which the remaining expected hitting time is mjA. For i ∉ A,

    miA = 1 + Σ_{j∈S} E(HA | X1 = j) P (X1 = j | X0 = i)
        = 1 + Σ_{j∈S} pij mjA = 1 + Σ_{j∉A} pij mjA ,

because mjA = 0 for j ∈ A.
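Restricted to the states outside A, Theorem 1's equations form a linear system (I − Q) m = 1, where Q is the submatrix of P over the states not in A. The sketch below (not from the notes) solves it directly; this is valid under the assumption that every state outside A reaches A with probability one, as in the example that follows, whose transition matrix is inferred from its equations.

```python
# Sketch: mean hitting times by solving (I - Q) m = 1 over the states
# outside A, with m = 0 on A.

import numpy as np

def mean_hitting_times(P, A):
    """Solve m_iA = 1 + sum_{j not in A} p_ij m_jA for i not in A; m = 0 on A."""
    n = P.shape[0]
    outside = [i for i in range(n) if i not in A]
    Q = P[np.ix_(outside, outside)]          # transitions among non-A states
    m_out = np.linalg.solve(np.eye(len(outside)) - Q, np.ones(len(outside)))
    m = np.zeros(n)
    m[outside] = m_out
    return m

# the four-state chain of the final example (states 1 and 4 absorbing),
# relabelled 0..3; A = {0, 3}
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

print(mean_hitting_times(P, {0, 3}))  # ≈ [0, 2, 2, 0]
```

The direct solve works here because the non-A states are transient, making I − Q invertible; when some state outside A can avoid A forever, the minimal-solution iteration is the safer route.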
Example 1

For the following Markov chain:


a) Find the vector of hitting probabilities for state 4.
b) Starting from state 2, find the expected time to absorption.
Solution
a) By the theorem, the hitting probabilities for state 4 satisfy

    h14 = 0                    (state 1 is absorbing, so from it the chain never reaches 4),
    h24 = Σ_{j∈S} p2j hj4 ,
    h34 = Σ_{j∈S} p3j hj4 ,
    h44 = 1.
Example 1

     
    h14 = 0,
    h24 = 0.5 h14 + 0.5 h34 = 0.5 × 0 + 0.5 h34 ,
    h34 = 0.5 h24 + 0.5 h44 = 0.5 h24 + 0.5,
    h44 = 1.

Solving h34 = 1/2 + (1/2)((1/2) h34 ) ⇒ h34 = 2/3 and h24 = 1/3. The vector of hitting probabilities is

    h = (0, 1/3, 2/3, 1).
Example 1
b) Starting from state i = 2, we wish to find the expected time to reach the set A = {1, 4} (the set of absorbing states).

    miA = 0                       for i ∈ {1, 4},
    miA = 1 + Σ_{j∉A} pij mjA     for i ∉ {1, 4}.

Hence m1A = 0 and m4A = 0 since 1, 4 ∈ A, and

    m2A = 1 + p22 m2A + p23 m3A = 1 + (1/2) m3A ,
    m3A = 1 + p32 m2A + p33 m3A = 1 + (1/2) m2A = 1 + (1/2)(1 + (1/2) m3A ) ⇒ m3A = 2.

Thus

    m2A = 1 + (1/2) m3A = 1 + (1/2) × 2 = 2.
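As a sanity check (not part of the notes), the answer m2A = 2 can be confirmed by Monte Carlo: simulate the chain from state 2 with the example's transition probabilities and average the number of steps until absorption into A = {1, 4}.

```python
# Simulation check of the expected absorption time from state 2.
# Non-absorbing transitions: from 2 go to 1 or 3 with prob. 1/2 each;
# from 3 go to 2 or 4 with prob. 1/2 each.

import random

random.seed(0)

step = {2: lambda: 1 if random.random() < 0.5 else 3,
        3: lambda: 2 if random.random() < 0.5 else 4}

def absorption_time(start):
    """Number of steps until the chain first enters {1, 4}."""
    state, t = start, 0
    while state not in (1, 4):
        state = step[state]()
        t += 1
    return t

runs = 20000
estimate = sum(absorption_time(2) for _ in range(runs)) / runs
print(round(estimate, 2))  # close to the exact value m2A = 2
```

With 20000 runs the standard error is about 0.01, so the estimate lands well within a tenth of the exact answer.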
