Markov Chain and Transition Probabilities (Class 5)
A random process {X(t)} is called a Markov process if

P(X(t_n) = a_n | X(t_{n-1}) = a_{n-1}, ..., X(t_1) = a_1) = P(X(t_n) = a_n | X(t_{n-1}) = a_{n-1})

for all t_1 < t_2 < ... < t_n. In other words, if the future behaviour of a process depends on the present but not on the past, then the process is called a Markov process. Here a_1, a_2, ..., a_n are called the states of the Markov process. Random processes {X(t)} with the Markov property which take discrete values, whether t is discrete or continuous, are called Markov chains. The Poisson process discussed earlier is a continuous-time Markov chain. In this section we discuss discrete-time Markov chains. The outcomes are called the states of the Markov chain. If X_n has the outcome j, that is X_n = j, the process is said to be at state j at the nth trial. If for all n,

P(X_n = a_j | X_{n-1} = a_i, X_{n-2} = a_{i_{n-2}}, ..., X_1 = a_{i_1}) = P(X_n = a_j | X_{n-1} = a_i),

then the process {X(n), n = 1, 2, 3, ...} is called a Markov chain.
The conditional probability P(X_n = a_j | X_{n-1} = a_i) is called the one-step transition probability from state a_i to state a_j at the nth step and is denoted by p_ij(n-1, n). If the one-step transition probability does not depend on the step, that is p_ij(n-1, n) = p_ij(m-1, m) for all m and n, then the Markov chain is called a homogeneous Markov chain, or the chain is said to have stationary transition probabilities. When the Markov chain is homogeneous, the one-step transition probability is denoted by p_ij. The matrix P = (p_ij) is called the one-step transition probability matrix, or tpm for short.
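As a concrete illustration (the two-state chain and its numbers below are assumptions made up for the example, not part of the notes), a tpm can be stored as a matrix whose (i, j) entry is p_ij:

```python
import numpy as np

# Hypothetical two-state chain: state 0 and state 1.
# P[i, j] holds the one-step transition probability p_ij
# of moving from state i to state j in a single step.
P = np.array([
    [0.8, 0.2],   # from state 0: stay with prob 0.8, move to 1 with prob 0.2
    [0.4, 0.6],   # from state 1: move to 0 with prob 0.4, stay with prob 0.6
])

print(P[0, 1])   # p_01 = 0.2
```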
If for all choices of t_1, t_2, ..., t_n such that t_1 < t_2 < ... < t_n the random variables X(t_1), X(t_2) - X(t_1), ..., X(t_n) - X(t_{n-1}) are independent, then the process {X(t)} is said to be a random process with independent increments.
Remark 1 The tpm of a Markov chain is a stochastic matrix, since p_ij >= 0 and sum_j p_ij = 1 for all i. That is, the sum of the elements of any row of the tpm is 1. The conditional probability that the process is in state a_j at step n, given that it was in state a_i at step 0, that is P(X_n = a_j | X_0 = a_i), is called the n-step transition probability and is denoted by p_ij^(n).
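Both statements in Remark 1 can be checked numerically. A minimal sketch, assuming a hypothetical two-state tpm (the entries are made up for illustration); the n-step probabilities p_ij^(n) turn out to be the entries of the matrix power P^n, consistent with property 1 below:

```python
import numpy as np

# Hypothetical two-state tpm; P[i, j] = p_ij.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Remark 1: the tpm is a stochastic matrix -- nonnegative entries,
# and every row sums to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)

# The n-step transition probabilities p_ij^(n) are the entries of P^n;
# e.g. the 3-step transition matrix:
P3 = np.linalg.matrix_power(P, 3)
print(P3)        # rows of P^n also sum to 1
```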
Remark 2 If the probability that the process is in state a_i is p_i, i = 1, 2, ..., k, at an arbitrary step, then the row vector p = (p_1, p_2, ..., p_k) is called the probability distribution of the process at that time. In particular, p^(0) = (p_1^(0), p_2^(0), ..., p_k^(0)) is the initial probability distribution.
Remark 3 The transition probability matrix together with the initial probability distribution completely specifies a Markov chain {X_n}.
1. If p is the state probability distribution of the process at an arbitrary time, then the distribution after one step is pP. If p^(0) is the initial state probability distribution, then the distribution after one step is p^(1) = p^(0) P, and p^(2) = p^(1) P = p^(0) P^2. Continuing in this way, we get p^(n) = p^(0) P^n.
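A short sketch of the relation p^(n) = p^(0) P^n, using an assumed two-state tpm and initial distribution (both hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical tpm and initial distribution (row vectors act on the left).
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
p0 = np.array([1.0, 0.0])        # start in state 0 with certainty

# Step by step: p^(n) = p^(n-1) P ...
p = p0.copy()
for _ in range(3):
    p = p @ P

# ... which agrees with the closed form p^(n) = p^(0) P^n.
assert np.allclose(p, p0 @ np.linalg.matrix_power(P, 3))
print(p)
```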
2. If a homogeneous Markov chain is regular, then every sequence of state probability distributions approaches a unique fixed probability distribution called the stationary (steady-state) distribution of the Markov chain. That is, lim_{n -> infinity} p^(n) = pi, where the state probability distribution at step n, p^(n) = (p_1^(n), p_2^(n), ..., p_k^(n)), and the steady-state distribution pi = (pi_1, pi_2, ..., pi_k) are row vectors.
3. Moreover, if P is the tpm of the regular chain, then pi P = pi. Using this property, pi can be found.
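In practice, pi is computed from pi P = pi together with the normalisation sum_i pi_i = 1. A sketch with a hypothetical tpm (the matrix entries are assumptions): rewrite pi P = pi as (P^T - I) pi^T = 0, drop one redundant equation, and append the normalisation constraint:

```python
import numpy as np

# Hypothetical regular tpm.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
k = P.shape[0]

# pi P = pi  <=>  (P^T - I) pi^T = 0.  These k equations are linearly
# dependent, so replace the last one with the constraint sum(pi) = 1.
A = np.vstack([(P.T - np.eye(k))[:-1], np.ones(k)])
b = np.zeros(k)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                         # stationary distribution

# Sanity checks: pi is a fixed point of P, and for a regular chain
# the rows of P^n converge to pi, as property 2 states.
assert np.allclose(pi, pi @ P)
assert np.allclose(np.linalg.matrix_power(P, 50)[0], pi)
```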