Markov Chain and Transition Probabilities (Class 5)

A Markov process is a random process where the probability of future states only depends on the present state, not past states. Markov chains are Markov processes that take on discrete states and times, and can be homogeneous if the one-step transition probabilities do not change over time. The transition probability matrix contains the one-step transition probabilities between all states and characterizes the Markov chain along with the initial state probabilities. For a regular, homogeneous Markov chain, the state probabilities will converge to a unique stationary distribution as time increases.


Markov chain and transition probabilities (class 5)

A random process $\{X(t)\}$ is called a Markov process if

$$P(X(t_n) = a_n \mid X(t_{n-1}) = a_{n-1},\, X(t_{n-2}) = a_{n-2},\, \ldots,\, X(t_1) = a_1) = P(X(t_n) = a_n \mid X(t_{n-1}) = a_{n-1})$$

for all $t_1 < t_2 < \cdots < t_n$. In other words, if the future behaviour of a process depends only on the present and not on the past, the process is called a Markov process. The values $a_1, a_2, \ldots, a_n$ are called the states of the Markov process. A random process $\{X(t)\}$ with the Markov property that takes discrete values, whether $t$ is discrete or continuous, is called a Markov chain. The Poisson process discussed earlier is a continuous-time Markov chain. In this section we discuss discrete-time Markov chains. The outcomes are called the states of the Markov chain. If $X_n$ has the outcome $j$, that is $X_n = j$, the process is said to be in state $j$ at the $n$th trial. If, for all $n$,

$$P(X_n = a_n \mid X_{n-1} = a_{n-1},\, X_{n-2} = a_{n-2},\, \ldots,\, X_1 = a_1) = P(X_n = a_n \mid X_{n-1} = a_{n-1}),$$

then the process $\{X_n,\ n = 1, 2, 3, \ldots\}$ is called a Markov chain. The conditional probability $P(X_n = a_j \mid X_{n-1} = a_i)$ is called the one-step transition probability from state $a_i$ to state $a_j$ at the $n$th step and is denoted by $p_{ij}(n-1, n)$. If the one-step transition probabilities do not depend on the step, that is $p_{ij}(n-1, n) = p_{ij}(m-1, m)$ for all $m$ and $n$, then the Markov chain is called a homogeneous Markov chain, or the chain is said to have stationary transition probabilities.

When the Markov chain is homogeneous, the one-step transition probability is denoted by $p_{ij}$. The matrix $P = (p_{ij})$ is called the one-step transition probability matrix, or tpm for short.
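As a minimal sketch, a tpm can be written as a nested list, where entry `P[i][j]` is the one-step probability of moving from state $i$ to state $j$. The two-state "weather" chain and its numbers below are made up for illustration:

```python
# Hypothetical two-state chain: state 0 = "sunny", state 1 = "rainy".
# P[i][j] is the one-step transition probability from state i to state j.
P = [
    [0.8, 0.2],   # sunny -> sunny, sunny -> rainy
    [0.4, 0.6],   # rainy -> sunny, rainy -> rainy
]

# Every entry is non-negative and each row sums to 1,
# so P is a stochastic matrix and a valid tpm.
for row in P:
    assert all(p >= 0 for p in row)
    assert abs(sum(row) - 1.0) < 1e-12
print("P is a valid tpm")
```

The row-sum condition checked here is exactly the stochastic-matrix property discussed in the remarks below.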

If for all choices of $t_1, t_2, \ldots, t_n$ such that $t_1 < t_2 < \cdots < t_n$ the random variables $X(t_1),\, X(t_2) - X(t_1),\, \ldots,\, X(t_n) - X(t_{n-1})$ are independent, then the process $\{X(t)\}$ is said to be a random process with independent increments.

Remark 1. The tpm of a Markov chain is a stochastic matrix, since $p_{ij} \ge 0$ and $\sum_j p_{ij} = 1$ for all $i$; that is, the sum of the elements of any row of the tpm is 1. The conditional probability that the process is in state $a_j$ at step $n$, given that it was in state $a_i$ at step 0, that is $P(X_n = a_j \mid X_0 = a_i)$, is called the $n$-step transition probability and is denoted by $p_{ij}^{(n)}$.

Remark 2. If the probability that the process is in state $a_i$ is $p_i$, $i = 1, 2, \ldots, k$, at some step, then the row vector $p = (p_1, p_2, \ldots, p_k)$ is called the probability distribution of the process at that step. In particular, $p^{(0)} = (p_1^{(0)}, p_2^{(0)}, \ldots, p_k^{(0)})$ is the initial probability distribution.

Remark 3. The transition probability matrix together with the initial probability distribution completely specifies a Markov chain $\{X_n\}$.

Chapman–Kolmogorov Theorem

If $P$ is the tpm of a homogeneous Markov chain, then the $n$-step tpm $P^{(n)}$ is equal to $P^n$; that is, $[p_{ij}^{(n)}] = [p_{ij}]^n$.
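The theorem can be checked numerically with plain nested-list matrix arithmetic. The sketch below (using a made-up two-state tpm) compares an entry of $P^2$ against the two-step probability computed directly by summing over the intermediate state:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-th power of P by repeated multiplication (n >= 1)."""
    out = P
    for _ in range(n - 1):
        out = mat_mul(out, P)
    return out

# Hypothetical two-state tpm.
P = [[0.8, 0.2],
     [0.4, 0.6]]

P2 = mat_pow(P, 2)
# Two-step probability 0 -> 1 obtained directly by conditioning on the
# intermediate state k: p_01^(2) = p_00 * p_01 + p_01 * p_11.
direct = P[0][0] * P[0][1] + P[0][1] * P[1][1]
assert abs(P2[0][1] - direct) < 1e-12
print(P2[0][1])
```

Matrix squaring and explicit summation over the intermediate state agree, which is precisely the content of $[p_{ij}^{(2)}] = [p_{ij}]^2$.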
Definition: A stochastic matrix $P$ is said to be a regular matrix if all the entries of $P^m$ are positive for some positive integer $m$. A homogeneous Markov chain is said to be regular if its tpm is regular.

We state the following results without proof.

1. If $p$ is the state probability distribution of the process at an arbitrary step, then the distribution after one step is $pP$. If $p^{(0)}$ is the initial state probability distribution, then after one step the distribution is $p^{(1)} = p^{(0)}P$, after two steps $p^{(2)} = p^{(1)}P = p^{(0)}P^2$, and continuing in this way, $p^{(n)} = p^{(0)}P^n$.
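A short sketch of this result, again with a made-up two-state tpm: propagate an initial row vector through the chain one step at a time, which accumulates $p^{(n)} = p^{(0)}P^n$.

```python
def step(p, P):
    """One step of the chain: returns the row vector p P."""
    k = len(P)
    return [sum(p[i] * P[i][j] for i in range(k)) for j in range(k)]

# Hypothetical two-state tpm.
P = [[0.8, 0.2],
     [0.4, 0.6]]
p0 = [1.0, 0.0]          # start in state 0 with certainty

p = p0
for _ in range(3):       # three applications give p^(3) = p^(0) P^3
    p = step(p, P)
print(p)

# Each p^(n) remains a probability distribution.
assert abs(sum(p) - 1.0) < 1e-12
```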

2. If a homogeneous Markov chain is regular, then every sequence of state probability distributions approaches a unique fixed probability distribution, called the stationary (steady-state) distribution of the Markov chain. That is, $\lim_{n \to \infty} p^{(n)} = \pi$, where the state probability distribution at step $n$, $p^{(n)} = (p_1^{(n)}, p_2^{(n)}, \ldots, p_k^{(n)})$, and the steady-state distribution $\pi = (\pi_1, \pi_2, \ldots, \pi_k)$ are row vectors.

3. Moreover, if $P$ is the tpm of the regular chain, then $\pi P = \pi$. Using this property, $\pi$ can be found by solving $\pi P = \pi$ together with the normalization condition $\sum_i \pi_i = 1$.
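One way to sketch this numerically is power iteration: for a regular chain, repeatedly applying $p \mapsto pP$ drives any starting distribution toward $\pi$, and the limit satisfies the fixed-point equation $\pi P = \pi$. The two-state tpm below is a made-up example:

```python
def step(p, P):
    """One step of the chain: returns the row vector p P."""
    k = len(P)
    return [sum(p[i] * P[i][j] for i in range(k)) for j in range(k)]

# Hypothetical two-state tpm of a regular chain (all entries positive).
P = [[0.8, 0.2],
     [0.4, 0.6]]

pi = [0.5, 0.5]          # any starting distribution works for a regular chain
for _ in range(200):     # iterate p <- p P until it stops changing
    pi = step(pi, P)

# Fixed-point check: pi P == pi, and the entries still sum to 1.
nxt = step(pi, P)
assert all(abs(nxt[j] - pi[j]) < 1e-12 for j in range(2))
assert abs(sum(pi) - 1.0) < 1e-12
print(pi)
```

For small chains the same $\pi$ can instead be found exactly by solving the linear system $\pi P = \pi$, $\sum_i \pi_i = 1$ by hand, as the text suggests; power iteration is simply the computational counterpart of result 2.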
