System Modeling - 5
Zahran
Markov Chains
Lecture #5
Contents
1 Stochastic Process
2 Markov Process
3 Markov Chain
3.1 Single-Step Conditional Transition Probability
3.2 n-Step Conditional Transition Probabilities Matrix
3.3 Unconditional State Probabilities π^(n) Vector
3.4 Classification of states of a Markov chain
3.5 State Sojourn Time
3.6 Global Balance Equations
3.6.1 Solving Balance Equations
3.6.2 Example: Cascaded Binary Channel
3.6.3 Example: Computer Program Analysis
4 Homework
1 Stochastic Process
A stochastic process X_t (or X(t)) is a family of random variables indexed by a parameter t (usually the time). Formally, a stochastic process is a mapping from the sample space S to functions of t. With each element e of S is associated a function X_t(e), which is a realization of the stochastic process (also called a trajectory or sample path).
According to the type of the parameter space, one speaks of discrete-time or continuous-time stochastic processes.
Quantities of interest include:
- Stationary distribution: defines the probability that X_t takes a value in a particular subset of S as t → ∞ (assuming the limit exists)
- The relationships between X_s and X_t for different times s and t (e.g. the covariance or correlation of X_s and X_t)
- Hitting probability: the probability that a given state in S will ever be entered
- First passage time: the instant at which the stochastic process first enters a given state or set of states, starting from a given initial state (see the simulation sketch below)
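As a small illustration of the last two quantities, the following Python sketch (illustrative, not from the lecture) estimates the mean first passage time of a biased ±1 random walk by simulation; the target state and the up-probability are assumptions chosen for the example:

```python
import random

# Estimate by simulation the mean first passage time of a biased +1/-1
# random walk from state 0 to a target state. The parameters (target = 3,
# up-probability p = 0.6) are assumptions for the example.
def first_passage_time(target=3, p=0.6):
    x, t = 0, 0
    while x != target:
        x += 1 if random.random() < p else -1
        t += 1
    return t

times = [first_passage_time() for _ in range(10_000)]
print(sum(times) / len(times))  # close to 3 / (2p - 1) = 15 for p = 0.6
```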
2 Markov Process
A stochastic process is called a Markov process when it has the Markov property:

P{X_{t_n} ≤ x_n | X_{t_{n−1}} = x_{n−1}, ..., X_{t_1} = x_1} = P{X_{t_n} ≤ x_n | X_{t_{n−1}} = x_{n−1}}   ∀n, ∀ t_1 < ... < t_n
The future path of a Markov process, given its current state X_{t_{n−1}} and the past history before t_{n−1}, depends only on the current state (not on how this state has been reached).
The current state contains all the information (summary of the past) that is needed to
characterize the future (stochastic) behaviour of the process.
Given the state of the process at an instant, its future and past are independent.
3 Markov Chain
In this course, we will use Markov Chain to refer to a Markov process that is discrete in both time and state. Such a process will also be denoted as a DTMC (Discrete-Time Markov Chain). A Markov chain thus represents the evolution of the states X_n of the process over a discrete time index n = 0, 1, ....
We also focus on time-homogeneous Markov processes, which feature stationary probabilistic evolution; that is, the transition probabilities do not depend on n.
3.1 Single-Step Conditional Transition Probability
The single-step transition probability from state i to state j is p_ij = P{X_{n+1} = j | X_n = i}; for a time-homogeneous chain these probabilities do not depend on n and are collected in the transition probability matrix P = [p_ij]. Since the system always goes to some state, the sum of the row probabilities is 1. A matrix with non-negative elements such that the sum of each row equals 1 is called a stochastic matrix.
One can easily show that the product of two stochastic matrices is a stochastic matrix.
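A quick numerical sanity check of this fact (the two matrices below are arbitrary illustrative values, not taken from the lecture):

```python
import numpy as np

# Multiply two stochastic matrices and check that the product is stochastic:
# non-negative entries and unit row sums. Matrix values are illustrative.
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
Q = np.array([[0.10, 0.90],
              [0.60, 0.40]])

R = P @ Q
print((R >= 0).all())   # True: entries are non-negative
print(R.sum(axis=1))    # [1. 1.]: each row sums to 1
```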
It is very common to represent the matrix P with a graph called a state diagram, in which each edge represents the single-step transition probability between the connected vertices.
Figure 1: 2-state DTMC for a cascade of binary comm. channels. Signal values `0' or `1' form the state values.
3.2 n-Step Conditional Transition Probabilities Matrix
The n-step transition probability matrix is P^n; denote its elements by p_ij^(n) (the superscript refers to the number of steps). Since it holds that P^n = P^m · P^(n−m) (0 ≤ m ≤ n), we can write in component form

p_ij^(n) = Σ_k p_ik^(m) p_kj^(n−m)   (Chapman-Kolmogorov equation)
The Chapman-Kolmogorov equation simply expresses the law of total probability, where the transition in n steps from state i to state j is conditioned on the system being in state k after m steps.
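The factorization can be checked numerically; the two-state matrix below is an illustrative example:

```python
import numpy as np

# Numerical check of the Chapman-Kolmogorov equation in matrix form:
# P^n = P^m . P^(n-m) for any 0 <= m <= n. P is an illustrative example.
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

n, m = 5, 2
lhs = np.linalg.matrix_power(P, n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m)
print(np.allclose(lhs, rhs))  # True
```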
3.3 Unconditional State Probabilities π^(n) Vector
The vector π^(n) of unconditional state probabilities, with components π_j^(n) = P{X_n = j}, represents the probabilistic transient behavior of the DTMC:

π_i^(n) = Σ_k π_k^(n−1) p_ki,   or in matrix form   π^(n) = π^(n−1) P
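A minimal sketch of the transient evolution, assuming the chain starts in state 0 and reusing the illustrative two-state matrix from above:

```python
import numpy as np

# Iterate pi^(n) = pi^(n-1) P to follow the transient behavior of the DTMC.
# Assumptions for the sketch: illustrative P, initial distribution (1, 0).
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
pi = np.array([1.0, 0.0])  # pi^(0): start in state 0

for n in range(1, 6):
    pi = pi @ P            # pi^(n) = pi^(n-1) P
    print(n, pi)           # converges towards (2/3, 1/3)
```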
Example
Derive an expression for the n-step transition probability matrix of the binary cascaded channel:

P^n = [ 2/3 + (1/3)(1/4)^n    1/3 − (1/3)(1/4)^n ]
      [ 2/3 − (2/3)(1/4)^n    1/3 + (2/3)(1/4)^n ]
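The closed form can be verified against direct matrix powers. Setting n = 1 in the formula gives the single-step matrix P = [[3/4, 1/4], [1/2, 1/2]]; this is an inference from the formula, since the figure's transition probabilities are not reproduced in these notes:

```python
import numpy as np

# Compare the closed-form expression for P^n with numpy's matrix powers.
# Single-step matrix inferred by setting n = 1 in the closed form.
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

def Pn_closed_form(n):
    lam = 0.25 ** n
    return np.array([[2/3 + lam/3,   1/3 - lam/3],
                     [2/3 - 2*lam/3, 1/3 + 2*lam/3]])

for n in (1, 2, 5, 10):
    assert np.allclose(np.linalg.matrix_power(P, n), Pn_closed_form(n))
print("closed form matches matrix powers")
```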
3.4 Classification of states of a Markov chain
Communication is an equivalence relation: the states can be grouped into equivalence classes so that
- within each class all the states communicate with each other
- two states from two different classes never communicate with each other
If all the states of a Markov chain belong to the same communicating class,
the Markov chain is said to be irreducible.
A set of states is closed if none of its states leads to any of the states outside the set. A single state which alone forms a closed set is called an absorbing state: one may reach an absorbing state from other states, but one cannot get out of it.
A state is recurrent if, starting from it, the process returns to it with probability 1; otherwise it is transient.
* Recurrent states are further classified according to the expectation of the time T_ii it takes to return to the state:
- positive recurrent: the expectation of the first return time is < ∞
- null recurrent: the expectation of the first return time is = ∞
If the first return time of state i can only be a multiple of an integer d > 1, the state i is called periodic. Otherwise the state is aperiodic.
Figure 3: Single class of recurrent states (1 and 2) and one transient state (3)
Figure 4: Two classes of recurrent states (the class of state 1 and the class of states 4 and 5) and two transient states (2 and 3)
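Communicating classes can be found mechanically as the strongly connected components of the state diagram. The sketch below uses a hypothetical 5-state matrix shaped only to match the description of Figure 4 (the figure's actual probabilities are not reproduced here), with networkx assumed available:

```python
import numpy as np
import networkx as nx

# Hypothetical transition matrix matching Figure 4's structure: recurrent
# classes {1} and {4, 5}, transient states 2 and 3. Probabilities are made up.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # state 1: absorbing
    [0.3, 0.4, 0.3, 0.0, 0.0],   # state 2
    [0.0, 0.2, 0.3, 0.5, 0.0],   # state 3
    [0.0, 0.0, 0.0, 0.5, 0.5],   # state 4
    [0.0, 0.0, 0.0, 0.7, 0.3],   # state 5
])
states = [1, 2, 3, 4, 5]

G = nx.DiGraph()
for i in range(5):
    for j in range(5):
        if P[i, j] > 0:
            G.add_edge(states[i], states[j])

# Each strongly connected component is a communicating class; a closed class
# of a finite chain is recurrent, a non-closed one is transient.
for c in nx.strongly_connected_components(G):
    closed = all(P[i - 1, j - 1] == 0 for i in c for j in states if j not in c)
    print(sorted(c), "recurrent (closed)" if closed else "transient")
```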
3.5 State Sojourn Time
The distribution of the state sojourn time is ~ Geo(1 − p_ii), because at each step the exit from the state occurs with probability (1 − p_ii): P{sojourn time = k steps} = p_ii^(k−1) (1 − p_ii), with mean 1/(1 − p_ii).
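A Monte-Carlo sketch of this fact, with an illustrative self-loop probability p_ii = 0.75:

```python
import random

# Simulate the sojourn time in a state with self-loop probability p_ii and
# compare the sample mean with the geometric mean 1 / (1 - p_ii).
p_ii = 0.75
samples = []
for _ in range(100_000):
    steps = 1
    while random.random() < p_ii:  # stay another step with probability p_ii
        steps += 1
    samples.append(steps)

print(sum(samples) / len(samples))  # close to 1 / (1 - p_ii) = 4.0
```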
3.6 Global Balance Equations
For an irreducible, aperiodic DTMC the limiting distribution π = lim_{n→∞} π^(n), with components π_j = lim_{n→∞} p_ij^(n), does exist and is independent of the initial state; it satisfies

πP = π   and   Σ_i π_i = 1
Note that π_j defines the proportion of time (steps) the system stays in state j.
Note. An equilibrium does not mean that nothing happens in the system, but merely that the information on the initial state of the system has been forgotten, or washed out, because of the stochastic development.
For each state j, the global balance equation reads

π_j Σ_{k≠j} p_jk = Σ_{k≠j} π_k p_kj

- The LHS indicates the probability that the system is in state j and makes a transition to another state.
- The RHS indicates the probability that the system is in another state and makes a transition to state j.
Hence, at steady state, there are as many exits from state j as there are entries to it (balance of probability flows).
3.6.1 Solving Balance Equations
Write the balance condition for all but one of the states (n − 1 equations) and add the normalization condition Σ_i π_i = 1. Compactly,

π(P + E − I) = e

where E and e are the all-ones matrix and the all-ones vector, respectively. Note that πP = π and πE = e.
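A minimal numpy sketch of this trick (the two-state matrix is illustrative):

```python
import numpy as np

# Solve pi (P + E - I) = e for the stationary distribution. The system is
# transposed because numpy's solve() expects the form A x = b.
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
n = P.shape[0]

A = (P + np.ones((n, n)) - np.eye(n)).T
pi = np.linalg.solve(A, np.ones(n))

print(pi)           # [0.6667 0.3333]
print(pi @ P - pi)  # ~0: the balance equations hold
```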
3.6.2 Example: Cascaded Binary Channel
Solution

P = [ a      1 − a ]
    [ 1 − b  b     ]

πP = π → π_0 = a π_0 + (1 − b) π_1
π_0 + π_1 = 1

Solve the two equations together to get π_0 and π_1.
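Solving the two equations gives π_0 = (1 − b)/(2 − a − b) and π_1 = (1 − a)/(2 − a − b); a quick numeric check with illustrative values of a and b:

```python
import numpy as np

# Verify the closed-form stationary distribution of the two-state chain.
# a and b are illustrative parameter values.
a, b = 0.75, 0.5
P = np.array([[a, 1 - a],
              [1 - b, b]])

pi = np.array([(1 - b) / (2 - a - b), (1 - a) / (2 - a - b)])
print(np.allclose(pi @ P, pi), pi.sum())  # True 1.0
```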
4 Homework
A machine can be either working or broken down on a given day. If it is working, it will break down in the next day with probability b, and will continue working with probability 1 − b. If it breaks down on a given day, it will be repaired and be working in the next day with probability r, and will continue to be broken down with probability 1 − r. What is the steady-state probability that the machine is working on a given day?
If the machine remains broken for l days, despite the repair efforts, it is replaced by a new working machine. What is the steady-state probability that the machine is working on a given day when l = 3?