
Markov Chains
Lecture #5
Ahmed H. Zahran

Contents

1 Stochastic Process
2 Markov Process
3 Markov Chain
  3.1 Single-Step Conditional Transition Probability
  3.2 n-Step Conditional Transition Probabilities Matrix
  3.3 Unconditional State Probabilities π^(n) Vector
  3.4 Classification of states of a Markov chain
  3.5 State Sojourn Time
  3.6 Global Balance Equations
    3.6.1 Solving Balance Equations
    3.6.2 Example: Cascaded Binary Channel
    3.6.3 Example: Computer Program Analysis
4 Homework

1 Stochastic Process

- A stochastic process X_t (or X(t)) is a family of random variables indexed by a parameter t
  (usually the time). Formally, a stochastic process is a mapping from the sample space S to
  functions of t. With each element e of S is associated a function X_t(e); this function is a
  realization of the stochastic process (also called a trajectory or sample path).

  - For a given value of e, X_t(e) is a function of time
  - For a given value of t, X_t(e) is a random variable
  - For a given value of e and t, X_t(e) is a (fixed) number

- For any stochastic process:

  - State space: the set of possible values of X_t


  - Parameter space: the set of values of t

  - Stochastic processes can be classified according to whether these spaces are discrete or
    continuous.

  - According to the type of the parameter space, one speaks of discrete-time or
    continuous-time stochastic processes.

  - Discrete-time stochastic processes are also called random sequences.

- In considering stochastic processes we are often interested in quantities like:

  - Time-dependent distribution: defines the probability that X_t takes a value in a
    particular subset of S at a given instant t

  - Stationary distribution: defines the probability that X_t takes a value in a particular
    subset of S as t → ∞ (assuming the limit exists)

  - The relationships between X_s and X_t for different times s and t (e.g. covariance or
    correlation of X_s and X_t)

  - Hitting probability: the probability that a given state in S will ever be entered

  - First passage time: the instant at which the stochastic process first enters a given
    state or set of states, starting from a given initial state

2 Markov Process

- A stochastic process is called a Markov process when it has the Markov property:

  P{X_{t_n} ≤ x_n | X_{t_{n-1}} = x_{n-1}, ..., X_{t_1} = x_1} = P{X_{t_n} ≤ x_n | X_{t_{n-1}} = x_{n-1}},   ∀n, ∀t_1 < ... < t_n

  that is to say:

  - The future path of a Markov process, given its current state (X_{t_{n-1}}) and the past
    history before t_{n-1}, depends only on the current state (not on how this state has been
    reached).

  - The current state contains all the information (a summary of the past) that is needed to
    characterize the future (stochastic) behaviour of the process.

  - Given the state of the process at an instant, its future and past are independent.

3 Markov Chain

- In this course, we will use "Markov Chain" to refer to a Markov process that is discrete in
  both time and state. Such a process is also denoted DTMC (Discrete-Time Markov Chain).

- A Markov chain thus represents the evolution of the state X_n over a discrete time index
  n = 0, 1, ....

- We also focus on time-homogeneous Markov processes, which feature stationary probabilistic
  evolution; that is to say, the transition probabilities do not depend on n.


- A Markov process of this kind is characterized by the (one-step) transition probabilities
  (transition from state i to state j):

  p_ij = P(X_n = j | X_{n-1} = i)

- Hence, the probability of a state path can be expressed as

  P{X_0 = i_0, ..., X_n = i_n} = P{X_0 = i_0} p_{i_0 i_1} p_{i_1 i_2} ... p_{i_{n-1} i_n}
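As a quick illustration, the path-probability formula can be evaluated numerically. The sketch
below (Python with NumPy) uses a hypothetical 2-state transition matrix and an assumed initial
distribution; it simply multiplies the initial probability by the one-step probabilities along
the path.

import numpy as np

# Hypothetical one-step transition matrix (each row sums to 1).
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
pi0 = np.array([0.5, 0.5])   # assumed initial distribution P{X_0 = i}

def path_probability(P, pi0, path):
    # P{X_0 = i_0, ..., X_n = i_n} = P{X_0 = i_0} * p_{i_0 i_1} * ... * p_{i_{n-1} i_n}
    prob = pi0[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

print(path_probability(P, pi0, [0, 0, 1, 1]))   # 0.5 * 0.75 * 0.25 * 0.5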

3.1 Single-Step Conditional Transition Probability

- The single-step transition probability matrix, denoted P, is a conditional probability
  matrix whose entry p_ij describes the transition probability from state i (identified by the
  row index) to state j (identified by the column index):

      [ p_00  p_01  p_02  ... ]
  P = [ p_10  p_11  p_12  ... ]
      [  .     .     .    ... ]

  - Since the system always goes to some state, the sum of each row's probabilities is 1. A
    matrix with non-negative elements such that the sum of each row equals 1 is called a
    stochastic matrix.
  - One can easily show that the product of two stochastic matrices is a stochastic matrix.
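Both properties are easy to check numerically; a minimal sketch, assuming two arbitrary
stochastic matrices:

import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
Q = np.array([[0.10, 0.90],
              [0.60, 0.40]])

# Rows of a stochastic matrix sum to 1 ...
assert np.allclose(P.sum(axis=1), 1.0) and np.allclose(Q.sum(axis=1), 1.0)

# ... and the product of two stochastic matrices is again stochastic.
print((P @ Q).sum(axis=1))   # [1. 1.]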

- It is very common to represent the matrix P with a graph called a state diagram, in which

  - each vertex represents a state, and
  - each edge represents the single-step transition probability between the connected
    vertices.

Figure 1: 2-state DTMC for a cascade of binary comm. channels. The signal values `0' and `1'
form the state values.


3.2 n-Step Conditional Transition Probabilities Matrix

- The probability that the system, initially in state i, will be in state j after two steps is
  Σ_k p_ik p_kj, which takes into account all paths via an intermediate state k.

  - Clearly this is the element {i, j} of the matrix P^2.
  - Similarly, one finds that the n-step transition probability matrix is P^n.

- Denote its elements by p_ij^(n) (the superscript refers to the number of steps). Since
  P^n = P^m · P^{n-m} (0 ≤ m ≤ n), we can write in component form

  p_ij^(n) = Σ_k p_ik^(m) p_kj^(n-m)     (Chapman-Kolmogorov equation)

- The Chapman-Kolmogorov equation simply expresses the law of total probability, where the
  transition in n steps from state i to state j is conditioned on the system being in state k
  after m steps.
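Numerically, P^n is just a matrix power, and the Chapman-Kolmogorov identity can be verified
directly; a sketch using the same hypothetical 2-state matrix as before:

import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])   # hypothetical one-step matrix

n, m = 5, 2
# P^n = P^m @ P^(n-m) for any 0 <= m <= n (Chapman-Kolmogorov in matrix form).
Pn = np.linalg.matrix_power(P, n)
Pm_Pnm = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m)
print(np.allclose(Pn, Pm_Pnm))   # True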

3.3 Unconditional State Probabilities π^(n) Vector

- The unconditional state probability π_i^(n) represents the probability that the process is
  in state i at time n, i.e. P{X_n = i}.

- By arranging these probabilities in a vector, we obtain the unconditional state probability
  vector π^(n) = [π_1^(n) π_2^(n) ...].

- Note that π^(n) represents the probabilistic transient behavior of the DTMC.

- By the law of total probability we have

  P{X_n = i} = Σ_k P{X_n = i | X_{n-1} = k} P{X_{n-1} = k}

  π_i^(n) = Σ_k π_k^(n-1) p_ki

  or, in vector form, π^(n) = π^(n-1) P.

- Recursively, we obtain

  π^(n) = π^(0) P^n

The Beauty of Markov Chains

Given the initial probabilities, and by repeated use of the one-step transition
probabilities (i.e. the n-step transition probability matrix), we can determine the
state pmf at any step n.

Example


- Derive an expression for the n-step transition probability matrix of the binary cascaded
  channel:

        [ 2/3 + (1/3)(1/4)^n    1/3 - (1/3)(1/4)^n ]
  P^n = [                                          ]
        [ 2/3 - (2/3)(1/4)^n    1/3 + (2/3)(1/4)^n ]

  - What will happen to P^n as n → ∞?
  - Assuming an initial state distribution of [p q], what would be the state distribution as
    n → ∞?
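Setting n = 1 in the closed form recovers the one-step matrix P = [[3/4, 1/4], [1/2, 1/2]].
The sketch below checks the closed form against numerical matrix powers and illustrates the
limit: every row of P^n tends to [2/3, 1/3], so the limiting distribution does not depend on
the initial [p q].

import numpy as np

# One-step matrix recovered by setting n = 1 in the closed form above.
P = np.array([[3/4, 1/4],
              [1/2, 1/2]])

def Pn_closed(n):
    r = (1/4)**n
    return np.array([[2/3 + r/3,   1/3 - r/3],
                     [2/3 - 2*r/3, 1/3 + 2*r/3]])

for n in (1, 2, 10):
    assert np.allclose(np.linalg.matrix_power(P, n), Pn_closed(n))

# As n -> infinity, (1/4)^n -> 0: each row tends to [2/3, 1/3], so for any
# initial distribution [p, q], [p, q] @ P^n -> [2/3, 1/3].
print(np.linalg.matrix_power(P, 50))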

3.4 Classification of states of a Markov chain

- State i leads to state j (written i → j) if there is a path i_0 = i, i_1, ..., i_n = j such
  that all the transition probabilities are positive, p_{i_k, i_{k+1}} > 0, k = 0, ..., n-1.
  Then (P^n)_ij > 0.

- States i and j communicate (written i ↔ j) if i → j and j → i.

- Communication is an equivalence relation: the states can be grouped into equivalence classes
  so that

  - within each class all the states communicate with each other
  - two states from two different classes never communicate with each other

  If all the states of a Markov chain belong to the same communicating class, the Markov chain
  is said to be irreducible.

- A set of states is closed if none of its states leads to any of the states outside the set.

Figure 2: Single class of recurrent states

- A single state which alone forms a closed set is called an absorbing state.

  - For an absorbing state we have p_ii = 1.


  - One may reach an absorbing state from other states, but one cannot get out of it.

- Each state is either transient or recurrent.

  - A state i is transient if there is a nonzero probability that the Markov chain will never
    return to that state again.

  - A state i is recurrent if the probability of returning to the state is 1, i.e. with
    certainty, the system eventually returns to the state.

    * Recurrent states are further classified according to the expectation of the time T_ii
      it takes to return to the state:
      - positive recurrent: expectation of first return time < ∞
      - null recurrent: expectation of first return time = ∞
    * If the first return time of state i can only be a multiple of an integer d > 1, the
      state i is called periodic. Otherwise the state is aperiodic.

  Type                 # visits   E[T_ii]
  Transient            < ∞        ∞
  Null Recurrent       ∞          ∞
  Positive Recurrent   ∞          < ∞

Figure 3: Single class of recurrent states (1 and 2) and one transient state (3)

Figure 4: Two classes of recurrent states (the class of state 1 and the class of states 4 and
5) and two transient states (2 and 3)

3.5 State Sojourn Time

- The number of steps the system consecutively stays in state i is geometrically distributed.

- The state sojourn time is distributed as Geo(1 - p_ii), because the exit from the state
  occurs at each step with probability 1 - p_ii.
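A short simulation confirms the geometric sojourn time; a sketch assuming a hypothetical
self-loop probability p_ii = 0.75, so the mean sojourn time should be 1/(1 - p_ii) = 4 steps:

import numpy as np

rng = np.random.default_rng(0)
p_ii = 0.75   # hypothetical self-loop probability

def sojourn_steps(rng, p_ii):
    # Count consecutive steps spent in state i; each step the chain stays
    # with probability p_ii and exits with probability 1 - p_ii.
    steps = 1
    while rng.random() < p_ii:
        steps += 1
    return steps

samples = [sojourn_steps(rng, p_ii) for _ in range(100_000)]
print(np.mean(samples))   # ~4.0, the mean of Geo(1 - p_ii)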


Theorem: Steady-state probability distributions in a DTMC

In an irreducible and aperiodic DTMC with positive recurrent (finite) states:

- the limiting distribution π = lim_{n→∞} π^(n) exists, with π_j = lim_{n→∞} p_ij^(n);

- π is independent of the initial probability distribution π^(0);

- π is the unique stationary probability distribution (AKA the steady-state probability
  vector, or equilibrium distribution);

- π can be computed by solving

  π P = π   and   Σ_i π_i = 1

- Note that π can also be computed as the left eigenvector of the matrix P belonging to the
  eigenvalue 1.

- Note that π_j defines the long-run proportion of time (steps) the system stays in state j.

Note. An equilibrium does not mean that nothing happens in the system, but merely that the
information on the initial state of the system has been "forgotten" or "washed out" by the
stochastic development.
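The eigenvector characterization translates directly into code. A sketch (reusing the
hypothetical matrix from the earlier examples) that extracts the left eigenvector of P for
eigenvalue 1 and normalizes it:

import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

# pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
# i.e. a right eigenvector of P transposed.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))   # locate the eigenvalue 1
pi = np.real(vecs[:, k])
pi /= pi.sum()                      # normalize so that sum_i pi_i = 1
print(pi)                           # [0.6667 0.3333]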

3.6 Global Balance Equations

- The equation π P = π, or π_j = Σ_i π_i p_ij, is called the (global) balance condition. Since
  Σ_k p_jk = 1, this equation can be rewritten as

  Σ_k π_j p_jk = Σ_i π_i p_ij

  - The LHS is the probability that the system is in state j and makes a transition.
  - The RHS is the probability that the system is in some state i and makes a transition to
    state j.
  - Hence, at steady state, there are as many exits from state j as there are entries to it
    (balance of probability flows).


3.6.1 Solving Balance Equations

- In general, the equilibrium equations are solved as follows (assume a finite state space
  with n states):

  - Write the balance condition for all but one of the states (n - 1 equations).

    * These equations fix the relative values of the equilibrium probabilities.
    * The solution is determined up to a constant factor.

  - The last balance equation (automatically satisfied) is replaced by the normalization
    condition Σ_i π_i = 1.

- Using a computer, one can solve the system of equations by solving

  π (P + E - I) = e

  where E and e are the all-ones matrix and vector, respectively. Note that π P = π and
  π E = e.
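A sketch of this computational trick, again with the hypothetical 2-state matrix: since
π (P + E - I) = e is a non-singular linear system, one call to a linear solver yields π with
the normalization already built in.

import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
n = P.shape[0]

# pi (P + E - I) = e folds the normalization sum_i pi_i = 1 into the
# otherwise singular system pi P = pi (E: all-ones matrix, e: all-ones vector).
A = P + np.ones((n, n)) - np.eye(n)
pi = np.linalg.solve(A.T, np.ones(n))   # transpose because pi is a row vector
print(pi)                               # [0.6667 0.3333]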

3.6.2 Example: Cascaded Binary Channel

Calculate the stationary distribution of the cascaded binary channel.

Solution

      [  a    1-a ]
  P = [ 1-b    b  ]

  π P = π  →  π_0 = a π_0 + (1-b) π_1
  π_0 + π_1 = 1

Solving the two equations together gives

  π_0 = (1-b)/(2-a-b),   π_1 = (1-a)/(2-a-b)

3.6.3 Example: Computer Program Analysis

A typical computer program runs a continuous cycle of compute and I/O phases.

The resulting DTMC is irreducible with period 1.

      [ q_0  q_1  ...  q_m ]
      [  1    0   ...   0  ]
  P = [  1    0   ...   0  ]
      [  .    .   ...   .  ]
      [  1    0   ...   0  ]

  π_0 = π_0 q_0 + Σ_{i=1}^{m} π_i


  π_j = π_0 q_j   for all j = 1, ..., m

  Σ_i π_i = 1
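Combining π_j = π_0 q_j with the normalization gives π_0 (2 - q_0) = 1, i.e.
π_0 = 1/(2 - q_0) and π_j = q_j/(2 - q_0). A sketch with a hypothetical profile
q = [0.4, 0.3, 0.2, 0.1] that cross-checks this closed form against the linear-solver
method of Section 3.6.1:

import numpy as np

# Hypothetical compute/I-O profile: q_0 = P(stay in the compute state),
# q_j = P(issue I/O on device j); here m = 3 devices.
q = np.array([0.4, 0.3, 0.2, 0.1])
m = len(q) - 1

# Build P: row 0 is q; every I/O state j returns to the compute state 0.
P = np.zeros((m + 1, m + 1))
P[0, :] = q
P[1:, 0] = 1.0

# Stationary distribution via pi (P + E - I) = e (Section 3.6.1).
n = m + 1
A = P + np.ones((n, n)) - np.eye(n)
pi = np.linalg.solve(A.T, np.ones(n))

print(pi)                                  # [0.625 0.1875 0.125 0.0625]
print(1/(2 - q[0]), q[1:]/(2 - q[0]))      # matches the closed form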

4 Homework

- A machine can be either working or broken down on a given day. If it is working, it will
  break down the next day with probability b, and will continue working with probability
  1 - b. If it breaks down on a given day, it will be repaired and be working the next day
  with probability r, and will continue to be broken down with probability 1 - r. What is the
  steady-state probability that the machine is working on a given day?

- If the machine remains broken for l consecutive days, despite the repair efforts, it is
  replaced by a new working machine. What is the steady-state probability that the machine is
  working on a given day when l = 3?
