Markov Process: Properties, Analysis and Applications
AJAY KUMAR
Assistant Professor
Atal Bihari Vajpayee - Indian Institute of Information Technology & Management, Gwalior
Morena Link Road, Gwalior-474010
MP, India
1 / 113
Stochastic Process: A stochastic process is a collection of
random variables, {X (t) : t ∈ T }, where X (t) is a random variable
for each t ∈ T .
2 / 113
- Chain: If the state space S of a stochastic process {X(t) : t ∈ T} is countable, the process is said to be a chain.
- Based on the countable or continuous nature of the index set and the state space, we have the following types of stochastic processes:
  - Discrete time, discrete state space processes
  - Discrete time, continuous state space processes
  - Continuous time, discrete state space processes
  - Continuous time, continuous state space processes
3 / 113
Example: Consider a single machine prone to failures and examine its state at discrete points of time t0, t1, t2, ···. Define its state as 0 if the machine is up and 1 if the machine is down. The machine keeps changing its state with time. Let Xi denote the state of the machine at time ti, i = 0, 1, 2, ···; then {Xi : i ∈ N} is a stochastic process.
4 / 113
Markov Process: A Markov process (or chain) is a stochastic process with the property that the probability of transition from a given state to any future state depends only on the present state and not on the manner in which it was reached. Equivalently, a stochastic process is called a Markov process if the occurrence of the future state depends on the immediately preceding state and only on it. Thus, if t0, t1, t2, ··· represent points on the time scale, then the family of random variables {X(tn)} is said to be a Markov process if it satisfies the following property:
P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n−1}) = x_{n−1}, ..., X(t_0) = x_0] = P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n]
5 / 113
Transition Probability: The probability of moving from one state to another is called a transition probability. Mathematically, the one-step transition probability from state s_i to state s_j is
p_ij = P[X_{n+1} = s_j | X_n = s_i],
and the matrix P = [p_ij] is called the transition probability matrix (TPM).
6 / 113
Properties of TPM:
- P is a square matrix
- 0 ≤ p_ij ≤ 1, ∀ i, j
- The sum along each row is unity, i.e., Σ_{j=1}^{m} p_ij = 1
7 / 113
Discrete Time Markov Chain (DTMC): A discrete time Markov chain is a stochastic process {Xn : n ∈ N} with countable state space S such that, for all n ∈ N and for all xi ∈ S, i = 0, 1, 2, ···, n:
P[X_{n+1} = x_{n+1} | X_n = x_n, X_{n−1} = x_{n−1}, ..., X_0 = x_0] = P[X_{n+1} = x_{n+1} | X_n = x_n]
This implies that, given the current state of the system, the future is independent of the past.
8 / 113
Let P[s_j(0)] be the probability that the system is in state s_j at time t = t0, and let P = [p_ij] be the one-step transition probability matrix.
9 / 113
Question: What is the probability that the system will occupy state s_j after n transitions, if its state at n = 0 is known?
Let P[s_j(n+1)] = the probability that the system is in state s_j at time t = t_{n+1}. Then:
P[s_j(n+1)] = Σ_{i=1}^{m} P[s_i(n)] p_ij,   j = 1, 2, ..., m
10 / 113
The above system can be written as:
P(n + 1) = P(n) · P
P(1) = P(0) · P
P(2) = P(1) · P = P(0) · P²
P(3) = P(2) · P = P(0) · P³
··· ··· ··· ···
P(k) = P(k − 1) · P = P(0) · P^k
11 / 113
Question: Gwalior city is divided into three zones A, B and C.
From the records of drivers, the following information is available.
Out of passengers picked up in zone A (B) (C) 40% (30%) (30%)
go to a destination in zone A, 40% (40%) (50%) are taken to a
destination in zone B, 20% (30%) (20%) go to a destination in
zone C. Suppose in the beginning of the day 30% of the taxis are
in zone A, 30% are in zone B and 40% are in zone C. What is the
distribution of taxis after each taxi has completed one ride, and after two rides? Also
determine the long run distribution of taxis.
Answer:
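The arithmetic can be checked with a short computation; the sketch below (assuming NumPy is available) builds the transition matrix row by row from the percentages stated above and applies P(k) = P(0) · P^k, then solves for the long-run distribution.

```python
import numpy as np

# Rows: pickup zone A, B, C; columns: destination zone A, B, C
P = np.array([[0.4, 0.4, 0.2],
              [0.3, 0.4, 0.3],
              [0.3, 0.5, 0.2]])

p0 = np.array([0.3, 0.3, 0.4])           # initial distribution of taxis

p1 = p0 @ P                              # distribution after one ride
p2 = p0 @ np.linalg.matrix_power(P, 2)   # distribution after two rides

# Long-run distribution: solve pi = pi*P together with sum(pi) = 1
M = np.vstack([P.T - np.eye(3), np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)

print("after one ride:", p1)
print("after two rides:", p2)
print("long run:", pi)
```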
12 / 113
Limiting State Probabilities: If, as n → ∞, P(n) approaches a constant vector, the limiting state probabilities exist and are independent of the initial conditions. The process in this case is said to be an ergodic process.
Let lim_{n→∞} P(n) = π; then:
π = π · P,  with Σ_{j=1}^{m} π_j = 1,
where π is a 1 × m vector.
13 / 113
- Transient State: If the system can leave the state but never return to it, e.g., s1
- Trapping or Absorbing State: If the system can enter the state and cannot leave it, e.g., s2
- Recurrent Chain: A recurrent chain is a collection of all trapping states.
[Transition diagram: states s1 and s2, with transition probabilities 0.25 and 0.75]
14 / 113
Continuous Time Markov Chain (CTMC): A discrete state space, continuous time process {X(t) : t ≥ 0} with state space S is called a CTMC if the following Markov (memoryless) property is satisfied for all s ≥ 0, u ≥ 0, t ≥ s and i, j, x(u) ∈ S:
P[X(t) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s] = P[X(t) = j | X(s) = i]
15 / 113
Question: Let P[s_i(t)] = probability that the system occupies state i at time t. Find the probability of being in state j at time t + Δt.
Note that a_ij Δt is (to first order) the probability that the system moves from state s_i to state s_j during an interval of length Δt.
16 / 113
Substituting,
P[s_j(t + Δt)] = P[s_j(t)] (1 − Σ_{i≠j} a_ji Δt) + Σ_{i≠j} P[s_i(t)] a_ij Δt,   j = 1, 2, 3, ..., m
P[s_j(t + Δt)] = P[s_j(t)] (1 + a_jj Δt) + Σ_{i≠j} P[s_i(t)] a_ij Δt
or
P[s_j(t + Δt)] − P[s_j(t)] = Σ_{i=1}^{m} P[s_i(t)] a_ij Δt
17 / 113
Dividing by Δt and taking the limit Δt → 0 leads to:
d/dt P[s_j(t)] = Σ_{i=1}^{m} P[s_i(t)] a_ij;   j = 1, 2, ..., m
Next, define P(t) = [P[s_1(t)], P[s_2(t)], ..., P[s_m(t)]] and A = [a_ij]. The system can then be written as dP(t)/dt = P(t) · A, with solution:
P(t) = P(0) e^{At}
18 / 113
Steady State Solution: As time t → ∞, P(t) will be constant, and so from
dP(t)/dt = P(t) · A
we have:
0 = P(∞) · A = π · A
⇒ rate of leaving a state = rate of entering it
19 / 113
Example: Consider a 2-component system, which has the following four states:
- State 1: both components working
- State 2: component 1 failed, component 2 working
- State 3: component 2 failed, component 1 working
- State 4: both components failed
Assumptions: the components fail independently, with constant failure rates λ1 and λ2, and there is no repair.
20 / 113
- If the two components are in parallel configuration, only state 4 results in system failure, so the reliability is given by:
Rs(t) = P1(t) + P2(t) + P3(t) = 1 − P4(t)
21 / 113
The transition diagram is given below. The corresponding state equations are:
dP1(t)/dt + (λ1 + λ2) P1(t) = 0
dP2(t)/dt + λ2 P2(t) = λ1 P1(t)
dP3(t)/dt + λ1 P3(t) = λ2 P1(t)
dP4(t)/dt = λ1 P3(t) + λ2 P2(t)
22 / 113
The probabilities are given by:
P1(t) = e^{−(λ1+λ2)t}
P2(t) = e^{−λ2 t} − e^{−(λ1+λ2)t}
P3(t) = e^{−λ1 t} − e^{−(λ1+λ2)t}
P4(t) = 1 − P1(t) − P2(t) − P3(t)
23 / 113
Load-Sharing System: A system has two components working in parallel configuration, and there is a dependency between the components: if one component fails, the failure rate of the other component increases as a result of the additional load placed on it. Assume that λ1⁺ and λ2⁺ are the increased failure rates of component-1 and component-2, respectively.
24 / 113
The transition diagram is shown below. The state equations are:
dP1(t)/dt + (λ1 + λ2) P1(t) = 0
dP2(t)/dt + λ2⁺ P2(t) = λ1 P1(t)
dP3(t)/dt + λ1⁺ P3(t) = λ2 P1(t)
25 / 113
The solution of the system is given below:
P1(t) = e^{−(λ1+λ2)t}
P2(t) = [λ1/(λ1 + λ2 − λ2⁺)] (e^{−λ2⁺ t} − e^{−(λ1+λ2)t})
P3(t) = [λ2/(λ1 + λ2 − λ1⁺)] (e^{−λ1⁺ t} − e^{−(λ1+λ2)t})
26 / 113
Standby Systems: Two components, A & B, work in standby redundancy mode, i.e., if A fails, B starts functioning. Such systems are called standby systems.
The failure rate of the standby unit depends on the state of the primary (on-line) unit. We may take λ2⁻ to be the (reduced) failure rate of the standby unit while it is in standby mode, and λ2 its failure rate once it is switched on-line.
27 / 113
The transition diagram is shown below. The state equations are:
dP1(t)/dt + (λ1 + λ2⁻) P1(t) = 0
dP2(t)/dt + λ2 P2(t) = λ1 P1(t)
dP3(t)/dt + λ1 P3(t) = λ2⁻ P1(t)
28 / 113
The solution of the system is given below:
P1(t) = e^{−(λ1+λ2⁻)t}
P2(t) = [λ1/(λ1 + λ2⁻ − λ2)] (e^{−λ2 t} − e^{−(λ1+λ2⁻)t})
P3(t) = e^{−λ1 t} − e^{−(λ1+λ2⁻)t}
and
MTTF = 1/λ1 + λ1/(λ2 (λ1 + λ2⁻))
29 / 113
Degraded Systems: Some systems may continue to operate in a
degraded state.
Define the states as fully operational (state 1), degraded (state 2)
and failed (state 3). The transition diagram in this case is shown
below:
30 / 113
The system of ODEs can be generated as:
dP1(t)/dt + (λ1 + λ2) P1(t) = 0
dP2(t)/dt + λ3 P2(t) = λ2 P1(t)
Solution:
P1(t) = e^{−(λ1+λ2)t}
P2(t) = [λ2/(λ1 + λ2 − λ3)] (e^{−λ3 t} − e^{−(λ1+λ2)t})
and the reliability is R(t) = P1(t) + P2(t).
31 / 113
Availability: Availability is the probability that a system or
component is performing its required function at a given point in
time when operated and maintained in a prescribed manner.
Mathematically,
Availability = uptime / (uptime + downtime) = MTBF / (MTBF + MTTR)
32 / 113
Calculate point availability and steady state availability for:
33 / 113
Example: Consider a CTMC with three states:
0: set up, 1: processing, and 2: down.
The transition diagram is given below.
[Transition diagram: 0 → 1 at rate s, 1 → 0 at rate p, 1 → 2 at rate f, 2 → 1 at rate r]
Find the ODEs for the above transition diagram and determine the steady-state probabilities π0, π1 and π2.
34 / 113
For steady state, the values of π0, π1 and π2 are given by:
π0 = pr/(pr + rs + fs),   π1 = rs/(pr + rs + fs),   π2 = fs/(pr + rs + fs)
Moreover, if s = 20 per hour, p = 4 per hour, f = 0.05 per hour, and r = 1 per hour, then pr = 4, rs = 20 and fs = 1, so:
π0 = 4/25 = 0.16,   π1 = 20/25 = 0.80,   π2 = 1/25 = 0.04
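As a check, the sketch below (NumPy assumed) builds a generator matrix consistent with the transition diagram above, solves π · A = 0 numerically, and compares the result with the closed-form expressions; the matrix A written here is an assumption inferred from the diagram.

```python
import numpy as np

s, p, f, r = 20.0, 4.0, 0.05, 1.0      # rates per hour from the example

# Generator inferred from the diagram: 0 -s-> 1, 1 -p-> 0, 1 -f-> 2, 2 -r-> 1
A = np.array([[-s,        s,  0.0],
              [ p, -(p + f),    f],
              [0.0,       r,   -r]])

# Solve pi*A = 0 together with sum(pi) = 1 as a least-squares problem
M = np.vstack([A.T, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)

den = p * r + r * s + f * s
print("numerical  :", pi)
print("closed form:", np.array([p * r, r * s, f * s]) / den)
```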
35 / 113
Example: Consider a system with two machines, M1 and M2 ,
working in parallel configuration. Machine M1 is fast and M2 is a
slow machine. Raw parts are always available and have to undergo
one operation each, which may be carried out either on M1 or on
M2 . Each machine will immediately start processing, after finishing
the processing of one part. Each machine can fail randomly, and
after possible repair, it will resume processing.
The objective is to find CTMC for the above system, and to
analyze the system performance, with:
- no repair facility
- single repair facility
- two identical repair facilities
36 / 113
The system can be modeled using a CTMC with four states given by:
- State 0: both machines up
- State 1: M1 down, M2 up
- State 2: M2 down, M1 up
- State 3: both machines down
Solution:
[Transition diagram for the no-repair case: 0 → 1 and 0 → 2 at rate f, 1 → 3 and 2 → 3 at rate f]
37 / 113
- For one repair facility, assume that in state 3, where both the machines are down, the repair facility will spend its time equally between M1 and M2.
[Transition diagram: 0 → 1 and 0 → 2 at rate f; 1 → 3 and 2 → 3 at rate f; 1 → 0 and 2 → 0 at rate r; 3 → 1 and 3 → 2 at rate r/2]
The steady-state probabilities are:
π0 = r²/(r² + 2fr + 2f²)
π1 = π2 = fr/(r² + 2fr + 2f²)
π3 = 2f²/(r² + 2fr + 2f²)
38 / 113
- When there are two identical repair facilities, the CTMC model is similar to that in the last figure, except that the directed arcs from state 3 to state 2 and from state 3 to state 1 are now labeled r instead of r/2. The steady-state probabilities become:
π0 = r²/(r² + 2fr + f²)
π1 = π2 = fr/(r² + 2fr + f²)
π3 = f²/(r² + 2fr + f²)
39 / 113
Computational Techniques for CTMC
- Transient or Instantaneous Analysis
  - Solution of the system of linear ODEs
- Steady State Analysis
  - Solution of π · A = 0
40 / 113
- Methods for Steady-State Analysis
  - Power method
  - SOR
  - Gauss elimination method
  - Gauss-Seidel method
- Methods for Transient Analysis
  - Fully symbolic method (Laplace transformation)
  - Euler's method
  - Runge-Kutta fourth-order method
41 / 113
Methods for Steady State Analysis
42 / 113
Power Method: The problem is to solve π · A = 0. Choose a constant a ≥ max_i |a_ii| and set A* = I + A/a; then π · A = 0 is equivalent to π = π · A*, which can be solved by the iteration π^(k+1) = π^(k) · A*, starting from any probability vector π^(0) and continuing until convergence.
43 / 113
Example: Consider the following transition rate matrix to solve π · A = 0:

A = [ −2λ   2λc    0      2λ(1−c) ]
    [  0    −λ     λc     λ(1−c)  ]
    [  0     0     0      0       ]
    [  0     0     0      0       ]

Choosing a = 2λ, we get:

A* = I + A/a = [ 0    c     0      1−c     ]
               [ 0    1/2   c/2    (1−c)/2 ]
               [ 0    0     1      0       ]
               [ 0    0     0      1       ]
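A minimal power-method sketch is given below (NumPy assumed). It forms A* = I + A/a and iterates π^(k+1) = π^(k) · A*; as a concrete test it reuses the three-state machine generator from the earlier setup/processing/down example (that matrix is itself an assumption inferred from its transition diagram).

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=100_000):
    """Solve pi*A = 0 by iterating pi <- pi*(I + A/a)."""
    n = A.shape[0]
    a = 1.1 * np.max(np.abs(np.diag(A)))      # any a > max |a_ii| works
    A_star = np.eye(n) + A / a
    pi = np.full(n, 1.0 / n)                  # start from the uniform vector
    for _ in range(max_iter):
        pi_new = pi @ A_star
        pi_new /= pi_new.sum()                # keep it a probability vector
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi

s, p, f, r = 20.0, 4.0, 0.05, 1.0
A = np.array([[-s, s, 0.0],
              [p, -(p + f), f],
              [0.0, r, -r]])
print(power_method(A))    # should approach [0.16, 0.80, 0.04]
```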
44 / 113
SOR (Successive Over-Relaxation) Method: Let A be a symmetric matrix. The linear system Ax = b can then be solved using the iteration
x_i^(k+1) = (1 − ω) x_i^(k) + (ω / a_ii) (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)),   i = 1, 2, ..., n,
where 0 < ω < 2 is the relaxation parameter (ω = 1 gives the Gauss-Seidel method).
45 / 113
Gauss Elimination Method: To solve the system Ax = b, at elimination step k compute
a_ij^(k+1) = a_ij^(k) − (a_ik^(k) / a_kk^(k)) a_kj^(k),
i = k+1, k+2, ..., n;   j = k+1, k+2, ..., n, n+1,
where a_ij^(1) = a_ij (the (n+1)-th column of the augmented matrix holds b).
- Being a direct method, it gives the exact solution (in exact arithmetic).
46 / 113
Example: Solve the system of equations:

[ 2  1  1  −2 ] [x1]   [−10]
[ 4  0  2   1 ] [x2] = [  8]
[ 3  2  2   0 ] [x3]   [  7]
[ 1  3  2  −1 ] [x4]   [ −5]

[A|b] = [ 2  1  1  −2 | −10 ]
        [ 4  0  2   1 |   8 ]    R1 ↔ R2
        [ 3  2  2   0 |   7 ]
        [ 1  3  2  −1 |  −5 ]

      = [ 4  0  2   1 |   8 ]
        [ 2  1  1  −2 | −10 ]    R2 − (1/2)R1, R3 − (3/4)R1, R4 − (1/4)R1
        [ 3  2  2   0 |   7 ]
        [ 1  3  2  −1 |  −5 ]
47 / 113
      = [ 4  0    2      1   |   8 ]
        [ 0  1    0    −5/2  | −14 ]    R2 ↔ R4
        [ 0  2   1/2   −3/4  |   1 ]
        [ 0  3   3/2   −5/4  |  −7 ]

      = [ 4  0    2      1   |   8 ]
        [ 0  3   3/2   −5/4  |  −7 ]    R3 − (2/3)R2, R4 − (1/3)R2
        [ 0  2   1/2   −3/4  |   1 ]
        [ 0  1    0    −5/2  | −14 ]

      = [ 4  0    2       1    |    8  ]
        [ 0  3   3/2    −5/4   |   −7  ]    R4 − R3
        [ 0  0  −1/2    1/12   |  17/3 ]
        [ 0  0  −1/2  −25/12   | −35/3 ]
48 / 113
      = [ 4  0    2      1    |    8  ]
        [ 0  3   3/2   −5/4   |   −7  ]
        [ 0  0  −1/2   1/12   |  17/3 ]
        [ 0  0    0   −13/6   | −52/3 ]

Finally, we get:

[ 4  0    2      1   ] [x1]   [    8 ]
[ 0  3   3/2   −5/4  ] [x2] = [   −7 ]
[ 0  0  −1/2   1/12  ] [x3]   [ 17/3 ]
[ 0  0    0   −13/6  ] [x4]   [−52/3 ]
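Back substitution on this upper-triangular system completes the example; a small sketch is given below (exact fractions for the back substitution, NumPy only for the final check).

```python
import numpy as np
from fractions import Fraction as F

# Upper-triangular system obtained above: U x = c
U = [[F(4), F(0), F(2),      F(1)],
     [F(0), F(3), F(3, 2),  F(-5, 4)],
     [F(0), F(0), F(-1, 2), F(1, 12)],
     [F(0), F(0), F(0),     F(-13, 6)]]
c = [F(8), F(-7), F(17, 3), F(-52, 3)]

n = 4
x = [F(0)] * n
for i in range(n - 1, -1, -1):                 # back substitution
    x[i] = (c[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]

print([str(v) for v in x])                     # exact solution: 5, 6, -10, 8

# Check against the original system
A = np.array([[2, 1, 1, -2], [4, 0, 2, 1], [3, 2, 2, 0], [1, 3, 2, -1]], dtype=float)
b = np.array([-10, 8, 7, -5], dtype=float)
print(np.linalg.solve(A, b))
```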
49 / 113
Methods for Transient Analysis
50 / 113
Fully Symbolic Method
51 / 113
Using Matrix Theory: The solution of the Kolmogorov differential equations is:
P(t) = P(0) e^{At}
Now, the problem is to compute e^{At}:
- Input matrix A
- Find the eigenvalues and eigenvectors of matrix A
- A = V · D · V⁻¹ and e^A = V · e^D · V⁻¹, where V is the collection of eigenvectors and D is the diagonal matrix whose diagonal elements are the eigenvalues of A
- In MATLAB, e^A can be obtained directly using:
  - Y = expm(A), or
  - [V,D] = EIG(A); EXPM(A) = V*diag(exp(diag(D)))/V
  (an equivalent computation in Python is sketched below)
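For readers not using MATLAB, a minimal equivalent sketch in Python (assuming SciPy is available); the two-state generator and the rates lam, mu used here are hypothetical illustration values.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-state generator matrix (rows sum to zero)
lam, mu = 0.5, 2.0
A = np.array([[-lam, lam],
              [ mu, -mu]])

P0 = np.array([1.0, 0.0])      # initial state probabilities
t = 1.5
Pt = P0 @ expm(A * t)          # P(t) = P(0) * exp(At)
print(Pt, Pt.sum())            # state probabilities at time t; should sum to 1
```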
52 / 113
Euler Method: Consider the differential equation
du/dt = f(t, u),   u(t0) = u0.
With step size h and grid points t_j = t_0 + jh, the Euler approximation is:
u_{j+1} = u_j + h f_j,   j = 0, 1, 2, ..., N − 1,   where f_j = f(t_j, u_j)
53 / 113
Example: Solve the ODE:
u′ = −2tu²,   u(0) = 1
We have:
u_{j+1} = u_j − 2h t_j u_j²
with h = 0.2 and u0 = 1
u(0.2) = u1 = 1.0
u(0.4) = u2 = 0.92
u(0.6) = u3 = 0.78458
u(0.8) = u4 = 0.63684
u(1) = u5 = 0.50706
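A short sketch reproducing this iteration (plain Python) is given below.

```python
def euler(f, t0, u0, h, n_steps):
    """Explicit Euler: u_{j+1} = u_j + h*f(t_j, u_j)."""
    t, u = t0, u0
    values = [u0]
    for _ in range(n_steps):
        u = u + h * f(t, u)
        t = t + h
        values.append(u)
    return values

# u' = -2*t*u^2, u(0) = 1, with h = 0.2
print(euler(lambda t, u: -2.0 * t * u * u, 0.0, 1.0, 0.2, 5))
# expected: 1.0, 1.0, 0.92, 0.78458, 0.63684, 0.50706 (the values listed above)
```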
54 / 113
Runge-Kutta Method of Fourth Order (RK4): Consider the differential equation
du/dt = f(t, u),   u(t0) = u0.
Then:
u_{i+1} = u_i + (1/6)[k1 + 2k2 + 2k3 + k4]
where:
k1 = h f(t_i, u_i)
k2 = h f(t_i + h/2, u_i + k1/2)
k3 = h f(t_i + h/2, u_i + k2/2)
k4 = h f(t_i + h, u_i + k3)
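A corresponding RK4 sketch (plain Python), applied to the same test problem as in the Euler example; for that problem the exact solution is u(t) = 1/(1 + t²), which is used only for comparison.

```python
def rk4_step(f, t, u, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = h * f(t, u)
    k2 = h * f(t + h / 2, u + k1 / 2)
    k3 = h * f(t + h / 2, u + k2 / 2)
    k4 = h * f(t + h, u + k3)
    return u + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# u' = -2*t*u^2, u(0) = 1
f = lambda t, u: -2.0 * t * u * u
t, u, h = 0.0, 1.0, 0.2
for _ in range(5):
    u = rk4_step(f, t, u, h)
    t += h
print(t, u, 1.0 / (1.0 + t * t))   # RK4 value vs exact value at t = 1
```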
55 / 113
RK4 Method for System of ODEs: Consider the system of n equations:
dU/dt = F(t, u1, u2, ..., un),   U(t0) = U0,
where U = [u1, u2, ..., un]^T, F = [f1, f2, ..., fn]^T, and U0 = [u1(0), u2(0), ..., un(0)]^T.
56 / 113
Then the solution is given by:
U_{j+1} = U_j + (1/6)(K1 + 2K2 + 2K3 + K4)
where U_{j+1} = [u_{1,j+1}, u_{2,j+1}, ..., u_{n,j+1}]^T, K_i = [k_{1i}, k_{2i}, ..., k_{ni}]^T, and:
K1 = h F(t_j, U_j)
K2 = h F(t_j + h/2, U_j + K1/2)
K3 = h F(t_j + h/2, U_j + K2/2)
K4 = h F(t_j + h, U_j + K3)
57 / 113
Solve the system of equations:
u′ = −3u + 2v,   u(0) = 0
v′ = 3u − 4v,   v(0) = 0.5
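A vectorized RK4 sketch (NumPy assumed) for this two-equation system; the step size and the end time are assumptions, since the slide does not specify them.

```python
import numpy as np

def rk4_system(F, t0, U0, h, n_steps):
    """RK4 applied componentwise to dU/dt = F(t, U)."""
    t, U = t0, np.asarray(U0, dtype=float)
    for _ in range(n_steps):
        K1 = h * F(t, U)
        K2 = h * F(t + h / 2, U + K1 / 2)
        K3 = h * F(t + h / 2, U + K2 / 2)
        K4 = h * F(t + h, U + K3)
        U = U + (K1 + 2 * K2 + 2 * K3 + K4) / 6.0
        t += h
    return t, U

# u' = -3u + 2v, v' = 3u - 4v, u(0) = 0, v(0) = 0.5
F = lambda t, U: np.array([-3 * U[0] + 2 * U[1], 3 * U[0] - 4 * U[1]])
print(rk4_system(F, 0.0, [0.0, 0.5], 0.1, 10))   # solution at t = 1 (assumed h = 0.1)
```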
58 / 113
Euler’s Method for System of ODEs: Consider the system of n
equations:
dU
= F (t, u1 , u2 , · · · , un )
dt
U(t0 ) = U0
Where,
U = [u1 , u2 , · · · , un ]T ,
F = [f1 , f2 , · · · , fn ]T ,
U0 = [u1 (0), u2 (0), · · · , un (0)]
The solution is given by:
Uj+1 = Uj + hUj0 ; j = 0, 1, 2, · · · , N − 1
59 / 113
Computational Techniques for DTMC
60 / 113
Assignments
61 / 113
Pulping System: The four major actions carried out in the system are (i) cooking of chips, (ii) separation of knots, (iii) washing of pulp, and (iv) opening of fibers. The pulping system consists of four subsystems, namely:
- Digester (A): One unit, used for cooking the chips; its failure causes the complete failure of the cooking system.
- Knotter (B): Two units, one working and one standby, used to remove the knots from the cooked chips. Complete failure occurs only if both units fail.
62 / 113
- Decker (C): Three units, arranged in series configuration, used to remove liquor from the cooked chips. Failure of any one causes the complete failure of the pulping system.
- Opener (D): Two units, one working and one standby, used to separate the fibers. Complete failure occurs only if both units fail.
[Block diagram: Digester A in series with the Knotter pair (B1, B2), the Deckers C1, C2, C3 in series, and the Opener pair (D1, D2)]
63 / 113
Figure: CTMC for Pulping System
64 / 113
Assignments (Contd...)
3. Power Method
4. SOR/Gauss-Seidel Method
5. Gauss Elimination Method
65 / 113
Assignments (Contd...)
6. Matrix Method
7. Euler’s Method
8. RK4 Method
66 / 113
Birth and Death Process (BD Process): A homogeneous
CTMC {X (t) : t ≥ 0}, with state space {0, 1, 2, · · · } is called a
BD process, if there exist constants,
λi ; i = 0, 1, 2, · · · , and
µi ; i = 1, 2, 3, · · · such that the transition rates are given by:
pi,i+1 = λi , i = 0, 1, 2, · · ·
pi,i−1 = µi , i = 1, 2, 3, · · ·
pij = 0, for |i − j| > 1
67 / 113
In steady state, we get:
λ0 π0 = µ1 π1
(λk + µk) πk = λ_{k−1} π_{k−1} + µ_{k+1} π_{k+1},   k ≥ 1
Now, balancing the probability flow across the cut between states k and k + 1 (equivalently, adding the balance equations up to state k), we obtain:
68 / 113
Thus, we have:
λk πk = µ_{k+1} π_{k+1}
or
π_{k+1} = (λk / µ_{k+1}) πk   ⇒   πk = (λ_{k−1} / µk) π_{k−1}, for k ≥ 1
or
πk = [(λ0 · λ1 ··· λ_{k−1}) / (µ1 · µ2 ··· µk)] π0 = π0 ∏_{i=0}^{k−1} (λi / µ_{i+1})
69 / 113
Using the fact that Σ_j πj = 1, we get:
π0 + π0 Σ_{k≥1} ∏_{i=0}^{k−1} (λi / µ_{i+1}) = 1
i.e.,
π0 = 1 / (1 + Σ_{k≥1} ∏_{i=0}^{k−1} (λi / µ_{i+1}))
provided that the series
Σ_{k≥1} ∏_{i=0}^{k−1} (λi / µ_{i+1})
converges.
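For a numerically specified birth-death process, the steady-state vector can be built directly from this product formula; a sketch (NumPy assumed) for a finite chain, with the function name and the illustrative M/M/1/5 rates chosen here being assumptions.

```python
import numpy as np

def bd_steady_state(lambdas, mus):
    """Steady state of a finite birth-death chain.

    lambdas[k] = birth rate in state k      (k = 0..N-1)
    mus[k]     = death rate in state k + 1  (k = 0..N-1)
    """
    terms = [1.0]
    for lam, mu in zip(lambdas, mus):
        terms.append(terms[-1] * lam / mu)      # running product of lambda_i / mu_{i+1}
    pi = np.array(terms)
    return pi / pi.sum()                        # normalize so the probabilities sum to 1

# Illustration: M/M/1/5 with lambda = 8, mu = 10 (assumed values)
print(bd_steady_state([8.0] * 5, [10.0] * 5))
```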
70 / 113
Queues: A queue is a system into which customers arrive to
receive service. When the servers in the system are busy, incoming
customers wait for their turn. Upon completion of a service, the
customer to be served next is selected according to some queuing
discipline.
Queuing discipline: The queuing discipline is a rule for selecting the customer for the next service from the set of waiting customers. This could be, for example, FCFS (first come, first served), LCFS (last come, first served), SIRO (service in random order), or a priority discipline.
71 / 113
A queuing system can be completely described by: (i) the arrival (inter-arrival time) process, (ii) the service time distribution, (iii) the number of servers, (iv) the system capacity, (v) the size of the calling population, and (vi) the queuing discipline.
72 / 113
Notation for queues: Generally, a queuing model may be completely specified in the symbolic form (A/B/C):(D/E/F), where A denotes the inter-arrival time distribution, B the service time distribution, C the number of servers, D the queue (service) discipline, E the system capacity, and F the size of the calling population.
73 / 113
Let C1, C2, . . . be any arbitrary arrival stream of customers with:
- Sj = service time of Cj
- Dj = delay in queue of Cj (waiting time in queue)
- Wj = Dj + Sj = waiting time in system of Cj
74 / 113
The number of customers in the system at epoch t is N(t) = nA(t) − nD(t), where
nA(t) = max{j : tj ≤ t}
is the number of arrivals up to time t (tj being the arrival epoch of Cj), and
nD(t) = nA(t) − N(t)
is the number of departures up to time t. The process {nA(t) : t ≥ 0} is called the arrival process and the process {nD(t) : t ≥ 0} is called the departure process.
75 / 113
Performance Measures: The performance of a queue will be
defined in terms of properties of one or more of the following
stochastic processes: {N(t) : t ≥ 0}, {Nq (t) : t ≥ 0},
{Ns (t) : t ≥ 0}, {Wj : j ∈ N}, and {Dj : j ∈ N}.
Let us define the following parameters first:
λ = arrival rate = lim_{t→∞} nA(t)/t
1/µ = average time in service = lim_{n→∞} (1/n) Σ_{j=1}^{n} Sj, where µ is called the service rate.
We now define generic measures of performance for a queue. Let m be the number of identical servers in the queuing system.
76 / 113
Average number of customers in the system:
L = lim_{t→∞} (1/t) ∫₀ᵗ N(u) du    (1)
Average number of customers in the queue:
Q = lim_{t→∞} (1/t) ∫₀ᵗ Nq(u) du    (2)
Average number of busy servers:
B = lim_{t→∞} (1/t) ∫₀ᵗ Ns(u) du    (3)
77 / 113
Average server utilization:
U = B/m    (4)
Average waiting time in the system:
W = lim_{n→∞} (1/n) Σ_{j=1}^{n} Wj    (5)
Average delay in queue:
D = lim_{n→∞} (1/n) Σ_{j=1}^{n} Dj    (6)
Little's law relates the average number in the system, L = lim_{t→∞} (1/t) ∫₀ᵗ N(u) du, to the average waiting time:
L = λW    (7)
79 / 113
The M/M/1 Queue: In the M/M/1 queue, arrivals form a Poisson process with rate λ, service times are exponentially distributed with rate µ, and there is a single server.
Let N(t) be the number of customers in the M/M/1 queue at time t; then {N(t) : t ≥ 0} is a CTMC. It is a special case of the birth and death process with:
λk = λ,  k ≥ 0
µk = µ,  k ≥ 1
The ratio ρ = λ/µ is called the traffic intensity of the system.
80 / 113
From the last equation of the BD process, we have:
π0 = 1 / Σ_{k≥0} ρ^k = 1 − ρ    (8)
and
πk = (1 − ρ) ρ^k,  k ≥ 0    (9)
Thus we have,
L = E[N] = Σ_{k=0}^{∞} k πk = (1 − ρ) Σ_{k=0}^{∞} k ρ^k    (10)
But we have,
Σ_{k=0}^{∞} k ρ^k = ρ/(1 − ρ)²    (11)
so that
L = ρ/(1 − ρ) = λ/(µ − λ)    (12)
82 / 113
The mean number of customers in the queue, Q, is similarly given as
Q = Σ_{k=1}^{∞} (k − 1) πk = Σ_{k=1}^{∞} k πk − Σ_{k=1}^{∞} πk
  = L − (1 − π0) = L − ρ
  = ρ²/(1 − ρ) = λ²/(µ(µ − λ))    (13)
83 / 113
To find the explicit formula for W, we use Little's formula,
L = λW    (15)
Using (15), we obtain the expressions for W and D for the M/M/1 queue. From (12) and (15), we have,
W = L/λ = 1/(µ(1 − ρ)) = 1/(µ − λ)    (16)
D = 1/(µ − λ) − 1/µ = λ/(µ(µ − λ))    (17)
84 / 113
Example: Consider an NC machine center processing raw parts
one at a time in M/M/1 fashion. Let λ = 8 parts/h and µ = 10
parts/h. Then find out:
a. Machine utilization,
b. Mean number of customers in the system,
c. Mean number of customers in the queue,
d. Mean waiting time in the system, and
e. Mean waiting time in queue.
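The requested quantities follow directly from (12), (13), (16) and (17); a short check in plain Python:

```python
lam, mu = 8.0, 10.0          # parts per hour
rho = lam / mu               # (a) machine utilization
L = rho / (1 - rho)          # (b) mean number in system, eq. (12)
Q = rho**2 / (1 - rho)       # (c) mean number in queue, eq. (13)
W = 1 / (mu - lam)           # (d) mean waiting time in system (hours), eq. (16)
D = lam / (mu * (mu - lam))  # (e) mean waiting time in queue (hours), eq. (17)
print(rho, L, Q, W, D)       # 0.8, 4.0, 3.2, 0.5, 0.4
```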
85 / 113
Stability of M/M/1 Queue: From (8), we see that:
π0 = 1 / Σ_{k≥0} ρ^k
We can solve this equation for π0 if and only if the series in the denominator is convergent. The series converges if and only if ρ < 1. When ρ ≥ 1, the series diverges, which means that the queue grows without bound and the values of L, Q, W and D go to infinity. We say that the queue is stable if ρ < 1 and unstable if ρ ≥ 1. The stability condition physically means that the arrival rate must be strictly less than the service rate.
When ρ is close to 1, queues become very long, and as ρ → 1 the number in the system grows without bound. Transient analysis reveals the system behavior as ρ → 1, or the short-term behavior when ρ > 1.
86 / 113
Waiting Time Distributions: We have seen that W and D are the mean values of the random variables w and d, whose distributions can be defined by:
Fw(t) = lim_{n→∞} (1/n) Σ_{j=1}^{n} P{Wj ≤ t}
Fd(t) = lim_{n→∞} (1/n) Σ_{j=1}^{n} P{Dj ≤ t}
87 / 113
First note that d is a continuous random variable, except that there is a nonzero probability π0 that d = 0 and the customer enters service immediately on arrival. We have:
Fd(0) = P(d = 0) = π0 = 1 − ρ    (18)
88 / 113
Now we will find Fd (t) for t > 0. If there are k customers in the
system upon arrival, for the customer to get into the system
between 0 and t, all k units must be served by the time t. Since
the service time distribution is memoryless, the distribution of the
time required for k service times is independent of the time of the
current arrival, and is the convolution of k exponential random
variables, which is an Erlang-k distribution. The probability density
function of the Erlang distribution is:
f(x; k, λ) = λ(λx)^{k−1} e^{−λx} / (k − 1)!   for x, λ ≥ 0
89 / 113
The probability that an arrival finds k units in the system in the steady state is simply the probability πk. Thus we have:
Fd(t) = P(d ≤ t) = Fd(0) + Σ_{k=1}^{∞} P(0 < d ≤ t | k units in the system) πk
90 / 113
Therefore, we get for t > 0,
Fd(t) = (1 − ρ) + (1 − ρ) Σ_{k=1}^{∞} ρ^k ∫₀ᵗ µ(µx)^{k−1} e^{−µx} / (k − 1)! dx
      = (1 − ρ) + (1 − ρ)ρ ∫₀ᵗ µ e^{−µx} ( Σ_{k=1}^{∞} (µxρ)^{k−1} / (k − 1)! ) dx
      = (1 − ρ) + (1 − ρ)ρ ∫₀ᵗ µ e^{−µx(1−ρ)} dx    (19)
91 / 113
Simplifying (19) we get:
Fd(t) = 1 − ρ e^{−µ(1−ρ)t} = 1 − ρ e^{−(µ−λ)t},   t ≥ 0    (20)
From (20), we can compute the mean waiting time in queue as:
D = E[d] = λ/(µ(µ − λ))    (21)
92 / 113
Let us now compute the distribution of w, following similar arguments. We have, for t ≥ 0,
Fw(t) = Σ_{k=0}^{∞} P(w ≤ t | k units in the system) πk
      = Σ_{k=0}^{∞} P(sum of (k + 1) service times ≤ t) πk
      = Σ_{k=0}^{∞} ( ∫₀ᵗ µ(µx)^k e^{−µx} / k! dx ) ρ^k (1 − ρ)
93 / 113
Fw(t) = (1 − ρ) ∫₀ᵗ µ e^{−µx} Σ_{k=0}^{∞} (µxρ)^k / k! dx
      = (1 − ρ) ∫₀ᵗ µ e^{−µx(1−ρ)} dx
      = 1 − e^{−µ(1−ρ)t}
      = 1 − e^{−(µ−λ)t}
94 / 113
Thus we have:
Fw(t) = 0 for t < 0,  and  Fw(t) = 1 − e^{−(µ−λ)t} for t ≥ 0    (22)
95 / 113
The M/M/1/N Queue: In an M/M/1/N queueing system, the
maximum number of jobs in the system is N, which implies a
maximum queue length of N − 1. An arriving job enters the queue
if it finds fewer than N jobs in the system and is lost otherwise.
This behavior can be modeled by a birth-death process with:
λk = λ for k = 0, 1, 2, . . . , N − 1,  and λk = 0 for k ≥ N;
µk = µ for k = 1, 2, . . . , N.
96 / 113
The steady state probabilities are given by:
πk = (1 − ρ) ρ^k / (1 − ρ^{N+1}),   k = 0, 1, 2, . . . , N
πk = 0,   k > N    (23)
Note that a finite-capacity system such as this is always stable, for all values of ρ.
The performance measures can be verified to be given by:
U = 1 − π0 = ρ(1 − ρ^N) / (1 − ρ^{N+1})    (24)
97 / 113
Q = ρ²[1 − ρ^N − Nρ^{N−1}(1 − ρ)] / [(1 − ρ)(1 − ρ^{N+1})]    (25)
W = [1 − ρ^N − Nρ^N(1 − ρ)] / [µ(1 − ρ)(1 − ρ^N)]    (27)
98 / 113
Example: It would be interesting to study the performance
measures of a single-server queuing system for various queue
capacities. First, consider the M/M/1 queue of Example 1, where
we had λ = 8 parts/h and µ = 10 parts/h. We have seen that
U = 0.8, L = 4, Q = 3.2, W = 0.5 h and D = 0.4 h.
At the other extreme, we have M/M/1/1 system in which no
queuing is allowed. In this case, there are only two states, 0 and 1,
and
π0 = µ/(λ + µ) = 0.556 and π1 = λ/(λ + µ) = 0.444.
Verify that U = 0.444, L = 0.444, Q = 0, W = 0.1 h and D = 0.
Can you calculate the performance measures for M/M/1/10
queue?
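For the M/M/1/10 case the question can be answered numerically from (23)-(27); the sketch below (NumPy assumed) computes the distribution and the usual measures, using the effective arrival rate λ(1 − π_N) of admitted customers in Little's formula.

```python
import numpy as np

def mm1n_measures(lam, mu, N):
    rho = lam / mu
    k = np.arange(N + 1)
    pi = (1 - rho) * rho**k / (1 - rho**(N + 1))   # eq. (23), valid for rho != 1
    U = 1 - pi[0]                                  # eq. (24)
    L = np.sum(k * pi)                             # mean number in system
    Q = L - U                                      # mean number in queue
    lam_eff = lam * (1 - pi[N])                    # arrivals actually admitted
    W, D = L / lam_eff, Q / lam_eff                # Little's formula
    return U, L, Q, W, D

print(mm1n_measures(8.0, 10.0, 10))
```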
99 / 113
Example: Consider a machine center modeled as an M/M/1/N
queue with parameters λ and µ. Let C µ be the cost of operating
the machine at a rate of µ and let a profit of q accrue for every
finished part produced. Find out the service rate that maximizes
the total profit.
Solution: Production rate of the machine center = Uµ
= [ρ(1 − ρ^N) / (1 − ρ^{N+1})] µ = λµ(µ^N − λ^N) / (µ^{N+1} − λ^{N+1})
The above is the number of parts produced in one time unit (for example one hour, if µ is expressed in terms of parts per hour). The corresponding total profit is given by:
P = [λµ(µ^N − λ^N) / (µ^{N+1} − λ^{N+1})] q − Cµ
The service rate µ that maximizes the profit can now be obtained
in the usual way.
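Numerically, the maximizing µ can be found by a one-dimensional search; the sketch below (SciPy assumed) uses illustrative values of λ, N, q and C, which are assumptions rather than values given on the slide.

```python
from scipy.optimize import minimize_scalar

lam, N = 8.0, 5          # assumed arrival rate and buffer size
q, C = 10.0, 2.0         # assumed profit per part and cost coefficient

def profit(mu):
    # Throughput U*mu = lam*mu*(mu^N - lam^N) / (mu^(N+1) - lam^(N+1));
    # the expression has a removable singularity at mu = lam.
    throughput = lam * mu * (mu**N - lam**N) / (mu**(N + 1) - lam**(N + 1))
    return throughput * q - C * mu

res = minimize_scalar(lambda mu: -profit(mu), bounds=(0.1, 100.0), method="bounded")
print(res.x, profit(res.x))   # profit-maximizing service rate and the resulting profit
```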
100 / 113
The M/M/m Queue: Consider a queuing system with Poisson
arrival rate λ as before, but where m(≥ 1) exponential servers,
with rate µ each, share a common queue. This is referred to as the
M/M/m queue and is illustrated in Fig. 3. If the number of
customers in the system is greater than the number of servers,
then a queue forms.
101 / 113
Steady State Analysis: The M/M/m queue is also a special case
of the birth and death model with rates:
λk = λ,   k = 0, 1, 2, . . .    (29)
µk = kµ,  k = 0, 1, . . . , m;   µk = mµ,  k > m    (30)
102 / 113
The state transition diagram is shown in Fig. 3. The steady state
probabilities could be obtained by using (29) and (30) in the
expressions of π0 and πk of BD process. We find that:
πk = π0 (mρ)^k / k!,   k = 0, 1, 2, . . . , m
πk = π0 m^m ρ^k / m!,   k > m    (31)
where ρ is defined as:
ρ = λ/(mµ)    (32)
Equation (31) is valid for determining the steady state probabilities
whenever ρ < 1, i.e., the stability condition holds.
103 / 113
The expression for π0 can now be obtained using (31) and the fact that the steady state probabilities sum up to 1. Thus:
π0 = [ (mρ)^m / (m!(1 − ρ)) + Σ_{k=0}^{m−1} (mρ)^k / k! ]^{−1}    (33)
The distribution of the number of busy servers, Ns, is:
P(Ns = k) = πk,   k = 0, 1, . . . , m − 1
P(Ns = m) = Σ_{j=m}^{∞} πj
104 / 113
The average number of busy servers in the steady state, B, can be immediately obtained as:
B = E[Ns] = Σ_{k=0}^{m} k P(Ns = k) = λ/µ    (34)
105 / 113
We now compute Q, the average number of customers waiting in queue:
Q = Σ_{k=m}^{∞} (k − m) πk = ρ(mρ)^m π0 / (m!(1 − ρ)²)    (36)
L = Q + B = ρ(mρ)^m π0 / (m!(1 − ρ)²) + λ/µ    (37)
106 / 113
We now use Little’s law to compute the average waiting time in
the system. We have:
L ρ(mρ)m π0 1
W = = 2
+ (38)
λ m!λ(1 − ρ) µ
1 ρ(mρ)m π0
D=W − = (39)
µ m!λ(1 − ρ)2
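The formulas (33)-(39) translate directly into code; a sketch in plain Python (using math.factorial), with the illustrative numbers at the end being assumptions.

```python
from math import factorial

def mmm_measures(lam, mu, m):
    rho = lam / (m * mu)                               # eq. (32), requires rho < 1
    pi0 = 1.0 / ((m * rho)**m / (factorial(m) * (1 - rho))
                 + sum((m * rho)**k / factorial(k) for k in range(m)))   # eq. (33)
    Q = rho * (m * rho)**m * pi0 / (factorial(m) * (1 - rho)**2)         # eq. (36)
    B = lam / mu                                        # eq. (34)
    L = Q + B                                           # eq. (37)
    W = L / lam                                         # eq. (38)
    D = W - 1.0 / mu                                    # eq. (39)
    return pi0, Q, L, W, D

# Illustrative numbers (assumed): lambda = 8 per hour, mu = 10 per hour, m = 2 servers
print(mmm_measures(8.0, 10.0, 2))
```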
107 / 113
The M/M/∞ Queue: In an M/M/∞ queueing system we have a
Poisson arrival process with arrival rate λ and an infinite number of
servers with service rate µ each.
If there are k jobs in the system, then the overall service rate is kµ
because each arriving job immediately gets a server and does not
have to wait. Once again, the underlying CTMC is a birth-death
process.
108 / 113
From the expressions of π0 and πk of BD process, we obtain the
steady-state probability of k jobs in the system:
πk = π0 ∏_{i=0}^{k−1} λ/((i + 1)µ) = π0 (λ/µ)^k (1/k!)    (40)
109 / 113
The steady state probability of no jobs in the system:
π0 = 1 / (1 + Σ_{k=1}^{∞} (λ/µ)^k (1/k!)) = e^{−λ/µ}    (41)
and finally:
πk = (λ/µ)^k e^{−λ/µ} / k!    (42)
This is the Poisson pmf.
110 / 113
The expected number of jobs in the system is:
L = λ/µ    (43)
With Little's theorem, the mean waiting time in the system is given by:
W = 1/µ    (44)
111 / 113
Queuing models for self study:
A. (M/M/m):(GD/K /∞)
B. (M/M/m):(GD/K /N)
112 / 113
Thank You
113 / 113