
Notes for ECE 467

Communication Network Analysis


Bruce Hajek
December 15, 2006
© 2006 by Bruce Hajek
All rights reserved. Permission is hereby given to freely print and circulate copies of these notes so long as the notes are left intact and not reproduced for commercial purposes. Email to [email protected], pointing out errors or hard-to-understand passages or providing comments, is welcome.
Contents

1 Countable State Markov Processes ..... 3
1.1 Example of a Markov model ..... 3
1.2 Definition, Notation and Properties ..... 5
1.3 Pure-Jump, Time-Homogeneous Markov Processes ..... 6
1.4 Space-Time Structure ..... 8
1.5 Poisson Processes ..... 12
1.6 Renewal Theory ..... 14
1.6.1 Renewal Theory in Continuous Time ..... 16
1.6.2 Renewal Theory in Discrete Time ..... 17
1.7 Classification and Convergence of Discrete State Markov Processes ..... 17
1.7.1 Examples with finite state space ..... 17
1.7.2 Discrete Time ..... 20
1.7.3 Continuous Time ..... 22
1.8 Classification of Birth-Death Processes ..... 24
1.9 Time Averages vs. Statistical Averages ..... 26
1.10 Queueing Systems, M/M/1 Queue and Little's Law ..... 27
1.11 Mean Arrival Rate, Distributions Seen by Arrivals, and PASTA ..... 30
1.12 More Examples of Queueing Systems Modeled as Markov Birth-Death Processes ..... 32
1.13 Method of Phases and Quasi Birth-Death Processes ..... 34
1.14 Markov Fluid Model of a Queue ..... 36
1.15 Problems ..... 37

2 Foster-Lyapunov stability criterion and moment bounds ..... 45
2.1 Stability criteria for discrete time processes ..... 45
2.2 Stability criteria for continuous time processes ..... 53
2.3 Problems ..... 57

3 Queues with General Interarrival Time and/or Service Time Distributions ..... 61
3.1 The M/GI/1 queue ..... 61
3.1.1 Busy Period Distribution ..... 63
3.1.2 Priority M/GI/1 systems ..... 64
3.2 The GI/M/1 queue ..... 66
3.3 The GI/GI/1 System ..... 68
3.4 Kingman's Bounds for GI/GI/1 Queues ..... 69
3.5 Stochastic Comparison with Application to GI/GI/1 Queues ..... 71
3.6 GI/GI/1 Systems with Server Vacations, and Application to TDM and FDM ..... 73
3.7 Effective Bandwidth of a Data Stream ..... 75
3.8 Problems ..... 78

4 Multiple Access ..... 83
4.1 Slotted ALOHA with Finitely Many Stations ..... 83
4.2 Slotted ALOHA with Infinitely Many Stations ..... 85
4.3 Bound Implied by Drift, and Proof of Proposition 4.2.1 ..... 87
4.4 Probing Algorithms for Multiple Access ..... 90
4.4.1 Random Access for Streams of Arrivals ..... 92
4.4.2 Delay Analysis of Decoupled Window Random Access Scheme ..... 92
4.5 Problems ..... 93

5 Stochastic Network Models ..... 99
5.1 Time Reversal of Markov Processes ..... 99
5.2 Circuit Switched Networks ..... 101
5.3 Markov Queueing Networks (in equilibrium) ..... 104
5.3.1 Markov server stations in series ..... 104
5.3.2 Simple networks of M_S stations ..... 105
5.3.3 A multitype network of M_S stations with more general routing ..... 107
5.4 Problems ..... 109

6 Calculus of Deterministic Constraints ..... 113
6.1 The (σ, ρ) Constraints and Performance Bounds for a Queue ..... 113
6.2 f-upper constrained processes ..... 115
6.3 Service Curves ..... 117
6.4 Problems ..... 120

7 Graph Algorithms ..... 123
7.1 Maximum Flow Problem ..... 123
7.2 Problems ..... 124

8 Flow Models in Routing and Congestion Control ..... 127
8.1 Convex functions and optimization ..... 127
8.2 The Routing Problem ..... 128
8.3 Utility Functions ..... 133
8.4 Joint Congestion Control and Routing ..... 134
8.5 Hard Constraints and Prices ..... 135
8.6 Decomposition into Network and User Problems ..... 136
8.7 Specialization to pure congestion control ..... 139
8.8 Fair allocation ..... 141
8.9 A Network Evacuation Problem ..... 142
8.10 Braess Paradox ..... 142
8.11 Further reading and notes ..... 144
8.12 Problems ..... 144

9 Dynamic Network Control ..... 149
9.1 Dynamic programming ..... 149
9.2 Dynamic Programming Formulation ..... 150
9.3 The Dynamic Programming Optimality Equations ..... 152
9.4 Problems ..... 156

10 Solutions ..... 161
Preface

This is the latest draft of notes I have used for the graduate course Communication Network Analysis, offered by the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. The notes describe many of the most popular analytical techniques for design and analysis of computer communication networks, with an emphasis on performance issues such as delay, blocking, and resource allocation. Topics that are not covered in the notes include the Internet protocols (at least not explicitly), simulation techniques and simulation packages, and some of the mathematical proofs. These are covered in other books and courses.

The topics of these notes form a basis for understanding the literature on performance issues in networks, including the Internet. Specific topics include:

- The basic and intermediate theory of queueing systems, along with stability criteria based on drift analysis and fluid models
- The notion of effective bandwidth, in which a constant bit rate equivalent is given for a bursty data stream in a given context
- An introduction to the calculus of deterministic constraints on traffic flows
- The use of penalty and barrier functions in optimization, and the natural extension to the use of utility functions and prices in the formulation of dynamic routing and congestion control problems
- Some topics related to performance analysis in wireless networks, including coverage of basic multiple access techniques, and transmission scheduling
- The basics of dynamic programming, introduced in the context of a simple queueing control problem
- The analysis of blocking and the reduced load fixed point approximation for circuit switched networks.

Students are assumed to have already had a course on computer communication networks, although the material in such a course is more to provide motivation for the material in these notes than to provide understanding of the mathematics. In addition, since probability is used extensively, students in the class are assumed to have previously had two courses in probability. Some prior exposure to the theory of Lagrange multipliers for constrained optimization and nonlinear optimization algorithms is desirable, but not necessary.

I'm grateful to students and colleagues for suggestions and corrections, and am always eager for more.

Bruce Hajek, December 2006
Chapter 1
Countable State Markov Processes
1.1 Example of a Markov model

Consider a two-stage pipeline as pictured in Figure 1.1. Some assumptions about it will be made in order to model it as a simple discrete time Markov process, without any pretension of modeling a particular real life system. Each stage has a single buffer. Normalize time so that in one unit of time a packet can make a single transition. Call the time interval between $k$ and $k+1$ the $k$-th time slot, and assume that the pipeline evolves in the following way during a given slot.

- If at the beginning of the slot, there are no packets in stage one, then a new packet arrives to stage one with probability $a$, independently of the past history of the pipeline and of the outcome at stage two.
- If at the beginning of the slot, there is a packet in stage one and no packet in stage two, then the packet is transferred to stage two with probability $d_1$.
- If at the beginning of the slot, there is a packet in stage two, then the packet departs from the stage and leaves the system with probability $d_2$, independently of the state or outcome of stage one.
These assumptions lead us to model the pipeline as a discrete-time Markov process with the state space $\mathcal{S} = \{00, 01, 10, 11\}$, transition probability diagram shown in Figure 1.2 (using the notation $\bar{x} = 1 - x$) and one-step transition probability matrix $P$ given by
$$
P = \begin{pmatrix} \bar{a} & 0 & a & 0 \\ \bar{a} d_2 & \bar{a} \bar{d}_2 & a d_2 & a \bar{d}_2 \\ 0 & d_1 & \bar{d}_1 & 0 \\ 0 & 0 & d_2 & \bar{d}_2 \end{pmatrix}
$$

Figure 1.1: A two-stage pipeline
Figure 1.2: One-step transition probability diagram for example.
The rows of $P$ are probability vectors. (In these notes, probability vectors are always taken to be row vectors, and more often than not, they are referred to as probability distributions.) For example, the first row is the probability distribution of the state at the end of a slot, given that the state is 00 at the beginning of a slot. Now that the model is specified, let us determine the throughput rate of the pipeline.

The equilibrium probability distribution $\pi = (\pi_{00}, \pi_{01}, \pi_{10}, \pi_{11})$ is the probability vector satisfying the linear equation $\pi = \pi P$. Once $\pi$ is found, the throughput rate $\eta$ can be computed as follows. It is defined to be the rate (averaged over a long time) that packets transit the pipeline. Since at most two packets can be in the pipeline at a time, the following three quantities are all clearly the same, and can be taken to be the throughput rate:

- The rate of arrivals to stage one
- The rate of departures from stage one (or rate of arrivals to stage two)
- The rate of departures from stage two

Focus on the first of these three quantities. Equating long term averages with statistical averages yields
$$
\eta = P[\text{an arrival at stage 1}] = P[\text{an arrival at stage 1} \mid \text{stage 1 empty at slot beginning}] \, P[\text{stage 1 empty at slot beginning}] = a(\pi_{00} + \pi_{01}).
$$
Similarly, by focusing on departures from stage 1, obtain $\eta = d_1 \pi_{10}$. Finally, by focusing on departures from stage 2, obtain $\eta = d_2(\pi_{01} + \pi_{11})$. These three expressions for $\eta$ must agree.

Consider the numerical example $a = d_1 = d_2 = 0.5$. The equation $\pi = \pi P$ yields that $\pi$ is proportional to the vector $(1, 2, 3, 1)$. Applying the fact that $\pi$ is a probability distribution yields that $\pi = (1/7, 2/7, 3/7, 1/7)$. Therefore $\eta = 3/14 \approx 0.214$.
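A short numerical check of this computation is easy to carry out. The following is a minimal illustrative sketch in Python (NumPy assumed; not part of the original notes) that builds $P$ for $a = d_1 = d_2 = 0.5$, solves $\pi = \pi P$, and evaluates $\eta = a(\pi_{00} + \pi_{01})$:

```python
import numpy as np

a, d1, d2 = 0.5, 0.5, 0.5
ab, d1b, d2b = 1 - a, 1 - d1, 1 - d2   # "bar" quantities, e.g. ab stands for 1 - a

# One-step transition matrix over states ordered (00, 01, 10, 11)
P = np.array([
    [ab,      0.0,      a,      0.0    ],
    [ab * d2, ab * d2b, a * d2, a * d2b],
    [0.0,     d1,       d1b,    0.0    ],
    [0.0,     0.0,      d2,     d2b    ],
])

# pi = pi P means pi is a left eigenvector of P with eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

print("pi =", pi)                            # approx (1/7, 2/7, 3/7, 1/7)
print("throughput =", a * (pi[0] + pi[1]))   # approx 3/14 = 0.214...
```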
By way of comparison, consider another system with only a single stage, containing a single buffer. In each slot, if the buffer is empty at the beginning of a slot an arrival occurs with probability $a$, and if the buffer has a packet at the beginning of a slot it departs with probability $d$. Simultaneous arrival and departure is not allowed. Then $\mathcal{S} = \{0, 1\}$, $\pi = (d/(a+d), a/(a+d))$ and the throughput rate is $ad/(a+d)$. The two-stage pipeline with $d_2 = 1$ is essentially the same as the one-stage system. In case $a = d = 0.5$, the throughput rate of the single-stage system is 0.25, which as expected is somewhat greater than that of the two-stage pipeline.
1.2 Definition, Notation and Properties

Having given an example of a discrete state Markov process, we now digress and give the formal definitions and some of the properties of Markov processes. Let $T$ be a subset of the real numbers $\mathbb{R}$ and let $\mathcal{S}$ be a finite or countably infinite set. A collection of $\mathcal{S}$-valued random variables $(X(t) : t \in T)$ is a discrete-state Markov process with state space $\mathcal{S}$ if
$$
P[X(t_{n+1}) = i_{n+1} \mid X(t_n) = i_n, \ldots, X(t_1) = i_1] = P[X(t_{n+1}) = i_{n+1} \mid X(t_n) = i_n] \qquad (1.1)
$$
whenever
$$
t_1 < t_2 < \ldots < t_{n+1} \text{ are in } T, \quad i_1, i_2, \ldots, i_{n+1} \text{ are in } \mathcal{S}, \quad \text{and} \quad P[X(t_n) = i_n, \ldots, X(t_1) = i_1] > 0. \qquad (1.2)
$$
Set $p_{ij}(s,t) = P[X(t) = j \mid X(s) = i]$ and $\pi_i(t) = P[X(t) = i]$. The probability distribution $\pi(t) = (\pi_i(t) : i \in \mathcal{S})$ should be thought of as a row vector, and can be written as one once $\mathcal{S}$ is ordered. Similarly, $H(s,t)$ defined by $H(s,t) = (p_{ij}(s,t) : i, j \in \mathcal{S})$ should be thought of as a matrix. Let $e$ denote the column vector with all ones, indexed by $\mathcal{S}$. Since $\pi(t)$ and the rows of $H(s,t)$ are probability vectors for $s, t \in T$ and $s \leq t$, it follows that $\pi(t)e = 1$ and $H(s,t)e = e$.

Next observe that the marginal distributions $\pi(t)$ and the transition probabilities $p_{ij}(s,t)$ determine all the finite dimensional distributions of the Markov process. Indeed, given
$$
t_1 < t_2 < \ldots < t_n \text{ in } T, \quad i_1, i_2, \ldots, i_n \in \mathcal{S}, \qquad (1.3)
$$
one writes
$$
P[X(t_1) = i_1, \ldots, X(t_n) = i_n] = P[X(t_1) = i_1, \ldots, X(t_{n-1}) = i_{n-1}] \, P[X(t_n) = i_n \mid X(t_1) = i_1, \ldots, X(t_{n-1}) = i_{n-1}]
$$
$$
= P[X(t_1) = i_1, \ldots, X(t_{n-1}) = i_{n-1}] \, p_{i_{n-1} i_n}(t_{n-1}, t_n).
$$
Application of this operation $n-2$ more times yields that
$$
P[X(t_1) = i_1, X(t_2) = i_2, \ldots, X(t_n) = i_n] = \pi_{i_1}(t_1) \, p_{i_1 i_2}(t_1, t_2) \cdots p_{i_{n-1} i_n}(t_{n-1}, t_n), \qquad (1.4)
$$
which shows that the finite dimensional distributions of $X$ are indeed determined by $(\pi(t))$ and $(p_{ij}(s,t))$. From this and the definition of conditional probabilities it follows by straight substitution that
$$
P[X(t_j) = i_j \text{ for } 1 \leq j \leq n+l \mid X(t_n) = i_n] = P[X(t_j) = i_j \text{ for } 1 \leq j \leq n \mid X(t_n) = i_n] \, P[X(t_j) = i_j \text{ for } n \leq j \leq n+l \mid X(t_n) = i_n] \qquad (1.5)
$$
whenever $P[X(t_n) = i_n] > 0$. Property (1.5) is equivalent to the Markov property. Note in addition that it has no preferred direction of time, simply stating that the past and future are conditionally independent given the present. It follows that if $X$ is a Markov process, the time reversal of $X$ defined by $\tilde{X}(t) = X(-t)$ is also a Markov process.
A Markov process is time homogeneous if $p_{ij}(s,t)$ depends on $s$ and $t$ only through $t-s$. In that case we write $p_{ij}(t-s)$ instead of $p_{ij}(s,t)$, and $H(t-s)$ instead of $H(s,t)$.

Recall that a random process is stationary if its finite dimensional distributions are invariant with respect to translation in time. Referring to (1.4), we see that a time-homogeneous Markov process is stationary if and only if its one dimensional distributions $\pi(t)$ do not depend on $t$. If, in our example of a two-stage pipeline, it is assumed that the pipeline is empty at time zero and that $a \neq 0$, then the process is not stationary (since $\pi(0) = (1, 0, 0, 0) \neq \pi(1) = (1-a, 0, a, 0)$), even though it is time homogeneous. On the other hand, a Markov random process that is stationary is time homogeneous.
Computing the distribution of $X(t)$ by conditioning on the value of $X(s)$ yields that $\pi_j(t) = \sum_i P[X(s) = i, X(t) = j] = \sum_i \pi_i(s) p_{ij}(s,t)$, which in matrix form yields that $\pi(t) = \pi(s) H(s,t)$ for $s, t \in T$, $s \leq t$. Similarly, given $s < \tau < t$, computing the conditional distribution of $X(t)$ given $X(s)$ by conditioning on the value of $X(\tau)$ yields
$$
H(s,t) = H(s,\tau) H(\tau,t), \qquad s, \tau, t \in T, \; s < \tau < t. \qquad (1.6)
$$
The relations (1.6) are known as the Chapman-Kolmogorov equations.

If the Markov process is time-homogeneous, then $\pi(s+\tau) = \pi(s) H(\tau)$ for $s, s+\tau \in T$ and $\tau \geq 0$. A probability distribution $\pi$ is called an equilibrium (or invariant) distribution if $\pi H(\tau) = \pi$ for all $\tau \geq 0$.

Repeated application of the Chapman-Kolmogorov equations yields that $p_{ij}(s,t)$ can be expressed in terms of transition probabilities for $s$ and $t$ close together. For example, consider Markov processes with index set the integers. Then $H(n, k+1) = H(n,k) P(k)$ for $n \leq k$, where $P(k) = H(k, k+1)$ is the one-step transition probability matrix. Fixing $n$ and using forward recursion, starting with $H(n,n) = I$, $H(n, n+1) = P(n)$, $H(n, n+2) = P(n) P(n+1)$, and so forth, yields
$$
H(n, l) = P(n) P(n+1) \cdots P(l-1).
$$
In particular, if the process is time-homogeneous then $H(k) = P^k$ for all $k$, for some matrix $P$, and $\pi(l) = \pi(k) P^{l-k}$ for $l \geq k$. In this case a probability distribution $\pi$ is an equilibrium distribution if and only if $\pi P = \pi$.

In the next section, processes indexed by the real line are considered. Such a process can be described in terms of $p(s,t)$ with $t-s$ arbitrarily small. By saving only a linearization, the concept of generator matrix arises naturally.
1.3 Pure-Jump, Time-Homogeneous Markov Processes

Let $\mathcal{S}$ be a finite or countably infinite set, and let $\triangle \notin \mathcal{S}$. A pure-jump function is a function $x : \mathbb{R}_+ \to \mathcal{S} \cup \{\triangle\}$ such that there is a sequence of times, $0 = \tau_0 < \tau_1 < \ldots$, and a sequence of states, $s_0, s_1, \ldots$ with $s_i \in \mathcal{S}$, and $s_i \neq s_{i+1}$, $i \geq 0$, so that
$$
x(t) = \begin{cases} s_i & \text{if } \tau_i \leq t < \tau_{i+1}, \; i \geq 0 \\ \triangle & \text{if } t \geq \tau^* \end{cases} \qquad (1.7)
$$
where $\tau^* = \lim_{i \to \infty} \tau_i$. If $\tau^*$ is finite it is said to be the explosion time of the function $x$. The example corresponding to $\mathcal{S} = \{0, 1, \ldots\}$, $\tau_i = i/(i+1)$ and $s_i = i$ is pictured in Fig. 1.3.
Figure 1.3: Sample pure-jump function with an explosion time
Figure 1.4: Transition rate diagram for a continuous time Markov process
Note that $\tau^* = 1$ for this example. A pure-jump Markov process $(X(t) : t \geq 0)$ is a Markov process such that, with probability one, its sample paths are pure-jump functions.

Let $Q = (q_{ij} : i, j \in \mathcal{S})$ be such that
$$
q_{ij} \geq 0 \quad i, j \in \mathcal{S}, \; i \neq j; \qquad q_{ii} = -\sum_{j \in \mathcal{S}, j \neq i} q_{ij} \quad i \in \mathcal{S}. \qquad (1.8)
$$
An example for state space $\mathcal{S} = \{1, 2, 3\}$ is
$$
Q = \begin{pmatrix} -1 & 0.5 & 0.5 \\ 1 & -2 & 1 \\ 0 & 1 & -1 \end{pmatrix},
$$
which can be represented by the transition rate diagram shown in Figure 1.4. A pure-jump, time-homogeneous Markov process $X$ has generator matrix $Q$ if
$$
\lim_{h \searrow 0} (p_{ij}(h) - I_{\{i=j\}})/h = q_{ij} \quad i, j \in \mathcal{S} \qquad (1.9)
$$
or equivalently
$$
p_{ij}(h) = I_{\{i=j\}} + h q_{ij} + o(h) \quad i, j \in \mathcal{S} \qquad (1.10)
$$
where $o(h)$ represents a quantity such that $\lim_{h \to 0} o(h)/h = 0$.
For the example this means that the transition probability matrix for a time interval of duration $h$ is given by
$$
\begin{pmatrix} 1-h & 0.5h & 0.5h \\ h & 1-2h & h \\ 0 & h & 1-h \end{pmatrix} + \begin{pmatrix} o(h) & o(h) & o(h) \\ o(h) & o(h) & o(h) \\ o(h) & o(h) & o(h) \end{pmatrix}
$$
The first term is a stochastic matrix, owing to the assumptions on the generator matrix $Q$.
Proposition 1.3.1 Given a matrix $Q$ satisfying (1.8), and a probability distribution $\pi(0) = (\pi_i(0) : i \in \mathcal{S})$, there is a pure-jump, time-homogeneous Markov process with generator matrix $Q$ and initial distribution $\pi(0)$. The finite-dimensional distributions of the process are uniquely determined by $\pi(0)$ and $Q$.

The proposition can be proved by appealing to the space-time properties in the next section. In some cases it can also be proved by considering the forward-differential evolution equations for $\pi(t)$, which are derived next. Fix $t > 0$ and let $h$ be a small positive number. The Chapman-Kolmogorov equations imply that
$$
\frac{\pi_j(t+h) - \pi_j(t)}{h} = \sum_{i \in \mathcal{S}} \pi_i(t) \left( \frac{p_{ij}(h) - I_{\{i=j\}}}{h} \right). \qquad (1.11)
$$
Consider letting $h$ tend to zero. If the limit in (1.9) is uniform in $i$ for $j$ fixed, then the limit and summation on the right side of (1.11) can be interchanged to yield the forward-differential evolution equation:
$$
\frac{\partial \pi_j(t)}{\partial t} = \sum_{i \in \mathcal{S}} \pi_i(t) q_{ij} \qquad (1.12)
$$
or $\frac{\partial \pi(t)}{\partial t} = \pi(t) Q$. This equation, known as the Kolmogorov forward equation, can be rewritten as
$$
\frac{\partial \pi_j(t)}{\partial t} = \sum_{i \in \mathcal{S}, i \neq j} \pi_i(t) q_{ij} - \sum_{i \in \mathcal{S}, i \neq j} \pi_j(t) q_{ji}, \qquad (1.13)
$$
which states that the rate of change of the probability of being at state $j$ is the rate of probability flow into state $j$ minus the rate of probability flow out of state $j$.
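To make (1.12) concrete, here is a small illustrative sketch (not part of the original notes) that numerically integrates the Kolmogorov forward equation $\partial \pi/\partial t = \pi Q$ for the three-state example above, using simple Euler steps; $\pi(t)$ is seen to approach the equilibrium distribution satisfying $\pi Q = 0$:

```python
import numpy as np

# Generator matrix of the three-state example
Q = np.array([
    [-1.0,  0.5,  0.5],
    [ 1.0, -2.0,  1.0],
    [ 0.0,  1.0, -1.0],
])

pi = np.array([1.0, 0.0, 0.0])   # initial distribution pi(0)
h = 1e-3                         # Euler step size

# Integrate d pi / dt = pi Q from t = 0 to t = 10
for _ in range(int(10 / h)):
    pi = pi + h * (pi @ Q)

print("pi(10) ~", pi)            # close to the equilibrium distribution
print("pi Q ~", pi @ Q)          # close to the zero vector
```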
1.4 Space-Time Structure

Let $(X(k) : k \in \mathbb{Z}_+)$ be a time-homogeneous Markov process with one-step transition probability matrix $P$. Let $T_k$ denote the time that elapses between the $k$-th and $(k+1)$-th jumps of $X$, and let $X^J(k)$ denote the state after $k$ jumps. See Fig. 1.5 for illustration. More precisely, the holding times are defined by
$$
T_0 = \min\{t \geq 0 : X(t) \neq X(0)\} \qquad (1.14)
$$
$$
T_k = \min\{t \geq 0 : X(T_0 + \ldots + T_{k-1} + t) \neq X(T_0 + \ldots + T_{k-1})\} \qquad (1.15)
$$
and the jump process $X^J = (X^J(k) : k \geq 0)$ is defined by
$$
X^J(0) = X(0) \quad \text{and} \quad X^J(k) = X(T_0 + \ldots + T_{k-1}). \qquad (1.16)
$$
Figure 1.5: Illustration of jump process and holding times.
Clearly the holding times and jump process contain all the information needed to construct $X$, and vice versa. Thus, the following description of the joint distribution of the holding times and the jump process characterizes the distribution of $X$.

Proposition 1.4.1 Let $X = (X(k) : k \in \mathbb{Z}_+)$ be a time-homogeneous Markov process with one-step transition probability matrix $P$.
(a) The jump process $X^J$ is itself a time-homogeneous Markov process, and its one-step transition probabilities are given by $p^J_{ij} = p_{ij}/(1 - p_{ii})$ for $i \neq j$, and $p^J_{ii} = 0$, for $i, j \in \mathcal{S}$.
(b) Given $X(0)$, $X^J(1)$ is conditionally independent of $T_0$.
(c) Given $(X^J(0), \ldots, X^J(n)) = (j_0, \ldots, j_n)$, the variables $T_0, \ldots, T_n$ are conditionally independent, and the conditional distribution of $T_l$ is geometric with parameter $p_{j_l j_l}$:
$$
P[T_l = k \mid X^J(0) = j_0, \ldots, X^J(n) = j_n] = p_{j_l j_l}^{k-1}(1 - p_{j_l j_l}) \qquad 0 \leq l \leq n, \; k \geq 1.
$$
Proof. Observe that if $X(0) = i$, then
$$
\{T_0 = k, X^J(1) = j\} = \{X(1) = i, X(2) = i, \ldots, X(k-1) = i, X(k) = j\},
$$
so
$$
P[T_0 = k, X^J(1) = j \mid X(0) = i] = p_{ii}^{k-1} p_{ij} = [(1 - p_{ii}) p_{ii}^{k-1}] p^J_{ij}. \qquad (1.17)
$$
Because for $i$ fixed the last expression in (1.17) displays the product of two probability distributions, conclude that given $X(0) = i$,

- $T_0$ has distribution $((1 - p_{ii}) p_{ii}^{k-1} : k \geq 1)$, the geometric distribution of mean $1/(1 - p_{ii})$
- $X^J(1)$ has distribution $(p^J_{ij} : j \in \mathcal{S})$ ($i$ fixed)
- $T_0$ and $X^J(1)$ are independent

More generally, check that
$$
P[X^J(1) = j_1, \ldots, X^J(n) = j_n, T_0 = k_0, \ldots, T_n = k_n \mid X^J(0) = i] = p^J_{i j_1} p^J_{j_1 j_2} \cdots p^J_{j_{n-1} j_n} \prod_{l=0}^{n} \left( p_{j_l j_l}^{k_l - 1} (1 - p_{j_l j_l}) \right).
$$
Figure 1.6: Illustration of sampling of a pure-jump function

This establishes the proposition.
Next we consider the space-time structure of time-homogeneous, continuous-time, pure-jump Markov processes. Essentially the only difference between the discrete- and continuous-time Markov processes is that the holding times for the continuous-time processes are exponentially distributed rather than geometrically distributed. Indeed, define the holding times $T_k$, $k \geq 0$, and the jump process $X^J$ using (1.14)-(1.16) as before.
Proposition 1.4.2 Let $X = (X(t) : t \in \mathbb{R}_+)$ be a time-homogeneous, pure-jump Markov process with generator matrix $Q$. Then
(a) The jump process $X^J$ is a discrete-time, time-homogeneous Markov process, and its one-step transition probabilities are given by
$$
p^J_{ij} = \begin{cases} q_{ij}/(-q_{ii}) & \text{for } i \neq j \\ 0 & \text{for } i = j \end{cases} \qquad (1.18)
$$
(b) Given $X(0)$, $X^J(1)$ is conditionally independent of $T_0$.
(c) Given $X^J(0) = j_0, \ldots, X^J(n) = j_n$, the variables $T_0, \ldots, T_n$ are conditionally independent, and the conditional distribution of $T_l$ is exponential with parameter $-q_{j_l j_l}$:
$$
P[T_l \geq c \mid X^J(0) = j_0, \ldots, X^J(n) = j_n] = \exp(c \, q_{j_l j_l}) \qquad 0 \leq l \leq n.
$$
Proof. Fix $h > 0$ and define the sampled process $X^{(h)}$ by $X^{(h)}(k) = X(hk)$ for $k \geq 0$. See Fig. 1.6. Then $X^{(h)}$ is a discrete time Markov process with one-step transition probabilities $p_{ij}(h)$ (the transition probabilities for the original process for an interval of length $h$). Let $(T_k^{(h)} : k \geq 0)$ denote the sequence of holding times and $(X^{J,h}(k) : k \geq 0)$ the jump process for the process $X^{(h)}$.

The assumption that with probability one the sample paths of $X$ are pure-jump functions implies that with probability one:
$$
\lim_{h \to 0} \left( X^{J,h}(0), X^{J,h}(1), \ldots, X^{J,h}(n), h T_0^{(h)}, h T_1^{(h)}, \ldots, h T_n^{(h)} \right) = \left( X^J(0), X^J(1), \ldots, X^J(n), T_0, T_1, \ldots, T_n \right). \qquad (1.19)
$$
Figure 1.7: Transition rate diagram of a birth-death process
Since convergence with probability one implies convergence in distribution, the goal of identifying the distribution of the random vector on the righthand side of (1.19) can be accomplished by identifying the limit of the distribution of the vector on the left.

First, the limiting distribution of the process $X^{J,h}$ is identified. Since $X^{(h)}$ has one-step transition probabilities $p_{ij}(h)$, the formula for the jump process probabilities for discrete-time processes (see Proposition 1.4.1, part (a)) yields that the one-step transition probabilities $p^{J,h}_{ij}$ for $X^{J,h}$ are given by
$$
p^{J,h}_{ij} = \frac{p_{ij}(h)}{1 - p_{ii}(h)} = \frac{p_{ij}(h)/h}{(1 - p_{ii}(h))/h} \to \frac{q_{ij}}{-q_{ii}} \quad \text{as } h \to 0 \qquad (1.20)
$$
for $i \neq j$, where the limit indicated in (1.20) follows from the definition (1.9) of the generator matrix $Q$. Thus, the limiting distribution of $X^{J,h}$ is that of a Markov process with one-step transition probabilities given by (1.18), establishing part (a) of the proposition. The conditional independence properties stated in (b) and (c) of the proposition follow in the limit from the corresponding properties for the jump process $X^{J,h}$ guaranteed by Proposition 1.4.1. Finally, since $\log(1 + \theta) = \theta + o(\theta)$ by Taylor's formula, we have for all $c \geq 0$ that
$$
P[h T_l^{(h)} > c \mid X^{J,h}(0) = j_0, \ldots, X^{J,h}(n) = j_n] = (p_{j_l j_l}(h))^{\lfloor c/h \rfloor} = \exp(\lfloor c/h \rfloor \log(p_{j_l j_l}(h))) = \exp(\lfloor c/h \rfloor (q_{j_l j_l} h + o(h))) \to \exp(q_{j_l j_l} c) \quad \text{as } h \to 0,
$$
which establishes the remaining part of (c), and the proposition is proved.
Birth-Death Processes. A useful class of countable state Markov processes is the set of birth-death processes. A (continuous time) birth-death process with parameters $(\lambda_0, \lambda_1, \ldots)$ and $(\mu_1, \mu_2, \ldots)$ (also set $\lambda_{-1} = \mu_0 = 0$) is a pure-jump Markov process with state space $\mathcal{S} = \mathbb{Z}_+$ and generator matrix $Q$ defined by $q_{k,k+1} = \lambda_k$, $q_{kk} = -(\mu_k + \lambda_k)$, and $q_{k,k-1} = \mu_k$ for $k \geq 0$, and $q_{ij} = 0$ if $|i - j| \geq 2$. The transition rate diagram is shown in Fig. 1.7. The space-time structure of such a process is as follows. Given the process is in state $k$ at time $t$, the next state visited is $k+1$ with probability $\lambda_k/(\lambda_k + \mu_k)$ and $k-1$ with probability $\mu_k/(\lambda_k + \mu_k)$. The holding time of state $k$ is exponential with parameter $\lambda_k + \mu_k$.
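The space-time structure gives a direct recipe for simulating such a process: draw an exponential holding time with parameter $\lambda_k + \mu_k$, then move up or down with the stated probabilities. A minimal illustrative sketch (not from the original notes), for constant rates $\lambda_k = \lambda$ and $\mu_k = \mu$ with $\mu_0 = 0$:

```python
import random

def simulate_birth_death(lam, mu, t_end, x0=0):
    """Simulate a birth-death process with constant birth rate lam and
    death rate mu (deaths only in states k >= 1), up to time t_end.
    Returns the list of (jump time, new state) pairs."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = lam + (mu if x > 0 else 0.0)    # total jump intensity -q_xx
        t += random.expovariate(rate)          # exponential holding time
        if t > t_end:
            return path
        # next state: up with probability lam/rate, down otherwise
        x = x + 1 if random.random() < lam / rate else x - 1
        path.append((t, x))

random.seed(1)
path = simulate_birth_death(lam=0.5, mu=1.0, t_end=100.0)
print(path[-1])
```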
The space-time structure just described can be used to show that the limit in (1.9) is uniform in $i$ for $j$ fixed, so that the Kolmogorov forward equations are satisfied. These equations are:
$$
\frac{\partial \pi_k(t)}{\partial t} = \lambda_{k-1} \pi_{k-1}(t) - (\lambda_k + \mu_k) \pi_k(t) + \mu_{k+1} \pi_{k+1}(t). \qquad (1.21)
$$
1.5 Poisson Processes

A Poisson process with rate $\lambda$ is a birth-death process $N = (N(t) : t \geq 0)$ with initial distribution $P[N(0) = 0] = 1$, birth rates $\lambda_k = \lambda$ for all $k$ and death rates $\mu_k = 0$ for all $k$. The space-time structure is particularly simple. The jump process $N^J$ is deterministic and is given by $N^J(k) = k$. Therefore the holding times are not only conditionally independent given $N^J$, they are independent and each is exponentially distributed with parameter $\lambda$.

Let us calculate $\pi_j(t) = P[N(t) = j]$. The Kolmogorov forward equation for $k = 0$ is $\partial \pi_0/\partial t = -\lambda \pi_0$, from which we deduce that $\pi_0(t) = \exp(-\lambda t)$. Next, the equation for $\pi_1$ is $\partial \pi_1/\partial t = \lambda \exp(-\lambda t) - \lambda \pi_1(t)$, which can be solved to yield $\pi_1(t) = (\lambda t) \exp(-\lambda t)$. Continuing by induction on $k$, verify that $\pi_k(t) = (\lambda t)^k \exp(-\lambda t)/k!$, so that $N(t)$ is a Poisson random variable with parameter $\lambda t$.
It is instructive to solve the Kolmogorov equations by another method, namely the z-transform method, since it works for some more complex Markov processes as well. For convenience, set $\pi_{-1}(t) = 0$ for all $t$. Then the Kolmogorov equations for $\pi$ become $\frac{\partial \pi_k}{\partial t} = \lambda \pi_{k-1} - \lambda \pi_k$. Multiplying each side of this equation by $z^k$, summing over $k$, and interchanging the order of summation and differentiation, yields that the z-transform $P^*(z,t)$ of $\pi(t)$ satisfies $\frac{\partial P^*(z,t)}{\partial t} = \lambda(z - 1) P^*(z,t)$. Solving this with the initial condition $P^*(z,0) = 1$ yields that $P^*(z,t) = \exp(\lambda(z-1)t)$. Expanding $\exp(\lambda z t)$ into powers of $z$ identifies $(\lambda t)^k \exp(-\lambda t)/k!$ as the coefficient of $z^k$ in $P^*(z,t)$.
In general, for $i$ fixed, $p_{ij}(t)$ is determined by the same equations but with the initial distribution $p_{ij}(0) = I_{\{i=j\}}$. The resulting solution is $p_{ij}(t) = \pi_{j-i}(t)$ for $j \geq i$ and $p_{ij}(t) = 0$ otherwise. Thus for $t_0 < t_1 < \ldots < t_d = s < t$,
$$
P[N(t) - N(s) = l \mid N(t_0) = i_0, \ldots, N(t_{d-1}) = i_{d-1}, N(s) = i] = P[N(t) = i + l \mid N(s) = i] = [\lambda(t-s)]^l \exp(-\lambda(t-s))/l!
$$
Conclude that $N(t) - N(s)$ is a Poisson random variable with mean $\lambda(t-s)$. Furthermore, $N(t) - N(s)$ is independent of $(N(u) : u \leq s)$, which implies that the increments $N(t_1) - N(t_0), N(t_2) - N(t_1), N(t_3) - N(t_2), \ldots$ are mutually independent whenever $0 \leq t_0 < t_1 < \ldots$.
Turning to another characterization, fix $T > 0$ and $\lambda > 0$, let $U_1, U_2, \ldots$ be uniformly distributed on the interval $[0, T]$, and let $K$ be a Poisson random variable with mean $\lambda T$. Finally, define the random process $(\tilde{N}(t) : 0 \leq t \leq T)$ by
$$
\tilde{N}(t) = \sum_{i=1}^{K} I_{\{t \geq U_i\}}. \qquad (1.22)
$$
That is, for $0 \leq t \leq T$, $\tilde{N}(t)$ is the number of the first $K$ uniform random variables located in $[0, t]$. We claim that $\tilde{N}$ has the same distribution as a Poisson random process $N$ with parameter $\lambda$, restricted to the interval $[0, T]$. To verify this claim, it suffices to check that the increments of the two
processes have the same joint distribution. To that end, fix $0 = t_0 < t_1 < \ldots < t_d = T$ and nonnegative integers $k_1, \ldots, k_d$. Define the event $A = \{N(t_1) - N(t_0) = k_1, \ldots, N(t_d) - N(t_{d-1}) = k_d\}$, and define the event $\tilde{A}$ in the same way, but with $N$ replaced by $\tilde{N}$. It is to be shown that $P[A] = P[\tilde{A}]$.

The independence of the increments of $N$ and the Poisson distribution of each increment of $N$ implies that
$$
P[A] = \prod_{i=1}^{d} \frac{(\lambda(t_i - t_{i-1}))^{k_i} \exp(-\lambda(t_i - t_{i-1}))}{k_i!}. \qquad (1.23)
$$
On the other hand, the event $\tilde{A}$ is equivalent to the condition that $K = k$, where $k = k_1 + \ldots + k_d$, and that furthermore the $k$ random variables $U_1, \ldots, U_k$ are distributed appropriately across the $d$ subintervals. Let $p_i = (t_i - t_{i-1})/T$ denote the relative length of the $i$-th subinterval $[t_{i-1}, t_i]$. The probability that the first $k_1$ points $U_1, \ldots, U_{k_1}$ fall in the first interval $[t_0, t_1]$, the next $k_2$ points fall in the next interval, and so on, is $p_1^{k_1} p_2^{k_2} \cdots p_d^{k_d}$. However, there are $\binom{k}{k_1 \, \cdots \, k_d} = \frac{k!}{k_1! \cdots k_d!}$ equally likely ways to assign the variables $U_1, \ldots, U_k$ to the intervals with $k_i$ variables in the $i$-th interval for all $i$, so
$$
P[\tilde{A}] = P[K = k] \binom{k}{k_1 \, \cdots \, k_d} p_1^{k_1} \cdots p_d^{k_d}. \qquad (1.24)
$$
Some rearrangement yields that $P[A] = P[\tilde{A}]$, as was to be shown. Therefore, $\tilde{N}$ and $N$ restricted to the time interval $[0, T]$ have the same distribution.
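This characterization also yields a convenient two-step recipe for generating a Poisson process sample path on $[0, T]$: first draw $K \sim \text{Poisson}(\lambda T)$, then scatter $K$ independent uniform points on $[0, T]$. A minimal illustrative sketch (not from the original notes):

```python
import math, random

def poisson_arrivals(lam, T, rng=random):
    """Sorted arrival times of a rate-lam Poisson process on [0, T],
    generated via the uniform-conditioning construction (1.22)."""
    # Draw K ~ Poisson(lam * T) by inverting the CDF
    k, p = 0, math.exp(-lam * T)
    cdf, u = p, rng.random()
    while u > cdf:
        k += 1
        p *= lam * T / k
        cdf += p
    # Given K = k, the arrival times are k i.i.d. Uniform[0, T] points
    return sorted(rng.uniform(0, T) for _ in range(k))

random.seed(0)
times = poisson_arrivals(lam=2.0, T=10.0)
print(len(times), times[:3])
```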
The above discussion provides some useful characterizations of a Poisson process, which are summarized in the following proposition. Suppose that $X = (X(t) : t \geq 0)$ has pure-jump sample paths with probability one, suppose $P[X(0) = 0] = 1$, and suppose $\lambda > 0$.

Proposition 1.5.1 The following four are equivalent:
(i) $X$ is a Poisson process with rate $\lambda$.
(ii) The jumps of $X$ are all size one, and the holding times are independent, exponentially distributed with parameter $\lambda$.
(iii) Whenever $0 \leq t_0 < t_1 < \ldots < t_n$, the increments $X(t_1) - X(t_0), \ldots, X(t_n) - X(t_{n-1})$ are independent random variables, and whenever $0 \leq s < t$, the increment $X(t) - X(s)$ is a Poisson random variable with mean $\lambda(t-s)$.
(iv) Whenever $0 \leq s < t$, the increment $X(t) - X(s)$ is a Poisson random variable with mean $\lambda(t-s)$, and given $X(t) - X(s) = n$, the jumps of $X$ in $[s, t]$ are height one and the vector of their locations is uniformly distributed over the set $\{\tau \in \mathbb{R}^n_+ : s < \tau_1 < \ldots < \tau_n < t\}$.
The fourth characterization of Proposition 1.5.1 can be readily extended to define Poisson point processes on any set with a suitable positive measure giving the mean number of points within subsets. For example, a Poisson point process on $\mathbb{R}^d$ with intensity $\lambda$ is specified by the following two conditions. First, for any subset $A$ of $\mathbb{R}^d$ with finite volume, the distribution of the number of points in the set is Poisson with mean $\lambda \cdot \mathrm{Volume}(A)$. Second, given that the number of points in such a set $A$ is $n$, the locations of the points within $A$ are distributed according to $n$ mutually independent random variables, each uniformly distributed over $A$. As a sample calculation, take $d = 2$ and consider the distance $R$ of the closest point of the Poisson point process to the origin. Since the number of points falling within a circle of radius $r$ has the Poisson distribution with mean $\lambda \pi r^2$, the distribution function of $R$ is given by
$$
P[R \leq r] = 1 - P[\text{no points of the process in circle of radius } r] = 1 - \exp(-\lambda \pi r^2).
$$
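The same two conditions give a recipe for simulating the point process on a bounded window, and the sample calculation can be checked by Monte Carlo. An illustrative sketch (the square window, chosen large enough that boundary effects on $R$ are negligible, is an assumption of this sketch):

```python
import math, random

def nearest_point_distance(lam, half_width, rng):
    """One sample of R: distance from the origin to the nearest point of a
    Poisson point process of intensity lam on the square [-w, w]^2."""
    area = (2 * half_width) ** 2
    # Number of points in the window is Poisson(lam * area); draw by inversion
    n, p = 0, math.exp(-lam * area)
    cdf, u = p, rng.random()
    while u > cdf:
        n += 1
        p *= lam * area / n
        cdf += p
    # Given n, the points are i.i.d. uniform over the window
    pts = [(rng.uniform(-half_width, half_width),
            rng.uniform(-half_width, half_width)) for _ in range(n)]
    return min(math.hypot(x, y) for x, y in pts) if pts else float("inf")

rng = random.Random(0)
lam, r = 1.0, 0.5
samples = [nearest_point_distance(lam, 3.0, rng) for _ in range(2000)]
est = sum(d <= r for d in samples) / len(samples)
print(est, 1 - math.exp(-lam * math.pi * r * r))  # empirical vs. exact
```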
1.6 Renewal Theory

Renewal theory describes convergence to statistical equilibrium for systems involving sequences of intervals of random duration. It also helps describe what is meant by the state of a random system at a typical time, or as viewed by a typical customer. We begin by examining Poisson processes and an illustration of the dependence of an average on the choice of sampling distribution.

Consider a two-sided Poisson process $(N(t) : t \in \mathbb{R})$ with parameter $\lambda$, which by definition means that $N(0) = 0$, $N$ is right-continuous, the increment $N(t+s) - N(t)$ is a Poisson random variable with mean $\lambda s$ for any $s > 0$ and $t$, and the increments of $N$ are independent. Fix a time $t$. With probability one, $t$ is strictly in between two jump times of $N$. The time between $t$ and the first jump after time $t$ is exponentially distributed, with mean $1/\lambda$. Similarly, the time between the last jump before time $t$ and $t$ is also exponentially distributed with mean $1/\lambda$. (In fact, these two times are independent.) Thus, the mean of the length of the interval between the jumps surrounding $t$ is $2/\lambda$. In contrast, the Poisson process is built up from exponential random variables with mean $1/\lambda$. That the two means are distinct is known as the Poisson paradox.

Perhaps an analogy can help explain the Poisson paradox. Observe that the average population of a country on earth is about $6 \times 10^9 / 200 = 30$ million people. On the other hand, if a person is chosen at random from among all people on earth, the mean number of people in the same country as that person is greater than 240 million people. Indeed, the population of China is about $1.2 \times 10^9$, so that about one of five people live in China. Thus, the mean number of people in the same country as a randomly selected person is at least $1.2 \times 10^9 / 5 = 240$ million people. The second average is much larger than the first, because sampling a country by choosing a person at random skews the distribution of size towards larger countries in direct proportion to the populations of the countries. In the same way, the sampled lifetime of a Poisson process is greater than the typical lifetimes used to generate the process.

A generalization of the Poisson process is a counting process in which the times between arrivals are independent and identically distributed, but not necessarily exponentially distributed. For slightly more generality, the time until the first arrival is allowed to have a distribution different from the lengths of times between arrivals, as follows.
Suppose $F$ and $G$ are distribution functions of nonnegative random variables, and to avoid trivialities, suppose that $F(0) < 1$. Let $T_1, T_2, \ldots$ be independent, such that $T_1$ has distribution function $G$ and $T_k$ has distribution function $F$ for $k \geq 2$. The time of the $n$-th arrival is $\tau(n) = T_1 + T_2 + \ldots + T_n$ and the number of arrivals up to time $t$ is $\alpha(t) = \max\{n : \tau(n) \leq t\}$. The residual lifetime at a fixed time $t$ is defined to be $\gamma_t = \tau(\alpha(t) + 1) - t$ and the sampled lifetime at a fixed time $t$ is $L_t = T_{\alpha(t)+1}$, as pictured in Figure 1.8. (Note that if one or more arrivals occur at a time $t$, then $\gamma_t$ and $L_t$ are both equal to the next nonzero lifetime.) Renewal theory, the focus of this section, describes the limits of the residual lifetime and sampled lifetime distributions for such processes.

Figure 1.8: Sampled lifetime and forward lifetime

This section concludes with a presentation of the renewal equation, which plays a key role in renewal theory. To write the equation in general form, the definition of convolution of two measures on the real line, given next, is needed. Let $F_1$ and $F_2$ be two distribution functions, corresponding
to two positive measures on the real line. The convolution of the two measures is again a measure, such that its distribution function is $F_1 * F_2$, defined by the Riemann-Stieltjes integral
$$
F_1 * F_2(x) = \int_{-\infty}^{\infty} F_1(x-y) \, F_2(dy).
$$
If $X_1$ and $X_2$ are independent random variables, if $X_1$ has distribution function $F_1$, and if $X_2$ has distribution function $F_2$, then $X_1 + X_2$ has distribution function $F_1 * F_2$. It is clear from this that convolution is commutative ($F * G = G * F$) and associative ($(F * G) * H = F * (G * H)$). If $F_1$ has density function $f_1$ and $F_2$ has density function $f_2$, then $F_1 * F_2$ has density function $f_1 * f_2$, defined as the usual convolution of functions on $\mathbb{R}$:
$$
f_1 * f_2(x) = \int_{-\infty}^{\infty} f_1(x-y) f_2(y) \, dy.
$$
Similarly, if $F_1$ has a discrete mass function $f_1$ on $\mathbb{Z}$ and $F_2$ has discrete mass function $f_2$ on $\mathbb{Z}$, then $F_1 * F_2$ has discrete mass function $f_1 * f_2$ defined as the usual convolution of functions on $\mathbb{Z}$:
$$
f_1 * f_2(n) = \sum_{k=-\infty}^{\infty} f_1(n-k) f_2(k).
$$
Define the cumulative mean renewal function $H$ by $H(t) = E[\alpha(t)]$. Function $H$ can be viewed as the cumulative distribution function for the renewal measure, which is the measure on $\mathbb{R}_+$ such that the measure of an interval is the mean number of arrivals in the interval. In particular, for $0 \leq s \leq t$, $H(t) - H(s)$ is the mean number of arrivals in the interval $(s, t]$. The function $H$ determines much about the renewal variables; in particular the joint distribution of $(L_t, \gamma_t)$ can be expressed in terms of $H$ and the CDFs $F$ and $G$. For example, if $t \geq u > 0$, then
$$
P[L_t \leq u] = \int_{t-u}^{t} P[\text{a renewal occurs in } [s, s+ds] \text{ with lifetime in } [t-s, u]] = \int_{t-u}^{t} (F(u) - F(t-s)) \, dH(s). \qquad (1.25)
$$
Observe that the function $H$ can be expressed as
$$
H(t) = \sum_{n=1}^{\infty} U_n(t) \qquad (1.26)
$$
where $U_n$ is the probability distribution function of $T_1 + \cdots + T_n$. From (1.26) and the facts $U_1 = G$ and $U_n = F * U_{n-1}$ for $n \geq 2$, the renewal equation follows:
$$
H = G + F * H.
$$
1.6.1 Renewal Theory in Continuous Time

The distribution function $F$ is said to be lattice type if, for some $a > 0$, the distribution is concentrated on the set of integer multiples of $a$. Otherwise $F$ is said to be nonlattice type. In this subsection $F$ is assumed to be nonlattice type, while the next subsection touches on discrete-time renewal theory, addressing the situations when $F$ is lattice type. If $F$ and $G$ have densities, then $H$ has a density and the renewal equation can be written as $h = g + f * h$. Many convergence questions about renewal processes, and also about Markov processes, can be settled by the following theorem (for a proof, see [1, 18]).

Proposition 1.6.1 (Blackwell's Renewal Theorem in continuous time) Suppose $F$ is a nonlattice distribution. Then for fixed $h > 0$,
$$
\lim_{t \to \infty} (H(t+h) - H(t))/h = 1/m_1,
$$
where $m_k$ is the $k$-th moment of $F$.
The renewal theorem can be used to show that if the distribution $F$ is nonlattice, then
$$
\lim_{t \to \infty} P[L_t \leq u, \gamma_t \leq s] = F_{L\gamma}(u, s)
$$
where
$$
F_{L\gamma}(u, s) = \frac{1}{m_1} \int_0^{s \wedge u} (F(u) - F(y)) \, dy \qquad u, s \in \mathbb{R}_+. \qquad (1.27)
$$
For example, to identify $\lim_{t \to \infty} P[L_t \leq u]$ one can start with (1.25). Roughly speaking, the renewal theorem tells us that in the limit, we can replace $dH(s)$ in (1.25) by $\frac{ds}{m_1}$. This can be justified by transforming the representation (1.25) by a change of variables (letting $x = t - s$ for $t$ fixed) and an integration by parts, before applying the renewal theorem, as follows:
$$
P[L_t \leq u] = \int_0^u (F(u) - F(x)) \, dH(t-x) = \int_0^u (H(t) - H(t-x)) \, dF(x) \to \int_0^u \frac{x}{m_1} \, dF(x) = \frac{1}{m_1} \int_0^u (F(u) - F(y)) \, dy.
$$
The following properties of the joint distribution of $(L, \gamma)$ under $F_{L\gamma}$ are usually much easier to work with than the expression (1.27):

- Moments: $E[L^k] = m_{k+1}/m_1$ and $E[\gamma^k] = m_{k+1}/((k+1) m_1)$,
- Density of $\gamma$: $\gamma$ has density $f_\gamma(s) = (1 - F(s))/m_1$,
- Joint density: If $F$ has density $f$, then $(L, \gamma)$ has density
$$
f_{L,\gamma}(u, s) = \begin{cases} f(u)/m_1 & \text{if } 0 \leq s \leq u \\ 0 & \text{else} \end{cases}
$$
Equivalently, $L$ has density $f_L(u) = u f(u)/m_1$ and, given $L = u$, $\gamma$ is uniformly distributed over the interval $[0, u]$.
- Laplace transforms: $F^*_\gamma(s) = (1 - F^*(s))/(s m_1)$ and $F^*_L(s) = -(dF^*(s)/ds)/m_1$.

The fact that the stationary sampled lifetime density, $f_L(u)$, is proportional to $u f(u)$, makes sense according to the intuition gleaned from the countries example. The lifetimes here play the role of countries in that example, and sampling by a particular time (analogous to selecting a country by first selecting an individual) results in a factor $u$ proportional to the size of the lifetime.
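A quick simulation makes the size-biasing visible. The sketch below (illustrative, not from the notes) uses i.i.d. Uniform[0, 2] lifetimes, for which $m_1 = 1$ and $m_2 = 4/3$, and estimates the mean sampled lifetime at a large fixed time; the estimate lands near $m_2/m_1 = 4/3$ rather than $m_1 = 1$:

```python
import random

def sampled_lifetime(t, rng):
    """Return L_t, the lifetime in progress at time t, for a renewal
    process with i.i.d. Uniform[0, 2] inter-arrival times."""
    s = 0.0
    while True:
        T = rng.uniform(0.0, 2.0)
        if s + T > t:          # this lifetime straddles time t
            return T
        s += T

rng = random.Random(42)
t = 100.0
samples = [sampled_lifetime(t, rng) for _ in range(10000)]
print(sum(samples) / len(samples))   # near m2/m1 = 4/3, not m1 = 1
```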
1.6.2 Renewal Theory in Discrete Time

Suppose that the variables $T_1, T_2, \ldots$ are integer valued. The process $\alpha(t)$ need only be considered for integer values of $t$. The renewal equation can be written as $h = g + f * h$, where $h(n)$ is the expected number of renewals that occur at time $n$, and $f$ and $g$ are the probability mass functions corresponding to $F$ and $G$. The z-transform of $h$ is readily found to be given by $H(z) = G(z)/(1 - F(z))$.

Recall that the greatest common divisor, GCD, of a set of positive integers is the largest integer that evenly divides all the integers in the set. For example, $\mathrm{GCD}\{4, 6, 12\} = 2$.

Proposition 1.6.2 (Blackwell's Renewal Theorem in discrete time) Suppose $\mathrm{GCD}\{k \geq 1 : f(k) > 0\} = 1$. Then $\lim_{n \to \infty} h(n) = 1/m_1$, where $m_k$ is the $k$-th moment of $F$.
The proposition can be used to prove the following fact. If $\mathrm{GCD}\{k \geq 1 : f(k) > 0\} = 1$, then $(L_n, \gamma_n)$ converges in distribution as $n \to \infty$, and under the limiting distribution
$$
P[L = l] = l f(l)/m_1 \quad \text{and} \quad P[\gamma = i \mid L = l] = 1/l \quad \text{for } 1 \leq i \leq l,
$$
$$
P[\gamma = i] = \frac{1 - F(i-1)}{m_1} \quad \text{for } i \geq 1,
$$
and
$$
E[L^k] = m_{k+1}/m_1 \quad \text{and} \quad E[\gamma] = (m_2/(2m_1)) + 0.5.
$$
Note that if a renewal occurs at time $t$, then by definition $\gamma_t$ is the time between $t$ and the first renewal after time $t$. An alternative definition would take $\gamma_t$ equal to zero in such a case. Under the alternative definition, $E[\gamma] = (m_2/(2m_1)) - 0.5$. In fact, this is the same as the mean of the backwards residual arrival time for the original definition. In contrast, in continuous time, both the forward and backward residual lifetimes have mean $m_2/(2m_1)$.
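The discrete renewal equation $h = g + f * h$ can be solved recursively for $h(0), h(1), \ldots$, since the convolution at time $n$ involves only earlier values of $h$. A small illustrative sketch (not from the notes), which also exhibits the Blackwell limit $h(n) \to 1/m_1$:

```python
def renewal_function(f, g, n_max):
    """Solve h = g + f * h on {0, ..., n_max}, where f and g are dicts
    mapping k >= 1 to probability masses."""
    h = [0.0] * (n_max + 1)
    for n in range(n_max + 1):
        h[n] = g.get(n, 0.0) + sum(f[k] * h[n - k] for k in f if k <= n)
    return h

# Inter-renewal pmf f with support {1, 2}: m1 = 1 * 0.5 + 2 * 0.5 = 1.5
f = {1: 0.5, 2: 0.5}
h = renewal_function(f, g=f, n_max=50)   # take G = F (ordinary renewal process)
print(h[50], 1 / 1.5)                    # h(n) approaches 1/m1 = 2/3
```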
1.7 Classification and Convergence of Discrete State Markov Processes

1.7.1 Examples with finite state space

Recall that a probability distribution $\pi$ on $\mathcal{S}$ is an equilibrium probability distribution for a time-homogeneous Markov process $X$ if $\pi = \pi H(t)$ for all $t$. We shall see in this section that under certain natural conditions, the existence of an equilibrium probability distribution is related to whether the distribution of $X(t)$ converges as $t \to \infty$. Existence of an equilibrium distribution is also connected to the mean time needed for $X$ to return to its starting state. To motivate the conditions that will be imposed, we begin by considering four examples of finite state processes. Then the relevant definitions are given for finite or countably-infinite state space, and propositions regarding convergence are presented. The key role of renewal theory is exposed.
Example 1.1 Consider the discrete-time Markov process with the one-step probability diagram shown in Figure 1.9. Note that the process can't escape from the set of states $\mathcal{S}_1 = \{a, b, c, d, e\}$, so that if the initial state $X(0)$ is in $\mathcal{S}_1$ with probability one, then the limiting distribution is supported by $\mathcal{S}_1$. Similarly, if the initial state $X(0)$ is in $\mathcal{S}_2 = \{f, g, h\}$ with probability one, then the limiting distribution is supported by $\mathcal{S}_2$. Thus, the limiting distribution is not unique for this process. The natural way to deal with this problem is to decompose the original problem into two problems. That is, consider a Markov process on $\mathcal{S}_1$, and then consider a Markov process on $\mathcal{S}_2$.
Figure 1.9: Example 1.1
Does the distribution of $X(k)$ necessarily converge if $X(0) \in \mathcal{S}_1$ with probability one? The answer is no. For example, note that if $X(0) = a$, then $X(k) \in \{a, c, e\}$ for all even values of $k$, whereas $X(k) \in \{b, d\}$ for all odd values of $k$. That is, $\pi_a(k) + \pi_c(k) + \pi_e(k)$ is one if $k$ is even and is zero if $k$ is odd. Therefore, if $\pi_a(0) = 1$, then $\pi(k)$ does not converge as $k \to \infty$.

Basically speaking, the Markov process of this example fails to have a unique limiting distribution independent of the initial state for two reasons: (i) the process is not irreducible, and (ii) the states are periodic.
Example 1.2 Consider the two-state, continuous time Markov process with the transition rate diagram shown in Figure 1.10 for some positive constants $\alpha$ and $\beta$.

Figure 1.10: Example 1.2

The generator matrix is given by
$$
Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.
$$
We shall compute $H(t) = e^{Qt}$ by solving the Kolmogorov forward equations. The first equation is
$$
\frac{\partial p_{11}(t)}{\partial t} = -\alpha p_{11}(t) + \beta p_{12}(t); \qquad p_{11}(0) = 1.
$$
But $p_{12}(t) = 1 - p_{11}(t)$, so
$$
\frac{\partial p_{11}(t)}{\partial t} = -(\alpha + \beta) p_{11}(t) + \beta; \qquad p_{11}(0) = 1.
$$
By differentiation we check that this equation has the solution
$$
p_{11}(t) = e^{-(\alpha+\beta)t} + \int_0^t e^{-(\alpha+\beta)(t-s)} \beta \, ds = \frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta} e^{-(\alpha+\beta)t}.
$$
Using the fact $p_{12}(t) = 1 - p_{11}(t)$ yields $p_{12}(t)$. The second row of $H(t)$ can be found similarly, yielding
$$
H(t) = \begin{pmatrix} \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\ \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \end{pmatrix} + e^{-(\alpha+\beta)t} \begin{pmatrix} \frac{\alpha}{\alpha+\beta} & -\frac{\alpha}{\alpha+\beta} \\ -\frac{\beta}{\alpha+\beta} & \frac{\beta}{\alpha+\beta} \end{pmatrix}.
$$
Note that $H(0) = I$ as required, and that
$$
\lim_{t \to \infty} H(t) = \begin{pmatrix} \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\ \frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \end{pmatrix}.
$$
Thus, for any initial distribution $\pi(0)$,
$$
\lim_{t \to \infty} \pi(t) = \lim_{t \to \infty} \pi(0) H(t) = \left( \frac{\beta}{\alpha+\beta}, \frac{\alpha}{\alpha+\beta} \right).
$$
The rate of convergence is exponential, with rate parameter $\alpha + \beta$, which is the magnitude of the nonzero eigenvalue of $Q$. Note that the limiting distribution is the unique probability distribution $\pi$ satisfying $\pi Q = 0$. The periodicity problem of Example 1.1 does not arise for continuous-time processes.
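As a sanity check, $H(t) = e^{Qt}$ can also be computed numerically and compared with the closed form above. A minimal sketch using SciPy's matrix exponential (the values $\alpha = 1$, $\beta = 2$ are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 1.0, 2.0
Q = np.array([[-alpha, alpha],
              [beta, -beta]])

t = 0.7
H_numeric = expm(Q * t)   # H(t) = e^{Qt}

s = alpha + beta
H_closed = (np.array([[beta, alpha], [beta, alpha]])
            + np.exp(-s * t) * np.array([[alpha, -alpha], [-beta, beta]])) / s

print(np.allclose(H_numeric, H_closed))   # True
print(H_numeric)
```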
Example 1.3 Consider the continuous-time Markov process with the transition rate diagram in Figure 1.11.

Figure 1.11: Example 1.3

The $Q$ matrix is the block-diagonal matrix given by
$$
Q = \begin{pmatrix} -\alpha & \alpha & 0 & 0 \\ \beta & -\beta & 0 & 0 \\ 0 & 0 & -\alpha & \alpha \\ 0 & 0 & \beta & -\beta \end{pmatrix}.
$$
This process is not irreducible, but rather the transition rate diagram can be decomposed into two parts, each equivalent to the diagram for Example 1.2. The equilibrium probability distributions are the probability distributions of the form
$$
\pi = \left( \lambda \frac{\beta}{\alpha+\beta}, \; \lambda \frac{\alpha}{\alpha+\beta}, \; (1-\lambda) \frac{\beta}{\alpha+\beta}, \; (1-\lambda) \frac{\alpha}{\alpha+\beta} \right),
$$
where $\lambda$ is the probability placed on the subset $\{1, 2\}$.
Figure 1.12: Example 1.4
Example 1.4 Consider the discrete-time Markov process with the transition probability diagram in Figure 1.12. The one-step transition probability matrix $P$ is given by
$$
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.
$$
Solving the equation $\pi = \pi P$ we find there is a unique equilibrium probability vector, namely $\pi = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. On the other hand, if $\pi(0) = (1, 0, 0)$, then
$$
\pi(k) = \pi(0) P^k = \begin{cases} (1, 0, 0) & \text{if } k \equiv 0 \bmod 3 \\ (0, 1, 0) & \text{if } k \equiv 1 \bmod 3 \\ (0, 0, 1) & \text{if } k \equiv 2 \bmod 3 \end{cases}
$$
Therefore, $\pi(k)$ does not converge as $k \to \infty$.
Let $X$ be a discrete-time, or pure-jump continuous-time, Markov process on the countable state space $\mathcal{S}$. Suppose $X$ is time-homogeneous. The process is said to be irreducible if for all $i, j \in \mathcal{S}$, there exists $s > 0$ so that $p_{ij}(s) > 0$.
1.7.2 Discrete Time

Clearly a probability distribution $\pi$ is an equilibrium probability distribution for a discrete-time time-homogeneous Markov process $X$ if and only if $\pi = \pi P$. The period of a state $i$ is defined to be $\mathrm{GCD}\{k \geq 0 : p_{ii}(k) > 0\}$. The set $\{k \geq 0 : p_{ii}(k) > 0\}$ is closed under addition, which by elementary algebra² implies that the set contains all sufficiently large integer multiples of the period. The Markov process is called aperiodic if the period of all the states is one.
Proposition 1.7.1 If $X$ is irreducible, all states have the same period.

Proof. Let $i$ and $j$ be two states. By irreducibility, there are integers $k_1$ and $k_2$ so that $p_{ij}(k_1) > 0$ and $p_{ji}(k_2) > 0$. For any integer $n$, $p_{ii}(n + k_1 + k_2) \geq p_{ij}(k_1) p_{jj}(n) p_{ji}(k_2)$, so the set $\{k \geq 0 : p_{ii}(k) > 0\}$ contains the set $\{k \geq 0 : p_{jj}(k) > 0\}$ translated up by $k_1 + k_2$. Thus the period of $i$ is less than or equal to the period of $j$. Since $i$ and $j$ were arbitrary states, the proposition follows.
For a fixed state $i$, define $\tau_i = \min\{k \geq 1 : X(k) = i\}$, where we adopt the convention that the minimum of an empty set of numbers is $+\infty$. Let $M_i = E[\tau_i \mid X(0) = i]$. If $P[\tau_i < +\infty \mid X(0) = i] < 1$, then state $i$ is called transient (and by convention, $M_i = +\infty$). Otherwise $P[\tau_i < +\infty \mid X(0) = i] = 1$, and $i$ is then said to be positive recurrent if $M_i < +\infty$ and to be null recurrent if $M_i = +\infty$.

² See the Euclidean algorithm, Chinese remainder theorem, and Bézout's theorem.
Proposition 1.7.2 Suppose $X$ is irreducible and aperiodic.
(a) All states are transient, or all are positive-recurrent, or all are null-recurrent.
(b) For any initial distribution $\pi(0)$, $\lim_{t \to \infty} \pi_i(t) = 1/M_i$, with the understanding that the limit is zero if $M_i = +\infty$.
(c) An equilibrium probability distribution $\pi$ exists if and only if all states are positive-recurrent.
(d) If it exists, the equilibrium probability distribution is given by $\pi_i = 1/M_i$. (In particular, if it exists, the equilibrium probability distribution is unique.)
Proof. (a) Suppose state $i$ is recurrent. Given $X(0) = i$, after leaving $i$ the process returns to state $i$ at time $\tau_i$. The process during the time interval $\{0, \ldots, \tau_i\}$ is the first excursion of $X$ from state $i$. From time $\tau_i$ onward, the process behaves just as it did initially. Thus there is a second excursion from $i$, third excursion from $i$, and so on. Let $T_k$ for $k \geq 1$ denote the length of the $k$-th excursion. Then the $T_k$'s are independent, and each has the same distribution as $T_1 = \tau_i$. Let $j$ be another state and let $\epsilon$ denote the probability that $X$ visits state $j$ during one excursion from $i$. Since $X$ is irreducible, $\epsilon > 0$. The excursions are independent, so state $j$ is visited during the $k$-th excursion with probability $\epsilon$, independently of whether $j$ was visited in earlier excursions. Thus, the number of excursions needed until state $j$ is reached has the geometric distribution with parameter $\epsilon$, which has mean $1/\epsilon$. In particular, state $j$ is eventually visited with probability one. After $j$ is visited the process eventually returns to state $i$, and then within an average of $1/\epsilon$ additional excursions, it will return to state $j$ again. Thus, state $j$ is also recurrent. Hence, if one state is recurrent, all states are recurrent.

The same argument shows that if $i$ is positive-recurrent, then $j$ is positive-recurrent. Given $X(0) = i$, the mean time needed for the process to visit $j$ and then return to $i$ is $M_i/\epsilon$, since on average $1/\epsilon$ excursions of mean length $M_i$ are needed. Thus, the mean time to hit $j$ starting from $i$, and the mean time to hit $i$ starting from $j$, are both finite. Thus, $j$ is positive-recurrent. Hence, if one state is positive-recurrent, all states are positive-recurrent.
(b) Part (b) of the proposition follows immediately from the renewal theorem for discrete time. The first lifetime for the renewal process is the time needed to first reach state $i$, and the subsequent renewal times are the lengths of the excursions from state $i$. The probability $\pi_i(t)$ is the probability that there is a renewal event at time $t$, which by the renewal theorem converges to $1/M_i$.
(c) Suppose all states are positive-recurrent. By the law of large numbers, for any state $j$, the long run fraction of time the process is in state $j$ is $1/M_j$ with probability one. Similarly, for any states $i$ and $j$, the long run fraction of time the process is in state $j$ is $\gamma_{ij}/M_i$, where $\gamma_{ij}$ is the mean number of visits to $j$ in an excursion from $i$. Therefore $1/M_j = \gamma_{ij}/M_i$. This implies that $\sum_i 1/M_i = 1$. That is, $\pi$ defined by $\pi_i = 1/M_i$ is a probability distribution. The convergence for each $i$ separately given in part (b), together with the fact that $\pi$ is a probability distribution, imply that $\sum_i |\pi_i(t) - \pi_i| \to 0$. Thus, taking $s$ to infinity in the equation $\pi(s) H(t) = \pi(s+t)$ yields $\pi H(t) = \pi$, so that $\pi$ is an equilibrium probability distribution.

Conversely, if there is an equilibrium probability distribution $\pi$, consider running the process with initial distribution $\pi$. Then $\pi(t) = \pi$ for all $t$. So by part (b), for any state $i$, $\pi_i = 1/M_i$. Taking a state $i$ such that $\pi_i > 0$, it follows that $M_i < \infty$. So state $i$ is positive-recurrent. By part (a), all states are positive-recurrent.

(d) Part (d) was proved in the course of proving part (c).
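The identity $\pi_i = 1/M_i$ in part (d) is easy to check by simulation for a small aperiodic chain. An illustrative sketch (the two-state chain used here is an assumption, not from the notes):

```python
import random

# Two-state aperiodic chain; its equilibrium distribution is pi = (0.8, 0.2)
P = [[0.9, 0.1], [0.4, 0.6]]

def mean_return_time(i, n_returns, rng):
    """Estimate M_i, the mean return time to state i, by simulation."""
    x, steps, total, returns = i, 0, 0, 0
    while returns < n_returns:
        x = 0 if rng.random() < P[x][0] else 1
        steps += 1
        if x == i:
            total += steps
            steps = 0
            returns += 1
    return total / n_returns

rng = random.Random(7)
M0 = mean_return_time(0, 100000, rng)
print(M0, 1 / M0)   # M_0 near 1.25, so 1/M_0 near pi_0 = 0.8
```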
We conclude this section by describing a technique to establish a rate of convergence to the equilibrium distribution for finite-state Markov processes. Define $\delta(P)$ for a one-step transition probability matrix $P$ by
$$
\delta(P) = \min_{i,k} \sum_j p_{ij} \wedge p_{kj},
$$
where $a \wedge b = \min\{a, b\}$. The number $\delta(P)$ is known as Dobrushin's coefficient of ergodicity. Since $a + b - 2(a \wedge b) = |a - b|$ for $a, b \geq 0$, we also have
$$
2 - 2\delta(P) = \max_{i,k} \sum_j |p_{ij} - p_{kj}|.
$$
Let $\|\sigma\|$ for a vector $\sigma$ denote the $L_1$ norm: $\|\sigma\| = \sum_i |\sigma_i|$.
Proposition 1.7.3 For any probability vectors $\pi$ and $\sigma$, $\|\pi P - \sigma P\| \leq (1 - \delta(P)) \|\pi - \sigma\|$. Furthermore, if $\delta(P) > 0$ then there is a unique equilibrium distribution $\pi^\infty$, and for any other probability distribution $\pi$ on $\mathcal{S}$, $\|\pi P^l - \pi^\infty\| \leq 2(1 - \delta(P))^l$.
Proof. Let $\tilde{\pi}_i = \pi_i - \pi_i \wedge \sigma_i$ and $\tilde{\sigma}_i = \sigma_i - \pi_i \wedge \sigma_i$. Clearly $\|\tilde{\pi}\| = \|\tilde{\sigma}\|$ and $\|\pi - \sigma\| = 2\|\tilde{\pi}\| = 2\|\tilde{\sigma}\|$. Furthermore,
$$
\|\pi P - \sigma P\| = \|\tilde{\pi} P - \tilde{\sigma} P\| = \sum_j \Big| \sum_i \tilde{\pi}_i P_{ij} - \sum_k \tilde{\sigma}_k P_{kj} \Big| = \frac{1}{\|\tilde{\pi}\|} \sum_j \Big| \sum_{i,k} \tilde{\pi}_i \tilde{\sigma}_k (P_{ij} - P_{kj}) \Big| \leq \frac{1}{\|\tilde{\pi}\|} \sum_{i,k} \tilde{\pi}_i \tilde{\sigma}_k \sum_j |P_{ij} - P_{kj}| \leq \|\tilde{\pi}\|(2 - 2\delta(P)) = \|\pi - \sigma\|(1 - \delta(P)),
$$
which proves the first part of the proposition. Iterating the inequality just proved yields that
$$
\|\pi P^l - \sigma P^l\| \leq (1 - \delta)^l \|\pi - \sigma\| \leq 2(1 - \delta)^l, \qquad (1.28)
$$
where $\delta$ abbreviates $\delta(P)$. This inequality for $\sigma = \pi P^n$ yields that $\|\pi P^l - \pi P^{l+n}\| \leq 2(1 - \delta)^l$. Thus the sequence $\pi P^l$ is a Cauchy sequence and has a limit $\pi^\infty$, and $\pi^\infty P = \pi^\infty$. Finally, taking $\sigma$ in (1.28) equal to $\pi^\infty$ yields the last part of the proposition.


Proposition 1.7.3 typically does not yield the exact asymptotic rate at which $\|\pi P^l - \pi^\infty\|$ tends to zero. The asymptotic behavior can be investigated by computing $(I - zP)^{-1}$, and then matching powers of $z$ in the identity $(I - zP)^{-1} = \sum_{n=0}^{\infty} z^n P^n$.
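Dobrushin's coefficient is straightforward to compute, and the contraction bound of Proposition 1.7.3 can be observed numerically. A minimal illustrative sketch (the transition matrix is an arbitrary assumed example):

```python
import numpy as np

def dobrushin(P):
    """delta(P) = min over row pairs (i, k) of sum_j min(P[i,j], P[k,j])."""
    n = len(P)
    return min(np.minimum(P[i], P[k]).sum()
               for i in range(n) for k in range(n))

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
delta = dobrushin(P)

pi = np.array([1.0, 0.0, 0.0])
sigma = np.array([0.0, 0.0, 1.0])
for l in range(5):
    gap = np.abs(pi - sigma).sum()
    print(l, gap, 2 * (1 - delta) ** l)   # the gap stays below the bound
    pi, sigma = pi @ P, sigma @ P
```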
1.7.3 Continuous Time

Define, for $i \in \mathcal{S}$, $\tau_i^o = \min\{t > 0 : X(t) \neq i\}$, and $\tau_i = \min\{t > \tau_i^o : X(t) = i\}$. Thus, if $X(0) = i$, $\tau_i$ is the first time the process returns to state $i$, with the exception that $\tau_i = +\infty$ if the process never returns to state $i$. The following definitions are the same as when $X$ is a discrete-time process. Let $M_i = E[\tau_i \mid X(0) = i]$. If $P[\tau_i < +\infty] < 1$, then state $i$ is called transient. Otherwise $P[\tau_i < +\infty] = 1$, and $i$ is then said to be positive recurrent if $M_i < +\infty$ and to be null recurrent if $M_i = +\infty$.
The following propositions are analogous to those for discrete-time Markov processes. Proofs
can be found in [1, 32].

Proposition 1.7.4 Suppose X is irreducible.
(a) All states are transient, or all are positive-recurrent, or all are null-recurrent.
(b) For any initial distribution π(0), lim_{t→+∞} π_i(t) = 1/(-q_ii M_i), with the understanding that the
limit is zero if M_i = +∞.

Proposition 1.7.5 Suppose X is irreducible and nonexplosive.
(a) A probability distribution π is an equilibrium distribution if and only if πQ = 0.
(b) An equilibrium probability distribution exists if and only if all states are positive-recurrent.
(c) If all states are positive-recurrent, the equilibrium probability distribution is given by
π_i = 1/(-q_ii M_i). (In particular, if it exists, the equilibrium probability distribution is unique).
Once again, the renewal theorem leads to an understanding of the convergence result of part
(b) in Proposition 1.7.4. Let T_k denote the amount of time between the kth and (k+1)th visits to
state i. Then the variables {T_k : k ≥ 0} are independent and identically distributed, and hence the
sequence of visiting times is a renewal sequence. Let H(t) denote the expected number of visits to
state i during the interval [0, t], so that H is the cumulative mean renewal function for the sequence
T_1, T_2, . . .. As above, note that M_i = E[T_k].
The renewal theorem is applied next to show that

    lim_{t→∞} P[X(t) = i] = 1/(-q_ii M_i).   (1.29)
Indeed, note that the definition of H, the fact that holding times for state i are exponentially
distributed with parameter -q_ii, integration by parts, a change of variable, and the renewal theorem,
imply that

    P[X(t) = i] = P[X(t) = i, T_1 > t] + ∫_0^t P[X(t) = i, (last renewal before t) ∈ ds]
                = P[X(0) = i] exp(q_ii t) + ∫_0^t exp(q_ii(t - s)) dH(s)
                = P[X(0) = i] exp(q_ii t) + ∫_0^t exp(q_ii(t - s))(-q_ii){H(t) - H(s)} ds
                = P[X(0) = i] exp(q_ii t) + ∫_0^t exp(q_ii u)(-q_ii){H(t) - H(t - u)} du
                → ∫_0^∞ exp(q_ii u)(-q_ii)(u/M_i) du = 1/(-q_ii M_i)   as t → ∞.

This establishes (1.29).
An immediate consequence of (1.29) is that if there is an equilibrium distribution π, then the
Markov process is positive recurrent and the equilibrium distribution is given by

    π_i = 1/(-q_ii M_i).   (1.30)

Conversely, if the process is positive recurrent, then there is an equilibrium probability distribution
and it is given by (1.30).
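The relation (1.30) can be checked numerically. The following sketch (the generator matrix Q is an arbitrary illustrative example) solves πQ = 0 for π, computes each mean return time M_i by first-step analysis, and confirms that π_i = 1/(-q_ii M_i).

```python
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0,-2.0, 1.0],
              [ 2.0, 2.0,-4.0]])
n = Q.shape[0]

# Equilibrium distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Mean return time M_i by first-step analysis: h_j is the mean time to
# hit i from j, satisfying h_j = 1/(-q_jj) + sum_{k != j,i} (q_jk/(-q_jj)) h_k.
for i in range(n):
    others = [j for j in range(n) if j != i]
    A2, b2 = np.eye(n - 1), np.zeros(n - 1)
    for a, j in enumerate(others):
        b2[a] = 1.0 / (-Q[j, j])
        for c, k in enumerate(others):
            if k != j:
                A2[a, c] -= Q[j, k] / (-Q[j, j])
    h = np.linalg.solve(A2, b2)
    M_i = 1.0 / (-Q[i, i]) + sum(Q[i, k] / (-Q[i, i]) * h[c]
                                 for c, k in enumerate(others))
    print(i, pi[i], 1.0 / (-Q[i, i] * M_i))   # the two columns should agree
```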
Part (a) of Proposition 1.7.5 is suggested by the Kolmogorov forward equation ∂π(t)/∂t =
π(t)Q. The assumption that X be nonexplosive is needed (per homework problem), but the following
proposition shows that the Markov processes encountered in most applications are nonexplosive.

Proposition 1.7.6 Suppose X is irreducible. Fix a state i_o and for k ≥ 1 let S_k denote the set of
states reachable from i_o in k jumps. Suppose for each k ≥ 1 there is a constant γ_k so that the jump
intensities on S_k are bounded by γ_k, that is, suppose -q_ii ≤ γ_k for i ∈ S_k. If Σ_{k=1}^∞ 1/γ_k = +∞, then
the process X is nonexplosive.
The proposition is proved after the following lemma is proved:

Lemma 1.7.7 Suppose S is a random variable such that E[S] ≥ 1 and S is the sum of finitely
many independent exponentially distributed random variables (with possibly different means). Then
P[S ≥ 1/3] ≥ 1/4.

Proof. Let S = T_1 + · · · + T_n, such that the T_i are independent, exponentially distributed, and
suppose E[S] ≥ 1. If E[T_j] ≥ 1/3 for some j then P[S ≥ 1/3] ≥ P[T_j ≥ 1/3] ≥ e^{-1} ≥ 1/4. So it remains
to prove the lemma in case E[T_i] ≤ 1/3 for all i. But then Var(T_i) = E[T_i]^2 ≤ (1/3)E[T_i]. Therefore,
Var(S) ≤ E[S]/3. Thus, by the Chebychev inequality, P[S ≤ 1/3] ≤ P[|S - E[S]| ≥ 2E[S]/3] ≤
9Var(S)/(4E[S]^2) ≤ 3/(4E[S]) ≤ 3/4. The lemma is proved.
Proof. Let X^J denote the jump process and T_0, T_1, . . . the sequence of holding times of X.
Select an increasing sequence of integers k_0 = 0 < k_1 < k_2 < · · · so that Σ_{k=k_{i-1}}^{k_i - 1} 1/γ_k ≥ 1 for all
i ≥ 1. Fix any particular sample path (x^J(n)) for the jump process with x^J(0) = i_o. Conditioned
on the process X^J following this sample path, the holding times are independent, exponentially
distributed, and E[T_k] ≥ 1/γ_k for all k ≥ 0. Thus, if S_i is the random variable S_i = Σ_{k=k_{i-1}}^{k_i - 1} T_k,
it follows from Lemma 1.7.7 that P[S_i ≥ 1/3] ≥ 1/4 for all i. Thus, for a suitable threshold δ_i ≥ 1/3,
the random variable ξ_i = I_{S_i ≥ δ_i} is a Be(1/4) random variable, S_i ≥ ξ_i/3 for all i, and the ξ_i's
are independent. Thus, Σ_{k=0}^{k_j - 1} T_k ≥ (1/3) Σ_{i=1}^j ξ_i, and by the law of large numbers, Σ_{i=1}^j ξ_i → ∞ a.s. as
j → ∞. Thus, the sum of the holding times is a.s. infinite, so X is nonexplosive.
1.8 Classification of Birth-Death Processes
The classification of birth-death processes is relatively simple. Recall that a birth-death process
with birth rates (λ_k : k ≥ 0) and death rates (μ_k : k ≥ 1) is a pure-jump Markov process with state
space equal to the set of nonnegative integers Z_+ (with the appended state ∞, which is reached if
and only if infinitely many jumps occur in finite time), and generator matrix Q = (q_ij : i, j ≥ 0)
given by

    q_ij = λ_i          if j = i + 1
           μ_i          if j = i - 1
           -λ_i - μ_i   if j = i ≥ 1
           -λ_0         if j = i = 0
           0            else.          (1.31)
Suppose that the birth and death rates are all strictly positive. Then the process is irreducible. Let
b_in denote the probability that, given the process starts in state i, it reaches state n without first
reaching state 0. Clearly b_0n = 0 and b_nn = 1. Fix i with 1 ≤ i ≤ n - 1, and derive an expression
for b_in by first conditioning on the state reached upon the first jump of the process, starting from
state i. By the analysis of jump probabilities, the probability the first jump is up is λ_i/(λ_i + μ_i)
and the probability the first jump is down is μ_i/(λ_i + μ_i). Thus,

    b_in = (λ_i/(λ_i + μ_i)) b_{i+1,n} + (μ_i/(λ_i + μ_i)) b_{i-1,n},

which can be rewritten as μ_i(b_in - b_{i-1,n}) = λ_i(b_{i+1,n} - b_{i,n}). In particular, b_2n - b_1n = b_1n μ_1/λ_1
and b_3n - b_2n = b_1n μ_1 μ_2/(λ_1 λ_2), and so on, which upon summing yields the expression

    b_kn = b_1n Σ_{i=0}^{k-1} (μ_1 μ_2 · · · μ_i)/(λ_1 λ_2 · · · λ_i),

with the convention that the i = 0 term in the sum is one. Finally, the condition b_nn = 1 yields
the solution

    b_1n = 1 / ( Σ_{i=0}^{n-1} (μ_1 μ_2 · · · μ_i)/(λ_1 λ_2 · · · λ_i) ).   (1.32)
Note that b_1n is the probability, for initial state 1, of the event B_n that state n is reached without
an earlier visit to state 0. Since B_{n+1} ⊂ B_n for all n ≥ 1,

    P[∩_{n≥1} B_n | X(0) = 1] = lim_{n→∞} b_1n = 1/S_2   (1.33)

where S_2 = Σ_{i=0}^∞ (μ_1 μ_2 · · · μ_i)/(λ_1 λ_2 · · · λ_i), with the understanding that the i = 0 term in the sum
defining S_2 is one. Due to the definition of pure jump processes used, whenever X visits a state
in S the number of jumps up until that time is finite. Thus, on the event ∩_{n≥1} B_n, state zero is
never reached. Conversely, if state zero is never reached, then either the process remains bounded
(which has probability zero) or ∩_{n≥1} B_n is true. Thus, P[zero is never reached | X(0) = 1] = 1/S_2.
Consequently, X is recurrent if and only if S_2 = ∞.
Next, investigate the case S_2 = ∞. Then the process is recurrent, so that in particular explosion
has probability zero, so that the process is positive recurrent if and only if πQ = 0 for some
probability distribution π.
Now πQ = 0 if and only if flow balance holds for any state k:

    (λ_k + μ_k)π_k = λ_{k-1} π_{k-1} + μ_{k+1} π_{k+1}.   (1.34)

Equivalently, flow balance must hold for all sets of the form {0, . . . , n-1}, yielding that λ_{n-1} π_{n-1} =
μ_n π_n, or π_n = π_0 λ_0 · · · λ_{n-1}/(μ_1 · · · μ_n). Thus, a probability distribution π with πQ = 0 exists if
and only if S_1 < +∞, where

    S_1 = Σ_{n=0}^∞ (λ_0 · · · λ_{n-1})/(μ_1 · · · μ_n),   (1.35)

with the understanding that the n = 0 term in the sum defining S_1 is one. Still assuming S_2 = ∞,
such a probability distribution exists if and only if the Markov process is positive recurrent.
In summary, the following proposition is proved.

Proposition 1.8.1 If S_2 < +∞ then all the states in Z_+ are transient, and the process may
be explosive. If S_2 = +∞ and S_1 = +∞ then all states are null-recurrent. If S_2 = +∞ and
S_1 < +∞ then all states are positive-recurrent and the equilibrium probability distribution is given
by π_n = (λ_0 · · · λ_{n-1})/(S_1 μ_1 · · · μ_n).
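As a rough numerical companion to Proposition 1.8.1, the following sketch truncates the sums S_1 and S_2 and classifies accordingly. The truncation level and the divergence threshold are arbitrary heuristics (a large partial sum is taken as evidence of divergence, which of course is not a proof).

```python
import numpy as np

def classify_birth_death(lam, mu, N=200):
    """Heuristic classification of a birth-death process from truncated
    partial sums of S1 and S2; lam(k) and mu(k) give the rates."""
    # S2 term i: (mu_1...mu_i)/(lambda_1...lambda_i); the i = 0 term is 1.
    S2_terms = np.cumprod([1.0] + [mu(k) / lam(k) for k in range(1, N)])
    # S1 term n: (lambda_0...lambda_{n-1})/(mu_1...mu_n); the n = 0 term is 1.
    S1_terms = np.cumprod([1.0] + [lam(n - 1) / mu(n) for n in range(1, N)])
    S1, S2 = S1_terms.sum(), S2_terms.sum()
    if S2 < 1e6:
        return "transient (S2 appears finite)"
    if S1 < 1e6:
        pi = S1_terms / S1          # truncated equilibrium distribution
        return f"positive recurrent, e.g. pi_0 = {pi[0]:.4f}"
    return "null recurrent (S1 and S2 both appear to diverge)"

# M/M/1-type rates with rho = 0.8 < 1; expect positive recurrence, pi_0 = 0.2.
print(classify_birth_death(lambda k: 0.8, lambda k: 1.0))
```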
Discrete-time birth-death processes have a similar characterization. They are discrete-time,
time-homogeneous Markov processes with state space equal to the set of nonnegative integers. Let
nonnegative birth probabilities (λ_k : k ≥ 0) and death probabilities (μ_k : k ≥ 1) satisfy λ_0 ≤ 1, and
λ_k + μ_k ≤ 1 for k ≥ 1. The one-step transition probability matrix P = (p_ij : i, j ≥ 0) is given by

    p_ij = λ_i              if j = i + 1
           μ_i              if j = i - 1
           1 - λ_i - μ_i    if j = i ≥ 1
           1 - λ_0          if j = i = 0
           0                else.          (1.36)

Implicit in the specification of P is that births and deaths can't happen simultaneously. If the birth
and death probabilities are strictly positive, Proposition 1.8.1 holds as before, with the exception
that the discrete-time process cannot be explosive. (If in addition λ_i + μ_i = 1 for all i, then the
discrete-time process has period 2.)
1.9 Time Averages vs. Statistical Averages
Let X be a positive recurrent, irreducible, time-homogeneous Markov process with equilibrium
probability distribution π. To be definite, suppose X is a continuous-time process, with pure-jump
sample paths and generator matrix Q. The results of this section apply with minor modifications
to the discrete-time setting as well. Above it is shown that renewal theory implies that
lim_{t→∞} π_i(t) = π_i = 1/(-q_ii M_i), where M_i is the mean cycle time of state i. A related
consideration is convergence of the empirical distribution of the Markov process, where the empirical
distribution is the distribution observed over a (usually large) time interval.
For a fixed state i, the fraction of time the process spends in state i during [0, t] is

    (1/t) ∫_0^t I_{X(s)=i} ds

Let T_0 denote the time that the process is first in state i, and let T_k for k ≥ 1 denote the time
that the process jumps to state i for the kth time after T_0. The cycle times T_{k+1} - T_k, k ≥ 0 are
independent and identically distributed, with mean M_i. Therefore, by the law of large numbers,
with probability one,

    lim_{k→∞} T_k/k = lim_{k→∞} (1/k) Σ_{l=0}^{k-1} (T_{l+1} - T_l)   (1.37)
                    = M_i   (1.38)
Furthermore, during the kth cycle interval [T_k, T_{k+1}), the amount of time spent by the process
in state i is exponentially distributed with mean 1/(-q_ii), and the time spent in the state during
disjoint cycles is independent. Thus, with probability one,

    lim_{k→∞} (1/k) ∫_0^{T_k} I_{X(s)=i} ds = lim_{k→∞} (1/k) Σ_{l=0}^{k-1} ∫_{T_l}^{T_{l+1}} I_{X(s)=i} ds   (1.39)
                                            = E[ ∫_{T_0}^{T_1} I_{X(s)=i} ds ]   (1.40)
                                            = 1/(-q_ii)   (1.41)

Combining these two observations yields that

    lim_{t→∞} (1/t) ∫_0^t I_{X(s)=i} ds = 1/(-q_ii M_i) = π_i   (1.42)

with probability one. In short, the limit (1.42) is expected, because the process spends on average
1/(-q_ii) time units in state i per cycle from state i, and the cycle rate is 1/M_i. Of course, since state
i is arbitrary, if j is any other state then

    lim_{t→∞} (1/t) ∫_0^t I_{X(s)=j} ds = 1/(-q_jj M_j) = π_j   (1.43)
By considering how the time in state j is distributed among the cycles from state i, it follows that
the mean time spent in state j per cycle from state i is M_i π_j.
So for any nonnegative function φ on S,

    lim_{t→∞} (1/t) ∫_0^t φ(X(s)) ds = lim_{k→∞} (1/(k M_i)) ∫_0^{T_k} φ(X(s)) ds
                                     = (1/M_i) E[ ∫_{T_0}^{T_1} φ(X(s)) ds ]
                                     = (1/M_i) E[ Σ_{j∈S} φ(j) ∫_{T_0}^{T_1} I_{X(s)=j} ds ]
                                     = (1/M_i) Σ_{j∈S} φ(j) E[ ∫_{T_0}^{T_1} I_{X(s)=j} ds ]
                                     = Σ_{j∈S} φ(j) π_j   (1.44)

Finally, if φ is a function on S such that either Σ_{j∈S} φ_+(j)π_j < ∞ or Σ_{j∈S} φ_-(j)π_j < ∞, then
since (1.44) holds for both φ_+ and φ_-, it must hold for φ itself.
1.10 Queueing Systems, M/M/1 Queue and Little's Law
Figure 1.13: A single server queueing system

Some basic terminology of queueing theory will now be explained. A simple type of queueing system
is pictured in Figure 1.13. Notice that the system is comprised of a queue and a server. Ordinarily
whenever the system is not empty, there is a customer in the server, and any other customers in
the system are waiting in the queue. When the service of a customer is complete it departs from
the server and then another customer from the queue, if any, immediately enters the server. The
choice of which customer to be served next depends on the service discipline. Common service
disciplines are first-come first-served (FCFS) in which customers are served in the order of their
arrival, or last-come first-served (LCFS) in which the customer that arrived most recently is served
next. Some of the more complicated service disciplines involve priority classes, or the notion of
processor sharing in which all customers present in the system receive equal attention from the
server.
Often models of queueing systems involve a stochastic description. For example, given positive
parameters λ and μ, we may declare that the arrival process is a Poisson process with rate λ,
and that the service times of the customers are independent and exponentially distributed with
parameter μ. Many queueing systems are given labels of the form A/B/s, where A is chosen to
denote the type of arrival process, B is used to denote the type of departure process, and s is
the number of servers in the system. In particular, the system just described is called an M/M/1
queueing system, so-named because the arrival process is memoryless (i.e. a Poisson arrival process),
the service times are memoryless (i.e. are exponentially distributed), and there is a single server.
Other labels for queueing systems have a fourth descriptor and thus have the form A/B/s/b, where
b denotes the maximum number of customers that can be in the system. Thus, an M/M/1 system
is also an M/M/1/∞ system, because there is no finite bound on the number of customers in the
system.
A second way to specify an M/M/1 queueing system with parameters λ and μ is to let A(t) and
D(t) be independent Poisson processes with rates λ and μ respectively. Process A marks customer
arrival times and process D marks potential customer departure times. The number of customers in
the system, starting from some initial value N(0), evolves as follows. Each time there is a jump of
A, a customer arrives to the system. Each time there is a jump of D, there is a potential departure,
meaning that if there is a customer in the server at the time of the jump then the customer departs.
If a potential departure occurs when the system is empty then the potential departure has no effect
on the system. The number of customers in the system N can thus be expressed as

    N(t) = N(0) + A(t) - ∫_0^t I_{N(s-)≥1} dD(s)

It is easy to verify that the resulting process N is Markov, which leads to the third specification of
an M/M/1 queueing system.
Figure 1.14: A single server queueing system
A third way to specify an M/M/1 queueing system is that the number of customers in the system
N(t) is a birth-death process with λ_k = λ and μ_k = μ for all k, for some parameters λ and μ. Let
ρ = λ/μ. Using the classification criteria derived for birth-death processes, it is easy to see that
the system is recurrent if and only if ρ ≤ 1, and that it is positive recurrent if and only if ρ < 1.
Moreover, if ρ < 1 then the equilibrium distribution for the number of customers in the system is
given by π_k = (1 - ρ)ρ^k for k ≥ 0. This is the geometric distribution with zero as a possible value,
and with mean

    N̄ = Σ_{k=0}^∞ k π_k = (1 - ρ)ρ Σ_{k=1}^∞ k ρ^{k-1} = (1 - ρ)ρ d/dρ [1/(1 - ρ)] = ρ/(1 - ρ)

The probability the server is busy, which is also the mean number of customers in the server, is
1 - π_0 = ρ. The mean number of customers in the queue is thus given by ρ/(1 - ρ) - ρ = ρ²/(1 - ρ).
This third specification is the most commonly used way to define an M/M/1 queueing process.
Since the M/M/1 process N(t) is positive recurrent, the Markov ergodic convergence theorem
implies that the statistical averages just computed, such as N̄, are also equal to the limit of the
time-averaged number of customers in the system as the averaging interval tends to infinity.
An important performance measure for a queueing system is the mean time spent in the system
or the mean time spent in the queue. Little's law, described next, is a quite general and useful
relationship that aids in computing mean transit time.
Little's law can be applied in a great variety of circumstances involving flow through a system
with delay. In the context of queueing systems we speak of a flow of customers, but the same
principle applies to a flow of water through a pipe. Little's law is that λ̄T̄ = N̄ where λ̄ is the
mean flow rate, T̄ is the mean delay in the system, and N̄ is the mean content of the system. For
example, if water flows through a pipe with volume one cubic meter at the rate of two cubic meters
per minute, then the mean time (averaged over all drops of water) that water spends in the pipe is
T̄ = N̄/λ̄ = 1/2 minute. This is clear if water flows through the pipe without mixing, for then the
transit time of each drop of water is 1/2 minute. However, mixing within the pipe does not affect
the average transit time.
Little's law is actually a set of results, each with somewhat different mathematical assumptions.
The following version is quite general. Figure 1.14 pictures the cumulative number of arrivals (α(t))
and the cumulative number of departures (δ(t)) versus time, for a queueing system assumed to be
initially empty. Note that the number of customers in the system at any time s is given by the
difference N(s) = α(s) - δ(s), which is the vertical distance between the arrival and departure
graphs in the figure. On the other hand, assuming that customers are served in first-come first-served
order, the horizontal distance between the graphs gives the times in system for the customers.
Given a (usually large) t > 0, let γ_t denote the area of the region between the two graphs over the
interval [0, t]. This is the shaded region indicated in the figure. It is then natural to define the
time-averaged values of arrival rate and system content as

    λ̄_t = α(t)/t   and   N̄_t = (1/t) ∫_0^t N(s) ds = γ_t/t

Finally, the average, over the α(t) customers that arrive during the interval [0, t], of the time spent
in the system up to time t, is given by

    T̄_t = γ_t/α(t).

Once these definitions are accepted, we have the obvious simple relation:

    N̄_t = λ̄_t T̄_t   (1.45)
Consider next (1.45) in the limit as t → ∞. If any two of the three variables in (1.45) converge to
finite positive quantities, then so does the third variable, and in the limit Little's law is obtained.
For example, the number of customers in an M/M/1 queue is a positive recurrent Markov
process so that

    lim_{t→∞} N̄_t = N̄ = ρ/(1 - ρ)

where calculation of the statistical mean N̄ was previously discussed. Also, by the weak renewal
theorem or the law of large numbers applied to interarrival times, we have that the Poisson arrival
process for an M/M/1 queue satisfies lim_{t→∞} λ̄_t = λ with probability one. Thus,

    lim_{t→∞} T̄_t = N̄/λ = 1/(μ - λ).

In this sense, the average waiting time in an M/M/1 system is 1/(μ - λ). The average time in service
is 1/μ (this follows from the third description of an M/M/1 queue, or also from Little's law applied
to the server alone) so that the average waiting time in queue is given by W̄ = 1/(μ - λ) - 1/μ =
ρ/(μ - λ). This final result also follows from Little's law applied to the queue alone.
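The limiting relation is easy to observe in simulation. Below is a minimal discrete-event simulation of an M/M/1 FCFS queue (the parameter values are illustrative); the identity N̄_t = λ̄_t T̄_t holds exactly by the area argument of (1.45), while the match with the theoretical values reflects ergodicity and improves as the horizon grows.

```python
import random

lam, mu, horizon = 0.8, 1.0, 200000.0
random.seed(1)

t, busy_until, arrivals, total_sojourn = 0.0, 0.0, 0, 0.0
while True:
    t += random.expovariate(lam)      # next arrival epoch
    if t > horizon:
        break
    arrivals += 1
    start = max(t, busy_until)        # FCFS: wait until the server frees up
    busy_until = start + random.expovariate(mu)
    total_sojourn += busy_until - t   # waiting time plus service time

lambda_bar = arrivals / horizon
T_bar = total_sojourn / arrivals
N_bar = total_sojourn / horizon       # area between the curves, per unit time
print("N_bar:", N_bar, "vs rho/(1-rho) =", (lam/mu) / (1 - lam/mu))
print("T_bar:", T_bar, "vs 1/(mu-lam) =", 1.0 / (mu - lam))
print("lambda_bar * T_bar:", lambda_bar * T_bar)   # equals N_bar, as in (1.45)
```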
1.11 Mean Arrival Rate, Distributions Seen by Arrivals, and PASTA
The mean arrival rate for the M/M/1 system is λ, the parameter of the Poisson arrival process.
However for some queueing systems the arrival rate depends on the number of customers in the
system. In such cases the mean arrival rate is still typically meaningful, and it can be used in
Little's law.
Suppose the number of customers in a queuing system is modeled by a birth-death process
with arrival rates (λ_k) and departure rates (μ_k). Suppose in addition that the process is positive
recurrent. Intuitively, the process spends a fraction of time π_k in state k and while in state k the
arrival rate is λ_k. Therefore, the average arrival rate is

    λ̄ = Σ_{k=0}^∞ λ_k π_k

Similarly the average departure rate is

    μ̄ = Σ_{k=1}^∞ μ_k π_k

and of course λ̄ = μ̄ because both are equal to the throughput of the system.
Often the distribution of a system at particular system-related sampling times are more important
than the distribution in equilibrium. For example, the distribution seen by arriving customers
may be the most relevant distribution, as far as the customers are concerned. If the arrival rate
depends on the number of customers in the system then the distribution seen by arrivals need not
be the same as the equilibrium distribution. Intuitively, π_k λ_k is the long-term frequency of arrivals
which occur when there are k customers in the system, so that the fraction of customers that see
k customers in the system upon arrival is given by

    r_k = π_k λ_k / λ̄.

The following is an example of a system with variable arrival rate.
Example 1.5 (Single-server, discouraged arrivals)
Suppose λ_k = α/(k + 1) and μ_k = μ for all k, where α and μ are positive constants. Then

    S_2 = Σ_{k=0}^∞ (k + 1)! μ^k/α^k = ∞   and   S_1 = Σ_{k=0}^∞ α^k/(k! μ^k) = exp(α/μ) < ∞

so that the number of customers in the system is a positive recurrent Markov process, with no
additional restrictions on α and μ. Moreover, the equilibrium probability distribution is given by
π_k = (α/μ)^k exp(-α/μ)/k!, which is the Poisson distribution with mean N̄ = α/μ. The mean
arrival rate is

    λ̄ = Σ_{k=0}^∞ π_k α/(k + 1) = μ exp(-α/μ) Σ_{k=0}^∞ (α/μ)^{k+1}/(k + 1)!
       = μ exp(-α/μ)(exp(α/μ) - 1) = μ(1 - exp(-α/μ)).   (1.46)

This expression derived for λ̄ is clearly equal to μ̄, because the departure rate is μ with probability
1 - π_0 and zero otherwise. The distribution of the number of customers in the system seen by
arrivals, (r_k), is given by

    r_k = π_k α/(λ̄(k + 1)) = (α/μ)^{k+1} exp(-α/μ) / ((k + 1)!(1 - exp(-α/μ)))   for k ≥ 0

which in words can be described as the result of removing the probability mass at zero in the
Poisson distribution, shifting the distribution down by one, and then renormalizing. The mean
number of customers in the system seen by a typical arrival is therefore (α/μ)/(1 - exp(-α/μ)) - 1.
This mean is somewhat less than N̄ because, roughly speaking, the customer arrival rate is higher
when the system is more lightly loaded.
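The identities in this example are easy to sanity-check numerically. The following sketch uses arbitrary illustrative values of α and μ and truncates the infinite sums at K terms.

```python
import math

alpha, mu = 2.0, 1.5
nu = alpha / mu
K = 60   # truncation level for the sums

pi = [nu**k * math.exp(-nu) / math.factorial(k) for k in range(K)]
lam_bar = sum(p * alpha / (k + 1) for k, p in enumerate(pi))
print(lam_bar, mu * (1 - math.exp(-nu)))        # should agree, per (1.46)

r = [p * (alpha / (k + 1)) / lam_bar for k, p in enumerate(pi)]
mean_seen = sum(k * rk for k, rk in enumerate(r))
print(mean_seen, nu / (1 - math.exp(-nu)) - 1)  # mean seen by arrivals
print("equilibrium mean:", nu)                  # larger than mean_seen
```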
The equivalence of time-averages and statistical averages for computing the mean arrival rate
and the distribution seen by arrivals can be shown by application of ergodic properties of the
processes involved. The associated formal approach is described next, in slightly more generality.
Let X denote an irreducible, positive-recurrent pure-jump Markov process. If the process makes a
jump from state i to state j at time t, say that a transition of type (i, j) occurs. The sequence of
transitions of X forms a new Markov process, Y. The process Y is a discrete-time Markov process
with state space {(i, j) ∈ S × S : q_ij > 0}, and it can be described in terms of the jump process for
X, by Y(k) = (X^J(k - 1), X^J(k)) for k ≥ 0. (Let X^J(-1) be defined arbitrarily.)
The one-step transition probability matrix of the jump process X^J is given by p^J_ij = q_ij/(-q_ii),
and X^J is recurrent because X is recurrent. Its equilibrium distribution π^J (if it exists) is proportional
to -π_i q_ii (see problem set 1), and X^J is positive recurrent if and only if this distribution can
be normalized to make a probability distribution, i.e. if and only if R = -Σ_i π_i q_ii < ∞. Assume
for simplicity that X^J is positive recurrent. Then π^J_i = -π_i q_ii/R is the equilibrium probability
distribution of X^J. Furthermore, Y is positive recurrent and its equilibrium distribution is given
by

    π^Y_ij = π^J_i p^J_ij = (-π_i q_ii/R)(q_ij/(-q_ii)) = π_i q_ij/R
Since limiting time averages equal statistical averages for Y,

    lim_{n→∞} (number of first n transitions of X that are type (i, j))/n = π_i q_ij/R

with probability one. Therefore, if A ⊂ S × S, and if (i, j) ∈ A, then

    lim_{n→∞} (number of first n transitions of X that are type (i, j)) / (number of first n transitions of X with type in A)
        = π_i q_ij / ( Σ_{(i',j')∈A} π_{i'} q_{i'j'} )   (1.47)

We remark that (1.47) is still true if the assumption R < ∞ is replaced by the weaker assumption that
the denominator on the righthand side of (1.47) is finite. In case R = ∞ the statement is then an
instance of a ratio limit theorem for recurrent (not necessarily positive recurrent) Markov processes.
To apply this setup to the special case of a queueing system in which the number of customers
in the system is a Markov birth-death process, let the set A be the set of transitions of the form
(i, i + 1). Then deduce that the fraction of the first n arrivals that see i customers in the system
upon arrival converges to λ_i π_i / Σ_j λ_j π_j with probability one.
Note that if λ_i = λ for all i, then λ̄ = λ and π = r. The condition λ_i = λ also implies that the
arrival process is Poisson. This situation is called Poisson Arrivals See Time Averages (PASTA).
1.12 More Examples of Queueing Systems Modeled as Markov
Birth-Death Processes
For each of the four examples of this section it is assumed that new customers are offered to the
system according to a Poisson process with rate λ, so that the PASTA property holds. Also, when
there are k customers in the system then the service rate is μ_k for some given numbers μ_k. The
number of customers in the system is a Markov birth-death process with λ_k = λ for all k. Since
the number of transitions of the process up to any given time t is at most twice the number of
customers that arrived by time t, the Markov process is not explosive. Therefore the process is
positive recurrent if and only if S_1 is finite, where

    S_1 = Σ_{k=0}^∞ λ^k/(μ_1 μ_2 · · · μ_k)

Special cases of this example are presented in the next four examples.
Example 1.6 (M/M/m systems) An M/M/m queueing system consists of a single queue and
m servers. The arrival process is Poisson with some rate λ and the customer service times are
independent and exponentially distributed with mean 1/μ for some μ > 0. The total number of
customers in the system is a birth-death process with μ_k = min(k, m)μ. Let ρ = λ/(mμ). Since
μ_k = mμ for all k large enough it is easy to check that the process is positive recurrent if and only
if ρ < 1. Assume now that ρ < 1. Then the equilibrium distribution is given by

    π_k = (λ/μ)^k/(S_1 k!)   for 0 ≤ k ≤ m
    π_{m+j} = π_m ρ^j   for j ≥ 1

where S_1 is chosen to make the probabilities sum to one (use the fact 1 + ρ + ρ² + · · · = 1/(1 - ρ)):

    S_1 = [ Σ_{k=0}^{m-1} (λ/μ)^k/k! ] + (λ/μ)^m/(m!(1 - ρ)).

An arriving customer must join the queue (rather than go directly to a server) if and only if the
system has m or more customers in it. By the PASTA property, this is the same as the equilibrium
probability of having m or more customers in the system:

    P_Q = Σ_{j=0}^∞ π_{m+j} = π_m/(1 - ρ)

This formula is called the Erlang C formula for probability of queueing.
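For reference, a direct transcription of the Erlang C computation into Python (the parameter values in the final line are illustrative only):

```python
from math import factorial

def erlang_c(lam, mu, m):
    """Probability an arriving customer must queue in an M/M/m system,
    computed directly from the equilibrium distribution above."""
    a, rho = lam / mu, lam / (m * mu)
    assert rho < 1, "stable only for rho < 1"
    S1 = sum(a**k / factorial(k) for k in range(m)) \
         + a**m / (factorial(m) * (1 - rho))
    pi_m = a**m / (S1 * factorial(m))
    return pi_m / (1 - rho)

print(erlang_c(lam=8.0, mu=1.0, m=10))
```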
Example 1.7 (M/M/m/m systems) An M/M/m/m queueing system consists of m servers.
The arrival process is Poisson with some rate λ and the customer service times are independent
and exponentially distributed with mean 1/μ for some μ > 0. Since there is no queue, if a customer
arrives when there are already m customers in the system, then the arrival is blocked and cleared
from the system. The total number of customers in the system is a birth-death process, but with
the state space reduced to {0, 1, . . . , m}, and with μ_k = kμ for 1 ≤ k ≤ m. The unique equilibrium
distribution is given by

    π_k = (λ/μ)^k/(S_1 k!)   for 0 ≤ k ≤ m

where S_1 is chosen to make the probabilities sum to one.
An arriving customer is blocked and cleared from the system if and only if the system already
has m customers in it. By the PASTA property, this is the same as the equilibrium probability of
having m customers in the system:

    P_B = π_m = ((λ/μ)^m/m!) / ( Σ_{j=0}^m (λ/μ)^j/j! )

This formula is called the Erlang B formula for probability of blocking.
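The Erlang B formula can be evaluated directly, but for large m a numerically stable route is the standard recursion B(a, 0) = 1, B(a, k) = aB(a, k-1)/(k + aB(a, k-1)), where a = λ/μ is the offered load; the recursion is algebraically equivalent to the formula above. A sketch:

```python
def erlang_b(a, m):
    """Blocking probability for an M/M/m/m system with offered load a,
    via the standard stable recursion on the number of servers."""
    B = 1.0
    for k in range(1, m + 1):
        B = a * B / (k + a * B)
    return B

print(erlang_b(a=8.0, m=10))   # illustrative numbers
```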
Example 1.8 (A system with a discouraged server) The number of customers in this system
is a birth-death process with constant birth rate λ and death rates μ_k = 1/k. It is easy to
check that all states are transient for any positive value of λ (to verify this it suffices to check that
S_2 < ∞). It is not difficult to show that N(t) converges to +∞ with probability one as t → ∞.
Example 1.9 (A barely stable system) The number of customers in this system is a birth-death
process with constant birth rate λ and death rates μ_k = λ(1 + k²)/(1 + (k - 1)²) for all k ≥ 1. Since
the departure rates are barely larger than the arrival rates, this system is near the borderline between
recurrence and transience. However, we see that

    S_1 = Σ_{k=0}^∞ 1/(1 + k²) < ∞

so that N(t) is positive recurrent with equilibrium distribution π_k = 1/(S_1(1 + k²)). Note that the
mean number of customers in the system is

    N̄ = Σ_{k=0}^∞ k/(S_1(1 + k²)) = ∞

By Little's law the mean time customers spend in the system is also infinity. It is debatable whether
this system should be thought of as stable even though all states are positive recurrent and all
waiting times are finite with probability one.
1.13 Method of Phases and Quasi Birth-Death Processes
The method of phases in a broad sense refers to approximating general probability distributions by
sums and mixtures of exponential distributions. Such distributions can be obtained as the hitting
time of a target state of a finite state Markov process. The state of the Markov process at a given
time represents the phase of the distribution. The method permits Markov processes to model
systems in which the original distributions are not exponential.
To illustrate the method here we consider the class of random processes called quasi birth and
death (QBD) processes. These processes generalize M/M/1 queueing models in that the arrival and
departure rates depend on the state of a phase process. (For more background, see M.F. Neuts,
Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins
University Press, Baltimore, MD, 1981, and B. Hajek, "Birth-and-Death Processes on the Integers
with Phases and General Boundaries," Journal of Applied Probability, pp. 488-499, 1982.)
Let k be a positive integer, denoting the number of possible phases. A QBD process is a time-
homogeneous Markov process. The state space of the process is the set S equal to {(l, η) : l ≥ 0,
1 ≤ η ≤ k}. We call l the level and η the phase of the state (l, η). Here we shall restrict our
attention to continuous time processes. Let A_0, A_1, A_2, and A_00 denote k × k matrices so that the
matrix Q† defined by

    Q† = [ A_00  A_2                      ]
         [ A_0   A_1   A_2                ]
         [       A_0   A_1   A_2          ]
         [             A_0   A_1   A_2    ]
         [                   ...    ...   ]        (1.48)

is a generator matrix for the state space S. Equivalently, the matrices A_0 and A_2 have nonnegative
entries, the matrices A_1 and A_00 have nonnegative off-diagonal entries and nonpositive diagonal
entries, and the matrix Q defined by Q = A_0 + A_1 + A_2 and the matrix A_00 + A_2 both have row sums
equal to zero.
Suppose that Q† and Q are both irreducible. Define ω to be the equilibrium probability vector
for Q, so that ωQ = 0, and set d = ωA_2 e - ωA_0 e, where e is the k-vector of all ones. Then all states
are positive recurrent for Q† if and only if d < 0. If d < 0 then the equilibrium distribution for Q†
is given by (x_0, x_1, . . .) where

    x_0 = ω_0/c,   x_k = x_0 R^k,   c = ω_0(I - R)^{-1} e,   R = A_2 B   (1.49)

where ω_0 is the equilibrium probability distribution for a k-state Markov process with generator
matrix A_00 + A_2 B A_0, and B is the minimal positive solution to

    B = -(A_1 + A_2 B A_0)^{-1}   (1.50)

An effective numerical procedure for finding B is to iterate equation (1.50). In other words, set
B^(0) to be the matrix of all zeros, and sequentially compute B^(n) by B^(n) = -(A_1 + A_2 B^(n-1) A_0)^{-1}.
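The following Python sketch carries out this iteration for the matrices taken from the solution of Exercise 2 below, then assembles R, ω_0, and x_0 as in (1.49); the printed values should match the solution given there.

```python
import numpy as np

A0  = np.eye(2)
A1  = np.array([[-2.0, 1.0], [2.0, -5.0]])
A2  = np.array([[0.0, 0.0], [0.0, 2.0]])
A00 = np.array([[-1.0, 1.0], [2.0, -4.0]])

B = np.zeros((2, 2))
for _ in range(200):                       # fixed-point iteration of (1.50)
    B = -np.linalg.inv(A1 + A2 @ B @ A0)
R = A2 @ B

# Boundary distribution omega_0 from the generator A00 + A2 B A0.
G = A00 + A2 @ B @ A0
M = np.vstack([G.T, np.ones(2)])
omega0, *_ = np.linalg.lstsq(M, np.array([0.0, 0.0, 1.0]), rcond=None)

c = omega0 @ np.linalg.inv(np.eye(2) - R) @ np.ones(2)
x0 = omega0 / c
print(B)        # approx [[0.809, 0.191], [0.618, 0.382]]
print(x0)       # approx [0.255, 0.079]
print(x0 @ R)   # x_1, and in general x_k = x_0 R^k
```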
Exercises
1. Let (L_t, η_t) denote the QBD process, where L_t is the level and η_t the phase. Under what
conditions on A_0, A_1, A_2 and A_00 is the process (η_t) a Markov process? Prove your answer.
2. Consider the following variation of an M/M/1 queue. There is a two state Markov process with
states 1 and 2 and transition rates q_12 = 1 and q_21 = 2. The two state process is a phase process for
the queue. When the phase is 1 there are no arrivals to the queue, and when the phase is 2 arrivals
occur at an average rate of 2 customers per second, according to a Poisson process. Service times of
all customers are independent and exponentially distributed with mean one. Suppose a QBD model
is used. What values of the matrices A_0, A_1, A_2 and A_00 should be used? Find (numerical solution
OK) B, R, and the equilibrium distribution (x_l)_{l≥0}. Finally, describe the equilibrium probability
distribution of the number of customers in the system. (Solution:
    A_0 = [ 1  0 ]    A_1 = [ -2   1 ]    A_2 = [ 0  0 ]
          [ 0  1 ]          [  2  -5 ]          [ 0  2 ]          (1.51)

    A_00 = [ -1   1 ]    B = [ 0.8090  0.1910 ]    R = [ 0      0     ]
           [  2  -4 ]        [ 0.6179  0.3819 ]        [ 1.236  0.764 ]   (1.52)

x_k = (0.764)^k (0.1273, 0.0787) for k ≥ 1 and x_0 = (0.255, 0.0787). In equilibrium, P[L = l] =
(0.206)(0.764)^l for l ≥ 1 and P[L = 0] = 1/3. Thus, the distribution for L is a mixture of a
geometric distribution and a distribution concentrated at zero.)
1.14 Markov Fluid Model of a Queue
A limiting form of quasi birth and death processes is a queue with a Markov-modulated rate of
arrival, which is briefly described in this section. See D. Mitra, "Stochastic Theory of a Fluid
Model of Producers and Consumers Coupled by a Buffer," Advances in Applied Probability, Vol. 20,
pp. 646-676, 1988, for more information.
Let Q be the generator matrix of an irreducible Markov process with state space {1, . . . , m}.
Suppose there is a buffer and that when the process is in state i, the net rate (arrival rate minus
departure rate) of flow of fluid into the buffer is d_i. The only exception to this rule is that if the
buffer is empty and the Markov process is in a state i such that d_i ≤ 0 then the buffer simply
remains empty. Let D denote the diagonal matrix with d_1, . . . , d_m along the diagonal. See Figure
1.15.

Figure 1.15: Fluid queue with net input rate determined by a Markov process
The average drift is defined to be Σ_{i=1}^m w_i d_i, or wDe, where w is the equilibrium probability
distribution for the finite state Markov process and e is the column vector of all 1's.
Let L_t denote the level of fluid in the buffer at time t and let θ_t denote the state of the finite
Markov process at time t. Then (L_t, θ_t) is a continuous state Markov process, and is a limiting
form of a quasi birth-death process. In the remainder of this section we describe how to find the
equilibrium distribution of this process.
Let F(x, i, t) = P[θ_t = i, L_t ≤ x]. Kolmogorov's forward equations are derived by beginning
with the observation

    F(x, i, t + h) = h Σ_{j=1}^m F(x, j, t) q_{j,i} + o(h) + F(x - h d_i, i, t).   (1.53)

Subtracting F(x, i, t) from each side of this equation and letting h → 0 yields

    ∂F(x, i, t)/∂t = Σ_{j=1}^m F(x, j, t) q_{j,i} - (∂F(x, i, t)/∂x) d_i   (1.54)

or in matrix notation, letting F(t) denote the row vector F(t) = (F(x, 1, t), . . . , F(x, m, t)),

    ∂F/∂t = FQ - (∂F/∂x) D   (1.55)

Conditions on the equilibrium distribution F(x, i) are found by setting ∂F/∂t = 0, yielding the equation
FQ = (∂F/∂x) D. By definition, we know F(∞, i) = w_i. Motivated by this and the theory of linear
systems, we seek a solution of the form

    F(x) = w + Σ_{i=1}^k a_i φ_i e^{x z_i}   (1.56)
where (φ_i, z_i) are eigenvector-eigenvalue pairs, meaning that φ_i Q = z_i φ_i D.
To avoid substantial technical complications, we make the assumption that Q is reversible,
meaning that w_i q_{i,j} = w_j q_{j,i} for 1 ≤ i, j ≤ m. Then all the eigenvalues are real valued. Let
k = |{i : d_i > 0}|. It can be shown that there are k strictly negative eigenvalues (counting
multiplicity) and those are the ones to be included in the sum in (1.56). Finally, the k coefficients
a_1, . . . , a_k can be found by satisfying the k boundary conditions:

    F(0, i) = 0 whenever d_i > 0   (1.57)

For example, suppose

    Q = [ -1   1 ]    and    D = [ -1  0 ]
        [  2  -2 ]               [  0  1 ]        (1.58)

We compute that w = (2/3, 1/3) and that (F(x, 1), F(x, 2)) = ((2 - exp(-x))/3, (1 - exp(-x))/3).
In particular, in equilibrium, P[L_t > x] = (2/3) exp(-x).
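The same answer can be obtained numerically. The sketch below solves the left eigenproblem φQ = zφD for the example (1.58) (transposing, and using that D is invertible here, it becomes an ordinary eigenproblem), keeps the strictly negative eigenvalue, and imposes the boundary condition (1.57).

```python
import numpy as np

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
D = np.diag([-1.0, 1.0])

# phi Q = z phi D  <=>  inv(D.T) @ Q.T @ phi.T = z phi.T
zs, vs = np.linalg.eig(np.linalg.inv(D.T) @ Q.T)
j = int(np.argmin(zs))                 # the strictly negative eigenvalue, z = -1
z, phi = zs[j].real, vs[:, j].real
phi = phi / phi[1]                     # scale so that phi = (1, 1)

w = np.array([2/3, 1/3])               # equilibrium distribution of Q
a = -w[1] / phi[1]                     # F(0, 2) = 0, since d_2 > 0, per (1.57)
for x in [0.0, 1.0, 2.0]:
    F = w + a * phi * np.exp(z * x)
    print(x, F, (2 - np.exp(-x)) / 3, (1 - np.exp(-x)) / 3)
```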
1.15 Problems
1.1. Poisson merger
Summing counting processes corresponds to merging point processes. Show that the sum of K
independent Poisson processes, having rates λ_1, . . . , λ_K, respectively, is a Poisson process with rate
λ_1 + · · · + λ_K. (Hint: First formulate and prove a similar result for sums of random variables, and
then think about what else is needed to get the result for Poisson processes. You can use any one
of the equivalent definitions given by Proposition 1.5.1 in the notes. Don't forget to check required
independence properties.)

1.2. Poisson splitting
Consider a stream of customers modeled by a Poisson process, and suppose each customer is one
of K types. Let (p_1, . . . , p_K) be a probability vector, and suppose that for each k, the kth customer
is type i with probability p_i. The types of the customers are mutually independent and also independent
of the arrival times of the customers. Show that the stream of customers of a given type i
is again a Poisson stream, and that its rate is p_i times the rate of the original stream. (Same hint
as in the previous problem applies.) Show furthermore that the K substreams are mutually independent.
1.3. Poisson method for coupon collector's problem
(a) Suppose a stream of coupons arrives according to a Poisson process (A(t) : t ≥ 0) with rate
λ = 1, and suppose there are k types of coupons. (In network applications, the coupons could be
pieces of a file to be distributed by some sort of gossip algorithm.) The type of each coupon in the
stream is randomly drawn from the k types, each possibility having probability 1/k, and the types of
different coupons are mutually independent. Let p(k, t) be the probability that at least one coupon
of each type arrives by time t. (The letter p is used here because the number of coupons arriving
by time t has the Poisson distribution). Express p(k, t) in terms of k and t.
(b) Find lim_{k→∞} p(k, k ln k + kc) for an arbitrary constant c. That is, find the limit of the probability
that the collection is complete at time t = k ln k + kc. (Hint: If a_k → a as k → ∞, then
(1 + a_k/k)^k → e^a.)
(c) The rest of this problem shows that the limit found in part (b) also holds if the total number of
coupons is deterministic, rather than Poisson distributed. One idea is that if t is large, then A(t)
is not too far from its mean with high probability. Show, specifically, that

    lim_{k→∞} P[A(k ln k + kc) ≥ k ln k + kc'] = { 0 if c < c' ; 1 if c > c' }

(d) Let d(k, n) denote the probability that the collection is complete after n coupon arrivals. (The
letter d is used here because the number of coupons, n, is deterministic.) Show that for any k, t,
and n fixed, d(k, n)P[A(t) ≥ n] ≤ p(k, t) ≤ P[A(t) ≥ n] + P[A(t) < n]d(k, n).
(e) Combine parts (c) and (d) to identify lim_{k→∞} d(k, k ln k + kc).
1.4. The sum of a random number of random variables
Let (X_i : i ≥ 0) be a sequence of independent and identically distributed random variables. Let
S = X_1 + · · · + X_N, where N is a nonnegative random variable that is independent of the sequence.
(a) Express E[S] and Var(S) in terms of the mean and variance of X_1 and N. (Hint: Use the
fact E[S^k] = E[E[S^k | N]].) (b) Express the characteristic function Φ_S in terms of the characteristic
function Φ_X of X_1 and the z-transform B(z) of N.
1.5. Mean hitting time for a simple Markov process
Let (X(n) : n ≥ 0) denote a discrete-time, time-homogeneous Markov chain with state space
{0, 1, 2, 3} and one-step transition probability matrix

    P = [  0    1    0    0  ]
        [ 1-a   0    a    0  ]
        [  0   0.5   0   0.5 ]
        [  0    0    1    0  ]

for some constant a with 0 ≤ a ≤ 1. (a) Sketch the transition probability diagram for X and
give the equilibrium probability vector. If the equilibrium vector is not unique, describe all the
equilibrium probability vectors.
(b) Compute E[min{n ≥ 1 : X(n) = 3} | X(0) = 0].
1.6. A two station pipeline in continuous time
Consider a pipeline consisting of two single-buffer stages in series. Model the system as a continuous-
time Markov process. Suppose new packets are offered to the first stage according to a rate λ Poisson
process. A new packet is accepted at stage one if the buffer in stage one is empty at the time of
arrival. Otherwise the new packet is lost. If at a fixed time t there is a packet in stage one and
no packet in stage two, then the packet is transferred during [t, t + h) to stage two with probability
hμ_1 + o(h). Similarly, if at time t the second stage has a packet, then the packet leaves the system
during [t, t + h) with probability hμ_2 + o(h), independently of the state of stage one. Finally, the
probability of two or more arrival, transfer, or departure events during [t, t + h) is o(h). (a) What
is an appropriate state-space for this model? (b) Sketch a transition rate diagram. (c) Write down
the Q matrix. (d) Derive the throughput, assuming that λ = μ_1 = μ_2 = 1. (e) Still assuming
λ = μ_1 = μ_2 = 1, suppose the system starts with one packet in each stage. What is the expected
time until both buffers are empty?
1.7. Simple population growth models
A certain population of living cells begins with a single cell. Each cell present at a time t splits
into two cells during the interval [t, t + h] with probability λh + o(h), independently of the other
cells, for some constant λ > 0. The number of cells at time t, N(t), can thus be modeled as a
continuous-time Markov chain. (a) Sketch an appropriate transition rate diagram and describe the
transition rate matrix. (b) For fixed t ≥ 0, let P(z, t) denote the z-transform of the probability
vector π(t). Starting with the Kolmogorov forward equations for (π(t)), derive a partial differential
equation for the z-transform of π(t). Show that the solution is

    P(z, t) = z e^{-λt} / (1 - z + z e^{-λt})

(c) Find E[N(t)]. (d) Solve for π(t). (e) Another population evolves deterministically. The first cell
splits at time 1/λ, and thereafter each cell splits into two exactly 1/λ time units after its creation.
Find the size of this population as a function of time, and compare to your answer in part (c).
1.8. Equilibrium distribution of the jump chain
Suppose that π is the equilibrium distribution for a time-homogeneous Markov process with transition
rate matrix Q. Suppose that B^{-1} = -Σ_i q_ii π_i, where the sum is over all i in the state space,
is finite. Show that the equilibrium distribution for the jump chain (X^J(k) : k ≥ 0) is given by
π^J_i = -B q_ii π_i. (So π and π^J are identical if and only if q_ii is the same for all i.)
1.9. A simple Poisson process calculation
Let (N(t) : t ≥ 0) be a Poisson random process with rate λ > 0. Compute P[N(s) = i | N(t) = k]
where 0 < s < t and i and k are nonnegative integers. (Caution: note the order of s and t carefully.)

1.10. An alternating renewal process
Consider a two-color traffic signal on a street corner which alternately stays red for one minute and
then stays green for one minute. Suppose you are riding your bicycle towards the signal and are
a block away when you first spot it, so that it will take you one-half minute to reach the signal.
(a) What is the probability that the signal will change colors (at least once) before you reach the
corner? Explain. (b) Compute the conditional expectation of how long you must wait at the corner
(possibly not at all) given that the light is green when you first spot it. Repeat assuming the light
is red when you first spot it. (Which is greater? What is the average of the two answers?) (c)
Repeat part (a) if instead the signal randomly changes colors, staying the same color for a time
period uniformly distributed between 0 and 2 minutes. The times between switches are assumed
to be independent. (d) How fast are you riding? (Hint: What units of speed can you use so that
you don't need more information?)
1.10.5. A simple question of periods
Consider a discrete-time Markov process with the nonzero one-step transition probabilities
indicated by the following graph.

[Transition diagram on states 1 through 8 omitted.]

(a) What is the period of state 4?
(b) What is the period of state 6?
1.11. A mean hitting time problem
Let (X(t) : t ≥ 0) be a time-homogeneous, pure-jump Markov process with state space {0, 1, 2}
and Q matrix

    Q = [ -4   2   2 ]
        [  1  -2   1 ]
        [  2   0  -2 ]

(a) Write down the state transition diagram and compute the equilibrium distribution.
(b) Compute a_i = E[min{t ≥ 0 : X(t) = 1} | X(0) = i] for i = 0, 1, 2. If possible, use an approach
that can be applied to larger state spaces.
(c) Derive a variation of the Kolmogorov forward differential equations for the quantities: α_i(t) =
P[X(s) ≠ 2 for 0 ≤ s ≤ t and X(t) = i | X(0) = 0] for 0 ≤ i ≤ 2. (You need not solve the
equations.)
(d) The forward Kolmogorov equations describe the evolution of an initial probability distribution
going forward in time, given an initial condition. In other problems, a boundary condition is given
at a final time, and a differential equation working backwards in time from a final condition is
called for (called Kolmogorov backward equations). Derive a backward differential equation for:
β_j(t) = P[X(s) ≠ 2 for t ≤ s ≤ t_f | X(t) = j], for 0 ≤ j ≤ 2 and t ≤ t_f for some fixed time t_f. (Hint:
Express β_i(t - h) in terms of the β_j(t)'s for t ≤ t_f, and let h → 0. You need not solve the equations.)
1.12. A birth-death process with periodic rates
Consider a single server queueing system in which the number in the system is modeled as a
continuous time birth-death process with the transition rate diagram shown, where λ_a, λ_b, μ_a, and
μ_b are strictly positive constants.

[Transition rate diagram omitted: along states 0, 1, 2, 3, 4, . . ., the birth rates alternate
between λ_a and λ_b, and the death rates alternate between μ_a and μ_b.]

(a) Under what additional assumptions on these four parameters is the process positive recurrent?
(b) Assuming the system is positive recurrent, under what conditions on λ_a, λ_b, μ_a, and μ_b is it true
that the distribution of the number in the system at the time of a typical arrival is the same as the
equilibrium distribution of the number in the system?
1.13. Markov model for a link with resets
Suppose that a regulated communication link resets at a sequence of times forming a Poisson
process with rate μ. Packets are offered to the link according to a Poisson process with rate λ.
Suppose the link shuts down after three packets pass in the absence of resets. Once the link is
shut down, additional offered packets are dropped, until the link is reset again, at which time the
process begins anew.
(a) Sketch a transition rate diagram for a finite state Markov process describing the system state.
(b) Express the dropping probability (same as the long term fraction of packets dropped) in terms
of λ and μ.
1.14. An unusual birth-death process
Consider the birth-death process X with arrival rates λ_k = (p/(1 - p))^k/a_k and death rates
μ_k = (p/(1 - p))^{k-1}/a_k, where 0.5 < p < 1, and a = (a_0, a_1, . . .) is a probability distribution
on the nonnegative integers with a_k > 0 for all k. (a) Classify the states for the process X as transient,
null-recurrent or positive recurrent. (b) Check that aQ = 0. Is a an equilibrium distribution
for X? Explain. (c) Find the one-step transition probabilities for the jump-chain, X^J. (d) Classify
the states for the process X^J as transient, null-recurrent or positive recurrent.
1.15. A queue with decreasing service rate
Consider a queueing system in which the arrival process is a Poisson process with rate λ. Suppose
the instantaneous completion rate is μ when there are K or fewer customers in the system, and μ/2
when there are K + 1 or more customers in the system. The number in the system is modeled as a
birth-death Markov process. (a) Sketch the transition rate diagram. (b) Under what condition on
λ and μ are all states positive recurrent? Under this condition, give the equilibrium distribution.
(c) Suppose that λ = (2/3)μ. Describe in words the typical behavior of the system, given that it is
initially empty.
1.16. Limit of a discrete time queueing system
Model a queue by a discrete-time Markov chain by recording the queue state after intervals of q
seconds each. Assume the queue evolves during one of the atomic intervals as follows: There is an
arrival during the interval with probability αq, and no arrival otherwise. If there is a customer in
the queue at the beginning of the interval then a single departure will occur during the interval
with probability βq. Otherwise no departure occurs. Suppose that it is impossible to have an
arrival and a departure in a single atomic interval. (a) Find a_k = P[an interarrival time is kq] and
b_k = P[a service time is kq]. (b) Find the equilibrium distribution, p = (p_k : k ≥ 0), of the number
of customers in the system at the end of an atomic interval. What happens as q → 0?
1.17. An M/M/1 queue with impatient customers
Consider an M/M/1 queue with parameters λ and μ with the following modification. Each customer
in the queue will defect (i.e. depart without service) with probability αh + o(h) in an interval
of length h, independently of the other customers in the queue. Once a customer makes it to
the server it no longer has a chance to defect and simply waits until its service is completed and
then departs from the system. Let N(t) denote the number of customers in the system (queue plus
server) at time t. (a) Give the transition rate diagram and generator matrix Q for the Markov chain
N = (N(t) : t ≥ 0). (b) Under what conditions are all states positive recurrent? Under this condition,
find the equilibrium distribution for N. (You need not explicitly sum the series.) (c) Suppose
that α = μ. Find an explicit expression for p_D, the probability that a typical arriving customer
defects instead of being served. Does your answer make sense as λ/μ converges to zero or to infinity?
1.18. Statistical multiplexing
Consider the following scenario regarding a one-way link in a store-and-forward packet communication
network. Suppose that the link supports eight connections, each generating traffic at 5
kilobits per second (kbps). The data for each connection is assumed to be in packets exponentially
distributed in length with mean packet size 1 kilobit. The packet lengths are assumed mutually
independent and the packets for each stream arrive according to a Poisson process. Packets are
queued at the beginning of the link if necessary, and queue space is unlimited. Compute the mean
delay (queueing plus transmission time; neglect propagation delay) for each of the following three
scenarios. Compare your answers. (a) (Full multiplexing) The link transmit speed is 50 kbps. (b)
The link is replaced by two 25 kbps links, and each of the two links carries four sessions. (Of course
the delay would be larger if the sessions were not evenly divided.) (c) (Multiplexing over two links)
The link is replaced by two 25 kbps links. Each packet is transmitted on one link or the other, and
neither link is idle whenever a packet from any session is waiting.
1.19. A queue with blocking
(M/M/1/5 system) Consider an M/M/1 queue with service rate μ, arrival rate λ, and the modification
that at any time, at most five customers can be in the system (including the one in service,
if any). If a customer arrives and the system is full (i.e. already has five customers in it) then
the customer is dropped, and is said to be blocked. Let N(t) denote the number of customers in
the system at time t. Then (N(t) : t ≥ 0) is a Markov chain. (a) Indicate the transition rate
diagram of the chain and find the equilibrium probability distribution. (b) What is the probability,
p_B, that a typical customer is blocked? (c) What is the mean waiting time in queue, W,
of a typical customer that is not blocked? (d) Give a simple method to numerically calculate,
or give a simple expression for, the mean length of a busy period of the system. (A busy period
begins with the arrival of a customer to an empty system and ends when the system is again empty.)
1.20. Multiplexing circuit and packet data streams
Consider a communication link that is shared between circuit switched traffic (consisting of calls)
and datagram packet traffic. Suppose new calls arrive at rate λ_c, that new packets arrive at rate λ_p,
and that the call durations are exponential with parameter μ_c. All calls are served simultaneously.
Suppose that the link capacity is exactly sufficient to carry C calls, and that calls have priority
over packets. Thus, a new call is blocked and lost if and only if it arrives to find C calls already in
progress. Packets are not blocked, but instead they are queued until transmission. Finally, suppose
that the instantaneous service rate of packets is μ(C - n_c) when there are n_c calls in progress.
(a) Define a continuous-time, countable state Markov chain to model the system, and indicate the
transition rates. (b) Is the method of Markov processes with phases relevant to this problem? If
so, describe in detail how the method applies. (c) What is the necessary and sufficient condition
for the system to be stable (be as explicit as possible)?
1.20.1. Three queues and an autonomously traveling server
Consider three stations that are served by a single rotating server, as pictured.

[Diagram omitted: stations 1, 2, and 3 arranged in a ring, with the server rotating from
station to station at the rates indicated by the dashed lines.]

Customers arrive to station i according to a Poisson process of rate λ_i for 1 ≤ i ≤ 3, and the total
service requirement of each customer is exponentially distributed, with mean one. The rotation
of the server is modelled by a three state Markov process with the three transition rates indicated
by the dashed lines. When at a station, the server works at unit rate, or is idle if the
station is empty. If the service to a customer is interrupted because the server moves to the next
station, the service is resumed when the server returns.
(a) Under what condition is the system stable? Briefly justify your answer.
(b) Identify a method for computing the mean customer waiting time at station one.
1.21. On two distributions seen by customers
Consider a queueing system in which the number in the system only changes in steps of plus one
or minus one. Let D(k, t) denote the number of customers that depart in the interval [0, t] that
leave behind exactly k customers, and let R(k, t) denote the number of customers that arrive in the
interval [0, t] to find exactly k customers already in the system. (a) Show that |D(k, t) - R(k, t)| ≤ 1
for all k and t. (b) Let α_t (respectively δ_t) denote the number of arrivals (departures) up to time
t. Suppose that α_t → ∞ and α_t/δ_t → 1 as t → ∞. Show that if the following two limits exist for
a given value k, then they are equal: r_k = lim_{t→∞} R(k, t)/α_t and d_k = lim_{t→∞} D(k, t)/δ_t.
Chapter 2
Foster-Lyapunov stability criterion
and moment bounds

Communication network models can become quite complex, especially when dynamic scheduling,
congestion, and physical layer effects such as fading wireless channel models are included. It is thus
useful to have methods to give approximations or bounds on key performance parameters. The
criteria for stability and related moment bounds discussed in this chapter are useful for providing
such bounds.
Aleksandr Mikhailovich Lyapunov (1857-1918) contributed significantly to the theory of stability
of dynamical systems. Although a dynamical system may evolve on a complicated, multiple
dimensional state space, a recurring theme of dynamical systems theory is that stability questions
can often be settled by studying the potential of a system for some nonnegative potential function
V. Potential functions used for stability analysis are widely called Lyapunov functions. Similar
stability conditions have been developed by many authors for stochastic systems. Below we present
the well known criteria due to Foster [14] for recurrence and positive recurrence. In addition we
present associated bounds on the moments, which are expectations of some functions on the state
space, computed with respect to the equilibrium probability distribution. (Footnote: The proof of
Foster's criteria given here is similar to Foster's original proof, but is geared to establishing the
moment bounds, and the continuous time versions, at the same time. The moment bounds and proofs
given here are adaptations of those in Meyn and Tweedie [30] to discrete state spaces, and to
continuous time. They involve some basic notions from martingale theory, which can be found, for
example, in [18]. A special case of the moment bounds was given by Tweedie [42], and a version
of the moment bound method was used by Kingman [26] in a queueing context. As noted in [30],
the moment bound method is closely related to Dynkin's formula. The works [39, 40, 28, 41], and
many others, have demonstrated the wide applicability of the stability methods in various queueing
network contexts, using quadratic Lyapunov functions.)
Section 2.1 discusses the discrete time tools, and presents examples involving load balancing
routing, and input queued crossbar switches. Section 2.2 presents the continuous time tools, and
an example. Problems are given in Section 2.3.
2.1 Stability criteria for discrete time processes
Consider an irreducible discrete-time Markov process X on a countable state space S, with one-step
transition probability matrix P. If f is a function on S, then Pf represents the function obtained
by multiplication of the vector f by the matrix P: Pf(i) = Σ_{j∈S} p_ij f(j). If f is nonnegative, then
Pf is well defined, with the understanding that Pf(i) = +∞ is possible for some, or all, values of
i. An important property of Pf is that Pf(i) = E[f(X(t + 1)) | X(t) = i]. Let V be a nonnegative
function on S, to serve as the Lyapunov function. The drift function of V(X(t)) is defined by
d(i) = E[V(X(t + 1)) | X(t) = i] - V(i). That is, d = PV - V. Note that d(i) is always well-defined,
if the value +∞ is permitted. The drift function is also given by

    d(i) = Σ_{j: j≠i} p_ij (V(j) - V(i)).   (2.1)
Proposition 2.1.1 (Foster-Lyapunov stability criterion) Suppose V : o R
+
and C is a nite
subset of o.
(a) If i : V (i) K is nite for all K, and if PV V 0 on o C, then X is recurrent.
(b) If > 0 and b is a constant such that PV V +bI
C
, then X is positive recurrent.
The proof of the proposition is given after two lemmas.
Lemma 2.1.2 Suppose PV V f + g on o, where f and g are nonnegative functions. Then
for any initial state i
o
and any stopping time ,
E
_
_

k:0k1
f(X(k))
_
_
V (i
o
) +E
_
_

k:0k1
g(X(k))
_
_
(2.2)
Proof. Let T
k
denote the -algebra generated by (X(s) : 0 s k). By assumption, PV + f
V +g on o. Evaluating each side of this inequality at X(k), and taking the conditional expectation
given T
k
yields
E[V (x(k + 1))[T
k
] +f(X(k)) V (X(k)) +g(X(k)). (2.3)
Let
n
= min, n, infk 0 : V (X(k)) n. The next step is to multiply each side of (2.3) by
I

n
>k
, take expectations of each side, and use the fact that
E
_
E[V (x(k + 1))[T
k
]I

n
>k

= E
_
V (x(k + 1))I

n
>k

E
_
V (x(k + 1))I

n
>k+1

.
The result is
E
_
V (x(k + 1))I

n
>k+1

+E
_
f(x(k))I

n
>k

E
_
V (x(k))I

n
>k

+E
_
g(x(k))I

n
>k

(2.4)
The denition of
n
implies that all the terms in (2.4) are zero for k n, and that E
_
V (x(k))I

n
>k

<
for all k. Thus, it is legitimate to sum each side of (2.4) and cancel like terms, to yield:
E
_
_

k:0k
n
1
f(X(k))
_
_
V (i
o
) +E
_
_

k:0k
n
1
g(X(k))
_
_
. (2.5)
Letting n in (2.5) and appealing to the monotone convergence theorem yields (2.2).
Lemma 2.1.3 Let X be an irreducible, time-homogeneous Markov process. If there is a nite set
C, and the mean time to hit C starting from any state in C is nite, then X is positive recurrent.
46
Proof. Lemma I.3.9 of Assussen. Variation of Walds lemma.
Proof of Proposition 2.1.1. (a) The proof of part (a) is based on a martingale convergence
theorem. Fix an initial state i
o
o C, let = mint 0 : X(t) C, and let Y (t) = V (X(t ))
for t 0. Note that Y (t +1) Y (t) = (V (X(t +1) V (X(t)))I
>t
. Let T
t
denote the -algebra
generated by (X(s) : 0 s t), and note that the event > t is in T
t
. Thus,
E[Y (t+1)Y (t)[T
t
] = E[(V (X(t+1))V (X(t)))I
>t
[T
t
] = E[V (X(t+1))V (X(t))[T
t
]I
>t
0,
so that (Y
t
: t 0) is a nonnegative supermartingale. Therefore, by a version of the martingale
convergence theorem, lim
t
Y (t) exists and is nite with probability one. By the assumption
i : V (i) K is nite for all K, it follows that X(t ) reaches only nitely many states with
probability one, which implies that < + with probability one. Therefore, P[ < [X(0) =
i
o
] = 1 for any i
o
o C. Therefore, for any initial state in C, the process returns to C innitely
often with probability one. The process watched in C is a nite-state Markov process, and is thus
(positive) recurrent. Therefore, all the states of C are recurrent for the original process X. Part
(a) is proved.
(b) Let f , g = bI
C
, and = mint 1 : X(t) C. Then Lemma 2.1.2 implies that
E[] V (i
o
) +b for any i
o
o. In particular, the mean time to hit C after time zero, beginning
from any state in C, is nite. Therefore X is positive recurrent by Lemma 2.1.3.
Proposition 2.1.4 (Moment bound) Suppose V , f, and g are nonnegative functions on o and
suppose
PV (i) V (i) f(i) +g(i) for all i o (2.6)
In addition, suppose X is positive recurrent, so that the means, f = f and g = g are well-dened.
Then f g. (In particular, if g is bounded, then g is nite, and therefore f is nite.)
Proof. Fix a state i
o
and let T
m
be the time of the m
th
return to state i
o
. Then by the equality
of time and statistical averages,
E
_
_

k:0kTm1
f(X(k))
_
_
= mE[T
1
]f
E
_
_

k:0kTm1
g(X(k))
_
_
= mE[T
1
]g
Lemma 2.1.2 applied with stopping time T
m
thus yields mE[T
1
]f V (i
o
) + mE[T
1
]g. Dividing
through by mE[T
1
] and letting m yields the desired inequality, f g.
Corollary 2.1.5 (Combined Foster-Lyapunov stability criterion and moment bound) Suppose V, f,
and g are nonnegative functions on o such that
PV (i) V (i) f(i) +g(i) for all i o (2.7)
In addition, suppose for some > 0 that the set C dened by C = i : f(i) < g(i)+ is nite. Then
X is positive recurrent and f g. (In particular, if g is bounded, then g is nite, and therefore f
is nite.)
47
u
queue 1
queue 2
2
d
1
d
a
u
Figure 2.1: Two queues fed by a single arrival stream.
Proof. Let b = maxg(i) + f(i) : i C. Then V, C, b, and satisfy the hypotheses of
Proposition 2.1.1(b), so that X is positive recurrent. Therefore the hypotheses of Proposition 2.1.4
are satised, so that f g.
The assumptions in Propositions 2.1.1 and 2.1.4 and Corollary 2.1.5 do not imply that V is
nite. Even so, since V is nonnegative, for a given initial state X(0), the long term average drift of
V (X(t)) is nonnegative. This gives an intuitive reason why the mean downward part of the drift,
f, must be less than or equal to the mean upward part of the drift, g.
Example 1a (Probabilistic routing to two queues) Consider the routing scenario with two
queues, queue 1 and queue 2, fed by a single stream of packets, as pictured in Figure 2.1. Here,
0 a, u, d
1
, d
2
1, and u = 1 u. The state space for the process is o = Z
2
+
, where the state
x = (x
1
, x
2
) denotes x
1
packets in queue 1 and x
2
packets in queue 2. In each time slot, a new
arrival is generated with probability a, and then is routed to queue 1 with probability u and to
queue 2 with probability u. Then each queue i, if not empty, has a departure with probability d
i
.
Note that we allow a packet to arrive and depart in the same slot. Thus, if X
i
(t) is the number of
packets in queue i at the beginning of slot t, then the system dynamics can be described as follows:
X
i
(t + 1) = X
i
(t) +A
i
(t) D
i
(t) +L
i
(t) for i 0, 1 (2.8)
where
A(t) = (A
1
(t), A
2
(t)) is equal to (1, 0) with probability au, (0, 1) with probability au, and
A(t) = (0, 0) otherwise.
D
i
(t) : t 0, are Bernoulli(d
i
) random variables, for i 0, 1
All the A(t)s, D
1
(t)s, and D
2
(t)s are mutually independent
L
i
(t) = ((X
i
(t) +A
i
(t) D
i
(t)))
+
(see explanation next)
If X
i
(t) + A
i
(t) = 0, there can be no actual departure from queue i. However, we still allow D
i
(t)
to equal one. To keep the queue length process from going negative, we add the random variable
L
i
(t) in (2.8). Thus, D
i
(t) is the potential number of departures from queue i during the slot, and
D
i
(t) L
i
(t) is the actual number of departures. This completes the specication of the one-step
transition probabilities of the Markov process.
A necessary condition for positive recurrence is, for any routing policy, a < d
1
+d
2
, because the
total arrival rate must be less than the total depature rate. We seek to show that this necessary
condition is also sucient, under the random routing policy.
48
Let us calculate the drift of V (X(t)) for the choice V (x) = (x
2
1
+x
2
2
)/2. Note that (X
i
(t+1))
2
=
(X
i
(t) + A
i
(t) D
i
(t) + L
i
(t))
2
(X
i
(t) + A
i
(t) D
i
(t))
2
, because addition of the variable L
i
(t)
can only push X
i
(t) +A
i
(t) D
i
(t) closer to zero. Thus,
PV (x) V (x) = E[V (X(t + 1))[X(t) = x] V (x)

1
2
2

i=1
E[(x
i
+A
i
(t) D
i
(t))
2
x
2
i
[X(t) = x]
=
2

i=1
x
i
E[A
i
(t) D
i
(t)[X(t) = x] +
1
2
E[(A
i
(t) D
i
(t))
2
[X(t) = x] (2.9)

_
2

i=1
x
i
E[A
i
(t) D
i
(t)[X(t) = x]
_
+ 1
= (x
1
(d
1
au) +x
2
(d
2
au)) + 1 (2.10)
Under the necessary condition a < d
1
+d
2
, there are choices of u so that au < d
1
and au < d
2
, and
for such u the conditions of Corollary 2.1.5 are satised, with f(x) = x
1
(d
1
au) + x
2
(d
2
au),
g(x) = 1, and any > 0, implying that the Markov process is positive recurrent. In addition, the
rst moments under the equlibrium distribution satisfy:
(d
1
au)X
1
+ (d
2
au)X
2
1. (2.11)
In order to deduce an upper bound on X
1
+X
2
, we select u

to maximize the minimum of the


two coecients in (2.11). Intuitively, this entails selecting u to minimize the absolute value of the
dierence between the two coecients. We nd:
= max
0u1
mind
1
au, d
2
au
= mind
1
, d
2
,
d
1
+d
2
a
2

and the corresponding value u

of u is given by
u

=
_
_
_
0 if d
1
d
2
< a
1
2
+
d
1
d
2
2a
if [d
1
d
2
[ a
1 if d
1
d
2
> a
For the system with u = u

, (2.11) yields
X
1
+X
2

1

. (2.12)
We remark that, in fact,
X
1
+X
2

2
d
1
+d
2
a
(2.13)
If [d
1
d
2
[ a then the bounds (2.12) and (2.13) coincide, and otherwise, the bound (2.13) is
strictly tighter. If d
1
d
2
< a then u

= 0, so that X
1
= 0, and (2.11) becomes (d
2
a)X
2
1
, which implies (2.13). Similarly, if d
1
d
2
> a, then u

= 1, so that X
2
= 0, and (2.11) becomes
49
(d
1
a)X
1
1, which implies (2.13). Thus, (2.13) is proved.
Example 1b (Route-to-shorter policy) Consider a variation of the previous example such that
when a packet arrives, it is routed to the shorter queue. To be denite, in case of a tie, the packet
is routed to queue 1. Then the evolution equation (2.8) still holds, but with with the description
of the arrival variables changed to the following:
Given X(t) = (x
1
, x
2
), A(t) = (I
x
1
x
2

, I
x
1
>x
2

) with probability a, and A(t) = (0, 0)


otherwise. Let P
RS
denote the one-step transition probability matrix when the route-to-
shorter policy is used.
Proceeding as in (2.9) yields:
P
RS
V (x) V (x)
2

i=1
x
i
E[A
i
(t) D
i
(t))[X(t) = x] + 1
= a
_
x
1
I
x
1
x
2

+x
2
I
x
1
>x
2

_
d
1
x
1
d
2
x
2
+ 1
Note that x
1
I
x
1
x
2

+ x
2
I
x
1
>x
2

ux
1
+ ux
2
for any u [0, 1], with equality for u = I
x
1
x
2

.
Therefore, the drift bound for V under the route-to-shorter policy is less than or equal to the drift
bound (2.10), for V for any choice of probabilistic splitting. In fact, route-to-shorter routing can
be viewed as a controlled version of the independent splitting model, for which the control policy is
selected to minimize the bound on the drift of V in each state. It follows that the route-to-shorter
process is positive recurrent as long as a < d
1
+ d
2
, and (2.11) holds for any value of u such that
au < d
1
and au d
2
. In particular, (2.12) holds for the route-to-shorter process.
We remark that the stronger bound (2.13) is not always true for the route-to-shorter policy.
The problem is that even if d
1
d
2
< a, the route-to-shorter policy can still route to queue 1,
and so X
1
,= 0. In fact, if a and d
2
are xed with 0 < a < d
2
< 1, then X
1
as d
1
0 for
the route-to-shorter policy. Intuitively, that is because occasionally there will be a large number of
customers in the system due to statistical uctuations, and then there will be many customers in
queue 1. But if d
2
<< 1, those customers will remain in queue 2 for a very long time.
Example 2a (An input queued switch with probabilistic switching)
2
Consider a packet switch
with N inputs and N outputs, as pictured in Figure 2.2. Suppose there are N
2
queues N at
each input with queue i, j containing packets that arrived at input i and are destined for output
j, for i, j E, where E = 1, , N. Suppose the packets are all the same length, and adopt
a discrete time model, so that during one time slot, a transfer of packets can occur, such that at
most one packet can be transferred from each input, and at most one packet can be transferred to
each output. A permutation of E has the form = (
1
, . . . ,
N
), where
1
, . . . ,
N
are distinct
elements of E. Let denote the set of all N! such permutations. Given , let R() be the
N N switching matrix dened by R
ij
= I

i
=j
. Thus, R
ij
() = 1 means that under permutation
, input i is connected to output j, or, equivalently, a packet in queue i, j is to depart, if there is
any such packet. A state x of the system has the form x = (x
ij
: i, j E), where x
ij
denotes the
number of packets in queue i, j.
2
Tassiulas [41] originally developed the results of Examples 2a-b, in the context of wireless networks. The paper
[29] presents similiar results in the context of a packet switch.
50
input 4
1,3
1,4
1,2
1,1
2,1
2,2
2,3
2,4
3,1
3,2
3,3
3,4
4,1
4,2
4,3
4,4
output 1
output 2
output 3
output 4
input 1
input 2
input 3
Figure 2.2: A 4 4 input queued switch
The evolution of the system over a time slot [t, t + 1) is described as follows:
X
ij
(t + 1) = X
ij
(t) +A
ij
(t) R
ij
((t)) +L
ij
(t)
where
A
ij
(t) is the number of packets arriving at input i, destined for output j, in the slot. Assume
that the variables (A
ij
(t) : i, j E, t 0) are mutually independent, and for each i, j, the
random variables (A
ij
(t) : t 0) are independent, identically distributed, with mean
ij
and
E[A
2
ij
] K
ij
, for some constants
ij
and K
ij
. Let = (
ij
: i, j E).
(t) is the switch state used during the slot
L
ij
= ((X
ij
(t) +A
ij
(t) R
ij
((t)))
+
, which takes value one if there was an unused potential
departure at queue ij during the slot, and is zero otherwise.
The number of packets at input i at the beginning of the slot is given by the row sum

jE
X
ij
(t), its mean is given by the row sum

jE

ij
, and at most one packet at input i
can be served in a time slot. Similarly, the set of packets waiting for output j, called the virtual
queue for output j, has size given by the column sum

iE
X
ij
(t). The mean number of arrivals
to the virtual queue for output j is

iE

ij
(t), and at most one packet in the virtual queue can
be served in a time slot. These considerations lead us to impose the following restrictions on :

jE

ij
< 1 for all i and

iE

ij
< 1 for all j (2.14)
Except for trivial cases involving deterministic arrival sequences, the conditions (2.14) are necessary
for stable operation, for any choice of the switch schedule ((t) : t 0).
Lets rst explore random, independent and identically distributed (i.i.d.) switching. That is,
given a probability distribution u on , let ((t) : t 0) be independent with common probability
distribution u. Once the distributions of the A
ij
s and u are xed, we have a discrete-time Markov
process model. Given satisfying (2.14), we wish to determine a choice of u so that the process
with i.i.d. switch selection is positive recurrent.
Some standard background from switching theory is given in this paragraph. A line sum of a
matrix M is either a row sum,

j
M
ij
, or a column sum,

i
M
ij
. A square matrix M is called
51
doubly stochastic if it has nonnegative entries and if all of its line sums are one. Birkhos theorem,
celebrated in the theory of switching, states that any doubly stochastic matrix M is a convex
combination of switching matrices. That is, such an M can be represented as M =

R()u(),
where u = (u() : ) is a probability distribution on . If

M is a nonnegative matrix with all
line sums less than or equal to one, then if some of the entries of

M are increased appropriately,
a doubly stochastic matrix can be obtained. That is, there exists a doubly stochastic matrix M
so that

M
ij
M
ij
for all i, j. Applying Birkhos theorem to M yields that there is a probability
distribution u so that

M
ij

R()u() for all i, j.


Suppose satises the necessary conditions (2.14). That is, suppose that all the line sums of
are less than one. Then with dened by
=
1 (maximum line sum of )
N
,
each line sum of (
ij
+ : i, j E) is less than or equal to one. Thus, by the observation at the
end of the previous paragraph, there is a probability distribution u

on so that
ij
+
ij
(u

),
where

ij
(u) =

R
ij
()u().
We consider the system using probability distribution u

for the switch states. That is, let ((t) :


t 0) be independent, each with distribution u

. Then for each ij, the random variables R


ij
((t))
are independent, Bernoulli(
ij
(u

)) random variables.
Consider the quadratic Lyapunov function V given by V (x) =
1
2

i,j
x
2
ij
. As in (2.9),
PV (x) V (x)

i,j
x
ij
E[A
ij
(t) R
ij
((t))[X
ij
(t) = x] +
1
2

i,j
E[(A
ij
(t) R
ij
((t)))
2
[X(t) = x].
Now
E[A
ij
(t) R
ij
((t))[X
ij
(t) = x] = E[A
ij
(t) R
ij
((t))] =
ij

ij
(u

)
and
1
2

i,j
E[(A
ij
(t) R
ij
((t)))
2
[X(t) = x]
1
2

i,j
E[(A
ij
(t))
2
+ (R
ij
((t)))
2
] K
where K =
1
2
(N +

i,j
K
ij
). Thus,
PV (x) V (x)
_
_

ij
x
ij
_
_
+K (2.15)
Therefore, by Corollary 2.1.5, the process is positive recurrent, and

ij
X
ij

K

(2.16)
That is, the necessary condition (2.14) is also sucient for positive recurrence and nite mean queue
length in equilibrium, under i.i.d. random switching, for an appropriate probability distribution u

on the set of permutations.


52
Example 2b (An input queued switch with maximum weight switching) The random switching
policy used in Example 2a depends on the arrival rate matrix , which may be unknown a priori.
Also, the policy allocates potential departures to a given queue ij, whether or not the queue is
empty, even if other queues could be served instead. This suggests using a dynamic switching
policy, such as the maximum weight switching policy, dened by (t) =
MW
(X(t)), where for a
state x,

MW
(x) = arg max

ij
x
ij
R
ij
(). (2.17)
The use of arg max here means that
MW
(x) is selected to be a value of that maximizes the
sum on the right hand side of (2.17), which is the weight of permutation with edge weights x
ij
. In
order to obtain a particular Markov model, we assume that the set of permutations is numbered
from 1 to N! in some fashion, and in case there is a tie between two or more permutations for having
the maximum weight, the lowest numbered permutation is used. Let P
MW
denote the one-step
transition probability matrix when the route-to-shorter policy is used.
Letting V and K be as in Example 2a, we nd under the maximum weight policy that
P
MW
V (x) V (x)

ij
x
ij
(
ij
R
ij
(
MW
(x))) +K
The maximum of a function is greater than or equal to the average of the function, so that for any
probability distribution u on

ij
x
ij
R
ij
(
MW
(t))

u()

ij
x
ij
R
ij
() (2.18)
=

ij
x
ij

ij
(u)
with equality in (2.18) if and only if u is concentrated on the set of maximum weight permutations.
In particular, the choice u = u

shows that

ij
x
ij
R
ij
(
MW
(t))

ij
x
ij

ij
(u)

ij
x
ij
(
ij
+)
Therefore, if P is replaced by P
MW
, (2.15) still holds. Therefore, by Corollary 2.1.5, the process
is positive recurrent, and the same moment bound, (2.16), holds, as for the randomized switching
strategy of Example 2a. On one hand, implementing the maximum weight algorithm does not
require knowledge of the arrival rates, but on the other hand, it requires that queue length infor-
mation be shared, and that a maximization problem be solved for each time slot. Much recent
work has gone towards reduced complexity dynamic switching algorithms.
2.2 Stability criteria for continuous time processes
Here is a continuous time version of the Foster-Lyapunov stability criteria and the moment bounds.
Suppose X is a time-homegeneous, irreducible, continuous-time Markov process with generator
matrix Q. The drift vector of V (X(t)) is the vector QV . This denition is motivated by the fact
53
that the mean drift of X for an interval of duration h is given by
d
h
(i) =
E[V (X(t +h))[X(t) = i] V (i)
h
=

jS
_
p
ij
(h)
ij
h
_
V (j)
=

jS
_
q
ij
+
o(h)
h
_
V (j), (2.19)
so that if the limit as h 0 can be taken inside the summation in (2.19), then d
h
(i) QV (i) as
h 0. The following useful expression for QV follows from the fact that the row sums of Q are
zero:
QV (i) =

j:j,=i
q
ij
(V (j) V (i)). (2.20)
Formula (2.20) is quite similar to the formula (2.1) for the drift vector for a discrete-time process.
Proposition 2.2.1 (Foster-Lyapunov stability criterioncontinuous time) Suppose V : o R
+
and C is a nite subset of o.
(a) If QV 0 on o C, and i : V (i) K is nite for all K then X is recurrent.
(b) Suppose for some b > 0 and > 0 that
QV (i) +bI
C
(i) for all i o. (2.21)
Suppose further that i : V (i) K is nite for all K, or that X is nonexplosive. Then X is
positive recurrent.
The proof of the proposition is similar to that above for discrete time processes. We begin with
an analog of Lemma 2.1.2.
Lemma 2.2.2 Suppose QV f + g on o, where f and g are nonnegative functions. Fix an
initial state i
o
, let N be a stopping time for the jump process X
J
, and let
N
denote the time of
the N
th
jump of X. Then
E
_
_

N
0
f(X(t))dt
_
V (i
o
) +E
_
_

N
0
g(X(t))dt
_
(2.22)
Proof. Let D denote the diagonal matrix with entries q
ii
. The one-step transition probability
matrix of the jump chain is given by P
J
= D
1
Q + I. The condition QV f + g thus implies
P
J
V V

f + g, were

f = D
1
f and g = D
1
g. Lemma 2.1.2 applied to the jump chain thus
yields
E
_
_

k:0k1

f(X
J
(k))
_
_
V (i
o
) +E
_
_

k:0k1
g(X
J
(k))
_
_
(2.23)
However, by the space-time description of an excursion of X from i
o
,
E
_
_

k:0kN1

f(X
J
(k))
_
_
= E
_
_

N
0
f(X(t))dt
_
54
and a similar equation holds for g and g. Substituting these relations into (2.23) yields (2.22), as
desired.
Proof of Proposition 2.2.1. (a) Let D denote the diagonal matrix with entries q
ii
. The
one-step transition probability matrix of the jump chain is given by P
J
= D
1
Q+I. The condition
QV 0 thus implies P
J
V V 0, so that X
J
is recurrent by Proposition 2.1.1(a). But X
J
is
recurrent if and only if X is recurrent, so X is also recurrent.
(b) The assumptions imply QV 0 on o C, so if i : V (i) K is nite for all K then X is
recurrent by Proposition 2.2.1(a), and, in particular, X is nonexplosive. Thus, X is nonexplosive,
either by direct assumption or by implication.
Let f , let g = bI
C
, let i
o
C, let N = mink 1 : X
J
(k) C, and let
N
denote
the time of the N
th
jump. Then Lemma 2.2.2 implies that E[
N
] V (i
o
) + b/q
ioio
. Since
N
is
nite a.s. and since X is not explosive, it must be that
N
is the time that X returns to the set C
after exiting the initial state i
o
. That is, the time to hit C beginning from any state in C is nite.
Therefore X is positive recurrent by the continuous time version of Lemma 2.1.3.
Example 3 Suppose X has state space o = Z
+
, with q
i0
= for all i 1, q
ii+1
=
i
for
all i 0, and all other o-diagonal entries of the rate matrix Q equal to zero, where > 0 and

i
> 0 such that

i0
1

i
< +. Let C = 0, V (0) = 0, and V (i) = 1 for i 0. Then
QV = + (
0
+ )I
C
, so that (2.21) is satised with = and b =
0
+ . However, X is not
positive recurrent. In fact, X is explosive. To see this, note that p
J
ii+1
=

i
+
i
exp(

i
). Let
be the probability that, starting from state 0, the jump process does not return to zero. Then
=

i=0
p
J
ii+1
exp(

i=0
1

i
) > 0. Thus, X
J
is transient. After the last visit to state zero, all
the jumps of X
J
are up one. The corresponding mean holding times of X are
1

i
+
which have a
nite sum, so that the process X is explosive. This example illustrates the need for the assumption
just after (2.21) in Proposition 2.2.1.
As for the case of discrete time, the drift conditions imply moment bounds. The proofs of the
following two propositions are minor variations of the ones used for discrete time, with Lemma
2.1.2 used in place of Lemma 2.2.2, and are omitted.
Proposition 2.2.3 (Moment boundcontinuous time) Suppose V , f, and g are nonnegative func-
tions on o, and suppose QV (i) f(i) + g(i) for all i o. In addition, suppose X is positive
recurrent, so that the means, f = f and g = g are well-dened. Then f g.
Corollary 2.2.4 (Combined Foster-Lyapunov stability criterion and moment boundcontinuous
time) Suppose V , f, and g are nonnegative functions on o such that QV (i) f(i) +g(i) for all
i o, and, for some > 0, the set C dened by C = i : f(i) < g(i) + is nite. Suppose also
that i : V (i) K is nite for all K. Then X is positive recurrent and f g.
Example 4.a (Random server allocation with two servers) Consider the system shown in Figure
2.3. Suppose that each queue i is fed by a Poisson arrival process with rate
i
, and suppose there
are two potential departure processes, D
1
and D
2
, which are Poisson processes with rates m
1
and
m
2
, respectively. The ve Poisson processes are assumed to be independent. No matter how the
potential departures are allocated to the permitted queues, the following conditions are necessary
for stability:

1
< m
1
,
3
< m
2
, and
1
+
2
+
3
< m
1
+m
2
(2.24)
55
2
queue 1
queue 2
queue 3
1
2
3
2
1
!
!
!
m
m
u
1
1
u
u
2
u
Figure 2.3: A system of three queues with two servers
That is because server 1 is the only one that can serve queue 1, server 2 is the only one that can
serve queue 3, and the sum of the potential service rates must exceed the sum of the potential
arrival rates for stability. A vector x = (x
1
, x
2
, x
2
) Z
3
+
corresponds to x
i
packets in queue i for
each i. Let us consider random selection, so that when D
i
has a jump, the queue served is chosen at
random, with the probabilities determined by u = (u
1
, u
2
). As indicated in Figure 2.3, a potential
service by server 1 is given to queue 1 with probability u
1
, and to queue 2 with probability u
1
.
Similarly, a potential service by server 2 is given to queue 2 with probability u
2
, and to queue 3
with probability u
2
. The rates of potential service at the three stations are given by

1
(u) = u
1
m
1

2
(u) = u
1
m
1
+u
2
m
2

3
(u) = u
2
m
2
.
Let V (x) =
1
2
(x
2
1
+x
2
2
+x
2
3
). Using (2.20), we nd that the drift function QV is given by
QV (x) =
1
2
_
3

i=1
((x
i
+ 1)
2
x
2
i
)
i
_
+
1
2
_
3

i=1
((x
i
1)
2
+
x
2
i
)
i
(u)
_
Now (x
i
1)
2
+
(x
i
1)
2
, so that
QV (x)
_
3

i=1
x
i
(
i

i
(u))
_
+

2
(2.25)
where is the total rate of events, given by =
1
+
2
+
3
+
1
(u)+
2
(u)+
3
(u), or equivalently,
=
1
+
2
+
3
+m
1
+m
2
. Suppose that the necessary condition (2.24) holds. Then there exists
some > 0 and choice of u so that

i
+
i
(u) for 1 i 3
and the largest such choice of is = minm
1

1
, m
2

3
,
m
1
+m
2

3
3
. (See excercise.)
So QV (x) (x
1
+x
2
+x
3
) + for all x, so Corollary 2.2.4 implies that X is positive recurrent
and X
1
+X
2
+X
3


2
.
56
Example 4.b (Longer rst server allocation with two servers) This is a continuation of Example
4.a, concerned with the system shown in Figure 2.3. Examine the right hand side of (2.25). Rather
than taking a xed value of u, suppose that the choice of u could be specied as a function of the
state x. The maximum of a function is greater than or equal to the average of the function, so that
for any probability distribution u,
3

i=1
x
i

i
(u) max
u

i
x
i

i
(u
t
) (2.26)
= max
u

m
1
(x
1
u
t
1
+x
2
u
t
1
) +m
2
(x
2
u
t
2
+x
3
u
t
2
)
= m
1
(x
1
x
2
) +m
2
(x
2
x
3
)
with equality in (2.26) for a given state x if and only if a longer rst policy is used: each service
opportunity is allocated to the longer queue connected to the server. Let Q
LF
denote the one-step
transition probability matrix when the longest rst policy is used. Then (2.25) continues to hold
for any xed u, when Q is replaced by Q
LF
. Therefore if the necessary condition (2.24) holds,
can be taken as in Example 4a, and Q
LF
V (x) (x
1
+x
2
+x
3
) + for all x. So Corollary 2.2.4
implies that X is positive recurrent under the longer rst policy, and X
1
+ X
2
+ X
3


2
. (Note:
We see that
Q
LF
V (x)
_
3

i=1
x
i

i
_
m
1
(x
1
x
2
) m
2
(x
2
x
3
) +

2
,
but for obtaining a bound on X
1
+X
2
+X
3
it was simpler to compare to the case of random service
allocation.)
2.3 Problems
2.1. Recurrence of mean zero random walks
(a) Suppose B
1
, B
2
, . . . is a sequence of independent, mean zero, integer valued random variables,
which are bounded, i.e. P[[B
i
[ M] = 1 for some M.
(a) Let X
0
= 0 and X
n
= B
1
+ +B
n
for n 0. Show that X is recurrent.
(b) Suppose Y
0
= 0 and Y
n+1
= Y
n
+ B
n
+ L
n
, where L
n
= ((Y
n
+ B
n
))
+
. The process Y is a
reected version of X. Show that Y is recurrent.
2.2. Positive recurrence of reected random walk with negative drift
Suppose B
1
, B
2
, . . . is a sequence of independent, integer valued random variables, each with mean
B < 0 and second moment B
2
< +. Suppose X
0
= 0 and X
n+1
= X
n
+ B
n
+ L
n
, where
L
n
= ((X
n
+ B
n
))
+
. Show that X is positive recurrent, and give an upper bound on the mean
under the equilibrium distribution, X. (Note, it is not assumed that the Bs are bounded.)
2.3. Routing with two arrival streams
(a) Generalize Example 1.a to the scenario shown.
57
1
queue 1
queue 2
2
d
1
d
d
1
2
u
u
u
u
2
queue 3
3
2
a
a
1
where a
i
, d
j
(0, 1) for 1 i 2 and 1 j 3. In particular, determine conditions on a
1
and
a
2
that insure there is a choice of u = (u
1
, u
2
) which makes the system positive recurrent. Under
those conditions, nd an upper bound on X
1
+X
2
+X
3
, and select u to mnimize the bound.
(b) Generalize Example 1.b to the scenario shown. In particular, can you nd a version of route-
to-shorter routing so that the bound found in part (a) still holds?
2.4. An inadequacy of a linear potential function
Consider the system of Example 1.b in the notes (a discrete time model, using the route to shorter
policy, with ties broken in favor of queue 1, so u = I
x
1
x
2

):
u
queue 1
queue 2
2
d
1
d
a
u
Assume a = 0.7 and d
1
= d
2
= 0.4. The system is positive recurrent. Explain why the function
V (x) = x
1
+ x
2
does not satisfy the Foster-Lyapunov stability criteria for positive recurrence, for
any choice of the constant b and the nite set C.
2.5. Allocation of service
Prove the claim in Example 4a about the largest value of .
2.6. Opportunistic scheduling (Tassiulas and Ephremides [40])
Suppose N queues are in parallel, and suppose the arrivals to a queue i form an independent,
identically distributed sequence, with the number of arrivals in a given slot having mean a
i
> 0 and
nite second moment K
i
. Let S(t) for each t be a subset of E = 1, . . . , N and t 0. The random
sets S(t) : t 0 are assumed to be independent with common distribution w. The interpretation
is that there is a single server, and in slot i, it can serve one packet from one of the queues in S(t).
For example, the queues might be in the base station of a wireless network with packets queued for
N mobile users, and S(t) denotes the set of mobile users that have working channels for time slot
[t, t + 1). See the illustration:
58
state s
queue 1
1
queue 2
2
N
queue N
a
.
.
.
a
a
Fading
channel
(a) Explain why the following condition is necessary for stability: For all s E with s ,= ,

is
a
i
<

B:Bs,=
w(B) (2.27)
(b) Consider u of the form u = (u(i, s) : i E, s E), with u(i, s) 0, u(i, s) = 0 if i , s, and

iE
u(i, s) = I
s,=
. Suppose that given S(t) = s, the queue that is given a potential service op-
portunity has probability distribution (u(i, s) : i E). Then the probability of a potential service
at queue i is given by
i
(u) =

s
u(i, s)w(s) for i E. Show that under the condition (2.27), for
some > 0, u can be selected to that a
i
+
i
(u) for i E. (Hint: Apply the min-cut, max-ow
theorem, given in Chapter 6 of the notes, to an appropriate graph.)
(c) Show that using the u found in part (b) that the process is positive recurrent.
(d) Suggest a dynamic scheduling method which does not require knowledge of the arrival rates or
the distribution w, which yields the same bound on the mean sum of queue lengths found in part (b).
2.7. Routing to two queues continuous time model
Give a continuous time analog of Examples 1.a and 1.b. In particular, suppose that the arrival
process is Poisson with rate and the potential departure processes are Poisson with rates
1
and

2
.
2.8. Stability of two queues with transfers
Let (
1
,
2
, ,
1
,
2
) be a vector of strictly positve parameters, and consider a system of two service
stations with transfers as pictured.
2
station 1
station 2
u!
2
"
"
1

1

Station i has Possion arrivals at rate


i
and an exponential type server, with rate
i
. In addi-
tion, customers are transferred from station 1 to station 2 at rate u, where u is a constant with
u U = [0, 1]. (Rather than applying dynamic programming here, we will apply the method of
Foster-Lyapunov stability theory in continuous time.) The system is described by a continuous-
time Markov process on Z
2
+
with some transition rate matrix Q. (You dont need to write out Q.)
(a) Under what condition on (
1
,
2
, ,
1
,
2
) is there a choice of the constant u such that the
Markov process describing the system is positive recurrent?
(b) Let V be the quadratic Lyapunov function, V (x
1
, x
2
) =
x
2
1
2
+
x
2
2
2
. Compute the drift function
QV .
59
(c) Under the condition of part (a), and using the moment bound associated with the Foster-
Lyapunov criteria, nd an upper bound on the mean number in the system in equilibrium, X
1
+X
2
.
(The smaller the bound the better.)
2.9. Stability of a system with two queues and modulated server
Consider two queues, queue 1 and queue 2, such that in each time slot, queue i receives a new
packet with probability a
i
, where 0 < a
1
< 1 and 0 < a
2
< 1. Suppose the server is described by a
three state Markov process, as shown.
queue 2
a
a
1
2
1
2
0 ! server longer
queue 1
If the server process is in state i for i 1, 2 at the beginning of a slot, then a potential service
is given to station i. If the server process is in state 0 at the beginning of a slot, then a potential
service is given to the longer queue (with ties broken in favor of queue 1). Then during the slot,
the server state jumps with the probabilities indicated. (Note that a packet can arrive and depart
in one time slot.) For what values of a
1
and a
2
is the process stable? Briey explain your answer
(but rigorous proof is not required).
60
Chapter 3
Queues with General Interarrival
Time and/or Service Time
Distributions
3.1 The M/GI/1 queue
An M/GI/1 queue is one in which the arrival process is memoryless (in fact, Poisson, with some
parameter > 0), the service times X
1
, X
2
, . . . of successive customers are independent with a
general probability distribution function B, and there is a single server. The number of customers
in the system at time t is denoted N(t). A sample path of N is illustrated in Figure 3.1. For ease of
notation, it is assumed that customers are served in the order in which they arrive. This is known
as the First-Come, First-Served (FCFS) service discipline. As indicated in the gure, we dene the
following auxiliary variables:
C
n
denotes the nth customer
q
n
is the number in the system just after C
n
departs, equilibrium distribution (d
k
: k 0)
q
t
n
is the number in the system just before C
n
arrives, equilibrium distribution (r
k
: k 0)
v
n
is the number of customers which arrive while C
n
is being served
Our goal will be to compute the distribution of N(t) in equilibrium.
Note that the random variable N(t) for a xed time t does not determine how long the customer
in the server (if any) at time t has been in the server, whereas the past history (N(s) : s t) does
determine this time-in-service. The time-in-service of the customer in the server is relevant to
the distribution of the future of the process N, unless the service time distribution is exponential.
Thus, unless the service time distribution is exponential, (N(t) : t 0) is not Markov.
Suppose for now that equilibrium distributions exist. Since the arrivals form a Poisson sequence,
the equilibrium distribution of N is the same as the distribution r = (r
k
) seen by arrivals. Since as
most one arrival and one departure can occur at a time, r = d (see exercise). Thus, r
k
= p
k
= d
k
for all k.
Consider the evolution of (q
n
: n 0). Just after C
n+1
begins service, there are q
n
+ I
qn=0
customers in the system. The term I
qn=0
accounts for the arrival of C
n+1
in case the system is
61
N
t
t
q
q
q
1
3
2
X X X
1 2 3
v =2 v =3
1
v =0
2 3
Figure 3.1: A typical sample path for an M/GI/1 queueing system
empty upon the departure of C
n
. During the period C
n+1
is served, there are v
n+1
arrivals. This
leads to the recursion
q
n+1
= q
n
I
qn,=0
+v
n+1
(3.1)
The variables v
n
are independent and identically distributed with common z-transform
V (z) = E[z
vn
] =
_

0
_
e
x

k=0
(xz)
k
k!
_
dB(x) (3.2)
=
_

0
e
x+xz
dB(x) (3.3)
= B

( z), (3.4)
and mean V
t
(1) = B
t
(0) = X = . Thus, the sequence (q
n
: n 0) is Markov with the one
step transition probability matrix (letting (
k
: k 0) denote the distribution of the v
n
):
P =
_

0

1

2
. . .

0

1

2
. . .

0

1

2
. . .
.
.
.
.
.
.
.
.
.
_

_
. (3.5)
Due to the nearly upper triangular structure of the matrix P it is possible to recursively nd
the elements of p as a multiple of p
0
, and then nd p
0
at the end by normalizing the sum to
one. Another approach is to work with z-transforms, writing the equation p = pP as zP(z) =
[P(z) +p
0
(z 1)]V (z). Solve this to obtain
P(z) =
p
0
(z 1)V (z)
z V (z)
(3.6)
To determine the constant p
0
use the fact that lim
z1
P(z) = 1 (use lHospitals rule and the fact
V
t
(1) = X = ) to obtain p
0
= 1 . Therefore,
P(z) =
(1 )(z 1)V (z)
z V (z)
, (3.7)
62
which is the celebrated formula of Polleczek and Khinchine for the M/GI/1 system. Computation
of the derivatives at one of P(z), again using lHospitals rule, yields the moments N
k
of the
distribution of the number in system. In particular,
N = +

2
X
2
2(1 )
. (3.8)
To obtain the mean number in the queue, simply subtract the mean number in the server, . To
obtain the mean waiting time (in queue or system) apply Littles law to the corresponding mean
number in system. For example, the mean waiting time in the queue is
W =
X
2
2(1 )
. (3.9)
It is remarkable the W depends only on , X and X
2
= (X)
2
+V ar(X). Note that W is increasing
in the variance of X, for xed mean X and arrival rate .
A simple alternative derivation of (3.9) is based on renewal theory. Suppose the order of service
is rst-come, rst-served. The mean waiting time W of a typical arrival is the same as the mean
amount of work in the system at the time of arrival, which by the PASTA property is the same
as the mean amount of work in the system in equilibrium. The mean work in the system can be
decomposed to the mean work in the server plus the mean work in the queue. Thus,
W = R +N
Q
X, (3.10)
where
R =
_
X
2
2X
_
, (3.11)
which is the product of the probability the server is busy () times the mean residual service time
of a customer in service, which is determined by renewal theory. The other term on the right of
(3.10) accounts for the portion of delay caused by the customers that are in the queue at the time
of an arrival. Applying Littles law, N
Q
X = WX = W, substituting into (3.10) and solving
yields W = R/(1 ), which is equivalent to the desired formula (3.9).
3.1.1 Busy Period Distribution
Call the customers that arrive during the service interval of a given customer an ospring of that
customer. The ospring of a customer, together with the ospring of the ospring, and so forth,
is the set of descendants of the customer. The length of the busy period initiated by the rst
customer is equal to the sum of the service times of the customer itself and of all its descendants.
The number of ospring of the rst customer is v
1
, which, given that X
1
= x, has the Poisson
distribution of mean x.
The length of the busy period L can be expressed as the sum
L = X
1
+L
1
+L
2
+. . . +L
v
1
(3.12)
where X
1
is the service time of the rst customer in the busy period, v
1
is the number of ospring
of that customer, and L
i
is the sum of the service time of the ith ospring of C
1
and the service
63
time of that osprings descendants. Notice that the joint distribution of L
1
, L
2
, . . . , L
v
1
is as if
independent samples with the same distribution as L are drawn.
Condition on X
1
to obtain:
G

(s) = E[e
sL
] =
_

0
e
sx
_

n=0
G

(s)
n
e
x
(x)
n
n!
_
dB(x) (3.13)
=
_

0
e
sx
e
x+G

(s)x
B(dx) (3.14)
= B

(s + G

(s)) (3.15)
Obtain g
1
= X/(1). Can also get this by a renewal argument. Obtain also g
2
= X
2
/(1)
3
.
3.1.2 Priority M/GI/1 systems
Suppose there are n independent arrival streams, each modeled as a Poisson process, with the ith
having arrival rate
i
. Suppose the streams are merged into a single-server queueing system, and
that the service times of customers from the ith stream are identically distributed with the same
distribution as a generic random variable X
i
.
An issue for the system may be the service order. For example, for a pure priority service
order, the server attempts to serve the available customer with the highest priority. Some ner
distinctions are possible. If, for example, service of a lower priority customer is interrupted by
the arrival of a higher priority customer, the service discipline is said to be preemptive priority.
Otherwise it is nonpreemptive priority.
Does the mean waiting time of a customer (averaged over all the streams according to arrival
rates) depend on the service discipline? The answer is yes, which is why mass service facilities (such
as supermarket checkout lines) often give priority to customers with short service requirements.
The stream obtained by merging the arrival streams is again M/GI/1 type. The arrival rate
of the combined stream is =
1
+ . . . +
n
, and the service time of a customer in the combined
stream is distributed as the random variable X = X
J
. Here the index J is chosen at random, with
P[J = j] =
j
/, and J, X
1
, . . . , X
n
are mutually independent. Note that
X =

i
X
i

and X
2
=

i
X
2
i

(3.16)
As long as customers are served in FCFS order, (3.9) is valid. The PASTA property and the FCFS
service order implies that W is equal to the mean work in the system, where work is measured in
units of time. Therefore,
Mean work in system =
X
2
2(1 )
=
R
1
. (3.17)
where R is given by (3.11), which here can be written as
R =
X
2
2
=

n
i=1

i
X
2
i
2
.
Although (3.17) was just derived under the assumption that the service order is FCFS, the terms
involved do not depend on the order of service, as long as the server remains busy whenever there
64
is work to be done (in which case the system is said to be work conserving). Thus, (3.17) is valid
for any work-conserving service discipline.
Assuming the required equilibria exist, the mean work in the queue composed of type i customers
is N
i
X
i
=
i
W
i
X
i
=
i
W
i
for each i, and the mean amount of work in the server is R. The mean
work in the system is thus (

n
i=1

i
W
i
) +R. Equating this with the last expression in (3.17) yields
that
(

i
W
i
) +R =
R
1
, (3.18)
which can be simplied to yield the conservation of work equation:

i
W
i
=
R
1
. (3.19)
This equation is true for any work-conserving service order for which the required equilibrium
means exist.
In order to derive the mean waiting time of customers of stream i a particular service order
must be specied. As mentioned above, if the order is FCFS, then the mean waiting time is the
same for all streams, and is given by (3.9). Suppose now that the nonpreemptive priority discipline
is used, with stream i having priority over stream j if i j. Let W
i
continue to denote the mean
waiting time in queue for customers of stream i. Upon arrival to the queue, a customer from stream
1 need wait only for the completion of the customer in service (if any) and for the service of other
stream 1 customers, if any, already in the queue. Thus, W
1
= R+(
1
W
1
)/
1
, where by Littles law
we write W
1
for the mean number of stream one customers in the queue, and R is the expected
residual service time of the customer in service, if any:
R =
X
2
2
=

n
i=1

i
X
2
i
2
(3.20)
Solving for W
1
yields
W
1
=
R
1
1
. (3.21)
Next, argue that W
2
is equal to R+(
1
W
1
)/
1
+(
2
W
2
)/
2
+
1
X
1
W
2
. The rst three terms
in this sum account for the residual service time and the service times of stream 1 and stream 2
customers already in the queue at the time of arrival of a stream 2 packet. The last term accounts
for the expected wait due to type 1 customers that arrive while the typical arrival from stream 2
is in the queue. Solve for W
2
to nd
W
2
=
R +
1
W
1
1
1

2
=
R
(1
1
)(1
1

2
)
(3.22)
Finally, continuing by induction on i, obtain
W
i
=
R
(1
1
. . .
i1
)(1
1
. . .
i
)
(3.23)
for nonpreemptive priority order of service.
65
N
t
t
q
2
q =0
1
v =0
1
v =0
2
v =2
4
q
3
q
4
v =0
3
Figure 3.2: A GI/M/1 queue sample path
3.2 The GI/M/1 queue
A GI/M/1 queue is one in which the arrival process is a renewal process, service times are inde-
pendent and exponentially distributed with a common mean, and there is a single server. Let A
denote the cumulative distribution function and 1/ the mean of the interarrival times, and let
denote the parameter of the service times. The number of customers in the system at time t is
denoted N(t). A sample path of N is illustrated in Figure 3.2. It is assumed that customers are
served in FCFS order.
It is useful to think of the service process as being governed by a Poisson process of rate
. Jumps in the process denote potential service completions. A potential service completion
becomes an actual service completion if there is at least one customer in the system at the time
of the potential service completion. By the properties of the Poisson process and the exponential
distribution, this view of the system is consistent with its distribution. Dene the following auxiliary
variables, which are indicated in Figure 3.2:
C
n
denotes the nth customer
q
t
n
is the number in the system just before C
n
arrives, equilibrium distribution (r
k
: k 0)
v
n
is the number of potential service completions which occur during the interarrival period just
before the arrival of C
n
.
Note that the random variable N(t) for a xed time t does not determine how long before time
t the last customer arrival occurred, whereas the past history (N(s) : s t) does determine the
time-since-most-recent-arrival. The time-since-most-recent-arrival is relevant to the distribution of
the future of the process N, unless the interarrival distribution is exponential. Thus, unless the
arrival process is Poisson, (N(t) : t 0) is not Markov. Nevertheless, the mean waiting time, and
in fact the waiting time distribution, can be determined by focusing on the process (q
t
n
: n 1).
Consider the evolution of (q
t
n
: n 0). Just after C
n
arrives, there are q
t
n
+ 1 customers in the
system. There are no more arrivals and potentially as many as q
t
n
departures in the period strictly
66
between arrival times of C
n
and C
n+1
. Therefore,
q
t
n+1
= (q
t
n
+ 1 v
n+1
)
+
(3.24)
The variables v
n
are independent and identically distributed with common z-transform A

(
z). and mean A
t
(0) = / = 1/. Thus, the sequence (q
t
n
: n 0) is Markov with the one
step transition probability matrix (letting (
k
: k 0) denote the distribution of the v
n
):
P =
_

_
1
0

0
0 0 . . .
1
0

1

1

0
0 . . .
1
0

1

2

2

1

0
0 . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
_

_
. (3.25)
The equilibrium equation r = rP can be written
r
k
=
0
r
k1
+
1
r
k
+. . . k 1 (3.26)
Seek a solution of the form r
k
= (1 )
k
for some > 0. This yields the equation for :
= A

( ) (3.27)
The function A

() is strictly convex, value one and derivative 1/ at = 1. Thus, a solution


with 0 < < 1 exists if and only if < 1.
If an equilibrium probability distribution r = (r
k
: k 0) of (q
t
k
) exists, then the process is
positive recurrent. Recall from the proof of Proposition 1.7.2, that for a given state, say k, the
mean number of visits to state k + 1 per excursion from state k is r
k+1
/r
k
. Since the process can
increase by only one at a time, in order that the process reach k +1 before k given that it starts in
k, it must take its rst jump towards state k + 1. As soon as the process crosses below state k, it
has no more opportunities to hit k + 1 before reaching k. By the shift invariance of the transition
probabilities, it follows that the ratio r
k+1
/r
k
is the same for all k.
r
k+1
r
k
= E[number of visits to k + 1 before return to k[q
t
0
= k] (3.28)
This ratio does not depend on k. Thus, if any equilibrium distribution exists, it must be a geometric
distribution.
Given q
t
n
= k, the waiting time in queue of C
n
is the sum of k independently distributed Poisson
random variables with parameter , which has Laplace transform /( + s). Thus, the Laplace
transform of the waiting time in equilibrium is
E[e
sW
] = (1 )

k=0

k
_

s +
_
k
(3.29)
= 1 +
_
(1 )
s +(1 )
_
(3.30)
Thus, the equilibrium waiting distribution is a mixture of the random variable that is identically
zero, and an exponentially distributed random variable with parameter (1 ). Thus, P[W >
x] = exp(x(1 )). and W has mean W = /((1 )). By Littles law, the mean number
in the system is given by N = (W + 1/) = /(1 ). The total time in the system, T, can
be represented as T = W + X, where W is the waiting time, X is the service time, and W and
X are mutually independent. The Laplace transform of the system time is given by E[e
sT
] =
E[e
sW
]E[e
sX
] =
(1)
s+(1)
. That is, the total time in the system T is exponentially distributed
with parameter (1 ).
67
3.3 The GI/GI/1 System
A GI/GI/1 queue is one in which the arrival process is renewal type, the service times of successive
customers are independent and identically distributed, and there is a single server. It is assumed
that customers are served in FCFS order.
C
n
denotes the nth customer
x
n
is the service time required for the nth customer
t
n
is the time between the arrival of C
n1
and C
n
w
n
is the waiting time in queue of the nth customer
U(t) is the work in system, also known as the virtual waiting time, at time t
Take as the intitial condition for the system that it is empty at time 0 and C
1
arrives at time t
1
.
With these denitions, notice that w
n
= U(t
1
+. . . +t
n
).
A useful stochastic equation is derived next. Just after the arrival of customer C
n
, the amount
of work in the system is w
n
+ x
n
, and the work begins to decrease at unit rate. Customer C
n+1
arrives to nd the system nonempty if and only if w
n
+ x
n
t
n+1
> 0, and if this condition holds
then w
n+1
= w
n
+x
n
t
n+1
. Therefore, in general,
w
n+1
= (w
n
+u
n
)
+
(3.31)
where u
n
= x
n
t
n+1
. The random variables of the sequence (u
n
: n 1) are independent and
identically distributed, because the same is true of the two independent sequences (x
n
: n 1) and
(t
n
: n 1). Then w
1
= 0, and by repeated use of (3.31),
w
n
= max0, w
n1
+u
n1

= max0, u
n1
+ max0, w
n2
+u
n2

= max0, u
n1
, u
n1
+u
n2
+w
n2

= max0, u
n1
, u
n1
+u
n2
, u
n1
+u
n1
+u
n2
+w
n3

.
.
.
.
.
.
= max0, u
n1
, u
n1
+u
n2
, . . . , u
n1
+. . . u
1

Let us next consider the question of stability. Dene a random variable w


n
by
w
n
= max0, u
1
, u
1
+u
2
, . . . , u
1
+. . . u
n1
, (3.32)
which is the same as w
n
, but with the sequence (u
1
, . . . , u
n1
) reversed in time. Since time reversal
preserves the distribution of the u
n
sequence, it is clear that for each n, w
n
has the same distribution
as w
n
.The sequence w
n
is nondecreasing in n, so that the limit
w

= lim
n
w
n
= max0, u
1
, u
1
+u
2
, . . .
68
exists (with values in (0, ]) and is equal to the maximum of the partial sums of the u
n
sequence as
shown. Since convergence with probability one implies convergence in distribution, it follows that
the variables ( w
n
) converge in distribution to w

. Since convergence in distribution is determined


by marginal distributions only, it follows that
lim-in-dist
n
w
n
= w

(3.33)
Suppose that the x
n
and t
n
have nite means, so that u
n
also has a nite mean and the strong
law of large numbers applies: lim
n
(u
1
+. . .+u
n
)/n = E[u
1
] with probability one. If E[u
n
] < 0, it
follows that lim
n
(u
1
+. . .+u
n
) = with probability one, which implies that P[ w

< ] = 1.
The arrival rate is = 1/E[t
1
] and the service rate is = 1/E[x
1
]. Letting = / as usual,
it has just been proved that if < 1, then the variables w
n
have a limiting distribution with no
probability mass at +.
There are techniques for computing the distribution of waiting time. One is based on Wiener-
Hopf spectral factorization, and another on certain identities due to F. Sipitzer. However, some
simple bounds on the waiting time distribution, presented in the next section, are often more useful
in applications.
3.4 Kingmans Bounds for GI/GI/1 Queues
The elegant bounds, due to J.F.C. Kingman, on the mean and tail probabilities for the waiting
time in a GI/GI/1 system are given in this section. Let
2
z
denote the variance of a random variable
z.
Proposition 3.4.1 (Kingmans moment bound) The mean waiting time W for a GI/GI/1 FCFS
queue satises
W
(
2
x
+
2
t
)
2(1 )
(3.34)
Proof. Let w denote a random variable with the distribution of w

and let u denote a random


variable independent of w with the same distribution as the u
n
s. Taking n in (3.31) yields
that w
d
= (w +u)
+
, where
d
= denotes equality in distribution. In particular, E[w] = E[(w +u)
+
]
and E[w
2
] = E[(w +u)
2
+
]. Clearly
w +u = (w +u)
+
(w +u)

, (3.35)
where (w + u)

= min0, w + u. Taking expectations of each side of (3.35) yields E[u] =


E[(w + u)

]. Squaring each side of (3.35), using the fact (w + u)


+
(w + u)

= 0 and taking
expectations yields
E[w
2
] + 2E[w]E[u] +E[u
2
] = E[(w +u)
2
+
] +E[(w +u)
2

],
or equivalently
E[w] =
E[u
2
] E[(w +u)
2

]
2E[u]
=

2
u

2
(w+u)

2E[u]
.
69
Noting that
2
(w+u)

0 and
2
u
=
2
x
+
2
t
yields the desired inequality.
Exercise: Compare Kingmans moment bound to the exact expression for W in case the inter-
arrival times are exponentially distributed. Solution: Then
2
t
=
1

2
so the bound becomes
E[w]
(
2
x
+
1

2
)
2(1 )
whereas by the Polleczek and Khinchine mean formula for an M/GI/1 queuing system
E[w] =
(
2
x
+
1

2
)
2(1 )
.
Next Kingmans tail bound is given. Let w
n
denote the waiting time in queue of the n
th
customer
in a GI/GI/1 system. Then w
n+1
has the same probability distribution as max
0kn
u
1
+ . . . +u
k
,
where u
k
is the service time required by the k
th
customer minus the length of time between the
k
th
and k +1
th
arrivals. By assumption, the random variables u
1
, u
2
, ... are identically distributed.
Suppose E[u
n
] < 0 and that () = E[exp(u
n
)] < +for some > 0. Let

= sup : () 1.
Proposition 3.4.2 (Kingmans tail bound) For n 0 and b 0, P[w
n+1
b] exp(

b).
Proof. Choose any > 0 such that () 1. Let M
0
= 1 and M
k
= exp((u
1
+ . . . +u
k
)). Then
the probability in question can be expressed as P[w
n+1
b] = P[(max
0kn
M
k
) exp(b)]. It is
easy to check that E[M
k+1
[M
k
, . . . , M
0
] M
k
, so by Lemma 3.4.3 below,
P[ max
1kn
M
k
exp(b)] E[M
0
]/ exp(b) = exp(b). (3.36)
Thus, P[w
n+1
b] exp(b) for 0 < <

, which implies the proposition.


Lemma 3.4.3 Let M
0
, M
1
, . . . be nonnegative random variables such that E[M
k+1
[M
k
, . . . , M
0
]
M
k
for k 0 (i.e. M is a nonnegative supermartingale). Then for any n 0, P[(max
0kn
M
k
)
] E[M
0
]/.
Proof. Let = mink 0 : M
k
, with the convention that = + if the indicated set is
empty. Then (max
0kn
M
k
) if and only if M
n
, where ab denotes the minimum of a and
b. This fact and the simple Markov inequality implies that P[(max
0kn
M
k
) ] = P[M
n

] E[M
n
]/. Since E[M
n
] = E[M
0
] for n = 0, the proof can be completed by showing that
E[M
n
] in nonincreasing in n. To that end, note that M
(n+1)
M
n
= (M
n+1
M
n
)I
>n
, so
E[M
(n+1)
] E[M
n
] = E[(M
n+1
M
n
)I
>n
]
= E[E[(M
n+1
M
n
)I
>n
[M
n
, . . . , M
0
]]
= E[E[(M
n+1
M
n
)[M
n
, . . . , M
0
]I
>n
]
0,
where we use the fact that the event > n is determined by M
0
, . . . , M
n
. The lemma is proved.
70
c
F(c)
p
F
-1
(p)
1
1
Figure 3.3: A distribution function and its inverse
Exercise: Compare Kingmans tail bound to the exact tail probabilities in case the service time
distribution is exponential. Solution: In that case () =

() so the equation for

becomes
A

() = 1

. With the change of variable = (1 ) this equation becomes the same as


the equation for in the analysis of a GI/M/1 queueing system. Thus,

in Kingmans bound
is equal to (1 ) where is the value determined in the analysis of a GI/M/1 system. Thus,
P[w

b] = exp(

b), showing that Kingmans bound gives the same exponential behavior as
the actual tail probabilities.
3.5 Stochastic Comparison with Application to GI/GI/1 Queues
Before describing the most basic part of the theory of stochastic comparison theory, we shall review
the problem of generating a random variable with a given cumulative distribution function. Suppose
F is a probability distribution function and that U is a random variable uniformly distributed on
the interval [0, 1]. The inverse function of F is dened by
F
1
(p) = supx : F(x) p (3.37)
and an example is pictured in Figure 3.5. The graph of F
1
is the graph of F reected about
the line of slope one through the origin. Let X = F
1
(U). Then for any constant c, P[X c] =
P[F
1
(U) c] = P[U F(c)] = F(c), so that X has distribution F
c
.
Exercise 1. a) Find and sketch F
1
assuming that F is the distribution function of an exponential
random variable with parameter . b) Repeat part a) in case F corresponds to a random variable
uniformly distributed over the set 1, 2, 3, 4, 5, 6. Simultaneously sketch F and F
1
as is done in
Figure 3.5. Is F
1
right-continuous? c) Show that if F(c) G(c) for all c, then F
1
(p) G
1
(p)
for all p.
Definition 3.5.1 Let $F$ and $G$ be two distribution functions. We say $F$ stochastically dominates $G$, and write $F \succeq G$, if $F(c) \le G(c)$ for all constants $c$. If $X$ and $Y$ are two random variables, we say that $X$ stochastically dominates $Y$, and write $X \succeq Y$, if $F_X$ stochastically dominates $F_Y$, where $F_X$ is the distribution function of $X$ and $F_Y$ is the distribution function of $Y$.
Note that whether $X \succeq Y$ depends only on the individual distributions of $X$ and $Y$, and otherwise does not depend on the joint distribution of $X$ and $Y$. In fact, $X$ and $Y$ don't even need to be random variables on the same probability space.

The fundamental proposition of stochastic comparison theory is the following:

Proposition 3.5.2 $X \succeq Y$ if and only if there is a pair of random variables $(\tilde X, \tilde Y)$ defined on some probability space such that the following three conditions hold:

$\tilde X \stackrel{d}{=} X$ (the notation $\stackrel{d}{=}$ means the variables have the same probability distribution),

$\tilde Y \stackrel{d}{=} Y$,

$P[\tilde X \ge \tilde Y] = 1$.
Proof. If $X \succeq Y$, let $U$ be a uniform random variable and let $\tilde X = F_X^{-1}(U)$ and $\tilde Y = F_Y^{-1}(U)$. Then $(\tilde X, \tilde Y)$ satisfies the required three conditions.

Conversely, if there exists $(\tilde X, \tilde Y)$ satisfying the three conditions, then for any constant $c$ the event $\{\tilde X \le c\}$ is a subset of the event $\{\tilde Y \le c\}$ (possibly ignoring an event of probability zero). Therefore, $F_X(c) = F_{\tilde X}(c) \le F_{\tilde Y}(c) = F_Y(c)$, so that indeed $X \succeq Y$.
A useful method for deriving bounds on probability distributions, called the stochastic comparison method, or coupling method, consists of applying Proposition 3.5.2. Note that the random variables $X$ and $Y$ may have little to do with each other. They may be independent, or even defined on different probability spaces. Variables $\tilde X$ and $\tilde Y$ defined on a common probability space, with $\tilde X \stackrel{d}{=} X$ and $\tilde Y \stackrel{d}{=} Y$, are called coupled versions of $X$ and $Y$. Thus, to establish that $X \succeq Y$, it suffices to show the existence of coupled versions $\tilde X$ and $\tilde Y$ such that $P[\tilde X \ge \tilde Y] = 1$.
Example. Let $w_n$, for $n \ge 1$, denote the sequence of waiting times for the customers in a GI/GI/1 queue, which is empty at the time of arrival of customer one. Then the random variables $\tilde w_1, \tilde w_2, \ldots$ defined by (3.32) are coupled versions of $w_1, w_2, \ldots$. Since the random variables $\tilde w_n$ are nondecreasing in $n$ with probability one, we conclude that $w_1 \preceq w_2 \preceq \cdots$. That is, the waiting time of customer $C_n$ stochastically increases with $n$.
Example. As a related example, let us demonstrate that the waiting time in a GI/GI/1 queue is an increasing function of the service time distribution in the sense of stochastic comparison. That is, suppose there are two queueing systems, called system 1 and system 2. Suppose that both systems are initially empty and that the sequences of arrival times of the two systems have identical distributions. Suppose that both systems use the FIFO service order and that the service times of the customers in system $i$ ($i = 1$ or $i = 2$) are independent and identically distributed with CDF $B^{(i)}$. Let $W_n^{(i)}$ denote the waiting time of the $n$th customer in system $i$.

Suppose that $B^{(1)} \succeq B^{(2)}$. We will establish that for each $n$, $W_n^{(1)} \succeq W_n^{(2)}$. Consider a probability space on which there is defined a sequence of variables $(T_1, T_2, \ldots)$ which has the same distribution as the sequence of interarrival times for either of the two queueing systems being compared. On the same probability space, for each $n$, let $(X_n^{(1)}, X_n^{(2)})$ be a pair of random variables such that $X_n^{(1)}$ has CDF $B^{(1)}$, $X_n^{(2)}$ has CDF $B^{(2)}$, and $P[X_n^{(1)} \ge X_n^{(2)}] = 1$. Such a pair can be constructed according to Proposition 3.5.2. Moreover, the pairs in the sequence $((X_n^{(1)}, X_n^{(2)}))_{n \ge 1}$ can be constructed to be mutually independent, and also to be independent of $(T_1, T_2, \ldots)$. For $i = 1$ or $i = 2$, define $\tilde W_n^{(i)}$ to be the waiting time of the $n$th customer in the queueing system in which $(T_1, T_2, \ldots)$ is the interarrival time sequence and $(X_1^{(i)}, X_2^{(i)}, \ldots)$ is the sequence of customer service times. Clearly $\tilde W_n^{(i)}$ and $W_n^{(i)}$ have the same distribution, so it suffices to prove that $P[\tilde W_n^{(1)} \ge \tilde W_n^{(2)}] = 1$. However, the fact $\tilde W_1^{(1)} = \tilde W_1^{(2)} = 0$, the update equations $\tilde W_{n+1}^{(i)} = (\tilde W_n^{(i)} - T_{n+1} + X_n^{(i)})^+$, and the fact $P[X_n^{(1)} \ge X_n^{(2)}] = 1$ imply that $P[\tilde W_n^{(1)} \ge \tilde W_n^{(2)}] = 1$ for all $n \ge 1$ by induction on $n$. Applying Proposition 3.5.2 again, we conclude that $W_n^{(1)} \succeq W_n^{(2)}$ for all $n$, as advertised.
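The induction in this example is easy to check in simulation. Here is a small Python sketch (our own construction; exponential service distributions with rates $\mu_1 < \mu_2$ are an illustrative choice for which $B^{(1)} \succeq B^{(2)}$) that drives both Lindley recursions with common arrivals and builds the coupled service times from a common uniform via inverse CDFs:

    import math
    import random

    def coupled_waits(lam=1.0, mu1=1.2, mu2=2.0, n=10_000, seed=42):
        """Coupled M/M/1 waiting times: common arrivals, and service times
        X1 >= X2 obtained from the same uniform via the inverse CDFs."""
        rng = random.Random(seed)
        w1 = w2 = 0.0
        ok = True
        for _ in range(n):
            t = rng.expovariate(lam)        # common interarrival time T_{n+1}
            u = rng.random()                # common uniform for both services
            x1 = -math.log(1.0 - u) / mu1   # B(1): exponential with rate mu1
            x2 = -math.log(1.0 - u) / mu2   # B(2): exponential with rate mu2
            w1 = max(w1 + x1 - t, 0.0)      # Lindley recursion, system 1
            w2 = max(w2 + x2 - t, 0.0)      # Lindley recursion, system 2
            ok = ok and (w1 >= w2)
        return ok

    print("pathwise W1 >= W2 held throughout:", coupled_waits())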
Exercise 2. a) Show that $X \succeq Y$ if and only if $E[\phi(X)] \ge E[\phi(Y)]$ for every nondecreasing function $\phi$. (Another type of stochastic ordering requires the inequality to hold only for convex, nondecreasing functions $\phi$.) b) Let $X$ be uniformly distributed on $[0,1]$, let $Y$ be exponentially distributed with $E[Y] = 1$, and let $Z$ have the Poisson distribution with $E[Z] = 1$. Identify any stochastic orderings that appear between pairs of these random variables (e.g., is $X \succeq Y$?). c) Suppose $X \succeq Y$ and $E[X] = E[Y]$. Prove that $X$ and $Y$ have the same distribution.
3.6 GI/GI/1 Systems with Server Vacations, and Application to
TDM and FDM
In a GI/GI/1 system with server vacations, whenever the server finishes serving a customer or returns from a vacation, and finds the system empty, the server goes on another vacation. A vacation is a period of time in which the server does not serve customers, even if one or more customers arrive during the period. An effective method for accounting for the effect of vacations is to compare the sample paths of a system operating without vacations to one with vacations. The following variables will be used.

$w_n^o$ is the waiting time in queue of the $n$th customer in the system without vacations.

$w_n$ is the waiting time in queue of the $n$th customer in the system with vacations.

$t_1$ is the time of the first arrival and, for $n \ge 1$, $t_{n+1}$ is the time between the $n$th and $(n+1)$th arrivals. These variables are used in both systems. The variables $t_1, t_2, \ldots$ are assumed to be independent and identically distributed.

$x_n$ is the service time required by the $n$th customer. These variables are used in both systems. The variables $x_1, x_2, \ldots$ are assumed to be independent and identically distributed.

$V_n$ is the length of the $n$th vacation period for the system with vacations. The variables $V_n$ for $n \ge 1$ are assumed to be independent and identically distributed.

The two systems both start empty, and in the system with vacations, the server initiates a vacation at time zero with duration $V_0$. Roughly speaking, the main difference between the two systems is that when the $n$th customer begins service in the system with vacations, the total idle time of the server is of the form $V_0 + \cdots + V_k$ for some $k$. In fact, $k$ is the minimum integer subject to the constraint that the idle time in the system with vacations is greater than or equal to the idle time in the system without vacations. This idea is expressed concisely in the lemma below. Let $\gamma(u)$ denote the residual lifetime process for the renewal process determined by $V_0, V_1, \ldots$. That is, $\gamma(u) = V_0 + \cdots + V_{l(u)} - u$, where $l(u) = \min\{k : V_0 + \cdots + V_k \ge u\}$. Also, let $I_n^o$ denote the total idle time, in the system without vacations, up to the time the $n$th customer begins (or ends) service.

Lemma 3.6.1 For $n \ge 1$, $w_n = w_n^o + \gamma(I_n^o)$.
Proof. Argue by induction on $n$. Since $w_1 = \gamma(I_1^o)$ and $w_1^o = 0$, the lemma is true for $n = 1$. Suppose the lemma is true for $n$, and argue that it is true for $n + 1$ as follows. Let $x$ denote the amount of time (possibly zero) the server in the system without vacations is idle between services of the $n$th and $(n+1)$th customers. Therefore, $I_{n+1}^o = I_n^o + x$, and two cases are considered: (1) If $x \le w_n - w_n^o$ then the server in the system with vacations will not take a vacation between service of the $n$th and $(n+1)$th customers. Therefore $w_{n+1} - w_{n+1}^o = w_n - w_n^o - x = \gamma(I_n^o) - x = \gamma(I_n^o + x) = \gamma(I_{n+1}^o)$, as was to be proved. (2) If $x > w_n - w_n^o$ then at the time of arrival of the $(n+1)$th customer, both systems are empty and both have been idle for $I_{n+1}^o$ time units. Therefore, $w_{n+1} = \gamma(I_{n+1}^o)$ and $w_{n+1}^o = 0$, so again $w_{n+1} - w_{n+1}^o = \gamma(I_{n+1}^o)$. The induction step is complete, and the lemma is proved.
By renewal theory, if the distribution of $V_1$ is of nonlattice type, the random variable $\gamma(u)$ converges in distribution as $u \to \infty$, where the limit has probability density function (p.d.f.) $f_\gamma$ given by
$$f_\gamma(c) = \frac{P[V_1 > c]}{E[V_1]}. \qquad (3.38)$$
It is useful to skip the question of convergence to equilibrium by choosing the distribution of $V_0$ to make $(\gamma(u) : u \ge 0)$ a stationary random process (rather than only asymptotically stationary). Thus, assume that $V_0$ has p.d.f. $f_\gamma$. Consequently, $\gamma(u)$ has p.d.f. $f_\gamma$ for all $u \ge 0$. Since $(\gamma(u) : u \ge 0)$ is independent of $(w_n^o, I_n^o)$, the conditional density of $\gamma(I_n^o)$ given $(w_n^o, I_n^o)$ is again $f_\gamma$. Thus $(w_n^o, I_n^o)$ and $\gamma(I_n^o)$ are mutually independent. The following proposition is proved.
Proposition 3.6.2 Suppose $V_0$ has p.d.f. $f_\gamma$. Then $w_n^o$ and $\gamma(I_n^o)$ are independent, $\gamma(I_n^o)$ has p.d.f. $f_\gamma$, and $w_n = w_n^o + \gamma(I_n^o)$.
Since the $k$th moment of a random variable with p.d.f. $f_\gamma$ is $m_{k+1}/((k+1)m_1)$, where $m_j = E[V_1^j]$, the following corollary is immediate from the proposition.

Corollary 3.6.3 Suppose $V_0$ has p.d.f. $f_\gamma$. Then $E[w_n] = E[w_n^o] + m_2/(2m_1)$ and $\mathrm{Var}(w_n) = \mathrm{Var}(w_n^o) + m_3/(3m_1) - (m_2/(2m_1))^2$.
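As a quick numerical check on the corollary, the following Python sketch (ours; the vacation distribution is an arbitrary illustrative choice) samples the residual lifetime $\gamma(u)$ at a large fixed time and compares its mean with $m_2/(2m_1)$:

    import random

    def residual_at(u, sampler, rng):
        """Residual lifetime gamma(u) of a renewal process with i.i.d.
        lifetimes drawn by sampler(rng), started at time zero."""
        s = 0.0
        while s < u:
            s += sampler(rng)
        return s - u

    rng = random.Random(7)
    sampler = lambda r: 0.5 + r.expovariate(1.0)   # vacations V = 0.5 + Exp(1)
    m1 = 1.5                                       # E[V]
    m2 = 0.25 + 2 * 0.5 * 1.0 + 2.0                # E[V^2] = 3.25
    samples = [residual_at(500.0, sampler, rng) for _ in range(4000)]
    print("simulated E[gamma] ~", sum(samples) / len(samples))
    print("theory m2/(2 m1)   =", m2 / (2 * m1))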
If $V_0$ has some arbitrary distribution, the proposition still holds in a limiting sense, under an additional assumption to prevent periodic behavior. For example, if $E[x_1] < E[t_1] < \infty$ and if $V_1$ or $x_1 - t_1$ is of nonlattice type, then the pair $(\gamma(I_n^o), w_n^o)$ converges in distribution as $n \to \infty$, and the limit $(\gamma, w)$ has independent coordinates, with the first having p.d.f. $f_\gamma$.

As an example, one can model a time-slotted system in which the time axis is divided into frames of length $M$, for some constant $M$, and service can only begin at the beginning of a frame. To that end, choose the service time random variable $x_1$ to be a possibly-random integer multiple of $M$, and suppose that the vacations are all $M$ units in duration. Then $f_\gamma$ is the uniform density on the interval $[0, M]$. By the corollary, the mean waiting time in the system is $M/2$ time units larger than the mean waiting time in the system with the same service time distribution, without the constraint that service begin at the beginning of frames.
Continuing this example further, if $M$ is an integer, we can think of a frame as being composed of $M$ unit length slots. The waiting time in the slotted system is the same as that for a stream traversing a Time Division Multiplexed (TDM) communication link, in which the stream is served during one slot per frame. In the TDM system there are $M$ arrival streams, all multiplexed onto a single line, and the queueing processes for the $M$ streams are not related. When the vacations are dropped, the model represents a Frequency Division Multiplexed (FDM) communication link of the same total capacity. In an FDM multiplexed link, the $M$ streams again do not interact. The mean waiting times are thus related by $W_{TDM} = W_{FDM} + M/2$. However, for the TDM system, the actual service time of a packet is $M - 1$ time units less than the service time in an FDM system, so the total mean system times for packets in the two systems are related by $T_{TDM} = T_{FDM} - M/2 + 1$. Note that there is not much difference between the delay in TDM and FDM systems, especially for heavily loaded systems.

Finally, to specialize the example further, if the traffic arrival process is Poisson with rate $\lambda/M$ packets/slot and if each packet is served in $M$ slots in the FDM system and in one slot in the TDM system (with only one out of every $M$ slots being available), then the FDM system is an M/GI/1 system with constant service times (i.e. an M/D/1 system). The P-K mean formula thus yields $W_{FDM} = \lambda M/(2(1 - \lambda))$, and so $W_{TDM} = W_{FDM} + M/2 = M/(2(1 - \lambda))$.
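These relations are easy to tabulate; here is a small Python sketch (parameter values chosen only for illustration):

    def tdm_fdm_waits(lam, M):
        """Mean waits in slots: W_FDM from the P-K formula for M/D/1 with
        arrival rate lam/M and service M slots, and W_TDM = W_FDM + M/2."""
        w_fdm = lam * M / (2.0 * (1.0 - lam))
        return w_fdm, w_fdm + M / 2.0

    for lam in (0.3, 0.6, 0.9):
        w_fdm, w_tdm = tdm_fdm_waits(lam, M=8)
        print(f"load {lam}: W_FDM = {w_fdm:.2f}, W_TDM = {w_tdm:.2f},"
              f" check M/(2(1-rho)) = {8 / (2 * (1 - lam)):.2f}")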
3.7 Effective Bandwidth of a Data Stream

There has been an extensive effort since the inception of packet-switched communication networks to characterize the traffic carried by networks. The work aims to account for the bursty nature of many data sources. This section describes one approach for defining the effective bandwidth of a data stream. The approach is based on the theory of large deviations.

Recently, there has been a keen interest in accounting for the observations of many studies of traffic in real networks, which indicate that traffic streams exhibit self-similar behavior. That is, the random fluctuations in the arrival rate of packets appear to be nearly statistically the same on different time scales, ranging over several orders of magnitude. We touch on this development briefly in the context of the effective bandwidth of a self-similar Gaussian source. An extensive annotated bibliography on the subject is given in [43].

One of the primary goals of information theory is to identify the effective information rate of a data source. The entropy or the rate-distortion function of a data source may be thought of as such. The theory of effective bandwidth, described in this section, has a similar goal. Another connection between the theory of effective bandwidth of data streams and information theory is that much of the theory of effective bandwidth is based on large deviations theory, which is a close cousin to Shannon's theory of information. Moreover, there is possibly some yet to be discovered direct connection between the theory of effective bandwidth and Shannon's theory of information. For example, perhaps an effective-bandwidth vs. distortion function can be computed for some nontrivial sources.

A major way that the theory of effective bandwidth differs from the Shannon theory is that it treats the flow of data bits as it would the flow of anything else, such as a fluid. The values of the bits are not especially relevant. The idea is that individual connections or data streams carried by a network may be variable in nature. The data rate may be variable in a statistical sense, in that the rate produced by the stream or connection is not a priori known and can possibly be considered to be random. It may also be varying in time: such is the case for data streams produced by variable rate source coders. Suppose many variable data streams are multiplexed together onto a line with a fixed capacity, where capacity is measured in bits per second. Because of statistical multiplexing, the multiplexer has less work to do than if all the data streams were sending data at the peak rate all the time. Therefore a given data stream has an effective bandwidth (which depends on the context) somewhere between the mean and peak rate of the stream. The use of the word bandwidth in the phrase effective bandwidth has a different meaning than that commonly used by radio engineers. A better name might be effective data rate. However, the name effective bandwidth is already deeply entrenched in the networking literature, so is used here.
To illustrate the ideas in the simplest setting first, we begin by considering a bufferless communication link, following Hui [20, 21]. The total offered load (measured in bits per second, for example) is given by
$$X = \sum_{j=1}^{J} \sum_{i=1}^{n_j} X_{ji} \qquad (3.39)$$
where $J$ is the number of connection types, $n_j$ is the number of connections of type $j$, and $X_{ji}$ is the data rate required by the $i$th connection of type $j$. Assume that the variables $X_{ji}$ are independent, with the distribution of each depending only on the index $j$. If the link capacity is $C$ then the probability of overload, $P[X > C]$, can be bounded by Chernoff's inequality:
$$\log P[X \ge C] \le \log E[e^{s(X - C)}] = s\left(\sum_{j=1}^{J} n_j \alpha_j(s) - C\right) \qquad (3.40)$$
where $\alpha_j(s)$ is given by
$$\alpha_j(s) = \frac{1}{s} \log E[e^{s X_{ji}}]. \qquad (3.41)$$
Thus, for a given value of $\gamma$, the quality of service constraint $\log P[X > C] \le -\gamma$ is satisfied if the vector $n = (n_1, \ldots, n_J)$ lies in the region
$$A = \left\{n \in \mathbb{R}_+^J : \min_{s > 0}\left[s\left(\sum_{j=1}^{J} n_j \alpha_j(s) - C\right)\right] \le -\gamma\right\} = \bigcup_{s} A(s) \qquad (3.42)$$
where
$$A(s) = \left\{n \in \mathbb{R}_+^J : \sum_{j=1}^{J} n_j \alpha_j(s) \le C - \frac{\gamma}{s}\right\}. \qquad (3.43)$$
The complement of $A$ relative to $\mathbb{R}_+^J$ is convex. Let $n^*$ be on the boundary of $A$ (think of $n^*$ as a nominal value of the vector $n$). A polyhedral subset of $A$, delineated by a hyperplane tangent to the boundary of $A$ at $n^*$, is given by $A(s^*)$, where $s^*$ achieves the minimum in (3.42). Thus, any vector $n \in \mathbb{Z}_+^J$ satisfying
$$\sum_{j=1}^{J} n_j \alpha_j(s^*) \le C - \frac{\gamma}{s^*} \qquad (3.44)$$
satisfies the quality of service constraint. Once $C$, $\gamma$, and $s^*$ are fixed, the sufficient condition (3.44) is rather simple. The number $\alpha_j(s^*)$ is the effective bandwidth of a type $j$ connection, and $C - \gamma/s^*$ is the effective capacity. Condition (3.44) is analogous to the condition in classical information theory which ensures that a particular channel is capable of conveying several independent data sources within specified average distortions, namely that the sum of the rate distortion functions evaluated at the targeted distortions should be less than or equal to the channel capacity.
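To make the admission test concrete, the following Python sketch (our own construction; the on/off rate distributions and all parameter values are illustrative assumptions, not from the notes) computes the effective bandwidths (3.41) for two connection types and grows a symmetric load while it remains inside $A(s)$ of (3.43) for some $s$ on a grid, in the spirit of the union in (3.42):

    import math

    def alpha_onoff(s, peak, p_on):
        """Effective bandwidth (3.41) of an on/off rate: X = peak w.p. p_on, else 0."""
        return math.log(1.0 - p_on + p_on * math.exp(s * peak)) / s

    def in_A_of_s(n, alphas, C, gamma, s):
        """Membership in A(s) of (3.43): sum_j n_j alpha_j(s) <= C - gamma/s."""
        return sum(nj * a(s) for nj, a in zip(n, alphas)) <= C - gamma / s

    alphas = (lambda s: alpha_onoff(s, peak=1.0, p_on=0.4),
              lambda s: alpha_onoff(s, peak=2.0, p_on=0.2))
    C, gamma = 100.0, math.log(1000.0)      # target P[X >= C] <= 10**-3
    s_grid = [k / 10.0 for k in range(1, 101)]
    m = 0
    while any(in_A_of_s((m + 1, m + 1), alphas, C, gamma, s) for s in s_grid):
        m += 1
    print("largest admissible symmetric load (m, m): m =", m)
    print("effective bandwidths at s = 1:", [round(a(1.0), 3) for a in alphas])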
As long as the random variables $X_{ji}$ are not constant, the function $\alpha_j$ is strictly increasing in $s$, ranging from the mean, $E[X_{ji}]$, as $s \to 0$, to the peak (actually the essential supremum, $\sup\{c : P[X_{ji} > c] > 0\}$) of $X_{ji}$. Note that the effective bandwidth used depends on the variable $s^*$. Such dependence is natural, for there is a tradeoff between the degree of statistical multiplexing utilized and the probability of overload, and the choice of the parameter $s^*$ corresponds to selecting a point along that tradeoff curve. Roughly speaking, as the constraint on the overflow probability becomes more severe, a larger value of $s^*$ is appropriate. For example, if $\gamma$ is very large, then the sets $A(s)$ are nonempty only for large $s$, so that the choice of $s^*$ is also large, meaning that the effective bandwidths will be near the peak values.

The set $A$ defined in (3.42) is only an inner bound to the true acceptance region for the constraint $\log P[X > C] \le -\gamma$. However, if $C$ and $\gamma$ both tend to infinity with their ratio fixed, then the true acceptance region, when scaled by dividing by $C$, converges to $A$ scaled by dividing by $C$ [23]. (The scaled version of $A$ depends only on the ratio of $C$ to $\gamma$.) This follows from Cramér's theorem (see [36]), to the effect that Chernoff's bound gives the correct exponent.
So far, only a bufferless link confronted with demand that is constant over all time has been considered. The notion of effective bandwidth can be extended to cover sources of data which vary in time, but which are statistically stationary and mutually independent [3, 37, 7]. Let $X_{ji}[a, b]$ denote the amount of traffic generated by the $i$th connection of type $j$ during an interval $[a, b]$. We assume that the process $X$ is stationary in time. Set
$$\alpha_j(s, t) = \frac{1}{st} \log E[e^{s X_{ji}[0,t]}]. \qquad (3.45)$$
For $t$ fixed, the function $\alpha_j$ is the same as the one-parameter version of $\alpha_j$ considered above, applied to the amount of work generated in an interval of length $t$. Beginning with the well-known representation of Loynes for the stationary queue length, $Q(0) = \sup_{t \ge 0}\{X[-t, 0] - tC\}$, we write
$$\log P[Q(0) > B] = \log P[\sup_{t \ge 0}\{X[-t, 0] - tC\} > B] \qquad (3.46)$$
$$\approx \sup_{t \ge 0} \log P[X[-t, 0] - tC > B] \qquad (3.47)$$
$$\approx \sup_{t \ge 0} \min_{s \ge 0}\left[st \sum_{j=1}^{J} n_j \alpha_j(s, t) - s(B + tC)\right] \qquad (3.48)$$
The symbol $\approx$ used in (3.47) and (3.48) denotes asymptotic equivalence of the logarithms of the quantities on either side of the symbol. The use of this symbol is justified by limit theorems in at least two distinct regimes: (1) the buffer size $B$ tends to infinity with $n$ and $C$ fixed, and (2) the elements of the vector $n$, the capacity $C$, and the buffer space $B$ all tend to infinity with the ratios among them fixed. Under either limiting regime, the line (3.47) is justified by the fact that the probability of the union of many rare events (with probabilities tending to zero at various exponential rates) is dominated by the probability of the most probable of those events. The line (3.48), which represents the use of the Chernoff bound as in (3.40), relies on the asymptotic exactness of the Chernoff bound (Cramér's theorem, or more general large deviations principles such as the Gärtner-Ellis theorem; see [36]).
Equations (3.46)-(3.48) suggest that the effective bandwidth to be associated with a connection of type $j$ is $\alpha_j(s^*, t^*)$, where $t^*$ achieves the supremum in (3.48), and $s^*$ achieves the minimum in (3.48) for a nominal value $n^*$ of $n$. The approximate condition for meeting the quality of service requirement $\log P[Q(0) > B] \le -\gamma$ for $n$ near $n^*$ is then
$$\sum_{j=1}^{J} n_j \alpha_j(s^*, t^*) \le C + \frac{B - \gamma/s^*}{t^*}.$$
This region scales linearly in $\gamma$ if $n^*$, $B$ and $C$ scale linearly in $\gamma$, and asymptotically becomes a tight constraint as $\gamma \to \infty$. The value $t^*$ is the amount of time that the system behaves in an unusual way to build up the queue length just before the queue length exceeds $B$. The quantity $C + (B - \gamma/s^*)/t^*$ is the effective capacity of the link. Following [35], we call $t^*$ the critical time scale. In the first limiting regime described above, $t^*$ tends to infinity, so the effective bandwidth becomes $\alpha_j(s^*, \infty)$. Use of the Gärtner-Ellis theorem of large deviations theory allows the limit theorems in the first limiting regime to be carried out for a wide class of traffic streams with memory.
The above approximation simplifies considerably in the case that the traffic process is Gaussian. In particular, suppose also that there is only one class of customers (so we drop the index $j$ and let $n$ denote the number of connections) and that for each $i$ the traffic $X_i(0, t]$ is Gaussian with mean $\mu t$ and variance $V(t)$. The corresponding effective bandwidth function is $\alpha(s, t) = \mu + sV(t)/2t$. Inserting this into (3.48) and then performing the minimization over $s$ yields that
$$\log P[Q(0) > B] \approx -n \inf_t \frac{((c - \mu)t + b)^2}{2V(t)} \qquad (3.49)$$
where $b$ is the buffer space per connection (so $B = nb$) and $c$ is the capacity per connection ($C = nc$). Suppose $V(t)/t^{2H}$ converges to a finite constant $\sigma^2$ as $t$ tends to infinity, where $H$, known as the Hurst parameter, typically satisfies $1/2 \le H < 1$. If $H = 1/2$, the process does not exhibit long range dependence. In particular, if $X$ has independent increments (so that the increments are those of a Brownian motion with drift $\mu$ and diffusion parameter $\sigma^2$), then $V(t) = \sigma^2 t$ and moreover (3.49) holds with exact equality.
If $H > 1/2$ (but still $H < 1$) then the critical time scale $t^*$ is still finite. That is, even in the presence of long range dependence, the critical time scale is still finite in the limiting regime of $C$, $B$ and $n$ tending to infinity with fixed ratios among them [35]. The value of $V(t)$ for $t$ larger than $t^*$ therefore does not influence the approximation.
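For the self-similar variance $V(t) = \sigma^2 t^{2H}$, the infimum in (3.49) and the critical time scale are easy to find numerically; a minimal Python sketch (illustrative parameters) follows. For this particular $V$, calculus gives the closed form $t^* = Hb/((1-H)(c-\mu))$, which the grid search should reproduce.

    def decay_exponent(mu, c, b, sigma2, H, t_grid):
        """Evaluate inf_t ((c-mu)t + b)^2 / (2 V(t)) with V(t) = sigma2 t^{2H},
        as in (3.49); returns (infimum value, minimizing t)."""
        return min((((c - mu) * t + b) ** 2 / (2 * sigma2 * t ** (2 * H)), t)
                   for t in t_grid)

    mu, c, b, sigma2, H = 1.0, 1.2, 2.0, 1.0, 0.8
    t_grid = [0.01 * k for k in range(1, 20_001)]
    val, t_star = decay_exponent(mu, c, b, sigma2, H, t_grid)
    print("numerical t* ~", t_star, " exponent ~", round(val, 4))
    print("closed form t* =", H * b / ((1 - H) * (c - mu)))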
See [11] and [23] for extensive surveys on effective bandwidth, and [43] for a very extensive bibliographic guide to self-similar traffic models and their use. The paper [4] presents significant bounds and analysis related to notions of equivalent bandwidth with a different terminology. Finally, the paper [22] connects the theory of effective bandwidths to thermodynamics and statistical mechanics.
3.8 Problems

3.1. A queue with customers arriving in pairs
Customers arrive at a single-server queue two at a time. The arrival instants for pairs of customers are given by a Poisson process with rate $\lambda$. The service times of customers are independent, exponentially distributed with parameter $\mu$. Let $N(t)$ denote the number of customers in the system at time $t$. (a) The process $(N(t) : t \ge 0)$ is a continuous time pure-jump Markov process with state space $\mathbb{Z}_+$. Sketch the transition rate diagram of $N$. (b) See what can be deduced about $N$ by applying the Foster-Lyapunov stability criteria and related moment bounds for the Lyapunov function $V(x) = x$. (c) Repeat part (b) using $V(x) = \frac{x^2}{2}$.
(d) Solve for the z-transform of the equilibrium distribution and find the equilibrium distribution. Under what condition does it exist (i.e. under what condition is $N$ positive recurrent)?
(e) Let $(r_k : k \ge 0)$ and $(d_k : k \ge 0)$ be the equilibrium probabilities defined by
$r_k$ = P[there are $k$ in the system just before the arrival of a pair of customers],
$d_k$ = P[there are $k$ in the system just after the departure of a typical customer].
Express $(r_k : k \ge 0)$ and $(d_k : k \ge 0)$ in terms of $(p_k : k \ge 0)$, $\lambda$ and $\mu$. In particular, does $r_k = p_k$? or $d_k = p_k$? or $r_k = d_k$?
3.2. Token bucket regulation of a Poisson stream
A token bucket traffic regulation scheme works as follows. The packet stream to be regulated is modeled as a Poisson stream with arrival rate $\lambda$. There is a token pool that holds up to $B$ tokens. When a packet arrives, if there is a token in the token pool then the packet instantly passes through the regulator, and one of the tokens is removed from the token pool. If instead the token pool is empty, then the packet is lost. (Notice that packets are never queued.) New tokens are generated periodically, with one time unit between successive generation times. If a token is generated when the token pool is full, the token is lost. (The token pool acts like a leaky bucket.) (a) Identify an embedded discrete-time, discrete-state Markov process, and describe the one-step transition probabilities of the chain. (b) Express the fraction of packets lost (long term average) in terms of $\lambda$, $B$ and $\pi$, where $\pi$ denotes the equilibrium probability vector for your Markov chain. (You do NOT need to find $\pi$.) (c) As an approximation, suppose that the times between new token generations are independent, exponentially distributed with common mean 1. Find a fairly simple expression for the loss probability.
3.3. Extremality of constant interarrival times for G/M/1 queues
Consider two G/M/1 queueing systems with common service rate $\mu$. The first has a general interarrival distribution function $A(t)$ with mean $1/\lambda$ such that $\lambda/\mu < 1$. The second has constant interarrival times, all equal to $1/\lambda$. (Thus the interarrival distribution function in the second system is $A_d(t) = 0$ if $t < 1/\lambda$ and $A_d(t) = 1$ otherwise, and the system is called a D/M/1 system.) (a) Show that the Laplace transforms are ordered: $A_d^*(s) \le A^*(s)$ for all $s \ge 0$. (b) Show that the mean number in the system at arrival times (in equilibrium) and the mean waiting time in the system are smaller for the D/M/1 system.
3.4. Propagation of perturbations
Consider a single-server queueing system which is initially empty and for which a total of five customers arrive, at times 1, 2, 5, 13, and 14, respectively. Suppose the amounts of time required to serve the customers are 5, 2, 4, 2, 2 time units, respectively. (a) Sketch the unfinished work in the system as a function of time. Indicate the departure times of the five customers, and compute the waiting times in queue of the five customers. (b) Repeat part (a), assuming that the service time of the second customer is increased to 3 time units. (c) Repeat part (a), assuming the service time of the second customer is increased to 4 time units. (d) Describe briefly in words how in general an increase in the service time of one customer affects the waiting times of other customers in the system.
3.5. On priority M/GI/1 queues
Consider an M/GI/1 system with preemptive resume priority service. There are assumed to be two priority classes, with independent arrival streams and service requirements. Class 1 has priority over class 2. Customers of the $i$th class arrive according to a Poisson process of rate $\lambda_i$, and the service times of customers of class $i$ are independent and identically distributed. Use $X_i$ to denote a random variable with the same distribution as the service requirements for class $i$, for $i = 1$ or $i = 2$. (a) Describe what preemptive resume priority means. (b) Derive the mean time in the system, $T_1$ and $T_2$, for class 1 and class 2, respectively.
3.6. Optimality of the $\mu c$ rule
Consider an M/G/1 queue with customer classes 1 through $K$ and a nonpreemptive priority service discipline. Let $\mu_i = 1/\overline{X}_i$, where $\overline{X}_i$ denotes the mean service time for type $i$ customers, and let $c_i$, with $c_i > 0$, denote the cost per unit time of holding a class $i$ customer in the queue. For some permutation $\pi = (\pi_1, \pi_2, \ldots, \pi_K)$ of $(1, 2, \ldots, K)$, the class $\pi_1$ customers are given the highest priority, the class $\pi_2$ customers are given the next highest priority, and so on. The resulting long run average cost per unit time is $\sum_i \lambda_i c_i W_i(\pi)$, where $W_i(\pi)$ is the mean waiting time of class $i$ customers under the priority order $\pi$. (a) Show that an ordering $\pi$ minimizes the long-run average cost per unit time over all possible orderings if and only if $\mu_{\pi_1} c_{\pi_1} \ge \mu_{\pi_2} c_{\pi_2} \ge \cdots \ge \mu_{\pi_K} c_{\pi_K}$. (Hint: If $\pi$ does not satisfy the given condition, then for some $i$, $\mu_{\pi_i} c_{\pi_i} < \mu_{\pi_{i+1}} c_{\pi_{i+1}}$. Appeal to the conservation of work equation to argue that by swapping $\pi_i$ and $\pi_{i+1}$, an ordering with strictly smaller cost than $\pi$ is obtained. You still also need to figure out how to prove both the "if" and "only if" portions of the statement.) (b) Write a brief intuitive explanation for the result of part (a).
3.7. A queue with persistent customers
Consider a queue with feedback as shown, where $\lambda, D > 0$ and $0 \le b \le 1$. New arrivals occur according to a Poisson process of rate $\lambda$. The service time of a customer for each visit to the queue is the constant $D$. Upon a service completion, the customer is returned to the queue with probability $1 - b$.

[Figure: a single queue with constant service time $D$; new arrivals at rate $\lambda$; a served customer departs with probability $b$ and is fed back to the queue with probability $1 - b$.]

(a) Under what condition is the system stable? Justify your answer.
(b) Suppose the service order is FIFO, except that a returning customer is able to bypass all other customers and begin a new service immediately. Denote this by PR, for "priority to returning," service order. Express the mean total system time of a customer, from the time it arrives until the time it leaves the server for the last time, in terms of $\lambda$, $b$, and $D$. (Hint: A geometrically distributed random variable with parameter $p$ has first moment $\frac{1}{p}$ and second moment $\frac{2-p}{p^2}$.)
(c) If instead the service within the queue were true FIFO, so that returning customers go to the end of the line, would the mean total time in the system be larger, equal, or smaller than for PR service order? Would the variance of the total time in the system be larger, equal, or smaller than for PR service order?
3.8. A discrete time M/GI/1 queue
Consider a single server queue in discrete time. Suppose that during each time slot, one customer arrives with probability $p$, and no customers arrive with probability $1 - p$. The arrival events for different slots are mutually independent. The service times are independent of the arrival times and of each other, and the service time of any given customer is equally likely to be 1, 2, 3, 4, 5 or 6 slots. Find $W$, the mean number of complete slots spent in the queue by a typical customer (don't include time in service). Show your reasoning. Also identify an embedded Markov chain for the system.
3.9. On stochastic ordering of sampled lifetimes
Let $f$ and $g$ be two different probability density functions with support $\mathbb{R}_+$, and with the same finite mean, $m_1$. Let $f_L$ and $g_L$ denote the corresponding sampled lifetime densities: $f_L(x) = \frac{x f(x)}{m_1}$ and $g_L(x) = \frac{x g(x)}{m_1}$. Show that it is impossible that $f_L \preceq g_L$. (We write $f_L \preceq g_L$ to denote that a random variable with pdf $f_L$ is stochastically smaller than a random variable with pdf $g_L$. Note that the mean of $f_L$ does not have to equal the mean of $g_L$.)
3.10. Effective bandwidth, bufferless link
Consider a bufferless link of fixed capacity $C = 200$ serving connections of 2 types. The data rate of each connection is random, but constant in time. The data rate required by a connection of type 1 is uniformly distributed over $[0,2]$, the data rate required by a connection of type 2 is exponentially distributed with mean 1, and requirements for different connections are mutually independent. (a) Calculate the effective bandwidth functions $\alpha_1(s)$ and $\alpha_2(s)$. (b) Find the largest integer $n$ so that the Chernoff inequality implies that the blocking probability is less than or equal to 0.001 for a nominal load of $n$ connections of each type. Show your work for full credit! (Hint: Write a short computer program for this problem. For a given value of $n$ the Chernoff bound is computed by a minimization over $s$. Then $n$ can be adjusted by binary search, increasing $n$ if the corresponding overflow probability is too small, and decreasing $n$ if the corresponding overflow probability is too large.) (c) Compute the effective bandwidths $\alpha_1(s^*)$, $\alpha_2(s^*)$, and the effective capacity $C + \log(0.001)/s^*$, where $s^*$ denotes the optimizing value of $s$ in the Chernoff bound. Which of the two effective bandwidths is larger? Sketch the corresponding acceptance region $A(s^*)$. (Hint: The nominal load point $(n, n)$ should be contained in the region $A(s^*)$.)
3.11. Effective bandwidth for a buffered link and long range dependent Gaussian traffic
Consider a link with buffer size $B = 100$ and fixed capacity $C = 200$, serving connections of 2 types. The amount of data offered by a connection of type $i$ over an interval of length $t$ is assumed to be Gaussian with mean $t$ and variance $V_i(t) = t^{2H_i}$, where $H_i$ is the Hurst parameter. Assume $H_1 = 0.5$ and $H_2 = 0.9$. (So the class 2 connections have long-range dependence.) (a) Compute the largest value $n$ so that the Chernoff approximation for the overflow probability for $n$ connections of each type (simultaneously) is less than or equal to $\exp(-\gamma) = 0.001$. (Hint: Write a short computer program for this problem. For a given value of $n$ and $t$ the approximation is computed by a minimization over $s$, which can be done analytically. The maximization over $t$ can be done numerically, yielding the approximate overflow probability for the given value of $n$. Finally, $n$ can be adjusted by binary search as in the previous problem.) Also, compute the effective bandwidths $\alpha_1(s^*, t^*)$, $\alpha_2(s^*, t^*)$, the critical time $t^*$, and the effective capacity $C_{\text{eff}} = C + B/t^*$. Sketch the corresponding acceptance region $A(s^*, t^*)$. (Hint: The nominal load point $(n, n)$ should be contained in the region $A(s^*, t^*)$. Which of the two effective bandwidths is larger?) (b) Redo part (a) for overflow probability $\exp(-\gamma) = 0.00001$, and comment on the differences between the answers to parts (a) and (b).
3.12. Extremal equivalent bandwidth for fixed mean and range
Suppose $X$ is a random variable representing the rate of a connection sharing a bufferless link. Suppose the only information known about the probability distribution of $X$ is that $E[X] = 1$ and $P[0 \le X \le 2] = 1$. For a given $s > 0$ fixed, what specific choice of the distribution of $X$ meeting these constraints maximizes the equivalent bandwidth $\alpha(s)$, where $\alpha(s) = \frac{\ln(E[e^{sX}])}{s}$?
Chapter 4
Multiple Access
4.1 Slotted ALOHA with Finitely Many Stations
Suppose $M$ stations wish to communicate with each other via a shared broadcast channel. Suppose that time is divided into equal length slots such that the length of each slot is the amount of time needed for a station to transmit one packet of information.

Consider a particular station and a particular time slot. If the station does not have a packet at the beginning of the slot, then it receives a new packet in the slot with probability $q_a$. A new packet (one that arrives in the slot) cannot be transmitted in the slot. If the station already has a packet to transmit at the beginning of the slot, then it transmits the packet with probability $q_x$ in the slot, and the station is not eligible to receive any new packets in the slot. The events of packet arrival and transmission at the different stations are mutually conditionally independent, given the status of the stations at the beginning of the slot.
If no station transmits a packet in the slot, the channel is said to be idle during the slot, and if two or more stations transmit during the slot, a collision is said to occur. If either the slot is idle or a collision occurs in the slot, then no packet is successfully transmitted. If exactly one station transmits a packet in the slot, then that transmission is called a success, and the packet leaves the system. The station making the successful transmission is eligible to receive a new packet in the next time slot, while all other stations with packets carry their packets over to the next slot.

The parameters of the model are thus $M$, $q_a$ and $q_x$. Several aspects of the model involve choices that were made largely for simplicity and tractability. For example, we might have allowed a station to continue receiving packets, even after it already had one to send. However, in some systems the successful transmission of the first packet in a queue at a station is the most crucial, because that packet can reserve slots (possibly on another channel) for the packets in queue behind it. Also implicit in the model is that a station can determine by the beginning of the next slot whether its transmission was successful, so that it knows whether to transmit the same packet again, or to switch back to the mode of receiving a new packet. One way to modify the current model to account for a delay of, say, $R$ time slots would be to time-interleave $R$ independent versions of the system. On the other hand, for networks of small physical size, feedback comes so soon that an unsuccessful transmission can be stopped well before the whole packet is transmitted. That is the idea of collision detect.

A final aspect of the model we should comment on is that new packets are treated the same way as packets that have already been transmitted. An alternative is to transmit a new packet with probability one in the first slot after it arrives, and if such transmission is not successful then the packet should be retransmitted with probability $q_r$ in subsequent slots, until it is successfully transmitted. Thus, we have adopted a delayed first transmission (DFT) rather than immediate first transmission (IFT) model. The qualitative behavior and exact analysis of the two types of models are similar. The IFT model can lead to lower mean delays for lightly loaded systems. On the other hand, the DFT model is less sensitive to variations in the statistics of the arrival process.
Quantities of interest include the mean throughput and mean delay of the system. Since the stations are interchangeable in the model, the process $(N(k) : k \ge 0)$ is a Markov process, where $N(k)$ is the number of stations with packets at the beginning of slot $k$. Given $N(k) = n$, the conditional probability of a successful transmission in slot $k$, $P_S(n, q_x)$, is given by $P_S(n, q_x) = n q_x (1 - q_x)^{n-1}$, and the conditional probability that there are $j$ arrivals is $b(M - n, q_a, j)$, where $b(\cdot)$ is the binomial mass function
$$b(M - n, q_a, j) = \begin{cases} \binom{M-n}{j} q_a^j (1 - q_a)^{M-n-j} & 0 \le j \le M - n \\ 0 & \text{else.} \end{cases}$$
Furthermore, whether there is a successful transmission is conditionally independent of the number of new arrivals, given $N(k) = n$. Thus, the one-step transition probabilities of the process can be expressed as follows:
$$P[N(k+1) = n + i \mid N(k) = n] = P_S(n, q_x)\, b(M - n, q_a, i + 1) + (1 - P_S(n, q_x))\, b(M - n, q_a, i).$$
The equilibrium probability distribution $\pi$ of the Markov process $(N(k))$ can be easily computed numerically by matrix iteration. The mean number of packets in the system and the mean throughput are given by
$$\bar N = \sum_{n=0}^{M} n \pi_n \qquad \text{and} \qquad S = \sum_{n=0}^{M} \pi_n P_S(n, q_x) = (M - \bar N) q_a,$$
respectively, and by Little's law the mean delay is $\bar N / S$.
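The matrix iteration is straightforward to carry out; the following Python sketch (parameter values are illustrative only) computes the equilibrium distribution by repeated multiplication and reports $\bar N$, $S$, and the mean delay:

    import math

    def ps(n, qx):
        """P_S(n, q_x): probability of exactly one transmission among n."""
        return n * qx * (1.0 - qx) ** (n - 1) if n >= 1 else 0.0

    def b(m, q, j):
        """Binomial(m, q) mass at j, zero outside 0..m."""
        return math.comb(m, j) * q ** j * (1 - q) ** (m - j) if 0 <= j <= m else 0.0

    def aloha_stats(M, qa, qx, iters=5_000):
        """Equilibrium of N(k) by matrix iteration; returns (Nbar, S)."""
        P = [[ps(n, qx) * b(M - n, qa, m - n + 1)
              + (1 - ps(n, qx)) * b(M - n, qa, m - n)
              for m in range(M + 1)] for n in range(M + 1)]
        pi = [1.0 / (M + 1)] * (M + 1)
        for _ in range(iters):
            pi = [sum(pi[n] * P[n][m] for n in range(M + 1))
                  for m in range(M + 1)]
        nbar = sum(n * pi[n] for n in range(M + 1))
        return nbar, (M - nbar) * qa

    nbar, S = aloha_stats(M=20, qa=0.02, qx=0.1)
    print(f"Nbar = {nbar:.2f}, S = {S:.3f}, mean delay = {nbar / S:.1f} slots")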
The dynamical behavior of the system can be better understood by considering the drift of the random process $(N(k))$. The drift at instant $k$ is defined to be $E[N(k+1) - N(k) \mid N(k)]$. It is a function of $N(k)$, and is equal to the expected change in $N$ over slot $k$, given $N(k)$. Since the change in $N$ is the number of arrivals minus the number of departures, the drift can be expressed as $d(N(k))$ where $d(n) = (M - n) q_a - P_S(n, q_x)$.

It is convenient to think of the function $d(n)$ as a one-dimensional vector field. For convenience, consider $d(n)$ for real values of $n$ between 0 and $M$. The zeros of $d(n)$ are called equilibrium points, and an equilibrium point is called stable if $d(n)$ makes a down-crossing at the equilibrium point. We expect the sample paths of the Markov process to spend much of their time near the stable equilibrium points of $(N(k))$. If $q_x$ is sufficiently small for given values of $M$ and $q_a$, then there is only a single equilibrium point, which is stable. However, for some parameter values there can be three equilibrium points, two of which are stable. One of the stable points occurs for a large value of $n$, corresponding to large delay and small throughput, whereas the other stable equilibrium point occurs for a much smaller value of $n$. The existence of stable equilibrium points at undesirable values of $n$ indicates the following potential problem with the operation of the system: for a long period of time the system may stay near the desirable stable equilibrium point, but due to occasional large stochastic fluctuations it may move to near the undesirable point. A drastic and undesirable qualitative change in the performance of the stochastic system can occur rather suddenly.
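The equilibrium points are easy to locate numerically. The following Python sketch (the parameter values are our own illustrative choice, picked to exhibit bistability) scans $d(n)$ on a grid and classifies its zero crossings:

    def d(n, M, qa, qx):
        """Drift d(n) = (M - n) qa - P_S(n, qx), for real n in [0, M]."""
        return (M - n) * qa - n * qx * (1.0 - qx) ** (n - 1)

    M, qa, qx = 100, 0.003, 0.08
    step = 0.01
    prev = d(0.0, M, qa, qx)
    for k in range(1, int(M / step) + 1):
        cur = d(k * step, M, qa, qx)
        if prev > 0 >= cur:
            print(f"stable equilibrium near n = {k * step:.2f}")
        elif prev <= 0 < cur:
            print(f"unstable equilibrium near n = {k * step:.2f}")
        prev = cur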
In many dynamical systems, a control mechanism based on feedback can be used to eliminate
undesirable stable equilibrium points. That method is investigated for the ALOHA system in the
next two sections.
4.2 Slotted ALOHA with Infinitely Many Stations

No Control

Let $A_k$ denote the number of new packets to arrive in slot $k$. Assume that the random variables $(A_k : k \ge 0)$ are independent, identically distributed with $E[A_k] = \lambda$ and $\mathrm{Var}(A_k) < \infty$. For example, $A_k$ might have a Poisson distribution, which arises in the limit as the number of stations tends to infinity with the aggregate rate of traffic generation set to $\lambda$. Suppose that each station with a packet at the beginning of a slot transmits in the slot with probability $q_x$. If there are $n$ such stations then the probability of a success, $P_S(n, q_x)$, is given by $P_S(n, q_x) = n q_x (1 - q_x)^{n-1}$. Since $\lim_{n\to\infty} P_S(n, q_x) = 0$, there is an $n_o$ so large that $P_S(n, q_x) \le \lambda/2$ for $n \ge n_o$.

We shall argue that this implies that $P[\lim_{k\to\infty} N(k) = \infty] = 1$. Conditioned on the event $\{N(k) \ge n_o\}$, the increment $N(k+1) - N(k)$ is stochastically greater than or equal to¹ a random variable $u$, where $u = A_1 - B$, and $B$ is a random variable independent of $A_1$ such that $P[B = 1] = \lambda/2 = 1 - P[B = 0]$. Note that $E[u] = \lambda/2 > 0$. Moreover, given $N(k) \ge n_o$, the future increments of $N$ are all stochastically larger than $u$ until, if ever, the backlog becomes smaller than $n_o$. Furthermore, there is a positive probability that the backlog will never again be less than $n_o$, because of the following fact: if $u_1, u_2, \ldots$ are independent, identically distributed random variables with $E[u_i] > 0$, then $P[\min_{k \ge 1}(u_1 + \cdots + u_k) \le -1] < 1$ (otherwise the Strong Law of Large Numbers would be violated). Thus, conditioned on the event $\{N(k) \ge n_o\}$, there is a positive probability that the backlog will converge to infinity and never again cross below $n_o$. Since the backlog must lie above $n_o$ infinitely often, it follows that $P[\lim_{k\to\infty} N(k) = +\infty] = 1$.

¹ "Stochastically greater than or equal to" refers to the notion of stochastic comparison, discussed in Section 3.5.
Centralized (Unrealistic) Control

It is easy to show that for $n$ fixed, $P_S(n, q_x)$ is maximized by $q_x = 1/n$, that $P_S(n, 1/n) = (1 + 1/(n-1))^{-(n-1)} > e^{-1}$, and that $\lim_{n\to\infty} P_S(n, 1/n) = e^{-1}$. We can try to apply this result to a collision access system as follows. Let $N(k)$ denote the number of stations with packets to transmit at the beginning of slot $k$, and suppose that each such station transmits its packet in the slot with probability $1/N(k)$. Then if $N(k)$ is not zero, the conditional probability of a successful transmission in the slot given $N(k)$ is greater than $e^{-1}$. Thus the delay in the system is no larger than that for a discrete time M/M/1 bulk-arrival system in which the service times are geometrically distributed with parameter $e^{-1}$. (The two processes can be constructed on the same probability space so that the M/M/1 bulk system always has at least as many packets in it as the ALOHA system.) The mean delay will thus be finite if $\lambda < e^{-1}$ and the variance of the number of arrivals in each slot is finite. Unfortunately, this retransmission procedure is far from being practical because it requires that all the stations be aware of the number of stations with packets.
Decentralized Control

The stations can implement a practical variation of the centralized control scheme by estimating the number of stations with packets. Suppose, for example, that immediately after slot $k$ the stations learn the value of $Z(k)$, where
$$Z(k) = \begin{cases} 0 & \text{if the slot was idle} \\ 1 & \text{if there was a successful transmission in the slot} \\ e & \text{if there was a collision in the slot.} \end{cases}$$
The actual number of packets in the system evolves according to the equation
$$N(k+1) = N(k) - I_{\{Z(k)=1\}} + A(k), \qquad (4.1)$$
where $A(k)$ is the number of packets to arrive during slot $k$. By definition the term $I_{\{Z(k)=1\}}$ is one if $Z(k) = 1$ and zero otherwise, and the stations learn the value of the term because they observe $Z(k)$. On the other hand $A(k)$ is independent of $N(k)$ and all the feedback up to and including $Z(k)$. Thus, a reasonable way for the stations to update an estimate $\hat N(k)$ of $N(k)$ is by the following update equation:
$$\hat N(k+1) = (\hat N(k) - I_{\{Z(k)=1\}} + c(Z(k)))^+ + \lambda, \qquad (4.2)$$
where $c = (c(0), c(1), c(e))$ is to be chosen so that the estimator error $D(k)$ defined by $D(k) = \hat N(k) - N(k)$ will tend to remain small for all $k$. Combining the equations for $N(k+1)$ and $\hat N(k+1)$, we see that if $\hat N(k) > 1 - \min\{c(0), c(1), c(e)\}$, then $D(k+1) = D(k) + (\lambda - A(k)) + c(Z(k))$.

Suppose that all stations with a packet to transmit at the beginning of slot $k$ transmit in the slot with probability $\min(1, 1/\hat N(k))$. The drift of $(D(k))$ during slot $k$ is given by
$$E[D(k+1) - D(k) \mid N(k), \hat N(k)] = c(0)P[Z = 0 \mid N(k), \hat N(k)] + c(1)P[Z = 1 \mid N(k), \hat N(k)] + c(e)P[Z = e \mid N(k), \hat N(k)].$$
It is not hard to show that if either $N(k)$ or $D(k)$ is sufficiently large then
$$P[Z = 0 \mid N(k), \hat N(k)] \approx \exp(-G(k)),$$
$$P[Z = 1 \mid N(k), \hat N(k)] \approx G(k)\exp(-G(k)),$$
$$P[Z = e \mid N(k), \hat N(k)] \approx 1 - (1 + G(k))\exp(-G(k)),$$
where
$$G(k) = \begin{cases} \frac{N(k)}{\hat N(k)} & \text{if } \hat N(k) \neq 0 \\ 0 & \text{if } \hat N(k) = 0. \end{cases}$$
Therefore, the drift of $D$ satisfies
$$E[D(k+1) - D(k) \mid N(k), \hat N(k)] \approx d(G(k)),$$
where
$$d(G) = c(0)\exp(-G) + c(1)G\exp(-G) + c(e)(1 - (1 + G)\exp(-G)).$$
We would like the sign of the drift of $D(k)$ to be opposite the sign of $D(k)$, in order that the error drift towards zero. Now $D(k)$ is positive if and only if $\hat N(k)$ is too large (i.e. larger than $N(k)$), which means that $G(k)$ is less than one. Hence we want to choose the constants $(c(0), c(1), c(e))$ so that $d(G) < 0$ when $G < 1$. Similarly we want $d(G) > 0$ when $G > 1$. By continuity this forces $d(1) = 0$. Since we have only one equation (and two other conditions) for the three constants, there is some flexibility in choosing the constants. One suitable choice is $c(0) = -(e-2)/(e-1) \approx -0.418$, $c(1) = 0$ and $c(e) = 1/(e-1) \approx 0.582$. To summarize, with this choice of $(c(0), c(1), c(e))$, if the stations use the update formula (4.2) and transmit with probability $1/\hat N(k)$ in slot $k$ when they have a packet, then the estimator error $D(k)$ will tend to drift to zero.
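The following Python sketch (our own simulation, with an illustrative arrival rate) implements the update (4.2) with this choice of $c$, Poisson arrivals, and transmission probability $\min(1, 1/\hat N(k))$, and reports the time-average backlog and estimation error:

    import math
    import random

    def poisson(lam, rng):
        """Sample Poisson(lam) by multiplying uniforms (Knuth's method)."""
        n, t, thresh = 0, rng.random(), math.exp(-lam)
        while t > thresh:
            n += 1
            t *= rng.random()
        return n

    def simulate(lam=0.3, slots=100_000, seed=3):
        """Slotted ALOHA with the estimator update (4.2)."""
        c0, c1, ce = -(math.e - 2) / (math.e - 1), 0.0, 1 / (math.e - 1)
        rng = random.Random(seed)
        N, Nhat = 0, 1.0
        backlog_sum = err_sum = 0.0
        for _ in range(slots):
            p = min(1.0, 1.0 / Nhat)
            transmitters = sum(rng.random() < p for _ in range(N))
            z = transmitters if transmitters <= 1 else 2   # 2 stands for 'e'
            success = 1 if z == 1 else 0
            N += poisson(lam, rng) - success
            Nhat = max(Nhat - success + (c0, c1, ce)[z], 0.0) + lam
            backlog_sum += N
            err_sum += abs(Nhat - N)
        return backlog_sum / slots, err_sum / slots

    nbar, dbar = simulate()
    print(f"mean backlog {nbar:.2f}, mean |Nhat - N| {dbar:.2f}")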
[Figure 4.1: Drift field for ALOHA with distributed control.]

As a check on our calculations we can easily do an exact calculation of the drift vector field of the Markov process $(N(k), \hat N(k))$, given by
$$d(n, \hat n) = E\left[\left.\begin{pmatrix} N(k+1) - N(k) \\ \hat N(k+1) - \hat N(k) \end{pmatrix}\, \right|\, \begin{pmatrix} N(k) \\ \hat N(k) \end{pmatrix} = \begin{pmatrix} n \\ \hat n \end{pmatrix}\right].$$
This vector field is plotted in Figure 4.1 for $\lambda = 0.20$ and the above value of $c$.
The system should perform almost as well as the system with centralized control. The following proposition shows that the system with decentralized control is stable for the same arrival rates as the system with centralized control.

Proposition 4.2.1 Suppose $(c(0), c(1), c(e))$ are chosen so that $d(G) < 0$ if $G < 1$ and $d(G) > 0$ if $G > 1$. Suppose $E[A_k] = \lambda < e^{-1}$, and suppose for some constants $a$ and $b$ that $P[A_k \ge x] \le a\exp(-bx)$ for all $x > 0$. Finally, suppose the system begins in the initial state $N(0) = \hat N(0) = 0$. Then there exist positive constants $A$ and $B$ with $B < 1$ so that, for all $k$, $P[N(k) \ge n] \le AB^n$.
4.3 Bound Implied by Drift, and Proof of Proposition 4.2.1

Proposition 4.2.1 will be proved in this section by a general method which we call the Lyapunov method of stochastic stability. The idea is to apply bounds implied by drift analysis (explained next) to a one-dimensional process which is a function of the main process under study. The function mapping the original process into the one-dimensional process is called the Lyapunov function. The method offers slick proofs, but, in typical applications, a good understanding of the original process, and of why it is stable, is needed in order to construct the Lyapunov function.
Bounds Implied by Drift Analysis

Let $Y_0, Y_1, \ldots$ be a sequence of random variables. The drift at instant $k$ is defined by $E[Y_{k+1} - Y_k \mid Y_0, Y_1, \ldots, Y_k]$. Note that this drift is itself random, being a function of $Y_0, \ldots, Y_k$. For notational convenience, we shall write $\mathcal{F}_k$ instead of $Y_0, Y_1, \ldots, Y_k$ when conditioning on these variables. Thus, the drift is $E[Y_{k+1} - Y_k \mid \mathcal{F}_k]$. If the sequence $Y_0, Y_1, \ldots$ is a Markov sequence, then the drift at instant $k$ is a function of $Y_k$ alone.

Let $a$ be a constant with $a < +\infty$, and let $\epsilon_o$, $\alpha$ and $D$ be strictly positive finite constants. Consider the following conditions on the random sequence:

Condition C.1 The drift at instant $k$ is less than or equal to $-\epsilon_o$ whenever $Y_k \ge a$, for $k \ge 0$. Equivalently,²
$$E[(Y_{k+1} - Y_k + \epsilon_o)I_{\{Y_k \ge a\}} \mid \mathcal{F}_k] \le 0 \qquad k \ge 0.$$

One might expect that Condition C.1 alone is enough to guarantee that $Y_k$, at least for large values of $k$, does not tend to be much larger than the constant $a$ (if $a$ is finite) or that $Y_k$ tends to $-\infty$ in some sense as $k \to \infty$ (if $a = -\infty$). However, an additional condition is needed (see homework exercise). It is sufficient that the increments of the sequence not be too large. That is the rough idea of Condition C.2:

Condition C.2 There is a random variable $Z$ with $E[\exp(\alpha Z)] = D$ so that
$$P[|Y_{k+1} - Y_k| \ge x \mid \mathcal{F}_k] \le P[Z \ge x] \qquad x > 0.$$

Let $c$, $\eta$ and $\rho$ be constants such that
$$c = \frac{E[\exp(\alpha Z)] - (1 + E[\alpha Z])}{\alpha^2}, \qquad 0 < \eta \le \alpha, \qquad \eta < \epsilon_o/c, \quad\text{and}\quad \rho = 1 - \eta\epsilon_o + c\eta^2.$$
Then $\rho < 1$ and $D \ge 1$, and it is not difficult to show that Conditions C.1 and C.2 imply Condition D.1, stated as follows:

Condition D.1 $E[\exp(\eta(Y_{k+1} - Y_k))I_{\{Y_k \ge a\}} \mid \mathcal{F}_k] \le \rho$ for $k \ge 0$.

Also, Condition C.2 directly implies Condition D.2, stated as follows:

Condition D.2 $E[\exp(\eta(Y_{k+1} - a))I_{\{Y_k < a\}} \mid \mathcal{F}_k] \le D$ for $k \ge 0$.

Proposition 4.3.1 Conditions D.1 and D.2 (and therefore also Conditions C.1 and C.2) imply that
$$P[Y_k \ge b \mid Y_0] \le \rho^k \exp(\eta(Y_0 - b)) + \frac{1 - \rho^k}{1 - \rho} D\exp(-\eta(b - a)). \qquad (4.3)$$

² The inequality should be understood to hold except possibly on a set of probability zero, because conditional expectations are only defined up to events of probability zero.
Note that the first term on the right-hand side of (4.3) tends to zero geometrically fast as $k \to \infty$, and the second term tends to zero exponentially fast as $b \to \infty$. Note also in the special case $Y_0 \le a$ that (4.3) yields $P[Y_k \ge b \mid Y_0] \le \frac{D\exp(-\eta(b-a))}{1 - \rho}$.

Proof. Note that $E[\exp(\eta Y_{k+1}) \mid Y_0] = E[E[\exp(\eta Y_{k+1}) \mid \mathcal{F}_k] \mid Y_0]$. However,
$$E[\exp(\eta Y_{k+1}) \mid \mathcal{F}_k] = E[\exp(\eta(Y_{k+1} - Y_k))I_{\{Y_k \ge a\}} \mid \mathcal{F}_k]\exp(\eta Y_k) + E[\exp(\eta(Y_{k+1} - a))I_{\{Y_k < a\}} \mid \mathcal{F}_k]\exp(\eta a)$$
$$\le \rho\exp(\eta Y_k) + D\exp(\eta a),$$
so that
$$E[\exp(\eta Y_{k+1}) \mid Y_0] \le \rho E[\exp(\eta Y_k) \mid Y_0] + D\exp(\eta a). \qquad (4.4)$$
Use of (4.4) and argument by induction on $k$ yields that
$$E[\exp(\eta Y_k) \mid Y_0] \le \rho^k \exp(\eta Y_0) + \frac{1 - \rho^k}{1 - \rho} D\exp(\eta a).$$
Combining this inequality with the simple Chernoff bound,
$$P[Y_k \ge b \mid Y_0] = P[\exp(\eta Y_k) \ge \exp(\eta b) \mid Y_0] \le \exp(-\eta b)E[\exp(\eta Y_k) \mid Y_0],$$
establishes (4.3).
Proof of Proposition 4.2.1

In this section we explain how to prove Proposition 4.2.1 by applying Proposition 4.3.1. The drift of the backlog at instant $k$ (given the state of the system) is $E[N(k+1) - N(k) \mid N(k), \hat N(k)]$. If this drift were less than some fixed negative constant whenever $N(k)$ is sufficiently large, then Proposition 4.3.1 could be directly applied to prove Proposition 4.2.1. By comparison with the centralized control scheme, we see that the drift is negative if the estimate $\hat N(k)$ is close enough to $N(k)$. It is possible that $\hat N(k)$ is not close to $N(k)$ (so that $N(k)$ itself doesn't have negative drift), but in that case the estimation error should tend to decrease.

To turn this intuition into a proof, let $Y_k = V(N(k), \hat N(k))$ where $V(n, \hat n) = n + \phi(|\hat n - n|)$ with
$$\phi(u) = \begin{cases} \frac{u^2}{K} & \text{if } 0 \le u \le K^2 \\ 2Ku - K^3 & \text{if } u > K^2 \end{cases}$$
for some positive constant $K$ to be specified later. Here $V$ is the Lyapunov function which we use to convert the problem of establishing stability for the two-dimensional process $(N(k), \hat N(k))$ into the problem of establishing stability for a one-dimensional process. Thus, we argue that if $K$ is sufficiently large, then the random sequence $(Y_k)$ satisfies Conditions C.1 and C.2 for some finite constants $a$, $\epsilon_o$, $\alpha$, and $D$, where conditioning on $\mathcal{F}_k$ means conditioning on $((N(j), \hat N(j)) : 0 \le j \le k)$. Observe that $Y_k = N(k) + \phi(|\hat N(k) - N(k)|)$, so that the drift of $Y_k$ is the sum of the drift of $N(k)$ and the drift of $\phi(|\hat N(k) - N(k)|)$. To prove that Condition C.1 holds, note that if $Y_k$ is large, and if $|\hat N(k) - N(k)|$ is small, then $Y_k$ inherits negative drift from $N(k)$, and the drift of $\phi(|\hat N(k) - N(k)|)$ is small because the function $\phi$ is nearly flat near zero. On the other hand, if $Y_k$ is large, and if $|\hat N(k) - N(k)|$ is also large, then $Y_k$ inherits large negative drift from $\phi(|\hat N(k) - N(k)|)$. Thus, in either case, when $Y_k$ is large it has negative drift, so Condition C.1 is satisfied. Details are left to the reader. Condition C.2 is easily established; a key point is that the derivative of the function $\phi$ is bounded by $2K$, so
$$|Y_{k+1} - Y_k| \le (2K + 1)|N(k+1) - N(k)| + 2K|\hat N(k+1) - \hat N(k)|. \qquad (4.5)$$
Thus, Proposition 4.3.1 can be applied to $(Y_k : k \ge 0)$, and since $N(k) \le Y_k$, Proposition 4.2.1 follows immediately.
4.4 Probing Algorithms for Multiple Access

The decentralized control strategy described above has obvious deficiencies. For example, if a collision is observed, then until there is a success, it is clear that there are at least two backlogged stations. The update procedure would need a larger state space to take advantage of such information. One could imagine each station computing the conditional distribution of the number of backlogged stations, using a Bayesian update formula at each step (in fact this is described in the paper of A. Segall). This would be realizable if the state space is truncated to a finite set of states for the purpose of the estimator. Some approximate equations motivated by this procedure were given by Rivest. However, a key observation is that any given individual station knows not only the feedback information, but also the times that it itself transmitted in the past. This information can serve to distinguish the station from other stations, thereby helping to resolve conflicts. Algorithms called probing algorithms or tree algorithms take advantage of this additional information.

First the basic collision resolution algorithm (CRA) is described. It operates in discrete time with the same 0-1-e feedback model considered for controlled ALOHA. Suppose there are initially $k$ stations with packets to transmit. In the first slot, all stations transmit. If $k = 0$ or $k = 1$, then at the end of the slot it is clear that all packets are successfully sent. If $k \ge 2$ then a collision occurs in the first slot.

After a collision, all transmitters involved flip a fair coin with possible outcomes 0 or 1. Those that obtain a 0 from their coin flip ("those flipping 0" for short) retransmit in the very next slot, and those flipping 1 retransmit in the first slot after the algorithm is done processing the group of stations that flipped a 0. An example involving four stations with packets, A, B, C and D, is pictured in Figure 4.2. Suppose after the initial collision that stations B and C flip 0. Then both B and C transmit in the second slot, causing another collision. If then C flips 0 and B flips 1, then C and B are transmitted successfully in the third and fourth slots respectively. After the fourth slot, the stations with packets A and D realize it is time to transmit again, resulting in a collision in the fifth slot, and so on.

The progress of the algorithm can be traced on a binary tree, as pictured in Figure 4.3. Each node of the tree corresponds to a single slot of the algorithm. Each terminal node of the tree constructed is labeled either 0 or 1, whereas each nonterminal node is labeled e. The algorithm corresponds to a virtual stack, as pictured in Figure 4.4. Packets all initially enter the top level, called level 0, of the stack. All packets at level 0 at the beginning of a slot are transmitted. If there is only one packet at level 0, it is thus successfully transmitted. If there are two or more packets at level 0, then all such packets flip a fair coin, and those that flip 0 stay at level 0 for the next slot, and all those that flip 1 move down to level 1. A packet that is at any other level $l$ at the beginning of a slot is not transmitted in the slot. At the end of the slot the packet moves to level $l + 1$ if there is a collision in the slot, and moves to level $l - 1$ if there is no collision (i.e. either no transmission or a success) in the slot. For actual implementation of the algorithm, a station with a packet maintains a single counter to keep track of the level of the packet in the virtual stack.
The basic CRA is now completely described. Some comments are in order. First, whenever there is a collision followed by an idle slot, the next slot will be a collision. For example, the outcomes of slots 5 and 6 in the example ensure that slot 7 will be a collision. Consequently, the CRA can be improved by skipping a step in the algorithm. However, this improvement can lead to instability if the model assumptions are slightly violated (see problem XXX). Another observation is that all packets can sometimes be successfully transmitted a slot or more before the algorithm is complete. In the example, the algorithm requires eleven slots, even though all four packets are transmitted in ten slots. The eleventh slot is necessary in order for all stations listening to the channel outputs to learn that the algorithm is indeed complete.
Let $L_n$ denote the mean number of slots needed for a CRA to process a batch of $n$ packets. Then $L_0 = L_1 = 1$. To find $L_n$ for $n \ge 2$, the outcome of the first slot is to be considered. Then,

$L_n = 1 + \sum_{k=0}^{n} \binom{n}{k} 2^{-n} E[\text{additional slots required} \mid k \text{ stations flip 0 after first slot}]$
$\quad = 1 + \sum_{k=0}^{n} \binom{n}{k} 2^{-n} (L_k + L_{n-k}) = 1 + 2 \sum_{k=0}^{n} \binom{n}{k} 2^{-n} L_k.$

Observing that $L_n$ appears on the right side with coefficient $2^{-(n-1)}$, we can solve for $L_n$, yielding the recursion:

$L_n = \frac{1 + 2 \sum_{k=0}^{n-1} \binom{n}{k} 2^{-n} L_k}{1 - 2^{-(n-1)}} \qquad (4.6)$
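As a quick numerical sketch of the recursion (4.6), the following Python fragment (with an illustrative function name) computes the first several values of $L_n$; the values $L_2 = 5$ and $L_3 = 23/3$ can be checked by hand.

from math import comb

def mean_cra_slots(n_max):
    # L[n] = mean number of slots for the basic CRA to process a batch of
    # n packets, computed via the recursion (4.6); L_0 = L_1 = 1
    L = [1.0, 1.0]
    for n in range(2, n_max + 1):
        rhs = 1.0 + 2.0 * sum(comb(n, k) * 2.0**(-n) * L[k] for k in range(n))
        L.append(rhs / (1.0 - 2.0**(-(n - 1))))
    return L

print(mean_cra_slots(6))  # L_2 = 5, L_3 = 23/3 = 7.666..., and so on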
Let $\Theta$ denote the (random) completion time given the number of packets has a Poisson distribution with mean $G$. The mean of $\Theta$ can be computed numerically using the expression

$\overline{\Theta} = \sum_{n=0}^{\infty} \frac{G^n e^{-G} L_n}{n!}.$

A more direct way is to note that

$\overline{\Theta} = E[\text{number of nodes of tree visited}]$
$\quad = 1 + 2 E[\text{number of nodes corresponding to collisions}]$
$\quad = 1 + 2 \sum_{k=0}^{\infty} 2^k \, P[\text{a given node in level } k \text{ of full binary tree corresponds to a collision}]$
$\quad = 1 + 2 \sum_{k=0}^{\infty} 2^k \left( 1 - (1 + G 2^{-k}) \exp(-G 2^{-k}) \right)$

The final sum converges quickly, as can be seen from the fact that $1 - (1+x)\exp(-x) = x^2/2 + o(x^2)$ as $x \to 0$. Thus, $\overline{\Theta}$ is finite for any $G > 0$.
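The node-counting expression lends itself to a short numerical routine. This is a sketch (names illustrative) that sums the series until the terms fall below a tolerance, which is justified since the terms eventually decay geometrically in $k$.

from math import exp

def mean_completion_time(G, tol=1e-12):
    # Theta_bar = 1 + 2 * sum_k 2^k * P[collision at a given level-k node],
    # where the number of packets reaching a level-k node is Poisson(G 2^-k)
    total, k = 1.0, 0
    while True:
        m = G * 2.0**(-k)
        term = 2.0**(k + 1) * (1.0 - (1.0 + m) * exp(-m))
        total += term
        if term < tol:
            return total
        k += 1

print(mean_completion_time(1.0))  # about 2.34; agrees with the L_n series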
4.4.1 Random Access for Streams of Arrivals

So far a method for arranging for transmissions of a fixed group of stations with packets has been described. This collision resolution algorithm can be used as a building block to construct a random access strategy for a continuous stream of arrivals of new packets. In one method, packets just arriving are first transmitted in the first slot after the batch in progress has been handled. This leads to large variations in batch size. Analysis can be based on the observation that the sequence of consecutive batch sizes forms a Markov process. A second method, known as continuous entry, is discussed in Problem xxx. A third method, known as the decoupled window strategy, is discussed in the next subsection.
4.4.2 Delay Analysis of Decoupled Window Random Access Scheme

Under the decoupled window random access scheme, packets are grouped into batches. The $k$th batch consists of those packets that arrive during the time interval $[(k-1)\Delta, k\Delta)$, where $\Delta$ is a parameter of the algorithm. Note that the distribution of the number of packets in a batch is Poisson with mean $\lambda\Delta$. Packets in each batch are processed by a separate run of the basic collision resolution algorithm. The processing time of the $k$th batch is denoted $\Theta_k$. The variables $\Theta_k$, $k \ge 0$, are independent and each has the same distribution as the completion time $\Theta$ of the basic CRA when the number of packets involved has the Poisson distribution with mean $G = \lambda\Delta$. Processing of the $k$th batch begins in the first slot after the $(k-1)$th batch is processed, or in the first slot after time $k\Delta$, whichever is later.

Let $U_k$ denote the waiting time of the $k$th batch, which is the amount of time from $k\Delta$ until processing for batch $k$ begins. Note that for suitable initial conditions, $U_k = U^o_k + \epsilon_k$, where $\epsilon_k$ is uniform on the interval [0,1], and $U^o_k$ is the waiting time of the $k$th batch in a modified system which does not have time slotting of the transmission process (cf. section on servers with vacations). The sequence $(U^o_k : k \ge 1)$ satisfies the equations:

$U^o_{k+1} = (U^o_k + \Theta_k - \Delta)^+,$

which is identical to the update equations for the waiting times in a GI/GI/1 queue (more specifically, a D/GI/1 queue), if the $\Theta_k$ represent the service times in the GI/GI/1 queue, and if the interarrival times are the constant $\Delta$. For $E[U_k]$ to be bounded as $k \to \infty$ it is necessary and sufficient that $\overline{\Theta} < \Delta$ and $Var(\Theta)$ be finite. In fact, by Kingman's first moment bound,

$E[U^o_k] \le \frac{Var(A) + Var(\Theta)}{2\Delta(1 - \overline{\Theta}/\Delta)} = \frac{Var(\Theta)}{2\Delta(1 - \overline{\Theta}/\Delta)}$

(here $Var(A) = 0$ is the variance of the deterministic interarrival times) if the load $\rho = \overline{\Theta}/\Delta$ is less than one. It can be shown that $\Theta$ has finite variance for any $G > 0$ by modifying either of the two methods used to compute $\overline{\Theta}$.

For $E[U^o_k]$ to be bounded it is thus sufficient that $\lambda < G/\overline{\Theta}(G)$, where $G = \lambda\Delta$. For the basic CRA, the maximum throughput is $\max_G G/\overline{\Theta}(G) = 0.429$. ZZZ CHOICE OF $\Delta$. It is 0.462 for the modified CRA.
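The quantity $\max_G G/\overline{\Theta}(G)$ is easy to evaluate numerically. A sketch (illustrative names; a crude grid search suffices since the function is smooth):

from math import exp

def theta_bar(G, tol=1e-12):
    # mean CRA completion time for a Poisson(G) batch, via the node-count sum
    total, k = 1.0, 0
    while True:
        m = G * 2.0**(-k)
        term = 2.0**(k + 1) * (1.0 - (1.0 + m) * exp(-m))
        total += term
        if term < tol:
            return total
        k += 1

best = max((G / theta_bar(G), G) for G in (i * 0.01 for i in range(1, 300)))
print(best)  # throughput about 0.429, attained near G = 1.15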
The maximum throughput can be further increased by terminating the processing of a batch whenever the conditional distribution of the number of remaining packets is Poisson distributed. Those packets can be regrouped with packets arriving during a different time interval with length adjusted so that the mean number of packets in the resulting batch is adjusted to an optimal value. This leads to maximum throughput 0.4872. Finally, with this restart idea used, the tree is no longer symmetric, so further improvement can be gained by adjusting the bias of the coin used, yielding maximum throughput 0.4878.
4.5 Problems

(I haven't assigned these problems recently, and solutions are not provided for them in the back. -BH)
4.1. M/G/1 queue with bulk arrivals
Consider a single server queueing system in which customers arrive in batches. Suppose that batches arrive according to a Poisson process of rate $\lambda$, and suppose that P[a batch has $k$ customers] $= g_k$, independently of the time of arrival and of the size of all other batches. Suppose the service times of all customers are independent with distribution function $B(x)$ and mean $1/\mu$, and suppose the sequence of service times is independent of the arrival process. Suppose that the customers are served in FCFS order, and that the customers within a batch are served in some random order that is independent of the service times of the customers. Think of a typical customer as one drawn at random from among the first $N$ customers, all choices being equally likely, where $N$ is very large.
(a) Define the waiting time of a batch to be the length of time from when the batch arrives until some customer in the batch begins service. Under what condition is the limiting batch waiting time finite? Under such condition, what is the mean waiting time of a batch in equilibrium? (Hint: reduce to consideration of an M/G/1 system).
(b) What is the distribution of the total number of customers in the same batch as a typical customer? Given that a typical customer is in a batch of size $n$, what is the conditional distribution of the number, $X$, of customers in the same batch as the typical customer which are served before the customer itself is served? What is the unconditional probability distribution of $X$? What is $E[X]$? Double check your answers for the case $g_1 = g_2 = 0.5$.
(c) What is the mean waiting time in queue for a typical customer?
(d) (Effect of statistical multiplexing). If the arrival rate is doubled to $2\lambda$, if the batch sizes have the same distribution, and if the service times are cut in half, how does the mean waiting time in the queue change? How about the mean waiting time in the system?
(e) (Effect of increasing burstiness). If the arrival rate is halved to $\lambda/2$, if the batch sizes are doubled, and if the service times are kept the same, how does the mean waiting time in the queue change?
4.2. Basic ALOHA
Consider a basic ALOHA system with $M$ stations, each having buffer size one, as discussed in class. (a) Suppose the arrival probability per slot at an empty station is $q_a = 1/(Me)$, and that the transmission probability per slot at an occupied station is $q_x = 1/M$. Perform a drift analysis of the system. How many equilibrium points are there? Assuming $M$ is large and approximating the mean number of busy stations using drift analysis, derive the throughput and the mean delay. (b) Continue to assume that $M$ is large. Also assume that $q_x = 3/M$ and $q_a = a/M$ for some constant $a$. For what value of $a$ is $T \approx M/2$ (in other words, about the same as for TDMA), where the mean delay $T$ is again predicted by drift analysis. How many equilibrium points are there for this value of $a$?
4.3. Basic ALOHA for finite M
Consider a slotted ALOHA system with $M$ stations, each with an infinite buffer. Each station which, at the beginning of a slot, has at least one packet, transmits a packet with probability $q_x$. In addition, each station, whether transmitting or not, receives a packet during a slot with probability $q_a$. Suppose the stations have unbounded buffers. At the end of each slot, all stations learn how many packets were transmitted. If exactly one is transmitted, that packet leaves the system. Otherwise, no packets leave the system. (a) Identify a Markov chain describing the system and give the one-step transition probabilities. (Hints: What is the state space? To gain intuition, it might help to think about the case $M = 2$ first.) (b) What is a necessary and sufficient condition for the system to be stable? (Be as explicit as possible. You should explain your reasoning but you needn't prove your answer.)
4.4. Drift analysis for two interacting Markov queues
Consider a system with two queueing stations. Suppose the arrival processes to the two stations are independent Poisson processes with the rates $\lambda_1$ and $\lambda_2$. Suppose the stations are correlated in the following sense. When there are $n_1$ customers in the first station and $n_2$ customers in the second station, then the instantaneous service rate in the first station is $\mu n_1/(n_1 + n_2)$ and the instantaneous service rate in the second station is $\mu n_2/(n_1 + n_2)$. That is, the effort of a rate $\mu$ exponential server is allocated among the two stations in proportion to their numbers of customers.
a) Sketch the drift vector field of the Markov chain $(N_1, N_2)$ assuming that $\lambda_1 = 1$, $\lambda_2 = 2$ and $\mu = 4$. b) Find the mean total number in the system in equilibrium. c) Under what conditions on $\lambda_1$, $\lambda_2$ and $\mu$ is the Markov chain positive recurrent? Can you prove your answer?
4.5. Is negative drift alone enough?
Let $X_1, X_2, \ldots$ be independent random variables with $P[X_k = -2k^2] = 1/k^2$ and $P[X_k = 1] = 1 - 1/k^2$. Let $Y_k = X_1 + X_2 + \cdots + X_k$. (a) Find $d_k(n) = E[Y_{k+1} - Y_k \mid Y_1, \ldots, Y_{k-1}, Y_k = n]$. What does your answer suggest about the asymptotic behavior of the sequence $(Y_k : k \ge 1)$? (b) Let $A_k$ be the event $\{X_k = -2k^2\}$. Since the sum $P[A_1] + P[A_2] + \cdots$ is finite, the Borel-Cantelli lemma implies that $P[A_k$ occurs for infinitely many $k] = 0$. What does this fact imply about the asymptotic behavior of the random sequence $(Y_k : k \ge 1)$? (Does the discrepancy between your answers to (a) and (b) contradict any results from class?)
4.6. Controlled ALOHA
Consider a discrete-time controlled ALOHA system. Suppose that just before slot one there are no packets in the system. The number of new packets that arrive during each slot has the Poisson distribution with mean $\lambda$, and each new packet arrives at an empty station (infinite user model). The numbers of arrivals during distinct slots are mutually independent. Each station keeps track of a counter. We use $C(k)$ to denote the (common) value of the counters at the beginning of slot $k$. We assume $C(1) = 0$ and use the following update rule: $C(k+1) = (C(k) - 1)^+$ if no packets are transmitted in slot $k$, and $C(k+1) = C(k) + 1$ if at least one packet is transmitted in slot $k$. Suppose that any station with a packet at the beginning of slot $k$ transmits it with probability $f(k)$, where $f(k) = 1/C(k)$ if $C(k) \ge 1$ and $f(k) = 1$ otherwise. (a) Identify a Markov chain describing the system and write down the transition probabilities. (b) For what range of $\lambda$ is the Markov chain positive recurrent? (Hint: When the backlog is large, what does the mean number of packets transmitted per slot drift towards?)
4.7. A controlled ALOHA system
Consider a discrete-time controlled ALOHA system, as discussed in class. Suppose that just before slot one there are no packets in the system. The number of new packets that arrive during each slot has the Poisson distribution with mean $\lambda$, and each new packet arrives at an empty station (infinite user model). The numbers of arrivals during distinct slots are mutually independent. Each station keeps track of an estimate $\hat{N}(k)$ as follows:

$\hat{N}(k+1) = (\hat{N}(k) - I_{\{Z(k)=1\}} + c(Z(k)))^+ + e^{-1}$

where $(c(0), c(1), c(e)) = (-0.418, 0, 0.582)$. (If the $e^{-1}$ here were replaced by $\lambda$, this would be identical to the estimator considered in class. The estimator here does not require knowledge of $\lambda$.) Suppose that any station with a packet at the beginning of slot $k$ transmits it with probability $\min(1/\hat{N}(k), 1)$. (a) Identify a Markov chain describing the system, and write down the transition probabilities. (b) When $\hat{N}(k)$ and $N(k)$ are both large, the difference $G_o\hat{N}(k) - N(k)$ drifts towards zero for some value of $G_o$. How is the constant $G_o$ determined? (Be as explicit as possible.) (c) Suppose for a given $\lambda$ that you have computed $G_o$. Under what condition will the system be stable (using any reasonable definition of stability)? (d) Extra credit: For what range of $\lambda$ is the system stable?
4.8. An access problem with three stations
Three stations each have a packet and contend for access over a time-slotted channel. Once a station successfully transmits its packet, it remains silent afterwards. Consider two access methods: (i) Each station transmits in each slot with probability 1/2 until successful. (ii) The stations use the basic unimproved tree algorithm based on fair coin flips. (a) For each of these two methods, compute the expected number of slots required until all packets are successfully transmitted. (b) Suppose a fourth station is monitoring the channel, and that the fourth station does not know that there are initially three stations with packets. Compute the expected number of slots until the fourth station knows that all packets have been successfully transmitted.
4.9. Basic tree algorithm with continuous entry
Packets are transmitted in slots of unit length. Suppose packets arrive according to a Poisson process of rate $\lambda$ packets per slot. Each packet arrives at a distinct station, so that any station has at most one packet. Each station adopts the following strategy:
* A counter value C is set to zero when a new packet is received.
* If a packet has counter value C at the beginning of a slot, then:
  if C = 0, the packet is transmitted in the slot. If the transmission is successful the packet leaves the system. If a collision occurs, the station flips a fair coin (value zero or one) and sets C equal to the value.
  else if C $\ge$ 1, the packet is not transmitted in the slot. If no collision occurs in the slot then C is decreased by one. If a collision occurs then C is increased by one.
(a) Sketch the movement of packets in a virtual stack. (b) Suppose $n$ new packets arrive just before slot zero and that there are no other packets in the network at that time. Let $B_n$ denote the expected number of slots until the feedback indicates that there are no packets in the network. Then $B_0 = B_1 = 1$. By conditioning on the outcome during the first slot, find equations for $B_2, B_3, \ldots$. Can you find the least upper bound on the set of $\lambda$ such that $B_n$ is finite?
4.10. Simple multiple access with four stations
Four stations, labeled 00, 01, 10, 11, share a channel. Some subset of the stations wish to transmit a packet. The stations execute the basic tree algorithm using the bits in their labels to make decisions about when to transmit. Let $R_k$ denote the mean number of slots needed for this conflict resolution algorithm to complete, given that $k$ of the four stations (chosen at random, uniformly) have a packet to transmit. Include the first slot in your count, so that $R_0 = R_1 = 1$. (a) Find $R_2$, $R_3$ and $R_4$. (b) Compare to the mean time if instead the decisions in the basic tree algorithm are made on the basis of independent fair coin flips for $1 \le k \le 4$. Also consider the comparison for small values of $k$ when there are $2^J$ stations for $J$ equal to 5 or 6.
Figure 4.2: Time axis for the basic collision resolution algorithm

Figure 4.3: Tree for the basic collision resolution algorithm

Figure 4.4: Dynamics of the counters for the basic collision resolution algorithm
Chapter 5

Stochastic Network Models

This chapter describes simple stochastic models for two types of networks: circuit switched networks and datagram networks. In circuit switched networks, a given route is considered to be available if there is sufficient capacity at each link along the route to support one more circuit. A performance measure of interest for a circuit switched network is the probability a given route is available. In datagram networks, individual customers move along routes in the network. Performance measures of interest for datagram networks are the transit delay and throughput.

Although arbitrary networks can be quite complicated, the models considered in this chapter have surprisingly simple equilibrium distributions. Furthermore, they suggest useful approximations (such as the reduced load approximation for blocking probability) and they illustrate important phenomena (such as the importance of bottleneck links in heavily loaded store and forward networks).

Before the two types of network models are presented, the theory of reversible Markov processes is introduced. Among other things, the theory of reversible Markov processes suggests rather general techniques for constructing complicated stochastic models which have fairly simple equilibrium distributions.
5.1 Time Reversal of Markov Processes

The reverse of a random process $X = (X(t) : t \in R)$ relative to a fixed time $\tau$ is $(X(\tau - t) : t \in R)$. Recall that (1.5) gives a characterization of the Markov property which is symmetric in time. Therefore a random process $(X(t) : t \in R)$ is Markov if and only if its reverse is Markov. If $X$ is Markov, the transition probabilities of its reverse process are obtained by

$P[X(t) = i \mid X(t+s) = j] = \frac{P[X(t) = i] \, P[X(t+s) = j \mid X(t) = i]}{P[X(t+s) = j]}$

Note that even if $X$ is time-homogeneous, the reverse is not necessarily time-homogeneous.

Suppose $(X(t) : t \in R)$ is a pure-jump, stationary Markov process with countable state space $S$ and irreducible generator matrix $Q$. Then the reverse process is also stationary, and its equilibrium distribution $\pi^*$, transition probabilities $(p^*_{ji}(t))$, and generator matrix $Q^*$ are given by

$\pi^* = \pi, \qquad p^*_{ji}(t) = \frac{\pi_i p_{ij}(t)}{\pi_j} \quad t \ge 0, \qquad \text{and} \qquad q^*_{ji} = \frac{\pi_i q_{ij}}{\pi_j}. \qquad (5.1)$
In vector-matrix form, this last relation becomes

$Q^* = \mathrm{diag}(\pi)^{-1} Q^T \mathrm{diag}(\pi)$

where $\mathrm{diag}(\pi)$ denotes the diagonal matrix with diagonal entries $\pi_1, \pi_2, \ldots$. For example, the transition rate diagram of a stationary Markov process is shown in Fig. 5.1, along with the transition rate diagram of the corresponding reverse process. Note that the process and its reverse have different transition rates for any $\lambda > 0$. However, the sum of the rates leading out of any given state is the same for the forward and reverse time processes. This reflects the fact that the mean holding time for any given state, which is the reciprocal of the sum of the rates out of the state, should be the same for the forward and reverse processes.
Figure 5.1: Example of forward and reverse transition rate diagrams
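Relation (5.1) is easy to exercise numerically. The following sketch, for a small generator matrix chosen here purely as an illustration, computes $\pi$ and $Q^*$ and checks that $Q^*$ is again a generator with the same equilibrium distribution.

import numpy as np

# an assumed example generator (irreducible, rows sum to zero)
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  2.0, -2.0]])

# solve pi Q = 0 with sum(pi) = 1
n = len(Q)
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Q* = diag(pi)^{-1} Q^T diag(pi), as in (5.1)
Qrev = np.diag(1.0 / pi) @ Q.T @ np.diag(pi)
print(np.allclose(Qrev.sum(axis=1), 0.0))  # rows of Q* sum to zero
print(np.allclose(pi @ Qrev, 0.0))         # pi is also stationary for Q*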
The notion of reverse process is useful for verifying that a given distribution is the equilibrium distribution, as indicated in the following proposition. The proof is straightforward and is left to the reader.

Proposition 5.1.1 Let $X$ be an irreducible, pure-jump Markov process with generator matrix $Q$, let $\pi$ be a probability vector and $Q^*$ be a generator matrix such that

$\pi_i q_{ij} = \pi_j q^*_{ji} \quad \text{for all } i, j \in S.$

Then $\pi$ is the equilibrium distribution of $X$ and $Q^*$ is the generator of the reverse process.

A random process is reversible if the process and its reverse (relative to a fixed time $\tau$) have the same finite-dimensional distributions, for all $\tau$. Due to the phrase "for all $\tau$" in this definition, a reversible process is necessarily stationary.

The pure-jump stationary Markov process $X$ is reversible if and only if $Q = Q^*$, or equivalently, if and only if the detailed balance equations hold:

$\pi_i q_{ij} = \pi_j q_{ji} \quad \text{for all } i, j. \qquad (5.2)$
In comparison, the global balance equations are $\pi Q = 0$, or equivalently

$\sum_{i: i \ne j} \pi_i q_{ij} = \sum_{i: i \ne j} \pi_j q_{ji} \quad \text{for all } j.$

Note that it is easy to construct reversible Markov processes as follows. First select a state space $S$ and an arbitrary probability distribution $\pi$ on $S$. Then for each pair of states $i$ and $j$, simply select any values for $q_{ij}$ and $q_{ji}$ such that (5.2) holds. Also, if $S$ is infinite, care should be taken that the sum of the rates out of any state is finite.

Reversible Markov processes also arise naturally. Consider for example an irreducible, stationary birth-death process, and let $n \ge 0$. The only way the process can move between the subsets $\{0, \ldots, n\}$ and $\{n+1, n+2, \ldots\}$ is to make a jump from $n$ to $n+1$ or from $n+1$ to $n$. Thus, $\pi_n \lambda_n = \pi_{n+1} \mu_{n+1}$, where $\lambda_n$ is the birth rate in state $n$ and $\mu_{n+1}$ is the death rate in state $n+1$. This relation yields that the detailed balance equations (5.2) hold, so that the process is reversible.

More generally, let $X$ be an irreducible stationary pure-jump Markov process, and let $G = (V, E)$ denote the undirected graph with set of vertices $S$ and set of edges $E = \{[i,j] : i \ne j, \text{ and } q_{ij} > 0 \text{ or } q_{ji} > 0\}$. The graph $G$ is called a tree graph if it is connected (which it must be because $X$ is assumed to be irreducible) and cycle-free. In case $S$ is finite this is equivalent to assuming that $G$ is connected and has $|S| - 1$ edges, or equivalently, $G$ is connected but the deletion of any edge from $G$ divides $G$ into two disjoint subgraphs. The same reasoning used for birth-death processes (which are special cases) shows that any stationary Markov process so associated with a tree graph is reversible.
A useful way to obtain reversible processes with larger state spaces is to consider collections of independent reversible processes. That is, if $X(t) = (X_1(t), \ldots, X_n(t))$ for all $t$, where the components $X_i(t)$ are mutually independent reversible processes, then $X$ itself is a reversible process.

If $X$ is an irreducible reversible pure-jump Markov process with generator matrix $Q$, then a new such process can be obtained as follows. Fix any two distinct states $i_o$ and $j_o$ and let $\alpha > 0$. The new generator matrix is the same as $Q$, except that $q_{i_o j_o}$ and $q_{j_o i_o}$ are replaced by $\alpha q_{i_o j_o}$ and $\alpha q_{j_o i_o}$ respectively, and the new values of $q_{i_o i_o}$ and $q_{j_o j_o}$ are chosen so that the new matrix also has row sums equal to zero. We can as well take $\alpha = 0$, although doing so might destroy the irreducibility of the resulting $Q$ matrix, so that more than one modified process can arise, depending on the equilibrium distribution.

Taking this last idea a bit further, given a subset $A$ of $S$ we can obtain a generator for a Markov process with state space $A$ by simply restricting $Q$ to the set $A \times A$, and suitably readjusting the diagonal of $Q$. The resulting process is called the truncation of $X$ to $A$.

As we shall see in the next section, the above ideas of considering collections of processes (to build up size) and truncation (to introduce nontrivial dependence among the constituent processes) is a powerful way to construct complex reversible models in a natural way.

Exercise: Consider the above modification as $\alpha \to \infty$. In some sense the states $i_o$ and $j_o$ become aggregated in this limit.
5.2 Circuit Switched Networks

Consider a circuit switched network with $R$ routes and $M$ links. A route is simply a subset of links. Usually such links form a path, but that is not necessary in what follows. An example network with links 1, 2, 3, 4 and four routes $\{1,2,3\}$, $\{1,2\}$, $\{3\}$, $\{2,4\}$, is shown in Figure 5.2. Suppose that calls arrive for route $r$ according to a Poisson process at rate $\lambda_r$, and that call durations are independent and exponentially distributed with mean one.

Let $X_r(t)$ denote the number of calls in progress on route $r$ at time $t$, and let $Z_l(t)$ denote the resulting load on link $l$, defined by

$Z_l(t) = \sum_r A_{lr} X_r(t),$

where $A$ is the link-route incidence matrix defined by $A_{lr} = 1$ if $l \in r$ and $A_{lr} = 0$ otherwise. In matrix notation, $Z(t) = AX(t)$ for all $t$.

Then $X = ((X_1(t), \ldots, X_R(t))^T : t \ge 0)$ is a Markov process with state space $Z_+^R$. If there are no capacity constraints imposed on the links, then the components of $X$ are independent, and each behaves as an M/M/$\infty$ system. An M/M/$\infty$ system is of course a special type of birth-death process, and is thus reversible. Therefore $X$ itself is reversible.
process, and is thus reversible. Therefore X itself is reversible.
R
o
u
t
e

1
R
o
u
t
e

2
R
o
u
t
e

3
R
o
u
t
e

4
1 2
3
4
Figure 5.2: A sample circuit switched network, with routes indicated.
Let $e_i$ denote the column vector with a 1 in the $i$th position and 0's in all other positions. The $Q$ matrix is given by (only nonzero, off-diagonal entries are displayed):

$q(x, x + e_r) = \lambda_r \quad \text{if } x \in Z_+^R$
$q(x, x - e_r) = x_r \quad \text{if } x - e_r \in Z_+^R$

and the equilibrium distribution is given by

$\pi(x) = \prod_r \frac{\lambda_r^{x_r} \exp(-\lambda_r)}{x_r!}$

We remark that the process $Z$ is not necessarily Markov. Consider the example in Figure 5.2. If it is known that $Z(t) = (1, 1, 1, 0)$, then it is not determined whether there is a single call, on route 1, or two calls, one on route 2 and one on route 3. The history of $Z$ before $t$ can distinguish between these two possibilities, giving additional information about the future of $Z$.

If capacity constraints are imposed, so that link $l$ can support at most $C_l$ calls, then the state space is truncated down to $S_C = \{x \in Z_+^R : Ax \le C\}$, where $C$ is the column vector of capacities.
The truncated system is also reversible, and the equilibrium distribution$^1$ is

$\pi_C(x) = \frac{1}{Z(C)} \prod_r \frac{\lambda_r^{x_r} \exp(-\lambda_r)}{x_r!}$

where the constant $Z(C)$ is chosen to make $\pi_C$ a probability distribution.
In the remainder of this section we examine the probability of blocking for calls offered for a given route. If the network has only a single link, the total number of calls in progress behaves like an M/M/$C_1$/$C_1$ system, so that

$P[k \text{ calls in progress}] = \frac{\lambda_1^k / k!}{\sum_{j=0}^{C_1} \lambda_1^j / j!},$

where $\lambda_1$ is the sum of the arrival rates. In particular, the blocking probability at the link is given by $B_1 = P[C_1 \text{ calls in progress}] = E(\lambda_1, C_1)$, where $E$ denotes the Erlang blocking formula.

In general, a link $l$ is full with probability $B_l = 1 - (Z(C - e_l)/Z(C))$. For large networks, this is a difficult quantity to compute, which leads us to consider bounds and approximations. First, a bound is presented. Define $\nu_l$ for a link $l$ as the sum of $\lambda_r$, over all routes $r$ that include link $l$. In matrix notation, this is $\nu = A\lambda$. The following conservative bound on the link blocking probability is intuitively clear:

$B_l \le E(\nu_l, C_l) \qquad (5.3)$

A proof of (5.3) goes as follows. Fix a link $l$. The right hand side, $E(\nu_l, C_l)$, is the same as the blocking probability in the original network if all link capacities except $C_l$ are changed to $+\infty$. It is easy to construct the original network process and the modified process on the same probability space so that, with probability one, the number of calls in progress through link $l$ for the original process is less than or equal to the number of calls in progress for the modified process. Indeed, the sequences of offered arrivals should be the same, and departures of routes currently using link $l$ should be governed by $C_l$ independent Poisson clocks. When there are $k \ge 1$ customers in a system (either one) at time $t$ then there is a departure at time $t$ if one of the first $k$ Poisson clocks rings. In particular, the fraction of time link $l$ is full is smaller for the original process. Since time-averages equal statistical averages for positive recurrent Markov processes, (5.3) follows. What is less obvious (and more difficult to prove) is the remarkable lower bound of W. Whitt (Blocking When Service is Required from Several Service Facilities, AT&T Technical Journal, vol. 64, pp. 1807-1856, 1985):

$P[\text{route } r \text{ free}] \ge \prod_{l \in r} (1 - E(\nu_l, C_l)) \qquad (5.4)$
The bounds (5.3) and (5.4) are often accurate approximations when blocking probabilities are very small (which is, of course, often the case of interest). For moderately loaded networks, the following reduced load approximation is often quite accurate:

$\hat{B}_l = E(\hat{\nu}_l, C_l), \qquad (5.5)$

$^1$Amazingly enough, the same equilibrium distribution results if the holding times of calls for each route $r$ are independent and identically distributed (not necessarily exponentially distributed) random variables of unit mean. This is an example of an invariance result in queueing theory.
where $\hat{\nu}_l$ is an estimate of the effective traffic offered at link $l$:

$\hat{\nu}_l = \sum_{r: l \in r} \lambda_r \prod_{l' \in r, \, l' \ne l} (1 - \hat{B}_{l'}) \qquad (5.6)$

The product term in (5.6) approximately accounts for the thinning of the traffic offered at link $l$ due to blocking at other links. Taken together, (5.5) and (5.6) yield $2M$ equations for the $2M$ variables $(\hat{B}_l, \hat{\nu}_l : 1 \le l \le M)$. Moreover, the $\hat{\nu}$'s are determined by the $\hat{B}$'s, and the $\hat{B}$'s are determined by the $\hat{\nu}$'s. The equations can thus be thought of as fixed point equations for the $\hat{B}$'s (or for the $\hat{\nu}$'s). They can be numerically solved using iterated substitution or dampened iterated substitution. It can be shown that there is a unique fixed point (F.P. Kelly, Blocking Probabilities in Large Circuit Switched Networks, Advances in Applied Probability, 18, 473-505, 1986). The acceptance probability for a route $r$ can then be approximated as

$P[\text{route } r \text{ free}] \approx \prod_{l: l \in r} (1 - \hat{B}_l). \qquad (5.7)$

Theory and numerical experience have shown the fixed point equations to be rather accurate for large networks with large capacity constraints and a diverse collection of routes.
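A sketch of the computation follows, using the standard Erlang blocking recursion and plain iterated substitution for (5.5)-(5.6). The network, rates, and capacities are illustrative stand-ins for the example of Figure 5.2 (unit arrival rates and capacity 3 are assumed, not taken from the text), and dampening can be added if the iteration oscillates.

from math import prod

def erlang_b(nu, C):
    # Erlang blocking formula E(nu, C) via the stable recursion
    # E(nu, 0) = 1,  E(nu, c) = nu E(nu, c-1) / (c + nu E(nu, c-1))
    E = 1.0
    for c in range(1, C + 1):
        E = nu * E / (c + nu * E)
    return E

def reduced_load(routes, lam, cap, iters=200):
    # iterated substitution on (5.5)-(5.6); routes[r] is the set of links
    # used by route r, lam[r] the offered rate, cap[l] the link capacity
    B = [0.0] * len(cap)
    for _ in range(iters):
        nu = [sum(lam[r] * prod(1.0 - B[lp] for lp in routes[r] if lp != l)
                  for r in range(len(routes)) if l in routes[r])
              for l in range(len(cap))]
        B = [erlang_b(nu[l], cap[l]) for l in range(len(cap))]
    return B

# links 0..3 and the four routes of Figure 5.2, with assumed unit rates
routes = [{0, 1, 2}, {0, 1}, {2}, {1, 3}]
lam = [1.0, 1.0, 1.0, 1.0]
cap = [3, 3, 3, 3]
B = reduced_load(routes, lam, cap)
print(B)
print(prod(1.0 - B[l] for l in routes[0]))  # route acceptance estimate (5.7)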
5.3 Markov Queueing Networks (in equilibrium)

Networks of queueing stations (we use the word "station" instead of "system" here to better distinguish the elements from the total network) are examined in this section. A simple way to construct such a network is to arrange a sequence of stations in series. Such networks are considered first. Then two more general types of networks are considered, which differ in the routing mechanisms used. In the first, each customer wanders through the network according to a Markov process. In the second model, each customer follows a deterministic route, which depends on the type of the customer. If one views the type of a new customer as being random, one can see that the second routing mechanism is essentially more general than the first. For simplicity, we restrict our attention to Poisson arrival processes and Markov type service mechanisms. More general networks, based on the notion of quasi-reversibility discussed below, are described in Walrand, An Introduction to Queueing Networks, Prentice Hall, 1988.
5.3.1 Markov server stations in series

We first consider a single station, paying particular attention to the departure process. A Markov server with state dependent rate, denoted as an $M_S$ type server, with parameters $\mu(1), \mu(2), \ldots$, is an exponential server with instantaneous rate $\mu(n)$ when there are $n$ customers in the station. That is, given there are $n$ customers in the station at time $t$, the probability of a service completion during the interval $[t, t+h]$ is $\mu(n)h + o(h)$. If the arrival process is Poisson with rate $\lambda$, the state of the resulting number-in-station process is a birth-death Markov process of the type considered in Section 1.12, such as an M/M/$m$ station with $1 \le m \le \infty$. The equilibrium distribution exists and is given by:

$\pi(n) = \frac{\lambda^n S^{-1}}{\prod_{l=1}^{n} \mu(l)}, \quad \text{if } S = \sum_{n=0}^{\infty} \frac{\lambda^n}{\prod_{l=1}^{n} \mu(l)} < +\infty.$
Suppose the state process $(N(t))$ is stationary. Since it is a birth-death process it is time-reversible. Conclude that the following properties hold. The first property, which implies the second, is known as quasi-reversibility.
* The departure process before time $t$ is independent of $N(t)$.
* The departure process is Poisson, with rate $\lambda$.
Next, consider a network of $M_S$ stations in series, with the arrival process at the first station being a Poisson process with rate $\lambda$. The entire network is described by a Markov process with state space $S = \{(n_1, n_2, \ldots, n_J) : n_i \in Z_+\} = Z_+^J$. Consider a stationary version of the network state process (which exists if the network is positive recurrent). Also, fix a time $t_o$. Since there is no feedback effect in the network, the first station in the network behaves like an isolated $M_S$ server station with Poisson arrivals. Therefore, the departure process from the first station is a Poisson process. In turn, that means that the second station in the network (and by induction, all other stations in the network) behaves as an isolated $M_S$ server station with Poisson arrivals of rate $\lambda$.

Of course the individual stations are stochastically coupled, because, for example, a departure instant for the first station is also an arrival instant for the second station. However, remarkably enough, for a fixed time $t_o$, the components $N_1(t_o), \ldots, N_J(t_o)$ are independent. One explanation for this is the following. By the first quasi-reversibility property for the first station, $N_1(t_o)$ is independent of departures from station 1 up to time $t_o$. But those departures, together with the service times at the other stations, determine the states of all the other stations. Therefore, $N_1(t_o)$ is independent of $(N_2(t_o), \ldots, N_J(t_o))$. Similarly, $N_2(t_o)$ is independent of the past departures from the second station, so that $N_2(t_o)$ is independent of $(N_3(t_o), \ldots, N_J(t_o))$. Continuing by induction yields that $N_1(t_o), \ldots, N_J(t_o)$ are indeed independent.
Therefore, the equilibrium distribution for the network is given by

$\pi(n_1, n_2, \ldots, n_J) = \prod_{j=1}^{J} p_j(n_j) \quad \text{where} \quad p_j(n_j) = \frac{b_j \lambda^{n_j}}{\prod_{l=1}^{n_j} \mu_j(l)}$

where $b_j$ is a normalizing constant for each $j$.

Another approach for proving this result is to check that $\pi Q = 0$. That approach is taken in a more general setting in the next subsection. An advantage of the argument just given is that it can be modified to show that the waiting times of a typical customer at the $J$ stages of the network are mutually independent, with each waiting time having the distribution of a single $M_S$ server station with Poisson arrivals in equilibrium.
5.3.2 Simple networks of $M_S$ stations

Consider next a network of $J$ stations, each with $M_S$ type servers. Let $\mu_j(n)$ denote the service rate at station $j$ given $n$ customers are at the station. Customers are assumed to follow routes according to a discrete time Markov process with one step transition probability matrix $R = (r_{ij})_{1 \le i,j \le J}$. Thus, given that a customer just left station $i$, it visits station $j$ next with probability $r_{ij}$. We distinguish two types of networks, open and closed, as follows.

An open network accepts customers from outside the network. Such arrivals are called exogenous arrivals. The stream of exogenous arrivals at any given station $i$ is assumed to be a Poisson stream with rate $\nu_i$, and the arrival processes for distinct stations are assumed to be independent. To avoid an infinite buildup of customers, we assume that $R$ is such that starting from any station in the network, it is possible to eventually leave. In particular, at least some of the row sums of the matrix $R$ are less than one. The vector of exogenous arrival rates $\nu = (\nu_1, \ldots, \nu_J)$ and $R$ determine the vector of total arrival rates $\lambda = (\lambda_1, \ldots, \lambda_J)$ by the equations

$\lambda_i = \nu_i + \sum_j \lambda_j r_{ji} \quad \text{for all } i, \qquad (5.8)$

which in vector-matrix notation can be written $\lambda = \lambda R + \nu$.

A closed network contains a fixed set of customers. There are no exogenous arrivals, nor departures from the network. The rows of $R$ all have sum one. A possibly unnormalized vector of arrival rates $\lambda = (\lambda_1, \ldots, \lambda_J)$ satisfies

$\lambda_i = \sum_j \lambda_j r_{ji} \quad \text{for all } i, \qquad (5.9)$

or in vector form, $\lambda = \lambda R$. These are the same equations as satisfied by the equilibrium distribution for a finite state Markov process with one step transition probability matrix $R$. Assuming that $R$ is irreducible (as we do), the vector $\lambda$ is uniquely determined by these equations up to a constant multiple.

If for each $j$ the service rate $\mu_j(n)$ is the same for all $n \ge 1$ (so that the service times at station $j$ are exponentially distributed with some parameter $\mu_j$), then the network model of this section is known as a Jackson network.
Proposition 5.3.1 The equilibrium distribution of the network is given by

$\pi(n_1, n_2, \ldots, n_J) = G^{-1} \prod_{i=1}^{J} p_i(n_i)$

where

$p_i(n_i) = \frac{\lambda_i^{n_i}}{\prod_{l=1}^{n_i} \mu_i(l)}$

and

$G = \sum_n \left[ \prod_{i=1}^{J} p_i(n_i) \right],$

where the sum is over all possible states $n = (n_1, \ldots, n_J)$. (For closed networks, write $G(K)$ instead of $G$ to emphasize the dependence of $G$ on the number of customers $K$.)
Proof. In order to treat both open and closed networks at the same time, if the network is open we convert it into a closed network by adding a new station which represents the "outside" of the original open network. That is, if the network is open, add a station called station 0 (zero) which always has infinitely many customers (so $n_0 \equiv +\infty$). Let the service rate of station 0 be $\mu_0(n_0) \equiv \nu_{tot} = \sum_{i=1}^{J} \nu_i$ and extend the routing matrix by setting

$r_{0i} = \frac{\nu_i}{\nu_{tot}}, \qquad r_{00} = 0, \qquad \text{and} \qquad r_{i0} = 1 - \sum_{j=1}^{J} r_{ij}.$

The extended routing matrix $R$ has row sums equal to one, and the vector $\lambda$ satisfying (5.8) can be extended to a vector satisfying (5.9) for the extended network by setting $\lambda_0 = \nu_{tot}$. Then the network can be considered to be closed (albeit with infinitely many customers) so the remainder of the proof is given only for closed networks.
For state $n = (n_1, n_2, \ldots, n_J)$ let $T_{ij} n$ denote the new state

$T_{ij} n = (n_1, n_2, \ldots, n_i - 1, \ldots, n_j + 1, \ldots, n_J), \quad 0 \le i, j \le J.$

The transition rate matrix for the Markov process describing the whole network is defined by (only terms for $n \ne n'$ need be given):

$Q(n, n') = \begin{cases} \mu_i(n_i) r_{ij} & \text{if } n' = T_{ij} n \text{ for some } i, j \text{ with } i \ne j \\ 0 & \text{if } n' \ne T_{ij} n \text{ for any } i, j \text{ and } n \ne n' \end{cases}$

We need to verify the balance equations

$\sum_{n': n' \ne n} \pi(n') Q(n', n) = \sum_{n': n' \ne n} \pi(n) Q(n, n') \quad \text{for all } n.$

Therefore it suffices to note that the stronger statement holds for each $i$ such that $n_i \ge 1$:

$\sum_{j: j \ne i} \pi(T_{ij} n) Q(T_{ij} n, n) = \sum_{j: j \ne i} \pi(n) Q(n, T_{ij} n) \quad \text{for all } n, i,$

which is equivalent to

$\sum_{j: j \ne i} \left[ \pi(n) \left( \frac{\lambda_j}{\mu_j(n_j + 1)} \right) \left( \frac{\lambda_i}{\mu_i(n_i)} \right)^{-1} \right] \mu_j(n_j + 1) r_{ji} = \pi(n) \, \mu_i(n_i),$

which is true because it reduces to (5.8) with $\nu_i \equiv 0$.
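The balance check in the proof can also be carried out numerically on a small example. The following sketch builds the generator of a two-station closed network (the parameters here are assumed for illustration) and confirms that the product-form distribution of Proposition 5.3.1 satisfies $\pi Q = 0$.

import numpy as np
from itertools import product
from math import prod

J, K = 2, 3                      # two stations, three circulating customers
mu = [1.0, 2.0]                  # single-server rates: mu_j(n) = mu_j, n >= 1
R = [[0.0, 1.0], [1.0, 0.0]]     # each departure goes to the other station
lam = [1.0, 1.0]                 # solves lam = lam R (up to a constant)

states = [n for n in product(range(K + 1), repeat=J) if sum(n) == K]
w = np.array([prod((lam[i] / mu[i]) ** n[i] for i in range(J)) for n in states])
pi = w / w.sum()                 # product form, normalized by G(K)

Q = np.zeros((len(states), len(states)))
idx = {n: a for a, n in enumerate(states)}
for n in states:
    for i in range(J):
        for j in range(J):
            if i != j and n[i] >= 1 and R[i][j] > 0:
                m = list(n); m[i] -= 1; m[j] += 1
                Q[idx[n], idx[tuple(m)]] += mu[i] * R[i][j]
    Q[idx[n], idx[n]] = -Q[idx[n]].sum()

print(np.allclose(pi @ Q, 0.0))  # True: pi is the equilibrium distribution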
TO BE ADDED NOTES HERE: Evaluation of mean delay for open networks, and evaluation
of throughput/cycle time for closed networks and investigation for varying number of customers,
application to head of line blocking in packet switches.
5.3.3 A multitype network of $M_S$ stations with more general routing

We assume the network has $I$ types of customers. It is assumed that new type $i$ customers are generated according to a Poisson process with rate $\nu(i)$ and that each type $i$ customer has a fixed route, equal to the sequence of stations $(r(i,1), r(i,2), \ldots, r(i, s(i)))$.

We assume that $\mu_j(n_j)$ is the departure rate of station $j$ if there are $n_j$ customers there, no matter what type of customers are in the station, for $1 \le j \le J$. For simplicity, we assume that customers at each station are served in first-come, first-served (FCFS) order. (Results exist for many other orderings, but the service order does play a big role in the model.)

The entire network can be modeled as a pure jump Markov process. To obtain a sufficiently large state to obtain the Markov property, it is necessary to keep track not only of the number of customers $n_j$ at each station $j$, but also the type $t_j(l)$ of the $l$th customer in station $j$ for $1 \le l \le n_j$. Moreover, since we allow a customer to visit a given station more than once, in order to know the
future routing we must also keep track of $s_j(l)$, which is the stage that the $l$th customer in station $j$ is along its route. For example, if $s_j(l) = 3$ then the $l$th customer in station $j$ is currently at the third station (which happens to be station $j$) along its route. A generic state of the network can thus be written as $c = (c_1, c_2, \ldots, c_J)$, where for each $j$, $c_j$ is the state of station $j$, written as

$c_j = (c_j(1), c_j(2), \ldots, c_j(n_j))$

where $c_j(l) = (t_j(l), s_j(l))$ is called the class of the $l$th customer in station $j$.
The long term total arrival rate at station $j$ (assuming the network is positive recurrent) is given by

$\lambda_j = \sum_{\text{types } i} \nu(i) \times (\text{number of times route } i \text{ passes through station } j)$

Define a probability distribution for the state of station $j$ by

$\pi_j(c_j) = b_j \left[ \frac{\lambda_j^{n_j}}{\prod_{l=1}^{n_j} \mu_j(l)} \right] \prod_{l=1}^{n_j} \frac{\nu(t_j(l))}{\lambda_j} = b_j \prod_{l=1}^{n_j} \frac{\nu(t_j(l))}{\mu_j(l)}.$

Under this distribution, the distribution of the total number in station $j$ is the same as in the simpler network of the previous subsection. Furthermore, given $n_j$, the classes attached to the customers in the $n_j$ positions in station $j$ are independent and proportional to the fraction of arrivals of various classes.
Proposition 5.3.2 The equilibrium distribution for the network is

$\pi(c) = \prod_{j=1}^{J} \pi_j(c_j).$

Proof. Given a state $c$ of the network, let $T^i c$ (respectively $T_j c$) denote the new state caused by an arrival of an exogenous customer of type $i$ (respectively caused by a departure at station $j$, which is well defined only if $n_j \ge 1$). Then, the transition rates for the network (given only for $c \ne c'$) are

$Q(c, c') = \begin{cases} \mu_j(n_j) & \text{if } c' = T_j c \\ \nu(i) & \text{if } c' = T^i c \\ 0 & \text{if } c' \text{ is not of the form } T^i c \text{ or } T_j c \text{ (and } c \ne c'). \end{cases}$

Now, some thought (or hindsight!) allows us to make the following guess for the transition rates of the time-reversed network:

$Q^*(c', c) = \begin{cases} \nu(i) & \text{if } c' = T_j c \text{ and } s_j(1) = s(t_j(1)), \text{ where } i = t_j(1) \\ \mu_k(n_k + 1) & \text{if } c' = T_j c \text{ and } s_j(1) < s(t_j(1)), \text{ where } k = r(t_j(1), s_j(1) + 1) \\ \mu_k(n_k + 1) & \text{if } c' = T^i c, \text{ where } k = r(i, 1). \end{cases}$

To complete the proof we need only verify that $\pi(c) Q(c, c') = \pi(c') Q^*(c', c)$ for all states $c, c'$ (this is left to the reader) and apply Proposition 5.1.1.
5.4 Problems

5.1. Time reversal of a simple continuous time Markov process
Let $(X(t) : t \ge 0)$ be a stationary time-homogeneous, pure-jump Markov process with state space $\{1, 2, 3, 4\}$ and $Q$ matrix

$Q = \left( \begin{array}{cccc} -1 & 1 & 0 & 0 \\ 0 & -(1+\alpha) & \alpha & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{array} \right),$

where $\alpha > 0$. Find the $Q$ matrix of the time-reversed process. Sketch the rate transition diagram of both $X$ and the time-reversed process.
5.2. Reversibility of a three state Markov process
Consider the continuous-time Markov process with the transition rate diagram shown. Identify all possible values of $(a, b, c) \in R_+^3$ which make the process reversible.

[Transition rate diagram: states 1, 2, 3 arranged in a cycle, with rate 1 on each transition in one direction and rates $a$, $b$, $c$ on the transitions in the other direction.]
5.3. A link with heterogeneous call requirements
Let $M = KL$, where $K$ and $L$ are strictly positive integers, and let $\lambda_S, \lambda_F \ge 0$. A bank of $M$ channels (i.e. servers) is shared between slow and fast connections. A slow connection requires the use of one channel and a fast connection requires the use of $L$ channels. Connection requests of each type arrive according to independent Poisson processes, with rates $\lambda_S$ and $\lambda_F$ respectively. If there are enough channels available to handle a connection request, the connection is accepted, service begins immediately, and the service time is exponentially distributed with mean one. Otherwise, the request is blocked and cleared from the system.
(a) Sketch a transition rate diagram for a continuous-time Markov model of the system.
(b) Explain why there is a simple expression for the equilibrium probability distribution, and find it. (You don't need to find an explicit expression for the normalizing constant.)
(c) Express $B_S$, the blocking probability for slow requests, and $B_F$, the blocking probability for fast requests, in terms of the equilibrium distribution.
5.4. Time reversibility of an M/GI/1 processor sharing queue
A processor sharing server devotes a fraction $1/n$ of its effort to each customer in the queue whenever there are $n$ customers in the queue. Thus, if the service requirement of a customer arriving at time $s$ is $x$, then the customer departs at the first time $s'$ such that $\int_s^{s'} 1/N(t)\, dt = x$, where $N(t)$ denotes the number of customers in the system at time $t$. It turns out that if the queue is stable then $N$ is time-reversible, and its equilibrium distribution depends only on the load $\rho$. In particular, the departure process is Poisson. This result is verified in this problem for a certain class of distributions of the service requirement $X$ of a single customer. (a) Is $N$ Markovian? If not, explain what additional information would suffice to summarize the state of the system at a time $t$ so that, given the state, the past of the system is independent of the future.
(b) To arrive at a Markov model, restrict attention to the case that the service requirement $X$ of a customer is the sum of $k$ exponentially distributed random variables, with parameters $\mu_i$ for
$1 \le i \le k$. Note that $E[X] = 1/\mu_1 + \cdots + 1/\mu_k$ and $\rho = \lambda E[X]$. Assume $\rho < 1$. Equivalently, $X$ is the time it takes a pure birth Markov process to move from state 1 to state $k+1$ if the birth rate in state $i$ is $\mu_i$. Thus, each customer moves through $k$ stages of service. Given there are $n$ customers in the system, a customer in stage $i$ of service completes that stage of service at rate $\mu_i/n$. Let $N_i(t)$ denote the number of customers in the system that are in stage $i$ of service at time $t$. Let $M(t) = (N_1(t), \ldots, N_k(t))$. Then $M$ is a Markov process. Describe its transition rate matrix $Q$.
(c) The process $M$ is not reversible. Show that the equilibrium distribution of $M$ is given by $\pi(n_1, \ldots, n_k) = (1-\rho)\rho^n \binom{n}{n_1, \ldots, n_k} a_1^{n_1} \cdots a_k^{n_k}$, where $n = n_1 + \cdots + n_k$ and $a_i = (1/\mu_i)/(1/\mu_1 + \cdots + 1/\mu_k)$. At the same time, show that the time reverse of $M$ has a structure similar to $M$, except that instead of going through the stages of service in the order $1, 2, \ldots, k$, the customers go through the stages of service in the order $k, k-1, \ldots, 1$, with the service requirement for stage $i$ remaining exponentially distributed with parameter $\mu_i$. (It follows that the number of customers in the system in equilibrium has distribution $P[N(t) = n] = (1-\rho)\rho^n$.) (d) Explain why $N(t)$ is reversible for the class of service times used. (Note: If $\mu_i = k$ for $1 \le i \le k$, then as $k \to \infty$, the service time requirement approaches the deterministic service time $X \equiv 1$. By using a more general Markov process to model $X$, any service time distribution can be approximated, and still $N$ is reversible.)
5.5. A sequence of symmetric star shaped loss networks
Consider a circuit switching system with $M \ge 2$ links, all sharing an endpoint at a central switching node. Suppose the set of routes used by calls is the set of all (unordered) pairs of distinct links. There are thus $\binom{M}{2}$ routes. Suppose that calls arrive for any given route according to a Poisson process of rate $\lambda = 4/(M-1)$. Suppose that the call holding times are independent and exponentially distributed with mean one. Finally, suppose that each link has capacity $C = 5$, so that a call offered to a route is blocked and discarded if either of the two links in the route already has 5 calls in progress. (a) What Markov chain describes the system? Describe the state space and transition rates. Is the chain reversible? (b) What is the conservative upper bound on the blocking probability for a given link, and what is Whitt's lower bound on the call acceptance probability? Consider the cases $M = 2, 3, 4$ and $M \to \infty$. Is the bound exact for any of these values? (c) Derive the reduced load approximation for the blocking probability of a link. Making an independence assumption, derive an approximation for the probability that a call is accepted. Again, consider the cases $M = 2, 3, 4$ and $M \to \infty$. Comment on the accuracy of the approximation for different values of $M$.
Note: Solutions are not provided for the remainder of the problems in this section.
5.6. A simple open Jackson network
Consider the Jackson type network (network of single server queues, independent exponentially distributed service times with queue-dependent means, exogenous arrival processes are Poisson and Markov random routing is used) shown in Figure 5.3. The service rates at the three stations are 1, 2, and 1 respectively, and the exogenous arrival rate is $\nu$. At the three points indicated, a customer randomly chooses a path to continue on, each of two choices having probability 0.5. (a) Determine the range of $\nu$ for which the system is positive recurrent. (b) Determine the mean time in the network for a typical customer. (c) Each customer exits at either point A or point B. Find the probability that a typical customer exits at point A. (Does your answer depend on $\nu$?) (d) Indicate how the network can be embedded into an equivalent closed queueing network.

Figure 5.3: A three queue Jackson network

5.7. A tale of two cities
Two cities are connected by two one-way channels, each of capacity $C$ bits per second. Each channel carries packets which are initiated according to a Poisson process at rate $a$ packets per second. Assume that packet lengths are independent, exponentially distributed with mean length $\mu^{-1}$ bits. It is proposed that the two channels be replaced by a single half-duplex channel (capable of carrying messages in either direction, but only one direction at a time) of capacity $C'$. How large must $C'$ be in order to achieve smaller mean waiting times than for the original network? (Model links by single server queues.)
5.8. A tale of three cities
Two communication networks are being considered to link three cities. The first has six one-way channels of capacity $C$ bps, one for each ordered pair of cities. The second has three one-way channels of capacity $2C$ bps each, and the channels connect the three cities in a ring. In the first network packets need only one hop while in the second network some of the packets need travel two hops. Assume that for any ordered pair of cities, packets originate at the first city destined for the second city according to a Poisson process of rate $a$. Thus there are six types of packets. Assume that packets are exponentially distributed in length with mean length $\mu^{-1}$ bits.
(a) Give a full Markov description for each of these two networks.
(b) Compute the mean time in the network. (If you need to make an independence approximation at any point, go ahead but state that you've done so.) Under what conditions on $a$ and $C$ will the mean time in the first network be smaller than the mean time in the second network?
5.9. A cyclic network of finite buffer queues
$K$ jobs circulate in a computer pipeline with $N$ stages, as shown in Figure 5.4. Each stage is assumed to have one server and storage space for $B$ jobs. Service at any nonempty stage is completed at rate one whenever the next stage is not full, and at rate zero otherwise (this is called communication blocking). Define $\theta(N, K, B)$ to be the mean rate at which jobs pass through a given stage in equilibrium. (a) Find $\theta(N, K, B)$ for $B \ge K$. (b) Find $\theta(N, K, 1)$. (Hint: show that all states are equally likely in equilibrium. The rotational symmetry of the system is not enough.) (c) Let $\theta^*(B) = \lim_{N \to \infty} \max\{\theta(N, K, B) : K \ge 1\}$. Find $\theta^*(+\infty)$ and $\theta^*(1)$. Note: Recent work on interacting particle systems has allowed a determination of $\theta^*(B)$ for all $B$.

Figure 5.4: A recirculating pipeline
Chapter 6

Calculus of Deterministic Constraints

If a communication link is supposed to guarantee a maximum delay to a traffic flow, then the traffic flow must also satisfy some constraint. In this section a natural calculus of deterministic constraints is examined. In the first subsection the basic $(\sigma, \rho)$ constraints, and the resulting performance bounds for a single server queue, are discussed. In the second section, a more general notion of deterministic constraint is explored. It is shown that token bucket filters are the minimal delay regulators that produce $(\sigma, \rho)$-upper constrained traffic streams, and more general regulators are also considered. The third section focuses on applications of service curves, which offer a flexible way to specify a lower bound on the output of a server as a function of the input. Given a constraint on the input traffic and a service curve, a maximum delay can be computed for any server that conforms to the service curve. Two applications are discussed: bounding the maximum delay in a priority queue, and scheduling service at a constant rate link with multiple input streams in order to achieve a specified service curve for each input.

This chapter is based largely on the presentation in C.-S. Chang [5]. The basic calculus of $(\sigma, \rho)$ constraints was first given in the seminal work of R. L. Cruz [8, 9].
6.1 The $(\sigma, \rho)$ Constraints and Performance Bounds for a Queue

For simplicity, the case of equal length packets transmitted in discrete time will be considered. A cumulative arrival process $A$ is a nondecreasing, integer-valued function on the nonnegative integers $Z_+$ such that $A(0) = 0$. For each integer $t \ge 1$, $A(t)$ denotes the number of arrivals in slots $1, 2, \ldots, t$. The number of arrivals at time $t$, denoted by $a(t)$, is given by $a(t) = A(t) - A(t-1)$.

A cumulative arrival process $A$ is said to be $(\sigma, \rho)$-upper constrained if $A(t) - A(s) \le \sigma + \rho(t-s)$ for any integers $0 \le s \le t$. This is denoted as $A \sim (\sigma, \rho)$. In these notes $\sigma$ and $\rho$ are taken to be integer valued.

A popular way to regulate data streams is through the use of a token bucket filter. A token bucket filter (with no dropping) with parameters $(\sigma, \rho)$ operates as follows. The filter has a packet queue with unlimited storage capacity, and it has a token bucket (implemented as a counter). The events at an integer time occur in the following order. First new packet arrivals are added into the queue and $\rho$ new tokens are added to the token bucket. Then as many of the packets immediately depart as possible, with the only restriction being that each departing packet take one token. Finally, if there are more than $\sigma$ tokens in the bucket, some tokens are dropped to bring the total number of tokens to $\sigma$.
A token bucket filter with parameters $(\sigma, \rho)$ is a $(\sigma, \rho)$ regulator, meaning that for any input process $A$ the output process $B$ is $(\sigma, \rho)$-upper constrained. Indeed, since at most $\sigma$ tokens are in the bucket just before time $s+1$, and since $\rho(t-s)$ additional tokens arrive in slots $s+1, \ldots, t$, at most $\sigma + \rho(t-s)$ packets can depart from the filter in those slots. So the departure process from the token bucket filter is indeed $(\sigma, \rho)$-upper constrained. It is shown in the next section that among all FIFO servers producing $(\sigma, \rho)$-upper constrained outputs, the token bucket filter requires the least delay for each packet.

If constrained flows are merged, the output process is also constrained:

$A_i \sim (\sigma_i, \rho_i) \text{ for each } i \implies \sum_i A_i \sim \left( \sum_i \sigma_i, \sum_i \rho_i \right)$
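A sketch of a discrete-time token bucket filter, following the event ordering above, is given below. The bucket is assumed to start full and the names are illustrative; feeding in a random unconstrained input produces a $(\sigma, \rho)$-upper constrained output.

import random

def token_bucket(arrivals, sigma, rho):
    # event order per slot: add arrivals and rho tokens, release packets
    # (one token each), then cap the bucket at sigma tokens
    tokens, queue, out = sigma, 0, []   # bucket assumed to start full
    for a in arrivals:
        queue += a
        tokens += rho
        d = min(queue, tokens)
        queue -= d
        tokens = min(tokens - d, sigma)
        out.append(d)
    return out

def is_constrained(a, sigma, rho):
    # check A(t) - A(s) <= sigma + rho (t - s) for all 0 <= s <= t
    A = [0]
    for x in a:
        A.append(A[-1] + x)
    return all(A[t] - A[s] <= sigma + rho * (t - s)
               for t in range(len(A)) for s in range(t + 1))

arr = [random.randint(0, 5) for _ in range(300)]
dep = token_bucket(arr, sigma=4, rho=2)
print(is_constrained(dep, 4, 2))  # True, as argued above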
The remainder of this section deals with a queue with a constant rate server, and examines performance bounds implied if the input process is $(\sigma, \rho)$-upper constrained. First the queue process is defined. Let $C$ be a positive integer and let $A$ denote a cumulative arrival process. Suppose the arrival process is fed into a queue with a server that can serve up to $C$ packets per slot. Think of the events at an integer time as occurring in the following order. First arrivals, if any, are added to the queue, then up to $C$ customers in the queue are instantaneously served, and then the number of customers in the queue is recorded. Since we take service times to be zero, the amount of time a customer spends in the queue is the same as the amount of time the customer spends in the system, and is simply called the delay of the customer. The queue length process is recursively determined by the Lindley equation:

$q(t+1) = (q(t) + a(t+1) - C)^+$

with the initial condition $q(0) = 0$. As can be shown by induction on $t$ (and as seen earlier in the course) the solution to the Lindley equation can be written as

$q(t) = \max_{0 \le s \le t} [A(t) - A(s) - C(t-s)] \qquad (6.1)$

for all $t \ge 0$. The cumulative output process $B$ satisfies, by (6.1),

$B(t) = A(t) - q(t) = \min_{0 \le s \le t} \{A(s) + C(t-s)\} \quad \text{for all } t \ge 0.$

Suppose now that the input $A$ is $(\sigma, \rho)$-upper constrained. Then, if $\rho \le C$, (6.1) immediately implies that $q(t) \le \sigma$ for all $t$. Conversely if $C = \rho$ and $q(t) \le \sigma$ for all $t$, then $A \sim (\sigma, \rho)$.

Next, upper bounds on the length of a busy period and on packet delay are derived, assuming that the input $A$ is $(\sigma, \rho)$-upper constrained. Given times $s$ and $t$ with $s \le t$, a busy period is said to begin at $s$ and end at $t$ if $q(s-1) = 0$, $a(s) > 0$, $q(r) > 0$ for $s \le r < t$, and $q(t) = 0$. We define the duration $B$ of the busy period to be $B = t - s$ time units. Given such a busy period, there must be $C$ departures at each of the $B$ times $s, \ldots, t-1$, and at least one packet in the queue at time $t-1$. Thus at least $CB + 1$ packets must arrive at times in $\{s, \ldots, t-1\}$ in order to sustain the busy period. Since $A$ is $(\sigma, \rho)$-upper constrained, at most $\sigma + \rho B$ packets can arrive in those slots. Thus it must be that $CB + 1 \le \sigma + \rho B$, which yields $B \le \frac{\sigma - 1}{C - \rho}$. But since $B$ is an integer it must be that $B \le \lfloor \frac{\sigma - 1}{C - \rho} \rfloor$.
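These facts are easy to exercise in simulation. A sketch (illustrative names): generate a random $(\sigma, \rho)$-upper constrained stream, run the Lindley recursion, check it against (6.1), and confirm that $q(t) \le \sigma$ when $\rho \le C$.

import random

def constrained_stream(T, sigma, rho):
    # random arrivals that never violate A(t) <= A(s) + sigma + rho (t - s)
    A = [0]
    for t in range(1, T + 1):
        budget = min(A[s] + sigma + rho * (t - s) for s in range(t)) - A[-1]
        A.append(A[-1] + random.randint(0, budget))
    return [A[t] - A[t - 1] for t in range(1, T + 1)]

def queue_process(a, C):
    # Lindley recursion q(t+1) = (q(t) + a(t+1) - C)^+, cross-checked
    # against the explicit solution (6.1)
    A, qs = [0], [0]
    for x in a:
        A.append(A[-1] + x)
        qs.append(max(qs[-1] + x - C, 0))
    for t in range(len(A)):
        assert qs[t] == max(A[t] - A[s] - C * (t - s) for s in range(t + 1))
    return qs

sigma, rho, C = 4, 2, 3
qs = queue_process(constrained_stream(400, sigma, rho), C)
print(max(qs) <= sigma)  # True whenever rho <= C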
The delay of a packet in the queue is the time the packet departs minus the time it arrives. The delay of any packet is less than or equal to the length of the busy period in which the packet arrived. Thus,

$d \le \left\lfloor \frac{\sigma - 1}{C - \rho} \right\rfloor$

(for any service order). (Note: If one unit of service time is added, the total system time would be bounded by

$d + 1 \le \left\lfloor \frac{\sigma - 1}{C - \rho} \right\rfloor + 1 = \left\lceil \frac{\sigma}{C - \rho} \right\rceil$

which is the bound given in [5], although [5] uses a different definition of the length of a busy period.)

A tighter bound can be given in case the order of service is first in first out (FIFO). If a packet has a nonzero waiting time, then it is carried over from the time it first arrived to the next time slot. The total number $q$ of packets carried over, including that packet, is less than or equal to $\sigma$, as shown above. Thus, assuming FIFO order, since at least $C$ of these $q$ packets depart in each of the subsequent slots until all $q$ are served, the delay for the packet is less than or equal to $d_{FIFO}$, given by

$d_{FIFO} = \lceil \sigma / C \rceil.$

(Note: This delay also does not include an extra unit of time for service.)

In order to analyze networks, it is important to find constraints on the output of a device, because the output of one device could be the input of another device. A constraint on the output of a device is readily found if the input satisfies a known constraint and if a bound on the maximum delay of the device is known. Suppose that the input $A$ to a device is $(\sigma, \rho)$-upper constrained, and that the maximum delay in the device is $d$. Then for $s < t$, any packet that departs from the device at a time in $\{s+1, \ldots, t\}$ must have arrived at one of the $t - s + d$ times in $\{s+1-d, \ldots, t\}$. Therefore, $B(t) - B(s) \le A(t) - A(s-d) \le \sigma + \rho(t - (s-d)) = \sigma + \rho d + \rho(t-s)$. Therefore, $B$ is $(\sigma + \rho d, \rho)$-upper constrained.

Consider, for example, a queue with a server of rate $C \ge 1$ and input arrival process $A$. The cumulative output process $B$ does not depend on the order of service, so no matter what the order of service, the output process is $(\sigma + \rho \, d_{FIFO}, \rho)$-upper constrained.
6.2 f-upper constrained processes
The notion of (, ) constraints has a natural generalization. The generalization is useful not only
for the sake of generalization, but it helps in the analysis of even basic (, ) constrained trac. Let
f be a nondecreasing function from Z
+
to Z
+
. A cumulative arrival process A is said to be f-upper
constrained if A(t) A(s) f(t s) for all s, t with 0 s t. For example, if f(t) = +t, then
A is f-upper constrained if and only if it is (, )-upper constrained.
Rearranging the denition yields that A is f-upper constrained if and only if A(t) A(s) +
f(t s) for 0 s t. Equivalently, A is f-upper constrained if and only if A A f, where for
two functions f and g dened on Z
+
, f g is the function on Z
+
dened by
(f g)(t) = min
0st
g(s) +f(t s). (6.2)
Note that f g is a convolution in the (min, plus) algebra. That is, (6.2) uses a minimum rather
than addition (as in an integral), and (6.2) uses addition rather than multiplication.
115
It turns out that some functions f can be reduced, without changing the condition that an
arrival process is f-upper constrained. For example, changing f(0) to zero has no eect on the
requirement that A is f-upper constrained, because trivially A(t) A(t) 0 anyway. Moreover,
suppose A is f-upper constrained and s, u 0. Then A(s +u) A(s) f(u), but a tighter bound
may be implied. Indeed, if n 1 and u is represented as u = u
1
+ + u
n
where u
i
1, integer,
then
A(s +u) A(s) = (A(s +u
1
) A(s)) + (A(s +u
1
+u
2
) A(s +u
1
))
+ + (A(s +u
1
+ +u
n
) A(s +u
1
+ +u
n1
))
f(u
1
) +f(u
2
) + +f(u
n
)
So A(t) A(s) f

(t s) where f

, called the subadditive closure of f, is dened by


f

(u) =
_
0 if u = 0
minf(u
1
) + +f(u
n
) : n 1, u
i
1 for each i, and u
1
+ +u
n
= u if u 1
The function f

has the following properties.


a) f

f
b) A is f-upper constrained if and only if A is f

-upper constrained
c) f

is subadditive; f

(s +t) f

(s) +f

(t) all s, t 0.
d) If g is any other function with g(0) = 0 satisfying (a) and (c), then g f

.
Recall that the output of a token bucket lter with parameters (, ) is (, )-upper constrained
for any input. More generally, given a nondecreasing function f from Z
+
to Z
+
, a regulator for f
is a device such that for any input A the corresponding output B is f-upper constrained. We shall
only consider regulators that output packets only after they arrive, requiring B A.
Roughly speaking, regulators (that dont drop packets) delay some packets in order to smooth
out the arrival process. There are many possible regulators for a given arrival sequence, but it is
desirable for the delay in a regulator to be small. A regulator is said to be a maximal regulator for
f if the following is true: For any input A, if B is the output of the regulator for input A and if

B is a cumulative arrival process such that



B A (ow condition) and

B is f-upper constrained,
then

B B. Clearly if a maximal regulator for f exists, it is unique.
Proposition 6.2.1 A maximal regulator for f is determined by the relation B = A f

.
Proof. Let A, B, and

B be as in the denition of maximal regulator. Then

B =

B f

A f

= B (6.3)
The rst equality in (6.3) holds because

B is f-upper constrained. The inequality holds because
f

is a monotone operation. The nal equality holds by the denition of B.


Corollary 6.2.2 Suppose f
1
and f
2
are nondecreasing functions on Z
+
with f
1
(0) = f
2
(0) = 0.
Two maximal regulators in series, consisting of a maximal regulator for f
1
followed by a maximal
regulator for f
2
, is a maximal regulator for f
1
f
2
. In particular, the output is the same if the order
of the two regulators is reversed.
116
Proof. Let A be the input to the rst regulator and let B be the output of the second regulator.
Then
B = (A f

1
) f

2
= A (f

1
f

2
) = A (f
1
f
2
)

, (6.4)
where the last equality in (6.4) depends on the assumption that f
1
(0) = f
2
(0) = 0. The rst part
of the corollary is proved. The second part follows from the uniqueness of maximal regulators and
the fact that f
1
f
2
= f
2
f
1
.
Proposition 6.2.3 The token bucket lter with parameters (, ) is the maximal (, ) regulator.
Proof. A key step in the proof is to represent the state of a token bucket lter as a function
of the queue length process for a queue with constant server rate packets per unit time. Let A
denote a cumulative arrival process, and let q denote the corresponding queue length process when
A is fed into a queue with a server of constant capacity packets per time unit. Therefore,
q(t) = max
0st
[A(t) A(s) (t s)].
Then the size of the packet queue for a (, ) token bucket lter is (q(t) )
+
, and the number
of tokens in the the lter is ( q(t))
+
. See Figure This representation can be readily proved by
induction on time.
Therefore, if B is the cumulative departure process from a (, ) token bucket lter with input
A, then B is given by
B(t) = A(t) (q(t) )
+
= A(t)
_
max
0st
A(t) A(s) [ +(t s)]
_
+
= min
_
A(t), min
0st
A(s) + +(t s)
_
That is, B = f

A for f(t) = +t, which concludes the proof.


6.3 Service Curves
So far we have considered passing constrained processes into a maximal regulator. Examples of
maximal regulators include a token bucket lter, or a queue with xed service rate. The output
for these servers is completely determined by the input. In practice, packet streams can be fed
into many other types of servers, which often involve interaction among multiple packet streams.
Examples include a station in a token ring network or a link in a communication network with
a service discipline such as pure priority or weighted round robin. The exact service oered to a
particular stream thus cannot be xed in advance. It is thus useful to have a exible way to specify
a guarantee that a particular server oers to a particular stream. The denition we use naturally
complements the notion of f-upper constraints discussed above.
A service curve is a nondecreasing function from Z
+
to Z
+
. Given a service curve f, a server is
an f-server if for any input A, the output B satises B A f. That is, B(t) min
0st
A(s) +
f(t s).
117
Some comments about the denition are in order. First, note that the notion of f-server involves
an inequality, rather than a complete specication of the output given the input as in the case for
regulators. Second, in these notes we only consider servers such that the ow constraint B(t) A(t)
is satised, and it is assumed that A(0) = 0 for all cumulative arrival streams. Taking s = t in
the above equation implies that B(t) A(t) + f(0). This can only happen if f(0) = 0, so that
when we discuss f-servers we will only consider functions f with f(0) = 0. Third, the denition
requires that the output inequality is true for all input processes (it is universal in A). In some
applications (not discussed in these notes) a server might be an f server only for a specied class
of input processes.
Examples For an example of a service curve, given an integer d 0, let O
d
denote the function
O
d
(t) =
_
0 for t d
+ for t > d
Then a FIFO device is an O
d
-server if and only if the delay of every packet is less than or equal
to d, no matter what the input process. A queue with constant service rate C is an f server for
f(t) = Ct. A leaky bucket regulator is an f server for f(t) = ( + t)I
t1
. More generally, the
maximal regulator for a function f is an f

-server.
The following proposition describes some basic performance guarantees for a constrained input
process passing through an f-server. Suppose an f
1
-upper constrained process A passes through
an f
2
-server. Let d
V
= max
t0
(f

1
(t) f
2
(t)) and d
H
= mind 0 : f
1
(t) f
2
(t +d) for all t 0.
That is, d
V
is the maximum vertical distance that the graph of f

1
is above f
2
, and d
H
is the
maximum horizontal distance that the graph of f
2
is to the right of f

1
.
Proposition 6.3.1 The queue size A(t) B(t) is less than or equal to d
V
for any t 0, and if
the order of service is FIFO, the delay of any packet is less than or equal to d
H
.
Proof. Let t 0. Since the server is an f
2
-server, there exists an s

with 0 s

t such
that B(t) A(t s

) + f
2
(s

). Since A is f
1
-upper constrained, A(t) A(t s

) + f

1
(s

). Thus,
A(t) B(t) f

1
(s

) f
2
(s

) d
V
.
To prove the delay bound, suppose a packet arrives at time t and departs at time t > t. Then
A(t) > B(t 1). Since the server is an f
2
-server, there exists an s

with 0 s

t 1 such that
B(t 1) A(s

) + f
2
(t 1 s

). Because A(t) > B(t 1), it must be that 0 s

t. Since A
1
is f
1
-upper constrained, we thus have
A(s

) +f
1
(t s

) A(t) B(t 1) A(s

) +f
2
(t 1 s

),
so that f
1
(t s

) > f
2
(t 1 s

). Hence, t 1 t < d
H
, so that t t d
H
.
Consider a queue with a server of constant rate C which serves input streams 1 and 2, giving
priority to packets from input 1, and serving the packets within a single input stream in FIFO
order. Let A
i
denote the cumulative arrival stream of input i.
Proposition 6.3.2 (See [5, Theorem 2.3.1]). If A
1
is f
1
-upper constrained, the link is an

f
2
-server
for the type 2 stream, where

f
2
(t) = (Ct f
1
(t))
+
.
Example For example, suppose A
i
is (
i
,
i
) constrained for each i. Then

f
2
(t) = ((C
1
)t

1
)
+
. Applying Proposition 6.3.1 to the input 2 yields that the delay for any packet in input 2 is
118
less than or equal to

1
+
2
C
1
. Since packets in input 1 are not aected by the packets from input 2,
the delay for any packet in input 1 is less than or equal to

1
C
|.
In comparison, if the queue were served in FIFO order, then the maximum delay for packets
from either input is

1
+
2
C
|. Conclude that if
1
is much smaller than
2
, the delay for input 1
packets is much smaller for the priority server than for the FIFO server.
The nal topic of the section is the problem of scheduling service for multiple input streams on
a constant rate link to meet specied service curves for each input. The scheduling algorithm used
is the Service Curve Earliest Deadline (SCED) algorithm. Suppose the ith input has cumulative
arrival process A
i
which is known to be g
i
-upper constrained, and suppose that the ith input wants
to receive service conforming to a specied service curve f
i
. Suppose the link can serve C packets
per unit time. Given C, the functions g
i
and service curves f
i
for all i, the basic question addressed
is this. Is there a scheduling algorithm (an algorithm that determines which packets to serve at
each time) so that the service for each input A
i
conforms to service curve f
i
?
The SCED algorithm works as follows. For each input stream i let N
i
= A
i
f
i
. Then N
i
(t)
is the minimum number of type i packets that must depart by time t in order that the input i see
service curve f
i
. Based on this, a deadline can be computed for each packet of input i. Specically,
the deadline for the kth packet from input i is the minimum t such that N
i
(t) k. Simply put, if
all packets are scheduled by their deadlines, then all service curve constraints are met.
In general, given any arrival sequence with deadlines, if it is possible for an algorithm to meet
all the deadlines, then the earliest deadline rst (EDF) scheduling algorithm can do it. The SCED
algorithm, then, is to put deadlines on the packets as described, and then use earliest deadline rst
scheduling.
Proposition 6.3.3 Given g
i
, f
i
for each i and capacity C satisfying
n

i=1
(g
i
f
i
)(t) Ct all t. (6.5)
the service to each input stream i provided by the SCED scheduling algorithm conforms to service
curve f
i
.
Proof. Fix a time t
o
1. Imagine that all packets with deadline t
o
or earlier are colored red,
and all packets with deadline t
o
+ 1 or later are white. For any time t 0, let q
o
(t) denote the
number of red packets that are carried over from time t to time t + 1. Since EDF is used, red
packets have pure priority over white packets, so q
o
is the queuelength process in case all white
packets are simply ignored. It must be proved that q
o
(t
o
) = 0. For 0 s t
o
1, the number of
red packets that arrive from stream i in the set of times s +1, , t
o
is (N
i
(t
o
) A
i
(s))
+
. Also,
N
i
(t
o
) A
i
(t
o
). Therefore
q
o
(t
o
) = max
0sto
_

i
(N
i
(t
o
) A
i
(s))
+
_
C(t
o
s).
For any s with 1 s t
o
,
N
i
(t
o
) = (A
i
f
i
)(t
o
) (A
i
g
i
f
i
)(t
o
) = min
0uto
A
i
(u) +(g
i
f
i
)(t
o
u) A
i
(s) +(g
i
f
i
)(t
o
s)
119
It follows that

i
(N
i
(t
o
) A
i
(s))
+

i
(g
i
f
i
)(t
o
s) C(t
o
s),
so that q
o
(t
o
) = 0, and the proposition is proved.
6.4 Problems
6.1. Deterministic delay constraints for two servers in series
Consider the network shown. There are three arrival streams. Suppose that A
i
is (
i
,
i
)-upper
B A
A
A
C C
1
2
1 2
3
FIFO PRIORITY TO A
3
1
constrained for each i and that
1
+
2
C
1
and
1
+
3
C
2
. The rst server is a FIFO server,
and the second server gives priority to stream 3, but is FIFO within each class. For all three parts
below, express your answers in terms of the given parameters, and also give numerical answers for
the case (
i
,
i
) = (4, 2) for 1 i 3 and C
1
= C
2
= 5.
(a) Give the maximum delay d
1
for customers of the rst queue. (Your answer should be nite
even if
1
+
2
= C
1
due to FIFO service order). (b) Find a value of such that B
1
is (,
1
)-upper
constrained. (Seek the smallest value you can.)
(c) Find an upper bound on the delay that stream 1 suers in the second queue.
6.2. Calculation of an acceptance region based on the SCED algorithm
Let
g
1
(t) = 20 +t f
1
(t) = 5(t 4)
+
g
2
(t) = 8 + 4t f
2
(t) = 6(t 20)
+
(a) Verify that
g

i
(t) =
_
0 if t = 0
g
i
(t) if t 1
(b) Sketch g

i
f
i
for i = 1, 2 on two separate graphs. These functions are piecewise linear. Label
the breakpoints (places the slopes change), the values of the functions g

i
f
i
at the breakpoints,
and the values of slope between breakpoints.
(c) Suppose that n
1
+ n
2
streams share a constant bit rate link of capacity C = 100 such that
for i = 1, 2, there are n
i
streams that are g
i
-upper constrained, and that each of these n
i
streams
require the link to appear as an f
i
-server. Using Service Curve Earliest Deadline rst (SCED), it
is possible to accommodate the streams if (n
1
, n
2
) satises

i
n
i
g

i
f
i
Ct for all t. Find and
sketch the region of admissible (n
1
, n
2
) pairs. (Hint: It is enough to check the inequality at the
breakpoints and at t . Why?)
6.3. Serve longer service priority with deterministically constrained arrival processes
Suppose two queues are served by a constant rate server with service rate C, which serves the
longer queue. Specically, suppose a discrete-time model is used, and after new customers arrive
120
in a given slot, there are C potential services available, which are allocated one at a time to the
longer queue. For example, if after the arrivals in a slot there are 9 customers in the rst queue,
and 7 in the second, and if C = 4, then 3 customers are served from the rst queue and 1 from the
second. Suppose the arrival process A
i
to queue i is (
i
,
i
)-upper constrained, for each i, where

1
,
2
,
1
,
2
, and C are strictly positive integers such that
1
+
2
C.
longer
C
A
A
1
2
serve
(a) Find a bound on the maximum number of customers in the system, carried over from one slot
to the next, which is valid for any order of service within each substream.
(b) Find a bound on the maximum delay which is valid for any order of service within each sub-
stream. (The bound should be nite if
1
+
2
< C.)
(c) Suppose in addition that customers within each stream are served in FIFO order. Find an
upper bound for the delay of customers in stream 1 which is nite, even if
1
+
2
= C. Explain
your reasoning.
121
122
Chapter 7
Graph Algorithms
In class we covered:
(i) basic graph terminology
(ii) the Prim-Dijkstra and Kruskul algorithms for the minimum weight spanning tree problem in
a directed graph
(iii) the Bellman-Ford and Dijkstra algorithms for the shortest path problem
Those topics are not covered in these notes: the reader is refered to Bertsekas and Gallager, Data
Networks, for those topics.
7.1 Maximum Flow Problem
A single-commodity ow network is a collection N = (s, t, V, E, C), where (V, E) is a directed
graph, s and t are distinct nodes in V , and C = (C(e) : e E) is a vector of nonnegative link
capacities. We write e = (i, j) if e is the link from node i to node j, in which case C(e) can also be
written as C(i, j). Let f = (f(e) : e E) such that 0 f(e) C(e) for all links e. The net ow
into a node j is given by

i:(i,j)E
f(i, j)

k:(j,k)E
f(j, k), (7.1)
and the net ow out of node j is minus one times the ow into node j. The net ow into a set of
nodes is the sum of the net ows into the nodes within the set. If the net ow into each node other
than s and t is zero, then f is called an s t ow. The value of an s t ow is dened to be the
net ow out of node s, or, equivalently, the net ow into node t.
An st cut for N is a partition (A : B) of the set of nodes (meaning that AB = V and AB =
) such that s A and t B. The capacity of such a cut is C(A, B) =

(i,j)E:iA,jB
C(i, j) The
famous min-ow, max-cut theorem, is stated as follows.
Theorem 7.1.1 (Max-Flow, Min-Cut Theorem) The maximum value of s t ows is equal
to the minimum capacity of s t cuts.
123
We shall give a proof of the theorem in the remainder of this section. On the one hand, for any
s t ow f and any s t cut (A : B)
value(f) =

(i,j)E:iA,jB
f(i, j)

(j,i)E:iA,jB
f(j, i)

(i,j)E:iA,jB
C(i, j) = C(A, B) (7.2)
The inequality in (7.2) is a consequence of the fact that f(i, j) C(i, j) and f(j, i) 0 for all i A
and j B. Thus, the value of any s t ow is less than or equal to the capacity of any s t cut.
It remains to be shown that there exists an s t ow and an s t cut such that the value of the
ow is equal to the capacity of the cut.
Let f be an s t ow with maximum value. Such a ow exits because the set of s t ows
is compact and value(f) is a continuous function of f. Without loss of generality, suppose that
E = V V (simply set C(e) = 0 for links not originally in the network.) Call a link (i, j) good if
f(i, j) < C(i, j) or f(j, i) > 0. Intuitively, a link is good if the net link ow f(i, j) f(j, i) can
be increased without violating the capacity constraints. Let A denote the set of nodes i in V such
that there is a path from s to i consisting of good links, and let B = V A. Note that t , A,
for otherwise the ow vector f could be modied to yield a ow with strictly larger value. In
addition, by the denition of good link and of A, f(i, j) = C(i, j) and f(j, i) = 0 for all i A and
all j B. Thus, the inequality in (7.2) holds with equality for this specic choice of cut (A, B).
This completes the proof of the min-ow, max-cut theorem.
The above proof suggests an algorithm for computing an s t ow with maximum value.
Starting with an arbitrary ow f, one seeks a path from s to t using only good links. If no such
path can be found, the ow has maximum value. If a path is found, a new ow with a strictly
larger value can be found. An algorithm can in fact be found (called the Malhotra, Kumar and
Maheshwari algorithm) to nd a maximum ow in O([V [
3
) steps. A variation of the algorithm can
also nd a ow with minimum weight, among all ows with a specied value, when nonnegative
weights are assigned to the links of the graph, in addition to the capacities.
7.2 Problems
7.1. A shortest path problem
A weighted graph with a single source node s is shown in Fig. 7.1. The weight of each edge is
assumed to be the same in either direction. (a) Indicate the tree of shortest paths from s to all
other nodes found by Dijkstras shortest path algorithm. (b) Is the tree of shortest paths from s for
the graph unique? Justify your answer. (c) How many iterations of the synchronous Bellman-Ford
algorithm are required to nd shortest length paths from the source node s to all other nodes of
this graph?
7.2. A minumum weight spanning tree problem
Consider again the graph of Figure 7.1. Find the minimum weight (undirected) spanning tree found
by Dijkstras MWST algorithm. Is the MWST for this graph unique? Justify your answer.
7.3. A maximum ow problem
A ow graph with a source node s and a terminal node t is indicated in Figure 7.2. A line between a
pair of nodes indicates a pair of directed links, one in each direction. The capacity of each directed
124
12 14 23 4
22 10
3 1 8 29
15 15
2
7
11 17 27 6
16 5
13 9
s
Figure 7.1: An undirected weighted graph.
12 3 23
14
7
22
11
13
3
1
4
29
1
9 2
27
1 15
12 16
S
t
6
15
Figure 7.2: A weighted graph with terminals s and t.
link in a pair is the same, and is indicated. Find a maximum value s t ow and prove that it is
indeed a maximum ow.
125
126
Chapter 8
Flow Models in Routing and
Congestion Control
This chapter gives an introduction to the study of routing and congestion control in networks, using
ow models and convex optimization. The role of routing is to decide which routes through the
network should be used to get the data from sources to destinations. The role of congestion control
is to determine data rates for the users so that fairness among users is achieved, and a reasonable
operating point is found on the tradeo between throughput and delay or loss. The role of ow
models is to give a simplied description of network control protocols, which focus on basic issues of
stability, rates of convergence, and fairness. Elements of routing and congestion control not covered
in this chapter, include routing tables, messages exchanged for maintaining routing tables, packet
headers, and so on.
8.1 Convex functions and optimization
Some basic denitions and facts concerning convex optimization are briey described in this section,
for use in the rest of the chapter.
Suppose that is a subset of R
n
for some n 1. Also, suppose that is a convex set, which
by denition means that for any pair of points in , the line segment connecting them is also in
. That is, whenever x, x
t
, ax + (1 a)x
t
for 0 a 1. A function f on is a
convex function if along any line segment in , f is less than or equal to the value of the linear
function agreeing with f at the endpoints. That is, by denition, f is a convex function on if
f(ax+(1a)x
t
) af(x) +(1a)f(x
t
) whenever x, x
t
and 0 a 1. A concave function f is
a function f such that f is convex, and results for minimizing convex functions can be translated
to results for maximizing concave functions.
The gradient, f, of a function f is a vector valued function, dened to be the vector of rst
partial derivatives of f:
f =
_
_
_
_
_
_
_
f
x
1
.
.
.
f
xn
_
_
_
_
_
_
_
127
We assume that f exists and is continuous. If x and if v is a vector such that x +v for
small enough > 0, then the directional derivative of f at x in the direction v is given by
lim
0
f(x +v) f(x)

= (f(x)) v =

i
f
x
i
(x)v
i
(8.1)
We write x

arg min
x
f(x) if x

minimizes f over , i.e. if x

and f(x

) f(x) for all


x .
Proposition 8.1.1 Suppose f is a convex function on a convex set with a continuous gradient
function f. Then x

arg min
x
f(x) if and only if (f(x

)) (x x

) 0 for all x .
Proof. (Necessity part.) Suppose x

arg min
x
f(x) and let x . Then x

+ (x
x

) for 0 1, so by the assumption on x

, f(x

) f(x

+ (x x

)), or equivalently,
f(x

+(xx

))f(x

0. Letting 0 and using (8.1) with v = xx

yields (f(x

)) (xx

) 0,
as required.
(Suciency part.) Suppose x

and that (f(x

)) (x x

) 0 for all x . Let x .


Then by the convexity of f, f(x + (1 )x

)) f(x) + (1 )f(x

) for 0 1. Equivalently,
f(x) f(x

) +
f(x

+(xx

))f(x

. Taking the limit as 0 and using the hypotheses about x

yields that f(x) f(x

) + (f(x

)) (x x

) f(x

). Thus, x

arg min
x
f(x).
8.2 The Routing Problem
The problem of routing in the presence of congestion is considered in this section. The ow rates
of the users are taken as a given, and the problem is to determine which routes should provide the
ow. In a later section, we consider the joint routing and congestion control problem, in which
both the ow rates of the users and the routing are determined jointly. The following notation is
used for both problems.
J is the set links.
R is the set of routes. Each route r is associated with a subset of the set of links. We write j r
to denote that link j is in route r. If all routes use dierent sets of links, we could simply
consider routes to be subsets of links. But more generally we allow two dierent routes to
use the same set of links.
A is the link-route incidence matrix, dened by A
j,r
= 1 if j r and A
j,r
= 0 otherwise.
S is the set of users. Each user s is associated with a subset of the set of routes, which is the set
of routes that serve user s. We write r s to denote that route r serves user s. We require
that the sets of routes for dierent users br disjoint, so that each route serves only one user.
H is the user-route incidence matrix, dened by H
s,r
= 1 if r s and H
s,r
= 0 otherwise.
y
r
is the ow on route r.
f
j
is the ow on a link j: f
j
=

r:jr
y
r
, or in vector form, f = Ay.
128
x
s
is the total ow available to user s: x
s
=

rs
y
r
, or in vector form, x = Hy. The vector
x = (x
s
: s S) is xed for the routing problem, and variable for the joint congestion control
and routing problem considered in Section 8.4.
D
j
(f
j
) is the cost of carrying ow j on link j. The function D
j
is assumed to be a convex,
continuously dierentiable, and increasing function on R
+
. The cost associated with ow
rate f
j
on link j is D
j
(f
j
).
The routing problem, which we now consider, is to specify the route rates for given values of
the user rates. It can be written as:
ROUTE(x,H,A,D):
min

jJ
D
j
(

r:jr
y
r
)
subject to
x = Hy
over
y 0.
We say that the vector y is a feasible ow meeting demand x if y 0 and x = Hy. Problem
ROUTE(x,H,A,D) thus concerns nding a feasible ow y, meeting the demand x, which minimizes
the total cost F(y), dened by
F(y) =

j
D
j
(

r:jr
y
r
).
As far as the mathematics is concerned, the links could be resources other than connections
between nodes, and the set of links in a route could be an arbitrary subset of J. But to be concrete,
we can think of there being an underlying graph with nodes representing switches in a network,
links indexed by ordered pairs of nodes, and routes forming paths through the network. We could
assume each user has a specic origin node and destination node, and the routes that serve a user
each connect the source node to the destination node of the user.
A possible choice for the function D
j
would be
D
j
(f
j
) =
f
j
C
j
f
j
+f
j
d
j
(8.2)
where C
j
is the capacity of link j and d
j
is the propagation delay on link j. If f
j
and C
j
are in units
of packets per second, if arrivals are modeled as a Poisson process, and if service times are modeled
as exponentially distributed (based on random packet lengths with an exponential distribution),
then a link can be viewed as an M/M/1 queue followed by a xed delay element, and
1
C
j
f
j
is the
mean system time for the queue. Thus, the mean transit time of a packet over the link j, including
the propagation delay, is
1
C
j
f
j
+ d
j
. Then by Littles law, D
j
(f
j
) given by (8.2), represents the
mean number of packets in transit on link j. Therefore, the total cost, F(y), represents the mean
number of packets in transit in the network. By Littles law, the average delay of packets entering
the network is thus F(y)/

s
x
s
. For xed user ow rates (x
s
: s S), minimizing the cost is
therefore equivalent to minimizing the delay in the network, averaged over all packets.
129
(c)
1
2
4
3
5
3
1
2
3
1
3
4 3
0
5
0
0
1
2
4
3
5
1
2
4
3
5
(b) (a)
Figure 8.1: Example of a network showing (a) the network, (b) route ows, and (c) link ows.
Example Consider the network graph shown in Figure 8.1(a). For this example there is an
underlying set of 5 nodes, numbered 1 through 5. For this example we take
J = (1, 2), (1, 4), (2, 1), (2, 3), (2, 5), (3, 5), (4, 5), (5, 4), with each link labeled by its endpoints.
S = (1, 5), (2, 4), (2, 5), with each user labeled by its origin and destination nodes.
We assume user (1, 5) is served by routes (1,2,5), (1,4,5), (1,2,3,5)
user (2, 4) is served by routes (2,1,4), (2,5,4)
user (2, 5) is served by routes (2,5), (2,3,5)
where each route is labled by the sequence of nodes that it traverses.
For brevity we write the ow y
r
for a route r = (1, 2, 5) as y
125
, rather than as y
(1,2,5)
. We
also write the rate x
s
for a user s = (1, 5) as x
1,5
, rather than as x
(1,5)
, and the ow f
j
for a link
j = (1, 2) as f
1,2
, rather than as f
(1,2)
. An example of an assignment of ow rates to routes, shown
in Figure 8.1(b), is given by
y
125
= 3.0, y
145
= 3.0, y
1235
= 0, y
214
= 1.0, y
254
= 0, y
25
= 2.0, y
235
= 0 (8.3)
which implies the ow rates to the users
x
1,5
= 6.0, x
2,4
= 1.0, x
2,5
= 2.0 (8.4)
and the link rates f
1,2
= 3, f
1,4
= 4, f
2,1
= 1, etc., as shown in Figure 7.1(c). The resulting cost is
F(y) = D
1,2
(3) +D
1,4
(4) +D
2,1
(1) +D
2,3
(0) +D
2,5
(5) +D
3,5
(0) +D
4,5
(3) +D
5,4
(0).
We consider, in the context of this example, the eect on the cost of changing the ow rates. If
y
125
were increased to y
125
+ for some > 0, then the cost F(y) would increase, with the amount
of the increase given by
F = D
1,2
(3 +) D
1,2
(3) +D
2,5
(5 +) D
2,5
(5)
= (D
t
1,3
(3) +D
t
2,5
(5)) +o()
Similarly, if y
145
were increased by , the cost would increase by (D
t
1,4
(4) + D
t
4,5
(3)) + o(). If
instead, y
145
were decreased by , the cost would decrease by (D
t
1,4
(4) + D
t
4,5
(3)) + o(). If two
130
small changes were made, the eect on F(y), to rst order, would be the sum of the eects. Thus,
if y
125
were increased by and y
145
were simultaneously decressed by , that is, if units of ow
were deviated to route (1, 2, 5) from route (1, 4, 5), then the change in cost would be
(cost of 145125 ow deviation) = (D
t
1,2
(3) +D
t
2,5
(5) D
t
1,4
(4) D
t
4,5
(3)) +o()
That is, to rst order, the change in cost due to the ow deviation is times the rst derivative
length D
t
1,2
(3) +D
t
2,5
(5) of route (1, 2, 5) minus the rst derivative length D
t
1,4
(4) +D
t
4,5
(3) of route
(1, 4, 5). Thus, for small enough , the ow deviation will decrease the cost if ow is deviated from a
longer route to a shorter one, if the lengths of the routes are given by the sums of the rst derivative
link lengths, D
j
(f
j
), of the links in the routes.
We return from the example to the general routing problem. The observation made about ow
deviation in the example is general. It implies that a necessary condition for y to be optimal is
that for any s and any r s, the rate y
r
is strictly positive only if r has minimum rst derivative
length among the routes serving s. Since the set of feasible rate vectors y meeting the demand x
is a convex set, and the cost function F is convex, it follows from elementary facts about convex
optimization, described in Section 8.1, that the rst order necessary condition for optimality is also
sucient for optimality. Specically, the following is implied by Proposition 8.1.1.
Proposition 8.2.1 A feasible ow y = (y
r
: r R) meeting demand x minimizes the cost over all
such ows, if and only if there exists a vector (
s
: s S) so that

jr
D
t
j
(f
j
)
s
, with equality
if y
r
> 0, whenever r s S.
Given a ow vector y, let
r
denote the rst derivative length of route r. That is,
r
=

jr
D
j
(

:jr
y
r
). Note that if the rate requirement for a user s is strictly positive, i.e. x
s
> 0,
then the parameter
s
in Proposition 8.2.1 must satisfy
s
= min
rs

r
. The example and Propo-
sition 8.2.1 suggest the following algorithm, called the ow-deviation algorithm, for nding an
optimal ow y. The algorithm is iterative, and we describe one iteration of it.
The ow deviation algorithm One iteration is described. At the beginning of an itera-
tion, a feasible ow vector y, meeting the specied demand x, is given. The iteration nds
the next such vector, y.
Step1. Compute the link ow f
j
and rst derivative link lengths, D
j
(f
j
), for j J.
Step 2. For each user s, nd a route r

s
serving s with the minimum rst derivative link
length.
Step 3. Let [0, 1]. For each user s, deviate a fraction of the ow from each of the
other routes serving s to r

s
. That is, for each s, let
y
r
=
_
(1 )y
r
if r s and r ,= r

s
y
r
+(

:r,=r

s
y
r
) if r = r

s
Adjust to minimize F(y). The resulting vector y is the result of the iteration.
The ow deviation algorithm is not a distributed algorithm, but some parts of it can be im-
plemented in a distributed way. A user could gather the rst derivative link lengths of its routes
131
by having the links along each route signal the information in passing packets. A user could then
determine which of its routes has the shortest rst derivative link length. Synchronization of the
ow deviation step across users, and in particular, the fact that all users must collectively determine
a single best value of , is the main obstacle to distributed implementation of the ow deviation al-
gorithm presented. However, by having each user respond slowly enough, possibly asynchronously,
convergence can be achieved.
It may give insight into the problem to ignore the step size problem, nd an algorithm, and
then address the step size selection problem later. This suggests describing an algorithm by an
ordinary dierential equation. For the routing problem at hand, imagine that all users gradually
shift their ow towards routes with smaller rst derivative link length. Let y(t) denote the vector
of ows at time t, and let
r
(t) denote the corresponding rst derivative link length of route r at
time t, for each r R. Assume that the users adjust their rates slowly enough that the variables

r
(t) can be continuously updated. In the following, we suppress the variable t in the notation, but
keep in mind that y and (
r
: r R) are time varying. Weve seen that if r and r
t
are two routes
for a given user s, if
r
>
r
, and if y
r
> 0, then the cost F(y) can be reduced by deviating ow
from r
t
to r. If the speed of deviation is equal to y
r
(
r

r
), then y
r
is kept nonnegative, and
the factor (
r

r
) avoids discontinuities in case two routes have nearly the same length. Suppose
such changes are made continuously for all pairs of routes. Let = (
r,r
: r, r
t
R) be a matrix of
constants such that
r,r
=
r

,r
> 0 whenever r, r
t
s for some s S, and
r,r
= 0 if r and r
t
serve
dierent users. We will use the constant
r,r
as the relative speed for deviation of ow between
routes r and r
t
. The following equation gives a ow deviation algorithm in ordinary dierential
equation form.
A continuous ow deviation algorithm
y
r
=

s
_

,r
y
r
(
r

r
)
+

r,r
y
r
(
r

r
)
+
_
for r s S (8.5)
Because the changes in y are due to transfers between routes used by the same user, we expect
x(t) = Hy(t), to be constant. Indeed, using the symmetry of and interchanging the variables r
and r
t
on a summation, yields that for any user s,
x
s
=

rs
y
r
=

rs

,r
y
r
(
r

r
)
+

rs

r,r
y
r
(
r

r
)
+
=

rs

,r
y
r
(
r

r
)
+

rs

,r
y
r
(
r

r
)
+
= 0
132
Now
F
yr
=
r
, so by the chain rule of dierentiation, and the same interchange of r and r
t
used in
computing x
s
,
dF(y(t))
dt
=

r
y
r
=

rR

R
_

,r
y
r

r
(
r

r
)
+

r,r
y
r

r
(
r

r
)
+
_
=

rR

R
_

,r
y
r

r
(
r

r
)
+

r

,r
y
r

r
(
r

r
)
+
_
=

rR

,r
y
r
(
r

r
)
2
+
0
Thus, the cost F(y(t)) is decreasing in time. Moreover, the rate of decrease is strictly bounded
away from zero away from a neighborhood of the set of y satisfying the necessary conditions for
optimality. It follows that all limit points of the trajectory (y(t) : t 0) are solutions to the routing
problem, and F(y(t)) converges monotonically to the minimum cost.
The step size problem is an important one, and is not just a minor detail. If the users do
not implement some type of control on how fast they make changes, oscillations could occur. For
example, one link could be lightly loaded one minute, so the next, many users could transfer ow
to the link, to the point it becomes overloaded. Then they may all transfer ow away from the
link, and the link may be very lightly loaded again, and so on. Such oscillations have limited the
use of automated dynamic routing based on congestion information. In applications, an important
facet of the implementation is that the ow rates at the links often have to be estimated, and the
accuracy of an estimate over an interval of length t is typically on the order of 1/

t, by the central
limit theorem.
The two routing algorithms considered in this section are considered to be primal algorithms,
because the ow rates to be found are directly updated. In contrast, in a dual algorithm, the
primal variables are expressed in terms of certain dual variables, or prices, and the algorithm seeks
to identify the optimal values of the dual variables. Dual algorithms arise naturally for networks
with hard link constraints of the form f
j
C
j
, rather than with the use of soft link constraints
based on cost functions D
j
(f
j
) as in this section. See the remark in Section 8.7
8.3 Utility Functions
Suppose a consumer would like to purchase or consume some commodity, such that an amount
x of the commodity has utility or value U(x) to the consumer. The function U is called the
utility function. Utility functions are typically nondecreasing, continuously dierentiable, concave
functions on R
+
. Commonly used examples include:
(i) U(x) = log x (for some constant > 0)
(ii) U(x) =
_

1
_
(x
1
1) (for some constants , > 0, ,= 1)
(iii) U(x) =

2
_
x
2
( x x)
2
+

(for some constants , x > 0)


133
(iv) U(x) = exp(x) (for some constant > 0)
The function in (ii) converges to the function in (i) as 1.
If x is increased by a small amount the utility U(x) is increased by U
t
(x) +o(x). Thus U
t
(x)
is called the marginal utility per unit of commodity. Concavity of U is equivalent to the natural
property that the marginal utility is monotone nonincreasing in x.
If a rational consumer can buy commodity at a price per unit, then the consumer would
choose to purchase an amount x to maximize U(x) x. Dierentiation with respect to x yields
that the optimal value x
opt
satises
U
t
(x
opt
) , with equality if x
opt
> 0. (8.6)
It is useful to imagine increasing x, starting from zero, until the consumers marginal utility is equal
to the price . If U is strictly concave, and if (U
t
)
1
is the inverse function of U
t
, we see that the
response of the consumer to price is to purchase x
opt
= (U
t
)
1
() units of commodity.
The response function for a logarithmic utility function U(x) = log x has a particularly simple
form, and an important interpretation. The response of a rational consumer with such a utility
function to a price is to purchase x
opt
given by x
opt
= /. So x
opt
= for any > 0. That
is, such a consumer eectively decides to make a payment of , no matter what the price per unit
ow is.
Exercise: Derive the response functions for the other utility functions given in (i)-(iv) above.
8.4 Joint Congestion Control and Routing
In Section 8.2, the ow requirement x
s
for each user s is assumed to be given and xed. However, if
the values of the x
s
s are too large, then the congestion in the network, measured by the sum of the
link costs, may be unreasonably large. Perhaps no nite cost solution exists. Congestion control
consists of restricting the input rates to reasonable levels. One approach to doing this fairly is to
allow the x
s
s to be variables; and to assume utility functions U
s
are given. The joint congestion
control and routing problem is to select the y
r
s. and hence also the x
s
s. to maximize the sum of
the utilities of the ows minus the sum of link costs. We label the problem the system problem,
because it involves elements of the system ranging from the users utility functions to the topology
and cost functions of the network. Its given as follows:
SYSTEM(U,H,A,D) (for joint congestion control and routing with soft link constraints):
max

sS
U
s
(x
s
)

jJ
D
j
(

r:jr
y
r
)
subject to
x = Hy
over
x, y 0.
This problem involves maximizing a concave objective function over a convex set. Optimality
conditions, which are both necessary and sucient for optimality, can be derived for this joint
134
congestion control and routing problem in a fashion similar to that used for the routing problem
alone.
1
The starting point is to note that the change in the objective function due in increasing x
r
by , for some user s and some route r serving s, is
(U
t
s
(x
s
)

jr
D
t
j
(f
j
)) +o()
This implies that if y is an optimal ow, then
U
t
s
(x
s
)

jr
D
t
j
(f
j
), with equality if y
r
> 0, whenever r s S. (8.7)
That is, condition (8.7) is necessary for optimality of x. Due to the concavity of the objective
function, the optimality condition (8.7) is also sucient for optimality, and indeed Proposition
8.1.1 implies that (8.7) is both necessary and sucient for optimality.
The rst derivative length of a link j, D
t
j
(f
j
), can be thought of as the marginal price (which
we will simply call price, for brevity) for using link j, and the sum of these prices over j r for
a route r, is the price for ow along the route. In words, (8.7) says the following for the routes r
serving a user s: The price of any route with positive ow is equal to the marginal utility for s,
and the price of any route with zero ow is greater than or equal to the marginal utility for s.
8.5 Hard Constraints and Prices
In some contexts the ow f on a link is simply constrained by a capacity: f C. This can be
captured by a cost function which is zero if f C and innite if f > C. However, such a cost
function is highly discontinuous. Instead, we could consider for some small , > 0 the function
D(f) =
1
2
(f (C ))
2
+
.
The price D
t
(f) is given by
D
t
(f) =
1

(f (C ))
+
. (8.8)
As , 0, the graph of the price function D
t
converges to a horizontal segment up to C and
a vertical segment at C, as shown in Figure 8.2. That is, for the hard constraint f C, the
price is zero if f < C and the price is an arbitrary nonnegative number if f = C. (This is the
complementary slackness condition for Kuhn-Tucker constraints.)
The routing problem formulated in Section 8.2 already involves hard constraintsnot inequality
constraints on the linksbut equality constraints on the user rates x
s
. The variable
s
can be viewed
as a marginal price for the rate allocation to user s. The rate x
s
of user s could be increased by
at cost
s
+ o() by increasing the ow along one of the routes r serving s with minimum rst
derivative link length
s
.
The joint routing and congestion control problem with hard link constraints is similar to prob-
lem SYSTEM(U,H,A,D), but with the cost functions (D
j
: j J) replaced by a vector of capacities,
1
In fact, through the introduction of an additional imaginary link, corresponding to undelivered ow, it is possible
to reduce the joint congestion control and routing problem described here to a pure routing problem. See [2, Section
6.5.1].
135
D(f)
0
f
0
f
C!! C C C!!
D(f)
limit
Figure 8.2: Steep cost function, approximating a hard constraint.
C = (C
j
: j J):
SYSTEM(U,H,A,C) (for joint congestion control and routing with hard link constraints):
max

sS
U
s
(x
s
)
subject to
x = Hy, Ay C
over
x, y 0.
The optimality conditions for SYSTEM(U,H,A,C) are, for some vector = (
j
: j J),

j
0, with equality if f
j
< C
j
, for j J (8.9)
U
t
s
(x
s
)

jr

j
, with equality if y
r
> 0, whenever r s S. (8.10)
Equation (8.10) is identical to (8.7), except
j
replaces D
t
j
(f
j
). Equation (8.9) shows that
j
satises the limiting behavior of the approximating function D
t
(f
j
) in (8.8), as , 0. It is
plausible that (8.9) and (8.10) give optimality conditions for SYSTEM(U,H,A,C), because in a
formal sense they are the limit of (8.7) as the soft constraints approach hard constraints. The
suciency of these conditions for optimality is proved in an exercise, based on use of a Lagrangian.
8.6 Decomposition into Network and User Problems
A diculty of the joint congestion control and routing problem is that it involves the users utility
functions, the topology of the network, and the link cost functions or capacities. Such large, hetero-
geneous optimization problems can often be decoupled, using prices (a.k.a. Lagrange multipliers)
across interfaces. A particularly elegant formulation of this method was given in the work of Kelly
[24], followed here. See Figure 8.3. The idea is to view the problem faced by a user as how much
to pay. Once each user s has decided to pay
s
for its rate, the network mechanisms can ignore the
real utility function U
s
(x
s
) and pretend the utility function of user s is the surrogate utility function
136
Network
User 1
User n
User s
!
1
"
1
"
n
!
n
s
"
!
s
.
.
.
.
.
.
Figure 8.3: Coupling between the network and user problems in the system decomposition.
V
s
(x
s
) =
s
log(x
s
). That is because, as noted in Section 8.3, the response of a user with utility
function
s
log(x
s
) to a xed price
s
is to pay
s
, independently of the price. The network can
determine (x, y), using the surrogate valuation functions. In addition, for each user s, the network
can determine a price
s
, equal to the minimum marginal cost of the routes serving s:

s
= min
rs

r
where
r
=
_
jr
D
j
(f
j
) in the case of soft link constraints

jr

j
in the case of hard link constraints.
(8.11)
The network reports
s
to each user s. User s can then also learn the allocation rate x
s
allocated to
it by the network, because x
s
=
s

s
. The other side of the story is the user optimization problem.
A user s takes the price
s
as xed, and then considers the possibility of submitting a new value of
the payment,
s
, in response. Specically, if the price
s
resulting from payment
s
is less than the
marginal utility U
t
s
(
s
/
s
), then the user should spend more (increase
s
). Note that the network
and user optimization problems are connected by the payments declared by the users (the
s
s) and
the prices declared by the network (the
s
s). The network need not know the utility functions of
the users, and the users need not know the topology or congestion cost functions of the network.
Input data for the network problem includes the vector of payments . but not the utility
functions. The problem is as follows.
NETWORK(,H,A,D) (for joint congestion control and routing with soft link constraints):
max

sS

s
log(x
s
)

jJ
D
j
(

r:jr
y
r
)
subject to
x = Hy
over
x, y 0.
The user problem for a given user s does not involve the network topology. It is as follows.
137
USER
s
(U
s
,
s
):
max U
r
_

s
_

s
over

s
0.
Let us prove that if the optimality conditions of both the user problem and the network problem
are satised, then the optimality conditions of the system problem are satised. We rst consider
the problem with soft link constraints. The optimality conditions for the user problem are:
U
t
s
(x
s
)
s
, with equality if x
s
> 0, for s S. (8.12)
The optimality conditions for the network problem are given by (8.7) with U
t
s
replaced by V
t
s
. Since
V
t
s
(x
s
) =
s
xs
(for this to always be true, we interpret
s
xs
to be zero if
s
= x
s
= 0), the optimality
conditions for the network problem are thus:

s
x
s

jr
D
t
j
(f
j
), with equality if y
r
> 0, whenever r s S. (8.13)
The fact, true by (8.12), that U
t
s
(x
s
)
s
for all s S, and the rule (8.11) for selecting
s
, imply
that U
t
s
(x
s
)

jr
D
t
j
(f
j
) whenever r s S. Thus, it remains to prove the equality condition
in (8.7) in case y
r
> 0. If r s S and y
r
> 0, then x
s
> 0. Therefore, by (8.13),

s
=

s
x
s
(8.14)
Also, equality holds in both (8.12) and (8.13), so using (8.14) to eliminate
s
, we nd equality holds
in (8.7).
The story is similar for the case of hard link constraints, as we see next. The network problem is
NETWORK(,H,A,C) (for joint congestion control and routing with hard link constraints):
max

sS

s
log(x
s
)
subject to
x = Hy, Ay C
over
x, y 0.
and the same user problem is used. The optimality condition for the user problem is again (8.12).
The optimality condition for the network problem is given by (8.9) and (8.10) with U
t
s
replaced by
V
t
s
:

j
0, with equality if f
j
< C
j
, for j J (8.15)

s
x
s

jr

j
, with equality if y
r
> 0, whenever r s S. (8.16)
138
We wish to show that the user problem optimality conditions, (8.12), the network problem
optimality conditions, (8.15) and (8.16), and the choice of (
s
: s S), given in (8.11), imply the
optimality conditions for the system problem, (8.9) and (8.10). Equation (8.15) is the same as
(8.9). The proof of (8.16) follows by the same reasoning used for soft constraints.
In summary, whether the link constraints are soft or hard, if the network computes the prices
using
s
= min
rs

r
, and if the optimality conditions of the network and user problems are satised,
then so are the optimality conditions of the system problem.
8.7 Specialization to pure congestion control
A model for pure congestion control is obtained by assuming the users in the joint congestion con-
trol and routing problem are served by only one route each. In this situation we can take the set
of routes R to also be the set of users., and write x
r
for the rate allocated to user r. If hard link
constraints are used, the congestion control problem is the following.
SYSTEM(U,A,D) (for congestion control with soft link constraints):
max

rR
U
r
(x
r
)

jJ
D
j
(

r:jr
x
r
)
over
x 0.
The optimality conditions for this problem are
U
t
r
(x
r
)

jr
D
t
j
(f
j
), with equality if x
r
> 0, for all r R. (8.17)
If hard link constraints are used, the congestion control problem is the following.
SYSTEM(U,A,C) (for congestion control with hard link constraints):
max

rR
U
r
(x
r
)
subject to
Ax C
over
x 0.
The optimality conditions for this problem are

j
0, with equality if f
j
< C
j
, for j J (8.18)
U
t
r
(x
r
)

jr

j
, with equality if x
r
> 0, for r R. (8.19)
139
Link 3
Flow 2 Flow 3 Flow 1
Flow 0
Link 1 Link 2
Figure 8.4: A three link network with four users
The corresponding network problem is obtained by replacing the utility function U
r
(x
r
) by

r
log(x
r
) for each r:
NETWORK(,A,C) (for congestion control with hard link constraints):
max

rR

r
log(x
r
)
subject to
Ax C
over
x 0.
The optimality conditions for this problem are special cases of (8.18) and (8.19) and are given
by (with f = Ax):

j
0, with equality if f
j
< C
j
, for j J (8.20)

r
= x
r
(

jr

j
), for r R, (8.21)
The user problem for the congestion control system problem is the same as for the joint routing and
congestion control problems. Basically, the user cannot distinguish between these two problems.
Remark We remark briey about dual algorithms, which seek to nd an optimal value for the
vector of dual variables, . The primal variables, x, are determined by through (8.21). The dual
variable
j
is adjusted in an eort to make sure that f
j
C
j
, and to satisfy the other optimality
condition, (8.20). Intuitively, if for the current value (t) of the dual variables, the constraint
f
j
C
j
is violated,
j
should be increased. If the contraint f
j
is strictly smaller than C
j
, then
j
should be decreased (if it is strictly positive) or be held at zero otherwise.
Example Consider the following instance of the problem NETWORK(, A,C), for congestion
control with hard link constraints. Suppose J = 1, 2, 3 and R = 0, 1, 2, 3, so there are three
links and fours users, with one route each. Suppose the route of user 0 contains all three links, and
the route of user i contains only link i, for 1 i 3, as shown in Figure 8.4. Thus, the optimality
conditions are given by (8.20) and (8.21). In order for x to be an optimal ow, it is necessary for
this example that f
j
= C
j
for all j, because each link has a single-link user. Therefore,
1
,
2
, and

3
can be any values in R
+
, and x
i
= 1 x
0
for 1 i 3. Condition (8.21) becomes:

0
= x
0
(
1
+
2
+
3
),
1
= x
1

1
,
2
= x
2

2
,
3
= x
3

3
..
140
Eliminating the s and using x
i
= 1 x
0
yields:

0
= x
0
_

1
1 x
0
+

2
1 x
0
+

3
1 x
0
_
,
or
x
0
=

0

0
+
1
+
2
+
3
.
For example, if
i
= 1 for all i, then x = (0.25, 0.75, 0.75, 0.75), and = (1.333, 1.333, 1.333). If
3
were increased to two, still with
0
=
1
=
2
= 1, then the new allocations and link prices would
be x = (0.2, 0.8, 0.8, 0.8) and = (1.25, 1.25, 2.5). Thus, the increase in
3
causes an increase in
the allocation to user 3 from 0.75 to 0.8. A side eect is that the rates of users 1 and 2 also increase
the same amount, the rate of user 0 is decreased, the price of link 3 increases substantially, while
the prices of links 1 and 2 decrease somewhat.
8.8 Fair allocation
The problem SYSTEM(U,A,C) is an example of a pure allocation problem with no costs. The
allocation x is constrained to be in the convex set x 0 : Ax C. A generalization is to replace
the space of x by a general convex set . For example, we could let = x 0 :

rR
x
2
r
C.
This leads to the following generalization of SYSTEM(U,A,C):
ALLOCATE(U,)
max

rR
U
r
(x
r
)
over
x .
An optimal solution exists if is closed and bounded and the functions U
r
are continuous. By
Proposition 8.1.1, x

is optimal if and only if

rR
U
r
t
(x

r
)(x
r
x

r
) 0 for all x . (8.22)
In case U
r
(x
r
) =
r
log x
r
for all r, for some constants
r
> 0, the optimality condition (8.22)
becomes

rR

r
_
x
r
x

r
x

r
_
0 for all x . (8.23)
The term in braces in (8.23) is the weighted normalized improvement of x
r
over x

r
for user r. For
example, if
xrx

r
x

r
= 0.5, we say that x
r
oers factor 0.5 increase over x

r
. The optimality condition
(8.23) means that no vector x can oer a positive weighted sum of normalized improvement over
x

. A vector x

that satises (8.23) is said to be a proportionally fair allocation, with weight vector
.
If instead, U
i
(x) =
_

i
1
_
(x
1
1) for each i, where
i
> 0 for each i, and 0 is the same
for all users, then an optimal allocation is called -fair, and the optimality condition for -fairness
141
is
n

i=1

i
_
x
i
x

i
(x

i
)

_
0 for all x .
The limiting case = 1 corresponds to proportional fairness. The case = 0 corresponds maxi-
mizing a weighted sum of the rates. A limiting form of fairness as is max-min fairness,
discussed next.
Max-Min Fair Allocation
In the remainder of this section we discuss max min fairness, an alternative to the notion of
proportional fairness. Perhaps the easiest way to dene the max-min fair allocation is to give a
recipe for nding it. Suppose that is a closed, convex subset of R
n
+
containing the origin. Let e
r
denote the vector in R
n
with r
th
coordinate equal to one, and other coordinates zero. Given x ,
let g
r
(x) = 1 if x + e
r
for suciently small > 0, and g
r
(x) = 0 otherwise. Equivalently,
g(x) is the maximal vector with coordinates in 0, 1 such that x + g(x) for suciently
small > 0. Construct the sequence (x
(k)
: k 0) as follows. Let x
(0)
= 0. Given x
(k)
, let
x
(k+1)
= x
(k)
+ a
k
g(x
(k)
), where a
k
is taken to be as large as possible subject to x
(k+1)
. For
some K n, g(x
(K)
) = 0. Then x
(K)
is the max-min fair allocation. We denote it by x
maxmin
.
Equivalently, if the coordinates of x
maxmin
and any other x are both reordered to be in
nondecreasing order, then the reordered version of x
maxmin
is lexicographically larger than the
reordered version of x
t
.
Another characterization is that for any x , if x
r
> x
maxmin
r
for some r, then there exists
r
t
so that x
t
r
< x
maxmin
r

x
maxmin
r
.
Still another characterization can be given for x
maxmin
, in the special case that = x 0 :
Ax C, as in the congestion control problem with hard link constraints. Setting f
maxmin
=
Ax
maxmin
, the characterization is as follows: For each route r there is a link j
r
so that f
maxmin
j
=
C
j
and x
maxmin
r
= maxx
maxmin
r

: j r
t
. Link j
r
is called a bottleneck link for route r. In
words, a bottleneck link for route r must be saturated, and no other user using link j
r
can have a
strictly larger allocation than user r. Due to this last characterization, a max-min fair allocation
is sometimes called the envy free allocation, because for each user r there is a bottleneck link, and
user r would not get a larger allocation by trading allocations with any of the other users using
that link.
8.9 A Network Evacuation Problem
8.10 Braess Paradox
Consider the network shown in Fig. 8.5. The arrival process is assumed to be a Poisson process of
rate . The upper branch of the network has a queue with a single exponential server followed by
an innite server queue with mean service time 2. The lower branch is similar, but with the order
of the stations reversed. The trac enters at a branch point, at which each customer is routed
to one of the two branches, chosen independently and with equal probabilities. The scenario is
equivalent to having a large number of independent Poisson arrival streams with sum of arrival
rates equal to 2, in which all customers from some streams are routed to the upper branch, and
all customers from the other streams are routed to the lower branch. For reasons that are apparent
below, assume that 0 < < 1 < 2. For example, = 1.0 and = 2.5.
142

2
2
2!
Figure 8.5: A network with two routes.

2
2
2!
1
2!"!#
2!"!#
2(!#"!)
!#
!#
Figure 8.6: Augmented network.
Suppose that the choice of routing for each stream is made in a greedy fashion, so that no stream
can improve its mean sojourn time by switching its routing choice. In game theory terminology, this
is to say that the routing choices for the substreams form a Nash (or person-by-person optimal)
equilibrium. By symmetry and by the convexity of the mean number of customers in an M/M/1
queue as a function of arrival rate, the Nash equilibrium is achieved when half the customers are
routed to the upper branch, and half to the lower branch. This leads to Poisson arrival processes
with parameter for each branch, and hence mean system time D = 2+1/() for both branches.
The assumption that < 1 implies that D < 3.
Consider next the network of Fig. 8.6, which is obtained by augmenting the original network
by a link with an innite server queue with mean service time one. Let us consider what the new
Nash equilibrium routing selection is. The augmented network has three routes: upper branch,
lower branch, and cross-over route. Seek a solution in which all three routes are used by a positive
fraction of the customers. This entails that the mean sojourn time for each of the three routes is the
same. Let
t
denote the sum of the rates for the upper branch and cross-over routes. Equating the
mean sojourn time of the cross-over route to that of the lower branch route yields that the mean
system time, 1/(
t
), of the single server station on the upper branch, should be 1. We thus set

t
= 1. Our assumptions on and insure that
t
(0, 2), as desired. The mean sojourn
time on the upper branch and lower branch should also be the same, so that the mean time in the
single server system on the lower branch should also be one, so that its throughput should also be

t
. This determines the rates for all three routes, and the resulting trac rates on the links of the
network are shown in the gure. Further thought reveals this to be the unique Nash equilibrium
routing selection.
Let us compare the mean sojourn time of customers in the two networks. The mean sojourn
time in the original network is less than three, whereas in the augmented network it is equal to
three. Thus, paradoxically, the mean sojourn time of all substreams is larger in the augmented
143
network than in the original network. Roughly speaking, the added link encourages customers to
make heavier use of the single server systems, which are not as ecient from a social point of
view. Of course if all streams of customers were to be banned from using the crossover link, then
the performance observed in the original network would be obtained. This point corresponds to a
Pareto optimal equilibrium point, and it clearly requires cooperation among the substreams.
8.11 Further reading and notes
The ow deviation algorithm was introduced in [15] and is discussed in [27], and is a special case
of the Frank-Wolfe method. The ow deviation algorithms and other algorithms, based on use of
second derivative information and projection, for congestion control and routing, are covered in [2].
For the decomposition of system problems into user and network problems, we follow [24]. Much
additional material on the primal and dual algorithms, and second order convergence analysis based
on the central limit theorem, is given in [25]. The notion of -fairness is given in [31]. Additional
material on algorithms and convergence analysis can be found in [38]. Braess discovered his paradox
in a transportation network setting in 1968. It was adapted to the queueing context by Cohen and
Kelly [6].
Today many subnetworks of the Internet use shortest path routing algorithms, but typically the
routing is not dynamic, in that the link weights do not depend on congestion levels. Perhaps this
is because the performance is deemed adequate, because there is plenty of capacity to spare. But
it is also because of possible instabilities due to time lags. The recent surge in the use of wireless
networks for high data rates has increased the use of dynamic routing (or load balancing) methods.
Congestion control is central to the stability of the Internet, embodied by the transport control
protocol (TCP) and van Jacobsons slow start method. The decomposition method discussed in
this chapter matches well the congestion control mechanism used on the Internet today, and it gives
insight into how performance could be improved.
8.12 Problems
8.1. A ow deviation problem (Frank-Wolfe method)
Consider the communication network with 24 links indicated in Figure 8.7. Each undirected
1 2 3
4 5
8 9 7
6
Figure 8.7: Mesh network with one source and one destination
link in the gure represents two directed links, one in each direction. There are four users:
S = (1, 9), (3, 7), (9, 1), (7, 3), each with demand b. The initial routing is deterministic, being
concentrated on the four paths (1, 2, 5, 6, 9),(3, 6, 5, 8, 7),(9, 8, 5, 4, 1), and (7, 4, 5, 2, 3). Suppose
that the cost associated with any one of the 24 links is given by D
j
(f) = f
2
j
/2, where f
j
is the total
ow on the link, measured in units of trac per second.
144
(a) What is the cost associated with the initial routing?
(b) Describe the ow (for all four users) after one iteration of the ow deviation algorithm. Assume
that, for any given user, any route from the origin to the destination can be used. Is the resulting
ow optimal?
8.2. A simple routing problem in a queueing network
Consider the open queueing network with two customer classes shown in Figure 8.8. Customers of
2
2 2
1!p
p
Source a
Source b exit
Figure 8.8: Open network with two types of customers
each type arrive according to a Poisson process with rate one. All three stations have a single expo-
nential server with rate two, all service times are independent, and the service order is rst-come,
rst-served. Each arrival of a customer of type b is routed to the upper branch with probability p,
independently of the history of the system up to the time of arrival. (a) Find expressions for D
a
(p)
and D
b
(p), the mean time in the network for type a and type b customers, respectively. (b) Sketch
the set of possible operating points (D
a
(p), D
b
(p)) : 0 p 1. (c) Find the value p
a
of p which
minimizes D
a
(p). (d) Find the value p
b
of p which minimizes D
b
(p). (e) Find the value p
ave
of p
which minimizes the average delay, (D
a
(p) +D
b
(p))/2. Also, how many iterations are required for
the ow deviation algorithm (Frank-Wolfe method) to nd p
ave
? (f) Find the value p
m
of p which
minimizes maxD
a
(p), D
b
(p).
8.3. A joint routing and congestion control problem
Consider the network shown in Figure 8.9 and suppose (1,4) and (2,3) are the only two origin-
b+c
d c a b
a+d
2
3
4
1
Figure 8.9: Network with joint routing and congestion control
destination pairs to be supported by the network. Suppose that the system cost for ow F on any
link l is D
l
(F) =
F
2
2
, and that the utility of ow r
ij
for origin-destination pair (i, j) is
ij
log(r
ij
)
for constants
14
,
23
> 0. Suppose that for each of the two origin-destination pairs that ow can
be split between two paths. For convenience of notation we use the letters a, b, c, d to denote the
values of the four path ows: a = x
124
, b = x
134
, c = x
213
, and d = x
243
. The joint routing and
congestion control problem is to minimize the sum of link costs minus the sum of the utilities:
min
a,b,c,d0
1
2
(a
2
+b
2
+c
2
+d
2
+ (b +c)
2
+ (a +c)
2
)
14
log(a +b)
23
log(c +d)
(a) Write out the optimality conditions. Include the possibility that some ow values may be
145
zero. (b) Show that if all ow values are positive, then a = b and c = d. (c) Find the optimal
ows if
14
= 66 and
23
= 130. Verify that for each route used, the price of the route (sum of
D
t
(F
l
) along the route) is equal to the marginal utility of ow for the origin-destination of the route.
8.4. Joint routing and congestion control with hard link constraints
Consider the joint routing and congestion control problem for two users, each of which can split
ow over two routes, as shown in the gure.
2
b
a
c
d
C =8
1
C =8
2
C =8
3
x
x
1
The total ow of user one is x
1
= a + b and the total ow of user 2 is x
2
= c + d. The three
central links each have capacity 8, and all other links have much larger capacity. User 1 derives
value U
1
(a + b) = 2

a +b and user 2 derives value U


2
(c + d) = ln(c + d). Let variables p
1
, p
2
, p
3
denote prices for the three links (i.e. Lagrange multipliers for the capacity constraints).
(a) Write down the optimality conditions for a, b, c, d, p
1
, p
2
, p
3
.
(b) Identify the optimal assignment (a, b, c, d) and the corresponding three link prices.
8.5. Suciency of the optimality condition for hard link constraints
Consider a network with link set L, users indexed by a set of xed routes R, and hard capacity
constraints. Use the following notation:
A = (A
lr
: l L, r R) with A
lr
= 1 if link l is in route r, and A
lr
= 0 otherwise
x = (x
r
: r R) is the vector of ows on routes
f = (f
l
: l L) is the vector of ows on links, given by f = Ax
C = (C
l
: l L) is the vector of link capacities, assumed nite
U
r
(x
r
) is the utility function for route r, assumed concave, continuously dierentiable, non-
decreasing on [0, +).
(For simplicity routing is not considered.) The system optimization problem is max
x:x0,AxC

r
U
r
(x
r
).
Suppose x

is feasible (x

0 and Ax

C) and satises the following condition (with f

= Ax

:
There exists a vector p = (p
l
) for each link such that
p
l
0, for l L, with equality if f

l
< C
l
U
t
(x

r
)

lr
p
l
for r R, with equality if x

r
> 0
In this problem you are to prove that x

is a solution to the system optimization problem.


(a) Dene the Lagrangian
L(x, p) =

r
U
r
(x
r
) +

l
p
l
_
C
l

r:lr
x
r
_
146
Show that L(x

, p) L(x, p) for any other vector x with x 0. (Hint: There is no capacity


constraint, and L(x, p) is concave in x). (b) Deduce from (a) that x

is a solution to the sys-


tem optimization problem. (Note: Since the feasible set is compact and the objective function is
continuous, a maximizer x

exits. Since the objective function is continuously dierentiable and


concave and the constraints linear, a price vector p satsisfying the above conditions also exists, by
a standard result in nonlinear optimization theory based on Farkas lemma.)
8.6. Fair ow allocation with hard constrained links
Consider four ows in a network of three links as shown in Figure 8.10. Assume the capacity of
Link 1 Link 2 Link 3
Flow 1
Flow 4
Flow 2
Flow 3
Figure 8.10: Network with four ows
each link is one. (a) Find the max-min fair allocation of ows. (b) Find the proportionally fair
allocation of ows, assuming the ows are equally weighted. This allocation maximizes

i
log(x
i
),
where x
i
is the rate of the ith ow. Indicate the corresponding link prices.
147
148
Chapter 9
Dynamic Network Control
9.1 Dynamic programming
The following game illustrates the concept of dynamic programming. Initially there are ten spots
on a blackboard and a pair of ordinary dice is available. The player, a student volunteer from the
audience, rolls the dice. Let i denote the number rolled on the rst die and j denote the number
rolled on the second die. The player then must either erase i spots, or j spots from the board.
Play continues in a similar way. The player wins if at some time the number rolled on one of the
dice is equal to the number of spots remaining on the board, so that all spots can be erased from
the board. The player loses if at some time both numbers rolled are greater than the number of
spots remaining on the board. For example, suppose the player begins by rolling (4,5) and elects
to erase 4 and leave 6 spots on the board. Suppose the player next rolls (5,2) and elects to erase 2
more and leave 4 spots on the board. Suppose the player next rolls (4,2). The player wins because
the four spots can be erased, leaving zero behind.
What is the optimal strategy for this game? What is the probability of winning for the optimal
strategy? These questions are easily answered using dynamic programming. Fundamentally, dy-
namic programming is about working backwards from the end of a game. In the example at hand,
the strategy is trivial if there is only one spot remaining on the board. Indeed, given one spot
remains, the player wins if and only if one or both of the numbers rolled on the dice is 1, which has
probability w
1
=
11
36
. The player has no choice. If there are only two spots remaining, the player
wins if one of the numbers rolled is a two (happens with probability
11
36
). If one of the numbers is a
one but neither is a two (happens with probability
9
36
), then the player erases one spot and will go
on to win with probability
11
36
, because there will be one spot left. Thus, given the player has two
spots left, the probabilty of winning the game is w
2
=
11
36
+
9
36

11
36
. Again, there is essentially no
strategy needed. What should the player do if there are three spots left and the player rolls (1,2)?
Since w
2
> w
1
, clearly it is optimal for the player to erase just one spot, leaving two on the board.
In general. let w
n
denote the probability of winning, for the optimal strategy, given there are
n spots left on the board. Let w
0
= 1 and w
k
= 0 for k < 0. If there are n spots left and (i, j)
is rolled, the player has the choice of leaving n i spots or n j spots remaining on the board.
Clearly the optimal strategy is to make the choice to maximize the probability of winning. This
leads to the equation,
w
k
=
1
36
6

i=1
6

j=1
maxw
ki
, w
kj
(9.1)
149
Table 9.1: Probability of winning given n spots remaining
n w
n
1 0.305556
2 0.381944
3 0.460455
4 0.537375
5 0.607943
6 0.666299
7 0.564335
8 0.588925
9 0.606042
10 0.617169
which recursively determines all the w
n
s. Numerical results are shown in Table 9.1.
In particular, given ten spots initially, the probability of winning the game is 0.617169. To
compactly describe the optimal strategy, we list the numbers 1 through 9 according to decreasing
values of the w
n
s: 6, 5, 9, 8, 7, 4, 3, 2, 1. Each time the player is faced with a decision, the optimal
choice is the one that makes the number of spots remaining on the board appear earliest on the
list. For example, if the rst roll is (4,5), the player should erase 4 spots, because 6 appears on the
list before 5.
Note that the optimal policy is deterministic and that decisions are based only on the remaining
number of spots. There is no advantage for the player to randomize decisions or to recall the past
history of the game. Note also that while the dynamic program equation (9.1) was derived by
conditioning on the outcomes of one step forward in time, the equation is solved essentially by
working backward from the end of the game. Working backwards from the end is the essence of
dynamic programming.
9.2 Dynamic Programming Formulation
Dynamic control with complete state information is studied using the dynamic programming ap-
proach. The setting is pure jump controlled semi-Markov processes with independent and identically
distributed holding times. The following are assumed given:
A nite or countably innite state space o,
A real-valued function g on o, denoting instantaneous cost,
An action space |
A transition probability matrix P(u) = (p
ij
(u))
i,jS
for each u |.
A probability distribution function F, giving the distribution of inter-event times.
Denition 9.2.1 A random process (X(t) : t 0) is a controlled semi-Markov process with control
policy = (w
0
, w
1
, w
2
, . . .) if
150
(a) (X(t) : t 0) has pure-jump sample paths.
(b) There is a sequence of random variables t
0
= 0, t
1
, t
2
, . . . such that (t
k+1
t
k
)
k0
are indepen-
dent with common distribution function F. The jump times of (X(t) : t 0) are all included
in t
0
, t
1
, . . .
(c) w
k
is a function of (X(t
j
) : 0 j k) for each k
(d) P[X(t
k+1
) = j [ X(t
k
) = i, w
k
= u, (X(s) : s t
k
)] = p
ij
(u)
(e) (X(t
k
) : k 0) is independent of (t
k
: k 0)
Notes: Condition (c) is a causality condition on the control policy , while condition (d) is a
semi-Markov property. Conditions (a)-(e) can be greatly relaxed in what follows.
A cost function is used to evaluate a particular control policy . A possible cost function is the
discounted average cost
E
x
_
T
0
g(X(t))e
t
dt,
where e
t
is a discount factor for ination rate > 0, T (0, ] is the time horizon, and E
x

denotes expectation for initial state x(0) = x. Another cost function is the long term average cost:
limsup
T
1
T
E
x
_
T
0
g(X(t))dt
In order to apply the dynamic programming approach in a simple setting, it is useful to transfer
attention to an equivalent discrete time control problem. The main idea is that
E
x
_
tn
0
g(X(t))e
t
dt =
_
1

E
x

n1
k=0

k
g(X(t
k
)) if > 0

t
1
E
x

n1
k=0
g(X(t
k
)) if = 0
where =

F() =
_

0
e
s
dF(s). Thus, control of the discrete time process (X(t
k
)) is equivalent
to control of the original process (X(t)). Often we will write X(k) for X(t
k
).
Example (Flow control an M
controlled
/M/1 queue) Consider a system consisting of a queue
and a server. Let X(t) denote the number of customers in the system. Suppose customers arrive at
instantaneous rate u, where u denotes a control variable with u [0, 1]. Thus, is the maximum
possible arrival rate, and the eect of the control is to select a possibly smaller actual arrival rate.
The server is an exponential type server with rate . Thus, the parameters of the system are the
maximum arrival rate , and the service rate . By taking the control u to be a constant, the model
reduces to an M/M/1 system with arrival rate u. In this example the control will be allowed to
depend on the number in the system.
Roughly speaking, the control strategy should insure a large throughput, while keeping the
number in the system small. Of course, setting u 1 maximizes the expected throughput, but,
especially if > , this can lead to a large number in the system. On the other hand, if u 0 then
the number in the system is minimized, but the long run throughput is zero. We will thus select a
cost function which reects both customer holding cost and a reward for throughput.
151
This example can be t into the dynamic programming framework by choosing o = 0, 1, 2, . . .,
|=[0,1],
P(u) =
_

_
+(1u)

0 0

(1u)

0
0

(1u)

.
.
.
0 0
.
.
.
.
.
.
.
.
.
.
.
.
_

_
,
and taking F to be the exponential distribution function with parameter = +.
A reasonable objective is to minimize the discounted average cost,
cE
x
_
T
0
e
t
X(t)dt E
x

tT
I
X(t)=1
e
t
, (9.2)
where > 0 is a discount factor, and c > 0 is a constant. The integral term in (9.2) reects holding
cost c per unit time per customer in the system, and the sum term in (9.2) reects a unit reward
per departure. Replacing the departure process by its compensator (integrated intensity), the cost
in (9.2) can be reexpressed as
E
x
_
T
0
e
t
g(X(t))dt E
x
_
T
0
e
t
dt
. .
policy independent
(9.3)
where g(x) = cx + I
x=0
. Since the second term in (9.3) does not depend on the control policy
used, the term can be dropped from the cost function. Thus, the function g taken in the dynamic
programming formulation is g(x) = cx+Ix = 0. Intuitively, the cost g(0) = accounts for lost
departure opportunities when the system is empty, whereas the linear term cx gives the holding
cost of customers in the system.
9.3 The Dynamic Programming Optimality Equations
The discrete time problem is to choose to minimize E
x

N1
k=0

k
g(X(k)).
Dene
V
n
(x) = inf

E
x
n1

k=0

k
g(X(k)).
The value V
n
(x) is the minimum cost if n terms are included in the cost function, given initial
state x. Set by convention V
0
(x) = 0, and observe that V
1
(x) = g(x). The fundamental backwards
equation of dynamic programming is
V
n+1
(x) = g(x) + inf
u|

y
p
xy
(u)V
n
(y), (9.4)
for n 0. Furthermore, a policy is optimal if the control value when there are n steps to go is a
function u
n
(x) of the present state, where the feedback control function u
n
is given by
u
n
(x) = argmin
u|

y
p
xy
(u)V
n
(y) (9.5)
152
Together (9.4) and (9.5) determine the optimal control values. If the state space is nite, these
equations provide a numerical method for computation of the optimal policies. For very large or
innite state spaces, some progress can often be made by approximating the process by one with a
moderate number of states and again seeking numerical results. Still another use of the equations
is to verify structural properties of optimal feedback control functions u
n
(such as a threshold or
switching curve structure). These properties can serve to restrict the search for optimal policies
to a much smaller set of policies. An approach to verifying structural properties for the u
n
is to
seek a set of properties for the cost-to-go functions V
n
: (a) which can be proved by induction on n
via (9.4), and (b) which imply the structural properties of the u
n
via (9.5).
Return to Example To illustrate the method, let us return to the M
controlled
/M/1 example ,
and prove the following claim: An optimal decision for n steps to go is given by
u
n
(x) =
_
1 if x s(n)
0 if x > s(n)
(9.6)
for some threshold value s(n).
The dynamic programming recursion (9.4) for this example becomes
V
n+1
(x) = cx +I
x=0
+ min
u[0,1]

V
n
((x 1)) +
(1 u)

V
n
(x) +
u

V
n
(x + 1) (9.7)
and (9.5) becomes
u
n
(x) =
_
_
_
0 if V
n
(x) < V
n
(x + 1)
1 if V
n
(x) > V
n
(x + 1)
arbitrary if V
n
(x) = V
n
(x + 1)
(9.8)
Equation (9.8) shows that without loss of optimality, the control values can be restricted to values
0 and 1. Furthermore, (9.7) simplies to
V
n+1
(x) = cx +I
x=0
+

V
n
((x 1)
+
) +

minV
n
(x), V
n
(x + 1) (9.9)
In view of (9.8), the claim is implied by the following property of the V
n
:
V
n
(x + 1) V
n
(x)
_
0 if x s(n)
0 if x > s(n)
(9.10)
for some threshold value s(n). While (9.10) implies that the structure (9.6) is optimal, the property
(9.10) does not follow by induction from (9.9). Thus, a stronger property is sought for V
n
which
can be proved by induction.
The following two properties can be proved jointly by induction on n:
(a) V
n
(x) is convex (i.e. V
n
(x + 1) V
n
(x) is nondecreasing in x)
(b) V
n
(1) V
n
(0) /.
Also, property (a) implies (9.10) for some value of the threshold s(n), and hence establishes that
the threshold structure for u
n
is optimal.
Proof. Here is the induction argument. The base case, that V
0
satises (a) and (b), is imme-
diate. Suppose that V
n
satises properties (a) and (b). It must be shown that V
n+1
has properties
153
(a) and (b). Dene V
n
(1) = V
n
(0) +(/). By property (b) for V
n
, the new function V
n
is convex
on the set 1, 0, 1, . . ., and (9.9) becomes
V
n+1
(x) = cx +

V
n
((x 1)
+
) +

minV
n
(x), V
n
(x + 1), (9.11)
for x 0. Since each of the three terms on the right of (9.11) is convex on 0, 1, 2, . . . (sketch an
example for the last of the three terms) it follows that V
n+1
satises property (a).
Note that property (b) of V
n
was used to prove property (a) for V
n+1
. In fact, that is the only
reason we chose to introduce property (b). It remains to verify property (b) for V
n+1
. That is
accomplished by the following string of inequalities, where a b denotes the minimum of two
numbers a and b:
V
n+1
(1) V
n+1
(0) = +c +

[(V
n
(2) V
n
(1)) (V
n
(1) V
n
(0))]
+

[(V
n
(2) V
n
(1)) V
n
(1)]
= +

(V
n
(2) V
n
(1)) 0
+

(V
n
(1) V
n
(0)) 0
+

) = /.
The proof that V
n+1
, and hence all the functions V
n
, have properties (a) and (b), is proved.
Some remarks concerning extensions of the above result are in order. The threshold values s(n)
depend on n. However, it can be shown that lim
n
V
n
(x) exists and is nite for each x. Call the
limit function V

(x). It can also be shown that the limit V

determines the optimal strategies


for the innite horizon, discounted cost problem. Since properties (a) and (b) are inherited by the
limit, conclude that there is an optimal control strategy for the innite horizon problem given by
a xed threshold s: new arrivals are allowed if and only if the number in the system is less than
s. Moreover, unless it happens that V

(x) = V

(x + 1) for some value of x, the threshold policy


is the unique optimal control. It can also be shown that lim
1
[V

(x) V

(0)] exists and gives


rise to the control for the long-term average cost criterion. The limit is again convex, so that the
long-term average cost is also minimized by a xed threshold policy.
Example (Dynamic routing to two exponential server queues)
Consider a system with two service stations, each consisting of a queue plus server, as shown
in Figure 9.1. Customers arrive according to a Poisson process of rate > 0. A customer is routed
to the rst station with probability u, and to the second station with probability 1 u, where u is
a variable to be controlled, and can depend on the state of the system. Service times at the rst
station are exponentially distributed with parameter
1
, and service times at the second station
are exponentially distributed with paramenter
2
. The holding cost per unit time per customer in
station 1 is c
1
and in station 2 is c
2
, where c
i
> 0 for i = 1, 2. This example ts into the dynamic
programming framework we have described, with the following choices:
U = [0, 1]
154
o = Z
2
+
, where state x = (x
1
, x
2
) denotes x
i
customers in station i, for i = 1, 2.
g(x) = c
1
x
1
+c
2
x
2
F corresponds to the exponential distribution with parameter = +
1
+
2
.
p
x,y
(u) =
1

1
I
y=D
1
x
+
2
I
y=D
2
x
+uI
y=A
1
x
+ (1 u)I
y=A
2
x
_
where
A
1
x = (x
1
+ 1, x
2
), A
2
x = (x
1
, x
2
+ 1), D
1
x = ((x
1
1)
+
, x
2
), and D
2
x = (x
1
, (x
2
1)
+
).
The backwards equation of dynamic programming becomes
V
n+1
(x) = g(x) + min
0u1


1
V
n
(D
1
x) +
2
V
n
(D
2
x) +uV
n
(A
1
x) + (1 u)V
n
(A
2
x)
or, after plugging in the optimal value for u,
V
n+1
(x) = g(x) +


1
V
n
(D
1
x) +
2
V
n
(D
2
x) +minV
n
(A
1
x), V
n
(A
2
x),
with the intial condition V
0
0. Furthermore, the optimal control is given in feedback form as
follows. Given there are n steps to go and the state is x, the optimal control action is u

n
(x), given
by
u

n
(x) =
_
1 if V
n
(A
1
x) V
2
(A
2
x)
0 else.
(9.12)
That is, if the current state is x and an arrival occurs, the optimal control routes the arrival to
whichever station yields the lower expected cost.
Consider the symmetric case:
1
=
2
and c
1
= c
2
. We will prove that u

n
(x) = I
x
1
x
2

for
all n 1. That is, the send to shorter policy is optimal. It is easy to check that the control
specication (10.20) is equivalent to u

n
(x) = I
x
1
x
2

, if V
n
has the following three properties:
1. (symmetry) V (x
1
, x
2
) = V (x
2
, x
1
) for all (x
1
, x
2
) o
2. (monotonicity) V (x
1
, x
2
) is nondecreasing in x
1
and in x
2
.
3. V (x
1
, x
2
) V (x
1
1, x
2
+ 1) whenever 0 x
1
x
2
.
Together properties 1 and 3 mean that when restricted to the states along a line segment of the
form x o : x
1
+ x
2
= l for l 1, the cost function is a monotonically nondecreasing function
of the distance from the midpoint.
Thus, it remains to prove that V
n
has the three properties stated, for all n 0. This is done
by induction. Trivially, the function V
0
satises all three properties. So for the sake of argument
by induction, suppose that V
n
satises all three properties. We show that V
n+1
satises all three
station 2
2
1
u

!
1!u
station 1
Figure 9.1: Dynamic routing to two service stations
155
properties. Property 1 for V
n+1
follows from property 1 for V
n
and the symmetry of the backwards
equations in x
1
and x
2
. Property 2 for V
n+1
follows from property 2 for V
n
. It remains to show
that V
n+1
has property 3. Since property 3 is closed under summation, it suces to prove that
g(x), V
n
(D
1
x), V
n
(D
2
x), and minV
n
(A
1
x), V
n
(A
2
x) have property 3. It is easily checked that the
rst three of these functions has property 3, so it remains to prove that minV
n
(A
1
x), V
n
(A
2
x)
has property 3. So suppose that 1 x
1
x
2
, and refer to Figure 9.2. It must be shown that
2
x
c
b
a
(x !1,x +1)
1
Figure 9.2: V
n
is sampled to get values a, b, and c.
ab b c where a = V
n
(x
1
+1, x
2
), b = V
n
(x
1
, x
2
+1), and c = V
n
(x
1
1, x
2
+2). We argue rst
that a b, by considering two cases: If x
1
= x
2
then the states (x
1
, x
2
+ 1) and (x
1
+ 1, x
2
) are
obtained from each other by swapping coordinates, so that a = b by property 1 for V
n
. If x
1
> x
2
then x
1
x
2
+ 1 so that a b by property 3 for V
n
. Thus, in general a b. Similarly, b c by
property 3 for V
n
. Thus, a b c, which immediately implies a b b c, as desired. The proof
of the induction step is complete. Therefore, V
n
has properties 1-3 for all n 0, and the send to
shorter queue policy is optimal, in the symmetric case.
In the general case, the optimal control is given by u

n
(x) = I
x
2
>sn(x
1
)
, where s
n
is a nonde-
creasing function. This fact can be established by proving, by induction on n, that the cost-to-go
functions V
n
have a more elaborate set of properties [19].
9.4 Problems
9.1. Illustration of dynamic programming a stopping time problem
Let n 1 and let X
1
, . . . , X
n
be mutually independent, exponentially distributed random variables
with mean one. A player observes the values of the random variables one at a time. After each
observation the player decides whether to stop making observations or to continue making observa-
tions. The players score for the game is the last value observed. Let V
n
denote the expected score
for an optimal policy. Note that V
1
= 1. (a) Express V
n+1
as a function of V
n
. (Hint: Condition on
the value of X
1
.) (b) Describe the optimal policy for arbitrary value n. (c) For 1 n 5, compare
V
n
to E[max(X
1
, . . . , X
n
)] = 1 +
1
2
+
1
3
+ +
1
n
, which is the maximum expected score in case the
player learns the values of the Xs before play begins.
9.2. Comparison of four ways to share two servers
Let N
i
denote the mean total number of customers in system i in equilibrium, where systems 1
through 4 are shown. The arrival process for each system is Poisson with rate 2 and the servers
156
are exponential with parameter . (a) Compute N
i
as a function of = / for 1 i 3. (Let
0.5

2!

1.
independent splitting

2!

2!

2.
3. 4.
Every other customer
goes to the top queue
2!
2!server queue
Arrivals sent to the shorter queue
0.5
Figure 9.3: Four two-server systems
me know if you can nd N
4
. ) (b) Order N
1
, N
2
, N
3
, N
4
from smallest to largest. Justify answer.
Does your answer depend on ?
9.3. A stochastic dynamic congestion control problem
Consider a single server queue with nite waiting room so that the system can hold at most K
customers. The service times are independent and exponential with parameter . If a customer
arrives and nds the system full then the customer is rejected and a penalty c is assessed. For each
customer that is served a reward r is received at the time of the customers departure. There are
two sources of customers. One is Poisson with xed rate
o
and the other is a variable rate source
with instantaneous rate u, where is a constant and u is the control value, assumed to lie in the
interval [0,1]. The control value u can depend on the state.
(a) Formulate a dynamic programming problem. In particular, describe the state space, the in-
stantaneous cost function, the distribution of inter-event times, and the transition matrix P(u)
(draw the transition probability diagram). (b) Give the equations for the cost-to-go functions V
n
.
(Assume a discount factor < 1 for the discrete time formulation as in class.) (c) Describe the
optimal control law explicitly in terms of the cost-to-go functions. (d) Speculate about the struc-
ture of optimal policy. Can you prove your conjecture? (e) Suppose the assumptions were modied
so that customers from the rst source are served at rate
1
and that customers from the second
are served at rate
2
. Describe the state space that would be needed for a controlled Markov
description of the system under (i) FCFS service order, (ii) preemptive resume priority to customer
from the rst source, or (iii) processor sharing.
9.4. Conversion to discrete time with control dependent cost
The continuous time control problem can still be converted to a discrete time control problem if
the cost per unit time function g depends on u because, in that case,
E
x
_
tn
0
g(X(t), u(t))e
t
dt =
_
1

E
x

n1
k=0

k
g(X(t
k
), w
k
) if > 0

t
1
E
x

n
k=0
g(X(t
k
), w
k
) if = 0
(9.13)
where =

F() =
_

0
e
s
dF(s). (a) Prove (9.13) in case > 0. (The proof for = 0 is sim-
157
ilar.) Thus, by ignoring constant factors, we can take the cost for n terms in discrete time to be
E
x

n1
k=0
g(X(t
k
), w
k
).
(b) How should the fundamental backwards recursion of dynamic programming be modied to take
into account the dependence of g on u?
9.5. Optimal control of a server
Consider a system of two service stations as pictured.
0
1
2
!
!
station 1
station 2
1
2

u
1!u
Station i has Possion arrivals at rate
i
and an exponential type server, with rate m
i
(u), where
m
1
(u) =
1
+ u
0
and m
2
(u) =
2
+ (1 u)
0
and u is a control variable with u U = [0, 1].
Suppose we are interested in the innite horizon discounted average number of customers in the
system, with discount rate > 0.
(a) Specify the transition probability matrix, P(u), the interjump CDF, F, and cost function g, for
a dynamic programming model.
(b) Write the dynamic programming update equations for the optimal cost-to-go functions, V
n
.
(c) Express the optimal state feedback control for n-steps, u

n
(x), in terms of V
n
.
(d) Consider the symmetric case:
1
=
2
and
1
=
2
. Give an educated guess regarding the
structure of the optimal control, and give an equivalent condition on V
n
. (Note: An equivalent
condition, not just a sucient condition, is asked for here.)
9.6. A dynamic server rate control problem with switching costs
Consider the following variation of an M/M/1 queueing system. Customers arrive according to
a Poisson process with rate . The server can be in one of two statesthe low state or the high
state. In addition, the server is either busy or idle, depending on whether there are customers in
the system. A control policy is to be found which determines when the server should switch states,
either from low to high or from high to low. The costs involved are a cost per unit time for each
customer waiting in the system, a cost per unit time of having the server in the high state, and a
switching cost, assessed each time that the server is switched from the low state to the high state.
Specically, let
H
(respectively,
L
) denote the service rate when the server is in the high state
(respectively, low state), where
H
>
L
> 0. Let c
H
denote the added cost per unit time for
having the server in the high state, and let c
S
denote the cost of switching the server from the low
state to the high state. Assume that the control can switch the state of the server only just after
a customer arrival or potential departure. Assume there is no charge for switching the server from
the high state to the low state. Finally, let c
W
denote the cost per unit time per customer waiting
in the system. The controller has knowledge of the server state and the number of customers in
the system. Suppose the goal is to minimize the expected cost over the innite horizon, discounted
at rate . (a) Formulate the control problem as a semi-Markov stochastic control problem. In
particular, indicate the state space you use, and the transition probabilities. (Hint: Use a control
dependent cost function to incorporate the switching cost. The switching cost is incurred at discrete
times, in the same way the rewards are given at discrete times for the M
controlled
/M/1 example
158
in the notes, and can be handled similarly. Using a two dimensional control, one coordinate to
be used in case of an arrival, and one coordinate to be used in case of a departure, is helpful.)
(b) Let V
n
denote the cost-to-go function when there are n steps in the equivalent discrete-time
control problem. Write down the dynamic programming equation expressing V
n+1
in terms of V
n
and indicate how the optimal control for n steps to go depends on V
n
. (c) Describe the qualitative
behavior you expect for the optimal control for the innite horizon problem. In particular, indicate
a particular threshold or switching curve structure you expect the control to have. (d) Describe
the optimal control policy when the following two conditions both hold: there is no switching cost
(i.e. c
S
= 0), and c
W
(
H
+ ) > (
L
+ )(c
W
+ c
H
). (Hint: Consider the expected cost incurred
during the service of one customer.)
159
160
Chapter 10
Solutions
1.1. Poisson merger
Suppose X
i
is a Poisson random variable with mean
i
for i 1, 2 such that X
1
and X
2
are
independent. Let X = X
1
+X
2
, =
1
+
2
, and p
i
=
i
/. Then for k 0,
P[X = k] =

i
j=0
P[X
1
= j]P[X
2
= k j] =

i
j=0
e

i
1
i!
e

kj
2
(kj)!
=
e

k
k!
[

n
j=0
_
k
j
_
p
j
1
p
kj
2
] =
e

k
k!
, where we use the fact that the sum in square brackets is the sum over the binomial distri-
bution with parameters k and p
1
. Thus, the sum of two independent Poission random variables is
a Poisson random variable. By induction this implies that the sum of any number of independent
Poisson random variables is also Poisson.
Now suppose that N
j
= (N
j
(t) : t 0) is a Poisson process with rate
j
for 1 j K, and
that N
1
, . . . , N
K
are mutually independent. Given t
0
< t
1
< < t
p
, consider the following array
of random variables:
N
1
(t
1
) N
1
(t
0
) N
1
(t
2
) N
1
(t
1
) N
1
(t
p
) N
1
(t
p1
)
N
2
(t
1
) N
2
(t
0
) N
2
(t
2
) N
2
(t
1
) N
2
(t
p
) N
2
(t
p1
)
.
.
.
.
.
.
.
.
.
N
K
(t
1
) N
K
(t
0
) N
K
(t
2
) N
K
(t
1
) N
K
(t
p
) N
K
(t
p1
)
(10.1)
The rows of the array are independent by assumption, and the variables within a row of the ar-
ray are independent and Poisson distributed by Proposition 1.5.1, characterization (iii). Thus,
the elements of the array are mutually independent and each is Poisson distributed. Let N(t) =
N
1
(t) + +N
K
(t) for all t 1. Then the vector of random variables
N(t
1
) N(t
0
) N(t
2
) N(t
1
) N(t
p
) N(t
p1
) (10.2)
is obtained by summing the rows of the array. Therefore, the variables of the row are independent,
and for any i the random variable N(t
i+1
) N(t
i
) has the Poi(((t
i+1
t
i
)) distribution, where
=
1
+ +
K
. Thus, N is a rate Poisson process by Proposition 1.5.1, characterization (iii).
1.2. Poisson splitting
This is basically the rst problem in reverse. Let X be Possion random variable, and let each
of X individuals be independently assigned a type, with type i having probability p
i
, for some
161
probability distribution p
1
, . . . , p
K
. Let X
i
denote the number assigned type i. Then,
P(X
1
= i
1
, X
2
= i
2
, , X
K
= i
K
) = P(X = i
1
+ +i
K
)
_
i
1
+ +i
K
i
1
! i
2
! i
K
!
_
p
k
1
1
p
i
K
K
=
K

j=1
e

i
j
j
i
j
!
where
i
= p
i
. Thus, independent splitting of a Poisson number of individuals yields that the
number of each type i is Poisson, with mean
i
= p
i
and they are independent of each other.
Now suppose that N is a rate process, and that N
i
is the process of type i points, given
independent splitting of N with spilt distribution p
1
, . . . , p
K
. Proposition 1.5.1, characterization
(iii). the random variables in (10.2) are independent, with the i
th
having the Poi(((t
i+1
t
i
))
distribution. Suppose each column of the array (10.1) is obtained by independent splitting of the
corresponding variable in (10.2). Then by the splitting property of random variables, we get that
all elements of the array (10.1) are independent, with the appropriate means. By Proposition
1.5.1, characterization (iii), the i
th
process N
i
is a rate p
i
random process for each i, and because
of the independence of the rows of the array, the K processes N
1
, . . . , N
K
are mutually independent.
1.3. Poisson method for coupon collectors problem
(a) In view of the previous problem, the number of coupons of a given type i that arrive by time t has
the Possion distribution with mean
t
k
, and the numbers of arrivals of dierent types are independent.
Thus, at least one type i coupon arrives with probability 1 e
t/K
, and p(k, t) = (1 e
t/k
)
k
.
(b) Therefore, p(k, k ln k +kc) = (1
e
c
k
)
k
, which, by the hint, converges to e
e
c
as k .
(c) The increments of A over intervals of the form [k, k+1] are independent, Poi(1) random variables.
Thus, the central limit theorem can be applied to A(t) for large t, yielding that for any constant D,
lim
t
P[A(t) t +D

t] = Q(D), where Q is the complementary normal CDF. The problem has


to do with deviations of size (c c
t
)k, which for any xed D, grow to be larger than D

k ln k +kc.
Thus, with = [c c
t
[, P[[A(k ln k +kc) (k ln k +kc)[ k] Q(D) for k large enough, for any
D. Thus, lim
k
P[[A(k ln k + kc) (k ln k + kc)[ k] = 0, which is equivalent to the required
fact.
(d) Let C
t
be the event that the collection is complete at time t. Condition on the value of A(t).
p(k, t) = P(C
t
) = P(C
t
[A(t) n)P(A(t) n) +P(C
t
[A(t) < n)P(A(t) < n) (10.3)
First we bound from below the right side of (10.3). Since getting more coupons can only help com-
plete a collection, P(C
t
[A(t) n) P(C
t
[A(t) = n) = d(k, n), and other terms on the righthand
side of (10.3) are nonnegative. This yields the desired lower bound.
Similarly, to bound above the right side of (10.3) we use P(C
t
[A(t) n) 1 and P(C
t
[A(t) < n)
d(k, n).
(e) Fix c. The rst part of the inequality of part (c) yields that d(k, n) p(k, t)/P[A(t) n]. Sup-
pose n = k ln k + kc. Let c
t
> c and take t = k ln k + kc
t
as k . Then p(k, t) e
e
c

by part (b) and P[A(t) n] 1 Thus, limsup d(k, n) e


e
c

. Since c
t
is arbitrary with
c
t
> c, limsup d(k, n) e
e
c
. A similar argument shows that liminf d(k, n) e
e
c
. There-
fore, limd(k, n) = e
e
c
.
162
1.4. The sum of a random number of random variables
(a)
E[S] =

n=0
E[S[N = n]P[N = n] =

n=0
E[X
1
+ +X
n
[N = n]P[N = n]
=

n=0
E[X
1
+ +X
n
]P[N = n] =

n=0
nXP[N = n] = X N.
Similarly,
E[S
2
] =

n=0
E[(X
1
+ +X
n
)
2
]P[N = n]
=

n=0
_
E[X
1
+ +X
n
]
2
+ Var(X
1
+ +X
n
)

P[N = n]
=

n=0
_
(nX)
2
+nVar(X)

P[N = n]
= N
2
(X)
2
+NVar(X)
so that Var(S) = E[S
2
] Es
2
= Var(N)(X)
2
+NVar(X).
(b) By the same reasoning as in part (a),

S
(u) = E[e
juS
] =

n=0
E[e
ju(X
1
++Xn)
]P[N = n]
=

n=0

X
1
(u)
n
P[N = n] = B(
X
1
(u))
1.5. Mean hitting time for a simple Markov process
(a)
1
0 1 2 3
1 a 0.5
1!a
0.5
Solve = P and e = 1 to get = (
1a
2(1+a)
,
1
2(1+a)
,
2a
2(1+a)
,
a
2(1+a)
) for all a [0, 1] (unique).
(b) A general way to solve this is to let h
i
= E[minn 0[X(n) = 3[X(0) = i], for 0 i 3.
Our goal is to nd h
0
. Trivially, h
3
= 0. Derive equations for the other values by conditioning on
the rst step of the process: h
i
= 1 +

j
p
ij
h
j
for i ,= 3. Or
h
0
= 1 +h
1
h
2
= 1 + (1 a)h
0
+ah
2
h
2
= 1 + (0.5)h
i
yielding
_
_
h
0
h
1
h
2
_
_
=
_
_
4
a
+ 1
4
a
2
a
+ 1
_
_
. Thus, h
0
=
4
a
+ 1 is the required answer.
1.6. A two station pipeline in continuous time
(a) o = 00, 01, 10, 11
(b)
163

00 01
10 11
!

!
1

2
2
(c) Q =
_
_
_
_
0 0

2

2
0
0
1

1
0
0 0
2

2
_
_
_
_
.
(d) = (
00
+
01
) = (
01
+
11
)
2
=
10

1
. If =
1
=
2
= 1.0 then = (0.2, 0.2, 0.4, 0.2) and
= 0.4.
(e) Let = mint 0 : X(t) = 00, and dene h
s
= E[[X(0) = s], for s o. We wish to nd
h
11
.
h
00
= 0
h
01
=
1

2
+
+

2
h
00

2
+
+
h
11

2
+
h
10
=
1

1
+h
01
h
11
=
1

2
+h
10
For If =
1
=
2
= 1.0 this yields
_
_
_
_
h
00
h
01
h
10
h
11
_
_
_
_
=
_
_
_
_
0
3
4
5
_
_
_
_
. Thus,
h
11
= 5 is the required answer.
1.7. Simple population growth models
(a)
4!
0 1 2 3
0 !
. . .
2! 3!
(b)
0
(t) 0. The Kolmogorov forward equations are given, for k 1, by

k
t
= (k 1)
k1
(t) k
k
(t).
Multiplying each side by
k
and summing yields
P(z, t)
t
=

k=1

k
(t)
t
z
k
=
_
z
2

k=2

k1
(t)(k 1)z
k1
z

k=1

k
(t)kz
k1
_
= (z
2
z)
P(z, t)
z
That is, P(z, t) is a solution of the partial dierential equation:
P(z, t)
t
= (z
2
z)
P(z, t)
z
with the initial condition P(z, 0) = z. It is easy to check that the expression given in the problem
statement is a solution. Although it wasnt requested in the problem, here is a method to nd
the solution. The above PDE is a rst order linear hyperbolic (i.e. wave type) equation which
is well-posed and is readily solved by the method of characteristics. The idea is to rewrite the
equation as [

t
(z
2
z)

z
]P(a, t) = 0, which means that P(z, t) has directional derivative zero
in the direction ((z z
2
), 1) in the (z, t) plane.
(c) Write N
t
= E[N(t)]. We could use the expression for P(z, t) given and compute N
t
=
164
P(z,t)
z
[z = 0. Alternatively, divide each side of the PDE by z 1. Then since
(1)
t
= 0, we
get

t
_
P(z,t)1
z1
_
= z
P(z,t)
z
. Letting z 1 then yields the dierential equation
dNt
dt
= N
t
, so
that N
t
= e
t
.
(d) By part (b), P(z, t) =
ze
t
1z(1e
1t
)
=

k=1
e
t
(1e
t
)
k1
z
k
so that
k
(t) = e
t
(1e
t
)
k1
,
for k 1. That is, the population size at time t has the geometric distribution with parameter
e
t
.
(e) For the deterministic model, n
t
= 2
]t|
2
t
= e
(ln 2)t
. Thus, the growth rate for the de-
terministic model is roughly exponential with exponent (ln 2) (0.693), which is considerably
smaller than the exponent for the random model.
1.8. Equilibrium distribution of the jump chain
The choice of B insures that (Bq
ii

i
)
iS
is a probability distribution. Since p
J
ij
=
_
0 if i = j
q
ij
q
ii
if i ,= j
,
we have for xed j,

iS
(Bq
ii

i
)p
J
ij
=

i:i,=j
Bq
ii

i
_
q
ij
q
ii
_
= B

i:i,=j

i
q
ij
= B
j
q
jj
Thus, B
i
q
ii
is the equilibrium vector for P
J
. In other words, B
i
q
ii
=
J
i
.
Another justication of the same result goes as follows. For the original Markov process, q
1
ii
is
the mean holding time for each visit to state i. Since
J
i
represents the proportion of jumps of the
process which land in state i, we must have
i

J
i
(q
ii
)
1
, or equivalently,
J
i
(q
ii
)
J
i
.
1.9. A simple Poisson process calculation
Suppose 0 < s < t and 0 i k.
P[N(s) = i[N(t) = k] =
P[N(s) = i, N(t) = k]
P[N(t) = k]
=
_
e
s
(s)
i
i!
_
_
e
(ts)
((t s))
ki
(k i)!
_
_
e
t
(t)
k
k!
_
1
=
_
k
i
_
_
s
t
_
i
_
t s
t
_
k1
That is, given N(t) = k, the conditional distribution of N(s) is binomial. This could have been
deduced with no calculation, using the fact that given N(t) = k, the locations of the k points are
uniformly and independently distributed on the interval [0, t].
1.10. An alternating renewal process
(a) Sampled periods between light changes L are deterministic, length one, so the forward residual
lifetime is uniformly distributed over [0, 1]. Thus, P[light changes] = P[ 0.5] = 0.5.
(b) By symmetry, is independent of what color is spotted rst. If green is spotted rst,
W =
_
0 if > 0.5
+ 0.5 if < 0.5
so that E[W[green spotted] =
_
0.5
0
(u + 0.5)du =
3
8
. Similarly,
165
E[W[red spotted] =
_
1
0.5
(u 0.5)du =
1
8
. Thus, E[W] =
3
8
1
2
+
1
8
1
2
=
1
4
. This make sense, be-
cause the light is red with probability one half when the light is reached, and given it is red when
reached, the average wait is 0.5.
(c) P[light changes] = P[ 0.5] =
_
0.5
0
1F
X
(y)
m
1
dy =
_
0.5
0
1
y
2
1
dy =
7
16
.
(d) 2 blocks/minute (The point here is to express the answer is units given. If the length of a block
is specied, the speed could be given in meters/second or miles per hour, for example.)
1.11. A simple question of periods
(a) The period of state 4 is GCD4, 6, 8, 10, 12, 14, . . . = GCD4, 6 = 2.
(b) The process is irreducible, so all states have the same period. So state 6 must also have period 2.
1.12. A mean hitting time problem
(a)
2
0 1
2
1
1
2
2
Q = 0 implies = (
2
7
,
2
7
,
3
7
).
(b) Clearly a
1
= 0. Condition on the rst step. The initial holding time in state i has mean
1
q
ii
and
the next state is j with probability p
J
ij
=
q
ij
q
ii
. Thus
_
a
0
a
2
_
=
_

1
q
00

1
q
22
_
+
_
0 p
J
02
p
J
20
0
__
a
0
a
2
_
.
Solving yields
_
a
0
a
2
_
=
_
1
1.5
_
.
(c) Clearly
2
(t) = 0 for all t.

0
(t +h) =
0
(t)(1 +q
00
h) +
1
(t)q
10
h +o(h)

1
(t +h) =
0
(t)q
01
h +
1
(t)(1 +q
11
h) +o(h)
Subtract
i
(t) from each side and let h 0 to yield (

0
t
,

1
t
) = (
0
,
1
)
_
q
00
q
01
q
10
q
11
_
with the
inital condition (
0
(0),
1
(0)) = (1, 0). (Note: the matrix involved here is the Q matrix with the
row and column for state 2 removed.)
(d) Similarly,

0
(t h) = (1 +q
00
h)
0
(t) +q
01
h
1
(t) +o(h)

1
(t h) = q
10
h
0
(t) + (1 +q
11
h)
1
(t)) +o(h)
Subtract
i
(t)s, divide by h and let h 0 to get:
_

0
t

1
t
_
=
_
q
00
q
01
q
10
q
11
__

0

1
_
with
_

0
(t
f
)

1
(t
f
)
_
=
_
1
1
_
1.13. A birth-death process with periodic rates
(a) Let =
a
b
a
b
. Then S
1
=

n=0

n
(1 +
a
a
), which is nite if and only if < 1. Thus, the
166
process is positive recurrent if and only if < 1. (In case < 1,
2n
=
n
/S
1
and
2n+1
=
n a
S
1
a
.)
(b)
a
=
b
. In general, r
k
=
k

k
/, so that r if and only if the arrival rates are all equal
(corresponding to Poission arrivals, and then PASTA holds.)
1.14. Markov model for a link with resets
(a) Let o = 0, 1, 2, 3, where the state is the number of packets passed since the last reset.

0 1 2 3

! ! !

(b) By the PASTA property, the dropping probability is


3
. We can nd the equilibrium distribu-
tion by solving the equation Q = 0. The balance equation for state 0 is
0
= (1
0
) so that

0
=

+
. The balance equation for state i 1, 2 is
i1
= (+)
i
, so that
1
=
0
(

+
) and

2
=
0
(

+
)
2
. Finally,
2
=
3
so that
3
=
0
(

+
)
2

=

3
(+)
3
. The dropping probability is

3
=

3
(+)
3
. (This formula for
3
can be deduced with virtually no calculation from the properties
of merged Poisson processes. Fix a time t. Each event is a packet arrival with probability

+
and
is a reset otherwise. The types of dierent events are independent. Finally,
3
(t) is the probability
that the last three events before time t were arrivals. The formula follows.)
1.15. An unusual birth-death process
(a) S
2
=

n=0

1
...n

1
...n
=

n=0
(
1p
p
)
n
< + (because p > 0.5) so X is transient.
(b) aQ = 0 is equivalent to
k
a
k
=
k
a
k+1
for k 0, which is easily checked.
(c) p
J
01
= 1 and, for i 1, p
J
ii1
= 1 p and p
J
ii+1
= p. All other transition probabilities are zero.
p
0
1
1!p
. , .
1 2
1!p 1!p
p
(d) S
2
is the same as in (a), so X
J
is transient (as is also obvious from part (c).)
Note: X reaches the graveyard state in nite time. Otherwise, the fact that Qa = 0 for a
probability distribution a would imply that all states are positive recurrent.
1.16. A queue with decreasing service rate
(a)
X(t)
0 . . . . . .
!
! ! ! ! !

/2 /2 /2
1 K K+2 K+1
K
t
(b) S
2
=

k=0
(

2
)
k
2
kK
, where k K = mink, K. Thus, if <

2
then S
2
< + and the
process is recurrent. S
1
=

k=0
(
2

)
k
2
kK
, so if <

2
then S
1
< + and the process is positive
167
recurrent. In this case,
k
= (
2

)2
kK

0
, where

0
=
1
S
1
=
_
1 (/)
K
1 (/)
+
(/)
K
1 (2/)
_
1
.
(c) If =
2
3
, the queue appears to be stable until if uctuates above K. Eventually the queue-
length will grow to innity at rate

2
=

6
. See gure above.
1.17. Limit of a distrete time queueing system
(q) The transition rate diagram for the number in the system is shown:
q
q
!
p
" q " q " q
!
p
!
p
!
p
!
" q! 1! p
!
" q! 1! p
!
" q! 1! p
. . . 3
1
2 0
1!"
"
a
k
= P[no arrivals for k 1 slots, then arrival] = (1 q)
k1
q for k 1. Similarly, b
k
=
(1 q)
k1
q for k 1. Thus, an interarrival time is q times a geometric random variable
with parameter q, and the mean interarrival time is
1

. and a service time is q times a geometric


random variable with parameter q, and the mean service time is
1

.
(b) qp
k+1
= qp
k
for k 0, impies that p
k
= (1)
k
, where =

. This distribution is the same


for all q, so it trivially converges as q 0 to the same value: p
k
(1)
k
. The interarrival times
and service times become exponentially distributed in the limit. The limit system is a continuous
time M/M/1 system.
1.18. An M/M/1 queue with impatient customers
(a)
!
3
1
2 0
. . .
4
! !
+" +2" +3" +4"
! !

(b) The process is positive recurrent for all , if > 0, and p


k
=
c
k
(+)(+(k1))
where c is
chosen so that the p
k
s sum to one.
(c) If = , p
k
=
c
k
k!
k
=
c
k
k!
. Therefore, (p
k
: k 0) is the Poisson distribution with mean .
Furthermore, p
D
is the mean departure rate by defecting customers, divided by the mean arrival
rate . Thus,
p
D
=
1

k=1
p
k
(k 1) =
1 +e


_
1 as
0 as 0
where lHospitals rule can be used to nd the limit as 0.
1.19. Statistical multiplexing
(a) Use an M/M/1 model, with = 40 packets/second, and = 50 packets per second. Then
T =
1

= 0.1 seconds.
(b) Model each link as an M/M/1 queue, with = 20 packets/second, and = 25 packets per
second. Then T =
1

= 0.2 seconds.
(c) Model as an M/M/2 queue with = 40 packets/second and = 25 packets/second.
168
2
3
1
2 0
. . .
4
! ! ! !

!
2 2 2
Let =

2
. Then
0
=
0.5
0.5++
2
+
and
k
= 2
0

k
for k 1. Thus
T =
N

k=1
k
k
(0.5 + +
2
+ )
=
/(1 )
2
((
1
1
) 0.5)
= 0.111 seconds
We can view model (c) as a variation of (b) with statistical multiplexing at the packet level, and
(a) as a variation with multiplexing at the subpacket level. Here (c) is a substantial improvement
over (b), and (a) is a little better than (c).
1.20. A queue with blocking
(a)
5 3
1
2 0 4
! ! ! !

k
=

k
1++
2
+
3
+
4
+
5
=

k
(1)
1
6
for 0 k 5.
(b) p
B
=
5
by the PASTA property.
(c) W = N
W
/((1 p
B
)) where N
W
=

5
k=0
(k 1)
k
. Alternatively, W = N/((1 p
B
))
1

(i.e. W is equal to the mean time in system minus the mean time in service)
(d)
0
=
1
(mean cycle time for visits to state zero)
=
1
(1/+mean busy period duration)
There-
fore, the mean busy period duration is given by
1

[
1

0
1] =

6
(1)
=

5
(1)
1.21. Multiplexing circuit and packet data streams
(a) A portion of the rate diagram for C = 3 is pictured.
c
01
02
03
00
10
11
12
13
20
21
22
23
30
31
32
33
ij
!
!

"
c
(C!j)
p
j
(b) Yes, the phase structure is apparent.
(c)
p
(C n
C
) =

P
C
j=0
(cj)(
c

C
)
j
/j!
P
C
j=0
(
c

C
)
j
/j!
1.22. Three queues and an autonomously traveling server
(a) The equilibrium distribution for the server is = (
1
,
2
,
3
) = (
1
Z
,
1
Z
,
1
Z
), where Z =
1

+
1

+
1

, and the service capacity of station i is thus


i
, for 1 1 3.
(b) The equilibrium distributions for the individual stations can be found one at a time, by the
method of phases (even though the stations are not independent). For a xed station i, the state
space would be (l, ) : l 0, 1 3, where l is the number of customers at station i and is
169
the phase of the server.
1.23. On two distibutions seen by customers
t
k
k+1
N(t)
As can be seen in the picture, between any two transtions from state k to k +1 there is a transition
form state k + 1 to k, and vice versa. Thus, the number of transitions of one type is within one of
the number of transitions of the other type. This establishes that [D(k, t) R(k, t)[ 1 for all k.
(b)Observe that

D(k, t)

R(k, t)

D(k, t)

R(k, t)

R(k, t)

R(k, t)

t
+
R(k, t)

1

t

t
+

1

t

0 as t
Thus,
D(k,t)
t
and
R(k,t)
t
have the same limits, if the limits of either exists.
2.1. Recurrence of mean zero random walks
(Note: The boundedness condition is imposed to make the problem easier it is not required
for the results.) (a) Let V (x) = [x[, the absolute value of x. Then for x M and any
k 0, [x + B
k
[ = x + B
k
with probability one, by the boundedness of B
k
. Thus, if x M,
PV (x) V (x) = E[[x + B
k
[] x = E[x + B
k
] x = 0. Similarly, if x M, PV (x) V (x) =
E[[x + B
k
[] [x[ = E[[x[ B
k
] [x[ = E[B
k
] = 0. Therefore, the Foster-Lyapunov criteria for
recurrence are satised by P, V , and the set C = x : [x[ < M, so that X is recurrent.
(b) Let V (y) = y. As in part (a), we see that PV (y) V (y) = 0 o the nite set C = y : 0 y <
M, so that Y is recurrent.
2.2. Positive recurrence of reected random walk with negative drift
Let V (x) =
1
2
x
2
. Then
PV (x) V (x) = E[
(x +B
n
+L
n
)
2
2
]
x
2
2
E[
(x +B
n
)
2
2
]
x
2
2
= xB +
B
2
2
Therefore, the conditions of the combined Foster stability criteria and moment bound corollary
apply, yielding that X is positive recurrent, and X
B
2
2B
. (This bound is somewhat weaker than
170
Kingmans moment bound, disussed later in the notes: X
Var(B)
2B
.)
2.3. Routing with two arrival streams
We begin by describing the system by stochastic evolution equations. For i 0, 1, let B
i
(t)
be a Bernoulli(a
i
) random variable, and let U
i
(t) be a Bernoulli(u
i
) random variable, and for
j 1, 2, 3 let D
j
(t) be a Bernoulli(d
j
) random variable. Suppose all these Bernoulli random
variables are independent. Then the state process can be described by
X
i
(t + 1) = X
i
(t) +A
i
(t) D
i
(t) +L
i
(t) for i 1, 2, 3 (10.4)
where
A
1
(t) = U
1
(t)B
1
(t)
A
2
(t) = (1 U
1
(t))B
1
(t) +U
2
(t)B
2
(t)
A
3
(t) = (1 U
2
(t))B
2
(t)
and L
i
(t) = ((X
i
(t) +A
i
(t) D
i
(t)))
+
.
Assume that the parameters satisfy the following three conditions:
a
1
+a
2
< d
1
+d
2
+d
3
(10.5)
a
1
< d
1
+d
2
(10.6)
a
2
< d
2
+d
3
(10.7)
These conditions are necessary for any routing strategy to yield positive recurrence. Condition
(10.5) states that the total arrival rate to the network is less than the sum of service rates in the
network. Condition (10.6) states that the total arrival rate to servers 1 and 2 is less than the
sum of the service rates of those servers. Condition (10.7) has a similar interpretation. We will
show that conditions (10.5)-(10.7) are also sucient for positive recurrence for random routing for
a particular choice of u. To that end we consider the Lyapunov-Foster criteria.
Letting V = (x
2
1
+x
2
2
+x
2
3
) and arguing as in Example 1a, and using the fact (A
i
(t)D
i
(t))
2
1
for i = 1, 3 and (A
2
(t) D
2
(t))
2
4 yields (writing u
i
for 1 u
i
):
PV (x) V (x)
_
3

i=1
x
i
E[A
i
(t) D
i
(t)[X(t) = x]
_
+ 3
= x
1
(a
1
u
1
d
1
) +x
2
(a
1
u
1
+a
2
u
2
d
2
) +x
3
(a
2
u
2
d
3
) + 3 (10.8)
(x
1
+x
2
+x
3
) mind
1
a
1
u
1
, d
2
a
1
u
1
a
2
u
2
, d
3
a
2
u
2
+ 3 (10.9)
To obtain the best bound possible, we select u = (u
1
, u
2
) to maximize the min term in (10.9). For
any u, mind
1
a
1
u
1
, d
2
a
1
u
1
a
2
u
2
, d
3
a
2
u
2
, is the minimum of three numbers, which is less
than or equal to the average of the three numbers, is less than or equal to the average of any two
of the three numbers, and is less than or equal to any one of the three numbers. Therefore, for any
choice of u [0, 1]
2
,
mind
1
a
1
u
1
, d
2
a
1
u
1
a
2
u
2
, d
3
a
2
u
2
(10.10)
where
= min
d
1
+d
2
+d
3
a
1
a
2
3
,
d
1
+d
2
a
1
2
,
d
2
+d
3
a
2
2
, d
1
, d
2
, d
3

171
Note that > 0 under the necessary conditions (10.5)-(10.7). Taking u

1
[0, 1] as large as possible
subject to d
1
a
1
u

1
, and taking u

2
[0, 1] as large as possible subject to d
3
a
2
u

2
, yields
the choice
u

1
=
d
1

a
1
1 u

2
=
d
3

a
2
1
It is not hard to check that d
2
a
1
u

1
a
2
u

2
, so that equality holds in (10.10) for u = u

.
This paragraph shows that the above choice of and u

can be better understood by applying


the max-ow min-cut theorem to the ow graph shown.
v
1
q
q
q
2
3
s t
a
2
d !
d !
1
3
!
!
a
1
d !!
2
1
v
2
In addition to the source node s and sink node t, there are two columns of nodes in the graph.
Nodes v
1
and v
2
correspond to the two arrival streams, and nodes q
1
, q
2
and q
3
correspond to the
three queues. There are three stages of links in the graph. The capacity of a link (s,v
i
) in the
rst stage is a
i
, the capacities of the links in the second graph are very large, and the capacity of
a link (q
j
, t) in the third stage is d
j
. The choice of above is the largest possible so that (1)
all links have nonnegative capacity, and (2) the capacity of any s t cut is greater than or equal
to a
1
+ a
2
. Thus, the min ow max cut theorem insures that there exists an s t ow with value
a
1
+a
2
. Then, u

i
can be taken to be the fraction of the ow into v
i
that continues to q
i
, for i = 1
and i = 2.
Under (10.5)-(10.7) and using u

, we have PV (x) V (x) (x


1
+ x
2
+ x
3
) + 3. Thus, the
system is positive recurrent, and X
1
+X
2
+X
3

3

.
1
(b) Suppose each arrival is routed to the shorter of the two queues it can go to. For simplicity, let
such decisions be based on the queue lengths at the beginning of the slot. As far as simple bounds
are concerned, there is no need, for example, to take into account the arrival in the top branch when
making the decision on the bottom branch, although presumably this could improve performance.
Also, it might make sense to break ties in favor of queues 1 and 3 over queue 2, but that is not
very important either. We can denote this route-to-shorter policy by thinking of u in part (a) as
a function of the state, u =
RS
(x). Note that for given x, this choice of u minimizes the right
hand side of (10.8). In particular, under the route-to-shorter policy, the right hand side of (10.8)
is at least as small as its value for the xed control u

. Therefore, writing P
RS
for the one-step
transition matrix for the route to shorther policy, P
RS
V (x) V (x) (x
1
+x
2
+x +3) +3, and
under the necessary condtions (10.5)-(10.7), the system is positive recurrent, and X
1
+X
2
+X
3

3

.
2.4. An inadequacy of a linear potential function
Suppose x is on the postive x
2
axis (i.e. x
1
= 0 and x
2
> 0). Then, given X(t) = x, during
the slot, queue 1 will increase to 1 with probability a(1 d
1
) = 0.42, and otherwise stay at zero.
1
By using the fact that Xi = 0 if the arrival rate to queue i is zero, it can be shown that for u = u

, X1+X2+X3
3

, where

= min{
d
1
+d
2
+d
3
a
1
a
2
3
,
d
1
+d
2
a
1
2
,
d
2
+d
3
a
2
2
}.
172
Queue 2 will decrease by one with probability 0.4, and otherwise stay the same. Thus, the drift
of V , E[V (X(t + 1) V (x)[X(t) = x] is equal to 0.02. Therefore, the drift is strictly positive for
innitely many states, whereas the Foster-Lyapunov condition requires that the drift be negative
o of a nite set C. So, the linear choice for V does not work for this example.
2.5. Allocation of service
Let > 0. The largest we can make
1
(u) is m
1
and the largest we can make
3
(u) is m
2
. Thus,
we can select u so that

1
+
1
(u) and
3
+
3
(u) (10.11)
if and only if m
1

1
+ and m
2

3
+. If we select u so that equality holds in (10.11) then the
value of
2
(u) is maximized subject to (10.11) , and it is given by

2
(u) = m
1
+m
2

1
(u)
3
(u) = m
1
+m
2

1
+
3
2
Therefore, in order that
2
+
2
(u) hold in addition to (10.11), it is necessary and sucient
that

1
+
2
+
3

2
3
and m
1

1
+ and m
2

2
+ . Thus, the value of given in the
example is indeed the largest possible.
2.6. Opportunistic scheduling (Tassiulas and Ephremides [40])
(a) The left hand side of (2.27) is the arrival rate to the set of queues in s, and the righthand side
is the probability that some queue in s is eligible for service in a given time slot. The condition is
necessary for the stability of the set of queues in s.
(b) Fix > 0 so that for all s E with s ,= ,

is
(a
i
+)

B:Bs,=
w(B)
Consider the ow graph shown.
.
a
b
q
1
q
2
q
N
s
1
s
2
s
N!
a
2
a
1
a
N
s
k
N!
2
1
w(s )
w(s )
w(s )
w(s )
k
+!
+!
+!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
In addition to the source node a and sink node b, there are two columns of nodes in the graph. The
rst column of nodes corresponds to the N queues, and the second column of nodes corresponds
to the 2
N
subsets of E. There are three stages of links in the graph. The capacity of a link (a,q
i
)
in the rst stage is a
i
+, there is a link (q
i
, s
j
) in the second stage if and only if q
i
s
j
, and each
173
such link has capacity greater than the sum of the capacities of all the links in the rst stage, and
the weight of a link (s
k
, t) in the third stage is w(s
k
).
We claim that the minimum of the capacities of all a b cuts is v =

N
i=1
(a
i
+ ). Here is a
proof of the claim. The a b cut (a : V a) (here V is the set of nodes in the ow network)
has capacity v

, so to prove the claim, it suces to show that any other a b cut has capacity
greater than or equal to v

. Fix any a b cut (A : B). Let



A = A q
1
, . . . , q
N
, or in words,

A
is the set of nodes in the rst column of the graph (i.e. set of queues) that are in A. If q
i


A and
s
j
B such that (q
i
, s
j
) is a link in the ow graph, then the capacity of (A : B) is greater than or
equal to the capacity of link (q
i
, s
j
), which is greater than v

, so the required inequality is proved


in that case. Thus, we can suppose that A contains all the nodes s
j
in the second column such
that s
j


A ,= . Therefore,
C(A : B)

iq
1
,...,q
N

e
A
(a
i
+) +

sE:s
e
A,=
w(s)

iq
1
,...,q
N

e
A
(a
i
+) +

i
e
A
(a
i
+) = v

, (10.12)
where the inequality in (10.12) follows from the choice of . The claim is proved.
Therefore there is an ab ow f which saturates all the links of the rst stage of the ow graph.
Let u(i, s) = f(q
i
, s)/f(s, b) for all i, s such that f(s, b) > 0. That is, u(i, s) is the fraction of ow
on link (s, b) which comes from link (q
i
, s). For those s such that f(s, b) = 0, dene u(i, s) in some
arbitrary way, respecting the requirements u(i, s) 0, u(i, s) = 0 if i , s, and

iE
u(i, s) = I
s,=
.
Then a
i
+ = f(a, q
i
) =

s
f(q
i
, s) =

s
f(s, b)u(i, s)

s
w(s)u(i, s) =
i
(u), as required.
(c) Let V (x) =
1
2

iE
x
2
i
. Let (t) denote the identity of the queue given a potential service at
time t, with (t) = 0 if no queue is given potential service. Then P[(t) = i[S(t) = s] = u(i, s). The
dynamics of queue i are given by X
i
(t +1) = X
i
(t) +A
i
(t) R
i
((t)) +L
i
(t), where R
i
() = I
=i
.
Since

iE
(A
i
(t) R
i
(
i
(t)))
2


iE
(A
i
(t))
2
+ (R
i
(
i
(t)))
2
N +

iE
A
i
(t)
2
we have
PV (x) V (x)
_

iE
x
i
(a
i

i
(u))
_
+K (10.13)

_

iE
x
i
_
+K (10.14)
where K =
N
2
+

N
i=1
K
i
. Thus, under the necessary stability conditions we have that under the
vector of scheduling probabilities u, the system is positive recurrent, and

iE
X
i

K

(10.15)
(d) If u could be selected as a function of the state, x, then the right hand side of (10.13) would
be minimized by taking u(i, s) = 1 if i is the smallest index in s such that x
i
= max
js
x
j
. This
suggests using the longest connected rst (LCF) policy, in which the longest connected queue is
served in each time slot. If P
LCF
denotes the one-step transition probability matrix for the LCF
policy, then (10.13) holds for any u, if P is replaced by P
LCF
. Therefore, under the necessary
condition and as in part (b), (10.14) also holds with P replaced by P
LCF
, and (10.15) holds for
174
the LCF policy.
2.7. Routing to two queues continuous time model
Suppose <
1
+
2
, which is a necessary condition for positive recurrence.
(a) Under this condition, we can nd u so that
1
> u and
2
> u. If each customer is indepen-
dently routed to queue 1 with probability u, and if V (x) = (x
2
1
+x
2
2
)/2, (2.20) becomes
2QV (x) =
_
(x
1
+ 1)
2
x
2
1
_
u +
_
(x
2
+ 1)
2
x
2
2
_
u +
_
(x
1
1)
2
+
x
2
1
_

1
+
_
(x
2
1)
2
+
x
2
2
_

2
.
Since (x
1
1)
2
+
(x
i
1)
2
, it follows, with = +
1
+
2
, that
QV (x) (x
1
(
1
u) +x
2
(
2
u)) +

2
. (10.16)
Thus, the combined stability criteria and moment bound applies, yielding that the process is positive
recurrent, and X
1
(
1
) +X
2
(
2
u)

2
.
In analogy to Example 1a, we let
= max
0u1
min
1
u,
2
u
= min
1
,
2
,

1
+
2

2

and the corresponding value u

of u is given by
u

=
_
_
_
0 if
1

2
<
1
2
+

1

2
2
if [
1

2
[
1 if
1

2
>
For the system with u = u

, X
1
+ X
2


2
. The remark at the end of Example 1a also carries
over, yielding that for splitting probability u = u

, X
1
+X
2

1
+
2

(b) Consider now the case that when a packet arrives, it is routed to the shorter queue. To be
denite, in case of a tie, the packet is routed to queue 1. Let Q
RS
denote the transition rate matrix
in case the route to short queue policy is used. Note that for any u, (x
1
u + x
2
u) (x
1
x
2
).
Thus, (10.16) continues to hold if Q is replaced by Q
RS
. In particular, if <
1
+
2
, then the
process X under the route-to-shorter routing is positive recurrent, and X
1
+X
2


2
.
2.8. Stability of two queues with transfers
(a) System is positive recurrent for some u if and only if
1
<
1
+,
2
<
2
, and
1
+
2
<
1
+
2
.
(b)
QV (x) =

y:y,=x
q
xy
(V (y) V (x))
=

1
2
[(x
1
+ 1)
2
x
2
1
] +

2
2
[(x
2
+ 1)
2
x
2
2
] +

1
2
[(x
1
1)
2
+
x
2
1
] +

2
2
[(x
2
1)
2
+
x
2
2
] +
uI
x
1
1
2
[(x
1
1)
2
x
2
1
+ (x
2
+ 1)
2
x
2
2
] (10.17)
(c) If the righthand side of (10.17) is changed by dropping the positive part symbols and dropping
the factor I
x
1
1
, then it is not increased, so that
QV (x) x
1
(
1

1
u) +x
2
(
2
+u
2
) +K
(x
1
+x
2
) min
1
+u
1
,
2

2
u +K (10.18)
175
where K =

1
+
2
+
1
+
2
+2
2
. To get the best bound on X
1
+ X
2
, we select u to maximize the min
term in (10.18), or u = u

, where u

is the point in [0, 1] nearest to



1
+
2

2
2
. For u = u

, we
nd QV (x) (x
1
+ x
2
) + K where = min
1
+
1
,
2

2
,

1
+
2

2
2
. Which of the
three terms is smallest in the expression for corresponds to the three cases u

= 1, u

= 0, and
0 < u

< 1, respectively. It is easy to check that this same is the largest constant such that the
stability conditions (with strict inequality relaxed to less than or equal) hold with (
1
,
2
) replaced
by (
1
+,
2
+).
2.9. Stability of a system with two queues and modulated server
By inspection, the equilibrium distribution of the server state is (w(0), w(1), w(2)) = (
1
2
,
1
4
,
1
4
).
Thus, over the long run, the server can serve queue 1 at most
3
4
of the time, it can serve queue
2 at most
3
4
of the time, and, of course, it can oer at most one service in each slot. Thus, it is
necessary that a
1
<
3
4
, a
2
<
3
4
, and a
1
+ a
2
< 1. (It can be shown that these conditions are also
sucient for stability by using the Lyapunov function V (x) =
x
2
1
+x
2
2
2
applied to the time-sampled
process Y (n) = X(nk), where k is a constant large enough so that the mean fraction of time the
server is in each state over k time slots is close to the statistical average, no matter what the state
of the server at the beginning of such interval.)
3.1. A queue with customers arriving in pairs
(a)
. . .
3
1
2 0 4


! ! ! !
!
(b) For any function V on Z, QV (x) = (Q(x+2)Q(x))+(Q((x1)
+
)Q(x)). In particular, if
V (x) = x, then QV (x) = (2) +I
x=0
. Therefore, by the Foster-Lypunov stability criteria,
if 2, then N is recurrent, and if > 2, then N is positive recurrent.
(c) Take V (x) =
x
2
2
. Then,
QV (x) =
_
x( 2) +
4+
2
if x 1
2 if x = 0
Thus QV (x) x( 2) +
4+
2
for all x Z. Conclude by the combined Foster-Lyapunov
criteria and moment bounds that if > 2, then N is positive recurrent, and N
4+
2(2)
.
(d) Since the process is not explosive, it is positive recurrent if and only if there is a probability
distribution p so that pQ = 0. The equation pQ = 0 means that the net probability ux into each
state should be zero. Equivalently, pQ = 0 means that the next ux out of the set 0, 1, . . . , k
is zero for each k 0. Dening p
1
= 0 for convenience, the equation pQ = 0 is equivalent
to the set of equations: (p
k1
+ p
k
) = p
k+1
for k 0. (This is a set of second order linear
dierence equations.) Multiply each side of the k
th
equation by z
k+1
, sum from k = 0 to to
yield (z
2
+z)P(z) = (P(z) p
o
), so that
P(z) =
p
0
1 (z/) (z
2
/)
The condition P(1) = 1 yields that p
0
= 1 2/, which is valid only if 2/ < 1. Under that
176
condition,
P(z) =
1 2/
1 (z/) (z
2
/)
The method of partial fraction expansion can be used to express P as the sum of two terms with
degree one denominators. First, P is rewritten as
P(z) =
1 2/
(1 z/a)(1 z/b)
where a and b are the poles of P (i.e. the zeros of the denominator), given by
a =
1 +
_
1 +
4

2
b =
1
_
1 +
4

2
Matching P and its partial fraction expansion near the poles yields
P(z) = (1 2/)
_
1
(1 z/a)(1 a/b)
+
1
(1 b/a)(1 z/b)
_
so that
p
k
= (1 2/)
_
a
k
1 a/b
+
b
k
1 b/a
_
k 0
(Using this expression we can also nd an explicit expression for N, but it is more complicated than
the bound on N found in part (c).) Thus, we have found that there exists a probability distribution
p such that pQ = 0 if and only if 2/ < 1.
(e) r
k
= p
k
by the PASTA property. d
k
is proportional to p
k+1
for k 1, so that d
k
=
p
k+1
1p
0
.
3.2. Token bucket regulation of a Poisson stream
Suppose for simplicity that the tokens arrive at time 0, 1, 2, , and let X
k
denote the number of
tokens present at time k (Another approach would be to consider the number of tokens just after
time k.) If
k
=
e

k
k!
, then the one-step transition probability matrix for X = (X
k
: k 0) is
P =
_
_
_
_
_
_
_
_
_
1
0

0
1
0

1

1

0
1
0

1

2

2

1

0
.
.
.
.
.
.
.
.
.
1
0

B1

1

0
1
0

B1

1

0
_
_
_
_
_
_
_
_
_
(b) The throughput of packets is equal to the throughput of tokens, where is 1
B
. Therefore,
P
loss for packets
=
(1
B
)

. (The expression used here would be dierent for the other choice of X
mentioned in the solution to part (a)).
(c) Let Z(t) denote the number of tokens present at time t. Then Z is a continuous time Markov
process with the transition rate diagram shown.
. . .
1
2 0 B!1 B
! ! ! ! !
1 1 1 1 1
177
Then,
P
loss for packets
= P[no tokens present] =
1
1 +
1

+ +
1

B
=

B

B
+
B1
+ + 1
3.3. Extremality of constant interarrival times for G/M/1 queues
(a) A

d
(s) = exp(s/). Let t
1
have distribution function A. Then since exp(sx) is convex in x,
Jensens inequality can be applied to yield A

(s) = E[exp(st
1
)] exp(sE[t
1
]) = A

d
(s).
(b) Let
o
and
d
be values of for the respective queues. Since A

d
((1
o
)) A

((1
0
) =
0
and A

d
is convex, it follows (see the gure) that
d

o
. The assertions follow because the required
means are increasing functions of .
o
(1!")
#
A ( ) (1!")
#
A ( )
d
"
1
1
d
"
3.4. Propagation of perturbations
(a) See the solid lines in the gure. The departure times are indicated by where the solid diagonal
lines meet the axis, and occur at times 6,8,12,15, and 17. The waiting times in queue are 0, 4, 3,
0, 1.
9
0 2 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
1
2
3
4
5
6
7
t
8
(b) The departure times are indicated by where the dashed diagonal lines meet the axis, and occur
at times 6,9,13,15, and 17. The waiting times in queue are 0, 4, 4, 0, 1.
(c) The departure times are indicated by where the dashed diagonal lines meet the axis, and occur
at times 6,10, 14, 16, and 18. The waiting times in queue are 0, 4, 5, 1, 2.
(d) The waiting times of all following customers in the same busy period increase by the same
amount. Busy periods can merge, and the waiting times of customers originally in a later busy
period can also increase.
178
3.5. On priority M/GI/1 queues
(a) The server works on a type 1 customer whenever one is present, and otherwise works on a type
2 customer, if one is present. If the server is interrupted by a type 1 customer when it is working on
a type 2 customer, the service of the type 2 customer is suspended until no more type 1 customers
are in the system. Then service of the type 2 customer is resumed, with its remaining service time
the same as when the service was interrupted.
(b) Let =
1
+
2
,
i
=
i
X
i
, and =
1
+
2
. Then X =
P
i

i
X
i

and X
2
=
P
i

i
X
2
i

Type 1
customers are not eected by type 2 customers, so the mean system time for type 1 customers is
the same as if there were no type 2 customers:
T
1
= W
1
+X
1
=

1
X
2
!
2(1
1
)
+X
1
where
1
=
1
X
1
. To nd an expression for T
2
, consider a type 2 customer. Recall that the mean
work in the system is given by
X
2
2(1)
. The PASTA property holds. Thus, T
2
is the mean of the
sum of the work already in the system when the customer arrives, the service time of the customer
itself, and the amount of preempting work that arrives while the type 2 customer is in the system:
T
2
=
X
2
2(1 )
+X
2
+
1
T
2
.
Solving for T
2
yields
T
2
=
_
1
1
1
_
_
X
2
2(1 )
+X
2
_
3.6. Optimality of the c rule
Let J() =

i

i
c
i
W
i
() =

i

i
c
i

i
W
i
() denote the cost for permutatation .
Claim: Let be a permutation and let denote another permutation obtained from by
swapping
i
and
i+1
for some i. Then J( ) < J() if and only if

i
c

i
<

i+1
c

i+1
.
The claim is proved as follows. Since only customers of classes i and i + 1 are aected by
the swap, W
j
( ) = W
j
() for j ,
i
,
i+1
. On the other hand, since class
i
(i.e. class
i+1
)
customers have lower priority under than under , W

i
( ) > W

i
(). Thus, > 0, where
=

i
W

i
( )

i
W

i
().
Similarly, W

i+1
( ) < W

i+1
(), and furthermore by the conservation of ow equations,

i+1
W

i+1
( )

i+1
W

i+1
() = .
Therefore, J( ) J() = (

i
c

i+1
c

i+1
). Since > 0, the claim follows immediately.
By the claim, if does not satisfy the ordering condition

1
c

2
c

2
. . .

K
c

K
, then
is not optimal.
Conversely, if does satisfy the ordering condition and if
0
is an arbitrary ordering, there is
a sequence of orderings
0
,
1
, . . . ,
p
= so that J(
0
) J(
1
) J(
p
) = J(), so that
is optimal.
The c rule is intuitively plausible because c is the rate that instantaneous cost is reduced
when a rate server works on a customer with instantaneous holding cost c.
179
3.7. A queue with persistent customers
(a) The system is stable if and only if
D
b
< 1. See part (b) for justication.
(b) Under the PR service order, we can view the network as an ordinary M/GI/1 queue in which
the service time variable X for a typical customer is D times a Geo(b) random variable. The system
is stable if < 1, were = X =
D
b
. Under this condition, the mean time in the system is given
by the Polleczek-Khinchine formula with =
D
b
, X =
D
b
, and X
2
=
D
2
(2b)
b
2
:
T = W +X =
X
2
2(1 )
+X =
(2 D)D
2(b D)
.
(c) The distribution of the number of customers in the system does not depend on the order of
service (assuming no advance knowledge of how many times a given customer will return). So the
mean number of customers in the system does not depend on the service order. So by Littles
law, the mean time in the system does not depend on the order of service. However, the FIFO
order at the queue would lead to a larger variance of total system time than the PR service or-
der. (This is implied by the fact that if a
1
< < a
n
and d
1
< < d
n
and a
i
< d
i
, then

(d
i
a
i
)
2
<

(d

i
a
i
)
2
for any permutation other than the identity permutation. Note that
the PR order is not FIFO for the queue, but is FIFO for the system.)
3.8. A discrete time M/GI/1 queue
The mean service time is 7/2, so the service rate is 2/7. Let = 7p/2, which is the arrival rate, p,
divided by the service rate. Also, is the probability the server is busy in a time slot. Arrivals see
the system in equilibrium by the discrete time version of the PASTA property, which is true when
numbers of arrivals in distinct slots are independent. Given the server is busy during the slot of
an arrival, the residual service time of the customer in service is distributed over 0, 1, 2, 3, 4, 5.
The total service time of the customer in service has distribution P[l = l] =
f
l
m
1
, for 1 l 6,
with mean E[L] =
m
2
m
1
=
1
2
+2
2
+3
2
+4
2
+5
2
+6
2
6m
1
=
13
3
. Given L, is uniform on 0, 1, . . . , L 1, so
E[] =
E[L]1
2
=
5
3
. Thus, W = 3.5N + (
7p
2
)(
5
3
), where N is the mean number of customers seen
by an arrival, equal to pW. Solving for W yields W =
35p
621p
for 0 p <
2
7
. An embedded Markov
process is given by the number of customers in the system just after the departure of a customer.
To be specic, this number could include the new arrival, if any, occurring is the same slot that a
service is completed.
Here ia a second way to solve the problem, obtained by viewing the events at slot boundaries
in a dierent order. Suppose that the customer in service exits if its service is complete, and if so,
the customer at the head of the queue (if any) is moved into the server before a possible arrival is
added. In this view, a new customer still arrives to nd the server busy with probability =
7p
2
, but
the residual service time is distributed over 1, 2, 3, 4, 5, 6. The total service time of the customer
in service hs the same sampled lifetime distribution as before , with mean E[L] =
13
3
, but here,
given L, is uniformly distributed over 1, . . . , L, so E[] =
8
3
. Thus, by the PASTA property,
W = 3.5N + (
7p
2
)(
8
3
), where N is the mean number in the queue seen by an arrival. A customer
waiting W slots for service can be seen in the queue for W 1 observation times, so by Littles law,
N = (W1)p. thus, W = 3.5(W1)p+(
7p
2
)(
8
3
), and solving for W yields W =
35p
621p
for 0 p <
2
7
.
3.9. On stochastic ordering of sampled lifetimes
If f
L
g
L
, then
_

0
(x)f
L
(x)dx >
_

0
(x)g
L
(x)dx for any strictly decreasing positive function
180
on R
+
. Taking (x) =
1
x
yields that 1 =
_

0
f(x)dx >
_

0
g(x)dx = 1, which is impossible.
3.10. Eective bandwidth, buerless link
(a)
1
(s) =
1
s
ln(
e
2s
1
2s
),
2
(s) =
ln(1s)
s
.
(b) n=79.05 (or round to n = 79)
(c)
1
(s

) = 1.05,
2
(s

) = 1.19, and C
eff
=177.0. As one might expect,
1
(s

) is somewhat smaller
than
2
(s

).
*
168
79
79
148
0
0
A(s )
3.11. Eective bandwidth for a buered link and long range dependent Gaussian
trac
(a) n = 88.80 (round down to n=88).
1
(s

) = 1.0085,
2
(s

) = 1.1354, t

= 31.6, C
eff
= 190.38.
(b)
0
0
* A(s , t ) *
188
88
88
167
0
0
* A(s , t ) *
85
184
85
154
(a)
(b) n = 85.33 (round down to n=85).
1
(s

) = 1.015,
2
(s

) = 1.1827, t

= 22.7, C
eff
= 187.53.
Due to the tighter overow probability constraint, the eective bandwidths in part (b) are larger
than those in part (a) and the eective bandwidth of the link is smaller. Interestingly, the critical
time scale is smaller for part (b) than for part (a).
3.12. Extremal equivalent bandwidth for xed mean and range
Intuitively, X should be as bursty as possible, suggesting the choice P[X

= 0] = P[X

= 2] = 0.5.
Heres a proof. Fix s > 0. The function e
sx
is convex in x over the interval x [0, 2], with values 1
and e
2s
at the endpoints. So e
sx
1 +
x
2
(e
2s
1) for x [0, 2]. Plugging in X on each side, taking
expectations, and using the constraint E[X] = 1 yields E[e
sX
]
1+e
2s
2
= E[e
sX

].
5.1. Time reversal of a simple continuous time Markov process
Q
t
=
_
_
_
_
_
1
3
0 0 0
0
1
3(1+)
0 0
0 0
1
3
0
0 0 0

3(1+)
_
_
_
_
_
1
Q
T
_
_
_
_
_
1
3
0 0 0
0
1
3(1+)
0 0
0 0
1
3
0
0 0 0

3(1+)
_
_
_
_
_
=
_
_
_
_
1 0 1 0
1 + (1 +) 0 0
0
1
1+
1

1+
0 1 0 1
_
_
_
_
181
1
3
1+!
1 2
4
1
!
1+!
1+!
1
5.2. Reversibility of a three state Markov process
The process is reversible and = (
1
,
2
,
3
) is the equilibrium distribution, if and only if the
detailed balance equations hold:
1
= a
2
,
2
= b
3
,
3
= c
1
. The rst two of the equations yield
that
1
= ab
3
, which can hold together with the third equation only if abc = 1. This condition
is thus necessary. If abc = 1, we nd that the detailed balance equations hold for the probability
distribution proportional to (ab, b, 1). Thus, the condition abc = 1 is also sucient for reversibility.
5.3. A link with heterogeneous call requirements
(a)
2,0 N,0
j
i
!
!
F
S
i,j
0,0
0,K
0,1 1,1
2,1
1,0
(b) If there were innitely many channels, the numbers of slow and fast connections would form
independent M/M/ systems, which is reversible, because each coordinate process is a one-
dimensional, and hence reversible, Markov process, and two independent reversible processes are
jointly reversible. The equilibrium distribution would be (i, j) =
e

S
S
i
i!
e

F
F
j
j!
. The process
with the nite number of channels is obtained by truncation of the process with innitely many
channels, to the state space o
K,L
= (i, j) Z
2
+
: i +Lj KL, and is hence also reversible. The
equilibrium distribution is
K,L
(i, j) =

S
i

F
j
i!j!Z(K,L)
, where Z(K, L) is selected to make
K,L
sum to
one.
(c) B
S
=

L
j=0

K,L
((K j)L, j), and B
F
= 1
Z(K1,L)
Z(K,L)
.
5.4. Time reversibility of an M/GI/1 processor sharing queue
(a) No, due to the memory in the service time varibles. One Markov process is (t) = (N(t), S
1
(t), . . . , S
N(t)
(t)),
where S
1
(t) S
N(t)
(t)) 0 and S
i
(t) denotes how much service the ith customer in the sys-
tem received so far.
(b) Let nnn = (n
1
, . . . , n
k
) and n = n
1
+ +n
k
. The nonzero o-diagonal rates are
q(nnn, n
t
n
t
n
t
) =
_
_
_
n
i

i
n
if n
t
n
t
n
t
= (n
1
, . . . , n
i
1, n
i+1
+ 1, . . . , n
k
)
if n
t
n
t
n
t
= (n
1
+ 1, n
2
, . . . , n
k
)
n
k

k
n
if n
t
n
t
n
t
= (n
1
, . . . , n
k1
, n
k
1)
182
(c) The conjectured rate matrix for the time-reversed process is given by (only nonzero o-diagonal
rates are indicated):
q
t
(n
t
n
t
n
t
, nnn) =
_

_
(1+n
i+1
)
i+1
n
if n
t
n
t
n
t
= (n
1
, . . . , n
i
1, n
i+1
+ 1, . . . , n
k
)
(1+n
1
)
1
n+1
if n
t
n
t
n
t
= (n
1
+ 1, n
2
, . . . , n
k
)
if n
t
n
t
n
t
= (n
1
, . . . , n
k1
, n
k
1)
To simultaneously prove the conjecture for and q
t
, it suces to check (nnn)q(nnn, n
t
n
t
n
t
) = (n
t
n
t
n
t
)q
t
(n
t
n
t
n
t
, nnn)
for nnn ,= n
t
n
t
n
t
. We nd
(n
t
n
t
n
t
)
(nnn)
=
_

_
a
i+1
n
i+1
+1
n
1
a
i
=
n
i

i
(n
i+1
+1)
i+1
if n
t
n
t
n
t
= (n
1
, . . . , n
i
1, n
i+1
+ 1, . . . , n
k
)
(n+1)a
1

n
1
+1
=
(n+1)

1
(n
1
+1)
if n
t
n
t
n
t
= (n
1
+ 1, n
2
, . . . , n
k
)
n
k
na
k
=

k
n
k
n
if n
t
n
t
n
t
= (n
1
, . . . , n
k1
, n
k
1)
=
q(nnn, n
t
n
t
n
t
)
q
t
(n
t
n
t
n
t
, n
t
n
t
n
t
)
as required.
(d) The total service time for a customer has the same distribution in either system, namely the
sum Exp(
1
) + +Exp(
k
). Hence, the random processes giving the number of customers in the
system are statistically the same for the forward and reverse time systems.
5.5. A sequence of symmetric star shaped loss networks
(a) The set of routes is the set of subsets of 1, . . . , M of size two. Thus, [[ =
_
M
2
_
. The
process at time t has the form X(t) = (X
r
(t) : r ) with state space o = x Z
1
+
:

r:ir
x
r

5 for 1 i M and transition rate matrix given by (only nonzero o diagonal rates are given):
q(x, x +e
r
) =
4
M1
if x, x +e
r
o
q(x, x e
r
) = x
r
if x, x e
r
o
(b) The unthinned rate at a link is given by

=

r:r

= [r : r [
4
M1
= 4, so for
any M, B

E[4, 5] = 0.19907 and the call acceptance probability is greater than or equal to
(1 0.19907)
2
= 0.64149.
If M = 2 then B

= E[4, 5] and the call acceptance probability is 1-0.19907=0.8001.


(c) The reduced load approximation is given by the xed point equations:

= (1

B

) and

B

= E[

, C]
The equations do not depend on M, and the solution is

B

= 0.1464, leading to the estimate


(1

B

)
2
= 0.72857 for the call acceptance probability. The xed point approximation for this
example, for both

B

and call acceptance probability, can be shown to be exact in the limit M .


The numbers are summarized in the following table:
B

Proute r is free
bound 0.1999 0.6915
M = 2 0.199 0.811
M = 3 0.14438 0.745
M = 4 0.146 0.739
M 0.146 0.728
183
6.1. Deterministic delay constraints for two servers in series
(a) The total arrival stream to the rst queue is (
1
+
2
,
1
+
2
) constrained, and the server is
FIFO, so the delay is bounded above by d
1
=

1
+
2
C
1
|. (d
1
= 2 for the example.)
(b) Stream 1 at the output of queue 1 is (
1
+
1
d
1
,
1
)-upper constrained, because A
1
is (
1
,
1
)-
upper constrained and delay in the rst queue is less than or equal to d
1
. ( =
1
+
1
d
1
= 8 for
the example.)
(c) By the example after Propositoin 6.3.2, there is a bound on the delay of the lower priority
stream for a FIFO server, constant service rate C, and (, ) constrained inputs. This yields:
delay for stream 1 in queue 2
(
1
+
1
d
1
) +
3
C
2

3
=
12
3
= 4 (for the example)
6.2. Calculation of an acceptance region based on the SCED algorithm
(a) Let g(t) = +t for , > 0. By the denition given in the notes, g

(0) = 0 and for t 1,


g

(t) = ming(s
1
) + + g(s
n
) : n 1, s
1
, . . . , s
n
1, s
1
+ + s
n
= t = minn + t : n
1 = + t. Another approach to this problem is to use the denition of g

in [5]. Yet another


approach is to let g
0
(t) = I
t1
g(t) and show that g
0
is the maximal subadditive function equal to
0 at 0, and less than or equal to g.
(b) Before working out special cases, lets work out the general case. Let g(t) = + t and
f(t) = (t d)
+
where > 0, D > 0, r > > 0. To nd g

f by a graphical method, we use the


formula (g

f)(t) = min
0st
g

(s) + f(t s). That is, g

f is the minimum of the functions


obtained by considering g

(s) +f(t s) as a function of t on the interval t s for each s xed.


!
" #$"%!%
!
" &!
!
! + " (#!")
So, for the parameters at hand,
t
4 9 20 24
20 24
25
40
36
slope 1
slope 4
24
slope 6
slope 5
t
(c) Since n
1
g

1
f
1
+ n
2
g

2
f
2
is piecewise linear with intial value 0, this function is less than or
equal to Ct for all t 0 if inequality is true at breakpoints and at + (i.e. in the limit as t ).
This requires n
1
(25, 36, 40, 1) + n
2
(0, 0, 24, 4) (900, 2000, 2400, 100), which simplies to n
1
36
and n
1
+ 4n
2
100.
184
100
36
60 100
25
6.3. Serve longer service priority with deterministically constrained arrival processes
(a) The two queues combined is equivalent to a single queue system, with a work conserving
server that serves up to C customers per slot. The total arrival stream is (
1
+
2
,
1
+
2
)-upper
constrained, so that the maximum combined carry over for a given slot t to the next satises
q(t)
1
+
2
.
(b) By the same reason, the delay bound based on system busy periods can be applied to yield
d

1
+
2
1
C
1

2
|.
(c) Under FIFO service within each stream and service to longer queue, each customer from stream
1 exits no later than it would exit from a system with the same arrival streams, FIFO service within
each stream, and pure priority to type 2 customers, a case considered in the notes and class. The
delay is thus bounded above by

1
+
2
C
2
.
7.1. A shortest path problem
(a)
24
12 14 23 4
22 10
3 1 8 29
15 15
2
7
11 17 27 6
16 5
13 9
s
12
5
16 21 43
37 6 2
9
14
37
28 19
(b) Yes, because D
j
> D
i
+d
ij
for every edge ij not in the tree.
(c) Synchronous Bellman-Ford would take 8 iterations (on the 9th iteration no improvement would
be found) because the most hops in a minimum cost path is 8.
7.2. A minumum weight spanning tree problem
s
12 14 23 4
22 10
3 1 8 29
15 15
2
7
11 17 27 6
16 5
13 9
The MWST is unique because during execution of the Prim-Dijkstra algorithm there were no ties.
(The suciency of this condition can be proved by a slight modication of the induction proof used
to prove the correctness of the Prim-Dijkstra algorithm. Suppose the Prim-Dijkstra algorithm is
executed. Prove by induction the following claim: The set of k edges found after k steps of the
algorithm is a subset of all MWSTs.)
7.3. A maximum ow problem
The capacities, a maximum ow, and a minimum cut, are show. The maximum ow value is 15.
185
t
s
1/1
22/6
11/5
15/0 4/4 3/3
3/3 12/9 23/10 29/11
16/9 12/11
9/8 2/2
6/6
7/6
15/6
13/7 1/1
14/11
27/14
1/1
8.1. A ow deviation problem (Frank-Wolfe method)
(a) Initially the 8 links directed counter clockwise along the perimeter of the network carry zero
ow, and the other 16 links carry ow b. So cost(f)=8b
2
.
(b) Flow f uses the 8 links not used initially, as shown.
(b)
y
(a)
Thus, cost(f + (1 )f) = 8(2b)
2
+ 16((1 )b)
2
. Minimizing over , we nd

=
1
3
. Under
the new ow, f
1
=

f +(1

)f, all 24 links carry ow


2b
3
. The rst derivative link length of all
paths used is minimum. So f
1
is an optimal ow.
8.2. A simple routing problem in a queueing network
In equilibrium, the stations are independent and have the distribution of isolated M/M/1 queues,
and the delay in an M/M/1 queue with arrival rate and departure rate is
1

. Therefore,
D
a
=
1
2 (1 +p)
and D
b
(p) = p
_
1
2 (1 +p)
_
+ (1 p)
_
1
2 (1 p)
+
1
2 (1 p)
_
(b)
p=0.17
0
D
a
D
b
0 1 2 1.5
1
1.5
2
p=0
p=1/3
(c) p = 0 minimizes D
a
, because D
a
is increasing in p.
(d) Setting D
t
b
(p) = 0 yields p =
1
3
.
(e) Setting D
t
a
(p) + D
t
b
(p) = 0 yields p 0.17. Only one iteration of the ow deviation algorithm
is necessary because the only degree of freedom is the split for ow b between two paths. The line
186
search for ow deviation solves the problem in one iteration.
(f) The solution is p =
1
3
, because D
a
is increasing in p, D
b
is decreasing in p, and D
a
(
1
3
) = D
b
(
1
3
).
(It is a coincidence that the answers to (d) and (f) are the same.)
8.3. A joint routing and congestion control problem
(a) Since D
t
l
(F) = F, the price for a link is equal to the ow on the link. For example, path 124
has price 2a +d. The optimality conditions are:
2a +d

14
a +b
with equality if a > 0
2b +c

14
a +b
with equality if b > 0
2c +b

23
c +d
with equality if c > 0
a + 2d

23
c +d
with equality if d > 0
The rst two conditions are for o-d pair w = (1, 2) and the last two are for o-d pari w = (2, 3).
(b) If a, b, c, d > 0 the equality constraints hold in all four conditions. The rst two conditions yield
2a +d = 2b +c or, equivalently, 2a 2b = c d. The last two yield 2c +b = a +2d or, equivalently,
2a 2b = 4c 4d. Thus, c d = 4c 4d or, equivalently, c = d, and hence also a = b. (c) We seek
a solution such that all four ows are nonzero, so that part (b) applies. Replacing b by a and d by
c, and using
14
= 66 and
23
= 130, the optimality conditions become
2a +c =
66
2a
2c +a =
130
2c
or a = b = 3 and c = d = 5. To double check, note that the two routes for w = (1, 4) have price 11
each, which is also the marginal value for route (1, 4), and the two routes for w = (2, 3) have price
13 each, which is also the marginal value for route (2, 3).
8.4. Joint routing and congestion control with hard link constraints
(a) The primal variables should be feasible: a, b, c, d 0 and a C
1
, b +c C
2
, d C
3
. The dual
variables should satisfy the positivity and complementary slackness condition: p
i
0 with equality
if link i is not saturated. It is easy to see that the optimal ow will saturate all three links, so that
nonzero prices are permitted. Finally, since U
t
1
(x
1
) = x
1/2
1
and U
t
2
(x
2
) = ln(x
2
), the remaining
conditions are:
(a +b)
1/2
p
1
with equality if a > 0
(a +b)
1/2
p
2
with equality if b > 0
1
c+d
p
2
with equality if c > 0
1
c+d
p
3
with equality if d > 0
(b) Clearly we can set a = d = 8 and c = 8 b. It remains to nd b. The optimality conditions
become
(8 +b)
1/2
p
2
with equality if b > 0
1
16b
p
2
with equality if b < 8
187
Since (8 + b)
1/2
>
1
16b
over the entire range 0 b 8, the optimal choice of b is b = 8. This
yields the assignment (a, b, c, d) = (8, 8, 0, 8), price vector (p
1
, p
2
, p
3
) = (
1
4
,
1
4
,
1
8
), and maximum
value 2 + ln(8).
8.5. Suciency of the optimality condition for hard link constraints
Since L(x, p) is concave in x, for any with 0 < < 1,
L(x, p) L(x

, p)
1

L(x

+(x x

), p) L(x

, p)
Taking the limit as 0 yields
L(x, p) L(x

, p)

r
U
t
r
(x

r
)(x
r
x

r
)

l
p
l
(

r:lr
x
r
x

r
)
=

r
(x
r
x

r
)
_
U
t
r
(x

r
)

lr
p
l
_
0 (10.19)
where the nal inequality follows from the fact that for each r, the quantity in braces in (10.19) is
less than or equal to zero, with equality if x

r
> 0.
(b) Note that

l
p
l
(C
l

r:lr
x

r
) = 0 and if Ax C, then

l
p
l
(C
l

r:lr
x
r
) 0. So if x 0
and Ax C,

r
U(x

r
) = L(x

, p) L(x, p)

r
U(x
r
).
8.6. Fair ow allocation with hard constrained links
(a) By inspection, x
maxmin
= (
1
3
,
1
3
,
1
3
,
1
3
).
(b) (proportional fairness) Let p
l
denote the price for link l. Seek a solution to the equations
x
1
=
1
p
1
+p
2
+p
3
x
2
=
1
p
1
+p
2
x
3
=
1
p
1
x
4
=
1
p
2
+p
3
x
1
+x
2
+x
3
1, with eqaulity if p
1
> 0
x
1
+x
2
+x
4
1, with eqaulity if p
2
> 0
x
1
+x
4
1, with eqaulity if p
3
> 0
Clearly x
1
+x
4
< 1, so that p
3
= 0. Also, links 1 and 2 will be full, so that x
3
= x
4
. But x
3
=
1
p
1
and
x
4
=
1
p
3
, so that p
1
= p
2
. Finally, use
1
2p
1
+
1
2p
1
+
1
p
1
to get p
1
= p
2
= 2, yielding x
pf
= (
1
4
,
1
4
,
1
2
,
1
2
).
Flows 1 and 2 use paths with price p
1
+p
2
= 4 and each have rate
1
4
.
Flows 3 and 4 use paths with price p
1
= p
2
= 2 and each have rate
1
2
.
9.1. Illustration of dynamic programming a stopping time problem
(a) Consider the game for a possible n +1 observations. After seeing X
1
, the player can either sop
and receive reward X
1
, or continue and receive expected reward V
n
. Thus, V
n+1
= E[X
1
V
n
] =
_

0
(x V
n
)e
x
dx =
_
Vn
0
V
n
e
x
dx +
_

Vn
xe
x
dx = V
n
+e
Vn
.
(b) The optimal policy is threshold type. For 1 k n1, the player should stop after observing
X
k
if X
k
V
nk
. The rule for n = 8 is pictured.
188
2
4 6 7 8 5 3 2 1
V
7
V
6
V
1
X
X
X
X
Stop!
1
2
3
4
1
(c)
n V
n
1 +
1
2
+ +
1
n
1 1.00000 1.00000
2 1.36788 1.50000
3 1.62253 1.83333
4 1.81993 2.08333
5 1.98196 2.28333
6 2.11976 2.45000
7 2.23982 2.59286
8 2.34630 2.71786
9 2.44202 2.82897
10 2.52901 2.92897
20 3.12883 3.59774
30 3.49753 3.99499
9.2. Comparison of four ways to share two servers
(a) System 1: Each of the two subsystems is an M/M/1 queue, so that N
1
is twice the mean number
in an M/M/1 queue with arrival rate and departure rate : N
1
=
2
1
.
System 2: Each of the two subsystems is G/M/1 where the interarrival distribution is the same as
the sum of two exponential random variables with parameter 2. We will analyze such a G/M/1
queue. The Laplace transform of the interarrival distribution is A

(s) = (
_

0
e
st
2e
2t
dt)
2
=
(
2
s+2
)
2
. We seek the solution in the range 0 < 1 of the equation = A

( ), or

3
2(1 + 2)
2
+ (1 + 4(1 + )) 4
2
= 0. Since = 1 is a solution, this equation can be
written as (1)(
2
(1+4)+4
2
) = 0 which has solutions = 1 and =
(1+4)

1+8
2
. Thus,
there is a solution [0, 1) if and only if 0 < 1, and it is given by =
1+4

1+8
2
. Therefore,
N
2
=
2
1
=
4
14+

1+8
.
System 3 The third system is an M/M/2 queueing system. The number in the system is a birth-
death Markov process with arrival rates
k
= 2 for all k 0 and death rates
k
= (k 2) for
k 1. Let =

. The usual solution method for birth-death processes yields that p


1
=
2

p
0
and
p
k
= 2
k
p
0
for k 1. The process is positive recurrent if and only if < 1, and for such we nd
p
0
= (1 + 2( +
2
+ )) =
1
1+
and N
3
=
2
1
2
, which is smaller than N
1
by a factor 1 +.
(b) For 0 < < 1, N
3
< N
4
< N
2
< N
1
. Here is the justication. First, we argue that N
3
< N
4
,
using the idea of stochastic domination. Indeed, let there be three independent Poisson streams: an
arrival stream A with rate 2, and for i = 1, 2 a potential departure stream D
i
with departure rate

i
. Consider systems 3 and 4 running, using these same three processes. If stream D
i
has a jump
at time t, then there is a departure from system 3 if queue i is not empty, and there is a departure
from system 4 if the entire system is not empty. By induction on the number of events (arrival or
potential departure events) we see that the number of customers in system 3 is always less than
or equal to the number of customers in system 4, and is occasionally strictly less. Thus, N
3
< N
4
.
We argue using dynamic programming that system 4, using send to the shorter queue, is using the
189
optimal dynamic routing policy. Thus, N
4
is less than N
1
and N
2
. Finally, we can compare the
formulas found above to see that N
1
> N
2
. ( It can be shown that as 1, (1 )N
1

2
3
,
(1 )N
2

2
3
, (1 )N
3
1, and (1 )N
4
1. )
9.3. A stochastic dynamic congestion control problem
We take the state space to be o = 0, 1, . . . , K, where a state represents the number of customers
in the system. Let , with > 0, denote the discount rate. The interevent times are exponentially
distributed with parameter = +
o
+, and the control set is given by U = [0, 1]. The one step
transition probabilities are given by (the quantities in the diagram should be divided by ):
o
! (1!u) ! (1!u) ! +!(1!u)
! +!
o
u ! +!
o
u ! +!
o
u ! +!
o
u ! +!
o
u
(1!u) !
. . . " "#1 "#2
0 1
2

! +!
o
u
! +! (1!u)
P(u) =
1

_
_
_
_
_
_
_
_
_
_
_
_
_
+(1 u)
o
+u
(1 u)
o
+u
(1 u)
o
+u
.
.
.
.
.
.
.
.
.
(1 u)
o
+u

o
+
_
_
_
_
_
_
_
_
_
_
_
_
_
.
The cost is given by
cost = E

t>0
_
cI
x
t
=K and an arrival occurs at time t
rI
.xt=1
_
e
t
= E
_

0
e
t
g(x
t
, u
t
)dt
r

where g(x, u) = c(
o
+ u)I
x=K
+ rI
x=0
. Here we decided that u can be nonzero even when
the system is full. It is optimal to take u
n
(K) = 0 however, because admitting customers when
the system is full incurs a cost with no change in state. Therefore, we could have assumed that
u
n
(K) = 0 and taken g(x, u) = c
o
I
x=K
+rI
x=0
. We will drop the constant term, and instead
take the cost to be
cost = E
_

0
e
t
g(x
t
, u
t
)dt
(b) For the discrete-time equivalent model, the one-step discount factor is =

+
and the value
update equations are given by
V
n+1
(x) = min
0u1
_
g(x, u) +

V
n
((x 1)
+
) +
o
V
n
((x + 1) K) +(1 u)V
n
(x) +uV
n
((x + 1) K)
_
In particular, for x = K,
V
n+1
(x) = min
0u1
_
c(
o
+u) +

V
n
(K 1) + (
o
+)V
n
(K)
_
190
so we see that indeed u = 0 is optimal when x = K. Thus, using g(x, u) = c
o
I
x=K
+ rI
x=0
.
we have that
V
n+1
(x) = g(x) +

[V
n
((x 1)
+
) +
o
V
n
((x + 1) K) +min V
n
(x), V
n
((x + 1) K)]
(c) When there are n steps-to-go and the current state is x, an optimal control is given by
u
n
(x) =
_
0 if V
n
(x) V
n
((x + 1) K)
1 if V
n
(x) > V
n
((x + 1) K)
(d) Let us prove that for each n 1 there exists a threshold
n
such that u
n
(x) = I
xn
. It is
enough to show that the following is true for each n 1:
(a) V
n
is convex, i.e. V
n
(1) V
n
(0) V
n
(2) V
n
(1) V
n
(K) V
n
(K 1)
(b)
r

V
n
(1) V
n
(0)
(c) V
n
(K) V
n
(K 1)
c

To complete the proof, we prove (a)-(c) by induction on n. The details are very similar to those
for the M
controlled
/M/1 example in the notes, and are omitted.
(e) For an exact formulation, we need to describe a state space. If FCFS service order is assumed, it
is not enough to know the number of customers of each type in the system, so that the state space
would be the set of sequences from the alphabet 1, 2 with length between 0 and K. We expect
the optimal control to have some monotonicity properties, but it would be hard to describe the
control because the state space is so complex. If instead, the service order were pure preemptive
resume priority to customers of one type, or processor sharing, then the state space could be taken
to be (n
1
, n
2
) : n
1
0, n
2
0, and n
1
+ n
2
K. We expect the optimal control to have a
switching curve structure.
9.4. Control dependent cost
(a)
E
x
_
tn
0
g(X(t), u(t))e
t
dt =
n1

k=0
E
x
_
t
k+1
t
k
g(X(t), u(t))e
t
dt
=
n1

k=0
E
x
_
t
k+1
t
k
g(X(t
k
), w
k
)e
t
dt
=
n1

k=0
E
x
[g(X(t
k
), w
k
)
_
t
k+1
t
k
e
t
dt]
=
n1

k=0
E
x
[g(X(t
k
), w
k
)]E
x
[
_
t
k+1
t
k
e
t
dt]
=
1

E
x
n1

k=0

k
g(X(t
k
), w
k
)
where we used the fact that
E
x
[
_
t
k+1
t
k
e
t
dt] = E
x
[
1

(e
t
k
e
t
k+1
)] =
1

[
k

k+1
] =
(1 )
k

.
191
(b) The only change is that g should be added to the cost before the minimization over u, yielding:
V
n+1
(x) = inf
u|
_
g(x, u) +

y
p
xy
(u)V
n
(y)
_
.
9.5. Optimal control of a server
(a) We use o = Z
2
+
, where state x = (x
1
, x
2
) denotes x
i
customers in station i, for i = 1, 2. Let
A
1
(x
1
, x
2
) = (x
1
+ 1, x
2
) and D
1
(x
1
, x
2
) = ((x
1
1)
+
, x
2
), and dene A
2
and D
2
similarly. Then
p
x,y
(u) =
1

i=1
_

i
I
y=A
i
(x)
+m
i
(u)I
y=D
i
(x)
_
The CDF F corresponds to the exponential distribution with parameter =
1
+
2
+
0
+
1
+
2
,
and the instantaneous cost function is g(x) = x
1
+x
2
.
(b) The backwards equation of dynamic programming becomes
V
n+1
(x) = g(x) + min
0u1

i=1
(
i
V
n
(A
i
(x)) +m
i
(u)V
n
(D
i
(x)))
or, after plugging in the optimal value for u,
V
n+1
(x) = g(x) +

_
2

i=1
(
i
V
n
(A
i
(x)) +
i
(u)V
n
(D
i
(x))) +
0
minV
n
(D
1
(x)), V
n
(D
2
(x))
_
with the intial condition V
0
0.
(c)
u

n
(x) =
_
1 if V
n
(D
1
x) V
2
(D
2
x)
0 else.
(10.20)
(d) We conjecture that the service rate
0
should be allocated to the station with the longer queue.
That is, u

n
(x) = I
x
1
x
2

. Equivalently, V (x
1
, x
2
) V (x
1
1, x
2
+ 1) whenever 0 x
1
x
2
and
V (x
1
, x
2
) V (x
1
+ 1, x
2
1) whenever 0 x
2
x
1
.
9.6. A dynamic server rate control problem with switching costs
(a) We use the state space o = (l, ) : l Z
+
, H, L, where for a given state x = (l, ),
l denotes the number of customers in the system and denotes the state of the server. Let
= +
H
, which is the maximum event rate that we shall use. (Another natural choice would
be = +
L
+
H
, and then the self loop transition probabilities would be increased in the one
step transition probabilities.) We take the control values u = (u
a
, u
d
) to be in the set | = [0, 1]
2
,
where u
a
is the probability the server is in the high state after the next event, given the next event
is an arrival, and u
d
is the probability the server is in the high state after the next event, given the
next event is a departure. The following diagram gives the transition probabilities (the quantities
shown should be divided by ).
2
2
Variations of these equations are possible, depending on the exact assumptions. We chose to allow a server state
change at potential departure times, even if the potential departures are not used.
192
d H

L
!

L
u
d
(1!u ) !
a
(1!u )
d

L
a
(1!u )
!

H
(1!u )
d

H
u
d
! u
a

H

L
!

L
u
d
(1!u ) !
a
(1!u )
d

L
a
(1!u )
!

H
u
d
! u
a

H
u
d

H

L
!

L
u
d

H
(1!u )
d
! u
a
! u
a

H
(1!u )
d
0,H 1,H 2,H
0,L 1,L 2,L
u
The total costs can be described as follows.
cost due to use of high service rate and queueing = E
x
__
T
0
c
H
I
t=H
+c
W
l
t
e
t
dt
_
cost due to switching = E
x
_
_

tT
c
S
I

t
=L,t=H
e
t
_
_
= E
x
__
T
0
c
S
I

t
=L
[u
a
(t) +
L
u
d
(t)]e
t
dt
_
Thus, the cost is captured by taking
g(x, u) = g((l, ), (u
a
, u
d
)) = c
H
I
=H
+c
W
l + c
S
I
=L
[u
a
+
L
u
d
]
(b) We rst consider V
n+1
(x) for states x = (l, ) with = H. Since g((l, H), u) = c
H
+c
W
l, which
doesnt depend on u, the backwards recursion is:
V
n+1
(l, H) = c
H
+c
W
l + min
u[0,1]
2
_
u
a

V
n
(l + 1, H) +
(1 u
a
)

V
n
(l + 1, L)
+

H
u
d

V
n
((l 1)
+
, H) +

H
(1 u
d
)

V
n
((l 1)
+
, L)
_
The analogous equation for = L is
V
n+1
(l, L) = min
u[0,1]
2
c
W
l +c
S
u
a
+
L
u
d
+
_
u
a

V
n
(l + 1, H) +
(1 u
a
)

V
n
(l + 1, L)
+

L
u
d

V
n
((l 1)
+
, H) +

L
(1 u
d
)

V
n
((l 1)
+
, L) +

H

L

V
n
(l, L)
_
Solving for u in the above equations and setting =
c
S

yields the following simpler version of the


193
dynamic programming backwards recursion:
V
n+1
(l, H) = c
H
+c
W
l +

minV
n
(l + 1, H), V
n
(l + 1, L) (10.21)
+

H

minV
n
((l 1)
+
, H), V
n
((l 1)
+
, L)
V
n+1
(l, L) = c
W
l +

min +V
n
(l + 1, H), V
n
(l + 1, L) (10.22)
+

L

min +V
n
((l 1)
+
, H), V
n
((l 1)
+
, L) +
(
H

L
)

V
n
(l, L)
The optimal controls can be succinctly described as follows. If the current switch state is , and
if there will be l
t
customers in the system after the next arrival or potential departure, then the
optimal server state after such arrival or potential departure is given by:
If = H then: V
n
(l
t
, L) > V
n
(l
t
, H) new server state H
If = L then: V
n
(l
t
, L) > V
n
(l
t
, H) + new server state H
(10.23)
The function V
n
(l, ) represents the cost-to-go for initial state (l, ), with the understanding
that it is not possible to switch the server state until the rst event time. Let W
n
(l, ) denote the
cost-to-go, given the initial state is (l, ), but assuming that the server state can be changed at
time zero. Then the dynamic programming equations become:
3
W
n
(l, ) = minV
n
(l, L), V
n
(l, H) +I
=L
(10.24)
V
n+1
(l, ) = c
H
I
=H
+c
W
l +

W
n
(l + 1, ) +

L

W
n
((l 1)
+
, )
+
(
H

L
)

_
I
=H
W
n
(l 1, H) +I
=L
W
n
(l, L)
_
(10.25)
with the initial conditions V
0
0 and W
n
0.
In the special case that there is no switching cost, = 0. In that case, W
n
(l, ) does not depend
on , so we write it as W
n
(l). Then the dynamic programming equations become:
W
n
(l) = minV
n
(l, L), V
n
(l, H) (10.26)
V
n+1
(l, ) = c
H
I
=H
+c
W
l +

W
n
(l + 1) +

L

W
n
((l 1)
+
)
+
(
H

L
)

_
I
=H
W
n
(l 1) +I
=L
W
n
(l)
_
(10.27)
In the case of no switching costs, it is rather simple to show by induction on n that W
n
is a convex,
nondecreasing function on Z
+
and V
n
satises properties 1-5 in the original solutions. As noted in
the original solutions, property 3 for V
n
implies the conjectured threshold structure.
(c) We conjecture that the optimal control has a threshold behavior, because when there are
more customers in the system, it is more valuable to quickly serve the customer in the server,
thereby reducing the waiting times of all remaining customers. Since there is a switching cost,
3
In this version, we allow a change of server state after any event. Thus, the functions Vn are slightly dierent
than the ones above.
194
however, it is better to be a bit reluctant to switch, because excessive switching is costly. This
suggests that the threshold for switching from H to L should be smaller than the threshold for
switching from L to H. Specically, we expect there are two thresholds,
H
and
L
, with
H

L
,
so that the optimal control law (10.23) becomes
l
t
>

new server state H (10.28)


This leads to the nonzero transition probabilities shown. The transient states are included in the
rst diagram and are omitted in the second.
H
. . .
. . .
!
H
!
L
. . .
L
H
L
We havent found a proof of the conjectured threshold structure in general, but we here sketch a
proof in the case of zero switching cost ( = 0). Consider the following properties of a function V
on Z
+
L, H:
1. V (l, ) is nondecreasing in l for = L and for = H.
2. V (l, ) is convex in l for = L and for = H.
3. V (l + 1, L) V (l + 1, H) V (l, L) +V (l, H) 0 for l 0.
4. V (l + 2, H) V (l + 1, H) V (l + 1, L) +V (l, L) 0 for l 0.
5. V (l + 2, L) V (l + 1, L) V (l + 1, H) +V (l, H) 0 for l 0.
We write property 1.H to denote property 1 for = 1. Properties 1.L, 2.H, and 2.L are dened
analogously. The following gure gives a graphical representation of the ve properties.
+ !
+ 2! + !
+ !
+ ! +
!
! !
2!
1.L 1.H 2.L 2.H 3 4 5
H
L
Property:
! +
+ +
+ + +
For each property, the gure indicates the type of linear combinations of values of V which should
be nonnegative. The collection of properties 1-5 are nearly symmetric in H and L. Property 5 is
obtained by swapping L and H in property 4. Thus, property 3 is the only part of properties 1-5
that is not symmetric in L and H.
Properties 3 and 4 imply property 2 (both 2.L and 2.H). This can be seen graphically by sliding
the diagram for property 4 to the left to overlap the diagram for property 3, and adding. Similarly,
properties 2.L and 3 imply property 5. Thus, properties 3 and 4 together imply properties 2 and
5. So to prove a function has properties 1-5, it suces to prove it has properties 1, 3, and 4.
Property 3 is the one connected to the threshold structure. Another way to state property 3
is that V (l, H) V (l.L) is nonincreasing in l. That is, as l increases, given there are l customers
in the system, it becomes increasingly preferable to be in state H. More specically, if V
n
satises
property 3, and if
L
= maxl : V (l, H)V (l, L) 0 and
H
= maxl : V (l, H)+ V (l, L) 0,
then (10.23) is equivalent to (10.28).
It remains to show that V
n
has properties 1-5 for all n 0. To do so, it can be proved by
induction on n that V
n
has properties 1-5 for all n 0, and W
n
is convex and nondecreasing
on Z
+
. For the base case, observe that the function V
0
given by V
0
0 has properties 1-5.
195
For the general induction step, it can be easily shown that if V
n
has properties 1-5, then W
n
is
convex, nondecreasing. And given that W
n
is convex, nondecreasing, it can be shown that V
n+1
has properties 1-5. As mentioned above, it suces to establish that V
n+1
has properties 1,3, and
4. Unfortunately, this approach doesnt seem to work in case > 0.
(d) First, if c
S
= 0 then the cost-to-go does not depend on the server state, and in particular
the thresholds should be equal:
L
=
H
. Suppose a single customer is served, with no other
customers waiting or arriving. If the server is put into the high state, and if t
H
denotes the time the
customer departs, then the average cost is E[
_

0
e
t
dt](c
H
+c
W
) = E[
1e
t
H

](c
H
+c
W
) =
c
H
+c
W
+
H
.
Similarly, if the server is put into the low state during the service interval, then the cost is
c
W
+
L
.
The given inequality implies that it costs less to use the high rate, even if there is only one customer
in the system over all time. But using the high rate for a given server helps decrease the costs for
any other customers that might be in the system, so that it is optimal to always use the high rate
(when the system is not empty), under the conditions given in the problem.