QUEUEING THEORY WITH APPLICATIONS TO PACKET TELECOMMUNICATION: SOLUTION MANUAL
JOHN N. DAIGLE
Prof. of Electrical Engineering
The University of Mississippi
University, MS 38677
Exercise 1.1 Assume values of x̃ and t̃ are drawn from truncated geometric distributions. In particular, let P {x̃ = n} = 0 and P {t̃ = n} = 0
except for 1 ≤ n ≤ 10, and let P {x̃ = n} = αpx (1 − px )n and
P {t̃ = n} = βpt (1 − pt )n for 1 ≤ n ≤ 10 with px = 0.292578 and
pt = 0.14358.
1. Using your favorite programming language or a spreadsheet, generate a
sequence of 100 random variates each for x̃ and t̃.
2. Plot ũ(t) as a function of t, compute z̃n from (1.3) and w̃n from (1.2) for
1 ≤ n ≤ 20.
3. Compute τ̃n for 1 ≤ n ≤ 20 and verify that w̃n can be obtained from
z̃(t).
4. Compute z̃n from (1.3) and w̃n from (1.2) for 1 ≤ n ≤ 100 and compute
the average waiting times for the 100 customers.
Solution Part 1. We must first determine the constants α and β. Since the probabilities must sum to unity, we find

Σ_{n=1}^{10} P {x̃ = n} = Σ_{n=1}^{10} P {t̃ = n} = 1.

Thus, using the formula for the partial sum of a geometric series, we find

α[(1 − px) − (1 − px)^{11}] = β[(1 − pt) − (1 − pt)^{11}] = 1.
Now, in general,

P {x̃ ≤ n} = Σ_{i=1}^{n} P {x̃ = i} = Σ_{i=1}^{n} α px (1 − px)^{i} = α[(1 − px) − (1 − px)^{n+1}].

We may therefore select random variates from a uniform distribution, substitute each of them for P {x̃ ≤ n} in the above formula, solve for n, and round up to obtain the variates for x̃. Similarly, by solving P {t̃ ≤ n} = β[(1 − pt) − (1 − pt)^{n+1}] for n, we may find the variates for t̃ from

n = ln[(1 − pt) − (1/β) P {t̃ ≤ n}] / ln(1 − pt) − 1.
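As a concrete illustration, the inversion just described can be sketched in code. This is not the author's program; the function and variable names are invented, and Python is used for concreteness:

```python
import math
import random

def truncated_geometric_variate(p, u):
    """Invert the truncated-geometric CDF alpha*[(1-p) - (1-p)**(n+1)]
    at the uniform variate u, then round up to an integer in 1..10."""
    alpha = 1.0 / ((1 - p) - (1 - p) ** 11)   # normalizing constant
    n = math.log((1 - p) - u / alpha) / math.log(1 - p) - 1
    return min(10, max(1, math.ceil(n)))

random.seed(1)
px, pt = 0.292578, 0.14358
xs = [truncated_geometric_variate(px, random.random()) for _ in range(100)]
ts = [truncated_geometric_variate(pt, random.random()) for _ in range(100)]
```

Feeding the CDF value for any n back into the function recovers n, which is a convenient sanity check on the inversion.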
By drawing samples from a uniform distribution and using those values for P {x̃ ≤ n} and P {t̃ ≤ n} in the above formulas, one can obtain the variates for xn and tn for n = 1, 2, · · · , 100, and then apply the formulas to obtain the following table:
n xn tn zn wn τn
1 3 6 2 0 6
2 5 1 -1 2 7
3 1 6 -4 1 13
4 1 5 -1 0 18
5 7 2 4 0 20
6 2 3 -1 4 23
7 2 3 -3 3 26
8 4 5 3 0 31
9 2 1 1 3 32
10 1 1 0 4 33
11 2 1 -1 4 34
12 1 3 -3 3 37
13 3 4 -3 0 41
14 4 6 2 0 47
15 7 2 3 2 49
16 2 4 -2 5 53
17 5 4 1 3 57
18 1 4 -6 4 61
19 1 7 -6 0 68
20 4 7 -1 0 75
Terminology and Examples 3
21 1 5 -6 0 80
22 5 7 -1 0 87
23 5 6 2 0 93
24 2 3 -8 2 96
25 4 10 -4 0 106
26 1 8 -1 0 114
27 1 2 -2 0 116
28 4 3 2 0 119
29 3 2 -3 2 121
30 1 6 0 0 127
31 2 1 -2 0 128
32 2 4 -6 0 132
33 1 8 -2 0 140
34 9 3 2 0 143
35 3 7 -6 2 150
36 2 9 -3 0 159
37 5 5 3 0 164
38 3 2 2 3 166
39 4 1 3 5 167
40 2 1 -3 8 168
41 6 5 3 5 173
42 3 3 0 8 176
43 2 3 -2 8 179
44 1 4 -9 6 183
45 5 10 -2 0 193
46 2 7 -8 0 200
47 1 10 -1 0 210
48 1 2 0 0 212
49 2 1 -1 0 213
50 1 3 -4 0 216
51 10 5 8 0 221
52 2 2 -2 8 223
53 3 4 1 6 227
54 1 2 -1 7 229
55 5 2 2 6 231
56 2 3 1 8 234
57 2 1 -1 9 235
58 1 3 -1 8 238
59 1 2 -1 7 240
60 5 2 1 6 242
61 2 4 0 7 246
62 3 2 1 7 248
63 1 2 -1 8 250
64 1 2 -4 7 252
65 5 5 -5 3 257
66 3 10 -2 0 267
67 2 5 0 0 272
68 2 2 -1 0 274
69 1 3 -2 0 277
70 4 3 -2 0 280
71 1 6 -2 0 286
72 4 3 -1 0 289
73 5 5 -5 0 294
74 1 10 -3 0 304
75 1 4 -9 0 308
76 1 10 0 0 318
77 4 1 -4 0 319
78 4 8 -4 0 327
79 5 8 -1 0 335
80 4 6 3 0 341
81 6 1 5 3 342
82 2 1 -1 8 343
83 3 3 2 7 346
84 1 1 -3 9 347
85 6 4 0 6 351
86 9 6 2 6 357
87 3 7 -3 8 364
88 7 6 3 5 370
89 3 4 2 8 374
90 4 1 2 10 375
91 2 2 -1 12 377
92 3 3 2 11 380
93 5 1 1 13 381
94 3 4 2 14 385
95 3 1 -4 16 386
96 8 7 4 12 393
97 2 4 1 16 397
98 1 1 0 17 398
99 4 1 -4 17 399
100 2 8 0 13 407
Solution Parts 2 and 3. The waiting time of customer n is the unfinished work in the system just prior to that customer's arrival; that is, wn = u(τn−).
Solution Part 4. The average waiting time is shown at the bottom of the
above table. It is found by simply summing the waiting times and dividing by
the number of customers.
[Figure: ũ(t), the unfinished work in the system, plotted as a function of time for 0 ≤ t ≤ 80; u(t) ranges from 0 to 10.]
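Equations (1.2) and (1.3) are not reproduced in this excerpt; judging from the tabulated values, they amount to the recursions w1 = 0, zn = xn − tn+1, wn+1 = (wn + zn)+, with arrival epochs τn = t1 + · · · + tn. Treating that reading as an assumption, the computation might be sketched as:

```python
def waiting_times(x, t):
    """From service times x and interarrival times t, compute
    z[i] = x[i] - t[i+1], w[i+1] = max(w[i] + z[i], 0) with w[0] = 0,
    and arrival epochs tau[i] = t[0] + ... + t[i]."""
    n = len(x)
    w = [0] * n
    z = [0] * (n - 1)
    for i in range(n - 1):
        z[i] = x[i] - t[i + 1]
        w[i + 1] = max(w[i] + z[i], 0)
    tau, s = [], 0
    for ti in t:
        s += ti
        tau.append(s)
    return z, w, tau

# first ten rows of the table above
x = [3, 5, 1, 1, 7, 2, 2, 4, 2, 1]
t = [6, 1, 6, 5, 2, 3, 3, 5, 1, 1]
z, w, tau = waiting_times(x, t)
print(w)    # reproduces the wn column: [0, 2, 1, 0, 0, 4, 3, 0, 3, 4]
print(tau)  # reproduces the τn column: [6, 7, 13, 18, 20, 23, 26, 31, 32, 33]
```

The average waiting time for Part 4 is then simply sum(w) / len(w) over all 100 customers.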
Exercise 1.2 Assume values of x̃ and t̃ are drawn from truncated geometric distributions as given in Exercise 1.1.
1. Using the data obtained in Exercise 1.1, determine the lengths of all busy
periods that occur during the interval.
2. Determine the average length of the busy period.
3. Compare the average length of the busy period obtained in the previous
step to the average waiting time computed in Exercise 1.1. Based on the
results of this comparison, speculate about whether or not the average
length of the busy period and the average waiting time are related.
Solution Part 1. A busy period starts whenever a customer waits zero time.
The length of the busy period is the total service time accumulated up to the
point in time at which the next busy period starts, that is, whenever some later
arriving customer has a waiting time of zero. From the table of values given in
the previous problem’s solution, we see that a busy period is in progress at the
time of arrival of C100. Therefore we must decide whether or not to count that
busy period. We arbitrarily decide not to count that busy period. The lengths
of the busy periods are given in the following table:
Busy period  Length
1 9
2 1
3 11
4 10
5 3
6 19
7 1
8 4
9 1
10 5
11 7
12 4
13 1
14 1
15 7
16 1
17 2
18 2
19 1
20 12
21 2
22 26
23 5
24 2
25 1
26 1
27 2
28 1
29 44
30 3
31 2
32 2
33 1
34 4
35 1
36 4
37 5
38 1
39 1
40 1
41 4
42 4
43 5
Average 5.209302326
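The grouping rule described in Part 1 — start a new busy period at each arrival that finds the system empty (wn = 0), sum service times until the next such arrival, and discard the final, possibly unfinished, busy period — might be sketched as follows (Python; the array names are assumptions):

```python
def busy_periods(x, w):
    """Sum service times x within groups delimited by arrivals that
    find the system empty (w == 0); the final group is discarded
    because it may still be in progress."""
    periods, current = [], None
    for xi, wi in zip(x, w):
        if wi == 0:                  # a new busy period begins here
            if current is not None:
                periods.append(current)
            current = 0
        if current is not None:
            current += xi
    return periods                   # last group is never appended

# first ten customers from the Exercise 1.1 table
x = [3, 5, 1, 1, 7, 2, 2, 4, 2, 1]
w = [0, 2, 1, 0, 0, 4, 3, 0, 3, 4]
print(busy_periods(x, w))  # -> [9, 1, 11], the first three rows above
```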
Exercise 2.1 For the experiment described in Example 2.1, specify all
of the event probabilities.
Solution. For the experiment described in Example 2.1, the events are as fol-
lows: ∅, {N }, {S}, {C}, {T }, {S, N }, {C, N }, {T, N }, {C, S}, {T, S},
{T, C}, {C, S, N }, {T, S, N }, {T, C, N }, {T, C, S}, {T, C, S, N }. Since
each of the events is a union of elementary events, which are mutually ex-
clusive, the probability of each event is the sum of the probabilities of its con-
stituent events. For example,
P {T, C, S, N } = P {T }+P {C}+P {S}+P {N } = 0.11+0.66+0.19+0.04 = 1.0.
By proceeding in this manner, we find the results given in the following table:
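The table referred to above can be generated mechanically, since each event probability is the sum of its constituent elementary probabilities. A sketch, using the elementary probabilities quoted in the solution:

```python
from itertools import combinations

# elementary-event probabilities from Example 2.1, as quoted above
p = {'N': 0.04, 'S': 0.19, 'C': 0.66, 'T': 0.11}

outcomes = sorted(p)
events = {}
for r in range(len(outcomes) + 1):
    for subset in combinations(outcomes, r):
        events[subset] = sum(p[s] for s in subset)

for ev in sorted(events, key=lambda e: (len(e), e)):
    print(set(ev) if ev else '∅', round(events[ev], 10))
```

The 16 printed lines range from P {∅} = 0 up to P {T, C, S, N} = 1.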
Exercise 2.2 For the experiment described in Example 2.1, there are a total of four possible outcomes and the number of events is 16. Show that it is always true that card (Ω) = 2^{card(S)}, where card (A) denotes the cardinality of the set A, which, in turn, is the number of elements of A. [Hint: The events can be put in one-to-one correspondence with the card (S)-bit binary numbers.]
Solution. Every event in the event space consists of a union of the elementary
events of the experiment. Thus, every event is characterized by the presence or
absence of elementary events. Suppose an experiment has card (S) elementary events; then these events can be ordered and put in one-to-one correspondence with the digits of a card (S)-digit binary number, say,
P {ω1 } = P {s1 } + P {s2 } + P {s3 } = 1/21 + 2/21 + 0 = 1/7.
Similarly, for experiment B, we find P {si } = i/15 for si ∈ SB and P {si } = 0 otherwise. Then, for experiment B, we have

P {ω1 } = P {s1 } + P {s2 } + P {s3 } = 0/15 + 0/15 + 3/15 = 1/5.
We recognize these probabilities to be the same as the conditional probabil-
ities previously computed.
Review of Random Processes 11
P {ω2 |ω1 } = P {ω1 } P {ω2 } / P {ω1 } = P {ω2 }.
Since
P {ω2 |ω1 } = P {ω2 }
is the definition of ω2 being statistically independent of ω1 , the result follows.
Exercise 2.5 Suppose ω1 , ω2 ∈ Ω but P {ω1 } = 0 or P {ω2 } = 0.
Discuss the concept of independence between ω1 and ω2 .
Solution. We assume ω1 and ω2 are two different events. Suppose P {ω1 } = 0 but P {ω2 } ≠ 0. Then

P {ω1 |ω2 } = P {ω1 ω2 } / P {ω2 }.
We then find,
dFc̃ (x) = Σ_{c∈C} P {c̃ = c} δ(x − c) dx
        = Σ_{c∈{56.48, 70.07, 75.29, 84.35, 96.75, 99.99, 104.54, 132.22}} P {c̃ = c} δ(x − c) dx.   (2.1)

Thus,

dFc̃ (x) = (8/36) δ(x − 56.48) dx + (7/36) δ(x − 70.07) dx + (6/36) δ(x − 75.29) dx
         + (5/36) δ(x − 84.35) dx + (4/36) δ(x − 96.75) dx + (3/36) δ(x − 99.99) dx
         + (2/36) δ(x − 104.54) dx + (1/36) δ(x − 132.22) dx.
Exercise 2.7 Find E[c̃] and var (c̃) for the random variable c̃ defined in
Example 2.4.
Solution. By definition,
E [c̃] = Σ_{c∈C} c P {c̃ = c},

var (c̃) = E[c̃²] − E²[c̃],

and

E[c̃²] = Σ_{c∈C} c² P {c̃ = c}.
From Example 2.4 and the previous problem, we have the following table:
c P {c̃ = c}
56.48 0.222222222
70.07 0.194444444
75.29 0.166666667
84.35 0.138888889
96.75 0.111111111
99.99 0.083333333
104.54 0.055555556
132.22 0.027777778
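From the table (whose probabilities are k/36 for k = 8 down to 1), the requested moments follow directly. A quick numerical sketch, not the author's computation:

```python
costs = [56.48, 70.07, 75.29, 84.35, 96.75, 99.99, 104.54, 132.22]
probs = [k / 36 for k in range(8, 0, -1)]    # 8/36, 7/36, ..., 1/36

mean = sum(c * p for c, p in zip(costs, probs))
second_moment = sum(c * c * p for c, p in zip(costs, probs))
variance = second_moment - mean ** 2
print(round(mean, 4), round(variance, 4))    # E[c] ≈ 79.0025, var(c) ≈ 321.1127
```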
Exercise 2.8 Let X1 and X2 denote the support sets for x̃1 and x̃2 ,
respectively. Specialize (2.12) where X1 = {5, 6, . . . , 14} and X2 =
{11, 12, . . . , 22}.
Solution. Equation (2.12) is as follows:

P {x̃1 + x̃2 = n} = Σ_{i=0}^{n} P {x̃2 = n − i | x̃1 = i} P {x̃1 = i}.

First we note that the smallest possible value of x̃1 + x̃2 is 5 + 11 = 16, and the largest possible value x̃1 + x̃2 can take on is 14 + 22 = 36. We also see that x̃1 + x̃2 can take on all values between 16 and 36. Thus, we find immediately that the support set for x̃1 + x̃2 is {16, 17, . . . , 36}. We also see that since x̃1 ranges only over the integers from 5 to 14, P {x̃1 = i} > 0 only for i ∈ {5, 6, . . . , 14}.
Similarly, P {x̃2 = n − i} > 0 only if n − i ≥ 11 and n − i ≤ 22; that is, only if i ≤ n − 11 and i ≥ n − 22. Hence, for both factors to be positive, we need i ≥ max {5, n − 22} and i ≤ min {14, n − 11}. We therefore have the following result. For n ∉ {16, 17, . . . , 36}, P {x̃1 + x̃2 = n} = 0, and for n ∈ {16, 17, . . . , 36},

P {x̃1 + x̃2 = n} = Σ_{i=max {5, n−22}}^{min {14, n−11}} P {x̃2 = n − i | x̃1 = i} P {x̃1 = i}.

For example,

P {x̃1 + x̃2 = 18} = Σ_{i=max {5, 18−22}}^{min {14, 18−11}} P {x̃2 = 18 − i | x̃1 = i} P {x̃1 = i}
                  = Σ_{i=5}^{7} P {x̃2 = 18 − i | x̃1 = i} P {x̃1 = i}.
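The limits of summation can be checked numerically by brute-force convolution. The pmfs of x̃1 and x̃2 are not specified in the exercise, so uniform masses on the two supports are assumed here purely for illustration:

```python
def convolve_pmfs(p1, p2):
    """pmf of x1 + x2 for independent x1 ~ p1 and x2 ~ p2,
    with pmfs given as dicts {support point: probability}."""
    out = {}
    for i, pi in p1.items():
        for j, pj in p2.items():
            out[i + j] = out.get(i + j, 0.0) + pi * pj
    return out

p1 = {i: 1 / 10 for i in range(5, 15)}     # X1 = {5, ..., 14}, uniform (assumed)
p2 = {j: 1 / 12 for j in range(11, 23)}    # X2 = {11, ..., 22}, uniform (assumed)
psum = convolve_pmfs(p1, p2)
print(min(psum), max(psum))   # -> 16 36, the support bounds derived above
```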
But
P {x̃1 = i∆X} ≈ fx̃1 (i∆X)∆X,
so,

P {x̃1 + x̃2 ≤ x} = Σ_{i=1}^{N} P {x̃1 + x̃2 ≤ x | x̃1 = i∆X} fx̃1 (i∆X) ∆X
                 = Σ_{i=1}^{N} P {x̃2 ≤ x − i∆X | x̃1 = i∆X} fx̃1 (i∆X) ∆X.
In the limit, we can pass i∆X to the continuous variable y and ∆X to dy. The sum then passes to the integral over (0, x], which yields the result

P {x̃1 + x̃2 ≤ x} = ∫_0^x P {x̃2 ≤ x − y | x̃1 = y} fx̃1 (y) dy,

or

Fx̃1 +x̃2 (x) = ∫_0^x Fx̃2 |x̃1 (x − y | y) fx̃1 (y) dy.

Differentiating both sides with respect to x then yields

fx̃1 +x̃2 (x) = ∫_0^x fx̃2 |x̃1 (x − y | y) fx̃1 (y) dy.
In order to prove the lemma, we simply show that the exponential distribution
satisfies the definition of memoryless; that is, we compute P {x̃ > α + β|x̃ >
β} and see whether this quantity is equal to P {x̃ > α}. From the definition of
conditional probability, we find
P {x̃ > α + β, x̃ > β}
P {x̃ > α + β|x̃ > β} = .
P {x̃ > β}
But the joint probability of the numerator is just P {x̃ > α + β}, and for the
exponential distribution is
Also,
P {x̃ > β} = e−λβ .
Review of Random Processes 15
Thus,
e−λ(α+β)
P {x̃ > α + β|x̃ > β} = = e−λα P {x̃ > α}.
e−λβ
Thus, the definition of memoryless is satisfied and the conclusion follows.
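A short Monte Carlo experiment illustrates the memoryless property numerically (a sketch; the values of λ, α, and β are arbitrary):

```python
import random

random.seed(0)
lam, a, b = 1.5, 0.7, 1.2
samples = [random.expovariate(lam) for _ in range(200000)]

# unconditional tail probability P{x > a}
p_uncond = sum(x > a for x in samples) / len(samples)
# conditional tail probability P{x > a + b | x > b}
tail = [x for x in samples if x > b]
p_cond = sum(x > a + b for x in tail) / len(tail)
print(p_uncond, p_cond)   # both near e**(-lam*a) ≈ 0.3499
```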
Exercise 2.11 Prove Lemma 2.2. [Hint: Start with rational arguments.
Extend to the real line using a continuity argument. The proof is given
in Feller, [1968] pp. 458 - 460, but it is strongly recommended that the
exercise be attempted without going to the reference.]
Solution. We first prove the proposition that g(n/m) = g^n (1/m) for arbitrary positive integers n and m. Let T denote the truth set for the proposition. Trivially, 1 ∈ T , so assume (n − 1) ∈ T . Then

g(n/m) = g( Σ_{j=1}^{n} 1/m ) = g( Σ_{j=1}^{n−1} 1/m + 1/m ).

But, by hypothesis of Lemma 2.2, g(t + s) = g(t)g(s) ∀ s, t > 0. Thus,

g(n/m) = g( Σ_{j=1}^{n−1} 1/m ) g(1/m) = g^{n−1}(1/m) g(1/m) = g^n (1/m),

where the middle equality follows because (n − 1) ∈ T . Hence n ∈ T , and the proposition holds by induction.
Since the rationals are dense in the reals, each real number is the limit of a
sequence of rational numbers. Further, g is a right continuous function so that
by analytic continuity,
Exercise 2.12 Repeat Example 2.5, assuming all students have a deter-
ministic holding time of one unit. How do the results compare? Would
an exponential assumption on service-time give an adequate explanation of
system performance if the service-time is really deterministic?
Solution. P {Alice before Charlie} must be zero. We see this as follows:
Alice, Bob and Charlie will use the phone exactly one time unit each. Even
if Bob finishes his call at precisely the moment Alice walks up, Charlie has
already begun his. So, Charlie’s time remaining is strictly less than one, while
Alice’s time to completion is exactly one in the best case. Thus, it is impossible
for Alice to finish her call before Charlie. Since assuming exponential hold-
ing times results in P {Alice before Charlie} = 1/4, exponentiality is not an
appropriate assumption in this case.
Exercise 2.13 Prove Theorem 2.2. Theorem 2.2. Let x̃ be a nonnegative random variable with distribution Fx̃ (x), and let Fx̃∗ (s) be the Laplace-Stieltjes transform of x̃. Then

E[x̃^n] = (−1)^n (d^n/ds^n) Fx̃∗ (s) |_{s=0}.
Let T be the truth set for the proposition. By Definition 2.8, Fx̃∗ (s) = E[e^{−sx̃}], so that

(d/ds) Fx̃∗ (s) = ∫_0^∞ (−x) e^{−sx} dFx̃ (x),
and so 1 ∈ T . Now assume that n ∈ T . Then
(d^{n+1}/ds^{n+1}) Fx̃∗ (s) = (d/ds) [ (d^n/ds^n) Fx̃∗ (s) ]
                           = (d/ds) [ (−1)^n ∫_0^∞ x^n e^{−sx} dFx̃ (x) ]
                           = (−1)^{n+1} ∫_0^∞ x^{n+1} e^{−sx} dFx̃ (x).   (2.2)
Thus n + 1 ∈ T , and upon setting s = 0,

(−1)^n (d^n/ds^n) Fx̃∗ (s) |_{s=0} = ∫_0^∞ x^n e^{−sx} dFx̃ (x) |_{s=0}
                                  = ∫_0^∞ x^n dFx̃ (x)
                                  = E[x̃^n].   (2.3)
where, by Exercise 2.5, Fx̃∗ (s) = λ/(λ + s). Then

E[x̃] = (−1) (d/ds) [λ/(λ + s)] |_{s=0} = (−1) [ −λ (λ + s)^{−2} ] |_{s=0} = 1/λ,

and

E[x̃²] = (−1)² (d²/ds²) [λ/(λ + s)] |_{s=0} = [ 2λ (λ + s)^{−3} ] |_{s=0} = 2/λ².

Hence var (x̃) = 2/λ² − (1/λ)² = 1/λ². Note that for the exponential random variable, the mean and the standard deviation are equal.
Exercise 2.17 Let x̃ and ỹ be independent exponentially distributed ran-
dom variables with parameters α and β, respectively.
1. Find the distribution of z̃ = min{x̃, ỹ}. [Hint: Note that z̃ = min{x̃, ỹ}
and z̃ > z means x̃ > z and ỹ > z.]
2. Find P {x̃ < ỹ}.
3. Show that the conditional distribution Fz̃|x̃<ỹ (z) = Fz̃ (z).
Solution.
1. First, note that P {z̃ ≤ z} = 1 − P {z̃ > z}. As noted in the hint, z̃ = min{x̃, ỹ} and z̃ > z means x̃ > z and ỹ > z. Therefore, by independence, P {z̃ > z} = P {x̃ > z} P {ỹ > z} = e^{−αz} e^{−βz}, so that Fz̃ (z) = 1 − e^{−(α+β)z}; that is, z̃ is exponentially distributed with parameter α + β.
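Parts 1 and 2 can be checked by simulation; in the sketch below the rates α = 1 and β = 2 are arbitrary:

```python
import random

random.seed(2)
alpha, beta, n = 1.0, 2.0, 100000
pairs = [(random.expovariate(alpha), random.expovariate(beta))
         for _ in range(n)]

p_x_wins = sum(x < y for x, y in pairs) / n      # estimates alpha/(alpha+beta)
mean_min = sum(min(x, y) for x, y in pairs) / n  # estimates 1/(alpha+beta)
print(p_x_wins, mean_min)                        # both near 1/3 here
```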
Solution. First consider P {ñb = 0}, the probability that Albert wins the very first race. Since Albert's time must be less than Betsy's for him to win, this probability is P {ã < b̃}. By Exercise 2.7 (iii), P {ã < b̃} = α/(α + β). Now consider P {ñb = 1}, the probability that Betsy wins one race and then Albert wins the second. Both ã and b̃ are exponential random variables; hence they are memoryless. That is, during the second race they don't 'remember' what happened during the first race. Thus the times for each race for each runner are independent of each other. Let ãi , b̃i denote the race times of Albert and Betsy, respectively, on the ith race. Then
P {ñb = 1} = P {ã1 > b̃1 , ã2 < b̃2 }
           = P {ã1 > b̃1 } P {ã2 < b̃2 }
           = [β/(α + β)] · [α/(α + β)].
This proves the Lemma. To complete the exercise, we condition the probability
density function of ỹ on the value of ñ. Recall that here ñ is a random variable
and is independent of x̃i , i = 1, 2, · · ·. Thus,
[Table of sampled variates xn , yn , zn , wn omitted.]
Solution. Observe that the sampled means of x̃, ỹ, and z̃ are close to their
expected values: 0.98820564 as compared to the expected value of 1.0 for x̃;
0.46098839 as compared to the expected value of 0.5 for ỹ; and 0.33863694
for z̃. (Recall that if α and β are the rates for x̃ and ỹ, respectively, then E[z̃] = 1/(α + β). In this exercise α = 1 and β = 2; hence E[z̃] = 0.33333333.)
Note that the sampled mean of w̃ is very close to the sampled mean of z̃.
Now, the wi are selected from the samples of z̃. Once the samples of z̃ are
chosen, it does not matter whether the original variates came from sampling the
distribution of x̃ or of ỹ. Hence the same result would hold if w̃ represented
those yi that were less than their corresponding xi values. The distribution of
w̃, and therefore of z̃, does not depend on whether α < β or β < α. Thus z̃ is
not a representation of x̃, and so will not be exponential with rate α.
Exercise 2.21 Define counting processes which you think have the fol-
lowing properties:
1. independent but not stationary increments,
2. stationary but not independent increments,
3. neither stationary nor independent increments, and
4. both stationary and independent increments.
What would you think would be the properties of the process which counts
the number of passengers which arrive to an airport by June 30 of a given
year if time zero is defined to be midnight, December 31 of the previous
year?
Solution.
1. Independent, not stationary: Count the number of customers who enter a store over an hour-long period. Assume the store has infinite capacity. The
number of customers that enter during one 1-hour time interval likely will
be independent of the number that enter during a non-overlapping 1-hour
period. However, because of “peak” hours and slow times, this counting
process is not likely to have stationary increments.
4. Both stationary and independent: Count the number of phone calls through
an operator’s switchboard between 3:00 am and 5:00 am in Watseka, Illi-
nois (a small farming community). Then the process can be modeled using
stationary and independent increments.
For the final part of this exercise, assume the airport has an infinite airplane
capacity. Then this process would likely have independent increments. Pas-
sengers usually will not be interested in how many people arrived before them;
they are concerned with their own, current, flight. Furthermore, this counting
process is not likely to have stationary increments since there will be peak and
slow times for flying. (For example: “red eye” flights are during slow times.)
Exercise 2.22 Show that E[ñ(t + s) − ñ(s)] = λt if {ñ(t), t > 0} is a
Poisson process with rate λ.
Solution.
E [ñ(t + s) − ñ(s)] = Σ_{n=0}^{∞} n P {ñ(t + s) − ñ(s) = n}
 = Σ_{n=0}^{∞} n (λt)^n e^{−λt} / n!
 = Σ_{n=1}^{∞} n (λt)^n e^{−λt} / n!
 = Σ_{n=1}^{∞} (λt)^n e^{−λt} / (n − 1)!
 = λt e^{−λt} Σ_{n=1}^{∞} (λt)^{n−1} / (n − 1)!
 = λt e^{−λt} Σ_{n=0}^{∞} (λt)^n / n!
 = λt e^{−λt} e^{λt}
 = λt.
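The identity is easy to check by simulating the process directly from its exponential interarrival times and counting events in the window (s, s + t] (a sketch; the parameter values are arbitrary):

```python
import random

def count_in_window(lam, s, t):
    """Generate a Poisson process of rate lam via exponential gaps and
    count the events that land in the interval (s, s + t]."""
    clock, count = 0.0, 0
    while clock <= s + t:
        clock += random.expovariate(lam)
        if s < clock <= s + t:
            count += 1
    return count

random.seed(3)
lam, s, t, trials = 2.0, 5.0, 3.0, 20000
avg = sum(count_in_window(lam, s, t) for _ in range(trials)) / trials
print(avg)   # near lam * t = 6, independent of s
```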
Exercise 2.23 For each of the following functions, determine whether
the function is o(h) or not. Your determination should be in the form of a
formal proof.
1. f (t) = t,
2. f (t) = t²,
3. f (t) = t^{1/2},
4. f (t) = e^{−at} for a, t > 0,
5. f (t) = te^{−at} for a, t > 0.
Solution.
1. lim_{h→0} h/h = lim_{h→0} 1 = 1 ≠ 0. Therefore f (t) = t is not o(h).
2. lim_{h→0} h²/h = lim_{h→0} h = 0. Therefore f (t) = t² is o(h).
3. lim_{h→0} h^{1/2}/h = lim_{h→0} h^{−1/2} = ∞. Therefore f (t) = t^{1/2} is not o(h).
4. lim_{h→0} e^{−ah}/h = ∞, since e^{−ah} → 1 as h → 0. Therefore f (t) = e^{−at} for a, t > 0 is not o(h).
5. lim_{h→0} he^{−ah}/h = lim_{h→0} e^{−ah} = e^0 = 1 ≠ 0. Therefore f (t) = te^{−at} for a, t > 0 is not o(h).
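The five limits can also be seen numerically by tabulating f(h)/h for shrinking h (a sketch; the choice a = 2 is arbitrary):

```python
import math

functions = {
    't':           lambda h: h,
    't**2':        lambda h: h ** 2,
    't**0.5':      lambda h: math.sqrt(h),
    'exp(-a*t)':   lambda h: math.exp(-2.0 * h),
    't*exp(-a*t)': lambda h: h * math.exp(-2.0 * h),
}
for name, f in functions.items():
    print(name, [f(h) / h for h in (1e-2, 1e-4, 1e-6)])
# only t**2 drives f(h)/h toward 0, i.e., only t**2 is o(h)
```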
Exercise 2.24 Suppose that f (t) and g(t) are both o(h). Determine
whether each of the following functions is o(h).
1. s(t) = f (t) + g(t)
2. d(t) = f (t) − g(t)
3. p(t) = f (t)g(t)
4. q(t) = f (t)/g(t)
Rt
5. i(t) = 0 f (x) dx
Solution.
1. s(t) = f (t) + g(t) is o(h) by the additive property of limits.
2. d(t) = f (t) − g(t) is o(h) by the additive and scalar multiplication proper-
ties of limits. (Here, the scalar is −1.)
We now prove an intermediate result. Lemma: If f (t) is o(h), then f (0) = 0.
Proof: Clearly, by the definition of o(h), f (t) is continuous. So suppose that f (0) is not equal to 0, say f (0) = a, a > 0. Then

lim_{h→0} f (h)/h = ∞

since a > 0. But f (t) is o(h), so that the required limit is zero. Hence it must be that a = 0. So if f (t) is o(h), then f (0) = 0.
3. By the Lemma, g(h) → g(0) = 0 as h → 0, so

lim_{h→0} f (h)g(h)/h = lim_{h→0} [f (h)/h] · lim_{h→0} g(h) = 0 · 0 = 0,

and hence p(t) = f (t)g(t) is o(h).
4. For q(t) = f (t)/g(t), both the numerator and denominator of q(h)/h tend to zero. The limit is then indeterminate, and whether or not q(h) is o(h) depends upon the specific forms of f (h) and g(h). That is, we need to know what f (t) and g(t) are to proceed further. For example, if f (t) = t⁴ and g(t) = t², then

lim_{h→0} q(h)/h = lim_{h→0} h⁴/(h² h) = lim_{h→0} h = 0,

making q(t) an o(h) function. If, on the other hand, f (t) = t² and g(t) = t⁴, then

lim_{h→0} q(h)/h = lim_{h→0} h²/(h⁴ h) = lim_{h→0} 1/h³ = ∞,

so q(t) is not o(h).
5. By the mean value theorem for integrals, i(h) = ∫_0^h f (x) dx = h f (ξ) for some ξ ∈ [0, h]. Therefore

lim_{h→0} i(h)/h = lim_{h→0} h f (ξ)/h = lim_{h→0} f (ξ), ξ ∈ [0, h], = f (0) = 0

by the Lemma above. Hence i(t) = ∫_0^t f (x) dx is o(h).
Exercise 2.25 Show that definition 1 of the Poisson process implies def-
inition 2 of the Poisson process.
Solution. Denote Property (j) of Poisson process Definition n by (j)n .
1 (i)2 : Immediate.
2 (ii)2 : Property (ii)1 gives us independent increments; it remains to show
they are also stationary. By (iii)1 , we see that P {ñ(t + s) − ñ(s) = n} is
independent of s. This defines stationary increments.
3 (iii)2 : By (iii)1 ,
P {ñ(h) = 1} = (λh)¹ e^{−λh} / 1!
 = λh e^{−λh}
 = λh e^{−λh} + λh − λh
 = λh (e^{−λh} − 1) + λh
 = g(h) + λh
Review of Random Processes 29
Note that

lim_{h→0} g(h)/h = lim_{h→0} λh (e^{−λh} − 1)/h = λ lim_{h→0} (e^{−λh} − 1) = λ(1 − 1) = 0,

so that g(h) is o(h) and

P {ñ(h) = 1} = λh + g(h) = λh + o(h).
Similarly, for n ≥ 2,

lim_{h→0} [(λh)^n e^{−λh} / n!] / h = (λ^n / n!) lim_{h→0} e^{−λh} h^{n−1} = (λ^n / n!) · 0 = 0,

so that P {ñ(h) ≥ 2} = o(h).
1 (i)1 : Immediate.
Thus,
Observe that

Σ_{k=2}^{n} P {ñ(t + h) − ñ(t) = k} P_{n−k}(t) = Σ_{k=2}^{n} o(h) P_{n−k}(t) ≤ o(h) · 1 = o(h)
since Pn−k (t) is a scalar for all k ≤ n, and any scalar multiplied by a o(h)
function is still o(h). (Think of the limit definition of o(h) to see why this
is true.) Thus
so that
P′_n (t) = −λP_n (t) + λP_{n−1} (t) + 0
         = −λP_n (t) + λP_{n−1} (t),  n ≥ 1.   (2.26.1)
For n = 0,

P′_0 (t) = −λP_0 (t),

i.e.,

P′_0 (t) + λP_0 (t) = 0.

Observe that this is equivalent to

(d/dt) [e^{λt} P_0 (t)] = 0,

since expanding the derivative leads to

e^{λt} P′_0 (t) + λ e^{λt} P_0 (t) = 0,

and therefore

P_0 (t) = K e^{−λt}.   (2.26.2)

But by (i)2 , P_0 (0) = 1. Using this result in (2.26.2) we find K = 1, so that

P_0 (t) = e^{−λt}.
Now let T be the truth set for the proposition that

P_n (t) = (λt)^n e^{−λt} / n!.

Then 0 ∈ T . Now suppose n − 1 ∈ T . That is,

P_{n−1} (t) = (λt)^{n−1} e^{−λt} / (n − 1)!.   (2.26.3)
Hence,

(d/dt) [e^{λt} P_n (t)] = λ e^{λt} P_{n−1} (t).

Since n − 1 ∈ T , we have by (2.26.3),

(d/dt) [e^{λt} P_n (t)] = λ e^{λt} (λt)^{n−1} e^{−λt} / (n − 1)!
                       = λ (λt)^{n−1} / (n − 1)!
                       = λ^n t^{n−1} / (n − 1)!.
Thus, integrating both sides with respect to t,

e^{λt} P_n (t) = (λt)^n / n! + c,

i.e.,

P_n (t) = e^{−λt} [ (λt)^n / n! + c ].

But by (i)2 , ñ(0) = 0. So for n > 0, P_n (0) = 0. Therefore,

P_n (0) = e^{−λ·0} [ (λ·0)^n / n! + c ] = 1 · (c) = 0,

so that c = 0. That is,

P_n (t) = (λt)^n e^{−λt} / n!.

Thus, n ∈ T , the Proposition is proved, and all properties of Definition 1 are satisfied.
Exercise 2.27 Show that the sequence of interarrival times for a Poisson
process with rate λ forms a set of mutually iid exponential random variables
with parameter λ.
Solution. Recall that if {ñ(t), t ≥ 0} is a Poisson process with rate λ, then for
all s,
P {ñ(t + s) − ñ(s) = 0} = P {ñ(t) = 0} = e−λt
Now, in order for no events to have occurred by time t, the time of the first event must be greater than t. That is,

P {t̃1 > t} = e^{−λt}.

This is equivalent to

P {t̃1 ≤ t} = 1 − e^{−λt}.
Observe that this is the cumulative distribution function for an exponential ran-
dom variable with parameter λ. Since the c.d.f. is unique, t̃1 is exponentially
distributed with rate λ.
Note that the second interarrival time begins at the end of the first interarrival
time. Furthermore, the process has stationary and independent increments so
that t̃2 has the same distribution as t̃1 and is independent
of t̃1 .
Repeating these arguments for n ≥ 3, we see that t̃1 , t̃2 , t̃3 , · · · are independent exponential random variables with parameter λ.
Exercise 2.28 Show that

(d/dt) P {s̃n ≤ t} = λ (λt)^{n−1} e^{−λt} / (n − 1)!.
Solution.
(i) Clearly, since ñ(0) = max {n : s̃n ≤ 0}, ñ(0) = 0.
(ii) This follows immediately from the independence of t̃i , i ≥ 1 . Fur-
thermore, because these are exponential random variables and thus are
memoryless, ñ has stationary increments.
Therefore

P {ñ(t) = n} = (λt)^n e^{−λt} / n!,
and the proof is complete.
Exercise 2.30 Let ñ1 and ñ2 be independent Poisson random variables
with rates α and β, respectively. Define ñ = ñ1 + ñ2 . Show that ñ has the
Poisson distribution with rate α + β. Using this result, prove Property 1 of
the Poisson process.
Solution. From the definition of the Poisson distribution,

P {ñ1 = n} = α^n e^{−α} / n!

and

P {ñ2 = n} = β^n e^{−β} / n!.
Now, condition on the value of ñ2 to find:

P {ñ = n} = Σ_{k=0}^{n} P {ñ1 + ñ2 = n | ñ2 = k} P {ñ2 = k}
 = Σ_{k=0}^{n} P {ñ1 = n − k} P {ñ2 = k}
 = Σ_{k=0}^{n} [α^{n−k} e^{−α} / (n − k)!] [β^k e^{−β} / k!]
 = [e^{−(α+β)} / n!] Σ_{k=0}^{n} (n choose k) α^{n−k} β^k
 = (α + β)^n e^{−(α+β)} / n!,

which is the Poisson distribution with parameter α + β.
Note that the only way for ñr = r and ñg = g, given that ñ = k, is if their sum is r + g; that is, if k = r + g. So for k ≠ r + g, the corresponding conditional probability is zero.
Hence,

P {ñr = r, ñg = g} = [ ((r+g) choose r) p^r (1 − p)^{(r+g)−r} ] [ e^{−λt} (λt)^{r+g} / (r + g)! ].

After some rearranging of terms:

P {ñr = r, ñg = g} = [ e^{−pλt} (pλt)^r / r! ] [ e^{−(1−p)λt} {(1 − p)λt}^g / g! ].
Now let ñ(t), the number of events occurring by time t, be Poisson distributed with parameter λt. Let each event be recorded with probability p. Define ñr (t) to be the number of events recorded by time t, and ñg (t) to be the number of events not recorded by time t. Then by the result just shown, ñ(t) = ñr (t) + ñg (t), where ñr (t) and ñg (t) are Poisson distributed with rates pλt and (1 − p)λt, respectively. This proves property (iii) of Definition 1 of the Poisson process.
By property (iii) of Definition 1 of the Poisson process, ñ(0) = 0. Since
ñr (t) and ñg (t) are non-negative for all t ≥ 0, ñr (0) = ñg (0) = 0. Thus
property (i) of Definition 1 of the Poisson process holds.
It remains to show property (ii) of Definition 1 of the Poisson process. Con-
sider two non-overlapping intervals of time, say (t0 , t1 ) and (t2 , t3 ). Then,
since ñ(t) = ñr (t) + ñg (t),
ñ(t1 ) − ñ(t0 ) = [ñr (t1 ) − ñr (t0 )] + [ñg (t1 ) − ñg (t0 )],
and
ñ(t3 ) − ñ(t2 ) = [ñr (t3 ) − ñr (t2 )] + [ñg (t3 ) − ñg (t2 )].
By the result shown above, ñr (t) and ñg (t) are independent random variables.
That is, [ñr (t1 ) − ñr (t0 )] and [ñg (t1 ) − ñg (t0 )] are independent, and [ñr (t3 ) −
ñr (t2 )] and [ñg (t3 ) − ñg (t2 )] are independent. Furthermore, {ñ(t), t ≥ 0}
is a Poisson process so it has independent increments: ñ(t1 ) − ñ(t0 ) and
ñ(t3 ) − ñ(t2 ) are independent. Since ñr (t) and ñg (t) are independent of
each other across the sums, which are then in turn independent of each other,
ñr (t) and ñg (t) are independent across the intervals; i.e., [ñr (t1 ) − ñr (t0 )] and [ñr (t3 ) − ñr (t2 )] are independent, and [ñg (t1 ) − ñg (t0 )] and [ñg (t3 ) − ñg (t2 )] are independent. This proves property (ii) of Definition 1 of the Poisson process.
Since all three properties hold, {ñr (t), t ≥ 0} and {ñg (t), t ≥ 0} are Pois-
son processes.
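The decomposition can be illustrated by simulation: generate one Poisson stream, record each event with probability p, and compare the two counts with their predicted means pλt and (1 − p)λt (a sketch with arbitrary parameters):

```python
import random

random.seed(4)
lam, p, t, trials = 3.0, 0.4, 10.0, 5000

rec_total, unrec_total = 0, 0
for _ in range(trials):
    clock = 0.0
    while True:
        clock += random.expovariate(lam)
        if clock > t:
            break
        if random.random() < p:    # event recorded with probability p
            rec_total += 1
        else:
            unrec_total += 1

mean_rec = rec_total / trials      # predicted: p * lam * t = 12
mean_unrec = unrec_total / trials  # predicted: (1 - p) * lam * t = 18
print(mean_rec, mean_unrec)
```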
Exercise 2.32 Events occur at a Poisson rate λ. Suppose all odd num-
bered events and no even numbered events are recorded. Let ñ1 (t) be the
number of events recorded by time t and ñ2 (t) be the number of events not
recorded by time t. Do the processes {ñ1 (t), t ≥ 0} and {ñ2 (t), t ≥ 0}
each have independent increments? Do they have stationary increments?
Are they Poisson processes?
Solution. Since only odd numbered events are recorded, the time between
recorded events is the sum of two exponentially distributed random variables
with parameter λ. Hence, the processes {ñ1 (t), t ≥ 0} and {ñ2 (t), t ≥ 0}
are clearly not Poisson. Now, suppose an event is recorded at time t0 . Then,
the probability that an event will be recorded in (t0 , t0 + h) is o(h) because
this would require two events from the original Poisson process in a period of
length h. Therefore, the increments are not independent. On the other hand,
But,

P {q̃k+1 = j | q̃k = i} = P {(q̃k − 1)^+ + ṽk+1 = j | q̃k = i}.

In turn,

P {(q̃k − 1)^+ + ṽk+1 = j | q̃k = i} = P {(i − 1)^+ + ṽk+1 = j | q̃k = i}
                                    = P {ṽk+1 = j − (i − 1)^+ | q̃k = i}.

For i = 0 or i = 1,

P {ṽk+1 = j − (i − 1)^+ } = P {ṽk+1 = j}.

For i ≥ 2,

P {ṽk+1 = j − (i − 1)^+ } = P {ṽk+1 = j + 1 − i},

where we note that the previous probability is zero whenever j < i − 1. Define aj = P {ṽk+1 = j} for j = 0, 1, . . . , N . Then, we have shown that
      | a0  a1  a2  a3  a4  a5  a6  a7  · · ·  aN    0   · · · |
      | a0  a1  a2  a3  a4  a5  a6  a7  · · ·  aN    0   · · · |
P =   | 0   a0  a1  a2  a3  a4  a5  a6  · · ·  aN−1  · · ·     |
      | 0   0   a0  a1  a2  a3  a4  a5  · · ·  aN−2  · · ·     |
      | ·   ·   ·   ·   ·   ·   ·   ·   ·      ·               |
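The banded structure of this transition matrix is easy to generate programmatically. The sketch below builds a finite truncation for a hypothetical arrival pmf a = (a0, . . . , aN); folding the tail mass into the last column is a truncation device for the sketch, not part of the original model:

```python
def build_p(a, size):
    """Build a size-by-size matrix with P[i][j] = a[j] for i <= 1 and
    P[i][j] = a[j - (i - 1)] for i >= 2 (zero elsewhere), matching the
    structure displayed above.  Tail mass beyond the truncation is
    folded into the last column so each row sums to one."""
    P = [[0.0] * size for _ in range(size)]
    for i in range(size):
        shift = 0 if i <= 1 else i - 1
        for j in range(size):
            k = j - shift
            if 0 <= k < len(a):
                P[i][j] = a[k]
        P[i][size - 1] += 1.0 - sum(P[i])
    return P

a = [0.5, 0.3, 0.2]       # hypothetical pmf of the arrivals, for illustration
P = build_p(a, 6)
```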
Exercise 2.34 Starting with (2.18), show that the rows of P ∞ must be identical. [Hint: First calculate P ∞ under the assumption β0 = [ 1 0 0 . . . ]. Next, calculate P ∞ under the assumption β0 = [ 0 1 0 . . . ]. Continue along these lines.]
Solution. The general idea is that π is independent of β0 ; that is, the value of
π is the same no matter how we choose the distribution of the starting state.
Suppose we choose β0 = [ 1 0 0 . . . ]. Then, β0 P ∞ is equal to the first
row of P ∞ . Since π = β0 P ∞ , we then have the first row of P ∞ must be π.
If we choose β0 = [ 0 1 0 . . . ], then β0 P ∞ is equal to the second row of P ∞ . Again, since π = β0 P ∞ , we then have that the second row of P ∞ must be π. In general, if we choose the starting state as i with probability 1, then we conclude that the i-th row of P ∞ is equal to π. Thus, all rows of P ∞ must be equal to π.
Exercise 2.35 Suppose {x̃k , k = 0, 1, . . .} is a Markov chain such that
P = | 0.6  0.4 |
    | 0.5  0.5 |.
And, finally, subtracting the first equation from the second yields

[ π1  π2 ] | 1  0 | = [ 5/9  4/9 ],
           | 0  1 |

or

[ π1  π2 ] = [ 5/9  4/9 ].
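The fixed point can be confirmed numerically, e.g. by power iteration (a sketch):

```python
# power iteration for the stationary vector of the chain above
P = [[0.6, 0.4],
     [0.5, 0.5]]
pi = [1.0, 0.0]                       # any starting distribution works
for _ in range(200):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]
print(pi)   # converges to [5/9, 4/9] ≈ [0.5556, 0.4444]
```

Convergence is fast here because the subdominant eigenvalue of P is 0.1.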
But,

P {ãi + b̃5−i = j} = Σ_{ℓ=0}^{i} P {ãi + b̃5−i = j | ãi = ℓ} P {ãi = ℓ}
                  = Σ_{ℓ=0}^{i} P {b̃5−i = j − ℓ} P {ãi = ℓ},
Exercise 2.37 For the example discussed in Section 1.2.2 in the case
where arrivals occur according to an on off process, determine whether or
not {q̃k , k = 0, 1, . . .} is a DPMC. Defend your conclusion mathematically;
that is, show whether or not {q̃k , k = 0, 1, . . .} satisfies the definition of a
Markov chain.
Solution. We are given
For the case where the arrival process is an independent sequence of random variables, the answer is "Yes." The sequence of values of q̃k does reveal the number of arrivals in each interval, but the probability mass function for the queue length at epoch k + 1 can be computed solely from the value of q̃k ; the earlier information is not needed. For the case where the arrival process is on-off, the sequence of queue lengths reveals the sequence of arrivals. For example, if q̃k = 7 and q̃k−1 = 3, then we know that there were 5 arrivals during interval k, which means that there were 5 sources in the on state during interval k. Thus, the distribution of the number of new arrivals
        { −λ,  if j = i,
Qij =   { λ,   if j = i + 1,
        { 0,   otherwise.
Show that {x̃1 (t), t ≥ 0} is a Poisson process. [Hint: Simply solve the
infinite matrix differential equation term by term starting with P00 (t) and
completing each column in turn.]
Solution. Solve the infinite matrix differential equation:
d
P (t) = P (t)Q.
dt
Due to independent increments, we need only compute the first row of the
matrix P (t). Begin with P0,0 (t):
d
P0,0 (t) = −λP0,0 (t).
dt
We see immediately that
P0,0 (t) = e−λt .
Now solve for the second column of P (t):
d
P0,1 (t) = λP0,0 (t) − λP0,1 (t)
dt
= λe−λt − λP0,1 (t).
This solves to
P0,1 (t) = λte−λt
Repeating this process for each column of P (t), we find the solution of P (t)
to be

P0,n (t) = e^{−λt} (λt)^n / n!.
Observe that this probability is that of a Poisson random variable with param-
eter λt. Hence the time between events is exponential. By Definition 2.16,
{x̃(t), t ≥ 0} is a Poisson process. Furthermore, this is strictly a birth process;
that is, the system cannot lose customers once they enter the system. This
should be apparent from the definition of Q.
Q = | −2.00   1.25   0.75 |
    |  0.50  −1.25   0.75 |
    |  1.00   2.00  −3.00 |.

Solution.
1. Solving P (∞)Q = 0 together with P0 (∞) + P1 (∞) + P2 (∞) = 1 yields

P (∞) = [ 6/25  14/25  5/25 ].
2. The long-run rate of transitions into state j is Σ_{i≠j} Pi (∞)Qij . We find

Σ_{i≠0} Pi (∞)Qi0 = (14/25)(1/2) + (5/25)(1) = 48/100,

Σ_{i≠1} Pi (∞)Qi1 = (6/25)(5/4) + (5/25)(2) = 70/100,

and

Σ_{i≠2} Pi (∞)Qi2 = (6/25)(3/4) + (14/25)(3/4) = 60/100.

Thus,

π0 = 48/(48 + 70 + 60) = 48/178, π1 = 70/178, and π2 = 60/178.
3. From

P {x̃k+1 = j | x̃k = i} = Qij / (−Qii ),

for i ≠ j, and P {x̃k+1 = j | x̃k = j} = 0, we have immediately

P = |  0    5/8  3/8 |
    | 2/5    0   3/5 |
    | 1/3  2/3    0  |.
As a check, π P = π:

[ 48/178  70/178  60/178 ] |  0    5/8  3/8 |
                           | 2/5    0   3/5 |
                           | 1/3  2/3    0  |

 = [ (70/178)(2/5) + (60/178)(1/3)   (48/178)(5/8) + (60/178)(2/3)   (48/178)(3/8) + (70/178)(3/5) ]

 = [ 48/178  70/178  60/178 ].
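Both computations — P(∞) satisfying P(∞)Q = 0 and the embedded-chain vector π — can be verified with exact arithmetic (a sketch using the generator as given above):

```python
from fractions import Fraction as F

Q = [[F(-2),    F(5, 4),  F(3, 4)],
     [F(1, 2),  F(-5, 4), F(3, 4)],
     [F(1),     F(2),     F(-3)]]

p_inf = [F(6, 25), F(14, 25), F(5, 25)]   # stationary distribution of the CTMC

# global balance: every component of P(inf) Q must vanish
balance = [sum(p_inf[i] * Q[i][j] for i in range(3)) for j in range(3)]

# embedded jump chain P_ij = Q_ij / (-Q_ii), and its stationary vector
# obtained by normalizing the rates of transitions into each state
P = [[Q[i][j] / (-Q[i][i]) if i != j else F(0) for j in range(3)]
     for i in range(3)]
rates = [sum(p_inf[i] * Q[i][j] for i in range(3) if i != j) for j in range(3)]
pi = [r / sum(rates) for r in rates]
print(balance)   # [0, 0, 0]
print(pi)        # [24/89, 35/89, 30/89], i.e., [48/178, 70/178, 60/178]
```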
Chapter 3
ELEMENTARY CONTINUOUS-TIME
MARKOV CHAIN-BASED QUEUEING MODELS
Exercise 3.1 Carefully pursue the analogy between the random walk
and the occupancy of the M/M/1 queueing system. Determine the proba-
bility of an increase in the queue length, and show that this probability is
less than 0.5 if and only if λ < µ.
Solution. Consider the occupancy of the M/M/1 queueing system as the posi-
tion of a random walker on the nonnegative integers, where a wall is erected
to the left of position zero. When there is an increase in the occupancy of the
queue this is like the walker taking a step to the right; a decrease is a step to
the left. However, if the queue is empty (the walker is at position zero), then a
‘decrease’ in the occupancy is analogous to the walker attempting a step to the
left – but he hits the wall and so remains at position zero (queue empty).
Let $\lambda$ be the rate of arriving customers and $\mu$ the rate of departing customers. Denote the probability of an arrival by $p_+$ and that of a departure by $p_-$. Then $p_+$ is simply the probability that an arrival occurs before a departure. From Chapter 2, this probability is $\frac{\lambda}{\lambda+\mu}$. It follows that $p_+ < \frac{1}{2}$ if and only if $\lambda < \mu$.
Exercise 3.2 Prove Theorem 3.1 and its continuous analog
$$E[\tilde{x}] = \int_0^\infty P\{\tilde{x} > x\}\,dx.$$
Solution.
$$E[\tilde{n}] = \sum_{n=0}^{\infty} n P\{\tilde{n}=n\}
= \sum_{n=0}^{\infty}\left(\sum_{k=1}^{n} 1\right)P\{\tilde{n}=n\}
= \sum_{k=1}^{\infty}\sum_{n=k}^{\infty} P\{\tilde{n}=n\}
= \sum_{k=1}^{\infty} P\{\tilde{n} \geq k\}
= \sum_{n=0}^{\infty} P\{\tilde{n} > n\}.$$
But
$$x = \int_0^x dy,$$
so
$$E[\tilde{x}] = \int_0^\infty \int_0^x dy\, f_{\tilde{x}}(x)\,dx.$$
Changing the order of integration yields
$$E[\tilde{x}] = \int_0^\infty \int_y^\infty f_{\tilde{x}}(x)\,dx\,dy = \int_0^\infty P\{\tilde{x} > y\}\,dy = \int_0^\infty P\{\tilde{x} > x\}\,dx.$$
But $P\{\tilde{x} > z, \tilde{y} > z\} \leq P\{\tilde{x} > z\}$ and $P\{\tilde{x} > z, \tilde{y} > z\} \leq P\{\tilde{y} > z\}$ because $\{\tilde{x} > z, \tilde{y} > z\} \subseteq \{\tilde{x} > z\}$ and $\{\tilde{x} > z, \tilde{y} > z\} \subseteq \{\tilde{y} > z\}$. Therefore,
$$E[\tilde{z}] \leq \int_0^\infty P\{\tilde{x} > z\}\,dz \qquad \text{and} \qquad E[\tilde{z}] \leq \int_0^\infty P\{\tilde{y} > z\}\,dz.$$
Equivalently,
$$E[\tilde{z}] \leq E[\tilde{x}] \qquad \text{and} \qquad E[\tilde{z}] \leq E[\tilde{y}].$$
The discrete and mixed cases are proved in the same way.
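The continuous identity just proved can be checked numerically. The sketch below (an assumed example, not from the text) evaluates $\int_0^\infty P\{\tilde{x} > t\}\,dt$ for an exponential random variable with rate $\mu$, for which $P\{\tilde{x} > t\} = e^{-\mu t}$ and $E[\tilde{x}] = 1/\mu$.

```python
import math

mu = 2.5        # assumed rate for the example
dt = 1e-4
# Riemann sum of the survival function, out to a point where the tail is negligible.
integral = sum(math.exp(-mu * t * dt) for t in range(int(20 / (mu * dt)))) * dt
assert abs(integral - 1 / mu) < 1e-3
print(integral)
```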
Solution. Since a customer arrives every two seconds and leaves after one second, there will either be one customer in the system or no customers in the system. Each of these events is equally likely since the time intervals in which they occur are of the same length (1 second). Hence
$$P\{\tilde{n} = 0\} = P\{\tilde{n} = 1\} = \frac{1}{2}.$$
On the other hand, because a customer finishes service at the end of every odd-numbered second, she has left the system by the time the next customer arrives. Hence an arriving customer sees the system occupancy as empty; that is, as zero. Obviously this is not the same as the distribution obtained above, and we conclude that the distributions as seen by an arbitrary arriving customer and that of an arbitrary observer cannot be the same.
Exercise 3.5 For the ordinary M/M/1 queueing system, determine the
limiting distribution of the system occupancy
1. as seen by departing customers, [Hint: Form the system of equa-
tions πd = πd Pd , and then solve the system as was done to obtain
P {ñ = n}.]
2. as seen by arriving customers, and [Hint: First form the system of equa-
tions πa = πa Pa , and then try the solution πa = πd .]
3. at instants of time at which the occupancy changes. That is, embed a
Markov chain at the instants at which the occupancy changes, defining
the state to be the number of customers in the system immediately fol-
lowing the state change. Define π = [ π0 π1 · · · ] to be the stationary
probability vector and P to be the one-step transition probability matrix
for this embedded Markov chain. Determine π, and then compute the
stochastic equilibrium distribution for the process {ñ(t), t ≥ 0} accord-
ing to the following well known result from the theory of Markov chains
as discussed in Chapter 2:
$$P_i = \frac{\pi_i E[\tilde{s}_i]}{\sum_{i=0}^{\infty} \pi_i E[\tilde{s}_i]},$$
where $\tilde{s}_i$ denotes the time the system spends in state $i$ on each visit.
Observe that the results of parts 1, 2, and 3 are identical, and that these are
all equal to the stochastic equilibrium occupancy probabilities determined
previously.
Solution.
1. Let $\tilde{n}_d(k)$ denote the number of customers left behind by the $k$-th departing customer. Then the $(i,j)$-th element of $P_d$ is given by
$$P\{\tilde{n}_d(k+1) = j \mid \tilde{n}_d(k) = i\}.$$
This results in
$$P_d = \begin{bmatrix}
\frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda+\mu} & \left(\frac{\lambda}{\lambda+\mu}\right)^2\frac{\mu}{\lambda+\mu} & \cdots \\
\frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda+\mu} & \left(\frac{\lambda}{\lambda+\mu}\right)^2\frac{\mu}{\lambda+\mu} & \cdots \\
0 & \frac{\mu}{\lambda+\mu} & \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda+\mu} & \cdots \\
0 & 0 & \frac{\mu}{\lambda+\mu} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}.$$
Using this matrix, form the system of equations $\pi_d = \pi_d P_d$. Then, for $i = 0$,
$$\pi_{d0} = \sum_{i=0}^{\infty} \pi_{di} P_{d,i0}
= \pi_{d0} P_{d,00} + \pi_{d1} P_{d,10} + \sum_{i=2}^{\infty} \pi_{di} P_{d,i0}
= \pi_{d0}\frac{\mu}{\lambda+\mu} + \pi_{d1}\frac{\mu}{\lambda+\mu} + 0.$$
Hence $\pi_{d1} = \frac{\lambda}{\mu}\pi_{d0}$. Substitute this into the equation for $\pi_{d1}$:
$$\pi_{d1} = \sum_{i=0}^{\infty} \pi_{di} P_{d,i1}
= \pi_{d0} P_{d,01} + \pi_{d1} P_{d,11} + \pi_{d2} P_{d,21} + \sum_{i=3}^{\infty} \pi_{di} P_{d,i1}$$
$$= \pi_{d0}\frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda+\mu} + \pi_{d1}\frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda+\mu} + \pi_{d2}\frac{\mu}{\lambda+\mu} + 0,$$
which gives the solution $\pi_{d2} = \left(\frac{\lambda}{\mu}\right)^2\pi_{d0}$. Repeating this process, we see that $\pi_{dj} = \left(\frac{\lambda}{\mu}\right)^j\pi_{d0}$, $j = 0, 1, 2, \ldots$.
Now use the normalizing constraint $\sum_{j=0}^{\infty}\pi_{dj} = 1$. Then for $\lambda < \mu$,
$$\sum_{j=0}^{\infty}\pi_{dj} = \pi_{d0}\sum_{j=0}^{\infty}\left(\frac{\lambda}{\mu}\right)^j = \pi_{d0}\,\frac{1}{1-\frac{\lambda}{\mu}} = 1.$$
Thus $\pi_{d0} = 1-\frac{\lambda}{\mu}$, and $\pi_{dj} = \left(\frac{\lambda}{\mu}\right)^j\left(1-\frac{\lambda}{\mu}\right)$.
2. Let $\tilde{n}_a(k)$ denote the number of customers seen by the $k$-th arriving customer. Then the $(i,j)$-th element of $P_a$ is given by
$$P\{\tilde{n}_a(k+1) = j \mid \tilde{n}_a(k) = i\}.$$
$$P_i = \frac{\pi_i E[\tilde{s}_i]}{\sum_{i=0}^{\infty} \pi_i E[\tilde{s}_i]}.$$
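The equality of these distributions can be sanity-checked by simulation. The sketch below (parameters assumed) runs an event-driven M/M/1 queue and confirms that the occupancy distribution left behind by departures matches the geometric form $\pi_{dj} = (1-\rho)\rho^j$ derived above.

```python
import random

random.seed(1)
lam, mu = 0.5, 1.0          # assumed rates; rho = 0.5
rho = lam / mu
n, t, left = 0, 0.0, []
next_arrival = random.expovariate(lam)
next_departure = float("inf")
for _ in range(200_000):
    if next_arrival < next_departure:          # arrival event
        t = next_arrival
        n += 1
        if n == 1:
            next_departure = t + random.expovariate(mu)
        next_arrival = t + random.expovariate(lam)
    else:                                      # departure event
        t = next_departure
        n -= 1
        left.append(n)                         # occupancy left behind
        next_departure = t + random.expovariate(mu) if n > 0 else float("inf")

p0 = left.count(0) / len(left)
assert abs(p0 - (1 - rho)) < 0.02              # pi_d0 = 1 - rho
print(p0)
```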
Exercise 3.6 Using Little’s result, show that the probability that the
server is busy at an arbitrary point in time is equal to the quantity (λ/µ).
Solution. Let $B$ denote the event that the server is busy. Define the system to be the server only (i.e., no queue). Then
$$P\{B\} = P\{\tilde{n}_s = 1\},$$
where $\tilde{n}_s$ is the number of customers in the system (i.e., $\tilde{n}_s$ is the number in service). Hence
$$E[\tilde{n}_s] = 0 \cdot P\{\tilde{n}_s = 0\} + 1 \cdot P\{\tilde{n}_s = 1\} = P\{B\}.$$
Furthermore, the average waiting time in the system is just the average service time. That is, $E[\tilde{w}] = \frac{1}{\mu}$. By Little's result, $E[\tilde{n}_s] = \lambda E[\tilde{w}]$. Thus
$$P\{B\} = E[\tilde{n}_s] = \frac{\lambda}{\mu}.$$
Exercise 3.7 Let $\tilde{w}$ and $\tilde{s}$ denote the length of time an arbitrary customer spends in the queue and in the system, respectively, in stochastic equilibrium. Let $F_{\tilde{s}}(x) \equiv P\{\tilde{s} \leq x\}$ and $F_{\tilde{w}}(x) \equiv P\{\tilde{w} \leq x\}$. Show that
$$F_{\tilde{s}}(x) = 1 - e^{-\mu(1-\rho)x}, \quad \text{for } x \geq 0,$$
and
$$F_{\tilde{w}}(x) = 1 - \rho e^{-\mu(1-\rho)x}, \quad \text{for } x \geq 0,$$
without resorting to the use of Laplace-Stieltjes transform techniques.
Solution. To compute $F_{\tilde{s}}(x)$, condition on the value of $\tilde{n}$, the number of customers in the system:
$$F_{\tilde{s}}(x) = \sum_{n=0}^{\infty} P\{\tilde{s} \leq x \mid \tilde{n} = n\}\, P\{\tilde{n} = n\}. \tag{3.6.1}$$
Condition $F_{\tilde{w}}$ on the value of $\tilde{n}$, the number of customers in the system, and use the fact that the system is in stochastic equilibrium:
$$F_{\tilde{w}}(x) = (1-\rho) + \int_0^x \sum_{n=1}^{\infty}\frac{\mu(\mu\alpha)^{n-1}e^{-\mu\alpha}}{(n-1)!}\,\rho^n(1-\rho)\,d\alpha$$
$$= (1-\rho) + \int_0^x \mu\rho(1-\rho)e^{-\mu\alpha}\sum_{n=0}^{\infty}\frac{(\mu\rho\alpha)^n}{n!}\,d\alpha \quad \text{(reindexing)}$$
$$= (1-\rho) + \int_0^x \mu\rho(1-\rho)e^{-\mu(1-\rho)\alpha}\,d\alpha$$
$$= 1 - \rho e^{-\mu(1-\rho)x}, \quad x \geq 0.$$
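The result can be checked by Monte Carlo. The sketch below (parameters assumed) samples the equilibrium FCFS waiting time via Lindley's recursion — a standard technique, though not one used in the text — and compares the empirical distribution with $F_{\tilde{w}}(x) = 1 - \rho e^{-\mu(1-\rho)x}$.

```python
import math
import random

random.seed(7)
lam, mu = 0.6, 1.0          # assumed rates
rho = lam / mu
w, samples = 0.0, []
for k in range(300_000):
    # Lindley's recursion: w_{n+1} = max(0, w_n + service_n - interarrival_{n+1})
    w = max(0.0, w + random.expovariate(mu) - random.expovariate(lam))
    if k > 10_000:                      # discard warm-up samples
        samples.append(w)

x = 1.0
empirical = sum(wi <= x for wi in samples) / len(samples)
exact = 1 - rho * math.exp(-mu * (1 - rho) * x)
assert abs(empirical - exact) < 0.02
print(empirical, exact)
```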
time to the next departure is simply the time it takes for the $(i+1)$-st customer to complete service. This is exponentially distributed with parameter $\mu$. If the system is left empty, however, the time to the next departure is the time to the next arrival
empty, however, the time to the next departure is the time to the next arrival
plus the time for that arrival to complete service. Denoting the event that the
system is left empty by A,
$$P\{\tilde{d} \leq d \mid A\} = P\{\tilde{a} + \tilde{x} \leq d\}.$$
Recall that in stochastic equilibrium the probability that the system is empty is
(1 − ρ). Then if B denotes the event that the system is not left empty,
$$P\{\tilde{d} \leq d\} = P\{\tilde{d} \leq d \mid B\}P\{B\} + P\{\tilde{d} \leq d \mid A\}P\{A\}
= \rho P\{\tilde{x} \leq d\} + (1-\rho)P\{\tilde{a} + \tilde{x} \leq d\}.$$
Condition on all possible values of $\tilde{a}$ to get $P\{\tilde{a} + \tilde{x} \leq d\}$, noting that $P\{\tilde{a} + \tilde{x} \leq d \mid \tilde{a} = a\} = 0$ if $a > d$. Hence
$$P\{\tilde{d} \leq d\} = \rho\left(1 - e^{-\mu d}\right) + (1-\rho)\int_0^d P\{\tilde{a} + \tilde{x} \leq d \mid \tilde{a} = a\}\,dF_{\tilde{a}}(a)$$
$$= \rho\left(1 - e^{-\mu d}\right) + (1-\rho)\int_0^d P\{a + \tilde{x} \leq d\}\,\lambda e^{-\lambda a}\,da$$
$$= \rho\left(1 - e^{-\mu d}\right) + (1-\rho)\int_0^d \left(1 - e^{-\mu(d-a)}\right)\lambda e^{-\lambda a}\,da$$
$$= 1 - e^{-\lambda d}.$$
This shows that the interdeparture time $\tilde{d}$ is exponentially distributed with parameter $\lambda$; that is, departures occur at the same rate as arrivals, whose interarrival times we know to be independent. Because of this 'rate in equals rate out' characteristic, the interdeparture times are also independent. If the interdeparture times were not independent, then their distribution could not be that of the interarrival times.
Solution.
Now, the probability of the customer entering the queue $n$ times is simply $p^{n-1}(1-p)$, since he returns to the queue $(n-1)$ times with probability $p^{n-1}$ and leaves after the $n$-th visit with probability $(1-p)$. Furthermore, if the customer was in the queue $n$ times, then $P\{\tilde{t} \leq x \mid \tilde{n} = n\}$ is the probability that the sum of his $n$ service times will be less than $x$. That is,
$$P\{\tilde{t} \leq x \mid \tilde{n} = n\} = P\left\{\sum_{i=1}^{n}\tilde{x}_i \leq x\right\} = P\{\tilde{s}_n \leq x\}, \quad \tilde{x}_i \geq 0,$$
where
$$\frac{d}{dx}P\{\tilde{s}_{n+1} \leq x\} = \frac{\mu(\mu x)^n e^{-\mu x}}{n!}.$$
$$F_{\tilde{t}}(x) = \int_0^x \sum_{n=1}^{\infty}\frac{\mu(\mu\alpha)^{n-1}e^{-\mu\alpha}}{(n-1)!}\,p^{n-1}(1-p)\,d\alpha$$
$$= \int_0^x \mu e^{-\mu\alpha}(1-p)\sum_{n=0}^{\infty}\frac{(\mu p\alpha)^n}{n!}\,d\alpha$$
$$= \int_0^x \mu(1-p)e^{-\mu\alpha}e^{\mu p\alpha}\,d\alpha$$
$$= 1 - e^{-\mu(1-p)x}, \quad x \geq 0.$$
2. Consider those customers who, on their first pass through the server, still have increments of service time remaining. Suppose that instead of joining the end of the queue, they immediately reenter service. In effect, they 'use up' all of their service time on the first pass. This will simply rearrange the order of service. Since the queue occupancy does not depend on the order of service, reentering these customers immediately will have no effect on the queue occupancy. Observe that the arrival process is now a Poisson process: since customers complete service in one pass and do not rejoin the end of the queue as they did before, they arrive to the queue only once. Hence, arrivals are independent. With this in mind, we see that the system can now be modeled as an ordinary M/M/1 system with no feedback. From part (a), the total amount of service time for each customer is exponential with parameter $(1-p)\mu$. The distribution of the number of customers in the system in stochastic equilibrium is the same as that of an ordinary M/M/1 system with this new service rate: $P_n = (1-\rho)\rho^n$, where $\rho = \frac{\lambda}{(1-p)\mu}$.
4. Using the results of part (b), model this system as an ordinary M/M/1 queueing system whose service is exponential with rate $(1-p)\mu$. Then $E[\tilde{s}] = \frac{1}{(1-p)\mu}\,\frac{1}{1-\rho}$, where $\rho = \frac{\lambda}{(1-p)\mu}$.
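The claim of part (a) — that the total service demand of a fed-back customer is exponential with rate $(1-p)\mu$ — can be checked by sampling. In the sketch below (parameter values assumed), each customer receives a geometric number of exponential service passes, and the sample mean is compared with $\frac{1}{(1-p)\mu}$.

```python
import random

random.seed(3)
mu, p = 1.0, 0.25           # assumed service rate and feedback probability
totals = []
for _ in range(200_000):
    total = random.expovariate(mu)      # first pass through the server
    while random.random() < p:          # feedback: another full pass
        total += random.expovariate(mu)
    totals.append(total)

mean = sum(totals) / len(totals)
assert abs(mean - 1 / ((1 - p) * mu)) < 0.02
print(mean)
```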
Solution.
1. Let D denote the event that the first customer completes service before the
first arrival after the busy period has begun, and let A denote the comple-
ment of D. Then
E[h̃] = E[h̃|D]P {D} + E[h̃|A]P {A} .
If D occurs then clearly only the initial customer will complete service
during that busy period; that is, $E[\tilde{h}|D] = 1$. On the other hand, if $A$ occurs, then the length of the busy period has been shown to be the length
of the interarrival time plus twice the length of ỹ, a generic busy period.
Now, no service will be completed during the interarrival period, but h̃
service completions will take place during each busy period length ỹ. Thus,
$$E[\tilde{h}] = 1 \cdot P\{D\} + E[\tilde{h} + \tilde{h}]P\{A\} = \frac{\mu}{\lambda+\mu} + 2E[\tilde{h}]\frac{\lambda}{\lambda+\mu}.$$
Then, if $\rho = \frac{\lambda}{\mu}$,
$$E[\tilde{h}] = \frac{\mu}{\mu-\lambda} = \frac{1}{1-\rho}.$$
Thus,
$$E[e^{-s\tilde{y}}] = P\{D\}E[e^{-s\tilde{z}_1}] + P\{A\}E[e^{-s\tilde{z}_1}]\left(E[e^{-s\tilde{y}}]\right)^2.$$
Substituting the expressions for $E[e^{-s\tilde{z}_1}]$, $P\{A\}$, and $P\{D\}$, and using the quadratic formula,
$$E[e^{-s\tilde{y}}] = \frac{(s+\lambda+\mu) \pm \sqrt{(s+\lambda+\mu)^2 - 4\lambda\mu}}{2\lambda}.$$
We now must decide which sign gives the proper function. By the definition
of the Laplace-Stieltjes transform, E[e−sỹ ]|s=0 is simply the cumulative
distribution function of the random variable evaluated at infinity. This value
is known to be 1. Then
$$E[e^{-0\cdot\tilde{y}}] = 1 = \frac{(\lambda+\mu) \pm \sqrt{(\lambda+\mu)^2 - 4\lambda\mu}}{2\lambda}
= \frac{(\lambda+\mu) \pm \sqrt{(\lambda-\mu)^2}}{2\lambda}
= \frac{(\lambda+\mu) \pm (\mu-\lambda)}{2\lambda}.$$
This of course implies that the sign should be negative, i.e.,
$$E[e^{-s\tilde{y}}] = \frac{(s+\lambda+\mu) - \sqrt{(s+\lambda+\mu)^2 - 4\lambda\mu}}{2\lambda}.$$
It remains to show that
$$\frac{d}{dy}F_{\tilde{y}}(y) = \frac{1}{y\sqrt{\rho}}\,e^{-(\lambda+\mu)y}\,I_1(2y\sqrt{\lambda\mu}).$$
We first prove a lemma. Lemma: Let $g(t) = e^{-at}f(t)$. Then $G(s) = F(s+a)$.
Proof: Recall that if $F(s)$ is the Laplace transform of $f(t)$, then
$$F(s) = \int_0^\infty f(t)e^{-st}\,dt.$$
Note that
$$\gamma(s) = \alpha(s)\sqrt{\rho} \implies \alpha(s) = \frac{1}{\sqrt{\rho}}\gamma(s).$$
Then, by applying the lemma,
$$\mathcal{L}^{-1}[\gamma(s)] = e^{-bt}\mathcal{L}^{-1}[\beta(s)] = e^{-bt}\,\frac{1}{t}e^{-at}I_1(at) = \frac{1}{t}e^{-(a+b)t}I_1(at).$$
By linearity, $af(t) \Longleftrightarrow a\mathcal{L}[f(t)]$. Thus
$$\mathcal{L}^{-1}[\alpha(s)] = \frac{1}{\sqrt{\rho}}\mathcal{L}^{-1}[\gamma(s)] = \frac{1}{\sqrt{\rho}}\,\frac{1}{t}e^{-(a+b)t}I_1(at) = \frac{1}{\sqrt{\rho}\,t}\,e^{-(\lambda+\mu)t}\,I_1(2t\sqrt{\lambda\mu}).$$
Or, replacing $t$ with $y$,
$$\frac{d}{dy}F_{\tilde{y}}(y) = \frac{1}{\sqrt{\rho}\,y}\,e^{-(\lambda+\mu)y}\,I_1(2y\sqrt{\lambda\mu}).$$
This is the desired result.
Exercise 3.11 For the M/M/1 queueing system, argue that h̃ is a stop-
ping time for the sequence {x̃i , i = 1, 2, . . .} illustrated in Figure 3.5. Find
E[h̃] by using the results given above for E[ỹ] in combination with Wald’s
equation.
Solution. Recall that for h̃ to be a stopping time for a sequence of random
variables, h̃ must be independent of x̃h̃+1 . Now, h̃ describes the number of
customers served in a busy period and so x̃h̃ is the last customer served during
that particular busy period; all subsequent customers x̃h̃+j , j ≥ 1, are served in
another busy period. Hence h̃ is independent of those customers. This defines
h̃ to be a stopping time for the sequence {x̃i , i = 1, 2, · · ·}. We can thus apply
Wald’s equation
$$E[\tilde{y}] = E\left[\sum_{i=1}^{\tilde{h}}\tilde{x}_i\right] = E[\tilde{h}]E[\tilde{x}].$$
Hence,
$$E[\tilde{h}] = \frac{E[\tilde{y}]}{E[\tilde{x}]} = \frac{(1/\mu)/(1-\rho)}{1/\mu} = \frac{1}{1-\rho}.$$
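Wald's equation can be checked by simulating busy periods directly. In the sketch below (rates assumed), each busy period is run until the system empties; by memorylessness the losing clock in each event race may be resampled. The sample means are compared with $E[\tilde{h}] = \frac{1}{1-\rho}$ and $E[\tilde{y}] = \frac{1/\mu}{1-\rho}$.

```python
import random

random.seed(11)
lam, mu = 0.4, 1.0          # assumed rates
rho = lam / mu
counts, lengths = [], []
for _ in range(100_000):
    n, served, length = 1, 0, 0.0       # busy period starts with one customer
    while n > 0:
        to_departure = random.expovariate(mu)
        to_arrival = random.expovariate(lam)
        if to_departure < to_arrival:
            n -= 1                      # next event is a service completion
            served += 1
        else:
            n += 1                      # next event is an arrival
        length += min(to_departure, to_arrival)
    counts.append(served)
    lengths.append(length)

Eh = sum(counts) / len(counts)
Ey = sum(lengths) / len(lengths)
assert abs(Eh - 1 / (1 - rho)) < 0.05
assert abs(Ey - (1 / mu) / (1 - rho)) < 0.05
print(Eh, Ey)
```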
Exercise 3.12 For the M/M/1 queueing system, argue that E[s̃], the ex-
pected amount of time a customer spends in the system, and the expected
length of a busy period are equal. [Hint: Consider the expected waiting
time of an arbitrary customer in the M/M/1 queueing system under a non-
preemptive LCFS and then use Little’s result.]
Solution. Let $B$ be the event that an arbitrary customer finds the system busy upon arrival, and let $I$ be the event that the system is found to be idle. Then
$$E[\tilde{s}] = E[\tilde{s}|B]P\{B\} + E[\tilde{s}|I]P\{I\}.$$
Now, if the system is idle with probability (1 − ρ), then the customer’s sojourn
time will just be her service time. On the other hand, if the system is busy upon
arrival, then the customer has to wait until the system is empty again to receive
service. That is, w̃ in this case is equivalent to ỹ, and so
$$E[\tilde{s}|B] = E[\tilde{w}] + E[\tilde{x}] = E[\tilde{y}] + \frac{1}{\mu} = \frac{1/\mu}{1-\rho} + \frac{1}{\mu} = \frac{1}{\mu}\left(\frac{1}{1-\rho} + 1\right).$$
Combining the two conditional expectations, we see that
$$E[\tilde{s}] = \frac{\rho}{\mu}\left(\frac{1}{1-\rho} + 1\right) + \frac{1-\rho}{\mu} = \frac{1}{\mu}\,\frac{1}{1-\rho} = E[\tilde{y}].$$
Exercise 3.13 Let $\tilde{s}_{LCFS}$ denote the total amount of time an arbitrary customer spends in the M/M/1 queueing system under a nonpreemptive LCFS discipline. Determine the Laplace-Stieltjes transform of the distribution of $\tilde{s}_{LCFS}$.
Solution. Condition the Laplace-Stieltjes transform of the distribution of $\tilde{s}_{LCFS}$ on whether or not the server is busy when an arbitrary customer arrives. If the server is idle, the customer will immediately enter service and so
the Laplace-Stieltjes transform of s̃LCFS is that of x̃, the service time. If the
server is busy, however, the customer’s total time in the system will be waiting
time in the queue plus service time. It has already been shown in Exercise 3.11
that in a LCFS discipline the waiting time in queue has the same distribution
as ỹ, an arbitrary busy period. Let B denote the event that the customer finds
the server is busy, and let B c denote the event that the server is idle.
Since, in the busy case, $\tilde{s}_{LCFS} = \tilde{y} + \tilde{x}$ with $\tilde{y}$ and $\tilde{x}$ independent, the conditional transform is the product of the individual transforms. Thus
$$E[e^{-s\tilde{s}_{LCFS}}] = E[e^{-s\tilde{s}_{LCFS}}|B^c]P\{B^c\} + E[e^{-s\tilde{s}_{LCFS}}|B]P\{B\}$$
$$= (1-\rho)E[e^{-s\tilde{x}}] + \rho\,E[e^{-s\tilde{y}}]\,E[e^{-s\tilde{x}}]$$
$$= \frac{\mu}{s+\mu}\left[(1-\rho) + \rho\,\frac{(s+\lambda+\mu) - \sqrt{(s+\lambda+\mu)^2 - 4\lambda\mu}}{2\lambda}\right].$$
see that
which we know to be $\frac{\mu+\lambda}{s+\mu+\lambda}$. If the next arrival occurs before the original
customer finishes service, then there will be two customers in the system. Call
this state of the system ‘state 2’. The second server will be activated and the
overall service rate of the system will be 2µ. It will continue to be 2µ until
there is only one customer in the system again. Call this state of the system
‘state 1’. Consider f˜2,1 , the time it takes to return to state 1 from state 2. (This
is called the ‘first passage time from state 2 to state 1.’) Think of the time spent
in state 2 as an ordinary M/M/1 busy period, one in which the service rate is
2µ. Then
$$E[e^{-s\tilde{f}_{2,1}}] = \frac{(s+\lambda+2\mu) - \sqrt{(s+\lambda+2\mu)^2 - 8\lambda\mu}}{2\lambda}.$$
This follows directly from the definition of $E[e^{-s\tilde{y}}]$, the Laplace-Stieltjes transform of a generic busy period in the M/M/1 system, with $2\mu$ substituted for
the service rate. Once the system returns to state 1 again, note that this is ex-
actly the state it was in originally. Because of the Markovian properties, the
expected length of the remaining time in the busy period should be the same as
it was originally: E[e−sỹ2 ]. With these observations, we see that
$$E[e^{-s\tilde{y}_2}|A] = E[e^{-s(\tilde{z}_1 + \tilde{f}_{2,1} + \tilde{y}_2)}],$$
where z̃1 represents the interarrival time between the original customer and the
first arrival after the busy period begins. Hence,
Substituting the expressions for $E[e^{-s\tilde{z}_1}]$, $E[e^{-s\tilde{f}_{2,1}}]$, $P\{A\}$, and $P\{D\}$, and rearranging terms,
$$E[e^{-s\tilde{y}_2}] = \frac{2\mu}{(s+\lambda) + \sqrt{(s+\lambda+2\mu)^2 - 8\lambda\mu}}.$$
Exercise 3.15 We have shown that the number of arrivals from a Pois-
son process with parameter λ, that occur during an exponentially distributed
service time with parameter µ, is geometrically distributed with parameter
µ/(µ + λ); that is, the probability of n arrivals during a service time is
given by [λ/(λ + µ)]n [µ/(λ + µ)]. Determine the mean length of the busy
period by conditioning on the number of arrivals that occur during the first
service time of the busy period. For example, let ñ1 denote the number of
arrivals that occur during the first service time, and start your solution with
the statement
$$E[\tilde{y}] = \sum_{n=0}^{\infty} E[\tilde{y} \mid \tilde{n}_1 = n]\, P\{\tilde{n}_1 = n\}.$$
[Hint: The arrivals segment the service period into a sequence of intervals.]
Solution. Condition E[ỹ] on the number of customers who arrive during the
service time of the original customer. Now, if no customers arrive, then the
busy period ends with the completion of the original customer’s service time.
It has already been shown that $\tilde{y}|\{\tilde{x}_1 < \tilde{t}_1\} = \tilde{z}_1$. Thus,
$$E[\tilde{y} \mid \tilde{n}_1 = 0] = E[\tilde{z}_1] = \frac{1}{\mu+\lambda}.$$
Now suppose that exactly one new customer arrives during the original customer's service period. Then, due to the memoryless property of the exponential distribution, the service time starts over. So the remaining time in this busy period is equal to the length of a busy period in which there are initially two customers present, one of whom completes service with no new arrivals. That is,
$$\tilde{y}|\{\tilde{x}_1 > \tilde{t}_1\} = \tilde{z}_1 + \tilde{y} + \left(\tilde{y}|\{\tilde{x}_1 < \tilde{t}_1\}\right),$$
where the first term, $\tilde{z}_1$, represents the interarrival time between the original customer and the first arrival. We have already shown above that the last term of this sum is equivalent to $\tilde{z}_1$. Hence,
Repeating this process, we see that if there are $n$ arrivals during the service period of the initial customer, then the remaining busy period is equivalent in length to a busy period that has $(n+1)$ customers initially present: $n$ customers about whom we know nothing (and so each contributes a generic busy period), and one customer whose service period ends with no new arrivals.
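The sketch below checks numerically that $E[\tilde{y}] = \frac{1/\mu}{1-\rho}$ satisfies the conditioning identity above. It uses an assumed reading of the argument — that each of the $n$ arrivals contributes a generic busy period and each of the $n+1$ "no-arrival" intervals contributes mean $\frac{1}{\lambda+\mu}$, i.e. $E[\tilde{y} \mid \tilde{n}_1 = n] = \frac{n+1}{\lambda+\mu} + nE[\tilde{y}]$ — together with the geometric distribution of $\tilde{n}_1$.

```python
# Assumed example rates.
lam, mu = 0.7, 1.0
rho = lam / mu
Ey = (1 / mu) / (1 - rho)                  # candidate value of E[y]

total = 0.0
for n in range(200):                       # geometric tail is negligible by n = 200
    pn = (lam / (lam + mu)) ** n * (mu / (lam + mu))
    # E[y | n1 = n] = (n+1)/(lam+mu) + n*E[y]  (assumed reading of the text)
    total += pn * ((n + 1) / (lam + mu) + n * Ey)

assert abs(total - Ey) < 1e-9
print(total, Ey)
```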
Solution. Let Pn (t) denote the probability of having n customers in the system
at time t. Then
$$P_n(t+h) = \sum_{k=0}^{\infty} P\{\tilde{n}(t+h) = n \mid \tilde{n}(t) = k\}\,P_k(t), \tag{3.15.1}$$
where ñ(t+h) = n|ñ(t) = k signifies the event of having n−k arrivals or k−n
departures in the time interval (t, t + h]. But these arrivals and departures are
Poisson, so we may apply the properties of Definition 2 of the Poisson process
(modified to take into account the possibility of deaths. In this case we may
think of Definition 2 as governing changes in state and not simply the event of
births.) Let Di , Ai denote the events of i departures and i arrivals, respectively,
in the time interval $(t, t+h]$. And note that for $i \geq 2$, $P\{D_i\} = P\{A_i\} = o(h)$. Hence, using general $\lambda$ and $\mu$,
$$P\{D_2\} = P\{A_2\} = o(h),$$
$$P\{D_1\} = \mu h + o(h), \qquad P\{A_1\} = \lambda h + o(h),$$
$$P\{D_0\} = 1 - \mu h + o(h), \qquad P\{A_0\} = 1 - \lambda h + o(h).$$
Then, substituting these in (3.15.1) and using state-dependent arrival and de-
parture rates,
Pn′ (t) = −(λn + µn )Pn (t) + λn−1 Pn−1 (t) + µn+1 Pn+1 (t).
$$W(\sigma) = (\sigma I - Q),$$
$$W(\sigma)W^{-1}(\sigma) = I.$$
Now, $W(\sigma)$, $\mathrm{adj}\,W(\sigma)$, and $\det W(\sigma)$ are all continuous functions of $\sigma$. Therefore,
$$\lim_{\sigma\to\sigma_i} \mathrm{adj}\,W(\sigma)\,W(\sigma) = \lim_{\sigma\to\sigma_i} \det W(\sigma)\,I,$$
Solution. By Exercise 3.16, since the rows of $\mathrm{adj}(\sigma_i I - Q)$ are proportional to $M_i$, the left eigenvector of $Q$ corresponding to $\sigma_i$, they must be proportional to each other. Similarly, we may use the same technique as in Exercise 3.16 to show that the columns of $\mathrm{adj}(\sigma_i I - Q)$ are proportional to each other. Since $W(\sigma)W^{-1}(\sigma) = I$, and $W^{-1}(\sigma) = \mathrm{adj}\,W(\sigma)/\det W(\sigma)$, we find that
$$P_1(t) = \frac{\mu}{\lambda+\mu}.$$
Since $P_0(0)$ and $P_1(0)$ were arbitrary, we see that $P_0$, $P_1$ converge to the solution given in the example regardless of the values $P_0(0)$, $P_1(0)$.
Exercise 3.20 For the specific example given here show that the equi-
librium probabilities for the embedded Markov chain are the same as those
for the continuous-time Markov chain.
Solution. As already shown, the probability matrix for the discrete-time em-
bedded Markov chain is
$$P_e = \begin{bmatrix} 1-\rho & \rho \\ 1 & 0 \end{bmatrix}.$$
Thus, using the equation $\pi_e = \pi_e P_e$,
$$\pi_{e0} = \frac{1}{1+\rho}, \qquad \pi_{e1} = \frac{\rho}{1+\rho}.$$
Recall that in the original system, the continuous-time Markov chain was
$$P_O = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$
Now, however, the system can transition from one state back into itself. The one-step transition probability matrix is then
$$P_C = \begin{bmatrix} 1-\rho & \rho \\ 1 & 0 \end{bmatrix}.$$
Since this is precisely the probability matrix for the discrete-time embedded
Markov chain, we see that
$$\pi_{c0} = \pi_{e0} = \frac{1}{1+\rho}, \qquad \pi_{c1} = \pi_{e1} = \frac{\rho}{1+\rho}.$$
Furthermore, by the very definition of being uniformized, the mean occupancy
times for each state on each entry are equal. That is, E[s̃0 ] = E[s̃1 ] = 1/ν.
This implies
$$P_{Ci} = \frac{\pi_{ci}E[\tilde{s}_i]}{\sum_{i=0}^{1}\pi_{ci}E[\tilde{s}_i]}
= \frac{\pi_{ci}\cdot\frac{1}{\nu}}{\pi_{c0}\cdot\frac{1}{\nu} + \pi_{c1}\cdot\frac{1}{\nu}}
= \frac{\pi_{ci}}{\pi_{c0} + \pi_{c1}}
= \pi_{ci} = \pi_{ei}.$$
Thus we see that the equilibrium probabilities for the embedded discrete-time
Markov chain are equal to the equilibrium probabilities for the continuous-time
Markov chain when we randomize the system.
Exercise 3.21 Show that the equilibrium probabilities for the embedded
Markov chain underlying the continuous-time Markov chain are equal to
the equilibrium probabilities for the continuous-time Markov chain.
Solution. Let the capacity of the finite M/M/1 queueing system be K, K>1.
In this general case, we must take ν to be at least as large as λ + µ, the rate
leaving states 1, 2, · · · , K − 1. Then
$$P_e = \begin{bmatrix}
\frac{\nu-\lambda}{\nu} & \frac{\lambda}{\nu} & 0 & 0 & \cdots & 0 & 0 \\
\frac{\mu}{\nu} & \frac{\nu-(\lambda+\mu)}{\nu} & \frac{\lambda}{\nu} & 0 & \cdots & 0 & 0 \\
0 & \frac{\mu}{\nu} & \frac{\nu-(\lambda+\mu)}{\nu} & \frac{\lambda}{\nu} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & \frac{\mu}{\nu} & \frac{\nu-\mu}{\nu}
\end{bmatrix}.$$
Exercise 3.22 For the special case of the finite capacity M/M/1 queue-
ing system with K = 2, λ0 = λ1 = 0.8, and µ1 = µ2 = 1,
determine the time-dependent state probabilities by first solving the dif-
ferential equation (3.32) directly and then using uniformization for t =
0.0, 0.2, 0.4, . . . , 1.8, 2.0 with P0 (0) = 1, plotting the results for P0 (t),
P1 (t), and P2 (t). Compare the quality of the results and the relative diffi-
culty of obtaining the numbers.
Solution. For $K = 2$, the matrix $Q$ is:
$$Q = \begin{bmatrix} -\lambda & \lambda & 0 \\ \mu & -(\lambda+\mu) & \lambda \\ 0 & \mu & -\mu \end{bmatrix}
= \begin{bmatrix} -0.8 & 0.8 & 0 \\ 1.0 & -1.8 & 0.8 \\ 0 & 1.0 & -1.0 \end{bmatrix}.$$
The eigenvalues of $Q$ are found to be $0$, $-0.9055728$, and $-2.6944272$, and their corresponding eigenvectors are proportional to $[\,1\ 1\ 1\,]^T$, $[\,-7.5777088\ 1\ 10.5901699\,]^T$, and $[\,-0.42229124\ 1\ -0.5901699\,]^T$, respectively. Thus, we find $P(t) = M e^{\Lambda t} M^{-1}$, where
$$M = \begin{bmatrix} 1 & -7.5777088 & -0.42229124 \\ 1 & 1 & 1 \\ 1 & 10.5901699 & -0.5901699 \end{bmatrix},
\qquad
e^{\Lambda t} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & e^{-0.9055728t} & 0 \\ 0 & 0 & e^{-2.6944272t} \end{bmatrix},$$
and each row of $P(t)$ converges to the equilibrium probabilities $[\,0.4098361\ \ 0.3278688\ \ 0.2622951\,]$.
[Figure: time-dependent state probabilities $P_0(t)$, $P_1(t)$, and $P_2(t)$ plotted versus time (sec) for $0.2 \leq t \leq 2.0$.]
The quality of the computed results differed with the values of $t$ and $\epsilon$, where $\epsilon$ determines when the summation in Equation 3.48 may be stopped. As $t$, dependent on $\epsilon$, increased to a threshold value, the equilibrium probabilities improved. And as $\epsilon$ decreased, the values of $t$ were allowed to become larger and larger, bringing the equilibrium probabilities closer to those determined by hand.
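The uniformization computation can be sketched as follows, using the $K = 2$ generator above. The series $P(t) = \sum_k e^{-\nu t}\frac{(\nu t)^k}{k!} P_u^k$ with $P_u = I + Q/\nu$ is truncated once the remaining Poisson tail mass falls below $\epsilon$ (the value of $\nu$ is an assumed choice satisfying $\nu \geq \max_i(-Q_{ii})$).

```python
import math

Q = [[-0.8, 0.8, 0.0], [1.0, -1.8, 0.8], [0.0, 1.0, -1.0]]
nu = 2.0                                   # uniformization rate, nu >= max_i(-Q_ii)
I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
Pu = [[I3[i][j] + Q[i][j] / nu for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def uniformized(t, eps=1e-12):
    # P(t) = sum_k Poisson(k; nu*t) * Pu^k, truncated when tail mass < eps.
    weight = math.exp(-nu * t)
    power = [row[:] for row in I3]
    result = [[weight * power[i][j] for j in range(3)] for i in range(3)]
    cum, k = weight, 0
    while 1.0 - cum > eps:
        k += 1
        weight *= nu * t / k
        power = matmul(power, Pu)
        for i in range(3):
            for j in range(3):
                result[i][j] += weight * power[i][j]
        cum += weight
    return result

P = uniformized(2.0)
print(P[0])      # state probabilities at t = 2.0 starting from state 0
```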
Exercise 3.23 For the special case of the finite-capacity M/M/1 system,
show that for K = 1, 2, . . . ,
$$P_B(K) = \frac{\rho P_B(K-1)}{1 + \rho P_B(K-1)},$$
where PB (0) = 1.
Solution. By Equation (3.60), if the finite capacity is K,
$$P_B(K) = \rho^K\,\frac{1-\rho}{1-\rho^{K+1}}.$$
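The recursion and the closed form can be checked against each other numerically ($\rho$ below is an assumed example value):

```python
rho = 0.5
pb = 1.0                                   # P_B(0) = 1
for K in range(1, 11):
    pb = rho * pb / (1 + rho * pb)         # recursion from the exercise
    closed = rho ** K * (1 - rho) / (1 - rho ** (K + 1))   # Equation (3.60)
    assert abs(pb - closed) < 1e-12, (K, pb, closed)
print("recursion matches the closed form for K = 1..10")
```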
Exercise 3.24 For the finite-state general birth-death process, show that
for K = 1, 2, . . . ,
$$P_B(K) = \frac{(\lambda_K/\mu_K)P_B(K-1)}{1 + (\lambda_K/\mu_K)P_B(K-1)},$$
where PB (0) = 1.
Solution. If the finite capacity of the queueing system is $K$, then by combining Equations (3.56) and (3.58),
$$P_K = \frac{\displaystyle\prod_{i=0}^{K-1}\lambda_i \Big/ \prod_{i=1}^{K}\mu_i}{\displaystyle 1 + \sum_{n=1}^{K}\left(\prod_{i=0}^{n-1}\lambda_i \Big/ \prod_{i=1}^{n}\mu_i\right)}
\qquad\text{and}\qquad
P_{K-1} = \frac{\displaystyle\prod_{i=0}^{K-2}\lambda_i \Big/ \prod_{i=1}^{K-1}\mu_i}{\displaystyle 1 + \sum_{n=1}^{K-1}\left(\prod_{i=0}^{n-1}\lambda_i \Big/ \prod_{i=1}^{n}\mu_i\right)}.$$
The numerator of $P_K$ is
$$\frac{\lambda_{K-1}}{\mu_K}\,\frac{\prod_{i=0}^{K-2}\lambda_i}{\prod_{i=1}^{K-1}\mu_i},$$
and its denominator is
$$1 + \sum_{n=1}^{K-1}\frac{\prod_{i=0}^{n-1}\lambda_i}{\prod_{i=1}^{n}\mu_i} + \frac{\lambda_{K-1}}{\mu_K}\,\frac{\prod_{i=0}^{K-2}\lambda_i}{\prod_{i=1}^{K-1}\mu_i}.$$
Dividing both numerator and denominator by
$$1 + \sum_{n=1}^{K-1}\frac{\prod_{i=0}^{n-1}\lambda_i}{\prod_{i=1}^{n}\mu_i},$$
and recognizing $P_B(K) = P_K$ and $P_B(K-1) = P_{K-1}$, yields
$$P_B(K) = \frac{(\lambda_{K-1}/\mu_K)\,P_B(K-1)}{1 + (\lambda_{K-1}/\mu_K)\,P_B(K-1)},$$
which is the stated recursion, with $\lambda_{K-1}$ the arrival rate that carries the system into state $K$ and $\mu_K$ the service rate in state $K$.
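The general recursion can be checked against the direct product-form computation. The sketch below uses arbitrary assumed state-dependent rates:

```python
lams = [1.0, 0.8, 0.6, 0.4, 0.2]           # lambda_0 .. lambda_4 (assumed values)
mus = [1.0, 1.1, 1.2, 1.3, 1.4]            # mu_1 .. mu_5 (assumed values)

def direct_PK(K):
    # P_K = N_K / (1 + sum_{n=1..K} N_n), with N_n = prod(lam) / prod(mu)
    N = [1.0]
    for n in range(K):
        N.append(N[-1] * lams[n] / mus[n])
    return N[K] / sum(N)

pb = 1.0                                   # P_B(0) = 1
for K in range(1, 6):
    r = lams[K - 1] / mus[K - 1]           # lambda_{K-1} / mu_K
    pb = r * pb / (1 + r * pb)             # the recursion derived above
    assert abs(pb - direct_PK(K)) < 1e-12, (K, pb, direct_PK(K))
print("recursion matches the direct computation")
```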
Exercise 3.26 Using the concept of local balance, write and solve the
balance equations for the general birth-death process shown in Figure 3.12.
Solution. In Figure 3.12, the initial boundary is between the nodes represent-
ing state 1 and state 2. We see that the rate going from the left side of the
boundary into the right side (that is, from state 1 into state 2) is λ1 . The rate
going from the right side of the boundary into the left side (from state 2 into
state 1) is $\mu_2$. Since the 'rate in' must equal the 'rate out', we must have
$$\lambda_1 P_1 = \mu_2 P_2, \quad \text{i.e.,} \quad P_2 = \frac{\lambda_1}{\mu_2}P_1.$$
µ2
Moving the boundary between state 0 and state 1, the local balance is
$$P_1 = \frac{\lambda_0}{\mu_1}P_0.$$
Substituting this into the earlier result for $P_2$, we see that
$$P_2 = \frac{\lambda_1\lambda_0}{\mu_2\mu_1}P_0.$$
Repeating this process by moving the boundary between every pair of states, we get the general result
$$P_n = \frac{\prod_{i=0}^{n-1}\lambda_i}{\prod_{i=1}^{n}\mu_i}\,P_0.$$
Since $F_{\tilde{x}}(z) = E[z^{\tilde{x}}]$, we may differentiate each side with respect to $z$:
$$\frac{d}{dz}F_{\tilde{x}}(z) = \frac{d}{dz}E[z^{\tilde{x}}] = \frac{d}{dz}\int_x z^x\,dF_{\tilde{x}}(x) = \int_x x z^{x-1}\,dF_{\tilde{x}}(x) = E[\tilde{x}z^{\tilde{x}-1}].$$
Hence,
$$\lim_{z\to 1}\frac{d}{dz}F_{\tilde{x}}(z) = \lim_{z\to 1}E[\tilde{x}z^{\tilde{x}-1}] = E[\tilde{x}].$$
Now use induction on $n$, assuming the result holds for $n-1$. That is, assume that
$$\frac{d^{n-1}}{dz^{n-1}}F_{\tilde{x}}(z) = E\left[\tilde{x}(\tilde{x}-1)\cdots(\tilde{x}-n+2)\,z^{\tilde{x}-n+1}\right].$$
Then, differentiating this expression with respect to $z$,
$$\frac{d^n}{dz^n}F_{\tilde{x}}(z) = \frac{d}{dz}\int_x x(x-1)\cdots(x-n+2)\,z^{x-n+1}\,dF_{\tilde{x}}(x)$$
$$= \int_x x(x-1)\cdots(x-n+2)(x-n+1)\,z^{x-n}\,dF_{\tilde{x}}(x) = E\left[\tilde{x}(\tilde{x}-1)\cdots(\tilde{x}-n+1)\,z^{\tilde{x}-n}\right].$$
Thus,
$$\lim_{z\to 1}\frac{d^n}{dz^n}F_{\tilde{x}}(z) = \lim_{z\to 1}E\left[\tilde{x}(\tilde{x}-1)\cdots(\tilde{x}-n+1)\,z^{\tilde{x}-n}\right] = E[\tilde{x}(\tilde{x}-1)\cdots(\tilde{x}-n+1)].$$
On the other hand, the Maclaurin series expansion for $F_{\tilde{x}}(z)$ is as follows:
$$F_{\tilde{x}}(z) = \sum_{n=0}^{\infty}\frac{1}{n!}\left.\frac{d^n}{dz^n}F_{\tilde{x}}(z)\right|_{z=0} z^n.$$
But $E[\tilde{n}^2 - \tilde{n}] = E[\tilde{n}^2] - E[\tilde{n}]$, so after a little arithmetic we find that
$$E[\tilde{n}^2] = \frac{\rho^2+\rho}{(1-\rho)^2}.$$
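The moment formulas can be confirmed by summing the M/M/1 occupancy pmf $P\{\tilde{n} = k\} = (1-\rho)\rho^k$ directly ($\rho$ below is an assumed example value):

```python
rho = 0.4
pmf = [(1 - rho) * rho ** k for k in range(2000)]   # tail beyond k = 2000 is negligible
En = sum(k * p for k, p in enumerate(pmf))
En2 = sum(k * k * p for k, p in enumerate(pmf))
assert abs(En - rho / (1 - rho)) < 1e-9             # E[n]
assert abs(En2 - (rho ** 2 + rho) / (1 - rho) ** 2) < 1e-9   # E[n^2] from above
print(En, En2)
```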
(a) Let x̃ denote the time required for transmission of a message over the
trunk. Show that x̃ has the exponential distribution with parameter
µC/8.
(b) Let E[m̃] = 128 octets and C = 56 kilobits per second (kb/s). Deter-
mine λmax , the maximum message-carrying capacity of the trunk.
(c) Let ñ denote the number of messages in the system in stochastic equi-
librium. Under the conditions of (b), determine P {ñ > n} as a func-
tion of λ. Determine the maximum value of λ such that P {ñ > 50} <
10−2 .
(d) For the value of λ determined in part (c), determine the minimum
value of s such that P {s̃ > s} < 10−2 , where s̃ is the total amount
of time a message spends in the system.
(e) Using the value of λ obtained in part (c), determine the maximum
value of K, the system capacity, such that PB (K) < 10−2 .
Solution:
where the last equation follows from the fact that $\tilde{m}$ is exponential with parameter $\mu$. Therefore, $\tilde{x} = 8\tilde{m}/C$ is exponential with parameter $\mu C/8$.
(b) Since $E[\tilde{m}] = 128$ octets, $\mu = \frac{1}{128}$. Therefore, $\tilde{x}$ is exponentially distributed with parameter $\frac{\mu C}{8} = \frac{7}{128}\times 10^3$ messages per second. Then $E[\tilde{x}] = \frac{128}{7}\times 10^{-3}$ sec. Since $\lambda_{\max}E[\tilde{x}] = \rho < 1$,
$$\lambda < \frac{1}{E[\tilde{x}]} = \frac{7}{128}\times 10^3 = 54.6875 \text{ msg/sec}.$$
(c) We know from (3.10) that $P\{\tilde{n} > n\} = \rho^{n+1}$. Thus, $P\{\tilde{n} > 50\} = \rho^{51}$. Now, $\rho^{51} < 10^{-2}$ implies
$$51\log_{10}\rho < -2, \quad \text{i.e.,} \quad \log_{10}\rho < -\frac{2}{51}.$$
Hence, $\rho = \lambda E[\tilde{x}] < 10^{-2/51} = 0.91366$, so that
$$\lambda < 0.91366 \times 54.6875 = 49.97 \text{ msg/sec}.$$
(d) Since $P\{\tilde{s} > s\} = e^{-\mu(1-\rho)s}$ for the M/M/1 system, $P\{\tilde{s} > s\} < 10^{-2}$ requires
$$-\mu(1-\rho)s < -2\ln 10,$$
i.e.,
$$s > \frac{2\ln 10}{\mu(1-\rho)} = \frac{2\ln 10}{\frac{7}{128}\times 10^3\,(1-0.91366)} = \frac{4.60517}{4.7217} = 0.9753 \text{ sec} = 975.3 \text{ ms}.$$
(e) We wish to find $K$ such that $P_B(K) < 10^{-2}$. First, we set $P_B(K) = 10^{-2}$ and solve for $K$:
$$\frac{1-\rho}{1-\rho^{K+1}}\rho^K = 10^{-2}$$
$$(1-\rho)\rho^K = 10^{-2}\left(1-\rho^{K+1}\right) = 10^{-2} - 10^{-2}\rho^{K+1}$$
$$(1-\rho)\rho^K + 10^{-2}\rho^{K+1} = 10^{-2}$$
$$\left(1-\rho+10^{-2}\rho\right)\rho^K = 10^{-2}$$
$$\rho^K = \frac{10^{-2}}{1-0.99\rho}.$$
Therefore,
$$K = \frac{-2\ln 10 - \ln(1-0.99\rho)}{\ln\rho} = \frac{-\left[2\ln 10 + \ln(1-0.99\rho)\right]}{\ln\rho}.$$
For $\rho = 0.91366$,
$$K = \frac{-[4.60517 - 2.348874]}{-0.09029676} = \frac{2.256296}{0.09029676} = 24.99,$$
so the smallest capacity for which $P_B(K) < 10^{-2}$ is $K = 25$.
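The capacity computation can be verified by a direct search over $K$ using the blocking formula:

```python
# rho carried over from part (c).
rho = 0.91366
K = 1
while rho ** K * (1 - rho) / (1 - rho ** (K + 1)) >= 1e-2:
    K += 1
print(K)      # smallest capacity with P_B(K) < 1e-2
```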
(a) The first passage time from state i to state i − 1 is the total amount
of time the system spends in all states from the time it first enters
state i until it makes its first transition to the state i − 1. Let s̃i denote
the total cumulative time the system spends in state i during the first
passage time from state i to state i − 1. Determine the distribution of
s̃i .
(b) Determine the distribution of the number of visits from state i to state
i + 1 during the first passage time from state i to i − 1.
(c) Show that E[ỹK ], the expected length of a busy period, is given by
the following recursion:
$$E[\tilde{y}_K] = \frac{1}{\mu}\left\{1 + \lambda(K-1)E[\tilde{y}_{K-1}]\right\}, \quad \text{with } E[\tilde{y}_0] = 0.$$
[Hint: Use the distribution found in part (b) in combination with the
result of part (a) as part of the proof.]
(d) Let P0 (K) denote the stochastic equilibrium probability that the com-
munication channel is idle. Determine P0 (K) using ordinary birth-
death process analysis.
(e) Let E[ı̃K ] denote the expected length of the idle period for the com-
munication channel. Verify that P0 (K) is given by the ratio of the
expected length of the idle period to the sum of the expected lengths
of the idle and busy periods; that is,
E[ı̃K ]
P0 (K) =
E[ı̃K ] + E[ỹK ]
which can be determined iteratively by
$$P_0(K) = \frac{1}{1 + [(K\lambda)/\mu]\left\{1 + (K-1)\lambda E[\tilde{y}_{K-1}]\right\}}.$$
That is, show that P0 (K) computed by the formula just stated is iden-
tical to that obtained in part (d).
[State diagram: states $0, 1, 2, \ldots, K-1, K$ with arrival rates $K\lambda, (K-1)\lambda, (K-2)\lambda, \ldots, 2\lambda, \lambda$.]
Solution:
(a) The state diagram is as in Figure 3.2. For a typical state, say i, we
wish to know the total amount of time spent in state i before the sys-
tem transitions from state i to i − 1 for the first time, i = 1, 2, · · · , K.
Now, on each visit to state i, the time spent in state i is the minimum
of two exponential random variables: the time to first arrival, which
has parameter (K − i)λ, and the time to first departure, which has
parameter µ. Thus, the time spent in state i on each visit is exponen-
tially distributed with parameter (K − i)λ + µ. Now, the time spent in
state i on the j −th visit is independent of everything, and in particular
on the number of visits to state i before transitioning to state i−1. Let
$\tilde{s}_{ij}$ denote the time spent in state $i$ on the $j$-th visit and $\tilde{v}_i$ denote the number of visits to state $i$ during the first passage from $i$ to $i-1$. Then $\tilde{s}_i = \sum_{j=1}^{\tilde{v}_i}\tilde{s}_{ij}$. Now, the number of visits to state $i$ is geometrically
distributed (see proof of exponential random variables, page 35 of the
text) with parameter µ/ [(K − i)λ + µ] because the probability that a
given visit is the last visit is the same as the probability of a service
completion before arrival (see Proposition 3 of exponential random
variables, page 35 of the text) and this probability is independent of
the number of visits that have occurred up to this point. Thus, $\tilde{s}_i$ is the geometric sum of exponential random variables and is therefore exponentially distributed with parameter $[(K-i)\lambda+\mu]\cdot\frac{\mu}{(K-i)\lambda+\mu} = \mu$,
which follows directly from Property 5 of exponential random vari-
ables, given on page 35 of the text. In summary, the total amount of
time spent in any state before transitioning to the next lower state is
exponentially distributed with parameter µ.
(b) From arguments similar to those of part (a), the number of visits to
state i + 1 from state i is also geometric. In this case,
P {ṽi = k} = [µ/((K − i)λ + µ)] [((K − i)λ)/((K − i)λ + µ)]^k ,

so that the mean number of visits is

[(K − i)λ + µ]/µ − 1 = (K − i)λ/µ, for i = 1, 2, · · · , K − 1.
For i = K, the number of visits is exactly 1.
(c) First note that ỹK−i , the busy period for a system having K − i cus-
tomers, has the same distribution as the first passage time from state
i + 1 to i. From part (b), the expected minimum number of visits to
state i + 1 from state i is (K − i)λ/µ. Thus, the total time spent in states
i + 1 and above during a first passage from state i to i − 1 is [(K − i)λ/µ] E[ỹK−i ],
Elementary CTMC-Based Queueing Models 87
so that we may compute E[ỹK ] for any value of K by using the recursion,
starting with n = 1,

E[ỹn ] = (1/µ) (1 + (n − 1)λ E[ỹn−1 ]) ,

with E[ỹ0 ] = 0, and iterating up to n = K.
(d) Using ordinary birth-death analysis, we find from (3.58) that

P0 = 1 / (1 + Σ_{n=1}^{∞} [Π_{i=0}^{n−1} λi] / [Π_{i=1}^{n} µi])
   = 1 / (1 + Σ_{n=1}^{K} [Π_{i=0}^{n−1} λi] / [Π_{i=1}^{n} µi])
   = 1 / (1 + Σ_{n=1}^{K} (λ/µ)^n Π_{i=0}^{n−1} (K − i)).

But Π_{i=0}^{n−1} (K − i) = K(K − 1) · · · [K − (n − 1)] = K!/(K − n)!. So

P0 = 1 / (1 + Σ_{n=1}^{K} (λ/µ)^n K!/(K − n)!).
(e) The expression given for P0 (K) is readily converted to the final form
as follows:

P0 (K) = [1/(Kλ)] / [1/(Kλ) + E[ỹK ]]
       = [1/(Kλ)] / [1/(Kλ) + (1/µ)(1 + λ(K − 1)E[ỹK−1 ])]
       = 1 / (1 + (Kλ/µ)[1 + λ(K − 1)E[ỹK−1 ]]).
proof: Let T denote the truth set for the proposition. With K = 1, we
find E[ỹ1 ] = 1/µ, which is clearly correct. Now, suppose (K − 1) ∈ T .
Then, by hypothesis,

E[ỹK−1 ] = (1/µ) [1 + Σ_{n=1}^{K−2} ((K − 2)!/(K − 2 − n)!) (λ/µ)^n],     (∗)

so that, by the recursion of part (c),

E[ỹK ] = (1/µ)(1 + (K − 1)λE[ỹK−1 ]) = (1/µ) [1 + Σ_{n=1}^{K−1} ((K − 1)!/(K − 1 − n)!) (λ/µ)^n].

We now substitute the expression (∗) for E[ỹK−1 ] into the given expression:

P0 (K) = [1 + λ(K/µ) (1 + (K − 1)λE[ỹK−1 ])]^{−1}
       = [1 + λ(K/µ) (1 + Σ_{n=1}^{K−1} ((K − 1)!/(K − 1 − n)!) (λ/µ)^n)]^{−1}
       = [1 + Σ_{n=1}^{K} (K!/(K − n)!) (λ/µ)^n]^{−1},

which is identical to the expression obtained in part (d).
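The equivalence just proved is easy to check numerically. The following sketch (the values λ = 0.5, µ = 2, K = 5 are illustrative assumptions; any positive values work) iterates the recursion of part (c) and compares the resulting P0 (K) with the closed form of part (d):

```python
import math

# Illustrative parameters (assumed values, not from the text).
lam, mu, K = 0.5, 2.0, 5

# Recursion of part (c): E[y_n] = (1 + (n - 1) * lam * E[y_{n-1}]) / mu,
# starting from E[y_0] = 0; after the loop, Ey holds E[y_K].
Ey = 0.0
for n in range(1, K + 1):
    Ey = (1.0 + (n - 1) * lam * Ey) / mu

# Renewal argument of part (e): P0(K) = (1/(K lam)) / (1/(K lam) + E[y_K]).
p0_recursive = (1.0 / (K * lam)) / (1.0 / (K * lam) + Ey)

# Closed form of part (d): P0 = 1 / (1 + sum_{n=1}^{K} (lam/mu)^n K!/(K-n)!).
p0_closed = 1.0 / (1.0 + sum((lam / mu) ** n * (math.factorial(K) / math.factorial(K - n))
                             for n in range(1, K + 1)))

assert abs(p0_recursive - p0_closed) < 1e-12
```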
3-3 Traffic engineering with finite population. Ten students in a certain grad-
uate program share an office that has four telephones. The students are
always busy doing one of two activities: doing queueing homework (work
state) or using the telephone (service state); no other activities are allowed
- ever. Each student operates continuously as follows: the student is ini-
tially in the work state for an exponential, rate β, period of time. The stu-
dent then attempts to use one of the telephones. If all telephones are busy,
then the student is blocked and returns immediately to the work state. If a
telephone is available, the student uses the telephone for a length of time
drawn from an exponential distribution with rate µ and then returns to the
work state.
(g) Compare the results of part (f) to those of the Erlang loss system hav-
ing 4 servers and total offered traffic equal to that of part (f). That
is, for each value of β, there is a total offered traffic rate for the sys-
tem specified in this problem. Use this total offered traffic to obtain a
value of λ, and then obtain the blocking probability that would result
in the Erlang loss system, and plot this result on the same graph as the
results obtained in (f). Then compare the results.
Solution.
(a) Define the state of the system as the number of students using the
telephones. This can have values 0, 1, 2, 3, 4.
(b) The state-transition-rate diagram is the birth-death diagram on the
states 0, 1, 2, 3, 4 with birth rates 10β, 9β, 8β, 7β and death rates µ,
2µ, 3µ, 4µ.
(c) The balance equations can be either local or global; we choose local.
10βP0 = µP1
9βP1 = 2µP2
8βP2 = 3µP3
7βP3 = 4µP4 .
(d) All arrivals that occur while the system is in state 4 are blocked. The
arrival rate while in state 4 is 6β. Over a long period of time T , the
number of blocked attempts is 6βP4 T while the total number of ar-
rivals is (10βP0 + 9βP1 + 8βP2 + 7βP3 + 6βP4 ) T . Taking ratios,
we find that the proportion of blocked calls is

6P4 / (10P0 + 9P1 + 8P2 + 7P3 + 6P4 ).
(e) The average call generation rate is 10P0 β + 9P1 β + 8P2 β + 7P3 β +
6P4 β.
Therefore,

P1 = 10(β/µ) / [ Σ_{i=0}^{4} C(10, i)(β/µ)^i ],

and, in general,

Pi = C(10, i)(β/µ)^i / [ Σ_{j=0}^{4} C(10, j)(β/µ)^j ]   for i = 0, 1, 2, 3, 4,

where C(n, k) denotes the binomial coefficient. With µ = 1/3, we have

λ(β) = β [ Σ_{i=0}^{4} (10 − i) C(10, i)(3β)^i ] / [ Σ_{j=0}^{4} C(10, j)(3β)^j ]

and

PB (β) = 6 C(10, 4)(3β)^4 / [ Σ_{j=0}^{4} (10 − j) C(10, j)(3β)^j ].
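The expressions above are straightforward to evaluate; a short sketch (the function name and the sample value β = 0.1 are illustrative assumptions):

```python
from math import comb

def blocking_probability(beta, mu=1/3, N=10, c=4):
    # Finite-population weights P_i proportional to C(N, i) (beta/mu)^i, i = 0..c,
    # and part (d)'s blocked-call ratio (N - c) P_c / sum_i (N - i) P_i.
    r = beta / mu
    w = [comb(N, i) * r ** i for i in range(c + 1)]
    return (N - c) * w[c] / sum((N - i) * w[i] for i in range(c + 1))

pb = blocking_probability(0.1)
assert 0.0 < pb < 1.0
# Blocking should increase with the think-completion rate beta.
assert blocking_probability(0.5) > blocking_probability(0.1)
```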
3-4 A company has six employees who use a leased line to access a database.
Each employee has a think time which is exponentially distributed with
parameter λ. Upon completion of the think time, the employee needs
the database and joins a queue along with other employees who may be
waiting for the leased line to access the database. Holding times are ex-
ponentially distributed with parameter µ. When the number of waiting
employees reaches a level 2, use of an auxiliary line is authorized. The
time required for the employee to obtain the authorization is exponentially
distributed with rate τ . If the authorization is completed when there are
less than three employees waiting or if the number of employees waiting
drops below two at any time while the extra line is in use, the extra line is
immediately disconnected.
(a) Argue that the set {0, 1, 2, 3, 3r, 3a, 4r, 4a, 5r, 5a, 6r, 6a}, where the
numbers indicate the number of employees waiting and in service, the
letter r indicates that authorization has been requested, and the letter
a indicates that the auxiliary line is actually available for service, is a
suitable state space for this process.
(b) The situation in state 4r is that there are employees waiting and in
service and an authorization has been requested. With the process in
state 4r at time t0 , list the events that would cause a change in the state
of the process.
(c) Compute the probability that each of the possible events listed in part
(b) would actually cause the change of state, and specify the new state
of the process following the event.
(d) What is the distribution of the amount of time the system spends in
state 4r on each visit? Explain.
(e) Draw the state transition rate diagram.
(f) Write the balance equations for the system.
Solution.
(a) Two waiting employees means that there are three employees waiting
and using the system. At a point in time, if an employee requests a
line, then there is a waiting period before the line is authorized. The
idea is that the system behaves differently when the number of wait-
ing persons is increasing than when the number of waiting persons is
decreasing. When the third person arrives, the system is immediately
in the request mode 3r. But, if no other customers arrive before the
line is authorized, the line is dropped, leaving 3 in the system, but
only one line. If a person arrives while the system is in state 3, the
line is again requested, putting the system in state 4r. If the line au-
thorization is complete while the system is in 4r, 5r, or 6r, then the
system goes into state 4a, 5a, or 6a, respectively.
(b) While in state 4r, the possible next events are customer arrival, cus-
tomer service, line authorization complete, with rates 2λ, µ, and τ ,
respectively.
(c) Let A, S, and AC denote the events customer arrival, customer ser-
vice, line authorization complete, respectively. Then
2λ
P {A} =
2λ + µ + τ
µ
P {S} =
2λ + µ + τ
τ
P {AC} = .
2λ + µ + τ
(d) The time spent in state 4r on each visit is the minimum of three independent
exponential random variables with rates 2λ, µ, and τ , and is therefore
exponentially distributed with parameter 2λ + µ + τ .
(e) See Figure 3.4 for the state transition rate diagram.
(f) The balance equations are as follows (recall that in an r state a single
line is in service, at rate µ, while the authorization is pending, and in an
a state the auxiliary line is in use, so that service proceeds at rate 2µ):

0 : 6λP0 = µP1
1 : (5λ + µ)P1 = 6λP0 + µP2
2 : (4λ + µ)P2 = 5λP1 + µP3 + µP3r + 2µP3a
3 : (3λ + µ)P3 = τ P3r
3r : (3λ + τ + µ)P3r = 4λP2 + µP4r
3a : (3λ + 2µ)P3a = 2µP4a
4r : (2λ + τ + µ)P4r = 3λP3 + 3λP3r + µP5r
4a : (2λ + 2µ)P4a = 3λP3a + τ P4r + 2µP5a
5r : (λ + τ + µ)P5r = 2λP4r + µP6r
5a : (λ + 2µ)P5a = 2λP4a + 2µP6a + τ P5r
6r : (µ + τ )P6r = λP5r
6a : 2µP6a = λP5a + τ P6r
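These equations can be checked by solving the chain numerically. The sketch below uses illustrative rates and assumes the convention that in the r states a single line serves at rate µ while the authorization is pending, and in the a states the auxiliary line is in use so service proceeds at rate 2µ; it builds the generator and verifies that the stationary vector satisfies PQ = 0:

```python
import numpy as np

# Illustrative rates (assumed, not from the text).
lam, mu, tau = 1.0, 2.0, 3.0

states = ['0', '1', '2', '3', '3r', '3a', '4r', '4a', '5r', '5a', '6r', '6a']
ix = {s: i for i, s in enumerate(states)}

rates = {
    ('0', '1'): 6*lam,
    ('1', '0'): mu,     ('1', '2'): 5*lam,
    ('2', '1'): mu,     ('2', '3r'): 4*lam,
    ('3', '2'): mu,     ('3', '4r'): 3*lam,
    ('3r', '3'): tau,   ('3r', '2'): mu,    ('3r', '4r'): 3*lam,
    ('3a', '2'): 2*mu,  ('3a', '4a'): 3*lam,
    ('4r', '3r'): mu,   ('4r', '4a'): tau,  ('4r', '5r'): 2*lam,
    ('4a', '3a'): 2*mu, ('4a', '5a'): 2*lam,
    ('5r', '4r'): mu,   ('5r', '5a'): tau,  ('5r', '6r'): lam,
    ('5a', '4a'): 2*mu, ('5a', '6a'): lam,
    ('6r', '5r'): mu,   ('6r', '6a'): tau,
    ('6a', '5a'): 2*mu,
}

Q = np.zeros((12, 12))
for (s, t), r in rates.items():
    Q[ix[s], ix[t]] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary vector: replace one balance equation by the normalization.
A = Q.T.copy()
A[-1, :] = 1.0
rhs = np.zeros(12)
rhs[-1] = 1.0
P = np.linalg.solve(A, rhs)

assert np.all(P > 0) and abs(P.sum() - 1.0) < 1e-10
assert np.allclose(P @ Q, 0.0, atol=1e-10)
```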
Solution.
(a) Figure 3.5 details two state diagrams for the system. Note that the first
passage time from state 2 to state 1 in the first diagram is the same as
the length of the busy period in the second diagram, i.e., that of the M/M/1
system with service rate 2µ. Thus,

E[f̃21 ] = E[ỹM/M/1 ] = (1/(2µ)) / (1 − λ/(2µ)).
(b) Let A denote the event of an arrival before the first departure from a
busy period, and let Ac denote its complement. Then,

E[ỹ] = 1/(λ + µ) + [λ/(λ + µ)] ( (1/(2µ))/(1 − λ/(2µ)) + E[ỹ] ) .

Thus,

[µ/(λ + µ)] E[ỹ] = 1/(λ + µ) + [λ/(λ + µ)] (1/(2µ))/(1 − λ/(2µ)),

i.e.,

E[ỹ] = 1/µ + (λ/µ) (1/(2µ))/(1 − λ/(2µ)).
(c) c̃ = ĩ + ỹ implies that

E[c̃] = 1/λ + 1/µ + (λ/µ)(1/(2µ))/(1 − λ/(2µ)).

Therefore,

P {system is idle} = (1/λ) / [ 1/λ + 1/µ + (λ/µ)(1/(2µ))/(1 − λ/(2µ)) ]
  = 1 / ( 1 + (λ/µ)[1 + (λ/(2µ))/(1 − λ/(2µ))] )
  = 1 / ( 1 + (λ/µ) · 1/(1 − λ/(2µ)) ).
(d) From an earlier exercise, we know that the total amount of time the
system spends in state 1 before entering state 0 is exponential with
parameter µ because there are a geometric number of visits, and for
each visit the amount of time spent is exponential with parameter (λ +
µ). Thus,

P {ñ = 1} = (1/µ) / [ 1/λ + 1/µ + (λ/µ)(1/(2µ))/(1 − λ/(2µ)) ]
  = (λ/µ) / ( 1 + (λ/µ) · 1/(1 − λ/(2µ)) ).
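These two results are exactly what part (e) asks us to confirm by classical birth-death analysis: for M/M/2, P0 = (1 − ρ)/(1 + ρ) and P1 = (λ/µ)P0 with ρ = λ/(2µ). A quick numerical check with illustrative rates:

```python
lam, mu = 1.0, 1.5          # illustrative rates; stability needs lam < 2*mu
rho = lam / (2 * mu)

# Regenerative answers of parts (c) and (d).
p_idle = 1.0 / (1.0 + (lam / mu) / (1.0 - rho))
p_one = (lam / mu) / (1.0 + (lam / mu) / (1.0 - rho))

# Classical M/M/2 birth-death answers: P0 = (1 - rho)/(1 + rho), P1 = (lam/mu) P0.
p0_bd = (1.0 - rho) / (1.0 + rho)
p1_bd = (lam / mu) * p0_bd

assert abs(p_idle - p0_bd) < 1e-12
assert abs(p_one - p1_bd) < 1e-12
```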
3-6 Consider the M/M/2 queueing system, the system having Poisson arrivals,
exponential service, 2 parallel servers, and an infinite waiting room capac-
ity.
(a) Determine the expected first passage time from state 2 to state 1.
[Hint: How does this period of time compare to the length of the
busy period for an ordinary M/M/1 queueing system?]
(b) Determine the expected length of the busy period for the ordinary
M/M/2 queueing system by conditioning on whether or not an arrival
occurs before the first service completion of the busy period and by
using the result from part (a).
(c) Define c̃ as the length of time between successive entries into busy
periods, that is, as the length of one busy/idle cycle. Determine the
probability that the system is idle at an arbitrary point in time by tak-
ing the ratio of the expected length of an idle period to the expected
length of a cycle.
(d) Determine the total expected amount of time the system spends in
state 1 during a busy period. Determine the probability that there is
exactly one customer in the system by taking the ratio of the expected
amount of time that there is exactly one customer in the system during
a busy period to the expected length of a cycle.
(e) Check the results of (c) and (d) using classical birth-death analysis.
(f) Determine the expected sojourn time, E[s̃], for an arbitrary customer
by conditioning on whether an arbitrary customer finds either zero,
one, or two or more customers present. Consider the nonpreemptive
last-come-first-serve discipline together with Little’s result and the
fact that the distribution of the number of customers in the system is
not affected by order of service.
Solution.
(a) If the occupancy is n, then there are (n − 1) customers waiting. So,
if there are 3 customers waiting, occupancy is 4. Less than 3 mes-
sages waiting means occupancy is less than 4. We assume that if the
number waiting drops below 3 while attempting to set up the extra
capacity, then the attempt is aborted. In this case, the required states
are 0, 1, 2, 3, 3e, 4, 4e, · · ·, where e indicates extra capacity.
(b) See Figure 3.6.
(c) Organizing the state vector for this system according to level, we see
that
level 0 : P0
level 1 : P1
level 2 : P2
level n : Pn = [ Pn0 Pn1 ] n>2
We see that this latter form is the same as that given on page 140 of
the text, except that the boundary conditions are very complex. The
text by Neuts treats problems of this form under the heading of queues
with complex boundary conditions.
Chapter 4
ADVANCED CONTINUOUS-TIME
MARKOV CHAIN-BASED QUEUEING MODELS
Exercise 4.1 Using Little’s result, determine the average time spent in
the system for an arbitrary customer when the system is in stochastic equi-
librium.
Solution. Let s̃ denote the time spent in the system for an arbitrary customer,
and let γ be the net arrival rate into the system, γ = Σ_{i=1}^{M} γi . By Little's result,

E[s̃] = E[ñ]/γ,

where ñ is the total number of customers in the system in stochastic equilibrium.
Observe that the total number of customers in the system is the sum of
the customers at each node, i.e., ñ = Σ_{i=1}^{M} ñi . Therefore,

E[s̃] = (1/γ) E[ Σ_{i=1}^{M} ñi ]
     = (1/γ) Σ_{i=1}^{M} E[ñi ]
     = (1/γ) Σ_{i=1}^{M} ρi /(1 − ρi ).
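As a minimal numerical sketch of this last expression, assume a network of M = 3 nodes with hypothetical utilizations ρi and net arrival rate γ:

```python
gamma = 2.0                 # assumed net arrival rate
rho = [0.5, 0.25, 0.8]      # assumed utilizations of the M = 3 nodes

En = sum(r / (1.0 - r) for r in rho)   # E[n] = sum of per-node mean occupancies
Es = En / gamma                        # Little's result: E[s] = E[n] / gamma

assert abs(Es - (1.0 + 1.0 / 3.0 + 4.0) / 2.0) < 1e-9
```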
λR = λ.
(1/(j2π)) ∮_C φ^x dφ = (1/(j2π)) · φ^{x+1}/(x + 1) |_C     (x ≠ −1).

Note that the closed path C may be traversed by beginning at the point re^{jθ}
and ending at re^{j(θ+2π)}. For integer x ≠ −1, the integral is then

(1/(j2π)) · φ^{x+1}/(x + 1) |_C = [1/(j2π(x + 1))] [ (re^{j(θ+2π)})^{x+1} − (re^{jθ})^{x+1} ]
  = [1/(j2π(x + 1))] [ r^{x+1} e^{jθ(x+1)} e^{j2π(x+1)} − r^{x+1} e^{jθ(x+1)} ]
  = [1/(j2π(x + 1))] [ r^{x+1} e^{jθ(x+1)} · 1 − r^{x+1} e^{jθ(x+1)} ]
  = 0,

since e^{j2π(x+1)} = 1 for integer x. For x = −1, the antiderivative is ln φ, so
the integral is

(1/(j2π)) [ln r + j(θ + 2π) − ln r − jθ] = (1/(j2π)) · j2π = 1.
Exercise 4.4 Suppose that the expression for g(N, M, φ) can be written
as

g(N, M, φ) = Π_{i=1}^{r} 1/(1 − σi φ)^{νi} ,     (4.1)

where Σ_{i=1}^{r} νi = M . That is, there are exactly r distinct singular values of
g(N, M, φ) (these are called σ1 , σ2 , . . . , σr ) and the multiplicity of σi is νi .
We may rewrite (4.35) as

g(N, M, φ) = Σ_{i=1}^{r} Σ_{j=1}^{νi} cij /(1 − σi φ)^j .     (4.2)

Show that

cij = [1/(νi − j)!] (−1/σi )^{νi −j} [d^{νi −j}/dφ^{νi −j}] [(1 − σi φ)^{νi} g(N, M, φ)] |_{φ=1/σi} .
where F (φ) is the remainder of the terms in the sum not involving σm . Now
fix n, 1 ≤ n ≤ νm . Differentiate both sides of this expression (νm − n) times
and evaluate at φ = 1/σm . Thus,

[d^{νm −n}/dφ^{νm −n}] [(1 − σm φ)^{νm} g(N, M, φ)] |_{φ=1/σm} = (−σm )^{νm −n} cmn (νm − n)!,

i.e.,

cmn = [1/(νm − n)!] (−1/σm )^{νm −n} [d^{νm −n}/dφ^{νm −n}] [(1 − σm φ)^{νm} g(N, M, φ)] |_{φ=1/σm} .
Solution. Define h(φ) = (1 − σi φ)^{−n}. Then by the Maclaurin series expansion
for h,

h(φ) = Σ_{k=0}^{∞} h^{(k)}(0) φ^k / k!.

Since h^{(N)}(0) = [(N + n − 1)!/(n − 1)!] σi^N , the coefficient of φ^N is

bnN = (−1)^N (N + n − 1)! (−σi )^N / [N ! (n − 1)!] = C(N + n − 1, N ) σi^N .
S4,5 = { (n1 , n2 , · · · , n5 ) | Σ_{i=1}^{5} ni = 4 }.
i=1
i n1 n2 n3 n4 n5 I
1 0 0 0 0 4 256
2 0 0 0 1 3 128
3 0 0 1 0 3 128
4 0 1 0 0 3 64
5 1 0 0 0 3 64
6 0 0 0 2 2 64
7 0 0 2 0 2 64
8 0 0 1 1 2 64
9 0 2 0 0 2 16
10 2 0 0 0 2 16
11 1 1 0 0 2 16
12 0 1 0 1 2 32
13 1 0 0 1 2 32
14 0 1 1 0 2 32
15 1 0 1 0 2 32
16 0 0 3 0 1 32
17 0 0 0 3 1 32
18 0 0 1 2 1 32
19 0 0 2 1 1 32
20 0 1 2 0 1 16
21 1 0 2 0 1 16
22 1 0 0 2 1 16
23 0 1 0 2 1 16
24 0 1 1 1 1 16
25 1 0 1 1 1 16
26 0 2 1 0 1 8
27 2 0 1 0 1 8
28 0 2 0 1 1 8
29 2 0 0 1 1 8
30 1 1 1 0 1 8
31 1 1 0 1 1 8
32 0 3 0 0 1 4
33 3 0 0 0 1 4
34 1 2 0 0 1 4
35 2 1 0 0 1 4
36 0 0 4 0 0 16
37 0 0 0 4 0 16
38 0 0 3 1 0 16
39 0 0 1 3 0 16
40 0 0 2 2 0 16
41 1 0 3 0 0 8
42 0 1 3 0 0 8
43 1 0 0 3 0 8
44 0 1 0 3 0 8
45 1 0 2 1 0 8
46 0 1 2 1 0 8
47 1 0 1 2 0 8
48 0 1 1 2 0 8
49 0 2 2 0 0 4
50 2 0 2 0 0 4
51 0 2 0 2 0 4
52 2 0 0 2 0 4
53 1 1 2 0 0 4
54 1 1 0 2 0 4
55 1 1 1 1 0 4
56 0 2 1 1 0 4
57 2 0 1 1 0 4
58 0 3 1 0 0 2
59 3 0 1 0 0 2
60 0 3 0 1 0 2
61 3 0 0 1 0 2
62 1 2 0 1 0 2
63 2 1 0 1 0 2
64 1 2 1 0 0 2
65 2 1 1 0 0 2
66 0 4 0 0 0 1
67 4 0 0 0 0 1
68 1 3 0 0 0 1
69 3 1 0 0 0 1
70 2 2 0 0 0 1
It is easy to verify that the last column does indeed sum to 1497. This shows
that the probabilities given by (3.105) sum to unity.
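This total can be confirmed by brute-force enumeration of S4,5 with the per-class intensities ρ = (1, 1, 2, 2, 4) of the example (a short sketch):

```python
from itertools import product
from math import prod

rho = (1, 1, 2, 2, 4)  # per-class intensities of the example

# Sum the weights prod_i rho_i^{n_i} over all (n1,...,n5) with n1+...+n5 = 4;
# this reproduces the normalizing constant 1497 (the last column's total).
total = sum(prod(r ** n for r, n in zip(rho, vec))
            for vec in product(range(5), repeat=5) if sum(vec) == 4)

assert total == 1497
```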
Exercise 4.8 Using the recursion of (4.43) together with the initial con-
ditions, verify the expression for g(N, 5) for the special case N = 6 numer-
ically for the example presented in this section.
Solution. Letting ρ = 4 as in the example in the chapter, we find that

ρ1 = ρ2 = ρ/4 = 1,
ρ3 = ρ4 = ρ/2 = 2,
ρ5 = ρ = 4.
Then, using (3.109) along with the initial conditions we may compute
g(1, 1) = g(1, 0) + ρ1 g(0, 1) = 0 + (1)(1) = 1. We then substitute this value
into the expression for g(1, 2) = g(1, 1) + ρ2 g(0, 2) = 1 + (1)(1) = 2.
Repeating this for each value of N and M until g(6, 5) is reached, we may
form a table of normalizing constants:
M
0 1 2 3 4 5
0: 0 1 1 1 1 1
1: 0 1 2 4 6 10
2: 0 1 3 11 23 63
N 3: 0 1 4 26 72 324
4: 0 1 5 57 201 1497
5: 0 1 6 120 522 6510
6: 0 1 7 247 1291 27331
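The table itself can be regenerated from the recursion g(n, m) = g(n, m − 1) + ρm g(n − 1, m); a sketch, with initial conditions g(0, m) = 1 for m ≥ 1 and g(n, 0) = 0 matching the table's first row and column:

```python
rho = [1, 1, 2, 2, 4]   # rho_1 .. rho_5
N, M = 6, 5

# g[n][m] holds g(n, m); initial conditions g(0, m) = 1 (m >= 1), g(n, 0) = 0.
g = [[0] * (M + 1) for _ in range(N + 1)]
for m in range(1, M + 1):
    g[0][m] = 1
for n in range(1, N + 1):
    for m in range(1, M + 1):
        g[n][m] = g[n][m - 1] + rho[m - 1] * g[n - 1][m]

# The M = 5 column of the table of normalizing constants.
assert [g[n][5] for n in range(N + 1)] == [1, 10, 63, 324, 1497, 6510, 27331]
```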
= L,
and
E[s̃M ] = 1/µ = W.
Exercise 4.10 Show that det A(z) is a polynomial, the order of which
is not greater than 2(K + 1).
Solution. The proof is by induction on the size of the matrix. First consider
the case where A(z) is a 1 × 1 matrix. Clearly det A(z) is a polynomial whose
order is at most 2. Now assume that the determinant of any n × n matrix of
this form is a polynomial of order at most 2n. Let A(z) be an (n + 1) × (n + 1)
matrix; we wish to show that det A(z) is of order at most 2(n + 1). Expanding
about the first row,

det A(z) = Σ_{j=0}^{n} a0,j A0,j ,

where a0,j is the element in the jth column of the first row of A(z) and A0,j
represents its respective cofactor. Note that for all j, j = 0, 1, · · · , n, a0,j is of
at most order 2. Furthermore, each A0,j is, up to sign, the determinant of an
n × n matrix, and so, by assumption, is of order at most 2n. Hence each product
a0,j A0,j is of order at most 2n + 2 = 2(n + 1). Thus, for A(z) of dimension
(K + 1) × (K + 1), det A(z) is of order at most 2(K + 1) for all K ≥ 0.
Exercise 4.11 Repeat the above numerical example for the parameter
values β = δ = 2, λ = µ = 1. That is, the proportion of time spent in
each phase is the same and the transition rate between the phases is faster
than in the original example. Compare the results to those of the example
by plotting curves of the respective complementary distributions. Compute
the overall traffic intensity and compare.
Solution. Substituting β = δ = 2, λ = µ = 1, into the example in Chapter 3,
we find

Q = [[−β, β], [δ, −δ]] = [[−2, 2], [2, −2]],
M = [[µ, 0], [0, µ]] = [[1, 0], [0, 1]],

and

Λ = [[0, 0], [0, λ]] = [[0, 0], [0, 1]].
Note that the matrices M and Λ remain unchanged from the example in the
chapter. Then, upon substitution of these definitions into (3.137), we find
A(z) = [[1 − 3z, 2z], [2z, z² − 4z + 1]],

adj A(z) = [[z² − 4z + 1, −2z], [−2z, 1 − 3z]],

and

P0 = [ 0.2752551 0.2247449 ].

Upon substitution of this result into (3.138), we find after some algebra that

[ G0 (z) G1 (z) ]^T = [1/(1 − 0.5505103z)] [ 0.2752551 − 0.0505103z, 0.2247449 ]^T .
After some additional algebra, this result reduces to
[ G0 (z) G1 (z) ] = [ 0.2752551 0.2247449 ] + [ 0.1835034 0.2247449 ] Σ_{n=1}^{∞} (0.5505103)^n z^n .
The probability generating function for the occupancy distribution can now be
computed from (3.139) or by simply summing G0 (z) and G1 (z). We find
Fñ (z) = 0.5 + 0.4082483 Σ_{n=1}^{∞} (0.5505103)^n z^n .
To compute the traffic intensity we determine the probability that the system
will be busy. This is simply the complement of the probability that the system
is not busy: 1 − P0 . Since P0 did not change in this exercise from the example
in the chapter, the traffic intensity for both problems will be the same:
1 − 0.5 = 0.5.
Figure 4.1 shows the complementary distributions of both the original example
and its modification β = δ = 2,λ = µ = 1.
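The geometric form of the series can be confirmed numerically: the coefficients of G0 (z) = (0.2752551 − 0.0505103z)/(1 − 0.5505103z) are 0.2752551 at n = 0 and 0.1835034(0.5505103)^n for n ≥ 1, to within the rounding of the stated constants. A short sketch:

```python
a = 0.5505103                         # pole of the generating function
num0, num1 = 0.2752551, -0.0505103    # numerator coefficients

# n-th series coefficient of (num0 + num1*z)/(1 - a*z), for n >= 1, is
# num0*a^n + num1*a^(n-1); it should equal 0.1835034 * a^n.
for n in range(1, 10):
    coeff = num0 * a ** n + num1 * a ** (n - 1)
    assert abs(coeff - 0.1835034 * a ** n) < 1e-6

# The occupancy pgf weights combine as stated in the text.
assert abs((0.2752551 + 0.2247449) - 0.5) < 1e-12
assert abs((0.1835034 + 0.2247449) - 0.4082483) < 1e-12
```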
Exercise 4.12 Suppose we want to use the packet switch and transmis-
sion line of the above numerical example to serve a group of users who
collectively generate packets at a Poisson rate γ, independent of the com-
puter’s activity, in addition to the computer. This is a simple example of
integrated traffic. Assuming that the user packets also require exponential
service with rate µ, show the impact the user traffic has on the occupancy
distribution of the packet switch by plotting curves for the cases of γ = 0
and γ = 0.1.
Solution. Since the group of users generates packets independent of the computer's
activity, the overall arrival rate to the transmission line is simply the
sum of the arrivals from the computer and from the group of users. That is,

Λ = [[γ, 0], [0, λ + γ]].
The matrices Q and M remain unchanged from the example. Now, in the
first case where γ = 0, it should be clear that the matrix Λ will also remain
unchanged. i.e., the problem is the same as that in the example. In the second
case, where γ = 0.1, the matrix Λ will be:
Λ = [[0.1, 0], [0, 1.1]].
We repeat the definitions of Q and M:

Q = [[−β, β], [δ, −δ]] = [[−1, 1], [1, −1]],
M = [[µ, 0], [0, µ]] = [[1, 0], [0, 1]].
We proceed in the same manner as in Exercise 3.39 and in the example in the
chapter. Substituting these definitions into (3.137) yields:

A(z) = [[0.1z² − 2.1z + 1, z], [z, 1.1z² − 3.1z + 1]],

adj A(z) = [[1.1z² − 3.1z + 1, −z], [−z, 0.1z² − 2.1z + 1]],

and

P0 = [ 0.2346046 0.1653954 ].

Upon substitution of this result into (3.138), we find after some algebra that
the matrix [G0 (z) G1 (z)] is

[1/((1 − 0.6626404z)(1 − 0.0475680z))] [ 0.2346046 − 0.0739485z, 0.1653954 − 0.0047394z ]^T .
After some additional algebra, this result reduces to a weighted sum of geometric
series in the factors 0.6626404 and 0.0475680.
Figure 4.2 shows the occupancy distributions for the cases γ = 0, γ = 0.1.
Exercise 4.13 Let Q denote the infinitesimal generator for a finite,
discrete-valued, continuous-time Markov chain. Show that the rows of
adj Q are equal.
Solution. We first note that the rows of adj Q are all proportional to each other
and, in addition, these are proportional to the stationary vector for the CTMC.
Let the stationary vector be denoted by φ. Then, with ki , 0 ≤ i ≤ K, a scalar,
adj Q = [ k0 φ ; k1 φ ; · · · ; kK φ ];

that is, the ith row of adj Q is ki φ = [ ki φ0 ki φ1 · · · ki φK ].
In addition, the columns of adj Q are all proportional to each other and these
are all proportional to the right eigenvector of Q corresponding to its zero
eigenvalue. Since Qe = 0, this right eigenvector is proportional to e. There-
fore,

adj Q = [ j0 e j1 e · · · jK e ];

that is, every row of adj Q equals [ j0 j1 · · · jK ],
where jl , 0 ≤ l ≤ K, is a scalar. Comparing the two expressions for adj Q,
we see that ki φl = jl for all i, so that if φl ≠ 0, then ki = jl /φl for all i. Since
φl cannot be zero for all l unless all states are transient, which is impossible
for a finite-state CTMC, it follows that the ki are identical, so that the rows of
adj Q are identical and the proof is complete.
Exercise 4.14 Obtain (4.73) by starting out with (4.64), differentiating
both sides with respect to z, postmultiplying both sides by e, and then taking
limits as z → 1.
Solution. Recall Equation (4.64):
G(z)A(z) = (1 − z)P0 M,
where
A(z) = Λz 2 − (Λ − Q + M)z + M.
Now, differentiating both sides of (4.64) with respect to z, we see that

G′(z)A(z) + G(z) [2Λz − (Λ − Q + M)] = −P0 M.
Taking limits as z → 1,

G′(1)Q + G(1) [Λ + Q − M] = −P0 M.

Postmultiplying both sides by e and noting that Qe = 0 yields the desired
result:

G(1) [M − Λ] e = P0 Me.
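As a numerical sanity check, the relation G(1)[M − Λ]e = P0 Me holds for the matrices of Exercise 4.11, where M = I, Λ = diag(0, 1), P0 = [0.2752551 0.2247449], and G(1) = [0.5 0.5]:

```python
import numpy as np

G1 = np.array([0.5, 0.5])              # G(1) from Exercise 4.11
M = np.eye(2)                          # M = diag(mu, mu) with mu = 1
Lam = np.diag([0.0, 1.0])              # Lambda = diag(0, lambda) with lambda = 1
P0 = np.array([0.2752551, 0.2247449])
e = np.ones(2)

# G(1)[M - Lambda]e should equal P0 M e.
assert abs(G1 @ (M - Lam) @ e - P0 @ M @ e) < 1e-6
```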
Exercise 4.15 Show that the sum of the columns of A(z) is equal to the
column vector (1 − z)(M − zΛ)e so that det A(z) has a (1-z) factor.
Solution. To find the sum of the column vectors of A(z), postmultiply the
matrix by the column vector e, and use Qe = 0:

A(z)e = Λz²e − (Λ − Q + M) ze + Me
  = −z(1 − z)Λe + (1 − z)Me
  = (1 − z) (M − zΛ) e.
Exercise 4.16 Show that the zeros of the determinant of the λ-matrix
A(z) are all real and nonnegative. [Hint: First, do a similarity trans-
formation, transforming A(z) into a symmetric matrix, Â(z). Then, form
the inner product < Xz , Â(z)Xz >, where Xz is the null vector of Â(z)
corresponding to z. Finally, examine the zeros of the resulting quadratic
equation.]
Solution. First we show that the null values are all positive by transforming
A(z) into a symmetric matrix, Â(z), and then forming the quadratic form of
Â(z). This quadratic form results in a scalar quadratic equation whose roots
are positive and one of which is a null value of A(z). Recall the definition of
A(z):
A(z) = Λz 2 − (Λ − Q + M)z + M,
where
Λ = diag(λ0 , λ1 , · · · , λK ),
M = diag(µ0 , µ1 , · · · , µK ),
and
Q is the tridiagonal matrix with diagonal elements (−β0 , −(β1 + δ1 ), · · · ,
−(βK−1 + δK−1 ), −δK ), superdiagonal (β0 , β1 , · · · , βK−1 ), and subdiagonal
(δ1 , δ2 , · · · , δK ).
Now, since Q is tridiagonal and the off-diagonal terms have the same sign, it
is possible to transform Q to a symmetric matrix using a diagonal similarity
transformation. Define
W = diag(w0 , w1 , · · · , wK ), where wi = Π_{j=0}^{i} lj ,

with

l0 = 1

and

lj = √(δj /βj−1 ) for 1 ≤ j ≤ K.
Then

Q̂ = W⁻¹QW,

where Q̂ is the symmetric tridiagonal matrix with the same diagonal as Q and
with off-diagonal elements √(β0 δ1 ), √(β1 δ2 ), · · · , √(βK−1 δK ).
Now, suppose that det A(σ) = 0. Then det Â(σ) = 0, and there exists some
nontrivial Xσ such that

Â(σ)Xσ = 0.

Therefore,

< Xσ , Â(σ)Xσ > = 0,

where < ·, · > denotes the inner product. Thus,

lσ² − (l − p + m)σ + m = 0,

where

l = < Xσ , ΛXσ > ≥ 0,  p = < Xσ , Q̂Xσ > ≤ 0,  and m = < Xσ , MXσ > > 0.

The discriminant of this quadratic is

(l − p + m)² − 4lm = (l − m)² + p² − 2p(l + m).

We see that the right-hand side of this equation is the sum of nonnegative terms,
and so the value of σ must be real. Hence all null values of A(z) are real.
We now show that these null values are positive. We see from above that either
l = 0 or l > 0. In the first case, we find from (3.44.1) that

σ = m/(m − p) > 0.

In the second case we find from the quadratic equation that

σ = [1/(2l)] [ (l − p + m) ± √((l − p + m)² − 4lm) ].

Clearly, (l − p + m) > √((l − p + m)² − 4lm) > 0. Therefore, both roots of
(3.44.1) are positive so that all of the null values of A(z) are positive.
Exercise 4.17 The traffic intensity for the system is defined as the prob-
ability that the server is busy at an arbitrary point in time.
1. Express the traffic intensity in terms of the system parameters and P0 .
2. Determine the average amount of time a customer spends in service us-
ing the results of part 1 and Little’s result.
3. Check the result obtained in part 2 for the special case M = µI.
Solution.
1. Recall (4.74):
G(1)Λe = [G(1) − P0 ] Me.
The left-hand side expresses the average rate at which units enter the system,
and the right-hand side expresses the average rate at which units
leave the system. Hence, analogous to the special case of the M/M/1 system,
where λ = (1 − P0 )µ and ρ = λ/µ = (1 − P0 ), we may write the traffic intensity
using (4.74) as

ρ = [G(1) − P0 ] e.
Exercise 4.18 Show that adj A(zi )e is proportional to the null vector
of A(z) corresponding to zi .
Solution. Since A−1 (zi ) = adj A(zi )/det A(zi ), and A(zi )A−1 (zi ) = I, we
find that
A(zi )adj A(zi ) = det A(zi )I.
By the continuity of A(zi ), adj A(zi ), and det A(zi ), and the fact that zi is a
null value of A(z), we have A(zi ) adj A(zi ) = det A(zi )I = 0. In particular,
A(zi ) adj A(zi )e = 0, so that adj A(zi )e is proportional to the null vector of
A(z) corresponding to zi .
Now, we may compute the determinant of B(1) by expanding about its last
column. In doing this, we find that the vector of cofactors corresponding to the
last column is proportional to φK , the left eigenvector of Q corresponding to
its zero eigenvalue. That is,

G(1) = φK /(φK e).
Thus,

det B(1) = αG(1) [M − Λ] e,     (4.57.1)

where α = φK e ≠ 0. But from equation (4.74), G(1) [M − Λ] e = π0 Me.
The left-hand side of (4.57.1) is zero only if π0 = 0. But π0 > 0 if an equi-
librium solution exists, so that if an equilibrium solution exists, then B(1) is
nonsingular. Equivalently, if an equilibrium solution exists, then B(z) has no
null value at z = 1.
We note that the null values of a λ-matrix are continuous functions of the
parameters of the matrix. Thus, we may choose to examine the behavior of the
null values of B(z) as a function of the death rates of the phase process while
keeping the birth rates positive. Toward this end, we define
B̂(z, δ1 , δ2 , · · · , δK ).
First, consider det B̂(z, 0, 0, · · · , 0). Since the lower subdiagonal of this
λ-matrix is zero, the determinant of the matrix is equal to the product of the
diagonal elements. These elements are as follows:
λi z² − (λi + βi + µi )z + µi , 0 ≤ i < K,
µK − λK z.
The null values of B̂(z, 0, 0, · · · , 0) may therefore be found by setting each of
the above terms equal to zero. Thus, the null values of B̂(z, 0, 0, · · · , 0) may
be obtained by solving the following equations for z:
λi z² − (λi + βi + µi )z + µi = 0, 0 ≤ i < K,     (4.57.2)

and

µK − λK z = 0.
When solving (4.57.2), two possibilities must be considered: λi = 0, and
λi > 0. In the first case,
z = µi /(βi + µi ),
so that for each i, the corresponding null value is between zero and one. In the
latter case, we find that
lim_{z→0} [λi z² − (λi + βi + µi )z + µi ] = µi > 0;

at z = 1 the value of the quadratic is −βi < 0; and as z → ∞ the quadratic
tends to +∞. Therefore, each quadratic equation of (4.57.2) has one root in
(0, 1) and one root in (1, ∞) if λi > 0. In summary, the equations (4.57.2)
yield exactly K null values of B̂(z, 0, 0, · · · , 0) in (0, 1), and between 0 and K
null values of B̂(z, 0, 0, · · · , 0) in (1, ∞), the latter quantity being equal to the
number of positive values of λi .
Now, due to (4.57.1), there exists no (δ1 , δ2 , · · · , δK ) such that 0 = det
B̂(1, δ1 , δ2 , · · · , δK ). In addition, all of the null values of B̂(z, δ1 , δ2 , · · · , δK )
are real if Q is a generator for an ergodic birth and death process. Therefore,
it is impossible for any of the null values of B̂(z, δ1 , δ2 , · · · , δK ) in the inter-
val (0, 1) to assume that value 0 for some value of (δ1 , δ2 , · · · , δK ). Hence,
the number of null values of B̂(1, δ1 , δ2 , · · · , δK ) in (0, 1) is independent of
the death rates. Therefore, B(z), and consequently A(z), has exactly K null
values in the interval (0, 1).
Exercise 4.20 Beginning with (4.86) through (4.88), develop expres-
sions for the joint and marginal complementary ergodic occupancy distri-
butions.
Solution. From (4.86),

Pn = Σ_{zK+i ∈ Z(1,∞)} Ai zK+i^{−n} .

Therefore,

P {ñ > n} = Σ_{j=n+1}^{∞} Σ_{zK+i ∈ Z(1,∞)} Ai zK+i^{−j}
  = Σ_{zK+i ∈ Z(1,∞)} Ai Σ_{j=n+1}^{∞} (1/zK+i )^j
  = Σ_{zK+i ∈ Z(1,∞)} Ai (1/zK+i )^{n+1} / (1 − 1/zK+i )
  = Σ_{zK+i ∈ Z(1,∞)} Ai zK+i^{−(n+1)} / (1 − zK+i^{−1} ).

Similarly,

P {ñ > n} e = Σ_{zK+i ∈ Z(1,∞)} Ai e zK+i^{−(n+1)} / (1 − zK+i^{−1} ).
Exercise 4.21 Develop an expression for adj A(zi ) in terms of the outer
products of two vectors using LU decomposition. [Hint: The term in the
lower right-hand corner, and consequently the last row, of the upper trian-
gular matrix will be zero. What then is true of its adjoint?]
Solution. We let A(z) = L(z)U (z), where L(z) is lower triangular and U (z)
is upper triangular. We follow the usual convention that the elements of the
major diagonal of L(z) are all unity. From matrix theory, we know that for
A(z) nonsingular,
adj A(z) = det A(z) A⁻¹(z)
  = det A(z) U⁻¹(z) L⁻¹(z)
  = det A(z) [1/det U (z)] adj U (z) [1/det L(z)] adj L(z)
  = adj U (z) adj L(z),

since det A(z) = det L(z) det U (z).
Since adj A(z) is a continuous function of z for an arbitrary matrix A(z), we
then have

adj A(zi ) = adj U (zi ) adj L(zi ).
Now, if zi is a simple null value of U (z), the LU decomposition can be
arranged such that the last diagonal element of U (z) is zero. This means the
entire last row of U (z) is zero so that all elements of adj U (z) are zero except
for its last column. This means that adj A(zi ) will be given by the product of
the last column of adj U (zi ) with the last row of adj L(zi ). Let y(zi ) denote
the last column of adj U (zi ) and x(zi ) denote the last row of adj L(zi ). Since
det L(zi ) = 1, we may solve for the last row of adj L(zi ) by solving
x(zi )L(zi ) = fK ,
where fi is the row vector of all zeros except for a 1 in position i. Since the
last column of adj U (z) is unaffected by the elements of the last row of U (z),
we may determine the last column of adj U (zi ) in the following way. First,
we replace the (K, K) element of U (zi ) by 1/Π_{j=0}^{K−1} Ujj (zi ). Call this matrix
Û (zi ) and note that this matrix is nonsingular and that its determinant is 1.
Now solve the linear system

Û (zi )y(zi ) = eK .
We then have

y(zi ) = [1/det Û (zi )] adj Û (zi ) eK .
But det Û (zi ) = 1 and, in addition, the last column of adj Û (zi ) is the same
as the last column of adj U (zi ). Consequently y(zi ) is identically the last
column of adj U (zi ).
In summary, we solve Û (zi )y(zi ) = eK , x(zi )L(zi ) = fK and then we
have
adj A(zi ) = y(zi )x(zi ).
We note in passing that solving the linear systems is trivial because of the form
of Û (zi ) and L(zi ). We illustrate the above method with an example. Let
A = [[2, 3, 4], [4, 8, 11], [6, 13, 18]].
We may then decompose A as described above:
L = [[1, 0, 0], [2, 1, 0], [3, 2, 1]],
U = [[2, 3, 4], [0, 2, 3], [0, 0, 0]].
Now, det L is 1, so that adj L = L⁻¹. That is, if x̂ is the last row of adj L,
then

x̂L = [ 0 0 1 ].

Thus,

x̂ = [ 1 −2 1 ].
We now replace the (3, 3) element of U by 1/(2 · 2) = 1/4, so that

Û = [[2, 3, 4], [0, 2, 3], [0, 0, 1/4]].

Solving Û ŷ = [ 0 0 1 ]^T yields ŷ = [ 1 −6 4 ]^T , so that

adj A = ŷx̂ = [[1, −2, 1], [−6, 12, −6], [4, −8, 4]].
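The worked example is easily verified numerically; a short NumPy sketch:

```python
import numpy as np

A = np.array([[2.0, 3.0, 4.0], [4.0, 8.0, 11.0], [6.0, 13.0, 18.0]])
L = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [3.0, 2.0, 1.0]])
U = np.array([[2.0, 3.0, 4.0], [0.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
assert np.allclose(L @ U, A)              # the LU factors of the text

x = np.array([1.0, -2.0, 1.0])            # last row of adj L: solves x L = [0, 0, 1]
y = np.array([1.0, -6.0, 4.0])            # last column of adj U: solves U_hat y = e3
assert np.allclose(x @ L, [0.0, 0.0, 1.0])

adjA = np.outer(y, x)
# A is singular here, so A (adj A) = det(A) I = 0, and likewise (adj A) A = 0.
assert np.allclose(A @ adjA, 0.0)
assert np.allclose(adjA @ A, 0.0)
```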
for values of τ between 1.7215 and 2795. Since the vector G(1) is proportional
to the left eigenvector of Q corresponding to the eigenvalue 0, we find
Pn = P0 R^n = [ 0.234604 0.165395 ] [ 0.070500 0.029500
                                      0.460291 0.639709 ]^n.
These are the same results as those found in Example 3.7. In particular, we see
that P0 = 0.4.
The following are sample data from the program, where ε = 1 × 10^−11, and n
is the number of iterations it took for the program to converge. Note that the
resultant matrix R was the same for each of the sampled τ .
R = [ 0.070500 0.029500
      0.460291 0.639709 ],
τ n
2 300
50 3200
200 11800
500 28000
2795 10928
Exercise 4.23 Solve Exercise 3.22 using the matrix geometric approach.
Evaluate the relative difficulty of using the matrix geometric approach to
that of using the probability generating function approach.
Solution. Substituting different values of τ and the parameters from Exercise
3.39 into the computer program from Exercise 3.50, we find
Ri = τ^−1 R²_{i−1} + R_{i−1} ( I − τ^−1 [ 3.0 −2.0; −2.0 4.0 ] ) + τ^−1 [ 0.0 0.0; 0.0 1.0 ].
The matrix R was determined to be

R = [ 0.0      0.0
      0.449490 0.550510 ].

Since the vector G(1) is proportional to the left eigenvector of Q corresponding to the eigenvalue 0, we find
G(1) = [ 0.5 0.5 ] .
Substituting these results into (3.167) yields

P0 = [ 0.5 0.5 ] ( I − [ 0.0      0.0
                         0.449490 0.550510 ] )
   = [ 0.275255 0.224745 ].
These are the same results as those found in Exercise 3.39. In particular, we
see that P0 = 0.5. Note that P0,0 does not play a role in determining Pn since
the first row of R is zero.
The following are sample data from the program, where ε = 1 × 10^−11, and n
is the number of iterations it took for the program to converge. As in Exercise
3.50, the resultant matrix R was the same for each of the sampled τ .
R = [ 0.0      0.0
      0.449490 0.550510 ],
τ n
3 300
10 600
50 2400
100 4500
500 20400
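The iteration above can be sketched in plain Python (2 × 2 matrices as nested lists). The two matrices are those displayed in the iteration; the identity coefficient on the R² term is an inference consistent with the fixed point quoted above, and τ = 10 with ε = 1 × 10⁻¹¹ matches one row of the table. This is an illustrative sketch, not the original course program.

```python
# Successive substitution for R, starting from the zero matrix, followed by
# P0 = G(1)(I - R) with G(1) = [0.5, 0.5].

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def smul(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

N  = [[3.0, -2.0], [-2.0, 4.0]]     # matrix inside (I - tau^-1 [.])
A0 = [[0.0, 0.0], [0.0, 1.0]]       # matrix multiplied by tau^-1
I  = [[1.0, 0.0], [0.0, 1.0]]

def solve_R(tau, eps=1e-11, max_iter=100000):
    ImN = madd(I, smul(-1.0 / tau, N))
    R = [[0.0, 0.0], [0.0, 0.0]]
    for n in range(1, max_iter + 1):
        Rnew = madd(smul(1.0 / tau, mmul(R, R)),   # tau^-1 R^2
                    mmul(R, ImN),                  # R (I - tau^-1 N)
                    smul(1.0 / tau, A0))           # tau^-1 A0
        if max(abs(Rnew[i][j] - R[i][j]) for i in range(2) for j in range(2)) < eps:
            return Rnew, n
        R = Rnew
    return R, max_iter

R, iters = solve_R(tau=10.0)
P0 = [0.5 * (I[0][j] - R[0][j]) + 0.5 * (I[1][j] - R[1][j]) for j in range(2)]
print(R, iters, P0)
```

The first row of R stays zero throughout the iteration, matching the remark below that P0,0 plays no role.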
For this exercise, K = 1. Since the matrices involved were 2 × 2, using
the probability generating function approach was fairly straightforward and
did not involve a great deal of computation. However, as the dimension of
these matrices increases, the difficulty involved with using this method also
increases. On the other hand, solving the problem using the matrix-geometric
approach involved developing a computer program. Although this is not al-
ways an easy task, this particular program was relatively simple. Furthermore,
the program should be adaptable to different parameters: matrix dimension, τ ,
λi , βi , etc. So although there is initial overhead, it is reusable. Plus, as the di-
mension of the matrices increases even slightly, the matrix geometric method
is much more practical than the probability generating function approach.
Exercise 4.24 Prove the result given by (4.112) for the n-th factorial
moment of ñ.
Solution. First we prove a lemma:

d^n/dz^n G(z) = n! G(z) [I − zR]^{−n} R^n,  n ≥ 1.

For n = 1, since G(z) = P0 [I − zR]^{−1},

d/dz G(z) = P0 [I − zR]^{−2} R
          = P0 [I − zR]^{−1} [I − zR]^{−1} R
          = G(z) [I − zR]^{−1} R.
Now assume the result holds true for n; we wish to show it holds true for n + 1. We have

d^{n+1}/dz^{n+1} G(z) = d/dz [ n! G(z) [I − zR]^{−n} R^n ],

by assumption. Thus, applying the product rule together with the base case,

d^{n+1}/dz^{n+1} G(z) = n! G(z) [I − zR]^{−1} R [I − zR]^{−n} R^n + n! n G(z) [I − zR]^{−(n+1)} R^{n+1}
                      = (n + 1)! G(z) [I − zR]^{−(n+1)} R^{n+1}.
This proves the lemma. Using this result, we evaluate the n-th derivative of G(z)e at z = 1:

d^n/dz^n G(z) e |_{z=1} = n! G(1) [I − R]^{−n} R^n e
                        = n! G(1) ∏_{j=1}^{n} [ Σ_{i=0}^{∞} R^i ] R^n e
                        = n! G(1) ∏_{j=1}^{n} [ Σ_{i=0}^{∞} V N^i V^{−1} ] V N^n V^{−1} e
                        = n! G(1) V ∏_{j=1}^{n} [ Σ_{i=0}^{∞} N^i ] N^n V^{−1} e
                        = n! G(1) V [I − N]^{−n} N^n V^{−1} e,

where R = V N V^{−1} is the spectral decomposition of R, with N = diag(ν0, · · ·, νK). Since

[I − N]^{−1} = diag( 1/(1 − ν0), · · ·, 1/(1 − νK) ),

we have

d^n/dz^n G(z) e |_{z=1} = n! G(1) V diag( ν0^n/(1 − ν0)^n, · · ·, νK^n/(1 − νK)^n ) V^{−1} e.
[Hint: First, sum the elements of the right-hand side of (4.113) from i = 1 to i = ∞. This will yield 0 = [ Σ_{i=0}^{∞} Pi ] [A0 + A1 + A2] + θ(P0, P1). Next, use the fact that, because Q̃ is stochastic, the sum of the elements of each of the rows of Q̃ must be zero to show that θ(P0, P1) = 0. Then complete the proof in the obvious way.]
Solution. We emphasize that we are dealing here with the specific case that the
arrival and service rates are dependent upon the phase of an auxiliary process.
The arrival, service, and phase processes are independent of the level except
that at level zero, there is no service. For a system in which the previous
condition does not hold, the result is in general not true.
The implication of the fact that Q̃ is stochastic is that the sum of the block
matrices at each level is stochastic; that is [A0 + A1 + A2 ], [B1 + A1 + A0 ],
and [B0 + A0 ] are all stochastic, and each of these stochastic matrices is the
infinitesimal generator of the phase process at its level. Thus, in order for the
phase process to be independent of the level, the three stochastic matrices must
be the same. This can be found by comparing the first two to see that B1 = A2
and the first and third to see that B0 = A1 + A2 .
Upon summing the elements of the right hand side of (4.113) from i = 1 to i = ∞, we find

0 = [ Σ_{i=0}^{∞} Pi ] [A0 + A1 + A2] − P0 [A1 + A2] − P1 A2.
because both are the unique left eigenvector of [A0 + A1 + A2 ] whose ele-
ments sum to unity.
In a very general case, arrivals and service completions in a given phase could be accompanied by a phase transition. For example, we would denote by λij the arrival rate of the process that, starting in phase i, increases the level by 1 and causes a transition to phase j simultaneously, and define µij in a parallel way. At the same time, we could have an independently operating
phase process, whose infinitesimal generator is Q.
Let Λ̂ and M̂ denote the more general arrival and service matrices, respec-
tively, and let Λ and M denote the diagonal matrices whose i-th elements
denote the total arrival and service rates while the process is in phase i. It is
easy to show that we then have the following dynamical equations:
It is then easy to see that the phase process is independent of the level if and only if M = M̂. That is, the service process must not result in a phase
shift. Alternatively, the phase process at level zero could be modified to incor-
porate the phase changes resulting from the service process at all other levels,
but then this would contaminate the definition of the independent phase process
at each level.
Solution. The state diagram can be determined by first defining the states in the diagram, which have the form (i, j), where i is the system occupancy and j is the phase, except for i = 0. It is then a matter of determining transition rates. Since a phase 0 service completion results in lowering the occupancy by 1 with probability 1 − p, the rate from state (i, 0) to (i − 1, 0) is µ1(1 − p). Similarly, the rate from (i, 0) to (i, 1) is µ1 p. This is a direct result of Property
2 of the Poisson Process given on page 44. Since service completions at phase
1 always result in an additional phase 0 service, the transition rate from (i, 1)
to (i, 0) is µ2 . Increases in level occur due to arrivals only, so for every state,
the rate of transition to the next level is λ. The state diagram is detailed in
Figure 4.3.
Find Fx̃ (t) = P {x̃ ≤ t} and fx̃ (t), and identify the form of fx̃ (t). [Hint:
First solve for P0 (t), then for P1 (t), and then for P2 (t) = Pa (t). There is
never a need to do matrix exponentiation.]
Solution. We have
d
P (t) = P (t)Q̃,
dt
or

d/dt [ P0(t) P1(t) P2(t) ] = [ P0(t) P1(t) P2(t) ] [ −µ  µ  0
                                                      0 −µ  µ
                                                      0  0  0 ].
Thus, d/dt P0(t) = −µP0(t), so that P0(t) = ke^{−µt}. But P0(0) = 1, so k = 1. Therefore P0(t) = e^{−µt}.
Next,

d/dt P1(t) = µP0(t) − µP1(t),

or

d/dt P1(t) + µP1(t) = µP0(t).
Solving this differential equation with P1(0) = 0 gives P1(t) = µte^{−µt}, and Fx̃(t) = Pa(t) = P2(t) = 1 − P0(t) − P1(t). That is,

Fx̃(t) = 1 − e^{−µt} − µte^{−µt},

and

fx̃(t) = µ(µt)e^{−µt},

which we recognize as the Erlang-2 (gamma) density.
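A quick numerical cross-check (standard library only) that fx̃ integrates back to Fx̃; the values µ = 2 and t = 1.5 and the quadrature step count are arbitrary test choices, not from the exercise.

```python
# Verify that F(t) = 1 - exp(-mu t) - mu t exp(-mu t) is the integral of
# f(t) = mu (mu t) exp(-mu t) on [0, t], via the composite trapezoidal rule.

import math

def F(t, mu):
    return 1.0 - math.exp(-mu * t) - mu * t * math.exp(-mu * t)

def f(t, mu):
    return mu * (mu * t) * math.exp(-mu * t)

def integrate_f(t, mu, steps=20000):
    h = t / steps
    total = 0.5 * (f(0.0, mu) + f(t, mu))
    for k in range(1, steps):
        total += f(k * h, mu)
    return total * h

mu, t = 2.0, 1.5
print(F(t, mu), integrate_f(t, mu))
```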
E[x̃] = (−1) d/ds F*_x̃(s) |_{s=0}
     = (−1) d/ds [ −bS [sI − S]^{−1} e ] |_{s=0}
     = (−1) bS [sI − S]^{−2} e |_{s=0}
     = −bS^{−1} e.
We now prove the proposition that

d^n/ds^n F*_x̃(s) = −n! (−1)^n bS [sI − S]^{−(n+1)} e.
For n = 1,

d/ds F*_x̃(s) = bS [sI − S]^{−2} e
             = −1! (−1)^1 bS [sI − S]^{−2} e.
So 1 ∈ T, the truth set for the proposition. If (n − 1) ∈ T, then

d^{n−1}/ds^{n−1} F*_x̃(s) = −(n − 1)! (−1)^{n−1} bS [sI − S]^{−n} e,

so that

d^n/ds^n F*_x̃(s) = −(n − 1)! (−1)^{n−1} bS (−n) [sI − S]^{−(n+1)} e
                  = −n! (−1)^n bS [sI − S]^{−(n+1)} e,
and the proof of the proposition is complete. Therefore, continuing with the
proof of the exercise,
E[x̃^n] = (−1)^n [ −n! (−1)^n bS [sI − S]^{−(n+1)} e ] |_{s=0}
        = −n! bS (−1)^{n+1} S^{−(n+1)} e
        = n! (−1)^n bS^{−n} e.
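The moment formula just derived can be spot-checked against a concrete phase-type law. The Erlang-2 representation b = [1, 0], S = [[−µ, µ], [0, −µ]] is a standard choice made here for illustration (not taken from the exercise text); its known moments are E[x̃] = 2/µ and E[x̃²] = 6/µ².

```python
# Verify E[x^n] = n! (-1)^n b S^{-n} e for the Erlang-2 distribution,
# using plain 2x2 matrix arithmetic.

import math

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def moment(n, b, S):
    Sinv_n = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        Sinv_n = mmul(Sinv_n, inv2(S))
    bS = [sum(b[i] * Sinv_n[i][j] for i in range(2)) for j in range(2)]
    return math.factorial(n) * (-1) ** n * (bS[0] + bS[1])   # right-multiply by e

mu = 2.0
b, S = [1.0, 0.0], [[-mu, mu], [0.0, -mu]]
print(moment(1, b, S), moment(2, b, S))   # expect 2/mu = 1.0 and 6/mu^2 = 1.5
```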
Q̃ = [ T  T^0
      0  0 ],
Solution:
(a) Since the Markov chain is irreducible, all states can be reached from
all other states; therefore, state m is reachable from all other states in
the original chain. Since the dynamics of the system have not been
changed for states {0, 1, · · · , m − 1} in the modified chain, state m
is still reachable from all states. But, once in state m, the modified
system will never leave state m because its departure rate to all other
states is 0. Therefore, all states other than state m are transient states
in the modified chain and state m is an absorbing state.
(b) Observe that

d/dt Pt(t) = Pt(t) T,

so that

Pt(t) = Pt(0) e^{Tt}.

Now, we know that lim_{t→∞} Pt(t) = 0, and

Pa(t) = 1 − Pt(t) e = 1 − Pt(0) e^{Tt} e.

Alternatively, since d/dt Pa(t) = Pt(t) T^0, integration gives

Pa(t) = Pt(0) e^{Tt} T^{−1} T^0 + K,

and since T e + T^0 = 0 implies T^{−1} T^0 = −e,

Pa(t) = K − Pt(0) e^{Tt} e,

where the condition lim_{t→∞} Pa(t) = 1 = K − 0 gives K = 1. Hence

P{x̃ ≤ t} = Pa(t) = 1 − Pt(0) e^{Tt} e.
(e) The vector T^0 represents the rates at which the system leaves the transient states to enter the absorbing state. The matrix T^0 Pt(0) is then a nonnegative m × m matrix. Since [T T^0] e = 0, it follows that T + T^0 Pt(0) is a matrix whose diagonal terms are nonpositive. This is because Tii + T^0_i ≤ 0, so that Tii + T^0_i Pt,i(0) ≤ 0; this is due to the fact that Pt,i(0) ≤ 1. All off-diagonal terms are nonnegative since all off-diagonal terms of T are nonnegative and T^0 is nonnegative. It remains only to determine if

T̃ e = [ T + T^0 Pt(0) ] e = 0.
Now, Pt (0)e = 1, so [T + T 0 Pt (0)]e = T e + T 0 , which we al-
ready know is equal to a zero vector. Thus, T + T 0 Pt (0) is the in-
finitesimal generator for an irreducible Markov chain with state space
{0, 1, . . . , m − 1}.
(a) Draw the state transition diagram for the special case of m = 3.
(b) Write the matrix balance equations for the special case of m = 3.
(c) Write the matrix balance equations for the case of general values of
m.
(d) Determine the matrix Q, the infinitesimal generator for the continuous-
time Markov chain defining the occupancy process for this system.
(e) Comment on the structure of the matrix Q relative to that for the
phase-dependent arrival and service rate queueing system and to the
M/PH/1 system. What modifications in the solution procedure would
have to be made to solve this problem? [Hint: See Neuts [1981a], pp.
24-26.]
Solution:
(a) For this exercise, we may think of a customer as staying with the same server throughout its service; the server simply does a different task. See Figure 4.4 for the state diagram of the system.
(b) We first introduce the following notation, where Ik is the k × k identity matrix:

Λ0 = λ,      Λ̂0 = [ λ 0 ]

Λ1 = λI2,    Λ̂1 = [ λ 0 0
                    0 λ 0 ]

Λ2 = λI3,    Λ̂2 = [ λ 0 0 0
                    0 λ 0 0
                    0 0 λ 0 ]

Λ3 = λI4,

M1 = [ µ1(1 − p)  0
       0          0 ],      M̂1 = [ µ1(1 − p)
                                   0         ],

M2 = [ 2µ1(1 − p)  0          0
       0           µ1(1 − p)  0
       0           0          0 ],

M̂2 = [ 2µ1(1 − p)  0
       0           µ1(1 − p)
       0           0         ],

M3 = [ 3µ1(1 − p)  0           0          0
       0           2µ1(1 − p)  0          0
       0           0           µ1(1 − p)  0
       0           0           0          0 ],

M̂3 = [ 3µ1(1 − p)  0           0
       0           2µ1(1 − p)  0
       0           0           µ1(1 − p)
       0           0           0          ],

P1 = [ 0    −µ1 p
       −µ2  0     ],

P2 = [ 0    −2µ1 p  0
       −µ2  0       −µ1 p
       0    −2µ2    0     ],

P3 = [ 0    −3µ1 p  0       0
       −µ2  0       −2µ1 p  0
       0    −2µ2    0       −µ1 p
       0    0       −3µ2    0     ].
With this notation, we may compactly write the balance equations as follows:

P0 Λ0 = P1 M̂1
P1 [ Λ1 + P1 + M1 ] = P0 Λ̂0 + P2 M̂2
P2 [ Λ2 + P2 + M2 ] = P1 Λ̂1 + P3 M̂3
[Figure 4.4: state transition diagram for the system, with transition rates λ, (1 − p)µ1, pµ1, and µ2.]
P3 [ Λ3 + P3 + M3 ] = P2 Λ̂2 + P4 M3 ,
and for n ≥ 4,

Pn [ Λ3 + P3 + M3 ] = Pn−1 Λ3 + Pn+1 M3.
(d) Define

∆ = diag[ −Λ0, −(Λ1 + P1 + M1), −(Λ2 + P2 + M2), −(Λ3 + P3 + M3), −(Λ3 + P3 + M3), · · · ],
∆u = superdiag[ Λ̂0, Λ̂1, Λ̂2, Λ3, Λ3, Λ3, · · · ],
∆l = subdiag[ M̂1, M̂2, M̂3, M3, M3, M3, · · · ].

Then Q = ∆ + ∆u + ∆l.
(e) This exercise will be harder to solve than the M/PH/1 system; this is
due to the fact that there is memory from service to service. That is,
a new customer starts service at exactly the phase of the previously
completed customer as long as there are customers waiting. In the
M/PH/1 system, the starting phase is selected at random. Since there
is more than one possible starting phase, this problem will be more
difficult to solve. This matrix Q has the form
Q = [ B00 B01 B02 B03 0   0   0   · · ·
      B10 B11 B12 B13 0   0   0   · · ·
      B20 B21 B22 B23 0   0   0   · · ·
      B30 B31 B32 B33 A0  0   0   · · ·
      0   0   0   B43 A1  A0  0   · · ·
      0   0   0   0   A2  A1  A0  · · ·
      0   0   0   0   0   A2  A1  · · ·
      0   0   0   0   0   0   A2  · · ·
      ·   ·   ·   ·   ·   ·   ·   · · · ],
which is similar to the form of the matrix of the G/M/1 type, but with
more complex boundary conditions.
(f) We know that P4 = P3 R, but we do not know P0 , P1 , P2 , P3 . On the
other hand, we know P Q = 0. Now consider only those elements of
P that are multiplied by the B submatrices. That is, P0 , P1 , P2 , P3 , P4 .
Then,

0 = [ P0 P1 P2 P3 P4 ] [ B00 B01 B02 B03
                         B10 B11 B12 B13
                         B20 B21 B22 B23
                         B30 B31 B32 B33
                         0   0   0   B43 ].
But P4 = P3 R, so that

0 = [ P0 P1 P2 P3 P3 R ] [ B00 B01 B02 B03
                           B10 B11 B12 B13
                           B20 B21 B22 B23
                           B30 B31 B32 B33
                           0   0   0   B43 ].
Then, combining the terms involving P3, we have

0 = [ P0 P1 P2 P3 ] [ B00 B01 B02 B03
                      B10 B11 B12 B13
                      B20 B21 B22 B23
                      B30 B31 B32 B33 + RB43 ]
  = [ P0 P1 P2 P3 ] B(R).
Thus [ P0 P1 P2 P3 ] is proportional to the left null vector of B(R); writing that null vector in partitioned form as [ x0 x1 ], we have

[ P0 P1 P2 ] = k x0,
P3 = k x1,

and

Pn+3 = P3 R^n,  n ≥ 1.
Chapter 5
Exercise 5.1 With x̃, {ñ(t), t ≥ 0}, and ỹ defined as in Theorem 5.2, show that E[ỹ(ỹ − 1) · · · (ỹ − n + 1)] = λ^n E[x̃^n].
Solution. Recall that Fỹ(z) = Σ_{y=0}^{∞} z^y P{ỹ = y}. Then

d^n/dz^n Fỹ(z) = Σ_{y=0}^{∞} [ y(y − 1) · · · (y − n + 1) ] z^{y−n} P{ỹ = y},

so that

d^n/dz^n Fỹ(z) |_{z=1} = E[ ỹ(ỹ − 1) · · · (ỹ − n + 1) ],
which, since Fỹ(z) = F*_x̃(λ[1 − z]) by Theorem 5.2, implies

E[ỹ(ỹ − 1) · · · (ỹ − n + 1)] = d^n/dz^n F*_x̃(λ[1 − z]) |_{z=1} = (−λ)^n d^n/ds^n F*_x̃(s) |_{s=0} = λ^n E[x̃^n].
Alternatively,
Fỹ (z) = Fx̃∗ (s(z)) ,
Solution. The proof follows directly from the definition of Fx̃ (z).
F_{(x̃−1)+}(z) = Σ_{i=0}^{∞} z^i P{ (x̃ − 1)+ = i }
             = z^0 P{ (x̃ − 1)+ = 0 } + Σ_{i=1}^{∞} z^i P{ (x̃ − 1)+ = i }.
Exercise 5.3 Starting with (5.5), use the properties of Laplace trans-
forms and probability generating functions to establish (5.6).
Solution. Observe that if we take the limit of the right-hand side of (5.5)
as z → 1, both numerator and denominator go to zero. Thus we can apply
L’Hôpital’s rule, and then take the limit as z → 1 of the resultant expression,
using the properties of Laplace transforms and probability generating func-
tions. Let α(z) denote the numerator and β(z) denote the denominator of the
right-hand side of (5.5). Then the derivatives are

d/dz α(z) = P{q̃ = 0} [ −F*_x̃(λ[1 − z]) + (1 − z) d/dz F*_x̃(λ[1 − z]) ],
d/dz β(z) = d/dz F*_x̃(λ[1 − z]) − 1.
Now take the limits of both sides as z → 1. It is easy to verify using Theorem 2.2 that lim_{z→1} d/dz F*_x̃(λ[1 − z]) = λE[x̃] = ρ. Thus, after some simple
The Basic M/G/1 Queueing System 143
algebra,

−P{q̃ = 0} / (ρ − 1) = 1,

i.e.,

P{q̃ = 0} = 1 − ρ.
Note that this is the probability that the system is empty. On the other hand, we have by Little's result, applied to the server alone, that the expected number of customers in service is

E[ñs] = λE[x̃] = ρ,
i.e., the probability that the server is busy is ρ, or, equivalently, that the proba-
bility that no one is in the system is (1 − ρ). Now, it has already been pointed
out that the Poisson arrival’s view of the system is the same as that of a ran-
dom observer. Thus, P {ñ = n} = P {q̃ = n}. In particular, we see that
P {q̃ = 0} = 1 − ρ.
Exercise 5.5 Batch Arrivals. Suppose arrivals to the system occur in
batches of size b̃, and the batches occur according to a Poisson process at
rate λ. Develop an expression equivalent to (5.5) for this case. Be sure to
define each of the variables carefully.
Solution. It should be easy to see that the basic relationship

q̃n+1 = (q̃n − 1)+ + ṽn+1

will remain valid. i.e., the number of customers left in the system by the (n +
1)-st departing customer will be the sum of the number of customers left by the
n-th customer plus the number of customers who arrive during the (n + 1)-st
customer’s service. It should be clear, however, that the number of customers
who arrive during the (n + 1)-st customer’s service will vary according to the
batch size, and so we need to re-specify ṽn+1 . Now,
Fq̃(z) = [ (1 − 1/z) P{q̃ = 0} + (1/z) Fq̃(z) ] Fṽ(z),

so that

Fq̃(z) = (1 − 1/z) P{q̃ = 0} Fṽ(z) / [ 1 − (1/z) Fṽ(z) ].
But, conditioning upon the batch size and the length of the service, we find that
where

α(z) = (1 − ρ) F*_x̃(λ[1 − z]),

and

β(z) = 1 − [ 1 − F*_x̃(λ[1 − z]) ] / (1 − z).
Then, in order to find

lim_{z→1} d/dz Fñ(z),
first find the limits as z → 1 of α(z), β(z), dα(z)/dz, and dβ(z)/dz, and
then substitute these limits into the formula for the derivative of a ratio.
Alternatively, multiply both sides of (5.8) to clear fractions and then differ-
entiate and take limits.]
Solution. Following the hint given in the book, we first rewrite Fñ(z) as α(z)/β(z), where

α(z) = (1 − ρ) F*_x̃(λ[1 − z]),

and

β(z) = 1 − [ 1 − F*_x̃(λ[1 − z]) ] / (1 − z).
It is straightforward to find the limits as z → 1 of d/dz Fñ(z), α(z), and d/dz α(z):

lim_{z→1} α(z) = 1 − ρ,
lim_{z→1} d/dz α(z) = (1 − ρ)ρ,
lim_{z→1} d/dz Fñ(z) = E[ñ].
The limit as z → 1 of β(z) and its derivative are more complicated. Note that if we take the limit of β(z), both the numerator and the denominator of the second term go to zero; applying L'Hôpital's rule as before gives lim_{z→1} β(z) = 1 − ρ.
Similarly, we compute (with s(z) = λ[1 − z])

d/dz β(z) = −d/dz [ (1 − F*_x̃(s(z))) / (1 − z) ]
          = [ −(1 − z) λ d/ds F*_x̃(s(z)) − (1 − F*_x̃(s(z))) ] / (1 − z)².

Upon taking the limit of this ratio and applying L'Hôpital's rule twice, we see that

lim_{z→1} d/dz β(z) = lim_{z→1} [ (1 − z) λ² d²/ds² F*_x̃(s(z)) ] / [ 2(z − 1) ]
                    = lim_{z→1} [ −(1 − z) λ³ d³/ds³ F*_x̃(s(z)) − λ² d²/ds² F*_x̃(s(z)) ] / 2
                    = −(λ²/2) E[x̃²].
2
To find E[ñ], we substitute these limits into the formula for the derivative of Fñ(z),

d/dz Fñ(z) = [ β(z) d/dz α(z) − α(z) d/dz β(z) ] / β²(z).
Thus,

E[ñ] = [ (1 − ρ)²ρ + (1 − ρ)(λ²/2)E[x̃²] ] / (1 − ρ)²
     = ρ + λ²E[x̃²] / [ 2(1 − ρ) ]
     = ρ + λ²( E[x̃²] − E²[x̃] + E²[x̃] ) / [ 2(1 − ρ) ]
     = ρ + ( ρ²/(1 − ρ) ) ( E[x̃²] − E²[x̃] + E²[x̃] ) / ( 2E²[x̃] )
     = ρ + ( ρ²/(1 − ρ) ) ( Cx̃² + 1 )/2
     = ρ [ 1 + ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 ].
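Both forms of E[ñ] appearing in the chain of equalities can be spot-checked numerically. The moment values below are arbitrary valid test inputs (E[x̃²] ≥ E²[x̃]), not taken from the text.

```python
# The two expressions for E[n] must agree for any lambda and any service
# moments: rho + lambda^2 E[x^2]/(2(1-rho)) versus the C^2 form.

lam, Ex, Ex2 = 0.4, 2.0, 10.0
rho = lam * Ex
C2 = (Ex2 - Ex ** 2) / Ex ** 2      # squared coefficient of variation

form1 = rho + lam ** 2 * Ex2 / (2 * (1 - rho))
form2 = rho * (1 + rho / (1 - rho) * (C2 + 1) / 2)
print(form1, form2)
```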
By Little’s result, we know that E[ṽ] = λE[x̃] = ρ. Substitute this value into
the above equation to find that E[δ∞ ] = 1 − ρ.
To find E[ñ], we begin by squaring the equation given in the problem state-
ment.
But E[q̃] = E[ñ], and E[δ²∞] = E[δ∞] since δn can take on values of 0 or 1 only. In addition, q̃n and ṽn+1 are independent, as are δn and ṽn+1. Clearly their limits are also independent. However, δn and q̃n are not independent. Furthermore, their product will always be zero; hence its expected value will also be zero. Using these observations and solving for E[ñ],

E[ñ] ( 2 − 2E[ṽ] ) = E[δ∞] + E[ṽ²] + 1 + 2E[ṽ]E[δ∞] − 2E[ṽ] − 2E[δ∞].
That is,

E[ñ] = [ ρ(1 − 2ρ) + E[ṽ²] ] / [ 2(1 − ρ) ].

But by Exercise 5.1, E[ṽ²] − E[ṽ] = λ²E[x̃²], so that E[ṽ²] = λ²E[x̃²] + ρ. Substituting this into the expression for E[ñ],

E[ñ] = ρ + λ²E[x̃²] / [ 2(1 − ρ) ],

in agreement with the result of Exercise 5.6; for exponential service this reduces to

E[ñ] = ρ / (1 − ρ).
Exercise 5.8 Using (5.14) and the properties of the Laplace transform, show that

E[s̃] = ( ρ/(1 − ρ) ) E[x̃²]/(2E[x̃]) + E[x̃]
     = [ ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 + 1 ] E[x̃].     (5.2)

Combine this result with that of Exercise 5.6 to verify the validity of Little's result applied to the M/G/1 queueing system. [Hint: Use (5.15) rather than (5.14) as a starting point, and use the hint for Exercise 5.6.]
Solution. Begin with (5.15) and rewrite F*_s̃(s) as α(s)/β(s), where

α(s) = (1 − ρ) F*_x̃(s),

and

β(s) = 1 − ρ [ 1 − F*_x̃(s) ] / ( sE[x̃] ).
Simple calculations give us the limits of d/ds F*_s̃(s), α(s), and d/ds α(s) as s → 0:

lim_{s→0} α(s) = 1 − ρ,
lim_{s→0} d/ds α(s) = −(1 − ρ)E[x̃],
lim_{s→0} d/ds F*_s̃(s) = −E[s̃].
Taking the limit of β(s) as s → 0 results in both a zero numerator and denominator in the second term. We may apply L'Hôpital's rule in this case to find

lim_{s→0} β(s) = 1 − ρ lim_{s→0} [ −d/ds F*_x̃(s) ] / E[x̃]
             = 1 − ρ E[x̃]/E[x̃]
             = 1 − ρ.
We now find the limit of β′(s), where

d/ds β(s) = ( ρ/E[x̃] ) [ s d/ds F*_x̃(s) + 1 − F*_x̃(s) ] / s².
Then, taking the limit of this expression as s → 0 and twice applying L'Hôpital's rule,

lim_{s→0} d/ds β(s) = ( ρ/E[x̃] ) lim_{s→0} [ d/ds F*_x̃(s) + s d²/ds² F*_x̃(s) − d/ds F*_x̃(s) ] / (2s)
                    = ( ρ/E[x̃] ) lim_{s→0} [ d²/ds² F*_x̃(s) + s d³/ds³ F*_x̃(s) ] / 2
                    = ρ E[x̃²] / ( 2E[x̃] ).
Substituting these expressions into the formula for the derivative of F*_s̃(s), we find that

−E[s̃] = [ (1 − ρ)( −(1 − ρ)E[x̃] ) − (1 − ρ) ρE[x̃²]/(2E[x̃]) ] / (1 − ρ)²,

i.e.,

E[s̃] = [ ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 + 1 ] E[x̃].
To complete the exercise, combine this result with that of Exercise 5.6.
E[ñ]/E[s̃] = ρ [ 1 + ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 ] / ( E[x̃] [ ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 + 1 ] )
           = ρ / E[x̃]
           = λ.
Exercise 5.9 Using (5.17) and the properties of the Laplace transform,
show that
E[w̃] = ( ρ/(1 − ρ) ) ( ( Cx̃² + 1 )/2 ) E[x̃].     (5.3)
Combine this result with the result of Exercise 5.6 to verify the validity
of Little’s result when applied to the waiting line for the M/G/1 queueing
system.
Solution. Recall equation (5.17):

F*_w̃(s) = (1 − ρ) / [ 1 − ρ ( 1 − F*_x̃(s) ) / ( sE[x̃] ) ].
As in Exercise 5.8, write F*_w̃(s) = α(s)/β(s), where now α(s) = 1 − ρ and β(s) is as before. We now calculate the limit of d/ds β(s) by applying L'Hôpital's rule twice. This results in

lim_{s→0} d/ds β(s) = lim_{s→0} −( ρ/E[x̃] ) [ −s d/ds F*_x̃(s) − ( 1 − F*_x̃(s) ) ] / s²
                    = ( ρ/E[x̃] ) lim_{s→0} [ d/ds F*_x̃(s) + s d²/ds² F*_x̃(s) − d/ds F*_x̃(s) ] / (2s)
                    = ( ρ/E[x̃] ) lim_{s→0} [ d²/ds² F*_x̃(s) + s d³/ds³ F*_x̃(s) ] / 2
                    = ρ E[x̃²] / ( 2E[x̃] ).
E[w̃] = −lim_{s→0} d/ds F*_w̃(s)
      = (1 − ρ)(1 − ρ)^{−2} ρ E[x̃²]/(2E[x̃])
      = ( ρ/(1 − ρ) ) E[x̃²]/(2E[x̃])
      = ( ρ/(1 − ρ) ) [ E[x̃²] − E²[x̃] + E²[x̃] ] / ( 2E[x̃] )
      = ( ρ/(1 − ρ) ) ( ( Cx̃² + 1 )/2 ) E[x̃].
Combine this result with that of Exercise 5.6 to complete the exercise. That is, show that

( E[ñ] − λE[x̃] ) / λ = E[w̃],

where E[ñ] − λE[x̃] = E[ñq], the expected number of customers in queue.
( E[ñ] − λE[x̃] ) / λ = E[x̃] [ 1 + ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 ] − E[x̃]
                      = E[x̃] [ 1 + ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2 − 1 ]
                      = E[x̃] ( ρ/(1 − ρ) ) ( Cx̃² + 1 )/2
                      = E[w̃].
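The three mean-value formulas of Exercises 5.6, 5.8, and 5.9 can be cross-checked together on a concrete service law. Erlang-2 with µ = 2 (E[x̃] = 1, E[x̃²] = 1.5) and λ = 0.5 are illustrative choices of ours, not values from the text.

```python
# Little's result requires E[n] = lambda E[s] and E[n] - rho = lambda E[w];
# compute all three means from the derived formulas and compare.

lam = 0.5
Ex, Ex2 = 1.0, 1.5                  # Erlang-2 with mu = 2
rho = lam * Ex
C2 = (Ex2 - Ex ** 2) / Ex ** 2      # squared coefficient of variation

En = rho * (1 + rho / (1 - rho) * (C2 + 1) / 2)       # Exercise 5.6
Es = (rho / (1 - rho) * (C2 + 1) / 2 + 1) * Ex        # Exercise 5.8
Ew = rho / (1 - rho) * (C2 + 1) / 2 * Ex              # Exercise 5.9

print(En, lam * Es)        # equal, by Little's result
print(En - rho, lam * Ew)  # equal, for the waiting line
```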
We see that F*_ỹ(s) has the form f(s) = F(g(s)), so that

d/ds f(s) = ( d/dg F(g(s)) ) ( d/ds g(s) ).
Exercise 5.11 For the ordinary M/G/1 queueing system determine E[ỹ]
without first solving for Fỹ (s). [Hint: Condition on the length of the first
customer’s service and the number of customers that arrive during that pe-
riod of time.]
Solution. To determine E[ỹ], condition on the length of the first customer's service:

E[ỹ] = ∫₀^∞ E[ỹ | x̃ = x] dFx̃(x).     (5.11.1)
Next, condition this expected value on the number of customers who arrive
during the first customer’s service. i.e.,
E[ỹ | x̃ = x] = Σ_{v=0}^{∞} E[ỹ | x̃ = x, ṽ = v] ( (λx)^v / v! ) e^{−λx}.     (5.11.2)
Now, given that the length of the first customer’s service is x and the number
of customers who arrive during this time is v, it is easy to see that the length of
the busy period will be the length of the first customer’s service plus the length
of the sub busy periods generated by those v customers. That is,
E[ỹ | x̃ = x, ṽ = v] = x + Σ_{i=1}^{v} E[ỹi]
                     = x + vE[ỹ],
since each of the sub busy periods ỹi are drawn from the same distribution. If
we substitute this expression into (5.11.2),
E[ỹ | x̃ = x] = Σ_{v=0}^{∞} ( x + vE[ỹ] ) ( (λx)^v / v! ) e^{−λx}
             = x + λx E[ỹ].

To complete the derivation, substitute this result into (5.11.1) and solve for E[ỹ]. Thus,

E[ỹ] = ∫₀^∞ ( x + λx E[ỹ] ) dFx̃(x)
     = ∫₀^∞ x dFx̃(x) + λE[x̃]E[ỹ]
     = E[x̃] + ρE[ỹ].
i.e.,

E[ỹ] = E[x̃] / (1 − ρ).
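The result E[ỹ] = E[x̃]/(1 − ρ) can be checked by a small simulation. The event loop below, the parameter choices λ = 0.5, µ = 1 (so ρ = 0.5 and the mean busy period should be about 2), and the seed are all illustrative, not part of the original solution.

```python
# Simulate M/M/1 busy periods: a busy period's length equals the total
# service rendered to all customers served in it, and each service spawns
# a Poisson stream of further arrivals.

import random

def busy_period(lam, mu, rng):
    pending = 1                      # the arrival that starts the busy period
    length = 0.0
    while pending:
        pending -= 1
        s = rng.expovariate(mu)      # service time of this customer
        length += s
        t = rng.expovariate(lam)     # arrivals during this service
        while t < s:
            pending += 1
            t += rng.expovariate(lam)
    return length

rng = random.Random(12345)
n = 20000
mean = sum(busy_period(0.5, 1.0, rng) for _ in range(n)) / n
print(mean)   # close to E[x]/(1 - rho) = 2
```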
where ỹ denotes the length of the busy period for each of the subsequent ṽe
customers. By Little’s result applied to E[ṽe ], E[ṽe ] = λE[x̃e ]. Then, using
the result from Exercise 5.11,
E[ỹe] = E[x̃e] + ( ρ/(1 − ρ) ) E[x̃e]
      = E[x̃e] / (1 − ρ).
Exercise 5.13 For the M/G/1 queueing system with exceptional first service as defined in the previous exercise, show that F*_ỹe(s) = F*_x̃e(s + λ − λF*_ỹ(s)).
Solution. Recall that we may write F*_ỹe(s) as E[e^{−sỹe}]. As in the proof of Exercise 5.11, we shall first condition this expected value on the length of the first customer's service, and then condition it again on the number of customers who arrive during the first customer's service. That is,
E[e^{−sỹe}] = ∫₀^∞ E[e^{−sỹe} | x̃e = x] dFx̃e(x),     (5.13.1)
where

E[e^{−sỹe} | x̃e = x] = Σ_{v=0}^{∞} E[e^{−sỹe} | x̃e = x, ṽ = v] P{ṽ = v}.     (5.13.2)
Here, x̃e represents the length of the first customer’s service and ṽ is the num-
ber of arrivals during that time. Now, the length of the busy period will be the
length of the first customer’s service plus the length of the sub busy periods
generated by those customers who arrive during the first customer’s service.
Hence,

E[e^{−sỹe} | x̃e = x, ṽ = v] = E[ e^{−s( x + Σ_{i=0}^{v} ỹi )} ],

where ỹ0 = 0 with probability 1. Furthermore, x and v are constants at this stage, and the ỹi are independent identically distributed random variables, so that
E[e^{−sỹe} | x̃e = x, ṽ = v] = e^{−sx} ∏_{i=1}^{v} E[e^{−sỹi}]
                             = e^{−sx} E^v[e^{−sỹ}].
Substituting this into (5.13.2),

E[e^{−sỹe} | x̃e = x] = Σ_{v=0}^{∞} e^{−sx} e^{−λx} ( E[e^{−sỹ}] λx )^v / v!
                      = e^{−sx} e^{−λx} e^{λx E[e^{−sỹ}]}.
To complete the proof, substitute this into (5.13.1), and then combine the exponents of e:

E[e^{−sỹe}] = ∫₀^∞ e^{−x( s + λ − λE[e^{−sỹ}] )} dFx̃e(x)
            = F*_x̃e( s + λ − λE[e^{−sỹ}] )
            = F*_x̃e( s + λ − λF*_ỹ(s) ).
E[x̃e] = ρ E[x̃²] / ( 2E[x̃] ).
Explain why these formulas have this relationship. What random variable
must x̃e represent in this form? [Hint: Consider the operation of the M/G/1
queueing system under a nonpreemptive, LCFS, service discipline and ap-
ply Little’s result, taking into account that an arriving customer may find
the system empty.]
Solution. Consider the waiting time under LCFS. We know from Little’s result
that E[w̃LCF S ] is the same as E[w̃F CF S ] because E[ñF CF S ] = E[ñLCF S ].
But E[w̃LCF S |no customers present] = 0 and E[w̃LCF S |at least 1 customer
present] = E[length of busy period started by customer in service] = E[x̃e ]/(1−
ρ). But the probability of at least one customer present is ρ, so that

E[w̃] = ( ρ/(1 − ρ) ) E[x̃e].
On the other hand, we know that

E[w̃] = ( ρ/(1 − ρ) ) E[x̃²]/(2E[x̃]).

Therefore

E[x̃e] = E[x̃²] / ( 2E[x̃] )
represents the expected remaining service time of the customer in service, if
any, at the arrival time of an arbitrary customer.
Exercise 5.15 Show that the final summation on the right-hand side of
(5.28) is K + 1 if ℓ = (K + 1)m + n and zero otherwise.
Solution. First, suppose that ℓ = (K + 1)m + n. Then (ℓ − n)/(K + 1) = m, and the final summation is

Σ_{k=0}^{K} e^{−j2πkm} = Σ_{k=0}^{K} 1 = K + 1.
Now suppose that ℓ is not of the above form; that is, ℓ = (K + 1)m + n + r, 0 < r < (K + 1). Then the final summation is

Σ_{k=0}^{K} e^{−j2πk(ℓ−n)/(K+1)} = Σ_{k=0}^{K} e^{−j2πkr/(K+1)},

since e^{−j2πkm} = 1 for all integers k, m. Now, note that 0 < r < K + 1 implies that this summation may be summed as a geometric series. That is,

Σ_{k=0}^{K} e^{−j2πkr/(K+1)} = ( 1 − e^{−j2πr} ) / ( 1 − e^{−j2πr/(K+1)} ) = 0.
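Both cases can be confirmed numerically with cmath; K = 4 and the two values of ℓ − n are arbitrary illustrative choices.

```python
# Sum of (K+1)-th roots of unity: K + 1 when (l - n) is a multiple of
# K + 1, and zero otherwise.

import cmath

def root_sum(K, diff):
    return sum(cmath.exp(-1j * 2 * cmath.pi * k * diff / (K + 1))
               for k in range(K + 1))

K = 4
print(abs(root_sum(K, 10)))   # 10 = 2(K+1): sum is K + 1 = 5
print(abs(root_sum(K, 7)))    # not a multiple of 5: sum is ~0
```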
(z − z0) Fñ(z) = (z − z0) (1 − ρ)(z − 1) F*_x̃(λ[1 − z]) / [ z − F*_x̃(λ[1 − z]) ].
Now, taking the limit as z → z0 , the denominator of the right-hand side goes to
zero by definition of z0 . Hence we may apply L’Hôpital’s rule to the right-hand
side:
[ (z − z0) d/dz { (1 − ρ)(z − 1) F*_x̃(λ[1 − z]) } + (1 − ρ)(z − 1) F*_x̃(λ[1 − z]) ] / [ 1 − d/dz F*_x̃(λ[1 − z]) ].
Fña(z) = Σ_{n=0}^{∞} P{ña = n} z^n,

where each of the P{ña = n} is, of course, nonnegative, and whose sum is equal to one. Now Fñ(z) is also a probability generating function and so is bounded within the unit circle for z > 0. Hence it can have no poles, and its denominator no zeros, in that region. If the denominator z − F*_x̃(λ[1 − z]) had a zero at z = 0, then F*_x̃(λ) = 0, which would imply P{ña = 0} = 0. That is, the probability that the queue is ever empty would be zero, and so stability would never be achieved. Since this is not possible, we conclude that the denominator has no zero at z = 0. This means that P{ña = 0} > 0.
Consider the graphs of the functions f1(z) = z and f2(z) = F*_x̃(λ[1 − z]). The graph of f1(z) is a line which starts at 0 and whose slope is 1. On the other hand, the graph of f2(z) starts above the origin and slowly increases towards 1 as z → 1, with f1(z) = f2(z) at z = 1. At z = 1, d/dz f2(z) = ρ < 1, and d/dz f1(z) = 1. But the slope of f2(z) is an increasing function, so that as z increases, d/dz f2(z) also increases. This means that eventually, f2(z) will intersect f1(z). This will happen at the point z = z0. However, since d/dz f2(z) is increasing, while d/dz f1(z) is constant, f2(z) and f1(z) will intersect only once for z > 1.
Exercise 5.18 Starting with (5.29) and (5.32), establish the validity of
(5.33) through (5.36).
Solution. If i = K + 1 in (5.29), then pK+1+n ≈ pK+1 r0n . In addition, if
i = K and n = 1, then pK+1 ≈ pK r0 . Combining these results,
which implies
∞
X
c0,K − p0 = pm(K+1) . (5.18.2)
m=1
Now,

Σ_{m=1}^{∞} p_{m(K+1)} = p_{K+1} + [ Σ_{m=1}^{∞} p_{m(K+1)} − p_{K+1} ]
                       = p_{K+1} + Σ_{m=1}^{∞} p_{(m+1)(K+1)}
                       = p_{K+1} + Σ_{m=1}^{∞} p_{m(K+1)+K+1}
                       ≈ p_K r0 + r0 Σ_{m=1}^{∞} p_{m(K+1)+K}
                       = r0 [ p_K + Σ_{m=1}^{∞} p_{m(K+1)+K} ]
                       = r0 cK,K,
where the last equality follows from (5.29) with n = K. Substituting this result into (5.18.2) and solving for r0,

r0 ≈ ( c0,K − p0 ) / cK,K.     (5.34)
Exercise 5.19 Approximate the distribution of the service time for the
previous example by an exponential distribution with an appropriate mean.
Plot the survivor function for the corresponding M/M/1 system at 95% uti-
lization. Compare the result to those shown in Figure 5.2.
Solution. Figure 5.19.1 illustrates the survivor function for the M/M/1 system,
with an exponential service distribution and ρ = .95. Observe that this function
closely approximates that of the truncated geometric distribution in Figure 5.2
of the text, with a = 1, b = 5000, but that the survivor function is significantly
different for the rest of the curves. Thus, the exponential distribution is a good
approximation only in the special case where the truncated distribution closely
resembles the exponential but not otherwise.
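The M/M/1 survivor function compared above can be generated directly: for M/M/1, P{occupancy exceeds n} = ρ^{n+1} (the standard M/M/1 tail, stated here as a known fact rather than derived in this exercise). A minimal sketch at 95% utilization:

```python
# Survivor function of the M/M/1 occupancy distribution at rho = 0.95,
# evaluated at the axis points of Figure 5.19.1.

rho = 0.95

def survivor(n):
    return rho ** (n + 1)

for n in (0, 25, 50, 75, 100):
    print(n, survivor(n))
```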
Exercise 5.20 Starting with (5.39), demonstrate the validity of (5.47).
Solution. Beginning with (5.39), and using s = λ[1 − z] and E[x̃] = 1/µ,

F*_x̃r(s) = [ 1 − F*_x̃(s) ] / ( sE[x̃] )
         = [ 1 − F*_x̃(s) ] / ( λ[1 − z](1/µ) )
         = ( 1/ρ ) [ 1 − F*_x̃(s) ] Σ_{i=0}^{∞} z^i.

By (5.43), F*_x̃(s) = Σ_{i=0}^{∞} z^i Px̃,i, so that

F*_x̃r(s) = ( 1/ρ ) [ 1 − Σ_{i=0}^{∞} z^i Px̃,i ] Σ_{i=0}^{∞} z^i,
[Figure 5.19.1: survivor function, P{occupancy exceeds n} versus occupancy n, for the M/M/1 system at 95% utilization.]
so that

F*_x̃r(s) = ( 1/ρ ) Σ_{i=0}^{∞} z^i [ 1 − Σ_{n=0}^{i} Px̃,n ].
Exercise 5.21 Evaluate Px̃,i for the special case in which x̃ has the ex-
ponential distribution with mean 1/µ. Starting with (5.50), show that the
ergodic occupancy distribution for the M/M/1 system is given by Pi =
(1 − ρ)ρi , where ρ = λ/µ.
Solution. Since the interarrival times of the Poisson process are exponential, rate λ, we have by Proposition 4, page 35, that

Px̃,i = ( λ/(λ + µ) )^i ( µ/(λ + µ) )
     = ( ρ/(1 + ρ) )^i ( 1/(1 + ρ) ).
Therefore

Σ_{n=0}^{i} Px̃,n = ( 1/(1 + ρ) ) Σ_{n=0}^{i} ( ρ/(1 + ρ) )^n
                 = ( 1/(1 + ρ) ) [ 1 − ( ρ/(1 + ρ) )^{i+1} ] / [ 1 − ρ/(1 + ρ) ]
                 = 1 − ( ρ/(1 + ρ) )^{i+1}.
We now show that Pi = (1 − ρ)ρ^i, where ρ = λ/µ. Let T denote the truth set for this proposition. Then 0 ∈ T since, from (5.94),

P0 = (1 − ρ) Px̃,0 / Px̃,0 = 1 − ρ.
Exercise 5.23 Evaluate Px̃,i for the special case in which P{x̃ = 1/2} = P{x̃ = 3/2} = 1/2. Use (5.50) to calculate the occupancy distribution. Compare the complementary occupancy distribution (P{N > i}) for this system to that of the M/M/1 system with µ = 1.
Solution. As in Exercise 5.5.22, if x̃ is deterministic, then Px̃,i is a weighted
Poisson distribution, with
i i
e− 12 λ e− 23 λ
1 3
1 2λ 1 2λ
Px̃,i = + .
2 i! 2 i!
The Basic M/G/1 Queueing System

[Figure 5.23.1: P{occupancy exceeds n} versus n for exponential and deterministic service at λ = .5 and λ = .9 (logarithmic vertical scale).]
The computer program used in Exercise 5.22 was modified and the new occupancy probabilities calculated. As in Exercise 5.22, the graphs of both distributions were very similar for λ = 0.1, with the probabilities of both summing to unity within n = 9. As λ increased, however, the differences between the two systems became apparent, with the deterministic complementary distribution approaching zero before that of the exponential system. The graphs comparing the resultant complementary probabilities to those of the M/M/1 system for λ = 0.5 and λ = 0.9 are shown in Figure 5.23.1.
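The comparison described above can be sketched as follows, again assuming that (5.50) is the standard M/G/1 embedded-chain recursion (an assumption, since the equation is not reproduced in this excerpt). The P-K mean-value formula and the M/M/1 closed form provide independent checks.

```python
import math

def occupancy(a, n_max, rho):
    # assumed form of (5.50): the standard M/G/1 embedded-chain recursion
    p = [1.0 - rho]
    for n in range(n_max):
        acc = p[n] - p[0] * a(n) - sum(p[j] * a(n + 1 - j) for j in range(1, n + 1))
        p.append(acc / a(0))
    return p

def pois(i, m):
    return math.exp(-m) * m ** i / math.factorial(i)

lam = 0.5                           # unit-mean service, so rho = lam

# weighted Poisson P_{x,i} for P{x = 1/2} = P{x = 3/2} = 1/2
a_mix = lambda i: 0.5 * pois(i, lam / 2) + 0.5 * pois(i, 3 * lam / 2)

p = occupancy(a_mix, 40, lam)

# self-check: mean occupancy against E[n] = rho + lam^2 E[x^2]/(2(1 - rho))
ex2 = 0.5 * 0.25 + 0.5 * 2.25       # E[x^2] = 1.25
mean = sum(i * pi for i, pi in enumerate(p))
pk_mean = lam + lam ** 2 * ex2 / (2 * (1 - lam))

# complementary distributions at n = 6
tail_mix = 1.0 - sum(p[:7])         # two-point ("deterministic") service
tail_mm1 = lam ** 7                 # M/M/1: P{N > n} = rho^(n+1)
```

As the solution observes, the lower-variance service distribution yields the smaller complementary occupancy probabilities.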
Exercise 5.24 Beginning with (5.59), suppose Fx̃e (x) = Fx̃ (x). Show
that (5.59) reduces to the standard Pollaczek-Khintchine transform equation
for the queue length distribution in an ordinary M/G/1 system.
Solution. Equation (5.59) is as follows:

    Fq̃(z)[z − Fã(z)] = π0 [z Fb̃(z) − Fã(z)].   (5.59)
Since Fx̃e(x) = Fx̃(x), we have Fb̃(z) = Fã(z). Upon substitution of this result in (5.59), we have

    Fq̃(z)[z − Fã(z)] = π0 Fã(z)[z − 1],

so that

    Fq̃(z) = π0 (1 − z) Fã(z) / (Fã(z) − z),

which, with π0 = 1 − ρ, is the standard Pollaczek-Khintchine transform equation for the queue length distribution of the ordinary M/G/1 system.
Exercise 5.25 Beginning with (5.59), use the fact that lim_{z→1} Fq̃(z) = 1 to show that

    π0 = [1 − F′ã(1)] / [1 − F′ã(1) + F′b̃(1)].   (5.5)
    [π0 π1 ... πm] = π0 [b0 b1 ... bm] + [π1 π2 ... πm+1]
        [ a0  a1  a2  ...  am   ]
        [ 0   a0  a1  ...  am-1 ]
        [ 0   0   a0  ...  am-2 ]
        [ .   .   .   ...  .    ]
        [ 0   0   0   ...  a0   ] .

We may write the left side of the previous equation as

    π0 [1 0 ... 0] + [π1 π2 ... πm+1] [ 0  Im ]
                                      [ 0  0  ] ,

and define

    D = [ 0  Im ]  −  [ a0  a1  a2  ...  am   ]
        [ 0  0  ]     [ 0   a0  a1  ...  am-1 ]
                      [ 0   0   a0  ...  am-2 ]
                      [ .   .   .   ...  .    ]
                      [ 0   0   0   ...  a0   ] .
    (1/z)[Fq̃(z) − π0] = Σ_{j=1}^∞ πj z^{j−1} = Σ_{j=0}^∞ π_{j+1} z^j.
Thus, the result holds for i = 1. Assume the result holds for i − 1; then

    Fq̃^{(i−1)}(z) = Σ_{j=0}^∞ π_{i−1+j} z^j.

Now, since

    Fq̃^{(i)}(z) = (1/z)[Fq̃^{(i−1)}(z) − π_{i−1}],

we have

    Fq̃^{(i)}(z) = (1/z) Σ_{j=0}^∞ π_{i−1+j} z^j − (1/z)π_{i−1} = (1/z) Σ_{j=1}^∞ π_{i−1+j} z^j.

Hence

    Fq̃^{(i)}(z) = Σ_{j=0}^∞ π_{i+j} z^j.
Now, (5.68) is

    Fq̃(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj [z^C Fb̃,j(z) − z^j Fã(z)].   (5.68)
To begin, we substitute Fq̃(z) = zFq̃^{(1)}(z) + π0 into (5.68) to obtain

    zFq̃^{(1)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj z^C Fb̃,j(z) − Σ_{j=0}^{C−1} πj z^j Fã(z) − z^C π0 + π0 Fã(z).

Simplifying, we find

    zFq̃^{(1)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj z^C Fb̃,j(z) − Σ_{j=1}^{C−1} πj z^j Fã(z) − z^C π0.   (5.68.1)
Similarly, we substitute zFq̃^{(2)}(z) + π1 for Fq̃^{(1)}(z) in (5.68.1) to find

    z²Fq̃^{(2)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj z^C Fb̃,j(z) − Σ_{j=1}^{C−1} πj z^j Fã(z)
                                  − z^C π0 − z^{C+1} π1 + π1 z Fã(z),

and simplifying leads to

    z²Fq̃^{(2)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj z^C Fb̃,j(z) − Σ_{j=2}^{C−1} πj z^j Fã(z) − z^C π0 − z^{C+1} π1.   (5.68.2)
Continuing in this way leads to

    z^C Fq̃^{(C)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj z^C Fb̃,j(z) − Σ_{j=0}^{C−1} z^{C+j} πj,

or, upon dividing through by z^C,

    Fq̃^{(C)}(z)[z^C − Fã(z)] = Σ_{j=0}^{C−1} πj [Fb̃,j(z) − z^j].   (5.69)
Exercise 5.30 Suppose Fã (z) and Fb̃ (z) are each polynomials of degree
m as discussed in the first part of this section. Define w(z) = 1. Find ν, D,
N , A, and E using (5.72), (5.73), and (5.78). Compare the results to those
presented in (5.62) and (5.66).
Solution. We are considering the case where C = 1. Recall that Fã(z) = u(z)/w(z). With w(z) = 1, u(z) = Fã(z); similarly, v(z) = Fb̃(z). Thus, d(z) = z − Fã(z) and n0(z) = n(z) = Fb̃(z) − 1 for this special case. Now, m ≥ 1 because otherwise there would never be any arrivals. Recall that νd and νn are the degrees of d(z) and n(z), respectively. Since m ≥ 1, νd = νn = m. Thus,

    ν = max{νd, νn + 1} = max{m, m + 1} = m + 1.

Using (5.73), we then have
Table 5.1. Occupancy values as a function of the number of units served during a service
period for the system analyzed in Example 5.8.
Exercise 5.31 Table 5.5 gives numerical values for the survivor function
of the occupancy distributions shown in Figure 5.5. From this table, deter-
mine the probability masses for the first few elements of the distributions
and then compute the mean number of units served during a service time
for C = 1, 2, 4, and 8. Analyze the results of your calculations.
Solution. The main idea here is that if there are at least C units present in the system at the beginning of a service interval, then the number of units served will be C. Otherwise, the number served will be the same as the number of units present; equivalently, the servers will serve all units present at the beginning of the service interval. Hence, the expected number of units served will be

    Σ_{i=1}^{C−1} i P{q̃ = i} + C P{q̃ ≥ C}.
To obtain the probability masses, we simply subtract successive values of the
survivor functions. For example, for the case C = 2, we have P {q̃ = 0} =
1 − 0.9493 = 0.0507, P {q̃ = 1} = 0.9493 − 0.8507 = 0.0986, P {q̃ ≥ 2} =
P {q̃ > 1} = 0.8507. Hence the expected number of services during a service
interval is 1 × 0.0986 + 2 × 0.8507 = 1.8. From this, we see that the system
utilization is 0.9. If we repeat this for C = 4, we will find that the expected
number of services that occur during a service time is 3.6 so that again the
system utilization is 0.9.
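The arithmetic just described can be sketched in a few lines; the survivor values for C = 2 are the ones quoted above from Table 5.5.

```python
def expected_served(survivor, C):
    """Expected units served per service interval.
    survivor[n] = P{q > n}, n = 0, ..., C-1 (values read from Table 5.5)."""
    # probability masses by differencing the survivor function
    masses = [1.0 - survivor[0]] + [
        survivor[i - 1] - survivor[i] for i in range(1, C)
    ]
    return sum(i * masses[i] for i in range(1, C)) + C * survivor[C - 1]

# C = 2, using the survivor values quoted above
served = expected_served([0.9493, 0.8507], 2)
utilization = served / 2          # expected services per server
```

This reproduces the expected 1.8 services per interval and a utilization of 0.9.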
Exercise 5.32 Use a busy period argument to establish the validity of
(5.90). [Hint: Consider the M/G/1 system under the nonpreemptive LCFS
service discipline.]
The Basic M/G/1 Queueing System 171
Solution. Let B be the event that an arbitrary customer finds the system busy upon arrival, and let I represent its complement. Then

    E[w̃] = E[w̃|B] P{B} + E[w̃|I] P{I}.

But E[w̃|I] = 0 since the customer enters service immediately if the system is idle upon arrival. Therefore,

    E[w̃] = P{B} E[w̃|B] = ρE[w̃|B].
Now, the total amount of waiting time for the arbitrary customer in a LCFS dis-
cipline will be the length of the busy period started by the customer in service
and generated by the newly arrived customers. Hence,
    E[w̃|B] = E[x̃e] / (1 − ρ).
Multiplying this expression by ρ, the probability that the system is busy upon
arrival, we find that
    E[w̃] = ρE[w̃|B] = (ρ/(1 − ρ)) E[x̃e],
which is the desired result.
Exercise 5.33 Show that the Laplace-Stieltjes transform for the distri-
bution of the residual life for the renewal process having renewal intervals
of length z̃ is given by
1 − Fz̃∗ (s)
Fz̃∗r (s) = . (5.98)
s E[z̃]
Solution. By Definition 2.8,

    Fz̃r*(s) = ∫₀^∞ e^{−sz} dFz̃r(z)
             = ∫₀^∞ e^{−sz} [1 − Fz̃(z)]/E[z̃] dz
             = (1/E[z̃]) ∫₀^∞ e^{−sz} ∫_z^∞ dFz̃(y) dz
             = (1/E[z̃]) ∫₀^∞ [ ∫₀^y e^{−sz} dz ] dFz̃(y)
             = (1/E[z̃]) ∫₀^∞ (1/s)[1 − e^{−sy}] dFz̃(y)
             = (1/(s E[z̃])) [1 − Fz̃*(s)].
Exercise 5.35 For the M/G/1 system, suppose that x̃ = x̃1 + x̃2 where
x̃1 and x̃2 are independent, exponentially distributed random variables with
parameters µ1 and µ2 , respectively. Show that Cx̃2 ≤ 1 for all µ1 , µ2 such
that E[x̃] = 1.
Solution. First, by the fact that E[x̃] = 1,

    Cx̃² = Var(x̃)/E²[x̃] = Var(x̃1 + x̃2) = Var(x̃1) + Var(x̃2) = 1/µ1² + 1/µ2².

Since E[x̃] = 1/µ1 + 1/µ2 = 1,

    1/µ1² + 1/µ2² = (1/µ1 + 1/µ2)² − 2/(µ1µ2) = 1 − 2/(µ1µ2) ≤ 1.
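The identity can be spot-checked numerically over the feasible parameterizations, i.e., pairs (µ1, µ2) with 1/µ1 + 1/µ2 = 1:

```python
# spot-check of Exercise 5.35: C^2 = 1/mu1^2 + 1/mu2^2 = 1 - 2/(mu1*mu2) <= 1
# whenever 1/mu1 + 1/mu2 = 1
for m1_inv in [0.05, 0.25, 0.5, 0.75, 0.95]:   # 1/mu1 ranges over (0, 1)
    m2_inv = 1.0 - m1_inv                       # enforce E[x] = 1
    c2 = m1_inv ** 2 + m2_inv ** 2
    assert c2 <= 1.0
    assert abs(c2 - (1.0 - 2.0 * m1_inv * m2_inv)) < 1e-12
```

Equality is approached only as one of the two exponential stages vanishes.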
Exercise 5.36 Compute the expected waiting time for the M/G/1 system with unit mean deterministic service times and for the M/G/1 system with service times drawn from the unit mean Erlang-2 distribution. Plot E[w̃] as a function of ρ for these two distributions and for the M/M/1 queueing system with µ = 1 on the same graph. Compare the results.
Solution. To compute the expected waiting time, recall equation (5.101):

    E[w̃] = (ρE[z̃]/(1 − ρ)) · (1 + Cz̃²)/2,

where

    Cz̃² = Var(z̃)/E²[z̃].

For all three systems, note that E[z̃] = 1. Thus, we need only find the variance (which will be equal to Cz̃² in this case) of each to compute its expected waiting time. For the deterministic system, the variance is zero, so that

    E[w̃d] = (ρ/(1 − ρ)) · (1 + 0)/2 = ρ/(2(1 − ρ)).

For the Erlang-2 system, Cz̃² = 1/2, so that E[w̃E2] = 3ρ/(4(1 − ρ)); and for the exponential system, Cz̃² = 1, so that E[w̃M] = ρ/(1 − ρ).
From Figure 5.36.1, we see that as the coefficient of variation increases, the performance of the system deteriorates; the mean waiting time grows with the coefficient of variation. In addition, we see from the figure that the mean waiting time also increases with ρ. This should be clear since if ρ = λ/µ increases, then the arrival rate also increases, putting a greater load on the system.
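The three curves can be generated directly from (5.101); the values below assume unit-mean service, so ρ = λ.

```python
def mean_wait(rho, c2):
    """E[w] from (5.101) with E[z] = 1: rho/(1-rho) * (1 + C^2)/2."""
    return rho / (1.0 - rho) * (1.0 + c2) / 2.0

rhos = [0.5, 0.8, 0.9, 0.95]
deterministic = [mean_wait(r, 0.0) for r in rhos]   # C^2 = 0
erlang2       = [mean_wait(r, 0.5) for r in rhos]   # C^2 = 1/2
exponential   = [mean_wait(r, 1.0) for r in rhos]   # C^2 = 1
```

At every utilization the curves order as deterministic < Erlang-2 < exponential, reflecting the increasing coefficient of variation.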
Exercise 5.37 For the M/G/1 system, suppose that x̃ is drawn from the
distribution Fx̃1 (x) with probability p and from Fx̃2 (x) otherwise, where
x̃1 and x̃2 are independent, exponentially distributed random variables with
parameters µ1 and µ2 , respectively. Let E[x̃] = 1. Show that Cx̃2 ≥ 1 for
all p ∈ [0, 1].
Solution. By the definition of Cx̃², and since E[x̃] = 1,

    Cx̃² = Var(x̃)/E²[x̃] = E[x̃²] − 1.

Since x̃ is drawn from Fx̃1(x) with probability p and from Fx̃2(x) otherwise,

    dFx̃(x) = p dFx̃1(x) + (1 − p) dFx̃2(x).

Therefore,

    E[x̃ⁿ] = p E[x̃1ⁿ] + (1 − p) E[x̃2ⁿ].

For the exponential distribution with rate µ, E[x̃²] = 2/µ², and so

    E[x̃²] = p(2/µ1²) + (1 − p)(2/µ2²) = 2[p/µ1² + (1 − p)/µ2²].
[Figure 5.36.1: E[waiting time] versus ρ (0.80 to 0.98) for deterministic, Erlang-2, and exponential service.]
Since Cx̃² = E[x̃²] − 1 = 2[p/µ1² + (1 − p)/µ2²] − 1, it suffices to show that

    p/µ1² + (1 − p)/µ2² ≥ 1,

i.e.,

    p/µ1² + (1 − p)/µ2² − 1 ≥ 0.

Now,

    E[x̃] = p/µ1 + (1 − p)/µ2 = 1,

so that

    E²[x̃] = p²/µ1² + 2p(1 − p)/(µ1µ2) + (1 − p)²/µ2² = 1.
Thus,

    p/µ1² + (1 − p)/µ2² − 1 = p/µ1² + (1 − p)/µ2² − p²/µ1² − 2p(1 − p)/(µ1µ2) − (1 − p)²/µ2²
                            = p(1 − p)/µ1² + p(1 − p)/µ2² − 2p(1 − p)/(µ1µ2)
                            = p(1 − p)[1/µ1 − 1/µ2]² ≥ 0,

which establishes that Cx̃² ≥ 1.
    2p/µ1² + 2(1 − p)/µ2² = Cx̃² + 1.   (5.38.2)

Rearranging (5.38.2),

    1/µ1² + [(1 − p)/p](1/µ2²) = (Cx̃² + 1)/(2p).   (5.38.3)

Solving equation (5.38.1) for 1/µ1 and squaring both sides,

    1/µ1² = (1/p²)[1 − 2(1 − p)/µ2 + (1 − p)²/µ2²].

Substituting this expression into (5.38.3) yields

    1/µ2² − 2/µ2 + [1 − (p/2)(Cx̃² + 1)]/(1 − p) = 0.

And solving this equation for 1/µ2, we obtain

    1/µ2 = 1 ± sqrt( [p/(1 − p)] · (Cx̃² − 1)/2 ).   (5.38.4)
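The reconstructed relation (5.38.4) can be sanity-checked numerically, assuming (5.38.1) is the unit-mean constraint p/µ1 + (1 − p)/µ2 = 1 (that equation is not shown in this excerpt):

```python
import math

p, c2_target = 0.3, 4.0            # any p in (0, 1) and any C^2 > 1

# (5.38.4), taking the negative root so that 1/mu2 < 1
m2_inv = 1.0 - math.sqrt(p / (1.0 - p) * (c2_target - 1.0) / 2.0)
# assumed (5.38.1): p/mu1 + (1 - p)/mu2 = 1, solved for 1/mu1
m1_inv = (1.0 - (1.0 - p) * m2_inv) / p

ex = p * m1_inv + (1.0 - p) * m2_inv                    # E[x]
ex2 = 2.0 * (p * m1_inv ** 2 + (1.0 - p) * m2_inv ** 2) # E[x^2] of the mixture
c2 = ex2 - ex ** 2                                      # C^2 when E[x] = 1
```

Recovering the target Cx̃² confirms that (5.38.2)-(5.38.4) are mutually consistent.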
so that

    (d/dz)Fz̃r(z) = [1 − Fz̃(z)] / E[z̃],

as was shown in the previous subsection.
Exercise 5.40 Formalize the informal discussion of the previous para-
graph.
Solution. Let Y (t) denote the total length of time the server has spent serving
up to time t, so that Y (t)/t represents the proportion of time the server has
spent serving up to time t. Now let N (t) denote the total number of busy
periods completed up to time t. So long as there has been at least one busy
period up to this time, we may write
    Y(t)/t = [Y(t)/N(t)] · [N(t)/t].
For a fixed t, Y (t) is a random variable. Thus, the long-term average propor-
tion of time the server is busy is
    lim_{t→∞} Y(t)/t = lim_{t→∞} [Y(t)/N(t)] · [N(t)/t].
It remains to show that

    lim_{t→∞} [Y(t)/N(t)] · [N(t)/t] = lim_{t→∞} [Y(t)/N(t)] · lim_{t→∞} [N(t)/t],

that is, that these limits exist separately.
We consider the limit of t/N(t) as t → ∞. Let cn = xn + yn denote the length of the n-th cycle. Then

    Σ_{n=0}^{N(t)} cn / N(t) ≤ t/N(t) ≤ Σ_{n=0}^{N(t)+1} cn / N(t),

so that

    lim_{t→∞} t/N(t) = E[c̃] = E[x̃] + E[ỹ].
Therefore,

    lim_{t→∞} N(t)/t = 1/(E[x̃] + E[ỹ]).

Similarly, lim_{t→∞} Y(t)/N(t) = E[ỹ]. Since the limits exist separately, the long-run proportion of time the server is busy is

    lim_{t→∞} Y(t)/t = E[ỹ]/(E[x̃] + E[ỹ]).
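The renewal-reward conclusion (busy fraction E[ỹ]/(E[x̃] + E[ỹ])) can be illustrated by a small simulation; the exponential distributions and means below are arbitrary choices for the sketch.

```python
import random

random.seed(1)

EX, EY = 2.0, 3.0                  # illustrative mean idle / busy durations
N = 200_000                        # number of cycles

busy_time = total_time = 0.0
for _ in range(N):
    x = random.expovariate(1.0 / EX)   # idle portion of the cycle
    y = random.expovariate(1.0 / EY)   # busy portion of the cycle
    busy_time += y
    total_time += x + y

busy_fraction = busy_time / total_time   # should approach EY/(EX+EY) = 0.6
```

The simulated fraction converges to E[ỹ]/(E[x̃] + E[ỹ]) regardless of the cycle-length distributions, which is the content of the argument above.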
    E[z^m̃] = z^{a−1} · (θz/[1 − (1 − θ)z]) · (1 − [(1 − θ)z]^{b−(a−1)}) / (1 − (1 − θ)^{b−(a−1)}),

and

    E[m̃] = (a − 1) + 1/θ − [b − (a − 1)](1 − θ)^{b−(a−1)} / (1 − (1 − θ)^{b−(a−1)}).
(b) Rearrange the expression for E[m̃] given above by solving for θ^{−1} to obtain an equation of the form

    1/θ = f(E[m̃], a, b, θ),

and use this expression to obtain a recursive expression for θ of the form

    1/θ_{i+1} = f(E[m̃], a, b, θi).
(c) Write a simple program to implement the recursive relationship de-
fined in part (b) to solve for θ in the special case of a = 10, b = 80,
and E[m̃] = 30. Use θ0 = E −1 [m̃] as the starting value for the
recursion.
(d) Argue that Fx̃*(s) = Fm̃*(s/C), where C is the transmission capacity in octets/sec.
(e) Using the computer program given in the Appendix, obtain the com-
plementary occupancy distribution for the transmission system under
its actual message length distribution at a traffic utilization of 95%,
assuming a transmission capacity of 30 characters/sec.
(f) Compare this complementary distribution to one obtained under the
assumption that the message lengths are drawn from an ordinary ge-
ometric distribution. Comment on the suitability of making the geo-
metric assumption.
Solution:
Therefore,

    k = [(1 − θ)^{a−1} − (1 − θ)^b]^{−1}.
To find E[m̃], observe that

    E[z^m̃] = Σ_{m=a}^b z^m P{m̃ = m}
            = Σ_{m=a}^b z^m kθ(1 − θ)^{m−1}
            = kθz Σ_{m=a}^b [z(1 − θ)]^{m−1}
            = kθz ([z(1 − θ)]^{a−1} − [z(1 − θ)]^b) / (1 − z(1 − θ))
            = z^{a−1} · (θz[(1 − θ)^{a−1} − z^{b−(a−1)}(1 − θ)^b] / (1 − z(1 − θ))) · 1/[(1 − θ)^{a−1} − (1 − θ)^b]
            = z^{a−1} · (θz/(1 − z(1 − θ))) · (1 − [(1 − θ)z]^{b−(a−1)}) / (1 − (1 − θ)^{b−(a−1)}).
To compute E[m̃], first note the following probabilities:

    P{m̃ > x} = 1,                              x < a,
              = Σ_{m=x+1}^b P{m̃ = m},          a ≤ x < b,
              = 0,                              x ≥ b,

that is,

    P{m̃ > x} = 1,                              x < a,
              = k[(1 − θ)^x − (1 − θ)^b],       a ≤ x < b,
              = 0,                              x ≥ b,

so that

    E[m̃] = Σ_{x=0}^∞ P{m̃ > x}
" b−1 #
(1 − θ)x − (b − a)(1 − θ)b + a
X
= k
"x=a #
(1 − θ)a − (1 − θ)b
= k − (b − a)(1 − θ)b + a
1 − (1 − θ)
kh i
= (1 − θ)a − (1 − θ)b − θ(b − a)(1 − θ)b + a
θ" #
1 (1 − θ)a − (1 − θ)b − θ(b − a)(1 − θ)b
= +a
θ (1 − θ)a−1 − (1 − θ)b
" #
1 (1 − θ) − (1 − θ)b−(a−1) − θ(b − a)(1 − θ)b−(a−1)
=
θ 1 − (1 − θ)b−(a−1)
+a
1 1 + (b − a)(1 − θ)b−(a−1)
= − +a
θ 1 − (1 − θ)b−(a−1)
1 1 − (1 − θ)b−(a−1) + [b − (a − 1)](1 − θ)b−(a−1)
= −
θ 1 − (1 − θ)b−(a−1)
+a
1 [b − (a − 1)](1 − θ)b−(a−1)
= (a − 1) + − .
θ 1 − (1 − θ)b−(a−1)
But

    P{m̃ > c} = Σ_{i=c+1}^∞ θ(1 − θ)^{i−1} = θ(1 − θ)^c / (1 − (1 − θ)) = (1 − θ)^c,

and

    E[m̃1 | m̃ > c] = c + 1/θ.

Thus,

    P{m̃ ≤ c} = 1 − (1 − θ)^c,
Solving the expression for E[m̃] for 1/θ gives

    1/θ = E[m̃] − (a − 1) + [b − (a − 1)](1 − θ)^{b−(a−1)} / (1 − (1 − θ)^{b−(a−1)}),

or

    1/θ_{i+1} = E[m̃] − (a − 1) + [b − (a − 1)](1 − θi)^{b−(a−1)} / (1 − (1 − θi)^{b−(a−1)}).
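Part (c) asks for a simple program implementing this recursion with a = 10, b = 80, E[m̃] = 30, and θ0 = 1/E[m̃]; a sketch:

```python
def mean_m(theta, a, b):
    """E[m] for the truncated geometric distribution, from part (a)."""
    B = b - (a - 1)
    g = (1.0 - theta) ** B
    return (a - 1) + 1.0 / theta - B * g / (1.0 - g)

def solve_theta(em, a, b, theta0, iters=200):
    """Fixed-point iteration of part (b):
    1/theta_{i+1} = E[m] - (a-1) + B(1-theta_i)^B / (1 - (1-theta_i)^B)."""
    theta = theta0
    B = b - (a - 1)
    for _ in range(iters):
        g = (1.0 - theta) ** B
        theta = 1.0 / (em - (a - 1) + B * g / (1.0 - g))
    return theta

theta = solve_theta(30.0, 10, 80, theta0=1.0 / 30.0)
```

At the fixed point, substituting θ back into the part (a) formula recovers the target mean message length of 30.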
[Figure: P{occupancy exceeds n} versus occupancy, n, comparing the (10,80) truncated geometric message-length distribution (mean 12.90) with the ordinary geometric distribution (mean 18.70); logarithmic vertical scale.]
where

    α(z) = (1 − ρ)Fx̃*(λ[1 − z])  and  β(z) = 1 − ρFx̃r*(λ[1 − z]),

and we evaluate

    lim_{z→1} (d²/dz²)Fñ(z).
In terms of Cx̃²,

    E[ñ] = ρ [ 1 + (ρ/(1 − ρ)) · (1 + Cx̃²)/2 ].
Taking the second derivatives of α(z) and β(z), evaluated at z = 1,

    α″(z)|_{z=1} = (1 − ρ)(−λ)² (d²/ds²)Fx̃*(s)|_{s=0} = (1 − ρ)λ²E[x̃²],

and

    β″(z)|_{z=1} = (−ρ)(−λ)² (d²/ds²)Fx̃r*(s)|_{s=0} = −ρλ²E[x̃r²].
Now, Fx̃r*(s) = [1 − Fx̃*(s)]/(sE[x̃]). Thus,

    (d/ds)Fx̃r*(s) = −(1/E[x̃]) · [ 1 − Fx̃*(s) + s(d/ds)Fx̃*(s) ] / s²,

so that, taking the limit as s → 0,

    E[x̃r] = E[x̃²] / (2E[x̃]).
Next we obtain

    (d²/ds²)Fx̃r*(s) = (1/E[x̃]) · [ 2(1 − Fx̃*(s) + s(d/ds)Fx̃*(s)) − s²(d²/ds²)Fx̃*(s) ] / s³.
Taking the limit as s → 0,

    lim_{s→0} (d²/ds²)Fx̃r*(s) = E[x̃³]/(3E[x̃]).

Therefore,

    E[x̃r²] = E[x̃³]/(3E[x̃]).

Consequently,

    lim_{z→1} β″(z) = −ρλ² E[x̃³]/(3E[x̃]).
We may now specify

    (d²/dz²)Fñ(z)|_{z=1} = E[ñ(ñ − 1)] = E[ñ²] − E[ñ]
        = λ²E[x̃²] − (2ρ(1 − ρ)/(1 − ρ)²)(−λρ)(E[x̃²]/(2E[x̃]))
          + ((1 − ρ)ρλ²/(1 − ρ)²)(E[x̃³]/(3E[x̃]))
          + (2(1 − ρ)λ²ρ²/(1 − ρ)³)[E[x̃²]/(2E[x̃])]²
        = ρ² E[x̃²]/E²[x̃] + (ρ³/(1 − ρ)) E[x̃²]/E²[x̃]
          + (ρ³/(3(1 − ρ))) E[x̃³]/E³[x̃]
          + (ρ⁴/(2(1 − ρ)²)) [E[x̃²]/E²[x̃]]².
Therefore,

    E[ñ²] = E[ñ(ñ − 1)] + E[ñ].

Alternatively, the moments may be obtained by writing Fñ(z) = Fñ1(z)Fñ2(z), where

    Fñ1(z) = (1 − ρ)[1 − ρFx̃r*(λ[1 − z])]^{−1}

and

    Fñ2(z) = Fx̃*(λ[1 − z]).
Now,

    E[ñj] = (d/dz)Fñj(z)|_{z=1}

and

    E[ñj(ñj − 1)] = (d²/dz²)Fñj(z)|_{z=1},

or, rearranging terms,

    E[ñj²] = (d²/dz²)Fñj(z)|_{z=1} + (d/dz)Fñj(z)|_{z=1}
           = (d²/dz²)Fñj(z)|_{z=1} + E[ñj].
Then, for ñ1 and ñ2 as defined above, and for s(z) = λ[1 − z],

    E[ñ1] = −(1 − ρ)[1 − ρFx̃r*(λ[1 − z])]^{−2} ρλ (d/ds)Fx̃r*(s(z)) |_{z=1}
          = ((1 − ρ)/(1 − ρ)²)(ρλ)E[x̃r],

and

    E[ñ2] = −λ (d/ds)Fx̃*(s)|_{s=0} = ρ.
The second moments of ñ1 and ñ2 are thus

    E[ñ1²] = { 2(1 − ρ)[1 − ρFx̃r*(λ[1 − z])]^{−3} [ρλ(d/ds)Fx̃r*(s)]²
               + (1 − ρ)[1 − ρFx̃r*(λ[1 − z])]^{−2} · ρλ²(d²/ds²)Fx̃r*(s(z)) } |_{z=1} + E[ñ1]
           = (2(1 − ρ)/(1 − ρ)³)(ρλ)²E²[x̃r] + ((1 − ρ)/(1 − ρ)²)ρλ²E[x̃r²] + (ρλ/(1 − ρ))E[x̃r],
and

    E[ñ2²] = (−λ)²(d²/ds²)Fx̃*(s(z))|_{z=1} + E[ñ2] = λ²E[x̃²] + ρ.
This is the same expression as was found using the first method.
5-3 Jobs arrive to a single server system at a Poisson rate λ. Each job con-
sists of a random number of tasks, m̃, drawn from a general distribution
Fm̃ (m), independent of everything. Each task requires a service time
drawn from a common distribution, Fx̃t , independent of everything.
Solution:
(a) Since the time between interruptions is exponentially distributed with
parameter β, the process that counts the interruptions is a Poisson
process. Hence
    P{ñ = n | x̃ = x} = (βx)^n e^{−βx} / n!.
Therefore,

    P{ñ = n} = ∫₀^∞ [(βx)^n e^{−βx} / n!] dFx̃(x),

and

    Fñ(z) = Σ_{n=0}^∞ z^n P{ñ = n}
          = ∫₀^∞ Σ_{n=0}^∞ [(zβx)^n / n!] e^{−βx} dFx̃(x)
          = ∫₀^∞ e^{zβx} e^{−βx} dFx̃(x).

Therefore,

    Fñ(z) = Fx̃*(β[1 − z]),

which, of course, is just the pgf for the distribution of the number of arrivals from a Poisson process over a period x̃.
(b) Let ñ represent the total number of interruptions. Then c̃ = x̃ + Σ_{i=1}^{ñ} x̃si. Therefore,

    E[e^{−sc̃}] = E[ e^{−s(x̃ + Σ_{i=1}^{ñ} x̃si)} ]
               = ∫₀^∞ Σ_{n=0}^∞ E[ e^{−s(x + Σ_{i=1}^n x̃si)} | ñ = n, x̃ = x ] P{ñ = n | x̃ = x} dFx̃(x)
               = ∫₀^∞ Σ_{n=0}^∞ e^{−sx} [Fx̃s*(s)]^n [(βx)^n / n!] e^{−βx} dFx̃(x)
               = ∫₀^∞ e^{−sx} e^{βxFx̃s*(s)} e^{−βx} dFx̃(x).
Thus,

    Fc̃*(s) = Fx̃*(s + β − βFx̃s*(s)).

(c) The relationship between the two formulas is that if the service time of the special customer has the same distribution as the length of a busy period, then the completion time has the same distribution as the length of an ordinary busy period. The relationship is explained by simply allowing the first customer of the busy period to be interrupted by any other arriving customer. Then each interruption of the first customer has the same distribution as the ordinary busy period. When service of the first customer is complete, the busy period is over. Thus, the busy period relationship is simply a special case of the completion time.
(d) Since c̃ = x̃ + Σ_{i=1}^{ñ} x̃si,

    E[c̃ | x̃ = x] = x + βxE[x̃s],

which implies

    E[c̃] = E[x̃] + βE[x̃]E[x̃s].

Therefore,

    E[c̃] = E[x̃](1 + ρs),

where ρs = βE[x̃s]. As a check,

    (d/ds)Fc̃*(s)|_{s=0} = (d/ds)Fx̃*(s)|_{s=0} · [1 − β(d/ds)Fx̃s*(s)|_{s=0}].

It follows that

    E[c̃] = E[x̃](1 + βE[x̃s]).
Similarly,

    (d²/ds²)Fc̃*(s) = (d²/ds²)Fx̃*(s) · [1 − β(d/ds)Fx̃s*(s)]² + (d/ds)Fx̃*(s) · [−β(d²/ds²)Fx̃s*(s)].

Taking the limit as s → 0,

    E[c̃²] = E[x̃²](1 + βE[x̃s])² + E[x̃]βE[x̃s²].
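The mean completion time can be spot-checked by simulation; the exponential distributions and parameter values below are arbitrary choices for the sketch.

```python
import random

random.seed(7)

beta, EXS = 0.5, 0.4          # interruption rate and mean interruption service
N = 200_000

total = 0.0
for _ in range(N):
    x = random.expovariate(1.0)             # service requirement, E[x] = 1
    # interruptions occur as a Poisson process of rate beta over (0, x)
    c = x
    t = random.expovariate(beta)
    while t < x:
        c += random.expovariate(1.0 / EXS)  # add one interruption service time
        t += random.expovariate(beta)
    total += c

mean_c = total / N
rho_s = beta * EXS
expected = 1.0 * (1.0 + rho_s)              # E[c] = E[x](1 + rho_s) = 1.2
```

The empirical mean agrees with E[c̃] = E[x̃](1 + ρs) from part (d).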
(e) Note that the probability that the server is busy is just the expected number of customers in service. That is, if B represents the event that the server is busy,

    P{B} = λE[c̃] = λE[x̃](1 + ρs) = ρ(1 + ρs).

The system is therefore stable if and only if ρ(1 + ρs) < 1.
(a) Suppose that service time for the ordinary customer is chosen once.
Following an interruption, the ordinary customer’s service resumes
from the point of interruption. Determine P {ñ = n|x̃ = x}, the con-
ditional probability that the number of interruptions is n, and Fñ (z),
the probability generating function for the number of interruptions
suffered by the ordinary customer.
(b) Determine Fc̃*(s), the Laplace-Stieltjes transform for c̃ under this policy. [Hint: Condition on the length of the service time of the ordinary customer and the number of service interruptions that occur.]
(c) Compare the results of part (b) with the Laplace-Stieltjes transform
for the length of the M/G/1 busy period. Explain the relationship
between these two results.
(d) Determine E[c̃] and E[c̃2 ].
(e) Determine the probability that the server will be busy at an arbitrary
point in time in stochastic equilibrium, and the stability condition for
this system.
5-5 Consider a queueing system that services customers from a finite popula-
tion of K identical customers. Each customer, while not being served or
waiting, thinks for an exponentially distributed length of time with param-
eter λ and then joins a FCFS queue to wait for service. Service times are
drawn independently from a general service time distribution Fx̃ (x).
(a) Given the expected length of the busy period for this system, describe
a procedure through which you could obtain the expected waiting
time. [Hint: Use alternating renewal theory.]
(b) Given the expected length of the busy period with K = 2, describe a
procedure for obtaining the expected length of the busy period for the
case of K = 3.
Solution:
(a) First note that if B denotes the event that the system is busy,

    P{B} = E[ỹ] / (E[ỹ] + E[ĩ]).
Then, by Little’s result,
where λef f is the average arrival rate of jobs to the server. Since the
λ f
customer are statistically identical, λc = ef K is the job generation
rate per customer. Now, a customer generates one job per cycle; hence
λc is equal to the inverse of the expected cycle length. i.e.,
1
λc = .
E[t̃] + E[w̃] + E[x̃]
Therefore, by Little’s result,
E[x̃] E[ỹ]
P {B} = K = .
E[t̃] + E[w̃] + E[x̃] E[ỹ] + E[c̃]
(b) Let ỹK denote the length of a busy period for a system having K customers, and let Ai denote the event that i arrivals occur during x̃1. Further, define ỹ_i^j to be the length of a busy period starting with i customers in the service system and j total customers in the system. Then

    E[ỹ_1^K] = E[x̃] + Σ_{i=1}^{K−1} E[ỹ_i^K | Ai] P{Ai}.

In particular, for K = 3,

    E[ỹ_1^3] = E[x̃] + Σ_{i=1}^{2} E[ỹ_i^3 | Ai] P{Ai}
             = E[x̃] + E[ỹ_1^3]P1 + E[ỹ_2^3]P2.
Now, ỹ_2^3 = ỹ_1^2 + ỹ_1^3, because the service can be reordered so that one customer stands aside while the busy period of the second customer completes, and then the first customer is brought back and its busy period completes. From the point of view of the second customer, the whole population is only two, but the first customer sees all three customers. Thus,

    E[ỹ_1^3] = E[x̃] + E[ỹ_1^3]P1 + (E[ỹ_1^2] + E[ỹ_1^3])P2,

which implies

    E[ỹ_1^3] = (E[x̃] + E[ỹ_1^2]P2) / (1 − P1 − P2),
where E[ỹ_1^2] is the given quantity, namely the length of the busy period if the population size is two.
5-6 For the M/G/∞ queueing system, it is well known that the stochastic equi-
librium distribution for the number of busy servers is Poisson with param-
eter λE[x̃], where λ is the arrival rate and x̃ is the holding time.
Solution:
(a) From the problem statement, the equilibrium distribution of the num-
ber of busy servers is Poisson with parameter λE[x̃]. Now, if x̃ = i
with probability pi , then the arrival process consists of 3 independent
arrival processes with arrival rates λpi , i = 1, 2, 3. Since there are
an infinite number of servers, we may consider each arrival stream as
being served by an independent, infinite set of servers. Thus, if we
denote by ñi the number of busy servers for class i, then each ñi is Poisson with parameter λpi·i, and ñ = ñ1 + ñ2 + ñ3 is Poisson with parameter Σ_{i=1}^3 λpi·i = λE[x̃], so that

    P{ñ = n} = (λE[x̃])^n e^{−λE[x̃]} / n!,

as in the problem statement.
(b) Since ñ is a Poisson random variable that is obtained by the addition of 3 independent Poisson random variables, it follows that the proportion of calls of type i is simply

    λ i pi / (λ Σ_{i=1}^3 i pi) = i pi / E[x̃].
    E[e^{−ss̃} | x̃ = x] = Σ_{v=0}^∞ E[e^{−ss̃} | x̃ = x, ṽ = v] · e^{−λx}(λx)^v / v!.
Now, if v customers arrive during the system time of the arbitrary customer, then the sojourn time will be the service time requirement plus the busy periods generated by those v customers. Each of the v customers generates an independent busy period distributed as ỹ. Hence,
    E[e^{−ss̃} | x̃ = x] = Σ_{v=0}^∞ E[ e^{−s(x + Σ_{i=1}^v ỹi)} ] e^{−λx}(λx)^v / v!
                        = e^{−sx} Σ_{v=0}^∞ Π_{i=1}^v E[e^{−sỹ}] · e^{−λx}(λx)^v / v!
                        = e^{−sx} Σ_{v=0}^∞ [Fỹ*(s)]^v e^{−λx}(λx)^v / v!
                        = e^{−sx} e^{−λx} Σ_{v=0}^∞ [λxFỹ*(s)]^v / v!
                        = e^{−sx} e^{−λx} e^{λxFỹ*(s)}
                        = e^{−x[s + λ − λFỹ*(s)]}.

Therefore,

    Fs̃*(s) = Fx̃*(s + λ − λFỹ*(s)).
Exercise 6.3 Compare the means and variances of the sojourn times for
the ordinary M/G/1 system and the M/G/1 system under the LCFS-PR dis-
cipline.
Solution. To compute the mean of Fs̃,LCFS−PR*(s), recall that

    Fs̃,LCFS−PR*(s) = Fx̃*(s + λ − λFỹ*(s)).

Hence, by (5.22),

    E[s̃LCFS−PR] = E[x̃] / (1 − ρ).
Now, since

    Fỹ*(s) = Fx̃*(s + λ − λFỹ*(s)),

we may rewrite the LST of s̃LCFS−PR as

    Fs̃,LCFS−PR*(s) = Fỹ*(s) = Fx̃*(u(s)),

where

    u(s) = s + λ − λFỹ*(s).

Thus,

    (d²/ds²)Fỹ*(s) = (d/ds)[ (d/du)Fx̃*(u(s)) · (d/ds)u(s) ]
                   = (d²/du²)Fx̃*(u(s)) · [(d/ds)u(s)]² + (d/du)Fx̃*(u(s)) · (d²/ds²)u(s)
                   = (d²/du²)Fx̃*(u(s)) · [1 − λ(d/ds)Fỹ*(s)]²
The M/G/1 Queueing System with Priority 201
" #
d d2
+ Fx̃∗ (u(s)) −λ 2 Fỹ∗ (s) .
du ds
Evaluating at s = 0,

    E[ỹ²] = (d²/ds²)Fỹ*(s)|_{s=0}
          = E[x̃²](1 + λE[ỹ])² + (−E[x̃])(−λE[ỹ²])
          = E[x̃²][1 + ρ/(1 − ρ)]² + ρE[ỹ²],

where we have used the above result for E[ỹ]. Then, solving for E[ỹ²],

    E[ỹ²] = E[x̃²] / (1 − ρ)³,
so that

    Var(s̃LCFS−PR) = E[ỹ²] − E²[ỹ] = E[x̃²]/(1 − ρ)³ − E²[x̃]/(1 − ρ)² = (E²[x̃]/(1 − ρ)³)[Cx̃² + ρ].

Then
    (d²/ds²)Fw̃*(s) = ρ (d/ds){ (1 − ρ)[1 − ρFx̃r*(s)]^{−2} (d/ds)Fx̃r*(s) }
                   = 2(1 − ρ)[1 − ρFx̃r*(s)]^{−3} [ρ(d/ds)Fx̃r*(s)]²
                     + (1 − ρ)[1 − ρFx̃r*(s)]^{−2} ρ(d²/ds²)Fx̃r*(s).
Now, by (5.17),

    E[w̃] = (ρE[x̃]/(1 − ρ)) · (Cx̃² + 1)/2,

so that the variance of w̃ is then

    Var(w̃) = E[w̃²] − E²[w̃]
            = 2[ (ρE[x̃]/(1 − ρ)) · (Cx̃² + 1)/2 ]² + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃])
              − [ (ρE[x̃]/(1 − ρ)) · (Cx̃² + 1)/2 ]²
            = [ (ρE[x̃]/(1 − ρ)) · (Cx̃² + 1)/2 ]² + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃]).
Therefore,

    Var(s̃) = Var(w̃) + Var(x̃)
            = [ (ρE[x̃]/(1 − ρ)) · (Cx̃² + 1)/2 ]² + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃]) + E[x̃²] − E²[x̃]
            = E²[x̃] { [ (ρ/(1 − ρ)) · (1 + Cx̃²)/2 ]² + Cx̃² } + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃]).
Now,

    E[s̃FCFS] − E[s̃LCFS−PR] = E[x̃][ 1 + (ρ/(1 − ρ)) · (1 + Cx̃²)/2 ] − E[x̃]/(1 − ρ)
        = E[x̃][ 1 + (ρ/(1 − ρ)) · (1 + Cx̃²)/2 − 1/(1 − ρ) ]
        = E[x̃][ (ρ/(1 − ρ)) · (1 + Cx̃²)/2 − ρ/(1 − ρ) ]
        = (ρE[x̃]/(1 − ρ)) [ (1 + Cx̃²)/2 − 1 ]
        = (ρE[x̃]/(1 − ρ)) · (Cx̃² − 1)/2.
Thus, if Cx̃² < 1, then E[s̃LCFS−PR] < E[s̃FCFS]; if Cx̃² > 1, then E[s̃LCFS−PR] > E[s̃FCFS]; and if Cx̃² = 1, then E[s̃LCFS−PR] = E[s̃FCFS]. The equality for exponential service is clear from Little's result, which is reasoned as follows. Since the service time is memoryless, upon each arrival the remaining service time of the customer in service is drawn from the same distribution as that of all other customers. Thus, all customers are alike, and it does not matter in which order service occurs; the distribution of the number of customers in the system is unaffected by order of service. Therefore, the average number of customers is unaffected by service order, and by Little's result, E[s̃LCFS−PR] = E[s̃FCFS]. We now compare the variances of the two distributions.
    ∆s̃(Cx̃²) = Var(s̃FCFS) − Var(s̃LCFS−PR)
             = E²[x̃] { [ (ρ/(1 − ρ)) · (1 + Cx̃²)/2 ]² + Cx̃² } + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃])
               − (E²[x̃]/(1 − ρ)³)[Cx̃² + ρ].

That is,

    ∆s̃(Cx̃²) = (E²[x̃]/(1 − ρ)³) { (ρ²(1 − ρ)/4)(1 + Cx̃²)² + (1 − ρ)³Cx̃² − Cx̃² − ρ }
               + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃]).
In particular, ∆s̃(Cx̃²) < 0 at Cx̃² = 0, so Var(s̃LCFS−PR) > Var(s̃FCFS) for deterministic service.
At Cx̃² = 1, with exponential service,

    E[x̃³]/(3E[x̃]) = 6E³[x̃]/(3E[x̃]) = 2E²[x̃].

Hence, at Cx̃² = 1 and for all ρ,

    ∆s̃(Cx̃²) = (E²[x̃]/(1 − ρ)³) [ ρ²(1 − ρ) + (1 − ρ)³ − 1 − ρ + 2ρ(1 − ρ)² ]
             = (E²[x̃]/(1 − ρ)³) [ ρ² − ρ³ + 1 − 3ρ + 3ρ² − ρ³ − 1 − ρ + 2ρ − 4ρ² + 2ρ³ ]
             = −2ρE²[x̃]/(1 − ρ)³ < 0.
Again, this means that Var(s̃LCFS−PR) > Var(s̃FCFS) at Cx̃² = 1, i.e., with exponential service. The question then arises as to whether Var(s̃LCFS−PR) < Var(s̃FCFS) for any value of Cx̃². This question can be answered in part by considering
    ∆s̃(Cx̃²) = (E²[x̃]/(1 − ρ)³) f(Cx̃²) + (ρ/(1 − ρ)) · E[x̃³]/(3E[x̃]),

where

    f(Cx̃²) = (ρ²(1 − ρ)/4)(1 + Cx̃²)² + (1 − ρ)³Cx̃² − Cx̃² − ρ.
Since x̃ is a nonnegative random variable, we conjecture that E[x̃³] is an increasing function of Cx̃². Now,

    (d/dCx̃²) f(Cx̃²) = (ρ/2) { ρ(1 − ρ)Cx̃² − [6 − 7ρ + 3ρ²] }
                     = (ρ²(1 − ρ)/2) { Cx̃² − (6 − 7ρ + 3ρ²)/(ρ(1 − ρ)) }.

This shows that for sufficiently large Cx̃², f(Cx̃²) is an increasing function; that is, f(Cx̃²) is increasing whenever

    Cx̃² > (6 − 7ρ + 3ρ²) / (ρ(1 − ρ)).
Exercise 6.4 Compare the probability generating function for the class
1 occupancy distributions for the HOL system to that of the M/G/1 system
with set-up times discussed in Section 6.2. Do they have exactly the same
form? Explain why or why not intuitively.
Solution. Recall Equation (6.36)
and (6.24)
1 ρs
Fñs (z) = Fx̃∗s (λ[1 − z]) + F ∗ (λ[1 − z])
Fñ (z),
1 + ρs 1 + ρs x̃sr
where (6.36) and (6.24) represent the probability generating functions for the occupancy distributions of HOL Class 1 customers and of M/G/1 customers with set-up times, respectively. These probability generating functions have exactly the same form: each is the probability generating function of the sum of two random variables, one of which has the probability generating function of the number left in the system in the ordinary M/G/1 system. In each case, the first part of the expression represents the number left due to arrivals that occur prior to the time an ordinary customer begins service in a busy period.
In the case of the priority system, either there is a Class 2 customer in service or there is not. If there is, then the probability generating function for the number of Class 1 customers who arrive is that of the number who arrive during the residual service time of the Class 2 customer in service, namely $F^*_{\tilde x_{2r}}(\lambda_1[1-z])$. Otherwise, the probability generating function is 1. The probability that no Class 2 customer is in service is $1 - \rho_2/(1-\rho_1)$.
In the case of set-up, since set-up occurs before ordinary customers are served, the customer in question either is or is not the first customer of the busy period. If so, the customer leaves behind all customers who arrive during the set-up time; otherwise, the customer leaves behind only those who arrive during the residual of the set-up time. The result is that, in the case of the priority system, the "set-up time" is either 0 or $\tilde x_{2r}$, depending upon whether or not a Class 2 customer is in service upon arrival of the first Class 1 customer of a Class 1 busy period. Recall Equation
206 QUEUEING THEORY WITH APPLICATIONS . . . : SOLUTION MANUAL
(6.36):
$$F_{\tilde n_1}(z) = \left[\frac{1-\rho_1-\rho_2}{1-\rho_1} + \frac{\rho_2}{1-\rho_1}F^*_{\tilde x_{2r}}(\lambda_1[1-z])\right]\frac{(1-\rho_1)F^*_{\tilde x_1}(\lambda_1[1-z])}{1-\rho_1F^*_{\tilde x_{1r}}(\lambda_1[1-z])}.$$
Exercise 6.5 Derive the expression for Fñ2 (z) for the case of the HOL-
PR discipline with I = 2.
Solution. The probability generating function for the number left in the system by a departing class 2 customer is readily derived from (6.45). As noted earlier, in (6.45) the term
$$F^*_{\tilde x_2}(\lambda_2[1-z])$$
is the probability generating function for the distribution of the number of customers who arrive during the service time of the departing customer, while the remainder of the expression represents the probability generating function for the distribution of the number of customers who arrive during the waiting time of the class 2 customer. Now, the distribution of the number of customers who arrive during the waiting time is the same whether or not servicing of the class 2 customer is preemptive. On the other hand, the number of class 2 customers who arrive between the time at which the class 2 customer enters service and the same customer's time of departure is equal to the number of class 2 customers who arrive during a completion time of the class 2 customer, the distribution of which is $F_{\tilde x_{2c}}(x)$. The probability generating function of the number of class 2 arrivals during this time is then
$$F^*_{\tilde x_{2c}}(\lambda_2[1-z]) = F^*_{\tilde x_2}\left(\lambda_2[1-z] + \lambda_1 - \lambda_1F^*_{\tilde y_{11}}(\lambda_2[1-z])\right),$$
and substituting this factor for $F^*_{\tilde x_2}(\lambda_2[1-z])$ in (6.45) yields $F_{\tilde n_2}(z)$ for the HOL-PR discipline.
Exercise 6.6 Derive expressions for Fñ1 (z), Fñ2 (z), and Fñ3 (z) for the
ordinary HOL discipline with I = 3. Extend the analysis to the case of
arbitrary I.
Solution. Consider the number of customers left by a class j departing cus-
tomer by separating the class j customers remaining into two sub-classes:
those who arrive during a sub-busy period started by a customer of their own
class are Type 1; all others are Type 2. As before, service takes place as a sequence of sub-busy periods, each started by a Type 2 customer and generated by Type 1 and higher-priority customers. Then, $\tilde n_j = \tilde n_{j1} + \tilde n_{j2}$. Since these customers arrive in
non-overlapping intervals and are due to Poisson processes, ñj1 and ñj2 are
independent.
As we have argued previously, ñj1 is the sum of two independent random
variables. The first is the number of Type 1 customers who arrive during the
waiting time of the tagged customer, which has the same distribution as the
corresponding quantity in an ordinary M/G/1 queueing system having traffic
intensity γj and service time equivalent to the completion time of a class j
customer. The pgf for the distribution of the number of Type 1 customers in this category is
$$\frac{1-\gamma_j}{1-\gamma_jF^*_{\tilde x_{jcr}}(\lambda_j[1-z])}. \tag{6.6.1}$$
In turn, $F^*_{\tilde x_{jc}}(s) = F^*_{\tilde y_{jH}}(s)$. That is, the completion time of a class $j$ customer is simply the length of a sub-busy period started by a class $j$ customer and generated by customers having priority higher than $j$. Furthermore,
$$F^*_{\tilde y_{jH}}(s) = F^*_{\tilde x_j}\left(s + \lambda_H - \lambda_HF^*_{\tilde y_{HH}}(s)\right), \tag{6.6.2}$$
where $F^*_{\tilde y_{HH}}(s)$ is the LST of the length of a busy period started by a high-priority customer and generated by customers whose priority is higher than $j$. Specifically,
$$\lambda_H = \sum_{k=1}^{j-1}\lambda_k, \tag{6.6.4}$$
and
$$F^*_{\tilde x_H}(s) = \frac{1}{\lambda_H}\sum_{k=1}^{j-1}\lambda_kF^*_{\tilde x_k}(s), \tag{6.6.5}$$
with $\lambda_H = 0$ and $F^*_{\tilde x_H}(s)$ undefined if $j < 2$. In addition,
$$F^*_{\tilde x_{jcr}}(s) = \frac{1 - F^*_{\tilde x_{jc}}(s)}{sE[\tilde x_{jc}]} \tag{6.6.6}$$
and
$$E[\tilde x_{jc}] = \frac{E[\tilde x_j]}{1-\sigma_{j-1}}, \tag{6.6.7}$$
where
$$\sigma_j = \sum_{k=1}^{j}\rho_k.$$
Since $\gamma_j = \lambda_jE[\tilde x_j]/(1-\sigma_{j-1})$,
$$\gamma_jF^*_{\tilde x_{jcr}}(s) = \frac{\lambda_jE[\tilde x_j]}{1-\sigma_{j-1}}\cdot\frac{1-F^*_{\tilde x_{jc}}(s)}{s\,\dfrac{E[\tilde x_j]}{1-\sigma_{j-1}}} = \frac{\lambda_j\left[1-F^*_{\tilde x_{jc}}(s)\right]}{s} = \frac{\lambda_j\left[1-F^*_{\tilde y_{jH}}(s)\right]}{s}. \tag{6.6.8}$$
Returning to the second component of $\tilde n_{j1}$: this is just the number of class $j$ customers who arrive during the service time of the class $j$ customer. The pgf for the distribution of the number of such customers is simply $F^*_{\tilde x_j}(\lambda_j[1-z])$. Thus we find, upon making the appropriate substitutions, that
$$F_{\tilde n_{j1}}(z) = \frac{(1-\gamma_j)F^*_{\tilde x_j}(\lambda_j[1-z])}{1-\gamma_jF^*_{\tilde x_{jcr}}(\lambda_j[1-z])} = \frac{(1-\gamma_j)F^*_{\tilde x_j}(\lambda_j[1-z])}{1-\dfrac{\lambda_j\left[1-F^*_{\tilde y_{jH}}(\lambda_j[1-z])\right]}{\lambda_j[1-z]}} = \frac{(1-\gamma_j)(1-z)F^*_{\tilde x_j}(\lambda_j[1-z])}{F^*_{\tilde y_{jH}}(\lambda_j[1-z])-z}$$
$$= \frac{(1-\gamma_j)(z-1)F^*_{\tilde x_j}(\lambda_j[1-z])}{z-F^*_{\tilde x_j}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right)}.$$
The proportion of time spent during events $E$ and $L$ serving higher-priority customers is readily computed by simply subtracting $\rho_j$ and $\rho_L$ from $P\{E\}$ and $P\{L\}$, respectively. We then find
$$P\{H\} = \rho_H - \left[P\{E\}-\rho_j\right] - \left[P\{L\}-\rho_L\right] = \rho_H - \left[\frac{\rho_j}{1-\rho_H}-\rho_j\right] - \left[\frac{\rho_L}{1-\rho_H}-\rho_L\right],$$
so that
$$P\{H\} = \frac{(1-\rho)\rho_H}{1-\rho_H}.$$
Next we find the probabilities
$$P\{I\mid\bar E\} = \frac{1-\rho}{1-\gamma_j}, \qquad P\{H\mid\bar E\} = \frac{(1-\rho)\rho_H}{(1-\rho_H)(1-\gamma_j)}, \qquad P\{L\mid\bar E\} = \frac{\rho_L}{(1-\rho_H)(1-\gamma_j)}.$$
Now, the number of Type 2 customers left in the system by an arbitrary de-
parture from the system is equal to the number of Type 2 customers left in the
system by an arbitrary Type 2 arrival. This is because each Type 2 arrival is
associated with exactly one Type 1 sub-busy period. Now, the number of Type
2 customers left behind if a Type 1 customer
arrives to an empty system is 0.
The number left behind for the event $H\mid\bar E$ is simply the number that arrive in the residual time of a high-priority busy period started by a high-priority customer, the pgf of which is given by $F^*_{\tilde y_{HHr}}(\lambda_j[1-z])$. Similarly, the number left behind for the event $L\mid\bar E$ is simply the number that arrive in the residual life of a high-priority busy period started by a low-priority customer, the pgf of which is given by $F^*_{\tilde y_{LHr}}(\lambda_j[1-z])$. Now, from exceptional first service results,
$$F^*_{\tilde y_{HH}}(s) = F^*_{\tilde x_H}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right),$$
$$F^*_{\tilde y_{LH}}(s) = F^*_{\tilde x_L}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right).$$
Also,
$$F^*_{\tilde y_{HHr}}(s) = \frac{1-F^*_{\tilde y_{HH}}(s)}{sE[\tilde y_{HH}]} = \frac{1-F^*_{\tilde y_{HH}}(s)}{s\,\dfrac{E[\tilde x_H]}{1-\rho_H}} = \frac{(1-\rho_H)\left[1-F^*_{\tilde y_{HH}}(s)\right]}{sE[\tilde x_H]}.$$
Similarly,
$$F^*_{\tilde y_{LHr}}(s) = \frac{(1-\rho_H)\left[1-F^*_{\tilde x_L}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right)\right]}{sE[\tilde x_L]}.$$
We then find that
$$F_{\tilde n_{j2}}(z) = \frac{1-\rho}{1-\gamma_j} + \frac{(1-\rho)\rho_H}{1-\gamma_j}\cdot\frac{1-F^*_{\tilde y_{HH}}(\lambda_j[1-z])}{\lambda_j[1-z]E[\tilde x_H]} + \frac{\rho_L}{1-\gamma_j}\cdot\frac{1-F^*_{\tilde x_L}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right)}{\lambda_j[1-z]E[\tilde x_L]}.$$
Thus,
$$F_{\tilde n_j}(z) = \left\{(1-\rho) + \frac{(1-\rho)\lambda_H\left[1-F^*_{\tilde y_{HH}}(\lambda_j[1-z])\right]}{\lambda_j[1-z]} + \frac{\lambda_L\left[1-F^*_{\tilde x_L}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right)\right]}{\lambda_j[1-z]}\right\}\frac{(z-1)F^*_{\tilde x_j}(\lambda_j[1-z])}{z-F^*_{\tilde y_{jH}}(\lambda_j[1-z])}.$$
Equivalently, in terms of the transform variable $s = \lambda_j[1-z]$,
$$F^*_{\tilde s}(s) = (1-\rho)\left(1 + \frac{\lambda_H\left[1-F^*_{\tilde y_{HH}}(s)\right]}{s} + \frac{\lambda_L\left[1-F^*_{\tilde x_L}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right)\right]}{s}\right)\frac{F^*_{\tilde x_j}(s)}{1-\lambda_j\left[1-F^*_{\tilde y_{jH}}(s)\right]/s}.$$
That is,
$$F^*_{\tilde s_j}(s) = (1-\rho)\left\{s + \lambda_H\left[1-F^*_{\tilde y_{HH}}(s)\right] + \lambda_L\left[1-F^*_{\tilde x_L}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right)\right]\right\}\frac{F^*_{\tilde x_j}(s)}{s-\lambda_j+\lambda_jF^*_{\tilde y_{jH}}(s)}.$$
Exercise 6.7 Extend the analysis of the previous case to the case of
HOL-PR.
Solution. The solution to this problem is similar to that of Exercise 6.5, and
proceeds as follows. The probability generating function for the number left in
the system by a departing class j customer is readily derived from the results
of Exercise 6.6, which is
$$F_{\tilde n_j}(z) = \left\{(1-\rho) + \frac{(1-\rho)\lambda_H\left[1-F^*_{\tilde y_{HH}}(\lambda_j[1-z])\right]}{\lambda_j[1-z]} + \frac{\lambda_L\left[1-F^*_{\tilde x_L}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right)\right]}{\lambda_j[1-z]}\right\}\frac{(z-1)F^*_{\tilde x_j}(\lambda_j[1-z])}{z-F^*_{\tilde y_{jH}}(\lambda_j[1-z])},$$
or
$$F_{\tilde n_j}(z) = \left\{(1-\rho) + \frac{(1-\rho)\lambda_H\left[1-F^*_{\tilde y_{HH}}(\lambda_j[1-z])\right]}{\lambda_j[1-z]} + \frac{\lambda_L\left[1-F^*_{\tilde x_L}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right)\right]}{\lambda_j[1-z]}\right\}\left\{\frac{z-1}{z-F^*_{\tilde y_{jH}}(\lambda_j[1-z])}\right\}F^*_{\tilde x_j}(\lambda_j[1-z]).$$
Now, the term $F^*_{\tilde x_j}(\lambda_j[1-z])$ is the probability generating function for the distribution of the number of customers who arrive during the service time of the departing class $j$ customer, while the remainder of the expression represents the probability generating function for the distribution of the number of customers who arrive during the waiting time of the class $j$ customer. The distribution of the number of customers who arrive during the waiting time is the same whether or not servicing of the class $j$ customer is preemptive. On the other hand, the number of class $j$ customers who arrive between the time at which the class $j$ customer enters service and the same customer's time of departure is equal to the number of class $j$ customers who arrive during a completion time of the class $j$ customer, the distribution of which is $F_{\tilde x_{jc}}(x)$. The probability generating function of the number of class $j$ arrivals during this time is
$$F^*_{\tilde x_{jc}}(s)\Big|_{s=\lambda_j[1-z]} = F^*_{\tilde x_j}\left(s+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(s)\right)\Big|_{s=\lambda_j[1-z]} = F^*_{\tilde x_j}\left(\lambda_j[1-z]+\lambda_H-\lambda_HF^*_{\tilde y_{HH}}(\lambda_j[1-z])\right).$$
Exercise 6.8 Suppose that the service times of the customers in an M/G/1 system are drawn from the distribution $F_{\tilde x_i}(x)$ with probability $p_i$ such that $\sum_{i=1}^{I}p_i = 1$. Determine $E[\tilde w]$ for this system.
Solution. Let $S_i$ denote the event that distribution $i$ is selected. Then, since $\tilde x$ is drawn from $F_{\tilde x_i}(x)$ with probability $p_i$, we find
$$E[\tilde x] = \sum_{i=1}^{I}E[\tilde x\mid S_i]P\{S_i\} = \sum_{i=1}^{I}E[\tilde x_i]p_i.$$
Similarly,
$$E[\tilde x^2] = \sum_{i=1}^{I}E[\tilde x_i^2]p_i.$$
Therefore,
$$E[\tilde x_r] = \frac{\sum_{i=1}^{I}E[\tilde x_i^2]p_i}{2\sum_{i=1}^{I}E[\tilde x_i]p_i}.$$
From (5.96), we then have
$$E[\tilde w] = \frac{\rho}{1-\rho}E[\tilde x_r] = \frac{\rho}{1-\rho}\cdot\frac{\sum_{i=1}^{I}E[\tilde x_i^2]p_i}{2\sum_{i=1}^{I}E[\tilde x_i]p_i} = \frac{\lambda}{2(1-\rho)}\sum_{i=1}^{I}E[\tilde x_i^2]p_i = \frac{1}{2(1-\rho)}\sum_{i=1}^{I}\lambda_iE[\tilde x_i^2] = \frac{1}{1-\rho}\sum_{i=1}^{I}\lambda_iE[\tilde x_i]\frac{E[\tilde x_i^2]}{2E[\tilde x_i]} = \frac{1}{1-\rho}\sum_{i=1}^{I}\rho_iE[\tilde x_{ri}].$$
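The chain of equalities above is easily confirmed numerically. The sketch below uses an illustrative two-class mixture of exponential service times (all rates are assumptions, not values from the text) and evaluates both the first and the last expressions for $E[\tilde w]$:

```python
# Numerical check of E[w] for an M/G/1 queue whose service time is a
# probabilistic mixture; all parameter values below are illustrative.
lam = 0.5                                   # Poisson arrival rate
p = [0.3, 0.7]                              # mixing probabilities p_i
mu = [1.0, 2.0]                             # exponential service rates

Ex = sum(pi / m for pi, m in zip(p, mu))               # E[x]
Ex2 = sum(pi * 2.0 / m ** 2 for pi, m in zip(p, mu))   # E[x^2]
rho = lam * Ex

# First form: E[w] = rho/(1-rho) * E[x_r], with E[x_r] = E[x^2]/(2 E[x]).
Ew_form1 = rho / (1.0 - rho) * Ex2 / (2.0 * Ex)

# Last form: E[w] = (1/(1-rho)) * sum_i rho_i E[x_ri],
# where rho_i = lam p_i E[x_i] and E[x_ri] = E[x_i^2]/(2 E[x_i]).
Ew_form2 = sum((lam * pi / m) * ((2.0 / m ** 2) / (2.0 / m))
               for pi, m in zip(p, mu)) / (1.0 - rho)
```

Both forms agree to machine precision, as the derivation requires.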
We wish to show
$$E[\tilde w_{FCFS}] = \frac{\rho E[\tilde x_r]}{1-\rho},$$
so that
$$\rho E[\tilde w_{FCFS}] = \sum_{i=1}^{I}\rho_iE[\tilde w_i] = \rho E[\tilde x_r]\sum_{i=1}^{I}\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})}.$$
We prove the following proposition by induction, and the above result follows as a special case.
Proposition: For all $j$,
$$\sum_{i=1}^{j}\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})} = \frac{\sigma_j}{1-\sigma_j},$$
with $\sigma_0 = 0$.
Proof: Let $T$ denote the truth set for the proposition. Clearly $1 \in T$. Suppose $j-1 \in T$; that is,
$$\sum_{i=1}^{j-1}\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})} = \frac{\sigma_{j-1}}{1-\sigma_{j-1}}.$$
Then
$$\sum_{i=1}^{j}\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})} = \frac{\sigma_{j-1}}{1-\sigma_{j-1}} + \frac{\rho_j}{(1-\sigma_j)(1-\sigma_{j-1})} = \frac{1}{1-\sigma_{j-1}}\left[\frac{\sigma_{j-1}(1-\sigma_j)+\rho_j}{1-\sigma_j}\right] = \frac{1}{1-\sigma_{j-1}}\left[\frac{\sigma_j(1-\sigma_{j-1})}{1-\sigma_j}\right] = \frac{\sigma_j}{1-\sigma_j}.$$
Thus $j \in T$ for all $j$, and with $j = I$, we have
$$\sum_{i=1}^{I}\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})} = \frac{\sigma_I}{1-\sigma_I} = \frac{\rho}{1-\rho}.$$
This proves the proposition. Continuing with the exercise, we have
$$\rho E[\tilde w_{FCFS}] = \sum_{j=1}^{I}\rho_jE[\tilde w_j],$$
or
$$E[\tilde w_{FCFS}] = \sum_{j=1}^{I}\frac{\rho_j}{\rho}E[\tilde w_j].$$
That is, $E[\tilde w]$ can be expressed as a weighted sum of the expected waiting times of the individual classes, where the weighting factors are equal to the proportion of all service time devoted to the particular class. Thus,
$$E[\tilde w_{FCFS}] = \sum_{j=1}^{I}\frac{\lambda_jE[\tilde x_j]}{\lambda E[\tilde x]}E[\tilde w_j].$$
But
$$E[\tilde n_{qHOL}] = \sum_{j=1}^{I}E[\tilde n_{qj}], \tag{6.6.1}$$
which implies
$$E[\tilde w_{qHOL}] = \sum_{j=1}^{I}\frac{\lambda_j}{\lambda}E[\tilde w_j] \neq \sum_{j=1}^{I}\frac{\rho_j}{\rho}E[\tilde w_j] = E[\tilde w_{qFCFS}].$$
That is, if the service-time distributions are not identical, the average waiting time need not be conserved.
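The telescoping identity at the heart of the induction above can also be checked numerically; the class loads below are illustrative assumptions:

```python
# Numerical check of the telescoping identity
#   sum_{i=1}^{j} rho_i / ((1 - sigma_i)(1 - sigma_{i-1})) = sigma_j / (1 - sigma_j),
# with sigma_0 = 0 and sigma_j = rho_1 + ... + rho_j.
rho_list = [0.10, 0.25, 0.05, 0.30, 0.15]   # illustrative loads, total < 1

sigma = [0.0]                                # sigma_0 = 0
for r in rho_list:
    sigma.append(sigma[-1] + r)

partial = 0.0
results = []
for j, r in enumerate(rho_list, start=1):
    partial += r / ((1.0 - sigma[j]) * (1.0 - sigma[j - 1]))
    results.append((partial, sigma[j] / (1.0 - sigma[j])))
```

Every partial sum matches $\sigma_j/(1-\sigma_j)$ exactly, as the induction asserts.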
Chapter 7
Exercise 7.1 Suppose that P {x̃ = 1} = 1, that is, the service time is
deterministic with mean 1. Determine {ak , k = 0, 1, . . .} as defined by
(7.6).
Solution: If $P\{\tilde x = 1\} = 1$, then we find that $dF_{\tilde x}(x) = \delta(x-1)\,dx$, where $\delta(x)$ is the Dirac delta function. Thus, from (7.6),
$$a_k = \int_0^{\infty}\frac{(\lambda x)^k}{k!}e^{-\lambda x}\,dF_{\tilde x}(x) = \int_0^{\infty}\frac{(\lambda x)^k}{k!}e^{-\lambda x}\delta(x-1)\,dx = \frac{\lambda^k}{k!}e^{-\lambda}\quad\text{for } k = 0,1,2,\cdots$$
$$a_k = \sum_{x_j\in X}\alpha_j\frac{(\lambda x_j)^ke^{-\lambda x_j}}{k!} = \sum_{j=0}^{J}\alpha_j\frac{(\lambda x_j)^ke^{-\lambda x_j}}{k!}\quad\text{for } k = 0,1,\cdots$$
But the expression inside the integral is just the density of the Erlang-$(k+1)$ distribution with parameter $\lambda+\mu$, so that the expression integrates to unity. Therefore,
$$a_k = \frac{\mu}{\lambda+\mu}\left(\frac{\lambda}{\lambda+\mu}\right)^k\quad\text{for } k = 0,1,\cdots$$
This result is obvious since ak is just the probability that in a sequence of
independent trials, there will be k failures prior to the first success.
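The integral leading to this geometric form can also be confirmed by direct numerical quadrature; the sketch below uses a simple Simpson rule on a truncated range, with illustrative rates:

```python
import math

# Simpson-rule check that int_0^inf (lam x)^k/k! e^{-lam x} mu e^{-mu x} dx
# equals (mu/(lam+mu)) (lam/(lam+mu))^k; lam and mu are illustrative.
lam, mu = 0.8, 1.2

def a_k_numeric(k, upper=80.0, n=20000):
    h = upper / n
    def f(x):
        return (lam * x) ** k / math.factorial(k) * math.exp(-(lam + mu) * x) * mu
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def a_k_closed(k):
    return (mu / (lam + mu)) * (lam / (lam + mu)) ** k
```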
Exercise 7.4 Suppose that $\tilde x = \tilde x_1 + \tilde x_2$, where $\tilde x_1$ and $\tilde x_2$ are exponential random variables with parameters $\mu_1$ and $\mu_2$, respectively. Determine $\{a_k, k = 0,1,\ldots\}$ as defined by (7.6).
Solution: First, we determine $dF_{\tilde x}(x) = f_{\tilde x}(x)\,dx$:
$$f_{\tilde x}(x) = \int_0^x f_{\tilde x_2}(x-y)f_{\tilde x_1}(y)\,dy = \int_0^x \mu_2e^{-\mu_2(x-y)}\mu_1e^{-\mu_1y}\,dy = \mu_1\mu_2e^{-\mu_2x}\int_0^x e^{-(\mu_1-\mu_2)y}\,dy.$$
If $\mu_1 = \mu_2$, we find
$$f_{\tilde x}(x) = \mu_1\mu_2xe^{-\mu_2x} = \mu_2^2xe^{-\mu_2x} = \mu_2(\mu_2x)e^{-\mu_2x}.$$
Vector Markov Chains Analysis 219
If $\mu_1 \neq \mu_2$, we have
$$f_{\tilde x}(x) = \frac{\mu_1\mu_2}{\mu_1-\mu_2}e^{-\mu_2x}(\mu_1-\mu_2)\int_0^x e^{-(\mu_1-\mu_2)y}\,dy = \frac{\mu_1\mu_2}{\mu_1-\mu_2}e^{-\mu_2x}\left[1-e^{-(\mu_1-\mu_2)x}\right] = \frac{\mu_1\mu_2}{\mu_1-\mu_2}\left(e^{-\mu_2x}-e^{-\mu_1x}\right).$$
Then, from Exercise 7.3, we find
$$a_k = \frac{\mu_1\mu_2}{\mu_1-\mu_2}\left[\frac{1}{\mu_2}\cdot\frac{\mu_2}{\lambda+\mu_2}\left(\frac{\lambda}{\lambda+\mu_2}\right)^k - \frac{1}{\mu_1}\cdot\frac{\mu_1}{\lambda+\mu_1}\left(\frac{\lambda}{\lambda+\mu_1}\right)^k\right]$$
$$= \frac{\mu_1}{\mu_1-\mu_2}\cdot\frac{\mu_2}{\lambda+\mu_2}\left(\frac{\lambda}{\lambda+\mu_2}\right)^k - \frac{\mu_2}{\mu_1-\mu_2}\cdot\frac{\mu_1}{\lambda+\mu_1}\left(\frac{\lambda}{\lambda+\mu_1}\right)^k\quad\text{for } k = 0,1,\cdots$$
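A quick numerical sanity check of this expression (the rates are illustrative): the $a_k$ should be nonnegative, sum to one, and have mean $\lambda(1/\mu_1 + 1/\mu_2)$, the expected number of arrivals during $\tilde x_1 + \tilde x_2$:

```python
# Sanity check of a_k for the hypoexponential (mu1 != mu2) service time.
lam, mu1, mu2 = 0.7, 1.0, 2.5                # illustrative rates

def a_k(k):
    t1 = (mu1 / (mu1 - mu2)) * (mu2 / (lam + mu2)) * (lam / (lam + mu2)) ** k
    t2 = (mu2 / (mu1 - mu2)) * (mu1 / (lam + mu1)) * (lam / (lam + mu1)) ** k
    return t1 - t2

a = [a_k(k) for k in range(400)]             # truncation is harmless here
total = sum(a)
mean = sum(k * ak for k, ak in enumerate(a))
```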
Exercise 7.5 Show that the quantity Fq̃ (1) corresponds to the stationary
probability vector for the phase process.
Solution: From (5.37), with $z = 1$, we find
$$F_{\tilde q}(1)\left[I - PF_{\tilde a}(1)\right] = 0.$$
But $F_{\tilde a}(1) = I$, so the above equation becomes
$$F_{\tilde q}(1)\left[I - P\right] = 0;$$
i.e.,
$$F_{\tilde q}(1) = F_{\tilde q}(1)P.$$
Therefore, $F_{\tilde q}(1)$ is the stationary probability vector for the Markov chain having transition matrix $P$; that is, for the phase process.
Postmultiplying by $e$,
$$F^{(1)}_{\tilde q}(1)\left[I - P\right]e + F_{\tilde q}(1)\left[I - PF^{(1)}_{\tilde a}(1)\right]e = \pi_0Pe.$$
That is,
$$1 - F_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = \pi_0e.$$
$$C_M^N = \binom{N+M-1}{N}$$
for all $M$, $N$ positive integers. Let $T$ denote the truth set of the proposition. Now,
$$C_M^N = \sum_{i=0}^{N}C_{M-1}^{N-i} = \sum_{i=0}^{N}\binom{N-i+(M-1)-1}{N-i} = \sum_{i=1}^{N}\binom{N-i+(M-1)-1}{N-i} + \binom{N+(M-1)-1}{N} = \sum_{i=0}^{N-1}\binom{N-1-i+(M-1)-1}{N-1-i} + \binom{N+(M-1)-1}{N}.$$
To do this, we prove
$$\sum_{i=0}^{N}\binom{N-i+x}{N-i} = \binom{N+x+1}{N}$$
by induction on $N$.
If $N = 0$, we have
$$\binom{0+x}{0} = 1 = \binom{x+1}{0}.$$
Now assume
$$\sum_{i=0}^{N-1}\binom{N-1-i+x}{N-1-i} = \binom{N-1+x+1}{N-1}.$$
Then
$$\sum_{i=0}^{N}\binom{N-i+x}{N-i} = \sum_{i=1}^{N}\binom{N-i+x}{N-i} + \binom{N+x}{N} = \sum_{i=0}^{N-1}\binom{N-1-i+x}{N-1-i} + \binom{N+x}{N}$$
$$= \binom{N-1+x+1}{N-1} + \binom{N+x}{N} = \binom{N+x}{N-1} + \binom{N+x}{N}$$
$$= \frac{(N+x)!}{(N-1)!\,(x+1)!} + \frac{(N+x)!}{N!\,x!} = \frac{(N+x)!}{(N-1)!\,x!}\left[\frac{1}{x+1}+\frac{1}{N}\right] = \frac{(N+x+1)!}{N!\,(x+1)!} = \binom{N+x+1}{N}.$$
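The identity just proved is the familiar hockey-stick summation, and it can be verified exhaustively for small parameter values:

```python
from math import comb

# Exhaustive check of sum_{i=0}^{N} C(N-i+x, N-i) = C(N+x+1, N).
def lhs(N, x):
    return sum(comb(N - i + x, N - i) for i in range(N + 1))

def rhs(N, x):
    return comb(N + x + 1, N)

checks = [(N, x) for N in range(12) for x in range(12)]
```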
Exercise 7.8 Define the matrices $P$ and $F_{\tilde a}(z)$ for the model defined in Example 7.1 for the special case of $N = 3$, where the states of the phase process have the interpretations shown in Table 7.2. In the table, if the phase vector is $ijk$, then there are $i$ sources in phase 0, $j$ sources in phase 1, and $k$ sources in phase 2.
Solution: If the phase vector is $(ijk)$, then there are $i$ sources in phase 0, $j$ sources in phase 1, and $k$ sources in phase 2, for $i,j,k = 0,1,2,3$. Phase 0 corresponds to the phase vector 300; i.e., all sources are in phase 0. Each source acts independently of all others, and the only transitions for an individual source are from phase 0 to phase 0 or from phase 0 to phase 1, so the number of sources making the transition to the 'on' state is 0, 1, 2, or 3. For the transition $300\to300$, all sources must remain in the 'off' state; therefore, this transition has probability $\beta^3$. For $300\to210$, two sources remain off and there is one transition to the 'on' state; thus, the transition probability is $\binom{3}{1}\beta^2(1-\beta) = 3\beta^2(1-\beta)$. Continuing in this manner results in the following table of transition probabilities:

From    To      Probability
300     300     $\beta^3$
300     210     $3\beta^2(1-\beta)$
300     120     $3\beta(1-\beta)^2$
300     030     $(1-\beta)^3$
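Because the three sources act independently, the row of transition probabilities out of phase 300 is a binomial distribution in the number of sources turning on; the sketch below confirms the table ($\beta = 0.75$ is the illustrative value used later in this chapter):

```python
from math import comb

# Transitions out of phase 300: the number of sources turning 'on' in a
# slot is binomial(3, 1 - beta).
beta = 0.75
# row[m] is the probability of the transition 300 -> (3-m, m, 0).
row = [comb(3, m) * beta ** (3 - m) * (1 - beta) ** m for m in range(4)]
```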
Exercise 7.9 In the previous example, we specified $F_{\tilde a}(z)$ and $P$. From these specifications, we have
$$PF_{\tilde a}(z) = \sum_{i=0}^{2}A_iz^i.$$
Therefore,
$$K(1) = \sum_{i=0}^{2}A_i\left[K(1)\right]^i,$$
with
$$A_0 = \begin{bmatrix}\beta^2 & 2\beta(1-\beta) & (1-\beta)^2 & 0 & 0 & 0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \beta(1-\alpha) & \beta\alpha+(1-\beta)(1-\alpha) & \alpha(1-\beta) & 0&0&0\\ 0&0&0&0&0&0\\ (1-\alpha)^2 & 2\alpha(1-\alpha) & \alpha^2 & 0&0&0\end{bmatrix},$$
$$A_1 = \begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&\beta&1-\beta&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1-\alpha&\alpha&0\\ 0&0&0&0&0&0\end{bmatrix},$$
and
$$A_2 = \begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix}.$$
Now, suppose we compute $K(1)$ iteratively; that is, we use the formula
$$K_j(1) = \sum_{i=0}^{2}A_i\left[K_{j-1}(1)\right]^i \quad\text{for } j \ge 1, \tag{7.1}$$
with $K_0(1) = 0$. Prove that the final three columns of $K_j(1)$ are zero columns for all $j$.
Solution: The proof is by induction. Consider the proposition that the last three columns of $K_j(1)$ are zero columns for all $j \ge 0$. Let $T$ denote the truth set for this proposition. Since $K_0(1) = 0$, $0 \in T$. Also, $K_1(1) = A_0$, so that $1 \in T$. Now suppose $j-1 \in T$, and note that if the last three columns of $B$ are zero columns, then the last three columns of $AB$ are zero columns for any conformable $A$. Then, with $A = B = K_{j-1}(1)$, we have $[K_{j-1}(1)]^2$ with its last three columns equal to zero, and with $A = A_2$, $B = [K_{j-1}(1)]^2$, we have $A_2[K_{j-1}(1)]^2$ with its last three columns equal to zero. Also, with $A = A_1$, $B = K_{j-1}(1)$, we have the last three columns of $A_1K_{j-1}(1)$ equal to zero. Since this is also true of $A_0$, the sum leading to $K_j(1)$ also has its last three columns equal to zero. Hence, $j \in T$, and the proof is complete.
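The zero-column property can be observed directly by running iteration (7.1). The sketch below uses the matrices reconstructed above with illustrative values $\alpha = 0.5$ and $\beta = 0.75$ (any values in $(0,1)$ exhibit the same structure):

```python
# Fixed-point iteration K_j = A0 + A1 K_{j-1} + A2 K_{j-1}^2, K_0 = 0,
# checking that the last three columns stay identically zero.
a, b = 0.5, 0.75                 # alpha, beta (illustrative)

A0 = [[b * b, 2 * b * (1 - b), (1 - b) ** 2, 0, 0, 0],
      [0] * 6,
      [0] * 6,
      [b * (1 - a), b * a + (1 - b) * (1 - a), a * (1 - b), 0, 0, 0],
      [0] * 6,
      [(1 - a) ** 2, 2 * a * (1 - a), a * a, 0, 0, 0]]
A1 = [[0] * 6,
      [0, 0, 0, b, 1 - b, 0],
      [0] * 6,
      [0] * 6,
      [0, 0, 0, 1 - a, a, 0],
      [0] * 6]
A2 = [[0] * 6, [0] * 6, [0, 0, 0, 0, 0, 1], [0] * 6, [0] * 6, [0] * 6]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(6)) for j in range(6)]
            for i in range(6)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(6)] for i in range(6)]

K = [[0.0] * 6 for _ in range(6)]
zero_cols_ok = True
for _ in range(200):
    K = madd(A0, matmul(A1, K), matmul(A2, matmul(K, K)))
    if any(K[i][j] != 0.0 for i in range(6) for j in range(3, 6)):
        zero_cols_ok = False
```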
Exercise 7.10 Suppose $K(1)$ has the form
$$K(1) = \begin{bmatrix}K_{00} & 0\\ K_{10} & 0\end{bmatrix}, \tag{7.2}$$
where $K_{00}$ is a square matrix. Bearing in mind that $K(1)$ is stochastic, prove that $K_{00}$ is also stochastic and that $\kappa$ has the form $[\,\kappa_0\ \ 0\,]$, where $\kappa_0$ is the stationary probability vector for $K_{00}$.
Solution: Recall that $K(1)$ stochastic means that $K(1)$ is a one-step transition probability matrix for a discrete-parameter Markov chain; that is, all entries are nonnegative and the elements of every row sum to unity. Thus, $K(1)e = e$. But
$$K(1)e = \begin{bmatrix}K_{00} & 0\\ K_{10} & 0\end{bmatrix}\begin{bmatrix}e\\ e\end{bmatrix} = \begin{bmatrix}K_{00}e\\ K_{10}e\end{bmatrix} = \begin{bmatrix}e\\ e\end{bmatrix}.$$
Therefore, the elements of the rows of $K_{00}$ sum to unity, and they are nonnegative because they come from $K(1)$. Since $K_{00}$ is square, it is a legitimate one-step transition probability matrix for a discrete-parameter Markov chain.
Now, recall that $\kappa$ is the stationary vector of $K(1)$; thus
$$\kappa K(1) = \kappa, \quad\text{with } \kappa e = 1.$$
Therefore, writing $\kappa = [\,\kappa_0\ \ \kappa_1\,]$,
$$[\,\kappa_0\ \ \kappa_1\,]\begin{bmatrix}K_{00} & 0\\ K_{10} & 0\end{bmatrix} = [\,\kappa_0K_{00}+\kappa_1K_{10}\quad 0\,] = [\,\kappa_0\ \ \kappa_1\,].$$
Equating the final blocks shows $\kappa_1 = 0$, so that $\kappa_0K_{00} = \kappa_0$ with $\kappa_0e = 1$; that is, $\kappa = [\,\kappa_0\ \ 0\,]$, where $\kappa_0$ is the stationary probability vector for $K_{00}$.
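A small numerical example of this block structure (the entries of $K_{00}$ and $K_{10}$ are illustrative) confirms both conclusions:

```python
# Stationary vector of a stochastic matrix with the block form
# K(1) = [[K00, 0], [K10, 0]]: the mass on the trailing states is zero,
# and the leading part is the stationary vector of K00.
K00 = [[0.6, 0.4],
       [0.3, 0.7]]
K = [[0.6, 0.4, 0.0, 0.0],
     [0.3, 0.7, 0.0, 0.0],
     [0.2, 0.8, 0.0, 0.0],     # K10, first row
     [0.5, 0.5, 0.0, 0.0]]     # K10, second row

# Power iteration for the stationary vector kappa of K.
kappa = [0.25, 0.25, 0.25, 0.25]
for _ in range(500):
    kappa = [sum(kappa[i] * K[i][j] for i in range(4)) for j in range(4)]

# Stationary vector of K00 alone, the same way.
k0 = [0.5, 0.5]
for _ in range(500):
    k0 = [sum(k0[i] * K00[i][j] for i in range(2)) for j in range(2)]
```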
Then, with $z = 1$,
$$PF_{\tilde a}(1) = P = \sum_{i=0}^{\infty}A_i,$$
so that the latter sum is stochastic.
First, note that $[K_0(1)]^n$ represents an $n$-step transition probability matrix for a discrete-parameter Markov chain, so that this matrix is stochastic. We may also show that $[K_0(1)]^n$ is stochastic by induction on $n$. Let $T$ be the truth set for the proposition. Then $0 \in T$ since $[K_0(1)]^0 = I$, which is stochastic. Now assume that $0, 1, \cdots, n-1 \in T$, so that $[K_0(1)]^{n-1}$ is nonnegative and $[K_0(1)]^{n-1}e = e$. It follows that
$$[K_0(1)]^ne = K_0(1)[K_0(1)]^{n-1}e = K_0(1)e = e.$$
And clearly, $[K_0(1)]^n$ is nonnegative for all $n$. Now, $A_i$ is a matrix of probabilities and is therefore nonnegative. Thus,
$$K_j(1) = \sum_{i=0}^{\infty}A_i\left[K_{j-1}(1)\right]^i$$
is nonnegative, and, whenever $K_{j-1}(1)$ is stochastic,
$$K_j(1)e = \sum_{i=0}^{\infty}A_i\left[K_{j-1}(1)\right]^ie = \sum_{i=0}^{\infty}A_ie = Pe = e.$$
Therefore, if $K_{j-1}(1)$ is stochastic, then so is $K_j(1)$.
Also, since $\phi P = \phi$ and $\phi e = 1$, we have $e\phi\left[I - P + e\phi\right] = e\phi - e\phi P + e\phi e\phi = e\phi$, so that
$$e\phi = e\phi\left[I - P + e\phi\right]^{-1},$$
as required.
Given
$$F_{\tilde q}(z)\left[z^CI - A(z)\right] = \sum_{j=0}^{C-1}\pi_j\left[z^CB_j(z) - z^jA(z)\right], \tag{7.49}$$
we find
$$\left[zF^{(1)}_{\tilde q}(z) + \pi_0\right]\left[z^CI - A(z)\right] = \sum_{j=0}^{C-1}\pi_j\left[z^CB_j(z) - z^jA(z)\right],$$
or
$$zF^{(1)}_{\tilde q}(z)\left[z^CI - A(z)\right] = -\pi_0\left[z^CI - A(z)\right] + \sum_{j=0}^{C-1}\pi_j\left[z^CB_j(z) - z^jA(z)\right],$$
or
$$zF^{(1)}_{\tilde q}(z)\left[z^CI - A(z)\right] = -\pi_0z^CI + \sum_{j=0}^{C-1}\pi_jz^CB_j(z) - \sum_{j=1}^{C-1}\pi_jz^jA(z).$$
We now substitute $zF^{(2)}_{\tilde q}(z) + \pi_1$ for $F^{(1)}_{\tilde q}(z)$ in the previous equation to find
$$z\left[zF^{(2)}_{\tilde q}(z) + \pi_1\right]\left[z^CI - A(z)\right] = -\pi_0z^CI + \sum_{j=0}^{C-1}\pi_jz^CB_j(z) - \sum_{j=1}^{C-1}\pi_jz^jA(z).$$
Simplifying, we get
$$z^2F^{(2)}_{\tilde q}(z)\left[z^CI - A(z)\right] = -\pi_0z^CI - \pi_1z^{C+1}I + \sum_{j=0}^{C-1}\pi_jz^CB_j(z) - \sum_{j=2}^{C-1}\pi_jz^jA(z).$$
Continuing in this manner, we eventually find
$$F^{(C)}_{\tilde q}(z)\left[z^CI - A(z)\right] = \sum_{j=0}^{C-1}\pi_j\left[B_j(z) - z^j\right],$$
Exercise 7.14 Beginning with (7.68), develop an expression for the first and second moments of the queue length distribution.
Solution: From
$$F_{\tilde q}(z) = \sum_{i=0}^{C-1}z^i\pi_i + z^Cg\left[I - Fz\right]^{-1}H, \tag{7.68}$$
we find
$$\frac{d}{dz}F_{\tilde q}(z) = \sum_{i=1}^{C-1}iz^{i-1}\pi_i + Cz^{C-1}g\left[I - Fz\right]^{-1}H + z^CgF\left[I - Fz\right]^{-2}H,$$
and
$$\frac{d^2}{dz^2}F_{\tilde q}(z) = \sum_{i=2}^{C-1}i(i-1)z^{i-2}\pi_i + C(C-1)z^{C-2}g\left[I - Fz\right]^{-1}H + 2Cz^{C-1}gF\left[I - Fz\right]^{-2}H + 2z^CgF^2\left[I - Fz\right]^{-3}H.$$
We then have
$$E[\tilde q] = \frac{d}{dz}F_{\tilde q}(1)e = \sum_{i=1}^{C-1}i\pi_ie + Cg\left[I - F\right]^{-1}He + gF\left[I - F\right]^{-2}He,$$
and
$$E\left[\tilde q^2\right] = \sum_{i=2}^{C-1}i(i-1)\pi_ie + C(C-1)g\left[I - F\right]^{-1}He + 2CgF\left[I - F\right]^{-2}He + 2gF^2\left[I - F\right]^{-3}He + E[\tilde q].$$
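The differentiation can be validated by comparing the closed-form first moment against a finite-difference derivative of $F_{\tilde q}(z)e$ at $z = 1$. All matrices below are small illustrative values, so $F_{\tilde q}(z)e$ is not a genuine probability generating function here; only the calculus identity is being checked:

```python
# Finite-difference check of the first-moment formula derived above.
C = 2
pi = [[0.10, 0.05], [0.08, 0.07]]        # pi_0 and pi_1 (row vectors)
g = [0.2, 0.3]
F = [[0.30, 0.10], [0.05, 0.25]]         # spectral radius < 1
H = [[0.40, 0.10], [0.20, 0.30]]

def inv2(M):
    # Inverse of a 2x2 matrix.
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def vecmat(v, M):
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

def Fq_e(z):
    # F_q(z) e = sum_{i<C} z^i pi_i e + z^C g [I - F z]^{-1} H e   (7.68)
    IFz = [[1 - F[0][0] * z, -F[0][1] * z], [-F[1][0] * z, 1 - F[1][1] * z]]
    tail = vecmat(vecmat(g, inv2(IFz)), H)
    return sum(z ** i * sum(pi[i]) for i in range(C)) + z ** C * sum(tail)

R = inv2([[1 - F[0][0], -F[0][1]], [-F[1][0], 1 - F[1][1]]])   # (I-F)^{-1}

# E[q] = sum_i i pi_i e + C g (I-F)^{-1} H e + g F (I-F)^{-2} H e
Eq_closed = sum(i * sum(pi[i]) for i in range(C)) \
    + C * sum(vecmat(vecmat(g, R), H)) \
    + sum(vecmat(vecmat(vecmat(vecmat(g, F), R), R), H))

h = 1e-5
Eq_numeric = (Fq_e(1 + h) - Fq_e(1 - h)) / (2 * h)
```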
Show that
(a) $\phi_2 = \frac12[\,\phi\ \ \phi\,]$, where $\phi$ is the stationary vector of $A(1)$;
(b) $\displaystyle\phi\left[\sum_{i=0}^{\infty}iA_i\right]e = 2\rho$.
Solution:
(a) Given
$$\hat A_i = \begin{bmatrix}A_{2i} & A_{2i+1}\\ A_{2i-1} & A_{2i}\end{bmatrix}, \quad i = 1,2,\ldots,$$
we have
$$\hat A(z) = \begin{bmatrix}\sum_{i=0}^{\infty}A_{2i}z^i & \sum_{i=0}^{\infty}A_{2i+1}z^i\\[4pt] \sum_{i=1}^{\infty}A_{2i-1}z^i & \sum_{i=0}^{\infty}A_{2i}z^i\end{bmatrix}.$$
Upon setting $z = 1$, we have
$$\hat A(1) = \begin{bmatrix}\sum_{i=0}^{\infty}A_{2i} & \sum_{i=0}^{\infty}A_{2i+1}\\[4pt] \sum_{i=1}^{\infty}A_{2i-1} & \sum_{i=0}^{\infty}A_{2i}\end{bmatrix}.$$
Define
$$A_e = \sum_{i=0}^{\infty}A_{2i} \quad\text{and}\quad A_o = \sum_{i=0}^{\infty}A_{2i+1}.$$
We then have
$$\hat A(1) = \begin{bmatrix}A_e & A_o\\ A_o & A_e\end{bmatrix}.$$
Now, suppose $\phi_2 = [\,\phi_{2e}\ \ \phi_{2o}\,]$ is the stationary vector of $\hat A(1)$. Then we have $\phi_2 = \phi_2\hat A(1)$ and $\phi_2e = 1$. From $\phi_2 = \phi_2\hat A(1)$, we have
$$[\,\phi_{2e}\ \ \phi_{2o}\,] = [\,\phi_{2e}\ \ \phi_{2o}\,]\begin{bmatrix}A_e & A_o\\ A_o & A_e\end{bmatrix}.$$
Consider the candidate $\phi_{2e} = \phi_{2o} = \frac12\phi$, where $\phi$ is the stationary vector of $A(1) = A_e + A_o$. Each block equation then reads $\frac12\phi A_e + \frac12\phi A_o = \frac12\phi(A_e+A_o) = \frac12\phi$, and $\phi_2e = \frac12\phi e + \frac12\phi e = 1$, so $\frac12[\,\phi\ \ \phi\,]$ is indeed stationary for $\hat A(1)$. By uniqueness of the stationary vector, $\phi_2 = \frac12[\,\phi\ \ \phi\,]$.
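The result of part (a) is easy to confirm numerically with illustrative blocks chosen so that $A(1) = A_e + A_o$ is stochastic:

```python
# Verify that phi_2 = (1/2)[phi phi] is stationary for
# A_hat(1) = [[Ae, Ao], [Ao, Ae]] when phi is stationary for A(1) = Ae + Ao.
Ae = [[0.30, 0.20],
      [0.10, 0.40]]
Ao = [[0.25, 0.25],
      [0.30, 0.20]]       # rows of Ae + Ao sum to 1

n = 2
A1 = [[Ae[i][j] + Ao[i][j] for j in range(n)] for i in range(n)]

# Stationary vector of A(1) by power iteration.
phi = [1.0 / n] * n
for _ in range(2000):
    phi = [sum(phi[i] * A1[i][j] for i in range(n)) for j in range(n)]

# Build A_hat(1) and check that (1/2)[phi phi] is left-invariant.
Ahat = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    for j in range(n):
        Ahat[i][j] = Ae[i][j];      Ahat[i][j + n] = Ao[i][j]
        Ahat[i + n][j] = Ao[i][j];  Ahat[i + n][j + n] = Ae[i][j]

phi2 = [x / 2.0 for x in phi] * 2
phi2_next = [sum(phi2[i] * Ahat[i][j] for i in range(2 * n))
             for j in range(2 * n)]
```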
(b) So that
$$\frac{d}{dz}\hat A(z)\Big|_{z=1} = \begin{bmatrix}\sum_{i=0}^{\infty}iA_{2i} & \sum_{i=0}^{\infty}iA_{2i+1}\\[4pt] \sum_{i=1}^{\infty}iA_{2i-1} & \sum_{i=0}^{\infty}iA_{2i}\end{bmatrix}.$$
Hence,
$$\frac{d}{dz}\hat A(z)\Big|_{z=1} = \begin{bmatrix}\frac12\sum_{i=0}^{\infty}2iA_{2i} & \frac12\sum_{i=0}^{\infty}(2i+1)A_{2i+1}-\frac12\sum_{i=0}^{\infty}A_{2i+1}\\[4pt] \frac12\sum_{i=1}^{\infty}(2i-1)A_{2i-1}+\frac12\sum_{i=1}^{\infty}A_{2i-1} & \frac12\sum_{i=0}^{\infty}2iA_{2i}\end{bmatrix}.$$
Therefore,
$$\phi\sum_{i=0}^{\infty}iA_ie = 2\phi_2\,\frac{d}{dz}\hat A(z)\Big|_{z=1}e = 2\rho.$$
Solution:
(a) The matrices Bj represent the number of arrivals that occur in the first
slot of a busy period while the matrices Aj represent the number of
arrivals that occur in an arbitrary slot. Since arrivals are Poisson and
the slots are of fixed length, there is no difference between the two.
(b) The matrix A(z) represents the generating function for the number
of arrivals that occur during a time slot. In particular, if the phase of
the phase process is 0 at the end of a given time slot, then the phase
will be 0 at the end of the next time slot with probability β, and the
distribution of the number of arrivals will be Poisson with parameter
λ0 . Similarly, if the phase is initially 1, then it will be zero with
probability (1 − α) and the distribution of the number of arrivals will
be Poisson with parameter λ0 . Hence
$$A(z) = \begin{bmatrix}\beta e^{-\lambda_0(1-z)} & (1-\beta)e^{-\lambda_1(1-z)}\\ (1-\alpha)e^{-\lambda_0(1-z)} & \alpha e^{-\lambda_1(1-z)}\end{bmatrix} = \begin{bmatrix}\beta & 1-\beta\\ 1-\alpha & \alpha\end{bmatrix}\begin{bmatrix}e^{-\lambda_0(1-z)} & 0\\ 0 & e^{-\lambda_1(1-z)}\end{bmatrix}.$$
(d)
$$P\{\tilde n = k, \tilde\wp = 0 \mid \tilde n = 0, \tilde\wp = 0\} = \beta\frac{\lambda_0^k}{k!}e^{-\lambda_0},$$
$$P\{\tilde n = k, \tilde\wp = 1 \mid \tilde n = 0, \tilde\wp = 0\} = (1-\beta)\frac{\lambda_1^k}{k!}e^{-\lambda_1},$$
$$P\{\tilde n = k, \tilde\wp = 0 \mid \tilde n = 0, \tilde\wp = 1\} = (1-\alpha)\frac{\lambda_0^k}{k!}e^{-\lambda_0},$$
$$P\{\tilde n = k, \tilde\wp = 1 \mid \tilde n = 0, \tilde\wp = 1\} = \alpha\frac{\lambda_1^k}{k!}e^{-\lambda_1}.$$
(e) The level will be $k-1$ if there are no arrivals; this probability matrix is given by
$$A_0 = \begin{bmatrix}\beta & 1-\beta\\ 1-\alpha & \alpha\end{bmatrix}\begin{bmatrix}e^{-\lambda_0} & 0\\ 0 & e^{-\lambda_1}\end{bmatrix}.$$
The level will be $k$ if there is exactly one arrival; this probability matrix is given by
$$A_1 = \begin{bmatrix}\beta & 1-\beta\\ 1-\alpha & \alpha\end{bmatrix}\begin{bmatrix}\lambda_0e^{-\lambda_0} & 0\\ 0 & \lambda_1e^{-\lambda_1}\end{bmatrix}.$$
(f) The equilibrium distribution for the phase process is found by solving
$$x = x\begin{bmatrix}\beta & 1-\beta\\ 1-\alpha & \alpha\end{bmatrix}, \quad\text{where } xe = 1.$$
Alternatively, we can simply take ratios of the mean time in each phase to the mean cycle time. Let $\tilde n_i$ be the number of slots in phase $i$, $i = 0,1$. Then $P\{\tilde n_0 = k\} = (1-\beta)\beta^{k-1}$ implies $E[\tilde n_0] = \frac{1}{1-\beta}$. Similarly, $E[\tilde n_1] = \frac{1}{1-\alpha}$. Therefore,
$$P\{\text{phase } 0\} = \frac{1/(1-\beta)}{1/(1-\beta)+1/(1-\alpha)} = \frac{1-\alpha}{2-\alpha-\beta} = \frac{0.5}{2-0.5-0.75} = \frac{2}{3},$$
and
$$P\{\text{phase } 1\} = \frac{1/(1-\alpha)}{1/(1-\beta)+1/(1-\alpha)} = \frac{1-\beta}{2-\alpha-\beta} = \frac{0.25}{2-0.5-0.75} = \frac{1}{3}.$$
Thus,
$$\rho = \frac{2}{3}\lambda_0 + \frac{1}{3}\lambda_1 = \frac{2}{3}(1.2) + \frac{1}{3}(0.3) = 0.9.$$
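The arithmetic of part (f) is easily reproduced:

```python
# Equilibrium phase probabilities and offered load for part (f),
# with beta = 0.75, alpha = 0.5, lambda0 = 1.2, lambda1 = 0.3.
alpha, beta = 0.5, 0.75
lam0, lam1 = 1.2, 0.3

p0 = (1 - alpha) / (2 - alpha - beta)
p1 = (1 - beta) / (2 - alpha - beta)

# Cross-check: [p0, p1] must be stationary for [[beta, 1-beta], [1-alpha, alpha]].
s0 = p0 * beta + p1 * (1 - alpha)
s1 = p0 * (1 - beta) + p1 * alpha

rho = p0 * lam0 + p1 * lam1
```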
(g) Since $A(z)$ and $B(z)$ are identical, $G(1)$ and $K(1)$ are also identical. We may then solve for $K(1)$ by using (5.39). The result is
$$K(1) = \begin{bmatrix}0.486555 & 0.513445\\ 0.355679 & 0.644321\end{bmatrix}.$$
Note that the result is a legitimate one-step transition probability matrix for a discrete-parameter Markov chain. Its equilibrium probability vector is
$$\kappa = [\,0.409239\ \ 0.590761\,].$$
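The value of $K(1)$ can be reproduced by fixed-point iteration, assuming the slot model of part (e), i.e., $A_k = P\cdot\mathrm{diag}$ of the two Poisson probability mass functions (a modeling assumption consistent with parts (b) and (e)); truncating the arrival distribution at $k = 60$ is numerically harmless here:

```python
import math

# Fixed-point computation K <- sum_k A_k K^k for part (g), assuming
# A_k = P * diag(Poisson(lambda0) pmf(k), Poisson(lambda1) pmf(k)).
alpha, beta = 0.5, 0.75
lam0, lam1 = 1.2, 0.3
P = [[beta, 1 - beta], [1 - alpha, alpha]]
rates = (lam0, lam1)
KMAX = 60                                  # truncation of the arrival pmf

def pois(lam, k):
    return lam ** k / math.factorial(k) * math.exp(-lam)

A = [[[P[i][j] * pois(rates[j], k) for j in range(2)] for i in range(2)]
     for k in range(KMAX)]

def mul(X, Y):
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

K = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2000):
    Kp = [[1.0, 0.0], [0.0, 1.0]]          # K^0
    S = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(KMAX):
        for i in range(2):
            for j in range(2):
                S[i][j] += sum(A[k][i][m] * Kp[m][j] for m in range(2))
        Kp = mul(Kp, K)                    # next power of the current K
    K = S

kappa = [0.5, 0.5]                         # equilibrium vector of K(1)
for _ in range(1000):
    kappa = [sum(kappa[i] * K[i][j] for i in range(2)) for j in range(2)]
```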
(h) We may compute $E[\tilde q]$ from (5.47). In order to do this, we must first define each term in the equation. From part (f), $F_{\tilde q}(1) = [\,0.6667\ \ 0.3333\,]$. Also,
$$F_{\tilde a}(z) = \begin{bmatrix}e^{-\lambda_0[1-z]} & 0\\ 0 & e^{-\lambda_1[1-z]}\end{bmatrix},$$
so that, in particular,
$$F^{(1)}_{\tilde a}(1) = \begin{bmatrix}\lambda_0 & 0\\ 0 & \lambda_1\end{bmatrix} = \begin{bmatrix}1.2 & 0\\ 0 & 0.3\end{bmatrix}$$
and
$$F^{(2)}_{\tilde a}(1) = \begin{bmatrix}\lambda_0^2 & 0\\ 0 & \lambda_1^2\end{bmatrix} = \begin{bmatrix}1.44 & 0\\ 0 & 0.09\end{bmatrix}.$$
From (5.41),
$$\pi_0 = \left[1 - F_{\tilde q}(1)F^{(1)}_{\tilde a}(1)e\right]\kappa = [\,0.0409239\ \ 0.0590761\,].$$
All of the terms are now defined. Substituting these into (5.41) yields $E[\tilde q] = 6.319$. For the M/G/1 case, the appropriate formula is (6.10), which is
$$E[\tilde n] = \rho\left[1 + \frac{\rho}{1-\rho}\cdot\frac{C_{\tilde x}^2+1}{2}\right] = 4.95.$$
From this, it can be seen that the effect of burstiness is to increase the
average system occupancy. As has been observed many times before,
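For reference, the M/G/1 figure quoted above follows from (6.10) with deterministic service ($C_{\tilde x}^2 = 0$) at $\rho = 0.9$:

```python
# M/G/1 occupancy from (6.10) with deterministic unit service at rho = 0.9.
rho = 0.9
Cx2 = 0.0                                   # squared coefficient of variation
En = rho * (1 + (rho / (1 - rho)) * (Cx2 + 1) / 2)
```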
(c) Define $\rho$ to be the marginal probability that the system is not empty at the end of a slot. Postmultiply both sides of (7.71) by $e$, and show that $\rho = F_{\tilde q}(1)F^{(1)}_{\tilde a}(1)e$.
(d) Add $F^{(1)}_{\tilde q}(1)eF_{\tilde q}(1)$ to both sides of (7.71), solve for $F^{(1)}_{\tilde q}(1)$, and then postmultiply by $PF^{(1)}_{\tilde a}(1)e$ to obtain
$$F^{(1)}_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = F^{(1)}_{\tilde q}(1)e\rho + \left\{\pi_0P - F_{\tilde q}(1)\left[I - F^{(1)}_{\tilde a}(1)\right]\right\}\left[I - P + eF_{\tilde q}(1)\right]^{-1}PF^{(1)}_{\tilde a}(1)e. \tag{7.72}$$
Use the fact that $eF_{\tilde q}(1)\left[I - P + eF_{\tilde q}(1)\right]^{-1} = eF_{\tilde q}(1)$, as shown in Exercise 7.12 in Section 7.3.
(e) Differentiate both sides of (7.70) with respect to $z$, postmultiply both sides by $e$, take limits on both sides as $z \to 1$, and then rearrange terms to find
$$F^{(1)}_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = F^{(1)}_{\tilde q}(1)e - \frac12F_{\tilde q}(1)F^{(2)}_{\tilde a}(1)e - \pi_0PF^{(1)}_{\tilde a}(1)e. \tag{7.73}$$
(f) Equate the right-hand sides of (7.72) and (7.73), and then solve for $F^{(1)}_{\tilde q}(1)e$ to obtain
$$E[\tilde q] = F^{(1)}_{\tilde q}(1)e = \frac{1}{1-\rho}\left\{\frac12F_{\tilde q}(1)F^{(2)}_{\tilde a}(1)e + \pi_0PF^{(1)}_{\tilde a}(1)e + \left[\pi_0P - F_{\tilde q}(1)\left(I - F^{(1)}_{\tilde a}(1)\right)\right]\left[I - P + eF_{\tilde q}(1)\right]^{-1}PF^{(1)}_{\tilde a}(1)e\right\}.$$
Solution:
(a) This part is accomplished by simply using the product rule
$$\frac{d}{dz}\left[u(z)v(z)\right] = \left[\frac{d}{dz}u(z)\right]v(z) + u(z)\frac{d}{dz}v(z)$$
with $u(z) = F_{\tilde q}(z)$ and $v(z) = \left[Iz - PF_{\tilde a}(z)\right]$.
(b) Given
$$F^{(1)}_{\tilde q}(z)\left[Iz - PF_{\tilde a}(z)\right] + F_{\tilde q}(z)\left[I - PF^{(1)}_{\tilde a}(z)\right] = \pi_0[z-1]PF^{(1)}_{\tilde a}(z) + \pi_0PF_{\tilde a}(z) \tag{7.70}$$
from Part (a), and using the fact that $\lim_{z\to1}F_{\tilde a}(z) = I$, the result follows.
(c) From Part (b),
$$F^{(1)}_{\tilde q}(1)\left[I - P\right] = \pi_0P - F_{\tilde q}(1)\left[I - PF^{(1)}_{\tilde a}(1)\right]. \tag{7.71}$$
Postmultiplying by $e$, using $[I-P]e = 0$ and $\pi_0Pe = \pi_0e$, gives $F_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = 1 - \pi_0e = \rho$, and $F_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = F_{\tilde q}(1)F^{(1)}_{\tilde a}(1)e$ since $F_{\tilde q}(1)P = F_{\tilde q}(1)$.
(d) We add $F^{(1)}_{\tilde q}(1)eF_{\tilde q}(1)$ to both sides of (7.71) to get
$$F^{(1)}_{\tilde q}(1)\left[I - P + eF_{\tilde q}(1)\right] = F^{(1)}_{\tilde q}(1)eF_{\tilde q}(1) + \pi_0P - F_{\tilde q}(1)\left[I - PF^{(1)}_{\tilde a}(1)\right].$$
In addition, $F_{\tilde q}(1)P = F_{\tilde q}(1)$ because $F_{\tilde q}(1)$ is the stationary vector of $P$. Hence,
$$F^{(1)}_{\tilde q}(1) = F^{(1)}_{\tilde q}(1)eF_{\tilde q}(1) + \left\{\pi_0P - F_{\tilde q}(1)\left[I - F^{(1)}_{\tilde a}(1)\right]\right\}\left[I - P + eF_{\tilde q}(1)\right]^{-1}.$$
Upon postmultiplying both sides by $PF^{(1)}_{\tilde a}(1)e$, we have
$$F^{(1)}_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = F^{(1)}_{\tilde q}(1)eF_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e + \left\{\pi_0P - F_{\tilde q}(1)\left[I - F^{(1)}_{\tilde a}(1)\right]\right\}\left[I - P + eF_{\tilde q}(1)\right]^{-1}PF^{(1)}_{\tilde a}(1)e.$$
Again using $F_{\tilde q}(1)P = F_{\tilde q}(1)$ and $\rho = F_{\tilde q}(1)F^{(1)}_{\tilde a}(1)e$, this leads to
$$F^{(1)}_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e = F^{(1)}_{\tilde q}(1)e\rho + \left\{\pi_0P - F_{\tilde q}(1)\left[I - F^{(1)}_{\tilde a}(1)\right]\right\}\left[I - P + eF_{\tilde q}(1)\right]^{-1}PF^{(1)}_{\tilde a}(1)e.$$
where we have used the fact that $F^{(2)}_{\tilde q}(1)\left[I - PF_{\tilde a}(1)\right]e = 0$. Upon solving for $F^{(1)}_{\tilde q}(1)PF^{(1)}_{\tilde a}(1)e$, we find (7.73). This yields