Solutions Manual
for
Stochastic Modeling: Analysis and Simulation
Barry L. Nelson
The Ohio State University
April 2, 2002
Contents
Preface
2 Sample Paths
3 Basics
4 Simulation
5 Arrival-Counting Processes
6 Discrete-Time Processes
7 Continuous-Time Processes
8 Queueing Processes
Preface
This manual contains solutions to the problems in Stochastic Modeling: Analysis and Simu-
lation that do not require computer simulation. For obvious reasons, simulation results de-
pend on the programming language, the pseudorandom-number generators and the random-
variate-generation routines in use. The manual does include pseudocode for many of the
simulations, however. A companion disk contains SIMAN, SLAM, Fortran and C code for
the simulation “cases” in the text.
I had help preparing this manual. Jaehwan Yang checked many of the solutions. Peg Steigerwald provided expert LaTeX typing of the first draft. And Hasan Jap coded the SLAM, Fortran and C cases. Please report any errors, typographical or substantive, to me.
Barry L. Nelson
September 1994
Chapter 2
Sample Paths
1. The simulation of the self-service system ends at time 129 minutes. Self-service cus-
tomers 7 and 13 experience delays of 1 and 7 minutes, respectively, while full-service
customers 19 and 20 experience delays of 3 and 7 minutes, respectively. All other
customers have delays of 0 minutes.
The simulation of the full-service system ends at time 124 minutes. Self-service cus-
tomers 7 and 13 experience delays of 1 and 5 minutes, respectively, while full-service
customer 20 experiences a delay of 1 minute. All other customers have delays of 0
minutes.
3. If the modification does not change any service times, then the new simulation will be
the same as the simulation of the self-service system with the following exception:
At time 84, self-service customer 13 will switch to the full-service queue, shortening
the customer’s delay to 5 minutes.
4. See Exercise 3.
5. Inputs: arrival times, service times
   Logic: first-come-first-served
   System events: arrival, finish
6.
cust no.  arrival time  finish time  collate?  old copy  new copy  new finish
 0        0.00          0.00         0.0       0.00      0.00      0.00
 1        0.09          2.33         0.8       2.24      1.79      1.88
 2        0.35          3.09         0.8       0.76      0.61      2.49
 3        0.54          4.39         1.1       1.30      1.43      3.92
 4        0.92          4.52         0.8       0.13      0.10      4.02
 5        2.27          4.62         1.1       0.10      0.11      4.13
 6        3.53          4.64         1.1       0.02      0.02      4.16
 7        3.66          4.80         0.8       0.16      0.13      4.28
 8        4.63          4.86         0.8       0.06      0.05      4.68
 9        7.33          8.24         0.8       0.91      0.73      8.06
10        7.61          8.50         1.1       0.26      0.29      8.34
new makespan = 8.34 (or 8.35)
old makespan = 8.50
7.
day  demand
1 (415 + 585) − 704 = 296
2 704 − 214 = 490
3 (214 + 786) − 856 = 144
4 856 − 620 = 236
5 620 − 353 = 267
6 353 − 172 = 181
7 (172 + 828) − 976 = 24
8 976 − 735 = 241
9 735 − 433 = 302
10 433 − 217 = 216
11 (217 + 783) − 860 = 140
12 860 − 598 = 262
13 (598 + 402) − 833 = 167
No sales were lost. The only change is that no order is placed on day 13.
8. The available sample path does not permit us to extract the number and timing of light failures beyond one year, which is the critical input needed for the simulation.
Chapter 3

Basics
1. (a) Pr{X = 4} = F_X(4) − F_X(3) = 1 − 0.9 = 0.1
   (b) Pr{X ≠ 2} = 1 − Pr{X = 2} = 1 − 0.3 = 0.7
   (c) Pr{X < 3} = Pr{X ≤ 2} = F_X(2) = 0.4
   (d) Pr{X > 1} = 1 − Pr{X ≤ 1} = 1 − F_X(1) = 1 − 0.1 = 0.9

2. (b) Pr{Y > 1} = 1 − Pr{Y ≤ 1} = 1 − F_Y(1) = 1 − (1 − e^{−2(1)}) = e^{−2} ≈ 0.135
3. (a) f_X(a) = (d/da) F_X(a) = { 2a/δ², 0 ≤ a ≤ δ; 0, otherwise }
   (b) δ
   (c) E[X] = ∫_0^δ a(2a/δ²) da = (2/3)δ
4. (a) F_X(a) = ∫_{−∞}^a f_X(b) db = { 0, a < 0; a³/β³, 0 ≤ a ≤ β; 1, β < a }
   (b) 0
   (c) E[X] = ∫_0^β a(3a²/β³) da = (3/4)β
5. (a) E[Y] = ∫_0^2 a(3a²/16 + 1/4) da = 5/4
(b) F_Y(a) = ∫_{−∞}^a f_Y(b) db = { 0, a < 0; a³/16 + a/4, 0 ≤ a ≤ 2; 1, 2 < a }
(c) 1½, because the density function is larger in a neighborhood of 1½ than in a neighborhood of ½.
6. (a) Pr{X = 4 | X ≠ 1} = Pr{X = 4, X ≠ 1} / Pr{X ≠ 1} = Pr{X = 4} / Pr{X ≠ 1}
       = (1 − F_X(3)) / (1 − F_X(1)) = (1 − 0.9) / (1 − 0.1) = 1/9
   (b) Pr{X = 4 | X ≠ 1, X ≠ 2} = Pr{X = 4} / Pr{X ≠ 1, X ≠ 2}
       = (1 − F_X(3)) / (1 − F_X(2)) = (1 − 0.9) / (1 − 0.4) = 1/6
   (c) Pr{X = 2 | X ≤ 2} = Pr{X = 2, X ≤ 2} / Pr{X ≤ 2} = Pr{X = 2} / Pr{X ≤ 2}
       = (F_X(2) − F_X(1)) / F_X(2) = (0.4 − 0.1) / 0.4 = 3/4
7. (a) Pr{Y > 2/3 | Y > 1/2} = Pr{Y > 2/3, Y > 1/2} / Pr{Y > 1/2} = Pr{Y > 2/3} / Pr{Y > 1/2}
       = (1 − F_Y(2/3)) / (1 − F_Y(1/2)) = e^{−2(2/3)} / e^{−2(1/2)} = e^{−2(1/6)} ≈ 0.72
   (b) Pr{Y > 1 | Y > 2/3} = e^{−2(1)} / e^{−2(2/3)} = e^{−2(1/3)} ≈ 0.51
   (c)
8. (a) Pr{W = 3 | V = 2} = Pr{W = 3, V = 2} / Pr{V = 2} = p_{VW}(2, 3) / p_V(2) = (3/20) / (12/20) = 3/12 = 1/4
   (b) Pr{W ≤ 2 | V = 1} = Pr{W ≤ 2, V = 1} / Pr{V = 1}
       = (Pr{W = 1, V = 1} + Pr{W = 2, V = 1}) / p_V(1)
       = (2/10 + 1/10) / (4/10) = (3/10) / (4/10) = 3/4
   (c) Pr{W ≠ 2} = 1 − Pr{W = 2} = 1 − p_W(2) = 1 − 10/20 = 10/20 = 1/2
9. Joint distribution:
            X2 = 0   X2 = 1
   X1 = 0   0.75     0.05     0.80
   X1 = 1   0.10     0.10     0.20
            0.85     0.15
(a) Pr{X2 = 1 | X1 = 0} = Pr{X2 = 1, X1 = 0} / Pr{X1 = 0} = 0.05/0.80 = 0.0625
    Pr{X2 = 1 | X1 = 1} = 0.10/0.20 = 0.50
    Clearly X1 and X2 are dependent since Pr{X2 = 1 | X1 = 0} ≠ Pr{X2 = 1 | X1 = 1}.
(b) Let g(a) = { 100, a = 0; −20, a = 1 }
10. µ̂ = 2.75, σ̂ = 0.064, se = σ̂/√6 ≈ 0.026
    µ̂ + se = 2.776
    µ̂ − se = 2.724
    Pr{X > 2.90} = 1 − Pr{Z ≤ (2.90 − 2.724)/0.064} = 1 − Pr{Z ≤ 2.75} ≈ 1 − 0.997 = 0.003
(c) F(a) = { 0, a < 3.5; 1/5, 3.5 ≤ a < 5.9; 2/5, 5.9 ≤ a < 7.7; 3/5, 7.7 ≤ a < 11.1; 4/5, 11.1 ≤ a < 12.2; 1, 12.2 ≤ a }
(d) p(a) = { 14/30, a = 0; 9/30, a = 1; 6/30, a = 2; 1/30, a = 3; 0, otherwise }
15. Let X be a random variable taking values 0 (to represent a “tail”) and 1 (to represent a “head”). A plausible model is that X has a Bernoulli distribution with parameter γ = Pr{X = 1}. Your data will likely show γ > 1/2.
16. Let U be a random variable having the uniform distribution on [0, 1].
Let V = 1 − U . Then
Pr{V ≤ a} = Pr{1 − U ≤ a}
= Pr{U ≥ 1 − a} = a, 0 ≤ a ≤ 1.
17. algorithm Weibull
1. U ← random()
2. Y ← β(− ln(1 − U ))1/α
3. return Y
For α = 1/2, β = 1
U Y
0.1 0.011
0.5 0.480
0.9 5.302
12 CHAPTER 3. BASICS
For α = 2, β = 1
U Y
0.1 0.325
0.5 0.833
0.9 1.517
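The table is easy to check in code. Below is a minimal Python sketch of the Weibull inverse-cdf generator (the function name, and driving it with the fixed u values from the tables rather than random(), are my own choices):

import math

def weibull_inverse_cdf(u, alpha, beta):
    # Y = beta * (-ln(1 - u))**(1/alpha), the inverse of F(a) = 1 - exp(-(a/beta)**alpha)
    return beta * (-math.log(1.0 - u)) ** (1.0 / alpha)

for alpha in (0.5, 2.0):                      # the two parameter settings tabled above
    for u in (0.1, 0.5, 0.9):
        print(alpha, u, round(weibull_inverse_cdf(u, alpha, 1.0), 3))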
(c) Y = { 1, 0 ≤ U ≤ 0.3; 2, 0.3 < U ≤ 0.4; 3, 0.4 < U ≤ 0.7; 4, 0.7 < U ≤ 0.95; 6, 0.95 < U ≤ 1 }
Using algorithm discrete inverse cdf with a1 = 1, a2 = 2, a3 = 3, a4 = 4, a5 = 6.
U Y
0.1 1
0.5 3
0.9 4
(d) Y = { 0, 0 ≤ U ≤ 1 − γ; 1, 1 − γ < U ≤ 1 }
algorithm Bernoulli
1. U ← random()
2. if {U ≤ 1 − γ} then
     Y ← 0
   else
     Y ← 1
   endif
3. return Y
For γ = 1/4
U Y
0.1 0
0.5 0
0.9 1
(e) p(a) = F(a) − F(a − 1) = 1 − (1 − γ)^a − (1 − (1 − γ)^{a−1}) = (1 − γ)^{a−1}γ, a = 1, 2, 3, . . .
    Y = { 1, 0 ≤ U ≤ γ; 2, γ < U ≤ 1 − (1 − γ)²; . . . ; a, 1 − (1 − γ)^{a−1} < U ≤ 1 − (1 − γ)^a; . . . }
algorithm geometric
1. U ← random()
   a ← 1
2. until U ≤ 1 − (1 − γ)^a
   do
     a ← a + 1
   enddo
3. Y ← a
4. return Y
For γ = 1/4
U    Y
0.1  1
0.5  3
0.9  9
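A Python rendering of algorithm geometric, as a sketch (the function name is mine; it is driven by the fixed u values from the table rather than random()):

def geometric_inverse_cdf(u, gamma):
    # return the smallest a with u <= 1 - (1 - gamma)**a
    a = 1
    while u > 1.0 - (1.0 - gamma) ** a:
        a += 1
    return a

for u in (0.1, 0.5, 0.9):                     # gamma = 1/4, as in the table
    print(u, geometric_inverse_cdf(u, 0.25))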
18. (a) f(a) = (d/da) F(a) = { 1/(β − α), α ≤ a ≤ β; 0, otherwise }
    (b) f(a) = (d/da) F(a) = 0 − e^{−(a/β)^α}(−α(a/β)^{α−1}(1/β))
             = { αβ^{−α} a^{α−1} e^{−(a/β)^α}, 0 < a; 0, a ≤ 0 }
    (c) p(a) = { 0.3, a = 1; 0.1, a = 2; 0.3, a = 3; 0.25, a = 4; 0.05, a = 6 }
    (d) p(a) = { 1 − γ, a = 0; γ, a = 1 }
19. (a) E[X] = ∫_{−∞}^∞ a f(a) da = ∫_α^β a/(β − α) da = a²/(2(β − α)) |_α^β
            = (β² − α²)/(2(β − α)) = (β + α)(β − α)/(2(β − α)) = (β + α)/2
(b) E[X] = ∫_{−∞}^∞ a f(a) da = ∫_0^∞ a αβ^{−α} a^{α−1} e^{−(a/β)^α} da
         = ∫_0^∞ a(α/β)(a/β)^{α−1} e^{−(a/β)^α} da ≡ (1)
    Substituting u = (a/β)^α, so that a = βu^{1/α}:
    (1) = ∫_0^∞ βu^{1/α}(du/da) e^{−u} da = β ∫_0^∞ u^{1/α} e^{−u} du
        = β ∫_0^∞ u^{(1/α+1)−1} e^{−u} du = βΓ(1/α + 1) = (β/α)Γ(1/α)
(c) E[X] = Σ_{all a} a p_X(a)
(d) E[X] = Σ_{all a} a p_X(a) = 0(1 − γ) + 1(γ) = γ
(e) E[X] = Σ_{all a} a p_X(a) = Σ_{a=1}^∞ aγ(1 − γ)^{a−1}
         = γ Σ_{a=1}^∞ a(1 − γ)^{a−1}    (let q = 1 − γ)
         = γ Σ_{a=1}^∞ (d/dq) q^a = γ (d/dq) Σ_{a=1}^∞ q^a
         = γ (d/dq) Σ_{a=0}^∞ q^a    (since (d/dq) q⁰ = 0)
         = γ (d/dq) [1/(1 − q)] = γ/(1 − q)² = γ/γ² = 1/γ
20. From Exercise 25, Var[X] = E[X 2 ]−(E[X])2 . So if we calculate E[X 2 ], we can combine
it with the answers in Exercise 19.
(a) E[X²] = ∫_α^β a²/(β − α) da = a³/(3(β − α)) |_α^β = (β³ − α³)/(3(β − α))
    Therefore,
    Var[X] = (β³ − α³)/(3(β − α)) − ((β + α)/2)² = (β³ − α³)/(3(β − α)) − (β + α)²/4
(b) E[X²] = β²Γ(2/α + 1) = (2β²/α)Γ(2/α)
    Therefore,
    Var[X] = (2β²/α)Γ(2/α) − ((β/α)Γ(1/α))² = (β²/α)(2Γ(2/α) − (1/α)(Γ(1/α))²)
(c) E[X²] = γ Σ_{a=1}^∞ [(d²/dq²) q^{a+1} − aq^{a−1}]
          = γ (d²/dq²) Σ_{a=1}^∞ q^{a+1} − 1/γ    (from Exercise 19(e))
          = γ (d²/dq²) Σ_{a=−1}^∞ q^{a+1} − 1/γ    (since (d²/dq²) q⁰ = (d²/dq²) q = 0)
          = γ (d²/dq²) Σ_{b=0}^∞ q^b − 1/γ = γ (d²/dq²) [1/(1 − q)] − 1/γ
          = γ · 2/(1 − q)³ − 1/γ = 2/γ² − 1/γ
    Therefore,
    Var[X] = 2/γ² − 1/γ − 1/γ² = 1/γ² − 1/γ = (1 − γ)/γ²
22. You need the solutions from Exercise 17. Using them provides the following inverse
cdf functions
(a) B1 = -LN (1-A1)* 2
(b) B1 = 4* A1
(c) B1 = 2.257* (-LN (1-A1))** (1/2)
All of these distributions have expected value 2, but are otherwise very different.
23. Use exponential distributions for the time data, and parameterize them by using the fact that 1/λ is the expected value.
    interarrival-time gaps: 1/λ̂ = 6 minutes (sample average); therefore, λ̂ = 1/6 customer/minute
    self-service customers: 1/λ̂ = 3 minutes (sample average); therefore, λ̂ = 1/3 customer/minute
    full-service customers: 1/λ̂ = 7 minutes (sample average); therefore, λ̂ = 1/7 customer/minute
    all customers: 1/λ̂ = 4.6 minutes (sample average); therefore, λ̂ = 1/4.6 customer/minute
    customer type: use a Bernoulli distribution with γ = Pr{full-service customer}; γ̂ = 8/20 = 0.4
    All of the models except one are very good fits, since in fact the interarrival-time gaps, customer types and conditional service times were generated from these distributions! The model for all customers’ service times will not be as good since it is actually a mixture of two exponential distributions.
E[aX + b] = Σ_{all c} (ac + b) p_X(c) = a Σ_{all c} c p_X(c) + b Σ_{all c} p_X(c) = aE[X] + b

E[aX + b] = ∫_{−∞}^∞ (ac + b) f_X(c) dc = a ∫_{−∞}^∞ c f_X(c) dc + b ∫_{−∞}^∞ f_X(c) dc = aE[X] + b
Var[X] = ∫_{−∞}^∞ (a − E[X])² f_X(a) da
       = ∫_{−∞}^∞ a² f_X(a) da − 2E[X] ∫_{−∞}^∞ a f_X(a) da + (E[X])² ∫_{−∞}^∞ f_X(a) da
       = E[X²] − 2(E[X])² + (E[X])² = E[X²] − (E[X])²

E[X] = Σ_{a=1}^∞ a Pr{X = a} = Σ_{a=1}^∞ Σ_{i=1}^a Pr{X = a}
     = Σ_{i=1}^∞ Σ_{a=i}^∞ Pr{X = a} = Σ_{i=0}^∞ Σ_{a=i+1}^∞ Pr{X = a}
     = Σ_{i=0}^∞ Pr{X > i}
27. E[X^m] = 0^m(1 − γ) + 1^m γ = γ
28. Notice that
    E[I(X_i ≤ a)] = 0 · Pr{X_i > a} + 1 · Pr{X_i ≤ a} = F_X(a)
    Therefore,
    E[F̂_X(a)] = (1/m) Σ_{i=1}^m E[I(X_i ≤ a)] = (1/m) m F_X(a) = F_X(a)
29. E[g(Y)] = ∫_{−∞}^∞ g(a) f_Y(a) da = ∫_A 1 · f_Y(a) da + 0 = Pr{Y ∈ A}
30. (a) F_X^{−1}(q) = −ln(1 − q)/λ for the exponential. To obtain the median we set q = 0.5. Then η = −ln(1 − 0.5)/λ ≈ 0.69/λ. This compares to E[X] = 1/λ.
    Notice that F_X(1/λ) = 1 − e^{−λ(1/λ)} = 1 − e^{−1} ≈ 0.63, showing that the expected value is approximately the 0.63 quantile for all values of λ.
    (b) F_X^{−1}(q) = α + (β − α)q for the uniform distribution. For q = 0.5, η = α + (β − α)0.5 = (β + α)/2, which is identical to E[X].
31. We want Pr{Y = b} = 1/5 for b = 1, 2, . . . , 5.
    Pr{Y = b} = Pr{X = b | X ≠ 6} = Pr{X = b, X ≠ 6}/Pr{X ≠ 6}    (definition)
              = Pr{X = b}/Pr{X ≠ 6} = (1/6)/(5/6) = 1/5
32. (a) Pr{Y = a} = Pr{Z = a | V ≤ p_Y(Z)/(c p_Z(Z))}
                  = Pr{Z = a, V ≤ p_Y(Z)/(c p_Z(Z))} / Pr{V ≤ p_Y(Z)/(c p_Z(Z))} ≡ (1)/(2)
    (2) = Σ_{a∈B} Pr{V ≤ p_Y(a)/(c p_Z(a)) | Z = a} p_Z(a)
        = Σ_{a∈B} [p_Y(a)/(c p_Z(a))] p_Z(a) = (1/c) Σ_{a∈B} p_Y(a) = 1/c
    (1) = Pr{V ≤ p_Y(a)/(c p_Z(a)) | Z = a} p_Z(a) = [p_Y(a)/(c p_Z(a))] p_Z(a) = p_Y(a)/c
    Therefore, (1)/(2) = p_Y(a).
33. (a) Pr{Y ≤ a} = Pr{Z ≤ a | V ≤ f_Y(Z)/(c f_Z(Z))}
                  = Pr{Z ≤ a, V ≤ f_Y(Z)/(c f_Z(Z))} / Pr{V ≤ f_Y(Z)/(c f_Z(Z))} ≡ (1)/(2)
    (2) = ∫_{−∞}^∞ Pr{V ≤ f_Y(Z)/(c f_Z(Z)) | Z = a} f_Z(a) da
        = ∫_{−∞}^∞ Pr{V ≤ f_Y(a)/(c f_Z(a))} f_Z(a) da
        = ∫_{−∞}^∞ [f_Y(a)/(c f_Z(a))] f_Z(a) da = (1/c) ∫_{−∞}^∞ f_Y(a) da = 1/c
    (1) = ∫_{−∞}^∞ Pr{Z ≤ a, V ≤ f_Y(Z)/(c f_Z(Z)) | Z = b} f_Z(b) db
        = ∫_{−∞}^a Pr{V ≤ f_Y(b)/(c f_Z(b))} f_Z(b) db + ∫_a^∞ 0 · f_Z(b) db
        = ∫_{−∞}^a [f_Y(b)/(c f_Z(b))] f_Z(b) db = (1/c) ∫_{−∞}^a f_Y(b) db = (1/c) F_Y(a)
    Therefore, (1)/(2) = F_Y(a).
(b) Let T ≡ number of trials until acceptance. On each trial
    Pr{accept} = Pr{V ≤ f_Y(Z)/(c f_Z(Z))} = 1/c
    Since each trial is independent, T has a geometric distribution with γ = 1/c. Therefore, E[T] = 1/(1/c) = c.
(c) Note that f_Y(a) is maximized at f_Y(1/2) = 3/2.
    Let f_Z(a) = 1, 0 ≤ a ≤ 1, and c = 3/2, so that c f_Z(a) = 3/2 ≥ f_Y(a).
    1. U ← random()
    2. Z ← U
    3. V ← random()
    4. if V ≤ 6Z(1 − Z)/(3/2) then
         return Y ← Z
       else
         goto step 1
       endif
    E[T] = 3/2
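A runnable Python sketch of this acceptance-rejection scheme, with f_Y(a) = 6a(1 − a), f_Z uniform on [0, 1] and c = 3/2 (the function name and the empirical check of E[T] are my own additions):

import random

def ar_sample():
    # returns (Y, number of trials until acceptance)
    trials = 0
    while True:
        trials += 1
        z = random.random()                   # steps 1-2: Z from f_Z = uniform(0, 1)
        v = random.random()                   # step 3
        if v <= 6.0 * z * (1.0 - z) / 1.5:    # step 4: accept with prob f_Y(Z)/(c f_Z(Z))
            return z, trials

random.seed(1)
results = [ar_sample() for _ in range(10_000)]
print(sum(t for _, t in results) / len(results))   # should be near E[T] = c = 1.5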
34. (a) F(a) = ∫_α^a 2(b − α)/(β − α)² db = (a − α)²/(β − α)², α ≤ a ≤ β
    Setting U = (X − α)²/(β − α)² gives (X − α)² = U(β − α)², so X = α + (β − α)√U.
    algorithm
    1. U ← random()
    2. X ← α + (β − α)√U
    3. return X
(b) f is maximized at a = β, giving f(β) = 2/(β − α). Let f_Z(a) = 1/(β − α) for α ≤ a ≤ β. Set c = 2 so that c f_Z(a) ≥ f(a).
    Notice that
    f(a)/(c f_Z(a)) = [2(a − α)/(β − α)²] / [2/(β − α)] = (a − α)/(β − α)
    algorithm
    1. U ← random()
       Z ← α + (β − α)U
    2. V ← random()
    3. if V ≤ (Z − α)/(β − α) then
         return Y ← Z
       else
         goto step 1
       endif
Chapter 4
Simulation
1. An estimate of the probability of system failure within 5 days should be about 0.082.
2. No answer provided.
3. Let G1 , G2 , . . . represent the interarrival-time gaps between the arrival of jobs. Let
Bn represent the type of the nth job; 0 for a job without collating, 1 for a job with
collating.
Let Xn represent the time to complete the nth job that does not require collating.
Let Zn represent the time to complete the nth job that does require collating.
We model all of these random variables as mutually independent.
Let Sn represent the number of jobs in progress or waiting to start.
Define the following system events with associated clocks:
e0 () (initialization)
S0 ← 0
C1 ← FG−1 (random())
C2 ← ∞
e1 () (arrival of a job)
Sn+1 ← Sn + 1
if {Sn+1 = 1} then
Bn ← FB−1 (random())
if {Bn = 0} then
C2 ← Tn+1 + FX−1 (random())
C2 ← Tn+1 + FZ−1(random())
endif
endif
C1 ← Tn+1 + FG−1(random())
e2 () (finish job)
Sn+1 ← Sn − 1
if {Sn+1 > 0} then
Bn ← FB−1 (random())
if {Bn = 0} then
C2 ← Tn+1 + FX−1 (random())
else
C2 ← Tn+1 + FZ−1 (random())
endif
endif
4. Let Dn represent the number of hamburgers demanded on the nth day. We model
D1 , D2 , . . . as independent.
Let Sn represent the number of patties in stock at the beginning of the nth day.
Define the following system events with associated clocks:
e0 () (initialization)
S0 ← 1000
C1 ← 1
e1 () (morning count)
if n is odd then
if {Sn ≤ 500} then
Sn+1 ← Sn − FD−1 (random()) +(1000 − Sn )
else
Sn+1 ← Sn − FD−1 (random())
endif
else
Sn+1 ← Sn − FD−1(random())
endif
C1 ← Tn+1 + 1
5.
e0 () (initialization)
C1 ← 30
C2 ← ∞
S0 ← 0
e1 () (arrival of a job)
Sn+1 ← Sn + 1
if {Sn+1 = 1} then
C2 ← Tn+1 + FX−1 (random())
endif
C1 ← Tn+1 + 30
e2 () (finish a job)
Sn+1 ← Sn − 1
if {Sn+1 > 0} then
C2 ← Tn+1 + FX−1 (random())
endif
6. Let Bn be a Bernoulli random variable representing the CPU assignment of the nth
job.
Let S0,n represent the number of jobs at CPU A, and S1,n similar for CPU B.
Define the following system events and associated clocks:
e0 () (initialization)
S0,n ← 0
S1,n ← 0
C1 ← 40
C2 ← ∞
C3 ← ∞
e1 () (arrival of a job)
B ← FB−1 (random())
SB,n+1 ← SB,n + 1
if {SB,n+1 = 1} then
C2+B ← Tn+1 + 70
endif
C1 ← Tn+1 + 40
e2 () (finish at CPU A)
S0,n+1 ← S0,n − 1
if {S0,n+1 > 0} then
C2 ← Tn+1 + 70
endif
e3 () (finish at CPU B)
S1,n+1 ← S1,n − 1
if {S1,n+1 > 0} then
C3 ← Tn+1 + 70
endif
7. No answer provided.
8. (a)
(b)
e1 () (student arrives)
Sn+1 ← Sn + 1
C1 ← Tn+1 + FG−1 (random())
e2 () (pick up)
Chapter 5

Arrival-Counting Processes
1. (a) Pr{Y2 = 5} = e^{−2(2)}(2(2))⁵/5! ≈ 0.16
   (b) Pr{Y4 − Y3 = 1} = Pr{Y1 = 1} = e^{−2}·2/1! ≈ 0.271
   (c) Pr{Y6 − Y3 = 4 | Y3 = 2} = Pr{Y3 = 4} = e^{−2(3)}(2(3))⁴/4! ≈ 0.134
   (d) Pr{Y5 = 4 | Y4 = 2} = Pr{Y5 − Y4 = 2 | Y4 = 2} = Pr{Y1 = 2} = e^{−2}2²/2! ≈ 0.271
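These Poisson-increment probabilities are easy to reproduce numerically; here is a minimal Python sketch (the helper name is mine):

import math

def poisson_pmf(k, mean):
    # Pr{Y = k} for a Poisson random variable with the given mean
    return math.exp(-mean) * mean ** k / math.factorial(k)

lam = 2.0
print(round(poisson_pmf(5, lam * 2), 2))   # (a) Pr{Y2 = 5} ~ 0.16
print(round(poisson_pmf(1, lam * 1), 3))   # (b) Pr{Y4 - Y3 = 1} ~ 0.271
print(round(poisson_pmf(4, lam * 3), 3))   # (c) Pr{Y6 - Y3 = 4 | Y3 = 2} ~ 0.134
print(round(poisson_pmf(2, lam * 1), 3))   # (d) Pr{Y5 = 4 | Y4 = 2} ~ 0.271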
2. No answer provided.
3. (a) Pr{Y4 = 9 | Y2 = 6} = Pr{Y4 − Y2 = 3 | Y2 = 6} = Pr{Y2 = 3} = e^{−2(2)}(2(2))³/3! ≈ 0.195
Let G be the gap between two successive arrivals. Then G is exponentially distributed with λ = 2, and Pr{G ≤ 1/4} = 1 − e^{−2(1/4)} ≈ 0.393.
(d) Pr{T13 ≤ 7} = 1 − Σ_{j=0}^{12} e^{−2(7)}(2(7))^j/j! ≈ 0.641
(e) Let λ0 be the arrival rate for urgent patients.
≈ 0.055
λ2 = λ + 4 = 6/hour
Pr{Y30 − Y6 > m} = 1 − Σ_{j=0}^{m} e^{−24}24^j/j!
m = 32 does it.
(a) Pr{Y200 ≥ 7} = 1 − Pr{Y200 ≤ 6} = 1 − Σ_{j=0}^{6} e^{−λ̂(200)}(λ̂(200))^j/j! ≈ 0.111
= 1 − Σ_{j=0}^{c} e^{−16}(16)^j/j! ≥ 0.95
7. Suppose we model the arrival of students to the bus stop as a Poisson arrival process with expected time between arrivals of 1/λ̂ = 17.6 seconds, giving λ̂ = 1/17.6 students/second ≈ 3.4 students/minute.
Now we can compute the probability of more than 60 students arriving during each proposed time interval:
Pr{Y15 > 60} ≈ 0.094
Pr{Y12 > 60} ≈ 0.002
Pr{Y10 > 60} ≈ 0.000
Ignoring students left behind on one pick up that add to the next pick up, we see that there is nearly a 1 in 10 chance of filling the bus when pick up is every 15 minutes. The carryover will only make the problem worse. Pick up every 12 minutes effectively eliminates the problem. Every 10 minutes is more often than needed.
8. (a) Restricted to periods of the day when the arrival rate is roughly constant, a
Poisson process is appropriate to represent a large number of customers acting
independently.
(b) Not a good approximation, since most arrivals occur during a brief period just
prior to the start, and only a few before or after this period. Therefore, arrivals
do not act independently.
(c) Not a good approximation if patients are scheduled. We do not have independent
increments because patients are anticipated. (May be a good approximation for
a walk-in clinic, however.)
(d) Not a good approximation because the rate of finding bugs will decrease over
time.
(e) Probably a good approximation since fires happen (largely) independently, and
there are a large number of potential customers (buildings).
9. (a) Pr{Y3 ≤ 56} = Σ_{j=0}^{56} e^{−22(3)}(22(3))^j/j! ≈ 0.12
Since this probability is rather small, we might conclude that Fridays are different (have a lower arrival rate).
Repeating the calculation at λ = 22 + se = 23.6 and at λ = 22 − se = 20.4 shows that 56 or fewer arrivals could be quite rare if λ = 23.6. This is further evidence that Fridays could be different.
10. Let
    λ0 = 8/hour
    λ1 = 1/18/hour, p1 = 0.005
    λ2 = 1/46/hour, p2 = 0.08
    (a) λ = λ0 + λ1 + λ2 = 8 + 32/414 ≈ 8.08 surges/hour
    By the superposition property, the arrival of all surges is also a Poisson process. Therefore, E[Y8] = 8λ ≈ 64.6 surges.
    (b) By the decomposition property, the “small” and “moderate” surges can be decomposed into Poisson processes of computer-damaging surges.
    λ10 = p1λ1 = 1/3600/hour
    λ20 = p2λ2 = 1/575/hour
Y_t^{(A)} is Poisson with λ^{(A)} = (1/2)λ = 700, and Y_t^{(B)} is Poisson with λ^{(B)} = (1/2)λ = 700, and they are independent. Therefore,
Pr{Y_{1.5}^{(A)} > 1000, Y_{1.5}^{(B)} > 1000} = Pr{Y_{1.5}^{(A)} > 1000} Pr{Y_{1.5}^{(B)} > 1000}
= (Σ_{m=1001}^∞ e^{−700(1.5)}(700(1.5))^m/m!)² ≈ 0.877
13. Let {Yt ; t ≥ 0} model the arrival of autos with λ = 1/minute. Then {Y0,t ; t ≥ 0} which
models the arrival of trucks is Poisson with λ0 = (0.05)λ = 0.05/min, and {Y1,t ; t ≥ 0}
which models all others is Poisson with λ1 = (0.95)λ = 0.95/min.
(a) Pr{Y0,60 ≥ 1} = Σ_{m=1}^∞ e^{−(0.05)(60)}[(0.05)(60)]^m/m! = 1 − e^{−3}(3)⁰/0! = 1 − e^{−3} ≈ 0.95
(b) Since Y0,t is independent of Y1,t what happened to Y0,t is irrelevant. Therefore,
10 + E[Y1,60 ] = 10 + 60(.95) = 67.
(c) Pr{Y0,60 = 5 | Y60 = 50} = Pr{Y0,60 = 5} Pr{Y1,60 = 45} · 50!/(e^{−60}(60)⁵⁰)
  = [e^{−3}(3)⁵/5!][e^{−57}(57)⁴⁵/45!] · 50!/(e^{−60}(60)⁵⁰)
  = [50!/(5!45!)](3/60)⁵(57/60)⁴⁵ ≈ 0.07
15. (a) The rate at which sales are made does not depend on the time of year; there is
no seasonal demand. Also, the market for A, B and C is neither increasing nor
decreasing.
(b) Let Yt ≡ total sales after t weeks.
By the superposition property Yt is a Poisson process with rate λ = 10 + 10 =
20/week.
Pr{Y1 > 30} = Σ_{n=31}^∞ e^{−20(1)}(20(1))^n/n! = 1 − Σ_{n=0}^{30} e^{−20}20^n/n! ≈ 0.013
16. λ̂ = 8/30 ≈ 0.27 hurricanes/month
    se = √(λ̂/30) ≈ 0.09 hurricanes/month
17. (a) The rate at which calls are placed does not vary from 7 a.m.–6 p.m.
    (b) Let Y_t^{(D)} ≡ number of long-distance calls placed by hour t, where t = 0 corresponds to 7 a.m. Then Y_t^{(D)} is Poisson with rate λ^{(D)} = (1000)(0.13) = 130/hour.
    Σ_{n=413}^∞ e^{−130}(130)^n/n! = 1 − Σ_{n=0}^{412} e^{−130}130^n/n! ≈ 0
    (d) Pr{Y_1^{(D)} > 300 | Y1 = 1200} = Σ_{n=301}^{1200} [1200!/(n!(1200 − n)!)](0.13)^n(0.87)^{1200−n} ≈ 0
18. Let {Y_t^{(T)}; t ≥ 0} represent the arrival of trucks to the restaurant.
    (a) Y_t = Y_t^{(T)} + Y_t^{(C)} is Poisson with rate λ = λ^{(T)} + λ^{(C)} = 3/hour
    E[P] = E[Y_1^{(T)}] + E[C]E[Y_1^{(C)}]
19. Traffic engineer’s model: {Yt ; t ≥ 0} models the number of accidents and is Poisson
with rate λ = 2/week.
(a) Pr{Y2 ≥ 20} = Σ_{m=20}^∞ e^{−2(2)}(2(2))^m/m! = 1 − Σ_{m=0}^{19} e^{−4}4^m/m! ≈ 1.01 × 10⁻⁸
(b)
20. (a) Λ(t) = ∫_0^t λ(a) da
    Λ(t) = ∫_0^t 1 da = t, for 0 ≤ t < 6
    Λ(t) = Λ(6) + ∫_6^t 2 da = 6 + 2a|_6^t = 6 + 2t − 12 = 2t − 6, for 6 ≤ t < 13
    Λ(t) = Λ(13) + ∫_{13}^t (1/2) da = 20 + (t − 13)/2, for 13 ≤ t
(b) Pr{Y8 − Y2 > 12} = Σ_{m=13}^∞ e^{−8}8^m/m! = 1 − Σ_{m=0}^{12} e^{−8}8^m/m! ≈ 0.06
E[Y8 − Y2 ] = 8 patients
(c) 10 a.m. → hour 4; Λ(4) − Λ(2) = 2
    Pr{Y4 = 9 | Y2 = 6} = Pr{Y4 − Y2 = 3 | Y2 = 6} = Pr{Y4 − Y2 = 3} = e^{−2}2³/3! ≈ 0.18
(d) Pr{Y1/4 > 0} = 1 − Pr{Y1/4 = 0} = 1 − e^{−1/4}(1/4)⁰/0! = 1 − e^{−1/4} ≈ 0.22
Λ(7) = 2(7) − 6 = 8
Pr{Y7 ≥ 13} = Σ_{m=13}^∞ e^{−8}8^m/m! ≈ 0.06
21. (a) λ(t) = { 144, 0 ≤ t < 1; 229, 1 ≤ t < 2; 383, 2 ≤ t < 3; 96, 3 ≤ t ≤ 4 } (after rounding)
(b) Λ(t) = ∫_0^t λ(a) da
    Λ(t) = ∫_0^t 144 da = 144t, 0 ≤ t < 1
    Λ(t) = Λ(1) + ∫_1^t 229 da = 144 + 229(t − 1) = 229t − 85, 1 ≤ t < 2
    Λ(t) = Λ(2) + ∫_2^t 383 da = 373 + 383(t − 2) = 383t − 393, 2 ≤ t < 3
    Λ(t) = Λ(3) + ∫_3^t 96 da = 756 + 96(t − 3) = 96t + 468, 3 ≤ t ≤ 4
(c) E[Y3.4 − Y1.75] = Λ(3.4) − Λ(1.75) = (96(3.4) + 468) − (229(1.75) − 85) = 478.65 ≈ 479 cars
(d)
22. The algorithms given here are direct consequences of the definitions, and not necessarily
the most efficient possible.
(a) Recall that the inverse cdf for the exponential distribution with parameter λ is
X = − ln(1 − U )/λ
algorithm Erlang
a←0
for i ← 1 to n
do
a ← a − ln (1-random())/λ
enddo
return X ← a
(b) algorithm binomial
a←0
for i ← 1 to n
do
U ← random()
if {U ≥ 1 − γ} then
a←a+1
endif
enddo
return X ← a
(c) algorithm Poisson
a←0
b ← − ln(1-random())/λ
while {b < t}
do
b ← b − ln(1-random())/λ
a←a+1
enddo
return X ← a
(d) algorithm nspp
t ← Λ(t)
a←0
b ← − ln(1-random())
while {b < t}
do
b ← b − ln(1-random())
a←a+1
enddo
return X ← a
The first step in algorithm nspp converts the t to the time scale for the rate-1
stationary process.
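For reference, here are Python sketches of the four algorithms; the function names, and passing the integrated-rate function Λ explicitly to the NSPP generator, are my own choices:

import math
import random

def erlang(n, lam):
    # sum of n independent exponential(lam) variables (algorithm Erlang)
    return sum(-math.log(1.0 - random.random()) / lam for _ in range(n))

def binomial(n, gamma):
    # count successes in n Bernoulli(gamma) trials (algorithm binomial)
    return sum(1 for _ in range(n) if random.random() >= 1.0 - gamma)

def poisson_count(t, lam):
    # count how many exponential gaps fall in (0, t] (algorithm Poisson)
    a, b = 0, -math.log(1.0 - random.random()) / lam
    while b < t:
        b += -math.log(1.0 - random.random()) / lam
        a += 1
    return a

def nspp_count(t, big_lambda):
    # algorithm nspp: convert t to the rate-1 time scale, then count arrivals
    return poisson_count(big_lambda(t), 1.0)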
23. E[Y_{t+∆t} − Y_t] = E[Y_{∆t}]    (time stationarity)
    = Σ_{k=0}^∞ k e^{−λ∆t}(λ∆t)^k/k!    (by definition)
    = Σ_{k=1}^∞ e^{−λ∆t}(λ∆t)^k/(k − 1)!
    = e^{−λ∆t}(λ∆t) Σ_{k=1}^∞ (λ∆t)^{k−1}/(k − 1)!
    = e^{−λ∆t}(λ∆t) Σ_{j=0}^∞ (λ∆t)^j/j!
    = e^{−λ∆t}(λ∆t)e^{λ∆t} = λ∆t
24. Pr{Ht = t} = Pr{Yt = 0} = e^{−λt}
    Pr{Ht ≤ a} = Pr{Yt − Yt−a ≥ 1}    (at least 1 arrival between t − a and t)
               = 1 − Pr{Yt − Yt−a < 1} = 1 − e^{−λa}, 0 ≤ a ≤ t
    [timeline sketch: the age Ht reaches back from t to the most recent arrival]
25.
E[Lt ] = E[Ht + Rt ]
= E[Ht ] + E[Rt ]
= E[Ht ] + E[G] (memoryless)
> E[G]
26. Now pG(a) = 1/60 for a = 1, 2, . . . , 60. Therefore, δ = E[G] = 61/2 = 30½ months.
    η = Σ_{a=1}^{60} d(e^{ia} − 1)pG(a) = d[(1/60) Σ_{a=1}^{60} e^{ia} − 1] ≈ d(70.14/60 − 1) = d(0.169)
    η/δ ≈ $1,662.29 per month
    This is about $52 per year less than the other model.
27. We suppose that the time to burn-out of lights are independent (almost certainly
true) and identically distributed (true if same brand of bulb and same usage pattern).
Therefore the replacements form a renewal arrival process.
Estimate δ by δ̂ = 248.7 hours (sample average). Therefore, the long-run replacement rate is
1/δ̂ ≈ 4.021 × 10⁻³ bulbs/hour ≈ 35 bulbs/year
δ = ∫_0^c a f_X(a) da + c ∫_c^{30} f_X(a) da = c³/1350 + c(1 − c²/900) = c − c³/2700 days
η = 1000 ∫_0^c f_X(a) da + 300 ∫_c^{30} f_X(a) da = 7c²/9 + 300
η/δ = −300(7c² + 2700)/(c(c² − 2700)) per day
To minimize the cost we take the derivative w.r.t. c and set it equal to 0, then solve
for c. Of the 3 solutions, the only one in the range [0, 30] is c = 15.9 for which
η
≈ $34.46 per day
δ
Compare this to c = 30 for which
η
= $50 per day
δ
Therefore, it is worthwhile to replace early.
30. Suppose we wait for n patrons before beginning a tour. Then δ = n minutes is the expected time between tours. The expected cost rate for a tour of size n is
    η/δ = 10/n + (0.50)(n − 1)/2
    A plot as a function of n reveals that η/δ decreases until n = 6, then increases. At n = 6, η/δ = $2.92 per minute.
    (b) δ = 1(6) = 6 minutes
31. To prove the result for decomposition of m processes, do the decomposition in pairs, as follows:
    (i) Decompose the original process into 2 subprocesses with probabilities γ1 and 1 − γ1. Clearly both subprocesses are Poisson, the first with rate λ1 = γ1λ, and the latter with rate λ′ = (1 − γ1)λ.
    (ii) Decompose the λ′ process into two subprocesses with probabilities γ2/(1 − γ1) and 1 − γ2/(1 − γ1). Clearly both subprocesses are Poisson, the first with rate
    λ2 = [γ2/(1 − γ1)]λ′ = γ2λ
Pr{Yτ+∆τ − Yτ = h | Yτ = k}
= Σ_{ℓ=0}^h Pr{Y1,τ+∆τ − Y1,τ = ℓ, Y2,τ+∆τ − Y2,τ = h − ℓ | Y1,τ + Y2,τ = k}
= Σ_{ℓ=0}^h Pr{Y1,τ+∆τ − Y1,τ = ℓ} Pr{Y2,τ+∆τ − Y2,τ = h − ℓ}
= Σ_{ℓ=0}^h [e^{−∆t1}(∆t1)^ℓ/ℓ!][e^{−∆t2}(∆t2)^{h−ℓ}/(h − ℓ)!]    (1)
where we use the fact that Y1 and Y2 are independent NSPPs, and where ∆ti = Λi(τ + ∆τ) − Λi(τ). Let ∆t = ∆t1 + ∆t2 = Λ(τ + ∆τ) − Λ(τ). Then (1) simplifies to
[e^{−∆t}(∆t)^h/h!] Σ_{ℓ=0}^h [h!/(ℓ!(h − ℓ)!)](∆t1/∆t)^ℓ(∆t2/∆t)^{h−ℓ} = e^{−∆t}(∆t)^h/h! · 1
Chapter 6

Discrete-Time Processes
1. Sample path “2”: session 1 2 3 2 3 3 3 3 3 2 3 2 2 3 2 4
   Pr{2} = p1 p12 p23 p32 p23 p33 p33 p33 p33 p32 p23 p32 p22 p23 p32 p24
         = p1 p12 (p23)⁴(p32)⁴(p33)⁴ p22 p24
         = 1(0.95)(0.63)⁴(0.36)⁴(0.4)⁴(0.27)(0.1) ≈ 1.7 × 10⁻⁶
   Sample path “∗”: session 1 2 2 3 3 2 3 3 3 4
   Pr{∗} = p1 p12 p22 p23 p33 p32 p23 p33 p33 p34
         = p1 p12 p22 (p23)²(p33)³ p32 p34
         = 1(0.95)(0.27)(0.63)²(0.4)³(0.36)(0.24) ≈ 5.6 × 10⁻⁴
2. (a) 2, 2, 3, 2, 4,...
(b) 3, 3, 2, 3, 2, 3, 4,...
3. (a) 1, 1, 1, 1, 1,...
(b) 2, 2, 1, 1, 1, 1, 2,...
4. (a) p12 = 0.4
   (b) p11^{(2)} = 0.68
   (c) p21 p11 = 0.48
5. recurrent: {2, 3, 4}
transient: {1}
7. (a)
6(a)
n=2
0.54 0.08 0.09 0.08 0.21
0.27 0.19 0.24 0.02 0.28
0.08 0.26 0.32 0.18 0.16
0.05 0.10 0.01 0.81 0.03
0.31 0.15 0.18 0.02 0.34
n=5
0.27496 0.16438 0.19033 0.14404 0.22629
0.34353 0.14109 0.16246 0.12082 0.23210
0.33252 0.11926 0.11720 0.23690 0.19412
0.12455 0.11640 0.06041 0.60709 0.09155
0.33137 0.14745 0.17200 0.11146 0.23772
n = 20
0.2664964441 0.1393997993 0.1400484926 0.2628192095 0.1912360548
0.2675253416 0.1395887257 0.1406686163 0.2602309763 0.1919863403
0.2632018511 0.1388746569 0.1382171161 0.2707695205 0.1889368557
0.2490755395 0.1364940963 0.1301154884 0.3054031203 0.1789117568
0.2678558364 0.1396398961 0.1408494124 0.2594398254 0.1922150308
6(b)
n=2
0.01 0.55 0.14 0.01 0.29
0 0.74 0 0 0.26
0 0.35 0.09 0 0.56
0 0 0.3 0 0.7
0 0.65 0 0 0.35
n=5
0.00001 0.68645 0.00521 0.00001 0.30832
0 0.71498 0 0 0.28502
0 0.69230 0.00243 0 0.30527
0 0.6545 0.0081 0 0.3374
0 0.71255 0 0 0.28745
n = 20
1.0 × 10−20 0.7142857122 7.554699530 × 10−11 1.0 × 10−20 0.2857142881
0 0.7142857146 0 0 0.2857142858
0 0.7142857134 3.486784401 × 10−11 0 0.2857142871
0 0.7142857108 0.0000000001162261467 0 0.2857142895
0 0.7142857147 0 0 0.2857142859
6(c)
n=2
0.03 0.67 0.04 0.15 0.11
0.02 0.74 0 0.08 0.16
0.03 0.55 0.09 0.24 0.09
0 0.8 0 0 0.2
0.01 0.83 0.01 0.03 0.12
n=5
0.01739 0.75399 0.00343 0.07026 0.15493
0.01728 0.75624 0.00242 0.06838 0.15568
0.01737 0.75119 0.00461 0.07201 0.15482
0.0170 0.7578 0.0024 0.0670 0.1558
0.01736 0.75702 0.00259 0.06860 0.15443
n = 20
0.01727541956 0.7564165848 0.002467917087 0.06836130311 0.1554787760
0.01727541955 0.7564165848 0.002467917078 0.06836130306 0.1554787760
0.01727541956 0.7564165848 0.002467917098 0.06836130312 0.1554787759
0.01727541955 0.7564165848 0.002467917078 0.06836130306 0.1554787759
0.01727541955 0.7564165848 0.002467917080 0.06836130306 0.1554787759
7(b)
6(a)
0.2616580311
0.1386010363
π= 0.1373056995
0.2746113990
0.1878238342
6(b)
0.7142857143
π=
0.2857142857
6(c)
0.01727541955
0.7564165844
π= 0.002467917078
0.06836130306
0.1554787759
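Numerically, the n-step matrices and steady-state vectors in exercises 6–7 come from matrix powers and from solving πP = π with Σπ = 1. A small numpy sketch (the 3 × 3 matrix P here is a hypothetical stand-in, since the exercise-6 matrices are not reproduced in this manual; the method applies to any one-step transition matrix):

import numpy as np

P = np.array([[0.5, 0.3, 0.2],                # hypothetical one-step transition matrix
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

P20 = np.linalg.matrix_power(P, 20)           # rows converge to pi for an ergodic chain

# solve pi(I - P) = 0 with the normalization sum(pi) = 1
A = np.vstack([(np.eye(3) - P).T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(P20[0], pi, sep="\n")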
8. T = {1, 3, 4}
   R1 = {2}, R2 = {5}, R3 = {6}
   α = (0.7366666667, 0.09666666666, 0.1666666667)
9. No answer provided.
10. In the answer we use all the data. However, it might also be reasonable to have different
models for free throws and shots from the floor.
(a) Let {Sn ; n = 0, 1, . . .} represent the sequence of shots, where M = {0, 1} and 0
corresponds to a miss, 1 corresponds to a made shot.
If each shot is independent with probability p of being made, then Sn is a Markov
chain with one-step transition matrix
1−p p
P=
1−p p
An estimate of p is
    p̂ = 30/45 ≈ 0.67
with
    se = √(p̂(1 − p̂)/45) ≈ 0.07
(b) In the sample, let nij be the number of times the transition (i, j) occurred, and ni = Σ_{j=0}^1 nij. Then p̂ij = nij/ni. The observed values were
    nij   0    1    ni
    0     6    9    15
    1     8   21    29
Therefore,
    P̂ = [ 0.40  0.60
          0.28  0.72 ]
An estimate of the se of p̂ij is √(p̂ij(1 − p̂ij)/ni). Therefore, the matrix of se’s is
    ŝe = [ 0.13  0.13
           0.08  0.08 ]
(c) Let Sn be a Markov chain with state space M = {0, 1, 2, 3} corresponding to the
two most recent shots:
0 = miss, miss
1 = miss, made
2 = made, miss
3 = made, made
The observed values were
nij 0 1 2 3 ni
0 2 4 0 0 6
1 0 0 1 8 9
2 5 4 0 0 9
3 0 0 6 13 19
P̂ = [ 0.33  0.67  0     0
      0     0     0.11  0.89
      0.56  0.44  0     0
      0     0     0.32  0.68 ]

ŝe = [ 0.19  0.19  0     0
       0     0     0.10  0.10
       0.17  0.17  0     0
       0     0     0.11  0.11 ]
(d) Assume a made shot, followed by 4 missed shots, followed by a made shot.
    Under model (a): (1 − p̂)⁴ p̂ ≈ 0.008
    Under model (b): p̂10(p̂00)³ p̂01 ≈ 0.011
    Under model (c): In this case we need to know the result of the previous two shots.
    If (made, made): p̂32 p̂20(p̂00)² p̂01 ≈ 0.013
    If (miss, made): p̂12 p̂20(p̂00)² p̂01 ≈ 0.005
Thus, a run of this length is slightly more likely under the first case of model (c).
This is only one small bit of evidence that favors model (c). Essentially, we look to
see if the additional information we account for in model (b) substantially changes the
transition probabilities from model (a). If it does, then we see if the transition proba-
bilities change when moving from (b) to (c). We stop when the additional information
no longer changes the probabilities. For example,
Notice that model (b) seems to indicate that there is some difference depending on
whether or not Robinson made the most recent shot, compared to model (a). Model
(c) indicates some additional differences based on the last two shots. So maybe there
is a “hot hand,” but because of the large standard errors we cannot be sure.
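The estimation used above is mechanical to code. A Python sketch (with a short hypothetical 0/1 make-miss sequence, since the raw shot data are not reproduced here):

import math
from collections import Counter

shots = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1]    # hypothetical data: 1 = made, 0 = miss

counts = Counter(zip(shots, shots[1:]))       # n_ij over consecutive pairs
for i in (0, 1):
    n_i = counts[(i, 0)] + counts[(i, 1)]
    for j in (0, 1):
        p = counts[(i, j)] / n_i              # p_ij = n_ij / n_i
        se = math.sqrt(p * (1.0 - p) / n_i)   # se of the estimate
        print(f"p[{i}][{j}] = {p:.2f} (se {se:.2f})")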
Case 6.4
Sn represents the nth key typed.
M = {1, 2, . . . , 26 (letters), 27, . . . , 36 (numbers), 37, . . . , 42 (punctuation), 43, . . . (space and others)}
  ≡ {A, B, . . . , Z, 0, 1, . . . , 9, ;, ·, “, ‘, ?, space, . . .}
Case 6.5
Sn represents the brand of the nth toothpaste purchase
M = {1, 2, 3, 4, . . . , m} where each state represents a brand of toothpaste
pij ≡ probability a consumer next purchases brand j given they currently use i
pi ≡ probability a consumer uses brand i at the beginning of the study
Case 6.6
Sn represents the high temperature on the nth day
M = {−20, −19, −18, . . . , 109, 110} is temperature in Fahrenheit
pij ≡ probability high temperature tomorrow is j given today’s high is i
pi ≡ probability temperature is i on the first day of the study
Case 6.7
Sn represents the task type of the nth task
M = {1, 2, . . . , m} where each state represents a task such as “move,” “place,”
“change tool,” etc.
pij ≡ probability next task is j given current task is i
pi ≡ probability first task is type i
Case 6.8
Sn represents accident history for years n − 1 and n
M = {1, 2, 3, 4} as defined in Case 6.8
16. (a) π2 = 1/4 = 0.25
    Pr{Z15 ≤ 1} = Σ_{j=0}^{1} [15!/(j!(15 − j)!)] π2^j (1 − π2)^{15−j}
    For the Markov-chain model we have to look at all possible sample paths:
    • Pr{no defectives}
0 1/2 1/2 0
1/3 0 1/3 1/3
P=
1/3 1/3 0 1/3
1/3 1/3 1/3 0
(b) p24^{(5)} ≈ 0.1934
(c) π4 = 3/16 = 0.1875
18. (a)
(2, 000, 000)p12 + (2, 000, 000)p22 + (2, 000, 000)p32 = 2, 107, 800
19. (a) M = {0, 1, 2, . . . , k} is the number of prisoners that remain in the prison; n is the number of months.
    Let
    b(i, ℓ) = [i!/(ℓ!(i − ℓ)!)] p^ℓ(1 − p)^{i−ℓ}
    which is the probability that ℓ prisoners are paroled if there are i in prison. Then P has the following form:
    P = [ 1        0           0           0           ···  0
          b(1,1)   b(1,0)      0           0           ···  0
          b(2,2)   b(2,1)      b(2,0)      0           ···  0
          b(3,3)   b(3,2)      b(3,1)      b(3,0)      ···  0
          ···      ···         ···         ···         ···
          b(k,k)   b(k,k−1)    b(k,k−2)    b(k,k−3)    ···  b(k,0) ]
    (b) p_{k0}^{(m−1)}
    With k = 6, p = 0.1 and m = 12 we obtain p_{60}^{(11)} ≈ 0.104.
    (c) Σ_{ℓ=0}^{k} ℓ p_{kℓ}^{(m)}
    With k = 6, p = 0.1 and m = 12 we obtain Σ_{ℓ=0}^{6} ℓ p_{6ℓ}^{(12)} ≈ 1.7.
The prison will close if all have been paroled. Since the prisoners are independent
20. (a) M = {1, 2, 3} ≡ {good and declared good, defective but declared good, defective and declared defective}; n ≡ number of items produced
    P = [ 0.995  0.005(0.06)  0.005(0.94)
          0.495  0.505(0.06)  0.505(0.94)
          0.495  0.505(0.06)  0.505(0.94) ]
(b) π2 = 0.0006
21. (a)
M = {1, 2, 3, 4}
= {(no, no), (no, accident), (accident, no), (accident, accident)}
0.92 0.08 0 0
0 0 0.97 0.03
P=
0.92 0.08 0 0
0 0 0.97 0.03
(b) π1 ≈ 0.85
(c) F_{AB}^{(1)} = P_{AA}^{0} P_{AB}
Now suppose that the result is true for all n ≤ k, for some k > 1.
f_{ij}^{(k+1)} = Σ_{ℓ∈A} Pr{S_{k+1} = j, S_k ∈ A, . . . , S_2 ∈ A, S_1 = ℓ | S_0 = i}
= Σ_{ℓ∈A} Pr{S_{k+1} = j, S_k ∈ A, . . . , S_2 ∈ A | S_1 = ℓ, S_0 = i} × Pr{S_1 = ℓ | S_0 = i}
= Σ_{ℓ∈A} Pr{S_k = j, S_{k−1} ∈ A, . . . , S_1 ∈ A | S_0 = ℓ} p_{iℓ}
= Σ_{ℓ∈A} f_{ℓj}^{(k)} p_{iℓ} = Σ_{ℓ∈A} p_{iℓ} f_{ℓj}^{(k)}
In matrix form
F_{AB}^{(k+1)} = P_{AA} F_{AB}^{(k)} = P_{AA}(P_{AA}^{k−1} P_{AB}) = P_{AA}^{k} P_{AB}
Therefore, the result is true for n = k + 1, and is thus true for any n.
and clearly
I(S0 = j) = { 0, i ≠ j; 1, i = j }
Thus,
µij = E[X] = E[Σ_{n=0}^∞ I(Sn = j)] = I(i = j) + Σ_{n=1}^∞ E[I(Sn = j)]
    = I(i = j) + Σ_{n=1}^∞ [0(1 − p_{ij}^{(n)}) + 1 · p_{ij}^{(n)}] = I(i = j) + Σ_{n=1}^∞ p_{ij}^{(n)}
In matrix form
M = I + Σ_{n=1}^∞ P_{TT}^{(n)} = I + Σ_{n=1}^∞ P_{TT}^n = (I − P_{TT})^{−1}
Therefore,
M = (I − P_{TT})^{−1} ≈ [ 1  2.72  2.87
                          0  2.84  2.98
                          0  1.70  3.46 ]
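Computing M is a single matrix inversion; here is a numpy sketch (the substochastic block P_TT below is a hypothetical example, since the exercise's matrix is not repeated here):

import numpy as np

P_TT = np.array([[0.0, 0.5, 0.3],             # hypothetical transitions among transient states
                 [0.0, 0.4, 0.4],
                 [0.0, 0.2, 0.6]])

M = np.linalg.inv(np.eye(3) - P_TT)           # expected visit counts mu_ij
print(M)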
E[Vij] = Σ_{h=1}^m E[Vij | S1 = h] Pr{S1 = h | S0 = i}
       = 1 · pij + Σ_{h≠j} E[1 + Vhj] pih
       = pij + Σ_{h≠j} (1 + νhj) pih
       = pij + Σ_{h=1}^m pih(1 + νhj) − pij(1 + νjj)
       = Σ_{h=1}^m pih(1 + νhj) − pij νjj
In matrix form
V = P(11′ + V) − PD = P(V − D) + 11′
(b) Since
V = PV − PD + 11′
π′V = π′PV − π′PD + π′11′ = π′V − π′D + 1′
Therefore,
π′D = 1′
or, term by term, πi νii = 1, so
νii = 1/πi
26. Let M = {0, 1, 2, . . . , 10} represent the number of cherries a player has in the basket.
The one-step transition matrix is
3/7 1/7 1/7 1/7 1/7 0 0 0 0 0 0
3/7 0 1/7 1/7 1/7 1/7 0 0 0 0 0
3/7 0 0 1/7 1/7 1/7 1/7 0 0 0 0
1/7 2/7 0 0 1/7 1/7 1/7 1/7 0 0 0
1/7 0 2/7 0 0 1/7 1/7 1/7 1/7 0 0
P= 1/7 0 0 2/7 0 0 1/7 1/7 1/7 1/7 0
1/7 0 0 0 2/7 0 0 1/7 1/7 1/7 1/7
1/7 0 0 0 0 2/7 0 0 1/7 1/7 2/7
1/7 0 0 0 0 0 2/7 0 0 1/7 3/7
1/7 0 0 0 0 0 0 2/7 0 0 4/7
0 0 0 0 0 0 0 0 0 0 1
M = (I − PT T )−1
5.642555330 1.237076756 1.468321920 1.508490981 1.699310675
4.430331574 2.051427403 1.388800859 1.464830124 1.619923520
4.189308265 0.9910305820 2.185061681 1.373952904 1.557546460
3.624339776 1.132212235 1.088206223 2.150572935 1.430445774
3.326241706 0.7736368034 1.211218915 1.044607959 2.189326947
=
2.944765281
0.7518819338 0.8321534392 1.159204127 1.064213430
2.502145104 0.5682528341 0.7588874323 0.7378123674 1.120907044
2.094007860 0.4917825124 0.5692601344 0.6742348636 0.6995152842
1.721601833 0.3844024235 0.4797861159 0.4846075656 0.6262492773
1.404367293 0.3172345401 0.3724060269 0.4081372440 0.4426201777
1.105741430 1.019364871 0.9134948380 0.6768445448 0.5307779547
1.191625516 1.002037338 0.9081983520 0.6745406751 0.5394859830
1.126134901 1.081735894 0.8876763410 0.6647276567 0.5371821134
1.104626156 1.008894626 0.9654729638 0.6442056457 0.5318856274
1.000579145 0.9847916242 0.8926316951 0.7239042017 0.5145580951
1.792814119 0.8807446128 0.8711229501 0.6584135873 0.6004421813
0.6677006009 1.639625786 0.7440222642 0.5960365279 0.5210550255
0.7822967692 0.5336608097 1.520642295 0.5051593083 0.4773941689
0.4032312936 0.6566735018 0.4237868366 1.301420130 0.3978731088
0.3814764240 0.2980980700 0.5649684897 0.2410233088 1.212223756
The distribution of the number of spins is f_{0,10}^{(n)}, n = 1, 2, . . . with A = {0, 1, . . . , 9} and B = {10}, which is the (1, 1) element of F_{AB}^{(n)} = P_{AA}^{n−1} P_{AB}.
2 min{X1, X2} = 2Z, since spins are independent and identically distributed. Notice that Pr{X1 > n} = 1 − p_{0,10}^{(n)}. Then, letting T be the total number of spins, this result can be used to calculate the distribution or expectation. You will find that E[T] < E[2X1].
27. (a)
M = {1, 2, 3, 4, 5}
≡ {insert, withdraw, deposit, information, done}
(b) Both properties are rough approximations at best. Customers are likely to do
only 1 of each type of transaction, so all of their previous transactions influence
what they will do next. Also, the more transactions they have made, the more
likely they are to be done, violating stationarity.
(c) Need f_{15}^{(n)} for n = 1, 2, . . . , 20 with A = {1, 2, 3, 4} and B = {5}.
    n    f_{15}^{(n)}
1 0
2 0.9
3 0.09
4 0.009
5 0.0009
6 0.00009
7–20 nearly 0
(d) We need 100µ12 because µ12 is the expected number of times in the withdraw
state.
T = {1, 2, 3, 4} R = {5}
M = (I − PT T )−1
1.0 0.5291005291 0.4338624339 0.1481481481
0 1.005291005 0.05291005291 0.05291005291
=
0 0.05291005291 1.005291005 0.05291005291
0 0.05291005291 0.05291005291 1.005291005
28. Clearly Pr{H = 1} = 1 − pii = γ. Suppose the result is correct for all a ≤ n for some n > 1. Then
f_{jj}^{(2)} = Σ_{h∈A} Pr{S2 = j, S1 = h | S0 = j}
= Σ_{h∈A} Pr{S2 = j | S1 = h, S0 = j} Pr{S1 = h | S0 = j}
= Σ_{h∈A} Pr{S1 = j | S0 = h} pjh
= Σ_{h∈A} phj pjh = Σ_{h∈A} pjh phj
In matrix form
R_{BB}^{(2)} = P_{BA} P_{AB}
For n ≥ 3,
f_{jj}^{(n)} = Pr{Sn = j, Sn−1 ∈ A, . . . , S1 ∈ A | S0 = j}
= Σ_{h∈A} Pr{Sn = j, Sn−1 ∈ A, . . . , S1 = h | S0 = j}
= Σ_{h∈A} Pr{Sn = j, Sn−1 ∈ A, . . . , S2 ∈ A | S1 = h, S0 = j} × Pr{S1 = h | S0 = j}
= Σ_{h∈A} Pr{Sn−1 = j, Sn−2 ∈ A, . . . , S1 ∈ A | S0 = h} pjh
= Σ_{h∈A} f_{hj}^{(n−1)} pjh = Σ_{h∈A} pjh f_{hj}^{(n−1)}
In matrix form
R_{BB}^{(n)} = P_{BA}(P_{AA}^{n−2} P_{AB})
P_{AB} = ( 0.95
           0.36
           0 )
Therefore
R_{BB}^{(5)} = f_{22}^{(5)} ≈ 0.015
30. Let p(a) = Pr{Xn = a} = e^{−2}2^a/a!, a = 0, 1, . . . .
For given (r, s), the state space is M = {r, r + 1, . . . , s}. Careful thinking shows that
pij = { p(i − j), j = r, r + 1, . . . , i
      { 0, j = i + 1, i + 2, . . . , s − 1
      { 1 − Σ_{h=r}^{i} pih, j = s
with the exception that pss = 1 − Σ_{h=r}^{s−1} psh.
The one-step transition matrix for (r, s) = (4, 10) is
0.1353352832 0 0 0
0.2706705664 0.1353352832 0 0
0.2706705664 0.2706705664 0.1353352832 0
P= 0.1804470442 0.2706705664 0.2706705664 0.1353352832
0.09022352214 0.1804470442 0.2706705664 0.2706705664
0.03608940886 0.09022352214 0.1804470442 0.2706705664
0.01202980295 0.03608940886 0.09022352214 0.1804470442
0 0 0.8646647168
0 0 0.5939941504
0 0 0.3233235840
0 0 0.1428765398
0.1353352832 0 0.0526530176
0.2706705664 0.1353352832 0.0165636087
0.2706705664 0.2706705664 0.1398690889
The associated steady-state distribution is
0.1249489668
0.1250496100
0.1256593229
π=
0.1258703295
0.1188385832
0.09050677047
0.2891264170
Σ_{h=r}^{s} h πh = Σ_{h=4}^{10} h πh ≈ 7.4
0 0 0 0 0.8646647168
0 0 0 0 0.5939941504
0 0 0 0 0.3233235840
0 0 0 0 0.1428765398
0 0 0 0 0.0526530176
0.1353352832 0 0 0 0.0165636087
0.2706705664 0.1353352832 0 0 0.0045338057
0.2706705664 0.2706705664 0.1353352832 0 0.0010967192
0.1804470442 0.2706705664 0.2706705664 0.1353352832 0.0002374476
0.09022352214 0.1804470442 0.2706705664 0.2706705664 0.1353817816
0.09091028492
0.09091052111
0.09089895276
0.09087291478
0.09094611060
π=
0.09138954271
0.09154300364
0.08642895347
0.06582378587
0.2102759303
Σ_{h=3}^{12} h πh ≈ 7.9
31. Pr{Sn+2 = j | Sn = i, . . . , S0 = z}
= Σ_{h=1}^m Pr{Sn+2 = j | Sn+1 = h, Sn = i, . . . , S0 = z} × Pr{Sn+1 = h | Sn = i, . . . , S0 = z}
= Σ_{h=1}^m Pr{Sn+2 = j | Sn+1 = h} Pr{Sn+1 = h | Sn = i}
= Σ_{h=1}^m Pr{Sn+2 = j, Sn+1 = h | Sn = i}
= Pr{Sn+2 = j | Sn = i}
Also,
Pr{Sn+2 = j | Sn = i}
= Σ_{h=1}^m Pr{Sn+2 = j, Sn+1 = h | Sn = i}
= Σ_{h=1}^m Pr{Sn+2 = j | Sn+1 = h, Sn = i} Pr{Sn+1 = h | Sn = i}
= Σ_{h=1}^m Pr{S2 = j, S1 = h | S0 = i}
= Pr{S2 = j | S0 = i}
independent of n.
32. p_{ij}^{(n)} = Pr{Sn = j | S0 = i}
= Σ_{h=1}^m Pr{Sn = j, Sk = h | S0 = i}
= Σ_{h=1}^m Pr{Sn = j | Sk = h, S0 = i} Pr{Sk = h | S0 = i}
= Σ_{h=1}^m Pr{Sn = j | Sk = h} p_{ih}^{(k)}
= Σ_{h=1}^m Pr{Sn−k = j | S0 = h} p_{ih}^{(k)}
= Σ_{h=1}^m p_{hj}^{(n−k)} p_{ih}^{(k)} = Σ_{h=1}^m p_{ih}^{(k)} p_{hj}^{(n−k)}
Chapter 7

Continuous-Time Processes
1. π1 = 5/7, π2 = 2/7
2. (a) (i)
0 1 0 0 0
2/5 0 2/5 1/5 0
P= 1/4 0 0 0 3/4
3/6 1/6 0 0 2/6
0 0 0 1 0
(b) (i)
0 1 0 0 0
1/2 0 1/2 0 0
P= 1/4 0 0 0 3/4
0 0 0 0 1
0 0 0 1 0
(c) (i)
1 0 0 0 0
1/2 0 1/2 0 0
P= 1/4 0 0 0 3/4
0 0 0 0 1
0 0 0 0 1
(ii) T = {2, 3, 4} R1 = {1}, R2 = {5}
(iii) πR1 = 1 πR2 = 1
3. (a) (i) For i = 1, 2, . . . , 5
dp_{i1}(t)/dt = −p_{i1}(t) + 2p_{i2}(t) + p_{i3}(t)
dp_{i2}(t)/dt = p_{i1}(t) − 5p_{i2}(t) + p_{i4}(t)
dp_{i3}(t)/dt = 2p_{i2}(t) − 4p_{i3}(t)
dp_{i4}(t)/dt = p_{i2}(t) − 6p_{i4}(t) + 5p_{i5}(t)
dp_{i5}(t)/dt = 3p_{i3}(t) + 2p_{i4}(t) − 5p_{i5}(t)
(ii) P(t) = Σ_{n=0}^∞ e^{−6t} Q^n (6t)^n/n!
where
Q = [ 5/6  1/6  0    0    0
      1/3  1/6  1/3  1/6  0
      1/6  0    1/3  0    1/2
      1/2  1/6  0    0    1/3
      0    0    0    5/6  1/6 ]
(b) (i) For i = 1, 2, . . . , 5
dp_{i1}(t)/dt = −p_{i1}(t) + 2p_{i2}(t) + p_{i3}(t)
dp_{i2}(t)/dt = p_{i1}(t) − 4p_{i2}(t)
dp_{i3}(t)/dt = 2p_{i2}(t) − 4p_{i3}(t)
dp_{i4}(t)/dt = −2p_{i4}(t) + 5p_{i5}(t)
dp_{i5}(t)/dt = 3p_{i3}(t) + 2p_{i4}(t) − 5p_{i5}(t)
(ii) P(t) = Σ_{n=0}^∞ e^{−5t} Q^n (5t)^n/n!
where
Q = [ 4/5  1/5  0    0    0
      2/5  1/5  2/5  0    0
      1/5  0    1/5  0    3/5
      0    0    0    3/5  2/5
      0    0    0    1    0 ]
(c) (i) For i = 1, 2, . . . , 5
dp_{i1}(t)/dt = 2p_{i2}(t) + p_{i3}(t)
dp_{i2}(t)/dt = −4p_{i2}(t)
dp_{i3}(t)/dt = 2p_{i2}(t) − 4p_{i3}(t)
dp_{i4}(t)/dt = −2p_{i4}(t)
dp_{i5}(t)/dt = 3p_{i3}(t) + 2p_{i4}(t)
(ii) P(t) = Σ_{n=0}^∞ e^{−4t} Q^n (4t)^n/n!
where
Q = [ 1    0  0    0    0
      1/2  0  1/2  0    0
      1/4  0  0    0    3/4
      0    0  0    1/2  1/2
      0    0  0    0    1 ]
e1 () (regular call)
if {Sn = 1} then
Sn+1 ← 2
C3 ← Tn+1 + FX−1 (random())
endif
C1 ← Tn+1 + FG−1 (random())
e2 () (chair call)
if {Sn = 1} then
Sn+1 ← 3
C4 ← Tn+1 + FZ−1 (random())
else
if {Sn = 2} then
Sn+1 ← 4
Z ← FZ−1 (random())
C4 ← Tn+1 + Z
C3 ← C3 + Z
endif
endif
C2 ← Tn+1 + FH−1 (random())
Pr{Z > t + b | Z > t} = Pr{Z > t + b}/Pr{Z > t} = e^{−λ(t+b)}/e^{−λt} = e^{−λb} = Pr{Z > b}
giving
0.1290322581
0.06451612903
0.09677419355
0.1451612903
π=
0.2177419355
0.1532258065
0.1209677419
0.07258064516
Σ_{j=0}^{7} jπj ≈ 3.56 units

λπ0 ≈ 0.239
Σ_{j=0}^{7} jπj ≈ 3.03
Therefore, (2,7) carries a slightly lower inventory level but with more than double the
lost sales rate.
9. To solve this problem we must evaluate p02 (t) for t = 17520 hours (2 years). Using
the uniformization approach with n∗ = 1000, and carrying 10 digits of accuracy in all
calculations,
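The numerical result is not reproduced above, but the uniformization computation it describes is easy to sketch in Python: with Q = I + G/g*, P(t) ≈ Σ_{n=0}^{n*} e^{−g*t}[(g*t)^n/n!] Q^n. For concreteness the sketch uses the two-machine generator of exercise 10 below; n* is my own choice and must comfortably exceed g*t:

import numpy as np

G = np.array([[-0.02,  0.02,  0.00],
              [ 0.04, -0.06,  0.02],
              [ 0.00,  0.04, -0.04]])
g_star = 0.06                                 # max |g_ii|
Q = np.eye(3) + G / g_star                    # uniformized transition matrix

def P_of_t(t, n_star=1500):
    P = np.zeros_like(G)
    Qn = np.eye(3)                            # Q**n, accumulated iteratively
    log_w = -g_star * t                       # log of e^{-g*t}(g*t)^n/n!, n = 0
    for n in range(n_star + 1):
        P += np.exp(log_w) * Qn
        Qn = Qn @ Q
        log_w += np.log(g_star * t) - np.log(n + 1)
    return P

print(P_of_t(17520.0)[0])                     # state distribution after 2 years from state 0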
10. Let M = {0, 1, 2} represent the number of failed machines, so that {Yt ; t ≥ 0} is the
number of failed machines at time t hours. When {Yt = 0}, the failure rate is 2(0.01),
since each machine fails at rate 0.01/hour and there are two machines. When {Yt = 1}
the failure rate is 0.02/hour, as stated in the problem. When {Yt = 1 or 2} the repair
rate is 1/24/hour ≈ 0.04/hour.
Therefore,
−0.02 0.02 0
G=
0.04 −0.06 0.02
0 0.04 −0.04
and
because if we start in j we must spend Zj there, and we can condition on the first state
visited after state i; the Markov property implies that once the process leaves i for k,
the fact that it started in i no longer matters. But notice that
E[Xkj] = µkj
Therefore,
µij = (1/gjj)I(i = j) + Σ_{k∈T, k≠i} µkj pik
Then noting that pik = gik/gii and gjj = gii if i = j, we have
µij = I(i = j)/gii + Σ_{k∈T, k≠i} (gik/gii) µkj
or
gii µij = I(i = j) + Σ_{k∈T, k≠i} gik µkj
by conditioning on the first state visited after i (if it is not j), and recalling that the Markov property implies that once the process leaves i for k, the fact that it started in i no longer matters. But notice that
E[Vkj] = νkj
Therefore,
νij = 1/gii + Σ_{k≠i,j} νkj pik
or
gii νij = 1 + Σ_{k≠i,j} gik νkj
13. (a) We use the embedded Markov chain and holding times to parameterize the Markov
process. The state space and embedded Markov chain are given in the answer to
Exercise 27, Chapter 6.
We take t in minutes, so that 1/ψ1 = 1/2, 1/ψ2 = 1, 1/ψ3 = 2, 1/ψ4 = 1, and 1/ψ5 = ∞ (state 5 is absorbing). Therefore,
G = [ −ψ1  0.5ψ1    0.4ψ1   0.1ψ1   0
      0    −ψ2      0.05ψ2  0.05ψ2  0.9ψ2
      0    0.05ψ3   −ψ3     0.05ψ3  0.9ψ3
      0    0.05ψ4   0.05ψ4  −ψ4     0.9ψ4
      0    0        0       0       0 ]
  = [ −2   1      0.8   0.2    0
      0    −1     0.05  0.05   0.9
      0    0.025  −0.5  0.025  0.45
      0    0.05   0.05  −1     0.9
      0    0      0     0      0 ]
(b) We might expect the time to perform each type of transaction to be nearly con-
stant, implying that the exponential distribution is not appropriate. The only
way to be certain, however, is to collect data.
(c) We need 1 − p15 (t) for t = 4 minutes. Using the uniformization approach with
n∗ = 16 and carrying 10 digits of accuracy in all calculations,
gii µij = I(i = j) + Σ_{k∈T, k≠i} gik µkj
In this problem T = {1, 2, 3, 4} and we need µ11 + µ12 + µ13 + µ14. There will be 16 equations in 16 unknowns. To set up these equations it helps to rewrite the result as
−I(i = j) = Σ_{k=1}^{i−1} gik µkj − gii µij + Σ_{k=i+1}^{4} gik µkj
for i = 1, 2, 3, 4 and j = 1, 2, 3, 4. Letting x = (µ11 µ21 µ31 µ41 µ12 µ22 . . . µ44)′, we can write the system of equations as
Ax = b
with
A is 16 × 16; its first 8 columns are
−2 1 0.8 0.2 0 0 0 0
0 0 0 0 −2 1 0.8 0.2
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 −1 0.05 0.05 0 0 0 0
0 0 0 0 0 −1 0.05 0.05
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0.025 −0.5 0.025 0 0 0 0
0 0 0 0 0 0.025 −0.5 0.025
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0.05 0.05 −1 0 0 0 0
0 0 0 0 0 0.05 0.05 −1
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
and its last 8 columns are
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
−2 1 0.8 0.2 0 0 0 0
0 0 0 0 −2 1 0.8 0.2
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 −1 0.05 0.05 0 0 0 0
0 0 0 0 0 −1 0.05 0.05
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0.025 −0.5 0.025 0 0 0 0
0 0 0 0 0 0.025 −0.5 0.025
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0.05 0.05 −1 0 0 0 0
0 0 0 0 0 0.05 0.05 −1
b = (−1, 0, 0, 0, 0, −1, 0, 0, 0, 0, −1, 0, 0, 0, 0, −1)′
14. (a) Let M = {0, 1, 2, 3} represent the state of the system, where 0 corresponds to
an empty system; 1 to the voice-mail being busy but the operator idle; 2 to the
operator being busy and the voice mail idle; and 3 to both the voice mail and
operator being busy. Then {Yt ; t ≥ 0} is the state of the system at time t hours.
−20 20 0 0
30 −50 0 20
G=
10 0 −30 20
0 10 50 −60
(b) Need π2 + π3
π = (17/41, 8/41, 10/41, 6/41)
Therefore π2 + π3 = 16/41 ≈ 0.39 or 39% of the time.
15. Let λ ≡ failure rate, τ ≡ repair rate, and M = {0, 1, 2} represent the number of failed
computers. Then
−λ λ 0
G = τ −(λ + τ ) λ
0 0 0
λµ00 = 1 + λµ10
λµ01 = λµ11
(λ + τ)µ10 = τµ00
(λ + τ)µ11 = 1 + τµ01
or
−1 = −λµ00 + λµ10
 0 = −λµ01 + λµ11
 0 = τµ00 − (λ + τ)µ10
−1 = τµ01 − (λ + τ)µ11
The solution is
[ µ00  µ01 ]   [ (λ + τ)/λ²  1/λ ]
[ µ10  µ11 ] = [ τ/λ²        1/λ ]
Therefore, E[TTF] = (λ + τ)/λ² + 1/λ = (2λ + τ)/λ².
System E[T T F ]
A 111,333.33
B 105,473.68
C 100,200.00
D 86,358.28
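Given the closed form, the table is produced by evaluating E[TTF] = (2λ + τ)/λ² at each system's rates; a one-line Python sketch (the rates shown are hypothetical stand-ins, since the systems' parameter table is not reproduced here):

def expected_ttf(lam, tau):
    # E[TTF] = (2*lam + tau)/lam**2, from the matrix solution above
    return (2.0 * lam + tau) / lam ** 2

print(expected_ttf(0.003, 1.0))               # hypothetical: failures 0.003/hr, repairs 1/hr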
17. We assume customers take all available copies if there are fewer available than they want, and that there is only one pending order at a time.
S0 ← s (initially s copies)
C1 ← FG−1(random()) (set clock for first demand)
C2 ← ∞ (no order arrival pending)
18. Pr{H ≤ t} = Σ_{n=1}^∞ Pr{H ≤ t | X = n} Pr{X = n} = Σ_{n=1}^∞ Pr{Σ_{i=1}^n Ji ≤ t} γ(1 − γ)^{n−1}
since X and the Ji are independent. But Σ_{i=1}^n Ji has an Erlang distribution with parameter λ and n phases. Therefore,
Pr{H ≤ t} = Σ_{n=1}^∞ [1 − Σ_{j=0}^{n−1} e^{−λt}(λt)^j/j!] γ(1 − γ)^{n−1}
= 1 − γe^{−λt} Σ_{n=1}^∞ Σ_{j=0}^{n−1} [(λt)^j/j!](1 − γ)^{n−1}
= 1 − γe^{−λt} Σ_{j=0}^∞ [(λt)^j/j!] Σ_{n=j+1}^∞ (1 − γ)^{n−1}
= 1 − γe^{−λt} Σ_{j=0}^∞ [(λt)^j/j!] [Σ_{n=0}^∞ (1 − γ)^n − Σ_{n=0}^{j−1} (1 − γ)^n]
= 1 − γe^{−λt} Σ_{j=0}^∞ [(λt)^j/j!] [1/γ − (1 − (1 − γ)^j)/γ]
= 1 − e^{−λt} Σ_{j=0}^∞ (λt(1 − γ))^j/j!
= 1 − e^{−λt} e^{λt(1−γ)}
= 1 − e^{−λtγ}
an exponential distribution.
This result shows that we can represent the holding time in a state (which is exponen-
tially distributed) as the sum of a geometrically distributed number of exponentially
distributed random variables with a common rate. This is precisely what uniformiza-
tion does.
19. pij(t) = Σ_{n=0}^∞ e^{−g*t} q_{ij}^{(n)} (g*t)^n/n!
          = p̃ij(t) + Σ_{n=n*+1}^∞ e^{−g*t} q_{ij}^{(n)} (g*t)^n/n!
          ≥ p̃ij(t)
because g* > 0, t > 0 and q_{ij}^{(n)} ≥ 0.
Notice that
Σ_{n=n*+1}^∞ e^{−g*t} q_{ij}^{(n)} (g*t)^n/n! ≤ 1 − e^{−g*t} Σ_{n=0}^{n*} (g*t)^n/n!
because q_{ij}^{(n)} ≤ 1.
20. = e^{−λ1t} e^{−λMt} − [λM/(λ1 + λM)] e^{−(λ1+λM)t}
    = [1 − λM/(λ1 + λM)] e^{−(λ1+λM)t}
    = (λ1/λH) e^{−λHt}
21. (a)
G = [ −1/12  (0.95)/12           (0.1)/12   (0.4)/12
      0      −73/1500            (73/1500)(0.63/0.73)  (73/1500)(0.10/0.73)
      0      (1/50)(0.36/0.60)   −1/50      (1/50)(0.24/0.60)
      1/3    0                   0          −1/3 ]
Notice that for row 2,
1/g22 = 15/(1 − p22) = 15/0.73 = 1500/73
so g22 = 73/1500 to account for transitions from state 2 to itself. Similarly for row 3. The steady-state probabilities are the same:
0.08463233821
0.2873171709
π=
0.6068924063
0.02115808455
(b) No answer provided.
(c) The only change in the generator matrix is that the last row becomes (0, 0, 0, 0).
22. Let G1, G2, . . . be the times between visits to state 1. Because the process is semi-Markov, these are independent, time-stationary random variables.
Let Rn be the time spent in state 1 on the nth visit. Then R1, R2, . . . are also independent with E[Rn] = τ1. Therefore we have a renewal-reward process.
From Exercise 25, Chapter 4, the expected number of states visited between visits to 1 is 1/ξ1. The fraction of those visits that are to state j has expectation ξj/ξ1. Therefore,
E[Gn] = τ1 + Σ_{j≠1} (ξj/ξ1)τj = (1/ξ1) Σ_{j=1}^m ξjτj
and
π1 = E[Rn]/E[Gn] = ξ1τ1 / Σ_{j=1}^m ξjτj
Substituting (1),
Σ_{k≠j} [(ξk/gkk)/d] gkj − [(ξj/gjj)/d] gjj
= (1/d) Σ_{k≠j} (gkj/gkk) ξk − (1/d) ξj
= (1/d) [Σ_{k≠j} pkj ξk − ξj] ?= 0
Chapter 8

Queueing Processes
1. Let µ be the production rate. Sample average production time = 31.9/10 = 3.19 minutes. Therefore, µ̂ = 1/3.19 ≈ 0.31 parts/minute. With 2 machines the rate is 2µ̂ = 0.62 parts/minute.
F_X(a) = { 0, a < 0; 1 − e^{−0.31a}, a ≥ 0 }
2.
3. • Self-service system
We can approximate the self-service and full-service copiers as independent M/M/1 queues.
self-service queue: λ̂ = 1/10 customer/minute; 1/µ̂ = 3 minutes, so µ̂ = 1/3 customer/minute; ρ = λ/µ = 3/10
wq = ρ²/(λ(1 − ρ)) = 9/7 ≈ 1.3 minutes
full-service queue: λ̂ = 1/15 customer/minute; 1/µ̂ = 7 minutes, so µ̂ = 1/7 customer/minute; ρ = 7/15
wq = 49/8 ≈ 6.1 minutes
• Full-service system
We can approximate it as a single M/M/2 queue. Based on the sample average of all customers: λ̂ = 1/6 customer/minute; 1/µ̂ = 4.6 minutes, so µ̂ = 1/4.6 customer/minute; ρ = λ/(2µ) ≈ 0.38
ℓq = π2ρ/(1 − ρ)² ≈ 0.13 customers
• Concerns
For the self-service system we modeled the service times of self-service customers as exponentially distributed, and the service times of full-service customers as exponentially distributed. For the full-service system (M/M/2) we modeled the service times of all customers together as exponentially distributed. These are incompatible models, but still o.k. as approximations.
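The quantities used above are quick to compute with small Python helpers (a sketch; these are the standard M/M/1 and M/M/2 expressions cited in the solution, and the function names are mine):

def mm1_wq(lam, mu):
    # expected delay in queue for an M/M/1 queue
    rho = lam / mu
    return rho ** 2 / (lam * (1.0 - rho))

def mm2_lq(lam, mu):
    # expected queue length for an M/M/2 queue
    r = lam / mu
    rho = lam / (2.0 * mu)
    pi0 = 1.0 / (1.0 + r + r ** 2 / (2.0 * (1.0 - rho)))
    pi2 = pi0 * r ** 2 / 2.0
    return pi2 * rho / (1.0 - rho) ** 2

print(mm1_wq(1/10, 1/3))                      # self-service: ~1.29 minutes
print(mm1_wq(1/15, 1/7))                      # full-service: ~6.13 minutes
print(mm2_lq(1/6, 1/4.6))                     # combined M/M/2: ~0.13 customers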
4. (a) We approximate the process of potential arrivals as Poisson, and the service times as exponentially distributed.
λi = { 20(1 − i/16), i = 0, 1, 2, . . . , 15; 0, i = 16, 17, . . . }
µi = { 10, i = 1, 2, . . . , 16; 0, i = 17, 18, . . . }
The lost-customer rate in state i is 20(i/16). Therefore, the long-run lost-customer rate is
η = Σ_{i=0}^{16} 20(i/16)πi = (20/16) Σ_{i=0}^{16} iπi ≈ 10 customers/hour
(b) ℓq = Σ_{j=2}^{16} (j − 1)πj ≈ 8.3 customers
20 − η ≈ 10 customers/hour
Therefore, long-run revenue is
µi = { µ, i = 1, 2; 2µ, i = 3, 4, . . . }
with µ = 30 customers/hour.
(a) We derive the general result when the second agent goes on duty when there are k customers in the system.
dj = { (λ/µ)^j, j = 0, 1, . . . , k − 1
     { λ^j/(µ^{k−1}(2µ)^{j−k+1}), j = k, k + 1, . . .
   = { (λ/µ)^j, j = 0, 1, . . . , k − 1
     { (λ/µ)^j (1/2^{j−k+1}), j = k, k + 1, . . .
Σ_{j=0}^∞ dj = Σ_{j=0}^{k−1} (λ/µ)^j + [(λ/µ)^k/2] Σ_{j=0}^∞ (λ/(2µ))^j
             = Σ_{j=0}^{k−1} (λ/µ)^j + (λ/µ)^k/(2(1 − λ/(2µ)))
π0 = 1/Σ_{j=0}^∞ dj = 2/23 when k = 3
(b) Σ_{j=3}^∞ πj = 1 − π0 − π1 − π2
(c) ℓq = π2 + Σ_{j=3}^∞ (j − 2)πj
      = π0 d2 + Σ_{j=3}^∞ (j − 2)π0 dj
      = π0 d2 + π0 Σ_{j=3}^∞ (j − 3)(λ/µ)^j/2^{j−k+1}
      = π0 d2 + π0 [(λ/µ)³/2] Σ_{j=0}^∞ j(λ/(2µ))^j
      = π0 d2 + π0 [(λ/µ)³/2][λ/(2µ)]/(1 − λ/(2µ))²
      = π0 [(λ/µ)² + (λ/µ)³(λ/(2µ))/(2(1 − λ/(2µ))²)]
      = (2/23)[(45/30)² + (45/30)³(45/60)/(2(1 − 45/60)²)]
      = 45/23 ≈ 1.96 customers
η = Σ_{j=3}^∞ πj = 1 − π0 − π1 − π2 = 1 − Σ_{j=0}^{2} (1 − ρ)ρ^j
s πs
5 0.63
6 0.56
7 0.49
8 0.43
9 0.37
10 0.31
11 0.25
12 0.20
13 0.16
14 0.12
15 0.09
16 0.06
17 0.04
18 0.03
19 0.02
20 0.01
10. (a) If we model the arrival of calls as Poisson, service times as exponentially distributed, and no reneging, then we have an M/M/2 queue.
λ = 20 calls/hour; 1/µ = 3 minutes, so µ = 20 calls/hour
(b) To keep up, ρ = λ/(2µ) < 1, or λ < 2µ = 40 calls/hour.
(c) We want the largest λ such that
wq = π2ρ/(λ(1 − ρ)²) ≤ 4/60 hour
By trial-and-error, λ ≈ 30 calls/hour.
(d) We want the largest λ such that
Σ_{j=8}^∞ πj = 1 − Σ_{j=0}^{7} πj ≤ 0.15
By trial-and-error, λ ≈ 31 calls/hour.
(e) Let the reneging rate be β = 1/5 call/minute = 12 calls/hour for customers on hold.
µi = { iµ, i = 1, 2; 2µ + (i − 2)β, i = 3, 4, . . . }
with µ = 20.
(f) ℓq = Σ_{j=3}^∞ (j − 2)πj
Therefore, we need the πj for j ≥ 3.
dj = { (λ/µ)^j/j!, j = 0, 1, 2
     { λ^j/(2µ² Π_{i=3}^{j} (2µ + (i − 2)β)), j = 3, 4, . . .
π0 = 1/Σ_{j=0}^∞ dj ≈ 1/Σ_{j=0}^{20} dj ≈ 1/2.77 ≈ 0.36
since Σ_{j=0}^{n} dj does not change in the second decimal place after n ≥ 20. Therefore,
ℓq ≈ Σ_{j=3}^{20} (j − 2)πj ≈ 0.137 calls on hold
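The calculation in (f) is a small birth-death computation; a Python sketch (function and variable names are mine; n_max = 60 truncates the sums, which is more than enough here):

def reneging_lq(lam=20.0, mu=20.0, beta=12.0, n_max=60):
    d = [1.0, lam / mu, (lam / mu) ** 2 / 2.0]             # d_j for j = 0, 1, 2
    for j in range(3, n_max + 1):
        d.append(d[-1] * lam / (2.0 * mu + (j - 2) * beta))  # service rate 2mu + (j-2)beta
    pi0 = 1.0 / sum(d)                                     # ~0.36; sum(d) ~ 2.77
    return sum((j - 2) * pi0 * d[j] for j in range(3, n_max + 1))

print(round(reneging_lq(), 3))                             # ~0.137 calls on hold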
11. (a) M = {0, 1, 2, . . . , k + m} is the number of users connected or in the wait queue.
λi = λ, i = 0, 1, . . .
µi = { iµ, i = 1, 2, . . . , k; kµ + (i − k)γ, i = k + 1, k + 2, . . . , k + m }
(b) ℓq = Σ_{j=k+1}^{k+m} (j − k)πj
(c) λπk+m (60 minutes/hour)
(d) The quantities in (b) and (c) are certainly relevant. Also wq, the expected time spent in the hold queue.
dj = { [Π_{i=0}^{j−1} (20 − i)τ]/(µ^j j!), j = 1, 2
     { [Π_{i=0}^{j−1} (20 − i)τ]/(3! µ^j 3^{j−3}), j = 3, 4, . . . , 20
   = { (τ/µ)^j Π_{i=0}^{j−1} (20 − i)/j!, j = 1, 2
     { (τ/µ)^j Π_{i=0}^{j−1} (20 − i)/(6 · 3^{j−3}), j = 3, 4, . . . , 20
Σ_{j=0}^{20} dj ≈ 453.388
0.002205615985
0.01102807993
0.02619168983
0.03928753475
0.05565734086
0.07420978784
0.09276223484
0.1082226072
0.1172411578
0.1172411579
π= 0.1074710613
0.08955921783
0.06716941339
0.04477960892
0.02612143853
0.01306071927
0.005441966361
0.001813988787
0.0004534971966
0.00007558286608
0.000006298572176
(a) Σ_{j=3}^{20} πj = 1 − π0 − π1 − π2 ≈ 0.96
(b) Need w.
ℓ = Σ_{j=0}^{20} jπj ≈ 8.22 jobs
λeff = Σ_{j=0}^{20} λjπj = Σ_{j=0}^{20} (20 − j)πj ≈ 9.78
(c) ℓq = Σ_{j=4}^{20} (j − 3)πj ≈ 6.20 jobs waiting
(d) 3π0 + 2π1 + π2 + 0(π3 + π4 + · · · + π20) ≈ 0.055 idle computers
(e) π0 ≈ 0.02 or 2% of the time
(f) (d)/3 ≈ 0.018 or 1.8% of the time
Therefore,
π0 = (Σ_{i=0}^{c} di)^{−1} = 1/Σ_{i=0}^{c} ρ^i
πj = π0 dj = π0 ρ^j, j = 0, 1, . . . , c
Therefore,
π0 = (Σ_{i=0}^{c} di)^{−1} = 1/Σ_{i=0}^{c} (λ/µ)^i/i!
πj = π0 dj = π0 (λ/µ)^j/j!, j = 0, 1, . . . , c
dj = { (λ/µ)^j/(s! s^{j−s}), j = s + 1, . . . , c; 0, j = c + 1, c + 2, . . . }
Therefore,
πj = π0 dj = π0 (λ/µ)^j/j!, j = 0, 1, . . . , s
and
πj = π0 dj = π0 (λ/µ)^j/(s! s^{j−s}) = π0 [(λ/µ)^s/s!](λ/(sµ))^{j−s} = πs ρ^{j−s}, j = s + 1, s + 2, . . . , c
Thus,
Pr{L = j | L ≤ s} = Pr{L = j}/Pr{L ≤ s} = [π0 (λ/µ)^j/j!] / [Σ_{i=0}^{s} π0 (λ/µ)^i/i!]
                  = [(λ/µ)^j/j!] / [Σ_{i=0}^{s} (λ/µ)^i/i!]
Also,
Pr{L = s + j | L ≥ s} = Pr{L = s + j}/Pr{L ≥ s} = πs ρ^j / Σ_{i=s}^{c} πs ρ^{i−s}
                      = ρ^j / Σ_{i=0}^{c−s} ρ^i
16. Λ(t) = ∫_0^t λ(a) da = { t³/3, 0 ≤ t < 10; 2000/3 + (t − 20)³/3, 10 ≤ t < 20 }
ρ = λ/µ = 100
The stationary model predicts congestion to be over 10% greater than it should be. This happens because λ(t) increases sharply toward its peak rate, then declines, while the M/D/∞ model uses a constant rate. Thus, the M(t)/D/∞ is not at its peak rate long enough to achieve so much congestion.
17. Pr{Wq = 0} = Σ_{j=0}^{s−1} πj
And
Pr{Wq > a} = Σ_{j=0}^∞ Pr{Wq > a | L = j} πj = Σ_{j=s}^∞ Pr{Wq > a | L = j} πj
since no waiting occurs if there are fewer than s in the system. But, for j ≥ s,
Pr{Wq > a | L = j} = Σ_{n=0}^{j−s} e^{−sµa}(sµa)^n/n!
which is Pr{T > a}, where T has an Erlang distribution with parameter sµ and j − s + 1 phases. This follows because each of the j − s + 1 customers (including the new arrival) will be at the front of the queue of waiting customers for an exponentially distributed time with parameter sµ. Therefore,
Pr{Wq > a} = Σ_{j=s}^∞ Σ_{n=0}^{j−s} [e^{−sµa}(sµa)^n/n!] πj
Now πj = π0 (λ/µ)^j/(s! s^{j−s}) = [π0 (λ/µ)^s/s!](λ/(sµ))^{j−s} = πs ρ^{j−s}. Therefore,
Pr{Wq > a} = πs Σ_{j=s}^∞ Σ_{n=0}^{j−s} [e^{−sµa}(sµa)^n/n!] ρ^{j−s}
= πs Σ_{j=0}^∞ Σ_{n=0}^{j} [e^{−sµa}(sµa)^n/n!] ρ^j
= πs Σ_{n=0}^∞ Σ_{j=n}^∞ [e^{−sµa}(sµa)^n/n!] ρ^j
= πs e^{−sµa} Σ_{n=0}^∞ [(sµa)^n/n!] ρ^n/(1 − ρ)
= [πs/(1 − ρ)] e^{−sµa} Σ_{n=0}^∞ (sµaρ)^n/n!
= [πs/(1 − ρ)] e^{−sµa} e^{sµaρ}
= πs e^{−(sµ−sµρ)a}/(1 − ρ) = πs e^{−(sµ−λ)a}/(1 − ρ)
Finally,
Pr{Wq > a | Wq > 0} = Pr{Wq > a}/Pr{Wq > 0} = [πs e^{−(sµ−λ)a}/(1 − ρ)] / Σ_{j=s}^∞ πj
But Σ_{j=s}^∞ πj = Σ_{j=s}^∞ πs ρ^{j−s} = πs Σ_{j=0}^∞ ρ^j = πs/(1 − ρ), so
Pr{Wq > a | Wq > 0} = e^{−(sµ−λ)a}
18. Model the system as an M/M/s queue with λ = 1/3 customer/minute, µ = 1/2 customer per minute, and s the number of ATMs. They want Pr{Wq > 5 | Wq > 0} = e^{−(sµ−λ)(5)} to be small.
Since exp(−5/2 + 5/3) ≈ 0.43 and exp(−5 + 5/3) ≈ 0.03, 2 ATMs are adequate.
19. (a) Since εa = 1 for the exponential distribution,
wq = wq(λ, 1, µ, εs, 1) = [(1 + εs)/2] wq(λ, µ, 1) = [(1 + εs)/2] ρ²/(λ(1 − ρ))
But εs = σ²/(1/µ)² = µ²σ². Thus,
wq = [(1 + µ²σ²)/2] ρ²/(λ(1 − ρ)) = (ρ² + ρ²µ²σ²)/(2λ(1 − ρ)) = (ρ² + λ²σ²)/(2λ(1 − ρ))
(b) For the M/M/1,
wq(λ, µ, 1) = ρ²/(λ(1 − ρ))
For the M/D/1,
wq(λ, 1, µ, 0, 1) = ρ²/(2λ(1 − ρ)) = (1/2) wq(λ, µ, 1)
20. No answer provided.
21.

            j = 1   j = 2
a0j           2       1
µ(j)          3       2
s(j)          1       1

R = [ 0   0.2 ]
    [ 0   0   ]

Therefore,

λ(1) = a01 = 2/second
λ(2) = a02 + 0.2 λ(1) = 1.4/second

We approximate it as a Jackson network.
(a) ℓq(1) = ρ1² / (1 − ρ1) = (2/3)² / (1 − 2/3) = 4/3 messages

    wq(1) = ℓq(1) / λ(1) = 2/3 second

(b) ℓq(2) = ρ2² / (1 − ρ2) = (1.4/2)² / (1 − 1.4/2) ≈ 1.63 messages

    wq(2) = ℓq(2) / λ(2) ≈ 1.2 seconds
(c) (12 K/message) (1.63 messages) = 19.56K
(d) Let h be the inflation factor, so that

λ(1) = h a01 = 2h
λ(2) = h a02 + 0.2 λ(1) = h + 0.2(2h) = 1.4h

Then the total expected time in the network is

w(1) + w(2) = (wq(1) + 1/µ(1)) + (wq(2) + 1/µ(2))
            = ρ1² / (λ(1)(1 − ρ1)) + 1/3 + ρ2² / (λ(2)(1 − ρ2)) + 1/2
            = (2h/3)² / (2h(1 − 2h/3)) + 1/3 + (1.4h/2)² / (1.4h(1 − 1.4h/2)) + 1/2

A plot of this function shows that h = 1.21 (a 21% increase) is the most that
can be allowed before the total expected time exceeds 5 seconds.
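Rather than reading h off a plot, it can be located by bisection (a sketch; the bracket keeps ρ2 = 0.7h < 1):

    def total_time(h):
        lam1, lam2 = 2*h, 1.4*h
        rho1, rho2 = lam1/3, lam2/2
        return (rho1**2 / (lam1*(1 - rho1)) + 1/3
                + rho2**2 / (lam2*(1 - rho2)) + 1/2)

    lo, hi = 1.0, 1.42
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total_time(mid) < 5 else (lo, mid)
    print(round(lo, 2))                       # 1.21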
22. Substituting the proposed product-form solution into (8.24), (8.25) and
(8.26) verifies that each is satisfied. Finally, the probabilities sum to 1:

Σ_{i=0}^{∞} Σ_{j=0}^{∞} ρ1^i (1 − ρ1) ρ2^j (1 − ρ2)

    = (1 − ρ1)(1 − ρ2) Σ_{i=0}^{∞} ρ1^i Σ_{j=0}^{∞} ρ2^j

    = (1 − ρ1)(1 − ρ2) (Σ_{i=0}^{∞} ρ1^i) (1 / (1 − ρ2))

    = (1 − ρ1)(1 − ρ2) (1 / (1 − ρ1)) (1 / (1 − ρ2))

    = 1
23. Let B be a Bernoulli random variable that takes the value 1 with probability r. The
event {B = 1} indicates that a product fails inspection.
The only change occurs in system event e3 .
e3 () (complete inspection)
ρi = λ(i) / µ(i) = (5/(1 − r)) / µ(i) = 5 / (µ(i)(1 − r)) < 1 for i = 1, 2

For i = 1,

ρ1 = 5 / (6(1 − r)) < 1

so r < 1/6. For i = 2,

ρ2 = 5 / (8(1 − r)) < 1

so r < 3/8. Therefore we require r < 1/6.
26.

bjk = rjk εd(j) + (1 − rjk)
    = rjk εa(j) + (1 − rjk)
    = rjk Σ_{i=0}^{m} (aij / λ(j)) bij + (1 − rjk)
    = rjk Σ_{i=1}^{m} (aij / λ(j)) bij + rjk (a0j / λ(j)) b0j + (1 − rjk)
    = rjk Σ_{i=1}^{m} (rij λ(i) / λ(j)) bij + djk
    = Σ_{i=1}^{m} rjk cij bij + djk

where cij = rij λ(i) / λ(j) and djk = rjk (a0j / λ(j)) b0j + (1 − rjk).
27. Approximate the system as a Jackson network of 2 queues with the following parame-
ters:
i   a0i   µ(i)
1   20    30
2    0    30

R = [ 0   0.9 ]
    [ 0   0   ]

Therefore,

λ(1) = a01 = 20
λ(2) = r12 λ(1) = 18

and each station is an M/M/1 queue.
(a) For each station individually, we want the minimum c such that
Σ_{j=0}^{c} πj ≥ 0.95. For the M/M/1, πj = (1 − ρ)ρ^j, so

Σ_{j=0}^{c} (1 − ρ)ρ^j = (1 − ρ) (1 − ρ^{c+1}) / (1 − ρ) = 1 − ρ^{c+1} ≥ 0.95

Therefore,

ρ^{c+1} ≤ 0.05

c + 1 ≥ ln(0.05) / ln(ρ)

c ≥ ln(0.05) / ln(ρ) − 1

(dividing by ln(ρ) < 0 reverses the inequality).
i   ρi = λ(i)/µ(i)   c
1       2/3          7
2       3/5          5
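In code (a sketch):

    import math

    def min_c(rho, p=0.95):
        return math.ceil(math.log(1 - p) / math.log(rho) - 1)

    print(min_c(2/3), min_c(3/5))             # 7 5
    print(min_c(19/30))                       # 6, used in part (d) below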
(b) For station 2, the expected number of jobs is

ℓ(2) = ρ2 / (1 − ρ2) = (3/5) / (2/5) = 3/2 jobs
(d) We now have a network of 3 queues, with queue 3 being the rework station for
station 1.
R = [ 0   0.9   0.1 ]
    [ 0   0     0   ]
    [ 0   0.5   0   ]

λ(1) = 20
λ(2) = 0.9 λ(1) + 0.5 λ(3)
λ(3) = 0.1 λ(1)

Therefore,

λ(1) = 20
λ(2) = 18 + 1 = 19
λ(3) = 2

For station 1 nothing changes. For station 2,

ρ2 = 19/30 ≈ 0.63
c = 6

ℓ(2) = λ(2) (ρ2² / (λ(2)(1 − ρ2)) + 1/µ(2)) = 19/11 ≈ 1.7 jobs
(e) This change has no impact on the results from part (d), but would change the
performance of the rework stations.
28.

R = [ 0   0.79   0.01   0.20 ]
    [ 0   0.17   0.63   0.20 ]
    [ 0   0.16   0.40   0.44 ]
    [ 0   0      0      0    ]
A1 = (1000, 0, 0, 0)′   or   A2 = (1500, 0, 0, 0)′
Λ = [(I − R)′]^{−1} A, where

[(I − R)′]^{−1} = [ 1.0           0            0             0   ]
                  [ 1.197381672   1.510574018  0.4028197382  0   ]
                  [ 1.273917422   1.586102719  2.089627392   0   ]
                  [ 1.0           1.0          1.0           1.0 ]

Therefore,

Λ = (1000.0, 1197.381672, 1273.917422, 1000.0)′   when A = A1

or

Λ = (1500.0, 1796.072508, 1910.876133, 1500.0)′   when A = A2
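The same numbers fall out of a linear solve (a sketch using numpy):

    import numpy as np

    R = np.array([[0, 0.79, 0.01, 0.20],
                  [0, 0.17, 0.63, 0.20],
                  [0, 0.16, 0.40, 0.44],
                  [0, 0,    0,    0   ]])
    for A in ([1000, 0, 0, 0], [1500, 0, 0, 0]):
        print(np.linalg.solve((np.eye(4) - R).T, A).round(1))
    # [1000. 1197.4 1273.9 1000.]  and  [1500. 1796.1 1910.9 1500.]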
29. We approximate the job shop as a Jackson network with a single job type.
queue i name
1 casting
2 planer
3 lathe
4 shaper
5 drill
R = [ 0   0.46   0      0.54   0 ]
    [ 0   0      0.46   0      0 ]
    [ 0   0      0      0      0 ]
    [ 0   0      0      0      1 ]
    [ 0   1      0      0      0 ]
We obtain R by noticing that when a job departs a machine group that serves both
job types, the probability it is a type 1 job is (historically) 460/(460+540) = 0.46.
A = (1/11, 0, 0, 0, 0)′
because all jobs arrive initially to the casting group, and the sample mean
of the interarrival times was 11 minutes.
λ(1) = 1/11
λ(4) = (0.54)λ(1)
λ(5) = λ(4) = (0.54)λ(1)
λ(2) = (0.46)λ(1) + λ(5) = λ(1)
λ(3) = (0.46)λ(2) = (0.46)λ(1)
1/µ(1) = (0.46)(125) + (0.54)(235) ≈ 184.4
1/µ(2) = (0.46)(35) + (0.54)(30) ≈ 32.3
1/µ(3) = 20
1/µ(4) = 250
1/µ(5) = 50
i si ρi
1 19 0.88
2 4 0.75
3 3 0.27
4 16 0.77
5 5 0.50
Clearly the longest delay occurs at the casting units. This appears to be the
place to add capacity.

The expected flow times for each job type are approximated as

type 1: 41.1 + 125 + 14.7 + 35 + 0.5 + 20 = 236.3 minutes
type 2: 41.1 + 235 + 15.8 + 250 + 2.4 + 50 + 14.7 + 30 = 639 minutes
The expected WIP is

ℓ = Σ_{i=1}^{5} ℓ(i) = Σ_{i=1}^{5} λ(i) w(i) = Σ_{i=1}^{5} λ(i) (wq(i) + 1/µ(i)) ≈ 39 jobs
εa = (23.1) / (11.0)² ≈ 0.19 < 1

We might also expect εs(i) < 1. These adjustments would reduce ℓq, wq, flow
times and WIP.
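For reference, the station delays quoted above come from the M/M/s (Erlang-C) formula. A sketch that approximately reproduces them:

    import math

    def wq_mms(lam, mu, s):                   # Erlang-C expected delay
        a, rho = lam / mu, lam / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        C = p0 * a**s / (math.factorial(s) * (1 - rho))   # Pr{delay > 0}
        return C / (s * mu - lam)

    lam = {1: 1/11, 2: 1/11, 3: 0.46/11, 4: 0.54/11, 5: 0.54/11}
    mean_svc = {1: 184.4, 2: 32.3, 3: 20, 4: 250, 5: 50}
    servers = {1: 19, 2: 4, 3: 3, 4: 16, 5: 5}
    for i in range(1, 6):
        print(i, round(wq_mms(lam[i], 1/mean_svc[i], servers[i]), 1))
    # roughly 41, 15, 0.5, 16 and 2.4 minutes; casting dominates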
30. We have a network of two queues, one for the packer and one for the forklifts, denoted
i = 1 and i = 2 respectively. Define a customer to be 1 case = 75 cans. Therefore,
1/λ(1) = 150 seconds = 2.5 minutes.
Clearly, λ(2) = λ(1), 1/µ(1) = 2 minutes, and 1/µ(2) = 3 + 1 = 4 minutes.

If s2 is the number of forklifts, then just to keep up we must have

ρ2 = λ(2) / (s2 µ(2)) = 4 / (s2 (2.5)) < 1

so s2 ≥ 2.
εa(1) = 0

εs(1) = ((3 − 1)²/12) / 2² = 1/12

εd(1) = εa(1)   (from (A5))

εa(2) = εd(1) = 0   (since all departures from the packer go to the forklifts)

εs(2) = 1² / (3 + 1) = 1/4
and

ℓq(2) = λ(2) wq(2) = wq(2) / 2.5

s2   wq(2)   ℓq(2)   w(2)
2    0.89    0.35    4.89
3    0.10    0.04    4.10
It appears that there will be very little queueing even with the minimum of 2 forklifts.
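A sketch of the computation behind the table; the delay uses the adjustment wq ≈ ((εa + εs)/2) wq(M/M/s), which with εa = 0 and εs = 1/4 reproduces the values above.

    import math

    def wq_mms(lam, mu, s):                   # Erlang-C expected delay
        a, rho = lam / mu, lam / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        return p0 * a**s / (math.factorial(s) * (1 - rho)) / (s * mu - lam)

    lam, mu = 1/2.5, 1/4                      # cases/minute, forklift rate
    for s2 in (2, 3):
        wq = (0 + 0.25) / 2 * wq_mms(lam, mu, s2)
        print(s2, round(wq, 2), round(lam * wq, 2), round(wq + 4, 2))
    # matches the table: wq of 0.89 and 0.10 minutes, etc.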
31. We first approximate the drive-up window as an M/M/s/4 queue with λ1 = 1/2
customer/minute and µ1 = 1/1.8 customer/minute. (This approximation is rough
because the true service-time distribution is more nearly normal, and εs < 1.)
s π4
1 0.16
2 0.03
π4 is the probability that an arriving car will find the queue full and thus have to park.
Adding a window reduces this dramatically.
The rate at which customers are turned away from the drive-up window is λ1 π4 =
(1/2)π4 . Therefore, the overall arrival rate into the bank is λ2 = 1+(1/2) π4 customers
per minute. This process will not be a Poisson process, but we approximate it as one.
As a first cut we model the tellers inside the bank as an M/M/s2 queue with µ2 = 1/1.4
customer/minute.
(s1 , s2 ) λ2 wq (minutes)
(1, 2) 1.08 1.9
(2, 2) 1.02 1.5
(1, 3) 1.08 0.2
The bank can now decide which improvement in performance is more valuable.
Comment: The approximation for the inside tellers can be improved by using the
GI/G/s adjustment. Clearly, εs = 1.0/(1.4)2 ≈ 0.5. The exact value of εa can also be
computed, but requires tools not used in this book, so set εa = 1 for a Poisson process.
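A sketch combining the two pieces (M/M/s1/4 for the drive-up window, then Erlang C for the inside tellers); small differences from the table come from rounding λ2.

    import math

    def pi_full(lam, mu, s, c=4):             # Pr{full} for M/M/s/c
        a = lam / mu
        d = [a**j / math.factorial(j) if j <= s
             else a**j / (math.factorial(s) * s**(j - s))
             for j in range(c + 1)]
        return d[c] / sum(d)

    def wq_mms(lam, mu, s):                    # Erlang-C expected delay
        a, rho = lam / mu, lam / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        return p0 * a**s / (math.factorial(s) * (1 - rho)) / (s * mu - lam)

    lam1, mu1, mu2 = 0.5, 1/1.8, 1/1.4
    for s1, s2 in ((1, 2), (2, 2), (1, 3)):
        lam2 = 1 + lam1 * pi_full(lam1, mu1, s1)   # turned-away cars go inside
        print((s1, s2), round(lam2, 2), round(wq_mms(lam2, mu2, s2), 1))
    # (1, 2) 1.08 1.9   (2, 2) 1.02 1.4   (1, 3) 1.08 0.2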
Chapter 9

Topics in Simulation of Stochastic Processes

8. To obtain a rough-cut model we will (a) use a 0.5 probability that an item
joins the queue of each inspector, rather than selecting the shortest queue,
and (b) treat all processing times as exponentially distributed (later we
will refine (b)).
• Current System
R = [ 0     0     1 ]
    [ 0     0     1 ]
    [ 0.1   0.1   0 ]

A = (0.125, 0.125, 0)′

Λ = (λ(1), λ(2), λ(3))′ = [(I − R)′]^{−1} A = (0.156, 0.156, 0.313)′
ρ(j) = λ(j) / µ(j) ≈ 0.93 for j = 1, 2
• Proposed System
The repair technicians now become a single station with λ = λ(1) + λ(2) = 0.313
and µ = 0.167. The utilization is ρ = λ/(2µ) ≈ 0.93, so it is unchanged.

Treating the repair station as an M/M/2 queue, the expected flow time is

(wq + 1/µ + wq(3) + 1/µ(3)) (1.25) ≈ (43 + 6 + 47 + 3)(1.25) ≈ 124 minutes
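A sketch of this rough cut in Python (the factor 1.25 = 1/(1 − 0.2) is the expected number of repair-inspection passes):

    import math
    import numpy as np

    R = np.array([[0, 0, 1], [0, 0, 1], [0.1, 0.1, 0]])
    lam = np.linalg.solve((np.eye(3) - R).T, [0.125, 0.125, 0])
    print(lam.round(4))                        # [0.1562 0.1562 0.3125]

    def wq_mms(l, mu, s):
        a, rho = l / mu, l / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        return p0 * a**s / (math.factorial(s) * (1 - rho)) / (s * mu - l)

    wq_rep = wq_mms(lam[0] + lam[1], 1/6, 2)   # pooled technicians, M/M/2
    wq_ins = wq_mms(lam[2], 1/3, 1)            # inspector, M/M/1
    print(round((wq_rep + 6 + wq_ins + 3) * 1.25))
    # about 122; the 124 above reflects rounding the delays to 43 and 47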
• Refinement
We can refine the approximation by using the ideas in Section 8.10. Notice that
j   εs(j)
1   0.0625
2   0.0625
3   0.0370
9. To obtain a rough-cut model, first note that the registration function is not really
relevant to the issue of additional bed versus additional doctor, so we will ignore it.
We also (a) ignore the interaction of beds and doctors, (b) treat treatment time as
exponentially distributed, and (c) develop a composite patient and ignore patient type
and priority.
• Additional doctor.
With an additional doctor there is a one-to-one correspondence between bed and doc-
tor.
λ = 1/20 patient/minute
1/µ = 72(0.15) + 25(0.85) ≈ 32 minutes
Using an M/M/3 model
ℓq = 0.3 patients waiting for a doctor and a bed
wq = 6.3 minutes wait for a doctor and a bed
• Additional bed.
In terms of time to wait to see a doctor, the beds are not a constraint. Using an
M/M/2 model
ℓq = 2.8 patients waiting to see a doctor
wq = 57 minutes to see a doctor
The expected number of patients in the system is ℓ = ℓq + λ/µ = ℓq + 1.6 in
either case.
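A sketch of the two alternatives:

    import math

    def erlang_c(lam, mu, s):                  # Pr{delay > 0}
        a, rho = lam / mu, lam / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        return p0 * a**s / (math.factorial(s) * (1 - rho))

    lam, mu = 1/20, 1/32
    for s in (3, 2):                           # added doctor vs. added bed
        wq = erlang_c(lam, mu, s) / (s * mu - lam)
        print(s, round(lam * wq, 1), round(wq, 1))
    # s=3: lq 0.3, wq 6.3   s=2: lq 2.8, wq 56.9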
10. To obtain a rough-cut model we will replace all random variables by their expected
values and treat it as deterministic.
Suppose the company is open 8 hours per day. Then the period of interest is 90(8) =
720 hours.
The number of lost sales during 1 day is (8 hours) (1 customer/hour) (6 bags/customer)
= 48 bags.
If the company orders s bags, they will run out in
(s bags) / (6 bags/hour) = s/6 hours
11. To obtain a rough-cut model we will (a) ignore the interaction of loading bays and
forklifts, and (b) treat all service times as exponentially distributed (later we refine
(b)).
• Adding a bay
Again start by treating the bays as unlimited, and the forklifts as an M/M/2 with
λ = 6.5
1/µ = 1/4
wq = 0.49 hours
ℓq = 3.2 trucks
ρ = λ/(2µ) = 0.81 utilization
Next treat the bays as servers in an M/M/5 model with
λ = 6.5
1/µ = 1/2
wq = 0.09 hours = 5.4 minutes
ℓq = 0.6 trucks in the lot
            εs
forklift    0.01
bay         (1.5² + 3(5)²) / (15 + 15)² ≈ 0.09
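A sketch of the two sub-models:

    import math

    def wq_mms(lam, mu, s):                    # Erlang-C expected delay
        a, rho = lam / mu, lam / (s * mu)
        p0 = 1 / (sum(a**n / math.factorial(n) for n in range(s))
                  + a**s / (math.factorial(s) * (1 - rho)))
        return p0 * a**s / (math.factorial(s) * (1 - rho)) / (s * mu - lam)

    wq_f = wq_mms(6.5, 4.0, 2)                 # forklifts, 1/mu = 1/4 hour
    wq_b = wq_mms(6.5, 2.0, 5)                 # bays, 1/mu = 1/2 hour
    print(round(wq_f, 2), round(6.5 * wq_f, 1))    # 0.49 hours, 3.2 trucks
    print(round(wq_b, 2), round(6.5 * wq_b, 1))    # 0.09 hours, 0.6 trucks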
p(k+1) = p P^{k+1} = (p P^k) P = π P = π
15.

σXZ = (1/(k − 1)) [ Σ_{i=1}^{k} Xi Zi − (Σ_{j=1}^{k} Xj)(Σ_{h=1}^{k} Zh) / k ]

Also,

Σ_{i=1}^{k} ((Xi − X̄) − (Zi − Z̄))²
    = Σ_{i=1}^{k} [ (Xi − X̄)² − 2(Xi − X̄)(Zi − Z̄) + (Zi − Z̄)² ]
    = Σ_{i=1}^{k} (Xi − X̄)² + Σ_{i=1}^{k} (Zi − Z̄)² − 2 Σ_{i=1}^{k} (Xi − X̄)(Zi − Z̄)
    = (k − 1)(σX² + σZ² − 2 σXZ)
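A quick numerical confirmation of the identity (the data below are synthetic):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(10, 2, size=50)
    Z = 0.5 * X + rng.normal(0, 1, size=50)
    lhs = np.var(X - Z, ddof=1)                        # sample var of X - Z
    rhs = (np.var(X, ddof=1) + np.var(Z, ddof=1)
           - 2 * np.cov(X, Z, ddof=1)[0, 1])
    print(np.isclose(lhs, rhs))                        # True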
E[θ | S0 = 1]:

Pr{S = 3} = π3 ≈ 0.238
E[S] = π1 + 2π2 + 3π3 ≈ 1.809
Var[S] = Σ_{j=1}^{3} (j − 1.809)² πj ≈ 0.630

We require

|1.809 − E[Sn | S0 = 1]| / 1.809 ≤ 0.01

Therefore, |1.809 − E[Sn | S0 = 1]| ≤ 0.018, where

E[Sn | S0 = 1] = Σ_{j=1}^{3} j p1j^(n)

Similarly we require

|0.630 − Var[Sn | S0 = 1]| / 0.630 ≤ 0.01

that is, |0.630 − Var[Sn | S0 = 1]| ≤ 0.0063.
n     p13^(n)   E[Sn | S0 = 1]   Var[Sn | S0 = 1]
0     0         1.0              0
1     0.2       1.5              0.65
2     0.25      1.67             0.721
3     0.258     1.735            0.711
4     0.256     1.764            0.691
5     0.251     1.780            0.674
6     0.248     1.789            0.662
7     0.245     1.796            0.652
8     0.243     1.800            0.646
9     0.241     1.803            0.641
10    0.240     1.805            0.638
11    0.240     1.806            0.636

The mean criterion is satisfied from n = 7 on; the variance criterion is
first satisfied at n = 11, so n = 11 steps are required.