CH02
Therefore,
\[
f_{W,Z}(\alpha,\beta) = \frac{f_{X,Y}(u,v)}{|\det J|} =
\begin{cases}
\dfrac{\exp(-\lambda(2\sqrt{\beta-\alpha}-\alpha))}{\sqrt{\beta-\alpha}} & \beta > \alpha + (\max\{0,\alpha\})^2 \\
0 & \text{else.}
\end{cases}
\]
2.1 Limits and infinite sums for deterministic sequences (a) Before beginning the proof we observe that |cos(θ)| ≤ 1, so |θ(1 + cos(θ))| ≤ 2|θ|. Now, for the proof. Given an arbitrary ε > 0, let δ = ε/2. For any θ with |θ − 0| ≤ δ, the following holds: |θ(1 + cos(θ)) − 0| ≤ 2|θ| ≤ 2δ = ε. Since ε was arbitrary, the convergence is proved.
(b) Before beginning the proof we observe that if 0 < θ < π/2, then cos(θ) ≥ 0 and (1 + cos(θ))/θ ≥ 1/θ. Now, for the proof. Given an arbitrary positive number K, let δ = min{π/2, 1/K}. For any θ with 0 < θ < δ, the following holds: (1 + cos(θ))/θ ≥ 1/θ ≥ 1/δ ≥ K. Since K was arbitrary, the convergence is proved.
(c) The sum is by definition equal to lim_{N→∞} s_N, where s_N = Σ_{n=1}^N (1 + √n)/(1 + n²). The sequence (s_N) is increasing in N. Note that the n = 1 term of the sum is 1, and for any n ≥ 1 the nth term of the sum can be bounded as follows:
\[
\frac{1+\sqrt{n}}{1+n^2} \le \frac{2\sqrt{n}}{n^2} = 2n^{-3/2}.
\]
Therefore, comparing the partial sum with an integral yields
\[
s_N \le 1 + \sum_{n=2}^{N} 2n^{-3/2} \le 1 + \int_1^N 2x^{-3/2}\,dx = 5 - 4N^{-1/2} \le 5.
\]
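As a quick numerical sanity check (not part of the proof), the partial sums can be computed directly; they increase with N and stay below the bound of 5:

```python
# Numerical check of the bound s_N <= 5 for s_N = sum_{n=1}^N (1+sqrt(n))/(1+n^2).
# Not part of the proof; just a sanity check of the comparison argument.
import math

def partial_sum(N):
    return sum((1 + math.sqrt(n)) / (1 + n * n) for n in range(1, N + 1))

values = [partial_sum(N) for N in (10, 100, 1000, 10000)]
```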
8 Solutions to Odd Numbered Problems Random Processes for Engineers
2.3 The reciprocal of the limit is the limit of the reciprocal Let ε > 0. Let ε′ = min{|x_∞|/2, εx_∞²/2}. By the hypothesis, there exists n_o so large that for all n ≥ n_o, |x_n − x_∞| ≤ ε′. This condition implies that |x_n| ≥ |x_∞|/2, because of the choice of ε′. Therefore, for all n ≥ n_o,
\[
\left|\frac{1}{x_n} - \frac{1}{x_\infty}\right| = \frac{|x_n - x_\infty|}{|x_n||x_\infty|} \le \frac{2\varepsilon'}{x_\infty^2} \le \varepsilon,
\]
which proves the claim.
(b) Let ε = 1/3 and let x_n = (2/3)^{1/n} for n ≥ 1. Note that x_n ∈ [0, 1) and f_n(x_n) = 2/3. Thus, there is no positive integer n such that |f_n(x) − 0| ≤ ε for all x ∈ [0, 1). So it is impossible to select n with the property required for uniform convergence. Therefore f_n does not converge uniformly to zero.
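The failure can also be seen numerically. The excerpt does not restate f_n, but the choice x_n = (2/3)^{1/n} with f_n(x_n) = 2/3 is consistent with f_n(x) = x^n, which is assumed in this sketch:

```python
# Sketch of the non-uniformity, assuming f_n(x) = x^n (an assumption inferred
# from f_n(x_n) = 2/3 above). Each f_n takes the value 2/3 somewhere in [0, 1),
# so the sup over [0, 1) of |f_n - 0| never falls below 1/3.
witness_values = []
for n in (1, 10, 100, 1000):
    x_n = (2 / 3) ** (1 / n)   # point of [0, 1) where f_n is not small
    witness_values.append(x_n ** n)
```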
(c) Let c < sup_D f. Then there is an x ∈ D so that c ≤ f(x). Therefore, c ≤ f(x) − g(x) + g(x) ≤ sup_D |f − g| + sup_D g. Thus, c < sup_D f implies c ≤ sup_D |f − g| + sup_D g. Equivalently, sup_D f ≤ sup_D |f − g| + sup_D g, or sup_D f − sup_D g ≤ sup_D |f − g|. Exchanging the roles of f and g yields sup_D g − sup_D f ≤ sup_D |f − g|. Combining yields the desired inequality, |sup_D f − sup_D g| ≤ sup_D |f − g|. As an application, suppose f_n → f uniformly on D. Then given any ε > 0, there exists n_ε so large that sup_D |f_n − f| ≤ ε whenever n ≥ n_ε. But then by the inequality proved, |sup_D f_n − sup_D f| ≤ sup_D |f_n − f| ≤ ε, whenever n ≥ n_ε. Thus, by definition, sup_D f_n → sup_D f as n → ∞.
(b)
\begin{align*}
S_n &= \sum_{k=0}^{n} A_k B_k - \sum_{k=1}^{n} A_k B_{k-1} \qquad \text{since } B_{-1} = 0 \\
&= \sum_{k=0}^{n} A_k B_k - \sum_{k=0}^{n-1} A_{k+1} B_k \\
&= \left(\sum_{k=0}^{n} (A_k - A_{k+1}) B_k\right) + A_{n+1} B_n \\
&= \left(\sum_{k=0}^{n} a_k B_k\right) + A_{n+1} B_n.
\end{align*}
(c) Since |a_k B_k| ≤ L a_k for all k, the sequence of sums Σ_{k=0}^{n} a_k B_k is convergent by the result of part (a). Also, |A_{n+1} B_n| ≤ L A_{n+1} → 0 as n → ∞. Thus, by part (b), S_n has a finite limit.
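The summation-by-parts identity in part (b) can be checked numerically on arbitrary test sequences (no monotonicity is needed for the identity itself):

```python
# Numerical check of the identity, with B_{-1} = 0:
#   sum_{k=0}^n A_k (B_k - B_{k-1}) = sum_{k=0}^n (A_k - A_{k+1}) B_k + A_{n+1} B_n.
import random

random.seed(0)
n = 20
A = [random.uniform(0, 1) for _ in range(n + 2)]    # A_0, ..., A_{n+1}
B = [random.uniform(-1, 1) for _ in range(n + 1)]   # B_0, ..., B_n

lhs = sum(A[k] * (B[k] - (B[k - 1] if k >= 1 else 0.0)) for k in range(n + 1))
rhs = sum((A[k] - A[k + 1]) * B[k] for k in range(n + 1)) + A[n + 1] * B[n]
```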
2.9 Convergence of a random sequence (a) The sequence X_n(ω) is monotone nondecreasing in n for each ω. Also, by induction on n, X_n(ω) ≤ 1 for all n and ω. Since bounded monotone sequences have finite limits, lim_{n→∞} X_n exists in the a.s. sense and the limit is less than or equal to one with probability one.
(b) Since a.s. convergence of bounded sequences implies m.s. convergence, lim_{n→∞} X_n also exists in the m.s. sense.
(c) Since (X_n) converges a.s., it also converges in probability to the same random variable, so Z = lim_{n→∞} X_n a.s. It can be shown that P{Z = 1} = 1. Here is one of several proofs. Let 0 < ε < 1. Let a_0 = 0 and a_k = (a_{k−1} + 1 − ε)/2 for k ≥ 1. By induction, a_k = (1 − ε)(1 − 2^{−k}). Consider the sequence of events {U_i ≥ 1 − ε} for i ≥ 1. These events are independent and each has probability ε. So with probability one, at least k of these events happen, for any k ≥ 1. If at least k of these events happen, then Z ≥ a_k. So, P{(1 − ε)(1 − 2^{−k}) ≤ Z ≤ 1} = 1. Since ε can be arbitrarily close to zero and k can be arbitrarily large, it follows that P{Z = 1} = 1.
ANOTHER APPROACH is to calculate that E[X_n | X_{n−1} = v] = v + (1 − v)²/2. Thus, using Jensen's inequality for the second step,
\[
E[X_n] = E[X_{n-1}] + \frac{E[(1 - X_{n-1})^2]}{2} \ge E[X_{n-1}] + \frac{(1 - E[X_{n-1}])^2}{2}.
\]
Since E[X_n] → E[Z], it follows that E[Z] ≥ E[Z] + (1 − E[Z])²/2. So E[Z] = 1. In view of the fact P{Z ≤ 1} = 1, it follows that P{Z = 1} = 1.
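A Monte Carlo sketch supports P{Z = 1} = 1. The recursion for X_n is not restated in this excerpt; the code assumes X_n = X_{n−1} + U_n(1 − X_{n−1})² with U_n i.i.d. uniform on [0, 1], which is consistent with the conditional mean E[X_n | X_{n−1} = v] = v + (1 − v)²/2 computed above:

```python
# Monte Carlo sketch of X_n -> 1. Assumed recursion (not stated in this excerpt):
# X_n = X_{n-1} + U_n * (1 - X_{n-1})**2, U_n i.i.d. Uniform[0, 1], X_0 = 0,
# consistent with E[X_n | X_{n-1} = v] = v + (1 - v)^2 / 2.
import random

random.seed(1)

def run_chain(n_steps):
    x = 0.0
    for _ in range(n_steps):
        x += random.random() * (1 - x) ** 2
    return x

finals = [run_chain(10_000) for _ in range(50)]
```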
2.11 Convergence of some sequences of random variables (a) For each fixed ω, V(ω)/n → 0, so X_n(ω) → 1. Thus, X_n → 1 in the a.s. sense, and hence also in the p. and d. senses. Since the random variables X_n are uniformly bounded (specifically, |X_n| ≤ 1 for all n), the convergence in the p. sense implies convergence in the m.s. sense as well. So X_n → 1 in all four senses.
(b) To begin we note that P{V ≥ 0} = 1 with P{V > 1} = e^{−3} > 0. For any ω such that V(ω) < 1, Y_n(ω) → 0, and for any ω such that V(ω) > 1, Y_n(ω) → +∞, so (Y_n) does not converge in the a.s. sense to a finite random variable.
Let us show (Y_n) does not converge in the d. sense. For any c > 0, lim_{n→∞} F_n(c) = lim_{n→∞} P{Y_n ≤ c} = P{V < 1} = 1 − e^{−3}. The limit exists, but the limit function F satisfies F(c) = 1 − e^{−3} for all c > 0, so F(c) does not converge to one as c → ∞, and the limit is not a valid CDF. Thus, (Y_n) does not converge in the d. sense (to a finite limit random variable), and hence does not converge in any of the four senses to a finite limit random variable.
(c) For each ω fixed, Z_n(ω) → e^{V(ω)}. So Z_n → e^V in the a.s. sense, and hence also in the p. and d. senses. Using the inequality 1 + u ≤ e^u shows that Z_n ≤ e^V for all n, so that |Z_n| ≤ e^V for all n. Note that
\[
E[(e^V)^2] = E[e^{2V}] = \int_0^\infty e^{2u}\, 3e^{-3u}\,du = 3 < \infty.
\]
Therefore, the sequence (Z_n) is dominated by a single random variable with finite second moment (namely, e^V), so the convergence of (Z_n) in the p. sense to e^V implies that (Z_n) converges to e^V in the m.s. sense as well. So Z_n → e^V in all four senses.
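A Monte Carlo sketch of the m.s. convergence. The excerpt does not restate Z_n or the distribution of V; this assumes Z_n = (1 + V/n)^n and V exponentially distributed with rate 3, both inferred from the limits computed above (Z_n → e^V, Z_n ≤ e^V, and E[e^{2V}] = 3):

```python
# Monte Carlo estimate of E[(Z_n - e^V)^2] for Z_n = (1 + V/n)^n, V ~ Exp(3).
# Both the form of Z_n and the rate 3 are assumptions inferred from the text.
import math
import random

random.seed(2)
samples = [random.expovariate(3.0) for _ in range(20_000)]

def ms_error(n):
    return sum(((1 + v / n) ** n - math.exp(v)) ** 2 for v in samples) / len(samples)

errors = [ms_error(n) for n in (1, 10, 100, 1000)]
```

Since (1 + v/n)^n increases to e^v for each v > 0, the estimated mean-square error decreases monotonically in n.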
2.13 On the maximum of a random walk with negative drift (a) By the strong law of large numbers, P{S_n/n → −1} = 1. Therefore, with probability one, S_n/n < 0 for all sufficiently large n. That is, with probability one, S_n > 0 only finitely many times. The random variable Z, with probability one, is thus the maximum of only finitely many nonnegative numbers. So Z is finite with probability one.
(b) Suppose P{X_1 = c − 1} = P{X_1 = −c − 1} = 0.5 for a constant c > 0. Then X_1 has mean −1 as required. Following the hint, for c ≥ 1, we have E[Z] ≥ E[max{0, X_1}] = (c − 1)/2. Observe that E[Z] can be made arbitrarily large by taking c arbitrarily large. So the answer to the question is no. (Note: More can be said about E[Z] if the variance of X_1 is known. A celebrated bound of J.F.C. Kingman is that E[Z] ≤ Var(X_1)/(−2E[X_1]).)
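A Monte Carlo sketch for part (b). The walk drifts to −∞, so truncating at a large horizon approximates Z; the horizon and sample counts below are arbitrary practical choices, not part of the problem:

```python
# Monte Carlo estimate of E[Z] for the two-point step distribution of part (b):
# X = c - 1 or -c - 1, each with probability 1/2 (mean -1, variance c^2).
# The truncation horizon is a practical choice, not part of the problem.
import random

random.seed(3)

def sample_Z(c, horizon=2000):
    s = best = 0.0
    for _ in range(horizon):
        s += (c - 1) if random.random() < 0.5 else (-c - 1)
        best = max(best, s)
    return best

c = 3.0
n_runs = 2000
estimate = sum(sample_Z(c) for _ in range(n_runs)) / n_runs
lower = (c - 1) / 2          # E[max{0, X_1}]
kingman = c * c / 2.0        # Var(X_1) / (-2 E[X_1]), with Var = c^2 and mean -1
```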
2.15 Convergence in distribution to a nonrandom limit Suppose P{X = c} = 1 and lim_{n→∞} X_n = X in the d. sense. Let ε > 0. It suffices to prove that P{|X_n − X| ≤ ε} → 1 as n → ∞. Note that P{|X_n − X| ≤ ε} ≥ P{c − ε < X_n ≤ c + ε} = F_n(c + ε) − F_n(c − ε). Since c − ε is a continuity point of F_X and F_X(c − ε) = 0, it follows that F_n(c − ε) → 0. Similarly, F_n(c + ε) → 1. Thus F_n(c + ε) − F_n(c − ε) → 1, so that P{|X_n − X| ≤ ε} → 1. Therefore convergence in probability holds.
Note: A slightly different approach would be to prove that for any ε > 0, there is an n so large that P{|X_n − c| ≤ ε} ≥ 1 − ε.
2.17 Convergence of a product (a) Examine S_n = ln X_n. The sequence S_n, n ≥ 1, is the sequence of partial sums of the independent and identically distributed random variables ln U_k. Observe that
\[
E[\ln U_k] = \int_0^2 \ln(u)\,\tfrac{1}{2}\,du = \tfrac{1}{2}(u \ln u - u)\Big|_0^2 = \ln 2 - 1 \approx -0.306.
\]
Therefore, by the strong law of large numbers, lim_{n→∞} S_n/n = ln 2 − 1 a.s. This means that, given an ε > 0, there is an a.s. finite random variable N so large that |S_n/n − (ln 2 − 1)| ≤ ε for all n ≥ N. Equivalently,
\[
\left(\frac{2e^{-\varepsilon}}{e}\right)^n \le X_n \le \left(\frac{2e^{\varepsilon}}{e}\right)^n \quad \text{for } n \ge N.
\]
Taking ε small enough that 2e^ε < e, conclude that lim_{n→∞} X_n = 0 a.s., which implies that also lim_{n→∞} X_n = 0 p.
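A numerical sketch of the law-of-large-numbers step, working on the log scale to avoid floating-point underflow of the product:

```python
# Monte Carlo check that S_n / n concentrates near ln(2) - 1 ~ -0.307 for
# S_n = ln(U_1) + ... + ln(U_n), U_k i.i.d. Uniform[0, 2]. Since the limit is
# negative, X_n = exp(S_n) -> 0, as argued above.
import math
import random

random.seed(4)
n = 5000
sample_means = []
for _ in range(200):
    s = sum(math.log(2.0 * random.random()) for _ in range(n))
    sample_means.append(s / n)
average = sum(sample_means) / len(sample_means)
```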
To calculate the Chernoff bound, we find M(θ) = ln(0.4e^θ + 0.1 + 0.5e^{−θ}) and e^{−ℓ(0)} = min_θ e^{M(θ)} = 0.4√5 + 0.1, yielding the upper bound
\[
P(S \ge 0) \le (0.4\sqrt{5} + 0.1)^{100} = 0.57187.
\]
It is not difficult to calculate P (S ≥ 0) numerically. The result is P (S ≥ 0) =
0.1572.... Thus, in this example, the approximation based on the central limit
theorem is fairly accurate, the Chernoff bound is somewhat loose, and the Cheby-
chev inequality is very loose.
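The numbers quoted above can be reproduced numerically. The step distribution is not restated in this excerpt; the code assumes P(X = 1) = 0.4, P(X = 0) = 0.1, P(X = −1) = 0.5 and n = 100, which is consistent with min_θ E[e^{θX}] = 0.4√5 + 0.1 and the quoted values:

```python
# Chernoff bound and exact computation of P(S >= 0) for S = X_1 + ... + X_100.
# Assumed step pmf (inferred from the quoted numbers, not restated here):
# P(X=1) = 0.4, P(X=0) = 0.1, P(X=-1) = 0.5, so E[X] = -0.1 < 0.
import math

pmf = {1: 0.4, 0: 0.1, -1: 0.5}
n = 100

# Chernoff: P(S >= 0) <= (min_{theta >= 0} E[e^{theta X}])^n.
def mgf(t):
    return sum(p * math.exp(t * x) for x, p in pmf.items())

m_min = min(mgf(i * 1e-4) for i in range(20_000))
chernoff_bound = m_min ** n

# Exact P(S >= 0) by repeated convolution of the step distribution.
dist = {0: 1.0}
for _ in range(n):
    nxt = {}
    for s, p in dist.items():
        for x, px in pmf.items():
            nxt[s + x] = nxt.get(s + x, 0.0) + p * px
    dist = nxt
p_exact = sum(p for s, p in dist.items() if s >= 0)
```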
2.21 Sums of i.i.d. random variables, III (a) Φ_{X_{i,n}}(u) = E[e^{juX_{i,n}}] = 1 + (λ/n)(e^{ju} − 1), so Φ_{Y_n}(u) = (1 + (λ/n)(e^{ju} − 1))^n.
(b) Since lim_{n→∞} (1 + α/n)^n = e^α, it follows that lim_{n→∞} Φ_{Y_n}(u) = e^{λ(e^{ju} − 1)}. This limit as a function of u is the characteristic function of a random variable Y with the Poisson distribution with mean λ.
(c) Thus, (Y_n) converges in distribution, and the limiting distribution is the
Poisson distribution with mean λ. There is not enough information given in the
problem to determine whether Yn converges in any of the stronger senses (p.,
m.s., or a.s.), because the given information only describes the distribution of Yn
for each n but gives nothing about the joint distribution of the Yn ’s. Note that
Yn has a binomial distribution for each n.
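The convergence in part (c) can be illustrated by comparing the Binomial(n, λ/n) pmf of Y_n with the Poisson(λ) pmf; the total variation distance shrinks as n grows (λ = 2 is an arbitrary example value):

```python
# Total variation distance between Binomial(n, lam/n) and Poisson(lam),
# illustrating the convergence in distribution; lam = 2 is an example choice.
import math

lam = 2.0

def binom_pmf(n, k):
    p = lam / n
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k):
    # Computed on the log scale to avoid overflow for large k.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def tv_distance(n):
    d = sum(abs(binom_pmf(n, k) - poisson_pmf(k)) for k in range(n + 1))
    d += 1.0 - sum(poisson_pmf(k) for k in range(n + 1))  # Poisson mass above n
    return d / 2.0

tv = [tv_distance(n) for n in (10, 100, 1000)]
```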
2.23 On the growth of the maximum of n independent exponentials (a) Let
n ≥ 2. Clearly FZn (c) = 0 for c ≤ 0. For c > 0,
\begin{align*}
&\ge 1 - \sum_{k=0}^{\infty} P\{W_{n+k} \le -3 \cdot 2^k\} = 1 - \sum_{k=0}^{\infty} Q(3 \cdot 2^k \cdot \sqrt{2}) \\
&\ge 1 - \frac{1}{2}\sum_{k=0}^{\infty} \exp(-(3 \cdot 2^k)^2) \ge 1 - \frac{1}{2}\sum_{k=0}^{\infty} (e^{-9})^{k+1} \\
&= 1 - \frac{e^{-9}}{2(1 - e^{-9})} \ge 0.9999.
\end{align*}
The pieces are put together as follows. Let N_1 be the smallest time such that X_{N_1} ≥ 3. Then N_1 is finite with probability one, as explained above. Then X diverges nicely from time N_1 with probability at least 0.9999. However, if X does not diverge nicely from time N_1, then there is some first time of the form N_1 + k such that X_{N_1+k} < 3 · 2^k. Note that the future of the process beyond that time has the same evolution as the original process. Let N_2 be the first time after that such that X_{N_2} ≥ 3. Then X again has chance at least 0.9999 to diverge nicely to infinity. And so on. Thus, X will have arbitrarily many chances to diverge nicely to infinity, with each chance having probability at least 0.9999. The number of chances needed until success is a.s. finite (in fact it has the geometric distribution), so that X diverges nicely to infinity from some time, with probability one.
2.27 Convergence analysis of successive averaging (b) The means µ_n of X_n for all n are determined by the recursion µ_0 = 0, µ_1 = 1, and, for n ≥ 1, µ_{n+1} = (µ_n + µ_{n−1})/2. This second order recursion has a solution of the form µ_n = Aθ_1^n + Bθ_2^n, where θ_1 = 1 and θ_2 = −1/2 are the solutions to the equation θ² = (1 + θ)/2. Matching the initial conditions yields µ_n = (2/3)(1 − (−1/2)^n).
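The closed form can be checked against the recursion directly:

```python
# Check that the recursion mu_{n+1} = (mu_n + mu_{n-1}) / 2, mu_0 = 0, mu_1 = 1,
# matches the closed form mu_n = (2/3) * (1 - (-1/2)**n).
mu = [0.0, 1.0]
for n in range(1, 30):
    mu.append((mu[n] + mu[n - 1]) / 2.0)

closed_form = [(2.0 / 3.0) * (1.0 - (-0.5) ** k) for k in range(len(mu))]
```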
(c) It is first proved that lim_{n→∞} D_n = 0 a.s. Note that D_n = U_1 · · · U_{n−1}. Since ln D_n = ln(U_1) + · · · + ln(U_{n−1}) and
\[
E[\ln U_i] = \int_0^1 \ln(u)\,du = (u \ln u - u)\Big|_0^1 = -1,
\]
the strong law of large numbers implies that lim_{n→∞} (ln D_n)/(n − 1) = −1 a.s., which in turn implies lim_{n→∞} ln D_n = −∞ a.s., or equivalently, lim_{n→∞} D_n = 0 a.s., which was to be proved. By the hint, for each ω such that D_n(ω) converges to zero, the sequence X_n(ω) is a Cauchy sequence of numbers, and hence has a limit. The set of such ω has probability one, so X_n converges a.s.
2.29 Mean square convergence of a random series Let Yn = X1 + · · · + Xn .
We are interested in determining whether limn→∞ Yn exists in the m.s. sense. By
Proposition 2.11, the m.s. limit exists if and only if the limit lim_{m,n→∞} E[Y_m Y_n] exists and is finite. But E[Y_m Y_n] = Σ_{k=1}^{n∧m} σ_k², which converges to Σ_{k=1}^∞ σ_k² as n, m → ∞. Thus, (Y_n) converges in the m.s. sense if and only if Σ_{k=1}^∞ σ_k² < ∞.
2.31 A large deviation Since 2 > E[X_1²] = 1, Cramér's theorem implies that b = ℓ(2), which we compute. For a > 0, ∫_{−∞}^{∞} e^{−ax²} dx = √(π/a), so
\[
M(\theta) = \ln E[e^{\theta X_1^2}] = \ln \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-x^2(\frac{1}{2}-\theta)}\,dx = -\frac{1}{2}\ln(1 - 2\theta), \qquad \theta < \tfrac{1}{2}.
\]
Therefore,
\[
\ell(a) = \max_{\theta}\left\{\theta a + \frac{1}{2}\ln(1 - 2\theta)\right\},
\]
and setting the derivative to zero gives the maximizer θ* = (1/2)(1 − 1/a), so that
\[
\ell(a) = \frac{1}{2}(a - 1 - \ln a).
\]
Thus
\[
b = \ell(2) = \frac{1}{2}(1 - \ln 2) = 0.1534, \qquad e^{-100b} = 2.17 \times 10^{-7}.
\]
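The optimization can be verified numerically:

```python
# Numerical check of the rate function: for a = 2,
# max over theta in [0, 1/2) of [theta * a + 0.5 * ln(1 - 2 theta)]
# should equal (1 - ln 2) / 2 = 0.1534..., attained at theta* = 1/4.
import math

a = 2.0
ell = max(t * a + 0.5 * math.log(1.0 - 2.0 * t)
          for t in (i * 1e-6 for i in range(499_999)))
b = (1.0 - math.log(2.0)) / 2.0
tail_estimate = math.exp(-100.0 * b)
```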
c ≥ 0.
(b) The log moment generating function of Y is given by
\[
M_Y(\theta) = \ln \sum_{k=0}^{\infty} e^{\theta k}\, \frac{\lambda^k e^{-\lambda}}{k!} = \ln\left(e^{\lambda(e^\theta - 1)}\right) = \lambda(e^\theta - 1).
\]
Therefore,
\[
\ell(a) = \max_{\theta}\left\{a\theta - \lambda(e^\theta - 1)\right\} = a \ln\!\left(\frac{a}{\lambda}\right) + \lambda - a.
\]
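The Legendre transform computed above can be checked numerically for example values of λ and a:

```python
# Numerical check: max_theta [a*theta - lam*(e^theta - 1)] should equal
# a*ln(a/lam) + lam - a. The values lam = 1, a = 3 are arbitrary examples.
import math

lam, a = 1.0, 3.0
ell_numeric = max(a * t - lam * (math.exp(t) - 1.0)
                  for t in (i * 1e-5 for i in range(300_000)))
ell_closed = a * math.log(a / lam) + lam - a
```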