
Chapter 1 - Conditional Probability and

Conditional Expectation

AMA3658 Stochastic Processes for Investment

AMA3658 Stochastic Processes for Investment Chapter 1 - Conditional Prob. & Expectation 1 / 34
Conditional Probability for Events

An event is a subset of a sample space. The conditional probability of an event E is the probability that E will occur given the knowledge that an event F has already occurred. This probability is written P(E | F), the notation for the probability of E given F.

For any two events E and F, the conditional probability of E given F is defined, as long as P(F) > 0, by

P(E | F) = P(E ∩ F) / P(F).

Events E and F are independent if P(E | F) = P(E).
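The definition can be checked by direct enumeration on a finite sample space. A short Python sketch (the two-dice sample space and the particular events E and F are illustrative choices, not from the slides):

```python
from fractions import Fraction

# Sample space: ordered outcomes of two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event (a subset of omega) under equal likelihood."""
    return Fraction(len(event), len(omega))

E = [w for w in omega if w[0] + w[1] == 7]   # the sum is 7
F = [w for w in omega if w[0] == 3]          # the first die shows 3

# P(E | F) = P(E intersect F) / P(F)
EF = [w for w in E if w in F]
p_E_given_F = prob(EF) / prob(F)

print(p_E_given_F, prob(E))  # both 1/6
```

Here P(E | F) = P(E) = 1/6: learning that the first die shows 3 tells us nothing about whether the sum is 7, so these two events are independent.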

Conditional Distribution
Let X and Y be discrete random variables. The conditional probability mass function of X given Y = y is

pX|Y(x | y) = P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) = p(x, y) / pY(y)

for all values of y such that P(Y = y) > 0, where p(·, ·) denotes the joint probability mass function of (X, Y) and pY(·) denotes the marginal probability mass function of Y.

The conditional distribution function of X given Y = y is defined, for all y such that P(Y = y) > 0, by

FX|Y(x | y) = P(X ≤ x | Y = y) = Σ_{k ≤ x} pX|Y(k | y).


The conditional expectation of X given that Y = y is defined by

E(X | Y = y) = Σ_x x P(X = x | Y = y) = Σ_x x pX|Y(x | y).

If X is independent of Y, then the conditional mass function, distribution function, and expectation are the same as the unconditional ones.


Example 1.1 Suppose that p(x, y), the joint probability mass function of X
and Y , is given by

p(1, 1) = 0.5, p(1, 2) = 0.1, p(2, 1) = 0.1, p(2, 2) = 0.3.

Calculate the conditional probability mass function of X given that Y = 1.

Solution. We first note that

pY(1) = Σ_x p(x, 1) = p(1, 1) + p(2, 1) = 0.6.

Hence,

pX|Y(1 | 1) = P(X = 1 | Y = 1) = P(X = 1, Y = 1) / P(Y = 1) = p(1, 1) / pY(1) = 5/6.

Likewise, pX|Y(2 | 1) = p(2, 1) / pY(1) = 1/6.
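The computation in Example 1.1 can be reproduced mechanically. A small sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Joint pmf of (X, Y) from Example 1.1.
p = {(1, 1): Fraction(1, 2), (1, 2): Fraction(1, 10),
     (2, 1): Fraction(1, 10), (2, 2): Fraction(3, 10)}

def p_Y(y):
    """Marginal pmf of Y, obtained by summing the joint pmf over x."""
    return sum(v for (x, yy), v in p.items() if yy == y)

def p_X_given_Y(x, y):
    """Conditional pmf p_{X|Y}(x | y) = p(x, y) / p_Y(y)."""
    return p[(x, y)] / p_Y(y)

print(p_X_given_Y(1, 1), p_X_given_Y(2, 1))  # 5/6 and 1/6
```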

Example 1.2 If X1 and X2 are independent binomial random variables with
respective parameters (n1 , p) and (n2 , p), calculate the conditional probability
mass function of X1 given that X1 + X2 = m for some non-negative
m ≤ n1 + n2 .

Solution. With q = 1 − p and max(0, m − n2) ≤ k ≤ min(m, n1),

P(X1 = k | X1 + X2 = m) = P(X1 = k, X1 + X2 = m) / P(X1 + X2 = m)
= P(X1 = k, X2 = m − k) / P(X1 + X2 = m)
= P(X1 = k) P(X2 = m − k) / P(X1 + X2 = m)
= [(n1 choose k) p^k q^(n1−k)] [(n2 choose m−k) p^(m−k) q^(n2−m+k)] / [(n1+n2 choose m) p^m q^(n1+n2−m)].

Thus, the conditional probability mass function of X1 given X1 + X2 = m is

P(X1 = k | X1 + X2 = m) = (n1 choose k)(n2 choose m−k) / (n1+n2 choose m), for max(0, m − n2) ≤ k ≤ min(m, n1),

the hypergeometric distribution: all factors involving p cancel.
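As a sanity check on Example 1.2, one can verify numerically that conditioning the two binomial pmfs reproduces the p-free ratio of binomial coefficients; n1, n2, p and m below are arbitrary illustrative values:

```python
from math import comb

def binom_pmf(n, p, k):
    """Binomial(n, p) probability mass at k."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n1, n2, p, m = 5, 7, 0.3, 4
lo, hi = max(0, m - n2), min(m, n1)

# P(X1 + X2 = m), computed by summing the joint law over admissible k.
denom = sum(binom_pmf(n1, p, j) * binom_pmf(n2, p, m - j)
            for j in range(lo, hi + 1))

for k in range(lo, hi + 1):
    direct = binom_pmf(n1, p, k) * binom_pmf(n2, p, m - k) / denom
    formula = comb(n1, k) * comb(n2, m - k) / comb(n1 + n2, m)
    assert abs(direct - formula) < 1e-12  # p has cancelled out
```

The denominator also equals the Binomial(n1 + n2, p) pmf at m, since a sum of independent binomials with a common p is binomial.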

Example 1.3 If X and Y are independent Poisson random variables with
respective parameters λ1 and λ2 , calculate the conditional probability mass
function of X given that X + Y = n.

Solution. For 0 ≤ k ≤ n,

P(X = k | X + Y = n) = P(X = k, X + Y = n) / P(X + Y = n)
= P(X = k, Y = n − k) / P(X + Y = n)
= P(X = k) P(Y = n − k) / P(X + Y = n)
= [e^(−λ1) λ1^k / k!] [e^(−λ2) λ2^(n−k) / (n − k)!] [e^(−(λ1+λ2)) (λ1 + λ2)^n / n!]^(−1)
= (n choose k) (λ1/(λ1 + λ2))^k (λ2/(λ1 + λ2))^(n−k).

The conditional distribution of X given X + Y = n is the binomial distribution with parameters n and λ1/(λ1 + λ2).
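The conclusion of Example 1.3 is easy to verify numerically; λ1, λ2 and n below are illustrative values:

```python
from math import comb, exp, factorial

def pois_pmf(lam, k):
    """Poisson(lam) probability mass at k."""
    return exp(-lam) * lam**k / factorial(k)

lam1, lam2, n = 2.0, 3.0, 6
q = lam1 / (lam1 + lam2)

for k in range(n + 1):
    direct = pois_pmf(lam1, k) * pois_pmf(lam2, n - k) / pois_pmf(lam1 + lam2, n)
    binom = comb(n, k) * q**k * (1 - q)**(n - k)
    assert abs(direct - binom) < 1e-12  # conditional law is Binomial(n, q)
```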

If X and Y have a joint probability density function f(x, y), then the conditional density function of X, given Y = y, is defined, for all values of y such that fY(y) > 0, by

fX|Y(x | y) = f(x, y) / fY(y),

where fY(·) is the marginal density of Y.

To motivate this definition, multiply the left side by dx and the right side by (dx dy)/dy to get

fX|Y(x | y) dx = f(x, y) dx dy / (fY(y) dy)
≈ P(x ≤ X ≤ x + dx, y ≤ Y ≤ y + dy) / P(y ≤ Y ≤ y + dy)
= P(x ≤ X ≤ x + dx | y ≤ Y ≤ y + dy).

In other words, for small values of dx and dy, fX|Y(x | y) dx is approximately the conditional probability that X is between x and x + dx given that Y is between y and y + dy.


The conditional expectation of X given Y = y is defined, for all values of y such that fY(y) > 0, by

E(X | Y = y) = ∫_{−∞}^{∞} x fX|Y(x | y) dx.

Example 1.4 The joint density of X and Y is given by

f(x, y) = (1/2) y e^(−xy) for 0 < x < ∞, 0 < y < 2, and f(x, y) = 0 otherwise.

What is E(e^(X/2) | Y = 1)?


Solution. The conditional density of X given Y = 1 is

fX|Y(x | 1) = f(x, 1) / fY(1) = (1/2) e^(−x) / ∫_0^∞ (1/2) e^(−x) dx = e^(−x), x > 0.

Hence,

E(e^(X/2) | Y = 1) = ∫_0^∞ e^(x/2) fX|Y(x | 1) dx = ∫_0^∞ e^(x/2) e^(−x) dx = ∫_0^∞ e^(−x/2) dx = 2.
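A quick numerical check of Example 1.4, using plain midpoint quadrature on a truncated range (the cutoff 40 is an arbitrary choice that makes the neglected tail, about 2e^(−20), negligible):

```python
from math import exp

# f_{X|Y}(x | 1) = e^(-x), so E(e^(X/2) | Y = 1) = integral of e^(-x/2) over (0, inf).
def integrate(f, a, b, steps=200_000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

val = integrate(lambda x: exp(x / 2) * exp(-x), 0.0, 40.0)
print(val)  # ≈ 2
```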


Example 1.5 Let X1 and X2 be independent exponential random variables with rate parameters µ1 and µ2. Find the conditional density of X1 given that X1 + X2 = t.

Solution. First, note that if f(x, y) is the joint density of X and Y, then the joint density of X and X + Y is

fX,X+Y(x, t) = fX,Y(x, t − x),

which is easily seen by noting that the Jacobian of the transformation

g1(x, y) = x,  g2(x, y) = x + y

is equal to 1.


Solution. (con’t) Applying this result, we have

fX1|X1+X2(x | t) = fX1,X1+X2(x, t) / fX1+X2(t)
= fX1,X2(x, t − x) / fX1+X2(t)
= µ1 e^(−µ1 x) µ2 e^(−µ2 (t − x)) / fX1+X2(t)
= C e^(−(µ1 − µ2) x), 0 ≤ x ≤ t,

where

C = µ1 µ2 e^(−µ2 t) / fX1+X2(t)

does not depend on x.

Note: If µ1 = µ2, then fX1|X1+X2(x | t) = C for 0 ≤ x ≤ t. In this case, X1 given X1 + X2 = t is uniformly distributed on (0, t).
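The note can be checked by simulation: since the conditional density of X1 given X1 + X2 = t is uniform on (0, t) for every t when µ1 = µ2, the ratio X1/(X1 + X2) should be Uniform(0, 1). A seeded sketch (the common rate µ = 1.5 is an arbitrary choice):

```python
import random

random.seed(0)
mu = 1.5                      # common rate; any value works when mu1 = mu2
trials = 100_000
ratios = []
for _ in range(trials):
    x1 = random.expovariate(mu)
    x2 = random.expovariate(mu)
    ratios.append(x1 / (x1 + x2))

mean = sum(ratios) / trials
below_quarter = sum(r < 0.25 for r in ratios) / trials
print(mean, below_quarter)  # ≈ 0.5 and ≈ 0.25, as for a Uniform(0, 1) variable
```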

Computation of Expectations by Conditioning

Let us denote by E(X | Y) the function of the random variable Y whose value at Y = y is E(X | Y = y). Note that E(X | Y) is itself a random variable.

For random variables X and Y,

E(X) = E[E(X | Y)].

If Y is discrete, then

E(X) = Σ_y E(X | Y = y) P(Y = y).

If Y is continuous with density function fY(y), then

E(X) = ∫_{−∞}^{∞} E(X | Y = y) fY(y) dy.
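A direct check of E(X) = E[E(X | Y)] in the discrete case, reusing the joint pmf of Example 1.1:

```python
from fractions import Fraction

# Joint pmf of (X, Y); values borrowed from Example 1.1.
p = {(1, 1): Fraction(1, 2), (1, 2): Fraction(1, 10),
     (2, 1): Fraction(1, 10), (2, 2): Fraction(3, 10)}

p_Y = {y: sum(v for (x, yy), v in p.items() if yy == y) for y in (1, 2)}

def cond_exp_X(y):
    """E(X | Y = y) = sum over x of x * p(x, y) / p_Y(y)."""
    return sum(x * p[(x, y)] / p_Y[y] for x in (1, 2))

lhs = sum(x * v for (x, _), v in p.items())         # E(X) directly
rhs = sum(cond_exp_X(y) * p_Y[y] for y in (1, 2))   # E[E(X | Y)]
print(lhs, rhs)  # both 7/5
```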


We prove the result on the expectation of X for discrete Y. Note that

Σ_y E(X | Y = y) P(Y = y) = Σ_y Σ_x x P(X = x | Y = y) P(Y = y)
= Σ_y Σ_x x [P(X = x, Y = y) / P(Y = y)] P(Y = y)
= Σ_y Σ_x x P(X = x, Y = y)
= Σ_x x Σ_y P(X = x, Y = y)
= Σ_x x P(X = x)
= E(X).


Example 1.6 Suppose that the expected number of accidents per week at an
industrial plant is 4. Suppose also that the numbers of workers injured in each
accident are independent random variables with a common mean of 2. In
addition, assume that the number of workers injured in each accident is
independent of the number of accidents that occur. What is the expected
number of injuries during a week?

Solution. Let N denote the number of accidents and Xi the number injured in the ith accident, i = 1, 2, . . .. Then the total number of injuries can be expressed as Σ_{i=1}^N Xi. Now,

E(Σ_{i=1}^N Xi) = E[E(Σ_{i=1}^N Xi | N)].


Solution. (con’t) Note that

E(Σ_{i=1}^N Xi | N = n) = E(Σ_{i=1}^n Xi | N = n)
= E(Σ_{i=1}^n Xi)   (by the independence of the Xi and N)
= n E(X1),

which yields E(Σ_{i=1}^N Xi | N) = N E(X1), and hence

E(Σ_{i=1}^N Xi) = E[E(Σ_{i=1}^N Xi | N)] = E[N E(X1)] = E(N) E(X1).

In the example, the expected number of injuries during a week equals 4 × 2 = 8.
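A seeded simulation of Example 1.6. The slides specify only the means, so the sketch assumes (arbitrarily) that both the number of accidents and the injuries per accident are Poisson distributed:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Draw a Poisson(lam) variate via Knuth's algorithm; fine for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

trials, total = 50_000, 0
for _ in range(trials):
    n = poisson(4.0)                              # accidents this week (mean 4)
    total += sum(poisson(2.0) for _ in range(n))  # injuries per accident (mean 2)
est = total / trials
print(est)  # ≈ E(N) E(X1) = 8
```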

Example 1.7 (Random Hats Problem) At a party, N men throw their hats
into the center of a room. The hats are mixed up and each man randomly
selects one. Find the expected number of men who select their own hats.

Solution. Let X denote the number of men who select their own hats. Note that X = H1 + H2 + · · · + HN, where

Hi = 1 if the ith man selects his own hat, and Hi = 0 otherwise.

Now, because the ith man is equally likely to select any of the N hats, it follows that

P(Hi = 1) = P(ith man selects his own hat) = 1/N,

and so E(Hi) = 1 · P(Hi = 1) + 0 · P(Hi = 0) = 1/N. Hence,

E(X) = E(H1) + · · · + E(HN) = N · (1/N) = 1.

Therefore, on average exactly one of the men will select his own hat.
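A seeded simulation of the random hats problem (N = 10 is an illustrative choice; the answer is 1 for every N):

```python
import random

random.seed(2)
N, trials = 10, 100_000
total_matches = 0
for _ in range(trials):
    hats = list(range(N))
    random.shuffle(hats)  # a uniformly random assignment of hats to men
    total_matches += sum(hats[i] == i for i in range(N))
print(total_matches / trials)  # ≈ 1
```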


Example 1.8 (The Matching Rounds Problem) Suppose that in the previous
example, those choosing their own hats depart, while the others (those without
a match) put their selected hats in the center of the room, mix them up, and
then reselect. Also, suppose that this process continues until each individual
has his own hat.

(a) Find E(Rn ), where Rn is the number of rounds that are necessary when
n individuals are initially present.

(b) Find E(Sn ), where Sn is the total number of selections made by the n
individuals for n ≥ 2.

(c) Find the expected number of false selections made by one of the n people
for n ≥ 2.


Solution.
(a) It follows from the previous example that no matter how many people
remain, there will, on average, be one match per round. Hence, one
might suggest that E(Rn ) = n. This turns out to be true, and an
induction proof will now be given.

Because it is obvious that E(R1) = 1, assume that E(Rk) = k for k = 1, . . . , n − 1. To compute E(Rn), start by conditioning on Xn, the number of matches that occur in the first round. This gives

E(Rn) = Σ_{i=0}^n E(Rn | Xn = i) P(Xn = i).

Now, given a total of i matches in the initial round, the number of rounds
needed will equal 1 plus the number of rounds that are required when
n − i persons are to be matched with their hats.


Solution. (con’t) Therefore,

E(Rn) = Σ_{i=0}^n [1 + E(R_{n−i})] P(Xn = i)
= 1 + E(Rn) P(Xn = 0) + Σ_{i=1}^n E(R_{n−i}) P(Xn = i)
= 1 + E(Rn) P(Xn = 0) + Σ_{i=1}^n (n − i) P(Xn = i)   (induction hypothesis)
= 1 + E(Rn) P(Xn = 0) + n[1 − P(Xn = 0)] − E(Xn)
= E(Rn) P(Xn = 0) + n[1 − P(Xn = 0)],

where the last equality uses E(Xn) = 1. Since P(Xn = 0) < 1, solving for E(Rn) gives E(Rn) = n, and the induction result is proven.

Solution. (con’t)
(b) For n ≥ 2, conditioning on Xn, we have

E(Sn) = Σ_{i=0}^n E(Sn | Xn = i) P(Xn = i) = Σ_{i=0}^n [n + E(S_{n−i})] P(Xn = i)
= n + Σ_{i=0}^n E(S_{n−i}) P(Xn = i),

where E(S0) = 0.

To solve the preceding equation, rewrite it as E(Sn) = n + E(S_{n−Xn}).

Now, if there were exactly one match in each round, then it would take a total of 1 + · · · + n = n(n + 1)/2 selections.

Try a solution of the form E(Sn) = an + bn². We need

an + bn² = n + E[a(n − Xn) + b(n − Xn)²].

Using the facts that E(Xn) = Var(Xn) = 1 (why?), we conclude that the above equation holds with b = 1/2 and a = 1.


Solution. (con’t) We formally prove that E(Sn) = n + n²/2 for n ≥ 2 by induction. Clearly, the induction proposition holds for n = 2. The recursion gives

E(Sn) = n + E(Sn) P(Xn = 0) + Σ_{i=1}^n E(S_{n−i}) P(Xn = i).

Assume that E(Sk) = k + k²/2 for k = 2, . . . , n − 1. Using the fact that P(Xn = n − 1) = 0, we see that

E(Sn) = n + E(Sn) P(Xn = 0) + Σ_{i=1}^n [(n − i) + (n − i)²/2] P(Xn = i)
= n + E(Sn) P(Xn = 0) + (n + n²/2)[1 − P(Xn = 0)] − (n + 1) E(Xn) + E(Xn²)/2.

Substituting the identities E(Xn) = 1, E(Xn²) = 2 in the preceding shows that E(Sn) = n + n²/2, and the induction proof is complete.


Solution. (con’t)
(c) Let Cj denote the number of hats chosen by person j (j = 1, . . . , n). Then

Σ_{j=1}^n Cj = Sn.

Taking expectations and using the fact that each Cj has the same mean yields

E(Cj) = E(Sn)/n = 1 + n/2.

Hence, the expected number of false selections by person j is E(Cj − 1) = n/2.
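Parts (a) and (b) can be checked by a seeded simulation of the matching rounds (n = 8 is an illustrative choice):

```python
import random

random.seed(3)

def one_run(n):
    """Simulate the matching rounds; return (rounds, total selections)."""
    unmatched = list(range(n))        # people still without their own hat
    rounds = selections = 0
    while unmatched:
        rounds += 1
        selections += len(unmatched)
        hats = unmatched[:]           # the same hats are mixed up again
        random.shuffle(hats)
        unmatched = [i for i, h in zip(unmatched, hats) if i != h]
    return rounds, selections

n, trials = 8, 20_000
r_sum = s_sum = 0
for _ in range(trials):
    r, s = one_run(n)
    r_sum += r
    s_sum += s
print(r_sum / trials, s_sum / trials)  # ≈ n = 8 and ≈ n + n^2/2 = 40
```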

Computation of Variances by Conditioning
Similar to the case of expectation, in many cases it is easier to compute the variance of a random variable X by conditioning on another random variable Y.

Theorem 1.1 (Law of total variance)
If X and Y are random variables on the same probability space, and the variance of X is finite, then

Var(X) = E[Var(X | Y)] + Var[E(X | Y)].

Proof. First,

E[Var(X | Y)] = E[E(X² | Y) − E(X | Y)²]
= E[E(X² | Y)] − E[E(X | Y)²]
= E(X²) − E[E(X | Y)²].

Also,

Var[E(X | Y)] = E[E(X | Y)²] − (E[E(X | Y)])²
= E[E(X | Y)²] − [E(X)]².

Combining the above results,

E[Var(X | Y)] + Var[E(X | Y)] = E(X²) − [E(X)]² = Var(X).
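A direct check of the law of total variance on a small discrete example, reusing the joint pmf of Example 1.1:

```python
from fractions import Fraction

# Joint pmf of (X, Y); values borrowed from Example 1.1.
p = {(1, 1): Fraction(1, 2), (1, 2): Fraction(1, 10),
     (2, 1): Fraction(1, 10), (2, 2): Fraction(3, 10)}
xs, ys = (1, 2), (1, 2)
p_Y = {y: sum(p[(x, y)] for x in xs) for y in ys}

def cond_moment(y, k):
    """E(X^k | Y = y)."""
    return sum(x**k * p[(x, y)] / p_Y[y] for x in xs)

ex  = sum(x * p[(x, y)] for x in xs for y in ys)
ex2 = sum(x**2 * p[(x, y)] for x in xs for y in ys)
var_direct = ex2 - ex**2

e_condvar = sum((cond_moment(y, 2) - cond_moment(y, 1)**2) * p_Y[y] for y in ys)
var_condexp = sum(cond_moment(y, 1)**2 * p_Y[y] for y in ys) - ex**2
print(var_direct, e_condvar + var_condexp)  # both 6/25
```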
Example 1.9 (The Variance of a Compound Random Variable) Let X1, X2, . . . be independent and identically distributed random variables with distribution F having mean µ and variance σ², and assume that they are independent of the non-negative integer-valued random variable N. The random variable S = Σ_{i=1}^N Xi is called a compound random variable. Find the variance of S.

Solution. We use the conditional variance formula. First,

Var(S | N = n) = Var(Σ_{i=1}^N Xi | N = n) = Var(Σ_{i=1}^n Xi | N = n) = Var(Σ_{i=1}^n Xi) = nσ².

By the same reasoning, E(S | N = n) = nµ. Therefore,

Var(S) = E(Nσ²) + Var(Nµ) = σ² E(N) + µ² Var(N).


Solution. (con’t) If N is a Poisson random variable, then S = Σ_{i=1}^N Xi is called a compound Poisson random variable.

Because the variance of a Poisson random variable is equal to its mean, it follows that for a compound Poisson random variable having E(N) = λ,

Var(S) = λσ² + λµ² = λ E(X²),

where X has the distribution F.
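A seeded simulation of the compound Poisson identity Var(S) = λE(X²); the choices Xi ~ Uniform(0, 1) and λ = 3 are illustrative (then E(X²) = 1/3, so Var(S) = 1 and E(S) = λµ = 1.5):

```python
import math
import random

random.seed(4)

def poisson(lam):
    """Draw a Poisson(lam) variate via Knuth's algorithm; fine for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam = 3.0
# X_i ~ Uniform(0, 1): mu = 1/2, sigma^2 = 1/12, E(X^2) = 1/3.
samples = [sum(random.random() for _ in range(poisson(lam)))
           for _ in range(100_000)]

m = sum(samples) / len(samples)
var = sum((s - m) ** 2 for s in samples) / len(samples)
print(m, var)  # ≈ lam * mu = 1.5 and ≈ lam * E(X^2) = 1.0
```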

Computation of Probabilities by Conditioning

We can compute probabilities using the conditioning approach for expectations. For a given event E, define an indicator random variable X by X = 1 if E occurs, and X = 0 if E does not occur.

It follows from the definition of X that

E(X) = P(E),
E(X | Y = y) = P(E | Y = y) for any random variable Y with P(Y = y) > 0.

Therefore, we have

P(E) = Σ_y P(E | Y = y) P(Y = y), if Y is discrete;
P(E) = ∫_{−∞}^{∞} P(E | Y = y) fY(y) dy, if Y is continuous.


Example 1.10 Suppose that X and Y are independent continuous random variables having densities fX and fY, respectively. Compute P(X < Y).

Solution. Conditioning on the value of Y yields

P(X < Y) = ∫_{−∞}^{∞} P(X < Y | Y = y) fY(y) dy
= ∫_{−∞}^{∞} P(X < y | Y = y) fY(y) dy
= ∫_{−∞}^{∞} P(X < y) fY(y) dy   (by independence)
= ∫_{−∞}^{∞} FX(y) fY(y) dy,

where FX(y) = ∫_{−∞}^y fX(x) dx.
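A numerical check of Example 1.10 for a concrete pair of densities (X exponential with rate 1 and Y exponential with rate 2 are illustrative choices; the exact answer is then 1/3):

```python
from math import exp

# X ~ exponential(rate 1), Y ~ exponential(rate 2):
# F_X(y) = 1 - e^(-y), f_Y(y) = 2 e^(-2y).
def integrate(f, a, b, steps=200_000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

p = integrate(lambda y: (1 - exp(-y)) * 2 * exp(-2 * y), 0.0, 40.0)
print(p)  # ≈ 1/3
```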

Example 1.11 An insurance company supposes that the number of accidents
that each of its policyholders will have in a year is Poisson-distributed, with the
mean of the Poisson distribution depending on the policyholder. If the Poisson
mean of a randomly chosen policyholder has a gamma distribution with density
function
g(λ) = λ e^(−λ), λ > 0,
what is the probability that a randomly chosen policyholder has exactly n
accidents next year?

Solution. Let X denote the number of accidents that a randomly chosen policyholder has next year, and let Y be the Poisson mean number of accidents for this policyholder. Conditioning on Y yields

P(X = n) = ∫_0^∞ P(X = n | Y = λ) g(λ) dλ
= ∫_0^∞ e^(−λ) (λ^n / n!) λ e^(−λ) dλ
= (1/n!) ∫_0^∞ λ^(n+1) e^(−2λ) dλ.


Solution. (con’t) However, because

h(λ) = 2 e^(−2λ) (2λ)^(n+1) / (n + 1)!,  λ > 0,

is the density function of a gamma(n + 2, 2) random variable, its integral is 1. Therefore,

1 = ∫_0^∞ 2 e^(−2λ) (2λ)^(n+1) / (n + 1)! dλ = (2^(n+2) / (n + 1)!) ∫_0^∞ λ^(n+1) e^(−2λ) dλ,

which implies that ∫_0^∞ λ^(n+1) e^(−2λ) dλ = (n + 1)! / 2^(n+2), and hence

P(X = n) = (1/n!) · (n + 1)! / 2^(n+2) = (n + 1) / 2^(n+2).
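The closed form P(X = n) = (n + 1)/2^(n+2) can be checked against direct numerical integration, and the probabilities sum to one:

```python
from math import exp, factorial

def integrate(f, a, b, steps=200_000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Integrand: P(X = n | Y = lam) g(lam) with g(lam) = lam * e^(-lam).
for n in range(6):
    numeric = integrate(
        lambda lam: (exp(-lam) * lam**n / factorial(n)) * lam * exp(-lam),
        0.0, 60.0)
    closed = (n + 1) / 2 ** (n + 2)
    assert abs(numeric - closed) < 1e-5

total = sum((n + 1) / 2 ** (n + 2) for n in range(200))
print(total)  # ≈ 1
```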


Example 1.12 (The Best Prize Problem) Suppose that we are to be presented
with n distinct prizes in sequence. After being presented with a prize we must
immediately decide whether to accept it or reject it and consider the next prize.

The only information we are given when deciding whether to accept a prize is
the relative rank of that prize compared to the ones already seen. That is, for
instance, when the fifth prize is presented we learn how it compares with the
first four prizes already seen.

Suppose that once a prize is rejected, it is lost, and that our objective is to
maximize the probability of obtaining the best prize. Assuming that all n!
orderings of the prizes are equally likely, how well can we do?


Solution. Fix a value k (0 ≤ k < n), and consider the strategy that rejects the first k prizes and then accepts the first one that is better than all of those first k.

Let Pk(best) denote the probability that the best prize is selected when this strategy is employed. To compute this probability, condition on X, the position of the best prize. This gives

Pk(best) = Σ_{i=1}^n Pk(best | X = i) P(X = i) = (1/n) Σ_{i=1}^n Pk(best | X = i).

Now, if the overall best prize is among the first k, then no prize is ever selected
under the strategy considered. On the other hand, if the best prize is in
position i, where i > k, then the best prize will be selected if the best of the
first k prizes is also the best of the first i − 1 prizes.


Solution. (con’t) Hence, we see that

Pk(best | X = i) = 0, if i ≤ k;
Pk(best | X = i) = P(best of first i − 1 is among the first k) = k/(i − 1), if i > k.

We have

Pk(best) = (k/n) Σ_{i=k+1}^n 1/(i − 1)
≈ (k/n) ∫_k^(n−1) (1/x) dx
= (k/n) log((n − 1)/k)
≈ (k/n) log(n/k).


Solution. (con’t) Now, if we consider the function g(x) = (x/n) log(n/x), then

g′(x) = (1/n) log(n/x) − 1/n,

and so

g′(x) = 0 ⇒ log(n/x) = 1 ⇒ x = n/e.

Thus, since Pk (best) ≈ g(k), we see that the best strategy of the type
considered is to let the first n/e prizes go by and then accept the first one to
appear that is better than all of those.

In addition, since g(n/e) = 1/e, the probability that this strategy selects the
best prize is approximately 1/e ≈ 0.36788.
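A seeded simulation of this strategy with n = 100 (an illustrative choice), letting the first ⌊n/e⌋ = 36 prizes go by:

```python
import math
import random

random.seed(5)
n = 100
k = int(n / math.e)              # reject the first ~n/e prizes (k = 36 here)

trials, wins = 50_000, 0
for _ in range(trials):
    ranks = list(range(n))       # the prize with rank n - 1 is the best
    random.shuffle(ranks)        # a uniformly random order of presentation
    threshold = max(ranks[:k])   # best among the rejected prizes
    chosen = next((r for r in ranks[k:] if r > threshold), None)
    wins += (chosen == n - 1)
print(wins / trials)  # ≈ 1/e ≈ 0.368
```

Note that when the best prize lies among the first k, no later prize beats the threshold and nothing is selected, which the simulation counts as a loss, matching the analysis above.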

Reference

Ross, S. M. (2014). Introduction to Probability Models (11th ed.). Academic Press.
