Probability Cheat Sheet
Thinking Conditionally

Independence

Independent Events A and B are independent if knowing whether A occurred gives no information about whether B occurred. More formally, A and B (which have nonzero probability) are independent if and only if one of the following equivalent statements holds:

P(A ∩ B) = P(A)P(B)
P(A|B) = P(A)
P(B|A) = P(B)

Conditional Independence A and B are conditionally independent given C if P(A ∩ B|C) = P(A|C)P(B|C). Conditional independence does not imply independence, and independence does not imply conditional independence.

Law of Total Probability (LOTP)

Let B1, B2, B3, ..., Bn be a partition of the sample space (i.e., they are disjoint and their union is the entire sample space).

P(A) = P(A|B1)P(B1) + P(A|B2)P(B2) + ··· + P(A|Bn)P(Bn)
P(A) = P(A ∩ B1) + P(A ∩ B2) + ··· + P(A ∩ Bn)

For LOTP with extra conditioning, just add in another event C!

P(A|C) = P(A|B1, C)P(B1|C) + ··· + P(A|Bn, C)P(Bn|C)
P(A|C) = P(A ∩ B1|C) + ··· + P(A ∩ Bn|C)

Special case of LOTP with B and B^c as partition:

P(A) = P(A|B)P(B) + P(A|B^c)P(B^c)
P(A) = P(A ∩ B) + P(A ∩ B^c)
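Both the independence criterion and LOTP are easy to sanity-check by simulation. Below is a minimal Python sketch (assuming NumPy is available); the two coin-flip events A and B are made up for illustration and are not from the sheet.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Two fair coin flips; A = "first flip is heads", B = "second flip is heads".
first = rng.integers(0, 2, n)
second = rng.integers(0, 2, n)
A = first == 1
B = second == 1

# Independence check: P(A and B) should be close to P(A) * P(B).
print((A & B).mean(), A.mean() * B.mean())

# LOTP check with the partition {B, B^c}:
# P(A) = P(A | B) P(B) + P(A | B^c) P(B^c).
p_A_lotp = A[B].mean() * B.mean() + A[~B].mean() * (~B).mean()
print(A.mean(), p_A_lotp)
```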
Counting

The sampling table gives the number of possible samples of size k out of a population of size n, under various assumptions about how the sample is collected.

                        Order Matters       Order Doesn't Matter
With Replacement        n^k                 C(n + k − 1, k)
Without Replacement     n!/(n − k)!         C(n, k)

Naive Definition of Probability If all outcomes are equally likely, the probability of an event A is

P(A) = (number of outcomes favorable to A) / (number of outcomes in the sample space)

Simpson's Paradox It is possible to have P(A|B, C) < P(A|B^c, C) and P(A|B, C^c) < P(A|B^c, C^c), yet P(A|B) > P(A|B^c): an event can be less likely within every subgroup but more likely overall. (Classic example: one doctor has a lower success rate than another on heart surgeries and on band-aid removals separately, yet a higher overall success rate.)
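The four entries of the sampling table can be verified by brute-force enumeration. A minimal sketch using only Python's standard library; n = 4 and k = 2 are arbitrary small values chosen for illustration.

```python
from itertools import product, permutations, combinations, combinations_with_replacement
from math import comb, factorial

n, k = 4, 2
items = range(n)

# With replacement, order matters: n^k
assert len(list(product(items, repeat=k))) == n**k
# Without replacement, order matters: n!/(n-k)!
assert len(list(permutations(items, k))) == factorial(n) // factorial(n - k)
# Without replacement, order doesn't matter: C(n, k)
assert len(list(combinations(items, k))) == comb(n, k)
# With replacement, order doesn't matter: C(n+k-1, k)
assert len(list(combinations_with_replacement(items, k))) == comb(n + k - 1, k)
print("sampling table counts verified for n = 4, k = 2")
```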
Cumulative Distribution Function (CDF) The CDF is an increasing, right-continuous function with

F_X(x) → 0 as x → −∞ and F_X(x) → 1 as x → ∞

Independence Intuitively, two random variables are independent if knowing the value of one gives no information about the other. Discrete r.v.s X and Y are independent if for all values of x and y

P(X = x, Y = y) = P(X = x)P(Y = y)

Expected Value and Indicators

Expected Value and Linearity

Expected Value (a.k.a. mean, expectation, or average) is a weighted average of the possible outcomes of our random variable. Mathematically, if x1, x2, x3, ... are all of the distinct possible values that X can take, the expected value of X is

E(X) = Σ_i x_i P(X = x_i)   (for discrete X)

E(X) = ∫_{−∞}^{∞} x f(x) dx   (for continuous X)

Linearity of expectation says that E(X + Y) = E(X) + E(Y), even if X and Y are dependent. The table below illustrates this: averaging the X column and the Y column separately and then adding gives the same result as averaging the X + Y column directly,

(1/n) Σ x_i + (1/n) Σ y_i = (1/n) Σ (x_i + y_i)

  X     Y    X + Y
  3     4      7
  2     2      4
  6     8     14
 10    23     33
  1    −3     −2
  1     0      1
  5     9     14
  4     1      5

Indicator Random Variables The indicator r.v. I_A is 1 if A occurs and 0 if A does not occur. Note that I_A^2 = I_A, I_A I_B = I_{A∩B}, and I_{A∪B} = I_A + I_B − I_A I_B.

Distribution I_A ∼ Bern(p) where p = P(A), so E(I_A) = P(A).

Variance and Standard Deviation

Var(X) = E(X − E(X))^2 = E(X^2) − (E(X))^2

SD(X) = sqrt(Var(X))

Continuous RVs, LOTUS, UoU

Continuous Random Variables (CRVs)

What's the probability that a CRV is in an interval? Take the difference in CDF values (or use the PDF as described later).

P(a ≤ X ≤ b) = P(X ≤ b) − P(X ≤ a) = F_X(b) − F_X(a)

For X ∼ N(µ, σ^2), this becomes

P(a ≤ X ≤ b) = Φ((b − µ)/σ) − Φ((a − µ)/σ)

What is the Probability Density Function (PDF)? The PDF f is the derivative of the CDF F,

F'(x) = f(x)

A PDF is nonnegative and integrates to 1. By the fundamental theorem of calculus, to get from PDF back to CDF we can integrate:

F(x) = ∫_{−∞}^{x} f(t) dt

Universality of Uniform (UoU)

When you plug any CRV into its own CDF, you get a Uniform(0,1) random variable. When you plug a Uniform(0,1) r.v. into an inverse CDF, you get an r.v. with that CDF. For example, let's say that a random variable X has CDF

F(x) = 1 − e^{−x}, for x > 0

By UoU, if we plug X into this function then we get a uniformly distributed random variable:

F(X) = 1 − e^{−X} ∼ Unif(0, 1)

Similarly, if U ∼ Unif(0, 1) then F^{−1}(U) has CDF F. The key point is that for any continuous random variable X, we can transform it into a Uniform random variable and back by using its CDF.

What's a function of a random variable? A function of a random variable is also a random variable. For example, if X is the number of bikes you see in an hour, then g(X) = 2X is the number of bike wheels you see in that hour and h(X) = C(X, 2) = X(X − 1)/2 is the number of pairs of bikes such that you see both of those bikes in that hour.

The Law of the Unconscious Statistician (LOTUS) states that you can find the expected value of a function of a random variable,

E(g(X)) = Σ_x g(x) P(X = x)   (for discrete X)

E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx   (for continuous X)

What's the point? You don't need to know the PMF/PDF of g(X) to find its expected value. All you need is the PMF/PDF of X.

Moments and MGFs

The moment generating function (MGF) of X is M(t) = E(e^{tX}); when it exists, the nth moment of X is the nth derivative of the MGF evaluated at 0, E(X^n) = M^{(n)}(0).
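The Universality of the Uniform and LOTUS above can be checked numerically. A minimal Python sketch (assuming NumPy); it uses the same CDF F(x) = 1 − e^{−x} as the UoU example, and the sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=10**6)

# UoU: F^{-1}(U) has CDF F. For F(x) = 1 - e^{-x} (Expo(1)), F^{-1}(u) = -log(1 - u).
x = -np.log(1 - u)
print(x.mean())                   # close to 1, the mean of Expo(1)

# Plugging an Expo(1) draw back into its CDF gives Unif(0, 1).
print((1 - np.exp(-x)).mean())    # close to 0.5, the Unif(0, 1) mean

# LOTUS check: E(g(X)) for g(x) = x^2 with X ~ Expo(1) equals 2.
print((x**2).mean())
```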
Count-Time Duality Consider a Poisson process of emails arriving in an inbox at rate λ emails per hour. Let Tn be the time of arrival of the nth email (relative to some starting time 0) and Nt be the number of emails that arrive in [0, t]. Let's find the distribution of T1. The event T1 > t, the event that you have to wait more than t hours to get the first email, is the same as the event Nt = 0, which is the event that there are no emails in the first t hours. So

P(T1 > t) = P(Nt = 0) = e^{−λt}  →  P(T1 ≤ t) = 1 − e^{−λt}

Thus we have T1 ∼ Expo(λ). By the memoryless property and similar reasoning, the interarrival times between emails are i.i.d. Expo(λ), i.e., the differences Tn − T(n−1) are i.i.d. Expo(λ).

Conditional Expectation

E(Y|X) is the expected value of Y given the random variable X. This is a function of the random variable X. It is not a number except in certain special cases such as if X ⊥ Y. To find E(Y|X), find E(Y|X = x) and then plug in X for x. For example:

• If E(Y|X = x) = x^3 + 5x, then E(Y|X) = X^3 + 5X.

• Let Y be the number of successes in 10 independent Bernoulli trials with probability p of success and X be the number of successes among the first 3 trials. Then E(Y|X) = X + 7p.

• Let X ∼ N(0, 1) and Y = X^2. Then E(Y|X = x) = x^2 since if we know X = x then we know Y = x^2. And E(X|Y = y) = 0 since if we know Y = y then we know X = ±√y, with equal probabilities (by symmetry). So E(Y|X) = X^2, E(X|Y) = 0.

Central Limit Theorem (CLT)

Approximation using CLT For large n, the sample mean is approximately Normally distributed:

X̄_n = (1/n)(X1 + X2 + ··· + Xn) ∼· N(µ_X, σ_X^2/n)

Asymptotic Distributions using CLT We use →^D to denote "converges in distribution to" as n → ∞. The CLT says that if we standardize the sum X1 + ··· + Xn then the distribution of the sum converges to N(0, 1) as n → ∞:

(1/(σ√n)) (X1 + ··· + Xn − nµ_X) →^D N(0, 1)

In other words, the CDF of the left-hand side goes to the standard Normal CDF, Φ. In terms of the sample mean, the CLT says

√n (X̄_n − µ_X)/σ_X →^D N(0, 1)
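The CLT statement above is easy to see in simulation. A minimal sketch assuming NumPy; the choice of Expo(1) summands and the sample sizes are arbitrary illustrations, not from the sheet.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 10**5

# X_i are i.i.d. Expo(1), so mu_X = 1 and sigma_X = 1.
x = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - 1.0) / 1.0   # sqrt(n)(Xbar_n - mu)/sigma

# If the CLT approximation is kicking in, z should look roughly N(0, 1).
print(z.mean(), z.std())      # close to 0 and 1
print((z < 1.96).mean())      # close to Phi(1.96) ≈ 0.975
```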
Properties of Conditional Expectation

Adam's Law (law of total expectation): E(E(Y|X)) = E(Y). Eve's Law (law of total variance): Var(Y) = E(Var(Y|X)) + Var(E(Y|X)).

Order Statistics

Let X_(1) ≤ X_(2) ≤ ··· ≤ X_(n) denote the order statistics, i.e., the values of X1, ..., Xn sorted in increasing order. For i.i.d. U1, ..., Un ∼ Unif(0, 1), the jth order statistic satisfies U_(j) ∼ Beta(j, n − j + 1).

Markov Chains

Transition Matrix Let the state space be {1, 2, ..., M}. The transition matrix Q is the M × M matrix where element q_ij is the probability that the chain goes from state i to state j in one step:

q_ij = P(X_{n+1} = j | X_n = i)

To find the probability that the chain goes from state i to state j in exactly m steps, take the (i, j) element of Q^m:

q_ij^(m) = P(X_{n+m} = j | X_n = i)

If X0 is distributed according to the row vector PMF p⃗, i.e., p_j = P(X0 = j), then the PMF of Xn is p⃗Q^n.

Chain Properties

A chain is irreducible if you can get from anywhere to anywhere. If a chain (on a finite state space) is irreducible, then all of its states are recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period.

A chain is reversible with respect to s⃗ if s_i q_ij = s_j q_ji for all i, j. Examples of reversible chains include any chain with q_ij = q_ji, with s⃗ = (1/M, 1/M, ..., 1/M), and random walk on an undirected network.

Stationary Distribution

Let us say that the vector s⃗ = (s1, s2, ..., sM) is a PMF (written as a row vector). We will call s⃗ the stationary distribution for the chain if s⃗Q = s⃗. As a consequence, if Xt has the stationary distribution, then all future X_{t+1}, X_{t+2}, ... also have the stationary distribution.

For irreducible, aperiodic chains, the stationary distribution exists, is unique, and s_i is the long-run probability of the chain being at state i. The expected number of steps to return to i starting from i is 1/s_i.

To find the stationary distribution, you can solve the matrix equation (Q' − I)s⃗' = 0. The stationary distribution is uniform if the columns of Q sum to 1.

Reversibility Condition Implies Stationarity If you have a PMF s⃗ and a Markov chain with transition matrix Q, then s_i q_ij = s_j q_ji for all states i, j implies that s⃗ is stationary.

Random Walk on an Undirected Network For a random walk on an undirected network (at each step, move to a uniformly random neighbor of the current node), the stationary distribution is proportional to the degree sequence: s_i is proportional to the degree of node i.

Uniform Distribution

Let us say that U is distributed Unif(a, b). We know the following:

Properties of the Uniform For a Uniform distribution, the probability of a draw from any interval within the support is proportional to the length of the interval. See Universality of Uniform and Order Statistics for other properties.

Example William throws darts really badly, so his darts are uniform over the whole room because they're equally likely to appear anywhere. William's darts have a Uniform distribution on the surface of the room. The Uniform is the only distribution where the probability of hitting in any specific region is proportional to the length/area/volume of that region, and where the density of occurrence in any one specific spot is constant throughout the whole support.

Normal Distribution

Let us say that X is distributed N(µ, σ^2). We know the following:

Central Limit Theorem The Normal distribution is ubiquitous because of the Central Limit Theorem, which states that the sample mean of i.i.d. r.v.s will approach a Normal distribution as the sample size grows, regardless of the initial distribution.

Location-Scale Transformation Every time we shift a Normal r.v. (by adding a constant) or rescale a Normal (by multiplying by a constant), we change it to another Normal r.v. For any Normal X ∼ N(µ, σ^2), we can transform it to the standard N(0, 1) by the following transformation:

Z = (X − µ)/σ ∼ N(0, 1)

Standard Normal The Standard Normal, Z ∼ N(0, 1), has mean 0 and variance 1. Its CDF is denoted by Φ.

Exponential Distribution

Let us say that X is distributed Expo(λ). We know the following:

Story You're sitting on an open meadow right before the break of dawn, wishing that airplanes in the night sky were shooting stars, because you could really use a wish right now. You know that shooting stars come on average every 15 minutes, but a shooting star is not "due" to come just because you've waited so long. Your waiting time is memoryless; the additional time until the next shooting star comes does not depend on how long you've waited already.

Memoryless Property The Exponential is memoryless: P(X > s + t | X > s) = P(X > t).

Example The waiting time until the next shooting star is distributed Expo(4) hours. Here λ = 4 is the rate parameter, since shooting stars arrive at a rate of 1 per 1/4 hour on average. The expected time until the next shooting star is 1/λ = 1/4 hour.

Gamma Distribution

[PDF plots of the Gamma density, including Gamma(10, 1) and Gamma(5, 0.5), on x ∈ (0, 20).]

Let us say that X is distributed Gamma(a, λ). We know the following:

Story You sit waiting for shooting stars, where the waiting time for a star is distributed Expo(λ). You want to see n shooting stars before you go home. The total waiting time for the nth shooting star is Gamma(n, λ).

Example You are at a bank, and there are 3 people ahead of you. The serving time for each person is Exponential with mean 2 minutes. Only one person at a time can be served. The distribution of your waiting time until it's your turn to be served is Gamma(3, 1/2).

Beta Distribution

[PDF plots: Beta(0.5, 0.5), Beta(2, 1), Beta(2, 8), Beta(5, 5) on x ∈ (0, 1).]

Beta(a, b) has support (0, 1); see the table of distributions at the end for its PDF, mean, and variance.

Gamma–Beta Relationship If X ∼ Gamma(a, λ) and Y ∼ Gamma(b, λ) are independent, then X + Y ⊥ X/(X + Y). This is known as the bank–post office result.
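A quick numerical illustration of the Gamma story above (the total wait for the nth arrival when interarrival times are Expo(λ)), as a minimal sketch assuming NumPy. The parameters mirror the bank example (n = 3 people, Expo service with mean 2 minutes, i.e., rate λ = 1/2); note that NumPy's exponential takes a scale (mean), not a rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, reps = 3, 0.5, 10**6

# Sum of n i.i.d. Expo(lam) waiting times should behave like Gamma(n, lam).
total_wait = rng.exponential(scale=1 / lam, size=(reps, n)).sum(axis=1)

print(total_wait.mean(), n / lam)       # Gamma(n, lam) mean n/lam = 6
print(total_wait.var(), n / lam**2)     # Gamma(n, lam) variance n/lam^2 = 12
```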
χ² (Chi-Square) Distribution

Let us say that X is distributed χ²_n. We know the following:

Story A Chi-Square(n) is the sum of the squares of n independent standard Normal r.v.s.

Properties and Representations

X is distributed as Z1² + Z2² + ··· + Zn² for i.i.d. Zi ∼ N(0, 1)

X ∼ Gamma(n/2, 1/2)

Discrete Distributions

Distributions for four sampling schemes

                           Replace                       No Replace
Fixed # trials (n)         Binomial (Bern if n = 1)      HGeom
Draw until r successes     NBin (Geom if r = 1)          NHGeom

Bernoulli Distribution

The Bernoulli distribution is the simplest case of the Binomial distribution, where we only have one trial (n = 1). Let us say that X is distributed Bern(p). We know the following:

Story A trial is performed with probability p of "success", and X is the indicator of success: 1 means success, 0 means failure.

Example Let X be the indicator of Heads for a fair coin toss. Then X ∼ Bern(1/2). Also, 1 − X ∼ Bern(1/2) is the indicator of Tails.

Binomial Distribution

[PMF plot of Bin(10, 1/2).]

Let us say that X is distributed Bin(n, p). We know the following:

Story X is the number of "successes" that we will achieve in n independent trials, where each trial is either a success or a failure, each with the same probability p of success. We can also write X as a sum of multiple independent Bern(p) random variables. Let X ∼ Bin(n, p) and Xj ∼ Bern(p), where all of the Bernoullis are independent. Then

X = X1 + X2 + X3 + ··· + Xn

Example If Jeremy Lin makes 10 free throws and each one independently has a 3/4 chance of getting in, then the number of free throws he makes is distributed Bin(10, 3/4).

Properties Let X ∼ Bin(n, p), Y ∼ Bin(m, p) with X ⊥ Y.

• Redefine success n − X ∼ Bin(n, 1 − p)

• Sum X + Y ∼ Bin(n + m, p)

• Binomial-Poisson Relationship Bin(n, p) is approximately Pois(np) if p is small.

• Binomial-Normal Relationship Bin(n, p) is approximately N(np, np(1 − p)) if n is large and p is not near 0 or 1.

Geometric Distribution

Let us say that X is distributed Geom(p). We know the following:

Story X is the number of "failures" that we will achieve before we achieve our first success. Our successes have probability p.

Example If each pokeball we throw has probability 1/10 to catch Mew, the number of failed pokeballs will be distributed Geom(1/10).

First Success Distribution

Equivalent to the Geometric distribution, except that it includes the first success in the count. This is 1 more than the number of failures. If X ∼ FS(p) then E(X) = 1/p.

Negative Binomial Distribution

Let us say that X is distributed NBin(r, p). We know the following:

Story X is the number of "failures" that we will have before we achieve our rth success. Our successes have probability p.

Example Thundershock has 60% accuracy and can faint a wild Raticate in 3 hits. The number of misses before Pikachu faints Raticate with Thundershock is distributed NBin(3, 0.6).

Hypergeometric Distribution

Let us say that X is distributed HGeom(w, b, n). We know the following:

Story In a population of w desired objects and b undesired objects, X is the number of "successes" we will have in a draw of n objects, without replacement. The draw of n objects is assumed to be a simple random sample (all sets of n objects are equally likely).

Examples Here are some HGeom examples.

• Let's say that we have only b Weedles (failure) and w Pikachus (success) in Viridian Forest. We encounter n Pokemon in the forest, and X is the number of Pikachus in our encounters.

• You have w white balls and b black balls, and you draw n balls without replacement. The number of white balls in your sample is HGeom(w, b, n); the number of black balls is HGeom(b, w, n).

• Capture-recapture A forest has N elk, you capture n of them, tag them, and release them. Then you recapture a new sample of size m. The number of tagged elk in the new sample is HGeom(n, N − n, m).

Poisson Distribution

Let us say that X is distributed Pois(λ). We know the following:

Story There are rare events (low probability events) that occur many different ways (high possibilities of occurrences) at an average rate of λ occurrences per unit space or time. The number of events that occur in that unit of space or time is X.

Example A certain busy intersection has an average of 2 accidents per month. Since an accident is a low probability event that can happen many different ways, it is reasonable to model the number of accidents in a month at that intersection as Pois(2). Then the number of accidents that happen in two months at that intersection is distributed Pois(4).

Properties Let X ∼ Pois(λ1) and Y ∼ Pois(λ2) be independent.

1. Sum X + Y ∼ Pois(λ1 + λ2)

2. Conditional X | (X + Y = n) ∼ Bin(n, λ1/(λ1 + λ2))

3. Chicken-egg If there are Z ∼ Pois(λ) items and we randomly and independently "accept" each item with probability p, then the number of accepted items Z1 ∼ Pois(λp), the number of rejected items Z2 ∼ Pois(λ(1 − p)), and Z1 ⊥ Z2.

Multivariate Distributions

Multinomial Distribution

Let us say that the vector X⃗ = (X1, X2, X3, ..., Xk) ∼ Mult_k(n, p⃗) where p⃗ = (p1, p2, ..., pk).

Story We have n items, which can fall into any one of the k buckets independently with the probabilities p⃗ = (p1, p2, ..., pk).

Example Let us assume that every year, 100 students in the Harry Potter Universe are randomly and independently sorted into one of four houses with equal probability. The number of people in each of the houses is distributed Mult_4(100, p⃗), where p⃗ = (0.25, 0.25, 0.25, 0.25). Note that X1 + X2 + ··· + X4 = 100, and they are dependent.

Joint PMF For n = n1 + n2 + ··· + nk,

P(X⃗ = n⃗) = (n!/(n1! n2! ··· nk!)) p1^{n1} p2^{n2} ··· pk^{nk}

Marginal PMF, Lumping, and Conditionals Marginally, Xi ∼ Bin(n, pi) since we can define "success" to mean category i. If you lump together multiple categories in a Multinomial, then it is still Multinomial. For example, Xi + Xj ∼ Bin(n, pi + pj) for i ≠ j since we can define "success" to mean being in category i or j. Similarly, if k = 6 and we lump categories 1-2 and lump categories 3-5, then

(X1 + X2, X3 + X4 + X5, X6) ∼ Mult_3(n, (p1 + p2, p3 + p4 + p5, p6))

Conditioning on some Xj also still gives a Multinomial:

X1, ..., X(k−1) | Xk = nk ∼ Mult_(k−1)(n − nk, (p1/(1 − pk), ..., p(k−1)/(1 − pk)))

Variances and Covariances We have Xi ∼ Bin(n, pi) marginally, so Var(Xi) = npi(1 − pi). Also, Cov(Xi, Xj) = −npi pj for i ≠ j.

Multivariate Uniform Distribution

See the univariate Uniform for stories and examples. For the 2D Uniform on some region, probability is proportional to area. Every point in the support has equal density, of value 1/(area of region). For the 3D Uniform, probability is proportional to volume.

Multivariate Normal (MVN) Distribution

A vector X⃗ = (X1, X2, ..., Xk) is Multivariate Normal if every linear combination is Normally distributed, i.e., t1 X1 + t2 X2 + ··· + tk Xk is Normal for any constants t1, t2, ..., tk. The parameters of the Multivariate Normal are the mean vector µ⃗ = (µ1, µ2, ..., µk) and the covariance matrix where the (i, j) entry is Cov(Xi, Xj).

Properties The Multivariate Normal has the following properties.

• Any subvector is also MVN.

• If any two elements within an MVN are uncorrelated, then they are independent.

• The joint PDF of a Bivariate Normal (X, Y) with N(0, 1) marginal distributions and correlation ρ ∈ (−1, 1) is

f_{X,Y}(x, y) = (1/(2πτ)) exp(−(x² + y² − 2ρxy)/(2τ²)),

with τ = sqrt(1 − ρ²).
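The chicken-egg property of the Poisson (item 3 above) can be checked by simulation. A minimal sketch assuming NumPy; λ = 10 and p = 0.3 are arbitrary values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, reps = 10.0, 0.3, 10**5

# Chicken-egg: Z ~ Pois(lam) items, each accepted independently with prob p.
z = rng.poisson(lam, size=reps)
z1 = rng.binomial(z, p)            # accepted items
z2 = z - z1                        # rejected items

print(z1.mean(), z1.var())         # both close to lam * p, as for Pois(lam p)
print(z2.mean(), z2.var())         # both close to lam * (1 - p)
print(np.corrcoef(z1, z2)[0, 1])   # close to 0, consistent with Z1 independent of Z2
```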
Distribution Properties

Important CDFs

Standard Normal: Φ
Exponential(λ): F(x) = 1 − e^{−λx}, for x ∈ (0, ∞)
Uniform(0, 1): F(x) = x, for x ∈ (0, 1)

Convolutions of Random Variables

A convolution of n random variables is simply their sum. For the following results, let X and Y be independent.

1. X ∼ Pois(λ1), Y ∼ Pois(λ2) → X + Y ∼ Pois(λ1 + λ2)

2. X ∼ Bin(n1, p), Y ∼ Bin(n2, p) → X + Y ∼ Bin(n1 + n2, p). Bin(n, p) can be thought of as a sum of i.i.d. Bern(p) r.v.s.

3. X ∼ Gamma(a1, λ), Y ∼ Gamma(a2, λ) → X + Y ∼ Gamma(a1 + a2, λ). Gamma(n, λ) with n an integer can be thought of as a sum of i.i.d. Expo(λ) r.v.s.

4. X ∼ NBin(r1, p), Y ∼ NBin(r2, p) → X + Y ∼ NBin(r1 + r2, p). NBin(r, p) can be thought of as a sum of i.i.d. Geom(p) r.v.s.

5. X ∼ N(µ1, σ1²), Y ∼ N(µ2, σ2²) → X + Y ∼ N(µ1 + µ2, σ1² + σ2²)

Special Cases of Distributions

1. Bin(1, p) ∼ Bern(p)
2. Beta(1, 1) ∼ Unif(0, 1)
3. Gamma(1, λ) ∼ Expo(λ)
4. χ²_n ∼ Gamma(n/2, 1/2)
5. NBin(1, p) ∼ Geom(p)

Inequalities

1. Cauchy-Schwarz |E(XY)| ≤ sqrt(E(X²)E(Y²))
2. Markov P(X ≥ a) ≤ E|X|/a for a > 0
3. Chebyshev P(|X − µ| ≥ a) ≤ σ²/a² for E(X) = µ, Var(X) = σ²
4. Jensen E(g(X)) ≥ g(E(X)) for g convex; reverse if g is concave
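The Markov and Chebyshev inequalities are easy to see empirically: the observed tail probabilities sit below the bounds. A minimal sketch assuming NumPy; the Expo(1) example and a = 3 are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=10**6)   # X >= 0 with E(X) = 1, Var(X) = 1

a = 3.0
# Markov: P(X >= a) <= E|X| / a
print((x >= a).mean(), x.mean() / a)
# Chebyshev: P(|X - mu| >= a) <= sigma^2 / a^2
print((np.abs(x - 1.0) >= a).mean(), 1.0 / a**2)
```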
Formulas

Geometric Series

1 + r + r² + ··· + r^{n−1} = Σ_{k=0}^{n−1} r^k = (1 − r^n)/(1 − r)

1 + r + r² + ··· = 1/(1 − r) if |r| < 1

Exponential Function (e^x)

e^x = Σ_{n=0}^{∞} x^n/n! = 1 + x + x²/2! + x³/3! + ··· = lim_{n→∞} (1 + x/n)^n

Gamma and Beta Integrals

You can sometimes solve complicated-looking integrals by pattern-matching to a gamma or beta integral:

∫_0^∞ x^{t−1} e^{−x} dx = Γ(t)        ∫_0^1 x^{a−1}(1 − x)^{b−1} dx = Γ(a)Γ(b)/Γ(a + b)

Also, Γ(a + 1) = aΓ(a), and Γ(n) = (n − 1)! if n is a positive integer.

Euler's Approximation for Harmonic Sums

1 + 1/2 + 1/3 + ··· + 1/n ≈ log n + 0.577...

Stirling's Approximation for Factorials

n! ≈ sqrt(2πn) (n/e)^n

Miscellaneous Definitions

Medians and Quantiles Let X have CDF F. Then X has median m if F(m) ≥ 0.5 and P(X ≥ m) ≥ 0.5. For X continuous, m satisfies F(m) = 1/2. In general, the ath quantile of X is min{x : F(x) ≥ a}; the median is the case a = 1/2.

log Statisticians generally use log to refer to natural log (i.e., base e).

i.i.d. r.v.s Independent, identically-distributed random variables.

Example Problems

Calculating Probability

A textbook has n typos, which are randomly scattered amongst its n pages, independently. You pick a random page. What is the probability that it has no typos? Answer: There is a 1 − 1/n probability that any specific typo isn't on your page, and thus a (1 − 1/n)^n probability that there are no typos on your page. For n large, this is approximately e^{−1} = 1/e.

Linearity and Indicators (1)

In a group of n people, what is the expected number of distinct birthdays (month and day)? What is the expected number of birthday matches? Answer: Let X be the number of distinct birthdays and Ij be the indicator for the jth day being represented.

E(Ij) = 1 − P(no one born on day j) = 1 − (364/365)^n

By linearity, E(X) = 365(1 − (364/365)^n). Now let Y be the number of birthday matches and Ji be the indicator that the ith pair of people have the same birthday. The probability that any two specific people share a birthday is 1/365, so E(Y) = C(n, 2)/365.

Linearity and Indicators (2)

This problem is commonly known as the hat-matching problem. There are n people at a party, each with a hat. At the end of the party, they each leave with a random hat. What is the expected number of people who leave with the right hat? Answer: Each hat has a 1/n chance of going to the right person. By linearity, the average number of hats that go to their owners is n(1/n) = 1.

Linearity and First Success

This problem is commonly known as the coupon collector problem. There are n coupon types. At each draw, you get a uniformly random coupon type. What is the expected number of coupons needed until you have a complete set? Answer: Let N be the number of coupons needed; we want E(N). Let N = N1 + ··· + Nn, where N1 is the draws to get our first new coupon, N2 is the additional draws needed to draw our second new coupon, and so on. By the story of the First Success, N2 ∼ FS((n − 1)/n) (after collecting the first coupon type, there's an (n − 1)/n chance you'll get something new). Similarly, N3 ∼ FS((n − 2)/n), and Nj ∼ FS((n − j + 1)/n). By linearity,

E(N) = E(N1) + ··· + E(Nn) = n/n + n/(n − 1) + ··· + n/1 = n Σ_{j=1}^{n} 1/j

This is approximately n(log(n) + 0.577) by Euler's approximation.

Orderings of i.i.d. random variables

I call 2 UberX's and 3 Lyfts at the same time. If the times it takes for the rides to reach me are i.i.d., what is the probability that all the Lyfts will arrive first? Answer: Since the arrival times of the five cars are i.i.d., all 5! orderings of the arrivals are equally likely. There are 3!2! orderings that involve the Lyfts arriving first, so the probability that the Lyfts arrive first is 3!2!/5! = 1/10. Alternatively, there are C(5, 3) ways to choose 3 of the 5 slots for the Lyfts to occupy, where each of the choices is equally likely. One of these choices has all 3 of the Lyfts arriving first, so the probability is 1/C(5, 3) = 1/10.

Expectation of Negative Hypergeometric

What is the expected number of cards that you draw before you pick your first Ace in a shuffled deck (not counting the Ace)? Answer: Consider a non-Ace. Denote this to be card j. Let Ij be the indicator that card j will be drawn before the first Ace. Note that Ij = 1 says that j is before all 4 of the Aces in the deck. The probability that this occurs is 1/5 by symmetry. Let X be the number of cards drawn before the first Ace. Then X = I1 + I2 + ··· + I48, where each indicator corresponds to one of the 48 non-Aces. Thus,

E(X) = E(I1) + E(I2) + ··· + E(I48) = 48/5 = 9.6

Minimum and Maximum of RVs

What is the CDF of the maximum of n independent Unif(0,1) random variables? Answer: Note that for r.v.s X1, X2, ..., Xn,

P(min(X1, X2, ..., Xn) ≥ a) = P(X1 ≥ a, X2 ≥ a, ..., Xn ≥ a)

Similarly,

P(max(X1, X2, ..., Xn) ≤ a) = P(X1 ≤ a, X2 ≤ a, ..., Xn ≤ a)

We will use this principle to find the CDF of U_(n), where U_(n) = max(U1, U2, ..., Un) and Ui ∼ Unif(0, 1) are i.i.d.

P(max(U1, U2, ..., Un) ≤ a) = P(U1 ≤ a, U2 ≤ a, ..., Un ≤ a)
                            = P(U1 ≤ a)P(U2 ≤ a) ··· P(Un ≤ a)
                            = a^n

for 0 < a < 1 (and the CDF is 0 for a ≤ 0 and 1 for a ≥ 1).

Pattern-matching with e^x Taylor series

For X ∼ Pois(λ), find E(1/(X + 1)). Answer: By LOTUS,

E(1/(X + 1)) = Σ_{k=0}^{∞} (1/(k + 1)) e^{−λ} λ^k / k! = (e^{−λ}/λ) Σ_{k=0}^{∞} λ^{k+1}/(k + 1)! = (e^{−λ}/λ)(e^λ − 1)
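The coupon collector answer above, E(N) = n Σ 1/j ≈ n(log n + 0.577), can be checked by simulation. A minimal sketch assuming NumPy; n = 20 and the number of repetitions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 2000

def draws_until_complete(n):
    """Draw uniformly random coupon types until all n have been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(int(rng.integers(n)))
        draws += 1
    return draws

sim = np.mean([draws_until_complete(n) for _ in range(reps)])
exact = n * sum(1 / j for j in range(1, n + 1))
approx = n * (np.log(n) + 0.577)
print(sim, exact, approx)   # all should be close (about 72 for n = 20)
```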
Adam's Law and Eve's Law

William really likes speedsolving Rubik's Cubes. But he's pretty bad at it, so sometimes he fails. On any given day, William will attempt N ∼ Geom(s) Rubik's Cubes. Suppose each time, he has probability p of solving the cube, independently. Let T be the number of Rubik's Cubes he solves during a day. Find the mean and variance of T. Answer: Note that T|N ∼ Bin(N, p). So by Adam's Law,

E(T) = E(E(T|N)) = E(Np) = p(1 − s)/s

Similarly, by Eve's Law, we have that

Var(T) = E(Var(T|N)) + Var(E(T|N)) = E(Np(1 − p)) + Var(Np)
       = p(1 − p)(1 − s)/s + p²(1 − s)/s² = p(1 − s)(p + s(1 − p))/s²

MGF – Finding Moments

Find E(X³) for X ∼ Expo(λ) using the MGF of X. Answer: The MGF of an Expo(λ) is M(t) = λ/(λ − t). To get the third moment, we can take the third derivative of the MGF and evaluate at t = 0:

E(X³) = 6/λ³

But a much nicer way to use the MGF here is via pattern recognition: note that M(t) looks like it came from a geometric series:

1/(1 − t/λ) = Σ_{n=0}^{∞} (t/λ)^n = Σ_{n=0}^{∞} (n!/λ^n)(t^n/n!)

The coefficient of t^n/n! here is the nth moment of X, so we have E(X^n) = n!/λ^n for all nonnegative integers n.

Markov chains (1)

Suppose Xn is a two-state Markov chain with state space {0, 1} and transition matrix

Q = [ 1 − α    α   ]
    [   β    1 − β ]

(rows and columns indexed by states 0 and 1). Find the stationary distribution s⃗ = (s0, s1) of Xn by solving s⃗Q = s⃗, and show that the chain is reversible with respect to s⃗. Answer: The equation s⃗Q = s⃗ says that

s0 = s0(1 − α) + s1 β  and  s1 = s0 α + s1(1 − β)

By solving this system of linear equations, we have

s⃗ = (β/(α + β), α/(α + β))

To show that the chain is reversible with respect to s⃗, we must show s_i q_ij = s_j q_ji for all i, j. This is done if we can show s0 q01 = s1 q10. And indeed,

s0 q01 = αβ/(α + β) = s1 q10

Markov chains (2)

William and Sebastian play a modified game of Settlers of Catan, where every turn they randomly move the robber (which starts on the center tile) to one of the adjacent hexagons.

(a) Is this Markov chain irreducible? Is it aperiodic? Answer: Yes to both. The Markov chain is irreducible because it can get from anywhere to anywhere else. The Markov chain is aperiodic because the robber can return back to a square in 2, 3, 4, 5, ... moves, and the GCD of those numbers is 1.

(b) What is the stationary distribution of this Markov chain? Answer: Since this is a random walk on an undirected graph, the stationary distribution is proportional to the degree sequence. The degree for the corner pieces is 3, the degree for the edge pieces is 4, and the degree for the center pieces is 6. To normalize this degree sequence, we divide by its sum. The sum of the degrees is 6(3) + 6(4) + 7(6) = 84. Thus the stationary probability of being on a corner is 3/84 = 1/28, on an edge is 4/84 = 1/21, and in the center is 6/84 = 1/14.

(c) What fraction of the time will the robber be in the center tile in this game, in the long run? Answer: By the above, 1/14.

(d) What is the expected number of moves it will take for the robber to return to the center tile? Answer: Since this chain is irreducible and aperiodic, to get the expected time to return we can just invert the stationary probability. Thus on average it will take 14 turns for the robber to return to the center tile.

Problem-Solving Strategies

1. Getting started. Start by defining relevant events and random variables. ("Let A be the event that I pick the fair coin"; "Let X be the number of successes.") Clear notation is important for clear thinking! Then decide what it is that you're supposed to be finding, in terms of your notation ("I want to find P(X = 3|A)"). Think about what type of object your answer should be (a number? a random variable? a PMF? a PDF?) and what it should be in terms of. Try simple and extreme cases. To make an abstract experiment more concrete, try drawing a picture or making up numbers that could have happened. Pattern recognition: does the structure of the problem resemble something we've seen before?

2. Calculating probability of an event. Use counting principles if the naive definition of probability applies. Is the probability of the complement easier to find? Look for symmetries. Look for something to condition on, then apply Bayes' Rule or the Law of Total Probability.

3. Finding the distribution of a random variable. First make sure you need the full distribution, not just the mean (see next item). Check the support of the random variable: what values can it take on? Use this to rule out distributions that don't fit. Is there a story for one of the named distributions that fits the problem at hand? Can you write the random variable as a function of an r.v. with a known distribution, say Y = g(X)?

4. Calculating expectation. If it has a named distribution, check out the table of distributions. If it's a function of an r.v. with a named distribution, try LOTUS. If it's a count of something, try breaking it up into indicator r.v.s. If you can condition on something natural, consider using Adam's Law.

5. Calculating variance. Consider independence, named distributions, and LOTUS. If it's a count of something, break it up into a sum of indicator r.v.s. If it's a sum, use properties of covariance. If you can condition on something natural, consider using Eve's Law.

6. Calculating E(X²). Do you already know E(X) or Var(X)? Recall that Var(X) = E(X²) − (E(X))². Otherwise try LOTUS.

7. Calculating covariance. Use the properties of covariance. If you're trying to find the covariance between two components of a Multinomial distribution, Xi, Xj, then the covariance is −npi pj for i ≠ j.

8. Symmetry. If X1, ..., Xn are i.i.d., consider using symmetry.

9. Calculating probabilities of orderings. Remember that all n! orderings of i.i.d. continuous random variables X1, ..., Xn are equally likely.

10. Determining independence. There are several equivalent definitions. Think about simple and extreme cases to see if you can find a counterexample.

11. Do a painful integral. If your integral looks painful, see if you can write your integral in terms of a known PDF (like Gamma or Beta), and use the fact that PDFs integrate to 1.

12. Before moving on. Check some simple and extreme cases, check whether the answer seems plausible, check for biohazards.

Biohazards

1. Don't misapply Bayes' Rule. In applying P(A|B) = P(B|A)P(A)/P(B), it is not correct to say "P(B) = 1 because we know B happened"; P(B) is the prior probability of B. Don't confuse P(A|B) with P(A, B).

2. Don't assume independence without justification. In the matching problem, the probability that card 1 is a match and card 2 is a match is not 1/n². Binomial and Hypergeometric are often confused; the trials are independent in the Binomial story and dependent in the Hypergeometric story.

3. Don't forget to do sanity checks. Probabilities must be between 0 and 1. Variances must be ≥ 0. Supports must make sense. PMFs must sum to 1. PDFs must integrate to 1.

4. Don't confuse random variables, numbers, and events. Let X be an r.v. Then g(X) is an r.v. for any function g. In particular, X², |X|, F(X), and I_{X>3} are r.v.s. P(X² < X | X ≥ 0), E(X), Var(X), and g(E(X)) are numbers. X = 2R and F(X) ≥ −1 are events. It does not make sense to write ∫_{−∞}^{∞} F(X) dx, because F(X) is a random variable. It does not make sense to write P(X), because X is not an event.
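The two-state chain in the Markov chains (1) example above can be checked numerically: the claimed stationary distribution (β/(α + β), α/(α + β)) should satisfy s⃗Q = s⃗, satisfy the reversibility condition, and be the long-run limit of p⃗Q^n. A minimal sketch assuming NumPy; the specific α and β values are made up.

```python
import numpy as np

# Two-state chain from the Markov chains (1) example; alpha, beta are arbitrary.
alpha, beta = 0.3, 0.7
Q = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

s = np.array([beta, alpha]) / (alpha + beta)   # claimed stationary distribution

print(s @ Q, s)                                # s Q = s
print(s[0] * Q[0, 1], s[1] * Q[1, 0])          # reversibility: s0*q01 == s1*q10

# Long-run check: p Q^n converges to s from any starting PMF p.
p = np.array([1.0, 0.0])
for _ in range(100):
    p = p @ Q
print(p)
```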
Table of Distributions

Each entry lists the PMF/PDF with its support, the mean, the variance, and the MGF (with q = 1 − p where relevant).

Bernoulli, Bern(p): P(X = 1) = p, P(X = 0) = q = 1 − p. Mean p. Variance pq. MGF q + pe^t.

Binomial, Bin(n, p): P(X = k) = C(n, k) p^k q^{n−k}, k ∈ {0, 1, 2, ..., n}. Mean np. Variance npq. MGF (q + pe^t)^n.

Geometric, Geom(p): P(X = k) = q^k p, k ∈ {0, 1, 2, ...}. Mean q/p. Variance q/p². MGF p/(1 − qe^t), for qe^t < 1.

Negative Binomial, NBin(r, p): P(X = n) = C(r + n − 1, r − 1) p^r q^n, n ∈ {0, 1, 2, ...}. Mean rq/p. Variance rq/p². MGF (p/(1 − qe^t))^r, for qe^t < 1.

Hypergeometric, HGeom(w, b, n): P(X = k) = C(w, k) C(b, n − k)/C(w + b, n), k ∈ {0, 1, 2, ..., n}. Mean µ = nw/(w + b). Variance ((w + b − n)/(w + b − 1)) n (µ/n)(1 − µ/n). MGF messy.

Poisson, Pois(λ): P(X = k) = e^{−λ} λ^k / k!, k ∈ {0, 1, 2, ...}. Mean λ. Variance λ. MGF e^{λ(e^t − 1)}.

Uniform, Unif(a, b): f(x) = 1/(b − a), x ∈ (a, b). Mean (a + b)/2. Variance (b − a)²/12. MGF (e^{tb} − e^{ta})/(t(b − a)).

Normal, N(µ, σ²): f(x) = (1/(σ√(2π))) e^{−(x − µ)²/(2σ²)}, x ∈ (−∞, ∞). Mean µ. Variance σ². MGF e^{tµ + σ²t²/2}.

Gamma, Gamma(a, λ): f(x) = (1/Γ(a)) (λx)^a e^{−λx} (1/x), x ∈ (0, ∞). Mean a/λ. Variance a/λ². MGF (λ/(λ − t))^a, for t < λ.

Beta, Beta(a, b): f(x) = (Γ(a + b)/(Γ(a)Γ(b))) x^{a−1}(1 − x)^{b−1}, x ∈ (0, 1). Mean µ = a/(a + b). Variance µ(1 − µ)/(a + b + 1). MGF messy.

Log-Normal, LN(µ, σ²): f(x) = (1/(xσ√(2π))) e^{−(log x − µ)²/(2σ²)}, x ∈ (0, ∞). Mean θ = e^{µ + σ²/2}. Variance θ²(e^{σ²} − 1). MGF doesn't exist.

Chi-Square, χ²_n: f(x) = (1/(2^{n/2} Γ(n/2))) x^{n/2−1} e^{−x/2}, x ∈ (0, ∞). Mean n. Variance 2n. MGF (1 − 2t)^{−n/2}, for t < 1/2.

Student-t, t_n: f(x) = (Γ((n + 1)/2)/(√(nπ) Γ(n/2))) (1 + x²/n)^{−(n+1)/2}, x ∈ (−∞, ∞). Mean 0 if n > 1. Variance n/(n − 2) if n > 2. MGF doesn't exist.