Book
Michael Lavine
List of Figures v
List of Tables x
Preface xi
1 Probability 1
1.1 Basic Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Probability Densities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Parametric Families of Distributions . . . . . . . . . . . . . . . . . . . 14
1.3.1 The Binomial Distribution . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 The Poisson Distribution . . . . . . . . . . . . . . . . . . . . . 17
1.3.3 The Exponential Distribution . . . . . . . . . . . . . . . . . . . 20
1.3.4 The Normal Distribution . . . . . . . . . . . . . . . . . . . . . 22
1.4 Centers, Spreads, Means, and Moments . . . . . . . . . . . . . . . . . . 29
1.5 Joint, Marginal and Conditional Probability . . . . . . . . . . . . . . . 41
1.6 Association, Dependence, Independence . . . . . . . . . . . . . . . . . . 50
1.7 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
1.7.1 Calculating Probabilities . . . . . . . . . . . . . . . . . . . . . 57
1.7.2 Evaluating Statistical Procedures . . . . . . . . . . . . . . . . . 61
1.8 R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.9 Some Results for Large Samples . . . . . . . . . . . . . . . . . . . . . . 76
1.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2 Modes of Inference 94
2.1 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.2 Data Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.2.1 Summary Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 95
3 Regression 202
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
3.2 Normal Linear Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
3.2.2 Inference for Linear Models . . . . . . . . . . . . . . . . . . . 222
3.3 Generalized Linear Models . . . . . . . . . . . . . . . . . . . . . . . . 235
3.3.1 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . 235
3.3.2 Poisson Regression . . . . . . . . . . . . . . . . . . . . . . . . . 244
3.4 Predictions from Regression . . . . . . . . . . . . . . . . . . . . . . . . 248
3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Bibliography 456
List of Figures
2.1 quantiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.2 Histograms of tooth growth . . . . . . . . . . . . . . . . . . . . . . . . 102
2.3 Histograms of tooth growth . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1 Posterior densities of β0 and β1 in the ice cream example using the prior
from Equation 6.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
6.2 Numbers of pine cones in 1998 as a function of dbh . . . . . . . . . . . . 354
6.3 Numbers of pine cones in 1999 as a function of dbh . . . . . . . . . . . . 355
6.4 Numbers of pine cones in 2000 as a function of dbh . . . . . . . . . . . . 356
6.5 10,000 MCMC samples of the Be(5, 2) density. Top panel: histogram of
samples from the Metropolis-Hastings algorithm and the Be(5, 2) den-
sity. Middle panel: θi plotted against i. Bottom panel: p(θi ) plotted
against i. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6.6 10,000 MCMC samples of the Be(5, 2) density. Left column: (θ∗ | θ) =
U(θ − 100, θ + 100); Right column: (θ∗ | θ) = U(θ − .00001, θ + .00001).
Top: histogram of samples from the Metropolis-Hastings algorithm and
the Be(5, 2) density. Middle: θi plotted against i. Bottom: p(θi ) plotted
against i. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
6.7 Trace plots of MCMC output from the pine cone code on page 365. . . . 367
6.8 Trace plots of MCMC output from the pine cone code with a smaller
proposal radius. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
6.9 Trace plots of MCMC output from the pine cone code with a smaller
proposal radius and 100,000 iterations. The plots show every 10’th it-
eration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
6.10 Trace plots of MCMC output from the pine cone code with proposal
function g.one and 100,000 iterations. The plots show every 10’th it-
eration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
6.11 Pairs plots of MCMC output from the pine cones example. . . . . . . . . 373
6.12 Trace plots of MCMC output from the pine cone code with proposal
function g.group and 100,000 iterations. The plots show every 10’th
iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
6.13 Pairs plots of MCMC output from the pine cones example with proposal
g.group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
6.14 Posterior density of β2 and γ2 from Example 6.3. . . . . . . . . . . . . . 377
7.3 Percent body fat of major (blue) and minor (purple) Pheidole morrisi
ants at three sites in two seasons. . . . . . . . . . . . . . . . . . . . . . 391
7.4 Residuals from Model 7.4. Each point represents one colony. There is
an upward trend, indicating the possible presence of colony effects. . . . 393
7.5 Some time series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7.6 Yt+1 vs. Yt for the Beaver and Presidents data sets . . . . . . . . . . . . 399
7.7 Yt+k vs. Yt for the Beaver data set and lags 0–5 . . . . . . . . . . . . . . 400
7.8 coplot of Yt+1 ∼ Yt−1 | Yt for the Beaver data set . . . . . . . . . . . . . 402
7.9 Fit of CO2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
7.10 DAX closing prices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
7.11 DAX returns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
7.12 Survival curve for bladder cancer. Solid line for placebo; dashed line
for thiotepa. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7.13 Cumulative hazard and log(hazard) curves for bladder cancer. Solid
line for thiotepa; dashed line for placebo. . . . . . . . . . . . . . . . . 415
8.1 Mean Squared Error for estimating Binomial θ. Sample size = 5, 20,
100, 1000. α = β = 0: solid line. α = β = 0.5: dashed line. α = β = 1:
dotted line. α = β = 4: dash–dotted line. . . . . . . . . . . . . . . . . . 427
8.2 The Be(.39, .01) density . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.3 Densities of Ȳin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.4 Densities of Zin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.5 The δ-method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
8.6 Top panel: asymptotic standard deviations of δn and δ0n for Pr[X ≤ a].
The solid line shows the actual relationship. The dotted line is the line
of equality. Bottom panel: the ratio of asymptotic standard deviations. . 448
List of Tables
2.1 New and Old seedlings in quadrat 6 in 1992 and 1993 . . . . . . . . . . 151
6.1 The numbers of pine cones on trees in the FACE experiment, 1998–2000. 352
7.1 Fat as a percentage of body weight in ant colonies. Three sites, two
seasons, two castes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Preface
Chapter 1
Probability
1. µ(A) ≥ 0 for every A ∈ F,
2. µ(X) = 1, and
3. if A1 and A2 are disjoint sets in F, then µ(A1 ∪ A2) = µ(A1) + µ(A2).
One can show that property 3 holds for any finite collection of disjoint sets, not just two;
see Exercise 1. It is common practice, which we adopt in this text, to assume more — that
property 3 also holds for any countable collection of disjoint sets.
When X is a finite or countably infinite set (usually integers) then µ is said to be a
discrete probability. When X is an interval, either finite or infinite, then µ is said to be
a continuous probability. In the discrete case, F usually contains all possible subsets of
X. But in the continuous case, technical complications prohibit F from containing all
possible subsets of X. See Casella and Berger [2002] or Schervish [1995] for details. In
this text we deemphasize the role of F and speak of probability measures on X without
mentioning F .
In practical examples X is the set of outcomes of an “experiment” and µ is determined
by experience, logic or judgement. For example, consider rolling a six-sided die. The set
of outcomes is {1, 2, 3, 4, 5, 6} so we would assign X ≡ {1, 2, 3, 4, 5, 6}. If we believe the
die to be fair then we would also assign µ({1}) = µ({2}) = · · · = µ({6}) = 1/6. The laws of
probability then imply various other values such as µ({1, 2}) = 2/6 and µ({2, 4, 6}) = 1/2.
Often we omit the braces and write µ(2), µ(5), etc. Setting µ(i) = 1/6 is not automatic
simply because a die has six faces. We set µ(i) = 1/6 because we believe the die to be fair.
We usually use the word “probability” or the symbol P in place of µ. For example, we
would use the following phrases interchangeably:
• P(1)
• µ({1})
For the dice thrower (shooter) the object of the game is to throw a 7 or
an 11 on the first roll (a win) and avoid throwing a 2, 3 or 12 (a loss). If
none of these numbers (2, 3, 7, 11 or 12) is thrown on the first throw (the
Come-out roll) then a Point is established (the point is the number rolled)
against which the shooter plays. The shooter continues to throw until one
of two numbers is thrown, the Point number or a Seven. If the shooter rolls
the Point before rolling a Seven he/she wins, however if the shooter throws
a Seven before rolling the Point he/she loses.
Ultimately we would like to calculate P(shooter wins). But for now, let’s just calculate P(7 or 11), the probability that the shooter wins on the Come-out roll.
Using the language of page 1, what is X in this case? Let d1 denote the number
showing on the first die and d2 denote the number showing on the second die. d1 and
d2 are integers from 1 to 6. So X is the set of ordered pairs (d1 , d2 ) or
(6, 6) (6, 5) (6, 4) (6, 3) (6, 2) (6, 1)
(5, 6) (5, 5) (5, 4) (5, 3) (5, 2) (5, 1)
(4, 6) (4, 5) (4, 4) (4, 3) (4, 2) (4, 1)
(3, 6) (3, 5) (3, 4) (3, 3) (3, 2) (3, 1)
(2, 6) (2, 5) (2, 4) (2, 3) (2, 2) (2, 1)
(1, 6) (1, 5) (1, 4) (1, 3) (1, 2) (1, 1)
If the dice are fair, then the pairs are all equally likely. Since there are 36 of them, we
assign P(d1 , d2 ) = 1/36 for any combination (d1 , d2 ). Finally, we can calculate
P(7 or 11) = P(6, 5) + P(5, 6) + P(6, 1) + P(5, 2)
+ P(4, 3) + P(3, 4) + P(2, 5) + P(1, 6) = 8/36 = 2/9.
The previous calculation uses desideratum 3 for probability measures. The different
pairs (6, 5), (5, 6), . . . , (1, 6) are disjoint, so the probability of their union is the sum of
their probabilities.
Example 1.1 illustrates a common situation. We know the probabilities of some simple
events like the rolls of individual dice, and want to calculate the probabilities of more
complicated events like the success of a Come-out roll. Sometimes those probabilities
can be calculated mathematically as in the example. Other times it is more convenient to
calculate them by computer simulation. We frequently use R to calculate probabilities. To
illustrate, Example 1.2 uses R to calculate by simulation the same probability we found
directly in Example 1.1.
Example 1.2 (Craps, continued)
To simulate the game of craps, we will have to simulate rolling dice. That’s like ran-
domly sampling an integer from 1 to 6. The sample() command in R can do that. For
example, the following snippet of code generates one roll from a fair, six-sided die and
shows R’s response:
> sample(1:6,1)
[1] 1
>
When you start R on your computer, you see >, R’s prompt. Then you can type a
command such as sample(1:6,1) which means “take a sample of size 1 from the
numbers 1 through 6”. (It could have been abbreviated sample(6,1).) R responds
with [1] 1. The [1] is part of how R labels its output; you can ignore it. The 1
is R’s answer to the sample command; it selected the number “1”. Then it gave another
>, showing that it’s ready for another command. Try this several times; you shouldn’t
get “1” every time.
Here’s a longer snippet that does something more useful.
> x <- sample ( 6, 10, replace=T ) # take a sample of
# size 10 and call it x
> x # print the ten values
[1] 6 4 2 3 4 4 3 6 6 2
Note
• # is the comment character. On each line, R ignores all text after #.
• We have to tell R to take its sample with replacement. Otherwise, when R selects
“6” the first time, “6” is no longer available to be sampled a second time. In
replace=T, the T stands for True.
• <- does assignment. I.e., the result of sample ( 6, 10, replace=T ) is as-
signed to a variable called x. The assignment symbol is two characters: < fol-
lowed by -.
• A variable such as x can hold many values simultaneously. When it does, it’s
called a vector. You can refer to individual elements of a vector. For example,
x[1] is the first element of x. x[1] turned out to be 6; x[2] turned out to be 4;
and so on.
• == does comparison. In the snippet above, (x==3) checks, for each element of
x, whether that element is equal to 3. If you just type x == 3 you will see a string
of T’s and F’s (True and False), one for each element of x. Try it.
On average, we expect 1/6 of the draws to equal 1, another 1/6 to equal 2, and so
on. The following snippet is a quick demonstration. We simulate 6000 rolls of a die
and expect about 1000 1’s, 1000 2’s, etc. We count how many we actually get. This
snippet also introduces the for loop, which you should try to understand now because
it will be extremely useful in the future.
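A minimal sketch of such a snippet, with illustrative variable names (the original is not reproduced here):

n.rolls <- 6000
rolls <- sample ( 6, n.rolls, replace=T )   # simulate 6000 rolls of a fair die
counts <- rep ( 0, 6 )
for ( i in 1:6 )
  counts[i] <- sum ( rolls == i )           # how many rolls equal i
print ( counts )                            # each count should be near 1000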
Each number from 1 through 6 was chosen about 1000 times, plus or minus a little bit
due to chance variation.
Now let’s get back to craps. We want to simulate a large number of games, say
1000. For each game, we record either 1 or 0, according to whether the shooter wins
on the Come-out roll, or not. We should print out the number of wins at the end. So
we start with a code snippet like this:
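A sketch of such a starting skeleton; the names n.sim and wins are illustrative assumptions:

n.sim <- 1000                 # number of simulated games
wins <- rep ( 0, n.sim )      # 1 if the shooter wins on the Come-out roll, 0 if not
for ( i in 1:n.sim ) {
  # simulate the Come-out roll and set wins[i]
}
print ( sum(wins) )           # number of wins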
Now we have to figure out how to simulate the Come-out roll and decide whether the
shooter wins. Clearly, we begin by simulating the roll of two dice. So our snippet
expands to
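one way it might look; the wins[i] <- 1 assignment and the "||" come from the explanation that follows, while the rest is an assumption:

n.sim <- 1000
wins <- rep ( 0, n.sim )
for ( i in 1:n.sim ) {
  d <- sample ( 6, 2, replace=T )        # roll two dice
  if ( sum(d) == 7 || sum(d) == 11 )     # a 7 or an 11 wins on the Come-out roll
    wins[i] <- 1
}
print ( sum(wins) )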
The “||” stands for “or”. So that line of code sets wins[i] <- 1 if the sum of the rolls
is either 7 or 11. When I ran this simulation R printed out 219. The calculation in Ex-
ample 1.1 says we should expect around (2/9) × 1000 ≈ 222 wins. Our calculation and
simulation agree about as well as can be expected from a simulation. Try it yourself a
few times. You shouldn’t always get 219. But you should get around 222 plus or minus
a little bit due to the randomness of the simulation.
Try out these R commands in the version of R installed on your computer. Make
sure you understand them. If you don’t, print out the results. Try variations. Try any
tricks you can think of to help you learn R.
Oceanography the temperature of ocean water at a specified latitude, longitude and depth
Probabilities for such outcomes are called continuous. For example, let Y be the time a
Help Line caller spends on hold. The random variable Y is often modelled with a density
similar to that in Figure 1.1.
The curve in the figure is a probability density function or pdf. The pdf is large near
y = 0 and monotonically decreasing, expressing the idea that smaller values of y are
more likely than larger values. (Reasonable people may disagree about whether this pdf
accurately represents callers’ experience.) We typically use the symbols p, π or f for pdf’s.
We would write p(50), π(50) or f (50) to denote the height of the curve at y = 50. For a
pdf, probability is the same as area under the curve. For example, the probability that a
caller waits less than 60 minutes is
P[Y < 60] = \int_0^{60} p(t)\,dt.
A pdf must satisfy two properties: p(y) ≥ 0 for all y, and \int_{-\infty}^{\infty} p(y)\,dy = 1. The first property holds because, if p(y) < 0 on the interval (a, b), then P[Y ∈ (a, b)] = \int_a^b p(y)\,dy < 0; and we can’t have probabilities less than 0. The second property holds because P[Y ∈ (−∞, ∞)] = \int_{-\infty}^{\infty} p(y)\,dy = 1.
One peculiar fact about any continuous random variable Y is that P[Y = a] = 0, for
every a ∈ R. That’s because
P[Y = a] = \lim_{\epsilon \to 0} P[Y \in [a, a + \epsilon]] = \lim_{\epsilon \to 0} \int_a^{a+\epsilon} p_Y(y)\,dy = 0.
Consequently, for any a < b,
P[Y ∈ (a, b)] = P[Y ∈ [a, b)] = P[Y ∈ (a, b]] = P[Y ∈ [a, b]].
The use of “density” in statistics is entirely analogous to its use in physics. In both fields
density = \frac{\text{mass}}{\text{volume}} \qquad (1.1)
In statistics, we interpret density as probability density, mass as probability mass and
volume as length of interval. In both fields, if the density varies from place to place (In
physics it would vary within an object; in statistics it would vary along the real line.) then
the density at a particular location is the limit of Equation 1.1 as volume → 0.
Probability density functions are derivatives of probabilities. For any fixed number a
\frac{d}{db} P[X \in (a, b]] = \frac{d}{db} \int_a^b f_X(x)\,dx = f_X(b). \qquad (1.2)
Similarly, d/da P[X ∈ (a, b]] = − fX (a).
Sometimes we can specify pdf’s for continuous random variables based on the logic
of the situation, just as we could specify discrete probabilities based on the logic of dice
rolls. For example, let Y be the outcome of a spinner that is marked from 0 to 1. Then Y
will be somewhere in the unit interval, and all parts of the interval are equally likely. So
the pdf pY must look like Figure 1.2.
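The snippet that drew Figure 1.2 is not reproduced above; a minimal sketch consistent with the notes below is:

plot ( c(0,1), c(1,1), type="l", xlab="y", ylab="p(y)",
       ylim=c(0,1.1) )   # a flat density of height 1 on the interval [0, 1]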
Note:
• c(0,1) collects 0 and 1 and puts them into the vector (0,1). Likewise, c(1,1)
creates the vector (1,1).
• ylim=c(0,1.1) sets the limits of the y-axis on the plot. If ylim is not specified
then R sets the limits automatically. Limits on the x-axis can be specified with xlim.
At other times we use probability densities and distributions as models for data, and
estimate the densities and distributions directly from the data. Figure 1.3 shows how that
works. The upper panel of the figure is a histogram of 112 measurements of ocean tem-
perature at a depth of 1000 meters in the North Atlantic near 45◦ North latitude and 20◦
degrees West longitude. Example 1.5 will say more about the data. Superimposed on the
histogram is a pdf f . We think of f as underlying the data. The idea is that measuring a
temperature at that location is like randomly drawing a value from f . The 112 measure-
ments, which are spread out over about a century of time, are like 112 independent draws
from f . Having the 112 measurements allows us to make a good estimate of f . If oceanog-
raphers return to that location to make additional measurements, it would be like making
additional draws from f . Because we can estimate f reasonably well, we can predict with
some degree of assurance what the future draws will be like.
The bottom panel of Figure 1.3 is a histogram of the discoveries data set that comes
with R and which is, as R explains, “The numbers of ‘great’ inventions and scientific dis-
coveries in each year from 1860 to 1959.” It is overlaid with a line showing the Poi(3.1)
distribution. (Named distributions will be introduced in Section 1.3.) It seems that the
number of great discoveries each year follows the Poi(3.1) distribution, at least approxi-
mately. If we think the future will be like the past then we should expect future years to
follow a similar pattern. Again, we think of a distribution underlying the data. The number
of discoveries in a single year is like a draw from the underlying distribution. The figure
shows 100 years, which allow us to estimate the underlying distribution reasonably well.
par ( mfrow=c(2,1) )
Figure 1.3: (a): Ocean temperatures at 1000m depth near 45◦ N latitude, −20◦ longitude; (b) Numbers of important discoveries each year 1860–1959.
Note:
• good <- ... calls those points good whose longitude is between -19 and -21 and
whose latitude is between 44 and 46.
• hist() makes a histogram. prob=T turns the y-axis into a probability scale (area
under the histogram is 1) instead of counts.
• mean() calculates the mean. var() calculates the variance. Section 1.4 defines the
mean and variance of distributions. Section 2.2.1 defines the mean and variance of
data sets.
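The full snippet that produced Figure 1.3 is not reproduced above; a minimal sketch of its bottom panel, using the built-in discoveries data set, is:

hist ( discoveries, prob=T, main="", xlab="discoveries" )   # histogram on a probability scale
lines ( 0:12, dpois ( 0:12, 3.1 ) )                         # overlay the Poi(3.1) probabilities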
It is often necessary to transform one variable into another as, for example, Z = g(X)
for some specified function g. We might know pX (The subscript indicates which random
variable we’re talking about.) and want to calculate pZ . Here we consider only monotonic
functions g, so there is an inverse X = h(Z).
Theorem 1.1. Let X be a random variable with pdf pX . Let g be a differentiable, mono-
tonic, invertible function and define Z = g(X). Then the pdf of Z is
p_Z(t) = p_X(g^{-1}(t)) \left| \frac{d\,g^{-1}(t)}{dt} \right|
Proof.
p_Z(b) = \frac{d}{db} P[Z \in (a, b]] = \frac{d}{db} P[X \in (g^{-1}(a), g^{-1}(b)]]
       = \frac{d}{db} \int_{g^{-1}(a)}^{g^{-1}(b)} p_X(x)\,dx = \frac{d\,g^{-1}(t)}{dt}\bigg|_{t=b} \times p_X(g^{-1}(b))
To illustrate, suppose that X is a random variable with pdf pX (x) = 2x on the unit
interval. Let Z = 1/X. What is pZ (z)? The inverse transformation is X = 1/Z. Its
derivative is dx/dz = −z^{−2}. Therefore,
p_Z(z) = p_X(g^{-1}(z)) \left| \frac{d\,g^{-1}(z)}{dz} \right| = \frac{2}{z} \cdot \left| -\frac{1}{z^2} \right| = \frac{2}{z^3}
And the possible values of Z are from 1 to ∞. So p_Z(z) = 2/z^3 on the interval (1, ∞). As a
partial check, we can verify that the integral is 1.
\int_1^{\infty} \frac{2}{z^3}\,dz = -\frac{1}{z^2}\bigg|_1^{\infty} = 1.
Theorem 1.1 can be explained by Figure 1.4. The figure shows an x, a z, and the
function z = g(x). A little interval is shown around x; call it I x . It gets mapped by g into a
little interval around z; call it Iz . The density is
p_Z(z) \approx \frac{P[Z \in I_z]}{\text{length}(I_z)} = \frac{P[X \in I_x]}{\text{length}(I_z)} \approx p_X(x)\,\frac{\text{length}(I_x)}{\text{length}(I_z)}. \qquad (1.3)
The approximations in Equation 1.3 are exact as the lengths of I x and Iz decrease to 0.
If g is not one-to-one, then it is often possible to find subsets of R on which g is one-
to-one, and work separately on each subset.
Figure 1.4: The transformation z = g(x). A small interval Ix around x is mapped by g to a small interval Iz around z.
and so on. Instead of “given” we also use the word “conditional”. So we would say “the
probability of Heads conditional on θ”, etc.
The unknown constant θ is called a parameter. The set of possible values for θ is
denoted Θ (upper case θ). For each θ there is a probability measure µθ . The set of all
possible probability measures (for the problem at hand),
{µθ : θ ∈ Θ},
is called a parametric family of probability measures. The rest of this chapter introduces
four of the most useful parametric families of probability measures.
Toxicity tests Many laboratory animals are exposed to a potential carcinogen. Each either
develops cancer or not.
Quality control Many supposedly identical items are subjected to a test. Each either
passes or not.
1. We are looking at repeated trials. Each Come-out roll is a trial. It results in either
success, or not.
2. The outcome of one trial does not affect the other trials.
3. On each trial, the probability of success is the same.
Figure 1.5: Binomial probabilities for N = 3, 30, 300 and p = 0.1, 0.5, 0.9.
Let X be the number of successes. There are four trials, so N = 4. We calculated the
probability of success in Example 1.1; it’s p = 2/9. So X ∼ Bin(4, 2/9). The probability
of success in at least one Come-out roll is
P[X ≥ 1] = 1 − P[X = 0] = 1 − \left(\frac{7}{9}\right)^4 \approx 0.63, \qquad (1.5)
which can be quickly calculated in R. The dbinom() command computes Binomial prob-
abilities. To compute Equation 1.5 we would write
1 - dbinom(0,4,2/9)
The 0 says what value of X we want. The 4 and the 2/9 are the number of trials and the
probability of success. Try it. Learn it.
Such observations are called Poisson after the 19th century French mathematician Siméon-
Denis Poisson. The number of events in the domain of study helps us learn about the rate.
Some examples are
The rate at which events occur is often called λ; the number of events that occur in the
domain of study is often called X; we write X ∼ Poi(λ). Important assumptions about
Poisson observations are that two events cannot occur at exactly the same location in space
or time, that the occurrence of an event at location ℓ1 does not influence whether an event
occurs at any other location ℓ2, and that the rate at which events arise does not vary over the
domain of study.
When a Poisson experiment is observed, X will turn out to be a nonnegative integer.
The associated probabilities are given by Equation 1.6.
P[X = k \mid λ] = \frac{λ^k e^{-λ}}{k!}. \qquad (1.6)
One of the main themes of statistics is the quantitative way in which data help us learn
about the phenomenon we are studying. Example 1.4 shows how this works when we want
to learn about the rate λ of a Poisson distribution.
Example 1.4 (Seedlings)
Ecologists at the Coweeta Long Term Ecological Research station count the number of new seedlings that emerge in a forest quadrat in a given year. Suppose the count turns out to be X = 3. How well do different values of λ explain that observation? Compare:
P[X = 3 | λ = 1] = \frac{1^3 e^{-1}}{3!} \approx 0.06
P[X = 3 | λ = 2] = \frac{2^3 e^{-2}}{3!} \approx 0.18
P[X = 3 | λ = 3] = \frac{3^3 e^{-3}}{3!} \approx 0.22
P[X = 3 | λ = 4] = \frac{4^3 e^{-4}}{3!} \approx 0.20
In other words, the value λ = 3 explains the data almost four times as well as the
value λ = 1 and just a little bit better than the values λ = 2 and λ = 4. Figure 1.6
shows P[X = 3 | λ] plotted as a function of λ. The figure suggests that P[X = 3 | λ] is
maximized by λ = 3. The suggestion can be verified by differentiating Equation 1.6
with respect to lambda, equating to 0, and solving. The figure also shows that any
value of λ from about 0.5 to about 9 explains the data not too much worse than λ = 3.
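A quick check of that differentiation, for the observed count of 3:
\frac{d}{dλ}\,\frac{λ^3 e^{-λ}}{3!} = \frac{(3λ^2 − λ^3)\,e^{-λ}}{3!} = \frac{λ^2(3 − λ)\,e^{-λ}}{3!},
which, for λ > 0, equals zero exactly when λ = 3.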
Figure 1.6: P[X = 3 | λ] as a function of λ.
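The snippet that produced Figure 1.6 is not reproduced above; a minimal sketch consistent with the notes that follow (lam and y are the names those notes assume) is:

lam <- seq ( 0, 10, length=50 )     # a grid of lambda values
y <- dpois ( 3, lam )               # P[X = 3 | lambda] for each lambda
plot ( lam, y, xlab="lambda", ylab="P[x=3]", type="l" )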
Note:
• dpois calculates probabilities for Poisson distributions the way dbinom does for
Binomial distributions.
• plot produces a plot. In the plot(...) command above, lam goes on the x-axis,
y goes on the y-axis, xlab and ylab say how the axes are labelled, and type="l"
says to plot a line instead of individual points.
Making and interpreting plots is a big part of statistics. Figure 1.6 is a good example.
Just by looking at the figure we were able to tell which values of λ are plausible and which
are not. Most of the figures in this book were produced in R.
In these examples it is expected that most calls, times or distances will be short and a
few will be long. So the density should be large near x = 0 and decreasing as x increases.
Figure 1.7: Exponential densities for λ = 2, 1, 0.2, 0.1.
x <- seq ( 0, 2, length=40 )       # 40 values of x between 0 and 2
lam <- c ( 2, 1, .2, .1 )          # the values of lambda to be plotted
y <- matrix ( NA, 40, 4 )          # to hold p(x) for each combination of x and lambda
for ( i in 1:4 )
  y[,i] <- dexp ( x, 1/lam[i] )    # exponential pdf
matplot ( x, y, type="l", xlab="x", ylab="p(x)", col=1 )
legend ( 1.2, 10, paste ( "lambda =", lam ),
         lty=1:4, cex=.75 )
• We want to plot the exponential density for several different values of λ so we choose
40 values of x between 0 and 2 at which to do the plotting.
• Then we need to calculate and save p(x) for each combination of x and λ. We’ll
save them in a matrix called y. matrix(NA,40,4) creates the matrix. The size of
the matrix is 40 by 4. It is filled with NA, or Not Available.
• dexp calculates the exponential pdf. The argument x tells R the x values at which to
calculate the pdf. x can be a vector. The argument 1/lam[i] tells R the value of the
parameter. R uses a different notation than this book. Where this book says Exp(2),
R says Exp(.5). That’s the reason for the 1/lam[i].
• matplot plots one matrix versus another. The first matrix is x and the second is y.
matplot plots each column of y against each column of x. In our case x is vector,
so matplot plots each column of y, in turn, against x. type="l" says to plot lines
instead of points. col=1 says to use the first color in R’s library of colors.
• legend (...) puts a legend on the plot. The 1.2 and 10 are the x and y coordi-
nates of the upper left corner of the legend box. lty=1:4 says to use line types 1
through 4. cex=.75 sets the character expansion factor to .75. In other words, it
sets the font size.
• paste(..) creates the words that go into the legend. It pastes together "lambda ="
with the four values of lam.
• dnorm(...) computes the Normal pdf. The first argument is the set of x values;
the second argument is the mean; the third argument is the standard deviation.
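A minimal sketch of such a call, using two of the (µ, σ) pairs from Figure 1.8:

x <- seq ( -6, 6, length=100 )
plot ( x, dnorm ( x, -2, 1 ), type="l", ylab="y" )   # the N(-2, 1) density
lines ( x, dnorm ( x, 0, 2 ) )                       # add the N(0, 2) density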
Figure 1.8: Normal densities for (µ, σ) = (−2, 1), (0, 2), (0, 0.5), (2, 0.3), and (−0.5, 3).
Figure 1.9: Ocean temperatures at 45◦ N, 30◦ W, 1000m depth. The N(5.87, .72) density.
• hist produces a histogram. The argument prob=T causes the vertical scale to be
probability density instead of counts.
• The line t <- ... sets 40 values of t in the interval [4, 7.5] at which to evaluate
the Normal density for plotting purposes.
When working with Normal distributions it is extremely useful to think in terms of units
of standard deviation, or simply standard units. One standard unit equals one standard
deviation. In Figure 1.10(a) the number 6.6 is about 1 standard unit above the mean,
Figure 1.10: (a): A sample of size 100 from N(5.87, .72) and the N(5.87, .72) density.
(b): A sample of size 100 from N(42.566, 1.296) and the N(42.566, 1.296) density. (c): A
sample of size 100 from N(0, 1) and the N(0, 1) density.
while the number 4.5 is about 2 standard units below the mean. To see why that’s a
useful way to think, Figure 1.10(b) takes the sample from Figure 1.10(a), multiplies by
9/5 and adds 32, to simulate temperatures measured in ◦ F instead of ◦ C. The histograms in
panels (a) and (b) are slightly different because R has chosen the bin boundaries differently;
but the two Normal curves have identical shapes. Now consider some temperatures, say
6.5◦ C = 43.7◦ F and 4.5◦ C = 40.1◦ F. Corresponding temperatures occupy corresponding
points on the plots. A vertical line at 6.5 in panel (a) divides the density into two sections
exactly congruent to the two sections created by a vertical line at 43.7 in panel (b). A
similar statement holds for 4.5 and 40.1. The point is that the two density curves have
exactly the same shape. They are identical except for the scale on the horizontal axis, and
that scale is determined by the standard deviation. Standard units are a scale-free way of
thinking about the picture.
To continue, we converted the temperatures in panels (a) and (b) to standard units,
and plotted them in panel (c). Once again, R made a slightly different choice for the bin
boundaries, but the Normal curves all have the same shape.
Let Y ∼ N(µ, σ) and define a new random variable Z = (Y −µ)/σ. Z is in standard units.
It tells how many standard units Y is above or below its mean µ. What is the distribution
of Z? The easiest way to find out is to calculate pZ , the density of Z, and see whether we recognize it.
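That calculation is not reproduced here; a sketch of it, using the change-of-variables formula of Theorem 1.1 with Y = σZ + µ:
p_Z(z) = p_Y(σz + µ) \left| \frac{dy}{dz} \right| = \frac{1}{\sqrt{2π}\,σ}\,e^{-\frac{(σz + µ − µ)^2}{2σ^2}} \cdot σ = \frac{1}{\sqrt{2π}}\,e^{-z^2/2},
which is the N(0, 1) density; so Z ∼ N(0, 1).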
different locations. The upper right panel in Figure 1.12 is the same as the top panel of
Figure 1.3. Each histogram in Figure 1.12 has a black circle indicating the “center” or
“location” of the points that make up the histogram. These centers are good estimates
of the centers of the underlying pdf’s. The centers range from a low of about 5◦ at
latitude 45 and longitude -40 to a high of about 9◦ at latitude 35 and longitude -20. (By
convention, longitudes to the west of Greenwich, England are negative; longitudes to
the east of Greenwich are positive.) It’s apparent from the centers that for each latitude,
temperatures tend to get colder as we move from east to west. For each longitude,
temperatures are warmest at the middle latitude and colder to the north and south.
Data like these allow oceanographers to deduce the presence of a large outpouring
of relatively warm water called the Mediterranean tongue from the Mediterranean Sea
into the Atlantic ocean. The Mediterranean tongue is centered at about 1000 meters
depth and 35◦ N latitude, flows from east to west, and is warmer than the surrounding
Atlantic waters into which it flows.
There are many ways of describing the center of a data sample. But by far the most
common is the mean. The mean of a sample, or of any list of numbers, is just the average.
Definition 1.1 (Mean of a sample). The mean of a sample, or any list of numbers, x1 , . . . , xn
is
\text{mean of } x_1, \ldots, x_n = \frac{1}{n} \sum_{i=1}^{n} x_i. \qquad (1.9)
The black circles in Figure 1.12 are means. The mean of x1 , . . . , xn is often denoted
x̄. Means are often a good first step in describing data that are unimodal and roughly
symmetric.
Similarly, means are often useful in describing distributions. For example, the mean
of the pdf in the upper panel of Figure 1.3 is about 8.1, the same as the mean of the data
in same panel. Similarly, in the bottom panel, the mean of the Poi(3.1) distribution is 3.1,
the same as the mean of the discoveries data. Of course we chose the distributions to
have means that matched the means of the data.
For some other examples, consider the Bin(n, p) distributions shown in Figure 1.5. The
center of the Bin(30, .5) distribution appears to be around 15, the center of the Bin(300, .9)
distribution appears to be around 270, and so on. The mean of a distribution, or of a
random variable, is also called the expected value or expectation and is written E(X).
Definition 1.2 (Mean of a random variable). Let X be a random variable with cdf F X and
pdf pX . Then the mean of X (equivalently, the mean of F X ) is
E(X) = \begin{cases} \sum_i i\,P[X = i] & \text{if } X \text{ is discrete} \\ \int x\,p_X(x)\,dx & \text{if } X \text{ is continuous} \end{cases} \qquad (1.10)
Figure 1.11: hydrographic stations off the coast of Europe and Africa
Figure 1.12: Water temperatures (◦ C) at 1000m depth, latitude 25, 35, 45 degrees North
longitude 20, 30, 40 degrees West
The logic of the definition is that E(X) is a weighted average of the possible values of
X. Each value is weighted by its importance, or probability. In addition to E(X), another
common notation for the mean of a random variable X is µX .
Let’s look at some of the families of probability distributions that we have already
studied and calculate their expectations.
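The Binomial calculation referred to in the next paragraph is not reproduced here; a standard version of it, for X ∼ Bin(n, p), runs:
E(X) = \sum_{k=0}^{n} k \binom{n}{k} p^k (1 − p)^{n-k}
     = \sum_{k=1}^{n} k \binom{n}{k} p^k (1 − p)^{n-k}
     = \sum_{k=1}^{n} \frac{n!}{(k−1)!\,(n−k)!}\,p^k (1 − p)^{n-k}
     = np \sum_{k=1}^{n} \frac{(n−1)!}{(k−1)!\,(n−k)!}\,p^{k-1} (1 − p)^{n-k}
     = np \sum_{j=0}^{n-1} \binom{n−1}{j} p^j (1 − p)^{(n-1)-j}
     = np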
The first five equalities are just algebra. The sixth is worth remembering. The sum
\sum_{j=0}^{n-1} \cdots is the sum of the probabilities of the Bin(n − 1, p) distribution. Therefore
the sum is equal to 1. You may wish to compare E(X) to Figure 1.5.
Statisticians also need to measure and describe the spread of distributions, random
variables and samples. In Figure 1.12, the spread would measure how much variation
there is in ocean temperatures at a single location, which in turn would tell us something
about how heat moves from place to place in the ocean. Spread could also describe the
variation in the annual numbers of “great” discoveries, the range of typical outcomes for
a gambler playing a game repeatedly at a casino or an investor in the stock market, or the
uncertain effect of a change in the Federal Reserve Bank’s monetary policy, or even why
different patches of the same forest have different plants on them.
By far the most common measures of spread are the variance and its square root, the
standard deviation.
Var(Y) = E((Y − µY )2 )
The variance (standard deviation) of Y is often denoted σ_Y^2 (σ_Y). The variances of
common distributions will be derived later in the book.
Caution: for reasons which we don’t go into here, many books define the variance of a
sample as Var(y_1, \ldots, y_n) = (n − 1)^{-1} \sum (y_i − \bar{y})^2. For large n there is no practical difference
between the two definitions. And the definition of variance of a random variable remains
unchanged.
While the definition of the variance of a random variable highlights its interpretation
as deviations away from the mean, there is an equivalent formula that is sometimes easier
to compute: Var(Y) = E(Y^2) − (E(Y))^2.
Proof. Var(Y) = E((Y − µ_Y)^2) = E(Y^2 − 2µ_Y Y + µ_Y^2) = E(Y^2) − 2µ_Y E(Y) + µ_Y^2 = E(Y^2) − µ_Y^2.
To develop a feel for what the standard deviation measures, Figure 1.13 repeats Fig-
ure 1.12 and adds arrows showing ± 1 standard deviation away from the mean. Standard
deviations have the same units as the original random variable; variances have squared
units. E.g., if Y is measured in degrees, then SD(Y) is in degrees but Var(Y) is in degrees2 .
Because of this, SD is easier to interpret graphically. That’s why we were able to depict
SD’s in Figure 1.13.
Most “mound-shaped” samples, that is, samples that are unimodal and roughly sym-
metric, follow this rule of thumb:
• about 2/3 of the sample falls within about 1 standard deviation of the mean;
• about 95% of the sample falls within about 2 standard deviations of the mean.
The rule of thumb has implications for predictive accuracy. If x1 , . . . , xn are a sample
from a mound-shaped distribution, then one would predict that future observations will
be around x̄ with, again, about 2/3 of them within about one SD and about 95% of them
within about two SD’s.
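For Normal distributions the rule of thumb can be checked directly in R; pnorm computes the Normal cdf:

pnorm ( 1 ) - pnorm ( -1 )   # about 0.68: probability within 1 SD of the mean
pnorm ( 2 ) - pnorm ( -2 )   # about 0.95: probability within 2 SDs of the mean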
To illustrate further, we’ll calculate the SD of a few mound-shaped random variables
and compare the SD’s to the pdf’s.
Figure 1.13: Water temperatures (◦ C) at 1000m depth, latitude 25, 35, 45 degrees North,
longitude 20, 30, 40 degrees West, with standard deviations
Var(Y) = E(Y^2)
       = \int_{-\infty}^{\infty} \frac{y^2}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy
       = 2 \int_{0}^{\infty} \frac{y^2}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy \qquad (1.13)
       = 2 \int_{0}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy \qquad \text{(integrating by parts)}
       = 1
Figure 1.14 shows the comparison. The top panel shows the pdf of the Bin(30, .5) distri-
bution; the bottom panel shows the N(0, 1) distribution.
Figure 1.14: Two pdf’s with ±1 and ±2 SD’s. top panel: Bin(30, .5); bottom panel: N(0, 1).
par ( mfrow=c(2,1) )
y <- 0:30
sd <- sqrt ( 15 / 2 )
plot ( y, dbinom(y,30,.5), ylab="p(y)" )
arrows ( 15-2*sd, 0, 15+2*sd, 0, angle=60, length=.1,
code=3, lwd=2 )
text ( 15, .008, "+/- 2 SD’s" )
arrows ( 15-sd, .03, 15+sd, .03, angle=60, length=.1,
code=3, lwd=2 )
text ( 15, .04, "+/- 1 SD" )
y <- seq(-3,3,length=60)
plot ( y, dnorm(y,0,1), ylab="p(y)", type="l" )
arrows ( -2, .02, 2, .02, angle=60, length=.1, code=3, lwd=2 )
text ( 0, .04, "+/- 2 SD’s" )
arrows ( -1, .15, 1, .15, angle=60, length=.1, code=3, lwd=2 )
text ( 0, .17, "+/- 1 SD" )
• arrows(x0, y0, x1, y1, length, angle, code, ...) adds arrows to a
plot. See the documentation for the meaning of the arguments.
• text adds text to a plot. See the documentation for the meaning of the arguments.
Variances are second moments. Moments above the second have little applicability.
R has built-in functions to compute means and variances and can compute other mo-
ments easily. Note that R uses the divisor n − 1 in its definition of variance.
# Use R to calculate moments of the Bin(100,.5) distribution
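# (The rest of this snippet is not reproduced above; the lines below are a
#  minimal sketch consistent with the note that follows.)
x <- rbinom ( 5000, 100, .5 )    # 5000 draws from the Bin(100, .5) distribution
mean ( x )                       # sample mean; should be near 100 * .5 = 50
var ( x )                        # sample variance; should be near 100 * .5 * .5 = 25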
• rbinom(...) generates random draws from the binomial distribution. The 5000
says how many draws to generate. The 100 and .5 say that the draws are to be from
the Bin(100, .5) distribution.
Theorem. Let Y be a random variable and let a and b be constants. Then E(a + bY) = a + b E(Y).
Proof. We prove the continuous case; the discrete case is left as an exercise. Let µ = E[Y]. Then
E(a + bY) = \int (a + by)\,p_Y(y)\,dy = a \int p_Y(y)\,dy + b \int y\,p_Y(y)\,dy = a + bµ.
Suppose a polling organization finds that 80% of Democrats and 35% of Republicans favor
the bond referendum. The 80% and 35% are called conditional probabilities because they
are conditional on party affiliation. The notation for conditional probabilities is pS | A . As
usual, the subscript indicates which random variables we’re talking about. Specifically,
pS | A (Y | D) = 0.80; pS | A (N | D) = 0.20;
pS | A (Y | R) = 0.35; pS | A (N | R) = 0.65.
Suppose also that 60% of the population are Democrats and 40% are Republicans. Then the joint and marginal probabilities are:

                For     Against
Democrat        48%     12%       60%
Republican      14%     26%       40%
                62%     38%
The event A = D can be partitioned into the two smaller events (A = D, S = Y) and
(A = D, S = N). So
pA (D) = .60 = .48 + .12 = pA,S (D, Y) + pA,S (D, N).
The event A = R can be partitioned similarly. Too, the event S = Y can be partitioned into
(A = D, S = Y) and (A = R, S = Y). So
pS (Y) = .62 = .48 + .14 = pA,S (D, Y) + pA,S (R, Y).
These calculations illustrate a general principle: To get a marginal probability for
one variable, add the joint probabilities for all values of the other variable. The general
formulae for working simultaneously with two discrete random variables X and Y are
fX,Y (x, y) = fX (x) · fY | X (y | x) = fY (y) · fX | Y (x | y) (1.14)
f_X(x) = \sum_y f_{X,Y}(x, y) \qquad\qquad f_Y(y) = \sum_x f_{X,Y}(x, y)
Sometimes we know joint probabilities and need to find marginals and conditionals; some-
times it’s the other way around. And sometimes we know fX and fY | X and need to find fY
or fX | Y . The following story is an example of the latter. It is a common problem in drug
testing, disease screening, polygraph testing, and many other fields.
The participants in an athletic competition are to be randomly tested for steroid use.
The test is 90% accurate in the following sense: for athletes who use steroids, the test has
a 90% chance of returning a positive result; for non-users, the test has a 10% chance of
returning a positive result. Suppose that only 30% of athletes use steroids. An athlete is
randomly selected. Her test returns a positive result. What is the probability that she is a
steroid user?
This is a problem of two random variables, U, the steroid use of the athlete and T , the
test result of the athlete. Let U = 1 if the athlete uses steroids; U = 0 if not. Let T = 1 if
the test result is positive; T = 0 if not. We want fU | T (1 | 1). We can calculate fU | T if we
know fU,T ; and we can calculate fU,T because we know fU and fT | U . Pictorially,
fU , fT | U −→ fU,T −→ fU | T
             T = 0    T = 1
U = 0         .63      .07      .70
U = 1         .03      .27      .30
              .66      .34
so
f_T(1) = f_{U,T}(0, 1) + f_{U,T}(1, 1) = .07 + .27 = .34
and finally
f_{U \mid T}(1 \mid 1) = \frac{f_{U,T}(1, 1)}{f_T(1)} = \frac{.27}{.34} \approx .80.
In other words, even though the test is 90% accurate, the athlete has only an 80% chance
of using steroids. If that doesn’t seem intuitively reasonable, think of a large number of
athletes, say 100. About 30 will be steroid users of whom about 27 will test positive.
About 70 will be non-users of whom about 7 will test positive. So there will be about 34
athletes who test positive, of whom about 27, or 80% will be users.
Table 1.2 is another representation of the same problem. It is important to become fa-
miliar with the concepts and notation in terms of marginal, conditional and joint distribu-
tions, and not to rely too heavily on the tabular representation because in more complicated
problems there is no convenient tabular representation.
Example 1.6 is a further illustration of joint, conditional, and marginal distributions.
Example 1.6 (Seedlings)
Example 1.4 introduced an observational experiment to learn about the rate of seedling
production and survival at the Coweeta Long Term Ecological Research station in
western North Carolina. For a particular quadrat in a particular year, let N be the
number of new seedlings that emerge. Suppose that N ∼ Poi(λ) for some λ > 0.
Each seedling either dies over the winter or survives to become an old seedling the
next year. Let θ be the probability of survival and X be the number of seedlings that
survive. Suppose that the survival of any one seedling is not affected by the survival
of any other seedling. Then X ∼ Bin(N, θ). Figure 1.15 shows the possible values
of the pair (N, X). The probabilities associated with each of the points in Figure 1.15
are denoted fN,X where, as usual, the subscript indicates which variables we’re talking
about. For example, fN,X (3, 2) is the probability that N = 3 and X = 2.
Figure 1.15: Permissible values of N and X, the number of new seedlings and the number
that survive.
The next step is to figure out what the joint probabilities are. Consider, for example,
the event N = 3. That event can be partitioned into the four smaller events (N = 3, X =
0), (N = 3, X = 1), (N = 3, X = 2), and (N = 3, X = 3). So
fN (3) = fN,X (3, 0) + fN,X (3, 1) + fN,X (3, 2) + fN,X (3, 3)
The Poisson model for N says fN (3) = P[N = 3] = e−λ λ3 /6. But how is the total e−λ λ3 /6
divided into the four parts? That’s where the Binomial model for X comes in. The
division is made according to the Binomial probabilities
\binom{3}{0}(1 − θ)^3 \qquad \binom{3}{1}\,θ(1 − θ)^2 \qquad \binom{3}{2}\,θ^2(1 − θ) \qquad \binom{3}{3}\,θ^3
The e−λ λ3 /6 is a marginal probability like the 60% in the affiliation/support problem.
The binomial probabilities above are conditional probabilities like the 80% and 20%;
they are conditional on N = 3. The notation is fX|N (2 | 3) or P[X = 2 | N = 3]. The joint
probabilities are
f_{N,X}(3, 0) = \frac{e^{-λ}λ^3}{6}\,(1 − θ)^3 \qquad f_{N,X}(3, 1) = \frac{e^{-λ}λ^3}{6}\,3θ(1 − θ)^2
f_{N,X}(3, 2) = \frac{e^{-λ}λ^3}{6}\,3θ^2(1 − θ) \qquad f_{N,X}(3, 3) = \frac{e^{-λ}λ^3}{6}\,θ^3
In general,
f_{N,X}(n, x) = f_N(n)\,f_{X \mid N}(x \mid n) = \frac{e^{-λ}λ^n}{n!}\binom{n}{x}θ^x(1 − θ)^{n-x}
An ecologist might be interested in fX , the pdf for the number of seedlings that will
be recruited into the population in a particular year. For a particular number x, fX (x)
is like looking in Figure 1.15 along the horizontal line corresponding to X = x. To get
fX (x) ≡ P[X = x], we must add up all the probabilities on that line.
f_X(x) = \sum_n f_{N,X}(n, x) = \sum_{n=x}^{\infty} \frac{e^{-λ}λ^n}{n!}\binom{n}{x}θ^x(1 − θ)^{n-x}
       = \sum_{n=x}^{\infty} \frac{e^{-λ(1−θ)}(λ(1 − θ))^{n-x}}{(n − x)!} \cdot \frac{e^{-λθ}(λθ)^x}{x!}
       = \frac{e^{-λθ}(λθ)^x}{x!} \sum_{z=0}^{\infty} \frac{e^{-λ(1−θ)}(λ(1 − θ))^z}{z!}
       = \frac{e^{-λθ}(λθ)^x}{x!}
The last equality follows since \sum_z \cdots = 1 because it is the sum of probabilities from the
Poi(λ(1 − θ)) distribution. The final result is recognized as a probability from the Poi(λ∗)
distribution where λ∗ = λθ. So X ∼ Poi(λ∗).
In the derivation we used the substitution z = n− x. The trick is worth remembering.
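A quick simulation check of this result; the values λ = 4 and θ = 0.25 are arbitrary choices for illustration:

lambda <- 4;  theta <- 0.25
N <- rpois ( 10000, lambda )       # new seedlings in each of 10000 simulated quadrats
X <- rbinom ( 10000, N, theta )    # survivors: X | N ~ Bin(N, theta)
mean ( X )                         # should be near lambda * theta = 1
mean ( X == 3 )                    # simulated P[X = 3]
dpois ( 3, lambda * theta )        # exact P[X = 3] under Poi(lambda * theta)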
For continuous random variables, conditional and joint densities are written p_{X \mid Y}(x \mid y)
and p_{X,Y}(x, y) respectively and, analogously to Equation 1.14, we have
p_{X,Y}(x, y) = p_X(x)\,p_{Y \mid X}(y \mid x) = p_Y(y)\,p_{X \mid Y}(x \mid y) \qquad (1.15)
The logic is the same as for discrete random variables. In order for (X = x, Y = y) to occur
we need either of the following: first X = x occurs, then Y = y occurs given X = x; or first
Y = y occurs, then X = x occurs given Y = y.
Just as for single random variables, probabilities are integrals of the density. If A is a
region in the (x, y) plane, then P[(X, Y) ∈ A] = \int_A p(x, y)\,dx\,dy, where \int_A \cdots indicates a double
integral over the region A. In particular, for a set B of real numbers,
P[X ∈ B] = P[(X, Y) ∈ B × \mathbb{R}] = \int_B \left( \int_{\mathbb{R}} p_{X,Y}(x, y)\,dy \right) dx
An example will help illustrate. A customer calls the computer Help Line. Let X be
the amount of time he spends on hold and Y be the total duration of the call. The amount
of time a consultant spends with him after his call is answered is W = Y − X. Suppose the
joint density is pX,Y (x, y) = e−y in the region 0 < x < y < ∞.
p(x \mid y) = \frac{p(x, y)}{p(y)} = y^{-1}
The two formulae are, of course, equivalent. But when X does arise as part of a pair, there
is still another way to view p(x) and E(X):
p_X(x) = \int p_{X \mid Y}(x \mid y)\,p_Y(y)\,dy = E\left( p_{X \mid Y}(x \mid y) \right) \qquad (1.16)
E(X) = \int \left( \int x\,p_{X \mid Y}(x \mid y)\,dx \right) p_Y(y)\,dy = E\left( E(X \mid Y) \right). \qquad (1.17)
Figure 1.16: (a): the region of R2 where (X, Y) live; (b): the marginal density of X; (c):
the marginal density of Y; (d): the conditional density of X given Y for three values of Y;
(e): the conditional density of Y given X for three values of X; (f): the region W ≤ w
Let X = \begin{cases} 1 & \text{if shooter wins} \\ 0 & \text{if shooter loses} \end{cases}
(Make sure you see why pX (1) = E(X).) Let Y be the outcome of the Come-out roll.
Figure 1.17 shows each variable plotted against every other variable. It is evident from the
plot that petal length and petal width are very closely associated with each other, while
the relationship between sepal length and sepal width is much weaker. Statisticians need
a way to quantify the strength of such relationships.
pairs ( iris[,1:4] )
• pairs() produces a pairs plot, a matrix of scatterplots of each pair of variables. The
names of the variables are shown along the main diagonal of the matrix. The (i, j)th
plot in the matrix is a plot of variable i versus variable j. For example, the upper
right plot has sepal length on the vertical axis and petal width on the horizontal axis.
By far the most common measures of association are covariance and correlation.
in which the diagonal entries are variances and the off-diagonal entries are covariances.
The measurements in iris are in centimeters. To change to millimeters we would
multiply each measurement by 10. Here’s how that affects the covariances.
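The two covariance matrices being compared are not reproduced above; they can be computed with var(), for example:

var ( iris[,1:4] )          # covariances of the measurements in centimeters
var ( 10 * iris[,1:4] )     # covariances after converting to millimeters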
Figure 1.17: Lengths and widths of sepals and petals of 150 iris plants
Each covariance has been multiplied by 100 because each variable has been multiplied by
10. In fact, this rescaling is a special case of the following theorem.
Theorem 1.5. Let X and Y be random variables. Then Cov(aX +b, cY +d) = ac Cov(X, Y).
Proof. Cov(aX + b, cY + d) = E\left[ \big((aX + b) − E(aX + b)\big)\big((cY + d) − E(cY + d)\big) \right] = E\left[ a(X − µ_X)\,c(Y − µ_Y) \right] = ac\,E\left[ (X − µ_X)(Y − µ_Y) \right] = ac\,Cov(X, Y).
Theorem 1.5 shows that Cov(X, Y) depends on the scales in which X and Y are mea-
sured. A scale-free measure of association would also be useful. Correlation is the most
common such measure.
> cor(iris[,1:4])
Sepal.Length Sepal.Width Petal.Length Petal.Width
Sepal.Length 1.0000000 -0.1175698 0.8717538 0.8179411
Sepal.Width -0.1175698 1.0000000 -0.4284401 -0.3661259
Petal.Length 0.8717538 -0.4284401 1.0000000 0.9628654
Petal.Width 0.8179411 -0.3661259 0.9628654 1.0000000
which confirms the visual impression that sepal length, petal length, and petal width are
highly associated with each other, but are only loosely associated with sepal width.
Theorem 1.6 tells us that correlation is unaffected by linear changes in measurement
scale.
Theorem 1.6. Let X and Y be random variables. Then Cor(aX + b, cY + d) = Cor(X, Y).
Proof. See Exercise 47.
Correlation doesn’t measure all types of association; it only measures clustering around
a straight line. The first two columns of Figure 1.18 show data sets that cluster around a
line, but with some scatter above and below the line. These data sets are all well described
by their correlations, which measure the extent of the clustering; the higher the correla-
tion, the tighter the points cluster around the line and the less they scatter. Negative values
of the correlation correspond to lines with negative slopes. The last column of the figure
shows some other situations. The first panel of the last column is best described as having
two isolated clusters of points. Despite the correlation of .96, the panel does not look at all
like the last panel of the second column. The second and third panels of the last column
show data sets that follow some nonlinear pattern of association. Again, their correlations
are misleading. Finally, the last panel of the last column shows a data set in which most
of the points are tightly clustered around a line but in which there are two outliers. The
last column demonstrates that correlations are not good descriptors of nonlinear data sets
or data sets with outliers.
Correlation measures linear association between random variables. But sometimes we
want to say whether two random variables have any association at all, not just linear.
Definition 1.8. Two random variables, X and Y, are said to be independent if p(X | Y) =
p(X), for all values of Y. If X and Y are not independent then they are said to be dependent.
If X and Y are independent then it is also true that p(Y | X) = p(Y). The interpretation
is that knowing one of the random variables does not change the probability distribution
of the other. If X and Y are independent (dependent) we write X ⊥ Y (X 6⊥ Y). If X and
Y are independent then Cov(X, Y) = Cor(X, Y) = 0. The converse is not true. Also, if
X ⊥ Y then p(x, y) = p(x)p(y). This last equality is usually taken to be the definition of
independence.
Do not confuse independent with mutually exclusive. Let X denote the outcome of a
die roll and let A = 1 if X ∈ {1, 2, 3} and A = 0 if X ∈ {4, 5, 6}. A is called an indicator
variable because it indicates the occurrence of a particular event. There is a special notation
for indicator variables:
A = 1{1,2,3} (X).
1{1,2,3} is an indicator function. 1{1,2,3} (X) is either 1 or 0 according to whether X is in the
subscript. Let B = 1{4,5,6} (X), C = 1{1,3,5} (X), D = 1{2,4,6} (X) and E = 1{1,2,3,4} (X). A and
B are dependent because P[A] = .5 but P[A | B] = 0. D and E are independent because
P[D] = P[D | E] = .5. You can also check that P[E] = P[E | D] = 2/3. Do not confuse
dependence with causality. A and B are dependent, but neither causes the other.
For an example, recall the Help Line story on page 46. X and Y were the amount of
time on hold and the total length of the call, respectively. The difference was W = Y − X.
We found p(x | y) = y−1 . Because p(x | y) depends on y, X 6⊥ Y. Similarly, p(y | x) depends
on x. Does that make sense? Would knowing something about X tell us anything about Y?
What about X and W? Are they independent?
p(w \mid x) = \frac{d}{dw} P[W ≤ w \mid X = x] = \frac{d}{dw} P[Y ≤ w + x \mid X = x] = e^{-w}
which does not depend on x. Therefore X ⊥ W. Does that make sense? Would knowing
something about X tell us anything about W?
Example 1.9 (Seedlings, continued)
Examples 1.4, 1.6, and 1.7 were about new seedlings in forest quadrats. Suppose that
ecologists observe the number of new seedlings in a quadrat for k successive years;
call the observations N1 , . . . , Nk . If the seedling arrival rate is the same every year,
then we could adopt the model Ni ∼ Poi(λ). I.e., λ is the same for every year. If λ is
known, or if we condition on λ, then the number of new seedlings in one year tells us
nothing about the number of new seedlings in another year, we would model the Ni ’s
as being independent, and their joint density, conditional on λ, would be
p(n_1, \ldots, n_k \mid λ) = \prod_{i=1}^{k} p(n_i \mid λ) = \prod_{i=1}^{k} \frac{e^{-λ} λ^{n_i}}{n_i!} = \frac{e^{-kλ} λ^{\sum n_i}}{\prod n_i!}
But if λ is unknown then we might treat it like a random variable. It would be a random
variable if, for instance, different years had different λ’s, we chose a year at random or
if we thought of Nature as randomly choosing λ for our particular year. In that case the
data from early years, N1 , . . . , Nm , say, yield information about λ and therefore about
likely values of Nm+1 , . . . , Nk , so the Ni ’s are dependent. In fact,
p(n_1, \ldots, n_k) = \int p(n_1, \ldots, n_k \mid λ)\,p(λ)\,dλ = \int \frac{e^{-kλ} λ^{\sum n_i}}{\prod n_i!}\,p(λ)\,dλ,
which in general does not factor into a product of separate terms for each n_i.
1.7 Simulation
We have already seen, in Example 1.2, an example of computer simulation to estimate a
probability. More broadly, simulation can be helpful in several types of problems: cal-
culating probabilities, assessing statistical procedures, and evaluating integrals. These are
explained and exemplified in the next several subsections.
If y^{(1)}, \ldots, y^{(n)} are simulated draws of Y, then µ_g ≡ E(g(Y)) is sensibly estimated by µ̂_g = n^{-1} \sum g(y^{(i)}).
So as we do a larger and larger simulation we get a more and more accurate estimate of
µ_g.
Probabilities can be computed as special cases of expectations. Suppose we want to
calculate P[Y ∈ S ] for some set S . Define X ≡ 1S (Y). Then P[Y ∈ S ] = E(X) and is
sensibly estimated by
\frac{\text{number of occurrences of } S}{\text{number of trials}} = n^{-1} \sum x^{(i)}.
Example 1.10 illustrates with the game of Craps.
Example 1.10 (Craps, continued)
Example 1.8 calculated the chance of winning the game of Craps. Here is the R code
to calculate the same probability by simulation.
return ( win )
}
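Only the end of sim.craps() appears above; a minimal sketch of the whole function, consistent with the notes that follow (the while(!determined) loop comes from those notes, the rest is an assumption):

sim.craps <- function() {
  d <- sum ( sample ( 6, 2, replace=T ) )             # the Come-out roll
  if ( d == 7 || d == 11 ) win <- 1                   # immediate win
  else if ( d == 2 || d == 3 || d == 12 ) win <- 0    # immediate loss
  else {
    point <- d                                        # a Point is established
    determined <- FALSE
    while ( !determined ) {                           # roll until the game is decided
      d <- sum ( sample ( 6, 2, replace=T ) )
      if ( d == point ) { win <- 1; determined <- TRUE }
      else if ( d == 7 ) { win <- 0; determined <- TRUE }
    }
  }
  return ( win )
}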
• while(!determined) begins a loop. The loop will repeat as many times as neces-
sary as long as !determined is T.
Try the example code a few times. See whether you get about 49% as Example 1.8 sug-
gests.
Along with the estimate itself, it is useful to estimate the accuracy of µ̂_g as an estimate
of µ_g. If the simulations are independent then Var(µ̂_g) = n^{-1} Var(g(Y)); if there are many
of them then Var(g(Y)) can be well estimated by n^{-1} \sum (g(y) − µ̂_g)^2 and SD(g(Y)) can be
well estimated by \sqrt{n^{-1} \sum (g(y) − µ̂_g)^2}. Because SD’s decrease in proportion to n^{-1/2} (see the Central
Limit Theorem), it takes a 100-fold increase in n to get, for example, a 10-fold increase in
accuracy.
Similar reasoning applies to probabilities, but when we are simulating the occurrence
or nonoccurrence of an event, then the simulations are Bernoulli trials, so we have a more
explicit formula for the variance and SD.
Example 1.11 (Craps, continued)
How accurate is the simulation in Example 1.10?
The simulation keeps track of X , the number of successes in n.sim trials. Let θ
be the true probability of success. (Example 1.8 found θ ≈ .49, but in most practical
applications we won’t know θ.)
X ∼ Bin(n.sim, θ)
Var(X) = n.sim θ(1 − θ)
SD(X/n.sim) = √( θ(1 − θ)/n.sim )
What does this mean in practical terms? How accurate is the simulation when n.sim =
50, or 200, or 1000, say? To illustrate we did 1000 simulations with n.sim = 50, then
another 1000 with n.sim = 200, and then another 1000 with n.sim = 1000.
The results are shown as a boxplot in Figure 1.19. In Figure 1.19 there are three
boxes, each with whiskers extending vertically. The box for n.sim = 50 shows that the
median of the 1000 θ̂’s was just about .50 (the horizontal line through the box), that
50% of the θ̂’s fell between about .45 and .55 (the upper and lower ends of the box),
and that almost all of the θ̂’s fell between about .30 and .68 (the extent of the whiskers).
In comparison, the 1000 θ̂’s for n.sim = 200 are spread out about half as much, and
the 1000 θ̂’s for n.sim = 1000 are spread out about half as much again. The factor of
about a half comes from the √n.sim in the denominator of the formula for SD(θ̂). When n.sim increases
by a factor of about 4, the SD decreases by a factor of about 2. See the notes for
Figure 1.19 for a further description of boxplots.
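For a quick numerical check, the formula above can be evaluated at θ = .49 (the value from Example 1.8) for the three simulation sizes:

theta <- 0.49
sqrt ( theta * (1 - theta) / c(50, 200, 1000) )   # approximately 0.071, 0.035, 0.016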
[Figure 1.19 appears here: boxplots of the 1000 simulated θ̂’s (vertical axis, roughly 0.3 to 0.7) for n.sim = 50, 200, and 1000.]
n.sim <- c ( 50, 200, 1000 )                 # the three simulation sizes
N <- 1000                                    # number of repetitions at each size
theta.hat <- matrix ( NA, N, length(n.sim) ) # one column per simulation size
for ( i in seq(along=n.sim) ) {
  for ( j in 1:N ) {
    wins <- 0
    for ( k in 1:n.sim[i] )
      wins <- wins + sim.craps()
    theta.hat[j,i] <- wins / n.sim[i]
  }
}
• matrix ( entries, nrows, ncols ) creates a matrix; its first argument supplies the
entries in the matrix and nrows and ncols are the numbers of rows and columns.
• A boxplot is one way to display a data set. It produces a rectangle, or box, with
a line through the middle. The rectangle contains the central 50% of the data. The
line indicates the median of the data. Extending vertically above and below the box
are dashed lines called whiskers. The whiskers contain most of the outer 50% of the
data. A few extreme data points are plotted singly. See Example 2.3 for another use
of boxplots and a fuller explanation.
1.7.2 Evaluating Statistical Procedures

A poll is commissioned to estimate θ and it is agreed that the pollster will sample 100
students. Three different procedures are proposed.
1. Randomly sample 100 students. Estimate
Which procedure is best? One way to answer the question is by exact calculation, but
another way is by simulation. In the simulation we try each procedure many times to see
how accurate it is, on average. We must choose some “true” values of θG , θI and θ under
which to do the simulation. Here is some R code for the simulation.
print ( apply(theta.hat,2,mean) )
boxplot ( theta.hat ~ col(theta.hat) )
• apply applies a function to a matrix. In the code above, apply(...) applies the
function mean to dimension 2 of the matrix theta.hat. That is, it returns the mean
of each column of theta.hat.
The boxplot, shown in Figure 1.20, shows little practical difference among the three
procedures.
The next example shows how simulation was used to evaluate whether an experiment
was worth carrying out.
Example 1.12 (FACE)
The amount of carbon dioxide, or CO2 , in the Earth’s atmosphere has been steadily
increasing over the last century or so. You can see the increase yourself in the co2
data set that comes with R. Typing ts.plot(co2) makes a time series plot, repro-
duced here as Figure 1.21. Typing help(co2) gives a brief explanation. The data
are the concentrations of CO2 in the atmosphere measured at Mauna Loa each month
from 1959 to 1997. The plot shows a steadily increasing trend imposed on a regular
annual cycle. The primary reason for the increase is burning of fossil fuels. CO2 is
a greenhouse gas that traps heat in the atmosphere instead of letting it radiate out,
so an increase in atmospheric CO2 will eventually result in an increase in the Earth’s
temperature. But what is harder to predict is the effect on the Earth’s plants. Carbon
Figure 1.20: 1000 simulations of θ̂ under three possible procedures for conducting a poll
[Figure 1.21 appears here: panels (a)–(d) showing the co2 series, its residuals, and the annual cycle, plotted against Time (and, for the cycle, month Index).]
2. The code simulates 1000 repetitions of the experiment. That’s the meaning of
nreps <- 1000.
8. Next we simulate the actual biomass of the sites. For the first year, where we
already have measurements, the actual biomass is obtained by dividing the
observed biomass by the simulated measurement errors (see the code below).
For subsequent years the biomass in year i is the biomass in year i-1 multiplied
by the growth factor beta.control or beta.treatment. Biomass is simulated
in the loop for(i in 2:(nyears+1)).
10. The simulations for each year were analyzed by a two-sample t-test which looks
at the ratio
(biomass in year i) / (biomass in year 1)
to see whether it is significantly larger for treatment sites than for control sites.
For our purposes here, we have replaced the t-test with a plot, Figure 1.22,
which shows a clear separation between treatment and control sites after about
5 years.
The DOE did decide to fund the proposal for a FACE experiment in Duke Forest,
at least partly because of the demonstration that such an experiment would have a
reasonably large chance of success.
########################################################
# A power analysis of the FACE experiment
#
npairs <- 3
nyears <- 10
nreps <- 1000
Figure 1.22: 1000 simulations of a FACE experiment. The x-axis is years. The y-axis
shows the mean growth rate (biomass in year i / biomass in year 1) of control plants (lower
solid line) and treatment plants (upper solid line). Standard deviations are shown as dashed
lines.
#############################################################
# measurement errors in biomass
sigmaE <- 0.05
M.actual.control [ , 1, ] <-
M.observed.control [ , 1, ] / errors.control[ , 1, ]
M.actual.treatment [ , 1, ] <-
M.observed.treatment [ , 1, ] / errors.treatment[ , 1, ]
M.actual.treatment [ , i, ] <-
M.actual.treatment [ , i-1, ] * beta.treatment
}
##############################################################
# two-sample t-test on (M.observed[j]/M.observed[1]) removed
# plot added
for ( i in 2:(nyears+1) ) {
ratio.control [ i-1, ] <-
as.vector ( M.observed.control[,i,]
/ M.observed.control[,1,] )
ratio.treatment [ i-1, ] <-
as.vector ( M.observed.treatment[,i,]
/ M.observed.treatment[,1,] )
}
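The plotting command that replaced the t-test is not reproduced in the listing above. A minimal sketch of how a figure like Figure 1.22 could be drawn from ratio.control and ratio.treatment (the line types and axis limits are assumptions, not the author’s code) is:

# Average the growth ratios over sites and repetitions for each year,
# then plot the means (solid) and means +/- one SD (dashed) for both arms.
mean.c <- apply ( ratio.control, 1, mean )
sd.c   <- apply ( ratio.control, 1, sd )
mean.t <- apply ( ratio.treatment, 1, mean )
sd.t   <- apply ( ratio.treatment, 1, sd )
years <- 1:nyears
plot ( years, mean.t, type="l", ylim=range(c(mean.c - sd.c, mean.t + sd.t)),
       xlab="year", ylab="growth rate" )
lines ( years, mean.c )
lines ( years, mean.t + sd.t, lty=2 )
lines ( years, mean.t - sd.t, lty=2 )
lines ( years, mean.c + sd.c, lty=2 )
lines ( years, mean.c - sd.c, lty=2 )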
1.8 R
This section introduces a few more of the R commands we will need to work fluently
with the software. They are introduced in the context of studying a dataset on the percent
bodyfat of 252 men. You should download the data onto your own computer and try out the
analysis in R to develop your familiarity with what will prove to be a very useful tool. The
data can be found at StatLib, an on-line repository of statistical data and software. The
data were originally contributed by Roger Johnson of the Department of Mathematics and
Computer Science at the South Dakota School of Mines and Technology. The StatLib
website is lib.stat.cmu.edu. If you go to StatLib and follow the links to datasets
and then bodyfat you will find a file containing both the data and an explanation. Copy
just the data to a text file named bodyfat.dat on your own computer. The file should
contain just the data; the first few lines should look like this:
The following snippet shows how to read the data into R and save it into bodyfat.
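The snippet itself does not survive in this copy. A minimal sketch, assuming the file has 15 whitespace-separated columns and assigning column names by hand (the names below are an assumption; the StatLib file documents the authoritative list), is:

bodyfat <- read.table ( "bodyfat.dat",
    col.names = c ( "density", "percent.fat", "age", "weight", "height",
                    "neck", "chest", "abdomen", "hip", "thigh",
                    "knee", "ankle", "biceps", "forearm", "wrist" ) )
dim ( bodyfat )     # should report 252 rows and 15 columns
names ( bodyfat )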
• dim gives the dimension of a matrix, a dataframe, or anything else that has a dimen-
sion. For a matrix or dataframe, dim tells how many rows and columns.
• names gives the names of things. names(bodyfat) should tell us the names density,
percent.fat, . . . . It’s used here to check that the data were read the way we in-
tended.
par ( mfrow=c(3,5) )   # a 3-by-5 layout (assumed) to hold all 15 histograms
for ( i in 1:15 ) {
  hist ( bodyfat[[i]], xlab="", main=names(bodyfat)[i] )
}
Although it’s not our immediate purpose, it’s interesting to see what the relationships are
among the variables. Try pairs(bodyfat).
To illustrate some of R’s capabilities and to explore the concepts of marginal, joint and
conditional densities, we’ll look more closely at percent fat and its relation to abdomen
circumference. Begin with a histogram of percent fat.
fat <- bodyfat$per # give these two variables short names
abd <- bodyfat$abd # so we can refer to them easily
par ( mfrow=c(1,1) ) # need just one plot now, not 15
hist ( fat )
We’d like to rescale the vertical axis to make the area under the histogram equal to 1, as
for a density. R will do that by drawing the histogram on a “density” scale instead of a
“frequency” scale. While we’re at it, we’ll also make the labels prettier. We also want to
draw a Normal curve approximation to the histogram, so we’ll need the mean and standard
deviation.
hist ( fat, xlab="", main="percent fat", freq=F )
mu <- mean ( fat )
sigma <- sqrt ( var ( fat )) # standard deviation
lo <- mu - 3*sigma
hi <- mu + 3*sigma
x <- seq ( lo, hi, length=50 )
lines ( x, dnorm ( x, mu, sigma ) )
That looks better, but we can do better still by slightly enlarging the axes. Redraw the
picture, but use
hist ( fat, xlab="", main="percent fat", freq=F,
xlim=c(-10, 60), ylim=c(0,.06) )
The Normal curve fits the data reasonably well. A good summary of the data is that it is
distributed approximately N(19.15, 8.37).
Now examine the relationship between abdomen circumference and percent body fat.
Try the following command.
plot ( abd, fat, xlab="abdomen circumference",
ylab="percent body fat" )
The scatter diagram shows a clear relationship between abdomen circumference and body
fat in this group of men. One man doesn’t fit the general pattern; he has a circumference
around 148 but a body fat only around 35%, relatively low for such a large circumference.
To quantify the relationship between the variables, let’s divide the men into groups accord-
ing to circumference and estimate the conditional distribution of fat given circumference.
If we divide the men into twelfths we’ll have 21 men per group.
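The commands that create the groups and draw the boxplots are missing from this copy; a sketch consistent with the bullets below (the objects cut.pts and groups and the formula fat ~ groups), with the exact call being an assumption, is:

cut.pts <- quantile ( abd, probs=seq(0, 1, length=13) )  # 13 cut points give 12 groups
groups <- cut ( abd, cut.pts, include.lowest=TRUE, labels=FALSE )
boxplot ( fat ~ groups, xlab="abdomen circumference group", ylab="percent fat" )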
• If you don’t see what the cut(abd,...) command does, print out cut.pts and
groups, then look at them until you figure it out.
• Boxplots are a convenient way to compare different groups of data. In this case
there are 12 groups. Each group is represented on the plot by a box with whiskers.
The box spans the first and third quartiles (.25 and .75 quantiles) of fat for that
group. The line through the middle of the box is the median fat for that group. The
whiskers extend to cover most of the rest of the data. A few outlying fat values fall
outside the whiskers; they are indicated as individual points.
• "fat ~ groups" is R’s notation for a formula. It means to treat fat as a function
of groups. Formulas are extremely useful and will arise repeatedly.
The medians increase in not quite a regular pattern. The irregularities are probably due
to the vagaries of sampling. We can find the mean, median and variance of fat for each
group with
mu.fat <- tapply ( fat, groups, mean )
me.fat <- tapply ( fat, groups, median )
sd.fat <- sqrt ( tapply ( fat, groups, var ) )
cbind ( mu.fat, me.fat, sd.fat )
• tapply means "apply to every element of a table." In this case, the table is fat,
grouped according to groups.
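The code that produced the comparison referred to in the next paragraph does not appear here. A sketch, reusing fat, groups, mu.fat, and sd.fat from above (panel layout and line types are assumptions), might be:

# For each circumference group, compare a density estimate of percent fat
# with the Normal density N(mu.fat[i], sd.fat[i]).
par ( mfrow=c(3,4) )
fat.split <- split ( fat, groups )
for ( i in 1:12 ) {
  y <- fat.split[[i]]
  plot ( density(y), xlab="percent fat", main=paste("group", i) )
  x <- seq ( min(y) - 5, max(y) + 5, length=50 )
  lines ( x, dnorm ( x, mu.fat[i], sd.fat[i] ), lty=2 )
}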
The Normal curves seem to fit well. We saw earlier that the marginal (Marginal means
unconditional.) distribution of percent body fat is well approximated by N(19.15, 8.37).
Here we see that the conditional distribution of percent body fat, given that abdomen cir-
cumference is in between the (i − 1)/12 and i/12 quantiles is N(mu.fat[i], sd.fat[i]).
If we know a man’s abdomen circumference even approximately then (1) we can estimate
his percent body fat more accurately and (2) the typical estimation error is smaller.
1.9 Some Results for Large Samples

We do not treat the mechanics of random sampling here; we simply model such data as
y1 , . . . , yn ∼ i.i.d. f
and refer the interested reader to our favorite introductory text on the subject, Freedman
et al. [1998] which has an excellent description of random sampling in general as well as
detailed discussion of the US census and the Current Population Survey.
Suppose y1 , y2 , . . . ∼ i.i.d. f . Let µ = ∫ y f (y) dy and σ² = ∫ (y − µ)² f (y) dy be the
mean and variance of f . Typically µ and σ are unknown and we take the sample in order
to learn about them. We will often use ȳn ≡ (y1 + · · · + yn )/n, the mean of the first n
observations, to estimate µ. Some questions to consider are
• For a sample of size n, how accurate is ȳn as an estimate of µ?
• Does ȳn get closer to µ as n increases?
• How large must n be in order to achieve a desired level of accuracy?
Theorems 1.12 and 1.14 provide answers to these questions. Before stating them we need
some preliminary results about the mean and variance of ȳn .
Theorem 1.7. Let x1 , . . . , xn be random variables with means µ1 , . . . , µn . Then E[x1 + · · · +
xn ] = µ1 + · · · + µn .
Proof. It suffices to prove the case n = 2.
E[x1 + x2 ] = ∫∫ (x1 + x2 ) f (x1 , x2 ) dx1 dx2
= ∫∫ x1 f (x1 , x2 ) dx1 dx2 + ∫∫ x2 f (x1 , x2 ) dx1 dx2
= µ1 + µ2
Corollary 1.8. Let y1 , . . . , yn be a random sample from f with mean µ. Then E[ȳn ] = µ.
Proof. The corollary follows from Theorems 1.3 and 1.7.
Theorem 1.9. Let x1 , . . . , xn be independent random variables with means
µ1 , . . . , µn and SDs σ1 , . . . , σn . Then Var[x1 + · · · + xn ] = σ1² + · · · + σn² .
Proof. It suffices to prove the case n = 2. Using Theorem 1.2,
Var(X1 + X2 ) = E((X1 + X2 )²) − (µ1 + µ2 )²
= E(X1²) + 2E(X1 X2 ) + E(X2²) − µ1² − 2µ1 µ2 − µ2²
= E(X1²) − µ1² + E(X2²) − µ2² + 2 (E(X1 X2 ) − µ1 µ2 )
= σ1² + σ2² + 2 (E(X1 X2 ) − µ1 µ2 ) .
But if X1 ⊥ X2 then f (x1 , x2 ) = f (x1 ) f (x2 ), so
E(X1 X2 ) = ∫∫ x1 x2 f (x1 , x2 ) dx1 dx2
= ∫ x1 ( ∫ x2 f (x2 ) dx2 ) f (x1 ) dx1
= µ2 ∫ x1 f (x1 ) dx1 = µ1 µ2 ,
and therefore Var(X1 + X2 ) = σ1² + σ2² .
Another version of Theorem 1.12 is called the Strong Law of Large Numbers.
Theorem 1.13 (Strong Law of Large Numbers). Let y1 , . . . , yn be a random sample from
a distribution with mean µ and variance σ². Then for any ε > 0,
P[ lim_{n→∞} |Ȳn − µ| < ε ] = 1;
i.e.,
P[ lim_{n→∞} Ȳn = µ ] = 1.
It is beyond the scope of this section to explain the difference between the WLLN and
the SLLN. See Section 8.4.
Theorem 1.14 (Central Limit Theorem). Let y1 , . . . , yn be a random sample from f with
mean µ and variance σ². Let zn = (ȳn − µ)/(σ/√n). Then, for any numbers a < b,
lim_{n→∞} P[zn ∈ [a, b]] = ∫_a^b (1/√(2π)) e^{−w²/2} dw.
I.e., the limiting distribution of zn is N(0, 1).
The Law of Large Numbers is what makes simulations work and why large samples
are better than small. It says that as the number of simulations grows or as the sample
size grows (n → ∞), the average of the simulations or the average of the sample gets
closer and closer to the true value (X̄n → µ). For instance, in Example 1.11, where we
used simulation to estimate P[Shooter wins] in Craps, the estimate became more and more
accurate as the number of simulations increased from 50, to 200, and then to 1000. The
Central Limit Theorem helps us look at those simulations more closely.
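A sketch of the comparison shown in Figure 1.23, reusing the theta.hat matrix and n.sim vector from the craps simulation and taking θ = .49 (the plotting details are assumptions), is:

# Histograms of the simulated theta.hat's with the Normal density suggested
# by the Central Limit Theorem superimposed.
theta <- 0.49
par ( mfrow=c(3,1) )
for ( i in seq(along=n.sim) ) {
  hist ( theta.hat[,i], freq=FALSE, xlab="theta hat",
         main=paste ( "n.sim =", n.sim[i] ) )
  x <- seq ( 0.3, 0.7, length=100 )
  lines ( x, dnorm ( x, theta, sqrt ( theta*(1-theta)/n.sim[i] ) ) )
}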
Colloquially, the Central Limit Theorem says that
1.10 Exercises
1. Show: if µ is a probability measure then for any integer n ≥ 2, and disjoint sets
A1 , . . . , An
µ( ∪_{i=1}^{n} Ai ) = Σ_{i=1}^{n} µ(Ai ).
[Figure 1.23 appears here: three histograms of θ̂ (horizontal axis “theta hat”, vertical axis “Density”), one each for n.sim = 50, 200, and 1000.]
Figure 1.23: Histograms of craps simulations. Solid curves are Normal approximations
according to the Central Limit Theorem.
(a) Simulate 6000 dice rolls. Count the number of 1’s, 2’s, . . . , 6’s.
(b) You expect about 1000 of each number. How close was your result to what
you expected?
(c) About how often would you expect to get more than 1030 1’s? Run an R
simulation to estimate the answer.
3. The Game of Risk In the board game Risk players place their armies in different
countries and try eventually to control the whole world by capturing countries one
at a time from other players. To capture a country, a player must attack it from an
adjacent country. If player A has A ≥ 2 armies in country A, she may attack adjacent
country D. Attacks are made with from 1 to 3 armies. Since at least 1 army must
be left behind in the attacking country, A may choose to attack with a minimum of
1 and a maximum of min(3, A − 1) armies. If player D has D ≥ 1 armies in country
D, he may defend himself against attack using a minimum of 1 and a maximum of
min(2, D) armies. It is almost always best to attack and defend with the maximum
permissible number of armies.
When player A attacks with a armies she rolls a dice. When player D defends with
d armies he rolls d dice. A’s highest die is compared to D’s highest. If both players
use at least two dice, then A’s second highest is also compared to D’s second highest.
For each comparison, if A’s die is higher than D’s then A wins and D removes one
army from the board; otherwise D wins and A removes one army from the board.
When there are two comparisons, a total of two armies are removed from the board.
• If A attacks with one army (she has two armies in country A, so may only attack
with one) and D defends with one army (he has only one army in country D)
what is the probability that A will win?
• Suppose that Player 1 has two armies each in countries C1 , C2 , C3 and C4 , that
Player 2 has one army each in countries B1 , B2 , B3 and B4 , and that country
Ci attacks country Bi . What is the chance that Player 1 will be successful in at
least one of the four attacks?
5. Y is a random variable. Y ∈ (−1, 1). The pdf is p(y) = ky² for some constant, k.
(a) Find k.
(b) Use R to plot the pdf.
(c) Let Z = −Y. Find the pdf of Z. Plot it.
(a) V = U 2 . On what interval does V live? Plot V as a function of U. Find the pdf
of V. Plot pV (v) as a function of v.
(b) W = 2U. On what interval does W live? Plot W as a function of U. Find the
pdf of W. Plot pW (w) as a function of w.
(c) X = − log(U). On what interval does X live? Plot X as a function of U. Find
the pdf of X. Plot pX (x) as a function of x.
f (y) = c e^{−y²/2} ,    −∞ < y < ∞
(a) Find c.
(b) Find the expected value and variance of Y.
(c) What is the pdf of Y 2 ?
9. A teacher randomly selects a student from a Sta 103 class. Let X be the number of
math courses the student has completed. Let Y = 1 if the student is female and Y = 0
if the student is male. Fifty percent of the class is female. Among the women, thirty
percent have completed one math class, forty percent have completed two math
classes and thirty percent have completed three. Among the men, thirty percent
have completed one math class, fifty percent have completed two math classes and
twenty percent have completed three.
11. The random variables X and Y have joint pdf fX,Y (x, y) = 1 in the triangle of the
XY-plane determined by the points (-1,0), (1,0), and (0,1). Hint: Draw a picture.
12. X and Y are uniformly distributed in the unit disk. I.e., the joint density p(x, y) is
constant on the region of R2 such that x2 + y2 ≤ 1.
13. Verify the claim in Example 1.4 that argmaxλ P[x = 3 | λ] = 3. Hint: differentiate
Equation 1.6.
14. (a) p is the pdf of a continuous random variable w. Find ∫_ℝ p(s) ds.
(b) Find ∫_ℝ p(s) ds for the pdf in Equation 1.7.
15. Page 7 says “Every pdf must satisfy two properties . . . ” and that one of them is
“p(y) ≥ 0 for all y.” Explain why that’s not quite right.
16. p(y) = (1/√(2π)) e^{−y²/2} is the pdf of a continuous random variable y. Find ∫_{−∞}^{0} p(s) ds.
17. When spun, an unbiased spinner points to some number y ∈ (0, 1]. What is p(y)?
18. Some exercises on the densities of transformed variables. One of them should illus-
trate the need for the absolute value of the Jacobian.
19. (a) Prove: if X ∼ Poi(λ) then E(X) = λ. Hint: use the same trick we used to derive
the mean of the Binomial distribution.
(b) Prove: if X ∼ N(µ, σ) then E(X) = µ. Hint: change variables in the integral.
20. (a) Prove: if X ∼ Bin(n, p) then Var(X) = np(1 − p). Hint: use Theorem 1.9.
(b) Prove: if X ∼ Poi(λ) then Var(X) = λ. Hint: use the same trick we used to
derive the mean of the Binomial distribution and Theorem 1.2.
(c) If X ∼ Exp(λ), find Var(X). Hint: use Theorem 1.2.
(d) If X ∼ N(µ, σ), find Var(X). Hint: use Theorem 1.2.
24. Consider customers arriving at a service counter. Interarrival times often have a
distribution that is approximately exponential with a parameter λ that depends on
conditions specific to the particular counter. I.e., p(t) = λe−λt . Assume that succes-
sive interarrival times are independent of each other. Let T1 be the arrival time of the
next customer and T2 be the additional time until the arrival of the second customer.
25. A gambler plays at a roulette table for two hours betting on Red at each spin of the
wheel. There are 60 spins during the two hour period. What is the distribution of
26. If human DNA contains xxx bases, and if each base mutates with probability p over
the course of a lifetime, what is the average number of mutations per person? What
is the variance of the number of mutations per person?
27. Isaac is in 5th grade. Each sentence he writes for homework has a 90% chance of
being grammatically correct. The correctness of one sentence does not affect the
correctness of any other sentence. He recently wrote a 10 sentence paragraph for a
writing assignment. Write a formula for the chance that no more than two sentences
are grammatically incorrect.
28. Teams A and B play each other in the World Series of baseball. Team A has a 60%
chance of winning each game. What is the chance that B wins the series? (The
winner of the series is the first team to win 4 games.)
29. A basketball player shoots ten free throws in a game. She has a 70% chance of
making each shot. If she misses the shot, her team has a 30% chance of getting the
rebound.
(a) Let m be the number of shots she makes. What is the distribution of m? What
are its expected value and variance? What is the chance that she makes some-
where between 5 and 9 shots, inclusive?
(b) Let r be the number of rebounds her team gets from her free throws. What is
the distribution of r? What are its expected value and variance? What is the
chance that r ≥ 1?
30. Let (x, y) have joint density function f_{x,y} . There are two ways to find E(y). One way
is to evaluate ∫∫ y f_{x,y} (x, y) dx dy. The other is to start with the joint density f_{x,y} , find
the marginal density f_y , then evaluate ∫ y f_y (y) dy. Show that these two methods give
the same answer.
33. A researcher randomly selects mother-daughter pairs. Let xi and yi be the heights of
the i’th mother and daughter, respectively. True or False:
34. As part of his math homework Isaac had to roll two dice and record the results. Let
X1 be the result of the first die and X2 be the result of the second. What is the
probability that X1 = 1 given that X1 + X2 = 5?
35. A blood test is 99 percent effective in detecting a certain disease when the disease
is present. However, the test also yields a false-positive result for 2 percent of the
healthy patients tested, who have no such disease. Suppose 0.5 percent of the popu-
lation has the disease. Find the conditional probability that a randomly tested indi-
vidual actually has the disease given that her test result is positive.
36. A doctor suspects a patient has the rare medical condition DS, or disstaticularia,
the inability to learn statistics. DS occurs in .01% of the population, or one person
in 10,000. The doctor orders a diagnostic test. The test is quite accurate. Among
people who have DS the test yields a positive result 99% of the time. Among people
who do not have DS the test yields a positive result only 5% of the time.
For the patient in question, the test result is positive. Calculate the probability that
the patient has DS.
37. Ecologists are studying salamanders in a forest. There are two types of forest. Type
A is conducive to salamanders while type B is not. They are studying one forest but
don’t know which type it is. Types A and B are equally likely.
Now we observe y1 = 0.
39. Let X and Y be distributed in the triangle with corners (0, 0), (1, 0), and (1, 1). Let
their joint density be p(x, y) = k(x + y) for some constant k.
(a) Find k.
(b) Find p(x) and p(y).
(c) Are X and Y independent? If so, prove. If not, find a pair (x, y) such that
p(x, y) ≠ p(x)p(y).
(d) Find p(x | y) and p(y | x).
(e) Find E[X] and E[Y].
(f) Find E[X | Y = y] and E[Y | X = x].
(g) Plot E[X | Y = y] as a function of y and E[Y | X = x] as a function of x.
40. Let A ∼ N(0, 1). Given A = a, let B and C be independent, each with distribution
N(a, 1). Given A = a, B = b, and C = c, let D ∼ N(b, 1), E ∼ N(c, 1) and D and E
be independent. Prove: D and E are conditionally independent given A = a.
41. Let λ ∼ Exp(1). Given λ, let X and Y be independent, each with distribution
Exp(1/λ).
42. For various reasons, researchers often want to know the number of people who have
participated in embarrassing activities such as illegal drug use, cheating on tests,
robbing banks, etc. An opinion poll which asks these questions directly is likely to
elicit many untruthful answers. To get around the problem, researchers have devised
the method of randomized response. The following scenario illustrates the method.
A pollster identifies a respondent and gives the following instructions. “Toss a coin,
but don’t show it to me. If it lands Heads, answer question (a). If it lands tails,
answer question (b). Just answer ’yes’ or ’no’. Do not tell me which question you
are answering.
(a) What is the probability that a randomly selected person answers "yes"?
(b) Suppose we survey 100 people. Let X be the number who answer "yes". What
is the distribution of X?
43. In a 1991 article (See Utts [1991] and discussants.) Jessica Utts reviews some of
the history of probability and statistics in ESP research. This question concerns a
particular series of autoganzfeld experiments in which a sender looking at a picture
tries to convey that picture telepathically to a receiver. Utts explains:
In the series of autoganzfeld experiments analyzed by Utts, there were a total of 355
trials. Let X be the number of direct hits.
44. This exercise is based on a computer lab that another professor uses to teach the
Central Limit Theorem. It was originally written in MATLAB but here it’s translated
into R.
Enter the following R commands:
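The commands themselves are missing from this copy; judging from the description that follows, they were presumably something like the two lines below (the object names u and y are taken from that description).

u <- matrix ( runif(1000*250), nrow=1000, ncol=250 )   # 1000 rows, 250 columns of U(0,1) draws
y <- apply ( u, 2, mean )                              # the mean of each column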
These create a 1000x250 (a thousand rows and two hundred fifty columns) matrix of
random draws, called u and a 250-dimensional vector y which contains the means
of each column of u.
Now enter the command hist(u[,1]). This command takes the first column of u
(a column vector with 1000 entries) and makes a histogram. Print out this histogram
and describe what it looks like. What distribution is the runif command drawing
from?
Now enter the command hist(y). This command makes a histogram from the
vector y. Print out this histogram. Describe what it looks like and how it differs from
the one above. Based on the histogram, what distribution do you think y follows?
You generated y and u with the same random draws, so how can they have different
distributions? What’s going on here?
45. Suppose that extensive testing has revealed that people in Group A have IQ’s that
are well described by a N(100, 10) distribution while the IQ’s of people in Group
B have a N(105, 10) distribution. What is the probability that a randomly chosen
individual from Group A has a higher IQ than a randomly chosen individual from
Group B?
(a) Write a formula to answer the question. You don’t need to evaluate the formula.
(b) Write some R code to answer the question.
46. The so-called Monty Hall or Let’s Make a Deal problem has caused much conster-
nation over the years. It is named for an old television program. A contestant is
presented with three doors. Behind one door is a fabulous prize; behind the other
two doors are virtually worthless prizes. The contestant chooses a door. The host
of the show, Monty Hall, then opens one of the remaining two doors, revealing one
of the worthless prizes. Because Monty is the host, he knows which doors conceal
the worthless prizes and always chooses one of them to reveal, but never the door
chosen by the contestant. Then the contestant is offered the choice of keeping what
is behind her original door or trading for what is behind the remaining unopened
door. What should she do?
There are two popular answers.
• There are two unopened doors, they are equally likely to conceal the fabulous
prize, so it doesn’t matter which one she chooses.
• She had a 1/3 probability of choosing the right door initially, a 2/3 chance of
getting the prize if she trades, so she should trade.
48. Theorem 1.11, Chebychev’s Inequality, gives an upper bound on the probability that a
random variable is far from its mean. Specifically, the theorem states that for any random
variable X with mean µ and SD σ, and for any ε > 0, P[|X − µ| ≥ ε] ≤ σ²/ε². This
exercise investigates the conditions under which the upper bound is tight, i.e., the
conditions under which P[|X − µ| ≥ ε] = σ²/ε².
(a) Let µ = 0 and σ = 1. Consider the value ε = 1. Are there any random variables
X with mean 0 and SD 1 such that P[|X| ≥ 1] = 1? If not, explain why. If so,
construct such a distribution.
(b) Now consider values of ε other than 1. Are there any random variables X with
mean 0 and SD 1, and values of ε ≠ 1, such that P[|X − µ| ≥ ε] = σ²/ε² ?
(c) Now let µ and σ be arbitrary and let ε = σ. Are there any continuous random
variables X with mean µ and SD σ such that P[|X − µ| ≥ ε] = σ²/ε² ? If not,
explain. If so, show how to construct one.
(d) Are there any discrete random variables X with mean µ and SD σ such that
P[|X − µ| ≥ ε] = σ²/ε² ? If not, explain. If so, show how to construct one.
(e) Let µ and σ be arbitrary and let ε ≠ σ. Are there any random variables X with
mean µ and SD σ such that P[|X − µ| ≥ ε] = σ²/ε² ? If not, explain. If so, show
how to construct one.
49. (a) State carefully the Central Limit Theorem for a sequence of i.i.d. random vari-
ables.
(b) Suppose X1 , . . . , X100 ∼ iid U(0, 1). What is the standard deviation of X̄, the
mean of X1 , . . . , X100 ?
(c) Use the Central Limit Theorem to find approximately the probability that the
average of the 100 numbers chosen exceeds 0.56.
Chapter 2
Modes of Inference
2.1 Data
This chapter takes up the heart of statistics: making inferences, quantitatively, from data.
The data, y1 , . . . , yn are assumed to be a random sample from a population.
In Chapter 1 we reasoned from f to Y. That is, we made statements like “If the ex-
periment is like . . . , then f will be . . . , and (y1 , . . . , yn ) will look like . . . ” or “E(Y) must
be . . . ”, etc. In Chapter 2 we reason from Y to f . That is, we make statements such as
“Since (y1 , . . . , yn ) turned out to be . . . it seems that f is likely to be . . . ”, or “∫ y f (y) dy
is likely to be around . . . ”, etc. This is a basis for knowledge: learning about the world
by observing it. Its importance cannot be overstated. The field of statistics illuminates
the type of thinking that allows us to learn from data and contains the tools for learning
quantitatively.
Reasoning from Y to f works because samples are usually like the populations from
which they come. For example, if f has a mean around 6 then most reasonably large
samples from f also have a mean around 6, and if our sample has a mean around 6 then
we infer that f likely has a mean around 6. If our sample has an SD around 10 then we
infer that f likely has an SD around 10, and so on. So much is obvious. But can we be
more precise? If our sample has a mean around 6, then can we infer that f likely has a
mean somewhere between, say, 5.5 and 6.5, or can we only infer that f likely has a mean
between 4 and 8, or even worse, between about -100 and 100? When can we say anything
quantitative at all about the mean of f ? The answer is not obvious, and that’s where
statistics comes in. Statistics provides the quantitative tools for answering such questions.
This chapter presents several generic modes of statistical analysis.
Data Description Data description can be visual, through graphs, charts, etc., or numeri-
cal, through calculating sample means, SD’s, etc. Displaying a few simple features
of the data y1 , . . . , yn can allow us to visualize those same features of f . Data de-
scription requires few a priori assumptions about f .
Estimation The goal of estimation is to estimate various aspects of f , such as its mean,
median, SD, etc. Along with the estimate, statisticians try to give quantitative mea-
sures of how accurate the estimates are.
Bayesian Inference Bayesian inference is a way to account not just for the data y1 , . . . , yn ,
but also for other information we may have about f .
Prediction Sometimes the goal of statistical analysis is not to learn about f per se, but
to make predictions about y’s that we will see in the future. In addition to the usual
problem of not knowing f , we have the additional problem that even if we knew f ,
we still wouldn’t be able to predict future y’s exactly.
Hypothesis Testing Sometimes we want to test hypotheses like Head Start is good for
kids or lower taxes are good for the economy or the new treatment is better than the
old.
Decision Making Often, decisions have to be made on the basis of what we have learned
about f . In addition, making good decisions requires accounting for the potential
gains and losses of each decision.
2.2 Data Description

2.2.1 Summary Statistics

In Example 1.5 oceanographers collected temperatures from each of 9 locations. The
measurements from each location were summarized by the sample mean ȳ = n−1 Σ yi ;
comparisons of the 9 sample means helped
oceanographers deduce the presence of the Mediterranean tongue. Similarly, the essen-
tial features of many data sets can be captured in a one-dimensional or low-dimensional
summary. Such a summary is called a statistic. The examples below refer to a data set
y1 , . . . , yn of size n.
Definition 2.1 (Statistic). A statistic is any function, possibly vector valued, of the data.
The most important statistics are measures of location and dispersion. Important ex-
amples of location statistics include
mean The sample mean of the data is ȳ ≡ n−1 Σ yi . In R,
y <- 1:10
mean(y)
computes the mean of the numbers 1 through 10, which is 5.5.
median A median of the data is any number m such that at least half of the yi ’s are less
than or equal to m and at least half of the yi ’s are greater than or equal to m. We
say “a” median instead of “the” median because a data set with an even number of
observations has an interval of medians. For example, if y <- 1:10, then every
m ∈ [5, 6] is a median. When R computes a median it computes a single number by
taking the midpoint of the interval of medians. So median(y) yields 5.5.
quantiles For any p ∈ [0, 1], the p-th quantile of the data should be, roughly speaking,
the number q such that pn of the data points are less than q and (1 − p)n of the data
points are greater than q.
Figure 2.1 illustrates the idea. Panel a shows a sample of 100 points plotted as
a stripchart (page 109). The black circles on the abscissa are the .05, .5, and .9
quantiles; so 5 points (open circles) are to the left of the first vertical line, 50 points
are on either side of the middle vertical line, and 10 points are to the right of the
third vertical line. Panel b shows the empirical cdf of the sample. The values .05, .5,
and .9 are shown as squares on the vertical axis; the quantiles are found by following
the horizontal lines from the vertical axis to the cdf, then the vertical lines from the
cdf to the horizontal axis. Panels c and d are similar, but show the distribution from
which the sample was drawn instead of showing the sample itself. In panel c, 5%
of the mass is to the left of the first black circle; 50% is on either side of the middle
black circle; and 10% is to the right of the third black dot. In panel d, the open
squares are at .05, .5, and .9 on the vertical axis; the quantiles are the circles on the
horizontal axis.
Denote the p-th quantile as q p (y1 , . . . , yn ), or simply as q p if the data set is clear from
the context. With only a finite sized sample q p (y1 , . . . , yn ) cannot be found exactly.
So the algorithm for finding quantiles works as follows.
1. Sort the data from smallest to largest, yielding the order statistic y(1) ≤ · · · ≤ y(n) .
If p is a “nice” number then q p is often given a special name. For example, q.5 is
the median; (q.25 , q.5 , q.75 ), the first, second and third quartiles, is a vector-valued
statistic of dimension 3; q.1 , q.2 , . . . are the deciles; q.78 is the 78’th percentile.
R can compute quantiles. When faced with p ∈ ( i/(n−1), (i+1)/(n−1) ), R does linear interpolation.
E.g., quantile(y,c(.25,.75)) yields (3.25, 7.75).
The vector (y(1) , . . . , y(n) ) defined in step 1 of the algorithm for quantiles is an n-
dimensional statistic called the order statistic. y(i) by itself is called the i’th order
statistic.
par ( mfrow=c(2,2) )
quant <- c ( .05, .5, .9 )
nquant <- length(quant)
Figure 2.1: Quantiles. The black circles are the .05, .5, and .9 quantiles. The open squares
are the numbers .05, .5, and .9 on the vertical axis. Panels a and b are for a sample; panels c
and d are for a distribution.
y <- seq(0,10,length=100)
plot ( y, dgamma(y,3), type="l", xlim=c(0,10), ylab="p(y)",
main="c" )
points ( x=qgamma(quant,3), y=rep(0,nquant), pch=19 )
Dispersion statistics measure how spread out the data are. Since there are many ways
to measure dispersion there are many dispersion statistics. Important dispersion statistics
include
standard deviation The sample standard deviation or SD of a data set is
s ≡ √( Σ (yi − ȳ)² / n )
Note: some statisticians prefer
s ≡ √( Σ (yi − ȳ)² / (n − 1) )
for reasons which do not concern us here. If n is large there is little difference
between the two versions of s.
variance The sample variance is
s² ≡ Σ (yi − ȳ)² / n
Note: some statisticians prefer
s² ≡ Σ (yi − ȳ)² / (n − 1)
for reasons which do not concern us here. If n is large there is little difference
between the two versions of s2 .
interquartile range The interquartile range is q.75 − q.25
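In R these dispersion statistics are a one-liner each; note that the built-in var and sd functions use the n − 1 versions described above.

y <- 1:10
var ( y )                                   # sample variance, n-1 denominator
sd ( y )                                    # sample SD, n-1 denominator
sqrt ( mean ( (y - mean(y))^2 ) )           # the n version of the SD
quantile ( y, .75 ) - quantile ( y, .25 )   # interquartile range
IQR ( y )                                   # the same thing, built in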
Presenting a low dimensional statistic is useful if we believe that the statistic is repre-
sentative of the whole population. For instance, in Example 1.5, oceanographers believe
the data they have collected is representative of the long term state of the ocean. Therefore
the sample means at the nine locations in Figure 1.12 are representative of the long term
state of the ocean at those locations. More formally, for each location we can imagine
a population of temperatures, one temperature for each moment in time. That popula-
tion has an unknown pdf f . Even though our data are not really a random sample from
f (The sampling times were not chosen randomly, among other problems.) we can think
of them that way without making too serious an error. The histograms in Figure 1.12 are
estimates of the f ’s for the nine locations. The mean of each f is what oceanographers
call a climatological mean, or an average which, because it is taken over a long period of
time, represents the climate. The nine sample means are estimates of the nine climato-
logical mean temperatures at those nine locations. Simply presenting the sample means
reveals some interesting structure in the data, and hence an interesting facet of physical
oceanography.
Often, more than a simple data description or display is necessary; the statistician has
to do a bit of exploring the data set. This activity is called exploratory data analysis or
simply eda. It is hard to give general rules for eda, although displaying the data in many
different ways is often a good idea. The statistician must decide what displays and eda
are appropriate for each data set and each question that might be answered by the data set.
That is one thing that makes statistics interesting. It cannot be reduced to a set of rules
and procedures. A good statistician must be attuned to the potentially unique aspects of
each analysis. We now present several examples to show just a few of the possible ways to
explore data sets by displaying them graphically. The examples reveal some of the power
of graphical display in illuminating data and teasing out what it has to say.
Histograms The next examples use histograms to display the full distribution of some
data sets. Visual comparison of the histograms reveals structure in the data.
Example 2.1 (Tooth Growth)
The R statistical language comes with many data sets. Type data() to see what they
are. This example uses the data set ToothGrowth on the effect of vitamin C on tooth
growth in guinea pigs. You can get a description by typing help(ToothGrowth). You
can load the data set into your R session by typing data(ToothGrowth). ToothGrowth
is a dataframe of three columns. The first few rows look like this:
Column 1, or len, records the amount of tooth growth. Column 2, supp, records
whether the guinea pig was given vitamin C in ascorbic acid or orange juice. Column 3,
dose, records the dose, either 0.5, 1.0 or 2.0 mg. Thus there are six groups of guinea
pigs in a two by three layout. Each group has ten guinea pigs, for a total of sixty
observations. Figure 2.2 shows histograms of growth for each of the six groups. From
Figure 2.2 it is clear that dose affects tooth growth.
Figure 2.2: Histograms of tooth growth by delivery method (VC or OJ) and dose (0.5, 1.0
or 2.0).
Figure 2.3: Histograms of tooth growth by delivery method (VC or OJ) and dose (0.5, 1.0
or 2.0).
Figure 2.4: Histograms of tooth growth by delivery method (VC or OJ) and dose (0.5, 1.0
or 2.0).
• unique(x) returns the unique values in x. For example, if x <- c(1,1,2) then
unique(x) would be 1 2.
Figure 2.3 is similar to Figure 2.2 but laid out in the other direction. (Notice that it’s
easier to compare histograms when they are arranged vertically rather than horizon-
tally.) The figures suggest that delivery method does have an effect, but not as strong
as the dose effect. Notice also that Figure 2.3 is more difficult to read than Figure 2.2
because the histograms are too tall and narrow. Figure 2.4 repeats Figure 2.3 but us-
ing less vertical distance; it is therefore easier to read. Part of good statistical practice
is displaying figures in a way that makes them easiest to read and interpret.
The figures alone have suggested that dose is the most important effect, and de-
livery method less so. A further analysis could try to be more quantitative: what is
the typical size of each effect, how sure can we be of the typical size, and how much
does the effect vary from animal to animal. The figures already suggest answers, but
a more formal analysis is deferred to Section 2.7.
Figures 1.12, 2.2, and 2.3 are histograms. The abscissa has the same scale as the data.
The data are divided into bins. The ordinate shows the number of data points in each bin.
(hist(...,prob=T) plots the ordinate as probability rather than counts.) Histograms are
a powerful way to display data because they give a strong visual impression of the main
features of a data set. However, details of the histogram can depend on both the number
of bins and on the cut points between bins. For that reason it is sometimes better to use
a display that does not depend on those features, or at least not so strongly. Example 2.2
illustrates.
Density Estimation
Example 2.2 (Hot Dogs)
In June of 1986, Consumer Reports published a study of hot dogs. The data are
available at DASL, the Data and Story Library, a collection of data sets for free use by
statistics students. DASL says the data are
This example looks at the calorie content of beef hot dogs. (Later examples will com-
pare the calorie contents of different types of hot dogs.)
Figure 2.5(a) is a histogram of the calorie contents of beef hot dogs in the study.
From the histogram one might form the impression that there are two major varieties of
beef hot dogs, one with about 130–160 calories or so, another with about 180 calories
or so, and a rare outlier with fewer calories. Figure 2.5(b) is another histogram of the
same data but with a different bin width. It gives a different impression, that calorie
content is evenly distributed, approximately, from about 130 to about 190 with a small
number of lower calorie hot dogs. Figure 2.5(c) gives much the same impression as
2.5(b). It was made with the same bin width as 2.5(a), but with cut points starting at 105
instead of 110. These histograms illustrate that one’s impression can be influenced by
both bin width and cut points.
Density estimation is a method of reducing dependence on cut points. Let x1 , . . . ,
x20 be the calorie contents of beef hot dogs in the study. We think of x1 , . . . , x20 as a
random sample from a density f representing the population of all beef hot dogs. Our
goal is to estimate f . For any fixed number x, how shall we estimate f (x)? The idea
is to use information local to x to estimate f (x). We first describe a basic version, then
add two refinements to get kernel density estimation and the density() function in R.
Let n be the sample size (20 for the hot dog data). Begin by choosing a number
h > 0. For any number x the estimate fˆbasic (x) is defined to be
fˆbasic (x) ≡ (2nh)^{−1} Σ_{i=1}^{n} 1_{(x−h, x+h)} (xi ) = (fraction of sample points within h of x) / (2h)
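A direct R translation of fˆbasic, written as a small function (the argument names are ours, not the author’s), looks like this:

# f.basic(x, x.samp, h): the fraction of sample points within h of x, divided by 2h.
f.basic <- function ( x, x.samp, h ) {
  mean ( abs(x.samp - x) < h ) / (2*h)
}

For instance, f.basic(150, calories, 10) would estimate the density at 150 calories with half-width h = 10, assuming the beef calorie contents are stored in a vector called calories (the name is an assumption).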
1. fˆbasic (x) gives equal weight to all data points in the interval (x − h, x + h) and has
abrupt cutoffs at the ends of the interval. It would be better to give the most
weight to data points closest to x and have the weights decrease gradually for
points increasingly further away from x.
We deal with these problems by introducing a weight function that depends on distance
from x. Let g0 be a probability density function. Usually g0 is chosen to be symmetric
and unimodal, centered at 0. Define
fˆ(x) ≡ n^{−1} Σ g0 (x − xi )
Finally, rescale the weight function by a bandwidth h > 0, setting g(x) ≡ h^{−1} g0 (x/h), and define the kernel
density estimate to be
fˆh (x) ≡ n^{−1} Σ g(x − xi ) = (nh)^{−1} Σ g0 ( (x − xi )/h )
h is called the bandwidth. Of course fˆh still depends on h. It turns out that dependence
on bandwidth is not really a problem. It is useful to view density estimates for several
different bandwidths. Each reveals features of f at different scales. Figures 2.5(d), (e),
and (f) are examples. Panel (d) was produced by the default bandwidth; panels (e)
and (f) were produced with 1/4 and 1/2 the default bandwidth. Larger bandwidth
makes a smoother estimate of f ; smaller bandwidth makes it rougher. None is exactly
right. It is useful to look at several.
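In R the three estimates can be produced with the density() function and its adjust argument; here the beef calories are assumed to be in a vector called calories (the name is ours, not the author’s).

par ( mfrow=c(1,3) )
plot ( density ( calories ), main="default bandwidth" )
plot ( density ( calories, adjust=1/4 ), main="1/4 the default bandwidth" )
plot ( density ( calories, adjust=1/2 ), main="1/2 the default bandwidth" )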
Figure 2.5: (a), (b), (c): histograms of calorie contents of beef hot dogs; (d), (e), (f):
density estimates of calorie contents of beef hot dogs.
• In panel (a) R used its default method for choosing histogram bins.
• R uses a Gaussian kernel by default which means that g0 above is the N(0, 1)
density.
• In panels (e) and (f) the bandwidth was set to 1/4 and 1/2 the default by
density(..., adjust=...).
Stripcharts and Dotplots Figure 2.6 uses the ToothGrowth data to illustrate stripcharts,
also called dotplots, an alternative to histograms. Panel (a) has three rows of points corre-
sponding to the three doses of ascorbic acid. Each point is for one animal. The abscissa
shows the amount of tooth growth; the ordinate shows the dose. The panel is slightly
misleading because points with identical coordinates are plotted directly on top of each
other. In such situations statisticians often add a small amount of jitter to the data, to avoid
overplotting. The middle panel is a repeat of the top, but with jitter added. The bottom
panel shows tooth growth by delivery method. Compare Figure 2.6 to Figures 2.2 and 2.3.
Which is a better display for this particular data set?
par ( mfrow=c(3,1) )
stripchart ( ToothGrowth$len ~ ToothGrowth$dose, pch=1,
main="(a)", xlab="growth", ylab="dose" )
stripchart ( ToothGrowth$len ~ ToothGrowth$dose,
method="jitter", main="(b)", xlab="growth",
ylab="dose", pch=1 )
stripchart ( ToothGrowth$len ~ ToothGrowth$supp,
method="jitter", main="(c)", xlab="growth",
ylab="method", pch=1 )
Figure 2.6: (a) Tooth growth by dose, no jittering; (b) Tooth growth by dose with jittering;
(c) Tooth growth by delivery method with jittering
range away from the median.) Finally, there may be some individual points plotted
above or below each boxplot. These indicate outliers, or scores that are extremely
high or low relative to other scores on that quiz. Many quizzes had low outliers; only
Quiz 5 had a high outlier.
Box plots are extremely useful for comparing many sets of data. We can easily
see, for example, that Quiz 5 was the most difficult (75% of the class scored 3 or less.)
while Quiz 1 was the easiest (over 75% of the class scored 10.)
There were no exams or graded homeworks. Students’ grades were determined
by their best 20 quizzes. To compute grades, each student’s scores were sorted, the
first 4 were dropped, then the others were averaged. Those averages are displayed
in a stripchart in the bottom panel of the figure. It’s easy to see that most of the class
had quiz averages between about 5 and 9 but that 4 averages were much lower.
QQ plots Sometimes we want to assess whether a data set is well modelled by a Normal
distribution and, if not, how it differs from Normal. One obvious way to assess Normality
is by looking at histograms or density estimates. But the answer is often not obvious from
the figure. A better way to assess Normality is with QQ plots. Figure 2.8 illustrates for the
nine histograms of ocean temperatures in Figure 1.12.
Each panel in Figure 2.8 was created with the ocean temperatures near a particular
(latitude, longitude) combination. Consider, for example, the upper left panel which was
[Figure appears here: boxplots of scores (0–10) on the individual quizzes (top panel) and a stripchart of the student averages (bottom panel, axis labeled “score”).]
constructed from the n = 213 points x1 , . . . , x213 taken near (45, -40). Those points are
sorted, from smallest to largest, to create the order statistic (x(1) , . . . , x(213) ). Then they are
plotted against E[(z(1) , . . . , z(213) )], the expected order statistic from a Normal distribution.
If the xi s are approximately Normal then the QQ plot will look approximately linear. The
slope of the line indicates the standard deviation.
In Figure 2.8 most of the panels do look approximately linear, indicating that a Normal
model is reasonable. But some of the panels show departures from Normality. In the upper
left and lower left panels, for example, the plots look roughly linear except for the upper
right corners which show some data points much warmer than expected if they followed
a Normal distribution. In contrast, the coolest temperatures in the lower middle panel are
not quite as cool as expected from a Normal distribution.
[Figure 2.8 appears here: Normal QQ plots of the ocean temperatures at the nine locations; the panel headings give the sample sizes (e.g., n = 37, 24, 44, 47, 35, 27).]
Example 2.4
In 1973 UC Berkeley investigated its graduate admissions rates for potential sex bias.
Apparently women were more likely to be rejected than men. The data set UCBAdmissions
gives the acceptance and rejection data from the six largest graduate departments on
which the study was based. Typing help(UCBAdmissions) tells more about the data.
It tells us, among other things:
...
Format:
No Name Levels
1 Admit Admitted, Rejected
2 Gender Male, Female
3 Dept A, B, C, D, E, F
...
The major question at issue is whether there is sex bias in admissions. To investigate
we ask whether men and women are admitted at roughly equal rates.
Typing UCBAdmissions gives the following numerical summary of the data.
, , Dept = A
Gender
Admit Male Female
Admitted 512 89
Rejected 313 19
, , Dept = B
Gender
Admit Male Female
Admitted 353 17
Rejected 207 8
, , Dept = C
Gender
Admit Male Female
Admitted 120 202
Rejected 205 391
, , Dept = D
Gender
Admit Male Female
Admitted 138 131
Rejected 279 244
, , Dept = E
Gender
Admit Male Female
Admitted 53 94
Rejected 138 299
, , Dept = F
Gender
Admit Male Female
Admitted 22 24
Rejected 351 317
For each department, the twoway table of admission status versus sex is displayed.
Such a display, called a crosstabulation, simply tabulates the number of entries in each
cell of a multiway table. It’s hard to tell from the crosstabulation whether there is a sex
bias and, if so, whether it is systemic or confined to just a few departments. Let’s
continue by finding the marginal (aggregated over departments, as opposed to conditional
on department) admission rates for men and women.
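One way to aggregate the three-way table over departments (a sketch, not necessarily the command the author used) is:

# Sum the Admit-by-Gender-by-Dept table over departments,
# leaving a 2-by-2 table of Admit by Gender.
apply ( UCBAdmissions, c(1, 2), sum )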
The admission rate for men is 1198/(1198 + 1493) = 44.5% while the admission rate
for women is 557/(557 + 1278) = 30.4%, much lower. A mosaic plot, created with
[Two mosaic plots of the aggregated data appear here: one splits first by Gender and then by Admit, the other first by Admit and then by Gender.]
The next example is about the duration of eruptions and interval to the next eruption of
the Old Faithful geyser. It explores two kinds of relationships — the relationship between
duration and eruption and also the relationship of each variable with time.
Example 2.5 (Old Faithful)
Old Faithful is a geyser in Yellowstone National Park and a great tourist attraction. As
Denby and Pregibon [1987] explain, “From August 1 to August 8, 1978, rangers and
naturalists at Yellowstone National Park recorded the duration of eruption and interval
to the next eruption (both in minutes) for eruptions of Old Faithful between 6 a.m.
and midnight. The intent of the study was to predict the time of the next eruption, to
be posted at the Visitor’s Center so that visitors to Yellowstone can usefully budget
their time.” The R dataset faithful contains the data. In addition to the references
listed there, the data and analyses can also be found in Weisberg [1985] and Denby and
Pregibon [1987]. The latter analysis emphasizes graphics, and we shall follow some of
their suggestions here.
We begin exploring the data with stripcharts and density estimates of durations
and intervals. These are shown in Figure 2.11. The figure suggests bimodal distri-
butions. For duration there seems to be one bunch of data around two minutes and
another around four or five minutes. For interval, the modes are around 50 minutes
and 80 minutes. A plot of interval versus duration, Figure 2.12, suggests that the bi-
modality is present in the joint distribution of the two variables. Because the data were
collected over time, it might be useful to plot the data in the order of collection. That’s
Figure 2.13. The horizontal scale in Figure 2.13 is so compressed that it’s hard to see
what’s going on. Figure 2.14 repeats Figure 2.13 but divides the time interval into two
subintervals to make the plots easier to read. The subintervals overlap slightly. The
persistent up-and-down character of Figure 2.14 shows that, for the most part, long
and short durations are interwoven, as are long and short intervals. (Figure 2.14 is po-
tentially misleading. The data were collected over an eight day period. There are eight
separate sequences of eruptions with gaps in between. The faithful data set does
not tell us where the gaps are. Denby and Pregibon [1987] tell us where the gaps are
and use the eight separate days to find errors in data transcription.) Just this simple
analysis, a collection of four figures, has given us insight into the data that will be very
useful in predicting the time of the next eruption.
Figures 2.11, 2.12, 2.13, and 2.14 were produced with the following R code.
data(faithful)
attach(faithful)
par ( mfcol=c(2,2) )
stripchart ( eruptions, method="jitter", pch=1, xlim=c(1,6),
xlab="duration (min)", main="(a)" )
plot ( density ( eruptions ), type="l", xlim=c(1,6),
xlab="duration (min)", main="(b)" )
stripchart ( waiting, method="jitter", pch=1, xlim=c(40,100),
xlab="waiting (min)", main="(c)" )
plot ( density ( waiting ), type="l", xlim=c(40,100),
xlab="waiting (min)", main="(d)" )
par ( mfrow=c(1,1) )
plot ( eruptions, waiting, xlab="duration of eruption",
ylab="time to next eruption" )
par ( mfrow=c(2,1) )
plot.ts ( eruptions, xlab="data number", ylab="duration",
          main="a" )
plot.ts ( waiting, xlab="data number", ylab="waiting time",
          main="b" )
Figure 2.11: Old Faithful data: duration of eruptions and waiting time between eruptions.
Stripcharts: (a) and (c). Density estimates: (b) and (d).
Figure 2.12: Waiting time versus duration in the Old Faithful dataset
Figure 2.13: (a): duration and (b): waiting time plotted against data number in the Old
Faithful dataset
Figure 2.14: (a1), (a2): duration and (b1), (b2): waiting time plotted against data number
in the Old Faithful dataset
par ( mfrow=c(4,1) )
plot.ts ( eruptions[1:150], xlab="data number",
ylab="duration", main="a1" )
plot.ts ( eruptions[130:272], xlab="data number",
ylab="duration", main="a2" )
plot.ts ( waiting[1:150], xlab="data number",
ylab="waiting time", main="b1")
plot.ts ( waiting[130:272], xlab="data number",
ylab="waiting time", main="b2")
Figures 2.15 and 2.16 introduce coplots, a tool for visualizing the relationship among
three variables. They represent the ocean temperature data from Example 1.5. In Fig-
ure 2.15 there are six panels in which temperature is plotted against latitude. Each panel is
made from the points in a restricted range of longitude. The upper panel, the one spanning
the top of the Figure, shows the six different ranges of longitude. For example, the first
longitude range runs from about -10 to about -17. Points whose longitude is in the interval
(−17, −10) go into the upper right panel of scatterplots. These are the points very close
to the mouth of the Mediterranean Sea. Looking at that panel we see that temperature
increases very steeply from South to North until about 35°, at which point it starts to
decrease steeply as we go further North. That's because we're crossing the Mediterranean
tongue at a point very close to its source.
The other longitude ranges are about (−20, −13), (−25, −16), (−30, −20), (−34, −25)
and (−40, −28). They are used to create the scatterplot panels in the upper center, upper
left, lower right, lower center, and lower left, respectively. The general impression is
• the angle in the scatterplot becomes slightly shallower as we move East to West, and
• there are some points that don’t fit the general pattern.
Notice that the longitude ranges are overlapping and not of equal width. The ranges are
chosen by R to have a little bit of overlap and to put roughly equal numbers of points into
each range.
Figure 2.16 reverses the roles of latitude and longitude. The impression is that temper-
ature increases gradually from West to East. These two figures give a fairly clear picture
of the Mediterranean tongue.
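Coplots like these can be drawn with R's coplot function. A minimal sketch, assuming the
temperature, latitude, and longitude measurements are in vectors named temp, lat, and lon
(the names shown on the axes of the figures):

coplot ( temp ~ lat | lon )    # Figure 2.15: temperature vs. latitude, given longitude
coplot ( temp ~ lon | lat )    # Figure 2.16: temperature vs. longitude, given latitude

coplot chooses the overlapping ranges of the conditioning variable automatically; its
number argument controls how many panels there are and overlap controls how much the
ranges overlap.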
Example 2.6 shows one way to display the relationship between two sequences of
events.
Example 2.6 (Neurobiology)
To learn how the brain works, neurobiologists implant electrodes into animal brains.
These electrodes are fine enough to record the firing times of individual neurons. A
sequence of firing times of a neuron is called a spike train. Figure 2.17 shows the spike
train from one neuron in the gustatory cortex of a rat while the rat was in an experiment
on taste. This particular rat was in the experiment for a little over 80 minutes. Those
minutes are marked on the y-axis. The x-axis is marked in seconds. Each dot on the
plot shows a time at which the neuron fired. We can see, for example, that this neuron
fired about nine times in the first five seconds, then was silent for about the next ten
seconds. We can also see, for example, that this neuron undergoes some episodes of
very rapid firing lasting up to about 10 seconds.
Since this neuron is in the gustatory cortex — the part of the brain responsible for
taste — it is of interest to see how the neuron responds to various tastes. During the
experiment the rat was licking a tube that sometimes delivered a drop of water and
sometimes delivered a drop of water in which a chemical, or tastant, was dissolved.
The 55 short vertical lines on the plot show the times at which the rat received a drop
of 300 millimolar (.3 M) solution of NaCl. We can examine the plot for relationships
between deliveries of NaCl and activity of the neuron.
Figure 2.15: Ocean temperature (temp) plotted against latitude (lat), given longitude (lon).
Figure 2.16: Ocean temperature (temp) plotted against longitude (lon), given latitude (lat).
Figure 2.17: Spike train from a neuron during a taste experiment. The dots show the times
at which the neuron fired. The solid lines show times at which the rat received a drop of a
.3 M solution of NaCl.
• The line datadir <- ... stores the name of the directory in which I keep the
neuro data. When used in paste it identifies individual files. (A sketch of code
along these lines appears after this list.)
• The command list() creates a list. The elements of a list can be anything. In this
case the list named spikes has ten elements whose names are sig002a, sig002b,
. . . , and sig017a. The list named tastants has five elements whose names are
MSG100, MSG300, NaCl100, NaCl300, and water. Lists are useful for keeping
related objects together, especially when those objects aren’t all of the same type.
• Each element of the list is the result of a scan(). scan() reads a file and stores the
result in a vector. So spikes is a list of ten vectors. Each vector contains the firing
times, or a spike train, of one neuron. tastants is a list of five vectors. Each vector
contains the times at which a particular tastant was delivered.
• There are two ways to refer to an element of a list. For example, spikes[[8]] refers
to the eighth element of spikes while tastants$NaCl300 refers to the element
named NaCl300.
• Lists are useful for keeping related objects together, especially when those objects
are not the same type. In this example spikes$sig002a is a vector whose length is
the number of times neuron 002a fired, while the length of spikes$sig002b is the
number of times neuron 002b fired. Since those lengths are not the same, the data
don’t fit neatly into a matrix, so we use a list instead.
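The data-reading code itself is not reproduced here. A minimal sketch of what it might
look like, with the directory and file names as placeholders rather than the author's
actual paths:

datadir <- "data/neuro/"               # placeholder directory name
spikes <- list (
  sig002a = scan ( paste ( datadir, "sig002a.txt", sep="" ) ),
  sig002b = scan ( paste ( datadir, "sig002b.txt", sep="" ) )
  # ... and similarly for the other eight neurons, through sig017a
)
tastants <- list (
  NaCl300 = scan ( paste ( datadir, "NaCl300.txt", sep="" ) )
  # ... and similarly for MSG100, MSG300, NaCl100, and water
)
spikes[[2]]            # the second spike train, referred to by position
tastants$NaCl300       # delivery times of .3 M NaCl, referred to by name

Each call to scan reads one file of times into a numeric vector, and list keeps the
unequal-length vectors together under their names.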
2.3 Likelihood
2.3.1 The Likelihood Function
It often happens that we observe data from a distribution that is not known precisely but
whose general form is known. For example, we may know that the data come from a
Poisson distribution, X ∼ Poi(λ), but we don’t know the value of λ. We may know that
X ∼ Bin(n, θ) but not know θ. Or we may know that the values of X are densely clustered
around some central value and sparser on both sides, so we decide to model X ∼ N(µ, σ),
but we don’t know the values of µ and σ. In these cases there is a whole family of prob-
ability distributions indexed by either λ, θ, or (µ, σ). We call λ, θ, or (µ, σ) the unknown
parameter; the family of distributions is called a parametric family. Often, the goal of
the statistical analysis is to learn about the value of the unknown parameter. Of course,
learning which value of the parameter is the true one, or which values of the parameter are
plausible in light of the data, is the same as learning which member of the family is the
true one, or which members of the family are plausible in light of the data.
The different values of the parameter, or the different members of the family, represent
different theories or hypotheses about nature. A sensible way to discriminate among the
theories is according to how well they explain the data. Recall the Seedlings data (Exam-
ples 1.4, 1.6, 1.7 and 1.9) in which X was the number of new seedlings in a forest quadrat,
X ∼ Poi(λ), and different values of λ represent different theories or hypotheses about the
arrival rate of new seedlings. When X turned out to be 3, how well a value of λ explains
the data is measured by Pr[X = 3 | λ]. This probability, as a function of λ, is called the
likelihood function and denoted `(λ). It says how well each value of λ explains the datum
X = 3. Figure 1.6 (pg. 19) is a plot of the likelihood function.
2.3. LIKELIHOOD 134
In a typical problem we know the data come from a parametric family indexed by a
parameter θ, i.e. X1 , . . . , Xn ∼ i.i.d. f (x | θ), but we don’t know θ. The joint density of all
the data is

f(X_1, . . . , X_n | θ) = ∏ f(X_i | θ).                                    (2.2)
Equation 2.2, as a function of θ, is the likelihood function. We sometimes write f (Data | θ)
instead of indicating each individual datum. To emphasize that we are thinking of a func-
tion of θ we may also write the likelihood function as `(θ) or `(θ | Data).
The interpretation of the likelihood function is always in terms of ratios. If, for exam-
ple, `(θ1 )/`(θ2 ) > 1, then θ1 explains the data better than θ2 . If `(θ1 )/`(θ2 ) = k, then θ1
explains the data k times better than θ2 . To illustrate, suppose students in a statistics class
conduct a study to estimate the fraction of cars on Campus Drive that are red. Student
A decides to observe the first 10 cars and record X, the number that are red. Student A
observes
NR, R, NR, NR, NR, R, NR, NR, NR, R
and records X = 3. She did a Binomial experiment; her statistical model is X ∼ Bin(10, θ);
her likelihood function is ℓ_A(θ) = \binom{10}{3} θ^3 (1 − θ)^7. It is plotted in Figure 2.18. Because only
ratios matter, the likelihood function can be rescaled by any arbitrary positive constant. In
Figure 2.18 it has been rescaled so the maximum is 1. The interpretation of Figure 2.18
is that values of θ around θ ≈ 0.3 explain the data best, but that any value of θ in the
interval from about 0.1 to about 0.6 explains the data not too much worse than the best.
I.e., θ ≈ 0.3 explains the data only about 10 times better than θ ≈ 0.1 or θ ≈ 0.6, and a
factor of 10 is not really very much. On the other hand, values of θ less than about 0.05 or
greater than about 0.7 explain the data much worse than θ ≈ 0.3.
• expression is R’s way of getting mathematical symbols and formulae into plot
labels. For more information, type help(plotmath).
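The code for Figure 2.18 is not reproduced here. A minimal sketch of it (the grid of θ
values is an assumption):

th  <- seq ( 0, 1, length=100 )
lik <- dbinom ( 3, 10, th )           # Bin(10, theta) likelihood of X = 3
lik <- lik / max(lik)                 # rescale so the maximum is 1
plot ( th, lik, type="l", xlab=expression(theta),
       ylab="likelihood function" )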
Figure 2.18: Likelihood function `(θ) for the proportion θ of red cars on Campus Drive
To continue the example, Student B decides to observe cars until the third red one
drives by and record Y, the total number of cars that drive by until the third red one.
Students A and B went to Campus Drive at the same time and observed the same cars. B
records Y = 10. For B the likelihood function is
ℓ_B(θ) = P[Y = 10 | θ]
       = P[2 reds among first 9 cars] × P[10’th car is red]
       = \binom{9}{2} θ^2 (1 − θ)^7 × θ
       = \binom{9}{2} θ^3 (1 − θ)^7
ℓ_B differs from ℓ_A by the multiplicative constant \binom{9}{2}/\binom{10}{3}. But since multiplicative constants
don’t matter, A and B really have the same likelihood function and hence exactly the same
information about θ. Student B would also use Figure 2.18 as the plot of her likelihood
function.
Student C decides to observe every car for a period of 10 minutes and record Z1 , . . . ,
Zk where k is the number of cars that drive by in 10 minutes and each Zi is either 1 or 0
according to whether the i’th car is red. When C went to Campus Drive with A and B,
only 10 cars drove by in the first 10 minutes. Therefore C recorded exactly the same data
as A and B. Her likelihood function is ℓ_C(θ) = ∏ θ^{Z_i} (1 − θ)^{1−Z_i} = θ^3 (1 − θ)^7.
ℓ_C is proportional to ℓ_A and ℓ_B and hence contains exactly the same information and looks
exactly like Figure 2.18. So even though the students planned different experiments they
ended up with the same data, and hence the same information about θ.
The next example follows the Seedling story and shows what happens to the likelihood
function as data accumulates.
ℓ(λ) ≡ p(Data | λ)
     = p(y_1, . . . , y_60 | λ)
     = ∏_{i=1}^{60} p(y_i | λ)
     = ∏_{i=1}^{60} e^{−λ} λ^{y_i} / y_i!
     ∝ e^{−60λ} λ^{40}
Note that ∏ y_i! is a multiplicative factor that does not depend on λ and so is irrelevant
to ℓ(λ). Note also that ℓ(λ) depends only on Σ y_i, not on the individual y_i’s. I.e., we
only need to know Σ y_i = 40; we don’t need to know the individual y_i’s. ℓ(λ) is plotted
in Figure 2.19. Compare to Figure 1.6 (pg. 19). Figure 2.19 is much more peaked.
That’s because it reflects much more information, 60 quadrats instead of 1. The extra
information pins down the value of λ much more accurately.
Figure 2.19: Likelihood function ℓ(λ) for the seedlings data after observing 60 quadrats.
For our purposes we can assume that X, the number of invasive cancer cases at
the Slater School, has the Binomial distribution X ∼ Bin(145, θ). We observe x = 8.
The likelihood function
`(θ) ∝ θ8 (1 − θ)137 (2.3)
is pictured in Figure 2.20. From the Figure it appears that values of θ around .05 or .06
explain the data better than values less than .05 or greater than .06, but that values of
θ anywhere from about .02 or .025 up to about .11 explain the data reasonably well.
Figure 2.20: Likelihood function ℓ(θ) for the Slater School data.
The first line of code creates a sequence of 100 values of θ at which to compute
`(θ), the second line does the computation, the third line rescales so the maximum
likelihood is 1, and the fourth line makes the plot.
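The four lines themselves are not shown in this excerpt. A sketch of what they might be,
with the grid endpoints assumed:

th  <- seq ( 0, .2, length=100 )      # 100 values of theta
lik <- th^8 * (1-th)^137              # Equation 2.3, up to a constant
lik <- lik / max(lik)                 # rescale so the maximum likelihood is 1
plot ( th, lik, type="l", xlab=expression(theta), ylab="likelihood" )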
Examples 2.7 and 2.8 show how likelihood functions are used. They reveal which val-
ues of a parameter the data support (equivalently, which values of a parameter explain the
data well) and which values they don’t support (values that explain the data poorly). There is
no hard line between support and non-support. Rather, the plot of the likelihood functions
shows the smoothly varying levels of support for different values of the parameter.
Because likelihood ratios measure the strength of evidence for or against one hypoth-
esis as opposed to another, it is important to ask how large a likelihood ratio needs to be
before it can be considered strong evidence. Or, to put it another way, how strong is the
evidence in a likelihood ratio of 10, or 100, or 1000, or more? One way to answer the
question is to construct a reference experiment, one in which we have an intuitive under-
standing of the strength of evidence and can calculate the likelihood; then we can compare
the calculated likelihood to the known strength of evidence.
For our reference experiment imagine we have two coins. One is a fair coin, the
other is two-headed. We randomly choose a coin. Then we conduct a sequence of coin
tosses to learn which coin was selected. Suppose the tosses yield n consecutive Heads.
P[n Heads | fair] = 2^{−n}; P[n Heads | two-headed] = 1. So the likelihood ratio is 2^n. That’s
our reference experiment. A likelihood ratio around 8 is like tossing three consecutive
Heads; a likelihood ratio around 1000 is like tossing ten consecutive Heads.
In Example 2.8 argmax `(θ) ≈ .055 and `(.025)/`(.055) ≈ .13 ≈ 1/8, so the evidence
against θ = .025 as opposed to θ = .055 is about as strong as the evidence against the
fair coin when three consecutive Heads are tossed. The same can be said for the evidence
against θ = .1. Similarly, `(.011)/`(.055) ≈ `(.15)/`(.055) ≈ .001, so the evidence against
θ = .011 or θ = .15 is about as strong as 10 consecutive Heads. A fair statement of the
evidence is that θ’s in the interval from about θ = .025 to about θ = .1 explain the data not
much worse than the maximum of θ ≈ .055. But θ’s below about .01 or larger than about
.15 explain the data not nearly as well as θ’s around .055.
In the preceding reasoning we separated the data into two parts, X̄ and {δ_i}; used {δ_i} to
estimate σ; and used X̄ to find a likelihood function for µ. We cannot, in general, justify
such a separation mathematically. We justify it when our main interest is in µ and
we believe the {δ_i} tell us little about µ.
The function in Equation 2.4 is called a marginal likelihood function. Tsou and Royall [1995] show
that marginal likelihoods are good approximations to true likelihoods and can be used to
make accurate inferences, at least in cases where the Central Limit Theorem applies. We
shall use marginal likelihoods throughout this book.
Example 2.9 (Slater School, continued)
We redo the Slater School example (Example 2.8) to illustrate the marginal likelihood
and see how it compares to the exact likelihood. In that example the Xi ’s were 1’s and
0’s indicating which teachers got cancer. There were 8 1’s out of 145 teachers, so
X̄ = 8/145 ≈ .055. Also, σ̂^2 = (8(137/145)^2 + 137(8/145)^2)/145 ≈ .052, so σ̂ ≈ .23.
We get
ℓ_M(µ) ∝ exp( −(1/2) ((µ − .055) / (.23/√145))^2 )                          (2.5)
Figure 2.21 shows the marginal and exact likelihood functions. The marginal likelihood
is a reasonably good approximation to the exact likelihood.
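A figure like Figure 2.21 can be reproduced with a few lines of R; the grid of θ values
here is an assumption:

th    <- seq ( 0, .15, length=100 )
exact <- dbinom ( 8, 145, th )                                 # exact Binomial likelihood
exact <- exact / max(exact)
marg  <- exp ( -.5 * ( (th - 8/145) / (.23/sqrt(145)) )^2 )    # Equation 2.5
matplot ( th, cbind ( exact, marg ), type="l", lty=c(1,2), col=1,
          xlab=expression(theta), ylab="likelihood" )
legend ( "topright", legend=c("exact","marginal"), lty=c(1,2) )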
Figure 2.21: Marginal and exact likelihood functions for the Slater School example.
“Forbes magazine published data on the best small firms in 1993. These
were firms with annual sales of more than five and less than $350 million.
Firms were ranked by five-year average return on investment. The data
extracted are the age and annual salary of the chief executive officer for
the first 60 ranked firms. In question are the distribution patterns for the
ages and the salaries.”
The first few cases look like this:

Age   Salary ($1000)
 53       145
 43       621
 33       262
In this example we treat the Forbes data as a random sample of size n = 60 of CEO
salaries for small firms. We’re interested in the average salary µ. Our approach is to
calculate the marginal likelihood function ` M (µ).
Figure 2.22(a) shows a stripchart of the data. Evidently, most salaries are in the
range of $200 to $400 thousand dollars, but with a long right-hand tail. Because the
right-hand tail is so much larger than the left, the data are not even approximately
Normally distributed. But the Central Limit Theorem tells us that X̄ is approximately
Normally distributed, so the method of marginal likelihood applies. Figure 2.22(b)
displays the marginal likelihood function ` M (µ).
par ( mfrow=c(2,1) )
stripchart ( ceo$SAL, "jitter", pch=1, main="(a)",
xlab="Salary (thousands of dollars)" )
• In s <- sqrt ... the (length(ceo$SAL)-1) is there to account for one miss-
ing data point.
Figure 2.22: (a): CEO salaries for the 60 small firms; (b): marginal likelihood function ℓ_M(µ) for the mean salary.
• The data strongly support the conclusion that the mean salary is between about
$350 and $450 thousand dollars. That’s much smaller than the range of salaries
on display in Figure 2.22(a). Why?
• Is inference about the mean salary useful in this data set? If not, what would be
better?
Figure 2.23b is a contour plot of the likelihood function. The dot in the center, where
(µ, σ) ≈ (1.27, .098), is where the likelihood function is highest. That is the value of
(µ, σ) that best explains the data. The next contour line is drawn where the likelihood
is about 1/4 of its maximum; then the next is at 1/16 the maximum, the next at 1/64,
and the last at 1/256 of the maximum. They show values of (µ, σ) that explain the data
less and less well.
Ecologists are primarily interested in µ because they want to compare the µ’s from
different rings to see whether the excess CO2 has affected the average growth rate.
(They’re also interested in the σ’s, but that’s a secondary concern.) But ` is a function
of both µ and σ, so it’s not immediately obvious that the data tell us anything about
µ by itself. To investigate further, Figure 2.23c shows slices through the likelihood
function at σ = .09, .10, and .11, the locations of the dashed lines in Figure 2.23b. The
three curves are almost identical. Therefore, the relative support for different values
of µ does not depend very much on the value of σ, and therefore we are justified in
interpreting any of the curves in Figure 2.23c as a “likelihood function” for µ alone,
showing how well different values of µ explain the data. In this case, it looks as though
values of µ in the interval (1.25, 1.28) explain the data much better than values outside
that interval.
Figure 2.23: FACE Experiment, Ring 1. (a): (1998 final basal area) ÷
(1996 initial basal area); (b): contours of the likelihood function. (c): slices of the
likelihood function.
• The line x <- x[!is.na(x)] is there because some data is missing. This line
selects only those data that are not missing and keeps them in x. When x is a
vector, is.na(x) is another vector, the same length as x, with TRUE or FALSE,
indicating where x is missing. The ! is “not”, or negation, so x[!is.na(x)]
selects only those values that are not missing.
• The lines mu <- ... and sd <- ... create a grid of µ and σ values at which
to evaluate the likelihood.
• The line lik <- matrix ( NA, 50, 50 ) creates a matrix for storing the val-
ues of `(µ, σ) on the grid. The next three lines are a loop to calculate the values
and put them in the matrix.
• The line lik <- lik / max(lik) rescales all the values in the matrix so the
maximum value is 1. Rescaling makes it easier to set the levels in the next
line.
• abline is used for adding lines to plots. You can say either abline(h=...) or
abline(v=...) to get horizontal and vertical lines, or
abline(intercept,slope) to get arbitrary lines.
• lik.09, lik.10, and lik.11 pick out three columns from the lik matrix. They
are the three columns for the values of σ closest to σ = .09, .10, .11. Each
column is rescaled so its maximum is 1. (A sketch of code along these lines appears
after this list.)
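The code itself is not shown in this excerpt; here is a sketch consistent with the bullets
above, assuming the Ring 1 growth ratios are already in a vector x and with the grid
ranges assumed:

x  <- x[!is.na(x)]                               # drop the missing values
mu <- seq ( 1.20, 1.35, length=50 )              # grid of mu values (range assumed)
sd <- seq ( .08, .12, length=50 )                # grid of sigma values (range assumed)
lik <- matrix ( NA, 50, 50 )
for ( i in 1:50 )
  for ( j in 1:50 )
    lik[i,j] <- prod ( dnorm ( x, mu[i], sd[j] ) )
lik <- lik / max(lik)
contour ( mu, sd, lik, levels=c(1, 1/4, 1/16, 1/64, 1/256),
          drawlabels=FALSE, xlab=expression(mu), ylab=expression(sigma) )
abline ( h=c(.09, .10, .11), lty=2 )             # the three slices of Figure 2.23c
lik.10 <- lik[ , which.min(abs(sd-.10)) ]        # e.g., the slice at sigma nearest .10
lik.10 <- lik.10 / max(lik.10)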
that most students scored between about 5 and 10, while 4 students were well below
the rest of the class. In fact, those students did not show up for every quiz so their
averages were quite low. But the remaining students’ scores were clustered together
in a way that can be adequately described by a Normal distribution. What do the data
say about (µ, σ)?
Figure 2.24 shows the likelihood function. The data support values of µ from about
7.0 to about 7.6 and values of σ from about 0.8 to about 1.2. A good description of the
data is that most of it follows a Normal distribution with (µ, σ) in the indicated intervals,
except for 4 students who had low scores not fitting the general pattern. Do you think
the instructor should use this analysis to assign letter grades and, if so, how?
Figure 2.24: Likelihood function for Quiz Scores
x <- sort(scores.ave)[5:58]
mu <- seq ( 6.8, 7.7, length=60 )
sig <- seq ( .7, 1.3, length=60 )
lik <- matrix ( NA, 60, 60 )
for ( i in 1:60 )
for ( j in 1:60 )
lik[i,j] <- prod ( dnorm ( x, mu[i], sig[j] ) )
lik <- lik/max(lik)
contour ( mu, sig, lik, xlab=expression(mu),
ylab=expression(sigma) )
Examples 2.11 and 2.12 have likelihood contours that are roughly circular, indicating
that the likelihood function for one parameter does not depend very strongly on the value
of the other parameter, and so we can get a fairly clear picture of what the data say about
one parameter in isolation. But in other data sets two parameters may be inextricably
entwined. Example 2.13 illustrates the problem.
Table 2.1: Numbers of New and Old seedlings in quadrat 6 in 1992 and 1993.
Let N_i^T be the total number of New seedlings that arrive in year i, i.e., including
those that emerge after the census; and let N_i^O be the observed number of seedlings in
year i, i.e., those that are counted in the census. As in Example 1.4 we model
N_i^T ∼ Poi(λ). Furthermore, each seedling has some chance θ_f of being found in the
census. (Nominally θ_f is the proportion of seedlings that emerge before the census, but
in fact it may also include a component accounting for the failure of ecologists to find
seedlings that have already emerged.) Treating the seedlings as independent and all having
the same θ_f leads to the model N_i^O ∼ Bin(N_i^T, θ_f). The data are the N_i^O’s; the
N_i^T’s are not observed. What do the data tell us about the two parameters (λ, θ_f)?
Ignore the Old seedlings for now and just look at the 1992 data, N_1992^O = 0. Dropping
the subscript 1992, the likelihood function is
ℓ(λ, θ_f) = P[N^O = 0 | λ, θ_f]
          = Σ_{n=0}^∞ P[N^O = 0, N^T = n | λ, θ_f]
          = Σ_{n=0}^∞ P[N^T = n | λ] P[N^O = 0 | N^T = n, θ_f]
          = Σ_{n=0}^∞ (e^{−λ} λ^n / n!) (1 − θ_f)^n                         (2.6)
          = Σ_{n=0}^∞ e^{−λ(1−θ_f)} (λ(1 − θ_f))^n / (e^{λθ_f} n!)
          = e^{−λθ_f}
Figure 2.25a plots log_10 ℓ(λ, θ_f). (We plotted log_10 ℓ instead of ℓ for variety.) The
contour lines are not circular. To see what that means, focus on the curve log_10 ℓ(λ, θ_f) =
−1, which runs from about (λ, θ_f) = (2.5, 1) to about (λ, θ_f) = (6, .4). Points (λ, θ_f) along
that curve explain the datum N^O = 0 about 1/10 as well as the m.l.e. (The m.l.e. is any
pair where either λ = 0 or θ_f = 0.) Points below and to the left of that curve explain the
datum better than 1/10 of the maximum.
The main parameter of ecological interest is λ, the rate at which New seedlings
tend to arrive. The figure shows that values of λ as large as 6 can have reasonably
large likelihoods and hence explain the data reasonably well, at least if we believe
that θ_f might be as small as .4. To investigate further, Figure 2.25b is similar to 2.25a
but includes values of λ as large as 1000. It shows that even values of λ as large
as 1000 can have reasonably large likelihoods if they’re accompanied by sufficiently
small values of θ_f. In fact, arbitrarily large values of λ coupled with sufficiently small
values of θ_f can have likelihoods arbitrarily close to the maximum. So from the data alone, there is no
way to rule out extremely large values of λ. Of course extremely large values of λ don’t
make ecological sense, both in their own right and because extremely small values of
θ_f are also not sensible. Scientific background information of this type is often incorporated
into statistical analysis through Bayesian inference (Section 2.5). But the point
here is that λ and θ_f are linked, and the data alone do not tell us much about either
parameter individually.
Figure 2.25: Log of the likelihood function for (λ, θ f ) in Example 2.13
We have now seen two examples (2.11 and 2.12) in which likelihood contours are
roughly circular and one (2.13) in which they’re not. By far the most common and impor-
tant case is similar to Example 2.11 because it applies when the Central Limit Theorem
applies. That is, there are many instances in which we are trying to make an inference
about a parameter θ and can invoke the Central Limit Theorem saying that, for some
statistic t, t ∼ N(θ, σ_t) approximately, where we can estimate σ_t. In these cases we can,
if necessary, ignore any other parameters in the problem and make an inference about θ
based on ` M (θ).
2.4 Estimation
Sometimes the purpose of a statistical analysis is to compute a single best guess at a
parameter θ. An informed guess at the value of θ is called an estimate and denoted θ̂.
One way to estimate θ is to find θ̂ ≡ argmax `(θ), the value of θ for which `(θ) is largest
and hence the value of θ that best explains the data. That’s the subject of Section 2.4.1.
For instance, in Example 2.8 and Figure 2.20 θ was the rate of cancer occurrence and
we calculated `(θ) based on y = 8 cancers in 145 people. Figure 2.20 suggests that the
m.l.e. is about θ̂ ≈ .05.
When `(θ) is differentiable, the m.l.e. can be found by differentiating and equating to
zero. In Example 2.8 the likelihood was ℓ(θ) ∝ θ^8 (1 − θ)^137. The derivative is

dℓ(θ)/dθ ∝ 8θ^7 (1 − θ)^137 − 137θ^8 (1 − θ)^136                            (2.7)
         = θ^7 (1 − θ)^136 [8(1 − θ) − 137θ]
Equating to 0 yields
0 = 8(1 − θ) − 137θ
145θ = 8
θ = 8/145 ≈ .055
So θ̂ ≈ .055 is the m.l.e. Of course if the likelihood is flat at its maximum, if there are
multiple modes, if the maximum occurs at an endpoint, or if ℓ is not differentiable, then
more care is needed.
Equation 2.7 shows more generally the m.l.e. for Binomial data. Simply replace 137
with n − y and 8 with y to get θ̂ = y/n. In the Exercises you will be asked to find the m.l.e.
for data from other types of distributions.
There is a trick that is often useful for finding m.l.e.’s. Because log is a monotone func-
tion, argmax `(θ) = argmax log(`(θ)), so the m.l.e. can be found by maximizing log `. For
i.i.d. data, ℓ(θ) = ∏ p(y_i | θ), so log ℓ(θ) = Σ log p(y_i | θ), and it is often easier to differentiate
the sum than the product. For the Slater example the math would look like this:
log ℓ(θ) = 8 log θ + 137 log(1 − θ)
d log ℓ(θ)/dθ = 8/θ − 137/(1 − θ)

Setting the derivative equal to 0:

137/(1 − θ) = 8/θ
137θ = 8 − 8θ
θ = 8/145.
Equation 2.7 shows that if y_1, . . . , y_n ∼ Bern(θ) then the m.l.e. of θ is

θ̂ = n^{−1} Σ y_i = sample mean
The Exercises ask you to show the following.
1. If y_1, . . . , y_n ∼ N(µ, σ) then the m.l.e. of µ is

   µ̂ = n^{−1} Σ y_i = sample mean
LS_α = [θ_l, θ_u],
where θ_l and θ_u are the lower and upper endpoints, respectively, of the interval.
In Example 2.9 (Slater School) θ̂ = 8/145, so we can find `(θ̂) on a calculator, or by
using R’s built-in function dbinom ( 8, 145, 8/145 ),
which yields about .144. Then θ_l and θ_u can be found by trial and error. Since dbinom(8,145,.023) ≈
.013 and dbinom(8,145,.105) ≈ .015, we conclude that LS.1 ≈ [.023, .105] is a rough
likelihood interval for θ. Review Figure 2.20 to see whether this interval makes sense.
The data in Example 2.9 could pin down θ to an interval of width about .08. In general,
an experiment will pin down θ to an extent determined by the amount of information in the
data. As data accumulates so does information and the ability to determine θ. Typically the
likelihood function becomes increasingly peaked as n → ∞, leading to increasingly
accurate inference for θ. We saw that in Figures 1.6 and 2.19. Example 2.14 illustrates the
point further.
Example 2.14 (Craps, continued)
Example 1.10 introduced a computer simulation to learn the probability θ of winning
the game of craps. In this example we use that simulation to illustrate the effect of
gathering ever increasing amounts of data. We’ll start by running the simulation just a
few times, and examining the likelihood function `(θ). Then we’ll add more and more
simulations and see what happens to `(θ).
The result is in Figure 2.26. The flattest curve is for 3 simulations, and the curves
become increasingly peaked for 9, 27, and 81 simulations. After only 3 simulations
LS.1 ≈ [.15, .95] is quite wide, reflecting the small amount of information. But after
9 simulations `(θ) has sharpened so that LS.1 ≈ [.05, .55] is much smaller. After 27
simulations LS.1 has shrunk further to about [.25, .7], and after 81 it has shrunk even
further to about [.38, .61].
for ( i in seq(along=n.sim) ) {
wins <- 0
for ( j in 1:n.sim[i] )
wins <- wins + sim.craps()
lik[,i] <- dbinom ( wins, n.sim[i], th )
lik[,i] <- lik[,i] / max(lik[,i])
}
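The snippet uses sim.craps() from Example 1.10 and assumes that n.sim, th, and lik were
set up earlier; that setup is not shown in the text, but it could look like this:

n.sim <- c ( 3, 9, 27, 81 )                   # numbers of simulated games
th    <- seq ( 0, 1, length=100 )             # grid of theta values
lik   <- matrix ( NA, length(th), length(n.sim) )

After the loop, a call such as matplot ( th, lik, type="l", col=1 ) would draw the four
curves of Figure 2.26.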
In Figure 2.26 the likelihood function looks increasingly like a Normal density as the
number of simulations increases. That is no accident; it is the typical behavior in many
Figure 2.26: Likelihood function for the probability θ of winning a game of craps. The
four curves are for 3, 9, 27, and 81 simulations.
par ( mfrow=c(2,2) )
for ( i in seq(along=sampsize) ) {
y <- matrix ( rnorm ( n.sim*sampsize[i], 0, 1 ),
              nrow=sampsize[i], ncol=n.sim )
that.1 <- apply ( y, 2, mean )
that.2 <- apply ( y, 2, median )
boxplot ( that.1, that.2, names=c("mean","median"),
          main=paste("(",letters[i],")",sep="") )
abline ( h=0, lty=2 )
}

Figure 2.27: Sampling distribution of θ̂_1, the sample mean, and θ̂_2, the sample median, for
four different sample sizes. (a): n=4; (b): n=16; (c): n=64; (d): n=256
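The simulation loop above assumes that sampsize and n.sim were defined earlier; that
setup is not shown in this excerpt, but a plausible version is:

n.sim    <- 1000                          # number of simulated samples (assumed)
sampsize <- c ( 4, 16, 64, 256 )          # the four sample sizes in Figure 2.27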
For us, comparing θ̂1 to θ̂2 is only a secondary point of the simulation. The main point
is four-fold.
3. Statisticians study conditions under which one estimator is better than another.
4. Simulation is useful.
When the m.l.e. is the sample mean, as it is when FY is a Bernoulli, Normal, Poisson
or Exponential distribution, the Central Limit Theorem tells us that in large samples, θ̂ is
approximately Normally distributed. Therefore, in these cases, its distribution can be well
described by its mean and SD. Approximately,
θ̂ ∼ N(µ_θ̂, σ_θ̂),

where

µ_θ̂ = µ_Y,    σ_θ̂ = σ_Y / √n                                              (2.8)
both of which can be easily estimated from the sample. So we can use the sample to
compute a good approximation to the sampling distribution of the m.l.e.
To see that more clearly, let’s make 1000 simulations of the m.l.e. in n = 5, 10, 25, 100
Bernoulli trials with p = .1. We’ll make histograms of those simulations and overlay
them with kernel density estimates and Normal densities. The parameters of the Normal
densities will be estimated from the simulations. Results are shown in Figure 2.28.
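A sketch of such a simulation in R (the plotting details are choices made here, not taken
from the text):

par ( mfrow=c(2,2) )
n <- c ( 5, 10, 25, 100 )
for ( i in 1:4 ) {
  that <- rbinom ( 1000, n[i], .1 ) / n[i]        # 1000 simulated m.l.e.'s
  hist ( that, prob=TRUE, xlab=expression(hat(theta)),
         main=paste ( "(", letters[i], ")", sep="" ) )
  lines ( density ( that ), lty=2 )               # kernel density estimate
  th <- seq ( 0, .6, length=100 )
  lines ( th, dnorm ( th, mean(that), sd(that) ), lty=3 )   # Normal approximation
}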
Figure 2.28: Histograms of θ̂, the sample mean, for samples from Bin(n, .1). Dashed line:
kernel density estimate. Dotted line: Normal approximation. (a): n=5; (b): n=10; (c):
n=25; (d): n=100
Notice that the Normal approximation is not very good for small n. That’s because the
underlying distribution FY is highly skewed, nothing at all like a Normal distribution. In
fact, R was unable to compute the Normal approximation for n = 5. But for large n, the
Normal approximation is quite good. That’s the Central Limit Theorem kicking in. For
any n, we can use the sample to estimate the parameters in Equation 2.8. For small n, those
parameters don’t help us much. But for n = 100, they tell us a lot about the accuracy of
θ̂, and the Normal approximation computed from the first sample is a good match to the
sampling distribution of θ̂.
The SD of an estimator is given a special name. It’s called the standard error or SE
of the estimator because it measures the typical size of estimation errors |θ̂ − θ|. When
θ̂ ∼ N(µθ̂ , σθ̂ ), approximately, then σθ̂ is the SE. For any Normal distribution, about 95%
of the mass is within ±2 standard deviations of the mean. Therefore,

P[ |θ̂ − θ| ≤ 2σ_θ̂ ] ≈ .95.

In other words, estimates are accurate to within about two standard errors about 95% of
the time, at least when Normal theory applies.
We have now seen two ways of assessing estimation accuracy — through `(θ) and
through Fθ̂ . Often these two apparently different approaches almost coincide. That hap-
pens under the following conditions.
1. When θ̂ ∼ N(θ, σ_θ̂), and σ_θ̂ ≈ σ/√n, an approximation often justified by the Central
   Limit Theorem, then we can estimate θ to within about ±2σ_θ̂, around 95% of the
   time. So the interval (θ̂ − 2σ_θ̂, θ̂ + 2σ_θ̂) is a reasonable estimation interval.
2. When most of the information in the data comes from the sample mean, and in other
   cases when a marginal likelihood argument applies, then ℓ(θ) ≈ exp( −(1/2) ((θ − Ȳ)/(σ̂/√n))^2 )
   (Equation 2.4) and LS.1 ≈ (θ̂ − 2σ_θ̂, θ̂ + 2σ_θ̂). So the two intervals are about the
   same.
2.5 Bayesian Inference

• In deciding whether to fund Head Start, legislators must assess whether the program
is likely to be beneficial and, if so, the degree of benefit.
• When investing in the stock market, investors must assess the future probability
distributions of stocks they may buy.
• When making business decisions, firms must assess the future probability distribu-
tions of outcomes.
• Public policy makers must assess whether the observed increase in average global
temperature is anthropogenic and, if so, to what extent.
• Doctors and patients must assess and compare the distribution of outcomes under
several alternative treatments.
• At the Slater School, Example 2.8, teachers and administrators must assess their
probability distribution for θ, the chance that a randomly selected teacher develops
invasive cancer.
Information of many types goes into assessing probability distributions. But it is often
useful to divide the information into two types: general background knowledge and infor-
mation specific to the situation at hand. How do those two types of information combine to
form an overall distribution for θ? Often we begin by summarizing just the background in-
formation as p(θ), the marginal distribution of θ. The specific information at hand is data
which we can model as p(y1 , . . . , yn | θ), the conditional distribution of y1 , . . . , yn given
θ. Next, the marginal and conditional densities are combined to give the joint distribu-
tion p(y1 , . . . , yn , θ). Finally, the joint distribution yields p(θ | y1 , . . . , yn ) the conditional
distribution of θ given y1 , . . . , yn . And p(θ | y1 , . . . , yn ) represents our state of knowledge
accounting for both the background information and the data specific to the problem at
hand. p(θ) is called the prior distribution and p(θ | y1 , . . . , yn ) is the posterior distribution.
P[D = 1 | T = 1] = P[D = 1 and T = 1] / P[T = 1]
                 = P[D = 1 and T = 1] / ( P[T = 1 and D = 1] + P[T = 1 and D = 0] )
                 = P[D = 1] P[T = 1 | D = 1] / ( P[D = 1] P[T = 1 | D = 1] + P[D = 0] P[T = 1 | D = 0] )        (2.9)
                 = (.001)(.95) / ( (.001)(.95) + (.999)(.05) )
                 = .00095 / ( .00095 + .04995 ) ≈ .019.
That is, a patient who tests positive has only about a 2% chance of having the disease, even
though the test is 95% accurate.
Many people find this a surprising result and suspect a mathematical trick. But a quick
heuristic check says that out of 1000 people we expect 1 to have the disease, and that
person to test positive; we expect 999 people not to have the disease and 5% of those, or
about 50, to test positive; so among the 51 people who test positive, only 1, or a little less
than 2%, has the disease. The math is correct. This is an example where most people’s
intuition is at fault and careful attention to mathematics is required in order not to be led
astray.
What is the likelihood function in this example? There are two possible values of the
parameter, hence only two points in the domain of the likelihood function, D = 0 and
D = 1. So the likelihood function is ℓ(D = 1) = P[T = 1 | D = 1] = .95 and ℓ(D = 0) = P[T = 1 | D = 0] = .05.
Here’s another way to look at the medical screening problem, one that highlights the mul-
tiplicative nature of likelihood.
P[D = 1 | T = 1] / P[D = 0 | T = 1] = P[D = 1 and T = 1] / P[D = 0 and T = 1]
                                    = P[D = 1] P[T = 1 | D = 1] / ( P[D = 0] P[T = 1 | D = 0] )
                                    = ( P[D = 1] / P[D = 0] ) ( P[T = 1 | D = 1] / P[T = 1 | D = 0] )
                                    = ( 1/999 ) ( .95/.05 )
                                    ≈ .019
The LHS of this equation is the posterior odds of having the disease. The penultimate
line shows that the posterior odds is the product of the prior odds and the likelihood ratio.
Specifically, to calculate the posterior, we need only the likelihood ratio, not the absolute
value of the likelihood function. And likelihood ratios are the means by which prior odds
get transformed into posterior odds.
Let’s look more carefully at the mathematics in the case where the distributions have
densities. Let y denote the data, even though in practice it might be y1 , . . . , yn .
p(θ | y) = p(θ, y) / p(y)
         = p(θ, y) / ∫ p(θ, y) dθ                                           (2.10)
         = p(θ) p(y | θ) / ∫ p(θ) p(y | θ) dθ
Equation 2.10 is the same as Equation 2.9, only in more general terms. Since we are
treating the data as given and p(θ | y) as a function of θ, we are justified in writing
p(θ | y) = p(θ) ℓ(θ) / ∫ p(θ) ℓ(θ) dθ

or

p(θ | y) = p(θ) ℓ(θ) / c,

where c = ∫ p(θ) ℓ(θ) dθ is a constant that does not depend on θ. (An integral with respect
to θ does not depend on θ; after integration it does not contain θ.) The effect of the constant
c is to rescale p(θ)ℓ(θ) so that the posterior density integrates to 1.
And since c plays this role, the likelihood function can absorb an arbitrary constant which
will ultimately be compensated for by c. One often sees the expression p(θ | y) ∝ p(θ) ℓ(θ).
The next two examples show Bayesian statistics with real data.
In Figure 2.29 the posterior density is more similar to the prior density than to the
likelihood function. But the analysis deals with only a single data point. Let’s see what
happens as data accumulates. If we have observations y1 , . . . , yn , the likelihood function
becomes
ℓ(λ) = ∏ p(y_i | λ) = ∏ e^{−λ} λ^{y_i} / y_i! ∝ e^{−nλ} λ^{Σ y_i}
To see what this means in practical terms, Figure 2.30 shows (a): the same prior we used
in Example 2.15, (b): `(λ) for n = 1, 4, 16, and (c): the posterior for n = 1, 4, 16, always
with ȳ = 3.
Figure 2.29: Prior, likelihood and posterior densities for λ in the seedlings example after
the single observation y = 3
2. As n increases the posterior density becomes increasingly peaked and becomes in-
creasingly like `(λ). That’s because as n increases, the amount of information in
the data increases and the likelihood function becomes increasingly peaked. Mean-
while, the prior density remains as it was. Eventually the data contains much more
information than the prior, so the likelihood function becomes much more peaked
than the prior and the likelihood dominates. So the posterior, the product of prior
and likelihood, looks increasingly like the likelihood.
Another way to look at it is through the log of the posterior, log p(λ | y_1, . . . , y_n) =
c + log p(λ) + Σ_{i=1}^{n} log p(y_i | λ). As n → ∞ there is an increasing number of terms in the
sum, so the sum eventually becomes much larger and much more important than log p(λ).
Example 2.16 shows Bayesian statistics at work for the Slater School. See Lavine
[1999] for further analysis.
Example 2.16 (Slater School, cont.)
At the time of the analysis reported in Brodeur [1992] there were two other lines of ev-
idence regarding the effect of power lines on cancer. First, there were some epidemi-
ological studies showing that people who live near power lines or who work as power
line repairmen develop cancer at higher rates than the population at large, though only
slightly higher. And second, chemists and physicists who calculate the size of mag-
netic fields induced by power lines (the supposed mechanism for inducing cancer) said
that the small amount of energy in the magnetic fields is insufficient to have any appreciable
effect on the large biological molecules that are involved in cancer genesis.
These two lines of evidence are contradictory. How shall we assess a distribution for
θ, the probability that a teacher hired at Slater School develops cancer?
Recall from page 138 that Neutra, the state epidemiologist, calculated “4.2 cases
of cancer could have been expected to occur” if the cancer rate at Slater were equal to
the national average. Therefore, the national average cancer rate for women of the age
typical of Slater teachers is 4.2/145 ≈ .03. Considering the view of the physicists, our
prior distribution should have a fair bit of mass on values of θ ≈ .03. And considering
the epidemiological studies and the likelihood that effects would have been detected
before 1992 if they were strong, our prior distribution should put most of its mass
Figure 2.30: (a): the prior from Example 2.15; (b): ℓ(λ) for n = 1, 4, 16; (c): the posterior
for n = 1, 4, 16, always with ȳ = 3.
Figure 2.31: Prior, likelihood and posterior densities for λ with n = 60, Σ y_i = 40.
below θ ≈ .06. For the sake of argument let’s adopt the prior depicted in Figure 2.32.
Its formula is
p(θ) = ( Γ(420) / (Γ(20)Γ(400)) ) θ^19 (1 − θ)^399                          (2.13)
which we will see in Section 5.6 is the Be(20, 400) density. The likelihood function is
ℓ(θ) ∝ θ^8 (1 − θ)^137 (Equation 2.3, Figure 2.20). Therefore the posterior density is
p(θ | y) ∝ θ^27 (1 − θ)^536, which we will see in Section 5.6 is the Be(28, 537) density. Therefore we
can easily write down the constant and get the posterior density

p(θ | y) = ( Γ(565) / (Γ(28)Γ(537)) ) θ^27 (1 − θ)^536,
which is also pictured in Figure 2.32.
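A plot like Figure 2.32 can be made directly from these densities; here is a sketch (the
θ grid and the rescaling of the likelihood are choices made for display, not taken from
the text):

th    <- seq ( 0, .15, length=200 )
prior <- dbeta ( th, 20, 400 )                       # Be(20, 400)
post  <- dbeta ( th, 28, 537 )                       # Be(28, 537)
lik   <- dbinom ( 8, 145, th )
lik   <- lik / max(lik) * max(post)                  # rescaled only so it fits on the plot
matplot ( th, cbind ( prior, lik, post ), type="l", lty=1:3, col=1,
          xlab=expression(theta), ylab="density" )
legend ( "topright", legend=c("prior","likelihood","posterior"), lty=1:3 )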
Figure 2.32: Prior, likelihood and posterior density for Slater School
Examples 2.15 and 2.16 have the convenient feature that the prior density had the same
form, λ^a e^{−bλ} in one case and θ^a (1 − θ)^b in the other, as the likelihood function, which
made the posterior density and the constant c particularly easy to calculate. This was not
a coincidence. The investigators knew the form of the likelihood function and looked for
a convenient prior of the same form that approximately represented their prior beliefs.
This convenience, and whether choosing a prior density for this property is legitimate, are
topics which deserve serious thought but which we shall not take up at this point.
2.6 Prediction
Sometimes the goal of statistical analysis is to make predictions for future observations.
Let y1 , . . . , yn , y f be a sample from p(· | θ). We observe y1 , . . . , yn but not y f , and want a
prediction for y f . There are three common forms that predictions take.
point predictions A point prediction is a single guess for y f . It might be a predictive
mean, predictive median, predictive mode, or any other type of point prediction that
seems sensible.
Therefore, our prediction should be based on the knowledge of θ alone, not on any aspect
of y1 , . . . , yn .
A sensible point prediction for y f is ŷ f = −2, because -2 is the mean, median, and mode
of the N(−2, 1) distribution. Some sensible 90% prediction intervals are (−∞, −0.72),
(−3.65, −0.36) and (−3.28, ∞). We would choose one or the other depending on whether
we wanted to describe the lowest values that y f might take, a middle set of values, or the
highest values. And, of course, the predictive distribution of y f is N(−2, 1). It completely
describes the extent of our knowledge and ability to predict y f .
In real problems, though, we don’t know θ. The simplest way to make a prediction
consists of two steps. First use y1 , . . . , yn to estimate θ, then make predictions based on
p(y f | θ̂). Predictions made by this method are called plug-in predictions. In the example
of the previous paragraph, if y1 , . . . , yn yielded µ̂ = −2 and σ̂ = 1, then predictions would
be exactly as described above.
For an example with discrete data, refer to Examples 1.4 and 1.6 in which λ is the
arrival rate of new seedlings. We found λ̂ = 2/3. The entire plug-in predictive distribution
is displayed in Figure 2.33. ŷ f = 0 is a sensible point prediction. The set {0, 1, 2} is a
97% plug-in prediction interval or prediction set (because ppois(2,2/3) ≈ .97); the set
{0, 1, 2, 3} is a 99.5% interval.
0.4
probability
0.2
0.0
0 2 4 6 8 10
Figure 2.33: Plug-in predictive distribution y f ∼ Poi(λ = 2/3) for the seedlings example
There are two sources of uncertainty in making predictions. First, because y f is ran-
dom, we couldn’t predict it perfectly even if we knew θ. And second, we don’t know θ.
In any given problem, either one of the two might be the more important source of uncer-
tainty. The first type of uncertainty can’t be eliminated. But in theory, the second type can
be reduced by collecting an increasingly large sample y1 , . . . , yn so that we know θ with
ever more accuracy. Eventually, when we know θ accurately enough, the second type of
uncertainty becomes negligible compared to the first. In that situation, plug-in predictions
do capture almost the full extent of predictive uncertainty.
But in many practical problems the second type of uncertainty is too large to be ig-
nored. Plug-in predictive intervals and predictive distributions are too optimistic because
they don’t account for the uncertainty involved in estimating θ. A Bayesian approach to
prediction can account for this uncertainty. The prior distribution of θ and the conditional
distribution of y1 , . . . , yn , y f given θ provide the full joint distribution of y1 , . . . , yn , y f , θ,
which in turn provides the conditional distribution of y f given y1 , . . . , yn . Specifically,
p(y_f | y_1, . . . , y_n) = ∫ p(y_f, θ | y_1, . . . , y_n) dθ
                          = ∫ p(θ | y_1, . . . , y_n) p(y_f | θ, y_1, . . . , y_n) dθ       (2.14)
                          = ∫ p(θ | y_1, . . . , y_n) p(y_f | θ) dθ
Equation 2.14 is just the y_f marginal density derived from the joint density of (θ, y_f),
all densities being conditional on the data observed so far. To say it another way, the
predictive density p(y_f) is ∫ p(θ, y_f) dθ = ∫ p(θ) p(y_f | θ) dθ, but where p(θ) is really the
posterior p(θ | y_1, . . . , y_n). The role of y_1, . . . , y_n is to give us the posterior density of θ
instead of the prior.
The predictive distribution in Equation 2.14 will be somewhat more dispersed than
the plug-in predictive distribution. If we don’t know much about θ then the posterior
will be widely dispersed and Equation 2.14 will be much more dispersed than the plug-in
predictive distribution. On the other hand, if we know a lot about θ then the posterior
distribution will be tight and Equation 2.14 will be only slightly more dispersed than the
plug-in predictive distribution.
p_{Y_f}(y) ≡ P[Y_f = y] = ∫ p_{Y_f | Λ}(y | λ) p_Λ(λ) dλ
                        = ∫ (λ^y e^{−λ} / y!) (2^3 λ^2 e^{−2λ} / Γ(3)) dλ
                        = (2^3 / (y! Γ(3))) ∫ λ^{y+2} e^{−3λ} dλ                            (2.15)
                        = (2^3 Γ(y+3) / (y! Γ(3) 3^{y+3})) ∫ (3^{y+3} / Γ(y+3)) λ^{y+2} e^{−3λ} dλ
                        = \binom{y+2}{y} (2/3)^3 (1/3)^y,
because the last integrand is a Gamma density (shape y + 3, rate 3) and so integrates to 1.
(We will see in Chapter 5 that this is a Negative Binomial distribution.) Thus for exam-
ple, according to our prior,
Pr[Y_f = 0] = (2/3)^3 = 8/27
Pr[Y_f = 1] = 3 (2/3)^3 (1/3) = 8/27

etc.
So, by calculations similar to Equation 2.15, the predictive distribution after observing
y1 = 3 is
p_{Y_f | Y_1}(y | y_1 = 3) = ∫ p_{Y_f | Λ}(y | λ) p_{Λ | Y_1}(λ | y_1 = 3) dλ
                           = \binom{y+5}{y} (3/4)^6 (1/4)^y                                 (2.16)
p(λ | y_1, . . . , y_60) = (62^{43} / 42!) λ^{42} e^{−62λ}                                  (2.17)
Therefore, by calculations similar to Equation 2.15, the predictive distribution is

Pr[Y_f = y | y_1, . . . , y_60] = \binom{y+42}{y} (62/63)^{43} (1/63)^y                     (2.18)
2.7 Hypothesis Testing

medicine
• H0 : the new drug and the old drug are equally effective.
• Ha : the new drug is better than the old.
public health
Figure 2.34: Predictive distributions of y f in the seedlings example after samples of size
n = 0, 1, 60, and the plug-in predictive
public policy
astronomy
physics
public trust
ESP
• H0 : There is no ESP.
• Ha : There is ESP.
ecology
• H0 : Forest fires have no effect on forest diversity.
• Ha : Forest fires increase forest diversity.
By tradition H0 is the hypothesis that says nothing interesting is going on or the current
theory is correct, while Ha says that something unexpected is happening or our current
theories need updating. Often the investigator is hoping to disprove the null hypothesis
and to suggest the alternative hypothesis in its place.
It is worth noting that while the two hypotheses are logically exclusive, they are not
logically exhaustive. For instance, it’s logically possible that forest fires decrease diver-
sity even though that possibility is not included in either hypothesis. So one could write
Ha : Forest fires decrease forest diversity, or even Ha : Forest fires change forest diversity.
Which alternative hypothesis is chosen makes little difference for the theory of hypothesis
testing, though it might make a large difference to ecologists.
Statisticians have developed several methods called hypothesis tests. We focus on just
one for the moment, useful when H0 is specific. The fundamental idea is to see whether
the data are “compatible” with the specific H0 . If so, then there is no reason to doubt H0 ; if
not, then there is reason to doubt H0 and possibly to consider Ha in its stead. The meaning
of “compatible” can change from problem to problem but typically there is a four step
process.
1. Formulate a scientific null hypothesis and translate it into statistical terms.
2. Choose a low dimensional statistic, say w = w(y1 , . . . , yn ) such that the distribution
of w is specified under H0 and likely to be different under Ha .
public policy Test a sample of children who have been through Head Start. Model their
test scores as X_1, . . . , X_n ∼ i.i.d. N(µ_1, σ_1). Do the same for children who have not
been through Head Start, getting Y_1, . . . , Y_n ∼ i.i.d. N(µ_2, σ_2). H0 says µ_1 = µ_2.
Let w = µ̂_1 − µ̂_2. The parameters µ_1, µ_2, σ_1, σ_2 can all be estimated from the data;
therefore w can be calculated and its SD estimated. Ask how many SD’s w is away
from its expected value of 0. If it’s off by many SD’s, more than about 2 or 3, that’s
evidence against H0.
ecology We could do an observational study, beginning with one sample of plots
that had had frequent forest fires in the past and another sample that had had few
fires. Or we could do an experimental study, beginning with a large collection of
plots and subjecting half to a regime of regular burning and the other half to a regime
of no burning. In either case we would measure and compare species diversity in
both sets of plots. If diversity is similar in both groups, there is no reason to doubt
H0. But if diversity is sufficiently different (sufficient meaning large compared to what
is expected by chance under H0), that would be evidence against H0.
To illustrate in more detail, let’s consider testing a new blood pressure medication. The
scientific null hypothesis is that the new medication is not any more effective than the old.
We’ll consider two ways a study might be conducted and see how to test the hypothesis
both ways.
Method 1 A large number of patients are enrolled in a study and their blood pressures
are measured. Half are randomly chosen to receive the new medication (treatment); half
receive the old (control). After a prespecified amount of time, their blood pressure is
remeasured. Let YC,i be the change in blood pressure from the beginning to the end of the
experiment for the i’th control patient and YT,i be the change in blood pressure from the
beginning to the end of the experiment for the i’th treatment patient. The model is
for some unknown means µC and µT and variances σC and σT . The translation of the
hypotheses into statistical terms is

H0 : µ_T = µ_C
Ha : µ_T ≠ µ_C
Because we’re testing a difference in means, let w = Ȳ_T − Ȳ_C. If the sample size n is
reasonably large, then the Central Limit Theorem says approximately w ∼ N(0, σ_w^2) under
H0 with σ_w^2 = (σ_T^2 + σ_C^2)/n. The mean of 0 comes from H0. The variance σ_w^2 comes from
adding variances of independent random variables. σ_T^2 and σ_C^2 and therefore σ_w^2 can be
estimated from the data. So we can calculate w from the data and see whether it is within
about 2 or 3 SD’s of where H0 says it should be. If it isn’t, that’s evidence against H0 .
Method 2 A large number of patients are enrolled in a study and their blood pressure
is measured. They are matched together in pairs according to relevant medical character-
istics. The two patients in a pair are chosen to be as similar to each other as possible. In
each pair, one patient is randomly chosen to receive the new medication (treatment); the
2.7. HYPOTHESIS TESTING 183
other receives the old (control). After a prespecified amount of time their blood pressures
are measured again. Let YT,i and YC,i be the change in blood pressure for the i’th treatment
and i’th control patients. The researcher records

X_i = 1 if Y_{T,i} > Y_{C,i}, and X_i = 0 otherwise.
The model is
X1 , . . . , Xn ∼ i.i.d. Bern(p)
for some unknown probability p. The translation of the hypotheses into statistical terms is
H0 : p = .5
Ha : p ≠ .5
Let w = Σ X_i. Under H0, w ∼ Bin(n, .5). To test H0 we plot the Bin(n, .5) distribution and
see where w falls on the plot. Figure 2.35 shows the plot for n = 100. If w turned out
to be between about 40 and 60, then there would be little reason to doubt H0 . But on the
other hand, if w turned out to be less than 40 or greater than 60, then we would begin to
doubt. The larger |w − 50|, the greater the cause for doubt.
Figure 2.35: The Bin(100, .5) distribution.
This blood pressure example exhibits a feature common to many hypothesis tests.
First, we’re testing a difference in means. I.e., H0 and Ha disagree about a mean, in this
case the mean change in blood pressure from the beginning to the end of the experiment.
So we take w to be the difference in sample means. Second, since the experiment is run
on a large number of people, the Central Limit Theorem says that w will be approximately
Normally distributed. Third, we can calculate or estimate the mean µ0 and SD σ0 under
H0 . So fourth, we can compare the value of w from the data to what H0 says its distribution
should be.
In Method 1 above, that’s just what we did. In Method 2 above, we didn’t use the
Normal approximation; we used the Binomial distribution. But we could have used the
approximation. From facts about the Binomial distribution we know µ_0 = n/2 and σ_0 =
√n/2 under H0. For n = 100, Figure 2.36 compares the exact Binomial distribution to the
Normal approximation.
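A short R sketch of that comparison, along the lines of Figure 2.36:

w <- 30:70
plot ( w, dbinom(w, 100, .5), pch=20,      # exact Bin(100, .5) probabilities (dots)
       xlab="w", ylab="probability" )
lines ( w, dnorm(w, 50, 5) )               # N(50, 5) density (line)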
In general, when the Normal approximation is valid, we compare w to the N(µ0 , σ0 )
density, where µ0 is calculated according to H0 and σ0 is either calculated according to H0
or estimated from the data. If t ≡ |w − µ0 |/σ0 is bigger than about 2 or 3, that’s evidence
against H0 .
Figure 2.36: pdfs of the Bin(100, .5) (dots) and N(50, 5) (line) distributions
Define the two means to be µVC ≡ E[xi ] and µOJ ≡ E[yi ]. The scientific hypothesis and
its alternative, translated into statistical terms become
H0 : µVC = µOJ
Ha : µVC ≠ µOJ
approximately. The statistic w can be calculated, its SD estimated, and its approximate
density plotted as in Figure 2.37. We can see from the Figure, or from the fact that
w/σw ≈ 3.2, that the observed value of w is moderately far from its expected value
under H0 . The data provide moderately strong evidence against H0 .
Figure 2.37: Approximate density of summary statistic w. The black dot is the value of w
observed in the data.
1. Recip identifies the juvenile who received help. In the four lines shown here, it
is always ABB.
2. Father identifies the father of the juvenile. Researchers know the father through
DNA testing of fecal samples. In the four lines shown here, it is always EDW.
3. Maleally identifies the adult male who helped the juvenile. In the fourth line we
see that POW aided ABB who is not his own child.
4. Dadpresent tells whether the father was present in the group when the juvenile
was aided. In this data set it is always Y.
5. Group identifies the social group in which the incident occurred. In the four lines
shown here, it is always OMO.
Let w be the number of cases in which a father helps his own child. (We have slightly
modified the data to avoid some irrelevant complications.) The snippet
dim ( baboons )
sum ( baboons$Father == baboons$Maleally )
reveals that there are n = 147 cases in the data set, and that w = 87 are cases in
which a father helps his own child. The next step is to work out the distribution of w
under H0 : adult male baboons do not know which juveniles are their children.
Let’s examine one group more closely, say the OMO group. Typing
baboons[baboons$Group == "OMO",]
displays the relevant records. There are 13 of them. EDW was the father in 9, POW
was the father in 4. EDW provided the help in 9, POW in 4. The father was the ally
in 9 cases; in 4 he was not. H0 implies that EDW and POW would distribute their
help randomly among the 13 cases. If H0 is true, i.e., if EDW distributes his 9 helps
and POW distributes his 4 helps randomly among the 13 cases, what would be the
distribution of W , the number of times a father helps his own child? We can answer
that question by a simulation in R. (We could also answer it by doing some math or by
knowing the hypergeometric distribution, but that’s not covered in this text.)
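A sketch of such a simulation for the OMO group; the counts (9 helps by EDW, 4 by POW, 13 cases) come from the discussion above, while the object names and number of simulations are arbitrary:

n.sim <- 1000
w.sim <- rep ( NA, n.sim )
fathers <- c ( rep("EDW",9), rep("POW",4) )    # father of each of the 13 juveniles
for ( i in 1:n.sim ) {
  # under H0, the 9 EDW helps and 4 POW helps are assigned randomly to the 13 cases
  allies <- sample ( c ( rep("EDW",9), rep("POW",4) ) )
  w.sim[i] <- sum ( allies == fathers )        # times a father helps his own child
}
hist ( w.sim )
sum ( w.sim >= 9 ) / n.sim                     # how often chance alone gives w of 9 or more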
Try out the simulation for yourself. It shows that the observed number in the data,
w = 9, is not so unusual under H0 .
What about the other social groups? If we find out how many there are, we can do
a similar simulation for each. Let’s write an R function to help.
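A sketch of such a function; the simulation logic mirrors the single-group version above, and the name g.sim matches the object plotted in Figure 2.38 (the column names Group, Father and Maleally are as described earlier):

sim.group <- function ( grp, n.sim=1000 ) {
  b <- baboons [ baboons$Group == grp, ]                # records for this social group
  w.sim <- rep ( NA, n.sim )
  for ( i in 1:n.sim ) {
    allies <- sample ( as.character(b$Maleally) )       # reassign the helps within the group
    w.sim[i] <- sum ( allies == as.character(b$Father) )
  }
  w.sim
}

groups <- unique ( as.character(baboons$Group) )
g.sim <- sapply ( groups, sim.group )                   # one column of simulated w's per group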
Figure 2.38 shows histograms of g.sim for each group, along with a dot showing the
observed value of w in the data set. For some of the groups the observed value of
w, though a bit on the high side, might be considered consistent with H0 . For others,
the observed value of w falls outside the range of what might be reasonably expected
by chance. In a case like this, where some of the evidence is strongly against H0
and some is only weakly against H0 , an inexperienced statistician might believe the
overall case against H0 is not very strong. But that’s not true. In fact, every one of
the groups contributes a little evidence against H0 , and the total evidence against H0
is very strong. To see this, we can combine the separate simulations into one. The
following snippet of code does this. Each male’s help is randomly reassigned to a
juvenile within his group. The number of times when a father helps his own child is
summed over the different groups. Simulated numbers are shown in the histogram in
Figure 2.39. The dot in the figure is at 84, the actual number of instances in the full
data set. Figure 2.39 suggests that it is almost impossible that the 84 instances arose
by chance, as H0 would suggest. We should reject H0 and reach the conclusion that
(a) adult male baboons do know who their own children are, and (b) they give help
preferentially to their own children.
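A sketch of that combined simulation; w.tot matches the name in Figure 2.39, and the same result could also be obtained by summing each row of g.sim from the per-group simulations above:

n.sim <- 1000
w.tot <- rep ( NA, n.sim )
groups <- unique ( as.character(baboons$Group) )
for ( i in 1:n.sim ) {
  tot <- 0
  for ( grp in groups ) {
    b <- baboons [ baboons$Group == grp, ]
    allies <- sample ( as.character(b$Maleally) )       # reassign helps within this group
    tot <- tot + sum ( allies == as.character(b$Father) )
  }
  w.tot[i] <- tot                                       # total over all groups for this simulation
}
hist ( w.tot )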
[Figure 2.38 appears here: five histogram panels, one per social group (OMO, VIV, NYA, WEA, LIN), each with the observed w marked by a dot.]
Figure 2.38: Number of times baboon father helps own child in Example 2.19. Histograms
are simulated according to H0 . Dots are observed data.
Figure 2.39: Histogram of simulated values of w.tot. The dot is the value observed in the
baboon data set.
2.8 Exercises
1. (a) Justify Equation 2.1 on page 107.
(b) Show that the function g(x) defined just after Equation 2.1 is a probability
density. I.e., show that it integrates to 1.
2. This exercise uses the ToothGrowth data from Examples 2.1 and 2.18.
(a) Estimate the effect of delivery mode for doses 1.0 and 2.0. Does it seem that
delivery mode has a different effect at different doses?
(b) Does it seem as though delivery mode changes the effect of dose?
(c) For each delivery mode, make a set of three boxplots to compare the three
doses.
3. This exercise uses data from 272 eruptions of the Old Faithful geyser in Yellowstone
National Park. See Example 2.5. The data are in the R dataset faithful. One
column contains the duration of each eruption; the other contains the waiting time
to the next eruption. Figure 2.12 suggests that the duration of an eruption is a useful
predictor of the waiting time until the next eruption. Figure 2.14 suggests that the
previous waiting time is another potentially useful predictor of the next waiting time.
This exercise explores what can be gained by using the pair (duration, previous
waiting time) to predict the next waiting time.
(a) From Figure 2.11, estimate, visually, how accurately one could predict waiting
time if one knew nothing about the previous duration or the previous waiting
time.
2.8. EXERCISES 193
(b) From Figure 2.12, estimate, visually, how accurately one could predict waiting
time if one knew the duration of the previous eruption.
(c) Plot waiting time as a function of the previous waiting time. You can use a
command such as plot(waiting[-1] ~ waiting[-272]). Describe the result.
Estimate, visually, how accurately one could predict waiting time if one
knew the previous waiting time. Is the previous waiting time a useful pre-
dictor of the next waiting time?
(d) Make a coplot of waiting time as a function of duration, given the previous
waiting time. Does the relationship between waiting time and duration depend
on the length of the previous waiting time? What does that say about the joint
distribution of (waitingi−1 , waitingi , durationi )?
(e) Make a coplot of the next waiting time as a function of the previous waiting
time, given duration. Given duration, are the previous waiting time and the next
waiting time strongly related? What does that say about the joint distribution
of (waitingi−1 , waitingi , durationi )?
(f) If we’re already using duration to predict the next waiting time, will there be
much benefit in also using the previous waiting time to improve the prediction?
4. This exercise relies on data from the neurobiology experiment described in Exam-
ple 2.6.
5. This exercise relies on Example 2.8 about the Slater school. There were 8 cancers
among 145 teachers. Figure 2.20 shows the likelihood function. Suppose the same
incidence rate had been found among more teachers. How would that affect `(θ)?
Make a plot similar to Figure 2.20, but pretending that there had been 80 cancers
among 1450 teachers. Compare to Figure 2.20. What is the result? Does it make
sense? Try other numbers if it helps you see what is going on.
6. This exercise continues Exercise 42 in Chapter 1. Let p be the fraction of the popu-
lation that uses illegal drugs.
(a) Suppose researchers know that p ≈ .1. Jane and John are given the randomized
response question. Jane answers “yes”; John answers “no”. Find the posterior
probability that Jane uses cocaine; find the posterior probability that John uses
cocaine.
(b) Now suppose that p is not known and the researchers give the randomized
response question to 100 people. Let X be the number who answer “yes”.
What is the likelihood function?
(c) What is the m.l.e. of p if X=50, if X=60, if X=70, if X=80, if X=90?
7. This exercise deals with the likelihood function for Poisson distributions.
8. The book Data by Andrews and Herzberg [1985] contains lots of data sets that have
been used for various purposes in statistics. One famous data set records the annual
number of deaths by horsekicks in the Prussian Army from 1875-1894 for each
of 14 corps. Download the data from statlib at http://lib.stat.cmu.edu/
datasets/Andrews/T04.1. (It is Table 4.1 in the book.) Let Yi j be the number of
deaths in year i, corps j, for i = 1875, . . . , 1894 and j = 1, . . . , 14. The Yi j s are in
columns 5–18 of the table.
(e) Is there any evidence that different corps had different death rates? How would
you investigate that possibility?
9. Use the data from Example 2.8. Find the m.l.e. for θ.
10. John is a runner and frequently runs from his home to his office. He wants to measure
the distance, so once a week he drives to work and measures the distance on his car’s
odometer. Unfortunately, the odometer records distance only to the nearest mile.
(John’s odometer changes abruptly from one digit to the next. When the odometer
displays a digit d, it is not possible to infer how close it is to becoming d + 1.) John
drives the route ten times and records the following data from the odometer: 3, 3, 3,
4, 3, 4, 3, 3, 4, 3. Find the m.l.e. of the exact distance. You may make reasonable
assumptions as needed.
11. X1 , . . . , Xn ∼ Normal(µ, 1). Multiple choice: The m.l.e. µ̂ is found from the equation
(a) (d/dµ)(d/dx) f(x1, . . . , xn | µ) = 0
(b) (d/dµ) f(x1, . . . , xn | µ) = 0
(c) (d/dx) f(x1, . . . , xn | µ) = 0
12. This exercise deals with the likelihood function for Normal distributions.
13. Let y1 , . . . , yn be a sample from N(µ, 1). Show that µ̂ = ȳ is the m.l.e.
14. Let y1, . . . , yn be a sample from N(µ, σ) where µ is known. Show that σ̂² = n⁻¹ Σ(yi − µ)² is the m.l.e.
15. Recall the discoveries data from page 10 on the number of great discoveries each
year. Let Yi be the number of great discoveries in year i and suppose Yi ∼ Poi(λ).
Plot the likelihood function `(λ). Figure 1.3 suggested that λ ≈ 3.1 explained the
data reasonably well. How sure can we be about the 3.1?
17. Page 159 discusses a simulation experiment comparing the sample mean and sample
median as estimators of a population mean. Figure 2.27 shows the results of the
simulation experiment. Notice that the vertical scale decreases from panel (a) to
(b), to (c), to (d). Why? Give a precise mathematical formula for the amount by
which the vertical scale should decrease. Does the actual decrease agree with your
formula?
18. In the medical screening example on page 166, find the probability that the patient
has the disease given that the test is negative.
19. Country A suspects country B of having hidden chemical weapons. Based on secret
information from their intelligence agency they calculate
P[B has weapons] = .8. But then country B agrees to inspections, so A sends in-
spectors. If there are no weapons then of course the inspectors won’t find any. But
if there are weapons then they will be well hidden, with only a 20% chance of being
found. I.e.,
P[finding weapons|weapons exist] = .2. (2.19)
No weapons are found. Find the probability that B has weapons. I.e., find
P[B has weapons | no weapons are found].
20. Let T be the amount of time a customer spends on Hold when calling the computer
help line. Assume that T ∼ exp(λ) where λ is unknown. A sample of n calls is
randomly selected. Let t1 , . . . , tn be the times spent on Hold.
21. There are two coins. One is fair; the other is two-headed. You randomly choose a
coin and toss it.
22. There are two coins. For coin A, P[H] = 1/4; for coin B, P[H] = 2/3. You randomly
choose a coin and toss it.
23. At Dupont College (apologies to Tom Wolfe) Math SAT scores among math majors
are distributed N(700, 50) while Math SAT scores among non-math majors are dis-
tributed N(600, 50). 5% of the students are math majors. A randomly chosen student
has a math SAT score of 720. Find the probability that the student is a math major.
24. The Great Randi is a professed psychic and claims to know the outcome of coin
flips. This problem concerns a sequence of 20 coin flips that Randi will try to guess
(or not guess, if his claim is correct).
(b) Two statistics students, a skeptic and a believer discuss Randi after class.
Believer: I believe her, I think she’s psychic.
Skeptic: I doubt it. I think she’s a hoax.
Believer: How could you be convinced? What if Randi guessed 10 in a row?
What would you say then?
Skeptic: I would put that down to luck. But if she guessed 20 in a row then I
would say P[Randi can guess coin flips] ≈ .5.
Find the skeptic’s prior probability that Randi can guess coin flips.
(c) Suppose that Randi doesn’t claim to guess coin tosses perfectly, only that she
can guess them at better than 50%. 100 trials are conducted. Randi gets 60
correct. Write down H0 and Ha appropriate for testing Randi’s claim. Do the
data support the claim? What if 70 were correct? Would that support the
claim?
(d) The Great Sandi, a statistician, writes the following R code to calculate a prob-
ability for Randi.
y <- rbinom ( 500, 100, .5)
sum ( y == 60 ) / 500
What is Sandi trying to calculate? Write a formula (Don’t evaluate it.) for the
quantity Sandi is trying to calculate.
25. Let w be the fraction of free throws that Shaquille O’Neal (or any other player of
your choosing) makes during the next NBA season. Find a density that approxi-
mately represents your prior opinion for w.
26. Let t be the amount of time between the moment when the sun first touches the hori-
zon in the afternoon and the moment when it sinks completely below the horizon.
Without making any observations, assess your distribution for t.
27. Assess your prior distribution for b, the proportion of M&M’s that are brown. Buy as
many M&M’s as you like and count the number of browns. Calculate your posterior
distribution. Give the M&M’s to your professor, so he or she can check your work.
28. (a) Let y ∼ N(θ, 1) and let the prior distribution for θ be θ ∼ N(0, 1).
i. When y has been observed, what is the posterior density of θ?
ii. Show that the density in part i. is a Normal density.
iii. Find its mean and SD.
(b) Let y ∼ N(θ, σy ) and let the prior distribution for θ be θ ∼ N(m, σ). Suppose
that σy , m, and σ are known constants.
i. When y has been observed, what is the posterior density of θ?
ii. Show that the density in part i. is a Normal density.
iii. Find its mean and SD.
(c) Let y1 , . . . , yn be a sample of size n from N(θ, σy ) and let the prior distribution
for θ be θ ∼ N(m, σ). Suppose that σy , m, and σ are known constants.
i. When y1 , . . . , yn have been observed, what is the posterior density of θ?
ii. Show that the density in part i. is a Normal density.
iii. Find its mean and SD.
30. Refer to the discussion of predictive intervals on page 175. Justify the claim that
(−∞, −.72), (−3.65, −0.36), and (−3.28, ∞) are 90% prediction intervals. Find the
corresponding 80% prediction intervals.
31. (a) Following Example 2.17 (pg. 176), find Pr[yf = k | y1, . . . , yn] for k = 1, 2, 3, 4.
(b) Using the results from part (a), make a plot analogous to Figure 2.33 (pg. 175).
32. Suppose you want to test whether the random number generator in R generates each
of the digits 0, 1, . . . , 9 with probability 0.1. How could you do it? You may con-
sider first testing whether R generates 0 with the right frequency, then repeating the
analysis for each digit.
33. (a) Repeat the analysis of Example 2.18 (pg. 184), but for dose = 1 and dose = 2.
(b) Test the hypothesis that increasing the dose from 1 to 2 makes no difference in
tooth growth.
(c) Test the hypothesis that the effect of increasing the dose from 1 to 2 is the same
for supp = VC as it is for supp = OJ.
(d) Do the answers to parts (a), (b) and (c) agree with your subjective assessment
of Figures 2.2, 2.3, and 2.6?
34. Continue Exercise 43 from Chapter 1. The autoganzfeld trials resulted in X = 122.
y <- rexp(15,.01)
m <- mean(y)
s <- sqrt( var(y ) / 15)
lo <- m - 2*s
hi <- m + 2*s
n <- 0
for ( i in 1:1000 ) {
y <- rexp(15,.01)
m <- mean(y)
s <- sqrt ( var(y ))
lo <- m - 2*s
hi <- m + 2*s
if ( lo < 100 & hi > 100 ) n <- n+1
}
print (n/1000)
37. Refer to the R code in Example 2.1 (pg. 101). Why was it necessary to have a brace
(“{”) after the line
for ( j in 1:3 )
but not after the line
for ( i in 1:2 )?
Chapter 3
Regression
3.1 Introduction
Regression is the study of how the distribution of one variable, Y, changes according to the
value of another variable, X. R comes with many data sets that offer regression examples.
Four are shown in Figure 3.1.
1. The data set attenu contains data on several variables from 182 earthquakes, in-
cluding hypocenter-to-station distance and peak acceleration. Figure 3.1 (a) shows
acceleration plotted against distance. There is a clear relationship between X =
distance and the distribution of Y = acceleration. When X is small, the distribution
of Y has a long right-hand tail. But when X is large, Y is always small.
2. The data set airquality contains data about air quality in New York City. Ozone
levels Y are plotted against temperature X in Figure 3.1 (b). When X is small then
the distribution of Y is concentrated on values below about 50 or so. But when X is
large, Y can range up to about 150 or so.
3. Figure 3.1 (c) shows data from mtcars. Weight is on the abscissa and the type of
transmission (manual=1, automatic=0) is on the ordinate. The distribution of weight
is clearly different for cars with automatic transmissions than for cars with manual
transmissions.
4. The data set faithful contains data about eruptions of the Old Faithful geyser in
Yellowstone National Park. Figure 3.1 (d) shows Y = time to next eruption plotted
against X = duration of current eruption. Small values of X tend to indicate small
values of Y.
[Figure 3.1: (a) acceleration vs. distance (attenu); (b) ozone vs. temperature (airquality); (c) manual transmission vs. weight (mtcars); (d) waiting time vs. eruption duration (faithful).]
par ( mfrow=c(2,2) )
data ( attenu )
plot ( attenu$dist, attenu$accel, xlab="Distance",
ylab="Acceleration", main="(a)", pch="." )
data ( airquality )
plot ( airquality$Temp, airquality$Ozone, xlab="temperature",
ylab="ozone", main="(b)", pch="." )
data ( mtcars )
stripchart ( mtcars$wt ~ mtcars$am, pch=1, xlab="Weight",
method="jitter", ylab="Manual Transmission",
main="(c)" )
data ( faithful )
plot ( faithful, pch=".", main="(d)" )
Both continuous and discrete variables can turn up in regression problems. In the
attenu, airquality and faithful datasets, both X and Y are continuous. In mtcars, it
seems natural to think of how the distribution of Y = weight varies with X = transmission,
in which case X is discrete and Y is continuous. But we could also consider how the
fraction of cars Y with automatic transmissions varies as a function of X = weight, in
which case Y is discrete and X is continuous.
In many regression problems we just want to display the relationship between X and
Y. Often a scatterplot or stripchart will suffice, as in Figure 3.1. Other times, we will use
a statistical model to describe the relationship. The statistical model may have unknown
parameters which we may wish to estimate or otherwise make inference for. Examples of
parametric models will come later. Our study of regression begins with data display.
In many instances a simple plot is enough to show the relationship between X and Y.
But sometimes the relationship is obscured by the scatter of points. Then it helps to draw
a smooth curve through the data. Examples 3.1 and 3.2 illustrate.
Example 3.1 (1970 Draft Lottery)
The result of the 1970 draft lottery is available at DASL. The website explains:
“In 1970, Congress instituted a random selection process for the mili-
tary draft. All 366 possible birth dates were placed in plastic capsules in
a rotating drum and were selected one by one. The first date drawn from
the drum received draft number one and eligible men born on that date
were drafted first. In a truly random lottery there should be no relationship
between the date and the draft number.”
Figure 3.2 shows the data, with X = day of year and Y = draft number. There is no
apparent relationship between X and Y.
More formally, a relationship between X and Y usually means that the expected
value of Y is different for different values of X. (We don’t consider changes in SD
or other aspects of the distribution here.) Typically, when X is a continuous variable,
changes in Y are smooth, so we would adopt the model
E[Y | X = x] = g(x)     (3.1)
where g is a smooth function of x.
x <- draft$Day.of.year
y <- draft$Draft.No
plot ( x, y, xlab="Day of year", ylab="Draft number" )
lines ( lowess ( x, y ) )
Figure 3.2: 1970 draft lottery. Draft number vs. day of year
Figure 3.3: 1970 draft lottery. Draft number vs. day of year. Solid curve fit by lowess;
dashed curve fit by supsmu.
In a regression problem the data are pairs (xi , yi ) for i = 1, . . . , n. For each i, yi is a
random variable whose distribution depends on xi . We write
yi = g(xi) + εi.     (3.2)
Equation 3.2 expresses yi as a systematic or explainable part g(xi) and an unexplained part
εi. g is called the regression function. Often the statistician’s goal is to estimate g. As
usual, the most important tool is a simple plot, similar to those in Figures 3.1 through 3.4.
Once we have an estimate, ĝ, for the regression function g (either by a scatterplot
smoother or by some other technique) we can calculate ri ≡ yi − ĝ(xi ). The ri ’s are estimates
[Figure 3.4: total new seedlings vs. quadrat index.]
of the εi’s and are called residuals. The εi’s themselves are called errors. Because the ri’s
are estimates they are sometimes written with the “hat” notation:
ε̂i = ri = estimate of εi
Residuals are used to evaluate and assess the fit of models for g, a topic which is beyond
the scope of this book.
In regression we use one variable to explain or predict the other. It is customary in
statistics to plot the predictor variable on the x-axis and the predicted variable on the y-
axis. The predictor is also called the independent variable, the explanatory variable, the
covariate, or simply x. The predicted variable is called the dependent variable, or simply
y. (In Economics x and y are sometimes called the exogenous and endogenous variables,
respectively.) Predicting or explaining y from x is not perfect; knowing x does not tell
us y exactly. But knowing x does tell us something about y and allows us to make more
accurate predictions than if we didn’t know x.
Regression models are agnostic about causality. In fact, instead of using x to predict y,
we could use y to predict x. So for each pair of variables there are two possible regressions:
using x to predict y and using y to predict x. Sometimes neither variable causes the other.
For example, consider a sample of cities and let x be the number of churches and y be the
number of bars. A scatterplot of x and y will show a strong relationship between them. But
the relationship is caused by the population of the cities. Large cities have large numbers
of bars and churches and appear near the upper right of the scatterplot. Small cities have
small numbers of bars and churches and appear near the lower left.
Scatterplot smoothers are a relatively unstructured way to estimate g. Their output
follows the data points more or less closely as the tuning parameter allows ĝ to be more or
less wiggly. Sometimes an unstructured approach is appropriate, but not always. The rest
of Chapter 3 presents more structured ways to estimate g.
There are 20 Beef, 17 Meat and 17 Poultry hot dogs in the sample. We think of
them as samples from much larger populations. Figure 3.6 shows density estimates
of calorie content for the three types. For each type of hot dog, the calorie contents
cluster around a central value and fall off to either side without a particularly long left or
right tail. So it is reasonable, at least as a first attempt, to model the three distributions
as Normal. Since the three distributions have about the same amount of spread we
model them as all having the same SD. We adopt the model
B1 , . . . , B20 ∼ i.i.d. N(µB , σ)
M1 , . . . , M17 ∼ i.i.d. N(µ M , σ) (3.3)
P1 , . . . , P17 ∼ i.i.d. N(µP , σ),
where the Bi ’s, Mi ’s and Pi ’s are the calorie contents of the Beef, Meat and Poultry hot
dogs respectively. Figure 3.6 suggests
µB ≈ 150; µ M ≈ 160; µP ≈ 120; σ ≈ 30.
An equivalent formulation is
B1 , . . . , B20 ∼ i.i.d. N(µ, σ)
M1 , . . . , M17 ∼ i.i.d. N(µ + δ M , σ) (3.4)
P1 , . . . , P17 ∼ i.i.d. N(µ + δP , σ)
Models 3.3 and 3.4 are mathematically equivalent. Each has three parameters
for the population means and one for the SD. They describe exactly the same set of
distributions and the parameters of either model can be written in terms of the other.
The equivalence is shown in Table ??. For the purpose of further exposition we adopt
Model 3.4.
We will see later how to carry out inferences regarding the parameters. For now
we stop with the model.
par ( mfrow=c(3,1))
plot ( density ( hotdogs$C[hotdogs$T=="Beef"], bw=20 ),
xlim=c(50,250), yaxt="n", ylab="", xlab="calories",
main="Beef" )
plot ( density ( hotdogs$C[hotdogs$T=="Meat"], bw=20 ),
xlim=c(50,250), yaxt="n", ylab="", xlab="calories",
main="Meat" )
plot ( density ( hotdogs$C[hotdogs$T=="Poultry"], bw=20 ),
xlim=c(50,250), yaxt="n", ylab="", xlab="calories",
main="Poultry" )
[Figure 3.6: density estimates of calorie content for Beef, Meat, and Poultry hot dogs.]
The PlantGrowth data set in R provides another example. As R explains, the data
previously appeared in Dobson [1983] and are
weight group
1 4.17 ctrl
2 5.58 ctrl
3 5.18 ctrl
Figure 3.7 shows the whole data set.
It appears that plants grown under different treatments tend to have different weights.
In particular, plants grown under Treatment 1 appear to be smaller on average than plants
grown under either the Control or Treatment 2. What statistical model should we adopt?
[Figure 3.7: stripchart of plant weight for the ctrl, trt1, and trt2 groups.]
First, we think of the 10 plants grown under each condition as a sample from a much
larger population of plants that could have been grown. Second, a look at the data sug-
gests that the weights in each group are clustered around a central value, approximately
symmetrically without an especially long tail in either direction. So we model the weights
as having Normal distributions.
But we should allow for the possibility that the three populations have different means.
(We do not address the possibility of different SD’s here.) Let µ be the population mean of
plants grown under the Control condition, δ1 and δ2 be the extra weight due to Treatment 1
and Treatment 2 respectively, and σ be the SD. We adopt the model
C1, . . . , C10 ∼ i.i.d. N(µ, σ)
T1,1, . . . , T1,10 ∼ i.i.d. N(µ + δ1, σ)     (3.5)
T2,1, . . . , T2,10 ∼ i.i.d. N(µ + δ2, σ)
where the Ci’s, T1,i’s and T2,i’s are the weights of plants grown under the Control, Treatment 1 and Treatment 2 conditions.
There is a mathematical structure shared by 3.4, 3.5 and many other statistical models,
and some common statistical notation to describe it. We’ll use the hot dog data to illustrate.
Example 3.4 (Hot Dogs, continued)
Example 3.4 continues Example 2.2. First, there is the main variable of interest, often
called the response variable and denoted Y . For the hot dog data Y is calorie content.
(Another analysis could be made in which Y is sodium content.)
The distribution of Y is different under different circumstances. In this example, Y
has a Normal distribution whose mean depends on the type of hot dog. In general, the
distribution of Y will depend on some quantity of interest, called a covariate, regressor,
or explanatory variable. Covariates are often called X .
The data consists of multiple data points, or cases. We write Yi and Xi for the i’th
case. It is usual to represent the data as a matrix with one row for each case. One
column is for Y ; the other columns are for explanatory variables. For the hot dog data
the matrix is
Rewriting the data matrix in a slightly different form reveals some mathematical
structure common to many models. There are 54 cases in the hotdog study. Let
(Y1 , . . . , Y54 ) be their calorie contents. For each i from 1 to 54, define two new variables
X1,i and X2,i by
X1,i = 1 if the i’th hot dog is Meat, and X1,i = 0 otherwise;
X2,i = 1 if the i’th hot dog is Poultry, and X2,i = 0 otherwise.
X1,i and X2,i are indicator variables. Two indicator variables suffice because, for the i’th
hot dog, if we know X1,i and X2,i , then we know what type it is. (More generally, if there
are k populations, then k − 1 indicator variables suffice.) With these new variables,
Model 3.4 can be rewritten as
Yi = µ + δM X1,i + δP X2,i + εi     (3.6)
for i = 1, . . . , 54, where
ε1, . . . , ε54 ∼ i.i.d. N(0, σ).
Equation 3.6 is actually 54 separate equations, one for each case. We can write
them succinctly using vector and matrix notation. Let
Y = (Y1, . . . , Y54)t,
B = (µ, δM, δP)t,
E = (ε1, . . . , ε54)t,
(The transpose is there because, by convention, vectors are column vectors.) and
X = [ 1 0 0
      ⋮ ⋮ ⋮
      1 0 0
      1 1 0
      ⋮ ⋮ ⋮
      1 1 0
      1 0 1
      ⋮ ⋮ ⋮
      1 0 1 ]
X is a 54 × 3 matrix. The first 20 lines are for the Beef hot dogs; the next 17 are for
the Meat hot dogs; and the final 17 are for the Poultry hot dogs. Equation 3.6 can be
written
Y = XB + E (3.7)
Equations similar to 3.6 and 3.7 are common to many statistical models. For the
PlantGrowth data (page 214) let
Y = (Y1, . . . , Y30)t
B = (µ, δ1, δ2)t
E = (ε1, . . . , ε30)t
and
X = [ 1 0 0
      ⋮ ⋮ ⋮
      1 0 0
      1 1 0
      ⋮ ⋮ ⋮
      1 1 0
      1 0 1
      ⋮ ⋮ ⋮
      1 0 1 ]
where the first 10 rows are for the Control plants, the next 10 for Treatment 1, and the final 10 for Treatment 2.
Then analogously to 3.6 and 3.7 we can write
Yi = µ + δ1 X1,i + δ2 X2,i + εi     (3.8)
and
Y = XB + E.     (3.9)
Notice that Equation 3.6 is nearly identical to Equation 3.8 and Equation 3.7 is identical
to Equation 3.9. Their structure is common to many statistical models. Each Yi is written
as the sum of two parts. The first part, XB, (µ + δM X1,i + δP X2,i for the hot dogs; µ + δ1 X1,i +
δ2 X2,i for PlantGrowth) is called systematic, deterministic, or signal and represents the
explainable differences between populations. The second part, E, or εi, is random, or noise,
and represents the differences between hot dogs or plants within a single population. The
εi’s are called errors. In statistics, the word “error” does not indicate a mistake; it simply
means the noise part of a model, or the part left unexplained by covariates. Modelling a
response variable as
response = signal + noise
is a useful way to think and will recur throughout this book.
In 3.6 the signal µ + δ M X1,i + δP X2,i is a linear function of (µ, δ M , δP ). In 3.8 the signal
µ + δ1 X1,i + δ2 X2,i is a linear function of (µ, δ1 , δ2 ). Models in which the signal is a linear
function of the parameters are called linear models.
In our examples so far, X has been an indicator. For each of a finite number of X’s
there has been a corresponding population of Y’s. As the next example illustrates, linear
models can also arise when X is a continuous variable.
Example 3.5 (Ice Cream Consumption)
This example comes from DASL, http://lib.stat.cmu.edu/DASL/Datafiles/IceCream.html.
Let ICi be the ice cream consumption and tempi the mean temperature in week i, for i = 1, . . . , 30, and define
Y = (IC1, . . . , IC30)t
B = (β0, β1)t
E = (ε1, . . . , ε30)t
and
X = [ 1 temp1
      1 temp2
      ⋮    ⋮
      1 temp30 ]
The model is
Y = XB + E. (3.11)
Equation 3.11 is a linear model, identical to Equations 3.7 and 3.9.
Equation 3.7 (equivalently, 3.9 or 3.11) is the basic form of all linear models. Linear
models are extremely useful because they can be applied to so many kinds of data sets.
Section 3.2.2 investigates some of their theoretical properties and R’s functions for fitting
them to data.
Figure 3.8: Ice cream consumption (pints per capita) versus mean temperature (◦ F)
In general there is an arbitrary number of cases, say n, and an arbitrary number of covari-
ates, say p. Equation 3.12 is shorthand for the collection of univariate equations
Yi = β0 + β1 X1,i + · · · + βp Xp,i + εi,    i = 1, . . . , n,
or equivalently,
Yi ∼ N(µi, σ)
for i = 1, . . . , n, where µi = β0 + Σj βj Xj,i and the εi’s are i.i.d. N(0, σ). There are p + 2
parameters: (β0, . . . , βp, σ). The likelihood function is
ℓ(β0, . . . , βp, σ) = ∏i p(yi | β0, . . . , βp, σ)
                     = ∏i (2πσ²)^(−1/2) e^{ −(1/2) [(yi − µi)/σ]² }
                     = ∏i (2πσ²)^(−1/2) e^{ −(1/2) [(yi − (β0 + Σj βj Xj,i))/σ]² }
                     = (2πσ²)^(−n/2) e^{ −(1/(2σ²)) Σi (yi − (β0 + Σj βj Xj,i))² }     (3.14)
Setting the partial derivatives of log ℓ to 0 gives the equations

(1/σ̂²) Σi [ yi − (β̂0 + Σj β̂j Xj,i) ] = 0
(1/σ̂²) Σi [ yi − (β̂0 + Σj β̂j Xj,i) ] X1,i = 0
    ⋮                                              (3.15)
(1/σ̂²) Σi [ yi − (β̂0 + Σj β̂j Xj,i) ] Xp,i = 0
−n/σ̂ + (1/σ̂³) Σi [ yi − (β̂0 + Σj β̂j Xj,i) ]² = 0
Note the hat notation to indicate estimates. The m.l.e.’s (β̂0 , . . . , β̂ p , σ̂) are the values of
the parameters that make the derivatives equal to 0 and therefore satisfy Equations 3.15.
The first p + 1 of these equations can be multiplied by σ2 , yielding p + 1 linear equations
in the p + 1 unknown β’s. Because they’re linear, they can be solved by linear algebra. The
solution is
B̂ = (Xt X)−1 Xt Y,
using the notation of Equation 3.12.
For each i ∈ {1, . . . , n}, let
ri = yi − ŷi = yi − (β̂0 + β̂1 X1,i + · · · + β̂p Xp,i).
The ri’s are the residuals and are estimates of the errors εi. Finally, referring to the last line of Equation 3.15, the
m.l.e. σ̂ is found from
0 = −n/σ̂ + (1/σ̂³) Σi [ yi − (β̂0 + Σj β̂j Xj,i) ]²
  = −n/σ̂ + (1/σ̂³) Σi ri²
so
σ̂² = (1/n) Σi ri²
and
σ̂ = ( Σi ri² / n )^(1/2).     (3.16)
In addition to the m.l.e.’s we often want to look at the likelihood function to judge,
for example, how accurately each β can be estimated. The likelihood function for a sin-
gle βi comes from the Central Limit Theorem. We will not work out the math here but,
fortunately, R will do all the calculations for us. We illustrate with the hot dog data.
Example 3.6 (Hot Dogs, continued)
Estimating the parameters of a model is called fitting a model to data. R has built-in
commands for fitting models. The following snippet fits Model 3.7 to the hot dog data.
The syntax is similar for many model fitting commands in R, so it is worth spending
some time to understand it.
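The fitting command itself is not reproduced here, but judging from the summary output below it takes the following form (hotdogs is the data frame holding the hot dog data, with columns Calories and Type):

hotdogs.fit <- lm ( hotdogs$Calories ~ hotdogs$Type )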
• ~ stands for “is a function of”. It is used in many of R’s modelling commands. y
~ x is called a formula and means that y is modelled as a function of x. In the
case at hand, Calories is modelled as a function of Type.
• R automatically creates the X matrix in Equation 3.12 and estimates the param-
eters.
• The result of fitting the model is stored in a new object called hotdogs.fit. Of
course we could have called it anything we like.
To see hotdogs.fit, use R’s summary function. Its use and the resulting output are
shown in the following snippet.
> summary(hotdogs.fit)
Call:
lm(formula = hotdogs$Calories ~ hotdogs$Type)
Residuals:
Min 1Q Median 3Q Max
-51.706 -18.492 -5.278 22.500 36.294
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 156.850 5.246 29.901 < 2e-16 ***
hotdogs$TypeMeat 1.856 7.739 0.240 0.811
hotdogs$TypePoultry -38.085 7.739 -4.921 9.4e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The most important part of the output is the table labelled Coefficients:. There is
one row of the table for each coefficient. Their names are on the left. In this table the
names are Intercept, hotdogs$TypeMeat, and hotdogs$TypePoultry. The first
column is labelled Estimate. Those are the m.l.e.’s. R has fit the model
Yi = β0 + β1 X1,i + β2 X2,i + εi
where X1 and X2 are indicator variables for the type of hot dog. The model implies
β̂0 = 156.850
β̂1 = 1.856
β̂2 = −38.085
The next column of the table is labelled Std. Error. It contains the SD’s of the
estimates. In this case, β̂0 has an SD of about 5.2; β̂1 has an SD of about 7.7, and β̂2
also has an SD of about 7.7. The Central Limit Theorem says that, approximately in
large samples, each β̂i has a Normal distribution centered at the true βi.
The SD’s in the table are estimates of the SD’s in the Central Limit Theorem.
Figure 3.9 plots the likelihood functions. The interpretation is that β0 is likely some-
where around 157, plus or minus about 10 or so; β1 is somewhere around 2, plus or
minus about 15 or so; and β2 is somewhere around -38, plus or minus about 15 or so.
(Compare to Table ??.) In particular, there is no strong evidence that Meat hot dogs
have, on average, more or fewer calories than Beef hot dogs; but there is quite strong
evidence that Poultry hot dogs have considerably fewer.
3.2. NORMAL LINEAR MODELS 227
Figure 3.9: Likelihood functions for (µ, δ M , δP ) in the Hot Dog example.
Example 3.7 (mtcars)
This example examines the mtcars data (Figure 3.1, panel (c)) more thoroughly with the goal of modelling mpg (miles per gallon) as a
function of the other variables. As usual, type data(mtcars) to load the data into R
and help(mtcars) for an explanation. As R explains:
“The data was extracted from the 1974 Motor Trend US magazine, and
comprises fuel consumption and 10 aspects of automobile design and per-
formance for 32 automobiles (1973-74 models).”
In an exploratory exercise such as this, it often helps to begin by looking at the data.
Accordingly, Figure 3.10 is a pairs plot of the data, using just the continuous variables.
pairs ( mtcars[,c(1,3:7)] )
Clearly, mpg is related to several of the other variables. Weight is an obvious and
intuitive example. The figure suggests that the linear model
mpg = β0 + β1 wt + (3.17)
is a good start to modelling the data. Figure 3.11(a) is a plot of mpg vs. weight plus
the fitted line. The estimated coefficients turn out to be β̂0 ≈ 37.3 and β̂1 ≈ −5.34. The
interpretation is that mpg decreases by about 5.34 for every 1000 pounds of weight.
Note: this does not mean that if you put a 1000 pound weight in your car your mpg
will decrease by 5.34. It means that if car A weighs about 1000 pounds less than car
B, then we expect car A to get an extra 5.34 miles per gallon. But there are likely
many differences between A and B besides weight. The 5.34 accounts for all of those
differences, on average.
We could just as easily have begun by fitting mpg as a function of horsepower with
the model
mpg = γ0 + γ1 hp + (3.18)
We use γ’s to distinguish the coefficients in Equation 3.18 from those in Equation 3.17.
The m.l.e.’s turn out to be γ̂0 ≈ 30.1 and γ̂1 ≈ −0.069. Figure 3.11(b) shows the
corresponding scatterplot and fitted line. Which model do we prefer? Choosing among
different possible models is a major area of statistical practice with a large literature
that can be highly technical. In this book we show just a few considerations.
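For reference, the two fits discussed below, mpg.fit1 and mpg.fit2, can be obtained with lm; a minimal sketch, with object names chosen to match the residual-plot code later in this section:

data ( mtcars )
mpg.fit1 <- lm ( mpg ~ wt, data=mtcars )    # Equation 3.17: mpg as a function of weight
mpg.fit2 <- lm ( mpg ~ hp, data=mtcars )    # Equation 3.18: mpg as a function of horsepower
coef ( mpg.fit1 )                           # roughly 37.3 and -5.34
coef ( mpg.fit2 )                           # roughly 30.1 and -0.069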
One way to judge models is through residual plots, which are plots of residuals
versus either X variables or fitted values. If models are adequate, then residual plots
Figure 3.10: pairs plot of the mtcars data. Type help(mtcars) in R for an explanation.
should show no obvious patterns. Patterns in residual plots are clues to model in-
adequacy and ways to improve models. Figure 3.11(c) and (d) are residual plots for
mpg.fit1 (mpg vs. wt) and mpg.fit2 (mpg vs. hp). There are no obvious patterns in
panel (c). In panel (d) there is a suggestion of curvature. For fitted values between
about 15 and 23, residuals tend to be low, but for fitted values less than about 15 or
greater than about 23, residuals tend to be high (the same pattern might have been
noticed in panel (b)), suggesting that mpg might be better fit as a nonlinear function of
hp. We do not pursue that suggestion further at the moment, merely noting that there
may be a minor flaw in mpg.fit2 and we therefore slightly prefer mpg.fit1.
Another thing to note from panels (c) and (d) is the overall size of the residuals.
In (c), they run from about -4 to about +6, while in (d) they run from about -6 to about
+6. That is, the residuals from mpg.fit2 tend to be slightly larger in absolute value
than the residuals from mpg.fit1, suggesting that wt predicts mpg slightly better than
does hp. That impression can be confirmed by getting the summary of both fits and
checking σ̂. From mpg.fit1 σ̂ ≈ 3.046 while from mpg.fit2 σ̂ ≈ 3.863. I.e., from wt
we can predict mpg to within about 6 or so (two SD’s) while from hp we can predict
mpg only to within about 7.7 or so. For this reason too, we slightly prefer mpg.fit1 to
mpg.fit2.
What about the possibility of using both weight and horsepower to predict mpg?
Consider
mpg.fit3 <- lm ( mpg ~ wt + hp, data=mtcars )
Figure 3.11: mtcars — (a): mpg vs. wt; (b): mpg vs. hp; (c): residual plot from mpg ~
wt; (d): residual plot from mpg ~ hp; (e): residual plot from mpg ~ wt+hp
# panel c
plot ( fitted(mpg.fit1), resid(mpg.fit1), main="(c)",
xlab="fitted values from fit1", ylab="resid" )
# panel d
plot ( fitted(mpg.fit2), resid(mpg.fit2),
xlab="fitted values from fit2", ylab="resid",
main="(d)" )
In Example 3.7 we fit three models for mpg, repeated here with their original equation
numbers.
mpg = β0 + β1 wt + ε     (3.17)
mpg = γ0 + γ1 hp + ε     (3.18)
mpg = δ0 + δ1 wt + δ2 hp + ε     (3.19)
What is the connection between, say, β1 and δ1 , or between γ1 and δ2 ? β1 is the average
mpg difference between two cars whose weights differ by 1000 pounds. Since heavier
cars tend to be different than lighter cars in many ways, not just in weight, β1 captures
the net effect on mpg of all those differences. On the other hand, δ1 is the average mpg
difference between two cars of identical horsepower but whose weights differ by 1000
pounds. Figure 3.12 shows the likelihood functions of these four parameters. The evidence
suggests that β1 is probably in the range of about -7 to about -4, while δ1 is in the range of
about -6 to -2. It’s possible that β1 ≈ δ1 . On the other hand, γ1 is probably in the interval
(−.1, −.04) while δ2 is probably in the interval (−.05, 0). It’s quite likely that γ1 ≠ δ2.
Scientists sometimes ask the question “What is the effect of variable X on variable Y?”
That question does not have an unambiguous answer; the answer depends on which other
variables are accounted for and which are not.
[Figure 3.12: likelihood functions for β1, γ1, δ1, and δ2.]
the number of pine cones on each tree. For each tree let X be its DBH and Y be either
1 or 0 according to whether the tree has pine cones.
Figure 3.13(a) is a plot of Y versus X for all the trees in Ring 1. It does appear that
larger trees are more likely to have pine cones.
Fitting straight lines to Figure 3.13 doesn’t make sense. In panel (a) what we need is a
curve such that
1. E[Y | X] = P[Y = 1 | X] is close to 0 when X is smaller than about 10 or 12 cm, and
2. E[Y | X] = P[Y = 1 | X] is close to 1 when X is large.
One such curve is
E[Y | X] = P[Y = 1 | X] = e^(β0 + β1 x) / (1 + e^(β0 + β1 x))     (3.20)
Figure 3.14 shows the same data as Figure 3.13 with some curves added according to
Equation 3.20. The values of β0 and β1 are in Table ??.
             β0      β1
(a)  solid   −8      .45
     dashed  −7.5    .36
     dotted  −5      .45
(b)  solid   20      −.3
     dashed  15      −.23
     dotted  18      −.3
par ( mfrow=c(2,1) )
plot ( cones$dbh[ring1], mature[ring1], xlab="DBH",
ylab="pine cones present", main="(a)" )
x <- seq ( 4, 25, length=40 )
b0 <- c(-8, -7.5, -5)
b1 <- c ( .45, .36, .45 )
for ( i in 1:3 )
lines ( x, exp(b0[i] + b1[i]*x)/(1 + exp(b0[i] + b1[i]*x) ),
lty=i )
Figure 3.13: (a): pine cone presence/absence vs. dbh. (b): O-ring damage vs. launch
temperature
Figure 3.14: (a): pine cone presence/absence vs. dbh. (b): O-ring damage vs. launch
temperature, with some logistic regression curves
Model 3.20 is known as logistic regression. Let the i’th observation have covariate xi
and probability of success θi = E[Yi | xi ]. Define
φi ≡ log( θi / (1 − θi) ).
Then Model 3.20 says
φi = β0 + β1 xi,
or, equivalently,
θi = e^(φi) / (1 + e^(φi)).
This is called a generalized linear model or glm because it is a linear model for φ, a
transformation of E(Y | x) rather than for E(Y | x) directly. The quantity β0 + β1 x is called
the linear predictor. If β1 > 0, then as x → +∞, θ → 1 and as x → −∞, θ → 0. If
β1 < 0 the situation is reversed. β0 is like an intercept; it controls how far to the left or
right the curve is. β1 is like a slope; it controls how quickly the curve moves between its
two asymptotes.
Logistic regression and, indeed, all generalized linear models differ from linear regres-
sion in two ways: the regression function is nonlinear and the distribution of Y | x is not
Normal. These differences imply that the methods we used to analyze linear models are
not correct for generalized linear models. We need to derive the likelihood function and
find new calculational algorithms.
This is a rather complicated function of the two variables (β0 , β1 ). However, a Central
Limit Theorem applies to give a likelihood function for β0 and β1 that is accurate when
n is reasonable large. The theory is beyond the scope of this book, but R will do the
calculations for us. We illustrate with the pine cone data from Example 3.8. Figure 3.15
shows the likelihood function.
• mature is an indicator variable for whether a tree has at least one pine cone.
• The lines b0 <- ... and b1 <- ... set some values of (β0 , β1 ) at which to eval-
uate the likelihood. They were chosen after looking at the output from fitting the
logistic regression model.
3.3. GENERALIZED LINEAR MODELS 242
[Figure 3.15: contour plot of the likelihood function of (β0, β1) for the pine cone data.]
• lik <- ... creates a matrix to hold values of the likelihood function.
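The snippet itself does not appear here; a sketch of the kind of calculation the bullets describe, assuming mature is the 0/1 response and dbh the corresponding diameters for the trees being analyzed (the grid limits follow Figure 3.15):

b0 <- seq ( -11, -4, length=50 )
b1 <- seq ( .15, .50, length=50 )
loglik <- matrix ( NA, length(b0), length(b1) )
for ( i in seq(along=b0) )
  for ( j in seq(along=b1) ) {
    theta <- exp ( b0[i] + b1[j]*dbh ) / ( 1 + exp ( b0[i] + b1[j]*dbh ) )
    loglik[i,j] <- sum ( dbinom ( mature, 1, theta, log=TRUE ) )
  }
lik <- exp ( loglik - max(loglik) )   # rescale so the maximum is 1
contour ( b0, b1, lik, xlab=expression(beta[0]), ylab=expression(beta[1]) )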
One notable feature of Figure 3.15 is the diagonal slope of the contour ellipses. The
meaning is that we do not have independent information about β0 and β1 . For example if
we thought, for some reason, that β0 ≈ −9, then we could be fairly confident that β1 is
in the neighborhood of about .4 to about .45. But if we thought β0 ≈ −6, then we would
believe that β1 is in the neighborhood of about .25 to about .3. More generally, if we knew
β0 , then we could estimate β1 to within a range of about .05. But since we don’t know β0 ,
we can only say that β1 is likely to be somewhere between about .2 and .6. The dependent
information for (β0 , β1 ) means that our marginal information for β1 is much less precise
than our conditional information for β1 given β0 . That imprecise marginal information is
reflected in the output from R, shown in the following snippet which fits the model and
summarizes the result.
• cones ... reads in the data. There is one line for each tree. The first few lines
look like this.
ID is a unique identifying number for each tree; xcoor and ycoor are coordinates in
the plane; spec is the species; pita stands for pinus taeda or loblolly pine, X1998,
X1999 and X2000 are the numbers of pine cones each year.
• mature ... indicates whether the tree had any cones at all in 2000. It is not a
precise indicator of maturity.
• fit ... fits the logistic regression. glm fits a generalized linear model. The argu-
ment family=binomial tells R what kind of data we have. In this case it’s binomial
because y is either a success or failure.
• summary(fit) shows that (β̂0 , β̂1 ) ≈ (−7.5, 0.36). The SD’s are about 1.8 and .1.
These values guided the choice of b0 and b1 in creating Figure 3.15. It’s the SD
of about .1 that says we can estimate β1 to within an interval of about .4, or about
±2SD’s.
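A sketch of the fit those bullets describe; the formula assumes the response mature is regressed on the diameters dbh (exact variable names and any subsetting may differ from the original):

fit <- glm ( mature ~ dbh, family=binomial )
summary ( fit )    # roughly (-7.5, 0.36) for the coefficients, with SD's of about 1.8 and .1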
log λ = β0 + β1 x (3.21)
Model 3.21 is another example of a generalized linear model. Example 3.10 illustrates its
use.
Example 3.10 (Seedlings, continued)
Several earlier examples have discussed data from the Coweeta LTER on the emer-
gence and survival of red maple (acer rubrum) seedlings. Example 3.2 showed that
the arrival rate of seedlings seemed to vary by quadrat. Refer especially to Figure 3.4.
Example 3.10 follows up that observation more quantitatively.
Roughly speaking, New seedlings arise in a two-step process. First, a seed falls
out of the sky, then it germinates and emerges from the ground. We may reasonably
assume that the emergence of one seedling does not affect the emergence of another
(They’re too small to interfere with each other.) and hence that the number of New
seedlings has a Poi(λ) distribution. Let Yi j be the number of New seedlings observed
in quadrat i and year j. Here are two fits in R, one in which λ varies by quadrat and
one in which it doesn’t.
We created a dataframe called new, having three columns called count, quadrat,
and year. Each row of new contains a count (of New seedlings), a quadrat num-
ber and a year. There are as many rows as there are observations.
• The command as.factor turns its argument into a factor. That is, instead of
treating quadrat and year as numerical variables, we treat them as indicator
variables. That’s because we don’t want a quadrat variable running from 1 to
60 implying that the 60th quadrat has 60 times as much of something as the 1st
quadrat. We want the quadrat numbers to act as labels, not as numbers.
• glm stands for generalized linear model. The family=poisson argument says
what kind of data we’re modelling. data=new says the data are to be found in a
dataframe called new.
• The formula count ~ 1 says to fit a model with only an intercept, no covariates.
• The formula count ~ quadrat says to fit a model in which quadrat is a covari-
ate. Of course that’s really 59 new covariates, indicator variables for 59 of the 60
quadrats.
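A sketch of the two fits those bullets describe, using the data frame new and the object names fit0 and fit1 that appear in the rest of this example:

new$quadrat <- as.factor ( new$quadrat )   # treat quadrat as a label, not a quantity
new$year <- as.factor ( new$year )
fit0 <- glm ( count ~ 1, family=poisson, data=new )         # a single rate for all quadrats
fit1 <- glm ( count ~ quadrat, family=poisson, data=new )   # a separate rate for each quadrat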
To examine the two fits and see which we prefer, we plotted actual versus fitted
values and residuals versus fitted values in Figure 3.16. Panels (a) and (b) are from
fit0. Because there may be overplotting, we jittered the points and replotted them
in panels (c) and (d). Panels (e) and (f) are jittered values from fit1. Comparison
of panels (c) to (e) and (d) to (f) shows that fit1 predicts more accurately and has
smaller residuals than fit0. That’s consistent with our reading of Figure 3.4. So we
prefer fit1.
Figure 3.17 continues the story. Panel (a) shows residuals from fit1 plotted
against year. There is a clear difference between years. Years 1, 3, and 5 are high
while years 2 and 4 are low. So perhaps we should use year as a predictor. That’s
done by a third fit, fit2, sketched below.
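A plausible form of that fit, assuming fit2 simply adds year to the quadrat model (the original call is not shown here):

fit2 <- glm ( count ~ quadrat + year, family=poisson, data=new )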
Panels (b) and (c) show diagnostic plots for fit2. Compare to similar panels in Fig-
ure 3.16 to see whether using year makes an appreciable difference to the fit.
par ( mfrow=c(3,2) )
plot ( fitted(fit0), new$count, xlab="fitted values",
ylab="actual values", main="(a)" )
abline ( 0, 1 )
plot ( fitted(fit0), residuals(fit0), xlab="fitted values",
ylab="residuals", main="(b)" )
plot ( jitter(fitted(fit0)), jitter(new$count),
xlab="fitted values", ylab="actual values",
main="(c)" )
abline ( 0, 1 )
plot ( jitter(fitted(fit0)), jitter(residuals(fit0)),
xlab="fitted values", ylab="residuals", main="(d)" )
plot ( jitter(fitted(fit1)), jitter(new$count),
xlab="fitted values", ylab="actual values",
main="(e)" )
abline ( 0, 1 )
plot ( jitter(fitted(fit1)), jitter(residuals(fit1)),
       xlab="fitted values", ylab="residuals", main="(f)" )
Figure 3.16: Actual vs. fitted and residuals vs. fitted for the New seedling data. (a) and
(b): fit0. (c) and (d): jittered values from fit0. (e) and (f): jittered values from fit1.
par ( mfrow=c(2,2) )
plot ( new$year, residuals(fit1),
xlab="year", ylab="residuals", main="(a)" )
plot ( jitter(fitted(fit2)), jitter(new$count),
xlab="fitted values", ylab="actual values",
main="(b)" )
abline ( 0, 1 )
plot ( jitter(fitted(fit2)), jitter(residuals(fit2)),
xlab="fitted values", ylab="residuals", main="(c)" )
Figure 3.17: New seedling data. (a): residuals from fit1 vs. year. (b): actual vs. fitted
from fit2. (c): residuals vs. fitted from fit2.
We plot the fitted values against each other to see whether there are any noticable
differences. Figure 3.18 displays the result. Figure 3.18 shows that the mpg.fit1 and
mpg.fit3 produce fitted values substantially similar to each other and agreeing fairly
well with actual values, while mpg.fit2 produces fitted values that differ somewhat
from the others and from the actual values, at least for a few cars. This is another
reason to prefer mpg.fit1 and mpg.fit3 to mpg.fit2. In Example 3.7 this lack of fit
showed up as a higher σ̂ for mpg.fit2 than for mpg.fit1.
• fitted(xyz) extracts fitted values. xyz can be any model previously fitted by
lm, glm, or other R functions to fit models.
Consider again the ice cream data of Example 3.5 and the model
y = β0 + β1 x + ε     (3.22)
where x was mean temperature during the week and y was ice cream consumption during
the week. Now we want to fit the model to the data and use the fit to predict consumption.
In addition, we want to say how accurate the predictions are. Let x f be the predicted mean
temperature for some future week and y f be consumption. x f is known; y f is not. Our
model says
y f ∼ N(µ f , σ)
where
µ f = β0 + β1 x f
µ f is unknown because β0 and β1 are unknown. But (β0 , β1 ) can be estimated from the
data, so we can form an estimate
µ̂ f = β̂0 + β̂1 x f
Figure 3.18: Actual mpg and fitted values from three models
How accurate is µ̂ f as an estimate of µ f ? The answer depends on how accurate (β̂0 , β̂1 ) are
as estimates of (β0 , β1 ). Advanced theory about Normal distributions, beyond the scope of
this book, tells us
µ̂ f ∼ N(µ f , σfit )
for some σfit which may depend on x f ; we have omitted the dependency from the notation.
µ f is the average ice cream consumption in all weeks whose mean temperature is x f .
So µ̂ f is also an estimator of y f . But in any particular week the actual consumption won’t
exactly equal µ f . Our model says
yf = µf + ε
where ε ∼ N(0, σ). So in any given week yf will differ from µf by an amount up to about
±2σ or so.
Thus the uncertainty in predicting yf has two components: (1) the uncertainty σfit
in µ̂f, which arises because we don’t know (β0, β1), and (2) the variability σ due to ε.
We can’t say in advance which component will dominate. Sometimes it will be the first,
sometimes the second. What we can say is that as we collect more and more data, we
learn (β0 , β1 ) more accurately, so the first component becomes negligible and the second
component dominates. When that happens, we won’t go far wrong by simply ignoring the
first component.
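In practice R will compute both kinds of uncertainty for us through predict. A sketch, assuming the ice cream data are in a data frame called icecream with columns consumption and temperature (hypothetical names):

ic.fit <- lm ( consumption ~ temperature, data=icecream )
new.week <- data.frame ( temperature=65 )
predict ( ic.fit, new.week, interval="confidence" )   # interval for mu_f only
predict ( ic.fit, new.week, interval="prediction" )   # wider interval for y_f itself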
3.5 Exercises
1. (a) Use the attenu , airquality and faithful datasets to reproduce Figures 3.1 (a),
(b) and (d).
(b) Add lowess and supsmu fits.
(c) Figure out how to use the tuning parameters and try out several different values.
(Use the help or help.start functions.)
2. With the mtcars dataset, use a scatterplot smoother to plot the relationship between
weight and displacement. Does it matter which we think of as X and which as Y? Is
one way more natural than the other?
3. Download the 1970 draft data from DASL and reproduce Figure 3.3. Use the tun-
ing parameters (f for lowess; span for supsmu) to draw smoother and wigglier
scatterplot smoothers.
4. How could you test whether the draft numbers in Example 3.1 were generated uni-
formly? What would H0 be? What would be a good test statistic w? How would you
estimate the distribution of w under H0 ?
5. Using the information in Example 3.6, estimate the mean calorie content of Meat and
Poultry hot dogs.
(a) Formulate statistical hypotheses for testing whether the mean calorie content
of Poultry hot dogs is equal to the mean calorie content of Beef hot dogs.
(b) What statistic will you use?
(c) What should that statistic be if H0 is true?
(d) How many SD’s is it off?
(e) What do you conclude?
(f) What about Meat hot dogs?
7. Refer to Examples 2.2, 3.4, and 3.6. Figure 3.5 shows plenty of overlap in the calorie
contents of Beef and Poultry hot dogs. I.e., there are many Poultry hot dogs with
more calories than many Beef hot dogs. But Figure 3.9 shows very little support for
values of δ_P near 0. Can that be right? Explain.
8. Examples 2.2, 3.4, and 3.6 analyze the calorie content of Beef, Meat, and Poultry
hot dogs. Create a similar analysis, but for sodium content. Your analysis should
cover at least the following steps.
(a) A stripchart similar to Figure 3.5 and density estimates similar to Figure 3.6.
(b) A model similar to Model 3.4, including definitions of the parameters.
(c) Indicator variables analogous to those in Equation 3.6.
(d) A model similar to Model 3.7, including definitions of all the terms.
(e) A fit in R, similar to that in Example 3.6.
(f) Parameter estimates and SD’s.
(g) Plots of likelihood functions, analogous to Figure 3.9.
(h) Interpretation.
9. Analyze the PlantGrowth data from page 214. State your conclusion about whether
the treatments are effective. Support your conclusion with analysis.
10. Analyze the Ice Cream data from Example 3.5. Write a model similar to Model 3.7,
including definitions of all the terms. Use R to fit the model. Estimate the coefficients
and say how accurate your estimates are. If temperature increases by about 5 ◦ F,
about how much would you expect ice cream consumption to increase? Make a
plot similar to Figure 3.8, but add on the line implied by Equation 3.10 and your
estimates of β0 and β1 .
11. Verify the claim that for Equation 3.18 γ̂0 ≈ 30, γ̂1 ≈ −.07 and σ̂ ≈ 3.9.
12. Does a football filled with helium travel further than one filled with air? DASL has
a data set that attempts to answer the question. Go to DASL, http://lib.stat.
cmu.edu/DASL, download the data set Helium football and read the story. Use
what you know about linear models to analyze the data and reach a conclusion. You
must decide whether to include data from the first several kicks and from kicks that
appear to be flubbed. Does your decision affect your conclusion?
13. Use the PlantGrowth data from R. Refer to page 214 and Equation 3.5.
14. Jack and Jill, two Duke University sophomores, have to choose their majors. They
both love poetry so they might choose to be English majors. Then their futures
would be full of black clothes, black coffee, low paying jobs, and occasional vol-
umes of poetry published by independent, non-commercial presses. On the other
hand, they both see the value of money, so they could choose to be Economics ma-
jors. Then their futures would be full of power suits, double cappuccinos, investment
banking and, at least for Jack, membership in the Augusta National golf club. But
which would make them more happy?
To investigate, they conduct a survey. Not wanting to embarrass their friends and
themselves, Jack and Jill leave Duke's campus and go up the road to Chapel Hill to interview
poets and investment bankers at UNC-CH. In all of Chapel Hill there are 90 po-
ets but only 10 investment bankers. J&J interview them all. From the interviews
J&J compute the Happiness Quotient or HQ of each subject. The HQ’s are in Fig-
ure 3.19. J&J also record two indicator variables for each person: Pi = 1 or 0 (for
poets and bankers); Bi = 1 or 0 (for bankers and poets).
Jill and Jack each write a statistical model:
[Figure 3.19: HQ scores of the 90 poets (p) and 10 investment bankers (b), ranging from
about 39 to 45.]
15. The file schools_poverty at this text's website contains relevant data from the
Durham, NC school system in 2001. The first few lines are
Each school in the Durham public school system is represented by one line in the
file. The variable pfl stands for percent free lunch. It records the percentage of the
school’s student population that qualifies for a free lunch program. It is an indicator
of poverty. The variable eog stands for end of grade. It is the school’s average score
on end of grade tests and is an indicator of academic success. Finally, type indicates
the type of school — e, m, or h for elementary, middle or high school, respectively.
You are to investigate whether pfl is predictive of eog.
(a) Read the data into R and plot it in a sensible way. Use different plot symbols
for the three types of schools.
(b) Does there appear to be a relationship between pfl and eog? Is the relation-
ship the same for the three types of schools? Decide whether the rest of your
analysis should include all types of schools, or only one or two.
(c) Using the types of schools you think best, remake the plot and add a regression
line. Say in words what the regression line means.
(d) During the 2000-2001 school year Duke University, in Durham, NC, sponsored
a tutoring program in one of the elementary schools. Many Duke students
served as tutors. From looking at the plot, and assuming the program was
successful, can you figure out which school it was?
16. Load mtcars into an R session. Use R to find the m.l.e.’s (β̂0 , β̂1 ). Confirm that they
agree with the line drawn in Figure 3.11(a). Starting from Equation 3.17, derive the
m.l.e.’s for β0 and β1 .
17. Get more current data similar to mtcars. Carry out a regression analysis similar to
Example 3.7. Have relationships among the variables changed over time? What are
now the most important predictors of mpg?
18. Repeat the logistic regression of am on wt, but use hp instead of wt.
19. A researcher randomly selects cities in the US. For each city she records the number
of bars yi and the number of churches zi . In the regression equation zi = β0 + β1 yi do
you expect β1 to be positive, negative, or around 0?
Make an intelligent guess of what she found for β̂0 and β̂1 .
(c) Using advanced statistical theory she calculates
SD(β̂0 ) = .13
SD(β̂1 ) = .22
Make an intelligent guess of in0 and in1 after Jane ran this code.
21. The Army is testing a new mortar. They fire a shell up at an angle of 60◦ and
track its progress with a laser. Let t1 , t2 , . . . , t100 be equally spaced times from
t1 = (time of firing) to t100 = (time when it lands). Let y1 , . . . , y100 be the shell’s
heights and z1 , . . . , z100 be the shell’s distance from the howitzer (measured horizon-
tally along the ground) at times t1 , t2 , . . . , t100 . The yi ’s and zi ’s are measured by the
laser. The measurements are not perfect; there is some measurement error. In an-
swering the following questions you may assume that the shell’s horizontal speed
remains constant until it falls to ground.
y_i = β0 + β1 t_i + ε_i
y_i = β0 + β1 t_i + β2 t_i² + ε_i                        (3.23)
z_i = β0 + β1 t_i + ε_i                                  (3.24)
z_i = β0 + β1 t_i + β2 t_i² + ε_i                        (3.25)
y_i = β0 + β1 z_i + ε_i                                  (3.26)
y_i = β0 + β1 z_i + β2 z_i² + ε_i                        (3.27)
22. Some nonstatisticians (not readers of this book, we hope) do statistical analyses
based almost solely on numerical calculations and don’t use plots. R comes with the
data set anscombe which demonstrates the value of plots. Type data(anscombe)
to load the data into your R session. It is an 11 by 8 dataframe. The variable names
are x1, x2, x3, x4, y1, y2, y3, and y4.
(a) Start with x1 and y1. Use lm to model y1 as a function of x1. Print a summary
of the regression so you can see β̂0 , β̂1 , and σ̂.
(b) Do the same for the other pairs: x2 and y2, x3 and y3, x4 and y4.
(c) What do you conclude so far?
(d) Plot y1 versus x1. Repeat for each pair. You may want to put all four plots
on the same page. (It’s not necessary, but you should know how to draw the
regression line on each plot. Do you?)
(e) What do you conclude?
(f) Are any of these pairs well described by linear regression? How would you
describe the others? If the others were not artificially constructed data, but
were real, how would you analyze them?
x <- rnorm ( 1, 2, 3 )
y <- -2*x + 1 + rnorm ( 1, 0, 1 )
What does z1 estimate? What does z2 estimate? Why did the code writer write
sqrt ( var ( w + 2 ) ) instead of sqrt ( var ( w ) )? Does it matter?
24. A statistician thinks the regression equation yi = β0 + β1 xi + i fits her data well. She
would like to learn β1 . She is able to measure the yi ’s accurately but can measure the
xi ’s only approximately. In fact, she can measure wi = xi + δi where δi ∼ N(0, .1).
So she can fit the regression equation yi = β∗0 + β∗1 wi + i∗ . Note that (β∗0 , β∗1 ) might
be different than (β0 , β1 ) because they’re for the wi ’s, not the xi ’s. So the statistician
writes the following R code.
}
m <- mean ( val )
sd <- sqrt ( var ( val ) )
print ( c ( m, sd ) )
}
What is she trying to do? The last time through the loop, the print statement yields
[1] 9.0805986 0.5857724. What does this show?
25. The purpose of this exercise is to familiarize yourself with plotting logistic regres-
sion curves and getting a feel for the meaning of β0 and β1 .
(a) Choose some values of x. You will want between about 20 and 100 evenly
spaced values. These will become the abscissa of your plot.
(b) Choose some values of β0 and β1 . You are trying to see how different values
of the β’s affect the curve. So you might begin with a single value of β1 and
several values of β0 , or vice versa.
(c) For each choice of (β0 , β1 ) calculate the set of θi = eβ0 +β1 xi /(1 + eβ0 +β1 xi ) and
plot θi versus xi . You should get sigmoidal shaped curves. These are logistic
regression curves.
(d) You may find that the particular x’s and β’s you chose do not yield a visually
pleasing result. Perhaps all your θ’s are too close to 0 or too close to 1. In that
case, go back and choose different values. You will have to play around until
you find x’s and β’s compatible with each other.
26. Carry out a logistic regression analysis of the O-ring data. What does your analysis
say about the probability of O-ring damage at 36◦ F, the temperature of the Chal-
lenger launch? How relevant should such an analysis have been to the decision of
whether to postpone the launch?
(a) Why are the points lined up vertically in Figure 3.16, panels (a) and (b)?
(b) Why do panels (c) and (d) appear to have more points than panels (a) and (b)?
(c) If there were no jittering, how many distinct values would there be on the
abscissa of panels (c) and (d)?
(d) Download the seedling data. Fit a model in which year is a predictor but
quadrat is not. Compare to fit1. Which do you prefer? Which variable is
more important: quadrat or year? Or are they both important?
Chapter 4
More Probability
There are infinitely many functions having the same integrals as fX and f ∗ . These functions
differ from each other on “sets of measure zero”, terminology beyond our scope but defined
in books on measure theory. For our purposes we can think of sets of measure zero as sets
containing at most countably many points. In effect, the pdf of X can be arbitrarily changed
on sets of measure zero. It does not matter which of the many equivalent functions we use
as the probability density of X. Thus, we define
Definition 4.1. Any function f such that, for all intervals A,
P[X ∈ A] = ∫_A f(x) dx
is called a probability density function, or pdf, for the random variable X. Any such func-
tion may be denoted fX .
Definition 4.1 can be used in an alternate proof of Theorem 1.1 on page 12. The central
step in the proof is just a change-of-variable in an integral, showing that Theorem 1.1 is,
in essence, just a change of variables. For convenience we restate the theorem before
reproving it.
Theorem 1.1 Let X be a random variable with pdf pX . Let g be a differentiable,
monotonic, invertible function and define Z = g(X). Then the pdf of Z is
p_Z(t) = p_X(g^{-1}(t)) | d g^{-1}(t) / dt |
Proof. For any set A, P[Z ∈ g(A)] = P[X ∈ A] = ∫_A p_X(x) dx. Let z = g(x) and change
variables in the integral; i.e., P[Z ∈ g(A)] = ∫_{g(A)} (something) dz. Therefore the something
must be p_Z(z). Hence, p_Z(z) = p_X(g^{-1}(z)) | d g^{-1}(z) / dz |.
The joint pdf of a continuous random vector X⃗ = (X1, . . . , Xn) is written p_X⃗(x1, . . . , xn).
As in the univariate case, the pdf is any function whose integral yields probabilities. That
is, if A is a region in R^n then
P[X⃗ ∈ A] = ∫···∫_A p_X⃗(x1, . . . , xn) dx1 · · · dxn
The random variables (X1 , . . . , Xn ) are said to be mutually independent or jointly inde-
pendent if
pX~ (x1 , . . . , xn ) = pX1 (x1 ) × · · · × pXn (xn )
for all vectors (x1 , . . . , xn ).
Mutual independence implies pairwise independence. I.e., if (X1 , . . . , Xn ) are mutually
independent, then any pair (Xi , X j ) are also independent. The proof is left as an exercise.
It is curious but true that pairwise independence does not imply joint independence. For
an example, consider the discrete three-dimensional distribution on X ~ = (X1 , X2 , X3 ) with
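One standard construction of this phenomenon — offered here only as an illustration, and
not necessarily the distribution the text has in mind — takes X1 and X2 to be independent
Bern(1/2) random variables and sets X3 = 1 if X1 = X2 and X3 = 0 otherwise. Each pair is
independent, but the three variables are not mutually independent because X3 is determined
by (X1, X2). A small R check:

x1 <- rbinom ( 100000, 1, .5 )
x2 <- rbinom ( 100000, 1, .5 )
x3 <- as.numeric ( x1 == x2 )              # determined by x1 and x2
mean ( x1 & x3 ); mean ( x1 ) * mean ( x3 )   # pairwise: joint prob matches the product
mean ( x1 & x2 & x3 )                      # about .25, not (1/2)^3 = .125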
where σ_ij = Cov(X_i, X_j) and σ_i² = Var(X_i). Sometimes σ_i² is also denoted σ_ii.
1. E[Y] = E[Σ a_i X_i] = Σ a_i E[X_i]
Proof. Use Lemma 4.1 and Theorems 1.3 (pg. 40) and 1.4 (pg. 41). See Exercise 8.
The next step is to consider several linear combinations simultaneously. For some
k ≤ n, and for each i = 1, . . . , k, let
Y_i = a_{i1}X_1 + · · · + a_{in}X_n = Σ_j a_{ij}X_j = a⃗_i^t X⃗
where the a_{ij}'s are arbitrary constants and a⃗_i = (a_{i1}, . . . , a_{in}). Let Y⃗ = (Y_1, . . . , Y_k). In
matrix notation,
Y⃗ = AX⃗
where A is the k × n matrix with elements a_{ij}. Covariances of the Y_i's are given by
Cov(Y_i, Y_j) = Cov(a⃗_i^t X⃗, a⃗_j^t X⃗)
             = Σ_{k=1}^n Σ_{ℓ=1}^n Cov(a_{ik}X_k, a_{jℓ}X_ℓ)
             = Σ_{k=1}^n a_{ik}a_{jk}σ_k² + Σ_{k=1}^{n−1} Σ_{ℓ=k+1}^n (a_{ik}a_{jℓ} + a_{jk}a_{iℓ})σ_{kℓ}
             = a⃗_i^t Σ_X⃗ a⃗_j
Combining the previous result with Theorem 4.2 yields Theorem 4.3.
Theorem 4.3. Let X⃗ be a random vector of dimension n with mean E[X⃗] = µ and covari-
ance matrix Cov(X⃗) = Σ; let A be a k × n matrix of rank k; and let Y⃗ = AX⃗. Then
1. E[Y⃗] = Aµ, and
2. Cov(Y⃗) = AΣA^t
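A simulation sketch of Theorem 4.3, with an arbitrary A and independent components
(so that Σ_X⃗ is diagonal); the particular numbers are chosen only for illustration:

A <- matrix ( c(1, 2,  0,
                0, 1, -1), nrow=2, byrow=TRUE )     # k = 2, n = 3
Sig <- diag ( c(1, 4, 9) )                          # Cov(X) for independent X_i
X <- cbind ( rnorm(10000,0,1), rnorm(10000,0,2), rnorm(10000,0,3) )
Y <- X %*% t(A)                                     # each row is one draw of Y = AX
cov ( Y )                                           # compare the next two matrices
A %*% Sig %*% t(A)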
Finally, we take up the question of multivariate transformations, extending the univari-
ate version, Theorem 1.1 (pg. 12). Let X⃗ = (X1, . . . , Xn) be an n-dimensional continuous
random vector with pdf f_X⃗. Define a new n-dimensional random vector Y⃗ = (Y1, . . . , Yn) =
(g1(X⃗), . . . , gn(X⃗)) where the g_i's are differentiable functions and where the transformation
g : X⃗ ↦ Y⃗ is invertible. What is f_Y⃗, the pdf of Y⃗?
Let J be the so-called Jacobian matrix of partial derivatives,
J = [ ∂Y1/∂X1  · · ·  ∂Y1/∂Xn
      ∂Y2/∂X1  · · ·  ∂Y2/∂Xn
         ⋮       ⋱       ⋮
      ∂Yn/∂X1  · · ·  ∂Yn/∂Xn ]
and |J| be the absolute value of the determinant of J.
Theorem 4.4.
f_Y⃗(y⃗) = f_X⃗(g^{-1}(y⃗)) |J|^{-1}
I.e., P[Y⃗ ∈ g(A)] = ∫···∫_{g(A)} (something) dy1 · · · dyn. Therefore the something must be
p_Y⃗(y⃗). Hence, p_Y⃗(y⃗) = p_X⃗(g^{-1}(y⃗)) |J|^{-1}.
To illustrate the use of Theorem 4.4 we solve again an example previously given on
page 265, which we restate here. Let X1 ∼ Exp(1); X2 ∼ Exp(2); X1 ⊥ X2; and X⃗ =
(X1, X2); and suppose we want to find P[|X1 − X2| ≤ 1]. We solved this problem previously
by finding the joint density of X⃗ = (X1, X2), then integrating over the region where |X1 −
X2| ≤ 1. Our strategy this time is to define new variables Y1 = X1 − X2 and Y2, which is
essentially arbitrary, find the joint density of Y⃗ = (Y1, Y2), then integrate over the region
where |Y1| ≤ 1. We define Y1 = X1 − X2 because that's the variable we're interested in. We
need a Y2 because Theorem 4.4 is for full rank transformations from R^n to R^n. The precise
definition of Y2 is unimportant, as long as the transformation from X⃗ to Y⃗ is differentiable
and invertible. For convenience, we define Y2 = X2. With these definitions,
J = [ ∂Y1/∂X1  ∂Y1/∂X2 ] = [ 1  −1 ]
    [ ∂Y2/∂X1  ∂Y2/∂X2 ]   [ 0   1 ]
|J| = 1
and the inverse transformation is
X1 = Y1 + Y2   and   X2 = Y2
From the solution on page 265 we know p_X⃗(x1, x2) = e^{−x1} × (1/2)e^{−x2/2}, so p_Y⃗(y1, y2) =
e^{−(y1+y2)} × (1/2)e^{−y2/2} = (1/2)e^{−y1}e^{−3y2/2}. Figure 4.1 shows the region over which to integrate.
P[|X1 − X2| ≤ 1] = P[|Y1| ≤ 1] = ∫∫_A p_Y⃗(y1, y2) dy1 dy2
  = (1/2) ∫_{−1}^{0} ∫_{−y1}^{∞} e^{−y1} e^{−3y2/2} dy2 dy1 + (1/2) ∫_{0}^{1} ∫_{0}^{∞} e^{−y1} e^{−3y2/2} dy2 dy1
  = (1/3) ∫_{−1}^{0} e^{y1/2} dy1 + (1/3) ∫_{0}^{1} e^{−y1} dy1
  = (2/3) [e^{y1/2}]_{−1}^{0} + (1/3) [−e^{−y1}]_{0}^{1}
  = (2/3) [1 − e^{−1/2}] + (1/3) [1 − e^{−1}]
  ≈ 0.47                                                      (4.3)
Figure 4.1: The (X1, X2) plane and the (Y1, Y2) plane. The light gray regions are where X⃗
and Y⃗ live. The dark gray regions are where |X1 − X2| ≤ 1.
par ( mar=c(0,0,0,0) )
plot ( c(0,6), c(0,2), type="n", xlab="", ylab="", xaxt="n",
yaxt="n", bty="n" )
polygon ( c(1,1.9,1.9,1), c(1,1,1.9,1.9), col=gray(.8),
border=NA )
polygon ( c(1,1.2,1.9,1.9,1.7,1), c(1,1,1.7,1.9,1.9,1.2),
col=gray(.5), border=NA )
segments ( 0, 1, 1.9, 1, lwd=3 ) # x1 axis
segments ( 1, 0, 1, 1.9, lwd=3 ) # x2 axis
text ( c(2,1), c(1,2), c(expression(bold(X[1])),
expression(bold(X[2]))) )
polygon ( c(5,5.9,5.9,4), c(1,1,1.9,1.9), col=gray(.8),
border=NA )
polygon ( c(5,5.2,5.2,4.8,4.8), c(1,1,1.9,1.9,1.2),
col=gray(.5), border=NA )
segments ( 4, 1, 5.9, 1, lwd=3 ) # y1 axis
segments ( 5, 0, 5, 1.9, lwd=3 ) # y2 axis
text ( c(6,5), c(1,2), c(expression(bold(Y[1])),
expression(bold(Y[2]))) )
The point of the example, of course, is the method, not the answer. Functions of
random variables and random vectors are common in statistics and probability. There are
many methods to deal with them. The method of transforming the pdf is one that is often
useful.
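For this particular example a simulation gives a useful check of Equation 4.3 (only a check,
not part of the derivation; note that R's rexp is parametrized by a rate):

x1 <- rexp ( 100000, rate=1 )       # Exp(1) in the text's notation: mean 1
x2 <- rexp ( 100000, rate=1/2 )     # Exp(2) in the text's notation: mean 2
mean ( abs(x1 - x2) <= 1 )          # should be near 0.47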
Equation 4.4 defines the cdf in terms of the pmf or pdf. It is also possible to go the
other way. If Y is continuous, then for any number b ∈ R
P(Y ≤ b) = F(b) = ∫_{−∞}^{b} p(y) dy
which shows by the Fundamental Theorem of Calculus that p(y) = F′(y). On the other
hand, if Y is discrete, then P[Y = y] = P[Y ≤ y] − P[Y < y] = F_Y(y) − F_Y(y⁻). (We use the
notation F_Y(y⁻) to mean the limit of F_Y(z) as z approaches y from below; it is also written
lim_{ε↓0} F_Y(y − ε).) Thus the reverse of Equation 4.4 is
p_Y(y) = { F_Y(y) − F_Y(y⁻)   if Y is discrete
         { F′_Y(y)            if Y is continuous              (4.5)
Equation 4.5 is correct except in one case which seldom arises in practice. It is possible
that FY (y) is a continuous but nondifferentiable function, in which case Y is a continuous
random variable, but Y does not have a density. In this case there is a cdf FY without a
corresponding pmf or pdf.
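Equation 4.5 is easy to check numerically for a discrete distribution; for instance, for the
Bin(10, .7) distribution used below:

dbinom ( 3, 10, .7 )                            # p_Y(3)
pbinom ( 3, 10, .7 ) - pbinom ( 2, 10, .7 )     # F_Y(3) - F_Y(3^-), the same number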
Figure 4.2 shows the pmf and cdf of the Bin(10, .7) distribution and the pdf and cdf of
the Exp(1) distribution.
par ( mfrow=c(2,2) )
y <- seq ( -1, 11, by=1 )
plot ( y, dbinom ( y, 10, .7 ), type="p", ylab="pmf",
main="Bin (10, .7)" )
plot ( y, pbinom ( y, 10, .7 ), type="p", pch=16,
ylab="cdf", main="Bin (10, .7)" )
segments ( -1:10, pbinom ( -1:10, 10, .7 ),
0:11, pbinom ( -1:10, 10, .7 ) )
y <- seq ( 0, 5, len=50 )
plot ( y, dexp ( y, 1 ), type="l", ylab="pdf", main="Exp(1)" )
plot ( y, pexp ( y, 1 ), type="l", ylab="cdf", main="Exp(1)" )
Figure 4.2: The pmf and cdf of the Bin(10, .7) distribution (top row) and the pdf and cdf
of the Exp(1) distribution (bottom row).
• segments ( x0, y0, x1, y1) draws line segments. The line segments run from
(x0,y0) to (x1,y1). The arguments may be vectors.
The other alternative representation for Y is its moment generating function or mgf MY .
The moment generating function is defined as
M_Y(t) = E[e^{tY}] = { Σ_y e^{ty} p_Y(y)      if Y is discrete
                     { ∫ e^{ty} p_Y(y) dy     if Y is continuous              (4.6)
E[Y^n] = M_Y^{(n)}(0) ≡ (d^n/dt^n) M_Y(t) |_{t=0}
Proof. We provide the proof for the case n = 1. The proof for larger values of n is similar.
(d/dt) M_Y(t) |_{t=0} = (d/dt) ∫ e^{ty} p_Y(y) dy |_{t=0}
                     = ∫ (d/dt) e^{ty} p_Y(y) dy |_{t=0}
                     = ∫ y e^{ty} p_Y(y) dy |_{t=0}
                     = ∫ y p_Y(y) dy
                     = E[Y]
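The relation E[Y] = M′_Y(0) is easy to check numerically. The sketch below builds the mgf
of a Bin(10, .3) random variable directly from Equation 4.6 (the particular distribution is
arbitrary) and approximates the derivative at 0 by a central difference:

M <- function (t) sum ( exp ( t * (0:10) ) * dbinom ( 0:10, 10, .3 ) )
h <- 1e-5
( M(h) - M(-h) ) / (2*h)     # numerical M'(0); compare to E[Y] = 10 * .3 = 3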
2. M is the mgf of F.
Theorems 4.6 and 4.7 both assume that the necessary mgf’s exist. It is inconvenient
that not all distributions have mgf’s. One can avoid the problem by using characteristic
functions (also known as Fourier transforms) instead of moment generating functions. The
characteristic function is defined as
CY (t) = E[eitY ]
√
where i = −1. All distributions have characteristic functions, and the characteristic
function completely characterizes the distribution, so characteristic functions are ideal for
our purpose. However, dealing with complex numbers presents its own inconveniences.
We shall not pursue this topic further. Proofs of Theorems 4.6 and 4.7 and similar results
for characteristic functions are omitted but may be found in more advanced books.
Two more useful results are Theorems 4.8 and 4.9.
Proof.
M_Z(t) = E[e^{(X+Y)t}] = E[e^{Xt} e^{Yt}] = E[e^{Xt}] E[e^{Yt}] = M_X(t) M_Y(t)
Corollary 4.10. Let Y1, . . . , Yn be a collection of i.i.d. random variables each with mgf
M_Y. Define X = Y1 + · · · + Yn. Then M_X(t) = [M_Y(t)]^n.
4.4 Exercises
1. Refer to Equation 4.1 on page 265.
(a) To help visualize the joint density pX~ , make a contour plot. You will have to
choose some values of x1 , some values of x2 , and then evaluate pX~ (x1 , x2 ) on
all pairs (x1 , x2 ) and save the values in a matrix. Finally, pass the values to the
contour function. Choose values of x1 and x2 that help you visualize pX~ . You
may have to choose values by trial and error.
(b) Draw a diagram that illustrates how to find the region A and the limits of inte-
gration in Equation 4.1.
(c) Supply the missing steps in Equation 4.1. Make sure you understand them.
Verify the answer.
(d) Use R to verify the answer to Equation 4.1 by simulation.
2. Refer to Example 1.6 on page 43 on tree seedlings where N is the number of New
seedlings that emerge in a given year and X is the number that survive to the next
year. Find P[X ≥ 1].
3. (X1 , X2 ) have a joint distribution that is uniform on the unit disk. Find p(X1 ,X2 ) .
4. The random vector (X, Y) has pdf p(X,Y) (x, y) ∝ ky for some k > 0 and (x, y) in the
triangular region bounded by the points (0, 0), (−1, 1), and (1, 1).
(a) Find k.
(b) Find P[Y ≤ 1/2].
(c) Find P[X ≤ 0].
(d) Find P[|X − Y| ≤ 1/2].
5. Prove the assertion on page 265 that mutual independence implies pairwise inde-
pendence.
(a) Begin with the case of three random variables X ~ = (X1 , X2 , X3 ). Prove that if
X1 , X2 , X3 are mutually independent, then any two of them are independent.
(b) Generalize to the case X ~ = (X1 , . . . , Xn ).
9. X and Y are uniformly distributed in the rectangle whose corners are (1, 0), (0, 1),
(−1, 0), and (0, −1).
12. Just below Equation 4.6 is the statement “the mgf is always defined at t = 0.” For
any random variable Y, find MY (0).
14. Refer to Theorem 4.9. Where in the proof is the assumption X ⊥ Y used?
Chapter 5
Special Distributions
Now let X = Σ_{i=1}^n Y_i where the Y_i's are i.i.d. Bern(θ) and apply Corollary 4.10.
Proof.
The first equality is by Theorem 4.9; the second is by Theorem 5.2. We recognize the
last expression as the mgf of the Bin(n1 + n2 , θ) distribution. So the result follows by
Theorem 4.6.
The mean of the Binomial distribution was calculated in Equation 1.11. Theorem 5.4
restates that result and gives the variance and standard deviation.
1. E[X] = nθ.
Proof. The proof for E[X] was given earlier. If X ∼ Bin(n, θ), then X = Σ_{i=1}^n X_i where
X_i ∼ Bern(θ) and the X_i's are mutually independent. Therefore, by Theorem 1.9, Var(X) =
n Var(X_i). But
Var(X_i) = E(X_i²) − E(X_i)² = θ − θ² = θ(1 − θ).
So Var(X) = nθ(1 − θ). The result for SD(X) follows immediately.
Exercise 1 asks you to prove Theorem 5.4 by moment generating functions.
R comes with built-in functions for working with Binomial distributions. You can get
the following information by typing help(dbinom), help(pbinom), help(qbinom), or
help(rbinom). There are similar functions for working with other distributions, but we
won’t repeat their help pages here.
Usage:
Arguments:
x, q: vector of quantiles.
p: vector of probabilities.
Details:
for x = 0, ..., n.
Value:
References:
See Also:
‘dnbinom’ for the negative binomial, and ‘dpois’ for the Poisson
distribution.
Examples:
Figure 5.1 shows the Binomial pmf for several values of x, n, and p. Note that for a
fixed p, as n gets larger the pmf looks increasingly like a Normal pdf. That’s the Central
Limit Theorem. Let Y1, . . . , Yn ∼ i.i.d. Bern(p). Then the distribution of X is the same as
the distribution of Σ Y_i, and the Central Limit Theorem tells us that Σ Y_i looks increasingly
Normal as n → ∞.
Also, for a fixed n, the pmf looks more Normal when p = .5 than when p = .05. And
that’s because convergence under the Central Limit Theorem is faster when the distribution
of each Yi is more symmetric.
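A visual sketch of this Normal approximation is easy to draw; the Normal mean nθ and
SD √(nθ(1 − θ)) come from Theorem 5.4, and the particular n and p below are arbitrary:

n <- 80;  p <- .5
x <- 0:n
plot ( x, dbinom(x,n,p), ylab="p(x)" )
lines ( x, dnorm(x, n*p, sqrt(n*p*(1-p))) )    # Normal with matching mean and SD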
par ( mfrow=c(3,2) )
n <- 5
p <- .05
x <- 0:5
plot ( x, dbinom(x,n,p), ylab="p(x)", main="n=5, p=.05" )
...
The Negative Binomial Distribution Rather than fix in advance the number of trials,
experimenters will sometimes continue the sequence of trials until a prespecified number
of successes r has been achieved. In this case the total number of failures N is the ran-
dom variable and is said to have the Negative Binomial distribution with parameters (r, θ),
written N ∼ NegBin(r, θ). (Warning: some authors say that the total number of trials,
N + r, has the Negative Binomial distribution.) One example is a gambler who decides to
play the daily lottery until she wins. The prespecified number of successes is r = 1. The
number of failures N until she wins is random. In this case, and whenever r = 1, N is
said to have a Geometric distribution with parameter θ; we write N ∼ Geo(θ). Often, θ is
unknown. Large values of N are evidence that θ is small; small values of N are evidence
Figure 5.1: Binomial pmfs for several values of n and p.
p_N(k) = P(N = k)
       = P(r − 1 successes in the first k + r − 1 trials and the (k + r)'th trial is a success)
       = ( k+r−1 choose r−1 ) θ^r (1 − θ)^k
for k = 0, 1, . . . .
Let N1 ∼ NegBin(r1, θ), . . . , Nt ∼ NegBin(rt, θ), and let N1, . . . , Nt be independent of each
other. Then one can imagine a sequence of trials of length Σ(N_i + r_i) having Σ r_i successes.
N1 is the number of failures before the r1'th success; . . . ; N1 + · · · + Nt is the number of
failures before the (r1 + · · · + rt)'th success. It is evident that N ≡ Σ N_i is the number of
failures before the (r ≡ Σ r_i)'th success occurs and therefore that N ∼ NegBin(r, θ).
Theorem 5.5. If Y ∼ NegBin(r, θ) then E[Y] = r(1 − θ)/θ and Var(Y) = r(1 − θ)/θ2 .
Proof. It suffices to prove the result for r = 1. Then the result for r > 1 will follow by the
foregoing argument and Theorems 1.7 and 1.9. For r = 1,
E[N] = Σ_{n=0}^{∞} n P[N = n]
     = Σ_{n=1}^{∞} n (1 − θ)^n θ
     = θ(1 − θ) Σ_{n=1}^{∞} n (1 − θ)^{n−1}
     = −θ(1 − θ) Σ_{n=1}^{∞} (d/dθ)(1 − θ)^n
     = −θ(1 − θ) (d/dθ) Σ_{n=1}^{∞} (1 − θ)^n
     = −θ(1 − θ) (d/dθ) [(1 − θ)/θ]
     = −θ(1 − θ) (−1/θ²)
     = (1 − θ)/θ
The trick of writing each term as a derivative, then switching the order of summation and
derivative is occasionally useful. Here it is again.
E[N²] = Σ_{n=0}^{∞} n² P[N = n]
      = θ(1 − θ) Σ_{n=1}^{∞} (n(n − 1) + n)(1 − θ)^{n−1}
      = θ(1 − θ) Σ_{n=1}^{∞} n(1 − θ)^{n−1} + θ(1 − θ)² Σ_{n=1}^{∞} n(n − 1)(1 − θ)^{n−2}
      = (1 − θ)/θ + θ(1 − θ)² Σ_{n=1}^{∞} (d²/dθ²)(1 − θ)^n
      = (1 − θ)/θ + θ(1 − θ)² (d²/dθ²) Σ_{n=1}^{∞} (1 − θ)^n
      = (1 − θ)/θ + θ(1 − θ)² (d²/dθ²) [(1 − θ)/θ]
      = (1 − θ)/θ + 2 θ(1 − θ)²/θ³
      = (2 − 3θ + θ²)/θ²
Therefore,
Var(N) = E[N²] − (E[N])² = (1 − θ)/θ².
The R functions for working with the negative Binomial distribution are dnbinom,
pnbinom, qnbinom, and rnbinom. Figure 5.2 displays the Negative Binomial pdf and
illustrates the use of qnbinom.
r <- c ( 1, 5, 30 )
p <- c ( .1, .5, .8 )
par ( mfrow=c(3,3) )
for ( i in seq(along=r) )
for ( j in seq(along=p) ) {
Figure 5.2: Negative Binomial pmfs for r = 1, 5, 30 and θ = .1, .5, .8.
lo <- qnbinom(.01,r[i],p[j])
hi <- qnbinom(.99,r[i],p[j])
x <- lo:hi
plot ( x, dnbinom(x,r[i],p[j]), ylab="probability", xlab="N",
main = substitute ( list ( r == a, theta == b),
list(a=i,b=j) ) )
}
• lo and hi are the limits on the x-axis of each plot. The use of qnbinom ensures that
each plot shows at least 98% of its distribution.
Clinical Trials In clinical trials, each patient is administered a treatment, usually an ex-
perimental treatment or a standard, control treatment. Later, each patient may be
scored as either success, failure, or censored. Censoring occurs because patients
don’t show up for their appointments, move away, or can’t be found for some other
reason.
Craps After the come-out roll, each successive roll is either a win, loss, or neither.
Genetics Each gene comes in several variants. Every person has two copies of the gene,
one maternal and one paternal. So the person’s status can be described by a pair
like {a, c} meaning that she has one copy of type a and one copy of type c. The pair
is called the person’s genotype. Each person in a sample can be considered a trial.
Geneticists may count how many people have each genotype.
Political Science In an election, each person prefers either the Republican candidate, the
Democrat, the Green, or is undecided.
In this case we count the number of outcomes of each type. If there are k possible outcomes
then the result is a vector y1, . . . , yk where y_i is the number of times that outcome i occurred
and y1 + · · · + yk = n is the number of trials.
Let p ≡ (p1 , . . . , pk ) be the probabilities of the k categories and n be the number of
trials. We write Y ∼ Mult(n, p). In particular, Y ≡ (Y1 , . . . , Yk ) is a vector of length k.
Because Y is a vector, so is its expectation: E[Y] = (E[Y1], . . . , E[Yk]) = (np1, . . . , npk).
The i'th coordinate, Y_i, is a random variable in its own right. Because Y_i counts the
number of times outcome i occurred in n trials, its distribution is Y_i ∼ Bin(n, p_i).
Although the Yi ’s are all Binomial, they are not independent. After all, if Y1 = n, then
Y2 = · · · = Yk = 0, so the Yi ’s must be dependent. What is their joint pmf? What is the
conditional distribution of, say, Y2 , . . . , Yk given Y1 ? The next two theorems provide the
answers.
Theorem 5.6. If Y ∼ Mult(n, p) then
f_Y(y1, . . . , yk) = ( n choose y1 · · · yk ) p1^{y1} · · · pk^{yk}
where ( n choose y1 · · · yk ) is the multinomial coefficient
( n choose y1 · · · yk ) = n! / (∏ y_i!)
Proof. When the n trials of a multinomial experiment are carried out, there will be a
sequence of outcomes such as abkdbg · · · f , where the letters indicate the outcomes of
individual trials. One such sequence is
a · · · a (y1 times), b · · · b (y2 times), . . . , k · · · k (yk times)
R’s functions for the multinomial distribution are rmultinom and dmultinom. rmultinom(m,n,p)
draws a sample of size m. p is a vector of probabilities. The result is a k × m matrix. Each
column is one draw, so each column sums to n. The user does not specify k; it is deter-
mined by k = length(p).
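A short sketch of their use (the probabilities below are arbitrary):

p <- c ( .5, .3, .2 )                 # k = 3 categories
rmultinom ( 4, size=10, prob=p )      # a 3 x 4 matrix; each column sums to 10
dmultinom ( c(5,3,2), prob=p )        # pmf evaluated at one outcome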
Let Y be the total number of events that arise in the domain. Y has a Poisson distribution
with rate parameter λ, written Y ∼ Poi(λ). The pmf is
p_Y(y) = e^{−λ} λ^y / y!    for y = 0, 1, . . .
E[Y] = λ.
Proof.
M_Y(t) = E[e^{tY}] = Σ_{y=0}^{∞} e^{ty} p_Y(y) = Σ_{y=0}^{∞} e^{ty} e^{−λ} λ^y / y!
       = Σ_{y=0}^{∞} e^{−λ} (λe^t)^y / y!
       = (e^{−λ} / e^{−λe^t}) Σ_{y=0}^{∞} e^{−λe^t} (λe^t)^y / y!
       = e^{λ(e^t − 1)}
Var(Y) = λ
Proof. Just for fun (!) we will prove the theorem two ways — first directly and then with
moment generating functions.
Proof 1.
E[Y²] = Σ_{y=0}^{∞} y² e^{−λ} λ^y / y!
      = Σ_{y=0}^{∞} y(y − 1) e^{−λ} λ^y / y! + Σ_{y=0}^{∞} y e^{−λ} λ^y / y!
      = Σ_{y=2}^{∞} y(y − 1) e^{−λ} λ^y / y! + λ
      = Σ_{z=0}^{∞} e^{−λ} λ^{z+2} / z! + λ
      = λ² + λ
Proof 2.
E[Y²] = (d²/dt²) M_Y(t) |_{t=0}
      = (d²/dt²) e^{λ(e^t − 1)} |_{t=0}
      = (d/dt) [λe^t e^{λ(e^t − 1)}] |_{t=0}
      = [λe^t e^{λ(e^t − 1)} + λ²e^{2t} e^{λ(e^t − 1)}] |_{t=0}
      = λ + λ²
Theorem 5.10. Let Y_i ∼ Poi(λ_i) for i = 1, . . . , n and let the Y_i's be mutually independent.
Let Y = Σ_1^n Y_i and λ = Σ_1^n λ_i. Then Y ∼ Poi(λ).
Figure 5.3: Poisson pmfs for λ = 1, 4, 16, and 64.
y <- 0:7
plot ( y, dpois(y,1), xlab="y", ylab=expression(p[Y](y)),
main=expression(lambda==1) )
y <- 0:10
plot ( y, dpois(y,4), xlab="y", ylab=expression(p[Y](y)),
main=expression(lambda==4) )
y <- 6:26
plot ( y, dpois(y,16), xlab="y", ylab=expression(p[Y](y)),
main=expression(lambda==16) )
y <- 44:84
plot ( y, dpois(y,64), xlab="y", ylab=expression(p[Y](y)),
main=expression(lambda==64) )
One of the early uses of the Poisson distribution was in The probability variations in
the distribution of α particles by Rutherford and Geiger (1910). (An α particle is a Helium
nucleus, i.e., two protons and two neutrons.)
Example 5.1 (Rutherford and Geiger)
The phenomenon of radioactivity was beginning to be understood in the early 20th
century. In their 1910 article, Rutherford and Geiger write
“In counting the α particles emitted from radioactive substances . . . [it] is of
importance to settle whether . . . variations in distribution are in agreement
with the laws of probability, i.e. whether the distribution of α particles on an
average is that to be anticipated if the α particles are expelled at random
both in regard to space and time. It might be conceived, for example,
that the emission of an α particle might precipitate the disintegration of
neighbouring atoms, and so lead to a distribution of α particles at variance
with the simple probability law.”
So Rutherford and Geiger are going to do three things in their article. They’re going
to count α particle emissions from some radioactive substance; they’re going to de-
rive the distribution of α particle emissions according to theory; and they’re going to
compare the actual and theoretical distributions.
Here they describe their experimental setup.
“The source of radiation was a small disk coated with polonium, which was
placed inside an exhausted tube, closed at one end by a zinc sulphide
screen. The scintillations were counted in the usual way . . . the number of
scintillations . . . corresponding to 1/8 minute intervals were counted . . . .
“The following example is an illustration of the result obtained. The num-
bers, given in the horizontal lines, correspond to the number of scintilla-
tions for successive intervals of 7.5 seconds.
Finally, how did Rutherford and Geiger compare their actual and theoretical distribu-
tions? They did it with a plot, which we reproduce as Figure 5.4. Their conclusion:
“It will be seen that, on the whole, theory and experiment are in excellent
accord. . . . We may consequently conclude that the distribution of α par-
ticles in time is in agreement with the laws of probability and that the α
particles are emitted at random. . . . Apart from their bearing on radioactive
problems, these results are of interest as an example of a method of test-
ing the laws of probability by observing the variations in quantities involved
in a spontaneous material process.”
Now we fill in each element by counting the number of neuron firings in the time
interval.
Figure 5.4: Rutherford and Geiger’s Figure 1 comparing theoretical (solid line) to actual
(open circles) distribution of α particle counts.
for ( i in seq(along=nspikes) )
for ( j in seq(along=nspikes[[i]]) )
nspikes[[i]][j] <- sum ( spikes[[8]] > tastants[[i]][j]
& spikes[[8]] <= tastants[[i]][j] + .15
)
Now we can see how many times the neuron fired after each delivery of, say, MSG100
by typing nspikes$MSG100.
Figure 5.5 compares the five tastants graphically. Panel A is a stripchart. It has
five tick marks on the x-axis for the five tastants. Above each tick mark is a collection
of circles. Each circle represents one delivery of the tastant and shows how many
times the neuron fired in the 150 msec following that delivery. Panel B shows much
the same information in a mosaic plot. The heights of the boxes show how often that
tastant produced 0, 1, . . . , 5 spikes. The width of each column shows how often that
tastant was delivered. Panel C shows much the same information in yet a different
way. It has one line for each tastant; that line shows how often the neuron responded
with 0, 1, . . . , 5 spikes. Panel D compares likelihood functions. The five curves are the
likelihood functions for λ1 , . . . , λ5 .
There does not seem to be much difference in the response of this neuron to
different tastants. Although we can compute the m.l.e. λ̂i ’s with
and find that they range from a low of λ̂3 ≈ 0.08 for .1 M NaCl to a high of λ̂1 ≈ 0.4 for
.1M MSG, panel D suggests the plausibility of λ1 = · · · = λ5 ≈ .2.
Figure 5.5: Numbers of firings of a neuron in 150 msec after five different tastants. Tas-
tants: 1=MSG .1M; 2=MSG .3M; 3=NaCl .1M; 4=NaCl .3M; 5=water. Panels: A: A
stripchart. Each circle represents one delivery of a tastant. B: A mosaic plot. C: Each line
represents one tastant. D: Likelihood functions. Each line represents one tastant.
• The line spiketable <- ... creates a matrix to hold the data and illustrates
the use of dimnames to name the dimensions. Some plotting commands use
those names for labelling axes.
• The line spiketable[i,] <- ... shows an interesting use of the hist com-
mand. Instead of plotting a histogram it can simply return the counts.
• The line freqtable <- ... divides each row of the matrix by its sum, turning
counts into proportions.
But let’s investigate a little further. Do the data really follow a Poisson distribution?
Figure 5.6 shows the Poi(.2) distribution while the circles show the actual fractions of
firings. There is apparently good agreement. But numbers close to zero can be de-
ceiving. The R command dpois ( 0:5, .2 ) reveals that the probability of getting
5 spikes is less than 0.00001, assuming λ ≈ 0.2. So either the λi ’s are not all ap-
proximately .2, neuron spiking does not really follow a Poisson distribution, or we have
witnessed a very unusual event.
Let us examine one aspect of Example 5.1 more closely. Rutherford and Geiger are
counting α particle emissions from a polonium source and find that the number of emis-
sions in a fixed interval of time has a Poisson distribution. But they could have reasoned
as follows: at the beginning of the experiment there is a fixed number of polonium atoms;
each atom either decays or not; atoms decay independently of each other; therefore the
number of decays, and hence the number of α particles, has a Binomial distribution where
n is the number of atoms and p is the probability that a given atom decays within the time
interval.
Why did Rutherford and Geiger end up with the Poisson distribution and not the Bino-
mial; are Rutherford and Geiger wrong? The answer is that a binomial distribution with
large n and small p is extremely well approximated by a Poisson distribution with λ = np,
Figure 5.6: The line shows Poisson probabilities for λ = 0.2; the circles show the fraction
of times the neuron responded with 0, 1, . . . , 5 spikes for each of the five tastants.
so Rutherford and Geiger are correct, to a very high degree of accuracy. For a precise
statement of this result we consider a sequence of random variables X1 ∼ Bin(n1 , p1 ), X2 ∼
Bin(n2 , p2 ), . . . such that limi→∞ ni = ∞ and limi→∞ pi = 0. But if the sequence were some-
thing like X1 ∼ Bin(100, .01), X2 ∼ Bin(1000, .005), X3 ∼ Bin(10000, .0025), . . . then the
Xi ’s would have expected values E[X1 ] = 1, E[X2 ] = 5, E[X3 ] = 25, . . . and their distribu-
tions would not converge. To get convergence we need, at a minimum, for the expected
values to converge. So let M be a fixed number and consider the sequence of random
variables X1 ∼ Bin(n1 , p1 ), X2 ∼ Bin(n2 , p2 ), . . . where pi = M/ni . That way, E[Xi ] = M,
for all i.
lim_{i→∞} P[X_i = k] = e^{−M} M^k / k!.
I.e., the limit of the distributions of the Xi ’s is the Poi(M) distribution.
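The convergence is easy to see numerically (a sketch, with M = 5 chosen arbitrarily):

M <- 5
for ( n in c(10, 100, 1000, 10000) )
    print ( max ( abs ( dbinom(0:20, n, M/n) - dpois(0:20, M) ) ) )   # shrinks with n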
p_Y(y) = 1/n for y = 1, . . . , n. The discrete uniform distribution is used to model, for example, dice
rolls, or any other experiment in which the outcomes are deemed equally likely. The only
parameter is n. It is not an especially useful distribution in practical work but can be used
to illustrate concepts in a simple setting. For an applied example see Exercise 23.
The Continuous Uniform Distribution The continuous uniform distribution is the dis-
tribution whose pdf is flat over the interval [a, b]. We write Y ∼ U(a, b). Although the
notation might be confused with the discrete uniform, the context will indicate which is
meant. The pdf is
p(y) = 1/(b − a)
for y ∈ [a, b]. The mean, variance, and moment generating function are left as Exercise 24.
Suppose we observe a random sample y1, . . . , yn from U(a, b). What is the m.l.e. (â, b̂)?
The joint density is
p(y1, . . . , yn) = { (1/(b − a))^n    if a ≤ y_(1) and b ≥ y_(n)
                   { 0                otherwise
which is maximized, as a function of (a, b), if b − a is as small as possible without making
the joint density 0. Thus, â = y(1) and b̂ = y(n) .
Figure 5.7: Gamma densities for several combinations of the shape and scale parameters.
par ( mfrow=c(2,2) )
for ( i in seq(along=scale) ) {
ymax <- scale[i]*max(shape) + 3*sqrt(max(shape))*scale[i]
y <- seq ( 0, ymax, length=100 )
den <- NULL
for ( sh in shape )
den <- cbind ( den, dgamma(y,shape=sh,scale=scale[i]) )
matplot ( y, den, type="l", main=letters[i], ylab="p(y)" )
legend ( ymax*.1, max(den[den!=Inf]), legend = leg )
}
Theorem 5.12. Let X ∼ Gam(α, β) and let Y = cX. Then Y ∼ Gam(α, cβ).
p_Y(y) = (1 / (c Γ(α) β^α)) (y/c)^{α−1} e^{−y/(cβ)}
       = (1 / (Γ(α) (cβ)^α)) y^{α−1} e^{−y/(cβ)}
The mean, mgf, and variance are recorded in the next several theorems.
Theorem 5.13. Let Y ∼ Gam(α, β). Then E[Y] = αβ.
Proof.
E[Y] = ∫_0^∞ y (1 / (Γ(α) β^α)) y^{α−1} e^{−y/β} dy
     = (Γ(α + 1) β / Γ(α)) ∫_0^∞ (1 / (Γ(α + 1) β^{α+1})) y^α e^{−y/β} dy
     = αβ.
The last equality follows because (1) Γ(α + 1) = αΓ(α), and (2) the integrand is a Gamma
density so the integral is 1. Also see Exercise 10.
The last trick in the proof — recognizing an integrand as a density and concluding that
the integral is 1 — is very useful. Here it is again.
Theorem 5.14. Let Y ∼ Gam(α, β). Then the moment generating function is M_Y(t) =
(1 − tβ)^{−α} for t < 1/β.
Proof.
M_Y(t) = ∫_0^∞ e^{ty} (1 / (Γ(α) β^α)) y^{α−1} e^{−y/β} dy
       = ((β/(1 − tβ))^α / β^α) ∫_0^∞ (1 / (Γ(α) (β/(1 − tβ))^α)) y^{α−1} e^{−y(1−tβ)/β} dy
       = (1 − tβ)^{−α}
Theorem 5.15. Let Y ∼ Gam(α, β). Then Var(Y) = αβ² and SD(Y) = √α β.
The most fundamental probability distribution for such situations is the exponential
distribution. Let Y be the time until the item dies or the event occurs. If Y has an exponen-
tial distribution then for some λ > 0 the pdf of Y is
p_Y(y | λ) = λ^{−1} e^{−y/λ}    for y ≥ 0
and we write Y ∼ Exp(λ). This density is pictured in Figure 5.8 (a repeat of Figure 1.7) for
four values of λ. The exponential distribution is the special case of the Gamma distribution
with α = 1 and β = λ. The mean, SD, and mgf are given by Theorems 5.13 – 5.15.
Each exponential density has its maximum at y = 0 and decreases monotonically. The
value of λ determines the value pY (0 | λ) and the rate of decrease. Usually λ is unknown.
Small values of y are evidence for small values of λ; large values of y are evidence for
large values of λ.
The answer is m = λ log 2. You will be asked to verify this claim in Exercise 31.
Uranium-238 has a half-life of 4.47 billion years. Thus its λ is about 6.45 billion.
Plutonium-239 has a half-life of 24,100 years. Thus its λ is about 35,000.
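Note that R parametrizes the exponential distribution by its rate, the reciprocal of the λ
used here, so a numerical sketch of the claim m = λ log 2 looks like this:

lam <- 4.47e9 / log(2)        # Uranium-238: lambda is about 6.45 billion
qexp ( .5, rate=1/lam )       # the median, about 4.47e9 -- the half-life
lam * log(2)                  # the formula to be verified in Exercise 31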
Figure 5.8: Exponential densities for λ = 2, 1, 0.2, and 0.1.
The Poisson process There is a close relationship between Exponential, Gamma, and
Poisson distributions. For illustration consider a company’s customer call center. Suppose
that calls arrive according to a rate λ such that
1. in a time interval of length T , the number of calls is a random variable with distri-
bution Poi(λT ) and
2. if I1 and I2 are disjoint time intervals then the number of calls in I1 is independent
of the number of calls in I2 .
When calls arrive in this way we say the calls follow a Poisson process.
Suppose we start monitoring calls at time t0 . Let T 1 be the time of the first call after t0
and Y1 = T 1 − t0 , the time until the first call. T 1 and Y1 are random variables. What is the
distribution of Y1 ? For any positive number y,
What about the time to the second call? Let T 2 be the time of the second call after t0 and
Y2 = T 2 − t0 . What is the distribution of Y2 ? For any y > 0,
and therefore
p_{Y2}(y) = λe^{−λy} − λe^{−λy} + yλ²e^{−λy} = (λ² / Γ(2)) y e^{−λy}
so Y2 ∼ Gam(2, 1/λ).
In general, the time Yn until the n’th call has the Gam(n, 1/λ) distribution. This fact is
an example of the following theorem.
Theorem 5.16. Let Y1, . . . , Yn be mutually independent and let Y_i ∼ Gam(α_i, β). Then
Y ≡ Σ Y_i ∼ Gam(α, β)
where α ≡ Σ α_i.
The Chi-squared Distribution The Gamma distribution with β = 2 and α = p/2 where
p is a positive integer is called the chi-squared distribution with p degrees of freedom. We
write Y ∼ χ2p .
Theorem 5.17. Let Y1, . . . , Yn ∼ i.i.d. N(0, 1). Define X = Σ Y_i². Then X ∼ χ²_n.
Therefore,
p_{X_(1)}(x) = (d/dx) F_{X_(1)}(x) = n(1 − x)^{n−1} = (Γ(n + 1) / (Γ(1)Γ(n))) (1 − x)^{n−1}
which is the Be(1, n) density. For the distribution of the largest order statistic see Exer-
cise 29.
par ( mfrow=c(3,1) )
y <- seq ( 0, 1, length=100 )
mean <- c ( .2, .5, .9 )
alpha <- c ( .3, 1, 3, 10 )
for ( i in 1:3 ) {
beta <- (alpha - mean[i]*alpha) / mean[i]
den <- NULL
for ( j in 1:length(beta) )
den <- cbind ( den, dbeta(y,alpha[j],beta[j]) )
matplot ( y, den, type="l", main=letters[i], ylab="p(y)" )
if ( i == 1 )
legend ( .6, 8, paste ( "(a,b) = (", round(alpha,2), ",",
round(beta,2), ")", sep="" ), lty=1:4 )
else if ( i == 2 )
legend ( .1, 4, paste ( "(a,b) = (", round(alpha,2), ",",
round(beta,2), ")", sep="" ), lty=1:4 )
Figure 5.9: Beta densities — a: Beta densities with mean .2; b: Beta densities with mean
.5; c: Beta densities with mean .9.
else if ( i == 3 )
legend ( .1, 10, paste ( "(a,b) = (", round(alpha,2), ",",
round(beta,2), ")", sep="" ), lty=1:4 )
The Beta density is closely related to the Gamma density by the following theorem.
Data with these properties is ubiquitous in nature. Statisticians and other scientists often
have to model similar looking data. One common probability density for modelling such
data is the Normal density, also known as the Gaussian density. The Normal density is
also important because of the Central Limit Theorem.
For some constants µ ∈ R and σ > 0, the Normal density is
p(x | µ, σ) = (1 / (√(2π) σ)) e^{−½((x−µ)/σ)²}.                       (5.4)
Figure 5.10: Water temperatures (◦ C) at 1000m depth, 44 − 46◦ N latitude and 19 − 21◦ W
longitude. The dashed curve is the N(8.08,0.94) density.
Visually, the Normal density appears to fit the data well. Randomly choosing one
of the 112 historical temperature measurements, or making a new measurement near
45◦ N and 20◦ W at a randomly chosen time are like drawing a random variable t from
the N(8.08,0.94) distribution.
Look at temperatures between 8.5◦ and 9.0◦ C. The N(8.08, 0.94) density says the
probability that a randomly drawn temperature t is between 8.5◦ and 9.0◦ C is
P[t ∈ (8.5, 9.0]] = ∫_{8.5}^{9.0} (1 / (√(2π) · 0.94)) e^{−½((t−8.08)/0.94)²} dt ≈ 0.16.       (5.5)
The integral in Equation 5.5 is best done on a computer, not by hand. In R it can
be done with pnorm(9.0,8.08,.94 ) - pnorm(8.5,8.08,.94 ). A fancier way to
do it is diff(pnorm(c(8.5,9),8.08,.94)).
In fact, 19 of the 112 temperatures fell into that bin, and 19/112 ≈ 0.17, so the
N(8.08, 0.94) density seems to fit very well.
However, the N(8.08, 0.94) density doesn't fit as well for temperatures between
7.5◦ and 8.0◦ C:
P[t ∈ (7.5, 8.0]] = ∫_{7.5}^{8.0} (1 / (√(2π) · 0.94)) e^{−½((t−8.08)/0.94)²} dt ≈ 0.20.
In fact, 15 of the 112 temperatures fell into that bin; and 15/112 ≈ 0.13. Even so, the
N(8.08,0.94) density fits the data set very well.
Theorem 5.20. Let Y ∼ N(µ, σ). Then M_Y(t) = e^{σ²t²/2 + µt}.
Proof.
M_Y(t) = ∫ e^{ty} (1 / (√(2π) σ)) e^{−(y−µ)²/(2σ²)} dy
       = ∫ (1 / (√(2π) σ)) e^{−(y² − (2µ + 2σ²t)y + µ²)/(2σ²)} dy
       = e^{−µ²/(2σ²)} ∫ (1 / (√(2π) σ)) e^{−(y − (µ + σ²t))²/(2σ²) + (µ + σ²t)²/(2σ²)} dy
       = e^{(2µσ²t + σ⁴t²)/(2σ²)}
       = e^{σ²t²/2 + µt}.
The technique used in the proof of Theorem 5.20 is worth remembering, so let’s look
at it more abstractly. Apart from multiplicative constants, the first integral in the proof is
∫ e^{ty} e^{−(y−µ)²/(2σ²)} dy = ∫ e^{−(y−µ)²/(2σ²) + ty} dy
The exponent is quadratic in y and therefore, for some values of a, b, c, d, e, and f, can be
written
−(1/(2σ²))(y − µ)² + ty = ay² + by + c = −½((y − d)/e)² + f
This last expression has the form of a Normal distribution with mean d and SD e. So the
integral can be evaluated by putting it in this form and manipulating the constants so it
becomes the integral of a pdf and therefore equal to 1. It’s a technique that is often useful
when working with integrals arising from Normal distributions.
Theorem 5.21. Let Y ∼ N(µ, σ). Then E[Y] = µ and Var(Y) = σ².
Proof. For the mean,
E[Y] = M_Y′(0) = (tσ² + µ) e^{σ²t²/2 + µt} |_{t=0} = µ.
For the variance,
E[Y²] = M_Y″(0) = [σ² e^{σ²t²/2 + µt} + (tσ² + µ)² e^{σ²t²/2 + µt}] |_{t=0} = σ² + µ².
So,
Var(Y) = E[Y²] − E[Y]² = σ².
The N(0, 1) distribution is called the standard Normal distribution. As Theorem 5.22
shows, all Normal distributions are just shifted, rescaled versions of the standard Normal
distribution: the mean is a location parameter and the standard deviation is a scale
parameter.
Theorem 5.22.
1. If X ∼ N(0, 1) and Y = σX + µ then Y ∼ N(µ, σ).
2. If Y ∼ N(µ, σ) and X = (Y − µ)/σ then X ∼ N(0, 1).
Proof. 1. Let X ∼ N(0, 1) and Y = σX + µ. By Theorem 4.8,
M_Y(t) = e^{µt} M_X(σt) = e^{µt} e^{σ²t²/2},
which is the N(µ, σ) moment generating function.
So X ∼ Gam(n/2, 2) = χ2n .
where |Σ| refers to the determinant of the matrix Σ. We write X ~ ∼ N(µ, Σ). Comparison
of Equations 5.4 (page 315) and 5.6 shows that the latter is a generalization of the former.
The multivariate version has the covariance matrix Σ in place of the scalar variance σ2 .
To become more familiar with the multivariate Normal distribution, we begin with the
case where the covariance matrix is diagonal:
Σ = [ σ1²   0   · · ·   0
       0   σ2²  · · ·   0
       ⋮    ⋮    ⋱      ⋮
       0    0   · · ·  σn² ]
In this case the density simplifies:
p_X⃗(x⃗) = (1 / ((2π)^{n/2} |Σ|^{1/2})) e^{−½ Σ_{i=1}^n ((x_i−µ_i)/σ_i)²}
        = (1/√(2π))^n (∏_{i=1}^n 1/σ_i) e^{−½ Σ_{i=1}^n ((x_i−µ_i)/σ_i)²}
        = ∏_{i=1}^n (1 / (√(2π) σ_i)) e^{−½((x_i−µ_i)/σ_i)²},
the product of n separate one dimensional Normal densities, one for each dimension.
Therefore the Xi ’s are independent and Normally distributed, with Xi ∼ N(µi , σi ). Also
see Exercise 34.
When σ1 = · · · = σn = 1, then Σ is the n-dimensional identity matrix In . When,
in addition, µ1 = · · · = µn = 0, then X ~ ∼ N(0, In ) and X ~ is said to have the standard
n-dimensional Normal distribution.
Note: for two arbitrary random variables X1 and X2 , X1 ⊥ X2 implies Cov(X1 , X2 ) = 0;
but Cov(X1 , X2 ) = 0 does not imply X1 ⊥ X2 . However, if X1 and X2 are jointly Normally
distributed then the implication is true. I.e. if (X1 , X2 ) ∼ N(µ, Σ) and Cov(X1 , X2 ) = 0, then
X1 ⊥ X2 . In fact, something stronger is true, as recorded in the next theorem.
Theorem 5.24. Let X⃗ = (X1, . . . , Xn) ∼ N(µ, Σ) where Σ has the so-called block-diagonal
form
Σ = [ Σ11  012  · · ·  01m
      021  Σ22  · · ·  02m
       ⋮    ⋮    ⋱      ⋮
      0m1  · · · 0m,m−1  Σmm ]
Write X⃗ = (X⃗1, . . . , X⃗m) and µ = (ν1, . . . , νm) for the corresponding partitions of X⃗ and µ.
Then the blocks X⃗1, . . . , X⃗m are mutually independent, with X⃗i ∼ N(νi, Σii). The key step
is that the joint density factors:
p_X⃗(y⃗1, . . . , y⃗m) = (1 / ((2π)^{n/2} ∏_i |Σii|^{1/2})) e^{−½ Σ_{i=1}^m (y⃗i−νi)^t Σii^{−1} (y⃗i−νi)}
                   = ∏_{i=1}^m (1 / ((2π)^{n_i/2} |Σii|^{1/2})) e^{−½ (y⃗i−νi)^t Σii^{−1} (y⃗i−νi)}
To learn more about the multivariate Normal density, look at the curves on which p_X⃗
is constant; i.e., {x⃗ : p_X⃗(x⃗) = c} for some constant c. The density depends on the x_i's
through the quadratic form (x⃗ − µ)^t Σ^{−1} (x⃗ − µ), so p_X⃗ is constant where this quadratic form
is constant. But when Σ is diagonal, (x⃗ − µ)^t Σ^{−1} (x⃗ − µ) = Σ_1^n (x_i − µ_i)²/σ_i², so p_X⃗(x⃗) = c is
the equation of an ellipsoid centered at µ and with eccentricities determined by the ratios
σ_i/σ_j.
What does this density look like? It is easiest to answer that question in two dimen-
sions. Figure 5.11 shows three bivariate Normal densities. The left-hand column shows
contour plots of the bivariate densities; the right-hand column shows samples from the
joint distributions. In all cases, E[X1 ] = E[X2 ] = 0. In the top row, σX1 = σX2 = 1; in
the second row, σX1 = 1; σX2 = 2; in the third row, σX1 = 1/2; σX2 = 2. The standard
deviation is a scale parameter, so changing the SD just changes the scale of the random
variable. That’s what gives the second and third rows more vertical spread than the first,
and makes the third row more horizontally squashed than the first and second.
x1 <- seq(-5,5,length=60)
x2 <- seq(-5,5,length=60)
den.1 <- dnorm ( x1, 0, 1 )
den.2 <- dnorm ( x2, 0, 1 )
den.jt <- den.1 %o% den.2
contour ( x1, x2, den.jt, xlim=c(-5,5), ylim=c(-5,5), main="(a)",
xlab=expression(x[1]), ylab=expression(x[2]) )
Figure 5.11: Bivariate Normal densities with diagonal covariance matrices. Panels (a), (c),
(e): contour plots; panels (b), (d), (f): samples from the corresponding joint distributions.
• The code makes heavy use of the fact that X1 and X2 are independent for (a) calcu-
lating the joint density and (b) drawing random samples.
• den.1 %o% den.2 yields the outer product of den.1 and den.2. It is a matrix
whose i j’th entry is den.1[i] * den.2[j].
Now let’s see what happens when Σ is not diagonal. Let Y ∼ N(µY~ , ΣY~ ), so
The preceding result says that any multivariate Normal random variable, Y ~ in our notation
above, has the same distribution as a linear transformation of a standard Normal random
variable.
To see what multivariate Normal densities look like it is easiest to look at 2 dimen-
sions. Figure 5.12 shows three bivariate Normal densities. The left-hand column shows
contour plots of the bivariate densities; the right-hand column shows samples from the
joint distributions. In all cases, E[X1 ] = E[X2 ] = 0 and σ1 = σ2 = 1. In the top row,
σ1,2 = 0; in the second row, σ1,2 = .5; in the third row, σ1,2 = −.8.
Figure 5.12: Bivariate Normal densities with σ1 = σ2 = 1 and σ1,2 = 0, .5, −.8. Panels (a),
(c), (e): contour plots; panels (b), (d), (f): samples from the corresponding joint distributions.
x1 <- seq(-5,5,length=npts)
x2 <- seq(-5,5,length=npts)
for ( i in 1:3 ) {                     # Sigma, npts, and den.jt are defined earlier
  Sig <- Sigma[i,,]
  Siginv <- solve(Sig)                 # matrix inverse
  for ( j in 1:npts )
    for ( k in 1:npts ) {
      x <- c ( x1[j], x2[k] )
      den.jt[j,k] <- ( 1 / (2*pi*sqrt(det(Sig))) ) *   # bivariate Normal constant
        exp ( -.5 * t(x) %*% Siginv %*% x )
    }
}
Theorem 5.27. X⃗1 ∼ N(µ1, Σ1,1).
Proof. Let B = −Σ2,1 Σ1,1^{−1} and define
Y⃗ ≡ ( Y⃗1 )  =  ( I  0 ) ( X⃗1 )
      ( Y⃗2 )     ( B  I ) ( X⃗2 )
so that
E(Y⃗) = ( I  0 ) ( µ1 )  =  (     µ1      )
        ( B  I ) ( µ2 )     ( Bµ1 + µ2 )
Then Theorem 5.24 says Y⃗1 ∼ N(µ1, Σ1,1). It also says Y⃗2 ∼ N(Bµ1 + µ2, Σ2,2 − Σ2,1 Σ1,1^{−1} Σ1,2),
a fact we shall use in the proof of Theorem 5.28.
Theorem 5.28. X⃗2 | X⃗1 ∼ N( µ2 + Σ2,1 Σ1,1^{−1} (X⃗1 − µ1),  Σ2,2 − Σ2,1 Σ1,1^{−1} Σ1,2 ).
Proof. As in the proof of Theorem 5.27, define
Y⃗ ≡ ( Y⃗1 )  =  ( I  0 ) ( X⃗1 )
      ( Y⃗2 )     ( B  I ) ( X⃗2 )
The distribution of Y⃗ was worked out in the proof of Theorem 5.27. Here we work back-
wards: we derive the joint density of (X⃗1, X⃗2), then divide by the marginal density of X⃗1 to get
the density of X⃗2 | X⃗1. This may seem strange, because we already know the joint density
of (X⃗1, X⃗2). However, in this proof we shall write the joint density in a different way.
Because the Jacobian of the transformation is 1, the joint density p_{X⃗1,X⃗2}(x⃗1, x⃗2) is found
by writing the joint density p_{Y⃗1,Y⃗2}(y⃗1, y⃗2) = p_{Y⃗1}(y⃗1) · p_{Y⃗2 | Y⃗1}(y⃗2 | y⃗1) = p_{Y⃗1}(y⃗1) · p_{Y⃗2}(y⃗2) (because
Y⃗1 ⊥ Y⃗2), then substituting x⃗1 for y⃗1 and Bx⃗1 + x⃗2 for y⃗2. Let n2 be the length of X⃗2 and
Σ2,2|1 = Σ2,2 − Σ2,1 Σ1,1^{−1} Σ1,2. Then,
(1 / ((2π)^{n2/2} |Σ2,2|1|^{1/2})) e^{−½ (x⃗2 − (µ2 − B(x⃗1−µ1)))^t Σ2,2|1^{−1} (x⃗2 − (µ2 − B(x⃗1−µ1)))},
Theorem 5.29. Let X1, . . . , Xn ∼ i.i.d. N(µ, σ). Define S² ≡ Σ_{i=1}^n (X_i − X̄)². Then X̄ ⊥ S².
Proof. Define the random vector Y⃗ = (Y1, . . . , Yn)^t by
Y1 = X1 − X̄
Y2 = X2 − X̄
  ⋮
Yn−1 = Xn−1 − X̄
Yn = X̄
2. (Y1 , . . . , Yn−1 )t ⊥ Yn .
3. Therefore S 2 ⊥ Yn .
5.8. THE T DISTRIBUTION 330
n−1
S² = Σ_{i=1}^{n−1} (X_i − X̄)² + (X_n − X̄)² = Σ_{i=1}^{n−1} Y_i² + ( Σ_{i=1}^{n−1} Y_i )²
2.
1 − n1 − 1n − 1n · · · − 1n
− 1 1 − 1 − 1 · · · − 1
n n n n
~ .. .
. .
. .
. .
.
~ ~
Y = . . . . . ≡ AX
X
1
− n − 1n · · · 1 − 1n − 1n
1 1 1 1
n n n
··· n
Theorem 5.31.
$$
\frac{S^2}{\sigma^2} \sim \chi^2_{n-1}.
$$
Proof. Let
$$
V = \sum_{i=1}^n \Big(\frac{X_i - \mu}{\sigma}\Big)^2.
$$
Then $V \sim \chi^2_n$ and
$$
\begin{aligned}
V &= \sum_{i=1}^n \Big(\frac{(X_i-\bar X) + (\bar X-\mu)}{\sigma}\Big)^2 \\
  &= \sum_{i=1}^n \Big(\frac{X_i-\bar X}{\sigma}\Big)^2 + n\Big(\frac{\bar X-\mu}{\sigma}\Big)^2 + \frac{2(\bar X-\mu)}{\sigma^2}\sum_{i=1}^n (X_i-\bar X) \\
  &= \sum_{i=1}^n \Big(\frac{X_i-\bar X}{\sigma}\Big)^2 + \Big(\frac{\bar X-\mu}{\sigma/\sqrt n}\Big)^2 \\
  &\equiv \frac{S^2}{\sigma^2} + V_2.
\end{aligned}
$$
$$
V = \sum_{i=1}^{n-1} \Big(\frac{X_i - \mu}{\sigma}\Big)^2 + \Big(\frac{X_n - \mu}{\sigma}\Big)^2 \equiv W_1 + W_2,
$$
where $W_1 \perp W_2$, $W_1 \sim \chi^2_{n-1}$ and $W_2 \sim \chi^2_1$. Now the conclusion follows by Lemma 5.30.
Define
$$
T \equiv \sqrt{\frac{n-1}{n}}\,\Big(\frac{\bar X - \mu}{\hat\sigma/\sqrt n}\Big)
  = \frac{\sqrt n\,(\bar X - \mu)/\sigma}{\sqrt{S^2/((n-1)\sigma^2)}}.
$$
Then by Theorem 5.29 and Theorem 5.31, $T$ has the distribution of $U/\sqrt{V/(n-1)}$ where $U \sim \mathrm{N}(0,1)$, $V \sim \chi^2_{n-1}$, and $U \perp V$. This distribution is called the t distribution with $n-1$ degrees of freedom. We write $T \sim t_{n-1}$. Theorem 5.32 derives its density.
Theorem 5.32. Let $U \sim \mathrm{N}(0,1)$, $V \sim \chi^2_p$, and $U \perp V$. Then $T \equiv U/\sqrt{V/p}$ has density
$$
p_T(t) = \frac{\Gamma(\frac{p+1}{2})\,p^{p/2}}{\Gamma(\frac{p}{2})\sqrt\pi}\,(t^2+p)^{-\frac{p+1}{2}}
       = \frac{\Gamma(\frac{p+1}{2})}{\Gamma(\frac{p}{2})\sqrt{p\pi}}\Big(1+\frac{t^2}{p}\Big)^{-\frac{p+1}{2}}.
$$
Proof. Define
$$
T = \frac{U}{\sqrt{V/p}} \qquad\text{and}\qquad Y = V.
$$
We make the transformation $(U,V) \to (T,Y)$, find the joint density of $(T,Y)$, and then the marginal density of $T$. The inverse transformation is
$$
U = \frac{T Y^{1/2}}{\sqrt p} \qquad\text{and}\qquad V = Y.
$$
The Jacobian is
$$
\begin{vmatrix} \frac{dU}{dT} & \frac{dU}{dY} \\[2pt] \frac{dV}{dT} & \frac{dV}{dY} \end{vmatrix}
= \begin{vmatrix} \frac{Y^{1/2}}{\sqrt p} & \frac{T Y^{-1/2}}{2\sqrt p} \\[2pt] 0 & 1 \end{vmatrix}
= \frac{Y^{1/2}}{\sqrt p}.
$$
The joint density of $(U,V)$ is
$$
p_{U,V}(u,v) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{u^2}{2}}\;\frac{1}{\Gamma(\frac{p}{2})\,2^{p/2}}\,v^{\frac{p}{2}-1} e^{-\frac{v}{2}}.
$$
Integrating out $Y$ gives, after simplification,
$$
p_T(t) = \frac{\Gamma(\frac{p+1}{2})}{\Gamma(\frac{p}{2})\sqrt{p\pi}}\Big(1+\frac{t^2}{p}\Big)^{-\frac{p+1}{2}}.
$$
We formalize Theorem 5.32 with a definition. A random variable T with density
$$
p_T(t) = \frac{\Gamma(\frac{p+1}{2})\,p^{p/2}}{\Gamma(\frac{p}{2})\sqrt\pi}\,(t^2+p)^{-\frac{p+1}{2}}
       = \frac{\Gamma(\frac{p+1}{2})}{\Gamma(\frac{p}{2})\sqrt{p\pi}}\Big(1+\frac{t^2}{p}\Big)^{-\frac{p+1}{2}}
$$
is said to have a t distribution, or a Student’s t distribution, with p degrees of freedom. The
distribution is named for William Sealy Gosset who used the pseudonym Student when he
wrote about the t distribution in 1908.
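As a quick numerical sanity check (not part of the text's derivation), the formula above can be compared with R's built-in t density dt():

p <- 4
tt <- c ( -3, -1, 0, 1.5, 3 )
formula.val <- gamma((p+1)/2) / ( gamma(p/2) * sqrt(p*pi) ) * ( 1 + tt^2/p )^( -(p+1)/2 )
max ( abs ( formula.val - dt(tt, df=p) ) )   # essentially 0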
Figure 5.13 shows the t density for 1, 4, 16, and 64 degrees of freedom, and the N(0, 1)
density. The two points to note are
1. The t densities are unimodal and symmetric about 0, but have less mass in the middle
and more mass in the tails than the N(0, 1) density.
2. In the limit, as p → ∞, the t p density appears to approach the N(0, 1) density. (The
appearance is correct. See Exercise 36.)
Figure 5.13: t densities for four degrees of freedom and the N(0, 1) density
At the beginning of Section 5.8 we said the quantity $\sqrt n(\bar X - \mu)/\hat\sigma$ had a N(0, 1) distribution, approximately. Theorem 5.32 derives the density of the related quantity $\sqrt{n-1}\,(\bar X - \mu)/\hat\sigma$, which has a $t_{n-1}$ distribution, exactly. Figure 5.13 shows how similar those distributions are. The t distribution has slightly more spread than the N(0, 1) distribution, reflecting the fact that σ has to be estimated. But when n is large, i.e. when σ is well estimated, the two distributions are nearly identical.
If $T \sim t_p$, then
$$
E[T] = \int_{-\infty}^{\infty} t\,\frac{\Gamma(\frac{p+1}{2})}{\Gamma(\frac{p}{2})\sqrt{p\pi}}\Big(1+\frac{t^2}{p}\Big)^{-\frac{p+1}{2}}\,dt.
\tag{5.7}
$$
In the limit as $t \to \infty$, the integrand behaves like $t^{-p}$; hence 5.7 is integrable if and only if $p > 1$. Thus the $t_1$ distribution, also known as the Cauchy distribution, has no mean. When $p > 1$, $E[T] = 0$, by symmetry. By a similar argument, the $t_p$ distribution has a variance if and only if $p > 2$. When $p > 2$, $\mathrm{Var}(T) = p/(p-2)$. In general, $T$ has a $k$-th moment ($E[T^k] < \infty$) if and only if $p > k$. Because the t distribution does not have moments of all orders, it does not have a moment generating function.
5.9 Exercises
1. Prove Theorem 5.4 by moment generating functions.
3. Assume that all players on a basketball team are 70% free throw shooters and that
free throws are independent of each other.
(a) The team takes 40 free throws in a game. Write down a formula for the prob-
ability that they make exactly 37 of them. You do not need to evaluate the
formula.
(b) The team takes 20 free throws the next game. Write down a formula for the
probability that they make exactly 9 of them.
(c) Write down a formula for the probability that the team makes exactly 37 free
throws in the first game and exactly 9 in the second game. That is, write a
formula for the probability that they accomplish both feats.
4. Explain the role of qnbinom(...) in the R code for Figure 5.2.
5. Write down the distribution you would use to model each of the following random
variables. Be as specific as you can. I.e., instead of answering “Poisson distribu-
tion”, answer “Poi(3)” or instead of answering “Binomial”, answer “Bin(n, p) where
n = 13 but p is unknown.”
(a) The temperature measured at a randomly selected point on the surface of Mars.
(b) The number of car accidents in January at the corner of Broad Street and Main
Street.
(c) Out of 20 people in a post office, the number who, when exposed to anthrax
spores, actually develop anthrax.
(d) Out of 10,000 people given a smallpox vaccine, the number who develop
smallpox.
(e) The amount of Mercury in a fish caught in Lake Ontario.
6. A student types dpois(3,1.5) into R. R responds with 0.1255107.
(a) Write down in words what the student just calculated.
(b) Write down a mathematical formula for what the student just calculated.
7. Name the distribution. Your answers should be of the form Poi(λ) or N(3, 22), etc.
Use numbers when parameters are known, symbols when they’re not.
You spend the evening at the roulette table in a casino. You bet on red 100 times.
Each time the chance of winning is 18/38. If you win, you win $1; if you lose,
you lose $1. The average amount of time between bets is 90 seconds; the standard
deviation is 5 seconds.
8. A golfer plays the same golf course daily for a period of many years. You may
assume that he does not get better or worse, that all holes are equally difficult and
that the results on one hole do not influence the results on any other hole. On any one
hole, he has probabilities .05, .5, and .45 of being under par, exactly par, and over
par, respectively. Write down what distribution best models each of the following
random variables. Be as specific as you can. I.e., instead of answering "Poisson
distribution" answer "Poi(3)" or "Poi(λ) where λ is unknown." For some parts the
correct answer might be "I don’t know."
9. During a PET scan, a source (your brain) emits photons which are counted by a
detector (the machine). The detector is mounted at the end of a long tube, so only
photons that head straight down the tube are detected. In other words, though the
source emits photons in all directions, the only ones detected are those that are emit-
ted within the small range of angles that lead down the tube to the detector.
Let X be the number of photons emitted by the source in 5 seconds. Suppose the
detector captures only 1% of the photons emitted by the source. Let Y be the number
of photons captured by the detector in those same 5 seconds.
(a) What is a good model for the distribution of X?
(b) What is the conditional distribution of Y given X?
(c) What is the marginal distribution of Y?
Try to answer these questions from first principles, without doing any calculations.
10. (a) Prove Theorem 5.12 using moment generating functions.
(b) Prove Theorem 5.13 using moment generating functions.
11. (a) Prove Theorem 5.15 by finding E[Y 2 ] using the trick that was used to prove
Theorem 5.13.
(b) Prove Theorem 5.15 by finding E[Y 2 ] using moment generating functions.
12. Case Study 4.2.3 in Larsen and Marx [add reference] claims that the number of
fumbles per team in a football game is well modelled by a Poisson(2.55) distribution.
For this quiz, assume that claim is correct.
(a) What is the expected number of fumbles per team in a football game?
(b) What is the expected total number of fumbles by both teams?
(c) What is a good model for the total number of fumbles by both teams?
(d) In a game played in 2002, Duke fumbled 3 times and Navy fumbled 4 times.
Write a formula (Don’t evaluate it.) for the probability that Duke will fumble
exactly 3 times in next week’s game.
(e) Write a formula (Don’t evaluate it.) for the probability that Duke will fumble
exactly three times given that they fumble at least once.
13. Clemson University, trying to maintain its superiority over Duke in ACC football,
recently added a new practice field by reclaiming a few acres of swampland sur-
rounding the campus. However, the coaches and players refused to practice there in
the evenings because of the overwhelming number of mosquitos.
To solve the problem the Athletic Department installed 10 bug zappers around the
field. Each bug zapper, each hour, zaps a random number of mosquitos that has a
Poisson(25) distribution.
(a) What is the exact distribution of the number of mosquitos zapped by 10 zappers
in an hour? What are its expected value and variance?
(b) What is a good approximation to the distribution of the number of mosquitos
zapped by 10 zappers during the course of a 4 hour practice?
(c) Starting from your answer to the previous part, find a random variable relevant
to this problem that has approximately a N(0,1) distribution.
14. Bob is a high school senior applying to Duke and wants something that will make
his application stand out from all the others. He figures his best chance to impress
the admissions office is to enter the Guinness Book of World Records for the longest
amount of time spent continuously brushing one’s teeth with an electric toothbrush.
(Time out for changing batteries is permissible.) Batteries for Bob’s toothbrush last
an average of 100 minutes each, with a variance of 100. To prepare for his assault
on the world record, Bob lays in a supply of 100 batteries.
The television cameras arrive along with representatives of the Guinness company
and the American Dental Association and Bob begins the quest that he hopes will
be the defining moment of his young life. Unfortunately for Bob his quest ends in
humiliation as his batteries run out before he can reach the record which currently
stands at 10,200 minutes.
Justice is well served however because, although Bob did take AP Statistics in high
school, he was not a very good student. Had he been a good statistics student he
would have calculated in advance the chance that his batteries would run out in less
than 10,200 minutes.
Calculate, approximately, that chance for Bob.
15. An article on statistical fraud detection (Bolton and Hand [1992]), when talking
about records in a database, says:
"One of the difficulties with fraud detection is that typically there are many legiti-
mate records for each fraudulent one. A detection method which correctly identifies
99% of the legitimate records as legitimate and 99% of the fraudulent records as
fraudulent might be regarded as a highly effective system. However, if only 1 in
1000 records is fraudulent, then, on average, in every 100 that the system flags as
fraudulent, only about 9 will in fact be so."
QUESTION: Can you justify the "about 9"?
16. In 1988 men averaged around 500 on the math SAT, the SD was around 100 and the
histogram followed the normal curve.
(a) Estimate the percentage of men getting over 600 on this test in 1988.
(b) One of the men who took the test in 1988 will be picked at random, and you
have to guess his test score. You will be given a dollar if you guess it right to
within 50 points.
i. What should you guess?
ii. What is your chance of winning?
i. 1
ii. the question doesn’t make sense
iii. can’t tell from the information given.
(f) X and Y have joint density f(x, y) on the unit square. f(x) =
    i. $\int_0^1 f(x, y)\,dx$
    ii. $\int_0^1 f(x, y)\,dy$
    iii. $\int_0^x f(x, y)\,dy$
(g) X1, . . . , Xn ∼ Gamma(r, 1/λ) and are mutually independent. f(x1, . . . , xn) =
    i. $[\lambda^r/(r-1)!]\,\big(\prod x_i\big)^{r-1} e^{-\lambda \sum x_i}$
18. In Figure 5.2, the plots look increasingly Normal as we go down each column. Why?
Hint: a well-known theorem is involved.
19. Prove Theorem 5.7.
20. Prove a version of Equation 5.1 on page 290. Let k = 3. Start from the joint pmf of
Y1 and Y2 (Use Theorem 5.7.), derive the marginal pmf of Y1 , and identify it.
21. Ecologists who study forests have a concept of seed rain. The seed rain in an area is
the number of seeds that fall on that area. Seed rain is a useful concept in studying
how forests rejuvenate and grow. After falling to earth, some seeds germinate and
become seedlings; others do not. For a particular square-meter quadrat, let Y1 be
the number of seeds that fall to earth and germinate and Y2 be the number of seeds
that fall to earth but do not germinate. Let Y = Y1 + Y2 . Adopt the statistical model
Y1 ∼ Poi(λ1 ); Y2 ∼ Poi(λ2 ) and Y1 ⊥ Y2 . Theorem 5.10 says Y ∼ Poi(λ) where
λ = λ1 + λ2 . Find the distribution of Y1 given Y. I.e., find P[Y1 = y1 | Y = y].
22. Prove Theorem 5.11. Hint: use moment generating functions and the fact that
limn→∞ (1 + 1/n)n = e.
23. (a) Let Y ∼ U(1, n) where the parameter n is an unknown positive integer. Suppose
we observe Y = 6. Find the m.l.e. n̂. Hint: Equation 5.2 defines the pmf for
y ∈ {1, 2, . . . , n}. What is p(y) when y ∉ {1, 2, . . . , n}?
(b) In World War II, when German tanks came from the factory they had serial
numbers labelled consecutively from 1. I.e., the numbers were 1, 2, . . . . The
Allies wanted to estimate T , the total number of German tanks and had, as data,
the serial numbers of the tanks they had captured. Assuming that tanks were
captured independently of each other and that all tanks were equally likely to
be captured find the m.l.e. T̂ .
25. (a) Is there a discrete distribution that is uniform on the positive integers? Why or
why not? If there is such a distribution then we might call it U(1, ∞).
(b) Is there a continuous distribution that is uniform on the real line? Why or why
not? If there is, then we might call it U(−∞, ∞).
26. Ecologists are studying pitcher plants in a bog. To a good approximation, pitcher
plants follow a Poisson process with parameter λ. That is, (i) in a region of area A,
the number of pitcher plants has a Poisson distribution with parameter Aλ; and (ii)
the numbers of plants in two disjoint regions are conditionally independent given λ.
The ecologists want to make an inference about λ.
(a) This section of the problem describes a simplified version of the ecologists’
sampling plan. They choose an arbitrary site s near the middle of the bog.
Then they look for pitcher plants near s and record the location of the nearest
one. Let r be the distance from s to the nearest pitcher plant and let q = r2 .
i. Find the density of q.
ii. Find the density of r.
iii. Find the m.l.e., λ̂.
iv. Answer the previous questions under the supposition that the ecologists
had recorded the distance to the k’th nearest pitcher plant. (The previous
questions are for k = 1.)
(b) This section of the problem describes the ecologists’ sampling plan more ac-
curately. First, they choose some sites s1 , . . . , sn in the bog. Then they go to
each site, find the nearest pitcher plant and record its location. The data are
these locations L1 , . . . , Ln . Some sites may share a nearest plant, so some of
the Li ’s may be referring to the same plant. Let Di be the distance from si to Li
and Di, j be the distance from si to L j .
i. Let D1 be the distance from s1 to L1 . Find the density p(d1 |λ). You may
assume that pitcher plants arise according to a homogeneous Poisson pro-
cess with rate λ. Hint: use the relationship between a Poisson process and
the exponential distribution.
ii. When the ecologists go to s2 they discover that the nearest plant is the
same plant they already found nearest to s1 . In other words, they discover
that D2 = D2,1 . Find Pr[D2 = D2,1 |λ].
iii. Find the likelihood function `(λ) ≡ p[L1 , . . . , Ln |λ].
iv. Find the m.l.e. λ̂ ≡ argmaxλ `(λ).
v. Suppose the prior distribution for λ is Gam(α, β). Find the posterior dis-
tribution for λ.
27. Let x ∼ Gam(α, β) and let y = 1/x. Find the pdf of y. We say that y has an inverse
Gamma distribution with parameters α and β and write y ∼ invGam(α, β).
28. Prove Theorem 5.18. Hint: Use the method of Theorem 5.13.
29. Let x1 , . . . , xn ∼ i.i.d. U(0, 1). Find the distribution of x(n) , the largest order statistic.
30. In the R code to create Figure 2.19, explain how to use dgamma(...) instead of
dpois(...).
31. Prove the claim on page 309 that the half-life of a radioactive isotope is m = λ log 2.
34. Page 320 shows that the n-dimensional Normal density with a diagonal covariance
matrix is the product of n separate univariate Normal densities. In this problem
you are to work in the opposite direction. Let X1 , . . . , Xn be independent Normally
distributed random variables with means µ1 , . . . , µn and SD’s σ1 , . . . , σn .
Given (µ, τ), let Y1 , . . . , Yn be a random sample from the Normal distribution with
mean µ and precision τ. Our goal is to find the posterior distribution of (µ, τ).
Bayesian Statistics
Unfortunately, the integral in Equation 6.2 is often not analytically tractable and must be
integrated numerically. Standard numerical integration techniques such as quadrature may
work well in low dimensions, but in Bayesian statistics Equation 6.2 is often sufficiently
high dimensional that standard techniques are unreliable. Therefore, new numerical inte-
gration techniques are needed. The most important of these is called Markov chain Monte
Carlo integration, or MCMC. Other techniques can be found in the references at the be-
ginning of the chapter. For the purposes of this book, we investigate MCMC. But first,
to get a feel for Bayesian analysis, we explore posteriors in low dimensional, numerically
tractable examples.
The general situation is that there are multiple parameters θ1 , . . . , θk , and data y1 , . . . ,
yn . We may be interested in marginal, conditional, or joint distributions of the parameters
either a priori or a posteriori. Some examples:
• p(θ1, . . . , θk), the joint prior
• p(θ1 | y1, . . . , yn), the marginal posterior density of θ1
• the conditional joint posterior density of (θ2, . . . , θk) given θ1, where the “∝” means that we substitute θ1 into the numerator and treat the denominator as a constant.
The examples in this section illustrate the ideas.
Example 6.1 (Ice Cream Consumption, cont.)
This example continues Example 3.5 in which weekly ice cream consumption is mod-
elled as a function of temperature and possibly other variables. We begin with the
model in Equation 3.10:
or
$$
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
$$
where $\epsilon_1, \dots, \epsilon_{30} \sim$ i.i.d. N(0, σ). For now, let us suppose that σ is known. Later we’ll drop that assumption. Because σ is known, there are only two parameters: β0 and β1.
For a Bayesian analysis we need a prior distribution for them; then we can compute
the posterior distribution. For now we adopt the following prior without comment. Later
we will see why we chose this prior and examine its consequences.
$$
\beta_0 \sim \mathrm{N}(\mu_0, \sigma_0), \qquad \beta_1 \sim \mathrm{N}(\mu_1, \sigma_1), \qquad \beta_0 \perp \beta_1,
$$
i.e.
$$
\begin{pmatrix}\beta_0\\ \beta_1\end{pmatrix} \sim
\mathrm{N}\left(\begin{pmatrix}\mu_0\\ \mu_1\end{pmatrix},
\begin{pmatrix}\sigma_0^2 & 0\\ 0 & \sigma_1^2\end{pmatrix}\right)
$$
for some choice of (µ0 , µ1 , σ0 , σ1 ). The likelihood function is
To find the posterior density we will use matrix notation. Let
$$
\beta = \begin{pmatrix}\beta_0\\ \beta_1\end{pmatrix},\quad
\mu = \begin{pmatrix}\mu_0\\ \mu_1\end{pmatrix},\quad
\Sigma = \begin{pmatrix}\sigma_0^2 & 0\\ 0 & \sigma_1^2\end{pmatrix},\quad
\vec Y = (Y_1, \dots, Y_n)^t, \quad\text{and}\quad
X = \begin{pmatrix}1 & \text{temperature}_1\\ 1 & \text{temperature}_2\\ \vdots & \vdots\\ 1 & \text{temperature}_{30}\end{pmatrix}.
$$
The posterior density is proportional to the prior times the likelihood:
$$
p(\beta \mid \vec Y) \propto
e^{-\frac{1}{2}(\beta-\mu)^t \Sigma^{-1}(\beta-\mu)\; -\; \frac{1}{2}(\vec Y - X\beta)^t(\vec Y - X\beta)/\sigma^2}.
\tag{6.3}
$$
At this point we observe that the exponent is a quadratic form in β. Therefore the
posterior density will be a two-dimensional Normal distribution for β and we just have
to complete the square to find the mean vector and covariance matrix. The exponent is, apart from a factor of $-\frac{1}{2}$ and some irrelevant constants involving the $y_i$’s,
$$
\beta^t[\Sigma^{-1} + X^tX/\sigma^2]\beta - 2\beta^t[\Sigma^{-1}\mu + X^t\vec Y/\sigma^2] + \cdots
= (\beta - \mu^*)^t(\Sigma^*)^{-1}(\beta - \mu^*) + \cdots
$$
where $\Sigma^* = (\Sigma^{-1} + X^tX/\sigma^2)^{-1}$ and $\mu^* = \Sigma^*(\Sigma^{-1}\mu + X^t\vec Y/\sigma^2)$. Therefore, the posterior distribution of β given $\vec Y$ is Normal with mean $\mu^*$ and covariance matrix $\Sigma^*$. It is worth noting (1) that the posterior precision matrix $(\Sigma^*)^{-1}$ is the sum of the prior precision matrix $\Sigma^{-1}$ and a part that comes from the data, $X^tX/\sigma^2$, and (2) that the posterior mean is $\Sigma^*(\Sigma^{-1}\mu + X^t\vec Y/\sigma^2) = \Sigma^*(\Sigma^{-1}\mu + (X^tX/\sigma^2)(X^tX)^{-1}X^t\vec Y)$, a matrix-weighted average of the prior mean µ and the least-squares estimate $(X^tX)^{-1}X^t\vec Y$, where the weights are the two precisions $\Sigma^{-1}$ and $X^tX/\sigma^2$.
The derivation of the posterior distribution does not depend on any particular choice
of (µ, Σ), but it does depend on the fact that the prior distribution was Normal because
that’s what gives us the quadratic form in the exponent. That’s one reason we took the
prior distribution for β to be Normal: it made the calculations easy.
Now let’s look at the posterior more closely, see what it implies for (β0 , β1 ), and
see how sensitive the conclusions are to the choice of (µ, Σ). We’re also treating σ as
known, so we’ll need a value. Let’s start with the choice
$$
\mu = \begin{pmatrix}\mu_0\\ \mu_1\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix};\qquad
\Sigma = \begin{pmatrix}\sigma_0^2 & 0\\ 0 & \sigma_1^2\end{pmatrix} = \begin{pmatrix}10^6 & 0\\ 0 & 10^6\end{pmatrix};\qquad
\text{and}\quad \sigma = 0.05.
\tag{6.4}
$$
The large diagonal entries in Σ say that we have very little a priori knowledge of β. We
can use R to calculate the posterior mean and covariance.
ic <- read.table ( "data/ice cream.txt", header=T )
mu <- c ( 0, 0 )
Sig <- diag ( rep ( 10^6, 2 ) )
sig <- 0.05
x <- cbind ( 1, ic$temp )
y <- ic$IC
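A sketch of the remaining computation (the exact lines are not shown here), applying the formulas for Σ* and µ* directly:

Sig.star <- solve ( solve(Sig) + crossprod(x) / sig^2 )   # (Sigma^{-1} + X'X/sigma^2)^{-1}
mu.star  <- Sig.star %*% ( solve(Sig) %*% mu + crossprod(x, y) / sig^2 )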
The result is
$$
\mu^* = \begin{pmatrix}.207\\ .003\end{pmatrix}
\qquad\text{and}\qquad
\Sigma^* = \begin{pmatrix}8.54\times10^{-4} & -1.570\times10^{-5}\\ -1.570\times10^{-5} & 3.20\times10^{-7}\end{pmatrix}.
\tag{6.5}
$$
Figure 6.1: Posterior densities of β0 and β1 in the ice cream example using the prior from
Equation 6.4.
Note:
• The matrix product t(x) %*% y is so common it has a special shortcut, crossprod
( x, y ). We could have used crossprod to find Σ∗ and µ∗ .
Figure 6.1 shows the posterior densities. The posterior density of β0 is not very mean-
ingful because it pertains to ice cream consumption when the temperature is 0. Since
our data was collected at temperatures between about 25 and 75, extrapolating to tem-
peratures around 0 would be dangerous. And because β0 is not meaningful, neither
is the joint density of (β0 , β1 ). On the other hand, our inference for β1 is meaningful. It
says that ice cream consumption goes up about .003 (±.001 or so) pints per person
for every degree increase in temperature. You can verify whether that’s sensible by
looking at Figure 3.8.
Table 6.1: The numbers of pine cones on trees in the FACE experiment, 1998–2000.
for ( i in 1:6 ) {
good <- cones$ring == i
print ( c ( sum ( cones$X1998[good] > 0 ) / sum(good),
sum ( cones$X1999[good] > 0 ) / sum(good),
sum ( cones$X2000[good] > 0 ) / sum(good) ) )
}
[1] 0.0000000 0.1562500 0.2083333
[1] 0.05633803 0.36619718 0.32394366
[1] 0.01834862 0.21100917 0.27522936
[1] 0.05982906 0.39316239 0.37606838
[1] 0.01923077 0.10576923 0.22115385
[1] 0.04081633 0.19727891 0.18367347
Since there’s not much action in 1998 we will ignore the data from that year. The data
show a greater contrast between treatment (rings 2, 3, 4) and control (rings 1, 5, 6) in
1999 than in 2000. So for the purpose of this example we’ll use the data from 1999. A
good scientific investigation, though, would use data from all years.
We’re looking for a model with two features: (1) the probability of cones is an
increasing function of dbh and of the treatment and (2) given that a tree has cones,
the number of cones is an increasing function of dbh and treatment. Here we describe
a simple model with these features. The idea is (1) a logistic regression with covariates
dbh and treatment for the probability that a tree is sexually mature and (2) a Poisson
regression with covariates dbh and treatment for the number of cones given that a tree
is sexually mature. Let Yi be the number of cones on the i’th tree. Our model is
$$
x_i = \begin{cases} 1 & \text{if the } i\text{'th tree had extra CO}_2\\ 0 & \text{otherwise} \end{cases}
\qquad
\theta_i = \begin{cases} 1 & \text{if the } i\text{'th tree is sexually mature}\\ 0 & \text{otherwise} \end{cases}
$$
$$
\pi_i = P[\theta_i = 1] = \frac{\exp(\beta_0 + \beta_1\,\mathrm{dbh}_i + \beta_2 x_i)}{1 + \exp(\beta_0 + \beta_1\,\mathrm{dbh}_i + \beta_2 x_i)}
\tag{6.7}
$$
$$
\phi_i = \exp(\gamma_0 + \gamma_1\,\mathrm{dbh}_i + \gamma_2 x_i)
\qquad
Y_i \sim \mathrm{Poi}(\theta_i \phi_i)
$$
This model is called a zero-inflated Poisson model. There are six unknown parameters: β0, β1, β2, γ0, γ1, γ2. We must assign prior distributions and compute posterior distributions of these parameters.
[Figure: numbers of pine cones in 1998 plotted against dbh, one panel for each ring 1–6.]
[Figure: numbers of pine cones in 1999 plotted against dbh, one panel for each ring 1–6.]
[Figure: numbers of pine cones in 2000 plotted against dbh, one panel for each ring 1–6.]
In addition, each tree has an indicator θi and we will be able to calculate the posterior probabilities P[θi = 1 | y1, . . . , yn] for i = 1, . . . , n.
We start with the priors β0 , β1 , β2 , γ0 , γ1 , γ2 ∼ i.i.d.U(−100, 100). This prior distribu-
tion is not, obviously, based on any substantive prior knowledge. Instead of arguing
that this is a sensible prior, we will later check the robustness of conclusions to spec-
ification of the prior. If the conclusions are robust, then we will argue that almost any
sensible prior would lead to roughly the same conclusions.
To begin the analysis we write down the joint distribution of parameters and data.
$$
\begin{aligned}
p(y_1, \dots, y_n, &\beta_0, \beta_1, \beta_2, \gamma_0, \gamma_1, \gamma_2)\\
&= p(\beta_0, \beta_1, \beta_2, \gamma_0, \gamma_1, \gamma_2) \times p(y_1, \dots, y_n \mid \beta_0, \beta_1, \beta_2, \gamma_0, \gamma_1, \gamma_2)\\
&= \Big(\frac{1}{200}\Big)^{6}\, 1_{(-100,100)}(\beta_0) \times 1_{(-100,100)}(\beta_1) \times 1_{(-100,100)}(\beta_2) \times 1_{(-100,100)}(\gamma_0)\\
&\qquad \times 1_{(-100,100)}(\gamma_1) \times 1_{(-100,100)}(\gamma_2)\\
&\qquad \times \prod_{i:\,y_i>0} \frac{\exp(\beta_0 + \beta_1 \mathrm{dbh}_i + \beta_2 x_i)}{1 + \exp(\beta_0 + \beta_1 \mathrm{dbh}_i + \beta_2 x_i)}\;
  \frac{\exp\!\big(-\exp(\gamma_0 + \gamma_1 \mathrm{dbh}_i + \gamma_2 x_i)\big)\,\exp(\gamma_0 + \gamma_1 \mathrm{dbh}_i + \gamma_2 x_i)^{y_i}}{y_i!}\\
&\qquad \times \prod_{i:\,y_i=0} \left(\frac{1}{1 + \exp(\beta_0 + \beta_1 \mathrm{dbh}_i + \beta_2 x_i)} + \frac{\exp(\beta_0 + \beta_1 \mathrm{dbh}_i + \beta_2 x_i)}{1 + \exp(\beta_0 + \beta_1 \mathrm{dbh}_i + \beta_2 x_i)}\,\exp\!\big(-\exp(\gamma_0 + \gamma_1 \mathrm{dbh}_i + \gamma_2 x_i)\big)\right)
\end{aligned}
\tag{6.8}
$$
In Equation 6.8 each term in the product $\prod_{i:\,y_i>0}$ is the probability that the i’th tree is sexually mature times the Poisson probability of its observed cone count.
To learn about the posterior in, say, Equation 6.8 it is easy to write an R function
that accepts (β0 , β1 , β2 , γ0 , γ1 , γ2 ) as input and returns 6.8 as output. But that’s quite a
complicated function of (β0 , β1 , β2 , γ0 , γ1 , γ2 ) and it’s not obvious how to use the function
or what it says about any of the six parameters or the θi ’s. Therefore, in Section 6.2 we
present an algorithm that is very powerful for evaluating the integrals that often arise in
multivariate Bayesian analyses.
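To make this concrete, here is a sketch of such a function on the log scale. The columns X1999 and ring appear in the code earlier in this section, and rings 2, 3, 4 are the treatment rings; the dbh column name is an assumption:

y   <- cones$X1999
dbh <- cones$dbh
x   <- as.numeric ( cones$ring %in% c(2,3,4) )    # 1 for CO2-enriched rings, 0 for controls

log.post <- function ( theta ) {   # theta = c(b0, b1, b2, g0, g1, g2)
  if ( any ( abs(theta) >= 100 ) ) return ( -Inf )      # flat prior on (-100, 100)^6
  eta   <- theta[1] + theta[2]*dbh + theta[3]*x         # logistic part
  p.mat <- exp(eta) / ( 1 + exp(eta) )                  # P[ tree is mature ]
  phi   <- exp ( theta[4] + theta[5]*dbh + theta[6]*x ) # Poisson mean, given maturity
  pos <- y > 0
  sum ( log ( p.mat[pos] ) + dpois ( y[pos], phi[pos], log=TRUE ) ) +
    sum ( log ( 1 - p.mat[!pos] + p.mat[!pos] * exp(-phi[!pos]) ) )  # constant (1/200)^6 omitted
}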
3. using θ1,1 , . . . , θ M,1 and standard density estimation techniques (page 105) to estimate
p(θ1 | y), or
1. under some fairly benign conditions (See the references at the beginning of the chap-
ter for details.) the sequence p1 , p2 , . . . converges to a limit p, the stationary distri-
bution, that does not depend on ~θ1 ;
2. the transition density k(~θi | ~θi−1 ) can be chosen so that the stationary distribution p is
equal to p(~θ | y);
2. Choose ~θ1 .
3. For i = 2, 3, . . .
• Set
$$
\vec\theta_i = \begin{cases} \vec\theta^* & \text{with probability } r,\\ \vec\theta_{i-1} & \text{with probability } 1-r. \end{cases}
$$
Step 3 defines the transition kernel k. In many MCMC chains, the acceptance probability r may be strictly less than one, so the kernel k is a mixture of two parts: one that generates a new value $\vec\theta_{i+1} \ne \vec\theta_i$ and one that sets $\vec\theta_{i+1} = \vec\theta_i$.
To illustrate MCMC, suppose we want to generate a sample θ1 , . . . , θ10,000 from the
Be(5, 2) distribution. We arbitrarily choose a proposal density g(θ∗ | θ) = U(θ − .1, θ + .1)
and arbitrarily choose θ1 = 0.5. The following R code draws the sample.
else
new <- prev
samp[i] <- new
}
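For reference, here is a complete version of the sampler consistent with the lines of code quoted later in this section; it is a sketch, and the book’s own code may differ in detail:

samp <- numeric ( 10000 )
samp[1] <- prev <- 0.5                                          # arbitrary starting value
for ( i in 2:10000 ) {
  thetastar <- runif ( 1, prev - .1, prev + .1 )                # proposal
  r <- min ( 1, dbeta(thetastar,5,2) / dbeta(prev,5,2) )        # acceptance probability
  if ( as.logical ( rbinom(1,1,r) ) )
    new <- thetastar
  else
    new <- prev
  samp[i] <- new
  prev <- new
}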
The top panel of Figure 6.5 shows the result. The solid curve is the Be(5, 2) density and the
histogram is made from the Metropolis-Hastings samples. They match closely, showing
that the algorithm performed well.
par ( mfrow=c(3,1) )
hist ( samp[-(1:1000)], prob=TRUE, xlab=expression(theta),
ylab="", main="" )
x <- seq(0,1,length=100)
lines ( x, dbeta(x,5,2) )
plot ( samp, pch=".", ylab=expression(theta) )
plot ( dbeta(samp,5,2), pch=".", ylab=expression(p(theta)) )
The code samp[-(1:1000)] discards the first 1000 draws in the hope that the sampler
will have converged to its stationary distribution after 1000 iterations.
Assuming that convergence conditions have been met and that the algorithm is well-
constructed, MCMC chains are guaranteed eventually to converge and deliver samples
from the desired distribution. But the guarantee is asymptotic and in practice the output
from the chain should be checked to diagnose potential problems that might arise in finite
samples.
The main thing to check is mixing. An MCMC algorithm operates in the space of ~θ.
At each iteration of the chain, i.e., for each value of i, there is a current location $\vec\theta_i$. At the next iteration the chain moves to a new location $\vec\theta_{i+1}$. In this way the chain explores the $\vec\theta$
space. While it is exploring it also evaluates p(~θi ). In theory, the chain should spend many
iterations at values of ~θ where p(~θ) is large — and hence deliver many samples of ~θ’s with
large posterior density — and few iterations at values where p(~θ) is small. For the chain to
do its job it must find the mode or modes of p(~θ), it must move around in their vicinity, and
it must move between them. The process of moving from one part of the space to another
is called mixing.
The middle and bottom panels of Figure 6.5 illustrate mixing. The middle panel plots
θi vs. i. It shows that the chain spends most of its iterations in values of θ between about 0.6
Figure 6.5: 10,000 MCMC samples of the Be(5, 2) density. Top panel: histogram of
samples from the Metropolis-Hastings algorithm and the Be(5, 2) density. Middle panel:
θi plotted against i. Bottom panel: p(θi ) plotted against i.
and 0.9 but makes occasional excursions down to 0.4 or 0.2 or so. After each excursion
it comes back to the mode around 0.8. The chain has taken many excursions, so it has
explored the space well. The bottom panel plots p(θi ) vs. i. It shows that the chain spent
most of its time near the mode where p(θ) ≈ 2.4 but made multiple excursions down to
places where p(θ) is around 0.5, or even less. This chain mixed well.
To illustrate poor mixing we’ll use the same MCMC algorithm but with different pro-
posal kernels. First we’ll use (θ∗ | θ) = U(θ − 100, θ + 100) and change the corresponding
line of code to
thetastar <- runif ( 1, prev - 100, prev + 100 ). Then we’ll use (θ∗ | θ) =
U(θ − .00001, θ + .00001) and change the corresponding line of code to
thetastar <- runif ( 1, prev - .00001, prev + .00001 ). Figure 6.6 shows
the result. The left-hand side of the figure is for (θ∗ | θ) = U(θ − 100, θ + 100). The top
panel shows a very much rougher histogram than Figure 6.5; the middle and bottom pan-
els show why. The proposal radius is so large that most proposals are rejected; therefore,
θi+1 = θi for many iterations; therefore we get the flat spots in the middle and bottom pan-
els. The plots reveal that the sampler explored fewer than 30 separate values of θ. That’s
too few; the sampler has not mixed well. In contrast, the right-hand side of the figure —
for (θ∗ | θ) = U(θ − .00001, θ + .00001) — shows that θ has drifted steadily downward, but
over a very small range. There are no flat spots, so the sampler is accepting most propos-
als, but the proposal radius is so small that the sampler hasn’t yet explored most of the
space. It too has not mixed well.
Plots such as the middle and bottom plots of Figure 6.6 are called trace plots because
they trace the path of the sampler.
In this problem, good mixing depends on getting the proposal radius not too large and
not too small, but just right (Hassall [1909]). To be sure, if we run the MCMC chain
long enough, all three samplers would yield good samples from Be(5, 2). But the first
sampler mixed well with only 10,000 iterations while the others would require many more
iterations to yield a good sample. In practice, one must examine the output of one’s MCMC
chain to diagnose mixing problems. No diagnostics are fool proof, but not diagnosing is
foolhardy.
Several special cases of the Metropolis-Hastings algorithm deserve separate mention.
Metropolis algorithm It is often convenient to choose the proposal density g(~θ∗ | ~θ) to
be symmetric; i.e., so that g(~θ∗ | ~θ) = g(~θ | ~θ∗ ). In this case the Metropolis ratio
p(~θ∗ | y)g(~θi−1 | ~θ∗ )/p(~θi−1 | y)g(~θ∗ | ~θi−1 ) simplifies to p(~θ∗ | y)/p(~θi−1 | y). That’s what
happened in the Be(5, 2) illustration and why the line
r <- min ( 1, dbeta(thetastar,5,2) / dbeta(prev,5,2) ) doesn’t involve
g.
Figure 6.6: 10,000 MCMC samples of the Be(5, 2) density. Left column: (θ∗ | θ) = U(θ −
100, θ + 100); Right column: (θ∗ | θ) = U(θ − .00001, θ + .00001). Top: histogram of
samples from the Metropolis-Hastings algorithm and the Be(5, 2) density. Middle: θi
plotted against i. Bottom: p(θi ) plotted against i.
Independence sampler It may be convenient to choose g(~θ∗ | ~θ) = g(~θ∗ ) not dependent
on ~θ. For example, we could have used thetastar <- runif(1) in the Be(5, 2)
illustration.
Gibbs sampler [Geman and Geman, 1984] In many practical examples, the so-called full
conditionals or complete conditionals p(θ j | ~θ(− j) , y) are known and easy to sample
for all j, where θ(− j) = (θ1 , . . . , θ j−1 , θ j+1 , . . . , θk ). In this case we may sample θi, j
from
p(θ j | θi,1 , . . . , θi, j−1 , θi−1, j+1 , . . . , θi−1,k ) for j = 1, . . . , k and set ~θi = (θi,1 , . . . , θi,k ). We
would do this for convenience.
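As a toy illustration of the Gibbs sampler (not an example from the text), consider a bivariate Normal with means 0, SDs 1, and correlation rho. Its full conditionals are θ1 | θ2 ∼ N(rho θ2, √(1 − rho²)), and symmetrically for θ2, so both are easy to sample:

rho <- 0.8
M <- 10000
theta <- matrix ( 0, M, 2 )
for ( i in 2:M ) {
  theta[i,1] <- rnorm ( 1, rho*theta[i-1,2], sqrt(1-rho^2) )   # draw theta1 given current theta2
  theta[i,2] <- rnorm ( 1, rho*theta[i,1],   sqrt(1-rho^2) )   # draw theta2 given new theta1
}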
The next example illustrates several MCMC algorithms on the pine cone data of Ex-
ample 6.2.
Example 6.3 (Pine Cones, cont)
In this example we try several MCMC algorithms to evaluate and display the posterior
distribution in Equation 6.8. Throughout this example, we shall, for compactness, refer
to the posterior density as p(~θ) instead of p(~θ | y1 , . . . , yn ).
First we need functions to return the prior density and the likelihood function.
Now we write a proposal function. This one makes $(\vec\theta^* \mid \vec\theta) \sim \mathrm{N}(\vec\theta, .1 I_6)$, where $I_6$ is the 6 × 6 identity matrix.
g.all <- function ( params ) {
sig <- c(.1,.1,.1,.1,.1,.1)
proposed <- mvrnorm ( 1, mu=params, Sigma=diag(sig) )
return ( list ( proposed=proposed, ratio=1 ) )
}
Finally we write the main part of the code. Try to understand it; you may have to write
something similar. Notice an interesting feature of R: assigning names to the compo-
nents of params allows us to refer to the components by name in the lik function.
# initial values
params <- c ( "b0"=0, "b1"=0, "b2"=0, "g0"=0, "g1"=0, "g2"=0 )
# number of iterations
mc <- 10000
if ( as.logical ( rbinom(1,1,accept.ratio) ) )
params <- new
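A sketch of the whole loop around the fragment above, under the assumptions that lik and prior return log densities and that the proposal function returns a list with components proposed and ratio, as g.all does:

library(MASS)    # for mvrnorm, used by g.all
mcmc.out <- matrix ( NA, mc, length(params) + 1,
                     dimnames = list ( NULL, c(names(params), "logpost") ) )
for ( i in 1:mc ) {
  prop <- g.all ( params )
  new <- prop$proposed
  names(new) <- names(params)
  accept.ratio <- min ( 1, exp ( lik(new) + prior(new)
                                 - lik(params) - prior(params) ) * prop$ratio )
  if ( as.logical ( rbinom(1,1,accept.ratio) ) )
    params <- new
  mcmc.out[i,] <- c ( params, lik(params) + prior(params) )
}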
Figure 6.7 shows trace plots of the output. The plots show that the sampler did not
move very often; it did not mix well and did not explore the space effectively.
When samplers get stuck, sometimes it’s because the proposal radius is too large.
So next we try a smaller radius: sig <- rep(.01,6). Figure 6.8 shows the result.
The sampler is still not mixing well. The parameter β0 travelled from its starting point of
β0 = 0 to about β0 ≈ −1.4 or so, then seemed to get stuck; other parameters behaved
similarly. Let’s try running the chain for more iterations: mc <- 100000. Figure 6.9
shows the result. Again, the sampler does not appear to have mixed well. Parameters
β0 and β1 , for example, have not yet settled into any sort of steady-state behavior and
Figure 6.7: Trace plots of MCMC output from the pine cone code on page 365.
Figure 6.8: Trace plots of MCMC output from the pine cone code with a smaller proposal
radius.
p(~θ) seems to be steadily increasing, indicating that the sampler may not yet have
found the posterior mode.
It is not always necessary to plot every iteration of an MCMC sampler. Figure 6.9
plots every 10’th iteration; plots of every iteration look similar. The figure was produced
by the following snippet.
The sampler isn’t mixing well. To write a better one we should try to understand
why this one is failing. It could be that proposing a change in all parameters simulta-
neously is too dramatic, that once the sampler reaches a location where p(~θ) is large,
changing all the parameters at once is likely to result in a location where p(~θ) is small,
therefore the acceptance ratio will be small, and the proposal will likely be rejected. To
ameliorate the problem we’ll try proposing a change to only one parameter at a time.
The new proposal function is
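(sketched below; the exact proposal SD is an assumption)

g.one <- function ( params ) {
  sig <- .01                                  # proposal SD for the chosen coordinate (assumed)
  j <- sample ( 1:6, 1 )                      # pick one parameter at random
  proposed <- params
  proposed[j] <- rnorm ( 1, params[j], sig )
  return ( list ( proposed=proposed, ratio=1 ) )   # symmetric proposal
}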
which randomly chooses one of the six parameters and proposes to update that pa-
rameter only. Naturally, we edit the main loop to use g.one instead of g.all. Fig-
ure 6.10 shows the result. This is starting to look better. Parameters β2 and γ2 are ex-
hibiting steady-state behavior; so are β0 and β1 , after iteration 10,000 or so ( x = 1000
in the plots). Still, γ0 and γ1 do not look like they have converged.
Figure 6.11 illuminates some of the problems. In particular, β0 and β1 seem to be
linearly related, as do γ0 and γ1 . This is often the case in regression problems; and we
Figure 6.9: Trace plots of MCMC output from the pine cone code with a smaller proposal
radius and 100,000 iterations. The plots show every 10’th iteration.
Figure 6.10: Trace plots of MCMC output from the pine cone code with proposal function
g.one and 100,000 iterations. The plots show every 10’th iteration.
have seen it before for the pine cones in Figure 3.15. In the current setting it means
that p(~θ | y1 , . . . , yn ) has ridges: one along a line in the (β0 , β1 ) plane and another along
a line in the (γ0 , γ1 ) plane.
As Figure 6.10 shows, it took the first 10,000 iterations or so for β0 and β1 to reach a roughly steady state and for p(~θ) to climb to a reasonably large value. If those iterations were included in Figure 6.11, the points after iteration 10,000 would be squashed together in a small region. Therefore we made plotem <- seq ( 10000, 100000, by=10 ) to drop the first 9999 iterations from the plots.
If our MCMC algorithm proposes a move along the ridge, the proposal is likely to be
accepted. But if the algorithm proposes a move that takes us off the ridge, the proposal
is likely to be rejected because p would be small and therefore the acceptance ratio
would be small. But that’s not happening here: our MCMC algorithm seems not to be
stuck, so we surmise that it is proposing moves that are small compared to the widths
of the ridges. However, because the proposals are small, the chain does not explore
the space quickly. That’s why γ0 and γ1 appear not to have reached a steady state.
We could improve the algorithm by proposing moves that are roughly parallel to the
ridges. And we can do that by making multivariate Normal proposals with a covariance
matrix that approximates the posterior covariance of the parameters. We’ll do that by
finding the covariance of the samples we’ve generated and using it as the covariance
matrix of our proposal distribution. The R code is
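(sketched below — the name g.group matches the proposal function named in Figures 6.12 and 6.13, but details such as whether the estimated covariance is rescaled are assumptions):

keep <- 10000:100000                      # drop the first 9999 iterations
Sig.hat <- cov ( mcmc.out[keep, c("b0","b1","b2","g0","g1","g2")] )
g.group <- function ( params ) {
  proposed <- mvrnorm ( 1, mu=params, Sigma=Sig.hat )
  return ( list ( proposed=proposed, ratio=1 ) )
}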
We drop the first 9999 iterations because they seem not to reflect p(~θ) accurately. Then
we calculate the covariance matrix of the samples from the previous MCMC sampler.
Figure 6.11: Pairs plots of MCMC output from the pine cones example.
That covariance matrix is used in the proposal function. The results are shown in
Figures 6.12 and 6.13. Figure 6.12 shows that the sampler seems to have converged
after the first several thousand iterations. The posterior density has risen to a high
level and is hovering there; all six variables appear to be mixing well. Figure 6.13
confirms our earlier impression that the posterior density seems to be approximately
Normal — at least, it has Normal-looking two dimensional marginals — with β0 and
β1 highly correlated with each other, γ1 and γ2 highly correlated with each other, and
no other large correlations. The sampler seems to have found one mode and to be
exploring it well.
Figures 6.12 and 6.13 were produced with the following snippet.
Now that we have a good set of samples from the posterior, we can use it to
answer substantive questions. For instance, we might want to know whether the extra
atmospheric CO2 has allowed pine trees to reach sexual maturity at an earlier age
or to produce more pine cones. This is a question of whether β2 and γ2 are positive,
negative, or approximately zero. Figure 6.14 shows the answer by plotting the posterior
densities of β2 and γ2 . Both densities put almost all their mass on positive values,
indicating that P[β2 > 0] and P[γ2 > 0] are both very large, and therefore that pines
trees with excess CO2 mature earlier and produce more cones than pine trees grown
under normal conditions.
Figure 6.12: Trace plots of MCMC output from the pine cone code with proposal function g.group and 100,000 iterations. The plots show every 10’th iteration.

Figure 6.13: Pairs plots of MCMC output from the pine cones example with proposal g.group.

[Figure 6.14: posterior densities of β2 and γ2.]
par ( mfrow=c(1,2) )
plot ( density ( mcmc.out[10000:100000,"b2"] ),
       xlab=expression(beta[2]),
       ylab=expression(p(beta[2])), main="" )
plot ( density ( mcmc.out[10000:100000,"g2"] ),
       xlab=expression(gamma[2]),
       ylab=expression(p(gamma[2])), main="" )
6.3 Exercises
1. This exercise follows from Example 6.1.
(a) Find the posterior density for C50 , the expected amount of ice cream consumed
when the temperature is 50 degrees, by writing C50 as a linear function of
(β0 , β1 ) and using the posterior from Example 6.1.
(b) Find the posterior density for C50 by reparameterizing. Instead of working with
parameters (β0 , β1 ), work with parameters (C50 , β1 ). Write the equation for Yi
as a linear function of (C50 , β1 ) and find the new X matrix. Use that new matrix
and a convenient prior to calculate the posterior density of (C50 , β1 ).
6.3. EXERCISES 378
(c) Do parts (a) and (b) agree about the posterior distribution of C50? Does part (b) agree with Example 6.1 about the posterior distribution of β1? Should they agree?
2. This exercise asks you to enhance the code for the Be(5, 2) example on page 359.
(a) How many samples is enough? Instead of 10,000, try different numbers. How
few samples can you get away with and still have an adequate approximation
to the Be(5, 2) distribution? You must decide what "adequate" means; you can
use either a firm or fuzzy definition. Illustrate your results with figures similar
to 6.5.
(b) Try an independence sampler in the Be(5, 2) example on page 359. Replace
the proposal kernel with θ∗ ∼ U(0, 1). Run the sampler, make a figure similar
to Figure 6.5 and describe the result.
(c) Does the proposal distribution matter? Instead of proposing with a radius of
0.1, try different numbers. How much does the proposal radius matter? Does
the proposal radius change your answer to part 2a? Illustrate your results with
figures similar to 6.5.
(d) Try a non-symmetric proposal. For example, you might try a proposal distri-
bution of Be(5, 1), or a distribution that puts 2/3 of its mass on (xi−1 − .1, xi−1 )
and 1/3 of its mass on (xi−1 , xi−1 +.1). Illustrate your results with figures similar
to 6.5.
(e) What would happen if your proposal distribution were Be(5, 2)? How would
the algorithm simplify?
3. (a) Some researchers are interested in θ, the proportion of students who ever cheat
on college exams. They randomly sample 100 students and ask “Have you
ever cheated on a college exam?” Naturally, some students lie. Let φ1 be the
proportion of non-cheaters who lie and φ2 be the proportion of cheaters who
lie. Let X be the number of students who answer “yes” and suppose X = 40.
i. Create a prior distribution for θ, φ1 , and φ2 . Use your knowledge guided
by experience. Write a formula for your prior and plot the marginal prior
density of each parameter.
ii. Write a formula for the likelihood function `(θ, φ1 , φ2 ).
iii. Find the m.l.e..
iv. Write a formula for the joint posterior density p(θ, φ1 , φ2 | X = 40).
v. Write a formula for the marginal posterior density p(θ | X = 40).
4. The EPA conducts occasional reviews of its standards for airborne asbestos. During
a review, the EPA examines data from several studies. Different studies keep track
of different groups of people; different groups have different exposures to asbestos.
Let ni be the number of people in the i’th study, let xi be their asbestos exposure,
and let yi be the number who develop lung cancer. The EPA’s model is yi ∼ Poi(λi )
where λi = ni xi λ and where λ is the typical rate at which asbestos causes cancer.
The ni ’s and xi ’s are known constants; the yi ’s are random variables. The EPA wants
a posterior distribution for λ.
(d) If the EPA used the prior λ ∼ Gam(a, b), what would be the EPA’s posterior?
5. Figures 6.12 and 6.13 suggest that the MCMC sampler has found one mode of the
posterior density. Might there be others? Use the lik function and R’s optim func-
tion to find out. Either design or randomly generate some starting values (You must
decide on good choices for either the design or the randomization.) and use optim
to find a mode of the likelihood function. Summarize and report your results.
6. Example 6.3 shows that β2 and γ2 are very likely positive, and therefore that pine
trees with extra CO2 mature earlier and produce more cones. But how much earlier
and how many more?
(a) Find the posterior means E[β2 | y1 , . . . , yn ] and E[γ2 | y1 , . . . , yn ] approximately,
from the Figures in the text.
(b) Suppose there are three trees in the control plots that have probabilities 0.1, 0.5,
and 0.9 of being sexually mature. Plugging in E[β2 | y1 , . . . , yn ] from the pre-
vious question, estimate their probabilities of being mature if they had grown
with excess CO2 .
(c) Is the plug-in estimate from the previous question correct? I.e., does it correctly
calculate the probability that those trees would be sexually mature? Explain
why or why not. If it’s not correct, explain how to calculate the probabilities
correctly.
7. In the context of Equation 6.7 in Example 6.3 we might want to investigate whether
the coefficient of dbh should be the same for control trees and for treated trees.
(a) Write down a model enhancing that on page 353 to allow for the possibility of
different coefficients for different treatments.
(b) What parts of the R code have to change?
(c) Write the new code.
(d) Run it.
(e) Summarize and report results. Report any difficulties with modifying and run-
ning the code. Say how many iterations you ran and how you checked mixing.
Also report conclusions: does it look like different treatments need different
coefficients? How can you tell?
8. You go to the bus stop to catch a bus. You know that buses arrive every 15 minutes,
but you don’t know when the next is due. Let T be the time elapsed, in hours, since
the previous bus. Adopt the prior distribution T ∼ Unif(0, 1/4).
Passengers, apart from yourself, arrive at the bus stop according to a Poisson process
with rate λ = 2 people per hour; i.e., in any interval of length `, the number of
arrivals has a Poisson distribution with parameter 2` and, if two intervals are disjoint,
then their numbers of arrivals are independent. Let X be the number of passengers,
other than yourself, waiting at the bus stop when you arrive.
(b) Suppose X = 0. Write an intuitive argument for whether that should increase
or decrease your expected value for T . I.e., is E[T |X = 0] greater than, less
than, or the same as E[T ]?
(d) Suppose X = 1. Write an intuitive argument for whether that should increase
or decrease your expected value for T . I.e., is E[T |X = 1] greater than, less
than, or the same as E[T ]?
More Models
This chapter takes up a wide variety of statistical models. It is beyond the scope of this
book to give a full treatment of any one of them. But we hope to introduce each model
enough so the reader can see in what situations it might be useful, what its primary characteristics are, and how a simple analysis might be carried out in R. A more thorough
treatment of many of these models can be found in Venables and Ripley [2002].
Specialized models call for specialized methods. Because not all methods are built
in to R, many people have contributed packages of specialized methods. Their packages
can be downloaded from http://probability.ca/cran/. To use a package, you must
first install it. Then, in each session of R in which you want to use it, you must load
it. For example, to use the survival package for survival analysis you must first type
install.packages("survival"). R should respond either by installing the package or
by asking you to choose a mirror, then installing the package. Next, whenever you want
to use the package, you must type library("survival").
Each child was measured four times, so there are 108 lines in the data set. The first several
lines of data are
Figure 7.1 shows the data and was produced by xyplot ( distance ~ age | Sex, data = Orthodont, groups = Subject, type="o", col=1 ). The function xyplot
is part of the lattice package. As illustrated in the figure, it produces plots of Y versus
X in which points can be grouped by one variable and separated into different panels by
another variable.
The figure shows that for most subjects, distance is an approximately linear function
of age, that males tend to be larger than females, and that different subjects have different
intercepts. Therefore, to fit this data we need a model with the following features: (1) a
parameter for the average intercept of males (or females); (2) a parameter for the average
difference in intercepts for males and females; (3) one parameter per subject for how that
subject’s intercept differs from its group average; and (4) one parameter for the slope.
Model 7.1 has those features.
Figure 7.1: Plots of the Orthodont data: distance as a function of age, grouped by Subject,
separated by Sex.
Here is the key feature: we want β0 to represent the average intercept of all males and
β0 + β1 to represent the average intercept of all females, not just those in this study; and
we want to think of the δi ’s in this study as random draws from a population of δ’s rep-
resenting all children. The terminology is that β0 , β1 , and β2 are fixed effects because
they describe the general population, not the particular individuals in the sample, while
δ1 , . . . , δn are random effects because they are random draws from the general population.
It is customary to add a term to the model to describe the distribution of random effects.
In our case we shall model the δi ’s as N(0, σran. eff. ). Thus Model 7.1 becomes
Modelling the random effects as Normal is an arbitrary choice. We could have used a
different distribution, but the Normal is convenient and not obviously contraindicated by
Figure 7.1. Choosing the mean of the Normal distribution to be 0 is not arbitrary. It’s
the result of thinking of the δi ’s as draws from a larger population. In any population, the
average departure from the mean must be 0.
7.1. RANDOM EFFECTS 385
Model 7.2 is called a mixed effects model because it contains both fixed and random
effects. Often, in mixed effects models, we are interested in σran. eff. . Whether we are
interested in the individual δi ’s depends on the purpose of the investigation.
We can fit Model 7.2 by the following R code.
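A sketch of the call, consistent with the output below and with the ortho.fit4 call shown later (lme and the Orthodont data are in the nlme package):

library(nlme)
ortho.fit1 <- lme ( distance ~ age + Sex, random = ~ 1, data = Orthodont )
summary ( ortho.fit1 )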
• lme stands for “linear mixed effects model.” It is in the nlme package.
• The formula distance ~ age + Sex is just like formulas for linear models.
• random = ~ 1 specifies the random effects. In this case, the random effects are intercepts, or coefficients of 1. If we thought that each child had his or her own slope, then we would have said random = ~ age - 1 to say that the random effects are coefficients of age but not of 1.
Random effects:
Formula: ~1 | Subject
(Intercept) Residual
StdDev: 1.807425 1.431592
The fixed effects part of the output is just like the fixed effects part of linear models. The
estimates of (β0 , β1 , β2 ) are around (17.7, −2.32, 0.66). The random effects part of the
output shows that the estimate of σ is about 1.43 while the estimate of σran. eff. is about
1.81. In other words, most of the intercepts are within about 3.6 or so of their mean, and the differences from child to child are about the same size as the unexplained variation within each child’s data. We can see whether that’s sensible by inspecting Figure 7.1. It also
appears, from Figure 7.1, that one of the males doesn’t fit the general pattern. Because that
subject’s data fluctuates wildly from the pattern, it may have inflated the estimate of σ. We
might want to remove that subject’s data and refit the model, just to see how influential it
really is. That’s a topic we don’t pursue here.
We can see estimates of δ1 , . . . , δ27 by typing random.effects(ortho.fit1), which
yields
(Intercept)
M16 -1.70183357
M05 -1.70183357
M02 -1.37767479
M11 -1.16156894
M07 -1.05351602
M08 -0.94546309
M03 -0.62130432
M12 -0.62130432
M13 -0.62130432
M14 -0.08103969
M09 0.13506616
M15 0.78338371
M06 1.21559540
M04 1.43170125
M01 2.40417758
M10 3.91691853
F10 -3.58539251
F09 -1.31628108
F06 -1.31628108
F01 -1.10017523
F05 -0.01964599
F07 0.30451279
F02 0.30451279
F08 0.62867156
F03 0.95283034
F04 1.92530666
F11 3.22194176
The reasonableness of these estimates can be judged by comparison to Figure 7.2, where
we can also see that the strange male is M09.
Figure 7.2: Plots of the Orthodont data: distance as a function of age, separated by
Subject.
To see what is gained by fitting the mixed effects model, we compare it to two other
models that have fixed effects only.
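The first comparison fit, called ortho.fit2 below, is presumably an ordinary linear model with the same fixed-effects formula; a minimal sketch (the exact call is not reproduced in this excerpt):

ortho.fit2 <- lm ( distance ~ age + Sex, data = Orthodont )
summary ( ortho.fit2 )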
...
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 17.70671 1.11221 15.920 < 2e-16 ***
age 0.66019 0.09776 6.753 8.25e-10 ***
SexFemale -2.32102 0.44489 -5.217 9.20e-07 ***
...
Residual standard error: 2.272 on 105 degrees of freedom
...
The estimates of the coefficients are exactly the same as in ortho.fit1. But, since ortho.fit2
ignores differences between subjects, it attributes those differences to the observation SD
σ. That’s why the estimate of σ from ortho.fit1, 1.43, is smaller than the estimate from
ortho.fit2, 2.27.
The summary of ortho.fit3 shows that the estimates of β0 and β1 have changed. That’s because, in the fixed effects
model, those estimates are too dependent on the particular subjects in the study. The
estimates from ortho.fit1 are preferred, as they come from a model that describes the
data more correctly.
Finally, in Figure 7.1 there is a slight suggestion that the slope for males is slightly
larger than the slope for females. Model 7.3 allows for that possibility: β3 is the difference
in slope between males and females.
The model can be fit by ortho.fit4 <- lme ( distance ~ age * Sex, random =
~ 1, data = Orthodont ) and yields
...
Random effects:
Formula: ~1 | Subject
(Intercept) Residual
StdDev: 1.816214 1.386382
This model suggests that the slope for males is about 0.78 while the slope for females is
about 0.3 less, plus or minus about 0.25 or so. The evidence about whether males and females
have the same slope is not overwhelming in either direction. But if they differ, it could be by
an amount (roughly, .30 ± 2 × .12) that is large compared to the slope for males. For all its
extra complexity, this model has reduced the residual SD, and hence improved our accuracy
of prediction, only from about 1.43 to about 1.39. At this point, we don’t have a strong preference for either
model and, if we were investigating further, would keep both of them in mind.
The next example comes from a study of how ants store nutrients over the winter.
Example 7.1 (Ant Fat)
The data in this example were collected to examine the strategy that ants of the
species Pheidole morrisi employ to store nutrients, specifically fat, over the winter;
the study was reported by Yang [2006]. Many animals need to store nutrients over
the winter when food is scarce. For most species, individual animals store their own
nutrients. But for some species, nutrients can be stored collectively. As Yang explains,
"Among ants, a common mechanism of colony fat storage is for workers of both castes
(majors and minors) to uniformly increase the amount of fat they hold . . . The goal of
this study is to better understand the specific mechanisms by which ants use divi-
sion of labor to store colony fat . . . ." The need to store fat varies with the severity of
winter, which in turn varies with latitude. So Yang studied ants at three sites, one in
Florida, one in North Carolina, and one in New York. At each site he dug up several
ant colonies in the Spring and another several colonies in the Fall. From each colony,
the ants were separated into their two castes, majors and minors, and the fat content
of each caste was measured. The first several lines of data look like this.
colony season site caste fat
1 1 Spring New York minors 15.819209
2 2 Spring New York minors 10.526316
3 3 Spring New York minors 18.534483
4 4 Spring New York minors 21.467098
5 5 Spring New York minors 9.784946
6 6 Spring New York minors 20.138289
...
There are 108 lines in all. Figure 7.3 shows the data along with kernel density esti-
mates. It was produced by densityplot ( ~ fat | site+season, groups=caste,
data=ants, adjust=1.5 ). Purple lines are for minors; blue lines for majors. In New
York and North Carolina, it appears that minors have, on average, less fat than ma-
jors and less variability. There is also some suggestion that fat content increases with
increasing latitude. The main question is how the average percent fat differs by site,
season, and caste. Because the predictors are categorical, we need to define indicator
variables: NYi = 1 if the i’th observation was from NY; NCi = 1 if the i’th observation
was from NC; Springi = 1 if the i’th observation was from Spring; and minorsi = 1 if
the i’th observation was on the minor caste. With those conventions, we begin with
the following linear model.
Figure 7.3: Percent body fat of major (blue) and minor (purple) Pheidole morrisi ants at
three sites in two seasons.
In Model 7.4, β0 is the average fat content of major, Florida ants in the Fall; β1 is the
difference between New York majors and Florida majors in the Fall; β2 is the difference
between North Carolina majors and Florida majors in the Fall; . . . ; β9 is the difference
between Florida majors in the Fall and Florida minors in the Spring; . . . ; etc. Model 7.4
can be fit by
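a call along the following lines. This is a minimal sketch, not the book's exact code: it assumes the data frame is called ants (as in the densityplot call above) and uses a full interaction of the three categorical predictors, which with R's default treatment contrasts gives the parameterization described below.

fit1 <- lm ( fat ~ site * season * caste, data = ants )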
The parameter estimates and their SD’s can be seen with summary ( fit1 ).
We see that New York majors are much fatter, by about 13.2 percentage points on
average, than Florida majors in the Fall, while North Carolina majors are in between.
That seems consistent with the theory that New York ants need to store more fat in
the Fall because they are going to face a longer, harder winter than Florida ants. In
the Spring, Florida majors have about 16.9% − 6.1% ≈ 10.8% body fat while New York
majors have about 16.9% + 13.2% − 6.1% + 4.3% ≈ 28.2% body fat. It appears that
New York majors did not lose as much fat over the winter as Florida majors. Perhaps
they didn’t need to store all that fat after all.
Before addressing that question more thoroughly, we want to examine one possible
inadequacy of the model. The data set contains two data points from each colony —
one for majors, one for minors — and it’s possible that those two data points are not
independent. In particular, there might be colony effects; one colony might be fatter,
on average, than another from the same site and season, and that extra fatness might
apply to both castes. To see whether that’s true, we’ll examine residuals from fit1.
Specifically, we’ll plot residuals for minors on the abscissa and residuals for majors
Figure 7.4: Residuals from Model 7.4. Each point represents one colony. There is an
upward trend, indicating the possible presence of colony effects.
on the ordinate. Figure 7.4 is the plot. There is a clear, upward, approximately linear
trend, indicating that majors and minors from one colony tend to be thin or fat together.
We can capture that tendency by including random colony effects in the model. That
leads to Model 7.5, which differs from Model 7.4 by the presence of the δ’s, which are
assumed to be distributed i.i.d. N(0, σran. eff. ). Model 7.5 can be fit by
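a call like the following. Again this is a minimal sketch rather than the original code: it parallels fit1 but adds a random intercept for each colony (lme is in the nlme package).

fit2 <- lme ( fat ~ site * season * caste, random = ~ 1 | colony, data = ants )
summary ( fit2 )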
The R code (...random = ~1|colony) says to give each colony its own, random,
intercept. The summary of fit2 is, in part,
Random effects:
Formula: ~1 | colony
(Intercept) Residual
StdDev: 4.164122 4.76696
There are several things to note about fit2. First, σ̂ ≈ 4.77 is somewhat smaller than
the estimate from fit1. That’s because some of the variability that fit1 attributes to
the ε’s, fit2 attributes to the δ’s. Second, σ̂ran. eff. ≈ 4.16 is about the same size as
σ̂, indicating that colony effects explain a sizable portion of the variation in the data.
Third, estimates of the β’s are unchanged. And fourth, some of the estimates of SD’s
of the β’s are changed while others are not. Specifically, all the SD’s involving caste
effects have decreased. That’s because some of the variability that fit1 attributes to
Majors Minors
Fall Spring Fall Spring
Florida 16.9 10.8 15.3 10.1
North Carolina 25.0 15.7 24.0 11.0
New York 30.1 28.3 24.3 18.0
Table 7.1: Fat as a percentage of body weight in ant colonies. Three sites, two seasons,
two castes.
• Both castes in all locations stored more fat in the Fall than in the Spring.
• Majors store more fat than minors, in some cases by a lot, in other cases by a
little.
• Moving from Florida to North Carolina to New York, majors increase their fat
content in both Fall and Spring. But moving from Florida to North Carolina,
minors increase their fat content only in the Fall while from North Carolina to
New York, they increase only in the Spring.
The summary in Table 7.1 and the preceding bullets could have been carried out with
no formal statistical analysis. That’s the way it goes, sometimes. In this example,
we could use the statistical analysis to be more precise about how accurately we can
estimate each of the effects noted in the bullets. Here, there seems little need to do
that.
Yang [2006] carries the analysis further. For one thing, he is more formal about
statistics. In addition, he notes that majors can be further divided into repletes and non-
repletes. According to Yang, “[R]epletes carry and store a disproportionate amount of
nutrients relative to other individuals in a colony and provide it to other colony members
through trophallaxis in times of food scarcity." And according to Wikipedia, "Trophal-
laxis is the transfer of food or other fluids among members of a community through
mouth-to-mouth (stomodeal) or anus-to-mouth (proctodeal) feeding. It is most highly
developed in social insects such as ants, termites, wasps and bees." Yang finds in-
teresting differences in fat storage among repletes and other majors, differences that
vary according to season and location.
7.2 Time Series and Markov Chains

Mauna Loa Monthly atmospheric concentrations of CO2 are expressed in parts per mil-
lion (ppm) and reported in the preliminary 1997 SIO manometric mole fraction
scale.
DAX The data are the daily closing prices of Germany’s DAX stock index. The data are
sampled in business time; i.e., weekends and holidays are omitted.
UK Lung Disease The data are monthly deaths from bronchitis, emphysema and asthma
in the UK, 1974 – 1979.
Canadian Lynx The data are annual numbers of lynx trappings for 1821 – 1934 in Canada.
Presidents The data are (approximately) quarterly approval rating for the President of the
United States from the first quarter of 1945 to the last quarter of 1974.
UK drivers The data are monthly totals of car drivers in Great Britain killed or seriously
injured Jan 1969 to Dec 1984. Compulsory wearing of seat belts was introduced on
31 Jan 1983.
Sun Spots The data are monthly numbers of sunspots. They come from the World Data
Center-C1 For Sunspot Index Royal Observatory of Belgium, Av. Circulaire, 3, B-
1180 BRUSSELS http://www.oma.be/KSB-ORB/SIDC/sidc_txt.html.
What these data sets have in common is that they were all collected sequentially in time.
Such data are known as time series data. Because each data point is related to the ones
before and the ones after, they usually cannot be treated as independent random variables.
Methods for analyzing data of this type are called time series methods.
• The mar argument in the par command says how many lines are in the margins of
each plot. Those lines are used for titles and axis labels. The command is used here
to decrease the default so there is less white space between the plots, hence more
room for each plot.
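The code that produced Figure 7.5 is not reproduced in this excerpt; a minimal sketch using the standard R data sets (beaver1, co2, EuStockMarkets, ldeaths, lynx, presidents, UKDriverDeaths, sunspot.month), with margin settings that are our own choice, might look like this.

par ( mfrow=c(4,2), mar=c(4,4,2,1) )   # smaller margins leave more room for each panel
plot.ts ( beaver1$temp, main="Beaver", ylab="Temperature" )
plot ( co2, main="Mauna Loa", ylab="CO2 (ppm)" )
plot ( EuStockMarkets[,"DAX"], main="DAX", ylab="Closing Price" )
plot ( ldeaths, main="UK Lung Disease", ylab="monthly deaths" )
plot ( lynx, main="Canadian Lynx", ylab="trappings" )
plot ( presidents, main="Presidents", ylab="approval" )
plot ( UKDriverDeaths, main="UK drivers", ylab="deaths" )
plot ( sunspot.month, main="Sun Spots", ylab="number of sunspots" )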
The data sets in Figure 7.5 exhibit a feature common to many time series: if one data
point is large, the next tends to be large, and if one data point is small, the next tends to
be small; i.e., Yt and Yt+1 are dependent. The dependence can be seen in Figure 7.6 which
plots Yt+1 vs. Yt for the Beaver and Presidents data sets. The upward trend in each panel
shows the dependence. Time series analysts typically use the term autocorrelation — the
prefix auto refers to the fact that the time series is correlated with itself — even though
they mean dependence. R has the built-in function acf for computing autocorrelations.
The following snippet shows how it works.
> acf ( beaver1$temp, plot=F, lag.max=5 )
Figure 7.5: Time series. Beaver: Body temperature of a beaver, recorded every 10 min-
utes; Mauna Loa: Atmospheric concentration of CO2 ; DAX: Daily closing prices of the
DAX stock exchange in Germany; UK Lung Disease: monthly deaths from bronchitis,
emphysema and asthma; Canadian Lynx: annual number of trappings; Presidents: quar-
terly approval ratings; UK drivers: deaths of car drivers; Sun Spots: monthly sunspot
numbers. In all cases the abscissa is time.
0 1 2 3 4 5
1.000 0.826 0.686 0.580 0.458 0.342
The six numbers in the bottom line are Cor(Yt , Yt ), Cor(Yt , Yt+1 ), . . . , Cor(Yt , Yt+5 ) and are
referred to as autocorrelations of lag 0, lag 1, . . . , lag 5. Those autocorrelations can, as
usual, be visualized with plots as in Figure 7.7.
Figure 7.6: Yt+1 plotted against Yt for the Beaver and Presidents data sets
dim ( beaver1 )
plot ( beaver1$temp[-114], beaver1$temp[-1], main="Beaver",
xlab=expression(y[t]), ylab=expression(y[t+1]) )
length ( presidents )
plot ( presidents[-120], presidents[-1], main="Presidents",
xlab=expression(y[t]), ylab=expression(y[t+1]) )
Figure 7.7: Yt+k plotted against Yt for the Beaver data set and lags k = 0, . . . , 5
par ( mfrow=c(3,2) )
temp <- beaver1$temp
n <- length(temp)
for ( k in 0:5 ) {
x <- temp[1:(n-k)]
y <- temp[(1+k):n]
plot ( x, y, xlab=expression(Y[t]),
ylab=expression(Y[t+k]), main=paste("lag =", k) )
}
Because time series data cannot usually be treated as independent, we need special
methods to deal with them. It is beyond the scope of this book to present the major the-
oretical developments of time series methods. As Figure 7.5 shows, there can be a wide
variety of structure in time series data. In particular, the Beaver and Presidents data sets
have no structure readily apparent to the eye; DAX has seemingly minor fluctuations im-
posed on a general increasing trend; UK Lung Disease and UK drivers have an annual
cycle; Mauna Loa has an annual cycle imposed on a general increasing trend; and Cana-
dian Lynx and Sun Spots are cyclic, but for no obvious reason and with no obvious length
of the cycle. In the remainder of this section we will show, by analyzing some of the data
sets in Figure 7.5, some of the possibilities.
Beaver Our goal is to develop a more complete picture of the probabilistic structure of
the {Yt }’s. To that end, consider the following question. If we’re trying to predict Yt+1 , and
if we already know Yt , does it help us also to know Yt−1 ? I.e., are Yt−1 and Yt+1 conditionally
independent given Yt ? That question can be answered visually with a coplot (Figures 2.15
and 2.16). Figure 7.8 shows the coplot for the Beaver data.
Figure 7.8: coplot of Yt+1 as a function of Yt−1 given Yt for the Beaver data set
The figure is ambiguous. In the first, second, and sixth panels, Yt+1 and Yt−1 seem
to be linearly related given Yt , while in the third, fourth, and fifth panels, Yt+1 and Yt−1
seem to be independent given Yt . We can examine the question numerically with the
partial autocorrelation, the conditional correlation of Yt+1 and Yt−1 given Yt . The following
snippet shows how to compute partial autocorrelations in R using the function pacf.
> pacf ( temp, lag.max=5, plot=F )
1 2 3 4 5
0.826 0.014 0.031 -0.101 -0.063
The numbers in the bottom row are Cor(Yt , Yt+k | Yt+1 , . . . , Yt+k−1 ). Except for the first,
they’re small. Figure 7.8 and the partial autocorrelations suggest that a model in which
Yt+1 ⊥ Yt−1 | Yt would fit the data well. And the second panel in Figure 7.7 suggests that
a model of the form Yt+1 = β0 + β1 Yt + εt+1 might fit well. Such a model is called an
autoregression. R has a function ar for fitting them. Here’s how it works with the Beaver
data.
> fit <- ar ( beaver1$temp, order.max=1 )
> fit # see what we’ve got
Call:
ar(x = beaver1$temp, order.max = 1)
Coefficients:
1
0.8258
Mauna Loa The Mauna Loa data look like an annual cycle superimposed on a steadily
increasing long term trend. Our goal is to estimate both components and decompose the
data as
Yt = long term trend + annual cycle + unexplained variation.
Our strategy, because it seems easiest, is to estimate the long term trend first, then use
deviations from the long term trend to estimate the annual cycle. A sensible estimate of
the long term trend at time t is the average of a year’s CO2 readings, for a year centered at
t. Thus, let
.5yt−6 + yt−5 + · · · + yt+5 + .5yt+6
ĝ(t) = (7.6)
12
where g(t) represents the long term trend at time t. R has the built-in command filter to
compute ĝ. The result is shown in Figure 7.9 (a) which also shows how to use filter.
Deviations from ĝ are co2 - g.hat. See Figure 7.9 (b). The deviations can be grouped
by month, then averaged. The average of the January deviations, for example, is a good
estimate of how much the January CO2 deviates from the long term trend, and likewise for
other months. See Figure 7.9 (c). Finally, Figure 7.9 (d) shows the data, ĝ, and the fitted
values ĝ + monthly effects. The fit is good: the fitted values differ very little from the data.
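A minimal sketch of the computation (g.hat is the name used in the text; wts, resids, and cycle.est are ours, and the exact code behind Figure 7.9 is not reproduced in this excerpt):

wts <- c ( .5, rep(1,11), .5 ) / 12                            # the weights in Equation 7.6
g.hat <- filter ( co2, wts )                                   # centered moving average; NA at the ends
resids <- co2 - g.hat                                          # deviations from the long term trend
cycle.est <- tapply ( resids, cycle(co2), mean, na.rm=TRUE )   # average deviation for each month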
DAX Yt is the closing price of the German stock exchange DAX on day t. Investors
often care about the rate of return Yt∗ = Yt+1 /Yt , so we’ll have to consider whether to
Figure 7.9: (a): CO2 and ĝ; (b): residuals; (c): residuals averaged by month; (d): data, ĝ,
and fitted values
analyze the Yt ’s directly or convert them to Yt∗ ’s first. Figure 7.10 is for the DAX prices
directly. Panel (a) shows the Yt ’s. It seems to show minor fluctuations around a steadily
increasing trend. Panel (b) shows the time series of Yt − Yt−1 . It seems to show a series
of fluctuations approximately centered around 0, with no apparent pattern, and with larger
fluctuations occurring later in the series. Panel (c) shows Yt versus Yt−1 . It shows a strong
linear relationship between Yt and Yt−1 . Two lines are drawn on the plot: the lines Yt = β0 +
β1 Yt−1 for (β0 , β1 ) = (0, 1) and for (β0 , β1 ) set equal to the ordinary regression coefficients
found by lm. The two lines are indistinguishable, suggesting that Yt ≈ Yt−1 is a good model
for the data. Panel (d) is a Q-Q plot of Yt − Yt−1 . It is not approximately linear, suggesting
that Yt ∼ N(Yt−1 , σ) is not a good model for the data.
DAX <- EuStockMarkets[,"DAX"]   # the DAX series; EuStockMarkets is a standard R data set
n <- length ( DAX )             # needed for the DAX[-n] constructions below
par ( mfrow=c(2,2) )
plot.ts ( DAX, main="(a)" )
plot.ts ( diff(DAX), ylab = expression ( DAX[t] - DAX[t-1] ),
main="(b)" )
plot ( DAX[-n], DAX[-1], xlab = expression ( DAX[t-1] ),
ylab = expression ( DAX[t] ), main="(c)", pch="." )
abline ( 0, 1 )
abline ( lm ( DAX[-1] ~ DAX[-n] )$coef, lty=2 )
qqnorm ( diff(DAX), main="(d)", pch="." )
• The R command diff is for taking differences, typically of time series. diff(y)
yields y[2]-y[1], y[3]-y[2], ... which could also be accomplished eas-
ily enough without using diff: y[-1] - y[-n]. But additional arguments, as
in diff ( y, lag, differences ) make it much more useful. For exam-
ple, diff(y,lag=2) yields y[3]-y[1], y[4]-y[2], ... while diff ( y,
differences=2 ) is the same as diff ( diff(y) ). The latter is a construct
very useful in time series analysis.
Figure 7.11 is for the Yt∗ ’s. Panel (a) shows the time series. It shows a seemingly
patternless set of data centered around 1. Panel (b) shows the time series of Yt∗ − Yt−1∗ , a
seemingly patternless set of data centered at 0. Panel (c) shows Yt∗ versus Yt−1∗ . It shows
no apparent relationship between Yt∗ and Yt−1∗ , suggesting that Yt∗ ⊥ Yt−1∗ is a good model
for the data. Panel (d) is a Q-Q plot of Yt∗ . It is approximately linear, suggesting that Yt∗ ∼
N(µ, σ) is a good model for the data, with a few outliers on both the high and low ends.
Figure 7.10: DAX closing prices. (a): the time series of Yt ’s; (b): Yt − Yt−1 ; (c): Yt versus
Yt−1 ; (d): QQ plot of Yt − Yt−1 .
The mean and SD of the Y ∗ ’s are about 1.000705 and 0.01028; so Y ∗ ∼ N(1.0007, 0.01)
might be a good model.
Figure 7.11: DAX returns. (a): the time series of Yt∗ ’s; (b): Yt∗ − Yt−1∗ ; (c): Yt∗ versus Yt−1∗ ;
(d): QQ plot of Yt∗ .
rate <- DAX[-1] / DAX[-n]   # reconstructed: the returns, following the text's definition of Y*
par ( mfrow=c(2,2) )
plot.ts ( rate, main="(a)" )
We now have two possible models for the DAX data: Yt ≈ Yt−1 with a still to be
determined distribution and Y ∗ ∼ N(1.0007, 0.01) with the Yt∗ ’s mutually independent.
Both seem plausible on statistical grounds. (But see Exercise 8 for further development.)
It is not necessary to choose one or the other. Having several ways of describing a data
set is useful. Each model gives us another way to view the data. Economists and investors
might prefer one or the other at different times or for different purposes. There might
even be other useful models that we haven’t yet considered. Those would be beyond the
scope of this book, but could be covered in texts on time series, financial mathematics,
econometrics, or similar topics.
7.3 Survival Analysis

higher education The time until an Associate Professor is promoted to Full Professor.
Such data are called survival data. For the i’th person, neuron, computer, etc., there is a
random variable
yi = time of event on i’th unit.
We usually call yi the lifetime, even though the event is not necessarily death. It is often the
case with survival data that some measurements are censored. For example, if we study
a university’s records to see how long it takes to get promoted from Associate to Full
Professor, we will find some Associate Professors leave the university — either through
retirement or by taking another job — before they get promoted while others are still
Associate Professors at the time of our study. For these people we don’t know their time
of promotion. If either (a) person i left the university after five years, or (b) person i
became Associate Professor five years prior to our study, then we don’t know yi exactly.
All we know is yi > 5. This form of censoring is called right censoring. In some data sets
there may also be left censoring or interval censoring. Survival analysis typically requires
specialized statistical techniques. R has a package of functions for this purpose; the name
of the package is survival. The survival package is automatically distributed with
R. To load it into your R session, type library(survival). The package comes with
functions for survival analysis and also with some example data sets. Our next example
uses one of those data sets.
Example 7.2 (Bladder Tumors)
This example comes from a study of bladder tumors, originally published in Byar [1980]
and later reanalyzed in Wei et al. [1989]. Patients had bladder tumors. The tumors
were removed and the patients were randomly assigned to one of three treatment
groups (placebo, thiotepa, pyridoxine). Then the patients were followed through time
to see whether and when bladder tumors would recur. R’s survival package has the
data for the first two treatment groups, placebo and thiotepa. Type bladder to see it.
(Remember to load the survival package first.) The last several lines look like this.
id rx number size stop event enum
341 83 2 3 4 54 0 1
342 83 2 3 4 54 0 2
343 83 2 3 4 54 0 3
344 83 2 3 4 54 0 4
345 84 2 2 1 38 1 1
346 84 2 2 1 54 0 2
347 84 2 2 1 54 0 3
348 84 2 2 1 54 0 4
349 85 2 1 3 59 0 1
350 85 2 1 3 59 0 2
351 85 2 1 3 59 0 3
352 85 2 1 3 59 0 4
• id is the patient’s id number. Note that each patient has four lines of data. That’s
to record up to four recurrences of tumor.
For example, patient 83 was followed for 54 months and had no tumor recurrences;
patient 85 was followed for 59 months and also had no recurrences. But patient 84,
who was also followed for 54 months, had a tumor recurrence at month 38 and no fur-
ther recurrences after that. Our analysis will look at the time until the first recurrence,
so we want bladder[bladder$enum==1,]. Looking at its last several lines: patients 80,
81, 83, and 85 had no tumors for as long as they were followed; their
data is right-censored. The data for patients 82 and 84 is not censored; it is observed
exactly.
Figure 7.12 is a plot of the data. The solid line is for placebo; the dashed line for
thiotepa. The abscissa is in months. The ordinate shows the fraction of patients who
have survived without a recurrence of bladder tumors. The plot shows, for example,
that at 30 months, the survival rate without recurrence is about 50% for thiotepa pa-
tients compared to a little under 40% for placebo patients. The circles on the plot show
censoring. I.e., the four circles on the solid curve between 30 and 40 months repre-
sent four placebo patients whose data was right-censored. There is a circle at every
censoring time that is not also the time of a recurrence (for a different patient).
Figure 7.12: Survival curve for bladder cancer. Solid line for placebo; dashed line for
thiotepa.
event.first <- bladder$enum == 1    # reconstructed: restrict to each patient's first record
blad.surv <- Surv ( bladder[event.first,"stop"],
bladder[event.first,"event"] )
blad.fit <- survfit ( blad.surv ~ bladder[event.first,"rx"] )
plot ( blad.fit, conf.int=FALSE, mark=1, xlab="months",
ylab="fraction without recurrence", lty=1:2 )
• Surv is R’s function for creating a survival object. You can type
print(blad.surv) and summary(blad.surv) to learn more about survival ob-
jects.
This estimate is reasonably accurate so long as r(t) is reasonably large; Ŝ (t) is more accu-
rate for small values of t than for large values of t; and there is no information at all for
estimating S (t) for t > max{yi }.
Survival data is often modelled in terms of the hazard function
h(t) = limh→0 P[y ∈ [t, t + h) | y ≥ t] / h = limh→0 P[y ∈ [t, t + h)] / (h P[y ≥ t]) = f(t) / S(t).     (7.7)
The interpretation of h(t) is the fraction, among people who have survived to time t, of
those who will die soon thereafter. There are several parametric families of distributions
for lifetimes in use for survival analysis. The most basic is the exponential, f(y) =
(1/λ) exp(−y/λ), which has hazard function h(y) = 1/λ, a constant. A constant hazard
function says, for example, that young people are just as likely to die as old people, or that
new air conditioners are just as likely to fail as old air conditioners. For many applications
that assumption is unreasonable, so statisticians may work with other parametric families
for lifetimes, especially the Weibull, which has h(y) = λα(λy)α−1 , an increasing function
of y if α > 1. We will not dwell further on parametric models; the interested reader should
refer to a more specialized source.
However, the goal of survival analysis is not usually to estimate S and h, but to compare
the survivor and hazard functions for two groups such as treatment and placebo or to see
how the survivor and hazard functions vary as functions of some covariates. Therefore it
is not usually necessary to estimate S and h well, as long as we can estimate how S and
h differ between groups, or as a function of the covariates. For this purpose it has become
common to adopt a proportional hazards model:
h(t | x) = h0(t) exp(xβ)     (7.8)
where h0 is the baseline hazard function that is adjusted according to x, a vector of covari-
ates and β, a vector of coefficients. Equation 7.8 is known as the Cox proportional hazards
model. The goal is usually to estimate β. R’s survival package has a function for fitting
Equation 7.8 to data.
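That function is coxph. A minimal sketch for these data, restricted to each patient's first record as above (the object name blad.cox is ours; the exact call is not reproduced in this excerpt):

blad.cox <- coxph ( Surv(stop, event) ~ rx, data = bladder[bladder$enum==1,] )
summary ( blad.cox )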
Figure 7.13: Cumulative hazard and log(hazard) curves for bladder cancer. Solid line for
thiotepa; dashed line for placebo.
The estimated coefficient is β̂trt = −0.371. Thus the hazard function for thiotepa pa-
tients is estimated to be exp(−0.371) = 0.69 times that for placebo patients. The
standard error of β̂trt = −0.371 is about 0.3; so β̂trt is accurate to about ±0.6 or so.
7.4 Exercises
1. In the orthodont data, M09 doesn’t follow the general pattern. Maybe there’s an
error in the data or maybe that person really did grow in an unusual way. Fit a model
similar to ortho.fit1 but excluding the data from M09. Does anything change in
an important way?
2. In the orthodont data, it seems clear that different individuals have different inter-
cepts. It’s not as clear whether they have different slopes. Conduct an analysis to
investigate that possibility. Your analysis must address at least two questions. First,
do different individuals seem to have different slopes? Second, is the apparent differ-
ence in Male and Female slopes found by ortho.fit4 really a difference between
males and females on average, or is it due to the particular males and females who
happened to be chosen for this study?
3. Carry out a Bayesian analysis of the model in Equation 7.2.
4. (a) Make plots analogous to Figures 7.6 and 7.7, compute autocorrelations, and
interpret for the other datasets in Figure 7.5.
(b) Make plots analogous to Figure 7.8, compute partial autocorrelations, and in-
terpret for the other data sets in Figure 7.5.
5. Create and fit a good model for body temperatures of the second beaver. Use the
dataset beaver2.
6. (a) Why does Equation 7.6 average over a year? Why isn’t it, for example,
ĝ(t) = (.5yt−k + yt−k+1 + · · · + yt+k−1 + .5yt+k) / (2k)
for some k ≠ 6?
(b) Examine ĝ in Equation 7.6. Use R if necessary. Why are some of the entries
NA?
7. The R code for Figure 7.9 contains the lines
9. Figures 7.10 and 7.11 and the accompanying text analyze the DAX time series as
though it has the same structure throughout the entire time. Does that make sense?
Think of and implement some way of investigating whether the structure of the
series changes from early to late.
10. Choose one or more of the other EU Stock Markets that come with the DAX data.
Investigate whether it has the same structure as the DAX.
14. This question follows up Example 7.3. In the example we analyzed the data to learn
the effect of thiotepa on the recurrence of bladder tumors. But the data set has two
other variables that might be important covariates: the number of initial tumors and
the size of the largest initial tumor.
(a) Find the distribution of the numbers of initial tumors. How many patients had
1 initial tumor, how many had 2, etc?
(b) Divide patients, in a sensible way, into groups according to the number of
initial tumors. You must decide how many groups there should be and what
the group boundaries should be.
(c) Make plots similar to Figures 7.12 and 7.13 to see whether a proportional haz-
ard model looks sensible for number of initial tumors.
(d) Fit a proportional hazard model and report the results.
(e) Repeat the previous analysis, but for size of largest initial tumor.
(f) Fit a proportional hazard model with three covariates: treatment, number of
initial tumors, size of largest initial tumor. Report the results.
Chapter 8
Mathematical Statistics
1. Let Y1 , . . . , Yn ∼ i.i.d. Poi(λ). Chapter 2, Exercise 7 showed that ℓ(λ) depends only
on Σ Yi and not on the specific values of the individual Yi ’s.

2. Let Y1 , . . . , Yn ∼ i.i.d. Exp(λ). Chapter 2, Exercise 20 showed that ℓ(λ) depends only
on Σ Yi and not on the specific values of the individual Yi ’s.

Further, since ℓ(λ) quantifies how strongly the data support each value of λ, other aspects
of y are irrelevant. Thus, for inference about λ it suffices to know ℓ(λ), and therefore,
for Poisson and Exponential data, it suffices to know Σ Yi . We don’t need to know the
individual Yi ’s. We say that Σ Yi is a sufficient statistic for λ.
Section 8.1.1 examines the general concept of sufficiency. We work in the context of a
parametric family. The idea of sufficiency is formalized in Definition 8.1.
Definition 8.1. Let {p(· | θ)} be a family of probability densities indexed by a parameter θ.
Let y = (y1 , . . . , yn ) be a sample from p(· | θ) for some unknown θ. Let T (y) be a statistic
such that the joint distribution factors as
∏ p(yi | θ) = g(T(y), θ) h(y)
for some functions g and h. Then T(y) is called a sufficient statistic for θ.
The idea is that once the data have been observed, h(y) is a constant that does not
depend on θ, so ℓ(θ) ∝ ∏ p(yi | θ) = g(T, θ)h(y) ∝ g(T, θ). Therefore, in order to know the
likelihood function and make inference about θ, we need only know T(y), not anything
else about y. For our Poisson and Exponential examples we can take T(y) = Σ yi .
For a more detailed look at sufficiency, think of generating three Bern(θ) trials y ≡
(y1 , y2 , y3 ). y can be generated, obviously, by generating y1 , y2 , y3 sequentially. The possi-
ble outcomes and their probabilities are
(0, 0, 0) (1 − θ)3
(1, 0, 0)
(0, 1, 0) θ(1 − θ)2
(0, 0, 1)
(1, 1, 0)
(1, 0, 1) θ2 (1 − θ)
(0, 1, 1)
(1, 1, 1) θ3
Alternatively, y can be generated by a two-step procedure:

1. Generate Σ yi , which has the Bin(3, θ) distribution.

2. (a) If Σ yi = 0, generate (0, 0, 0).
(b) If Σ yi = 1, generate (1, 0, 0), (0, 1, 0), or (0, 0, 1), each with probability 1/3.
(c) If Σ yi = 2, generate (1, 1, 0), (1, 0, 1), or (0, 1, 1), each with probability 1/3.
(d) If Σ yi = 3, generate (1, 1, 1).
It is easy to check that the two-step procedure generates each of the 8 possible outcomes
with the same probabilities as the obvious sequential procedure. For generating y the two
procedures are equivalent. But in the two-step procedure, only the first step depends on θ.
So if we want to use the data to learn about θ, we need only know the outcome of the first
step. The second step is irrelevant. I.e., we need only know Σ yi . In other words, Σ yi is
sufficient.
For an example of another type, let y1 , . . . , yn ∼ i.i.d. U(0, θ). What is a sufficient
statistic for θ?
p(y | θ) = θ−n if yi < θ for i = 1, . . . , n, and 0 otherwise
= θ−n 1(0,θ) (y(n) )
shows that y(n) , the maximum of the yi ’s, is a one dimensional sufficient statistic for θ.
Example 8.1
In World War II, when German tanks came from the factory they had serial numbers
labelled consecutively from 1. I.e., the numbers were 1, 2, . . . . The Allies wanted to
estimate T , the total number of German tanks and had, as data, the serial numbers
of captured tanks. See Exercise 23 in Chapter 5. Assume that tanks were captured
independently of each other and that all tanks were equally likely to be captured. Let
x1 , . . . , xn be the serial numbers of the captured tanks. Then x(n) is a sufficient statistic.
Inference about the total number of German tanks should be based on x(n) and not on
any other aspect of the data.
The whole data set T(y) = y is an n-dimensional sufficient statistic because
∏ p(yi | θ) = g(T(y), θ) h(y)
where g(T(y), θ) = p(y | θ) and h(y) = 1. The order statistic T(y) = (y(1) , . . . , y(n) ) is
another n-dimensional sufficient statistic. Also, if T is any sufficient one dimensional
statistic then T 2 = (y1 , T ) is a two dimensional sufficient statistic. But it is intuitively
clear that these sufficient statistics are higher-dimensional than necessary. They can be
reduced to lower dimensional statistics while retaining sufficiency, that is, without losing
information.
The key idea in the preceding paragraph is that the high dimensional sufficient statistics
can be transformed into the low dimensional ones, but not vice versa. E.g., ȳ is a function
of (y(1) , . . . , y(n) ) but (y(1) , . . . , y(n) ) is not a function of ȳ. Definition 8.2 is for statistics that
have been reduced as much as possible without losing sufficiency.
Definition 8.2. A sufficient statistic T (y) is called minimal sufficient if, for any other suf-
ficient statistic T 2 , T (y) is a function of T 2 (y).
This book does not delve into methods for finding minimal sufficient statistics. In most
cases the user can recognize whether a statistic is minimal sufficient.
Does the theory of sufficiency imply that statisticians need look only at sufficient statis-
tics and not at other aspects of the data? Not quite. Let y1 , . . . , yn be binary random vari-
ables and suppose we adopt the model y1 , . . . , yn ∼ i.i.d. Bern(θ). Then for estimating θ
we need look only at Σ yi . But suppose (y1 , . . . , yn ) turn out to be
0 0 · · · 0 1 1 · · · 1,
i.e., many 0’s followed by many 1’s. Such a dataset would cast doubt on the assumption
that the yi ’s are independent. Judging from this dataset, it looks much more likely that the
yi ’s come in streaks or that θ is increasing over time. So statisticians should look at all
the data, not just sufficient statistics, because looking at all the data can help us create and
critique models. But once a model has been adopted, then inference should be based on
sufficient statistics.
Because consistency concerns the limit as the sample size grows, we have to define the estimator for every sample size. To that end, let Y1 , Y2 , · · · ∼ i.i.d. f for some
unknown density f having finite mean µ and SD σ. For each n ∈ N let T n : Rn → R. I.e.
T n is a real-valued function of (y1 , . . . , yn ). For example, if we’re trying to estimate µ we
might take T n = n−1 Σ yi .
Definition 8.3. The sequence of estimators T 1 , T 2 , . . . is said to be consistent for the parameter θ if, for every θ and for every ε > 0, limn→∞ P[ |T n − θ| > ε ] = 0.
For example, the Law of Large Numbers, Theorem 1.12, says the sequence of sample
means {T n = n−1 Σ yi } is consistent for µ. Similarly, let S n = n−1 Σi (yi − T n )2 be the
sample variance. Then {S n } is consistent for σ2 . More generally, m.l.e.’s are consistent.
Theorem 8.1. Let Y1 , Y2 , · · · ∼ i.i.d. pY (y | θ) and let θ̂n be the m.l.e. from the sample
(y1 , . . . , yn ). Further, let g be a continuous function of θ. Then, subject to regularity condi-
tions, {g(θ̂n )} is a consistent sequence of estimators for g(θ).
Proof. The proof requires regularity conditions relating to differentiability and the inter-
change of integral and derivative. It is beyond the scope of this book.
as an estimate of σ2 . Theorem 5.31 says that Σ(yi − ȳ)2 /σ2 ∼ χ2n−1 and therefore
E[σ̂2 ] = ((n − 1)/n) σ2 . I.e., σ̂2 is a biased estimator of σ2 . Its bias is −σ2 /n. Some
statisticians prefer to use the unbiased estimator σ̃2 = (n − 1)−1 Σ(yi − ȳ)2 .
Mean Squared Error If θ̂ is an estimate of θ, then the mean squared error (MSE) of θ̂
is E[(θ̂ − θ)2 ]. MSE is a combination of variance and bias:
MSE(θ̂) = E[(θ̂ − θ)2 ] = E[(θ̂ − Eθ̂ + Eθ̂ − θ)2 ]
= E[(θ̂ − Eθ̂)2 + 2(θ̂ − Eθ̂)(Eθ̂ − θ) + (Eθ̂ − θ)2 ]
= E[(θ̂ − Eθ̂)2 ] + (Eθ̂ − θ)2 = Var(θ̂) + [bias(θ̂)]2 .
Whenever possible, one would like to have an estimator with minimum bias and min-
imum variance; those two properties would also imply minimum MSE. But it is not al-
ways possible to achieve both desiderata simultaneously. For example, let Y1 , . . . , Yn ∼
i.i.d. N(µ, σ) and suppose we want an estimate of σ2 . We have looked at two estima-
tors so far: the mle σ̂2 = n−1 Σ(yi − ȳ)2 and the unbiased estimator σ̃2 = (n/(n − 1))σ̂2 .
We already know the bias of σ̃2 is 0 and the bias of σ̂2 is −σ2 /n. We also know that
Σ(yi − ȳ)2 /σ2 ∼ χ2n−1 . Therefore Var( Σ(yi − ȳ)2 /σ2 ) = 2(n − 1); Var(σ̂2 ) = 2(n − 1)σ4 /n2
and Var(σ̃2 ) = 2σ4 /(n − 1). Thus, MSE(σ̂2 ) = (2n − 1)σ4 /n2 while MSE(σ̃2 ) = 2σ4 /(n − 1): the biased estimator σ̂2 has the smaller variance and the smaller MSE, while σ̃2 has zero bias.
But we could take a Bayesian approach to the problem instead and use the posterior
mean, E[θ | X1 , . . . , Xn ], as an estimator of θ. For convenience, adopt the prior distribution
θ ∼ Be(α, β). Then the posterior is given by [θ | X1 , . . . , Xn ] ∼ Be(α + X, β + n − X) and
the posterior mean is θ̃ ≡ E[θ | X1 , . . . , Xn ] = (α + X)/(α + β + n). We want the MSE of θ̃,
which we shall calculate from its bias and variance.
E[θ̃] = E[ (X + α)/(α + β + n) ] = (nθ + α)/(α + β + n)
Var[θ̃] = Var[ (X + α)/(α + β + n) ] = nθ(1 − θ)/(α + β + n)2 .
Therefore
MSE(θ̃) = nθ(1 − θ)/(α + β + n)2 + ( (nθ + α)/(α + β + n) − θ )2 .
Figure 8.1: Mean Squared Error for estimating Binomial θ. Sample size = 5, 20, 100,
1000. α = β = 0: solid line. α = β = 0.5: dashed line. α = β = 1: dotted line. α = β = 4:
dash–dotted line.
8.2 Information
We have talked loosely about the amount of information in a sample and we now want to
make the notion more precise. This is an important idea in statistics that can be found in
many texts. One devoted wholly to information is Kullback [1968].
Suppose that Y1 , . . . , Yn is a random sample from an unknown density. And suppose
there are only two densities under consideration, p1 and p2 . We know that the Yi ’s are a
sample from one of them, but we don’t know which. Now consider the first observation,
Y1 . How can we be precise about the amount of information that Y1 gives us for distin-
guishing between p1 and p2 ? We have already seen that the strength of evidence provided
by Y1 = y1 is the likelihood ratio p1 (y1 )/p2 (y1 ). The strength of evidence provided by the
whole sample is the product ∏i [p1 (yi )/p2 (yi )] of the evidence provided by the individual
observations. If we transform to a log scale, then the evidence is additive:
log [ p1 (y1 , . . . , yn ) / p2 (y1 , . . . , yn ) ] = Σ log [ p1 (yi ) / p2 (yi ) ].
This seems a suitable quantity to call information. Therefore we make the following defi-
nition.
Definition 8.5. The information in a datum y for distinguishing between densities p1 and
p2 is log [ p1 (y) / p2 (y) ].
The information in a random sample is the sum of the information in the individual
elements. The i’th term in the sum is log [ p1 (yi ) / p2 (yi ) ]. That term is a function of yi and
is therefore a random variable whose distribution is determined by either p1 or p2 , whichever
is the true density generating the sample. If p1 is the true distribution, then the expected
value of that random variable is ∫ log [ p1 (y) / p2 (y) ] p1 (y) dy and is the expected amount of
information from a single observation. The Law of Large Numbers says that a large sample
will contain
approximately n times that amount of information. This view of information was origi-
nally discussed thoroughly in Kullback and Leibler [1951]. Accordingly, we make the
following definition.
Definition 8.6. I(p1 , p2 ) ≡ ∫ log [ p1 (y) / p2 (y) ] p1 (y) dy is called the Kullback-Leibler
divergence from p1 to p2 .
For example, suppose that we’re tossing a coin, and we know the probability of Heads
is either 0.4 or 0.8. Over a sequence of many tosses, how will information accumulate
to distinguish the two possibilities? First, if we’re tossing the 0.4 coin, then the likeli-
hood ratio on a single toss is either 0.4/0.8 (if the coin lands Heads) or 0.6/0.2 (if the
coin lands Tails). The two possibilities have probabilities 0.4 and 0.6, respectively. So
I (Bern(.4), Bern(.8)) = 0.4 log(.5) + 0.6 log(3) ≈ .382. But for tossing the 0.8 coin,
I (Bern(.8), Bern(.4)) = 0.8 log(2) + 0.2 log(1/3) ≈ .335. Notice that the divergence
is not symmetric. The calculation shows that information accumulates slightly faster if
we’re tossing the .4 coin than if we’re tossing the .8 coin. To carry the example to an
extreme, suppose one coin is fair but the other is two-headed. For tossing the fair coin,
I (Bern(.5), Bern(1)) = .5 log(.5) + .5 log(∞) = ∞; but I (Bern(1), Bern(.5)) = log(2) ≈
.693. The asymmetry and the infinity make sense: if we’re tossing the fair coin then even-
tually we will toss a Tail and we’ll know for certain which coin it is. But if we’re tossing
the two-headed coin, we’ll always toss Heads and never know for certain which coin it is.
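These numbers are easy to check in R; a small sketch (the function kl.bern is ours, not from the text):

kl.bern <- function ( p1, p2 ) {     # I(Bern(p1), Bern(p2))
p1 * log ( p1/p2 ) + (1-p1) * log ( (1-p1)/(1-p2) )
}
kl.bern ( .4, .8 )   # approximately 0.382
kl.bern ( .8, .4 )   # approximately 0.335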
Theorem 8.2. The Kullback-Leibler divergence between two distributions is non-negative.
Proof. We prove the theorem in the continuous case; the discrete case is similar. Our proof
follows Theorem 3.1 in Kullback [1968]. Let f1 and f2 be the two densities. We want to
show that
I( f1 , f2 ) = ∫ log ( f1 (x) / f2 (x) ) f1 (x) dx = ∫ g(x) log g(x) f2 (x) dx
is non-negative,
where g(x) = f1 (x)/ f2 (x). Let φ(t) = t log(t) and use Taylor’s Theorem to get
φ(g(x)) = φ(1) + [g(x) − 1] φ′(1) + (1/2) [g(x) − 1]2 φ′′(h(x))
where for every x, h(x) ∈ [1, g(x)]. Integrate to get
I( f1 , f2 ) = ∫ φ(g(x)) f2 (x) dx
= φ(1) ∫ f2 (x) dx + φ′(1) ∫ [g(x) − 1] f2 (x) dx + (1/2) ∫ [g(x) − 1]2 φ′′(h(x)) f2 (x) dx
= 0 + 0 + (1/2) ∫ [g(x) − 1]2 φ′′(h(x)) f2 (x) dx ≥ 0,
since φ(1) = 0, ∫ [g(x) − 1] f2 (x) dx = ∫ f1 (x) dx − ∫ f2 (x) dx = 0, and φ′′(t) = 1/t > 0.
For a parametric family f(x | θ), note that
d2/dθ2 log f(x | θ) = d/dθ [ (1/f(x | θ)) (d/dθ) f(x | θ) ]
= −(1/f(x | θ)2 ) ( (d/dθ) f(x | θ) )2 + (1/f(x | θ)) (d2/dθ2 ) f(x | θ).
Expanding log f(x | θ + δ) to second order in δ,
I( f(x | θ), f(x | θ + δ) ) ≈ ∫ f(x | θ) { log f(x | θ) − [ log f(x | θ) + (δ/f(x | θ)) (d/dθ) f(x | θ)
+ (δ2/2) (d2/dθ2 ) log f(x | θ) ] } dx
= −δ ∫ (d/dθ) f(x | θ) dx − (δ2/2) ∫ f(x | θ) (d2/dθ2 ) log f(x | θ) dx.
Assume we can differentiate under the integral so that
∫ (d/dθ) f(x | θ) dx = (d/dθ) ∫ f(x | θ) dx = (d/dθ) 1 = 0
and the first term in the previous expression vanishes to yield
I( f(x | θ), f(x | θ + δ) ) ≈ −(δ2/2) ∫ f(x | θ) (d2/dθ2 ) log f(x | θ) dx
= −(δ2/2) ∫ f(x | θ) [ −(1/f(x | θ)2 ) ( (d/dθ) f(x | θ) )2 + (1/f(x | θ)) (d2/dθ2 ) f(x | θ) ] dx
= (δ2/2) ∫ (1/f(x | θ)) ( (d/dθ) f(x | θ) )2 dx = (δ2/2) E[ ( (d/dθ) log f(x | θ) )2 ],
using again that ∫ (d2/dθ2 ) f(x | θ) dx = 0. The quantity E[ ( (d/dθ) log f(x | θ) )2 ]
is called the Fisher Information for sampling from the family f (x | θ) and is denoted I(θ).
Fisher Information is the most studied form of information in statistics and is relevant
for inference in parametric families when the sample size is large. The penultimate integral
in the previous derivation shows that I(θ) is also equal to −E[ (d2/dθ2 ) log f(x | θ) ]. The word
information is justified because the Fisher Information also tells us the maximum precision
(minimum variance) with which we can estimate parameters, in a limiting, asymptotic
sense to be made precise in Section 8.4.3. Next we calculate the information in some
common parametric families. Others are in the exercises.
Let X ∼ N(µ, σ) where σ is fixed and µ is the unknown parameter. Then
I(µ) = E[ ( (d/dµ) log f(x | µ) )2 ] = E[ ( (1/f(x | µ)) (d/dµ) f(x | µ) )2 ] = E[ ( (x − µ)/σ2 )2 ] = 1/σ2 .
Let X ∼ Poi(λ). Then
I(λ) = E[ ( (d/dλ) log f(x | λ) )2 ] = E[ ( (d/dλ)(−λ + x log λ − log x!) )2 ]
= E[ ( −1 + x/λ )2 ] = E[ ( (x − λ)/λ )2 ] = Var(x | λ)/λ2 = 1/λ.
which we can write as h(x) c(µ, σ) exp( Σ wi (µ, σ) ti (x) ) for some functions h, c, wi , and ti .
These two examples reveal the structure we’re looking for.
A parametric family of distributions p(x | θ) is called an exponential family if its pdf’s
can be written as
p(x | θ) = h(x) c(θ) exp( w1 (θ)t1 (x) + · · · + wd (θ)td (x) ).
We study exponential families because they have many features in common. A good
source for an advanced treatment of exponential families is Brown [1986].
The normalizing constant c(θ) can be absorbed into the exponent, so it is common to
see exponential families written as p(x | θ) = h(x) exp( Σ wi (θ)ti (x) − c∗ (θ) ). We have just seen that the
dividing by its integral, provided that its integral is finite. The set of η’s for which
∫ h(x) exp( Σ ηi ti (x) ) dx < ∞ is called the natural parameter space and denoted H. For every
η ∈ H, there is a number c(η) such that p(x | η) = h(x) c(η) exp( Σ ηi ti (x) ) is a probability
density.
For an i.i.d. sample x1 , . . . , xn ,
p(x1 , . . . , xn | θ) = ∏j p(xj | θ) = [ ∏j h(xj ) ] (c(θ))n exp( Σi wi (θ) Σj ti (xj ) ).
Theorem 8.3. For any integrable function g and any η in the interior of H, the integral
∫ g(x) h(x) c(η) exp( Σ ηi ti (x) ) dx
is continuous and has derivatives of all orders with respect to the η’s, and these can be
obtained by differentiating under the integral sign.
For example, take g(x) = 1. Write a one-parameter exponential family in the form
p(x | η) = h(x) exp( ηt(x) − c∗ (η) ). The integral of the density is, as always, 1, so taking
derivatives yields
0 = (d/dη) ∫ h(x) exp( ηt(x) − c∗ (η) ) dx
= ∫ t(x) h(x) exp( ηt(x) − c∗ (η) ) dx − c∗′ (η) ∫ h(x) exp( ηt(x) − c∗ (η) ) dx,     (8.1)
or E[t(x)] = c∗′ (η).
It is sometimes useful and natural to consider the random variable T = t(X). We have
just derived its expectation.
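For instance (a worked check of Equation 8.1 that is ours, not the book's), the Poi(λ) family can be written as p(x | λ) = (1/x!) exp( x log λ − λ ); so h(x) = 1/x!, t(x) = x, η = log λ, and c∗ (η) = eη . Then c∗′ (η) = eη = λ, which is indeed E[t(X)] = E[X] for the Poisson distribution.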
8.4 Asymptotics
In real life, data sets are finite: (y1 , . . . , yn ). Yet we often appeal to the Law of Large
Numbers or the Central Limit Theorem, Theorems 1.12, 1.13, and 1.14, which concern
the limit of a sequence of random variables as n → ∞. The hope is that when n is large
those theorems will tell us something, at least approximately, about the distribution of the
sample mean. But we’re faced with the questions “How large is large?” and “How close
is the approximation?”
To take an example, we might want to apply the Law of Large Numbers or the Central
Limit Theorem to a sequence Y1 , Y2 , . . . of random variables from a distribution with mean
µ and SD σ. Here are a few instances of the first several elements of such a sequence.
Each sequence occupies one row of the array. The “· · ·” indicates that the sequence continues
infinitely. The vertical dots indicate that there are infinitely many such sequences. The
numbers were generated by
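(the exact code is not reproduced in this excerpt; a minimal sketch consistent with the bullets below is)

y <- matrix ( rnorm ( 3*9, mean=0, sd=1 ), nrow=3 )   # three sequences, nine terms each
round ( y, 2 )                                        # print to two decimal places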
• I chose to generate Yi ’s from the N(0, 1) distribution so I used rnorm, and so, for
this example, µ = 0 and σ = 1. Those are arbitrary choices. I could have used any
values of µ and σ and any distribution for which I know how to generate random
variables on the computer.
• round does rounding. In this case we’re printing each number to two decimal places.
Because there are multiple sequences, each with multiple elements, we need two subscripts
to keep track of things properly. Let Yi j be the j’th element of the i’th sequence. A real
data set X1 , . . . , Xn is analogous to the first n observations, {Yi, j }nj=1 , along one row of the
array. For each sequence of random variables (each row of the array), we’re interested in
things like the behavior as n → ∞ of the sequence of sample means Ȳi1 , Ȳi2 , . . . where
Ȳin = (Yi1 + · · · + Yin )/n. And for the Central Limit Theorem, we’re also interested in the
sequence Zi1 , Zi2 , . . . where Zin = √n (Ȳin − µ). For the three instances above, the Ȳin ’s and
Zin ’s can be printed with
for ( i in 1:3 ) {
print ( round ( cumsum(y[i,]) / 1:9, 2) )
print ( round ( cumsum(y[i,]) / (sqrt(1:9)), 2) )
}
• cumsum computes a cumulative sum; so cumsum(y[1,]) yields the vector y[1,1],
y[1,1]+y[1,2], ..., y[1,1]+...+y[1,9]. (Print out
cumsum(y[1,]) if you’re not sure what it is.) Therefore,
cumsum(y[i,])/1:9 is the sequence of Ȳin ’s.
• sqrt computes the square root. So the second print statement prints the sequence
of Zin ’s.
The results for the Ȳin ’s are
0.70 0.49 0.36 0.21 0.11 -0.04 -0.14 -0.16 0.05 · · ·
-0.23 -0.23 -0.06 -0.08 0.01 0.00 -0.07 -0.13 -0.07 · · ·
-1.10 -1.01 -0.78 -0.53 -0.21 -0.43 -0.43 -0.45 -0.40 · · ·
⋮
and for the Zin ’s are
0.70 0.91 0.96 0.84 0.71 0.39 0.11 -0.01 0.59 · · ·
-0.23 -0.40 -0.23 -0.31 -0.15 -0.15 -0.33 -0.54 -0.41 · · ·
-1.10 -1.74 -1.94 -1.83 -1.35 -1.97 -2.12 -2.35 -2.33 · · ·
⋮
1. Will every sequence of Ȳi ’s or Zi ’s converge? This is a question about the limit along
each row of the array.
3. If not every sequence converges, what fraction of them converge; or what is the
probability that a randomly chosen sequence of Ȳi ’s or Zi ’s converges?
Some simple examples and the Strong Law of Large Numbers, Theorem 1.13, answer
questions 1, 2, and 3 for the sequences of Ȳi ’s. The Central Limit Theorem, Theorem 1.14,
answers question 6 for the sequences of Zi ’s.
2. If they converge, do they have the same limit? No. Here are two sequences of Yi ’s.
1 1 1 ···
-1 -1 -1 · · ·
P[ limn→∞ Ȳn = µ ] = 1.
So the probability of getting sequences like 1, 1, 1, . . . or −1, −1, −1, . . . that converge to
something other than µ is 0.
5. Does the distribution of Zn depend on n? Yes, except in the special case where
Yi ∼ N(0, 1) for all i.
6. Is there a limiting distribution? Yes. That’s the Central Limit Theorem. Regardless
of the distribution of the Yi j ’s, as long as Var(Yi j ) < ∞, the limit, as n → ∞, of the
distribution of Zn is N(0, 1).
The Law of Large Numbers and the Central Limit Theorem are theorems about the
limit as n → ∞. When we use those theorems in practice we hope that our sample size n
is large enough that Ȳin ≈ µ and Zin ∼ N(0, 1), approximately. But how large should n be
before relying on these theorems, and how good is the approximation? The answer is, “It
depends on the distribution of the Yi j ’s”. That’s what we look at next.
To illustrate, we generate sequences of Yi j ’s from two distributions, compute Ȳin ’s and
Zin ’s for several values of n, and compare. One distribution is U(0, 1); the other is a
recentered and rescaled version of Be(.39, .01).
The Be(.39, .01) density, shown in Figure 8.2, was chosen for its asymmetry. It has
a mean of .39/.40 = .975 and a variance of (.39)(.01)/((.40)2 (1.40)) ≈ .017. It was
recentered and rescaled to have a mean of .5 and variance of 1/12, the same as the U(0, 1)
distribution.
Densities of the Ȳin ’s are in Figure 8.3. As the sample size increases from n = 10 to
n = 270, the Ȳin ’s from both distributions get closer to their expected value of 0.5. That’s
the Law of Large Numbers at work. The amount by which they’re off their mean goes
from about ±.2 to about ±.04. That’s Corollary 1.10 at work. And finally, as n → ∞, the
densities get more Normal. That’s the Central Limit Theorem at work.
Note that the density of the Ȳin ’s derived from the U(0, 1) distribution is close to Nor-
mal even for the smallest sample size, while the density of the Ȳin ’s derived from the
Be(.39, .01) distribution is way off. That’s because U(0, 1) is symmetric and unimodal,
and therefore close to Normal to begin with, while Be(.39, .01) is far from symmetric and
unimodal, and therefore far from Normal, to begin with. So Be(.39, .01) needs a larger n
to make the Central Limit Theorem work; i.e., to be a good approximation.
Figure 8.4 is for the Zin ’s. It’s the same as Figure 8.3 except that each density has been
recentered and rescaled to have mean 0 and variance 1. When put on the same scale we
can see that all densities are converging to N(0, 1).
[Figure 8.3 appears here: four panels of density estimates, for n = 10, 30, 90, and 270, each plotted over the horizontal range 0.0 to 0.8.]
Figure 8.3: Densities of Ȳin for the U(0, 1) (dashed), modified Be(.39, .01) (dash and dot),
and Normal (dotted) distributions.
for ( n in 1:length(samp.size) ) {
  # Ybar.1, Ybar.2: sample means of the first samp.size[n] observations in each
  # row; Y.1 holds the U(0,1) samples, Y.2 the rescaled Be(.39,.01) samples
  Ybar.1 <- apply ( Y.1[,1:samp.size[n]], 1, mean )
  Ybar.2 <- apply ( Y.2[,1:samp.size[n]], 1, mean )
  # the Normal density with the same mean (.5) and variance (1/(12n)) as Ybar
  sd <- sqrt ( 1 / ( 12 * samp.size[n] ) )
  x <- seq ( .5-3*sd, .5+3*sd, length=60 )
  y <- dnorm ( x, .5, sd )
  den1 <- density ( Ybar.1 )   # density estimate from the U(0,1) rows
  den2 <- density ( Ybar.2 )   # density estimate from the Be(.39,.01) rows
  ymax <- max ( y, den1$y, den2$y )
  plot ( x, y, ylim=c(0,ymax), type="l", lty=3, ylab="",
         xlab="", main=paste("n =", samp.size[n]) )   # dotted: Normal
  lines ( den1, lty=2 )   # dashed: U(0,1)
  lines ( den2, lty=4 )   # dash and dot: modified Be(.39,.01)
}
• The manipulations in the line Y.2[i,] <- ... are so Y.2 will have mean 1/2 and
variance 1/12.
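For readers who want to reproduce the figures, here is a minimal sketch of how Y.2 could be generated. It is not the text's code; the number of sequences n.seq and the seed are our own choices. It carries out the recentering and rescaling described in the bullet above so that Y.2 has mean 1/2 and variance 1/12.

set.seed(1)
samp.size <- c ( 10, 30, 90, 270 )   # the n's shown in the figures
n.seq <- 1000                        # number of sequences (rows); arbitrary
n.max <- max ( samp.size )
m <- .39 / .40                              # mean of Be(.39, .01)
v <- (.39 * .01) / ( .40^2 * 1.40 )         # variance of Be(.39, .01)
Y.1 <- matrix ( runif ( n.seq * n.max ), n.seq, n.max )   # the U(0,1) samples
Y.2 <- matrix ( NA, n.seq, n.max )
for ( i in 1:n.seq )                        # shift and scale the Beta draws
  Y.2[i,] <- ( rbeta ( n.max, .39, .01 ) - m ) / sqrt(v) * sqrt(1/12) + .5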
[Figure 8.4 appears here: four panels of density estimates, for n = 10, 30, 90, and 270, each plotted over the horizontal range −3 to 3.]
Figure 8.4: Densities of Zin for the U(0, 1) (dashed), modified Be(.39, .01) (dash and dot),
and Normal (dotted) distributions.
Almost sure (a.s.) convergence is also called convergence almost everywhere (a.e.) and
convergence with probability 1 (w.p.1).
The three modes of convergence are related as follows, stated here without proof: almost sure convergence implies convergence in probability, and convergence in probability implies convergence in distribution; the converse implications do not hold in general. The examples below illustrate the differences.
Convergence in Distribution
1. Toss a fair penny. For all i = 0, 1, . . . , let Xi = 1 if the penny lands heads and Xi = 0
if the penny lands tails. (All Xi ’s are equal to each other.) Toss a fair dime. Let
Y = 1 if the dime lands heads and Y = 0 if the dime lands tails. Then the sequence
X1 , X2 , . . . converges to X0 in distribution, in probability and almost surely. The
cdf’s of X1 , X2 , . . . and Y are all the same, so the Xi ’s converge to Y in distribution.
But the Xi ’s do not converge to Y in probability or almost surely.
Theorem 8.6. The sequence Xn → X in distribution if and only if E[g(Xn )] → E[g(X)] for
every bounded, continuous, real-valued function g.
The proof is beyond the scope of this book, but may be found in advanced texts on
probability.
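Theorem 8.6 is easy to check numerically. In the following sketch the distribution, the function g, and the sample sizes are our own choices, not from the text: Xn is a standardized Bin(n, 1/2), which converges in distribution to X ∼ N(0, 1), and g(x) = 1/(1 + x²) is bounded and continuous, so E[g(Xn)] should approach E[g(X)] as n grows.

g <- function ( x ) 1 / ( 1 + x^2 )          # a bounded, continuous function
for ( n in c(5, 50, 500) ) {
  x <- ( 0:n - n/2 ) / sqrt ( n/4 )          # possible values of the standardized Bin(n, .5)
  print ( sum ( g(x) * dbinom ( 0:n, n, .5 ) ) )   # E[ g(X_n) ], computed exactly
}
integrate ( function(x) g(x) * dnorm(x), -Inf, Inf )$value   # E[ g(X) ] for X ~ N(0,1)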
Sometimes we know that a sequence Xn → X in distribution. But we may be interested
in Y = g(X) for some known function g. Does g(Xn) → g(X)? Theorems 8.7 and 8.8
provide answers in some important special cases.
Theorem 8.7 (Slutsky). Let Xn → X in distribution and Yn → c in probability, where
|c| < ∞. Then,
1. Xn + Yn → X + c in distribution;
2. Xn Yn → cX in distribution; and
3. Xn/Yn → X/c in distribution, provided c ≠ 0.
Proof. This proof follows Serfling [1980]. Choose t such that F_X is continuous at t − c and let ε > 0 be such that F_X is continuous at t − c − ε and t − c + ε. (Such a t and ε can always be found because F_X has at most countably many discontinuities.) Then

Pr[Xn + Yn ≤ t] ≤ Pr[Xn ≤ t − c + ε] + Pr[|Yn − c| ≥ ε],

and, because Yn → c in probability, Pr[|Yn − c| ≥ ε] → 0, so

lim sup F_{Xn+Yn}(t) ≤ F_X(t − c + ε).

Letting ε ↓ 0 gives

lim sup F_{Xn+Yn}(t) ≤ F_X(t − c) = F_{X+c}(t). (8.2)

Similarly,

Pr[Xn ≤ t − c − ε] ≤ Pr[Xn + Yn ≤ t] + Pr[|Yn − c| ≥ ε].

So,

lim inf ( Pr[Xn + Yn ≤ t] + Pr[|Yn − c| ≥ ε] ) ≥ lim inf Pr[Xn ≤ t − c − ε] = F_X(t − c − ε),

and, again because Pr[|Yn − c| ≥ ε] → 0,

lim inf F_{Xn+Yn}(t) ≥ F_X(t − c − ε).

Letting ε ↓ 0 gives

lim inf F_{Xn+Yn}(t) ≥ F_X(t − c) = F_{X+c}(t). (8.3)

Combining 8.2 and 8.3 yields

lim F_{Xn+Yn}(t) = F_{X+c}(t),

showing that Xn + Yn → X + c in distribution.
We leave the proofs of parts 2 and 3 as exercises.
One application of Slutsky’s Theorem is the following, which comes from Casella
and Berger [2002]. Let X1 , X2 , . . . be a sample from a distribution whose mean µ and SD
σ are finite. The Central Limit Theorem says

√n (X̄n − µ) / σ → N(0, 1)

in distribution. But in almost all practical problems we don't know σ. However, we can estimate σ by either the m.l.e. σ̂ or the unbiased estimator σ̃. Both of them are consistent, meaning they converge to σ in probability; hence σ/σ̂ → 1 in probability. Thus,

√n (X̄n − µ) / σ̂ = (σ/σ̂) · √n (X̄n − µ) / σ → N(0, 1) in distribution

by Slutsky's Theorem.
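A quick simulation illustrates the point; the choices of µ, σ, n, and the seed below are our own, not from the text. Replacing σ by the sample SD barely changes the standardized mean, which is approximately N(0, 1).

set.seed(1)
n.sim <- 10000; n <- 50; mu <- 3; sigma <- 2
z <- replicate ( n.sim, {
  x <- rnorm ( n, mu, sigma )
  sqrt(n) * ( mean(x) - mu ) / sd(x)   # sd(x), the usual estimate, is consistent for sigma
} )
mean ( z ); sd ( z )                   # both close to their N(0,1) values, 0 and 1
hist ( z, prob=TRUE ); curve ( dnorm(x), add=TRUE )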
[A figure appears here illustrating the δ-method: the curve g(x) approximated by its tangent line, with a density p(x).]
To illustrate the δ-method, suppose we’re testing a new medical treatment to learn its
probability of success, θ. Subjects, n of them, are recruited into a trial and we observe
Xn, the number of successes. We could use the m.l.e. θ̂n = Xn/n as an estimator of θ. And we already know that √n (θ̂n − θ) / √(θ(1 − θ)) → N(0, 1). But researchers sometimes
want to phrase their results in terms of the odds of success, θ/(1 − θ), or the log odds,
log(θ/(1 − θ)). We could use θ̂n /(1 − θ̂n ) to estimate the odds of success, but how would
we know its accuracy; what do we know about its distribution? The δ-method gives the
answer, at least approximately for large n.
Let g(x) = x/(1 − x). Then g′(θ) = (1 − θ)^(−2). Therefore,

√n [ g(θ̂n) − g(θ) ] / [ √(θ(1 − θ)) g′(θ) ] = √n [ θ̂n/(1 − θ̂n) − θ/(1 − θ) ] / [ √(θ(1 − θ)) (1 − θ)^(−2) ] → N(0, 1),

or equivalently,

√n [ θ̂n/(1 − θ̂n) − θ/(1 − θ) ] → N( 0, θ^(1/2)/(1 − θ)^(3/2) ).
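The last display is easy to check by simulation; θ, n, and the seed below are arbitrary choices of ours. The SD of the estimated odds should be roughly √θ / ( (1 − θ)^(3/2) √n ).

set.seed(2)
theta <- .3; n <- 400; n.sim <- 10000
theta.hat <- rbinom ( n.sim, n, theta ) / n
odds.hat <- theta.hat / ( 1 - theta.hat )
sd ( odds.hat )                                # simulated SD of the estimated odds
sqrt(theta) / ( (1-theta)^1.5 * sqrt(n) )      # delta-method approximation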
The sequence of constants tells us the rate of convergence. We say the errors (δn − θ) converge to zero at the rate 1/cn. For example, in cases where the Central Limit Theorem applies, (δn − θ) → 0 in distribution at the rate of 1/√n. The δ-method is another example. When it applies, √n (g(δn) − g(θ)) → N(0, σ g′(θ)), so the errors (g(δn) − g(θ)) → 0 at the rate of 1/√n.
When there are competing estimators, say δn and δ′n, we might want to choose the one whose errors converge to zero at a faster rate. But the situation can be more subtle than that. It might be that √n (δn − θ) → N(0, σ) while √n (δ′n − θ) → N(0, σ′). In that case, the two estimators converge at the same rate and we might want to know which has the smaller asymptotic SD.
This section of the book is about rates of convergence and comparison of asymptotic
SD’s. It will guide us in choosing estimators. Of course, with any finite data set, rates
do not tell the whole story and can be only a guide. A more advanced treatment of these
topics can be found in many texts on mathematical statistics. Lehmann [1983] is a good
example.
First, we have to see whether the concept of rate is well-defined. The following theo-
rem gives that assurance under some mild conditions.
Theorem 8.9. Suppose that for two sequences of constants, cn and c′n, cn(δn − θ) → G and c′n(δn − θ) → G′ in distribution. Then, under conditions given in Lehmann [1983] (pg. 347), there exists a constant k such that (a) c′n/cn → k and (b) G′(x) = G(x/k) for all x.
The proof is beyond our scope but can be found in Lehmann. Theorem 8.9 says that if two sequences of constants lead to convergence then (a) the sequences themselves differ asymptotically by only a scale factor and (b) the corresponding limits differ by only that same scale factor.
Now let's look at an illustration of two estimators δn and δ′n that both converge at the rate of 1/√n. The illustration comes from Example 2.1 in Chapter 5 of Lehmann. Suppose X1, X2, . . . are a sample from an unknown distribution F. We want to estimate θ = F(a) = Pr[Xi ≤ a]. If we think that F is approximately N(µ, σ), and if we know or can estimate σ, then we might take X̄n as an estimator of µ and δn = Φ((a − X̄n)/σ), where Φ is the N(0, 1) cdf: Φ(x) = ∫_{−∞}^{x} (2π)^(−1/2) e^(−u²/2) du. But if we don't know that F is approximately Normal, then we might instead use the estimator δ′n = n^(−1) (# of Xi ≤ a).
If F is not Normal, then δn is likely to be a bad estimator; it won’t even be consistent.
But what if F is Normal; how can we compare δn to δ′n? Since both are consistent, we
compare their asymptotic standard deviations.
First, the Central Limit Theorem says √n (X̄n − µ)/σ → N(0, 1). Then, since δn is a known transformation of X̄n, we can get its asymptotic distribution by the δ-method (Theorem 8.8):

√n [ Φ((a − X̄n)/σ) − Φ((a − µ)/σ) ] / [ σ φ((a − µ)/σ) (1/σ) ] → N(0, 1),

so the asymptotic SD of δn is φ((a − µ)/σ). Second, δ′n is the fraction of the Xi's that are at most a; writing Yn for the number of such Xi's, Yn ∼ Bin(n, θ) and the Central Limit Theorem says

√n ( Yn/n − θ ) / √(θ(1 − θ)) → N(0, 1)

or, equivalently,

√n (δ′n − θ) → N( 0, √(θ(1 − θ)) ). (8.5)
Since there is a 1-to-1 correspondence between values of θ and values of (a − µ)/σ, we can plot φ((a − µ)/σ) against √(θ(1 − θ)). Figure 8.6 shows the plot. The top panel, a plot of the asymptotic SD of δn as a function of the asymptotic SD of δ′n, shows that δn has the smaller SD. The bottom panel, a plot of the ratio of the SD's, shows that the advantage of δn over δ′n grows as θ goes to either 0 or 1. Therefore, we would normally prefer δn, especially for extreme values of a, at least when F is Normal and the sample size is large enough that the asymptotics are reliable.
[Figure 8.6 appears here. Top panel: φ((a − µ)/σ) plotted against √(θ(1 − θ)), with the line of equal SD's for reference. Bottom panel: the ratio of SD's plotted against √(θ(1 − θ)).]
Figure 8.6: Top panel: asymptotic standard deviations of δn and δ′n for Pr[X ≤ a]. The solid line shows the actual relationship. The dotted line is the line of equality. Bottom panel: the ratio of asymptotic standard deviations.
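The computation behind Figure 8.6 is easy to reproduce. Here is a sketch, not the author's code; the grid of z values is arbitrary. It evaluates the two asymptotic SD's over a range of θ and plots them against each other, and then plots their ratio.

z <- seq ( -3, 3, length=200 )            # z = (a - mu) / sigma
theta <- pnorm ( z )                      # theta = F(a) when F is Normal
sd.delta <- dnorm ( z )                   # asymptotic SD of delta_n
sd.prime <- sqrt ( theta * (1 - theta) )  # asymptotic SD of delta_n', Equation 8.5
par ( mfrow=c(2,1) )
plot ( sd.prime, sd.delta, type="l", xlab="sqrt(theta(1-theta))",
       ylab="phi((a-mu)/sigma)" )
abline ( 0, 1, lty=3 )                    # line of equal SD's
plot ( sd.prime, sd.delta/sd.prime, type="l", xlab="sqrt(theta(1-theta))",
       ylab="ratio of SD's" )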
When there are competing estimators, there may be one or more that have a faster rate, or a smaller asymptotic SD, than the others. If so, those estimators would be the best. Our
next task is to show that under some mild regularity conditions there is, in fact, a lower
bound on the asymptotic SD, find what that bound is, and show that the sequence θ̂n of
m.l.e.’s achieves that lower bound. Thus the sequence of m.l.e.’s is, in this sense, a good
sequence of estimators. There may be other estimators that achieve the same bound, but
no sequence can do better. The main theorem is the following, which we state without
proof.
Theorem 8.10. Under suitable regularity conditions concerning continuity, differentiability, support of the parametric family, and interchanging the order of differentiation and integration (see Lehmann, pg. 406), if δn is a sequence of estimators satisfying √n (δn − θ) → N(0, SD(θ)) in distribution, then SD(θ) ≥ I(θ)^(−1/2), except possibly for a set of θ's of measure 0.
Theorem 8.11. Let X1, X2, . . . be i.i.d. with density f(x | θ0) and let θ be any fixed parameter value with θ ≠ θ0. Then

Pr[ ∏_{i=1}^n f(Xi | θ0) > ∏_{i=1}^n f(Xi | θ) ] → 1 as n → ∞.

Proof.
∏_{i=1}^n f(xi | θ0) > ∏_{i=1}^n f(xi | θ)  ⟺  n^(−1) ∑_{i=1}^n log [ f(xi | θ0) / f(xi | θ) ] > 0.

The Law of Large Numbers says that the average on the right-hand side goes to E[ log ( f(X | θ0) / f(X | θ) ) ], which Theorem 8.2 and the exercises say is positive. Hence the right-hand inequality, and therefore the left-hand one, holds with probability tending to 1.
Theorem 8.11 says that for any fixed θ ≠ θ0, ℓ(θ0) > ℓ(θ) with high probability when n is large. It does not say that argmax ℓ(θ) = θ0 with high probability. In fact, argmax ℓ(θ) ≡ θ̂ is a random variable with a continuous distribution, so Pr[argmax ℓ(θ) = θ0] = 0.
Theorem 8.12. The sequence of maximum likelihood estimators, θ̂n , is consistent; i.e.
θ̂n → θ0 in probability.
Proof. Choose ε > 0, let x⃗n = (x1, . . . , xn), and let

Sn = { x⃗n : p(x⃗n | θ0) > p(x⃗n | θ0 − ε) and p(x⃗n | θ0) > p(x⃗n | θ0 + ε) }.

For any x⃗n ∈ Sn, the likelihood function p(x⃗n | θ) has a local maximum in [θ0 − ε, θ0 + ε]. And by Theorem 8.11, Pr[Sn] → 1. Therefore Pr[|θ̂n − θ0| < ε] → 1.
Theorem 8.13. If there exists a number c and a function M(x) such that

| d³/dθ³ log p(x | θ) | ≤ M(x)

for all x and for all θ ∈ (θ0 − c, θ0 + c), then the sequence of maximum likelihood estimators, θ̂n, is asymptotically efficient; i.e.,

√n (θ̂n − θ0) → N( 0, 1/√I(θ0) ) in distribution.
Proof. Write g(θ) = (d/dθ) log p(x⃗n | θ) = ∑_{i=1}^n (d/dθ) log p(xi | θ) for the score function, so that g(θ̂n) = 0. We want to examine g(θ̂n); we know θ̂n is close to θ0, so expand g(θ̂n) in a Taylor series around θ0:

g(θ̂n) = g(θ0) + (θ̂n − θ0) g′(θ0) + ½ (θ̂n − θ0)² g″(θ*)

for some θ* between θ̂n and θ0. The left-hand side is zero because θ̂n maximizes the likelihood. Rearrange terms to get

(θ̂n − θ0) = − g(θ0) / [ g′(θ0) + ½ (θ̂n − θ0) g″(θ*) ]

or

√n (θ̂n − θ0) = [ (1/√n) g(θ0) ] / [ −(1/n) g′(θ0) − (1/(2n)) (θ̂n − θ0) g″(θ*) ].

Now we're going to analyze the numerator and the two terms in the denominator separately, then use Slutsky's Theorem (Thm 8.7).
1. g(θ0) is the sum of terms like p′(xi | θ0)/p(xi | θ0). Because xi is a random variable, each term is a random variable. Because the xi's are i.i.d., the terms are i.i.d. The expected value of each term is

E[ p′(xi | θ)/p(xi | θ) ] = ∫ (d/dθ) p(xi | θ) dxi = (d/dθ) ∫ p(xi | θ) dxi = 0.

The Central Limit Theorem applies, so, by the definition of I(θ), (1/√n) g(θ0) → N(0, √I(θ0)) in distribution.
2. g′(θ0) is the sum of terms like (d/dθ)[ p′(xi | θ)/p(xi | θ) ] = p″(xi | θ)/p(xi | θ) − [ p′(xi | θ)/p(xi | θ) ]². The expectation of the first term is 0; the expectation of the second is I(θ). Therefore (1/n) g′(θ0) → −I(θ0) by the Law of Large Numbers.
3. By the assumption of the theorem, |g″(θ*)| ≤ ∑_i M(xi) for all θ* in an interval around θ0. And θ̂n − θ0 → 0 in probability, so (1/(2n)) (θ̂n − θ0) g″(θ*) → 0 in probability. Combining the three pieces with Slutsky's Theorem gives √n (θ̂n − θ0) → N( 0, √I(θ0)/I(θ0) ) = N( 0, 1/√I(θ0) ).
The condition of Theorem 8.13 may seem awkward, but it is often easy to check. The
following corollary is an example.
Corollary 8.14. If {p(x | θ)} is a one-dimensional exponential family, then θ̂n is asymptoti-
cally efficient.
In that case p(x | θ) = h(x) c(θ) e^{w(θ) t(x)}, so

d³/dθ³ log p(x | θ) = d³/dθ³ [ log h(x) + log c(θ) + w(θ) t(x) ] = (log c)‴(θ) + w‴(θ) t(x),

which satisfies the condition of the theorem, at least when log c and w have continuous third derivatives: on an interval around θ0 the right-hand side is bounded in absolute value by a constant plus a constant multiple of |t(x)|.
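A numerical check of asymptotic efficiency is straightforward; the distribution, λ, n, and the seed below are our own choices, not from the text. For the Poi(λ) family, which is a one-parameter exponential family, I(λ) = 1/λ, so √n (λ̂n − λ) should be approximately N(0, √λ).

set.seed(3)
lambda <- 4; n <- 200; n.sim <- 10000
lambda.hat <- replicate ( n.sim, mean ( rpois ( n, lambda ) ) )   # m.l.e. of lambda
sd ( sqrt(n) * ( lambda.hat - lambda ) )   # should be close to
sqrt ( lambda )                            # the efficient SD, 1/sqrt(I(lambda)) = sqrt(lambda)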
8.5 Exercises
1. Let Y1 , . . . , Yn be a sample from N(µ, σ).
2. Let Y1 , . . . , Yn be a sample from Be(α, β). Find a two dimensional sufficient statistic
for (α, β).
3. Let Y1 , . . . , Yn ∼ i.i.d. U(−θ, θ). Find a low dimensional sufficient statistic for θ.
5. Refer to Exercise 26 in Chapter 5. When ecologists sample the nearest pitcher plants
to, say, M points, the total number of plants found is some number M ∗ , which is less
than or equal to M. The total region they search, say A, is the union of the circles of
radii Di , centered at the si .
(a) Show that (M ∗ , |A|) is a sufficient statistic for λ. (|A| is the area of A.)
(b) If any of the circles intersect, then |A| is difficult to find analytically. But |A|
can be estimated numerically by the following method.
i. Construct a rectangle B that contains A.
ii. Generate N (a large integer) points at random, uniformly, in B.
iii. Let X be the number of generated points that lie within A.
iv. Use X/N as an estimate of |A|/|B| and |B|X/N as an estimate of |A|. (A sketch of this method in R appears after this exercise.)
The user can choose B, as long as it’s big enough to contain A. If the goal is to
estimate |A| as accurately as possible, what advice would you give the user for
choosing the size of B? Should |B| be large, small, or somewhere in between?
Justify your answer. Hint: think about Binomial distributions.
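Here is a sketch in R of the Monte Carlo method in steps i–iv. The circle centers s and radii D below are made up solely so that the sketch runs on its own; in the exercise they come from the field data, and B is the user's choice.

set.seed(4)
M <- 5
s <- cbind ( runif(M), runif(M) )     # hypothetical circle centers
D <- runif ( M, .05, .2 )             # hypothetical radii
B <- c ( -0.2, 1.2 )                  # B is the square [-0.2, 1.2] x [-0.2, 1.2]
N <- 100000
pts <- cbind ( runif(N, B[1], B[2]), runif(N, B[1], B[2]) )
in.A <- rep ( FALSE, N )
for ( i in 1:M )                      # a point is in A if it falls inside any circle
  in.A <- in.A | ( (pts[,1]-s[i,1])^2 + (pts[,2]-s[i,2])^2 <= D[i]^2 )
X <- sum ( in.A )
( diff(B)^2 ) * X / N                 # |B| * X/N, the estimate of |A|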
6. Theorem 8.2 shows that Kullback-Leibler divergences are non-negative. This exer-
cise investigates conditions under which they are zero.
7. (a) Find the Kullback-Leibler divergence from Bern(p1 ) to Bern(p2 ) and from
Bern(p2 ) to Bern(p1 ).
(b) Find the Kullback-Leibler divergence from Bin(n, p1 ) to Bin(n, p2 ) and from
Bin(n, p2 ) to Bin(n, p1 ).
9. (a) Let X ∼ Poi(λ). We know I(λ) = λ^(−1). But we may be interested in λ* ≡ log λ. Find I(λ*).
(b) Let X ∼ f(x | θ). Let φ = h(θ). Show I(φ) = (dφ/dθ)^(−2) I(θ).
10. Show that the following are exponential families of distributions. In each case,
identify the functions h, c, wi , and ti and find the natural parameters.
11. Verify that Equation 8.1 gives the correct value for the means of the following dis-
tributions.
(a) Poi(λ).
(b) Exp(θ).
(c) Bin(n, θ).
(a) Use the method of transformations to find p(t | η). Show that it is an exponential
family.
(b) Find the moment generating function MT (s) of T .
16. Prove the claims in item 1 on page 441 that Xn → X0 in distribution, in probability,
and almost surely, but Xn → Y in distribution only.
17. Let Xn ∼ N(0, 1/√n). Does the sequence {Xn}_{n=1}^∞ converge? Explain why or why not. If yes, also explain in what sense it converges (distribution, probability, or almost sure) and find its limit.
18. Let X1, X2, · · · ∼ i.i.d. N(µ, σ) and let X̄n = n^(−1) ∑_{i=1}^n Xi. Does the sequence {X̄n}_{n=1}^∞ converge? In what sense? To what limit? Justify your answer.
19. Let X1, X2, . . . be an i.i.d. random sample from a distribution F with mean µ and SD σ and let Zn = √n (X̄n − µ)/σ. A well-known theorem says that {Zn}_{n=1}^∞ converges in distribution to a well-known distribution. What is the theorem and what is the distribution?
20. Let U ∼ U(0, 1). Now define the sequence of random variables X1, X2, . . . in terms of U by

Xn = 1 if U ≤ 1/n, and Xn = 0 otherwise.

Does the sequence {Xn} converge? If so, in what sense, and to what limit?
21. This exercise is similar to Exercise 20 but with a subtle difference. Let U ∼ U(0, 1). Now define the sequence of constants c0 = 0, c1 = 1 and, in general, cn = cn−1 + 1/n. In defining the ci's, addition is carried out modulo 1; so c2 = (1 + 1/2) mod 1 = 1/2, etc. Now define the sequence of random variables X1, X2, . . . in terms of U by

Xn = 1 if U ∈ [cn−1, cn], and Xn = 0 otherwise,

where intervals are understood to wrap around the unit interval. For example, [c3, c4] = [5/6, 13/12] = [5/6, 1/12] is understood to be the union [5/6, 1] ∪ [0, 1/12]. (It may help to draw a picture.) Does the sequence {Xn} converge? If so, in what sense, and to what limit?
23. Let Xn ∼ Bin(n, θ) and let θ̂n = Xn/n. Use the δ-method to find the asymptotic distribution of the estimated log-odds, log( θ̂n/(1 − θ̂n) ).
24. In Figure 8.6, the ratio of asymptotic SD's is φ((a − µ)/σ) / √(θ(1 − θ)). Show that this ratio goes to zero (equivalently, that its reciprocal goes to infinity) as θ goes to 0 and also as θ goes to 1.
26. Page 447 compares the asymptotic variances of two estimators, δn and δ′n, when the underlying distribution F is Normal. Why is Normality needed?
Bibliography
Richard J. Bolton and David J. Hand. Statistical fraud detection: A review. Statistical Science, 17:235–255, 2002.
Paul Brodeur. Annals of radiation, the cancer at Slater school. The New Yorker, Dec. 7,
1992.
Jason C. Buchan, Susan C. Alberts, Joan B. Silk, and Jeanne Altmann. True paternal care
in a multi-male primate society. Nature, 425:179–181, 2003.
George Casella and Roger L. Berger. Statistical Inference. Duxbury, Pacific Grove, second
edition, 2002.
Lorraine Denby and Daryl Pregibon. An example of the use of graphics in regression. The
American Statistician, 41:33–38, 1987.
D. Freedman, R. Pisani, and R. Purves. Statistics. W. W. Norton and Company, New York,
4th edition, 1998.
Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data
Analysis. Chapman and Hall, Boca Raton, 2nd edition, 2004.
S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian
restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence,
6:721–741, 1984.
John Hassall. The Old Nursery Stories and Rhymes. Blackie and Son Limited, London,
1909.
W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applica-
tions. Biometrika, 57:97–109, 1970.
Solomon Kullback. Information Theory and Statistics. Dover Publications, Inc., 1968.
Shannon LaDeau and James Clark. Rising co2 levels and the fecundity of forest trees.
Science, 292(5514):95–98, 2001.
Michael Lavine. What is Bayesian statistics and why everything else is wrong. The Journal
of Undergraduate Mathematics and Its Applications, 20:165–174, 1999.
Michael Lavine, Brian Beckage, and James S. Clark. Statistical modelling of seedling
mortality. Journal of Agricultural, Biological and Environmental Statistics, 7:21–41,
2002.
E. L. Lehmann. Theory of Point Estimation. John Wiley & Sons, New York, 1983.
Jun S. Liu. Monte Carlo Strategies in Scientific Computing. Springer-Verlag, New York, 2004.
Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer-Verlag,
New York, 1997.
Robert J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, New York, 1980.
T. S. Tsou and R. M. Royall. Robust likelihoods. Journal of the American Statistical Association, 90:316–320, 1995.
W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer, New York,
fourth edition, 2002.
Sanford Weisberg. Applied Linear Regression. John Wiley & Sons, New York, second
edition, 1985.
A. S. Yang. Seasonality, division of labor, and dynamics of colony-level nutrient storage in the ant Pheidole morrisi. Insectes Sociaux, 53:456–462, 2006.
Index
baboons, 187
bladder cancer, 359
O-rings, 236
ocean temperatures, 29, 316