lme4: Mixed-Effects Modeling With R
Bates
Springer
Chapter 1
A Simple, Linear, Mixed-effects Model
In this book we describe the theory behind a type of statistical model called
mixed-effects models and the practice of fitting and analyzing such models
using the lme4 package for R. These models are used in many different dis-
ciplines. Because the descriptions of the models can vary markedly between
disciplines, we begin by describing what mixed-effects models are and by ex-
ploring a very simple example of one type of mixed model, the linear mixed
model.
This simple example allows us to illustrate the use of the lmer function in
the lme4 package for fitting such models and for analyzing the fitted model.
We describe methods of assessing the precision of the parameter estimates
and of visualizing the conditional distribution of the random effects, given
the observed data.
The important characteristic of a categorical covariate is that, at each observed value of the response, the covariate takes on the value of one of a set of distinct levels.
Parameters associated with the particular levels of a covariate are some-
times called the “effects” of the levels. If the set of possible levels of the
covariate is fixed and reproducible we model the covariate using fixed-effects
parameters. If the levels that we observed represent a random sample from
the set of all possible levels we incorporate random effects in the model.
There are two things to notice about this distinction between fixed-effects
parameters and random effects. First, the names are misleading because the
distinction between fixed and random is more a property of the levels of the
categorical covariate than a property of the effects associated with them. Sec-
ondly, we distinguish between “fixed-effects parameters”, which are indeed pa-
rameters in the statistical model, and “random effects”, which, strictly speak-
ing, are not parameters. As we will see shortly, random effects are unobserved
random variables.
To make the distinction more concrete, suppose that we wish to model the
annual reading test scores for students in a school district and that the co-
variates recorded with the score include a student identifier and the student’s
gender. Both of these are categorical covariates. The levels of the gender co-
variate, male and female, are fixed. If we consider data from another school
district or we incorporate scores from earlier tests, we will not change those
levels. On the other hand, the students whose scores we observed would gen-
erally be regarded as a sample from the set of all possible students whom
we could have observed. Adding more data, either from more school districts
or from results on previous or subsequent tests, will increase the number of
distinct levels of the student identifier.
Mixed-effects models or, more simply, mixed models are statistical models
that incorporate both fixed-effects parameters and random effects. Because
of the way that we will define random effects, a model with random effects
always includes at least one fixed-effects parameter. Thus, any model with
random effects is a mixed model.
We characterize the statistical model in terms of two random variables: a
q-dimensional vector of random effects represented by the random variable
B and an n-dimensional response vector represented by the random variable
Y . (We use upper-case “script” characters to denote random variables. The
corresponding lower-case upright letter denotes a particular value of the ran-
dom variable.) We observe the value, y, of Y . We do not observe the value,
b, of B.
When formulating the model we describe the unconditional distribution
of B and the conditional distribution, (Y |B = b). The descriptions of the
distributions involve the form of the distribution and the values of certain
parameters. We use the observed values of the response and the covariates to
estimate these parameters and to make inferences about them.
That’s the big picture. Now let’s make this more concrete by describing a
particular, versatile class of mixed models called linear mixed models and by
studying a simple example of such a model. First we will describe the data
in the example.
Models with random effects have been in use for a long time. The first edition
of the classic book, Statistical Methods in Research and Production, edited by
O.L. Davies, was published in 1947 and contained examples of the use of ran-
dom effects to characterize batch-to-batch variability in chemical processes.
The data from one of these examples are available as the Dyestuff data in the
lme4 package. In this section we describe and plot these data and introduce
a second example, the Dyestuff2 data, described in Box and Tiao [1973].
The Dyestuff data are described in Davies and Goldsmith [1972, Table 6.3,
p. 131], the fourth edition of the book mentioned above, as coming from
an investigation to find out how much the variation from batch to batch in the
quality of an intermediate product (H-acid) contributes to the variation in the
yield of the dyestuff (Naphthalene Black 12B) made from it. In the experiment
six samples of the intermediate, representing different batches of works manu-
facture, were obtained, and five preparations of the dyestuff were made in the
laboratory from each sample. The equivalent yield of each preparation as grams
of standard colour was determined by dye-trial.
To access these data within R we must first attach the lme4 package to our
session using
> library(lme4)
Note that the ">" symbol in the line shown is the prompt in R and not part
of what the user types. The lme4 package must be attached before any of the
data sets or functions in the package can be used. If typing this line results in
an error report stating that there is no package by this name then you must
first install the package.
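If needed, the package can be installed from CRAN in the usual way, for example with

> install.packages("lme4")

after which the library(lme4) call should succeed.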
In what follows, we will assume that the lme4 package has been installed
and that it has been attached to the R session before any of the code shown
has been run.
The str function in R provides a concise description of the structure of the
data
> str(Dyestuff)
'data.frame': 30 obs. of 2 variables:
$ Batch: Factor w/ 6 levels "A","B","C","D",..: 1 1 1 1 1 2 2 2 2 2 ...
$ Yield: num 1545 1440 1440 1520 1580 ...
from which we see that it consists of 30 observations of the Yield, the response
variable, and of the covariate, Batch, which is a categorical variable stored as
a factor object. If the labels for the factor levels are arbitrary, as they are
here, we will use letters instead of numbers for the labels. That is, we label
the batches as "A" through "F" rather than "1" through "6". When the labels
are letters it is clear that the variable is categorical. When the labels are
numbers a categorical covariate can be mistaken for a numeric covariate,
with unintended consequences.
It is a good practice to apply str to any data frame the first time you
work with it and to check carefully that any categorical variables are indeed
represented as factors.
The data in a data frame are viewed as a table with columns corresponding
to variables and rows to observations. The functions head and tail print the
first or last few rows (the default value of “few” happens to be 6 but we can
specify another value if we so choose)
> head(Dyestuff)
Batch Yield
1 A 1545
2 A 1440
3 A 1440
4 A 1520
5 A 1580
6 B 1540
or we could ask for a summary of the Dyestuff data
> summary(Dyestuff)
Batch Yield
A:5 Min. :1440
B:5 1st Qu.:1469
C:5 Median :1530
D:5 Mean :1528
E:5 3rd Qu.:1575
F:5 Max. :1635
Fig. 1.1 Yield of dyestuff (Napthalene Black 12B) for 5 preparations from each of 6
batches of an intermediate product (H-acid). The line joins the mean yields from the
batches, which have been ordered by increasing mean yield. The vertical positions
are “jittered” slightly to avoid over-plotting. Notice that the lowest yield for batch A
was observed for two distinct preparations from that batch.
We do not discuss here how one would create such a plot. Because this book was created using Sweave
[Leisch, 2002], the exact code used to create the plot, as well as the code for
all the other figures and calculations in the book, is available on the web site
for the book. In Sect. ?? we review some of the principles of lattice graphics,
such as reordering the levels of the Batch factor by increasing mean response,
that enhance the informativeness of the plot. At this point we will concentrate
on the information conveyed by the plot and not on how the plot is created.
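As a rough sketch (not the exact figure code, which is available on the book's web site), a plot in the style of Fig. 1.1 could be produced with lattice along these lines:

> library(lattice)
> dotplot(reorder(Batch, Yield) ~ Yield, Dyestuff,
+         ylab = "Batch", xlab = "Yield of dyestuff",
+         type = c("p", "a"), jitter.y = TRUE, pch = 21)

Here reorder(Batch, Yield) orders the batches by increasing mean yield and type = c("p", "a") adds a line joining the batch averages.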
In Sect. 1.3.1 we will use mixed models to quantify the variability in yield
between batches. For the time being let us just note that the particular
batches used in this experiment are a selection or sample from the set of
all batches that we wish to consider. Furthermore, the extent to which one
particular batch tends to increase or decrease the mean yield of the process
— in other words, the “effect” of that particular batch on the yield — is not
as interesting to us as is the extent of the variability between batches. For
the purposes of designing, monitoring and controlling a process we want to
predict the yield from future batches, taking into account the batch-to-batch
variability and the within-batch variability. Being able to estimate the extent
to which a particular batch in the past increased or decreased the yield is not
usually an important goal for us. We will model the effects of the batches as
random effects rather than as fixed-effects parameters.
Fig. 1.2 Simulated data presented in Box and Tiao [1973] with a structure similar
to that of the Dyestuff data. These data represent a case where the batch-to-batch
variability is small relative to the within-batch variability.
The Dyestuff2 data are simulated data presented in Box and Tiao [1973,
Table 5.1.4, p. 247] where the authors state
These data had to be constructed for although examples of this sort undoubt-
edly occur in practice they seem to be rarely published.
> summary(Dyestuff2)
Batch Yield
A:5 Min. :-0.892
B:5 1st Qu.: 2.765
C:5 Median : 5.365
D:5 Mean : 5.666
E:5 3rd Qu.: 8.151
F:5 Max. :13.434
Before we formally define a linear mixed model, let’s go ahead and fit models
to these data sets using lmer. Like most model-fitting functions in R, lmer
takes, as its first two arguments, a formula specifying the model and the data
with which to evaluate the formula. This second argument, data, is optional
but recommended. It is usually the name of a data frame, such as those we
examined in the last section. Throughout this book all model specifications
will be given in this formula/data format.
We will explain the structure of the formula after we have considered an
example.
We fit a model to the Dyestuff data allowing for an overall level of the Yield
and for an additive random effect for each level of Batch
> fm01 <- lmer(Yield ~ 1 + (1|Batch), Dyestuff)
> print(fm01)
Fixed effects:
Estimate Std. Error t value
(Intercept) 1527.50 19.38 78.8
In the first line we call the lmer function to fit a model with formula
Yield ~ 1 + (1 | Batch)
applied to the Dyestuff data and assign the result to the name fm01. (The
name is arbitrary. I happen to use names that start with fm, indicating “fitted
model”.)
As is customary in R, there is no output shown after this assignment. We
have simply saved the fitted model as an object named fm01. In the second
line we display some information about the fitted model by applying print
to fm01. In later examples we will condense these two steps into one but here
it helps to emphasize that we save the result of fitting a model then apply
various extractor functions to the fitted model to get a brief summary of the
model fit or to obtain the values of some of the estimated quantities.
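For example, extractor functions such as fixef and VarCorr, both provided by lme4, return pieces of the fit:

> fixef(fm01)     # estimated fixed-effects coefficients
> VarCorr(fm01)   # estimated variance and covariance components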
The printed display of a model fit with lmer has four major sections: a de-
scription of the model that was fit, some statistics characterizing the model
fit, a summary of properties of the random effects and a summary of the
fixed-effects parameter estimates. We consider each of these sections in turn.
The description section states that this is a linear mixed model in which the
parameters have been estimated as those that minimize the REML criterion
(explained in Sect. 5.5). The formula and data arguments are displayed for
later reference. If other, optional arguments affecting the fit, such as a subset
specification, were used, they too will be displayed here.
For models fit by the REML criterion the only statistic describing the
model fit is the value of the REML criterion itself. An alternative set of pa-
rameter estimates, the maximum likelihood estimates, are obtained by spec-
ifying the optional argument REML=FALSE.
> (fm01ML <- lmer(Yield ~ 1 + (1|Batch), Dyestuff, REML=FALSE))
Fixed effects:
Estimate Std. Error t value
(Intercept) 1527.50 17.69 86.33
(Notice that this code fragment also illustrates a way to condense the assign-
ment and the display of the fitted model into a single step. The redundant set
of parentheses surrounding the assignment causes the result of the assignment
to be displayed. We will use this device often in what follows.)
The display of a model fit by maximum likelihood provides several other
model-fit statistics such as Akaike’s Information Criterion (AIC) [Sakamoto
et al., 1986], Schwarz’s Bayesian Information Criterion (BIC) [Schwarz, 1978],
the log-likelihood (logLik) at the parameter estimates, and the deviance (neg-
ative twice the log-likelihood) at the parameter estimates. These are all statis-
tics related to the model fit and are used to compare different models fit to
the same data.
At this point the important thing to note is that the default estimation
criterion is the REML criterion. Generally the REML estimates of variance
components are preferred to the ML estimates. However, when comparing
models it is safest to refit all the models using the maximum likelihood cri-
terion. We will discuss comparisons of model fits in Sect. 2.2.4.
Fixed effects:
Estimate Std. Error t value
(Intercept) 5.6656 0.6784 8.352
Fixed effects:
Estimate Std. Error t value
(Intercept) 5.666 0.667 8.494
(Note the use of the update function to re-fit a model changing some of the
arguments. In a case like this, where the call to fit the original model is not
very complicated, the use of update is not that much simpler than repeating
the original call to lmer with extra arguments. For complicated model fits it
can be.)
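The two fixed-effects displays above are from the REML and ML fits of the same model to the Dyestuff2 data. The calls that produced them are not reproduced above; a sketch consistent with the surrounding text (the object names fm02 and fm02ML are taken from that discussion) is

> fm02 <- lmer(Yield ~ 1 + (1 | Batch), Dyestuff2)
> fm02ML <- update(fm02, REML = FALSE)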
An estimate of 0 for σ1 does not mean that there is no variation between the
groups. Indeed Fig. 1.2 shows that there is some small amount of variability
between the groups. The estimate, σ̂₁ = 0, simply indicates that the level of
“between-group” variability is not sufficient to warrant incorporating random
effects in the model.
The important point to take away from this example is that we must
allow for the estimates of variance components to be zero. We describe such
a model as being degenerate, in the sense that it corresponds to a linear
model in which we have removed the random effects associated with Batch.
Degenerate models can and do occur in practice. Even when the final fitted
model is not degenerate, we must allow for such models when determining
the parameter estimates through numerical optimization.
To reiterate, the model fm02 corresponds to the linear model
> summary(fm02a <- lm(Yield ~ 1, Dyestuff2))
Call:
lm(formula = Yield ~ 1, data = Dyestuff2)
Residuals:
Min 1Q Median 3Q Max
-6.5576 -2.9006 -0.3006 2.4854 7.7684
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.6656 0.6784 8.352 3.32e-09
because the random effects are inert, in the sense that they have a variance
of zero, and hence can be removed.
Notice that the estimate of σ from the linear model (called the Residual
standard error in the output) corresponds to the estimate in the REML fit
(fm02) but not that from the ML fit (fm02ML). The fact that the REML es-
timates of variance components in mixed models generalize the estimate of
the variance used in linear models, in the sense that these estimates coincide
in the degenerate case, is part of the motivation for the use of the REML
criterion for fitting mixed-effects models.
A linear mixed model is characterized by two distributions: the conditional distribution of the response given the random effects and the unconditional distribution of the random effects,
\[
(\mathcal{Y}\,|\,\mathcal{B}=b) \sim \mathcal{N}\!\left(X\beta + Zb,\;\sigma^2 I\right),
\qquad
\mathcal{B} \sim \mathcal{N}\!\left(0,\;\Sigma_\theta\right).
\tag{1.1}
\]
The random effects are expressed in terms of a "spherical" random variable, $\mathcal{U}$, through the relative covariance factor, $\Lambda_\theta$, as
\[
\mathcal{B} = \Lambda_\theta\,\mathcal{U} .
\]
The penalized residual sum of squares (PRSS),
\[
\|y - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2 ,
\]
is the sum of the residual sum of squares, measuring fidelity of the model to the data, and a penalty on the size of $u$, measuring the complexity of the model. Minimizing the PRSS with respect to $u$,
\[
r^2_{\beta,\theta} = \min_{u}\left\{\|y - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2\right\},
\tag{1.4}
\]
is a direct (non-iterative) calculation performed using the sparse Cholesky factor, $L_\theta$, which satisfies
\[
L_\theta L_\theta^{\mathsf{T}} = \Lambda_\theta^{\mathsf{T}} Z^{\mathsf{T}} Z \Lambda_\theta + I_q .
\tag{1.5}
\]
In terms of these quantities the deviance can be written
\[
d(\theta,\beta,\sigma\,|\,y) = n\log\!\left(2\pi\sigma^2\right) + \log\!\left(|L_\theta|^2\right) + \frac{r^2_{\beta,\theta}}{\sigma^2},
\tag{1.6}
\]
where $|L_\theta|$ denotes the determinant of $L_\theta$. Because $L_\theta$ is triangular, its determinant is the product of its diagonal elements.

Because the conditional mean, $\mu$, is a linear function of $\beta$ and $u$, minimization of the PRSS with respect to both $\beta$ and $u$ to produce
\[
r^2_{\theta} = \min_{\beta,u}\left\{\|y - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2\right\}
\tag{1.7}
\]
is also a direct calculation. The values of $u$ and $\beta$ that provide this minimum are called, respectively, the conditional mode, $\tilde{u}_\theta$, of the spherical random effects and the conditional estimate, $\widehat{\beta}_\theta$, of the fixed effects. At the conditional estimate of the fixed effects the deviance is
\[
d(\theta,\widehat{\beta}_\theta,\sigma\,|\,y) = n\log\!\left(2\pi\sigma^2\right) + \log\!\left(|L_\theta|^2\right) + \frac{r^2_{\theta}}{\sigma^2}.
\tag{1.8}
\]
Minimizing this expression with respect to $\sigma^2$ produces the conditional estimate
\[
\widehat{\sigma^2_\theta} = \frac{r^2_\theta}{n},
\tag{1.9}
\]
which provides the profiled deviance,
\[
\tilde{d}(\theta\,|\,y) = d(\theta,\widehat{\beta}_\theta,\widehat{\sigma}_\theta\,|\,y)
= n\left[1 + \log\!\left(\frac{2\pi r^2_\theta}{n}\right)\right] + \log\!\left(|L_\theta|^2\right),
\tag{1.10}
\]
a function of $\theta$ alone.

The maximum likelihood estimate (MLE) of $\theta$, written $\widehat{\theta}$, is the value that minimizes the profiled deviance (1.10). We determine this value by numerical optimization. In the process of evaluating $\tilde{d}(\widehat{\theta}\,|\,y)$ we determine $\widehat{\beta} = \widehat{\beta}_{\widehat{\theta}}$, $\tilde{u}_{\widehat{\theta}}$ and $r^2_{\widehat{\theta}}$, from which we can evaluate $\widehat{\sigma} = \sqrt{r^2_{\widehat{\theta}}/n}$.

The elements of the conditional mode of $\mathcal{B}$, evaluated at the parameter estimates,
\[
\tilde{b}_{\widehat{\theta}} = \Lambda_{\widehat{\theta}}\,\tilde{u}_{\widehat{\theta}},
\tag{1.11}
\]
are sometimes called the best linear unbiased predictors or BLUPs of the
random effects. Although it has an appealing acronym, I don’t find the term
particularly instructive (what is a “linear unbiased predictor” and in what
sense are these the “best”?) and prefer the term “conditional mode”, which is
explained in Sect. 1.6.
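The iteration output shown next is a trace of the optimizer minimizing the profiled deviance. Such a trace can be requested through the verbose argument to lmer; a sketch of the kind of call involved (the exact call that produced this output is not shown in this chapter) is

> lmer(Yield ~ 1 + (1 | Batch), Dyestuff, REML = FALSE, verbose = 1)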
npt = 3 , n = 1
rhobeg = 0.2 , rhoend = 2e-07
0.020: 4: 327.347;0.800000
0.0020: 6: 327.328;0.764659
0.00020: 9: 327.327;0.752808
2.0e-05: 10: 327.327;0.752583
2.0e-06: 12: 327.327;0.752583
2.0e-07: 14: 327.327;0.752581
At return
17: 327.32706: 0.752581
Fig. 1.4 Image of the transpose of the random-effects model matrix, Z, for model
fm01. The non-zero elements, which are all unity, are shown as darkened squares. The
zero elements are blank.
In this section we show how to create a profile deviance object from a fitted
linear mixed model and how to use this object to evaluate confidence intervals
on the parameters. We also discuss the construction and interpretation of
profile zeta plots for the parameters and profile pairs plots for parameter
pairs.
The mixed-effects model fit as fm01 or fm01ML has three parameters for which
we obtained estimates. These parameters are σ1 , the standard deviation of the
random effects, σ , the standard deviation of the residual or “per-observation”
noise term and β0 , the fixed-effects parameter that is labeled as (Intercept).
The profile function systematically varies the parameters in a model, as-
sessing the best possible fit that can be obtained with one parameter fixed
at a specific value and comparing this fit to the globally optimal fit, which is
the original model fit that allowed all the parameters to vary. The models are
compared according to the change in the deviance, which is the likelihood ra-
tio test (LRT) statistic. We apply a signed square root transformation to this
statistic and plot the resulting function, called ζ , versus the parameter value.
A ζ value can be compared to the quantiles of the standard normal distribu-
tion, Z ∼ N (0, 1). For example, a 95% profile deviance confidence interval
on the parameter consists of the values for which −1.960 < ζ < 1.960.
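The cutoff ±1.960 is simply the pair of 2.5% and 97.5% quantiles of the standard normal distribution, which can be checked directly:

> qnorm(c(0.025, 0.975))   # approximately -1.96 and 1.96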
Because the process of profiling a fitted model, which involves re-fitting
the model many times, can be computationally intensive, one should exercise
caution with complex models fit to very large data sets. Because the statistic
of interest is a likelihood ratio, the model is re-fit according to the maximum
likelihood criterion, even if the original fit is a REML fit. Thus, there is a
slight advantage in starting with an ML fit.
> pr01 <- profile(fm01ML)
Plots of ζ versus the parameter being profiled (Fig. 1.5) are obtained with
> xyplot(pr01, aspect = 1.3)
We will refer to such plots as profile zeta plots. I usually adjust the aspect
ratio of the panels in profile zeta plots to, say, aspect = 1.3 and frequently
set the layout so the panels form a single row (layout = c(3,1), in this case).
The vertical lines in the panels delimit the 50%, 80%, 90%, 95% and 99%
confidence intervals, when these intervals can be calculated. Numerical values
of the endpoints are returned by the confint extractor.
> confint(pr01)
Fig. 1.5 Signed square root, ζ , of the likelihood ratio test statistic for each of the
parameters in model fm01ML. The vertical lines are the endpoints of 50%, 80%, 90%,
95% and 99% confidence intervals derived from this test statistic.
Fig. 1.6 Profiled deviance, on the scale |ζ |, the square root of the change in the
deviance, for each of the parameters in model fm01ML. The intervals shown are 50%,
80%, 90%, 95% and 99% confidence intervals based on the profile likelihood.
2.5 % 97.5 %
.sig01 12.197461 84.063361
.lsig 3.643624 4.214461
(Intercept) 1486.451506 1568.548494
> confint(pr01, level = 0.99)
0.5 % 99.5 %
.sig01 NA 113.690280
.lsig 3.571290 4.326337
(Intercept) 1465.872875 1589.127125
Notice that the lower bound on the 99% confidence interval for σ1 is not
defined. Also notice that we profile log(σ ) instead of σ , the residual standard
deviation.
Fig. 1.7 Signed square root, ζ , of the likelihood ratio test statistic as a function of
log(σ ), of σ and of σ 2 . The vertical lines are the endpoints of 50%, 80%, 90%, 95%
and 99% confidence intervals.
A profile zeta plot, such as Fig. 1.5, shows us the sensitivity of the model fit
to changes in the value of particular parameters. Although this is not quite
the same as describing the distribution of an estimator, it is a similar idea
and we will use some of the terminology from distributions when describing
these plots. Essentially we view the patterns in the plots as we would those
in a normal probability plot of data values or residuals from a model.
Ideally the profile zeta plot will be close to a straight line over the region
of interest, in which case we can perform reliable statistical inference based
on the parameter’s estimate, its standard error and quantiles of the stan-
dard normal distribution. We will describe such a situation as providing a
good normal approximation for inference. The common practice of quoting
a parameter estimate and its standard error assumes that this is always the
case.
In Fig. 1.5 the profile zeta plot for log(σ) is reasonably straight so log(σ)
has a good normal approximation. But this does not mean that there is a
good normal approximation for σ² or even for σ. As shown in Fig. 1.7 the
profile zeta plot for log(σ ) is slightly skewed, that for σ is moderately skewed
and the profile zeta plot for σ 2 is highly skewed. Deviance-based confidence
intervals on σ 2 are quite asymmetric, of the form “estimate minus a little,
plus a lot”.
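Because the profiling is done on the scale of log(σ), an interval for σ or for σ² can be obtained by transforming the endpoints of the .lsig interval shown above; a small sketch:

> exp(confint(pr01)[".lsig", ])     # interval on sigma
> exp(confint(pr01)[".lsig", ])^2   # corresponding (asymmetric) interval on sigma^2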
Fig. 1.8 Signed square root, ζ , of the likelihood ratio test statistic as a function of
log(σ1 ), of σ1 and of σ12 . The vertical lines are the endpoints of 50%, 80%, 90%, 95%
and 99% confidence intervals.
Fig. 1.9 Profile pairs plot for the parameters in model fm01. The contour lines
correspond to two-dimensional 50%, 80%, 90%, 95% and 99% marginal confidence
regions based on the likelihood ratio. Panels below the diagonal represent the (ζi , ζ j )
parameters; those above the diagonal represent the original parameters.
A profile pairs plot, produced with
> splom(pr01)
and shown in Fig. 1.9, displays the profile traces along with interpolated contours of the two-dimensional profiled deviance function. The contours are chosen to correspond to the two-dimensional marginal confidence regions at particular confidence levels.
Because this plot may be rather confusing at first we will explain what is
shown in each panel. To make it easier to refer to panels we assign them (x, y)
coordinates, as in a Cartesian coordinate system. The columns are numbered
1 to 3 from left to right and the rows are numbered 1 to 3 from bottom to
top. Note that the rows are numbered from the bottom to the top, like the
y-axis of a graph, not from top to bottom, like a matrix.
The diagonal panels show the ordering of the parameters: σ1 first, then
log(σ ) then β0 . Panels above the diagonal are in the original scale of the
parameters. That is, the top-left panel, which is the (1, 3) position, has σ1 on
the horizontal axis and β0 on the vertical axis.
In addition to the contour lines in this panel, there are two other lines,
which are the profile traces of σ1 on β0 and of β0 on σ1 . The profile trace of β0
on σ1 is a straight horizontal line, indicating that the conditional estimate of
β0 , given a value of σ1 , is constant. Again, this is a consequence of the simple
model form and the balanced data set. The other line in this panel, which is
the profile trace of σ1 on β0 , is curved. That is, the conditional estimate of
σ1 given β0 depends on β0 . As β0 moves away from the estimate, βb0 , in either
direction, the conditional estimate of σ1 increases.
We will refer to the two traces on a panel as the “horizontal trace” and
“vertical trace”. They are not always perfectly horizontal and vertical lines
but the meaning should be clear from the panel because one trace will always
be more horizontal and the other will be more vertical. The one that is more
horizontal is the trace of the parameter on the y axis as a function of the
parameter on the horizontal axis, and vice versa.
The contours shown on the panel are interpolated from the profile zeta
function and the profile traces, in the manner described in Bates and Watts
[1988, Chapter 6]. One characteristic of a profile trace, which we can verify
visually in this panel, is that the tangent to a contour must be vertical where
it intersects the horizontal trace and horizontal where it intersects the vertical
trace.
The (2, 3) panel shows β0 versus log(σ ). In this case the traces actually
are horizontal and vertical straight lines. That is, the conditional estimate of
β0 doesn’t depend on log(σ ) and the conditional estimate of log(σ ) doesn’t
depend on β0 . Even in this case, however, the contour lines are not concentric
ellipses, because the deviance is not perfectly quadratic in these parameters.
That is, the zeta functions, ζ (β0 ) and ζ (log(σ )), are not linear.
The (1, 2) panel, showing log(σ ) versus σ1 shows distortion along both
axes and nonlinear patterns in both traces. When σ1 is close to zero the
conditional estimate of log(σ ) is larger than when σ1 is large. In other words
small values of σ1 inflate the estimate of log(σ ) because the variability that
would be explained by the random effects gets incorporated into the residual
noise term.
Panels below the diagonal are on the ζ scale, which is why the axes on
each of these panels span the same range, approximately −3 to +3, and the
profile traces always cross at the origin. Thus the (3, 1) panel shows ζ (σ1 )
on the vertical axis versus ζ (β0 ) on the horizontal. These panels allow us
to see distortions from an elliptical shape due to nonlinearity of the traces,
separately from the one-dimensional distortions caused by a poor choice of
scale for the parameter. The ζ scales provide, in some sense, the best possible
set of single-parameter transformations for assessing the contours. On the ζ
scales the extent of a contour on the horizontal axis is exactly the same as
the extent on the vertical axis and both are centered about zero.
Another way to think of this is that, if we had profiled σ₁² instead of σ₁, we would change all the panels in the first column but the panels in the first row would remain the same.
In Sect. 1.4.1 we mentioned that what are sometimes called the BLUPs (or best linear unbiased predictors) of the random effects, B, are the conditional modes evaluated at the parameter estimates, calculated as b̃θ̂ = Λθ̂ ũθ̂ (1.11).
These values are often considered as some sort of “estimates” of the ran-
dom effects. It can be helpful to think of them this way but it can also be
misleading. As we have stated, the random effects are not, strictly speak-
ing, parameters—they are unobserved random variables. We don’t estimate
the random effects in the same sense that we estimate parameters. In-
stead, we consider the conditional distribution of B given the observed data,
(B|Y = y).
Because the unconditional distribution, B ∼ N (0, Σθ ) is continuous, the
conditional distribution, (B|Y = y) will also be continuous. In general, the
mode of a probability density is the point of maximum density, so the phrase
“conditional mode” refers to the point at which this conditional density is
maximized. Because this definition relates to the probability model, the values
of the parameters are assumed to be known. In practice, of course, we don’t
know the values of the parameters (if we did there would be no purpose
in forming the parameter estimates), so we use the estimated values of the
parameters to evaluate the conditional modes.
Those who are familiar with the multivariate Gaussian distribution may
recognize that, because both B and (Y |B = b) are multivariate Gaussian,
(B|Y = y) will also be multivariate Gaussian and the conditional mode will
also be the conditional mean of B, given Y = y. This is the case for a linear
mixed model but it does not carry over to other forms of mixed models. In the
general case all we can say about ũ or b̃ is that they maximize a conditional
density, which is why we use the term “conditional mode” to describe these
values. We will only use the term “conditional mean” and the symbol, µ, in
reference to E(Y |B = b), which is the conditional mean of Y given B, and
an important part of the formulation of all types of mixed-effects models.
The ranef extractor returns the conditional modes.
> ranef(fm01ML)
$Batch
(Intercept)
A -16.628222
B 0.369516
C 26.974671
D -21.801446
E 53.579825
F -42.494344
attr(,"class")
[1] "ranef.mer"
> str(ranef(fm01ML))
List of 1
$ Batch:'data.frame': 6 obs. of 1 variable:
..$ (Intercept): num [1:6] -16.628 0.37 26.975 -21.801 53.58 ...
- attr(*, "class")= chr "ranef.mer"
shows that the value is a list of data frames. In this case the list is of length 1
because there is only one random-effects term, (1|Batch), in the model and,
hence, only one grouping factor, Batch, for the random effects. There is only
one column in this data frame because the random-effects term, (1|Batch), is
a simple, scalar term.
To make this more explicit, random-effects terms in the model formula are
those that contain the vertical bar ("|") character. The Batch variable is the
grouping factor for the random effects generated by this term. An expression
for the grouping factor, usually just the name of a variable, occurs to the right
of the vertical bar. If the expression on the left of the vertical bar is 1, as it
is here, we describe the term as a simple, scalar, random-effects term. The
designation “scalar” means there will be exactly one random effect generated
for each level of the grouping factor. A simple, scalar term generates a block
of indicator columns — the indicators for the grouping factor — in Z. Because
there is only one random-effects term in this model and because that term
is a simple, scalar term, the model matrix Z for this model is the indicator
matrix for the levels of Batch.
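For readers who want to inspect Z directly, recent versions of lme4 provide the getME extractor (an accessor that may differ from the one available in the lme4 version used when this text was written):

> Z <- getME(fm01, "Z")   # sparse 30-by-6 random-effects model matrix
> dim(Z)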
In the next chapter we fit models with multiple simple, scalar terms and, in
subsequent chapters, we extend random-effects terms beyond simple, scalar
terms. When we have only simple, scalar terms in the model, each term has
a unique grouping factor and the elements of the list returned by ranef can
Fig. 1.10 95% prediction intervals on the random effects in fm01ML, shown as a
dotplot.
Fig. 1.11 95% prediction intervals on the random effects in fm01ML, plotted versus quantiles of the standard normal distribution.
These intervals can be displayed as a dotplot (Fig. 1.10), which provides linear spacing of the levels on the y axis, or using
> qqmath(ranef(fm01ML, postVar=TRUE))
(Fig.˜1.11), where the intervals are plotted versus quantiles of the standard
normal.
The dotplot is preferred when there are only a few levels of the grouping
factor, as in this case. When there are hundreds or thousands of random
effects the qqmath form is preferred because it focuses attention on the “im-
portant few” at the extremes and de-emphasizes the “trivial many” that are
close to zero.
The linear mixed model is characterized by the conditional distribution

(Y | B = b) ∼ N(Zb + Xβ, σ²Iₙ).

Notation

Random Variables
Model Matrices
Derived Matrices
Vectors

γ = Xβ + Zb = ZΛθ u + Xβ, the linear predictor
µ = E[Y | B = b] = E[Y | U = u], the conditional mean of the response
ũθ, the q-dimensional conditional mode (the value at which the conditional density is maximized) of U given Y = y.

Fig. 1.12 Dotplot of the Rail data from the MEMSS package (one row of observations for each of six rails).
Exercises
These exercises and several others in this book use data sets from the MEMSS
package for R. You will need to ensure that this package is installed before
you can access the data sets.
To load a particular data set, either attach the package
> library(MEMSS)
or load just the one data set
> data(Rail, package = "MEMSS")
1.1. Check the documentation, the structure (str) and a summary of the Rail
data (Fig. 1.12) from the MEMSS package. Note that if you used data to access
this data set (i.e. you did not attach the whole MEMSS package) then you must
use
> help(Rail, package = "MEMSS")
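Exercises 1.4–1.7 below refer to a fitted model for these data. A minimal sketch of the steps involved (the object names fm01Rail and pr01Rail are arbitrary choices, and travel is assumed to be the response variable in the Rail data):

> str(Rail)
> summary(Rail)
> fm01Rail <- lmer(travel ~ 1 + (1 | Rail), Rail, REML = FALSE)
> pr01Rail <- profile(fm01Rail)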
1.4. Profile the fitted model and construct 95% profile-based confidence in-
tervals on the parameters. Is the confidence interval on σ1 close to being
symmetric about the estimate? Is the corresponding interval on log(σ1 ) close
to being symmetric about its estimate?
1.5. Create the profile zeta plot for this model. For which parameters are
there good normal approximations?
1.6. Create a profile pairs plot for this model. Does the shape of the deviance
contours in this model mirror those in Fig.˜1.9?
1.7. Plot the prediction intervals on the random effects from this model. Do
any of these prediction intervals contain zero? Consider the relative magnitudes of σ̂₁ and σ̂ in this model compared to those in model fm01 for the Dyestuff data. Should these ratios of σ₁/σ lead you to expect a different pattern of prediction intervals in this plot than those in Fig. 1.10?
Chapter 2
Models With Multiple Random-effects Terms

The mixed models considered in the previous chapter had only one random-
effects term, which was a simple, scalar random-effects term, and a single
fixed-effects coefficient. Although such models can be useful, it is with the
facility to use multiple random-effects terms and to use random-effects terms
beyond a simple, scalar term that we can begin to realize the flexibility and
versatility of mixed models.
In this chapter we consider models with multiple simple, scalar random-
effects terms, showing examples where the grouping factors for these terms
are in completely crossed or nested or partially crossed configurations. For
ease of description we will refer to the random effects as being crossed or
nested although, strictly speaking, the distinction between nested and non-
nested refers to the grouping factors, not the random effects.
One of the areas in which the methods in the lme4 package for R are particu-
larly effective is in fitting models to cross-classified data where several factors
have random effects associated with them. For example, in many experiments
in psychology the reaction of each of a group of subjects to each of a group
of stimuli or items is measured. If the subjects are considered to be a sample
from a population of subjects and the items are a sample from a population
of items, then it would make sense to associate random effects with both
these factors.
In the past it was difficult to fit mixed models with multiple, crossed
grouping factors to large, possibly unbalanced, data sets. The methods in
the lme4 package are able to do this. To introduce the methods let us first
consider a small, balanced data set with crossed grouping factors.
The Penicillin data are derived from Table 6.6, p. 144 of Davies and Goldsmith [1972] where they are described as coming from an investigation to
assess the variability between samples of penicillin by the B. subtilis method.
In this test method a bulk-innoculated nutrient agar medium is poured into
a Petri dish of approximately 90 mm. diameter, known as a plate. When the
medium has set, six small hollow cylinders or pots (about 4 mm. in diameter)
are cemented onto the surface at equally spaced intervals. A few drops of the
penicillin solutions to be compared are placed in the respective cylinders, and
the whole plate is placed in an incubator for a given time. Penicillin diffuses
from the pots into the agar, and this produces a clear circular zone of inhibition
of growth of the organisms, which can be readily measured. The diameter of
the zone is related in a known way to the concentration of penicillin in the
solution.
We can obtain a summary of the Penicillin data with
> summary(Penicillin)
Fig. 2.1 Diameter of the growth inhibition zone (mm) in the B. subtilis method of
assessing the concentration of penicillin. Each of 6 samples was applied to each of the
24 agar plates. The lines join observations on the same sample.
The fact that each sample was applied to every plate means that the factors are not nested. If we wish to be more specific, we could describe these factors as being completely crossed, which means that we have at least one observation for each combination of a level of sample and a level of plate. We can see this in Fig. 2.1 and, because there are moderate numbers of levels in these factors, we can check it in a cross-tabulation
> xtabs(~ sample + plate, Penicillin)
plate
sample a b c d e f g h i j k l m n o p q r s t u v w x
A 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
B 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
C 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
D 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
E 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
F 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Like the Dyestuff data, the factors in the Penicillin data are balanced.
That is, there are exactly the same number of observations on each plate and
for each sample and, furthermore, there is the same number of observations
on each combination of levels. In this case there is exactly one observation for
each combination of sample and plate. We would describe the configuration
of these two factors as an unreplicated, completely balanced, crossed design.
In general, balance is a desirable but precarious property of a data set.
We may be able to impose balance in a designed experiment but we typically
cannot expect that data from an observation study will be balanced. Also,
as anyone who analyzes real data soon finds out, expecting that balance in
the design of an experiment will produce a balanced data set is contrary to
“Murphy’s Law”. That’s why statisticians allow for missing data. Even when
we apply each of the six samples to each of the 24 plates, something could
go wrong for one of the samples on one of the plates, leaving us without a
measurement for that combination of levels and thus an unbalanced data set.
A model incorporating random effects for both the plate and the sample is
straightforward to specify — we include simple, scalar random effects terms
for both these factors.
> (fm03 <- lmer(diameter ~ 1 + (1|plate) + (1|sample), Penicillin))
Fixed effects:
Estimate Std. Error t value
(Intercept) 22.9722 0.8086 28.41
This model display indicates that the sample-to-sample variability has the
greatest contribution, then plate-to-plate variability and finally the “resid-
ual” variability that cannot be attributed to either the sample or the plate.
These conclusions are consistent with what we see in the Penicillin data plot
(Fig. 2.1).
The prediction intervals on the random effects (Fig. 2.2) confirm that the conditional distribution of the random effects for plate has much less variability than does the conditional distribution of the random effects for sample, in the sense that the dots in the bottom panel have less variability than those in the top panel. (Note the different horizontal axes for the two panels.)
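A plot like Fig. 2.2 can be produced with the same idiom used for fm01ML in Chap. 1; a sketch (postVar is the argument name used in this text, while later lme4 versions call it condVar):

> dotplot(ranef(fm03, postVar = TRUE))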
Fig. 2.2 95% prediction intervals on the random effects for model fm03 fit to the
Penicillin data.
Fig. 2.3 Image of the transpose of the random-effects model matrix, Z, for model
fm03. The non-zero elements, which are all unity, are shown as darkened squares. The
zero elements are blank.
Fig. 2.4 Images of the relative covariance factor, Λ , the cross-product of the random-
effects model matrix, ZT Z, and the sparse Cholesky factor, L, for model fm03.
The first parameter is the relative standard deviation of the random effects
for plate, which has the value 0.84671/0.54992 = 1.53968 at convergence, and
the second is the relative standard deviation of the random effects for sample
(1.93157/0.54992 = 3.512443).
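These relative standard deviations are simply the estimated standard deviations of the random effects divided by the estimated residual standard deviation, which can be checked with a quick calculation:

> c(plate = 0.84671, sample = 1.93157) / 0.54992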
Because Λθ is diagonal, the pattern of non-zeros in ΛθᵀZᵀZΛθ + I will be the same as that in ZᵀZ, shown in the middle panel of Fig. 2.4. The sparse Cholesky factor, L, shown in the right panel, is lower triangular and has non-zero elements in the lower right hand corner in positions where ZᵀZ has
systematic zeros. We say that “fill-in” has occurred when forming the sparse
Cholesky decomposition. In this case there is a relatively minor amount of fill
but in other cases there can be a substantial amount of fill and we shall take
precautions so as to reduce this, because fill-in adds to the computational
effort in determining the MLEs or the REML estimates.
A profile zeta plot (Fig. 2.5) for the parameters in model fm03 leads to conclusions similar to those from Fig. 1.5 for model fm01ML in the previous chapter. The fixed-effects parameter, β0, for the (Intercept) term has symmetric intervals and is over-dispersed relative to the normal distribution, while the logarithm of σ again has a good normal approximation.

Fig. 2.5 Profile zeta plot for the parameters in model fm03.
2.5 % 97.5 %
.sig01 0.6335658 1.1821040
.sig02 1.0957822 3.5563194
.lsig -0.7218645 -0.4629033
(Intercept) 21.2666274 24.6778176
2.5 % 97.5 %
.sig01 0.7492746 1.397993
.sig02 0.6188594 2.008485
Fig. 2.6 Profile pairs plot for the parameters in model fm03 fit to the Penicillin
data.
On the ζ scales (panels below the diagonal) the profile traces are nearly straight and orthog-
onal with the exception of the trace of ζ (σ2 ) on ζ (β0 ) (the horizontal trace
for the panel in the (4, 2) position). The pattern of this trace is similar to
the pattern of the trace of ζ(σ1) on ζ(β0) in Fig. 1.9. Moving β0 from its
estimate, βb0 , in either direction will increase the residual sum of squares. The
increase in the residual variability is reflected in an increase of one or more
of the dispersion parameters. The balanced experimental design results in a
fixed estimate of σ and the extra apparent variability must be incorporated
into σ1 or σ2 .
Contours in panels of parameter pairs on the original scales (i.e. panels
above the diagonal) can show considerable distortion from the ideal elliptical
shape. For example, contours in the σ2 versus σ1 panel (the (1, 2) position)
and the log(σ ) versus σ2 panel (in the (2, 3) position) are dramatically non-
elliptical.

Fig. 2.7 Profile pairs plot for the parameters in model fm03 fit to the Penicillin data. In this plot the parameters σ1 and σ2 are on the scale of the natural logarithm, as is the parameter σ in this and other profile pairs plots.

However, the distortion of the contours is not due to these param-
eter estimates depending strongly on each other. It is almost entirely due to
the choice of scale for σ1 and σ2 . When we plot the contours on the scale of
log(σ1) and log(σ2) instead (Fig. 2.7) they are much closer to the elliptical
pattern.
Conversely, if we tried to plot contours on the scale of σ₁² and σ₂² (not shown), they would be hideously distorted.
In this section we again consider a simple example, this time fitting a model
with nested grouping factors for the random effects.
The third example from Davies and Goldsmith [1972, Table 6.5, p. 138] is
described as coming from
deliveries of a chemical paste product contained in casks where, in addition to
sampling and testing errors, there are variations in quality between deliveries
. . . As a routine, three casks selected at random from each delivery were sampled
and the samples were kept for reference. . . . Ten of the delivery batches were
sampled at random and two analytical tests carried out on each of the 30
samples.
> summary(Pastes)
Fig. 2.8 Image of the cross-tabulation of the batch and sample factors in the Pastes
data.
> xtabs(~ batch + sample, Pastes, sparse = TRUE)
A 2 2 2 . . . . . . . . . . . . . . . . . . . . . . . . . . .
B . . . 2 2 2 . . . . . . . . . . . . . . . . . . . . . . . .
C . . . . . . 2 2 2 . . . . . . . . . . . . . . . . . . . . .
D . . . . . . . . . 2 2 2 . . . . . . . . . . . . . . . . . .
E . . . . . . . . . . . . 2 2 2 . . . . . . . . . . . . . . .
F . . . . . . . . . . . . . . . 2 2 2 . . . . . . . . . . . .
G . . . . . . . . . . . . . . . . . . 2 2 2 . . . . . . . . .
H . . . . . . . . . . . . . . . . . . . . . 2 2 2 . . . . . .
I . . . . . . . . . . . . . . . . . . . . . . . . 2 2 2 . . .
J . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 2 2
Fig. 2.9 Strength of paste preparations according to the batch and the sample within
the batch. There were two strength measurements on each of the 30 samples; three
samples each from 10 batches.
Because each level of sample occurs with one and only one level of batch we
say that sample is nested within batch. Some presentations of mixed-effects
models, especially those related to multilevel modeling [Rasbash et al., 2000]
or hierarchical linear models [Raudenbush and Bryk, 2002], leave the im-
pression that one can only define random effects with respect to factors that
are nested. This is the origin of the terms “multilevel”, referring to multiple,
nested levels of variability, and “hierarchical”, also invoking the concept of
a hierarchy of levels. To be fair, both those references do describe the use
of models with random effects associated with non-nested factors, but such
models tend to be treated as a special case.
The blurring of mixed-effects models with the concept of multiple, hier-
archical levels of variation results in an unwarranted emphasis on “levels”
when defining a model and leads to considerable confusion. It is perfectly le-
gitimate to define models having random effects associated with non-nested
factors. The reasons for the emphasis on defining random effects with respect
to nested factors only are that such cases do occur frequently in practice and
that some of the computational methods for estimating the parameters in
the models can only be easily applied to nested factors.
This is not the case for the methods used in the lme4 package. Indeed there
is nothing special done for models with random effects for nested factors.
When random effects are associated with multiple factors exactly the same
computational methods are used whether the factors form a nested sequence
or are partially crossed or are completely crossed.
There is, however, one aspect of nested grouping factors that we should
emphasize, which is the possibility of a factor that is implicitly nested within
another factor. Suppose, for example, that the sample factor was defined as
having three levels instead of 30 with the implicit assumption that sample
is nested within batch. It may seem silly to try to distinguish 30 different
batches with only three levels of a factor but, unfortunately, data are fre-
quently organized and presented like this, especially in text books. The cask
factor in the Pastes data is exactly such an implicitly nested factor. If we
cross-tabulate batch and cask
> xtabs(~ cask + batch, Pastes)
batch
cask A B C D E F G H I J
a 2 2 2 2 2 2 2 2 2 2
b 2 2 2 2 2 2 2 2 2 2
c 2 2 2 2 2 2 2 2 2 2
we get the impression that the cask and batch factors are crossed, not nested.
If we know that the cask should be considered as nested within the batch then
we should create a new categorical variable giving the batch-cask combina-
tion, which is exactly what the sample factor is. A simple way to create such a
factor is to use the interaction operator, ‘:’, on the factors. It is advisable, but
not necessary, to apply factor to the result thereby dropping unused levels of
the interaction from the set of all possible levels of the factor. (An “unused
level” is a combination that does not occur in the data.) A convenient code
idiom is
> Pastes$sample <- with(Pastes, factor(batch:cask))
or
> Pastes <- within(Pastes, sample <- factor(batch:cask))
In a small data set like Pastes we can quickly detect a factor being implic-
itly nested within another factor and take appropriate action. In a large data
set, perhaps hundreds of thousands of test scores for students in thousands
of schools from hundreds of school districts, it is not always obvious if school
identifiers are unique across the entire data set or just within a district. If you
are not sure, the safest thing to do is to create the interaction factor, as shown
above, so you can be confident that levels of the district:school interaction
do indeed correspond to unique schools.
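As a sketch of that idiom (the data frame scores and its variables district and school are hypothetical names used only for illustration):

> scores <- within(scores, school_id <- factor(district:school))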
Fitting a model with simple, scalar random effects for nested factors is done
in exactly the same way as fitting a model with random effects for crossed
grouping factors. We include random-effects terms for each factor, as in
> (fm04 <- lmer(strength ~ 1 + (1|sample) + (1|batch), Pastes, REML=FALSE))
Fixed effects:
Estimate Std. Error t value
(Intercept) 60.0533 0.6421 93.52
Not only is the model specification similar for nested and crossed factors,
the internal calculations are performed according to the methods described in
Sect. 1.4.1 for each model type. Comparing the patterns in the matrices Λ,
ZᵀZ and L for this model (Fig. 2.10) to those in Fig. 2.4 shows that models
with nested factors produce simple repeated structures along the diagonal of
Fig. 2.10 Images of the relative covariance factor, Λ , the cross-product of the
random-effects model matrix, ZT Z, and the sparse Cholesky factor, L, for model
fm04.
the sparse Cholesky factor, L, after reordering the random effects (we discuss
this reordering later in Sect. 5.4.1). This type of structure has the desirable
property that there is no “fill-in” during calculation of the Cholesky factor.
In other words, the number of non-zeros in L is the same as the number of
non-zeros in the lower triangle of the matrix being factored, ΛᵀZᵀZΛ + I
(which, because Λ is diagonal, has the same structure as ZᵀZ).
Fill-in of the Cholesky factor is not an important issue when we have a few
dozen random effects, as we do here. It is an important issue when we have
millions of random effects in complex configurations, as has been the case in
some of the models that have been fit using lmer.
The parameter estimates are: σ̂₁ = 2.904, the standard deviation of the random effects for sample; σ̂₂ = 1.095, the standard deviation of the random effects for batch; σ̂ = 0.823, the standard deviation of the residual noise term; and β̂₀ = 60.053, the overall mean response, which is labeled (Intercept) in these models.
The estimated standard deviation for sample is nearly three times as large
as that for batch, which confirms what we saw in Fig. 2.9. Indeed our conclusion from Fig. 2.9 was that there may not be a significant batch-to-batch
variability in addition to the sample-to-sample variability.
Plots of the prediction intervals of the random effects (Fig. 2.11) confirm this impression in that all the prediction intervals for the random effects for batch contain zero. Furthermore, the profile zeta plot (Fig. 2.12) shows that even the 50% profile-based confidence interval on σ₂ extends to zero.
Fig. 2.11 95% prediction intervals on the random effects for model fm04 fit to the
Pastes data.
Fig. 2.12 Profile zeta plots for the parameters in model fm04.
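The reduced model fm04a, with the random effect for batch removed, and its comparison to fm04 shown below would be produced by calls along these lines (a sketch reconstructed from the model formulas in the anova table):

> fm04a <- lmer(strength ~ 1 + (1 | sample), Pastes, REML = FALSE)
> anova(fm04a, fm04)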
Fixed effects:
Estimate Std. Error t value
(Intercept) 60.0533 0.5765 104.2
Data: Pastes
Models:
fm04a: strength ~ 1 + (1 | sample)
fm04: strength ~ 1 + (1 | sample) + (1 | batch)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm04a 3 254.40 260.69 -124.20
fm04 4 255.99 264.37 -124.00 0.4072 1 0.5234
The profile zeta plots for the remaining parameters in model fm04a are shown in Fig. 2.13.

Fig. 2.13 Profile zeta plots for the parameters in model fm04a.
> confint(pr04)
2.5 % 97.5 %
.sig01 2.1579337 4.05358894
.sig02 NA 2.94658934
.lsig -0.4276761 0.08199287
(Intercept) 58.6636504 61.44301637
> confint(pr04a)
2.5 % 97.5 %
.sig01 2.4306377 4.12201052
.lsig -0.4276772 0.08199277
(Intercept) 58.8861831 61.22048353
The confidence intervals on log(σ ) and β0 are similar for the two models.
The confidence interval on σ1 is slightly wider in model fm04a than in fm04,
because the variability that is attributed to batch in fm04 is incorporated into
the variability due to sample in fm04a.
The patterns in the profile pairs plot (Fig. 2.14) for the reduced model fm04a are similar to those in Fig. 1.9, the profile pairs plot for model fm01.
Fig. 2.14 Profile pairs plot for the parameters in model fm04a fit to the Pastes data.
Studies in education, in which test scores for students over time are also
associated with teachers and schools, usually result in partially crossed group-
ing factors. If students with scores in multiple years have different teachers
for the different years, the student factor cannot be nested within the teacher
factor. Conversely, student and teacher factors are not expected to be com-
pletely crossed. To have complete crossing of the student and teacher factors
it would be necessary for each student to be observed with each teacher,
which would be unusual. A longitudinal study of thousands of students with
hundreds of different teachers inevitably ends up partially crossed.
In this section we consider an example with thousands of students and
instructors where the response is the student’s evaluation of the instructor’s
effectiveness. These data, like those from most large observational studies,
are quite unbalanced.
The InstEval data are from a special evaluation of lecturers by students at the
Swiss Federal Institute for Technology–Zürich (ETH–Zürich), to determine
who should receive the “best-liked professor” award. These data have been
slightly simplified and identifying labels have been removed, so as to preserve
anonymity.
The variables
> str(InstEval)
have somewhat cryptic names. Factor s designates the student and d the
instructor. The dept factor is the department for the course and service indi-
cates whether the course was a service course taught to students from other
departments.
Although the response, y, is on a scale of 1 to 5,
> xtabs(~ y, InstEval)
y
1 2 3 4 5
10186 12951 17609 16921 15754
we will model it as a numeric response in what follows.
Fig. 2.15 95% prediction intervals on the random effects for the dept:service factor
in model fm05 fit to the InstEval data.
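The display excerpted next comes from a model of the following form (a sketch; the
formula is confirmed by the anova comparison later in this section, although the
original call is not reproduced here):
> fm05 <- lmer(y ~ 1 + (1|s) + (1|d) + (1|dept:service), InstEval)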
Fixed effects:
Estimate Std. Error t value
(Intercept) 3.25521 0.02824 115.3
(Fitting this complex model to a moderately large data set takes less than
two minutes on a modest laptop computer purchased in 2006. Although this
is more time than required for earlier model fits, it is a remarkably short time
for fitting a model of this size and complexity. In some ways it is remarkable
that such a model can be fit at all on such a computer.)
All three estimated standard deviations of the random effects are less than
σ̂ , with σ̂3 , the estimated standard deviation of the random effects for the
dept:service interaction, less than one-tenth the estimated residual standard
deviation.
It is not surprising that zero is within all of the prediction intervals on the
random effects for this factor (Fig.˜2.15). In fact, zero is close to the middle of
all these prediction intervals. However, the p-value for the LRT of H0 : σ3 = 0
versus Ha : σ3 > 0
> fm05a <- lmer(y ~ 1 + (1|s) + (1|d), InstEval, REML=0)
> anova(fm05a,fm05)
Data: InstEval
Models:
fm05a: y ~ 1 + (1 | s) + (1 | d)
fm05: y ~ 1 + (1 | s) + (1 | d) + (1 | dept:service)
is highly significant. That is, we have very strong evidence that we should
reject H0 in favor of Ha.
Fig. 2.16 Image of the sparse Cholesky factor, L, from model fm05
The seeming inconsistency of these conclusions is due to the large sample
size (n = 73421). When a model is fit to a very large sample even the most
subtle of differences can be highly “statistically significant”. The researcher
or data analyst must then decide if these terms have practical significance,
beyond the apparent statistical significance.
The large sample size also helps to ensure that the parameter estimates have good
normal approximations. We could profile this model fit but doing so would
take a very long time and, in this particular case, the analysts are more
interested in a model that uses fixed-effects parameters for the instructors,
which we will describe in the next chapter.
We could pursue other mixed-effects models here, such as using the dept
factor and not the dept:service interaction to define random effects, but we
will revisit these data in the next chapter and follow up on some of these
variations there.
Before leaving this model we examine the sparse Cholesky factor, L, (Fig.˜2.16),
which is of size 4128 × 4128. Even as a sparse matrix this factor requires a
considerable amount of memory,
> object.size(fm05@re@L)
6904640 bytes
or about 6.6 megabytes of storage. For comparison, the lower triangle of a dense
matrix of this size would require about 65 megabytes and the full dense matrix
about 130 megabytes.
The number of nonzero elements in this matrix that must be updated for
each evaluation of the deviance is
> nnzero(as(fm05@re@L, "sparseMatrix"))
[1] 566960
A simple, scalar random effects term in an lmer model formula is of the form
(1|fac), where fac is an expression whose value is the grouping factor of the
set of random effects generated by this term. Typically, fac is simply the name
of a factor, such as in the terms (1|sample) or (1|plate) in the examples in
this chapter. However, the grouping factor can be the value of an expression,
such as (1|dept:service) in the last example.
Because simple, scalar random-effects terms can differ only in the descrip-
tion of the grouping factor we refer to configurations such as crossed or nested
as applying to the terms or to the random effects, although it is more accurate
to refer to the configuration as applying to the grouping factors.
A model formula can include several such random effects terms. Because
configurations such as nested or crossed or partially crossed grouping factors
are a property of the data, the specification in the model formula does not
depend on the configuration. We simply include multiple random effects terms
in the formula specifying the model.
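For example, a sketch of a completely crossed specification, using the Penicillin
data from earlier in this chapter (the model name fmPen is ours, not from the text,
and we assume the lme4 package and data are attached), is
> fmPen <- lmer(diameter ~ 1 + (1 | plate) + (1 | sample), Penicillin)
Here plate and sample are completely crossed, yet the formula looks just like the
nested and partially crossed specifications discussed above.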
One apparent exception to this rule occurs with implicitly nested factors,
in which the levels of one factor are only meaningful within a particular level
of the other factor. In the Pastes data, levels of the cask factor are only
meaningful within a particular level of the batch factor. A model formula of
strength ~ 1 + (1 | cask) + (1 | batch)
would result in a fitted model that did not appropriately reflect the sources
of variability in the data. Following the simple rule that the factor should
be defined so that distinct experimental or observational units correspond to
distinct levels of the factor will avoid such ambiguity.
For convenience, a model with multiple, nested random-effects terms can
be specified as
strength ~ 1 + (1 | batch/cask)
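The shorthand expands to separate simple, scalar terms; a minimal sketch (assuming
the Pastes data are available and recalling that sample corresponds to the
batch:cask combination) is
> ## (1 | batch/cask) expands to (1 | batch) + (1 | batch:cask),
> ## which should reproduce the variance components of fm04 above
> fm04b <- lmer(strength ~ 1 + (1 | batch/cask), Pastes)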
Profile-based confidence intervals for the standard deviations of random effects
(σ1 , σ2 , etc.) tend to be symmetric on a logarithmic scale, except for those that
could be zero.
Another observation from the last example is that, for data sets with a
very large numbers of observations, a term in a model may be “statistically
significant” even when its practical significance is questionable.
Exercises
These exercises use data sets from the MEMSS package for R. Recall that to
access a particular data set, you must either attach the package
> library(MEMSS)
or load the data set explicitly, as in
> data(ergoStool, package = "MEMSS")
We begin with exercises using the ergoStool data from the MEMSS package.
The analysis and graphics in these exercises are performed in Chap.˜4. The
purpose of these exercises is to see if you can use the material from this
chapter to anticipate the results quoted in the next chapter.
2.1. Check the documentation, the structure (str) and a summary of the
ergoStool data from the MEMSS package. (If you are familiar with the Star
Trek television series and movies, you may want to speculate about what,
exactly, the “Borg scale” is.) Use
> xtabs(~ Type + Subject, ergoStool)
to check whether the Type and Subject factors are nested, partially crossed or
completely crossed, and whether the data are balanced.
2.2. Create a plot, similar to Fig.˜2.1, showing the effort by subject with lines
connecting points corresponding to the same stool types. Order the levels of
the Subject factor by increasing average effort.
2.3. The experimenters are interested in comparing these specific stool types.
In the next chapter we will fit a model with fixed-effects for the Type factor
and random effects for Subject, allowing us to perform comparisons of these
specific types. At this point fit a model with random effects for both Type
and Subject. What are the relative sizes of the estimates of the standard
deviations, σ̂1 (for Subject), σ̂2 (for Type) and σ̂ (for the residual variability)?
2.4. Refit the model using maximum likelihood. Check the parameter esti-
mates and, in the case of the fixed-effects parameter, β0 , its standard error.
In what ways have the parameter estimates changed? Which parameter esti-
mates have not changed?
2.5. Profile the fitted model and construct 95% profile-based confidence in-
tervals on the parameters. (Note that you will get the same profile object
whether you start with the REML fit or the ML fit. There is a slight advan-
tage in starting with the ML fit.) Is the confidence interval on σ1 close to
being symmetric about its estimate? Is the confidence interval on σ2 close to
being symmetric about its estimate? Is the corresponding interval on log(σ1 )
close to being symmetric about its estimate?
2.6. Create the profile zeta plot for this model. For which parameters are
there good normal approximations?
2.7. Create a profile pairs plot for this model. Comment on the shapes of the
profile traces in the transformed (ζ ) scale and the shapes of the contours in
the original scales of the parameters.
2.8. Create a plot of the 95% prediction intervals on the random effects for
Type using
> dotplot(ranef(fm, which = "Type", postVar = TRUE), aspect = 0.2,
+ strip = FALSE)
(Substitute the name of your fitted model for fm in the call to ranef.) Is there
a clear winner among the stool types? (Assume that lower numbers on the
Borg scale correspond to less effort).
2.9. Create a plot of the 95% prediction intervals on the random effects for
Subject.
2.10. Check the documentation, the structure (str) and a summary of the
Meat data from the MEMSS package. Use a cross-tabulation to discover whether
Pair and Block are nested, partially crossed or completely crossed.
2.11. Use a similar cross-tabulation to determine whether Pair and Storage are
nested, partially crossed or completely crossed.
2.12. Fit a model of the score in the Meat data with random effects for Pair,
Storage and Block.
2.13. Plot the prediction intervals for each of the three sets of random effects.
2.14. Profile the parameters in this model. Create a profile zeta plot. Does
including the random effect for Block appear to be warranted? Does your con-
clusion from the profile zeta plot agree with your conclusion from examining
the prediction intervals for the random effects for Block?
2.15. Refit the model without random effects for Block. Perform a likelihood
ratio test of H0 : σ3 = 0 versus Ha : σ3 > 0. Would you reject H0 in favor of
Ha or fail to reject H0 ? Would you reach the same conclusion if you adjusted
the p-value for the test by halving it, to take into account the fact that 0 is
on the boundary of the parameter region?
2.16. Profile the reduced model (i.e. the one without random effects for Block)
and create profile zeta and profile pairs plots. Can you explain the apparent
interaction between log(σ ) and σ1 ? (This is a difficult question.)
In this data frame the response variable, Reaction, is the average of the
reaction time measurements on a given subject for a given day. The two
covariates are Days, the number of days of sleep deprivation, and Subject, the
identifier of the subject on which the observation was made.
Fig. 3.1 A lattice plot of the average reaction time versus number of days of sleep
deprivation by subject for the sleepstudy data. Each subject’s data are shown in a
separate panel, along with a simple linear regression line fit to the data in that panel.
The panels are ordered, from left to right along rows starting at the bottom row, by
increasing intercept of these per-subject linear regression lines. The subject number
is given in the strip above the panel.
Days
Subject 0 1 2 3 4 5 6 7 8 9
308 1 1 1 1 1 1 1 1 1 1
309 1 1 1 1 1 1 1 1 1 1
310 1 1 1 1 1 1 1 1 1 1
330 1 1 1 1 1 1 1 1 1 1
331 1 1 1 1 1 1 1 1 1 1
332 1 1 1 1 1 1 1 1 1 1
333 1 1 1 1 1 1 1 1 1 1
334 1 1 1 1 1 1 1 1 1 1
335 1 1 1 1 1 1 1 1 1 1
337 1 1 1 1 1 1 1 1 1 1
349 1 1 1 1 1 1 1 1 1 1
350 1 1 1 1 1 1 1 1 1 1
351 1 1 1 1 1 1 1 1 1 1
352 1 1 1 1 1 1 1 1 1 1
369 1 1 1 1 1 1 1 1 1 1
370 1 1 1 1 1 1 1 1 1 1
371 1 1 1 1 1 1 1 1 1 1
372 1 1 1 1 1 1 1 1 1 1
In cases like this where there are several observations (10) per subject
and a relatively simple within-subject pattern (more-or-less linear) we may
want to examine coefficients from within-subject fixed-effects fits. However,
because the subjects constitute a sample from the population of interest and
we wish to draw conclusions about typical patterns in the population and
the subject-to-subject variability of these patterns, we will eventually want
to fit mixed models and we begin by doing so. In Sect.˜3.4 we will com-
pare estimates from a mixed-effects model with those from the within-subject
fixed-effects fits.
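The model display excerpted below is from a fit of the following form (a sketch;
the original call is not reproduced in this excerpt, but the formula matches the
anova output shown later in this section and the variance components shown below
correspond to a maximum likelihood fit):
> fm06 <- lmer(Reaction ~ 1 + Days + (1 + Days | Subject), sleepstudy, REML = FALSE)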
Fixed effects:
Estimate Std. Error t value
(Intercept) 251.405 6.632 37.91
Days 10.467 1.502 6.97
From the display we see that this model incorporates both an intercept and
a slope (with respect to Days) in the fixed effects and in the random effects.
Extracting the conditional modes of the random effects
> head(ranef(fm06)[["Subject"]])
(Intercept) Days
308 2.815683 9.0755341
309 -40.048489 -8.6440673
310 -38.433155 -5.5133788
330 22.832297 -4.6587506
331 21.549991 -2.9445202
332 8.815587 -0.2352092
confirms that these are vector-valued random effects. There are a total of
q = 36 random effects, two for each of the 18 subjects.
The random effects section of the model display,
Groups Name Variance Std.Dev. Corr
Subject (Intercept) 565.52 23.781
Days 32.68 5.717 0.081
Residual 654.94 25.592
indicates that there will be a random effect for the intercept and a random
effect for the slope with respect to Days at each level of Subject and, further-
more, the unconditional distribution of these random effects, B ∼ N (0, Σ ),
allows for correlation of the random effects for the same subject.
We can confirm the potential for correlation of random effects within sub-
ject in the images of Λ , Σ and L for this model (Fig.˜3.2). The matrix Λ is
block-diagonal, consisting of eighteen 2 × 2 lower triangular blocks, one for each
subject.
Fig. 3.2 Images of Λ , the relative covariance factor, Σ , the variance-covariance ma-
trix of the random effects, and L, the sparse Cholesky factor, in model fm06
The estimated subject-to-subject variation in the intercept corresponds to a
standard deviation of about 24 ms., so we would not be
surprised by intercepts as low as 200 ms. or as high as 300 ms. This range is
consistent with the reference lines shown in Figure˜3.1.
Similarly, the estimated subject-to-subject variation in the slope corre-
sponds to a standard deviation of about 5.7 ms./day so we would not be
surprised by slopes as low as 10.5 − 2 · 5.7 = −0.9 ms./day or as high as
10.5+2·5.7 = 21.9 ms./day. Again, the conclusions from these rough, “back of
the envelope” calculations are consistent with our observations from Fig.˜3.1.
The estimated residual standard deviation is about 25 ms. leading us to
expect a scatter around the fitted lines for each subject of up to ±50 ms.
From Figure˜3.1 we can see that some subjects (309, 372 and 337) appear
to have less variation than ±50 ms. about their within-subject fit but others
(308, 332 and 331) may have more.
Finally, we see the estimated within-subject correlation of the random ef-
fect for the intercept and the random effect for the slope is very low, 0.081,
confirming our impression that there is little evidence of a systematic rela-
tionship between these quantities. In other words, observing a subject’s initial
reaction time does not give us much information for predicting whether their
reaction time will be strongly affected by each day of sleep deprivation or
not. It seems reasonable that we could get nearly as good a fit from a model
that does not allow for correlation, which we describe next.
One might expect that writing the random-effects terms as (1|Subject) +
(Days|Subject) would specify such an uncorrelated model, but it does not.
Because the intercept is implicit in linear models, the second random-effects
term is equivalent to (1+Days|Subject) and will, by itself, produce
correlated, vector-valued random effects.
We must suppress the implicit intercept in the second random-effects term,
which we do by writing it as (0+Days|Subject), read as “no intercept and
Days by Subject”. An alternative expression for Days without an intercept by
Subject is (Days - 1 | Subject). Using the first form we have
> (fm07 <- lmer(Reaction ~ 1 + Days + (1|Subject) + (0+Days|Subject),
+ sleepstudy, REML=FALSE))
Fixed effects:
Estimate Std. Error t value
(Intercept) 251.405 6.708 37.48
Days 10.467 1.519 6.89
As in model fm06, there are two random effects for each subject
> head(ranef(fm07)[["Subject"]])
(Intercept) Days
308 1.854656 9.2364346
309 -40.022307 -8.6174730
310 -38.723163 -5.4343801
330 23.903319 -4.8581939
331 22.396321 -3.1048404
332 9.052001 -0.2821598
Fig. 3.3 Images of Λ , the relative covariance factor, Σ , the variance-covariance ma-
trix of the random effects, and L, the sparse Cholesky factor, in model fm07
Fig. 3.4 Images of ZT for models fm06 (upper panel) and fm07 (lower panel)
Let us consider these columns in more detail, starting with the columns of Z
for model fm07. The first 18 columns (rows in the bottom panel of Fig.˜3.4)
are the indicator columns for the Subject factor, as we would expect from the
simple, scalar random-effects term (1|Subject). The pattern of zeros and non-
zeros in the second group of 18 columns is determined by the indicators of the
grouping factor, Subject, and the values of the non-zeros are determined by
the Days covariate. In other words, these columns are formed by the interaction
of the numeric covariate, Days, and the categorical covariate, Subject.
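A small sketch (ours, not from the text; it assumes the sleepstudy data are
attached) that reproduces these two blocks of columns directly from the data:
> Zint  <- model.matrix(~ 0 + Subject, sleepstudy)  # indicator columns for (1|Subject)
> Zdays <- Zint * sleepstudy$Days                   # indicators scaled by Days, for (0+Days|Subject)
> dim(cbind(Zint, Zdays))                           # 180 rows and 36 columns, as in Z for fm07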
The non-zero values in the model matrix Z for model fm06 are the same
as those for model fm07 but the columns are in a different order. Pairs of
columns associated with the same level of the grouping factor are adjacent.
One way to think of the process of generating these columns is to extend the
idea of an interaction between Subject and a single covariate to the whole term,
realizing that the linear model expression, Days, actually generates two
columns because of the implicit intercept.
Whether or not to include an explicit intercept term in a model formula
is a matter of personal taste. Many people prefer to write the intercept ex-
plicitly so as to emphasize the relationship between terms in the formula and
coefficients or random effects in the model. Others omit these implicit terms
so as to economize on the amount of typing required. Either approach can
be used. The important point to remember is that the intercept must be
explicitly suppressed when you don’t want it in a term.
Also, the intercept term must be explicit when it is the only term in the
expression. That is, a simple, scalar random-effects term must be written as
(1|fac) because a term like (|fac) is not syntactically correct. However, we
can omit the intercept from the fixed-effects part of the model formula if we
have any random-effects terms. That is, we could write the formula for model
fm1 in Chap.˜1 as
Yield ~ (1 | Batch)
or even
Yield ~ 1 | Batch
although omitting the parentheses around a random-effects term is risky.
Because of operator precedence, the vertical bar operator, |, takes essentially
everything in the expression to the left of it as its first operand. It is advisable
always to enclose such terms in parentheses so the scope of the operands to
the | operator is clearly defined.
Returning to models fm06 and fm07 for the sleepstudy data, it is easy to see
that these are nested models because fm06 is reduced to fm07 by constraining
the within-group correlation of random effects to be zero (which is equivalent
to constraining the element below the diagonal in the 2 × 2 lower triangular
blocks of Λ in Fig.˜3.2 to be zero).
We can use a likelihood ratio test to compare these fitted models.
> anova(fm07, fm06)
Data: sleepstudy
Models:
fm07: Reaction ~ 1 + Days + (1 | Subject) + (0 + Days | Subject)
fm06: Reaction ~ 1 + Days + (1 + Days | Subject)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm07 5 1762.0 1778.0 -876.00
fm06 6 1763.9 1783.1 -875.97 0.0639 1 0.8004
Fig. 3.5 Profile zeta plot for each of the parameters in model fm07. The vertical lines
are the endpoints of 50%, 80%, 90%, 95% and 99% profile-based confidence intervals
for each parameter.
Plots of the profile ζ for the parameters in model fm07 (Fig.˜3.5) show that
confidence intervals on σ1 and σ2 will be slightly skewed; those for log(σ ) will
be symmetric and well-approximated by methods based on quantiles of the
standard normal distribution and those for the fixed-effects parameters, β1
and β2 will be symmetric and slightly over-dispersed relative to the standard
normal. For example, the 95% profile-based confidence intervals are
> confint(pr07)
2.5 % 97.5 %
.sig01 0.7342443 2.2872576
.lsig -0.2082812 0.3293787
(Intercept) 7.4238425 9.6872687
TypeT2 2.8953043 4.8824734
TypeT3 1.2286377 3.2158068
TypeT4 -0.3269179 1.6602512
The profile pairs plot (Fig.˜3.6) shows, for the most part, the usual pat-
terns. First, consider the panels below the diagonal, which are on the (ζi , ζ j )
scales. The ζ pairs for log(σ ) and β0 , in the (4, 3) panel, and for log(σ ) and
β1 , in the (5, 3) panel, show the ideal pattern. The profile traces are straight
and orthogonal, producing interpolated contours on the ζ scale that are con-
centric circles centered at the origin. When mapped back to the scales of
log(σ ) and β0 or β1 , in panels (3, 4) and (3, 5), these circles become slightly
distorted, but this is only due to the moderate nonlinearity in the profile ζ
plots for these parameters.
Examining the profile traces on the ζ scale for log(σ ) versus σ1 , the (3, 1)
panel, or versus σ2 , the (3, 2) panel, and for σ1 versus σ2 , the (2, 1) panel,
we see that close to the estimate the traces are orthogonal but as one vari-
ance component becomes small there is usually an increase in the others.
In some sense the total variability in the response will be partitioned across
the contribution of the fixed effects and the variance components. In each of
Fig. 3.6 Profile pairs plot for the parameters in model fm07. The contour lines
correspond to marginal 50%, 80%, 90%, 95% and 99% confidence regions based on
the likelihood ratio. Panels below the diagonal represent the (ζi , ζ j ) parameters; those
above the diagonal represent the original parameters.
these panels the fixed-effects parameters are at their optimal values, condi-
tional on the values of the variance components, and the variance components
must compensate for each other. If one is made smaller then the others must
become larger to compensate.
The patterns in the (4, 1) panel (σ1 versus β0 , on the ζ scale) and the (5, 2)
panel (σ2 versus β1 , on the ζ scale) are what we have come to expect. As the
fixed-effects parameter is moved from its estimate, the standard deviation
of the corresponding random effect increases to compensate. The (5, 1) and
(4, 2) panels show that changing the value of a fixed effect doesn’t change the
estimate of the standard deviation of the random effects corresponding to
the other fixed effect, which makes sense although the perfect orthogonality
shown here will probably not be exhibited in models fit to unbalanced data.
In some ways the most interesting panels are those for the pair of fixed-
effects parameters: (5, 4) on the ζ scale and (4, 5) on the original scale. The
traces are not orthogonal. In fact the slopes of the traces at the origin of
the (5, 4) (ζ scale) panel are the correlation of the fixed-effects estimators
(−0.194 for this model) and its inverse. However, as we move away from
the origin on one of the traces in the (5, 4) panel it curves back toward the
horizontal axis (for the horizontal trace) or the vertical axis (for the vertical
trace). In the ζ scale the individual contours are still concentric ellipses but
their eccentricity changes from contour to contour. The innermost contours
have greater eccentricity than the outermost contours. That is, the outermost
contours are more like circles than are the innermost contours.
In a fixed-effects model the shapes of projections of deviance contours onto
pairs of fixed-effects parameters are consistent across contours, and the
profile traces in the original scale will always be straight lines. For mixed
models these traces can fail to be linear, as we see here, contradicting the
widely-held belief that inferences for the fixed-effects parameters in linear
mixed models, based on T or F distributions with suitably adjusted degrees
of freedom, will be completely accurate. The actual patterns of deviance
contours are more complex than that.
The result of applying ranef to a fitted linear mixed model is a list of data
frames. The components of the list correspond to the grouping factors in the
random-effects terms, not to the terms themselves. Model fm07 is the first
model we have fit with more than one term for the same grouping factor
where we can see the combination of random effects from more than one
term.
> str(rr1 <- ranef(fm07))
List of 1
$ Subject:'data.frame': 18 obs. of 2 variables:
..$ (Intercept): num [1:18] 1.85 -40.02 -38.72 23.9 22.4 ...
..$ Days : num [1:18] 9.24 -8.62 -5.43 -4.86 -3.1 ...
- attr(*, "class")= chr "ranef.mer"
The plot method for "ranef.mer" objects produces one plot for each grouping
factor. For scalar random effects the plot is a normal probability plot. For
two-dimensional random effects, including the case of two scalar terms for the
same grouping factor, as in this model, the plot is a scatterplot. For three or
more random effects per level of the grouping factor, the plot is a scatterplot
matrix. The left hand panel in Fig.˜3.7 was created with plot(ranef(fm07)).
Fig. 3.7 Plot of the conditional modes of the random effects for model fm07 (left
panel) and the corresponding subject-specific coefficients (right panel)
The coef method for a fitted lmer model combines the fixed-effects esti-
mates and the conditional modes of the random effects, whenever the column
names of the random effects correspond to the names of coefficients. For model
fm07 the fixed-effects coefficients are (Intercept) and Days and the columns
of the random effects match these names. Thus we can calculate some kind
of per-subject “estimates” of the slope and intercept and plot them, as in the
right hand panel of Fig.˜3.7. By comparing the two panels in Fig.˜3.7 we can
see that the result of the coef method is simply the conditional modes of the
random effects shifted by the coefficient estimates.
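We can verify this relationship numerically; a minimal check (ours, not from the
text, using the fm07 fit above) is
> cc <- coef(fm07)[["Subject"]]
> rr <- ranef(fm07)[["Subject"]]
> ## each per-subject coefficient is the fixed-effects estimate plus the conditional mode
> all.equal(cc[["Days"]], fixef(fm07)[["Days"]] + rr[["Days"]])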
It is not entirely clear how we should interpret these values. They are a
combination of parameter estimates with the modal values of random vari-
ables and, as such, are in a type of “no man’s land” in the probability model.
(In the Bayesian approach˜[Box and Tiao, 1973] to inference, however, both
the parameters and the random effects are random variables and the inter-
pretation of these values is straightforward.) Despite the difficulties of inter-
pretation in the probability model, these values are of interest because they
determine the fitted response for each subject.
Because responses for each individual are recorded on each of ten days
we can determine the within-subject estimates of the slope and intercept
(that is, the slope and intercept of each of the lines in Fig.˜3.1). In Fig.˜3.8
we compare the within-subject least squares estimates to the per-subject
slope and intercept calculated from model fm07. We see that, in general, the
per-subject slopes and intercepts from the mixed-effects model are closer to
the population estimates than are the within-subject least squares estimates.
This pattern is sometimes described as a shrinkage of coefficients toward the
population values.
Fig. 3.8 Comparison of the within-subject estimates of the intercept and slope for
each subject and the conditional modes of the per-subject intercept and slope. Each
pair of points joined by an arrow shows the within-subject estimate and the
conditional mode of the subject-specific coefficients for a particular subject. The arrow points from the within-
subject estimate to the conditional mode for the mixed-effects model. The subject
identifier number is at the head of each arrow.
The term “shrinkage” may have negative connotations. John Tukey chose
to characterize this process in terms of the estimates for individual subjects
“borrowing strength” from each other. This is a fundamental difference in the
models underlying mixed-effects models versus strictly fixed-effects models.
In a mixed-effects model we assume that the levels of a grouping factor are a
selection from a population and, as a result, can be expected to share charac-
teristics to some degree. Consequently, the predictions from a mixed-effects
model are attenuated relative to those from strictly fixed-effects models.
The predictions from model fm07 and from the within-subject least squares
fits for each subject are shown in Fig.˜3.9.
Fig. 3.9 Comparison of the predictions from the within-subject fits with those from
the conditional modes of the subject-specific parameters in the mixed-effects model.
It may seem that the shrinkage
from the per-subject estimates toward the population estimates depends only
on how far the per-subject estimates (solid lines) are from the population es-
timates (dot-dashed lines). However, careful examination of this figure shows
that there is more at work here than a simple shrinkage toward the popula-
tion estimates proportional to the distance of the per-subject estimates from
the population estimates.
It is true that the mixed model estimates for a particular subject are
“between” the within-subject estimates and the population estimates, in the
sense that the arrows in Fig.˜3.8 all point somewhat in the direction of the
population estimate. However, the extent of the attenuation of the within-
subject estimates toward the population estimates is not simply related to the
distance between those two sets of estimates. Consider the two panels, labeled
330 and 337, at the top right of Fig.˜3.9. The within-subject estimates for 337
are quite unlike the population estimates but the mixed-model estimates are
very close to these within-subject estimates. That is, the solid line and the
dashed line in that panel are nearly coincident and both are a considerable
distance from the dot-dashed line. For subject 330, however, the dashed line is
more-or-less an average of the solid line and the dot-dashed line, even though
the solid and dot-dashed lines are not nearly as far apart as they are for
subject 337.
The difference between these two cases is that the within-subject estimates
for 337 are very well determined. Even though this subject had an unusually
large intercept and slope, the overall pattern of the responses is very close to
a straight line. In contrast, the overall pattern for 330 is not close to a straight
line so the within-subject coefficients are not well determined. The multiple
R2 for the solid line in the 337 panel is 93.3% but in the 330 panel it is only
15.8%. The mixed model can pull the predictions in the 330 panel, where
the data are quite noisy, closer to the population line without increasing the
residual sum of squares substantially. When the within-subject coefficients
are precisely estimated, as in the 337 panel, very little shrinkage takes place.
We see from Fig.˜3.9 that the mixed-effects model smooths out the
between-subject differences in the predictions by bringing them closer to a
common set of predictions, but not at the expense of dramatically increasing
the sum of squared residuals. That is, the predictions are determined so as to
balance fidelity to the data, measured by the residual sum of squares, with
simplicity of the model. The simplest model would use the same prediction
in each panel (the dot-dashed line) and the most complex model, based on
linear relationships in each panel, would correspond to the solid lines. The
dashed lines are between these two extremes. We will return to this view of
the predictions from mixed models balancing complexity versus fidelity in
Sect.˜5.3, where we make the mathematical nature of this balance explicit.
We should also examine the prediction intervals on the random effects
(Fig.˜3.10) where we see that many prediction intervals overlap zero but
there are several that do not. In this plot the subjects are ordered from
bottom to top according to increasing conditional mode of the random effect
for (Intercept). The resulting pattern in the conditional modes of the random
effect for Days reinforces our conclusion that the model fm07, which does not
allow for correlation of the random effects for (Intercept) and Days, is suitable.
Fig. 3.10 Prediction intervals on the random effects for model fm07.
Problems
(a) Create an xyplot of the distance versus age by Subject for the female sub-
jects only. You can use the optional argument subset = Sex == "Female"
in the call to xyplot to achieve this. Use the optional argument type =
c("g","p","r") to add reference lines to each panel.
(b) Enhance the plot by choosing an aspect ratio for which the typical slope of
the reference line is around 45°. You can set it manually (something like
aspect = 4) or with an automatic specification (aspect = "xy"). Change
the layout so the panels form one row (layout = c(11,1)).
(c) Order the panels according to increasing response at age 8. This is
achieved with the optional argument index.cond which is a function of
arguments x and y. In this case you could use index.cond = function(x,y)
y[x == 8]. Add meaningful axis labels, such as “Age (yr)” for the horizontal axis.
(d) Fit a linear mixed model to the data for the females only with random
effects for the intercept and for the slope by subject, allowing for corre-
lation of these random effects within subject. Relate the fixed effects and
the random effects’ variances and covariances to the variability shown in
the figure.
(e) Produce a “caterpillar plot” of the random effects for intercept and slope.
Does the plot indicate correlated random effects?
(f) Consider what the Intercept coefficient and random effects represent.
What will happen if you center the ages by subtracting 8 (the baseline
year) or 11 (the middle of the age range)?
(g) Repeat for the data from the male subjects.
3.2.
Fit a model to both the female and the male subjects in the Orthodont data
set, allowing for differences by sex in the fixed-effects for intercept (probably
with respect to the centered age range) and slope.
Milliken and Johnson [2009, Table 23.1] discuss data from an experiment
to measure productivity on a manufacturing task according to the type of
machine used and the operator. No further details on the experiment are
given and it is possible that these data, which are available as Machines in the
MEMSS package, were constructed for illustration and are not observed data
from an actual experiment.
Fig. 4.1 A quality and productivity score for each of six operators (the Worker factor)
on each of three machine types.
> str(Machines)
> xtabs(~ Machine + Worker, Machines)
        Worker
Machine 1 2 3 4 5 6
A 3 3 3 3 3 3
B 3 3 3 3 3 3
C 3 3 3 3 3 3
The cross-tabulation shows that each of the six operators used each of
the three machines on three occasions producing replicate observations of the
“subject-stimulus” combinations. Although the operators represent a sample
from the population of potential operators, the three machines are the specific
machines of interest. That is, we regard the levels of Machine as fixed levels
and the levels of Worker as a random sample from a population. In other
studies we may regard the levels of the stimulus as a random sample from a
population of stimuli.
A plot of these data (Fig.˜4.1) shows high reproducibility of measurements
on the same operator-machine combination. On each line the scores on a
particular machine are tightly clustered. There are considerable, apparently
systematic, differences between machines and somewhat smaller differences
between operators except for one unusual combination, operator 6 on machine
B. The pattern for operator 6 is very different from the pattern for the other
operators.
We fit and compare three models for these data: fm10 without interactions,
fm11 with vector-valued random effects to allow for interactions, and fm12 with
interactions incorporated into a second simple scalar random-effects term.
> fm10 <- lmer(score ~ Machine + (1|Worker), Machines, REML=FALSE)
> fm11 <- lmer(score ~ Machine + (Machine|Worker), Machines, REML=FALSE)
> fm12 <- lmer(score ~ Machine + (1|Worker) + (1|Machine:Worker), Machines, REML=FALSE)
> anova(fm10, fm11, fm12)
Data: Machines
Models:
fm10: score ~ Machine + (1 | Worker)
fm12: score ~ Machine + (1 | Worker) + (1 | Machine:Worker)
fm11: score ~ Machine + (Machine | Worker)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm10 5 303.70 313.65 -146.85
fm12 6 237.27 249.20 -112.64 68.4338 1 < 2e-16
fm11 10 236.42 256.31 -108.21 8.8516 4 0.06492
Notice that in the anova summary the order of the models has been re-
arranged according to the complexity of the models, as measured by the
number of parameters to be estimated. The simplest model, fm10, incorpo-
rates three fixed-effects parameters and two variance components. Model fm12,
with scalar random effects for Worker and for the Machine:Worker combination
incorporates one additional variance component for a total of six parameters,
while model fm11 adds five variance-component parameters to those in fm10,
for a total of 10 parameters.
In the comparison of models fm10 and fm12 (i.e. the line labeled fm12 in the
anova table) we see that the additional parameter is highly significant. The
change in the deviance of 68.4338 (in the column labeled Chisq) at a cost of
one additional parameter is huge; hence we prefer the more complex model
fm12. In the next line, which is the comparison of the more complex model
fm11 to the simpler model fm12, the change in the deviance is 8.8516 at a
cost of 4 additional parameters with a p-value of 6.5%. In formal hypothesis
tests we establish a boundary, often chosen to be 5%, below which we regard
the p-value as providing “significant” evidence to prefer the more complex
model and above which the results are regarded as representing an “insignif-
icant” improvement. Such boundaries, while arbitrary, help us to assess the
numerical results and here we prefer model fm12, of intermediate complexity.
Fig. 4.2 Subjective evaluation of the effort required to arise (on the Borg scale) by
9 subjects, each of whom tried each of four types of stool.
Problems˜2.1 and 2.2 in Chap.˜2 involve examining the structure of the er-
goStool data from the MEMSS package
> str(ergoStool)
and plotting these data, as in Fig.˜4.2. These data are from an ergometrics
experiment where nine subjects evaluated the difficulty to arise from each of
four types of stools. The measurements are on the scale of perceived exertion
developed by the Swedish physician and researcher Gunnar Borg. Measure-
ments on this scale are in the range 6-20 with lower values indicating less
exertion.
From Fig.˜4.2 we can see that all nine subjects rated type T1 or type T4 as
requiring the least exertion and rated type T2 as requiring the most exertion.
Type T3 was perceived as requiring comparatively little exertion by some
subjects (H and E) and comparatively greater exertion by others (F, C and
G).
Problem˜2.3 involves fitting and evaluating a model in which the effects
of both the Subject and the Type factors are incorporated as random effects.
Such a model may not be appropriate for these data where we wish to make
inferences about these particular four stool types. According to the distinction
between fixed- and random-effects described in Sect.˜1.1, if the levels of the
Type factor are fixed and reproducible we generally incorporate the factor in
the fixed-effects part of the model.
Before doing so, let’s review the results of fitting a linear mixed model
with random effects for both Subject and Type.
A model with random effects for both Subject and Type is fit in the same way
that we fit such models in Chap.˜2,
> (fm06 <- lmer(effort ~ 1 + (1|Subject) + (1|Type), ergoStool, REML=FALSE))
Fixed effects:
Estimate Std. Error t value
(Intercept) 10.2500 0.8883 11.54
from which we determine that the mean effort to arise, across stool types and
across subjects, is 10.250 on this scale, with standard deviations of 1.305 for
the random-effects for the Subject factor and 1.505 for the Type factor.
One question we would want to address is whether there are “significant”
differences between stool types, taking into account the differences between
subjects. We could approach this question by fitting a reduced model, with-
Fig. 4.3 Profile zeta plot for the parameters in model fm06 fit to the ergoStool data
out random effects for Type, and comparing this fit to model fm06 using a
likelihood-ratio test.
> fm06a <- lmer(effort ~ 1 + (1|Subject), ergoStool, REML=FALSE)
> anova(fm06a, fm06)
Data: ergoStool
Models:
fm06a: effort ~ 1 + (1 | Subject)
fm06: effort ~ 1 + (1 | Subject) + (1 | Type)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm06a 3 164.15 168.90 -79.075
fm06 4 144.02 150.36 -68.011 22.128 1 2.551e-06
The p-value in this test is very small, indicating that the more complex model,
fm06, which allows for differences in the effort to arise for the different stool
types, provides a significantly better fit to the observed data.
In Sect.˜2.2.4 we indicated that, because the constraint on the reduced
model, σ2 = 0, is on the boundary of the parameter space, the p-value for
this likelihood ratio test statistic calculated using a χ₁² reference distribution
will be conservative. That is, the p-value one would obtain by, say, simulation
from the null distribution, would be even smaller than the p-value, 0.0000026,
reported by this test, which is already very small.
Thus the evidence against the null hypothesis (H0 : σ2 = 0) and in favor of
the alternative, richer model (Ha : σ2 > 0) is very strong.
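As a quick check, not shown in the original text, halving the χ₁² p-value (the
boundary adjustment mentioned in Exercise 2.15) only strengthens this conclusion:
> ## chi-square statistic from the anova table above, p-value halved for the boundary
> pchisq(22.128, df = 1, lower.tail = FALSE) / 2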
Another way of addressing the question of whether it is reasonable for σ2
to be zero is to profile fm06 and examine profile zeta plots (Fig.˜4.3) and the
corresponding profile pairs plot (Fig.˜4.4).
We can see from the profile zeta plot (Fig.˜4.3) that both σ1 , the standard
deviation of the Subject random effects, and, σ2 , the standard deviation of
the Type random effects, are safely non-zero. We also see that σ2 is very
poorly determined. That is, a 95% profile-based confidence interval on this
parameter, obtained as
Fig. 4.4 Profile pairs plot for the parameters in model fm06 fit to the ergoStool
data
> confint(pr06)[".sig02",]
2.5 % 97.5 %
0.7925434 3.7958505
is very wide. The upper end point of this 95% confidence interval, 3.796, is
more than twice as large as the estimate, σ̂2 = 1.505.
A plot of the prediction intervals on the random effects for Type (Fig.˜4.5)
confirms the impression from Fig.˜4.2 regarding the stool types. Type T2
requires the greatest effort and type T1 requires the least effort. There is con-
siderable overlap of the prediction intervals for types T1 and T4 and somewhat
less overlap between types T4 and T3 and between types T3 and T2.
In an analysis like this we begin by asking if there are any significant dif-
ferences between the stool types, which we answered for this model by testing
the hypothesis H0 : σ2 = 0 versus Ha : σ2 > 0. If we reject H0 in favor of Ha
Fig. 4.5 95% prediction intervals on the random effects for Type from model fm06
fit to the ergoStool data
— that is, if we conclude that the more complex model including random
effects for Type provides a significantly better fit than the simpler model —
then usually we want to follow up with the question, “Which stool types are
significantly different from each other?”. It is possible, though not easy, to
formulate an answer to that question from a model fit such as fm06 in which
the stool types are modeled with random effects, but it is more straightfor-
ward to address that question when we model the stool types as fixed-effects
parameters, which we do next.
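The fixed-effects table shown next comes from a fit of the following form (a
sketch inferred from the profiled fits later in this section; the original call is
not reproduced in this excerpt):
> fm07 <- lmer(effort ~ 1 + Type + (1|Subject), ergoStool, REML=FALSE)
> pr07 <- profile(fm07)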
Fixed effects:
Estimate Std. Error t value
(Intercept) 8.5556 0.5431 15.754
TypeT2 3.8889 0.4890 7.952
TypeT3 2.2222 0.4890 4.544
TypeT4 0.6667 0.4890 1.363
It appears that the last three levels of the Type factor are now modeled
as fixed-effects parameters, in addition to the (Intercept) parameter, whose
estimate has decreased markedly from that in model fm06. Furthermore, the
estimates of the fixed-effects parameters labeled TypeT2, TypeT3 and TypeT4,
while positive, are very much smaller than would be indicated by the average
responses for these types.
It turns out, of course, that the fixed-effects parameters generated by a
factor covariate do not correspond to the overall mean and the effect for each
level of the covariate. Although a model for an experiment such as this is
sometimes written in a form like
y_ij = µ + α_i + b_j + ε_ij ,   i = 1, . . . , 4,  j = 1, . . . , 9      (4.1)
where i indexes the stool type and j indexes the subject, the parameters
{µ, α1 , α2 , α3 , α4 }, representing the overall mean and the effects of each of the
stool types, are redundant. Given a set of estimates for these parameters we
would not change the predictions from the model if, for example, we added
one to µ and subtracted one from all the α’s. In statistical terminology we
say that this set of parameters is not estimable unless we impose some other
conditions on them. The estimability condition ∑_{i=1}^{4} α_i = 0 is often used in
introductory texts.
The approach taken in R is not based on redundant parameters that are
subject to estimability conditions. While this approach may initially seem
reasonable, in complex models it quickly becomes unnecessarily complex to
need to use constrained optimization for parameter estimation. Instead we
incorporate the constraints into the parameters that we estimate. That is,
we reduce the redundant set of parameters to an estimable set of contrasts
between the levels of the factors.
Although the particular set of contrasts used for a categorical factor can be
controlled by the user, either as a global option for a session (see ?options) or
by the optional contrasts argument available in most model-fitting functions,
most users do not modify the contrasts, preferring to leave them at the default
setting, which is the “treatment” contrasts (contr.treatment) for an unordered
factor and orthogonal polynomial contrasts (contr.poly) for an ordered factor.
You can check the current global setting with
> getOption("contrasts")
unordered ordered
"contr.treatment" "contr.poly"
Because these were the contrasts in effect when model fm07 was fit, the
particular contrasts used for the Type factor, which has four levels, correspond
to
> contr.treatment(4)
2 3 4
1 0 0 0
2 1 0 0
3 0 1 0
4 0 0 1
In this display the rows correspond to the levels of the Type factor and the
columns correspond to the parameters labeled TypeT2, TypeT3 and TypeT4.
The values of Type in the data frame, whose first few rows are
> head(ergoStool)
combined with the contrasts produce the model matrix X, whose first few
rows are
> head(model.matrix(fm07))
We see that the rows of X for observations on stool type T1 have zeros in the
last three columns; the rows for observations on stool type T2 have a 1 in the
second column and zeros in the last two columns, and so on. As before, the
(Intercept) column is a column of 1’s.
When we evaluate Xβ in the linear predictor expression, Xβ + Zb, we take
the p elements of the fixed-effects parameter vector, β , whose estimate is
> fixef(fm07)
and the fixed-effects predictor for the second observation (stool type T2) will
be
8.5556 × 1 + 3.8889 × 1 + 2.2222 × 0 + 0.6667 × 0 = 12.4444
We see that the parameter labeled (Intercept) is actually the fixed-effects
prediction for the first level of Type (i.e. level T1) and the second parameter,
labeled TypeT2, is the difference between the fixed-effects prediction for the
second level (T2) and the first level (T1) of the Type factor.
Similarly, the fixed-effects predictions for the T3 and T4 levels of Type are
8.5556 + 2.2222 = 10.7778 and 8.5556 + 0.6667 = 9.2222, respectively, as can
be verified from
> head(as.vector(model.matrix(fm07) %*% fixef(fm07)))
The fact that the parameter labeled TypeT2 is the difference between the
fixed-effects prediction for levels T2 and T1 of the Type factor is why we refer
to the parameters as being generated by contrasts. They are formed by con-
trasting the fixed-effects predictions for some combination of the levels of the
factor. In this case the contrast is between levels T2 and T1.
In general, the parameters generated by the “treatment” contrasts (the
default for unordered factors) represent differences between the first level of
the factor, which is incorporated into the (Intercept) parameter, and the
subsequent levels. We say that the first level of the factor is the reference
level and the others are characterized by their shift relative to this reference
level.
> confint(pr07, c("TypeT2", "TypeT3", "TypeT4"))
2.5 % 97.5 %
TypeT2 2.8953043 4.882473
TypeT3 1.2286377 3.215807
TypeT4 -0.3269179 1.660251
According to these intervals, and from what we see from Fig.˜4.6, types T2
and T3 are significantly different from type T1 (the intervals do not contain
zero) but type T4 is not (the confidence interval on this contrast contains
zero).
Fig. 4.6 Profile zeta plot for the parameters in model fm07 fit to the ergoStool data
However, this process must be modified in two ways to provide a suitable
answer. The most important modification is to take into account the fact
that we are performing multiple comparisons simultaneously. We describe
what this means and how to accommodate it in the next subsection. The
other problem is that this process only allows us to evaluate contrasts of the
reference level, T1, with the other levels and the reference level is essentially
arbitrary. For completeness we should evaluate all six possible contrasts of
pairs of levels.
We can do this by refitting the model with a different reference level for
the Type factor and profiling the modified model fit. The relevel function
allows us to change the reference level of a factor.
> pr07a <- profile(lmer(effort ~ 1 + Type + (1|Subject),
+ within(ergoStool, Type <- relevel(Type, "T2")),
+ REML=FALSE))
> pr07b <- profile(lmer(effort ~ 1 + Type + (1|Subject),
+ within(ergoStool, Type <- relevel(Type, "T3")),
+ REML=FALSE))
> confint(pr07a, c("TypeT3", "TypeT4"))
2.5 % 97.5 %
TypeT3 -2.660251 -0.6730821
TypeT4 -4.215807 -2.2286377
> confint(pr07b, "TypeT4")
2.5 % 97.5 %
TypeT4 -2.54914 -0.561971
from which we would conclude that type T2 requires significantly greater effort
than any of the other types at the 5% level (because none of the 95% confi-
dence intervals on contrasts with T2 contain zero) and that types T3 and T4
are significantly different at the 5% level.
However, we must take into account that we are performing multiple, simul-
taneous comparisons of levels. With six comparisons, a Bonferroni adjustment
calls for individual intervals with coverage of 1 − 0.05/6 ≈ 0.9917,
or a little more than 99%. We can specify this coverage level for the individual
intervals to ensure a family-wise coverage of at least 95%.
> covrge <- 1 - 0.05/6
> rbind(confint(pr07, c("TypeT2","TypeT3","TypeT4"), covrge),
+ confint(pr07a, c("TypeT3","TypeT4"), covrge),
+ confint(pr07b, "TypeT4", covrge))
0.417 % 99.583 %
TypeT2 2.5109497 5.2668280
TypeT3 0.8442830 3.6001613
TypeT4 -0.7112726 2.0446058
TypeT3 -3.0446059 -0.2887275
TypeT4 -4.6001614 -1.8442831
TypeT4 -2.9334948 -0.1776164
We again reach the conclusion that the only pair of stool types for which
zero is within the confidence interval on the difference in effects is the (T1,T4)
pair but, for these intervals, the family-wise coverage of all six intervals is at
least 95%.
There are other, perhaps more effective, techniques for adjusting intervals
to take into account multiple comparisons. The purpose of this section is to
show that the profile-based confidence intervals can be extended to at least
the Bonferroni correction.
The easiest way to apply other multiple comparison adjustment methods
is to model both the Type and the Subject factors with fixed effects, which we
do next.
As seen above, the summary method for objects of class "aov" provides an
analysis of variance table. The order in which the terms are listed in the model
formula can affect the results in this table, if the data are unbalanced, and we
should be cautious to list the terms in the model in the appropriate order, even
for a balanced data set like the ergoStool. The rule is that blocking factors
should precede experimental factors because the contributions of the terms
are assessed sequentially. Thus we read the rows in this table as measuring
the variability due to the Subject factor and due to the Type factor after
taking into account the Subject. We want to assess the experimental factor
after having removed the variability due to the blocking factor.
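The call that produced fm08 is not shown in this excerpt; a hypothetical
reconstruction consistent with this ordering rule is
> fm08 <- aov(effort ~ Subject + Type, ergoStool)  # blocking factor before the experimental factor
> summary(fm08)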
If desired we can assess individual coefficients by applying the summary
method for "lm" objects, called summary.lm to this fitted model. For example,
the coefficients table is available as
> coef(summary.lm(fm08))
but often the individual coefficients are of less interest than the net effect
of the variability due to the levels of the factor, as shown in the analysis of
variance table. For example, in the summary of the coefficients shown above
the (Intercept) coefficient is the predicted response for the reference subject
(subject A) on the reference stool type (type T1). Other coefficients generated
by the Subject term are the differences from the reference subject to other
subjects. It is not clear why we would want to compare all the other subjects
to subject A.
One of the multiple comparison methods that we can apply to fm08 is
Tukey’s Honest Significant Difference (HSD) method
> TukeyHSD(fm08, which = "Type")
$Type
In the multilevel modeling literature these grouping factors are described as
levels, with the student as level 1, classroom as level 2 and school as level 3,
under the assumption that classroom
is nested within school. The concept of “levels” can only be applied to models
and data sets in which the grouping factors of the random effects form a
nested sequence. In crossed or partially crossed configurations there are no
clearly defined levels.
At this point we should check if there is implicit nesting. That is, are the
levels of the classid factor nested within schoolid factor. We could simply
create the interaction factor to avoid the possibility of implicit nesting but it
saves a bit of trouble if we check before doing so
> with(classroom, isNested(classid, schoolid))
[1] TRUE
A model with simple, scalar random effects and without any fixed-effects
terms (other than the implicit intercept) is called the “unconditional model”
in the multilevel modeling literature. We fit it as
> (fm09 <- lmer(mathgain ~ (1|classid) + (1|schoolid), classroom))
Fixed effects:
Estimate Std. Error t value
(Intercept) 57.427 1.443 39.79
The results from this model fit using the REML criterion can be compared
to Table 4.6 (page 156) of West et˜al. [2007].
It seems that the housepov value is a property of the school. We can
check this by considering the number of unique combinations of housepov
and schoolid and comparing that to the number of levels of schoolid. For
safety we check the number of levels of factor(schoolid) in case there are
unused levels in schoolid.
> with(classroom, length(levels(factor(schoolid))))
[1] 107
> nrow(unique(classroom[, c("housepov", "schoolid")]))
[1] 107
(Y |B = b) ∼ N (Xβ + Zb, σ 2 I)
B ∼ N (0, Σθ ).
(The symbol ∀ denotes “for all”.) The fact that Σθ is positive semidefinite
does not guarantee that Σθ⁻¹ exists. We would need a stronger property,
bᵀΣθ b > 0, ∀ b ≠ 0, called positive definiteness, to ensure that Σθ⁻¹ exists.
Many computational formulas for linear mixed models are written in terms
of Σθ⁻¹. Such formulas will become unstable as Σθ approaches singularity.
And it can do so. It is a fact that singular (i.e. non-invertible) Σθ can and
do occur in practice, as we have seen in some of the examples in earlier
chapters. Moreover, during the course of the numerical optimization by which
the parameter estimates are determined, it is frequently the case that the
deviance or the REML criterion will need to be evaluated at values of θ that
produce a singular Σθ . Because of this we will take care to use computational
methods that can be applied even when Σθ is singular and are stable as Σθ
approaches singularity.
As defined in (1.2) a relative covariance factor, Λθ, is any matrix that
satisfies
$$\Sigma_\theta = \sigma^2 \Lambda_\theta \Lambda_\theta^\mathsf{T}.$$
According to this definition, Σ depends on both σ and θ and we should
write it as Σσ,θ. However, we will blur that distinction and continue to write
Var(B) = Σθ. Another technicality is that the common scale parameter, σ,
can, in theory, be zero. We will show that in practice the only way for its
estimate, σ̂, to be zero is for the fitted values from the fixed effects only, Xβ̂,
to be exactly equal to the observed data. This occurs only with data that
have been (incorrectly) simulated without error. In practice we can safely
assume that σ > 0. However, Λθ, like Σθ, can be singular.
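As a small illustration (ours, not from the text) of why singularity of Σθ need not be a problem for methods that work with Λθ directly, consider a lower-triangular Λθ with a zero on its diagonal:
> Lambda <- matrix(c(0.5, 0.25, 0, 0), 2, 2)  # lower triangular, second diagonal element 0
> Sigma  <- tcrossprod(Lambda)                # Lambda %*% t(Lambda), i.e. Sigma_theta up to sigma^2
> det(Sigma)                                  # exactly 0: Sigma is singular
> try(solve(Sigma))                           # inverting Sigma fails ...
> crossprod(Lambda)                           # ... but computations with Lambda itself proceed normally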
Our computational methods are based on Λθ and do not require evaluation
of Σθ . In fact, Σθ is explicitly evaluated only at the converged parameter
estimates.
The spherical random effects, U ∼ N(0, σ²Iq), determine B as
$$B = \Lambda_\theta\, U, \qquad (5.2)$$
and
$$(Y|U = u) \sim N(Z\Lambda_\theta u + X\beta,\ \sigma^2 I_n), \qquad U \sim N(0, \sigma^2 I_q). \qquad (5.5)$$
γ = ZΛθ u + Xβ (5.6)
µ = E [Y |U = u] . (5.7)
For a linear mixed model µ = γ. In other forms of mixed models the condi-
tional mean, µ, can be a nonlinear function of the linear predictor, γ. For
some models the dimension of γ is a multiple of n, the dimension of µ and y,
but for a linear mixed model the dimension of γ must be n. Hence, the model
matrix Z must be n × q and X must be n × p.
To evaluate the likelihood we would first determine the joint density of U and Y,
written fU,Y(u, y), then integrate this density with respect to u to create the
marginal density, fY(y), and finally evaluate this marginal density at yobs.
To allow for later generalizations we will change the order of these steps
slightly. We evaluate the joint density function, fU ,Y (u, y), at yobs , producing
the unnormalized conditional density, h(u). We say that h is “unnormalized”
because the conditional density is a multiple of h,
$$f_{U|Y}(u|y_{\mathrm{obs}}) = \frac{h(u)}{\int_{\mathbb{R}^q} h(u)\, du}. \qquad (5.8)$$
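A one-dimensional numerical illustration of (5.8) (ours, not from the text): any non-negative function with a finite integral can be normalized into a density by dividing by that integral.
> h <- function(u) exp(-(u - 1)^2/2) * exp(-u^2/2)  # an unnormalized product of two Gaussian kernels
> const <- integrate(h, -Inf, Inf)$value             # the denominator in (5.8), here a 1-d integral
> f <- function(u) h(u)/const                        # a proper density
> integrate(f, -Inf, Inf)$value                      # 1, up to numerical error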
For a linear mixed model, where all the distributions of interest are mul-
tivariate Gaussian and the conditional mean, µ, is a linear function of both
u and β , the distinction between evaluating the joint density at yobs to pro-
duce h(u) then integrating with respect to u, as opposed to first integrating
the joint density then evaluating at yobs , is not terribly important. For other
mixed models this distinction can be important. In particular, generalized linear
mixed models, described in a later chapter, are often used to model a discrete
response, such as a binary response or a count, leading to a joint distribution
for Y and U that is discrete with respect to one variable, y, and contin-
uous with respect to the other, u. In such cases there isn’t a joint density
for Y and U . The necessary distribution theory for general y and u is well-
defined but somewhat awkward to describe. It is much easier to realize that
we are only interested in the observed response vector, yobs , not some arbi-
trary value of y, so we can concentrate on the conditional distribution of U
given Y = yobs . For all the mixed models we will consider, the conditional
distribution, (U |Y = yobs ), is continuous and both the conditional density,
fU |Y (u|yobs ), and its unnormalized form, h(u), are well-defined.
The integral defining the likelihood in (5.9) has a closed form in the case of
a linear mixed model but not for some of the more general forms of mixed
models. To motivate methods for approximating the likelihood in more gen-
eral situations, we describe in some detail how the integral can be evaluated
using the sparse Cholesky factor, Lθ , and the conditional mode,
$$\tilde{u} = \arg\max_u f_{U|Y}(u|y_{\mathrm{obs}}) = \arg\max_u h(u) = \arg\max_u f_{Y|U}(y_{\mathrm{obs}}|u)\, f_U(u). \qquad (5.10)$$
The notation arg maxu means that ũ is the value of u that maximizes the
expression that follows.
In general, the mode of a continuous distribution is the value of the ran-
dom variable that maximizes the density. The value ũ is called the conditional
mode of u, given Y = yobs , because ũ maximizes the conditional density of
U given Y = yobs . The location of the maximum can be determined by max-
imizing the unnormalized conditional density because h(u) is just a constant
multiple of fU |Y (u|yobs ). The last part of (5.10) is simply a re-expression of
h(u) as the product of fY |U (yobs |u) and fU (u). For a linear mixed model
these densities are
$$f_{Y|U}(y|u) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\left(-\frac{\|y - X\beta - Z\Lambda_\theta u\|^2}{2\sigma^2}\right) \qquad (5.11)$$
$$f_U(u) = \frac{1}{(2\pi\sigma^2)^{q/2}} \exp\!\left(-\frac{\|u\|^2}{2\sigma^2}\right) \qquad (5.12)$$
with product
$$h(u) = \frac{1}{(2\pi\sigma^2)^{(n+q)/2}} \exp\!\left(-\frac{\|y_{\mathrm{obs}} - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2}{2\sigma^2}\right). \qquad (5.13)$$
a smoothing objective, in the sense that it seeks to smooth out the fitted
response by reducing model complexity while still retaining reasonable fidelity
to the observed data.
For the purpose of evaluating the likelihood we will regard the PRSS crite-
rion as a function of the parameters, given the data, and write its minimum
value as
$$r^2_{\theta,\beta} = \min_u \left(\|y_{\mathrm{obs}} - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2\right). \qquad (5.16)$$
Notice that β only enters the right hand side of (5.16) through the linear
predictor expression. We will see that ũ can be determined by a direct (i.e.
non-iterative) calculation and, in fact, we can minimize the PRSS criterion
with respect to u and β simultaneously without iterating. We write this
minimum value as
$$r^2_{\theta} = \min_{u,\beta} \left(\|y_{\mathrm{obs}} - X\beta - Z\Lambda_\theta u\|^2 + \|u\|^2\right). \qquad (5.17)$$
One way of expressing a penalized least squares problem like (5.16) is by incor-
porating the penalty as “pseudo-data” in an ordinary least squares problem.
We extend the “response vector”, which is yobs − Xβ when we minimize with
respect to u only, with q responses that are 0 and we extend the predictor
expression, ZΛθ u with Iq u. Writing this as a least squares problem produces
$$\tilde{u} = \arg\min_u \left\| \begin{bmatrix} y_{\mathrm{obs}} - X\beta \\ 0 \end{bmatrix} - \begin{bmatrix} Z\Lambda_\theta \\ I_q \end{bmatrix} u \right\|^2 \qquad (5.18)$$
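A small numerical check (ours, not from the text) that the pseudo-data formulation (5.18) gives the same ũ as solving the normal equations of the penalized problem directly:
> set.seed(1)
> n <- 10; q <- 3
> ZL <- matrix(rnorm(n*q), n, q)        # stands in for Z Lambda_theta
> r  <- rnorm(n)                        # stands in for yobs - X beta
> Xaug <- rbind(ZL, diag(q))            # augmented model matrix from (5.18)
> yaug <- c(r, rep(0, q))               # augmented response: q pseudo-observations of 0
> u1 <- qr.solve(Xaug, yaug)            # least squares on the augmented system
> u2 <- solve(crossprod(ZL) + diag(q), crossprod(ZL, r))  # (Lam'Z'Z Lam + I) u = Lam'Z'r
> all.equal(u1, drop(u2))               # TRUE, up to numerical error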
$$L_\theta L_\theta^\mathsf{T} = \Lambda_\theta^\mathsf{T} Z^\mathsf{T} Z \Lambda_\theta + I_q. \qquad (5.20)$$
In earlier chapters we have seen that often the random effects vector is re-
ordered before Lθ is created. The re-ordering or permutation of the elements
of u and, correspondingly, the columns of the model matrix, ZΛθ , does not
affect the theory of linear mixed models but can have a profound effect on
the time and storage required to evaluate Lθ in large problems. We write the
effect of the permutation as multiplication by a q × q permutation matrix, P,
although in practice we apply the permutation without ever constructing P.
That is, the matrix P is only a notational convenience.
The matrix P consists of permuted columns of the identity matrix, Iq ,
and it is easy to establish that the inverse permutation corresponds to mul-
tiplication by PT . Because multiplication by P or by PT simply re-orders the
components of a vector, the length of the vector is unchanged. Thus,
$$\|u\|^2 = \|Pu\|^2 = \|P^\mathsf{T} u\|^2,$$
and we can express the penalty in (5.17) in any of these three forms. The
properties of P, namely that it preserves lengths of vectors and that its transpose
is its inverse, are summarized by stating that P is an orthogonal matrix.
The permutation represented by P is determined from the structure of
Λθᵀ Zᵀ ZΛθ + Iq for some initial value of θ. The particular value of θ does not
affect the result because the permutation depends only on the positions of the
non-zeros, not on the numerical values at these positions.
Taking into account the permutation, the sparse Cholesky factor, Lθ , is
defined to be the sparse, lower triangular, q × q matrix with positive diagonal
elements satisfying
$$L_\theta L_\theta^\mathsf{T} = P\left(\Lambda_\theta^\mathsf{T} Z^\mathsf{T} Z \Lambda_\theta + I_q\right) P^\mathsf{T}. \qquad (5.22)$$
Note that we now require that the diagonal elements of Λθ be positive. Problems
5.1 and 5.2 indicate why we can require this. Because the diagonal elements
of Λθ are positive, its determinant, |Λθ|, which, for a triangular matrix
such as Λθ, is simply the product of its diagonal elements, is also positive.
Many sparse matrix methods, including the sparse Cholesky decomposi-
tion, are performed in two stages: the symbolic phase in which the locations
of the non-zeros in the result are determined and the numeric phase in which
the numeric values at these positions are evaluated. The symbolic phase for
the decomposition (5.22), which includes determining the permutation, P,
need only be done once. Evaluation of Lθ for subsequent values of θ requires
only the numeric phase, which typically is much faster than the symbolic
phase.
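The two-phase approach is available directly in R through the Matrix package; the sketch below is ours, with an arbitrary random sparse matrix standing in for ΛθᵀZᵀZΛθ. The symbolic analysis is performed once and then reused for a new set of numerical values.
> library(Matrix)
> set.seed(1)
> A  <- crossprod(rsparsematrix(50, 20, density = 0.1))  # a sparse symmetric matrix
> CF <- Cholesky(A, LDL = FALSE, perm = TRUE, Imult = 1) # symbolic + numeric phases; factor of A + I
> CF <- update(CF, 2 * A, mult = 1)                      # numeric phase only, same sparsity pattern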
The permutation, P, serves two purposes. The first and most important
purpose is to reduce the number of non-zeros in the factor, Lθ. The factor
is potentially non-zero at every non-zero location in the lower triangle of the
matrix being decomposed, and at additional positions, called “fill-in”, generated
during the factorization; a well-chosen permutation can greatly reduce this fill-in.
After evaluating Lθ and using it to solve for ũ, which also produces the minimum
value r²θ,β, we can write the PRSS for a general u as this minimum plus a
quadratic form in u − ũ determined by Lθ (this is (5.25)),
which finally allows us to evaluate the likelihood. We plug the right hand side
of (5.25) into the definition of h(u) and apply the change of variable
$$z = \frac{L_\theta^\mathsf{T}(u - \tilde{u})}{\sigma}. \qquad (5.26)$$
The determinant of the Jacobian of this transformation, |dz/du| = |Lθ|/σ^q,
is required for the change of variable in the integral. We use the letter z for the
transformed value because we will rearrange the integral to have the form of
the integral of the density of the standard multivariate normal distribution.
That is, we will use the result
$$\int_{\mathbb{R}^q} \frac{e^{-\|z\|^2/2}}{(2\pi)^{q/2}}\, dz = 1. \qquad (5.28)$$
$$d(\theta,\beta,\sigma|y_{\mathrm{obs}}) = -2\log\bigl(L(\theta,\beta,\sigma|y_{\mathrm{obs}})\bigr) = n\log(2\pi\sigma^2) + 2\log|L_\theta| + \frac{r^2_{\theta,\beta}}{\sigma^2},$$
as stated in (1.6). The maximum likelihood estimates of the parameters are
those that minimize this deviance.
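As a side note (ours, not in the text), the role of σ in this expression is simple enough to make explicit: for fixed θ and β, differentiating with respect to σ² and setting the derivative to zero gives
$$\frac{\partial d}{\partial(\sigma^2)} = \frac{n}{\sigma^2} - \frac{r^2_{\theta,\beta}}{\sigma^4} = 0 \quad\Longrightarrow\quad \widehat{\sigma^2} = \frac{r^2_{\theta,\beta}}{n},$$
which is the substitution used in the profiling step described next.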
Equation (1.6) is a remarkably compact expression, considering that the
class of models to which it applies is very large indeed. However, we can
do better than this if we notice that β affects (1.6) only through r²θ,β and
that, for any value of θ, minimizing this expression with respect to β is just an
extension of the penalized least squares problem. Let β̂θ be the value of β
that minimizes the PRSS simultaneously with respect to β and u, and let r²θ
be the PRSS at these minimizing values. If, in addition, we set σ̂²θ = r²θ/n,
which is the value of σ² that minimizes the deviance for a given value of r²θ,
then the profiled deviance, which is a function of θ only, becomes
where, as before, Lθ , the sparse Cholesky factor, is the sparse lower triangular
q × q matrix satisfying (5.22). The other two matrices in (5.33): RZX , which
is a general q × p matrix, and RX , which is an upper triangular p × p matrix,
satisfy
$$L_\theta R_{ZX} = P \Lambda_\theta^\mathsf{T} Z^\mathsf{T} X \qquad (5.34)$$
and
$$R_X^\mathsf{T} R_X = X^\mathsf{T} X - R_{ZX}^\mathsf{T} R_{ZX}. \qquad (5.35)$$
Those familiar with standard ways of writing a Cholesky decomposition
as either LLᵀ or RᵀR (L is the factor as it appears on the left and R as
it appears on the right) will notice a notational inconsistency in (5.33). One
Cholesky factor is defined as the lower triangular factor on the left and the
other is defined as the upper triangular factor on the right. It happens that
in R the Cholesky factor of a dense positive-definite matrix is returned as the
right factor, whereas the sparse Cholesky factor is returned as the left factor.
One other technical point that should be addressed is whether XᵀX −
RZXᵀRZX is positive definite. In theory, if X has full column rank, so that
XᵀX is positive definite, then XᵀX − RZXᵀRZX = RXᵀRX is also positive definite.
$$\hat{\sigma}^2_R = \frac{\|y_{\mathrm{obs}} - X\hat{\beta}\|^2}{n - p} \qquad (5.37)$$
$$\hat{\sigma}^2_L = \frac{\|y_{\mathrm{obs}} - X\hat{\beta}\|^2}{n}. \qquad (5.38)$$
$$d_R(\theta,\sigma|y_{\mathrm{obs}}) = (n-p)\log(2\pi\sigma^2) + 2\log\bigl(|L_\theta|\,|R_X|\bigr) + \frac{r^2_\theta}{\sigma^2}. \qquad (5.42)$$
$$\hat{\sigma}^2_R = \frac{r^2_{\hat{\theta}_R}}{n - p}. \qquad (5.45)$$
It is not entirely clear how one would define a “REML estimate” of β because
the REML criterion, dR(θ, σ|y), defined in (5.42), does not depend on β.
However, it is customary (and not unreasonable) to use $\hat{\beta}_R = \hat{\beta}_{\hat{\theta}_R}$ as the
REML estimate of β.
npt = 7 , n = 3
rhobeg = 0.2 , rhoend = 2e-07
0.020: 11: 1752.65; 1.00913 -0.0128508 0.200209
0.0020: 19: 1752.07;0.973276 0.0253858 0.209423
...
> str(fm08@re@Lind)
Solving (5.32) for ũ and β̂θ is done in stages. Writing cu and cβ for the
intermediate results that satisfy
$$\begin{bmatrix} L_\theta & 0 \\ R_{ZX}^\mathsf{T} & R_X^\mathsf{T} \end{bmatrix} \begin{bmatrix} c_u \\ c_\beta \end{bmatrix} = \begin{bmatrix} P\Lambda_\theta^\mathsf{T} Z^\mathsf{T} y_{\mathrm{obs}} \\ X^\mathsf{T} y_{\mathrm{obs}} \end{bmatrix}, \qquad (5.46)$$
we evaluate cu and cβ by forward substitution in this block lower triangular
system. The next set of equations to solve is
$$\begin{bmatrix} L_\theta^\mathsf{T} & R_{ZX} \\ 0 & R_X \end{bmatrix} \begin{bmatrix} P\tilde{u} \\ \hat{\beta}_\theta \end{bmatrix} = \begin{bmatrix} c_u \\ c_\beta \end{bmatrix}. \qquad (5.47)$$
We can now create the conditional mean, mu, the penalized residual sum of
squares, prss, the logarithm of the square of the determinant of L, ldL2, and
the profiled deviance, which, fortuitously, equals the value shown earlier.
The last step is to detach the environment of fm08 from the search list to
avoid later name clashes.
In terms of the calculations performed, these steps describe exactly the
evaluation of the profiled deviance in lmer. The actual function for evaluating
the deviance, accessible as fm08@setPars, is a slightly modified version of what
is shown above. However, the modifications are only to avoid creating copies
of potentially large objects and to allow for cases where the model matrix, X,
is sparse. In practice, unless the optional argument compDev = FALSE is given,
the profiled deviance is evaluated in compiled code, providing a speed boost,
but the R code can be used if desired. This allows for checking the results
from the compiled code and can also be used as a template for extending the
computational methods to other types of models.
In later chapters we cover the theory and practice of generalized linear mixed
models (GLMMs), nonlinear mixed models (NLMMs) and generalized non-
linear mixed models (GNLMMs). Because quite a bit of the theoretical and
computational methodology covered in this chapter extends to those models
we will cover the common aspects here.
These models retain, from the linear mixed model, some of the properties of the
conditional distribution
$$(Y|U = u) \sim N(Z\Lambda_\theta u + X\beta,\ \sigma^2 I_n).$$
In particular, the components of Y are conditionally independent, given U = u.
Furthermore, u affects the distribution only through the conditional mean, which
we will continue to write as µ, and it affects the conditional mean only through
the linear predictor, γ = ZΛθ u + Xβ.
Typically we do not have µ = γ, however. The elements of the linear pre-
dictor, γ, can be positive or negative or zero. Theoretically they can take
on any value between −∞ and ∞. But many distributional forms used in
GLMMs put constraints on the value of the mean. For example, the mean
of a Bernoulli random variable, modeling a binary response, must be in the
range 0 < µ < 1 and the mean of a Poisson random variable, modeling a count,
must be positive. To achieve these constraints we model the conditional mean,
µ, as a nonlinear transformation of the linear predictor. In a nonlinear mixed
model the conditional distribution retains the Gaussian form
$$(Y|U = u) \sim N(\mu,\ \sigma^2 I_n) \qquad (5.49)$$
but µ depends nonlinearly on γ. For NLMMs the length of the linear predic-
tor, γ, is a multiple, ns, of n, the length of µ.
Like the map from η to µ, the map from γ to µ has a “diagonal” property,
which we now describe. If we use γ to fill the columns of an n × s matrix,
Γ, then µi depends only on the ith row of Γ. In fact, µi is determined by a
nonlinear model function, f, applied to the ith row of Γ. Writing µ = f(γ) based
on the component function f, we see that the Jacobian of f, dµ/dγ, will be the
vertical concatenation of s diagonal n × n matrices.
Because we will allow for generalized nonlinear mixed models (GNLMMs),
in which the mapping from γ to µ has the form
γ → η → µ, (5.50)
where
$$U^{(0)} = \left.\frac{d\mu}{du}\right|_{u^{(0)}}. \qquad (5.53)$$
Naturally, we use the sparse Cholesky decomposition, $L_\theta^{(0)}$, satisfying
$$L_\theta^{(0)} \bigl(L_\theta^{(0)}\bigr)^\mathsf{T} = P\left(\bigl(U^{(0)}\bigr)^\mathsf{T} U^{(0)} + I_q\right) P^\mathsf{T}. \qquad (5.54)$$
$$d(\theta,\beta,\sigma|y_{\mathrm{obs}}) \approx n\log(2\pi\sigma^2) + 2\log|L_{\theta,\beta}| + \frac{r^2_{\theta,\beta}}{\sigma^2}, \qquad (5.56)$$
where the Cholesky factor, Lθ,β, and the penalized residual sum of squares,
r²θ,β, are both evaluated at the conditional mode, ũ. The Cholesky factor
depends on θ, β and u for these models but typically the dependence on β
and u is weak.
The definitions and the computational results for maximum likelihood estimation
of the parameters in linear mixed models were summarized in Sect. 1.4.1.
A key computation is evaluation of the sparse Cholesky factor, Lθ, satisfying
(5.22),
$$L_\theta L_\theta^\mathsf{T} = P\left(\Lambda_\theta^\mathsf{T} Z^\mathsf{T} Z \Lambda_\theta + I_q\right) P^\mathsf{T}.$$
Exercises
5.1. Show that the matrix Aθ = PΛθᵀZᵀZΛθPᵀ + Iq is positive definite. That
is, bᵀAb > 0, ∀ b ≠ 0.
5.2. (a) Show that Λθ can be defined to have non-negative diagonal elements.
(Hint: Show that the product Λθ D, where D is a diagonal matrix with
diagonal elements of ±1, is also a Cholesky factor. Thus the signs of the
diagonal elements can be chosen however we want.)
(b) Use the result of Prob. 5.1 to show that the diagonal elements of Λθ must
be non-zero. (Hint: Suppose that the first zero on the diagonal of Λθ is in
the ith position. Show that there is a solution x to Λθᵀ x = 0 with xi = 1
and xj = 0, j = i + 1, . . . , q and that this x contradicts the positive definite
condition.)
5.3. Show that if X has full column rank, which means that there does not
exist a β ≠ 0 for which Xβ = 0, then XᵀX is positive definite.
In this chapter we consider mixed-effects models for data sets in which the
response is binary, in the sense that it represents yes/no or true/false or
correct/incorrect responses.
Because the response can only take on one of two values we adapt our mod-
els to predict the probability of the positive response. We retain the concept
of the linear predictor, Xβ + Zb, depending on the fixed-effects parameters,
β , and the random effects, b, with the corresponding model matrices, X and
Z, determining the conditional mean, µ, of the response, given the random
effects, but the relationship is more general than that for the linear mixed
model. The linear predictor, which we shall write as η, determines µ accord-
ing to a link function, g. For historical reasons it is the function taking an
element of µ to the corresponding element of η that is called the link. The
transformation in the opposite direction, from η to µ, is called the inverse
link.
As described in earlier chapters, models based on a Gaussian distribution
for the response vector with its mean determined by the linear predictor
are called linear models. Models incorporating coefficients in a linear predic-
tor but allowing for more general forms of the distribution of the response
are called generalized linear models. When the linear predictor incorporates
random effects in addition to the fixed-effects parameters we call them gen-
eralized linear mixed models (GLMMs) and fit such models with the glmer
function. As in previous chapters, we will begin with an example to help
illustrate these ideas.
One of the test data sets from the Center for Multilevel Modelling, University
of Bristol, is derived from the 1989 Bangladesh Fertility Survey [Huq and
Cleland, 1990]. The data are a subsample of responses from 1934 women
grouped in 60 districts and are available as the Contraception data set in the
mlmRev package.
> str(Contraception)
The response of interest is use — whether the woman chooses to use artificial
contraception. The covariates include the district in which the woman resides,
the number of live children she currently has, her age and whether she is in
a rural or an urban setting.
Note that the age variable is centered about a particular age, so some values
are negative. Regrettably, the information on what the centering age was does
not seem to be available.
Fig. 6.1 Contraception use versus centered age for women in the Bangladesh Fertility
Survey 1989. Panels are determined by whether the woman is in an urban setting or
not. Lines within the panels are scatterplot smoother lines for women with 0, 1, 2
and 3 or more live children.
As for the lmer function, the first two arguments to the glmer function for
fitting generalized linear mixed models are the model formula and the name
of the data frame. The third argument to glmer, named family, describes the
type of conditional distribution of the response given the random effects. Ac-
tually, as the name family implies, it contains more information than just the
distribution type in that each distribution and link are described by several
functions. Certain distributions, including the binomial, have canonical link
functions associated with them and if we specify just the distribution type
we get the family with the canonical link. Thus our initial fit is generated as
> fm10 <- glmer(use ~ 1+age+I(age^2)+urban+livch+(1|district),
+ Contraception, binomial)
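Specifying the family by the distribution name alone, as above, selects the canonical link. If we wanted to be explicit, or to use a non-canonical link, we could write the family as a call; a sketch (not from the original text):
> fm10 <- glmer(use ~ 1 + age + I(age^2) + urban + livch + (1|district),
+               Contraception, family = binomial(link = "logit"))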
When displaying a fitted model like this that has several fixed-effects co-
efficients it is helpful to specify the optional argument corr=FALSE to suppress
printing of the rather large correlation matrix of the fixed effects estimators.
> print(fm10, corr=FALSE)
Fixed effects:
Estimate Std. Error z value
(Intercept) -1.0353439 0.1743809 -5.937
age 0.0035350 0.0092314 0.383
I(age^2) -0.0045624 0.0007252 -6.291
urbanY 0.6972788 0.1198849 5.816
livch1 0.8150294 0.1621961 5.025
livch2 0.9164494 0.1851060 4.951
livch3+ 0.9150402 0.1857760 4.926
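Because the model is quadratic in age, the fitted log-odds (for given urban and livch values) peak where the derivative with respect to age is zero. A quick calculation of that turning point from the estimates shown above (this computation is ours, not from the original text):
> -fixef(fm10)[["age"]] / (2 * fixef(fm10)[["I(age^2)"]])   # roughly 0.39 on the centered age scale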
Fig. 6.2 Contraception use versus centered age for women in the Bangladesh Fertility
Survey 1989. Panels are determined by whether the woman is in an urban setting or
not. Lines within the panels are scatterplot smoother lines for women without children
and women with one or more live children.
Random effects:
Groups Name Variance Std.Dev.
district (Intercept) 0.2251 0.4744
Number of obs: 1934, groups: district, 60
Fixed effects:
Estimate Std. Error z value
(Intercept) -0.9876044 0.1677069 -5.889
age 0.0067102 0.0078362 0.856
I(age^2) -0.0046674 0.0007161 -6.518
urbanY 0.6834724 0.1196144 5.714
chY 0.8479485 0.1471716 5.762
Data: Contraception
Models:
fm11: use ~ age + I(age^2) + urban + ch + (1 | district)
fm10: use ~ 1 + age + I(age^2) + urban + livch + (1 | district)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm11 6 2385.2 2418.6 -1186.6
fm10 8 2388.7 2433.3 -1186.4 0.4722 2 0.7897
Fixed effects:
Estimate Std. Error z value
(Intercept) -1.2998180 0.2134408 -6.090
age -0.0459409 0.0217227 -2.115
chY 1.1891294 0.2058998 5.775
I(age^2) -0.0057677 0.0008351 -6.907
urbanY 0.7011683 0.1201614 5.835
age:chY 0.0672373 0.0253180 2.656
Data: Contraception
Models:
fm11: use ~ age + I(age^2) + urban + ch + (1 | district)
fm12: use ~ age * ch + I(age^2) + urban + (1 | district)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm11 6 2385.2 2418.6 -1186.6
fm12 7 2379.2 2418.2 -1182.6 7.9983 1 0.004682
Fixed effects:
Estimate Std. Error z value
(Intercept) -1.3077641 0.2199094 -5.947
age -0.0442332 0.0218690 -2.023
Data: Contraception
Models:
fm12: use ~ age * ch + I(age^2) + urban + (1 | district)
fm13: use ~ age * ch + I(age^2) + urban + (1 | urban:district)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm12 7 2379.2 2418.2 -1182.6
fm13 7 2368.5 2407.5 -1177.2 10.698 0 < 2.2e-16
Notice that although there are 60 distinct districts there are only 102
distinct combinations of urban:district represented in the data. In 3 of the
60 districts there are no rural women in the sample and in 15 districts there
are no urban women in the sample, as shown in
> xtabs(~ urban + district, Contraception)
district
urban 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
N 54 20 0 19 37 58 18 35 20 13 21 23 16 17 14 18 24
Y 63 0 2 11 2 7 0 2 3 0 0 6 8 101 8 2 0
district
urban 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
N 33 22 15 10 20 15 14 49 13 39 45 25 45 27 24 7 26
Y 14 4 0 8 0 0 0 18 0 5 4 7 16 6 0 7 9
district
urban 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
N 28 14 13 7 24 12 23 6 28 27 34 74 9 26 4 15 20
Y 20 3 0 7 2 29 3 5 17 0 5 12 6 16 0 4 17
district
urban 52 53 55 56 57 58 59 60 61
N 42 0 0 24 23 20 10 22 31
Y 19 19 6 21 4 13 0 10 11
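One way to confirm the count of represented combinations directly from this table (our check, not from the original text):
> sum(xtabs(~ urban + district, Contraception) > 0)   # 102 non-empty urban:district cells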
To this point the only difference we have encountered between glmer and lmer
as model-fitting functions is the need to specify the distribution family in a
call to glmer. The formula specification is identical and the assessment of the
significance of terms using likelihood ratio tests is similar. This is intentional.
We have emphasized the use of likelihood ratio tests on terms, whether fixed-
effects or random-effects terms, exactly so the approach will be general.
However, the interpretation of the coefficient estimates in the different
types of models is different. In a linear mixed model the linear predictor is
the conditional mean (or “expected value”) of the response given the random
effects. That is, if we assume that we know the values of the fixed-effects
parameters and the random effects then the expected response for a particular
combination of covariate values is the linear predictor. Individual coefficients
can be interpreted as slopes of the fitted response with respect to a numeric
covariate or shifts between levels of a categorical covariate.
To interpret the estimates of coefficients in a GLMM we must define and
examine the link function that we mentioned earlier.
To understand why this is called the “log odds” recall that µ corresponds
to a probability in [0, 1]. The corresponding odds, µ/(1 − µ), is in [0, ∞)
and the logarithm of the odds, logit(µ), is in (−∞, ∞).
The inverse of the logit link function,
$$\mu = \frac{1}{1 + \exp(-\eta)}, \qquad (6.2)$$
shown in Fig. 6.3, takes a value on the unrestricted range, (−∞, ∞), and maps
it to the probability range, [0, 1]. It happens that this function is also the
cumulative distribution function for the standard logistic distribution, available in
R as the function plogis.
Fig. 6.3 Inverse of the logit link function. The linear predictor value, η, which is on
an unrestricted scale, is mapped to µ on the probability scale, [0, 1].
In some presentations the relationship between the
logit link and the logistic distribution is emphasized but that often leads to
questions of why we should focus on the logistic distribution. Also, it is not
clear how this approach would generalize to other distributions such as the
Poisson or the Gamma distributions.
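In R the logit link and its inverse are available directly as qlogis and plogis; a quick check of (6.2), ours rather than from the original text:
> eta <- c(-5, 0, 5)
> mu  <- plogis(eta)          # inverse link: eta -> mu in (0, 1)
> all.equal(qlogis(mu), eta)  # link: mu -> eta recovers the linear predictor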
A way of deriving the logit link that does generalize to a class of common
distributions in what is called the exponential family is to consider the loga-
rithm of the probability function (for discrete distributions) or the probability
density function (for continuous distributions). The probability function for
the Bernoulli distribution is µ for y = 1 and 1 − µ for y = 0. If we write this in
a somewhat peculiar way, as $\mu^y (1-\mu)^{1-y}$ for y ∈ {0, 1}, then the logarithm
of the probability function becomes
$$\log\bigl(\mu^y (1-\mu)^{1-y}\bigr) = \log(1-\mu) + y\,\log\!\left(\frac{\mu}{1-\mu}\right). \qquad (6.3)$$
Notice that the logit link function is the multiple of y in the last term.
For members of the exponential family the logarithm of the probability
or probability density function can be expressed as a sum of up to three
terms: one that involves y only, one that involves the parameters only and the
product of y and a function of the parameters. This function is the canonical
link.
In the case of the Poisson distribution the probability function is $e^{-\mu}\mu^y/y!$ for
y ∈ {0, 1, 2, . . . }, so the log probability function is
$$\log\bigl(e^{-\mu}\mu^y/y!\bigr) = -\mu - \log(y!) + y\,\log\mu,$$
and the canonical link for the Poisson distribution is the logarithm.
Applying the inverse link, plogis, to the (Intercept) estimate for fm13 gives the
predicted probability of contraception use by a childless woman with a centered
age of 0 in a rural setting of a typical district:
[1] 0.2128612
or 21.3%.
Similarly the predicted log-odds of a childless woman with a centered age
of 0 in an urban setting of a typical district using artificial contraception is
> sum(fixef(fm13)[c("(Intercept)","urbanY")])
[1] -0.5431313
corresponding to a probability of
> plogis(sum(fixef(fm13)[c("(Intercept)","urbanY")]))
[1] 0.3674595
The predicted log-odds and predicted probability for a woman with children
and at the same age and location are
> logodds <- sum(fixef(fm13)[c("(Intercept)","chY","urbanY")])
> c("log-odds"=logodds, "probability"=plogis(logodds))
log-odds probability
0.6447594 0.6558285
We should also be aware that the random effects are defined on the linear
predictor scale and not on the probability scale.
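For example, to see how much the predicted probability varies across the groups, one could add each group's random effect to the baseline log-odds before applying the inverse link; a sketch (ours, not from the original text; for the urban groups the urbanY fixed effect would also be added):
> re <- ranef(fm13)$`urban:district`[["(Intercept)"]]   # random effects on the log-odds scale
> summary(plogis(fixef(fm13)[["(Intercept)"]] + re))    # implied probabilities for rural, childless, age 0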
References
Douglas M. Bates and Donald G. Watts. Nonlinear Regression Analysis and Its
Applications. Wiley, Hoboken, NJ, 1988. ISBN 0-471-81643-4.
Gregory Belenky, Nancy J. Wesensten, David R. Thorne, Maria L. Thomas, Helen C.
Sing, Daniel P. Redmond, Michael B. Russo, and Thomas J. Balkin. Patterns of
performance degradation and restoration during sleep restriction and subsequent
recovery: a sleep dose-response study. Journal of Sleep Research, 12:1–12, 2003.
G. E. P. Box and G. C. Tiao. Bayesian Inference in Statistical Analysis. Addison-
Wesley, Reading, MA, 1973.
Bill Cleveland. Visualizing Data. Hobart Press, Summit, NJ, 1993.
Owen L. Davies and Peter L. Goldsmith, editors. Statistical Methods in Research
and Production. Hafner, 4th edition, 1972.
Tim Davis. An approximate minimum degree ordering algorithm. SIAM J. Matrix
Analysis and Applications, 17(4):886–905, 1996.
Tim Davis. Direct Methods for Sparse Linear Systems. SIAM, Philadelphia, PA,
2006.
N. M. Huq and J. Cleland. Bangladesh fertility survey 1989 (main report). Technical
report, National Institute of Population Research and Training, Dhaka,
Bangladesh, 1990.
Friedrich Leisch. Sweave: Dynamic generation of statistical reports using literate data
analysis. In Wolfgang Härdle and Bernd Rönz, editors, Compstat 2002 — Proceedings
in Computational Statistics, pages 575–580. Physica Verlag, Heidelberg, 2002.
URL http://www.stat.uni-muenchen.de/~leisch/Sweave. ISBN 3-7908-1517-9.
George A. Milliken and Dallas E. Johnson. Analysis of Messy Data: Volume 1,
Designed Experiments. CRC Press, 2nd edition, 2009.
José C. Pinheiro and Douglas M. Bates. Mixed-effects Models in S and S-PLUS.
Springer, 2000.
J. Rasbash, W. Browne, H. Goldstein, M. Yang, and I. Plewis. A User’s Guide to
MLwiN. Multilevel Models Project, Institute of Education, University of London,
London, 2000.
Stephen W. Raudenbush and Anthony S. Bryk. Hierarchical Linear Models: Applications
and Data Analysis Methods. Sage, 2nd edition, 2002. ISBN 0-7619-1904-X.
Y. Sakamoto, M. Ishiguro, and G. Kitagawa. Akaike Information Criterion Statistics.
Reidel, Dordrecht, Holland, 1986.
Deepayan Sarkar. Lattice: Multivariate Data Visualization with R. Springer, 2008.
G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464,
1978.
Brady T. West, Kathleen B. Welch, and Andrzej T. Galecki. Linear Mixed Models:
A Practical Guide Using Statistical Software. Chapman and Hall/CRC, Boca
Raton, FL, 2007. ISBN 1-58488-480-0.