Using R and
RStudio
for Data Management,
Statistical Analysis,
and Graphics
Second Edition
Nicholas J. Horton
Department of Mathematics and Statistics
Amherst College
Massachusetts, U.S.A.
Ken Kleinman
Department of Population Medicine
Harvard Medical School and
Harvard Pilgrim Health Care Institute
Boston, Massachusetts, U.S.A.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-
ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or uti-
lized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopy-
ing, microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
2 Data management 11
2.1 Structure and metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.1 Access variables from a dataset . . . . . . . . . . . . . . . . . . . . . 11
2.1.2 Names of variables and their types . . . . . . . . . . . . . . . . . . . 11
2.1.3 Values of variables in a dataset . . . . . . . . . . . . . . . . . . . . . 12
2.1.4 Label variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.5 Add comment to a dataset or variable . . . . . . . . . . . . . . . . . 12
2.2 Derived variables and data manipulation . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Add derived variable to a dataset . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Rename variables in a dataset . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 Create string variables from numeric variables . . . . . . . . . . . . . 13
2.2.4 Create categorical variables from continuous variables . . . . . . . . 13
2.2.5 Recode a categorical variable . . . . . . . . . . . . . . . . . . . . . . 14
2.2.6 Create a categorical variable using logic . . . . . . . . . . . . . . . . 14
2.2.7 Create numeric variables from string variables . . . . . . . . . . . . . 15
2.2.8 Extract characters from string variables . . . . . . . . . . . . . . . . 15
2.2.9 Length of string variables . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.10 Concatenate string variables . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.11 Set operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.12 Find strings within string variables . . . . . . . . . . . . . . . . . . . 16
2.2.13 Find approximate strings . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.14 Replace strings within string variables . . . . . . . . . . . . . . . . . 17
2.2.15 Split strings into multiple strings . . . . . . . . . . . . . . . . . . . . 17
2.2.16 Remove spaces around string variables . . . . . . . . . . . . . . . . . 17
2.2.17 Convert strings from upper to lower case . . . . . . . . . . . . . . . 17
2.2.18 Create lagged variable . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.19 Formatting values of variables . . . . . . . . . . . . . . . . . . . . . . 18
2.2.20 Perl interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.21 Accessing databases using SQL . . . . . . . . . . . . . . . . . . . . . 18
2.3 Merging, combining, and subsetting datasets . . . . . . . . . . . . . . . . . 19
2.3.1 Subsetting observations . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3.2 Drop or keep variables in a dataset . . . . . . . . . . . . . . . . . . . 19
2.3.3 Random sample of a dataset . . . . . . . . . . . . . . . . . . . . . . 20
2.3.4 Observation number . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.5 Keep unique values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.6 Identify duplicated values . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.7 Convert from wide to long (tall) format . . . . . . . . . . . . . . . . 21
2.3.8 Convert from long (tall) to wide format . . . . . . . . . . . . . . . . 21
2.3.9 Concatenate and stack datasets . . . . . . . . . . . . . . . . . . . . . 22
2.3.10 Sort datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.11 Merge datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 Date and time variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.1 Create date variable . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.2 Extract weekday . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.3 Extract month . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.4 Extract year . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.5 Extract quarter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.6 Create time variable . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5 Further resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6.1 Data input and output . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.6.8 Contrasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
10 Simulation 155
10.1 Generating data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
10.1.1 Generate categorical data . . . . . . . . . . . . . . . . . . . . . . . . 155
10.1.2 Generate data from a logistic regression . . . . . . . . . . . . . . . . 156
10.1.3 Generate data from a generalized linear mixed model . . . . . . . . . 156
10.1.4 Generate correlated binary data . . . . . . . . . . . . . . . . . . . . 157
10.1.5 Generate data from a Cox model . . . . . . . . . . . . . . . . . . . . 158
10.1.6 Sampling from a challenging distribution . . . . . . . . . . . . . . . 159
10.2 Simulation applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
10.2.1 Simulation study of Student’s t-test . . . . . . . . . . . . . . . . . . 161
10.2.2 Diploma (or hat-check) problem . . . . . . . . . . . . . . . . . . . . 162
10.2.3 Monty Hall problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
10.2.4 Censored survival . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.3 Further resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
C References 243
D Indices 255
D.1 Subject index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
D.2 R index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
List of Tables
11.1 Bayesian modeling functions available within the MCMCpack package . . . . 175
12.1 Weights, volume, and values for the knapsack problem . . . . . . . . . . . . 209
List of Figures
5.1 Density plot of depressive symptom scores (CESD) plus superimposed his-
togram and normal distribution . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 Scatterplot of CESD and MCS for women, with primary substance shown as
the plot symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Graphical display of the table of substance by race/ethnicity . . . . . . . . 63
5.4 Density plot of age by gender . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.1 Scatterplot of observed values for age and I1 (plus smoothers by substance)
using base graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.2 Scatterplot of observed values for age and I1 (plus smoothers by substance)
using the lattice package . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.3 Scatterplot of observed values for age and I1 (plus smoothers by substance)
using the ggplot2 package . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.4 Regression coefficient plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.5 Default diagnostics for linear models . . . . . . . . . . . . . . . . . . . . . . 83
6.6 Empirical density of residuals, with superimposed normal density . . . . . . 84
6.7 Interaction plot of CESD as a function of substance group and gender . . . 85
6.8 Boxplot of CESD as a function of substance group and gender . . . . . . . 86
6.9 Pairwise comparisons (using Tukey HSD procedure) . . . . . . . . . . . . . 88
6.10 Pairwise comparisons (using the factorplot function) . . . . . . . . . . . . . 89
8.1 Plot of InDUC and MCS vs. CESD for female alcohol-involved subjects . . 135
8.2 Association of MCS and CESD, stratified by substance and report of suicidal
thoughts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.3 Lattice settings using the mosaic black-and-white theme . . . . . . . . . . . 137
8.4 Association of MCS and PCS with marginal histograms . . . . . . . . . . . 138
8.5 Kaplan–Meier estimate of time to linkage to primary care by randomization
group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.6 Receiver operating characteristic curve for the logistic regression model pre-
dicting suicidal thoughts using the CESD as a measure of depressive symp-
toms (sensitivity = true positive rate; 1-specificity = false positive rate) . . 140
8.7 Pairs plot of variables from the HELP dataset using the lattice package . 141
8.8 Pairs plot of variables from the HELP dataset using the GGally package. . 142
8.9 Visual display of correlations (times 100) . . . . . . . . . . . . . . . . . . . . 143
Software systems such as R evolve rapidly, and so do the approaches and expertise of
statistical analysts.
In 2009, we began a blog in which we explored many new case studies and applications,
ranging from generating a Fibonacci series to fitting finite mixture models with concomitant
variables. We also discussed some additions to R, the RStudio integrated development
environment, and new or improved R packages. The blog now has hundreds of entries and
according to Google Analytics has received hundreds of thousands of visits.
The volume you are holding is in a larger format and is longer than the first edition. Much of the new material is adapted from these blog entries, and the book also includes other improvements and additions that have emerged in the last few years.
We have extensively reorganized the material in the book and created three new chapters. The first, “Simulation,” includes examples where data are generated from complex
models such as mixed-effects models and survival models, and from distributions using
the Metropolis–Hastings algorithm. We also explore interesting statistics and probability
examples via simulation. The second is “Special topics,” where we describe some key fea-
tures, such as processing by group, and detail several important areas of statistics, including
Bayesian methods, propensity scores, and bootstrapping. The last is “Case studies,” where
we demonstrate examples of useful data management tasks, read complex files, make and
annotate maps, show how to “scrape” data from the web, mine text files, and generate
dynamic graphics.
We also describe RStudio in detail. This powerful and easy-to-use front end adds in-
numerable features to R. In our experience, it dramatically increases the productivity of R
users, and by tightly integrating reproducible analysis tools, helps avoid error-prone “cut
and paste” workflows. Our students and colleagues find RStudio an extremely comfortable
interface.
We used a reproducible analysis system (knitr) to generate the example code and
output in the book. Code extracted from these files is provided on the book website. In
this edition, we provide a detailed discussion of the philosophy and use of these systems. In
particular, we feel that the knitr and markdown packages for R, which are tightly integrated
with RStudio, should become a part of every R user’s toolbox. We can’t imagine working
on a project without them.
The second edition of the book features extensive use of a number of new packages
that extend the functionality of the system. These include dplyr (tools for working with
dataframe-like objects and databases), ggplot2 (implementation of the Grammar of Graph-
ics), ggmap (spatial mapping using ggplot2), ggvis (to build interactive graphical displays),
httr (tools for working with URLs and HTTP), lubridate (date and time manipulations),
markdown (for simplified reproducible analysis), shiny (to build interactive web applica-
tions), swirl (for learning R, in R), tidyr (for data manipulation), and xtable (to cre-
ate publication-quality tables). Overall, these packages facilitate ever more sophisticated
analyses.
Finally, we’ve reorganized much of the material from the first edition into smaller, more
focused chapters. Readers will now find separate (and enhanced) chapters on data input
and output, data management, statistical and mathematical functions, and programming,
rather than a single chapter on “data management.” Graphics are now discussed in two
chapters: one on high-level types of plots, such as scatterplots and histograms, and another
on customizing the fine details of the plots, such as the number of tick marks and the color
of plot symbols.
We’re immensely gratified by the positive response the first edition elicited, and hope
the current volume will be even more useful to you.
On the web
The book website at http://www.amherst.edu/~nhorton/r2 includes the table of contents,
the indices, the HELP dataset in various formats, example code, a pointer to the blog, and
a list of errata.
Acknowledgments
In addition to those acknowledged in the first edition, we would like to thank J.J. Allaire
and the RStudio developers, Danny Kaplan, Deborah Nolan, Daniel Parel, Randall Pruim,
Romain Francois, and Hadley Wickham, plus the many individuals who have created and
shared R packages. Their contributions to R and RStudio, programming efforts, comments,
and guidance and/or helpful suggestions on drafts of the revision have been extremely
helpful. Above all, we greatly appreciate Sara and Julia as well as Abby, Alana, Kinari,
and Sam, for their patience and support.
Amherst, MA
October 2014
R (R Development Core Team, 2009) is a general-purpose statistical software package used in many fields of research. It is freely licensed as open-source software. The system is
developed by a large group of people, almost all volunteers. It has a large and growing user
and developer base. Methodologists often release applications for general use in R shortly
after they have been introduced into the literature. While professional customer support is
not provided, there are many resources to help support users.
We have written this book as a reference text for users of R. Our primary goal is to
provide users with an easy way to learn how to perform an analytic task in this system,
without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy
documentation or to sort through the huge number of add-on packages. We include many
common tasks, including data management, descriptive summaries, inferential procedures,
regression analysis, multivariate methods, and the creation of graphics. We also show some
more complex applications. In toto, we hope that the text will facilitate more efficient use
of this powerful system.
We do not attempt to exhaustively detail all possible ways available to accomplish a
given task in each system. Neither do we claim to provide the most elegant solution. We
have tried to provide a simple approach that is easy to understand for a new user, and have
supplied several solutions when it seems likely to be helpful.
The book has two indices, in addition to the comprehensive table of contents. These
include: 1) a detailed topic (subject) index in English; 2) an R command index, describing
R syntax.
Extensive example analyses of data from a clinical trial are presented; see Table B.1
(p. 237) for a comprehensive list. These employ a single dataset (from the HELP study),
described in Appendix B. Readers are encouraged to download the dataset and code from
the book website. The examples demonstrate the code in action and facilitate exploration
by the reader.
In addition to the HELP examples, a case studies and extended examples chapter uti-
lizes many of the functions, idioms and code samples introduced earlier. These include
explications of analytic and empirical power calculations, missing data methods, propensity
score analysis, sophisticated data manipulation, data gleaning from websites, map making,
simulation studies, and optimization. Entries from earlier chapters are cross-referenced to
help guide the reader.
Where to begin
We do not anticipate that the book will be read cover to cover. Instead, we hope that the
extensive indexing, cross-referencing, and worked examples will make it possible for readers
to directly find and then implement what they need. A new user should begin by reading
the first chapter, which includes a sample session and overview of the system. Experienced
users may find the case studies to be valuable as a source of ideas on problem solving in R.
Acknowledgments
We would like to thank Rob Calver, Kari Budyk, Shashi Kumar, and Sarah Morris for
their support and guidance at Informa CRC/Chapman and Hall. We also thank Ben Cowl-
ing, Stephanie Greenlaw, Tanya Hakim, Albyn Jones, Michael Lavine, Pamela Matheson,
Elizabeth Stuart, Rebbecca Wilson, and Andrew Zieffler for comments, guidance and/or
helpful suggestions on drafts of the manuscript.
Above all we greatly appreciate Julia and Sara as well as Abby, Alana, Kinari, and Sam,
for their patience and support.
Chapter 1

Data input and output
This chapter reviews data input and output, including reading and writing files in spread-
sheet, ASCII file, native, and foreign formats.
1.1 Input
R provides comprehensive support for data input and output. In this section we address
aspects of these tasks. Datasets are organized in dataframes (A.4.6), or connected series
of rectangular arrays, which can be saved as platform-independent objects. UNIX-style
directory delimiters (forward slash) are allowed on Windows.
Note: Forward slash is supported as a directory delimiter on all operating systems; a double
backslash is supported under Windows. The file savedfile is created by save() (see 1.2.3).
Running the command print(load(file="dir location/savedfile")) will display the
objects that are added to the workspace.
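As a brief sketch (the objects and the path "dir_location/savedfile" are placeholders), a native R file might be created and reloaded as follows:

x = 1:10
ds = data.frame(id=1:3, value=c(2.5, 3.1, 4.7))
save(x, ds, file="dir_location/savedfile")   # save both objects in R's native format
# load() restores the objects and invisibly returns their names;
# wrapping the call in print() displays those names
print(load(file="dir_location/savedfile"))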
Note: When read.table() is used on a file without a header row, the variables will be called V1, V2, ..., Vn. A limit on the number of lines to be read can be specified through the nrows option. The read.table() function supports reading from a URL given as a filename (see 1.1.12); files can also be browsed interactively using read.table(file.choose()) (see 4.3.7).
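A brief sketch of these options follows (the file name and URL are placeholders):

ds = read.table("dir_location/file.dat", header=FALSE, nrows=1000)   # columns named V1, V2, ...
ds2 = read.table("http://www.example.com/file.dat", header=FALSE)     # read from a URL
ds3 = read.table(file.choose())                                      # choose the file interactively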
Note: The readLines() function returns a character vector with length equal to the number
of lines read (see file()); its n option limits the number of lines read (scan() uses nlines). The scan() function returns a vector, with entries separated by white
space by default. These functions read by default from standard input (see stdin() and
?connections), but can also read from a file or URL (see 1.1.12). The read.fwf() function
may also be useful for reading fixed-width files.
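The following sketch (with placeholder file names) illustrates these functions:

lines = readLines("dir_location/file.txt")    # one character string per line
length(lines)                                 # number of lines read
nums = scan("dir_location/numbers.txt")       # numeric vector, split on white space
words = scan("dir_location/file.txt", what=character())       # read strings instead
fw = read.fwf("dir_location/fixed.dat", widths=c(3, 10, 5))   # fixed-width fields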
Note: The stringsAsFactors option can be set to prevent automatic creation of factors
for categorical variables. A limit on the number of lines to be read can be specified through
the nrows option. The command read.csv(file.choose()) can be used to browse files
interactively (see 4.3.7). The comma-separated file can be given as a URL (see 1.1.12). The
colClasses option can be used to speed up reading large files. Caution is needed when
reading date and time variables (see 2.4).
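As an illustration (the file name and column classes are placeholders):

# keep strings as character vectors rather than factors, and declare the
# class of each column up front, which can speed the reading of large files
ds = read.csv("dir_location/file.csv", stringsAsFactors=FALSE,
   colClasses=c("integer", "character", "numeric"), nrows=5000)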
# coerce the dataset to a dataframe, then export it in dBase (.dbf)
# format, which SAS can read with PROC IMPORT (shown below)
tosas = data.frame(ds)
library(foreign)
write.dbf(tosas, "dir_location/tosas.dbf")
This can be read into SAS using the following commands:
proc import datafile="dir_location\tosas.dbf"
out=fromr dbms=dbf;
run;
tmpds = read.table("file_location/filename.dat")
id = tmpds$V1
initials = tmpds$V2
datevar = as.Date(as.character(tmpds$V3), "%m/%d/%Y")
cost = as.numeric(substr(tmpds$V4, 2, 100))
ds = data.frame(id, initials, datevar, cost)
rm(tmpds, id, initials, datevar, cost)
library(lubridate)
library(dplyr)
tmpds = mutate(tmpds, datevar = mdy(V3))
Note: This task is accomplished by first reading the dataset (with default names from
read.table() denoted V1 through V4). These objects can be manipulated using
as.character() to undo the default coding as factor variables, and coerced to the appropri-
ate data types. For the cost variable, the dollar signs are removed using the substr() func-
tion. Finally, the individual variables are bundled together as a dataframe. The lubridate
package includes functions to make handling date and time values easier; the mdy() function
is one of these.
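As a small illustration (the date strings are made up), mdy() parses month/day/year values directly into Date objects:

library(lubridate)
mdy("10/24/2014")                    # the Date "2014-10-24"
mdy(c("1/1/2013", "12/31/2013"))     # parsing is vectorized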
Reading data in a complex data format will generally require a tailored approach. Here
we give a relatively simple example and outline the key tools useful for reading in data in
complex formats. Suppose we have data as follows:
1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2
3 Miami, FL --- 41.8 --- --- 3
4 Tucson, AZ --- 41.7 --- --- 4
5 Cleveland, OH --- 38.3 --- --- 5
6 Cincinnati, OH 15 36.4 --- --- 6
7 Colorado Springs, CO --- 36.1 --- --- 7
8 Memphis, TN --- 35.3 --- --- 8
8 New Orleans, LA --- 35.3 --- --- 8
10 Mesa, AZ --- 34.7 --- --- 10
11 Baltimore, MD --- 33.2 --- --- 11
12 Philadelphia, PA --- 31.7 --- --- 12
13 Salt Lake City, UT --- 31.9 17 --- 13
The --- means that the value is missing. Note two complexities here. First, fields are
delimited by both spaces and commas, where the latter separates the city from the state.
Second, cities may have names consisting of more than one word.
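One possible approach (not necessarily the one developed in the book, and using a placeholder file name) is to read the raw lines with readLines() and apply a regular expression whose capture groups separate the rank, the multi-word city name, the two-letter state abbreviation, and the remaining fields:

lines = readLines("dir_location/cities.txt")
# groups: rank, city (may contain spaces), state, four value fields, trailing rank
pattern = paste0("^ *([0-9]+) +(.+), ([A-Z]{2})",
   " +(\\S+) +(\\S+) +(\\S+) +(\\S+) +([0-9]+) *$")
parsed = regmatches(lines, regexec(pattern, lines))
getfield = function(k) sapply(parsed, "[", k + 1)   # extract the k-th capture group
value = getfield(5)
ds = data.frame(rank=as.numeric(getfield(1)), city=getfield(2),
   state=getfield(3), value=as.numeric(ifelse(value == "---", NA, value)))

The --- entries are converted to missing values (NA) when the numeric column is created.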