Introduction to Matrix-Analytic Methods in Queues 1
This book is dedicated to my parents:
Mrs. P.S. Rajalakshmi and Mr. P.S.S. Raghavan;
to my professors:
Dr. Marcel F. Neuts and Dr. K.N. Venkataraman;
and to His Holiness
Sri Maha Periyava (Sri Chandrasekharendra Saraswati Mahaswamigal)
of Kanchi Kamakoti Peetham
Series Editor
Nikolaos Limnios
Introduction to
Matrix-Analytic Methods
in Queues 1
Srinivas R. Chakravarthy
First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:
www.iste.co.uk www.wiley.com
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.
List of Notations
Preface
Chapter 1. Introduction
1.1. Probability concepts
1.1.1. Random variables
1.1.2. Discrete probability functions
1.1.3. Probability generating function
1.1.4. Continuous probability functions
1.1.5. Laplace transform and Laplace-Stieltjes transform
1.1.6. Measures of a random variable
1.2. Renewal process
1.2.1. Renewal function
1.2.2. Terminating renewal process
1.2.3. Poisson process
1.3. Matrix analysis
1.3.1. Basics
1.3.2. Eigenvalues and eigenvectors
1.3.3. Partitioned matrices
1.3.4. Matrix differentiation
1.3.5. Exponential matrix
1.3.6. Kronecker products and Kronecker sums
1.3.7. Vectorization (or direct sums) of matrices
References
Index
The introduction of the phase type (PH) distributions in the early 1970s by
Marcel Neuts opened up a wide range of possibilities in Applied Probability
modeling and ushered in the idea that finding computable, numerical solutions was
an acceptable and desirable goal in analyzing stochastic models. Furthermore, he
popularized incorporating the computational aspects in the study of stochastic
models. It gave researchers a powerful new tool that enabled them to move beyond
the traditional models limited to exponential processes for analytical convenience to
studying more realistic stochastic models with algorithmic solutions and simple,
elegant probabilistic interpretations. The goal of building models with
computationally tractable solutions rather than the abstract transform-based solutions
took root. This rapidly led to an entirely new area of research on the study of
stochastic models in queues, inventory, reliability, and communication networks
using matrix-analytic methods (MAM). The versatile Markovian point process
(VMPP) was introduced by Neuts in the late 1970s. This process was used in the
study of a single-server queueing system with general services by one of Neuts’s
students, V. Ramaswami, for his PhD dissertation. In 1990, this VMPP was studied
differently as a batch Markovian arrival process (BMAP) by Neuts and his students
David Lucantoni and Kathy Meier-Hellstern. At that time it was thought that VMPP
was a special case of BMAP, but it was proved that BMAP and VMPP are the same.
However, the compact and transparent notation with which BMAP is described
allowed readers to understand this versatile point process with relative ease, and
since then VMPP has been referred to as BMAP in the literature. In the case of single
arrivals, the process is referred to as a Markovian arrival process (MAP).
Since then, these methods have been extensively studied both theoretically and computationally in the
context of a variety of stochastic models useful in many applied areas. A handful of
books starting with Neuts’s two classical books, Matrix-Geometric Solutions in
Stochastic Models: An Algorithmic Approach, originally published in 1981, and
Structured Stochastic Matrices of M/G/1 Type and Their Applications in 1989, to the
latest one, The Theory of Queuing Systems with Correlated Flows by Dudin,
Klimenok and Vishnevsky in 2020, have appeared in the literature that deal with
MAM. The other books published from 1989 to 2020 include Introduction to Matrix
Analytic Methods in Stochastic Modeling by Latouche and Ramaswami (1999);
Numerical Methods for Structured Markov Chains by Bini et al. (2005); Queueing
Theory for Telecommunications by Alfa (2010); Fundamentals of Matrix-Analytic
Methods by He (2014); and Matrix-Exponential Distributions in Applied Probability
by Bladt and Nielsen (2017).
Thus, the text is a useful source of reference for researchers established in this
field and, more importantly, a valuable, inviting guide for those venturing into this
rich research area.
Preface xiii
Writing this book has been a lot of fun but also a challenge. However, my family,
friends, and mentors helped me to meet that challenge. I take great pleasure in
acknowledging them. This book project would not have been possible without the
educational foundation, moral support, encouragement, and critical analysis of
teachers, friends, and families.
– My (late) father, P.S.S. Raghavan, for being a role model. My mother, P.S.S.
Rajalakshmi, for her encouragement. Both my parents made many sacrifices that
enabled me to first go to college and, later on, to leave for the United States to pursue
higher studies.
– My sister, Vasumathi Parthasarathy, for exposing me to mathematics at a very
young age.
– My (late) Professors Marcel F. Neuts and K.N. Venkataraman. While K.N.V.
gave me an opportunity to learn probability theory under him while in India, M.F.N.
showed me the path to MAM. I owe a debt of gratitude to him for what I am now and
for his important role in shaping my career as a teacher and a researcher.
– My college teachers, Prof. D. Ratnasabapathi (Presidency College in Madras)
and Prof. K. Suresh Chandra (University of Madras), who not only taught me statistics
but also were a source of encouragement to pursue higher studies.
– I benefited a lot through interacting with my friends and colleagues V.
Ramaswami, D.M. Lucantoni, Kathy Meier-Hellstern and S. Kumar, during my days
at Delaware.
– R. Parthasarathy (Kent State University), whom I knew from my college days in
India and who has always been there to give moral support since those days.
– My research colleagues who played key roles in my career, notably A.S. Alfa,
A.N. Dudin, A. Krishnamoorthy, and A. Rumyantsev.
– My students: Serife Ozkar, who visited from Turkey to finish up her doctoral
thesis with me at Kettering, and Shruti Goel, who attended the workshops I conducted
in India. The questions these students, along with countless others, including Alka
Choudhry, a doctoral student at the Central University of Rajasthan, India, raised
provided the impetus for this book. Furthermore, I am thankful to both Serife and
Shruti for helping me put the bibliography in the format required by the publishers.
– Several of my colleagues at Kettering, notably (the late) Duane McKeachie,
Petros Gheresus, and T.R. Chandrupatla (who retired from Rowan University recently
after serving nearly two decades at Kettering) for their friendship and encouragement
throughout my career at Kettering.
– Kettering University for its support of my sabbatical which was instrumental in
completing this book project.
– The ISTE team for their continued and timely help during the production process
of this two-volume book.
– Finally, the most important people in my life, my wife, Jayanthi Chakravarthy,
son Arvind Chakravarthy and his beloved wife, Vina Harji Chakravarthy. Since
Jayanthi and Arvind came into my life, their understanding, love and support have
helped me to focus on my research and career without any distraction. They along
with Vina have been a source of constant inspiration to finish the book. No words are
adequate to express my sincere appreciation to them.
Srinivas CHAKRAVARTHY
April 2022
1
Introduction
Although probability theory originated in describing experiences connected with
games of chance and in calculating certain probabilities, its main purpose is to
discover general rules and to construct satisfactory theoretical models for the
problems under study. Most phenomena in our lives are random, and probability
modeling is vital to understanding them and taking appropriate actions. A brief
history of probability is given below for those interested in it.
There are three major definitions of probability, namely, axiomatic, frequency and
classical. Each one has its own merits and demerits. The axiomatic approach is mainly
used in developing the mathematical theory of probability. The frequency approach
gives an intuitive notion of probability. However, the computation of probability in
practice is based on the classical approach.
Suppose that S is a sample space (i.e. S is the set of all possible outcomes of an
experiment) and let Ω be the set of all possible subsets of S. For example, consider
the experiment of throwing a six-sided die. It can readily be seen that
S = {1, 2, 3, 4, 5, 6}, and Ω = {∅, {1}, · · · , {6}, {1, 2}, · · · , {5, 6}, · · · , S},
where ∅ is the null or empty set. The empty set corresponds to impossible outcomes,
such as seeing the number 7 or a negative number. Note that the cardinality
of Ω for this example is 2^6 = 64. In general, if the sample space S has a finite
number, say, N, of elements, then the cardinality of Ω is 2^N.
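The power-set count above is easy to verify by brute force; a minimal Python sketch (the variable names are ours, not the book's):

```python
from itertools import combinations

# Sample space for one throw of a six-sided die
S = [1, 2, 3, 4, 5, 6]

# Enumerate every subset of S (the event space Omega),
# including the empty set and S itself
omega = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

print(len(omega))  # 2**6 = 64
```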
REMARK 1.1.– Note that axiom (3) implies that if A and B are mutually exclusive,
then P(A ∪ B) = P(A) + P(B).
The probabilities of events of interest are computed only based on the sample space
and with no other prior information related to the events. Sometimes it is convenient
to compute certain unconditional probabilities by first conditioning on some event,
whose probability is easy to find. For example, suppose we draw two cards without
replacement from a pack of 52 playing cards. What is the probability that the second
card drawn will be the ace of spades?
Conditional probability enables us to compute the probabilities of certain events when some partial information concerning the results of experiments is
given. Also in some calculation of probabilities, it is often convenient to compute
them by conditioning on certain events. In probability theory, the models under study
are usually described by specifying the appropriate conditional probabilities or
conditional distributions. The main topics, such as Bayesian inference, estimation
theory, tests of hypotheses and decision theory, in statistics use several notions of
conditioning.
P(B|A) = P(A ∩ B)/P(A), if P(A) > 0.
REMARK 1.4.– Suppose that P(A) > 0 and P(B) > 0. Then one of the following
will occur. Either
1) P (B|A) < P (B); in this case we say that A carries negative information about
B; or
2) P (B|A) > P (B); in this case we say that A carries positive information about
B; or
3) P (B|A) = P (B); in this case we say that A does not contain any information
about B.
REMARK 1.5.– It is very easy to show that if A carries negative (positive or no)
information about B, then B also carries negative (positive or no) information about
A. The concepts of positive and negative information in conditional probability were
first introduced by K.L. Chung (1942).
From the discussion of the notion of conditional probability we see that all general
theorems on probabilities are also valid for conditional probabilities.
P(B) = Σ_{i=1}^{n} P(B|A_i) P(A_i).
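The playing-card question posed earlier is answered exactly by this kind of conditioning; a minimal sketch using exact rational arithmetic, conditioning on whether the first card drawn is the ace of spades:

```python
from fractions import Fraction

# Condition on the first draw: either it is the ace of spades or it is not
p_first_is_as = Fraction(1, 52)
p_first_not_as = Fraction(51, 52)

# If the first card was the ace of spades, the second cannot be;
# otherwise the ace of spades is one of the 51 remaining cards
p_second = p_first_is_as * 0 + p_first_not_as * Fraction(1, 51)

print(p_second)  # 1/52
```

By symmetry the answer is the same as for the first card, which the conditioning argument confirms.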
DEFINITION 1.3.– Two non-empty events A and B are said to be independent if the
occurrence (or non-occurrence) of A does not affect the occurrence (non-occurrence)
of B.
REMARK 1.6.– The following are trivial pairs of independent events: A and ∅, A and
S, S and ∅.
In probability and statistics, most of the time the quantities of interest
are not the outcomes of an experiment under study but rather the values associated
with those outcomes. For example, when n items from the output of
a process are inspected, the quality control inspector is concerned about the total
number of defectives out of the n chosen and the corresponding probabilities, rather
than the way those defectives, if any, were selected. In this section, we review the
important concept of a random variable and the probability functions associated with
it.
The study of random variables is done through the probability functions associated
with them. For a discrete random variable X, the function f (x), defined as f (x) =
P (X = x), is called the probability mass function (PMF) of X.
2) Poisson:
f(x) = e^(−λ) λ^x / x!, x = 0, 1, · · · , and 0 elsewhere.
3) Geometric:
f(x) = p(1 − p)^x, x = 0, 1, · · · , and 0 elsewhere.
The probability generating function (PGF) is key in deriving and proving results
in stochastic models. So, we review it here.
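As a quick numerical sanity check on these PMFs and the PGF, the following sketch (the rate λ = 1.7 and the point z = 0.4 are arbitrary choices) verifies that the Poisson PMF sums to 1 and that its PGF, E(z^X) = Σ_k z^k P(X = k), matches the known closed form e^(λ(z−1)):

```python
import math

lam = 1.7

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson random variable with parameter lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

# The PMF sums to 1 (truncating the infinite support deep in the tail)
total = sum(poisson_pmf(k, lam) for k in range(120))
print(round(total, 10))  # 1.0

# PGF at z compared with the closed form e^{lam(z-1)}
z = 0.4
pgf = sum(z ** k * poisson_pmf(k, lam) for k in range(120))
print(abs(pgf - math.exp(lam * (z - 1))) < 1e-9)  # True
```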
Some well-known probability density functions used in this book in the context of
stochastic modeling are listed below and for others we refer the reader to any textbook
on probability and statistics.
1) Uniform:
f(x) = 1/(b − a), a ≤ x ≤ b, and 0 elsewhere.
2) Exponential:
f(x) = λe^(−λx), x ≥ 0, λ > 0, and 0 elsewhere.
3) Erlang of order m:
f(x) = λ^m x^(m−1) e^(−λx)/(m − 1)!, x ≥ 0, λ > 0, and 0 elsewhere.
4) Hyperexponential of order m with mixing probability vector p and parameter vector λ:
f(x) = Σ_{j=1}^{m} p_j λ_j e^(−λ_j x), x ≥ 0, λ_j > 0, 1 ≤ j ≤ m, and 0 elsewhere.
5) Gamma:
f(x) = x^(α−1) e^(−x/β)/(β^α Γ(α)), α > 0, β > 0, x ≥ 0, and 0 elsewhere.
6) Weibull:
f(x) = (α/β)(x/β)^(α−1) e^(−(x/β)^α), α > 0, β > 0, x ≥ 0, and 0 elsewhere.
7) Beta:
f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^(α−1) (1 − x)^(β−1), α > 0, β > 0, 0 ≤ x ≤ 1, and 0 elsewhere.
[Note: In the above, Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx.]
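The Erlang density listed above arises as the distribution of a sum of m independent exponentials, a fact used repeatedly later in the book. A small simulation sketch (the parameters m = 3, λ = 2 are arbitrary choices) checks the sample mean against m/λ:

```python
import random

random.seed(7)
lam, m, n = 2.0, 3, 200_000

# An Erlang(m, lam) variate is the sum of m independent exponential(lam)s
samples = [sum(random.expovariate(lam) for _ in range(m)) for _ in range(n)]

mean = sum(samples) / n
print(round(mean, 2))  # ≈ m/lam = 1.5
```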
REMARK 1.8.– Note that the CDF is a right-continuous and non-decreasing function,
which tends to 0 as x → −∞ and to 1 as x → ∞.
In this section, we briefly discuss the Laplace transform (LT) that plays an
important role in stochastic modeling.
Since we are focusing on queueing and related topics in this two-volume book, we
assume that the underlying random variables are all non-negative.
DEFINITION 1.11.– Suppose that f(x) is the PDF of a non-negative random variable,
X. The Laplace transform (LT) of f(x) (or equivalently of X) is defined as:
f*(s) = ∫_0^∞ e^(−sx) f(x) dx, Re(s) ≥ 0. [1.3]
DEFINITION 1.12.– Suppose that F(x) is the CDF of a non-negative random variable,
X. The Laplace-Stieltjes transform (LST) of F(x) (or equivalently of X) is defined as:
F*(s) = ∫_0^∞ e^(−sx) dF(x), Re(s) ≥ 0. [1.4]
DEFINITION 1.13.– Suppose that F̄(x) = P(X > x) is the tail probability function
of a non-negative random variable, X. The Laplace-Stieltjes transform (LST) of F̄(x)
is defined as:
F̄*(s) = ∫_0^∞ e^(−sx) dF̄(x), Re(s) ≥ 0. [1.5]
RESULT 1.1.– Since F(t) = ∫_0^t f(x) dx, the LST of F(t) is the same as the LT of f(t).
That is:
F*(s) = ∫_0^∞ e^(−sx) dF(x) = ∫_0^∞ e^(−sx) f(x) dx, Re(s) ≥ 0. [1.6]
RESULT 1.2.– The LT of F(t) and the LT of F̄(t) are given by f*(s)/s and
[1 − f*(s)]/s, respectively. Furthermore, the following Tauberian-type relations hold:
For some a > 0, lim_{t→∞} F(t)/t^a = b/Γ(a + 1) ⇒ lim_{s→0+} s^a F*(s) = b.
Conversely, for some a > 0, lim_{s→0+} s^a F*(s) = b ⇒ lim_{t→∞} F(t)/t^a = b/Γ(a + 1).
Suppose that a person is planning to buy a new model car. Among all the criteria for
buying a new car, let us assume that the person gives priority to a car with good
mileage. The MPG (miles per gallon) of a new model car is a random variable (why?).
But the person is not interested in knowing the probability distribution of this random
variable, only the average MPG. Of course, the average MPG depends on a number
of variables, such as the size of the car, the power of the engine and the type of
transmission. However, this measure will give the person a smaller set of cars to pick
from. The key point here is how one or more measures of a random variable are used
in practice. There are several other instances, which will be seen throughout the book.
Some commonly used measures are: (1) the nth raw moment, especially the first (also
referred to as the expected value) and second moments; (2) the standard deviation;
and (3) percentiles.
DEFINITION 1.14.– The nth raw moment of a random variable is defined as:
E(X^n) = Σ_{x∈S} x^n f(x), if X is discrete, and E(X^n) = ∫_0^∞ x^n f(x) dx, if X is continuous. [1.8]
F(x_p) = P(X ≤ x_p) = p. [1.10]
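As a concrete illustration of these measures, the following sketch evaluates the raw moments (which, for an exponential random variable, have the known closed form E(X^n) = n!/λ^n), the standard deviation and a percentile; the rate λ = 0.5 is an arbitrary choice:

```python
import math

lam = 0.5  # exponential rate; the mean is 1/lam = 2

# Raw moments E(X^n) = n!/lam^n for the exponential distribution
for n in range(1, 4):
    print(math.factorial(n) / lam ** n)  # 2.0, 8.0, 48.0

# Standard deviation: sqrt(E(X^2) - E(X)^2) = 1/lam
sd = math.sqrt(2 / lam ** 2 - (1 / lam) ** 2)
print(sd)  # 2.0

# The p-th percentile solves F(x_p) = 1 - e^{-lam x_p} = p
p = 0.9
print(-math.log(1 - p) / lam)  # ≈ 4.605
```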
DEFINITION 1.17.– If X_1 also has the same PDF as the rest of the X_n, the renewal
process {X_n : n ≥ 0} is referred to as an ordinary renewal process.
DEFINITION 1.18.– If X_1 has a different PDF than the rest of the X_n, the renewal process
{X_n : n ≥ 0} is referred to as a modified renewal process.
DEFINITION 1.19.– If X_1 has the PDF given by μF̄(x), where μ^(−1) is the mean of
the random variable with CDF given by F(x), the renewal process {X_n : n ≥ 0} is
referred to as a stationary (or equilibrium) renewal process.
REMARK 1.10.– Note that for a Poisson process (see section 1.2.3), the ordinary
renewal process and the stationary renewal process are identical.
DEFINITION 1.20.– The renewal function, M(t), is defined as the expected number of
renewals in t units of time. That is, M(t) = E(N(t)).
Suppose that S_n = Σ_{i=1}^{n} X_i. Then, we have the following key results.
One of the most celebrated equations in renewal theory is the famous renewal
equation. This is obtained by conditioning on the first renewal.
RESULT 1.8.– The renewal equation corresponding to the renewal process {X_n} is
given by:
M(t) = F(t) + ∫_0^t M(t − x) dF(x), t ≥ 0. [1.13]
REMARK 1.11.– It is worth pointing out that the solution of equation [1.13] is
obtained as the function given in equation [1.14] by replacing the functions M(·) and
F(·) inside the integral in equation [1.13] with F(·) and M(·), respectively.
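Equation [1.13] can also be solved numerically by discretizing the convolution integral; the sketch below (a right-endpoint Riemann sum, with a grid size chosen arbitrarily) uses exponential inter-renewal times, for which M(t) = λt is known exactly:

```python
import math

# Right-endpoint Riemann discretization of M(t) = F(t) + ∫_0^t M(t-x) dF(x),
# with exponential(lam) inter-renewals, for which M(t) = lam*t exactly
lam, T, n = 2.0, 5.0, 2000
h = T / n

F = [1.0 - math.exp(-lam * i * h) for i in range(n + 1)]
f = [lam * math.exp(-lam * i * h) for i in range(n + 1)]

M = [0.0] * (n + 1)
for i in range(1, n + 1):
    M[i] = F[i] + h * sum(M[i - j] * f[j] for j in range(1, i + 1))

print(round(M[n]))  # close to lam*T = 10
```

The O(h) bias of the right-endpoint rule makes the computed value slightly below λT; refining the grid shrinks the gap.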
Suppose that G(t, z) = Σ_{n=0}^{∞} z^n P(N(t) = n), t ≥ 0, |z| ≤ 1, denotes the
probability generating function of N(t), and g*(s, z) is the LST of G(t, z). That is,
g*(s, z) is the joint transform.
RESULT 1.10.– The Laplace transform, M*(s), of M(t) under the various renewal
processes is:
M*(s) = f*(s)/(s[1 − f*(s)]), for the ordinary renewal process;
M*(s) = f_1*(s)/(s[1 − f*(s)]), for the modified renewal process;
M*(s) = μ/s², for the stationary renewal process. [1.16]
REMARK 1.12.– We note that for the stationary renewal process, M(t) = μt, where
1/μ is the mean of the underlying random variable.
RESULT 1.11.– In the case of an ordinary renewal process, the renewal function
asymptotically approaches μt. That is:
lim_{t→∞} M(t)/t = μ. [1.17]
Suppose that m(t) denotes the renewal density. That is, m(t) = M′(t). Note that
m(t)dt gives the expected number of renewals in (t, t + dt).
RESULT 1.12.– Suppose that m*(s) denotes the LT of m(t). It is easy to verify that:
m*(s) = f*(s)/[1 − f*(s)], for the ordinary renewal process;
m*(s) = f_1*(s)/[1 − f*(s)], for the modified renewal process. [1.18]
RESULT 1.13.– In the case of an ordinary renewal process, the renewal density,
m_o(t), asymptotically approaches μ. That is:
lim_{t→∞} m_o(t) = μ. [1.19]
RESULT 1.14.– Suppose that m_1*(s) is the LT of the density, m_1(t), of the modified
renewal process. Then using equation [1.18] we get:
m_1*(s) = f_1*(s) + m_1*(s) f*(s), [1.20]
from which we get the following integral equation. This equation plays an important
role in stochastic modeling:
m_1(t) = f_1(t) + ∫_0^t m_1(t − x) f(x) dx. [1.21]
RESULT 1.15.– The integral equation for the ordinary renewal process can be
obtained with an argument similar to the one leading to result 1.8, or by simply
differentiating equation [1.13], and is given by:
m_o(t) = f(t) + ∫_0^t m_o(t − x) f(x) dx. [1.23]
REMARK 1.13.– Noting that f(x)dx gives the probability that a (first) renewal occurs
in (x, x + dx), the integral equation given in equation [1.23] can be given a nice
probabilistic interpretation: the LHS gives the probability of a renewal in a small time
interval near t, and the RHS is the sum of the probability of a (first) renewal near t and
the probability that there is a renewal near t − x followed by an inter-renewal time of
duration x, 0 < x < t.
RESULT 1.16.– Suppose that r(t) is defined as the forward recurrence time (also
referred to as the residual lifetime). That is, r(t) = S_{N(t)+1} − t. If f_r(t, x) denotes
the probability density function of r(t), then it is known that (see, e.g. Cox (1962)):
f_r(t, x) = f(t + x) + ∫_0^t h(t − u) f(u + x) du, [1.25]
where h(·) denotes the renewal density.
RESULT 1.18.– A function satisfying any of the following conditions is guaranteed to
be DRI (note that these conditions are sufficient, not necessary!):
1) The function is non-negative, continuous and has finite support.
2) The function is non-negative, continuous and bounded, such that the upper Riemann
sum is bounded.
3) The function is non-negative, monotone non-increasing and Riemann
integrable.
4) The function is non-negative and bounded above by a DRI function.
RESULT 1.19.– Key renewal theorem: Suppose that k(t) is a DRI function. Then, we
have:
lim_{t→∞} ∫_0^t k(t − x) dM(x) = lim_{t→∞} k∗M(t) = lim_{t→∞} M∗k(t) = μ ∫_0^∞ k(t) dt, [1.26]
where 1/μ is the mean of the underlying inter-renewal time.
In the previous section, we talked about renewal processes that do not terminate;
that is, F(∞) = 1. However, there are times when we need to study a terminating (also
referred to as transient) renewal process; that is, a situation wherein
F(∞) < 1. In this case, the integral equation for renewals, as well as the key renewal
theorem, need to be applied differently.
RESULT 1.20.– For a terminating renewal process, {X_n : n ≥ 1}, with CDF F(·),
the associated counting process, N(t), is such that N = N(∞) < ∞ almost surely.
That is, the number of renewals, N, during [0, ∞) is finite almost surely. Furthermore,
N has the geometric distribution P(N = n) = (F(∞))^n (1 − F(∞)), n ≥ 0.
RESULT 1.21.– The renewal function, M(t), for the terminating renewal process is
such that we have:
M(∞) = 1/[1 − F(∞)]. [1.28]
RESULT 1.22.– For any DRI function k(t) for which k(∞) = lim_{t→∞} k(t) exists, we
have:
lim_{t→∞} k∗M(t) = k(∞)/[1 − F(∞)]. [1.29]
RESULT 1.23.– If the terminating renewal process, {X_n : n ≥ 1}, is a delayed one
(i.e. the initial one has a different distribution function, say, H(·)), then for any DRI
function k(t) for which k(∞) = lim_{t→∞} k(t) exists, we have:
lim_{t→∞} k∗M(t) = k(∞)H(∞)/[1 − F(∞)]. [1.30]
One of the most celebrated stochastic processes is the Poisson process, that is, a
renewal process whose inter-arrival times follow an exponential distribution. We will
briefly summarize some key results related to Poisson processes for our needs here.
Any additional ones needed will be mentioned in appropriate places.
f(t) = λe^(−λt), t ≥ 0;
F(t) = 1 − e^(−λt), t ≥ 0;
f*(s) = λ/(s + λ). [1.31]
RESULT 1.25.– The exponential distribution possesses the famous memoryless
property:
P(X > t + s | X > t) = P(X > s), for all s, t ≥ 0. [1.32]
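The memoryless property lends itself to a quick simulation check; the sketch below (all parameter values are arbitrary choices) compares the empirical conditional tail probability with the unconditional one:

```python
import random

random.seed(1)
lam, t, s, n = 1.0, 0.8, 1.2, 300_000

x = [random.expovariate(lam) for _ in range(n)]

# Empirical P(X > t + s | X > t) versus the unconditional P(X > s)
survivors = [v for v in x if v > t]
cond = sum(v > t + s for v in survivors) / len(survivors)
uncond = sum(v > s for v in x) / n

print(round(cond, 2), round(uncond, 2))  # both ≈ exp(-lam*s) ≈ 0.30
```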
RESULT 1.26.– The counting process, N(t), denoting the number of Poisson arrivals
by time t, has the following properties:
1) P[N(t) = k] = e^(−λt) (λt)^k / k!, k = 0, 1, 2, · · · .
RESULT 1.27.– The renewal function, M(t), for the Poisson process with parameter
λ is obviously a linear function. That is, M(t) = λt, for t ≥ 0. One can see this by
looking at result 1.17 for the current case, noting that f*(s) = λ/(λ + s) yields
M*(s) = λ/s², implying M(t) = λt.
RESULT 1.28.– We have:
P(N(t) ≥ n) = P(S_n ≤ t) = Σ_{i=n}^{∞} e^(−λt) (λt)^i / i!, n ≥ 0, [1.33]
and the PDF of S_n is given by:
f_{S_n}(t) = λe^(−λt) (λt)^(n−1) / (n − 1)!, t ≥ 0. [1.34]
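Equation [1.33] can be checked by simulating S_n as a sum of n exponentials and comparing the empirical value of P(S_n ≤ t) with the Poisson tail sum (all parameter values are arbitrary choices):

```python
import math, random

random.seed(3)
lam, t, n, trials = 2.0, 3.0, 4, 200_000

# Left side of [1.33]: the Poisson tail probability P(N(t) >= n)
tail = 1.0 - sum(math.exp(-lam * t) * (lam * t) ** i / math.factorial(i)
                 for i in range(n))

# Right side: P(S_n <= t), estimated by simulating S_n = X_1 + ... + X_n
hits = sum(sum(random.expovariate(lam) for _ in range(n)) <= t
           for _ in range(trials))

print(round(tail, 2), round(hits / trials, 2))  # both ≈ 0.85
```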
REMARK 1.14.– The result in equation [1.34] is intuitively obvious. In order for the
nth renewal to occur in (t, t + dt), n − 1 renewals should have occurred by time t, and
a renewal occurs in the small interval. Note that this PDF corresponds to the celebrated
Erlang random variable, which will be used extensively in this two-volume book.
REMARK 1.15.– The density function given in equation [1.34] is that of an Erlang
(a gamma family) random variable.
RESULT 1.29.– The superposition of two or more Poisson processes is again a Poisson
process.
RESULT 1.30.– Given that exactly one Poisson event has occurred by time t, the
distribution of the time of the occurrence of this event is uniform on [0, t]. That is:
P(X_1 < s | N(t) = 1) = P(X_1 < s, N(t) = 1)/P(N(t) = 1)
= P(N(s) = 1, N(t − s) = 0)/P(N(t) = 1) = s/t. [1.35]
RESULT 1.31.– Given that N(t) = n, the joint distribution of the arrival times,
S_1, · · · , S_n, is that of the joint distribution of the order statistics of n uniformly
distributed random variables on (0, t). That is, for 0 < u_1 < · · · < u_n < t, the
conditional density is given by:
f(u_1, · · · , u_n | N(t) = n)
= P(N(u_1) = 1, N(u_2 − u_1) = 1, · · · , N(u_n − u_{n−1}) = 1, N(t − u_n) = 0)/P(N(t) = n)
= n!/t^n.
Matrix theory plays an important role in many areas such as business, economics,
statistics, engineering, finance, stochastic modeling and other applied fields. Also, it
is fairly easy to introduce this subject at the undergraduate level so that students
get familiar with the concepts as well as apply them to advanced fields such as
Markov chains and queues.
In this section, we briefly summarize some of the key results and properties of
matrices that are crucial to MAM. For details we refer to books such as Dhrymes
(2013); Marcus and Minc (1964); Graham (1981); Seneta (2006); and Steeb and Hardy
(2011).
1.3.1. Basics
REMARK 1.18.– A diagonal matrix whose diagonal entries are all 1 is called an
identity matrix and will be denoted by I_m = Δ{1, · · · , 1}.
DEFINITION 1.28.– The basic matrix operations are: (1) addition of matrices of the
same dimensions; thus, A_{m×n} + B_{m×n} = C_{m×n} with c_{i,j} = a_{i,j} + b_{i,j}; (2) scalar
multiplication of a matrix: dA = (d a_{i,j}); (3) multiplication of two matrices, which requires
that the number of columns of the left matrix equal the number of rows of the right.
Thus, the matrix product A_{m×n} B_{q×r} makes sense if and only if n = q. Similarly,
B_{q×r} A_{m×n} makes sense if and only if r = m. The product A_{m×n} B_{n×r} yields a
matrix C_{m×r} = (c_{i,j}), where c_{i,j} = Σ_{k=1}^{n} a_{i,k} b_{k,j}.
DEFINITION 1.29.– The Hadamard (or Schur) product of two matrices, A_{m×n} and B_{m×n},
denoted A ◦ B, is defined as A ◦ B = B ◦ A = (a_{i,j} b_{i,j}). That is, one takes element-wise
products.
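The distinction between the ordinary matrix product and the Hadamard product is worth seeing side by side; a minimal sketch for two 2 × 2 matrices (the entries are arbitrary choices):

```python
# Ordinary matrix product versus the Hadamard (element-wise) product
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

matmul = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
hadamard = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]

print(matmul)    # [[19, 22], [43, 50]]
print(hadamard)  # [[5, 12], [21, 32]]
```

Note that the Hadamard product always commutes, while the ordinary product generally does not.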
DEFINITION 1.30.– A square matrix A_m is said to be stable if and only if the following
three conditions are satisfied:
1) a_{i,i} < 0, for all 1 ≤ i ≤ m;
2) a_{i,j} ≥ 0, for all 1 ≤ i, j ≤ m with i ≠ j;
3) Σ_{j=1}^{m} a_{i,j} ≤ 0, for all 1 ≤ i ≤ m, and Σ_{j=1}^{m} a_{i,j} < 0 for at least one i.
REMARK 1.19.– Note that a stable matrix is always semi-stable, but the converse is
not true.
REMARK 1.20.– The rank of A_{m×n} cannot exceed the minimum of m and n.
REMARK 1.22.– It is worth pointing out that we do not take the approach of
discussing MAM in the context of Toeplitz and asymptotically Toeplitz matrices.
However, readers interested in that approach may consult the books by Bini et al.
(2005) and Dudin et al. (2020).
DEFINITION 1.35.– A matrix A is non-negative if and only if all its elements are
non-negative.
DEFINITION 1.36.– The trace, tr(A), of a square matrix A is defined as the sum of
its diagonal elements. That is, tr(A) = Σ_{i=1}^{m} a_{i,i}.
REMARK 1.25.– If A and B are square matrices, then tr(A + B) = tr(A) + tr(B)
and tr(AB) = tr(BA).
REMARK 1.26.– For a square matrix A_m, we have |A^T| = |A|. Further, if B = dA,
then |B| = d^m |A|.
DEFINITION 1.38.– Suppose that we obtain B_{m−1} from A_m by deleting the ith row
and the jth column. The quantity (−1)^(i+j) |B_{m−1}| is called the cofactor of the element
a_{i,j}.
The following result is a matrix analog of the binomial theorem (when the matrices
A and B commute). Since, to the author's knowledge, a proof of this result has not
appeared in the literature, a proof is given.
RESULT 1.34.– Suppose that A and B are square matrices of order m. Define the
matrix polynomial f(z) as f(z) = (A + zB)^n. Then, we have:
f(z) = Σ_{k=0}^{n} z^k F_{k,n},
where the square matrices, F_{k,n}, of order m, are recursively computed (in that order)
as follows:
F_{0,0} = I_m, F_{0,1} = A, F_{1,1} = B,
F_{0,r} = A F_{0,r−1}, 2 ≤ r ≤ n,
F_{k,r} = A F_{k,r−1} + B F_{k−1,r−1}, 1 ≤ k ≤ r − 1, 2 ≤ r ≤ n,
F_{r,r} = B F_{r−1,r−1}, 2 ≤ r ≤ n.
Assume now that the result is true for (A + zB)^i, i = 1, · · · , r − 1. Thus, using
the fact that F_{k,r−1} is the coefficient of z^k, k = 0, · · · , r − 1, in the expansion of
(A + zB)^(r−1), we have:
(A + zB)^r = (A + zB) Σ_{k=0}^{r−1} z^k F_{k,r−1},
which, from the recursive equations, shows that the result is true for i = r. This
completes the proof.
P ROOF.– First note that Fk,n has nk terms involving the products of the powers of A
and B. When A and B commute, all different ways of producing the product of An−k
and B k will result in the same product.
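The recursion of result 1.34 lends itself to direct implementation. Below is a minimal NumPy sketch (an illustration, not code from the book): it builds the coefficients by repeatedly multiplying the current expansion by (A + zB), so the z^k coefficient at step r picks up A F_{k,r−1} from the A factor and B F_{k−1,r−1} from the zB factor, and then spot-checks the expansion at one value of z.

```python
import numpy as np

def poly_coeffs(A, B, n):
    """Return [F_0, ..., F_n] with (A + z*B)**n == sum_k z**k * F_k."""
    m = A.shape[0]
    F = [np.eye(m)]                      # expansion of (A + zB)**0
    for _ in range(n):
        new = []
        for k in range(len(F) + 1):
            term = np.zeros((m, m))
            if k < len(F):               # contribution A * F_{k, r-1}
                term += A @ F[k]
            if k > 0:                    # contribution B * F_{k-1, r-1}
                term += B @ F[k - 1]
            new.append(term)
        F = new
    return F

# Spot-check against direct evaluation at z = 0.7 (illustrative matrices).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
n, z = 4, 0.7
F = poly_coeffs(A, B, n)
direct = np.linalg.matrix_power(A + z * B, n)
viacoef = sum(z**k * F[k] for k in range(n + 1))
```

Note that F_0 = A^n and F_n = B^n fall out of the recursion as the boundary cases.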
D EFINITION 1.42.– Given a square matrix A, the roots of its characteristic equation
are called eigenvalues or characteristic roots. That is, ξ is said to be an eigenvalue of
A, if it satisfies:
det(ξI − A) = 0. [1.43]
D EFINITION 1.43.– Given a square matrix A, the vectors u and v are, respectively,
called the left and right eigenvectors corresponding to the eigenvalue ξ if:

uA = ξu and Av = ξv.
R ESULT 1.37.– Suppose that A is non-singular, then the eigenvalues of A are non-
zero. Further, the eigenvalues of A−1 are reciprocals of the eigenvalues of A.
R ESULT 1.38.– The eigenvalues of A and A^T are the same. Moreover, if u is a left
eigenvector of A corresponding to ξ, then u^T is a right eigenvector of A^T
corresponding to ξ, and conversely.
D EFINITION 1.44.– Two square matrices, A and B, are said to be similar if there
exists a non-singular matrix, say, P , such that P −1 AP = B. The matrix P is referred
to as a similarity transformation matrix.
R ESULT 1.41.– A square matrix A_m is diagonalizable if and only if it has a set of m
linearly independent eigenvectors; these are used as the columns of P in P^{−1}AP = Δ,
where Δ is a diagonal matrix.
R ESULT 1.45.– Suppose that A and B are square matrices of the same order. Then AB
and BA have identical eigenvalues; moreover, if v is a right eigenvector of AB
corresponding to a nonzero eigenvalue ξ, then Bv is a right eigenvector of BA
corresponding to ξ.
R ESULT 1.46.– Suppose that A_m is a square matrix. Then, for any ε > 0, there exists
a B_m such that Σ_{i,j} |a_{i,j} − b_{i,j}| < ε and B has distinct eigenvalues.
R ESULT 1.48.– Suppose that A is a positive square matrix. Then, for all u ≥ 0 with
u ≠ 0, we have A u > 0.
R ESULT 1.49.– Suppose that A is a positive square matrix. Then the maximum
eigenvalue of A is positive.
R ESULT 1.51.– Suppose that A is an irreducible stable matrix. Let θ ≥ max_i |a_{i,i}|.
Then the matrix B = I + (1/θ)A is irreducible and non-negative. Writing A = θ(B − I),
and noting that all eigenvalues of B are less than 1 in modulus, from result 1.36 we
infer that all eigenvalues of A are of the form ξ = θ(λ − 1), where λ is an eigenvalue
of B. Thus, all eigenvalues of A have strictly negative real parts. This also implies
that stable matrices are non-singular.
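Result 1.51 is easy to illustrate numerically. In the sketch below, A is an illustrative subgenerator-type stable matrix (negative diagonal, non-negative off-diagonal, row sums ≤ 0; the values are chosen arbitrarily), θ = max_i |a_{i,i}|, and B = I + A/θ:

```python
import numpy as np

# An illustrative irreducible stable matrix.
A = np.array([[-3.0,  1.0],
              [ 2.0, -4.0]])

theta = np.max(np.abs(np.diag(A)))     # theta >= max_i |a_ii|
B = np.eye(2) + A / theta              # B = I + (1/theta) A

nonneg = np.all(B >= 0)                # B is non-negative
eigA = np.linalg.eigvals(A)            # eigenvalues theta*(lambda - 1)
neg_real = np.all(eigA.real < 0)       # all real parts strictly negative
```

For this example the eigenvalues of A are −2 and −5, so A is indeed non-singular, consistent with the closing observation of the result.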
R EMARK 1.29.– We can also say that a square matrix, A, is a stable matrix if all its
eigenvalues have strictly negative real parts. Similarly, a square matrix, A, is said to
be semi-stable if all its eigenvalues have non-positive real parts. These are important
observations and will be used in later chapters.
R EMARK 1.30.– The spectral radius (or the maximal eigenvalue) of a non-negative
matrix plays an important role in stochastic modeling. We propose (based on our
experience) using Elsner’s algorithm to compute the spectral radius. Suppose that A
is an irreducible non-negative matrix with η as its spectral radius. If A is not
irreducible (which occurs commonly in many applications), then one can identify the
principal submatrix of A with the spectral radius and then apply Elsner’s algorithm to
this submatrix. Elsner’s algorithm is easy to implement and also converges fast.
The algorithm produces, at each step n, a positive row vector u^{(n)} and quantities
S_j^{(n)} bracketing the spectral radius:

S_{ν_n}^{(n)} ≤ S_j^{(n)} ≤ S_{μ_n}^{(n)}, 1 ≤ j ≤ m,

with

lim_{n→∞} S_{μ_n}^{(n)} = lim_{n→∞} S_{ν_n}^{(n)} = sp(B), u^{(n)} → u,

where u is the left eigenvector satisfying uB = sp(B)u.
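As a rough illustration (this is plain power iteration with the Collatz–Wielandt ratios, not necessarily Elsner's exact update rule), the sketch below brackets sp(B) for a primitive non-negative matrix. Convergence of this simple variant requires B to be primitive (irreducible and aperiodic); the example matrix is illustrative:

```python
import numpy as np

def spectral_radius(B, tol=1e-12, max_iter=10_000):
    """Bracket sp(B) for a primitive non-negative matrix B.

    At each step, min_j S_j <= sp(B) <= max_j S_j, where
    S_j = (u B)_j / u_j; the gap shrinks as u approaches the
    left Perron eigenvector.
    """
    u = np.ones(B.shape[0])
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        v = u @ B
        S = v / u
        lo, hi = S.min(), S.max()
        if hi - lo < tol:
            break
        u = v / v.sum()                # renormalize to avoid overflow
    return lo, hi

lo, hi = spectral_radius(np.array([[1.0, 2.0], [3.0, 4.0]]))
```

For this matrix the spectral radius is (5 + √33)/2, and the bracket collapses onto it within a few iterations.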
R ESULT 1.53.– If a square matrix A_m has all its eigenvalues less than 1 in modulus,
then we have:

(I − A)^{−n} = Σ_{k=0}^{∞} (n+k−1 choose k) A^k, n ≥ 1. [1.45]
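Equation [1.45] can be spot-checked by truncating the series; in the sketch below the matrix and the truncation point are illustrative choices (the matrix has spectral radius well below 1, so the truncated sum converges quickly):

```python
import numpy as np
from math import comb

A = np.array([[0.1, 0.2],
              [0.3, 0.1]])            # spectral radius well below 1
n = 3
m = A.shape[0]

# Truncated series sum_{k=0}^{K} C(n+k-1, k) A^k for (I - A)^{-n}.
K = 200
series = sum(comb(n + k - 1, k) * np.linalg.matrix_power(A, k)
             for k in range(K + 1))

direct = np.linalg.matrix_power(np.linalg.inv(np.eye(m) - A), n)
```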
R ESULT 1.54.– If A_m and B_n are non-singular and if C_{m×n} and D_{n×m} are two
matrices, then:

[A + CBD]^{−1} = A^{−1} − A^{−1} C (B^{−1} + D A^{−1} C)^{−1} D A^{−1}.
R ESULT 1.55.– Suppose that A_m is non-singular and that a and b are, respectively,
column and row vectors of dimension m. Then, for any scalar c such that
1 + c b A^{−1} a ≠ 0, we have:

[A + c a b]^{−1} = A^{−1} − (c / (1 + c b A^{−1} a)) A^{−1} a b A^{−1}.
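Result 1.55 is the Sherman–Morrison identity, with the scalar c entering the correction term as c/(1 + c b A^{−1} a). A quick numerical check with illustrative values:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
a = np.array([[1.0], [2.0]])          # column vector
b = np.array([[3.0, 1.0]])            # row vector
c = 0.5

Ainv = np.linalg.inv(A)
denom = 1.0 + c * (b @ Ainv @ a)      # scalar 1 + c * b A^{-1} a
sm = Ainv - (c / denom) * (Ainv @ a @ b @ Ainv)

# Compare against a direct inverse of the rank-one update.
direct = np.linalg.inv(A + c * (a @ b))
```

The identity avoids re-inverting A after a rank-one update, which is the point of using it in computations.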
R ESULT 1.59.– Suppose that the partitioned matrix A (see equation [1.46]) is non-
singular but A11 and A22 are singular. Suppose that C = (A11 + A12 A21)^{−1} and
D = (A22 − A21 C A12 − A21 C A12 A22)^{−1} exist. Then, the inverse of A is given by:

A^{−1} = ( E1 E2 ; E3 E4 ), where:

E1 = C + C A12 (I + A22) D A21 C,
E2 = E1 A12 − C A12 (I + A22) D,
E3 = −D A21 C,
E4 = D − D A21 C A12.
R EMARK 1.31.– From the definition, we see that ∂x/∂x = I_n.

R ESULT 1.63.– If the matrix A_n is independent of x, then we have ∂(Ax)/∂x = A.
R ESULT 1.64.– If the matrix A_{m×n} is independent of both z, a column vector of
dimension m, and x, a column vector of dimension n, then we have:

∂(z^T A x)/∂z = x^T A^T and ∂(z^T A x)/∂x = z^T A. [1.47]
R ESULT 1.65.– If the matrix A_n is independent of x, which is a column vector of
dimension n, then we have ∂(x^T A x)/∂x = x^T (A + A^T).
R ESULT 1.66.– Suppose that the matrix A_n(x) is a function of x and that A_n^{−1}(x)
exists. Then, we have:

(d/dx) A_n^{−1}(x) = −A_n^{−1}(x) [(d/dx) A_n(x)] A_n^{−1}(x). [1.48]
R ESULT 1.67.– Suppose that A and B are two matrices for which addition makes sense.
Then, we have (d/dx)[A + xB] = B.
R ESULT 1.68.– Suppose that the matrix operations shown below are valid. Then, we
have the following:

(d/dx) [A^T (B + xC)^{−1} D] = −A^T (B + xC)^{−1} C (B + xC)^{−1} D. [1.49]
R ESULT 1.69.– Suppose that f(A) and g(A) are functions of A. Assume that the
products shown below are well defined. Then, we have:

(d/dA){[f(A)]^T g(A)} = [(d/dA) f(A)] g(A) + [(d/dA) g(A)] f(A). [1.50]
R ESULT 1.70.– Suppose that A and B are square matrices of dimension m. Suppose
that f(z) = (A + zB)^n, where z is a scalar and n is a non-negative integer. Then, we
have:

df/dz = Σ_{k=0}^{n−1} (A + zB)^k B (A + zB)^{n−1−k}, [1.51]

where A^0 = I_m.
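Formula [1.51] can be checked against a central finite difference; the matrices, the values of n and z, and the step size h below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
n, z = 5, 0.3

def f(z):
    """The matrix polynomial f(z) = (A + z*B)**n."""
    return np.linalg.matrix_power(A + z * B, n)

# Right-hand side of [1.51]: sum_k (A+zB)^k B (A+zB)^{n-1-k}.
M = A + z * B
analytic = sum(np.linalg.matrix_power(M, k) @ B @
               np.linalg.matrix_power(M, n - 1 - k) for k in range(n))

# Central difference approximation of df/dz.
h = 1e-6
numeric = (f(z + h) - f(z - h)) / (2 * h)
```

The summands cannot be merged into n(A+zB)^{n−1}B unless A and B commute, which is exactly why the derivative keeps the full sum.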
R ESULT 1.71.– Suppose that a is a scalar and that A_m is a square matrix. Then, we
have:

e^{aI_m + A} = e^a e^A. [1.53]

Further, if A and B commute (AB = BA), then:

e^{A+B} = e^A e^B = e^B e^A. [1.54]
R ESULT 1.75.– Suppose that Am is a square matrix with an eigenvalue ξ. Let u and
v denote, respectively, the left and right eigenvectors of A corresponding to ξ. Then,
eξ is an eigenvalue of eA , and u and v are, respectively, the left and right eigenvectors
of eA corresponding to eξ .
uA = ξu ⇒ uA^n = ξ^n u, n ≥ 1, [1.57]

u e^A = e^ξ u. [1.58]

(d/dx) e^{Ax} = A e^{Ax} = e^{Ax} A.
R ESULT 1.80.– Below, we assume that the matrix operations such as multiplications
and additions are meaningful.
(A + B) ⊗ C = (A ⊗ C) + (B ⊗ C),
A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C),
(A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).
(A ⊗ B)^T = A^T ⊗ B^T.
R ESULT 1.81.– If A and B are non-singular matrices, then (A⊗ B)−1 = A−1 ⊗B −1 .
R EMARK 1.33.– In general, the Kronecker product operation is not commutative. That is,
in general A ⊗ B ≠ B ⊗ A.
D EFINITION 1.53.– Suppose that A and B are two square matrices of dimension m
and n, respectively. The Kronecker sum, denoted by A ⊕ B, is defined as:

A ⊕ B = A ⊗ I_n + I_m ⊗ B.
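The definition translates directly into NumPy via np.kron. A useful consequence of the definition is that the eigenvalues of A ⊕ B are the pairwise sums of the eigenvalues of A and B, which the illustrative diagonal example below exhibits:

```python
import numpy as np

A = np.diag([1.0, 2.0])                # eigenvalues {1, 2}
B = np.diag([3.0, 4.0])                # eigenvalues {3, 4}
m, n = A.shape[0], B.shape[0]

# Kronecker sum: A (+) B = A (x) I_n + I_m (x) B, of order m*n.
ksum = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

# Its eigenvalues are all pairwise sums of eigenvalues of A and B.
eigs = np.sort(np.linalg.eigvals(ksum).real)
```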