Introduction to Matrix-Analytic Methods in Queues 1: Analytical and Simulation Approach – Basics, by Srinivas R. Chakravarthy

The document is an introduction to matrix-analytic methods in queues, focusing on both analytical and simulation approaches. It covers various topics including probability concepts, Markov chains, and different types of phase-type distributions. The book is authored by Srinivas R. Chakravarthy and was first published in 2022.

Uploaded by

dovalgurdau5
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
100% found this document useful (1 vote)
21 views

Introduction To Matrix Analytic Methods In Queues 1 Analytical And Simulation Approach Basics Srinivas R Chakravarthy instant download

The document is an introduction to matrix-analytic methods in queues, focusing on both analytical and simulation approaches. It covers various topics including probability concepts, Markov chains, and different types of phase-type distributions. The book is authored by Srinivas R. Chakravarthy and was first published in 2022.

Uploaded by

dovalgurdau5
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 77

Introduction to Matrix-Analytic Methods in Queues 1
This book is dedicated to my parents:
Mrs. P.S. Rajalakshmi and Mr. P.S.S. Raghavan;
to my professors:
Dr. Marcel F. Neuts and Dr. K.N. Venkataraman;
and to His Holiness
Sri Maha Periyava (Sri Chandrasekharendra Saraswati Mahaswamigal)
of Kanchi Kamakoti Peetham
Series Editor
Nikolaos Limnios

Introduction to
Matrix-Analytic Methods
in Queues 1

Analytical and Simulation


Approach – Basics

Srinivas R. Chakravarthy
First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:

ISTE Ltd John Wiley & Sons, Inc.


27-37 St George’s Road 111 River Street
London SW19 4EU Hoboken, NJ 07030
UK USA

www.iste.co.uk www.wiley.com

© ISTE Ltd 2022


The rights of Srinivas R. Chakravarthy to be identified as the author of this work have been asserted by
him in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2022935180

British Library Cataloguing-in-Publication Data


A CIP record for this book is available from the British Library
ISBN 978-1-78630-732-3
Contents

List of Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1. Probability concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1. Random variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.2. Discrete probability functions . . . . . . . . . . . . . . . . . . . . . 6
1.1.3. Probability generating function . . . . . . . . . . . . . . . . . . . . 7
1.1.4. Continuous probability functions . . . . . . . . . . . . . . . . . . . 7
1.1.5. Laplace transform and Laplace-Stieltjes transform . . . . . . . . . 9
1.1.6. Measures of a random variable . . . . . . . . . . . . . . . . . . . . . 10
1.2. Renewal process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1. Renewal function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.2. Terminating renewal process . . . . . . . . . . . . . . . . . . . . . . 15
1.2.3. Poisson process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3. Matrix analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.1. Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.2. Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . 23
1.3.3. Partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.3.4. Matrix differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.3.5. Exponential matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3.6. Kronecker products and Kronecker sums . . . . . . . . . . . . . . . 32
1.3.7. Vectorization (or direct sums) of matrices . . . . . . . . . . . . . . 33

Chapter 2. Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35


2.1. Discrete-time Markov chains (DTMC) . . . . . . . . . . . . . . . . . . 36
2.1.1. Basic concepts, key definitions and results . . . . . . . . . . . . . . 36

2.1.2. Computation of the steady-state probability vector of DTMC . . . 43


2.1.3. Absorbing DTMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.1.4. Taboo probabilities in DTMC . . . . . . . . . . . . . . . . . . . . . 47
2.2. Continuous-time Markov chain (CTMC) . . . . . . . . . . . . . . . . . 48
2.2.1. Basic concepts, key definitions and results . . . . . . . . . . . . . . 48
2.2.2. Computation of exponential matrix . . . . . . . . . . . . . . . . . . 52
2.2.3. Computation of the limiting probabilities of CTMC . . . . . . . . . 57
2.2.4. Computation of the mean first passage times . . . . . . . . . . . . . 58
2.3. Semi-Markov and Markov renewal processes . . . . . . . . . . . . . . . 61

Chapter 3. Discrete Phase Type Distributions . . . . . . . . . . . . . . 71


3.1. Discrete phase type (DPH) distribution . . . . . . . . . . . . . . . . . . 72
3.2. DPH renewal processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.3. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Chapter 4. Continuous Phase Type Distributions . . . . . . . . . . . . 101


4.1. Continuous phase type (CPH) distribution . . . . . . . . . . . . . . . . 101
4.2. CPH renewal process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Chapter 5. Discrete-Batch Markovian Arrival Process . . . . . . . . . 143


5.1. Discrete-batch Markovian arrival process (D-BMAP) . . . . . . . . . . 144
5.2. Counting process associated with the D-BMAP . . . . . . . . . . . . . 152
5.3. Generation of D-MAP processes for numerical purposes . . . . . . . . 162
5.4. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

Chapter 6. Continuous-Batch Markovian Arrival Process . . . . . . . 171


6.1. Continuous-time batch Markovian arrival process (BMAP) . . . . . . . 171
6.2. Counting processes associated with BMAP . . . . . . . . . . . . . . . . 177
6.3. Generation of MAP processes for numerical purposes . . . . . . . . . . 198
6.4. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

Chapter 7. Matrix-Analytic Methods (Discrete-Time) . . . . . . . . . . 213


7.1. M/G/1-paradigm (scalar case) . . . . . . . . . . . . . . . . . . . . . . . 215
7.2. M/G/1-paradigm (matrix case) . . . . . . . . . . . . . . . . . . . . . . . 224
7.3. GI/M/1-paradigm (scalar case) . . . . . . . . . . . . . . . . . . . . . . . 244
7.4. GI/M/1-paradigm (matrix case) . . . . . . . . . . . . . . . . . . . . . . 252
7.5. QBD process (scalar case) . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.6. QBD process (matrix case) . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.7. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278

Chapter 8. Matrix-Analytic Methods (Continuous-time) . . . . . . . . 291


8.1. M/G/1-type (scalar case) . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8.2. M/G/1-type (matrix case) . . . . . . . . . . . . . . . . . . . . . . . . . . 295
8.3. GI/M/1-type (scalar case) . . . . . . . . . . . . . . . . . . . . . . . . . . 297
8.4. GI/M/1-type (matrix case) . . . . . . . . . . . . . . . . . . . . . . . . . 300
8.5. QBD process (scalar case) . . . . . . . . . . . . . . . . . . . . . . . . . 304
8.6. QBD process (matrix case) . . . . . . . . . . . . . . . . . . . . . . . . . 305
8.7. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

Chapter 9. Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321


9.1. Production and manufacturing . . . . . . . . . . . . . . . . . . . . . . . 322
9.2. Service sectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
9.2.1. Healthcare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.2.2. Artificial Intelligence and the Internet of Things . . . . . . . . . . . 324
9.2.3. Biological and medicine . . . . . . . . . . . . . . . . . . . . . . . . 325
9.2.4. Telecommunications . . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.2.5. Supply chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.2.6. Consumer issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335

Summary of Volume 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339


List of Notations

Symbols

e column vector, with all elements equal to 1, of appropriate dimension


e^T row vector, with all elements equal to 1, of appropriate dimension
e_i column vector with 1 in the ith position and 0 elsewhere
I identity matrix
⊗ Kronecker product
⊕ Kronecker sum
Δ(.) diagonal matrix with entries given in the parentheses

Abbreviations

BMAP batch Markovian arrival process


BP busy period
CDF (cumulative) distribution function
CPH continuous phase type
CTMC continuous-time Markov chain
D-BMAP discrete-time BMAP (batch Markovian arrival process)
DPH discrete phase type
DRI directly Riemann integrable
DTMC discrete-time Markov chain

EMC embedded Markov chain


LST Laplace-Stieltjes transform
LT Laplace transform
MAM matrix-analytic methods
MAP Markovian arrival process
MC Markov chain
MRP Markov renewal process
PDF probability density function
PGF probability generating function
PH phase type
PMF probability mass function
TPM transition probability matrix
VMPP versatile Markovian point process
Preface

The introduction of the phase type (PH) distributions in the early 1970s by
Marcel Neuts opened up a wide range of possibilities in Applied Probability
modeling and ushered in the idea that finding computable, numerical solutions was
an acceptable and desirable goal in analyzing stochastic models. Furthermore, he
popularized incorporating the computational aspects in the study of stochastic
models. It gave researchers a powerful new tool that enabled them to move beyond
the traditional models limited to exponential processes for analytical convenience to
studying more realistic stochastic models with algorithmic solutions and simple,
elegant probabilistic interpretations. The goal of building models with
computationally tractable solutions rather than the abstract transform-based solutions
took root. This rapidly led to an entirely new area of research on the study of
stochastic models in queues, inventory, reliability, and communication networks
using matrix-analytic methods (MAM). The versatile Markovian point process
(VMPP) was introduced by Neuts in the late 1970s. This process was used in the
study of a single-server queueing system with general services by one of Neuts’s
students, V. Ramaswami, for his PhD dissertation. In 1990, this VMPP was studied
differently as a batch Markovian arrival process (BMAP) by Neuts and his students
David Lucantoni and Kathy Meier-Hellstern. At that time it was thought that VMPP
was a special case of BMAP, but it was proved that BMAP and VMPP are the same.
However, the compact and transparent notations with which BMAP is described
allowed the readers to understand this versatile point process with relative ease, and
since then VMPP is referred to as BMAP in the literature. In the case of single
arrivals, the process is referred to as a Markovian arrival process (MAP).

The study of stochastic models possessing matrix-geometric solutions (thus


extending the geometric solution result for the scalar case for Poisson arrivals and
exponential services in a single server queue) by Neuts in the late 1970s and the
introduction of PH distributions, BMAP, and an emphasis on the algorithmic
approach, paved the way for Neuts to introduce MAM. Ever since, these

methods have been extensively studied both theoretically and computationally in the
context of a variety of stochastic models useful in many applied areas. A handful of
books starting with Neuts’s two classical books, Matrix-Geometric Solutions in
Stochastic Models: An Algorithmic Approach, originally published in 1981, and
Structured Stochastic Matrices of M/G/1 Type and Their Applications in 1989, to the
latest one, The Theory of Queuing Systems with Correlated Flows by Dudin,
Klimenok and Vishnevsky in 2020, have appeared in the literature that deal with
MAM. The other books published from 1989 to 2020 include Introduction to Matrix
Analytic Methods in Stochastic Modeling by Latouche and Ramaswami (1999);
Numerical Methods for Structured Markov Chains by Bini et al. (2005); Queueing
Theory for Telecommunications by Alfa (2010); Fundamentals of Matrix-Analytic
Methods by He (2014); and Matrix-Exponential Distributions in Applied Probability
by Bladt and Nielsen (2017).

All the texts mentioned above provide an excellent foundation of a variety of


stochastic models in general and of theoretical properties and applications of MAM
to those models. The present work takes a different approach by covering the basics
of MAM but focusing on clearly illustrating its use in analyzing many stochastic
models. It is also my strong belief that the art of model building and analysis is better
learned by studying carefully constructed examples and by practicing those skills on
other models. A text that incorporates the mathematical ideas of MAM along with
clearly illustrated examples on the use of these methods in analyzing interesting
stochastic models makes MAM more accessible to current researchers. I believe this
juxtaposition reinforces the power of MAM, enables one to appreciate and get better
in the art of model building, and helps in improving “probabilistic thinking” of
models and solutions. It is also for these reasons that I have included a large
collection of exercises, most of which are computational in nature, for the reader to
practice, experiment and get the experience in the algorithmic and computational
procedures. It should be pointed out that the illustrative examples and the exercises
are deliberately made more generic so as to let the readers modify them for their
areas of applications. This approach was motivated by the feedback that I have had
from numerous graduate students and fellow researchers in the past.

Furthermore, the text also contains exploration of simulation as an integral tool in


studying stochastic models that do not admit analytical or numerical solutions.
Finally, the text provides detailed explanations of the mathematics and the
applications of MAM, making it accessible to graduate students with a strong
background in probability and mathematics.

Thus, the text is a useful source of reference for researchers established in this
field and, more importantly, a valuable, inviting guide for those venturing into this
rich research area.

This two-volume book is organized as follows: Volume 1 has nine chapters.


Chapter 1 reviews basic concepts in probability and matrix theory. Chapter 2 has a
brief review of discrete-state-discrete-time and discrete-state-continuous-time
Markov chains. These two chapters are not meant to be an exhaustive and thorough
review of those topics but hopefully sufficient enough to understand the rest of the
materials in the two-volume book. In Chapter 3, discrete-time phase type distribution
is presented. Chapter 4 focuses on the continuous-time phase type distributions.
Chapters 5 and 6, respectively, cover discrete and continuous time BMAP. The basics
of MAM from both discrete time (through embedded epochs) and continuous time
points of view along with many examples and exercises are, respectively, presented
in Chapters 7 and 8. A brief summary of the applications of queueing (and in turn
MAM) is given in Chapter 9. The presence of numerous detailed solutions and
exercises benefits students by piquing their interest in MAM, helping them learn and
understand basic concepts, and succeed in constructing and solving models of their
own in their research. The solutions to these exercises can be found at the following
link: www.iste.co.uk/chakravarthy/queues1.zip.

Volume 2 contains seven chapters. In Chapters 1, 2, and 3, respectively,


single-server queues are studied by looking at departure, arrivals, and at arbitrary
epochs. Chapter 4 focuses on the busy periods in queues. Selected multi-server
queues are studied in Chapter 5, and finite capacity queues are the focus in Chapter 6.
Finally in Chapter 7, we present analysis of queues via simulation using ARENA, a
powerful simulation software. In this chapter we also provide a very brief
introduction to ARENA. All these chapters have exercises.

This two-volume book can be used in a number of settings. Senior undergraduate


students (with sufficient background in probability) and Master’s level graduate
students could use Volume 1 to get an understanding of the fundamentals of MAM.
Research scholars pursuing MPhil and PhD degrees can start with Volume 1 and,
after going through the basics covered there, move on to finishing Volume 2. For
research scholars pursuing MPhil and PhD degrees, the two volumes would
constitute a two- to three-semester course.

Writing this book has been a lot of fun but also a challenge. However, my family,
friends, and mentors helped me to meet that challenge. I take great pleasure in
acknowledging them. This book project would not have been possible without the
educational foundation, moral support, encouragement, and critical analysis of
teachers, friends, and families.

Specifically, I want to acknowledge the following people who made a positive


difference.

– My (late) father, P.S.S. Raghavan, for being a role model. My mother, P.S.S.
Rajalakshmi, for her encouragement. Both my parents made many sacrifices that
enabled me to first go to college and, later on, to leave for the United States to pursue
higher studies.
– My sister, Vasumathi Parthasarathy, for exposing me to mathematics at a very
young age.
– My (late) Professors Marcel F. Neuts and K.N. Venkataraman. While K.N.V.
gave me an opportunity to learn probability theory under him while in India, M.F.N.
showed me the path to MAM. I owe a debt of gratitude to him for what I am now and
for his important role in shaping my career as a teacher and a researcher.
– My college teachers, Prof. D. Ratnasabapathi (Presidency College in Madras)
and Prof. K. Suresh Chandra (University of Madras), who not only taught me statistics
but also were a source of encouragement to pursue higher studies.
– I benefited a lot through interacting with my friends and colleagues V.
Ramaswami, D.M. Lucantoni, Kathy Meier-Hellstern and S. Kumar, during my days
at Delaware.
– R. Parthasarathy (Kent State University), whom I knew from my college days in
India and who has always been there to give moral support since those days.
– My research colleagues who played key roles in my career, notably A.S. Alfa,
A.N. Dudin, A. Krishnamoorthy, and A. Rumyantsev.
– My students: Serife Ozkar, who visited from Turkey to finish up her doctoral
thesis with me at Kettering, and Shruti Goel, who attended the workshops I conducted
in India. The questions these students, along with countless others, including Alka
Choudhry, a doctoral student at the Central University of Rajasthan, India, raised
provided the impetus for this book. Furthermore, I am thankful to both Serife and
Shruti for helping me put the bibliography in the format required by the publishers.
– Several of my colleagues at Kettering, notably (the late) Duane McKeachie,
Petros Gheresus, and T.R. Chandrupatla (who retired from Rowan University recently
after serving nearly two decades at Kettering) for their friendship and encouragement
throughout my career at Kettering.
– Kettering University for its support of my sabbatical which was instrumental in
completing this book project.
– The ISTE team for their continued and timely help during the production process
of this two-volume book.
– Finally, the most important people in my life, my wife, Jayanthi Chakravarthy,
son Arvind Chakravarthy and his beloved wife, Vina Harji Chakravarthy. Since

Jayanthi and Arvind came into my life, their understanding, love and support have
helped me to focus on my research and career without any distraction. They along
with Vina have been a source of constant inspiration to finish the book. No words are
adequate to express my sincere appreciation to them.

Srinivas CHAKRAVARTHY
April 2022
1

Introduction

Stochastic (also referred to as random) models play an important role in our


day-to-day activities. Just look around your living or working space and see the
items that are needed on a daily basis. Dairy products, appliances, electronic devices, to
name a few among several dozens of items, are manufactured/grown/cultivated and
shipped from the factories to nearby stores for consumers. How are these delivered?
A supply chain plays a major part in all of these, and as is well-known most things
involved in this chain are random. How do we study randomness? Through
probability. Whether someone likes probability or is allergic to it, it is there to rescue
us and provide a good quality of life. While this book is being written, the whole
planet has been affected by Covid-19 disrupting almost everyone in one form or the
other. The containers used in shipping items via cargo ships are scarce, resulting in
prohibitively high costs; trucks needed to haul away the filled containers after
dropping the empty ones are in short supply, making all the cargo ships anchor close
to the ports where they arrive. According to the news, the bottleneck is expected to
last well into 2022. The point of this discussion is to illustrate the havoc played by
uncertainty and why it is important to have several scenarios analyzed through
stochastic modeling.

In this chapter, we present key concepts and results on probability, random


variables, renewal processes including Poisson, and matrix analysis. For full details
we refer the reader to the books cited in their respective sections.

The chapter is organized as follows. In section 1.1, probability concepts are


reviewed. The basic concepts related to renewal processes are reviewed in section
1.2, and finally the matrix concepts, including Kronecker products and Kronecker
sums, which play an important role in matrix-analytic methods (MAM), are reviewed
in section 1.3.


1.1. Probability concepts

Even though probability theory was used to describe the experiences connected
with games of chance and the calculation of certain probabilities, the main purpose is
to discover the general rules and to construct satisfactory theoretical models for
problems under study. Most phenomena in our lives are random, and probability
modeling is vital to understanding them and taking appropriate actions. A brief history
of probability is given below for those interested in it.

The history of probability is in essence separate from the history of statistics,


although statistics relies on probability as the foundation of quantitative inference.
Probability theory is believed to have been started by two famous French
mathematicians, Blaise Pascal (1623–1662) and Pierre de Fermat (1601–1665),
primarily in games of chance (until the 19th century its motivation remained mainly
in this field). Later on a number of well-known mathematicians such as Jacob
Bernoulli (1654–1705), Nicholas Bernoulli (1687–1759), Abraham de Moivre
(1667–1754) and Pierre Simon de Laplace (1749–1827) developed the theory in a
much more general setup. During the 19th century the French school (working on the
foundations laid by Laplace) and the Russian school were very influential in the
development of probability theory we see now. In the French school, to name a few,
the main contributors were Siméon Poisson (1781–1840), Augustin Cauchy
(1789–1857), Jules Bienaymé (1796–1878), Joseph Bertrand (1822–1900) and Henri
Poincaré (1854–1912). From the Russian school, V. Ya. Buniakovsky (1801–1889),
P.L. Chebyshev (1821–1894), A.A. Markov (1856–1922), A.M. Liapunov
(1857–1918) and A.N. Kolmogorov (1903–1987) were very significant contributors
to probability theory. It was Kolmogorov who, in 1933, introduced the now generally
accepted axiomatic approach suitable to probability theory and random processes.
The books on probability theory by Paul Lévy, Harald Cramér, B.V. Gnedenko and
A.N. Kolmogorov, Michel Loève and William Feller provided an impetus to the
development of modern probability theory.

There are three major definitions of probability, namely, axiomatic, frequency and
classical. Each one has its own merits and demerits. The axiomatic approach is mainly
used in developing the mathematical theory of probability. The frequency approach
gives an intuitive notion of probability. However, the computation of probability in
practice is based on the classical approach.

Suppose that S is a sample space (i.e. S is the set of all possible outcomes of an
experiment) and Ω is the set of all possible subsets of S. For example, consider
the experiment of throwing a six-sided die. It can readily be seen that
S = {1, 2, 3, 4, 5, 6} and Ω = {∅, {1}, · · · , {6}, {1, 2}, · · · , {5, 6}, · · · , S},
where ∅ is the null (or empty) set. The empty set corresponds to outcomes that are
impossible, such as seeing the number 7 or a negative number. Note that the cardinality

of Ω for this example is 2^6 = 64. In general, if the sample space S has a finite
number, say, N, of elements, then the cardinality of Ω is 2^N.
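As a quick sanity check (a small Python sketch, not part of the text), one can enumerate Ω for the die example and confirm that its cardinality is 2^6 = 64:

```python
from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s, i.e. the event space Omega."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {1, 2, 3, 4, 5, 6}      # sample space of a six-sided die
Omega = power_set(S)

print(len(Omega))           # 64, i.e. 2**6, including the empty set and S itself
```

The same enumeration with a sample space of N elements yields 2^N subsets.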

DEFINITION 1.1.– Probability is a real-valued (set) function defined on Ω


corresponding to the sample space, S. That is, the probability of an event A, denoted
by P (A), is a real number such that the following axioms (a set of rules) are satisfied.
1) P (A) ≥ 0,
2) P (S) = 1,
 
3) If Ai ∩ Aj = ∅ for all i ≠ j, then P(∪i Ai) = Σi P(Ai).

REMARK 1.1.– Note that axiom (3) implies that if A and B are mutually exclusive
then P(A ∪ B) = P(A) + P(B).
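The additivity property can be checked numerically under the classical (equally likely) interpretation; the events A and B below are illustrative choices, not from the text:

```python
from fractions import Fraction

# Fair six-sided die: each outcome equally likely (classical approach).
S = {1, 2, 3, 4, 5, 6}
P = lambda event: Fraction(len(event), len(S))

A = {1, 2}            # two mutually exclusive events: A ∩ B = ∅
B = {5, 6}
assert A & B == set()

# Axiom (3) for two events: P(A ∪ B) = P(A) + P(B)
assert P(A | B) == P(A) + P(B) == Fraction(2, 3)
```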

REMARK 1.2.– The axiomatic approach is used mainly in developing the


mathematical theory of probability. The axiomatic approach does not tell us how to
compute the probability of an event. One can assign probabilities to the events in S
arbitrarily as long as the axioms are not violated. Thus, there are an infinite number
of ways of doing this. Because of this a different method based on frequency
approach was developed.

The probabilities of events of interest are computed only based on the sample space
and with no other prior information related to the events. Sometimes it is convenient
to compute certain unconditional probabilities by first conditioning on some event,
whose probability is easy to find. For example, suppose we draw two cards without
replacement from a pack of 52 playing cards. What is the probability that the second
card drawn will be the ace of spades?
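Conditioning on the first card resolves this: P(second is the ace of spades) = P(first is the ace) · 0 + P(first is not the ace) · 1/51 = (51/52)(1/51) = 1/52, the same as the unconditional answer by symmetry. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Condition on the first card: either it is the ace of spades or it is not.
p_first_is_AS  = Fraction(1, 52)
p_first_not_AS = Fraction(51, 52)

# If the first card was the ace, the second cannot be; otherwise 1 of the
# remaining 51 cards is the ace of spades.
p_second_is_AS = p_first_is_AS * 0 + p_first_not_AS * Fraction(1, 51)

print(p_second_is_AS)   # 1/52
```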

Conditional probabilities also play an important role in stochastic models, one of


the areas that deals with solving real-world problems. For example, consider a
communication network that transmits messages from one node to another node.
Arriving messages in sending node are stored in a finite buffer and transmitted on a
first-come first-served basis. Note that not all messages are going to be admitted
because of the finite capacity of the buffer. The computation of the probability of an
admitted message finding exactly, say, k messages requires the use of conditional
probability. Even waiting time problems have to use conditional probabilities to
compute the distribution (and its measures) of the time spent in the system by an
arriving customer. Later on, when we deal with Markov chains, conditional
probability and conditional expectations play an important role.

Conditional probability is one of the most important concepts in probability and


statistics. Although A. de Moivre explicitly introduced the concepts of independence
and conditional probability, it was Kolmogorov who studied the notion of conditional
probability in the general form.
4 Introduction to Matrix-Analytic Methods in Queues 1

We are often interested in computing probabilities of certain events when some
partial information concerning the results of experiments is given. Also, in some
calculations of probabilities, it is often convenient to compute them by conditioning
on certain events. In probability theory, the models under study
are usually described by specifying the appropriate conditional probabilities or
conditional distributions. The main topics, such as Bayesian inference, estimation
theory, tests of hypotheses and decision theory, in statistics use several notions of
conditioning.

D EFINITION 1.2.– The conditional probability of B given A is defined as:

P (B|A) = P (A ∩ B)/P (A), if P (A) > 0.

R EMARK 1.3.– In the above definition of conditional probability, the requirement


that P (A) > 0 is very natural, since the conditional probability is defined on the
assumption that A has occurred. However, for theoretical purposes P (B|A) is not
defined if P (A) = 0. But for all practical examples we do not have to worry about
this case.

R EMARK 1.4.– Suppose that P (A) > 0 and P (B) > 0. Then one of the following
will occur. Either
1) P (B|A) < P (B); in this case we say that A carries negative information about
B; or
2) P (B|A) > P (B); in this case we say that A carries positive information about
B; or
3) P (B|A) = P (B); in this case we say that A does not contain any information
about B.

R EMARK 1.5.– It is very easy to show that if A carries negative (positive or no)
information about B, then B also carries negative (positive or no) information about
A. The concepts of positive and negative information in conditional probability were
first introduced by K.L. Chung (1942).

From the discussion of the notion of conditional probability we see that all general
theorems on probabilities are also valid for conditional probabilities.

Law of Total Probability: Suppose that Ai , 1 ≤ i ≤ n, are n mutually exclusive
and collectively exhaustive events in S. That is, Ai ∩ Aj = ∅ for i ≠ j, and
∪_{i=1}^n Ai = S. Then for any event B in S, we have:

P (B) = Σ_{i=1}^n P (B|Ai )P (Ai ).
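As a quick illustration (our own sketch, with hypothetical variable names), the two-card question posed earlier can be settled by conditioning on the first draw and applying the law of total probability:

```python
from fractions import Fraction

# Conditioning on the first draw:
# B  = "second card drawn is the ace of spades"
# A1 = "first card drawn is the ace of spades", A2 = complement of A1
p_A1 = Fraction(1, 52)
p_A2 = Fraction(51, 52)
p_B_given_A1 = Fraction(0)         # the ace of spades is already gone
p_B_given_A2 = Fraction(1, 51)     # one specific card among the 51 remaining

# P(B) = P(B|A1)P(A1) + P(B|A2)P(A2)
p_B = p_B_given_A1 * p_A1 + p_B_given_A2 * p_A2
print(p_B)  # 1/52
```

Conditioning removes the dependence between the draws: by symmetry, every position in the deck is equally likely to hold the ace of spades, so the answer matches that for the first draw.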

D EFINITION 1.3.– Two non-empty events A and B are said to be independent if the
occurrence (or non-occurrence) of A does not affect the occurrence (non-occurrence)
of B.

R EMARK 1.6.– The following are trivial pairs of independent events: A and ∅, A and
S, S and ∅.

R EMARK 1.7.– The above notion of independence is referred to as pairwise


independence. If only two events are considered at any given time then we will freely
use either one. However, when more than two events are considered, we need to
specify the type of independence such as pairwise or mutually independent (see
below).

The notion of independence plays an important role in probability, statistics and


other applied areas that rely on probability and statistics. Sometimes the assumption of
independence is misused and one has to pay a price for this. An example of this would
be the following. In computer communications area, one of the important quantities of
interest is the time taken for the token (a tag given to customers for sending packets of
messages from one node to another) to return to the starting point. The times between
successive arrivals of the token to a particular node are obviously not independent.
However, in practice, people do assume independence, largely to keep the model
mathematically tractable. Depending on the situation, this assumption may be a
serious one.

D EFINITION 1.4.– Mathematically, non-empty events A and B are independent if any
one of the following conditions holds; note that any one of them implies the other two:
1) P (B|A) = P (B);
2) P (A ∩ B) = P (A)P (B);
3) P (A|B) = P (A).

D EFINITION 1.5.– Events Ai , 1 ≤ i ≤ n, in S are said to be mutually independent
if, for every choice of r distinct indices i1 , · · · , ir from among 1, 2, · · · , n and for
every r = 2, 3, · · · , n, we have:

P (Ai1 ∩ Ai2 ∩ · · · ∩ Air ) = P (Ai1 )P (Ai2 ) · · · P (Air ).

1.1.1. Random variables

In probability and statistics, most of the time the quantities that are of interest
are not the outcomes of an experiment under study but rather the values associated
with the outcome of the experiment. For example, when n items from the output of
a process are inspected, the quality control inspector is concerned with the total
number of defectives out of the n chosen and the corresponding probabilities, rather
than the way those defectives, if any, were selected. In this section, we review the
important concept of a random variable and the probability functions associated with
it.

D EFINITION 1.6.– A random variable is a real-valued function defined on the sample


space S into the set of real numbers.

A random variable is discrete if it takes only discrete values. Examples of such a
random variable are: (1) number of defectives out of a sample of n items chosen; (2)
number of cycles before the failure of a pressure regulator that controls downstream
water pressure; (3) number of phone calls arriving at an office during a day; (4) number
of misprints on a printed page; (5) number of messages arriving at a communication
node during an hour; (6) number of industrial accidents in a given week; (7) number
of molds made in a month; and (8) number of packages delivered in a week.

A random variable that takes values in an interval is said to be a continuous random


variable. Some examples of a continuous random variable are: (1) lifetime of a light
bulb; (2) time taken by a machine to complete a job; (3) length of a phone call; (4)
length of service time at a teller; (5) time between successive arrivals of messages;
and (6) delay time in receiving a token in a transmission system.

1.1.2. Discrete probability functions

The study of random variables is done through the probability functions associated
with them. For a discrete random variable X, the function f (x), defined as f (x) =
P (X = x), is called the probability mass function (PMF) of X.

D EFINITION 1.7.– A given function f (x) is a PMF of a (discrete) random variable if
and only if the following conditions are satisfied:
1) f (x) ≥ 0, for all x, and
2) Σ_x f (x) = 1.

Some well-known probability mass functions in the context of stochastic modeling


used in this book are listed below and for others we refer the reader to any textbook
on probability and statistics.
1) Uniform:

f (x) = 1/N , for x = a1 , · · · , aN , and 0 elsewhere.

2) Poisson:

f (x) = e^{−λ} λ^x /x! , for x = 0, 1, · · · , and 0 elsewhere.

3) Geometric:

f (x) = p (1 − p)^x , for x = 0, 1, · · · , and 0 elsewhere.
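As a sanity check (ours, not part of the text), the Poisson and geometric PMFs above can be evaluated directly and verified to satisfy condition (2) of definition 1.7; the function names are our own:

```python
import math

def poisson_pmf(x, lam):
    # f(x) = e^{-lam} * lam^x / x!, x = 0, 1, ...
    return math.exp(-lam) * lam**x / math.factorial(x)

def geometric_pmf(x, p):
    # f(x) = p * (1 - p)^x, x = 0, 1, ...
    return p * (1 - p) ** x

# Both PMFs must sum to 1 over their support; the truncation points
# are chosen so that the omitted tails are numerically negligible.
total_poisson = sum(poisson_pmf(x, 2.5) for x in range(100))
total_geometric = sum(geometric_pmf(x, 0.3) for x in range(200))
print(total_poisson, total_geometric)  # both ≈ 1
```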

1.1.3. Probability generating function

The probability generating function (PGF) is key in deriving and proving results
in stochastic models. So, we review it here.

D EFINITION 1.8.– Suppose that X is a discrete random variable with P (X = k) =
ak . Then the PGF is defined as:

a(z) = Σ_k z^k ak , |z| ≤ 1. [1.1]

1.1.4. Continuous probability functions

In the continuous case, the probability function analogous to the PMF of a discrete
random variable is called the probability density function (PDF). This function is
continuous on its domain except possibly at a finite number of points, and the
definition is given below. For a continuous random variable X taking values in the
interval (a, b), −∞ < a < b < ∞, the function f (x) defined as:

f (x)dx = P (x < X < x + dx), x ∈ (a, b),

is called the probability density function of X.

D EFINITION 1.9.– The function f (x) is the PDF of a continuous random variable X
on (a, b) if and only if f (x) satisfies the following conditions:
1) f (x) ≥ 0, for all x, and
2) ∫_a^b f (x)dx = 1.

Some well-known probability density functions used in this book in the context of
stochastic modeling are listed below and for others we refer the reader to any textbook
on probability and statistics.

1) Uniform:

f (x) = 1/(b − a), for a ≤ x ≤ b, and 0 elsewhere.

2) Exponential:

f (x) = λ e^{−λx}, for x ≥ 0, λ > 0, and 0 elsewhere.

3) Erlang of order m:

f (x) = (λ^m /(m − 1)!) x^{m−1} e^{−λx}, for x ≥ 0, λ > 0, and 0 elsewhere.

4) Hyperexponential of order m with mixing probability vector p and the
parameter vector λ:

f (x) = Σ_{j=1}^m pj λj e^{−λj x}, for x ≥ 0, λj > 0, 1 ≤ j ≤ m, and 0 elsewhere.

5) Gamma:

f (x) = (1/(β^α Γ(α))) x^{α−1} e^{−x/β}, for α > 0, β > 0, x ≥ 0, and 0 elsewhere.

6) Weibull:

f (x) = (α/β)(x/β)^{α−1} e^{−(x/β)^α}, for α > 0, β > 0, x ≥ 0, and 0 elsewhere.

7) Beta:

f (x) = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1}, for α > 0, β > 0, 0 ≤ x ≤ 1,
and 0 elsewhere.

[Note: In the above, Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx.]
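As an illustrative check (ours), the Erlang density above can be integrated numerically with a simple trapezoidal rule; the grid width and truncation point are arbitrary choices:

```python
import math

def erlang_pdf(x, m, lam):
    # f(x) = lam^m / (m-1)! * x^(m-1) * e^(-lam x), x >= 0
    return lam**m / math.factorial(m - 1) * x ** (m - 1) * math.exp(-lam * x)

m, lam, h = 3, 2.0, 1e-3
xs = [i * h for i in range(int(30 / h) + 1)]   # grid on [0, 30]; tail is negligible
ys = [erlang_pdf(x, m, lam) for x in xs]

area = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))  # trapezoidal rule for the total mass
mean = h * sum(x * y for x, y in zip(xs, ys))  # crude approximation of E(X)
print(area, mean)
```

The integral comes out ≈ 1 and the numerical mean ≈ m/λ = 1.5, the known Erlang mean.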

Whether a random variable, X, is discrete, continuous or a combination of both,
its (cumulative) distribution function is defined as follows.

D EFINITION 1.10.– The (cumulative) distribution function (CDF) of a random
variable X, denoted as, say, F (x), is defined as:

F (x) = P (X ≤ x), −∞ < x < ∞. [1.2]

R EMARK 1.8.– Note that the CDF is a right-continuous and non-decreasing function,
which tends to 0 as x → −∞ and goes to 1 as x → ∞.

1.1.5. Laplace transform and Laplace-Stieltjes transform

In this section, we briefly discuss the Laplace transform (LT) that plays an
important role in stochastic modeling.

Since we are focusing on queueing and related topics in this two-volume book, we
assume that the underlying random variables are all non-negative.

D EFINITION 1.11.– Suppose that f (x) is the PDF of a non-negative random variable,
X. The Laplace transform (LT) of f (x) (or equivalently of X) is defined as:

f*(s) = ∫_0^∞ e^{−sx} f (x)dx, Re(s) ≥ 0. [1.3]

D EFINITION 1.12.– Suppose that F (x) is the CDF of a non-negative random variable,
X. The Laplace-Stieltjes transform (LST) of F (x) (or equivalently of X) is defined as:

F*(s) = ∫_0^∞ e^{−sx} dF (x), Re(s) ≥ 0. [1.4]

D EFINITION 1.13.– Suppose that F̄ (x) = P (X > x) is the tail probability function
of a non-negative random variable, X. The Laplace-Stieltjes transform (LST) of F̄ (x)
is defined as:

F̄*(s) = ∫_0^∞ e^{−sx} dF̄ (x), Re(s) ≥ 0. [1.5]

R EMARK 1.9.– Since probabilistic interpretation plays a key role in stochastic
modeling, we give one such interpretation for the LST of a continuous random
variable X with PDF f (·). Suppose that X is the lifetime of a component, and that a
catastrophic event, modeled by a random variable Y having an exponential
distribution with parameter s, could affect (or kill) the component. Then the LST of
f (x) gives the probability that the component completes its lifetime before the
catastrophic event occurs.
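The catastrophe interpretation in remark 1.9 can be checked by simulation when X itself is exponential, in which case f*(s) = λ/(λ + s); this Monte Carlo sketch and its parameter values are ours:

```python
import random

random.seed(42)
lam, s, n = 1.5, 0.8, 200_000

# X ~ Exp(lam) is the lifetime, Y ~ Exp(s) the catastrophe time;
# P(X < Y) should equal the LST f*(s) = lam / (lam + s).
hits = sum(random.expovariate(lam) < random.expovariate(s) for _ in range(n))
estimate = hits / n
exact = lam / (lam + s)
print(estimate, exact)
```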

R ESULT 1.1.– Since F (t) = ∫_0^t f (x)dx, the LST of F (t) is the same as the LT of
f (t). That is:

F*(s) = ∫_0^∞ e^{−sx} dF (x) = ∫_0^∞ e^{−sx} f (x)dx, Re(s) ≥ 0. [1.6]

R ESULT 1.2.– The LT of F (t) and the LT of F̄ (t) are given by:

F̂*(s) = ∫_0^∞ e^{−sx} F (x)dx = ∫_0^∞ e^{−sx} ∫_0^x f (y)dy dx = f*(s)/s, Re(s) > 0,
[1.7]
F̄̂*(s) = ∫_0^∞ e^{−sx} ∫_x^∞ f (y)dy dx = (1 − f*(s))/s, Re(s) > 0.
R ESULT 1.3.– (Abelian theorem)

For some a > 0, lim_{t→∞} F (t)/t^a = b/Γ(a + 1) ⇒ lim_{s→0+} s^a F*(s) = b.

R ESULT 1.4.– (Tauberian theorem)

For some a > 0, lim_{s→0+} s^a F*(s) = b ⇒ lim_{t→∞} F (t)/t^a = b/Γ(a + 1).

1.1.6. Measures of a random variable

So far, we discussed how a random variable is studied through the probability


functions associated with it. Most of the time the information conveyed in the
probability functions can be effectively summarized by their general shapes and
locations of certain parameters. For a great many distributions these characteristics
can be fully described by a small number of numerical quantities (also referred to as
measures) that are unique to the distributions under study. Also, in many instances
one is not interested in studying the random variable in detail but only in getting some
idea about the nature of the random variable itself. The following example should
motivate the need for this section.

Suppose that one is planning to buy a new model car. Among all other criteria for
buying a new car, let us assume that the person gives priority to a car that gives good
mileage. The MPG (miles per gallon) of a new model car is a random variable (why?).
But the person is not interested in knowing the probability distribution of this random
variable, only the average MPG. Of course, the average MPG depends on a number
of variables, such as the size of the car, power of the engine and type of transmission.
However, this measure will give the person a smaller set of cars to pick from. The key
point here is how one or more measures of a random variable is used in practice. There
are several other instances, which will be seen throughout the book.

Some commonly used measures are: (1) nth raw moment, especially the first (also
referred to as expected value) and second moments; (2) standard deviation; and (3)
percentiles.

The definitions of these measures are given below.

D EFINITION 1.14.– The nth raw moment of a random variable is defined as:

E(X^n ) = Σ_{x∈S} x^n f (x), if X is discrete, and
E(X^n ) = ∫_0^∞ x^n f (x)dx, if X is continuous. [1.8]

D EFINITION 1.15.– The standard deviation, σX , of a random variable X is defined
as:

σX = √(E(X²) − [E(X)]²). [1.9]

D EFINITION 1.16.– The 100pth percentile of a random variable X, denoted by xp ,
for 0 < p < 1, is the solution to the equation:

F (xp ) = P (X ≤ xp ) = p. [1.10]
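For a concrete case (our own worked sketch), all three measures are available in closed form for the exponential distribution with rate λ: the mean and standard deviation are both 1/λ, and the percentile solves F(xp) = p:

```python
import math

lam = 2.0                           # illustrative exponential rate

mean = 1 / lam                      # first raw moment E(X)
second_moment = 2 / lam**2          # E(X^2)
sd = math.sqrt(second_moment - mean**2)
x90 = -math.log(1 - 0.9) / lam      # 90th percentile: solves F(x) = 0.9

print(mean, sd, x90)
```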

1.2. Renewal process

A renewal process is a sequence of random variables, {Xn : n ≥ 1}, having a


common probability (mass or density, depending on whether the random variables are
discrete or continuous) function. Suppose that we are dealing with continuous random
variables and let f (x) and F (x), respectively, denote the common PDF and CDF of
X1 , X2 , · · · . Below, we summarize some key observations and results, and refer the
reader to Cox (1962) and Karlin and Taylor (1975) for more details.

D EFINITION 1.17.– If X1 also has the same PDF as the rest of the Xn , the renewal
process {Xn : n ≥ 1} is referred to as an ordinary renewal process.

D EFINITION 1.18.– If X1 has a different PDF than the rest of the Xn , the renewal
process {Xn : n ≥ 1} is referred to as a modified (or delayed) renewal process.

D EFINITION 1.19.– If X1 has the PDF given by μF̄ (x), where μ^{−1} is the mean of
the random variable with CDF given by F (x), the renewal process {Xn : n ≥ 1} is
referred to as a stationary (or equilibrium) renewal process.

R EMARK 1.10.– Note that for a Poisson process (see section 1.2.3), the ordinary
renewal process and the stationary renewal process are identical.

1.2.1. Renewal function

Suppose that N (t) denotes the number of renewals in (0, t] corresponding to an


ordinary renewal process.

D EFINITION 1.20.– The renewal function, M (t), is defined as the expected number of
renewals in t units of time. That is, M (t) = E(N (t)).


Suppose that Sn = Σ_{i=1}^n Xi . Then, we have the following key results.

R ESULT 1.5.– N (t) ≥ n ⇐⇒ Sn ≤ t.

R ESULT 1.6.– Result 1.5 implies that:

P (N (t) ≥ n) = P (Sn ≤ t) = F^{(n)}(t), [1.11]

where F^{(n)}(·) is the n-fold convolution of F (·).

R ESULT 1.7.– The renewal function is given by:

M (t) = E[N (t)] = Σ_{n=1}^∞ n P (N (t) = n) = Σ_{n=1}^∞ F^{(n)}(t), t ≥ 0. [1.12]

One of the most celebrated equations in renewal theory is the famous renewal
equation. This is obtained by conditioning on the first renewal.

R ESULT 1.8.– The renewal equation corresponding to the renewal process {Xn } is
given by:

M (t) = F (t) + ∫_0^t M (t − x)dF (x), t ≥ 0, [1.13]

whose solution is given by:

M (t) = F (t) + ∫_0^t F (t − x)dM (x), t ≥ 0. [1.14]

R EMARK 1.11.– It is worth noting how the solution of equation [1.13] is obtained:
interchanging the functions M (·) and F (·) inside the integral in equation [1.13]
yields the function given in equation [1.14].
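The renewal equation [1.13] can also be solved numerically by discretizing the integral; the sketch below (ours, using a crude rectangle rule) takes exponential inter-renewal times, for which the renewal function M(t) = λt is known exactly:

```python
import math

lam, h, T = 1.0, 0.005, 2.0
n = int(T / h)
f = lambda x: lam * math.exp(-lam * x)   # inter-renewal PDF (exponential)
F = lambda t: 1 - math.exp(-lam * t)     # inter-renewal CDF

# M(t) = F(t) + int_0^t M(t - x) f(x) dx, discretized on a grid of step h
M = [0.0] * (n + 1)
for i in range(1, n + 1):
    M[i] = F(i * h) + h * sum(f(j * h) * M[i - j] for j in range(1, i + 1))

print(M[n], lam * T)  # the two values should be close
```

The approximation error of the rectangle rule shrinks with the step size h; a finer grid brings M(T) closer to λT at the cost of O(n²) work.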



Suppose that G(t, z) = Σ_{n=0}^∞ z^n P (N (t) = n), t ≥ 0, |z| ≤ 1, denotes the
probability generating function of N (t), and that g*(s, z) is the LST of G(t, z). That
is, g*(s, z) is the joint transform.

R ESULT 1.9.– It is easy to verify that, for |z| ≤ 1, Re(s) > 0:

g*(s, z) = (1 − f*(s)) / (s[1 − zf*(s)]), for an ordinary renewal process,

g*(s, z) = (1 − zf*(s) + zf1*(s) − f1*(s)) / (s[1 − zf*(s)]), for a modified renewal
process, [1.15]

g*(s, z) = 1/s + μ(z − 1)[1 − f*(s)] / (s²[1 − zf*(s)]), for a stationary renewal
process.

R ESULT 1.10.– The Laplace transform, M*(s), of M (t) under the various renewal
processes is:

M*(s) = f*(s) / (s[1 − f*(s)]), for an ordinary renewal process,

M*(s) = f1*(s) / (s[1 − f*(s)]), for a modified renewal process, [1.16]

M*(s) = μ/s², for a stationary renewal process.

R EMARK 1.12.– We note that for the stationary renewal process, M (t) = t/μ′, where
μ′ = 1/μ is the mean of the underlying random variable.

R ESULT 1.11.– In the case of an ordinary renewal process, the renewal function
asymptotically approaches t/μ′. That is:

lim_{t→∞} M (t)/t = 1/μ′ = μ. [1.17]
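The limit in [1.17] can be observed by simulating a renewal process; in this sketch (ours), the inter-renewal times are uniform on (0, 2), so the mean μ′ is 1 and N(t)/t should approach 1:

```python
import random

random.seed(7)
t_max = 50_000.0            # long horizon so that N(t)/t is near its limit
mean_interval = 1.0         # uniform(0, 2) has mean 1

t, n_renewals = 0.0, 0
while True:
    t += random.uniform(0.0, 2.0)   # draw an inter-renewal time
    if t > t_max:
        break
    n_renewals += 1

rate = n_renewals / t_max
print(rate)  # should be close to 1 / mean_interval = 1
```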

Suppose that m(t) denotes the renewal density. That is, m(t) = M′(t). Note that
m(t)dt gives the expected number of renewals in (t, t + dt).

R ESULT 1.12.– Suppose that m*(s) denotes the LT of m(t). It is easy to verify that:

m*(s) = f*(s) / (1 − f*(s)), for an ordinary renewal process, [1.18]

m*(s) = f1*(s) / (1 − f*(s)), for a modified renewal process.

R ESULT 1.13.– In the case of an ordinary renewal process, the renewal density,
mo (t), asymptotically approaches 1/μ′. That is:

lim_{t→∞} mo (t) = 1/μ′ = μ. [1.19]

R ESULT 1.14.– Suppose that m1*(s) is the LT of the density, m1 (t), of the modified
renewal process. Then using equation [1.18] we get:

m1*(s) = f1*(s) + m1*(s)f*(s), [1.20]

from which we get the following integral equation, which plays an important role in
stochastic modeling:

m1 (t) = f1 (t) + ∫_0^t m1 (t − x)f (x)dx, [1.21]

whose solution can be verified to be:

m1 (t) = f1 (t) + ∫_0^t f1 (t − x)mo (x)dx. [1.22]
0

R ESULT 1.15.– The integral equation for the ordinary renewal process can be
obtained by an argument similar to the one leading to result 1.8, or simply by
differentiating equation [1.13], and is given by:

mo (t) = f (t) + ∫_0^t mo (t − x)f (x)dx, [1.23]

whose solution can be verified to be:

mo (t) = f (t) + ∫_0^t f (t − x)mo (x)dx. [1.24]
0

R EMARK 1.13.– Noting that f (x)dx gives the probability that a (first) renewal occurs
in (x, x + dx), the integral equation given in equation [1.23] has a nice probabilistic
interpretation: the LHS gives the probability of a renewal in a small time interval near
t, and the RHS is the sum of the probability of a first renewal near t and the probability
that there is a renewal near t − x which is followed by an inter-renewal time of
duration x, 0 < x < t.

R ESULT 1.16.– Suppose that r(t) is defined as the forward recurrence time (also
referred to as residual lifetime). That is, r(t) = t − S_{N(t)} . If fr (t, x) denotes the
probability density function of r(t), then it is known that (see, e.g. Cox (1962)):

fr (t, x) = f (t + x) + ∫_0^t h(t − u)f (u + x)du, [1.25]

where h(·) denotes the renewal density.

R ESULT 1.17.– The limiting density function of r(t) is given by F̄ (x)/μ′ as t → ∞,
and the LST of this limiting density is given by (1 − f*(s))/(sμ′), where μ′ is the
mean inter-renewal time.

D EFINITION 1.21.– Directly Riemann integrable (DRI) functions: a real-valued
function defined on (0, ∞) is said to be DRI if and only if the lower and upper partial
(Riemann) sums taken over the intervals partitioning (0, ∞) are finite and both
converge to the same limit.

R ESULT 1.18.– A function satisfying any one of the following conditions is
guaranteed to be DRI (note that these conditions are sufficient, not necessary!):
1) the function is non-negative, continuous and has finite support;
2) the function is non-negative, continuous and bounded, with a bounded upper
Riemann sum;
3) the function is non-negative, monotone non-increasing and Riemann
integrable;
4) the function is non-negative and bounded above by a DRI function.

R ESULT 1.19.– Key renewal theorem: suppose that k(t) is a DRI function. Then, we
have:

lim_{t→∞} ∫_0^t k(t − x)dM (x) = lim_{t→∞} k ∗ M (t) = lim_{t→∞} M ∗ k(t)
= (1/μ′) ∫_0^∞ k(t)dt. [1.26]

1.2.2. Terminating renewal process

In the previous section, we talked about renewal processes that do not terminate;
that is, F (∞) = 1. However, there are times when we need to study a terminating,
also referred to as transient, renewal process; that is, we can have a situation wherein
F (∞) < 1. In this case, the integral equation for renewals as well as the key renewal
theorem need to be applied differently.

D EFINITION 1.22.– A renewal process, {Xn : n ≥ 1}, is said to be a terminating


renewal process if its common CDF, F (.), is such that F (∞) < 1.

R ESULT 1.20.– For a terminating renewal process, {Xn : n ≥ 1}, with CDF, F (·),
the associated counting process, N (t), is such that N = N (∞) < ∞, almost surely.
That is, the number of renewals, N , during [0, ∞) is finite almost surely. Furthermore,

P (N = k) = F^{k−1}(∞)(1 − F (∞)), k ≥ 1. [1.27]
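Equation [1.27] can be checked directly; in this small sketch (ours), the value F(∞) is set arbitrarily to 0.6, and the PMF is verified to sum to 1 with mean 1/(1 − F(∞)), matching the renewal function result stated next:

```python
p = 0.6   # F(inf): probability that a further renewal ever occurs (illustrative)

# P(N = k) = p^(k-1) (1 - p), k >= 1, per equation [1.27];
# truncate at k = 199, where the geometric tail is numerically negligible
pmf = {k: p ** (k - 1) * (1 - p) for k in range(1, 200)}
total = sum(pmf.values())
mean_N = sum(k * prob for k, prob in pmf.items())
print(total, mean_N)  # ≈ 1 and ≈ 1 / (1 - p) = 2.5
```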



R ESULT 1.21.– The renewal function, M (t), for the terminating renewal process is
such that:

M (∞) = 1/(1 − F (∞)). [1.28]

R ESULT 1.22.– For any DRI function k(t) for which k(∞) = lim_{t→∞} k(t) exists,
we have:

lim_{t→∞} k ∗ M (t) = k(∞)/(1 − F (∞)). [1.29]

R ESULT 1.23.– If the terminating renewal process, {Xn : n ≥ 1}, is a delayed one
(i.e. the initial interval has a different distribution function, say, H(·)), then for any
DRI function k(t) for which k(∞) = lim_{t→∞} k(t) exists, we have:

lim_{t→∞} k ∗ M (t) = k(∞)H(∞)/(1 − F (∞)). [1.30]

1.2.3. Poisson process

One of the most celebrated stochastic processes is the Poisson process. That is, a
renewal process whose inter-arrival times follow an exponential distribution. We will
briefly summarize some key results related to Poisson processes for our needs here.
Any additional ones needed will be mentioned in appropriate places.

R ESULT 1.24.– The PDF, CDF and LST of the exponential distribution with
parameter λ are:

f (t) = λe^{−λt}, t ≥ 0,
F (t) = 1 − e^{−λt}, t ≥ 0, [1.31]
f*(s) = λ/(s + λ).
R ESULT 1.25.– The exponential distribution possesses the famous memoryless
property:

P (X > t + t1 |X > t1 ) = P (X > t), t1 , t ≥ 0. [1.32]

R ESULT 1.26.– The counting process, N (t), denoting the number of Poisson arrivals
by time t, has the following properties:
1) P [N (t) = k] = e^{−λt} (λt)^k /k! , k = 0, 1, 2, · · · .
2) It possesses independent increments. That is, N (t) is independent of
N (t + s) − N (t).
3) It possesses the stationarity property. That is, the counting process depends only
on the lag as opposed to the actual time point: N (t + s) − N (t) has the same statistical
properties as N (u + s) − N (u).

R ESULT 1.27.– The renewal function, M (t), for the Poisson process with parameter
λ is obviously a linear function. That is, M (t) = λt, for t ≥ 0. One can see this by
looking at result 1.10 for the current case: noting that f*(s) = λ/(λ + s) yields
M*(s) = λ/s², implying M (t) = λt.
R ESULT 1.28.– We have:

P (N (t) ≥ n) = P (Sn ≤ t) = Σ_{i=n}^∞ e^{−λt} (λt)^i /i! , n ≥ 0, [1.33]

from which the density function of Sn is obtained as:

fSn (t) = λe^{−λt} (λt)^{n−1} /(n − 1)! , n ≥ 1, t ≥ 0. [1.34]

R EMARK 1.14.– The result in equation [1.34] is intuitively obvious. In order for the
nth renewal to occur in (t, t + dt), n − 1 renewals should have occurred by time t and
in the small interval a renewal occurs. Note that this PDF corresponds to the celebrated
Erlang random variable, which will be used extensively in this two-volume book.

R EMARK 1.15.– The above density function given in equation [1.34] is that of Erlang
(a gamma family).

R ESULT 1.29.– The superposition of two or more Poisson processes is again a Poisson
process.

R ESULT 1.30.– Given that exactly one Poisson event has occurred by time t, the
distribution of the time of occurrence of this event is uniform on [0, t]. That is:

P (X1 < s|N (t) = 1) = P (X1 < s, N (t) = 1)/P (N (t) = 1)
= P (N (s) = 1, N (t − s) = 0)/P (N (t) = 1) = s/t. [1.35]

R ESULT 1.31.– Given that N (t) = n, the joint distribution of the arrival times,
S1 , · · · , Sn , is that of the order statistics of n uniformly distributed random variables
on (0, t). That is, for 0 < u1 < · · · < un < t, the conditional density is given by:

f (u1 , · · · , un |N (t) = n)
= P (N (u1 ) = 1, N (u2 − u1 ) = 1, · · · , N (un − un−1 ) = 1, N (t − un ) = 0)
/ P (N (t) = n)
= n!/t^n .
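The conditional uniformity in result 1.30 can be observed by simulation: among realizations of a Poisson process on (0, t] with exactly one arrival, the arrival epoch should be uniform on [0, t]. The sketch and its parameters are ours:

```python
import random

random.seed(11)
lam, t, runs = 1.0, 1.0, 200_000
conditional_times = []
for _ in range(runs):
    s, arrivals = 0.0, []
    while True:
        s += random.expovariate(lam)   # exponential inter-arrival times
        if s > t:
            break
        arrivals.append(s)
    if len(arrivals) == 1:             # condition on N(t) = 1
        conditional_times.append(arrivals[0])

mean_time = sum(conditional_times) / len(conditional_times)
print(mean_time)  # ≈ t / 2, the mean of a uniform [0, t] variable
```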

1.3. Matrix analysis

Matrix theory plays an important role in many areas such as business, economics,
statistics, engineering, finance, stochastic modeling and other applied fields. Also, it
is fairly easy to introduce this subject at the undergraduate level, so that students
become familiar with the concepts as well as apply them to advanced fields such as
Markov chains and queues.

In this section, we briefly summarize some of the key results and properties of
matrices that are crucial to MAM. For details we refer to books such as Dhrymes
(2013); Marcus and Minc (1964); Graham (1981); Seneta (2006); and Steeb and Hardy
(2011).

1.3.1. Basics

D EFINITION 1.23.– A matrix A is an array of elements arranged in rows and columns.


It is defined through the dimension and the (i, j)th entries. Thus, an m × n matrix A
is defined as:
⎛ ⎞
a1,1 a1,2 · · · a1,j · · · a1,n
⎜ a2,1 a2,2 · · · a2,j · · · a2,n ⎟
⎜ ⎟
A = (ai,j ) = ⎜ . .. .. .. ⎟ . [1.36]
⎝ .. . ··· . ··· . ⎠
am,1 am,2 · · · am,j · · · am,n

R EMARK 1.16.– When the dimension of a matrix needs to be displayed, we will do so


by labeling it as subscripts. Thus, an m × n matrix A will be denoted as Am×n . Also,
uppercase letters will be used to denote a matrix and the corresponding lowercase
letter for its elements.

D EFINITION 1.24.– The transpose of Am×n = (ai,j ) is the n × m matrix obtained by


interchanging the rows and columns. The transpose notation used in this book will be
AT . Thus, AT = (aj,i ).

R EMARK 1.17.– If m = n, we say that A is a square matrix and will be denoted by


An when the dimension needs to be displayed.

D EFINITION 1.25.– A square matrix A is symmetric if and only if A = AT .

D EFINITION 1.26.– A square matrix A is said to be a diagonal matrix if and only if
ai,j = 0 for all i ≠ j. In this case, it is sometimes convenient to display it as A =
Δ{a1,1 , · · · , am,m }.

R EMARK 1.18.– A diagonal matrix such that all its diagonal entries are 1 is called an
identity matrix and will be denoted by Im = Δ{1, · · · , 1}.

D EFINITION 1.27.– A square matrix A is said to be reducible if after rearranging the


rows and the columns (i.e. some permutations of rows and columns) it can be written
as:
 
A11 A12
A= . [1.37]
0 A22

If A cannot be written as above, then it is said to be irreducible.

R ESULT 1.32.– A is irreducible ⇔ A − Δ(A) is irreducible.

D EFINITION 1.28.– The basic matrix operations are: (1) addition of matrices of the
same dimensions: Am×n + Bm×n = Cm×n with ci,j = ai,j + bi,j ; (2) scalar
multiplication of a matrix: dA = (d ai,j ); (3) multiplication of two matrices, which
requires that the number of columns of the left matrix equal the number of rows of
the right. Thus, the matrix product Am×n Bq×r makes sense if and only if n = q.
Similarly, Bq×r Am×n makes sense if and only if r = m. The product Am×n Bn×r
yields a matrix Cm×r = (ci,j ), where ci,j = Σ_{k=1}^n ai,k bk,j .

D EFINITION 1.29.– The Hadamard (or Schur) product of two matrices, Am×n and
Bm×n , denoted A ◦ B, is defined as A ◦ B = B ◦ A = (ai,j bi,j ). That is, one takes
element-wise products.

D EFINITION 1.30.– A square matrix Am is said to be stable if and only if the following
three conditions are satisfied:
1) ai,i < 0, for all 1 ≤ i ≤ m;
2) ai,j ≥ 0, for all i ≠ j, 1 ≤ i, j ≤ m;
3) Σ_{j=1}^m ai,j ≤ 0, for all 1 ≤ i ≤ m, and Σ_{j=1}^m ai,j < 0 for at least one i.

D EFINITION 1.31.– A square matrix Am is said to be semi-stable if and only if the
following three conditions are satisfied:
1) ai,i ≤ 0, for all 1 ≤ i ≤ m;
2) ai,j ≥ 0, for all i ≠ j, 1 ≤ i, j ≤ m;
3) Σ_{j=1}^m ai,j ≤ 0, for all 1 ≤ i ≤ m.

R EMARK 1.19.– Note that a stable matrix is always semi-stable but the converse is
not true.
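A small helper (ours; it reads condition (2) as applying to the off-diagonal entries, consistent with condition (1)) can test the stability conditions numerically:

```python
def is_stable(A):
    # definition 1.30: negative diagonal, non-negative off-diagonal,
    # all row sums <= 0 with at least one strictly negative row sum
    m = len(A)
    cond1 = all(A[i][i] < 0 for i in range(m))
    cond2 = all(A[i][j] >= 0 for i in range(m) for j in range(m) if i != j)
    sums = [sum(row) for row in A]
    cond3 = all(s <= 1e-12 for s in sums) and any(s < -1e-12 for s in sums)
    return cond1 and cond2 and cond3

T = [[-3.0, 1.0], [2.0, -2.0]]   # stable: row sums are -2 and 0
Q = [[-1.0, 1.0], [1.0, -1.0]]   # semi-stable only: all row sums are 0
print(is_stable(T), is_stable(Q))  # True False
```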

D EFINITION 1.32.– The rank of A is defined as the maximum number of linearly


independent rows (or columns) of A.

R EMARK 1.20.– The rank of Am×n cannot exceed the minimum of m and n.

D EFINITION 1.33.– A square matrix Am is said to have an inverse or simply be


non-singular, if and only if there exists a matrix Bm such that AB = BA = Im . We
will denote by A−1 the inverse of A.

R EMARK 1.21.– The rank of Am is m if and only if A is non-singular. In this case, it


is also said that A has full rank.

D EFINITION 1.34.– A square matrix Am is said to be a Toeplitz matrix if ai,j =
ai+r,j+r , for all i, j and r. That is, ai,j depends only on the difference j − i.

R EMARK 1.22.– It is worth pointing out that we do not take the approach of
discussing MAM in the context of Toeplitz and asymptotically Toeplitz matrices.
However, if one is interested in knowing about that approach, we recommend the
books by Bini et al. (2005) and Dudin et al. (2020).

R EMARK 1.23.– The variance-covariance matrix seen in regression analysis is a
classical example of a Toeplitz matrix. In general, a Toeplitz matrix is of the form:
⎛ ⎞
a0 a1 · · · aj−1 · · · am−1
⎜ a−1 a0 · · · aj−2 · · · am−2 ⎟
⎜ ⎟
A = (ai,j ) = ⎜ . .. .. .. ⎟ [1.38]
⎝ .. . ··· . ··· . ⎠
a1−m a2−m · · · aj−m · · · a0

D EFINITION 1.35.– A matrix A is non-negative if and only if all its elements are
non-negative.

R EMARK 1.24.– If A is a non-negative, irreducible and aperiodic (i.e. primitive)
square matrix, then there exists a positive integer, say, n*, such that A^n > 0 for all
n ≥ n*.

D EFINITION 1.36.– The trace, tr(A), of a square matrix A is defined as the sum of
the diagonal elements. That is, tr(A) = Σ_{i=1}^m ai,i .

R EMARK 1.25.– If A and B are square matrices, then tr(A + B) = tr(A) + tr(B)
and tr(AB) = tr(BA).

D EFINITION 1.37.– The determinant, denoted by |A|, of a square matrix A is defined
as:

|A| = Σ_{(j1 ,··· ,jm )} (−1)^r a1,j1 a2,j2 · · · am,jm , [1.39]

where the summation is taken over the (distinct) permutations of the integers
{1, 2, · · · , m}, and r is taken to be 0 or 1 depending on whether the number of
transpositions needed to bring the permutation (j1 , · · · , jm ) back to the natural
order (1, 2, · · · , m) is even or odd.

R EMARK 1.26.– For a square matrix Am , we have |AT | = |A|. Further, if B = dA,
then |B| = dm |A|.

D EFINITION 1.38.– Suppose that we obtain Bm−1 from Am by deleting the ith row
and the jth column. The quantity (−1)i+j |Bm−1 | is called the cofactor of the element
ai,j .


D EFINITION 1.40.– For a square matrix A, the adjoint of A, denoted usually by


adj(A), is a square matrix whose (i, j)th element is the cofactor, say, a∗j,i , of the
(j, i)th element of A. Or equivalently, the adj(A) = (A∗ )T , where A∗ = (a∗i,j ).

R EMARK 1.27.– In terms of cofactors, the determinant of A is obtained as:


m 
m
|A| = ai,j a∗i,j = ai,j a∗i,j . [1.40]
j=1 i=1

R EMARK 1.28.– In terms of cofactors, the inverse of a non-singular matrix A is


obtained as:
1
A−1 = adj(A). [1.41]
|A|

R ESULT 1.33.– If Am×n and Bn×m , then we have:


(Im + A B)−1 exists ⇔ (In + B A)−1 exists.
22 Introduction to Matrix-Analytic Methods in Queues 1

The following result is a matrix analog of the binomial theorem (which reduces to the classical binomial form when the matrices A and B commute). Since, to the author's knowledge, a proof of this result does not appear in the literature, one is given here.

RESULT 1.34.– Suppose that A and B are square matrices of order m. Define the matrix polynomial f(z) as f(z) = (A + zB)^n. Then, we have:

f(z) = \sum_{k=0}^{n} z^k F_{k,n},

where the square matrices F_{k,n}, of order m, are recursively computed (in that order) as follows:

F_{0,0} = I_m, \quad F_{0,1} = A, \quad F_{1,1} = B,
F_{0,r} = A F_{0,r-1}, \quad 2 ≤ r ≤ n,
F_{i,r} = A F_{i,r-1} + B F_{i-1,r-1}, \quad 1 ≤ i ≤ r − 1, \quad 2 ≤ r ≤ n,
F_{r,r} = B F_{r-1,r-1}, \quad 2 ≤ r ≤ n.

PROOF.– Define an upper triangular matrix K of order (n + 1)m as:

K = \begin{bmatrix} A & B & & & & \\ & A & B & & & \\ & & A & B & & \\ & & & \ddots & \ddots & \\ & & & & A & B \\ & & & & & A \end{bmatrix}.

Verify that K^r is also upper triangular and is of the form:

K^r = \begin{bmatrix} F_{0,r} & F_{1,r} & F_{2,r} & F_{3,r} & \cdots & F_{n,r} \\ & F_{0,r} & F_{1,r} & F_{2,r} & \cdots & F_{n-1,r} \\ & & F_{0,r} & F_{1,r} & \cdots & F_{n-2,r} \\ & & & \ddots & \ddots & \vdots \\ & & & & F_{0,r} & F_{1,r} \\ & & & & & F_{0,r} \end{bmatrix}, \quad 2 ≤ r ≤ n.

Note that in K^r, the matrices F_{k,r} = 0 for k ≥ r + 1. The recursive equations can easily be derived: for example, writing K^r = K K^{r-1} and applying the rule of matrix multiplication, one obtains them directly. The stated expansion of f(z) will follow immediately once we show that the matrices F_{k,r} are the coefficients of z^k, 0 ≤ k ≤ r, in the matrix expansion of (A + zB)^r. To see this, we apply induction. First note that the result is true for r = 2 by observing that (A + zB)^2 = (A + zB)(A + zB) = A^2 + z(AB + BA) + z^2 B^2 = F_{0,2} + zF_{1,2} + z^2 F_{2,2}.

Assume now that the result is true for (A + zB)^i, i = 1, \cdots, r − 1. Thus, using the fact that F_{k,r-1} is the coefficient of z^k, k = 0, \cdots, r − 1, in the expansion of (A + zB)^{r-1}, we have:

(A + zB)^r = (A + zB) \sum_{k=0}^{r-1} z^k F_{k,r-1},

which on expanding gives:

(A + zB)^r = A F_{0,r-1} + \sum_{k=1}^{r-1} z^k [A F_{k,r-1} + B F_{k-1,r-1}] + z^r B F_{r-1,r-1},

which, in view of the recursive equations, shows that the result is true for i = r. This completes the proof. □

RESULT 1.35.– If A and B commute, then we have:

f(z) = \sum_{k=0}^{n} \binom{n}{k} z^k A^{n-k} B^k. [1.42]

PROOF.– First note that F_{k,n} is a sum of \binom{n}{k} terms, each a product of n − k factors of A and k factors of B. When A and B commute, all the different orderings of these factors yield the same product A^{n-k} B^k. □
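The recursion of result 1.34 is straightforward to code. The sketch below (assuming NumPy is available; the function name is ours) builds the coefficient matrices F_{k,n} row by row and checks the expansion against a direct evaluation of (A + zB)^n.

```python
import numpy as np

def binomial_coefficients(A, B, n):
    """Recursively compute F_{0,n}, ..., F_{n,n} of result 1.34,
    so that (A + z B)^n = sum_k z^k F_{k,n}."""
    F = [np.eye(A.shape[0])]                 # row r = 0: F_{0,0} = I_m
    for r in range(1, n + 1):                # build row r from row r - 1
        prev = F
        F = [A @ prev[0]]                    # F_{0,r} = A F_{0,r-1}
        for i in range(1, r):                # interior recursion
            F.append(A @ prev[i] + B @ prev[i - 1])
        F.append(B @ prev[-1])               # F_{r,r} = B F_{r-1,r-1}
    return F

rng = np.random.default_rng(1)
A, B = rng.random((3, 3)), rng.random((3, 3))
n, z = 4, 0.7
F = binomial_coefficients(A, B, n)
direct = np.linalg.matrix_power(A + z * B, n)
via_recursion = sum(z**k * F[k] for k in range(n + 1))
err = np.max(np.abs(direct - via_recursion))
```

When A and B commute, each F_{k,n} collapses to \binom{n}{k} A^{n-k} B^k, which is result 1.35.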

1.3.2. Eigenvalues and eigenvectors

DEFINITION 1.41.– Given a square matrix A_m, the associated characteristic polynomial is the mth degree polynomial det(ξI − A); setting it equal to zero gives the characteristic equation.

DEFINITION 1.42.– Given a square matrix A, the roots of its characteristic equation are called eigenvalues or characteristic roots. That is, ξ is said to be an eigenvalue of A if it satisfies:

det(ξI − A) = 0. [1.43]

DEFINITION 1.43.– Given a square matrix A, the vectors u and v are, respectively, called the left and right eigenvectors corresponding to the eigenvalue ξ if:

u A = ξu, \quad and \quad A v = ξv. [1.44]

RESULT 1.36.– Suppose that A is a square matrix and that ξ is an eigenvalue of A. Then, for any matrix B = a_1 I + a_2 A, where a_1 and a_2 ≠ 0 are scalars, a_1 + a_2 ξ is an eigenvalue of B. The (left or right) eigenvectors of A and B are identical.

RESULT 1.37.– Suppose that A is non-singular; then the eigenvalues of A are non-zero. Further, the eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A.

RESULT 1.38.– The eigenvalues of A and A^T are the same; moreover, the left eigenvectors of A are the (transposed) right eigenvectors of A^T, and vice versa.

RESULT 1.39.– Suppose that f(·) is a polynomial. Then, if ξ is an eigenvalue of A, f(ξ) is an eigenvalue of f(A).

DEFINITION 1.44.– Two square matrices, A and B, are said to be similar if there exists a non-singular matrix, say, P, such that P^{-1}AP = B. The matrix P is referred to as a similarity transformation matrix.

RESULT 1.40.– Similar matrices have identical eigenvalues (their eigenvectors correspond through the similarity transformation P).

DEFINITION 1.45.– A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix.

RESULT 1.41.– A square matrix A_m is diagonalizable if and only if it possesses a set of m linearly independent eigenvectors; these are used as the columns of P in P^{-1}AP = Δ, where Δ is a diagonal matrix.

RESULT 1.42.– Eigenvectors corresponding to distinct eigenvalues of A are linearly independent.

DEFINITION 1.46.– The algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic equation.

DEFINITION 1.47.– The geometric multiplicity of an eigenvalue is the dimension of the eigenspace (the subspace spanned by the eigenvectors) associated with that eigenvalue.

RESULT 1.43.– The algebraic multiplicity of an eigenvalue is always greater than or equal to its geometric multiplicity.

RESULT 1.44.– A is diagonalizable ⇔ the algebraic multiplicity of ξ equals the geometric multiplicity of ξ, for every eigenvalue ξ of A.

DEFINITION 1.48.– A square matrix A is orthogonal if A^T = A^{-1}. That is, A A^T = A^T A = I, implying that the columns of A are orthonormal: the inner product of any two distinct columns is 0, and that of a column with itself is 1.

RESULT 1.45.– Suppose that A and B are square matrices of the same order. Then AB and BA have identical eigenvalues; further, if v is a right eigenvector of AB corresponding to a non-zero eigenvalue ξ, then Bv is a right eigenvector of BA corresponding to ξ.

RESULT 1.46.– Suppose that A_m is a square matrix. Then, for any ε > 0, there exists a matrix B_m such that \sum_{i,j} |a_{i,j} − b_{i,j}| < ε and B has distinct eigenvalues.

RESULT 1.47.– Suppose that A is a non-negative square matrix and that u ≥ v. Then A u ≥ A v.

RESULT 1.48.– Suppose that A is a positive square matrix. Then, for all u ≥ 0 with u ≠ 0, we have A u > 0.

RESULT 1.49.– Suppose that A is a positive square matrix. Then the maximum eigenvalue of A is positive.

RESULT 1.50.– Perron-Frobenius Result: Suppose that A is a positive square matrix of dimension n. Then we have the following:
1) There exists an eigenvalue, say, η, such that η > 0.
2) There exist strictly positive left and right eigenvectors associated with η.
3) Suppose that u and v are, respectively, left and right eigenvectors of A corresponding to η, that is, uA = ηu and Av = ηv. Then, we can normalize them so as to have uv = 1.
4) η is the maximal eigenvalue. That is, if r is any other eigenvalue, then |r| < η.
5) Eigenvectors associated with η are unique up to a constant multiple.
6) If B is any other matrix of dimension n with 0 ≤ B ≤ A and if ξ is an eigenvalue of B, then |ξ| ≤ η. Further, if |ξ| = η, then B = A.
7) η is a simple root of the characteristic equation of A.
8) The maximal eigenvalue (or spectral radius), η, lies between the minimum and maximum of the row sums of A.
9) If A is only non-negative and irreducible, the conclusions continue to hold except that 4) weakens to |r| ≤ η.

RESULT 1.51.– Suppose that A is an irreducible stable matrix. Let θ ≥ max_i |a_{i,i}|. Then, the matrix B = I + \frac{1}{θ} A is irreducible and non-negative. Writing A = θ(B − I), and noting that all eigenvalues of B are less than 1 in modulus, from result 1.36 we infer that all eigenvalues of A are of the form ξ = θ(λ − 1), where λ is an eigenvalue of B. Thus, all eigenvalues of A have strictly negative real parts. This also implies that stable matrices are non-singular.

REMARK 1.29.– We can also say that a square matrix, A, is a stable matrix if all its eigenvalues have strictly negative real parts. Similarly, a square matrix, A, is said to be semi-stable if all its eigenvalues have non-positive real parts. These are important observations and will be used in later chapters.

RESULT 1.52.– Suppose that f(x) is a polynomial in x. If ξ is an eigenvalue of a matrix A_m, then f(ξ) is an eigenvalue of f(A).

REMARK 1.30.– The spectral radius (or maximal eigenvalue) of a non-negative matrix plays an important role in stochastic modeling. We propose (based on our experience) using Elsner's algorithm to compute the spectral radius. Suppose that A is an irreducible non-negative matrix with η as its spectral radius. If A is not irreducible (which occurs commonly in many applications), then one can identify the principal submatrix of A that attains the spectral radius and apply Elsner's algorithm to that submatrix. Elsner's algorithm is easy to implement and also converges fast. The necessary steps of the algorithm are given below.

Let u^{(n)}, n ≥ 0, be positive vectors normalized as u^{(n)} e = 1.

Define S_j^{(n)} = (u^{(n)} A)_j / u_j^{(n)}, 1 ≤ j ≤ m. Find ν_n and μ_n such that:

S_{ν_n}^{(n)} ≤ S_j^{(n)} ≤ S_{μ_n}^{(n)}, \quad 1 ≤ j ≤ m.

Define, for 0 < α < 1,

d_n = \frac{S_{ν_n}^{(n)} − a_{ν_n,ν_n}}{S_{ν_n}^{(n)} − a_{ν_n,ν_n} + α(S_{μ_n}^{(n)} − S_{ν_n}^{(n)})}.

The next iterate is obtained as:

u_j^{(n+1)} = \begin{cases} \dfrac{u_j^{(n)}}{1 − (1 − d_n)\, u_{ν_n}^{(n)}}, & j ≠ ν_n, \\ \dfrac{d_n\, u_{ν_n}^{(n)}}{1 − (1 − d_n)\, u_{ν_n}^{(n)}}, & j = ν_n. \end{cases}

Elsner has proved that, for all n ≥ 0:

S_{ν_n}^{(n)} ≤ η ≤ S_{μ_n}^{(n)},

and that

\lim_{n→∞} S_{μ_n}^{(n)} = \lim_{n→∞} S_{ν_n}^{(n)} = η = sp(A), \quad u^{(n)} → u, \quad where \; u A = η u.
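The steps above can be sketched in code as follows (NumPy assumed; α = 1/2 and the function name are our choices). At every iteration the bracket [S_{ν_n}, S_{μ_n}] contains η by the Collatz-Wielandt inequalities, so the returned bracket both bounds and approximates the spectral radius.

```python
import numpy as np

def elsner(A, alpha=0.5, tol=1e-12, max_iter=5000):
    """Elsner-type iteration, as sketched above, for the spectral radius
    of an irreducible non-negative matrix A. Returns (lower, upper, u)."""
    m = A.shape[0]
    u = np.full(m, 1.0 / m)                  # u^(0) > 0, normalized u e = 1
    for _ in range(max_iter):
        S = (u @ A) / u                      # S_j = (u A)_j / u_j
        nu, mu = int(np.argmin(S)), int(np.argmax(S))
        if S[mu] - S[nu] < tol:              # bracket around eta has collapsed
            break
        d = (S[nu] - A[nu, nu]) / (S[nu] - A[nu, nu] + alpha * (S[mu] - S[nu]))
        scale = 1.0 - (1.0 - d) * u[nu]      # keeps the normalization u e = 1
        new_u = u / scale
        new_u[nu] = d * u[nu] / scale
        u = new_u
    return S[nu], S[mu], u

A = np.array([[1.0, 2.0], [3.0, 4.0]])
lo, hi, u = elsner(A)
eta = np.max(np.abs(np.linalg.eigvals(A)))   # reference spectral radius
```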

RESULT 1.53.– If a square matrix A_m has all its eigenvalues less than 1 in modulus, then we have:

(I − A)^{-n} = \sum_{k=0}^{∞} \binom{n+k-1}{k} A^k, \quad n ≥ 1. [1.45]
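A quick numerical check of [1.45] (a sketch assuming NumPy; the truncation at 500 terms is our choice) for a matrix whose spectral radius is below 1:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
A = 0.3 * rng.random((3, 3))          # row sums < 0.9, so sp(A) < 1
n = 3
direct = np.linalg.matrix_power(np.linalg.inv(np.eye(3) - A), n)

# partial sums of sum_k C(n+k-1, k) A^k
series = np.zeros((3, 3))
Ak = np.eye(3)                        # running power A^k
for k in range(500):
    series += comb(n + k - 1, k) * Ak
    Ak = Ak @ A
err = np.max(np.abs(direct - series))
```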

1.3.3. Partitioned matrices

Partitioned matrices play a vital role in the computational aspects of stochastic models. We therefore give a few key results; additional ones can be derived from these, and one can consult the references mentioned for more details.

RESULT 1.54.– If A_m and B_n are non-singular and if C_{m×n} and D_{n×m} are two matrices, then:

[A + CBD]^{-1} = A^{-1} − A^{-1} C (B^{-1} + D A^{-1} C)^{-1} D A^{-1}.

RESULT 1.55.– Suppose that A_m is non-singular and that a and b are, respectively, column and row vectors of dimension m. Then, we have, for any scalar c,

[A + c\, a b]^{-1} = A^{-1} − \frac{c}{1 + c\, b A^{-1} a} A^{-1} a b A^{-1}.

RESULT 1.56.– Assuming that A^{-1} and B^{-1} exist, we have:

[A + C B C^T]^{-1} = A^{-1} − A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}.

RESULT 1.57.– Assuming the mentioned inverses exist, we have:

(I_m + A_{m×n} B_{n×m})^{-1} = I_m − A (I_n + B_{n×m} A_{m×n})^{-1} B.

RESULT 1.58.– If A is non-singular and is partitioned as:

A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, [1.46]

where A_{11} and A_{22} are non-singular matrices of orders, say, m_1 and m_2, then we have:

A^{-1} = \begin{pmatrix} (A_{11} − A_{12} A_{22}^{-1} A_{21})^{-1} & −A_{11}^{-1} A_{12} (A_{22} − A_{21} A_{11}^{-1} A_{12})^{-1} \\ −A_{22}^{-1} A_{21} (A_{11} − A_{12} A_{22}^{-1} A_{21})^{-1} & (A_{22} − A_{21} A_{11}^{-1} A_{12})^{-1} \end{pmatrix}.

RESULT 1.59.– Suppose that the partitioned matrix A (see equation [1.46]) is non-singular but A_{11} and A_{22} are singular. Suppose that C = (A_{11} + A_{12} A_{21})^{-1} and D = (A_{22} − A_{21} C A_{12} − A_{21} C A_{12} A_{22})^{-1} exist. Then, the inverse of A is given by:

A^{-1} = \begin{pmatrix} E_1 & E_2 \\ E_3 & E_4 \end{pmatrix}, \quad where

E_1 = C + C A_{12} (I + A_{22}) D A_{21} C,
E_2 = E_1 A_{12} − C A_{12} (I + A_{22}) D,
E_3 = −D A_{21} C,
E_4 = D − D A_{21} C A_{12}.
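Result 1.59 is easy to verify numerically. The sketch below (NumPy assumed; the block sizes and the diagonally boosted test matrix are our choices) compares the E_k formulas with a direct inverse.

```python
import numpy as np

rng = np.random.default_rng(7)
m1, m2 = 2, 3
# well-conditioned test matrix, then partition it into blocks
M = rng.random((m1 + m2, m1 + m2)) + (m1 + m2) * np.eye(m1 + m2)
A11, A12 = M[:m1, :m1], M[:m1, m1:]
A21, A22 = M[m1:, :m1], M[m1:, m1:]

C = np.linalg.inv(A11 + A12 @ A21)
D = np.linalg.inv(A22 - A21 @ C @ A12 - A21 @ C @ A12 @ A22)
I2 = np.eye(m2)

E1 = C + C @ A12 @ (I2 + A22) @ D @ A21 @ C
E2 = E1 @ A12 - C @ A12 @ (I2 + A22) @ D
E3 = -D @ A21 @ C
E4 = D - D @ A21 @ C @ A12

block_inv = np.block([[E1, E2], [E3, E4]])
err = np.max(np.abs(block_inv - np.linalg.inv(M)))
```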

RESULT 1.60.– Assuming that A^{-1} and B^{-1} exist, we have:

[A + B C]^{-1} = A^{-1} − A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}.

RESULT 1.61.– Assuming that A^{-1} and B^{-1} exist, we have:

[I + A^{-1}]^{-1} = A(A + I)^{-1},
(A + BB^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1},
(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1} B = B(A + B)^{-1} A,
A − A(A + B)^{-1} A = B − B(A + B)^{-1} B,
(A^{-1} + B^{-1}) = A^{-1} (A + B) B^{-1},
(I + A B)^{-1} = I − A (I + BA)^{-1} B,
(I + A B)^{-1} A = A (I + BA)^{-1}.

RESULT 1.62.– Assuming that A^{-1} and B^{-1} exist, we have:

(A + B)^{-1} = A^{-1} + B^{-1} ⇒ A B^{-1} A = B A^{-1} B.
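The identities of result 1.61, including the push-through identity (I + AB)^{-1} A = A(I + BA)^{-1}, can be spot-checked numerically; a sketch assuming NumPy, with well-conditioned random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
A = rng.random((m, m)) + m * np.eye(m)   # diagonally boosted, so invertible
B = rng.random((m, m)) + m * np.eye(m)
I = np.eye(m)
inv = np.linalg.inv

pairs = [
    (inv(I + inv(A)), A @ inv(A + I)),
    (inv(inv(A) + inv(B)), A @ inv(A + B) @ B),
    (A @ inv(A + B) @ B, B @ inv(A + B) @ A),
    (A - A @ inv(A + B) @ A, B - B @ inv(A + B) @ B),
    (inv(A) + inv(B), inv(A) @ (A + B) @ inv(B)),
    (inv(I + A @ B), I - A @ inv(I + B @ A) @ B),
    (inv(I + A @ B) @ A, A @ inv(I + B @ A)),   # push-through identity
]
max_err = max(np.max(np.abs(x - y)) for x, y in pairs)
```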

1.3.4. Matrix differentiation

Differentiation of vectors and matrices is needed in deriving expressions and


proving results in stochastic modeling. Thus, in this section, we review the needed
definitions, concepts and results. For more details, we refer the reader to the
references mentioned in section 1.3.

DEFINITION 1.49.– Suppose that f(x) = (f_1(x), \cdots, f_n(x)) is a vector-valued differentiable function from ℝ^n to ℝ^n. The partial derivative \frac{∂f}{∂x} is defined as:

\frac{∂f}{∂x} = \begin{pmatrix} \frac{∂f_1}{∂x_1} & \frac{∂f_1}{∂x_2} & \cdots & \frac{∂f_1}{∂x_n} \\ \frac{∂f_2}{∂x_1} & \frac{∂f_2}{∂x_2} & \cdots & \frac{∂f_2}{∂x_n} \\ \vdots & \vdots & \cdots & \vdots \\ \frac{∂f_n}{∂x_1} & \frac{∂f_n}{∂x_2} & \cdots & \frac{∂f_n}{∂x_n} \end{pmatrix}.

DEFINITION 1.50.– Suppose that f : ℝ^n → ℝ is a real-valued differentiable function. Then, the partial derivative \frac{∂f}{∂x} is defined as \frac{∂f}{∂x} = \left( \frac{∂f}{∂x_1}, \cdots, \frac{∂f}{∂x_n} \right).

REMARK 1.31.– From the definition, we see that \frac{∂x}{∂x} = I_n.


RESULT 1.63.– If the matrix A_n is independent of x, then we have \frac{∂}{∂x}(A x) = A.

RESULT 1.64.– If the matrix A_{m×n} is independent of both z, a column vector of dimension m, and x, a column vector of dimension n, then we have:

\frac{∂}{∂z}(z^T A x) = x^T A^T \quad and \quad \frac{∂}{∂x}(z^T A x) = z^T A. [1.47]

RESULT 1.65.– If the matrix A_n is independent of x, which is a column vector of dimension n, then we have \frac{∂}{∂x}(x^T A x) = x^T (A + A^T).

RESULT 1.66.– Suppose that the matrix A_n(x) is a function of x and that A_n^{-1}(x) exists. Then, we have:

\frac{d}{dx} A_n^{-1}(x) = −A_n^{-1}(x) \left[ \frac{d}{dx} A_n(x) \right] A_n^{-1}(x). [1.48]

RESULT 1.67.– Suppose that A and B are two matrices for which addition makes sense. Then, we have \frac{d}{dx}[A + xB] = B.

RESULT 1.68.– Suppose that the matrix operations shown below are valid. Then, we have:

\frac{d}{dx} A^T (B + xC)^{-1} D = −A^T (B + xC)^{-1} C (B + xC)^{-1} D. [1.49]

RESULT 1.69.– Suppose that f(A) and g(A) are functions of A. Assume that the products shown below are well defined. Then, we have:

\frac{d}{dA} \{ [f(A)]^T g(A) \} = \frac{d}{dA} f(A) \, g(A) + \frac{d}{dA} g(A) \, f(A). [1.50]

RESULT 1.70.– Suppose that A and B are square matrices of dimension m, and that f(z) = (A + zB)^n, where z is a scalar and n is a non-negative integer. Then, we have:

\frac{df}{dz} = \sum_{k=0}^{n-1} (A + zB)^k \, B \, (A + zB)^{n-1-k}. [1.51]
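Results 1.66 and 1.70 can be sanity-checked against central finite differences; the sketch below assumes NumPy and takes A(z) = A + zB (the step size and dimensions are our choices).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 4
A = rng.random((m, m)) + m * np.eye(m)   # well conditioned
B = rng.random((m, m))
z, h = 0.3, 1e-6
M = A + z * B
mp = np.linalg.matrix_power
inv = np.linalg.inv

# result 1.70: d/dz (A + zB)^n = sum_k (A+zB)^k B (A+zB)^{n-1-k}
analytic_pow = sum(mp(M, k) @ B @ mp(M, n - 1 - k) for k in range(n))
numeric_pow = (mp(A + (z + h) * B, n) - mp(A + (z - h) * B, n)) / (2 * h)
err_pow = np.max(np.abs(analytic_pow - numeric_pow))

# result 1.66 with A(z) = A + zB: d/dz A(z)^{-1} = -A(z)^{-1} B A(z)^{-1}
analytic_inv = -inv(M) @ B @ inv(M)
numeric_inv = (inv(A + (z + h) * B) - inv(A + (z - h) * B)) / (2 * h)
err_inv = np.max(np.abs(analytic_inv - numeric_inv))
```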

1.3.5. Exponential matrix

Exponential matrices arise naturally in stochastic modeling. They need to be computed at times, and knowing the properties of an exponential matrix greatly benefits both coding and the proving of analytical results. Here we review a few aspects related to the exponential matrix.

DEFINITION 1.51.– Suppose that A is a finite-dimensional (square) matrix of dimension m. The exponential of A is defined as the Picard series:

e^A = \sum_{k=0}^{∞} \frac{A^k}{k!}, [1.52]

where A^0 = I_m.
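The defining series can be coded directly, with scaling and squaring so that the truncated series remains accurate; a sketch assuming NumPy (the function name and truncation level are ours). We use it to illustrate result 1.71 below.

```python
import numpy as np

def expm_series(A, terms=30):
    """e^A via the defining series, with scaling and squaring:
    e^A = (e^{A/2^s})^{2^s}, so the truncated series stays accurate."""
    s = max(0, int(np.ceil(np.log2(max(1.0, np.linalg.norm(A, 1))))))
    M = A / (2.0 ** s)
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # A^k / k! built incrementally
        S = S + term
    for _ in range(s):           # undo the scaling by repeated squaring
        S = S @ S
    return S

rng = np.random.default_rng(2)
A = rng.random((3, 3))
a = 0.7
# result 1.71: e^{aI + A} = e^a e^A
err = np.max(np.abs(expm_series(a * np.eye(3) + A) - np.exp(a) * expm_series(A)))
```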

REMARK 1.32.– Note that if the off-diagonal elements of A are non-negative (as is the case for the matrices of interest here), then e^A is a non-negative matrix. If, in addition, A is irreducible, then e^A will be a positive matrix.

RESULT 1.71.– Suppose that a is a scalar and that A_m is a square matrix. Then, we have:

e^{aI_m + A} = e^a e^A. [1.53]

RESULT 1.72.– If A_m and B_m commute, then:

e^{(A+B)} = e^A e^B = e^B e^A. [1.54]

RESULT 1.73.– Suppose that A^{-1} exists. Then, we have:

\int_0^t f(x) e^{Ax} dx = f(t) e^{At} A^{-1} − f(0) A^{-1} − A^{-1} \int_0^t e^{Ax} f'(x) dx, [1.55]

where f'(x) denotes the first derivative of f(x).

RESULT 1.74.– Suppose that A^{-1} exists. Then, we have:

\int_0^{∞} e^{-sx} e^{Ax} dx = (sI − A)^{-1}, \quad Re(s) ≥ 0. [1.56]

RESULT 1.75.– Suppose that A_m is a square matrix with an eigenvalue ξ. Let u and v denote, respectively, the left and right eigenvectors of A corresponding to ξ. Then, e^ξ is an eigenvalue of e^A, and u and v are, respectively, the left and right eigenvectors of e^A corresponding to e^ξ.

PROOF.– Follows immediately on noting that:

uA = ξu ⇒ uA^n = ξ^n u, \quad n ≥ 1, [1.57]

which immediately implies that:

u e^A = e^ξ u. [1.58]

Similarly, one can show that e^A v = e^ξ v. □

RESULT 1.76.– Differentiation of an exponential matrix yields the following:

\frac{d}{dx} e^{Ax} = A e^{Ax} = e^{Ax} A.

RESULT 1.77.– Suppose that A is non-singular. Then, we have:

\int e^{At} dt = A^{-1} e^{At} = e^{At} A^{-1}.

RESULT 1.78.– The inverse of e^A always exists and is given by e^{-A}.

RESULT 1.79.– Suppose that Q = (q_{i,j}) is a semi-stable matrix (recall: q_{i,i} < 0, q_{i,j} ≥ 0 for j ≠ i, and \sum_j q_{i,j} = 0, for all i). Also, let π denote the left eigenvector corresponding to the maximal eigenvalue, namely, 0, normalized so that πe = 1. Then, we have:

\int_0^t e^{Qx} dx = t\, e π + [I − e^{Qt}][e π − Q]^{-1}, \quad t ≥ 0.
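Result 1.79 can be verified on a small example; the sketch below (NumPy assumed; the generator Q, the series-based exponential and the Simpson quadrature are our choices) compares the closed form with a numerical quadrature of the integral.

```python
import numpy as np

def expm_series(M, terms=40):
    # truncated exponential series; adequate for the small matrices used here
    S, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        S = S + T
    return S

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])     # irreducible generator: rows sum to 0
e = np.ones((3, 1))

# stationary (left) eigenvector for eigenvalue 0: pi Q = 0, pi e = 1
coeff = np.vstack([Q.T, np.ones((1, 3))])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(coeff, rhs, rcond=None)[0].reshape(1, 3)

t = 0.8
closed_form = t * (e @ pi) + (np.eye(3) - expm_series(Q * t)) @ np.linalg.inv(e @ pi - Q)

# Simpson's rule for int_0^t e^{Qx} dx
N = 400                              # even number of panels
xs = np.linspace(0.0, t, N + 1)
w = np.ones(N + 1)
w[1:-1:2], w[2:-1:2] = 4.0, 2.0
quad = (t / N / 3.0) * sum(wi * expm_series(Q * x) for wi, x in zip(w, xs))
err = np.max(np.abs(closed_form - quad))
```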

1.3.6. Kronecker products and Kronecker sums

In stochastic modeling, Kronecker products and Kronecker sums play an important


role and simplify a number of steps in proofs. Thus, in this section, we review the basic
ones needed for the book. It is highly recommended that readers go through the books
referenced earlier.

DEFINITION 1.52.– Suppose that A = (a_{ij}) is an m × n matrix and that B = (b_{ij}) is a p × q matrix. The Kronecker product of A and B, denoted by A ⊗ B, is a matrix of dimension mp × nq and is given by:

A ⊗ B = \begin{pmatrix} a_{11} B & a_{12} B & \cdots & a_{1n} B \\ a_{21} B & a_{22} B & \cdots & a_{2n} B \\ \vdots & \vdots & \cdots & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{mn} B \end{pmatrix}.

RESULT 1.80.– Below, we assume that the matrix operations such as multiplications and additions are meaningful:

(A + B) ⊗ C = (A ⊗ C) + (B ⊗ C),
A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C),
(A ⊗ B) ⊗ C = A ⊗ (B ⊗ C),
(A ⊗ B)^T = (A^T ⊗ B^T),
(A ⊗ B)(C ⊗ D) = (AC ⊗ BD).

RESULT 1.81.– If A and B are non-singular matrices, then (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.

REMARK 1.33.– In general, the Kronecker product operation is not commutative. That is, in general, A ⊗ B ≠ B ⊗ A.

RESULT 1.82.– Suppose that A has eigenvalues ξ_i, i = 1, 2, \cdots, m, with corresponding eigenvectors u_i, and that B has eigenvalues γ_j, j = 1, 2, \cdots, n, with corresponding eigenvectors v_j. Then, the eigenvalues of A ⊗ B are given by ξ_i γ_j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, with corresponding eigenvectors u_i ⊗ v_j.

DEFINITION 1.53.– Suppose that A and B are two square matrices of dimension m and n, respectively. The Kronecker sum, denoted by A ⊕ B, is defined as:

A ⊕ B = A ⊗ I_n + I_m ⊗ B.
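Result 1.82, together with the analogous fact that the eigenvalues of the Kronecker sum A ⊕ B are the sums ξ_i + γ_j, can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((2, 2))
B = rng.random((3, 3))

kron_prod = np.kron(A, B)                                  # A ⊗ B
kron_sum = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)   # A ⊕ B

eigA = np.linalg.eigvals(A)
eigB = np.linalg.eigvals(B)
got_prod = np.linalg.eigvals(kron_prod)
got_sum = np.linalg.eigvals(kron_sum)

# every product xi_i * gamma_j (resp. sum xi_i + gamma_j) appears
# among the computed eigenvalues
err_prod = max(np.min(np.abs(got_prod - x * y)) for x in eigA for y in eigB)
err_sum = max(np.min(np.abs(got_sum - (x + y))) for x in eigA for y in eigB)
```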
Random documents with unrelated
content Scribd suggests to you:
so soon as it stands true in his mind, and accordingly becomes his
faith; that all the divine power which operates in the minds of
men, either to give the first relief to their consciences, or to
influence them in every part of their obedience to the Gospel, is
persuasive power, or the forcible conviction of truth.

From this we see that he saw with some degree of clearness 141
the nature of faith, but not that the divine economy provides
that faith shall be perfected by surrender to an ordinance of the
Lord’s own appointment. On some other points in regard to faith he
was more or less confused. He advocated the weekly observance of
the Lord’s Supper; love feasts; weekly contribution for the poor;
mutual exhortation of members; plurality of elders; conditional
community of goods; and approved of theaters and public and
private diversions, when not connected with circumstances really
sinful. His influence extended to the north of Ireland, but the people
there did not adopt all his views. They attended weekly to the Lord’s
Supper, contributions, etc., but were opposed to going to theaters or
such places of amusement; to the doctrine of the community of
goods; feet washing, etc., as advocated by Sandeman. Sandeman’s
influence extended also to England and to this country.

HALDANE AND AIKMAN

At the close of the eighteenth century spiritual religion in Scotland


was at a very low ebb. Then village preaching and extensive
itineraries were entered upon by James A. Haldane and John
Aikman. They were members of the Established Church of Scotland.
They took in hand preaching tours unauthorized by the clergy. They
were “laymen,” and preaching by such men was then a strange thing
in Scotland. Their labors were so far successful that a revival of
spiritual life set in at many places and a spirit of inquiry was
aroused. They made successive tours throughout all Scotland, as far
as the Orkney Islands. Then Robert Haldane turned his attention to
the spiritual needs of his native land, and determined to devote his
large fortune to spreading the Gospel through its benighted districts.
This led to the formation of a society for the dissemination of
religious knowledge, and to the employment of young men of known
piety to plant and superintend evening schools for the instruction of
the young in religious truths. This movement grew to considerable
proportions. But it met with determined opposition, both from
Presbyterian Dissenters and the Established clergy. The decrees
were fulminated by entire bodies, as the Relief Synod, 142
obviously leveled against the devout and ardent itinerants. In
like spirit the Antiburger Synod decreed:

That as lay preaching has no warrant in the Word of God, and as


the Synod has always considered it their duty to testify against
promiscuous communion, no person under the inspection of the
Synod can, consistently with these principles, attend upon or give
countenance to public preaching by any who are not of our
community; and if any do so they ought to be dealt with by the
judicatories of the Church, to bring them to a sense of their
offensive conduct.

Going beyond this, the General Assembly of 1799 accused the


itinerant preachers of “being artful and designing men, disaffected to
the evil constitution of the country, holding secret meetings, and
abusing the name of liberty as a cover for a secret democracy and
anarchy.” In the midst of this opposition a church was formed of
some fourteen persons in a private house on George Street,
Edinburgh, which was the beginning of the Tabernacle Church Leith
Walk, in which James Haldane eventually became minister, in which
capacity he exercised, without any emolument, all the public and
private duties with unbroken fidelity and zeal for a period of fifty
years. For some time this church was content with monthly
communion, but in 1802 it resolved to spread the Lord’s table on the
first day of every week. By the close of 1807 some eighty-five
Independent churches had been established. Out of this movement
a further advance took place, and thence arose Baptist churches in
Scotland.
THE SCOTCH BAPTISTS

Churches holding the immersion of believers as the only authorized


baptism have, possibly, stood out against the apostasy (not as
Baptists), even from the days of the apostles, though frequently
driven into hiding places by the force of persecution and for the
preservation of their faith and order and also of their lives.

Concerning the origin of the Baptists in England I shall not dwell;


though their early history is very interesting, and far more in accord
with the apostolic style than the present-day Baptists. Passing at
once to Scotland, I find no trace of Baptist churches till the latter
part of the eighteenth century, excepting one of short 143
duration formed by soldiers of Cromwell’s army. The earliest
Scotch Baptist Church is said to have been formed in Edinburgh in
1765 under the efforts of Robert Carmichael, who had been a
minister in the Antiburger Church at Coupar-Augus; but later became
minister of an Independent Church (“Glassite”) in Edinburgh, of
which Archibald McLean was a member. Early in life a strong
impression had been made on the mind of McLean by the preaching
of George Whitfield. In 1762 he withdrew from the Established
Church of Scotland and united with this Independent Church. But it
was not long till some trouble arose over a case of discipline which
resulted in the withdrawal of both Carmichael and McLean from the
church. While thus standing aloof from church membership they
directed their attention to baptism. McLean, not having read a line
upon the subject, went carefully through the whole of the New
Testament with the inquiry before him, “What is baptism?” This led
him to the firm conviction that only those capable of believing in
Christ are its subjects and that it must be performed by immersion
of the whole body in water. A year later Carmichael reached the
same conclusion. He then went to London where he was immersed
by Dr. Gill at Barbican, October 9, 1765. On returning to Edinburgh,
he baptized McLean and six others, and formed a Baptist church. In
1769, Carmichael moved to Dundee, and McLean became minister of
the newly-formed church. Other churches of immersed believers
were soon planted in Glasgow, Dundee, Montrose and other places,
and the sentiment in favor of returning to the scriptural act of
baptizing grew among the people. The marked piety and noble
disinterestedness of Archibald McLean stand out as worthy of all
admiration. His labors were immense and given gratuitous, as he
persisted in continuing in employment as overseer of a printing
establishment.

As Scotch Baptist churches multiplied there arose a disturbing


element. McLean and others held the necessity of an ordained elders
to the proper observance of the Lord’s Supper; consequently,
notwithstanding that they taught the importance of observing the
Lord’s Supper on the first day of every week, it had to be 144
omitted when an ordained elder could not be present. But
ere long others among them saw more light and insisted that elders
were not essential to the being of a church, that the church existed
before its eldership, and that where the church is the Lord’s table
should be spread on the first day of every week, irrespective of the
presence of an ordained elder. This led to contention, and, indeed,
to separation. But truth will not down. We may go with it any
distance we please, but when we say, “Thus far and no farther,”
truth struggles to remove the hindrances thrown across its path, and
in the end starts on afresh to complete the journey.

As leaven will permeate so truth must influence more or less the


mass into which it is cast. From Scotland the principles associated
with the names of the Haldanes, Carmichael and McLean found
receptive hearts in Wales. Even in Ireland, also, there was in men’s
minds the struggling of truth and error, the partial expulsion of the
false by the true, the consequent advance to apostolic faith and
order, and falling short of a complete return thereto, notwithstanding
progress calling for thankful recognition.

THE SEPARATISTS
About the year 1802 there were a few persons in Dublin, most of
them connected with the religious establishments of the country.
The most noted among them were John Walker, G. Carr and Dr.
Darby, all of whom organized religious bodies, differing in minor
points from one another. Their attention was directed to Christian
fellowship, as they perceived it to have existed among the disciples
in apostolic times. They concluded from the study of the New
Testament that all the first Christians in any place were connected
together in the closest brotherhood; and that as their connection
was grounded on the one apostolic gospel which they believed, so it
was altogether regulated by the precepts delivered to them by the
apostles, as the divinely commissioned ambassadors of Christ. They
were convinced that every departure of professing Christians from
this course must have originated in a withdrawing of their allegiance
from the King of Zion, and in the turning away from the instruction
of the inspired apostles; that the authority of their word, 145
being divine, was unchangeable, and that it can not have
been annulled by or weakened by the lapse of ages, by the varying
customs of different nations, or by the enactments of earthly
legislators.

With such views in their minds they set out in the attempt to return
fully to the course marked out in the Scriptures; persuaded that they
were not called to make any laws or regulations for their union, but
simply to learn and adhere to the law recorded in the divine Word.
Their number soon increased; and for some time they did not see
that the union which they maintained with each other, on the
principles of scripture, was at all inconsistent with the continuance of
their connection with the various religious bodies round them. But
after a time they were convinced that these two things were utterly
incompatible; and that the same divine rule which regulated their
fellowship with each other forbade them to maintain any religious
fellowship with others. From this view, and the practice consequent
upon it, they were called “Separatists.”
They held that even two or three disciples in any one place, united
together in the faith of the apostolic gospel, and in obedience to the
apostolic precepts, constitute the Church of Christ in that place.

They held that the only good and sure hope toward God for any
sinner is by the belief of this testimony concerning the great things
of God and his salvation. And as they understood by faith, with
which justification and eternal life were connected, nothing else but
belief of the things declared to all alike in the Scriptures, so by
repentance they understood nothing else but the new mind which
that belief produces. Everything called repentance, but antecedent
to the belief of the Word of God, or unconnected with it, they
considered spurious and evil.

They considered the idea of any successors to the apostles or of any


change of Christ’s laws as utterly unchristian, and did not tolerate
any men of the clerical type among them. They believed that the
Scriptures taught the community of goods. They held that there is
no sanction in the New Testament for the observance of the first day
of the week as the Sabbath; and that the Jewish Sabbath was one of
the shadows of good things to come, which passed away on 146
the completion of the work of Jesus on the cross. They
believed themselves bound to meet together on the first day of the
week, the memorial day of Christ’s resurrection, to show his death,
in partaking of bread and wine, as the symbols of his body and his
blood shed for the remission of sins.

In their assembly they joined together in the various exercises of


praise and prayer, reading the Scriptures, exhorting and admonishing
one another as brethren according to their several gifts and ability;
contributed of their means and saluted each other with “an holy
kiss.” In the same assemblies they attended, as occasion required, to
the discipline appointed by the apostles, for removing any evil that
might appear in the body.
When any brethren appeared among them possessing all the
qualifications of the office of elders or overseers, which are marked
in the apostolic writings, they thought themselves called upon to
acknowledge them as brethren in that office, as the gifts of the Lord
to his church. They held themselves bound to live as peaceable and
quiet subjects of any government under which the providence of
God placed them; to implicitly obey all human ordinances which did
not interfere with their subjection to their heavenly King.

The baptism of believers was cast aside as anti-Christian, except in


the case of the heathen, who on conversion had made no previous
confession of faith. Their mistake lay in the belief that baptism was
intended to mark the mere profession of Christian faith. They failed
to see that it was commanded by the Lord himself to follow upon a
real believing with the heart, and a confession with the mouth. Any
act called baptism prior to that is not the ordinance of Christ, and
stands for nothing. The time for baptism is so soon as that believing
confession and heart trust exists as a fact. So long as it remains
unperformed after that there is a cessation in that particular of
compliance with the divine command, which should be terminated
by obedience so soon as possible.

While these people were scriptural in a number of things, in others


they fell far short of returning to apostolic Christianity. So we must
continue our search.

As we have already seen, there was a great struggle in 147


Europe to escape from the direful effects of departure from
apostolic simplicity. These efforts brought forth many sects, and
each sect fought desperately to secure the Bible within its own party
by the spiritual fetters of partisan interpretation. The clergy of each
denomination, arrogating to themselves the claim of being its
divinely-authorized expounders, caused it to speak only in the
interest of their sect, and thus the Bible was made to speak in
defense of each particular creed. Detached sentences, relating to
matters wholly distinct and irrelevant, were placed in imposing array
in support of positions assumed by human leaders; the people, on
the other hand, seemed to have quietly surrendered into the hands
of the clergy all power of discrimination and all independence in
religious matters. It seemed vain that the Bible had been put into
the hands of the people in their mother tongue, since the “clergy”
had succeeded in imposing upon it a seal which the “laity” dared not
break, so that while Protestants were delighted that they were in
possession of the Bible, it was, in fact, little else than an empty
boast, so long as they could be persuaded that they were wholly
unable to understand it.

The Bible thus trammeled had, nevertheless, set free from spiritual
bondage individuals here and there, who were more or less
successful in their pleadings for reform. But among them all there was no one who took hold of the leading errors with sufficient clearness and grasp to liberate the Gospel from the thraldom of human tradition and restore it to the people in its primitive simplicity and power.

PART IV
The Restoration Movement in America

CHAPTER I.
SPIRITUAL UNREST IN MANY PLACES

The close of the eighteenth and the beginning of the nineteenth century were characterized by efforts to get entirely on apostolic
ground, originating almost simultaneously in widely-separated
localities and amidst different and antagonistic sects. But, as the
greatest of these efforts developed in our own country, we now turn
our attention to them.

One of these originated among the Methodists at the time of the establishment of the American colonies, and the subject of church
government became a matter of discussion among them. Thomas
Coke, Francis Asbury and others labored to establish prelacy,
regarding themselves as superintendents or bishops. Against this
movement, James O’Kelley, of North Carolina, and some others of
that State and of Virginia, with a number of members, pleaded for a
congregational system, and that the New Testament be the only
creed and discipline. Those contending for the episcopal form of
government were largely in the majority, and the reformers were
unable to accomplish their wishes. Led by James O’Kelley, they
finally seceded at Mankintown, N. C., Dec. 25, 1793. McTyeire says:
“The spirit of division prevailed chiefly in the southern part of
Virginia, and in the border counties of North Carolina, in all of which
region the personal influence of O’Kelley has been seen. It extended
also beyond these limits. We find the first two missionaries in
Kentucky—Ogden and Haw—drawn away into his scheme. And in
other places he had adherents” (History of Methodism, page 411). At
first they took the name “Republican Methodists,” but in 1801
“resolved to be known as Christians only, to acknowledge no head
over the Church but Christ, and to have no creed or discipline but
the Bible.” In respect to increase of numbers, this movement was not great, and in the course of time was weakened by
changes and removals, but its principles spread into other States.

About the same time Abner Jones, a physician, of Hartland, Vt., then
a member of the Baptist Church, became “greatly dissatisfied with
sectarian names and creeds, began to preach that all these should
be abolished, and that true piety should be made the ground of
Christian fellowship. In September, 1800, he succeeded by
persevering zeal in establishing a church of twenty-five members at
Lyndon, Vt., and subsequently one in Bradford and one in Piermont,
N. H., in March, 1803.” Elias Smith, a Baptist preacher, who was
about this time laboring with much success in Plymouth, N. H.,
adopted Jones’ view and carried the whole congregation with him.
Several other preachers, both from the Regular and Freewill Baptists,
soon followed, and with many other zealous preachers, who were
raised up in the newly-organized churches, traveled extensively over
the New England states, New York, Pennsylvania, Ohio and into
Canada, and made many converts. Those in this movement also
called themselves Christians only, and adopted the Bible as their only
rule of faith and practice.

Dr. Chester Bullard was the pioneer in the cause of primitive Christianity in all Southwest Virginia. He separated himself from the
Methodist Church and most earnestly desired to be immersed, but
would not receive it at the hands of the Baptists, as he was not
sufficiently in harmony with their tenets to unite with them. About
this time Landon Duncan, the assessor of the county, happened to
call in the discharge of his official duties. Engaging in a religious
conversation with him, Dr. Bullard freely expressed to him his
feelings and his desires, and though he freely expressed his dissent
from some of the views held by Duncan, the latter agreed to baptize
him.

In early life Duncan had united with the Baptists and was ordained
by them, but after a time adopted the views of the “Christians,”
chiefly through the teaching of Joseph Thomas, who was in some
respects a remarkable man. He was born in North Carolina, whence
he removed with his father to Giles County, Virginia, where he
became deeply imbued with religious fervor, and began while quite a young man to urge upon his neighbors the importance of devoting themselves to the service of God. Associating with
O’Kelley in North Carolina, he desired to be immersed, when O’Kelley
persuaded him that pouring was more scriptural, to which he
submitted after stipulating that a tubful of water should be poured
upon him. But afterward he became fully convinced that immersion
alone is baptism, and was immersed by Elder Plumer. This brought
him into intimate association with Abner Jones, Elias Smith and
others of the “Christians.” He now devoted his life wholly to
preaching and became noted for the extent of his travels throughout
the United States. He traveled on foot dressed in a long, white robe,
hence he was called the “White Pilgrim,” and frequently, in imitation
of the Master, retired to lonely places for fasting and prayer. He
made a strong impression on the people, and finally died of smallpox
amidst his itinerant labors in New Jersey.

Dr. Bullard, after his baptism by Duncan, at once began preaching, delivering his first discourse the evening following his baptism.
Avoiding those speculative points with which Duncan and those
associated with him were so much occupied, he presented simple
views of the Gospel and the freeness of the salvation through Christ,
and showed that faith comes by hearing the Word of God, and that
“he that believeth and is baptized shall be saved.” It was a
considerable time, however, before he convinced enough people of
the scripturalness of the doctrine to form a church. By degrees, most
of those associated with Duncan were convinced by Dr. Bullard, and
through the assistance of James Redpath and others joining him in
the ministry of the Word, a number of churches were established in
that part of Virginia. About 1839 Dr. Bullard incidentally came into
possession of a copy of Alexander Campbell’s “Extra on Remission of
Sins.” On reading it he was so surprised and delighted with the new
views therein set forth that he obtained all the numbers of the
Christian Baptist and Millennial Harbinger, and was filled with great
joy to find how clear and consistent were Campbell’s views, and how
different from the slanderous misrepresentations which had been so
persistently circulated through the press and from the pulpit. He immediately began to circulate Campbell’s writings,
preaching with great success the ancient Gospel, and overjoyed in
finding himself unexpectedly associated with so many fellow laborers
in the effort to restore primitive Christianity. He endured hardships
as a good soldier of Jesus Christ and pushed forward against great
odds. He traveled all over Virginia, from the mountains to the
seashore, and baptized thousands. In his prime he was one of the
most powerful exhorters that could be found, and his sermons were
exceedingly clear, scriptural and persuasive.

On a notable occasion the Methodists, who had become greatly stirred by Dr. Bullard’s preaching, chose one of their preachers, T. J.
Stone, to represent them in a debate with Dr. Bullard on the “Act of
Baptism.” The debate was to be held in a grove at a place some
distance from Dr. Bullard’s home, and he had to start the day before
in order to reach the place in time. Late in the afternoon of the first
day’s journey he fell in with the preacher who was to be his
opponent in the debate. Stone had been studying the Campbell and
Rice Debate in search of arguments to sustain his side of the
question. As they rode along together their conversation turned on
the debate, and Dr. Bullard noticed rather a lack of confidence in the
language of his opponent. The doctor turned the conversation so
that he might learn the cause of this, and soon reached the
conclusion that his opponent had little relish for the debate, and that, in short, his research had overturned his confidence in affusion. Dr. Bullard finally said: “You had better let me baptize
you to-morrow instead of debating.” Stone replied: “If it were not for
two or three things in the way, I would.”

That night they spent at Stone’s home, and the doctor soon
perceived that one of the greatest things in the way was Stone’s
wife. Accordingly he gave her much attention, and the three
searched the Scriptures the greater part of the night. A large crowd
assembled the next day to hear the discussion. Dr. Bullard
announced that there would be no debate, but that he would preach
that morning and Stone in the afternoon; also that there would be an immersion immediately after the morning
discourse. Much to the surprise of all, both Mr. and Mrs. Stone
presented themselves for baptism when the invitation was given.


CHAPTER II.
BARTON W. STONE

We have already learned that efforts were being made to return to apostolic Christianity in different places in the East, and I mentioned
these efforts first because as emigration is most usually westward,
the influences thus exerted spread far and wide. This is one of the
reasons why the plea to return to the original practice of the
apostolic churches has been more effective in the West than in the
East.

I now give attention to a great movement that was inaugurated in what was then called the “West,” through the untiring labors of
Barton W. Stone and others. Stone was born in Maryland, December
24, 1772. His father died and the mother, being left with a large
family of children, moved to Pittsylvania County, Va., in 1779, where
the manners and customs of the people were very simple, and
contentment seemed to be the lot of all, and happiness dwelt in
every breast amidst the abundance of home stores, acquired by
honest industry. His first teacher was a tyrant, who seemed to take
pleasure in whipping and abusing his pupils for every trifling offense.
When called upon to recite, young Stone was so affected with fear, and so
confused in mind, that he could say nothing, and remained in that
school only a few days. He was then sent to another teacher, who
was patient and kind, and he advanced so rapidly that after five
years’ training his teacher “pronounced him a finished scholar.” This
fired him with ambition and spurred his efforts to rise to eminence in
learning.

CONFRONTED BY MANY DIFFICULTIES

About this time some Baptist preachers came into the neighborhood
and began preaching to the people, and great excitement followed.
Multitudes attended their ministrations, and many were immersed.
Immersion was so novel that people traveled long distances to see
the ordinance administered. Young Stone was constant in his attendance, and was particularly interested in hearing the
converts relate their experiences. Of their conviction and great
distress they were very particular in giving an account, and how and
when they obtained deliverance from their burdens. Some were
delivered by a dream, a vision, or some uncommon appearance of
light; others by a voice spoken to them—“Thy sins are forgiven
thee”; and others by seeing the Savior with their natural eyes. Such
experiences were considered good by the Church, and those relating
such were baptized and received into full fellowship. The preachers
had an art of affecting their hearers by a tuneful voice in preaching.
Not knowing any better, he considered all this a work of God, and
the way of salvation.

After these came Methodist preachers who were bitterly opposed by the Baptists and Episcopalians, who publicly declared them to be the
locusts of Revelation, and warned the people against receiving them.
Stone’s mind was much agitated, and vacillated between the two
parties. For some time he had been in the habit of retiring in secret,
morning and evening, for prayer, with an earnest desire for religion;
but being ignorant of what he ought to do, he became discouraged
and quit praying, and turned away from religion.

When he was about sixteen he came into possession of his portion of his father’s estate. His mind was absorbed day and night in devising some plan to use it to the best advantage. At last he decided to acquire a liberal education, and
thus qualify himself for the practice of law. Having reached this
decision he began immediately to arrange his affairs to put his
purpose into execution. Accordingly he bade farewell to his mother,
and made his way to the noted academy at Guilford, N. C. Here he
applied himself with great diligence to acquire an education or die in
the attempt. He divested himself of every hindrance for the course.
With such application he made rapid progress.

Just before he entered the academy the students had been greatly
stirred by James McGready, a Presbyterian preacher, and Stone was
not a little surprised to find many of the students assembled every morning in a private room before the hour for
recitation to engage in singing and prayer. This was a source of
uneasiness to him, and frequently brought him to serious reflections.
He labored diligently to banish these serious thoughts, thinking that
religion would impede his progress in learning, thwart the object he
had in view, and expose him to the ridicule of his relatives and
companions. He therefore associated with those students who made
light of such things, and joined them in the ridicule of the pious. For
this his conscience severely condemned him when alone and made
him so very unhappy that he could neither enjoy the company of the
pious nor that of the impious. This caused him to decide to go to
Hampden-Sidney College, Virginia, that he might be away from the
constant sight of religion. He determined to leave at once, but was
prevented by a violent storm. He remained in his room all day and
reached the decision to pursue his studies there and to attend to his
own business, and let others do the same.
Having made this resolution, he was settled till his roommate asked
him to accompany him to hear Mr. McGready preach. Of the deep
impression made on him by the discourse he heard on that occasion
he says:

His coarse, tremulous voice excited in me the idea of something unearthly. His gestures were the very reverse of elegance.
Everything appeared by him forgotten but the salvation of souls.
Such earnestness, such zeal, such powerful persuasion, enforced
by the joys of heaven and miseries of hell, I had never witnessed
before. My mind was chained by him, and followed him closely in
his rounds of heaven, earth and hell, with feelings indescribable.
His concluding remarks were addressed to the sinners to flee the
wrath to come without delay. Never before had I comparatively
felt the force of truth. Such was my excitement that had I been
standing I should have probably sunk to the floor under the
impression.

When the meeting was over he returned to his room, and when
night came he walked out into a field and seriously reasoned with
himself on the all-important subject of religion. He asked himself:
“What shall I do? Shall I embrace religion, or not?” He weighed the
subject and counted the cost. He concluded that if he embraced
religion he would then incur the displeasure of his relatives and lose the favor and company of his companions; become
the object of their scorn and ridicule; relinquish all his plans and
schemes for worldly honor, wealth and preferment, and bid adieu to
all the pleasures in which he had lived. He asked himself, “Are you
willing to make this sacrifice?” His heart answered, “No, no.” Then
there loomed before him a certain alternative, “You must be
damned.” This was so terrible to him that he could not endure the thought, and, after due deliberation, he resolved from
that hour to seek religion at the sacrifice of every earthly good, and
immediately prostrated himself before God in supplication for mercy.
In accordance with the popular belief, and the experience of the
pious in those days, he anticipated a long and painful struggle
before he should be prepared to come to Christ, or, in the language
of that day, before he should “get religion.” This anticipation was
fully realized. For a year he was tossed about on the waves of
uncertainty, laboring, praying and striving for “saving faith,”
sometimes desponding and almost despairing of ever getting it. He
wrestled with this condition until he heard a sermon on “God is
love,” which so impressed his mind that he retired to the woods
alone with his Bible. There he read and prayed with various feelings,
between hope and fear, till the great truth of the love of God so
triumphed over him that he afterward said:

I yielded and sunk at his feet, a willing subject. I loved him, I adored him, I praised him aloud in the silent night, in the echoing
groves around. I confessed to the Lord my sin and folly in
disbelieving his word so long, and in following so long the devices
of men. I now saw that a poor sinner was as much authorized to
believe in Jesus at first as last; that now was the accepted time
and the day of salvation.

From that time he looked forward to preaching, and in the spring of 1796 applied to the Presbytery of Orange, N. C., for license to
preach. In describing the proceedings of the presbytery, he says:
“Never shall I forget the impression made on my mind when a
venerable old father addressed the candidates, standing up together
before the presbytery. After the address he presented to each of the
candidates the Bible (not the Confession of Faith), with this solemn charge, ‘Go ye unto all the world, and preach the
Gospel to every creature.’” He was assigned to a certain district, but
soon became much discouraged, and contemplated seeking regions
where he was not known and turning his attention to some other
calling in life.

In the midst of much doubt and perplexity, he turned westward and finally reached Caneridge, Bourbon County, Ky., where he remained
for a few months, then returned to Virginia.

ORDAINED TO THE MINISTRY

In the fall of 1798 he received a call from the united congregations of Caneridge and Concord, through the Transylvania Presbytery. He
accepted, and a day was appointed for his ordination to the ministry.
Knowing that at his ordination he would be required to adopt the
Westminster Confession of Faith, as the system of doctrine taught in
the Bible, he determined to give it a very careful examination. This
was to him almost the beginning of sorrows. He stumbled at the
doctrine of the Trinity as therein taught, and could not
conscientiously subscribe to it. Doubts, too, arose in his mind on the
doctrines of election, reprobation and predestination, as then taught.
He had before this time learned from those higher up in the
ecclesiastical world the way of divesting those doctrines of their
hard, repulsive features, and admitted them as true, yet
unfathomable mysteries. Viewing them as such, he let them alone in
his public discourses and confined himself to the practical part of
religion, and to subjects within his depth. But in re-examining these
doctrines he found the covering put over them could not hide them
from a discerning eye with close inspection. Indeed, he saw that
they were necessary to the system, without any covering.

He was in this state of mind when the day for his ordination came.
He determined to tell the presbytery honestly his state of mind, and
to request them to defer his ordination until he should be better
informed and settled. When the day came a large congregation
assembled, but before the presbytery convened he took aside the
two pillars—James Blythe and Robert Marshall—and made known to them his difficulties and that he had determined to
decline ordination at that time. They labored, but in vain, to remove
his difficulties and objections. They asked him how far he was willing
to receive the Confession of Faith. To this he replied, “As far as I see
it is consistent with the Word of God.” They concluded that that was
sufficient. The presbytery then convened, and when the question, “Do you receive and adopt the Confession of Faith as containing the system of doctrine taught in the Bible?” was put, he answered aloud, so that
the whole assembly could hear, “I do, so far as I see it consistent
with the Word of God.” No objection being raised to this answer he
was ordained.

The reception of his ordination papers ended neither his intellectual misgivings nor his difficulties with his strictly orthodox ministerial
associates in the presbytery. His mind, from this time until he finally
broke the fetters of religious bondage, “was continually tossed on
the waves of speculative divinity,” the all-engrossing theme of the
religious community at that time. Clashing, controversial theories
were urged by the different sects with much zeal and bad feeling. At
that time he believed and taught that mankind were so depraved
that they could do nothing acceptable to God until his Spirit, by
some physical, almighty and mysterious power had quickened,
enlightened and regenerated the heart, and thus prepared the sinner
to believe in Jesus for salvation. He began to see that if God did not
perform this regenerating work in all, it was because he chose to do
it for some and not for others, and that this depended upon his own
sovereign will and pleasure. He then saw that the doctrine was
inseparably linked with unconditional election and reprobation, as
taught in the Westminster Confession of Faith; that they are virtually
one, and that was the reason why he admitted the decrees of
election and reprobation, having admitted the doctrine of total
depravity. Scores of objections continually crossed his mind against
the system. These he imputed to blasphemous suggestions of Satan,
and labored to repel them as satanic temptations and not honestly
to meet them with Scripture arguments. Often, when addressing the
multitudes on the doctrine of total depravity, on their inability to believe and on the physical power of God to produce faith,
and then persuading the helpless to “repent and believe” the Gospel,
his zeal would in a moment be chilled by such questions as: “How
can they believe?” “How can they repent?” “How can they do
impossibilities?” “How can they be guilty in not doing them?” Such
thoughts almost stifled his ability to speak, and were as great
weights pressing him down to the shades of death. The pulpits were
continually ringing with this doctrine; but to his mind it ceased to be
a relief; for whatever name it was called, he could see that the
inability was in the sinner, and therefore he could not believe nor
repent, but must be damned. Wearied with the works and doctrines
of men and distrustful of their influence, he made the Bible his
constant companion. He honestly, earnestly and prayerfully sought
for the truth, determined to buy it at the sacrifice of everything else.

He was relieved from this state of perplexity by this resolve. By reading and meditating upon the Word of God, he became convinced
that God did love the whole world, and that the only reason why he
did not save all was because of their unbelief, and that the reason
why they believed not was because they neglected and received not
his testimony concerning his Son, for the Scripture says: “These are
written, that ye may believe that Jesus is the Christ, the Son of God;
and that believing ye may have life in his name.” From this he saw
that the requirement to believe in the Son of God was reasonable,
because the testimony given is sufficient to produce faith in the
sinner, and the invitation and encouragement of the Gospel are
sufficient, if believed, to lead him to the Savior for the promised
salvation and eternal life. From that moment of new light and joy he
began to part company with Calvinism, declaring it to be the
heaviest clog on Christianity in the world, a dark mountain between
heaven and earth, shutting out the love of God from the sinner’s
heart.

In the joy of this new-found liberty he received such power as made him one of God’s choicest instruments in awakening religious society out of its apathy, and in preparing the way
for the great religious movement with which the last century was
ushered in. With his new convictions of God’s all-abounding love was born an intense yearning to bring his fellow men to the joy of
such a salvation. While the fire was kindling in his soul, he heard of
a great religious excitement which had already begun in Logan
County, Kentucky, under the labors of certain Presbyterian
preachers, among whom was the same James McGready whose
preaching had so strongly affected Stone, while a youth, in North
Carolina. In the spring of 1801 he attended one of these camp
meetings, and for the first time witnessed those strange agitations
and cataleptic attacks, which baffled description. He describes them
thus:

The scene to me was new, and passing strange. It baffled description. Many, very many, fell down as men slain in
battle, and continued for hours together in an apparently
breathless and motionless state; sometimes for a few moments
reviving and exhibiting symptoms of life by a deep groan or a
piercing shriek, or by a prayer for mercy most fervently uttered.
After lying thus for hours they obtained deliverance. The gloomy
cloud which had covered their faces seemed gradually and visibly
to disappear, and hope in smiles brightened into joy; they would
rise shouting deliverance, and then would address the surrounding
multitude in language truly eloquent. (Biography of Stone, page
34.)

REMARKABLE MEETING AT CANE RIDGE

Returning from these strange scenes, he entered the pulpit at Caneridge with heart aglow with spiritual fervor. No longer shackled
by the doctrine of election and reprobation, he took for his text the
inspiring message of the great commission: “Go ye into all the world
and preach the Gospel to the whole creation. He that believeth and
is baptized shall be saved; but he that disbelieveth shall be
condemned.” Old as was the text, it came like a new evangel to this
people, who had known nothing but the hard terms of a Calvinistic
creed. The audience was visibly affected, and he left them promising
to return in a few days. This was the beginning of one of the
greatest revivals in history. On his return a vast multitude awaited
him, and he had scarcely begun to picture before them the great