
Foundations of Quantitative Finance:

8. Itô Integration and Stochastic Calculus 1

Robert R. Reitano
Brandeis International Business School
Waltham, MA 02454

July, 2021
Copyright © 2021 by Robert R. Reitano
Brandeis International Business School

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
To view a copy of the license, visit:
https://creativecommons.org/licenses/by-nc-nd/4.0/
Contents

Preface ix

to Lisa xi

Introduction xiii

1 Stochastic Calculus: An Informal Link to Finance 1


1.1 Asset Model Limits . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Outline of Stochastic Calculus Topics . . . . . . . . . . . . . . 6
1.2.1 Book 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Book 9 or Later . . . . . . . . . . . . . . . . . . . . . 7

2 The Itô Integral 9


2.1 Is a New Integration Theory Needed? . . . . . . . . . . . . . 9
2.2 Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Additional Properties of BM . . . . . . . . . . . . . . 14
2.2.2 BM on a Filtered Probability Space . . . . . . . . . . 18
2.3 Preliminary Insights to a New Integral . . . . . . . . . . . . . 22
2.3.1 Axioms for a New Integral . . . . . . . . . . . . . . . . 23
2.3.2 Next Steps and Questions . . . . . . . . . . . . . . . . 25
2.4 Quadratic Variation Process of Brownian Motion . . . . . . . 27
2.5 Itô Integral of Simple Processes . . . . . . . . . . . . . . . . . 32
2.6 H^2([0,∞) × S) and Simple Process Approximations . . . . . 39
2.7 The General Itô Integral . . . . . . . . . . . . . . . . . . . . . 48
2.8 Properties of the Itô Integral . . . . . . . . . . . . . . . . . . 54
2.9 A Continuous Version of the Itô Integral . . . . . . . . . . . . 58
2.10 Itô Integration via Riemann Sums . . . . . . . . . . . . . . . 65


3 Integrals w.r.t. Continuous Local Martingales 79


3.1 Integration of Simple Processes . . . . . . . . . . . . . . . . . 81
3.2 Integrals w.r.t. Continuous L^2-Bounded Martingales . . . . . 84
3.2.1 A Generalized Itô Isometry . . . . . . . . . . . . . . . 84
3.2.2 M^2-Integrators and H^2_M([0,∞) × S)-Integrands . . . 89
3.2.3 Simple Process Approximations in H^2_M([0,∞) × S) . . 94
3.2.4 The General Stochastic Integral . . . . . . . . . . . . . 99
3.2.5 A Continuous Version . . . . . . . . . . . . . . . . . . 106
3.2.6 The Kunita-Watanabe Inequality . . . . . . . . . . . . 109
Measures Induced by BV Functions . . . . . . . . . . 110
The Kunita-Watanabe Inequality . . . . . . . . . . . . 117
3.2.7 Additional Properties For L^2-Bounded Martingales . . 124
3.2.8 Stochastic Integrals via Riemann Sums . . . . . . . . 137
3.3 Integrals w.r.t. Continuous Local Martingales . . . . . . . . . 142
3.3.1 M_loc-Integrators and H^2_{M,loc}([0,∞) × S)-Integrands . . 148
3.3.2 The General Stochastic Integral . . . . . . . . . . . . . 155
3.3.3 Properties of Stochastic Integrals . . . . . . . . . . . . 161
3.3.4 Stochastic Dominated Convergence Theorem . . . . . 175
3.3.5 Stochastic Integrals via Riemann Sums . . . . . . . . 177

4 Integrals w.r.t. Continuous Semimartingales 181


4.1 Integrals w.r.t. Continuous B.V. Processes . . . . . . . . . . . 185
4.2 The General Stochastic Integral . . . . . . . . . . . . . . . . . 195
4.3 Properties of Stochastic Integrals . . . . . . . . . . . . . . . . 198
4.4 Stochastic Dominated Convergence Theorem . . . . . . . . . 201
4.5 Stochastic Integrals via Riemann Sums . . . . . . . . . . . . . 203
4.6 Stochastic Integration by Parts . . . . . . . . . . . . . . . . . 204
4.7 Integration of Vector and Matrix Processes . . . . . . . . . . 207
4.7.1 The Itô Integral . . . . . . . . . . . . . . . . . . . . . 209
4.7.2 Integrals w.r.t. Continuous Semimartingales . . . . . . 212

5 Itô’s Lemma 215


5.1 Versions of Itô’s Lemma . . . . . . . . . . . . . . . . . . . . . 218
5.2 Semimartingale Version . . . . . . . . . . . . . . . . . . . . . 219
5.3 Itô Process Version . . . . . . . . . . . . . . . . . . . . . . . . 227
5.4 Itô Diffusion Version . . . . . . . . . . . . . . . . . . . . . 228
5.5 Multivariate Semimartingale Version . . . . . . . . . . . . . . 237
5.6 Multivariate Itô Process Version . . . . . . . . . . . . . . . . 244
5.7 Multivariate Itô Diffusion Version . . . . . . . . . . . . . . 247

6 Some Applications of Itô’s Lemma 255


6.1 Lévy’s Characterization of n-Dimensional BM . . . . . . . . . 255
6.1.1 When is an Itô Process a Brownian Motion? . . . . . 259
6.1.2 Continuous LMs are Time-Changed BMs . . . . . . . 267
6.2 The Burkholder-Davis-Gundy Inequality . . . . . . . . . . . . 276
6.3 Local Martingales from Semimartingales . . . . . . . . . . . . 284
6.3.1 Itô Diffusion Version . . . . . . . . . . . . . . . . . . . 284
6.3.2 Semimartingale Version . . . . . . . . . . . . . . . . . 289
6.4 The Feynman-Kac Representation Theorem 1 . . . . . . . . . 293
6.5 Dynkin’s Formula . . . . . . . . . . . . . . . . . . . . . . . . 301
6.5.1 Infinitesimal Generators . . . . . . . . . . . . . . . . . 309
6.5.2 Kolmogorov’s Backward Equation 1 . . . . . . . . . . 311

References 321
Preface
The idea for a reference book on the mathematical foundations of quantitative finance has been with me throughout my career in this field. But the urge to begin writing it didn't materialize until shortly after completing my first book, Introduction to Quantitative Finance: A Math Tool Kit, in 2010.

The one goal I had for this reference book was that it would be complete and detailed in the development of the many materials one finds referenced in the various areas of quantitative finance. The one constraint I realized from the beginning was that I could not accomplish this goal, plus write a complete survey of the quantitative finance applications of these materials, in the 700 or so pages that I budgeted for myself for my first book. Little did I know at the time that this project would require a multiple of this initial page count budget even without detailed finance applications.
I was never concerned about the omission of the details on applications to quantitative finance because there are already a great many books in this area that develop these applications very well. The one shortcoming I perceived many such books to have is that they are written at a level of mathematical sophistication that requires a reader to have significant formal training in mathematics, as well as the time and energy to fill in omitted details. While such a task would provide a challenging and perhaps welcome exercise for more advanced graduate students in this field, it is likely to be less welcome to many other students and practitioners. It is also the case that quantitative finance has grown to utilize advanced mathematical theories from a number of fields. While there are also a great many very good references on these subjects, most are again written at a level that does not in my experience characterize the backgrounds of most students and practitioners of quantitative finance.
So over the past several years I have been drafting this reference book, accumulating the mathematical theories I have encountered in my work in this field, and then attempting to integrate them into a coherent collection of books that develops the necessary ideas in some detail. My target readers would be quantitatively literate to the extent of familiarity, indeed comfort, with the materials and formal developments in my first book, and sufficiently motivated to identify and then navigate the details of the materials they were attempting to master. Unfortunately, adding these details supports learning but also increases the lengths of the various developments. But this book was never intended to provide a "cover-to-cover" reading challenge, but rather to be a reference book in which one could find detailed foundational materials in a variety of areas that support current questions and further studies in quantitative finance.


Over these past years, one volume turned into two, which then became a work not likely publishable in the traditional channels given its unforgiving size and likely limited target audience. So I have instead decided to self-publish this work, converting the original chapters into stand-alone books, of which there are now nine. My goal is to finalize each book over the coming year or two.
I hope these books serve you well.
I am grateful for the support of my family: Lisa, Michael, David, and Jeffrey, as well as the support of friends and colleagues at Brandeis International Business School.
Robert R. Reitano
Brandeis International Business School
to Lisa

Introduction

This is the eighth book in a series of nine that will be self-published under the collective title of Foundations of Quantitative Finance. Each book in the series is intended to build from the materials in earlier books, with the first six volumes alternating between books with a more foundational mathematical perspective, which was the case with the first, third and fifth book, and books which develop probability theory and some quantitative applications to finance, the focus of the second, fourth and sixth book. This is the second of three books on stochastic processes.

While providing many of the foundational theories underlying quantitative finance, this series of books does not provide a detailed development of these financial applications. Instead this series is intended to be used as a reference work for students, researchers and practitioners of quantitative finance who already have other sources for these detailed financial applications but find that such sources are written at a level which assumes significant mathematical expertise, which if not possessed can be difficult to acquire.
The goal of many books in quantitative finance is to develop financial applications from an advanced point of view. So it is often the case that the needed advanced foundational materials from mathematics and probability theory are assumed, or simply introduced and summarized, without a complete and formal development that would of necessity take the respective authors far from their intended objectives. And while there are a great many excellent books on mathematics and probability theory, a number of which are cited in the references, such books typically develop materials with an eye to comprehensiveness in the subject matter, and not with an eye toward efficiently curating and developing the theory needed for applications in quantitative finance.
Thus the goal of this series is to introduce and develop in some detail a number of the foundational theories underlying quantitative finance. The included topics have been curated from a vast mathematical and probability literature for the express purpose of supporting applications in quantitative finance. In addition, the development of these topics will be found to be at a much greater level of detail than in most advanced quantitative finance books, and certainly in more detail than most advanced mathematics and probability theory texts. Finally and most importantly for a reference work, this series of books is extensively self-referenced. The reader can enter the volumes at any place of interest, and any earlier results utilized will be explicitly identified for easy reference.
The title of this eighth book is Itô Integration and Stochastic Calculus I. While book 7 developed properties of Brownian motion and other stochastic processes in some detail, this book sets out to begin the study of a "calculus" of such processes, with an emphasis on the associated integration theories. Subjects formally categorized under stochastic calculus but not pursued in this book are Girsanov's theorem(s), martingale representation theorems, and the study of stochastic differential equations, which are deferred to book 9.
The goal of chapter 1 is to motivate the studies in this book from the framework of quantitative finance, recalling the asset and financial derivative pricing models of book 6, and then to summarize the investigations in this and the next book from this perspective. Chapter 2 then turns to the development of the foundational integral in stochastic calculus, the Itô integral, named for Kiyoshi Itô (1915-2008), where such integrals use Brownian motion as integrators. Itô also pioneered a mathematical framework for such integrals and related concepts that is collectively known as Itô calculus.

After justifying that the proposed integral does not fit into the integration theories of books 3 and 5, the development begins as in earlier books with the Itô integral of simple processes. It is then seen that such integrals converge within an L^2-framework, where the quadratic variation results of book 7 play a prominent role. Properties of this integral are then developed, including cases where the Itô integral can be defined in terms of limits of the associated Riemann sums of earlier books.
Chapter 3 then sets out to generalize this integration theory from Brownian motion integrators to continuous local martingale integrators. While Brownian motion is indeed a continuous local martingale by book 7's corollary 5.85, this generalization will require a fair amount of machinery to substitute for the special properties of Brownian motion that general local martingales do not enjoy. In order to achieve this general result, the first part of the chapter focuses on continuous L^2-bounded martingale integrators, and then develops the needed properties to extend these results to continuous local martingale integrators.


Many properties of such integrals will be developed, such as their quadratic variation and covariation processes, a generalization of the Cauchy-Schwarz inequality of book 4's corollary 3.48 called the Kunita-Watanabe inequality, as well as a version of Lebesgue's dominated convergence theorem (book 5, proposition 2.43), and the approximation of such integrals with Riemann sums.
All of chapter 4’s development is then generalized further to continuous
semimartingale integrators in chapter 5, and this is the theory that will be
foundational in later chapters. Since continuous local martingale integrators
are studied in chapter 4, the primary results needed for semimartingale
integrators are a study of bounded variation integrators, generalizing the
book 5 Riemann-Stieltjes theory, as well as results to insure well-de…nedness
of the associated general de…nition.
Chapter 5 begins a study of the Itô calculus with one of the most foundational results of stochastic calculus, and certainly the foundational result for quantitative finance, known as Itô's lemma. While this "lemma" provides the fundamental insight as to what happens when a smooth function is applied to a semimartingale, this result has a number of versions which can be confusing on first acquaintance, and so this chapter develops many in some detail. The potential power of Itô's result is honored in chapter 6, which develops a host of applications.

These applications only scratch the surface of the importance of this result, as it can be said with hardly any exaggeration that all of stochastic calculus is an application of, or generalization of, Itô's lemma.
Chapter 1

Stochastic Calculus: An
Informal Link to Finance

1.1 Asset Model Limits


In section 8.5 on Limiting Distributions of Harmonious Asset Models of book 6, a result was developed in proposition 8.36 on certain limits of the multiplicative binomial temporal model:

$$X_j^{(n)} = X_0 \exp\Big[\sum_{i=1}^{j}\big(\mu\,\Delta t + \sigma\sqrt{\Delta t}\,b_i\big)\Big], \quad 1 \le j \le n. \tag{1.1}$$

Recall that there T is fixed and finite, Δt ≡ T/n, and {b_i} are independent binomial variates with b_i = ±1, each with probability 1/2. At any time point t ≤ T representable as t = jT/n for integers 0 < j ≤ n, this proposition addressed the distributional limit of X_{mj}^{(mn)} as m → ∞. Since t ≡ mjT/mn is fixed and independent of m, X_t can be defined as a limit of X_{mj}^{(mn)}-variates defined as above, as a sum of mj binomial variates with a step size of Δt = T/mn.

Denoting X_{mj}^{(mn)} ≡ X_t^{(m)}, the price variate at such time t for given m, this proposition proved that as m → ∞:

$$\frac{\ln\big[X_t^{(m)}/X_0\big] - \mu t}{\sigma\sqrt{t}} \to_d Z_1 \sim N(0,1), \tag{1}$$

where N(a, b²) denotes a normally distributed variate with mean a and variance b². By the mapping theorem of book 2's proposition 8.37 with h(y) ≡ X_0 exp[σ√t · y + μt], it follows that for such t = jT/n:

$$X_t^{(m)} \to_d X_t \equiv X_0 \exp[\mu t + \sigma Z_t], \tag{2}$$

with Z_t ~ N(0, t). This is a result for each such t, but more can be said with the results of book 7.
In the notation of 1.2 of book 7, for given t = jT/n as above:

$$\frac{\ln\big[X_t^{(m)}/X_0\big] - \mu t}{\sigma} = B_t(\Delta t_m),$$

where B_t(Δt_m) is a binomial path with Δt_m = T/mn. Thus (1) is a special case of that book's proposition 1.10, that as m → ∞:

$$B_t(\Delta t_m) \to_d Z_t. \tag{1.2}$$

This result is true for all t, where in general B_t(Δt_m) is defined by linear interpolation. Thus this proposition assures that the limits in (1) and (2) are also true for all t if we define X_t^{(m)} in terms of this binomial path by:

$$X_t^{(m)} = X_0 \exp\big[\mu t + \sigma B_t(\Delta t_m)\big]. \tag{1.3}$$

Further, for any finite collection

$$0 < t_1 < \cdots < t_k \le T,$$

the proof of proposition 1.17 of book 7 assures that:

$$\big(B_{t_1}(\Delta t_m), \ldots, B_{t_k}(\Delta t_m)\big) \to_d \big(Z_{t_1}, \ldots, Z_{t_k}\big),$$

and that (Z_{t_1}, ..., Z_{t_k}) is multivariate normally distributed with mean vector 0 and covariance matrix C_{ij} = min(t_i, t_j). Thus the Mann-Wald theorem of book 6's proposition 4.21, and h : R^k → R^k defined componentwise with h above, obtains that (2) generalizes to:

$$\big(X_{t_1}^{(m)}, \ldots, X_{t_k}^{(m)}\big) \to_d \big(X_0 \exp[\mu t_1 + \sigma Z_{t_1}], \ldots, X_0 \exp[\mu t_k + \sigma Z_{t_k}]\big), \tag{1.4}$$

where X_{t_j}^{(m)} is defined as in 1.3.
Book 7’s proposition 1.21 summarizes that these limiting variates Zt in
1.2 have all of the attributes of a Brownian motion Bt (de…nition 1.27 there)
except a demonstration of continuity with probability 1 (see also remark 1.23
1.1 ASSET MODEL LIMITS 3

there). However, Donsker’s theorem of that book’s proposition 2.21 …nalizes


this result to prove that for this special binomial case:

Bt ( tm ) !d Bt ;

where this convergence in distribution is defined in definition 2.17. It is therefore natural to rewrite the limiting result in 1.4 in terms of the model:

$$X_t = X_0 \exp[\mu t + \sigma B_t], \tag{1.5}$$

where B_t denotes a Brownian motion.


This representation is valid in the sense that for any finite collection of time variates as above, 1.4 is satisfied as:

$$\big(X_{t_1}^{(m)}, \ldots, X_{t_k}^{(m)}\big) \to_d \big(X_0 \exp[\mu t_1 + \sigma B_{t_1}], \ldots, X_0 \exp[\mu t_k + \sigma B_{t_k}]\big). \tag{1.6}$$

In other words, all finite dimensional distributions of X_t^{(m)} converge in distribution to the respective finite dimensional distributions of X_0 exp[μt + σB_t]. In the notation of book 7's proposition 2.21:

$$X_t^{(m)} \to_{FDD} X_0 \exp[\mu t + \sigma B_t].$$

This does not necessarily imply that X_t^{(m)} →_d X_0 exp[μt + σB_t] in the sense of book 7's definition 2.17, as that book's example 2.20 illustrates.
Since the final model in 1.5 results from 1.1 as Δt → 0, it is natural to investigate if this final result can also be produced with a simpler discrete model initially that omits the exponential function. Since b_j² = 1, a Taylor series analysis of 1.1 shows that for Δt small:

$$X_j^{(n)} = X_{j-1}^{(n)} \exp\big[\mu\,\Delta t + \sigma\sqrt{\Delta t}\,b_j\big] = X_{j-1}^{(n)}\Big[1 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + \sigma\sqrt{\Delta t}\,b_j + O\big(\Delta t^{3/2}\big)\Big],$$

where |O(Δt^{3/2})| ≤ cΔt^{3/2}. Thus:

$$X_j^{(n)} = X_0 \prod_{i=1}^{j}\Big[1 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + \sigma\sqrt{\Delta t}\,b_i + O\big(\Delta t^{3/2}\big)\Big], \quad 1 \le j \le n,$$

and we now investigate if the final result in 1.5 can be obtained by using the simpler asset model:

$$Y_j^{(n)} = X_0 \prod_{i=1}^{j}\Big[1 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + \sigma\sqrt{\Delta t}\,b_i\Big]. \tag{1.7}$$

To investigate we use the Taylor series:

$$\ln(1+y) = y - y^2/2 + O(y^3),$$

and fix t = jT/n. Then with 1.7 and Δt = T/mn:

$$\ln\big[Y_{mj}^{(mn)}/X_0\big] = \sum_{i=1}^{mj} \ln\Big[1 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + \sigma\sqrt{\Delta t}\,b_i\Big] = \mu t + \sigma\sqrt{\Delta t}\sum_{i=1}^{mj} b_i + O\big(\Delta t^{1/2}\big).$$

Now \sum_{i=1}^{mj} b_i / \sqrt{mj} converges in distribution to a normal variate with mean 0 and variance 1 by the central limit theorem of book 4's proposition 5.14, and μt + O(Δt^{1/2}) converges to μt, and hence also then in probability to μt. Since σ√Δt · √(mj) = σ√t for all m by assumption, Slutsky's theorem of book 2's proposition 5.29 and exercise 5.30 obtain that with Y_{mj}^{(mn)} ≡ Y_t^{(m)} denoting the price variate at this time t:

$$\ln\big[Y_t^{(m)}/X_0\big] \to_d \mu t + \sigma Z_t,$$

where Z_t ~ N(0, t). This is equivalent to (1) above. Another application of the mapping theorem of book 2's proposition 8.37 obtains (2) for all such t = jT/n for integers 0 < j ≤ n. Using the binomial paths of book 7 and repeating the steps above obtains that 1.5 again results in the limit for all t.

Summary 1.1 The following models have 1.5 as limiting distribution in the sense of 1.6:

1. Multiplicative binomial temporal model 1:

$$X_j^{(n)} = X_0 \exp\Big[\sum_{i=1}^{j}\big(\mu\,\Delta t + \sigma\sqrt{\Delta t}\,b_i\big)\Big], \quad 1 \le j \le n; \tag{1.8}$$

2. Multiplicative binomial temporal model 2:

$$Y_j^{(n)} = X_0 \prod_{i=1}^{j}\Big[1 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + \sigma\sqrt{\Delta t}\,b_i\Big], \quad 1 \le j \le n. \tag{1.9}$$
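Summary 1.1 can be spot-checked numerically. The sketch below is illustrative only: it uses the standard library, shares the b_i draws between the two models, and its parameter values (μ = 0.05, σ = 0.20, T = 1) and step counts are arbitrary choices, not from the text. Both simulated log-returns ln[X_T/X_0] should show mean near μT = 0.05 and variance near σ²T = 0.04, as 1.5 predicts.

```python
import math
import random

def simulate_log_returns(mu, sigma, T, n, n_paths, seed=0):
    """Simulate ln[X_T/X_0] under models (1.8) and (1.9) on shared b_i draws."""
    rng = random.Random(seed)
    dt = T / n
    vol = sigma * math.sqrt(dt)                  # sigma * sqrt(dt) per step
    drift9 = (mu + 0.5 * sigma * sigma) * dt     # per-step drift of model (1.9)
    log_x, log_y = [], []
    for _ in range(n_paths):
        sum_b, ly = 0.0, 0.0
        for _ in range(n):
            b = 1.0 if rng.random() < 0.5 else -1.0
            sum_b += b
            ly += math.log(1.0 + drift9 + vol * b)  # model (1.9): log of the product
        log_x.append(mu * T + vol * sum_b)          # model (1.8): log is an exact sum
        log_y.append(ly)
    return log_x, log_y

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

log_x, log_y = simulate_log_returns(mu=0.05, sigma=0.20, T=1.0, n=500, n_paths=4000)
print(mean_var(log_x), mean_var(log_y))   # both near (0.05, 0.04)
```

Because the two models share the same b_i, their per-path difference is O(√Δt), which is exactly the Taylor-series argument above in simulated form.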
Further investigating the model in 1.9, but returning to notation X_j^{(n)} for simplicity:

$$X_j^{(n)} - X_{j-1}^{(n)} = X_{j-1}^{(n)}\big(\mu + \tfrac{1}{2}\sigma^2\big)\Delta t + X_{j-1}^{(n)}\sigma\sqrt{\Delta t}\,b_j.$$

Thus:

$$X_j^{(n)} = X_0 + \sum_{i=1}^{j}\Big[X_i^{(n)} - X_{i-1}^{(n)}\Big]
= X_0 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\sum_{i=1}^{j} X_{i-1}^{(n)}\,\Delta t + \sigma\sum_{i=1}^{j} X_{i-1}^{(n)}\sqrt{\Delta t}\,b_i
= X_0 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\sum_{i=1}^{j} X_{i-1}^{(n)}\,\Delta t + \sigma\sum_{i=1}^{j} X_{i-1}^{(n)}\,\Delta B_{i-1}.$$

Here:

$$\Delta B_{i-1} \equiv B_i(\Delta t) - B_{i-1}(\Delta t) = \sqrt{\Delta t}\,b_i,$$

is the change in a binomial path over this time interval, recalling (1.1) of book 7.

Now let Δt → 0 and imagine for a moment that the X_t price paths are continuous functions of time. We then have derived very informally, or better said we can imagine that:

$$X_t = X_0 + \big(\mu + \tfrac{1}{2}\sigma^2\big)\int_0^t X_s\,ds + \sigma\int_0^t X_s\,dB_s. \tag{1.10}$$

Here X_t ≡ X_t(ω), with ω a point in some probability space on which B_t ≡ B_t(ω) is also defined.

If X_t is continuous for all or almost all ω, the first integral is definable as a Riemann integral with probability 1. If X_s is only measurable (or measurable for almost all ω) this is definable as a Lebesgue integral. In either case the first integral would appear to be well-defined pathwise. The second integral needs a definition, both in terms of allowable integrands, and in terms of the nature of the convergence from the Riemann-Stieltjes-like summation to this integral. It will be seen below that such integrals cannot be defined "pathwise" as the Riemann-Stieltjes integrals of book 3 because Brownian motion is not of bounded variation. Also note that 1.10 does not provide an explicit formula for the asset price path X_t, but instead provides an integral equation which, if solvable, defines the desired X_t. Thus such integrals need definition, as does the meaning of solving such integral equations.

The second integral above in 1.10 is an example of what is called an Itô integral, named for Kiyoshi Itô (1915-2008). Itô pioneered a mathematical framework for such integrals and related concepts, which collectively is known as Itô calculus.
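Why this second integral resists classical treatment can be previewed numerically. For the integrand B_s itself, the left-endpoint sums Σᵢ B_{sᵢ}(B_{sᵢ₊₁} − B_{sᵢ}) telescope algebraically to (B_t² − Σᵢ(ΔB_i)²)/2, and since the quadratic sums converge to t (book 7's quadratic variation results, revisited in chapter 2), the limit is (B_t² − t)/2 rather than the B_t²/2 of ordinary calculus. A standard-library sketch (the path and step counts are arbitrary choices):

```python
import math
import random

def bm_increments(n, t, rng):
    """n independent Brownian increments over a uniform partition of [0, t]."""
    sd = math.sqrt(t / n)
    return [rng.gauss(0.0, sd) for _ in range(n)]

def left_endpoint_sum(increments):
    """Left-endpoint sum  sum_i B_{s_i} (B_{s_{i+1}} - B_{s_i}),  with B_0 = 0."""
    total, b = 0.0, 0.0
    for db in increments:
        total += b * db    # evaluate the integrand at the left endpoint
        b += db            # advance the path value
    return total, b        # (Riemann-Stieltjes-like sum, terminal value B_t)

rng = random.Random(1)
t, n = 1.0, 4000
errors = []
for _ in range(50):
    s, bt = left_endpoint_sum(bm_increments(n, t, rng))
    errors.append(abs(s - (bt * bt - t) / 2.0))   # distance to the Ito value
print(max(errors))   # small, and shrinking as n grows
```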

1.2 Outline of Stochastic Calculus Topics


Assume we are given a filtered probability space denoted (S, σ(S), σ_t(S), λ)_{u.c.}, on which a Brownian motion B_t is defined (see definitions 2.2 and 2.7). Recall that this u.c.-notation implies that the filtration {σ_t(S)} satisfies the usual conditions of book 7's definition 5.4. See that book's sections 5.1.1-5.1.2 for a general discussion of spaces on which a Brownian motion may be defined, including the canonical probability space (C[0,∞), B[C[0,∞)], λ). On this latter space recall the notational convention for the definition of B_t(ω), that B_t(ω) ≡ ω(t) for ω ∈ C[0,∞).

Remark 1.2 For the next chapter on the Itô Integral we do not need to assume the usual conditions on the filtered probability space, but will require this for the following chapter 3, where local martingales and thus stopping times will be studied. For consistency, we assume the usual conditions throughout this book.

1.2.1 Book 8

Framed in terms of the above derivation, this book 8 will investigate:

1. Itô Integral: Under what conditions on a function v(s, ω) defined on [0,∞) × S can the Itô integral:

$$\int_0^t v(s,\omega)\,dB_s(\omega),$$

be defined? Note that for each t ∈ [0,∞) this integral is a function of ω ∈ S, while for each ω ∈ S this integral is a function of time t ∈ [0,∞). Thus if appropriately measurable, this integral is a stochastic process on (S, σ(S), σ_t(S), λ)_{u.c.}, denoted X_t(ω), say. Once defined, what are the measurability and other properties of this integral?

2. Stochastic Integrals: For the same or other integrand functions, can the integral in 1 be defined relative to integrator processes other than Brownian motion? What types of processes, what types of integrands, and what are the properties of the resulting integrals?

3. Transformation of Stochastic Integrals: As an example, define a stochastic process X_t(ω) on [0,∞) × S as the sum of a Riemann integral and a stochastic integral:

$$X_t(\omega) = X_0 + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s(\omega),$$

for suitable functions u(s, ω) and v(s, ω) defined on [0,∞) × S. More generally, the first integral could be a Riemann-Stieltjes integral (book 3, chapter 4):

$$X_t(\omega) = X_0 + \int_0^t u(s,\omega)\,dF_s(\omega) + \int_0^t v(s,\omega)\,dB_s(\omega),$$

where F_t(ω) has bounded variation with probability 1.

Under what conditions on a function f(t, x) will f(t, X_t(ω)) be a stochastic process of the same form? In other words, when will there be functions u_f(s, ω) and v_f(s, ω) so that:

$$f(t, X_t(\omega)) = f(0, X_0) + \int_0^t u_f(s,\omega)\,dF_s(\omega) + \int_0^t v_f(s,\omega)\,dB_s(\omega),$$

and how do these functions relate to u, v, and f?

The answer to this investigation and related questions is provided by Itô's Lemma.

4. Partial Differential Equations: It turns out that the price function f(t, X_t(ω)) for certain financial derivatives on an asset X_t can be shown to satisfy a partial differential equation which reflects the coefficient functions in the X_t model as represented in 3. It is in fact a partial differential equation with a boundary constraint, which is typically defined as the payoff of the derivative at time t = T. One approach to solving such equations is given by the Feynman-Kac representation theorem, which represents the solution in terms of an expected value of a given expression definable in terms of the given model. This expectation is defined on (S, σ(S), σ_t(S), λ)_{u.c.}, and thus reflects all asset price paths generated by the X_t model. This result is introduced in this book, and developed in more detail in book 9.

1.2.2 Book 9 or Later

The final topics to be developed in the next book 9 are:

1. Transformation of Stochastic Processes: Given a stochastic process X_t(ω) as in 3, how do the form and properties of this process depend on the underlying probability measure, or the sigma algebras, defined on this space S? In particular, can one choose a probability measure λ_G that is "equivalent" to λ, and a Brownian motion B_t^G defined on (S, σ(S), σ_t(S), λ_G)_{u.c.}, so that X_t(ω) is representable in this space as in 3 but without the dF_s(ω)-integral?

The answer to this and related questions is provided by Girsanov's theorem, of which there are several versions.

2. Itô Integral Representations: Given a stochastic process X_t(ω), what property(ies) will assure that there is a suitable function v(s, ω) defined on [0,∞) × S so that (at least for almost all ω):

$$X_t(\omega) = \int_0^t v(s,\omega)\,dB_s(\omega)?$$

The answer to this and related questions is provided by the martingale representation theorem, of which there are several versions.

3. Stochastic Differential Equations: What conditions on functions u(t, x) and v(t, x) will ensure that the stochastic integral equation:

$$X_t(\omega) = X_0 + \int_0^t u(s, X_s(\omega))\,ds + \int_0^t v(s, X_s(\omega))\,dB_s(\omega),$$

has a solution X_t(ω)? Here, these integrals are defined as in 3 with u(s, ω) = u(s, X_s(ω)) and v(s, ω) = v(s, X_s(ω)). Such equations are commonly called stochastic differential equations because they are often expressed in differential notation:

$$dX_t(\omega) = u(t, X_t(\omega))\,dt + v(t, X_t(\omega))\,dB_t(\omega).$$

For such equations we are interested in results on existence - when does a solution exist for all t - and uniqueness - when is this solution unique in some well-defined way? In addition, we are interested in identifying approaches to solving such equations which are important in finance, at least in the simpler cases.

4. Framework for Financial Derivative Pricing: Given the tools of this and earlier books, we develop the classical approaches to the pricing of certain financial derivatives.
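Looking ahead to item 3, the integral equation 1.10 already suggests a numerical scheme: simply run the discrete recursion 1.9 that produced it. The sketch below is a hypothetical illustration (standard library only; the parameter values are arbitrary choices, not from the text), and it checks the simulated terminal mean against E[X_0 exp(μT + σB_T)] = X_0 e^{(μ+σ²/2)T}:

```python
import math
import random

def recursion_paths(x0, mu, sigma, T, n, n_paths, seed=4):
    """Run the discrete recursion behind 1.10 (model 1.9) with b_i = +/-1."""
    rng = random.Random(seed)
    dt = T / n
    drift = (mu + 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            b = 1.0 if rng.random() < 0.5 else -1.0
            x *= 1.0 + drift + vol * b     # one step of model (1.9)
        finals.append(x)
    return finals

finals = recursion_paths(x0=100.0, mu=0.05, sigma=0.20, T=1.0, n=500, n_paths=4000)
sample_mean = sum(finals) / len(finals)
exact_mean = 100.0 * math.exp((0.05 + 0.5 * 0.20 ** 2) * 1.0)  # X_0 e^{(mu+sigma^2/2)T}
print(sample_mean, exact_mean)   # close, up to Monte Carlo error
```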
Chapter 2

The Itô Integral

2.1 Is a New Integration Theory Needed?

Given the study of Riemann, Lebesgue, and Riemann-Stieltjes integrals in book 3, and the general integration theory which included Lebesgue-Stieltjes integrals undertaken in book 5, it may be hard to believe that the integral:

$$I_t(\omega) \equiv \int_0^t v(s,\omega)\,dB_s(\omega),$$

requires yet another new theory. Certainly when v(s, ω) = 1 no new theory is needed, and it will be taken as axiomatic that under any rational definition of integral:

$$\int_0^t dB_s(\omega) \equiv B_t(\omega) - B_0(\omega). \tag{2.1}$$

Of course B_0(ω) = 0 λ-a.e. by definition, but we retain this structure to suggest the general required outcome on other intervals.
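The axiom 2.1 is forced by telescoping: for any partition 0 = s_0 < s_1 < ⋯ < s_k = t, the v = 1 sum Σ(B_{s_{i+1}} − B_{s_i}) collapses to B_t − B_0 identically, whatever the partition. A minimal standard-library check (the random uneven partition is an arbitrary choice):

```python
import math
import random

rng = random.Random(3)

# An arbitrary uneven partition of [0, 1], and a Brownian path sampled on it.
partition = sorted({0.0, 1.0} | {rng.random() for _ in range(500)})
path = [0.0]                                    # B_0 = 0
for s0, s1 in zip(partition, partition[1:]):
    path.append(path[-1] + rng.gauss(0.0, math.sqrt(s1 - s0)))

# The v = 1 Riemann-Stieltjes sum telescopes to B_t - B_0 for every partition.
riemann_sum = sum(b1 - b0 for b0, b1 in zip(path, path[1:]))
print(abs(riemann_sum - (path[-1] - path[0])))  # zero up to float rounding
```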

For more general integrands, we first investigate potential approaches to this integral. As noted above, such I_t(ω) is defined as a function of ω ∈ S given an associated filtered probability space (S, σ(S), σ_t(S), λ)_{u.c.} on which B_t(ω) is defined. For this discussion, we implicitly fix ω and investigate potential approaches to defining this integral.

1. As a Riemann-Stieltjes Integral: Since B_t(ω) is continuous in t with probability 1, it seems feasible that I_t(ω) could be definable as a Riemann-Stieltjes integral, at least with probability 1. Based on book 3's results, to be definable in this sense requires that B_t(ω) and v(t, ω) have no common discontinuities for bounded v(t, ω) (proposition 4.17, exercise 4.18), and this poses no problem due to the continuity of B_t(ω). On the other hand, existence results of that book for v(t, ω) continuous in t were proved when the integrator B_t(ω) is increasing (proposition 4.19) or of bounded variation (proposition 4.27). Unfortunately by book 7's propositions 2.83 and 2.87, Brownian motion is nowhere monotonic and not of bounded variation.

A positive result in this direction appears achievable using the general existence result of L. C. Young (1905-2000) discussed in book 3's remark 4.25. As common discontinuities are not a problem as noted above, Young proved that if B_t(ω) has finite strong p-variation (definition 2.84, book 7) and v(t, ω) has finite strong q-variation with 1/p + 1/q > 1, then I_t(ω) is definable as a Riemann-Stieltjes integral. Now B_t(ω) has weak 2-variation or quadratic variation (proposition 2.88, book 7) but not finite strong 2-variation (proposition 2.93, book 7). However, B_t(ω) does have strong p-variation for all p > 2 (paragraph following proposition 2.87, book 7), and thus I_t(ω) is definable as a Riemann-Stieltjes integral for all v(t, ω) of finite strong q-variation with q < 2.

While seemingly a satisfying outcome, this approach has two critical shortcomings:

(a) Despite being definable for all integrands v(t, ω) of finite strong q-variation for q < 2, this result can't possibly be extended even to all continuous v(t, ω). For example, the integral I_t(ω) ≡ ∫_0^t B_s(ω) dB_s(ω) does not fit Young's framework, even though as noted in book 7's remark 6.8, this integral has a meaningful interpretation in the context of the Doob-Meyer decomposition theorem of proposition 6.5. More generally, by book 3's propositions 4.27 and 4.52, I_t(ω) defined above exists for all continuous v(t, ω) if and only if B_t(ω) has finite strong 2-variation. As noted above, B_t(ω) has infinite strong 2-variation.
(b) Perhaps more importantly, the Riemann-Stieltjes approach abandons the measure theoretic structure of the underlying probability space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, other than allowing for statements such as: $B_t(\omega)$ is continuous with probability $1$. But the measurability properties of $B_t(\omega)$ and $v(t,\omega)$ would then play no role in this theory. Thus this approach does not allow one to investigate if or when $I_t(\omega)$ is adapted to $\{\sigma_t(S)\}$ for example, nor to investigate other measure theoretic properties of this integral. For example, when is $I_t(\omega)$ a local martingale?

Thus we abandon the Riemann-Stieltjes approach and turn to integration approaches based on measure theory.

2. As a Lebesgue-Stieltjes Integral: Fixing $\omega$ as above, perhaps we can define a Borel measure $\mu_\omega$ on $[0,\infty)$ that makes this Itô integral a Lebesgue-Stieltjes integral relative to this measure:
$$\int_0^t v(s,\omega)\, dB_s(\omega) = \int_0^t v(s,\omega)\, d\mu_\omega. \qquad (*)$$
But recall that by book 1's proposition 5.7, every such Borel measure is given by an increasing, right continuous function $F_\omega$ defined by:
$$F_\omega(t) = \mu_\omega\left[[0,t]\right].$$
Now with $v(s,\omega) \equiv 1$, it follows as noted above that since $B_0(\omega) = 0$ with probability $1$:
$$F_\omega(t) = \int_0^t dB_s(\omega) \equiv B_t(\omega), \quad \mu\text{-a.e.}$$
While $B_t(\omega)$ has more than enough continuity, book 7's proposition 2.83 obtains that it is nowhere monotonic, and so the integral $I_t(\omega)$ cannot be represented as a Lebesgue-Stieltjes integral.

3. As an Integral with respect to a Signed Measure: More generally, perhaps $(*)$ above is valid with $\mu_\omega$ a signed measure on $[0,\infty)$ (definition 7.5, book 5). If so, the Jordan decomposition theorem of that book's proposition 7.14 assures that there is a unique decomposition:
$$\mu_\omega = \mu_\omega^+ - \mu_\omega^-,$$
where $\mu_\omega^+$ and $\mu_\omega^-$ are mutually singular measures. As above it then follows that:
$$\mu_\omega\left[[0,t]\right] = \mu_\omega^+\left[[0,t]\right] - \mu_\omega^-\left[[0,t]\right] = F_\omega^+(t) - F_\omega^-(t) = B_t(\omega), \quad \mu\text{-a.e.}$$
Thus in this case, $B_t(\omega)$ must equal a difference of increasing functions. But by book 3's proposition 3.27, this is possible if and only if $B_t(\omega)$ is of bounded variation. By book 7's proposition 2.87, $B_t(\omega)$ is not of bounded variation, and thus $I_t(\omega)$ cannot be represented as an integral with respect to a signed measure.
4. As an Integral with respect to $\mu$ on $S$: While chapter 2 of book 5 develops a general integration theory on $(S, \sigma(S), \mu)$, since $I_t(\omega)$ results in a function on $S$ parametrized by $t$, there seems to be no natural way to apply this theory to this problem.

Conclusion 2.1 The notion of an Itô integral does not fit into any of the integration theories studied in books 3 and 5.
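Conclusion 2.1 is driven by the pathology quantified above: along a Brownian path, sums of absolute increments blow up as the partition mesh shrinks, while sums of squared increments stabilize. A minimal simulation (not from the text; it assumes only `numpy` and an arbitrary seed) makes this visible on a single sampled path:

```python
import numpy as np

rng = np.random.default_rng(0)

# One Brownian path on [0, 1], sampled on 2^16 equal steps.
n_fine = 2 ** 16
dB = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=n_fine)
B = np.concatenate([[0.0], np.cumsum(dB)])

# Variation sums over coarser dyadic sub-partitions of the same path.
var1 = {}   # sum of |increments|  : diverges as the mesh shrinks
var2 = {}   # sum of increments^2  : stabilizes near t = 1
for k in (2 ** 6, 2 ** 10, 2 ** 14):
    d = np.diff(B[:: n_fine // k])
    var1[k], var2[k] = np.sum(np.abs(d)), np.sum(d ** 2)
```

Refining the partition multiplies the $1$-variation sum roughly by the square root of the refinement factor, while the quadratic sum stays near the interval length, which is exactly why a Stieltjes-type theory fails here.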

2.2 Brownian Motion

Before beginning, we recall here for completeness the definition of an $n$-dimensional Brownian motion introduced in book 7's definition 1.27. For the current development, only $n = 1$ is needed, but the more general statement will be needed in later chapters. See also sections 2.2.1-2.2.2 below on the matter of filtrations on $S$, on which this definition is silent. For other properties of Brownian motion the reader is referred to chapters 1 and 2 of book 7, while for the multivariate normal distribution see chapter 3 of book 6.

Definition 2.2 ($n$-dimensional Brownian motion on $(S, \sigma(S), \mu)$) Let $(S, \sigma(S), \mu)$ be a probability space. An $n$-dimensional Brownian motion $B : S \to \mathbb{R}^n$ is a collection of random vectors $(B_t)_{t \in I}$ indexed by $t \in I$, where typically $I \equiv [0,T]$ for $T \leq \infty$:
$$B \equiv (B_t)_{t \in I} \equiv \left( \left( B_t^{(1)}, B_t^{(2)}, ..., B_t^{(n)} \right) \right)_{t \in I}, \qquad (2.2)$$
so that:

1. For almost all $\omega$, $B_0(\omega) = 0$. In other words, $\mu\left[ B_0^{-1}(0) \right] = 1$;

2. For $0 \leq s < t$, $B_t - B_s$ has a multivariate normal distribution with mean $n$-vector $0$ and covariance matrix $C \equiv (t-s)I_n$, where $I_n$ denotes the $n \times n$ identity matrix;

3. For $m \geq 2$ and $0 \leq s_1 < t_1 \leq s_2 < t_2 \leq \cdots \leq s_m < t_m$, $\left\{ B_{t_j} - B_{s_j} \right\}_{j=1}^m$ are independent random vectors;

4. For almost all $\omega$, $B_t(\omega)$ is a continuous function of $t$.


For item 2, definition 3.6 of book 6 defines the multivariate normal distribution in terms of the corresponding moment generating function, while that book's definition 6.26 restates this definition in terms of the characteristic function as justified there. However, by exercise 3.4 of that book, and the uniqueness of characteristic functions (proposition 6.25, book 6), one obtains the following user-friendly definition:

Definition 2.3 (Multivariate normal distribution) The random vector $X \equiv B_t - B_s$ has a multivariate normal distribution as in item 2 of definition 2.2 if $X$ has a density function $f_X(x)$ with $x = (x_1, ..., x_n)$ and $I_n$ the $n \times n$ identity matrix:
$$f_X(x) = (2\pi)^{-n/2} (t-s)^{-n/2} \exp\left( -\frac{1}{2} x^T (t-s)^{-1} I_n x \right). \qquad (2.3)$$
The matrix $(t-s)^{-1} I_n$ is thus an $n \times n$ diagonal matrix with $(t-s)^{-1}$ along the diagonal.

With $s = 0$, the density of $B_t \equiv \left( B_t^{(1)}, B_t^{(2)}, ..., B_t^{(n)} \right)$ can be expressed with $|y|^2 \equiv \sum_{j=1}^n y_j^2$:
$$f_{B_t}(y) = [2\pi t]^{-n/2} \exp\left[ -|y|^2 / 2t \right], \qquad (2.4)$$
and from this it follows (corollary 3.22, book 6) that for all $t$, $\{B_t^{(j)}\}_{j=1}^n$ are independent normally distributed random variables with mean $0$ and variance $t$.
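The product structure of 2.4 can be checked by direct sampling. The sketch below is illustrative only (the choices $n = 3$, $t = 2$, the seed, and the sample size are arbitrary): it draws many samples of $B_t$ and confirms that the empirical covariance matrix approximates $tI_n$, so the components are uncorrelated with common variance $t$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample B_t for a 3-dimensional Brownian motion at t = 2.0 by drawing
# the density (2.4): independent N(0, t) components.
n, t, paths = 3, 2.0, 200_000
Bt = rng.normal(0.0, np.sqrt(t), size=(paths, n))

cov = np.cov(Bt, rowvar=False)   # should approximate t * I_n
```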
For the purpose of item 3, recall item 6 of book 7's summary 1.25:

Definition 2.4 (Independent random vectors) Given $(S, \sigma(S), \mu)$, random $n$-vectors $X_1$ and $X_2$ are said to be independent random vectors if $\sigma(X_1)$ and $\sigma(X_2)$ are independent sigma algebras, where $\sigma(X_j) \equiv X_j^{-1}[\mathcal{B}(\mathbb{R}^n)]$. Thus for all $A_1, A_2 \in \mathcal{B}(\mathbb{R}^n)$:
$$\mu\left[ X_1^{-1}(A_1) \bigcap X_2^{-1}(A_2) \right] = \mu\left[ X_1^{-1}(A_1) \right] \mu\left[ X_2^{-1}(A_2) \right]. \qquad (2.5)$$
More generally, a finite or infinite (countable or not) collection of random vectors $\{X_\alpha\}_{\alpha \in I}$ are said to be independent random vectors if $\{\sigma(X_\alpha)\}_{\alpha \in I}$ are independent sigma algebras, meaning for any finite index collection $\{\alpha(j)\}_{j=1}^m$ and $\{A_{\alpha(j)}\}_{j=1}^m \subset \mathcal{B}(\mathbb{R}^n)$:
$$\mu\left[ \bigcap_{j=1}^m X_{\alpha(j)}^{-1}\left( A_{\alpha(j)} \right) \right] = \prod_{j=1}^m \mu\left[ X_{\alpha(j)}^{-1}\left( A_{\alpha(j)} \right) \right]. \qquad (2.6)$$

2.2.1 Additional Properties of BM

As noted in definition 2.2, 2.4 is the density for $B_t \equiv (B_t^{(1)}, B_t^{(2)}, ..., B_t^{(n)})$, and from this it follows that for all $t$, $\{B_t^{(j)}\}_{j=1}^n$ are independent, normally distributed random variables with mean $0$ and variance $t$. Book 7's proposition 1.42 generalizes this to the statement that an $n$-dimensional stochastic process $X_t \equiv (X_t^{(1)}, X_t^{(2)}, ..., X_t^{(n)})$ is an $n$-dimensional Brownian motion if and only if the component processes $\{X_t^{(j)}\}_{j=1}^n$ are independent $1$-dimensional Brownian motions.

The following summarizes a number of important and interesting results related to $n$-dimensional Brownian motion $B_t = (B_t^{(1)}, ..., B_t^{(n)})$. It is left as an exercise to complete the remaining details of the following statements.

1. $B_t = (B_t^{(1)}, ..., B_t^{(n)})$ is a martingale relative to the natural filtered space $(S, \sigma(S), \sigma_t(B), \mu)$, where $\sigma_t(B)$ is defined as (definition 5.4, book 7):
$$\sigma_t(B) \equiv \sigma\left( B_s^{-1}\left( \mathcal{B}(\mathbb{R}^n) \right) \mid 0 \leq s \leq t \right). \qquad (2.7)$$
That is, $\sigma_t(B)$ is the smallest sigma algebra that contains $B_s^{-1}(\mathcal{B}(\mathbb{R}^n))$ for all $s \in [0,t]$. Recall that this is called the natural filtration for a stochastic process, and it is by definition the smallest filtration with respect to which the given stochastic process is adapted (definition 5.10, book 7).

Generalizing, $B_t$ is a martingale relative to the general filtered space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ if $B_t - B_s$ is independent of $\sigma_s(S)$ for all $0 \leq s < t$. That is, if $\sigma(B_t - B_s)$ and $\sigma_s(S)$ are independent sigma algebras in the sense of definition 2.4.
Proof. One must verify the requirements of book 7's definition 5.22 to be a martingale. For the martingale condition, that
$$E\left[ B_t - B_s \mid \sigma_s(B) \right] = E\left[ B_t - B_s \mid \sigma_s(S) \right] = 0,$$
recall book 6's proposition 5.26 on properties of conditional expectations (or see proposition 2.8 below).
2. On such $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, so $B_t - B_s$ is independent of $\sigma_s(S)$ for all $0 \leq s < t$, the quadratic covariation processes of the component Brownian motions are given $\mu$-a.e. by:
$$\left\langle B^{(j)}, B^{(k)} \right\rangle_t = \delta_{jk}\, t, \qquad (2.8)$$
where $\delta_{jk} = 1$ if $j = k$ and $\delta_{jk} = 0$ otherwise.

Proof. If $j = k$, then by 6.20 of book 7, $\left\langle B^{(j)}, B^{(j)} \right\rangle_t = \left\langle B^{(j)} \right\rangle_t$, the quadratic variation process of $B^{(j)}$. The result then follows from that book's proposition 6.12, since $\left( B_t^{(j)} \right)^2 - t$ is a martingale by item 2 of proposition 2.8 below. Otherwise, it must be shown by that book's proposition 6.29 that $B_t^{(j)} B_t^{(k)}$ is a continuous local martingale. In fact $B_t^{(j)} B_t^{(k)}$ is a continuous martingale. Hint:
$$B_t^{(j)} B_t^{(k)} - B_s^{(j)} B_s^{(k)} = B_t^{(j)}\left( B_t^{(k)} - B_s^{(k)} \right) + B_s^{(k)}\left( B_t^{(j)} - B_s^{(j)} \right).$$

3. Given real constants $\{c_{jk}\}_{1 \leq k \leq j \leq n}$, define:
$$\hat{B}_t^{(j)} \equiv \sum_{k=1}^j c_{jk} B_t^{(k)}.$$
By construction $E\left[ \hat{B}_t^{(j)} \right] = 0$ for all $j$. Show that:
$$\left\langle \hat{B}^{(j)}, \hat{B}^{(l)} \right\rangle_t = \sigma_{jl}\, t,$$
where:
$$\sigma_{jl} = \sum_{k=1}^{\min(j,l)} c_{jk} c_{lk}.$$
Proof. Recall book 7’s proposition 6.30 and its corollary.
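As a numerical sanity check of item 3 (with hypothetical coefficients $c_{jk}$ not taken from the text), one can sample $B_t$, form $\hat{B}_t = LB_t$ for the lower triangular $L$ with rows $(c_{j1}, ..., c_{jj})$, and verify that the empirical covariance at time $t$ approximates $t\Sigma$ with $\sigma_{jl} = \sum_k c_{jk}c_{lk}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical coefficients c_{jk}, arranged as a lower triangular L.
L = np.array([[1.0, 0.0],
              [0.6, 0.8]])
Sigma = L @ L.T            # entries sigma_{jl} = sum_k c_{jk} c_{lk}

# hat-B_t = L B_t for independent components B_t^{(k)} ~ N(0, t), so the
# covariance matrix of hat-B_t should approximate t * Sigma.
t, paths = 1.0, 200_000
Bt = rng.normal(0.0, np.sqrt(t), size=(paths, 2))
hatBt = Bt @ L.T
cov = np.cov(hatBt, rowvar=False)
```

With this particular choice, $\sigma_{11} = \sigma_{22} = 1$ and $\sigma_{12} = 0.6$, so the sampled components are correlated Brownian motions.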
4. Continuing 3, show that for any constants $\{c_{jk}\}_{1 \leq k \leq j \leq n}$, the $n \times n$ matrix $S = (\sigma_{jl})$ is positive semidefinite, meaning that $x^T S x \geq 0$ for all $x \in \mathbb{R}^n$. Further, $x^T S x = 0$ if and only if $\sum_{j=1}^n x_j \hat{B}^{(j)} \equiv 0$.

Proof. That $S = (\sigma_{jl})$ is positive semidefinite follows from 3 and the same book 7 results:
$$x^T S x = \frac{1}{t} \sum_{j=1}^n \sum_{l=1}^n \left\langle \hat{B}^{(j)}, \hat{B}^{(l)} \right\rangle_t x_j x_l = \frac{1}{t} \sum_{j=1}^n \sum_{l=1}^n \left\langle x_j \hat{B}^{(j)}, x_l \hat{B}^{(l)} \right\rangle_t = \frac{1}{t} \left\langle \sum_{j=1}^n x_j \hat{B}^{(j)}, \sum_{l=1}^n x_l \hat{B}^{(l)} \right\rangle_t = \frac{1}{t} \left\langle \sum_{j=1}^n x_j \hat{B}^{(j)} \right\rangle_t \geq 0.$$
Thus $x^T S x = 0$ if $\sum_{j=1}^n x_j \hat{B}^{(j)} \equiv 0$. Conversely, if $x^T S x = 0$ for some $x \in \mathbb{R}^n$, then $\left\langle \sum_{j=1}^n x_j \hat{B}^{(j)} \right\rangle_t = 0$ for all $t > 0$ and thus $\sum_{j=1}^n x_j \hat{B}^{(j)} \equiv 0$. That the continuous local martingale $M_t \equiv \sum_{j=1}^n x_j \hat{B}^{(j)}$ must be identically zero follows from book 7's proposition 6.28, which for this case would state:
$$\sup_{s \leq t} Q_s^{\Pi_n}(M) \to_P 0.$$
5. Determine when $S$ in 4 is positive definite, meaning that $x^T S x > 0$ for all $x \in \mathbb{R}^n$ with $x \neq 0$.

Discussion. First note that while $x^T S x = 0$ implies that $\sum_{j=1}^n x_j \hat{B}^{(j)} \equiv 0$ from 4, this does not in general imply that $x = 0$. For example with $n = 2$, the set-up in 3 allows $\hat{B}_t^{(1)} = \hat{B}_t^{(2)}$, and thus $x = (1,-1)$ yields $\sum_{j=1}^2 x_j \hat{B}^{(j)} \equiv 0$. However, note from the expression in 3 for $\sigma_{jl}$ that:
$$S = LL^T,$$
where $L$ is the lower triangular matrix with $j$th row $(c_{j1}, c_{j2}, ..., c_{jj})$ and $L^T$ denotes the upper triangular transpose of $L$. In other words, $L_{ij}^T = L_{ji}$. In this notation, $\hat{B} = LB$, where $\hat{B}$ and $B$ denote the $n \times 1$ column matrices of the $\hat{B}_t^{(j)}$ and $B_t^{(j)}$ variates with $1 \leq j \leq n$.

It then follows that $S$ is positive definite if and only if $L$ is invertible, because $x^T S x = \left\| L^T x \right\|_2^2$, where this last expression denotes the square of the $L^2$-norm of $L^T x$ in $\mathbb{R}^n$. Thus if $S$ is positive definite, this implies $\left\| L^T x \right\|_2^2 = 0$ if and only if $x = 0$, and so $L$ is invertible. The opposite conclusion is similar.
6. Given a real, symmetric, positive definite matrix $S = (\sigma_{jl})$, so $\sigma_{jl} = \sigma_{lj}$, then $\{c_{jk}\}_{1 \leq k \leq j \leq n}$ can be uniquely chosen in 3 so that $\left\langle \hat{B}^{(j)}, \hat{B}^{(l)} \right\rangle_t = \sigma_{jl}\, t$. Put another way as in 5, such $S$ can be uniquely expressed as $S = LL^T$, where $L$ is the lower triangular invertible matrix with $j$th row $(c_{j1}, c_{j2}, ..., c_{jj})$ and $L^T$ denotes the upper triangular transpose of $L$.

Proof. Given such $S$, the calculation of $\{c_{jk}\}_{k \leq j}$, or equivalently the matrix $L$ so that $S = LL^T$, is called the Cholesky decomposition of $S$, named for André-Louis Cholesky (1875-1918). This is proposition 3.14 of book 6.
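The decomposition in item 6 is available directly in numerical libraries. The sketch below (an illustration, with an arbitrarily chosen positive definite $S$; not from the text) recovers the lower triangular factor and confirms $S = LL^T$:

```python
import numpy as np

# An arbitrary symmetric positive definite matrix S = (sigma_{jl}).
S = np.array([[4.0, 2.0, 0.6],
              [2.0, 2.0, 0.5],
              [0.6, 0.5, 1.0]])

# Cholesky factor: the unique lower triangular L with positive diagonal
# satisfying S = L L^T; its rows supply the coefficients (c_{j1},...,c_{jj}).
L = np.linalg.cholesky(S)
```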
7. Continuing 6, if the matrix $S = (\sigma_{jl})$ is real, symmetric, and positive definite, then so too is $S' = (\rho_{jl})$ where $\rho_{jl} \equiv \sigma_{jl}/\sqrt{\sigma_{jj}\sigma_{ll}}$. Thus $\rho_{jj} = 1$, $\rho_{jl} = \rho_{lj}$, and $-1 \leq \rho_{jl} \leq 1$.

Proof. $x^T S' x = y^T S y$ where $y_j \equiv x_j/\sqrt{\sigma_{jj}}$, so $S'$ is positive definite. That $-1 \leq \rho_{jl} \leq 1$ follows from proposition 6.33 of book 7, since for the constructed processes in 3 using $\{c_{jk}\}_{1 \leq k \leq j \leq n}$, it follows that $\left\langle \hat{B}^{(j)}, \hat{B}^{(l)} \right\rangle_t = \sigma_{jl}\, t$ and thus
$$\rho_{jk} = \left\langle \hat{B}^{(j)}, \hat{B}^{(k)} \right\rangle_t \Big/ \sqrt{\left\langle \hat{B}^{(j)} \right\rangle_t \left\langle \hat{B}^{(k)} \right\rangle_t}.$$
That $-1 \leq \rho_{jl} \leq 1$ also follows from the Cauchy-Schwarz inequality for $m$-vectors, that $|x \cdot y| \leq |x||y|$, by making the associated vectors the same dimension $m \equiv \max(j,k)$ by end-filling one with $0$s as necessary.

Remark 2.5 (Covariance of BM) From 3, if $\left\langle \hat{B}^{(j)}, \hat{B}^{(l)} \right\rangle_t = \sigma_{jl}\, t$, then by book 7's proposition 6.29,
$$\hat{B}_t^{(j)} \hat{B}_t^{(l)} - \sigma_{jl}\, t$$
is a local martingale. In fact it is a martingale by book 7's proposition 5.88, the Cauchy-Schwarz inequality of book 4's corollary 3.48, and finally book 7's proposition 5.91:
$$E\left[ \sup_{s \leq t}\left| \hat{B}_s^{(j)} \hat{B}_s^{(l)} - \sigma_{jl}\, s \right| \right] \leq E\left[ \sup_{s \leq t}\left| \hat{B}_s^{(j)} \right| \sup_{s \leq t}\left| \hat{B}_s^{(l)} \right| \right] + |\sigma_{jl}|\, t \leq E\left[ \sup_{s \leq t}\left( \hat{B}_s^{(j)} \right)^2 \right]^{1/2} E\left[ \sup_{s \leq t}\left( \hat{B}_s^{(l)} \right)^2 \right]^{1/2} + |\sigma_{jl}|\, t \leq 4E\left[ \left( \hat{B}_t^{(j)} \right)^2 \right] + |\sigma_{jl}|\, t = (4 + |\sigma_{jl}|)\, t.$$
Thus as a martingale:
$$E\left[ \hat{B}_t^{(j)} \hat{B}_t^{(l)} \right] = \sigma_{jl}\, t,$$
and since $E\left[ \hat{B}_t^{(j)} \right] = 0$ for all $t$, $j$:
$$\mathrm{cov}\left( \hat{B}_t^{(j)}, \hat{B}_t^{(l)} \right) = \sigma_{jl}\, t,$$
where $\mathrm{cov}$ denotes the covariance.

Analogously, $\rho_{jl}$ in 7 is the correlation between $\hat{B}_t^{(j)}$ and $\hat{B}_t^{(l)}$:
$$\mathrm{corr}\left( \hat{B}_t^{(j)}, \hat{B}_t^{(l)} \right) = \rho_{jl}.$$

2.2.2 BM on a Filtered Probability Space

As noted above prior to definition 2.2 of a Brownian motion $B_t$ on a probability space $(S, \sigma(S), \mu)$, no filtration on $S$ is mentioned nor indeed needed. The definitional requirement for a stochastic process (definition 5.10, book 7) is simply that for all $t$, $B_t : S \to \mathbb{R}^n$ is $\sigma(S)/\mathcal{B}(\mathbb{R}^n)$-measurable. In other words, for all $A \in \mathcal{B}(\mathbb{R}^n)$ we have that $B_t^{-1}(A) \in \sigma(S)$. That said, any such process and space naturally identify at least one filtration, the natural filtration $\{\sigma_t(B)\}$ of item 1 of section 2.2.1, with which one has a refined statement of measurability. That is, for each $t$, $B_t : S \to \mathbb{R}^n$ is $\sigma_t(B)/\mathcal{B}(\mathbb{R}^n)$-measurable, and thus $B_t^{-1}(A) \in \sigma_t(B)$ for all $A \in \mathcal{B}(\mathbb{R}^n)$. In the language of book 7's definition 5.10, $B_t$ is adapted to the filtered probability space $(S, \sigma(S), \sigma_t(B), \mu)$.

More generally, to say that $B_t$ is defined on a filtered probability space $(S, \sigma(S), \sigma_t(S), \mu)$ always implies that $B_t$ is defined on $(S, \sigma(S), \mu)$ and adapted to the filtration $\{\sigma_t(S)\}$, and thus it follows for the natural filtration that $\sigma_t(B) \subset \sigma_t(S)$ for all $t$. While using larger sigma algebras never creates a problem for adaptedness, such filtrations create potential problems for certain properties of a process, such as the Markov property or the martingale property. This is because larger sigma algebras add more conditions that must be satisfied for a Brownian motion or other process to be Markov (section 4.2.4, book 7) or a martingale (section 3.2, book 7), and indeed these properties can be lost.

If $\{\sigma_t(B)\}_{u.c.}$ denotes the natural filtration enlarged to satisfy the usual conditions (definition 5.4, remark 5.5, book 7), then proposition 4.31 of that book states that Brownian motion remains a Markov process relative to $(S, \sigma(S), \sigma_t(B), \mu)_{u.c.}$. This filtration is there denoted $\{\sigma'_{t+}(B)\}$. It is an exercise to verify that Brownian motion also remains a martingale relative to $(S, \sigma(S), \sigma_t(B), \mu)_{u.c.}$. Only the martingale property need be checked, and this follows by remark 5.6 of book 7, that for any $t$, $\sigma_t(B)$ and $\sigma'_{t+}(B)$ (there denoted $\sigma_{t+}(B)$) differ only by negligible sets, which have measure $0$. To enlarge these sigma algebras more, however, is to risk losing these and other important properties of Brownian motion.
What will be seen to be critical for the development of the Itô integral, the definition of Brownian motion states that for all $t > s$, $B_t - B_s$ is independent of $B_s$. By definition (see section 3.4 of book 2, or summary 1.25 of book 7), this means that the sigma algebras generated by these random variables are independent sigma algebras. This allows the application of the independence property of conditional expectations (proposition 5.26, book 6), an important operation in the current chapter, which requires that $B_t - B_s$ is independent of the sigma algebra $\sigma_s(B)$. As implied by the above, $B_t - B_s$ is then also independent of the larger sigma algebras found in the filtration $\{\sigma_t(B)\}_{u.c.}$. But if $B_t$ is defined on a general filtered space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, such independence need not be valid. Of course by definition, $B_t - B_s$ will be independent of $B_s$ and thus of $\sigma_s(B)$, but it need not be the case that $B_t - B_s$ will be independent of the larger sigma algebra $\sigma_s(S)$. As a simple example, fix $t$ and $s$, and then for all $r \leq s$ define:
$$\sigma_r(S) \equiv \sigma\left[ \sigma_r(B) \cup \sigma(B_t - B_s) \right].$$
In other words, for $r \leq s$, $\sigma_r(S)$ is the smallest sigma algebra that contains $\sigma_r(B)$ and the sigma algebra generated by $B_t - B_s$. Then $B_t - B_s$ cannot be independent of $\sigma_s(S)$ since $\sigma(B_t - B_s) \subset \sigma_s(S)$.

The next result reframes the definition of a Brownian motion on the filtered space $(S, \sigma(S), \sigma_t(B), \mu)_{u.c.}$ in a manner that explicitly identifies the role of this filtration and independence. It will then form the basis of the definition of a Brownian motion on a general filtered space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$.

Proposition 2.6 ($n$-dimensional Brownian motion on $(S, \sigma(S), \mu)$) Let $(S, \sigma(S), \mu)$ be a probability space and $B : S \to \mathbb{R}^n$ a collection of random vectors $(B_t)_{t \in I}$ indexed by $t \in I$ as in 2.2, so that $\mu$-a.e., $B_0(\omega) = 0$ and $B_t(\omega)$ is a continuous function of $t$. Then $B$ is an $n$-dimensional Brownian motion if and only if for all $0 \leq s < t$:

1. $B_t - B_s$ has a multivariate normal distribution with mean $n$-vector $0$ and covariance matrix $C \equiv (t-s)I_n$, where $I_n$ denotes the $n \times n$ identity matrix;

2. $B_t - B_s$ is independent of $\sigma_s(B)$ as defined in 2.7.

Proof. Comparing the statement here with that in definition 2.2, the only difference is item 2 here versus item 3 in the definition. Now if $B$ is an $n$-dimensional Brownian motion by definition 2.2, then let $m = 2$ with $s_1 = 0$, $t_1 = s_2 = s$ and $t_2 = t$. Then the definition states that $B_t - B_s$ is independent of $B_s$, which by definition means it is independent of $\sigma_s(B)$.

Conversely, assume item 2 in the statement, let $0 \leq s_1 < t_1 \leq s_2 < t_2 \leq \cdots \leq s_m < t_m$ be given, and we prove that $\left\{ B_{t_j} - B_{s_j} \right\}_{j=1}^m$ are independent random vectors. By item 2, $B_{t_m} - B_{s_m}$ is independent of $\sigma_{s_m}(B)$. However, since $\sigma_{s_{m-1}}(B) \subset \sigma_{t_{m-1}}(B) \subset \sigma_{s_m}(B)$ implies that $\sigma\left( B_{t_{m-1}} - B_{s_{m-1}} \right) \subset \sigma_{s_m}(B)$, it follows by definition that $B_{t_m} - B_{s_m}$ is independent of $B_{t_{m-1}} - B_{s_{m-1}}$. Repeating this argument obtains that $\left\{ B_{t_j} - B_{s_j} \right\}_{j=1}^{m-1}$ are each independent of $B_{t_m} - B_{s_m}$. But then applying this argument to the observation that $B_{t_{m-1}} - B_{s_{m-1}}$ is independent of $\sigma_{s_{m-1}}(B)$ obtains that $\left\{ B_{t_j} - B_{s_j} \right\}_{j=1}^{m-2}$ are each independent of $B_{t_{m-1}} - B_{s_{m-1}}$. Continuing finally obtains that $\left\{ B_{t_j} - B_{s_j} \right\}_{j=1}^m$ are independent random vectors.

It follows from remark 5.6 of book 7 that for this proposition, we can in item 2 specify as the given sigma algebra the one defined in terms of the natural filtration $\{\sigma_s(B)\}_{u.c.}$. Recall that this denotes the filtration $\{\sigma_s(B)\}$ increased to be right continuous and to include all negligible sets of $\sigma(S)$, and thus it now satisfies the usual conditions. By that prior book's remark, $\sigma_s(B)$ and $\sigma_s(B)_{u.c.}$ differ only by sets of measure $0$ or $1$, and thus $B_t - B_s$ is independent of $\sigma_s(B)$ if and only if $B_t - B_s$ is independent of $\sigma_s(B)_{u.c.}$. This leads naturally to the general definition of a Brownian motion defined on a filtered probability space.

By proposition 2.6 and this discussion, definitions 2.2 and 2.7 are equivalent when $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.} = (S, \sigma(S), \sigma_t(B), \mu)_{u.c.}$.

Definition 2.7 ($n$-dimensional Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$) Let $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ be a filtered probability space and $B : S \to \mathbb{R}^n$ a collection of random vectors $(B_t)_{t \in I}$ indexed by $t \in I$ as in 2.2. Then $B$ is an $n$-dimensional Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ if:

1. For almost all $\omega$, $B_0(\omega) = 0$;

2. For $0 \leq s < t$, $B_t - B_s$ has a multivariate normal distribution with mean $n$-vector $0$ and covariance matrix $C \equiv (t-s)I_n$, where $I_n$ denotes the $n \times n$ identity matrix;

3. For $0 \leq s < t$, $B_t - B_s$ is independent of $\sigma_s(S)$;

4. For almost all $\omega$, $B_t(\omega)$ is a continuous function of $t$.

The Itô integral is generally developed in the context of a Brownian motion defined on $(S, \sigma(S), \sigma_t(S), \mu)$ or $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, and then this independence of $B_t - B_s$ and $\sigma_s$, and all of the implications of this independence, can be utilized. The following proposition summarizes the key properties of a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ that follow from such an independence assumption. They will be familiar as properties associated with the natural filtration.

For the rest of this chapter, we will then explicitly assume this property for the filtration $\{\sigma_t(S)\}_{u.c.}$ and develop the Itô integral in this slightly more general context.

Proposition 2.8 (BM as a martingale) Let $B_t(\omega)$ be a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, and thus by definition 2.7, for all $t > s \geq 0$:
$$B_t - B_s \text{ is independent of } \sigma_s(S). \qquad (2.9)$$
Then for all $t > s \geq 0$:

1. $B_t$ is a martingale relative to $\{\sigma_t(S)\}$:
$$E\left[ B_t \mid \sigma_s(S) \right] = B_s;$$

2. $B_t^2 - t$ is a martingale relative to $\{\sigma_t(S)\}$:
$$E\left[ B_t^2 - t \mid \sigma_s(S) \right] = B_s^2 - s;$$

3. $E\left[ (B_t - B_s)^2 \mid \sigma_s(S) \right] = t - s$.

Proof. By the independence property of conditional expectations (proposition 5.26, book 6):
$$E\left[ B_t - B_s \mid \sigma_s(S) \right] = E\left[ B_t - B_s \right] = 0,$$
since $B_t - B_s \sim N(0, t-s)$. The linearity and measurability properties of conditional expectations then obtain 1, since $B_t$ is integrable.

By definition, 2.9 is equivalent to independence of $\sigma(B_t - B_s)$, the sigma algebra generated by $B_t - B_s$, and $\sigma_s(S)$. Remark 3.57 of book 2 proves that $\sigma\left[ (B_t - B_s)^2 \right] \subset \sigma(B_t - B_s)$, and thus $(B_t - B_s)^2$ is also independent of $\sigma_s(S)$. The derivation of 3 is now identical to that for 1.

Now since
$$B_t^2 = B_s^2 + 2B_s(B_t - B_s) + (B_t - B_s)^2,$$
1, 3 and the measurability property obtain:
$$E\left[ B_t^2 - B_s^2 \mid \sigma_s(S) \right] = 2B_s E\left[ B_t - B_s \mid \sigma_s(S) \right] + E\left[ (B_t - B_s)^2 \mid \sigma_s(S) \right] = t - s.$$
The linearity property and integrability of $B_t^2$ then obtain 2.
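Taking expectations in items 1-3 yields unconditional identities that are easy to check by simulation. The sketch below is illustrative (the values of $s$, $t$, the seed and the path count are arbitrary choices): it samples $B_s$ and an independent increment, and confirms $E[B_t - B_s] = 0$, $E[B_t^2 - t] = E[B_s^2 - s]$, and $E[(B_t - B_s)^2] = t - s$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of (the expectations of) proposition 2.8 at fixed s < t.
s, t, paths = 0.7, 1.9, 500_000
Bs = rng.normal(0.0, np.sqrt(s), size=paths)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), size=paths)   # independent increment

m1 = np.mean(Bt - Bs)                            # ~ 0       (item 1)
m2 = np.mean(Bt**2 - t) - np.mean(Bs**2 - s)     # ~ 0       (item 2)
m3 = np.mean((Bt - Bs)**2)                       # ~ t - s   (item 3)
```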



Remark 2.9 Note that given only 1, that $B_t$ is a martingale relative to $\{\sigma_t(S)\}$, items 2 and 3 are equivalent. As noted above, 3 $\Rightarrow$ 2 since then:
$$E\left[ B_t^2 - B_s^2 \mid \sigma_s(S) \right] = E\left[ (B_t - B_s)^2 \mid \sigma_s(S) \right],$$
using integrability of $B_t^2$ and linearity of conditional expectations. But the same justification obtains:
$$E\left[ B_t^2 - B_s^2 \mid \sigma_s(S) \right] = E\left[ B_t^2 - t - \left( B_s^2 - s \right) \mid \sigma_s(S) \right] + t - s.$$
The point of the additional assumption in 2.9 is that it then allows the evaluation of these conditional expectations.

Corollary 2.10 (Quadratic variation of BM) Let $B_t(\omega)$ be a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$. Then the quadratic variation process $\langle B \rangle_t$ of book 7's definition 6.2 is given $\mu$-a.e. by:
$$\langle B \rangle_t = t. \qquad (2.10)$$
Proof. As a continuous martingale on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ by proposition 2.8, $B_t(\omega)$ is a continuous local martingale in this space by book 7's corollary 5.86. Thus 2.10 follows from that book's proposition 6.12, since $B_t^2 - t$ is a local martingale by the same argument, and the observation that $\langle B \rangle_t = t$ is a continuous, increasing and adapted process.

Exercise 2.11 Generalize exercise 5.29 of book 7: If $B_t$ is a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, then $\exp\left( aB_t - a^2 t/2 \right)$ is a martingale for $a$ real. Hint: Proposition 5.26, book 6.
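Since the process of exercise 2.11 is a martingale started at $1$, its expectation is $1$ at every $t$, which is a one-line Monte Carlo check (illustrative only; $a = 0.8$, the seed, and the grid of times are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# E[exp(a B_t - a^2 t / 2)] should be 1 for every t.
a, paths = 0.8, 1_000_000
means = []
for t in (0.5, 1.0, 2.0):
    Bt = rng.normal(0.0, np.sqrt(t), size=paths)
    means.append(np.mean(np.exp(a * Bt - a**2 * t / 2)))
```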

2.3 Preliminary Insights to a New Integral

Taking our cue from the earlier developments of integration theories in books 3 and 5, if we seek to define the integral $\int_0^t v(s,\omega)\, dB_s(\omega)$, we should begin with the simplest functions defined on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, then investigate generalizations. In the process, it is natural to assign to such simpler integrals the properties, such as linearity, that we desire in the final result. And as this integral is defined as a function of $\omega$, and thus is a random variable on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ if measurable, it is only necessary to have such definitions specified $\mu$-a.e.

2.3.1 Axioms for a New Integral

We next identify what might be considered the required axiomatic properties of the integral $\int_0^\infty v(s,\omega)\, dB_s(\omega)$ for the simplest integrands. Item 1 below defines what it means to integrate the "differential" $dB_s(\omega)$ over bounded intervals. Items 2 and 3 then extend this definition so that the integral will be required to be "linear" in terms of its integrands. That is, we ultimately want to have $\mu$-a.e.:
$$\int_0^\infty \left( a(\omega)v(s,\omega) + w(s,\omega) \right) dB_s(\omega) = a(\omega)\int_0^\infty v(s,\omega)\, dB_s(\omega) + \int_0^\infty w(s,\omega)\, dB_s(\omega). \qquad (2.11)$$
This identity provides more than just a statement about the value of such integrals, as it also provides a statement about existence of the integral. Specifically, if $v(s,\omega)$ and $w(s,\omega)$ are integrable, then so too is $a(\omega)v(s,\omega) + w(s,\omega)$.

As introduced in 2.1 above, if $v(s,\omega) \equiv 1$ we ultimately expect that $\mu$-a.e.:
$$\int_0^t dB_s(\omega) \equiv B_t(\omega) - B_0(\omega) = B_t(\omega), \quad \mu\text{-a.e.},$$
since $B_0(\omega) = 0$ $\mu$-a.e. But it is simpler and more direct to initially define integrals of the form $\int_0^\infty v(s,\omega)\, dB_s(\omega)$, and then define $\int_0^t v(s,\omega)\, dB_s(\omega)$.

1. Characteristic Functions: Given the characteristic (or indicator) function $v(s,\omega) \equiv \chi_{(a,b]}(s)$, $0 \leq a < b$, meaning $\chi_{(a,b]}(s) = 1$ for $s \in (a,b]$ and is $0$ otherwise, then for almost all $\omega \in S$ we define:
$$\int_0^\infty \chi_{(a,b]}(s)\, dB_s(\omega) = B_b(\omega) - B_a(\omega).$$
Thus in particular:
$$\int_0^\infty \chi_{(0,t]}(s)\, dB_s(\omega) = B_t(\omega), \quad \mu\text{-a.e.}$$
In addition we define:
$$\int_0^\infty \chi_{\{0\}}(s)\, dB_s(\omega) = 0.$$

2. Pathwise Constant Functions: Generalizing from characteristic


functions to:

v(s; !) a(!) (t;t0 ] (s); 0 t < t0 < 1;

then since a(!) is a constant for each ! 2 S; we require linearity and


de…ne -a.e.:
Z 1 Z 1
a(!) (a;b] (s)dBs (!) = a(!) (a;b] (s)dBs (!);
0 0

and to evaluate this integral we apply 1 :


Z 1
a(!) (a;b] (s)dBs (!) = a(!) [Bb (!) Ba (!)] :
0

The integral of a(!) f0g (s) is de…ned analogously and thus:


Z 1
a(!) f0g (s)dBs (!) = 0:
0

3. Simple Processes (also called elementary processes): The next step in the sequence of "axioms" is to address simple processes:
$$v(s,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s), \qquad (2.12)$$
where:
$$0 \leq t_0 < t_1 < \cdots < t_{n+1} < \infty.$$
For this we require linearity and thus define (simplifying notation by suppressing $(\omega)$):
$$\int_0^\infty \left[ a_{-1}\chi_{\{0\}}(s) + \sum_{j=0}^n a_j \chi_{(t_j,t_{j+1}]}(s) \right] dB_s \equiv \int_0^\infty a_{-1}\chi_{\{0\}}(s)\, dB_s + \sum_{j=0}^n a_j \int_0^\infty \chi_{(t_j,t_{j+1}]}(s)\, dB_s,$$
which we evaluate by 2:
$$\int_0^\infty \left[ a_{-1}\chi_{\{0\}}(s) + \sum_{j=0}^n a_j \chi_{(t_j,t_{j+1}]}(s) \right] dB_s = \sum_{j=0}^n a_j\left( B_{t_{j+1}} - B_{t_j} \right). \qquad (2.13)$$
As in the case of simple functions in book 5, we must check that this definition is consistent as in that book's proposition 2.4. See exercise 2.12.

4. Bounded Intervals: If $v(s,\omega)$ is a simple process as in 2.12, then for any interval $(a,b]$ so too is $\chi_{(a,b]}(s)v(s,\omega)$, and we define:
$$\int_a^b v(s,\omega)\, dB_s(\omega) \equiv \int_0^\infty \chi_{(a,b]}(s)v(s,\omega)\, dB_s(\omega).$$
This will often be applied with $(a,b] = (0,t]$:
$$\int_0^t v(s,\omega)\, dB_s(\omega) \equiv \int_0^\infty \chi_{(0,t]}(s)v(s,\omega)\, dB_s(\omega). \qquad (2.14)$$

Exercise 2.12 (On consistency of 2.13) Prove that if:
$$w(s,\omega) \equiv b_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{k=0}^m b_k(\omega)\chi_{(t'_k,t'_{k+1}]}(s),$$
with
$$0 \leq t'_0 < t'_1 < \cdots < t'_{m+1} < \infty,$$
and $w(s,\omega) = v(s,\omega)$, $\mu$-a.e., then the respective integrals produced by 2.13 agree $\mu$-a.e. Hint: Define a common partition and note that if $(t_j, t_{j+1}] \cap (t'_k, t'_{k+1}] \neq \emptyset$, then $a_j(\omega) = b_k(\omega)$, $\mu$-a.e.

Exercise 2.13 Show that 2.13 generalizes using 2.14 as follows. If $v(s,\omega)$ is a simple process as in 2.12, then:
$$\int_0^t v(s,\omega)\, dB_s(\omega) = \sum_{j=0}^n a_j(\omega)\left( B_{t_{j+1} \wedge t}(\omega) - B_{t_j \wedge t}(\omega) \right), \qquad (2.15)$$
where $t_j \wedge t \equiv \min\{t_j, t\}$.
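Formula 2.15 is mechanical enough to transcribe directly. Below is a sketch (the helper `ito_simple` is hypothetical, not from the text) that evaluates the simple-process integral for one fixed $\omega$, with the path supplied as a function $s \mapsto B_s(\omega)$; the $a_{-1}\chi_{\{0\}}$ term contributes nothing and is omitted:

```python
def ito_simple(a, times, B_of, t):
    """Evaluate (2.15): sum_j a_j * (B_{t_{j+1} ^ t} - B_{t_j ^ t}).

    a     : values a_0, ..., a_n, with a_j constant on (t_j, t_{j+1}]
    times : partition points t_0 < t_1 < ... < t_{n+1}
    B_of  : one path of the integrator, as a function s -> B_s(omega)
    t     : upper limit of integration
    """
    total = 0.0
    for j, aj in enumerate(a):
        lo, hi = min(times[j], t), min(times[j + 1], t)
        total += aj * (B_of(hi) - B_of(lo))
    return total
```

For instance, with the deterministic (non-Brownian) test path $B_s = 2s$, partition $\{0, 1, 2\}$ and $t = 1.5$, the formula gives $1\cdot(B_1 - B_0) + 3\cdot(B_{1.5} - B_1) = 2 + 3 = 5$, matching the truncation $t_{j+1}\wedge t$ of the last interval.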

2.3.2 Next Steps and Questions

Following the logic of book 5, and given our definition of an Itô integral for simple processes in 2.13, it is natural to define $\int_0^t v(s,\omega)\, dB_s(\omega)$ for more general $v(s,\omega)$ as a limit of the integrals of simple processes. It was noted above that such limits will not exist in the Riemann-Stieltjes sense with probability $1$, even for $a_j(\omega) \equiv 1$.

In addition to this question on limits, the following example shows that it is also necessary to do some thinking about the measurability requirements of the $a_j$-variates. Here we explore an evaluation of $\int_0^T B_s(\omega)\, dB_s(\omega)$ by approximating $B_s(\omega)$ with simple processes in a natural way. This example also works with general $\{\sigma_t(S)\}$ in place of $\{\sigma_t(B)\}$, as long as $B_t(\omega)$ is a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ by definition 2.7.

Example 2.14 (Integrand measurability and the integral) Let $B_t(\omega)$ be a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, where $\{\sigma_t(B)\}$ is the natural filtration on $S$ (definition 5.4, book 7). For any $r$ with $0 \leq r \leq 1$, there is a sequence of simple functions $\{v_n(s,\omega)\}_{n=1}^\infty$ defined on $(S, \sigma(S), \sigma_t(B), \mu)$ so that $v_n(s,\omega) \to B_s(\omega)$ for almost all $\omega$, and for all $n$:
$$E\left[ \int_0^T v_n(s,\omega)\, dB_s(\omega) \right] = rT.$$
Proof. Given a partition $0 \equiv t_0 < t_1 < \cdots < t_n = T$, define $v_n(s,\omega) \equiv \sum_{j=1}^n B_{s_j}(\omega)\chi_{(t_{j-1},t_j]}(s)$, where $s_j = t_{j-1} + r(t_j - t_{j-1})$ with $r \in [0,1]$ and assumed fixed. By 1-3 above:
$$\int_0^T v_n(s,\omega)\, dB_s(\omega) = \sum_{j=1}^n B_{s_j}(\omega)\left( B_{t_j}(\omega) - B_{t_{j-1}}(\omega) \right).$$
Since
$$B_{t_j}(\omega) - B_{t_{j-1}}(\omega) = \left( B_{t_j}(\omega) - B_{s_j}(\omega) \right) + \left( B_{s_j}(\omega) - B_{t_{j-1}}(\omega) \right),$$
it follows that
$$\int_0^T v_n(s,\omega)\, dB_s(\omega) = \sum_{j=1}^n B_{s_j}(\omega)\left( B_{t_j}(\omega) - B_{s_j}(\omega) \right) + \sum_{j=1}^n B_{s_j}(\omega)\left( B_{s_j}(\omega) - B_{t_{j-1}}(\omega) \right).$$
Noting that $B_{s_j}(\omega)$ is measurable relative to the sigma algebra $\sigma_{s_j}(S)$, apply the tower and measurability properties of conditional expectations of book 6's proposition 5.26, and then the independence property (justified in proposition 2.6):
$$E\left[ B_{s_j}\left( B_{t_j} - B_{s_j} \right) \right] = E\left[ B_{s_j} E\left[ B_{t_j} - B_{s_j} \mid \sigma_{s_j}(S) \right] \right] = E\left[ B_{s_j} \right] E\left[ B_{t_j} - B_{s_j} \right] = 0.$$
Similarly $E\left[ B_{t_{j-1}}\left( B_{s_j} - B_{t_{j-1}} \right) \right] = 0$, and so:
$$E\left[ B_{s_j}\left( B_{s_j} - B_{t_{j-1}} \right) \right] = E\left[ \left( B_{s_j} - B_{t_{j-1}} \right)^2 \right] = s_j - t_{j-1}.$$
Thus:
$$E\left[ \int_0^T v_n(s,\omega)\, dB_s(\omega) \right] = \sum_{j=1}^n (s_j - t_{j-1}) = rT.$$
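The dependence on $r$ in example 2.14 shows up clearly in simulation. The sketch below is illustrative (grid sizes, path count, and seed are arbitrary): it samples the two increments $B_{s_j} - B_{t_{j-1}} \sim N(0, r\,\Delta t)$ and $B_{t_j} - B_{s_j} \sim N(0, (1-r)\Delta t)$ directly and averages the integral of the simple process over many paths.

```python
import numpy as np

rng = np.random.default_rng(5)

# E[ integral of v_n dB ] = r*T for the simple approximations of example
# 2.14: r = 0 is the left-endpoint choice, r = 1 the right endpoint.
T, n, paths = 1.0, 100, 30_000
dt = T / n
means = {}
for r in (0.0, 0.5, 1.0):
    X = rng.normal(0.0, np.sqrt(r * dt), size=(paths, n))          # B_{s_j} - B_{t_{j-1}}
    Y = rng.normal(0.0, np.sqrt((1.0 - r) * dt), size=(paths, n))  # B_{t_j} - B_{s_j}
    dB = X + Y                                                     # B_{t_j} - B_{t_{j-1}}
    B_prev = np.concatenate(
        [np.zeros((paths, 1)), np.cumsum(dB, axis=1)[:, :-1]], axis=1)
    B_s = B_prev + X                                               # B_{s_j}
    means[r] = np.mean(np.sum(B_s * dB, axis=1))
```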

Exercise 2.15 Show that the same expectations are produced if, instead of defining $v_n(s,\omega)$ in terms of $a_j(\omega) \equiv B_{s_j}(\omega)$ where $s_j = t_{j-1} + r(t_j - t_{j-1})$, we define $v_n(s,\omega)$ in terms of $a_j(\omega) \equiv (1-r)B_{t_{j-1}}(\omega) + rB_{t_j}(\omega)$.

Remark 2.16 (On example 2.14) Note that the above example does not show that the sequence of random variables $\int_0^T v_n(s,\omega)\, dB_s(\omega)$ converges to a random variable on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, nor even addresses the meaning of such convergence. However, it does show that each random variable of this sequence has constant expectation, and that we can make this constant equal to $rT$ for any $r$ with $0 \leq r \leq 1$. The significance of this is that our usual procedure for approximating measurable functions with simple functions will require some additional thought, since even for the $\mu$-a.e. continuous process $B_t$, the unbounded variation over intervals makes the choice of intermediate values very significant. Specifically, this choice reflects the measurability property desired for the $a_j$-variates in 2.13.

The Itô integral developed below, named for Kiyoshi Itô (1915-2008), is defined with $a_j$-variates that are measurable relative to the sigma algebra $\sigma_{t_j}(B)$ of the natural filtration associated with $B_t$ (definition 5.4, book 7). This integral turns out to be a martingale, and as such the Itô calculus is universally used in finance, where martingales play an integral role.

An alternative approach is to use a midpoint estimate for the $a_j$-variates, for example $a_j(\omega) = \left( B_{t_j}(\omega) + B_{t_{j+1}}(\omega) \right)/2$, or $a_j(\omega) = B_{s_j}(\omega)$ with $s_j = (t_j + t_{j+1})/2$ as in the example of the above integral. The first approach produces what is known as the Stratonovich integral, named for Ruslan Stratonovich (1930-1997), and also called the Fisk-Stratonovich integral, for Donald L. Fisk, who developed these results at the same time as Stratonovich. This integral, though not a martingale, and the associated Stratonovich calculus form the basis for important applications in physics.

2.4 Quadratic Variation Process of Brownian Motion

An insight into the manner in which we will evaluate the limits of Itô integrals of simple functions comes from the next proposition, which summarizes results on the quadratic variation and the quadratic variation process associated with Brownian motion, where we reference definitions 2.84 and 6.2 of book 7. It was proved in that book's proposition 2.87 that with probability $1$, Brownian motion is not of bounded variation (not even finite weak variation). Generalizing this $1$-variation result, corollary 2.90 proved that Brownian motion is not of finite weak $p$-variation for any $p < 2$.

On the other hand, in 1972 S. James Taylor (b. 1929) proved that Brownian motion has finite strong $p$-variation if and only if $p > 2$. Thus the existence of weak $p$-variation for $p \geq 2$ is the only positive result possible. By exercise 2.86 of book 7, Brownian motion can have at most one finite, non-zero weak $p$-variation in this range, and $p = 2$, or weak quadratic variation, is that unique index.

Recall that for the weak variation definition, the supremum is restricted to partitions with mesh $\mu_n \to 0$, where the mesh size $\mu_n$ is defined:
$$\mu_n \equiv \max_{1 \le i \le n} \{t_i - t_{i-1}\}.$$
Though Brownian motion does not have finite strong quadratic variation, where the supremum reflects all partitions, it does have finite weak quadratic variation. This last result was proved in book 7's proposition 2.88, along with the result that Brownian motion's weak quadratic variation over the interval $[a,b]$ converges to $b - a$ in the $L_2(\mathcal{S})$-norm and in probability.
The next result is largely a restatement and summary of the more general results of chapter 6 of book 7 applied to Brownian motion. But here we are able to define the quadratic variation process more explicitly as in the Doob-Meyer decomposition theorem for bounded continuous martingales of book 7's proposition 6.5, and then obtain a slightly stronger result than that book's proposition 6.12 for continuous local martingales. To simplify notation this result is stated over $[0,t]$, though the same proof works for Brownian motion over $[t,t']$. The result would then be that $Q_s^{\Pi_n}(B) \to_{L_2} t' - t$ uniformly over $s \in [t,t']$, meaning in the $L_2$-norm, as well as in probability. We then add the corollary 2.10 result for completeness.

Proposition 2.17 (Quadratic variation process of BM) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, which thus satisfies the independence criterion of 2.9. Let $Q_s^{\Pi_n}(B)$ denote the finite quadratic variation process over $[0,t]$ as in book 7's definition 6.2, based on a partition $\Pi_n \equiv \{t_i\}_{i=0}^n$ with $0 = t_0 < t_1 < \dots < t_n = t$:
$$Q_s^{\Pi_n}(B) = \sum_{i=1}^{i(s)} \big(B_{t_i} - B_{t_{i-1}}\big)^2 + \big(B_s - B_{t_{i(s)}}\big)^2,$$
where $i(s) \equiv \max\{i \mid t_i \le s\}$.


If $\mu_n \equiv \max_i\{t_i - t_{i-1}\} \to 0$, then:
$$E\Big[\sup_{s \le t}\big|Q_s^{\Pi_n}(B) - s\big|^2\Big] \equiv \int_{\mathcal{S}} \sup_{s \le t}\big|Q_s^{\Pi_n}(B) - s\big|^2\, d\lambda \to 0. \quad (2.16)$$
In other words, $Q_s^{\Pi_n}(B) \to_{L_2} s$ uniformly over $s \in [0,t]$.

In addition, $Q_s^{\Pi_n}(B)$ converges to $s$ in probability uniformly over $[0,t]$:
$$\sup_{s \le t}\big|Q_s^{\Pi_n}(B) - s\big| \to_P 0.$$
That is, for all $\epsilon > 0$:
$$\Pr\Big[\sup_{s \le t}\big|Q_s^{\Pi_n}(B) - s\big| > \epsilon\Big] \to 0. \quad (2.17)$$

Finally, the quadratic variation process for Brownian motion is given $\lambda$-a.e. by:
$$\langle B\rangle_t = t, \quad (2.18)$$
and is independent of $\omega \in \mathcal{S}$.
Proof. That 2.17 follows from 2.16 is a result of Chebyshev's inequality of proposition 3.33 (book 4) with $n = 2$:
$$\Pr\Big[\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| > \epsilon\Big] \le E\Big[\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big|^2\Big]\big/\epsilon^2. \quad (1)$$
For 2.16, Doob's martingale maximal inequality 2 of book 7's proposition 5.91 obtains that if $Q_s^{\Pi_n}(B) - s$ is a continuous martingale then:
$$E\Big[\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big|^2\Big] \le 4E\Big[\big|Q_t^{\Pi_n}(B) - t\big|^2\Big]. \quad (2)$$
That $Q_s^{\Pi_n}(B) - s$ is a continuous martingale will be demonstrated as the difference of $B_s^2 - s$, which is a continuous martingale by proposition 2.8, and $B_s^2 - Q_s^{\Pi_n}(B)$, which is a continuous martingale as we now demonstrate, using essentially the same proof as for step 2 of book 7's proposition 6.5. Though $B_s$ is not a bounded martingale as that theorem assumed, the same proof works here because $B_s$ has a finite second moment for all $s$. The proof requires verification of the requirements of book 7's definition 5.22.

First, $B_s^2 - Q_s^{\Pi_n}(B)$ is $\sigma_s(\mathcal{S})$-measurable by the definition of $Q_s^{\Pi_n}(B)$, and continuous since $B_s$ is continuous and by definition this assures the same for $Q_s^{\Pi_n}(B)$. In addition, since $|B_s|^2$ is integrable for all $s$:
$$E\Big[\big(B_{t_i} - B_{t_{i-1}}\big)^2\Big] \le E\Big[\big(|B_{t_i}| + |B_{t_{i-1}}|\big)^2\Big] < \infty.$$

Thus $E\big[Q_s^{\Pi_n}(B)\big] < \infty$, and so $E\big[\big|B_s^2 - Q_s^{\Pi_n}(B)\big|\big] < \infty$ for all $s$.

For the martingale condition let $t \ge s$. Since $Q_s^{\Pi_n}(B)$ is $\sigma_s(\mathcal{S})$-measurable, the measurability property of conditional expectations of book 6's proposition 5.26 obtains:
$$E\big[B_t^2 - Q_t^{\Pi_n}(B) \mid \sigma_s(\mathcal{S})\big] = E\big[B_t^2 \mid \sigma_s(\mathcal{S})\big] - Q_s^{\Pi_n}(B) - E\big[Q_t^{\Pi_n}(B) - Q_s^{\Pi_n}(B) \mid \sigma_s(\mathcal{S})\big].$$
The martingale property then follows if it can be proved that
$$E\big[Q_t^{\Pi_n}(B) - Q_s^{\Pi_n}(B) \mid \sigma_s(\mathcal{S})\big] = E\big[B_t^2 \mid \sigma_s(\mathcal{S})\big] - B_s^2.$$
To this end, if $t_i \le s \le t < t_{i+1}$, then $i(t) = i(s) = i$, and thus:
$$Q_t^{\Pi_n}(B) - Q_s^{\Pi_n}(B) = (B_t - B_{t_i})^2 - (B_s - B_{t_i})^2 = B_t^2 - B_s^2 - 2B_{t_i}(B_t - B_s).$$
Since $B_{t_i}$ and $B_s$ are $\sigma_s(\mathcal{S})$-measurable, the measurability property of conditional expectations (proposition 5.26, book 6) and definition 2.7 obtain:
$$E\big[Q_t^{\Pi_n}(B) - Q_s^{\Pi_n}(B) \mid \sigma_s(\mathcal{S})\big] = E\big[B_t^2 \mid \sigma_s(\mathcal{S})\big] - B_s^2 E[1 \mid \sigma_s(\mathcal{S})] - 2B_{t_i}E\big[B_t - B_s \mid \sigma_s(\mathcal{S})\big] = E\big[B_t^2 \mid \sigma_s(\mathcal{S})\big] - B_s^2.$$
It is left to prove as an exercise that the same result is obtained if $t_i \le s < t_{i+1} \le t$, or if $t_i \le s < t_{i+1}$ and $t_j \le t < t_{j+1}$ where $j > i + 1$.

Thus by (1) and (2), the proof of 2.16 and thus 2.17 will be complete if it can be shown that as $\mu_n \to 0$:
$$E\Big[\big|Q_t^{\Pi_n}(B) - t\big|^2\Big] \to 0. \quad (3)$$

To this end, let the partition $\Pi_n$ of $[0,t]$ be given with $t_0 = 0$ and $t_{N_n} = t$, and define $X_i \equiv \big(B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\big)^2 - (t_i - t_{i-1})$. Now $\{X_i\}$ are independent by definitional independence of $\{B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\}_{i=1}^{N_n}$ and proposition 3.56 of book 2. Also:
$$\sum_{i=1}^{N_n} X_i = Q_t^{\Pi_n}(B) - t,$$
and:
$$\Big(Q_t^{\Pi_n}(B) - t\Big)^2 = \sum_{i=1}^{N_n} X_i^2 + 2\sum_{i<j} X_i X_j.$$
Thus since $B_{t_i}(\omega) - B_{t_{i-1}}(\omega) \sim N(0,\, t_i - t_{i-1})$ yields $E[X_i] = 0$, independence of $\{X_i\}$ obtains:
$$E\Big[\big(Q_t^{\Pi_n}(B) - t\big)^2\Big] = \sum_{i=1}^{N_n} E\big[X_i^2\big].$$
Now:
$$E\big[X_i^2\big] = E\Big[\big(B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\big)^4\Big] - 2(t_i - t_{i-1})E\Big[\big(B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\big)^2\Big] + (t_i - t_{i-1})^2.$$
Since
$$E\Big[\big(B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\big)^2\Big] = t_i - t_{i-1}$$
and
$$E\Big[\big(B_{t_i}(\omega) - B_{t_{i-1}}(\omega)\big)^4\Big] = 3(t_i - t_{i-1})^2$$
by proposition 1.73 of book 7:
$$E\big[X_i^2\big] = 2(t_i - t_{i-1})^2.$$
Finally, if $t_i - t_{i-1} \le \mu_n$ for all $i$:
$$E\Big[\big(Q_t^{\Pi_n}(B) - t\big)^2\Big] = 2\sum_{i=1}^{N_n}(t_i - t_{i-1})^2 \le 2\mu_n t, \quad (4)$$
completing the proof of (3).

Finally, 2.18 is corollary 2.10.

The following corollary strengthens the result of book 7's proposition 2.91 for Brownian motion to one of uniform convergence.

Corollary 2.18 If $\sum_{n=1}^{\infty} \mu_n < \infty$, then $Q_s^{\Pi_n}(B) \to s$ uniformly over $[0,t]$ with probability 1.
Proof. By (1), (2) and (4) of the prior proof:
$$\Pr\Big[\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| > \epsilon\Big] \le 4E\Big[\big|Q_t^{\Pi_n}(B) - t\big|^2\Big]\big/\epsilon^2 \le 8\mu_n t/\epsilon^2,$$
and so $\sum_{n=1}^{\infty} \Pr\big[\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| > \epsilon\big]$ converges. By the Borel-Cantelli theorem of book 2's proposition 2.6:
$$\lambda\Big[\limsup_{n\to\infty}\big\{\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| > \epsilon\big\}\Big] = 0,$$
while taking complements this is seen to be equivalent to:
$$\lambda\Big[\liminf_{n\to\infty}\big\{\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| \le \epsilon\big\}\Big] = 1.$$
Hence for any $\epsilon > 0$, with probability 1 there exists $N$ so that $\sup_{s\le t}\big|Q_s^{\Pi_n}(B) - s\big| \le \epsilon$ for $n \ge N$.

Example 2.19 An example of the above corollary is produced by interval bisection. Given any initial partition $\{t_j\}_{j=0}^n$ with $0 = t_0 < t_1 < \dots < t_n = t$, define the refined partition $\{t_j'\}_{j=0}^{2n}$ so that $t_{2j}' = t_j$ and $t_{2j+1}' = (t_j + t_{j+1})/2$. Thus $\mu' = \mu/2$ for the corresponding mesh sizes. Hence if a sequence of partitions is defined iteratively by this bisection process, then $Q_s^{\Pi_n}(B) \to s$ uniformly over $[0,t]$ with probability 1.
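The bisection example lends itself to a quick numerical check. The following Python sketch (function names and the seed are choices for illustration, not from the text) samples one Brownian path on a fine grid and computes the quadratic variation over successively bisected partitions of $[0,1]$; the values settle near $T = 1$ as the mesh halves:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0
n_fine = 2**16                      # fine grid resolution for the sampled path
dt = T / n_fine
# One Brownian path sampled at the fine grid points, with B_0 = 0.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_fine))])

def quadratic_variation(path, step):
    """Sum of squared increments of the path over the sub-grid with the given step."""
    return np.sum(np.diff(path[::step]) ** 2)

# Interval bisection: each partition halves the mesh of the previous one.
for k in range(4, 9):
    step = n_fine // 2**k           # 2**k subintervals of [0, T]
    print(2**k, quadratic_variation(B, step))
```

On a typical run the printed values cluster around $T = 1$, with fluctuations that shrink with the mesh, consistent with the bound (4) of the preceding proof.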

2.5 Itô Integral of Simple Processes


The above proposition will provide the needed insight to a potential definition of an Itô integral:
$$\int_0^\infty v(s,\omega)\,dB_s(\omega),$$
for appropriate $v(s,\omega)$. When $v(s,\omega)$ is a simple process as in 2.12, the value of this integral is defined in 2.13 (simplifying notation by suppressing $(\omega)$):
$$\int_0^\infty \Big[a_{-1}\chi_{\{0\}}(s) + \sum_{j=0}^n a_j \chi_{(t_j,t_{j+1}]}(s)\Big] dB_s \equiv \sum_{j=0}^n a_j\big(B_{t_{j+1}} - B_{t_j}\big).$$
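As a concrete numerical illustration of this definition (a minimal sketch; the partition, the sampled path, and the adapted choice $a_j = B_{t_j}$ are assumptions for the example, not from the text), the integral of a simple process is just a finite sum of coefficient-weighted Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Partition 0 = t_0 < t_1 < ... < t_{n+1} and one Brownian path sampled at those points.
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dB = rng.normal(0.0, np.sqrt(np.diff(t)))        # increments B_{t_{j+1}} - B_{t_j}
B = np.concatenate([[0.0], np.cumsum(dB)])

a = B[:-1]          # a_j = B_{t_j}: known at the left endpoint t_j (adapted)

ito_sum = np.sum(a * dB)                         # sum_j a_j (B_{t_{j+1}} - B_{t_j})
print(ito_sum)
```

For this particular choice of $a_j$, the sum also satisfies the algebraic identity $\sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j}) = \big(B_{t_{n+1}}^2 - \sum_j (B_{t_{j+1}} - B_{t_j})^2\big)/2$, which foreshadows the role of quadratic variation below.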

For a generalization to other integrands, we seek to prove that if a sequence of simple processes $v_n(s,\omega)$ converges in some manner as $n \to \infty$ to $v(s,\omega)$:
$$a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s) \to v(s,\omega),$$
then the associated integrals also converge in some manner:
$$\sum_{j=0}^n a_j(\omega)\big(B_{t_{j+1}}(\omega) - B_{t_j}(\omega)\big) \to \int_0^\infty v(s,\omega)\,dB_s(\omega).$$

Remark 2.20 (On notation for partitions) As was often used in earlier books, we simplify notation in the above statement. Specifically, we do not mean to imply that the partition points $\{t_j\}_{j=0}^{n+1}$ are independent of $n$, nor are the coefficient random variables. In full notation, the above convergence of simple processes would be stated that as $n \to \infty$:
$$a_{-1}^{(n)}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^{N_n} a_j^{(n)}(\omega)\chi_{(t_j^{(n)},t_{j+1}^{(n)}]}(s) \to v(s,\omega).$$
It is hoped that the reader will agree that the simpler notation is preferred.

Now summations of $\big\{B_{t_j}(\omega) - B_{t_{j-1}}(\omega)\big\}$ do not have nice convergence properties as $n \to \infty$, as noted above and in book 7's proposition 2.87. However, summations of $\big\{\big(B_{t_j}(\omega) - B_{t_{j-1}}(\omega)\big)^2\big\}$ are far more manageable, as seen above, and provide the avenue for investigation that Itô developed. But to obtain sums of such terms from the integrals of simple processes we must work with the square of these integrals. Thus it is natural to consider the Itô integrals of simple processes and general $v(s,\omega)$ to be elements of an $L_2$-space (recall chapter 4, book 5). Based on a sample calculation of the associated $L_2$-norms of the Itô integral of a simple process, it becomes apparent that we should assume at least initially that such processes are elements of an $L_2$-space over $[0,t] \times \mathcal{S}$.

So we begin with an $L_2$-space of potential integrands $v(s,\omega)$, and then derive an important result on certain simple processes in this space. While the following definition does not explicitly make reference to the filtration noted, this will be important momentarily. For background on $L_2$ and related spaces, see chapter 4 of book 5.

Definition 2.21 ($L_2^0([0,t] \times \mathcal{S})$) Given a filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, let $L_2^0([0,t] \times \mathcal{S})$ denote the $L_2$-space of $\sigma(\mathcal{B}[0,t] \times \sigma(\mathcal{S}))$-measurable functions $v(s,\omega)$ defined on $[0,t] \times \mathcal{S}$ under the norm:
$$\|v(s,\omega)\|^2_{L_2^0([0,t]\times\mathcal{S})} \equiv E\Big[\int_0^t v^2(s,\omega)\,ds\Big] = \int_{\mathcal{S}}\int_0^t v^2(s,\omega)\,ds\,d\lambda, \quad (2.19)$$
where $ds$ denotes Lebesgue measure. We define $L_2^0([0,\infty) \times \mathcal{S})$ analogously.

Here $\sigma(\mathcal{B}[0,t] \times \sigma(\mathcal{S}))$ denotes the smallest sigma algebra that contains the measurable rectangles of $\mathcal{B}[0,t] \times \sigma(\mathcal{S})$, where $\mathcal{B}[0,t]$ denotes the Borel sigma algebra (chapter 7, book 1).

Example 2.22 If $v(s,\omega) \in L_2^0([0,t] \times \mathcal{S})$ is a simple process as in 2.12, then since the $(t_j,t_{j+1}]$-intervals are non-overlapping:
$$\|v(s,\omega)\|^2_{L_2^0([0,t]\times\mathcal{S})} = \int_{\mathcal{S}} \sum_{j=0}^n a_j^2(\omega)(t_{j+1} - t_j)\,d\lambda,$$
if $t_{n+1} \le t$. More generally as a function of $t$ this becomes:
$$\|v(s,\omega)\|^2_{L_2^0([0,t]\times\mathcal{S})} = \int_{\mathcal{S}} \sum_{j=0}^n a_j^2(\omega)(t \wedge t_{j+1} - t \wedge t_j)\,d\lambda, \quad (2.20)$$
where $t \wedge t_{j+1} \equiv \min\{t, t_{j+1}\}$.
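Formula 2.20 can be checked by Monte Carlo. The sketch below assumes a uniform partition and the adapted choice $a_j = B_{t_j}$ (neither from the text); since $E[B_{t_j}^2] = t_j$, the squared norm should approach $\sum_j t_j\,(t\wedge t_{j+1} - t\wedge t_j)$, which is $0.0625$ at $t = 0.5$ and $0.375$ at $t = 1$ for this grid:

```python
import numpy as np

rng = np.random.default_rng(4)

t_grid = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
n_paths = 100_000
dB = rng.normal(0.0, np.sqrt(np.diff(t_grid)), size=(n_paths, len(t_grid) - 1))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
a = B[:, :-1]                              # a_j = B_{t_j}, one row per path

def norm_sq(t):
    """Monte Carlo estimate of 2.20: E[ sum_j a_j^2 (t ∧ t_{j+1} - t ∧ t_j) ]."""
    w = np.minimum(t, t_grid[1:]) - np.minimum(t, t_grid[:-1])
    return np.mean(np.sum(a ** 2 * w, axis=1))

print(norm_sq(0.5), norm_sq(1.0))
```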

Remark 2.23 (On $L_2^0([0,t]\times\mathcal{S})$) Although this norm is expressed as an iterated integral, this space is equivalent to the $L_2$-spaces of book 5's chapter 4, and in particular equivalent to
$$L_2([0,t]\times\mathcal{S}) \equiv L_2\big([0,t]\times\mathcal{S},\ \sigma(\mathcal{B}[0,t]\times\sigma(\mathcal{S})),\ m \times \lambda\big),$$
with $\mathcal{B}[0,t]$ the Borel sigma algebra. The product measure space $([0,t]\times\mathcal{S}, \sigma(\mathcal{B}[0,t]\times\sigma(\mathcal{S})), m\times\lambda)$ is derived in book 1's chapter 7, but here the product sigma algebra is defined as the smallest sigma algebra that contains the algebra $\mathcal{A}$ generated by the measurable rectangles from $\mathcal{B}[0,t]\times\sigma(\mathcal{S})$. In that book's proposition 7.20 this sigma algebra is denoted $\sigma(\mathcal{A})$, while in chapter 5 of book 5 this was also denoted $\sigma_0(\mathcal{B}[0,t]\times\sigma(\mathcal{S}))$.

The space $L_2([0,t]\times\mathcal{S})$ is defined as the collection of measurable functions $v(s,\omega)$ so that:
$$\|v(s,\omega)\|^2_{L_2([0,t]\times\mathcal{S})} \equiv \iint_{[0,t]\times\mathcal{S}} v^2(s,\omega)\,d(m\times\lambda) < \infty.$$
Since both $\mathcal{B}[0,t]$ and $\sigma(\mathcal{S})$ are finite measure spaces (when $t < \infty$), Fubini's theorem of book 5's corollary 5.20 applies. Thus, if $v(s,\omega) \in L_2([0,t]\times\mathcal{S})$ then $v(s,\omega) \in L_2^0([0,t]\times\mathcal{S})$ and the norms agree. When $t = \infty$ this version of Fubini's theorem remains valid, and as noted in the earlier book, Billingsley (1995) is a reference for this more general result.

Conversely, since $v^2(s,\omega)$ is nonnegative, Tonelli's theorem assures that if measurable $v(s,\omega) \in L_2^0([0,t]\times\mathcal{S})$ then $v(s,\omega) \in L_2([0,t]\times\mathcal{S})$ and the norms agree. Book 5's proposition 5.22 states Tonelli's theorem in the context of complete and sigma finite component measure spaces, but as was proved for Fubini's theorem, this result is also true without completeness, again referencing Billingsley (1995).

Remark 2.24 (On equivalence classes) Recalling remark 4.13 of book 5 and the paragraph preceding that remark, while it is common to refer to $v(s,\omega) \in L_2$ as a measurable function, it is in fact an equivalence class of measurable functions. Specifically,
$$v(s,\omega) \equiv \{v'(s,\omega) \mid v(s,\omega) = v'(s,\omega),\ m\times\lambda\text{-a.e.}\}.$$
This convention is needed to make $\|v(s,\omega)\|_{L_2}$ a norm (definition 4.3, book 5), since $\|v(s,\omega)\|_{L_2} = 0$ only implies that $v(s,\omega) = 0$, $m\times\lambda$-a.e.

To begin our investigation of this approach, we investigate the $L_2(\mathcal{S})$-norm of the Itô integral of a simple process. For this, the next result connects adaptedness and measurability of a simple process to the measurability of the $a_j(\omega)$-terms. The importance of the measurability assumption on $a_j(\omega)$ was already illustrated in example 2.14 above. In addition, adaptedness and measurability will be the key measurability properties of processes $v(s,\omega) \in H_2([0,\infty)\times\mathcal{S})$ of definition 2.31 below.

Lemma 2.25 (Adaptedness of simple processes) With $0 = t_0 < t_1 < \dots < t_{n+1}$, the simple process:
$$v(s,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s), \quad (2.21)$$
is an adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ if and only if $a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable and each $a_j(\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable. In this case, $v(s,\omega)$ is also measurable.
Proof. If $a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable then $v(0,\omega) = a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable by definition. For $s > 0$, if $t_j < s \le t_{j+1}$ then $v(s,\omega) = a_j(\omega)$, and so $v(s,\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable and thus $\sigma_s(\mathcal{S})$-measurable. Finally $v(s,\omega) = 0$ if $t_{n+1} < s$, and so $v(s,\omega)$ is adapted. Conversely, if $v(s,\omega)$ is adapted then by definition $v(s,\omega)$ is $\sigma_s(\mathcal{S})$-measurable for all $s$. In particular, $v(0,\omega) = a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable. If $s > 0$ and $t_j < s \le t_{j+1}$ then $v(s,\omega) = a_j(\omega)$ as above. So $a_j(\omega)$ is $\sigma_s(\mathcal{S})$-measurable for all $s$ with $t_j < s \le t_{j+1}$, and by right continuity of the filtration $a_j(\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable.

Since $v(s,\omega)$ is left continuous, it is measurable when adapted by book 7's proposition 5.19.

Exercise 2.26 Check that the collection of adapted simple processes given in 2.21 forms a vector space over $\mathbb{R}$ (definition 4.34, book 3), and that if $0 \le r < r'$ and $v(s,\omega)$ is an adapted simple process, then so too is $\chi_{(r,r']}(s)v(s,\omega)$.

Proposition 2.27 (Integral properties for simple processes) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. Given $0 = t_0 < t_1 < \dots < t_{n+1}$, let $v(s,\omega) \in L_2^0([0,t_{n+1}]\times\mathcal{S})$ be an adapted simple process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ as in 2.21. Then:

1. For all $t$:
$$\int_0^t v(s,\omega)\,dB_s(\omega) \in L_2(\mathcal{S}) \equiv L_2(\mathcal{S},\sigma(\mathcal{S}),\lambda),$$

and we have the Itô isometry:
$$\Big\|\int_0^t v(s,\omega)\,dB_s(\omega)\Big\|_{L_2(\mathcal{S})} = \|v(s,\omega)\|_{L_2^0((0,t]\times\mathcal{S})}. \quad (2.22)$$

2. Mean equal to zero: For all $t$:
$$E\Big[\int_0^t v(s,\omega)\,dB_s(\omega)\Big] = 0. \quad (2.23)$$

3. The process
$$V_t(\omega) \equiv \int_0^t v(s,\omega)\,dB_s(\omega)$$
is continuous $\lambda$-a.e., and an $L_2(\mathcal{S})$-martingale relative to $\{\sigma_t(\mathcal{S})\}$, meaning that $V_t(\omega)$ is a martingale and for all $t$:
$$\|V_t(\omega)\|_{L_2(\mathcal{S})} < \infty.$$

Proof. The Itô integral of $v(s,\omega)$ is defined in 2.15:
$$\int_0^t v(s,\omega)\,dB_s(\omega) = \sum_{j=0}^n a_j(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big). \quad (2.24)$$
From this representation we obtain that this integral is $\sigma(\mathcal{S})$-measurable. To prove that $\int_0^t v(s,\omega)\,dB_s(\omega) \in L_2(\mathcal{S})$ we evaluate the square of its $L_2$-norm:
$$\Big\|\int_0^t v(s,\omega)\,dB_s(\omega)\Big\|^2_{L_2(\mathcal{S})} = \int_{\mathcal{S}}\Big[\sum_{j=0}^n a_j(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)\Big]^2 d\lambda$$
$$= \sum_{j=0}^n E\Big[a_j^2(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)^2\Big] + 2\sum_{i<j} E\Big[a_i(\omega)a_j(\omega)\big(B_{t\wedge t_{i+1}}(\omega) - B_{t\wedge t_i}(\omega)\big)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)\Big].$$
Since $a_j(\omega)$ is measurable relative to $\sigma_{t_j}(\mathcal{S})$, the tower and measurability properties of conditional expectations from book 6's proposition 5.26 obtain:
$$\sum_{j=0}^n E\Big[a_j^2(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)^2\Big] = \sum_{j=0}^n E\Big[a_j^2(\omega)E\Big[\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)^2 \,\Big|\, \sigma_{t_j}(\mathcal{S})\Big]\Big].$$

If $t > t_j$, then by the independence criterion of 2.9 and proposition 2.8:
$$E\Big[\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)^2 \,\Big|\, \sigma_{t_j}(\mathcal{S})\Big] = t\wedge t_{j+1} - t_j.$$
Thus since the intervals $\{(t\wedge t_j,\, t\wedge t_{j+1}]\}$ are non-overlapping:
$$\sum_{j=0}^n E\Big[a_j^2(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)^2\Big] = \sum_{j=0}^n E\big[a_j^2(\omega)(t\wedge t_{j+1} - t\wedge t_j)\big] = E\Big[\int_0^t v^2(s,\omega)\,ds\Big] \equiv \|v(s,\omega)\|^2_{L_2^0([0,t]\times\mathcal{S})}.$$
Similarly, if $t > t_j > t_i$ then $a_i(\omega)a_j(\omega)\big(B_{t\wedge t_{i+1}}(\omega) - B_{t\wedge t_i}(\omega)\big)$ is measurable relative to $\sigma_{t_j}(\mathcal{S})$, and by the same steps:
$$E\Big[a_i(\omega)a_j(\omega)\big(B_{t\wedge t_{i+1}}(\omega) - B_{t\wedge t_i}(\omega)\big)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)\Big] = E\Big[a_i(\omega)a_j(\omega)\big(B_{t\wedge t_{i+1}}(\omega) - B_{t\wedge t_i}(\omega)\big)E\big[B_{t\wedge t_{j+1}}(\omega) - B_{t_j}(\omega) \mid \sigma_{t_j}(\mathcal{S})\big]\Big] = 0.$$
Combining obtains 2.22.


The expectation formula in 2.23 follows by the tower and measurability properties (proposition 5.26, book 6) and proposition 2.8:
$$E\Big[\int_0^t v(s,\omega)\,dB_s(\omega)\Big] = \sum_{j=0}^n E\Big[a_j(\omega)\big(B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega)\big)\Big] = \sum_{j=0}^n E\Big[a_j(\omega)E\big[B_{t\wedge t_{j+1}}(\omega) - B_{t\wedge t_j}(\omega) \mid \sigma_{t_j}(\mathcal{S})\big]\Big] = 0.$$
Pathwise continuity of this integral $\lambda$-a.e. follows from 2.24 and the $\lambda$-a.e. continuity of Brownian motion. The same observation proves $V_t(\omega)$ is measurable relative to the sigma algebra $\sigma_t(\mathcal{S})$. Applying Jensen's inequality of book 4's proposition 3.39:
$$\big(E[|V_t(\omega)|]\big)^2 \le E\big[V_t^2(\omega)\big] = \|v(s,\omega)\|^2_{L_2^0([0,t]\times\mathcal{S})},$$
and thus $V_t(\omega)$ is integrable for all $t$ by 2.22. For the martingale property, if $t > s$ then linearity of conditional expectations and the measurability property yields, simplifying notation:
$$E[V_t \mid \sigma_s] = E[V_s \mid \sigma_s] + E[V_t - V_s \mid \sigma_s] = V_s + E[V_t - V_s \mid \sigma_s].$$

If $s = 0$ then $V_0 = 0$, and so:
$$E[V_t - V_0 \mid \sigma_0] = \sum_{j=0}^n E\big[a_j\big(B_{t\wedge t_{j+1}} - B_{t\wedge t_j}\big) \mid \sigma_0\big].$$
Applying the above properties of conditional expectations to any term of the sum obtains:
$$E\big[a_j\big(B_{t\wedge t_{j+1}} - B_{t\wedge t_j}\big) \mid \sigma_0\big] = E\big[a_j E\big[B_{t\wedge t_{j+1}} - B_{t\wedge t_j} \mid \sigma_{t_j}\big] \mid \sigma_0\big] = 0.$$
This follows by proposition 2.8 if $t \ge t_j$:
$$E\big[B_{t\wedge t_{j+1}} - B_{t\wedge t_j} \mid \sigma_{t_j}\big] = E\big[B_{t\wedge t_{j+1}} - B_{t_j} \mid \sigma_{t_j}\big] = 0,$$
while if $t < t_j$ then again by this proposition:
$$E\big[B_{t\wedge t_{j+1}} - B_{t\wedge t_j} \mid \sigma_{t_j}\big] = E\big[E\big[B_{t\wedge t_{j+1}} - B_t \mid \sigma_t\big] \mid \sigma_{t_j}\big] = 0.$$
If $s \in (t_k, t_{k+1}]$:
$$V_t(\omega) - V_s(\omega) = \sum_{j=0}^n a_j\big(B_{t\wedge t_{j+1}} - B_{t\wedge t_j}\big) - \sum_{j=0}^n a_j\big(B_{s\wedge t_{j+1}} - B_{s\wedge t_j}\big) = a_k\big(B_{t\wedge t_{k+1}} - B_s\big) + \sum_{j=k+1}^n a_j\big(B_{t\wedge t_{j+1}} - B_{t\wedge t_j}\big).$$
That $E[V_t - V_s \mid \sigma_s] = 0$ follows by considering the various terms. For the first, $a_k$ is $\sigma_{t_k}$-measurable and thus $\sigma_s$-measurable, so by proposition 2.8:
$$E\big[a_k\big(B_{t\wedge t_{k+1}} - B_s\big) \mid \sigma_s\big] = a_k E\big[B_{t\wedge t_{k+1}} - B_s \mid \sigma_s\big] = 0.$$
For the other terms, as above:
$$E\big[a_j\big(B_{t\wedge t_{j+1}} - B_{t\wedge t_j}\big) \mid \sigma_s\big] = E\big[a_j E\big[B_{t\wedge t_{j+1}} - B_{t\wedge t_j} \mid \sigma_{t_j}\big] \mid \sigma_s\big],$$
and this is $0$ as above by considering $t \ge t_j$ and $t < t_j$.

Thus $V_t$ is a continuous martingale, and is an $L_2(\mathcal{S})$-martingale by 2.22, and the proof is complete.

Remark 2.28 (Isometry) In mathematics, an isometry is a transformation between metric spaces:
$$T : (X_1, d_1) \to (X_2, d_2),$$
that preserves distances:
$$d_2(Tx, Ty) = d_1(x, y).$$
We have not yet fully identified these metric spaces, but at this point it can be appreciated that the Itô integral $\int_0^t \cdot\,dB_s(\omega)$ will be the transformation $T$ of interest. Once identified, it will be seen that the identity in 2.22 generalizes to these spaces. That this identity assures the preservation of distances comes from the observation that in a normed space, an $L_2$-space for example, we can define a distance function $d(\cdot,\cdot)$ by:
$$d(f,g) \equiv \|f - g\|. \quad (2.25)$$
See section 3.2 of Reitano (2010), for example, for a discussion on normed and metric spaces.
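The isometry 2.22 and the zero-mean property 2.23 can be verified numerically for a particular simple process. The sketch below assumes a uniform partition of $[0,1]$ and the adapted coefficients $a_j = B_{t_j}$ (choices for illustration, not from the text), and compares the sample mean of $V_1^2$ against the sample mean of $\sum_j a_j^2\,(t_{j+1}-t_j)$:

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
n_paths = 200_000

# Brownian increments over the partition, one row per sample path.
dB = rng.normal(0.0, np.sqrt(np.diff(t)), size=(n_paths, len(t) - 1))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

a = B[:, :-1]                    # a_j = B_{t_j}: adapted coefficients
V = np.sum(a * dB, axis=1)       # Itô integral of the simple process, per path

lhs = np.mean(V ** 2)                                  # estimate of ||∫ v dB||^2 in L2(S)
rhs = np.mean(np.sum(a ** 2 * np.diff(t), axis=1))     # estimate of ||v||^2 in L2^0
print(np.mean(V), lhs, rhs)
```

The sample mean of $V$ is near $0$ (property 2.23), and the two squared-norm estimates agree to within Monte Carlo error (the isometry 2.22); for this integrand both equal $\sum_j t_j/4 = 0.375$ in expectation.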

2.6 H_2([0,∞) × S) and Simple Process Approximations
The Itô integral is named for Kiyoshi Itô (1915-2008), as noted above. The Itô isometry is the key result needed to generalize the definition of $\int_0^t v(s,\omega)\,dB_s(\omega)$ and related integrals from adapted simple processes to more general square integrable functions $v(s,\omega)$, where square integrability is defined as in 2.19.

The big idea behind Itô's approach to defining this integral is now easy to describe informally, with details to follow.

1. Given appropriately restricted $v(s,\omega) \in L_2^0([0,t]\times\mathcal{S})$, find a sequence of adapted simple processes $\{v_n(s,\omega)\}_{n=1}^\infty \subset L_2^0([0,t]\times\mathcal{S})$ as in 2.21 so that
$$\|v_n(s,\omega) - v(s,\omega)\|_{L_2^0([0,t]\times\mathcal{S})} \to 0.$$

2. It then follows that $\{v_n(s,\omega)\}_{n=1}^\infty$ is a Cauchy sequence in $L_2^0([0,t]\times\mathcal{S})$. That is, given any $\epsilon > 0$, there is an $N$ so that for $n, m \ge N$:
$$\|v_n(s,\omega) - v_m(s,\omega)\|_{L_2^0([0,t]\times\mathcal{S})} < \epsilon.$$

3. By the Itô isometry,
$$\Big\{\int_0^t v_n(s,\omega)\,dB_s(\omega)\Big\}_{n=1}^\infty$$
is then a Cauchy sequence in $L_2(\mathcal{S})$.

4. Since $L_2(\mathcal{S})$ is complete, there exists a function $V_t(\omega) \in L_2(\mathcal{S})$ so that:
$$\Big\|\int_0^t v_n(s,\omega)\,dB_s(\omega) - V_t(\omega)\Big\|_{L_2(\mathcal{S})} \to 0.$$

5. Define the Itô integral:
$$\int_0^t v(s,\omega)\,dB_s(\omega) \equiv V_t(\omega). \quad (2.26)$$

6. Prove that this process is well-defined in the sense that if $\{v_n'(s,\omega)\}_{n=1}^\infty \subset L_2^0([0,t]\times\mathcal{S})$ have the same properties as $\{v_n(s,\omega)\}_{n=1}^\infty$, the resulting function $V_t'(\omega)$ satisfies $V_t'(\omega) = V_t(\omega)$, $\lambda$-a.e.
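The convergence in steps 1-5 can be previewed numerically for the integrand $v(s,\omega) = B_s(\omega)$, a choice made here purely for illustration. The closed form $\int_0^T B_s\,dB_s = (B_T^2 - T)/2$ anticipates the Itô calculus of later chapters and is not derived here; the sketch simply shows left-endpoint simple-process Itô sums settling toward it as the partition refines:

```python
import numpy as np

rng = np.random.default_rng(3)

T = 1.0
n_fine = 2**14
dt = T / n_fine
# One Brownian path sampled on a fine grid; coarser partitions are sub-grids.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_fine))])

def simple_ito(path, step):
    """Itô sum of the left-endpoint simple approximation of v(s) = B_s."""
    sub = path[::step]
    return np.sum(sub[:-1] * np.diff(sub))

limit = (B[-1] ** 2 - T) / 2        # closed form of the Itô integral over [0, T]
for k in [4, 6, 8, 10]:
    print(2**k, simple_ito(B, n_fine // 2**k), limit)
```

The residual at the finest partition is driven by the gap between the path's sampled quadratic variation and $T$, tying this experiment back to proposition 2.17.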

Remark 2.29 (The big idea's big challenge) Note that the big challenge in Itô's approach is characterizing the functions $v(s,\omega)$ that have simple process approximations as required in step 1. Once done, steps 2-5 will unambiguously produce a definition for $\int_0^t v(s,\omega)\,dB_s(\omega)$ for all $t$ as a random variable defined on $(\mathcal{S},\sigma(\mathcal{S}),\lambda)$, which is square integrable and hence an element of $L_2(\mathcal{S})$. Well-definedness must then also be addressed.

We identify the candidate function space in definition 2.31 below, and prove the existence of simple process approximations in proposition 2.37 below. The details are then assembled in proposition 2.40.

Beneath this big challenge is the most basic question: what are the appropriate measurability properties for $v(s,\omega)$ to assure success in step 1? Are all $\sigma(\mathcal{B}[0,t]\times\sigma(\mathcal{S}))$-measurable functions $v(s,\omega)$ so approximable? The derivation of the Itô isometry depended on a very specific kind of measurability for simple processes, that such processes were adapted, and so by lemma 2.25 each $a_j(\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable. This detailed type of measurability is far beyond that normally imposed on the space $L_2^0([0,t]\times\mathcal{S})$. As developed in book 5, the standard measurability requirement and that used above would generally be defined relative to the product space sigma algebra,
$$\sigma[\mathcal{B}([0,t])\times\sigma(\mathcal{S})],$$
defined in remark 2.23. This is then enough measurability to define an integral on $[0,t]\times\mathcal{S}$.

But to make Itô's idea work, we will need to define an $L_2$-space on $[0,\infty)\times\mathcal{S}$ that has a more detailed form of measurability. First, taking our cue from lemma 2.25, we will require that $v(t,\omega)$ be measurable with respect to $\sigma_t(\mathcal{S})$ for all $t$, and also that such functions can be approximated in an $L_2$-norm by the adapted simple processes of 2.21. In addition to these technical requirements for the Itô isometry, we will also be interested in studying the integral $V_t(\omega)$ as a stochastic process that evolves in $t$, so again the measurability of $v(t,\omega)$ relative to $\sigma_t(\mathcal{S})$ will be critical for this analysis.

Notation 2.30 Because of the extra measurability requirements that will be imposed, it is conventional to change the notation for this space of functions from $L_2^0([0,\infty)\times\mathcal{S})$ to the often-used notation of $H_2([0,\infty)\times\mathcal{S})$. This denotes that this is a space of square integrable functions, but for which the measurability assumption is now also defined relative to a filtration $\{\sigma_t(\mathcal{S})\}_{u.c.}$.

Note that this is not merely a change in notation. Because of the additional measurability requirements, we cannot assume that $H_2([0,\infty)\times\mathcal{S})$ is complete, as discussed below.

For the work below it will be assumed that there exists a Brownian motion $B_t$ defined on the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, and thus by definition $\sigma_t(B) \subset \sigma_t(\mathcal{S})$, where $\{\sigma_t(B)\}$ is the natural filtration associated with $B_t$ (definition 5.4, book 7). However, recalling definition 2.7, we will continue to assume 2.9 in the development of this integral, that $B_t - B_s$ is independent of $\sigma_s(\mathcal{S})$ for all $t > s \ge 0$.

Definition 2.31 ($H_2([0,\infty)\times\mathcal{S})$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, and Lebesgue measure $m$ on $\mathcal{B}([0,\infty))$, the space:
$$H_2([0,\infty)\times\mathcal{S}) \equiv H_2\big([0,\infty)\times\mathcal{S},\ \sigma[\mathcal{B}([0,\infty))\times\sigma(\mathcal{S})],\ m\times\lambda\big),$$
is the collection of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times\mathcal{S}$ so that:

1. $v(t,\omega)$ is measurable on the product space $\big([0,\infty)\times\mathcal{S},\ \sigma[\mathcal{B}([0,\infty))\times\sigma(\mathcal{S})]\big)$, with sigma algebra as in remark 2.23;

2. $v(t,\omega)$ is adapted to the filtration $\{\sigma_t(\mathcal{S})\}$. In other words, for each $t$, $v(t,\cdot)$ is measurable as a function from $(\mathcal{S},\sigma_t(\mathcal{S}))$ to $(\mathbb{R},\mathcal{B}(\mathbb{R}))$;

3. $v(t,\omega)$ satisfies:
$$\|v(s,\omega)\|^2_{H_2([0,\infty)\times\mathcal{S})} \equiv \int_{\mathcal{S}}\int_0^\infty v^2(s,\omega)\,ds\,d\lambda < \infty. \quad (2.27)$$

When needed, we define $H_2([0,T]\times\mathcal{S})$ analogously, for fixed $T < \infty$. That is, $v(t,\omega) \in H_2([0,T]\times\mathcal{S})$ if $v(t,\omega)\chi_{[0,T]}(t) \in H_2([0,\infty)\times\mathcal{S})$.

Remark 2.32 (On norms) Note that:
$$\|v(s,\omega)\|^2_{H_2([0,\infty)\times\mathcal{S})} \equiv E\Big[\int_0^\infty v^2(s,\omega)\,ds\Big] = \|v(s,\omega)\|^2_{L_2^0([0,\infty)\times\mathcal{S})}. \quad (2.28)$$
In other words, 2.27 reflects a norm on $H_2([0,\infty)\times\mathcal{S})$ that is identical to the norm on $L_2^0([0,\infty)\times\mathcal{S})$. What differs between these spaces is the assumption on measurability of $v(s,\omega)$, that $H_2([0,\infty)\times\mathcal{S})$ requires adaptedness in addition to measurability. Thus:
$$H_2([0,\infty)\times\mathcal{S}) \subset L_2^0([0,\infty)\times\mathcal{S}).$$
While the norms are identical, we use the notation $\|v(s,\omega)\|^2_{H_2([0,\infty)\times\mathcal{S})}$ or $\|v(s,\omega)\|^2_{L_2^0([0,\infty)\times\mathcal{S})}$ to simply identify the space to which $v(s,\omega)$ belongs.

Notation 2.33 If $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$, it is sometimes convenient to define for $0 \le t < t' \le \infty$:
$$\|v(s,\omega)\|_{H_2([t,t']\times\mathcal{S})} \equiv \big\|\chi_{[t,t']}(s)v(s,\omega)\big\|_{H_2([0,\infty)\times\mathcal{S})}, \quad (2.29)$$
and similarly for $(t,t']$, etc.

Remark 2.34 (On measurability and simple processes in $H_2([0,\infty)\times\mathcal{S})$) Given:
$$0 = t_0 < t_1 < \dots < t_{n+1} < \infty,$$
if $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$ is a simple process:
$$v(s,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s),$$
then since $v(s,\omega)$ must be adapted, $a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable and $a_j(\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable for all $j$ by lemma 2.25.

Conversely, if $v(t,\omega)$ is a simple process as in 2.21, with $a_{-1}(\omega)$ $\sigma_0(\mathcal{S})$-measurable and $a_j(\omega)$ $\sigma_{t_j}(\mathcal{S})$-measurable, then $v(t,\omega)$ is adapted to the filtration $\{\sigma_t(\mathcal{S})\}$. Since left continuous, proposition 5.19 of book 7 assures that $v(s,\omega)$ is progressively measurable and hence measurable. Thus $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$ if 2.27 is satisfied. In other words, the adapted simple processes defined in 2.21 are members of $H_2([0,\infty)\times\mathcal{S})$ if square integrable.

Remark 2.35 (On completeness of $H_2([0,\infty)\times\mathcal{S})$) Assume that $\{v_n(t,\omega)\} \subset H_2([0,\infty)\times\mathcal{S})$ and there exists a function $v(t,\omega)$, measurable with respect to $\sigma[\mathcal{B}([0,\infty))\times\sigma(\mathcal{S})]$, so that:
$$E\Big[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2 ds\Big] \to 0. \quad (*)$$
Is it then the case that $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$?

Since $(\mathcal{S},\sigma(\mathcal{S}))$ is finite under $\lambda$ and $[0,\infty)$ is sigma-finite under $m$, the general form of Tonelli's theorem (proposition 5.22, book 5) applies to $([0,\infty)\times\mathcal{S}, \sigma[\mathcal{B}([0,\infty))\times\sigma(\mathcal{S})], m\times\lambda)$ as discussed in remark 2.23, without the assumption of component space completeness. Thus $(*)$ implies that:
$$\|v(s,\omega) - v_n(s,\omega)\|_{L_2} \to 0$$
in $L_2([0,\infty)\times\mathcal{S})$, defined relative to this product space. Since complete, this implies that $v(t,\omega) \in L_2([0,\infty)\times\mathcal{S})$.

Hence $v(t,\omega)$ satisfies 1 and 3 of definition 2.31, and the question of membership in $H_2([0,\infty)\times\mathcal{S})$ relates to requirement 2 and whether $v(t,\omega)$ must be adapted to the filtration $\{\sigma_t(\mathcal{S})\}$. To investigate, we reverse the iterated integrals in $(*)$ by Tonelli's theorem to obtain:
$$\int_0^\infty \int_{\mathcal{S}} [v(s,\omega) - v_n(s,\omega)]^2\, d\lambda\, ds \to 0.$$
This then implies that for $m$-a.e. $s$:
$$\int_{\mathcal{S}} [v(s,\omega) - v_n(s,\omega)]^2\, d\lambda \to 0.$$
That is, $v_n(s,\omega) \to v(s,\omega)$ in $L_2(\mathcal{S},\sigma_s(\mathcal{S}),\lambda)$ for $s$ outside a set of Lebesgue measure 0.

To see this, note that $v_n(s,\omega)$ is $\sigma_s(\mathcal{S})$-measurable for all $s$ by requirement 2. For any $s$ outside the above set of measure 0, $\{v_n(s,\omega)\}$ is then a Cauchy sequence in $L_2(\mathcal{S},\sigma_s(\mathcal{S}),\lambda)$, and since this space is complete, there exists $\tilde{v}_s(\omega) \in L_2(\mathcal{S},\sigma_s(\mathcal{S}),\lambda)$ with $v_n(s,\omega) \to_{L_2} \tilde{v}_s(\omega)$. But then $\tilde{v}_s(\omega) = v(s,\omega)$ $\lambda$-a.e., and we conclude by the completeness of $\sigma_s(\mathcal{S})$ that $v(s,\omega) \in L_2(\mathcal{S},\sigma_s(\mathcal{S}),\lambda)$ for almost all $s$.

In summary, we can generally only conclude that $v(t,\omega)$ is adapted to the filtration $\{\sigma_t(\mathcal{S})\}$ for almost all $t$, and hence $H_2([0,\infty)\times\mathcal{S})$ is not quite complete. See remark 2.43 for a continuation of this discussion, and the section M2-Integrators and $H_2^M([0,\infty)\times\mathcal{S})$-Integrands, where $H_2([0,\infty)\times\mathcal{S})$ is generalized to a space which is complete.

Despite this lack of completeness, it is the case that if $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$, there exists a sequence of simple functions $\{v_n(t,\omega)\} \subset H_2([0,\infty)\times\mathcal{S})$ so that $v_n(t,\omega) \to v(t,\omega)$ in $H_2([0,\infty)\times\mathcal{S})$. One step of this proof is relatively easy. Given such $v(t,\omega)$ and partitions of $[0,\infty)$ explicitly defined by $0 = t_0^{(n)} < t_1^{(n)} < \dots < t_{N_n+1}^{(n)}$ with $t_{N_n}^{(n)} \to \infty$ and $\max\{t_{j+1}^{(n)} - t_j^{(n)}\} \to 0$, define:
$$v_n(t,\omega) \equiv v(0,\omega)\chi_{\{0\}}(t) + \sum_{j=0}^{N_n} v(t_j^{(n)},\omega)\chi_{(t_j^{(n)},t_{j+1}^{(n)}]}(t).$$
Then each $v_n(t,\omega)$ is adapted to $\sigma_t(\mathcal{S})$ for all $t$ by the adaptedness of $v(t,\omega)$, and since left continuous, is also measurable by proposition 5.19 of book 7. The challenge is to prove $H_2$-convergence.

Example 2.36 If $v(t,\omega) \equiv 0$ for rational $t$, then for any partition based on rational points we would have $v_n(t,\omega) \equiv 0$ for all $t$, and no $H_2$-convergence could be expected. However, if $v(t,\omega)$ is continuous in $t$, the success of an approximation would not be dependent on the particular partition sequence.
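The left-endpoint construction above is easy to sketch numerically. In the code below (function names are mine, and a deterministic continuous integrand stands in for a path, with $\omega$ suppressed), $v_n(t) = v(t_j)$ for $t \in (t_j, t_{j+1}]$, and the squared $L_2$-error shrinks as the partition refines:

```python
import numpy as np

def left_endpoint_simple(v, t_grid):
    """Build v_n(t) = v(t_j) for t in (t_j, t_{j+1}], from v at the grid points."""
    values = v(t_grid[:-1])
    def v_n(t):
        j = np.searchsorted(t_grid, t, side="left") - 1
        j = np.clip(j, 0, len(values) - 1)
        return values[j]
    return v_n

v = np.sin                                   # a continuous integrand, for illustration
t_grid = np.linspace(0.0, np.pi, 101)        # 100 subintervals of [0, pi]
v_n = left_endpoint_simple(v, t_grid)

s = np.linspace(0.0, np.pi, 10_001)
err2 = np.mean((v(s) - v_n(s)) ** 2) * np.pi   # ≈ squared L2([0, pi])-error
print(err2)
```

Rebuilding `v_n` on a bisected grid roughly quarters `err2`, in line with the first-order accuracy of left-endpoint sampling for continuous integrands.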

The next proposition implements this approximation result in 3 steps, the third of which will utilize the above approximation.

Proposition 2.37 (Simple process approximations in $H_2([0,\infty)\times\mathcal{S})$) If $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$, then there exists a sequence of simple processes $\{v_n(t,\omega)\} \subset H_2([0,\infty)\times\mathcal{S})$ as in 2.21 so that:
$$v_n(t,\omega) \to_{H_2([0,\infty)\times\mathcal{S})} v(t,\omega).$$
That is:
$$E\Big[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2 ds\Big] \to 0. \quad (2.30)$$
Proof. We implement the proof in 3 steps, showing that general $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$ can be $H_2$-approximated by bounded $v(t,\omega)$; that bounded $v(t,\omega)$ can be $H_2$-approximated by bounded $v(t,\omega)$ that is continuous in $t$ for all $\omega$; and that such bounded and continuous $v(t,\omega)$ can be $H_2$-approximated by simple processes. Step 2 is the most challenging.

1. Given $v(t,\omega)$, define:
$$v_n(t,\omega) = \begin{cases} -n, & v(t,\omega) < -n, \\ v(t,\omega), & -n \le v(t,\omega) \le n, \\ n, & v(t,\omega) > n. \end{cases}$$
Then $v_n(t,\omega) \to v(t,\omega)$ pointwise in $(t,\omega)$, and since
$$E\Big[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2 ds\Big] \le E\Big[\int_0^\infty v^2(s,\omega)\,ds\Big] < \infty,$$
Lebesgue's dominated convergence theorem (proposition 2.43, book 5) can be applied to derive 2.30 with bounded $v_n(s,\omega)$. Measurability of $v_n(t,\omega)$ relative to 1 and 2 of definition 2.31 is left as an exercise.

2. Given bounded $v(t,\omega)$, say $|v(t,\omega)| \le N$, we next define bounded $\{v_n(t,\omega)\}$, continuous in $t$ for each $\omega$, such that 2.30 is satisfied. Let $\varphi(x)$ be defined by:
$$\varphi(x) = \begin{cases} x + 2, & -2 \le x \le -1, \\ -x, & -1 \le x \le 0, \\ 0, & \text{otherwise}, \end{cases}$$
and let $\varphi_n(x) \equiv n\varphi(nx)$. Note that $\varphi_n(x)$ is piecewise linear, supported on $[-2/n, 0]$, has a maximum value of $n$ at $x = -1/n$, and integrates to 1. Now define:
$$v_n(t,\omega) = \int_{t-2/n}^t v(s,\omega)\varphi_n(s-t)\,ds.$$
Then $|v_n(t,\omega)| \le N$ since $\int \varphi_n(x)\,dx = 1$.
Also, $v_n(t,\omega)$ is continuous in $t$ for each $\omega$. To see this:
$$v_n(t,\omega) - v_n(t',\omega) = \int v(s,\omega)\big[\chi_1(s)\varphi_n(s-t) - \chi_2(s)\varphi_n(s-t')\big]\,ds,$$
where $\chi_1$ is the characteristic function of $[t-2/n, t]$ and $\chi_2$ the characteristic function of $[t'-2/n, t']$. Thus:
$$\big|v_n(t,\omega) - v_n(t',\omega)\big| \le N\int \big|\chi_1(s)\varphi_n(s-t) - \chi_2(s)\varphi_n(s-t')\big|\,ds.$$
Now $|\chi_1(s)\varphi_n(s-t) - \chi_2(s)\varphi_n(s-t')| \to 0$ pointwise as $t \to t'$, and thus by the bounded convergence theorem (proposition 2.46, book 5) we obtain that $v_n(t,\omega) \to v_n(t',\omega)$, so $v_n(t,\omega)$ is continuous in $t$ for each $\omega$.
Adaptedness of $v_n(t,\omega)$ with respect to $\{\sigma_t(\mathcal{S})\}$ is proved as follows. Create a partition of the integration interval $[t-2/n, t]$ into $m$ subintervals, and define:
$$v_n^{(m)}(t,\omega) = \sum_{i=1}^m v(s_i,\omega)\varphi_n(s_i - t)\,\Delta s,$$
where $\Delta s = 2/mn$, and $s_i \in I_i \equiv [t - 2/n + (i-1)\Delta s,\ t - 2/n + i\Delta s]$. Since all $s_i \le t$, $v_n^{(m)}(t,\omega)$ is measurable relative to $\sigma_t(\mathcal{S})$. Then because $v(s_i,\omega)\varphi_n(s_i-t)\Delta s = \int_{I_i} v(s_i,\omega)\varphi_n(s_i-t)\,ds$ and $\varphi_n \ge 0$:
$$\big|v_n^{(m)}(t,\omega) - v_n(t,\omega)\big| \le \sum_{i=1}^m \int_{I_i} \big|v(s,\omega)\varphi_n(s-t) - v(s_i,\omega)\varphi_n(s_i-t)\big|\,ds \le \sum_{i=1}^m \int_{I_i} 2N\big|\varphi_n(s-t) - \varphi_n(s_i-t)\big|\,ds \le (4N/n)\max_i \sup_{I_i}\big|\varphi_n(s-t) - \varphi_n(s_i-t)\big|.$$
Now $\varphi_n$ is uniformly continuous on $[t-2/n, t]$, so it follows that $\max_i \sup_{I_i}|\varphi_n(s-t) - \varphi_n(s_i-t)| \to 0$ as $m \to \infty$, and $v_n^{(m)}(t,\omega) \to v_n(t,\omega)$ pointwise in $\omega$. Thus $v_n(t,\omega)$ is also measurable relative to $\sigma_t(\mathcal{S})$ by book 5's corollary 1.10. Since adapted and continuous, measurability of $v_n(t,\omega)$ now follows from book 7's proposition 5.19.
Finally, we show that 2.30 is satisfied. Since $\int \varphi_n(x)\,dx = 1$:
$$E\Big[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2 ds\Big] = \int_{\mathcal{S}}\int_0^\infty \Big[\int_{s-2/n}^s [v(s,\omega) - v(r,\omega)]\varphi_n(r-s)\,dr\Big]^2 ds\,d\lambda.$$
Substituting $t = n(r-s)$ into the $dr$-integral and applying Hölder's inequality of book 4's proposition 3.46:
$$E\Big[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2 ds\Big] = \int_{\mathcal{S}}\int_0^\infty \Big[\int_{-2}^0 |v(s,\omega) - v(s+t/n,\omega)|\varphi(t)\,dt\Big]^2 ds\,d\lambda \le c\int_{\mathcal{S}}\int_0^\infty\int_{-2}^0 [v(s,\omega) - v(s+t/n,\omega)]^2\,dt\,ds\,d\lambda,$$
where $c = \int_{-2}^0 \varphi^2(t)\,dt$. Now for each $(s,t,\omega)$, the integrand converges to 0 as $n \to \infty$ by continuity of $v(s,\omega)$. Further,
$$[v(s,\omega) - v(s+t/n,\omega)]^2 \le 2v^2(s,\omega) + 2v^2(s+t/n,\omega).$$
To see that this upper bound is $dt\,ds\,d\lambda$-integrable, first note:
$$\int_{\mathcal{S}}\int_0^\infty\int_{-2}^0 v^2(s,\omega)\,dt\,ds\,d\lambda \le 2\int_{\mathcal{S}}\int_0^\infty v^2(s,\omega)\,ds\,d\lambda.$$

For the second term we prove that the dsd dt-integral is …nite, and
then apply Tonelli’s theorem as noted in remark 2.23, since the inte-
grand is nonnegative. Using the substitution r = s + t=n in the inner
dr-integral:
Z 0Z Z 1 Z 0Z Z 1
2
v (s + t=n; !)dsd dt = v 2 (r; !)drd dt
2 0 2 t=n
Z 0 Z 1
E v 2 (r; !)dr dt
2 0
Z 1
= 2E v 2 (r; !)dr :
0

Thus Lebesgue’s dominated convergence theorem of book 5’s proposi-


tion 2.43 proves 2.30.
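The mollification step above is easy to see in action. The sketch below is an illustration only, with invented ingredients: a ramp density $\varphi$ on $[-2,0]$ standing in for the mollifier, and the test integrand $v(s)=\sin s$ (extended by $0$ for $s<0$). It forms $v_n(t) = \int_{t-2/n}^t v(r)\varphi_n(r-t)\,dr$ with $\varphi_n(x)=n\varphi(nx)$ and checks that the $L_2$ error over $[0,5]$ shrinks as $n$ grows, as 2.30 requires.

```python
import numpy as np

def phi(x):
    # A density on [-2, 0] integrating to 1 (an assumed mollifier for the demo).
    return np.where((x >= -2) & (x <= 0), (2.0 + x) / 2.0, 0.0)

def v(s):
    # Bounded integrand, continuous in s, extended by 0 for s < 0.
    return np.where(s >= 0.0, np.sin(s), 0.0)

t_grid = np.arange(0.0, 5.0, 0.01) + 0.005   # midpoints on [0, 5]
errs = []
for n in [4, 16, 64]:
    m = 400
    du = (2.0 / n) / m
    u = -2.0 / n + (np.arange(m) + 0.5) * du   # offsets in [-2/n, 0]
    w = n * phi(n * u) * du
    w = w / w.sum()                            # normalize: weights average v near t
    # v_n(t) = int_{t-2/n}^t v(r) phi_n(r - t) dr, a weighted average over [t-2/n, t]
    vn = np.array([np.dot(v(t + u), w) for t in t_grid])
    errs.append(np.sqrt(np.sum((v(t_grid) - vn) ** 2) * 0.01))
```

Each $v_n$ only looks backward from $t$, which is what makes the smoothed process adapted; the shrinking window drives the $L_2$ error to zero.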

3. Given bounded $v(t,\omega)$ that is continuous in $t$ for each $\omega$, we define bounded simple functions $v_n(t,\omega)$ which satisfy 2.30. The construction in the paragraph preceding this proposition proposed $v_n(t,\omega)$ that is adapted, left continuous and measurable. To prove 2.30, let $\epsilon > 0$ be given and choose $M$ so that:
$$E\left[\int_M^\infty v^2(r,\omega)\,dr\right] < \epsilon.$$
This is possible by 2.27. Now given $n$, let $t_j^{(n)} = jM/n$ and define:
$$v_n(t,\omega) \equiv \sum_{j=0}^{n-1} v(t_j^{(n)},\omega)\,\chi_{(t_j^{(n)},\,t_{j+1}^{(n)}]}(t).$$
Then:
$$E\left[\int_0^\infty [v(s,\omega)-v_n(s,\omega)]^2\, ds\right] \le E\left[\int_0^M [v(s,\omega)-v_n(s,\omega)]^2\, ds\right] + \epsilon,$$
and it is left as an exercise to show that the expectation converges to $0$.
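The grid construction in part 3 is easy to visualize pathwise: on $[0,M]$, $v_n$ freezes $v$ at the left endpoint of each subinterval of length $M/n$. The sketch below is an illustration only; the integrand $v(t)=\cos(t)e^{-t}$ is an invented stand-in for a bounded continuous path. It checks that $\int_0^M (v-v_n)^2\,dt \to 0$ as the mesh shrinks, the deterministic core of the exercise left to the reader.

```python
import numpy as np

M = 4.0

def v(t):
    # A bounded integrand, continuous in t (assumed for the demo).
    return np.cos(t) * np.exp(-t)

t = np.arange(0.0, M, 0.001) + 0.0005    # midpoints of a fine grid on [0, M]
sq_errs = []
for n in [10, 40, 160]:
    # v_n(t) = v(t_j) on (t_j, t_{j+1}], with t_j = j*M/n the left endpoints
    tj = np.floor(t / (M / n)) * (M / n)
    vn = v(tj)
    sq_errs.append(np.sum((v(t) - vn) ** 2) * 0.001)
```

Freezing at left endpoints (rather than, say, midpoints) is what keeps each $v_n$ adapted: its value on $(t_j, t_{j+1}]$ is known at time $t_j$.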

Corollary 2.38 If $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ are simple processes as given in proposition 2.37 which satisfy 2.30, then for any $t,t'$ with $0\le t<t'\le\infty$:
$$\chi_{(t,t']}(s)v_n(s,\omega) \to_{H_2([0,\infty)\times\mathcal{S})} \chi_{(t,t']}(s)v(s,\omega).$$
That is:
$$E\left[\int_0^\infty\left[v(s,\omega)\chi_{(t,t']}(s) - v_n(s,\omega)\chi_{(t,t']}(s)\right]^2 ds\right]\to 0.\tag{2.31}$$
Proof. This follows from:
$$E\left[\int_0^\infty\left[v(s,\omega)\chi_{(t,t']}(s) - v_n(s,\omega)\chi_{(t,t']}(s)\right]^2 ds\right] = E\left[\int_0^\infty\chi_{(t,t']}(s)\left[v(s,\omega) - v_n(s,\omega)\right]^2 ds\right] \le E\left[\int_0^\infty\left[v(s,\omega) - v_n(s,\omega)\right]^2 ds\right].$$

Remark 2.39 By exercise 2.26, if $v_n(s,\omega)$ is an adapted simple process as in 2.21, then so too is $v_n(s,\omega)\chi_{(t,t']}(s)$ if $0\le t<t'\le\infty$. It is an exercise to check that if $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$, then again $v(s,\omega)\chi_{(t,t']}(s)\in H_2([0,\infty)\times\mathcal{S})$. Thus by proposition 2.37 there exists $\{\tilde{v}_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ so that $\tilde{v}_n(s,\omega)\to_{H_2([0,t]\times\mathcal{S})} v(s,\omega)\chi_{(t,t']}(s)$. The significance of corollary 2.38 is that we can take $\tilde{v}_n(s,\omega) = v_n(s,\omega)\chi_{(t,t']}(s)$, where $\{v_n(s,\omega)\}_{n=1}^\infty$ is the approximating sequence for $v(s,\omega)$ given in proposition 2.37.

2.7 The General Itô Integral


Given the simple process approximations in $H_2([0,\infty)\times\mathcal{S})$ of proposition 2.37, we are now able to implement the program set out at the beginning of the prior section to define $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ for all $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and $0\le t<t'\le\infty$.

Proposition 2.40 (Itô Integral on $H_2([0,\infty)\times\mathcal{S})$) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. If $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and $0\le t<t'\le\infty$, the integral:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega)$$
is well defined as the $L_2(\mathcal{S})$-limit of the integrals of $\{v_n(s,\omega)\chi_{(t,t']}(s)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$. That is:
$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega) - \int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\to 0,\tag{2.32}$$
where $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ are simple processes as defined in 2.21, and with integrals defined in 2.24.

By "well defined" is meant that for every such $t,t'$, $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ is defined uniquely $\mu$-a.e.
Proof. Given $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$, proposition 2.37 derives simple processes as in 2.21, $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$, so that 2.30 is satisfied. It then follows that $\{v_n(s,\omega)\}_{n=1}^\infty$ is a Cauchy sequence in $H_2([0,\infty)\times\mathcal{S})$. That is, given $\epsilon>0$ there is an $N$ so that for $n,m\ge N$:
$$\|v_n(s,\omega)-v_m(s,\omega)\|_{H_2([0,\infty)\times\mathcal{S})}<\epsilon.$$
This follows from the triangle inequality for a norm, with $H_2\equiv H_2([0,\infty)\times\mathcal{S})$:
$$\|v_n(s,\omega)-v_m(s,\omega)\|_{H_2}\le\|v_n(s,\omega)-v(s,\omega)\|_{H_2}+\|v(s,\omega)-v_m(s,\omega)\|_{H_2},\tag{*}$$
choosing $N$ so that for $n\ge N$:
$$\|v_n(s,\omega)-v(s,\omega)\|_{H_2([0,\infty)\times\mathcal{S})}<\epsilon/2.$$
Since $v_n(s,\omega)-v_m(s,\omega)$ is a simple process for all $n,m$ (exercise), and for such processes:
$$\int_0^\infty\left[v_n(s,\omega)-v_m(s,\omega)\right]dB_s(\omega)=\int_0^\infty v_n(s,\omega)\,dB_s(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega),$$
the Itô isometry in 2.22 obtains that:
$$\left\{\int_0^\infty v_n(s,\omega)\,dB_s(\omega)\right\}_{n=1}^\infty$$
is a Cauchy sequence in $L_2(\mathcal{S})$. Since $L_2(\mathcal{S})$ is complete, there exists a function $V_\infty(\omega)\in L_2(\mathcal{S})$ so that:
$$\left\|\int_0^\infty v_n(s,\omega)\,dB_s(\omega)-V_\infty(\omega)\right\|_{L_2(\mathcal{S})}\to 0.$$
We thus define the Itô integral:
$$\int_0^\infty v(s,\omega)\,dB_s(\omega)\equiv V_\infty(\omega).$$
To prove that this is well defined, let $\{v'_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ be another simple process sequence so that 2.30 is again satisfied, and repeat the above steps to obtain $V'_\infty(\omega)\in L_2(\mathcal{S})$. Then:
$$\left\|V'_\infty(\omega)-V_\infty(\omega)\right\|_{L_2(\mathcal{S})}\le\left\|\int_0^\infty v'_n(s,\omega)\,dB_s(\omega)-V'_\infty(\omega)\right\|_{L_2(\mathcal{S})}+\left\|V_\infty(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}+\left\|\int_0^\infty v'_n(s,\omega)\,dB_s(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}.$$
Now the first two expressions converge to $0$ by construction, and for the third, the Itô isometry obtains:
$$\left\|\int_0^\infty v'_n(s,\omega)\,dB_s(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}=\left\|v'_n(s,\omega)-v_m(s,\omega)\right\|_{H_2}.$$
This term also converges to $0$ by an expansion as in (*), and thus $V'_\infty(\omega)=V_\infty(\omega)$, $\mu$-a.e.

For the integral $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$, 2.31 obtains that with $\{v_n(s,\omega)\}_{n=1}^\infty$ as above, $\tilde{v}_n(s,\omega)\equiv v_n(s,\omega)\chi_{(t,t']}(s)$ is a sequence of simple processes as in 2.21 that form a Cauchy sequence in $H_2([0,\infty)\times\mathcal{S})$. Thus the above steps can be repeated to both define this integral, and prove uniqueness $\mu$-a.e.
Corollary 2.41 The integral $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ of proposition 2.40 is equivalently defined $\mu$-a.e. for all $t,t'$ with $0\le t<t'\le\infty$ by:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega).\tag{2.33}$$
Proof. With $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ an approximating sequence to $v(s,\omega)$ as in proposition 2.37 above, then $v_n(s,\omega)\chi_{(t,t']}(s)\in H_2([0,\infty)\times\mathcal{S})$ is an approximating sequence to $v(s,\omega)\chi_{(t,t']}(s)$ by corollary 2.38. Thus by 2.32, the integral on the right of 2.33 is defined by:
$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega)-\int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\to 0.$$
Comparing to 2.32, 2.33 is obtained.


Corollary 2.42 The integral $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ of proposition 2.40 is equivalently defined $\mu$-a.e. for all $t,t'$ with $0\le t<t'\le\infty$ by:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega) = \int_0^{t'} v(s,\omega)\,dB_s(\omega) - \int_0^{t} v(s,\omega)\,dB_s(\omega).\tag{2.34}$$
Proof. This follows from 2.33, since $\chi_{(t,t']}(s) = \chi_{(0,t']}(s) - \chi_{(0,t]}(s)$.

Remark 2.43 (On Completeness of $H_2$, cont'd) In remark 2.35 it was argued that if $\{v_n(t,\omega)\}\subset H_2([0,\infty)\times\mathcal{S})$ and there exists a function $v(t,\omega)$, measurable with respect to $\sigma\left[\mathcal{B}([0,\infty))\times\sigma(\mathcal{S})\right]$, so that:
$$E\left[\int_0^\infty [v(s,\omega)-v_n(s,\omega)]^2\, ds\right]\to 0,\tag{*}$$
then it need not be the case that $v(t,\omega)\in H_2$. In particular, we can generally only conclude that $v(t,\omega)$ is adapted to the filtration $\{\sigma_t(\mathcal{S})\}$ for almost all $t$, and thus $H_2([0,\infty)\times\mathcal{S})$ is not complete. But in this case, the Itô integral $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ is still well defined for all $0\le t<t'\le\infty$.

To see this, note that $\{v_n(s,\omega)\}_{n=1}^\infty$ is a Cauchy sequence in $H_2([0,\infty)\times\mathcal{S})$ as in the above proof, and $\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)$ is well defined for all $n$ by the above proposition. Thus by the Itô isometry, $\left\{\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right\}$ is a Cauchy sequence in $L_2(\mathcal{S})$, and there exists $V_{t,t'}(\omega)\in L_2(\mathcal{S})$ such that:
$$\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\to_{L_2(\mathcal{S})} V_{t,t'}(\omega).$$
Uniqueness $\mu$-a.e. follows as above.

Thus while $H_2([0,\infty)\times\mathcal{S})$ is not complete, the definition of the Itô integral can be extended to the sequential closure of this space, $\overline{H}_2([0,\infty)\times\mathcal{S})$, which is complete by construction. Thus:
$$H_2([0,\infty)\times\mathcal{S})\subsetneq \overline{H}_2([0,\infty)\times\mathcal{S}).$$
Example 2.44 ($\int_0^t B_s\,dB_s$) As an example of an explicit evaluation of an Itô integral, we prove that if $B_t$ is a Brownian motion defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, then for all $t<\infty$:
$$\int_0^t B_s\,dB_s = \left(B_t^2 - t\right)/2.\tag{2.35}$$
This integral is not well defined by the above proposition because $B_t(\omega)\notin H_2([0,\infty)\times\mathcal{S})$. Of course $B_t$ is adapted to the filtration $\sigma_t(\mathcal{S})$ by definition, and since also continuous, it is measurable by book 7's proposition 5.19. But by Tonelli's theorem (recall remark 2.23), since $E\left[B_s^2\right] = s$:
$$\int_{\mathcal{S}}\int_0^t B_s^2(\omega)\,ds\,d\mu = \int_0^t\int_{\mathcal{S}} B_s^2(\omega)\,d\mu\,ds = \int_0^t s\,ds = t^2/2,$$
which is not bounded in $t$. But we can relatively easily apply a work-around solution. Given $t$, choose $t<T<\infty$ and define $v(s,\omega) = \chi_{(0,T]}(s)B_s(\omega)$. Then $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and:
$$\int_0^t v(s,\omega)\,dB_s = \int_0^t \chi_{(0,T]}(s)B_s\,dB_s = \int_0^t B_s\,dB_s.$$
Returning to the original notation, let $0 = t_0 < t_1 < \cdots < t_{n+1} = t$ and define:
$$v_n(s,\omega)\equiv\sum_{j=0}^{n} B_{t_j}(\omega)\chi_{(t_j,t_{j+1}]}(s).$$
Then as noted in remark 2.34, $v_n(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ if square integrable in the sense of 2.27. We leave the verification of square integrability as an exercise. Also, $v_n(s,\omega)\to B_s(\omega)$ in $H_2([0,\infty)\times\mathcal{S})$ if $\sup\{|t_{j+1}-t_j|\}\to 0$. To see this, first note that $v_n(s,\omega) = B_{t_j}$ on $(t_j,t_{j+1}]$, and then by Tonelli's theorem:
$$E\left[\int_0^t (v_n(s,\omega)-B_s(\omega))^2\, ds\right] = \sum_{j=0}^{n} E\left[\int_{t_j}^{t_{j+1}}\left(B_{t_j}-B_s\right)^2 ds\right] = \sum_{j=0}^{n}\int_{t_j}^{t_{j+1}}(s-t_j)\,ds = \sum_{j=0}^{n}(t_{j+1}-t_j)^2/2\to 0.$$
Hence, $\int_0^t B_s\,dB_s$ is the limit in $L_2(\mathcal{S})$ of $\int_0^t v_n(s,\omega)\,dB_s(\omega)$, where by 2.24:
$$\int_0^t v_n(s,\omega)\,dB_s(\omega) = \sum_{j=0}^{n} B_{t_j}\left(B_{t_{j+1}}-B_{t_j}\right).$$
Thus 2.35 will be proved if it can be shown that:
$$E\left[B_t^2/2 - t/2 - \sum_{j=0}^{n} B_{t_j}\left(B_{t_{j+1}}-B_{t_j}\right)\right]^2\to 0.\tag{*}$$
To this end, note that since $B_0 = 0$ and $t_{n+1} = t$:
$$B_t^2 = \sum_{j=0}^{n}\left[B_{t_{j+1}}^2 - B_{t_j}^2\right],$$
and then from:
$$B_{t_{j+1}}^2 - B_{t_j}^2 = \left(B_{t_{j+1}}-B_{t_j}\right)^2 + 2B_{t_j}\left(B_{t_{j+1}}-B_{t_j}\right),$$
this obtains:
$$\sum_{j=0}^{n} B_{t_j}\left(B_{t_{j+1}}-B_{t_j}\right) = B_t^2/2 - \sum_{j=0}^{n}\left(B_{t_{j+1}}-B_{t_j}\right)^2/2.$$
Substituting into (*), 2.35 is proved if:
$$E\left[\sum_{j=0}^{n}\left(B_{t_{j+1}}-B_{t_j}\right)^2 - t\right]^2\to 0,$$
which is true by 2.16.
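The convergence in this example can be checked by simulation. The sketch below is an illustration only, with invented parameters: it builds Brownian paths on a uniform partition of $[0,1]$, forms the Itô sums $\sum_j B_{t_j}(B_{t_{j+1}}-B_{t_j})$, and compares them with the closed form $(B_t^2-t)/2$ of 2.35.

```python
import numpy as np

rng = np.random.default_rng(0)
t, n, paths = 1.0, 2000, 400
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))    # Brownian increments
B = np.cumsum(dB, axis=1)                              # B at t_1, ..., t_{n+1} = t
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])  # B at t_0, ..., t_n
ito_sum = np.sum(B_left * dB, axis=1)                  # sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j})
closed_form = (B[:, -1] ** 2 - t) / 2.0
rmse = np.sqrt(np.mean((ito_sum - closed_form) ** 2))
```

Note that evaluating $B$ at the left endpoint of each subinterval is essential; a right-endpoint or midpoint rule converges to a different (Stratonovich-type) limit, precisely because the quadratic variation term $\sum_j (B_{t_{j+1}}-B_{t_j})^2 \to t$ does not vanish.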

Remark 2.45 1. Compare 2.35 to the Doob-Meyer decomposition theorem 2 of book 7's proposition 6.12, which assured that:
$$B_t^2 = \langle B\rangle_t + B_t^0,$$
where $B_t^0$ is a continuous local martingale. Thus $\langle B\rangle_t = B_t^2 - B_t^0$. But $\langle B\rangle_t = t$ from 2.18, and so by uniqueness of $\langle B\rangle_t$ it follows that $B_t^0 = B_t^2 - t$. Alternatively, example 2.44 obtains:
$$B_t^0 = 2\int_0^t B_s\,dB_s.$$
Thus in this case, we have two explicit representations for the continuous local martingale $B_t^0$ assured to exist by this earlier theorem. This will be generalized in 3.54 of the next section, once $\int_0^t M_s\,dM_s$ is defined for a continuous locally bounded martingale $M_t$.
It should also be noted that the equivalent formulation of 2.35:
$$B_t^2 = t + 2\int_0^t B_s\,dB_s,$$
is a special case of Itô's lemma below.

2. From 2.35 we can conclude that for the special case of $v(s,\omega) = B_s(\omega)$, the Itô integral $V_t(\omega)\equiv\int_0^t B_s(\omega)\,dB_s(\omega)$ is in fact a continuous martingale with respect to the filtration $\{\sigma_t(\mathcal{S})\}$. It is apparently continuous $\mu$-a.e. by its explicit formula, and is a martingale by proposition 2.8.

2.8 Properties of the Itô Integral


For any $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and $0\le t<t'\le\infty$, the Itô integral $\int_t^{t'} v(s,\omega)\,dB_s(\omega)$ is now well defined by proposition 2.40 as a square integrable random variable:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega)\in L_2(\mathcal{S}).$$
Further, applying corollary 2.41 to 2.32, this random variable is the $L_2$-limit of random variables,
$$\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\in L_2(\mathcal{S}),$$
where $v_n(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$ are adapted simple processes as defined in 2.21, and with integrals defined in 2.15. Thus:
$$\left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)-\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\to 0.$$
As above, this $L_2$-limit is often denoted:
$$\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\to_{L_2(\mathcal{S})}\int_t^{t'} v(s,\omega)\,dB_s(\omega).\tag{2.36}$$
Convergence in $L_2(\mathcal{S})$ is a relatively strong criterion, as the following proposition illustrates.

Proposition 2.46 (Convergence in probability) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, and let $v(s,\omega),\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$. Given $0\le t<t'\le\infty$, then:
$$\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\to_{L_2(\mathcal{S})}\int_t^{t'} v(s,\omega)\,dB_s(\omega)$$
implies convergence in probability:
$$\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\to_{P}\int_t^{t'} v(s,\omega)\,dB_s(\omega).\tag{2.37}$$
Further, the sequence $\left\{\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right\}$ is uniformly integrable (definition 5.5, book 6).

Proof. For convergence in probability, Chebyshev's inequality of book 4's proposition 3.33 obtains that for any $\epsilon>0$:
$$\Pr\left[\left|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega) - \int_t^{t'} v(s,\omega)\,dB_s(\omega)\right| > \epsilon\right]\le\epsilon^{-2}\left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega) - \int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|^2_{L_2(\mathcal{S})}.$$
Hence this probability converges to zero for all $\epsilon$, which is 2.37.

For uniform integrability, note that because this sequence converges in $L_2(\mathcal{S})$:
$$\sup_n E\left[\left|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right|^2\right]<\infty.$$
This follows because:
$$\left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\le\left\|\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})} + \left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega) - \int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})},$$
and the last expression converges to $0$. Now denote $X_n\equiv\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)$ to simplify notation. Then as $N\to\infty$:
$$\sup_n\int_{|X_n|\ge N}|X_n|\,d\mu\le\frac{1}{N}\sup_n\int_{|X_n|\ge N}|X_n|^2\,d\mu\le\frac{1}{N}\sup_n E\left[|X_n|^2\right]\to 0.$$
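The Chebyshev step in this proof is mechanical but worth seeing concretely. The sketch below is an illustration only; the random variables are invented stand-ins for the Itô integrals. It checks that when $\|X_n - X\|_{L_2}\to 0$, the empirical probability $\Pr[|X_n - X|>\epsilon]$ stays below the Chebyshev bound $\epsilon^{-2}\|X_n - X\|^2_{L_2}$ and tends to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200000)
eps = 0.1
probs, bounds = [], []
for sigma in [1.0, 0.25, 0.0625]:              # L2 distances ||X_n - X|| tending to 0
    Xn = X + rng.normal(scale=sigma, size=X.size)
    probs.append(np.mean(np.abs(Xn - X) > eps))
    bounds.append(sigma ** 2 / eps ** 2)       # Chebyshev bound eps^{-2} E[(X_n - X)^2]
```

The bound is crude for large $\|X_n - X\|_{L_2}$ (it can exceed $1$), but it forces the tail probability to zero once the $L_2$ distance does, which is all the proof needs.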

The following proposition summarizes important properties of the Itô integral. All the results will be familiar and expected from previous results on simple processes and the integration theories of books 3 and 5, except the persistent presence of "for almost all $\omega\in\mathcal{S}$." This qualification is needed because, as noted in the statement of proposition 2.40, Itô integrals are well-defined only in the sense of $L_2$-convergence, and thus one has at best an almost everywhere specification on $\mathcal{S}$.
The apparent shortcoming concerning the omission of "continuous" in the statement of 5 will be resolved in the next section.

Proposition 2.47 (Properties of the Itô integral) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, and let $v(s,\omega), u(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ and $0\le t<t'\le\infty$.

1. For $r$ with $t<r<t'$, then for almost all $\omega\in\mathcal{S}$:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega) = \int_t^{r} v(s,\omega)\,dB_s(\omega) + \int_r^{t'} v(s,\omega)\,dB_s(\omega).$$

2. For constant $a\in\mathbb{R}$:
$$av(s,\omega)+u(s,\omega)\in H_2([0,\infty)\times\mathcal{S}),$$
and for almost all $\omega\in\mathcal{S}$:
$$\int_t^{t'}\left[av(s,\omega)+u(s,\omega)\right]dB_s(\omega) = a\int_t^{t'} v(s,\omega)\,dB_s(\omega) + \int_t^{t'} u(s,\omega)\,dB_s(\omega).$$

3. Mean equal to zero:
$$E\left[\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right] = 0.\tag{2.38}$$

4. The Itô Isometry:
$$\left\|\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})} = \|v(s,\omega)\|_{H_2([t,t']\times\mathcal{S})},\tag{2.39}$$
or with more explicit notation, after squaring both sides:
$$\int_{\mathcal{S}}\left[\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right]^2 d\mu = \int_{\mathcal{S}}\int_t^{t'} v^2(s,\omega)\,ds\,d\mu.$$

5. The process:
$$V_t(\omega)\equiv\int_0^t v(s,\omega)\,dB_s(\omega)$$
is a martingale relative to the filtration $\{\sigma_t(\mathcal{S})\}$.


Proof. Now 1 follows from 2 by 2.33:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega)\equiv\int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega)=\int_0^\infty\left[v(s,\omega)\chi_{(t,r]}(s)+v(s,\omega)\chi_{(r,t']}(s)\right]dB_s(\omega)=\int_t^{r} v(s,\omega)\,dB_s(\omega)+\int_r^{t'} v(s,\omega)\,dB_s(\omega).$$
Now 2 holds for all simple functions by 2.24 and 2.33. The general identity then follows since, given approximating sequences $v_n\to v$ and $u_n\to u$ in $H_2(\mathcal{S})$, it follows as an exercise that $av_n+u_n\to av+u$ in $H_2(\mathcal{S})$. Thus, simplifying notation:
$$\left\|\int_t^{t'}[av+u]\,dB_s(\omega)-a\int_t^{t'} v\,dB_s(\omega)-\int_t^{t'} u\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\le\left\|\int_t^{t'}[av+u]\,dB_s(\omega)-\int_t^{t'}[av_n+u_n]\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}+|a|\left\|\int_t^{t'} v\,dB_s(\omega)-\int_t^{t'} v_n\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}+\left\|\int_t^{t'} u\,dB_s(\omega)-\int_t^{t'} u_n\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}.$$
This upper bound converges to $0$ as $n\to\infty$ by proposition 2.40 and 2.33, and thus the expression in the first norm is $0$, $\mu$-a.e.

For 3, proposition 5.6 of book 6 obtains that:
$$E\left[\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right]\to E\left[\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right],$$
using the convergence in probability and uniform integrability results of proposition 2.46. Thus 2.38 follows from 2.23.

For Itô's isometry, recall 2.22 and remark 2.32, that for all $n$:
$$\left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}=\|v_n(s,\omega)\|_{H_2((t,t']\times\mathcal{S})}.$$
Taking limits as $n\to\infty$, the result follows since by book 5's proposition 4.20:
$$\left\|\int_t^{t'} v_n(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\to\left\|\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}$$
and
$$\|v_n(s,\omega)\|_{H_2((t,t']\times\mathcal{S})}\to\|v(s,\omega)\|_{H_2((t,t']\times\mathcal{S})}.$$
To demonstrate that $V_t(\omega)$ is a martingale relative to the filtration $\{\sigma_t(\mathcal{S})\}$ (definition 5.22, book 7), first note that $V_t(\omega)$ is the $L_2$-limit of $V_t^{(n)}(\omega)\equiv\int_0^t v_n(s,\omega)\,dB_s(\omega)$ for simple $v_n(s,\omega)$, for each $t$. In addition, $V_t^{(n)}(\omega)$ is a (continuous) martingale by proposition 2.27. Thus $V_t(\omega)$ is a martingale by book 7's proposition 5.32.

Remark 2.48 (Variance of Itô integral) Note that 3 and 4 above obtain an expression for the variance of the random variable defined by the Itô integral:
$$Var\left[\int_t^{t'} v(s,\omega)\,dB_s(\omega)\right] = \int_{\mathcal{S}}\int_t^{t'} v^2(s,\omega)\,ds\,d\mu.\tag{2.40}$$
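Properties 3 and 4, and the variance formula 2.40, can be sanity-checked by Monte Carlo. The sketch below is an illustration only; the deterministic integrand $v(s)=s$ is invented for the demo. It approximates $\int_0^1 s\,dB_s$ by its simple-process sums; 2.38 gives mean $0$, and 2.40 gives variance $\int_0^1 s^2\,ds = 1/3$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, paths = 500, 10000
dt = 1.0 / n
s_left = np.arange(n) * dt                          # left endpoints s_j of the partition
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))  # increments B_{s_{j+1}} - B_{s_j}
I = (s_left * dB).sum(axis=1)                       # sum_j v(s_j)(B_{s_{j+1}} - B_{s_j})
mean_I, var_I = I.mean(), I.var()
```

Here the isometry is exact even before taking limits: the discretized integral has variance $\sum_j s_j^2\,\Delta s$, a Riemann sum for $\int_0^1 s^2\,ds$.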

2.9 A Continuous Version of the Itô Integral


We now investigate the continuity and martingale properties of general Itô integrals. It may well have been noticed, when comparing proposition 2.47 to the corresponding proposition 2.27 for the Itô integral of simple processes, that it was not stated that $V_t(\omega)\equiv\int_0^t v(s,\omega)\,dB_s(\omega)$ is a continuous martingale. In example 2.44 above, we saw that with $v(s,\omega)=B_s(\omega)$:
$$\int_0^t B_s\,dB_s=B_t^2/2-t/2,$$
and in this case it is apparent that this integral is continuous in $t$ with probability $1$, since Brownian motion has this property. With the aid of Doob's martingale maximal inequality 1 in book 7's proposition 5.46, it is proved below that in the general case, there is a continuous version (definition 5.10, book 7) of the Itô integral with probability 1.

Specifically, we prove that $L_2$-convergence of integrals of simple processes as in 2.36:
$$\int_0^t v_n(s,\omega)\,dB_s(\omega)\to_{L_2(\mathcal{S})}\int_0^t v(s,\omega)\,dB_s(\omega),$$
implies the existence of a subsequence $\{v_{n_k}(s,\omega)\}$ so that for any compact interval $[0,T]$:
$$\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\ \text{converges uniformly in }t\in[0,T],$$
outside a set of $\mu$-measure zero.

Since the integrals of simple processes are continuous in $t$, uniform convergence over compact $[0,T]$ assures that this convergence is to a continuous function, which we denote $I_t(\omega)$. Then outside this set of $\mu$-measure zero:
$$\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to I_t(\omega)\ \text{uniformly in }t\in[0,T].$$
As a subsequence, it is then also true that for every $t\in[0,T]$:
$$\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to_{L_2(\mathcal{S})}\int_0^t v(s,\omega)\,dB_s(\omega).$$
Hence, for every $t$:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}$$
By definition 5.10 of book 7, $I_t(\omega)$ is then a version of the above defined Itô integral.
The implications of these $\mu$-a.e. statements are subtle, and we return to them in remark 2.51 once the proposition is proved.

Proposition 2.49 (A continuous version of $V_t(\omega)$) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ and $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$. Then there exists a function $I_t(\omega)$, continuous in $t\in[0,\infty)$ for all $\omega\in\mathcal{S}$, so that with Itô integrals defined in proposition 2.40:

1. If $v(s,\omega)$ is a simple process:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega)\ \text{for all }\omega,\text{ all }t.\tag{2.41}$$

2. For general $v(s,\omega)$, for each $t$:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}\tag{2.42}$$

In other words, $I_t(\omega)$ is a version or a modification (definition 5.10, book 7) of $V_t(\omega)$ of proposition 2.47.
Proof. Given a simple function $v(s,\omega)$, we define $I_t(\omega)$ for all $\omega$ by 2.41, and know that $I_t(\omega)$ is continuous in $t$ by proposition 2.27, and in fact a continuous martingale relative to $\{\sigma_t(\mathcal{S})\}$.
For general $v(s,\omega)$, fix the interval $[0,T]$, where here $T$ is a constant (i.e., not a stopping time). If $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ is an approximating sequence of simple processes as in proposition 2.37, then by corollary 2.38:
$$\|v_n(s,\omega)-v(s,\omega)\|_{H_2([0,T]\times\mathcal{S})}\to 0.$$
Define $I_n(t,\omega)$ on $[0,T]\times\mathcal{S}$ by:
$$I_n(t,\omega)=\int_0^t v_n(s,\omega)\,dB_s(\omega),$$
and thus by proposition 2.40:
$$I_n(t,\omega)\to_{L_2(\mathcal{S})}V_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega).$$
For each $n$, $I_n(t,\omega)$ is a continuous martingale relative to $\{\sigma_t(\mathcal{S})\}$ by proposition 2.27, and hence so too is $I_n(t,\omega)-I_m(t,\omega)$ for any $n,m$. Applying Doob's martingale maximal inequality 1 (proposition 5.46, book 7) with $p=2$, and then Itô's isometry in 2.39, obtains that for any $\epsilon>0$:
$$\Pr\left[\sup_{0\le t\le T}|I_n(t,\omega)-I_m(t,\omega)|\ge\epsilon\right]\le\frac{1}{\epsilon^2}E\left[|I_n(T,\omega)-I_m(T,\omega)|^2\right]=\frac{1}{\epsilon^2}E\left[\int_0^T|v_n(s,\omega)-v_m(s,\omega)|^2\,ds\right]\to 0$$
as $n,m\to\infty$. This follows because $\{v_n(s,\omega)\}_{n=1}^\infty$ is a Cauchy sequence in $H_2([0,T]\times\mathcal{S})$.

With $\epsilon=2^{-k}$, define an increasing sequence $\{n_k^{(T)}\}_{k=1}^\infty$ so that:
$$\Pr\left[\sup_{0\le t\le T}\left|I_{n_{k+1}^{(T)}}(t,\omega)-I_{n_k^{(T)}}(t,\omega)\right|\ge 2^{-k}\right]\le 2^{-k}.$$
Let $A_k^{(T)}\subset\mathcal{S}$ be defined by:
$$A_k^{(T)}=\left\{\omega\ \Big|\ \sup_{0\le t\le T}\left|I_{n_{k+1}^{(T)}}(t,\omega)-I_{n_k^{(T)}}(t,\omega)\right|\ge 2^{-k}\right\}.$$
Then $\sum_{k=1}^\infty\mu\left[A_k^{(T)}\right]\le 1$, and the Borel-Cantelli lemma of book 2's proposition 2.6 applies to yield that $\mu\left[\limsup A_k^{(T)}\right]=0$. Hence, for $\omega$ outside the set $\limsup A_k^{(T)}$ of measure zero, there are at most finitely many $k$ with:
$$\sup_{0\le t\le T}\left|I_{n_{k+1}^{(T)}}(t,\omega)-I_{n_k^{(T)}}(t,\omega)\right|\ge 2^{-k}.$$
Equivalently, $\{I_{n_k^{(T)}}(t,\omega)\}_{k=1}^\infty$ converges uniformly for $t\in[0,T]$ outside this set of measure zero, and hence converges to a continuous limit function we denote by $I_t(\omega)$. For $\omega\in\limsup A_k^{(T)}$, define $I_t(\omega)\equiv 0$, so $I_t(\omega)$ is now continuous in $t$ for all $\omega$.

To define $I_t(\omega)$ for $t\in[0,\infty)$, we perform the above construction for $T=1,2,\dots$, choosing $\{n_k^{(T+1)}\}_{k=1}^\infty\subset\{n_k^{(T)}\}_{k=1}^\infty$ with $n_1^{(T+1)}>n_1^{(T)}$. Now if we define $n_k\equiv n_k^{(k)}$, then outside a set of measure $0$, $\{I_{n_k}(t,\omega)\}_{k=1}^\infty$ converges uniformly for $t\in[0,T]$ for all $T$, and thus to continuous $I_t(\omega)$ defined for $t\in[0,\infty)$. On this exceptional set of measure $0$ we again define $I_t(\omega)\equiv 0$.

As a subsequence of the $L_2(\mathcal{S})$-convergent sequence $I_n(t,\omega)$, for each $t$:
$$I_{n_k}(t,\omega)\to_{L_2(\mathcal{S})}\int_0^t v(s,\omega)\,dB_s(\omega),$$
and so for each $t$:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}$$
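The Doob maximal inequality that drives this proof can itself be checked by simulation. The sketch below is an illustration only, with invented parameters: for the martingale $M_t = B_t$ on $[0,1]$ it estimates $\Pr[\sup_{0\le t\le T}|M_t|\ge\epsilon]$ on a discrete grid and compares it with the $p=2$ bound $E[M_T^2]/\epsilon^2 = T/\epsilon^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, paths, T = 500, 10000, 1.0
dB = rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
M = np.cumsum(dB, axis=1)                 # discretized Brownian martingale paths
sup_abs = np.max(np.abs(M), axis=1)       # sup over the grid (underestimates the true sup)
tail_1 = np.mean(sup_abs >= 1.0)          # estimate of Pr[sup |M| >= 1]
tail_2 = np.mean(sup_abs >= 2.0)          # estimate of Pr[sup |M| >= 2]
```

The bound controls the supremum of the whole path by the second moment at the terminal time alone; that uniform-in-$t$ control is exactly what converts $L_2$ estimates into the uniform convergence used with Borel-Cantelli above.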

Corollary 2.50 With $I_t(\omega)$ defined as above, the Itô isometry holds. That is, for $t\le\infty$:
$$\|I_t(\omega)\|^2_{L_2(\mathcal{S})}=\|v(s,\omega)\|^2_{H_2([0,t]\times\mathcal{S})}.\tag{2.43}$$
In addition, corollary 2.41 and properties 1-3 of proposition 2.47 remain valid.
Proof. For $t<\infty$ the isometry follows from 2.39 and 2.42, since for all $t$:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}$$
This identity then applies to $t=\infty$ since both expressions in 2.43 increase with $t$, and the expression on the right is bounded by definition.
Corollary 2.41 and properties 1-3 of proposition 2.47 follow similarly.
Remark 2.51 (On $\mu$-a.e. Convergence Results) As noted above, the implications of these $\mu$-a.e. statements are subtle and worth further exploration.

1. For every $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$ there exists a sequence of simple processes $\{v_n(t,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times\mathcal{S})$ so that as in 2.30:
$$\|v_n(s,\omega)-v(s,\omega)\|_{H_2([0,\infty)\times\mathcal{S})}\to 0.$$
As noted in remark 2.35 (On Completeness of $H_2([0,\infty)\times\mathcal{S})$), this implies convergence in the product measure, and thus:
$$v_n(s,\omega)\to v(s,\omega),\ (m\times\mu)\text{-a.e.},$$
meaning for almost all $(s,\omega)$ in this product measure.

2. Similarly, the existence of a function $V_t(\omega)\in L_2(\mathcal{S})$ and the convergence of the Itô integrals of simple processes:
$$\left\|\int_0^t v_n(s,\omega)\,dB_s(\omega)-V_t(\omega)\right\|_{L_2(\mathcal{S})}\to 0,$$
implies that:
$$\int_0^t v_n(s,\omega)\,dB_s(\omega)\to V_t(\omega),\ \mu\text{-a.e.}$$
In other words, for any $t$ the proposition 2.40 definition:
$$\int_0^t v(s,\omega)\,dB_s(\omega)\equiv V_t(\omega),$$
is only well-defined $\mu$-a.e. and can be arbitrarily redefined on sets of $\mu$-measure zero.
This statement applies for each of uncountably many $t$, so the collection of all such exceptional sets of $\mu$-measure zero cannot be readily characterized, and need not even be measurable. In other words, from this one cannot characterize a single set of $\mu$-measure zero, or of any $\mu$-measure, outside of which $\int_0^t v_n(s,\omega)\,dB_s(\omega)\to V_t(\omega)$ for all $t$.

3. The above proposition 2.49 states that there is a single exceptional set of $\mu$-measure zero, outside of which there is a function $I_t(\omega)$ that is continuous in $t$, with:
$$\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to I_t(\omega)\ \text{for all }t,$$
and this convergence is uniform in $t$ on compact intervals. The improved convergence is achieved by carefully selecting a subsequence $\{v_{n_k}(t,\omega)\}_{k=1}^\infty\subset\{v_n(t,\omega)\}_{n=1}^\infty$ of the original sequence, and effectively discarding the other terms that deteriorate convergence properties. On the single exceptional set of $\mu$-measure zero, we defined $I_t(\omega)\equiv 0$ for all $t$ as a convention, though any convention works equally well as long as $I_t(\omega)$ is continuous in $t$ for such $\omega$.

4. Now fix $t$ and consider how $I_t(\omega)$ and $V_t(\omega)$ compare. Since $\int_0^t v_n(s,\omega)\,dB_s(\omega)\to V_t(\omega)$, $\mu$-a.e., for each $t$, it follows for the constructed subsequence that similarly, $\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to V_t(\omega)$, $\mu$-a.e., for each $t$. But also, $\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to I_t(\omega)$, $\mu$-a.e., by proposition 2.49, and so for each $t$:
$$I_t(\omega)=V_t(\omega),\ \mu\text{-a.e.}$$
This then implies that:
$$\mu\left[\{\omega\,|\,I_t(\omega)=V_t(\omega)\ \text{for all rational }t\in[0,\infty)\}\right]=1.$$

5. It is then only natural to wonder what is the measure of the set $A$, defined by:
$$A=\{\omega\,|\,I_t(\omega)=V_t(\omega)\ \text{for all }t\in[0,\infty)\}.$$
Unfortunately, $A$ need not be measurable, since it is the intersection of uncountably many sets $\{A_t\}_{t\in[0,\infty)}$ defined by:
$$A_t=\{\omega\,|\,I_t(\omega)=V_t(\omega)\}.$$
While $\mu[A_t]=1$ for each $t$, we can say nothing about the measure (nor measurability) of $A=\bigcap_t A_t$. For continuous $I_t(\omega)$, values on rational $t$ characterize values everywhere, but this is not so for $V_t(\omega)$, and hence the dead end on the measure of the set $A$.
On the other hand, if $I_t(\omega)$ and $I'_t(\omega)$ are two continuous versions of $V_t(\omega)$ constructed by the above proposition, and:
$$C_t\equiv\{\omega\,|\,I_t(\omega)=I'_t(\omega)\},$$
then each $C_t$ has $\mu$-measure $1$. Thus:
$$C_{\mathbb{Q}}\equiv\{\omega\,|\,I_t(\omega)=I'_t(\omega),\ \text{all rational }t\}$$
has measure $1$ as above, and by continuity it follows that $I_t(\omega)=I'_t(\omega)$ for all $t$ on $C_{\mathbb{Q}}$.
In other words, any two continuous versions of $V_t(\omega)$ are indistinguishable in the terminology of definition 5.10 of book 7.

As a result of the prior proposition, we now have a potential redefinition of the Itô integral $\int_0^t v(s,\omega)\,dB_s(\omega)$ as $I_t(\omega)$, which is continuous in $t$ for all $\omega\in\mathcal{S}$. It is thus natural to investigate if this redefined integral is in fact a continuous martingale with respect to the filtration $\{\sigma_t(\mathcal{S})\}$. Now proposition 2.47 proved that this integral is a martingale when defined as $V_t(\omega)$. Since $V_t(\omega)$ can be modified to be continuous as discussed above, we provide the last result on this integral and prove that this continuous process is indeed a martingale.

Proposition 2.52 ($I_t(\omega)$ is a continuous martingale) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. For $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$, the continuous process $I_t(\omega)$ constructed in proposition 2.49 is a continuous $L_2$-bounded martingale with respect to the filtration $\{\sigma_t(\mathcal{S})\}$. Thus $I_\infty(\omega)\in L_2(\mathcal{S})$ is well defined and satisfies 2.43.
Proof. On the set of $\mu$-measure zero on which we defined $I_t(\omega)=0$ by the above proposition, this process is apparently a martingale, so we focus on the set of $\mu$-measure $1$ on which this process is defined as a uniform sequential limit. To simplify notation, let $\{v_n(t,\omega)\}_{n=1}^\infty$ be the defining subsequence for which, for any fixed constant $T$:
$$\int_0^t v_n(s,\omega)\,dB_s(\omega)\to I_t(\omega)$$
uniformly in $t\in[0,T]$. Then since for each $t$:
$$I_t(\omega)=V_t(\omega),\ \mu\text{-a.e.},$$
it follows that for each $t$:
$$\left\|I_t(\omega)-\int_0^t v_n(s,\omega)\,dB_s(\omega)\right\|_{L_2(\mathcal{S})}\to 0.$$
Now by 1 and 3 of proposition 2.27, $\int_0^t v_n(s,\omega)\,dB_s(\omega)$ is an $L_2$-martingale with respect to the filtration $\{\sigma_t(\mathcal{S})\}$, and thus by proposition 5.32 of book 7, $I_t(\omega)$ is an $L_2$-martingale with respect to this same filtration. Further, $I_t(\omega)$ is $L_2$-bounded by 2.43, and thus by book 7's proposition 5.116 there exists $I_\infty(\omega)\in L_2(\mathcal{S})$ with $I_t(\omega)\to_{L_2(\mathcal{S})}I_\infty(\omega)$. Then $I_\infty(\omega)$ satisfies 2.43 by proposition 4.20 of book 5, noting that $\|v(s,\omega)\|^2_{H_2([0,t]\times\mathcal{S})}\to\|v(s,\omega)\|^2_{H_2([0,\infty)\times\mathcal{S})}$ by 2.27.

Definition 2.53 (Final Itô Integral) Let $B_t(\omega)$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. For $v(t,\omega)\in H_2([0,\infty)\times\mathcal{S})$, define:
$$\int_0^t v(s,\omega)\,dB_s(\omega)\equiv I_t(\omega),\tag{2.44}$$
where $I_t(\omega)$ is constructed as in proposition 2.49.

Exercise 2.54 Prove that for $0\le t<t'\le\infty$, this final definition implies that:
$$\int_t^{t'} v(s,\omega)\,dB_s(\omega)=I_{t'}(\omega)-I_t(\omega).$$
Hint: 2.33.

2.10 Itô Integration via Riemann Sums


In this section we develop two approaches to the convergence of Riemann sums. The second approach will utilize the more general setting of the next chapter's integration theories with respect to continuous $L_2$-bounded martingales (definition 5.98, book 7) and continuous local martingales (definition 5.70, book 7). There it will be proved in propositions 3.55 and 3.98 that for appropriate integrands, Riemann sums converge in probability to the associated stochastic integrals, where Riemann sums are defined by path-independent partitions $\Pi_n$ of $[0,t]$:
$$0=t_0<t_1<\cdots<t_{n+1}=t,\ \text{with}\ \max_{1\le i\le n}\{t_{i+1}-t_i\}\to 0.$$
Both of these results are applicable in the current context because Brownian motion is a continuous martingale, which can be made into an $L_2$-bounded martingale when stopped with a fixed stopping time $T\le t'$, and is a continuous local martingale by book 7's corollary 5.85. However, the $L_2$-bounded results of proposition 3.55 provide the stronger conclusions for the Itô integral. Specifically, one obtains convergence in probability and in $L_2(\mathcal{S})$, and in fact uniform convergence as defined below. In addition, it then follows that there exists a subsequence of partitions $\Pi_{n_k}$ so that uniform convergence exists pointwise $\mu$-a.e.

For these path-independent results on Riemann sums, integrands will also be assumed to be left continuous ($\mu$-a.e.). When added to the adaptedness assumption for $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$, this also ensures that these integrands are predictable (corollary 5.17, book 7), as will be the ongoing assumption in the next chapter's integration theories.

Alternatively, if stopping times are used to define partitions, so that partitions are now path dependent with $\Pi_n\equiv\Pi_n(\omega)$ for $\omega\in\mathcal{S}$, it is possible to obtain uniform convergence pointwise $\mu$-a.e. with the given partitions, rather than only with a subsequence of partitions. This latter approach to pathwise evaluation of stochastic integrals originates with a 1981 paper of Klaus Bichteler (b. 1938), and we present his result here in the context of Itô integrals. It will be noticed for this result that integrands $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ are assumed to be right continuous ($\mu$-a.e.), and this is to ensure that hitting times for open sets are stopping times. In 4 of book 7's proposition 5.60 it is proved that such hitting times are stopping times for continuous processes, but the associated proof only utilized right continuity of the process. This will be clarified in a future edition, but the reader can verify this in the meantime.

For Bichteler's result, we need a technical result on the integral of a generalized simple process, whereby the process in 2.21 is modified to reflect intervals defined by stopping times. The proof is somewhat long, and it is common to see this result simply stated, as is the case in the original reference.

To start, if $v(s,\omega)\in H_2([0,\infty)\times\mathcal{S})$ is a generalized simple process as formally defined below by 2.45, then we can derive a few important conclusions.

1. Measurability of $v(s,\omega)$ as required by definition 2.31 is difficult to characterize directly. However, $v(s,\omega)$ is left continuous by construction, and thus $v(s,\omega)$ will be measurable by book 7's proposition 5.19 if it is adapted.

2. For $v(s,\omega)$ to be adapted, $a_{-1}(\omega)$ must be $\sigma_{T_0}(\mathcal{S})$-measurable and $a_j(\omega)$ must be $\sigma_{T_j}(\mathcal{S})$-measurable (definition 5.52, book 7) for all $j$. This follows because if $A\in\mathcal{B}(\mathbb{R})$, then for $s>0$:
$$v^{-1}(s,\cdot)(A)=\bigcup_{j=0}^\infty\left[a_j^{-1}(A)\bigcap\{T_j<s\}\bigcap\{s\le T_{j+1}\}\right].$$
The sets defined by $T_j$ and $T_{j+1}$ are $\sigma_s(\mathcal{S})$-measurable by 1 of proposition 5.60 of book 7 and the definition of a stopping time, respectively, and thus $v^{-1}(s,\cdot)(A)\in\sigma_s(\mathcal{S})$ if $a_j^{-1}(A)\in\sigma_{T_j}(\mathcal{S})$. Similarly, $v^{-1}(0,\cdot)(A)=a_{-1}^{-1}(A)$, and the result follows since $\sigma_{T_0}(\mathcal{S})=\sigma_0(\mathcal{S})$ (exercise 5.56, book 7). The reader may want to compare this result with lemma 2.25.

3. For the integrability constraint of 2.27, by disjointness of intervals:
$$v^2(s,\omega)=a_{-1}^2(\omega)\chi_{\{0\}}(s)+\sum_{j=0}^\infty a_j^2(\omega)\chi_{(T_j,T_{j+1}]}(s).$$
The $ds$-integral is then well defined pointwise:
$$\int_0^\infty v^2(s,\omega)\,ds=\sum_{j=0}^\infty a_j^2(\omega)\left[T_{j+1}(\omega)-T_j(\omega)\right],$$
and the integrability constraint of 2.27 assures that:
$$\|v(s,\omega)\|^2_{H_2([0,\infty)\times\mathcal{S})}=E\left[\sum_{j=0}^\infty a_j^2(\omega)\left[T_{j+1}(\omega)-T_j(\omega)\right]\right]<\infty.$$
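The pointwise identity in 3 is simple bookkeeping, and can be checked numerically for a single $\omega$. The sketch below is an illustration only; the stopping-time values $T_j$ and coefficients $a_j$ are invented random numbers standing in for one sample path. It compares a fine Riemann sum for $\int_0^{T_5} v^2(s,\omega)\,ds$ with $\sum_j a_j^2\,[T_{j+1}-T_j]$.

```python
import numpy as np

rng = np.random.default_rng(4)
T = np.sort(rng.uniform(0.0, 10.0, size=6))
T[0] = 0.0                                  # T_0 = 0 < T_1 < ... < T_5
a = rng.normal(size=5)                      # a_j, the value of v on (T_j, T_{j+1}]
m = 200000
ds = T[-1] / m
s = (np.arange(m) + 0.5) * ds               # midpoints filling (0, T_5]
j = np.searchsorted(T, s, side="left") - 1  # index j with s in (T_j, T_{j+1}]
riemann = np.sum(a[j] ** 2) * ds            # fine Riemann sum of v^2
closed = np.sum(a ** 2 * np.diff(T))        # sum_j a_j^2 [T_{j+1} - T_j]
```

The same bookkeeping, taken path by path and then in expectation, is what produces the $H_2$ norm formula displayed above.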

Proposition 2.55 (Itô integral of a generalized simple process) Given


(S; (S); t (S); )u:c: ; let fTi g1
i=0 be a sequence of stopping times so that -
a.e., T0 = 0; Ti < Ti+1 for all i; and Ti ! 1: If v(s; !) 2 H2 ([0; 1) S) is
de…ned by:
X1
v(s; !) = a 1 (!) f0g (s) + aj (!) (Tj ;Tj+1 ] (s); (2.45)
j=0

then -a.e.:
Z t X1
v(s; !)dBs (!) = aj (!) BTj+1 ^t (!) BTj ^t (!) ; for all t:
0 j=0
(2.46)
Proof. The …rst step for 2.46 is to prove that -a.e.:
Z t X1 Z t
v(s; !)dBs (!) = aj (!) (Tj ;Tj+1 ] (s)dBs (!); all t: ((1))
0 j=0 0

De…ne:
Xm
vm (s; !) = a 1 (!) f0g (s) + aj (!) (Tj ;Tj+1 ] (s);
j=0

then from 2.15 and corollary 2.50:


Z t Xm Z t
vm (s; !)dBs (!) = aj (!) (Tj ;Tj+1 ] (s)dBs (!): ((2))
0 j=0 0
68 CHAPTER 2 THE ITÔ INTEGRAL

To apply this proposition it should be con…rmed that aj (!) (Tj ;Tj+1 ] (s) 2
H2 ([0; 1) S); and this follows from the introductory comments to this
proposition.
Now if we prove that v_m(s,ω) → v(s,ω) in H₂([0,∞)×S), then by the Itô isometry of corollary 2.50 it will follow that for all t:

   ∫_0^t v_m(s,ω) dB_s(ω) →_{L₂(S)} ∫_0^t v(s,ω) dB_s(ω).

Corollary 4.17 of book 5 then obtains that for each t, there exists a subsequence m_k → ∞ so that:

   ∫_0^t v_{m_k}(s,ω) dB_s(ω) → ∫_0^t v(s,ω) dB_s(ω), μ-a.e.

But by (2), any such subsequence satisfies:

   ∫_0^t v_{m_k}(s,ω) dB_s(ω) → ∑_{j=0}^∞ a_j(ω) ∫_0^t χ_{(T_j,T_{j+1}]}(s) dB_s(ω), μ-a.e.

Thus (1) is valid μ-a.e. for each t, and then also valid μ-a.e. for all rational t. Thus (1) follows μ-a.e. by continuity of these integrals.

For the required H₂([0,∞)×S) result, first note that by disjointness of intervals:

   (v(s,ω) − v_m(s,ω))² = ∑_{j=m+1}^∞ a_j²(ω) χ_{(T_j,T_{j+1}]}(s).

Then as in the introductory comments:

   ‖v(s,ω) − v_m(s,ω)‖²_{H₂([0,∞)×S)} = E[ ∑_{j=m+1}^∞ a_j²(ω) (T_{j+1}(ω) − T_j(ω)) ],

and this can be made as small as desired since the full series converges as noted above.
With (1) established, 2.46 will follow from a proof that μ-a.e.:

   ∫_0^t a_j(ω) χ_{(T_j,T_{j+1}]}(s) dB_s(ω) = a_j(ω) [B_{T_{j+1}∧t}(ω) − B_{T_j∧t}(ω)], for all t.   ((3))

We do this using the corollary 2.50 representation of the integral on the left:

   ∫_0^t a_j(ω) χ_{(T_j,T_{j+1}]}(s) dB_s(ω) = ∫_0^∞ a_j(ω) χ_{(T_j,T_{j+1}]}(s) χ_{(0,t]}(s) dB_s(ω).

As a first step, if these stopping times have only finitely many values:

   T_j = ∑_{i=1}^n c_i χ_{A_i},   T_{j+1} = ∑_{k=1}^m d_k χ_{C_k},

where ⋃_{i=1}^n A_i = ⋃_{k=1}^m C_k = S, then also S = ⋃_{i,k} (A_i ∩ C_k), where some of these intersection sets may be empty. If A_i ∩ C_k ≠ ∅, then T_j < T_{j+1} implies that c_i < d_k. Hence:

   χ_{(T_j∧t, T_{j+1}∧t]}(s) = ∑_{i,k} χ_{A_i∩C_k}(ω) χ_{(c_i∧t, d_k∧t]}(s),

and thus a_j(ω) χ_{(T_j∧t, T_{j+1}∧t]}(s) is a sum of simple processes. With a little algebra, (3) then follows from 1 of proposition 2.49, 2 of proposition 2.47, and 2.13.

In the general case, by proposition 5.57 of book 7 with a small change of notation, there exist sequences of stopping times {T_j^{(n)}}_{n=1}^∞ and {T_{j+1}^{(n)}}_{n=1}^∞ so that in each case T^{(n)} ≥ T^{(n+1)} and T^{(n)} → T. Further, all such stopping times in the sequences have only finitely many values. It is an exercise to check, based on the definition of these sequences in that proposition, that T_j < T_{j+1} implies that T_j^{(n)} ≤ T_{j+1}^{(n)} for all n. Thus the prior proof obtains for each n:

   ∫_0^∞ a_j(ω) χ_{(T_j^{(n)}∧t, T_{j+1}^{(n)}∧t]}(s) dB_s(ω) = a_j(ω) [B_{T_{j+1}^{(n)}∧t}(ω) − B_{T_j^{(n)}∧t}(ω)].

To complete the proof we replicate the proof of (1). To prove H₂([0,∞)×S)-convergence of integrands:

   a_j(ω) χ_{(T_j^{(n)}∧t, T_{j+1}^{(n)}∧t]} →_{H₂([0,∞)×S)} a_j(ω) χ_{(T_j∧t, T_{j+1}∧t]},

first note that the ds-integrals are well defined pointwise, and:

   ∫_0^∞ a_j(ω) [χ_{(T_j^{(n)}∧t, T_{j+1}^{(n)}∧t]} − χ_{(T_j∧t, T_{j+1}∧t]}] ds
      = a_j(ω) [(T_{j+1}^{(n)}∧t − T_{j+1}∧t) − (T_j^{(n)}∧t − T_j∧t)].

Now by the construction of book 7's proposition 5.57, T^{(n)}∧t − T∧t ≤ 2^{-n} for each stopping time, and thus:

   ‖a_j(ω) [χ_{(T_j^{(n)}∧t, T_{j+1}^{(n)}∧t]} − χ_{(T_j∧t, T_{j+1}∧t]}]‖²_{H₂([0,∞)×S)} ≤ 2^{-(n-1)} E[a_j²(ω)] → 0.

It now follows from the Itô isometry of corollary 2.50 that for all t:

   ∫_0^∞ a_j(ω) χ_{(T_j^{(n)}∧t, T_{j+1}^{(n)}∧t]} dB_s(ω) →_{L₂(S)} ∫_0^∞ a_j(ω) χ_{(T_j∧t, T_{j+1}∧t]} dB_s(ω).

Since (3) has been proved for stopping times with finitely many values, this obtains for all t:

   a_j(ω) [B_{T_{j+1}^{(n)}∧t}(ω) − B_{T_j^{(n)}∧t}(ω)] →_{L₂(S)} ∫_0^∞ a_j(ω) χ_{(T_j∧t, T_{j+1}∧t]} dB_s(ω).

As above, for each t now choose a subsequence n_k so that this convergence is μ-a.e. Since it is then also the case that for each t:

   a_j(ω) [B_{T_{j+1}^{(n_k)}∧t}(ω) − B_{T_j^{(n_k)}∧t}(ω)] → a_j(ω) [B_{T_{j+1}∧t}(ω) − B_{T_j∧t}(ω)], μ-a.e.,

we obtain for each t, again using corollary 2.50:

   ∫_0^t a_j(ω) χ_{(T_j,T_{j+1}]} dB_s(ω) = a_j(ω) [B_{T_{j+1}∧t}(ω) − B_{T_j∧t}(ω)], μ-a.e.

This identity is thus valid for all rational t, μ-a.e., and (3) follows μ-a.e. by continuity of both expressions. ∎
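As a minimal numerical sketch of 2.46, assuming only a Brownian path simulated on a uniform grid: the integral of a generalized simple process is the sum of the a_j-weighted increments of the stopped path. The function names and the particular stopping indices below are hypothetical illustrations, not notation from the text. With a_j ≡ 1 the sum telescopes to B_t exactly, which is easy to check.

```python
import random

def brownian_path(n_steps, dt, seed=0):
    """Simulate B on the grid {k*dt} as cumulative sums of N(0, dt) increments."""
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, dt ** 0.5))
    return path

def generalized_simple_integral(path, stop_idx, coeffs, t_idx):
    """Evaluate sum_j a_j (B_{T_{j+1} ^ t} - B_{T_j ^ t}) as in 2.46.

    stop_idx: increasing grid indices standing in for the stopping times T_j;
    coeffs:   the a_j, which must be known at time T_j (adaptedness is the
              caller's responsibility in this sketch);
    t_idx:    grid index of t.
    """
    total = 0.0
    for j in range(len(stop_idx) - 1):
        lo = min(stop_idx[j], t_idx)
        hi = min(stop_idx[j + 1], t_idx)
        total += coeffs[j] * (path[hi] - path[lo])
    return total

path = brownian_path(100, 0.01, seed=1)
# With a_j = 1 for all j, the sum telescopes to B_t exactly.
telescoped = generalized_simple_integral(path, [0, 10, 35, 60, 100], [1.0] * 4, 80)
```

Taking a_j = B_{T_j}(ω) instead gives the nontrivial adapted case that reappears in the Riemann sum results below.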

The following proposition is the path-dependent Riemann sum result of


Klaus Bichteler noted above, applied to Itô integration.

Proposition 2.56 (K. Bichteler) Let B_t(ω) be a Brownian motion on (S, σ(S), σ_t(S), μ)_{u.c.}, and v(s,ω) ∈ H₂([0,∞)×S) right continuous for almost all ω. Then there exist stopping times {{T_i^{(n)}}_{i=0}^∞}_{n=1}^∞ so that for each n, T_{i+1}^{(n)} > T_i^{(n)} for all i and T_i^{(n)} → ∞, μ-a.e., and for all t < ∞:

   sup_{r≤t} | ∑_{i=0}^∞ v(T_i^{(n)},ω) [B_{T_{i+1}^{(n)}∧r}(ω) − B_{T_i^{(n)}∧r}(ω)] − ∫_0^r v(s,ω) dB_s(ω) | → 0, μ-a.e.,   (2.47)

as n → ∞.

Proof. Define T_0^{(n)} = 0 for all ω, n, and then define T_{i+1}^{(n)} ≡ T_{i+1}^{(n)}(ω) by:

   T_{i+1}^{(n)} ≡ inf{s > T_i^{(n)} : |v(s,ω) − v(T_i^{(n)},ω)| > 2^{-n}}.
Now T_1^{(n)} is the hitting time by v(s,ω) − v(0,ω) of the open set G ≡ (−∞, −2^{-n}) ∪ (2^{-n}, ∞), for any n. Thus by right continuity of v(s,ω) and of the filtration σ_t(S), T_1^{(n)} is a stopping time by 4 of book 7's proposition 5.60, as noted in the introduction above. More generally, if |v(s,ω) − v(T_i^{(n)},ω)| > 2^{-n} for some s > T_i^{(n)}, then by right continuity there exists δ(ω) > 0 so that:

   |v(r,ω) − v(T_i^{(n)},ω)| > 2^{-n},  s ≤ r < s + δ(ω).

Thus if T_{i+1}^{(n)} < t, there exists rational r with T_i^{(n)} < r < t so that:

   |v(r,ω) − v(T_i^{(n)},ω)| > 2^{-n},

and hence:

   {T_{i+1}^{(n)} < t} = ⋃_{r∈Q, T_i^{(n)}<r<t} {|v(r,ω) − v(T_i^{(n)},ω)| > 2^{-n}}.

This is a countable union of σ_t(S) sets since:

   {|v(r,ω) − v(T_i^{(n)},ω)| > 2^{-n}} ∈ σ_r(S) ⊂ σ_t(S).

Thus T_{i+1}^{(n)} is an optional time, and by the assumed right continuity of the filtration σ_t(S), T_{i+1}^{(n)} is a stopping time by book 7's proposition 5.60. By construction T_{i+1}^{(n)} > T_i^{(n)} for all i, and it is an exercise to check that, so defined, T_i^{(n)} → ∞, μ-a.e.

Given n, define the generalized simple process:

   v_n(s,ω) = ∑_{i=0}^∞ v(T_i^{(n)},ω) χ_{(T_i^{(n)}, T_{i+1}^{(n)}]}(s).

To justify the application of proposition 2.55, we will prove that v_n(s,ω) χ_{(0,t]}(s) ∈ H₂([0,∞)×S) for all n and t.

Recalling the discussion preceding proposition 2.55, v_n(s,ω) is adapted if a_i(ω) ≡ v(T_i^{(n)},ω) is σ_{T_i^{(n)}}(S)-measurable (definition 5.52, book 7) for all i. For s = 0, since T_0^{(n)} = 0 for all ω and {T_i^{(n)} ≤ 0} is empty for i ≥ 1, only T_0^{(n)} need be checked. Then for A ∈ B(R), since v(s,ω) is adapted:

   v^{-1}(T_0^{(n)}(·),·)(A) ∩ {T_0^{(n)} ≤ 0} = v^{-1}(0,·)(A) ∈ σ_0(S).
For s > 0 we use 1 of book 7's proposition 5.61 and prove that for A ∈ B(R):

   v^{-1}(T_i^{(n)}(·),·)(A) ∩ {T_i^{(n)} < s} ∈ σ_s(S).   ((1))

To verify (1), let T ≡ T_i^{(n)} to simplify notation, and for given N and m ≤ N2^N define T_N as the finite valued stopping time of book 7's proposition 5.57. That is:

   T_N ≡ m2^{-N}, for (m−1)2^{-N} ≤ T < m2^{-N},

and T_N = ∞ if T ≥ N. By construction:

   v^{-1}(T_N(·),·)(A) ∩ {T < s} = ⋃_{m≤m_s} [ v^{-1}(m2^{-N},·)(A) ∩ {(m−1)2^{-N} ≤ T < m2^{-N}} ],

where m_s ≡ sup{m : m2^{-N} < s}. This union of sets is σ_s(S)-measurable by the definition of m_s, adaptedness of v(s,ω), and 1 of proposition 5.60 of book 7. This proves that v(T_N,ω) is σ_T(S)-measurable for all N. Now T_N → T pointwise, and by right continuity v(T_N,ω) → v(T,ω) pointwise, so v(T,ω) is σ_T(S)-measurable by book 5's corollary 1.10. Recalling that T ≡ T_i^{(n)} proves (1).

Since adapted and left continuous by construction, v_n(s,ω) is measurable by book 7's proposition 5.19. To see that v_n(s,ω) χ_{(0,t]}(s) ∈ H₂([0,∞)×S) for all n and t, recall that |v_n(s,ω) − v(s,ω)| ≤ 2^{-n} by definition of {T_i^{(n)}}. This obtains that v_n²(s,ω) ≤ 2v²(s,ω) + 2^{1-2n}, and thus:

   ∫_S ∫_0^∞ v_n²(s,ω) χ_{(0,t]}(s) ds dμ ≤ 2 ‖v(s,ω)‖²_{H₂([0,∞)×S)} + 2^{1-2n} t.

With both integrands in H₂([0,∞)×S):

   ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω)

is a continuous martingale by proposition 2.52. By Doob's martingale maximal inequality of book 7's proposition 5.91, followed by Itô's isometry of 2.43, and then the bound |v_n(s,ω) − v(s,ω)| ≤ 2^{-n}:

   E[ sup_{r≤t} | ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω) |² ]
      ≤ 4 E[ | ∫_0^t [v_n(s,ω) − v(s,ω)] dB_s(ω) |² ]
      = 4 E[ ∫_0^t |v_n(s,ω) − v(s,ω)|² ds ]
      ≤ 4t · 2^{-2n}.

Hence:

   E[ ∑_{n=1}^∞ sup_{r≤t} | ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω) |² ]
      = ∑_{n=1}^∞ E[ sup_{r≤t} | ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω) |² ] < ∞.

This implies that:

   ∑_{n=1}^∞ sup_{r≤t} | ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω) |² < ∞, μ-a.e.,

and thus as n → ∞:

   sup_{r≤t} | ∫_0^r [v_n(s,ω) χ_{(0,t]}(s) − v(s,ω)] dB_s(ω) | → 0, μ-a.e.

Finally 2.47 follows from 2.46. ∎
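The path-dependent stopping times in the proof can be mimicked on a grid: T_{i+1}^{(n)} is the first grid point after T_i^{(n)} at which v has moved more than 2^{-n} away from its value at the previous stop. A minimal sketch, assuming a deterministic test path v(s) = sin(s) (the function name and sampling grid are illustrative choices, not part of the text):

```python
import math

def bichteler_stops(v_vals, n):
    """Grid indices T_0 = 0 < T_1 < ... where |v - v(previous stop)| first
    exceeds 2^{-n}; a discrete stand-in for the proof's stopping times."""
    eps = 2.0 ** (-n)
    stops = [0]
    for k in range(1, len(v_vals)):
        if abs(v_vals[k] - v_vals[stops[-1]]) > eps:
            stops.append(k)
    return stops

# v(s) = sin(s) sampled on [0, 2*pi] with step 0.001.
v_vals = [math.sin(k * 0.001) for k in range(6284)]
stops4 = bichteler_stops(v_vals, n=4)
stops5 = bichteler_stops(v_vals, n=5)
```

By construction the piecewise-constant approximation v_n stays within 2^{-n} of v strictly between consecutive stops, which is exactly the bound used when applying proposition 2.55 above; halving the tolerance roughly doubles the number of stops for a path of finite variation.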

Bichteler's result is of theoretical interest, but due to the definition of the stopping times it is not generally useful in the actual evaluation of ∫_0^t v(s,ω) dB_s(ω) for given v(s,ω). Returning to more traditional Riemann sums with path-independent partitions, the following result will be stated without a detailed proof, other than to confirm the applicability of proposition 3.55 and some other results to be proved later. Admittedly this result is somewhat "out of place" here since it requires a later development, but this is the logical place for results on Itô integration. So rather than produce an independent proof here, only to be generalized later, it is hoped the reader will agree that verifying the conditions needed to apply the later results is not onerous.
Proposition 2.57 (Convergence of Riemann sum approximation) Let B_t(ω) be a Brownian motion on (S, σ(S), σ_t(S), μ)_{u.c.}, and v(s,ω) ∈ H₂([0,∞)×S) both left continuous μ-a.e. and locally bounded, meaning that for all t < ∞:

   |v(s,ω)| ≤ K_t < ∞, for s ≤ t.

Given partitions Π_n of [0,t]:

   0 = t_0 < t_1 < ⋯ < t_{n+1} = t,

with mesh size μ_n ≡ max_{0≤i≤n}{t_{i+1} − t_i} → 0, then with the integral defined as I_t(ω) of proposition 2.49:

   ∑_{i=0}^n v(t_i,ω) [B_{t_{i+1}}(ω) − B_{t_i}(ω)] →_{L₂(S)} ∫_0^t v(s,ω) dB_s(ω).   (2.48)

In addition, L₂(S)-convergence is uniform on compact sets:

   ‖ sup_{r≤t} | ∑_{i=0}^n v(t_i,ω) [B_{t_{i+1}∧r}(ω) − B_{t_i∧r}(ω)] − ∫_0^r v(s,ω) dB_s(ω) | ‖_{L₂(S)} → 0,   (2.49)

as is convergence in probability:

   sup_{r≤t} | ∑_{i=0}^n v(t_i,ω) [B_{t_{i+1}∧r}(ω) − B_{t_i∧r}(ω)] − ∫_0^r v(s,ω) dB_s(ω) | →_P 0.   (2.50)

Further, there is a subsequence of partitions Π_{n_k} so that uniform convergence on r ≤ t is pointwise μ-a.e.:

   sup_{r≤t} | ∑_{i=0}^{n_k} v(t_i,ω) [B_{t_{i+1}∧r}(ω) − B_{t_i∧r}(ω)] − ∫_0^r v(s,ω) dB_s(ω) | →_{a.e.} 0.   (2.51)
Proof. To apply proposition 3.55 we need to check assumptions. First, this proposition applies to L₂-bounded martingale integrators M_s, meaning that [∫ |M_s|² dμ]^{1/2} ≤ K < ∞ for all s. Since [∫ |B_s|² dμ]^{1/2} = √s, this assumption fails. So choose t₀ ≥ t and define the fixed stopping time T ≡ t₀. Then M_s ≡ B_{s∧T} is a martingale by proposition 5.84 of book 7, and is L₂-bounded since [∫ |M_s|² dμ]^{1/2} ≤ √t₀. Next, left continuous v(s,ω) ∈ H₂([0,∞)×S) is predictable by proposition 5.17 of book 7, so v(s,ω) satisfies the measurability assumption to have v(s,ω) ∈ H₂^M([0,∞)×S) by definition 3.9. To check the integrability constraint 3.11 for this space, recall corollary 6.16 of book 7, and then corollary 2.10:

   ⟨M⟩_s = ⟨B⟩_{s∧T} = s ∧ T,

and thus 3.11 is stated:

   ‖v(t,ω)‖²_{H₂^M([0,∞)×S)} ≡ E[ ∫_0^∞ v²(s,ω) d⟨M⟩_s ] = E[ ∫_0^T v²(s,ω) ds ] ≤ ‖v(t,ω)‖²_{H₂([0,∞)×S)}.

Finally, v(s,ω) is locally bounded by assumption.

So proposition 3.55 applies and all four statements above follow directly, but with B_{s∧T} everywhere in place of B_s. Since t ≤ T by construction, the Riemann sums are identical with those above. For the stochastic integrals we need to check two things. First, does the integration theory underlying proposition 3.55 using M_s = B_s^T ≡ B_{s∧T} reproduce the Itô integral with a stopped Brownian motion? The answer is "yes." Since we have confirmed that such v(s,ω) ∈ H₂^M([0,∞)×S), the simple process approximation of proposition 3.15 is also a simple process approximation as in proposition 2.37. Similarly, the initial construction of V_t^M(ω) of proposition 3.19 replicates that of V_t(ω) of proposition 2.40 μ-a.e., since both reflect L₂-convergence of approximating sequences. And finally the continuous version I_t^M(ω) of proposition 3.27 is created with the same steps as that of I_t(ω) of proposition 2.49.

As these integrals agree, we next need to check the significance of the stopping time T in the stochastic integrals in the above four statements. For this we require proposition 3.85 on stopping integrals with local martingale integrators, and this is justified since if a local martingale is also L₂-bounded, then these H₂([0,∞)×S) integrals are identical by definition 3.77. Then since r ≤ t ≤ T:

   ∫_0^r v(s,ω) dB_s^T(ω) = ∫_0^{r∧T} v(s,ω) dB_s(ω) = ∫_0^r v(s,ω) dB_s(ω),

and the proof is complete. ∎
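The classic numerical test of 2.48 takes v(s,ω) = B_s(ω), which is only pathwise (not uniformly) bounded, so this is a heuristic sketch rather than a literal instance of the proposition's hypotheses. The left-endpoint Riemann sum telescopes to (B_t² − ∑(ΔB)²)/2, and the known limit is ∫_0^t B dB = (B_t² − t)/2, so the approximation error is half the quadratic-variation error:

```python
import random

def riemann_ito_error(n_steps, t=1.0, seed=0):
    """One-path error |sum_i B_{t_i}(B_{t_{i+1}} - B_{t_i}) - (B_t^2 - t)/2|.

    The left-endpoint (Ito) sum telescopes to (B_t^2 - sum dB^2)/2, so this
    error equals |t - sum dB^2|/2 and shrinks with the partition mesh.
    """
    rng = random.Random(seed)
    dt = t / n_steps
    b = 0.0
    riemann = 0.0
    for _ in range(n_steps):
        db = rng.gauss(0.0, dt ** 0.5)
        riemann += b * db      # evaluate the integrand at the left endpoint
        b += db
    return abs(riemann - 0.5 * (b * b - t))
```

Evaluating the integrand at the right endpoint or midpoint instead would converge to a different (Stratonovich-type) limit; the left-endpoint choice is what makes the limit the Itô integral.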

The following result is an interesting application of the above proposition, and generalizes the well-known facts that the sum of independent normal variates is a normal variate, and that the mean and variance of this sum equal the respective sums of the corresponding component moments.
With this introduction, consider the Itô integral:

   ∫_0^t v(s) dB_s(ω),

with integrand v(s,ω) ≡ v(s) independent of ω. Such Itô integrals are called Wiener integrals after Norbert Wiener (1894–1964). Since v(s) is apparently adapted, if v(s) is continuous the above result applies, since continuity assures local boundedness. Consequently, this Itô integral is the limit of Riemann sums, and since these sums are normally distributed by the above observation, the following will not surprise.

This result can be generalized to left continuous v(s) with the addition of the above local boundedness assumption.

Proposition 2.58 (Wiener integral) Let B_t(ω) be a Brownian motion on (S, σ(S), σ_t(S), μ)_{u.c.} and v(s) a continuous function defined on [0,∞). Then for every t:

   ∫_0^t v(s) dB_s(ω) ~ N(0, ∫_0^t v²(s) ds).   (2.52)

In other words, the Wiener integral ∫_0^t v(s) dB_s(ω) is normally distributed with expectation 0 and variance ∫_0^t v²(s) ds.

Proof. First, given partitions Π_n of [0,t] as above with μ_n → 0, 2.48 obtains:

   ∑_{i=0}^n v(t_i) [B_{t_{i+1}}(ω) − B_{t_i}(ω)] →_{L₂(S)} ∫_0^t v(s) dB_s(ω).

Now {B_{t_{i+1}}(ω) − B_{t_i}(ω)}_{i=0}^n are independent normal variates with means equal to 0 and respective variances {t_{i+1} − t_i}_{i=0}^n. So for each n, denoting this Riemann sum by X_n:

   X_n ~ N(0, ∑_{i=0}^n v²(t_i)(t_{i+1} − t_i)).

It follows that the characteristic function of X_n is well defined (6.22, book 6):

   C_{X_n}(r) = exp( −½ r² ∑_{i=0}^n v²(t_i)(t_{i+1} − t_i) ).

With X denoting the above Itô integral, the above proposition states that X_n →_{L₂(S)} X, and thus X_n →_P X. By proposition 5.21 of book 2 this implies convergence in distribution, X_n →_d X, and then by definition (remark 8.3, book 2) this is equivalent to weak convergence of the associated distribution functions, F_n ⇒ F.

Lévy's continuity theorem for C_F(r) in book 6's proposition 6.16 obtains that F_n ⇒ F if and only if C_{F_n}(r) → C_F(r) for all r. But:

   C_{F_n}(r) ≡ C_{X_n}(r) → exp( −½ r² ∫_0^t v²(s) ds ),

and thus it follows that C_X(r) = exp( −½ r² ∫_0^t v²(s) ds ). This is the characteristic function of N(0, ∫_0^t v²(s) ds), and so by the uniqueness theorem of book 6's proposition 6.14, 2.52 follows. ∎
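A Monte Carlo sketch of 2.52, assuming the illustrative choice v(s) = cos(s) and t = 1: the sample mean of the Riemann approximations should be near 0 and the sample variance near ∫_0^1 cos²(s) ds = ½ + sin(2)/4. All function names and simulation sizes here are hypothetical choices for the demonstration.

```python
import math
import random

def wiener_integral_samples(v, t, n_steps, n_paths, seed=0):
    """Riemann approximations of int_0^t v(s) dB_s across independent paths."""
    rng = random.Random(seed)
    dt = t / n_steps
    samples = []
    for _ in range(n_paths):
        acc = 0.0
        for k in range(n_steps):
            acc += v(k * dt) * rng.gauss(0.0, dt ** 0.5)   # v is deterministic
        samples.append(acc)
    return samples

samples = wiener_integral_samples(math.cos, 1.0, 200, 4000, seed=1)
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean * mean
target_var = 0.5 + math.sin(2.0) / 4.0   # exact value of int_0^1 cos^2(s) ds
```

Each sample is a finite sum of independent centered normals, hence exactly normal; the proposition says the limit inherits this, with the variances converging to the Riemann integral of v².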
Chapter 3

Integrals w.r.t. Continuous Local Martingales

It is natural to wonder if Brownian motion is unique in terms of its ability to support the construction of the integral ∫_0^t v(s,ω) dB_s(ω) for suitable integrands v(s,ω). As a question for investigation, is there a property or properties of B_t(ω) on (S, σ(S), σ_t(S), μ)_{u.c.} which, if satisfied by a martingale or other process M_t(ω), would assure success in the construction of the integral ∫_0^t v(s,ω) dM_s(ω), again for suitable integrands v(s,ω)?

Reviewing the above proofs, one identifies several essential properties of Brownian motion relative to the filtered space (S, σ(S), σ_t(S), μ)_{u.c.} that are used over and over in the development of the Itô integral:

1. For almost all ω ∈ S, B_t(ω) is continuous in t ∈ [0,∞) and B_0(ω) = 0;

2. B_t(ω) is a martingale with respect to the filtration {σ_t(S)}, so for 0 ≤ s ≤ t:

   E[B_t(ω) | σ_s(S)] = B_s(ω);

3. B_t²(ω) − t is a martingale with respect to the filtration {σ_t(S)}, and thus (remark 2.9):

   E[(B_t(ω) − B_s(ω))² | σ_s(S)] = t − s.

Item 3 reflects the fact that B_t(ω) has finite quadratic variation in the sense of proposition 2.17. Specifically, as μ_n ≡ max_{1≤i≤N_n}{t_i − t_{i-1}} → 0 for a sequence of partitions of [0,t] with t_0 = 0, t_{N_n} = t:

   E[ | ∑_{i=1}^{N_n} (B_{t_i}(ω) − B_{t_{i-1}}(ω))² − t |² ] → 0.

In fact, this convergence, as well as convergence in probability, is uniform over [0,s] ⊂ [0,t]. In the notation of the quadratic variation process, and as stated in 2.18:

   ⟨B⟩_t = t.
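The L² statement above can be sampled directly. A sketch, assuming uniform partitions of [0,1]: the mean-square error E[(∑(ΔB)² − t)²] equals 2t²/n for n equal steps, so refining the partition by a factor of 16 should shrink the estimate by roughly that factor.

```python
import random

def qv_mse(n_steps, n_paths, t=1.0, seed=0):
    """Monte Carlo estimate of E[(sum_i (B_{t_i} - B_{t_{i-1}})^2 - t)^2]
    on a uniform partition; the exact value is 2*t^2/n_steps."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        qv = sum(rng.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n_steps))
        total += (qv - t) ** 2
    return total / n_paths

coarse = qv_mse(10, 2000, seed=2)    # theory: 2/10 = 0.2
fine = qv_mse(160, 2000, seed=2)     # theory: 2/160 = 0.0125
```

The exact constant 2t²/n comes from Var((ΔB)²) = 2(Δt)² for a N(0, Δt) increment, summed over the n independent increments.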

In proposition 6.1 we will prove a result due to Paul Lévy (1886–1971), known as Lévy's characterization of Brownian motion. It states that Brownian motion is the only continuous martingale with quadratic variation process ⟨B⟩_t = t. So 3 cannot be satisfied by any other general process M_t(ω). On the other hand, perhaps it is the existence of ⟨B⟩_t, more than this actual value, that is relevant. Thus it is compelling to wonder if any continuous martingale (or local martingale) will support a definition of ∫_0^t v(s,ω) dM_s(ω) for suitable functions v(s,ω), since the existence of a finite quadratic variation process ⟨M⟩_t is assured by book 7's proposition 6.12.

To this end, the next section addresses integration of simple processes with respect to continuous martingales. The following section will then generalize the development of the Itô integral of v(s,ω) ∈ H₂([0,∞)×S) with respect to Brownian motion B_t, to stochastic integrals of a defined space of predictable integrands, H₂^M([0,∞)×S), with respect to continuous L₂-bounded martingales M_t with M_0 = 0. The final section in this chapter will generalize this result to continuous local martingales using a bigger space of integrands.

Then in the next chapter we address continuous semimartingale integrators. The general integration theory of predictable integrands with respect to semimartingales originates with a 1980 paper of Claude Dellacherie (b. 1943). This theory can also be developed in the somewhat more general context of progressively measurable integrands. Indeed, for many proofs it will be seen that it is progressive measurability of an integrand, as implied by predictability (proposition 5.19, book 7), that is needed for the stated result. See for example Karatzas and Shreve (1988) for the more general development.
3.1 Integration of Simple Processes

We begin this investigation with an adapted simple process v(s,ω) on (S, σ(S), σ_t(S), μ)_{u.c.} as in 2.21. With 0 = t_0 < t_1 < ⋯ < t_{n+1} < ∞:

   v(s,ω) ≡ a_{-1}(ω) χ_{{0}}(s) + ∑_{j=0}^n a_j(ω) χ_{(t_j,t_{j+1}]}(s),

where a_{-1}(ω) is σ_0(S)-measurable and a_j(ω) is σ_{t_j}(S)-measurable for all j by lemma 2.25. Analogously with 2.13, define the stochastic integral with respect to a continuous martingale M_t(ω):

   ∫_0^∞ v(s,ω) dM_s(ω) ≡ ∑_{j=0}^n a_j(ω) [M_{t_{j+1}}(ω) − M_{t_j}(ω)].   (3.1)

Integrals over intervals [t,t'] ⊂ [0,∞) are then defined as before:

   ∫_t^{t'} v(s,ω) dM_s(ω) ≡ ∫_0^∞ v(s,ω) χ_{(t,t']}(s) dM_s(ω),   (3.2)

noting that v(s,ω) χ_{(t,t']}(s) is an adapted simple process as defined above. From this is derived:

   ∫_t^{t'} v(s,ω) dM_s(ω) = ∫_0^{t'} v(s,ω) dM_s(ω) − ∫_0^t v(s,ω) dM_s(ω).   (3.3)

The first result and proof will be of little surprise, other than the qualification of "almost" in 2. See remark 3.2.

Proposition 3.1 (Integral of a simple process) If v(s,ω) is an adapted simple process and M_t is a continuous martingale on (S, σ(S), σ_t(S), μ)_{u.c.}, then:

1. For 0 ≤ t < t' ≤ ∞:

   E[ ∫_t^{t'} v(s,ω) dM_s(ω) ] = 0.   (3.4)

2. The process:

   V_t^M(ω) ≡ ∫_0^t v(s,ω) dM_s(ω),   (3.5)

is continuous, satisfies the martingale property relative to {σ_t(S)}, and is thus almost a continuous martingale on (S, σ(S), σ_t(S), μ).
Proof. For t ∈ (t_k, t_{k+1}]:

   ∫_0^t v(s,ω) dM_s(ω) = ∑_{j=0}^{k-1} a_j(ω) [M_{t_{j+1}}(ω) − M_{t_j}(ω)] + a_k(ω) [M_t(ω) − M_{t_k}(ω)].   ((*))

Using the linearity, measurability and tower properties of conditional expectations (proposition 5.26, book 6):

   E[ ∫_0^t v(s,ω) dM_s(ω) ]
      = ∑_{j=0}^{k-1} E( E[ a_j(ω)(M_{t_{j+1}}(ω) − M_{t_j}(ω)) | σ_{t_j}(S) ] ) + E( E[ a_k(ω)(M_t(ω) − M_{t_k}(ω)) | σ_{t_k}(S) ] )
      = ∑_{j=0}^{k-1} E( a_j(ω) E[ M_{t_{j+1}}(ω) − M_{t_j}(ω) | σ_{t_j}(S) ] ) + E( a_k(ω) E[ M_t(ω) − M_{t_k}(ω) | σ_{t_k}(S) ] )
      = 0.

The last step follows since M_t is a martingale relative to {σ_t(S)}. When t > t_{n+1}, the formula in (*) obtains V_t^M(ω) = V_{t_{n+1}}^M(ω), and thus E[V_t^M(ω)] = 0 for all t. Then 3.4 follows from 3.3.

For t ≤ t_{n+1}, continuity of V_t^M(ω) follows from (*) and the continuity of M_t, while σ_t-measurability follows from (*) and the adaptedness of M_t. Both extend to t > t_{n+1} since there V_t^M(ω) = V_{t_{n+1}}^M(ω). For 0 ≤ s ≤ t:

   E[V_t^M | σ_s(S)] = E[V_s^M | σ_s(S)] + E[V_t^M − V_s^M | σ_s(S)].

Now E[V_s^M | σ_s(S)] = V_s^M since V_s^M is σ_s-measurable, and E[V_t^M − V_s^M | σ_s(S)] = 0 follows from 1 since by 3.3:

   V_t^M − V_s^M = ∫_0^∞ v(r,ω) χ_{(0,t]}(r) dM_r(ω) − ∫_0^∞ v(r,ω) χ_{(0,s]}(r) dM_r(ω) = ∫_s^t v(r,ω) dM_r(ω).

Hence:

   E[V_t^M | σ_s(S)] = V_s^M,

so V_t^M satisfies the martingale property. ∎
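A Monte Carlo sketch of 3.4, assuming the concrete continuous martingale M_t = B_t² − t (a martingale, as noted with proposition 2.8) and the adapted coefficients a_j = sign(B_{t_j}): the expectation of the simple-process integral should vanish. The partition and path counts are illustrative choices.

```python
import random

def mean_simple_integral(times, n_paths, seed=0):
    """Monte Carlo E[sum_j a_j (M_{t_{j+1}} - M_{t_j})] with M_t = B_t^2 - t
    and a_j = sign(B_{t_j}); the exact expectation is 0 by proposition 3.1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        b, v = 0.0, 0.0
        for t0, t1 in zip(times, times[1:]):
            a = 1.0 if b >= 0.0 else -1.0    # a_j is sigma_{t_j}(S)-measurable
            m0 = b * b - t0
            b += rng.gauss(0.0, (t1 - t0) ** 0.5)
            v += a * ((b * b - t1) - m0)     # a_j (M_{t_{j+1}} - M_{t_j})
        total += v
    return total / n_paths

est = mean_simple_integral([0.0, 0.25, 0.5, 0.75, 1.0], 20000, seed=4)
```

Note that the same code with a_j chosen using B_{t_{j+1}} (a non-adapted choice) would generally produce a nonzero mean, which is the probabilistic content of requiring σ_{t_j}(S)-measurability.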



Remark 3.2 (On "almost") Note that by almost a continuous martingale in the above result is meant that while V_t^M(ω) is continuous, adapted and satisfies the martingale property, we cannot yet verify integrability, that E|V_t^M| < ∞ for all t ∈ [0,∞). Indeed this need not be satisfied without further constraints. For the Itô integral, this integrability result followed from 2.27, the Itô isometry, and an application of the Cauchy-Schwarz inequality (book 2, corollary 3.48).

We discuss this isometry next. Specifically, we evaluate the L₂-norm of V_∞^M(ω), which will then provide an insight on how the integrand v(s,ω) must be further constrained to obtain a martingale.

Proposition 3.3 If v(s,ω) is an adapted simple process and M_t is a continuous martingale on (S, σ(S), σ_t(S), μ)_{u.c.}, then with V_∞^M(ω) defined in 3.5:

   ‖V_∞^M(ω)‖²_{L₂(S)} = ∑_{j=0}^n E[ a_j²(ω) (M_{t_{j+1}}²(ω) − M_{t_j}²(ω)) ].   (3.6)

Proof. Repeating the derivation for the analogous Itô result in proposition 2.27, and then expanding:

   ‖V_∞^M(ω)‖²_{L₂(S)} ≡ E[ ( ∑_{j=0}^n a_j(ω)(M_{t_{j+1}}(ω) − M_{t_j}(ω)) )² ]
      = ∑_{j=0}^n E[ a_j²(ω)(M_{t_{j+1}}²(ω) + M_{t_j}²(ω)) ]
        − 2 ∑_{j=0}^n E[ a_j²(ω) M_{t_{j+1}}(ω) M_{t_j}(ω) ]
        + 2 ∑_{i<j} E[ a_i(ω) a_j(ω)(M_{t_{i+1}}(ω) − M_{t_i}(ω))(M_{t_{j+1}}(ω) − M_{t_j}(ω)) ].

An application of the tower and measurability properties of conditional expectations (proposition 5.26, book 6) reveals that the last summation has expectation equal to zero since M_t is a martingale. Using the same properties of conditional expectations:

   E[ a_j²(ω) M_{t_{j+1}}(ω) M_{t_j}(ω) ] = E[ a_j²(ω) M_{t_j}(ω) E[M_{t_{j+1}}(ω) | σ_{t_j}(S)] ] = E[ a_j²(ω) M_{t_j}²(ω) ],

and the result follows. ∎

Looking at the proof of proposition 3.3, 3.6 looks like the beginning of an Itô-type isometry as seen in 2.22. But while 3.6 relates the L₂-norm of the stochastic integral V_∞^M(ω) to an expression that reflects the integrand v(s,ω), in the earlier derivation the summation on the right could be related to the L₂-norm of v(s,ω). For the formula above, such a conversion is not apparent.

This is a major obstacle for the current development, because the existence of an Itô-type isometry was the key tool used for generalizing the Itô integral from adapted simple processes to appropriately specified integrands. To make progress on this we will first restrict the continuous martingale integrator M_t to be L₂-bounded, and then later consider continuous local martingales.

3.2 Integrals w.r.t. Continuous L₂-Bounded Martingales

In this section we develop the stochastic integral:

   ∫ v(s,ω) dM_s(ω),

for predictable integrands v(s,ω) with integrability properties to be developed next, and for continuous L₂-bounded martingales M_t, both defined on a filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.}. By L₂-bounded is meant that for all t:

   ‖M_t‖²_{L₂} ≡ E[|M_t|²] ≤ C < ∞.

The strategy for accomplishing this result will be similar to that used for the Itô integral: first defining the integral for suitable simple processes v(s,ω) as in 3.1, then generalizing to other v(s,ω) with the aid of an approximation theorem and a generalized Itô isometry.

3.2.1 A Generalized Itô Isometry

To develop the Itô isometry for the Itô integral in the prior chapter, the proof of proposition 2.27 revealed that:

   ‖V_∞(ω)‖²_{L₂(S)} = ∑_{j=0}^n E[ a_j²(ω) (B_{t_{j+1}}(ω) − B_{t_j}(ω))² ].

Applying the measurability, tower and independence properties of conditional expectations resulted in the Lebesgue integral:

   ‖V_∞(ω)‖²_{L₂(S)} = E[ ∑_{j=0}^n a_j²(ω) (t_{j+1} − t_j) ] = ‖v(s,ω)‖²_{L₂([0,∞)×S)}.

While a good approach for Brownian motion, this calculation does not easily generalize to continuous martingales because we have no ready formula for:

   E[ (M_{t_{j+1}}(ω) − M_{t_j}(ω))² ].

However, we could have proceeded differently in the prior section. Starting with the identity as in 3.6 above but with the continuous martingale M_t = B_t:

   ‖V_∞^B(ω)‖²_{L₂(S)} = ∑_{j=0}^n E[ a_j²(ω) ((B_{t_{j+1}}²(ω) − t_{j+1}) − (B_{t_j}²(ω) − t_j)) ]
      + ∑_{j=0}^n E[ a_j²(ω) (t_{j+1} − t_j) ].

The second summation is the Lebesgue integral of v²(s,ω) as before. For the first, since B_t²(ω) − t is a martingale (proposition 2.8), applying properties of conditional expectations obtains:

   E[ a_j²(ω) ((B_{t_{j+1}}²(ω) − t_{j+1}) − (B_{t_j}²(ω) − t_j)) ]
      = E[ a_j²(ω) E[ (B_{t_{j+1}}²(ω) − t_{j+1}) − (B_{t_j}²(ω) − t_j) | σ_{t_j}(S) ] ]
      = 0.

Combining, we again derive that:

   ‖V_∞^B(ω)‖²_{L₂(S)} = ‖v(s,ω)‖²_{L₂([0,∞)×S)}.

This second derivation is more easily generalized to L₂-bounded continuous martingales, which are local martingales by book 7's corollary 5.85. The Doob-Meyer decomposition theorem 2 (proposition 6.12, book 7) states that there exists a unique, continuous, adapted and increasing quadratic variation process ⟨M⟩_t with ⟨M⟩_0 = 0, so that M_t² − ⟨M⟩_t is a local martingale. That ⟨M⟩_t is increasing and continuous means that ⟨M⟩_t induces a pathwise Borel measure μ_{⟨M⟩_t} on B(R) (section 5.2, book 1) with which it is then possible to define a pathwise Lebesgue-Stieltjes integral of appropriate v(t,ω) with respect to d⟨M⟩_t ≡ dμ_{⟨M⟩_t} (chapter 2, book 5).

These integrals could also be defined in the Riemann-Stieltjes sense if we assume that v(t,ω) is continuous or of bounded variation in t (proposition 4.19, book 3). Further, the Lebesgue-Stieltjes and Riemann-Stieltjes integrals would then agree in this continuous case (proposition 2.56, book 5). However, since we will only want to assume measurability properties of v(t,ω), the Lebesgue-Stieltjes model is better suited to the current investigation.

The final result in 3.7 is the Itô isometry in the current context.

Proposition 3.4 (A general isometry) Let M_t be a continuous L₂-bounded martingale on the filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.} with quadratic variation process ⟨M⟩_t, and v(s,ω) an adapted simple process. Then with V_t^M(ω) defined in 3.5:

   ‖V_t^M(ω)‖²_{L₂(S)} = E[ ∫_0^t v²(s,ω) d⟨M⟩_s ],   (3.7)

where d⟨M⟩_s denotes the pathwise defined Borel measure associated with the increasing quadratic variation process ⟨M⟩_t.

Hence by proposition 3.1 and remark 3.2, V_t^M is a continuous martingale if for all t:

   E[ ∫_0^t v²(s,ω) d⟨M⟩_s ] < ∞.

Proof. Repeating the steps above with B_{t_j}²(ω) − t_j replaced by M_{t_j}² − ⟨M⟩_{t_j} obtains:

   ‖V_∞^M(ω)‖²_{L₂(S)} = ∑_{j=0}^n E[ a_j²(ω) E[ (M_{t_{j+1}}² − ⟨M⟩_{t_{j+1}}) − (M_{t_j}² − ⟨M⟩_{t_j}) | σ_{t_j}(S) ] ]   ((*))
      + ∑_{j=0}^n E[ a_j²(ω) (⟨M⟩_{t_{j+1}} − ⟨M⟩_{t_j}) ].

As noted above, M_t² − ⟨M⟩_t is a local martingale. Thus there exist stopping times {T_n}, increasing and with T_n → ∞ μ-a.e., so that:

   χ_{T_n>0} (M_t² − ⟨M⟩_t)^{T_n} ≡ χ_{T_n>0} (M_{t∧T_n}² − ⟨M⟩_{t∧T_n})

is a martingale with respect to σ_t(S) for all n. Further, we can choose T_n so that |χ_{T_n>0} (M_t² − ⟨M⟩_t)^{T_n}| ≤ n say, by book 7's corollary 5.76. Thus for each j:

   E[ χ_{T_n>0} ((M_{t_{j+1}∧T_n}² − ⟨M⟩_{t_{j+1}∧T_n}) − (M_{t_j∧T_n}² − ⟨M⟩_{t_j∧T_n})) | σ_{t_j}(S) ] = 0,

for all n. By definition of conditional expectation (definition 5.19, book 6), this implies that for all B ∈ σ_{t_j}(S):

   ∫_B χ_{T_n>0} [ (M_{t_{j+1}∧T_n}² − ⟨M⟩_{t_{j+1}∧T_n}) − (M_{t_j∧T_n}² − ⟨M⟩_{t_j∧T_n}) ] dμ = 0.

This integrand is bounded by 2n, which is integrable. Thus letting n → ∞ and recalling T_n → ∞ μ-a.e., Lebesgue's dominated convergence theorem (proposition 2.43, book 5) obtains for all B ∈ σ_{t_j}(S):

   ∫_B [ (M_{t_{j+1}}² − ⟨M⟩_{t_{j+1}}) − (M_{t_j}² − ⟨M⟩_{t_j}) ] dμ = 0.

Thus by definition of conditional expectation, the first summation in (*) for the expression for ‖V_∞^M(ω)‖²_{L₂(S)} is 0.

For the second, given the pathwise increasing function ⟨M⟩_t, the Lebesgue-Stieltjes integral is defined relative to the Borel measure d⟨M⟩_s ≡ dμ_{⟨M⟩_s}:

   ∫ χ_{(t_j,t_{j+1}]}(s) d⟨M⟩_s ≡ ⟨M⟩_{t_{j+1}} − ⟨M⟩_{t_j}.

Thus since the (t_j, t_{j+1}]-intervals are disjoint:

   ‖V_t^M(ω)‖²_{L₂(S)} = ∑_{j=0}^n E[ a_j²(ω) (⟨M⟩_{t_{j+1}} − ⟨M⟩_{t_j}) ]
      = ∑_{j=0}^n E[ a_j²(ω) ∫ χ_{(t_j,t_{j+1}]}(s) d⟨M⟩_s ]
      ≡ E[ ∫_0^t v²(s,ω) d⟨M⟩_s ],

which is 3.7.

Given proposition 3.1, to show that V_t^M is a continuous martingale requires only that the integrability assumption is satisfied. By the Cauchy-Schwarz inequality (corollary 3.48, book 4):

   ‖V_t^M(ω)‖_{L₁(S)} ≤ ‖V_t^M(ω)‖_{L₂(S)} ‖1‖_{L₂(S)},

and by the preceding derivation this is finite if for all t:

   E[ ∫_0^t v²(s,ω) d⟨M⟩_s ] < ∞. ∎
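A rough Monte Carlo sketch of 3.7, again assuming the illustrative martingale M_t = B_t² − t, for which ⟨M⟩_t = 4∫_0^t B_s² ds, so d⟨M⟩_s = 4B_s² ds. The simple process uses a_j = sign(B_{t_j}) refreshed at the start of each block of grid steps; both sides of the isometry are estimated on the same simulated paths, and both should be near E[4∫_0^1 B² ds] = 2 since a_j² = 1. The discretization of ⟨M⟩ and all sizes below are sketch-level assumptions.

```python
import random

def isometry_sides(n_steps, t, n_paths, block, seed=0):
    """Estimate both sides of 3.7 for M_t = B_t^2 - t, <M>_t = 4 int_0^t B^2 ds,
    with v built from a_j = sign(B) frozen at the start of each block."""
    rng = random.Random(seed)
    dt = t / n_steps
    lhs = rhs = 0.0
    for _ in range(n_paths):
        b, v_int, qv_int, a = 0.0, 0.0, 0.0, 1.0
        for k in range(n_steps):
            if k % block == 0:
                a = 1.0 if b >= 0.0 else -1.0   # refresh a_j; measurable at block start
            m0 = b * b - k * dt
            qv_int += a * a * 4.0 * b * b * dt   # a_j^2 d<M>_s with d<M>_s = 4 B_s^2 ds
            b += rng.gauss(0.0, dt ** 0.5)
            v_int += a * ((b * b - (k + 1) * dt) - m0)   # a_j (M increment)
        lhs += v_int ** 2       # estimates E[(V_t^M)^2]
        rhs += qv_int           # estimates E[int_0^t v^2 d<M>_s]
    return lhs / n_paths, rhs / n_paths

lhs, rhs = isometry_sides(100, 1.0, 8000, block=25, seed=5)
```

The point of the demonstration is that E[(V_t^M)²] is matched not by a Lebesgue-measure time integral of v², as for Brownian motion, but by the time integral against the path-dependent measure d⟨M⟩_s.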

The result in 3.7 for a continuous L₂-bounded martingale M_t looks like a generalized Itô isometry:

   ‖V_t^M(ω)‖_{L₂(S)} = ‖v(s,ω)‖_{L₂⁰([0,t]_{⟨M⟩}×S)},

where L₂⁰([0,t]_{⟨M⟩}×S) is another space of square-integrable functions on [0,t]×S. But on this space, the time integral is understood in the pathwise Lebesgue-Stieltjes sense, using the Borel measures induced by the increasing functions ⟨M⟩_t(ω). In other words, as in 2.19 but with more detailed notation, this "norm" (see proposition 3.14) would be defined by:

   ‖v(s,ω)‖²_{L₂⁰([0,t]_{⟨M⟩}×S)} ≡ ∫_S ∫_0^t v²(s,ω) d⟨M⟩_s(ω) dμ.

Remark 3.5 (On the Doléans measure) It should be noted that given a continuous L₂-bounded martingale M_t and the associated increasing, continuous process ⟨M⟩_t, it is by no means obvious that there is a measure μ_M on [0,∞)×S with which to define the above space of functions as an L₂-space as in book 5, in the sense that:

   ‖v(s,ω)‖²_{L₂⁰([0,∞)_{⟨M⟩}×S)} = ∫ v²(s,ω) dμ_M.   (3.8)

The complexity here is that L₂⁰([0,∞)_{⟨M⟩}×S) is not a traditional product measure space since the time integrals are defined by Borel measures d⟨M⟩_t that depend on ω ∈ S.

Such a measure was shown to exist by Catherine A. Doléans-Dade (1942–2004), and is called a Doléans measure. This measure is defined on the predictable sigma algebra P of book 7's definition 5.10, which is generated by the semi-algebra of sets:

   {(s,t] × A_s, {0} × A_0},

where 0 ≤ s < t, and A_s ∈ σ_s(S) for all s. On such sets, the set function μ_M⁰ is defined by:

   μ_M⁰[(s,t] × A_s] ≡ ∫_{A_s} ∫_s^t d⟨M⟩_r dμ = ∫_{A_s} [⟨M⟩_t − ⟨M⟩_s] dμ.

Similarly, μ_M⁰[{0} × A_0] ≡ 0 for A_0 ∈ σ_0(S).

This set function is well defined since, by book 7's corollary 5.85 and proposition 6.18, ⟨M⟩_t is integrable for all t. That μ_M⁰ can be extended to a measure on the sigma algebra generated by this semi-algebra requires the extension theorems of chapter 6 of book 1. Specifically, we first need to show that μ_M⁰ has a well defined and countably additive extension to the algebra generated by this semi-algebra of sets. Then the Hahn-Kolmogorov extension theorem and the Carathéodory Extension Theorem I assure, by way of a well defined outer measure, that μ_M⁰ can be extended to a measure μ_M on the associated sigma algebra P.
For the limited purpose at hand we will circumvent the details of this development, referencing Chung and Williams (1983) for a proof. In the next section we simply define a space of integrand functions v(s,ω) with appropriate measurability requirements and with E[∫ v²(s,ω) d⟨M⟩_s] < ∞. In effect, we will generalize the definition of the space of integrands for Itô integrals, H₂([0,∞)×S), to a space of integrands H₂^M([0,∞)×S), defined relative to a given continuous L₂-bounded martingale M_t.

3.2.2 M_2-Integrators and H_2^M([0,∞) × S)-Integrands

In this section we review earlier results on continuous L_2-bounded martingales, which are to be used as integrators, and formally introduce the associated spaces of predictable integrands. We begin with definition 5.98 of book 7 in the current context.

Definition 3.6 (M_2) A martingale M_t on (S, σ(S), σ_t(S), μ)_u.c. is an L_2-martingale if for all t:

$$\|M_t\|_{L_2(S)} \equiv \left(\int M_t^2\, d\mu\right)^{1/2} < \infty, \qquad (3.9)$$

and is an L_2-bounded martingale if for all t:

$$\|M_t\|_{L_2(S)} \le K < \infty. \qquad (3.10)$$

The collection of continuous, L_2-bounded martingales with M_0 = 0 is denoted by M_2, or M_2[(S, σ(S), σ_t(S), μ)_u.c.] if all of the underlying details need be specified.

Example 3.7 (All Itô integrals are integrators) The prior section's development of the Itô integral provides examples of the continuous L_2-bounded martingales addressed in this section. Let B_t(ω) be a Brownian motion on (S, σ(S), σ_t(S), μ)_u.c. and v(t,ω) ∈ H_2([0,∞) × S) as in definition 2.31. Then

$$M_t(\omega) \equiv \int_0^t v(s,\omega)\, dB_s(\omega),$$

is a continuous martingale by proposition 2.52, where this integral is given in proposition 2.49 and denoted I_t(ω). Further, this martingale is L_2-bounded by the Itô isometry of 2.43:

$$E\left[|M_t(\omega)|^2\right] = \|v(s,\omega)\|^2_{H_2([0,t]\times S)},$$

and this is bounded by ||v(s,ω)||²_{H_2([0,∞)×S)}, which is finite by 2.27.

CHAPTER 3 INTEGRALS W.R.T. CONTINUOUS LOCAL MARTINGALES


Thus, every v(t,ω) ∈ H_2([0,∞) × S) induces a continuous L_2-bounded martingale M_t that is a potential integrator for this section. Of course, having an integrator that is itself a stochastic integral raises an interesting question, which is answered in the associative law of stochastic integration of proposition 3.53.
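As a numerical aside (an illustration added here, not from the text), the isometry of example 3.7 can be checked by Monte Carlo for the particular choice v(s,ω) = B_s(ω): then E[M_t²] = E[∫_0^t B_s² ds] = t²/2, and a left-endpoint Riemann sum against the Brownian increments, mimicking the simple-process approximations of chapter 2, reproduces this approximately.

```python
import random

def ito_isometry_estimate(t=1.0, n_steps=200, n_paths=4000, seed=7):
    """Monte Carlo sketch of E[(int_0^t B_s dB_s)^2] = t^2/2, the Ito
    isometry with integrand v(s,w) = B_s.  All parameters are illustrative."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        b, integral = 0.0, 0.0
        for _ in range(n_steps):
            db = rng.gauss(0.0, dt ** 0.5)
            integral += b * db   # integrand evaluated at the left endpoint
            b += db
        total += integral ** 2
    return total / n_paths

print(ito_isometry_estimate())  # close to t^2/2 = 0.5
```

The left-endpoint evaluation is essential: it is what makes the discrete sum a martingale transform, while a right-endpoint sum would converge to a different (Stratonovich-shifted) limit.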

Remark 3.8 (On M_2) Recalling book 7's definition 5.10, that M_t is continuous in t is to be interpreted as continuous μ-a.e. This is a formality in the sense that we can always make M_t continuous for all ω by redefining M_t(ω) ≡ 0 on the exceptional set. But there is never any reason to do this, as it changes nothing in the theory.

As is the case for all L_p-type spaces of book 5, M_2 is a space of equivalence classes of processes, as noted in book 7's definition 5.120. Specifically, M_t and M'_t are deemed to be in the same equivalence class if M_∞ = M'_∞, μ-a.e. Here the limiting random variables M_∞ and M'_∞ are in L_2(S), defined in terms of the L_2(S)-limits:

$$M_t \to_{L_2(S)} M_\infty, \qquad M'_t \to_{L_2(S)} M'_\infty,$$

and proved to exist by Doob's martingale convergence theorem of book 7's corollary 5.116.

Also, proposition 5.122 of book 7 proved that M_2, the space of continuous L_2-bounded martingales on (S, σ(S), σ_t(S), μ)_u.c., is a Hilbert space (definition 4.23, book 5) under the norm defined for M ∈ M_2 by:

$$\|M\|_{\mathcal{M}_2} \equiv \|M_\infty\|_{L_2} = \left(E\left[|M_\infty|^2\right]\right)^{1/2},$$

and subject to the above definition of equivalence classes.


Thus the results of this section apply to martingales in a space M_2 that is isometric to the Hilbert space L_2(S, σ(S), μ) of square integrable random variables on S under the identifications:

$$M \in \mathcal{M}_2 \Rightarrow M_\infty \in L_2(S, \sigma(S), \mu),$$
$$M_\infty \in L_2(S, \sigma(S), \mu) \Rightarrow M_t \equiv E\left[M_\infty \mid \sigma_t(S)\right] \in \mathcal{M}_2.$$

The first result follows from Doob's martingale convergence theorem as noted above. For the second, if M_∞ ∈ L_2(S, σ(S), μ), then M_t ≡ E[M_∞ | σ_t(S)] is in M_2 by the proof of that book's proposition 5.122.

Turning next to the space of integrands, note that the norming constraint in 3.11 will in general select different collections of predictable processes for different continuous L_2-bounded martingales.

Definition 3.9 (H_2^M([0,∞) × S)) Let M_t ∈ M_2 be a continuous L_2-bounded martingale on the filtered probability space (S, σ(S), σ_t(S), μ)_u.c., and so M_0 = 0. The space H_2^M([0,∞) × S) is defined as the collection of real valued functions v(t,ω) defined on [0,∞) × S so that:

1. v(t,ω) is predictable, meaning measurable on the product space ([0,∞) × S, P), with P the predictable sigma algebra (definition 5.10, book 7).

2. v(t,ω) satisfies:

$$\|v(t,\omega)\|^2_{H_2^M([0,\infty)\times S)} \equiv E\left[\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s\right] < \infty, \qquad (3.11)$$

where

$$E\left[\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s\right] \equiv \int_S \int_0^\infty v^2(s,\omega)\, d\left[\langle M\rangle_s(\omega)\right] d\mu.$$

Notation 3.10 (H_2^M([t,t'] × S)) If v(t,ω) ∈ H_2^M([0,∞) × S), it is sometimes convenient to define for 0 ≤ t < t' ≤ ∞:

$$\|v(s,\omega)\|_{H_2^M([t,t']\times S)} \equiv \left\|\chi_{[t,t']}(s)\, v(s,\omega)\right\|_{H_2^M([0,\infty)\times S)}. \qquad (3.12)$$

Remark 3.11 (H_2([0,∞) × S) vs. H_2^B([0,∞) × S)) We might be interested to compare definition 2.31 for H_2([0,∞) × S) to H_2^M([0,∞) × S) above with M = B.

First, H_2^B is not formally defined. Though a continuous martingale, B_t is not an L_2-bounded martingale since ||B_t||_{L_2} = √t. That said, ⟨B⟩_t = t exists, and thus we can formally define H_2^B([0,∞) × S) as above. Then the integrability requirement for v(t,ω) ∈ H_2([0,∞) × S) in 2.27 is identical to the integrability requirement for v(t,ω) ∈ H_2^B([0,∞) × S) in 3.11.

For measurability, processes v(t,ω) in any H_2^M([0,∞) × S) are predictable, and so by book 7's proposition 5.19 they are adapted and measurable, and hence satisfy the measurability requirements for processes in H_2([0,∞) × S). But being predictable is more restrictive than being adapted and measurable. An adapted right continuous process is measurable by book 7's proposition 5.19, but such processes are not necessarily predictable (recall book 7's remark 5.18).

Thus given this informal definition of H_2^B([0,∞) × S), we conclude that:

$$H_2^B([0,\infty)\times S) \subsetneq H_2([0,\infty)\times S).$$

Alternatively, we can stop Brownian motion at any fixed stopping time T < ∞, defining M_t = B_t^T ≡ B_{t∧T}, and then compare H_2([0,∞) × S) with H_2^{B^T}([0,∞) × S). Since now ⟨B^T⟩_t = t ∧ T by book 7's corollary 6.16, the integrability constraint of H_2^{B^T}([0,∞) × S) is assured by that of H_2([0,∞) × S). Restricting to left continuous processes in H_2([0,∞) × S), which are predictable (corollary 5.17, book 7), one obtains with apparent notation that for every fixed T < ∞:

$$H_2^{l.c.}([0,\infty)\times S) \subset H_2^{B^T}([0,\infty)\times S).$$

The next result characterizes H_2^M-measurability of simple processes, and can be compared with lemma 2.25 which, as noted there, characterized H_2-measurability of simple processes.

Lemma 3.12 (On H_2^M-Measurability of Simple Functions) Let v(s,ω) be a simple process:

$$v(s,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s),$$

with 0 = t_0 < t_1 < ⋯ < t_{n+1} < ∞. Then v(s,ω) is predictable if and only if a_{-1}(ω) is measurable relative to σ_0(S), and a_j(ω) is measurable relative to σ_{t_j}(S) for all j.

Proof. If v(s,ω) is predictable it must be adapted and measurable by book 7's proposition 5.19. Hence a_{-1}(ω) is measurable relative to σ_0(S), and each other a_j(ω) is measurable relative to σ_s(S) for all s ∈ (t_j, t_{j+1}]. By the assumed right continuity of the filtration it follows that a_j(ω) is measurable relative to σ_{t_j}(S). Conversely, if v(t,ω) is given as above with all a_j(ω) measurable as described, then v(t,ω) is adapted. Since left continuous, book 7's corollary 5.17 obtains that it is predictable.

Exercise 3.13 Check that H_2^M([0,∞) × S) is a real vector space that contains v(t,ω) ≡ 1 and all simple processes as in 2.21 with appropriately measurable {a_j(ω)}_{j=0}^n ⊂ L_2(S,μ). Hint: This is not trivial, and requires book 7's proposition 6.18 and corollary 5.116.

Before proceeding with this section's development, we verify that H_2^M([0,∞) × S) is a normed space under ||·||_{H_2^M([0,∞)×S)} defined in 3.11.

Proposition 3.14 (||·||_{H_2^M} is a norm) ||·||_{H_2^M} ≡ ||·||_{H_2^M([0,∞)×S)} is a norm on H_2^M([0,∞) × S).

Proof. Recalling definition 4.3 of book 5, we only need to check the triangle inequality. If u, v ∈ H_2^M([0,∞) × S), then

$$\int_0^\infty u^2(s,\omega)\, d\langle M\rangle_s < \infty, \qquad \int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s < \infty,$$

each μ-a.e. Thus for ω in the intersection set of probability 1,

$$u(s,\cdot),\ v(s,\cdot) \in L_2\left(\mathbb{R}^+, \mathcal{B}(\mathbb{R}^+), d\langle M\rangle_s\right).$$

By Minkowski's inequality (proposition 4.15, book 5) applied to this space, it follows that for almost all ω,

$$\left(\int_0^\infty [u(s,\omega)+v(s,\omega)]^2\, d\langle M\rangle_s\right)^{1/2} \le \left(\int_0^\infty u^2(s,\omega)\, d\langle M\rangle_s\right)^{1/2} + \left(\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s\right)^{1/2}.$$

Squaring and taking expectations:

$$\|u+v\|^2_{H_2^M} \le \|u\|^2_{H_2^M} + \|v\|^2_{H_2^M} + 2E\left[\left(\int_0^\infty u^2\, d\langle M\rangle_s\right)^{1/2}\left(\int_0^\infty v^2\, d\langle M\rangle_s\right)^{1/2}\right].$$

Applying Hölder's inequality (proposition 3.46, book 4):

$$E\left[\left(\int_0^\infty u^2\, d\langle M\rangle_s\right)^{1/2}\left(\int_0^\infty v^2\, d\langle M\rangle_s\right)^{1/2}\right] \le \left(E\left[\int_0^\infty u^2\, d\langle M\rangle_s\right]\right)^{1/2}\left(E\left[\int_0^\infty v^2\, d\langle M\rangle_s\right]\right)^{1/2} = \|u\|_{H_2^M}\|v\|_{H_2^M},$$

and the triangle inequality follows:

$$\|u+v\|_{H_2^M} \le \|u\|_{H_2^M} + \|v\|_{H_2^M}.$$

3.2.3 Simple Process Approximations in H_2^M([0,∞) × S)

Assume that the collection of simple processes defined in 2.21 is proved to be dense in H_2^M([0,∞) × S). That is, assume that given v(t,ω) ∈ H_2^M([0,∞) × S), there exists a sequence of simple processes {v_n(t,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) so that:

$$\|v_n(s,\omega) - v(s,\omega)\|_{H_2^M([0,\infty)\times S)} \to 0.$$

In this case we should be able to show, as in proposition 2.40, that the stochastic integral of v(t,ω) ∈ H_2^M([0,∞) × S) with respect to M exists over [0,t] for all t as the L_2(S) limit of the sequence {∫_0^t v_n(s,ω) dM_s(ω)}_{n=1}^∞, since this is then a Cauchy sequence by 3.7. In addition, this integral should then satisfy the properties of proposition 2.47, and might even have a continuous martingale version I_t^M(ω), as in proposition 2.49.

To this end, the first step is to show that every process v(t,ω) ∈ H_2^M([0,∞) × S) can be approximated in the H_2^M([0,∞) × S) norm with simple processes. For this proof we will utilize the functional monotone class theorem of proposition 1.32 of book 5. Recalling the proof of proposition 2.57, if v(t,ω) ∈ H_2([0,∞) × S) is left continuous, then v(t,ω) ∈ H_2^{B^T}([0,∞) × S) for any fixed stopping time T < ∞. The approximating sequence {v_n(t,ω)}_{n=1}^∞ ⊂ H_2^{B^T}([0,∞) × S) of the next result is then an approximating sequence for v(t,ω)χ_{(0,T]}(t) in H_2([0,∞) × S), since the measurability conditions are identical (compare lemma 2.25 with lemma 3.12), as are the norming constraints.

Proposition 3.15 (Simple process approximations in H_2^M([0,∞) × S)) Given (S, σ(S), σ_t(S), μ)_u.c. and v(t,ω) ∈ H_2^M([0,∞) × S), there exists a sequence of simple processes {v_n(t,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) so that:

$$\|v(s,\omega) - v_n(s,\omega)\|_{H_2^M([0,\infty)\times S)} \to 0, \qquad (3.13)$$

where this norm is defined as in 3.11:

$$\|v(s,\omega) - v_n(s,\omega)\|^2_{H_2^M([0,\infty)\times S)} \equiv E\left[\int_0^\infty [v(s,\omega) - v_n(s,\omega)]^2\, d\langle M\rangle_s\right].$$

Proof. Assume that for arbitrary T the proposition is true for all bounded v(t,ω) ∈ H_2^M with v(t,ω) = 0 for t > T. For general v(t,ω) ∈ H_2^M, define v^(k)(t,ω) = v(t,ω)χ_{[0,k]}(t)χ_{{|v(t,ω)|≤k}}(ω), and note that v^(k)(t,ω) → v(t,ω) pointwise for all (t,ω). Now:

$$\left|v(t,\omega) - v^{(k)}(t,\omega)\right| \le 2|v(t,\omega)|,$$

and ||v(t,ω)||_{H_2^M} < ∞.

Expressing the H_2^M-norm in terms of the Doléans measure in 3.8 on [0,∞) × S, it follows that ||v(t,ω) − v^(k)(t,ω)||_{H_2^M} → 0 by Lebesgue's dominated convergence theorem (proposition 2.43, book 5). But then, since each v^(k)(t,ω) is bounded and v^(k)(t,ω) = 0 for t > k, there is by assumption for each k a sequence v_n^(k)(t,ω) of simple processes so that

$$\left\|v_n^{(k)}(t,\omega) - v^{(k)}(t,\omega)\right\|_{H_2^M} \to 0.$$

So for each k choose n_k with ||v_{n_k}^(k)(t,ω) − v^(k)(t,ω)||_{H_2^M} < 1/k, and thus by the triangle inequality, ||v(t,ω) − v_{n_k}^(k)(t,ω)||_{H_2^M} → 0. Hence the assumed limited result for bounded v(t,ω) with compact support implies the general result.

To circumvent the use of the (here) unproved Doléans measure we can proceed as follows. For general v(t,ω) ∈ H_2^M, the requirement that ||v(t,ω)||_{H_2^M} < ∞ implies that ∫_0^∞ v²(s,ω) d⟨M⟩_s < ∞, μ-a.e. On this set of probability 1, v^(k)(t,·) → v(t,·) pointwise for all t, and Lebesgue's dominated convergence theorem obtains that ∫_0^∞ [v^(k)(s,·) − v(s,·)]² d⟨M⟩_s → 0, μ-a.e. Now since:

$$\int_0^\infty \left[v^{(k)}(s,\omega) - v(s,\omega)\right]^2 d\langle M\rangle_s \le 4\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s,$$

which has finite expectation, Lebesgue's dominated convergence theorem of book 5's corollary 2.45 applies to again obtain that ||v(s,ω) − v^(k)(s,ω)||_{H_2^M} → 0.
So fix T and let H_T denote the collection of all v(t,ω) ∈ H_2^M such that v(t,ω) = 0 for t > T, and for which the conclusion of the proposition holds. Our goal is to show that H_T contains all bounded predictable v(t,ω) with v(t,ω) = 0 for t > T. To this end, recall that the predictable sigma algebra P is defined by P = σ[A′], where A′ denotes the semi-algebra:

$$\mathcal{A}' = \left\{(s,t] \times A_s \cup \left(\{0\} \times A_0\right)\ \middle|\ 0 \le s < t,\ A_s \in \sigma_s(S)\right\}.$$

We now check the three requirements of the functional monotone class theorem.

1. By exercise 3.13, H_T contains all functions χ_A with A ∈ A′, and we now show that this result extends to A = [0,∞) × S. To see this, let A_n ≡ (0,n] × S; then:

$$\left\|\chi_A - \chi_{A_n}\right\|^2_{H_2^M} = E\left[\int_n^\infty d\langle M\rangle_s\right] = E\left[\langle M\rangle_\infty - \langle M\rangle_n\right].$$

From 6.15 of book 7's proposition 6.18 it follows that for all t ≥ s:

$$E\left[\langle M\rangle_t - \langle M\rangle_s\right] = E\left[M_t^2 - M_s^2\right].$$

Doob's martingale convergence theorem of that book's corollary 5.116 applies due to the L_2-boundedness of M, and assures that M_n² → M_∞², μ-a.e., and that E[M_t²] is increasing with limit E[M_∞²] < ∞. By Lebesgue's dominated convergence theorem:

$$\left\|\chi_A - \chi_{A_n}\right\|^2_{H_2^M} = E\left[\langle M\rangle_\infty - \langle M\rangle_n\right] = E\left[M_\infty^2 - M_n^2\right] \to 0. \qquad (*)$$

By completeness of H_2^M this proves χ_A ∈ H_2^M, and by (*), 3.13 is satisfied with simple processes {χ_{A_n}}, and so χ_A ∈ H_T.
2. Next, we show that H_T is a real vector space: if u, v ∈ H_T then au + bv ∈ H_T for all real a, b. First, au + bv ∈ H_2^M by the triangle inequality. Also, if {u_n, v_n} are approximating sequences of simple processes for u, v, then au_n + bv_n is a simple process for all n and:

$$\|(au+bv) - (au_n+bv_n)\|_{H_2^M} \le |a|\,\|u-u_n\|_{H_2^M} + |b|\,\|v-v_n\|_{H_2^M}.$$

So {au_n + bv_n} is an approximating sequence for au + bv, and au + bv ∈ H_T.
3. Finally, assume that 0 ≤ v^(k)(t,ω) ∈ H_T and there is a bounded function v(t,ω) ≤ c so that v^(k)(t,ω) → v(t,ω) pointwise. By an application of remark 1.34 following the proof of the functional monotone class theorem, it is sufficient to assume that this sequence is increasing. Then v(t,ω) is predictable as the pointwise limit of predictable processes (corollary 1.10, book 5), and

$$\left|v^{(k)}(t,\omega) - v(t,\omega)\right| \le v(t,\omega).$$

Since v(t,ω) ∈ H_2^M by book 7's proposition 6.18:

$$\|v(t,\omega)\|^2_{H_2^M} \le c^2 E\left[\langle M\rangle_T\right] = c^2 E\left[M_T^2\right] < \infty,$$

it follows that ||v(t,ω) − v^(k)(t,ω)||_{H_2^M} → 0 by Lebesgue's dominated convergence theorem, using either justification at the beginning of the proof. Now since each v^(k)(t,ω) ∈ H_T, there are simple processes v_n^(k)(t,ω) so that ||v_{n_k}^(k)(t,ω) − v^(k)(t,ω)||_{H_2^M} < 1/k for appropriately chosen n_k. The triangle inequality obtains ||v(t,ω) − v_{n_k}^(k)(t,ω)||_{H_2^M} → 0, and thus v(t,ω) ∈ H_T.

The functional monotone class theorem now applies, and it can be concluded that H_T contains all bounded predictable functions with v(t,ω) = 0 for t > T, and hence all bounded v(t,ω) ∈ H_2^M with v(t,ω) = 0 for t > T, as was to be proved.

Corollary 3.16 If v(t,ω) and {v_n(t,ω)}_{n=1}^∞ are given as in the above proposition with 3.13 satisfied, then for any interval (t,t'] with 0 ≤ t < t' ≤ ∞:

$$\left\|v(s,\omega)\chi_{(t,t']}(s) - v_n(s,\omega)\chi_{(t,t']}(s)\right\|_{H_2^M([0,\infty)\times S)} \to 0. \qquad (3.14)$$

For notational convenience, (t,t'] ≡ (t,∞) when t' = ∞.

Proof. This follows from:

$$E\left[\int_0^\infty \left[v(s,\omega)\chi_{(t,t']}(s) - v_n(s,\omega)\chi_{(t,t']}(s)\right]^2 d\langle M\rangle_s\right] = E\left[\int_0^\infty \chi_{(t,t']}(s)\left[v(s,\omega) - v_n(s,\omega)\right]^2 d\langle M\rangle_s\right] \le E\left[\int_0^\infty \left[v(s,\omega) - v_n(s,\omega)\right]^2 d\langle M\rangle_s\right].$$

We will on occasion require a generalization of proposition 3.15 (and corollary 3.16) to cases for which v(t,ω) ∈ ⋂_{j=1}^m H_2^{M_j}([0,∞) × S) for {M_j}_{j=1}^m ⊂ M_2. For this, we replicate the above proof with minor modifications.

Proposition 3.17 (Simple process approximations in ⋂_{j=1}^m H_2^{M_j}([0,∞) × S)) Given (S, σ(S), σ_t(S), μ)_u.c. and v(t,ω) ∈ ⋂_{j=1}^m H_2^{M_j}([0,∞) × S), there exists a sequence of simple processes {v_n(t,ω)}_{n=1}^∞ ⊂ ⋂_{j=1}^m H_2^{M_j}([0,∞) × S) so that for all j:

$$\|v(s,\omega) - v_n(s,\omega)\|_{H_2^{M_j}([0,\infty)\times S)} \to 0. \qquad (3.15)$$

In addition, for any interval (t,t'] with 0 ≤ t < t' ≤ ∞:

$$\left\|v(s,\omega)\chi_{(t,t']}(s) - v_n(s,\omega)\chi_{(t,t']}(s)\right\|_{H_2^{M_j}([0,\infty)\times S)} \to 0, \quad \text{all } j. \qquad (3.16)$$

Proof. It is an exercise to check that if for arbitrary T the proposition is true for all bounded v(t,ω) ∈ ⋂_{j=1}^m H_2^{M_j} with v(t,ω) = 0 for t > T, then it is true for general v(t,ω) ∈ ⋂_{j=1}^m H_2^{M_j}.

So fix T and let H_T denote the collection of all predictable v(t,ω) such that v(t,ω) = 0 for t > T, and for which the conclusion of the proposition holds. Our goal is again to show that H_T contains all bounded predictable v(t,ω) with v(t,ω) = 0 for t > T.

Checking the three requirements of the functional monotone class theorem:

1. By exercise 3.13, H_T contains all functions χ_A with A ∈ A′, and this result extends to A = [0,∞) × S as above. For this, substitute each of the H_2^{M_j}-norms for the H_2^M-norm.

2. If u, v ∈ H_T then au + bv ∈ H_T for all real a, b, by the triangle inequality and the argument above, again substituting each of the H_2^{M_j}-norms for the H_2^M-norm.

3. Finally, assume that 0 ≤ v^(k)(t,ω) ∈ H_T and there is a bounded function v(t,ω) ≤ c so that v^(k)(t,ω) → v(t,ω) pointwise, and as above it is sufficient to assume that this sequence is increasing. Then v(t,ω) is predictable as the pointwise limit of predictable processes, and

$$\left|v^{(k)}(t,\omega) - v(t,\omega)\right| \le v(t,\omega).$$

Since v(s,ω) ∈ ⋂_{j=1}^m H_2^{M_j}([0,∞) × S), as above:

$$\|v(s,\omega)\|^2_{H_2^{M_j}} \le c^2 E\left[\langle M_j\rangle_T\right] = c^2 E\left[(M_j)_T^2\right] < \infty,$$

and it follows by Lebesgue's dominated convergence theorem, again using either justification at the beginning of the previous proof, that ||v(t,ω) − v^(k)(t,ω)||_{H_2^{M_j}} → 0 for all j. Now since each v^(k)(t,ω) ∈ H_T, there are simple processes v_n^(k)(t,ω) as above so that:

$$\left\|v_{n_k}^{(k)}(t,\omega) - v^{(k)}(t,\omega)\right\|_{H_2^{M_j}} < 1/k$$

for all j, for appropriately chosen n_k. The triangle inequality obtains:

$$\left\|v(t,\omega) - v_{n_k}^{(k)}(t,\omega)\right\|_{H_2^{M_j}} \to 0$$

for all j, and thus v(t,ω) ∈ H_T.

The proof of 3.16 is the same as that of corollary 3.16.

3.2.4 The General Stochastic Integral

With H_2^M([0,∞) × S) defined, we first consolidate the above results on simple processes. For this, let M_t ∈ M_2 be a continuous L_2-bounded martingale with M_0 = 0, and v(t,ω) ∈ H_2^M([0,∞) × S) a simple process defined as in 2.21:

$$v(s,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^n a_j(\omega)\chi_{(t_j,t_{j+1}]}(s).$$

Thus by lemma 3.12, a_{-1}(ω) is measurable relative to σ_0(S) and a_j(ω) is measurable relative to σ_{t_j}(S) for all j. Finally, let ∫_0^∞ v(s,ω) dM_s(ω) be defined as in 3.1:

$$\int_0^\infty v(s,\omega)\, dM_s(\omega) = \sum_{j=0}^n a_j(\omega)\left[M_{t_{j+1}}(\omega) - M_{t_j}(\omega)\right].$$

Proposition 3.18 (Simple process integrals redux) Given (S, σ(S), σ_t(S), μ)_u.c. and v(t,ω) ∈ H_2^M([0,∞) × S) a simple process as above:

1. Mean equal to zero: For 0 ≤ t < t' ≤ ∞:

$$E\left[\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right] = 0,$$

where

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega).$$

2. The process V_t^M(ω), defined in 3.5:

$$V_t^M(\omega) \equiv \int_0^t v(s,\omega)\, dM_s(\omega),$$

is a continuous martingale on (S, σ(S), σ_t(S), μ).

3. Itô M-Isometry:

$$\left\|V_t^M(\omega)\right\|_{L_2(S)} = \|v(s,\omega)\|_{H_2^M([0,t]\times S)}, \qquad (3.17)$$

where

$$\|v(s,\omega)\|^2_{H_2^M([0,t]\times S)} \equiv E\left[\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\right].$$

Proof. 1 is 3.4 of proposition 3.1, while 2 is proposition 3.4 given 3.11. Finally, 3 is 3.7.
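A Monte Carlo sketch (illustrative choices throughout, not from the text) of proposition 3.18, taking the integrator M_t = B_{t∧T} of remark 3.11 so that ⟨M⟩_s = s ∧ T, and the simple process with coefficients a_j = χ_{{B_{t_j} > 0}}: the mean of V = Σ_j a_j(M_{t_{j+1}} − M_{t_j}) should be 0, and the M-isometry gives E[V²] = Σ_j E[a_j²]·[(t_{j+1}∧T) − (t_j∧T)], which here equals 3 × (1/2) × 0.25 = 0.375 since a_0 = 0 and the last grid interval lies beyond T.

```python
import random

def simple_integral_stats(T=1.0, n_paths=20000, seed=11):
    """Sketch of proposition 3.18 for M_t = B_{t ^ T} (so <M>_s = s ^ T)
    and the simple predictable process with a_j = 1_{B_{t_j} > 0}.
    Returns (mean, second moment) of V = sum_j a_j (M_{t_{j+1}} - M_{t_j})."""
    rng = random.Random(seed)
    grid = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]   # last interval exceeds T
    s1 = s2 = 0.0
    for _ in range(n_paths):
        m, v = 0.0, 0.0
        for tj, tj1 in zip(grid, grid[1:]):
            a = 1.0 if m > 0 else 0.0          # a_j is sigma_{t_j}(S)-measurable
            # increment of the stopped motion over (t_j, t_{j+1}]:
            dm = rng.gauss(0.0, (min(tj1, T) - min(tj, T)) ** 0.5)
            v += a * dm
            m += dm                            # m now equals M_{t_{j+1}}
        s1 += v
        s2 += v * v
    return s1 / n_paths, s2 / n_paths

mean, second = simple_integral_stats()
print(mean, second)   # mean near 0, second moment near 0.375
```

Note that the cross terms E[a_i a_j ΔM_i ΔM_j] vanish exactly as in the proof of the isometry, because each a_j is measurable with respect to σ_{t_j}(S) while the increment over (t_j, t_{j+1}] has conditional mean zero.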
We now prove existence of, and develop properties of, the integral ∫_0^t v(s,ω) dM_s for v(s,ω) ∈ H_2^M([0,∞) × S), following the program outlined for the Itô integral. Any missing details are left as an exercise for the reader.

Proposition 3.19 (Stochastic integral on H_2^M([0,∞) × S)) Given the filtered probability space (S, σ(S), σ_t(S), μ)_u.c., let M_t ∈ M_2 be a continuous L_2-bounded martingale with M_0 = 0 and v(s,ω) ∈ H_2^M([0,∞) × S). If {v_n(t,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) is any approximating sequence of simple processes satisfying 3.13, then for any 0 ≤ t < t' ≤ ∞, the stochastic integral:

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega),$$

is well defined as the L_2(S)-limit of the sequence of integrals of the simple processes {v_n(s,ω)χ_{(t,t']}(s)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) as defined in 3.1. That is:

$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} \to 0. \qquad (3.18)$$

By "well defined" is meant that for every such t, t', ∫_t^{t'} v(s,ω) dM_s(ω) is uniquely defined μ-a.e., independent of the approximating sequence.

Proof. As before, {v_n(t,ω)}_{n=1}^∞ is a Cauchy sequence in H_2^M([0,∞) × S):

$$\|v_n(s,\omega) - v_m(s,\omega)\|_{H_2^M} \le \|v_n(s,\omega) - v(s,\omega)\|_{H_2^M} + \|v(s,\omega) - v_m(s,\omega)\|_{H_2^M}.$$

Since v_n(t,ω) − v_m(t,ω) is a simple process for all n, m, and for such processes 3.1 obtains:

$$\int_0^\infty \left[v_n(s,\omega) - v_m(s,\omega)\right] dM_s(\omega) = \int_0^\infty v_n(s,\omega)\, dM_s(\omega) - \int_0^\infty v_m(s,\omega)\, dM_s(\omega),$$

it follows from 3.17 that {∫_0^∞ v_n(s,ω) dM_s(ω)}_{n=1}^∞ is a Cauchy sequence in L_2(S). This sequence therefore has a limit V_∞^M(ω) ∈ L_2(S) with

$$\left\|V_\infty^M(\omega) - \int_0^\infty v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} \to 0.$$

To see that V_∞^M(ω) is well defined, assume that {ṽ_n(t,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) is another approximating sequence of simple processes for v(s,ω) satisfying 3.13, and let Ṽ_∞^M(ω) then be derived similarly. That ||V_∞^M(ω) − Ṽ_∞^M(ω)||_{L_2(S)} = 0 is proved exactly as in proposition 2.40, and thus V_∞^M(ω) = Ṽ_∞^M(ω), μ-a.e. The proof for ∫_t^{t'} v(s,ω) dM_s(ω) is then the same, using corollary 3.16.
Corollary 3.20 For all t, t' with 0 ≤ t < t' ≤ ∞, the integral ∫_t^{t'} v(s,ω) dM_s(ω) is equivalently defined μ-a.e. by:

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega). \qquad (3.19)$$

Proof. With {v_n(s,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) an approximating sequence for v(s,ω) as in proposition 3.15 above, {v_n(s,ω)χ_{(t,t']}(s)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) is an approximating sequence for v(s,ω)χ_{(t,t']}(s) by corollary 3.16. Thus by 3.18 applied to v(s,ω)χ_{(t,t']}(s), the integral on the right of 3.19 is defined by:

$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) - \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega)\right\|_{L_2(S)} \to 0.$$

Comparing to 3.18, 3.19 is obtained.

Exercise 3.21 Show, given linearity of this integral (which is 2 of proposition 3.24 below), that 3.19 obtains:

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) = \int_0^{t'} v(s,\omega)\, dM_s(\omega) - \int_0^t v(s,\omega)\, dM_s(\omega), \quad \mu\text{-a.e.} \qquad (3.20)$$

Remark 3.22 By 3.2:

$$\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) \equiv \int_t^{t'} v_n(s,\omega)\, dM_s(\omega),$$

and thus 3.18 can be expressed:

$$\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) \to_{L_2(S)} \int_t^{t'} v(s,\omega)\, dM_s(\omega). \qquad (*)$$

The next result confirms that the L_2(S)-convergence of proposition 3.19 allows other conclusions, as in the case of the Itô integral.
Proposition 3.23 (Convergence in probability) Given (S, σ(S), σ_t(S), μ)_u.c., let M_t ∈ M_2 be a continuous L_2-bounded martingale with M_0 = 0 and {v(s,ω), {v_n(s,ω)}_{n=1}^∞} ⊂ H_2^M([0,∞) × S). For 0 ≤ t < t' ≤ ∞, 3.18 of proposition 3.19 implies convergence in probability:

$$\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) \to_P \int_t^{t'} v(s,\omega)\, dM_s(\omega). \qquad (3.21)$$

Further, the sequence {∫_t^{t'} v_n(s,ω) dM_s(ω)} is uniformly integrable (definition 5.5, book 6).

Proof. The proof for proposition 2.46 works identically here, with only a change in notation from B to M. For example, convergence in probability is obtained from Chebyshev's inequality of book 4's proposition 3.33, that for any ε > 0:

$$\Pr\left[\left|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right| > \varepsilon\right] \le \varepsilon^{-2}\left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|^2_{L_2(S)}.$$

Hence this probability converges to zero for all ε.

For uniform integrability, convergence in L_2(S) implies that:

$$\sup_n E\left[\left|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right|^2\right] < \infty.$$

This is because:

$$\left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} \le \left\|\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} + \left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)},$$

and the last expression converges to 0. Denoting X_n ≡ ∫_t^{t'} v_n(s,ω) dM_s(ω) to simplify notation, then as N → ∞:

$$\sup_n \int_{|X_n| \ge N} |X_n|\, d\mu \le \frac{1}{N}\sup_n \int_{|X_n| \ge N} |X_n|^2\, d\mu \le \frac{1}{N}\sup_n E\left[|X_n|^2\right] \to 0.$$

The next result summarizes properties of this integral, providing all the
usual results other than continuity, which is addressed below.

Proposition 3.24 (Properties of the stochastic integral) Given (S, σ(S), σ_t(S), μ)_u.c., let M_t ∈ M_2 be a continuous L_2-bounded martingale with M_0 = 0, v(s,ω), u(s,ω) ∈ H_2^M([0,∞) × S), and 0 ≤ t < t' ≤ ∞.

1. For r with t < r < t', then for almost all ω ∈ S:

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) = \int_t^r v(s,\omega)\, dM_s(\omega) + \int_r^{t'} v(s,\omega)\, dM_s(\omega).$$

2. For constant a ∈ ℝ, av(s,ω) + u(s,ω) ∈ H_2^M([0,∞) × S), and for almost all ω ∈ S:

$$\int_t^{t'} \left[av(s,\omega) + u(s,\omega)\right] dM_s(\omega) = a\int_t^{t'} v(s,\omega)\, dM_s(\omega) + \int_t^{t'} u(s,\omega)\, dM_s(\omega).$$

3. Mean equal to zero:

$$E\left[\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right] = 0. \qquad (3.22)$$

4. The Itô M-Isometry:

$$\left\|\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} = \|v(s,\omega)\|_{H_2^M([t,t']\times S)}, \qquad (3.23)$$

or with more explicit notation, after squaring both sides:

$$\int_S \left(\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right)^2 d\mu = \int_S \int_t^{t'} v^2(s,\omega)\, d\langle M\rangle_s\, d\mu.$$

5. The process:

$$V_t^M(\omega) \equiv \int_0^t v(s,\omega)\, dM_s(\omega)$$

is a martingale relative to the filtration {σ_t(S)}.

Proof. It is an exercise to check that 1 is true for simple processes by 3.1 and 3.2. More generally, 1 follows from 2 by 3.19:

$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) = \int_0^\infty \left[v(s,\omega)\chi_{(t,r]}(s) + v(s,\omega)\chi_{(r,t']}(s)\right] dM_s(\omega) = \int_t^r v(s,\omega)\, dM_s(\omega) + \int_r^{t'} v(s,\omega)\, dM_s(\omega).$$

For 2, first note that av(s,ω) + u(s,ω) ∈ H_2^M([0,∞) × S) since ||·||_{H_2^M([t,t']×S)} is a norm by proposition 3.14, and thus:

$$\|av(s,\omega) + u(s,\omega)\|_{H_2^M} \le |a|\,\|v(s,\omega)\|_{H_2^M} + \|u(s,\omega)\|_{H_2^M}.$$

Now let {v_n(s,ω)}_{n=1}^∞ and {u_n(s,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) be approximating sequences as in proposition 3.15. It is an exercise to check that av_n(s,ω) + u_n(s,ω) is an approximating sequence for av(s,ω) + u(s,ω), and since 2 is true for simple processes (simplifying notation):

$$\left\|\int_t^{t'} [av+u]\, dM_s - a\int_t^{t'} v\, dM_s - \int_t^{t'} u\, dM_s\right\|_{L_2(S)} \le \left\|\int_t^{t'} [av+u]\, dM_s - \int_t^{t'} [av_n+u_n]\, dM_s\right\|_{L_2(S)} + |a|\left\|\int_t^{t'} v\, dM_s - \int_t^{t'} v_n\, dM_s\right\|_{L_2(S)} + \left\|\int_t^{t'} u\, dM_s - \int_t^{t'} u_n\, dM_s\right\|_{L_2(S)}.$$

This upper bound converges to 0 as n → ∞ by proposition 3.19, and thus the expression in the first norm is 0, μ-a.e.

If {v_n(s,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) is an approximating sequence for v(s,ω), then convergence in probability of ∫_t^{t'} v_n(s,ω) dM_s(ω) to ∫_t^{t'} v(s,ω) dM_s(ω) by proposition 3.23 assures convergence in distribution by book 2's proposition 5.21. Then uniform integrability of this sequence of integrals by proposition 3.23 obtains convergence of expectations by book 6's proposition 5.6.

Since 3 is true for simple processes by proposition 3.1:

$$E\left[\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right] = \lim_{n\to\infty} E\left[\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right] = 0.$$

Now with {v_n(s,ω)}_{n=1}^∞ as above, L_2-convergence ∫_t^{t'} v_n(s,ω) dM_s(ω) → ∫_t^{t'} v(s,ω) dM_s(ω) assures

$$\left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)} \to \left\|\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(S)},$$

by book 5's proposition 4.20. Similarly, since ||·||_{H_2^M([t,t']×S)} is a norm by proposition 3.14, ||v_n(s,ω)||_{H_2^M([t,t']×S)} → ||v(s,ω)||_{H_2^M([t,t']×S)}. Thus 4 follows from 3.17.

Finally, ∫_t^{t'} v_n(s,ω) dM_s(ω) is an L_2-martingale for all n by proposition 3.4, and L_2-convergence ∫_t^{t'} v_n(s,ω) dM_s(ω) → ∫_t^{t'} v(s,ω) dM_s(ω) assures that ∫_t^{t'} v(s,ω) dM_s(ω) is also a martingale by book 7's proposition 5.32.

Remark 3.25 (Variance of the stochastic integral) Note that 3 and 4 above obtain an expression for the variance of the stochastic integral:

$$Var\left[\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right] = \int_S \int_t^{t'} v^2(s,\omega)\, d\langle M\rangle_s\, d\mu. \qquad (3.24)$$

The next result shows that this stochastic integral is not only linear in
its integrand, but also linear in its integrator.

Proposition 3.26 (Integral is linear w.r.t. integrators) Given (S, σ(S), σ_t(S), μ)_u.c., let M_t, N_t ∈ M_2 be continuous L_2-bounded martingales with M_0 = N_0 = 0, and v(s,ω) ∈ H_2^M([0,∞) × S) ∩ H_2^N([0,∞) × S). Then v(s,ω) ∈ H_2^{M+N}([0,∞) × S), and if 0 ≤ t ≤ ∞:

$$\int_0^t v(s,\omega)\, d\left[M_s + N_s\right](\omega) = \int_0^t v(s,\omega)\, dM_s(\omega) + \int_0^t v(s,\omega)\, dN_s(\omega), \quad \mu\text{-a.e.} \qquad (3.25)$$

Proof. By book 7's proposition 6.30:

$$\langle M+N\rangle_t = \langle M\rangle_t + \langle N\rangle_t + 2\langle M,N\rangle_t,$$

and by that book's proposition 6.33:

$$|\langle M,N\rangle_t| \le \langle M\rangle_t^{1/2}\langle N\rangle_t^{1/2}.$$

Since 2ab ≤ a² + b² for all real a, b, this obtains:

$$\langle M+N\rangle_t \le 2\left(\langle M\rangle_t + \langle N\rangle_t\right),$$

and thus v(s,ω) ∈ H_2^{M+N}([0,∞) × S).

It is an exercise to check that 3.25 is valid when v(s,ω) is a simple process, so let {v_n(s,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) ∩ H_2^N([0,∞) × S) be an approximating sequence relative to these two spaces as in proposition 3.17. It is then an approximating sequence relative to H_2^{M+N}([0,∞) × S) by the first part. Denoting integrals of v(s,ω) by V^M, etc., and those of v_n(s,ω) by V_n^M, etc., then since V_n^{M+N} = V_n^M + V_n^N:

$$\left\|V^{M+N} - V^M - V^N\right\|_{L_2(S)} \le \left\|V^{M+N} - V_n^{M+N}\right\|_{L_2(S)} + \left\|V^M - V_n^M\right\|_{L_2(S)} + \left\|V^N - V_n^N\right\|_{L_2(S)}.$$

Thus by proposition 3.19, ||V^{M+N} − V^M − V^N||_{L_2(S)} = 0, and 3.25 is proved.
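The defining sums make linearity in the integrator transparent at the simple-process level; a minimal sketch (all sample values are arbitrary illustrations):

```python
def simple_integral(coeffs, increments):
    """sum_j a_j * (X_{t_{j+1}} - X_{t_j}) for a simple process, per 3.1."""
    return sum(a * dx for a, dx in zip(coeffs, increments))

a = [1.0, -2.0, 0.5]                   # simple-process coefficients a_j(w)
dM = [0.3, -0.1, 0.2]                  # sample-path increments of M
dN = [-0.4, 0.25, 0.05]                # sample-path increments of N
dMN = [m + n for m, n in zip(dM, dN)]  # increments of M + N

lhs = simple_integral(a, dMN)
rhs = simple_integral(a, dM) + simple_integral(a, dN)
print(abs(lhs - rhs))  # 0 up to float rounding
```

The content of proposition 3.26 is that this pathwise identity survives the L_2(S) limits, once v lies in both integrand spaces and hence, by the quadratic-variation bound above, in H_2^{M+N}([0,∞) × S).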

3.2.5 A Continuous Version


Finally, as in the case of the Itô integral, we prove the existence of a
continuous version of this stochastic integral.

Proposition 3.27 (A continuous version of V_t^M(ω)) Given (S, σ(S), σ_t(S), μ)_u.c., let M_t ∈ M_2 be a continuous L_2-bounded martingale with M_0 = 0 and v(s,ω) ∈ H_2^M([0,∞) × S). Then there exists a process I_t^M(ω), continuous in t ∈ [0,∞) for all ω ∈ S, so that with the integrals below defined in 3.18:

1. If v(s,ω) is a simple process, then:

$$I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega), \quad \text{for all } \omega, \text{ all } t. \qquad (3.26)$$

2. For general v(s,ω), for each t:

$$I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega), \quad \mu\text{-a.e.} \qquad (3.27)$$

In other words, I_t^M(ω) is a version or a modification (definition 5.10, book 7) of V_t^M(ω) of proposition 3.24.

3. The process I_t^M(ω) is an L_2-bounded martingale relative to the filtration {σ_t(S)}.

4. The process I_t^M(ω) satisfies properties 1-4 of proposition 3.24. In particular, we have the Itô M-Isometry for t ≤ ∞:

$$\left\|I_t^M(\omega)\right\|^2_{L_2(S)} = \|v(s,\omega)\|^2_{H_2^M([0,t]\times S)}. \qquad (3.28)$$

5. If M_t, N_t ∈ M_2 and v(s,ω) ∈ H_2^M([0,∞) × S) ∩ H_2^N([0,∞) × S), then 3.25 is satisfied with I_t^{M+N}(ω), I_t^M(ω) and I_t^N(ω):

$$I_t^{M+N}(\omega) = I_t^M(\omega) + I_t^N(\omega), \quad \mu\text{-a.e.} \qquad (3.29)$$

Proof. This is largely a change of notation from earlier propositions, but for completeness we provide some of the details.

First, 1 is proposition 3.18. For general v(s,ω), fix the interval I = [0,T] for T < ∞ and let {v_n(s,ω)}_{n=1}^∞ ⊂ H_2^M([0,∞) × S) be an approximating sequence of simple processes as in proposition 3.15, and so by corollary 3.16:

$$\|v_n(s,\omega) - v(s,\omega)\|_{H_2^M([0,T]\times S)} \to 0. \qquad (*)$$

Changing notation a bit, define I_n^M(t,ω) on [0,T] × S by:

$$I_n^M(t,\omega) = \int_0^t v_n(s,\omega)\, dM_s(\omega),$$

and so by proposition 3.19:

$$I_n^M(t,\omega) \to_{L_2(S)} V_t^M(\omega) \equiv \int_0^t v(s,\omega)\, dM_s(\omega), \quad t \le \infty.$$

For each n, I_n^M(t,ω) is a continuous martingale relative to {σ_t(S)} by proposition 3.18, and hence so too is I_n^M(t,ω) − I_m^M(t,ω) for any n, m. Therefore, applying Doob's martingale maximal inequality 1 (proposition 5.46, book 7) with p = 2, and then the Itô M-isometry in 3.23, derives for any ε > 0 that, as n, m → ∞:

$$\Pr\left[\sup_{0\le t\le T}\left|I_n^M(t,\omega) - I_m^M(t,\omega)\right| \ge \varepsilon\right] \le \frac{1}{\varepsilon^2} E\left[\left|I_n^M(T,\omega) - I_m^M(T,\omega)\right|^2\right] = \frac{1}{\varepsilon^2} E\left[\int_0^T \left|v_n(s,\omega) - v_m(s,\omega)\right|^2 d\langle M\rangle_s\right] \to 0.$$

The last step follows because fvn (s; !)g1 M


n=1 is a Cauchy sequence in H2 ([0; 1)
S) by ( ) and the triangle inequality:

kvn (s; !) vm (s; !)kH M kvn (s; !) v(s; !)kH M +kvm (s; !) v(s; !)kH M :
2 2 2

k (T )
Now let =2 and de…ne an increasing sequence fnk g1
k=1 so that
" #
Pr sup I M(T ) (t; !) I M(T ) (t; !) 2 k
2 k
:
0 t T nk+1 nk

(T )
De…ning Ak S by
( )
(T )
Ak !j sup I M(T ) (t; !) I M(T ) (t; !) 2 k
;
0 t T nk+1 nk

P (T )
then 1 k=1 [Ak ] 1 and the Borel-Cantelli lemma of book 2’s proposition
(T )
2.6 applies to yield that [lim sup Ak ] = 0: Hence, for ! outside this set of
measure zero, there are at most …nitely many k with

sup I M(T ) (t; !) I M(T ) (t; !) 2 k


:
0 t T nk+1 nk

Equivalently, $\{I_{n_k^{(T)}}^M(t,\omega)\}_{k=1}^\infty$ converges uniformly for $t\in[0,T]$ outside this set of measure zero, and hence converges to a continuous limit function we denote by $I_t^M(\omega)$. For $\omega\in\limsup A_k^{(T)}$, define $I_t^M(\omega)\equiv 0$ so $I_t^M(\omega)$ is now continuous in $t\in[0,T]$ for all $\omega$.

To define $I_t^M(\omega)$ for $t\in[0,\infty)$, we perform the above construction for $T = 1,2,\dots$, choosing $\{n_k^{(T+1)}\}_{k=1}^\infty \subset \{n_k^{(T)}\}_{k=1}^\infty$ with $n_1^{(T+1)} > n_1^{(T)}$. Now if we define $n_k \equiv n_k^{(k)}$, then outside a set of measure $0$, $\{I_{n_k}^M(t,\omega)\}_{k=1}^\infty$ converges uniformly for $t\in[0,T]$ for all $T$, and thus to a continuous $I_t^M(\omega)$ defined for $t\in[0,\infty)$. On this exceptional set of measure $0$ we again define $I_t^M(\omega)\equiv 0$.
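The subsequence device just used is the standard route from $L^2$ convergence to almost-sure (locally uniform) convergence: pick indices along which the probability bounds are summable, then apply Borel-Cantelli. A minimal numerical sketch of this mechanism, under the illustrative assumption of a toy sequence $X_n\sim N(0,1/n)$ (so $X_n\to 0$ in $L^2$, with Chebyshev playing the role that Doob's maximal inequality plays above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence X_n ~ N(0, 1/n): E[X_n^2] = 1/n -> 0, i.e. L2 convergence.
# Chebyshev: Pr[|X_n| >= eps] <= 1/(n * eps^2).  Along n_k = 16^k with
# eps_k = 2^{-k} the bound is 4^{-k}, which is summable, so Borel-Cantelli
# yields |X_{n_k}| < 2^{-k} for all but finitely many k, almost surely.
K = 20
bounds = [1.0 / (16**k * (2.0**-k) ** 2) for k in range(1, K + 1)]
total = sum(bounds)  # geometric sum of 4^{-k}, k = 1..K

# Simulate many independent "paths" and count, per path, how many k see a
# violation |X_{n_k}| >= 2^{-k}; almost every path sees only finitely many.
paths = 2000
violations = np.zeros(paths)
for k in range(1, K + 1):
    x = rng.normal(0.0, np.sqrt(1.0 / 16**k), size=paths)
    violations += np.abs(x) >= 2.0**-k
```

Here the subsequence $n_k = 16^k$ and the Normal toy model are assumptions for illustration only; in the proposition, the summable bounds $2^{-k}$ come from the $H_2^M$-Cauchy property of $\{v_n\}$ via Doob's inequality.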
As a subsequence of the sequence $I_n^M(t,\omega)$, which converges in $L^2(S)$, for each $t$:
$$I_{n_k}^M(t,\omega) \to_{L^2(S)} V_t^M(\omega),$$
and so for each $t$:
$$I_t^M(\omega) = V_t^M(\omega),\quad \mu\text{-a.e.}, \qquad (**)$$
which is property 2.
That $I_t^M$ is an $L^2$-bounded martingale is proved identically as in proposition 2.52 for the Itô integral. That this process satisfies properties 1-4 of proposition 3.24 is left as an exercise in restating corollary 2.50, while the verification of 3.29 is an exercise in applying the $\mu$-a.e. statements above.

3.2 INTEGRALS W.R.T. CONTINUOUS L2-BOUNDED MARTINGALES 109

Definition 3.28 (Final stochastic integral in $H_2^M$) For $v(t,\omega)\in H_2^M([0,\infty)\times S)$, define $\int_0^t v(s,\omega)\,dM_s(\omega)$ by:
$$\int_0^t v(s,\omega)\,dM_s(\omega) = I_t^M(\omega), \qquad (3.30)$$
where $I_t^M(\omega)$ is constructed in proposition 3.27.


Exercise 3.29 Justify that for $0\le t < t' \le \infty$, if we define $I_{t,t'}^M(\omega)$ as the $t'$-continuous version of $\int_t^{t'} v(s,\omega)\,dM_s(\omega)$, then:
$$I_{t,t'}^M(\omega) = I_{t'}^M(\omega) - I_t^M(\omega),\quad \mu\text{-a.e.}$$
Hint: 3.20.
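For simple integrands, the identity in exercise 3.29 is exact pathwise: both sides reduce to sums of increments of the integrator clipped to the integration window. A numerical sketch, using a simulated Brownian path as a stand-in integrator (an illustrative assumption only, since $B_t$ is not $L^2$-bounded; the partition and coefficients below are likewise hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Brownian path as the integrator M (illustrative stand-in only).
T, steps = 1.0, 1000
times = np.linspace(0.0, T, steps + 1)
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / steps), steps))])

def M(t):
    """Read the simulated path at the last grid time <= t."""
    return B[np.searchsorted(times, t, side="right") - 1]

# Simple process v = sum_j a_j * 1_{(t_j, t_{j+1}]} on a fixed partition.
part = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
a = np.array([1.5, -2.0, 0.7, 3.0])

def I(t, t2):
    """int_t^{t2} v dM: each interval (t_j, t_{j+1}] contributes a_j times
    the increment of M over that interval clipped to [t, t2]."""
    lo = np.clip(part[:-1], t, t2)
    hi = np.clip(part[1:], t, t2)
    return float(sum(a_j * (M(u) - M(l)) for a_j, u, l in zip(a, hi, lo)))

# Pathwise additivity I_{t,t'} = I_{0,t'} - I_{0,t} on several windows:
gap = max(
    abs(I(t, t2) - (I(0.0, t2) - I(0.0, t)))
    for (t, t2) in [(0.1, 0.6), (0.3, 0.9), (0.0, 1.0)]
)
```

The general case of the exercise then follows by the $L^2$-limiting construction above; the simple-process case is where the algebra is visible.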

3.2.6 The Kunita-Watanabe Inequality

In this section we develop an important technical result on Lebesgue-Stieltjes integration that can be seen as a generalized Cauchy-Schwarz inequality, extending book 4's corollary 3.48. This inequality is named for a 1967 result by Hiroshi Kunita (1937-2008) and Shinzo Watanabe (1935-), and states (in the version below) that if $M_t, N_t\in\mathcal{M}_2$, and thus are continuous $L^2$-bounded martingales with $M_0 = N_0 = 0$, and $v(t,\omega), w(t,\omega)$ are measurable processes, then for almost all $\omega$:
$$\left(\int_0^t |v(s,\omega)w(s,\omega)|\,d|\langle M,N\rangle|_s\right)^2 \le \int_0^t v^2(s,\omega)\,d\langle M\rangle_s \int_0^t w^2(s,\omega)\,d\langle N\rangle_s, \qquad (3.31)$$
for $0\le t < \infty$. Here $d|\langle M,N\rangle|_s$ is the Borel measure associated with the total variation of the signed measure $\mu_{\langle M,N\rangle_t}$ over $[0,s]$. We develop background details below.

As a corollary of this result, if $v(t,\omega)\in H_2^M([0,\infty)\times S)$ and $w(t,\omega)\in H_2^N([0,\infty)\times S)$, then $v(t,\omega)w(t,\omega)$ is integrable relative to $d|\langle M,N\rangle|_s$. We will see in proposition 3.87 below that this result in fact applies to continuous local martingales $M_t, N_t$. The only reason for the added restriction of $L^2$-boundedness here is to better fit in with this chapter's focus and coming applications.

Measures Induced by BV Functions

To set the stage, a review and extension of some earlier results is warranted. First, for a given continuous martingale $M_t$ (which is a local martingale by book 7's corollary 5.85), the Doob-Meyer decomposition theorem of book 7's proposition 6.12 states that the quadratic variation process $\langle M\rangle_t$ is a continuous, increasing, adapted process with $\langle M\rangle_0 = 0$. It is also the unique such process so that $M_t^2 - \langle M\rangle_t$ is a continuous local martingale, but we do not need this fact here. Thus the integrals on the right of 3.31 can be defined as Riemann-Stieltjes integrals if the integrands are continuous or of bounded variation in $s$ for given $\omega$ (book 3's proposition 4.19), and can also be defined as Lebesgue-Stieltjes integrals if the integrands are Borel measurable in $s$ for given $\omega$ (book 5's chapter 2, using the Borel measure $\mu_{\langle M\rangle_t}$ induced by $\langle M\rangle_t$ as defined in book 1's section 5.2). Thus identifying notational conventions:
$$d\langle M\rangle_s \equiv d\mu_{\langle M\rangle_s},\qquad d\langle N\rangle_s \equiv d\mu_{\langle N\rangle_s}.$$

Remark 3.30 (Parametrization of measures) Note that here and below, all measures are parametrized by $\omega\in S$. So for example, $d\langle M\rangle_s \equiv d\langle M\rangle_s(\omega)$, etc., and in the above integrals, both the integrands and measures vary with $\omega$.

Given continuous martingales $M_t, N_t$, book 7's proposition 6.29 states that the covariation process $\langle M,N\rangle_t$ is a continuous, adapted, bounded variation process over every compact $[0,T]$ with $\langle M,N\rangle_0 = 0$. It is also the unique such process for which $M_tN_t - \langle M,N\rangle_t$ is a continuous local martingale, but again we do not need this here. By definition 6.25 of book 7 this covariation process is given as a difference of the increasing quadratic variation processes:
$$\langle M,N\rangle_t \equiv \frac{1}{4}\left[\langle M+N\rangle_t - \langle M-N\rangle_t\right],$$
and thus induces the signed measure (definition 7.5, book 5):
$$\mu_{\langle M,N\rangle_t} \equiv \frac{1}{4}\mu_{\langle M+N\rangle_t} - \frac{1}{4}\mu_{\langle M-N\rangle_t}, \qquad (3.32)$$
as long as at least one of $\frac{1}{4}\mu_{\langle M+N\rangle_t}$, $\frac{1}{4}\mu_{\langle M-N\rangle_t}$ is a finite measure. In this case integrals with respect to this measure are equivalently denoted in terms of $d\langle M,N\rangle_t$ or $d\mu_{\langle M,N\rangle_t}$.

Exercise 3.31 Demonstrate that on the semi-algebra $\mathcal{A}'$ of right semi-closed intervals $\{(a,b]\}$ which generates the Borel sigma algebra $\mathcal{B}(\mathbb{R})$:
$$\mu_{\langle M,N\rangle_t}[(a,b]] = \langle M,N\rangle_b - \langle M,N\rangle_a. \qquad (3.33)$$
Thus on $\mathcal{A}'$ this measure is evaluated exactly as was $\mu_F[(a,b]]$ when $F$ is an increasing function. Hint: Express $\mu_{\langle M,N\rangle_t}[(a,b]]$ in terms of $\frac{1}{4}\mu_{\langle M\pm N\rangle_t}[(a,b]]$, apply proposition 5.7 of book 1, then proposition 6.30 of book 7.
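The evaluation in 3.33 makes the induced measures concrete on intervals: for a bounded variation $F = P - N$ with $P, N$ increasing, $\mu_F[(a,b]] = F(b) - F(a)$, and this value is the same whether computed from $F$ directly or from the difference of increasing functions, mirroring the two representations of $\langle M,N\rangle_t$. A small sketch under illustrative choices of $P$ and $N$:

```python
import numpy as np

# Illustrative increasing functions P, N; F = P - N is of bounded variation,
# mirroring <M,N> = (1/4)[<M+N> - <M-N>] as a difference of increasing parts.
P = lambda t: t + np.sin(t) + t**2 / 4.0   # P'(t) = 1 + cos t + t/2 >= 0 on [0, inf)
N = lambda t: np.log1p(t)                  # increasing
F = lambda t: P(t) - N(t)

def mu_F(a, b):
    """mu_F[(a, b]] evaluated directly from F, as in 3.33."""
    return F(b) - F(a)

def mu_diff(a, b):
    """The same interval measure from the difference of increasing functions."""
    return (P(b) - P(a)) - (N(b) - N(a))

pairs = [(0.0, 1.0), (1.0, 2.5), (0.25, 4.0)]
gap = max(abs(mu_F(a, b) - mu_diff(a, b)) for a, b in pairs)

# Finite additivity over abutting intervals, as for any Stieltjes measure:
add_gap = abs(mu_F(0.0, 1.0) + mu_F(1.0, 2.5) - mu_F(0.0, 2.5))
```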

When $M_t, N_t$ are $L^2$-martingales, $\mu_{\langle M,N\rangle_t}$ is a difference of $\mu$-a.e. finite measures over every compact interval and thus is a signed measure over every such interval. To see this, first note that each of $\frac{1}{4}\mu_{\langle M+N\rangle_t}$ and $\frac{1}{4}\mu_{\langle M-N\rangle_t}$ is a well defined Borel measure as above, since $\frac{1}{4}\langle M+N\rangle_t$ and $\frac{1}{4}\langle M-N\rangle_t$ are increasing. Now since $\langle M\pm N\rangle_0 = 0$:
$$\frac{1}{4}\mu_{\langle M\pm N\rangle_t}[0,T] \equiv \int_0^T \frac{1}{4}\,d\langle M\pm N\rangle_s = \frac{1}{4}\langle M\pm N\rangle_T.$$
By book 7's proposition 6.18 and the $L^2$ assumption on $M_t, N_t$:
$$E\left[\langle M\pm N\rangle_T\right] = E\left[(M_T \pm N_T)^2\right] < \infty.$$
Thus each of these measures is finite $\mu$-a.e. If we assume that $M_t$ and $N_t$ are in fact $L^2$-bounded as above, then these measures are also finite for $T = \infty$ by book 7's corollary 5.116.

Conclusion 3.32 For $M_t, N_t\in\mathcal{M}_2$, $\mu_{\langle M,N\rangle_t}$ is a finite signed measure, $\mu$-a.e.

As a bounded variation function, in addition to the representation in 3.32 above as a difference of increasing functions $\frac{1}{4}\langle M\pm N\rangle_t$, we can also utilize the canonical representation of book 3's proposition 3.27:
$$\langle M,N\rangle_t = P_0^t - N_0^t.$$
Here $P_0^t$ (respectively, $N_0^t$) is the positive (respectively, negative) variation of $\langle M,N\rangle_s$ on $[0,t]$ of that book's definition 3.23, and thus these are again increasing functions. The signed measure $\mu_{\langle M,N\rangle_t}$ of 3.32 can then also be expressed as a difference of Borel measures:
$$\mu_{\langle M,N\rangle_t} \equiv \mu_{P_0^t} - \mu_{N_0^t}. \qquad (3.34)$$


Repeating exercise 3.31, either formulation provides the same value for $\mu_{\langle M,N\rangle_t}[(a,b]]$ on the semi-algebra $\mathcal{A}'$, and thus by extension (chapter 5, book 1) these will agree on $\mathcal{B}(\mathbb{R})$.

Now given the bounded variation process $\langle M,N\rangle_t$, the total variation process $|\langle M,N\rangle_t|$ is definable in terms of the strong total variation of $\langle M,N\rangle_s$ on $[0,t]$ as in definition 2.84 of book 7, or as the total variation function of book 3's definition 3.23, denoted $T_0^t$. The total variation process $|\langle M,N\rangle_t|\equiv T_0^t$ is then increasing as a simple corollary of 3.36 below, and can be represented as in book 3's proposition 3.26 as:
$$|\langle M,N\rangle_t| \equiv T_0^t = P_0^t + N_0^t.$$
Consequently the Borel measure $\mu_{|\langle M,N\rangle|_t}\equiv \mu_{T_0^t}$, introduced in 3.31 above but using the notation $d|\langle M,N\rangle_t|$, is definable:
$$\mu_{|\langle M,N\rangle|_t} \equiv \mu_{P_0^t} + \mu_{N_0^t}. \qquad (3.35)$$

Remark 3.33 (On the notation $|\langle M,N\rangle_t|$) The next result shows that continuity of $\langle M,N\rangle_t$ assures continuity of $|\langle M,N\rangle_t|$. This would be trivial if $|\langle M,N\rangle_t|$ denoted the absolute value of $\langle M,N\rangle_t$, since $||a| - |b|| \le |a - b|$, and it is perhaps unfortunate that the standard notational convention for the total variation process $T_0^t$ associated with $\langle M,N\rangle_t$ is the same as that of the absolute value of this process. In the notation of book 3, this process would be denoted $T_0^t(\langle M,N\rangle_t)$, recalling remark 3.30 that this is parametrized by $\omega\in S$.

Exercise 3.34 Prove that for a bounded variation function, continuity of $T_0^t(f)$ assures continuity of $f(t)$. Then show that this and the next result are true pointwise, and hence, $T_0^t(f)$ is continuous at $t_0$ if and only if $f(t)$ is continuous at $t_0$.

Lemma 3.35 (On continuity of $|\langle M,N\rangle_t|$) If $f(t)$ is a continuous bounded variation function, then the associated total variation function $T_0^t(f)$ is also continuous. In particular, since $\langle M,N\rangle_t$ is a continuous bounded variation process, the total variation process $|\langle M,N\rangle_t|$ is a continuous, increasing process.

Proof. Since $T_0^t(f)$ is increasing, and by exercise 3.36:
$$T_0^{t+\Delta t}(f) = T_0^t(f) + T_t^{t+\Delta t}(f),\quad \Delta t > 0, \qquad (1)$$
right continuity at $t$ is confirmed by showing that given $\epsilon > 0$ there exists $\Delta t > 0$ so that $T_t^{t+\Delta t}(f) < \epsilon$. We leave left continuity as an exercise using

the same approach and:
$$T_0^t(f) = T_0^{t+\Delta t}(f) + T_{t+\Delta t}^t(f),\quad \Delta t < 0,$$
now proving there exists $\Delta t < 0$ so that $T_{t+\Delta t}^t(f) < \epsilon$.

Given $\epsilon > 0$, choose $s > 0$ and then choose a partition of $[t, t+s]$:
$$t = t_0 < t_1 < \dots < t_n = t+s,$$
so that:
$$T_t^{t+s}(f) \le \sum\nolimits_{i=1}^n |f(t_i) - f(t_{i-1})| + \epsilon/2. \qquad (2)$$
This is possible since $T_t^{t+s}(f)$ is the supremum of such sums over all partitions. Next, by continuity of $f$ at $t$ choose $\delta$ so that $|f(t) - f(t')| < \epsilon/2$ if $|t - t'| < \delta$. Then if $|t - t_1| < \delta$ we use the above partition in the next step; otherwise we augment this partition with a new $t_1$ that satisfies this inequality, relabel partition points, and note that by the triangle inequality the restated $(2)$ remains valid with this augmented partition. For notational simplicity we assume the latter outcome, and write $t_1 = t + \Delta t$. Then after relabeling:
$$t + \Delta t = t_1 < \dots < t_{n+1} = t + s$$
is a partition of $[t+\Delta t, t+s]$ and so:
$$\sum\nolimits_{i=2}^{n+1} |f(t_i) - f(t_{i-1})| \le T_{t+\Delta t}^{t+s}(f). \qquad (3)$$
Finally, using $(2)$ and $(3)$:
$$T_t^{t+\Delta t}(f) = T_t^{t+s}(f) - T_{t+\Delta t}^{t+s}(f) \le \sum\nolimits_{i=1}^{n+1} |f(t_i) - f(t_{i-1})| + \epsilon/2 - \sum\nolimits_{i=2}^{n+1} |f(t_i) - f(t_{i-1})| = |f(t_1) - f(t_0)| + \epsilon/2 < \epsilon.$$
Exercise 3.36 Generalize $(1)$ in the above proof: show that if $a \le c \le b$ then:
$$T_a^b(f) = T_a^c(f) + T_c^b(f). \qquad (3.36)$$
Hint: Any partitions of $[a,c]$ and $[c,b]$ induce a partition of $[a,b]$ but not all partitions of $[a,b]$, so we get $T_a^b(f) \ge T_a^c(f) + T_c^b(f)$ in 3.36. On the other hand, any partition of $[a,b]$ can be augmented with $c$ if needed to induce partitions of $[a,c]$ and $[c,b]$, and then the triangle inequality obtains the other inequality.
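A discrete sketch of both halves of the hint: over a fixed grid, the sum $\sum_i|f(t_i) - f(t_{i-1})|$ splits exactly at any grid point $c$, while dropping points from a partition can only decrease the sum (the triangle-inequality direction). The sampled function below is an illustrative choice:

```python
import numpy as np

def tv(vals):
    """Partition sum of |f(t_i) - f(t_{i-1})| over the sampled values."""
    return float(np.sum(np.abs(np.diff(vals))))

grid = np.linspace(0.0, 2.0, 2001)           # c = grid[1000] is a partition point
f = np.sin(3 * np.pi * grid) + 0.5 * grid    # a continuous BV function (illustrative)

i_c = 1000
lhs = tv(f)                                  # partition sum approximating T_0^2(f)
rhs = tv(f[: i_c + 1]) + tv(f[i_c:])         # split at c: the T_0^c + T_c^2 analogue
split_gap = abs(lhs - rhs)

coarse = tv(f[::10])                         # coarser partition gives a smaller sum
```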

We next investigate how the Borel measure $\mu_{|\langle M,N\rangle|_t}$ can be defined on intervals in terms of the signed measure $\mu_{\langle M,N\rangle_t}$.

Exercise 3.37 Demonstrate that on the semi-algebra $\mathcal{A}'$ of right semi-closed intervals $\{(a,b]\}$ which generates $\mathcal{B}(\mathbb{R})$:
$$\mu_{|\langle M,N\rangle|_t}[(a,b]] = \sup \sum\nolimits_i \left|\mu_{\langle M,N\rangle_t}[(t_i,t_{i+1}]]\right|,$$
where the supremum is over all partitions $a = t_0 < \dots < t_n = b$. Hint: Recalling book 3's notation 3.24, $\mu_{|\langle M,N\rangle|_t}[(a,b]] = T_a^b(f)$ with the bounded variation function $f(t)\equiv\langle M,N\rangle_t$. Use the definition of $T_a^b(f)$ and exercise 3.36.

From this exercise it follows that in general:
$$\mu_{|\langle M,N\rangle|_t}[(a,b]] \le \sup \sum\nolimits_j \left|\mu_{\langle M,N\rangle_t}[E_j]\right|,$$
where the supremum is over all measurable and countable partitions, $(a,b] = \bigcup E_j$ with disjoint $\{E_j\}\subset\mathcal{B}(\mathbb{R})$. The next result proves that the expression on the right of this inequality also defines a Borel measure.

Proposition 3.38 (The total variation measure) Given a signed measure $\mu$ on a sigma algebra $\sigma(X)$, define a set function $|\mu|$ on $E\in\sigma(X)$ by:
$$|\mu|[E] \equiv \sup \sum\nolimits_j |\mu[E_j]|, \qquad (3.37)$$
where the supremum is over all measurable and countable partitions $E = \bigcup_j E_j$ with disjoint $\{E_j\}_j\subset\sigma(X)$.

Then $|\mu|$ is a measure on $\sigma(X)$, called the total variation measure associated with $\mu$.

Proof. Following Rudin (1974), given any such countable partition $\{E_j\}$ of $E$, choose reals $t_j < |\mu|[E_j]$. Then by definition each $E_j$ has a partition $\{E_{j,i}\}_i$ so that $\sum_i |\mu[E_{j,i}]| > t_j$. As $\{E_{j,i}\}_{i,j}$ is then a partition of $E$:
$$\sum\nolimits_j t_j \le \sum\nolimits_{j,i} |\mu[E_{j,i}]| \le |\mu|[E].$$
Taking a supremum on the left over all such partitions $\{E_{j,i}\}_i$ of $\{E_j\}$ obtains that $\sum_j |\mu|[E_j] \le |\mu|[E]$.

Conversely, let $\{E_i'\}$ be another partition of $E$. Then $\{E_j\cap E_i'\}_j$ is a partition of $E_i'$ for any $i$, and $\{E_i'\cap E_j\}_i$ is a partition of $E_j$ for any $j$. Since $\mu$ is a signed measure and thus countably additive:
$$\sum\nolimits_i \left|\mu\left[E_i'\right]\right| = \sum\nolimits_i \left|\sum\nolimits_j \mu\left[E_i'\cap E_j\right]\right| \le \sum\nolimits_j\sum\nolimits_i \left|\mu\left[E_i'\cap E_j\right]\right| \le \sum\nolimits_j |\mu|[E_j].$$
This holds for all partitions $\{E_i'\}$ of $E$, and so it follows by taking a supremum that $|\mu|[E] \le \sum_j |\mu|[E_j]$.

Thus $|\mu|[E] = \sum_j |\mu|[E_j]$ and $|\mu|$ is countably (and finitely) additive. As $|\mu|\ge 0$ and $|\mu|[\emptyset] = 0$, it is a measure.
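For a purely atomic signed measure, the supremum in 3.37 is attained by any partition separating the atoms, which makes the construction easy to see numerically. The atoms and weights below are illustrative, and the sets $A^\pm$ anticipate the Hahn decomposition discussed next:

```python
import numpy as np

# Illustrative discrete signed measure mu = sum_i c_i * delta_{x_i}.
atoms   = np.array([0.1, 0.4, 0.7, 1.3, 2.2])
weights = np.array([2.0, -3.5, 1.25, -0.75, 4.0])

def mu(mask):
    """mu[E] for a set E described by a boolean mask over the atoms."""
    return float(weights[mask].sum())

def mu_tv(mask):
    """|mu|[E] of 3.37: separating the atoms attains the supremum, so the
    total variation measure is the sum of |c_i| over atoms in E."""
    return float(np.abs(weights[mask]).sum())

whole = np.ones(len(atoms), dtype=bool)
pos, neg = weights > 0, ~(weights > 0)   # the Hahn sets A+ and A-

crude   = abs(mu(whole))                 # one-set "partition": |mu[E]|
refined = abs(mu(pos)) + abs(mu(neg))    # two-set partition along A+/A-
total   = mu_tv(whole)
```

Here `refined` already equals `total`: splitting along the Hahn sets realizes the supremum, which is the content of the Jordan decomposition discussed below.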

As final results for the signed measure $\mu_{\langle M,N\rangle_t}$, we also have the Hahn and Jordan decomposition theorems of book 5's propositions 7.12 and 7.14. The Jordan theorem states that there exists a unique decomposition:
$$\mu_{\langle M,N\rangle_t} = \mu^+ - \mu^-, \qquad (3.38)$$
where the measures $\mu^\pm$ are mutually singular. By mutually singular is meant that there exists measurable $E$, unique up to null sets, so that:
$$\mu^+\left(\tilde{E}\right) = \mu^-(E) = 0.$$
In other words, $\mu^+$ is supported on $E$, and $\mu^-$ is supported on its complement $\tilde{E}$.

Hahn's theorem states that given such $\mu_{\langle M,N\rangle_t}$, there exists a positive set $A^+$ and a negative set $A^-$ with $\mathbb{R} = A^+\cup A^-$, and these sets are disjoint and uniquely defined up to null sets. By positive and negative is meant that $\mu_{\langle M,N\rangle_t}(A)\ge 0$ for all measurable $A\subset A^+$ and $\mu_{\langle M,N\rangle_t}(A)\le 0$ for all measurable $A\subset A^-$. Now define for measurable $A$:
$$\mu^+(A) = \mu_{\langle M,N\rangle_t}(A\cap A^+),\qquad \mu^-(A) = -\mu_{\langle M,N\rangle_t}(A\cap A^-).$$
Then $\mu^+$ is supported on the positive set $E\equiv A^+$, and $\mu^-$ is supported on the negative set $\tilde{E}\equiv A^-$, and these measures are mutually singular and thus by uniqueness this characterizes the Jordan decomposition.

Given this decomposition of $\mu_{\langle M,N\rangle_t}$, define a measure that we denote $\mu^J_{\langle M,N\rangle_t}$, for Jordan, by:
$$\mu^J_{\langle M,N\rangle_t} \equiv \mu^+ + \mu^-. \qquad (3.39)$$

We are now ready to bring these notions together. The following is presented in terms of inequalities, and this is adequate for the application below. But we note that one or both of these inequalities may in fact be equalities, though the proofs have proved elusive.

Proposition 3.39 On the Borel sigma algebra $\mathcal{B}(\mathbb{R})$:
$$\mu_{|\langle M,N\rangle|_t} \le \left|\mu_{\langle M,N\rangle_t}\right| \le \mu^J_{\langle M,N\rangle_t}, \qquad (3.40)$$
where these measures are respectively defined in 3.35, 3.37, and 3.39.

Proof. By exercise 3.37 and the paragraph immediately afterward, the first inequality of measures holds on the semi-algebra $\mathcal{A}'$ of right semi-closed intervals $\{(a,b]\}$ which generates $\mathcal{B}(\mathbb{R})$. Thus by book 1's remark 5.17, the same inequality extends to the associated outer measures, and then by extension (that book's propositions 5.20 and 5.23) to the associated Borel measures.

For the second inequality, if $A\in\mathcal{B}(\mathbb{R})$ and $A^+$ and $A^-$ are defined relative to the Hahn decomposition theorem above, then by the above discussion:
$$\mu_{\langle M,N\rangle_t}[A] = \mu_{\langle M,N\rangle_t}\left[A\cap A^+\right] + \mu_{\langle M,N\rangle_t}\left[A\cap A^-\right] = \mu^+[A] - \mu^-[A].$$
Similarly, given any disjoint measurable partition $A = \bigcup_j E_j$:
$$\mu_{\langle M,N\rangle_t}\left[\bigcup\nolimits_j E_j\right] = \sum\nolimits_j \mu_{\langle M,N\rangle_t}[E_j] = \sum\nolimits_j \mu^+[E_j] - \sum\nolimits_j \mu^-[E_j].$$
Thus:
$$\sum\nolimits_j \left|\mu_{\langle M,N\rangle_t}[E_j]\right| \le \sum\nolimits_j \mu^+[E_j] + \sum\nolimits_j \mu^-[E_j] = \mu^+[A] + \mu^-[A] = \mu^J_{\langle M,N\rangle_t}[A].$$
As this is true for all such partitions, it is true for the supremum over all such partitions, and the second inequality is proved.

The Kunita-Watanabe Inequality

Returning to 3.31, all integrals are to be understood in the Lebesgue-Stieltjes sense as discussed above. Existence of these integrals then requires Borel measurability (or $\mathcal{B}(\mathbb{R})$-measurability) of $v$ and $w$ as functions of $t$, at least $\mu$-a.e. This is nontrivial. What is trivial by definition (definition 5.10, book 7) is that for any stochastic process $X_t(\omega)\equiv f(t,\omega)$ defined on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ with $X_t(\omega): S\to\mathbb{R}$ (or $\mathbb{R}^n$), $f(t,\cdot)$ is measurable as a function of $\omega$ for all $t$, meaning the preimage of every set in $\mathcal{B}(\mathbb{R})$ (or $\mathcal{B}(\mathbb{R}^n)$) is in $\sigma(S)$.

To ensure the required Borel measurability of $f(\cdot,\omega)$ as a function of $t$, at least $\mu$-a.e., as needed for 3.31, we require a lemma that assumes additional notions of measurability identified in book 5's definition 5.10. This result is true for vector-valued stochastic processes so we state it in that generality.

Lemma 3.40 (On Borel measurability of $v(t,\cdot)$) Given a filtered probability space $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ indexed on $t\in I = [0,\infty)$ or other interval, a stochastic process $X_t(\omega)\equiv f(t,\omega)$ defined on $S$ is Borel measurable as a function of $t$ for all $\omega$ if $f(t,\omega)$ is measurable. Consequently, the same conclusion holds if $f(t,\omega)$ is predictable; adapted and any of left/right continuous or continuous; or progressively measurable.

Proof. By measurable is meant that $f(t,\omega)$ is a measurable function on the product measure space (chapter 7, book 1):
$$f: \left(I\times S, \sigma\left(\mathcal{B}(I)\times\sigma(S)\right), m\times\mu\right) \to \left(\mathbb{R}^n, \mathcal{B}\left(\mathbb{R}^n\right)\right),$$
where $\sigma\left(\mathcal{B}(I)\times\sigma(S)\right)$ is defined as the smallest sigma algebra that contains all measurable rectangles $A\times B$ with $A\in\mathcal{B}(I)$, $B\in\sigma(S)$, and $m$ denotes Lebesgue measure. Define:
$$f_n(t,\omega) \equiv f(t,\omega)\,\chi_{[0,n]}(t)\,\chi_{\{|f(t,\omega)|\le n\}}(t,\omega). \qquad (*)$$
It is an exercise to check that $f_n(t,\omega)$ is also measurable for all $n$, and that $f_n(t,\omega)\to f(t,\omega)$ pointwise on $I\times S$. Also, since bounded by $n$ and supported on $[0,n]$ in the $t$-domain, $f_n(t,\omega)$ is integrable on $I\times S$ relative to the product measure $m\times\mu$ constructed in book 1's chapter 7, where $m$ is Lebesgue measure on $\mathbb{R}$.

Then by Fubini's theorem (corollary 5.20, book 5), $f_n(\cdot,\omega)$ is $m$-integrable for all $\omega$, and so by definition $\mathcal{B}(I)$-measurable for all $\omega$. Applying book 5's corollary 1.10, $f(\cdot,\omega)$ is $\mathcal{B}(I)$-measurable for all $\omega$ as the pointwise limit of measurable $f_n(\cdot,\omega)$.

The same result holds in the other cases by book 7's proposition 5.19, since all imply measurability.

Now that the integrals in 3.31 are better understood and seen to be well-defined, we need an additional measure-theoretic result before proving this inequality.

Proposition 3.41 (A type of Radon-Nikodým derivative: $\frac{d\mu_{\langle M,N\rangle_s}}{d\mu_{|\langle M,N\rangle|_s}}$) Let $M_t, N_t$ be continuous $L^2$-bounded martingales on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ with $M_0 = N_0 = 0$, and define the signed measure $\mu_{\langle M,N\rangle_t}$ and associated Borel measure $\mu_{|\langle M,N\rangle|_t}$ as above, recalling remark 3.30 that both are parametrized by $\omega\in S$. Then for each $\omega$ there exists a Borel measurable function $h(t)\equiv h_\omega(t)$ with $|h(t)| = 1$ for all $t$, so that for all $A\in\mathcal{B}(\mathbb{R})$ and all Borel measurable functions $f(t)$:
$$\int_A h(s)f(s)\,d|\langle M,N\rangle|_s = \int_A f(s)\,d\langle M,N\rangle_s,$$
or in the notation of the associated measures:
$$\int_A h(s)f(s)\,d\mu_{|\langle M,N\rangle|_s} = \int_A f(s)\,d\mu_{\langle M,N\rangle_s}. \qquad (3.41)$$
Proof. Define:
$$\nu^+ = \frac{1}{2}\left[\mu_{|\langle M,N\rangle|_s} + \mu_{\langle M,N\rangle_s}\right],\qquad \nu^- = \frac{1}{2}\left[\mu_{|\langle M,N\rangle|_s} - \mu_{\langle M,N\rangle_s}\right],$$
where again $\nu^\pm$ are parametrized by $\omega\in S$. By definition, both $\nu^+$ and $\nu^-$ are well defined signed measures on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. In fact $\nu^+$ and $\nu^-$ are (positive) measures. For this, it is enough to prove positivity on the semi-algebra $\mathcal{A}'$ of right semi-closed intervals $\{(a,b]\}$ which generates $\mathcal{B}(\mathbb{R})$, and this is assured if for all such intervals:
$$-\mu_{|\langle M,N\rangle|_s}[(a,b]] \le \mu_{\langle M,N\rangle_s}[(a,b]] \le \mu_{|\langle M,N\rangle|_s}[(a,b]]. \qquad (1)$$
Fix $\omega$ and let $c(s)\equiv\langle M,N\rangle_s$. Then by 3.33:
$$\left|\mu_{\langle M,N\rangle_s}[(a,b]]\right| = |c(b) - c(a)| \le T_a^b(c),$$



where $T_a^b(c)$ is the total variation of $c(s)$ on $[a,b]$. Since $c(s)$ is a bounded variation function, 3.36 and then proposition 5.7 of book 1 obtain:
$$T_a^b(c) = |\langle M,N\rangle|_b - |\langle M,N\rangle|_a = \mu_{|\langle M,N\rangle|_s}[(a,b]].$$
So $(1)$ is proved.

Now $\mu_{|\langle M,N\rangle|_s} = \nu^+ + \nu^-$ by definition of $\nu^\pm$, and as a sum of measures it follows that if $\mu_{|\langle M,N\rangle|_s}[A] = 0$ for $A\in\mathcal{B}(\mathbb{R})$ then $\nu^+[A] = \nu^-[A] = 0$. In the terminology of book 5's definition 7.3, both $\nu^+$ and $\nu^-$ are absolutely continuous with respect to $\mu_{|\langle M,N\rangle|_s}$, denoted $\nu^+\ll\mu_{|\langle M,N\rangle|_s}$ and $\nu^-\ll\mu_{|\langle M,N\rangle|_s}$.

The Radon-Nikodým theorem of book 5's corollary 7.24 now assures the existence of nonnegative Borel measurable functions $h^\pm(s)$ so that for all $A\in\mathcal{B}(\mathbb{R})$ and Borel measurable $f(t)$:
$$\int_A h^\pm(s)f(s)\,d\mu_{|\langle M,N\rangle|_s} = \int_A f(s)\,d\nu^\pm.$$
Thus 3.41 is satisfied with $h(s) = h^+(s) - h^-(s)$ if it can be proved that for all Borel measurable $f(t)$:
$$\int_A f(s)\,d\mu_{\langle M,N\rangle_s} = \int_A f(s)\,d\nu^+ - \int_A f(s)\,d\nu^-.$$
But since $\mu_{\langle M,N\rangle_s} = \nu^+ - \nu^-$, this integral identity follows for simple functions $f(s)$ by definition 2.3 of book 5, and for general measurable functions by approximation (that book's definition 2.37, and then propositions 1.18 and 2.21).

To prove that $|h(t)| = 1$ as defined, let $A_{(a,b)} \equiv \{a < h < b\}$. Note that $A_{(a,b)}\in\mathcal{B}(\mathbb{R})$ (why?). Then with $f = 1$ and $A = A_{(a,b)}$, 3.41 obtains:
$$a\,\mu_{|\langle M,N\rangle|_s}\left[A_{(a,b)}\right] \le \int_{A_{(a,b)}} h(s)\,d\mu_{|\langle M,N\rangle|_s} = \mu_{\langle M,N\rangle_s}\left[A_{(a,b)}\right] \le b\,\mu_{|\langle M,N\rangle|_s}\left[A_{(a,b)}\right].$$
If $a > 1$ or $b < -1$, then since $(1)$ applies to all Borel sets as noted above, it follows that $\mu_{|\langle M,N\rangle|_s}\left[A_{(a,b)}\right] = 0$. Thus $\mu_{|\langle M,N\rangle|_s}\left[1 < |h|\right] = 0$.

Now for any disjoint measurable partition $\{E_j\}$ of $A_{(-1,1)} \equiv \{|h| < 1\}$, assuming this set has positive measure, let $f = 1$ in 3.41 as above and apply corollary 2.49 of book 5:
$$\sum\nolimits_j \left|\mu_{\langle M,N\rangle_s}[E_j]\right| = \sum\nolimits_j \left|\int_{E_j} h\,d\mu_{|\langle M,N\rangle|_s}\right| \le \sum\nolimits_j \int_{E_j} |h|\,d\mu_{|\langle M,N\rangle|_s} < \int_{A_{(-1,1)}} d\mu_{|\langle M,N\rangle|_s} = \mu_{|\langle M,N\rangle|_s}\left[A_{(-1,1)}\right]. \qquad (2)$$

Taking the supremum over all such partitions to obtain $\left|\mu_{\langle M,N\rangle_t}\right|\left[A_{(-1,1)}\right]$, the total variation measure associated with $\mu_{\langle M,N\rangle_t}$, and applying 3.40:
$$\mu_{|\langle M,N\rangle|_s}\left[A_{(-1,1)}\right] \le \left|\mu_{\langle M,N\rangle_t}\right|\left[A_{(-1,1)}\right] < \mu_{|\langle M,N\rangle|_s}\left[A_{(-1,1)}\right],$$
which is impossible, and thus $\mu_{|\langle M,N\rangle|_s}\left[A_{(-1,1)}\right] = 0$.

Combining obtains that $\mu_{|\langle M,N\rangle|_s}\left[\{|h|\ne 1\}\right] = 0$, and on this set we redefine $h(t)\equiv 1$. Thus $|h(t)| = 1$ for all $t$, and since this redefinition does not change the validity of 3.41, the proof is complete.
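The structure of $h$ is easy to see in the special case where the signed measure has a density (an illustrative assumption; $\mu_{\langle M,N\rangle}$ need not have one): if $d\mu = g\,ds$ then $d|\mu| = |g|\,ds$, and $h = \operatorname{sgn}(g)$, redefined to $1$ on $\{g = 0\}$, satisfies 3.41 exactly. A discretized sketch:

```python
import numpy as np

# Discrete sketch of proposition 3.41 for a signed measure with a Lebesgue
# density g (assumption for illustration only).
s = np.linspace(0.0, 1.0, 10001)
ds = s[1] - s[0]
g = np.cos(2 * np.pi * s)          # signed density of mu
h = np.sign(g)                     # the "derivative" d(mu)/d|mu|
h[h == 0] = 1.0                    # redefine h on the null set {g = 0}

f = np.exp(-s) * (1 + s**2)        # an arbitrary Borel measurable integrand
A = s <= 0.75                      # a Borel set

lhs = np.sum((h * f * np.abs(g))[A]) * ds   # int_A h f d|mu|
rhs = np.sum((f * g)[A]) * ds               # int_A f d(mu)
gap = abs(lhs - rhs)
```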

With this pre-work done, we are now ready for the final result. Some of the details are assigned as exercises.

Proposition 3.42 (Kunita-Watanabe inequality) Let $M_t, N_t\in\mathcal{M}_2$ be continuous $L^2$-bounded martingales on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ with $M_0 = N_0 = 0$, and $v(t,\omega), w(t,\omega)$ measurable processes. Then for almost all $\omega$:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,d|\langle M,N\rangle|_s \le \left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle N\rangle_s\right)^{1/2}, \qquad (3.42)$$
for all $0\le t\le\infty$.

Proof. By the above discussion, all integrals exist as Lebesgue-Stieltjes integrals for all $\omega$. Generalizing book 7's proposition 6.33, define $\langle M\rangle_s^t$ as the change in the quadratic variation process over $[s,t]$:
$$\langle M\rangle_s^t \equiv \langle M\rangle_t - \langle M\rangle_s.$$
Defining $\langle M,N\rangle_s^t$ analogously, it follows from book 7's propositions 6.12 and 6.30 that for all real $r$:
$$0 \le \langle M + rN, M + rN\rangle_s^t = \langle M\rangle_s^t + 2r\langle M,N\rangle_s^t + r^2\langle N\rangle_s^t.$$
Minimizing over $r$ obtains:
$$\left|\langle M,N\rangle_s^t\right| \le \left(\langle M\rangle_s^t\right)^{1/2}\left(\langle N\rangle_s^t\right)^{1/2},\quad \mu\text{-a.e.}, \qquad (1)$$
recalling that quadratic variation and covariation processes are well defined up to indistinguishability (definition 5.10, book 7).
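Filling in the minimization step: the right side of the preceding display is a quadratic in $r$ that is nonnegative for every real $r$, so its discriminant must be nonpositive:
$$q(r) \equiv \langle N\rangle_s^t\,r^2 + 2\langle M,N\rangle_s^t\,r + \langle M\rangle_s^t \ge 0 \text{ for all } r \implies 4\left(\langle M,N\rangle_s^t\right)^2 - 4\langle M\rangle_s^t\langle N\rangle_s^t \le 0,$$
which is $(1)$ upon taking square roots. When $\langle N\rangle_s^t > 0$ the minimum is attained at $r^* = -\langle M,N\rangle_s^t/\langle N\rangle_s^t$; when $\langle N\rangle_s^t = 0$, nonnegativity of the then-linear $q(r)$ forces $\langle M,N\rangle_s^t = 0$, and $(1)$ holds trivially.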

Now let $v(t,\omega), w(t,\omega)$ be measurable simple $L^2$-processes as in 2.21, which we can assume to have common defining intervals by creating the common partition (see exercise 3.43). Using exercise 3.31 for $\mu_{\langle M,N\rangle_s}[(t_j,t_{j+1}]]$, the inequality in $(1)$, and the Cauchy-Schwarz inequality:
$$\begin{aligned}
\left|\int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s\right| &\le \sum\nolimits_{j=1}^n |v_j(\omega)w_j(\omega)|\left|\langle M,N\rangle_{t_j\wedge t}^{t_{j+1}\wedge t}\right| \\
&\le \sum\nolimits_{j=1}^n |v_j(\omega)w_j(\omega)|\left(\langle M\rangle_{t_j\wedge t}^{t_{j+1}\wedge t}\right)^{1/2}\left(\langle N\rangle_{t_j\wedge t}^{t_{j+1}\wedge t}\right)^{1/2} \\
&\le \left(\sum\nolimits_{j=1}^n v_j^2(\omega)\langle M\rangle_{t_j\wedge t}^{t_{j+1}\wedge t}\right)^{1/2}\left(\sum\nolimits_{j=1}^n w_j^2(\omega)\langle N\rangle_{t_j\wedge t}^{t_{j+1}\wedge t}\right)^{1/2} \\
&= \left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle N\rangle_s\right)^{1/2},\quad \mu\text{-a.e.}
\end{aligned}$$
Thus given simple $L^2$-processes, it is true $\mu$-a.e. that for all $t < \infty$:
$$\left|\int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s\right| \le \left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle N\rangle_s\right)^{1/2}. \qquad (2)$$
Next, we claim that $(2)$ is valid for all bounded measurable processes. Given such $v(s,\omega)$ and $w(s,\omega)$, there exist sequences of measurable simple processes $v_n(s,\omega)$ and $w_n(s,\omega)$ so that $v_n(s,\omega)\to v(s,\omega)$ and $w_n(s,\omega)\to w(s,\omega)$ for all $(s,\omega)$ (see exercise 3.43). By the bounded convergence theorem (proposition 2.46, book 5), for all $t$:
$$\int_0^t v_n^2(s,\omega)\,d\langle M\rangle_s \to \int_0^t v^2(s,\omega)\,d\langle M\rangle_s,$$
and a similar result holds for the integrals $\int_0^t w_n^2(s,\omega)\,d\langle N\rangle_s$. Note that for the application of this theorem, the continuity assumption on martingales $M_t$ and $N_t$ assures finiteness of the measures $d\langle M\rangle_s$ and $d\langle N\rangle_s$ over any compact $[0,t]$ by book 7's proposition 6.12.

By the same argument, for all $t$:
$$\int_0^t |v_n(s,\omega)||w_n(s,\omega)|\,d\tfrac{1}{4}\langle M\pm N\rangle_s \to \int_0^t |v(s,\omega)||w(s,\omega)|\,d\tfrac{1}{4}\langle M\pm N\rangle_s,$$
and thus by Lebesgue's dominated convergence theorem (proposition 2.43, book 5):
$$\int_0^t v_n(s,\omega)w_n(s,\omega)\,d\tfrac{1}{4}\langle M\pm N\rangle_s \to \int_0^t v(s,\omega)w(s,\omega)\,d\tfrac{1}{4}\langle M\pm N\rangle_s.$$

Applying 3.32 obtains that for all t :


Z t Z t
vn (s; !)wn (s; !)d hM; N is ! v(s; !)w(s; !)d hM; N is :
0 0

Thus, that (2) is valid for simple L2 -processes -a.e. assures it is similarly
valid for bounded measurable processes for all t < 1:
Now with h(t) h! (t) of proposition 3.41:
Z t Z t
v(s; !)w(s; !)d hM; N is = v(s; !)w(s; !) h(s)d jhM; N ijs ;
0 0

and thus since jh(t)j = 1 it follows that for all bounded measurable processes:
Z t Z t 1=2 Z t 1=2
2
v(s; !)w(s; !) h(s)d jhM; N ijs v (s; !)d hM is w2 (s; !)d hN is :
0 0 0

Replacing v(s; !) and w(s; !) by the bounded measurable processes v~(s; !)


v(s; !)sgn [v(s; !)] and w(s;
~ !) = w(s; !)sgn [h(s)w(s; !)] ; and noting that
the absolute value on the left is now redundant, obtains 3.42 for all bounded
measurable processes.
If v(s; !) and w(s; !) are measurable processes, de…ne vn (s; !) = max[v(s; !); n];
and similarly for wn (s; !): Then since 3.42 is valid for all n; it is valid in the
limit if the integrals on the right are …nite, and otherwise there is nothing
to prove.
Finally, if v(s; !) and w(s; !) are measurable processes such that 3.42 is
valid for all t < 1; then it is valid for t = 1 by the same argument.

Exercise 3.43 Prove the assertions in the proof above:

1. Given two measurable simple processes $v(t,\omega)$ and $w(t,\omega)$, provide the details that these can be redefined on a common $t$-partition. For applications below, repeat this exercise for adapted simple processes, recalling lemma 2.25.

2. Given bounded measurable $v(s,\omega)$, there exists a sequence of measurable simple processes $v_n(s,\omega)$ defined on $(\mathbb{R}\times S, \sigma(\mathcal{B}(\mathbb{R})\times\sigma(S)), m\times\mu)$ so that $v_n(s,\omega)\to v(s,\omega)$ for all $(s,\omega)$. Hint: This will require the functional monotone class theorem (proposition 1.32, book 5). First note that
$$\mathcal{A}' \equiv \left\{(a,b]\times A \mid -\infty\le a\le b\le\infty,\ A\in\sigma(S)\right\}$$
is a semi-algebra (definition 6.8, book 1) that generates $\sigma(\mathcal{B}(\mathbb{R})\times\sigma(S))$ (corollary 7.3, book 1). Define:
$$\mathcal{L} \equiv \left\{v(s,\omega) \mid \text{there exist measurable simple processes } v_n(s,\omega) \text{ on } (\mathbb{R}\times S, \sigma(\mathcal{B}(\mathbb{R})\times\sigma(S)), m\times\mu) \text{ with } v_n(s,\omega)\to v(s,\omega) \text{ for all } (s,\omega)\right\}.$$
By book 5's corollary 1.10, any $v(s,\omega)\in\mathcal{L}$ is measurable, and the FMC theorem concludes that $\mathcal{L}$ contains all bounded measurable functions as desired if:

(a) $\chi_{(a,b]\times A}\in\mathcal{L}$ for all $(a,b]\times A\in\mathcal{A}'$ (which includes $\mathbb{R}\times S$);

(b) $\mathcal{L}$ is a vector space: if $v(s,\omega), w(s,\omega)\in\mathcal{L}$, then $av(s,\omega) + bw(s,\omega)\in\mathcal{L}$ for all $a,b\in\mathbb{R}$;

(c) if $v(s,\omega): \mathbb{R}\times S\to\mathbb{R}^+$ is the pointwise limit of $\{v_n(s,\omega)\}\subset\mathcal{L}$, then $v(s,\omega)\in\mathcal{L}$.
Example 3.44 (Cauchy-Schwarz redux) If $M_t = N_t = B_t$, a Brownian motion, then this is a continuous $L^2$-martingale but not an $L^2$-bounded martingale on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, as 3.42 requires. However, we will see in proposition 3.87 that this result also applies to continuous local martingale integrators and measurable integrands, so we use that case here to illustrate the connection with the Cauchy-Schwarz inequality.

Then $\langle B\rangle_t = t$ by corollary 2.10, and $\langle B,B\rangle_t = t$ by book 7's 6.20 and corollary 6.15. Thus the Borel measures $d|\langle B,B\rangle|_t$ and $d\langle B\rangle_t$ agree with Lebesgue measure on the semi-algebra of sets $\mathcal{A}' = \{(a,b]\}$, and thus by section 6.2 of book 1 equal Lebesgue measure on $[0,\infty)$. Given bounded measurable $v(s,\omega), w(s,\omega)$, it then follows that 3.42 reduces to the Cauchy-Schwarz inequality of book 4's corollary 3.48, that $\mu$-a.e.:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,ds \le \left(\int_0^t v^2(s,\omega)\,ds\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,ds\right)^{1/2},\qquad 0\le t\le\infty.$$
More generally but in the current set-up, if $M_t = N_t\in\mathcal{M}_2$ and $v(s,\omega), w(s,\omega)$ are bounded measurable processes, then 3.42 reduces to the general Cauchy-Schwarz inequality, that for $0\le t\le\infty$:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,d\langle M\rangle_s \le \left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2},$$
with $d\langle M\rangle_s$ a finite measure $\mu$-a.e. This follows from book 7's corollary 5.116 and proposition 6.18, since $E[\langle M\rangle_\infty] = E\left[M_\infty^2\right] < \infty$ implies that $\langle M\rangle_\infty < \infty$, $\mu$-a.e.
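A quick numerical confirmation of the Brownian case: with $d|\langle B,B\rangle|_s = ds$, 3.42 is just the Cauchy-Schwarz inequality for Lebesgue integrals, which can be checked on discretized paths (the integrands below are arbitrary illustrative choices standing in for $v(s,\omega)$ and $w(s,\omega)$ at one fixed $\omega$):

```python
import numpy as np

rng = np.random.default_rng(7)

# Discretized [0, 1]; v, w play the role of bounded measurable integrand
# paths sampled at one fixed point of the probability space.
n = 5000
ds = 1.0 / n
v = rng.normal(size=n) * np.exp(-np.linspace(0, 1, n))   # illustrative
w = rng.uniform(-2.0, 2.0, size=n)                       # illustrative

lhs = np.sum(np.abs(v * w)) * ds
rhs = np.sqrt(np.sum(v**2) * ds) * np.sqrt(np.sum(w**2) * ds)
slack = rhs - lhs   # Cauchy-Schwarz: slack >= 0
```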

3.2.7 Additional Properties For $L^2$-Bounded Martingales

As noted in proposition 3.27, not only is $I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega)$ a continuous martingale, but it is an $L^2$-bounded continuous martingale by the Itô $M$-isometry. Specifically, from 3.28 and 3.11:
$$\left\|I_t^M\right\|_{L^2(S)}^2 = \int_S\left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)d\mu \le E\left[\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s\right] < \infty. \qquad (3.43)$$
This follows since, as noted in proposition 3.27, $I_t^M(\omega) = V_t^M(\omega)$ $\mu$-a.e., where $V_t^M(\omega)$ is the $L^2$-limit of simple process integrals of proposition 3.19, and this assures that $\left\|I_t^M\right\|_{L^2(S)}^2 = \left\|V_t^M\right\|_{L^2(S)}^2$.

Hence, if $M_t\in\mathcal{M}_2$ and $v(t,\omega)\in H_2^M([0,\infty)\times S)$, then also $I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega)\in\mathcal{M}_2$. Thus $I_t^M(\omega)$ can be used once again as a new integrator of integrands $w(t,\omega)\in H_2^{I^M}([0,\infty)\times S)$ to produce:
$$I_t^{I^M}(\omega) = \int_0^t w(s,\omega)\,dI_s^M(\omega),$$
which is again a member of $\mathcal{M}_2$, and so forth.


Three questions immediately arise:

1. To define $H_2^{I^M}([0,\infty)\times S)$ we require $\left\langle I^M\right\rangle_t$, the quadratic variation of $I_t^M$, for 3.11. Is $\left\langle I^M\right\rangle_t$ derivable in terms of the defining components of $I_t^M(\omega)$, namely $v(t,\omega)$ and $\langle M\rangle_t$?

2. Once $I_t^{I^M}(\omega)$ is defined, can this $dI_s^M$ integral also be represented as a $dM_s$ integral? A parallel inquiry of integration theory in book 5 was answered in that book's proposition 3.8.

3. Generalizing 1, what is the covariation process $\left\langle I^M, J^N\right\rangle_t$ of two such processes, and is this process expressible in terms of the respective integrands $v(t,\omega)$ and $w(t,\omega)$, and $\langle M,N\rangle_t$, the covariation of $M$ and $N$?

In this and the next section we investigate these and related questions. First, we address the covariation and quadratic variation of stochastic integrals. This first result requires a rather long proof to justify a number of details, and the reader is encouraged to first scan the logic flow before digging into these details.
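As a preview of the covariation formula, consider a discretized Brownian integrator (an illustrative stand-in only, since $B_t$ is not $L^2$-bounded): for $v(s) = s$ and $w(s) = 1$, the formula predicts, via $E[I_tJ_t] = E[\langle I,J\rangle_t]$, that $E[(\int_0^1 s\,dB_s)^2] = \int_0^1 s^2\,ds = 1/3$ and $E[(\int_0^1 s\,dB_s)B_1] = \int_0^1 s\,ds = 1/2$, which a Monte Carlo sketch confirms approximately:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of the covariation formula for stochastic integrals with
# integrator M = N = B (discretized Brownian stand-in, illustrative only):
#   E[(int_0^1 s dB)^2]   ~ int_0^1 s^2 ds = 1/3,
#   E[(int_0^1 s dB) B_1] ~ int_0^1 s ds   = 1/2.
paths, steps = 50_000, 200
dt = 1.0 / steps
s = np.arange(steps) * dt                      # left endpoints s_i
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))

I = dB @ s                                     # int_0^1 s dB, per path
B1 = dB.sum(axis=1)                            # B_1 = int_0^1 1 dB, per path

var_I = float(np.mean(I**2))                   # near 1/3, minus O(dt) bias
cov_IB = float(np.mean(I * B1))                # near 1/2, minus O(dt) bias
```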

Proposition 3.45 (Covariation of stochastic integrals) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, if $M_t, N_t\in\mathcal{M}_2$, $v(t,\omega)\in H_2^M([0,\infty)\times S)$, and $w(t,\omega)\in H_2^N([0,\infty)\times S)$, then the covariation process (definition 6.25, book 7) of the continuous $L^2$-bounded martingales:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega),\qquad J_t^N(\omega) = \int_0^t w(s,\omega)\,dN_s(\omega),$$
is given by:
$$\left\langle I^M, J^N\right\rangle_t = \int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s, \qquad (3.44)$$
where $\langle M,N\rangle_t$ is the covariation process of $M_t, N_t$.

Proof. It is apparent that as defined in 3.44, $\left\langle I^M, J^N\right\rangle_0 = 0$, and we leave it as an exercise to confirm that this integral defines an adapted, continuous, bounded variation process (see exercise 3.46).

Thus to prove 3.44 it is sufficient to show that the process:
$$X_t \equiv \int_0^t v(s,\omega)\,dM_s(\omega)\int_0^t w(s,\omega)\,dN_s(\omega) - \int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s,$$
is a continuous local martingale, and then apply proposition 6.29 of book 7.
For this, let $\{v_n(t,\omega)\}\subset H_2^M([0,\infty)\times S)$ and $\{w_n(t,\omega)\}\subset H_2^N([0,\infty)\times S)$ be approximating sequences of simple processes of proposition 3.15 that converge to $v(t,\omega)$ in $H_2^M$, and to $w(t,\omega)$ in $H_2^N$, respectively. Defining $X_t^{(n)}$ as $X_t$ but with $v, w$ replaced by $v_n, w_n$, respectively, we first prove that $X_t^{(n)}$ is a continuous martingale for all $n$. By then demonstrating that $X_t^{(n)}\to X_t$ in $L^1$ for all $t$, and applying book 7's proposition 5.32, it will follow that $X_t$ is a martingale. In fact, $E\left[\sup_{t\ge 0}\left|X_t^{(n)} - X_t\right|\right]\to 0$ as $n\to\infty$, and thus as a uniform limit of continuous martingales, $X_t$ so defined is continuous.

1. $X_t^{(n)}$ is a continuous martingale for all $n$: For this step, let $v_n(t,\omega)$ be given as in 2.21 with $0 = t_0 < t_1 < \dots < t_{m_n+1} < \infty$:
$$v_n(t,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(t) + \sum\nolimits_{j=0}^{m_n} a_j(\omega)\chi_{(t_j,t_{j+1}]}(t),$$
where $a_j(\omega)$ is $\sigma_{t_j}(S)$-measurable and $a_{-1}(\omega)$ is $\sigma_0(S)$-measurable by lemma 3.12. By using the common partition points justified in exercise 3.43, we can assume that $w_n(t,\omega)$ is defined identically, but with random variables $\{b_j(\omega)\}_{j=0}^{m_n}$. Now by linearity of all integrals:
$$X_t^{(n)} = \sum\nolimits_{j,k=0}^{m_n}\left[\int_0^t a_j\chi_{(t_j,t_{j+1}]}\,dM_s\int_0^t b_k\chi_{(t_k,t_{k+1}]}\,dN_s - \int_0^t a_jb_k\chi_{(t_j,t_{j+1}]}\chi_{(t_k,t_{k+1}]}\,d\langle M,N\rangle_s\right],$$

and we show that this is a sum of continuous martingales.

If $j\ne k$, the second integrals are $0$ since partition intervals are disjoint, and the remaining terms in this summation can be expressed:
$$I_t \equiv \int_0^t a_j\chi_{(t_j,t_{j+1}]}\,dM_s\int_0^t b_k\chi_{(t_k,t_{k+1}]}\,dN_s = a_jb_k\left(M_{t_{j+1}\wedge t} - M_{t_j\wedge t}\right)\left(N_{t_{k+1}\wedge t} - N_{t_k\wedge t}\right). \qquad (1)$$
Then any such $I_t$ is continuous by the assumed continuity of $M_t, N_t$, and is integrable for all $t$ by the Cauchy-Schwarz inequality (corollary 3.48, book 4) and 3.17:
$$E\left|\int_0^t a_j\chi_{(t_j,t_{j+1}]}\,dM_s\int_0^t b_k\chi_{(t_k,t_{k+1}]}\,dN_s\right| \le \|v_n(s,\omega)\|_{H_2^M([0,\infty)\times S)}\|w_n(s,\omega)\|_{H_2^N([0,\infty)\times S)},$$
which is finite since $v_n(t,\omega)\in H_2^M([0,\infty)\times S)$ and $w_n(t,\omega)\in H_2^N([0,\infty)\times S)$.

For the martingale property, assume $t_{j+1}\le t_k$. Then the product on the right in $(1)$ is $0$ if $t\le t_k$, which satisfies the martingale property by definition. So assume $t_k < t$, and thus combining, $t_{j+1}\le t_k < t$, and then:
$$I_t = a_j(\omega)b_k(\omega)\left(M_{t_{j+1}} - M_{t_j}\right)\left(N_{t_{k+1}\wedge t} - N_{t_k}\right).$$
Now if $s < t$, the same argument obtains $I_s = 0$ if $s\le t_k$, and then $I_t - I_s = I_t$ satisfies the martingale property as above, so assume $t_{j+1}\le t_k < s < t$. Then:
$$I_s = a_j(\omega)b_k(\omega)\left(M_{t_{j+1}} - M_{t_j}\right)\left(N_{t_{k+1}\wedge s} - N_{t_k}\right),$$
and so:
$$I_t - I_s = a_j(\omega)b_k(\omega)\left(M_{t_{j+1}} - M_{t_j}\right)\left(N_{t_{k+1}\wedge t} - N_{t_{k+1}\wedge s}\right).$$
The manipulation of $E[I_t - I_s\,|\,\sigma_s(S)]$ using book 6's proposition 5.26 depends on the cases $t_k < s < t\le t_{k+1}$; $t_k < s < t_{k+1}\le t$; or $t_{k+1}\le s < t$. In the first and second cases, by the measurability property:
$$E[I_t - I_s\,|\,\sigma_s(S)] = a_j(\omega)b_k(\omega)\left(M_{t_{j+1}} - M_{t_j}\right)E\left[N_{t_{k+1}\wedge t} - N_s\,|\,\sigma_s(S)\right] = 0,$$
since $a_j(\omega)b_k(\omega)\left(M_{t_{j+1}} - M_{t_j}\right)$ is $\sigma_s(S)$-measurable by lemma 3.12. In the third case, $I_t - I_s = 0$ and there is nothing to prove.

If $j = k$, then as above but also applying 3.33:
$$I_t = a_jb_j\left[\left(M_{t_{j+1}\wedge t} - M_{t_j\wedge t}\right)\left(N_{t_{j+1}\wedge t} - N_{t_j\wedge t}\right) - \left(\langle M,N\rangle_{t_{j+1}\wedge t} - \langle M,N\rangle_{t_j\wedge t}\right)\right],$$
3.2 INTEGRALS W.R.T. CONTINUOUS $L^2$-BOUNDED MARTINGALES

and this expression is identically 0 if $t\leq t_j$. Continuity follows from the assumed continuity of $M_t, N_t$ and continuity of $\langle M,N\rangle_t$ by book 7's proposition 6.29. For integrability of $I_t$, the first term is addressed as above in the case of $j\neq k$. For the second term we use $L^2$-boundedness. First, by book 7's definition 6.25:

$$\langle M,N\rangle_{t_{j+1}\wedge t} = \frac{1}{4}\left[\langle M+N\rangle_{t_{j+1}\wedge t} - \langle M-N\rangle_{t_{j+1}\wedge t}\right], \tag{2}$$

and each of these quadratic variation processes is integrable by that book's proposition 6.18, since $L^2$-boundedness of $M_t$ and $N_t$ implies the same for $M_t\pm N_t$. The same applies to $\langle M,N\rangle_{t_j\wedge t}$, and thus $I_t$ is integrable.

For the martingale property we assume that $t_j < s < t$. If $s < t\leq t_j$ then $I_t - I_s = 0$ and there is nothing to prove, and we leave $s\leq t_j < t$ as an exercise, noting that $I_t - I_s = I_t$. Then:

$$\begin{aligned}
I_t - I_s &= a_j b_j\left[\left(M_{t_{j+1}\wedge t} - M_{t_j}\right)\left(N_{t_{j+1}\wedge t} - N_{t_j}\right) - \left(\langle M,N\rangle_{t_{j+1}\wedge t} - \langle M,N\rangle_{t_j}\right)\right]\\
&\quad - a_j b_j\left[\left(M_{t_{j+1}\wedge s} - M_{t_j}\right)\left(N_{t_{j+1}\wedge s} - N_{t_j}\right) - \left(\langle M,N\rangle_{t_{j+1}\wedge s} - \langle M,N\rangle_{t_j}\right)\right]\\
&= a_j b_j\left[\left(M_{t_{j+1}\wedge t}N_{t_{j+1}\wedge t} - \langle M,N\rangle_{t_{j+1}\wedge t}\right) - \left(M_{t_{j+1}\wedge s}N_{t_{j+1}\wedge s} - \langle M,N\rangle_{t_{j+1}\wedge s}\right)\right]\\
&\quad - a_j b_j\left[M_{t_j}\left(N_{t_{j+1}\wedge t} - N_{t_{j+1}\wedge s}\right) + N_{t_j}\left(M_{t_{j+1}\wedge t} - M_{t_{j+1}\wedge s}\right)\right]\\
&\equiv A - B.
\end{aligned}$$

Identifying again the cases $t_j < s < t\leq t_{j+1}$, $t_j < s < t_{j+1}\leq t$ and $t_{j+1}\leq s < t$, we focus on the first two cases since $I_t - I_s = 0$ in the third. In both of these cases $E[B|\sigma_s] = 0$ by the measurability property, since $M_t$ and $N_t$ are martingales. For example:

$$E\left[a_j b_j M_{t_j}\left(N_{t_{j+1}\wedge t} - N_{t_{j+1}\wedge s}\right)|\sigma_s\right] = a_j b_j M_{t_j}E\left[N_{t_{j+1}\wedge t} - N_{t_{j+1}\wedge s}|\sigma_s\right] = 0,$$

since $a_j b_j M_{t_j}$ is $\sigma_s$-measurable as above, and $N_{t_{j+1}\wedge t} = N_t^{t_{j+1}}$ is a stopped martingale and thus a martingale by book 7's proposition 5.84.

For $A$, defining $M_t' \equiv M_t N_t - \langle M,N\rangle_t$, which is a continuous local martingale by book 7's proposition 6.29, we have by the measurability property that:

$$E[A|\sigma_s] = a_j b_j E\left[M_{t_{j+1}\wedge t}' - M_s'|\sigma_s\right], \tag{3}$$

since $s < t_{j+1}$ by assumption. Now if $\{T_n\}$ is a localizing sequence for $M_t'$, then since $\chi_{T_n>0}M_{t\wedge T_n}'$ is a martingale:

$$E\left[\chi_{T_n>0}\left(M_{t_{j+1}\wedge t\wedge T_n}' - M_{s\wedge T_n}'\right)|\sigma_s\right] = 0,$$

for all $n$. Now $\chi_{T_n>0}\left(M_{t_{j+1}\wedge t\wedge T_n}' - M_{s\wedge T_n}'\right) \to M_{t_{j+1}\wedge t}' - M_s'$ pointwise since $T_n\to\infty$ $\mu$-a.e. Thus $E[A|\sigma_s] = 0$ in (3) will follow from Lebesgue's dominated convergence theorem of book 5's proposition 2.43 if $\sup_{s\leq t_{j+1}\wedge t}\left|M_s'\right|$ is integrable, since:

$$\left|\chi_{T_n>0}\left(M_{t_{j+1}\wedge t\wedge T_n}' - M_{s\wedge T_n}'\right)\right| \leq 2\sup_{s\leq t_{j+1}\wedge t}\left|M_s'\right|.$$

Now

$$\sup_{s\leq t_{j+1}\wedge t}\left|M_s'\right| \leq \sup_{s\leq t_{j+1}\wedge t}\left|M_s N_s\right| + \sup_{s\leq t_{j+1}\wedge t}\left|\langle M,N\rangle_s\right|,$$

and integrability of $\sup_{s\leq t_{j+1}\wedge t}\left|M_s N_s\right|$ follows from the Cauchy-Schwarz inequality, Doob's martingale maximal inequality (proposition 5.91, book 7), and $L^2$-boundedness of $M_t, N_t$:

$$\begin{aligned}
E\left[\sup_{s\leq t_{j+1}\wedge t}\left|M_s N_s\right|\right] &\leq E\left[\sup_{s\leq t_{j+1}\wedge t}\left|M_s\right|\sup_{s\leq t_{j+1}\wedge t}\left|N_s\right|\right]\\
&\leq E\left[\left(\sup_{s\leq t_{j+1}\wedge t}\left|M_s\right|\right)^2\right]^{1/2}E\left[\left(\sup_{s\leq t_{j+1}\wedge t}\left|N_s\right|\right)^2\right]^{1/2}\\
&\leq 4\left\|M_{t_{j+1}\wedge t}\right\|_{L^2}\left\|N_{t_{j+1}\wedge t}\right\|_{L^2} < \infty.
\end{aligned}$$

The same follows for $\sup_{s\leq t_{j+1}\wedge t}\left|\langle M,N\rangle_s\right|$ using (2) and book 7's proposition 6.18, recalling that $\langle M\pm N\rangle_s$ are increasing processes:

$$\begin{aligned}
E\left[\sup_{s\leq t_{j+1}\wedge t}\left|\langle M,N\rangle_s\right|\right] &\leq \frac{1}{4}E\left[\sup_{s\leq t_{j+1}\wedge t}\langle M+N\rangle_s\right] + \frac{1}{4}E\left[\sup_{s\leq t_{j+1}\wedge t}\langle M-N\rangle_s\right]\\
&= \frac{1}{4}\left\|M_{t_{j+1}\wedge t} + N_{t_{j+1}\wedge t}\right\|_{L^2}^2 + \frac{1}{4}\left\|M_{t_{j+1}\wedge t} - N_{t_{j+1}\wedge t}\right\|_{L^2}^2.
\end{aligned}$$

2. That $X_t^{(n)}\to_{L^1} X_t$, uniformly in $t$: This will be proved in two steps, that each of the terms of $X_t^{(n)}\equiv Y_t^{(n)} + Z_t^{(n)}$ converges uniformly to the respective terms of $X_t\equiv Y_t + Z_t$, and then the final result follows from:

$$\begin{aligned}
E\left[\sup_{t\geq 0}\left|X_t^{(n)} - X_t\right|\right] &\leq E\left[\sup_{t\geq 0}\left|Y_t^{(n)} - Y_t\right| + \sup_{t\geq 0}\left|Z_t^{(n)} - Z_t\right|\right]\\
&= E\left[\sup_{t\geq 0}\left|Y_t^{(n)} - Y_t\right|\right] + E\left[\sup_{t\geq 0}\left|Z_t^{(n)} - Z_t\right|\right].
\end{aligned}$$

2.a. That with apparent notation:

$$\int_0^t v_n(s,\omega)dM_s(\omega)\int_0^t w_n(s,\omega)dN_s(\omega)\to_{L^1}\int_0^t v(s,\omega)dM_s(\omega)\int_0^t w(s,\omega)dN_s(\omega),$$

uniformly in $t$. To this end (suppressing $(\omega)$):

$$\begin{aligned}
E&\left[\sup_{t\geq 0}\left|\int_0^t v_n(s,\omega)dM_s\int_0^t w_n(s,\omega)dN_s - \int_0^t v(s,\omega)dM_s\int_0^t w(s,\omega)dN_s\right|\right]\\
&\leq E\left[\sup_{t\geq 0}\left|\int_0^t\left[v_n(s,\omega) - v(s,\omega)\right]dM_s\right|\sup_{t\geq 0}\left|\int_0^t w_n(s,\omega)dN_s\right|\right]\\
&\quad + E\left[\sup_{t\geq 0}\left|\int_0^t v(s,\omega)dM_s\right|\sup_{t\geq 0}\left|\int_0^t\left[w_n(s,\omega) - w(s,\omega)\right]dN_s\right|\right]\\
&\equiv I + II.
\end{aligned}$$

By the Cauchy-Schwarz inequality, Doob's martingale maximal inequality of book 7's proposition 5.91, then 3.17:

$$\begin{aligned}
I &\leq \left\|\sup_{t\geq 0}\left|\int_0^t\left[v_n(s,\omega) - v(s,\omega)\right]dM_s\right|\right\|_{L^2}\left\|\sup_{t\geq 0}\left|\int_0^t w_n(s,\omega)dN_s\right|\right\|_{L^2}\\
&\leq 4\left\|\int_0^\infty\left[v_n(s,\omega) - v(s,\omega)\right]dM_s\right\|_{L^2}\left\|\int_0^\infty w_n(s,\omega)dN_s\right\|_{L^2}\\
&= 4\left\|v_n(s,\omega) - v(s,\omega)\right\|_{H_2^M}\left\|w_n(s,\omega)\right\|_{H_2^N}.
\end{aligned}$$

The first term converges to 0 by proposition 3.19, while the second converges to $\left\|w(s,\omega)\right\|_{H_2^N}$ by proposition 4.20 of book 5. Similarly:

$$II\leq 4\left\|v(s,\omega)\right\|_{H_2^M}\left\|w_n(s,\omega) - w(s,\omega)\right\|_{H_2^N}\to 0.$$

2.b. That with apparent notation:

$$\int_0^t v_n(s,\omega)w_n(s,\omega)d\langle M,N\rangle_s\to_{L^1}\int_0^t v(s,\omega)w(s,\omega)d\langle M,N\rangle_s,$$

uniformly in $t$. To this end, denote $d\langle M,N\rangle_s\equiv d\mu_{\langle M,N\rangle_t}$ in terms of the associated Borel measure, and recall from 3.34 that $\mu_{\langle M,N\rangle_t}\equiv P_0^t - N_0^t$, and

using the triangle inequality (suppressing $(s,\omega)$):

$$\begin{aligned}
\left|\int_0^t\left[v_n w_n - vw\right]d\langle M,N\rangle_s\right| &\leq \left|\int_0^t\left[v_n w_n - vw\right]dP_0^s\right| + \left|\int_0^t\left[v_n w_n - vw\right]dN_0^s\right|\\
&\leq \int_0^t\left|v_n w_n - vw\right|dP_0^s + \int_0^t\left|v_n w_n - vw\right|dN_0^s\\
&= \int_0^t\left|v_n w_n - vw\right|\left|d\langle M,N\rangle_s\right|\\
&\leq \int_0^t\left|v_n - v\right|\left|w_n\right|\left|d\langle M,N\rangle_s\right| + \int_0^t\left|v\right|\left|w_n - w\right|\left|d\langle M,N\rangle_s\right|\\
&\equiv I + II.
\end{aligned}$$

The third step uses 3.35 and a change back in notation. Applying the Kunita-Watanabe inequality of proposition 3.42:

$$\sup_{t\geq 0} I\leq \int_0^\infty\left|v_n - v\right|\left|w_n\right|\left|d\langle M,N\rangle_s\right| \leq \left(\int_0^\infty\left[v_n - v\right]^2 d\langle M\rangle_s\right)^{1/2}\left(\int_0^\infty w_n^2\, d\langle N\rangle_s\right)^{1/2}.$$

Thus by Cauchy-Schwarz:

$$E\left[\sup_{t\geq 0} I\right]\leq \left\|v_n - v\right\|_{H_2^M}\left\|w_n\right\|_{H_2^N}\to 0.$$

That $E\left[\sup_{t\geq 0} II\right]\to 0$ is an exercise using the same approach.

Exercise 3.46 Prove that the integral in 3.44 defines an adapted, continuous, bounded variation process. Hint: Given $\omega$, split $v(t,\omega)$ and $w(t,\omega)$ into positive and negative parts (definition 2.36, book 5), and split the signed measure $d\langle M,N\rangle_s$ as in 3.32, then express this integral as a difference of integrals which are continuous increasing functions, and recall proposition 3.27, book 3. Continuity of the quadratic variation processes is used for continuity of the two integrals. For adaptedness, approximate $v(s,\omega)$ and $w(s,\omega)$ with simple processes (exercise 3.43), then corollary 1.10, book 5.

Corollary 3.47 (Quadratic variation of stochastic integrals) If $M_t\in\mathcal{M}_2$ and $v(t,\omega)\in H_2^M([0,\infty)\times S)$, then the quadratic variation process associated with the continuous $L^2$-bounded martingale $I_t^M(\omega) = \int_0^t v(s,\omega)dM_s(\omega)$ is given by:

$$\left\langle I^M\right\rangle_t = \int_0^t v^2(s,\omega)d\langle M\rangle_s. \tag{3.45}$$

Proof. This is now an immediate corollary of proposition 3.45, since $\left\langle I^M\right\rangle_t = \left\langle I^M, I^M\right\rangle_t$ by book 7's definition 6.25 (and remark 6.26), while similarly, $d\langle M,M\rangle_s = d\langle M\rangle_s$.
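Formula 3.45 lends itself to a quick numerical sanity check. The sketch below is my own illustration, not the text's: it assumes $M_t$ is a standard Brownian motion (so $\langle M\rangle_t = t$) and takes $v(s,\omega) = s$, for which 3.45 gives $\left\langle I^M\right\rangle_1 = \int_0^1 s^2 ds = 1/3$. It sums squared increments of a left-endpoint discretization of $I_t^M$, averaged over a handful of paths; all parameters are arbitrary.

```python
import math
import random

random.seed(3)

def qv_of_integral(n_steps=400, n_paths=20, t=1.0):
    """Average over paths of sum((Delta I)^2), where I_t = int_0^t s dB_s
    is discretized by left-endpoint sums: Delta I = t_i * Delta B."""
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        qv = 0.0
        for i in range(n_steps):
            dB = random.gauss(0.0, math.sqrt(dt))  # Brownian increment
            dI = (i * dt) * dB                     # v(t_i) * Delta B
            qv += dI * dI                          # accumulate (Delta I)^2
        total += qv
    return total / n_paths

est = qv_of_integral()
print(est)  # concentrates near 1/3 = int_0^1 s^2 ds as the mesh shrinks
```

As the mesh shrinks, the discrete quadratic variation of $I^M$ tracks $\int_0^t v^2(s)ds$ path by path, which is the content of 3.45 when $\langle M\rangle_s = s$.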

Corollary 3.48 (Variance of stochastic integrals) If $M_t\in\mathcal{M}_2$ and $v(t,\omega)\in H_2^M([0,\infty)\times S)$, then for all $t$ the second moment of $I_t^M(\omega)\equiv\int_0^t v(s,\omega)dM_s(\omega)$ is given by:

$$E\left[\left(I_t^M\right)^2\right] = E\left[\int_0^t v^2(s,\omega)d\langle M\rangle_s\right]. \tag{3.46}$$

The expectation on the right is also equal to $Var\left[I_t^M\right]$, the variance of $I_t^M$.
Proof. Since $I_0^M = 0$, book 7's proposition 6.18 obtains that $E\left[\left\langle I^M\right\rangle_t\right] = E\left[\left(I_t^M\right)^2\right]$, and thus 3.46 follows from 3.45. That this second moment is the variance of $I_t^M$ is a consequence of part 4 of proposition 3.27, which proved 3.22 for $I_t^M$.
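The variance formula 3.46 can likewise be checked by Monte Carlo. In the hypothetical setup below (again $M_t$ a Brownian motion and $v(s,\omega) = s$, both my choices for illustration), $I_1^M$ has mean 0 and variance $E\left[\int_0^1 s^2 ds\right] = 1/3$.

```python
import math
import random

random.seed(7)

def ito_integral_samples(n_paths=4000, n_steps=200, t=1.0):
    """Independent left-endpoint Ito sums approximating int_0^1 s dB_s."""
    dt = t / n_steps
    samples = []
    for _ in range(n_paths):
        acc = 0.0
        for i in range(n_steps):
            acc += (i * dt) * random.gauss(0.0, math.sqrt(dt))
        samples.append(acc)
    return samples

samples = ito_integral_samples()
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)  # mean near 0; variance near 1/3
```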

Corollary 3.49 (Covariance of stochastic integrals) If $M_t, N_t\in\mathcal{M}_2$, $v(t,\omega)\in H_2^M([0,\infty)\times S)$ and $w(t,\omega)\in H_2^N([0,\infty)\times S)$, then if $I_t^M(\omega) = \int_0^t v(s,\omega)dM_s(\omega)$ and $J_t^N(\omega) = \int_0^t w(s,\omega)dN_s(\omega)$:

$$E\left[I_t^M J_t^N\right] = E\left[\int_0^t v(s,\omega)w(s,\omega)d\langle M,N\rangle_s\right]. \tag{3.47}$$

The expectation on the right is also equal to $Cov\left[I_t^M, J_t^N\right]$, the covariance of $I_t^M$ and $J_t^N$.
Proof. This expectation equals the covariance since both variates $I_t^M$ and $J_t^N$ have 0 expectation as above. By proposition 3.45 and book 7's proposition 6.29:

$$\widetilde{M}_t\equiv I_t^M J_t^N - \int_0^t v(s,\omega)w(s,\omega)d\langle M,N\rangle_s$$

is a continuous martingale. Thus recalling exercise 5.22 of book 6, let $\sigma_0\equiv\{\emptyset, S\}$. Then since $\sigma_0\subset\sigma_0(S)$ and $\widetilde{M}_0 = 0$, the tower property of book 7's proposition 5.26 obtains:

$$E\left[\widetilde{M}_t\right]\equiv E\left[\widetilde{M}_t|\sigma_0\right] = E\left[E\left[\widetilde{M}_t|\sigma_0(S)\right]|\sigma_0\right] = E\left[\widetilde{M}_0|\sigma_0\right] = 0,$$

which is 3.47.

Example 3.50 If $M_t, N_t\in\mathcal{M}_2$, then $v(t,\omega)\equiv 1\in H_2^M([0,\infty)\times S)$ and $w(t,\omega)\equiv 1\in H_2^N([0,\infty)\times S)$ by exercise 3.13, and then $I_t^M(\omega) = M_t(\omega)$ and $J_t^N(\omega) = N_t(\omega)$. Further, by 3.46:

$$Var[M_t(\omega)]\equiv E\left[\left(M_t(\omega)\right)^2\right] = E[\langle M\rangle_t],$$

which is also derived in book 7's proposition 6.18, and by 3.47:

$$Cov[M_t(\omega), N_t(\omega)]\equiv E[M_t(\omega)N_t(\omega)] = E[\langle M,N\rangle_t].$$

The next result obtains a characterization of the stochastic integral $I^M$. The identity in 3.48 is called the Kunita-Watanabe identity, named for Hiroshi Kunita (1937-2008) and Shinzo Watanabe (1935- ). It will be applied in the proof of the associative law in the next section.

Proposition 3.51 (Kunita-Watanabe identity) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t\in\mathcal{M}_2$, $v(t,\omega)\in H_2^M([0,\infty)\times S)$, and $I_t^M(\omega)\equiv\int_0^t v(s,\omega)dM_s(\omega)$. Then for all $N_t\in\mathcal{M}_2$:

$$\left\langle I^M, N\right\rangle_t = \int_0^t v(s,\omega)d\langle M,N\rangle_s. \tag{3.48}$$

Further, if there exists $X_t\in\mathcal{M}_2$ so that for all such $N_t$:

$$\langle X, N\rangle_t = \int_0^t v(s,\omega)d\langle M,N\rangle_s, \tag{3.49}$$

then almost everywhere, $X_t = I_t^M$ for all $t$.


Proof. Given such Nt ; recall that by exercise 3.13, w(t; !) 1 2 H2N ([0; 1)
S): This process is predictable, and by L2 -boundedness and book 7’s propo-
sition 6.18:
Z 1
E w2 (s; !)d hM is = E [hM i1 ] = E M1 2
< 1;
0

noting that M1 is well-de…ned by Doob’s martingale


Rt convergence theorem
of book 7’s corollary 5.116. De…ning JtN (!) = 0 w(s; !)dNs (!); note that
since N0 = 0 that JtN (!) = Nt by 3.1 and thus by 3.44:
Z t
M M N
I ;N t = I ;J t = v(s; !)d hM; N is ;
0

which is 3.48.
3.2 INTEGRALS W.R.T. CONTINUOUS L2 -BOUNDED MARTINGALES133

Now assume such Xt exists and satis…es 3.49, then from 3.48 it follows
that hX; N it I M ; N t = 0 for all Nt 2 M2 : Book 7’s proposition 6.30 then
obtains X I M ; N t = 0 for all such Nt : Letting Nt = Xt ItM ; which
is an L2 -bounded martingale by assumption (for Xt ) and proposition 3.27
(for ItM ), this proves that X I M ; X I M t = 0: Book 7’s remark 6.26
obtains that X I M t = 0; and thus by the Doob-Meyer Decomposition
2
theorem 2 in that book’s proposition 6.12, X I M is a continuous local
2
martingale. If fTn g is an associated localizing sequence then X I M t^Tn
h i
2
is a martingale for all n; and thus E X I M t^Tn = 0 as in the proof of
corollary 3.49. This implies Xt = ItM almost everywhere for t Tn by book
7’s corollary 6.14, and letting Tn ! 1 the result follows.
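Identity 3.48 also admits a direct numerical illustration (mine, not the text's). With $M = N = B$ a Brownian motion and $v(s,\omega) = s$, 3.48 predicts $\left\langle I^M, B\right\rangle_1 = \int_0^1 s\,d\langle B\rangle_s = 1/2$; the covariation is estimated by summing products of increments $\Delta I\,\Delta B = t_i(\Delta B)^2$.

```python
import math
import random

random.seed(11)

def covariation_estimate(n_steps=2000, n_paths=10, t=1.0):
    """Estimate <I, B>_1 for I_t = int_0^t s dB_s by summing products of
    increments of I and B over a fine partition, averaged over paths."""
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        cov = 0.0
        for i in range(n_steps):
            dB = random.gauss(0.0, math.sqrt(dt))
            cov += (i * dt) * dB * dB  # (t_i * dB) * dB = Delta I * Delta B
        total += cov
    return total / n_paths

est = covariation_estimate()
print(est)  # near int_0^1 s ds = 1/2
```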

Remark 3.52 (On the Kunita-Watanabe identity) The above proof showed that the Kunita-Watanabe identity in 3.48 is implied by the covariation formula in 3.44, but it is also the case that 3.48 implies 3.44, and thus these are equivalent. Since $J_t^N(\omega)\equiv\int_0^t w(s,\omega)dN_s(\omega)\in\mathcal{M}_2$, two applications of 3.48 produce:

$$\left\langle I^M, J^N\right\rangle_t = \int_0^t v(s,\omega)d\left\langle M, J^N\right\rangle_s = \int_0^t v(s,\omega)d\left[\int_0^s w(r,\omega)d\langle M,N\rangle_r\right].$$

The identity in 3.44 then follows from book 5's proposition 3.8 as a result in change of variables.

To see this, first recall by 3.32 that $d\langle M,N\rangle_r$ is a signed measure, which in the notation of Borel measures is defined by $\mu_{\langle M,N\rangle_t}\equiv\frac{1}{4}\mu_{\langle M+N\rangle_t} - \frac{1}{4}\mu_{\langle M-N\rangle_t}$. The integral in square brackets is then defined by:

$$\int_0^s w(r,\omega)d\langle M,N\rangle_r\equiv\int_0^s w(r,\omega)d\mu_{\langle M,N\rangle_r} = \int_0^s w(r,\omega)d\mu_{\frac{1}{4}\langle M+N\rangle_r} - \int_0^s w(r,\omega)d\mu_{\frac{1}{4}\langle M-N\rangle_r}.$$

Although the signed measure $\mu_{\langle M,N\rangle_t}$ can be decomposed in various ways as noted in the discussion on signed measures in section 3.2.6, this representation of the integral is well defined. See proposition 4.14.

By the same argument,

$$\left\langle I^M, J^N\right\rangle_t = \int_0^t v(s,\omega)d\left[\int_0^s w(r,\omega)d\mu_{\frac{1}{4}\langle M+N\rangle_r}\right] - \int_0^t v(s,\omega)d\left[\int_0^s w(r,\omega)d\mu_{\frac{1}{4}\langle M-N\rangle_r}\right].$$

To apply book 5's proposition 3.8 to each of these integrals, we formally need to express:

$$w(r,\omega) = w^+(r,\omega) - w^-(r,\omega),$$

as a difference of nonnegative functions using book 5's definition 2.36. Applying the book 5 result and reassembling obtains:

$$\begin{aligned}
\left\langle I^M, J^N\right\rangle_t &= \int_0^t v(s,\omega)w(s,\omega)d\mu_{\frac{1}{4}\langle M+N\rangle_s} - \int_0^t v(s,\omega)w(s,\omega)d\mu_{\frac{1}{4}\langle M-N\rangle_s}\\
&= \int_0^t v(s,\omega)w(s,\omega)d\mu_{\langle M,N\rangle_s}\\
&\equiv \int_0^t v(s,\omega)w(s,\omega)d\langle M,N\rangle_s,
\end{aligned}$$

which is 3.44.

The final result of this section addresses the last of the questions at the start of this section. The so-called associative law of stochastic integration states that if $I_t^M(\omega) = \int_0^t v(s,\omega)dM_s(\omega)$ and $I_t^{I^M}(\omega) = \int_0^t w(s,\omega)dI_s^M(\omega)$, then:

$$\int_0^t w(s,\omega)dI_s^M(\omega) = \int_0^t v(s,\omega)w(s,\omega)dM_s(\omega). \tag{$*$}$$

As part of this proof we must verify that given $v(t,\omega)\in H_2^M$ and $w(t,\omega)\in H_2^{I^M}$, then $v(t,\omega)w(t,\omega)\in H_2^M$, and thus the integral on the right is well defined.

This identity can be compared to the results in chapter 3 of book 5 on change of variables in integrals, and specifically to the result in that book's proposition 3.8. Admittedly, the conclusion in ($*$) is a deeper result due to the manner in which such stochastic integrals are defined, but this basic notion has been encountered before.

To see the analogy and the current challenge, first note that changing notation:

$$\int_0^t w(s,\omega)dI_s^M(\omega)\equiv\int_0^t w(s,\omega)d\left[\int_0^s v(r,\omega)dM_r(\omega)\right].$$

Fixing $\omega$, one can imagine defining a "Borel measure" $\mu_\omega$ as in 3.4 of book 5, but starting off on intervals of the form $(0,t]$ by:

$$\mu_\omega[(0,t]]\equiv\int_0^t v(r,\omega)dM_r(\omega).$$

This could then be extended to the semi-algebra of right semi-closed intervals $\mathcal{A}\equiv\{(a,b]\}$ by subtraction, then be generalized to a measure $\mu_\omega$ by following the construction of book 1's chapter 5.

But wait. The function $F_\omega(t)\equiv\mu_\omega[(0,t]]$ is not increasing, as the book 1 development requires to produce a Borel measure. Nor is $F_\omega(t)$ of bounded variation, which could be used to define a signed measure. We know this because as defined, $F_\omega(t)$ is an $L^2$-bounded martingale and book 7's proposition 6.1 applies. That is, $F_\omega(t)$ is of bounded variation on an interval $[a,b]$ with probability 1 if and only if it is constant on this interval with probability 1. But if $F_\omega(t)$ is constant on $[a,b]$, then $\int_c^d v(r,\omega)dM_r(\omega) = 0$ for all $[c,d]\subset[a,b]$, and hence if $\mu_\omega$ is definable it will of necessity not be very interesting.

In summary, while ($*$) seems compelling notationally, it is not derivable with the earlier book 5 approach.

Proposition 3.53 (Associative law of stochastic integration) Given the filtered probability space $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t\in\mathcal{M}_2$, $v(t,\omega)\in H_2^M([0,\infty)\times S)$ and $I_t^M(\omega) = \int_0^t v(s,\omega)dM_s(\omega)$. If $w(t,\omega)\in H_2^{I^M}([0,\infty)\times S)$, then $v(t,\omega)w(t,\omega)\in H_2^M([0,\infty)\times S)$ and $\mu$-a.e.:

$$\int_0^t w(s,\omega)dI_s^M(\omega) = \int_0^t v(s,\omega)w(s,\omega)dM_s(\omega)\quad\text{for all } t. \tag{3.50}$$

Proof. Now $\left\langle I^M\right\rangle_t = \int_0^t v^2(s,\omega)d\langle M\rangle_s$ by 3.45, and it follows from book 5's proposition 3.8 that:

$$\int_0^t w^2(s,\omega)d\left\langle I^M\right\rangle_s = \int_0^t v^2(s,\omega)w^2(s,\omega)d\langle M\rangle_s.$$

Taking expectations, we see that $w(t,\omega)\in H_2^{I^M}$ if and only if $v(t,\omega)w(t,\omega)\in H_2^M$. Hence, the integral on the right in 3.50 is well defined.

Applying 3.48 to the $L^2$-bounded martingale $K_t\equiv\int_0^t v(s,\omega)w(s,\omega)dM_s(\omega)$, it follows that for all $N_t\in\mathcal{M}_2$:

$$\langle K,N\rangle_t = \int_0^t v(s,\omega)w(s,\omega)d\langle M,N\rangle_s.$$

This same identity applied to $I_t^M\equiv\int_0^t v(s,\omega)dM_s(\omega)$ produces:

$$\left\langle I^M,N\right\rangle_t = \int_0^t v(s,\omega)d\langle M,N\rangle_s,$$

and then using the same derivation as in remark 3.52 and applying book 5's proposition 3.8 thus obtains for all $N_t\in\mathcal{M}_2$:

$$\langle K,N\rangle_t = \int_0^t w(s,\omega)d\left\langle I^M,N\right\rangle_s.$$

Now applying 3.48 to $L_t\equiv\int_0^t w(s,\omega)dI_s^M$ yields for all $N_t\in\mathcal{M}_2$:

$$\langle L,N\rangle_t = \int_0^t w(s,\omega)d\left\langle I^M,N\right\rangle_s.$$

By the uniqueness result of proposition 3.51 above, we see that for almost all $\omega$, $K_t = L_t$ for all $t$.

Example 3.54 (Associative law with simple processes) We can explicitly verify 3.50 when $v(s,\omega)$ and $w(s,\omega)$ are predictable simple processes. Let $v(s,\omega)$ be given as in 2.21 with $0 = t_0 < t_1 < \cdots < t_{m+1} < \infty$:

$$v(t,\omega)\equiv a_{-1}(\omega)\chi_{\{0\}}(t) + \sum_{j=0}^m a_j(\omega)\chi_{(t_j,t_{j+1}]}(t),$$

and assume without loss of generality (by exercise 3.43) that $w(t,\omega)$ is defined identically, but with random variables $\{b_j(\omega)\}_{j=-1}^m$. By lemma 3.12 it follows that $a_{-1}(\omega)$ and $b_{-1}(\omega)$ are $\sigma_0(S)$-measurable, and $a_j(\omega)$ and $b_j(\omega)$ are $\sigma_{t_j}(S)$-measurable for all $j$.

From 3.2:

$$I_t^M(\omega) = \int_0^t v(s,\omega)dM_s(\omega) = \sum_{j=0}^m a_j(\omega)\left(M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\right).$$

Similarly:

$$v(t,\omega)w(t,\omega) = a_{-1}b_{-1}(\omega)\chi_{\{0\}}(t) + \sum_{j=0}^m a_j(\omega)b_j(\omega)\chi_{(t_j,t_{j+1}]}(t),$$

and so:

$$\int_0^t v(s,\omega)w(s,\omega)dM_s(\omega) = \sum_{j=0}^m a_j(\omega)b_j(\omega)\left(M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\right). \tag{$*$}$$

Finally:

$$\begin{aligned}
\int_0^t w(s,\omega)dI_s^M(\omega) &= \sum_{j=0}^m b_j(\omega)\left[I_{t_{j+1}\wedge t}^M(\omega) - I_{t_j\wedge t}^M(\omega)\right]\\
&= \sum_{j=0}^m b_j(\omega)\left[\sum_{k=0}^m a_k(\omega)\left(M_{t_{k+1}\wedge t_{j+1}\wedge t}(\omega) - M_{t_k\wedge t_{j+1}\wedge t}(\omega)\right)\right.\\
&\qquad\left. - \sum_{k=0}^m a_k(\omega)\left(M_{t_{k+1}\wedge t_j\wedge t}(\omega) - M_{t_k\wedge t_j\wedge t}(\omega)\right)\right].
\end{aligned}$$

To see that this expression equals that in ($*$) requires an evaluation of the square-bracketed difference of summations associated with the coefficient $b_j(\omega)$. Now if $k < j$, the coefficient of $a_k(\omega)$ is:

$$M_{t_{k+1}\wedge t_{j+1}\wedge t} - M_{t_k\wedge t_{j+1}\wedge t} - M_{t_{k+1}\wedge t_j\wedge t} + M_{t_k\wedge t_j\wedge t} = M_{t_{k+1}\wedge t} - M_{t_k\wedge t} - M_{t_{k+1}\wedge t} + M_{t_k\wedge t} = 0.$$

The same result is produced for $k > j$. When $k = j$:

$$M_{t_{k+1}\wedge t_{j+1}\wedge t} - M_{t_k\wedge t_{j+1}\wedge t} - M_{t_{k+1}\wedge t_j\wedge t} + M_{t_k\wedge t_j\wedge t} = M_{t_{j+1}\wedge t} - M_{t_j\wedge t} - M_{t_j\wedge t} + M_{t_j\wedge t} = M_{t_{j+1}\wedge t} - M_{t_j\wedge t},$$

and thus the coefficient of $b_j(\omega)$ is $a_j(\omega)\left(M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\right)$. By substitution, the expression in ($*$) is obtained.
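The telescoping argument above is a finite identity, so it can be verified exactly on made-up data. In the sketch below, all values of $a_j$, $b_j$ and of $M$ at the partition points are arbitrary placeholders for one fixed $\omega$; the two sides of 3.50 agree to rounding error.

```python
# Hypothetical simple-process data for one omega (arbitrary values)
a = [0.5, -1.2, 2.0, 0.7]        # a_j on intervals (t_j, t_{j+1}]
b = [1.5, 0.3, -0.8, 2.2]        # b_j on the same intervals
M = [0.0, 0.4, -0.1, 0.9, 0.6]   # M at t_0, ..., t_4, with M_0 = 0

# I^M at the partition points: I_{t_k} = sum_{j<k} a_j (M_{t_{j+1}} - M_{t_j})
I = [0.0]
for j in range(len(a)):
    I.append(I[-1] + a[j] * (M[j + 1] - M[j]))

# Left side of 3.50: integrate w against increments of I^M
lhs = sum(b[j] * (I[j + 1] - I[j]) for j in range(len(b)))
# Right side of 3.50: integrate v*w against increments of M
rhs = sum(a[j] * b[j] * (M[j + 1] - M[j]) for j in range(len(a)))
print(lhs, rhs)  # equal up to rounding
```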

3.2.8 Stochastic Integrals via Riemann Sums

When a stochastic process $v(t,\omega)$ on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ is left continuous and adapted, it is then predictable by book 7's corollary 5.17 and thus suitable as an integrand if square integrable in the sense of 3.11. When such a process is also locally bounded as defined below, then the stochastic integral $\int_t^{t'} v(s,\omega)dM_s(\omega)$ will equal the appropriately defined limit of Riemann sums.

To set the stage, recall from proposition 3.15 that if $v(s,\omega)\in H_2^M([0,\infty)\times S)$, there exist simple processes $\{v_n(t,\omega)\}_{n=1}^\infty\subset H_2^M([0,\infty)\times S)$ so that $v_n(s,\omega)\to v(s,\omega)$ in the $H_2^M$-norm. Then by propositions 3.19 and 3.23, for all $t, t'$ there is convergence both in $L^2(S)$ and in probability:

$$\int_t^{t'} v_n(s,\omega)dM_s(\omega)\to\int_t^{t'} v(s,\omega)dM_s(\omega).$$

Finally, by the proof of part 2 of proposition 3.27, a subsequence can be chosen so that for almost all $\omega$, $\int_0^t v_{n_k}(s,\omega)dM_s(\omega)\to I_t^M(\omega)$ uniformly in $t$ over compact sets, where $I_t^M(\omega)$ is the continuous version of this integral in 3.30.

In the case of locally bounded, left-continuous $v(s,\omega)\in H_2^M([0,\infty)\times S)$, these results are improved in the following ways:

1. The $v_n(t,\omega)$-sequence can be defined so that $v_n(t,\omega)\to v(t,\omega)$ pointwise in $t$ for almost all $\omega$, and the associated integrals are again Riemann sums.

2. The convergence in $L^2(S)$ and in probability, $\int_0^r v_n(s,\omega)dM_s(\omega)\to\int_0^r v(s,\omega)dM_s(\omega)$, is then uniform over compact sets, $r\in[0,t]$.

3. In addition, there is a subsequence so that for almost all $\omega$, $\int_0^t v_{n_k}(s,\omega)dM_s(\omega)\to I_t^M(\omega)$ uniformly in $t$ over compact sets.

See also Hunt and Kennedy (2004) for an alternative approach to $\mu$-a.e. convergence.

Proposition 3.55 (Riemann sum approximation) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t\in\mathcal{M}_2$, $v(s,\omega)\in H_2^M([0,\infty)\times S)$, and assume $v(s,\omega)$ is also left continuous in $s$ for almost all $\omega$, and locally bounded:

$$|v(s,\omega)|\leq K_t < \infty\quad\text{for } s\leq t.$$

Given partitions $\Pi_n$ of $[0,t]$:

$$0 = t_0 < t_1 < \cdots < t_{n+1} = t,$$

with mesh $\max_{0\leq i\leq n}\{t_{i+1} - t_i\}\to 0$, then with the integral defined as $I_t^M(\omega)$ of proposition 3.27 (suppressing $(\omega)$):

$$\sum_{i=0}^n v(t_i,\omega)\left(M_{t_{i+1}} - M_{t_i}\right)\to_{L^2(S)}\int_0^t v(s,\omega)dM_s. \tag{3.51}$$

This $L^2(S)$-convergence is uniform over $[0,t]$:

$$\sup_{r\leq t}\left|\sum_{i=0}^n v(t_i,\omega)\left(M_{t_{i+1}\wedge r} - M_{t_i\wedge r}\right) - \int_0^r v(s,\omega)dM_s\right|\to_{L^2(S)} 0, \tag{3.52}$$

and implies uniform convergence in probability over $[0,t]$. That is, for all $\epsilon > 0$:

$$\Pr\left[\sup_{r\leq t}\left|\sum_{i=0}^n v(t_i,\omega)\left(M_{t_{i+1}\wedge r} - M_{t_i\wedge r}\right) - \int_0^r v(s,\omega)dM_s\right| > \epsilon\right]\to 0. \tag{3.53}$$

Further, there is a subsequence of partitions $\Pi_{n_k}$, so that uniform convergence on $r\leq t$ is pointwise $\mu$-a.e.:

$$\sup_{r\leq t}\left|\sum_{i=0}^{n_k} v(t_i,\omega)\left(M_{t_{i+1}\wedge r} - M_{t_i\wedge r}\right) - \int_0^r v(s,\omega)dM_s\right|\to_{a.e.} 0.$$

Proof. Define $v_n(s,\omega)\equiv\sum_{i=0}^n v(t_i,\omega)\chi_{(t_i,t_{i+1}]}(s)$. Then since $v(t_i,\omega)$ is $\sigma_{t_i}(S)$-measurable, $v_n(s,\omega)$ is adapted and by construction left continuous, and is thus predictable by book 7's corollary 5.17. That $v_n(s,\omega)\in H_2^M([0,\infty)\times S)$ for all $n$ will then follow once 3.11 is demonstrated. Recall that $\int_0^\infty\chi_{(t_i,t_{i+1}]}(s)d\langle M\rangle_s = \langle M\rangle_{t_{i+1}} - \langle M\rangle_{t_i}$, and $E[\langle M\rangle_t] = E\left[M_t^2\right]$ by book 7's proposition 6.18. Hence:

$$\left\|v_n(t,\omega)\right\|_{H_2^M([0,\infty)\times S)}^2\equiv E\left[\int_0^\infty\sum_{i=0}^n v^2(t_i,\omega)\chi_{(t_i,t_{i+1}]}(s)d\langle M\rangle_s\right]\leq K_t^2 E[\langle M\rangle_t] = K_t^2 E\left[M_t^2\right] < \infty.$$

Now by the left continuity, $v_n(s,\omega)\to v(s,\omega)$ pointwise in $s$ for almost all $\omega$. Also, from the definition of $v_n(s,\omega)$ and 3.1, the statement in 3.52 is equivalent to:

$$\left\|\sup_{r\leq t}\left|\int_0^r v_n(s,\omega)dM_s(\omega) - \int_0^r v(s,\omega)dM_s(\omega)\right|\right\|_{L^2(S)}\to 0.$$

To prove this, define:

$$N_r^{(n)}(\omega)\equiv\int_0^r v_n(s,\omega)dM_s(\omega) - \int_0^r v(s,\omega)dM_s(\omega).$$

As a difference of continuous martingales and hence a martingale, Doob's martingale maximal inequality 2 of book 7's proposition 5.91, followed by the Itô $M$-isometry in 3.23, produces:

$$E\left[\sup_{r\leq t}\left|N_r^{(n)}\right|^2\right]\leq 4E\left[\left|N_t^{(n)}\right|^2\right] = 4\left\|v_n(s,\omega) - v(s,\omega)\right\|_{H_2^M([0,t]\times S)}^2\equiv 4\int_S\int_0^t\left(v_n(s,\omega) - v(s,\omega)\right)^2 d\langle M\rangle_s d\mu.$$

Now

$$\left(v_n(s,\omega) - v(s,\omega)\right)^2\leq 2\left(v_n^2(s,\omega) + v^2(s,\omega)\right),$$

and since v(s; !); vn (s; !) 2 H2M ([0; 1) S) it follows that this upper bound
is d hM is -integrable, -a.e. Also, vn (s; !) ! v(s; !) pointwise in s; -a.e.,
and so Lebesgue’s dominated convergence theorem of book 5’s corollary 2.45
applies to conclude that as n ! 1;
Z t
(vn (s; !) v(s; !))2 d hM is ! 0; -a.e.
0

But as above:
Z t Z t
(vn (s; !) v(s; !))2 d hM is 2 vn2 (s; !) + v 2 (s; !) d hM is ;
0 0

and v(s; !); vn (s; !) 2 H2M ([0; 1) S) assures that this upper bound is -
integrable. Thus another application of Lebesgue’s dominated convergence
theorem obtains:
" # Z Z t
2
E sup Nr (n)
4 (vn (s; !) v(s; !))2 d hM is d ! 0:
r t S 0

This proves 3.52, and then 3.51 by de…nition.


Next, Chebyshev’s inequality of book 4’s proposition 3.33 and this last
result implies that for all > 0 :
" #,
2
Pr sup Nr(n) > E sup Nr(n) 2
! 0;
r t r t

which is 3.53.
Finally, de…ne nk so that

Pr sup Nr(nk ) 2 k
2 k
;
0 r t
n o
(n )
and let Ak S be de…ned by Ak = !j sup0 r t Nr k 2 k : Then since
P1
k=1 [Ak ] < 1; the Borel-Cantelli theorem of proposition 2.6 of book 2
applies to conclude that [A] = 0 where A lim sup Ak : So for ! 2 A~ the
(n )
complement of A; there are at most …nitely many k with sup0 r t Nr k
(n )
2 k ; and hence fNr k g1 k=1 converges uniformly for r 2 [0; t] on this set of
measure 1: As the integrals of simple processes are continuous, this uniform
convergence is to a continuous functions, which then must agree with ItM (!)
-a.e.

The following example generalizes the formula in 2.35 for Brownian motion, since $\langle B\rangle_t = t$ by corollary 2.10.

Example 3.56 (On $\int_0^t M_s dM_s$) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t\in\mathcal{M}_2$ also be locally bounded; then $v(s,\omega) = M_s(\omega)$ satisfies the conditions of the above theorem. Given partitions $\Pi_n$ of $[0,t]$:

$$0 = t_0 < t_1 < \cdots < t_{n+1} = t,$$

with mesh $\max_{0\leq i\leq n}\{t_{i+1} - t_i\}\to 0$, it follows that:

$$\sum_{i=0}^n M_{t_i}(\omega)\left(M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\right)\to_{L^2(S)}\int_0^t M_s(\omega)dM_s(\omega).$$

However,

$$M_{t_{i+1}}^2 - M_{t_i}^2 - \left(M_{t_{i+1}} - M_{t_i}\right)^2 = 2M_{t_i}\left(M_{t_{i+1}} - M_{t_i}\right),$$

and so summing and rearranging obtains:

$$M_t^2 - \sum_{i=0}^n\left(M_{t_{i+1}} - M_{t_i}\right)^2\to_{L^2(S)} 2\int_0^t M_s(\omega)dM_s(\omega),$$

since $M_0 = 0$.

By book 7's proposition 6.5 (see exercise 3.57):

$$\sum_{i=0}^n\left(M_{t_{i+1}} - M_{t_i}\right)^2\to_{L^2(S)}\langle M\rangle_t. \tag{$*$}$$

Combining, we derive a generalization of 2.35, that $\mu$-a.e.:

$$\int_0^t M_s dM_s = \left(M_t^2 - \langle M\rangle_t\right)/2. \tag{3.54}$$
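For Brownian motion, where $\langle B\rangle_t = t$, formula 3.54 can be watched numerically. The sketch below (illustrative parameters of my choosing) uses the exact telescoping identity of this example: the left-endpoint Itô sum equals $\left(B_t^2 - \sum(\Delta B)^2\right)/2$ identically, while the discrete quadratic variation $\sum(\Delta B)^2$ approximates $\langle B\rangle_1 = 1$.

```python
import math
import random

random.seed(5)

n, t = 4000, 1.0
dt = t / n

# Simulate a Brownian path on a uniform partition of [0, 1]
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

# Left-endpoint Ito sum and discrete quadratic variation
left_sum = sum(B[i] * (B[i + 1] - B[i]) for i in range(n))
qv = sum((B[i + 1] - B[i]) ** 2 for i in range(n))

print(left_sum, (B[n] ** 2 - qv) / 2, qv)  # first two agree; qv is near 1
```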

Exercise 3.57 Justify the application of book 7's proposition 6.5 above, even though it was not assumed that $M_t$ was bounded, but only locally bounded. Hint: Define $N_t = M_t$ for $t\leq T$ and $N_t = M_T$ for $t > T$. Then $N_t$ is a martingale (book 7, proposition 5.84), and bounded (why?). Thus in ($*$), we can conclude by the referenced result that convergence occurs to $\langle N\rangle_t$. Why is $\langle M\rangle_t = \langle N\rangle_t$ for $t\leq T$? Consider stopping each at $T$ and recall book 7's corollary 6.16.

Remark 3.58 (Doob-Meyer decomposition redux) Compare 3.54 to the Doob-Meyer decomposition theorem 1 of book 7's proposition 6.5 (same justification as in exercise 3.57), that:

$$M_t^2 = \langle M\rangle_t + M_t',$$

where $M_t'$ is a continuous martingale. It follows from 3.54 that:

$$M_t' = 2\int_0^t M_s dM_s.$$

Thus in this case, we have an explicit representation for the continuous martingale $M_t'$ assured to exist by this theorem.
Exercise 3.59 Investigate what happens in the derivation of example 3.56 when $v(t_i,\omega) = M_{t_i}(\omega)$ is changed to $v(t_i,\omega) = M_{t_{i+1}}(\omega)$ in the Riemann sum. Prove that the resulting summation converges to $\left(M_t^2 + \langle M\rangle_t\right)/2$. Generalize to a result using an "intermediate point," $v(t_i,\omega) = (1-r)M_{t_i}(\omega) + rM_{t_{i+1}}(\omega)$ for $0\leq r\leq 1$. Compare with exercise 2.15.
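The endpoint effect in exercise 3.59 is already visible at the discrete level, with no limits taken. Expanding the intermediate-point sum with $v_i = (1-r)M_{t_i} + rM_{t_{i+1}}$ gives the left-endpoint sum plus $r\sum(\Delta M)^2$, consistent with a limit of $\left(M_t^2 - \langle M\rangle_t\right)/2 + r\langle M\rangle_t$. The sketch below checks this exactly on an arbitrary made-up path.

```python
# Arbitrary discrete path with M_0 = 0 (hypothetical values)
M = [0.0, 0.3, -0.2, 0.5, 0.1, 0.4]

def riemann_sum(M, r):
    """Riemann sum with intermediate evaluation point (1-r)M_i + r M_{i+1}."""
    return sum(((1 - r) * M[i] + r * M[i + 1]) * (M[i + 1] - M[i])
               for i in range(len(M) - 1))

qv = sum((M[i + 1] - M[i]) ** 2 for i in range(len(M) - 1))
left, right, mid = riemann_sum(M, 0.0), riemann_sum(M, 1.0), riemann_sum(M, 0.5)
print(left, right, mid, qv)
# right - left = qv and mid - left = qv/2, exactly as in the exercise
```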

3.3 Integrals w.r.t. Continuous Local Martingales

The goal of this section is to generalize the previous section's results from continuous $L^2$-bounded martingales to continuous local martingales defined on the filtered probability space $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$. The approach taken will be familiar from chapter 6 of book 7. That is, we will use stopping times $\{T_n\}$ to reduce the local martingale to a continuous $L^2$-bounded martingale (versus a bounded martingale in book 7), then apply previous results and investigate the limit of such integrals as $n\to\infty$.

The first step is relatively easy, since if $M_t$ is a continuous local martingale, propositions 5.75 and 5.77 of book 7 state that we can reduce $M$ by localizing sequences that bound $M$ using:

1. A hitting time such as: $T_n = \inf\{t\,|\,|X_t|\geq n\}$; or,

2. Any sequence of stopping times $T_n'$ that is almost surely increasing and unbounded with $T_n'\leq T_n$ for all $n$.

Thus given such a localizing sequence $T_n$ for which $\left|M_t^{T_n}\right|\leq K_n$ with $K_n\to\infty$, it follows (definition 5.70, book 7) that $M_t^{(n)}$ is a continuous, $L^2$-bounded martingale where:

$$M_t^{(n)}\equiv\chi_{T_n>0}M_t^{T_n} = \chi_{T_n>0}M_{t\wedge T_n},$$

(n)
and thus Mt 2 M if M0 = 0: It is a martingale by de…nition 5.70
of book 7, since fTn g is a localizing sequence, and is L2 -bounded since
2 (n)
Tn
E Tn >0 Mt Kn2 : Hence for any predictable process v(s; !) 2 H2M ([0; 1)
Rt (n)
S); the integral 0 v(s; !)dMs is well de…ned by the previous section.

Remark 3.60 (On Tn >0 ) Recall from book 7’s remark 5.71 that the pres-
ence of Tn >0 in the de…nition of a local martingale Mt is to weaken the
integrability requirement on M0 : Speci…cally, if we required MtTn to be a
martingale in this de…nition rather than Tn >0 MtTn ; this would require inte-
grability MtTn for all t and thus integrability of M0Tn = M0 : With the given
de…nition, we instead require only integrability of Tn >0 M0 :
In the current context, we will continue the assumption for local martin-
gales that was used for L2 -bounded martingales Mt 2 M2 that M0 = 0; and
thus integrability is not an issue. We will therefore discontinue the use of
Tn >0 ; and simply de…ne a local martingale Mt with M0 = 0 as a process
such that there exists a localizing sequence such that MtTn is a martingale.
This is justi…ed by the following:

Lemma 3.61 (On eliminating $\chi_{T>0}$) Let $M_t$ be an adapted process on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ with $M_0 = 0$ and $T$ a stopping time. Then $M_t^T$ is a martingale if and only if $\chi_{T>0}M_t^T$ is a martingale.
Proof. Since

$$M_t^T = \chi_{T>0}M_t^T + \chi_{T=0}M_t^T = \chi_{T>0}M_t^T,$$

noting that $\chi_{T=0}M_t^T = \chi_{T=0}M_0 = 0$, it follows that $M_t^T$ is integrable and/or adapted for all $t$ if and only if $\chi_{T>0}M_t^T$ is integrable and/or adapted for all $t$. Further, for $t > s$ it follows that $E\left[M_t^T|\sigma_s(S)\right] = M_s^T$ if and only if $E\left[\chi_{T>0}M_t^T|\sigma_s(S)\right] = \chi_{T>0}M_s^T$.

Exercise 3.62 Generalize lemma 3.61 to prove that if $M_t$ is an adapted process on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ with $M_0$ integrable and $T$ a stopping time, then $M_t^T$ is a martingale if and only if $\chi_{T>0}M_t^T$ is a martingale. Also, check that this lemma remains true for local submartingales (or local supermartingales) defined on the filtered probability space $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$. Such processes are defined just as are local martingales, except that given the localizing sequence $\{T_n\}$, the stopped process $\chi_{T_n>0}X_t^{T_n}$ is a submartingale (respectively, supermartingale) with respect to $\sigma_t(S)$ for all $n$. This means for $t\geq s$ that $E\left[\chi_{T_n>0}X_t^{T_n}|\sigma_s\right]\geq\chi_{T_n>0}X_s^{T_n}$ for a submartingale (respectively, $E\left[\chi_{T_n>0}X_t^{T_n}|\sigma_s\right]\leq\chi_{T_n>0}X_s^{T_n}$ for a supermartingale), in contrast to the local martingale condition that $E\left[\chi_{T_n>0}X_t^{T_n}|\sigma_s\right] = \chi_{T_n>0}X_s^{T_n}$.

Now a moment of thought reveals that even with the elimination of the $\chi_{T_n>0}$-factors, there is an inconvenience in the above approach. With this setup, the allowable integrands $v(s,\omega)\in H_2^{M^{T_n}}$ depend on the stopping time $T_n$. To circumvent this apparent difficulty, recall from book 7's corollary 6.16 that $\left\langle M^{T_n}\right\rangle_t = \langle M\rangle_t^{T_n}$ for continuous local martingales. Thus if $v(s,\omega)\in H_2^{M^{T_n}}([0,\infty)\times S)$, 3.11 can be expressed:

$$E\left[\int_0^\infty v^2(s,\omega)d\langle M\rangle_s^{T_n}\right] < \infty.$$

To uniformly restrict the domain of the $d\langle M\rangle_s^{T_n}$-integration to be independent of $\omega$, we could equally well reduce $M_t$ by a smaller (but again unbounded $\mu$-a.e.) localizing sequence, as noted in item 2 above.

For example, defining $T_n'\equiv T_n\wedge n$, then $v(s,\omega)\in H_2^{M^{T_n'}}([0,\infty)\times S)$ implies that:

$$E\left[\int_0^\infty v^2(s,\omega)d\langle M\rangle_s^{T_n'}\right] < \infty.$$

Now $\langle M\rangle_s^{T_n'} = \langle M\rangle_s$ for $s\leq T_n'$, and $d\langle M\rangle_s^{T_n'} = 0$ for $s > T_n'$, and since $T_n'\leq n$:

$$\int_0^\infty v^2(s,\omega)d\langle M\rangle_s^{T_n'}\leq\int_0^n v^2(s,\omega)d\langle M\rangle_s.$$

Hence by appropriately stopping continuous local martingales, it is possible to allow a larger class of integrands that is independent of the specific localizing sequence, and simply require that for all $t$:

$$E\left[\int_0^t v^2(s,\omega)d\langle M\rangle_s\right] < \infty.$$

In fact, by better stopping $M$, it is possible to generalize further to the assumption that for almost all $\omega$:

$$\int_0^t v^2(s,\omega)d\langle M\rangle_s < \infty,\quad\text{for all } t.$$

This is the approach taken in this section. To formalize manipulations, we first investigate how the integrals with respect to continuous $L^2$-bounded martingales behave under stopping. The following proposition summarizes the needed results.

Remark 3.63 (On stopping R-S and L-S integrals) Note that if stochastic integrals were defined pathwise as Riemann-Stieltjes or Lebesgue-Stieltjes integrals, then the validity of 3.55 is reasonably transparent.

If $t\leq T$, then all three integrals are simply equal to $\int_0^t v(s,\omega)dM_s(\omega)$. This is clear for the first two, while for the third integral, since $s\leq t$ it follows that $M_s^T = M_s$ and thus $dM_s^T = dM_s$.

For $t > T$, all three integrals equal $\int_0^T v(s,\omega)dM_s(\omega)$. This is again clear for the first two, while for the third, note that $M_s^T = M_T$ for $s > T$ implies that $\int_T^t v(s,\omega)dM_s^T(\omega) = 0$.

What makes this proposition a challenge is the verification that this intuition generalizes to the present case, where integrals are defined in the $L^2$-sense.
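For left-endpoint sums with a piecewise-constant integrand and a stopping time landing on a partition point, the three expressions in 3.55 coincide by inspection, which is exactly the intuition of this remark. The sketch below (hypothetical path and integrand values of my own) makes the bookkeeping explicit.

```python
# Hypothetical discrete data: path of M at t_0 < ... < t_6 and v on intervals
M = [0.0, 0.2, -0.3, 0.4, 0.1, 0.5, 0.2]
v = [1.0, -0.5, 2.0, 0.8, -1.2, 0.3]
k = 4  # stopping time T = t_4, a partition point

# (i) Integrate over [0, t ^ T]: keep only increments completed before T
stopped_domain = sum(v[i] * (M[i + 1] - M[i]) for i in range(k))

# (ii) Integrand v * chi_{(0,T]}: v vanishes on intervals beyond T
cut_integrand = sum((v[i] if i < k else 0.0) * (M[i + 1] - M[i])
                    for i in range(len(v)))

# (iii) Integrator stopped at T: M^T is constant after t_k, so dM^T = 0 there
MT = [M[min(i, k)] for i in range(len(M))]
stopped_integrator = sum(v[i] * (MT[i + 1] - MT[i]) for i in range(len(v)))

print(stopped_domain, cut_integrand, stopped_integrator)  # all three agree
```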

Proposition 3.64 (On stopping stochastic integrals) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t\in\mathcal{M}_2$, $v(s,\omega)\in H_2^M([0,\infty)\times S)$, and $T$ a stopping time. Then $\mu$-a.e.:

$$\int_0^{t\wedge T} v(s,\omega)dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)dM_s(\omega) = \int_0^t v(s,\omega)dM_s^T(\omega), \tag{3.55}$$

for all $t$.
Proof. We first address existence of the integrals in 3.55 relative to the development of the prior section. Since $\int_0^t v(s,\omega)dM_s(\omega)$ is a continuous, $L^2$-bounded martingale by the previous section, and the first integral equals this martingale stopped at $T$, this integral is again continuous and a martingale by Doob's optional stopping theorem of book 7's proposition 5.84. In fact, it is again an $L^2$-bounded martingale by Doob's martingale maximal inequality of book 7's proposition 5.91 and 2.39:

$$E\left[\left(\int_0^{t\wedge T} v(s,\omega)dM_s(\omega)\right)^2\right]\leq E\left[\sup_{r\leq t}\left(\int_0^r v(s,\omega)dM_s(\omega)\right)^2\right]\leq 4E\left[\left(\int_0^t v(s,\omega)dM_s(\omega)\right)^2\right]\leq 4\left\|v(s,\omega)\right\|_{H_2^M([0,\infty)\times S)}^2.$$

The second integral is well defined because $v(s,\omega)\chi_{(0,T]}\in H_2^M([0,\infty)\times S)$. First, this process is predictable as a product (proposition 1.5, book 5) of predictable $v(s,\omega)$ and $\chi_{(0,T]}$, which is predictable since it is left continuous and adapted (corollary 5.17, book 7). The $H_2$-bound is satisfied since $\langle M\rangle_t$
146CHAPTER 3 INTEGRALS W.R.T. CONTINUOUS LOCAL MARTINGALES

is increasing by book 7’s proposition 6.12 and thus d hM is is a measure:

Z 1
2
2
(0;T ] v(t; !) E (0;T ] v (s; !)d hM is
H2M ([0;1) S) 0
Z 1
E v 2 (s; !)d hM is :
0

For the third integral, $M_t^T$ is a continuous $L^2$-bounded martingale, again by Doob's martingale maximal inequality, and thus this integral is well defined if $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ implies that $v(s,\omega) \in H_2^{M^T}([0,\infty)\times\mathcal{S})$. Measurability is not a question, while for 3.11, since $d\langle M^T\rangle_s = 0$ for $s > T$:
$$\|v(s,\omega)\|^2_{H_2^{M^T}([0,\infty)\times\mathcal{S})} = E\left[\int_0^\infty v^2(s,\omega)\,d\langle M^T\rangle_s\right] \le \|v(s,\omega)\|^2_{H_2^M([0,\infty)\times\mathcal{S})}. \qquad (*)$$

Turning to 3.55, let $\{v_n(s,\omega)\} \subset H_2^M([0,\infty)\times\mathcal{S})$ be a sequence of simple processes as in proposition 3.15, with $\|v(s,\omega) - v_n(s,\omega)\|_{H_2^M([0,\infty)\times\mathcal{S})} \to 0$. It is an exercise to prove that 3.55 is true for all such $v_n(s,\omega)$, so it is then enough to prove that for each of the three expressions above, the integrals of $v_n(s,\omega)$ converge to the respective integrals of $v(s,\omega)$ in $L^2(\mathcal{S})$. Also, we prove this result for $t < \infty$, since this implies the result for $t = \infty$, noting that all such integrals are finite by the above discussion.

1. For each $n$, $\int_0^t (v(s,\omega) - v_n(s,\omega))\,dM_s$ is a continuous martingale which is $L^2$-bounded by Itô's $M$-isometry in 2.39, and thus also uniformly integrable (proposition 5.99, book 7). By Doob's optional stopping theorem (proposition 5.117, book 7), applied with the bounded stopping time $t\wedge T \le t$:
$$\int_0^{t\wedge T} (v(s,\omega) - v_n(s,\omega))\,dM_s = E\left[\int_0^{t} (v(s,\omega) - v_n(s,\omega))\,dM_s \,\Big|\, \sigma_{t\wedge T}\right].$$
Then by Jensen's inequality and the tower property of conditional expectations (proposition 5.26, book 6):
$$\left\|\int_0^{t\wedge T} v(s,\omega)\,dM_s - \int_0^{t\wedge T} v_n(s,\omega)\,dM_s\right\|^2_{L^2(\mathcal{S})} = E\left[\left(E\left[\int_0^{t} (v(s,\omega) - v_n(s,\omega))\,dM_s \,\Big|\, \sigma_{t\wedge T}\right]\right)^2\right]$$
$$\le E\left[E\left[\left(\int_0^{t} (v(s,\omega) - v_n(s,\omega))\,dM_s\right)^2 \,\Big|\, \sigma_{t\wedge T}\right]\right] = E\left[\left(\int_0^{t} (v(s,\omega) - v_n(s,\omega))\,dM_s\right)^2\right].$$
This converges to zero as $n \to \infty$ by Itô's $M$-isometry in 2.39, and so $\int_0^{t\wedge T} v_n(s,\omega)\,dM_s \to \int_0^{t\wedge T} v(s,\omega)\,dM_s$ in $L^2(\mathcal{S})$.
2. For the second representation of the integral, since:
$$\left\|[v(s,\omega) - v_n(s,\omega)]\,\chi_{(0,T]}\right\|_{H_2^M} \le \|v(s,\omega) - v_n(s,\omega)\|_{H_2^M([0,\infty)\times\mathcal{S})} \to 0,$$
it follows from Itô's $M$-isometry in 2.39 that:
$$\left\|\int_0^{t} [v(s,\omega) - v_n(s,\omega)]\,\chi_{(0,T]}(s)\,dM_s(\omega)\right\|_{L^2(\mathcal{S})} \to 0.$$

3. For any $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ it follows from $(*)$ that $v(s,\omega) \in H_2^{M^T}([0,\infty)\times\mathcal{S})$. Applying this to $[v(s,\omega) - v_n(s,\omega)]$ obtains:
$$\|v(s,\omega) - v_n(s,\omega)\|_{H_2^{M^T}([0,\infty)\times\mathcal{S})} \to 0.$$
Thus again by Itô's $M$-isometry:
$$\left\|\int_0^{t} [v(s,\omega) - v_n(s,\omega)]\,dM_s^T(\omega)\right\|_{L^2(\mathcal{S})} \to 0.$$

Exercise 3.65 Prove 3.55 for simple processes as defined in 2.21, using 3.1 and 3.2.
Corollary 3.66 Given the assumptions of proposition 3.64:
$$\int_0^{t\wedge T} v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s^T(\omega). \qquad (3.56)$$
Proof. This follows from two applications of 3.55.
3.3.1 $\mathcal{M}_{loc}$-Integrators and $H_{2,loc}^M([0,\infty)\times\mathcal{S})$-Integrands

With the aid of proposition 3.64, stochastic integrals with respect to continuous local martingales can now be defined, and for a larger class of integrands than contemplated for $L^2$-bounded martingale integrators. We begin with a definition of the class of local martingale integrators.

Definition 3.67 ($\mathcal{M}_{loc}$) Let $\mathcal{M}_{loc}$ denote the collection of continuous ($\mu$-a.e.) local martingales defined on the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ with $M_0 = 0$. Thus if $M \in \mathcal{M}_{loc}$, there exists a sequence of stopping times $\{T_n\}_{n=1}^\infty$ so that $\mu$-a.e., $T_n < T_{n+1}$ for all $n$ and $T_n \to \infty$, and so that (lemma 3.61) $M_t^{T_n} \equiv M_{t\wedge T_n}$ is a martingale (definition 5.22) for all $n$. The sequence $\{T_n\}_{n=1}^\infty$ is called a localizing sequence for $M$, and is said to reduce $M$.

Notation 3.68 In some references which deal with more general spaces of local martingales, the associated spaces of integrators might be denoted $\mathcal{M}_{c,loc}$ to emphasize that these local martingales are continuous. In general, one always assumes $M_0 = 0$ for stochastic integrators.

Note that $\mathcal{M}_2 \subset \mathcal{M}_{loc}$, since by book 7's corollary 5.85 a continuous martingale is a continuous local martingale whether it is $L^2$-bounded or not. But the inclusion is strict, so $\mathcal{M}_{loc} \ne \mathcal{M}_2$: continuous martingales that are not $L^2$-bounded are continuous local martingales, but not members of $\mathcal{M}_2$. An example of this is Brownian motion $B_t$, for which $E[B_t^2] = t$. On the other hand, if $M \in \mathcal{M}_{loc}$ and $\{T_n\}$ is a localizing sequence for $M$, then $M^{T_n}$ is a continuous martingale by definition, but need not be an element of $\mathcal{M}_2$. The next result shows that a localizing sequence can always be chosen so that the stopped processes are in fact elements of $\mathcal{M}_2$. The integration theory of the prior section will then apply to such $M^{T_n}$ for all $n$, but with a soon to be defined more general space of integrands.

Proposition 3.69 (Reduction to $\mathcal{M}_2$ martingales) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M \in \mathcal{M}_{loc}$. Then there exists a localizing sequence $\{T_n\}$ with $M^{T_n} \in \mathcal{M}_2$ for all $n$.
Proof. By book 7's proposition 5.75, a continuous local martingale can be reduced by the sequence
$$T_n = \inf\{t > 0 \,|\, |M_t| \ge n\}.$$
The stopped process $M_t^{T_n}$ is then a continuous martingale, and thus is adapted, and integrable for each $t$. Also, $M^{T_n}$ satisfies 3.10 since for all $t$:
$$\left\|M_t^{T_n}\right\|_{L^2} \equiv \left(\int \left(M_t^{T_n}\right)^2 d\mu\right)^{1/2} \le n.$$
Thus $M^{T_n} \in \mathcal{M}_2$ for all $n$.
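As a numerical illustration of this reduction (the sampled path below is hypothetical; in discrete time the path can overshoot level $n$ at the hitting index, while in continuous time continuity gives the exact bound $n$):

```python
# Sketch of the reducing sequence T_n = inf{t > 0 : |M_t| >= n} on a
# hypothetical sampled path; stopping at T_n bounds the path, which is
# the source of the L2-bound placing each M^{T_n} in M_2.

def first_hit(M, n):
    """First grid index i > 0 with |M[i]| >= n (last index if never)."""
    for i in range(1, len(M)):
        if abs(M[i]) >= n:
            return i
    return len(M) - 1

M = [0.0, 0.8, -1.2, 2.1, -0.5, 3.4, -2.8, 4.9, 1.0, -5.2]

hits = []
for n in (1, 2, 3):
    Tn = first_hit(M, n)
    stopped = [M[min(i, Tn)] for i in range(len(M))]
    # Before T_n the path is strictly inside (-n, n); after T_n it is frozen.
    assert all(abs(x) < n for x in stopped[:Tn])
    hits.append(Tn)

print(hits)  # increasing hitting indices: [2, 3, 5]
```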

We now define the space of integrands associated with integrators $M \in \mathcal{M}_{loc}$.

Definition 3.70 ($H_{2,loc}^M([0,\infty)\times\mathcal{S})$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ and $M \in \mathcal{M}_{loc}$, the space $H_{2,loc}^M([0,\infty)\times\mathcal{S})$, and sometimes $H_{2,loc}^M$ when there is no confusion, is the collection of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times\mathcal{S}$ so that:

1. $v(t,\omega)$ is predictable, meaning measurable on the product space $([0,\infty)\times\mathcal{S},\mathcal{P})$, with $\mathcal{P}$ the predictable sigma algebra (definition 5.10, book 7).

2. For almost all $\omega$:
$$\int_0^t v^2(s,\omega)\,d\langle M\rangle_s < \infty \text{ for all } t < \infty. \qquad (3.57)$$

Remark 3.71 The following comments provide additional insights to this definition, and its relationship with earlier spaces of integrands.

1. Property 2 of the above definition can also be stated:
"For all $t$, the integral in 3.57 is finite for almost all $\omega$."
In general it is dangerous to switch "all" and "almost all" qualifiers in such statements, since an uncountable intersection, and thus a potentially unmeasurable set, may result. But note that given $\omega$, if:
$$\int_0^t v^2(s,\omega)\,d\langle M\rangle_s < \infty$$
for some $t$, then this integral is finite over every subinterval of $[0,t]$. Hence if $A_t \subset \mathcal{S}$ denotes the collection of $\omega$ with finite integral of $v^2(s,\omega)$ over $[0,t]$, then this alternative statement implies that $\mu[A_t] = 1$ for all $t$. But $\{A_t\}$ is a decreasing nested collection of sets with $A_t \subset A_s$ for $t \ge s$, so $A_\infty \equiv \bigcap_t A_t = \bigcap_n A_n$ with integer $n$, and hence $\mu[A_\infty] = 1$.
2. $H_{2,loc}^M([0,\infty)\times\mathcal{S})$ contains all continuous adapted processes. This follows because such processes are predictable by corollary 5.17 of book 7, and Lebesgue-Stieltjes integrability of continuous functions is proved in proposition 2.54 of book 5.

3. Note that for $M \in \mathcal{M}_2 \subset \mathcal{M}_{loc}$:
$$H_2^M \subsetneq H_{2,loc}^M.$$
First, all martingales are local martingales by book 7's corollary 5.85. Inclusion then follows because the measurability requirement of being predictable is the same, and the constraint in 3.11:
$$E\left[\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s\right] < \infty,$$
implies 3.57. But 3.57 does not even imply the weaker constraint that $\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s < \infty$ for some $\omega$.
So once the current integration theory is developed, the above collection of integrands "expands" the space allowed in definition 3.9 for continuous $L^2$-bounded martingales $M \in \mathcal{M}_2$. But there will be a small price to pay for so expanding this space of integrands. That is, while $\int_0^t v(s,\omega)\,dM_s(\omega)$ is definable for $t < \infty$, in general we cannot extend this definition to $t = \infty$, as was possible given the square integrability condition of 3.11.

4. For Brownian motion $B$, which is a continuous martingale and thus a local martingale, $H_2([0,\infty)\times\mathcal{S})$ of definition 2.31 and $H_{2,loc}^B([0,\infty)\times\mathcal{S})$ are not directly comparable. The earlier Brownian integrands had weaker measurability requirements than does $H_{2,loc}^B$, in that they were required to be adapted and measurable compared to predictable (recall remark 5.20, book 7), but a stronger integrability assumption in 2.27, which is comparable to that of 3.11 with $M \equiv B$.
A simple example for the integrability constraint is that $v(t,\omega) \equiv 1$ is an element of $H_{2,loc}^B$, since for all $\omega$:
$$\int_0^t v^2(s,\omega)\,d\langle B\rangle_s = \langle B\rangle_t = t,$$
by 2.18. But $\int_0^\infty v^2(s,\omega)\,d\langle B\rangle_s$ does not exist for any $\omega$, and thus $v(t,\omega) \notin H_2([0,\infty)\times\mathcal{S})$.
Hence, for continuous $L^2$-bounded martingales, the current section will expand the class of allowable integrands, while for Brownian motion, the comparison is more subtle. For predictable integrands in $H_2([0,\infty)\times\mathcal{S})$, however, the current section generalizes the earlier Itô results. For example, by book 7's corollary 5.17, if $v(t,\omega) \in H_2([0,\infty)\times\mathcal{S})$ is left continuous, then it is predictable.

A stopping time result below, along with proposition 3.64 above, will justify the applicability of the $L^2$-bounded martingale integration results to the current more general set-up. But first a technical result is needed, which has a simple and intuitive conclusion but a somewhat lengthy justification.

Proposition 3.72 (Stopping a Lebesgue-Stieltjes integral) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, then for any real $t \ge 0$:
$$T_t' \equiv \inf\left\{r \ge 0 \,\Big|\, \int_0^r v^2(s,\omega)\,d\langle M\rangle_s \ge t\right\},$$
is a stopping time.
Proof. Let $X_t$ be a process defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ by:
$$X_t = \begin{cases} \int_0^t v^2(s,\omega)\,d\langle M\rangle_s, & \omega \in A_\infty, \\ 0, & \omega \notin A_\infty, \end{cases}$$
where $A_\infty$ is the set of measure 1 on which the integral is well-defined and finite for all $t$ (1 of remark 3.71). As $\langle M\rangle_t$ is increasing and continuous in $t$ (proposition 6.12, book 7), it is left as an exercise to show that $X_t$ is continuous for all $\omega$ (see exercise 3.73). As a continuous process, if it can be shown that $X_t$ is also adapted, then $\inf\{r \ge 0 \,|\, X_r \ge t\}$ is a stopping time by book 7's proposition 5.60. Then by completeness of the filtration, $T_t'$ is a stopping time, since $X_t = \int_0^t v^2(s,\omega)\,d\langle M\rangle_s$ $\mu$-a.e.
To prove that $X_t$ is adapted, we first approximate this integral. Recalling proposition 1.18 of book 5, for fixed $t < \infty$, given $n$ let $N \equiv n2^n + 1$ and define sets $\{A_j^{(n)}\}_{j=1}^N$ by:
$$A_j^{(n)} = \begin{cases} \{(s,\omega) \in [0,t]\times\mathcal{S} \,|\, (j-1)2^{-n} \le v^2(s,\omega) < j2^{-n}\}, & 1 \le j \le N-1, \\ \{(s,\omega) \in [0,t]\times\mathcal{S} \,|\, n \le v^2(s,\omega)\}, & j = N. \end{cases}$$
Since $v$ and hence $v^2$ is predictable, it is progressively measurable by book 7's proposition 5.19, and so $A_j^{(n)} \in \sigma[\mathcal{B}([0,t]) \times \sigma_t(\mathcal{S})]$ for all $n$ and $j$. And this is then true for all $t$.

Next define
$$v_n^2(s,\omega) = (j-1)2^{-n}, \qquad (s,\omega) \in A_j^{(n)},\ 1 \le j \le N.$$
Then $\{v_n^2(s,\omega)\}_{n=1}^\infty$ is an increasing sequence of simple processes with $v_n^2(s,\omega) \to v^2(s,\omega)$ for all $(s,\omega) \in [0,t]\times\mathcal{S}$. Thus by Lebesgue's monotone convergence theorem of book 5's proposition 2.21:
$$\int_0^t v_n^2(s,\omega)\,d\langle M\rangle_s \to \int_0^t v^2(s,\omega)\,d\langle M\rangle_s, \text{ for all } t < \infty.$$

Further, this convergence is to $X_t$ for all $t$ on the subset $A_\infty$ of $\mathcal{S}$ of measure 1 on which this latter integral exists. Hence, if $\int_0^t v_n^2(s,\omega)\,d\langle M\rangle_s$ is $\sigma_t(\mathcal{S})$-measurable for all $n$, then $\sigma_t(\mathcal{S})$-measurability of $X_t$ will follow from book 5's corollary 1.10.
Now:
$$\int_0^t v_n^2(s,\omega)\,d\langle M\rangle_s = \sum\nolimits_{j=1}^N (j-1)2^{-n} \int_0^t \chi_{A_j^{(n)}}(s,\omega)\,d\langle M\rangle_s,$$
and so the $\sigma_t(\mathcal{S})$-measurability of this integral follows from the $\sigma_t(\mathcal{S})$-measurability of $\int_0^t \chi_A(s,\omega)\,d\langle M\rangle_s$ for all $A \in \sigma[\mathcal{B}([0,t]) \times \sigma_t(\mathcal{S})]$. To prove this last statement we use the monotone class theorem of book 5's proposition 1.30.
Let $\mathcal{C}$ be the collection of sets $A \subset [0,t]\times\mathcal{S}$ for which $\int_0^t \chi_A(s,\omega)\,d\langle M\rangle_s$ is $\sigma_t(\mathcal{S})$-measurable. First, $\mathcal{C}$ contains all rectangular sets of the form $(a,b]\times B$, where $(a,b] \subset [0,t]$ and $B \in \sigma_t(\mathcal{S})$, since then
$$\int_0^t \chi_A(s,\omega)\,d\langle M\rangle_s = \chi_B(\omega)\left[\langle M\rangle_{b\wedge t} - \langle M\rangle_a\right],$$
and $\sigma_t(\mathcal{S})$-measurability follows because $\langle M\rangle_t$ is adapted (proposition 6.12, book 7) and $b\wedge t \le t$. Hence $\mathcal{C}$ contains the collection of all such sets, which is a semi-algebra. Also, $\mathcal{C}$ contains the associated algebra of finite disjoint unions of such sets by linearity of the integral: if $A = \bigcup_{i=1}^m A_i$ with disjoint $A_i = (a_i,b_i]\times B_i$:
$$\int_0^t \chi_A(s,\omega)\,d\langle M\rangle_s = \sum\nolimits_{i=1}^m \chi_{B_i}(\omega)\left[\langle M\rangle_{b_i\wedge t} - \langle M\rangle_{a_i}\right].$$
To show that $\mathcal{C}$ is a monotone class, let $\{A_n\}_{n=1}^\infty \subset \mathcal{C}$ be a monotone collection of sets, meaning either $A_n \subset A_{n+1}$ for all $n$, or $A_{n+1} \subset A_n$ for all $n$, and let $A = \lim_{n\to\infty} A_n$. This limit is defined respectively as $\bigcup_{n=1}^\infty A_n$ or $\bigcap_{n=1}^\infty A_n$, and in either case $\chi_{A_n}(s,\omega) \to \chi_A(s,\omega)$ pointwise. Thus by the integrability of $\chi_{[0,t]\times\mathcal{S}}$, Lebesgue's dominated convergence theorem (proposition 2.43, book 5) obtains that for all $\omega$:
$$\int_0^t \chi_{A_n}(s,\omega)\,d\langle M\rangle_s \to \int_0^t \chi_A(s,\omega)\,d\langle M\rangle_s.$$
By corollary 1.10 of book 5, $\sigma_t(\mathcal{S})$-measurability is preserved in such limits, and thus $A \in \mathcal{C}$.
Hence $\mathcal{C}$ is a monotone class that contains the algebra of sets that generates $\sigma[\mathcal{B}([0,t]) \times \sigma_t(\mathcal{S})]$. Thus $\sigma[\mathcal{B}([0,t]) \times \sigma_t(\mathcal{S})] \subset \mathcal{C}$, and $X_t$ is adapted by the above remarks.

Exercise 3.73 Prove that $X_t$ above is continuous for all $\omega$. Hint: Recall the proof for the Lebesgue integral in proposition 3.33 of book 3, but using Lebesgue's monotone convergence theorem of book 5's proposition 2.21.
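The dyadic approximation in the proof above can be imitated on a grid. In the following sketch (the sampled values of $v^2$ and of the increasing process $\langle M\rangle$ are hypothetical), $v_n^2 \equiv \min(n, \lfloor 2^n v^2\rfloor 2^{-n})$ increases to $v^2$, and the associated Lebesgue-Stieltjes sums increase to the exact sum, mirroring the monotone convergence step:

```python
# Sketch of the dyadic approximation: v_n^2 = min(n, floor(2^n v^2)/2^n)
# increases to v^2, so the Lebesgue-Stieltjes sums against an increasing
# process <M> increase to the exact sum (monotone convergence).
import math

def vn_sq(x_sq, n):
    """Dyadic lower approximation of x_sq at level n, capped at n."""
    return n if x_sq >= n else math.floor(x_sq * 2 ** n) / 2 ** n

def ls_sum(f_vals, A):
    """sum_i f[i] * (A[i+1] - A[i]), approximating int f dA, A increasing."""
    return sum(f_vals[i] * (A[i + 1] - A[i]) for i in range(len(A) - 1))

# Hypothetical grid data: v^2 sampled on steps, <M> nondecreasing on nodes.
v_sq = [0.3, 1.7, 2.9, 0.05, 4.8, 1.1]
qvar = [0.0, 0.2, 0.5, 0.9, 1.0, 1.6, 2.0]

exact = ls_sum(v_sq, qvar)
approx = [ls_sum([vn_sq(x, n) for x in v_sq], qvar) for n in range(1, 12)]

assert all(a <= b + 1e-12 for a, b in zip(approx, approx[1:]))  # increasing
assert approx[-1] <= exact + 1e-12                              # from below
print(exact, approx[-1])
```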

In the process of proving this needed result on the stopping times $T_t'$, an important conclusion was derived about integrals with respect to $\langle M\rangle_t$, which we codify as a corollary. This result will be generalized below in the section Stochastic Integrals w.r.t. Continuous B.V. Processes.

Corollary 3.74 Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, let $Y_t$ be the process defined $\mu$-a.e. by:
$$Y_t \equiv \int_0^t v^2(s,\omega)\,d\langle M\rangle_s.$$

Then $Y_t$ is an adapted process, and for almost all $\omega$ is continuous in $t$.
More generally, if $u(t,\omega)$ is predictable and for almost all $\omega$:
$$\int_0^t |u(s,\omega)|\,d\langle M\rangle_s < \infty \text{ for all } t < \infty,$$
then
$$Z_t \equiv \int_0^t u(s,\omega)\,d\langle M\rangle_s,$$
is an adapted process, and for almost all $\omega$ is continuous in $t$.
Proof. The statement on $Y_t$ is a restatement of that proved in proposition 3.72. For $Z_t$, if $\omega$ is in the set of probability 1 on which this integral constraint is satisfied, split $u(s,\omega) = u^+(s,\omega) - u^-(s,\omega)$ as in book 5's definition 2.36, where $u^+(s,\omega)$ and $u^-(s,\omega)$ are nonnegative. Since $|u(s,\omega)| = u^+(s,\omega) + u^-(s,\omega)$, the integral constraint above now applies to both functions. With apparent notation, split $Z_t = Z_t^+ - Z_t^-$ and note that the above proof for $Y_t$ can now be used for each of $Z_t^+$ and $Z_t^-$ separately, and the result follows.

With the result of proposition 3.72, we can now use stopping times to reduce the integrators and integrands of this section to fit the context of the prior section on integration with respect to continuous $L^2$-bounded martingales. Once this is done, we will only need to check that this reduction can be accomplished in a consistent way, so as to form the basis of the definition of a generalized stochastic integral.

Proposition 3.75 (Reducing $\mathcal{M}_{loc}$-integrators and $H_{2,loc}^M$-integrands) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, if $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, then there exists a localizing sequence $\{R_n\}$ so that $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$.
Proof. Let $T_n$ be defined as in proposition 3.69, and $T_n'$ as in proposition 3.72. Define:
$$R_n \equiv T_n \wedge T_n' \equiv \min\{T_n, T_n'\},$$
which is a stopping time by book 7's proposition 5.60. Also, $R_n > 0$ for almost all $\omega$, since $T_n > 0$ by continuity of $M_t$, and $T_n' > 0$ by continuity of this integral by corollary 3.74.
Now $N_n \equiv M^{T_n} \in \mathcal{M}_2$ for all $n$ by proposition 3.69. Since $M^{R_n} = N_n^{T_n'}$, the process $M^{R_n}$ is continuous ($\mu$-a.e.) and is a martingale by Doob's optional stopping theorem of book 7's proposition 5.84. Also, since $R_n \le T_n$, we have by the definition of $T_n$ that:
$$\left\|M_t^{R_n}\right\|_{L^2} \le n,$$
and so $M^{R_n} \in \mathcal{M}_2$.
Next, $X_t \equiv \chi_{(0,R_n]}(t)$ is left continuous for almost all $\omega$, since $R_n > 0$ $\mu$-a.e. Thus $X_0 \equiv 0$, and this implies $X_0^{-1}(B) \in \{\emptyset, \mathcal{S}\} \subset \sigma_0(\mathcal{S})$ for all Borel sets $B$, and for fixed $t > 0$:
$$X_t^{-1}(1) = \{\omega \,|\, R_n(\omega) \ge t\}, \qquad X_t^{-1}(0) = \{\omega \,|\, R_n(\omega) < t\}.$$
Since $R_n$ is a stopping time, it is an optional time by book 7's proposition 5.60, and thus both sets are $\sigma_t(\mathcal{S})$-measurable. Since left continuous and adapted, $\chi_{(0,R_n]}(t)$ is predictable by book 7's corollary 5.17, and consequently so is $v(t,\omega)\chi_{(0,R_n]}(t)$.
That $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$ now follows by definition of the Lebesgue-Stieltjes integral. By book 7's corollary 6.16, $\langle M^{R_n}\rangle_s = \langle M\rangle_{s\wedge R_n}$, and thus $\langle M^{R_n}\rangle_s = \langle M\rangle_s$ for $s \le t\wedge R_n$. Hence:
$$\int_0^t v^2(s,\omega)\chi_{(0,R_n]}(s)\,d\langle M^{R_n}\rangle_s = \int_0^{t\wedge R_n} v^2(s,\omega)\,d\langle M\rangle_s \le n < \infty.$$

Finally, that $\{R_n\}$ is a localizing sequence for $M$ requires only that $R_n \to \infty$ with probability 1. Assume that there is a set $A \subset \mathcal{S}$ so that for all $n$, $R_n \le K < \infty$ on $A$. Then since $\{T_n\}$ is a localizing sequence, it follows that for all $n$, $T_n' \le K < \infty$ on $A$. This implies that for all $n$ and $\omega \in A$:
$$\inf\left\{r \ge 0 \,\Big|\, \int_0^r v^2(s,\omega)\,d\langle M\rangle_s \ge n\right\} \le K,$$
and thus for all $n$ and $\omega \in A$:
$$\int_0^K v^2(s,\omega)\,d\langle M\rangle_s \ge n.$$
By definition of $H_{2,loc}^M$, the set $A$ must have measure 0.

Exercise 3.76 Prove that if $\{R_n'\}$ is any sequence of stopping times with $R_n' \le R_n$ for all $n$, then such a sequence also obtains $M^{R_n'} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n']}(t) \in H_2^{M^{R_n'}}$. Thus if $R_n' \to \infty$ with probability 1, then it is also a localizing sequence for $M$ by book 7's proposition 5.77.
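A discrete sketch of proposition 3.75's construction (all grid data below are hypothetical): $T_n$ is the first time $|M| \ge n$, $T_n'$ is the first time the running integral $\int_0^r v^2\,d\langle M\rangle$ reaches $n$, and $R_n = T_n \wedge T_n'$.

```python
# Sketch of the localizing times R_n = T_n ^ T_n' of proposition 3.75 on
# a grid, with hypothetical path M, integrand v, and increasing <M>.

def hitting_index(vals, level):
    """First index with vals[i] >= level (last index if never)."""
    for i, x in enumerate(vals):
        if x >= level:
            return i
    return len(vals) - 1

M = [0.0, 0.6, -1.4, 2.2, -3.1, 1.8, 4.5, -2.0]
v = [2.0, 0.5, 3.0, 1.0, 2.5, 0.5, 1.0]          # integrand on steps
qv = [0.0, 0.3, 0.7, 1.2, 1.5, 2.1, 2.6, 3.0]    # <M>_t, nondecreasing

# Running integral X_r = int_0^r v^2 d<M> on grid nodes.
X = [0.0]
for i in range(len(v)):
    X.append(X[-1] + v[i] ** 2 * (qv[i + 1] - qv[i]))

for n in (1, 2, 4):
    Tn = hitting_index([abs(x) for x in M], n)
    Tn_prime = hitting_index(X, n)
    Rn = min(Tn, Tn_prime)
    # Strictly before R_n, both |M| and the running integral stay below n.
    assert all(abs(M[i]) < n for i in range(1, Rn))
    assert all(X[i] < n for i in range(Rn))
    print(n, Tn, Tn_prime, Rn)
```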

3.3.2 The General Stochastic Integral

By proposition 3.75, if $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, the integral:
$$I_t^{M^{R_n}}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega),$$
is well defined as a stochastic integral by the previous section, since $M^{R_n}$ is a continuous $L^2$-bounded martingale and $v(s,\omega)\chi_{(0,R_n]}(s) \in H_2^{M^{R_n}}$.

So it is only natural to attempt to define the general integral as follows.

Definition 3.77 (Final stochastic integral in $H_{2,loc}^M$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, if $M \in \mathcal{M}_{loc}$, $v(t,\omega) \in H_{2,loc}^M$, and $\{R_n\}$ is defined as in proposition 3.75 above, define for $t < \infty$:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega) \equiv \lim_{n\to\infty} \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega), \qquad (3.58)$$
where the integrals on the right are understood to be the continuous versions identified in 3.30. In other words, with apparent notation:
$$I_t^M[v(s,\omega)] \equiv \lim_{n\to\infty} I_t^{M^{R_n}}\left[v(s,\omega)\chi_{(0,R_n]}(s)\right].$$
More generally, we define for $0 \le t \le t' < \infty$:
$$\int_t^{t'} v(s,\omega)\,dM_s(\omega) \equiv \lim_{n\to\infty} \int_t^{t'} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega), \qquad (3.59)$$
where the integrals on the right are defined as in 3.19.

Remark 3.78 (On well-definedness) To justify that this definition makes sense, and will indeed be "final," we need to confirm: 1) that the limit exists, and 2) that this limit is consistent, and in a reasonable sense independent of the sequence of stopping times used. The next proposition will address both issues as follows:

1. The limit in 3.58 will exist because it will be shown that if $m \le n$, the integral using $R_n$ agrees with the integral using $R_m$ when $t \le R_m$. Thus each term of this sequence only extends the integral values from one stopping time to the next, without changing values before a given stopping time.

2. For consistency, we must prove that if we use any almost surely unbounded sequence of stopping times $\{R_n'\}$ with $R_n' \le R_n$, then the same limit results. This follows from exercise 3.76, which proves that proposition 3.75 remains valid for any sequence with $R_n' \le R_n$, and the almost surely unbounded requirement assures that $\{R_n'\}$ is a localizing sequence for $M$. Intuitively, this unbounded requirement assures that the stopped integrals in the limit in 3.58 eventually cover the interval $(0,t]$ for any $t$.

Once these definitional details are settled and a couple of technical results proved, proposition 3.83 will then prove that as defined above, $I_t^M(\omega)$ is a continuous local martingale.
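The stabilization described in 1 of this remark can be seen on a grid. In the sketch below (path, integrand and stopping indices are hypothetical), the stopped-integral partial sums for nested stopping indices $R_m \le R_n$ agree up to $R_m$, so each term of the limit in 3.58 only extends its predecessor:

```python
# Discrete sketch of remark 3.78: with nested stopping indices R_m <= R_n,
# the stopped-integral partial sums agree up to R_m, so successive terms
# of the limit in (3.58) only extend earlier ones.

def stopped_integral_path(v, M, R):
    """Running sums of int_0^t v * chi_(0,R] dM^R on the grid."""
    out = [0.0]
    for i in range(len(M) - 1):
        dM = M[min(i + 1, R)] - M[min(i, R)]   # increment of stopped path
        out.append(out[-1] + (v[i] if i < R else 0.0) * dM)
    return out

M = [0.0, 1.0, -0.5, 0.7, 2.0, 1.2, 2.5, 0.9]   # hypothetical path
v = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0, 1.0]       # hypothetical integrand

R_m, R_n = 3, 6                                  # nested stopping indices
I_m = stopped_integral_path(v, M, R_m)
I_n = stopped_integral_path(v, M, R_n)

# Agreement on [0, R_m]: the n-th term extends the m-th without change,
# and the m-th term is constant after R_m.
assert all(abs(a - b) < 1e-12 for a, b in zip(I_m[:R_m + 1], I_n[:R_m + 1]))
print(I_m)
print(I_n)
```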

Proposition 3.79 (Final stochastic integral in $H_{2,loc}^M$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, and $\{R_n\}$ be the sequence of stopping times defined in proposition 3.75. Then for all $t < \infty$, if $m \le n$:
$$\int_0^{t\wedge R_m} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_m]}(s)\,dM_s^{R_m}(\omega). \qquad (3.60)$$
In other words, for finite $t \le R_m$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_m]}(s)\,dM_s^{R_m}(\omega).$$
In addition, if $\{R_n'\}$ is a sequence of stopping times with $R_n' \le R_n$, then for finite $t \le R_n \wedge R_n'$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n']}(s)\,dM_s^{R_n'}(\omega), \qquad (3.61)$$
and hence if $\{R_n'\}$ is almost surely unbounded as $n \to \infty$, the same limit results in 3.58 for all $t$.
Hence the limits in 3.58 and 3.59 exist, and are well-defined in the sense that the same limits are obtained with any sequence of almost surely unbounded stopping times $\{R_n'\}$ with $R_n' \le R_n$.
Proof. Let $\{Q_n\}$ be a sequence of stopping times with $Q_n \le R_n$. Then by 3.56, which is applicable since $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$:
$$\int_0^{t\wedge Q_n} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n\wedge Q_n]}(s)\,dM_s^{R_n\wedge Q_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,Q_n]}(s)\,dM_s^{Q_n}(\omega).$$
Hence for $t \le Q_n$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,Q_n]}(s)\,dM_s^{Q_n}(\omega).$$
Letting $Q_n = R_m$ proves 3.60, while $Q_n = R_n \wedge R_n'$ proves 3.61.


R1
Remark 3.80 (On 0 v(s; !)dMs (!)) It was noted in 3 of remark 3.71
that one consequence of the weaker integrability condition on integrands in
M of 3.57, vis-a-vis that of 2.27 for H or 3.11 for H M ; is that in general
H2;loc 2 2
such integrals are not de…ned over [0; 1): For example, given Mt = Bt ; a
Brownian motion on (S; (S); t (S); )u:c: which is also a continuous local
B : Indeed, this integrand is in H M
martingale, note that v(s; !) 1 2 H2;loc 2;loc
for all Mt 2 Mloc by book 7’s proposition 6.12. In this case Tn0 = n though
Tn is not readily characterized other than the assurance of book 7’s corollary
2.72 that with probability 1; Tn < 1 for all n and Tn ! 1 (why?): Thus
with probability 1; Rn < 1 for all n and Rn ! 1: Of course we know that
Rn ! 1 -a.e. for all continuous local martingales by proposition 3.75,
158CHAPTER 3 INTEGRALS W.R.T. CONTINUOUS LOCAL MARTINGALES

but in the case of Brownian motion we have additional insight to the -a.e.
boundedness of this localizing sequence.
If we apply de…nition 3.77:
Z 1 Z 1
Rn
v(s; !)dBs (!) lim (0;Rn ] (s)dBs (!)
0 n!1 0
= lim BRn (!) (!):
n!1

Note that the integral valuation is justi…ed with a simple process approxi-
mation of (0;Rn ] (s); then using 3.1 and continuity of Bt : Since Rn ! 1
almost certainly, this is equivalent to limt!1 Bt (!): By book 7’s corollary
2.71, lim inf t!1 Bt (!) = 1 and lim supt!1 Bt (!) = 1 with probability
1; so there is no hope of giving this integral meaning as a random variable.
For a general continuous local martingale Mt much of the above example
applies, other than the conclusion that with probability 1; Rn < 1 for all n:
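This failure of convergence can be visualized with a simulated path (a minimal Monte Carlo sketch; the discretization step, seed, and levels are arbitrary choices, and the discrete hitting times only approximate the continuous ones): the values $B_{T_n}$ at the successive exit levels have magnitude at least $n$, so any limit along the $T_n$ would have to be unbounded.

```python
# Sketch of remark 3.80: along a simulated Brownian path, the values at
# the hitting times T_n of |B| = n have magnitude about n, so B_{R_n}
# cannot settle to a finite limit as n grows.
import random

random.seed(1)
dt = 0.01
B = [0.0]
for _ in range(200_000):            # horizon t = 2000, ample for small levels
    B.append(B[-1] + random.gauss(0.0, dt ** 0.5))

levels = (1.0, 2.0, 3.0)
exits, i = [], 0
for n in levels:
    while abs(B[i]) < n:            # T_n: first grid time with |B| >= n
        i += 1
    exits.append(B[i])

# Each exit value B_{T_n} has magnitude >= n by construction.
assert all(abs(x) >= n for x, n in zip(exits, levels))
print(exits)
```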

Corollary 3.81 Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, and $\{R_n\}$ be the sequence of stopping times defined in proposition 3.75. If $\int_0^t v(s,\omega)\,dM_s(\omega)$ is defined as in 3.58 for $t < \infty$, then for any $m$:
$$\int_0^{t\wedge R_m} v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_m]}(s)\,dM_s^{R_m}(\omega), \qquad t < \infty. \qquad (3.62)$$
Proof. From 3.60, let $n \to \infty$ and apply 3.58.

The next corollary generalizes proposition 3.64 and its corollary 3.66 to integrals with respect to local martingales.

Corollary 3.82 (On stopping stochastic integrals) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$, $v(t,\omega) \in H_{2,loc}^M$, and $T$ be a stopping time. Then for $t < \infty$:
$$\int_0^{t\wedge T} v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega) = \int_0^t v(s,\omega)\,dM_s^T(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s^T(\omega). \qquad (3.63)$$
Proof. By 3.58, the last three integrals in 3.63 can be expressed:
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega) = \lim_{n\to\infty} \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega),$$
$$\int_0^t v(s,\omega)\,dM_s^T(\omega) = \lim_{n\to\infty} \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,d\left(M^T\right)_s^{R_n}(\omega),$$
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s^T(\omega) = \lim_{n\to\infty} \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,d\left(M^T\right)_s^{R_n}(\omega).$$
The identities in 3.55 and 3.56 can be applied to the integrals on the right to derive equality of these three integrals. In detail:
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n\wedge T}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,d\left(M^T\right)_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,d\left(M^T\right)_s^{R_n}(\omega).$$
For the first equality, 3.61 obtains that for $t \le T \wedge R_n$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s^T(\omega).$$
Letting $n \to \infty$ and applying 3.58 yields for $t \le T$:
$$\int_0^t v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s^T(\omega),$$
and so for all $t$:
$$\int_0^{t\wedge T} v(s,\omega)\,dM_s(\omega) = \int_0^{t\wedge T} v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega).$$
The last step of the proof is to show that for all $t$:
$$\int_0^{t\wedge T} v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega),$$
which is apparent by definition for $t \le T$, so assume that $t \ge T$. By 3.58, then 3.55:
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega) = \lim_{n\to\infty} \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \lim_{n\to\infty} \int_0^{t\wedge T} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \lim_{n\to\infty} \int_0^{T} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega).$$
Hence $\int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega)$ is constant for $t \ge T$, and the proof is complete.

The final result is that the stochastic integral defined by 3.58 is a continuous local martingale. See also corollary 3.86 below.

Proposition 3.83 (Continuity of stochastic integral) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, and $\{R_n\}$ be the sequence of stopping times defined in proposition 3.75. Then as defined in 3.58:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega),$$
is a continuous local martingale with localizing sequence $\{R_n\}$. If $\{R_n'\}$ is a sequence of stopping times with $R_n' \le R_n$ for all $n$, that is almost surely unbounded as $n \to \infty$, then $I_t^M(\omega)$ is also reduced by $\{R_n'\}$.
Proof. First, this integral is continuous in $t$ because $\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega)$ is continuous in $t$ for each $n$ by proposition 3.27, and by proposition 3.64:
$$\int_0^t v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega), \qquad t \le R_n.$$
In addition, this process is reduced by the stopping times $\{R_n\}$ since by proposition 3.79:
$$\left(\int_0^{\cdot} v(s,\omega)\,dM_s(\omega)\right)_t^{R_n} \equiv \int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega),$$
and thus, since $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$, these stopped processes are martingales by proposition 3.27.
Proposition 3.79 proves that $I_t^M(\omega)$ can also be defined with such $\{R_n'\}$. That $I_t^M(\omega)$ is also reduced by $\{R_n'\}$ is then book 7's proposition 5.77.

Remark 3.84 (Integrals as integrands) Note that as in the case of continuous $L^2$-bounded martingale integrators, we once again are in the situation where the resulting integral process can be used as an integrator under the same theory. Specifically, if $M_t \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M([0,\infty)\times\mathcal{S})$, then $I_t^M(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega) \in \mathcal{M}_{loc}$, and can again be used as a new integrator of integrands $w(t,\omega) \in H_{2,loc}^{I^M}([0,\infty)\times\mathcal{S})$. Thus we again investigate the associative law below.
3.3.3 Properties of Stochastic Integrals

As might be expected, most of the properties of stochastic integrals with respect to continuous $L^2$-bounded martingales generalize to integrals with respect to continuous local martingales. The basic idea of the proof is to start with definition 3.58, and recall that $M^{R_n}$ is an $L^2$-bounded martingale and $v(s,\omega)\chi_{(0,R_n]}(s) \in H_2^{M^{R_n}}$ by proposition 3.75. Thus if the integrals $\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega)$ have a given property by the prior section, this property will be shared by the general integral $\int_0^t v(s,\omega)\,dM_s(\omega)$, as long as this property can be proved to be preserved in the limit.

The major exception to such a derivation is Itô's $M$-isometry, which we cannot expect to apply in general, since the constraint on $v(t,\omega) \in H_{2,loc}^M$ in 3.57, that $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)\,d\langle M\rangle_s < \infty \text{ for all } t,$$
does not imply for any interval $[0,t]$ that:
$$E\left[\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right] < \infty.$$
However, if $v^2(s,\omega)$ is so integrable over $[0,t]$, it is tempting to speculate that the isometry will apply, and indeed it does.

We summarize the important properties of the stochastic integral, with much of the hard work already done in the prior section. For 4 of proposition 3.85 and then corollary 3.86, note that while $H_2^M([0,\infty)\times\mathcal{S})$ is formally defined in definition 3.9 for $L^2$-bounded continuous martingales $M_t \in \mathcal{M}_2$, this definition applies equally well when $M_t \in \mathcal{M}_{loc}$ is a continuous local martingale. This is because then $\langle M\rangle_t$ is again a continuous, adapted and increasing process by book 7's proposition 6.12.

Proposition 3.85 (Properties of the stochastic integral) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ be a continuous local martingale with $M_0 = 0$, and $v(s,\omega), w(s,\omega) \in H_{2,loc}^M([0,\infty)\times\mathcal{S})$. Let $0 \le t < t' < \infty$.

1. Given $r$ with $t < r < t'$, then for almost all $\omega \in \mathcal{S}$:
$$\int_t^{t'} v(s,\omega)\,dM_s(\omega) = \int_t^{r} v(s,\omega)\,dM_s(\omega) + \int_r^{t'} v(s,\omega)\,dM_s(\omega).$$

2. For constant $a \in \mathbb{R}$, $av(s,\omega) + w(s,\omega) \in H_{2,loc}^M([0,\infty)\times\mathcal{S})$, and for almost all $\omega \in \mathcal{S}$:
$$\int_t^{t'} [av(s,\omega) + w(s,\omega)]\,dM_s(\omega) = a\int_t^{t'} v(s,\omega)\,dM_s(\omega) + \int_t^{t'} w(s,\omega)\,dM_s(\omega).$$

3. For any stopping time $T$:
$$\int_0^{t\wedge T} v(s,\omega)\,dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\,dM_s(\omega) = \int_0^t v(s,\omega)\,dM_s^T(\omega).$$

4. Itô $M$-Isometry and Mean equal to zero: If $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times\mathcal{S})$ also satisfies for given $t' \le \infty$:
$$E\left[\int_0^{t'} v^2(s,\omega)\,d\langle M\rangle_s\right] < \infty,$$
then for all $t \le t'$:
$$E\left[\int_0^t v(s,\omega)\,dM_s(\omega)\right] = 0, \qquad (3.64)$$
and:
$$E\left[\left(\int_0^t v(s,\omega)\,dM_s(\omega)\right)^2\right] = E\left[\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right]. \qquad (3.65)$$
Thus if $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, then 3.64 and 3.65 are valid for all $t \le \infty$.

Proof. With $\{R_n\}$ defined as in proposition 3.75, or with an appropriately defined sequence denoted $\{R_n'\}$ above with $R_n' \le R_n$ for all $n$ and $R_n' \to \infty$ $\mu$-a.e., we have by 4 of proposition 3.27 that $\mu$-a.e.:
$$\int_t^{t'} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_t^{r} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) + \int_r^{t'} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega).$$
Taking limits and applying 3.59 obtains 1.

Now $av(s,\omega) + w(s,\omega)$ is predictable, and it follows that $av(s,\omega) + w(s,\omega) \in H_{2,loc}^M$ since:
$$[av(s,\omega) + w(s,\omega)]^2 \le 2\left[a^2 v^2(s,\omega) + w^2(s,\omega)\right].$$
Using the same argument as for 1 completes the proof of 2.

Since $M^{R_n} \in \mathcal{M}_2$ and $v(s,\omega)\chi_{(0,R_n]}(s) \in H_2^{M^{R_n}}$ by proposition 3.75, it follows from proposition 3.64 that for any $n$:
$$\int_0^{t\wedge T} v(s,\omega)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\,dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\,d\left(M^T\right)_s^{R_n}(\omega),$$
where for the last integral we use that $\left(M^{R_n}\right)_s^T = \left(M^T\right)_s^{R_n}$. It is an exercise to check that $v(s,\omega)\chi_{(0,T]}(s) \in H_{2,loc}^M([0,\infty)\times\mathcal{S})$, and thus we can take limits and apply definition 3.77 to finish the proof of 3.
Rn
For 4; again since M Rn 2 M2 and v(t; !) (0;Rn ] (t) 2 H2M ; 3.63 and
3.23 obtain that for t t0 and all n :
Z Z t 2 Z Z t
Rn
v(s; !) (0;Rn ] dMs (!) d = v 2 (s; !) (0;Rn ] d M Rn s
d :
0 0
((1))
For the integral on the right in 1; M Rn s = hM iR n
s = hM is for s Rn by
book 7’s corollary 6.16 and d M Rn s = 0 for s > Rn ; so:
Z Z t Z Z t
v 2 (s; !) (0;Rn ] d M
Rn
s
d = v 2 (s; !) (0;Rn ] d hM is d :
0 0

Now v 2 (s; !) [0;Rn ] ! v 2 (s; !) pointwise, and by assumption v 2 (s; !) is


d hM i-integrable -a.e. So Lebesgue’s dominated convergence theorem of
book 5’s proposition 2.43 obtains:
Z t Z t
2
v (s; !) (0;Rn ] d hM is ! v 2 (s; !)d hM is ; -a.e.
0 0

Then by Lebesgue’s monotone convergence theorem of that book’s corollary


2.23:
Z Z t Z Z t
2
v (s; !) (0;Rn ] d hM is d ! v 2 (s; !)d hM is d ;
0 0
164CHAPTER 3 INTEGRALS W.R.T. CONTINUOUS LOCAL MARTINGALES

and thus:
Z Z t Z Z t
2 Rn
v (s; !) (0;Rn ] d M s
d ! v 2 (s; !)d hM is d : ((2))
0 0

For the integral on the left in (1); 3.63 and 3.58 obtain -a.e.:
Z t^Rn 2 Z t 2 Z t 2
Rn
v(s; !)dMs (!) = v(s; !) (0;Rn ] dMs (!) ! v(s; !)dMs (!) :
0 0 0
Rt ((3))
Now by proposition 3.83, Nt 0 v(s; !)dM s (!) is a local martingale with
the same localizing sequence as Mt ; and thus Nt^Rn is a martingale. By
Doob’s martingale maximal inequality of book 7’s proposition 5.91:
h i Z Z t^Rn 2
2 2 2
E Nt^Rn E sup (Ns^Rn ) 4E Nt^R n
=4 v(s; !)dMs (!) d :
s t 0

As Rn ! 1 -a.e., Fatou’s lemma of book 5’s corollary 2.19 and the inte-
grability assumption on v 2 (s; !) obtain:
h i Z Z t^Rn 2
E (Nt )2 4 lim inf v(s; !)dMs (!) d
n!1 0
Z Z t
= 4 v 2 (s; !)d hM is d < 1: ((4))
0

2
Thus Nt^R n
! Nt2 -a.e., and Nt^R2
n
(Nt )2 for all n with (Nt )2 inte-
grable. Lebesgue’s dominated convergence theorem (corollary 2.45, book 5)
and (3) then obtain:
" Z 2
# " Z 2
#
t^Rn t
E v(s; !)dMs (!) !E v(s; !)dMs (!) : ((5))
0 0

Combining this with (1) and (2) completes the proof of 3.65 for t t0 :
For 3.64, it follows from 3.63 and 4 of proposition 3.27 that for each $n$:
$$E\left[\int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega)\right]=E\left[\int_0^{t} v(s,\omega)\chi_{[0,R_n]}\,dM_s^{R_n}(\omega)\right]=0.\tag{6}$$
Further, with $\overline N_t$ from the prior step:
$$\sup_n\left|\int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega)\right|\le\sup_{r\le t}\left|\int_0^{r} v(s,\omega)\,dM_s(\omega)\right|\equiv\overline N_t.$$
Now $\overline N_t$ is integrable for each $t\le t_0$ by (4) and the Cauchy-Schwarz inequality (corollary 3.48, book 4), since $\lambda$ is a probability measure:
$$E\left[\overline N_t\right]\le E\left[(\overline N_t)^2\right]^{1/2}<\infty.$$
By definition 3.77 and 3.63:
$$\int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega)\to\int_0^{t} v(s,\omega)\,dM_s(\omega),\quad\lambda\text{-a.e.},$$
as $n\to\infty$, so 3.64 now follows from Lebesgue's dominated convergence theorem (corollary 2.45, book 5) and (6).

Corollary 3.86 (When $I^M_t(\omega)$ is $L^2$-bounded) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M\in\mathcal{M}_{loc}$ be a continuous local martingale with $M_0=0$, and $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ that also satisfies for given $t_0\le\infty$:
$$E\left[\int_0^{t_0} v^2(s,\omega)\,d\langle M\rangle_s\right]<\infty.$$
Then $I^M_t(\omega)$ of proposition 3.83 is a continuous, $L^2$-bounded martingale for $t\le t_0$. Thus if $v(s,\omega)\in H_2^M([0,\infty)\times S)$, then $I^M_t(\omega)$ is a continuous, $L^2$-bounded martingale.

Proof. Now $I^M_t(\omega)$ of proposition 3.83 is a continuous local martingale, which by the last steps of the previous proof satisfies for each $t\le t_0$:
$$E\left[\sup_n\left|I^M_{t\wedge R_n}(\omega)\right|\right]\le E\left[\overline N_t\right]<\infty.$$
Thus $I^M_t(\omega)$ is a martingale for $t\le t_0$ by book 7's proposition 5.88, and is $L^2$-bounded by 3.65.

Next is the final version of the Kunita-Watanabe inequality, generalizing proposition 3.42 to continuous local martingales. The proof is shorter because most of the hard work is now done. As noted previously, this inequality is named for a 1967 result by Hiroshi Kunita (1937–2008) and Shinzo Watanabe (1935– ). The reader is referred to section 3.2.6 for background on the various measures used in this statement.

Proposition 3.87 (Kunita-Watanabe inequality) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M_t,N_t\in\mathcal{M}_{loc}$ be continuous local martingales with $M_0=N_0=0$, and $v(t,\omega),w(t,\omega)$ measurable processes. Then for almost all $\omega$:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,d|\langle M,N\rangle|_s\le\left(\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle N\rangle_s\right)^{1/2},\tag{3.66}$$
for all $0\le t\le\infty$.
Proof. Using book 7's proposition 5.75, $H_n^X\equiv\inf\{t\,|\,|X_t|\ge n\}$ are stopping times that reduce $X_t\equiv M_t$ or $X_t\equiv N_t$, and thus $T_n\equiv H_n^M\wedge H_n^N$ reduces both (that book's proposition 5.77). Since $M_t^{T_n}$ and $N_t^{T_n}$ are bounded and thus $L^2$-bounded martingales on $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, it follows from 3.42 that for almost all $\omega$:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,d\left|\langle M^{T_n},N^{T_n}\rangle\right|_s\le\left(\int_0^t v^2(s,\omega)\,d\langle M^{T_n}\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\,d\langle N^{T_n}\rangle_s\right)^{1/2},$$
for all $0\le t\le\infty$. Now $\langle M^{T_n}\rangle_s=\langle M\rangle_{s\wedge T_n}$ by book 7's corollary 6.16, and so $d\langle M^{T_n}\rangle_s=0$ for $s>T_n$. Then by definition of the Lebesgue-Stieltjes integral (chapter 2, book 5):
$$\int_0^t v^2(s,\omega)\,d\langle M^{T_n}\rangle_s=\begin{cases}\int_0^t v^2(s,\omega)\,d\langle M\rangle_s,& t\le T_n,\\ \int_0^{T_n} v^2(s,\omega)\,d\langle M\rangle_s,& t>T_n.\end{cases}$$
In other words, for all $n$:
$$\int_0^t v^2(s,\omega)\,d\langle M^{T_n}\rangle_s=\int_0^{t\wedge T_n} v^2(s,\omega)\,d\langle M\rangle_s,$$
with the same result for the $d\langle N^{T_n}\rangle_s$-integral.

For the $d\langle M^{T_n},N^{T_n}\rangle_s$-integral, first $\langle M^{T_n},N^{T_n}\rangle_s=\langle M,N\rangle_{s\wedge T_n}$ by 6.20 of book 7 and that book's corollary 6.16. From the discussion on signed measures of section 3.2.6, it follows that the total variation process satisfies:
$$\left|\langle M^{T_n},N^{T_n}\rangle\right|_s=|\langle M,N\rangle|_{s\wedge T_n},$$
and thus by the above argument:
$$\int_0^t |v(s,\omega)w(s,\omega)|\,d\left|\langle M^{T_n},N^{T_n}\rangle\right|_s=\int_0^{t\wedge T_n} |v(s,\omega)w(s,\omega)|\,d|\langle M,N\rangle|_s.$$
Combining results, it follows from 3.42 that with probability 1:
$$\int_0^{t\wedge T_n} |v(s,\omega)w(s,\omega)|\,d|\langle M,N\rangle|_s\le\left(\int_0^{t\wedge T_n} v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}\left(\int_0^{t\wedge T_n} w^2(s,\omega)\,d\langle N\rangle_s\right)^{1/2},\tag{*}$$
for all $0\le t\le\infty$. As this is true for each $n$, the intersected set has probability 1 and (*) is then true $\lambda$-a.e. for all $n$. Letting $n\to\infty$ obtains 3.66.
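As a concrete sanity check of 3.66: if $M=N=B$, a Brownian motion, then $\langle M\rangle_s=\langle N\rangle_s=s$ and $|\langle M,N\rangle|_s=s$, so the inequality reduces to the classical Cauchy-Schwarz inequality for Lebesgue integrals. The sketch below confirms this numerically for the arbitrary illustrative integrands $v(s)=s$ and $w(s)=\cos 3s$:

```python
import math

t, n = 2.0, 100000
ds = t / n

def v(s):
    return s

def w(s):
    return math.cos(3.0 * s)

# Midpoint-rule approximations of the three integrals in 3.66, with
# d<M> = d<N> = d|<M,N>| = ds in the Brownian motion case.
lhs = sum(abs(v((i + 0.5) * ds) * w((i + 0.5) * ds)) for i in range(n)) * ds
rhs = math.sqrt(sum(v((i + 0.5) * ds) ** 2 for i in range(n)) * ds) * \
      math.sqrt(sum(w((i + 0.5) * ds) ** 2 for i in range(n)) * ds)
# Kunita-Watanabe predicts lhs <= rhs; the inequality is strict here
# since v and w are not proportional.
```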

The following result generalizes proposition 3.26, and is useful for splitting integrators below.

Proposition 3.88 (Integral is linear w.r.t. integrators) Given the space $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M_t,N_t\in\mathcal{M}_{loc}$ be continuous local martingales with $M_0=N_0=0$, and $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)\cap H^N_{2,loc}([0,\infty)\times S)$. Then $v(s,\omega)\in H^{M+N}_{2,loc}([0,\infty)\times S)$ and if $0\le t<\infty$:
$$\int_0^t v(s,\omega)\,d[M_s+N_s](\omega)=\int_0^t v(s,\omega)\,dM_s(\omega)+\int_0^t v(s,\omega)\,dN_s(\omega),\quad\lambda\text{-a.e.}\tag{3.67}$$
Proof. With $\{R_n^M\}$ and $\{R_n^N\}$ defined as in proposition 3.75, let $R_n\equiv R_n^M\wedge R_n^N$. Then $v(s,\omega)\chi_{[0,R_n]}\in H_2^{M^{R_n}}([0,\infty)\times S)\cap H_2^{N^{R_n}}([0,\infty)\times S)$ and $[M_s+N_s]^{R_n}\in\mathcal{M}_2$ by proposition 3.75, and so by 3.25:
$$\int_0^t v(s,\omega)\chi_{[0,R_n]}\,d[M_s+N_s]^{R_n}(\omega)=\int_0^t v(s,\omega)\chi_{[0,R_n]}\,dM_s^{R_n}(\omega)+\int_0^t v(s,\omega)\chi_{[0,R_n]}\,dN_s^{R_n}(\omega),\quad\lambda\text{-a.e.}$$
Then by 3.63:
$$\int_0^{t\wedge R_n} v(s,\omega)\,d[M_s+N_s](\omega)=\int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega)+\int_0^{t\wedge R_n} v(s,\omega)\,dN_s(\omega),$$
$\lambda$-a.e., for each $n$. As $R_n\to\infty$ $\lambda$-a.e., 3.67 is proved.

Turning next to quadratic variation and covariation of integral processes, the next result generalizes proposition 3.45 and corollary 3.47.

Proposition 3.89 (Quadratic variation and covariation of integrals) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M,N\in\mathcal{M}_{loc}$ be continuous local martingales with $M_0=N_0=0$, $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ and $w(s,\omega)\in H^N_{2,loc}([0,\infty)\times S)$.

1. The covariation of $I^M_t(\omega)=\int_0^t v(s,\omega)\,dM_s(\omega)$ and $J^N_t(\omega)=\int_0^t w(s,\omega)\,dN_s(\omega)$ is given by:
$$\left\langle I^M,J^N\right\rangle_t=\int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s,\tag{3.68}$$
where $\langle M,N\rangle_t$ is the covariation process of book 7's definition 6.25.

2. The quadratic variation of $I^M_t(\omega)=\int_0^t v(s,\omega)\,dM_s(\omega)$ is given by:
$$\left\langle I^M\right\rangle_t=\int_0^t v^2(s,\omega)\,d\langle M\rangle_s.\tag{3.69}$$

Proof. Since 2 follows from 1 as in corollary 3.47, we focus on covariation. By book 7's proposition 6.29, 3.68 will follow if it is shown that as defined, $\langle I^M,J^N\rangle_t$ is an adapted, continuous, bounded variation process with $\langle I^M,J^N\rangle_0=0$, and such that:
$$X_t(\omega)\equiv\int_0^t v(s,\omega)\,dM_s(\omega)\int_0^t w(s,\omega)\,dN_s(\omega)-\int_0^t v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s\tag{*}$$
is a continuous local martingale. We leave the various properties of $\langle I^M,J^N\rangle_t$ as defined in 3.68 as exercise 3.90, and prove that $X_t(\omega)$ is a continuous local martingale.

With $\{R_n^M\}$ and $\{R_n^N\}$ defined as in proposition 3.75 in terms of $M$ and $N$, let $R_n\equiv R_n^M\wedge R_n^N$. Then since $R_n\to\infty$ $\lambda$-a.e. and both $R_n\le R_n^M$ and $R_n\le R_n^N$, it follows that $R_n$ can play the role of $R'_n$ of proposition 3.79 for both the $dM_s(\omega)$ and $dN_s(\omega)$ integrals, and for reducing both $M$ and $N$. We show that $X^{R_n}_t(\omega)$ is a continuous martingale.

To this end, we have by definition and then 3.63:
$$X^{R_n}_t(\omega)=\int_0^{t\wedge R_n} v(s,\omega)\,dM_s(\omega)\int_0^{t\wedge R_n} w(s,\omega)\,dN_s(\omega)-\int_0^{t\wedge R_n} v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s$$
$$=\int_0^{t} v(s,\omega)\chi_{[0,R_n]}(s)\,dM^{R_n}_s(\omega)\int_0^{t} w(s,\omega)\chi_{[0,R_n]}(s)\,dN^{R_n}_s(\omega)-\int_0^{t\wedge R_n} v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s.$$
For the third integral, by splitting the signed measure into positive and negative parts (recall remark 3.52), we obtain pointwise:
$$\int_0^{t\wedge R_n} v(s,\omega)w(s,\omega)\,d\langle M,N\rangle_s=\int_0^{t} v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(s)\,d\langle M,N\rangle_s.$$
We claim that for $s\le R_n$, $\langle M,N\rangle_s=\left\langle M^{R_n},N^{R_n}\right\rangle_s$. To see this, book 7's definition 6.25 defines:
$$\langle M,N\rangle_s=\frac14\left[\langle M+N\rangle_s-\langle M-N\rangle_s\right].$$
Now both $M$ and $N$ are reduced by $R_n$, and thus so too are the local martingales $M\pm N$. Since $(M\pm N)^T=M^T\pm N^T$ for any processes by definition, it follows from book 7's corollary 6.16 that:
$$\langle M\pm N\rangle_s=\left\langle M^{R_n}\pm N^{R_n}\right\rangle_s,\quad s\le R_n,$$
and thus for $s\le R_n$:
$$\langle M,N\rangle_s=\left\langle M^{R_n},N^{R_n}\right\rangle_s.$$
Substituting into $X^{R_n}_t(\omega)$ above obtains:
$$X^{R_n}_t(\omega)=\int_0^{t} v(s,\omega)\chi_{[0,R_n]}(s)\,dM^{R_n}_s(\omega)\int_0^{t} w(s,\omega)\chi_{[0,R_n]}(s)\,dN^{R_n}_s(\omega)-\int_0^{t} v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(s)\,d\left\langle M^{R_n},N^{R_n}\right\rangle_s.$$
This is a continuous martingale by proposition 3.45 since $M^{R_n},N^{R_n}\in\mathcal{M}_2$, $v(s,\omega)\chi_{[0,R_n]}(s)\in H_2^{M^{R_n}}$ and $w(s,\omega)\chi_{[0,R_n]}(s)\in H_2^{N^{R_n}}$. Thus $X_t(\omega)$ in (*) is a continuous local martingale, and is reduced by $\{R_n\}$. This proves 3.68 by book 7's proposition 6.29, subject to checking details in exercise 3.90.

Exercise 3.90 Prove that $\langle I^M,J^N\rangle_t$ as defined in 3.68 is an adapted, continuous, bounded variation process with $\langle I^M,J^N\rangle_0=0$. Hint: For continuity, revise the proof for the Lebesgue integral in proposition 3.33 of book 3, but using Lebesgue's monotone convergence theorem of book 5's proposition 2.21. For the other details, look ahead to proposition 4.14.

The next result generalizes the associative law of proposition 3.53 to the current context, and this will be further generalized in propositions 3.95 and 4.22 below.

Proposition 3.91 (Associative law of stochastic integration) Given the filtered probability space $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M\in\mathcal{M}_{loc}$ and $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$. If $I^M_t(\omega)=\int_0^t v(s,\omega)\,dM_s(\omega)$ and $w(s,\omega)\in H^{I^M}_{2,loc}$, then $v(t,\omega)w(t,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ and $\lambda$-a.e.:
$$\int_0^t w(s,\omega)\,dI^M_s(\omega)=\int_0^t v(s,\omega)w(s,\omega)\,dM_s(\omega),\quad\text{for all }t.\tag{3.70}$$
Proof. First, $\langle I^M\rangle_t=\int_0^t v^2(s,\omega)\,d\langle M\rangle_s$ by 3.69, and since $v^2(s,\omega)$ is predictable, it is progressively measurable (proposition 5.19, book 7). Thus $v^2(s,\cdot)$ is Borel measurable for all $\omega$ by Fubini's theorem of book 5's proposition 5.19, and it follows from book 5's proposition 3.8 that:
$$\int_0^t w^2(s,\omega)\,d\langle I^M\rangle_s=\int_0^t v^2(s,\omega)w^2(s,\omega)\,d\langle M\rangle_s.$$
Thus $\lambda$-a.e.:
$$\int_0^t w^2(s,\omega)\,d\langle I^M\rangle_s<\infty\quad\text{for all }t<\infty,$$
if and only if $\lambda$-a.e.:
$$\int_0^t v^2(s,\omega)w^2(s,\omega)\,d\langle M\rangle_s<\infty\quad\text{for all }t<\infty.$$
In other words, $w(t,\omega)\in H^{I^M}_{2,loc}$ if and only if $v(t,\omega)w(t,\omega)\in H^M_{2,loc}$, and so the integral on the right in 3.70 is well defined.

With $\{R_n^M\}$ and $\{R_n^{I^M}\}$ defined as in proposition 3.75 in terms of $M$ and $I^M$, let $R_n\equiv R_n^M\wedge R_n^{I^M}$. Then since $R_n\to\infty$ $\lambda$-a.e. and both $R_n\le R_n^M$ and $R_n\le R_n^{I^M}$, $R_n$ can play the role of $R'_n$ of proposition 3.79 for both the $dM_s$ and $dI^M_s(\omega)$ integrals, and for reducing both $M$ and $I^M$. For notational simplicity, let $I\equiv I^M$.

By remark 3.84 and proposition 3.75, $M^{R_n},I^{R_n}\in\mathcal{M}_2$, $w(t,\omega)\chi_{[0,R_n]}(t)\in H_2^{I^{R_n}}$ and $v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(t)\in H_2^{M^{R_n}}$. Thus by proposition 3.53, $\lambda$-a.e.:
$$\int_0^t w(s,\omega)\chi_{[0,R_n]}\,dI_s^{R_n}(\omega)=\int_0^t v(s,\omega)w(s,\omega)\chi_{[0,R_n]}\,dM_s^{R_n}(\omega),$$
for all $t$. It then follows from 3.62, $\lambda$-a.e.:
$$\int_0^{t\wedge R_n} w(s,\omega)\,dI^M_s(\omega)=\int_0^{t\wedge R_n} v(s,\omega)w(s,\omega)\,dM_s(\omega),$$
for all $t$. Since $R_n\to\infty$ $\lambda$-a.e., this proves 3.70.

For the generalization of the associative law in proposition 3.95 below, we require the following result on products of independent (definition 1.36, book 7) local martingales. As will be seen in 4.13 of proposition 4.27, products of two semimartingales or two local martingales produce semimartingales in general. But in the case of independent local martingales, a local martingale is obtained.

Proposition 3.92 (On independent LMs) Let $M,N\in\mathcal{M}_{loc}$ be independent continuous local martingales on $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$. Then for all $t$:
$$\langle M,N\rangle_t=0,\tag{3.71}$$
and $MN\in\mathcal{M}_{loc}$. Conversely, if $MN\in\mathcal{M}_{loc}$ for continuous local martingales $M,N\in\mathcal{M}_{loc}$, then 3.71 is satisfied.

Proof. Since $(MN)_0=0$, if $\langle M,N\rangle_t=0$ for all $t$ then $MN\in\mathcal{M}_{loc}$ by book 7's proposition 6.29. To prove that $\langle M,N\rangle_t=0$ for independent local martingales for all $t$, let $T_n^M$ and $T_n^N$ be localizing sequences for $M$ and $N$ respectively. We can assume that $\left|M_t^{T_n^M}\right|\le n$ and $\left|N_t^{T_n^N}\right|\le n$ by book 7's corollary 5.76, and then $T_n\equiv T_n^M\wedge T_n^N$ is a localizing sequence for both local martingales by that book's proposition 5.77. Further, the stopped processes $M_t^{T_n}$ and $N_t^{T_n}$ are independent martingales for each $n$.

To see this, recall by book 7's definition 1.36 that $M$ and $N$ are independent processes if $\sigma(M)$ and $\sigma(N)$ are independent sigma algebras in the sense of 5 of that book's summary 1.25. Here:
$$\sigma(M)\equiv\sigma\left(M_t^{-1}(C)\,|\,t\in[0,\infty),\ C\in\mathcal{B}(\mathbb{R})\right),$$
and similarly for $\sigma(N)$. By definition, $\sigma_t(M)$ and $\sigma_t(N)$, the natural filtrations generated by $M$ and $N$ (definition 5.4, book 7), are contained in these independent sigma algebras. Now by that book's proposition 5.19, $M$ and $N$ are progressively measurable relative to $\sigma_t(M)$ and $\sigma_t(N)$, respectively, since continuous and adapted to these filtrations by definition. Then that book's proposition 5.67 asserts that for any $n$, $M_t^{T_n}$ and $N_t^{T_n}$ are adapted to $\sigma_t(M)$ and $\sigma_t(N)$, respectively, and thus for the natural filtrations, $\sigma_t(M_t^{T_n})\subset\sigma_t(M)$ and $\sigma_t(N_t^{T_n})\subset\sigma_t(N)$. Consequently, $\sigma_t(M_t^{T_n})$ and $\sigma_t(N_t^{T_n})$ are independent sigma algebras, and $M_t^{T_n}$ and $N_t^{T_n}$ are independent martingales for all $n$.

To evaluate $\langle M^{T_n},N^{T_n}\rangle_t$ we use book 7's proposition 6.28, that this covariation process is the uniform limit in probability over $s\in[0,t]$ of any sequence of finite covariation processes $Q_s^{\Pi_m}(M^{T_n},N^{T_n})$ with $\mu_m\to 0$. Here $\{\Pi_m\}$ is a sequence of partitions of $[0,t]$, $\Pi_m=\{t_i\}_{i=0}^m$ with $t_0=0$ and $t_m=t$; the mesh size $\mu_m\equiv\max_i\{|t_i-t_{i-1}|\}$; and:
$$Q_s^{\Pi_m}(M^{T_n},N^{T_n})\equiv\sum_{i=1}^m\left(M^{T_n}_{t_i\wedge s}-M^{T_n}_{t_{i-1}\wedge s}\right)\left(N^{T_n}_{t_i\wedge s}-N^{T_n}_{t_{i-1}\wedge s}\right).$$
We prove below that for each $t$, as $m\to\infty$:
$$E\left[\left(Q_t^{\Pi_m}(M^{T_n},N^{T_n})\right)^2\right]\to 0.\tag{*}$$
Assume this result. Now $Q_t^{\Pi_m}(M^{T_n},N^{T_n})-\langle M^{T_n},N^{T_n}\rangle_t\to_P 0$ as noted above, and so for any $\epsilon>0$:
$$\Pr\left[\left|Q_t^{\Pi_m}(M^{T_n},N^{T_n})-\langle M^{T_n},N^{T_n}\rangle_t\right|>\epsilon\right]\to 0,\quad\text{as }m\to\infty.$$
In addition, (*) assures that $Q_t^{\Pi_m}(M^{T_n},N^{T_n})\to_P 0$ as $m\to\infty$ by Chebyshev's inequality (proposition 3.33, book 4). This then implies (exercise 3.93) that $\langle M^{T_n},N^{T_n}\rangle_t=0$ for each $t$, $\lambda$-a.e. Thus $\langle M^{T_n},N^{T_n}\rangle_t=0$ for all rational $t$, $\lambda$-a.e., and then by continuity of $\langle M^{T_n},N^{T_n}\rangle_t$ we obtain that $\lambda$-a.e., $\langle M^{T_n},N^{T_n}\rangle_t=0$ for all $t$. But then by book 7's (6.20) and corollary 6.16, $\langle M,N\rangle_t=\langle M^{T_n},N^{T_n}\rangle_t$ for $t\le T_n$, and it follows that $\langle M,N\rangle_t=0$ for all $t$, $\lambda$-a.e.

To prove (*) we simplify notation, denoting $M_t^{T_n}$ and $N_t^{T_n}$ by $M_t$ and $N_t$. Now independence of $\{M_{t_i}\}_{i=0}^m$ and $\{N_{t_i}\}_{i=0}^m$ assures independence of $\{M_{t_i}-M_{t_{i-1}}\}_{i=1}^m$ and $\{N_{t_i}-N_{t_{i-1}}\}_{i=1}^m$ by book 2's proposition 3.56. Recalling proposition 3.53 of that book, that the joint distribution of independent variates is the product of the marginal distributions:
$$E\left[\left(Q_t^{\Pi_m}(M,N)\right)^2\right]=\sum_{i=1}^m E\left[\left(M_{t_i}-M_{t_{i-1}}\right)^2\right]E\left[\left(N_{t_i}-N_{t_{i-1}}\right)^2\right]+\sum_{i\neq j}E\left[\left(M_{t_i}-M_{t_{i-1}}\right)\left(M_{t_j}-M_{t_{j-1}}\right)\right]E\left[\left(N_{t_i}-N_{t_{i-1}}\right)\left(N_{t_j}-N_{t_{j-1}}\right)\right].$$
The second summation is 0 since for example if $t_i<t_j$, the tower and measurability properties of conditional expectations (proposition 5.26, book 6) obtain:
$$E\left[\left(M_{t_i}-M_{t_{i-1}}\right)\left(M_{t_j}-M_{t_{j-1}}\right)\right]=E\left[\left(M_{t_i}-M_{t_{i-1}}\right)E\left[M_{t_j}-M_{t_{j-1}}\,|\,\sigma_{t_{j-1}}(S)\right]\right]=0.$$
Working with the first summation, the tower property yields $E\left[M_{t_i}M_{t_{i-1}}\right]=E\left[M_{t_{i-1}}^2\right]$ and thus $E\left[\left(M_{t_i}-M_{t_{i-1}}\right)^2\right]=E\left[M_{t_i}^2\right]-E\left[M_{t_{i-1}}^2\right]$. Hence since $N_0=0$:
$$E\left[\left(Q_t^{\Pi_m}(M,N)\right)^2\right]\le\max_i E\left[\left(M_{t_i}-M_{t_{i-1}}\right)^2\right]\sum_{i=1}^m E\left[\left(N_{t_i}-N_{t_{i-1}}\right)^2\right]=\max_i\left[E\left[M_{t_i}^2\right]-E\left[M_{t_{i-1}}^2\right]\right]E\left[N_t^2\right].$$
Now $E\left[N_t^2\right]<\infty$ since $N_t=N_t^{T_n}$ is bounded by $n$, and since $E\left[M_t^2\right]$ is continuous by proposition 6.18 of book 7, it follows that (*) is satisfied as the mesh size $\mu_m\to 0$.

Conversely if $MN\in\mathcal{M}_{loc}$, then since $MN-\langle M,N\rangle_t\in\mathcal{M}_{loc}$ by book 7's proposition 6.29, it follows by subtraction that $\langle M,N\rangle_t\in\mathcal{M}_{loc}$. Book 7's proposition 6.1 then states that as a bounded variation local martingale, $\langle M,N\rangle_t$ is constant on any interval $[a,b]$ with probability 1. Thus 3.71 is satisfied since $\langle M,N\rangle_0=0$.

Exercise 3.93 Given random variables $\{X_m,Y\}$ on a probability space $(S,\lambda)$, verify that $|X_m-Y|\to_P 0$ and $|X_m|\to_P 0$ as $m\to\infty$ imply that $|Y|=0$ $\lambda$-a.e. Hint: $|Y|\le|X_m-Y|+|X_m|$.
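Proposition 3.92 can be visualized with $M$ and $N$ independent Brownian motions on $[0,T]$: the discrete covariation $Q_T^{\Pi_m}(M,N)=\sum_i\Delta M_i\,\Delta N_i$ should concentrate near $\langle M,N\rangle_T=0$, and the sample mean of $M_TN_T$ should be near $E[M_TN_T]=0$, consistent with $MN$ being a martingale started at 0. A simulation sketch with arbitrary parameters:

```python
import math
import random

random.seed(7)
n_paths, n_steps, T = 1500, 400, 1.0
dt = T / n_steps

cov_sum = 0.0    # accumulates the discrete covariation Q_T(M, N) per path
prod_sum = 0.0   # accumulates M_T * N_T per path
for _ in range(n_paths):
    M = N = Q = 0.0
    for _ in range(n_steps):
        dM = random.gauss(0.0, math.sqrt(dt))
        dN = random.gauss(0.0, math.sqrt(dt))  # drawn independently of dM
        Q += dM * dN
        M += dM
        N += dN
    cov_sum += Q
    prod_sum += M * N

cov_est = cov_sum / n_paths    # estimates <M,N>_T = 0
prod_est = prod_sum / n_paths  # estimates E[M_T N_T] = 0
```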

Remark 3.94 (On independent LMs) That the product of independent continuous local martingales with respect to a given filtration is a local martingale with respect to that same filtration is one of several results in this direction. Assuming continuity, the same result is true with respect to the respective natural filtrations, and in either case with local martingale replaced by martingale. Removing continuity, this result is only true for martingales with respect to the natural filtrations, and not local martingales. See A. Cherny (2006) for details on this and related martingale topics.

Proposition 3.95 (Generalized associative law of stochastic integration) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $\{M^{(j)}\}\subset\mathcal{M}_{loc}$ be independent (definition 1.36, book 7) and $v_j(s,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ for $j=1,\dots,n$. If:
$$I^M_t(\omega)=\sum_{j=1}^n\int_0^t v_j(s,\omega)\,dM^{(j)}_s(\omega),$$
and $w(s,\omega)\in H^{I^M}_{2,loc}$, then $v_j(t,\omega)w(t,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ for all $j$ and $\lambda$-a.e.:
$$\int_0^t w(s,\omega)\,dI^M_s(\omega)=\sum_{j=1}^n\int_0^t v_j(s,\omega)w(s,\omega)\,dM^{(j)}_s(\omega),\quad\text{for all }t.\tag{3.72}$$
Proof. As a sum of continuous local martingales, $I^M_t$ is a continuous local martingale by book 7's exercise 5.78. Then by that book's corollary 6.32, 3.68, 3.69 and the above proposition 3.92:
$$\left\langle I^M\right\rangle_t=\sum_{i=1}^n\sum_{j=1}^n\left\langle\int_0^{\cdot} v_i(s,\omega)\,dM^{(i)}_s(\omega),\int_0^{\cdot} v_j(s,\omega)\,dM^{(j)}_s(\omega)\right\rangle_t=\sum_{i=1}^n\sum_{j=1}^n\int_0^t v_i(s,\omega)v_j(s,\omega)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s=\sum_{j=1}^n\int_0^t v_j^2(s,\omega)\,d\left\langle M^{(j)}\right\rangle_s.$$
Now $v_j(s,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ is predictable by definition, and thus is measurable by book 7's proposition 5.19. Book 5's proposition 5.19 then obtains that $v_j(s,\cdot)$ is Borel measurable for all $\omega$, and it then follows from 3.69 and that book's proposition 3.8 that:
$$\int_0^t w^2(s,\omega)\,d\left\langle I^M\right\rangle_s=\sum_{j=1}^n\int_0^t v_j^2(s,\omega)w^2(s,\omega)\,d\left\langle M^{(j)}\right\rangle_s.$$
Thus $\lambda$-a.e.:
$$\int_0^t w^2(s,\omega)\,d\left\langle I^M\right\rangle_s<\infty\quad\text{for all }t<\infty,$$
if and only if $\lambda$-a.e.:
$$\int_0^t v_j^2(s,\omega)w^2(s,\omega)\,d\left\langle M^{(j)}\right\rangle_s<\infty\quad\text{for all }t<\infty,\text{ all }j.$$
In other words, $w(t,\omega)\in H^{I^M}_{2,loc}$ if and only if $v_j(t,\omega)w(t,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ for all $j$, and thus the integrals on the right in 3.72 are well defined.

Now if we define $I^{M^{(j)}}_t(\omega)\equiv\int_0^t v_j(s,\omega)\,dM^{(j)}_s(\omega)$, then $I^{M^{(j)}}_t(\omega)\in\mathcal{M}_{loc}$ by proposition 3.83. Further, the conclusion that $v_j(t,\omega)w(t,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ is equivalent to $w(t,\omega)\in H^{I^{M^{(j)}}}_{2,loc}([0,\infty)\times S)$ since $v_j(s,\cdot)$ is Borel measurable for all $\omega$ as noted above, and it again follows from 3.69 and book 5's proposition 3.8 that:
$$\int_0^t w^2(s,\omega)\,d\left\langle I^{M^{(j)}}\right\rangle_s=\int_0^t v_j^2(s,\omega)w^2(s,\omega)\,d\left\langle M^{(j)}\right\rangle_s.$$
Thus $v_j(t,\omega)w(t,\omega)\in H^{M^{(j)}}_{2,loc}([0,\infty)\times S)$ for all $j$ is equivalent to $w(t,\omega)\in\bigcap_{j=1}^n H^{I^{M^{(j)}}}_{2,loc}([0,\infty)\times S)$.

Finally, since $I^M_t(\omega)=\sum_{j=1}^n I^{M^{(j)}}_t(\omega)$, 3.67 and then 3.70 obtain $\lambda$-a.e.:
$$\int_0^t w(s,\omega)\,dI^M_s(\omega)=\sum_{j=1}^n\int_0^t w(s,\omega)\,dI^{M^{(j)}}_s(\omega)=\sum_{j=1}^n\int_0^t v_j(s,\omega)w(s,\omega)\,dM^{(j)}_s(\omega)\quad\text{for all }t,$$
which is 3.72.

3.3.4 Stochastic Dominated Convergence Theorem

In this section we prove a stochastic version of Lebesgue's dominated convergence theorem of book 5's proposition 2.43, applicable to continuous local martingale integrators. This will be generalized to semimartingale integrators in the next chapter.

Proposition 3.96 (Stochastic dominated convergence theorem) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M\in\mathcal{M}_{loc}$ and $\{v_n(s,\omega)\}_{n=1}^\infty\subset H^M_{2,loc}([0,\infty)\times S)$, and assume that $\lambda$-a.e., $v_n(s,\omega)\to 0$ for all $s$. If there exists $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ so that $\lambda$-a.e., $|v_n(s,\omega)|\le v(s,\omega)$ for all $s$, then for all fixed $T<\infty$:
$$\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right|\to_P 0.\tag{3.73}$$
More generally, $\int_0^t v_n(s,\omega)\,dM_s(\omega)$ converges to 0 in probability, uniformly in $t$ over every compact set.

Proof. Let $\{R_n\}$ be the localizing sequence of proposition 3.75 so that $M^{R_n}\in\mathcal{M}_2$ and $v(t,\omega)\chi_{[0,R_n]}(t)\in H_2^{M^{R_n}}$, and let $\{R'_n\}$ be defined by $R'_n=\inf\{t\,|\,\langle M\rangle_t\ge n\}$. By continuity of $\langle M\rangle_t$ (proposition 6.12, book 7), $R'_n$ is a sequence of stopping times (that book's proposition 5.60), and $S_n\equiv R_n\wedge R'_n$ (that book's proposition 5.77) is another localizing sequence for $M$. Tracing back through these stopping time definitions we now have $S_n\to\infty$ $\lambda$-a.e., and:
$$\left|M^{S_n}\right|\le n,\qquad\int_0^{S_n} v^2(s,\omega)\,d\langle M\rangle_s\le n,\qquad\langle M\rangle_{S_n}\le n.$$
Let $T,\epsilon>0$ and $\delta>0$ be given, and choose $N$ so that $\Pr[S_N\le T]<\delta$. Note that $N$ exists since $\Pr[S_n\le T]\ge\delta$ for all $n$ contradicts $\lambda$-a.e. unboundedness. Then by proposition 3.82:
$$\begin{aligned}
\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right|>\epsilon\right]&\le\Pr[S_N\le T]+\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right|>\epsilon,\ S_N>T\right]\\
&\le\delta+\Pr\left[\sup_{t\in[0,T]}\left|\int_0^{t\wedge S_N} v_n(s,\omega)\,dM_s(\omega)\right|>\epsilon,\ S_N>T\right]\\
&=\delta+\Pr\left[\sup_{t\in[0,T]}\left|\int_0^{t} v_n(s,\omega)\chi_{[0,S_N]}(s)\,dM_s^{S_N}(\omega)\right|>\epsilon,\ S_N>T\right]\\
&\le\delta+\Pr\left[\sup_{t\in[0,T]}\left|\int_0^{t} v_n(s,\omega)\chi_{[0,S_N]}(s)\,dM_s^{S_N}(\omega)\right|>\epsilon\right].
\end{aligned}$$
Recall that $M^{S_N}\in\mathcal{M}_2$ and $v(s,\omega)\chi_{[0,S_N]}(s)\in H_2^{M^{S_N}}$ by construction, and from $|v_n(s,\omega)|\le v(s,\omega)$ it follows that $v_n(s,\omega)\chi_{[0,S_N]}(s)\in H_2^{M^{S_N}}$ for all $n$, and thus $\int_0^t v_n(s,\omega)\chi_{[0,S_N]}(s)\,dM_s^{S_N}(\omega)$ is a martingale for all $n$ by proposition 3.27. Doob's martingale maximal inequality of book 7's proposition 5.46, and 3.23, then obtain:
$$\Pr\left[\sup_{t\in[0,T]}\left|\int_0^{t} v_n(s,\omega)\chi_{[0,S_N]}(s)\,dM_s^{S_N}(\omega)\right|>\epsilon\right]\le\frac{1}{\epsilon^2}E\left[\left(\int_0^{T} v_n(s,\omega)\chi_{[0,S_N]}(s)\,dM_s^{S_N}(\omega)\right)^2\right]=\frac{1}{\epsilon^2}E\left[\int_0^{T} v_n^2(s,\omega)\chi_{[0,S_N]}(s)\,d\langle M^{S_N}\rangle_s(\omega)\right].$$
Now $\lambda$-a.e., $v_n^2(s,\omega)\chi_{[0,S_N]}(s)\le v^2(s,\omega)\chi_{[0,S_N]}(s)$ and this upper bound is $d\langle M^{S_N}\rangle_s$-integrable since $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$. Thus by Lebesgue's dominated convergence theorem of book 5's corollary 2.45, as $n\to\infty$:
$$\int_0^{T} v_n^2(s,\omega)\chi_{[0,S_N]}(s)\,d\langle M^{S_N}\rangle_s(\omega)\to 0,\quad\lambda\text{-a.e.}$$
Then since:
$$\int_0^{T} v_n^2(s,\omega)\chi_{[0,S_N]}(s)\,d\langle M^{S_N}\rangle_s(\omega)\le\int_0^{T} v^2(s,\omega)\chi_{[0,S_N]}(s)\,d\langle M^{S_N}\rangle_s(\omega),\quad\lambda\text{-a.e.},$$
and this upper bound is $\lambda$-integrable, another application of dominated convergence obtains as $n\to\infty$:
$$E\left[\int_0^{T} v_n^2(s,\omega)\chi_{[0,S_N]}(s)\,d\langle M^{S_N}\rangle_s(\omega)\right]\to 0.$$
Combining results obtains that:
$$\limsup_{n\to\infty}\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right|>\epsilon\right]\le\delta,$$
and since $\delta>0$ is arbitrary, this limsup equals 0. As this sequence of probabilities is nonnegative, we obtain:
$$\lim_{n\to\infty}\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right|>\epsilon\right]=0,$$
which is 3.73.

Since every compact set is contained in an interval $[0,T]$, this proves uniform convergence in probability over all compact sets.

Corollary 3.97 (Stochastic dominated convergence theorem) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M\in\mathcal{M}_{loc}$ and $\{w_n(s,\omega)\}_{n=1}^\infty\subset H^M_{2,loc}([0,\infty)\times S)$, and assume that $\lambda$-a.e., $w_n(s,\omega)\to w(s,\omega)$ for all $s$ with $w(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$. If there exists $u(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ so that $\lambda$-a.e., $|w_n(s,\omega)|\le u(s,\omega)$ for all $s$, then uniformly in $t$ over compact sets:
$$\int_0^t w_n(s,\omega)\,dM_s(\omega)\to_P\int_0^t w(s,\omega)\,dM_s(\omega).\tag{3.74}$$
Proof. Let $v_n(s,\omega)\equiv w_n(s,\omega)-w(s,\omega)$ and $v(s,\omega)\equiv u(s,\omega)+|w(s,\omega)|$ in proposition 3.96. Then for all fixed $T<\infty$:
$$\sup_{t\in[0,T]}\left|\int_0^t w_n(s,\omega)\,dM_s(\omega)-\int_0^t w(s,\omega)\,dM_s(\omega)\right|\to_P 0,$$
which is 3.74 by the last sentence of the prior proof.

3.3.5 Stochastic Integrals via Riemann Sums

In this section, we generalize the proposition 3.55 result on integration with respect to $L^2$-bounded martingales to integrals defined relative to local martingales. However, based on the discussion preceding proposition 3.85 on the Itô $M$-isometry, one cannot expect convergence of Riemann sums in the context of $L^2(S)$-convergence. Instead we prove convergence in probability, and thus also convergence of a subsequence $\lambda$-a.e. by proposition 5.25 of book 2. See also Hunt and Kennedy (2004) for an alternative approach to $\lambda$-a.e. convergence.

Proposition 3.98 (Riemann sum approximation) Given $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $M\in\mathcal{M}_{loc}$ and $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$, and assume $v(s,\omega)$ is continuous in $s$ for almost all $\omega$. Then given partitions $\Pi_n$ of $[0,t]$:
$$0=t_0<t_1<\cdots<t_{n+1}=t,$$
with $\mu_n\equiv\max_{0\le i\le n}\{t_{i+1}-t_i\}\to 0$:
$$\sum_{i=0}^{n} v(t_i,\omega)\left(M_{t_{i+1}}(\omega)-M_{t_i}(\omega)\right)\to_P\int_0^t v(s,\omega)\,dM_s(\omega).\tag{3.75}$$
Thus there is a subsequence of partitions $\Pi_{n_k}$ so that the associated convergence exists pointwise $\lambda$-a.e.

Proof. Let a sequence of partitions as described be given. By proposition 3.75, there exists a localizing sequence $\{R_m\}$ so that $M^{R_m}\in\mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_m]}(t)\in H_2^{M^{R_m}}$. Define:
$$\bar R_m=\inf\{t\,|\,|v(t,\omega)|>m\}.$$
Then $\bar R_m$ is a stopping time by 4 of book 7's proposition 5.60 and $\bar R_m\to\infty$. Thus by exercise 3.76, if $R'_m\equiv R_m\wedge\bar R_m$ then $\{R'_m\}$ is a localizing sequence with $M^{R'_m}\in\mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R'_m]}(t)\in H_2^{M^{R'_m}}$. Further, $v(t,\omega)\chi_{(0,R'_m]}(t)$ is left continuous and locally bounded.

Thus by 3.53, for every $m$:
$$\sum_{i=0}^{n} v(t_i,\omega)\chi_{(0,R'_m]}(t_i)\left(M^{R'_m}_{t_{i+1}}(\omega)-M^{R'_m}_{t_i}(\omega)\right)\to_P\int_0^t v(s,\omega)\chi_{(0,R'_m]}(s)\,dM_s^{R'_m}(\omega)$$
as $\mu_n\to 0$. Applying 3.62, this is equivalent to:
$$\sum_{i=0}^{n} v(t_i,\omega)\chi_{(0,R'_m]}(t_i)\left(M^{R'_m}_{t_{i+1}}(\omega)-M^{R'_m}_{t_i}(\omega)\right)\to_P\int_0^{t\wedge R'_m} v(s,\omega)\,dM_s(\omega).\tag{*}$$
Define $A_m=\{\omega\,|\,R'_m\ge t\}$ and $\tilde A_m=\{\omega\,|\,R'_m<t\}$. Since $R'_m$ is a stopping time it is an optional time by book 7's proposition 5.60, and thus $A_m\in\sigma_t(S)$ for all $m$. Denoting the Riemann summation in (*) by $S_{n,m}(\omega)$ and the integral there by $I_m(\omega)$, with $S_n(\omega)$ and $I(\omega)$ denoting the corresponding terms in 3.75, note that $S_{n,m}(\omega)=S_n(\omega)$ and $I_m(\omega)=I(\omega)$ on $A_m$. Thus for any $m$ and $\epsilon>0$:
$$\Pr\left[|S_n(\omega)-I(\omega)|>\epsilon\right]=\Pr\left[\{|S_n(\omega)-I(\omega)|>\epsilon\}\cap A_m\right]+\Pr\left[\{|S_n(\omega)-I(\omega)|>\epsilon\}\cap\tilde A_m\right]\le\Pr\left[|S_{n,m}(\omega)-I_m(\omega)|>\epsilon\right]+\Pr\left[\tilde A_m\right].$$
Now $R'_m\to\infty$ $\lambda$-a.e., so for any $t$ it follows that $\chi_{\tilde A_m}(\omega)\to 0$ $\lambda$-a.e. as $m\to\infty$. By Lebesgue's dominated convergence theorem of book 5's corollary 2.45:
$$\lambda\left[\tilde A_m\right]=\int_S\chi_{\tilde A_m}(\omega)\,d\lambda\to 0,$$
so given $\delta>0$ there is an $m$ with $\Pr[\tilde A_m]<\delta/2$. For this $m$, since $S_{n,m}(\omega)\to_P I_m(\omega)$ by proposition 3.55, there is an $n$ so that $\Pr[|S_{n,m}(\omega)-I_m(\omega)|>\epsilon]<\delta/2$, and the result of 3.75 follows.

As noted above, the conclusion on $\lambda$-a.e. convergence follows from proposition 5.25 of book 2.

Remark 3.99 The above result is stated under the hypothesis that $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ is continuous ($\lambda$-a.e.). This is one of two options for the conclusion in 3.75. In order to apply proposition 3.55 we require $v(t,\omega)\chi_{(0,R_m]}(t)\in H_2^{M^{R_m}}$, as given by proposition 3.75, and that this process be left continuous and locally bounded for all $m$. Thus one option is to assume that $v(s,\omega)\in H^M_{2,loc}([0,\infty)\times S)$ is left continuous and locally bounded. We leave it as an exercise to complete the details for this result.

The approach taken here assumes that $v(s,\omega)$ is continuous. Then using 4 of book 7's proposition 5.60, the localizing sequence $\{R_m\}$ could be modified to $\{R'_m\}$ so that $v(t,\omega)\chi_{(0,R'_m]}(t)$ is locally bounded for all $m$.
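Proposition 3.98 is illustrated by the classical case $M=B$ and $v(s,\omega)=B_s(\omega)$, which is continuous. Here the left-endpoint sums telescope: $\sum_i B_{t_i}(B_{t_{i+1}}-B_{t_i})=\frac12\left(B_t^2-\sum_i(\Delta B_i)^2\right)$, and since the quadratic variation sums converge to $t$ (section 2.4), the Riemann sums converge to $(B_t^2-t)/2$. A simulation sketch with arbitrary parameters:

```python
import math
import random

random.seed(3)
n_paths, n_steps, T = 300, 2000, 1.0
dt = T / n_steps

err_sum = 0.0
for _ in range(n_paths):
    B = 0.0
    riemann = 0.0
    for _ in range(n_steps):
        dB = random.gauss(0.0, math.sqrt(dt))
        riemann += B * dB   # left-endpoint evaluation: v(t_i) = B_{t_i}
        B += dB
    # Compare the Riemann sum to the limit (B_T^2 - T)/2 on the same path.
    err_sum += abs(riemann - (B * B - T) / 2.0)

mean_abs_err = err_sum / n_paths   # shrinks as the mesh T/n_steps -> 0
```

The per-path error is exactly $\left(T-\sum_i(\Delta B_i)^2\right)/2$, whose standard deviation is of order $\sqrt{T/n}$, so refining the partition drives the sums to the stochastic integral, path by path along a subsequence.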
Chapter 4

Integrals w.r.t. Continuous Semimartingales

The final step in this book's development of stochastic integration is to expand the definition of an integral to integrators that are continuous semimartingales. Recall that by book 7's definition 6.20:

Definition 4.1 (Continuous semimartingale) An adapted process $X_t$ defined on a filtered probability space $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$ is a continuous semimartingale if it can be written as:
$$X_t=X_0+M_t+F_t,\tag{4.1}$$
where $X_0$ is $\sigma_0(S)$-measurable, $M_0=F_0=0$, $M_t$ is a continuous local martingale, and $F_t$ is a continuous adapted process of bounded variation.

Recall that by "of bounded variation" is meant in the sense of book 7's definition 2.84 with $p=1$. In other words, $v_1(F)<\infty$ $\lambda$-a.e. for any compact interval $[a,b]$, where $v_1(F)$ denotes the "weak" quadratic variation (see definition 4.6 below).

For the purpose of using such processes as integrators, we will also assume $X_0\equiv 0$.

Definition 4.2 ($\mathcal{M}_{Sloc}$) Given the filtered probability space $(S,\sigma(S),\sigma_t(S),\lambda)_{u.c.}$, let $\mathcal{M}_{Sloc}$ denote the collection of continuous semimartingales with $X_0=0$. In other words, if $X_t\in\mathcal{M}_{Sloc}$, then
$$X_t=M_t+F_t,$$
where $M_0=F_0=0$, $M_t$ is a continuous local martingale, and $F_t$ is a continuous adapted process of bounded variation.

Remark 4.3 Note that for each interval $[0,n]$ there exists $A_n\subset S$ with $\lambda[A_n]=1$ so that for $\omega\in A_n$, $v_1(F)<\infty$ on $[0,n]$. Since $A_{n+1}\subset A_n$, define $A\equiv\bigcap_n A_n$. Then it follows that $\lambda[A]=1$ (proposition 5.26, book 1), and for $\omega\in A$, $v_1(F)<\infty$ on all compact sets $[a,b]$. In other words, there is a single exceptional set of measure zero outside of which a process of bounded variation has bounded variation on all compact sets.

The following result shows that for $X_t\in\mathcal{M}_{Sloc}$, the decomposition into $M$ and $F$ is unique with probability 1. This is also true for more general semimartingales as in 4.1.

Proposition 4.4 (Decompositions in $\mathcal{M}_{Sloc}$ are unique $\lambda$-a.e.) Let $X_t\in\mathcal{M}_{Sloc}$ with $X_t=M_t+F_t$, and also $X_t=M'_t+F'_t$, $\lambda$-a.e. Here $M_t,M'_t$ are continuous local martingales, $F_t,F'_t$ continuous processes of bounded variation, and $M_0=F_0=M'_0=F'_0=0$. Then with probability 1, $M_t\equiv M'_t$ and $F_t\equiv F'_t$ for all $t$.

Proof. By assumption, $M_t-M'_t=F'_t-F_t$, $\lambda$-a.e., and is therefore a continuous local martingale of bounded variation. Hence by book 7's proposition 6.1, this process is pathwise constant with probability 1. Since this process is 0 at $t=0$, it follows that $M_t\equiv M'_t$ and $F_t\equiv F'_t$ for all $t$ with probability 1.

For a suitable space of integrands, it will not be surprising that we will define integration relative to the integrator $X_t$ by:
$$\int_0^t v(s,\omega)\,dX_s\equiv\int_0^t v(s,\omega)\,dM_s+\int_0^t v(s,\omega)\,dF_s.\tag{4.2}$$
If both integrals exist, this definition is well defined because the decomposition of $X$ into $M$ and $F$ is unique. However, we will need to reconcile two apparently disparate integration theories. For appropriately defined integrands $v(s,\omega)$:

(1) The process $Y^{(1)}_t(\omega)\equiv\int_0^t v(s,\omega)\,dM_s$ is not defined pathwise, outside of the special case of proposition 3.98 where $v(s,\omega)$ is continuous and this integral is definable $\lambda$-a.e. But we know that this process is in general a continuous local martingale for appropriate $v(s,\omega)$;

(2) The process $Y^{(2)}_t(\omega)\equiv\int_0^t v(s,\omega)\,dF_s$ is defined pathwise as a Lebesgue-Stieltjes integral, but outside of the special case of corollary 3.74 where $F_s=\langle M\rangle_s$ with $M\in\mathcal{M}_{loc}$ and $v(t,\omega)\in H^M_{2,loc}$, we do not yet even know if this process is adapted.

As we cannot in the general case make the definition of $Y^{(1)}_t(\omega)$ path-based, the logical approach is to investigate the measurability properties of the process $Y^{(2)}_t(\omega)$. We discuss this below, but first return to considerations regarding the integrand $v(s,\omega)$.

The logical starting point for a space of integrands $v(s,\omega)$ would be $H^X_{2,loc}$, defined so that if $v(t,\omega)\in H^X_{2,loc}$ then:

1. $v(t,\omega)$ is predictable,

2. For all $t$:
$$\int_0^t v^2(s,\omega)\,d\langle X\rangle_s<\infty,\quad\lambda\text{-a.e.}$$

But by book 7's proposition 6.24, which proved that $\langle X\rangle_t=\langle M\rangle_t$, this would obtain that $H^X_{2,loc}=H^M_{2,loc}$. Hence the integration constraint in 2 would only reflect integrability relative to the local martingale $M$, and would provide no assurance as to the integrability of such $v(t,\omega)$ relative to the bounded variation process $F$. Thus if $v(t,\omega)\in H^X_{2,loc}$ so defined, we can only be assured that the integral $\int_0^t v(s,\omega)\,dM_s$ exists, and some other constraint would perhaps be needed to assure that $\int_0^t v(s,\omega)\,dF_s$ exists.

Example 4.5 If $M = B$, the local martingale of Brownian motion, then since $\langle X \rangle_s = \langle B \rangle_s = s$, $H^B_{2,loc}$ is the space of predictable processes so that $\mu$-a.e., $\int_0^t v^2(s,\omega)ds < \infty$ for all $t$. To make a semimartingale, any function of bounded variation can be used, the simplest of which is an increasing function. If $F_s$ is increasing in $s$ for each $\omega$, it induces a Borel measure $dF_s$ for each $\omega$, and hence $\int_0^t v(s,\omega)dF_s$ is a Lebesgue-Stieltjes integral (chapter 2, book 5) for each $\omega$. In the special case where $F_s$ is absolutely continuous in $s$ for each $\omega$ (definition 3.54, book 3), the density function $f_s \equiv F'_s$ exists for almost all $s$ (that book's proposition 3.58), is Lebesgue measurable, and
$$F_t = \int_0^t f_s\,ds$$
(proposition 3.61 there). Then by book 5's proposition 3.6:
$$\int_0^t v(s,\omega)dF_s = \int_0^t v(s,\omega)f_s(\omega)\,ds.$$
If we further restrict $f_s$ to be independent of $\omega$, then $f_s(\omega) = g(s)$ can be chosen to equal virtually any nonnegative measurable function. Then, while the local martingale constraint $\int_0^t v^2(s,\omega)ds < \infty$, $\mu$-a.e. assures that $\int_0^t v(s,\omega)ds < \infty$, $\mu$-a.e. by the Cauchy-Schwarz inequality (corollary 3.48, book 4), this need not assure that $\int_0^t v(s,\omega)g(s)ds < \infty$, $\mu$-a.e. for all nonnegative measurable functions $g(s)$.
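A concrete deterministic illustration of this failure (an example added here, not from the text): take $v(s) = s^{-1/3}$ and $g(s) = s^{-2/3}$, so that $v$ satisfies the local martingale constraint while the $g(s)\,ds$-integral of $v$ diverges.

```latex
% v is square integrable near 0, so the local martingale constraint holds,
% but vg fails to be integrable for the increasing F_t = \int_0^t g(s)\,ds.
\[
v(s) = s^{-1/3}, \qquad g(s) = s^{-2/3},
\]
\[
\int_0^t v^2(s)\,ds = \int_0^t s^{-2/3}\,ds = 3t^{1/3} < \infty,
\qquad
\int_0^t v(s)g(s)\,ds = \int_0^t s^{-1}\,ds = \infty.
\]
```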

This example suggests that one solution to the problem of defining a space of integrands is to make this space specific to the semimartingale $X_t = M_t + F_t$. For example, perhaps require:

1. $v(t,\omega)$ is predictable,

2. That $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)\,d\langle M \rangle_s < \infty \ \text{ and } \ \int_0^t v(s,\omega)\,dF_s < \infty \ \text{ for all } t.$$

Then $\int_0^t v(s,\omega)dX_s$ as in 4.2 is well defined as a sum of a stochastic integral and a Lebesgue-Stieltjes integral.

This approach is again quickly seen to be problematic, since by introducing disparate notions of the integrals in 4.2, it does not readily support an investigation into $\int_0^t v(s,\omega)dX_s$ as a stochastic process. While the earlier development of $\int_0^t v(s,\omega)dM_s$ emphasized stochastic properties such as measurability and the martingale property, the integral $\int_0^t v(s,\omega)dF_s$ is defined in book 5 with an emphasis on valuation for a given interval $[0,t]$ and Borel measure $dF$.

To resolve this dilemma, the approach often taken for allowable integrands is to define this space to be somewhat smaller, but with the added advantage that this same space works for all continuous semimartingales, and thus is independent of both $M_t$ and $F_t$. The space of interest will be denoted $H^{bP}_{loc}([0,\infty) \times S)$, the collection of locally bounded, predictable processes, and is defined below. Our goal is then to show that $\int_0^t v(s,\omega)dX_s$ as represented in 4.2 is well defined for $v(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$, and to study this integral as a stochastic process on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$.

Not surprisingly, the integral with respect to the local martingale $M$ will be straightforward to justify based on earlier results. What is now required are some additional results for stochastic integrals defined with bounded variation integrators, and this is studied next.

4.1 Integrals w.r.t. Continuous B.V. Processes


Let $F_t(\omega)$ be an adapted process of bounded variation on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ in the sense of definition 2.84 of book 7 with $p = 1$. In other words, $v_1(F) < \infty$, $\mu$-a.e. for all compact intervals $[a,b]$. For ease of reference, when $[a,b] = [0,t]$ then $v_1(F)$ of book 7 is defined identically with $V_F(t,\omega)$ in definition 4.6 below, and more generally this is $V_F([a,b],\omega)$. In this section we investigate the measurability properties of integrals:
$$\int_0^t v(s,\omega)\,dF_s,$$
for bounded, measurable integrands $v(s,\omega)$. To this end, it will be fruitful to view $v_1(F)$ as a stochastic process.

Consistent with book 7's definition 6.2 for the quadratic variation process, we have the following definition. This definition reflects the "weak" notion of total variation by restricting the supremum to partitions with mesh size $\mu \to 0$, while the strong version uses the supremum over all partitions.

Definition 4.6 (Total variation process) Given a continuous process of bounded variation $F_t(\omega) = F(t,\omega)$ on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ and partitions $\{t_0, t_1, \dots, t_{n+1}\}$ of $[0,t]$:
$$0 = t_0 < t_1 < \cdots < t_{n+1} = t,$$
with mesh size $\mu \equiv \max_i \{t_i - t_{i-1}\}$, the total variation process is defined pathwise by:
$$V_F(t,\omega) \equiv \sup_{\mu \to 0} \sum_{i=0}^n |F(t_{i+1},\omega) - F(t_i,\omega)|. \tag{4.3}$$
By remark 4.3, $V_F(t,\omega) < \infty$ for all $t$, $\mu$-a.e.

Generalizing notation, $V_F(t,\omega) \equiv V_F([0,t],\omega)$ is also the total variation of $F$ over $[0,t]$, and we define $V_F(s,t,\omega) \equiv V_F([s,t],\omega)$ for $0 \le s < t$ as the total variation of $F$ over $[s,t]$, using partitions with:
$$s = t_0 < t_1 < \cdots < t_{n+1} = t.$$


Notation 4.7 (On $V_F(t,\omega)$) The reader may have noticed that we introduced a small notational inconsistency in this definition. In book 7's definition 2.84, $v_1(F)$ denoted the weak variation, and $V_1(F)$ the strong variation of a function. In the current context we want to define the total variation process of $F$, and since this is a weak variation we ought to have denoted this process by $v_F(t,\omega)$. However in the context of this book, $v_F(t,\omega)$ looks like an integrand process, and so we have opted for the above notation.

We begin with the following technical lemma.

Lemma 4.8 For $0 \le r < s < t$, then $\mu$-a.e.:
$$V_F([r,s],\omega) + V_F([s,t],\omega) = V_F([r,t],\omega). \tag{4.4}$$
Proof. Given any partition of $[r,t]$ that includes $s$, it follows by definition that:
$$\sum_{i=0}^n |F(t_{i+1},\omega) - F(t_i,\omega)| \le V_F([r,t],\omega).$$
Every pair of partitions of $[r,s]$ and $[s,t]$ equals such a partition, so this summation can be split to produce:
$$V_F^{(n_1)}([r,s],\omega) + V_F^{(n_2)}([s,t],\omega) \le V_F([r,t],\omega),$$
where the superscripts denote the number of partition points in the respective summations. Taking a supremum on the left over all such partitions obtains:
$$V_F([r,s],\omega) + V_F([s,t],\omega) \le V_F([r,t],\omega).$$
In particular, if $V_F([r,t],\omega) < \infty$ then so too are $V_F([r,s],\omega) < \infty$ and $V_F([s,t],\omega) < \infty$.

For the opposite inequality, if $V_F([r,t],\omega) < \infty$ the definition of $V_F$ assures that for any $\epsilon > 0$ there is a partition of $[r,t]$ so that
$$V_F([r,t],\omega) - \epsilon \le \sum_{i=1}^n |F(t_i,\omega) - F(t_{i-1},\omega)|.$$
By the triangle inequality this expression remains valid with $s$ added to this partition if it is not already included. Splitting the summation at $s$, this yields with the same notation:
$$V_F([r,t],\omega) - \epsilon \le V_F^{(n)}([r,s],\omega) + V_F^{(n)}([s,t],\omega).$$
Taking a supremum on the right, this and the first part then imply that:
$$V_F([r,t],\omega) - \epsilon \le V_F([r,s],\omega) + V_F([s,t],\omega) \le V_F([r,t],\omega).$$
Since $\epsilon > 0$ is arbitrary, 4.4 follows when $V_F([r,t],\omega) < \infty$.

If $V_F([r,t],\omega) = \infty$, then for any $N$ there is a partition of $[r,t]$ so that
$$N \le \sum_{i=1}^n |F(t_i,\omega) - F(t_{i-1},\omega)|.$$
Arguing as above it follows that at least one of $V_F([r,s],\omega) = \infty$ or $V_F([s,t],\omega) = \infty$, and 4.4 again follows.
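As a quick numerical sanity check (an illustration added here, not from the text), partition sums for a smooth path recover both the total variation in 4.3 and the additivity in 4.4. Here $F(t) = \sin(2\pi t)$, whose total variation on $[0,1]$ is $\int_0^1 |F'(s)|\,ds = 4$.

```python
import math

def variation(F, a, b, n):
    """Partition sum approximating V_F([a, b]) on a uniform n-interval mesh."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(F(ts[i + 1]) - F(ts[i])) for i in range(n))

# F(t) = sin(2*pi*t) has total variation 4 on [0, 1], the integral of |F'|.
F = lambda t: math.sin(2 * math.pi * t)

V_full = variation(F, 0.0, 1.0, 2**14)
V_left = variation(F, 0.0, 0.3, 2**14)
V_right = variation(F, 0.3, 1.0, 2**14)

print(V_full)                       # close to 4.0
print(V_left + V_right - V_full)    # close to 0, the additivity in 4.4
```

The uniform mesh stands in for the vanishing-mesh supremum of 4.3; for smooth paths the partition sums converge as the mesh shrinks.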
Recall from definition 5.15 of book 7 that a function which is "continuous from the right and with left limits" is referred to as càdlàg, from the French "continu à droite, limite à gauche." These are also referred to as RCLL functions, for "right continuous with left limits." A stochastic process is deemed to be càdlàg if $F(t,\cdot)$ is càdlàg for almost all $\omega$, noting that the collection of càdlàg processes includes continuous processes. When the process $F(t,\omega)$ is càdlàg, the supremum in 4.3 for $V_F(t,\omega)$ can be replaced with an ordered collection of partitions. The same proof works for $V_F([r,t],\omega)$ and is left as an exercise. This representation will simplify the analysis of the measurability of $V_F(t,\omega)$ for such processes.

For this result, recall that $\lfloor x \rfloor$ denotes the greatest integer less than or equal to $x$, and thus $\lfloor x \rfloor = x - \delta$ with $0 \le \delta < 1$. With $x = 2^m t$, it follows that $\lfloor 2^m t \rfloor / 2^m = t - \delta/2^m$, and so $(\lfloor 2^m t \rfloor + 1)/2^m = t + (1-\delta)/2^m$. Thus for all nonnegative real $t$:
$$\lfloor 2^m t \rfloor / 2^m \le t < (\lfloor 2^m t \rfloor + 1)/2^m.$$
In the notation of the summation in 4.5 below, it then follows that:
$$F(t \wedge j/2^m, \omega) - F(t \wedge (j-1)/2^m, \omega) = \begin{cases} F(j/2^m, \omega) - F((j-1)/2^m, \omega), & j \le \lfloor 2^m t \rfloor, \\ F(t, \omega) - F((j-1)/2^m, \omega), & j = \lfloor 2^m t \rfloor + 1. \end{cases}$$
Expressed this way, it is clear that with each increment in $m$, all intervals are bisected except the last one, which is split unequally or not at all, depending on $t$. The notation below is perhaps too general, but avoids the necessity of splitting the summation for the special case of the last term.
Proposition 4.9 ($V_F(t,\omega)$ for càdlàg $F(t,\omega)$) If $F(t,\omega)$ is a càdlàg process of bounded variation on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, then $V_F(t,\omega)$ defined in 4.3 can be calculated by:
$$V_F(t,\omega) = \sup_m \sum_{j=1}^{\lfloor 2^m t \rfloor + 1} |F(t \wedge j/2^m, \omega) - F(t \wedge (j-1)/2^m, \omega)|, \quad \mu\text{-a.e.} \tag{4.5}$$

Proof. Temporarily denote the expression on the right in 4.5 by $W_F(t,\omega)$, and that in 4.3 as $V_F(t,\omega)$. Then $W_F(t,\omega) \le V_F(t,\omega)$ since the partitions underlying $V_F$ as defined in 4.3 include those used for $W_F$. To prove the opposite inequality, and hence that $W_F(t,\omega) = V_F(t,\omega)$, we prove that any partition used in the $V_F$-calculation can be dominated by partitions of the type used in the $W_F$-calculation.

To this end, let a partition $\{t_0, t_1, \dots, t_n\}$ be given, where as above $t_0 = 0$ and $t_n = t$. Given $m$ and $i$ with $1 \le i \le n$, assume that $\{t \wedge j/2^m\} \subset (t_{i-1}, t_i]$ for $j \in I_i$. Hence $I_i$ is the empty set, or a sequential set of integers with minimum and maximum values denoted $j_i$ and $k_i$, respectively, with $j_i = k_i$ possible. Analogously, define $j_{n+1}$ as the smallest $j$ such that $j/2^m > t$. Now for $m$ large no set can be empty since the $t_i$-partition is fixed, and $\{t \wedge j/2^m\}_{j,m}$ is dense in $[0,t]$. By the triangle inequality for $1 \le i \le n$, now simplifying notation:
$$|F(t_i) - F(t_{i-1})| \le \sum_{j=j_i+1}^{k_i} |F(t \wedge j/2^m) - F(t \wedge (j-1)/2^m)| + |F(t_i) - F(t \wedge (k_i/2^m))| + |F(t_{i-1}) - F(t \wedge (j_i/2^m))|.$$
Let $V_F^{(n)}$ denote the summation with the above $t_i$ partition, and $W_F^{(m)}$ the summation as in 4.5 for given $m$. It then follows by addition that:
$$V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \sum_{i=1}^n |F(t_i) - F(t \wedge k_i/2^m)| + \sum_{i=1}^n |F(t_{i-1}) - F(t \wedge j_i/2^m)| - \sum_{i=1}^n |F(t \wedge j_{i+1}/2^m) - F(t \wedge k_i/2^m)|.$$
Now because $t_n = t$, it follows that with $j_{n+1} = \lfloor 2^m t \rfloor + 1$:
$$|F(t_n) - F(t \wedge (k_n/2^m))| = |F(t \wedge j_{n+1}/2^m) - F(t \wedge k_n/2^m)|,$$
and thus the $n$th terms of the first and third summations cancel. In addition, $|F(t_0) - F(t \wedge (j_1/2^m))| = 0$ since $t_0 = j_1 = 0$, and so:
$$V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \sum_{i=1}^{n-1} |F(t_i) - F(t \wedge k_i/2^m)| + \sum_{i=2}^n |F(t_{i-1}) - F(t \wedge j_i/2^m)| - \sum_{i=1}^{n-1} |F(t \wedge j_{i+1}/2^m) - F(t \wedge k_i/2^m)|.$$
Rewriting, with the second summation reindexed:
$$V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \sum_{i=1}^{n-1} |F(t_i) - F(j_{i+1}/2^m)| + \sum_{i=1}^{n-1} \left( |F(t_i) - F(k_i/2^m)| - |F(j_{i+1}/2^m) - F(k_i/2^m)| \right). \tag{*}$$
Now by construction, for each $i$ with $1 \le i \le n-1$:
$$k_i/2^m \le t_i < j_{i+1}/2^m.$$
By right continuity of the càdlàg assumption, as $m \to \infty$:
$$|F(t_i) - F(j_{i+1}/2^m)| \to 0.$$
Existence of left limits assures that:
$$|F(t_i) - F(k_i/2^m)| - |F(j_{i+1}/2^m) - F(k_i/2^m)| \to 0.$$
Thus the summations in (*) can be made arbitrarily small for $m$ large. That is, for any $\epsilon > 0$, $V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \epsilon$ for $m$ large, and hence $V_F^{(n)}(t,\omega) \le W_F(t,\omega)$ for all $n$. This implies $V_F(t,\omega) \le W_F(t,\omega)$ and completes the proof.

Remark 4.10 Note that the above result can be generalized somewhat, in the sense that being continuous from the right and with left limits was not explicitly needed. More generally, the requirement on $F$ that suffices for this conclusion is that for every $t$, $F$ is continuous from one side, and has a limit from the other. This follows because the critical step in the above proof was to be able to prove that if $t_i \in [k_i/2^m, j_{i+1}/2^m)$ for all $m$, and by construction both $k_i/2^m \to t_i$ and $j_{i+1}/2^m \to t_i$, then for any $\epsilon > 0$ there is an $m$ so that:
$$|F(t_i) - F(k_i/2^m)| + |F(t_i) - F(j_{i+1}/2^m)| - |F(j_{i+1}/2^m) - F(k_i/2^m)| < \epsilon.$$
Requiring left and right limits at $t_i$ and one-sided continuity is adequate for the given result.

That said, this result does not generalize to other types of discontinuities. For example, if $F(t)$ is defined on $[0,1]$ so $F(t) = 0$ for $t \ne r$, and $F(r) = 1$, where $r$ is a given irrational, then $W_F(t) = 0$ yet $V_F(t) = 2$. If $F(r) = 1$ for all irrationals, then again $W_F(t) = 0$ and now $V_F(t) = \infty$.
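These observations are easy to verify numerically (an illustration added here, not from the text): the dyadic sums in 4.5 only evaluate $F$ at points $t \wedge j/2^m$, so a right-continuous jump is seen exactly, while an isolated value at an irrational point is invisible to every dyadic partition.

```python
import math

def W_m(F, t, m):
    """Inner dyadic sum in 4.5 for a fixed m:
    sum_{j=1}^{floor(2^m t)+1} |F(t ^ j/2^m) - F(t ^ (j-1)/2^m)|."""
    total = 0.0
    for j in range(1, int(2**m * t) + 2):
        total += abs(F(min(t, j / 2**m)) - F(min(t, (j - 1) / 2**m)))
    return total

# A cadlag step: F jumps from 0 to 1 at s = 0.3, so V_F = 1 on [0, 1].
step = lambda s: 1.0 if s >= 0.3 else 0.0
step_sums = [W_m(step, 1.0, m) for m in range(1, 12)]   # each equals 1.0

# Remark 4.10's pathology: F = 1 only at one irrational point r.
r = math.sqrt(2) / 2
spike = lambda s: 1.0 if s == r else 0.0
# With t = 1 every evaluation point t ^ j/2^m is rational, so the spike
# is invisible: the dyadic sums are all 0 although V_F(1) = 2.
spike_sums = [W_m(spike, 1.0, m) for m in range(1, 12)]
```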
While these observations are perhaps interesting, we focus on càdlàg $F_t(\omega)$. By book 1's proposition 5.7, every Borel measure on $\mathbb{R}$ has an associated increasing càdlàg distribution function, while by that book's proposition 5.23, every increasing right continuous function gives rise to a Borel measure. Since every bounded variation function can be expressed as a difference of increasing functions by book 3's proposition 3.27, and we will want to use such functions as integrators, it makes sense that we must at least demand right continuity of BV $F_t(\omega)$. The existence of left limits then allows a useful technical result expressed in the above propositions.

The next result uses the representation in 4.5 to show that when $F_t(\omega)$ is adapted, and continuous or càdlàg, the total variation process $V_F(t,\omega)$ is adapted, and again continuous or càdlàg, respectively.

Remark 4.11 (On "continuous or càdlàg") In a number of results below the expression "continuous or càdlàg" is used despite sounding somewhat redundant. It would seem to be enough to simply say that the process is càdlàg, since this includes continuous. However, the point of this awkward construction is to connect a property of the process $F_t(\omega)$, whether "continuous or càdlàg," to the respective property of another process that depends on $F_t(\omega)$. For example in the next result, it is the bounded variation process $V_F(t,\omega)$.

Proposition 4.12 (Properties of $V_F(t,\omega)$) If $F_t(\omega)$ is a continuous or càdlàg adapted process of bounded variation on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, then $V_F(t,\omega)$ is adapted, and continuous or càdlàg, respectively.
Proof. Denoting by $V_F^{(m)}(t,\omega)$ the summation in 4.5, each $V_F^{(m)}(t,\omega)$ is adapted since $F_t(\omega)$ is adapted. As measurability is preserved in a supremum of countably many measurable functions by book 5's proposition 1.9, it follows that $V_F(t,\omega)$ is adapted. In addition, the triangle inequality obtains that $V_F^{(m)}(t,\omega) \le V_F^{(m+1)}(t,\omega)$, and since the supremum is finite we conclude that $\mu$-a.e.:
$$V_F(t,\omega) = \lim_{m \to \infty} V_F^{(m)}(t,\omega), \quad \text{all } t.$$
It is an exercise to verify that $\mu$-a.e., $V_F(t,\omega)$ is increasing in $t$ since $V_F^{(m)}(t,\omega)$ is increasing for all $m$. By proposition 5.8 of book 1, every monotonic function has left and right limits at every point, so left to prove is that $\mu$-a.e., right continuity of $F$ at $t$ produces right continuity of $V_F$ at $t$, and similarly for left continuity.

Arguing by contradiction, assume that there is a $t \ge 0$ and $\epsilon > 0$ so that for all $s$ with $t < s < t + \Delta t$ (recalling 4.4):
$$V_F([t,s],\omega) = V_F(s,\omega) - V_F(t,\omega) \ge 2\epsilon > 0,$$
for $\omega \in A$ with $\mu(A) > 0$. Fixing any such $s$, since $V_F([t,s],\omega) \ge 2\epsilon$ there is a partition of $[t,s]$ with $t = t_0 < t_1 < \cdots < t_n = s$ so that:
$$\sum_{i=1}^n |F(t_i,\omega) - F(t_{i-1},\omega)| \ge \epsilon.$$
Since $F$ is right continuous at $t$, there is a $t'$ with $t < t' < t_1$ so that $|F(t',\omega) - F(t,\omega)| \le \epsilon/2$. Adding $t'$ to the partition obtains:
$$\sum_{i=2}^n |F(t_i,\omega) - F(t_{i-1},\omega)| + |F(t_1,\omega) - F(t',\omega)| \ge \epsilon/2,$$
and hence $V_F([t',s],\omega) \ge \epsilon/2$. But since $t' < s$ it follows by assumption that $V_F([t,t'],\omega) \ge 2\epsilon$, so we can now repeat this construction over the interval $[t,t']$, finding $t''$ with $t < t'' < t'$ and $V_F([t'',t'],\omega) \ge \epsilon/2$, and still $V_F([t,t''],\omega) \ge 2\epsilon$. Iterating this procedure and applying 4.4 obtains that $V_F([t,s],\omega)$ is unbounded for $\omega \in A$ with $\mu(A) > 0$, a contradiction.

For left continuity we repeat the construction with details left as an exercise.

Recall next proposition 3.27 of book 3, that a function $f(t)$ defined on $[a,b]$ is of bounded variation if and only if $f(t) = I_1(t) - I_2(t)$ for monotonically increasing real valued functions $I_1(t)$ and $I_2(t)$. As noted in that book's remark 3.28, such a decomposition is not unique. Applying this pointwise to a process $F_t(\omega)$ of bounded variation, it can be concluded without additional effort that for $\mu$-a.e. $\omega$, $F_t(\omega) = I_t(\omega) - J_t(\omega)$ for monotonically increasing processes $I_t(\omega)$ and $J_t(\omega)$. But since we will be concerned with the measurability and other properties of these increasing processes, a little work is needed, and this is addressed in the next result.

The following also addresses the definition of the integral of certain measurable processes with $F_t(\omega)$ as integrator in terms of the associated Lebesgue-Stieltjes integrals, and will be reminiscent of book 3's proposition 4.27, which addressed Riemann-Stieltjes integration.

Remark 4.13 (On $\sigma[\mathcal{B}(\mathbb{R}^+) \times \sigma(S)]$) As noted above from book 7's notation 5.9, the sigma algebra $\sigma[\mathcal{B}(\mathbb{R}^+) \times \sigma(S)]$ in the statement of the following result is defined as the smallest sigma algebra that contains the semi-algebra $\mathcal{A}$ of measurable rectangles $E \times F$ with $E \in \mathcal{B}(\mathbb{R}^+)$ and $F \in \sigma(S)$. This sigma algebra was denoted $\sigma_0[\mathcal{B}(\mathbb{R}^+) \times \sigma(S)]$ in book 5.

Proposition 4.14 (Integrals with BV Integrators) If $F_t(\omega)$ is a continuous or càdlàg adapted process of bounded variation on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, then there exist increasing, adapted processes $I_t(\omega)$ and $J_t(\omega)$, which are also continuous or càdlàg, respectively, so that $\mu$-a.e.:
$$F_t(\omega) = I_t(\omega) - J_t(\omega), \quad \text{all } t.$$
Conversely, if a process $F_t(\omega)$ is so represented in terms of processes $I_t(\omega)$ and $J_t(\omega)$, then $F_t(\omega)$ is of bounded variation on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, and is adapted, continuous or càdlàg, when the given processes have these attributes.

For $v(t,\omega)$ bounded and $\sigma[\mathcal{B}(\mathbb{R}^+) \times \sigma(S)]$-measurable, the stochastic integral:
$$\int_0^t v(s,\omega)dF_s(\omega) \equiv \int_0^t v(s,\omega)dI_s(\omega) - \int_0^t v(s,\omega)dJ_s(\omega), \tag{4.6}$$
is well defined as a Lebesgue-Stieltjes integral, and is a continuous or càdlàg process of bounded variation, respectively. By well-defined is meant that if also $F_t(\omega) = I'_t(\omega) - J'_t(\omega)$ for increasing, adapted processes $I'_t(\omega)$ and $J'_t(\omega)$, then:
$$\int_0^t v(s,\omega)dI_s(\omega) - \int_0^t v(s,\omega)dJ_s(\omega) = \int_0^t v(s,\omega)dI'_s(\omega) - \int_0^t v(s,\omega)dJ'_s(\omega). \tag{4.7}$$
If $v(t,\omega)$ is bounded and progressively measurable, then $\int_0^t v(s,\omega)dF_s(\omega)$ as defined in 4.6 is also adapted to the filtration $\{\sigma_t(S)\}$.

More generally, if $v(t,\omega)$ is progressively measurable and $\mu$-a.e.:
$$\int_0^t |v(s,\omega)|\,dF_s(\omega) < \infty \ \text{ for all } t < \infty,$$
then $\int_0^t v(s,\omega)dF_s(\omega)$ as defined in 4.6 is adapted to the filtration $\{\sigma_t(S)\}$.
Proof. Define $\mu$-a.e.:
$$I_t(\omega) = (V_F(t,\omega) + F_t(\omega))/2, \quad J_t(\omega) = (V_F(t,\omega) - F_t(\omega))/2.$$
Then $I_t(\omega)$ and $J_t(\omega)$ are adapted, and continuous or càdlàg by proposition 4.12. In addition, $I_t(\omega)$ and $J_t(\omega)$ are increasing. For example, using 4.4 with $t' > t$:
$$2(I_{t'} - I_t) = V_F([t,t']) + [F_{t'} - F_t] \ge 0,$$
since $V_F([t,t']) \ge |F_{t'} - F_t|$ by definition.

Conversely, so defined $F_t(\omega)$ is of bounded variation:
$$V_F(t,\omega) \equiv \sup_{\mu \to 0} \sum_{i=0}^n |F(t_{i+1},\omega) - F(t_i,\omega)| \le \sup_{\mu \to 0} \sum_{i=0}^n |I(t_{i+1},\omega) - I(t_i,\omega)| + \sup_{\mu \to 0} \sum_{i=0}^n |J(t_{i+1},\omega) - J(t_i,\omega)| = I_t(\omega) - I_0(\omega) + J_t(\omega) - J_0(\omega) < \infty.$$
That $F_t(\omega)$ inherits the attributes of $I_t(\omega)$ and $J_t(\omega)$ is apparent.

For $v(t,\omega)$ bounded and $\sigma[\mathcal{B}(\mathbb{R}^+) \times \sigma(S)]$-measurable, $v(s,\cdot)$ is $\mathcal{B}(\mathbb{R}^+)$-measurable for all $\omega$ by Fubini's theorem of book 5's proposition 5.19, and bounded. Hence $\int_0^t v(s,\cdot)dF_s(\cdot)$ is well-defined for all $\omega$ by book 5's definition 2.37, using that book's propositions 1.18 and 2.21. This integral is well-defined because if also $F_t(\omega) = I'_t(\omega) - J'_t(\omega)$, then $I'_t(\omega) + J_t(\omega) = I_t(\omega) + J'_t(\omega)$. As increasing functions $\mu$-a.e., it follows that for all $t$:
$$\int_0^t v(s,\omega)\,d\left[I'_s + J_s\right](\omega) = \int_0^t v(s,\omega)\,d\left[I_s + J'_s\right](\omega). \tag{*}$$
But note that:
$$\int_0^t v(s,\omega)\,d\left[I'_s + J_s\right](\omega) = \int_0^t v(s,\omega)dI'_s(\omega) + \int_0^t v(s,\omega)dJ_s(\omega),$$
since this is true for simple functions by definition, and thus true for all measurable functions by the book 5 development of the integral specified above. This is also true for the integral on the right of (*), and this proves 4.7.

When $I_t(\omega)$ and $J_t(\omega)$ are continuous or càdlàg in $t$, the integral is also continuous or càdlàg. If $|v(s,\omega)| \le K$, then by book 5's proposition 2.40 applied to the separate Lebesgue-Stieltjes integrals:
$$\left| \int_0^{t+\Delta t} v(s,\omega)dF_s(\omega) - \int_0^t v(s,\omega)dF_s(\omega) \right| \le \left| \int_t^{t+\Delta t} v(s,\omega)dI_s(\omega) \right| + \left| \int_t^{t+\Delta t} v(s,\omega)dJ_s(\omega) \right| \le K[I_{t+\Delta t} - I_t] + K[J_{t+\Delta t} - J_t].$$
Thus continuity properties of $I_t(\omega)$ and $J_t(\omega)$ induce the same continuity properties on the integral.

To see that this integral is of bounded variation for bounded $v(t,\omega)$, again assume $|v(t,\omega)| \le K$. Suppressing the $\omega$ variable, for any partition of a compact interval $[a,b]$, using book 5's proposition 2.40 as above:
$$\sum_{i=1}^n \left| \int_0^{t_i} v(s,\cdot)dF_s - \int_0^{t_{i-1}} v(s,\cdot)dF_s \right| = \sum_{i=1}^n \left| \int_{t_{i-1}}^{t_i} v(s,\cdot)dF_s \right| \le K \sum_{i=1}^n V_F([t_{i-1},t_i]) = K V_F([a,b]).$$
Thus the integral is of bounded variation for all $\omega$ with $V_F([a,b]) < \infty$, which is $\mu$-a.e.

If $v(t,\omega)$ is bounded and progressively measurable then $\int_0^t v(s,\omega)dF_s(\omega)$ is adapted if each of the $dI_s$ and $dJ_s$ integrals is adapted. To prove that these Lebesgue-Stieltjes integrals are adapted, we can use the proof of corollary 3.74. While that proof addressed a special case with $F_t \equiv \langle M \rangle_t$ for a local martingale $M_t$, the proof did not assume anything about this integrator other than the applicability of book 5's integration to the limit results, which are perfectly general. See exercise 4.15.

In the final case when $v(t,\omega)$ is progressively measurable and satisfies the stated integrability result, define the truncation $v_n(t,\omega) = \max(\min(v(t,\omega), n), -n)$. Then $v_n(t,\omega)$ is bounded and progressively measurable, and thus $\int_0^t v_n(s,\omega)dF_s(\omega)$ is adapted for all $n$. Now $v_n(t,\omega) \to v(t,\omega)$ for all $(t,\omega)$ and $|v_n(t,\omega)| \le |v(t,\omega)|$, which is integrable $\mu$-a.e. Thus by Lebesgue's dominated convergence theorem (proposition 2.43, book 5):
$$\int_0^t v_n(s,\omega)dF_s(\omega) \to \int_0^t v(s,\omega)dF_s(\omega), \quad \mu\text{-a.e.}$$
It then follows from corollary 1.10, book 5 that $\int_0^t v(s,\omega)dF_s(\omega)$ is adapted.

Exercise 4.15 Provide the details of the proof that $\int_0^t v(s,\omega)dF_s(\omega)$ is adapted when $v(t,\omega)$ is bounded and progressively measurable, adapting the proof of corollary 3.74 (and in turn proposition 3.72).
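To make the construction in the proof tangible, here is a discretized, single-path sketch (an illustration added here, not from the text) of the decomposition $I_t = (V_F(t) + F_t)/2$, $J_t = (V_F(t) - F_t)/2$, and of the well-definedness claim 4.7: two different decompositions of $F$ produce the same integral value.

```python
import math

# Discretized single-path sketch of the proof's construction:
#   I_t = (V_F(t) + F_t)/2,  J_t = (V_F(t) - F_t)/2,
# with the Lebesgue-Stieltjes integral realized as a sum against increments.

N = 2**12
ts = [i / N for i in range(N + 1)]
F = [math.sin(2 * math.pi * t) for t in ts]

V = [0.0]                                   # running total variation V_F(t_k)
for k in range(N):
    V.append(V[k] + abs(F[k + 1] - F[k]))

I = [(V[k] + F[k]) / 2 for k in range(N + 1)]
J = [(V[k] - F[k]) / 2 for k in range(N + 1)]

# Both components are increasing (up to float rounding), and F = I - J.
assert all(I[k + 1] >= I[k] - 1e-12 and J[k + 1] >= J[k] - 1e-12 for k in range(N))

def stieltjes(v, G):
    """Left-endpoint sum: sum of v(t_k)(G_{k+1} - G_k)."""
    return sum(v(ts[k]) * (G[k + 1] - G[k]) for k in range(N))

v = lambda t: t * t                          # a bounded integrand
lhs = stieltjes(v, I) - stieltjes(v, J)

# A second decomposition I' = I + t, J' = J + t leaves the value unchanged,
# which is the well-definedness claim 4.7.
I2 = [I[k] + ts[k] for k in range(N + 1)]
J2 = [J[k] + ts[k] for k in range(N + 1)]
rhs = stieltjes(v, I2) - stieltjes(v, J2)
print(abs(lhs - rhs))                        # close to 0
```

The common increasing part added to both components cancels in the difference, which is exactly the content of (*) in the proof.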

In the section below, Stochastic Integrals via Riemann Sums, it is proved that if $v(t,\omega)$ is also continuous in $t$, then $\int_0^t v(s,\omega)dF_s(\omega)$ is definable as a limit in probability of the associated Riemann sums, generalizing proposition 2.54 of book 5. The next example shows that these integrals can sometimes be evaluated as Lebesgue integrals.

Example 4.16 (Radon-Nikodým and BV integrators) Note that if $F_0 = 0$, then by construction $I_0 = J_0 = 0$. Assume that the induced (chapter 5, book 1) Borel measures $\mu_I(\omega) \equiv dI_s(\omega)$ and $\mu_J(\omega) \equiv dJ_s(\omega)$ are absolutely continuous (definition 7.3, book 5) with respect to Lebesgue measure, $\mu$-a.e. For example, this would be the case by book 1's proposition 5.30 if $I_t$ and $J_t$ have locally bounded derivatives in $t$, $\mu$-a.e. This is not a significant restriction on monotonic $I_t$ and $J_t$ since by book 3's proposition 3.15, $I_t$ and $J_t$ are at least differentiable in $t$, $m$-a.e., with $m$ Lebesgue measure.

With this assumption, the Radon-Nikodým theorem of book 5's proposition 7.22 asserts the existence of nonnegative $f_I(t,\omega)$ and $f_J(t,\omega)$, Borel measurable in $t$, $\mu$-a.e., so that:
$$I_t(\omega) = \int_0^t f_I(s,\omega)ds, \quad J_t(\omega) = \int_0^t f_J(s,\omega)ds.$$
Letting $f(t,\omega) \equiv f_I(t,\omega) - f_J(t,\omega)$, it follows from book 5's proposition 3.6 that:
$$\int_0^t v(s,\omega)dF_s(\omega) = \int_0^t v(s,\omega)f(s,\omega)ds, \quad \mu\text{-a.e.}$$
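A one-path numeric sketch of this evaluation (an illustration added here, not from the text), taking $f_J = 0$: when $F_t = \int_0^t f(s)\,ds$, the Stieltjes sums against $dF$ agree with the Lebesgue sums against $f(s)\,ds$ as the mesh shrinks.

```python
import math

# One-path numeric sketch of example 4.16 with f_J = 0: if F_t = int_0^t f(s)ds,
# the Stieltjes sums against dF match the Lebesgue sums against f(s)ds.

N = 2**14
h = 1.0 / N
f = lambda s: math.cos(s)                    # density of dF
v = lambda s: 1.0 / (1.0 + s)                # a bounded integrand
F = [math.sin(k * h) for k in range(N + 1)]  # F_t = sin(t) = int_0^t cos(s)ds

stieltjes_sum = sum(v(k * h) * (F[k + 1] - F[k]) for k in range(N))
lebesgue_sum = sum(v(k * h) * f(k * h) * h for k in range(N))
print(abs(stieltjes_sum - lebesgue_sum))     # shrinks with the mesh
```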

4.2 The General Stochastic Integral


In this section we formalize the construction of a stochastic integral with respect to the space of semimartingale integrators introduced in definition 4.2 above. Specifically, given the filtered probability space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, recall that $\mathcal{M}^S_{loc}$ is defined as the collection of continuous semimartingales with $X_0 = 0$. In other words, if $X_t \in \mathcal{M}^S_{loc}$ then:
$$X_t = M_t + F_t,$$
where $M_0 = F_0 = 0$, $M_t$ is a continuous local martingale, and $F_t$ is a continuous adapted process of bounded variation in the sense of (2.88) of book 7's definition 2.84 with $p = 1$. That is, $v_1(F) < \infty$, $\mu$-a.e. for all compact intervals $[a,b]$.

For an appropriately defined class of integrands, to be denoted $H^{bP}_{loc}([0,\infty) \times S)$ as introduced above, the stochastic integral with respect to $X_t \in \mathcal{M}^S_{loc}$ will be defined as in 4.2. That is:
$$\int_0^t v(s,\omega)dX_s \equiv \int_0^t v(s,\omega)dM_s + \int_0^t v(s,\omega)dF_s,$$
once it is demonstrated that each of the component integrals is well defined.

Because the splitting of $X_t$ into $M_t$ and $F_t$ was shown to be unique in proposition 4.4, the well-definedness of the integral $\int_0^t v(s,\omega)dX_s$ depends only on the demonstration that, given such $M_t$ and $F_t$, the two component integrals are well defined for the given integrands.

This space of integrands is formally defined as follows.
Definition 4.17 ($H^{bP}_{loc}([0,\infty) \times S)$) Given a probability space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, a process $v(s,\omega)$ is a locally bounded, predictable process if:

1. $v(s,\omega)$ is predictable,

2. There is a sequence of increasing and almost surely unbounded stopping times $\{T_n\}_{n=1}^\infty$, and a sequence of real constants $\{c_n\}_{n=1}^\infty$, so that $\mu$-a.e.:
$$|v^{T_n}(s,\omega)| \le c_n,$$
where $v^{T_n}(s,\omega) \equiv v(s \wedge T_n, \omega)$, or equivalently:
$$|v(s,\omega)\chi_{[0,T_n]}(s)| \le c_n.$$

The sequence of stopping times $\{T_n\}_{n=1}^\infty$ is called a reducing sequence for $v(s,\omega)$, and the space of locally bounded predictable processes is denoted by $H^{bP}_{loc}([0,\infty) \times S)$.

Remark 4.18 Note that if $v(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$, then $\{T_n\}_{n=1}^\infty$ can be redefined so that the above bounds are valid for all $\omega$. Let $A = \bigcup_n A_n$ be the set of measure zero defined with $A_n = \{\sup_s |v^{T_n}(s,\omega)| > c_n\}$. Then define $T'_n = 0$ on $A$ and $T'_n = T_n$ on $S - A$. Then $T'_n$ is a stopping time by completeness of the sigma algebras, and is almost surely unbounded since $T'_n = T_n$, $\mu$-a.e.

Also, by the first half of the next proof, there is no loss of generality in assuming $c_n = n$ in the definition of $H^{bP}_{loc}$ when $v(s,\omega)$ is continuous (in $s$).

Exercise 4.19 Recalling exercise 4.15, prove that $\int_0^t v(s,\omega)dF_s(\omega)$ is adapted when $v(t,\omega)$ is a locally bounded predictable process. Hint: Proposition 5.19 of book 7 and remark 3.6. This is also proved in proposition 4.21.

The next result shows that the $H^{bP}_{loc}([0,\infty) \times S)$-class of integrands contains all continuous adapted processes, and thus also the class of continuous semimartingale integrators, $\mathcal{M}^S_{loc}$. In addition, this class of integrands is contained in the class of integrands for continuous local martingales, $H^M_{2,loc}$, for any such $M$.

Proposition 4.20 (On $H^{bP}_{loc}([0,\infty) \times S)$) Given $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, if $v(s,\omega)$ is a continuous, adapted process then $v(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$. Hence in particular:
$$\mathcal{M}^S_{loc} \subset H^{bP}_{loc}([0,\infty) \times S).$$
In addition, for all $M \in \mathcal{M}_{loc}$:
$$H^{bP}_{loc}([0,\infty) \times S) \subset H^M_{2,loc}([0,\infty) \times S).$$

Proof. By book 7's corollary 5.17, a continuous and adapted process is predictable. By that book's proposition 5.60, if $v(t,\omega)$ is a continuous, adapted process and $F \subset \mathbb{R}$ is closed, then $T_F \equiv \inf\{s \ge 0 \,|\, v(s,\omega) \in F\}$ is a stopping time. Letting $F_n = (-\infty,-n] \cup [n,\infty)$ produces a sequence $\{T_n\}_{n=1}^\infty$ with $|v^{T_n}(s,\omega)| \le n$ all $\omega$. That this sequence is almost surely unbounded, and thus a reducing sequence, is proved as follows. Assume that for $\omega \in A$ with $\mu(A) > 0$ that $T_n(\omega) \le K$ for all $n$. This implies by continuity that $|v(T_n(\omega),\omega)| \ge n$ for all $n$ and $\omega \in A$. Thus $v(s,\cdot)$ is unbounded on $[0,K]$ on a set of positive $\mu$-measure, contradicting continuity almost everywhere. Hence $H^{bP}_{loc}([0,\infty) \times S)$ contains all continuous, adapted processes, and thus $\mathcal{M}^S_{loc} \subset H^{bP}_{loc}([0,\infty) \times S)$.

Given $M \in \mathcal{M}_{loc}$, the Doob-Meyer decomposition theorem of book 7's proposition 6.12 obtains that $\langle M \rangle_t$ is a continuous, increasing, adapted process. If $v(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$, then by that book's proposition 5.19, $v(s,\omega)$ is progressively measurable, and then by Fubini's theorem of book 5's proposition 5.19, $v(s,\omega)$ is Borel measurable in $s$ for all $\omega$. Given $t$ and $\{T_n\}_{n=1}^\infty$, the reducing sequence for $v(s,\omega)$, it follows that for almost all $\omega$ there exists $n(\omega)$ so that $T_{n(\omega)}(\omega) > t$. Thus $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)\,d\langle M \rangle_s$$
is well-defined pathwise as a Lebesgue-Stieltjes integral (section 2.3, book 5). And by that book's proposition 2.40, this integral is bounded for almost all $\omega$:
$$\int_0^t v^2(s,\omega)\,d\langle M \rangle_s \le c^2_{n(\omega)} \langle M \rangle_t.$$
This is 3.57, and so $v(s,\omega) \in H^M_{2,loc}([0,\infty) \times S)$.
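The first half of this proof can be visualized numerically (an illustration added here, not from the text): for a continuous path, the hitting times of the closed sets $F_n = (-\infty,-n] \cup [n,\infty)$ give stopping levels below which the stopped path stays bounded.

```python
import math

# Sketch of the proof's reducing sequence for a continuous path v(s):
#   T_n = inf{ s >= 0 : |v(s)| >= n },  so |v(s ^ T_n)| <= n.
# Discretized on a grid; the horizon s_max stands in for "unbounded".

def first_hit(v, n, s_max, steps=10**5):
    """First grid time with |v(s)| >= n, or s_max if never reached."""
    for i in range(steps + 1):
        s = s_max * i / steps
        if abs(v(s)) >= n:
            return s
    return s_max

v = lambda s: s * math.sin(s)        # continuous, hence locally bounded
s_max = 50.0
stops = [first_hit(v, n, s_max) for n in range(1, 6)]

# The stopping levels increase with n, and each stopped path is bounded
# by its level (up to grid discretization error):
assert all(stops[i] <= stops[i + 1] for i in range(len(stops) - 1))
for n, T in zip(range(1, 6), stops):
    assert all(abs(v(T * i / 1000)) <= n + 0.1 for i in range(1001))
```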

The following proposition summarizes the desired existence result on the stochastic integral of $H^{bP}_{loc}([0,\infty) \times S)$-integrands with respect to continuous semimartingales.

Proposition 4.21 (Final Stochastic Integral in $H^{bP}_{loc}$) Given the filtered probability space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, let $X_t \in \mathcal{M}^S_{loc}$ be a continuous semimartingale, and $v(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$ a locally bounded predictable process. Then $\int_0^t v(s,\omega)dX_s$ as defined by:
$$\int_0^t v(s,\omega)dX_s \equiv \int_0^t v(s,\omega)dM_s + \int_0^t v(s,\omega)dF_s, \tag{4.8}$$
is well defined, continuous in $t$, and a semimartingale. In other words, $\int_0^t v(s,\omega)dX_s \in \mathcal{M}^S_{loc}$.
Proof. Assume that $X_t = M_t + F_t$, recalling that this decomposition is unique by proposition 4.4. Then $\int_0^t v(s,\omega)dM_s$ is well defined by the prior proposition, since $H^{bP}_{loc} \subset H^M_{2,loc}$ for all $M \in \mathcal{M}_{loc}$, and this integral is a continuous local martingale by proposition 3.83 of the prior section.

Next, given the reducing sequence $\{T_n\}_{n=1}^\infty$, $v^{T_n}(s,\omega)$ is bounded and predictable by definition. Since it is then also progressively measurable by book 7's proposition 5.19, it follows from proposition 4.14 that $\int_0^t v^{T_n}(s,\omega)dF_s$ is adapted, continuous and of bounded variation. But for Lebesgue-Stieltjes integrals it follows by the discussion of remark 3.63 that:
$$\int_0^t v^{T_n}(s,\omega)dF_s = \int_0^{t \wedge T_n} v(s,\omega)dF_s.$$
From this it follows that if $t \le T_n \wedge T_m$, then:
$$\int_0^t v^{T_n}(s,\omega)dF_s = \int_0^t v^{T_m}(s,\omega)dF_s = \int_0^t v(s,\omega)dF_s.$$
Since $\{T_n\}_{n=1}^\infty$ are increasing and almost surely unbounded, $\int_0^t v(s,\omega)dF_s$ is adapted, continuous and of bounded variation.

In summary, $\int_0^t v(s,\omega)dX_s$ is the sum of a continuous local martingale and a continuous adapted process of bounded variation. Since $\int_0^0 = 0$ for both component integrals by continuity of the integrators, it follows that $\int_0^t v(s,\omega)dX_s \in \mathcal{M}^S_{loc}$ is a continuous semimartingale.

4.3 Properties of Stochastic Integrals


Now that $\int_0^t v(s,\omega)dX_s$ is defined in 4.8 for continuous semimartingales $X_t \in \mathcal{M}^S_{loc}$ and locally bounded predictable processes $v(t,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$, many of the properties for this integral will follow relatively easily from the corresponding properties of the component integrals, $\int_0^t v(s,\omega)dM_s$ and $\int_0^t v(s,\omega)dF_s$.

We summarize these properties next.

Proposition 4.22 (Properties of the stochastic integral) Given the space $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, let $v(s,\omega), w(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$, $X_t, Y_t \in \mathcal{M}^S_{loc}$ with $X_t = M_t + F_t$ and $Y_t = N_t + G_t$, and $0 \le t < t' < \infty$.

1. Given $r$ with $t < r < t'$, then for almost all $\omega \in S$:
$$\int_t^{t'} v(s,\omega)dX_s(\omega) = \int_t^r v(s,\omega)dX_s(\omega) + \int_r^{t'} v(s,\omega)dX_s(\omega).$$

2. For constant $a \in \mathbb{R}$, $av(s,\omega) + w(s,\omega) \in H^{bP}_{loc}([0,\infty) \times S)$ and for almost all $\omega \in S$:
$$\int_t^{t'} [av(s,\omega) + w(s,\omega)]dX_s(\omega) = a\int_t^{t'} v(s,\omega)dX_s(\omega) + \int_t^{t'} w(s,\omega)dX_s(\omega).$$

3. For any stopping time $T$:
$$\int_0^{t \wedge T} v(s,\omega)dX_s(\omega) = \int_0^t v(s,\omega)\chi_{[0,T]}(s)dX_s(\omega) = \int_0^t v(s,\omega)dX^T_s(\omega).$$

4. The quadratic variation of the continuous semimartingale $X'_t(\omega) \equiv \int_0^t v(s,\omega)dX_s(\omega)$ is given by:
$$\langle X' \rangle_t = \int_0^t v^2(s,\omega)\,d\langle M \rangle_s.$$

5. The covariation of the continuous semimartingale $X'_t(\omega)$ in 4 and $Y'_t(\omega) \equiv \int_0^t w(s,\omega)dY_s(\omega)$ is given by:
$$\langle X', Y' \rangle_t = \int_0^t v(s,\omega)w(s,\omega)\,d\langle M, N \rangle_s.$$

6. The associative law applies: If $X'_t(\omega) = \int_0^t v(s,\omega)dX_s(\omega)$ as in 4, then for almost all $\omega \in S$:
$$\int_0^t w(s,\omega)dX'_s(\omega) = \int_0^t v(s,\omega)w(s,\omega)dX_s(\omega), \quad \text{all } t.$$

Proof. Using 4.8, 1 and 2 follow from the analogous properties for integrals with respect to continuous local martingales by proposition 3.85, recalling that $\mathcal{H}_{loc}^{bP} \subset \mathcal{H}_{2,loc}^{M}$ for all $M \in \mathcal{M}_{loc}$ by proposition 4.20, and for Lebesgue-Stieltjes integrals by book 5's proposition 2.40. To check that $av(s,\omega) + w(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ for 2, predictability is immediate (proposition 1.5, book 5), while being locally bounded follows using $T_n \equiv T_n^v \wedge T_n^w$, with apparent notation. Then $T_n$ is a stopping time by book 7's proposition 5.60, and the triangle inequality obtains boundedness with $c_n \equiv |a|c_n^v + c_n^w$. That the statements in 1 and 2 are qualified as true only $\mu$-a.e. reflects the fact that a semimartingale can always be changed on sets of $\mu$-measure 0 without corrupting its properties, since the filtration is assumed complete.

Similarly, 3 follows from proposition 3.85 and remark 3.63.
For 4, book 7's proposition 6.24 obtains $\langle X'\rangle_t = \langle M'\rangle_t$ where $M_t'(\omega) \equiv \int_0^t v(s,\omega)\,dM_s(\omega)$, and so this result follows from 3.69. Similarly, the derivation of 5 uses book 7's proposition 6.34, which obtains $\langle X',Y'\rangle_t = \langle M',N'\rangle_t$ with $N_t'(\omega) \equiv \int_0^t w(s,\omega)\,dN_s(\omega)$, and then 3.68.
Finally for 6, first note that $v(s,\omega)w(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$. Predictability is again immediate (proposition 1.5, book 5), while being locally bounded follows using $T_n \equiv T_n^v \wedge T_n^w$, a stopping time as above, obtaining local boundedness with $c_n^{vw} \equiv c_n^v c_n^w$. Since $X_t' \in \mathcal{M}_{loc}^{\mathcal{S}}$ by proposition 4.21, both integrals in this result are thus well defined. Using proposition 4.21:
$$X_t'(\omega) = \int_0^t v(s,\omega)\,dM_s(\omega) + \int_0^t v(s,\omega)\,dF_s(\omega) \equiv M_t'(\omega) + F_t'(\omega),$$

and thus by proposition 4.21 again:
$$\int_0^t w(s,\omega)\,dX_s'(\omega) = \int_0^t w(s,\omega)\,dM_s'(\omega) + \int_0^t w(s,\omega)\,dF_s'(\omega). \quad (*)$$
Since $w(s,\omega) \in \mathcal{H}_{2,loc}^{M'}([0,\infty)\times\mathcal{S})$ by proposition 4.20, 3.70 obtains:
$$\int_0^t w(s,\omega)\,dM_s'(\omega) = \int_0^t v(s,\omega)w(s,\omega)\,dM_s(\omega). \quad (**)$$

For the other integral in $(*)$, $F_t(\omega) = I_t(\omega) - J_t(\omega)$ by proposition 4.14, and since the integrand $v(s,\omega)$ is bounded by $c_n^v$ for $t \le T_n^v$, $F_t'(\omega)$ can be expressed for all $n$:
$$F_t'(\omega) = \int_0^t v(s,\omega)\,dI_s(\omega) - \int_0^t v(s,\omega)\,dJ_s(\omega) \equiv I_t'(\omega) - J_t'(\omega), \quad t \le T_n^v.$$
Since $v(s,\omega)$ is also progressively measurable (proposition 5.19, book 7), proposition 4.14 also assures that $I_t'(\omega)$ and $J_t'(\omega)$ are continuous and adapted. Letting $n \to \infty$ obtains that $F_t'(\omega)$ is continuous and adapted, and of bounded variation as a difference of increasing processes.

The integrand $w(s,\omega)$ is similarly bounded by $c_n^w$ for $t \le T_n^w$, and by this same proposition:
$$\int_0^t w(s,\omega)\,dF_s'(\omega) = \int_0^t w(s,\omega)\,dI_s'(\omega) - \int_0^t w(s,\omega)\,dJ_s'(\omega), \quad t \le T_n^w.$$
The process $\int_0^t w(s,\omega)\,dF_s'(\omega)$ is again seen to be a continuous, adapted, bounded variation process. Now $w(s,\omega)$ is progressively measurable, and by Fubini's theorem of book 5's proposition 5.19, $w(s,\omega)$ is Borel measurable in $s$ for all $\omega$. Applying book 5's proposition 3.8:
$$\int_0^t w(s,\omega)\,dI_s'(\omega) = \int_0^t v(s,\omega)w(s,\omega)\,dI_s(\omega),$$
and similarly for the $dJ_s'(\omega)$-integral. Assembling the pieces and applying proposition 4.14 again:
$$\int_0^t w(s,\omega)\,dF_s'(\omega) = \int_0^t v(s,\omega)w(s,\omega)\,dI_s(\omega) - \int_0^t v(s,\omega)w(s,\omega)\,dJ_s(\omega), \quad t \le T_n^w,$$
$$= \int_0^t v(s,\omega)w(s,\omega)\,dF_s(\omega), \quad t \le T_n^w \wedge T_n^v.$$

Letting $n \to \infty$, this result with $(*)$ and $(**)$ provides:
$$\int_0^t w(s,\omega)\,dX_s'(\omega) = \int_0^t v(s,\omega)w(s,\omega)\,dM_s(\omega) + \int_0^t v(s,\omega)w(s,\omega)\,dF_s(\omega).$$
Since $v(s,\omega)w(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$, this proves 6 by proposition 4.21.
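As an informal numerical aside (not part of the formal development), the associative law of item 6 holds exactly on a discrete grid: each term of the Riemann sum for $\int w\,dX'$, with $X'$ built from the increments $v\,\Delta X$, is literally $w(t_i)v(t_i)\Delta X_i$. A minimal sketch with a simulated Brownian integrator, where the grid, seed and integrands are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 1000, 1.0
dt = t / n
s = np.linspace(0.0, t, n + 1)[:-1]      # left endpoints of each subinterval
dB = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments; here dX = dB

v = np.cos(s)                            # integrand v(s) (illustrative)
w = s**2                                 # integrand w(s) (illustrative)

# X'_t = int_0^t v dB, built on the grid; its increments are v * dB
dX_prime = v * dB

# Left-endpoint sums for int_0^t w dX'  and  int_0^t (v*w) dB
lhs = np.sum(w * dX_prime)
rhs = np.sum(v * w * dB)
assert np.isclose(lhs, rhs)              # exact agreement on a common grid
```

In the limit of mesh size, proposition 4.25 below identifies both sums with the corresponding stochastic integrals.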

4.4 Stochastic Dominated Convergence Theorem

In this section we generalize proposition 3.96 to the context of semimartingale integrators.

Proposition 4.23 (Stochastic dominated convergence theorem) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $X_t \in \mathcal{M}_{loc}^{\mathcal{S}}$ with $X_t = M_t + F_t$ and $\{v_n(s,\omega)\}_{n=1}^\infty \subset \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$, and assume that $\mu$-a.e., $v_n(s,\omega) \to 0$ for all $s$. If there exists $v(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ so that $\mu$-a.e., $|v_n(s,\omega)| \le v(s,\omega)$ for all $s$, then for all fixed $T$:
$$\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dX_s(\omega)\right| \to_P 0. \tag{4.9}$$

More generally, $\int_0^t v_n(s,\omega)\,dX_s(\omega)$ converges to 0 in probability, uniformly in $t$ over compact sets.
Proof. Given $\epsilon > 0$:
$$\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dX_s(\omega)\right| > \epsilon\right] \le \Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right| > \epsilon/2\right] + \Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dF_s(\omega)\right| > \epsilon/2\right].$$
By proposition 4.20, $\mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S}) \subset \mathcal{H}_{2,loc}^{M}([0,\infty)\times\mathcal{S})$, and thus by proposition 3.96:
$$\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dM_s(\omega)\right| > \epsilon/2\right] \to 0.$$

For the second integral, it follows from 4.6 that:
$$\left|\int_0^t v_n(s,\omega)\,dF_s(\omega)\right| \le \left|\int_0^t v_n(s,\omega)\,dI_s(\omega)\right| + \left|\int_0^t v_n(s,\omega)\,dJ_s(\omega)\right|,$$
where $I_s$ and $J_s$ are continuous, increasing processes. Thus:
$$\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dF_s(\omega)\right| > \epsilon/2\right] \le \Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dI_s(\omega)\right| > \epsilon/4\right] + \Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\,dJ_s(\omega)\right| > \epsilon/4\right]$$
$$\le \Pr\left[\sup_{t\in[0,T]}\int_0^t |v_n(s,\omega)|\,dI_s(\omega) > \epsilon/4\right] + \Pr\left[\sup_{t\in[0,T]}\int_0^t |v_n(s,\omega)|\,dJ_s(\omega) > \epsilon/4\right],$$
where the last step is book 5's proposition 2.40. These last two expressions converge to 0 using the same argument.
In detail for the $dI_s$-integral, since $|v_n(s,\omega)|$ is nonnegative and $I_s$ is increasing, the supremum is attained at $t = T$:
$$\Pr\left[\sup_{t\in[0,T]}\int_0^t |v_n(s,\omega)|\,dI_s(\omega) > \epsilon/4\right] = \Pr\left[\int_0^T |v_n(s,\omega)|\,dI_s(\omega) > \epsilon/4\right].$$
Now $|v_n(s,\omega)| \le v(s,\omega)$ $\mu$-a.e., and this upper bound is $dI_s$-integrable since $v(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$. Thus by Lebesgue's dominated convergence theorem of book 5's corollary 2.45:
$$\int_0^T |v_n(s,\omega)|\,dI_s(\omega) \to 0, \quad \mu\text{-a.e.}$$

Then by book 2's proposition 5.21, $\mu$-a.e. convergence implies convergence in probability, and 4.9 follows. Since every compact set is contained in an interval $[0,T]$, this proves uniform convergence in probability over all compact sets, and the proof is complete.

Corollary 4.24 (Stochastic dominated convergence theorem) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $X_t \in \mathcal{M}_{loc}^{\mathcal{S}}$ with $X_t = M_t + F_t$ and $\{w_n(s,\omega)\}_{n=1}^\infty \subset \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$, and assume that $\mu$-a.e., $w_n(s,\omega) \to w(s,\omega)$ for all $s$, with $w(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$. If there exists $u(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ so that $\mu$-a.e., $|w_n(s,\omega)| \le u(s,\omega)$ for all $s$, then uniformly in $t$ over compact sets:
$$\int_0^t w_n(s,\omega)\,dX_s(\omega) \to_P \int_0^t w(s,\omega)\,dX_s(\omega). \tag{4.10}$$

Proof. Let $v_n(s,\omega) \equiv w_n(s,\omega) - w(s,\omega)$ and $v(s,\omega) \equiv u(s,\omega) + |w(s,\omega)|$ in proposition 4.23. Then for all fixed $T < \infty$:
$$\sup_{t\in[0,T]}\left|\int_0^t w_n(s,\omega)\,dX_s(\omega) - \int_0^t w(s,\omega)\,dX_s(\omega)\right| \to_P 0,$$
which is 4.10 by the last sentence of the prior proof.

4.5 Stochastic Integrals via Riemann Sums

In this section, we generalize the continuous local martingale result of proposition 3.98 to continuous semimartingales.

Proposition 4.25 (Riemann sum approximation) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $X_t \in \mathcal{M}_{loc}^{\mathcal{S}}$ and assume $v(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ is continuous in $s$, $\mu$-a.e. Then given partitions $\Pi_n$ of $[0,t]$:
$$0 = t_0 < t_1 < \cdots < t_{n+1} = t,$$
with mesh $\mu_n \equiv \max_{1\le i\le n+1}\{t_i - t_{i-1}\} \to 0$:
$$\sum_{i=0}^n v(t_i,\omega)\left[X_{t_{i+1}}(\omega) - X_{t_i}(\omega)\right] \to_P \int_0^t v(s,\omega)\,dX_s(\omega). \tag{4.11}$$

Thus there is a subsequence of partitions $\Pi_{n_k}$ so that the associated convergence exists pointwise $\mu$-a.e. to the continuous semimartingale $\int_0^t v(s,\omega)\,dX_s(\omega)$.

Proof. If $X_t = M_t + F_t$:
$$\sum_{i=0}^n v(t_i,\omega)\left[X_{t_{i+1}}(\omega) - X_{t_i}(\omega)\right] = \sum_{i=0}^n v(t_i,\omega)\left[M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\right] + \sum_{i=0}^n v(t_i,\omega)\left[F_{t_{i+1}}(\omega) - F_{t_i}(\omega)\right].$$
If each of the summations on the right converges in probability to the respective integral, then 4.11 will be proved. Notationally, if $Y_n \to_P Y$ and $Z_n \to_P Z$, it follows that $Y_n + Z_n \to_P Y + Z$ since:
$$\left[|(Y_n - Y) + (Z_n - Z)| > \epsilon\right] \subset \left[|Y_n - Y| > \epsilon/2\right] \cup \left[|Z_n - Z| > \epsilon/2\right].$$
Now let a sequence of partitions as described be given. Since $M_t \in \mathcal{M}_{loc}$ and $v(s,\omega) \in \mathcal{H}_{2,loc}^{M}([0,\infty)\times\mathcal{S})$ by proposition 4.20, 3.75 of proposition 3.98 obtains:
$$\sum_{i=0}^n v(t_i,\omega)\left[M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\right] \to_P \int_0^t v(s,\omega)\,dM_s(\omega).$$
In addition, by proposition 2.54 of book 5:
$$\sum_{i=1}^{n+1} v(t_{i-1},\omega)\left[F_{t_i}(\omega) - F_{t_{i-1}}(\omega)\right] \to \int_0^t v(s,\omega)\,dF_s(\omega),$$
for all $\omega$ for which $v(s,\omega)$ is continuous, meaning for almost all $\omega$. Since convergence almost everywhere implies convergence in probability (proposition 5.21, book 2), 4.11 follows.
That convergence in probability implies the existence of a subsequence of partitions $\Pi_{n_k}$ so that this convergence exists pointwise $\mu$-a.e. is book 2's proposition 5.25.
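As an informal numerical illustration of 4.11 (not part of the formal development), take $X_t = B_t$ and $v(s,\omega) = B_s(\omega)$; by 2.35 the limit is known in closed form, $\int_0^1 B_s\,dB_s = (B_1^2 - 1)/2$, so the left-endpoint Riemann-sum error can be measured directly. A sketch under illustrative assumptions (one simulated path, nested sub-partitions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**16                                   # finest grid on [0, 1]
dB = rng.normal(0.0, np.sqrt(1.0 / N), N)
B = np.concatenate(([0.0], np.cumsum(dB)))  # one Brownian path, B[k] ~ B_{k/N}

exact = (B[-1]**2 - 1.0) / 2.0              # int_0^1 B dB = (B_1^2 - 1)/2, by 2.35

errors = []
for m in [2**6, 2**10, 2**14]:              # coarser sub-partitions of the same path
    idx = np.arange(0, N + 1, N // m)       # partition points 0 = t_0 < ... < t_m = 1
    Bm = B[idx]
    riemann = np.sum(Bm[:-1] * np.diff(Bm)) # left-endpoint sum, as in 4.11
    errors.append(abs(riemann - exact))
```

The error here equals $|\sum_i(\Delta B_i)^2 - 1|/2$ by a telescoping identity, so it shrinks (in probability) as the mesh refines: the quadratic variation sums concentrate around $\langle B\rangle_1 = 1$.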

4.6 Stochastic Integration by Parts

In this section we generalize the Lebesgue-Stieltjes integration by parts formula presented in book 5's proposition 6.9, which was there proved as an application of Fubini's theorem. Recall from that book's definition 6.1 that $B.V.[\mathbb{R}]$ denotes the space of functions of bounded variation on $\mathbb{R}$, meaning that the total variation $T < \infty$, where $T = \sup_{[a,b]} T_{[a,b]}$ with $T_{[a,b]} = V_f([a,b])$ in the notation of 4.4.

It was there proved that given right continuous $f, g \in B.V.[\mathbb{R}]$, with at least one continuous, then for any bounded interval $(a,b]$:
$$\int_{(a,b]} g\,d\mu_f = f(b)g(b) - f(a)g(a) - \int_{(a,b]} f\,d\mu_g,$$

where $\mu_f$ and $\mu_g$ are the signed measures induced by $f$ and $g$, respectively. These integrals were defined in that book's proposition 6.7 consistently with 4.6. Rearranging, and expressing the upper limit of integration as a variable $x \ge a$:
$$f(x)g(x) - f(a)g(a) = \int_a^x f\,d\mu_g + \int_a^x g\,d\mu_f.$$
The following result generalizes this formula from Lebesgue-Stieltjes integrators to continuous semimartingale integrators.

Remark 4.26 (When $X_0 \ne 0$) In some books the continuous semimartingales used as integrators are not required to satisfy $X_0 = 0$. Then, as in the above Lebesgue-Stieltjes formula, 4.13 would be expressed:
$$X_t Y_t - X_0 Y_0 = \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \langle X,Y\rangle_t. \tag{4.12}$$
This follows by applying 4.13 to $X_t' \equiv X_t - X_0$ and $Y_t' \equiv Y_t - Y_0$ for these more general processes. Details are left as an exercise.
Also, the use of $\langle X,Y\rangle_t$ in these formulas is appealing for notational symmetry, but recall from book 7's proposition 6.34 that $\langle X,Y\rangle_t = \langle M,N\rangle_t$, where $M_t, N_t$ are the local martingale components of these semimartingales.

Proposition 4.27 (Stochastic integration by parts) If $X_t, Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$, then $\mu$-a.e.:
$$X_t Y_t = \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \langle X,Y\rangle_t. \tag{4.13}$$

Proof. By proposition 4.20, $\mathcal{M}_{loc}^{\mathcal{S}} \subset \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$, and so $X_t, Y_t \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ and both integrals in 4.13 are well defined by proposition 4.21. Given a partition $\Pi_n = \{t_i\}_{i=0}^{n+1}$ of $[0,t]$ with $0 = t_0 < t_1 < \cdots < t_{n+1} = t$, then since $X_0 = Y_0 = 0$ it follows that:
$$X_t Y_t = \sum_{i=0}^n \left[X_{t_{i+1}} - X_{t_i}\right]\left[Y_{t_{i+1}} - Y_{t_i}\right] + \sum_{i=1}^{n+1} Y_{t_{i-1}}\left[X_{t_i} - X_{t_{i-1}}\right] + \sum_{i=1}^{n+1} X_{t_{i-1}}\left[Y_{t_i} - Y_{t_{i-1}}\right].$$
If the mesh $\mu_n \equiv \max_{1\le i\le n+1}\{t_i - t_{i-1}\} \to 0$, proposition 6.34 of book 7 yields that:
$$\sum_{i=0}^n \left[X_{t_{i+1}} - X_{t_i}\right]\left[Y_{t_{i+1}} - Y_{t_i}\right] \to_P \langle X,Y\rangle_t,$$

while from proposition 4.25:
$$\sum_{i=1}^{n+1} Y_{t_{i-1}}\left[X_{t_i} - X_{t_{i-1}}\right] \to_P \int_0^t Y_s\,dX_s, \qquad \sum_{i=1}^{n+1} X_{t_{i-1}}\left[Y_{t_i} - Y_{t_{i-1}}\right] \to_P \int_0^t X_s\,dY_s.$$
It is an exercise to check that for any $N$:
$$\Pr\left[\left|\sum_{j=1}^N X_j\right| > \epsilon\right] \le \Pr\left[\sum_{j=1}^N |X_j| > \epsilon\right] \le \sum_{j=1}^N \Pr\left[|X_j| > \frac{\epsilon}{N}\right].$$
Thus for any $\epsilon > 0$:
$$\Pr\left[\left|X_t Y_t - \left(\int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \langle X,Y\rangle_t\right)\right| > \epsilon\right]$$
$$\le \Pr\left[\left|\sum_{i=1}^{n+1} Y_{t_{i-1}}\left[X_{t_i} - X_{t_{i-1}}\right] - \int_0^t Y_s\,dX_s\right| > \frac{\epsilon}{3}\right] + \Pr\left[\left|\sum_{i=1}^{n+1} X_{t_{i-1}}\left[Y_{t_i} - Y_{t_{i-1}}\right] - \int_0^t X_s\,dY_s\right| > \frac{\epsilon}{3}\right]$$
$$+ \Pr\left[\left|\sum_{i=0}^n \left[X_{t_{i+1}} - X_{t_i}\right]\left[Y_{t_{i+1}} - Y_{t_i}\right] - \langle X,Y\rangle_t\right| > \frac{\epsilon}{3}\right].$$
Letting $n \to \infty$ obtains that this upper bound converges to 0 for any $\epsilon > 0$, and thus the proof is complete.
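The telescoping decomposition at the heart of this proof is easy to see numerically. An informal sketch with two independent simulated Brownian paths (the grid and seed are illustrative assumptions): the discrete identity is exact, while the cross-variation sum $\sum \Delta X_i\,\Delta Y_i$, which converges to $\langle X,Y\rangle_t = 0$ for independent Brownian motions, is already small.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 100_000, 1.0
dX = rng.normal(0.0, np.sqrt(t / n), n)   # increments of X = B^(1)
dY = rng.normal(0.0, np.sqrt(t / n), n)   # increments of an independent Y = B^(2)
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

# Discrete version of the proof's decomposition of X_t Y_t:
cross = np.sum(dX * dY)                   # -> <X,Y>_t = 0 by independence
sum_YdX = np.sum(Y[:-1] * dX)             # -> int_0^t Y dX
sum_XdY = np.sum(X[:-1] * dY)             # -> int_0^t X dY

assert np.isclose(X[-1] * Y[-1], cross + sum_YdX + sum_XdY)  # exact telescoping
assert abs(cross) < 0.05                  # cross-variation of independent BMs ~ 0
```

The first assertion holds on any grid; only the identification of each sum with its limiting integral requires the mesh to shrink.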
The next result is initially a bit of a surprise. If $X_t, Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$ with $X_t = M_t + F_t$ and $Y_t = N_t + G_t$, then formally the product $X_t Y_t$ can be expressed:
$$X_t Y_t = F_t N_t + G_t M_t + M_t N_t + F_t G_t.$$
Written this way, it is not apparent what type of process $X_t Y_t$ is, though it is apparently continuous and adapted.
The next result proves that $X_t Y_t$ is in fact equal to the sum of a local martingale and a bounded variation process. Thus $X_t Y_t$ is a semimartingale. This is a very special case of Itô's lemma, to be studied in chapter 5.

Corollary 4.28 If $X_t, Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$, then $X_t Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$. In other words, the product of continuous semimartingales is a continuous semimartingale.
Proof. Both integrals $\int_0^t Y_s\,dX_s$ and $\int_0^t X_s\,dY_s$ are continuous semimartingales by proposition 4.21, while $\langle X,Y\rangle_t$ is an adapted, continuous, bounded variation process (book 7's propositions 6.34 and 6.29). Thus the conclusion follows by 4.13, since the collection of local martingales is closed under addition, as is the collection of bounded variation processes.

If $X_t = M_t + F_t$ and $Y_t = N_t + G_t$, then book 7's proposition 6.34 yields that $\langle X,Y\rangle_t = \langle M,N\rangle_t$. Thus if either $M_t = 0$ or $N_t = 0$, it follows that $\langle X,Y\rangle_t = \langle M,N\rangle_t = 0$. This observation provides a simple extension of the Lebesgue-Stieltjes result, though requiring continuity of both processes.

Corollary 4.29 If $X_t, Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$, and at least one is of bounded variation, then:
$$X_t Y_t = \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s. \tag{4.14}$$
Proof. This follows from the above remark, since the bounded variation assumption assures that either $M_t = 0$ or $N_t = 0$, and thus $\langle X,Y\rangle_t = 0$ by book 7's proposition 6.34.
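For instance, taking $X_t = B_t$ and the bounded variation process $Y_t = t$, 4.14 reads $tB_t = \int_0^t s\,dB_s + \int_0^t B_s\,ds$, with no covariation term. An informal discretized sketch (the grid and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, t = 100_000, 1.0
dt = t / n
s = np.linspace(0.0, t, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

int_s_dB = np.sum(s[:-1] * dB)    # left-endpoint sum for int_0^t s dB
int_B_ds = np.sum(B[:-1] * dt)    # Riemann sum for int_0^t B ds

# 4.14 with X = B and Y_s = s:  t*B_t = int s dB + int B ds  (no <X,Y> term)
assert abs(t * B[-1] - (int_s_dB + int_B_ds)) < 1e-2
```

The residual here is $\sum \Delta s_i\,\Delta B_i = \Delta t\,B_t$, the discrete cross-variation of $t$ and $B_t$, which vanishes with the mesh, mirroring $\langle X,Y\rangle_t = 0$ when one process has bounded variation.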

4.7 Integration of Vector and Matrix Processes

A common model for a continuous semimartingale has the form:
$$X_t(\omega) = \int_0^t u(s,\omega)\,dF_s(\omega) + \sum_{j=1}^m \int_0^t v_j(s,\omega)\,dB_s^{(j)}(\omega),$$
for appropriate integrands. Such processes may for example represent the evolution of an asset price, where the first integral reflects "drift" in this asset price, while the summation of Itô integrals reflects asset price "volatility," or risk relative to $m$ factors. For such a model, these integrands must then be appropriately specified as functions on an underlying probability space $(\mathcal{S},\sigma(\mathcal{S}),\mu)$, and the bounded variation process identified, and this can be difficult except for simple models.

It is often more convenient to model the integrands as functions of $X_s$, as in:
$$X_t(\omega) = \int_0^t \tilde{u}(s,X_s(\omega))\,ds + \sum_{j=1}^m \int_0^t \tilde{v}_j(s,X_s(\omega))\,dB_s^{(j)}(\omega),$$
so that both the drift and volatility of such an asset at each moment of time reflect the asset value at that time. Itô's lemma is addressed in the next chapter and will motivate this idea. It will show that if $X_t$ is a continuous semimartingale expressed as in the first specification, and $f(x,t)$ is an appropriately defined smooth function, then $f(X_t,t)$ is a continuous semimartingale expressed as in the second specification.

While notationally compelling, we will need to specify properties of the functions $\tilde{u}(s,x)$ and $\tilde{v}_j(s,x)$ that will assure that these integrands have the necessary properties to make these integrals well defined. Also, this latter specification is in fact an integral equation to be solved for such $X_t(\omega)$, and this raises the issue of solvability, in the sense of existence and uniqueness of such a solution. While formally an integral equation, such equations and their solutions will be addressed in book 9 under the title of stochastic differential equations, or SDEs.
A simple example of such an equation defines geometric Brownian motion, the model underlying the famous Black-Scholes-Merton option pricing formulas, with constant drift $\mu$ and volatility $\sigma$:
$$X_t(\omega) = \mu\int_0^t X_s(\omega)\,ds + \sigma\int_0^t X_s(\omega)\,dB_s(\omega).$$
See chapters 8 and 9 of book 6 for background on this option pricing approach. The challenge with any such specification is to demonstrate that there exists a process $X_t(\omega)$ so that the first integral is well defined as a Lebesgue integral, and the second is well defined as an Itô integral, and that these integrals then reproduce the given process. It is also important to know if such a "solution" to this equation is unique.
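Informally, a path satisfying such an integral equation can be approximated by the Euler-Maruyama scheme, replacing each integral over $[t_i, t_{i+1}]$ by its left-endpoint increment. A minimal sketch, where the drift $\mu = 0.05$, volatility $\sigma = 0.2$, initial value and grid are illustrative assumptions (book 9 treats existence and uniqueness of the exact solution):

```python
import numpy as np

def gbm_euler(x0, mu, sigma, t, n, rng):
    """Euler-Maruyama discretization of X_t = x0 + mu*int X ds + sigma*int X dB."""
    dt = t / n
    x = np.empty(n + 1)
    x[0] = x0
    dB = rng.normal(0.0, np.sqrt(dt), n)
    for i in range(n):
        # left-endpoint increments of the drift and Ito integrals
        x[i + 1] = x[i] + mu * x[i] * dt + sigma * x[i] * dB[i]
    return x

rng = np.random.default_rng(4)
path = gbm_euler(x0=1.0, mu=0.05, sigma=0.2, t=1.0, n=1000, rng=rng)

# Sanity check: with sigma = 0 the scheme compounds to x0*(1 + mu*dt)^n,
# which approximates the deterministic solution x0*exp(mu*t).
det = gbm_euler(x0=1.0, mu=0.05, sigma=0.0, t=1.0, n=1000, rng=rng)
assert abs(det[-1] - np.exp(0.05)) < 1e-4
```

The left-endpoint choice is not incidental: it is the same choice that makes the Riemann sums of proposition 4.25 converge to the Itô integral.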
In a market of $m$ such assets, with $X_t(\omega) \equiv \left(X_t^{(1)}(\omega),\dots,X_t^{(m)}(\omega)\right)$, this becomes:
$$X_t^{(i)}(\omega) = \int_0^t u_i(s,\omega)\,dF_{is}(\omega) + \sum_{j=1}^n \int_0^t v_{ij}(s,\omega)\,dB_s^{(j)}(\omega), \quad i = 1,\dots,m,$$
or:
$$X_t^{(i)}(\omega) = \int_0^t \tilde{u}_i(s,X_s(\omega))\,ds + \sum_{j=1}^n \int_0^t \tilde{v}_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega), \quad i = 1,\dots,m,$$
where $B_t(\omega) \equiv \left(B_t^{(1)}(\omega),\dots,B_t^{(n)}(\omega)\right)$ is an $n$-dimensional Brownian motion.
The above models with standard $(s,\omega)$-integrands provide some motivation for introducing the ideas and notations below.

Notation 4.30 (Common conventions) It is really in the context of this section that one can appreciate the common notational convention that stochastic processes are denoted by $B$, $M$, et cetera, and not with the time subscript as $B_t$, $M_t$, nor as functions of the probability space variable, $B_t(\omega)$, $M_t(\omega)$. While convenient initially for notational clarity, these extra parameters add notational complexity, and once the reader is familiar with these ideas, it can be argued that this complexity is unnecessary.

Instead, the subscript position is sometimes used to identify the component of a vector-valued stochastic process:
$$X = (X_1, X_2, \dots, X_n),$$
or, to liberate the subscript position for $t$, one sometimes uses as noted above:
$$X = (X^{(1)}, X^{(2)}, \dots, X^{(n)}).$$
When needed for clarity, this can be expressed as:
$$X(t,\omega) = (X_1(t,\omega), X_2(t,\omega), \dots, X_n(t,\omega)),$$
or:
$$X_t(\omega) = \left(X_t^{(1)}(\omega), X_t^{(2)}(\omega), \dots, X_t^{(n)}(\omega)\right),$$
but it is often unambiguous to omit variables in a given formula. This lack of ambiguity is a result of one's familiarity with the ideas and notations of the above sections, and within that framework, this notation often has only one interpretation.
For example, given the development of this chapter, the expression:
$$\int_0^t v\,dX,$$
is well defined for all the categories of processes $X$ developed above, and all the processes $v$ for which such integrals were defined. That this integral can be expressed:
$$\int_0^t v(s,\omega)\,dX_s(\omega),$$
adds little to our understanding once stochastic integration theory has been developed and the notation becomes more familiar.
That said, those new to the theory can find such notation ambiguous due to lack of familiarity. Thus we will generally continue with the more explicit notation, with apologies to the more expert readers.

4.7.1 The Itô Integral

In this section we define Itô integrals with respect to an $n$-dimensional Brownian motion defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. For the space of integrands, we have a choice of spaces:

1. Definition 2.31: The original space $\mathcal{H}_2([0,\infty)\times\mathcal{S})$ of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times\mathcal{S}$ that are measurable, adapted to the filtration $\{\sigma_t(\mathcal{S})\}$, and satisfy 2.27:
$$\|v(s,\omega)\|^2_{L_2([0,\infty)\times\mathcal{S})} \equiv \int_{\mathcal{S}}\int_0^\infty v^2(s,\omega)\,ds\,d\mu < \infty.$$

2. Definition 3.70: Since $B_t$ is a continuous local martingale, the space $\mathcal{H}_{2,loc}^{B}([0,\infty)\times\mathcal{S})$ of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times\mathcal{S}$ that are predictable and satisfy 3.57 $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)\,ds < \infty, \quad \text{for all } t.$$
For this definition, recall that $d\langle B\rangle_s = ds$.

3. Definition 4.17: The space of locally bounded predictable processes $\mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times\mathcal{S}$ that are predictable, and for which there exists a sequence of increasing and almost surely unbounded stopping times $\{T_k\}_{k=1}^\infty$, and a sequence of real constants $\{c_k\}_{k=1}^\infty$, so that with $v^{T_k}(s,\omega) \equiv v(s\wedge T_k,\omega)$:
$$\left|v^{T_k}(s,\omega)\right| \le c_k, \quad \mu\text{-a.e.}$$

For integrals with respect to Brownian motion, all three spaces provide well-defined integrals by propositions 2.52, 3.83 and 4.21. It is common to use the middle ground of definition 3.70 for its generality, which circumvents the need to introduce stopping times, but any of the above definitions is a reasonable choice if it suits the given application. Recalling definition 3.70:
Definition 4.31 ($\mathcal{H}_{2,loc}^{B(m\times n)}([0,\infty)\times\mathcal{S})$) Let $B_t(\omega) \equiv \left(B_t^{(1)}(\omega),\dots,B_t^{(n)}(\omega)\right)$ be an $n$-dimensional Brownian motion defined on a filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. Then $\mathcal{H}_{2,loc}^{B(1\times n)}([0,\infty)\times\mathcal{S})$ denotes the space of $n$-dimensional processes $v(t,\omega) \equiv (v_j(t,\omega))_{j=1}^n$, where:

1. Each $v_j(s,\omega)$ is predictable, and,

2. $\mu$-a.e.:
$$\int_0^t v_j^2(s,\omega)\,ds < \infty, \quad \text{all } t, \text{ all } j.$$

More generally, the space $\mathcal{H}_{2,loc}^{B(m\times n)}([0,\infty)\times\mathcal{S})$ of $m\times n$-matrix processes $v(t,\omega) \equiv (v_{ij}(t,\omega))_{i=1,j=1}^{m,n}$ is defined similarly, meaning each $v_{ij}(s,\omega)$ is predictable, and $\mu$-a.e.:
$$\int_0^t v_{ij}^2(s,\omega)\,ds < \infty, \quad \text{all } t, \text{ all } i,j.$$
If the above integrals are finite over $[0,\infty)$, $\mu$-a.e., we refer to these spaces as $\mathcal{H}_2^{B(1\times n)}([0,\infty)\times\mathcal{S})$ and $\mathcal{H}_2^{B(m\times n)}([0,\infty)\times\mathcal{S})$, respectively.

Integration with respect to $n$-dimensional Brownian motion $B_t = \left(B_t^{(1)},\dots,B_t^{(n)}\right)$ is then defined as follows. There is no new theory in this definition, only notational conventions.

Definition 4.32 (Multivariate Itô integrals) Given an $n$-dimensional Brownian motion $B_t \equiv \left(B_t^{(1)},\dots,B_t^{(n)}\right)$ defined on a probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, then:

1. If $v \in \mathcal{H}_{2,loc}^{B(1\times n)}([0,\infty)\times\mathcal{S})$:
$$\int_a^b v\,dB \equiv \sum_{j=1}^n \int_a^b v_j(s,\omega)\,dB_s^{(j)}(\omega). \tag{4.15}$$

2. If $v \in \mathcal{H}_{2,loc}^{B(m\times n)}([0,\infty)\times\mathcal{S})$:
$$\int_a^b v\,dB \equiv \begin{pmatrix} \sum_{j=1}^n \int_a^b v_{1j}(s,\omega)\,dB_s^{(j)}(\omega) \\ \sum_{j=1}^n \int_a^b v_{2j}(s,\omega)\,dB_s^{(j)}(\omega) \\ \vdots \\ \sum_{j=1}^n \int_a^b v_{mj}(s,\omega)\,dB_s^{(j)}(\omega) \end{pmatrix}. \tag{4.16}$$

Remark 4.33 In other words, the notational convention in 4.16 is consistent with that of a matrix product, with the integrator $dB \equiv (dB_1, dB_2, \dots, dB_n)$ treated as an $n\times 1$ matrix, or column vector. The same is true of 4.15 if we deem $v \in \mathcal{H}_{2,loc}^{B(1\times n)}$ to be a $1\times n$ matrix, or row vector. In the context of this formula it is also not uncommon to use the notation of an inner product of vectors, so:
$$\int_a^b v\,dB \equiv \int_a^b v\cdot dB.$$

Note that each of the integrals in these expressions is well defined from proposition 3.83. In addition, each such integral expressed as $\int_0^t$ is a continuous local martingale on the space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, and so $\int_0^t v\,dB$ defined in 4.15 is a continuous local martingale on this space, and that in 4.16 is an $m$-dimensional vector process of continuous local martingales (which is by definition an $m$-dimensional continuous local martingale).
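Discretely, 4.16 is just a matrix-vector product applied to each increment of $B$: accumulating $v(t_i)\,\Delta B_i$ over the partition gives the $m$-vector of integrals, component $i$ being $\sum_j \int v_{ij}\,dB^{(j)}$. An informal sketch with an illustrative deterministic $2\times 2$ integrand (grid and seed are assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(5)
n, t = 1000, 1.0
dt = t / n
s = np.linspace(0.0, t, n + 1)[:-1]        # left endpoints
dB = rng.normal(0.0, np.sqrt(dt), (n, 2))  # increments of 2-dim Brownian motion

def v(u):
    """Illustrative 2x2 matrix integrand v(u)."""
    return np.array([[1.0, u], [np.sin(u), 2.0]])

# Vector integral per 4.16: accumulate v(t_i) @ dB_i over the partition
integral = np.zeros(2)
for i in range(n):
    integral += v(s[i]) @ dB[i]

# Component-wise, row i is sum_j int v_ij dB^(j); check the first component
comp0 = np.sum(1.0 * dB[:, 0]) + np.sum(s * dB[:, 1])
assert np.isclose(integral[0], comp0)
```

The row-by-row check confirms that the matrix-product convention and the component-sum definition agree exactly on any grid.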

4.7.2 Integrals w.r.t. Continuous Semimartingales

We begin by formalizing the definition of an $n$-dimensional semimartingale and the associated components:

Definition 4.34 ($n$-dimensional continuous LM or SM) A process $X = \left(X^{(1)}, X^{(2)}, \dots, X^{(n)}\right)$ defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ is an $n$-dimensional continuous semimartingale if $X_t = X_0 + F_t + M_t$ where:

1. $F = \left(F^{(1)}, F^{(2)}, \dots, F^{(n)}\right)$ is an $n$-dimensional continuous, bounded variation process, i.e., each $F^{(j)}$ is adapted to the filtration $\{\sigma_t(\mathcal{S})\}$, is continuous and of bounded variation;

2. $M = \left(M^{(1)}, M^{(2)}, \dots, M^{(n)}\right)$ is an $n$-dimensional continuous local martingale, i.e., each $M^{(j)}$ is a continuous local martingale;

3. $F_0^{(j)} = M_0^{(j)} = 0$ for all $j$;

4. $X_0^{(j)}$ is $\sigma_0(\mathcal{S})$-measurable for all $j$.

We again let $\mathcal{M}_{loc}^{\mathcal{S}}$ denote the space of $n$-dimensional semimartingales with $X_0 = 0$, and $\mathcal{M}_{loc}$ the space of $n$-dimensional continuous local martingales with $M_0 = 0$.

The associated space of integrands is defined analogously with definition 4.17.

Definition 4.35 ($\mathcal{H}_{loc}^{bP(m\times n)}([0,\infty)\times\mathcal{S})$) Given a filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, $\mathcal{H}_{loc}^{bP(1\times n)}([0,\infty)\times\mathcal{S})$ is defined as the space of $n$-dimensional locally bounded predictable processes $u(t,\omega) \equiv (u_i(t,\omega))_{i=1}^n$. That is, each component $u_i(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ as in definition 4.17:

1. Each $u_i(s,\omega)$ is predictable;

2. There is a sequence of increasing and almost surely unbounded stopping times $\{T_k\}_{k=1}^\infty$, and a sequence of real constants $\{c_k\}_{k=1}^\infty$, so that $\mu$-a.e.:
$$\left|u_i^{T_k}(s,\omega)\right| \le c_k, \quad \text{all } i,$$
where $u_i^{T_k}(s,\omega) \equiv u_i(s\wedge T_k,\omega)$, or equivalently,
$$\left|u_i(s,\omega)\right|\chi_{[0,T_k]}(s) \le c_k, \quad \text{all } i.$$

More generally, the space $\mathcal{H}_{loc}^{bP(m\times n)}([0,\infty)\times\mathcal{S})$ of $m\times n$-matrix processes $v(t,\omega) \equiv (v_{ij}(t,\omega))_{i=1,j=1}^{m,n}$ is defined similarly, meaning each $v_{ij}(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$.
Remark 4.36 Note that both the sequence of stopping times $\{T_k\}_{k=1}^\infty$ and the sequence of constants $\{c_k\}_{k=1}^\infty$ can always be defined to be independent of $i,j$, by defining with apparent notation:
$$T_k = \min_{i,j}\{T_k^{(i,j)}\}, \qquad c_k = \max_{i,j}\{c_k^{(i,j)}\}.$$
Then each $T_k$ is a stopping time by book 7's proposition 5.60. Also, by proof by contradiction, $T_k \to \infty$, $\mu$-a.e.

The following definition is explicitly stated for continuous semimartingales, but naturally applies in the special cases of continuous $m$-dimensional local martingales, or continuous $m$-dimensional processes of bounded variation.
Definition 4.37 If $u \in \mathcal{H}_{loc}^{bP(1\times n)}([0,\infty)\times\mathcal{S})$, and $X \in \mathcal{M}_{loc}^{\mathcal{S}}$ is an $n$-dimensional continuous semimartingale, then:
$$\int_a^b u\,dX \equiv \sum_{j=1}^n \int_a^b u_j(s,\omega)\,dX_s^{(j)}(\omega), \tag{4.17}$$
where each component integral is defined as in 4.2.
Analogously, if $u \in \mathcal{H}_{loc}^{bP(m\times 1)}([0,\infty)\times\mathcal{S})$ and $v \in \mathcal{H}_{loc}^{bP(m\times n)}([0,\infty)\times\mathcal{S})$, then for an $m$-dimensional continuous, bounded variation process $F$ and an $n$-dimensional continuous local martingale $M$:
$$\int_a^b u\,dF + \int_a^b v\,dM \equiv \begin{pmatrix} \int_a^b u_1(s,\omega)\,dF_s^{(1)}(\omega) + \sum_{j=1}^n \int_a^b v_{1j}(s,\omega)\,dM_s^{(j)}(\omega) \\ \int_a^b u_2(s,\omega)\,dF_s^{(2)}(\omega) + \sum_{j=1}^n \int_a^b v_{2j}(s,\omega)\,dM_s^{(j)}(\omega) \\ \vdots \\ \int_a^b u_m(s,\omega)\,dF_s^{(m)}(\omega) + \sum_{j=1}^n \int_a^b v_{mj}(s,\omega)\,dM_s^{(j)}(\omega) \end{pmatrix}. \tag{4.18}$$

Remark 4.38 Note that $\int_a^b u\,dX$ can well be defined for $u \in \mathcal{H}_{loc}^{bP(m\times n)}([0,\infty)\times\mathcal{S})$ analogously with 4.16, and one obtains a result as in 4.18, only with an $n$-sum of bounded variation integrals. In general, however, it is more common for models to have $n$-sums of Brownian or local martingale integrals, and only one bounded variation integral. Of course, this is just notation, so it can be adapted to fit the needs of the application.
Also, as was the case for the integrals with respect to $n$-dimensional Brownian motion above, there is no new theory here, only notational conventions.
Chapter 5

Itô’s Lemma

Itô's Lemma, which is represented in the literature in many ways, is a general result which answers the following question: if $X_t$ is a continuous semimartingale (one or multi-dimensional), and $f$ is a smooth function or transformation, what can be stated about the form of $f(X_t)$? The final result of this lemma produces what is known as Itô's Formula, or Itô's Rule, and more generally is called Itô's change-of-variable formula. It is arguably one of the most important results in the theory of stochastic processes, named for Kiyoshi Itô (1915–2008), who pioneered a mathematical framework for the stochastic integral of chapter 2 and related concepts, collectively known as the Itô calculus. It is only referenced as a "lemma" to distinguish it from Itô's theorem in book 9, which is also called Itô's representation theorem.

For the purposes of the current chapter, a continuous semimartingale is defined more generally as in definition 4.1, rather than as in definition 4.2, which resulted in the space $\mathcal{M}_{loc}^{\mathcal{S}}$. Thus we allow $X_0$ to be a random variable rather than identically 0. We repeat the definition here for clarity.

Definition 5.1 (Continuous semimartingale) A continuous semimartingale on the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ is a stochastic process $X_t$ with:
$$X_t = X_0 + X_t', \qquad X_t' = M_t + F_t \in \mathcal{M}_{loc}^{\mathcal{S}},$$
where $X_0$ is $\sigma_0(\mathcal{S})$-measurable.

The punchline of Itô's lemma is that if $f$ is twice continuously differentiable, then $f(X_t)$ is again a continuous semimartingale. In fact, although we will not prove this, if $X_t$ is an $m$-dimensional continuous semimartingale, meaning each component is a continuous semimartingale, then $f$ need only have one continuous derivative in the components for which $X_t$ has bounded variation. When a component is of bounded variation, this means that it is a semimartingale for which there is no local martingale term $M_t$.
That the space of continuous semimartingales is "closed" relative to sufficiently smooth transformations is quite remarkable. Continuous martingales and local martingales are not so closed.

Example 5.2 (Examples of Itô's Lemma) If $f(x) = x^2$ and $X_t = B_t$, a Brownian motion, then $B_t$ is a continuous martingale and thus a local martingale, but by 2.35, $f(B_t)$ is a continuous semimartingale:
$$B_t^2 = 2\int_0^t B_s\,dB_s + t = \bar{M}_t + \bar{F}_t,$$
where:
$$\bar{M}_t = 2\int_0^t B_s\,dB_s, \qquad \bar{F}_t = \langle B\rangle_t.$$
More generally, if $M_t$ is a continuous $L_2$-bounded martingale that is also locally bounded, with $M_0$ a random variable not necessarily identically 0, then $f(M_t)$ is again a continuous semimartingale by 3.54:
$$M_t^2 = \bar{M}_t + \bar{F}_t,$$
where:
$$\bar{M}_t = 2\int_0^t M_s\,dM_s, \qquad \bar{F}_t = M_0^2 + \langle M\rangle_t.$$

For continuous semimartingales $X_t, Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$, which includes the case of continuous local martingales, then by the stochastic integration by parts formula in 4.13, $X_t Y_t \in \mathcal{M}_{loc}^{\mathcal{S}}$ with $X_t Y_t = \bar{M}_t + \bar{F}_t$, where:
$$\bar{M}_t = \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s, \qquad \bar{F}_t = \langle X,Y\rangle_t.$$
As noted in remark 4.26, when the continuous semimartingales used as integrators are not required to satisfy $X_0 = 0$, then as in 4.12:
$$X_t Y_t = X_0 Y_0 + \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \langle X,Y\rangle_t.$$

A corollary of this is that for all positive integers $n$, $B_t^n$ is a continuous semimartingale, as is $M_t^n$ for any continuous local martingale, as is $X_t^n$ for any continuous semimartingale.
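The first decomposition of example 5.2 can be checked informally on a simulated path: discretely, $B_t^2 - 2\sum_i B_{t_i}\Delta B_i = \sum_i(\Delta B_i)^2$ exactly, and the quadratic variation sum converges to $\langle B\rangle_t = t$. A sketch (the grid and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, t = 100_000, 1.0
dB = rng.normal(0.0, np.sqrt(t / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

mart = 2.0 * np.sum(B[:-1] * dB)    # discrete 2*int_0^t B dB (left endpoints)
qv = np.sum(dB**2)                  # discrete <B>_t

assert np.isclose(B[-1]**2, mart + qv)  # exact telescoping identity
assert abs(qv - t) < 0.05               # <B>_t = t
```

The identity is algebra on any grid; the probabilistic content of 2.35 is precisely that the two discrete terms converge to $2\int_0^t B_s\,dB_s$ and to $t$.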
These examples provide some circumstantial evidence for the conjecture that if $X_t$ is a continuous semimartingale, then $f(X_t)$ is again a continuous semimartingale for appropriately restricted $f$. To prove this rigorously requires a detailed analysis of the interplay between the convergence of stochastic integrals and the approximation of $f(X_t)$ by a Taylor series expansion. As will be seen, most of the work will be required for the "error" term of such an expansion, since over a given interval $[t_{k-1},t_k]$, the error term will be evaluated at some unspecified point $\xi_k = X_{t_{k-1}} + \lambda\left(X_{t_k} - X_{t_{k-1}}\right)$ where $\lambda(\omega) \in [0,1]$. For example in one variable, the so-called Lagrange form of the remainder, named for Joseph-Louis Lagrange (1736–1813), takes the form:
$$f(X_{t_k}) - f(X_{t_{k-1}}) = f'(X_{t_{k-1}})\left(X_{t_k} - X_{t_{k-1}}\right) + \frac{1}{2}f''(\xi_k)\left(X_{t_k} - X_{t_{k-1}}\right)^2.$$
Not surprisingly, because $f'$ is evaluated at the left endpoint of the interval, the Riemann sum of these terms will converge nicely to the continuous semimartingale $\int_0^t f'(X_t)\,dX_t$. To the extent that many references abbreviate this proof, it is with regard to the details of the sum of the error terms, as it requires a fair amount of detailed effort to show convergence to $\frac{1}{2}\int_0^t f''(X_t)\,d\langle M\rangle_t$. Various sources present differing amounts of these details, but Karatzas and Shreve (1988) and Øksendal (1998) provide examples of in-depth analyses, and the proof below reflects their detailed derivations.
We prove this result for a special 2-dimensional continuous semimartingale $(t, X_t)$, where $X_t$ is a 1-dimensional continuous semimartingale and $Y_t \equiv t$ is the simplest form of a bounded variation process. This approach has the advantage of being frequently applicable, but also provides the structure of the general proof for an $m$-dimensional semimartingale with some components of bounded variation.
After several versions of Itô's Lemma are stated and proved, we investigate several applications in the next chapter where the Itô result plays a prominent role. Indeed, this result is pervasive in stochastic analysis and will be found to be essential in much that follows. But the applications addressed in the next chapter are either interesting in their own right (the theorems of Lévy, Burkholder-Davis-Gundy, and Feynman-Kac), or provide a valuable tool for later investigations (the Doléans-Dade exponential, Girsanov's theorems, and Kolmogorov's equations).

5.1 Versions of Itô's Lemma

In the next sections we provide several versions of Itô's lemma. They all address the same question and state consistent conclusions, but represent the different ways this result is presented in the various references. Though the conclusions are consistent, and the notation makes each result appear to be an obvious corollary of the others, there are subtleties related to existence that need to be addressed, and we do this here for completeness.

Throughout the remainder of this chapter we are interested in a class of differentiable functions that are defined as follows:

Definition 5.3 ($C^{1,2}([0,\infty)\times\mathbb{R}^m)$) The function space $C^n(\mathbb{R}^m)$ is defined as the collection of functions $f:\mathbb{R}^m\to\mathbb{R}$ which are $n$-times continuously differentiable, meaning all partial derivatives up to order $n$ are continuous. To accommodate functions with different degrees of differentiability in the variates, one can also define, for example, $C^{n,n'}(\mathbb{R}\times\mathbb{R})$ as the collection of functions which are $n$-times continuously differentiable in $x_1$, and $n'$-times continuously differentiable in $x_2$, and so forth.
Analogously, $C^{1,2}([0,\infty)\times\mathbb{R}^m)$ is defined as the collection of continuous functions $f(t,x):[0,\infty)\times\mathbb{R}^m\to\mathbb{R}$ which are continuously differentiable in $t\in(0,\infty)$, twice continuously differentiable in $x\in\mathbb{R}^m$, and where all derivatives have a continuous extension to $[0,\infty)\times\mathbb{R}^m$. For $T<\infty$ fixed, the space $C^{1,2}([0,T]\times\mathbb{R}^m)$ is defined analogously, as continuous functions on $[0,T]\times\mathbb{R}^m$, differentiable as above on $(0,T)\times\mathbb{R}^m$, and where all derivatives have a continuous extension to $[0,T]\times\mathbb{R}^m$.
The same definitions will apply to transformations $f(t,x):[0,\infty)\times\mathbb{R}^m\to\mathbb{R}^p$. For example, such $f(t,x)\in C^{1,2}([0,T]\times\mathbb{R}^m)$ if $f(t,x) = (f_1(t,x),\dots,f_p(t,x))$ with all $f_k(t,x)\in C^{1,2}([0,T]\times\mathbb{R}^m)$ as above. There should be no confusion caused by avoiding attaching a $p$ to this space.
For any such space, the subscript 0 denotes that all such functions have compact support. For example, if $f\in C_0^n(\mathbb{R}^m)$ then $f\in C^n(\mathbb{R}^m)$ and there exists compact $K\subset\mathbb{R}^m$ so that $f = 0$ outside $K$.
Example 5.4 The restriction to $[0,\infty)\times\mathbb{R}^m$ of any function $f\in C^{1,2}((-\delta,\infty)\times\mathbb{R}^m)$ with $\delta>0$ belongs to $C^{1,2}([0,\infty)\times\mathbb{R}^m)$, though functions in this latter class need not have such extensions. For example, $f(t,x)\equiv|t|+x\in C^{1,2}([0,\infty)\times\mathbb{R})$ but not $C^{1,2}((-\delta,\infty)\times\mathbb{R})$ for any $\delta>0$.
Remark 5.5 (On λ-a.e. in Itô's lemma) Note that once it is proved that $f(t,X_t)$ is given for every $0\le t<\infty$, λ-a.e., by the expressions below, such as in (5.2), then there is a single set of λ-measure 0 outside of which (5.2) is valid for all rational $t$. Then since both left and right expressions are continuous in $t$, it follows that outside a single set of λ-measure 0, (5.2) is valid for all $t$. This same remark applies to all versions of Itô's lemma that follow.

Also, while each version is presented in the context of smooth functions $f(t,x):[0,\infty)\times\mathbb{R}^m\to\mathbb{R}$, they provide the recipe for smooth transformations $f(t,x):[0,\infty)\times\mathbb{R}^m\to\mathbb{R}^p$. Indeed, any such transformation can be represented $f(t,x)=(f_1(t,x),\dots,f_p(t,x))$ with all $f_k(t,x):[0,\infty)\times\mathbb{R}^m\to\mathbb{R}$. Thus Itô's lemma for such transformations obtains the $p$-vector of results, with each component obtained from the results below applied to each $f_k(t,x)$.
Remark 5.6 (On differentiability) In the various versions of Itô's lemma below, we will assume that $f\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$, meaning $f$ has one continuous derivative in $t$ and two continuous derivatives in the $x$-variates. Looking at the resulting Itô formulas, one can readily think of examples where one or more of these second derivatives disappear from the final result, and thus wonder if they needed to be assumed to exist.

For example in (5.2), if $d\langle M\rangle_t\equiv 0$ then there is no $f_{xx}(t,X_t)$ term in this expression. By continuity of $\langle M\rangle_t$ this implies that $\langle M\rangle_t$ is constant, and thus $\langle M\rangle_t=\langle M\rangle_0=0$ for all $t$. Book 7's corollary 6.14 then obtains that $M_t\equiv 0$ and $X_t$ is a bounded variation process. For such processes Itô's lemma remains true for $f\in C^{1,1}([0,\infty)\times\mathbb{R})$ in the 1-dimensional case, with similar statements true in other cases.

The final conclusion is that $f\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ can be generalized to require that $f$ only be once continuously differentiable in any $x_k$-component for which $X_k$ is a bounded variation process. In (5.16) this is realized by noting that for such $k$, $d\langle M^{(k)}\rangle_t\equiv 0$ and thus $M_t^{(k)}\equiv 0$ as above. Hence $d\langle M^{(k)},M^{(i)}\rangle_s\equiv 0$ for all $i$ by (6.20) of book 7's definition 6.25, and (5.16) contains no $f_{x_i x_k}(x)$-terms.

We will develop the 1-dimensional cases as an exercise, and refer the reader to Durrett's Stochastic Calculus book (1996), which provides more details on the approach noted in Revuz and Yor (1999).
5.2 Semimartingale Version
This first version of Itô's lemma is the key result, and as such will involve most of the hard work. The question addressed is: if $X_t=X_0+F_t+M_t$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, and $f(t,x)$ is a sufficiently smooth function, what can be said about $f(t,X_t)$? It turns out that $f(t,X_t)$ is always a semimartingale. More specifically, local martingales and semimartingales with $M_t\ne 0$ transform into semimartingales, while when $X_t=X_0+F_t$ is a continuous BV process, $f(t,X_t)$ reduces to a BV process. The smoothness assumption required is generally stated as $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$. But it will be seen in exercise 5.10 that when $X_t=X_0+F_t$ is a continuous BV process, the result only requires $f(t,x)\in C^{1,1}([0,\infty)\times\mathbb{R})$. In the multivariate generalization below this observation will be repeated, that second derivatives will only be needed in the variates for which $M_t^{(j)}\ne 0$, but not proved in this general context.
Exercise 5.7 Confirm that $f(t,X_t)$ as expressed in (5.2) is a continuous semimartingale given earlier results. Hint: Note that λ-a.e. both $X_t$ and the integrands are continuous functions, and for such $\omega$, these functions are bounded on the domain of integration because $X_t(\omega)$ is then bounded.
Notation 5.8 (Differential form of Itô's lemma) It is common to express the formula in (5.2) in differential notation:

$$df(t,X_t)=f_t(t,X_t)dt+f_x(t,X_t)dF_t+\frac12 f_{xx}(t,X_t)d\langle M\rangle_t+f_x(t,X_t)dM_t,\tag{5.1}$$

since $\int_0^t df(s,X_s)\equiv f(t,X_t)-f(0,X_0)$. More economically, this is sometimes written:

$$df(t,X_t)=f_t(t,X_t)dt+\frac12 f_{xx}(t,X_t)d\langle X\rangle_t+f_x(t,X_t)dX_t,$$

since by proposition 4.21:

$$dX_t=dF_t+dM_t,$$

and by book 7's proposition 6.24, $\langle X\rangle_t=\langle M\rangle_t$ for a continuous semimartingale.

Analogously, the integral version in (5.2) is sometimes expressed:

$$f(t,X_t)=f(0,X_0)+\int_0^t f_t(s,X_s)ds+\frac12\int_0^t f_{xx}(s,X_s)d\langle X\rangle_s+\int_0^t f_x(s,X_s)dX_s.$$
Proposition 5.9 (Itô's lemma for semimartingales) Let $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$. If $X_t=X_0+F_t+M_t$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, meaning $F_t+M_t\in\mathcal{M}_{\mathcal{S}}^{loc}$ is a continuous semimartingale and $X_0$ is $\sigma_0$-measurable, then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. Specifically, for every $0\le t<\infty$, the components of $f(t,X_t)$ are given λ-a.e.:

$$f(t,X_t)=f(0,X_0)+\int_0^t f_t(s,X_s)ds+\int_0^t f_x(s,X_s)dF_s+\frac12\int_0^t f_{xx}(s,X_s)d\langle M\rangle_s+\int_0^t f_x(s,X_s)dM_s,\tag{5.2}$$

where $f_t\equiv\partial f/\partial t$, and so forth. As noted in remark 5.5, this implies that λ-a.e., the identity in (5.2) is valid for all $t$.
Proof. Following Karatzas and Shreve (1988), this proof is divided into a number of manageable steps.

1. Localization: To facilitate convergence of the various summations, we first introduce stopping times to ensure all integrands are bounded. For $N>0$ define:

$$T_N=\begin{cases}0,&\text{if }|X_0|\ge N,\\ \inf\{t\,|\,|M_t|\ge N\text{ or }|F_t|\ge N\text{ or }\langle M\rangle_t\ge N\},&\text{if }|X_0|<N.\end{cases}$$

For the second part of the definition of $T_N$, this can be expressed:

$$\inf\{t\,|\,|M_t|\ge N\}\wedge\inf\{t\,|\,|F_t|\ge N\}\wedge\inf\{t\,|\,\langle M\rangle_t\ge N\}.$$

Since all processes are continuous, $T_N$ is a stopping time by parts 5 and 6 of book 7's proposition 5.60. As always, if $|M_t|<N$, $|F_t|<N$, and $\langle M\rangle_t<N$ for all $t$, we define:

$$T_N=\inf\{t\,|\,|M_t|\ge N\text{ or }|F_t|\ge N\text{ or }\langle M\rangle_t\ge N\}\equiv\infty.$$

Then $\{T_N\}$ is increasing as $N\to\infty$, and $T_N\to\infty$, λ-a.e. Unboundedness λ-a.e. follows because to have $T_N\le K$ for all $N$ for $\omega\in A\subset\mathcal{S}$ implies that at least one of $|M_t|$, $|F_t|$ and $\langle M\rangle_t$ is unbounded on $A$ for $t\in[0,K]$, and by continuity this implies $\lambda[A]=0$.
The goal of what follows is to prove the stated result for $X_t^{(N)}\equiv X_{t\wedge T_N}$, so $X_t^{(N)}=X_0+F_t^{(N)}+M_t^{(N)}$. Recalling proposition 4.22 on properties of stochastic integrals for semimartingale integrators, and noting that $X_s^{(N)}=X_s$ for $s\le T_N$, this would then prove that for each $N$, λ-a.e.:

$$\begin{aligned}f(t,X_t^{(N)})&=f(0,X_0)+\int_0^t f_t(s,X_s^{(N)})ds+\int_0^t f_x(s,X_s^{(N)})dF_s^{(N)}\\&\quad+\frac12\int_0^t f_{xx}(s,X_s^{(N)})d\langle M^{(N)}\rangle_s+\int_0^t f_x(s,X_s^{(N)})dM_s^{(N)}\\&=f(0,X_0)+\int_0^{t\wedge T_N} f_t(s,X_s)ds+\int_0^{t\wedge T_N} f_x(s,X_s)dF_s\\&\quad+\frac12\int_0^{t\wedge T_N} f_{xx}(s,X_s)d\langle M\rangle_s+\int_0^{t\wedge T_N} f_x(s,X_s)dM_s.\end{aligned}$$

Given $t$, this expression is then valid λ-a.e. for all integer $N$, and by letting integer $N\to\infty$, (5.2) follows since $T_N\to\infty$ λ-a.e.
2. Boundedness: Note that for given $t$, $|X_t^{(N)}|\le 3N$, and since $f$, $f_t$, $f_x$, and $f_{xx}$ are continuous on the compact set $[0,t]\times[-3N,3N]$, each integrand is bounded by a given constant $K$.
3. Taylor Approximation: Given $N$, denote $X_t^{(N)}$ by $X_t$ for simplicity, and let $\Pi_n$ denote a partition of $[0,t]$ with $0=t_0<t_1<\dots<t_n=t$. Since

$$f(t_k,X_{t_k})-f(t_{k-1},X_{t_{k-1}})=\left[f(t_k,X_{t_k})-f(t_{k-1},X_{t_k})\right]+\left[f(t_{k-1},X_{t_k})-f(t_{k-1},X_{t_{k-1}})\right],$$

we can separately express by first and second order Taylor series:

$$f(t_k,X_{t_k})-f(t_{k-1},X_{t_k})=f_t(\tau_k,X_{t_k})(t_k-t_{k-1}),$$

$$f(t_{k-1},X_{t_k})-f(t_{k-1},X_{t_{k-1}})=f_x(t_{k-1},X_{t_{k-1}})\left(X_{t_k}-X_{t_{k-1}}\right)+\frac12 f_{xx}(t_{k-1},\eta_k)\left(X_{t_k}-X_{t_{k-1}}\right)^2,$$

where $t_{k-1}<\tau_k(\omega)<t_k$, and $\eta_k=X_{t_{k-1}}+\theta_k\left(X_{t_k}-X_{t_{k-1}}\right)$ with $\theta_k\equiv\theta_k(\omega)\in(0,1)$. Here we use the Lagrange form of the remainder term, named for Joseph-Louis Lagrange (1736–1813).

Summing:

$$\begin{aligned}f(t,X_t)-f(0,X_0)&=\sum_{k=1}^n f_t(\tau_k,X_{t_k})(t_k-t_{k-1})\\&\quad+\sum_{k=1}^n f_x(t_{k-1},X_{t_{k-1}})\left(X_{t_k}-X_{t_{k-1}}\right)\\&\quad+\frac12\sum_{k=1}^n f_{xx}(t_{k-1},\eta_k)\left(X_{t_k}-X_{t_{k-1}}\right)^2\\&\equiv I_1+I_2+I_3.\end{aligned}$$
4. Summation $I_1$: Since $f_t$ is continuous in both variables, and $X_t$ is bounded and continuous λ-a.e., $I_1$ is an ordinary Riemann summation of a λ-a.e. bounded continuous function, and so as $\Delta_n\equiv\max_{1\le i\le n}\{t_i-t_{i-1}\}\to 0$:

$$I_1\to\int_0^t f_t(s,X_s)ds,\quad\lambda\text{-a.e.}$$
5. Summation $I_2$: Because

$$X_{t_k}-X_{t_{k-1}}=\left(F_{t_k}-F_{t_{k-1}}\right)+\left(M_{t_k}-M_{t_{k-1}}\right),$$

$I_2=I_2^F+I_2^M$. As $f_x$ is continuous in both variables and $X_t$ is bounded and continuous λ-a.e., $I_2^F$ is λ-a.e. a Riemann summation associated with the Riemann-Stieltjes integral of a bounded continuous function with respect to a function of bounded variation. Thus by proposition 4.19 of book 3 and (4.6), as $\Delta_n\to 0$:

$$I_2^F\to\int_0^t f_x(s,X_s)dF_s,\quad\lambda\text{-a.e.}$$

Now $M_t$ is an $L^2$-bounded martingale, recalling the notational convention of step 3 and the bound as in step 2, and $f_x(s,X_s)$ is continuous λ-a.e. and bounded. Thus $f_x(s,X_s)\in\mathcal{H}_2^M([0,\infty)\times\mathcal{S})$ by book 7's proposition 6.18 if it is shown that $f_x(s,X_s)$ is predictable, and this will follow from continuity by that book's corollary 5.17 if $f_x(s,X_s)$ is adapted. Fixing $s$, the function $f_x(s,y):\mathbb{R}\to\mathbb{R}$ is continuous in $y$ and hence Borel measurable in $y$ by book 5's proposition 1.4, so $f_x^{-1}(s,\cdot)(A)\in\mathcal{B}(\mathbb{R})$ for all $A\in\mathcal{B}(\mathbb{R})$. Since $X$ is adapted, $[f_x(s,X_s)]^{-1}(A)=X_s^{-1}\left(f_x^{-1}(s,\cdot)(A)\right)\in\sigma_s(\mathcal{S})$, and thus $f_x(s,X_s)$ is adapted.

Now from proposition 3.55 it can be concluded that as $\Delta_n\to 0$:

$$I_2^M\to_P\int_0^t f_x(s,X_s)dM_s.$$

This convergence in probability implies λ-a.e. convergence for a subsequence of partitions with $\Delta_{n_l}\to 0$ by book 2's proposition 5.25. Combining, it follows that for this subsequence of partitions:

$$I_2\to\int_0^t f_x(s,X_s)dF_s+\int_0^t f_x(s,X_s)dM_s,\quad\lambda\text{-a.e.}$$
6. Summation $I_3$: As noted in the introduction to this section, this summation requires the most work, and indeed will be further divided. Because

$$\left(X_{t_k}-X_{t_{k-1}}\right)^2=\left(F_{t_k}-F_{t_{k-1}}\right)^2+2\left(F_{t_k}-F_{t_{k-1}}\right)\left(M_{t_k}-M_{t_{k-1}}\right)+\left(M_{t_k}-M_{t_{k-1}}\right)^2,$$
the summation $I_3=I_4+I_5+I_6$ with the apparent notation. For the summations $I_4$ and $I_5$, since $|f_{xx}|\le K$ by step 2, and with $v_1^{(n)}(F)$ denoting the absolute variation of $F$ on step 3's partition $\Pi_n$ (defined as in (4.3) but without the supremum):

$$|I_4|+|I_5|\le 2Kv_1^{(n)}(F)\left[\max_k\left|F_{t_k}-F_{t_{k-1}}\right|+\max_k\left|M_{t_k}-M_{t_{k-1}}\right|\right].$$

Continuity of $F$ and $M$ λ-a.e., and the bounded variation of $F$, then implies:

$$|I_4|+|I_5|\to 0,\quad\lambda\text{-a.e.}$$
For the analysis of $I_6$ (omitting the $\frac12$):

$$I_6\equiv\sum_{k=1}^n f_{xx}(t_{k-1},\eta_k)\left(M_{t_k}-M_{t_{k-1}}\right)^2,$$

our goal is to show that $I_6\to\int_0^t f_{xx}(s,X_s)d\langle M\rangle_s$, λ-a.e. Define two intermediate summations:

$$I_7=\sum_{k=1}^n f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\left(M_{t_k}-M_{t_{k-1}}\right)^2,$$

$$I_8=\sum_{k=1}^n f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right).$$
By proposition 4.19 of book 3 and (4.6), as $\Delta_n\to 0$:

$$I_8\to\int_0^t f_{xx}(s,X_s)d\langle M\rangle_s,\quad\lambda\text{-a.e.}$$

The final steps will be to prove that $I_6-I_7$ and $I_7-I_8$ converge to 0 in $L^1(\mathcal{S})$ and $L^2(\mathcal{S})$, respectively, as this will imply convergence in probability and thus convergence λ-a.e. for a subsequence of partitions.
7. $\|I_6-I_7\|_{L^1(\mathcal{S})}\to 0$: First note that with $v_2^{(n)}(M)$ denoting the quadratic variation of $M$ on partition $\Pi_n$ as in book 7's definition 6.2 (there denoted $Q_t^{\Pi_n}$):

$$|I_6-I_7|\le v_2^{(n)}(M)\max_k\left|f_{xx}(t_{k-1},\eta_k)-f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\right|.$$

By the Cauchy-Schwarz inequality:

$$\|I_6-I_7\|_{L^1}\le\left\|v_2^{(n)}(M)\right\|_{L^2}\left\|\max_k\left|f_{xx}(t_{k-1},\eta_k)-f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\right|\right\|_{L^2}.$$

We now recall the second half of the proof of part 6 of the Doob-Meyer decomposition theorem of book 7's proposition 6.5, where with a change in notation, $\left\|v_2^{(n)}(M)\right\|_{L^2}^2$ here is equivalent to $E\left[\left(Q_t^{\Pi_n}(M)\right)^2\right]$ in that proof (there denoted $E\left[\left(Q_{t'}^{\Pi_n}(M)\right)^2\right]$). It then follows that $\left\|v_2^{(n)}(M)\right\|_{L^2}\le\sqrt{12}\,N^2$, where $N$ is the bound on $|M_t|$ by parts 1 and 2. Now

$$\max_k\left|f_{xx}(t_{k-1},\eta_k)-f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\right|\to 0$$

pointwise as $\Delta_n\to 0$ by uniform continuity of $f_{xx}$ on a compact set. As this maximum is bounded, the bounded convergence theorem of book 5's proposition 2.46 assures that $\left\|\max_k\left|f_{xx}(t_{k-1},\eta_k)-f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\right|\right\|_{L^2}\to 0$.
8. $\|I_7-I_8\|_{L^2(\mathcal{S})}\to 0$: To simplify this analysis, first note that $M_t^2-\langle M\rangle_t$ is a martingale by the Doob-Meyer decomposition theorem of book 7's proposition 6.5. Thus with:

$$\left(M_{t_k}-M_{t_{k-1}}\right)^2-\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)=\left(M_{t_k}^2-\langle M\rangle_{t_k}\right)-\left(M_{t_{k-1}}^2-\langle M\rangle_{t_{k-1}}\right)-2M_{t_{k-1}}\left(M_{t_k}-M_{t_{k-1}}\right),\tag{*}$$

the tower and measurability properties of book 6's proposition 5.26 yield:

$$\begin{aligned}&E\left[\left(M_{t_k}-M_{t_{k-1}}\right)^2-\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)\right]\\&\quad=E\left[E\left[M_{t_k}^2-\langle M\rangle_{t_k}\,\big|\,\sigma_{t_{k-1}}(\mathcal{S})\right]\right]-E\left[E\left[M_{t_{k-1}}^2-\langle M\rangle_{t_{k-1}}\,\big|\,\sigma_{t_{k-1}}(\mathcal{S})\right]\right]-2E\left[M_{t_{k-1}}E\left[M_{t_k}-M_{t_{k-1}}\,\big|\,\sigma_{t_{k-1}}(\mathcal{S})\right]\right]\\&\quad=0.\end{aligned}$$

If $j\ne k$, then using the same approach with all product terms from (*) obtains:

$$E\left[\left[\left(M_{t_k}-M_{t_{k-1}}\right)^2-\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)\right]\left[\left(M_{t_j}-M_{t_{j-1}}\right)^2-\left(\langle M\rangle_{t_j}-\langle M\rangle_{t_{j-1}}\right)\right]\right]=0.$$
Recalling that $|f_{xx}|\le K$ by parts 1 and 2, that $(a-b)^2\le 2(a^2+b^2)$ for real $a,b$, and the prior result for $j\ne k$, this yields:

$$\begin{aligned}\|I_7-I_8\|_{L^2}^2&=E\left\{\sum_{k=1}^n f_{xx}\left(t_{k-1},X_{t_{k-1}}\right)\left[\left(M_{t_k}-M_{t_{k-1}}\right)^2-\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)\right]\right\}^2\\&\le K^2 E\sum_{k=1}^n\left[\left(M_{t_k}-M_{t_{k-1}}\right)^2-\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)\right]^2\\&\le 2K^2 E\left[\sum_{k=1}^n\left(M_{t_k}-M_{t_{k-1}}\right)^4+\sum_{k=1}^n\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)^2\right].\end{aligned}$$
Now because $|M_t|\le N$ and $\langle M\rangle_{t_k}\le N$ by parts 1 and 2, as $\Delta_n\to 0$ we obtain the pointwise limits λ-a.e.:

$$\sum_{k=1}^n\left(M_{t_k}-M_{t_{k-1}}\right)^4\le v_2^{(n)}(M)\max_k\left(M_{t_k}-M_{t_{k-1}}\right)^2\to 0,$$

$$\sum_{k=1}^n\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)^2\le\langle M\rangle_t\max_k\left(\langle M\rangle_{t_k}-\langle M\rangle_{t_{k-1}}\right)\to 0.$$

Hence by the bounded convergence theorem of book 5's proposition 2.46, $\|I_7-I_8\|_{L^2}\to 0$.

9. Summary: The above steps prove that for each $t$, $I_1+I_2+I_3$ converges to the right hand side of (5.2) λ-a.e. for a subsequence of partitions with $\Delta_{n_k}\to 0$.
Exercise 5.10 (Itô's lemma for BV processes) As noted in remark 5.6, prove that if $X_t=X_0+F_t$ then (5.2) remains true for $f(t,x)\in C^{1,1}([0,\infty)\times\mathbb{R})$. Hint: In step 3, $\eta_k$ is now used in the first derivative term, and the second order expression is omitted. Now step 4 remains the same, step 5 must be adapted for the new integrand, and the lengthy steps 6–8 are omitted. Then investigate how this special case would be stated in corollaries 5.13 and 5.16.
Example 5.11 (Brownian Motion) If $X_t=B_t$ is a Brownian motion, and thus a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, let $f(x)=x^n$, $n\in\mathbb{N}$. Then since $\langle B\rangle_s=s$ by corollary 2.10, and $F\equiv 0$ (noting that $f(B_0)=0$):

$$f(B_t)=\frac12\int_0^t f_{xx}(B_s)ds+\int_0^t f_x(B_s)dB_s,\quad\lambda\text{-a.e.}$$

Thus for $n=2$:

$$B_t^2=t+2\int_0^t B_s dB_s,$$

which is (2.35), and for $n\ge 3$:

$$B_t^n=\frac{n(n-1)}{2}\int_0^t B_s^{n-2}ds+n\int_0^t B_s^{n-1}dB_s.$$

Note that the first integral can be evaluated as a pathwise Riemann or Lebesgue integral since $B_s^{n-2}$ is continuous, and these agree (proposition 2.31, book 3, applied pathwise). Though perhaps not apparently of bounded variation, this integral is differentiable and of bounded variation by that book's proposition 3.33. The second integral is a continuous local martingale by proposition 3.83 if $B_s^{n-1}\in\mathcal{H}_2^{B,loc}([0,\infty)\times\mathcal{S})$. Predictability follows from book 7's corollary 5.17, and it is an exercise to verify (3.57).
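The $n=2$ identity lends itself to a direct numerical check. The sketch below is our own illustration (the function name and all parameter values are assumptions, not from the text): it simulates one Brownian path and approximates the Itô integral by the left-endpoint sums that define it, confirming that $B_t^2$ and $t+2\int_0^t B_s dB_s$ agree up to discretization error.

```python
import math
import random

def ito_check_b_squared(t=1.0, n_steps=20000, seed=3):
    """One-path check of B_t^2 = t + 2 * integral_0^t B_s dB_s,
    approximating the Ito integral by left-endpoint Riemann sums."""
    rng = random.Random(seed)
    dt = t / n_steps
    b = 0.0          # running Brownian path value
    integral = 0.0   # running left-endpoint sum for the dB integral
    for _ in range(n_steps):
        db = rng.gauss(0.0, math.sqrt(dt))
        integral += b * db  # evaluate integrand at the LEFT endpoint
        b += db
    lhs = b * b
    rhs = t + 2.0 * integral
    return lhs, rhs
```

The left-endpoint evaluation is essential: using right endpoints or midpoints would converge to a different (Stratonovich-type) limit rather than the Itô integral.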
5.3 Itô Process Version
The following corollary to Itô's lemma restates the change of variables approach for the 1-dimensional Itô process defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$:

$$X_t=X_0+\int_0^t u(s,\omega)ds+\int_0^t v(s,\omega)dB_s.\tag{5.3}$$

The specification in (5.3) is often written in differential notation:

$$dX_t=u(t,\omega)dt+v(t,\omega)dB_t,\tag{5.4}$$

with $X_0$ given.

For this statement, we assume that $u(s,\omega)$ and $v(s,\omega)$ are locally bounded predictable processes on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. By proposition 4.21, $X_t$ is then a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ if $X_0$ is $\sigma_0(\mathcal{S})$-measurable. Thus in this case, $X_t-X_0\in\mathcal{M}_{\mathcal{S}}^{loc}$.
Notation 5.12 (Differential form of Itô's lemma) In differential notation, (5.6) is stated:

$$df(t,X_t)=\left[f_t(t,X_t)+u(t,\omega)f_x(t,X_t)+\frac12 v^2(t,\omega)f_{xx}(t,X_t)\right]dt+v(t,\omega)f_x(t,X_t)dB_t,\tag{5.5}$$

or equivalently by (5.4):

$$df(t,X_t)=\left[f_t(t,X_t)+\frac12 v^2(t,\omega)f_{xx}(t,X_t)\right]dt+f_x(t,X_t)dX_t.$$
Corollary 5.13 (Itô's lemma for an Itô process) Let $X_t$ be a stochastic process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ as in (5.3), where $u(s,\omega)$ and $v(s,\omega)$ are locally bounded, predictable processes and $X_0$ is $\sigma_0(\mathcal{S})$-measurable, and let $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$. Then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, and for every $0\le t<\infty$:

$$f(t,X_t)=f(0,X_0)+\int_0^t\left[f_t(s,X_s)+u(s,\omega)f_x(s,X_s)+\frac12 v^2(s,\omega)f_{xx}(s,X_s)\right]ds+\int_0^t v(s,\omega)f_x(s,X_s)dB_s,\tag{5.6}$$

λ-a.e. As noted in remark 5.5, this implies that λ-a.e., the identity in (5.6) is valid for all $t$.
Proof. Since $X_t$ is a continuous semimartingale by proposition 4.21, proposition 5.9 applies. Comparing (5.6) with (5.2), we must justify that with $F_t(\omega)=\int_0^t u(s,\omega)ds$ and $M_t(\omega)=\int_0^t v(s,\omega)dB_s$:

$$\int_0^t f_x(s,X_s)dF_s=\int_0^t u(s,\omega)f_x(s,X_s)ds,$$

$$\int_0^t f_{xx}(s,X_s)d\langle M\rangle_s=\int_0^t v^2(s,\omega)f_{xx}(s,X_s)ds,$$

and

$$\int_0^t f_x(s,X_s)dM_s=\int_0^t v(s,\omega)f_x(s,X_s)dB_s.$$

First, predictable implies progressively measurable by book 7's proposition 5.19, and then by book 5's proposition 5.19, $u(s,\cdot)$ and $v(s,\cdot)$ are Borel measurable in $s$ for all $\omega$. The first identity now follows from book 5's proposition 3.6 by first splitting $u(s,\cdot)$ into positive and negative parts (that book's definition 2.36). The second follows from this proposition (without splitting) and part 4 of proposition 3.89, that $\langle M\rangle_t=\int_0^t v^2(s,\omega)ds$, recalling that $\langle B\rangle_t=t$ by corollary 2.10. Finally, the stochastic integral identity is the associative law of proposition 3.91.
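As an informal sanity check of corollary 5.13 (our own sketch; the function name, constant coefficients and tolerances are assumptions, not from the text), the script below discretizes the Itô process $dX=u\,dt+v\,dB$ with constant $u,v$, takes $f(t,x)=e^x$, and compares $f(X_t)$ with the right side of (5.6) computed from the same Brownian increments.

```python
import math
import random

def ito_process_check(u=0.1, v=0.4, t=1.0, n=20000, seed=7):
    """One-path check of Ito's lemma for dX = u dt + v dB and f(x) = exp(x):
    exp(X_t) should equal
    1 + integral (u + v^2/2) exp(X_s) ds + integral v exp(X_s) dB_s."""
    rng = random.Random(seed)
    dt = t / n
    x = 0.0        # Euler path of X with X_0 = 0
    ds_int = 0.0   # Riemann sum for the ds integral in (5.6)
    db_int = 0.0   # left-endpoint sum for the dB integral in (5.6)
    for _ in range(n):
        db = rng.gauss(0.0, math.sqrt(dt))
        fx = math.exp(x)  # integrands evaluated at the left endpoint
        ds_int += (u + 0.5 * v * v) * fx * dt
        db_int += v * fx * db
        x += u * dt + v * db
    lhs = math.exp(x)          # f(t, X_t)
    rhs = 1.0 + ds_int + db_int  # f(0, X_0) + drift + stochastic integral
    return lhs, rhs
```

Note the $\frac12 v^2 f_{xx}$ correction term in the drift sum: omitting it (the naive chain rule) would leave a discrepancy that does not vanish as the partition is refined.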
5.4 Itô Diffusion Version
Assume that $X_t$ is a continuous adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ that satisfies the integral equation:

$$X_t(\omega)=X_0(\omega)+\int_0^t u(s,X_s(\omega))ds+\int_0^t v(s,X_s(\omega))dB_s(\omega),\tag{5.7}$$

for appropriately defined functions $u(s,x)$ and $v(s,x)$. This integral equation is often written in stochastic differential equation (SDE) notation as:

$$dX_t=u(t,X_t)dt+v(t,X_t)dB_t,\tag{5.8}$$

with $X_0$ given. Such a process is called an Itô diffusion.

Stochastic differential equations are studied in book 9, so for now we simply assume that such $X_t$ exists, that the integrals in (5.7) are well defined, and that for each $t$ this identity is satisfied λ-a.e. Thus by continuity this identity is satisfied for all $t$ outside a single set of λ-measure 0.
Remark 5.14 (On SDEs 1) It will be proved in book 9 that Borel measurability of $u(s,x)$ and $v(s,x)$ and continuity and adaptedness of a process $X_t$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ assure that the processes $u(s,\omega)\equiv u(s,X_s(\omega))$ and $v(s,\omega)\equiv v(s,X_s(\omega))$ are predictable stochastic processes on this space. However, to justify the existence of these integrals using the semimartingale integration theory requires that $u(s,\omega)$ and $v(s,\omega)$ be locally bounded. This boundedness condition is difficult to specify in terms of $u(s,x)$ and $v(s,x)$, and so it is more natural to use a different criterion to justify existence.

Given a continuous, adapted process $X_t$, the integrals in (5.7) will be well defined if $u(s,x)$ and $v(s,x)$ are Borel measurable functions, $u,v:[0,\infty)\times\mathbb{R}\to\mathbb{R}$, satisfying:

$$\Pr\left[\int_0^t|u(s,X_s(\omega))|ds<\infty,\ \text{all }t\right]=\Pr\left[\int_0^t v^2(s,X_s(\omega))ds<\infty,\ \text{all }t\right]=1.\tag{5.9}$$

As noted above, Borel measurability will assure predictability of these integrands and thus progressive measurability by book 7's proposition 5.19. Then the first integral in (5.7), $\int_0^t u(s,X_s(\omega))ds$, is defined pathwise λ-a.e. as a Lebesgue (or Lebesgue-Stieltjes) integral of book 5, and this integral is adapted by corollary 3.74 with $M=B$ since $\langle B\rangle_s=s$ by corollary 2.10. Further, $\int_0^t v(s,X_s(\omega))dB_s(\omega)$ is definable λ-a.e. within the local martingale integration theory, since (5.9) assures (3.57) and thus $v(s,X_s(\omega))\in\mathcal{H}_2^{B,loc}([0,\infty)\times\mathcal{S})$.

The existence and uniqueness of $X_t$ satisfying (5.7) will be seen to require somewhat more of $u(s,x)$ and $v(s,x)$ than Borel measurability, to ensure that (5.9) is satisfied.

Finally, given this Borel measurability requirement on $u(s,x)$ and $v(s,x)$ and the assumption that a continuous, adapted $X_t$ exists that satisfies (5.9) and (5.7), it will also be proved in book 9 that such $X_t$ is a continuous semimartingale. It therefore makes sense to investigate the application of Itô's lemma to $f(t,X_t)$ for appropriate functions.
Notation 5.15 (Differential form of Itô's lemma) In differential notation, (5.11) is stated:

$$df(t,X_t)=\left[f_t(t,X_t)+u(t,X_t)f_x(t,X_t)+\frac12 v^2(t,X_t)f_{xx}(t,X_t)\right]dt+v(t,X_t)f_x(t,X_t)dB_t,\tag{5.10}$$

or equivalently by (5.8):

$$df(t,X_t)=\left[f_t(t,X_t)+\frac12 v^2(t,X_t)f_{xx}(t,X_t)\right]dt+f_x(t,X_t)dX_t.$$
Corollary 5.16 (Itô's lemma for an Itô diffusion) Given Borel measurable functions $u(s,x)$ and $v(s,x)$ on $[0,\infty)\times\mathbb{R}$, let $X_t$ be a continuous, adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ that satisfies (5.7) and (5.9), and let $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$. Then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, and for every $0\le t<\infty$:

$$f(t,X_t)=f(0,X_0)+\int_0^t\left[f_t(s,X_s)+u(s,X_s)f_x(s,X_s)+\frac12 v^2(s,X_s)f_{xx}(s,X_s)\right]ds+\int_0^t v(s,X_s)f_x(s,X_s)dB_s,\tag{5.11}$$

λ-a.e. As noted in remark 5.5, this implies that λ-a.e., the identity in (5.11) is valid for all $t$.
Proof. As noted in remark 5.14, it will be proved in book 9 that $X_t$ is a generalized continuous semimartingale. Let:

$$F_t(\omega)=\int_0^t u(s,X_s(\omega))ds,\qquad M_t(\omega)=\int_0^t v(s,X_s(\omega))dB_s.$$

As also noted in remark 5.14, the integrands for $F_t(\omega)$ and $M_t(\omega)$ are predictable, so by (5.9), $F_t$ is a continuous bounded variation process by proposition 4.14, and $M_t$ is a continuous local martingale by proposition 3.83. Thus $F_t(\omega)+M_t(\omega)$ is a continuous semimartingale, (5.7) can be expressed:

$$X_t(\omega)=X_0(\omega)+F_t(\omega)+M_t(\omega),$$

and proposition 5.9 applied.

Comparing (5.11) with (5.2), we must justify that:

$$\int_0^t f_x(s,X_s)dF_s=\int_0^t u(s,X_s)f_x(s,X_s)ds,$$

$$\int_0^t f_{xx}(s,X_s)d\langle M\rangle_s=\int_0^t v^2(s,X_s)f_{xx}(s,X_s)ds,$$

and

$$\int_0^t f_x(s,X_s)dM_s=\int_0^t v(s,X_s)f_x(s,X_s)dB_s.$$

As noted in remark 5.14, the integrands for $F_t(\omega)$ and $M_t(\omega)$ are predictable and thus progressively measurable by book 7's proposition 5.19. Now progressive measurability means that the restrictions of $u(s,X_s(\omega))$ and $v(s,X_s(\omega))$ to $[0,t]\times\mathcal{S}$ are $\mathcal{B}([0,t])\times\sigma_t(\mathcal{S})$-measurable, and then by book 5's proposition 5.19, $u(s,X_s(\omega))$ and $v(s,X_s(\omega))$ are Borel measurable in $s$ for all $\omega$. Thus the first identity follows from book 5's proposition 3.6, by first splitting $u(s,X_s(\omega))$ into positive and negative parts (that book's definition 2.36). The second follows from this proposition (without splitting) and (3.69) of proposition 3.89, that $\langle M\rangle_t=\int_0^t v^2(s,X_s(\omega))ds$, recalling that $\langle B\rangle_t=t$ and thus $d\langle B\rangle_t=dt$ by corollary 2.10. Finally, the stochastic integral identity is the associative law of proposition 3.91.
Example 5.17 (Geometric Brownian motion) Assume that, given parameters $\mu,\sigma$, there exists a continuous semimartingale $X_t$ so that for each $t$:

$$X_t=X_0+\mu\int_0^t X_s ds+\sigma\int_0^t X_s dB_s,\quad\lambda\text{-a.e.},\tag{5.12}$$

where $B_t$ is a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. Thus by continuity this identity is satisfied for all $t$ outside a single set of λ-measure 0. This is commonly expressed in differential notation:

$$dX_t=\mu X_t dt+\sigma X_t dB_t.$$

This model is known as geometric Brownian motion, and perhaps its most well-known application is as the price of a traded asset underlying the Black-Scholes-Merton (BSM) option pricing formulas discussed in section 9.3.1 of book 6. That such $X_t$ is continuous λ-a.e. implies that the first integral can be interpreted as a pathwise Riemann or Lebesgue integral λ-a.e., and that these integrals agree by book 3's proposition 2.31. Also, continuity and adaptedness of $X_t$ implies that this is a predictable integrand by book 7's corollary 5.17, and that (3.57) is satisfied, so $X_s\in\mathcal{H}_2^{B,loc}([0,\infty)\times\mathcal{S})$ and the second integral exists as a stochastic integral with local martingale integrator.

Let $f(x)=\ln x$, which is $C^2(0,\infty)$. If we assume that $X_t>0$ λ-a.e., we can apply Itô's lemma as in (5.11) to $f(X_t)$ to derive:

$$\ln X_t=\ln X_0+\int_0^t\mu\,ds-\frac12\int_0^t\sigma^2 ds+\int_0^t\sigma\,dB_s=\ln X_0+\left(\mu-\frac12\sigma^2\right)t+\sigma B_t.$$
This implies that the process $\ln\frac{X_t}{X_0}$, which represents the continuous total return on the asset over $[0,t]$ in the Black-Scholes-Merton framework, satisfies in differential notation:

$$d\ln\frac{X_t}{X_0}=\left(\mu-\frac12\sigma^2\right)dt+\sigma dB_t.$$

Solving this produces an expression for the process $X_t$, which we have assumed to exist:

$$X_t=X_0\exp\left[\left(\mu-\frac12\sigma^2\right)t+\sigma B_t\right].\tag{5.13}$$

By rearranging we obtain an expression for the continuous total return over $[0,t]$ in the BSM context:

$$\ln\frac{X_t}{X_0}=\left(\mu-\frac12\sigma^2\right)t+\sigma B_t.\tag{5.14}$$

Thus if there exists a continuous, positive semimartingale solution to the stochastic differential equation in (5.12), Itô's lemma yields that (5.13) is this process. This derivation also obtains that for fixed $t$, the random variable $X_t$ (or asset price in BSM) has a lognormal distribution, and the continuous total change $\ln\frac{X_t}{X_0}$ (or return in BSM) has a normal distribution (recall section 3.2.5 of book 4).

Note however that this derivation has not confirmed that $X_t$ in (5.13) is a semimartingale, though it is apparent by inspection that it is both continuous λ-a.e. and strictly positive if $X_0>0$. Further, we have not confirmed that this process solves (5.12). In addition, even if all is confirmed, there remains the possibility that there are other solutions of (5.12) which need not be semimartingales, and/or do not satisfy either the continuity or positivity assumption, and thus for which the above application of Itô's lemma would not be justified.

We now address two technical details related to this example. That this solution is unique will be deferred, and will follow from the development in book 9. The punchline, however, is that (5.12) has a unique (strong) solution because the coefficient functions, $u(t,x)=\mu x$ and $v(t,x)=\sigma x$, are linear in $x$.
1. $X_t$ in (5.13) is a Semimartingale: It was noted in the introduction to section 6.2 of book 7 that the Doob-Meyer decomposition theorems 1 and 2 (propositions 6.5 and 6.12) were special cases of a more general Doob-Meyer decomposition theorem. This general theorem was named for J. L. Doob (1910–2004), who in 1953 proved the result for discrete time processes (the Doob decomposition theorem), and Paul-André Meyer (1934–2003), who generalized the result to continuous processes in 1962-3. The more general statement applies to local submartingales (or local supermartingales) defined on the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. Such processes are defined just as are local martingales except that given the localizing sequence $\{T_n\}$, the stopped process $1_{T_n>0}X_t^{T_n}$ is a submartingale (respectively, supermartingale) with respect to $\sigma_t(\mathcal{S})$ for all $n$. In the general case this means for $t\ge s$ that $E\left[1_{T_n>0}X_t^{T_n}\,|\,\sigma_s\right]\ge 1_{T_n>0}X_s^{T_n}$ for a submartingale (respectively, $E\left[1_{T_n>0}X_t^{T_n}\,|\,\sigma_s\right]\le 1_{T_n>0}X_s^{T_n}$ for a supermartingale), in contrast to the local martingale condition that $E\left[1_{T_n>0}X_t^{T_n}\,|\,\sigma_s\right]=1_{T_n>0}X_s^{T_n}$. When $X_0=0$, lemma 3.61 generalizes (recall exercise 3.62) to prove that these requirements are equivalent to the same statements with $X_t^{T_n}$ in place of $1_{T_n>0}X_t^{T_n}$.

Recall that a stochastic process is càdlàg (definition 5.15, book 7) if it is continuous from the right, and with left limits. The general Doob-Meyer decomposition theorem states that a càdlàg local submartingale $X_t$ has a unique decomposition, $X_t=M_t+F_t$, where $F_t$ is a unique, almost surely increasing, predictable process with $F_0=0$, and $M_t$ is a local martingale. The same result applies to a càdlàg local supermartingale, but with $F_t$ almost surely decreasing. In either case, when $X_t$ is continuous, so too are the component processes.

Corollary: Every continuous local submartingale (local supermartingale) is a continuous semimartingale.
We now show that $X_t$ defined in (5.13) is a continuous submartingale for $X_0>0$ when $\mu\ge 0$ (when $\mu\le 0$, the identical derivation obtains a continuous supermartingale, and the corollary above applies equally). The integrability of $X_t$ is left as an exercise (Hint: (3.66), book 4), as is continuity of $E[X_t]$ for the next step. It then follows that $X_t^T$ is a continuous submartingale for any stopping time $T$ by the proof of Doob's optional stopping theorem of book 7's proposition 5.84, again using that book's proposition 5.80. Then as in that book's corollary 5.85, defining a localizing sequence $\{T_n\}$ by $T_n=\inf\{t\,|\,X_t\ge n\}$, which are stopping times by that book's proposition 5.60, obtains that $X_t$ is a continuous local submartingale. The above corollary to the general Doob-Meyer decomposition theorem is then applicable to prove that $X_t=M_t+F_t$ is a continuous semimartingale.

To prove that $X_t$ is a continuous submartingale, we prove the submartingale condition that for $t\ge s$, $E[X_t\,|\,\sigma_s]\ge X_s$. Since $X_t$ is a submartingale if and only if $Y_t\equiv X_t/X_0-1$ is a submartingale, this suffices by lemma 3.61 since $Y_0=0$. Based on the finite dimensional distributions of Brownian motion, it follows that $B_t=B_s+\Delta B$ where $\Delta B=\sqrt{t-s}\,Z$ with $Z$ a standard normal variate that is independent of $\sigma_s$. By the measurability and independence properties of conditional expectations from book 7's proposition 5.26:

$$\begin{aligned}E[X_t\,|\,\sigma_s]&=E\left[X_0\exp\left[\left(\mu-\tfrac12\sigma^2\right)t+\sigma B_t\right]\Big|\,\sigma_s\right]\\&=X_0\exp\left[\left(\mu-\tfrac12\sigma^2\right)t+\sigma B_s\right]E\left[\exp\left(\sigma\sqrt{t-s}\,Z\right)\right]\\&=X_s\exp\left[\left(\mu-\tfrac12\sigma^2\right)(t-s)\right]E\left[\exp\left(\sigma\sqrt{t-s}\,Z\right)\right].\end{aligned}$$

Since $t>s$, $E\left[\exp\left(\sigma\sqrt{t-s}\,Z\right)\right]=\exp\left[\sigma^2(t-s)/2\right]$ by (3.66) of book 4, so $E[X_t\,|\,\sigma_s]=X_s\exp[\mu(t-s)]\ge X_s$ when $\mu\ge 0$, and the result is proved.
2. $X_t$ Solves (5.12): The derivation above was predicated on the assumption that there existed a continuous semimartingale $X_t$ which satisfied the equation in (5.12), and that $X_t>0$ to justify the application of Itô's lemma to $\ln X_t$. The candidate for this process derived in (5.13) has now been proved to be a semimartingale, and it is apparent that $X_t$ is continuous and $X_t>0$ if $X_0>0$. But there remains the question: Does this process indeed satisfy (5.12)?

To this end, define $f(t,x)=X_0\exp\left[\left(\mu-\frac12\sigma^2\right)t+\sigma x\right]$, which is clearly $C^2(\mathbb{R}^2)$ with:

$$f_t=\left(\mu-\frac12\sigma^2\right)f,\qquad f_x=\sigma f,\qquad f_{xx}=\sigma^2 f.$$

Applying (5.2) to $f(t,B_t)$, with $F\equiv 0$, $M_s=B_s$ and $d\langle M\rangle_s=ds$, produces:

$$\begin{aligned}f(t,B_t)&=f(0,B_0)+\left(\mu-\frac12\sigma^2\right)\int_0^t f(s,B_s)ds+\frac12\sigma^2\int_0^t f(s,B_s)ds+\sigma\int_0^t f(s,B_s)dB_s\\&=X_0+\mu\int_0^t f(s,B_s)ds+\sigma\int_0^t f(s,B_s)dB_s.\end{aligned}$$

In other words, Itô's lemma yields that $X_t\equiv f(t,B_t)$ as defined in (5.13) is a continuous process that satisfies (5.12), and indeed yields that this is true without the requirement that $X_0>0$.
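The two confirmations above can be illustrated numerically (a sketch under our own assumptions; the function name and parameter values are arbitrary choices, not from the text): driving an Euler discretization of (5.12) and the closed form (5.13) with the same Brownian increments, the two terminal values agree up to the scheme's discretization error.

```python
import math
import random

def gbm_paths(mu=0.05, sigma=0.2, x0=100.0, t=1.0, n=10000, seed=11):
    """Compare an Euler discretization of dX = mu X dt + sigma X dB with
    the closed form X_t = X_0 exp((mu - sigma^2/2) t + sigma B_t),
    both driven by the same Brownian increments."""
    rng = random.Random(seed)
    dt = t / n
    x_euler = x0  # Euler path of the SDE (5.12)
    b = 0.0       # accumulated Brownian path B_t
    for _ in range(n):
        db = rng.gauss(0.0, math.sqrt(dt))
        x_euler += mu * x_euler * dt + sigma * x_euler * db
        b += db
    x_exact = x0 * math.exp((mu - 0.5 * sigma * sigma) * t + sigma * b)
    return x_euler, x_exact
```

Refining the partition (larger `n`) shrinks the gap between the two values, consistent with (5.13) being the pathwise solution of (5.12).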
With this version of Itô's lemma, it is worthwhile to introduce a key insight of the Feynman-Kac Representation Theorem, discussed in more detail below.
Example 5.18 (A preview of the Feynman-Kac representation theorem) Let $f(t,y)\in C^{1,2}([0,\infty)\times\mathbb{R})$, $h(t)\in C^1(\mathbb{R})$, and define:

$$g(t,x)=\exp[h(t)]f(t,x).$$

Then:

$$g_t(r,y)=\exp[h(r)]\left[h'(r)f(r,y)+f_t(r,y)\right],\qquad g_x(r,y)=\exp[h(r)]f_x(r,y),\qquad g_{xx}(r,y)=\exp[h(r)]f_{xx}(r,y).$$

Let $B_s$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$ and assume that there exists a continuous adapted process $X_s^{t,z}$ on this space that satisfies (5.7) on $s\in[t,T]$ with "initial value" $X_t^{t,z}=z$:

$$X_s^{t,z}(\omega)=z+\int_t^s u(r,X_r^{t,z}(\omega))dr+\int_t^s v(r,X_r^{t,z}(\omega))dB_r(\omega),\qquad t\le s\le T.$$

Applying Itô's lemma to $g(t,x)$ obtains at $s=T$:

$$\begin{aligned}\exp[h(T)]f(T,X_T^{t,z})&=\exp[h(t)]f(t,z)\\&\quad+\int_t^T\left[f_t(r,X_r^{t,z})+u(r,X_r^{t,z})f_x(r,X_r^{t,z})+\frac12 v^2(r,X_r^{t,z})f_{xx}(r,X_r^{t,z})+h'(r)f(r,X_r^{t,z})\right]\exp[h(r)]dr\\&\quad+\int_t^T v(r,X_r^{t,z})f_x(r,X_r^{t,z})\exp[h(r)]dB_r,\quad\lambda\text{-a.e.}\end{aligned}$$

Assume now that $f(t,y)$ satisfies the partial differential equation:

$$f_t(r,y)+u(r,y)f_x(r,y)+\frac12 v^2(r,y)f_{xx}(r,y)+h'(r)f(r,y)\equiv 0\ \text{on }[0,T]\times\mathbb{R}.\tag{1}$$

Then the above identity becomes:

$$\exp[h(T)]f(T,X_T^{t,z})=\exp[h(t)]f(t,z)+\int_t^T v(r,X_r^{t,z})f_x(r,X_r^{t,z})\exp[h(r)]dB_r.\tag{2}$$
If the integrand in the Itô integral is in $\mathcal{H}_2([0,\infty)\times\mathcal{S})$ of definition 2.31, meaning that it is measurable, adapted and satisfies:

$$E\left[\int_t^T \left(v(r,X_r^{t,z}) f_x(r,X_r^{t,z}) \exp[h(r)]\right)^2 dr\right] < \infty,$$

then this integral is a martingale by proposition 2.52. Hence, taking expectations and using the tower property of book 6's proposition 5.26:

$$E\left[\int_t^T v(r,X_r^{t,z}) f_x(r,X_r^{t,z}) \exp[h(r)]\,dB_r\right] = E\left[E\left[\int_t^T v(r,X_r^{t,z}) f_x(r,X_r^{t,z}) \exp[h(r)]\,dB_r \,\Big|\, \sigma_t(\mathcal{S})\right]\right] = E\left[\int_t^t v(r,X_r^{t,z}) f_x(r,X_r^{t,z}) \exp[h(r)]\,dB_r\right] = 0.$$
Thus from (2):

$$E\left[f(T,X_T^{t,z})\right]\exp[h(T)] = f(t,z)\exp[h(t)],$$

and so the function that solves the PDE in (1) has the representation:

$$f(t,z) = \exp[h(T) - h(t)]\, E\left[f(T,X_T^{t,z})\right]. \tag{3}$$

Conclusion: If $f(t,x)$ satisfies the partial differential equation in (1), and the $L_2$-integrability condition is satisfied that makes the Itô integral a martingale, then $f(t,z)$ can be "represented" as an expected value, and in particular as the expected value of $\exp[h(T)-h(t)]\, f(T,X_T^{t,z})$ as in (3), where the process $X_s^{t,z}$ is defined on $[t,T]$ by 5.7 with $X_t^{t,z} = z$. This is a key insight of the Feynman-Kac Representation Theorem discussed below and again in book 9. This result represents the solutions of certain partial differential equations in terms of expectations of stochastic processes defined by certain of the coefficient functions in the PDE.
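To make the representation (3) concrete, consider the simplest case $h\equiv 0$, $u = 0$, $v = 1$ (so $X_T^{t,z} = z + (B_T - B_t) \sim N(z, T-t)$) with terminal data $f(T,x) = x^2$; then $f(t,z) = z^2 + (T-t)$ solves $f_t + \frac{1}{2}f_{xx} = 0$. The following Python sketch (an illustrative aside, with hypothetical function names and parameters) verifies (3) by Monte Carlo:

```python
import math
import random

def fk_estimate(t, z, T, n_paths, seed=1):
    """Estimate E[f(T, X_T^{t,z})] for u=0, v=1, h=0 and terminal data f(T,x)=x^2,
    where X_T^{t,z} = z + (B_T - B_t) ~ N(z, T - t)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x_T = z + rng.gauss(0.0, math.sqrt(T - t))
        total += x_T ** 2
    return total / n_paths

# f(t,z) = z^2 + (T - t) solves f_t + (1/2) f_xx = 0 with f(T,x) = x^2,
# so the representation (3) predicts this value for the expectation.
est = fk_estimate(t=0.25, z=1.5, T=1.0, n_paths=200_000)
exact = 1.5 ** 2 + (1.0 - 0.25)
```

Here the sample mean of $f(T,X_T^{t,z})$ recovers the PDE solution $f(t,z)$, as (3) asserts.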

Example 5.19 (Option pricing) The above result is fundamental to option pricing theory, as will be seen in book 9. But as a hint, let's apply the above derivation assuming that $X_s^{t,z}$ satisfies 5.12, and let $g(s,y) = \exp[-Rs]\, f(s,y)$, with $R$ a fixed interest rate and $f(s,y)$ the price of a traded European option on an asset with price $X_s^{t,z}$ at time $s$, where $X_t^{t,z} = z$ is today's price. If $f(s,y)$ satisfies the PDE in (1):

$$f_t(r,y) + \mu y f_x(r,y) + \frac{1}{2}\sigma^2 y^2 f_{xx}(r,y) = R f(r,y) \text{ on } [0,T]\times\mathbb{R},$$

then the solution in (3) becomes:

$$f(t,z) = \exp[-R(T-t)]\, E\left[f(T,X_T^{t,z})\right].$$

This is wonderful, since if $T$ is the expiry date of the option we know the value of $f(T,X_T^{t,z})$ exactly for any asset price $X_T^{t,z}$. Further, the distribution of such $X_T^{t,z}$ is lognormal by a small modification to 5.13.

Now this conclusion also required a martingale condition:

$$E\left[\int_t^T \sigma X_r^{t,z} f_x(r,X_r^{t,z}) \exp[-Rr]\,dB_r\right] = 0,$$

and this seems impossible to satisfy! In general $X_r^{t,z} > 0$ for asset prices, while the option "deltas" satisfy $f_x(r,X_r^{t,z}) > 0$ for calls and $f_x(r,X_r^{t,z}) < 0$ for puts. How can this expectation be 0?

In fact this expectation is not 0, and the final solution to this problem in book 9 will involve a change in measure on the space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ to make this work. Of course this change in measure must also preserve the Brownian motion, so there is still work to be done.
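As a preview of where book 9 lands, the following Python sketch (an illustrative aside; the function names and parameters are not from the text) prices a European call by the discounted expectation in (3), but with the drift $\mu$ replaced by $R$, an informal stand-in for the change of measure just described, and compares to the Black-Scholes closed form:

```python
import math
import random

def norm_cdf(x):
    """Standard normal distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(z, K, R, sigma, tau):
    """Black-Scholes call value: closed form for exp(-R*tau)*E[max(X_T - K, 0)]."""
    d1 = (math.log(z / K) + (R + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return z * norm_cdf(d1) - K * math.exp(-R * tau) * norm_cdf(d2)

def mc_call(z, K, R, sigma, tau, n_paths, seed=2):
    """Discounted expectation with mu replaced by R (previewing the change of
    measure): X_T = z*exp((R - sigma^2/2)*tau + sigma*sqrt(tau)*Z), Z ~ N(0,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x_T = z * math.exp((R - 0.5 * sigma**2) * tau
                           + sigma * math.sqrt(tau) * rng.gauss(0.0, 1.0))
        total += max(x_T - K, 0.0)
    return math.exp(-R * tau) * total / n_paths

est = mc_call(z=100.0, K=100.0, R=0.03, sigma=0.2, tau=1.0, n_paths=300_000)
exact = bs_call(100.0, 100.0, 0.03, 0.2, 1.0)
```

That the two values agree reflects the representation (3) under the risk-neutral drift; justifying the substitution of $R$ for $\mu$ is exactly the measure-change work deferred to book 9.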

5.5 Multivariate Semimartingale Version

We next investigate a version of Itô's lemma when $X_t = X_0 + F_t + M_t$ is an $m$-dimensional continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, meaning that component-wise:

$$X_t^{(j)} = X_0^{(j)} + F_t^{(j)} + M_t^{(j)}, \qquad 1\le j\le m,$$

where $F_t^{(j)} + M_t^{(j)} \in \mathcal{M}\mathcal{S}_{loc}$ and $X_0^{(j)}$ is $\sigma_0(\mathcal{S})$-measurable. Again the general conclusion will be that $f(t,X_t)$ is a continuous semimartingale. As noted in the 1-dimensional version above, while this result assumes $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$, existence and continuity of the second derivative $f_{x_i x_i}$ is only needed when $M_t^{(i)}\ne 0$, and similarly $f_{x_i x_k}$ is only needed when both $M_t^{(i)}\ne 0$ and $M_t^{(k)}\ne 0$.

Notation 5.20 (Differential form of Itô's lemma) It is common to express the formula in 5.16 in differential notation:

$$df(t,X_t) = f_t(t,X_t)\,dt + \sum_{i=1}^m f_{x_i}(t,X_t)\,dF_t^{(i)} + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t,X_t)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_t + \sum_{i=1}^m f_{x_i}(t,X_t)\,dM_t^{(i)}. \tag{5.15}$$

More economically, this is sometimes written:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t,X_t)\,d\left\langle X^{(i)},X^{(j)}\right\rangle_t + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)},$$

since $dX_t^{(i)} = dF_t^{(i)} + dM_t^{(i)}$ by proposition 4.21, and $\langle X^{(i)},X^{(j)}\rangle_t = \langle M^{(i)},M^{(j)}\rangle_t$ by book 7's proposition 6.34. Analogously, the integral version in 5.2 is sometimes expressed with this same notational convention.

Remark 5.21 (Integration by parts) Note that with $m=2$ and for the function $f(t,x) = x_1 x_2 \in C^{1,2}([0,\infty)\times\mathbb{R}^2)$, 5.16 reduces to the stochastic integration by parts formula in 4.12.

Proposition 5.22 (Itô's lemma for an m-dimensional semimartingale) Let $X_t = X_0 + F_t + M_t$ be an $m$-dimensional continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ and $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$. Then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and for every $0\le t<\infty$:

$$f(t,X_t) = f(0,X_0) + \int_0^t f_t(s,X_s)\,ds + \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \int_0^t f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s + \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)}, \tag{5.16}$$

$\mu$-a.e. As noted in remark 5.5, this implies that $\mu$-a.e., the identity in 5.16 is valid for all $t$.

Proof. The proof is similar to but notationally messier than that of 5.2, now using the following Taylor approximation in step 3:

$$f(t_{k-1},X_{t_k}) - f\left(t_{k-1},X_{t_{k-1}}\right) = \sum_{i=1}^m f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right) + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t_{k-1},\xi_k)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right)\left(X_{t_k}^{(j)} - X_{t_{k-1}}^{(j)}\right),$$

where $\xi_k = X_{t_{k-1}} + \theta_k\left(X_{t_k} - X_{t_{k-1}}\right)$ and $\theta_k \equiv \theta_k(\omega)\in(0,1)$. We discuss the details within the earlier framework.
1. Localization: To facilitate convergence of the various summations, we again introduce stopping times to ensure all integrands are bounded. For $N>0$ define $T_N = 0$ if any $|X_0^{(i)}|\ge N$. If all $|X_0^{(i)}|<N$:

$$T_N \equiv \inf\left\{t \,\Big|\, |M_t^{(i)}|\ge N,\ |F_t^{(i)}|\ge N,\ \text{or } \left|\left\langle M^{(i)},M^{(j)}\right\rangle_t\right|\ge N\right\}.$$

This infimum reflects all $i$ and $j$, and $T_N$ is defined to be $\infty$ if $|M_t^{(i)}|<N$, $|F_t^{(i)}|<N$, and $|\langle M^{(i)},M^{(j)}\rangle_t|<N$ for all $i$ and $t$. Then $T_N$ is a stopping time as in proposition 5.9, and $\mu$-a.e., $T_N\to\infty$ as $N\to\infty$.

Our goal is again to prove the stated result for $X_t^{(N)} \equiv X_{t\wedge T_N}$. Using the same justifications as in the proof of proposition 5.9, we seek to prove that:

$$f(t,X_t^{(N)}) = f(0,X_0) + \int_0^{t\wedge T_N} f_t(s,X_s)\,ds + \sum_{i=1}^m \int_0^{t\wedge T_N} f_{x_i}(s,X_s)\,dF_s^{(i)} + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \int_0^{t\wedge T_N} f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s + \sum_{i=1}^m \int_0^{t\wedge T_N} f_{x_i}(s,X_s)\,dM_s^{(i)}, \quad \mu\text{-a.e.}$$

Given $t$, this expression is then valid $\mu$-a.e. for all integer $N$, and by letting integer $N\to\infty$, 5.16 follows since $T_N\to\infty$ $\mu$-a.e.

2. Boundedness: Note that for given $t$, $|X_t^{(i),(N)}| \le 3N$ for all $i$, and since $f$, $f_t$, $f_{x_i}$, and $f_{x_i x_j}$ are continuous on the compact set $[0,t]\times\prod_{i=1}^m [-3N,3N]_i$, each is bounded by a given constant $K$.
(N )
3: Taylor Approximation: Given N; denote Xt by Xt for simplicity
and let n denote a partition of [0; t] with 0 = t0 < t1 < tn = t: Since

f (tk ; Xtk ) f tk 1 ; Xtk 1


= f (tk ; Xtk ) f (tk 1 ; Xtk )+f (tk 1 ; Xtk ) f tk 1 ; Xtk 1
;

we can separately express by …rst and second order Taylor series:

f (tk ; Xtk ) f (tk 1 ; Xtk ) = ft ( k ; Xtk ) (tk tk 1) ;

f (tk 1 ; Xtk ) f tk 1 ; Xtk 1


Xm (i) (i)
= fxi tk 1 ; Xtk 1 Xtk Xtk 1
i=1
1 Xm Xm (i) (i) (j) (j)
+ fxi xj (tk 1 ; k ) Xtk Xtk Xtk Xtk :
2 i=1 j=1 1 1
240 CHAPTER 5 ITÔ’S LEMMA

Here tk 1 k (!) tk ; and k = Xtk 1 + k Xtk Xtk 1 with k


k (!) 2 (0; 1) : Again we use the Lagrange form of the remainder term,
named for Joseph-Louis Lagrange (1736 – 1813).
Summing:

f (t; Xt ) f (0; X0 )
Xn
= ft ( k ; Xtk ) (tk tk 1 )
k=1
Xn Xm (i) (i)
+ fxi tk 1 ; Xtk 1 Xtk Xtk 1
k=1 i=1
1 Xn Xm Xm (i) (i) (j) (j)
+ fxi xj (tk 1 ; k ) Xtk Xtk 1 Xtk Xtk
2 k=1 i=1 j=1 1

I1 + I2 + I3 :

4. Summation $I_1$: Since $f_t$ is continuous in both variables, and $X_t$ is bounded and continuous $\mu$-a.e., $I_1$ is an ordinary Riemann summation of a $\mu$-a.e. bounded continuous function, and so as the mesh size $|\pi_n| \equiv \max_{1\le i\le n}\{t_i - t_{i-1}\}\to 0$:

$$I_1 \to \int_0^t f_t(s,X_s)\,ds, \quad \mu\text{-a.e.}$$
5. Summation $I_2$: Because

$$X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)} = \left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right) + \left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right),$$

it follows that:

$$I_2 = \sum_{i=1}^m\sum_{k=1}^n f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right) + \sum_{i=1}^m\sum_{k=1}^n f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right) \equiv \sum_{i=1}^m I_2^{F,(i)} + \sum_{i=1}^m I_2^{M,(i)}.$$

For each $i$, $f_{x_i}$ is continuous in both variables and $X_t$ is bounded and continuous $\mu$-a.e. Thus each $I_2^{F,(i)}$ is $\mu$-a.e. equal to a Riemann summation associated with the Riemann-Stieltjes integral of a bounded continuous function with respect to a function of bounded variation. Thus by proposition 4.19 of book 3 and 4.6, as $|\pi_n|\to 0$:

$$I_2^{F,(i)} \to \int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)}, \quad \mu\text{-a.e.}$$

For each $i$, $M_t^{(i)}$ is an $L_2$-bounded martingale, recalling the notational convention in 3 and the bound as in 2, and $f_{x_i}(s,X_s)$ is continuous $\mu$-a.e. and bounded. Thus $f_{x_i}(s,X_s)\in\mathcal{H}_2^M([0,\infty)\times\mathcal{S})$ by book 7's proposition 6.18 if it is shown that $f_{x_i}(s,X_s)$ is predictable, and this will follow by continuity and that book's corollary 5.17 if $f_{x_i}(s,X_s)$ is adapted. Fixing $s$, the function $f_{x_i}(s,y):\mathbb{R}^m\to\mathbb{R}$ is continuous in $y$ and hence Borel measurable in $y$ by book 5's proposition 1.4, so $f_{x_i}^{-1}(s,\cdot)(A)\in\mathcal{B}(\mathbb{R}^m)$ for all $A\in\mathcal{B}(\mathbb{R})$. Since $X$ is adapted: $[f_{x_i}(s,X_s)]^{-1}(A) = X_s^{-1}\left(f_{x_i}^{-1}(s,\cdot)(A)\right)\in\sigma_s(\mathcal{S})$, and thus $f_{x_i}(s,X_s)$ is adapted.

Now from proposition 3.55 it follows that as $|\pi_n|\to 0$:

$$I_2^{M,(i)} \to_P \int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)}.$$

This convergence in probability implies $\mu$-a.e. convergence for a subsequence of partitions with $|\pi_{n_l}|\to 0$ by book 2's proposition 5.25.

Combining, it follows that for this subsequence of partitions:

$$I_2 \to \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} + \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)}, \quad \mu\text{-a.e.}$$

6. Summation $I_3$: With apparent notation and ignoring the $\frac{1}{2}$:

$$I_3 = \sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n f_{x_i x_j}(t_{k-1},\xi_k)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right)\left(X_{t_k}^{(j)} - X_{t_{k-1}}^{(j)}\right) \equiv \sum_{i=1}^m\sum_{j=1}^m I_3^{(i,j)}.$$

In the same way that the derivation in 5 identically followed the analogous derivation in 5 of proposition 5.9, we leave it as an exercise that following identically steps 6, 7, and 8 of that proof will obtain that for all $i$:

$$I_3^{(i,i)} \to \int_0^t f_{x_i x_i}(s,X_s)\,d\left\langle M^{(i)}\right\rangle_s = \int_0^t f_{x_i x_i}(s,X_s)\,d\left\langle M^{(i)},M^{(i)}\right\rangle_s,$$

where this last step is remark 6.26 of book 7. Thus we focus on $I_3^{(i,j)}$ for $i\ne j$.

Splitting $X_{t_k}^{(j)} = X_0^{(j)} + F_{t_k}^{(j)} + M_{t_k}^{(j)}$, etc., obtains:

$$\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right)\left(X_{t_k}^{(j)} - X_{t_{k-1}}^{(j)}\right) = \left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right)\left(F_{t_k}^{(j)} - F_{t_{k-1}}^{(j)}\right) + \left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right)\left(M_{t_k}^{(j)} - M_{t_{k-1}}^{(j)}\right) + \left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right)\left(F_{t_k}^{(j)} - F_{t_{k-1}}^{(j)}\right) + \left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right)\left(M_{t_k}^{(j)} - M_{t_{k-1}}^{(j)}\right).$$

Substituting into $I_3^{(i,j)}$:

$$I_3^{(i,j)} = I_3^{F^{(i)},F^{(j)}} + I_3^{F^{(i)},M^{(j)}} + I_3^{M^{(i)},F^{(j)}} + I_3^{M^{(i)},M^{(j)}}.$$

The proof will be completed by showing that the first three summations converge to 0 $\mu$-a.e., while the fourth provides the desired term in 5.16:

$$I_3^{M^{(i)},M^{(j)}} \to \int_0^t f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s, \quad \mu\text{-a.e.} \tag{*}$$

To this end, …rst note that since fxi xj (tk 1; k ) K by 2 :


(i) ;F (j) (i) (i)
Xn (j) (j)
I3F K sup Ftk Ftk 1
Ftk Ftk 1
! 0; -a.e.,
k k=1

since the supremum converges to 0 -a.e. by continuity, while the sum con-
(j)
verges to the variation of Fs by de…nition. The next two summations are
addressed identically. For example by the Cauchy-Schwarz inequality:
(i) ;M (j) 2 Xn (i) (i) 2 Xn (j) (j) 2
I3F fx2i xj (tk 1; k ) Ftk Ftk 1
Mtk Mtk 1
k=1 k=1
(i) (i)
Xn (i) (i)
Xn (j) (j) 2
K 2 sup Ftk Ftk 1
Ftk Ftk 1
Mtk Mtk 1
k k=1 k=1
! 0; -a.e.

This follows because the supremum converges to 0 as above, and the …rst
(i)
summation converges to the variation of Fs by de…nition. The second sum-
mation converges in probability to M (j) t by book 7’s proposition 6.5, and
thus this implies -a.e. convergence for a subsequence of partitions with
nl ! 0 by book 2’s proposition 5.25.
To prove ( ); we begin by recalling de…nition 6.25 of book 7, that given
the partition n in 3 :
(i) (i) (j) (j)
Mt k Mtk 1
Mtk Mtk 1
= Qtkn M (i) ; M (j) Qtkn 1 M (i) ; M (j) :

Thus by that book’s exercise 6.27:


(i) (j) Xn h i
I3M ;M = fxi xj (tk 1 ; k ) Qtkn M (i) ; M (j) Qtkn 1 M (i) ; M (j)
k=1
1 Xn h i
= fxi xj (tk 1 ; k ) Qtkn M (i) + M (j) Qtkn 1 M + M (j)
4 k=1
1 Xn h i
fxi xj (tk 1 ; k ) Qtkn M (i) M (j) Qtkn 1 M (i) M (j) :
4 k=1
Our goal is to prove that:

$$\sum_{k=1}^n f_{x_i x_j}(t_{k-1},\xi_k)\left[Q_{t_k}^{\pi_n}\left[M^{(i)}\pm M^{(j)}\right] - Q_{t_{k-1}}^{\pi_n}\left[M^{(i)}\pm M^{(j)}\right]\right] \to \int_0^t f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)}\pm M^{(j)}\right\rangle_s, \quad \mu\text{-a.e.} \tag{**}$$

With the above identity for $I_3^{M^{(i)},M^{(j)}}$, book 7's 6.20 and proposition 4.14, this will prove (*).

To simplify notation we prove (**) with a "+", as the derivations are identical. For each $n$ and partition $\pi_n$ of $[0,t]$, let $Q_s^{(n)} \equiv Q_s^{\pi_n}\left[M^{(i)}+M^{(j)}\right]$ be defined as in definition 6.2 of book 7, and thus $Q_s^{(n)}$ is increasing and continuous $\mu$-a.e.; and define the step function:

$$f_n(s,\xi_k) \equiv f_{x_i x_j}(t_{k-1},\xi_k), \qquad s\in(t_{k-1},t_k],\ 1\le k\le n.$$

Then

$$\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)} = \sum_{k=1}^n f_{x_i x_j}(t_{k-1},\xi_k)\left[Q_{t_k}^{\pi_n}\left[M^{(i)}+M^{(j)}\right] - Q_{t_{k-1}}^{\pi_n}\left[M^{(i)}+M^{(j)}\right]\right],$$

which equals the expression on the left of (**).

Since $f_{x_i x_j}$ is continuous on the compact set of 2 it is uniformly continuous, and thus given $\epsilon>0$ there exists $\delta$ so that if the mesh size of $\pi_n$ satisfies $|\pi_n|<\delta$, then $\mu$-a.e.:

$$\sup_{s\in[0,t]}\left|f_n(s,\xi_k) - f_{x_i x_j}(s,\xi_k)\right| < \epsilon.$$

Hence:

$$\left|\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)} - \int_0^t f_{x_i x_j}(s,\xi_k)\,dQ_s^{(n)}\right| \le \int_0^t \left|f_n(s,\xi_k) - f_{x_i x_j}(s,\xi_k)\right| dQ_s^{(n)} < \epsilon\int_0^t dQ_s^{(n)} = \epsilon\, Q_t^{(n)}.$$

Now by book 7's proposition 6.5:

$$Q_t^{(n)} \to_P \left\langle M^{(i)}+M^{(j)}\right\rangle_t,$$

and thus this convergence is $\mu$-a.e. for a subsequence of partitions by book 2's proposition 5.25. Since $\epsilon>0$ is arbitrary, this proves that $\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)}$ and $\int_0^t f_{x_i x_j}(s,\xi_k)\,dQ_s^{(n)}$ have the same limit as $n\to\infty$, and thus (**) can be proved by showing that:

$$\int_0^t f_{x_i x_j}(s,\xi_k)\,dQ_s^{(n)} \to \int_0^t f_{x_i x_j}(s,\xi_k)\,d\left\langle M^{(i)}+M^{(j)}\right\rangle_s, \quad \mu\text{-a.e.}$$

To this end, since $M^{(i)}+M^{(j)}$ is also a local martingale by book 7's corollary 5.85, it follows by that book's proposition 6.12 that:

$$\sup_{s\in[0,t]}\left|Q_s^{(n)} - \left\langle M^{(i)}+M^{(j)}\right\rangle_s\right| \to_P 0,$$

and again this convergence is $\mu$-a.e. for a subsequence of partitions. It is an exercise to check that this proves that:

$$\sup_{s\in[0,t]}\left|\frac{Q_s^{(n)}}{Q_t^{(n)}} - \frac{\left\langle M^{(i)}+M^{(j)}\right\rangle_s}{\left\langle M^{(i)}+M^{(j)}\right\rangle_t}\right| \to_P 0.$$

Each of these quotients is then $\mu$-a.e. a distribution function on $[0,t]$, respectively $F_n(s)$ and $G(s)$, and this result implies that $F_n(s)\Rightarrow G(s)$, meaning weak convergence of the associated Borel measures (definition 8.2, book 2). Thus by the portmanteau theorem of book 6's proposition 4.4, it follows that $\mu$-a.e.:

$$\int_0^t f_{x_i x_j}(s,\xi_k)\,dF_n(s) \to \int_0^t f_{x_i x_j}(s,\xi_k)\,dG(s).$$

Substituting Lebesgue-Stieltjes notation:

$$\frac{1}{Q_t^{(n)}}\int_0^t f_{x_i x_j}(s,\xi_k)\,dQ_s^{(n)} \to \frac{1}{\left\langle M^{(i)}+M^{(j)}\right\rangle_t}\int_0^t f_{x_i x_j}(s,\xi_k)\,d\left\langle M^{(i)}+M^{(j)}\right\rangle_s,$$

and since $Q_t^{(n)} \to \left\langle M^{(i)}+M^{(j)}\right\rangle_t$ $\mu$-a.e., the proof is complete.

7. Summary: The above steps prove that for each $t$, $I_1+I_2+I_3$ converges to the right-hand side of 5.16 $\mu$-a.e. for a subsequence of partitions with $|\pi_{n_k}|\to 0$.

Exercise 5.23 Verify that 5.16 reduces to 4.13 when $m=2$ and $f(t,X_t) = X_t^{(1)} X_t^{(2)}$.

5.6 Multivariate Itô Process Version

The following corollary to Itô's lemma restates the change of variables approach to the $m$-dimensional Itô process defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$:

$$X_t = X_0 + \int_0^t u(s,\omega)\,ds + \int_0^t v(s,\omega)\,dB_s. \tag{5.17}$$

Here we use the notation from the section Stochastic Integration of Vector and Matrix Processes, and in particular that of 4.18, which separates the bounded variation and local martingale integrals. Specifically:

1. $B_t(\omega) \equiv \left(B_t^{(1)}(\omega),\ldots,B_t^{(n)}(\omega)\right)$ is an $n$-dimensional Brownian motion defined on a probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$;

2. $X_0$ is a $\sigma_0(\mathcal{S})$-measurable random $m$-vector;

3. $u$ is an $(m\times 1)$ process and $v$ an $(m\times n)$ process in $\mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ of definition 4.35.

Then:

$$X_t = X_0 + \int_0^t u\,ds + \int_0^t v\,dB,$$

where the components are given as:

$$X_t \equiv \begin{pmatrix} X_0^{(1)} + \int_0^t u_1(s,\omega)\,ds + \sum_{j=1}^n \int_0^t v_{1j}(s,\omega)\,dB_s^{(j)}(\omega) \\ X_0^{(2)} + \int_0^t u_2(s,\omega)\,ds + \sum_{j=1}^n \int_0^t v_{2j}(s,\omega)\,dB_s^{(j)}(\omega) \\ \vdots \\ X_0^{(m)} + \int_0^t u_m(s,\omega)\,ds + \sum_{j=1}^n \int_0^t v_{mj}(s,\omega)\,dB_s^{(j)}(\omega) \end{pmatrix}. \tag{5.18}$$

The specification in 5.18 is often written in differential notation:

$$dX_t^{(i)} = u_i(t,\omega)\,dt + \sum_{j=1}^n v_{ij}(t,\omega)\,dB_t^{(j)}, \tag{5.19}$$

with $X_0$ given.

For this statement, we assume that $u_i(s,\omega)$ and $v_{ij}(s,\omega)$ are locally bounded predictable processes on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. By proposition 4.21, $X_t$ is then an $m$-dimensional continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ if $X_0$ is $\sigma_0(\mathcal{S})$-measurable, and in this case, $X_t - X_0 \in \mathcal{M}\mathcal{S}_{loc}$.

Notation 5.24 (Differential form of Itô's lemma) In differential notation, 5.21 is stated:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t,X_t)\left[\sum_{k=1}^n v_{ik}(t,\omega)v_{jk}(t,\omega)\right]dt + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)}, \tag{5.20}$$

where $dX_t^{(i)}$ is given in 5.19.

Corollary 5.25 (Itô's lemma for an m-dimensional Itô process) Let $X_t$ be an $m$-dimensional stochastic process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ as in 5.18, where the $(m\times 1)$ process $u(s,\omega)$ and the $(m\times n)$ process $v(s,\omega)$ are in $\mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$, and $X_0$ is $\sigma_0(\mathcal{S})$-measurable, and let $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$. Then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and for every $0\le t<\infty$:

$$f(t,X_t) = f(0,X_0) + \int_0^t f_t(s,X_s)\,ds + \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)u_i(s,\omega)\,ds + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \int_0^t f_{x_i x_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,\omega)v_{jk}(s,\omega)\right]ds + \sum_{i=1}^m\sum_{j=1}^n \int_0^t f_{x_i}(s,X_s)v_{ij}(s,\omega)\,dB_s^{(j)}, \tag{5.21}$$

$\mu$-a.e. As noted in remark 5.5, this implies that $\mu$-a.e., the identity in 5.21 is valid for all $t$.

Proof. Since $X_t$ is a continuous semimartingale by proposition 4.21, proposition 5.22 above applies. Comparing 5.21 with 5.16, we must justify that with $F_t^{(i)}(\omega) = \int_0^t u_i(s,\omega)\,ds$ and $M_t^{(i)}(\omega) = \sum_{k=1}^n \int_0^t v_{ik}(s,\omega)\,dB_s^{(k)}$:

$$\int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} = \int_0^t u_i(s,\omega) f_{x_i}(s,X_s)\,ds,$$

$$\int_0^t f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s = \int_0^t f_{x_i x_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,\omega)v_{jk}(s,\omega)\right]ds,$$

and

$$\int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)} = \sum_{j=1}^n \int_0^t f_{x_i}(s,X_s)v_{ij}(s,\omega)\,dB_s^{(j)}.$$

First, predictable implies progressively measurable by book 7's proposition 5.19, and then by book 5's proposition 5.19, for all $i,j$, $u_i(s,\cdot)$ and $v_{ij}(s,\cdot)$ are Borel measurable in $s$ for all $\omega$. The first identity now follows from book 5's proposition 3.6, by splitting $u_i(s,\cdot)$ into positive and negative parts (that book's definition 2.36).

For the second, $M_t^{(i)}(\omega)$ is a sum of local martingales by proposition 3.83 (recall proposition 4.20) and is a local martingale by book 7's exercise 5.78. Thus apply book 7's corollary 6.32 and 3.68, recalling 2.8 that $\langle B^{(k)},B^{(l)}\rangle_t = t$ for $k=l$ and $0$ otherwise. Then:

$$\left\langle M^{(i)},M^{(j)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n \left\langle \int_0^{\cdot} v_{ik}(s,\omega)\,dB_s^{(k)}, \int_0^{\cdot} v_{jl}(s,\omega)\,dB_s^{(l)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n \int_0^t v_{ik}(s,\omega)v_{jl}(s,\omega)\,d\left\langle B^{(k)},B^{(l)}\right\rangle_s = \sum_{k=1}^n \int_0^t v_{ik}(s,\omega)v_{jk}(s,\omega)\,ds.$$

The second identity now follows from book 7's proposition 5.19 and splitting the integrands $v_{ik}(s,\omega)v_{jk}(s,\omega)$ into positive and negative parts as above.

Finally, the stochastic integral identity is the associative law in 3.72.
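The covariation computation in this proof can be illustrated numerically. For constant coefficients and independent $B^{(1)}, B^{(2)}$, the following Python sketch (an illustrative aside; function name and parameter values are not from the text) accumulates products of increments of $M_1 = aB^{(1)} + bB^{(2)}$ and $M_2 = cB^{(1)} + dB^{(2)}$, which should approximate $\langle M_1,M_2\rangle_t = (ac+bd)\,t$ per the double-sum formula above:

```python
import math
import random

def covariation_check(a, b, c, d, t, n_steps, seed=6):
    """With constant coefficients, M1 = a*B1 + b*B2 and M2 = c*B1 + d*B2 for
    independent B1, B2; the accumulated increments dM1*dM2 approximate
    <M1,M2>_t = (a*c + b*d)*t since <B1,B2> = 0 and <Bi,Bi>_t = t."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_steps):
        db1 = rng.gauss(0.0, math.sqrt(dt))
        db2 = rng.gauss(0.0, math.sqrt(dt))
        total += (a * db1 + b * db2) * (c * db1 + d * db2)
    return total

qv = covariation_check(a=1.0, b=0.5, c=-0.3, d=0.8, t=1.0, n_steps=50_000)
# Theory: <M1,M2>_1 = a*c + b*d = -0.3 + 0.4 = 0.1
```

The cross products $db_1\,db_2$ average to 0, leaving only the $k=l$ terms, mirroring how $\langle B^{(k)},B^{(l)}\rangle_s = 0$ for $k\ne l$ collapses the double sum.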

5.7 Multivariate Itô Diffusion Version

Recalling the notation of multivariate stochastic integrals introduced in section 4.7, assume that $X_t$ is an $m$-dimensional continuous adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ that satisfies the multivariate stochastic differential equation (SDE) of 5.7:

$$X_t(\omega) = X_0(\omega) + \int_0^t u(s,X_s(\omega))\,ds + \int_0^t v(s,X_s(\omega))\,dB_s(\omega).$$

As noted in the 1-dimensional case, such a process is called an $m$-dimensional Itô diffusion. Here $\{u_i(t,x)\}_{i=1}^m$ and $\{v_{ij}(t,x)\}_{i=1,j=1}^{m,n}$ are Borel measurable functions defined on $[0,\infty)\times\mathbb{R}^m$, and $B_t$ an $n$-dimensional Brownian motion defined on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$. In the notation above, $X_t$, $X_0$, and $u(t,x)$ are $m\times 1$ column matrices, $B_t$ is an $n\times 1$ column matrix, and $v(t,x)$ is an $m\times n$ matrix. In terms of component processes, the above SDE states that for $i = 1,\ldots,m$:

$$X_t^{(i)}(\omega) = X_0^{(i)}(\omega) + \int_0^t u_i(s,X_s(\omega))\,ds + \sum_{j=1}^n \int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega). \tag{5.22}$$

These integral equations are sometimes written in stochastic differential equation (SDE) notation, as the matrix equation:

$$dX_t = u(t,X_t)\,dt + v(t,X_t)\,dB_t,$$

and in terms of the components:

$$dX_t^{(i)} = u_i(t,X_t)\,dt + \sum_{j=1}^n v_{ij}(t,X_t)\,dB_t^{(j)}, \tag{5.23}$$

with $X_0$ given.

Remark 5.26 (On SDEs 2) As noted in remark 5.14 for the one dimensional stochastic differential equation, it will be proved in book 9 that Borel measurability of the component functions in $u(s,x)$ and $v(s,x)$, and continuity and adaptedness of a process $X_t$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, assure that the processes $u(s,\omega)\equiv u(s,X_s(\omega))$ and $v(s,\omega)\equiv v(s,X_s(\omega))$ are predictable stochastic processes on this space. However, to justify the existence of these integrals using the semimartingale integration theory also requires that $u(s,\omega)$ and $v(s,\omega)$ be locally bounded. This boundedness condition is difficult to specify in terms of $u(s,x)$ and $v(s,x)$, and so it is more natural to use a different criterion to justify existence.

Given a continuous, adapted process $X_t$, the integrals in 5.22 will be well defined if the components of $u(s,x)$ and $v(s,x)$ are Borel measurable functions:

$$u_i, v_{ij} : [0,\infty)\times\mathbb{R}^m \to \mathbb{R},$$

satisfying:

$$\Pr\left[\int_0^t |u_i(s,X_s)|\,ds < \infty, \text{ all } t\right] = \Pr\left[\int_0^t v_{ij}^2(s,X_s)\,ds < \infty, \text{ all } t\right] = 1. \tag{5.24}$$

As noted above, Borel measurability assures predictability of these integrands and thus progressive measurability by book 7's proposition 5.19. Then the first integral in 5.22, $\int_0^t u_i(s,X_s(\omega))\,ds$, is defined pathwise $\mu$-a.e. as a Lebesgue (or Lebesgue-Stieltjes) integral of book 5, and these integrals are adapted by corollary 3.74 with $M=B$ since $\langle B\rangle_s = s$ by corollary 2.10. Further, $\int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega)$ is definable within the local martingale integration theory since 5.24 assures that $v_{ij}(s,X_s(\omega)) \in \mathcal{H}_{2,loc}^{B^{(j)}}([0,\infty)\times\mathcal{S})$.

The existence and uniqueness of $X_t$ that satisfies 5.7 will be seen to require somewhat more of $u(s,x)$ and $v(s,x)$ than Borel measurability to ensure that 5.24 is satisfied.

Finally, given this Borel measurability requirement on $u(s,x)$ and $v(s,x)$ and the assumption that a continuous, adapted $X_t$ exists that satisfies 5.9 and 5.7, it will also be proved in book 9 that such $X_t$ is a continuous semimartingale. It therefore makes sense to investigate the application of Itô's lemma to $f(t,X_t)$ for appropriate functions $f$.

Notation 5.27 (Differential form of Itô's lemma) The result in 5.27 can also be expressed in differential notation as in 5.10:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t,X_t)\left[\sum_{k=1}^n v_{ik}(t,X_t)v_{jk}(t,X_t)\right]dt + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)}, \tag{5.25}$$

where $dX_t^{(i)}$ is given in 5.23.

With $f(t,X_t)$ deemed a $1\times 1$ matrix, this result can be expressed in matrix notation:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2} H(t,X_t)\cdot\left[v(t,X_t)v^T(t,X_t)\right]dt + \nabla f(t,X_t)\,dX_t \equiv f_t(t,X_t)\,dt + \frac{1}{2} H(t,X_t)\cdot\left[v(t,X_t)v^T(t,X_t)\right]dt + \nabla f(t,X_t)\,u(t,X_t)\,dt + \nabla f(t,X_t)\,v(t,X_t)\,dB_t. \tag{5.26}$$

Here $H(t,X_t)$ is the Hessian matrix of $f(t,x)$ in the spatial variable $x$, named for Ludwig Otto Hesse (1811–1874), and is the $m\times m$ matrix defined by $H_{ij}(t,X_t) = f_{x_i x_j}(t,X_t)$. Also, $v(t,X_t)v^T(t,X_t)$ is the $m\times m$ matrix defined by the matrix product of the $m\times n$ matrix $v(t,X_t)$ and its $n\times m$ matrix transpose, so $v_{ij}^T(s,X_s(\omega)) = v_{ji}(s,X_s(\omega))$. In this notation we are taking the inner product of these matrices as if they were $m^2$-vectors. Finally, $\nabla f(t,X_t)$ denotes the gradient of $f(t,x)$ in the spatial variable $x$, here defined as a $1\times m$ row matrix with $\nabla f(t,X_t)_i \equiv f_{x_i}(t,X_t)$, and this is multiplied by the $m\times 1$ column matrix $dX_t$. The last expression is similarly defined.
The following corollary is a very general restatement of 5.16, but we will be interested in investigating a few special cases in example 5.30 below. For the statement of the corollary below, the component Brownian motions $\{B^{(j)}\}_{j=1}^n$ are assumed to be independent processes (definition 1.36, book 7). By that book's proposition 1.42, this is equivalent to specifying that $B_t = (B^{(1)},\ldots,B^{(n)})$ is an $n$-dimensional Brownian motion on this space. See remark 5.29 below for a generalization.
Corollary 5.28 (Itô's lemma for an m-dimensional Itô diffusion) Given a Borel measurable $m$-vector $u(s,x)$ and $m\times n$ matrix $v(s,x)$ on $[0,\infty)\times\mathbb{R}^m$, let $X_t$ be an $m$-dimensional continuous, adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ that satisfies 5.22 with independent Brownian motions $\{B^{(j)}\}_{j=1}^n$, and also satisfies 5.24. If $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$, then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and for every $0\le t<\infty$:

$$f(t,X_t) = f(0,X_0) + \int_0^t f_t(s,X_s)\,ds + \sum_{i=1}^m \int_0^t f_{x_i}(s,X_s)u_i(s,X_s)\,ds + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \int_0^t f_{x_i x_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,X_s)v_{jk}(s,X_s)\right]ds + \sum_{i=1}^m\sum_{j=1}^n \int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)}, \tag{5.27}$$

$\mu$-a.e. As noted in remark 5.5, this implies that $\mu$-a.e., the identity in 5.27 is valid for all $t$.

Proof. As noted in remark 5.26, such $X_t$ is a generalized continuous semimartingale. Let:

$$F_t^{(i)}(\omega) = \int_0^t u_i(s,X_s(\omega))\,ds, \qquad M_t^{(i)}(\omega) = \sum_{j=1}^n \int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}.$$

As also noted in remark 5.26, the integrands for $F_t^{(i)}(\omega)$ and $M_t^{(i)}(\omega)$ are predictable, so by 5.24, $F_t^{(i)}$ is a continuous bounded variation process by proposition 5.14, and $M_t^{(i)}$ is a continuous local martingale by proposition 3.83 (recall exercise 5.78, book 7). Thus $F_t^{(i)}(\omega) + M_t^{(i)}(\omega)$ is a continuous semimartingale, 5.22 can be expressed:

$$X_t^{(i)}(\omega) = X_0^{(i)}(\omega) + F_t^{(i)}(\omega) + M_t^{(i)}(\omega),$$

and proposition 5.22 applied. Comparing 5.27 with 5.16, we verify equality of the component expressions.

Splitting $u_i(s,X_s(\omega))$ into positive and negative parts (book 5's definition 2.36) obtains by that book's proposition 3.6:

$$\int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} = \int_0^t f_{x_i}(s,X_s)u_i(s,X_s)\,ds.$$

In addition, by the independence of $\{B^{(j)}\}_{j=1}^n$, the general associative law in 3.72 yields:

$$\int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)} = \sum_{j=1}^n \int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)}.$$

Finally, by book 7's corollary 6.32 and then 3.68:

$$\left\langle M^{(i)},M^{(j)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n \left\langle \int_0^{\cdot} v_{ik}(s,X_s(\omega))\,dB_s^{(k)}, \int_0^{\cdot} v_{jl}(s,X_s(\omega))\,dB_s^{(l)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n \int_0^t v_{ik}(s,X_s(\omega))v_{jl}(s,X_s(\omega))\,d\left\langle B^{(k)},B^{(l)}\right\rangle_s = \sum_{k=1}^n \int_0^t v_{ik}(s,X_s(\omega))v_{jk}(s,X_s(\omega))\,ds,$$

since $\langle B^{(k)},B^{(k)}\rangle_s = s$ and otherwise $\langle B^{(k)},B^{(l)}\rangle_s = 0$ by 2.8. Since $v_{ik}(s,X_s(\omega))v_{jk}(s,X_s(\omega))$ is Borel measurable in $s$ for all $\omega$ as above, after splitting the integrand into positive and negative parts, book 5's proposition 3.6 obtains:

$$\int_0^t f_{x_i x_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s = \int_0^t f_{x_i x_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,X_s)v_{jk}(s,X_s)\right]ds.$$

Assembling the components, 5.27 now follows from 5.16.

Remark 5.29 (Dependent Brownian motions) Let $B_t = (B_t^{(1)},\ldots,B_t^{(n)})$ be an $n$-dimensional Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and thus $\{B^{(j)}\}_{j=1}^n$ are independent Brownian motions by book 7's proposition 1.42. Recalling section 2.2.1, an $n\times n$ matrix $S'$ is positive definite if $x^T S' x > 0$ for all $x\in\mathbb{R}^n$ with $x\ne 0$. It was shown there that if $S' = (\rho_{kj})$ is a positive definite matrix, where $\rho_{kk}=1$, $\rho_{kj}=\rho_{jk}$, and $-1\le\rho_{kj}\le 1$, then there exists a collection of 1-dimensional Brownian motions $\{\hat B^{(j)}\}_{j=1}^n$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $\langle\hat B^{(j)},\hat B^{(k)}\rangle_t = \rho_{kj}\, t$. In addition, given such $\{\hat B^{(j)}\}_{j=1}^n$, there exists a lower triangular matrix $L = (l_{jk})$ so that $\hat B_t = LB_t$, or in components:

$$\hat B_t^{(j)} = \sum_{k=1}^j l_{jk} B_t^{(k)}. \tag{*}$$

The above corollary can be generalized in one of two ways in applications if the given model requires such correlated Brownian motions:

1. The proof above and final equation in 5.27 can be modified to the case where $X_t(\omega)$ is defined in terms of the given $\{\hat B^{(j)}\}_{j=1}^n$. This does not change the $\int_0^t f_{x_i}(s,X_s)u_i(s,X_s)\,ds$-term, and $\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)}$ becomes $\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,d\hat B_s^{(j)}$. However, since $\langle\hat B^{(k)},\hat B^{(l)}\rangle_t = \rho_{kl}\, t$, 5.27 is modified to reflect that now:

$$\left\langle M^{(i)},M^{(j)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n \int_0^t v_{ik}(s,X_s(\omega))v_{jl}(s,X_s(\omega))\,\rho_{kl}\,ds.$$

The double sum integrand of this expression then replaces the bracketed expression in the integral of $f_{x_i x_j}(s,X_s)$, with the same justification as above.

2. The equation for $X_t(\omega)$ defined in terms of the given $\{\hat B^{(j)}\}_{j=1}^n$ can be expressed in terms of $\{B^{(j)}\}_{j=1}^n$ by substitution of (*), and 5.27 applied directly. The coefficient functions for these independent Brownian processes, $\{v'_{ij}(t,x)\}_{i=1,j=1}^{m,n}$, are then given in terms of the original collection, $\{v_{ij}(t,x)\}_{i=1,j=1}^{m,n}$, by $v'(t,x) = v(t,x)L$, or in components:

$$v'_{ij}(t,x) = \sum_{k=j}^n v_{ik}(t,x)\,l_{kj}.$$
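The construction in (*) is a Cholesky-type factorization of the correlation matrix. For $n = 2$ the following Python sketch (an illustrative aside; names and parameter values are hypothetical) builds correlated Brownian endpoints from independent ones and checks the sample covariance against $\rho t$:

```python
import math
import random

def correlated_bm_endpoints(rho, t, n_paths, seed=3):
    """Build (Bhat1_t, Bhat2_t) from independent B1, B2 via the lower-triangular
    L = [[1, 0], [rho, sqrt(1 - rho^2)]], so that Cov(Bhat1_t, Bhat2_t) = rho*t."""
    rng = random.Random(seed)
    l21, l22 = rho, math.sqrt(1.0 - rho * rho)
    pairs = []
    for _ in range(n_paths):
        b1 = rng.gauss(0.0, math.sqrt(t))
        b2 = rng.gauss(0.0, math.sqrt(t))
        pairs.append((b1, l21 * b1 + l22 * b2))
    return pairs

rho, t = 0.6, 2.0
pairs = correlated_bm_endpoints(rho, t, n_paths=100_000)
cov = sum(a * b for a, b in pairs) / len(pairs)  # sample E[Bhat1_t * Bhat2_t]
# Theory: <Bhat1, Bhat2>_t = rho * t = 1.2
```

Note that $L$ satisfies $LL^T = S'$, which is what makes $\hat B_t = LB_t$ have the prescribed covariations.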

Example 5.30 (Special cases of corollary 5.28) As noted above, in some applications we may be interested in a few special cases of 5.27. We focus on independent Brownian drivers since these can be adapted as noted in remark 5.29.

1. 1-Dimensional Model: $m = n = 1$.
This is 5.11.

2. 1-Dimensional Process $X_t$ ($m=1$), and an $n$-Dimensional Brownian Model.
In this case, $X_t$ is driven by $n$ independent Brownian motions:

$$dX_t = u(t,X_t)\,dt + \sum_{j=1}^n v_j(t,X_t)\,dB_t^{(j)},$$

and 5.25 reduces to:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2} f_{xx}(t,X_t)\left[\sum_{j=1}^n v_j^2(t,X_t)\right]dt + f_x(t,X_t)\,dX_t.$$

3. General $m$-Dimensional Process $X_t$, 1-Dimensional Brownian Model ($n=1$).
In this case, all $m$ processes are driven by the same Brownian motion,

$$dX_t^{(i)} = u_i(t,X_t)\,dt + v_i(t,X_t)\,dB_t,$$

and 5.25 reduces to:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_i x_j}(t,X_t)v_i(t,X_t)v_j(t,X_t)\,dt + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)}.$$

For example, if $m=2$ and $f(t,X_t) = X_t^{(1)} X_t^{(2)}$, a simple product of two processes, then:

$$d\left(X_t^{(1)} X_t^{(2)}\right) = X_t^{(1)}\,dX_t^{(2)} + X_t^{(2)}\,dX_t^{(1)} + v_1(t,X_t)v_2(t,X_t)\,dt.$$

This is equivalent to 4.12 since $\left\langle X^{(1)},X^{(2)}\right\rangle_t = \int_0^t v_1(s,X_s)v_2(s,X_s)\,ds$ by 5 of proposition 4.25 and corollary 2.10.

4. General $m$-Dimensional Process $X_t$, and an $m$-Dimensional Brownian Model, but $v_{ij}(s,X_s) = 0$ for $i\ne j$.
In this case, each process $X_t^{(i)}$ depends only on $B^{(i)}$:

$$dX_t^{(i)} = u_i(t,X_t)\,dt + v_{ii}(t,X_t)\,dB_t^{(i)},$$

and 5.25 reduces to:

$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m f_{x_i x_i}(t,X_t)v_{ii}^2(t,X_t)\,dt + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)}.$$

For example, if $m=2$ and $f(t,X_t) = X_t^{(1)} X_t^{(2)}$, then:

$$d\left(X_t^{(1)} X_t^{(2)}\right) = X_t^{(1)}\,dX_t^{(2)} + X_t^{(2)}\,dX_t^{(1)}.$$

This is again equivalent to 4.12 since $\left\langle B^{(1)},B^{(2)}\right\rangle_t = 0$ and thus $\left\langle X^{(1)},X^{(2)}\right\rangle_t = 0$ by proposition 4.25.
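The cross-variation term that distinguishes case 3 from case 4 can be seen numerically. The following Python sketch (an illustrative aside; parameter values are hypothetical) Euler-discretizes two processes driven by the same Brownian motion and accumulates the products of increments, which approximate $\int_0^t v_1 v_2\,ds$:

```python
import math
import random

def cross_variation_check(u1, v1, u2, v2, t, n_steps, seed=4):
    """Euler-discretize dX_i = u_i dt + v_i dB with the SAME driving B (constant
    coefficients) and accumulate the cross increments dX1*dX2; per case 3 of
    example 5.30 these converge to the bracket term v1*v2*t."""
    rng = random.Random(seed)
    dt = t / n_steps
    cross = 0.0
    for _ in range(n_steps):
        db = rng.gauss(0.0, math.sqrt(dt))
        dx1 = u1 * dt + v1 * db
        dx2 = u2 * dt + v2 * db
        cross += dx1 * dx2
    return cross

cross = cross_variation_check(u1=0.1, v1=0.2, u2=-0.05, v2=0.3, t=1.0, n_steps=50_000)
# Theory: cross converges to v1*v2*t = 0.06
```

With independent drivers, as in case 4, the analogous sum would converge to 0, since the increments $dB^{(1)}\,dB^{(2)}$ average to 0.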
Chapter 6

Some Applications of Itô's Lemma

Applications of Itô's lemma are pervasive in stochastic analysis, and it is thus impossible to do more than scratch the surface of this topic here. Many other applications of this result will be seen later. In this chapter we first derive Lévy's characterization of $n$-dimensional Brownian motion, a result with its own legacy of important applications. We then derive the Burkholder-Davis-Gundy inequality, which among other things provides another criterion for when a continuous local martingale is a martingale (recall section 5.5.3 of book 7).

The next section investigates the use of Itô's lemma to create local martingales from semimartingales using functions $f(t,x)$ that solve certain partial differential equations. We then introduce the Feynman-Kac representation theorem, which provides another linkage between partial differential equations and stochastic processes and which will be further developed in book 9. Finally we derive Dynkin's formula, and apply it to derive a special case of Kolmogorov's backward equation. This equation reflects yet another connection between partial differential equations and stochastic processes, though it is somewhat disguised in the current version. This result will be generalized and fully revealed in book 9.

6.1 Lévy's Characterization of n-Dimensional BM

It was noted in the introduction to chapter 3 that a 1948 result due to Paul Lévy (1886–1971) states that Brownian motion is the only
256 CHAPTER 6 SOME APPLICATIONS OF ITÔ’S LEMMA

continuous martingale with quadratic variation process hBit = t: In this


section we prove an n-dimensional generalization of this result. This proof
utilizes Itô’s lemma, an approach introduced in a 1967 paper by Hiroshi
Kunita and Shinzo Watanabe. The key insight of Levy’s
characterization is that if Xt is an n-dimensional continuous and adapted
process on (S; (S); s (S); )u:c: with X0 = 0; then we can recognize Xt to
be a Brownian motion simply by observing that the covariation processes
of the components satisfy X (j) ; X (k) t = tIjk ; where I is the identity
matrix and so Ijj = 1 and Ijk = 0 otherwise.

It is worth noting that the assumption of continuity in the following result is crucial to the conclusion. A compensated Poisson process $X_t$, which we do not study in this book, can be parametrized so that $\langle X \rangle_t = t$. This process is a square integrable martingale with at most countably many jump discontinuities. See Billingsley (1995) for a detailed derivation of this process. The Poisson process $N_t$ has a single parameter $\lambda$ and is a submartingale. By a compensated Poisson process is meant that it is converted into a martingale by subtracting its mean, so:
$$N_t^C \equiv N_t - E[N_t] = N_t - \lambda t.$$
Then letting $X_t = N_t^C$ with $\lambda = 1$ obtains a discontinuous martingale with $\langle X \rangle_t = t$.
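The jump-discontinuity caveat can be made concrete numerically. The following minimal simulation sketch (assuming rate $\lambda = 1$; all function names are illustrative, not from the text) samples $N_t^C = N_t - t$ at a fixed time and checks the moment counterparts of $\langle X \rangle_t = t$, namely mean $0$ and variance $t$, even though the paths of $N_t^C$ jump:

```python
import numpy as np

rng = np.random.default_rng(0)

def compensated_poisson(t, lam=1.0, n_paths=200_000):
    """Sample N_t - lam*t at a fixed time t for a Poisson process of rate lam."""
    n_t = rng.poisson(lam * t, size=n_paths)   # N_t ~ Poisson(lam * t)
    return n_t - lam * t                       # compensated: mean zero

t = 5.0
x = compensated_poisson(t)
mean_t, var_t = x.mean(), x.var()
print(mean_t, var_t)   # near 0 and near t = 5.0, consistent with <X>_t = t
```

The moments match those of Brownian motion at time $t$; it is only the path continuity that fails, which is exactly why continuity must be assumed in the proposition below.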
Following this proposition we develop two applications of it. The first investigates when an Itô process is a Brownian motion, while the second investigates the connection between local martingales and Brownian motions with transformed time parameters. A third application of this result will be seen in book 9 on change of measures and Girsanov's theorem.

Proposition 6.1 (Lévy's characterization of n-dimensional BM) Let $X_t$ be an $n$-dimensional continuous and adapted process on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ with $X_0 = 0$. Then the following are equivalent, where $I$ denotes the $n \times n$ identity matrix, and $\{I_{jk}\}$ denote the components of $I$, so $I_{jj} = 1$ and $I_{jk} = 0$ otherwise.

1. $X_t$ is an $n$-dimensional Brownian motion.

2. $X_t$ is an $n$-dimensional local martingale and for $1 \le j, k \le n$, the process $X_t^{(j)} X_t^{(k)} - t I_{jk}$ is a continuous local martingale.

3. $X_t$ is an $n$-dimensional local martingale with quadratic covariation processes given $\mu$-a.e. by $\left\langle X^{(j)}, X^{(k)} \right\rangle_t = t I_{jk}$.

Proof. By definition 4.34, that $X_t$ is an $n$-dimensional continuous local martingale on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ means that each $X_t^{(j)}$ is a continuous local martingale on this space. Thus $2 \Leftrightarrow 3$ by proposition 6.29 of book 7. The proof will be completed by showing that $1 \Rightarrow 3$ and $3 \Rightarrow 1$.

$1 \Rightarrow 3$: This is proved in 2.8, noting that $\delta_{jk}$ there is defined identically with $I_{jk}$ here.

$3 \Rightarrow 1$: By definition 2.7, we seek to prove that given $t > s \ge 0$, $X_t - X_s$ is multivariate normally distributed with mean $n$-vector $0$ and covariance matrix $C \equiv (t - s)I$, where $I$ denotes the $n \times n$ identity matrix, and that $X_t - X_s$ is independent of $\sigma_s$. The first step is to prove that given $0 \ne u \in \mathbb{R}^n$:
$$E\left[\exp\left[iu \cdot (X_t - X_s)\right] \mid \sigma_s\right] = \exp\left(-\frac{1}{2} u^T \left[(t - s)I\right] u\right). \qquad (*)$$

To this end, let $Y_t = u \cdot X_t$ where $0 \ne u \in \mathbb{R}^n$. Then as a linear combination of continuous local martingales, $Y_t$ is a continuous local martingale (exercise 5.78, book 7), and so by corollaries 6.31-6.32 of book 7 and 3:
$$\langle Y \rangle_t = \sum_{j=1}^n \sum_{k=1}^n \left\langle X^{(j)}, X^{(k)} \right\rangle_t u_j u_k = t(u \cdot u),$$
where $(u \cdot u)$ denotes the inner product. With $f(x, y) = \exp[ix + y/2]$, define the continuous process $M_t \equiv f(Y_t, \langle Y \rangle_t)$:
$$M_t = \exp\left[iY_t + \langle Y \rangle_t / 2\right] = \exp\left[iu \cdot X_t + t(u \cdot u)/2\right].$$

We can apply the multivariate semimartingale version of Itô's lemma in 5.16 to the real and imaginary parts of $f(Y_t, \langle Y \rangle_t)$, then re-assemble. It can be checked that this is equivalent to applying Itô's lemma to a complex valued function directly, which we now do.

In the notation of that formula, $M_t^{(1)} = Y_t$, $F_t^{(1)} = 0$, $M_t^{(2)} = 0$ and $F_t^{(2)} = \langle Y \rangle_t$, and since $f_x(x, y) = if(x, y)$, $f_y(x, y) = \frac{1}{2} f(x, y)$ and $f_{xx}(x, y) = -f(x, y)$:
$$M_t = 1 + \int_0^t f_y(Y_s, \langle Y \rangle_s) d\langle Y \rangle_s + \frac{1}{2} \int_0^t f_{xx}(Y_s, \langle Y \rangle_s) d\langle Y \rangle_s + \int_0^t f_x(Y_s, \langle Y \rangle_s) dY_s = 1 + i \int_0^t M_s dY_s.$$

By Euler's formula (6.13, book 5), $M_t$ as defined above is bounded over compact intervals: $|M_t| \le \exp[t(u \cdot u)/2]$, and since continuous and adapted, $M_t$ is predictable (corollary 5.17, book 7). Thus $M_t \in \mathcal{H}_{2,loc}^{Y}([0, \infty) \times \mathcal{S})$ and it follows from proposition 3.83 that $\int_0^t M_s dY_s$ is a continuous local martingale, and hence by the above identity so too is $M_t$. But a locally bounded local martingale is a martingale by proposition 5.88 of book 7, and so:
$$E[M_t - M_s \mid \sigma_s] = 0.$$

The proof of $(*)$ is now completed by evaluating the characteristic function of $X_t - X_s$ at $u$, conditional on $\sigma_s$. Using the measurability property of conditional expectations (proposition 5.26, book 6) and the martingale property $E[M_t \mid \sigma_s] = M_s$:
$$E\left[\exp[iu \cdot (X_t - X_s)] \mid \sigma_s\right] = E\left[M_t \exp\left(-t(u \cdot u)/2 - iu \cdot X_s\right) \mid \sigma_s\right] = \exp\left(-t(u \cdot u)/2 - iu \cdot X_s\right) E[M_t \mid \sigma_s] = \exp\left(-t(u \cdot u)/2 - iu \cdot X_s\right) M_s = \exp\left(-\frac{1}{2} u^T[(t - s)I]u\right).$$

Taking expectations and applying the tower property obtains:
$$E\left[\exp[iu \cdot (X_t - X_s)]\right] = \exp\left(-\frac{1}{2} u^T[(t - s)I]u\right).$$

The expression on the right is the characteristic function of a multivariate normal distribution with mean $n$-vector $0$ and covariance matrix $C \equiv (t - s)I$, and thus $X_t - X_s$ has this distribution by the uniqueness theorem of proposition 6.14, book 6.
To prove independence of $\sigma(X_t - X_s)$, the sigma algebra generated by $X_t - X_s$ (recall 2.7), and $\sigma_s$, let $A \in \sigma_s$ and let $\chi_A$ be the characteristic function of $A$. With $u \in \mathbb{R}^n$ and $v \in \mathbb{R}$, the characteristic function $C(u, v)$ of the random vector $(X_t - X_s, \chi_A)$ can be calculated using the measurability and tower properties and $(*)$:
$$E\left[\exp\left[iu \cdot (X_t - X_s) + iv\chi_A\right]\right] = E\left[\exp[iv\chi_A]\, E\left(\exp[iu \cdot (X_t - X_s)] \mid \sigma_s\right)\right] = \exp\left(-\frac{1}{2} u^T[(t - s)I]u\right) E\left[\exp[iv\chi_A]\right] = C(u)C(v).$$

Now by definition $C(v)$ is the characteristic function of $\chi_A$, and by the above, $C(u)$ is the characteristic function of $X_t - X_s$. By book 6's proposition 6.20, $C(u)C(v)$ is the characteristic function of $(X_t - X_s, \chi_A)$ assuming these are independent variates, and by the uniqueness results of book 6's proposition 6.25, $\chi_A$ and $X_t - X_s$ are thus independent random variables. By definition this means that $\sigma(\chi_A)$ and $\sigma(X_t - X_s)$ are independent sigma algebras. As this is true for all $A \in \sigma_s$, it follows that $\sigma_s$ and $\sigma(X_t - X_s)$ are independent sigma algebras, and thus $X_t - X_s$ is independent of $\sigma_s$.
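The content of condition 3 can be probed numerically: for a simulated 2-dimensional Brownian motion, the pathwise sums $\sum_i \Delta X_i^{(j)} \Delta X_i^{(k)}$ over a fine partition of $[0, t]$ approximate $\left\langle X^{(j)}, X^{(k)} \right\rangle_t = t I_{jk}$. A minimal simulation sketch (names illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def covariation(t=2.0, n_steps=100_000):
    """Approximate <X^(j), X^(k)>_t for a 2-d BM from one finely sampled path."""
    dt = t / n_steps
    dX = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 2))  # BM increments
    qv11 = np.sum(dX[:, 0] * dX[:, 0])   # ~ t   (= t * I_11)
    qv12 = np.sum(dX[:, 0] * dX[:, 1])   # ~ 0   (= t * I_12)
    return qv11, qv12

qv11, qv12 = covariation()
print(qv11, qv12)   # near 2.0 and near 0.0
```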

6.1.1 When is an Itô Process a Brownian Motion?

Recall that a 1-dimensional Itô process was defined in 5.3 by:
$$X_t = X_0 + \int_0^t u(s, \omega) ds + \int_0^t v(s, \omega) dB_s,$$
where we assume that $u(s, \omega)$ and $v(s, \omega)$ are locally bounded (definition 4.17) predictable processes on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$. By proposition 4.21, $X_t$ is then a continuous semimartingale on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ if $X_0$ is $\sigma_0(\mathcal{S})$-measurable, and so $X_t - X_0 \in \mathcal{M}_{Sloc}$. A natural question is: when is such an Itô process equal to another Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$? Identifying the too-easy case of $X_0 = u(s, \omega) = 0$ and $v(s, \omega) = 1$, we can investigate this using 3 of Lévy's characterization.

Below we denote Lebesgue measure on $\mathbb{R}$ by $m$, and $m \times \mu$ denotes the product measure on the space $\left([0, \infty) \times \mathcal{S}, \sigma\left[\mathcal{B}([0, \infty)) \times \sigma(\mathcal{S})\right], m \times \mu\right)$, where as above $\sigma\left[\mathcal{B}([0, \infty)) \times \sigma(\mathcal{S})\right]$ denotes the smallest sigma algebra containing all measurable rectangles $C \times D$ with $C \in \mathcal{B}([0, \infty))$ and $D \in \sigma(\mathcal{S})$. The proofs of this result and the $m$-dimensional generalization below are somewhat long because we require some care with the various "almost everywhere" statements.

Proposition 6.2 (1-dimensional Brownian Itô process) If $X_t$ is a 1-dimensional Itô process on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ as above, with $u(s, \omega), v(s, \omega) \in \mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$, then $X_t$ is a Brownian motion on this space if and only if:

1. $X_0 = 0$, $\mu$-a.e.,
2. $u(t, \omega) = 0$, $m \times \mu$-a.e.,
3. $v^2(t, \omega) = 1$, $m \times \mu$-a.e.

Proof. 1. 1-3 Imply Brownian Motion: Every such $X_t$ is a continuous semimartingale by proposition 4.21. Given 2, we claim that $\mu$-a.e.:
$$\int_0^t u(s, \omega) ds = 0, \text{ all } t. \qquad (1)$$

For this, define:
$$A = u^{-1}\left(\mathbb{R} - \{0\}\right).$$
As $u(s, \omega)$ is $\sigma\left[\mathcal{B}([0, \infty)) \times \sigma(\mathcal{S})\right]$-measurable (proposition 5.19, book 7), it follows that $A \in \sigma\left[\mathcal{B}([0, \infty)) \times \sigma(\mathcal{S})\right]$. Now for each $\omega$, define $A_\omega$ by:
$$A_\omega = \{t \mid (t, \omega) \in A\} = \{t \mid u(t, \omega) \ne 0\}.$$

Book 5's proposition 5.14 obtains that $A_\omega \in \mathcal{B}([0, \infty))$ for all $\omega$, that $h(\omega) \equiv m(A_\omega)$ is $\mu$-integrable, and:
$$(m \times \mu)(A) = \int_{\mathcal{S}} h(\omega) d\mu. \qquad (2)$$

But as $(m \times \mu)(A) = 0$ by 2, and $0 \le h(\omega)$ for all $\omega$, it follows that $h(\omega) = 0$ and so $m(A_\omega) = 0$, $\mu$-a.e. Translating, we have that $\mu$-a.e., $u(s, \omega) = 0$, $m$-a.e., and this proves (1).
Thus if $X_t$ satisfies 1 and 2, then $\mu$-a.e.:
$$X_t = \int_0^t v(s, \omega) dB_s, \text{ all } t,$$
and this is a continuous local martingale by proposition 3.83 (recall proposition 4.20). Then proposition 3.89 obtains that $\mu$-a.e.:
$$\langle X \rangle_t = \int_0^t v^2(s, \omega) ds, \text{ all } t. \qquad (3)$$

We now repeat the above argument with $B \equiv (v^2)^{-1}\left(\mathbb{R} - \{1\}\right)$, noting that $(m \times \mu)(B) = 0$ by 3. In detail, for each $\omega$ define $B_\omega$ by:
$$B_\omega = \{t \mid (t, \omega) \in B\} = \{t \mid v^2(t, \omega) \ne 1\}.$$

Then $B_\omega \in \mathcal{B}([0, \infty))$ for all $\omega$, $k(\omega) \equiv m(B_\omega)$ is $\mu$-integrable, and:
$$(m \times \mu)(B) = \int_{\mathcal{S}} k(\omega) d\mu. \qquad (4)$$

But as $(m \times \mu)(B) = 0$ by 3, and $0 \le k(\omega)$ for all $\omega$, it follows that $k(\omega) = 0$ and so $m(B_\omega) = 0$, $\mu$-a.e. Translating, we have that $\mu$-a.e., $v^2(s, \omega) = 1$, $m$-a.e.

Thus $\mu$-a.e., $\langle X \rangle_t = t$ for all $t$ by (3), and so $X_t$ is a Brownian motion by proposition 6.1.

2. Brownian Motion Implies 1-3: If a Brownian motion $X_t$ is defined with $u(s, \omega), v(s, \omega) \in \mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$, then $\mu$-a.e., $X_0 = 0$ by definition, which is 1. As $X_t$ is a continuous martingale by proposition 2.8, and thus a continuous local martingale by book 7's corollary 5.85, and $\int_0^t v(s, \omega) dB_s$ is a continuous local martingale as above, it follows that:
$$\int_0^t u(s, \omega) ds = X_t - \int_0^t v(s, \omega) dB_s$$
is a continuous local martingale (exercise 5.78, book 7). Also by proposition 4.21, $\int_0^t u(s, \omega) ds$ is a continuous, adapted process of bounded variation.

Book 7's proposition 6.1 proves that if the continuous local martingale $\int_0^t u(s, \omega) ds$ is a bounded variation process, then it must be constant $\mu$-a.e., and by continuity this constant must be 0. Denote by $A^0$ the set with $\mu(A^0) = 1$ so that:
$$\int_0^t u(s, \omega) ds = 0, \text{ for all } t.$$

Since locally bounded and progressively measurable (3 of book 7's proposition 5.19), $u(s, \omega)$ is Borel measurable in $s$ for all $\omega$ by book 5's proposition 5.19. Then by book 3's proposition 3.34, given $\omega \in A^0$ we conclude that $u(t, \omega) = 0$, $m$-a.e. This proves that there exists $A^0 \in \sigma(\mathcal{S})$ with $\mu(A^0) = 1$, and for each $\omega \in A^0$ there exists $A^\omega \in \mathcal{B}([0, \infty))$ with $m(A^\omega) = 0$, so that for $\omega \in A^0$, $u(t, \omega) \ne 0$ only for $t \in A^\omega$.

Now define $A$, $A_\omega$ and $h(\omega)$ as in part 1 of the proof. If $\omega \in A^0$, then $A_\omega = A^\omega$ by definition, and so:
$$h(\omega) \equiv m(A_\omega) = m(A^\omega) = 0.$$
Thus $h(\omega) = 0$, $\mu$-a.e., and from (2) this proves $(m \times \mu)(A) = 0$, which is 2.

Thus if $X_t$ is a Brownian motion then $\mu$-a.e.:
$$X_t = \int_0^t v(s, \omega) dB_s, \text{ all } t.$$

Recalling that $\langle B \rangle_t = t$ from corollary 2.10, 3.69 obtains that $\mu$-a.e.:
$$\langle X \rangle_t = \int_0^t v^2(s, \omega) ds, \text{ all } t.$$

By 3 of Lévy's characterization, $X_t$ is a Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ if and only if $\langle X \rangle_t = t$, and so $\mu$-a.e.:
$$\int_0^t v^2(s, \omega) ds = t, \text{ all } t. \qquad (5)$$

Denote by $B^0 \in \sigma(\mathcal{S})$ the set with $\mu(B^0) = 1$ for which (5) is satisfied. As above, the function $v^2(s, \omega)$ is Borel measurable in $s$ for all $\omega$. If $\omega \in B^0$, then by local boundedness, book 5's proposition 3.36 obtains from (5) that:
$$v^2(t, \omega) = 1, \ m\text{-a.e.}$$
This proves that there exists $B^0 \in \sigma(\mathcal{S})$ with $\mu(B^0) = 1$, and for each $\omega \in B^0$ there exists $B^\omega \in \mathcal{B}([0, \infty))$ with $m(B^\omega) = 0$, so that for $\omega \in B^0$, $v^2(t, \omega) \ne 1$ only for $t \in B^\omega$. Arguing exactly as above, we can now prove that with $B$ defined as in part 1 of the proof, $B_\omega = B^\omega$, and thus $k(\omega) = 0$, $\mu$-a.e., and then $(m \times \mu)(B) = 0$ by (4), proving 3.
Example 6.3 (Brownian Itô processes) It is interesting to contemplate the result of proposition 6.2, since the conclusion that $v^2(t, \omega) = 1$, $m \times \mu$-a.e., initially seems quite limiting. First, any such function is bounded and hence locally bounded, so such $v(s, \omega) \in \mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$ if predictable. By book 7's corollary 5.17, any adapted, left continuous process is predictable. So for example, if $\{t_n\}_{n=0}^\infty$ is an increasing sequence of fixed times with $t_0 = 0$, then:
$$v(t, \omega) = \chi_{\{0\}}(t) + \sum_{n=0}^\infty a_n(\omega) \chi_{(t_n, t_{n+1}]}(t)$$
is an example of a process in $\mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$ if $a_n(\omega)$ is $\sigma_{t_n}(\mathcal{S})$-measurable (lemma 2.25) and $a_n^2(\omega) = 1$. Thus for each $n$ there is $A_n \in \sigma_{t_n}(\mathcal{S})$ with $a_n(\omega) = 1$ for $\omega \in A_n$ and $a_n(\omega) = -1$ for $\omega \in \tilde{A}_n$, the complement of $A_n$. Then:
$$X_t = \sum_{n=0}^\infty a_n(\omega)\left(B_{t_{n+1} \wedge t}(\omega) - B_{t_n \wedge t}(\omega)\right) = \sum_{n=0}^\infty \chi_{A_n}(\omega)\left(B_{t_{n+1} \wedge t}(\omega) - B_{t_n \wedge t}(\omega)\right) - \sum_{n=0}^\infty \chi_{\tilde{A}_n}(\omega)\left(B_{t_{n+1} \wedge t}(\omega) - B_{t_n \wedge t}(\omega)\right).$$

Pathwise, it is transparent that as a sum of independent normally distributed variates, $X_t \sim N(0, t)$ for each $t$, meaning $X_t$ is normally distributed with mean 0 and variance $t$. It is also transparent that $X_t$ is continuous, and with a little work we can verify directly that for $0 \le s < t$, $X_t - X_s$ has a normal distribution with mean 0 and variance $t - s$, and $X_t - X_s$ is independent of $\sigma_s(\mathcal{S})$. Thus $X_t$ so defined is a Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ by definition 2.7.

Recalling proposition 2.56, let $\{T_i\}_{i=0}^\infty$ be any sequence of stopping times so that $\mu$-a.e., $T_0 = 0$, $T_i < T_{i+1}$ for all $i$, and $T_i \to \infty$. Define $v(s, \omega)$ by:
$$v(s, \omega) = a_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{n=0}^\infty a_n(\omega)\chi_{(T_n, T_{n+1}]}(s),$$
where $a_{-1}(\omega)$ is $\sigma_{T_0}(\mathcal{S})$-measurable, $a_n(\omega)$ is $\sigma_{T_n}(\mathcal{S})$-measurable (definition 5.52, book 7) for all $n$, and all $a_n^2(\omega) = 1$. Again $v(s, \omega) \in \mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$ and:
$$X_t = \sum_{j=0}^\infty a_j(\omega)\left(B_{T_{j+1} \wedge t}(\omega) - B_{T_j \wedge t}(\omega)\right).$$

Then $X_t$ is a Brownian motion by proposition 6.2, but other than continuity, it is far more difficult to verify this directly.
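The first construction in this example is easy to simulate. The sketch below (names illustrative, not from the text) chooses each sign $a_n = \pm 1$ from information available at $t_n$ (here the sign of the Brownian path at $t_n$, which is $\sigma_{t_n}(\mathcal{S})$-measurable) and checks that the resulting $X_t$ is again $N(0, t)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def flipped_bm(t_grid, n_paths=100_000):
    """X_t = sum_n a_n (B_{t_{n+1}} - B_{t_n}) with a_n = sign(B_{t_n})."""
    n = len(t_grid) - 1
    dB = rng.normal(0.0, np.sqrt(np.diff(t_grid)), size=(n_paths, n))
    B = np.cumsum(dB, axis=1)                  # B[:, k] = B_{t_{k+1}}
    X = dB[:, 0].copy()                        # a_0 = +1 since B_{t_0} = 0
    for k in range(1, n):
        a_k = np.where(B[:, k - 1] >= 0, 1.0, -1.0)  # sigma_{t_k}-measurable
        X += a_k * dB[:, k]
    return X

t_grid = np.linspace(0.0, 5.0, 11)             # t_n = 0, 0.5, ..., 5.0
X = flipped_bm(t_grid)
mean_X, var_X = X.mean(), X.var()
print(mean_X, var_X)                           # near 0 and near t = 5.0
```

Because each $a_k$ is independent of the future increment it multiplies, flipping signs leaves both the mean and the variance of each term unchanged, exactly as proposition 6.2 predicts.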

We turn next to the $m$-dimensional Itô process defined in 5.18 for $u \in \mathcal{H}_{loc}^{bP(m \times 1)}([0, \infty) \times \mathcal{S})$ and $v \in \mathcal{H}_{loc}^{bP(m \times n)}([0, \infty) \times \mathcal{S})$ by:
$$X_t = X_0 + \int_0^t u\, ds + \int_0^t v\, dB.$$

In components:
$$X_t \equiv \begin{pmatrix} X_0^{(1)} + \int_0^t u_1(s, \omega) ds + \sum_{j=1}^n \int_0^t v_{1j}(s, \omega) dB_s^{(j)}(\omega) \\ X_0^{(2)} + \int_0^t u_2(s, \omega) ds + \sum_{j=1}^n \int_0^t v_{2j}(s, \omega) dB_s^{(j)}(\omega) \\ \vdots \\ X_0^{(m)} + \int_0^t u_m(s, \omega) ds + \sum_{j=1}^n \int_0^t v_{mj}(s, \omega) dB_s^{(j)}(\omega) \end{pmatrix}.$$

Recall that $u \in \mathcal{H}_{loc}^{bP(m \times 1)}([0, \infty) \times \mathcal{S})$ and $v \in \mathcal{H}_{loc}^{bP(m \times n)}([0, \infty) \times \mathcal{S})$ of definition 4.35, which is to say that for all $i, j$, $u_i(s, \omega), v_{ij}(s, \omega) \in \mathcal{H}_{loc}^{bP}([0, \infty) \times \mathcal{S})$ of definition 4.17.

The conclusion below generalizes the above result and also book 7's proposition 2.61. This latter result proved that if $B_t(\omega) \equiv \left(B_t^{(1)}(\omega), ..., B_t^{(n)}(\omega)\right)$ is an $n$-dimensional Brownian motion defined on a probability space $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ and $R$ is an $n \times n$ real rotation matrix, then $RB$ is a Brownian motion. Recall that a rotation matrix can be specified by $RR^T = I_{n \times n}$, or equivalently $R^T = R^{-1}$, where the transpose matrix is defined by $R_{ij}^T \equiv R_{ji}$.

In addition to generalizing from a fixed $n \times n$ real rotation matrix $R$ to a stochastically defined $n \times n$ rotation matrix $R \equiv v(t, \omega)$ when $m = n$, the next result also allows for the case $m \ne n$. In the special case where $m < n$, $v(t, \omega)$ provides an orthogonal projection $\mathbb{R}^n \to \mathbb{R}^m$, while the case $m > n$ provides an orthogonal embedding $\mathbb{R}^n \to \mathbb{R}^m$. See Strang (2009) for more on this.

Note that while we denote $v(t, \omega) \equiv (v_{ij}(t, \omega))_{i=1, j=1}^{m, n}$ for simplicity, this collection is treated as an $m \times n$ matrix.
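The deterministic rotation result that this proposition generalizes is easy to check by simulation. A minimal sketch (names illustrative, not from the text), rotating a 2-dimensional Brownian motion by a fixed angle and confirming that the rotated variables at time $t$ still have covariance matrix $tI$:

```python
import numpy as np

rng = np.random.default_rng(3)

theta = 0.7
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])       # R R^T = I

t = 3.0
n_paths = 200_000
B_t = rng.normal(0.0, np.sqrt(t), size=(n_paths, 2))  # 2-d BM sampled at time t
Y_t = B_t @ R.T                                       # rotated process (RB)_t

cov = np.cov(Y_t.T)    # should be close to t * I
print(cov)
```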

Proposition 6.4 (m-dimensional Brownian Itô process) If $X_t$ is an $m$-dimensional Itô process on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ as above, so $u \in \mathcal{H}_{loc}^{bP(m \times 1)}([0, \infty) \times \mathcal{S})$ and $v \in \mathcal{H}_{loc}^{bP(m \times n)}([0, \infty) \times \mathcal{S})$, then $X_t$ is a Brownian motion on this space if and only if:

1. $X_0 = 0$, $\mu$-a.e.,
2. $u_i(t, \omega) = 0$ for all $i$, $m \times \mu$-a.e.,
3. $v(t, \omega)v^T(t, \omega) = I_{m \times m}$, $m \times \mu$-a.e.

Proof. We will follow the logic of the proof of proposition 6.2, redefining sets as appropriate.

1. 1-3 Imply Brownian Motion: Every such $X_t$ is an $m$-dimensional continuous semimartingale by proposition 4.21 and definition 4.34. Define $A = \bigcup_{i=1}^m u_i^{-1}(\mathbb{R} - \{0\})$, and note that since the complement of $A$ is:
$$\tilde{A} = \bigcap_{i=1}^m u_i^{-1}(0),$$
we have $(m \times \mu)(A) = 0$ by 2. Repeating the steps of the above proof obtains $m(A_\omega) = 0$, $\mu$-a.e. Translating, this proves that $\mu$-a.e., $u_i(s, \omega) = 0$ for all $i$, $m$-a.e., and hence:
$$\int_0^t u_i(s, \omega) ds = 0, \text{ all } i, t.$$

Thus given 1 and 2, we obtain $\mu$-a.e.:
$$X_t \equiv \begin{pmatrix} \sum_{j=1}^n \int_0^t v_{1j}(s, \omega) dB_s^{(j)}(\omega) \\ \sum_{j=1}^n \int_0^t v_{2j}(s, \omega) dB_s^{(j)}(\omega) \\ \vdots \\ \sum_{j=1}^n \int_0^t v_{mj}(s, \omega) dB_s^{(j)}(\omega) \end{pmatrix}, \text{ all } t. \qquad (1)$$

This is an $m$-dimensional continuous local martingale by proposition 3.83 and book 7's exercise 5.78.
Then by book 7's corollary 6.32 and 3.68, recalling from 2.8 that $\left\langle B^{(k)}, B^{(l)} \right\rangle_t = t$ for $k = l$ and 0 otherwise:
$$\left\langle X^{(i)}, X^{(j)} \right\rangle_t = \sum_{k=1}^n \sum_{l=1}^n \left\langle \int_0^{\cdot} v_{ik}(s, \omega) dB_s^{(k)}, \int_0^{\cdot} v_{jl}(s, \omega) dB_s^{(l)} \right\rangle_t = \sum_{k=1}^n \sum_{l=1}^n \int_0^t v_{ik}(s, \omega)v_{jl}(s, \omega) d\left\langle B^{(k)}, B^{(l)} \right\rangle_s = \int_0^t \sum_{k=1}^n v_{ik}(s, \omega)v_{jk}(s, \omega) ds = \int_0^t \left(v(s, \omega)v^T(s, \omega)\right)_{i,j} ds. \qquad (2)$$

Now define:
$$B \equiv \bigcup_{i \ne j} \left(vv^T\right)_{i,j}^{-1}\left(\mathbb{R} - \{0\}\right) \cup \bigcup_i \left(vv^T\right)_{i,i}^{-1}\left(\mathbb{R} - \{1\}\right).$$

Since the complement of $B$ is:
$$\tilde{B} \equiv \bigcap_{i \ne j} \left(vv^T\right)_{i,j}^{-1}(0) \cap \bigcap_i \left(vv^T\right)_{i,i}^{-1}(1),$$
we have $(m \times \mu)(B) = 0$ by 3. Repeating the argument of proposition 6.2 obtains that $m(B_\omega) = 0$, $\mu$-a.e. Translating, this proves that $\mu$-a.e.:
$$\left(v(s, \omega)v^T(s, \omega)\right)_{i,j} = \begin{cases} 1, & i = j, \\ 0, & i \ne j, \end{cases} \quad m\text{-a.e.}$$

This then obtains from (2) that $\mu$-a.e., $\left\langle X^{(i)}, X^{(j)} \right\rangle_t = t$ for $i = j$ and 0 otherwise. Hence $X_t$ is a Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ by 3 of proposition 6.1.
2. Brownian Motion Implies 1-3: Book 7's proposition 1.42 obtains that $X_t = \left(X_t^{(1)}, ..., X_t^{(m)}\right)$ is an $m$-dimensional Brownian motion if and only if the $X_t^{(i)}$ are independent 1-dimensional Brownian motions. But these component processes differ from the model of proposition 6.2 due to the $n$-dimensional Brownian motion, and so we need to check the details to infer 2. For given $i$, $X_t^{(i)}$ is a continuous martingale by proposition 2.8, and then a continuous local martingale by book 7's corollary 5.85, and $\sum_{j=1}^n \int_0^t v_{ij}(s, \omega) dB_s^{(j)}(\omega)$ is a continuous local martingale by proposition 3.83 and book 7's exercise 5.78.

Thus:
$$\int_0^t u_i(s, \omega) ds = X_t^{(i)} - \sum_{j=1}^n \int_0^t v_{ij}(s, \omega) dB_s^{(j)}(\omega)$$
is a continuous local martingale (exercise 5.78, book 7), and by proposition 4.21, $\int_0^t u_i(s, \omega) ds$ is a continuous, adapted process of bounded variation. By book 7's proposition 6.1, $\int_0^t u_i(s, \omega) ds$ must be constant $\mu$-a.e., and by continuity this constant must be 0. Repeating the steps of part 2 of the proof of proposition 6.2 obtains that $(m \times \mu)(A_i) = 0$ for each $i$, with $A_i = u_i^{-1}(\mathbb{R} - \{0\})$. Thus $(m \times \mu)(A) = 0$ where $A = \bigcup_{i=1}^m u_i^{-1}(\mathbb{R} - \{0\})$, which is 2.

As $X_0 = 0$ by definition of a Brownian motion, we obtain from the previous step that $\mu$-a.e., $X_t$ satisfies (1). Then by the same steps as in part 1:
$$\left\langle X^{(i)}, X^{(j)} \right\rangle_t = \int_0^t \left(v(s, \omega)v^T(s, \omega)\right)_{i,j} ds,$$
and with 3 of proposition 6.1 we can conclude that $\mu$-a.e.:
$$\int_0^t \left(v(s, \omega)v^T(s, \omega)\right)_{i,j} ds = \begin{cases} t, & i = j, \\ 0, & i \ne j, \end{cases} \quad \text{all } t. \qquad (3)$$

Denote by $B^0 \in \sigma(\mathcal{S})$ the set with $\mu(B^0) = 1$ for which (3) is satisfied. As in proposition 6.2, the functions $\left(v(s, \omega)v^T(s, \omega)\right)_{i,j}$ are Borel measurable in $s$ for all $\omega$. If $\omega \in B^0$, then local boundedness of $\left(v(s, \omega)v^T(s, \omega)\right)_{i,j}$ and book 5's proposition 3.36 obtain from (3) that $m$-a.e.:
$$\left(v(t, \omega)v^T(t, \omega)\right)_{i,j} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases} \qquad (4)$$

In summary, there exists $B^0 \in \sigma(\mathcal{S})$ with $\mu(B^0) = 1$, and for each $\omega \in B^0$ there exists $B^\omega \in \mathcal{B}([0, \infty))$ with $m(B^\omega) = 0$, so that for $\omega \in B^0$, at least one of the equations in (4) can fail only for $t \in B^\omega$.

Finally, define $B$ as in part 1, and then define $B_\omega = \{t \mid (t, \omega) \in B\}$ and $k(\omega) = m(B_\omega)$. As noted in the proof of proposition 6.2 and following from book 5's proposition 5.14:
$$(m \times \mu)(B) = \int_{\mathcal{S}} k(\omega) d\mu. \qquad (5)$$

If $\omega \in B^0$, then $B_\omega = B^\omega$ by definition, and so $\mu$-a.e.:
$$k(\omega) \equiv m(B_\omega) = m(B^\omega) = 0.$$

Thus $k(\omega) = 0$, $\mu$-a.e., and from (5) this proves $(m \times \mu)(B) = 0$, which is 3.
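Proposition 6.4 can be illustrated with a genuinely time-dependent rotation. The following minimal Euler-discretization sketch (names illustrative, not from the text) takes $m = n = 2$, $u \equiv 0$, and $v(s)$ the rotation by angle $s$, so that $vv^T = I$; the simulated $X_t = \int_0^t v\, dB$ should then have independent components with variance $t$:

```python
import numpy as np

rng = np.random.default_rng(4)

def rotated_ito(t=2.0, n_steps=400, n_paths=10_000):
    """X_t = int_0^t v(s) dB_s for the rotation v(s) by angle s (Euler-Ito)."""
    dt = t / n_steps
    s = np.arange(n_steps) * dt                 # left endpoints of the partition
    c, sn = np.cos(s), np.sin(s)
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
    dX1 = c * dB[:, :, 0] + sn * dB[:, :, 1]    # first row of v(s) dB_s
    dX2 = -sn * dB[:, :, 0] + c * dB[:, :, 1]   # second row of v(s) dB_s
    return dX1.sum(axis=1), dX2.sum(axis=1)

X1, X2 = rotated_ito()
var1, var2, cov12 = np.var(X1), np.var(X2), np.mean(X1 * X2)
print(var1, var2, cov12)                        # near 2.0, 2.0, 0.0
```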



6.1.2 Continuous LMs are Time-Changed BMs

In this section we develop a rather remarkable result often attributed to Wolfgang Doeblin (1915–1940), who is also credited with independently deriving results related to Itô's lemma. In some references this latter result is therefore called the Itô-Doeblin lemma. However, Doeblin's work was not discovered until 2000, by which time the following result had been independently derived and extended in 1965 papers of K. E. Dambis, and of Lester E. Dubins (1920–2010) and Gideon E. Schwarz (1933–2007).

In summary, the Doeblin-Dambis-Dubins-Schwarz result states that every continuous local martingale can be expressed as a time-changed Brownian motion:
$$M_t = B_{T(t)},$$
where the time change function $T(t)$ can be exactly specified. Remarkably:
$$T(t) = \langle M \rangle_t.$$

Example 6.5 (The Wiener integral) Recall the Wiener integral of proposition 2.58 as an example of this result. There it was proved that if $B_t(\omega)$ is a Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ and $v(s)$ a continuous function defined on $[0, \infty)$, then for every $t$:
$$\int_0^t v(s) dB_s(\omega) \sim N\left(0, \int_0^t v^2(s) ds\right).$$

In other words, the Itô integral is normally distributed with expectation 0 and variance $\int_0^t v^2(s) ds$. But we also know that $M_t \equiv \int_0^t v(s) dB_s(\omega)$ is a continuous local martingale by proposition 3.83, and that by proposition 3.89 and corollary 2.10:
$$\langle M \rangle_t = \int_0^t v^2(s) ds.$$

Now assume that $v(s) > 0$ for all $s$ and $\int_0^\infty v^2(s) ds = \infty$. Then $T(t) \equiv \langle M \rangle_t$ is strictly monotonic and unbounded, and thus has a well defined inverse $T^{-1}$ defined on $[0, \infty)$. Define a new process on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_{T^{-1}(t)}(\mathcal{S}), \mu)$:
$$\tilde{B}_t \equiv M_{T^{-1}(t)},$$
noting the shift in filtration to preserve adaptedness. Note that $\{\sigma_{T^{-1}(t)}\}$ is indeed a filtration because $T^{-1}$ is strictly increasing (though only nondecreasing is needed for this). Also, this filtration contains all negligible sets (definition 5.4, book 7) because the original filtration has this property. Finally, $\{\sigma_{T^{-1}(t)}\}$ is right continuous since $T^{-1}$ is continuous (though only right continuity is needed for this). In summary, $\{\sigma_{T^{-1}(t)}\}$ satisfies the usual conditions and thus:
$$(\mathcal{S}, \sigma(\mathcal{S}), \sigma_{T^{-1}(t)}(\mathcal{S}), \mu) = (\mathcal{S}, \sigma(\mathcal{S}), \sigma_{T^{-1}(t)}(\mathcal{S}), \mu)_{u.c.}.$$

Now $\tilde{B}_t$ so defined satisfies $\tilde{B}_0 = 0$, is continuous by continuity of $M_t$ and $T^{-1}(t)$, and is integrable:
$$E\left[\left|\tilde{B}_t\right|\right] = E\left[\left|M_{T^{-1}(t)}\right|\right] < \infty,$$
since $M_t$ is integrable. Further, $\tilde{B}_t$ is a martingale with respect to $\mu$ on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_{T^{-1}(t)}(\mathcal{S}), \mu)_{u.c.}$, since if $0 \le s < t$:
$$E\left[\tilde{B}_t \mid \sigma_{T^{-1}(s)}\right] \equiv E\left[M_{T^{-1}(t)} \mid \sigma_{T^{-1}(s)}\right] = M_{T^{-1}(s)} = \tilde{B}_s.$$

Finally, by 6.10 of book 7's proposition 6.12:
$$\left\langle \tilde{B} \right\rangle_t = \langle M \rangle_{T^{-1}(t)},$$
and thus by definition of $T(t)$:
$$\left\langle \tilde{B} \right\rangle_t = T\left(T^{-1}(t)\right) = t.$$

Lévy's characterization obtains that $\tilde{B}_t$ defined on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_{T^{-1}(t)}(\mathcal{S}), \mu)_{u.c.}$ is a Brownian motion. Further:
$$M_t = \tilde{B}_{T(t)} \equiv \tilde{B}_{\langle M \rangle_t}.$$

In other words, $M_t$ is a Brownian motion defined with respect to a continuously transformed time scale and filtration.
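This example can be checked numerically. The following sketch (names illustrative, not from the text) takes $v(s) = \sqrt{1 + s}$, so that $T(t) = \langle M \rangle_t = t + t^2/2$ and $T^{-1}(\tau) = -1 + \sqrt{1 + 2\tau}$; sampling $M$ at the deterministic time $T^{-1}(\tau)$ should then produce a variable with variance $\tau$:

```python
import numpy as np

rng = np.random.default_rng(5)

def wiener_integral_at(t, n_steps=500, n_paths=20_000):
    """M_t = int_0^t sqrt(1+s) dB_s via a left-endpoint (Ito) sum."""
    dt = t / n_steps
    s = np.arange(n_steps) * dt
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    return (np.sqrt(1.0 + s) * dB).sum(axis=1)

tau = 3.0
t_inv = -1.0 + np.sqrt(1.0 + 2.0 * tau)   # T^{-1}(tau) when T(t) = t + t^2/2
M = wiener_integral_at(t_inv)
mean_M, var_M = M.mean(), M.var()
print(mean_M, var_M)                      # near 0 and near tau = 3.0
```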

To generalize this example to the following proposition, we will more or less follow the above template but will need to be more careful about the definition of $T(t)$. In the general case $T(t) \equiv \langle M \rangle_t$ will depend on $\omega \in \mathcal{S}$, and given $\omega$, $\langle M \rangle_t(\omega)$ need not be strictly monotonic, and thus we will require a more general notion of $T^{-1}(t)$. Further, as this generalized inverse is again a function of $\omega$, measurability becomes an issue, and in particular, the question of whether such $T^{-1}(t)$ is a stopping time. Finally, we will explicitly assume that $\langle M \rangle_t$ is unbounded $\mu$-a.e., as this simplifies the proof and allows the Brownian motion to be defined on the same space. For generalizations to the case where $\langle M \rangle_t$ is bounded and the probability space must be enlarged, and to the case of a multivariate local martingale, see Revuz and Yor (1999).

Before beginning, we define the notion of a "right continuous inverse" of a continuous, nondecreasing function $F(t)$ defined on $[0, \infty)$ with $F(0) = 0$, with $F$ bounded or not. It is closely related to the left continuous inverse introduced in definition 3.12 of book 2, but the current modification is needed to obtain the measurability needed to be a stopping time. As was the case in book 2, we define this function and then set out to prove that it is indeed right continuous, and to what extent it serves as an inverse. Then we prove the main result of this section. For the current development, we follow Karatzas and Shreve (1988).

Definition 6.6 (Right continuous inverse) Let $F(t)$ be a continuous, nondecreasing function defined on $[0, \infty)$ with $F(0) = 0$ and:
$$S \equiv \sup_t F(t),$$
and so $S \le \infty$. The "right continuous" inverse of $F$, denoted $F^\#$, is defined on $[0, \infty)$ by:
$$F^\#(s) = \begin{cases} \inf\{t \ge 0 \mid F(t) > s\}, & 0 \le s < S, \\ \infty, & s \ge S. \end{cases} \qquad (6.1)$$

Example 6.7 (Contrast left continuous inverse) Comparing this definition to that of the left continuous inverse $F^*(s)$ of book 2's definition 3.12, one sees that in 6.1 the infimum is over the set where $F(t) > s$, while for the earlier definition this infimum is over the set where $F(t) \ge s$. Given continuous $F$ as above, this change of definition only affects the value of the inverse for $s = F(t)$ in a flat spot, or a "step," of the graph of $F$. For example, assume that $F(t) = s$ on $[a, b]$, and that $F(t) < s$ for $t < a$ and $F(t) > s$ for $t > b$. Then $F^\#(s) = b$, while $F^*(s) = a$. For this simple example we see that $F^\#$ is then right continuous at $s$ but not left continuous, while $F^*$ is left continuous at $s$ but not right continuous.
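The flat-spot behavior discussed in this example is easy to exhibit numerically. A small sketch (function names illustrative, not from the text) approximates $F^\#$ on a grid and confirms that on a step of $F$ the right continuous inverse returns the right endpoint $b$, not the left endpoint $a$:

```python
import numpy as np

def right_cont_inverse(F, t_grid, s):
    """F#(s) = inf{t >= 0 : F(t) > s}, approximated on a discrete t-grid."""
    vals = F(t_grid)
    idx = np.argmax(vals > s)       # index of first grid point with F(t) > s
    if vals[idx] <= s:              # no such point: s >= sup F on the grid
        return float("inf")
    return t_grid[idx]

# F with a flat spot: F(t) = t on [0,1], F = 1 on [1,2], F = t - 1 on [2,3]
def F(t):
    return np.where(t <= 1.0, t, np.where(t <= 2.0, 1.0, t - 1.0))

t_grid = np.linspace(0.0, 3.0, 30001)
r_step = right_cont_inverse(F, t_grid, 1.0)   # step [1,2]: expect ~2, not 1
r_mid = right_cont_inverse(F, t_grid, 0.5)    # ordinary point: expect ~0.5
r_sup = right_cont_inverse(F, t_grid, 2.5)    # above sup F: expect inf
print(r_step, r_mid, r_sup)
```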

Lemma 6.8 (Properties of $F^\#$) With $F$ and $F^\#$ as in definition 6.6:

1. $F^\#: [0, S) \to [0, \infty)$ is nondecreasing and right continuous. Further, if $F(t) < S$ for all $t$, then $F^\#(s) \to \infty$ as $s \to S$.

2. $F(F^\#(s)) = s \wedge S$ for all $s$, where $s \wedge S \equiv \min\{s, S\}$.

3. $F^\#(F(t)) = \sup\{r \ge t \mid F(r) = F(t)\}$ for all $t$.

4. Let $\varphi: [0, \infty) \to \mathbb{R}$ be continuous and have the property that if $0 \le t < t'$ and $F(t) = F(t')$, then $\varphi(t) = \varphi(t')$. That is, the steps of $F$ are contained in the steps of $\varphi$. Then $\varphi(F^\#(s))$ is continuous on $[0, S)$, and:
$$\varphi(F^\#(F(t))) = \varphi(t), \text{ all } t.$$

5. For $0 \le s, t < \infty$:
$$s < F(t) \iff F^\#(s) < t,$$
$$F^\#(s) \le t \Rightarrow s \le F(t).$$

Proof. We take these in turn, with many steps simply working through the definitions.

1: That $F^\#$ is nondecreasing follows from the definition since $F$ is nondecreasing, while the last statement is by definition. Right continuity follows by definition for $s \in [S, \infty)$, so assume $s \in [0, S)$ and $F^\#(s) = t$. Then by definition $F(t + \epsilon) > s$ for all $\epsilon > 0$. Now if $s < r < F(t + \epsilon)$, then $F^\#(r) \le t + \epsilon = F^\#(s) + \epsilon$. Letting $\epsilon \to 0$, right continuity of $F$ obtains that $\lim_{r \to s^+} F^\#(r) \le F^\#(s)$, and thus $\lim_{r \to s^+} F^\#(r) = F^\#(s)$ since $F^\#$ is nondecreasing.

2: If $s \in [S, \infty)$ then $F(F^\#(s)) = F(\infty) = S$, so assume $s \in [0, S)$ and $F^\#(s) = t$. If $t = 0$ then $s = 0$ by definition and continuity of $F$, recalling $F(0) = 0$, so we can assume $t > 0$. As above, $F(t + \epsilon) \equiv F(F^\#(s) + \epsilon) > s$ for all $\epsilon > 0$, and thus by right continuity of $F$, $F(F^\#(s)) \ge s$. Similarly, for $0 < \epsilon < t$ the definition obtains $F(t - \epsilon) \equiv F(F^\#(s) - \epsilon) \le s$, while this and continuity of $F$ yield $F(F^\#(s)) \le s$, completing the proof.

3: By definition, if $F(t) = S$ then $F^\#(F(t)) = \infty = \sup\{r \ge t \mid F(r) = F(t)\}$ by monotonicity. If $F(t) < S$ then by monotonicity and continuity of $F$:
$$F^\#(F(t)) \equiv \inf\{r \ge 0 \mid F(r) > F(t)\} = \sup\{r \ge t \mid F(r) = F(t)\}.$$

4: That $\varphi(F^\#(s))$ is right continuous follows from continuity of $\varphi$ and right continuity of $F^\#$ by 1. There is nothing left to prove for $s = 0$, so assume $s \in (0, S)$ and let $0 < s_n \to s^-$. Then $F^\#(s_n)$ is nondecreasing by 1, and since bounded by $F^\#(s) < \infty$, this sequence has a limit. To prove $\lim_n \varphi\left(F^\#(s_n)\right) = \varphi\left(F^\#(s)\right)$ and left continuity, first note that by continuity of $F$, and then 2 since $s_n < S$:
$$F\left(\lim_n F^\#(s_n)\right) = \lim_n F\left(F^\#(s_n)\right) = \lim_n s_n = s.$$

Thus applying $F^\#$ and using 3:
$$\sup\left\{r \ge \lim_n F^\#(s_n) \,\middle|\, F(r) = F\left(\lim_n F^\#(s_n)\right)\right\} = F^\#(s).$$

The "steps" property of $\varphi$ and continuity then obtain:
$$\sup\left\{r \ge \lim_n F^\#(s_n) \,\middle|\, \varphi(r) = \varphi\left(\lim_n F^\#(s_n)\right)\right\} = \sup\left\{r \ge \lim_n F^\#(s_n) \,\middle|\, \varphi(r) = \lim_n \varphi\left(F^\#(s_n)\right)\right\} = F^\#(s),$$
and thus:
$$\varphi\left(F^\#(s)\right) = \lim_n \varphi\left(F^\#(s_n)\right).$$

Finally, 2 obtains that $F(F^\#(F(t))) = F(t) \wedge S = F(t)$, and thus the identity $\varphi(F^\#(F(t))) = \varphi(t)$ is a consequence of the "steps" property of $\varphi$.

5: If $s < F(t)$ then by monotonicity of $F^\#$ and 3 it follows that:
$$F^\#(s) \equiv \inf\{r \ge 0 \mid F(r) > s\} < t.$$

If $F^\#(s) < t$ then $s \wedge S < F(t)$ by monotonicity of $F$ and 2. But as $F(t) \le S$, the result follows. For the second implication, by monotonicity of $F$ and 2:
$$F^\#(s) \le t \Rightarrow s \wedge S \le F(t) \Rightarrow s \le F(t),$$
since $F(t) \le S$.

With the preliminary work done, we are ready to state and prove the main result of this section. The logical flow of the proof is to generalize the derivation of example 6.5 to this more general context.

Proposition 6.9 (Doeblin-Dambis-Dubins-Schwarz characterization) Given $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$, a continuous local martingale with $M_0 = 0$, and assume that $\langle M \rangle_t$ is unbounded $\mu$-a.e. For each $s$ define:
$$T_s = \inf\{t \ge 0 \mid \langle M \rangle_t > s\}, \qquad (6.2)$$
and thus by definition 6.6, $T_s = F^\#(s)$ for the increasing function (proposition 6.12, book 7) $F(t) = \langle M \rangle_t$. Then $T_s$ is a stopping time relative to $\{\sigma_t(\mathcal{S})\}_{u.c.}$ for all $s$; the filtration $\{\sigma'_s(\mathcal{S})\} \equiv \{\sigma_{T_s}(\mathcal{S})\}$ (definition 5.52, book 7) satisfies the usual conditions; and, for all $t$:
$$T'_t \equiv \langle M \rangle_t$$
is a stopping time relative to $\{\sigma'_s(\mathcal{S})\}_{u.c.}$.

Further, $B_s \equiv M_{T_s}$ is a 1-dimensional Brownian motion on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma'_s(\mathcal{S}), \mu)_{u.c.}$, and $\mu$-a.e.:
$$M_t = B_{\langle M \rangle_t}, \quad 0 \le t < \infty. \qquad (6.3)$$
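Before turning to the proof, the statement can be sanity-checked by simulation. The following minimal Euler-discretization sketch (names illustrative, not from the text) takes $M_t = \int_0^t (1 + |B_s|) dB_s$, whose quadratic variation $\langle M \rangle_t = \int_0^t (1 + |B_s|)^2 ds$ is random and unbounded; sampling $M$ at the pathwise time $T_s$ where $\langle M \rangle$ first exceeds $s$ should produce a variable with mean $0$ and variance $s$, consistent with $B_s \equiv M_{T_s}$ being a Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(6)

def dds_sample(s_level=1.0, dt=0.001, n_paths=20_000):
    """Sample M_{T_s} for M = int (1+|B|) dB, stopping each path the
    first time its accumulated quadratic variation exceeds s_level."""
    B = np.zeros(n_paths)
    M = np.zeros(n_paths)
    qv = np.zeros(n_paths)
    active = np.ones(n_paths, dtype=bool)
    while active.any():
        h = 1.0 + np.abs(B)                    # integrand at left endpoint
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        step = active.astype(float)            # frozen paths stop moving
        M += step * h * dB
        qv += step * h * h * dt                # d<M> = (1 + |B|)^2 dt
        B += step * dB
        active &= qv <= s_level
    return M

x = dds_sample()
mean_x, var_x = x.mean(), x.var()
print(mean_x, var_x)   # near 0 and near s_level = 1.0
```

Since the integrand is at least 1, the quadratic variation accumulates at rate at least 1 and every path stops in finitely many steps, mirroring the unboundedness assumption of the proposition.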
Proof. For notational convenience we reference properties of lemma 6.8 by numbers 1) to 5).

Then $\{\sigma'_s(\mathcal{S})\} \equiv \{\sigma_{T_s}(\mathcal{S})\}$ is a filtration by 4 of book 7's proposition 5.61 because $T_s$ is nondecreasing by 1); it contains all negligible sets because $\{\sigma(\mathcal{S})\}$ has this property; and it is right continuous because $T_s$ is right continuous by 1) (applying 6 of book 7's proposition 5.61). Summarizing, $\{\sigma'_s(\mathcal{S})\}$ satisfies the usual conditions and so $\{\sigma'_s(\mathcal{S})\} = \{\sigma'_s(\mathcal{S})\}_{u.c.}$. Now relative to $\{\sigma_t(\mathcal{S})\}_{u.c.}$, $T_s$ is an optional time (remark 5.58, book 7) because by 5) and adaptedness of $\langle M \rangle_t$ (proposition 6.12, book 7):
$$\{T_s < t\} = \{\langle M \rangle_t > s\} = \{\langle M \rangle_t \le s\}^c \in \sigma_t(\mathcal{S}).$$

Thus $T_s$ is a stopping time relative to $\{\sigma_t(\mathcal{S})\}_{u.c.}$ because this filtration satisfies the usual conditions (1 of proposition 5.60, book 7). That $T'_t$ is a stopping time relative to $\{\sigma'_s(\mathcal{S})\}_{u.c.}$ follows from 5), and that $T_s$ is $\sigma_{T_s}(\mathcal{S})$-measurable, or equivalently $\sigma'_s(\mathcal{S})$-measurable, by book 7's exercise 5.56:
$$\{T'_t \le s\} = \{t \le T_s\} \in \sigma_{T_s}(\mathcal{S}).$$
Turning to $B_s$, we proceed in steps.

1: Given $s_2 > 0$, $\overline{M}_t \equiv M_{t \wedge T_{s_2}}$ is an $L^2$-bounded, uniformly integrable martingale:

Since $M_t$ is a continuous local martingale on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$ and $T_{s_2}$ is a stopping time relative to $\{\sigma_t(\mathcal{S})\}_{u.c.}$, given $s_2 > 0$ it follows that $\overline{M}_t \equiv M_{t \wedge T_{s_2}}$ is a continuous local martingale with the same localizing sequence as $M_t$ by book 7's proposition 5.87. Further, applying book 7's corollary 6.16 and monotonicity of $\langle M \rangle_t$ obtains that $\left\langle \overline{M} \right\rangle_t$ is bounded:
$$\left\langle \overline{M} \right\rangle_t = \langle M \rangle_{t \wedge T_{s_2}} \le \langle M \rangle_{T_{s_2}} = s_2,$$
where the last step is 2) since $\langle M \rangle_{T_{s_2}} = F\left(F^\#(s_2)\right)$. This implies that $E\left[\left\langle \overline{M} \right\rangle_t\right] \le s_2$ for all $t$, and then $E\left[\left\langle \overline{M} \right\rangle_\infty\right] \le s_2$, noting that $\left\langle \overline{M} \right\rangle_\infty$ is well defined by monotonicity and boundedness.

If $\{T_n\}$ is a localizing sequence for $\overline{M}_t$ for which say $\left|\overline{M}_{t \wedge T_n}\right| \le n$ (proposition 5.75, book 7), then by the Doob-Meyer decomposition theorem of that book's proposition 6.5 applied to the martingale $\overline{M}_{t \wedge T_n}$, then stopped at $T_{s_2}$:
$$\overline{M}^2_{t \wedge T_n \wedge T_{s_2}} = \left\langle \overline{M} \right\rangle_{t \wedge T_n \wedge T_{s_2}} + M'_{t \wedge T_n \wedge T_{s_2}}.$$

Here $M'_t$ is a continuous local martingale on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma_t(\mathcal{S}), \mu)_{u.c.}$, with the same localizing sequence as $\overline{M}_t$. Since $M'_{t \wedge T_n}$ is a martingale and $T_{s_2} < \infty$ $\mu$-a.e. by unboundedness of $\langle M \rangle_t$, it follows from book 7's proposition 5.80 that $E\left[M'_{t \wedge T_n \wedge T_{s_2}}\right] = 0$ and thus (corollary 6.16, book 7):
$$E\left[\overline{M}^2_{t \wedge T_n}\right] \equiv E\left[M^2_{t \wedge T_{s_2} \wedge T_n}\right] = E\left[\langle M \rangle_{t \wedge T_n \wedge T_{s_2}}\right] \equiv E\left[\left\langle \overline{M} \right\rangle_{t \wedge T_n}\right] \le s_2. \qquad (1)$$

The proof of book 6's proposition 5.8 then obtains from (1) that $\{\overline{M}_{t \wedge T_n}\}_n$ are uniformly integrable for each $t$, and hence $\overline{M}_t$ is a martingale by proposition 5.88 of book 7. Applying Fatou's lemma (proposition 2.18, book 5) to (1) proves that $\overline{M}_t$ is an $L^2$-bounded martingale:
$$E\left[\overline{M}^2_t\right] = E\left[\lim_n \overline{M}^2_{t \wedge T_n}\right] \le \liminf_n E\left[\overline{M}^2_{t \wedge T_n}\right] = \lim_n E\left[\left\langle \overline{M} \right\rangle_{t \wedge T_n}\right] \le s_2, \qquad (2)$$
and then also that $\overline{M}_t$ is a uniformly integrable martingale by book 7's proposition 5.99.
2: $\overline{M}^2_t - \left\langle \overline{M} \right\rangle_t$ is a uniformly integrable martingale:

Uniform integrability of $\overline{M}_t$ and book 7's proposition 5.112 obtain the existence of $\overline{M}_\infty$ so that $\overline{M}_t \to_{L_1} \overline{M}_\infty$ and $\overline{M}_t = E\left[\overline{M}_\infty \mid \sigma_t(\mathcal{S})\right]$, $\mu$-a.e. By (2) and Fatou's lemma:
$$E\left[\overline{M}^2_\infty\right] = E\left[\lim_t \overline{M}^2_t\right] \le \lim_t E\left[\overline{M}^2_t\right] = \lim_t E\left[\left\langle \overline{M} \right\rangle_t\right] = E\left[\left\langle \overline{M} \right\rangle_\infty\right] \le s_2.$$

Then by Jensen's inequality (proposition 5.26, book 6):
$$\overline{M}^2_t \le E\left[\overline{M}^2_\infty \mid \sigma_t(\mathcal{S})\right].$$

Thus by integrability of $\overline{M}^2_\infty$ and book 5's exercise 5.113, $\overline{M}^2_t$ is bounded by a uniformly integrable family of functions and thus is uniformly integrable. And since $\left\langle \overline{M} \right\rangle_t \le s_2$, it follows that $\overline{M}^2_t - \left\langle \overline{M} \right\rangle_t$ is uniformly integrable, and a martingale as follows. Since $\overline{M}^2_{t \wedge T_n} - \left\langle \overline{M} \right\rangle_{t \wedge T_n}$ is a martingale for each $n$ as noted above, and $\overline{M}^2_{t \wedge T_n} - \left\langle \overline{M} \right\rangle_{t \wedge T_n} \to \overline{M}^2_t - \left\langle \overline{M} \right\rangle_t$ pointwise for each $t$, it follows that:
$$\overline{M}^2_{t \wedge T_n} - \left\langle \overline{M} \right\rangle_{t \wedge T_n} \to_{L_1} \overline{M}^2_t - \left\langle \overline{M} \right\rangle_t$$
for each $t$ by Lebesgue's dominated convergence theorem (proposition 2.43, book 5). That $\overline{M}^2_t - \left\langle \overline{M} \right\rangle_t$ is a martingale now follows from book 7's proposition 5.32.

3: $B_s \equiv M_{T_s}$ is a square-integrable martingale on $(\mathcal{S}, \sigma(\mathcal{S}), \sigma'_s(\mathcal{S}), \mu)_{u.c.}$ with $\langle B \rangle_s = s$:

Since $M_t$ is continuous and adapted, it is progressively measurable by book 7's proposition 5.19, and thus $B_s\equiv M_{T_s}$ is measurable relative to $\sigma'_s(\mathcal{S})\equiv\sigma_{T_s}(\mathcal{S})$ by that book's proposition 5.67. Uniform integrability of the martingales $\overline{M}_t$ and $\overline{M}_t^2-\langle\overline{M}\rangle_t$ and Doob's optional stopping theorem (proposition 5.117, book 7) obtain that if $0\leq s_1<s_2$ with $s_2$ as above, then since also $B_{s_j}=\overline{M}_{T_{s_j}}$:

$$E\big[B_{s_2}-B_{s_1}\,\big|\,\sigma'_{s_1}(\mathcal{S})\big]\equiv E\big[\overline{M}_{T_{s_2}}-\overline{M}_{T_{s_1}}\,\big|\,\sigma_{T_{s_1}}(\mathcal{S})\big]=0.$$

Also by optional stopping:

$$\begin{aligned}
E\big[(B_{s_2}-B_{s_1})^2\,\big|\,\sigma'_{s_1}(\mathcal{S})\big]&\equiv E\big[(\overline{M}_{T_{s_2}}-\overline{M}_{T_{s_1}})^2\,\big|\,\sigma_{T_{s_1}}(\mathcal{S})\big]\\
&=E\big[\overline{M}^2_{T_{s_2}}-2\overline{M}_{T_{s_2}}\overline{M}_{T_{s_1}}+\overline{M}^2_{T_{s_1}}\,\big|\,\sigma_{T_{s_1}}(\mathcal{S})\big]\\
&=E\big[\overline{M}^2_{T_{s_2}}-\overline{M}^2_{T_{s_1}}\,\big|\,\sigma_{T_{s_1}}(\mathcal{S})\big]\\
&=E\big[\langle\overline{M}\rangle_{T_{s_2}}-\langle\overline{M}\rangle_{T_{s_1}}\,\big|\,\sigma_{T_{s_1}}(\mathcal{S})\big]\\
&=s_2-s_1.
\end{aligned}$$

For this derivation, the second to last step follows from optional stopping because $\overline{M}_t^2-\langle\overline{M}\rangle_t$ is a martingale, while the last step is the definition of $T_s$.

Thus, since adapted as noted above and integrable by definition, $B_s$ is a martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma'_s(\mathcal{S}),\mu)_{u.c.}$ by the first derivation, and is square integrable by the second. Also $\langle B\rangle_s=s$ by the second derivation and book 7's proposition 6.12, since it proves $B_s^2-s$ is a martingale.

4: $B_s$ is continuous $\mu$-a.e. and is thus a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma'_s(\mathcal{S}),\mu)_{u.c.}$:

First, $B_0=M_{T_0}=0$. This is apparent if $T_0=0$ by the restriction on $M$, so assume $T_0=b>0$. Then by book 7's corollary 6.14, $M_t=M_0=0$ for $t\in[0,b]$ with probability 1, and so again $M_{T_0}=0$. Thus once it is proved that $B_s$ is continuous $\mu$-a.e., it will be a Brownian motion by Lévy's characterization. Now $T_s$ is right continuous in $s$ by 1), and thus so too is $B_s$.

For left continuity, fix $t_0\geq 0$ and define $S_{t_0}=\inf\{t\,|\,\langle M\rangle_t>\langle M\rangle_{t_0}\}$. Note that with $s_0\equiv\langle M\rangle_{t_0}$, $S_{t_0}=T_{s_0}$, and thus $S_{t_0}$ is a stopping time relative to $\{\sigma_t(\mathcal{S})\}_{u.c.}$. Since $S_{t_0}\geq t_0$ by monotonicity, left continuity of $M_{T_s}$ at $T_{s_0}$ requires that

$$B_{s_0}\equiv M_{T_{s_0}}=M_{t_0}.\tag{3}$$

In other words, left continuity at $T_{s_0}$ requires that $M$ be constant on the interval $[t_0,T_{s_0}]\subset[T_{s_0^-},T_{s_0}]$, where $T_{s_0^-}$ is the left limit of $T_s$ as

$s\to s_0^-$. Now if $S_{t_0}=t_0$ then (3) is satisfied since $S_{t_0}=T_{s_0}$ as noted above, and thus $B_s$ is continuous at $s_0$.

For the general case, define a process $N_s$ on $s\geq 0$ by:

$$N_s\equiv M_{(t_0+s)\wedge S_{t_0}}-M_{t_0}\equiv M^{S_{t_0}}_{t_0+s}-M_{t_0}.$$

Note that $N_0=0$ since $S_{t_0}\geq t_0$, and that $N_s$ is a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_{s+t_0}(\mathcal{S}),\mu)_{u.c.}$ by Doob's optional stopping theorem of book 7's proposition 5.84. Further, by (6.12) of book 7 and that book's corollary 6.16:

$$\langle N\rangle_s=\langle M\rangle^{S_{t_0}}_{t_0+s}-\langle M\rangle_{t_0}.$$

Now by continuity of $\langle M\rangle_t$ and the definition of $S_{t_0}$, $\langle M\rangle_{t_0+s}=\langle M\rangle_{t_0}$ for $0\leq s\leq S_{t_0}-t_0$, and thus:

$$\langle N\rangle_s=0,\quad 0\leq s\leq S_{t_0}-t_0.$$

By book 7's corollary 6.14, extended by exercise 6.10 below, it follows that with probability 1, $N_s=0$ for $0\leq s\leq S_{t_0}-t_0$. That is:

$$M_{t_0+s}=M_{t_0},\quad 0\leq s\leq S_{t_0}-t_0,$$

which proves left continuity of $B_s$ at $s_0\equiv\langle M\rangle_{t_0}$ with probability 1. As this is true for all rational $t_0$, and $\langle M\rangle_t$ is continuous, it follows that $M$ is constant on the interval $[t_0,T_{s_0}]$ for all $t_0$. Thus (3) is satisfied and $B_s$ is continuous $\mu$-a.e.

5: With probability 1, $M_t=B_{\langle M\rangle_t}$, $0\leq t<\infty$:

As noted in the statement of the proposition, $B_s\equiv M_{T_s}$ can be stated $B_s\equiv M_{F^{\#}(s)}$ for $F(t)\equiv\langle M\rangle_t$. Now step 4 proved that if $0\leq t<t'$ and $F(t)=F(t')$, then $\varphi(t)=\varphi(t')$ with $\varphi(t)\equiv M_t$. Thus by 4):

$$M_{F^{\#}(F(t))}=M_t,\ \text{all }t.$$

As $M_{F^{\#}(F(t))}=B_{F(t)}$, the proof is complete.

Exercise 6.10 (Quadratic variation and intervals of constancy) Let $M_t\in\mathcal{M}_{loc}$ be a continuous local martingale on the filtered space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, and $S,T$ stopping times with $S\leq T<\infty$, $\mu$-a.e. Prove that if $\langle M\rangle_T=\langle M\rangle_S$ $\mu$-a.e., then $M_t$ is constant on $[S,T]$ $\mu$-a.e. Hint: Given a positive integer $n$, define $A_n=\{T\leq n\}\in\sigma_n(\mathcal{S})$, and prove that if $\langle M\rangle_T=\langle M\rangle_S$ $\mu$-a.e. on $A_n$, then $M_t$ is constant on $[S,T]$ $\mu$-a.e. on $A_n$. Unioning sets for all $n$ completes the proof. For the $A_n$ result, recall 6.10 of book 7, that:

$$\sup_{t\leq n}\big|Q^{\pi_m}_t(M)-\langle M\rangle_t\big|\to_P 0,$$

and this implies $\mu$-a.e. convergence for a subsequence of the $\pi_m$-partitions (proposition 5.25, book 2). That $\langle M\rangle_T=\langle M\rangle_S$ $\mu$-a.e. on $A_n$ implies that $Q^{\pi_m}_T(M)-Q^{\pi_m}_S(M)\to 0$ $\mu$-a.e. for this subsequence. Prove that $M_t$ then has weak variation 0 on $[S,T]$, and complete the proof (recall proposition 6.1, book 7).
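The time-change construction just proved can be illustrated numerically. The following Monte Carlo sketch (illustrative and not from the text; the volatility function, grid sizes, seed, and tolerances are all assumptions) builds $M_t=\int_0^t\sigma(s)\,dB_s$ with deterministic $\sigma(s)=1+s$, so that $\langle M\rangle_t=\int_0^t(1+s)^2ds$, and checks that $\hat B_s\equiv M_{T_s}$ has mean 0 and variance approximately $s$, as the proposition predicts.

```python
import numpy as np

# Monte Carlo sketch of the time-change B_hat(s) = M_{T_s}: here
# M_t = int_0^t sigma(s) dB_s with deterministic sigma(s) = 1 + s,
# so <M>_t = int_0^t (1+s)^2 ds is deterministic and T_s is a.s. constant.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 5000, 1000, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
sig = 1.0 + t[:-1]

dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
M = np.hstack([np.zeros((n_paths, 1)), np.cumsum(sig * dB, axis=1)])
qv = np.concatenate([[0.0], np.cumsum(sig**2 * dt)])   # <M>_t on the grid

# T_s = first grid time with <M>_t >= s; evaluate B_hat(s) = M_{T_s}.
s_grid = np.array([0.5, 1.0, 2.0])                     # qv[-1] = 7/3 > 2
idx = np.searchsorted(qv, s_grid)
B_hat = M[:, idx]

var_hat = B_hat.var(axis=0)
print(np.round(var_hat, 2))   # approximately [0.5, 1.0, 2.0]
```

With a deterministic integrand the time change $T_s$ is non-random, which keeps the sketch short; the theorem itself of course covers random $\langle M\rangle_t$ as well.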

6.2 The Burkholder-Davis-Gundy Inequality


Similar to Doob’s martingale maximal inequality of book 7’s proposition
5.91, the Burkholder-Davis-Gundy inequality (sometimes
"inequalities") develops bounds for the Lp -norm of:
Mt max jMs j ;
0 s t

given a continuous local martingale Mt on (S; (S); s (S); )u:c: with


M0 = 0. Thus M 2 Mloc as in de…nition 3.67. It is named for the work of
Donald L. Burkholder (1927 –2013), Burgess Davis, and Richard F.
Gundy.

Whereas Doob’s result provides an Lp upper bound for Mt for a càdlàg


martingale Mt in terms of the Lp -norm of jMt j for p > 1; the BDG-inequality
provides upper and lower bounds for a continuous local martingale for all
p > 0; and these bounds are speci…ed in terms of the Lp=2 -norm of the
associated quadratic variation process hM it : As for Doob’s inequality, the
constants in the BDG-inequality are universal in the sense that they are
independent of t and the given process.
Using the results of book 7, we can derive the BDG-inequality for p = 2:
If M 2 Mloc ; let fTn g be a localizing sequence so that MtTn jMt^Tn j
n; and which exists by book 7’s proposition 5.75. Recalling that M Tn t =
hM it^T by that book’s corollary 6.16, we then apply proposition 6.18, then
Doob’s inequality, and proposition 6.18 again:
h i h i h i
2
E hM it^Tn = E (Mt^Tn )2 E Mt^Tn 4E (Mt^Tn )2 = 4E hM it^Tn :

Letting n ! 1 and applying Lebesgue’s monotone convergence theorem


(proposition 2.21, book 5) obtains the BDG-inequality for p = 2 :
h i
E [hM it ] E (Mt )2 4E [hM it ] : (6.4)

To extend this inequality to p > 2 we will use Itô’s lemma as the pri-
mary technical tool. For 0 < p < 2 we require a di¤erent approach using

Lenglart’s inequalities, named for results from a 1977 publication of E.


Lenglart, and which we state and prove next. The corollary of this result
will provide, after a short set-up, the necessary tool to complete the proof
below.
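Before proceeding, the $p=2$ inequality 6.4 above is easy to test by simulation. The following sketch (illustrative, not from the text; path counts and seed are assumptions) takes $M_t=B_t$, so $\langle M\rangle_t=t$, and confirms that the estimate of $E[(B_t^*)^2]$ lands between $t$ and $4t$.

```python
import numpy as np

# Monte Carlo sanity check of 6.4 for M = B (Brownian motion), <B>_t = t:
# E[<B>_t] <= E[(B_t^*)^2] <= 4 E[<B>_t], i.e., the estimate lies in [t, 4t].
rng = np.random.default_rng(1)
n_paths, n_steps, t = 10000, 500, 1.0
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(t / n_steps)
B = np.cumsum(dB, axis=1)
M_star = np.abs(B).max(axis=1)             # M_t^* = max_{s <= t} |B_s|
second_moment = float(np.mean(M_star**2))  # estimate of E[(B_t^*)^2]
print(round(second_moment, 3))             # between 1 and 4
```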
For its statement, we require the following notion, which we state in the context of continuous, adapted processes. By book 7's corollary 5.68, the needed measurability for this definition follows from left continuity, or right continuity including càdlàg, but we do not need this generality below.

Definition 6.11 (Dominated process) Let $X_t$ be a continuous, adapted process, and $A_t\geq 0$ a continuous, increasing, adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. Then $X_t$ is dominated by $A_t$ if for any bounded stopping time $T$:

$$E[|X_T|]\leq E[A_T].\tag{6.5}$$

Note that the following two results provide statements given any stopping time $T$, and thus include $T\equiv\infty$. Thus if $A_\infty\equiv\lim_{t\to\infty}A_t$ is well-defined, the following assures the existence of $X_\infty$ and provides information on its distribution.

Proposition 6.12 (Lenglart's inequalities) Let $X_t$ be a continuous, adapted process, and $A_t\geq 0$ a continuous, increasing, adapted process on a filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. If $X_t$ is dominated by $A_t$, then for all stopping times $T$ and all constants $a,b>0$, with:

$$X_T^*\equiv\max_{0\leq s\leq T}|X_s|:$$

1. $\Pr[X_T^*\geq a]\leq E[A_T]/a$;

2. $\Pr[X_T^*\geq a,\ A_T<b]\leq E[A_T\wedge b]/a$;

3. $\Pr[X_T^*\geq a]\leq E[A_T\wedge b]/a+\Pr[A_T\geq b]$.

Proof. 1: Given $T$, define the stopping time (proposition 5.60, book 7) $S\equiv\inf\{s:|X_s|\geq a\}$. Recalling that $\chi_A(\omega)\equiv\chi[A](\omega)=1$ for $\omega\in A$ and is 0 otherwise:

$$a\Pr\big[X^*_{T\wedge S\wedge t}\geq a\big]=E\big[|X_{T\wedge S\wedge t}|\,\chi\big[X^*_{T\wedge S\wedge t}\geq a\big]\big].$$

This follows because if $X^*_{T\wedge S\wedge t}\geq a$ then $|X_s|\geq a$ for some $s\leq T\wedge S\wedge t$, and so by definition $S\leq T\wedge t$ and $|X_{T\wedge S\wedge t}|=|X_S|=a$. Using domination and monotonicity:

$$a\Pr\big[X^*_{T\wedge S\wedge t}\geq a\big]\leq E\big[|X_{T\wedge S\wedge t}|\big]\leq E\big[A_{T\wedge S\wedge t}\big]\leq E[A_T].$$

Letting $t\to\infty$ and noting that $T\wedge S\wedge t\to T\wedge S$ obtains:

$$\Pr\big[X^*_{T\wedge S}\geq a\big]\leq E[A_T]/a.$$

The proof is complete since if $T<S$ then $X^*_{T\wedge S}=X^*_T<a$ by definition of $S$, and so $\Pr[X^*_{T\wedge S}\geq a]=\Pr[X^*_T\geq a]$.

2: Given $T$, define the stopping time $R\equiv\inf\{s:A_s\geq b\}$. Thus if $A_T<b$ then $T<R$, and so:

$$\Pr[X_T^*\geq a,\ A_T<b]\leq\Pr\big[X^*_{T\wedge R}\geq a\big].$$

Then by 1:

$$\Pr[X_T^*\geq a,\ A_T<b]\leq E[A_{T\wedge R}]/a\leq E[A_T\wedge b]/a.$$

3: This follows from 2 since:

$$\Pr[X_T^*\geq a]=\Pr[X_T^*\geq a,\ A_T<b]+\Pr[X_T^*\geq a,\ A_T\geq b],$$

and the last term is bounded by $\Pr[A_T\geq b]$.
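As a concrete illustration (not from the text; the parameter values are assumptions), take $X_t=B_t^2$ and $A_t=t$: optional stopping gives $E[B_T^2]=E[T]$ for bounded $T$, so $X$ is dominated by $A$, and inequality 1 with deterministic $T=t$ reads $\Pr[\max_{s\le t}B_s^2\ge a]\le t/a$.

```python
import numpy as np

# Check Lenglart's inequality 1 for X_t = B_t^2 dominated by A_t = t:
# Pr[X_t^* >= a] <= E[A_t]/a = t/a.
rng = np.random.default_rng(2)
n_paths, n_steps, t, a = 10000, 500, 1.0, 4.0
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(t / n_steps)
X_star = (np.cumsum(dB, axis=1) ** 2).max(axis=1)   # X_t^* = max_s B_s^2
prob = float(np.mean(X_star >= a))
print(round(prob, 4), "<=", t / a)
```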

Corollary 6.13 (Lenglart's inequalities) Let $X_t$ and $A_t$ be given as above. Then for any stopping time $T$ and $0<\kappa<1$:

$$E\big[(X_T^*)^\kappa\big]\leq\frac{2-\kappa}{1-\kappa}E\big[A_T^\kappa\big].\tag{6.6}$$

Proof. Letting $F(x)=x^\kappa$, then $F(x)=\int_0^\infty\chi[y\leq x]\,dF(y)$ and thus:

$$E[F(X_T^*)]=\int\!\int_0^\infty\chi[y\leq X_T^*]\,dF(y)\,d\mu.$$

The integrand $\chi[y\leq X_T^*]$ is nonnegative, and we now prove that it is measurable relative to the product space $[0,\infty)\times\mathcal{S}$ and measure $dF\times\mu$. Since $X_t$ is continuous and adapted, $X_T^*$ is $\sigma_T(\mathcal{S})$-measurable on $\{T<\infty\}$ by book 7's corollary 5.68. Further, $\{T=\infty\}$ is $\sigma(\mathcal{S})$-measurable by definition of a stopping time. Thus the random vector $Z:[0,\infty)\times\mathcal{S}\to[0,\infty)\times[0,\infty)$ defined by $Z:(y,\omega)\to(y,X_T^*(\omega))$ is $[\mathcal{B}[0,\infty)\times\sigma(\mathcal{S})]$-measurable by book 2's proposition 3.32, since the component variates are so measurable. Then since $\{y\leq x\}$ is closed in $[0,\infty)\times[0,\infty)$, $Z^{-1}[y\leq x]$ is $[\mathcal{B}[0,\infty)\times\sigma(\mathcal{S})]$-measurable, and so by definition $\chi[y\leq X_T^*]$ is a measurable function. Tonelli's theorem

of book 5’s proposition 5.22 thus allows a reordering of the iterated integral
to obtain with 3 of proposition 6.12:
Z 1Z
E [F (XT )] = [y XT ] d dF (y)
0
Z 1
= Pr [XT y] dF (y)
0
Z 1
(E [AT ^ y] =y + Pr [AT y]) dF (y):
0

Now

E [AT ^ y] =y = E [(AT ^ y [AT < y])] =y + E [(AT ^ y [AT y])] =y


= E [(AT [AT < y])] =y + Pr [AT y]
= E [(AT [AT < y])] =y + E [ [AT y]] :

Applying Tonelli’s theorem again:


Z 1
E [F (XT )] (E [AT [AT < y]] =y + 2E [ [AT y]]) dF (y)
0
Z 1 Z 1
= E (AT [AT < y] =y) dF (y) + 2 [AT y] dF (y) :
0 0
Z 1
= E AT (1=y) dF (y) + 2F (AT ) :
AT

By book 3’s proposition 3.30, for all N > AT :


Z N Z N
2 1
AT (1=y) dF (y) = AT y dy = AT N AT ;
AT AT 1

and thus since 0 < <1:


Z 1
AT (1=y) dF (y) = AT :
AT 1

The proof is complete with a little algebra.
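A numeric illustration of 6.6 (not from the text; the parameter values are assumptions): with $X_t=B_t^2$ dominated by $A_t=t$ (by optional stopping) and exponent $\kappa=1/2$, the corollary bounds $E[\max_{s\le t}|B_s|]$ by $3\sqrt t$.

```python
import numpy as np

# Check corollary 6.6 with kappa = 1/2, X_t = B_t^2, A_t = t:
# E[(X_t^*)^(1/2)] = E[max_s |B_s|] <= ((2-k)/(1-k)) * t^(1/2) = 3*sqrt(t).
rng = np.random.default_rng(3)
kappa, t = 0.5, 1.0
n_paths, n_steps = 10000, 500
dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(t / n_steps)
X_star = (np.cumsum(dB, axis=1) ** 2).max(axis=1)
lhs = float(np.mean(X_star ** kappa))               # estimate of E[max|B_s|]
rhs = (2 - kappa) / (1 - kappa) * t ** kappa        # = 3.0
print(round(lhs, 3), "<=", rhs)
```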

Remark 6.14 (On potential generalizations of 6.6) It is worth noting that the above proof only utilized the explicit form of $F(x)$ in the very last step, for the evaluation of the integral $\int_{A_T}^\infty(1/y)\,dF(y)$; otherwise the proof is perfectly general. However, for this integral to be finite for differentiable $F(x)$ requires that $F'(y)/y=o(1/y)$ as $y\to\infty$, recalling that this means that

$$\frac{F'(y)/y}{1/y}=F'(y)\to 0,\ \text{as }y\to\infty.$$

In terms of power functions $F(x)=x^\kappa$, this then requires $\kappa<1$. On the other hand, since $A_T$ may equal 0, we require $\kappa>0$.

Thus this lemma provides the most general application of Lenglart's inequalities to power function estimates. However, other generalizations are possible within the constraints noted.

Proposition 6.15 (The Burkholder-Davis-Gundy inequality) Let $M_t$ be a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $M_0=0$, and thus $M\in\mathcal{M}_{loc}$. Then for any $p>0$, there exist universal constants $c_p$ and $C_p$ so that for all $t\leq\infty$:

$$c_pE\big[\langle M\rangle_t^{p/2}\big]\leq E\big[(M_t^*)^p\big]\leq C_pE\big[\langle M\rangle_t^{p/2}\big],\tag{6.7}$$

where

$$M_t^*\equiv\max_{0\leq s\leq t}|M_s|.$$

Proof. 1: $0<p<2$:

First, we claim that the process $M_t^2$ is dominated by $\langle M\rangle_t$. Let $\{T_n\}$ be a localizing sequence for $M_t$ with $|M_{t\wedge T_n}|\leq n$ (proposition 5.75, book 7), and $T$ a bounded stopping time, say $T\leq N$. Then by book 7's corollary 6.26 and proposition 6.18, denoting $M_{t\wedge T_n}\equiv M^{T_n}_t$:

$$E\big[M^2_{t\wedge T_n\wedge T}\big]=E\big[\langle M^{T_n\wedge T}\rangle_t\big]=E\big[\langle M\rangle_{t\wedge T_n\wedge T}\big].$$

Letting $n\to\infty$ and applying Lebesgue's monotone convergence theorem (proposition 2.21, book 5):

$$E\big[M^2_{t\wedge T}\big]=E\big[\langle M\rangle_{t\wedge T}\big],\tag{*}$$

and the claim is proved with $t>N$. Applying corollary 6.13, for all $0<\kappa<1$ and all $t$:

$$E\big[\big((M^2)_t^*\big)^\kappa\big]\leq\frac{2-\kappa}{1-\kappa}E\big[\langle M\rangle_t^\kappa\big],$$

and with $\kappa=p/2$ the upper bound in 6.7 follows since $(M^2)_t^*=(M_t^*)^2$.

Similarly, the process $\langle M\rangle_t$ is dominated by $(M_t^*)^2$ since by (*):

$$E\big[\langle M\rangle_{t\wedge T}\big]=E\big[M^2_{t\wedge T}\big]\leq E\big[(M^*_{t\wedge T})^2\big],$$

and letting $t>N$ proves the claim. Thus for all $0<\kappa<1$ and all $t$:

$$E\big[\langle M\rangle_t^\kappa\big]\leq\frac{2-\kappa}{1-\kappa}E\big[(M_t^*)^{2\kappa}\big],$$

and with $\kappa=p/2$ the lower bound in 6.7 follows.

2: $p\geq 2$:

Since $\langle M\rangle_t$ is continuous and adapted by book 7's proposition 6.12, $T'_n\equiv\inf\{t\,|\,\langle M\rangle_t\geq n\}$ is a stopping time by that book's proposition 5.60. Thus with $\{T_n\}$ a localizing sequence for $M_t$ with $|M_{t\wedge T_n}|\leq n$ as above and $R_n\equiv T_n\wedge T'_n$, we prove the result for $M^{R_n}_t$ for $t<\infty$. Letting $n\to\infty$ and applying Lebesgue's monotone convergence theorem (proposition 2.21, book 5) obtains 6.7 for $t<\infty$, while letting $t\to\infty$ and applying monotone convergence again proves the case for $t=\infty$, though these expectations may be infinite.

To simplify notation, instead of $M^{R_n}_t$ we prove 6.7 for $p\geq 2$ for a continuous martingale $M_t$ with $|M_t|\leq n$ and $\langle M\rangle_t\leq n$ for all $t<\infty$. The case $p=2$ was derived in 6.4 above, so assume $p>2$. Letting $f(x)=|x|^p\in C^2(\mathbb{R})$, then $f'(x)=p|x|^{p-1}\operatorname{sgn}(x)$, $f''(x)=p(p-1)|x|^{p-2}$, and Itô's lemma of 5.2 obtains:

$$|M_t|^p=p\int_0^t|M_s|^{p-1}\operatorname{sgn}(M_s)\,dM_s+\frac{1}{2}p(p-1)\int_0^t|M_s|^{p-2}\,d\langle M\rangle_s.$$

For the stochastic integral, the integrand $|M_s|^{p-1}\operatorname{sgn}(M_s)\in\mathcal{H}^M_{2,loc}$ since it is predictable (corollary 5.17, book 7) and bounded, and thus this integral is a local martingale by propositions 3.83 and 4.20. Letting $v_N(s,\omega)\equiv|M_{s\wedge N}|^{p-1}\operatorname{sgn}(M_{s\wedge N})\chi_{[0,N]}(s)$, so $v_N(s,\omega)=|M_s|^{p-1}\operatorname{sgn}(M_s)$ for $s\leq N$ and 0 otherwise, then since $M_s$ is an $L^2$-bounded martingale and $v_N(s,\omega)\in\mathcal{H}^M_2$, the integral $\int_0^t v_N(s,\omega)\,dM_s$ is a martingale by proposition 3.27. Thus since $\int_0^t|M_s|^{p-1}\operatorname{sgn}(M_s)\,dM_s=\int_0^t v_N(s,\omega)\,dM_s$ for $t\leq N$ by proposition 3.64, the stochastic integral is a martingale.

Hence:

$$E\big[|M_t|^p\big]=\frac{1}{2}p(p-1)E\Big[\int_0^t|M_s|^{p-2}\,d\langle M\rangle_s\Big]\leq\frac{1}{2}p(p-1)E\big[(M_t^*)^{p-2}\langle M\rangle_t\big].$$

Doob's inequality of book 7's proposition 5.91 applied on the left obtains:

$$E\big[(M_t^*)^p\big]\leq\Big(\frac{p}{p-1}\Big)^pE\big[|M_t|^p\big],$$

while Hölder's inequality (proposition 3.46, book 4) with conjugate exponents $p'\equiv p/2>1$ and $q\equiv p'/(p'-1)$, applied on the right, yields:

$$E\big[(M_t^*)^{p-2}\langle M\rangle_t\big]\leq E\big[\langle M\rangle_t^{p/2}\big]^{2/p}E\big[(M_t^*)^{(p-2)q}\big]^{1/q}.$$

Now $2/p+1/q=1$ and so $(p-2)q=p$. Combining estimates obtains:

$$E\big[(M_t^*)^p\big]^{1-1/q}\leq\frac{1}{2}p(p-1)\Big(\frac{p}{p-1}\Big)^pE\big[\langle M\rangle_t^{p/2}\big]^{2/p},$$

and since $1-1/q=2/p$, the upper bound is verified by exponentiation.

For the lower bound, define a process

$$N_t=\int_0^t\langle M\rangle_s^{(p-2)/4}\,dM_s,$$

which is a martingale by the above argument. Details of this are left as an exercise. By 3.69:

$$\langle N\rangle_t=\int_0^t\langle M\rangle_s^{(p-2)/2}\,d\langle M\rangle_s=\frac{2}{p}\langle M\rangle_t^{p/2}.\tag{**}$$

Now since $\langle M\rangle_s^{(p-2)/4}$ is continuous and increasing, using Itô's lemma of 5.16 (equivalently 4.14) with product function $f(x,y)=xy$:

$$M_t\langle M\rangle_t^{(p-2)/4}=N_t+\int_0^tM_s\,d\langle M\rangle_s^{(p-2)/4}.$$

Thus:

$$|N_t|\leq M_t^*\langle M\rangle_t^{(p-2)/4}+\int_0^tM_s^*\,d\langle M\rangle_s^{(p-2)/4}\leq 2M_t^*\langle M\rangle_t^{(p-2)/4}.$$

From (**), by applying book 7's proposition 6.18 that $E[\langle N\rangle_t]=E\big[|N_t|^2\big]$, and then Hölder's inequality with the above conjugate pair:

$$\frac{2}{p}E\big[\langle M\rangle_t^{p/2}\big]=E\big[|N_t|^2\big]\leq 4E\big[(M_t^*)^2\langle M\rangle_t^{(p-2)/2}\big]\leq 4E\big[\langle M\rangle_t^{(p-2)q/2}\big]^{1/q}\big(E\big[(M_t^*)^p\big]\big)^{2/p}.$$

Here again $2/p+1/q=1$ and thus $(p-2)q/2=p/2$, so dividing obtains:

$$\frac{2}{p}E\big[\langle M\rangle_t^{p/2}\big]^{1-1/q}\leq 4\big(E\big[(M_t^*)^p\big]\big)^{2/p}.$$

Since $1-1/q=2/p$, the lower bound is verified by exponentiation.

Corollary 6.16 (The Burkholder-Davis-Gundy inequality) Let $M_t$ be a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $M_0=0$, and thus $M\in\mathcal{M}_{loc}$, and let $T$ be any stopping time. Then for any $p>0$, and the universal constants $c_p$ and $C_p$ of proposition 6.15:

$$c_pE\big[\langle M\rangle_T^{p/2}\big]\leq E\big[(M_T^*)^p\big]\leq C_pE\big[\langle M\rangle_T^{p/2}\big],\tag{6.8}$$

where $M_T^*\equiv\max_{0\leq s\leq T}|M_s|$.

Proof. By proposition 5.87 of book 7, $M_{t\wedge T}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and thus by 6.7, for all $p>0$:

$$c_pE\big[\langle M\rangle_{t\wedge T}^{p/2}\big]\leq E\big[(M_{t\wedge T}^*)^p\big]\leq C_pE\big[\langle M\rangle_{t\wedge T}^{p/2}\big],$$

where $M_{t\wedge T}^*\equiv\max_{0\leq s\leq t\wedge T}|M_s|$. As $t\to\infty$, continuity assures that $\langle M\rangle_{t\wedge T}^{p/2}\to\langle M\rangle_T^{p/2}$ and $(M_{t\wedge T}^*)^p\to(M_T^*)^p$ pointwise, and since these are increasing, 6.8 follows from Lebesgue's monotone convergence theorem of book 5's proposition 2.21.

Corollary 6.17 (The Burkholder-Davis-Gundy inequality) Let $M_t$ be a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $M_0=0$, and thus $M\in\mathcal{M}_{loc}$. If for any $p\geq 1$:

$$E\big[\langle M\rangle_t^{p/2}\big]<\infty,\ \text{for all }t,$$

then $M_t$ is an $L^p$-martingale. If:

$$E\big[\langle M\rangle_t^{p/2}\big]\leq K_p<\infty,\ \text{for all }t,$$

then $M_t$ is an $L^p$-bounded martingale.

Proof. By 6.7 it follows that $E[(M_t^*)^p]<\infty$ for all $t$. If $p=1$, the conclusion is obtained from proposition 5.88, book 7. If $p>1$, then by Hölder's inequality (proposition 3.46, book 4):

$$E[M_t^*]\leq E\big[(M_t^*)^p\big]^{1/p}<\infty,\ \text{for all }t,$$

and the conclusion again follows. In both cases, $L^p$-boundedness follows from 6.7.
6.3 Local Martingales from Semimartingales


In simple terms, Itô's lemma states that smooth functions (i.e., $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$) transform $m$-dimensional continuous semimartingales $X_t=X_0+F_t+M_t$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ into $f(t,X_t)$, which are also continuous semimartingales on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$. When these semimartingales are Itô diffusions and expressed as solutions of stochastic differential equations, or are driven by Brownian motions, Itô's lemma also provides a simple and flexible recipe for creating local martingales, meaning members of $\mathcal{M}_{loc}$ of definition 3.67.

The development of the next section is the first of three that will connect solutions of certain partial differential equations with those of stochastic differential equations. The other two results will be introduced in the next two sections and then developed more completely in book 9.

6.3.1 Itô Diffusion Version

Recalling remark 5.26, let $X_t$ be an $m$-dimensional Itô diffusion, and thus a semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, with components specified as in 5.22:

$$X_t^{(i)}(\omega)=X_0^{(i)}(\omega)+\int_0^t u_i(s,X_s(\omega))\,ds+\sum_{j=1}^n\int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega).$$

The components of $u(s,x)=\{u_i(s,x)\}_{i=1}^m$ and $v(s,x)=\{v_{ij}(s,x)\}_{i=1,j=1}^{m,n}$ are assumed to be Borel measurable functions that satisfy 5.24.

If $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ and $\{B^{(j)}\}_{j=1}^n$ are independent Brownian motions, then $f(t,X_t)$ is a generalized continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ by corollary 5.28, and for every $0\leq t<\infty$:

$$f(t,X_t)-f(0,X_0)=\int_0^t\Big(f_t(s,X_s)+\sum_{i=1}^m f_{x_i}(s,X_s)u_i(s,X_s)\Big)ds+\frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\int_0^t f_{x_ix_j}(s,X_s)\,\sigma^2_{ij}(s,X_s)\,ds+\sum_{i=1}^m\sum_{j=1}^n\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)},\ \mu\text{-a.e.}$$

As noted in remark 5.5, this and continuity imply that $\mu$-a.e., this identity is valid for all $t$.

Here we introduced the $m\times m$ matrix:

$$\sigma^2(s,X_s)\equiv v(s,X_s)v^T(s,X_s),$$

and so:

$$\sigma^2_{ij}(s,X_s)\equiv\sum_{k=1}^n v_{ik}(s,X_s)v_{jk}(s,X_s).\tag{6.9}$$

Let $L$ denote a partial differential operator in the spatial $x$-variables defined by:

$$L\equiv\sum_{i=1}^m u_i(s,x)\frac{\partial}{\partial x_i}+\frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\sigma^2_{ij}(s,x)\frac{\partial^2}{\partial x_i\partial x_j}.\tag{6.10}$$

Itô's lemma of 5.27 can then be expressed:

$$f(t,X_t)-f(0,X_0)=\int_0^t\Big(\frac{\partial f}{\partial t}(s,X_s)+Lf(s,X_s)\Big)ds+\sum_{i=1}^m\sum_{j=1}^n\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)},\ \mu\text{-a.e.}$$

The following result is now a simple consequence of Itô's lemma. This result applies equally well to spatial functions $f(x)\in C^2(\mathbb{R}^m)$, for which the associated partial differential equations would be $Lf(x)=0$ or $Lf(x)=g(x)$. If $u_i(t,x)=0$ for all $i$, then $X_t$ is an $m$-dimensional generalized local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and the following applies with an operator $L$ that then reflects only second derivatives. Finally, we assume independent Brownian motions, though this can be generalized with some additional notational complexity, as noted in remark 5.29.

Proposition 6.18 (Local martingales from Itô diffusions) Let $X_t$ be an $m$-dimensional Itô diffusion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with independent $\{B^{(j)}\}_{j=1}^n$ as above, and let $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ satisfy the partial differential equation:

$$\frac{\partial f}{\partial t}(s,x)+Lf(s,x)=0,$$

with $L$ defined in 6.10. Then:

$$M_t\equiv f(t,X_t)-f(0,X_0)\in\mathcal{M}_{loc},$$

and so is a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $M_0=0$.

More generally, if $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ satisfies:

$$\frac{\partial f}{\partial t}(s,x)+Lf(s,x)=g(s,x),$$

where $g(s,x)$ is Borel measurable and satisfies the $u$-constraint in 5.24, then:

$$M_t\equiv f(t,X_t)-f(0,X_0)+\int_0^t g(s,X_s)\,ds\in\mathcal{M}_{loc}.$$

Proof. The case $g(s,x)=0$ follows from the general case, which we now address. As noted in remark 5.26, $g(s,X_s)$ will be proved to be predictable in book 9, and thus is progressively measurable by book 7's proposition 5.19. It then follows from the $u$-restriction in 5.24 that this integral is continuous and adapted by corollary 3.74. Then by substitution into Itô's lemma of 5.27 expressed as above:

$$f(t,X_t)-f(0,X_0)+\int_0^t g(s,X_s)\,ds=\sum_{i=1}^m\sum_{j=1}^n\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)},\ \mu\text{-a.e.}$$

The integrals on the right are local martingales by proposition 3.83 if $f_{x_i}(s,X_s)v_{ij}(s,X_s)$ is predictable and:

$$\Pr\Big[\int_0^t f_{x_i}^2(s,X_s)v_{ij}^2(s,X_s)\,ds<\infty,\ \text{all }t\Big]=1.$$

By remark 5.26 it follows that $v_{ij}(s,X_s)$ is predictable, while the same book 9 result applied to continuous, and thus Borel measurable, $f_{x_i}(s,x)$ obtains predictability of $f_{x_i}(s,X_s)$. The integral constraint then follows since $v_{ij}(s,X_s(\omega))$ satisfies 5.24 and, by continuity, $f_{x_i}(s,X_s)$ is bounded on $[0,t]$ for all $\omega$.

Example 6.19 (Geometric Brownian motion) If $m=n=1$, recall the continuous semimartingale $X_t$ of 5.12 of example 5.17, where for given parameters $\alpha,\sigma$:

$$X_t=X_0+\int_0^t\alpha X_s\,ds+\int_0^t\sigma X_s\,dB_s,\ \mu\text{-a.e.},$$

where $B_t$ is a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$. The solution was derived in 5.13:

$$X_t=X_0\exp\Big[\Big(\alpha-\frac{1}{2}\sigma^2\Big)t+\sigma B_t\Big].$$

In this case, noting that $\sigma^2(s,x)\equiv v^2(s,x)=\sigma^2x^2$:

$$L\equiv\alpha x\frac{\partial}{\partial x}+\frac{1}{2}\sigma^2x^2\frac{\partial^2}{\partial x^2},$$

and thus $f(t,X_t)-f(0,X_0)\in\mathcal{M}_{loc}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ for any $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$ such that:

$$\frac{\partial f}{\partial t}+\alpha x\frac{\partial f}{\partial x}+\frac{1}{2}\sigma^2x^2\frac{\partial^2f}{\partial x^2}=0.$$

More generally, if $g(s,x)$ is Borel measurable and satisfies the $u$-constraint in 5.24, then $f(t,X_t)-f(0,X_0)+\int_0^tg(s,X_s)\,ds\in\mathcal{M}_{loc}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ for any $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R})$ such that:

$$\frac{\partial f}{\partial t}+\alpha x\frac{\partial f}{\partial x}+\frac{1}{2}\sigma^2x^2\frac{\partial^2f}{\partial x^2}=g.$$

For this example, such $g(t,x)$ must in fact be at least continuous.
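As a quick numerical illustration (a sketch, not from the text; the parameter values are assumptions), note that $f(t,x)=xe^{-\alpha t}$, with $\alpha$ the drift parameter, satisfies this PDE with $g=0$, since $f_t+\alpha xf_x+\frac12\sigma^2x^2f_{xx}=-\alpha xe^{-\alpha t}+\alpha xe^{-\alpha t}+0=0$. Thus the discounted process $X_te^{-\alpha t}-X_0$ is a local martingale, and here in fact a martingale, so $E[X_te^{-\alpha t}]=X_0$:

```python
import numpy as np

# f(t, x) = x*exp(-alpha*t) solves f_t + alpha*x*f_x + 0.5*sigma^2*x^2*f_xx = 0,
# so E[X_t * exp(-alpha*t)] = X_0 for the GBM X_t; alpha, sigma, X0, T are
# illustrative values.
rng = np.random.default_rng(4)
alpha, sigma, X0, T = 0.05, 0.2, 100.0, 1.0
n_paths = 200000
B_T = rng.standard_normal(n_paths) * np.sqrt(T)
X_T = X0 * np.exp((alpha - 0.5 * sigma**2) * T + sigma * B_T)  # exact GBM at T
f_T = X_T * np.exp(-alpha * T)
print(round(float(np.mean(f_T)), 2))   # approximately X0 = 100
```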

Example 6.20 1. Laplacian: Let $m=n$, $u_i(t,x)=0$ for all $i$, and $v(s,x)v^T(s,x)=I$, so $v(s,x)$ is a rotation matrix for all $(s,x)$. Then $L$ reduces to:

$$L\equiv\frac{1}{2}\sum_{i=1}^m\frac{\partial^2}{\partial x_i^2},$$

which is half the Laplacian, also called the Laplace operator. This operator is an example of a strongly elliptic operator, is denoted $\Delta$ and so $L\equiv\frac{1}{2}\Delta$, and is named for Pierre-Simon Laplace (1749 – 1827). This operator is sometimes denoted $\nabla^2$, with the gradient $\nabla\equiv(\frac{\partial}{\partial x_1},...,\frac{\partial}{\partial x_m})$ and notationally $\nabla^2\equiv\nabla\cdot\nabla$, an inner product. A function $f(x)\in C^2(\mathbb{R}^m)$ that satisfies the elliptic partial differential equation $\Delta f=0$ is called a harmonic function.

Let:

$$X_t^{(i)}(\omega)=X_0^{(i)}(\omega)+\sum_{j=1}^m\int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega),$$

with $\{B^{(j)}\}_{j=1}^m$ independent Brownian motions on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, or equivalently (proposition 1.42, book 7) let $B\equiv(B^{(1)},...,B^{(m)})$ be an $m$-dimensional Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$. Then if $v(s,x)v^T(s,x)=I$ and $f$ is harmonic, $f(X_t)-f(X_0)\in\mathcal{M}_{loc}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$.

Many harmonic functions have singularities at $x=0$, but an example of one that does not when $n=2$ is $f(x,y)=e^x\sin y$.
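The harmonic example can be spot-checked by simulation (a sketch, not from the text; the starting point and horizon are illustrative assumptions): since $f(x,y)=e^x\sin y$ is harmonic, $f(W_t)-f(W_0)$ is a local martingale for a 2-dimensional Brownian motion $W_t$, and here it is in fact a martingale, so $E[f(W_t)]=f(W_0)$.

```python
import numpy as np

# f(x, y) = exp(x)*sin(y) is harmonic, so for a 2-d Brownian motion W started
# at (x0, y0), E[f(W_t)] = f(x0, y0).  (x0, y0, t are illustrative values.)
rng = np.random.default_rng(5)
x0, y0, t, n_paths = 0.3, 1.0, 1.0, 400000
W1 = x0 + rng.standard_normal(n_paths) * np.sqrt(t)
W2 = y0 + rng.standard_normal(n_paths) * np.sqrt(t)
f_t = np.exp(W1) * np.sin(W2)
target = np.exp(x0) * np.sin(y0)
print(round(float(np.mean(f_t)), 3), "vs", round(float(target), 3))
```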
2. Heat equation: Let $m=n$, $u_i(t,x)=0$ for all $i$, and $v(s,x)v^T(s,x)=I$, and assume that $\tilde{f}(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ satisfies:

$$\frac{\partial}{\partial t}\tilde{f}(t,x)=\Delta\tilde{f}(t,x).$$

This is an example of the heat equation from physics, which is a partial differential equation of parabolic type.

Defining $f(t,x)\equiv\tilde{f}\big(\frac{1}{2}(T-t),x\big)$ on $0\leq t\leq T$ for some $T>0$, it follows that $\frac{\partial}{\partial t}f(t,x)+\frac{1}{2}\Delta f(t,x)=0$ for $t\leq T$. Thus with $X_t^{(i)}$ defined in 1, $f(t,X_t)-f(0,X_0)\in\mathcal{M}_{loc}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ for $t\leq T$.

An example of this is:

$$\tilde{f}(t,x)=\frac{1}{(4t)^{n/2}}\exp\Big[-\sum_{i=1}^m x_i^2/4t\Big].$$
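This kernel is easy to verify by finite differences (an illustrative sketch, not from the text), here in one dimension:

```python
import numpy as np

# Finite-difference check that f(t, x) = (4t)^(-1/2) * exp(-x^2/(4t))
# satisfies the heat equation f_t = f_xx (1-dimensional case).
def f(t, x):
    return (4.0 * t) ** -0.5 * np.exp(-x**2 / (4.0 * t))

t0 = 1.0
x = np.linspace(-2.0, 2.0, 9)
h = 1e-4
f_t = (f(t0 + h, x) - f(t0 - h, x)) / (2 * h)               # time derivative
f_xx = (f(t0, x + h) - 2 * f(t0, x) + f(t0, x - h)) / h**2  # second space derivative
err = float(np.max(np.abs(f_t - f_xx)))
print(err)   # near zero
```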

3. GBM Redux: When $n=m=1$ the heat equation in 2 reduces to (changing notation):

$$\tilde{f}_s(s,y)=\tilde{f}_{yy}(s,y),$$

where subscripts denote derivatives. With a change of variables, the differential equation of example 6.19 can be transformed to the heat equation. Specifically, let $T>0$ be fixed, and $f(t,x)$ of example 6.19 reparametrized on $0\leq s\leq\frac{1}{2}\sigma^2T$ by:

$$f(t,x)=e^{ay+bs}\tilde{f}(s,y),$$

where:

$$a=\frac{\sigma^2-2\alpha}{2\sigma^2},\quad b=-a^2,\quad s=\frac{1}{2}\sigma^2(T-t),\quad x=e^y.$$

Then a change of variables obtains the above heat equation in $\tilde{f}(s,y)$. The solution to the heat equation then yields a solution to the original equation of example 6.19 over $0\leq t\leq T$ by resubstitution. With $X_t$ derived in 5.13:

$$X_t=X_0\exp\Big[\Big(\alpha-\frac{1}{2}\sigma^2\Big)t+\sigma B_t\Big],$$

it follows that $f(t,X_t)-f(0,X_0)\in\mathcal{M}_{loc}$ is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ for $t\leq T$.
6.3.2 Semimartingale Version


If Xt = X0 + Ft + Mt is an m-dimensional continuous semimartingale on
(S; (S); s (S); )u:c: and f (t; x) 2 C 1;2 ([0; 1) Rm ); then f (t; Xt ) is a
continuous semimartingale on (S; (S); s (S); )u:c: by proposition 5.22,
and for every 0 t < 1 :
Z t Xm Z t
f (t; Xt ) f (0; X0 ) = ft (s; Xs )ds + fxi (s; Xs )dFs(i)
0 i=1 0
Z t D E
1 Xm Xm
+ fxi xj (s; Xs )d M (i) ; M (j)
2 i=1 j=1 0 s
Xm Z t
+ fxi (s; Xs )dMs(i) ; -a.e.
i=1 0

As one example of how we make this general analysis tractable, assume


that fM (j) gm
j=1 fB (j) gm
j=1 are Brownian motions on (S; (S); s (S); )u:c:
(i)
for which B ; B (j) = ij t: As in 4 of section 2.2.1, ( jk ) is a pos-
t
itive semide…nite matrix. If fB (j) gm
j=1 are independent Brownian motions,
or equivalently (proposition 1.42, book 7) if B (1) (m)
(B ; :::; B ) is an m-
dimensional Brownian motion on (S; (S); s (S); )u:c: ; then I; the
m m identity matrix.
De…ne the partial di¤erential operator:
1 Xm Xm @2
L0 ij : (6.11)
2 i=1 j=1 @xi @xj
When the Brownian motions are independent, then I as noted above,
and this L0 = 12 as above,

Proposition 6.21 (Local martingales from semimartingales 1) Assume that $\{M^{(j)}\}_{j=1}^m\equiv\{B^{(j)}\}_{j=1}^m$ are Brownian motions on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ for which $\langle B^{(i)},B^{(j)}\rangle_t=\rho_{ij}t$, and let $X_t=X_0+B_t$. If $f(t,x)\in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ satisfies the partial differential equation:

$$\frac{\partial f}{\partial t}(s,x)+L_0f(s,x)=g(s,x),$$

with $L_0$ defined in 6.11, then:

$$f(t,X_t)-f(0,X_0)+\int_0^t g(s,X_s)\,ds\in\mathcal{M}_{loc}$$

is a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$.

Proof. Left as an exercise.

Another example of how we can make the general set-up tractable is to assume that $m=2$, that $X_t^{(1)}=X_0^{(1)}+M_t$ and $X_t^{(2)}=X_0^{(2)}+\langle M\rangle_t$, and thus $X_t^{(1)}$ is a local martingale with $\sigma_0(\mathcal{S})$-measurable $X_0^{(1)}$, and $X_t^{(2)}$ is its quadratic variation plus $\sigma_0(\mathcal{S})$-measurable $X_0^{(2)}$. In this case we obtain:

$$f(t,X_t)-f(0,X_0)=\int_0^t f_t(s,X_s)\,ds+\int_0^t\Big(f_{x_2}(s,X_s)+\frac{1}{2}f_{x_1x_1}(s,X_s)\Big)d\langle M\rangle_s+\int_0^t f_{x_1}(s,X_s)\,dM_s,\ \mu\text{-a.e.}$$

Proposition 6.22 (Local martingales from semimartingales 2) Let $M_t\in\mathcal{M}_{loc}$ be a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ with $M_0=0$, and define the 2-dimensional semimartingale:

$$X_t=\big(X_0^{(1)}+M_t,\ X_0^{(2)}+\langle M\rangle_t\big),$$

where $X_0$ is $\sigma_0(\mathcal{S})$-measurable. If $f(x_1,x_2)\in C^{2,1}(\mathbb{R}\times\mathbb{R})$ satisfies the partial differential equation:

$$\frac{\partial f}{\partial x_2}(x)+\frac{1}{2}\frac{\partial^2f}{\partial x_1^2}(x)=0,$$

then $f(X_t)-f(X_0)\in\mathcal{M}_{loc}$ is a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$.

Proof. Left as an exercise.

Example 6.23 1. Doob-Meyer decomposition: Let $f(x_1,x_2)=x_1^2-x_2$. Then the differential equation of proposition 6.22 is satisfied, and if $X_t=\big(X_0^{(1)}+M_t,\ X_0^{(2)}+\langle M\rangle_t\big)$ for a continuous local martingale $M_t$, the continuous local martingale $f(X_t)-f(X_0)$ is given by:

$$f(X_t)-f(X_0)=\big(X_0^{(1)}+M_t\big)^2-\big(X_0^{(1)}\big)^2-\langle M\rangle_t=M_t^2-\langle M\rangle_t+2X_0^{(1)}M_t.$$

It is apparent in this case that $f(X_t)-f(X_0)$ is a continuous local martingale, as the sum of $M_t^2-\langle M\rangle_t$, a continuous local martingale by book 7's proposition 6.12, and $2X_0^{(1)}M_t$, a continuous local martingale by assumption.

Though perhaps tempting, one cannot use proposition 6.22 as a short proof of the Doob-Meyer decomposition theorem of book 7's proposition 6.12, arguing that $f(X_t)-f(X_0)$ and $2X_0^{(1)}M_t$ are local martingales, and thus so too is $M_t^2-\langle M\rangle_t$ by book 7's exercise 5.78. This would be circular logic, as the Doob-Meyer result was prominent in the development of the various integration theories, and thus also in Itô's lemma.

From the above version of Itô's lemma we also obtain that:

$$M_t^2-\langle M\rangle_t+2X_0^{(1)}M_t=2\int_0^t\big(X_0^{(1)}+M_s\big)dM_s,\ \mu\text{-a.e.},$$

and this reduces to 3.54:

$$M_t^2-\langle M\rangle_t=2\int_0^t M_s\,dM_s,\ \mu\text{-a.e.}$$

2. Exponential martingale 1: Let $f(x_1,x_2)=\exp\big(\lambda x_1-\frac{1}{2}\lambda^2x_2\big)$, where $\lambda\neq 0$ and can be complex. Then the differential equation of proposition 6.22 is again satisfied, and if $X_t=\big(X_0^{(1)}+M_t,\ X_0^{(2)}+\langle M\rangle_t\big)$ for a continuous local martingale $M_t$, the continuous local martingale $f(X_t)-f(X_0)$ is given by:

$$f(X_t)-f(X_0)=\Big[\exp\Big(\lambda M_t-\frac{1}{2}\lambda^2\langle M\rangle_t\Big)-1\Big]\exp\Big(\lambda X_0^{(1)}-\frac{1}{2}\lambda^2X_0^{(2)}\Big).$$

It is common to set $X_0^{(1)}=X_0^{(2)}=0$, and then it follows that $\exp\big(\lambda M_t-\frac{1}{2}\lambda^2\langle M\rangle_t\big)-1$ is a continuous local martingale, and from the above version of Itô's lemma:

$$\exp\Big(\lambda M_t-\frac{1}{2}\lambda^2\langle M\rangle_t\Big)-1=\lambda\int_0^t\exp\Big(\lambda M_s-\frac{1}{2}\lambda^2\langle M\rangle_s\Big)dM_s.$$

Denoting this continuous local martingale:

$$\mathcal{E}_\lambda(M)_t\equiv\exp\Big(\lambda M_t-\frac{1}{2}\lambda^2\langle M\rangle_t\Big),\tag{6.12}$$

it follows that:

$$\mathcal{E}_\lambda(M)_t=1+\lambda\int_0^t\mathcal{E}_\lambda(M)_s\,dM_s.\tag{6.13}$$

This equation is reminiscent of the integral equation satisfied by the ordinary exponential $e_\lambda(t)\equiv\exp[\lambda t]$:

$$e_\lambda(t)=1+\lambda\int_0^t e_\lambda(s)\,ds.$$
It is an exercise to check that the integral in 6.13 is well defined since $\mathcal{E}_\lambda(M)_s\in\mathcal{H}^M_{2,loc}([0,\infty)\times\mathcal{S})$.

When $M_t=B_t$, a Brownian motion, then:

$$\mathcal{E}_\lambda(B)_t\equiv\exp\Big(\lambda B_t-\frac{1}{2}\lambda^2t\Big)$$

is in fact a martingale relative to the natural filtration $\sigma^B_t(\mathcal{S})$ by exercise 5.19 of book 7, and to the above filtration $\sigma_t(\mathcal{S})$ by exercise 2.11.

In general, $\mathcal{E}_\lambda(M)_t$ need not be a martingale. See remark 6.24.
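For $M_t=B_t$ the martingale property can be spot-checked by simulation (an illustrative sketch, not from the text; $\lambda$, $t$ and the path count are assumptions): $E[\mathcal{E}_\lambda(B)_t]=\mathcal{E}_\lambda(B)_0=1$ for each $t$.

```python
import numpy as np

# For Brownian motion, E_lambda(B)_t = exp(lam*B_t - 0.5*lam^2*t) is a
# martingale, so E[E_lambda(B)_t] = 1 for every t (lam, t illustrative).
rng = np.random.default_rng(6)
lam, t, n_paths = 1.0, 1.0, 400000
B_t = rng.standard_normal(n_paths) * np.sqrt(t)
E_lam = np.exp(lam * B_t - 0.5 * lam**2 * t)
print(round(float(np.mean(E_lam)), 3))   # approximately 1
```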
3. Exponential martingale 2: As a general example of example 2, let $M_t\in\mathcal{M}_{loc}$ be a continuous local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ defined by:

$$M_t=\sum_{i=1}^n\int_0^t Q_s^{(i)}\,dN_s^{(i)},$$

where $\{N_t^{(i)}\}_{i=1}^n$ are independent local martingales on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$, and $Q_t^{(i)}\in\mathcal{H}^{N^{(i)}}_{2,loc}$ (definition 3.70). So defined, $M$ is a local martingale by proposition 3.83 and book 7's exercise 5.78. Then by book 7's corollary 6.32, 3.68 and proposition 3.92:

$$\langle M\rangle_t=\sum_{i=1}^n\int_0^t\big(Q_s^{(i)}\big)^2\,d\big\langle N^{(i)}\big\rangle_s.$$

So by 6.12:

$$\mathcal{E}_\lambda(M)_t\equiv\exp\Big(\lambda\sum_{i=1}^n\int_0^t Q_s^{(i)}\,dN_s^{(i)}-\frac{\lambda^2}{2}\sum_{i=1}^n\int_0^t\big(Q_s^{(i)}\big)^2\,d\big\langle N^{(i)}\big\rangle_s\Big)$$

is a local martingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\mu)_{u.c.}$ which satisfies 6.13. Using the associative law in 3.72, this identity becomes:

$$\mathcal{E}_\lambda(M)_t=1+\lambda\sum_{i=1}^n\int_0^t\mathcal{E}_\lambda(M)_s Q_s^{(i)}\,dN_s^{(i)}.\tag{6.14}$$

Remark 6.24 It can cause some confusion initially, but the exponential martingale $\mathcal{E}_\lambda(M)_t$ defined in 6.12 for a local martingale $M_t\in\mathcal{M}_{loc}$ is in general only an exponential local martingale, as noted above. It will be essential for the study of Girsanov's theorem in book 9 to identify conditions on $M_t$ which assure that $\mathcal{E}_\lambda(M)_t$ is in fact a martingale. For that application $\lambda=1$ always, and it is common to denote $\mathcal{E}_1(M)_t$ by $\mathcal{E}(M)_t$.
6.4 The Feynman-Kac Representation Theorem 1

The Feynman-Kac representation theorem, also called the Feynman-Kac
formula, "represents" the solution to certain partial differential equations
or PDEs. Specifically, it represents these solutions in terms of conditional
expectations of stochastic processes which solve stochastic differential
equations parametrized by the given PDE's coefficient functions. While the
result below is a special case of the more general result developed within
the study of stochastic differential equations in book 9, it can be derived
with just an application of Itô's lemma. In effect, we will circumvent the
deeper questions addressed later by assuming the existence of a solution of
the given stochastic differential equation. However, this limited result is
still adequate for a number of problems of interest related to financial
derivatives, since the associated stochastic differential equations are
explicitly solvable and hence there is no question of existence.

This representation theorem is named for Richard Feynman (1918–1988) and
Mark Kac (1914–1984). Their results were published in 1948 and 1949,
respectively, and apply to partial differential equations of parabolic type.
The formal definition of parabolic type will be deferred to book 9, but
notable examples of this type of equation are the heat equation of physics
noted in example 6.20, and the equations for contingent claim prices or
financial derivatives in finance.

In contrast to the traditional approach to PDEs and the book 9 study of
stochastic differential equations, which investigates existence and
uniqueness results, the Feynman-Kac formula is not an existence theorem for
the given partial differential equation. Its statement specifies that: "...if
f(t,x) is a solution to..., then..." In other words, this theorem states
that if there is a solution to the given partial differential equation, then
this solution can be represented by a formula involving a stochastic process
which solves a related stochastic differential equation.

Thus it can be seen as "only" a representation theorem for a known-to-exist
solution, and not an existence theorem for that solution. However, since the
theorem concludes that any solution must have the given representation, it
yields a uniqueness theorem for the partial differential equation whenever
there is a unique solution to the related stochastic differential equation.
We will return to this point after the book 9 development of stochastic
differential equations, where such existence and uniqueness results are
studied.

The version of the Feynman-Kac result below refines that presented in
example 5.18 in the discussion on Itô's lemma. In book 9 this result will be
refined and also generalized to multidimensional processes.

Remark 6.25 (On SDE solutions 1) Let B_s be a Brownian motion on the
filtered probability space (S, σ(S), σ_s(S), μ)_{u.c.}. For the following
result, we assume that a given stochastic differential equation:

   dX_s = u(s, X_s) ds + v(s, X_s) dB_s^t,

has a solution for s ≥ t, denoted X_s^{t,x}, with an initial condition
specified at time t ≥ 0 by X_t^{t,x} = x. With a slight abuse of notation,
we then specify that for the solution of this equation, the associated
Brownian motion B_s^t is also defined for s ≥ t and starts at time t, so
B_t^t = 0. In other words, B_s^t is a Brownian motion on
(S, σ(S), σ_{s−t}(S), μ)_{u.c.}. The slight abuse of notation is that
whereas ⟨B⟩_s = s for s ≥ 0 for the original Brownian motion, for this
shifted version ⟨B^t⟩_s = s − t for s ≥ t, but this should cause no
confusion. The notation here for the shifted Brownian motion B_s^t is
consistent with the notation introduced above for the time-shifted solution
process X_s^{t,x}, which again starts at s = t and with value X_t^{t,x} = x.
Thus in this notation B_s^t ≡ B_s^{t,0}, but the extra notation is
suppressed since Brownian motion is conventionally initialized to 0. This
notation is also reminiscent of that used for Markov processes in chapter 4
of book 7.

This shifted Brownian process could be defined by B_s^t ≡ B_{s−t} or
B_s^t ≡ B_s − B_t, with B_s the original Brownian motion. One could even
restate this equation with the original Brownian motion and solve
dY_s = u(t+s, Y_s) ds + v(t+s, Y_s) dB_s, Y_0 = x, then define X_s^{t,x} for
s ≥ t by X_s^{t,x} ≡ Y_{s−t}. These versions are all Brownian motions by
Lévy's characterization and thus have the same finite dimensional
distributions. But these are not pathwise identical processes as functions
of ω ∈ S, and this raises the question of how the solutions to these
equations relate to one another.

First, if u(s,x) and v(s,x) satisfy any of the conditions of book 9 which
assure the existence of what will be called a strong solution, then each
of these stochastic differential equations driven by different Brownian
motions will be strongly solvable and have continuous solutions. Further, if
u(s,x) and v(s,x) also satisfy any of the conditions which assure strong
uniqueness of a solution, meaning pathwise uniqueness, then each of these
solutions will be strongly unique relative to the Brownian motion utilized.
Finally, just as the Brownian motions underlying these solutions will not
agree pathwise but have the same finite dimensional distributions, the same
is true for these solutions. This latter conclusion will be proved in book 9
in the section on weak uniqueness of stochastic differential equations.
There this notion is defined as uniqueness in the sense of probability
law and will imply uniqueness in terms of the finite dimensional
distributions (definition 1.37, book 7) of X_s^{t,x}.

The conclusion from all of this is that if the coefficient functions satisfy
any of the book 9 assumptions that assure strong existence and uniqueness,
we can make statements about distributional properties of the solution
process X_s^{t,x} without being very precise about which version of Brownian
motion is used in the stochastic differential equation. An important example
of the needed assumptions on these coefficient functions is Lipschitz
continuity and linear growth in x, uniformly in t, though this can be
liberalized somewhat.

Notation 6.26 (On notation and expectations) Some references use the
notation E^{t,x} or E_{t,x} in the constraint in 6.19 below and in the
representation for f(t,x), and then drop the t,x from the notation for the
process. For example, 6.20 would become:

   f(t,x) = E^{t,x}[ φ(X_T) exp(∫_t^T γ(s, X_s) ds)
                     + ∫_t^T k(r, X_r) exp(∫_t^r γ(s, X_s) ds) dr ],

where this expectation is taken with the understanding that X_t = x. As seen
in the discussion on Markov processes in book 7, this expectation is also
denoted E_{t,x}.

This expectation is formalized as follows. First, fixing t ≥ 0 and x ∈ ℝ,
let σ'(S) denote the sigma algebra generated by {X_s^{t,x}} for s ≥ t. In
applications where X_s^{t,x} is an m-dimensional process, then x ∈ ℝ^m.
Similarly, let σ'_r(S) denote the sigma algebra generated by {X_s^{t,x}} for
such t ≤ s ≤ r. The existence theory for stochastic differential equations
of book 9 will prove that if B_s is a Brownian motion on
(S, σ(S), σ_s(S), μ)_{u.c.}, then X_s^{t,x} is measurable relative to
σ_s(S), and so σ'_s(S) ⊂ σ_s(S) and σ'(S) ⊂ σ(S). Thus μ is well defined on
these sigma algebras.

The expectation E^{t,x} is now defined in terms of an integral relative to
the measure μ_{t,x} defined on σ'(S) as follows. Let {s_j}_{j=1}^n ⊂ [t,∞);
then for Borel sets {A_j}_{j=1}^n ⊂ B(ℝ):

   μ_{t,x}( ∩_{j=1}^n (X_{s_j}^{t,x})^{−1}(A_j) )
      ≡ μ( ∩_{j=1}^n (X_{s_j}^{t,x})^{−1}(A_j) ).                     (6.15)

In the m-dimensional case, {A_j}_{j=1}^n ⊂ B(ℝ^m). The collection of such
cylinder sets forms a semi-algebra A' that generates σ'(S), and μ_{t,x} is
extended to this sigma algebra using the extension theory of chapter 5 of
book 1.

Now the reader may have noticed that in the representation for f(t,x) above,
the process X_s would appear to be the notation for the solution of this
stochastic differential equation from t = 0, so X_s = X_s^{0,·}, but with
some unspecified initial condition. Ignoring this initial condition for the
moment, this would seem to imply that the solution process X_s^{t,x} for
s ≥ t is the same as {X_s | s ≥ t, X_t = x}, at least in terms of its finite
dimensional distributions. We discuss this again in the next section, in
remark 6.41.

For the following, recall definition 5.3, that f(t,x) ∈ C^{1,2}([0,T] × ℝ^m)
denotes that f is continuous on this space, and has one continuous
derivative for t ∈ (0,T) and two continuous derivatives for x ∈ ℝ^m that
extend continuously to [0,T] × ℝ^m. While in many contexts the variable T
serves as notation for a stopping time, in the discussion of partial
differential equations and stochastic differential equations this letter is
often reserved for a special fixed future time, such as the time of the
boundary value. It is then common to use the Greek letter τ or another
letter for stopping times in this context.
Proposition 6.27 (Feynman-Kac representation theorem 1) Let u(s,x) and
v(s,x) be measurable functions defined on [0,T] × ℝ, and B_s a Brownian
motion on (S, σ(S), σ_s(S), μ)_{u.c.}.

Assume that:

1. Given t ∈ [0,T) and x, there exists a continuous semimartingale
X_s^{t,x} on (S, σ(S), σ_{s−t}(S), μ)_{u.c.} which is a solution to the
stochastic differential equation:

   dX_s^{t,x} = u(s, X_s^{t,x}) ds + v(s, X_s^{t,x}) dB_s^t,

for t ≤ s ≤ T with X_t^{t,x} = x. Here B_s^t is a Brownian motion on
(S, σ(S), σ_{s−t}(S), μ)_{u.c.}, defined in terms of B_s or otherwise as
above, and thus B_t^t = 0 and ⟨B^t⟩_s = s − t.

In other words, X_s^{t,x} is an Itô diffusion:

   X_s^{t,x} = x + ∫_t^s u(r, X_r^{t,x}) dr + ∫_t^s v(r, X_r^{t,x}) dB_r^t,
      t ≤ s ≤ T.                                                      (6.16)

2. There exists f(t,x) ∈ C^{1,2}([0,T] × ℝ) which solves the partial
differential equation:

   ∂f/∂t + u ∂f/∂x + (1/2) v² ∂²f/∂x² + γf + k = 0,
      (t,x) ∈ [0,T) × ℝ,                                              (6.17)

   f(T,x) = φ(x),  x ∈ ℝ,                                             (6.18)

where u, v are as above, γ(s,x) and k(s,x) are continuous functions defined
on [0,T] × ℝ, and φ(x) is a continuous function defined on ℝ.

3. For f(t,x) in 2:

   E[ ∫_t^T ( v(r, X_r^{t,x}) f_x(r, X_r^{t,x})
              exp(∫_t^r γ(s, X_s^{t,x}) ds) )² dr ] < ∞.              (6.19)

Then f(t,x) has the representation:

   f(t,x) = E[ φ(X_T^{t,x}) exp(∫_t^T γ(s, X_s^{t,x}) ds)
               + ∫_t^T k(r, X_r^{t,x}) exp(∫_t^r γ(s, X_s^{t,x}) ds) dr ].
                                                                      (6.20)
(6.20)
Proof. Given t, x, define the process g(r, X_r^{t,x}) on [t,T] × S by:

   g(r, X_r^{t,x}) = exp(∫_t^r γ(s, X_s^{t,x}) ds) f(r, X_r^{t,x}).

By continuity of γ and X_s^{t,x}:

   Y_r(ω) ≡ ∫_t^r γ(s, X_s^{t,x}) ds

is pathwise absolutely continuous as a function of r (proposition 3.57, book
3) and hence defines a bounded variation process (that book's proposition
3.58).

Since dY_r = γ(r, X_r^{t,x}) dr, Itô's lemma of 5.10 can then be applied to
F(r, Y_r) with F(r,y) = exp(y) to yield:

   d exp(∫_t^r γ(s, X_s^{t,x}) ds)
      = γ(r, X_r^{t,x}) exp(∫_t^r γ(s, X_s^{t,x}) ds) dr.

Similarly, an application of 5.10 to f(r, X_r^{t,x}) for any
C^{1,2}([0,T] × ℝ) function f(r,x) obtains:

   df(r, X_r^{t,x}) = [ f_t(r, X_r^{t,x}) + u(r, X_r^{t,x}) f_x(r, X_r^{t,x})
                        + (1/2) v²(r, X_r^{t,x}) f_xx(r, X_r^{t,x}) ] dr
                      + v(r, X_r^{t,x}) f_x(r, X_r^{t,x}) dB_r^t.

As g(r, X_r^{t,x}) is defined as the product of these two processes, we can
apply stochastic integration by parts from 4.13, recalling remark 4.26. For
this application note that by book 7's proposition 6.34:

   ⟨ exp(∫_t^· γ(s, X_s^{t,x}) ds), f(·, X_·^{t,x}) ⟩_r = 0,

since the first factor is a bounded variation process. Substitution of the
above differentials then yields:

   g(t', X_{t'}^{t,x}) − g(t, X_t^{t,x})
   = ∫_t^{t'} exp(∫_t^r γ(s, X_s^{t,x}) ds) df(r, X_r^{t,x})
     + ∫_t^{t'} f(r, X_r^{t,x}) d exp(∫_t^r γ(s, X_s^{t,x}) ds)
   = ∫_t^{t'} exp(∫_t^r γ(s, X_s^{t,x}) ds)
        [ f_t(r, X_r^{t,x}) + u(r, X_r^{t,x}) f_x(r, X_r^{t,x}) ] dr
     + ∫_t^{t'} exp(∫_t^r γ(s, X_s^{t,x}) ds)
        [ (1/2) v²(r, X_r^{t,x}) f_xx(r, X_r^{t,x})
          + f(r, X_r^{t,x}) γ(r, X_r^{t,x}) ] dr
     + ∫_t^{t'} exp(∫_t^r γ(s, X_s^{t,x}) ds)
        v(r, X_r^{t,x}) f_x(r, X_r^{t,x}) dB_r^t.

If f(t,x) satisfies 6.17, the sum of the bracketed expressions in the first
two integrals equals −k(r, X_r^{t,x}) identically. Substituting all terms,
and recalling that X_t^{t,x} = x, obtains:

   exp(∫_t^{t'} γ(s, X_s^{t,x}) ds) f(t', X_{t'}^{t,x}) − f(t,x)
   = −∫_t^{t'} k(r, X_r^{t,x}) exp(∫_t^r γ(s, X_s^{t,x}) ds) dr
     + ∫_t^{t'} exp(∫_t^r γ(s, X_s^{t,x}) ds)
          v(r, X_r^{t,x}) f_x(r, X_r^{t,x}) dB_r^t.

The Itô integral is a martingale since the integrand satisfies 2.27 by 6.19
and hence is a member of H_2([0,∞) × S). Letting t' = T and
f(T, X_T^{t,x}) = φ(X_T^{t,x}) obtains 6.20 by taking expectations:

   f(t,x) = E[ exp(∫_t^T γ(s, X_s^{t,x}) ds) φ(X_T^{t,x})
               + ∫_t^T k(r, X_r^{t,x}) exp(∫_t^r γ(s, X_s^{t,x}) ds) dr ].

Remark 6.28 (On Feynman-Kac assumptions) In the book 9 study of stochastic
differential equations, we will address existence and uniqueness results for
stochastic differential equations, and then return to a more robust version
of Feynman-Kac which makes its applicability more predictable, at least in
terms of the assumption on the existence of X_s^{t,x}. Specifically, this
development will include conditions on u and v which assure that the
stochastic differential equation has a solution, and will also identify
assumptions on uniform growth rates in x of γ, k and f to assure the
validity of 6.19.

This future work on stochastic differential equations will also address when
the solution process is a Markov process, and indeed a diffusion process as
defined in section 4.3 of book 7. This will also clarify why there is no
conflict in terminology in calling the solutions of such equations Itô
diffusions, as noted in 1 of the statement of the above proposition.

It is always prudent to be somewhat sceptical of a theorem which requires
the existence of functions with certain properties. For example, one could
potentially derive some initially interesting, though ultimately
conflicting, properties of an assumed-to-exist function which is
differentiable but not continuous. Of course no such function exists, so any
such derivation would serve no purpose even if initially "interesting." For
the above theorem we do not yet know when the equation in 6.16 has a
continuous semimartingale solution, or put another way, that there exists
such an Itô diffusion. Logically this will rely on properties of the
coefficient functions u(t,x) and v(t,x). But two simple examples of
existence follow.
Example 6.29 (Brownian motion) In the simplest case, let u(t,x) = 0,
v(t,x) = 1, and B_s^t ≡ B_s − B_t, where B_s is a Brownian motion on
(S, σ(S), σ_s(S), μ)_{u.c.}. Then 6.16 has the apparent solution:

   X_s^{t,x} = x + B_s − B_t,  t ≤ s ≤ T,

which can be reparametrized as:

   X_{t+r}^{t,x} = x + B_{t+r} − B_t,  0 ≤ r ≤ T − t.

Defining B'_r ≡ B_{t+r} − B_t, it follows by the tower property of
conditional expectations that:

   E[B'_r] = E( E[B_{t+r} − B_t | σ_t(S)] ) = 0.

Further, it is the case that (B'_r)² − r is a martingale relative to
{σ_{t+r}(S)}. This process is integrable for all r, and if s ≥ r the
measurability property of conditional expectations (proposition 5.26, book
6) and the Doob-Meyer decomposition theorem of book 7's proposition 6.12
obtain:

   E[ (B'_s)² − (B'_r)² | σ_{t+r}(S) ]
   = E[ B_{t+s}² − B_{t+r}² | σ_{t+r}(S) ]
     − 2B_t E[ B_{t+s} − B_{t+r} | σ_{t+r}(S) ]
   = (t + s) − (t + r) = s − r.

Thus (B'_r)² − r is a martingale, so ⟨B'⟩_r = r by the Doob-Meyer
decomposition theorem (proposition 6.12, book 7), and thus B'_r is a
Brownian motion by Lévy's characterization.

Changing notation from B'_r to conventional B_r:

   X_{t+r}^{t,x} = x + B_r,  0 ≤ r ≤ T − t.

The partial differential equation then takes the form:

   ∂f/∂t + (1/2) ∂²f/∂x² + γ(t,x) f + k(t,x) = 0,  f(T,x) = φ(x),

and any solution of this equation which satisfies 6.19 has the
representation, after a change of variables:

   f(t,x) = E[ φ(x + B_{T−t}) exp(∫_0^{T−t} γ(t+s, x+B_s) ds) ]
            + E[ ∫_0^{T−t} k(t+r, x+B_r) exp(∫_0^r γ(t+s, x+B_s) ds) dr ].

Example 6.30 (Geometric Brownian motion) By 2 of example 5.17, when
u(t,x) = μx and v(t,x) = σx, this equation is again solvable, and X_s^{t,x}
represents the process known as geometric Brownian motion. The process
X_s^{t,x} is given in 5.13 by:

   X_{t+r}^{t,x} = x exp( (μ − σ²/2) r + σB_r ).

The Feynman-Kac result as stated is applicable to the partial differential
equation:

   ∂f/∂t + μx ∂f/∂x + (1/2) σ²x² ∂²f/∂x² + γ(t,x) f + k(t,x) = 0,
   f(T,x) = φ(x),

with γ, k continuous functions defined on [0,T] × ℝ.

In other words, if f is any solution of such an equation which also
satisfies 6.19:

   E[ ∫_t^T ( σ X_r^{t,x} f_x(r, X_r^{t,x})
              exp(∫_t^r γ(s, X_s^{t,x}) ds) )² dr ] < ∞,

then f has the representation in 6.20:

   f(t,x) = E[ φ(X_T^{t,x}) exp(∫_t^T γ(s, X_s^{t,x}) ds)
               + ∫_t^T k(r, X_r^{t,x}) exp(∫_t^r γ(s, X_s^{t,x}) ds) dr ],

with X_s^{t,x} given above.
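As a hedged sketch of the simplest instance of this representation (an illustration, not from the text), take γ = k = 0 and φ(x) = x. Then f(t,x) = E[X_T^{t,x}] = x e^{μ(T−t)}, which indeed solves f_t + μx f_x + (1/2)σ²x² f_xx = 0 with f(T,x) = x, and the closed form for geometric Brownian motion lets this be verified by simulation; the parameter values are arbitrary:

```python
import numpy as np

# For GBM with gamma = k = 0 and phi(x) = x, the representation 6.20
# reduces to f(t, x) = E[X_T^{t,x}] = x * exp(mu * (T - t)).
rng = np.random.default_rng(2)
mu, sigma, t, T, x, n_paths = 0.05, 0.3, 0.0, 1.0, 100.0, 400_000

B = rng.normal(0.0, np.sqrt(T - t), n_paths)   # B_{T-t} ~ N(0, T - t)
X_T = x * np.exp((mu - 0.5 * sigma**2) * (T - t) + sigma * B)  # closed-form GBM
mc, exact = X_T.mean(), x * np.exp(mu * (T - t))
print(f"Monte Carlo: {mc:.3f},  exact: {exact:.3f}")
```

With γ equal to a constant −r this same calculation discounts the payoff, which is the Black-Scholes-Merton pricing structure noted in the introduction to this section.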

6.5 Dynkin's Formula

An important consequence of Dynkin's formula below is to prove that under
appropriate assumptions:

   f(t,x,T) ≡ E[ φ(X_T^{t,x}) ]

is a differentiable function of T. This will be enough to then derive a
special version of Kolmogorov's backward equation for homogeneous
processes. Dynkin's formula, named for a 1965 result of Eugene B. Dynkin
(1924–2014), provides the key expression for E[φ(X_τ^{t,x})], which is
valid for any stopping time τ with E[τ] < ∞. The m-dimensional process X_t
is assumed to solve the multivariate version of the stochastic differential
equation:

   dX_t = u(t, X_t) dt + v(t, X_t) dB_t,

with component representations given in 5.23. Under certain conditions
identified in the book 9 existence theory for such equations, these solution
processes will in fact be diffusion processes (section 4.3, book 7), called
Itô diffusions, and thus have transition measures.

Utilizing the notation of multivariate stochastic integrals introduced in
section 4.7, the process X_t ≡ (X_t^{(1)}, ..., X_t^{(m)}) above is an
m-dimensional continuous, adapted process on (S, σ(S), σ_t(S), μ)_{u.c.}
that satisfies the multivariate stochastic differential equation (SDE):

   X_t(ω) = X_0(ω) + ∫_0^t u(s, X_s(ω)) ds + ∫_0^t v(s, X_s(ω)) dB_s(ω),
      0 ≤ t ≤ T,

for some T > 0. As noted in remark 5.14, such a process is a continuous
semimartingale. In more detail, X_t, X_0, and u(t,x) are m × 1 column
matrices, B_t ≡ (B_t^{(1)}, ..., B_t^{(n)}) is n-dimensional Brownian motion
on (S, σ(S), σ_t(S), μ)_{u.c.} and identified with an n × 1 column matrix,
and v(t,x) is an m × n matrix. The components of u(t,x) and v(t,x) are
written with subscripts:

   u(t,x) ≡ (u_i(t,x))_{i=1}^m,   v(t,x) ≡ (v_{ij}(t,x))_{i=1,j=1}^{m,n},

and are assumed to be Borel measurable functions defined on [0,∞) × ℝ^m.

Written in terms of component processes as in 5.22, the above equation for
X_t(ω) implies that for i = 1, ..., m:

   X_t^{(i)}(ω) = X_0^{(i)}(ω) + ∫_0^t u_i(s, X_s(ω)) ds
                  + Σ_{j=1}^n ∫_0^t v_{ij}(s, X_s(ω)) dB_s^{(j)}(ω).

Also noted in remark 5.14, if X_t is an m-dimensional continuous and adapted
stochastic process defined on (S, σ(S), σ_s(S), μ)_{u.c.} and u(t,x) and
v(t,x) are Borel measurable, then u(t, X_t(ω)) and v(t, X_t(ω)) are
predictable processes on this space, as will be proved in book 9. Thus the
above integrals are well defined, given some additional restrictions to be
specified.

In addition it will be assumed that given t ∈ [0,T] and x ∈ ℝ^m, there
exists an m-dimensional continuous semimartingale X_s^{t,x} on
(S, σ(S), σ_{s−t}(S), μ)_{u.c.} that is a solution for t ≤ s ≤ T to the
stochastic differential equation:

   dX_s^{t,x} = u(s, X_s^{t,x}) ds + v(s, X_s^{t,x}) dB_s^t,  X_t^{t,x} = x.

The process B_s^t is an n-dimensional shifted Brownian motion on
(S, σ(S), σ_{s−t}(S), μ)_{u.c.}, defined as in remark 6.25 of the prior
section. But as an n-dimensional Brownian process it has independent
components that satisfy B_t^{t,(j)} = 0 and ⟨B^{t,(j)}⟩_s = s − t. In
integral form:

   X_s^{t,x} = x + ∫_t^s u(r, X_r^{t,x}) dr + ∫_t^s v(r, X_r^{t,x}) dB_r^t,
      t ≤ s ≤ T.

Remark 6.31 (Time Homogeneous Process 1) In the special case where
u(t,x) = u(x) and v(t,x) = v(x), the stochastic differential equation and
the Itô diffusion solution are said to be time homogeneous. It follows
that with a change of limits:

   X_{t+s}^{t,x} = x + ∫_0^s u(X_{t+r}^{t,x}) dr
                     + ∫_0^s v(X_{t+r}^{t,x}) dB_{t+r}^t
                 = x + ∫_0^s u(X_{t+r}^{t,x}) dr
                     + ∫_0^s v(X_{t+r}^{t,x}) dB'_r,  0 ≤ s ≤ T − t.  ((1))

For the Itô integral, the continuous and adapted process B_{t+r}^t is
defined on r ≥ 0 with B_t^{t,(j)} = 0 and ⟨B_{t+·}^{t,(j)}⟩_r = r, and thus
this is a version of a standard Brownian motion B'_r by Lévy's
characterization.

Initializing the original process to x at t = 0, so B'_r ≡ B_r:

   X_s^{0,x} = x + ∫_0^s u(X_r^{0,x}) dr + ∫_0^s v(X_r^{0,x}) dB_r,
      0 ≤ s ≤ T − t.                                                  ((2))

Comparing (1) and (2), it follows that Y_s ≡ X_{t+s}^{t,x} and
Y'_s ≡ X_s^{0,x} satisfy the same stochastic differential equation on
0 ≤ s ≤ T − t, but with different versions of a Brownian motion. By the
section in book 9 on weak uniqueness, meaning uniqueness in the sense of
probability law, it will follow that Y_s and Y'_s have the same finite
dimensional distributions (definition 1.37, book 7) under μ on
0 ≤ s ≤ T − t. These conclusions will again require assumptions on the
coefficient functions as noted in remark 5.26.

One consequence of this is that for a time homogeneous process, the
distribution of X_T^{t,x} depends only on x and T' ≡ T − t.
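This distributional equality can be sketched numerically for geometric Brownian motion, a time homogeneous example with u(x) = μx and v(x) = σx (an illustration, not from the text; all parameter values are arbitrary). The two processes below are driven by different, independent Brownian increments, yet their marginal distributions at time s should agree:

```python
import numpy as np

# Time homogeneity sketch: X_{t+s}^{t,x} (driven by B_{t+s} - B_t) and
# X_s^{0,x} (driven by an independent B_s) agree in distribution for GBM.
rng = np.random.default_rng(4)
mu, sigma, x, s, n = 0.1, 0.25, 1.0, 0.5, 200_000

inc1 = rng.normal(0.0, np.sqrt(s), n)   # plays the role of B_{t+s} - B_t
inc2 = rng.normal(0.0, np.sqrt(s), n)   # an independent copy of B_s
Y = x * np.exp((mu - 0.5 * sigma**2) * s + sigma * inc1)   # X_{t+s}^{t,x}
Y0 = x * np.exp((mu - 0.5 * sigma**2) * s + sigma * inc2)  # X_s^{0,x}
print(np.mean(Y), np.mean(Y0))                  # nearly equal means
print(np.quantile(Y, 0.9), np.quantile(Y0, 0.9))  # nearly equal quantiles
```

Only the finite dimensional distributions coincide; pathwise, Y and Y0 are entirely different functions of ω, exactly as the remark emphasizes.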

For the next result, recall that by definition 5.3, φ ∈ C_0²(ℝ^m) means
that φ is twice continuously differentiable and has compact support. That
is, φ = 0 on the complement of a compact set K ⊂ ℝ^m. Also, vv^T denotes the
matrix product of v(t,y) and its transpose, so:

   vv^T(s,y) ≡ ( Σ_{j=1}^n v_{ij}(s,y) v_{kj}(s,y) )_{i=1,k=1}^{m,m},

and thus for 1 ≤ i, k ≤ m:

   (vv^T)_{ik}(s,y) = Σ_{j=1}^n v_{ij}(s,y) v_{kj}(s,y).              (6.21)

The stated assumption on u_i(t,y) and v_{ij}(t,y) needed for Dynkin's
formula below will be assured by the assumptions in book 9 that will be made
for Itô's existence theory for solutions of stochastic differential
equations. As noted in remark 6.25, this existence theory will require a
uniform linear growth bound for u_i(t,y) and v_{ij}(t,y) in y, and Lipschitz
continuity of u_i(t,y) and v_{ij}(t,y) in y for all t. In addition, if the
stopping time τ is constant or bounded, τ ≤ T, then the assumption below on
u_i(t,y) and v_{ij}(t,y) is also assured by the time-localized linear growth
bound assumed for the more general existence theory.

Proposition 6.32 (Dynkin's formula) For t ≥ 0 and x, let X_s^{t,x} be an
m-dimensional continuous process defined on (S, σ(S), σ_s(S), μ)_{u.c.}
that is a solution on s ≥ t to the stochastic differential equation:

   dX_s^{t,x} = u(s, X_s^{t,x}) ds + v(s, X_s^{t,x}) dB_s^t,  X_t^{t,x} = x.

As above, B_s^t denotes a shifted n-dimensional Brownian motion defined on
this space for s ≥ t, with components that satisfy B_t^{t,(j)} = 0 and
⟨B^{t,(j)}⟩_s = s − t. Assume that φ(y) ∈ C_0²(ℝ^m), that all u_i(t,y) and
v_{ij}(t,y) are bounded by M < ∞ uniformly in t for y in the compact support
of φ, and let τ ≥ t be a stopping time with E[τ] < ∞.

Then:

   E[ φ(X_τ^{t,x}) ] = φ(x) + E[ ∫_t^τ Lφ(X_s^{t,x}) ds ],            (6.22)

where the associated differential operator L ≡ L_X is given by:

   Lφ(x) ≡ Σ_{i=1}^m u_i(t,x) φ_{x_i}(x)
           + (1/2) Σ_{i=1}^m Σ_{k=1}^m (vv^T)_{ik}(t,x) φ_{x_i x_k}(x),
                                                                      (6.23)

and where the x_i-subscripts denote partial derivatives. In addition:

   E[ φ(X_τ^{t,x}) ] = φ(x) + ∫_t^τ E[ Lφ(X_s^{t,x}) ] ds.            (6.24)

Further, given bounded D ⊂ ℝ^m, if τ ≡ inf{s | X_s^{t,x} ∉ D} and
E[τ] < ∞, then 6.22 and 6.24 hold for all φ(y) ∈ C²(ℝ^m).

Proof. To simplify notation, fix t and x and let Y_s ≡ X_s^{t,x}. Itô's
lemma in 5.25 can be applied to Z ≡ φ(Y_s), and after substituting for the
dY_s^{(i)} terms obtains:

   dZ_s = (1/2) Σ_{i=1}^m Σ_{k=1}^m φ_{y_i y_k}(Y_s)
             [ Σ_{j=1}^n v_{ij}(s, Y_s) v_{kj}(s, Y_s) ] ds
          + Σ_{i=1}^m φ_{y_i}(Y_s) u_i(s, Y_s) ds
          + Σ_{i=1}^m Σ_{j=1}^n φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)}.

For notational simplicity the superscript t is suppressed in the shifted
Brownian motion, so here: B_s^{(j)} ≡ B_s^{t,(j)}. Integrating over [t,T]
and noting that by definition Y_t = x, yields:

   φ(Y_T) = φ(x) + ∫_t^T (1/2) Σ_{i=1}^m Σ_{k=1}^m φ_{y_i y_k}(Y_s)
                      [ Σ_{j=1}^n v_{ij}(s, Y_s) v_{kj}(s, Y_s) ] ds
            + ∫_t^T Σ_{i=1}^m φ_{y_i}(Y_s) u_i(s, Y_s) ds
            + ∫_t^T Σ_{i=1}^m Σ_{j=1}^n φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)}.

For any fixed T < ∞, this formula for φ(Y_T) is valid μ-a.e. by Itô's
lemma, and thus it is valid for all rational T, μ-a.e. As φ(Y_T) is
continuous in T, it follows that this formula is then valid for all finite
T, μ-a.e. Now given the stopping time τ with E[τ] < ∞, then τ < ∞ μ-a.e.,
and thus on the intersection of these sets of measure 1, this formula is
valid μ-a.e. with T replaced by τ. Taking expectations and a change of
notation using 6.23 produces:

   E[ φ(X_τ^{t,x}) ] = φ(x) + E[ ∫_t^τ Lφ(X_s^{t,x}) ds ]
      + Σ_{i=1}^m Σ_{j=1}^n E[ ∫_t^τ φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)} ].

The proof of 6.22 is complete by showing the expectations in the double
summation have value 0.

To this end, for given i, j define g_k on S as the integral stopped at k:

   g_k(ω) ≡ ∫_t^{τ∧k} φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)}
          = ∫_t^k χ_{[0,τ(ω)]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)}.

Now as the composition of a continuous (and thus Borel measurable) function
and a continuous adapted process, φ_{y_i}(Y_s) is continuous and adapted,
and so φ_{y_i}(Y_s) is a predictable process on (S, σ(S), σ_s(S), μ)_{u.c.}
by book 7's corollary 5.17. As noted in remark 5.26, the same is true for
v_{ij}(s, Y_s) by Borel measurability of v_{ij} and continuity and
adaptedness of Y_s. Thus φ_{y_i}(Y_s) v_{ij}(s, Y_s) is adapted and
measurable by book 7's proposition 5.19, and so too is the above integrand
χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s), since χ_{[0,τ]}(s) has this
property by the definition of stopping time.

In addition, by the boundedness assumption on v_{ij} and continuity of
φ_{y_i}(Y_s), this integrand is bounded by cM for Y_s in the support of φ,
uniformly in s. Thus:

   χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) ∈ H_2([0,∞) × S),

since by definition 2.31:

   ‖ χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) ‖²_{H_2([0,∞)×S)}
      ≡ ∫_S ∫_0^∞ ( χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) )² ds dμ
      ≤ c²M² E[τ] < ∞.

Hence by 2.38:

   E[g_k(ω)] = E[ ∫_t^k χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)} ]
             = 0.

In addition, Itô's isometry in 2.39 applies to yield:

   E[g_k²(ω)] = ‖ χ_{[0,τ]}(s) φ_{y_i}(Y_s) v_{ij}(s, Y_s) ‖²_{H_2([0,∞)×S)}
              < ∞.                                                    ((*))

Hence:

   sup_k ∫_{|g_k| ≥ N} |g_k(ω)| dμ ≤ (1/N) sup_k ∫_{|g_k| ≥ N} |g_k(ω)|² dμ
                                   ≤ (1/N) sup_k E[g_k²(ω)],

and so:

   lim_{N→∞} sup_k ∫_{|g_k| ≥ N} |g_k(ω)| dμ = 0.

In other words, {g_k(ω)} is uniformly integrable (definition 2.50, book 5),
and thus by that book's proposition 2.52:

   E[ ∫_t^τ φ_{y_i}(Y_s) v_{ij}(s, Y_s) dB_s^{(j)} ] = lim_{k→∞} E[g_k(ω)]
                                                     = 0.

This completes the proof of 6.22.

For 6.24, note that by definition:

   E[ ∫_t^τ Lφ(X_s^{t,x}) ds ] ≡ ∫_S ∫_t^τ Lφ(X_s^{t,x}) ds dμ.

The integrand Lφ(X_s^{t,x}) as given in 6.23 is by assumption bounded, by K
say, and thus Lφ(X_s^{t,x}) is integrable as an iterated integral:

   ∫_S ∫_t^τ | Lφ(X_s^{t,x}) | ds dμ ≤ ∫_S K(τ − t) dμ ≤ K E[τ].

By Tonelli's theorem of proposition 5.22 of book 5, the integral of
|Lφ(X_s^{t,x})| exists as a product space integral with respect to
d(m × μ), with m Lebesgue measure, and thus the same is true of the product
space integral of Lφ(X_s^{t,x}). A change in the order of the iterated
integrals is then justified by Fubini's theorem (that book's proposition
5.15), completing the proof.

Finally, given bounded D ⊂ ℝ^m, then (*) above remains valid by defining c,
respectively M, as the bound on φ_{y_i}, respectively v_{ij}, on D. The rest
of the proof now follows as above.

Remark 6.33 (On generalizations of φ(y)) Note that the assumption
φ(y) ∈ C_0²(ℝ^m) can be generalized in two ways:

1. The assumption of compact support can be relaxed, and the above result
and proof will apply to φ(y) ∈ C_b²(ℝ^m) if all u_i(t,y) and v_{ij}(t,y)
are globally bounded by M < ∞ for all y and t. Here C_b²(ℝ^m) is defined as
the space of twice continuously differentiable functions with bounded
derivatives.

2. The assumption of twice continuously differentiable can also be relaxed,
as was the case for Itô's lemma as noted in remark 5.6. That is, φ(x) need
only be continuously differentiable in any x_k-component for which X_k is a
bounded variation process. Then for such k, v_{kj}(t,x) = 0 for all j, and
thus (vv^T)_{ik}(t,x) ≡ 0 for all i, and Lφ(x) contains no
φ_{x_i x_k}(x)-terms.

These generalizations then also apply to the corollary below and proposition
6.43.

For the following result, we now explicitly require all u_i(t,y) and
v_{ij}(t,y) to be continuous. As noted in remark 6.25, Lipschitz continuity
in y will be assumed for the existence and uniqueness theory of stochastic
differential equations, while continuity in t and y will be needed to prove
that the solution process is a diffusion process in the sense of chapter 4
of book 7. Thus the assumed continuity below is not in fact a significant
added restriction.

For completeness, this corollary explicitly identifies all assumptions.
For completeness, this corollary explicitly identi…es all assumptions.
Corollary 6.34 (Dynkin's formula) For t ≥ 0 and x, let X_s^{t,x} be an
m-dimensional continuous process defined on (S, σ(S), σ_s(S), μ)_{u.c.}
that is a solution on s ≥ t to the stochastic differential equation:

   dX_s^{t,x} = u(s, X_s^{t,x}) ds + v(s, X_s^{t,x}) dB_s^t,  X_t^{t,x} = x.

As above, B_s^t denotes a shifted n-dimensional Brownian motion defined on
this space for s ≥ t, with components that satisfy B_t^{t,(j)} = 0 and
⟨B^{t,(j)}⟩_s = s − t. Assume that φ(y) ∈ C_0²(ℝ^m), and that all u_i(t,y)
and v_{ij}(t,y) are continuous and bounded by M < ∞ uniformly in t for y in
the compact support of φ. Let T ≥ t be given and define f(t,x,T) on
[0,T] × ℝ^m × [0,∞) by:

   f(t,x,T) ≡ E[ φ(X_T^{t,x}) ].

Then f is differentiable in T for T > t, where with L defined in 6.23:

   ∂f/∂T (t,x,T) = E[ Lφ(X_T^{t,x}) ].                                (6.25)

In addition, f is differentiable in T at T = t, where this derivative is
interpreted as a right derivative (definition 3.8, book 3, but with limit
supremum replaced by an ordinary limit), and then:

   ∂f/∂T (t,x,T) |_{T=t} = Lφ(x).                                     (6.26)

Further, ∂f/∂T (t,x,T) is continuous in T for T ≥ t.

Proof. Using 6.24 with a change of notation:

   f(t,x,T) = φ(x) + ∫_t^T E[ Lφ(X_s^{t,x}) ] ds.                     ((*))

By the given assumptions, g(s,ω) ≡ Lφ(X_s^{t,x}) is continuous in s for
ω ∈ S and uniformly bounded in s and ω. So by the bounded convergence
theorem (proposition 2.46, book 5), E[Lφ(X_s^{t,x})] is continuous in s.
Specifically, if s_n → s, g_n(ω) ≡ Lφ(X_{s_n}^{t,x}) and
g(ω) ≡ Lφ(X_s^{t,x}), then g_n(ω) → g(ω) and |g_n(ω)| ≤ K assure that
∫ g_n dμ → ∫ g dμ.

Thus 6.25 follows for T > t by the fundamental theorem of calculus in book
3's proposition 3.2. For T = t we use (*) in the definition of the right
derivative:

   ∂f/∂T (t,x,T) |_{T=t} ≡ lim_{h→0+} [ f(t,x,t+h) − f(t,x,t) ] / h
                         = lim_{h→0+} (1/h) ∫_t^{t+h} E[ Lφ(X_s^{t,x}) ] ds.

Then 6.26 follows from continuity of the integrand by the mean value theorem
for integrals (for example, proposition 10.27, Reitano (2010)), since
X_t^{t,x} = x.

For continuity of ∂f/∂T (t,x,T) = E[Lφ(X_T^{t,x})], fix t, x and T ≥ t.
Since X_s^{t,x} is continuous, it follows that X_{T_n}^{t,x} → X_T^{t,x}
μ-a.e. if T_n → T, where this is a right limit for T = t. Now Lφ(y) is
continuous, and thus Lφ(X_{T_n}^{t,x}) → Lφ(X_T^{t,x}) μ-a.e. if T_n → T.
Since Lφ(y) is also of compact support and is thus bounded in ℝ^m,
E[Lφ(X_{T_n}^{t,x})] → E[Lφ(X_T^{t,x})] by the bounded convergence theorem
of book 5's proposition 2.46.

Remark 6.35 (Time Homogeneous Process 2) When u_i(t,y) ≡ u_i(y) and
v_{ij}(t,y) ≡ v_{ij}(y) are continuous, and thus the stochastic
differential equation in 5.8 is time homogeneous, then the solution
process X_T^{t,x} depends only on T' ≡ T − t as noted in remark 6.31. In
other words, Y_s ≡ X_{t+s}^{t,x} and Y'_s ≡ X_s^{0,x} satisfy the same
stochastic differential equation on 0 ≤ s ≤ T − t, but with different
Brownian motions. By the book 9 development on weak uniqueness, it then
follows that Y_s and Y'_s have the same finite dimensional distributions
under μ on 0 ≤ s ≤ T − t. In this case, it is common to take t = 0 and
simplify notation with f(x,T) ≡ f(0,x,T):

   f(x,T) ≡ E[ φ(X_T^x) ],

where we also simplify notation with E[φ(X_T^x)] ≡ E[φ(X_T^{0,x})].

Example 6.36 (Geometric Brownian motion) With m = 1, let the stochastic
differential equation of 5.8 be that of geometric Brownian motion as in
5.12:

   dX_t = μ X_t dt + σ X_t dB_t.

By 5.13 this has solution with X_t^{t,x} = x given on s ≥ t by:

   X_s^{t,x} = x exp( (μ − σ²/2)(s − t) + σ B_s^t ),

where B_s^t ≡ B_{s−t} or B_s^t ≡ B_s − B_t, with B_s denoting the original
Brownian motion on (S, σ(S), σ_s(S), μ)_{u.c.}. The associated differential
operator L ≡ L_X is given in 6.23 by:

   Lφ(x) ≡ μ x φ_x(x) + (1/2) σ² x² φ_{xx}(x),

and then Dynkin's formula states that for φ(y) ∈ C_0²(ℝ):

   f(t,x,T) ≡ E[ φ(X_T^{t,x}) ]
            = φ(x) + E[ ∫_t^T ( μ X_s^{t,x} φ_x(X_s^{t,x})
                + (1/2)(σ X_s^{t,x})² φ_{xx}(X_s^{t,x}) ) ds ].

6.5.1 Infinitesimal Generators

For fixed t ≥ 0 and x, let X_s^{t,x} be an m-dimensional continuous process
defined on (S, σ(S), σ_s(S), μ)_{u.c.} that is a solution on s ≥ t to the
stochastic differential equation:

   dX_s^{t,x} = u(s, X_s^{t,x}) ds + v(s, X_s^{t,x}) dB_s^t,  X_t^{t,x} = x.

As above, B_s^t denotes a shifted n-dimensional Brownian motion defined on
s ≥ t with components that satisfy B_t^{t,(j)} = 0 and
⟨B^{t,(j)}⟩_s = s − t. Since this section uses Dynkin's formula, we make the
same assumptions on the coefficient functions of the stochastic differential
equation, u_i(t,y) and v_{ij}(t,y), as stated above.
310 CHAPTER 6 SOME APPLICATIONS OF ITÔ’S LEMMA

It turns out that the operator $L$ in 6.23 above is related to another operator, defined next.

Definition 6.37 (Infinitesimal generator) With $X_s^{t,x}$ defined above, the infinitesimal generator of $X$ at $t$ and $x$, $A^{t,x} \equiv A_X^{t,x}$, also called the generator of $X$, is defined on a function $\varphi$ when the limit exists by:
$$A^{t,x}\varphi \equiv \lim_{h\to 0^+} \frac{E\left[\varphi\left(X_{t+h}^{t,x}\right)\right] - \varphi(x)}{h}. \qquad (6.27)$$
The collection of functions $\varphi$ defined on $\mathbb{R}^m$ for which this limit exists at the given $t$ and $x$ is denoted $D_A(t,x)$, while the collection of functions for which this limit exists for all $(t,x)$ is denoted by $D_A$.

Remark 6.38 (On notation and expectations) Recalling remark 6.26 on notation and expectations, 6.27 is also expressed:
$$A^{t,x}\varphi \equiv \lim_{h\to 0^+} \frac{E^{t,x}\left[\varphi\left(X_{t+h}\right)\right] - \varphi(x)}{h}. \qquad (6.28)$$
As seen in the next result, this limit is a function of both $x$ and $t$, and so one can also use the notation $A^{t,x}\varphi \equiv A\varphi(t,x)$.
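The limit in 6.27 can be illustrated numerically. The sketch below is not from the text: it estimates the difference quotient for geometric Brownian motion by Monte Carlo with a small $h$, and compares against $L\varphi(x)$ from 6.23. The test function $\varphi(y) = y^2$ and all parameter values are assumptions chosen to admit a closed-form comparison (note $y^2$ is not compactly supported, so this sidesteps the hypotheses of the next proposition purely for convenience).

```python
import numpy as np

# Sketch (assumptions throughout): estimate the generator limit in 6.27 by a
# Monte Carlo difference quotient for GBM dX = mu*X dt + sigma*X dB, with the
# illustrative test function phi(x) = x^2. From 6.23,
# L phi(x) = mu*x*(2x) + (1/2)*sigma^2*x^2*(2) = (2*mu + sigma^2)*x^2.
rng = np.random.default_rng(1)
mu, sigma, x = 0.05, 0.2, 1.0
h, n = 0.01, 1_000_000

# Exact simulation of X_h^{0,x} for GBM
X_h = x * np.exp((mu - 0.5 * sigma**2) * h + sigma * rng.normal(0, np.sqrt(h), n))

phi = lambda y: y**2
A_est = (phi(X_h).mean() - phi(x)) / h    # difference quotient of 6.27
L_phi = (2 * mu + sigma**2) * x**2        # L phi(x) from 6.23
print(A_est, L_phi)
```

The two values agree up to Monte Carlo noise and an $O(h)$ bias in the difference quotient.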

Proposition 6.39 (On when $A^{t,x}\varphi = L\varphi(x)$) Assume that all $u_i(t,y)$ and $v_{ij}(t,y)$ are continuous and bounded by $M_K < \infty$ uniformly in $t$ for $y$ in any compact set $K$. Then $C_0^2(\mathbb{R}^m) \subset D_A$, and if $\varphi \in C_0^2(\mathbb{R}^m)$:
$$A^{t,x}\varphi = L\varphi(x), \qquad (6.29)$$
with $L$ defined in 6.23. In other words, for $\varphi \in C_0^2(\mathbb{R}^m)$:
$$A^{t,x}\varphi = \sum_{i=1}^m u_i(t,x)\varphi_{x_i}(x) + \frac{1}{2}\sum_{i=1}^m\sum_{k=1}^m \left(vv^T\right)_{ik}(t,x)\varphi_{x_i x_k}(x). \qquad (6.30)$$
Proof. Since $E\left[\varphi\left(X_t^{t,x}\right)\right] = \varphi(x)$, the definition of $A^{t,x}\varphi(x)$ is equivalent to the right derivative of $E\left[\varphi\left(X_T^{t,x}\right)\right]$ with respect to $T$ at $T = t$. Thus this is a restatement of 6.26.

Remark 6.40 1. Recalling remark 6.25, if all $u_i(t,y)$ and $v_{ij}(t,y)$ are continuous and bounded by $M < \infty$ for all $y$ and $t$, then 6.29 also applies to all $\varphi \in C_b^2(\mathbb{R}^m)$.

2. For $\varphi \in C_0^2(\mathbb{R}^m)$, 6.27 and 6.29 can be combined to provide an approximation to $E\left[\varphi\left(X_{t+h}^{t,x}\right)\right]$:
$$E\left[\varphi\left(X_{t+h}^{t,x}\right)\right] = \varphi(x) + h\left[\sum_{i=1}^m u_i(t,x)\varphi_{x_i}(x) + \frac{1}{2}\sum_{i=1}^m\sum_{k=1}^m \left(vv^T\right)_{ik}(t,x)\varphi_{x_i x_k}(x)\right] + o(h),$$
recalling that an $o(h)$ error means that $o(h)/h \to 0$ as $h \to 0$.

3. When $X_s$ is a time-homogeneous Itô process (remarks 6.31 and 6.35), so $u_i(t,x) = u_i(x)$ and $v_{ij}(t,x) = v_{ij}(x)$, then $A^{t,x} = A^{0,x}$ is independent of $t$ and thus only a function of $x$. This is apparent from 6.30 if $\varphi \in C_0^2(\mathbb{R}^m)$, and in general since $E\left[\varphi\left(X_{t+h}^{t,x}\right)\right] = E\left[\varphi\left(X_h^{0,x}\right)\right]$, and thus one can also use the notation $A^{t,x}\varphi \equiv A\varphi(x)$.

6.5.2 Kolmogorov's Backward Equation 1

Kolmogorov's equations, specifically Kolmogorov's backward equation and Kolmogorov's forward equation, are named for 1931 results of Andrey Kolmogorov (1903–1987). These equations are partial differential equations which in book 9 will be seen to provide information on the spatial distributions of $X_t$ in the terminology of chapter 8 of book 6, or on the transition measures for $X_t$ in the terminology of chapter 4 of book 7. Derived almost 20 years before the Feynman-Kac representation theorem of the previous section, one version of Kolmogorov's backward equation again addresses the question of solutions to certain partial differential equations. The partial differential equations addressed are in one respect less general than those for Feynman-Kac, and Kolmogorov's backward equation is sometimes presented as an immediate consequence of this later result. But the key insight of the Kolmogorov derivation is that the formula seen in 6.20, adapted to the special case addressed, is not just a representation of a solution that is assumed to exist. The Kolmogorov theorem states that this formula is indeed a solution of the partial differential equation, and in fact the unique solution.

For example, in the special 1-dimensional case ($m = n = 1$), Kolmogorov's backward equation is the Feynman-Kac equation in 6.17 with $\gamma(t,x) = k(t,x) = 0$:
$$\frac{\partial f}{\partial t} + u(t,x)\frac{\partial f}{\partial x} + \frac{1}{2}v^2(t,x)\frac{\partial^2 f}{\partial x^2} = 0, \qquad f(T,x) = \varphi(x). \qquad (6.31)$$

In the Feynman-Kac approach, if $f(t,x)$ denotes a solution of this equation that is assumed to exist, and $X_s^{t,x}$ is a continuous process that solves the stochastic differential equation on $s \ge t$:
$$dX_s^{t,x} = u\left(s, X_s^{t,x}\right)ds + v\left(s, X_s^{t,x}\right)dB_s^t, \qquad X_t^{t,x} = x,$$
then from 6.20 this solution must have the representation $f(t,x) \equiv f(t,x;T)$ with:
$$f(t,x;T) = E\left[\varphi\left(X_T^{t,x}\right)\right].$$
The Kolmogorov result states that this is not just a representation of $f(t,x)$ assuming this solution exists, but is in fact the unique solution to this equation if $\varphi(x) \in C_0^2(\mathbb{R})$.
What is clear is that the proposed representation formula $f(t,x) = E\left[\varphi\left(X_T^{t,x}\right)\right]$ satisfies the boundary condition, since by definition $X_T^{T,x} = x$ and thus $f(T,x) = \varphi(x)$. More importantly, we would also like to be able to prove that:
$$\lim_{t\to T^-} f(t,x) = \varphi(x),$$
where this notation implies a left limit. The remarkable thing about this result, and the key to the proof, is that $E\left[\varphi\left(X_T^{t,x}\right)\right]$ is differentiable in $t$ and twice differentiable in $x$, properties we defer to book 9. This also explains the "backward" qualifier to this result, that $E\left[\varphi\left(X_T^{t,x}\right)\right]$ is a differentiable function in terms of the backward variables $(t,x)$, which define when and where the process $X_s^{t,x}$ starts.
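For a concrete instance of a solution to 6.31, the following sketch is not from the text: for geometric Brownian motion with the illustrative (and non-compactly-supported) test function $\varphi(x) = x^2$, the representation gives the closed form $f(t,x) = x^2 e^{(2\mu+\sigma^2)(T-t)}$, and we can check by finite differences that it satisfies the backward equation $f_t + u f_x + \frac{1}{2}v^2 f_{xx} = 0$ with $u(t,x) = \mu x$, $v(t,x) = \sigma x$.

```python
import numpy as np

# Sketch (assumptions: GBM coefficients u(x)=mu*x, v(x)=sigma*x and test
# function phi(x)=x^2, for which f(t,x) = E[phi(X_T^{t,x})] has the closed
# form x^2 * exp((2*mu + sigma^2)(T - t))). We verify by central finite
# differences that this f satisfies 6.31: f_t + u f_x + (1/2) v^2 f_xx = 0.
mu, sigma, T = 0.05, 0.2, 1.0
f = lambda t, x: x**2 * np.exp((2 * mu + sigma**2) * (T - t))

t, x, eps = 0.3, 1.7, 1e-5
f_t = (f(t + eps, x) - f(t - eps, x)) / (2 * eps)
f_x = (f(t, x + eps) - f(t, x - eps)) / (2 * eps)
f_xx = (f(t, x + eps) - 2 * f(t, x) + f(t, x - eps)) / eps**2
residual = f_t + mu * x * f_x + 0.5 * sigma**2 * x**2 * f_xx
print(residual)  # near zero, up to finite-difference rounding error
```

Also note the terminal condition: $f(T,x) = x^2 = \varphi(x)$, as required.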
For the result below we will derive a very special case of this result, to keep our focus on Dynkin's formula and its result that $E\left[\varphi\left(X_T^{t,x}\right)\right]$ is differentiable in $T$. In order to justify this change of parameter from $t$ to $T$, we will assume that the process $X_T^{t,x}$ is time homogeneous, so that the result stated in terms of $T$ can be restated in terms of $t$. With the aid of Dynkin's formula we will then derive a disguised formulation of Kolmogorov's result, though one commonly encountered in the literature. In this version, the desired differential operator $L$ defined in 6.23 is replaced with $A$ defined in 6.27. Of course $L = A$ when applied to functions $\varphi \in C_0^2(\mathbb{R}^m)$ by proposition 6.39, but this will largely not be applicable in the context below beyond providing the strong hint of the final result to come in book 9. See also remark 6.46.

In book 9 we will derive the final statement of Kolmogorov's backward equations in terms of $L$ for nonhomogeneous processes, as well as a second version of this equation that is satisfied by the transition measure $p(t,x;T,dy)$ of this Markov process (see remark 6.41), or more specifically

the assumed-to-exist transition density function $p(t,x;T,y)$ (definition 4.2, book 7). This second version of the backward equation identifies how the final distribution of $X_T^{t,x}$ at time $T$ changes as a function of $x$, $t$, and thus is a partial differential equation in these backward variates. We then also investigate Kolmogorov's forward equation, which fixes the initial distribution at time $t$ to be $\delta_{x_0}(x)$, defined as 1 when $x = x_0$ and 0 otherwise, and presents an identity for the emerging density function $p(T,y) \equiv p(t,x_0;T,y)$ stated in terms of forward $(T,y)$-derivatives. These results will be addressed in book 9 after it is proved that, under specified conditions, the solution to a stochastic differential equation is in fact a Markov process, a result we motivate next.

Remark 6.41 (On SDE solutions 2) Continuing with remark 6.25, here we comment further on the manner in which $X_s^{t,x}$ can be interpreted as a Markov process (chapter 4, book 7), since we need this attribute in the proof below.

Let $B_s$ be an $n$-dimensional Brownian motion defined on the natural filtered space $(S, \sigma(S), \sigma_s^B(S), \lambda)_{u.c.}$ and $Z(\omega)$ a random $m$-vector that is independent of $\sigma_\infty^B(S)$ with $E\left[|Z|^2\right] \equiv \sum_{j=1}^m E\left[\left(Z^{(j)}\right)^2\right] < \infty$. Recall definition 5.4 of book 7 on filtrations, and summary 1.25 of that book for notions of independence. Then under certain continuity and growth assumptions on $u(s,x) \equiv [u_i(s,x)]_{i=1}^m$ and $v(s,x) \equiv [v_{ij}(s,x)]_{i=1,j=1}^{m,n}$ defined on $[0,\infty)\times\mathbb{R}^m$, we will prove in book 9 that the stochastic differential equation:
$$dX_s = u(s, X_s)ds + v(s, X_s)dB_s, \qquad X_0 = Z,$$
has a strong solution that is strongly unique.

By a strong solution is meant that the $m$-dimensional process $X_s$ is continuous and adapted to $(S, \sigma(S), \sigma_s(S), \lambda)_{u.c.}$, that $\Pr[X_0 = Z] = 1$, and that for $1 \le i \le m$:
$$X_s^{(i)}(\omega) = Z^{(i)}(\omega) + \int_0^s u_i(r, X_r(\omega))dr + \sum_{j=1}^n \int_0^s v_{ij}(r, X_r(\omega))dB_r^{(j)}(\omega).$$
Implicit in this last statement is that the integrands have the necessary properties to ensure that these integrals are well-defined. Also, the filtration in $(S, \sigma(S), \sigma_s(S), \lambda)_{u.c.}$ is defined in terms of unions of $\sigma_s^B(S)$ and $\sigma(Z)$, the sigma algebra generated by $Z$, where this filtration is then made right continuous and complete to satisfy the usual conditions (see remark 5.6 of book 7 for more on this).

By strongly unique is meant that if $X_s'$ is another process with these properties, then
$$\Pr\{X_s = X_s' \text{ for all } s\} = 1.$$
Further, with the same assumptions on $u(s,x)$ and $v(s,x)$, if $B_s$ and $B_s'$ are $n$-dimensional Brownian motions defined on $(S, \sigma(S), \sigma_s(S), \lambda)_{u.c.}$ and $Z$ is as above, then the two associated strong solutions $X_s$ and $X_s'$ have the same finite dimensional distributions. Put another way, solutions to such stochastic differential equations are said to be weakly unique.

It is then also the case that given $T > 0$, $E\left[|X_s|^2\right] < \infty$ for $0 \le s \le T$.

With this set-up, let's now compare $X_s^{t,X_t}$ and $X_s \equiv X_s^{0,Z}$ on $s \ge t$. The notation $X_s^{t,X_t}$ generalizes that above in the apparent way, in that $X_t^{t,X_t} = X_t$. Then assuming $E\left[|Z|^2\right] < \infty$, there exists the strong solution:
$$X_s(\omega) = Z(\omega) + \int_0^s u(r, X_r(\omega))dr + \int_0^s v(r, X_r(\omega))dB_r(\omega)$$
$$= X_t(\omega) + \int_t^s u(r, X_r(\omega))dr + \int_t^s v(r, X_r(\omega))dB_r(\omega).$$
Since $E\left[|X_t|^2\right] < \infty$, there also exists the strong solution:
$$X_s^{t,X_t} = X_t(\omega) + \int_t^s u\left(r, X_r^{t,X_t}(\omega)\right)dr + \int_t^s v\left(r, X_r^{t,X_t}(\omega)\right)dB_r^t(\omega). \qquad (1)$$
Comparing, it follows that $X_s(\omega)$ and $X_s^{t,X_t}$ satisfy the same stochastic differential equation for $s \ge t$, with the same initial value $X_t^{t,X_t} = X_t(\omega)$, but potentially different Brownian motions.

If the shifted Brownian process $B_r^t$ in (1) is defined by $B_r^t = B_r - B_t$, then with probability 1:
$$\int_t^s v(r, X_r(\omega))dB_r(\omega) = \int_t^s v(r, X_r(\omega))dB_r^t(\omega), \quad \text{all } s \ge t.$$
This follows because the integral of any simple function over $[t,s]$ as defined in 2.13 is the same for all $\omega$ using $B_r$ or $B_r^t$. Thus the Itô integral, defined as an $L^2$-limit of such simple process integrals, agrees $\lambda$-a.e. for every rational $s$, and by continuity agrees $\lambda$-a.e. for all $s$. Thus by strong uniqueness, $X_s = X_s^{t,X_t}$ with probability 1 on $s \ge t$.
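This flow property is easy to see in a discretized setting. The sketch below is not from the text: it applies an Euler-Maruyama scheme (with illustrative coefficients $u(x) = -x$, $v(x) = 0.3$) and shows that restarting the recursion at $(t, X_t)$ while reusing the same Brownian increments, the discrete analogue of $B_r^t = B_r - B_t$, reproduces the original path's terminal value exactly.

```python
import numpy as np

# Sketch (illustrative): the discrete flow property X_s = X_s^{t,X_t} when the
# shifted Brownian motion reuses the same increments, for an Euler-Maruyama
# discretization of dX = u(X)dt + v(X)dB with assumed coefficients.
rng = np.random.default_rng(2)
u = lambda x: -x
v = lambda x: 0.3
n_steps, dt, x0 = 1000, 0.001, 1.0
dB = rng.normal(0.0, np.sqrt(dt), n_steps)   # shared Brownian increments

def euler_from(start_index, start_value):
    """Run the scheme from step start_index with the given starting value."""
    x = start_value
    for k in range(start_index, n_steps):
        x = x + u(x) * dt + v(x) * dB[k]
    return x

X_T = euler_from(0, x0)        # path started at time 0, run to time 1
X_t = euler_from(0, x0) if False else x0
for k in range(500):           # value of the same path at t = 0.5
    X_t = X_t + u(X_t) * dt + v(X_t) * dB[k]
restarted = euler_from(500, X_t)   # restart at (t, X_t) with same increments
print(X_T == restarted)        # identical float operations => exact equality
```

With a genuinely different Brownian motion, only equality in distribution (weak uniqueness) would survive, which is the point of the next paragraph.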

When $B_r^t$ is defined more generally, say $B_r^t = B_{r-t}$ using the original Brownian motion, or $B_r^t = B_r' - B_t'$ or $B_r^t = B_{r-t}'$ with a different Brownian motion, et cetera, then the equation in (1) again has a strong solution, and by weak uniqueness it follows that $X_s =_{FDD} X_s^{t,X_t}$ on $s \ge t$, meaning these processes have the same finite dimensional distributions.

Without attempting to repeat the book 9 development, this is perhaps enough to appreciate that the solutions of such an SDE can be "identified" with a Markov process, and indeed a diffusion (chapter 4, book 7) with additional assumptions on the coefficient functions $u(s,x)$ and $v(s,x)$. This identification is made between the continuous solutions of this equation and the space of continuous functions (i.e., transformations) $G: \mathbb{R}^+ \to \mathbb{R}^m$, where we impose a Markov probability structure on this latter space using a generalization of the measures $\lambda_{t,x}$ of remark 6.26.

We need this insight to justify the application of 4.13 of book 7 in the following proof. This result states that for Borel measurable $\varphi(x)$ and $t, h > 0$:
$$E^{h,X_h}\left[\varphi\left(X_{t+h}\right)\right] = E^{0,x}\left[\varphi\left(X_{t+h}\right) \mid \sigma_h(S)\right]. \qquad (2)$$
In the notation of remark 6.26,
$$E^{h,X_h}\left[\varphi\left(X_{t+h}\right)\right] \equiv E\left[\varphi\left(X_{t+h}^{h,X_h}\right)\right],$$
where $X_s^{h,X_h}$ is the solution to the above equation on $s \ge h$ with $X_h^{h,X_h} = X_h$. Similarly:
$$E^{0,x}\left[\varphi\left(X_{t+h}\right) \mid \sigma_h(S)\right] \equiv E\left[\varphi\left(X_{t+h}^{0,x}\right) \mid \sigma_h(S)\right],$$
where $X_s^{0,x}$ is the solution to the above equation on $s \ge 0$ with $X_0^{0,x} = x$, and the conditional expectation is as given in definition 5.19 of book 6.

Notation 6.42 (On time, T vs. t) For the following proposition, the notation in 6.32 is not standard for homogeneous processes, and one often sees $T$ in this statement replaced by $t$. See for example Øksendal (1998). However, the notation below is consistent with the more general notation above and that in Dynkin's formula, which we apply, and will facilitate our transition to the alternative presentation of this result in proposition 6.45 below in terms of $t$.

Proposition 6.43 (Kolmogorov's backward equation) For $x \in \mathbb{R}^m$, let $X_s^{0,x}$ be an $m$-dimensional continuous process defined on $(S, \sigma(S), \sigma_s(S), \lambda)_{u.c.}$ that is a solution on $s \ge 0$ to the homogeneous stochastic differential equation:
$$dX_s^{0,x} = u\left(X_s^{0,x}\right)ds + v\left(X_s^{0,x}\right)dB_s, \qquad X_0^{0,x} = x,$$
where $B_s$ is an $n$-dimensional Brownian motion defined on this space. Assume that $\varphi(x) \in C_0^2(\mathbb{R}^m)$, and that all $u(x) \equiv [u_i(x)]_{i=1}^m$ and $v(x) \equiv [v_{ij}(x)]_{i=1,j=1}^{m,n}$ are continuous and bounded by $M < \infty$ for $x$ in the compact support of $\varphi$. Define $g(T,x)$ on $[0,\infty)\times\mathbb{R}^m$ by:
$$g(T,x) \equiv E\left[\varphi\left(X_T^{0,x}\right)\right], \qquad (6.32)$$
noting that $g(T,x)$ is also bounded by $\max|\varphi(x)|$.

Let $Ag(T,x) \equiv A^{0,x}g$ as defined in 6.27, with $g(T,\cdot)$ treated as a function of $x$. Then $g(T,x) \in D_A$ for each $T \ge 0$, and $g(T,x)$ satisfies:
$$\frac{\partial g}{\partial T}(T,x) = Ag(T,x), \qquad (6.33)$$
with initial boundary value:
$$\lim_{T\to 0^+} g(T,x) = \varphi(x). \qquad (6.34)$$
Further, $g(T,x)$ is unique. That is, if $h(T,x) \in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ is bounded and satisfies 6.33 and 6.34, then $h(T,x) = g(T,x)$.
Proof. Apply the operator $A \equiv A^{0,x}$ in the notation of 6.28 to $g(T,x)$ as a function of $x$:
$$Ag(T,x) = \lim_{h\to 0^+} \frac{E^{0,x}[g(T, X_h)] - g(T,x)}{h}.$$
As noted in remark 6.35, homogeneity of the process yields that the distribution of $X_T^{t,x}$ is determined by $T - t$, and thus:
$$g(T, X_h) \equiv E^{0,X_h}\left[\varphi\left(X_T\right)\right] = E^{h,X_h}\left[\varphi\left(X_{T+h}\right)\right].$$
Then as a Markov process, an application of 4.13 of book 7 obtains (recall (2) of remark 6.41):
$$E^{h,X_h}\left[\varphi\left(X_{T+h}\right)\right] = E^{0,x}\left[\varphi\left(X_{T+h}\right) \mid \sigma_h(S)\right].$$
Finally, by the tower property of conditional expectations (proposition 5.26, book 6):
$$E^{0,x}[g(T, X_h)] = E^{0,x}\left[E^{0,x}\left[\varphi\left(X_{T+h}\right) \mid \sigma_h(S)\right]\right] = E^{0,x}\left[\varphi\left(X_{T+h}\right)\right] \equiv g(T+h, x).$$
Combining:
$$Ag(T,x) = \lim_{h\to 0^+} \frac{E^{0,x}[g(T, X_h)] - g(T,x)}{h} = \lim_{h\to 0^+} \frac{g(T+h, x) - g(T,x)}{h}.$$
Now $\partial g/\partial T$ exists for all $T \ge 0$ by corollary 6.34, and so the limit above exists. This proves that $g(T,x) \in D_A$ for each $T \ge 0$ and that 6.33 is satisfied. The boundary value in 6.34 also follows from corollary 6.34, since $\frac{\partial g}{\partial T}(T,x)$ is continuous in $T \ge 0$ and $E\left[\varphi\left(X_0^{0,x}\right)\right] = \varphi(x)$.

To prove uniqueness, let bounded $h(T,x) \in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ be given that satisfies 6.33 and 6.34. Fix $(T,x) \in [0,\infty)\times\mathbb{R}^m$ and define a continuous process $Y_s$ in $\mathbb{R}^{m+1}$ for $s \ge 0$ by:
$$Y_s \equiv \left(T - s, X_s^{0,x}\right).$$
Denoting the components of $Y_s$ by $\{Y_s^{(j)}\}_{j=0}^m$, the stochastic differential equations for $\{Y_s^{(j)}\}_{j=1}^m$ are the same as those for $\{X_s^{(j)}\}_{j=1}^m$ with $Y_0^{(j)} = x_j$, while the equation for $Y_s^{(0)} \equiv T - s$ is $dY_s^{(0)} = -ds$, $Y_0^{(0)} = T$. Thus by 6.29, the differential operator $L_Y$ associated with $Y_s$ in Dynkin's formula is given by:
$$L_Y = -d/dT + L,$$
with $L$ the differential operator associated with $X_s$ and given in 6.23.

Now let $C_n = \{y \mid |y| \le n\} \subset \mathbb{R}^{m+1}$ and define the stopping time (proposition 5.60, book 7) $\tau_n = \inf\{s \mid Y_s \notin C_n\}$. As $E[s \wedge \tau_n] < \infty$ for any $s$, we can apply Dynkin's formula in 6.22 to $h(y) \in C^2\left(\mathbb{R}^{m+1}\right)$. Further, since $h(Y_s) = h\left(T - s, X_s^{0,x}\right)$ and $Y_s^{(0)} \equiv T - s$ is of bounded variation, this formula then applies to $h(s,x) \in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ by remark 5.6. Thus:
$$E^{0,x}\left[h\left(Y_{s\wedge\tau_n}\right)\right] = h(Y_0) + E^{0,x}\left[\int_0^{s\wedge\tau_n} L_Y h(Y_r)dr\right] = h(T,x) + E^{0,x}\left[\int_0^{s\wedge\tau_n} (-d/dT + L)h(Y_r)dr\right].$$
Applying exercise 6.44, let $h_k(s,x) \in C_0^{1,2}([0,\infty)\times\mathbb{R}^m)$ with $|h_k(s,x)| \le |h(s,x)|$ and $h_k(s,x) = h(s,x)$ for $|(s,x)| \le k$. Then the above formula also applies to $h_k(s,x)$ to obtain:
$$E^{0,x}\left[h_k\left(Y_{s\wedge\tau_n}\right)\right] = h_k(T,x) + E^{0,x}\left[\int_0^{s\wedge\tau_n} (-d/dT + A)h_k(Y_r)dr\right],$$
noting that $A = L$ on $h_k(s,x)$ for any $s$ by 6.29. Now by assumption on $h(T,x)$, for $|Y_r| \le k$:
$$(-d/dT + A)h_k(Y_r) = (-d/dT + A)h(Y_r) = 0.$$
Choosing $k > n$, say $k = 2n$, obtains:
$$E^{0,x}\left[h_{2n}\left(Y_{s\wedge\tau_n}\right)\right] = h_{2n}(T,x).$$
Recalling that $h$ is bounded, the bounded convergence theorem (proposition 2.46, book 5) yields as $n \to \infty$:
$$E^{0,x}[h(Y_s)] = h(T,x).$$
This identity is valid for all $s$, so letting $s = T$ obtains:
$$h(T,x) = E^{0,x}[h(Y_T)] = E^{0,x}[h(0, X_T)] = E^{0,x}\left[\varphi\left(X_T\right)\right],$$
and thus $h(T,x) = g(T,x)$.


h i
Exercise 6.44 On Rn de…ne the function '(x) = c exp 1= jxj2 1 for
P 2
jxj2 xj 1; and '(x) = 0 otherwise. Check that '(x) 2 C01 (Rn ) :
Hint: It has support on jxj2 1 by de…nition, and it is enough to check that
the function f (t) = exp [1=t] for t < 0 and f (t) = 0 otherwise, is in…nitely
m
di¤ erentiable and f (m) (t) ddtmf ! 0 as t ! 0 for all m:
Now given k 2; R de…ne k (x) = 1 for jxj k and k (x) = 0 otherwise.
Choosing c so that '(x)dx = 1; de…ne:
Z
gk (x) = 2k (y)'(x y)dy:

Now show that gk (x) 2 C01 (Rn ) ; jgk (x)j 1; and that gk (x) 1 for jxj k:
Then in the above proof, hk = gk h: Hint: Note that by de…nition of '(x)
that: Z
gk (x) = 2k (x)'(y x)dy:
jx yj 1
Justify di¤ erentiating under the integral using section 2.4, book 5:
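The behavior demanded of $\varphi$ at the boundary of its support can be seen numerically. The sketch below is not from the text: it evaluates the one-dimensional bump function (with the normalizing constant $c$ omitted) and observes that both its values and its difference-quotient derivative collapse to 0 as $x \to 1^-$, consistent with $C^\infty$ smoothness at the edge of the support.

```python
import math

# Sketch (illustrative): the 1-d bump function of exercise 6.44,
# phi(x) = exp(1/(x^2 - 1)) for |x| < 1 and phi(x) = 0 otherwise, with the
# normalizing constant c omitted. Values and derivatives flatten to 0 at the
# boundary of the support.
def phi(x):
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# phi collapses extremely fast as x -> 1^-
vals = [phi(1.0 - 10.0 ** (-j)) for j in (1, 2, 3)]

# Central difference quotients of phi approaching the boundary shrink as well
eps = 1e-6
dq = [(phi(x + eps) - phi(x - eps)) / (2 * eps) for x in (0.9, 0.99, 0.999)]
print(vals, dq)
```

This is exactly the mechanism behind the hint: every derivative of $f(t) = e^{1/t}$ for $t < 0$ is $e^{1/t}$ times a rational function of $t$, which the exponential crushes as $t \to 0^-$.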

Proposition 6.45 (Kolmogorov's backward equation) For $t \ge 0$ and $x$, let $X_s^{t,x}$ be an $m$-dimensional continuous process defined on the filtered space $(S, \sigma(S), \sigma_s(S), \lambda)_{u.c.}$ that is a solution on $s \ge t$ to the stochastic differential equation:
$$dX_s^{t,x} = u\left(X_s^{t,x}\right)ds + v\left(X_s^{t,x}\right)dB_s^t, \qquad X_t^{t,x} = x.$$
As above, $B_s^t$ denotes a shifted $n$-dimensional Brownian motion defined on this space for $s \ge t$, with components that satisfy $B_t^{t,(j)} = 0$ and $\left\langle B^{t,(j)}\right\rangle_s = s - t$. Assume that $\varphi(x) \in C_0^2(\mathbb{R}^m)$, and that all $u(x) \equiv [u_i(x)]_{i=1}^m$ and $v(x) \equiv [v_{ij}(x)]_{i=1,j=1}^{m,n}$ are continuous and bounded by $M < \infty$ for $x$ in the compact support of $\varphi$. Let $T > 0$ be given and define $f(t,x)$ on $[0,T)\times\mathbb{R}^m$ by:
$$f(t,x) \equiv E\left[\varphi\left(X_T^{t,x}\right)\right], \qquad (6.35)$$
noting that $f(t,x)$ is bounded by $\max|\varphi(x)|$.

Let $Af(t,x) \equiv A^{t,x}f$ as defined in 6.27, with $f(t,\cdot)$ treated as a function of $x$. Then $f(t,x) \in D_A$ for each $0 \le t < T$, and $f(t,x)$ satisfies:
$$\frac{\partial f}{\partial t}(t,x) = -Af(t,x), \qquad (6.36)$$
with boundary value:
$$\lim_{t\to T^-} f(t,x) = \varphi(x). \qquad (6.37)$$
Further, $f(t,x)$ is unique. That is, if $h(t,x) \in C^{1,2}([0,\infty)\times\mathbb{R}^m)$ is bounded and satisfies 6.36 and 6.37, then $h(t,x) = f(t,x)$.
Proof. By homogeneity of the process, the distribution of $X_T^{t,x}$ is determined by $T - t$. Thus:
$$f(t,x) \equiv E\left[\varphi\left(X_T^{t,x}\right)\right] = E\left[\varphi\left(X_{T-t}^{0,x}\right)\right] = g(T - t, x),$$
where $g(s,x)$ is defined in 6.32. Then $Af(t,x) = Ag(T - t, x)$ and:
$$\frac{\partial f}{\partial t}(t,x) = -\frac{\partial g}{\partial T}(T - t, x),$$
so this result follows from proposition 6.43. Details are left as an exercise.
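The homogeneity identity $f(t,x) = g(T - t, x)$ at the heart of this proof can be checked by simulation. The sketch below is not from the text: it uses geometric Brownian motion with a bounded test function $\varphi(y) = e^{-(y-1)^2}$ (an assumption; the proposition's compact-support hypothesis is relaxed for convenience), comparing $E[\varphi(X_T^{t,x})]$ built from a shifted Brownian motion with $E[\varphi(X_{T-t}^{0,x})]$ built from an independent one.

```python
import numpy as np

# Sketch (illustrative): f(t,x) = g(T-t, x) for GBM checked by Monte Carlo.
# Test function phi(y) = exp(-(y-1)^2) and all parameter values are assumed.
rng = np.random.default_rng(3)
mu, sigma, x, t, T, n = 0.05, 0.2, 1.0, 0.4, 1.0, 500_000
phi = lambda y: np.exp(-((y - 1.0) ** 2))

# E[phi(X_T^{t,x})] built from the shifted Brownian motion B^t_s = B_s - B_t
B_t = rng.normal(0.0, np.sqrt(t), n)
B_T = B_t + rng.normal(0.0, np.sqrt(T - t), n)
X_T_tx = x * np.exp((mu - 0.5 * sigma**2) * (T - t) + sigma * (B_T - B_t))
f_tx = phi(X_T_tx).mean()

# E[phi(X_{T-t}^{0,x})] built from an independent Brownian motion on [0, T-t]
Z = rng.normal(0.0, np.sqrt(T - t), n)
g_val = phi(x * np.exp((mu - 0.5 * sigma**2) * (T - t) + sigma * Z)).mean()
print(f_tx, g_val)  # the two sample means should nearly agree
```

Only the elapsed time $T - t$ matters, as the proof asserts; the particular Brownian motion driving the solution affects only the path, not the distribution.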

Remark 6.46 (On Kolmogorov's backward "differential" equation) While providing a nice application of Dynkin's formula, the versions of Kolmogorov's backward equation in 6.33 and 6.36, while often seen in the references, are hardly partial differential equations. But they are nearly PDEs. Playing fast and loose, if we simply "misuse" the $A = L$ result of proposition 6.39, then 6.36 becomes:
$$\frac{\partial f}{\partial t}(t,x) = -\sum_{i=1}^m u_i(x)f_{x_i}(t,x) - \frac{1}{2}\sum_{i=1}^m\sum_{k=1}^m \left(vv^T\right)_{ik}(x)f_{x_i x_k}(t,x),$$
a partial differential equation indeed. And this will be the final form of this equation derived in book 9 for the homogeneous process of proposition 6.45, as well as for the general case of $u(t,x) \equiv [u_i(t,x)]_{i=1}^m$ and $v(t,x) \equiv [v_{ij}(t,x)]_{i=1,j=1}^{m,n}$ with appropriate assumptions.

Of course we cannot justify the substitution $A = L$ even in this homogeneous case. First, proposition 6.39 requires compact support of the function $f(t,x)$ for each $t$, and in general we have no justification for this for any $t$. As noted in remark 6.33, this support assumption can be removed by assuming that $u(x) \equiv [u_i(x)]_{i=1}^m$ and $v(x) \equiv [v_{ij}(x)]_{i=1,j=1}^{m,n}$ are globally bounded in $x$, a reasonable assumption. But then we must also be able to demonstrate that $f(t,x)$ is twice continuously differentiable in $x$ for each $t$.

As it turns out, the most subtle and difficult part of the proof of Kolmogorov's backward equation in the above form is the proof that $f(t,x;T) \equiv E\left[\varphi\left(X_T^{t,x}\right)\right]$ is continuously differentiable in $t$ and twice continuously differentiable in $x$ when $\varphi(x) \in C_0^2(\mathbb{R}^m)$. Differentiability in $t$ is assured in the homogeneous case by Dynkin's formula as derived above, while a direct derivation is needed in the general case. For differentiability in $x$, this is no easier to demonstrate in the homogeneous case than in the general case.

Differentiability of $f(t,x)$ with respect to $x$ will require differentiability assumptions on the coefficient functions $u(t,x)$ and $v(t,x)$, as well as a generalization of Itô's existence and uniqueness theory for stochastic differential equations to accommodate coefficient functions that also depend on $\omega \in S$, and thus are of the form $u_i(t,x,\omega)$ and $v_{ij}(t,x,\omega)$. The details of this $t$- and $x$-differentiability can be found in Friedman (1975), where it is assumed that all first and second $x$-derivatives of the coefficient functions are uniformly bounded by polynomials in $x$ such as $(1 + |x|)^k$ for $k > 0$. The result above is there also generalized to $\varphi(x) \in C^2(\mathbb{R}^m)$ when derivatives of $\varphi$ are similarly bounded.

We will return to this discussion in book 9.
References

I have listed below a number of textbook references for the mathematics and finance presented in this series of books. All provide both theoretical and applied materials in their respective areas that are beyond those developed here and are worth pursuing by those interested in gaining a greater depth or breadth of knowledge. This list is by no means complete and is intended only as a guide to further study. In addition, these references include various published research papers if they have been identified in this book's chapters.

The reader will no doubt observe that the mathematics references are somewhat older than the finance references, and upon web searching will find that several of the older texts in each category have been updated to newer editions, sometimes with additional authors. Since I own and use the editions below, I decided to present these editions rather than reference the newer editions, which I have not reviewed. As many of these older texts are considered "classics", they are also likely to be found in university and other libraries.

That said, there are undoubtedly many very good new texts by both new and established authors with similar titles that are also worth investigating. One that I will, at the risk of immodesty, recommend for more introductory materials on mathematics, probability theory and finance is:

0. Reitano, Robert R. Introduction to Quantitative Finance: A Math Tool Kit. Cambridge, MA: The MIT Press, 2010.

Topology, Measure, Integration, Linear Algebra


1. Dugundji, James. Topology. Boston, MA: Allyn and Bacon, 1970.

2. Doob, J. L. Measure Theory. New York, NY: Springer-Verlag, 1994.


3. Edwards, Jr., C. H. Advanced Calculus of Several Variables. New


York, NY: Academic Press, 1973.

4. Gemignani, M. C. Elementary Topology. Reading, MA: Addison-


Wesley Publishing, 1967.

5. Halmos, Paul R. Measure Theory. New York, NY: D. Van Nostrand,


1950.

6. Hewitt, Edwin, and Karl Stromberg. Real and Abstract Analysis. New
York, NY: Springer-Verlag, 1965.

7. Royden, H. L. Real Analysis, 2nd Edition. New York, NY: The MacMil-
lan Company, 1971.

8. Rudin, Walter. Principles of Mathematical Analysis, 3rd Edition.


New York, NY: McGraw-Hill, 1976.

9. Rudin, Walter. Real and Complex Analysis, 2nd Edition. New York,
NY: McGraw-Hill, 1974.

10. Shilov, G. E., and B. L. Gurevich. Integral, Measure & Derivative: A


Unified Approach. New York, NY: Dover Publications, 1977.

11. Strang, Gilbert. Introduction to Linear Algebra, 4th Edition. Welles-


ley, MA: Cambridge Press, 2009.
Probability Theory & Stochastic Processes
12. Billingsley, Patrick. Probability and Measure, 3rd Edition. New York,
NY: John Wiley & Sons, 1995.

13. Chung, K. L., and R. J. Williams. Introduction to Stochastic Integra-


tion. Boston, MA: Birkhäuser, 1983.

14. Davidson, James. Stochastic Limit Theory. New York, NY: Oxford
University Press, 1997.

15. de Haan, Laurens, and Ana Ferreira. Extreme Value Theory, An In-
troduction. New York, NY: Springer Science, 2006.

16. Durrett, Richard. Probability: Theory and Examples, 2nd Edition.


Belmont, CA: Wadsworth Publishing, 1996.

17. Durrett, Richard. Stochastic Calculus, A Practical Introduction. Boca


Raton, FL: CRC Press, 1996.

18. Feller, William. An Introduction to Probability Theory and Its Appli-


cations, Volume I. New York, NY: John Wiley & Sons, 1968.

19. Feller, William. An Introduction to Probability Theory and Its Appli-


cations, Volume II, 2nd Edition. New York, NY: John Wiley & Sons,
1971.

20. Friedman, Avner. Stochastic Differential Equations and Applications, Volume 1 and 2. New York, NY: Academic Press, 1975.

21. Ikeda, Nobuyuki, and Shinzo Watanabe. Stochastic Differential Equations and Diffusion Processes. Tokyo, Japan: Kodansha Scientific, 1981.

22. Karatzas, Ioannis, and Steven E. Shreve. Brownian Motion and Stochastic Calculus. New York, NY: Springer-Verlag, 1988.

23. Kloeden, Peter E., and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. New York, NY: Springer-Verlag, 1992.

24. Lowther, George, Almost Sure, A Maths Blog on Stochastic Calculus,


https://almostsure.wordpress.com/stochastic-calculus/

25. Lukacs, Eugene. Characteristic Functions. New York, NY: Hafner


Publishing, 1960.

26. Nelsen, Roger B. An Introduction to Copulas, 2nd Edition. New York,


NY: Springer Science, 2006.

27. Øksendal, Bernt. Stochastic Differential Equations, An Introduction with Applications, 5th Edition. New York, NY: Springer-Verlag, 1998.

28. Protter, Phillip. Stochastic Integration and Differential Equations, A New Approach. New York, NY: Springer-Verlag, 1992.

29. Revuz, Daniel, and Marc Yor. Continuous Martingales and Brownian Motion, 3rd Edition. New York, NY: Springer-Verlag, 1991.

30. Rogers, L. C. G., and D. Williams. Diffusions, Markov Processes and Martingales, Volume 1, Foundations, 2nd Edition. Cambridge, UK: Cambridge University Press, 2000.

31. Rogers, L. C. G., and D. Williams. Diffusions, Markov Processes and Martingales, Volume 2, Itô Calculus, 2nd Edition. Cambridge, UK: Cambridge University Press, 2000.

32. Sato, Ken-Iti. Lévy Processes and Infinitely Divisible Distributions. Cambridge, UK: Cambridge University Press, 1999.

33. Schilling, René L., and Lothar Partzsch. Brownian Motion: An Introduction to Stochastic Processes, 2nd Edition. Berlin/Boston: Walter de Gruyter GmbH, 2014.

34. Schuss, Zeev. Theory and Applications of Stochastic Differential Equations. New York, NY: John Wiley and Sons, 1980.
Finance Applications
35. Etheridge, Alison. A Course in Financial Calculus. Cambridge, UK:
Cambridge University Press, 2002.

36. Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. Mod-


elling Extremal Events for Insurance and Finance. New York, NY:
Springer-Verlag, 1997.

37. Hunt, P. J., and J. E. Kennedy. Financial Derivatives in Theory and


Practice, Revised Edition. Chichester, UK: John Wiley & Sons, 2004.

38. McLeish, Don L. Monte Carlo Simulation and Finance. New York,
NY: John Wiley, 2005.

39. McNeil, Alexander J., Rüdiger Frey, and Paul Embrechts. Quantita-
tive Risk Management: Concepts, Techniques, and Tools. Princeton,
NJ.: Princeton University Press, 2005.
Research Papers/Books for Book 8
40. Bichteler, Klaus. "Stochastic Integration and L^p-Theory of Semimartingales." Ann. Probab. 9, no. 1, 49–89, 1981.

41. Cherny, Alexander. "Some Particular Problems of Martingale The-


ory." From Stochastic Calculus to Mathematical Finance: The Shiryaev
Festschrift, Editors: Kabanov, Yu., Liptser, R., Stoyanov, J. Berlin,
Heidelberg: Springer-Verlag, 2006.

42. Dambis, K.E. "On the decomposition of continuous submartingales."


Theor. Probab. Appl. 10, 438–448, 1965.

43. Dellacherie, C. "Un survol de la théorie de l’intégrale stochastique."


Stochastic Process. Appl. 10, 115–144, 1980.

44. Dubins, L. E. and Schwarz, G. "On continuous martingales." Proc.


Nat. Acad. Sci. USA 53, 913–916, 1965.

45. Itô, Kiyosi. "On a formula concerning stochastic differentials." Nagoya Mathematical Journal 3, 55–65, 1951.

46. Itô, Kiyosi. "On a stochastic integral equation". Proceedings of the


Japan Academy. 22 (2), 32–35, 1946.

47. Itô, Kiyosi. "Stochastic integral". Proceedings of the Imperial Acad-


emy. 20 (8), 519–524, 1944.

48. Kunita, Hiroshi and Shinzo Watanabe. "On square integrable martin-
gales." Nagoya Math. J. 30, 209-245, 1967.
Index

Associative law of stochastic integration, 170, 173
associative law of stochastic integration, 135

bounded variation process, 185
    m-dimensional, 212
Brownian Motion
    Lévy's characterization, 256
Brownian motion
    definition, 12
    on a general filtered space, 20
Burkholder, Donald L.
    Burkholder-Davis-Gundy inequality, 276
Burkholder-Davis-Gundy inequality, 276

càdlàg, 233
càdlàg (French)
    "continu à droite, limite à gauche", 187
C^{1,2}([0,∞) × R^m), 218
Cholesky, André-Louis
    Cholesky decomposition, 16

Dambis, K. E.
    time-changed BM, 267
Davis, Burgess
    Burkholder-Davis-Gundy inequality, 276
Dellacherie, Claude, 80
Doeblin, Wolfgang
    time-changed BM, 267
Doléans-Dade, Catherine A.
    Doléans measure, 88
dominated, 277
Doob, J. L.
    Doob-Meyer decomposition theorem, 233
Dubins, Lester E.
    time-changed BM, 267
Dynkin, Eugene B.
    Dynkin's formula, 301

exponential martingale, 291, 292

Feynman, Richard
    Feynman-Kac representation, 293
Feynman-Kac representation theorem, 234

geometric Brownian motion, 208, 231, 286, 300
gradient
    of a function, 250
greatest integer function, 187
Gundy, Richard F.
    Burkholder-Davis-Gundy inequality, 276

H_2([0,∞) × S)
    adapted, measurable, L²-bounded, 41
    simple process approximations, 44
H_2^M([0,∞) × S)
    predictable, L²-bounded, 91
    simple process approximations, 94
H_{2,loc}^M integral - Final definition, 155
H_2^M integral - Final definition, 109
H_{2,loc}^{bP(m×n)}([0,∞) × S)
    m×n-dim, predictable, locally bounded, 212
H_{2,loc}^{B(1×n)}([0,∞) × S)
    n-dim, predictable, locally L²-bounded, 210
H_{2,loc}^{B(m×n)}([0,∞) × S)
    m×n-dim, predictable, locally L²-bounded, 210
H_{2,loc}^M([0,∞) × S)
    predictable, locally L²-bounded, 149
H_{loc}^{bP(1×n)}([0,∞) × S)
    n-dim, predictable, locally bounded, 212
H_{loc}^{bP}([0,∞) × S)
    predictable, locally bounded, 195
Hesse, Ludwig Otto
    Hessian matrix, 249

independent random vectors, 13
infinitesimal generator, 309
isometry, 38
Itô, Kiyoshi
    Itô integral, xiv, 5, 27, 39
Itô Integral - Final definition, 65
Itô Integration
    via Riemann Sums, 65
Itô Isometry
    Itô M-Isometry, 100, 103, 162
    Itô Isometry, 36
    Itô isometry, 56
Itô's Isometry
    Itô M-Isometry, 107
Itô process, 227, 245, 259, 263
Itô diffusion, 228, 247, 296, 299, 301
Itô's lemma, 215

Kac, Mark
    Feynman-Kac representation, 293
Kolmogorov, Andrey
    Kolmogorov's backward equation, 311
Kunita, Hiroshi
    Kunita-Watanabe inequality, 109, 165
Kunita-Watanabe inequality, 109, 117, 165
    Kunita-Watanabe identity, 132

Lévy's characterization
    of Brownian Motion, 256
Lagrange, Joseph-Louis
    Lagrange form of remainder, 222, 240
Laplace, Pierre-Simon
    Laplacian, 287
Lenglart, E.
    Lenglart's inequalities, 277
local martingale, 148
    m-dimensional, 212
locally bounded predictable process, 195

M_2
    continuous L²-bounded martingales, 89
M_loc
    continuous local martingales, 148, 212
MS_loc
    continuous semimartingales, 181, 212
martingale
    L²-bounded, 84, 89
    L²-martingale, 36, 89
    local martingale, 148
    semimartingale, 181
    submartingale, 144, 233
measure
    mutually singular, 115
    signed, 110
    total variation measure, 114
Measures Induced by BV Functions, 110
mesh size, 28
Meyer, Paul-André
    Doob-Meyer decomposition theorem, 233
multivariate normal distribution, 13

partial differential equation
    elliptic, 287
    parabolic, 288, 293

Quadratic variation process
    Brownian motion, 27

right continuous inverse, 269

Schwarz, Gideon E.
    time-changed BM, 267
semimartingale, 181
    m-dimensional, 212
signed measure, 110
simple process, 24
stochastic differential equation, 248
    SDE, 228
Stochastic dominated convergence theorem, 175, 201
Stochastic Integrals
    via Riemann Sums, 74, 138, 178, 203
Stochastic integration by parts, 204
Stratonovich, Ruslan
    Stratonovich integral, 27

time homogeneous
    SDE, 302
total variation process, 185

Watanabe, Shinzo
    Kunita-Watanabe inequality, 109, 165
Wiener, Norbert
    Wiener Integral, 76