Book 8
Robert R. Reitano
Brandeis International Business School
Waltham, MA 02454
July, 2021
Copyright © 2021 by Robert R. Reitano
Brandeis International Business School
Preface
The idea for a reference book on the mathematical foundations of quantitative finance has been with me throughout my career in this field. But the urge to begin writing it didn't materialize until shortly after completing my first book, Introduction to Quantitative Finance: A Math Tool Kit, in 2010.

The one goal I had for this reference book was that it would be complete and detailed in the development of the many materials one finds referenced in the various areas of quantitative finance. The one constraint I realized from the beginning was that I could not accomplish this goal, plus write a complete survey of the quantitative finance applications of these materials, in the 700 or so pages that I budgeted for myself for my first book. Little did I know at the time that this project would require a multiple of this initial page count budget even without detailed finance applications.
I was never concerned about the omission of the details on applications to quantitative finance because there are already a great many books in this area that develop these applications very well. The one shortcoming I perceived many such books to have is that they are written at a level of mathematical sophistication that requires a reader to have significant formal training in mathematics, as well as the time and energy to fill in omitted details. While such a task would provide a challenging and perhaps welcome exercise for more advanced graduate students in this field, it is likely to be less welcome to many other students and practitioners. It is also the case that quantitative finance has grown to utilize advanced mathematical theories from a number of fields. While there are also a great many very good references on these subjects, most are again written at a level that does not in my experience characterize the backgrounds of most students and practitioners of quantitative finance.
So over the past several years I have been drafting this reference book, accumulating the mathematical theories I have encountered in my work in this field, and then attempting to integrate them into a coherent collection of books that develops the necessary ideas in some detail. My target readers would be quantitatively literate to the extent of familiarity, indeed comfort, with the materials and formal developments in my first book, and sufficiently motivated to identify and then navigate the details of the materials they were attempting to master. Unfortunately, adding these details supports learning but also increases the lengths of the various developments. But this book was never intended to provide a "cover-to-cover" reading challenge, but rather to be a reference book in which one could find detailed foundational materials in a variety of areas that support current questions and further studies in quantitative finance.
Over these past years, one volume turned into two, which then became a work not likely publishable in the traditional channels given its unforgiving size and likely limited target audience. So I have instead decided to self-publish this work, converting the original chapters into stand-alone books, of which there are now nine. My goal is to finalize each book over the coming year or two.

I hope these books serve you well.

I am grateful for the support of my family: Lisa, Michael, David, and Jeffrey, as well as the support of friends and colleagues at Brandeis International Business School.

Robert R. Reitano
Brandeis International Business School
to Lisa
Introduction
This is the eighth book in a series of nine that will be self-published under the collective title of Foundations of Quantitative Finance. Each book in the series is intended to build from the materials in earlier books, with the first six volumes alternating between books with a more foundational mathematical perspective, which was the case with the first, third and fifth book, and books which develop probability theory and some quantitative applications to finance, the focus of the second, fourth and sixth book. This is the second of three books on stochastic processes.
The included topics have been curated from a vast mathematical and probability literature for the express purpose of supporting applications in quantitative finance. In addition, the development of these topics will be found to be at a much greater level of detail than in most advanced quantitative finance books, and certainly in more detail than most advanced mathematics and probability theory texts. Finally, and most importantly for a reference work, this series of books is extensively self-referenced. The reader can enter the volumes at any place of interest, and any earlier results utilized will be explicitly identified for easy reference.
The title of this eighth book is Itô Integration and Stochastic Calculus I. While book 7 developed properties of Brownian motion and other stochastic processes in some detail, this book sets out to begin the study of a "calculus" of such processes, with an emphasis on the associated integration theories. Subjects formally categorized under stochastic calculus but not pursued in this book are Girsanov's theorem(s), martingale representation theorems, and the study of stochastic differential equations, which are deferred to book 9.
The goal of chapter 1 is to motivate the studies in this book from the framework of quantitative finance, recalling the asset and financial derivative pricing models of book 6, and then to summarize the investigations in this and the next book from this perspective. Chapter 2 then turns to the development of the foundational integral in stochastic calculus, the Itô integral, named for Kiyoshi Itô (1915–2008), where such integrals use Brownian motion as integrators. Itô also pioneered a mathematical framework for such integrals and related concepts that is collectively known as Itô calculus.

After justifying that the proposed integral does not fit into the integration theories of books 3 and 5, the development begins as in earlier books with the Itô integral of simple processes. It is then seen that such integrals converge within an L₂-framework, where the quadratic variation results of book 7 play a prominent role. Properties of this integral are then developed, including cases where the Itô integral can be defined in terms of limits of the associated Riemann sums of earlier books.
Chapter 3 then sets out to generalize this integration theory from Brownian motion integrators to continuous local martingale integrators. While Brownian motion is indeed a continuous local martingale by book 7's corollary 5.85, this generalization will require a fair amount of machinery to substitute for the special properties of Brownian motion that general local martingales do not enjoy. In order to achieve this general result, the first part of the chapter focuses on continuous L₂-bounded martingale integrators, and then develops the needed properties to extend these results to
Stochastic Calculus: An Informal Link to Finance
Recall that T is fixed and finite, Δt ≡ T/n, and {b_i} are independent binomial variates with b_i = ±1, each with probability 1/2. At any time point t ≤ T representable as t = jT/n for integers 0 < j ≤ n, this proposition addressed the distributional limit of X_{mj}^{(mn)} as m → ∞. Since t ≡ mjT/(mn) is fixed and independent of m, X_t can be defined as a limit of X_{mj}^{(mn)}-variates defined as above, as a sum of mj binomial variates with a step size of Δt = T/(mn).

Denoting X_{mj}^{(mn)} ≡ X_t^{(m)}, the price variate at such time t for given m, this proposition proved that as m → ∞:

    (ln[X_t^{(m)}/X_0] − μt) / (σ√t) →_d Z_1 ~ N(0, 1),    (1)
With h(y) ≡ X_0 exp(σ√t y + μt), it follows that for such t = jT/n:

    X_t^{(m)} →_d X_t ≡ X_0 exp[μt + σZ_t],    (2)

with Z_t ~ N(0, t). This is a result for each such t, but more can be said with the results of book 7.
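The limit in (1)–(2) can be checked informally by simulation. The following Python sketch is an illustration only: the function name and all parameter values (X_0 = 100, μ = 0.05, σ = 0.2, T = 1, and the sample sizes) are hypothetical choices, not from the text. It samples the discrete model and confirms that ln(X_T^{(m)}/X_0) has mean near μT and variance near σ²T:

```python
import math
import random
import statistics

def binomial_price(x0, mu, sigma, t, steps, rng):
    """One sample of the discrete model X_j = X_{j-1} * exp(mu*dt + sigma*sqrt(dt)*b_j)."""
    dt = t / steps
    log_x = math.log(x0)
    for _ in range(steps):
        b = 1 if rng.random() < 0.5 else -1  # binomial variate b = +/-1, probability 1/2 each
        log_x += mu * dt + sigma * math.sqrt(dt) * b
    return math.exp(log_x)

rng = random.Random(0)
x0, mu, sigma, t = 100.0, 0.05, 0.20, 1.0
samples = [math.log(binomial_price(x0, mu, sigma, t, 256, rng) / x0) for _ in range(4000)]
# By (1)-(2), ln(X_t/X_0) should be approximately N(mu*t, sigma^2*t) = N(0.05, 0.04).
print(round(statistics.fmean(samples), 3), round(statistics.pvariance(samples), 3))
```

The sample mean and variance land near 0.05 and 0.04, consistent with the lognormal limit.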
In the notation of 1.2 of book 7, for given t = jT/n as above:

    (ln[X_t^{(m)}/X_0] − μt) / σ = B_t(Δt_m),

and:

    B_t(Δt_m) →_d Z_t,    (1.2)

where (Z_{t_1}, ..., Z_{t_k}) is multivariate normally distributed with mean vector 0 and covariance matrix C_{ij} = min(t_i, t_j). Thus the Mann–Wald theorem of book 6's proposition 4.21 and h : R^k → R^k defined componentwise with h above obtains that (2) generalizes to:

    (X_{t_1}^{(m)}, ..., X_{t_k}^{(m)}) →_d (X_0 exp[μt_1 + σZ_{t_1}], ..., X_0 exp[μt_k + σZ_{t_k}]),    (1.4)

where X_{t_j}^{(m)} is defined as in 1.3.
Book 7's proposition 1.21 summarizes that these limiting variates Z_t in 1.2 have all of the attributes of a Brownian motion B_t (definition 1.27 there) except a demonstration of continuity with probability 1 (see also remark 1.23).

1.1 Asset Model Limits
    B_t(Δt_m) →_d B_t,

and:

    X_t = X_0 exp[μt + σB_t].    (1.5)

In other words, all finite dimensional distributions of X_t^{(m)} converge in distribution to the respective finite dimensional distributions of X_0 exp[μt + σB_t]. In the notation of book 7's proposition 2.21:

    X_t^{(m)} →_{FDD} X_0 exp[μt + σB_t].

This does not necessarily imply that X_t^{(m)} →_d X_0 exp[μt + σB_t] in the sense of book 7's definition 2.17, as that book's example 2.20 illustrates.
Since the final model in 1.5 results from 1.1 as Δt → 0, it is natural to investigate if this final result can also be produced with a simpler discrete model initially that omits the exponential function. Since b_j² = 1, a Taylor series analysis of 1.1 shows that for Δt small:

    X_j^{(n)} = X_{j−1}^{(n)} exp[μΔt + σ√Δt b_j]
              = X_{j−1}^{(n)} [1 + (μ + σ²/2)Δt + σ√Δt b_j + O(Δt^{3/2})],

and so consider the model:

    Y_j^{(n)} = Y_{j−1}^{(n)} [1 + (μ + σ²/2)Δt + σ√Δt b_j] = X_0 ∏_{i=1}^{j} [1 + (μ + σ²/2)Δt + σ√Δt b_i].    (1.7)
    ln(1 + y) = y − y²/2 + O(y³),
Summary 1.1 The following models have 1.5 as limiting distribution in the sense of 1.6:

    Y_j^{(n)} = X_0 ∏_{i=1}^{j} [1 + (μ + σ²/2)Δt + σ√Δt b_i],  1 ≤ j ≤ n.    (1.9)
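That the linear model in 1.9 produces the same limit as the exponential model can again be checked by simulation. In the following Python sketch, the function name and the parameter values (X_0 = 100, μ = 0.05, σ = 0.2, T = 1) are illustrative assumptions; the check is that ln(Y_n^{(n)}/X_0) again has mean near μT and variance near σ²T:

```python
import math
import random
import statistics

def linear_model_price(x0, mu, sigma, t, steps, rng):
    """One sample of model (1.9): Y_j = Y_{j-1} * (1 + (mu + sigma^2/2)*dt + sigma*sqrt(dt)*b_j)."""
    dt = t / steps
    y = x0
    for _ in range(steps):
        b = 1 if rng.random() < 0.5 else -1
        y *= 1.0 + (mu + 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * b
    return y

rng = random.Random(1)
x0, mu, sigma, t = 100.0, 0.05, 0.20, 1.0
logs = [math.log(linear_model_price(x0, mu, sigma, t, 256, rng) / x0) for _ in range(4000)]
# Despite omitting the exponential, ln(Y_n/X_0) is again approximately N(mu*t, sigma^2*t);
# the extra sigma^2/2 drift in (1.9) is consumed by the -y^2/2 term of ln(1 + y).
print(round(statistics.fmean(logs), 3), round(statistics.pvariance(logs), 3))
```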
Further investigating the model in 1.9, but returning to notation X_j^{(n)} for simplicity:
    X_j^{(n)} = X_0 + (μ + σ²/2) Σ_{i=1}^{j} X_{i−1}^{(n)} Δt + σ Σ_{i=1}^{j} X_{i−1}^{(n)} ΔB_{i−1}.

Here:

    ΔB_{i−1} ≡ B_i(Δt) − B_{i−1}(Δt) = √Δt b_i,

is the change in a binomial path over this time interval, recalling (1.1) of book 7.
Now let Δt → 0 and imagine for a moment that the X_t price paths are continuous functions of time. We then have derived very informally, or better said we can imagine, that:

    X_t = X_0 + (μ + σ²/2) ∫_0^t X_s ds + σ ∫_0^t X_s dB_s.    (1.10)
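Before any limit is taken, the discrete identity underlying (1.10) is exact algebra: each step of model 1.9 contributes X_{i−1}[(μ + σ²/2)Δt + σΔB_{i−1}], and these contributions telescope. The following Python sketch (parameter values are illustrative assumptions) verifies this pre-limit identity on a sample path:

```python
import math
import random

# For model (1.9), the telescoping identity
#   X_n - X_0 = (mu + sigma^2/2) * sum_i X_{i-1} dt + sigma * sum_i X_{i-1} dB_i
# holds exactly, with dB_i = sqrt(dt)*b_i; (1.10) is its informal dt -> 0 limit.
rng = random.Random(2)
x0, mu, sigma, t, n = 100.0, 0.05, 0.20, 1.0, 200
dt = t / n
m = mu + 0.5 * sigma ** 2
x, db = [x0], []
for _ in range(n):
    b = 1 if rng.random() < 0.5 else -1
    db.append(math.sqrt(dt) * b)
    x.append(x[-1] * (1.0 + m * dt + sigma * db[-1]))
drift_sum = m * sum(x[i] * dt for i in range(n))      # discrete analogue of the ds integral
ito_sum = sigma * sum(x[i] * db[i] for i in range(n)) # discrete analogue of the dB integral
print(abs(x[-1] - x0 - drift_sum - ito_sum) < 1e-8)
```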
Remark 1.2 For the next chapter on the Itô integral we do not need to assume the usual conditions on the filtered probability space, but will require this for the following chapter 3, where local martingales and thus stopping times will be studied. For consistency, we assume the usual conditions throughout this book.
1.2.1 Book 8
Framed in terms of the above derivation, this book 8 will investigate:
requires yet another new theory. Certainly when v(s, ω) = 1 no new theory is needed, and it will be taken as axiomatic that under any rational definition of integral:

    ∫_0^t dB_s(ω) ≡ B_t(ω) − B_0(ω).    (2.1)
The Itô Integral
(a) Despite being definable for all integrands v(t, ω) of finite strong q-variation for q < 2, this result can't possibly be extended even to all continuous v(t, ω). For example, the integral I_t(ω) ≡ ∫_0^t B_s(ω) dB_s(ω) does not fit Young's framework, even though as noted in book 7's remark 6.8, this integral has a meaningful interpretation in the context of the Doob–Meyer decomposition theorem of proposition 6.5. More generally, by book 3's propositions 4.27 and 4.52, I_t(ω) defined above exists for all continuous v(t, ω) if and only if B_t(ω) has finite strong 2-variation. As noted above, B_t(ω) has infinite strong 2-variation.
(b) Perhaps more importantly, the Riemann–Stieltjes approach abandons the measure theoretic structure of the underlying probability space (S, σ(S), σ_t(S), μ)_{u.c.}, other than allowing for statements such as: B_t(ω) is continuous with probability 1. But the measurability properties of B_t(ω) and v(t, ω) would then play no role in this theory. Thus this approach does not allow one to investigate if or when I_t(ω) is adapted to {σ_t(S)}, for example, nor to investigate other measure theoretic properties of this integral. For example, when is I_t(ω) a local martingale?

2.1 Is a New Integration Theory Needed?
But recall that by book 1's proposition 5.7, every such Borel measure is given by an increasing, right continuous function F_ω defined by:

    F_ω(t) = μ_ω[[0, t]].

While B_t(ω) has more than enough continuity, book 7's proposition 2.83 obtains that it is nowhere monotonic, and so the integral I_t(ω) cannot be represented as a Lebesgue–Stieltjes integral.
Conclusion 2.1 The notion of an Itô integral does not fit into any of the integration theories studied in books 3 and 5.
so that:

That is, σ_t(B) is the smallest sigma algebra that contains B_s^{−1}(B(R^n)) for all s ∈ [0, t]. Recall that this is called the natural filtration for a stochastic process, and it is by definition the smallest filtration with respect to which the given stochastic process is adapted (definition 5.10, book 7).

Generalizing, B_t is a martingale relative to the general filtered space (S, σ(S), σ_t(S), μ)_{u.c.} if B_t − B_s is independent of σ_s(S) for all 0 ≤ s < t. That is, if σ(B_t − B_s) and σ_s(S) are independent sigma algebras in the sense of definition 2.4.
Proof. One must verify the requirements of book 7's definition 5.22 to be a martingale. For the martingale condition, that
where:

    σ_{jl} = Σ_{k=1}^{min(j,l)} c_{jk} c_{lk}.
Proof. Recall book 7’s proposition 6.30 and its corollary.
4. Continuing 3, show that for any constants {c_{jk}}_{1≤k≤j≤n}, the n × n matrix S = (σ_{jl}) is positive semidefinite, meaning that xᵀSx ≥ 0 for all x ∈ Rⁿ. Further, xᵀSx = 0 if and only if Σ_{j=1}^n x_j B̂^{(j)} ≡ 0.
Proof. That S = (σ_{jl}) is positive semidefinite follows from 3 and the same book 7 results:

    xᵀSx = (1/t) Σ_{j=1}^n Σ_{l=1}^n ⟨B̂^{(j)}, B̂^{(l)}⟩_t x_j x_l
         = (1/t) Σ_{j=1}^n Σ_{l=1}^n ⟨x_j B̂^{(j)}, x_l B̂^{(l)}⟩_t
         = (1/t) ⟨Σ_{j=1}^n x_j B̂^{(j)}, Σ_{l=1}^n x_l B̂^{(l)}⟩_t
         = (1/t) ⟨Σ_{j=1}^n x_j B̂^{(j)}⟩_t
         ≥ 0.

Thus xᵀSx = 0 if Σ_{j=1}^n x_j B̂^{(j)} ≡ 0. Conversely, if xᵀSx = 0 for some x ∈ Rⁿ then ⟨Σ_{j=1}^n x_j B̂^{(j)}⟩_t = 0 for all t > 0, and thus Σ_{j=1}^n x_j B̂^{(j)} ≡ 0.
    sup_{s≤t} |Q_s^{Π_n}(M)| →_p 0.
    S = LLᵀ,

where L is the lower triangular matrix with jth row (c_{j1}, c_{j2}, ..., c_{jj}) and Lᵀ denotes the upper triangular transpose of L. In other words, Lᵀ_{ij} = L_{ji}. In this notation, B̂ = LB, where B̂ and B denote the n × 1 column matrices of B̂_t^{(j)} and B_t^{(j)} variates with 1 ≤ j ≤ n.

It then follows that S is positive definite if and only if L is invertible, because xᵀSx = ‖Lᵀx‖₂², where this last expression denotes the square of the L₂-norm of Lᵀx in Rⁿ. Thus if S is positive definite this implies ‖Lᵀx‖₂² = 0 if and only if x = 0, and so L is invertible. The opposite conclusion is similar.
6. Given a real, symmetric, positive definite matrix S = (σ_{jl}), so σ_{jl} = σ_{lj}, then {c_{jk}}_{1≤k≤j≤n} can be uniquely chosen in 3 so that ⟨B̂^{(j)}, B̂^{(l)}⟩_t = σ_{jl} t. Put another way as in 5, such S can be uniquely expressed as S = LLᵀ, where L is the lower triangular invertible matrix with jth row (c_{j1}, c_{j2}, ..., c_{jj}) and Lᵀ denotes the upper triangular transpose of L.

Proof. Given such S, the calculation of {c_{jk}}_{k≤j}, or equivalently the matrix L so that S = LLᵀ, is called the Cholesky decomposition of S, named for André-Louis Cholesky (1875–1918). This is proposition 3.14 of book 6.
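The Cholesky decomposition referenced in the proof is short enough to sketch directly. The following Python illustration (the function name and the test matrix are hypothetical choices; this is a minimal sketch, not the book's construction) computes the lower triangular L column by column and verifies S = LLᵀ:

```python
import math

def cholesky(S):
    """Lower-triangular L with S = L L^T, for a symmetric positive definite matrix S."""
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):
            # Subtract the contribution of already-computed columns k < j.
            s = S[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

S = [[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]  # symmetric, positive definite
L = cholesky(S)
rebuilt = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
print(all(abs(rebuilt[i][j] - S[i][j]) < 1e-12 for i in range(3) for j in range(3)))
```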
7. Continuing 6, if the matrix S = (σ_{jl}) is real, symmetric, and positive definite, then so too is S′ = (ρ_{jl}), where ρ_{jl} ≡ σ_{jl}/√(σ_{jj}σ_{ll}). Thus ρ_{jj} = 1, ρ_{jl} = ρ_{lj}, and −1 ≤ ρ_{jl} ≤ 1.

Proof. xᵀS′x = yᵀSy where y_j ≡ x_j/√(σ_{jj}), so S′ is positive definite. That −1 ≤ ρ_{jl} ≤ 1 follows from proposition 6.33 of book 7, since for the constructed processes in 3 using {c_{jk}}_{1≤k≤j≤n}, it follows that ⟨B̂^{(j)}, B̂^{(l)}⟩_t = σ_{jl} t, and thus:

    ρ_{jk} = ⟨B̂^{(j)}, B̂^{(k)}⟩_t / √(⟨B̂^{(j)}⟩_t ⟨B̂^{(k)}⟩_t).

That −1 ≤ ρ_{jl} ≤ 1 also follows from the Cauchy–Schwarz inequality for m-vectors, that |x · y| ≤ |x||y|, by making the associated vectors the same dimension m ≡ max(j, k) by end-filling one with 0s as necessary.

2.2 Brownian Motion
Remark 2.5 (Covariance of BM) From 3, if ⟨B̂^{(j)}, B̂^{(l)}⟩_t = σ_{jl} t, then by book 7's proposition 6.29, B̂_t^{(j)} B̂_t^{(l)} − σ_{jl} t is a martingale, since:

    E[sup_{s≤t} |B̂_s^{(j)} B̂_s^{(l)} − σ_{jl} s|]
        ≤ E[sup_{s≤t} |B̂_s^{(j)}| sup_{s≤t} |B̂_s^{(l)}|] + |σ_{jl}| t
        ≤ (E[sup_{s≤t} |B̂_s^{(j)}|²])^{1/2} (E[sup_{s≤t} |B̂_s^{(l)}|²])^{1/2} + |σ_{jl}| t
        ≤ 4 (E[|B̂_t^{(j)}|²] E[|B̂_t^{(l)}|²])^{1/2} + |σ_{jl}| t
        = (4 + |σ_{jl}|) t.

Thus as a martingale:

    E[B̂_t^{(j)} B̂_t^{(l)}] = σ_{jl} t,

and since E[B̂_t^{(j)}] = 0 for all t, j:

    cov[B̂_t^{(j)}, B̂_t^{(l)}] = σ_{jl} t,
    corr[B̂_t^{(j)}, B̂_t^{(l)}] = ρ_{jl}.
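The correlation conclusion of remark 2.5 can be checked by simulating B̂ = LB for a 2 × 2 case. In this Python sketch, ρ = 0.6, the grid sizes, and the explicit L = [[1, 0], [ρ, √(1−ρ²)]] are illustrative assumptions (this L is the Cholesky factor of the 2 × 2 correlation matrix); the empirical correlation of the terminal values should be near ρ:

```python
import math
import random
import statistics

rho, t, steps, paths = 0.6, 1.0, 64, 5000
L = [[1.0, 0.0], [rho, math.sqrt(1.0 - rho ** 2)]]  # Cholesky factor of [[1, rho], [rho, 1]]
rng = random.Random(3)
dt = t / steps
b1s, b2s = [], []
for _ in range(paths):
    w1 = w2 = 0.0
    for _ in range(steps):
        # Independent Brownian increments, each N(0, dt).
        w1 += rng.gauss(0.0, math.sqrt(dt))
        w2 += rng.gauss(0.0, math.sqrt(dt))
    b1s.append(L[0][0] * w1 + L[0][1] * w2)  # Bhat^(1)_t
    b2s.append(L[1][0] * w1 + L[1][1] * w2)  # Bhat^(2)_t
cov = statistics.fmean(x * y for x, y in zip(b1s, b2s))  # means are ~0
corr = cov / (statistics.pstdev(b1s) * statistics.pstdev(b2s))
print(round(corr, 2))
```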
In other words, for r ≤ s, σ_r(S) is the smallest sigma algebra that contains σ_r(B) and the sigma algebra generated by B_t − B_s. Then B_t − B_s cannot be independent of σ_s(S) since σ(B_t − B_s) ⊂ σ_s(S).

The next result reframes the definition of a Brownian motion on the filtered space (S, σ(S), σ_t(B), μ)_{u.c.} in a manner that explicitly identifies the role of this filtration and independence. It will then form the basis of the definition of a Brownian motion on a general filtered space (S, σ(S), σ_t(S), μ)_{u.c.}.
Proof. Comparing the statement here with that in definition 2.2, the only difference is item 2 here versus item 3 in the definition. Now if B is an n-dimensional Brownian motion by definition 2.2, then let m = 2 with s₁ = 0, t₁ = s₂ = s and t₂ = t. Then the definition states that B_t − B_s is independent of B_s, which by definition means it is independent of σ_s(B).

Conversely, assume item 2 in the statement, let 0 ≤ s₁ < t₁ ≤ s₂ < t₂ ≤ ... ≤ s_m < t_m be given, and we prove that {B_{t_j} − B_{s_j}}_{j=1}^m are independent random vectors. By item 2, each B_{t_m} − B_{s_m} is independent of σ_{s_m}(B). However, since σ_{s_{m−1}}(B) ⊂ σ_{t_{m−1}}(B) ⊂ σ_{s_m}(B) implies that B_{t_{m−1}} − B_{s_{m−1}} is σ_{s_m}(B)-measurable, it follows by definition that B_{t_m} − B_{s_m} is independent of B_{t_{m−1}} − B_{s_{m−1}}. Repeating this argument obtains that {B_{t_j} − B_{s_j}}_{j=1}^{m−1} are each independent of B_{t_m} − B_{s_m}. But then applying this argument to the observation that B_{t_{m−1}} − B_{s_{m−1}} is independent of σ_{s_{m−1}}(B) obtains that {B_{t_j} − B_{s_j}}_{j=1}^{m−2} are each independent of B_{t_{m−1}} − B_{s_{m−1}}. Continuing finally obtains that {B_{t_j} − B_{s_j}}_{j=1}^m are independent random vectors.
It follows from remark 5.6 of book 7 that for this proposition, we can in item 2 specify σ_s(B) as the given sigma algebra but defined in terms of the natural filtration {σ_s(B)}_{u.c.}. Recall that this denotes the filtration {σ_s(B)} increased to be right continuous and include all negligible sets of σ(S), and thus now satisfies the usual conditions. By that prior book's remark, σ_s(B) and σ_s(B)_{u.c.} differ only by sets of measure 0 or 1, and thus B_t − B_s is independent of σ_s(B) if and only if B_t − B_s is independent of σ_s(B)_{u.c.}. This leads naturally to the general definition of a Brownian motion defined on a filtered probability space.

By proposition 2.6 and this discussion, definitions 2.2 and 2.7 are equivalent for (S, σ(S), σ_t(S), μ)_{u.c.} = (S, σ(S), σ_t(B), μ)_{u.c.}.
    E[B_t | σ_s(S)] = B_s,

3. E[(B_t − B_s)² | σ_s(S)] = t − s.

The point of the additional assumption in 2.9 is that it then allows the evaluation of these conditional expectations.
    ⟨B⟩_t = t.    (2.10)

This identity provides more than just a statement about the value of such integrals, as it also provides a statement about existence of the integral. Specifically, if v(s, ω) and w(s, ω) are integrable, then so too is a(ω)v(s, ω) + w(s, ω).
1. Characteristic Functions:

Given the characteristic (or indicator) function v(s, ω) ≡ χ_{(a,b]}(s), 0 ≤ a < b, meaning χ_{(a,b]}(s) = 1 for s ∈ (a, b] and is 0 otherwise, then for almost all ω ∈ S we define:

    ∫_0^∞ χ_{(a,b]}(s) dB_s(ω) = B_b(ω) − B_a(ω).

Thus in particular:

    ∫_0^∞ χ_{(0,t]}(s) dB_s(ω) = B_t(ω), μ-a.e.

In addition we define:

    ∫_0^∞ χ_{{0}}(s) dB_s(ω) = 0.
where:

    0 ≤ t_0 < t_1 < ... < t_{n+1} < ∞.

For this we require linearity and thus define (simplifying notation by suppressing ω):

    ∫_0^∞ [a_{−1} χ_{{0}}(s) + Σ_{j=0}^n a_j χ_{(t_j, t_{j+1}]}(s)] dB_s
        ≡ ∫_0^∞ a_{−1} χ_{{0}}(s) dB_s + Σ_{j=0}^n ∫_0^∞ a_j χ_{(t_j, t_{j+1}]}(s) dB_s,

which we evaluate by 2:

    ∫_0^∞ [a_{−1} χ_{{0}}(s) + Σ_{j=0}^n a_j χ_{(t_j, t_{j+1}]}(s)] dB_s = Σ_{j=0}^n a_j (B_{t_{j+1}} − B_{t_j}).    (2.13)

As in the case of simple functions in book 5, we must check that this definition is consistent as in that book's proposition 2.4. See exercise 2.13.
2.3 Preliminary Insights to a New Integral
with:

    0 ≤ t′_0 < t′_1 < ... < t′_{m+1} < ∞,

and w(s, ω) = v(s, ω), μ-a.e., then the respective integrals produced by 2.13 agree μ-a.e. Hint: Define a common partition and note that if (t_j, t_{j+1}] ∩ (t′_k, t′_{k+1}] ≠ ∅, then a_j(ω) = b_k(ω), μ-a.e.
Exercise 2.13 Show that 2.13 generalizes using 2.14 as follows. If v(s, ω) is a simple process as in 2.12, then:

    ∫_0^t v(s, ω) dB_s(ω) = Σ_{j=0}^n a_j(ω) (B_{t_{j+1} ∧ t}(ω) − B_{t_j ∧ t}(ω)),    (2.15)
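The formula in 2.15 is mechanical enough to sketch in code. In the following Python illustration, the function name, the knots, the coefficients, and the piecewise path are all hypothetical choices, and the stand-in path is not a true Brownian sample; the point is only the evaluation rule Σ_j a_j (B_{t_{j+1}∧t} − B_{t_j∧t}):

```python
def ito_integral_simple(a, knots, brownian, t):
    """Evaluate (2.15): sum_j a_j * (B_{t_{j+1} ^ t} - B_{t_j ^ t}) for a simple
    process with coefficient a_j on (t_j, t_{j+1}], where brownian(s) returns B_s."""
    total = 0.0
    for j in range(len(a)):
        lo, hi = min(knots[j], t), min(knots[j + 1], t)  # t_j ^ t and t_{j+1} ^ t
        total += a[j] * (brownian(hi) - brownian(lo))
    return total

# A piecewise stand-in path with B(0) = 0 (illustration only, not a true BM sample):
path = {0.0: 0.0, 0.5: 0.3, 1.0: -0.2, 1.5: 0.1, 2.0: 0.4}
B = lambda s: path[s]
knots = [0.0, 0.5, 1.0, 1.5, 2.0]
a = [2.0, -1.0, 0.5, 3.0]
# With t = 1.5 the last coefficient contributes nothing, since t_3 ^ t = t_4 ^ t = 1.5.
print(ito_integral_simple(a, knots, B, 1.5))
```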
Since:

    B_{t_j}(ω) − B_{t_{j−1}}(ω) = (B_{t_j}(ω) − B_{s_j}(ω)) + (B_{s_j}(ω) − B_{t_{j−1}}(ω)),

it follows that:

    ∫_0^T v_n(s, ω) dB_s(ω) = Σ_{j=1}^n B_{s_j}(ω) (B_{t_j}(ω) − B_{s_j}(ω)) + Σ_{j=1}^n B_{s_j}(ω) (B_{s_j}(ω) − B_{t_{j−1}}(ω)).

Noting that B_{s_j}(ω) is measurable relative to sigma algebra σ_{s_j}(S), apply the tower and measurability properties of conditional expectations of book 6's proposition 5.26, and then the independence property (justified in proposition 2.6):

Thus:

    E[∫_0^T v_n(s, ω) dB_s(ω)] = Σ_{j=1}^n (s_j − t_{j−1}) = rT.
2.4 Quadratic Variation Process of Brownian Motion
Exercise 2.15 Show that the same expectations are produced if instead of defining v_n(s, ω) in terms of a_j(ω) ≡ B_{s_j}(ω) where s_j = t_{j−1} + r(t_j − t_{j−1}), we define v_n(s, ω) in terms of a_j(ω) ≡ (1 − r)B_{t_{j−1}}(ω) + rB_{t_j}(ω).
Remark 2.16 (On example 2.14) Note that the above example does not show that the sequence of random variables ∫_0^T v_n(s, ω) dB_s(ω) converges to a random variable on (S, σ(S), σ_t(S), μ)_{u.c.}, nor even addresses the meaning of such convergence. However, it does show that each random variable of this sequence has constant expectation, and that we can make this constant equal to rT for any r with 0 ≤ r ≤ 1. The significance of this is that our usual procedure for approximating measurable functions with simple functions will require some additional thought, since even for the μ-a.e. continuous process B_t, the unbounded variation over intervals makes the choice of intermediate values very significant.

Specifically, this choice reflects the measurability property desired for the a_j-variates in 2.13.
The Itô integral developed below and named for Kiyoshi Itô (1915–2008) is defined with a_j-variates that are measurable relative to the sigma algebra σ_{t_j}(S), the natural filtration associated with B_t (definition 5.4, book 7). This integral turns out to be a martingale, and as such the Itô calculus is universally used in finance, where martingales play an integral role.

An alternative approach is to use a midpoint estimate for the a_j-variates, for example a_j(ω) = (B_{t_j}(ω) + B_{t_{j+1}}(ω))/2 or a_j(ω) = B_{s_j}(ω) with s_j = (t_j + t_{j+1})/2 as in the example of the above integral. The first approach produces what is known as the Stratonovich integral, named for Ruslan Stratonovich (1930–1997), and also the Fisk–Stratonovich integral, for Donald L. Fisk, who developed these results at the same time as Stratonovich. This integral, though not a martingale, and the associated Stratonovich calculus form the basis for important applications in physics.
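The sensitivity to the evaluation point seen in example 2.14 is easy to reproduce numerically. In this Python sketch (the function name, grid, and sample counts are illustrative assumptions), the Riemann sums Σ_j B_{s_j}(B_{t_j} − B_{t_{j−1}}) with s_j = t_{j−1} + r(t_j − t_{j−1}) have sample means near rT, so the Itô choice r = 0 gives mean 0 while the right-endpoint choice r = 1 gives mean T:

```python
import math
import random
import statistics

def riemann_sum(r, grid, rng):
    """One sample of sum_j B_{s_j} (B_{t_j} - B_{t_{j-1}}), where
    s_j = t_{j-1} + r*(t_j - t_{j-1}), simulating B on the refined grid."""
    total, b = 0.0, 0.0
    for j in range(1, len(grid)):
        t0, t1 = grid[j - 1], grid[j]
        s = t0 + r * (t1 - t0)
        # Brownian increments over [t0, s] and [s, t1]; max(..., 0.0) guards rounding.
        b_s = b + rng.gauss(0.0, math.sqrt(max(s - t0, 0.0)))
        b_t1 = b_s + rng.gauss(0.0, math.sqrt(max(t1 - s, 0.0)))
        total += b_s * (b_t1 - b)
        b = b_t1
    return total

rng = random.Random(4)
grid = [k / 50 for k in range(51)]  # uniform partition of [0, 1], so T = 1
results = {}
for r in (0.0, 0.5, 1.0):
    results[r] = statistics.fmean(riemann_sum(r, grid, rng) for _ in range(4000))
    print(r, round(results[r], 2))
```

The r = 1/2 case, with mean near T/2, is the midpoint flavor behind the Stratonovich construction.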
Recall that for the weak variation definition that the supremum is restricted to partitions with μ_n → 0, where the mesh size μ_n is defined:

    μ_n ≡ max_{1≤i≤n} {t_i − t_{i−1}}.

Though Brownian motion does not have finite strong quadratic variation, where the supremum reflects all partitions, it does have finite weak quadratic variation. This last result was proved in book 7's proposition 2.88, along with the result that Brownian motion's weak quadratic variation over the interval [a, b] converges to b − a in the L₂(S)-norm and in probability.
The next result is largely a restatement and summary of the more general results of chapter 6 of book 7 applied to Brownian motion. But here we are able to define the quadratic variation process more explicitly as in the Doob–Meyer decomposition theorem for bounded continuous martingales of book 7's proposition 6.5, and then obtain a slightly stronger result than that book's proposition 6.12 for continuous local martingales. To simplify notation this result is stated over [0, t], though the same proof works for Brownian motion over [t, t′]. The result would then be that Q_s^{Π_n}(B) →_{L₂} t′ − t uniformly over s ∈ [t, t′], meaning in the L₂-norm, as well as in probability. We then add the corollary 2.10 result for completeness.
    sup_{s≤t} |Q_s^{Π_n}(B) − s| →_P 0.

To this end, let the partition Π_n of [0, t] be given with t_0 = 0 and t_{N_n} = t, and define X_i ≡ (B_{t_i}(ω) − B_{t_{i−1}}(ω))² − (t_i − t_{i−1}). Now {X_i} are independent by definitional independence of {B_{t_i}(ω) − B_{t_{i−1}}(ω)}_{i=1}^{N_n} and proposition 3.56 of book 2. Also:

    Σ_{i=1}^{N_n} X_i = Q_t^{Π_n}(B) − t,

and:

    (Q_t^{Π_n}(B) − t)² = Σ_{i=1}^{N_n} X_i² + 2 Σ_{i<j} X_i X_j.
Thus since B_{t_i}(ω) − B_{t_{i−1}}(ω) ~ N(0, t_i − t_{i−1}) yields E[X_i] = 0, independence of {X_i} obtains:

    E[(Q_t^{Π_n}(B) − t)²] = Σ_{i=1}^{N_n} E[X_i²].

Now:

    E[X_i²] = E[(B_{t_i}(ω) − B_{t_{i−1}}(ω))⁴] − 2(t_i − t_{i−1}) E[(B_{t_i}(ω) − B_{t_{i−1}}(ω))²] + (t_i − t_{i−1})².

Since:

    E[(B_{t_i}(ω) − B_{t_{i−1}}(ω))²] = t_i − t_{i−1},

and:

    E[(B_{t_i}(ω) − B_{t_{i−1}}(ω))⁴] = 3(t_i − t_{i−1})²,
and so Σ_{n=1}^∞ Pr[sup_{s≤t} |Q_s^{Π_n}(B) − s| > ε] converges. By the Borel–Cantelli theorem of book 2's proposition 2.6:

Hence for any ε > 0, with probability 1 there exists N so that sup_{s≤t} |Q_s^{Π_n}(B) − s| ≤ ε for n ≥ N.
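The L₂ convergence just proved, E[(Q_t^{Π_n}(B) − t)²] → 0 as the mesh shrinks, can be observed directly. In the following Python sketch (the function name, t = 2, and the partition and sample sizes are illustrative assumptions), the mean squared error of the quadratic variation over uniform partitions shrinks roughly like 2t²/N_n, consistent with the E[X_i²] calculation above:

```python
import math
import random
import statistics

def quadratic_variation(t, steps, rng):
    """One sample of Q_t over a uniform partition: the sum of squared Brownian increments."""
    dt = t / steps
    return sum(rng.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(steps))

rng = random.Random(5)
t = 2.0
# Estimate E[(Q_t - t)^2] for finer and finer uniform partitions.
errors = [statistics.fmean((quadratic_variation(t, n, rng) - t) ** 2 for _ in range(500))
          for n in (10, 100, 1000)]
print([round(e, 3) for e in errors])
```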
2.5 Itô Integral of Simple Processes

for appropriate v(s, ω). When v(s, ω) is a simple process as in 2.12, the value of this integral is defined in 2.13 (simplifying notation by suppressing ω):

    ∫_0^∞ [a_{−1} χ_{{0}}(s) + Σ_{j=0}^n a_j χ_{(t_j, t_{j+1}]}(s)] dB_s ≡ Σ_{j=0}^n a_j (B_{t_{j+1}} − B_{t_j}).
Remark 2.20 (On notation for partitions) As was often used in earlier books, we simplify notation in the above statement. Specifically, we do not mean to imply that the partition points {t_j}_{j=0}^{n+1} are independent of n, nor are the coefficient random variables. In full notation, the above convergence of simple processes would be stated that as n → ∞:

    a_{−1}^{(n)}(ω) χ_{{0}}(s) + Σ_{j=0}^{N_n} a_j^{(n)}(ω) χ_{(t_j^{(n)}, t_{j+1}^{(n)}]}(s) → v(s, ω).

It is hoped that the reader will agree that the simpler notation is preferred.
Now summations of {B_{t_j}(ω) − B_{t_{j−1}}(ω)} do not have nice convergence properties as n → ∞, as noted above and in book 7's proposition 2.87. However, summations of {(B_{t_j}(ω) − B_{t_{j−1}}(ω))²} are far more manageable, as seen above, and provide the avenue for investigation that Itô developed. But to obtain sums of such terms from the integrals of simple processes we must work with the square of these integrals. Thus it is natural to consider the Itô integrals of simple processes and general v(s, ω) to be elements of an L₂-space (recall chapter 4, book 5). Based on a sample calculation of the associated L₂-norms of the Itô integral of a simple process, it becomes apparent that we should assume at least initially that such processes are elements of an L₂-space over [0, t] × S.
So we begin with an L₂-space of potential integrands v(s, ω), and then derive an important result on certain simple processes in this space. While the following definition does not explicitly make reference to the filtration noted, this will be important momentarily. For background on L₂ and related spaces, see chapter 4 of book 5.

Definition 2.21 (L₂⁰([0, t] × S)) Given a filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.}, let L₂⁰([0, t] × S) denote the L₂-space of σ(B[0, t] × σ(S))-measurable functions v(s, ω) defined on [0, t] × S under the norm:

    ‖v(s, ω)‖²_{L₂⁰([0,t]×S)} ≡ E[∫_0^t v²(s, ω) ds] = ∫_S ∫_0^t v²(s, ω) ds dμ,    (2.19)
Remark 2.23 (On L₂⁰([0, t] × S)) Although this norm is expressed as an iterated integral, this space is equivalent to the L₂-spaces of book 5's chapter 4, and in particular equivalent to L₂([0, t] × S), with B[0, t] the Borel sigma algebra. The product measure space ([0, t] × S, σ(B[0, t] × σ(S)), m_L × μ) is derived in book 1's chapter 7, but here the product sigma algebra is defined as the smallest sigma algebra that contains the algebra A generated by the measurable rectangles from B[0, t] × σ(S). In that book's proposition 7.20 this sigma algebra is denoted σ(A), while in chapter 5 of book 5 this was also denoted σ₀(B[0, t] × σ(S)).

The space L₂([0, t] × S) is defined as the collection of measurable functions v(s, ω) so that:

    ‖v(s, ω)‖²_{L₂([0,t]×S)} ≡ ∫∫_{[0,t]×S} v²(s, ω) d(m_L × μ) < ∞.

Since both B[0, t] and σ(S) are finite measure spaces (when t < ∞), Fubini's theorem of book 5's corollary 5.20 applies. Thus, if v(s, ω) ∈ L₂([0, t] × S) then v(s, ω) ∈ L₂⁰([0, t] × S) and the norms agree. When t = ∞ this version of Fubini's theorem remains valid, and as noted in the earlier book, Billingsley (1995) is a reference for this more general result.

Conversely, since v²(s, ω) is nonnegative, Tonelli's theorem assures that if measurable v(s, ω) ∈ L₂⁰([0, t] × S) then v(s, ω) ∈ L₂([0, t] × S) and the norms agree. Book 5's proposition 5.22 states Tonelli's theorem in the context of complete and sigma finite component measure spaces, but as was proved for Fubini's theorem, this result is also true without completeness, again referencing Billingsley (1995).

This convention is needed to make ‖v(s, ω)‖_{L₂} a norm (definition 4.3, book 5), since ‖v(s, ω)‖_{L₂} = 0 only implies that v(s, ω) = 0, m_L × μ-a.e.
Exercise 2.26 Check that the collection of adapted simple processes given in 2.21 form a vector space over R (definition 4.34, book 3), and that if 0 ≤ r < r′ and v(s, ω) is an adapted simple process, then so too is χ_{(r,r′]}(s)v(s, ω).
1. For all t:

    ∫_0^t v(s, ω) dB_s(ω) ∈ L₂(S) ≡ L₂(S, σ(S), μ),

3. The process:

    V_t(ω) ≡ ∫_0^t v(s, ω) dB_s(ω)

is continuous μ-a.e., and an L₂(S)-martingale relative to {σ_t(S)}, meaning that V_t(ω) is a martingale and for all t:
Similarly, if t > t_j > t_i then a_i(ω)a_j(ω)(B_{t∧t_{i+1}}(ω) − B_{t∧t_i}(ω)) is measurable relative to σ_{t_j}(S), and by the same steps:

    E[a_i(ω)a_j(ω)(B_{t∧t_{i+1}}(ω) − B_{t∧t_i}(ω))(B_{t∧t_{j+1}}(ω) − B_{t∧t_j}(ω))]
        = E[a_i(ω)a_j(ω)(B_{t∧t_{i+1}}(ω) − B_{t∧t_i}(ω)) E[B_{t∧t_{j+1}}(ω) − B_{t∧t_j}(ω) | σ_{t_j}(S)]]
        = 0.

Pathwise continuity of this integral μ-a.e. follows from 2.24 and the μ-a.e. continuity of Brownian motion. The same observation proves V_t(ω) is measurable relative to the sigma algebra σ_t(S). Applying Jensen's inequality of book 4's proposition 3.39:

and thus V_t(ω) is integrable for all t by 2.22. For the martingale property, if t > s then linearity of conditional expectations and the measurability property yields, simplifying notation:

If s ∈ (t_k, t_{k+1}]:

    V_t(ω) − V_s(ω) = Σ_{j=0}^n a_j (B_{t∧t_{j+1}} − B_{t∧t_j}) − Σ_{j=0}^n a_j (B_{s∧t_{j+1}} − B_{s∧t_j})
        = a_k (B_{t∧t_{k+1}} − B_s) + Σ_{j=k+1}^n a_j (B_{t∧t_{j+1}} − B_{t∧t_j}).

    E[a_k (B_{t∧t_{k+1}} − B_s) | σ_s] = a_k E[B_{t∧t_{k+1}} − B_s | σ_s] = 0.
    T : (X₁, d₁) → (X₂, d₂),
    d₂(Tx, Ty) = d₁(x, y).

We have not yet fully identified these metric spaces, but at this point it can be appreciated that the Itô integral ∫_0^t · dB_s(ω) will be the transformation T of interest. Once identified, it will be seen that the identity in 2.22 generalizes to these spaces. That this identity assures the preservation of distances comes from the observation that in a normed space, an L₂-space for example, we can define a distance function d(·, ·) by:

    d(f, g) ≡ ‖f − g‖.    (2.25)

See section 3.2 of Reitano (2010), for example, for a discussion on normed and metric spaces.

2.6 H₂([0, ∞) × S) and Simple Process Approximations

The big idea behind Itô's approach to defining this integral is now easy to describe informally, with details to follow.
6. Prove that this process is well-defined in the sense that if {v′_n(s, ω)}_{n=1}^∞ ⊂ L₂⁰([0, t] × S) have the same properties as {v_n(s, ω)}_{n=1}^∞, the resulting function V′_t(ω) satisfies V′_t(ω) = V_t(ω), μ-a.e.
Remark 2.29 (The big idea's big challenge) Note that the big challenge in Itô's approach is characterizing the functions $v(s,\omega)$ that have simple process approximations as required in step 1. Once done, steps 2-5 will unambiguously produce a definition of $\int_0^t v(s,\omega)\,dB_s(\omega)$ for all $t$ as a random variable defined on $(S,\sigma(S),\mu)$, which is square integrable and hence an element of $L_2(S)$. Well-definedness must then also be addressed.
We identify the candidate function space in definition 2.31 below, and prove the existence of simple process approximations in proposition 2.37 below. The details are then assembled in proposition 2.40.
Beneath this big challenge is the most basic question: what are the appropriate measurability properties for $v(s,\omega)$ to assure success in step 1? Are all $\left(B([0,t])\times\sigma(S)\right)$-measurable functions $v(s,\omega)$ so approximable? The derivation of the Itô isometry depended on a very specific kind of measurability for simple processes: such processes were adapted, and so by lemma 2.25 each $a_j(\omega)$ is $\sigma_{t_j}(S)$-measurable. This detailed type of measurability is far beyond that normally imposed on the space $L^0_2([0,t]\times S)$. As developed in book 5, the standard measurability requirement, and that used above, would generally be defined relative to the product space sigma algebra,
$$\sigma\left[B([0,t])\times\sigma(S)\right],$$
defined in remark 2.23. This is then enough measurability to define an integral on $[0,t]\times S$.
But to make Itô's idea work, we will need to define an $L_2$-space on $[0,\infty)\times S$ that has a more detailed form of measurability. First, taking our cue from lemma 2.25, we will require that $v(t,\omega)$ be measurable with respect to $\sigma_t(S)$ for all $t$, and also that such functions can be approximated in an $L_2$-norm by the adapted simple processes of 2.21. In addition to these technical requirements for the Itô isometry, we will also be interested in studying the integral $V_t(\omega)$ as a stochastic process that evolves in $t$, so again the measurability of $v(t,\omega)$ relative to $\sigma_t(S)$ will be critical for this analysis.
Definition 2.31 ($H_2([0,\infty)\times S)$) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ and Lebesgue measure $m$ on $B([0,\infty))$, the space:
3. $v(t,\omega)$ satisfies:
$$\|v(s,\omega)\|^2_{H_2([0,\infty)\times S)}\equiv\int_S\int_0^\infty v^2(s,\omega)\,ds\,d\mu<\infty. \quad (2.27)$$
Remark 2.35 (On Completeness of $H_2([0,\infty)\times S)$) Assume that $\{v_n(t,\omega)\}\subset H_2([0,\infty)\times S)$ and there exists a function $v(t,\omega)$, measurable with respect to $\sigma\left[B([0,\infty))\times\sigma(S)\right]$, so that:
$$\int_0^\infty E\left[v(s,\omega)-v_n(s,\omega)\right]^2 ds\to 0 \quad ((*))$$
in $L_2([0,\infty)\times S)$, defined relative to this product space. Since complete, this implies that $v(t,\omega)\in L_2([0,\infty)\times S)$.
Hence $v(t,\omega)$ satisfies 1 and 3 of definition 2.31, and the question of membership in $H_2([0,\infty)\times S)$ relates to requirement 2 and whether $v(t,\omega)$ must be adapted to the filtration $\{\sigma_t(S)\}$. To investigate, we reverse the iterated integrals in $(*)$ by Tonelli's theorem to obtain:
$$\int_0^\infty\int_S\left[v(s,\omega)-v_n(s,\omega)\right]^2 d\mu\,ds\to 0.$$
That is, $v_n(s,\omega)\to v(s,\omega)$ in $L_2(S,\sigma_s(S),\mu)$ for $s$ outside a set of Lebesgue measure 0.
To see this, note that $v_n(s,\omega)$ is $\sigma_s(S)$-measurable for all $s$ by requirement 2. For any $s$ outside the above set of measure 0, $\{v_n(s,\omega)\}$ is then a Cauchy sequence in $L_2(S,\sigma_s(S),\mu)$, and since this space is complete, there exists $\tilde v_s(\omega)\in L_2(S,\sigma_s(S),\mu)$ with $v_n(s,\omega)\to_{L_2}\tilde v_s(\omega)$. But then $\tilde v_s(\omega)=v(s,\omega)$ $\mu$-a.e., and we conclude by the completeness of $\sigma_s(S)$ that $v(s,\omega)\in L_2(S,\sigma_s(S),\mu)$ for almost all $s$.
In summary, we can generally only conclude that $v(t,\omega)$ is adapted to the filtration $\{\sigma_t(S)\}$ for almost all $t$, and hence $H_2([0,\infty)\times S)$ is not quite complete. See remark 2.43 for a continuation of this discussion, and the section $M_2$-Integrators and $H_2^M([0,\infty)\times S)$-Integrands, where $H_2([0,\infty)\times S)$ is generalized to a space which is complete.
Then each $v_n(t,\omega)$ is adapted to $\sigma_t(S)$ for all $t$ by the adaptedness of $v(t,\omega)$, and since left continuous, is also measurable by proposition 5.19 of book 7. The challenge is to prove $H_2$-convergence.
Example 2.36 If $v(t,\omega)=0$ for rational $t$, then for any partition based on rational points we would have $v_n(t,\omega)=0$ for all $t$ and no $H_2$-convergence could be expected. However, if $v(t,\omega)$ is continuous in $t$, the success of an approximation would not be dependent on the particular partition sequence.
That is:
$$\int_0^\infty E\left[v(s,\omega)-v_n(s,\omega)\right]^2 ds\to 0. \quad (2.30)$$
Proof. We implement the proof in 3 steps, showing that general $v(t,\omega)\in H_2([0,\infty)\times S)$ can be $H_2$-approximated by bounded $v(t,\omega)$; that bounded $v(t,\omega)$ can be $H_2$-approximated by bounded $v(t,\omega)$ that is continuous in $t$ for all $\omega$; and that such bounded and continuous $v(t,\omega)$ can be $H_2$-approximated by simple processes. Step 2 is the most challenging.
2. Given bounded $v(t,\omega)$, say $|v(t,\omega)|\le N$, we next define bounded $\{v_n(t,\omega)\}$, continuous in $t$ for each $\omega$, such that 2.30 is satisfied. Let $\varphi(x)$ be defined by:
$$\varphi(x)=\begin{cases}x+2,&-2\le x\le-1,\\ -x,&-1\le x\le 0,\\ 0,&\text{otherwise},\end{cases}$$
and let $\varphi_n(x)\equiv n\varphi(nx)$. Note that $\varphi_n(x)$ is piecewise linear, supported on $[-2/n,0]$, has a maximum value of $n$ at $x=-1/n$, and integrates to 1. Now define:
$$v_n(t,\omega)=\int_{t-2/n}^t v(s,\omega)\varphi_n(s-t)\,ds.$$
Then $|v_n(t,\omega)|\le N$ since $\int\varphi_n(x)\,dx=1$.
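The smoothing step above can be visualized numerically. The following Python sketch, an illustration under stated assumptions and not part of the text, implements the kernel $\varphi_n$ and the convolution defining $v_n$, confirming that $\varphi_n$ has unit mass and that the smoothed process respects the bound on $v$; the quadrature rule and sample integrand are arbitrary choices for the demonstration.

```python
import numpy as np

# Numerical sketch (illustrative): phi is the piecewise linear kernel
# above, phi_n(x) = n*phi(n*x) is supported on [-2/n, 0] with unit
# integral, and v_n(t) = int_{t-2/n}^t v(s) phi_n(s - t) ds is a
# continuous approximation obeying the same bound as v.
def phi(x):
    return np.where((x >= -2) & (x <= -1), x + 2.0,
                    np.where((x > -1) & (x <= 0), -x, 0.0))

def phi_n(x, n):
    return n * phi(n * x)

def smooth(v, t_grid, n, pts=400):
    # v_n(t) via a simple quadrature of the convolution integral
    out = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        s = np.linspace(t - 2.0 / n, t, pts)
        out[i] = np.sum(v(s) * phi_n(s - t, n)) * (s[1] - s[0])
    return out

# the kernel integrates to 1, so |v_n| <= sup|v|
x = np.linspace(-2.5, 0.5, 300_001)
kernel_mass = np.sum(phi(x)) * (x[1] - x[0])
print(kernel_mass)                       # ~ 1.0

# smoothing a bounded, discontinuous v: v(s) = 1 for s >= 1, else 0
v = lambda s: (s >= 1.0).astype(float)
vn = smooth(v, np.linspace(0.0, 2.0, 41), n=50)
print(vn.min(), vn.max())                # stays (essentially) within [0, 1]
```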
Also, $v_n(t,\omega)$ is continuous in $t$ for each $\omega$. To see this:
$$v_n(t,\omega)-v_n(t',\omega)=\int v(s,\omega)\left[\chi_1(s)\varphi_n(s-t)-\chi_2(s)\varphi_n(s-t')\right]ds,$$
where $\chi_1(s)$ and $\chi_2(s)$ denote the characteristic functions of the respective intervals of integration, $[t-2/n,t]$ and $[t'-2/n,t']$.
For the second term we prove that the $ds\,d\mu\,dt$-integral is finite, and then apply Tonelli's theorem as noted in remark 2.23, since the integrand is nonnegative. Using the substitution $r=s+t/n$ in the inner $dr$-integral:
$$\int_{-2}^0\int_S\int_0^\infty v^2(s+t/n,\omega)\,ds\,d\mu\,dt=\int_{-2}^0\int_S\int_{t/n}^\infty v^2(r,\omega)\,dr\,d\mu\,dt$$
$$\le\int_{-2}^0 E\left[\int_0^\infty v^2(r,\omega)\,dr\right]dt=2E\left[\int_0^\infty v^2(r,\omega)\,dr\right].$$
This is possible by 2.27. Now given $n$, let $t_j^{(n)}=jM/n$ and define:
$$v_n(t,\omega)\equiv\sum_{j=0}^n v(t_j^{(n)},\omega)\chi_{(t_j^{(n)},t_{j+1}^{(n)}]}(t).$$
Then:
$$E\left[\int_0^\infty\left(v(s,\omega)-v_n(s,\omega)\right)^2 ds\right]=E\left[\int_0^M\left(v(s,\omega)-v_n(s,\omega)\right)^2 ds\right]+\varepsilon,$$
where $\varepsilon$ denotes the remaining contribution from $s>M$.
That is:
$$\int_0^\infty E\left[\left(v(s,\omega)\chi_{(t,t']}(s)-v_n(s,\omega)\chi_{(t,t']}(s)\right)^2\right]ds\to 0. \quad (2.31)$$
This follows from the triangle inequality for a norm, with $H_2\equiv H_2([0,\infty)\times S)$:
Since $v_n(s,\omega)-v_m(s,\omega)$ is a simple process for all $n,m$ (exercise), and for such processes:
$$\int_0^\infty\left[v_n(s,\omega)-v_m(s,\omega)\right]dB_s(\omega)=\int_0^\infty v_n(s,\omega)\,dB_s(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega),$$
Now the first two expressions converge to 0 by construction, and for the third, the Itô isometry obtains:
$$\left\|\int_0^\infty v_n(s,\omega)\,dB_s(\omega)-\int_0^\infty v_m(s,\omega)\,dB_s(\omega)\right\|_{L_2(S)}=\left\|v_n(s,\omega)-v_m(s,\omega)\right\|_{H_2}.$$
$V_\infty(\omega)$, $\mu$-a.e.
For the integral $\int_t^{t'}v(s,\omega)\,dB_s(\omega)$, 2.31 obtains that with $\{v_n(s,\omega)\}_{n=1}^\infty$ as above, $\tilde v_n(s,\omega)\equiv v_n(s,\omega)\chi_{(t,t']}(s)$ is a sequence of simple processes as in 2.21 that form a Cauchy sequence in $H_2([0,\infty)\times S)$. Thus the above steps can be repeated to both define this integral and prove uniqueness $\mu$-a.e.
Corollary 2.41 The integral $\int_t^{t'}v(s,\omega)\,dB_s(\omega)$ of proposition 2.40 is equivalently defined $\mu$-a.e. for all $t,t'$ with $0\le t<t'\le\infty$ by:
$$\int_t^{t'}v(s,\omega)\,dB_s(\omega)\equiv\int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\,dB_s(\omega). \quad (2.33)$$
Proof. This follows from 2.33, since $\chi_{(t,t']}(s)=\chi_{(0,t']}(s)-\chi_{(0,t]}(s)$.
Returning to the original notation, let $0=t_0<t_1<\cdots<t_{n+1}=t$ and define:
$$v_n(s,\omega)\equiv\sum_{j=0}^n B_{t_j}(\omega)\chi_{(t_j,t_{j+1}]}(s).$$
this obtains:
$$\sum_{j=1}^n B_{t_{j-1}}\left(B_{t_j}-B_{t_{j-1}}\right)=B_t^2/2-\sum_{j=1}^n\left(B_{t_j}-B_{t_{j-1}}\right)^2/2.$$
$$E\left[\sum_{j=1}^n\left(B_{t_j}-B_{t_{j-1}}\right)^2-t\right]^2\to 0,$$
Thus in this case, we have two explicit representations for the continuous local martingale $B_t^2-t$ assured to exist by this earlier theorem. This will be generalized in 3.54 of the next section once $\int_0^t M_s\,dM_s$ is defined for a continuous locally bounded martingale $M_t$.
It should also be noted that the equivalent formulation of 2.35:
$$B_t^2=t+2\int_0^t B_s\,dB_s,$$
2. From 2.35 we can conclude that for the special case of $v(s,\omega)=B_s(\omega)$, the Itô integral $V_t(\omega)\equiv\int_0^t B_s(\omega)\,dB_s(\omega)$ is in fact a continuous martingale with respect to the filtration $\{\sigma_t(S)\}$. It is apparently continuous $\mu$-a.e. by its explicit formula, and is a martingale by proposition 2.8.
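The identity 2.35 can be verified by simulation. The following Python sketch (an illustration, not part of the text) computes the left-endpoint Riemann sums that define $\int_0^t B_s\,dB_s$ and confirms that $B_t^2-\left(t+2\int_0^t B_s\,dB_s\right)$ vanishes in $L_2$ as the mesh shrinks; the path and partition counts are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch (illustrative) of 2.35:
#   B_t^2 = t + 2 int_0^t B_s dB_s,
# with the integral realized as the L2-limit of the left-endpoint sums
# sum_j B_{t_j}(B_{t_{j+1}} - B_{t_j}).
rng = np.random.default_rng(1)
n_paths, n_steps, t = 10_000, 1_000, 1.0
dt = t / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

ito = np.sum(B[:, :-1] * dB, axis=1)     # sum_j B_{t_j} dB_j
lhs = B[:, -1] ** 2                      # B_t^2
rhs = t + 2.0 * ito
err = np.mean((lhs - rhs) ** 2)          # equals E[(sum_j dB_j^2 - t)^2]
print(err)                               # ~ 2 t^2 / n, vanishing with the mesh
```

By the algebraic identity above, the discrepancy is exactly the quadratic variation error $\sum_j(\Delta B_j)^2-t$, so the printed value shrinks like $2t^2/n$.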
CHAPTER 2 THE ITÔ INTEGRAL
2. For constant $a\in\mathbb{R}$,
$$\int_S\left[\int_t^{t'}v(s,\omega)\,dB_s(\omega)\right]^2 d\mu=\int_S\int_t^{t'}v^2(s,\omega)\,ds\,d\mu.$$
5. The process:
$$V_t(\omega)\equiv\int_0^t v(s,\omega)\,dB_s(\omega)$$
Now 2 holds for all simple functions by 2.24 and 2.33. The general identity then follows since, given approximating sequences $v_n\to v$ and $u_n\to u$ in $H_2(S)$, it follows as an exercise that $av_n+u_n\to av+u$ in $H_2(S)$. Thus, simplifying notation:
$$\left\|\int_t^{t'}\left[av+u\right]dB_s(\omega)-a\int_t^{t'}v\,dB_s(\omega)-\int_t^{t'}u\,dB_s(\omega)\right\|_{L_2(S)}$$
$$\le\left\|\int_t^{t'}\left[av+u\right]dB_s(\omega)-\int_t^{t'}\left[av_n+u_n\right]dB_s(\omega)\right\|_{L_2(S)}$$
$$+|a|\left\|\int_t^{t'}v\,dB_s(\omega)-\int_t^{t'}v_n\,dB_s(\omega)\right\|_{L_2(S)}+\left\|\int_t^{t'}u\,dB_s(\omega)-\int_t^{t'}u_n\,dB_s(\omega)\right\|_{L_2(S)}.$$
Taking limits as $n\to\infty$, the result follows since by book 5's proposition 4.20:
$$\int_t^{t'}v_n(s,\omega)\,dB_s(\omega)\to_{L_2(S)}\int_t^{t'}v(s,\omega)\,dB_s(\omega)$$
and
$$\|v_n(s,\omega)\|_{H_2((t,t']\times S)}\to\|v(s,\omega)\|_{H_2((t,t']\times S)}.$$
To demonstrate that $V_t(\omega)$ is a martingale relative to the filtration $\{\sigma_t(S)\}$ (definition 5.22, book 7), first note that $V_t(\omega)$ is the $L_2$-limit of $V_t^{(n)}(\omega)\equiv\int_0^t v_n(s,\omega)\,dB_s(\omega)$ for simple $v_n(s,\omega)$ for each $t$. In addition $V_t^{(n)}(\omega)$ is a (continuous) martingale by proposition 2.27. Thus $V_t(\omega)$ is a martingale by book 7's proposition 5.32.
Remark 2.48 (Variance of Itô integral) Note that 3 and 4 above obtain an expression for the variance of the random variable defined by the Itô integral:
$$Var\left[\int_t^{t'}v(s,\omega)\,dB_s(\omega)\right]=\int_S\int_t^{t'}v^2(s,\omega)\,ds\,d\mu. \quad (2.40)$$
By definition 5.10 of book 7, $I_t(\omega)$ is then a version of the above defined Itô integral.
The implications of these $\mu$-a.e. statements are subtle, and we return to them in remark 2.51 once the proposition is proved.
For general $v(s,\omega)$, fix the interval $[0,T]$ where here $T$ is a constant (i.e., not a stopping time). If $\{v_n(s,\omega)\}_{n=1}^\infty\subset H_2([0,\infty)\times S)$ is an approximating sequence of simple processes as in proposition 2.37, then by corollary 2.38:
$$\|v_n(s,\omega)-v(s,\omega)\|_{H_2([0,T]\times S)}\to 0.$$
Define $I_n(t,\omega)$ on $[0,T]\times S$ by:
$$I_n(t,\omega)=\int_0^t v_n(s,\omega)\,dB_s(\omega),$$
Let $A_k^{(T)}\subset S$ be defined by:
$$A_k^{(T)}=\left\{\omega\ \Big|\ \sup_{0\le t\le T}\left|I_{n_{k+1}^{(T)}}(t,\omega)-I_{n_k^{(T)}}(t,\omega)\right|\ge 2^{-k}\right\}.$$
Then $\sum_{k=1}^\infty\mu\left[A_k^{(T)}\right]<\infty$, and the Borel-Cantelli lemma of book 2's proposition 2.6 applies to yield that $\mu\left[\limsup A_k^{(T)}\right]=0$.
2.9 A CONTINUOUS VERSION OF THE ITÔ INTEGRAL
Hence, for $\omega$ outside the set $\limsup A_k^{(T)}$ of measure zero, there are at most finitely many $k$ with:
$$\sup_{0\le t\le T}\left|I_{n_{k+1}^{(T)}}(t,\omega)-I_{n_k^{(T)}}(t,\omega)\right|\ge 2^{-k}.$$
Corollary 2.50 With $I_t(\omega)$ defined as above, the Itô isometry holds. That is, for $t\le\infty$:
$$\|I_t(\omega)\|^2_{L_2(S)}=\|v(s,\omega)\|^2_{H_2([0,t]\times S)}. \quad (2.43)$$
In addition, corollary 2.41 and properties 1-3 of proposition 2.47 remain valid.
Proof. For $t<\infty$ the isometry follows from 2.39 and 2.42, since for all $t$:
$$I_t(\omega)=\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}$$
implies that:
$$\int_0^t v_n(s,\omega)\,dB_s(\omega)\to V_t(\omega),\ \mu\text{-a.e.}$$
In other words, for any $t$ the proposition 2.40 definition:
$$\int_0^t v(s,\omega)\,dB_s(\omega)\equiv V_t(\omega),$$
3. The above proposition 2.49 states that there is a single exceptional set of $\mu$-measure zero, outside of which there is a function $I_t(\omega)$ that is continuous in $t$, with:
$$\int_0^t v_{n_k}(s,\omega)\,dB_s(\omega)\to I_t(\omega)\ \text{for all }t,$$
has measure 1 as above, and by continuity it follows that $I_t(\omega)=I'_t(\omega)$ for all $t$ on $C_Q$. In other words, any two continuous versions of $V_t(\omega)$ are indistinguishable in the terminology of definition 5.10 of book 7.
there exists $I_\infty(\omega)\in L_2(S)$ with $I_t(\omega)\to_{L_2(S)}I_\infty(\omega)$. Then $I_\infty(\omega)$ satisfies 2.43 by proposition 4.20 of book 5, noting that $\|v(s,\omega)\|^2_{H_2([0,t]\times S)}\to\|v(s,\omega)\|^2_{H_2([0,\infty)\times S)}$ by 2.27.
Exercise 2.54 Prove for $0\le t<t'\le\infty$ that this final definition implies that:
$$\int_t^{t'}v(s,\omega)\,dB_s(\omega)\equiv I_{t'}(\omega)-I_t(\omega).$$
Hint: 2.33.
Both of these results are applicable in the current context because Brownian motion is a continuous martingale, which can be made into an $L_2$-bounded martingale when stopped with a fixed stopping time $T\le t'$, and is a continuous local martingale by book 7's corollary 5.85. However, the $L_2$-bounded results of proposition 3.55 provide the stronger conclusions for the Itô integral. Specifically, one obtains convergence in probability and in $L_2(S)$, and in fact uniform convergence as defined below. In addition, it then follows that there exists a subsequence of partitions $n_k$ so that uniform convergence exists pointwise $\mu$-a.e.
$v^{-1}(0,\cdot)(A)=a_{-1}^{-1}(A)$ and the result follows since $\sigma_{T_0}(S)=\sigma_0(S)$ (exercise 5.56, book 7). The reader may want to compare this result with lemma 2.25.
then $\mu$-a.e.:
$$\int_0^t v(s,\omega)\,dB_s(\omega)=\sum_{j=0}^\infty a_j(\omega)\left[B_{T_{j+1}\wedge t}(\omega)-B_{T_j\wedge t}(\omega)\right],\ \text{for all }t. \quad (2.46)$$
Proof. The first step for 2.46 is to prove that $\mu$-a.e.:
$$\int_0^t v(s,\omega)\,dB_s(\omega)=\sum_{j=0}^\infty a_j(\omega)\int_0^t\chi_{(T_j,T_{j+1}]}(s)\,dB_s(\omega),\ \text{all }t. \quad ((1))$$
Define:
$$v_m(s,\omega)=a_{-1}(\omega)\chi_{\{0\}}(s)+\sum_{j=0}^m a_j(\omega)\chi_{(T_j,T_{j+1}]}(s),$$
To apply this proposition it should be confirmed that $a_j(\omega)\chi_{(T_j,T_{j+1}]}(s)\in H_2([0,\infty)\times S)$, and this follows from the introductory comments to this proposition.
Now if we prove that $v_m(s,\omega)\to v(s,\omega)$ in $H_2([0,\infty)\times S)$, then by the Itô isometry of corollary 2.50 it will follow that for all $t$:
$$\int_0^t v_m(s,\omega)\,dB_s(\omega)\to_{L_2(S)}\int_0^t v(s,\omega)\,dB_s(\omega).$$
Corollary 4.17 of book 5 then obtains that for each $t$, there exists a subsequence $m_k\to\infty$ so that:
$$\int_0^t v_{m_k}(s,\omega)\,dB_s(\omega)\to\int_0^t v(s,\omega)\,dB_s(\omega),\ \mu\text{-a.e.}$$
Thus (1) is valid $\mu$-a.e. for each $t$, and then also valid $\mu$-a.e. for all rational $t$. Thus (1) follows $\mu$-a.e. by continuity of these integrals.
For the required $H_2([0,\infty)\times S)$ result, first note that by disjointness of intervals:
$$\left(v(s,\omega)-v_m(s,\omega)\right)^2=\sum_{j=m+1}^\infty a_j^2(\omega)\chi_{(T_j,T_{j+1}]}(s),$$
and this can be made as small as desired since the full series converges as noted above.
With (1) established, 2.46 will follow from a proof that $\mu$-a.e.:
$$\int_0^t a_j(\omega)\chi_{(T_j,T_{j+1}]}(s)\,dB_s(\omega)=a_j(\omega)\left[B_{T_{j+1}\wedge t}(\omega)-B_{T_j\wedge t}(\omega)\right],\ \text{for all }t. \quad ((3))$$
We do this using the corollary 2.50 representation of the integral on the left:
$$\int_0^t a_j(\omega)\chi_{(T_j,T_{j+1}]}(s)\,dB_s(\omega)=\int_0^\infty a_j(\omega)\chi_{(T_j,T_{j+1}]}(s)\chi_{(0,t]}(s)\,dB_s(\omega).$$
2.10 ITÔ INTEGRATION VIA RIEMANN SUMS
As a first step, if these stopping times have only finitely many values:
$$T_j=\sum_{i=1}^n c_i\chi_{A_i},\qquad T_{j+1}=\sum_{k=1}^m d_k\chi_{C_k},$$
where $\bigcup_{i=1}^n A_i=\bigcup_{k=1}^m C_k=S$, then also $S=\bigcup_{i,k}\left(A_i\cap C_k\right)$, where some of these intersection sets may be empty. If $A_i\cap C_k\ne\emptyset$, then $T_j<T_{j+1}$ implies that $c_i<d_k$. Hence:
$$\chi_{(T_j\wedge t,T_{j+1}\wedge t]}(s)=\sum_{i,k}\chi_{A_i\cap C_k}(\omega)\chi_{(c_i\wedge t,d_k\wedge t]}(s),$$
and thus $a_j(\omega)\chi_{(T_j\wedge t,T_{j+1}\wedge t]}(s)$ is a sum of simple processes. With a little algebra, (3) then follows from 1 of proposition 2.49, 2 of proposition 2.47, and 2.13.
In the general case, by proposition 5.57 of book 7 with a small change of notation, there exist sequences of stopping times $\{T_j^{(n)}\}_{n=1}^\infty$ and $\{T_{j+1}^{(n)}\}_{n=1}^\infty$, so that in each case $T^{(n)}\ge T^{(n+1)}$ and $T^{(n)}\to T$. Further, all such stopping times in the sequences have only finitely many values. It is an exercise to check, based on the definition of these sequences in that proposition, that $T_j<T_{j+1}$ implies that $T_j^{(n)}\le T_{j+1}^{(n)}$ for all $n$. Thus the prior proof obtains for each $n$:
$$\int_0^\infty a_j(\omega)\chi_{(T_j^{(n)}\wedge t,\,T_{j+1}^{(n)}\wedge t]}(s)\,dB_s(\omega)=a_j(\omega)\left[B_{T_{j+1}^{(n)}\wedge t}(\omega)-B_{T_j^{(n)}\wedge t}(\omega)\right].$$
To prove that:
$$a_j(\omega)\chi_{(T_j^{(n)}\wedge t,\,T_{j+1}^{(n)}\wedge t]}\to_{H_2([0,\infty)\times S)}a_j(\omega)\chi_{(T_j\wedge t,\,T_{j+1}\wedge t]},$$
first note that the $ds$-integrals are well defined pointwise, and:
$$\int_0^\infty a_j(\omega)\left[\chi_{(T_j^{(n)}\wedge t,\,T_{j+1}^{(n)}\wedge t]}-\chi_{(T_j\wedge t,\,T_{j+1}\wedge t]}\right]ds=a_j(\omega)\left[\left(T_{j+1}^{(n)}\wedge t-T_{j+1}\wedge t\right)-\left(T_j^{(n)}\wedge t-T_j\wedge t\right)\right].$$
It now follows from the Itô isometry of corollary 2.50 that for all $t$:
$$\int_0^\infty a_j(\omega)\chi_{(T_j^{(n)}\wedge t,\,T_{j+1}^{(n)}\wedge t]}\,dB_s(\omega)\to_{L_2(S)}\int_0^\infty a_j(\omega)\chi_{(T_j\wedge t,\,T_{j+1}\wedge t]}\,dB_s(\omega).$$
Since (3) has been proved for stopping times with finitely many values, this obtains for all $t$:
$$a_j(\omega)\left[B_{T_{j+1}^{(n)}\wedge t}(\omega)-B_{T_j^{(n)}\wedge t}(\omega)\right]\to_{L_2(S)}\int_0^\infty a_j(\omega)\chi_{(T_j\wedge t,\,T_{j+1}\wedge t]}\,dB_s(\omega).$$
This identity is thus valid for all rational $t$, $\mu$-a.e., and (3) follows $\mu$-a.e. by continuity of both expressions.
as $n\to\infty$.
Proof. Define $T_0^{(n)}=0$ for all $\omega,n$, and then define $T_{i+1}^{(n)}\equiv T_{i+1}^{(n)}(\omega)$ by:
Now $T_1^{(n)}$ is the hitting time for $v(s,\omega)-v(0,\omega)$ of the open set $G\equiv(-\infty,-2^{-n})\cup(2^{-n},\infty)$ for any $n$. Thus by right continuity of $v(s,\omega)$ and of the filtration $\sigma_t(S)$, $T_1^{(n)}$ is a stopping time by 4 of book 7's proposition 5.60, as noted in the introduction above. More generally, if $\left|v(s,\omega)-v(T_i^{(n)},\omega)\right|>2^{-n}$ for some $s>T_i^{(n)}$, then by right continuity there exists $\delta(\omega)>0$ so that:
$$\left|v(r,\omega)-v(T_i^{(n)},\omega)\right|>2^{-n},\quad s\le r<s+\delta(\omega).$$
Thus if $T_{i+1}^{(n)}<t$, there exists rational $r$ with $T_i^{(n)}<r<t$ so that:
$$\left|v(r,\omega)-v(T_i^{(n)},\omega)\right|>2^{-n},$$
and hence:
$$\left\{T_{i+1}^{(n)}<t\right\}=\bigcup_{r\in\mathbb{Q},\ T_i^{(n)}<r<t}\left\{\left|v(r,\omega)-v(T_i^{(n)},\omega)\right|>2^{-n}\right\}.$$
Thus $T_{i+1}^{(n)}$ is an optional time, and by the assumed right continuity of the filtration $\sigma_t(S)$, $T_{i+1}^{(n)}$ is a stopping time by book 7's proposition 5.60. By construction $T_{i+1}^{(n)}>T_i^{(n)}$ for all $i$, and it is an exercise to check that, so defined, $T_i^{(n)}\to\infty$, $\mu$-a.e.
Given $n$, define the generalized simple process:
$$v_n(s,\omega)=\sum_{i=0}^\infty v(T_i^{(n)},\omega)\chi_{(T_i^{(n)},T_{i+1}^{(n)}]}(s).$$
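The construction of these stopping times and of $v_n$ can be sketched on a discretized path. The following Python illustration is not part of the text: it records, on a fine time grid, the first indices at which the sampled path moves more than $2^{-n}$ from its value at the prior stopping time, then holds that value until the next such time, so that the approximation error never exceeds $2^{-n}$.

```python
import numpy as np

# Illustrative sketch: on a discretized path of a right continuous
# process v, build the times T_i^(n) at which v first moves more than
# 2^-n from v(T_i^(n)), and the generalized simple process that holds
# the value v(T_i^(n)) until the next such time.
def stopping_indices(v_path, eps):
    times = [0]                                   # T_0 = 0
    for k in range(1, len(v_path)):
        if abs(v_path[k] - v_path[times[-1]]) > eps:
            times.append(k)                       # T_{i+1}: first exceedance
    return times

def simple_approx(v_path, times):
    out = np.empty_like(v_path)
    for i, Ti in enumerate(times):
        Tnext = times[i + 1] if i + 1 < len(times) else len(v_path)
        out[Ti:Tnext] = v_path[Ti]                # v_n = v(T_i) until T_{i+1}
    return out

rng = np.random.default_rng(2)
path = np.cumsum(rng.normal(0.0, 0.01, 10_000))   # a sampled continuous path
n = 4
vn = simple_approx(path, stopping_indices(path, 2.0 ** -n))
print(np.max(np.abs(vn - path)))                  # <= 2^-n by construction
```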
To justify the application of proposition 2.55, we will prove that $v_n(s,\omega)\chi_{(0,t]}(s)\in H_2([0,\infty)\times S)$ for all $n$ and $t$.
Recalling the discussion preceding proposition 2.55, $v_n(s,\omega)$ is adapted if $a_i(\omega)\equiv v(T_i^{(n)},\omega)$ is $\sigma_{T_i^{(n)}}(S)$-measurable (definition 5.52, book 7) for all $i$.
For $s=0$, since $T_0^{(n)}=0$ for all $\omega$ and $\{T_i^{(n)}\le 0\}$ is empty for $i\ge 1$, only $T_0^{(n)}$ need be checked. Then for $A\in B(\mathbb{R})$, since $v(s,\omega)$ is adapted:
$$v^{-1}(T_0^{(n)}(\cdot),\cdot)(A)\cap\{T_0^{(n)}\le 0\}=v^{-1}(0,\cdot)(A)\in\sigma_0(S).$$
For $s>0$ we use 1 of book 7's proposition 5.61 and prove that for $A\in B(\mathbb{R})$:
$$v^{-1}(T_i^{(n)}(\cdot),\cdot)(A)\cap\{T_i^{(n)}<s\}\in\sigma_s(S). \quad ((1))$$
To verify (1), let $T_i^{(n)}=T$ to simplify notation, and for given $N$ and $m\le N2^N$ define $T_N$ as the finite valued stopping time of book 7's proposition 5.57. That is:
$$T_N\equiv m2^{-N},\ \text{for }(m-1)2^{-N}\le T<m2^{-N},$$
and $T_N=\infty$ if $T\ge N$. By construction:
$$v^{-1}(T_N(\cdot),\cdot)(A)\cap\{T<s\}=\bigcup_{m2^{-N}\le s}\left[v^{-1}(m2^{-N},\cdot)(A)\cap\left\{(m-1)2^{-N}\le T<m2^{-N}\right\}\right],$$
$$\int_S\int_0^\infty v_n^2(s,\omega)\chi_{(0,t]}(s)\,ds\,d\mu\le 2\left\|v(s,\omega)\right\|^2_{H_2([0,\infty)\times S)}+2^{1-2n}t.$$
Hence:
$$E\left[\sum_{n=1}^\infty\sup_{r\le t}\left|\int_0^r\left[v_n(s,\omega)\chi_{(0,t]}(s)-v(s,\omega)\right]dB_s(\omega)\right|^2\right]$$
$$=\sum_{n=1}^\infty E\left[\sup_{r\le t}\left|\int_0^r\left[v_n(s,\omega)\chi_{(0,t]}(s)-v(s,\omega)\right]dB_s(\omega)\right|^2\right]<\infty,$$
and thus as $n\to\infty$:
$$\sup_{r\le t}\left|\int_0^r\left[v_n(s,\omega)\chi_{(0,t]}(s)-v(s,\omega)\right]dB_s(\omega)\right|\to 0,\ \mu\text{-a.e.}$$
with mesh size $\max_{1\le i\le n}\{t_{i+1}-t_i\}\to 0$, then with the integral defined as $I_t(\omega)$ of proposition 2.49:
$$\sum_{i=0}^n v(t_i,\omega)\left[B_{t_{i+1}}(\omega)-B_{t_i}(\omega)\right]\to_{L_2(S)}\int_0^t v(s,\omega)\,dB_s(\omega). \quad (2.48)$$
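This Riemann-sum convergence can be observed numerically. The following Python sketch (an illustration, not part of the text) uses the left continuous adapted integrand $v(s,\omega)=\sin(B_s(\omega))$, a choice made only for the demonstration, and shows the $L_2$ distance between coarse-partition left-endpoint sums and the sum on the finest available partition shrinking with the mesh.

```python
import numpy as np

# Numerical sketch (illustrative) of 2.48 for v(s, w) = sin(B_s(w)):
# left-endpoint Riemann sums on coarser partitions approach, in L2(S),
# the sum on the finest partition, which stands in for the Ito integral.
rng = np.random.default_rng(3)
n_paths, n_fine, t = 20_000, 512, 1.0
dt = t / n_fine
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_fine))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

def riemann_sum(step):
    # sum_i v(t_i)(B_{t_{i+1}} - B_{t_i}) over the subgrid of this step
    left = B[:, 0:n_fine:step]
    incr = B[:, step::step] - left
    return np.sum(np.sin(left) * incr, axis=1)

ref = riemann_sum(1)                               # finest partition
errs = [np.sqrt(np.mean((riemann_sum(k) - ref) ** 2)) for k in (64, 16, 4)]
print(errs)                                        # decreasing with the mesh
```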
3.9. To check the integrability constraint 3.11 for this space, recall corollary 6.16 of book 7, and then corollary 2.10:
$$\langle M\rangle_s=\langle B\rangle_{s\wedge T}=s\wedge T,$$
with integrand $v(s,\omega)\equiv v(s)$ independent of $\omega$. Such Itô integrals are called Wiener integrals after Norbert Wiener (1894-1964). Since $v(s)$ is apparently adapted, if $v(s)$ is continuous the above result applies since continuity assures local boundedness. Consequently, this Itô integral is the limit of Riemann sums, and since these sums are normally distributed by the above observation, the following will not surprise.
This result can be generalized to left continuous $v(s)$ with the addition of the above local boundedness assumption.
Now $\{B_{t_{i+1}}(\omega)-B_{t_i}(\omega)\}_{i=0}^n$ are independent normal variates with means equal to 0 and respective variances $\{t_{i+1}-t_i\}_{i=0}^n$. So for each $n$, denoting this Riemann sum by $X_n$:
$$X_n\sim N\left(0,\sum_{i=0}^n v^2(t_i)(t_{i+1}-t_i)\right).$$
With $X$ denoting the above Itô integral, the above proposition states that $X_n\to_p X$. By proposition 5.21 of book 2 this implies convergence in distribution, $X_n\to_d X$, and then by definition (remark 8.3, book 2) this is equivalent to weak convergence of the associated distribution functions, $F_n\Rightarrow F$.
Lévy's continuity theorem for $C_F(r)$ in book 6's proposition 6.16 obtains that $F_n\Rightarrow F$ if and only if $C_{F_n}(r)\to C_F(r)$ for all $r$. But:
$$C_{F_n}(r)\equiv C_{X_n}(r)\to\exp\left(-\frac{1}{2}r^2\int_0^t v^2(s)\,ds\right),$$
and thus it follows that $C_X(r)=\exp\left(-\frac{1}{2}r^2\int_0^t v^2(s)\,ds\right)$. This is the characteristic function of $N\left(0,\int_0^t v^2(s)\,ds\right)$, and so by the uniqueness theorem of book 6's proposition 6.14, 2.52 follows.
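The normality of the Wiener integral can be checked by simulation. The following Python sketch (illustrative, not part of the text) takes the deterministic integrand $v(s)=s$ on $[0,1]$, an assumption made for the example, so that the limit should be $N\left(0,\int_0^1 s^2\,ds\right)=N(0,1/3)$; the sample mean, variance, and standardized fourth moment are compared against those of this normal law.

```python
import numpy as np

# Monte Carlo sketch (illustrative) of 2.52 with v(s) = s on [0, 1]:
# the Riemann sums X_n = sum_i v(t_i)(B_{t_{i+1}} - B_{t_i}) are exactly
# normal, and the limiting Wiener integral is N(0, 1/3).
rng = np.random.default_rng(4)
n_paths, n_steps, t = 50_000, 256, 1.0
dt = t / n_steps
s = np.arange(n_steps) * dt                  # left endpoints t_i
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
X = dB @ s                                   # sum_i v(t_i) dB_i per path

print(X.mean(), X.var())                     # ~ 0 and ~ 1/3
print(np.mean((X / X.std()) ** 4))           # ~ 3, the normal fourth moment
```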
Chapter 3
Item 3 reflects the fact that $B_t(\omega)$ has finite quadratic variation in the sense of proposition 2.17. Specifically, as $\max_{1\le i\le n}\{t_i-t_{i-1}\}\to 0$:
CHAPTER 3 INTEGRALS W.R.T. CONTINUOUS LOCAL MARTINGALES
" #
XNn 2
2
E Bti (!) Bti 1 (!) t ! 0:
i=1
hBit = t:
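The concentration of realized quadratic variation at $\langle B\rangle_t=t$ is easy to see numerically. The following Python sketch (illustrative, not part of the text) computes $\sum_i(B_{t_i}-B_{t_{i-1}})^2$ on progressively finer partitions; the mean stays at $t$ while the $L_2$ error $E\left[(QV-t)^2\right]=2t^2/n$ vanishes with the mesh.

```python
import numpy as np

# Numerical sketch (illustrative): the realized quadratic variation
# sum_i (B_{t_i} - B_{t_{i-1}})^2 concentrates at <B>_t = t as the mesh
# shrinks; here E[(QV - t)^2] = 2 t^2 / n.
rng = np.random.default_rng(5)
n_paths, t = 10_000, 1.0
for n_steps in (10, 100, 1_000):
    dB = rng.normal(0.0, np.sqrt(t / n_steps), (n_paths, n_steps))
    qv = np.sum(dB**2, axis=1)               # realized quadratic variation
    print(n_steps, qv.mean(), np.mean((qv - t) ** 2))
```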
In proposition 6.1 we will prove a result due to Paul Lévy (1886-1971) and known as Lévy's characterization of Brownian motion. It states that Brownian motion is the only continuous martingale with quadratic variation process $\langle B\rangle_t=t$. So 3 cannot be satisfied for any other general process $M_t(\omega)$. On the other hand, perhaps it is the existence of $\langle B\rangle_t$, more than this actual value, that is relevant. Thus it is compelling to wonder if any continuous martingale (or local martingale) will support a definition of $\int_0^t v(s,\omega)\,dM_s(\omega)$ for suitable functions $v(s,\omega)$, since the existence of a finite quadratic variation process $\langle M\rangle_t$ is assured by book 7's proposition 6.12.
To this end, the next section addresses integration of simple processes with respect to continuous martingales. The following section will then generalize the development of the Itô integral of $v(s,\omega)\in H_2([0,\infty)\times S)$ with respect to Brownian motion $B_t$, to stochastic integrals of a defined space of predictable integrands, $H_2^M([0,\infty)\times S)$, with respect to continuous $L_2$-bounded martingales $M_t$ with $M_0=0$. The final section in this chapter will generalize this result to continuous local martingales using a bigger space of integrands.
Then in the next chapter we address continuous semimartingale integrators. The general integration theory of predictable integrands with respect to semimartingales originates with a 1980 paper of Claude Dellacherie (b. 1943). This theory can also be developed in the somewhat more general context of progressively measurable integrands. Indeed, for many proofs it will be seen that it is progressive measurability of an integrand, as implied by predictability (proposition 5.19, book 7), that is needed for the stated result. See for example Karatzas and Shreve (1988) for the more general development.
3.1 INTEGRATION OF SIMPLE PROCESSES
noting that $v(s,\omega)\chi_{(t,t']}(s)$ is an adapted simple process as defined above. From this is derived:
$$\int_t^{t'}v(s,\omega)\,dM_s(\omega)=\int_0^{t'}v(s,\omega)\,dM_s(\omega)-\int_0^t v(s,\omega)\,dM_s(\omega). \quad (3.3)$$
The first result and proof will be of little surprise, other than the qualification of "almost" in 2. See remark 3.2.
1. For $0\le t<t'\le\infty$:
$$E\left[\int_t^{t'}v(s,\omega)\,dM_s(\omega)\right]=0. \quad (3.4)$$
2. The process:
$$V_t^M(\omega)\equiv\int_0^t v(s,\omega)\,dM_s(\omega), \quad (3.5)$$
is continuous, satisfies the martingale property relative to $\{\sigma_t(S)\}$, and is thus almost a continuous martingale on $(S,\sigma(S),\sigma_t(S),\mu)$.
The last step follows since $M_t$ is a martingale relative to $\{\sigma_t(S)\}$. When $t>t_{n+1}$, the formula in $(*)$ obtains $V_t^M(\omega)=V_{t_{n+1}}^M(\omega)$ and thus $E[V_t^M(\omega)]=0$ for all $t$. Then 3.4 follows from 3.3.
For $t\le t_{n+1}$, continuity of $V_t^M(\omega)$ follows from $(*)$ and the continuity of $M_t$, while $\sigma_t$-measurability follows from $(*)$ and the adaptedness of $M_t$. Both extend to $t>t_{n+1}$ since there $V_t^M(\omega)=V_{t_{n+1}}^M(\omega)$. For $0\le s\le t$:
Now $E\left[V_s^M\,|\,\sigma_s(S)\right]=V_s^M$ since $V_s^M$ is $\sigma_s$-measurable, and $E\left[V_t^M-V_s^M\,|\,\sigma_s(S)\right]=0$ follows from 1 since by 3.3:
$$V_t^M-V_s^M=\int_0^\infty v(r,\omega)\chi_{(0,t]}(r)\,dM_r(\omega)-\int_0^\infty v(r,\omega)\chi_{(0,s]}(r)\,dM_r(\omega)=\int_s^t v(r,\omega)\,dM_r(\omega).$$
Hence:
$$E\left[V_t^M\,\big|\,\sigma_s(S)\right]=V_s^M,$$
3.5:
$$\left\|V_\infty^M(\omega)\right\|^2_{L_2(S)}=\sum_{j=0}^n E\left[a_j^2(\omega)\left(M^2_{t_{j+1}}(\omega)-M^2_{t_j}(\omega)\right)\right]. \quad (3.6)$$
Proof. Repeating the derivation for the analogous Itô result in proposition 2.27, and then expanding:
$$\left\|V_\infty^M(\omega)\right\|^2_{L_2(S)}\equiv E\left[\sum_{j=0}^n a_j(\omega)\left(M_{t_{j+1}}(\omega)-M_{t_j}(\omega)\right)\right]^2$$
$$=\sum_{j=0}^n E\left[a_j^2(\omega)\left(M^2_{t_{j+1}}(\omega)+M^2_{t_j}(\omega)\right)\right]-2\sum_{j=0}^n E\left[a_j^2(\omega)M_{t_{j+1}}(\omega)M_{t_j}(\omega)\right]$$
$$+2\sum_{i<j}E\left[a_i(\omega)a_j(\omega)\left(M_{t_{i+1}}(\omega)-M_{t_i}(\omega)\right)\left(M_{t_{j+1}}(\omega)-M_{t_j}(\omega)\right)\right].$$
$$E\left[a_j^2(\omega)M_{t_{j+1}}(\omega)M_{t_j}(\omega)\right]=E\left[a_j^2(\omega)M_{t_j}(\omega)E\left[M_{t_{j+1}}(\omega)\,\big|\,\sigma_{t_j}(S)\right]\right].$$
Looking at the proof of proposition 3.3, 3.6 looks like the beginning of an Itô-type isometry as seen in 2.22. But while 3.6 relates the $L_2$-norm of the stochastic integral $V_\infty^M(\omega)$ to an expression that reflects the integrand $v(s,\omega)$, in the earlier derivation the summation on the right could be related to the $L_2$-norm of $v(s,\omega)$. For the formula above, such a conversion is not apparent.
This is a major obstacle for the current development, because the existence of an Itô-type isometry was the key tool used for generalizing the Itô integral from adapted simple processes to appropriately specified integrands. To make progress on this we will first restrict the continuous martingale integrator $M_t$ to be $L_2$-bounded, and then later consider continuous local martingales.
The strategy for accomplishing this result will be similar to that used for the Itô integral: first defining the integral for suitable simple processes $v(s,\omega)$ as in 3.1, then generalizing to other $v(s,\omega)$ with the aid of an approximation theorem and a generalized Itô isometry.
While a good approach for Brownian motion, this calculation does not easily generalize to continuous martingales because we have no ready formula for:
$$E\left[\left(M_{t_{j+1}}(\omega)-M_{t_j}(\omega)\right)^2\right].$$
The second summation is the Lebesgue integral of $v^2(s,\omega)$ as before. For the first, since $B_t^2(\omega)-t$ is a martingale (proposition 2.8), applying properties of conditional expectations obtains:
$$E\left[a_j^2(\omega)\left(\left(B^2_{t_{j+1}}(\omega)-t_{j+1}\right)-\left(B^2_{t_j}(\omega)-t_j\right)\right)\right]$$
$$=E\left[a_j^2(\omega)E\left[\left(B^2_{t_{j+1}}(\omega)-t_{j+1}\right)-\left(B^2_{t_j}(\omega)-t_j\right)\,\Big|\,\sigma_{t_j}(S)\right]\right]=0.$$
$v(t,\omega)$, the Lebesgue-Stieltjes model is better suited to the current investigation.
The final result in 3.7 is the Itô isometry in the current context.
where $d\langle M\rangle_s$ denotes the pathwise defined Borel measure associated with the increasing quadratic variation process $\langle M\rangle_t$.
Hence by proposition 3.1 and remark 3.2, $V_t^M$ is a continuous martingale if for all $t$:
$$E\left[\int_0^t v^2(s,\omega)\,d\langle M\rangle_s\right]<\infty.$$
Proof. Repeating the steps above with $B^2_{t_j}(\omega)-t_j$ replaced by $M^2_{t_j}-\langle M\rangle_{t_j}$ obtains:
$$\left\|V_\infty^M(\omega)\right\|^2_{L_2(S)}=\sum_{j=0}^n E\left[a_j^2(\omega)E\left[\left(M^2_{t_{j+1}}-\langle M\rangle_{t_{j+1}}\right)-\left(M^2_{t_j}-\langle M\rangle_{t_j}\right)\,\Big|\,\sigma_{t_j}(S)\right]\right] \quad ((*))$$
$$+\sum_{j=0}^n E\left[a_j^2(\omega)\left(\langle M\rangle_{t_{j+1}}-\langle M\rangle_{t_j}\right)\right],$$
which is 3.7.
Given proposition 3.1, to show that $V_t^M$ is a continuous martingale requires only that the integrability assumption is satisfied. By the Cauchy-Schwarz inequality (corollary 3.48, book 4):
Remark 3.5 (On the Doléans measure) It should be noted that given a continuous $L_2$-bounded martingale $M_t$ and the associated increasing, continuous process $\langle M\rangle_t$, it is by no means obvious that there is a measure $\mu_M$ on $[0,\infty)\times S$ with which to define the above space of functions as an $L_2$-space as in book 5, in the sense that:
$$\|v(s,\omega)\|^2_{L^0_2([0,\infty)_{\langle M\rangle}\times S)}=\int v^2(s,\omega)\,d\mu_M. \quad (3.8)$$
The complexity here is that $L^0_2([0,\infty)_{\langle M\rangle}\times S)$ is not a traditional product measure space since the time integrals are defined by Borel measures $d\langle M\rangle_t$ that depend on $\omega\in S$.
Such a measure was shown to exist by Catherine A. Doléans-Dade (1942-2004), and is called a Doléans measure. This measure is defined on the predictable sigma algebra $\mathcal{P}$ of book 7's definition 5.10, which is generated by the semi-algebra of sets:
$$\left\{(s,t]\times A_s,\ \{0\}\times A_0\right\},$$
where $0\le s<t$ and $A_s\in\sigma_s(S)$ for all $s$. On such sets, the set function $\mu^0_M$ is defined by:
$$\mu^0_M\left[(s,t]\times A_s\right]\equiv\int_{A_s}\int_s^t d\langle M\rangle_r\,d\mu=\int_{A_s}\left[\langle M\rangle_t-\langle M\rangle_s\right]d\mu.$$
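This set function can be estimated by simulation in the simplest case. The following Python sketch (illustrative, not part of the text) takes $M=B$, so that $\langle B\rangle_r=r$ and $\mu^0_M\left[(s,t]\times A_s\right]=\mu(A_s)(t-s)$; the pathwise $d\langle B\rangle$ integral over $(s,t]$ is approximated by realized quadratic variation, and the choice $A_s=\{B_s>0\}$ is an arbitrary $\sigma_s(S)$-measurable set for the demonstration.

```python
import numpy as np

# Numerical sketch (illustrative) of the Doleans set function on a
# generating set (s, t] x A_s, in the special case M = B with <B>_r = r,
# where mu0_M[(s,t] x A] = mu(A)(t - s).
rng = np.random.default_rng(6)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

s_idx, t_idx = 50, 150                       # s = 0.25, t = 0.75
A = B[:, s_idx] > 0.0                        # A_s in sigma_s(S), mu(A) = 1/2
qv = np.sum(dB[:, s_idx:t_idx] ** 2, axis=1) # <B>_t - <B>_s along each path
doleans = np.mean(A * qv)                    # int_A (<B>_t - <B>_s) dmu
print(doleans)                               # ~ mu(A)(t - s) = 0.25
```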
Example 3.7 (All Itô integrals are integrators) The prior section's development of the Itô integral provides examples of the continuous $L_2$-bounded martingales addressed in this section. Let $B_t(\omega)$ be a Brownian motion on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ and $v(t,\omega)\in H_2([0,\infty)\times S)$ as in definition 2.31. Then:
$$M_t(\omega)\equiv\int_0^t v(s,\omega)\,dB_s(\omega)$$
is a continuous martingale by proposition 2.52, where this integral is given in proposition 2.49 and denoted $I_t(\omega)$. Further, this martingale is $L_2$-bounded by the Itô isometry of 2.43:
$$E\left[|M_t(\omega)|^2\right]=\|v(s,\omega)\|^2_{H_2([0,t]\times S)},$$
Remark 3.8 (On $M_2$) Recalling book 7's definition 5.10, that $M_t$ is continuous in $t$ is to be interpreted as continuous $\mu$-a.e. This is a formality in the sense that we can always make $M_t$ continuous for all $\omega$ by redefining $M_t(\omega)\equiv 0$ on the exceptional set. But there is never any reason to do this, as it changes nothing in the theory.
As is the case for all $L_p$-type spaces of book 5, $M_2$ is a space of equivalence classes of processes as noted in book 7's definition 5.120. Specifically, $M_t$ and $M'_t$ are deemed to be in the same equivalence class if $M_\infty=M'_\infty$, $\mu$-a.e. Here the limiting random variables $M_\infty$ and $M'_\infty$ are in $L_2(S)$, defined in terms of the $L_2(S)$-limits:
$$M\in M_2\Rightarrow M_\infty\in L_2(S,\sigma(S),\mu),$$
The first result follows from Doob's martingale convergence theorem as noted above. For the second, if $M_\infty\in L_2(S,\sigma(S),\mu)$ then $M_t\equiv E\left[M_\infty\,|\,\sigma_t(S)\right]$ is in $M_2$ by the proof of that book's proposition 5.122.
3.2 INTEGRALS W.R.T. CONTINUOUS $L_2$-BOUNDED MARTINGALES
Turning next to the space of integrands, note that the norming constraint in 3.11 will in general select different collections of predictable processes for different continuous $L_2$-bounded martingales.
2. $v(t,\omega)$ satisfies:
$$\|v(t,\omega)\|^2_{H_2^M([0,\infty)\times S)}\equiv E\left[\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s\right]<\infty, \quad (3.11)$$
where:
$$E\left[\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s\right]\equiv\int_S\int_0^\infty v^2(s,\omega)\,d\left[\langle M\rangle_s(\omega)\right]d\mu.$$
Notation 3.10 ($H_2^M([t,t']\times S)$) If $v(t,\omega)\in H_2^M([0,\infty)\times S)$, it is sometimes convenient to define for $0\le t<t'\le\infty$:
Remark 3.11 ($H_2([0,\infty)\times S)$ vs. $H_2^B([0,\infty)\times S)$) We might be interested to compare definition 2.31 for $H_2([0,\infty)\times S)$ to $H_2^M([0,\infty)\times S)$ above with $M=B$.
First, $H_2^B$ is not formally defined. Though a continuous martingale, $B_t$ is not an $L_2$-bounded martingale since $\|B_t\|_{L_2}=\sqrt t$. That said, $\langle B\rangle_t=t$ exists and thus we can formally define $H_2^B([0,\infty)\times S)$ as above. Then the integrability requirement for $v(t,\omega)\in H_2([0,\infty)\times S)$ in 2.27 is identical to the integrability requirement for $v(t,\omega)\in H_2^B([0,\infty)\times S)$ in 3.11.
For measurability, processes $v(t,\omega)$ in any $H_2^M([0,\infty)\times S)$ are predictable, and so by book 7's proposition 5.19 they are adapted and measurable, and hence satisfy the measurability requirements for processes in $H_2([0,\infty)\times S)$. But being predictable is more restrictive than being adapted and measurable. An adapted right continuous process is measurable by book 7's proposition 5.19, but such processes are not necessarily predictable (recall book 7's remark 5.18).
Thus given this informal definition of $H_2^B([0,\infty)\times S)$, we conclude that:
with $0=t_0<t_1<\cdots<t_{n+1}<\infty$. Then $v(s,\omega)$ is predictable if and only if $a_{-1}(\omega)$ is measurable relative to $\sigma_0(S)$, and $a_j(\omega)$ is measurable relative to $\sigma_{t_j}(S)$ for all $j$.
Proof. If $v(s,\omega)$ is predictable it must be adapted and measurable by book 7's proposition 5.19. Hence $a_{-1}(\omega)$ is measurable relative to $\sigma_0(S)$, and each other $a_j(\omega)$ is measurable relative to $\sigma_s(S)$ for all $s\in(t_j,t_{j+1}]$. By the assumed right continuity of the filtration it follows that $a_j(\omega)$ is measurable relative to $\sigma_{t_j}(S)$. Conversely, if $v(t,\omega)$ is given above with all $a_j(\omega)$ measurable as described, then $v(t,\omega)$ is adapted. Since left continuous, book 7's corollary 5.17 obtains that it is predictable.
Exercise 3.13 Check that $H_2^M([0,\infty)\times S)$ is a real vector space that contains $v(t,\omega)\equiv 1$ and all simple processes as in 2.21 with appropriately measurable $\{a_j(\omega)\}_{j=0}^n\subset L_2(S,\mu)$. Hint: This is not trivial, and requires book 7's proposition 6.18 and corollary 5.116.
Before proceeding with this section's development, we verify that $H_2^M([0,\infty)\times S)$ is a normed space under $\|\cdot\|_{H_2^M([0,\infty)\times S)}$ defined in 3.11.
$$\left(\int_0^\infty\left[u(s,\omega)+v(s,\omega)\right]^2 d\langle M\rangle_s\right)^{1/2}\le\left(\int_0^\infty u^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}+\left(\int_0^\infty v^2(s,\omega)\,d\langle M\rangle_s\right)^{1/2}.$$
To this end, the first step is to show that every process $v(t,\omega)\in H_2^M([0,\infty)\times S)$ can be approximated in the $H_2^M([0,\infty)\times S)$ norm with simple processes. For this proof we will utilize the functional monotone class theorem of proposition 1.32 of book 5. Recalling the proof of proposition 2.57, if $v(t,\omega)\in H_2([0,\infty)\times S)$ is left continuous, then $v(t,\omega)\in H_2^{B^T}([0,\infty)\times S)$ for any fixed stopping time $T<\infty$. The approximating sequence $\{v_n(t,\omega)\}_{n=1}^\infty\subset H_2^{B^T}([0,\infty)\times S)$ of the next result is then an approximating sequence for $v(t,\omega)\chi_{(0,T]}(s)$ in $H_2([0,\infty)\times S)$ since the measurability conditions are identical (compare lemma 2.25 with lemma 3.12), as are the norming constraints.
Proof. Assume that for arbitrary $T$ the proposition is true for all bounded $v(t,\omega) \in H_2^M$ with $v(t,\omega) = 0$ for $t > T$. For general $v(t,\omega) \in H_2^M$, define $v^{(k)}(t,\omega) \equiv v(t,\omega)\chi_{[0,k]}(t)\chi_{\{|v(t,\omega)|\le k\}}(\omega)$ and note that $v^{(k)}(t,\omega) \to v(t,\omega)$ pointwise for all $(t,\omega)$. Now:
$$\big\|v^{(k)}(t,\omega) - v(t,\omega)\big\|_{H_2^M} \to 0.$$
So for each $k$ choose $n_k$ with $\big\|v^{(k)}_{n_k}(t,\omega) - v^{(k)}(t,\omega)\big\|_{H_2^M} < 1/k$, and thus by the triangle inequality, $\big\|v(t,\omega) - v^{(k)}_{n_k}(t,\omega)\big\|_{H_2^M} \to 0$. Hence the assumed limited result for bounded $v(t,\omega)$ with compact support implies the general result.
To circumvent the use of the (here) unproved Doléans measure we can proceed as follows. For general $v(t,\omega) \in H_2^M$, the requirement that $\|v(t,\omega)\|_{H_2^M} < \infty$ implies that $\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s < \infty$, $\mu$-a.e. On this set of probability 1, $v^{(k)}(s,\cdot) \to v(s,\cdot)$ pointwise for all $s$, and Lebesgue's dominated convergence theorem obtains that $\int_0^\infty \big(v^{(k)}(s,\cdot) - v(s,\cdot)\big)^2\, d\langle M\rangle_s \to 0$, $\mu$-a.e. Now since:
$$\int_0^\infty \big(v^{(k)}(s,\omega) - v(s,\omega)\big)^2\, d\langle M\rangle_s \le 4\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s,$$
We now check the three requirements of the functional monotone class the-
orem.
From 6.15 of book 7's proposition 6.18 it follows that for all $t \ge s$:
The functional monotone class theorem now applies and it can be concluded that $\mathcal{H}_T$ contains all bounded predictable functions with $v(t,\omega) = 0$ for $t > T$, and hence all bounded $v(t,\omega) \in H_2^M$ with $v(t,\omega) = 0$ for $t > T$, as was to be proved.
Corollary 3.16 If $v(t,\omega)$ and $\{v_n(t,\omega)\}_{n=1}^\infty$ are given as in the above proposition with 3.13 satisfied, then for any interval $(t,t']$ with $0 \le t < t' \le \infty$:
Since $v(s,\omega) \in \bigcap_{j=1}^m H_2^{M_j}([0,\infty)\times\mathcal{S})$ as above:
$$\|v(s,\omega)\|_{H_2^{M_j}} \le cE\big[\langle M_j\rangle_T\big] = cE\big[M_j^2(T)\big] < \infty,$$
$$\Big\|v^{(k)}_{n_k}(t,\omega) - v^{(k)}(t,\omega)\Big\|_{H_2^{M_j}} < 1/k,$$
$$\Big\|v(t,\omega) - v^{(k)}_{n_k}(t,\omega)\Big\|_{H_2^{M_j}} \to 0.$$
Proposition 3.18 (Simple process integrals redux) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ and $v(t,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ a simple process as above:
where
$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega).$$
3. Itô $M$-Isometry:
where
$$\|v(s,\omega)\|^2_{H_2^M([0,t]\times\mathcal{S})} \equiv E\left[\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\right].$$
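The isometry can be illustrated numerically. The sketch below is a hypothetical Monte Carlo check, not from the text: it takes $M_t = B_t$, a Brownian motion, so that $d\langle M\rangle_s = ds$, and compares the two sides for the left-continuous integrand $v(s,\omega) = B_s(\omega)$; both should be near $\int_0^1 s\, ds = 1/2$.

```python
import numpy as np

# Hypothetical Monte Carlo check of the Ito M-isometry for M = Brownian motion,
# where d<M>_s = ds; all names and parameters here are illustrative.
rng = np.random.default_rng(0)
n_paths, n_steps, t = 20000, 500, 1.0
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
B_left = np.cumsum(dB, axis=1) - dB                         # B at left endpoints, B_0 = 0

v = B_left                                   # integrand v(s, w) = B_s, sampled predictably
stoch_int = np.sum(v * dB, axis=1)           # simple-process approximation of int_0^1 v dB
lhs = np.mean(stoch_int ** 2)                # E[ (int_0^1 v dB)^2 ]
rhs = np.mean(np.sum(v ** 2, axis=1) * dt)   # E[ int_0^1 v^2 d<B>_s ]
print(lhs, rhs)                              # both near 1/2
```

The left-endpoint sampling matters: it is what makes the discretized integrand a simple predictable process in the sense of 2.21.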
Proposition 3.19 (Stochastic integral on $H_2^M([0,\infty)\times\mathcal{S})$) Given the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$ be a continuous $L_2$-bounded martingale with $M_0 = 0$ and $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$. If $\{v_n(t,\omega)\}_{n=1}^\infty \subset H_2^M([0,\infty)\times\mathcal{S})$ is any approximating sequence of simple processes satisfying 3.13, then for any $0 \le t < t' \le \infty$, the stochastic integral:
$$\int_t^{t'} v(s,\omega)\, dM_s(\omega),$$
is well defined as the $L_2(\mathcal{S})$-limit of the sequence of integrals of the simple processes $\{v_n(s,\omega)\chi_{(t,t']}(s)\}_{n=1}^\infty \subset H_2^M([0,\infty)\times\mathcal{S})$ as defined in 3.1. That is:
$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} \to 0. \tag{3.18}$$
By "well defined" is meant that for every such $t, t'$, $\int_t^{t'} v(s,\omega)\, dM_s(\omega)$ is uniquely defined $\mu$-a.e., independent of the approximating sequence.
Proof. As before, $\{v_n(t,\omega)\}_{n=1}^\infty$ is a Cauchy sequence in $H_2^M([0,\infty)\times\mathcal{S})$:
$$\|v_n(s,\omega) - v_m(s,\omega)\|_{H_2^M} \le \|v_n(s,\omega) - v(s,\omega)\|_{H_2^M} + \|v(s,\omega) - v_m(s,\omega)\|_{H_2^M}.$$
Since $v_n(t,\omega) - v_m(t,\omega)$ is a simple process for all $n, m$, and for such processes 3.1 obtains:
$$\int_0^\infty [v_n(s,\omega) - v_m(s,\omega)]\, dM_s(\omega) = \int_0^\infty v_n(s,\omega)\, dM_s(\omega) - \int_0^\infty v_m(s,\omega)\, dM_s(\omega),$$
it follows from 3.17 that $\left\{\int_0^\infty v_n(s,\omega)\, dM_s(\omega)\right\}_{n=1}^\infty$ is a Cauchy sequence in $L_2(\mathcal{S})$. This sequence therefore has a limit $V_\infty^M(\omega) \in L_2(\mathcal{S})$ with
$$\left\|V_\infty^M(\omega) - \int_0^\infty v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} \to 0.$$
To see that $V_\infty^M(\omega)$ is well defined, assume that $\{\tilde v_n(t,\omega)\}_{n=1}^\infty \subset H_2^M([0,\infty)\times\mathcal{S})$ is another approximating sequence of simple processes for $v(s,\omega)$ satisfying 3.13, and let $\tilde V_\infty^M(\omega)$ then be derived similarly. That $\big\|V_\infty^M(\omega) - \tilde V_\infty^M(\omega)\big\|_{L_2(\mathcal{S})} = 0$ is proved exactly as in proposition 2.40, and thus $V_\infty^M(\omega) = \tilde V_\infty^M(\omega)$, $\mu$-a.e. The proof for $\int_t^{t'} v(s,\omega)\, dM_s(\omega)$ is then the same using corollary 3.16.
Corollary 3.20 For all $t, t'$ with $0 \le t < t' \le \infty$, the integral $\int_t^{t'} v(s,\omega)\, dM_s(\omega)$ is equivalently defined $\mu$-a.e. by:
$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) \equiv \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega). \tag{3.19}$$
Proof. With $\{v_n(s,\omega)\}_{n=1}^\infty \subset H_2^M([0,\infty)\times\mathcal{S})$ an approximating sequence for $v(s,\omega)$ as in proposition 3.15 above, $v_n(s,\omega)\chi_{(t,t']}(s) \subset H_2^M([0,\infty)\times\mathcal{S})$ is an approximating sequence for $v(s,\omega)\chi_{(t,t']}(s)$ by corollary 3.16. Thus by 3.18 applied to $v(s,\omega)\chi_{(t,t']}(s)$, the integral on the right of 3.19 is defined by:
$$\left\|\int_0^\infty v_n(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega) - \int_0^\infty v(s,\omega)\chi_{(t,t']}(s)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} \to 0.$$
Exercise 3.21 Show that given linearity of this integral, which is 2 of proposition 3.24 below, 3.19 obtains:
$$\int_t^{t'} v(s,\omega)\, dM_s(\omega) = \int_0^{t'} v(s,\omega)\, dM_s(\omega) - \int_0^t v(s,\omega)\, dM_s(\omega), \quad \mu\text{-a.e.} \tag{3.20}$$
This is because:
$$\left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} \le \left\|\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} + \left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) - \int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})},$$
and the last expression converges to 0. Denoting $X_n \equiv \int_t^{t'} v_n(s,\omega)\, dM_s(\omega)$ to simplify notation, then as $N \to \infty$:
$$\sup_n \int_{|X_n(s)|\ge N} |X_n(s)|\, d\mu \le \frac{1}{N}\sup_n \int_{|X_n(s)|\ge N} |X_n(s)|^2\, d\mu \le \frac{1}{N}\sup_n E\big[|X_n|^2\big] \to 0.$$
The next result summarizes properties of this integral, providing all the
usual results other than continuity, which is addressed below.
Proposition 3.24 (Properties of the stochastic integral) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$ be a continuous $L_2$-bounded martingale with $M_0 = 0$, $v(s,\omega), u(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, and $0 \le t < t' \le \infty$.
5. The process:
$$V_t^M(\omega) \equiv \int_0^t v(s,\omega)\, dM_s(\omega)$$
For 2, first note that $av(s,\omega) + u(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ since $\|\cdot\|_{H_2^M([t,t']\times\mathcal{S})}$ is a norm by proposition 3.14, and thus:
Now with $\{v_n(s,\omega)\}_{n=1}^\infty$ as above, $L_2$-convergence $\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) \to \int_t^{t'} v(s,\omega)\, dM_s(\omega)$ assures
$$\left\|\int_t^{t'} v_n(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})} \to \left\|\int_t^{t'} v(s,\omega)\, dM_s(\omega)\right\|_{L_2(\mathcal{S})}.$$
The next result shows that this stochastic integral is not only linear in its integrand, but also linear in its integrator.
Proposition 3.26 (Integral is linear w.r.t. integrators) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t, N_t \in \mathcal{M}_2$ be continuous $L_2$-bounded martingales with $M_0 = N_0 = 0$, and $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S}) \cap H_2^N([0,\infty)\times\mathcal{S})$. Then $v(s,\omega) \in H_2^{M+N}([0,\infty)\times\mathcal{S})$ and if $0 \le t \le \infty$:
$$\int_0^t v(s,\omega)\, d[M_s + N_s](\omega) = \int_0^t v(s,\omega)\, dM_s(\omega) + \int_0^t v(s,\omega)\, dN_s(\omega), \quad \mu\text{-a.e.} \tag{3.25}$$
Proof. By book 7's proposition 6.30:
$$\langle M+N\rangle_t = \langle M\rangle_t + \langle N\rangle_t + 2\langle M,N\rangle_t,$$
$$\langle M+N\rangle_t \le 2\big(\langle M\rangle_t + \langle N\rangle_t\big),$$
$$\big\|V^{M+N} - V^M - V^N\big\|_{L_2(\mathcal{S})} \le \big\|V^{M+N} - V_n^{M+N}\big\|_{L_2(\mathcal{S})} + \big\|V^M - V_n^M\big\|_{L_2(\mathcal{S})} + \big\|V^N - V_n^N\big\|_{L_2(\mathcal{S})}.$$
Proposition 3.27 (A continuous version of $V_t^M(\omega)$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$ be a continuous $L_2$-bounded martingale with $M_0 = 0$ and $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$. Then there exists a process $I_t^M(\omega)$, continuous in $t \in [0,\infty)$ for all $\omega \in \mathcal{S}$, so that with integrals below defined in 3.18:
$$\|v_n(s,\omega) - v_m(s,\omega)\|_{H_2^M} \le \|v_n(s,\omega) - v(s,\omega)\|_{H_2^M} + \|v_m(s,\omega) - v(s,\omega)\|_{H_2^M}.$$
Now let $\epsilon = 2^{-k}$ and define an increasing sequence $\{n_k\}_{k=1}^\infty$ so that
$$\Pr\left[\sup_{0\le t\le T}\Big|I^{M,(T)}_{n_{k+1}}(t,\omega) - I^{M,(T)}_{n_k}(t,\omega)\Big| \ge 2^{-k}\right] \le 2^{-k}.$$
Defining $A_k^{(T)} \subset \mathcal{S}$ by
$$A_k^{(T)} \equiv \left\{\omega \,\Big|\, \sup_{0\le t\le T}\Big|I^{M,(T)}_{n_{k+1}}(t,\omega) - I^{M,(T)}_{n_k}(t,\omega)\Big| \ge 2^{-k}\right\},$$
then $\sum_{k=1}^\infty \mu\big[A_k^{(T)}\big] \le 1$ and the Borel-Cantelli lemma of book 2's proposition 2.6 applies to yield that $\mu\big[\limsup A_k^{(T)}\big] = 0$. Hence, for $\omega$ outside this set of measure zero, there are at most finitely many $k$ with
Definition 3.28 (Final stochastic integral in $H_2^M$) For $v(t,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, define $\int_0^t v(s,\omega)\, dM_s(\omega)$ by:
$$\int_0^t v(s,\omega)\, dM_s(\omega) = I_t^M(\omega), \tag{3.30}$$
$$\mu_{\langle M\rangle_s} \equiv d\langle M\rangle_s, \qquad \mu_{\langle N\rangle_s} \equiv d\langle N\rangle_s.$$
$$\langle M,N\rangle_t \equiv \frac{1}{4}\big[\langle M+N\rangle_t - \langle M-N\rangle_t\big],$$
and thus induces the signed measure (definition 7.5, book 5):
$$\mu_{\langle M,N\rangle_t} \equiv \frac{1}{4}\mu_{\langle M+N\rangle_t} - \frac{1}{4}\mu_{\langle M-N\rangle_t}. \tag{3.32}$$
Thus each of these measures is finite $\mu$-a.e. If we assume that $M_t$ and $N_t$ are in fact $L_2$-bounded as above, then these measures are also finite for $T = \infty$ by book 7's corollary 5.116.
Repeating exercise 3.31, either formulation provides the same value for $\mu_{\langle M,N\rangle_t}[(a,b]]$ on the semi-algebra $\mathcal{A}'$ and thus by extension (chapter 5, book 1) will agree on $\mathcal{B}(\mathbb{R})$.
Now given the bounded variation process $\langle M,N\rangle_t$, the total variation process $|\langle M,N\rangle_t|$ is definable in terms of the strong total variation of $\langle M,N\rangle_s$ on $[0,t]$ as in definition 2.84 of book 7, or as the total variation function of book 3's definition 3.23, there denoted $T_0^t$. The total variation process $|\langle M,N\rangle_t| \equiv T_0^t$ is then increasing as a simple corollary of 3.36 below, and can be represented as in book 3's proposition 3.26 as:
Consequently the Borel measure $\mu_{|\langle M,N\rangle|_t} \equiv \mu_{T_0^t}$, introduced in 3.31 above, but using the notation $d|\langle M,N\rangle_t|$, is definable:
Remark 3.33 (On the notation $|\langle M,N\rangle_t|$) The next result shows that continuity of $\langle M,N\rangle_t$ assures continuity of $|\langle M,N\rangle_t|$. This would be trivial if $|\langle M,N\rangle_t|$ denoted the absolute value of $\langle M,N\rangle_t$, since $\big||a| - |b|\big| \le |a-b|$, and it is perhaps unfortunate that the standard notational convention for the total variation process $T_0^t$ associated with $\langle M,N\rangle_t$ is the same as that of the absolute value of this process. In the notation of book 3, this process would be denoted $T_0^t(\langle M,N\rangle_t)$, recalling remark 3.30 that this is parametrized by $\omega \in \mathcal{S}$.
Exercise 3.34 Prove that for a bounded variation function $f$, continuity of $T_0^t(f)$ assures continuity of $f(t)$. Then show that this and the next result are true pointwise, and hence $T_0^t(f)$ is continuous at $t_0$ if and only if $f(t)$ is continuous at $t_0$.
so that:
$$T_t^{t+s}(f) \le \sum_{i=1}^n |f(t_i) - f(t_{i-1})| + \epsilon/2. \tag{2}$$
This is possible since $T_t^{t+s}(f)$ is the supremum of such sums over all partitions. Next, by continuity of $f$ at $t$ choose $\delta$ so that $|f(t) - f(t')| < \epsilon/2$ if $|t - t'| < \delta$. Then if $|t - t_1| < \delta$ we use the above partition in the next step; otherwise we augment this partition with a new $t_1$ that satisfies this inequality, relabel partition points, and note that by the triangle inequality the restated (2) remains valid with this augmented partition. For notational simplicity we assume the latter outcome, and write $t_1 = t + \Delta t$. Then after relabeling:
$$t + \Delta t = t_1 < \dots < t_{n+1} = t + s$$
is a partition of $[t+\Delta t, t+s]$ and so:
$$\sum_{i=2}^{n+1} |f(t_i) - f(t_{i-1})| \le T_{t+\Delta t}^{t+s}(f). \tag{3}$$
Hint: Any partitions of $[a,c]$ and $[c,b]$ induce a partition of $[a,b]$ but not all partitions of $[a,b]$, so we get $\ge$ in 3.36. On the other hand, any partition of $[a,b]$ can be augmented with $c$ if needed to induce partitions of $[a,c]$ and $[c,b]$, and then the triangle inequality obtains the other inequality.
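The supremum characterization behind the hint can be sketched numerically. In the fragment below (an illustration under the assumption of uniform dyadic partitions; not the book's construction), repeatedly halving a partition of $[0, 3\pi/2]$ produces nested partitions, so the sums are nondecreasing and climb toward $T_0^{3\pi/2}(\cos) = 3$.

```python
import numpy as np

def partition_sum(f, a, b, n):
    """Sum of |f(t_i) - f(t_{i-1})| over a uniform partition of [a, b] into n pieces."""
    t = np.linspace(a, b, n + 1)
    return float(np.sum(np.abs(np.diff(f(t)))))

# Dyadic refinement: each partition refines the previous one, so by the
# triangle inequality the sums are nondecreasing, converging up to T_a^b(f).
a, b = 0.0, 1.5 * np.pi
sums = [partition_sum(np.cos, a, b, 2 ** k) for k in range(1, 10)]
print(sums)  # climbs toward T_0^{3pi/2}(cos) = 3
```

The total variation 3 here is $|{-1} - 1| + |0 - (-1)|$, the fall of $\cos$ on $[0,\pi]$ plus its rise on $[\pi, 3\pi/2]$.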
We next investigate how the Borel measure $\mu_{|\langle M,N\rangle|_t}$ can be defined on intervals in terms of the signed measure $\mu_{\langle M,N\rangle_t}$:
where the supremum is over all partitions $a = t_0 < \dots < t_n = b$. Hint: Recalling book 3's notation 3.24, $\mu_{|\langle M,N\rangle|_t}[(a,b]] \equiv T_a^b(f)$ with the bounded variation function $f(t) \equiv \langle M,N\rangle_t$. Use the definition of $T_a^b(f)$ and exercise 3.36.
where the supremum is over all measurable and countable partitions, $(a,b] = \bigcup E_j$ with disjoint $\{E_j\} \subset \mathcal{B}(\mathbb{R})$. The next result proves that the expression on the right of this inequality also defines a Borel measure.
where the supremum is over all measurable and countable partitions: $E = \bigcup_j E_j$ with disjoint $\{E_j\}_j \subset \sigma(X)$.
Then $|\mu|$ is a measure on $\sigma(X)$, called the total variation measure associated with $\mu$.
Proof. Following Rudin (1974), given any such countable partition $\{E_j\}$ of $E$, choose reals $t_j < |\mu|[E_j]$. Then by definition each $E_j$ has a partition $\{E_{j,i}\}_i$ so that $\sum_i |\mu[E_{j,i}]| > t_j$. As $\{E_{j,i}\}_{i,j}$ is then a partition of $E$:
$$\sum_j t_j \le \sum_{j,i} |\mu[E_{j,i}]| \le |\mu|[E].$$
Taking a supremum on the left over all such partitions $\{E_{j,i}\}_i$ of $\{E_j\}$ obtains that $\sum_j |\mu|[E_j] \le |\mu|[E]$.
$$\le \sum_j |\mu|[E_j].$$
As final results for the signed measure $\mu_{\langle M,N\rangle_t}$, we also have the Hahn and Jordan decomposition theorems of book 5's propositions 7.12 and 7.14. The Jordan theorem states that there exists a unique decomposition:
$$\mu_{\langle M,N\rangle_t} = \mu^+ - \mu^-, \tag{3.38}$$
$$\mu^+\big[\tilde E\big] = \mu^-[E] = 0.$$
$$\mu^+(A) = \mu_{\langle M,N\rangle_t}(A \cap A^+), \qquad \mu^-(A) = -\mu_{\langle M,N\rangle_t}(A \cap A^-).$$
$$\mu^J_{\langle M,N\rangle_t} \equiv \mu^+ + \mu^-. \tag{3.39}$$
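On a finite space these decompositions are transparent. The toy fragment below is purely illustrative (the atoms and weights are made up): the Jordan parts collect the positive and negative atoms, and the partition into those two sets attains the supremum defining the total variation measure.

```python
# Toy signed measure on the atoms {0,...,4}: mu[E] = sum of weights over E.
# All weights here are hypothetical, chosen only for illustration.
w = [2.0, -1.5, 0.5, -0.25, 1.0]
mu = lambda E: sum(w[i] for i in E)

pos = [i for i in range(len(w)) if w[i] > 0]   # the Hahn set A+
neg = [i for i in range(len(w)) if w[i] < 0]   # the Hahn set A-
mu_plus, mu_minus = mu(pos), -mu(neg)          # Jordan parts mu+ and mu-
total_var = mu_plus + mu_minus                 # |mu|(X) = mu+(X) + mu-(X)

# The partition {A+, A-} attains the supremum of sum_j |mu[E_j]|:
attained = abs(mu(pos)) + abs(mu(neg))
print(total_var, attained)   # 5.25 5.25
```

Any coarser partition mixes positive and negative atoms and its sum can only cancel, which is why the supremum is reached by separating the Hahn sets.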
We are now ready to bring these notions together. The following is presented in terms of inequalities, and this is adequate for the application below. But we note that one or both of these inequalities may in fact be equalities, although the proofs have proved elusive.
where these measures are respectively defined in 3.35, 3.37, and 3.39.
Proof. By exercise 3.37 and the paragraph immediately afterward, the first inequality of measures holds on the semi-algebra $\mathcal{A}'$ of right semi-closed intervals $\{(a,b]\}$ which generates $\mathcal{B}(\mathbb{R})$. Thus by book 1's remark 5.17, the same inequality extends to the associated outer measures, and then by extension (that book's propositions 5.20 and 5.23) to the associated Borel measures.
For the second inequality, if $A \in \mathcal{B}(\mathbb{R})$ and $A^+$ and $A^-$ are defined relative to the Hahn decomposition theorem above, then by the above discussion:
Thus:
$$\sum_j \big|\mu_{\langle M,N\rangle_t}[E_j]\big| \le \sum_j \mu^+[E_j] + \sum_j \mu^-[E_j] = \mu^+[A] + \mu^-[A] = \mu^J_{\langle M,N\rangle_t}[A].$$
As this is true for all such partitions, it is true for the supremum over all such partitions, and the second inequality is proved.
$$X_t(\omega) : \mathcal{S} \to \mathbb{R} \ (\text{or } \mathbb{R}^n),$$
where $\sigma(\mathcal{B}(I) \times \sigma(\mathcal{S}))$ is defined as the smallest sigma algebra that contains all measurable rectangles $A \times B$ with $A \in \mathcal{B}(I)$, $B \in \sigma(\mathcal{S})$, and $m$ denotes Lebesgue measure. Define:
It is an exercise to check that $f_n(t,\omega)$ is also measurable for all $n$, and that $f_n(t,\omega) \to f(t,\omega)$ pointwise on $I \times \mathcal{S}$. Also, since bounded by $n$ and supported on $[0,n]$ in the $t$-domain, $f_n(t,\omega)$ is integrable on $I \times \mathcal{S}$ relative to the product measure $m \times \mu$ constructed in book 1's chapter 7, where $m$ is Lebesgue measure on $\mathbb{R}$.
Then by Fubini's theorem (corollary 5.20, book 5), $f_n(\cdot,\omega)$ is $m$-integrable for all $\omega$, and so by definition $\mathcal{B}(I)$-measurable for all $\omega$. Applying book 5's
Now that the integrals in 3.31 are better understood and seen to be well-defined, we need an additional measure-theoretic result before proving this inequality.
Proposition 3.41 (A type of Radon-Nikodým derivative: $\frac{d\mu_{\langle M,N\rangle_s}}{d\mu_{|\langle M,N\rangle|_s}}$) Let $M_t, N_t$ be continuous $L_2$-bounded martingales on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$ with $M_0 = N_0 = 0$, and define the signed measure $\mu_{\langle M,N\rangle_t}$ and associated Borel measure $\mu_{|\langle M,N\rangle|_t}$ as above, recalling remark 3.30 that both are parametrized by $\omega \in \mathcal{S}$. Then for each $\omega$ there exists a Borel measurable function $h(t) \equiv h_\omega(t)$ with $|h(t)| = 1$ for all $t$, so that for all $A \in \mathcal{B}(\mathbb{R})$ and all Borel measurable functions $f(t)$:
$$\int_A h(s)f(s)\, d|\langle M,N\rangle|_s = \int_A f(s)\, d\langle M,N\rangle_s,$$
Proof. Define:
$$\mu^+ = \frac{1}{2}\big(\mu_{|\langle M,N\rangle|_s} + \mu_{\langle M,N\rangle_s}\big), \qquad \mu^- = \frac{1}{2}\big(\mu_{|\langle M,N\rangle|_s} - \mu_{\langle M,N\rangle_s}\big),$$
$$-\mu_{|\langle M,N\rangle|_s}[(a,b]] \le \mu_{\langle M,N\rangle_s}[(a,b]] \le \mu_{|\langle M,N\rangle|_s}[(a,b]]. \tag{1}$$
where $T_a^b(c)$ is the total variation of $c(s)$ on $[a,b]$. Since $c(s)$ is a bounded variation function, 3.36 and then proposition 5.7 of book 1 obtain:
$$T_a^b(c) \le |\langle M,N\rangle|_b - |\langle M,N\rangle|_a = \mu_{|\langle M,N\rangle|_s}[(a,b]].$$
So (1) is proved.
Now $\mu_{|\langle M,N\rangle|_s} = \mu^+ + \mu^-$ by (1), and as a sum of measures it follows that if $\mu_{|\langle M,N\rangle|_s}[A] = 0$ for $A \in \mathcal{B}(\mathbb{R})$ then $\mu^+[A] = \mu^-[A] = 0$. In the terminology of book 5's definition 7.3, both $\mu^+$ and $\mu^-$ are absolutely continuous with respect to $\mu_{|\langle M,N\rangle|_s}$, denoted $\mu^+ \ll \mu_{|\langle M,N\rangle|_s}$ and $\mu^- \ll \mu_{|\langle M,N\rangle|_s}$.
The Radon-Nikodým theorem of book 5's corollary 7.24 now assures the existence of nonnegative Borel measurable functions $h^{\pm}(s)$ so that for all $A \in \mathcal{B}(\mathbb{R})$ and Borel measurable $f(t)$:
$$\int_A h^{\pm}(s)f(s)\, d\mu_{|\langle M,N\rangle|_s} = \int_A f(s)\, d\mu^{\pm}.$$
Thus 3.41 is satisfied with $h(s) = h^+(s) - h^-(s)$ if it can be proved that for all Borel measurable $f(t)$:
$$\int_A f(s)\, d\mu_{\langle M,N\rangle_s} = \int_A f(s)\, d\mu^+ - \int_A f(s)\, d\mu^-.$$
But since $\mu_{\langle M,N\rangle_s} = \mu^+ - \mu^-$, this integral identity follows for simple functions $f(s)$ by definition 2.3 of book 5, and for general measurable functions by approximation (that book's definition 2.37, and then propositions 1.18 and 2.21).
To prove that $|h| = 1$ as defined, let $A_{(a,b)} = \{a < h < b\}$. Note that $A_{(a,b)} \in \mathcal{B}(\mathbb{R})$ (why?). Then with $f = 1$ and $A = A_{(a,b)}$, 3.41 obtains:
$$a\,\mu_{|\langle M,N\rangle|_s}\big[A_{(a,b)}\big] \le \mu_{\langle M,N\rangle_s}\big[A_{(a,b)}\big] = \int_{A_{(a,b)}} f(s)\, d\mu_{\langle M,N\rangle_s} \le b\,\mu_{|\langle M,N\rangle|_s}\big[A_{(a,b)}\big].$$
If $a > 1$ or $b < -1$, then since (1) applies to all Borel sets as noted above, it follows that $\mu_{|\langle M,N\rangle|_s}\big[A_{(a,b)}\big] = 0$. Thus $\mu_{|\langle M,N\rangle|_s}\big[1 < |h|\big] = 0$.
Now for any disjoint measurable partition $\{E_j\}$ of $A_{(-1,1)} = \{|h| < 1\}$, let $f = 1$ in 3.41 as above and apply corollary 2.49 of book 5:
$$\sum_j \big|\mu_{\langle M,N\rangle_s}[E_j]\big| = \sum_j \left|\int_{E_j} d\mu_{\langle M,N\rangle_s}\right| \le \sum_j \int_{E_j} |h|\, d\mu_{|\langle M,N\rangle|_s} < \int_{A_{(-1,1)}} d\mu_{|\langle M,N\rangle|_s} = \mu_{|\langle M,N\rangle|_s}\big[A_{(-1,1)}\big]. \tag{2}$$
Taking the supremum over all such partitions to obtain $\big|\mu_{\langle M,N\rangle_t}\big|\big[A_{(-1,1)}\big]$, the total variation measure associated with $\mu_{\langle M,N\rangle_t}$, and applying 3.40:
With this pre-work done, we are now ready for the final result. Some of the details are assigned as exercises.
$$\langle M\rangle_s^t \equiv \langle M\rangle_t - \langle M\rangle_s.$$
Defining $\langle M,N\rangle_s^t$ analogously, it follows from book 7's propositions 6.12 and 6.30 that for all real $r$:
$$\big|\langle M,N\rangle_s^t\big| \le \big(\langle M\rangle_s^t\big)^{1/2}\big(\langle N\rangle_s^t\big)^{1/2}, \quad \mu\text{-a.e.}, \tag{1}$$
recalling that quadratic variation and covariation processes are well defined up to indistinguishability (definition 5.10, book 7).
Thus, that (2) is valid for simple $L_2$-processes $\mu$-a.e. assures it is similarly valid for bounded measurable processes for all $t < \infty$.
Now with $h(t) \equiv h_\omega(t)$ of proposition 3.41:
$$\int_0^t v(s,\omega)w(s,\omega)\, d\langle M,N\rangle_s = \int_0^t v(s,\omega)w(s,\omega)h(s)\, d|\langle M,N\rangle|_s,$$
and thus since $|h(t)| = 1$ it follows that for all bounded measurable processes:
$$\left|\int_0^t v(s,\omega)w(s,\omega)h(s)\, d|\langle M,N\rangle|_s\right| \le \left(\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\right)^{1/2}\left(\int_0^t w^2(s,\omega)\, d\langle N\rangle_s\right)^{1/2}.$$
1. Given two measurable simple processes $v(t,\omega)$ and $w(t,\omega)$, provide the details that these can be redefined on a common $t$-partition. For applications below, repeat this exercise for adapted simple processes, recalling lemma 2.25.
$$\mathcal{A}' \equiv \{(a,b]\times A \mid -\infty \le a \le b \le \infty,\ A \in \sigma(\mathcal{S})\}$$
In this and the next section we investigate these and related questions. First, we address the covariation and quadratic variation of stochastic integrals. This first result requires a rather long proof to justify a number of details, and the reader is encouraged to first scan the logic flow before digging into these details.
Proposition 3.45 (Covariation of stochastic integrals) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, if $M_t, N_t \in \mathcal{M}_2$, $v(t,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, and $w(t,\omega) \in H_2^N([0,\infty)\times\mathcal{S})$, then the covariation process (definition 6.25, book 7) of the continuous $L_2$-bounded martingales:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega), \qquad J_t^N(\omega) = \int_0^t w(s,\omega)\, dN_s(\omega),$$
is given by:
$$\big\langle I^M, J^N\big\rangle_t = \int_0^t v(s,\omega)w(s,\omega)\, d\langle M,N\rangle_s, \tag{3.44}$$
where $\langle M,N\rangle_t$ is the covariation process of $M_t, N_t$.
Proof. It is apparent that as defined in 3.44, $\langle I^M, J^N\rangle_0 = 0$, and we leave it as an exercise to confirm that this integral defines an adapted, continuous, bounded variation process (see exercise 3.46).
Thus to prove 3.44 it is sufficient to show that the process:
$$X_t \equiv \int_0^t v(s,\omega)\, dM_s(\omega)\int_0^t w(s,\omega)\, dN_s(\omega) - \int_0^t v(s,\omega)w(s,\omega)\, d\langle M,N\rangle_s,$$
is a continuous local martingale, and then apply proposition 6.29 of book 7.
For this, let $\{v_n(t,\omega)\} \subset H_2^M([0,\infty)\times\mathcal{S})$, $\{w_n(t,\omega)\} \subset H_2^N([0,\infty)\times\mathcal{S})$ be approximating sequences of simple processes of proposition 3.15 that converge to $v(t,\omega)$ in $H_2^M$, and to $w(t,\omega)$ in $H_2^N$, respectively. Defining $X_t^{(n)}$ as $X_t$ but with $v, w$ replaced by $v_n, w_n$, respectively, we first prove that $X_t^{(n)}$ is a continuous martingale for all $n$. By then demonstrating that $X_t^{(n)} \to X_t$ in $L_1$ for all $t$, and applying book 7's proposition 5.32, it will follow that $X_t$ is a martingale. In fact, $E\big[\sup_{t\ge 0}\big|X_t^{(n)} - X_t\big|\big] \to 0$ as $n \to \infty$, and thus as a uniform limit of continuous martingales, $X_t$ so defined is continuous.
1. $X_t^{(n)}$ is a continuous martingale for all $n$: For this step, let $v_n(s,\omega)$ be given as in 2.21 with $0 = t_0 < t_1 < \dots < t_{m_n+1} < \infty$:
$$v_n(t,\omega) \equiv a_{-1}(\omega)\chi_{\{0\}}(t) + \sum_{j=0}^{m_n} a_j(\omega)\chi_{(t_j,t_{j+1}]}(t),$$
where $a_j(\omega)$ is $\sigma_{t_j}(\mathcal{S})$-measurable and $a_{-1}(\omega)$ is $\sigma_0(\mathcal{S})$-measurable by lemma 3.12. By using the common partition points justified in exercise 3.43, we can assume that $w_n(t,\omega)$ is defined identically, but with random variables $\{b_j(\omega)\}_{j=0}^{m_n}$. Now by linearity of all integrals:
$$X_t^{(n)} = \sum_{j,k=0}^{m_n}\left[\int_0^t a_j\chi_{(t_j,t_{j+1}]}\, dM_s\int_0^t b_k\chi_{(t_k,t_{k+1}]}\, dN_s - \int_0^t a_jb_k\chi_{(t_j,t_{j+1}]}\chi_{(t_k,t_{k+1}]}\, d\langle M,N\rangle_s\right],$$
which is finite since $v_n(t,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, $w_n(t,\omega) \in H_2^N([0,\infty)\times\mathcal{S})$.
For the martingale property, assume $t_{j+1} \le t_k$. Then the product on the right in (1) is 0 if $t \le t_k$, which satisfies the martingale property by definition. So assume $t_k < t$, thus combining: $t_{j+1} \le t_k < t$, and then:
and so:
Identifying again the cases $t_j < s < t \le t_{j+1}$, $t_j < s < t_{j+1} \le t$ and $t_{j+1} \le s < t$, we focus on the first two cases since $I_t - I_s = 0$ in the third. In both of these cases $E[B_j \mid \sigma_s] = 0$ by the measurability property since $M_t$ and $N_t$ are martingales. For example:
since $s < t_{j+1}$ by assumption. Now if $\{T_n\}$ is a localizing sequence for $M_t'$, then since $\chi_{T_n>0}M'_{t\wedge T_n}$ is a martingale:
$$E\big[\chi_{T_n>0}\big(M'_{t_{j+1}\wedge t\wedge T_n} - M'_{s\wedge T_n}\big)\,\big|\,\sigma_s\big] = 0,$$
$$\big|M'_{t_{j+1}\wedge t\wedge T_n} - M'_{s\wedge T_n}\big| \le 2\sup_{s\le t_{j+1}\wedge t}|M'_s|.$$
Now integrability of $\sup_{s\le t_{j+1}\wedge t}|M_sN_s|$ follows from the Cauchy-Schwarz inequality, Doob's martingale maximal inequality (proposition 5.91, book 7), and $L_2$-boundedness of $M_t, N_t$:
$$E\left[\sup_{s\le t_{j+1}\wedge t}|M_sN_s|\right] \le E\left[\sup_{s\le t_{j+1}\wedge t}|M_s|\sup_{s\le t_{j+1}\wedge t}|N_s|\right] \le E\left[\Big(\sup_{s\le t_{j+1}\wedge t}|M_s|\Big)^2\right]^{1/2}E\left[\Big(\sup_{s\le t_{j+1}\wedge t}|N_s|\Big)^2\right]^{1/2} \le 4\big\|M_{t_{j+1}\wedge t}\big\|_{L_2}\big\|N_{t_{j+1}\wedge t}\big\|_{L_2} < \infty.$$
The same follows for $\sup_{s\le t_{j+1}\wedge t}|\langle M,N\rangle_s|$ using (2) and book 7's proposition 6.18, recalling that $\langle M\pm N\rangle_s$ are increasing processes:
$$E\left[\sup_{s\le t_{j+1}\wedge t}|\langle M,N\rangle_s|\right] \le \frac{1}{4}E\left[\sup_{s\le t_{j+1}\wedge t}\langle M+N\rangle_s\right] + \frac{1}{4}E\left[\sup_{s\le t_{j+1}\wedge t}\langle M-N\rangle_s\right] = \frac{1}{4}\big\|M_{t_{j+1}\wedge t} + N_{t_{j+1}\wedge t}\big\|^2_{L_2} + \frac{1}{4}\big\|M_{t_{j+1}\wedge t} - N_{t_{j+1}\wedge t}\big\|^2_{L_2}.$$
2. That $X_t^{(n)} \to_{L_1} X_t$, uniformly in $t$: This will be proved in two steps: that each of the terms of $X_t^{(n)} \equiv Y_t^{(n)} + Z_t^{(n)}$ converges uniformly to the respective terms of $X_t \equiv Y_t + Z_t$, and then the final result follows from:
$$= E\Big[\sup_{t\ge 0}\big|Y_t^{(n)} - Y_t\big|\Big] + E\Big[\sup_{t\ge 0}\big|Z_t^{(n)} - Z_t\big|\Big].$$
$$E\left[\sup_{t\ge 0}\left|\int_0^t v_n(s,\omega)\, dM_s\int_0^t w_n(s,\omega)\, dN_s - \int_0^t v(s,\omega)\, dM_s\int_0^t w(s,\omega)\, dN_s\right|\right] \le E\left[\sup_{t\ge 0}\left|\int_0^t [v_n(s,\omega) - v(s,\omega)]\, dM_s\right|\sup_{t\ge 0}\left|\int_0^t w_n(s,\omega)\, dN_s\right|\right] + E\left[\sup_{t\ge 0}\left|\int_0^t v(s,\omega)\, dM_s\right|\sup_{t\ge 0}\left|\int_0^t [w_n(s,\omega) - w(s,\omega)]\, dN_s\right|\right] \equiv I + II.$$
The first term converges to 0 by proposition 3.19, while the second converges to $\|w(s,\omega)\|_{H_2^N}$ by proposition 4.20 of book 5. Similarly:
The third step uses 3.35 and a change back in notation. Applying the Kunita-Watanabe inequality of proposition 3.42:
$$\sup_{t\ge 0}|I| \le \int_0^\infty |v_n - v||w_n|\, |d\langle M,N\rangle_s| \le \left(\int_0^\infty [v_n - v]^2\, d\langle M\rangle_s\right)^{1/2}\left(\int_0^\infty w^2\, d\langle N\rangle_s\right)^{1/2}.$$
Thus by Cauchy-Schwarz:
$$E\left[\sup_{t\ge 0}|I|\right] \le \|v_n - v\|_{H_2^M}\|w\|_{H_2^N} \to 0.$$
Exercise 3.46 Prove that the integral in 3.44 defines an adapted, continuous, bounded variation process. Hint: Given $\omega$, split $u(t,\omega)$ and $v(t,\omega)$ into positive and negative parts (definition 2.36, book 5), and split the signed measure $d\langle M,N\rangle_s$ as in 3.32, then express this integral as a difference of integrals which are continuous increasing functions and recall proposition 3.27, book 3. Continuity of the quadratic variation processes is used for continuity of the two integrals. For adaptedness, approximate $v(s,\omega)$ and $w(s,\omega)$ with simple processes (exercise 3.43), then apply corollary 1.10, book 5.
is given by:
$$\big\langle I^M\big\rangle_t = \int_0^t v^2(s,\omega)\, d\langle M\rangle_s. \tag{3.45}$$
The expectation on the right is also equal to $\mathrm{Var}\big(I_t^M\big)$, the variance of $I_t^M$.
Proof. Since $I_0^M = 0$, book 7's proposition 6.18 obtains that $E\big[\langle I^M\rangle_t\big] = E\big[(I_t^M)^2\big]$, and thus 3.46 follows from 3.45. That this second moment is the variance of $I_t^M$ is a consequence of 4 of proposition 3.27, which proved 3.22 for $I_t^M$.
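The quadratic variation formula 3.45 can be illustrated for $M = B_t$: with $d\langle B\rangle_s = ds$, the realized quadratic variation of a discretized $I_t = \int_0^t v\, dB$ should track $\int_0^t v^2\, ds$. A rough single-path sketch (seed, grid, and integrand are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2 ** 14
dt = 1.0 / n
s = np.arange(n) * dt

dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments on [0, 1]
v = 1.0 + np.sin(3 * s)                     # an arbitrary continuous integrand
dI = v * dB                                 # increments of I_t = int_0^t v dB

realized_qv = np.sum(dI ** 2)               # sum of (I_{t_{i+1}} - I_{t_i})^2
predicted = np.sum(v ** 2) * dt             # int_0^1 v^2 d<B>_s
print(realized_qv, predicted)               # close for fine partitions
```

The agreement tightens as the mesh shrinks, mirroring the $L_2$ convergence of partition sums to the bracket process.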
The expectation on the right is also equal to $\mathrm{Cov}\big(I_t^M, J_t^N\big)$, the covariance of $I_t^M$ and $J_t^N$.
Proof. This expectation equals the covariance since both variates $I_t^M$ and $J_t^N$ have 0 expectation as above. By proposition 3.45 and book 7's proposition 6.29:
$$\mathcal{M}_t \equiv I_t^MJ_t^N - \int_0^t v(s,\omega)w(s,\omega)\, d\langle M,N\rangle_s$$
is a continuous martingale. Thus recalling exercise 5.22 of book 6, let $\sigma_0 \equiv \{\emptyset, \mathcal{S}\}$. Then since $\sigma_0 \subset \sigma_0(\mathcal{S})$ and $\mathcal{M}_0 = 0$, the tower property of book 7's proposition 5.26 obtains:
which is 3.47.
which is 3.48.
Now assume such $X_t$ exists and satisfies 3.49; then from 3.48 it follows that $\langle X, N\rangle_t - \langle I^M, N\rangle_t = 0$ for all $N_t \in \mathcal{M}_2$. Book 7's proposition 6.30 then obtains $\langle X - I^M, N\rangle_t = 0$ for all such $N_t$. Letting $N_t = X_t - I_t^M$, which is an $L_2$-bounded martingale by assumption (for $X_t$) and proposition 3.27 (for $I_t^M$), this proves that $\langle X - I^M, X - I^M\rangle_t = 0$. Book 7's remark 6.26 obtains that $\langle X - I^M\rangle_t = 0$, and thus by the Doob-Meyer decomposition theorem 2 in that book's proposition 6.12, $\big(X - I^M\big)^2$ is a continuous local martingale. If $\{T_n\}$ is an associated localizing sequence then $\big(X - I^M\big)^2_{t\wedge T_n}$ is a martingale for all $n$, and thus $E\big[\big(X - I^M\big)^2_{t\wedge T_n}\big] = 0$ as in the proof of corollary 3.49. This implies $X_t = I_t^M$ almost everywhere for $t \le T_n$ by book 7's corollary 6.14, and letting $T_n \to \infty$ the result follows.
Remark 3.52 (On the Kunita-Watanabe identity) The above proof showed that the Kunita-Watanabe identity in 3.48 is implied by the covariation formula in 3.44, but it is also the case that 3.48 implies 3.44, and thus these are equivalent. Since $J_t^N(\omega) \equiv \int_0^t w(s,\omega)\, dN_s(\omega) \in \mathcal{M}_2$, two applications of 3.48 produce:
$$\big\langle I^M, J^N\big\rangle_t = \int_0^t v(s,\omega)\, d\big\langle M, J^N\big\rangle_s = \int_0^t v(s,\omega)\, d\left[\int_0^s w(r,\omega)\, d\langle M,N\rangle_r\right].$$
The identity in 3.44 then follows from book 5's proposition 3.8 as a result in change of variables.
To see this, first recall by 3.32 that $d\langle M,N\rangle_r$ is a signed measure, which in the notation of Borel measures is defined by $\mu_{\langle M,N\rangle_t} \equiv \frac{1}{4}\mu_{\langle M+N\rangle_t} - \frac{1}{4}\mu_{\langle M-N\rangle_t}$. The integral in square brackets is then defined by:
$$\int_0^s w(r,\omega)\, d\langle M,N\rangle_r \equiv \int_0^s w(r,\omega)\, d\mu_{\langle M,N\rangle_r} = \int_0^s w(r,\omega)\, d\big[\tfrac{1}{4}\mu_{\langle M+N\rangle_r}\big] - \int_0^s w(r,\omega)\, d\big[\tfrac{1}{4}\mu_{\langle M-N\rangle_r}\big].$$
To apply book 5's proposition 3.8 to each of these integrals, we formally need to express:
$$w(r,\omega) = w^+(r,\omega) - w^-(r,\omega),$$
as a difference of nonnegative functions using book 5's definition 2.36. Applying the book 5 result and reassembling obtains:
$$\big\langle I^M, J^N\big\rangle_t = \int_0^t v(s,\omega)w(s,\omega)\, d\big[\tfrac{1}{4}\mu_{\langle M+N\rangle_s}\big] - \int_0^t v(s,\omega)w(s,\omega)\, d\big[\tfrac{1}{4}\mu_{\langle M-N\rangle_s}\big] = \int_0^t v(s,\omega)w(s,\omega)\, d\mu_{\langle M,N\rangle_s} \equiv \int_0^t v(s,\omega)w(s,\omega)\, d\langle M,N\rangle_s,$$
which is 3.44.
The final result of this section addresses the last of the questions at the start of this section. The so-called associative law of stochastic integration states that if $I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega)$ and $I_t^{I^M}(\omega) = \int_0^t w(s,\omega)\, dI_s^M(\omega)$, then:
$$\int_0^t w(s,\omega)\, dI_s^M(\omega) = \int_0^t v(s,\omega)w(s,\omega)\, dM_s(\omega). \tag{*}$$
As part of this proof we must verify that given $v(t,\omega) \in H_2^M$ and $w(t,\omega) \in H_2^{I^M}$, then $v(t,\omega)w(t,\omega) \in H_2^M$ and thus the integral on the right is well defined.
This identity can be compared to the results in chapter 3 of book 5 on change of variables in integrals, and specifically to the result in that book's proposition 3.8. Admittedly, the conclusion in (*) is a deeper result due to the manner in which such stochastic integrals are defined, but this basic notion has been encountered before.
To see the analogy and the current challenge, first note that changing notation:
$$\int_0^t w(s,\omega)\, dI_s^M(\omega) \equiv \int_0^t w(s,\omega)\, d\left[\int_0^s v(r,\omega)\, dM_r(\omega)\right].$$
This same identity applied to $I_t^M \equiv \int_0^t v(s,\omega)\, dM_s(\omega)$ produces:
$$\big\langle I^M, N\big\rangle_t = \int_0^t v(s,\omega)\, d\langle M,N\rangle_s,$$
and then using the same derivation as in remark 3.52 and applying book 5's proposition 3.8 thus obtains for all $N_t \in \mathcal{M}_2$:
$$\langle K, N\rangle_t = \int_0^t w(s,\omega)\, d\big\langle I^M, N\big\rangle_s.$$
Now applying 3.48 to $L_t \equiv \int_0^t w(s,\omega)\, dI_s^M$ yields for all $N_t \in \mathcal{M}_2$:
$$\langle L, N\rangle_t = \int_0^t w(s,\omega)\, d\big\langle I^M, N\big\rangle_s.$$
By the uniqueness result of proposition 3.51 above, we see that for almost all $\omega$, $K_t = L_t$ for all $t$.
and assume without loss of generality (by exercise 3.43) that $w(t,\omega)$ is defined identically, but with random variables $\{b_j(\omega)\}_{j=-1}^m$. By lemma 3.12 it follows that $a_{-1}(\omega)$ and $b_{-1}(\omega)$ are $\sigma_0(\mathcal{S})$-measurable, and $a_j(\omega)$ and $b_j(\omega)$ are $\sigma_{t_j}(\mathcal{S})$-measurable for all $j$.
From 3.2:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega) = \sum_{j=0}^m a_j(\omega)\big[M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\big].$$
Similarly:
$$v(s,\omega)w(s,\omega) = a_{-1}b_{-1}(\omega)\chi_{\{0\}}(s) + \sum_{j=0}^m a_j(\omega)b_j(\omega)\chi_{(t_j,t_{j+1}]}(s),$$
and so:
$$\int_0^t v(s,\omega)w(s,\omega)\, dM_s(\omega) = \sum_{j=0}^m a_j(\omega)b_j(\omega)\big[M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\big]. \tag{**}$$
Finally:
$$\int_0^t w(s,\omega)\, dI_s^M(\omega) = \sum_{j=0}^m b_j(\omega)\big[I_{t_{j+1}\wedge t}^M(\omega) - I_{t_j\wedge t}^M(\omega)\big] = \sum_{j=0}^m b_j(\omega)\sum_{k=0}^m a_k(\omega)\big[M_{t_{k+1}\wedge t_{j+1}\wedge t}(\omega) - M_{t_k\wedge t_{j+1}\wedge t}(\omega)\big] - \sum_{j=0}^m b_j(\omega)\sum_{k=0}^m a_k(\omega)\big[M_{t_{k+1}\wedge t_j\wedge t}(\omega) - M_{t_k\wedge t_j\wedge t}(\omega)\big],$$
and thus the coefficient of $b_j(\omega)$ is $a_j(\omega)\big[M_{t_{j+1}\wedge t}(\omega) - M_{t_j\wedge t}(\omega)\big]$. By substitution, the expression in (**) is obtained.
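The computation above is easy to mirror numerically for step integrands. In this hypothetical sketch, $v$ and $w$ are simple processes on a common partition, and the two sides of (*) agree exactly because the increments of $I^M$ over a partition interval are $a_j\,\Delta M$:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 6                                           # partition 0 = t_0 < ... < t_m (illustrative)
M = np.concatenate(([0.0], np.cumsum(rng.normal(size=m))))  # M at partition points, M_0 = 0
a = rng.normal(size=m)    # v = sum_j a_j 1_{(t_j, t_{j+1}]}
b = rng.normal(size=m)    # w on the same partition, with coefficients b_j

dM = np.diff(M)
I = np.concatenate(([0.0], np.cumsum(a * dM)))  # I^M at the partition points

lhs = np.sum(b * np.diff(I))   # int_0^t w dI^M
rhs = np.sum(a * b * dM)       # int_0^t (vw) dM
print(np.isclose(lhs, rhs))    # True
```

Since `np.diff(I)` is exactly `a * dM`, the identity holds with no discretization error, which is the content of the simple-process case of the associative law.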
To set the stage, recall from proposition 3.15 that if $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, there exist simple processes $\{v_n(t,\omega)\}_{n=1}^\infty \subset H_2^M([0,\infty)\times\mathcal{S})$ so that $v_n(s,\omega) \to v(s,\omega)$ in the $H_2^M$-norm. Then by propositions 3.19 and 3.23, for all $t, t'$ there is convergence both in $L_2(\mathcal{S})$ and in probability:
$$\int_t^{t'} v_n(s,\omega)\, dM_s(\omega) \to \int_t^{t'} v(s,\omega)\, dM_s(\omega).$$
Proposition 3.55 (Riemann sum approximation) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$, $v(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$, and assume $v(s,\omega)$ is also left continuous in $s$ for almost all $\omega$, and locally bounded:
$$|v(s,\omega)| \le K_t < \infty \text{ for } s \le t.$$
Given partitions of $[0,t]$:
$$0 = t_0 < t_1 < \dots < t_{n+1} = t,$$
with mesh $\max_{0\le i\le n}\{t_{i+1} - t_i\} \to 0$, then with the integral defined as $I_t^M(\omega)$ of proposition 3.27 (suppressing $(\omega)$):
$$\sum_{i=0}^n v(t_i,\omega)\big[M_{t_{i+1}} - M_{t_i}\big] \to_{L_2(\mathcal{S})} \int_0^t v(s,\omega)\, dM_s, \tag{3.51}$$
and this implies uniform convergence in probability over $[0,t]$. That is, for all $\epsilon > 0$:
$$\Pr\left[\sup_{r\le t}\left|\sum_{i=0}^n v(t_i,\omega)\big[M_{t_{i+1}\wedge r} - M_{t_i\wedge r}\big] - \int_0^r v(s,\omega)\, dM_s\right| > \epsilon\right] \to 0. \tag{3.53}$$
$$\le K_t^2E\big[\langle M\rangle_t\big] = K_t^2E\big[M_t^2\big] < \infty.$$
Now
$$(v_n(s,\omega) - v(s,\omega))^2 \le 2\big(v_n^2(s,\omega) + v^2(s,\omega)\big),$$
and since $v(s,\omega), v_n(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ it follows that this upper bound is $d\langle M\rangle_s$-integrable, $\mu$-a.e. Also, $v_n(s,\omega) \to v(s,\omega)$ pointwise in $s$, $\mu$-a.e., and so Lebesgue's dominated convergence theorem of book 5's corollary 2.45 applies to conclude that as $n \to \infty$:
$$\int_0^t (v_n(s,\omega) - v(s,\omega))^2\, d\langle M\rangle_s \to 0, \quad \mu\text{-a.e.}$$
But as above:
$$\int_0^t (v_n(s,\omega) - v(s,\omega))^2\, d\langle M\rangle_s \le 2\int_0^t \big(v_n^2(s,\omega) + v^2(s,\omega)\big)\, d\langle M\rangle_s,$$
and $v(s,\omega), v_n(s,\omega) \in H_2^M([0,\infty)\times\mathcal{S})$ assures that this upper bound is $\mu$-integrable. Thus another application of Lebesgue's dominated convergence theorem obtains:
$$E\left[\sup_{r\le t}\big|N_r^{(n)}\big|^2\right] \le 4\int_{\mathcal{S}}\int_0^t (v_n(s,\omega) - v(s,\omega))^2\, d\langle M\rangle_s\, d\mu \to 0,$$
which is 3.53.
Finally, define $n_k$ so that
$$\Pr\left[\sup_{0\le r\le t}\big|N_r^{(n_k)}\big| \ge 2^{-k}\right] \le 2^{-k},$$
and let $A_k \subset \mathcal{S}$ be defined by $A_k \equiv \big\{\omega \mid \sup_{0\le r\le t}\big|N_r^{(n_k)}\big| \ge 2^{-k}\big\}$. Then since $\sum_{k=1}^\infty \mu[A_k] < \infty$, the Borel-Cantelli theorem of proposition 2.6 of book 2 applies to conclude that $\mu[A] = 0$ where $A \equiv \limsup A_k$. So for $\omega \in \tilde A$, the complement of $A$, there are at most finitely many $k$ with $\sup_{0\le r\le t}\big|N_r^{(n_k)}\big| \ge 2^{-k}$, and hence $\big\{N_r^{(n_k)}\big\}_{k=1}^\infty$ converges uniformly for $r \in [0,t]$ on this set of measure 1. As the integrals of simple processes are continuous, this uniform convergence is to a continuous function, which then must agree with $I_t^M(\omega)$ $\mu$-a.e.
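The Riemann sum convergence can be visualized on a single Brownian path. The sketch below is hypothetical (seed and grid sizes are arbitrary); it uses $M = B$ with left-endpoint integrand $v(s,\omega) = B_s$, for which the closed form $\int_0^1 B\, dB = (B_1^2 - 1)/2$ of example 3.56 is available for comparison.

```python
import numpy as np

# Hypothetical single-path illustration of the Riemann sum convergence in 3.51,
# with M = B and left-endpoint integrand v(s, w) = B_s on [0, 1].
rng = np.random.default_rng(1)
n_fine = 2 ** 14
dt = 1.0 / n_fine
dB = rng.normal(0.0, np.sqrt(dt), size=n_fine)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B on the fine grid

def riemann_sum(step):
    """Left-endpoint Riemann sum of int_0^1 B dB on the grid coarsened by `step`."""
    Bi = B[::step]
    return float(np.sum(Bi[:-1] * np.diff(Bi)))

exact = 0.5 * (B[-1] ** 2 - 1.0)             # closed form (B_1^2 - <B>_1)/2, <B>_1 = 1
approx = [riemann_sum(2 ** k) for k in (10, 6, 2, 0)]
print(approx, exact)                         # mesh -> 0 drives the sums toward `exact`
```

On a fixed path the left-endpoint sum equals $\big(B_1^2 - \sum(\Delta B)^2\big)/2$ exactly, so the error is governed entirely by how close the realized quadratic variation is to 1.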
The following example generalizes the formula in 2.35 for Brownian motion, since $\langle B\rangle_t = t$ by corollary 2.10.
Example 3.56 (On $\int_0^t M_s\, dM_s$) Given $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$ also be locally bounded; then $v(s,\omega) = M_s(\omega)$ satisfies the conditions of the above theorem. Given partitions of $[0,t]$ as above:
$$\sum_{i=0}^n M_{t_i}(\omega)\big[M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\big] \to_{L_2(\mathcal{S})} \int_0^t M_s(\omega)\, dM_s(\omega).$$
However,
$$M_{t_{i+1}}^2 - M_{t_i}^2 - \big[M_{t_{i+1}} - M_{t_i}\big]^2 = 2M_{t_i}\big[M_{t_{i+1}} - M_{t_i}\big],$$
$$M_t^2 - \sum_{i=0}^n\big[M_{t_{i+1}} - M_{t_i}\big]^2 \to_{L_2(\mathcal{S})} 2\int_0^t M_s(\omega)\, dM_s(\omega),$$
since $M_0 = 0$.
By book 7's proposition 6.5 (see exercise 3.57):
$$\sum_{i=0}^n\big[M_{t_{i+1}} - M_{t_i}\big]^2 \to_{L_2(\mathcal{S})} \langle M\rangle_t. \tag{*}$$
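The telescoping identity driving the example holds exactly on every partition, as a quick hypothetical check with $M = B$ confirms; the quadratic variation term then settles near $\langle B\rangle_1 = 1$ as the partition refines.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # Brownian path on [0, 1], B_0 = 0

riemann = np.sum(B[:-1] * np.diff(B))        # sum M_{t_i} (M_{t_{i+1}} - M_{t_i})
qv = np.sum(np.diff(B) ** 2)                 # sum (M_{t_{i+1}} - M_{t_i})^2, near <B>_1 = 1

# The telescoping identity of the example, exact on every partition:
identity_gap = riemann - 0.5 * (B[-1] ** 2 - qv)
print(identity_gap, qv)                      # gap is 0 up to rounding; qv near 1
```

Letting the mesh shrink sends `qv` to 1 in $L_2$, recovering $\int_0^1 B\, dB = (B_1^2 - 1)/2$.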
Exercise 3.57 Justify the application of book 7's proposition 6.5 above, even though it was not assumed that $M_t$ was bounded, but only locally bounded. Hint: Define $N_t = M_t$ for $t \le T$ and $N_t = M_T$ for $t > T$. Then $N_t$ is a martingale (book 7, proposition 5.84), and bounded (why?). Thus in (*), we can conclude by the referenced result that convergence occurs to $\langle N\rangle_t$. Why is $\langle M\rangle_t = \langle N\rangle_t$ for $t \le T$? Consider stopping each at $T$ and recall book 7's corollary 6.16.
and thus $M_t^{(n)} \in \mathcal{M}_2$ if $M_0 = 0$. It is a martingale by definition 5.70 of book 7, since $\{T_n\}$ is a localizing sequence, and is $L_2$-bounded since $E\left[\left(\chi_{T_n>0} M_t^{T_n}\right)^2\right] \le K_n^2$. Hence for any predictable process $v(s,\omega) \in H_2^M([0,\infty)\times S)$, the integral $\int_0^t v(s,\omega)\, dM_s^{(n)}$ is well defined by the previous section.
Remark 3.60 (On $\chi_{T_n>0}$) Recall from book 7's remark 5.71 that the presence of $\chi_{T_n>0}$ in the definition of a local martingale $M_t$ is to weaken the integrability requirement on $M_0$. Specifically, if we required $M_t^{T_n}$ to be a martingale in this definition rather than $\chi_{T_n>0} M_t^{T_n}$, this would require integrability of $M_t^{T_n}$ for all $t$ and thus integrability of $M_0^{T_n} = M_0$. With the given definition, we instead require only integrability of $\chi_{T_n>0} M_0$.
In the current context, we will continue the assumption for local martingales that was used for $L_2$-bounded martingales $M_t \in \mathcal{M}_2$, that $M_0 = 0$, and thus integrability is not an issue. We will therefore discontinue the use of $\chi_{T_n>0}$, and simply define a local martingale $M_t$ with $M_0 = 0$ as a process such that there exists a localizing sequence such that $M_t^{T_n}$ is a martingale. This is justified by the following: since
$$M_t^T = \chi_{T>0} M_t^T + \chi_{T=0} M_t^T = \chi_{T>0} M_t^T,$$
it follows that $M_t^T$ is integrable and/or adapted for all $t$ if and only if $\chi_{T>0} M_t^T$ is integrable and/or adapted for all $t$. Further, for $t > s$ it follows that $E\left[M_t^T \,\middle|\, \sigma_s(S)\right] = M_s^T$ if and only if $E\left[\chi_{T>0} M_t^T \,\middle|\, \sigma_s(S)\right] = \chi_{T>0} M_s^T$.
Remark 3.63 (On stopping R-S and L-S integrals) Note that if stochastic integrals were defined pathwise as Riemann-Stieltjes or Lebesgue-Stieltjes integrals, then the validity of 3.55 is reasonably transparent.
If $t \le T$, then all three integrals are simply equal to $\int_0^t v(s,\omega)\, dM_s(\omega)$. This is clear for the first two, while for the third integral, since $s \le t$ it follows that $M_s^T = M_s$ and thus $dM_s^T = dM_s$.
For $t > T$, all three integrals equal $\int_0^T v(s,\omega)\, dM_s(\omega)$. This is again clear for the first two, while for the third, note that $M_s^T = M_T$ for $s > T$ implies that $\int_T^t v(s,\omega)\, dM_s^T(\omega) = 0$.
What makes this proposition a challenge is the verification that this intuition generalizes to the present case where integrals are defined in the $L_2$-sense.
Proposition 3.64 (On stopping stochastic integrals) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_2$, $v(s,\omega) \in H_2^M([0,\infty)\times S)$, and $T$ a stopping time. Then $\mu$-a.e.:
$$\int_0^{t\wedge T} v(s,\omega)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega) = \int_0^t v(s,\omega)\, dM_s^T(\omega), \tag{3.55}$$
for all $t$.
Proof. We first address existence of the integrals in 3.55 relative to the development of the prior section. Since $\int_0^t v(s,\omega)\, dM_s(\omega)$ is a continuous, $L_2$-bounded martingale by the previous section, and the first integral equals this martingale stopped at $T$, this integral is again continuous and a martingale by Doob's optional stopping theorem of book 7's proposition 5.84. In fact it is again an $L_2$-bounded martingale by Doob's martingale maximal inequality of book 7's proposition 5.91 and 2.39:
$$E\left[\left(\int_0^{t\wedge T} v(s,\omega)\, dM_s(\omega)\right)^2\right] \le E\left[\sup_{r\le t}\left(\int_0^r v(s,\omega)\, dM_s(\omega)\right)^2\right] \le 4E\left[\left(\int_0^t v(s,\omega)\, dM_s(\omega)\right)^2\right].$$
The second integral is well defined because $v(s,\omega)\chi_{(0,T]} \in H_2^M([0,\infty)\times S)$. First, this process is predictable as a product (proposition 1.5, book 5) of predictable $v(s,\omega)$ and $\chi_{(0,T]}$, which is predictable since it is left continuous and adapted (corollary 5.17, book 7). The $H_2$-bound is satisfied since:
$$\left\|\chi_{(0,T]} v(t,\omega)\right\|^2_{H_2^M([0,\infty)\times S)} = E\left[\int_0^\infty \chi_{(0,T]}\, v^2(s,\omega)\, d\langle M\rangle_s\right] \le E\left[\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s\right].$$
For the third integral:
$$\left\|v(s,\omega)\right\|^2_{H_2^{M^T}([0,\infty)\times S)} = E\left[\int_0^\infty v^2(s,\omega)\, d\langle M^T\rangle_s\right] \le \left\|v(s,\omega)\right\|^2_{H_2^M([0,\infty)\times S)}. \tag{*}$$
Turning to 3.55, let $\{v_n(s,\omega)\} \subset H_2^M([0,\infty)\times S)$ be a sequence of simple processes as in proposition 3.15 with $\|v(s,\omega) - v_n(s,\omega)\|_{H_2^M([0,\infty)\times S)} \to 0$. It is an exercise to prove that 3.55 is true for all such $v_n(s,\omega)$, so it is then enough to prove that for each of the three expressions above, the integrals of $v_n(s,\omega)$ converge to the respective integrals of $v(s,\omega)$ in $L_2(S)$. Also, we prove this result for $t < \infty$, since this implies the result for $t = \infty$, noting that all such integrals are finite by the above discussion.
1. For each $n$, $\int_0^t \left(v(s,\omega) - v_n(s,\omega)\right) dM_s$ is a continuous martingale which is $L_2$-bounded by Itô's $M$-isometry in 2.39, and thus also uniformly integrable (proposition 5.99, book 7). By Doob's optional stopping theorem (proposition 5.117, book 7), arbitrarily defining the stopping time $T' \equiv t\wedge T$:
$$\int_0^{t\wedge T} \left(v(s,\omega) - v_n(s,\omega)\right) dM_s = E\left[\int_0^t \left(v(s,\omega) - v_n(s,\omega)\right) dM_s \,\middle|\, \sigma_{t\wedge T}(S)\right].$$
This converges to zero as $n\to\infty$ by Itô's $M$-isometry in 2.39, and so $\int_0^{t\wedge T} v_n(s,\omega)\, dM_s \to \int_0^{t\wedge T} v(s,\omega)\, dM_s$ in $L_2(S)$.
2. For the second representation of the integral, since:

Exercise 3.65 Prove 3.55 for simple processes as defined in 2.21 using 3.1 and 3.2.
Corollary 3.66 Given the assumptions of proposition 3.64:
$$\int_0^{t\wedge T} v(s,\omega)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s^T(\omega). \tag{3.56}$$
Proof. This follows from two applications of 3.55.
3.3.1 $\mathcal{M}_{loc}$-Integrators and $H_{2,loc}^M([0,\infty)\times S)$-Integrands
With the aid of proposition 3.64, stochastic integrals with respect to continuous local martingales can now be defined, and for a larger class of integrands than contemplated for $L_2$-bounded martingale integrators. We begin with a definition of the class of local martingale integrators.
Notation 3.68 In some references which deal with more general spaces of local martingales, the associated spaces of integrators might be denoted $\mathcal{M}_{c,loc}$ to emphasize that these local martingales are continuous. In general, one always assumes $M_0 = 0$ for stochastic integrators.
The stopped process $M_t^{T_n}$ is then a continuous martingale, and thus is adapted and integrable for each $t$. Also $M^{T_n}$ satisfies 3.10 since for all $t$:
$$\left\|M_t^{T_n}\right\|_{L_2} \equiv \left(\int \left(M_t^{T_n}\right)^2 d\mu\right)^{1/2} \le n.$$
Definition 3.70 ($H_{2,loc}^M([0,\infty)\times S)$) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ and $M \in \mathcal{M}_{loc}$, the space $H_{2,loc}^M([0,\infty)\times S)$, and sometimes $H_{2,loc}^M$ when there is no confusion, is the collection of real valued functions $v(t,\omega)$ defined on $[0,\infty)\times S$ so that:
for some $t$, then this integral is finite over every subinterval of $[0,t]$. Hence if $A_t \subset S$ denotes the collection of $\omega$ with finite integral of $v^2(s,\omega)$ over $[0,t]$, then this alternative statement implies that $\mu[A_t] = 1$ for all $t$. But $\{A_t\}$ is a decreasing nested collection of sets with $A_t \subset A_s$ for $t \ge s$, so $A_\infty \equiv \bigcap A_t = \bigcap A_n$ with integer $n$, and hence $\mu[A_\infty] = 1$.
2. $H_{2,loc}^M([0,\infty)\times S)$ contains all continuous adapted processes. This follows because such processes are predictable by corollary 5.17 of book 7, and Lebesgue-Stieltjes integrability of continuous functions is proved in proposition 2.54 of book 5.

$$H_2^M \subsetneq H_{2,loc}^M.$$
First, all martingales are local martingales by book 7's corollary 5.85. Inclusion then follows because the measurability requirement of being predictable is the same, and the constraint in 3.11:
$$E\left[\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s\right] < \infty,$$
implies 3.57. But 3.57 does not even imply the weaker constraint that $\int_0^\infty v^2(s,\omega)\, d\langle M\rangle_s < \infty$ for some $\omega$.
So once the current integration theory is developed, the above collection of integrands "expands" the space allowed in definition 3.9 for continuous $L_2$-bounded martingales $M \in \mathcal{M}_2$. But there will be a small price to pay for so expanding this space of integrands. That is, while $\int_0^t v(s,\omega)\, dM_s(\omega)$ is definable for $t < \infty$, in general we cannot extend this definition to $t = \infty$ as was possible given the square integrability condition of 3.11.
A stopping time result below along with proposition 3.64 above will justify the applicability of the $L_2$-bounded martingale integration results to the current more general set-up. But first a technical result is needed which has a simple and intuitive conclusion, but a somewhat lengthy justification.
Proposition 3.72 (Stopping a Lebesgue-Stieltjes integral) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, then for any real $t \ge 0$:
$$T_t' \equiv \inf\left\{r \ge 0 \,\middle|\, \int_0^r v^2(s,\omega)\, d\langle M\rangle_s \ge t\right\},$$
is a stopping time.
Proof. Let $X_t$ be a process defined on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$ by:
$$X_t = \begin{cases} \int_0^t v^2(s,\omega)\, d\langle M\rangle_s, & \omega \in A_\infty, \\ 0, & \omega \notin A_\infty. \end{cases}$$
Next define:
$$v_n^2(s,\omega) = (j-1)2^{-n}, \quad (s,\omega) \in A_j^{(n)},\ 1 \le j \le N,$$
and so the $\sigma_t(S)$-measurability of this integral follows from the $\sigma_t(S)$-measurability of $\int_0^t \chi_A(s,\omega)\, d\langle M\rangle_s$ for all $A \in \sigma\left(\mathcal{B}([0,t])\times\sigma_t(S)\right)$. To prove this last statement we use the monotone class theorem of book 5's proposition 1.30.
Let $\mathcal{C}$ be the collection of sets $A \subset [0,t]\times S$ for which $\int_0^t \chi_A(s,\omega)\, d\langle M\rangle_s$ is $\sigma_t(S)$-measurable. First, $\mathcal{C}$ contains all rectangular sets of the form $(a,b]\times B$ where $(a,b] \subset [0,t]$ and $B \in \sigma_t(S)$, since then:
$$\int_0^t \chi_A(s,\omega)\, d\langle M\rangle_s = \chi_B(\omega)\left[\langle M\rangle_{b\wedge t} - \langle M\rangle_a\right].$$
$\bigcap_{n=1}^\infty A_n$, and in either case $\chi_{A_n}(s,\omega) \to \chi_A(s,\omega)$ pointwise. Thus by the integrability of $\chi_{[0,t]\times S}$, Lebesgue's dominated convergence theorem (proposition 2.43, book 5) obtains that for all $\omega$:
$$\int_0^t \chi_{A_n}(s,\omega)\, d\langle M\rangle_s \to \int_0^t \chi_A(s,\omega)\, d\langle M\rangle_s.$$
Exercise 3.73 Prove that $X_t$ above is continuous for all $\omega$. Hint: Recall the proof for the Lebesgue integral in proposition 3.33 of book 3, but using Lebesgue's monotone convergence theorem of book 5's proposition 2.21.
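The continuity and monotonicity asserted for $X_t$ in exercise 3.73 can be seen numerically: the cumulative integral of a nonnegative integrand against an increasing integrator is nondecreasing, with increments vanishing with the mesh. Below is a small sketch (function and variable names are hypothetical), taking $\langle B\rangle_s = s$ for concreteness so that $X_t = \int_0^t v^2(s)\, ds$ for a deterministic integrand:

```python
import numpy as np

def cumulative_variance_integral(v, t=1.0, n_steps=10000):
    # X_t = int_0^t v(s)^2 ds on a grid, via a left-point Riemann sum;
    # with <B>_s = s this is the integrator path of proposition 3.72.
    s = np.linspace(0.0, t, n_steps + 1)
    integrand = v(s[:-1]) ** 2               # left-point values of v^2 >= 0
    X = np.concatenate([[0.0], np.cumsum(integrand * np.diff(s))])
    return s, X

s, X = cumulative_variance_integral(lambda s: np.sin(5 * s) + 2.0)
assert np.all(np.diff(X) >= 0)               # path is nondecreasing
assert np.max(np.diff(X)) < 1e-2             # increments vanish with the mesh
```

Since the integrand $v^2$ is bounded on $[0,t]$, each increment of $X$ is at most $\sup v^2$ times the mesh, which is the discrete analogue of the continuity argument sketched in the hint.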
In the process of proving this needed result on the stopping time $T_r'$, an important conclusion was derived about integrals with respect to $\langle M\rangle_t$, which we codify as a corollary. This result will be generalized below in the section, Stochastic Integrals w.r.t. Continuous B.V. Processes.
Corollary 3.74 Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, let $Y_t$ be the process defined $\mu$-a.e. by:
$$Y_t \equiv \int_0^t v^2(s,\omega)\, d\langle M\rangle_s,$$
and, for $u(s,\omega)$ satisfying the analogous integral constraint, let:
$$Z_t \equiv \int_0^t u(s,\omega)\, d\langle M\rangle_s.$$
Then each is an adapted process, and for almost all $\omega$ is continuous in $t$.
Proof. The statement on $Y_t$ is a restatement of that proved in proposition 3.72. For $Z_t$, if $\omega$ is in the set of probability 1 on which this integral constraint is satisfied, split $u(s,\omega) = u^+(s,\omega) - u^-(s,\omega)$ as in book 5's definition 2.36, where $u^+(s,\omega)$ and $u^-(s,\omega)$ are nonnegative. Since $|u(s,\omega)| = u^+(s,\omega) + u^-(s,\omega)$, the integral constraint above now applies to both functions. With apparent notation, split $Z_t = Z_t^+ - Z_t^-$ and note that the above proof for $Y_t$ can now be used for each of $Z_t^+$ and $Z_t^-$ separately, and the result follows.
With the result of proposition 3.72, we can now use stopping times to reduce the integrators and integrands of this section to fit the context of the prior section on integration with respect to continuous $L_2$-bounded martingales. Once this is done, we will only need to check that this reduction can be accomplished in a consistent way so as to form the basis of the definition of a generalized stochastic integral.
$$\left\|M_t^{R_n}\right\|_{L_2} \le n,$$
and so $M^{R_n} \in \mathcal{M}_2$.
Next, $X_t \equiv \chi_{(0,R_n]}(t)$ is left continuous for almost all $\omega$ since $R_n > 0$ $\mu$-a.e. Thus $X_0 \equiv 0$, and this implies $X_0^{-1}(B) \in \{\emptyset, S\} \subset \sigma_0(S)$ for all Borel sets $B$, and for fixed $t > 0$:

That $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$ now follows by definition of the Lebesgue-Stieltjes integral. By book 7's corollary 6.16, $\langle M^{R_n}\rangle_s = \langle M\rangle_{s\wedge R_n}$, and thus $\langle M^{R_n}\rangle_s = \langle M\rangle_s$ for $s \le t\wedge R_n$. Hence:
$$\int_0^t v^2(s,\omega)\chi_{(0,R_n]}(s)\, d\langle M^{R_n}\rangle_s = \int_0^{t\wedge R_n} v^2(s,\omega)\, d\langle M\rangle_s \le n < \infty.$$
Exercise 3.76 Prove that if $\{R_n'\}$ is any sequence of stopping times with $R_n' \le R_n$ for all $n$, then such a sequence also obtains $M^{R_n'} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n']}(t) \in H_2^{M^{R_n'}}$. Thus if $R_n' \to \infty$ with probability 1, then it is also a localizing sequence for $M$ by book 7's proposition 5.77.
$$I_t^{M^{R_n}}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega),$$
where the integrals on the right are understood to be the continuous versions identified in 3.30. In other words, with apparent notation:
$$I_t^M[v(s,\omega)] \equiv \lim_{n\to\infty} I_t^{M^{R_n}}\left[v(s,\omega)\chi_{(0,R_n]}(s)\right].$$
Once these definitional details are settled and a couple of technical results proved, proposition 3.83 will then prove that as defined above, $I_t^M(\omega)$ is a continuous local martingale.
$$\int_0^{t\wedge Q_n} v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n\wedge Q_n]}(s)\, dM_s^{R_n\wedge Q_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,Q_n]}(s)\, dM_s^{Q_n}(\omega).$$
Hence for $t \le Q_n$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,Q_n]}(s)\, dM_s^{Q_n}(\omega).$$
but in the case of Brownian motion we have additional insight into the $\mu$-a.e. boundedness of this localizing sequence.
If we apply definition 3.77:
$$\int_0^\infty v(s,\omega)\, dB_s(\omega) \equiv \lim_{n\to\infty}\int_0^\infty \chi_{(0,R_n]}(s)\, dB_s(\omega) = \lim_{n\to\infty} B_{R_n(\omega)}(\omega).$$
Note that the integral valuation is justified with a simple process approximation of $\chi_{(0,R_n]}(s)$, then using 3.1 and continuity of $B_t$. Since $R_n \to \infty$ almost certainly, this is equivalent to $\lim_{t\to\infty} B_t(\omega)$. By book 7's corollary 2.71, $\liminf_{t\to\infty} B_t(\omega) = -\infty$ and $\limsup_{t\to\infty} B_t(\omega) = \infty$ with probability 1, so there is no hope of giving this integral meaning as a random variable.
For a general continuous local martingale $M_t$ much of the above example applies, other than the conclusion that with probability 1, $R_n < \infty$ for all $n$.
Corollary 3.81 Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, and let $\{R_n\}$ be the sequence of stopping times defined in proposition 3.75. If $\int_0^t v(s,\omega)\, dM_s(\omega)$ is defined as in 3.58 for $t < \infty$, then for any $m$:
$$\int_0^{t\wedge R_m} v(s,\omega)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_m]}(s)\, dM_s^{R_m}(\omega), \quad t < \infty. \tag{3.62}$$
The next corollary generalizes proposition 3.64 and its corollary 3.66 to integrals with respect to local martingales.

Corollary 3.82 (On stopping stochastic integrals) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$, $v(t,\omega) \in H_{2,loc}^M$, and $T$ be a stopping time. Then for $t < \infty$:
$$\int_0^{t\wedge T} v(s,\omega)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega) = \int_0^t v(s,\omega)\, dM_s^T(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s^T(\omega). \tag{3.63}$$
3.3 Integrals w.r.t. Continuous Local Martingales
The identities in 3.55 and 3.56 can be applied to the integrals on the right to derive equality of these three integrals. In detail:
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n\wedge T}(\omega) = \int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\, d\left(M^T\right)_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\, d\left(M^T\right)_s^{R_n}(\omega).$$
For the first equality, 3.61 obtains that for $t \le T\wedge R_n$:
$$\int_0^t v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s^T(\omega).$$
Letting $n\to\infty$ and applying 3.58 yields for $t \le T$:
$$\int_0^t v(s,\omega)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s^T(\omega),$$
and so for all $t$:
$$\int_0^{t\wedge T} v(s,\omega)\, dM_s(\omega) = \int_0^{t\wedge T} v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega).$$
The last step of the proof is to show that for all $t$:
$$\int_0^{t\wedge T} v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega) = \int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega),$$
which is apparent by definition for $t \le T$, so assume that $t \ge T$. By 3.58, then 3.55:
$$\int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega) = \lim_{n\to\infty}\int_0^t v(s,\omega)\chi_{(0,T]}(s)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \lim_{n\to\infty}\int_0^{t\wedge T} v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega) = \lim_{n\to\infty}\int_0^{T} v(s,\omega)\chi_{(0,R_n]}(s)\, dM_s^{R_n}(\omega).$$
Hence $\int_0^t v(s,\omega)\chi_{(0,T]}(s)\, dM_s(\omega)$ is constant for $t \ge T$ and the proof is complete.
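At the level of discretized sums, the stopping identities of corollary 3.82 are transparent: truncating the time horizon at $T$, killing the integrand after $T$, or stopping the integrator at $T$ all produce the same sum pathwise. A minimal sketch (all names hypothetical), using a Brownian integrator and a deterministic stopping time for simplicity:

```python
import numpy as np

def stopped_sums(t=1.0, T=0.6, n_steps=1000, seed=1):
    # Compare three discretizations of a stopped stochastic integral
    # with integrand v(s) = s against one simulated Brownian path.
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    s = np.arange(n_steps) * dt                  # left endpoints of the grid
    v = s
    # (i) integrate only up to t ^ T
    up_to_T = np.sum(v[s < T] * dB[s < T])
    # (ii) kill the integrand after T: v * chi_{(0,T]}
    killed = np.sum(v * (s < T) * dB)
    # (iii) stop the integrator: increments of B^T vanish after T
    dB_T = np.where(s < T, dB, 0.0)
    stopped = np.sum(v * dB_T)
    return up_to_T, killed, stopped

a, b, c = stopped_sums()
```

All three sums involve exactly the same nonzero terms, so they agree up to floating-point summation order; the substance of the proposition is that this agreement survives the $L_2$-limit defining the stochastic integral.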
The final result is that the stochastic integral defined by 3.58 is a continuous local martingale. See also corollary 3.86 below.

Proposition 3.83 (Continuity of stochastic integral) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, and let $\{R_n\}$ be the sequence of stopping times defined in proposition 3.75. Then as defined in 3.58:
$$I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega),$$
is a continuous local martingale.

and thus, since $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$, these stopped processes are martingales by proposition 3.27.

Proposition 3.79 proves that $I_t^M(\omega)$ can also be defined with such $\{R_n'\}$. That $I_t^M(\omega)$ is also reduced by $\{R_n'\}$ is then book 7's proposition 5.77.
Proposition 3.85 (Properties of the stochastic integral) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M_t \in \mathcal{M}_{loc}$ be a continuous local martingale with $M_0 = 0$, and $v(s,\omega), w(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$. Let $0 \le t < t' < \infty$.

2. For constant $a \in \mathbb{R}$, $av(s,\omega) + w(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$ and for almost all $\omega \in S$:
$$\int_t^{t'} \left[av(s,\omega) + w(s,\omega)\right] dM_s(\omega) = a\int_t^{t'} v(s,\omega)\, dM_s(\omega) + \int_t^{t'} w(s,\omega)\, dM_s(\omega).$$
4. Itô $M$-Isometry and Mean equal to zero: If $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$ also satisfies for given $t' \le \infty$:
$$E\left[\int_0^{t'} v^2(s,\omega)\, d\langle M\rangle_s\right] < \infty,$$
then the mean is zero as in 3.64, and:
$$E\left[\left(\int_0^t v(s,\omega)\, dM_s(\omega)\right)^2\right] = E\left[\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\right]. \tag{3.65}$$
Thus if $v(s,\omega) \in H_2^M([0,\infty)\times S)$, then 3.64 and 3.65 are valid for all $t \le \infty$.
where for the last integral we use that $\left(M^{R_n}\right)_s^T = \left(M^T\right)_s^{R_n}$. It is an exercise to check that $v(s,\omega)\chi_{(0,T]}(s) \in H_{2,loc}^M([0,\infty)\times S)$, and thus we can take limits and apply definition 3.77 to finish the proof of 3.
For 4, again since $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{(0,R_n]}(t) \in H_2^{M^{R_n}}$, 3.63 and 3.23 obtain that for $t \le t'$ and all $n$:
$$\int_S\left(\int_0^t v(s,\omega)\chi_{(0,R_n]}\, dM_s^{R_n}(\omega)\right)^2 d\mu = \int_S\int_0^t v^2(s,\omega)\chi_{(0,R_n]}\, d\langle M^{R_n}\rangle_s\, d\mu. \tag{1}$$
For the integral on the right in (1), $\langle M^{R_n}\rangle_s = \langle M\rangle_s^{R_n} = \langle M\rangle_s$ for $s \le R_n$ by book 7's corollary 6.16, and $d\langle M^{R_n}\rangle_s = 0$ for $s > R_n$, so:
$$\int_S\int_0^t v^2(s,\omega)\chi_{(0,R_n]}\, d\langle M^{R_n}\rangle_s\, d\mu = \int_S\int_0^t v^2(s,\omega)\chi_{(0,R_n]}\, d\langle M\rangle_s\, d\mu,$$
and thus:
$$\int_S\int_0^t v^2(s,\omega)\chi_{(0,R_n]}\, d\langle M^{R_n}\rangle_s\, d\mu \to \int_S\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\, d\mu. \tag{2}$$
For the integral on the left in (1), 3.63 and 3.58 obtain $\mu$-a.e.:
$$\left(\int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega)\right)^2 = \left(\int_0^t v(s,\omega)\chi_{(0,R_n]}\, dM_s^{R_n}(\omega)\right)^2 \to \left(\int_0^t v(s,\omega)\, dM_s(\omega)\right)^2. \tag{3}$$
Now by proposition 3.83, $N_t \equiv \int_0^t v(s,\omega)\, dM_s(\omega)$ is a local martingale with the same localizing sequence as $M_t$, and thus $N_{t\wedge R_n}$ is a martingale. By Doob's martingale maximal inequality of book 7's proposition 5.91:
$$E\left[N_{t\wedge R_n}^2\right] \le E\left[\sup_{s\le t}\left(N_{s\wedge R_n}\right)^2\right] \le 4E\left[N_{t\wedge R_n}^2\right] = 4\int_S\left(\int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega)\right)^2 d\mu.$$
As $R_n \to \infty$ $\mu$-a.e., Fatou's lemma of book 5's corollary 2.19 and the integrability assumption on $v^2(s,\omega)$ obtain:
$$E\left[(N_t)^2\right] \le 4\liminf_{n\to\infty}\int_S\left(\int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega)\right)^2 d\mu = 4\int_S\int_0^t v^2(s,\omega)\, d\langle M\rangle_s\, d\mu < \infty. \tag{4}$$
Thus $N_{t\wedge R_n}^2 \to N_t^2$ $\mu$-a.e., and $N_{t\wedge R_n}^2 \le \sup_{s\le t} N_s^2$ for all $n$, with this dominating function integrable. Lebesgue's dominated convergence theorem (corollary 2.45, book 5) and (3) then obtain:
$$E\left[\left(\int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega)\right)^2\right] \to E\left[\left(\int_0^t v(s,\omega)\, dM_s(\omega)\right)^2\right]. \tag{5}$$
Combining this with (1) and (2) completes the proof of 3.65 for $t \le t'$.
For 3.64, it follows from 3.63 and 4 of proposition 3.27 that for each $n$:
$$E\left[\int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega)\right] = E\left[\int_0^t v(s,\omega)\chi_{[0,R_n]}\, dM_s^{R_n}(\omega)\right] = 0. \tag{6}$$
Corollary 3.86 (When $I_t^M(\omega)$ is $L_2$-bounded) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M \in \mathcal{M}_{loc}$ be a continuous local martingale with $M_0 = 0$, and $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$ that also satisfies for given $t' \le \infty$:
$$E\left[\int_0^{t'} v^2(s,\omega)\, d\langle M\rangle_s\right] < \infty.$$

$$E\left[\sup_n \left|I_{t\wedge R_n}^M(\omega)\right|\right] \le E[N_t] < \infty.$$
Thus $I_t^M(\omega)$ is a martingale for $t \le t'$ by book 7's proposition 5.88, and is $L_2$-bounded by 3.65.
$$\left|\left\langle M^{T_n}, N^{T_n}\right\rangle\right|_s = \left|\langle M, N\rangle\right|_{s\wedge T_n},$$
The following result generalizes proposition 3.26, and is useful for splitting integrators below.

Then by 3.63:
$$\int_0^{t\wedge R_n} v(s,\omega)\, d\left[M_s + N_s\right](\omega) = \int_0^{t\wedge R_n} v(s,\omega)\, dM_s(\omega) + \int_0^{t\wedge R_n} v(s,\omega)\, dN_s(\omega),$$
1. The covariation of $I_t^M(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega)$ and $J_t^N(\omega) = \int_0^t w(s,\omega)\, dN_s(\omega)$ is given by:
$$\left\langle I^M, J^N\right\rangle_t = \int_0^t v(s,\omega)w(s,\omega)\, d\langle M, N\rangle_s, \tag{3.68}$$
For the third integral, by splitting the signed measure into positive and negative parts (recall remark 3.52), we obtain pointwise:
$$\int_0^{t\wedge R_n} v(s,\omega)w(s,\omega)\, d\langle M, N\rangle_s = \int_0^t v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(s)\, d\langle M, N\rangle_s.$$

$$\langle M, N\rangle_s = \left\langle M^{R_n}, N^{R_n}\right\rangle_s, \quad s \le R_n,$$
The next result generalizes the associative law of proposition 3.53 to the current context, and this will be further generalized in propositions 3.95 and 4.22 below.
Thus $\mu$-a.e.:
$$\int_0^t w^2(s,\omega)\, d\langle I^M\rangle_s < \infty \ \text{ for all } t < \infty,$$
if and only if $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)w^2(s,\omega)\, d\langle M\rangle_s < \infty \ \text{ for all } t < \infty.$$
In other words, $w(t,\omega) \in H_{2,loc}^{I^M}$ if and only if $v(t,\omega)w(t,\omega) \in H_{2,loc}^M$, and so the integral on the right in 3.70 is well defined.
With $\{R_n^M\}$ and $\{R_n^{I^M}\}$ defined as in proposition 3.75 in terms of $M$ and $I^M$, let $R_n \equiv R_n^M \wedge R_n^{I^M}$. Then since $R_n \to \infty$ $\mu$-a.e. and both $R_n \le R_n^M$ and $R_n \le R_n^{I^M}$, $R_n$ can play the role of $R_n'$ of proposition 3.79 for both the $dM_s$ and $dI_s^M(\omega)$ integrals, and for reducing both $M$ and $I^M$. For notational simplicity, let $I \equiv I^M$.
By remark 3.84 and proposition 3.75, $M^{R_n}, I^{R_n} \in \mathcal{M}_2$, $w(t,\omega)\chi_{[0,R_n]}(t) \in H_2^{I^{R_n}}$ and $v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(s) \in H_2^{M^{R_n}}$. Thus by proposition 3.53, $\mu$-a.e.:
$$\int_0^t w(s,\omega)\chi_{[0,R_n]}(s)\, dI_s^{R_n}(\omega) = \int_0^t v(s,\omega)w(s,\omega)\chi_{[0,R_n]}(s)\, dM_s^{R_n}(\omega),$$
The second summation is 0 since, for example, if $t_i < t_j$, the tower and measurability properties of conditional expectations (proposition 5.26, book 6) obtain:

Working with the first summation, the tower property yields $E\left[N_{t_i}N_{t_{i-1}}\right] = E\left[N_{t_{i-1}}^2\right]$ and thus $E\left[\left(N_{t_i} - N_{t_{i-1}}\right)^2\right] = E\left[N_{t_i}^2\right] - E\left[N_{t_{i-1}}^2\right]$. Hence since $N_0 = 0$:
$$E\left[Q_t^2(M,N)\right] \le \max_i E\left[\left(M_{t_i} - M_{t_{i-1}}\right)^2\right]\sum_{i=1}^m E\left[\left(N_{t_i} - N_{t_{i-1}}\right)^2\right] = \max_i\left[E\left[M_{t_i}^2\right] - E\left[M_{t_{i-1}}^2\right]\right]E\left[N_t^2\right].$$
Now $v_j(s,\omega) \in H_{2,loc}^{M^{(j)}}([0,\infty)\times S)$ is predictable by definition, and thus is measurable by book 7's proposition 5.19. Book 5's proposition 5.19 then obtains that $v_j(s,\omega)$ is Borel measurable in $s$ for each $\omega$, and it then follows from 3.69 and that book's proposition 3.8 that:
$$\int_0^t w^2(s,\omega)\, d\langle I^M\rangle_s = \sum_{j=1}^n \int_0^t v_j^2(s,\omega)w^2(s,\omega)\, d\langle M^{(j)}\rangle_s.$$
Thus $\mu$-a.e.:
$$\int_0^t w^2(s,\omega)\, d\langle I^M\rangle_s < \infty \ \text{ for all } t < \infty,$$
In other words, $w(t,\omega) \in H_{2,loc}^{I^M}$ if and only if $v_j(t,\omega)w(t,\omega) \in H_{2,loc}^{M^{(j)}}([0,\infty)\times S)$ for all $j$, and thus the integrals on the right in 3.72 are well defined.
Now if we define $I_t^{M^{(j)}}(\omega) \equiv \int_0^t v_j(s,\omega)\, dM_s^{(j)}(\omega)$, then $I_t^{M^{(j)}}(\omega) \in \mathcal{M}_{loc}$ by proposition 3.83. Further, the conclusion that $v_j(t,\omega)w(t,\omega) \in H_{2,loc}^{M^{(j)}}([0,\infty)\times S)$ is equivalent to $w(t,\omega) \in H_{2,loc}^{I^{M^{(j)}}}([0,\infty)\times S)$, since $v_j(s,\omega)$ is Borel measurable in $s$ for each $\omega$ as noted above, and it again follows from 3.69 and book 5's proposition 3.8 that:
$$\int_0^t w^2(s,\omega)\, d\langle I^{M^{(j)}}\rangle_s = \int_0^t v_j^2(s,\omega)w^2(s,\omega)\, d\langle M^{(j)}\rangle_s.$$
Thus $v_j(t,\omega)w(t,\omega) \in H_{2,loc}^{M^{(j)}}([0,\infty)\times S)$ for all $j$ is equivalent to $w(t,\omega) \in \bigcap_{j=1}^n H_{2,loc}^{I^{M^{(j)}}}([0,\infty)\times S)$.
Finally, since $I_t^M(\omega) = \sum_{j=1}^n I_t^{M^{(j)}}(\omega)$, 3.67 and then 3.70 obtain $\mu$-a.e.:
$$\int_0^t w(s,\omega)\, dI_s^M(\omega) = \sum_{j=1}^n \int_0^t w(s,\omega)\, dI_s^{M^{(j)}}(\omega) = \sum_{j=1}^n \int_0^t v_j(s,\omega)w(s,\omega)\, dM_s^{(j)}(\omega) \ \text{ for all } t,$$
which is 3.72.
More generally, $\int_0^t v_n(s,\omega)\, dM_s(\omega)$ converges to 0 in probability, uniformly in $t$ over every compact set.
Proof. Let $\{R_n\}$ be the localizing sequence of proposition 3.75 so that $M^{R_n} \in \mathcal{M}_2$ and $v(t,\omega)\chi_{[0,R_n]}(t) \in H_2^{M^{R_n}}$, and let $\{R_n'\}$ be defined by $R_n' = \inf\{t\,|\,\langle M\rangle_t \ge n\}$. By continuity of $\langle M\rangle_t$ (proposition 6.12, book 7), $R_n'$ is a sequence of stopping times (that book's proposition 5.60), and $S_n \equiv R_n \wedge R_n'$ (that book's proposition 5.77) is another localizing sequence for $M$. Tracing back through these stopping time definitions we now have $S_n \to \infty$ $\mu$-a.e., and:
$$\left|M^{S_n}\right| \le n, \quad \int_0^{S_n} v^2(s,\omega)\, d\langle M\rangle_s \le n, \quad \langle M\rangle_{S_n} \le n.$$
Let $T, \epsilon > 0$ and $\delta > 0$ be given, and choose $N$ so that $\Pr[S_N \le T] < \delta$. Note that $N$ exists, since $\Pr[S_n \le T] \ge \delta$ for all $n$ contradicts $S_n \to \infty$ $\mu$-a.e.
Recall that $M^{S_N} \in \mathcal{M}_2$ and $v(s,\omega)\chi_{[0,S_N]}(s) \in H_2^{M^{S_N}}$ by construction, and from $|v_n(s,\omega)| \le v(s,\omega)$ it follows that $v_n(s,\omega)\chi_{[0,S_N]}(s) \in H_2^{M^{S_N}}$ for all $n$, and thus $\int_0^t v_n(s,\omega)\chi_{[0,S_N]}(s)\, dM_s^{S_N}(\omega)$ is a martingale for all $n$ by proposition 3.27. Doob's martingale maximal inequality of book 7's proposition 5.46, and 3.23 then obtain:
$$\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\chi_{[0,S_N]}(s)\, dM_s^{S_N}(\omega)\right| > \epsilon\right] \le \frac{1}{\epsilon^2}E\left[\left(\int_0^T v_n(s,\omega)\chi_{[0,S_N]}(s)\, dM_s^{S_N}(\omega)\right)^2\right] = \frac{1}{\epsilon^2}E\left[\int_0^T v_n^2(s,\omega)\chi_{[0,S_N]}(s)\, d\langle M^{S_N}\rangle_s(\omega)\right].$$
Now $\mu$-a.e., $v_n^2(s,\omega)\chi_{[0,S_N]}(s) \le v^2(s,\omega)\chi_{[0,S_N]}(s)$ and this upper bound is $d\langle M^{S_N}\rangle_s$-integrable since $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$. Thus by Lebesgue's dominated convergence theorem of book 5's corollary 2.45, as $n\to\infty$:
$$\int_0^T v_n^2(s,\omega)\chi_{[0,S_N]}(s)\, d\langle M^{S_N}\rangle_s(\omega) \to 0, \quad \mu\text{-a.e.}$$
Then since:
$$\int_0^T v_n^2(s,\omega)\chi_{[0,S_N]}(s)\, d\langle M^{S_N}\rangle_s(\omega) \le \int_0^T v^2(s,\omega)\chi_{[0,S_N]}(s)\, d\langle M^{S_N}\rangle_s(\omega), \quad \mu\text{-a.e.},$$
and since $\delta > 0$ is arbitrary, this limit superior equals 0. As this sequence of probabilities is nonnegative, we obtain:
$$\lim_{n\to\infty}\Pr\left[\sup_{t\in[0,T]}\left|\int_0^t v_n(s,\omega)\, dM_s(\omega)\right| > \epsilon\right] = 0,$$
which is 3.73. Since every compact set is contained in an interval $[0,T]$, this proves uniform convergence in probability over all compact sets.
Proof. Let $v_n(s,\omega) \equiv w_n(s,\omega) - w(s,\omega)$ and $v(s,\omega) \equiv u(s,\omega) + |w(s,\omega)|$ in proposition 3.96. Then for all fixed $T < \infty$:
$$\sup_{t\in[0,T]}\left|\int_0^t w_n(s,\omega)\, dM_s(\omega) - \int_0^t w(s,\omega)\, dM_s(\omega)\right| \to_P 0,$$
Proposition 3.98 (Riemann sum approximation) Given $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $M \in \mathcal{M}_{loc}$ and $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$, and assume $v(s,\omega)$ is continuous in $s$ for almost all $\omega$. Then given partitions $\Pi_n$ of $[0,t]$:

for all $m$. Denoting the Riemann summation in (∗) by $S_{n,m}(\omega)$ and the integral there by $I_m(\omega)$, with $S_n(\omega)$ and $I(\omega)$ denoting the corresponding terms in 3.75, note that $S_{n,m}(\omega) = S_n(\omega)$ and $I_m(\omega) = I(\omega)$ on $A_m$. Thus for any $m$:
Remark 3.99 The above result is stated under the hypothesis that $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$ is continuous ($\mu$-a.e.). This is one of two options for the conclusion in 3.75. In order to apply proposition 3.55 we require $v(t,\omega)\chi_{(0,R_m]}(t) \in H_2^{M^{R_m}}$, as given by proposition 3.75, and that this process be left continuous and locally bounded for all $m$. Thus one option is to assume that $v(s,\omega) \in H_{2,loc}^M([0,\infty)\times S)$ is left continuous and locally bounded. We leave it as an exercise to complete the details for this result.
The approach taken here assumes that $v(s,\omega)$ is continuous. Then using 4 of book 7's proposition 5.60, the localizing sequence $\{R_m\}$ could be modified to $\{R_m'\}$ so that $v(t,\omega)\chi_{(0,R_m']}(t)$ is locally bounded for all $m$.
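Proposition 3.98's Riemann sum approximation can be illustrated numerically: for a continuous integrand, left-endpoint Riemann sums over a coarse partition stay close to the integral computed on a much finer grid, for most paths. A sketch (all names hypothetical, not from the text), taking $M = B$ and $v(s,\omega) = \cos(B_s)$, which is continuous in $s$:

```python
import numpy as np

def riemann_vs_fine(t=1.0, n_fine=2**14, coarse=2**6, n_paths=500, seed=2):
    # Left-point Riemann sums sum_i v(t_i)(B_{t_{i+1}} - B_{t_i}) on a coarse
    # partition of mesh t/coarse, versus the same sums on a fine grid
    # (serving as the benchmark for the stochastic integral).
    rng = np.random.default_rng(seed)
    dt = t / n_fine
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])
    v = np.cos(B[:, :-1])                        # v(s,w) = cos(B_s), continuous
    fine = np.sum(v * dB, axis=1)
    step = n_fine // coarse
    B_coarse = B[:, ::step]
    v_coarse = np.cos(B_coarse[:, :-1])
    coarse_sum = np.sum(v_coarse * np.diff(B_coarse, axis=1), axis=1)
    # fraction of paths where the coarse sum is far from the benchmark
    return float(np.mean(np.abs(coarse_sum - fine) > 0.25))

frac = riemann_vs_fine()
```

Since $|\cos'| \le 1$, the per-path $L_2$ error of the coarse sums is bounded by the square root of the mesh, so the fraction of "bad" paths is small, which is the convergence-in-probability assertion of the proposition in miniature.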
Chapter 4
Integrals w.r.t. Continuous Semimartingales

$$X_t = X_0 + M_t + F_t, \tag{4.1}$$
Recall that by "of bounded variation" is meant in the sense of book 7's definition 2.84 with $p = 1$. In other words, $v_1(F) < \infty$ $\mu$-a.e. for any compact interval $[a,b]$, where $v_1(F)$ denotes the "weak" total variation (see definition 4.6 below).
For the purpose of using such processes as integrators, we will also assume $X_0 \equiv 0$.

Definition 4.2 ($\mathcal{M}_{loc}^S$) Given the filtered probability space $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, let $\mathcal{M}_{loc}^S$ denote the collection of continuous semimartingales with $X_0 = 0$. In other words, if $X_t \in \mathcal{M}_{loc}^S$, then:
$$X_t = M_t + F_t.$$
Remark 4.3 Note that for each interval $[0,n]$ there exists $A_n \subset S$ with $\mu[A_n] = 1$ so that for $\omega \in A_n$, $v_1(F) < \infty$ on $[0,n]$. Since $A_{n+1} \subset A_n$, define $A \equiv \bigcap_n A_n$. Then it follows that $\mu[A] = 1$ (proposition 5.26, book 1), and for $\omega \in A$, $v_1(F) < \infty$ on all compact sets $[a,b]$. In other words, there is a single exceptional set of measure zero outside of which a process of bounded variation has bounded variation on all compact sets.
The following result shows that for $X_t \in \mathcal{M}_{loc}^S$, the decomposition into $M$ and $F$ is unique with probability 1. This is also true for more general semimartingales as in 4.1.
If both integrals exist, this definition is well defined because the decomposition of $X$ into $M$ and $F$ is unique. However, we will need to reconcile two apparently disparate integration theories. For appropriately defined integrands $v(s,\omega)$:

(1) The process $Y_t^{(1)}(\omega) \equiv \int_0^t v(s,\omega)\, dM_s$ is not defined pathwise, outside of the special case of proposition 3.98 where $v(s,\omega)$ is continuous and this integral is definable $\mu$-a.e. But we know that this process is in general a continuous local martingale for appropriate $v(s,\omega)$;
(2) The process $Y_t^{(2)}(\omega) \equiv \int_0^t v(s,\omega)\, dF_s$ is defined pathwise as a Lebesgue-Stieltjes integral, but outside of the special case of corollary 3.74 where $F_s = \langle M\rangle_s$ with $M \in \mathcal{M}_{loc}$ and $v(t,\omega) \in H_{2,loc}^M$, we do not yet even

As we cannot in the general case make the definition of $Y_t^{(1)}(\omega)$ path-based, the logical approach is to investigate the measurability properties of the process $Y_t^{(2)}(\omega)$. We discuss this below, but first return to considerations regarding the integrand $v(s,\omega)$.
1. $v(t,\omega)$ is predictable,
2. For all $t$:
$$\int_0^t v^2(s,\omega)\, d\langle X\rangle_s < \infty, \quad \mu\text{-a.e.}$$
But by book 7's proposition 6.24, which proved that $\langle X\rangle_t = \langle M\rangle_t$, this would obtain that $H_{2,loc}^X = H_{2,loc}^M$. Hence the integration constraint in 2 would only reflect integrability relative to the local martingale $M$, and would provide no assurance as to the integrability of such $v(t,\omega)$ relative to the bounded variation process $F$. Thus if $v(t,\omega) \in H_{2,loc}^X$ so defined, we can only be assured that the integral $\int_0^t v(s,\omega)\, dM_s$ exists, and some other constraint would perhaps be needed to assure that $\int_0^t v(s,\omega)\, dF_s$ exists.
1. $v(t,\omega)$ is predictable,
2. That $\mu$-a.e.:
$$\int_0^t v^2(s,\omega)\, d\langle M\rangle_s < \infty \ \text{ and } \ \int_0^t v(s,\omega)\, dF_s < \infty \ \text{ for all } t.$$
Then $\int_0^t v(s,\omega)\, dX_s$ as in 4.2 is well defined as a sum of a stochastic integral and a Lebesgue-Stieltjes integral.
Not surprisingly, the integral with respect to the local martingale $M$ will be straightforward to justify based on earlier results. What is now required are some additional results for stochastic integrals defined with bounded variation integrators, and this is studied next.
Consistent with book 7's definition 6.2 for the quadratic variation process, we have the following definition. This definition reflects the "weak" notion of total variation by restricting the supremum to partitions with mesh size $\to 0$, while the strong version uses the supremum over all partitions.

with mesh size $\max_i\{t_i - t_{i-1}\}$, the total variation process is defined pathwise by:
$$V_F(t,\omega) \equiv \sup_{\text{mesh}\to 0}\sum_{i=0}^n \left|F(t_{i+1},\omega) - F(t_i,\omega)\right|. \tag{4.3}$$
Notation 4.7 (On $V_F(t,\omega)$) The reader may have noticed that we introduced a small notational inconsistency in this definition. In book 7's definition 2.84, $v_1(F)$ denoted weak variation, and $V_1(F)$ the strong variation of a function. In the current context we want to define the total variation process of $F$, and since this is a weak variation we ought to have denoted this process by $v_F(t,\omega)$. However in the context of this book, $v_F(t,\omega)$ looks like an integrand process, and so we have opted for the above notation.
where the superscripts denote the number of partition points in the respective summations. Taking a supremum on the left over all such partitions obtains:

By the triangle inequality this expression remains valid with $s$ added to this partition if it is not already included. Splitting the summation at $s$, this yields with the same notation:
$$V_F^{(n)}([r,t],\omega) \le V_F^{(n)}([r,s],\omega) + V_F([s,t],\omega).$$
Taking a supremum on the right, this and the first part then imply that:
Expressed this way, it is clear that with each increment in $m$, all intervals are bisected except the last one, which is split unequally or not at all, depending on $t$. The notation below is perhaps too general, but avoids the necessity of splitting the summation for the special case of the last term.
Proposition 4.9 ($V_F(t,\omega)$ for càdlàg $F(t,\omega)$) If $F(t,\omega)$ is a càdlàg process of bounded variation on $(S,\sigma(S),\sigma_t(S),\mu)_{u.c.}$, then $V_F(t,\omega)$ defined in 4.3 can be calculated by:
$$V_F(t,\omega) = \sup_m \sum_{j=1}^{\lfloor 2^m t\rfloor + 1}\left|F(t\wedge j/2^m,\omega) - F(t\wedge (j-1)/2^m,\omega)\right|, \quad \mu\text{-a.e.} \tag{4.5}$$
Proof. Temporarily denote the expression on the right in 4.5 by $W_F(t,\omega)$, and that in 4.3 as $V_F(t,\omega)$. Then $W_F(t,\omega) \le V_F(t,\omega)$ since the partitions underlying $V_F$ as defined in 4.3 include those used for $W_F$. To prove the opposite inequality, and hence that $W_F(t,\omega) = V_F(t,\omega)$, we prove that any partition used in the $V_F$-calculation can be dominated by partitions of the type used in the $W_F$-calculation.
To this end, let a partition $\{t_0, t_1, ..., t_n\}$ be given, where as above $t_0 = 0$ and $t_n = t$. Given $m$ and $i$ with $1 \le i \le n$, assume that $\{t\wedge j/2^m\} \subset (t_{i-1}, t_i]$ for $j \in I_i$. Hence $I_i$ is the empty set, or a sequential set of integers with minimum and maximum values denoted $j_i$ and $k_i$, respectively, with $j_i = k_i$ possible. Analogously, define $j_{n+1}$ as the smallest $j$ such that $j/2^m > t$. Now for $m$ large no set can be empty since the $t_i$-partition is fixed, and $\{t\wedge j/2^m\}_{j,m}$ is dense in $[0,t]$. By the triangle inequality for $1 \le i \le n$, now simplifying notation:
$$\left|F(t_i) - F(t_{i-1})\right| \le \sum_{j=j_i+1}^{k_i}\left|F(t\wedge j/2^m) - F(t\wedge (j-1)/2^m)\right| + \left|F(t_i) - F(t\wedge (k_i/2^m))\right| + \left|F(t_{i-1}) - F(t\wedge (j_i/2^m))\right|.$$
Let $V_F^{(n)}$ denote the summation with the above $t_i$ partition, and $W_F^{(m)}$ the summation as in 4.5 for given $m$. It then follows by addition that:
$$V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \sum_{i=1}^n\left|F(t_i) - F(t\wedge k_i/2^m)\right| + \sum_{i=1}^n\left|F(t_{i-1}) - F(t\wedge j_i/2^m)\right| - \sum_{i=1}^n\left|F(t\wedge j_{i+1}/2^m) - F(t\wedge k_i/2^m)\right|.$$
Now because $t_n = t$, it follows that with $j_{n+1} = \lfloor 2^m t\rfloor + 1$:
$$\left|F(t_n) - F(t\wedge (k_n/2^m))\right| = \left|F(t\wedge j_{n+1}/2^m) - F(t\wedge k_n/2^m)\right|,$$
and thus the $n$th terms of the first and third summations cancel. In addition, $|F(t_0) - F(t\wedge (j_1/2^m))| = 0$ since $t_0 = j_1 = 0$, and so:
$$V_F^{(n)}(t,\omega) \le W_F^{(m)}(t,\omega) + \sum_{i=1}^{n-1}\left|F(t_i) - F(t\wedge k_i/2^m)\right| + \sum_{i=2}^n\left|F(t_{i-1}) - F(t\wedge j_i/2^m)\right| - \sum_{i=1}^{n-1}\left|F(t\wedge j_{i+1}/2^m) - F(t\wedge k_i/2^m)\right|.$$
Rewriting:
(n) (m)
Xn 1
VF (t; !) WF (t; !) + jF (ti ) F (ki =2m )j
i=1
Xn 1
+ (jF (ti ) F (ji+1 =2m )j jF (ji+1 =2m ) F (ki =2m((*))
)j) :
i=1
4.1 INTEGRALS W.R.T. CONTINUOUS B.V. PROCESSES
Remark 4.10 Note that the above result can be generalized somewhat, in the sense that being continuous from the right and with left limits was not explicitly needed. More generally, the requirement on F that suffices for this conclusion is that at every t, F is continuous from one side and has a limit from the other. This follows because the critical step in the above proof was to be able to prove that if t_i ∈ [k_i/2^m, j_{i+1}/2^m) for all m, and by construction both k_i/2^m → t_i and j_{i+1}/2^m → t_i, then for any ε > 0 there is an m so that:

$$\left|F(t_i) - F(k_i/2^m)\right| + \left|F(t_i) - F(j_{i+1}/2^m)\right| - \left|F(j_{i+1}/2^m) - F(k_i/2^m)\right| < \varepsilon.$$

Requiring left and right limits at t_i and one-sided continuity is adequate for the given result.

That said, this result does not generalize to other types of discontinuities. For example, if F(t) is defined on [0,1] so that F(t) = 0 for t ≠ r and F(r) = 1, where r is a given irrational, then W_F(t) = 0 yet V_F(t) = 2 for t > r. If instead F(r) = 1 for all irrationals r, then again W_F(t) = 0, and now V_F(t) = ∞.
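The dyadic-partition sums on the right side of 4.5 are easy to compute directly, and a small numerical sketch makes the càdlàg case concrete. The step function below (jumps at 1/3 and 2/3, both non-dyadic points) is an illustrative choice, not from the text:

```python
import math

def dyadic_variation(F, t, m):
    """Sum |F(t ∧ j/2^m) - F(t ∧ (j-1)/2^m)| over j = 1, ..., floor(2^m t) + 1,
    i.e. the inner summation of 4.5 before taking the supremum over m."""
    count = int(math.floor(2**m * t)) + 1
    return sum(abs(F(min(t, j / 2**m)) - F(min(t, (j - 1) / 2**m)))
               for j in range(1, count + 1))

# A cadlag step function on [0, 1]: jump of +1 at 1/3 and jump of -2 at 2/3,
# so its total variation on [0, 1] is |+1| + |-2| = 3.
def F(s):
    return (1.0 if s >= 1/3 else 0.0) - (2.0 if s >= 2/3 else 0.0)

print(dyadic_variation(F, 1.0, 20))  # -> 3.0
```

Because the jumps sit strictly between dyadic points and F is right continuous, the dyadic sums capture the full variation once the grid is fine enough.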
While these observations are perhaps interesting, we focus on càdlàg F_t(ω). By book 1's proposition 5.7, every Borel measure on R has an associated increasing càdlàg distribution function, while by that book's proposition 5.23, every increasing right continuous function gives rise to a Borel measure. Since every bounded variation function can be expressed as a difference of increasing functions by book 3's proposition 3.27, and we will want to use such functions as integrators, it makes sense that we must at least demand right continuity of a bounded variation F_t(ω). The existence of left limits then allows the useful technical result expressed in the above proposition.
The next result uses the representation in 4.5 to show that when F_t(ω) is adapted, and continuous or càdlàg, the total variation process V_F(t,ω) is adapted, and again continuous or càdlàg, respectively.

Remark 4.11 (On "continuous or càdlàg") In a number of results below the expression "continuous or càdlàg" is used despite sounding somewhat redundant. It would seem to be enough to simply say that the process is càdlàg, since this includes continuous. However, the point of this awkward construction is to connect a property of the process F_t(ω), whether "continuous or càdlàg," to the respective property of another process that depends on F_t(ω). For example, in the next result it is the bounded variation process V_F(t,ω).

Proposition 4.12 (Properties of V_F(t,ω)) If F_t(ω) is a continuous or càdlàg adapted process of bounded variation on (S, σ(S), σ_t(S), μ)_{u.c.}, then V_F(t,ω) is adapted, and continuous or càdlàg, respectively.
Proof. Denoting by $V_F^{(m)}(t,\omega)$ the summation in 4.5, each $V_F^{(m)}(t,\omega)$ is adapted since F_t(ω) is adapted. As measurability is preserved in a supremum of countably many measurable functions by book 5's proposition 1.9, it follows that V_F(t,ω) is adapted. In addition, the triangle inequality obtains that $V_F^{(m)}(t,\omega) \le V_F^{(m+1)}(t,\omega)$, and since the supremum is finite we conclude that μ-a.e.:

$$V_F(t,\omega) = \lim_{m\to\infty} V_F^{(m)}(t,\omega), \quad \text{all } t.$$

It is an exercise to verify that μ-a.e., V_F(t,ω) is increasing in t, since $V_F^{(m)}(t,\omega)$ is increasing for all m. By proposition 5.8 of book 1, every monotonic function has left and right limits at every point, so it is left to prove that μ-a.e., right continuity of F at t produces right continuity of V_F at t, and similarly for left continuity.
Arguing by contradiction, assume that there is a t ≥ 0, a Δt > 0, and ε > 0 so that for all s with t < s < t + Δt (recalling 4.4):

$$V_F([t,s],\omega) = V_F(s,\omega) - V_F(t,\omega) \ge 2\varepsilon > 0,$$

for ω ∈ A with μ(A) > 0. Fixing any such s, since V_F([t,s],ω) ≥ 2ε there is a partition of [t,s] with t = t_0 < t_1 < ⋯ < t_n = s so that:

$$\sum_{i=1}^{n} |F(t_i,\omega) - F(t_{i-1},\omega)| \ge \varepsilon.$$

Since F is right continuous at t, there is a t' with t < t' < t_1 so that |F(t',ω) − F(t,ω)| ≤ ε/2. Adding t' to the partition obtains:

$$\sum_{i=2}^{n} |F(t_i,\omega) - F(t_{i-1},\omega)| + |F(t_1,\omega) - F(t',\omega)| \ge \varepsilon/2,$$

and hence V_F([t',s],ω) ≥ ε/2. But since t' < s it follows by assumption that V_F([t,t'],ω) ≥ 2ε, so we can now repeat this construction over the interval [t,t'], finding t'' with t < t'' < t' and V_F([t'',t'],ω) ≥ ε/2 and still V_F([t,t''],ω) ≥ 2ε. Iterating this procedure and applying 4.4 obtains that V_F([t,s],ω) is unbounded for ω ∈ A with μ(A) > 0, a contradiction.

For left continuity we repeat the construction, with details left as an exercise.
Remark 4.13 (On σ(B(R+) × σ(S))) As noted above from book 7's notation 5.9, the sigma algebra σ(B(R+) × σ(S)) in the statement of the following result is defined as the smallest sigma algebra that contains the semi-algebra A' of measurable rectangles E × F with E ∈ B(R+) and F ∈ σ(S). This sigma algebra was denoted σ₀(B(R+) × σ(S)) in book 5.
$$I_t(\omega) = \left(V_F(t,\omega) + F_t(\omega)\right)/2, \qquad J_t(\omega) = \left(V_F(t,\omega) - F_t(\omega)\right)/2.$$

Then I_t(ω) and J_t(ω) are adapted, and continuous or càdlàg by proposition 4.12. In addition, I_t(ω) and J_t(ω) are increasing. For example, using 4.4 with t' > t:

$$2(I_{t'} - I_t) = V_F([t,t']) + [F_{t'} - F_t] \ge 0,$$

since V_F([t,t']) ≥ |F_{t'} − F_t| by definition.
Conversely, F_t(ω) so defined is of bounded variation, with the supremum below over all partitions of [0,t]:

$$V_F(t,\omega) \equiv \sup \sum_{i=0}^{n-1} \left|F(t_{i+1},\omega) - F(t_i,\omega)\right| \le \sup \sum_{i=0}^{n-1} \left|I(t_{i+1},\omega) - I(t_i,\omega)\right| + \sup \sum_{i=0}^{n-1} \left|J(t_{i+1},\omega) - J(t_i,\omega)\right|,$$

since this is true for simple functions by definition, and thus true for all measurable functions by the book 5 development of the integral specified above. This is also true for the integral on the right of (*), and this proves 4.7.
When I_t(ω) and J_t(ω) are continuous or càdlàg in t, the integral is also continuous or càdlàg, respectively. If |v(s,ω)| ≤ K, then by book 5's proposition 2.40 applied to the separate Lebesgue-Stieltjes integrals:

$$\left| \int_0^{t+\Delta t} v(s,\omega)\, dF_s(\omega) - \int_0^t v(s,\omega)\, dF_s(\omega) \right| \le \int_t^{t+\Delta t} |v(s,\omega)|\, dI_s(\omega) + \int_t^{t+\Delta t} |v(s,\omega)|\, dJ_s(\omega) \le K\left[I_{t+\Delta t} - I_t\right] + K\left[J_{t+\Delta t} - J_t\right].$$

Thus continuity properties of I_t(ω) and J_t(ω) induce the same continuity properties on the integral.

For the claim that this integral is of bounded variation for bounded v(t,ω), again assume |v(t,ω)| ≤ K. Suppressing the ω variable, for any partition of a compact interval [a,b], using book 5's proposition 2.40 as above:

$$\sum_{i=1}^n \left| \int_0^{t_i} v(s,\cdot)\, dF_s - \int_0^{t_{i-1}} v(s,\cdot)\, dF_s \right| = \sum_{i=1}^n \left| \int_{t_{i-1}}^{t_i} v(s,\cdot)\, dF_s \right| \le K \sum_{i=1}^n V_F([t_{i-1},t_i]) = K V_F([a,b]).$$

Thus the integral is of bounded variation for all ω with V_F([a,b]) < ∞, which is μ-a.e.
If v(t,ω) is bounded and progressively measurable, then ∫_0^t v(s,ω)dF_s(ω) is adapted if each of the dI_s and dJ_s integrals is adapted. To prove that these Lebesgue-Stieltjes integrals are adapted, we can use the proof of corollary 3.74. While that proof addressed the special case with F_t ≡ ⟨M⟩_t for a local martingale M_t, the proof did not assume anything about this integrator other than the applicability of book 5's integration to the limit results, which are perfectly general. See exercise 4.15.

In the final case, when v(t,ω) is progressively measurable and satisfies the stated integrability result, define the truncation v_n(t,ω) ≡ max(min(v(t,ω), n), −n). Then v_n(t,ω) is bounded and progressively measurable, and thus ∫_0^t v_n(s,ω)dF_s(ω) is adapted for all n. Now v_n(t,ω) → v(t,ω) for all (t,ω) and |v_n(t,ω)| ≤ |v(t,ω)|, which is integrable μ-a.e. Thus by Lebesgue's dominated convergence theorem (proposition 2.43, book 5):

$$\int_0^t v_n(s,\omega)\, dF_s(\omega) \to \int_0^t v(s,\omega)\, dF_s(\omega), \quad \mu\text{-a.e.}$$

It then follows from corollary 1.10, book 5, that ∫_0^t v(s,ω)dF_s(ω) is adapted.
Exercise 4.15 Provide the details of the proof that ∫_0^t v(s,ω)dF_s(ω) is adapted when v(t,ω) is bounded and progressively measurable, adapting the proof of corollary 3.74 (and in turn proposition 3.72).

Letting f(t,ω) ≡ f_I(t,ω) − f_J(t,ω), it follows from book 5's proposition 3.6 that:

$$\int_0^t v(s,\omega)\, dF_s(\omega) = \int_0^t v(s,\omega) f(s,\omega)\, ds, \quad \mu\text{-a.e.}$$
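When the integrator is absolutely continuous, this identity says the Lebesgue-Stieltjes integral reduces to an ordinary Lebesgue integral against the density. A quick numerical sketch with hypothetical choices, F(s) = s² (so the density is f(s) = 2s) and integrand v(s) = s, for which both sides equal 2/3 on [0,1]:

```python
N = 200_000
ds = 1.0 / N
F = lambda s: s * s        # hypothetical integrator; its density is f(s) = 2s
v = lambda s: s            # hypothetical integrand

# Left-endpoint Stieltjes sums  sum v(s_i)(F(s_{i+1}) - F(s_i))
stieltjes = sum(v(i * ds) * (F((i + 1) * ds) - F(i * ds)) for i in range(N))

# Lebesgue integral  int v(s) f(s) ds  via the same left-endpoint sums
lebesgue = sum(v(i * ds) * 2.0 * (i * ds) * ds for i in range(N))

print(round(stieltjes, 3), round(lebesgue, 3))  # both ~ 2/3
```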
1. v(s,ω) is predictable,

Remark 4.18 Note that if v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S), then {T_n}_{n=1}^∞ can be redefined so that the above bounds are valid for all ω. Let A = ∪_n A_n be the set of measure zero defined with A_n = {ω : sup_s |v^{T_n}(s,ω)| > c_n}. Then redefining each T_n(ω) for ω ∈ A (for example, T_n(ω) ≡ 0) changes T_n only on a set of μ-measure zero, and the stated bounds then hold for all ω.

Exercise 4.19 Recalling exercise 4.15, prove that ∫_0^t v(s,ω)dF_s(ω) is adapted when v(t,ω) is a locally bounded predictable process. Hint: Proposition 5.19 of book 7 and remark 3.6. This is also proved in proposition 4.21.
The next result shows that the H^{bP}_{loc}([0,∞) × S)-class of integrands contains all continuous adapted processes, and thus also the class of continuous semimartingale integrators, M^S_{loc}. In addition, this class of integrands is contained in the class of integrands for continuous local martingales, H^M_{2,loc}, for any such M.

Proposition 4.20 (On H^{bP}_{loc}([0,∞) × S)) Given (S, σ(S), σ_t(S), μ)_{u.c.}, if v(s,ω) is a continuous, adapted process then v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S). Hence in particular,

$$\mathcal{M}^S_{loc} \subset H^{bP}_{loc}([0,\infty) \times S).$$

In addition, for all M ∈ M_{loc}:

$$H^{bP}_{loc}([0,\infty) \times S) \subset H^M_{2,loc}([0,\infty) \times S).$$
4.2 THE GENERAL STOCHASTIC INTEGRAL
Proof. By book 7's corollary 5.17, a continuous and adapted process is predictable. By that book's proposition 5.60, if v(t,ω) is a continuous, adapted process and F ⊂ R is closed, then T_F ≡ inf{s ≥ 0 | v(s,ω) ∈ F} is a stopping time. Letting F_n = (−∞,−n] ∪ [n,∞) produces a sequence {T_n}_{n=1}^∞ with |v^{T_n}(s,ω)| ≤ n for all ω. That this sequence is almost surely unbounded, and thus a reducing sequence, is proved as follows. Assume for ω ∈ A with μ(A) > 0 that T_n(ω) ≤ K for all n. This implies by continuity that |v(T_n(ω),ω)| ≥ n for all n and ω ∈ A. Thus v(s,·) is unbounded on [0,K] on a set of positive μ-measure, contradicting continuity almost everywhere. Hence H^{bP}_{loc}([0,∞) × S) contains all continuous, adapted processes, and thus M^S_{loc} ⊂ H^{bP}_{loc}([0,∞) × S).

Given M ∈ M_{loc}, the Doob-Meyer decomposition theorem of book 7's proposition 6.12 obtains that ⟨M⟩_t is a continuous, increasing, adapted process. If v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S), then by that book's proposition 5.19, v(s,ω) is progressively measurable, and then by Fubini's theorem of book 5's proposition 5.19, v(s,ω) is Borel measurable in s for all ω. Given t and {T_n}_{n=1}^∞, the reducing sequence for v(s,ω), it follows that for almost all ω there exists n(ω) so that T_{n(ω)}(ω) > t. Thus μ-a.e.:

$$\int_0^t v^2(s,\omega)\, d\langle M \rangle_s \le c^2_{n(\omega)} \langle M \rangle_t < \infty.$$
Proposition 4.21 Given the filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.}, let X_t = M_t + F_t ∈ M^S_{loc} be a continuous semimartingale, and v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S) a locally bounded predictable process. Then ∫_0^t v(s,ω)dX_s, as defined by:

$$\int_0^t v(s,\omega)\, dX_s \equiv \int_0^t v(s,\omega)\, dM_s + \int_0^t v(s,\omega)\, dF_s, \tag{4.8}$$

is well defined, continuous in t, and a semimartingale. In other words, ∫_0^t v(s,ω)dX_s ∈ M^S_{loc}.
Proof. Assume that X_t = M_t + F_t, recalling that this decomposition is unique by proposition 4.4. Then ∫_0^t v(s,ω)dM_s is well defined by the prior proposition, since H^{bP}_{loc} ⊂ H^M_{2,loc} for all M ∈ M_{loc}, and this integral is a continuous local martingale by proposition 3.83 of the prior section.

Next, given the reducing sequence {T_n}_{n=1}^∞, v^{T_n}(s,ω) is bounded and predictable by definition. Since it is then also progressively measurable by book 7's proposition 5.19, it follows from proposition 4.14 that ∫_0^t v^{T_n}(s,ω)dF_s is adapted, continuous and of bounded variation. But for Lebesgue-Stieltjes integrals it follows by the discussion of remark 3.63 that:

$$\int_0^t v^{T_n}(s,\omega)\, dF_s = \int_0^{t\wedge T_n} v(s,\omega)\, dF_s.$$
5. The covariation of the continuous semimartingale X'_t(ω) in 4 and Y'_t(ω) ≡ ∫_0^t w(s,ω)dY_s(ω) is given by:

$$\left\langle X', Y' \right\rangle_t = \int_0^t v(s,\omega) w(s,\omega)\, d\langle M, N \rangle_s.$$

6. The associative law applies: If X'_t(ω) = ∫_0^t v(s,ω)dX_s(ω) as in 4, then for almost all ω ∈ S:

$$\int_0^t w(s,\omega)\, dX'_s(\omega) = \int_0^t v(s,\omega) w(s,\omega)\, dX_s(\omega), \quad \text{all } t.$$
Proof. Using 4.8, 1 and 2 follow from the analogous properties for integrals with respect to continuous local martingales by proposition 3.85, recalling that H^{bP}_{loc} ⊂ H^M_{2,loc} for all M ∈ M_{loc} by proposition 4.20, and for Lebesgue-Stieltjes integrals by book 5's proposition 2.40. To check that av(s,ω) + w(s,ω) ∈ H^{bP}_{loc}([0,∞) × S) for 2, predictability is immediate (proposition 1.5, book 5), while local boundedness follows using T_n ≡ T_n^v ∧ T_n^w, with apparent notation. Then T_n is a stopping time by book 7's proposition 5.60, and the triangle inequality obtains boundedness with c_n ≡ |a| c_n^v + c_n^w. That both statements in 1 and 2 are qualified as true only μ-a.e. reflects the fact that a semimartingale can always be changed on sets of μ-measure 0 without corrupting its properties, since the filtration is assumed complete.

Similarly, 3 follows from proposition 3.85 and remark 3.63.

For 4, book 7's proposition 6.24 obtains ⟨X'⟩_t = ⟨M'⟩_t where M'_t(ω) ≡ ∫_0^t v(s,ω)dM_s(ω), and so this result follows from 3.69. Similarly, the derivation of 5 uses book 7's proposition 6.34, which obtains ⟨X',Y'⟩_t = ⟨M',N'⟩_t with N'_t(ω) ≡ ∫_0^t w(s,ω)dN_s(ω), and then 3.68.

Finally for 6, first note that v(s,ω)w(s,ω) ∈ H^{bP}_{loc}([0,∞) × S). Predictability is again immediate (proposition 1.5, book 5), while local boundedness follows using T_n ≡ T_n^v ∧ T_n^w, a stopping time as above, obtaining local boundedness with c_n^{vw} ≡ c_n^v c_n^w. Since X'_t ∈ M^S_{loc} by proposition 4.21, both integrals in this result are thus well defined. Using proposition 4.21:

$$X'_t(\omega) = \int_0^t v(s,\omega)\, dM_s(\omega) + \int_0^t v(s,\omega)\, dF_s(\omega) \equiv M'_t(\omega) + F'_t(\omega).$$

Since w(s,ω) ∈ H^M_{2,loc}([0,∞) × S) by proposition 4.20, 3.70 obtains:

$$\int_0^t w(s,\omega)\, dM'_s(\omega) = \int_0^t v(s,\omega) w(s,\omega)\, dM_s(\omega), \tag{**}$$

and similarly for the dJ'_s(ω)-integral. Assembling the pieces and applying proposition 4.14 again:

$$\int_0^t w(s,\omega)\, dF'_s(\omega) = \int_0^t v(s,\omega)w(s,\omega)\, dI_s(\omega) - \int_0^t v(s,\omega)w(s,\omega)\, dJ_s(\omega), \quad t \le T_n^w,$$
$$= \int_0^t v(s,\omega)w(s,\omega)\, dF_s(\omega), \quad t \le T_n^w \wedge T_n^v.$$

Since v(s,ω)w(s,ω) ∈ H^{bP}_{loc}([0,∞) × S), this proves 6 by proposition 4.21.
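In a discretized setting, the associative law of part 6 is transparent: each increment of X'_t = ∫ v dB is v(s_i)ΔB_i, so the left-endpoint sums for ∫ w dX' and for ∫ (vw) dB coincide term by term on every path. A sketch with hypothetical deterministic integrands v(s) = 1 + s and w(s) = s²:

```python
import random

random.seed(5)
steps, t = 100_000, 1.0
dt = t / steps
sq = dt ** 0.5

# Hypothetical integrands: v(s) = 1 + s defines X' = int v dB, and w(s) = s^2.
lhs = rhs = 0.0
for i in range(steps):
    s = i * dt
    dB = random.gauss(0.0, sq)
    dXp = (1.0 + s) * dB                # increment of X'
    lhs += (s * s) * dXp                # sum for  int w dX'
    rhs += ((1.0 + s) * (s * s)) * dB   # sum for  int (v w) dB
print(abs(lhs - rhs) < 1e-9)
```

The two sums differ only by floating-point rounding, illustrating why the associative law is exact at the level of Riemann-type approximations.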
More generally, ∫_0^t v_n(s,ω)dX_s(ω) converges to 0 in probability, uniformly in t over compact sets.

Proof. Given ε > 0:

$$\Pr\left[ \sup_{t\in[0,T]} \left| \int_0^t v_n(s,\omega)\, dX_s(\omega) \right| > \varepsilon \right] \le \Pr\left[ \sup_{t\in[0,T]} \left| \int_0^t v_n(s,\omega)\, dM_s(\omega) \right| > \varepsilon/2 \right] + \Pr\left[ \sup_{t\in[0,T]} \left| \int_0^t v_n(s,\omega)\, dF_s(\omega) \right| > \varepsilon/2 \right].$$

By proposition 4.20, H^{bP}_{loc}([0,∞) × S) ⊂ H^M_{2,loc}([0,∞) × S), and thus by proposition 3.96:

$$\Pr\left[ \sup_{t\in[0,T]} \left| \int_0^t v_n(s,\omega)\, dM_s(\omega) \right| > \varepsilon/2 \right] \to 0,$$

where the last step is book 5's proposition 2.40. These last two expressions converge to 0 using the same argument.

In detail for the dI_s-integral, since |v_n(s,ω)| is nonnegative and I_s is increasing, the supremum is attained at t = T:

$$\Pr\left[ \sup_{t\in[0,T]} \int_0^t |v_n(s,\omega)|\, dI_s(\omega) > \varepsilon/4 \right] = \Pr\left[ \int_0^T |v_n(s,\omega)|\, dI_s(\omega) > \varepsilon/4 \right].$$

Now |v_n(s,ω)| ≤ v(s,ω) μ-a.e., and this upper bound is dI_s-integrable since v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S). Thus by Lebesgue's dominated convergence theorem of book 5's corollary 2.45:

$$\int_0^T |v_n(s,\omega)|\, dI_s(\omega) \to 0, \quad \mu\text{-a.e.}$$
4.5 STOCHASTIC INTEGRALS VIA RIEMANN SUMS
Proposition 4.25 (Riemann sum approximation) Given (S, σ(S), σ_t(S), μ)_{u.c.}, let X_t ∈ M^S_{loc} and assume v(s,ω) ∈ H^{bP}_{loc}([0,∞) × S) is continuous in s, μ-a.e. Then given partitions Π_n of [0,t] with mesh size tending to 0:

$$\sum_{i=0}^{n-1} v(t_i,\omega)\left[ X_{t_{i+1}}(\omega) - X_{t_i}(\omega)\right] \to_P \int_0^t v(s,\omega)\, dX_s(\omega). \tag{4.11}$$

Proof. If X_t = M_t + F_t:

$$\sum_{i=0}^{n-1} v(t_i,\omega)\left[ X_{t_{i+1}}(\omega) - X_{t_i}(\omega)\right] = \sum_{i=0}^{n-1} v(t_i,\omega)\left[ M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\right] + \sum_{i=0}^{n-1} v(t_i,\omega)\left[ F_{t_{i+1}}(\omega) - F_{t_i}(\omega)\right].$$

If each of the summations on the right converges in probability to the respective integral, then 4.11 will be proved. Notationally, if Y_n →_P Y and Z_n →_P Z, it follows that Y_n + Z_n →_P Y + Z since:

$$\left\{ |(Y_n - Y) + (Z_n - Z)| > \varepsilon \right\} \subset \left\{ |Y_n - Y| > \varepsilon/2 \right\} \cup \left\{ |Z_n - Z| > \varepsilon/2 \right\}.$$

Now let a sequence of partitions as described be given. Since M_t ∈ M_{loc} and v(s,ω) ∈ H^M_{2,loc}([0,∞) × S) by proposition 4.20, 3.75 of proposition 3.98 obtains:

$$\sum_{i=0}^{n-1} v(t_i,\omega)\left[ M_{t_{i+1}}(\omega) - M_{t_i}(\omega)\right] \to_P \int_0^t v(s,\omega)\, dM_s(\omega).$$

In addition, by proposition 2.54 of book 5:

$$\sum_{i=1}^{n} v(t_{i-1},\omega)\left[ F_{t_i}(\omega) - F_{t_{i-1}}(\omega)\right] \to \int_0^t v(s,\omega)\, dF_s(\omega),$$

for all ω for which v(s,ω) is continuous, meaning for almost all ω. Since convergence almost everywhere implies convergence in probability (proposition 5.21, book 2), 4.11 follows.

That convergence in probability implies the existence of a subsequence of partitions Π_{n_k} so that this convergence holds pointwise μ-a.e. is book 2's proposition 5.25.
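The convergence in 4.11 can be observed on a simulated Brownian path. The sketch below uses the left-endpoint sums of the proposition with v(s,ω) = B_s(ω), and compares against the known value ∫_0^t B_s dB_s = (B_t² − t)/2; the grid size and seed are arbitrary choices, not from the text:

```python
import random

random.seed(1)
n, t = 200_000, 1.0
dt = t / n
sqdt = dt ** 0.5

# One Brownian path on a fine grid, and the left-endpoint Riemann sum
# sum_i B_{t_i} (B_{t_{i+1}} - B_{t_i}), which 4.11 says approximates
# the Ito integral  int_0^t B_s dB_s = (B_t^2 - t)/2.
B = 0.0
riemann = 0.0
for _ in range(n):
    dB = random.gauss(0.0, sqdt)
    riemann += B * dB
    B += dB

ito_value = (B * B - t) / 2.0
print(abs(riemann - ito_value) < 0.05)  # agreement on a fine grid
```

Note that left-endpoint evaluation is essential: evaluating v at the right endpoint would converge to a different (Stratonovich-shifted) limit.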
Letting n → ∞ obtains that this upper bound is 0 for any ε > 0, and thus the proof is complete.

The next result is initially a bit of a surprise. If X_t, Y_t ∈ M^S_{loc} with X_t = M_t + F_t and Y_t = N_t + G_t, then formally the product X_tY_t can be expressed:

$$X_t Y_t = F_t N_t + G_t M_t + M_t N_t + F_t G_t.$$

Written this way, it is not apparent what type of process X_tY_t is, though it is apparently continuous and adapted.

The next result proves that X_tY_t is in fact equal to the sum of a local martingale and a bounded variation process. Thus X_tY_t is a semimartingale. This is a very special case of Itô's lemma, to be studied in chapter 5.

Corollary 4.28 If X_t, Y_t ∈ M^S_{loc}, then X_tY_t ∈ M^S_{loc}. In other words, the product of continuous semimartingales is a continuous semimartingale.
Proof. Both integrals ∫_0^t Y_s dX_s and ∫_0^t X_s dY_s are continuous semimartingales by proposition 4.21, while ⟨X,Y⟩_t is an adapted, continuous, bounded variation process (book 7's propositions 6.34 and 6.29). Thus the conclusion follows by 4.13, since the collection of local martingales is closed under addition, as is the collection of bounded variation processes.
4.7 INTEGRATION OF VECTOR AND MATRIX PROCESSES
for appropriate integrands. Such processes may for example represent the evolution of an asset price, where the first integral reflects "drift" in this asset price, while the summation of Itô integrals reflects asset price "volatility," or risk relative to m factors. For such a model, these integrands must then be appropriately specified as functions on an underlying probability space (S, σ(S), μ), and the bounded variation process identified, and this can be difficult except for simple models.

so that both the drift and volatility of such an asset at each moment of time reflect the asset value at that time. Itô's lemma is addressed in the next chapter and will motivate this idea. It will show that if X_t is a continuous semimartingale expressed as in the first specification, and f(x,t) is an appropriately defined smooth function, then f(X_t,t) is a continuous semimartingale expressed as in the second specification.
While notationally compelling, we will need to specify properties of the functions ũ(s,x) and ṽ_j(s,x) that will assure that these integrands have the necessary properties to make these integrals well defined. Also, this latter specification is in fact an integral equation to be solved for such X_t(ω), and this raises the issue of solvability in the sense of existence and uniqueness of such a solution. While formally an integral equation, such equations and their solutions will be addressed in book 9 under the title of stochastic differential equations, or SDEs.
A simple example of such an equation defines geometric Brownian motion, the model underlying the famous Black-Scholes-Merton option pricing formulas. With drift and volatility coefficients here denoted a and σ:

$$X_t(\omega) = X_0(\omega) + a\int_0^t X_s(\omega)\, ds + \sigma \int_0^t X_s(\omega)\, dB_s(\omega).$$

See chapters 8 and 9 of book 6 for background on this option pricing approach. The challenge with any such specification is to demonstrate that there exists a process X_t(ω) so that the first integral is well defined as a Lebesgue integral and the second well defined as an Itô integral, and that these integrals then reproduce the given process. It is also important to know if such a "solution" to this equation is unique.
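A common way to make such an equation concrete is to discretize it. The sketch below applies a simple Euler discretization to the geometric Brownian motion equation (with illustrative coefficients a = 0.05 and σ = 0.2, not from the text) and compares the result to the known closed-form solution X_t = X_0 exp((a − σ²/2)t + σB_t), driven by the same Brownian increments:

```python
import math
import random

random.seed(7)
x0, a, sigma, t, n = 1.0, 0.05, 0.2, 1.0, 100_000
dt = t / n

# Euler scheme for  X_t = X_0 + a int X ds + sigma int X dB,  alongside the
# Brownian path B needed for the closed-form comparison.
x, B = x0, 0.0
for _ in range(n):
    dB = random.gauss(0.0, math.sqrt(dt))
    x += a * x * dt + sigma * x * dB
    B += dB

exact = x0 * math.exp((a - 0.5 * sigma * sigma) * t + sigma * B)
print(abs(x - exact) < 0.01)
```

The closeness of the two values on a fine grid is a numerical hint of both existence and uniqueness for this particular SDE, topics treated rigorously in book 9.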
In a market of m such assets, with X_t(ω) ≡ (X_t^{(1)}(ω), ..., X_t^{(m)}(ω)), this becomes:

$$X_t^{(i)}(\omega) = \int_0^t u_i(s,\omega)\, dF_s^{(i)}(\omega) + \sum_{j=1}^n \int_0^t v_{ij}(s,\omega)\, dB_s^{(j)}(\omega), \quad i = 1,...,m,$$

or:

$$X_t^{(i)}(\omega) = \int_0^t \tilde u_i(s, X_s(\omega))\, ds + \sum_{j=1}^n \int_0^t \tilde v_{ij}(s, X_s(\omega))\, dB_s^{(j)}(\omega), \quad i = 1,...,m,$$

where B_t(ω) ≡ (B_t^{(1)}(ω), ..., B_t^{(n)}(ω)) is an n-dimensional Brownian motion.

The above models with standard (s,ω)-integrands provide some motivation for introducing the ideas and notations below.
$$X = (X_1, X_2, \ldots, X_n),$$

or, to liberate the subscript position for t, one sometimes uses as noted above:

$$X_t(\omega) = (X_t^{(1)}(\omega), X_t^{(2)}(\omega), \ldots, X_t^{(n)}(\omega)),$$

but it is often unambiguous to omit variables in a given formula. This lack of ambiguity is a result of one's familiarity with the ideas and notations of the above sections, and within that framework, this notation often has only one interpretation.
For example, given the development of this chapter the expression:

$$\int_0^t v\, dX,$$

is well defined for all the categories of processes X developed above, and all the processes v for which such integrals were defined. That this integral can be expressed:

$$\int_0^t v(s,\omega)\, dX_s(\omega),$$

adds little to our understanding once stochastic integration theory has been developed and the notation becomes more familiar.

That said, those new to the theory can find such notation ambiguous due to lack of familiarity. Thus we will generally continue with the more explicit notation, with apologies to the more expert readers.
$$\left|v^{T_k}(s,\omega)\right| \le c_k, \quad \mu\text{-a.e.}$$

For integrals with respect to Brownian motion, all three spaces provide well-defined integrals by propositions 2.52, 3.83 and 4.21. However, it is common to use the middle ground for its generality, while circumventing the need to introduce stopping times. That said, any of the above definitions is a reasonable choice if it suits the given application. Recall definition 3.70.
Definition 4.31 (H^{B(m×n)}_{2,loc}([0,∞) × S)) Let B_t(ω) ≡ (B_t^{(1)}(ω), ..., B_t^{(n)}(ω)) be an n-dimensional Brownian motion defined on a filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.}. Then H^{B(1×n)}_{2,loc}([0,∞) × S) denotes the space of n-dimensional processes v(t,ω) ≡ (v_j(t,ω))_{j=1}^n, where:

1. each v_j(s,ω) is predictable;

2. μ-a.e.:

$$\int_0^t v_j^2(s,\omega)\, ds < \infty, \quad \text{all } t, \text{ all } j.$$

More generally, the space H^{B(m×n)}_{2,loc}([0,∞) × S) of m×n-matrix processes v(t,ω) ≡ (v_{ij}(t,ω))_{i=1,j=1}^{m,n} is defined similarly, meaning each v_{ij}(s,ω) is predictable, and μ-a.e.:

$$\int_0^t v_{ij}^2(s,\omega)\, ds < \infty, \quad \text{all } t, \text{ all } i,j.$$

If the above integrals are finite over [0,∞), μ-a.e., we refer to these spaces as H_2^{B(1×n)}([0,∞) × S) and H_2^{B(m×n)}([0,∞) × S), respectively.
2. If v ∈ H^{B(m×n)}_{2,loc}([0,∞) × S):

$$\int_a^b v\, dB \equiv \begin{pmatrix} \sum_{j=1}^n \int_a^b v_{1j}(s,\omega)\, dB_s^{(j)}(\omega) \\ \sum_{j=1}^n \int_a^b v_{2j}(s,\omega)\, dB_s^{(j)}(\omega) \\ \vdots \\ \sum_{j=1}^n \int_a^b v_{mj}(s,\omega)\, dB_s^{(j)}(\omega) \end{pmatrix}. \tag{4.16}$$

Note that each of the integrals in these expressions is well defined from proposition 3.83. In addition, each such integral expressed as ∫_0^t is a continuous local martingale on the space (S, σ(S), σ_t(S), μ)_{u.c.}, and so ∫_0^t v dB defined in 4.15 is a continuous local martingale on this space, and that in 4.16 is an m-dimensional vector process of continuous local martingales (which is by definition an m-dimensional continuous local martingale).
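The componentwise meaning of 4.16 can be sanity-checked in a few lines. For a constant m×n integrand v, component i of ∫_0^t v dB collapses to Σ_j v_ij B_t^{(j)}, i.e., the matrix-vector product vB_t, and the discrete left-endpoint sums reproduce this exactly on any path. The dimensions and matrix entries below are arbitrary illustrations:

```python
import random

random.seed(3)
m, n, steps = 2, 3, 20_000
dt = 1.0 / steps
sq = dt ** 0.5

# Constant (hypothetical) 2 x 3 integrand; component i of 4.16 is then
# sum_j v[i][j] * B_t^{(j)}, the matrix-vector product v B_t.
v = [[1.0, -2.0, 0.5], [0.0, 3.0, 1.0]]
B = [0.0] * n
integral = [0.0] * m
for _ in range(steps):
    dB = [random.gauss(0.0, sq) for _ in range(n)]
    for i in range(m):
        integral[i] += sum(v[i][j] * dB[j] for j in range(n))
    for j in range(n):
        B[j] += dB[j]

check = [sum(v[i][j] * B[j] for j in range(n)) for i in range(m)]
print(all(abs(integral[i] - check[i]) < 1e-9 for i in range(m)))
```

For non-constant integrands the sums no longer telescope, and the limit is the vector of stochastic integrals described in the text.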
Definition 4.35 (H^{bP(m×n)}_{loc}([0,∞) × S)) Given a filtered probability space (S, σ(S), σ_t(S), μ)_{u.c.}, H^{bP(1×n)}_{loc}([0,∞) × S) is defined as the space of n-dimensional locally bounded predictable processes u(t,ω) ≡ (u_i(t,ω))_{i=1}^n. That is, each component u_i(s,ω) ∈ H^{bP}_{loc}([0,∞) × S) as in definition 4.17.

More generally, the space H^{bP(m×n)}_{loc}([0,∞) × S) of m×n-matrix processes v(t,ω) ≡ (v_{ij}(t,ω))_{i=1,j=1}^{m,n} is defined similarly, meaning each v_{ij}(s,ω) ∈ H^{bP}_{loc}([0,∞) × S).

Remark 4.36 Note that both the sequence of stopping times {T_k}_{k=1}^∞ and the sequence of constants {c_k}_{k=1}^∞ can always be defined to be independent of i,j by defining, with apparent notation:

$$T_k = \min_{i,j}\{T_k^{(i,j)}\}, \qquad c_k = \max_{i,j}\{c_k^{(i,j)}\}.$$

Then each T_k is a stopping time by book 7's proposition 5.60. Also, by proof by contradiction, T_k → ∞, μ-a.e.

The following definition is explicitly stated for continuous semimartingales, but naturally applies in the special cases of continuous m-dimensional local martingales, or continuous m-dimensional processes of bounded variation.
Definition 4.37 If u ∈ H^{bP(1×n)}_{loc}([0,∞) × S), and X ∈ M^S_{loc} is an n-dimensional, continuous semimartingale, then:

$$\int_a^b u\, dX \equiv \sum_{j=1}^n \int_a^b u_j(s,\omega)\, dX_s^{(j)}(\omega), \tag{4.17}$$

where each component integral is defined as in 4.2.

Analogously, if u ∈ H^{bP(m×1)}_{loc}([0,∞) × S) and v ∈ H^{bP(m×n)}_{loc}([0,∞) × S), then for an m-dimensional continuous, bounded variation process F, and n-dimensional continuous local martingale M:

$$\int_a^b u\, dF + \int_a^b v\, dM \equiv \begin{pmatrix} \int_a^b u_1(s,\omega)\, dF_s^{(1)}(\omega) + \sum_{j=1}^n \int_a^b v_{1j}(s,\omega)\, dM_s^{(j)}(\omega) \\ \int_a^b u_2(s,\omega)\, dF_s^{(2)}(\omega) + \sum_{j=1}^n \int_a^b v_{2j}(s,\omega)\, dM_s^{(j)}(\omega) \\ \vdots \\ \int_a^b u_m(s,\omega)\, dF_s^{(m)}(\omega) + \sum_{j=1}^n \int_a^b v_{mj}(s,\omega)\, dM_s^{(j)}(\omega) \end{pmatrix}. \tag{4.18}$$
Remark 4.38 Note that ∫_a^b u dX can well be defined for u ∈ H^{bP(m×n)}_{loc}([0,∞) × S) analogously with 4.16, and one obtains a result as in 4.18, only with an n-sum of bounded variation integrals. In general, however, it is more common for models to have n-sums of Brownian or local martingale integrals, and only one bounded variation integral. Of course, this is just notation, so it can be adapted to fit the needs of the application.

Also, as was the case for the integrals with respect to n-dimensional Brownian motion above, there is no new theory here, only notational conventions.
Chapter 5
Itô’s Lemma
where X_0 is σ_0(S)-measurable.

where

$$M_t = 2\int_0^t B_s\, dB_s, \qquad F_t = \langle B \rangle_t.$$

More generally, if M_t is a continuous L²-bounded martingale that is also locally bounded, with M_0 a random variable not necessarily identically 0, then f(M_t) is again a continuous semimartingale by 3.54:

$$M_t^2 = M'_t + F_t,$$

where

$$M'_t = 2\int_0^t M_s\, dM_s, \qquad F_t = M_0^2 + \langle M \rangle_t.$$
(f_1(t,x), ..., f_p(t,x)) with all f_k(t,x) ∈ C^{1,2}([0,T] × R^m) as above. There should be no confusion caused by avoiding attaching a p to this space.

For any such space, the subscript 0 denotes that all such functions have compact support. For example, if f ∈ C_0^n(R^m) then f ∈ C^n(R^m) and there exists compact K ⊂ R^m so that f = 0 outside K.
Remark 5.5 (On μ-a.e. in Itô's lemma) Note that once it is proved that f(t,X_t) is given for every 0 ≤ t < ∞, μ-a.e., by the expressions below, such

5.2 SEMIMARTINGALE VERSION

Proposition 5.9 (Itô's lemma for semimartingales) Let f(t,x) ∈ C^{1,2}([0,∞) × R). If X_t = X_0 + F_t + M_t is a continuous semimartingale on (S, σ(S), σ_t(S), μ)_{u.c.},

where f_t ≡ ∂f/∂t, and so forth. As noted in remark 5.5, this implies that μ-a.e., the identity in 5.2 is valid for all t.
Proof. Following Karatzas and Shreve (1988), this proof is divided into a number of manageable steps.

1. Localization: To facilitate convergence of the various summations, we first introduce stopping times to ensure all integrands are bounded. For N > 0 define:

$$T_N = \begin{cases} 0, & \text{if } |X_0| \ge N, \\ \inf\{t \mid |M_t| \ge N \text{ or } |F_t| \ge N \text{ or } \langle M \rangle_t \ge N\}, & \text{if } |X_0| < N. \end{cases}$$

stochastic integrals for semimartingale integrators, and noting that $X_s^{(N)} = X_s$ for s ≤ T_N, this would then prove that for each N, μ-a.e.:

$$f(t, X_t^{(N)}) = f(0,X_0) + \int_0^t f_t(s,X_s^{(N)})\, ds + \int_0^t f_x(s,X_s^{(N)})\, dF_s^{(N)} + \frac{1}{2}\int_0^t f_{xx}(s,X_s^{(N)})\, d\langle M^{(N)} \rangle_s + \int_0^t f_x(s,X_s^{(N)})\, dM_s^{(N)}$$
$$= f(0,X_0) + \int_0^{t\wedge T_N} f_t(s,X_s)\, ds + \int_0^{t\wedge T_N} f_x(s,X_s)\, dF_s + \frac{1}{2}\int_0^{t\wedge T_N} f_{xx}(s,X_s)\, d\langle M \rangle_s + \int_0^{t\wedge T_N} f_x(s,X_s)\, dM_s.$$

Given t, this expression is then valid μ-a.e. for all integer N, and by letting integer N → ∞, 5.2 follows since T_N → ∞, μ-a.e.

2. Boundedness: Note that for given t, $|X_t^{(N)}| \le 3N$, and since f, f_t, f_x, and f_xx are continuous on the compact set [0,t] × [−3N, 3N], each integrand is bounded by a given constant K.

3. Taylor Approximation: Given N, denote $X_t^{(N)}$ by X_t for simplicity, and let Π_n denote a partition of [0,t] with 0 = t_0 < t_1 < ⋯ < t_n = t. Since

$$f(t_k, X_{t_k}) - f(t_{k-1}, X_{t_{k-1}}) = \left[ f(t_k, X_{t_k}) - f(t_{k-1}, X_{t_k}) \right] + \left[ f(t_{k-1}, X_{t_k}) - f(t_{k-1}, X_{t_{k-1}}) \right],$$

we can separately express by first and second order Taylor series:

$$f(t_k, X_{t_k}) - f(t_{k-1}, X_{t_k}) = f_t(\tau_k, X_{t_k})(t_k - t_{k-1}),$$
the integral I_3 = I_4 + I_5 + I_6 with the apparent notation. For the summations I_4 and I_5, since |f_xx| ≤ K by step 2, and with $v_1^{(n)}(F)$ denoting the absolute variation of F on step 3's partition Π_n (defined as in 4.3 but without the supremum):

$$|I_4| + |I_5| \le 2K v_1^{(n)}(F)\left[ \max_k \left|F_{t_k} - F_{t_{k-1}}\right| + \max_k \left|M_{t_k} - M_{t_{k-1}}\right| \right].$$

We now recall the second half of the proof of part 6 of the Doob-Meyer Decomposition theorem of book 7's proposition 6.5, where with a change in notation, $v_2^{(n)}(M)$ here is equivalent to $\left\|Q_t^{\Pi_n}(M)\right\|_{L^2}$ in that proof (there denoted $\left\|Q_t^{\Pi'_n}(M)\right\|_{L^2}$). It then follows that $v_2^{(n)}(M) \le \sqrt{12}\, N^2$, where N is the bound on |M_t| by parts 1 and 2. Now the tower and measurability properties of book 6's proposition 5.26 yield:

$$E\left[ \left(M_{t_k} - M_{t_{k-1}}\right)^2 - \left( \langle M \rangle_{t_k} - \langle M \rangle_{t_{k-1}} \right) \right]$$
$$= E\left[ E\left[ M_{t_k}^2 - \langle M \rangle_{t_k} \mid \sigma_{t_k}(S) \right] \right] - E\left[ E\left[ M_{t_{k-1}}^2 - \langle M \rangle_{t_{k-1}} \mid \sigma_{t_{k-1}}(S) \right] \right] - 2E\left[ M_{t_{k-1}} E\left[ M_{t_k} - M_{t_{k-1}} \mid \sigma_{t_{k-1}}(S) \right] \right] = 0.$$

If j ≠ k, then using the same approach with all product terms from (*) obtains:

$$E\left[ \left( \left(M_{t_k} - M_{t_{k-1}}\right)^2 - \left(\langle M \rangle_{t_k} - \langle M \rangle_{t_{k-1}}\right) \right) \left( \left(M_{t_j} - M_{t_{j-1}}\right)^2 - \left(\langle M \rangle_{t_j} - \langle M \rangle_{t_{j-1}}\right) \right) \right] = 0.$$
Thus for n = 2:

$$B_t^2 = t + 2\int_0^t B_s\, dB_s,$$

which is 2.35, and for n ≥ 3:

$$B_t^n = \frac{n(n-1)}{2} \int_0^t B_s^{n-2}\, ds + n \int_0^t B_s^{n-1}\, dB_s.$$

Note that the first integral can be evaluated as a pathwise Riemann or Lebesgue integral since B_s^{n−2} is continuous, and these agree (proposition 2.31, book 3, applied pathwise). Though perhaps not apparently of bounded variation, this integral is differentiable and of bounded variation by that book's proposition 3.33. The second integral is a continuous local martingale by proposition 3.83 if B_s^{n−1} ∈ H^B_{2,loc}([0,∞) × S). Predictability follows from book 7's corollary 5.17, and it is an exercise to verify 3.57.
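The n = 3 case, B_t³ = 3∫_0^t B_s ds + 3∫_0^t B_s² dB_s, can be checked on a simulated path by approximating the first integral with pathwise Riemann sums and the second with left-endpoint sums; the grid size and seed below are arbitrary choices:

```python
import random

random.seed(11)
n, t = 200_000, 1.0
dt = t / n
sq = dt ** 0.5

# Discretized check of  B_t^3 = 3 int_0^t B_s ds + 3 int_0^t B_s^2 dB_s
# on one sample path.
B = 0.0
drift_int = 0.0   # pathwise Riemann approximation of  int B_s ds
ito_int = 0.0     # left-endpoint approximation of  int B_s^2 dB_s
for _ in range(n):
    dB = random.gauss(0.0, sq)
    drift_int += B * dt
    ito_int += B * B * dB
    B += dB

lhs, rhs = B ** 3, 3.0 * drift_int + 3.0 * ito_int
print(abs(lhs - rhs) < 0.1)
```

The residual is driven by the terms B(ΔB)² − BΔt and (ΔB)³ discarded by the discretization, which vanish as the mesh shrinks, mirroring steps I_4 through I_6 in the proof above.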
5.3 ITÔ PROCESS VERSION
with X_0 given.

For this statement, we assume that u(s,ω) and v(s,ω) are locally bounded predictable processes on (S, σ(S), σ_t(S), μ)_{u.c.}. By proposition 4.21, X_t is then a continuous semimartingale on (S, σ(S), σ_t(S), μ)_{u.c.} if X_0 is σ_0(S)-measurable. Thus in this case, X_t − X_0 ∈ M^S_{loc}.

or equivalently, by 5.4:

$$df(t,X_t) = \left[ f_t(t,X_t) + \frac{1}{2} v^2(t,\omega) f_{xx}(t,X_t) \right] dt + f_x(t,X_t)\, dX_t.$$

$$f(t,X_t) = f(0,X_0) + \int_0^t \left[ f_t(s,X_s) + u(s,\omega) f_x(s,X_s) + \frac{1}{2} v^2(s,\omega) f_{xx}(s,X_s) \right] ds + \int_0^t v(s,\omega) f_x(s,X_s)\, dB_s, \tag{5.6}$$

μ-a.e. As noted in remark 5.5, this implies that μ-a.e., the identity in 5.6 is valid for all t.
Proof. Since X_t is a continuous semimartingale by proposition 4.21, proposition 5.9 applies. Comparing 5.6 with 5.2, we must justify that with F_t(ω) = ∫_0^t u(s,ω)ds and M_t(ω) = ∫_0^t v(s,ω)dB_s:

$$\int_0^t f_x(s,X_s)\, dF_s = \int_0^t u(s,\omega) f_x(s,X_s)\, ds,$$

$$\int_0^t f_{xx}(s,X_s)\, d\langle M \rangle_s = \int_0^t v^2(s,\omega) f_{xx}(s,X_s)\, ds,$$

and

$$\int_0^t f_x(s,X_s)\, dM_s = \int_0^t v(s,\omega) f_x(s,X_s)\, dB_s.$$

First, predictable implies progressively measurable by book 7's proposition 5.19, and then by book 5's proposition 5.19, u(s,·) and v(s,·) are Borel measurable in s for all ω. The first identity now follows from book 5's proposition 3.6 by first splitting u(s,·) into positive and negative parts (that book's definition 2.36). The second follows from this proposition (without splitting) and part 4 of proposition 3.89, that ⟨M⟩_t = ∫_0^t v²(s,ω)ds, recalling that ⟨B⟩_t = t by corollary 2.10. Finally, the stochastic integral identity is the associative law of proposition 3.91.
for appropriately defined functions u(s,x) and v(s,x). This integral equation is often written in stochastic differential equation (SDE) notation as:

$$dX_t = u(t,X_t)\, dt + v(t,X_t)\, dB_t, \tag{5.8}$$

with X_0 given. Such a process is called an Itô diffusion.
Remark 5.14 (On SDEs 1) It will be proved in book 9 that Borel measurability of $u(s,x)$ and $v(s,x)$ and continuity and adaptedness of a process $X_t$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ assure that the processes $u(s,\omega) \equiv u(s,X_s(\omega))$ and $v(s,\omega) \equiv v(s,X_s(\omega))$ are predictable stochastic processes on this space. However, to justify the existence of these integrals using the semimartingale integration theory requires that $u(s,\omega)$ and $v(s,\omega)$ be locally bounded. This boundedness condition is difficult to specify in terms of $u(s,x)$ and $v(s,x)$, and so it is more natural to use a different criterion to justify existence.

Given a continuous, adapted process $X_t$, the integrals in 5.7 will be well defined if $u(s,x)$ and $v(s,x)$ are Borel measurable functions,
$$u, v : [0,\infty)\times\mathbb{R} \to \mathbb{R},$$
satisfying for all $t$:
$$\Pr\left[\int_0^t |u(s,X_s(\omega))|\,ds < \infty,\ \text{all } t\right] = \Pr\left[\int_0^t v^2(s,X_s(\omega))\,ds < \infty,\ \text{all } t\right] = 1. \tag{5.9}$$
As noted above, Borel measurability will assure predictability of these integrands and thus progressive measurability by book 7's proposition 5.19. Then the first integral in 5.7, $\int_0^t u(s,X_s(\omega))ds$, is defined pathwise $\lambda$-a.e. as a Lebesgue (or Lebesgue-Stieltjes) integral of book 5, and this integral is adapted by corollary 3.74 with $M = B$ since $\langle B\rangle_s = s$ by corollary 2.10. Further, $\int_0^t v(s,X_s(\omega))dB_s(\omega)$ is definable $\lambda$-a.e. within the local martingale integration theory since 5.9 assures 3.57 and thus $v(s,X_s(\omega)) \in \mathcal{H}^B_{2,loc}([0,\infty)\times\mathcal{S})$.

The existence and uniqueness of $X_t$ that satisfies 5.7 will be seen to require somewhat more of $u(s,x)$ and $v(s,x)$ than Borel measurability to ensure that 5.9 is satisfied.

Finally, given this Borel measurability requirement on $u(s,x)$ and $v(s,x)$ and the assumption that a continuous, adapted $X_t$ exists that satisfies 5.9 and 5.7, it will also be proved in book 9 that such $X_t$ is a continuous semimartingale. It therefore makes sense to investigate the application of Itô's lemma to $f(t,X_t)$ for appropriate functions.
Notation 5.15 (Differential form of Itô's lemma) In differential notation, 5.11 is stated:
$$df(t,X_t) = \left[f_t(t,X_t) + u(t,X_t)f_x(t,X_t) + \frac{1}{2}v^2(t,X_t)f_{xx}(t,X_t)\right]dt + v(t,X_t)f_x(t,X_t)\,dB_t, \tag{5.10}$$
or equivalently by 5.8:
$$df(t,X_t) = \left[f_t(t,X_t) + \frac{1}{2}v^2(t,X_t)f_{xx}(t,X_t)\right]dt + f_x(t,X_t)\,dX_t.$$
Corollary 5.16 (Itô's lemma for an Itô diffusion) Given Borel measurable functions $u(s,x)$ and $v(s,x)$ on $[0,\infty)\times\mathbb{R}$, let $X_t$ be a continuous, adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ that satisfies 5.7 and 5.9, and let $f(t,x) \in C^{1,2}([0,\infty)\times\mathbb{R})$. Then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$, and for every $0 \le t < \infty$:
$$f(t,X_t) = f(0,X_0) + \int_0^t\left[f_t(s,X_s) + u(s,X_s)f_x(s,X_s) + \frac{1}{2}v^2(s,X_s)f_{xx}(s,X_s)\right]ds + \int_0^t v(s,X_s)f_x(s,X_s)\,dB_s, \tag{5.11}$$
$\lambda$-a.e. As noted in remark 5.5, this implies that $\lambda$-a.e., the identity in 5.11 is valid for all $t$.
Proof. As noted in remark 5.14, it will be proved in book 9 that $X_t$ is a generalized continuous semimartingale. Let:
$$F_t(\omega) = \int_0^t u(s,X_s(\omega))\,ds, \qquad M_t(\omega) = \int_0^t v(s,X_s(\omega))\,dB_s.$$
As also noted in remark 5.14, the integrands for $F_t(\omega)$ and $M_t(\omega)$ are predictable, so by 5.9, $F_t$ is a continuous bounded variation process by proposition 4.14, and $M_t$ is a continuous local martingale by proposition 3.83. Thus $F_t(\omega) + M_t(\omega)$ is a continuous semimartingale, 5.7 can be expressed:
$$X_t(\omega) = X_0(\omega) + F_t(\omega) + M_t(\omega),$$
and proposition 5.9 applies. Comparing 5.11 with 5.2, we must justify that:
$$\int_0^t f_x(s,X_s)\,dF_s = \int_0^t u(s,X_s)f_x(s,X_s)\,ds,$$
$$\int_0^t f_{xx}(s,X_s)\,d\langle M\rangle_s = \int_0^t v^2(s,X_s)f_{xx}(s,X_s)\,ds,$$
and
$$\int_0^t f_x(s,X_s)\,dM_s = \int_0^t v(s,X_s)f_x(s,X_s)\,dB_s.$$
5.4 ITÔ DIFFUSION VERSION 231
As noted in remark 5.14, the integrands for $F_t(\omega)$ and $M_t(\omega)$ are predictable and thus are progressively measurable by book 7's proposition 5.19. Now progressive measurability means that for each $s$, $u(s,X_s(\omega))$ and $v(s,X_s(\omega))$ are $(\mathcal{B}([0,s])\times\sigma_s(\mathcal{S}))$-measurable, and then by book 5's proposition 5.19, $u(s,X_s(\omega))$ and $v(s,X_s(\omega))$ are Borel measurable in $s$ for all $\omega$. Thus the first identity follows from book 5's proposition 3.6, by first splitting $u(s,X_s(\omega))$ into positive and negative parts (that book's definition 2.36). The second follows from this proposition (without splitting) and 3.69 of proposition 3.89, that $\langle M\rangle_t = \int_0^t v^2(s,X_s(\omega))ds$, recalling that $\langle B\rangle_t = t$ and thus $d\langle B\rangle_t = dt$ by corollary 2.10. Finally, the stochastic integral identity is the associative law of proposition 3.91.
$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t.$$
This implies that the process $\ln(X_t/X_0)$, which represents the continuous total return on the asset over $[0,t]$ in the Black-Scholes-Merton framework, satisfies in differential notation:
$$d\ln\frac{X_t}{X_0} = \left(\mu - \frac{1}{2}\sigma^2\right)dt + \sigma\,dB_t.$$
Solving this produces an expression for the process $X_t$, which we have assumed to exist:
$$X_t = X_0\exp\left[\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t\right]. \tag{5.13}$$
By rearranging we obtain an expression for the continuous total return over $[0,t]$ in the BSM context:
$$\ln\frac{X_t}{X_0} = \left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t. \tag{5.14}$$
Thus if there exists a continuous, positive semimartingale solution to the stochastic differential equation in 5.12, Itô's lemma yields that 5.13 is this process. This derivation also obtains that for fixed $t$, the random variable $X_t$ (or asset price in BSM) has a lognormal distribution, and the continuous total change $\ln(X_t/X_0)$ (or return in BSM) has a normal distribution (recall section 3.2.5 of book 4).
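The distributional conclusion of 5.13 and 5.14 can be checked numerically. The sketch below (not from the text; it assumes NumPy and hypothetical parameter values) samples $X_t$ exactly from 5.13 and verifies that $\ln(X_t/X_0)$ has the stated normal mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, X0, t = 0.08, 0.2, 100.0, 2.0   # hypothetical BSM parameters

# Exact simulation via 5.13: X_t = X0 exp[(mu - sigma^2/2) t + sigma B_t], B_t ~ N(0, t)
B_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)
X_t = X0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * B_t)

# By 5.14, the log-return is N((mu - sigma^2/2) t, sigma^2 t)
log_ret = np.log(X_t / X0)
mean_err = abs(log_ret.mean() - (mu - 0.5 * sigma**2) * t)
var_err = abs(log_ret.var() - sigma**2 * t)
```

Both sample errors shrink at the usual Monte Carlo rate $O(n^{-1/2})$ as the sample size grows.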
Note however that this derivation has not confirmed that $X_t$ in 5.13 is a semimartingale, though it is apparent by inspection that it is both continuous $\lambda$-a.e. and strictly positive if $X_0 > 0$. Further, we have not confirmed that this process solves 5.12. In addition, even if all is confirmed, there remains the possibility that there are other solutions of 5.12 which need not be semimartingales, and/or do not satisfy either the continuity or nonnegativity assumption, and thus for which the above application of Itô's lemma would not be justified.

We now address two technical details related to this example. That this solution is unique will be deferred, and will follow from the development in book 9. The punchline, however, is that 5.12 has a unique (strong) solution because the coefficient functions, $u(t,x) = \mu x$ and $v(t,x) = \sigma x$, are linear in $x$.
1. $X_t$ in 5.13 is a Semimartingale: It was noted in the introduction to section 6.2 of book 6 that the Doob-Meyer decomposition theorems 1 and 2 (propositions 6.5 and 6.12) were special cases of a more general Doob-Meyer decomposition theorem. This general theorem was named for J. L. Doob (1910–2004), who in 1953 proved the result for discrete time processes (the Doob Decomposition theorem), and Paul-André Meyer (1934–2003), who generalized the result to continuous processes in 1962-3. The more general statement applies to local submartingales (or local supermartingales) defined on the filtered probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. Such processes are defined just as are local martingales except that given the localizing sequence $\{T_n\}$, the stopped process $\chi_{T_n>0}X_t^{T_n}$ is a submartingale (respectively, supermartingale) with respect to $\sigma_t(\mathcal{S})$ for all $n$. In the general case this means for $t \ge s$ that $E\left[\chi_{T_n>0}X_t^{T_n}\,\big|\,\sigma_s\right] \ge \chi_{T_n>0}X_s^{T_n}$ for a submartingale (respectively, $E\left[\chi_{T_n>0}X_t^{T_n}\,\big|\,\sigma_s\right] \le \chi_{T_n>0}X_s^{T_n}$ for a supermartingale), in contrast to the local martingale condition that $E\left[\chi_{T_n>0}X_t^{T_n}\,\big|\,\sigma_s\right] = \chi_{T_n>0}X_s^{T_n}$. When $X_0 = 0$, lemma 3.61 generalizes (recall exercise 3.62) to prove that these requirements are equivalent to the same statements with $X_t^{T_n}$ in place of $\chi_{T_n>0}X_t^{T_n}$.
Recall that a stochastic process is càdlàg (definition 5.15, book 7) if it is continuous from the right, and with left limits. The general Doob-Meyer decomposition theorem states that a càdlàg local submartingale $X_t$ has a unique decomposition, $X_t = M_t + F_t$, where $F_t$ is a unique, almost surely increasing, predictable process with $F_0 = 0$, and $M_t$ is a local martingale. The same result applies to a càdlàg local supermartingale but with $F_t$ almost surely decreasing. In either case, when $X_t$ is continuous so too are the component processes.

Corollary: Every continuous local submartingale (local supermartingale) is a continuous semimartingale.
We now show that $X_t$ defined in 5.13 is a continuous submartingale for $X_0 > 0$. The integrability of $X_t$ is left as an exercise (Hint: 3.66, book 4), as is continuity of $E[X_t]$ for the next step. It then follows that $X_t^T$ is a continuous submartingale for any stopping time $T$ by the proof of Doob's optional stopping theorem of book 7's proposition 5.84, again using that book's proposition 5.80. Then as in that book's corollary 5.85, defining a localizing sequence $\{T_n\}$ by $T_n = \inf\{t\,|\,X_t \ge n\}$, which are stopping times by that book's proposition 5.60, obtains that $X_t$ is a continuous local submartingale. The above corollary to the general Doob-Meyer decomposition theorem is then applicable to prove that $X_t = M_t + F_t$ is a continuous semimartingale.

To prove that $X_t$ is a continuous submartingale, we prove the submartingale condition that for $t \ge s$, $E[X_t\,|\,\sigma_s] \ge X_s$. Since $X_t$ is a submartingale if and only if $Y_t \equiv X_t - X_0$ is a submartingale, this suffices by lemma 3.61 since $Y_0 = 0$. Based on the finite dimensional distributions of Brownian motion,
it follows that $B_t = B_s + \Delta B$ where $\Delta B = \sqrt{t-s}\,Z$ with $Z$ a standard normal variate that is independent of $\sigma_s$. By the measurability and independence properties of conditional expectations from book 7's proposition 5.26:
$$E[X_t\,|\,\sigma_s] = E\left[X_0\exp\left[\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma B_t\right]\Big|\,\sigma_s\right] = X_0\exp\left[\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma B_s\right]E\left[\exp\left[\sigma\sqrt{t-s}\,Z\right]\Big|\,\sigma_s\right] = X_s\exp\left[\left(\mu - \tfrac{1}{2}\sigma^2\right)(t-s)\right]E\left[\exp\left[\sigma\sqrt{t-s}\,Z\right]\right].$$
Since $t > s$, $E\left[\exp\left[\sigma\sqrt{t-s}\,Z\right]\right] = \exp\left[\sigma^2(t-s)/2\right] > 1$ by 3.66 of book 4, so $E[X_t\,|\,\sigma_s] = X_s e^{\mu(t-s)} \ge X_s$ when $\mu \ge 0$, and the result is proved.
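The moment identity used in the last step, $E[\exp(\sigma\sqrt{t-s}\,Z)] = \exp[\sigma^2(t-s)/2]$ for standard normal $Z$, can be verified by simulation. The sketch below (not from the text; hypothetical parameter values, assuming NumPy) compares the Monte Carlo estimate against the closed form:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, t, s = 0.3, 1.5, 0.5   # hypothetical values with t > s

Z = rng.normal(size=2_000_000)
mc = np.exp(sigma * np.sqrt(t - s) * Z).mean()   # E[exp(sigma sqrt(t-s) Z)]
closed_form = np.exp(sigma**2 * (t - s) / 2)     # the normal MGF value (3.66, book 4)
rel_err = abs(mc - closed_form) / closed_form
```

The closed form exceeds 1 whenever $\sigma \ne 0$ and $t > s$, which is the inequality driving the submartingale conclusion.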
2. $X_t$ Solves 5.12: The derivation above was predicated on the assumption that there existed a continuous semimartingale $X_t$ which satisfied the equation in 5.12, and that $X_t > 0$ to justify the application of Itô's lemma to $\ln X_t$. The candidate for this process derived in 5.13 has now been proved to be a semimartingale, and it is apparent that $X_t$ is continuous and $X_t > 0$ if $X_0 > 0$. But there remains the question: Does this process indeed satisfy 5.12?

To this end, define $f(t,x) = X_0\exp\left[\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma x\right]$, which is clearly $C^2(\mathbb{R}^2)$ with:
$$f_t = \left(\mu - \frac{1}{2}\sigma^2\right)f, \qquad f_x = \sigma f, \qquad f_{xx} = \sigma^2 f.$$
Then
Let $B_s$ be a Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$ and assume that there exists a continuous adapted process $X_s^{t,z}$ on this space that satisfies 5.7 on $s \in [t,T]$ with "initial value" $X_t^{t,z} = z$:
$$X_s^{t,z}(\omega) = z + \int_t^s u(r,X_r^{t,z}(\omega))\,dr + \int_t^s v(r,X_r^{t,z}(\omega))\,dB_r(\omega), \qquad t \le s \le T.$$
Assume now that $f(t,y)$ satisfies the partial differential equation:
$$f_t(r,y) + u(r,y)f_x(r,y) + \frac{1}{2}v^2(r,y)f_{xx}(r,y) + h'(r)f(r,y) \equiv 0 \quad \text{on } [0,T]\times\mathbb{R}. \tag{1}$$
Then the above identity becomes:
$$\exp[h(T)]f(T,X_T^{t,z}) = \exp[h(t)]f(t,z) + \int_t^T v(r,X_r^{t,z})f_x(r,X_r^{t,z})\exp[h(r)]\,dB_r. \tag{2}$$
If the integrand in the Itô integral is in $\mathcal{H}_2([0,\infty)\times\mathcal{S})$ of definition 2.31, meaning that it is measurable, adapted, and satisfies:
$$E\left[\int_t^T\left(v(r,X_r^{t,z})f_x(r,X_r^{t,z})\exp[h(r)]\right)^2 dr\right] < \infty,$$
and so the function that solves the PDE in (1) has the representation:
$$f(t,z) = \exp[h(T) - h(t)]\,E\left[f(T,X_T^{t,z})\right]. \tag{3}$$
This is wonderful, since if $T$ is the expiry date of the option we know the value of $f(T,X_T^{t,x})$ exactly for any asset price $X_T^{t,x}$. Further, the distribution of such $X_T^{t,x}$ is lognormal by a small modification to 5.13.
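A simple numerical check of the representation (3) is possible in a case where everything is explicit. The sketch below is not from the text: it assumes NumPy and takes $u \equiv 0$, $v \equiv 1$ (so $X_s^{t,z} = z + B_{s-t}$) and $h \equiv 0$, for which $f(t,x) = x^2 - t$ solves $f_t + \frac{1}{2}f_{xx} = 0$, and (3) reduces to $f(t,z) = E[f(T,X_T^{t,z})]$:

```python
import numpy as np

rng = np.random.default_rng(1)
t, T, z = 0.25, 1.0, 0.5   # hypothetical values

# With u = 0, v = 1, h = 0: X_T^{t,z} = z + B_{T-t}, and f(t,x) = x^2 - t
# satisfies f_t + (1/2) f_xx = 0, so (3) becomes f(t,z) = E[f(T, X_T^{t,z})].
X_T = z + rng.normal(0.0, np.sqrt(T - t), size=1_000_000)
mc = (X_T**2 - T).mean()   # Monte Carlo estimate of E[f(T, X_T^{t,z})]
exact = z**2 - t           # f(t,z)
err = abs(mc - exact)
```

Here $E[(z+B_{T-t})^2 - T] = z^2 + (T-t) - T = z^2 - t$, matching the left side of (3) exactly.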
Now this conclusion also required a martingale condition:
$$E\left[\int_t^T \sigma X_r^{t,z}f_x(r,X_r^{t,x})\exp[-Rr]\,dB_r\right] = 0,$$
and this seems impossible to satisfy! In general $X_r^{t,z} > 0$ for asset prices, while the option "deltas" satisfy $f_x(r,X_r^{t,x}) > 0$ for calls and $f_x(r,X_r^{t,x}) < 0$ for puts. How can this expectation be 0?

In fact this expectation is not 0, and the final solution to this problem in book 9 will involve a change in measure on the space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$ to make this work. Of course this change in measure must also preserve the Brownian motion, so there is still work to be done.
$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m f_{x_ix_j}(t,X_t)\,d\left\langle X^{(i)},X^{(j)}\right\rangle_t + \sum_{i=1}^m f_{x_i}(t,X_t)\,dX_t^{(i)},$$
Remark 5.21 (Integration by parts) Note that with $m = 2$ and for the function $f(t,x) = x_1x_2 \in C^{1,2}([0,\infty)\times\mathbb{R}^2)$, 5.16 reduces to the stochastic integration by parts formula in 4.12.
$\lambda$-a.e. As noted in remark 5.5, this implies that $\lambda$-a.e., the identity in 5.16 is valid for all $t$.
Proof. The proof is similar to, but notationally messier than, that of 5.2, now using the following Taylor approximation in step 3:

This infimum reflects all $i$, and is defined to be $\infty$ if $|M_t^{(i)}| < N$, $|F_t^{(i)}| < N$, and $\left|\left\langle M^{(i)},M^{(j)}\right\rangle_t\right| < N$ for all $i$ and $t$. Then $T_N$ is a stopping time as in proposition 5.9, and $\lambda$-a.e., $T_N \to \infty$ as $N \to \infty$.
Our goal is again to prove the stated result for $X_t^{(N)} \equiv X_{t\wedge T_N}$. Using the same justifications as in the proof of proposition 5.9, we seek to prove that:
$$f(t,X_t^{(N)}) = f(0,X_0) + \int_0^{t\wedge T_N} f_t(s,X_s)\,ds + \sum_{i=1}^m\int_0^{t\wedge T_N} f_{x_i}(s,X_s)\,dF_s^{(i)} + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\int_0^{t\wedge T_N} f_{x_ix_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s + \sum_{i=1}^m\int_0^{t\wedge T_N} f_{x_i}(s,X_s)\,dM_s^{(i)}, \quad \lambda\text{-a.e.}$$
Given $t$, this expression is then valid $\lambda$-a.e. for all integer $N$, and by letting integer $N \to \infty$, 5.16 follows since $T_N \to \infty$ $\lambda$-a.e.
2. Boundedness: Note that for given $t$, $\left|X_t^{(N),i}\right| \le 3N$ for all $i$, and since $f$, $f_t$, $f_{x_i}$, and $f_{x_ix_j}$ are continuous on the compact set $[0,t]\times\prod_{i=1}^m[-3N,3N]_i$, each is bounded by a given constant $K$.
3. Taylor Approximation: Given $N$, denote $X_t^{(N)}$ by $X_t$ for simplicity and let $\Pi_n$ denote a partition of $[0,t]$ with $0 = t_0 < t_1 < \cdots < t_n = t$. Since
$$f(t,X_t) - f(0,X_0) = \sum_{k=1}^n f_t(\tau_k,X_{t_k})(t_k - t_{k-1}) + \sum_{k=1}^n\sum_{i=1}^m f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right) + \frac{1}{2}\sum_{k=1}^n\sum_{i=1}^m\sum_{j=1}^m f_{x_ix_j}(t_{k-1},\xi_k)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right)\left(X_{t_k}^{(j)} - X_{t_{k-1}}^{(j)}\right) \equiv I_1 + I_2 + I_3,$$
it follows that:
$$I_2 = \sum_{i=1}^m\sum_{k=1}^n f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right) + \sum_{i=1}^m\sum_{k=1}^n f_{x_i}\left(t_{k-1},X_{t_{k-1}}\right)\left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right) \equiv \sum_{i=1}^m I_2^{F(i)} + \sum_{i=1}^m I_2^{M(i)}.$$
For each $i$, $f_{x_i}$ is continuous in both variables and $X_t$ is bounded and continuous $\lambda$-a.e. Thus each $I_2^{F(i)}$ is $\lambda$-a.e. equal to a Riemann summation associated with the Riemann-Stieltjes integral of a bounded continuous function with respect to a function of bounded variation. Thus by proposition 4.19 of book 3 and 4.6, as the partition mesh $\mu_{\Pi_n} \to 0$:
$$I_2^{F(i)} \to \int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)}, \quad \lambda\text{-a.e.}$$
For each $i$, $M_t^{(i)}$ is an $L^2$-bounded martingale, recalling the notational convention in 3 and the bound as in 2, and $f_{x_i}(s,X_s)$ is continuous $\lambda$-a.e.
5.5 MULTIVARIATE SEMIMARTINGALE VERSION 241
and bounded. Thus $f_{x_i}(s,X_s) \in \mathcal{H}_2^M([0,\infty)\times\mathcal{S})$ by book 7's proposition 6.18 if it is shown that $f_{x_i}(s,X_s)$ is predictable, and this will follow by continuity and that book's corollary 5.17 if $f_{x_i}(s,X_s)$ is adapted. Fixing $s$, the function $f_{x_i}(s,y) : \mathbb{R}^m \to \mathbb{R}$ is continuous in $y$ and hence Borel measurable in $y$ by book 5's proposition 1.4, so $f_{x_i}^{-1}(s,\cdot)(A) \in \mathcal{B}(\mathbb{R}^m)$ for all $A \in \mathcal{B}(\mathbb{R})$. Since $X$ is adapted: $[f_{x_i}(s,X_s)]^{-1}(A) = X_s^{-1}\left[f_{x_i}^{-1}(s,\cdot)(A)\right] \in \sigma_s(\mathcal{S})$, and thus $f_{x_i}(s,X_s)$ is adapted.
Now from proposition 3.55 it follows that as $\mu_{\Pi_n} \to 0$:
$$I_2^{M(i)} \to_P \int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)}.$$
6. Summation $I_3$: With apparent notation and ignoring the $\frac{1}{2}$:
$$I_3 = \sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n f_{x_ix_j}(t_{k-1},\xi_k)\left(X_{t_k}^{(i)} - X_{t_{k-1}}^{(i)}\right)\left(X_{t_k}^{(j)} - X_{t_{k-1}}^{(j)}\right) \equiv \sum_{i=1}^m\sum_{j=1}^m I_3^{(i,j)}.$$
In the same way that the derivation in 5 identically followed the analogous derivation in 5 of proposition 5.9, we leave it as an exercise that following identically steps 6, 7, and 8 of that proof will obtain that for all $i$:
$$I_3^{(i,i)} \to \int_0^t f_{x_ix_i}(s,X_s)\,d\left\langle M^{(i)}\right\rangle_s = \int_0^t f_{x_ix_i}(s,X_s)\,d\left\langle M^{(i)},M^{(i)}\right\rangle_s,$$
where this last step is remark 6.26 of book 7. Thus we focus on $I_3^{(i,j)}$ for $i \ne j$.
Splitting $X_{t_k}^{(j)} = X_0^{(j)} + F_{t_k}^{(j)} + M_{t_k}^{(j)}$, etc., and substituting into $I_3^{(i,j)}$ obtains:
$$I_3^{(i,j)} = I_3^{F^{(i)},F^{(j)}} + I_3^{F^{(i)},M^{(j)}} + I_3^{M^{(i)},F^{(j)}} + I_3^{M^{(i)},M^{(j)}}.$$
The proof will be completed by showing that the first three summations converge to 0 $\lambda$-a.e., while the fourth provides the desired term in 5.16:
$$I_3^{M^{(i)},M^{(j)}} \to \int_0^t f_{x_ix_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s, \quad \lambda\text{-a.e.} \tag{*}$$
since the supremum converges to 0 $\lambda$-a.e. by continuity, while the sum converges to the variation of $F_s^{(j)}$ by definition. The next two summations are addressed identically. For example, by the Cauchy-Schwarz inequality:
$$\left(I_3^{F^{(i)},M^{(j)}}\right)^2 \le \sum_{k=1}^n f_{x_ix_j}^2(t_{k-1},\xi_k)\left(F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right)^2\sum_{k=1}^n\left(M_{t_k}^{(j)} - M_{t_{k-1}}^{(j)}\right)^2 \le K^2\sup_k\left|F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right|\sum_{k=1}^n\left|F_{t_k}^{(i)} - F_{t_{k-1}}^{(i)}\right|\sum_{k=1}^n\left(M_{t_k}^{(j)} - M_{t_{k-1}}^{(j)}\right)^2 \to 0, \quad \lambda\text{-a.e.}$$
This follows because the supremum converges to 0 as above, and the first summation converges to the variation of $F_s^{(i)}$ by definition. The second summation converges in probability to $\left\langle M^{(j)}\right\rangle_t$ by book 7's proposition 6.5, and thus this implies $\lambda$-a.e. convergence for a subsequence of partitions with $\mu_{\Pi_{n_l}} \to 0$ by book 2's proposition 5.25.
To prove (*), we begin by recalling definition 6.25 of book 7, that given the partition $\Pi_n$ in 3:
$$\left(M_{t_k}^{(i)} - M_{t_{k-1}}^{(i)}\right)\left(M_{t_k}^{(j)} - M_{t_{k-1}}^{(j)}\right) = Q_{t_k}^{\Pi_n}\left[M^{(i)},M^{(j)}\right] - Q_{t_{k-1}}^{\Pi_n}\left[M^{(i)},M^{(j)}\right].$$
With the above identity for $I_3^{M^{(i)},M^{(j)}}$, book 7's 6.20 and proposition 4.14, this will prove (*).
To simplify notation we prove (*) with a "$+$", as the derivations are identical. For each $n$ and partition $\Pi_n$ of $[0,t]$, let $Q_s^{(n)} \equiv Q_s^{\Pi_n}\left[M^{(i)} + M^{(j)}\right]$ be defined as in definition 6.2 of book 7, and thus $Q_s^{(n)}$ is increasing and continuous $\lambda$-a.e., and define the step function:
Then:
$$\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)} = \sum_{k=1}^n f_{x_ix_j}(t_{k-1},\xi_k)\left[Q_{t_k}^{\Pi_n}\left[M^{(i)} + M^{(j)}\right] - Q_{t_{k-1}}^{\Pi_n}\left[M^{(i)} + M^{(j)}\right]\right],$$
Hence:
$$\left|\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)} - \int_0^t f_{x_ix_j}(s,\xi_k)\,dQ_s^{(n)}\right| \le \int_0^t\left|f_n(s,\xi_k) - f_{x_ix_j}(s,\xi_k)\right|dQ_s^{(n)} < \epsilon\int_0^t dQ_s^{(n)} = \epsilon Q_t^{(n)},$$
and thus this convergence is $\lambda$-a.e. for a subsequence of partitions by book 2's proposition 5.25. Since $\epsilon > 0$ is arbitrary, this proves that $\int_0^t f_n(s,\xi_k)\,dQ_s^{(n)}$ and $\int_0^t f_{x_ix_j}(s,\xi_k)\,dQ_s^{(n)}$ have the same limit as $n \to \infty$, and thus (*) can be proved by showing that:
$$\int_0^t f_{x_ix_j}(s,\xi_k)\,dQ_s^{(n)} \to \int_0^t f_{x_ix_j}(s,\xi_k)\,d\left\langle M^{(i)} + M^{(j)}\right\rangle_s, \quad \lambda\text{-a.e.}$$
To this end, since $M^{(i)} + M^{(j)}$ is also a local martingale by book 7's corollary 5.85, it follows by that book's proposition 6.12 that:
$$\sup_{s\in[0,t]}\left|Q_s^{(n)} - \left\langle M^{(i)} + M^{(j)}\right\rangle_s\right| \to_P 0.$$
Each of these quotients is then $\lambda$-a.e. a distribution function on $[0,t]$, respectively $F_n(s)$ and $G(s)$, and this result implies that $F_n(s) \Rightarrow G(s)$, meaning weak convergence of the associated Borel measures (definition 8.2, book 2). Thus by the portmanteau theorem of book 6's proposition 4.4, it follows that $\lambda$-a.e.:
$$\int_0^t f_{x_ix_j}(s,\xi_k)\,dF_n(s) \to \int_0^t f_{x_ix_j}(s,\xi_k)\,dG(s),$$
and since $Q_t^{(n)} \to \left\langle M^{(i)} + M^{(j)}\right\rangle_t$ $\lambda$-a.e., the proof is complete.
7. Summary: The above steps prove that for each $t$, $I_1 + I_2 + I_3$ converges to the right hand side of 5.16 $\lambda$-a.e. for a subsequence of partitions with $\mu_{\Pi_{n_k}} \to 0$.
Exercise 5.23 Verify that 5.16 reduces to 4.13 when $m = 2$ and $f(t,X_t) = X_t^{(1)}X_t^{(2)}$.
Here we use the notation from the section Stochastic Integration of Vector and Matrix Processes, and in particular that of 4.18, which separates the bounded variation and local martingale integrals. Specifically:

1. $B_t(\omega) \equiv \left(B_t^{(1)}(\omega),\ldots,B_t^{(n)}(\omega)\right)$ is an $n$-dimensional Brownian motion defined on a probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$;
Then:
$$X_t = X_0 + \int_0^t u\,ds + \int_0^t v\,dB,$$
or in components:
$$dX_t^{(i)} = u_i(t,\omega)\,dt + \sum_{j=1}^n v_{ij}(t,\omega)\,dB_t^{(j)}, \tag{5.19}$$
with $X_0$ given.
For this statement, we assume that $u_i(s,\omega)$ and $v_{ij}(s,\omega)$ are locally bounded predictable processes on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. By proposition 4.21, $X_t$ is then an $m$-dimensional continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ if $X_0$ is $\sigma_0(\mathcal{S})$-measurable. Then in this case, $X_t - X_0 \in \mathcal{M}_{Sloc}$.
where $dX_t^{(i)}$ is given in 5.19.

$\lambda$-a.e. As noted in remark 5.5, this implies that $\lambda$-a.e., the identity in 5.21 is valid for all $t$.
Proof. Since $X_t$ is a continuous semimartingale by proposition 4.21, proposition 5.22 above applies. Comparing 5.21 with 5.16, we must justify that with $F_t^{(i)}(\omega) = \int_0^t u_i(s,\omega)ds$ and $M_t^{(i)}(\omega) = \sum_{k=1}^n\int_0^t v_{ik}(s,\omega)dB_s^{(k)}$:
$$\int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} = \int_0^t u_i(s,\omega)f_{x_i}(s,X_s)\,ds,$$
$$\int_0^t f_{x_ix_j}(s,X_s)\,d\left\langle M^{(i)},M^{(j)}\right\rangle_s = \int_0^t f_{x_ix_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,\omega)v_{jk}(s,\omega)\right]ds,$$
and
$$\int_0^t f_{x_i}(s,X_s)\,dM_s^{(i)} = \sum_{j=1}^n\int_0^t f_{x_i}(s,X_s)v_{ij}(s,\omega)\,dB_s^{(j)}.$$
5.7 MULTIVARIATE ITÔ DIFFUSION VERSION 247
$$\left\langle M^{(i)},M^{(j)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n\left\langle\int_0^{\cdot} v_{ik}(s,\omega)\,dB_s^{(k)},\int_0^{\cdot} v_{jl}(s,\omega)\,dB_s^{(l)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n\int_0^t v_{ik}(s,\omega)v_{jl}(s,\omega)\,d\left\langle B^{(k)},B^{(l)}\right\rangle_s = \sum_{k=1}^n\int_0^t v_{ik}(s,\omega)v_{jk}(s,\omega)\,ds.$$
The second identity now follows from book 7's proposition 5.19 and splitting the integrands $v_{ik}(s,\omega)v_{jk}(s,\omega)$ into positive and negative parts as above. Finally, the stochastic integral identity is the associative law in 3.72.
$$X_t(\omega) = X_0(\omega) + \int_0^t u(s,X_s(\omega))\,ds + \int_0^t v(s,X_s(\omega))\,dB_s(\omega),$$
with $X_0$ given.
Remark 5.26 (On SDEs 2) As noted in remark 5.14 for the one dimensional stochastic differential equation, it will be proved in book 9 that Borel measurability of the component functions in $u(s,x)$ and $v(s,x)$, and continuity and adaptedness of a process $X_t$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$, assure that the processes $u(s,\omega) \equiv u(s,X_s(\omega))$ and $v(s,\omega) \equiv v(s,X_s(\omega))$ are predictable stochastic processes on this space. However, to justify the existence of these integrals using the semimartingale integration theory also requires that $u(s,\omega)$ and $v(s,\omega)$ be locally bounded. This boundedness condition is difficult to specify in terms of $u(s,x)$ and $v(s,x)$, and so it is more natural to use a different criterion to justify existence.

Given a continuous, adapted process $X_t$, the integrals in 5.22 will be well defined if the components of $u(s,x)$ and $v(s,x)$ are Borel measurable functions:
$$u_i, v_{ij} : [0,\infty)\times\mathbb{R}^m \to \mathbb{R},$$
satisfying:
$$\Pr\left[\int_0^t |u_i(s,X_s)|\,ds < \infty,\ \text{all } t\right] = \Pr\left[\int_0^t v_{ij}^2(s,X_s)\,ds < \infty,\ \text{all } t\right] = 1. \tag{5.24}$$
The first integral in 5.22, $\int_0^t u_i(s,X_s(\omega))ds$, is defined pathwise $\lambda$-a.e. as a Lebesgue (or Lebesgue-Stieltjes) integral of book 5, and these integrals are adapted by corollary 3.74 with $M = B$ since $\langle B\rangle_s = s$ by corollary 2.10. Further, $\int_0^t v_{ij}(s,X_s(\omega))dB_s^{(j)}(\omega)$ is definable within the local martingale integration theory since 5.24 assures that $v_{ij}(s,X_s(\omega)) \in \mathcal{H}_{2,loc}^{B^{(j)}}([0,\infty)\times\mathcal{S})$.
The existence and uniqueness of $X_t$ that satisfies 5.22 will be seen to require somewhat more of $u(s,x)$ and $v(s,x)$ than Borel measurability to ensure that 5.24 is satisfied.

Finally, given this Borel measurability requirement on $u(s,x)$ and $v(s,x)$ and the assumption that a continuous, adapted $X_t$ exists that satisfies 5.24 and 5.22, it will also be proved in book 9 that such $X_t$ is a continuous semimartingale. It therefore makes sense to investigate the application of Itô's lemma to $f(t,X_t)$ for appropriate functions $f$.
where $dX_t^{(i)}$ is given in 5.23.

With $f(t,X_t)$ deemed a $1\times 1$ matrix, this result can be expressed in matrix notation:
$$df(t,X_t) = f_t(t,X_t)\,dt + \frac{1}{2}H(t,X_t)\bullet\left[v(t,X_t)v^T(t,X_t)\right]dt + \nabla f(t,X_t)\,dX_t$$
$$\equiv f_t(t,X_t)\,dt + \frac{1}{2}H(t,X_t)\bullet\left[v(t,X_t)v^T(t,X_t)\right]dt + \nabla f(t,X_t)\,u(t,X_t)\,dt + \nabla f(t,X_t)\,v(t,X_t)\,dB_t. \tag{5.26}$$
Here $\nabla f(t,X_t)$ is deemed a $1\times m$ row matrix, and this is multiplied by the $m\times 1$ column matrix $dX_t$. The last expression is similarly defined.
The following corollary is a very general restatement of 5.16, but we will be interested in investigating a few special cases in example 5.30 below. For the statement of the corollary below, the component Brownian motions $\{B^{(j)}\}_{j=1}^n$ are assumed to be independent processes (definition 1.36, book 7). By that book's proposition 1.42, this is equivalent to specifying that $B_t = (B^{(1)},\ldots,B^{(n)})$ is an $n$-dimensional Brownian motion on this space. See remark 5.29 below for a generalization.
Corollary 5.28 (Itô's lemma for an m-dimensional Itô diffusion) Given a Borel measurable $m$-vector $u(s,x)$ and $m\times n$ matrix $v(s,x)$ on $[0,\infty)\times\mathbb{R}^m$, let $X_t$ be an $m$-dimensional continuous, adapted process on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$ that satisfies 5.22 with independent Brownian motions $\{B^{(j)}\}_{j=1}^n$, and also satisfies 5.24. If $f(t,x) \in C^{1,2}([0,\infty)\times\mathbb{R}^m)$, then $f(t,X_t)$ is a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$, and for every $0 \le t < \infty$:
$$f(t,X_t) = f(0,X_0) + \int_0^t f_t(s,X_s)\,ds + \sum_{i=1}^m\int_0^t f_{x_i}(s,X_s)u_i(s,X_s)\,ds + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\int_0^t f_{x_ix_j}(s,X_s)\left[\sum_{k=1}^n v_{ik}(s,X_s)v_{jk}(s,X_s)\right]ds + \sum_{i=1}^m\sum_{j=1}^n\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)}, \tag{5.27}$$
$\lambda$-a.e. As noted in remark 5.5, this implies that $\lambda$-a.e., the identity in 5.27 is valid for all $t$.
Proof. As noted in remark 5.26, such $X_t$ is a generalized continuous semimartingale. Let:
$$F_t^{(i)}(\omega) = \int_0^t u_i(s,X_s(\omega))\,ds, \qquad M_t^{(i)}(\omega) = \sum_{j=1}^n\int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}.$$
As also noted in remark 5.26, the integrands for $F_t^{(i)}(\omega)$ and $M_t^{(i)}(\omega)$ are predictable, so by 5.24, $F_t^{(i)}$ is a continuous bounded variation process by proposition 4.14, and $M_t^{(i)}$ is a continuous local martingale by proposition 3.83 (recall exercise 5.78, book 7). Thus $F_t^{(i)}(\omega) + M_t^{(i)}(\omega)$ is a continuous semimartingale, 5.22 can be expressed:
$$X_t^{(i)}(\omega) = X_0^{(i)}(\omega) + F_t^{(i)}(\omega) + M_t^{(i)}(\omega),$$
and proposition 5.22 applied. Comparing 5.27 with 5.16, we verify equality of the component expressions.

Splitting $u_i(s,X_s(\omega))$ into positive and negative parts (book 5's definition 2.36) obtains by that book's proposition 3.6:
$$\int_0^t f_{x_i}(s,X_s)\,dF_s^{(i)} = \int_0^t f_{x_i}(s,X_s)u_i(s,X_s)\,ds.$$
Remark 5.29 (Dependent Brownian Motions) Let $B_t = (B_t^{(1)},\ldots,B_t^{(n)})$ be an $n$-dimensional Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$, and thus $\{B^{(j)}\}_{j=1}^n$ are independent Brownian motions by book 7's proposition 1.42. Recalling section 2.2.1, an $n\times n$ matrix $S'$ is positive definite if $x^TS'x > 0$ for all $x\in\mathbb{R}^n$ with $x \ne 0$. It was shown there that if $S' = \left(\rho_{kj}\right)$ is a positive definite matrix, where $\rho_{kk} = 1$, $\rho_{kj} = \rho_{jk}$, and $-1 \le \rho_{kj} \le 1$, then there exists a collection of 1-dimensional Brownian motions $\{\hat{B}^{(j)}\}_{j=1}^n$ on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_s(\mathcal{S}),\lambda)_{u.c.}$ with $\left\langle\hat{B}^{(j)},\hat{B}^{(k)}\right\rangle_t = \rho_{kj}t$. In addition, given such $\{\hat{B}^{(j)}\}_{j=1}^n$, there exists a lower triangular matrix $L = (l_{jk})$ so that $\hat{B}_t = LB_t$, or in components:
$$\hat{B}_t^{(j)} = \sum_{k=1}^j l_{jk}B_t^{(k)}. \tag{*}$$
1. The proof above and final equation in 5.27 can be modified to the case where $X_t(\omega)$ is defined in terms of the given $\{\hat{B}^{(j)}\}_{j=1}^n$. This does not change the $\int_0^t f_{x_i}(s,X_s)u_i(s,X_s)ds$-term, and $\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)dB_s^{(j)}$ becomes $\int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)d\hat{B}_s^{(j)}$. However, since $\left\langle\hat{B}^{(k)},\hat{B}^{(l)}\right\rangle_t = \rho_{kl}t$, 5.27 is modified to reflect that now:
$$\left\langle M^{(i)},M^{(j)}\right\rangle_t = \sum_{k=1}^n\sum_{l=1}^n\int_0^t v_{ik}(s,X_s(\omega))v_{jl}(s,X_s(\omega))\rho_{kl}\,ds.$$
The double sum integrand of this expression then replaces the bracketed expression in the integral of $f_{x_ix_j}(s,X_s)$, with the same justification as above.
2. The equation for $X_t(\omega)$ defined in terms of the given $\{\hat{B}^{(j)}\}_{j=1}^n$ can be expressed in terms of $\{B^{(j)}\}_{j=1}^n$ by substitution of (*), and 5.27 applied directly. The coefficient functions for these independent Brownian processes, $\{v'_{ij}(t,x)\}_{i=1,j=1}^{m,n}$, are then given in terms of the original collection, $\{v_{ij}(t,x)\}_{i=1,j=1}^{m,n}$, by $v'(t,x) = v(t,x)L$, or in components:
$$v'_{ij}(t,x) = \sum_{k=j}^n v_{ik}(t,x)l_{kj}.$$
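The construction in (*) is the standard Cholesky approach to building correlated Brownian motions from independent ones. The sketch below is not from the text; it assumes NumPy and a hypothetical correlation matrix, and verifies numerically that $\hat{B}_t = LB_t$ has covariance matrix $\rho\,t$:

```python
import numpy as np

rng = np.random.default_rng(3)
rho = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.4],
                [0.3, 0.4, 1.0]])   # hypothetical positive definite correlation matrix
L = np.linalg.cholesky(rho)          # lower triangular with L @ L.T == rho

t, n_paths = 1.0, 500_000
B = rng.normal(0.0, np.sqrt(t), size=(3, n_paths))  # independent B_t^{(k)} samples
Bhat = L @ B                                        # Bhat_t^{(j)} = sum_k l_{jk} B_t^{(k)}

emp_cov = Bhat @ Bhat.T / n_paths    # should approximate rho * t
max_err = np.abs(emp_cov - rho * t).max()
```

Lower triangularity of $L$ is what makes the sum in (*) run only over $k \le j$.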
1. 1-Dimensional Model: $m = n = 1$.

This is 5.11.

For example, if $m = 2$ and $f(t,X_t) = X_t^{(1)}X_t^{(2)}$, then:
$$d\left(X_t^{(1)}X_t^{(2)}\right) = X_t^{(1)}\,dX_t^{(2)} + X_t^{(2)}\,dX_t^{(1)}.$$
This is again equivalent to 4.12 since $\left\langle B^{(1)},B^{(2)}\right\rangle_t = 0$ and thus $\left\langle X^{(1)},X^{(2)}\right\rangle_t = 0$ by proposition 4.25.
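The vanishing cross-variation that eliminates the $d\langle X^{(1)},X^{(2)}\rangle_t$ term for independent Brownian motions can be seen numerically. The sketch below (not from the text; assuming NumPy) computes the partition sums that define quadratic variation and covariation:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 1.0, 2**16
dt = t / n

# Increments of two independent Brownian motions on a uniform partition of [0, t]
dB1 = rng.normal(0.0, np.sqrt(dt), size=n)
dB2 = rng.normal(0.0, np.sqrt(dt), size=n)

cross_var = np.sum(dB1 * dB2)   # partition sum for <B1, B2>_t, should be near 0
quad_var = np.sum(dB1 * dB1)    # partition sum for <B1>_t, should be near t
```

As the mesh shrinks, the cross sum converges to $\langle B^{(1)},B^{(2)}\rangle_t = 0$ while the diagonal sum converges to $\langle B^{(1)}\rangle_t = t$, consistent with the product rule above.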
Chapter 6
The next section investigates the use of Itô's Lemma to create local martingales from semimartingales using functions $f(t,x)$ that solve certain partial differential equations. We then introduce the Feynman-Kac representation theorem, which provides another linkage between partial differential equations and stochastic processes and which will be further developed in book 9. Finally we derive Dynkin's formula, and apply it to derive a special case of Kolmogorov's backward equation. This equation reflects yet another connection between partial differential equations and stochastic processes, though it is somewhat disguised in the current version. This result will be generalized and fully revealed in book 9.
256 CHAPTER 6 SOME APPLICATIONS OF ITÔ’S LEMMA
$$N_t^C \equiv N_t - E[N_t] = N_t - \mu t.$$
$$E\left[\exp\left[iu\cdot(X_t - X_s)\right]\big|\,\sigma_s\right] = \exp\left[-\frac{1}{2}u^T\left[(t-s)I\right]u\right], \tag{*}$$
where $(u\cdot u)$ denotes the inner product. With $f(x,y) = \exp[ix + y/2]$, define the continuous process $M_t \equiv f(Y_t,\langle Y\rangle_t)$.
By Euler's formula (6.13, book 5), $M_t$ as defined above is bounded over compact intervals: $|M_t| \le \exp[t(u\cdot u)/2]$, and since continuous and adapted, $M_t$ is predictable (corollary 5.17, book 7). Thus $M_t \in \mathcal{H}_{2,loc}^Y([0,\infty)\times\mathcal{S})$ and it follows from proposition 3.83 that $\int_0^t M_s\,dY_s$ is a continuous local martingale, and hence by the above identity so too is $M_t$. But a locally bounded local martingale is a martingale by proposition 5.88 of book 7, and so:
$$E[M_t - M_s\,|\,\sigma_s] = 0.$$
$$E\left[\exp\left[iu\cdot(X_t - X_s)\right]\right] = \exp\left[-\frac{1}{2}u^T\left[(t-s)I\right]u\right],$$
where we assume that $u(s,\omega)$ and $v(s,\omega)$ are locally bounded (definition 4.17) predictable processes on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$. By proposition 4.21, $X_t$ is then a continuous semimartingale on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ if $X_0$ is $\sigma_0(\mathcal{S})$-measurable, and so $X_t - X_0 \in \mathcal{M}_{Sloc}$. A natural question is: when is such an Itô process equal to another Brownian motion on $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$? Identifying the too easy case of $X_0 = u(s,\omega) = 0$ and $v(s,\omega) = 1$, we can investigate this using 3 of Lévy's characterization.

1. $X_0 = 0$, $\lambda$-a.e.,
2. $u(t,\omega) = 0$, $m\times\lambda$-a.e.,
3. $v^2(t,\omega) = 1$, $m\times\lambda$-a.e.
Book 5's proposition 5.14 obtains that $A_\omega \in \mathcal{B}([0,\infty))$ for all $\omega$, that $h(\omega) \equiv m(A_\omega)$ is $\lambda$-integrable, and:
$$m\times\lambda(A) = \int_{\mathcal{S}} h(\omega)\,d\lambda. \tag{2}$$
We now repeat the above argument with $B \equiv (v^2)^{-1}(\mathbb{R} - \{1\})$, noting that $m\times\lambda(B) = 0$ by 3. In detail, for each $\omega$ define $B_\omega$ by:
is a continuous local martingale (exercise 5.78, book 7). Also by proposition 4.21, $\int_0^t u(s,\omega)ds$ is a continuous, adapted process of bounded variation.

Book 7's proposition 6.1 proves that if the continuous local martingale $\int_0^t u(s,\omega)ds$ is a bounded variation process, then it must be constant $\lambda$-a.e., and by continuity this constant must be 0. Denote by $A'$ the set with $\lambda(A') = 1$ so that:
$$\int_0^t u(s,\omega)\,ds = 0, \quad \text{for all } t.$$
Since locally bounded and progressively measurable (3 of book 7's proposition 5.19), $u(s,\omega)$ is Borel measurable in $s$ for all $\omega$ by book 5's proposition 5.19. Then by book 3's proposition 3.34, given $\omega \in A'$ we conclude that $u(t,\omega) = 0$, $m$-a.e. This proves that there exists $A' \in \sigma(\mathcal{S})$ with $\lambda(A') = 1$, and for each $\omega \in A'$ there exists $A_\omega \in \mathcal{B}([0,\infty))$ with $m(A_\omega) = 0$, so that $u(t,\omega) \ne 0$ only for $\omega \in A'$, $t \in A_\omega$.

Now define $A$, $A_\omega$ and $h(\omega)$ as in part 1 of the proof. If $\omega \in A'$, then $A_\omega$ agrees with the set defined above, and so:
$$h(\omega) \equiv m(A_\omega) = 0.$$
Thus $h(\omega) = 0$, $\lambda$-a.e., and from (2) this proves $m\times\lambda(A) = 0$, which is 2.
Thus if $X_t$ is a Brownian motion, then $\lambda$-a.e.:
$$X_t = \int_0^t v(s,\omega)\,dB_s, \quad \text{all } t.$$
Recalling that $\langle B\rangle_t = t$ from corollary 2.10, 3.69 obtains that $\lambda$-a.e.:
$$\langle X\rangle_t = \int_0^t v^2(s,\omega)\,ds, \quad \text{all } t.$$
In components:
$$X_t \equiv \begin{pmatrix} X_0^{(1)} + \int_0^t u_1(s,\omega)\,ds + \sum_{j=1}^n\int_0^t v_{1j}(s,\omega)\,dB_s^{(j)}(\omega) \\ X_0^{(2)} + \int_0^t u_2(s,\omega)\,ds + \sum_{j=1}^n\int_0^t v_{2j}(s,\omega)\,dB_s^{(j)}(\omega) \\ \vdots \\ X_0^{(m)} + \int_0^t u_m(s,\omega)\,ds + \sum_{j=1}^n\int_0^t v_{mj}(s,\omega)\,dB_s^{(j)}(\omega) \end{pmatrix}.$$
Recall that $u \in \mathcal{H}_{loc}^{bP(m\times1)}([0,\infty)\times\mathcal{S})$ and $v \in \mathcal{H}_{loc}^{bP(m\times n)}([0,\infty)\times\mathcal{S})$ of definition 4.35, which is to say that for all $i,j$, $u_i(s,\omega), v_{ij}(s,\omega) \in \mathcal{H}_{loc}^{bP}([0,\infty)\times\mathcal{S})$ of definition 4.17.
The conclusion below generalizes the above result and also book 7's proposition 2.61. This latter result proved that if $B_t(\omega) \equiv \left(B_t^{(1)}(\omega),\ldots,B_t^{(n)}(\omega)\right)$ is an $n$-dimensional Brownian motion defined on a probability space $(\mathcal{S},\sigma(\mathcal{S}),\sigma_t(\mathcal{S}),\lambda)_{u.c.}$ and $R$ is an $n\times n$ real rotation matrix, then $RB$ is a Brownian motion. Recall that a rotation matrix can be specified by $RR^T = I_{n\times n}$, or equivalently $R^T = R^{-1}$, where the transpose matrix is defined by $R_{ij}^T \equiv R_{ji}$.

In addition to generalizing from a fixed $n\times n$ real rotation matrix $R$ to a stochastically defined $n\times n$ rotation matrix $R \equiv v(t,\omega)$ when $m = n$, the next result also allows for the case $m \ne n$. In the special case where $m < n$, $v(t,\omega)$ provides an orthogonal projection $\mathbb{R}^n \to \mathbb{R}^m$, while the case $m > n$ provides an orthogonal embedding $\mathbb{R}^n \to \mathbb{R}^m$. See Strang (2009) for more on this.

Note that while we denote $v(t,\omega) \equiv (v_{ij}(t,\omega))_{i=1,j=1}^{m,n}$ for simplicity, this collection is treated as an $m\times n$ matrix.
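The fixed-rotation case of book 7's proposition 2.61 can be illustrated numerically. The sketch below is not from the text; it assumes NumPy, builds a hypothetical $2\times 2$ rotation matrix, and checks that the rotated process has the identity covariance structure of a Brownian motion at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(11)
theta = 0.7   # hypothetical rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: R @ R.T = I

t, n_paths = 1.0, 500_000
B = rng.normal(0.0, np.sqrt(t), size=(2, n_paths))  # independent components of B_t
X = R @ B                                           # candidate Brownian motion RB

emp_cov = X @ X.T / n_paths   # should approximate t * I for a Brownian motion
err = np.abs(emp_cov - t * np.eye(2)).max()
```

The same covariance identity is what conditions 1-3 below encode for the stochastic matrix $v(t,\omega)$: the rows of $v$ must remain orthonormal, so that $vv^T = I_{m\times m}$.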
264 CHAPTER 6 SOME APPLICATIONS OF ITÔ’S LEMMA
1. $X_0 = 0$, $\mu$-a.e.,
Proof. We will follow the logic of the proof of proposition 6.2, redefining sets as appropriate.
1. $1-3$ Imply Brownian Motion: Every such $X_t$ is an $m$-dimensional continuous semimartingale by proposition 4.21 and definition 4.34. Define $A = \bigcup_{i=1}^m u_i^{-1}(\mathbb{R} - \{0\})$, and note that the complement of $A$:
$$\tilde{A} = \bigcap_{i=1}^m u_i^{-1}(0).$$
Now define:
$$B \equiv \bigcup_{i\neq j}\left[\left(vv^T\right)_{i,j}\right]^{-1}(\mathbb{R} - \{0\}) \bigcup \bigcup_i\left[\left(vv^T\right)_{i,i}\right]^{-1}(\mathbb{R} - \{1\}).$$
This then obtains from (2) that $\mu$-a.e., $\left\langle X^{(i)}, X^{(j)}\right\rangle_t = t$ for $i = j$ and $0$ otherwise. Hence $X_t$ is a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ by 3 of proposition 6.1.
2. Brownian Motion Implies $1-3$: Book 7's proposition 1.42 obtains that $X_t = \left(X_t^{(1)}, \ldots, X_t^{(m)}\right)$ is an $m$-dimensional Brownian motion if and only if the $X_t^{(i)}$ are independent 1-dimensional Brownian motions. But these component processes differ from the model of proposition 6.2 due to the $n$-dimensional Brownian motion, and so we need to check the details to infer 2. For given $i$, $X_t^{(i)}$ is a continuous martingale by proposition 2.8, and then a continuous local martingale by book 7's corollary 5.85, and $\sum_{j=1}^n \int_0^t v_{ij}(s,\omega)\,dB_s^{(j)}(\omega)$ is a continuous local martingale by proposition 3.83 and book 7's exercise 5.78.
Thus:
$$\int_0^t u_i(s,\omega)\,ds = X_t^{(i)} - \sum_{j=1}^n \int_0^t v_{ij}(s,\omega)\,dB_s^{(j)}(\omega),$$
is a continuous local martingale (exercise 5.78, book 7), and by proposition 4.21, $\int_0^t u_i(s,\omega)\,ds$ is a continuous, adapted process of bounded variation. By book 7's proposition 6.1, $\int_0^t u_i(s,\omega)\,ds$ must be constant $\mu$-a.e., and by continuity this constant must be $0$. Repeating the steps of part 2 of the proof of proposition 6.2 obtains that $(\mu\times m)(A_i) = 0$ for each $i$, with $A_i = u_i^{-1}(\mathbb{R} - \{0\})$. Thus $(\mu\times m)(A) = 0$ where $A = \bigcup_{i=1}^m u_i^{-1}(\mathbb{R} - \{0\})$, which is 2.
As $X_0 = 0$ by definition of a Brownian motion, we obtain from the previous step that $\mu$-a.e., $X_t$ satisfies (1). Then by the same steps as in part 1:
$$\left\langle X^{(i)}, X^{(j)}\right\rangle_t = \int_0^t \left(v(s,\omega)v^T(s,\omega)\right)_{i,j}\,ds,$$
and with 3 of proposition 6.1 we can conclude that $\mu$-a.e.:
$$\int_0^t \left(v(s,\omega)v^T(s,\omega)\right)_{i,j}\,ds = \begin{cases} t, & i = j, \\ 0, & i \neq j, \end{cases} \quad \text{all } t. \tag{3}$$
$$k(\omega) \equiv m(\tilde{B}_\omega) = m(B_\omega) = 0.$$
$$T(t) = \langle M\rangle_t.$$
Example 6.5 (The Wiener integral) Recall the Wiener integral of proposition 2.58 as an example of this result. There it was proved that if $B_t(\omega)$ is a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$ and $v(s)$ a continuous function defined on $[0,\infty)$, then for every $t$:
$$\int_0^t v(s)\,dB_s(\omega) \sim N\left(0, \int_0^t v^2(s)\,ds\right).$$
In other words, the Itô integral is normally distributed with expectation $0$ and variance $\int_0^t v^2(s)\,ds$. But we also know that $M_t \equiv \int_0^t v(s)\,dB_s(\omega)$ is a continuous local martingale by proposition 3.83, and that by proposition 3.89 and corollary 2.10:
$$\langle M\rangle_t = \int_0^t v^2(s)\,ds.$$
Now assume that $v(s) > 0$ for all $s$ and $\int_0^\infty v^2(s)\,ds = \infty$. Then $T(t) \equiv \langle M\rangle_t$ is strictly monotonic and unbounded, and thus has a well defined inverse $T^{-1}$ defined on $[0,\infty)$. Define a new process on $(S, \sigma(S), \sigma_{T^{-1}(t)}(S), \mu)$:
$$\tilde{B}_t \equiv M_{T^{-1}(t)},$$
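The distributional claim of this example is easy to probe numerically. The following Python sketch (illustrative only; the function names and parameter choices are not from the text) approximates the Wiener integral by a left-endpoint Itô–Riemann sum over a fine grid and compares the sample variance with $\int_0^t v^2(s)\,ds$:

```python
import numpy as np

# Monte Carlo sketch of the Wiener integral: for a deterministic continuous
# v(s), int_0^t v(s) dB_s is N(0, int_0^t v^2(s) ds).
rng = np.random.default_rng(0)

def wiener_integral(v, t, n_steps, n_paths, rng):
    """Approximate int_0^t v(s) dB_s by a left-endpoint Ito-Riemann sum."""
    dt = t / n_steps
    s = np.arange(n_steps) * dt                       # left endpoints
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    return (v(s) * dB).sum(axis=1)

v = lambda s: np.exp(-s)            # a continuous integrand with v(s) > 0
t = 1.0
samples = wiener_integral(v, t, 200, 50_000, rng)

# Theoretical variance: int_0^1 e^{-2s} ds = (1 - e^{-2}) / 2
var_theory = (1.0 - np.exp(-2.0)) / 2.0
print(samples.mean(), samples.var(), var_theory)
```

The sample mean should be near $0$ and the sample variance near $(1 - e^{-2})/2 \approx 0.432$, up to Monte Carlo and discretization error.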
(definition 5.4, book 7) because the original filtration has this property. Finally, $\{\sigma_{T^{-1}(t)}\}$ is right continuous since $T^{-1}$ is continuous (though only right continuity is needed for this). In summary, $\{\sigma_{T^{-1}(t)}\}$ satisfies the usual conditions and thus:
$$S \equiv \sup\nolimits_t F(t);$$
5. For $0 \leq s, t < \infty$:
$$s < F(t) \Longleftrightarrow F^\#(s) < t;$$
$$F^\#(s) \leq t \Rightarrow s \leq F(t).$$
Proof. We take these in turn, with many steps simply working through the definitions.
1: That $F^\#$ is nondecreasing follows from the definition since $F$ is nondecreasing, while the last statement is by definition. Right continuity follows by definition for $s \in [S,\infty)$, so assume $s \in [0,S)$ and $F^\#(s) = t$. Then by definition $F(t+\epsilon) > s$ for all $\epsilon > 0$. Now if $s < r < F(t+\epsilon)$, then $F^\#(r) \leq t + \epsilon = F^\#(s) + \epsilon$. Letting $\epsilon \to 0$, right continuity of $F$ obtains that $\lim_{r\to s^+} F^\#(r) \leq F^\#(s)$, and thus $\lim_{r\to s^+} F^\#(r) = F^\#(s)$ since $F^\#$ is nondecreasing.
2: If $s \in [S,\infty)$ then $F(F^\#(s)) = F(\infty) = S$, so assume $s \in [0,S)$ and $F^\#(s) = t$. If $t = 0$ then $s = 0$ by definition and continuity of $F$, recalling $F(0) = 0$, so we can assume $t > 0$. As above $F(t+\epsilon) \equiv F(F^\#(s)+\epsilon) > s$ for all $\epsilon > 0$, and thus by right continuity of $F$, $F(F^\#(s)) \geq s$. Similarly, for $0 < \epsilon < t$ the definition obtains $F(t-\epsilon) \equiv F(F^\#(s)-\epsilon) \leq s$, while this and continuity of $F$ yield $F(F^\#(s)) \leq s$ to complete the proof.
3: By definition, if $F(t) = S$ then $F^\#(F(t)) = \infty = \sup\{r \geq t \mid F(r) = F(t)\}$ by monotonicity. If $F(t) < S$ then by monotonicity and continuity of $F$:
$$F^\#(F(t)) \equiv \inf\{r \geq 0 \mid F(r) > F(t)\} = \sup\{r \geq t \mid F(r) = F(t)\}.$$
4: That $\varphi(F^\#(s))$ is right continuous follows from continuity of $\varphi$ and right continuity of $F^\#$ by 1. There is nothing left to prove for $s = 0$, so assume $s \in (0,S)$ and let $0 < s_n \to s^-$. Then $F^\#(s_n)$ is nondecreasing by 1, and since bounded by $F^\#(s) < \infty$ this sequence has a limit. To prove $\lim_n \varphi\left(F^\#(s_n)\right) = \varphi\left(F^\#(s)\right)$ and left continuity, first note that by continuity of $F$, and then 2 since $s_n < S$:
and thus:
$$\varphi(F^\#(s)) = \lim_n \varphi\left(F^\#(s_n)\right).$$
Finally, 2 obtains that $F(F^\#(F(t))) = F(t)\wedge S = F(t)$, and thus the identity $\varphi(F^\#(F(t))) = \varphi(t)$ is a consequence of the "steps" property of $\varphi$.
5: If $s < F(t)$ then by monotonicity of $F^\#$ and 3 it follows that:
If $F^\#(s) < t$ then $s\wedge S < F(t)$ by monotonicity of $F$ and 2. But as $F(t) \leq S$, the result follows. For the second implication, by monotonicity of $F$ and 2:
since $F(t) \leq S$.
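The generalized inverse $F^\#(s) \equiv \inf\{t \geq 0 \mid F(t) > s\}$ treated above can be illustrated numerically. The Python sketch below (a grid approximation with illustrative choices, not anything from the text) exhibits properties 1 and 2, and shows how a flat stretch of $F$ becomes a jump of $F^\#$:

```python
import numpy as np

# Numerical sketch of the right-continuous generalized inverse
#   F#(s) = inf{ t >= 0 : F(t) > s }
# for a nondecreasing continuous F with F(0) = 0.
t_grid = np.linspace(0.0, 10.0, 100_001)

def F(t):
    # nondecreasing and continuous, flat on [2, 3] so the inverse jumps
    return np.minimum(t, 2.0) + np.maximum(t - 3.0, 0.0)

F_vals = F(t_grid)

def F_sharp(s):
    """inf{t : F(t) > s} on the grid: first index with F strictly above s."""
    idx = np.searchsorted(F_vals, s, side='right')
    return t_grid[min(idx, len(t_grid) - 1)]

# Property 2: F(F#(s)) = s for s below sup F (up to grid resolution)
for s in [0.5, 1.99, 2.0, 4.0]:
    assert abs(F(F_sharp(s)) - s) < 1e-3

# The flat stretch of F on [2, 3] maps to a jump of F#: F#(2) lands at
# the right end of the interval where F is constant.
print(F_sharp(2.0))   # approximately 3
```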
With the preliminary work done, we are ready to state and prove the main result of this section. The logical flow of the proof is to generalize the derivation of example 6.5 to this more general context.
and thus by definition 6.6, $T_s = F^\#(s)$ for the increasing function (proposition 6.12, book 7) $F(t) \equiv \langle M\rangle_t$. Then $T_s$ is a stopping time relative to $\{\sigma_t(S)\}_{u.c.}$ for all $s$; the filtration $\{\sigma'_s(S)\} \equiv \{\sigma_{T_s}(S)\}$ (definition 5.52, book 7) satisfies the usual conditions; and, for all $t$:
$$T'_t \equiv \langle M\rangle_t,$$
$$\left\langle M^{T_{s^2}}\right\rangle_t = \langle M\rangle_{t\wedge T_{s^2}} \leq \langle M\rangle_{T_{s^2}} = s^2,$$
where the last step is 2) since $\langle M\rangle_{T_{s^2}} = F\left(F^\#(s^2)\right)$. This implies that $E\left[\left\langle M^{T_{s^2}}\right\rangle_t\right] \leq s^2$ for all $t$ and then $E\left[\left\langle M^{T_{s^2}}\right\rangle_\infty\right] \leq s^2$, noting that $\left\langle M^{T_{s^2}}\right\rangle_\infty$ is well defined by monotonicity and boundedness.
If $\{T_n\}$ is a localizing sequence for $M_t$ for which, say, $|M_{t\wedge T_n}| \leq n$ (proposition 5.75, book 7), then by the Doob-Meyer decomposition theorem of that book's proposition 6.5 applied to the martingale $M_{t\wedge T_n}$, then stopped at $T_{s^2}$:
$$M^2_{t\wedge T_n\wedge T_{s^2}} = \langle M\rangle_{t\wedge T_n\wedge T_{s^2}} + M'_{t\wedge T_n\wedge T_{s^2}}.$$
6.1 LÉVY'S CHARACTERIZATION OF N-DIMENSIONAL BM 273
Here $M'_t$ is a continuous local martingale on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$, with the same localizing sequence as $M_t$. Since $M'_{t\wedge T_n}$ is a martingale and $T_{s^2} < \infty$ $\mu$-a.e. by unboundedness of $\langle M\rangle_t$, it follows from book 7's proposition 5.80 that $E\left[M'_{t\wedge T_n\wedge T_{s^2}}\right] = 0$ and thus (corollary 6.16, book 7):
$$E\left[M^2_{t\wedge T_n\wedge T_{s^2}}\right] = E\left[\langle M\rangle_{t\wedge T_n\wedge T_{s^2}}\right] \leq s^2. \tag{1}$$
The proof of book 6's proposition 5.8 then obtains from (1) that $\{M_{t\wedge T_n}\}_n$ are uniformly integrable for each $t$, and hence $M_t$ is a martingale by proposition 5.88 of book 7. Applying Fatou's lemma (proposition 2.18, book 5) to (1) proves that $M_t$ is an $L^2$-bounded martingale:
$$E\left[M_t^2\right] = E\left[\lim_n M^2_{t\wedge T_n}\right] \leq \liminf_n E\left[M^2_{t\wedge T_n}\right] = \lim_n E\left[\langle M\rangle_{t\wedge T_n}\right] \leq s^2, \tag{2}$$
and then also that $M_t$ is a uniformly integrable martingale by book 7's proposition 5.99.
2: $M_t^2 - \langle M\rangle_t$ is a uniformly integrable martingale:
Uniform integrability of $M_t$ and book 7's proposition 5.112 obtain the existence of $M_\infty$ so that $M_t \to_{L^1} M_\infty$ and $M_t = E[M_\infty \mid \sigma_t(S)]$, $\mu$-a.e. By (2) and Fatou's lemma:
$$E\left[M_\infty^2\right] = E\left[\lim_t M_t^2\right] \leq \liminf_t E\left[M_t^2\right] = \lim_t E\left[\langle M\rangle_t\right] = E\left[\langle M\rangle_\infty\right] \leq s^2.$$
$$M_t^2 \leq E\left[M_\infty^2 \mid \sigma_t(S)\right].$$
For this derivation the second last step follows from optional stopping because $M_t^2 - \langle M\rangle_t$ is a martingale, while the last step is the definition of $T_s$.
Thus since adapted as noted above and integrable by definition, $B_s$ is a martingale on $(S, \sigma(S), \sigma'_s(S), \mu)_{u.c.}$ by the first derivation, and is square integrable by the second. Also $\langle B\rangle_s = s$ by the second derivation and book 7's proposition 6.12, since it proves $B_s^2 - s$ is a martingale.
4: $B_s$ is continuous $\mu$-a.e. and is thus a Brownian motion on $(S, \sigma(S), \sigma'_s(S), \mu)_{u.c.}$:
First, $B_0 = M_{T_0} = 0$. This is apparent if $T_0 = 0$ by the restriction on $M$, so assume $T_0 = b > 0$. Then by book 7's corollary 6.14, $M_t = M_0 = 0$ for $t \in [0,b]$ with probability 1, and so again $M_{T_0} = 0$. Thus once it is proved that $B_s$ is continuous $\mu$-a.e., it will be a Brownian motion by Lévy's characterization. Now $T_s$ is right continuous in $s$ by 1), and thus so too is $B_s$.
For left continuity, fix $t_0 \geq 0$ and define $S_{t_0} = \inf\{t \mid \langle M\rangle_t > \langle M\rangle_{t_0}\}$. Note that with $s_0 \equiv \langle M\rangle_{t_0}$ that $S_{t_0} = T_{s_0}$ and thus $S_{t_0}$ is a stopping time relative to $\{\sigma_t(S)\}_{u.c.}$. Since $S_{t_0} \geq t_0$ by monotonicity, left continuity of $M_{T_s}$ at $T_{s_0}$ requires that
$$B_{s_0} \equiv M_{T_{s_0}} = M_{t_0}. \tag{3}$$
In other words, left continuity at $T_{s_0}$ requires that $M$ be constant on the interval $\left[t_0, T_{s_0}\right] \supseteq \left[T_{s_0^-}, T_{s_0}\right]$, where $T_{s_0^-}$ is the left limit of $T_s$ as $s \to s_0^-$. Now if $S_{t_0} = t_0$ then (3) is satisfied since $S_{t_0} = T_{s_0}$ as noted above, and thus $B_s$ is continuous at $s_0$.
For the general case, define a process $N_s$ on $s \geq 0$ by:
$$N_s \equiv M_{(t_0+s)\wedge S_{t_0}} - M_{t_0} \equiv M^{S_{t_0}}_{t_0+s} - M_{t_0}.$$
Note that $N_0 = 0$ since $S_{t_0} \geq t_0$, and that $N_s$ is a continuous local martingale on $(S, \sigma(S), \sigma_{s+t_0}(S), \mu)_{u.c.}$ by Doob's optional stopping theorem of book 7's proposition 5.84. Further, by (6.12) of book 7 and that book's corollary 6.16:
$$\langle N\rangle_s = \langle M\rangle^{S_{t_0}}_{t_0+s} - \langle M\rangle_{t_0}.$$
To extend this inequality to $p > 2$ we will use Itô's lemma as the primary technical tool. For $0 < p < 2$ we require a different approach using
6.2 THE BURKHOLDER-DAVIS-GUNDY INEQUALITY 277
Note that the following two results provide statements given any stopping time $T$, and thus include $T \equiv \infty$. Thus if $A_\infty \equiv \lim_{t\to\infty} A_t$ is well-defined, the following assures the existence of $X_\infty$ and provides information on its distribution.
Then by 1:
$$E\left[(X^*_T)^\lambda\right] \leq \frac{2-\lambda}{1-\lambda}\,E\left[A_T^\lambda\right]. \tag{6.6}$$
Proof. Letting $F(x) = x^\lambda$, then $F(x) = \int_0^\infty \chi[y \leq x]\,dF(y)$ and thus:
$$E[F(X^*_T)] = \int_S\int_0^\infty \chi[y \leq X^*_T]\,dF(y)\,d\mu.$$
of book 5's proposition 5.22 thus allows a reordering of the iterated integral to obtain with 3 of proposition 6.12:
$$E[F(X^*_T)] = \int_0^\infty\int_S \chi[y \leq X^*_T]\,d\mu\,dF(y) = \int_0^\infty \Pr[X^*_T \geq y]\,dF(y) \leq \int_0^\infty\left(E[A_T \wedge y]/y + \Pr[A_T \geq y]\right)dF(y).$$
Now
where:
$$M^*_t \equiv \max_{0\leq s\leq t}|M_s|.$$
and the claim is proved with $t > N$. Applying corollary 6.13, for all $0 < \lambda < 1$ and all $t$:
$$E\left[(M^*_t)^{2\lambda}\right] \leq \frac{2-\lambda}{1-\lambda}\,E\left[\langle M\rangle_t^\lambda\right],$$
and the upper bound in 6.7 follows since $(M^2)^*_t = (M^*_t)^2$.
Similarly, the process $\langle M\rangle_t$ is dominated by $(M^*_t)^2$ since by ($*$):
$$E[\langle M\rangle_{t\wedge T}] = E\left[M^2_{t\wedge T}\right] \leq E\left[(M^*_{t\wedge T})^2\right],$$
and letting $t > N$ proves the claim. Thus for all $0 < \lambda < 1$ and all $t$:
$$E\left[\langle M\rangle_t^\lambda\right] \leq \frac{2-\lambda}{1-\lambda}\,E\left[(M^*_t)^{2\lambda}\right].$$
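The scaling content of these moment bounds can be probed by simulation. In the sketch below (path counts and parameters are illustrative, not from the text), $M = B$ is a Brownian motion, so $\langle B\rangle_t = t$ and the ratio $E[(B^*_t)^p]/t^{p/2}$ should be essentially constant in $t$, consistent with the two-sided comparison of $(M^*_t)^p$ and $\langle M\rangle_t^{p/2}$ in 6.7:

```python
import numpy as np

# For M = B a Brownian motion, <B>_t = t, so E[(B*_t)^p] / t^(p/2) should
# not depend on t (exact Brownian scaling); we check two values of t.
rng = np.random.default_rng(4)
n_paths, n_steps = 20_000, 300

def ratio(t, p):
    dt = t / n_steps
    B = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)).cumsum(axis=1)
    B_star = np.abs(B).max(axis=1)          # running maximum of |B| on [0, t]
    return (B_star ** p).mean() / t ** (p / 2.0)

r1, r4 = ratio(1.0, 3.0), ratio(4.0, 3.0)
print(r1, r4)     # approximately equal
```

The common value plays the role of the constants $c_p, C_p$ sandwiching the two moments; the simulation only confirms the $t^{p/2}$ scaling, not the constants themselves.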
2: $p \geq 2$:
Since $\langle M\rangle_t$ is continuous and adapted by book 7's proposition 6.12, $T'_n \equiv \inf\{t \mid \langle M\rangle_t \geq n\}$ is a stopping time by that book's proposition 5.60. Thus with $\{T_n\}$ a localizing sequence for $M_t$ with $M^{T_n}_t \equiv |M_{t\wedge T_n}| \leq n$ as above and $R_n \equiv T_n\wedge T'_n$, we prove the result for $M^{R_n}_t$ for $t < \infty$. Letting $n \to \infty$ and applying Lebesgue's monotone convergence theorem (proposition 2.21, book 5) obtains 6.7 for $t < \infty$, while letting $t \to \infty$ and applying monotone convergence again proves the case for $t = \infty$, though these expectations may be infinite.
To simplify notation from $M^{R_n}_t$, we prove 6.7 for $p \geq 2$ for a continuous martingale $M_t$ with $|M_t| \leq n$ and $\langle M\rangle_t \leq n$ for all $t < \infty$. The case $p = 2$ was derived in 6.4 above so assume $p > 2$. Letting $f(x) = |x|^p \in C^2(\mathbb{R})$, then $f'(x) = p|x|^{p-1}\operatorname{sgn}(x)$, $f''(x) = p(p-1)|x|^{p-2}$ and Itô's lemma of 5.2 obtains:
$$|M_t|^p = \int_0^t p|M_s|^{p-1}\operatorname{sgn}(M_s)\,dM_s + \frac{1}{2}p(p-1)\int_0^t |M_s|^{p-2}\,d\langle M\rangle_s.$$
$$\frac{1}{2}p(p-1)\left(\frac{p}{p-1}\right)^p E\left[(M^*_t)^p\right]^{1-1/q} E\left[\langle M\rangle_t^{p/2}\right]^{2/p},$$
Thus:
$$|N_t| \leq M^*_t\,\langle M\rangle_t^{(p-2)/4} + \int_0^t M^*_s\,d\langle M\rangle_s^{(p-2)/4} \leq 2M^*_t\,\langle M\rangle_t^{(p-2)/4}.$$
From ($*$) by applying book 7's proposition 6.18 that $E[\langle N\rangle_t] = E\left[|N_t|^2\right]$, and then Hölder's inequality with the above conjugate pair:
$$\frac{2}{p}E\left[\langle M\rangle_t^{p/2}\right] = E\left[|N_t|^2\right] \leq 4E\left[(M^*_t)^2\langle M\rangle_t^{(p-2)/2}\right] \leq 4\left(E\left[\langle M\rangle_t^{(p-2)q/2}\right]\right)^{1/q}\left(E[(M^*_t)^p]\right)^{2/p}.$$
Here again $2/p + 1/q = 1$ and thus $(p-2)q/2 = p/2$, so dividing obtains:
$$\frac{2}{p}\left(E\left[\langle M\rangle_t^{p/2}\right]\right)^{1-1/q} \leq 4\left(E[(M^*_t)^p]\right)^{2/p}.$$
Since $1 - 1/q = 2/p$, the lower bound is verified by exponentiation.
where $M^*_{t\wedge T} \equiv \max_{0\leq s\leq t\wedge T}|M_s|$. As $t \to \infty$, continuity assures that $\langle M\rangle_{t\wedge T}^{p/2} \to \langle M\rangle_T^{p/2}$ and $(M^*_{t\wedge T})^p \to (M^*_T)^p$ pointwise, and since these are increasing, 6.8 follows from Lebesgue's monotone convergence theorem of book 5's proposition 2.21.
and the conclusion again follows. In both cases, $L^p$-boundedness follows from 6.7.
The development of the next section is the first of three that will connect solutions of certain partial differential equations with those of stochastic differential equations. The other two results will be introduced in the next two sections and then developed more completely in book 9.
As noted in remark 5.5, this and continuity imply that $\mu$-a.e., this identity is valid for all $t$.
6.3 LOCAL MARTINGALES FROM SEMIMARTINGALES 285
and so:
$$\sigma^2_{ij}(s,X_s) \equiv \sum_{k=1}^n v_{ik}(s,X_s)v_{jk}(s,X_s). \tag{6.9}$$
$$\frac{\partial f}{\partial t}(s,x) + Lf(s,x) = g(s,x),$$
where $g(s,x)$ is Borel measurable and satisfies the $u$-constraint in 5.24, then:
$$M_t \equiv f(t,X_t) - \left[f(0,X_0) + \int_0^t g(s,X_s)\,ds\right] \in \mathcal{M}_{loc}.$$
Proof. The case $g(s,x) = 0$ follows from the general case, which we now address. As noted in remark 5.26, $g(s,X_s)$ will be proved to be predictable in book 9, and thus is progressively measurable by book 7's proposition 5.19. It then follows from the $u$-restriction in 5.24 that this integral is continuous and adapted by corollary 3.74. Then by substitution into Itô's lemma of 5.27 expressed as above:
$$f(t,X_t) - \left[f(0,X_0) + \int_0^t g(s,X_s)\,ds\right] = \sum_{i=1}^m\sum_{j=1}^n \int_0^t f_{x_i}(s,X_s)v_{ij}(s,X_s)\,dB_s^{(j)},$$
$\mu$-a.e.
The integrals on the right are local martingales by proposition 3.83 if $f_{x_i}(s,X_s)v_{ij}(s,X_s)$ is predictable, and:
$$\Pr\left[\int_0^t f_{x_i}^2(s,X_s)v_{ij}^2(s,X_s)\,ds < \infty, \text{ all } t\right] = 1.$$
By remark 5.26 it follows that $v_{ij}(s,X_s)$ is predictable, while the same book 9 result applied to continuous and thus Borel measurable $f_{x_i}(s,x)$ obtains predictability of $f_{x_i}(s,X_s)$. The integral constraint then follows since $v_{ij}(s,X_s(\omega))$ satisfies 5.24, and by continuity, $f_{x_i}(s,X_s)$ is bounded on $[0,t]$ for all $\omega$.
where $B_t$ is a Brownian motion on $(S, \sigma(S), \sigma_t(S), \mu)_{u.c.}$. The solution was derived in 5.13:
$$X_t = X_0\exp\left[\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t\right].$$
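For this solution, one immediate consequence is $E[X_t] = X_0 e^{\mu t}$, which is easy to confirm by direct sampling of $B_t$. A minimal Python sketch (the parameter values are illustrative only):

```python
import numpy as np

# Simulation sketch of the geometric Brownian motion solution of 5.13,
#   X_t = X_0 exp[(mu - sigma^2/2) t + sigma B_t],
# checking E[X_t] = X_0 e^{mu t} by sampling B_t ~ N(0, t) directly.
rng = np.random.default_rng(1)
x0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0

B_t = rng.normal(0.0, np.sqrt(t), 1_000_000)
X_t = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * B_t)

print(X_t.mean(), x0 * np.exp(mu * t))   # both approximately 1.0513
```

The $-\sigma^2 t/2$ correction in the exponent is exactly what makes the lognormal mean equal $X_0 e^{\mu t}$ rather than something larger.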
$$L \equiv \mu x\frac{\partial}{\partial x} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2}{\partial x^2},$$
and thus $f(t,X_t) - f(0,X_0) \in \mathcal{M}_{loc}$ is a local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ for any $f(t,x) \in C^{1,2}([0,\infty)\times\mathbb{R})$ such that:
$$\frac{\partial f}{\partial t} + \mu x\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 f}{\partial x^2} = 0.$$
More generally, if $g(s,x)$ is Borel measurable and satisfies the $u$-constraint in 5.24, then $f(t,X_t) - \left[f(0,X_0) + \int_0^t g(s,X_s)\,ds\right] \in \mathcal{M}_{loc}$ is a local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ for any $f(t,x) \in C^{1,2}([0,\infty)\times\mathbb{R})$ such that:
$$\frac{\partial f}{\partial t} + \mu x\frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 f}{\partial x^2} = g.$$
For this example such $g(t,x)$ must in fact be at least continuous.
$$X_t^{(i)}(\omega) = X_0^{(i)}(\omega) + \sum_{j=1}^m \int_0^t v_{ij}(s,X_s(\omega))\,dB_s^{(j)}(\omega),$$
with $\{B^{(j)}\}_{j=1}^m$ independent Brownian motions on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$, or equivalently (proposition 1.42, book 7) let $B \equiv (B^{(1)}, \ldots, B^{(m)})$ be an $m$-dimensional Brownian motion on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$. Then if $v(s,x)v^T(s,x) = I$ and $f$ is harmonic, then $f(X_t) - f(X_0) \in \mathcal{M}_{loc}$ is a local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$.
$$\frac{\partial}{\partial t}\tilde{f}(t,x) = \Delta\tilde{f}(t,x).$$
$$\tilde{f}(t,x) = \frac{1}{(4\pi t)^{n/2}}\exp\left[-\sum_{i=1}^n x_i^2/4t\right].$$
where:
$$a = \frac{2\mu}{\sigma^2}, \quad s = \frac{1}{2}\sigma^2(T-t), \quad y = e^x.$$
Then using a change of variables obtains the above heat equation in $\tilde{f}(s,y)$. The solution to the heat equation then yields a solution to the original equation of example 6.19 over $0 \leq t \leq T$ by resubstitution. With $X_t$ derived in 5.13:
$$X_t = X_0\exp\left[\left(\mu - \frac{1}{2}\sigma^2\right)t + \sigma B_t\right],$$
it follows that $f(t,X_t) - f(0,X_0) \in \mathcal{M}_{loc}$ is a local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ for $t \leq T$.
$$\frac{\partial f}{\partial x_2}(t,x) + \frac{1}{2}\frac{\partial^2 f}{\partial x_1^2}(t,x) = 0,$$
then $f(X_t) - f(X_0) \in \mathcal{M}_{loc}$ is a continuous local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$.
Proof. Left as an exercise.
$$f(X_t) - f(X_0) = \left(X_0^{(1)} + M_t\right)^2 - \left(X_0^{(1)}\right)^2 - \langle M\rangle_t = M_t^2 - \langle M\rangle_t + 2X_0^{(1)}M_t.$$
too is $M_t^2 - \langle M\rangle_t$ by book 7's exercise 5.78. This would be circular logic, as the Doob-Meyer result was prominent in the development of the various integration theories, and thus also in Itô's lemma.
From the above version of Itô's lemma we also obtain that:
$$M_t^2 - \langle M\rangle_t + 2X_0^{(1)}M_t = 2\int_0^t\left(X_0^{(1)} + M_s\right)dM_s, \quad \mu\text{-a.e.,}$$
$$f(X_t) - f(X_0) = \exp\left[\sigma M_t - \frac{1}{2}\sigma^2\langle M\rangle_t\right] - \exp\left[\sigma X_0^{(1)} - \frac{1}{2}\sigma^2 X_0^{(2)}\right].$$
It is common to set $X_0^{(1)} = X_0^{(2)} = 0$, and then it follows that $\exp\left[\sigma M_t - \frac{1}{2}\sigma^2\langle M\rangle_t\right] - 1$ is a continuous local martingale, and from the above version of Itô's lemma:
$$\exp\left[\sigma M_t - \frac{1}{2}\sigma^2\langle M\rangle_t\right] - 1 = \sigma\int_0^t\exp\left[\sigma M_s - \frac{1}{2}\sigma^2\langle M\rangle_s\right]dM_s.$$
$$\mathcal{E}_\sigma(M)_t \equiv \exp\left[\sigma M_t - \frac{1}{2}\sigma^2\langle M\rangle_t\right], \tag{6.12}$$
it follows that:
$$\mathcal{E}_\sigma(M)_t = 1 + \sigma\int_0^t \mathcal{E}_\sigma(M)_s\,dM_s. \tag{6.13}$$
This equation is reminiscent of the integral equation satisfied by the ordinary exponential $e_\sigma(t) \equiv \exp[\sigma t]$:
$$e_\sigma(t) = 1 + \sigma\int_0^t e_\sigma(s)\,ds.$$
$$\mathcal{E}_\sigma(B)_t \equiv \exp\left[\sigma B_t - \frac{1}{2}\sigma^2 t\right]$$
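Being a true martingale, $\mathcal{E}_\sigma(B)_t$ has constant expectation $E[\mathcal{E}_\sigma(B)_t] = 1$ for all $t$. A quick Monte Carlo sketch of this fact (path counts and parameters are illustrative only):

```python
import numpy as np

# The exponential martingale of a Brownian motion,
#   E_sigma(B)_t = exp[sigma B_t - sigma^2 t / 2],
# has constant expectation 1 in t; we check at the first and last grid times.
rng = np.random.default_rng(2)
sigma = 0.5
n_paths, n_steps, T = 100_000, 50, 2.0
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = dB.cumsum(axis=1)                       # B at times dt, 2*dt, ..., T
t = dt * np.arange(1, n_steps + 1)
E_mart = np.exp(sigma * B - 0.5 * sigma**2 * t)

means = E_mart.mean(axis=0)
print(means[0], means[-1])                  # both close to 1
```

The $-\sigma^2 t/2$ compensator is what keeps the mean at 1; without it, $E[\exp(\sigma B_t)] = e^{\sigma^2 t/2}$ grows in $t$.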
So by 6.12:
$$\mathcal{E}_\sigma(M)_t \equiv \exp\left[\sigma\sum_{i=1}^n\int_0^t Q_s^{(i)}\,dN_s^{(i)} - \frac{1}{2}\sigma^2\sum_{i=1}^n\int_0^t\left(Q_s^{(i)}\right)^2 d\left\langle N^{(i)}\right\rangle_s\right]$$
is a local martingale on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ which satisfies 6.13. Using the associative law in 3.72, this identity becomes:
$$\mathcal{E}_\sigma(M)_t = 1 + \sigma\sum_{i=1}^n\int_0^t \mathcal{E}_\sigma(M)_s\,Q_s^{(i)}\,dN_s^{(i)}. \tag{6.14}$$
Remark 6.24 It can cause some confusion initially, but the exponential martingale $\mathcal{E}_\sigma(M)_t$ defined in 6.12 for a local martingale $M_t \in \mathcal{M}_{loc}$ is in fact an exponential local martingale in general, as noted above. It will be essential for the study of Girsanov's theorem in book 9 to identify conditions on $M_t$ which assure that $\mathcal{E}_\sigma(M)_t$ is in fact a martingale. For that application $\sigma = 1$ always, and it is common to denote $\mathcal{E}_1(M)_t$ by $\mathcal{E}(M)_t$.
6.4 THE FEYNMAN-KAC REPRESENTATION THEOREM 1 293
Notation 6.26 (On notation and expectations) Some references use the notation $E^{t,x}$ or $E_{t,x}$ in the constraint in 6.19 below and in the representation for $f(t,x)$, and then drop the $t,x$ from the notation for the process. For example, 6.20 would become:
$$f(t,x) = E^{t,x}\left[\Phi(X_T)\exp\left(\int_t^T\gamma(s,X_s)\,ds\right) + \int_t^T k(r,X_r)\exp\left(\int_t^r\gamma(s,X_s)\,ds\right)dr\right],$$
In the $m$-dimensional case, $\{A_j\}_{j=1}^n \subset \mathcal{B}(\mathbb{R}^m)$. The collection of such cylinder sets forms a semi-algebra $\mathcal{A}'$ that generates $\sigma'(S)$, and $\mu_{t,x}$ is extended to this sigma algebra using the extension theory of chapter 5 of book 1.
Now the reader may have noticed that in the representation for $f(t,x)$ above, the process $X_s$ would appear to be the notation for the solution of this stochastic differential equation from $t = 0$, so $X_s = X_s^{0,\cdot}$, but with some unspecified initial condition. Ignoring this initial condition for the moment, this would seem to imply that the solution process $X_s^{t,x}$ for $s \geq t$ is the same as $\{X_s \mid s \geq t, X_t = x\}$, at least in terms of its finite dimensional distributions. We discuss this again in the next section, in remark 6.41.
For the following, recall definition 5.3 that $f(t,x) \in C^{1,2}([0,T]\times\mathbb{R}^m)$ denotes that $f$ is continuous on this space, and has one continuous derivative in $t \in (0,T)$ and two continuous derivatives in $x \in \mathbb{R}^m$ that extend continuously to $[0,T]\times\mathbb{R}^m$. While in many contexts the variable $T$ serves as notation for a stopping time, in the discussion of partial differential equations and stochastic differential equations this letter is often reserved for a special fixed future time, such as the time of the boundary value. It is then common to use the Greek letter $\tau$ or another letter for stopping times in this context.
Proposition 6.27 (Feynman-Kac representation theorem 1) Let $u(s,x)$ and $v(s,x)$ be measurable functions defined on $[0,T]\times\mathbb{R}$ and $B_s$ a Brownian motion on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$.
Assume that:
1. Given $t \in [0,T)$ and $x$, there exists a continuous semimartingale $X_s^{t,x}$ on $(S, \sigma(S), \sigma_{s-t}(S), \mu)_{u.c.}$ which is a solution to the stochastic differential equation:
$$dX_s^{t,x} = u(s,X_s^{t,x})\,ds + v(s,X_s^{t,x})\,dB_s^t,$$
for $t \leq s \leq T$ with $X_t^{t,x} = x$. Here $B_s^t$ is a Brownian motion on $(S, \sigma(S), \sigma_{s-t}(S), \mu)_{u.c.}$, defined in terms of $B_s$ or otherwise as above, and thus $B_t^t = 0$ and $\langle B^t\rangle_s = s - t$.
In other words, $X_s^{t,x}$ is an Itô diffusion:
$$X_s^{t,x} = x + \int_t^s u(r,X_r^{t,x})\,dr + \int_t^s v(r,X_r^{t,x})\,dB_r^t, \quad t \leq s \leq T. \tag{6.16}$$
2. There exists $f(t,x) \in C^{1,2}([0,T]\times\mathbb{R})$ which solves the partial differential equation:
$$\frac{\partial f}{\partial t} + u\frac{\partial f}{\partial x} + \frac{1}{2}v^2\frac{\partial^2 f}{\partial x^2} + \gamma f + k = 0, \quad (t,x)\in[0,T)\times\mathbb{R}, \tag{6.17}$$
$$f(T,x) = \Phi(x), \quad x\in\mathbb{R}, \tag{6.18}$$
where $u,v$ are as above, $\gamma(s,x)$ and $k(s,x)$ are continuous functions defined on $[0,T]\times\mathbb{R}$, and $\Phi(x)$ a continuous function defined on $\mathbb{R}$.
3. For $f(t,x)$ in 2:
$$E\left[\int_t^T\left(v(r,X_r^{t,x})f_x(r,X_r^{t,x})\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)\right)^2 dr\right] < \infty. \tag{6.19}$$
Then $f(t,x)$ has the representation:
$$f(t,x) = E\left[\Phi(X_T^{t,x})\exp\left(\int_t^T\gamma(s,X_s^{t,x})\,ds\right) + \int_t^T k(r,X_r^{t,x})\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)dr\right]. \tag{6.20}$$
Proof. Given $t,x$, define the process $g(r,X_r^{t,x})$ on $[t,T]\times S$ by:
$$g(r,X_r^{t,x}) = \exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)f(r,X_r^{t,x}).$$
Similarly, an application of 5.10 to $f(r,X_r^{t,x})$ for any $C^{1,2}([0,T]\times\mathbb{R})$ function $f(r,x)$ obtains:
$$df(r,X_r^{t,x}) = \left[f_t(r,X_r^{t,x}) + u(r,X_r^{t,x})f_x(r,X_r^{t,x}) + \frac{1}{2}v^2(r,X_r^{t,x})f_{xx}(r,X_r^{t,x})\right]dr + v(r,X_r^{t,x})f_x(r,X_r^{t,x})\,dB_r^t.$$
As $g(r,X_r^{t,x})$ is defined as the product of these two processes, we can apply stochastic integration by parts from 4.13, recalling remark 4.26. For this application note that by book 7's proposition 6.34:
$$\left\langle \exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right),\, f(r,X_r^{t,x})\right\rangle_r = 0,$$
since the first factor is a bounded variation process. Substitution of the above differentials then yields:
$$g(t',X_{t'}^{t,x}) - g(t,X_t^{t,x}) = \int_t^{t'}\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)df(r,X_r^{t,x}) + \int_t^{t'}f(r,X_r^{t,x})\,d\left[\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)\right]$$
$$= \int_t^{t'}\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)\left[f_t(r,X_r^{t,x}) + u(r,X_r^{t,x})f_x(r,X_r^{t,x})\right]dr$$
$$+ \int_t^{t'}\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)\left[\frac{1}{2}v^2(r,X_r^{t,x})f_{xx}(r,X_r^{t,x}) + f(r,X_r^{t,x})\gamma(r,X_r^{t,x})\right]dr$$
$$+ \int_t^{t'}\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)v(r,X_r^{t,x})f_x(r,X_r^{t,x})\,dB_r^t.$$
If $f(t,x)$ satisfies 6.17, the sum of the bracketed expressions in the first two integrals equals $-k(r,X_r^{t,x})$ identically. Substituting all terms, and recalling that $X_t^{t,x} = x$, obtains:
$$\exp\left(\int_t^{t'}\gamma(s,X_s^{t,x})\,ds\right)f(t',X_{t'}^{t,x}) - f(t,x) = -\int_t^{t'}k(r,X_r^{t,x})\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)dr$$
$$+ \int_t^{t'}\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)v(r,X_r^{t,x})f_x(r,X_r^{t,x})\,dB_r^t.$$
The Itô integral is a martingale since the integrand satisfies 2.27 by 6.19 and hence is a member of $\mathcal{H}_2([0,\infty)\times S)$. Letting $t' = T$ and $f(T,X_T^{t,x}) = \Phi(X_T^{t,x})$ obtains 6.20 by taking expectations:
$$f(t,x) = E\left[\exp\left(\int_t^T\gamma(s,X_s^{t,x})\,ds\right)\Phi(X_T^{t,x}) + \int_t^T k(r,X_r^{t,x})\exp\left(\int_t^r\gamma(s,X_s^{t,x})\,ds\right)dr\right].$$
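The representation 6.20 is also the basis for Monte Carlo evaluation of such PDE solutions. The sketch below checks the simplest case, assuming $u = 0$, $v = 1$, $\gamma = k = 0$ and $\Phi(x) = x^2$ (choices made purely for illustration): then $X_s^{t,x} = x + (B_s - B_t)$, and the PDE $f_t + \frac{1}{2}f_{xx} = 0$ with $f(T,x) = x^2$ has exact solution $f(t,x) = x^2 + (T-t)$:

```python
import numpy as np

# Monte Carlo sketch of the Feynman-Kac representation in the simplest
# case u = 0, v = 1, gamma = k = 0, Phi(x) = x^2. Then X^{t,x}_T is
# N(x, T - t) and f(t, x) = E[Phi(X^{t,x}_T)] = x^2 + (T - t).
rng = np.random.default_rng(3)
t, T, x = 0.25, 1.0, 0.7

X_T = x + rng.normal(0.0, np.sqrt(T - t), 1_000_000)
f_mc = (X_T ** 2).mean()            # sample estimate of E[Phi(X^{t,x}_T)]
f_exact = x**2 + (T - t)
print(f_mc, f_exact)
```

For nonzero $u, v, \gamma, k$ the same idea applies, but $X_T^{t,x}$ must then be sampled by discretizing the SDE rather than drawn exactly.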
The process $B_s^t$ is an $n$-dimensional shifted Brownian motion on $(S, \sigma(S), \sigma_{s-t}(S), \mu)_{u.c.}$ and defined as in remark 6.25 of the prior section. But as an $n$-dimensional Brownian process it has independent components that satisfy $B_t^{t,(j)} = 0$ and $\left\langle B^{t,(j)}\right\rangle_s = s - t$. In integral form:
$$X_s^{t,x} = x + \int_t^s u(r,X_r^{t,x})\,dr + \int_t^s v(r,X_r^{t,x})\,dB_r^t, \quad t \leq s \leq T.$$
Comparing (1) and (2) it follows that $Y_s \equiv X_{t+s}^{t,x}$ and $Y_s' \equiv X_s^{0,x}$ satisfy the same stochastic differential equation on $0 \leq s \leq T - t$, but with different versions of a Brownian motion. By the section in book 9 on weak uniqueness, meaning uniqueness in the sense of probability law, it will follow that $Y_s$ and $Y_s'$ have the same finite dimensional distributions (definition 1.37, book 7) under $\mu$ on $0 \leq s \leq T - t$. These conclusions will again require assumptions on the coefficient functions as noted in remark 5.26.
One consequence of this is that for a time homogeneous process, the distribution of $X_T^{t,x}$ depends only on $x$ and $T' \equiv T - t$.
For the next result, recall that by definition 5.3, $\varphi \in C_0^2(\mathbb{R}^m)$ means that $\varphi$ is twice continuously differentiable and has compact support. That is, $\varphi = 0$ on the complement of a compact $K \subset \mathbb{R}^m$. Also, $vv^T$ denotes the matrix product of $v(t,y)$ and its transpose, so:
$$vv^T(s,y) \equiv \left(\sum_{j=1}^n v_{ij}(s,y)v_{kj}(s,y)\right)_{i=1,k=1}^{m,m},$$
The stated assumption on $u_i(t,y)$ and $v_{ij}(t,y)$ needed for Dynkin's formula below will be assured by the assumptions in book 9 that will be made for Itô's existence theory for solutions of stochastic differential equations. As noted in remark 6.25, this existence theory will require a uniform linear growth bound for $u_i(t,y)$ and $v_{ij}(t,y)$ in $y$, and Lipschitz continuity of $u_i(t,y)$ and $v_{ij}(t,y)$ in $y$ for all $t$. In addition, if the stopping time is constant or bounded, $\tau \leq T$, then the assumption below on $u_i(t,y)$ and $v_{ij}(t,y)$ is also assured by the time-localized linear growth bound assumed for the more general existence theory.
For any fixed $T < \infty$, this formula for $\varphi(Y_T)$ is valid $\mu$-a.e. by Itô's lemma, and thus it is valid for all rational $T$, $\mu$-a.e. As $\varphi(Y_T)$ is continuous in $T$ it follows that this formula is then valid for all finite $T$, $\mu$-a.e. Now given the stopping time $\tau$ with $E[\tau] < \infty$, then $\tau < \infty$ $\mu$-a.e. and thus on the intersection of these sets of measure 1, this formula is valid $\mu$-a.e. with
6.5 DYNKIN’S FORMULA 305
Hence by 2.38:
$$E[g_k(\omega)] = E\left[\int_t^k \chi_{[0,\tau]}(s)\,\varphi_{y_i}(Y_s)v_{ij}(s,Y_s)\,dB_s^{(j)}\right] = 0.$$
Hence:
$$\int_{\{\sup_k|g_k|\geq N\}}\sup_k|g_k(\omega)|\,d\mu \leq \frac{1}{N}\int_{\{\sup_k|g_k|\geq N\}}\sup_k|g_k(\omega)|^2\,d\mu \leq \frac{1}{N}\sup_k E\left[g_k^2(\omega)\right],$$
and so
$$\lim_{N\to\infty}\int_{\{\sup_k|g_k|\geq N\}}\sup_k|g_k(\omega)|\,d\mu = 0.$$
In other words, $\{g_k(\omega)\}$ is uniformly integrable (definition 2.50, book 5) and thus by that book's proposition 2.52:
$$E\left[\int_t^\tau\varphi_{y_i}(Y_s)v_{ij}(s,Y_s)\,dB_s^{(j)}\right] = \lim_{k\to\infty}E[g_k(\omega)] = 0.$$
1. The assumption of compact support can be relaxed, and the above result and proof will apply to $\varphi(y) \in C_b^2(\mathbb{R}^m)$ if all $u_i(t,y)$ and $v_{ij}(t,y)$ are globally bounded by $M < \infty$ for all $y$ and $t$. Here $C_b^2(\mathbb{R}^m)$ is defined as the space of twice continuously differentiable functions with bounded derivatives.
Further, $\frac{\partial f}{\partial T}(t,x;T)$ is continuous in $T$ for $T \geq t$.
Proof. Using 6.24 with a change of notation:
$$f(t,x;T) = \varphi(x) + E\left[\int_t^T L\varphi\left(X_s^{t,x}\right)ds\right]. \tag{*}$$
$$\frac{\partial f}{\partial T}(t,x;T)\bigg|_{T=t} \equiv \lim_{h\to 0^+}\frac{f(t,x;t+h) - f(t,x;t)}{h} = \lim_{h\to 0^+}\frac{1}{h}E\left[\int_t^{t+h}L\varphi\left(X_s^{t,x}\right)ds\right].$$
Then 6.26 follows from continuity of the integrand by the mean value theorem for integrals (for example, proposition 10.27, Reitano (2010)) since $X_t^{t,x} = x$.
For continuity of $\frac{\partial f}{\partial T}(t,x;T) = E\left[L\varphi\left(X_T^{t,x}\right)\right]$, fix $t,x$ and $T \geq t$. Since $X_s^{t,x}$ is continuous it follows that $X_{T_n}^{t,x} \to X_T^{t,x}$ $\mu$-a.e. if $T_n \to T$, where this is a right limit for $T = t$. Now $L\varphi(y)$ is continuous and thus $L\varphi\left(X_{T_n}^{t,x}\right) \to L\varphi\left(X_T^{t,x}\right)$ $\mu$-a.e. if $T_n \to T$. Since $L\varphi(y)$ is also of compact support and is thus bounded on $\mathbb{R}^m$, $E\left[L\varphi\left(X_{T_n}^{t,x}\right)\right] \to E\left[L\varphi\left(X_T^{t,x}\right)\right]$ by the bounded convergence theorem of book 5's proposition 2.46.
$$f(x;T) \equiv E\left[\varphi(X_T^x)\right],$$
where we also simplify notation with $E[\varphi(X_T^x)] \equiv E\left[\varphi\left(X_T^{0,x}\right)\right]$.
$$X_s^{t,x} = x\exp\left[\left(\mu - \frac{1}{2}\sigma^2\right)(s-t) + \sigma B_s^t\right],$$
Proposition 6.39 (On when $A^{t,x}\varphi = L\varphi(x)$) Assume that all $u_i(t,y)$ and $v_{ij}(t,y)$ are continuous and bounded by $M_K < \infty$ uniformly in $t$ for $y$ in any compact set $K$. Then $C_0^2(\mathbb{R}^m) \subset D_A$, and if $\varphi \in C_0^2(\mathbb{R}^m)$:
Remark 6.40 1. Recalling remark 6.25, if all $u_i(t,y)$ and $v_{ij}(t,y)$ are continuous and bounded by $M < \infty$ for all $y$ and $t$, then 6.29 also applies to all $\varphi \in C_b^2(\mathbb{R}^m)$.
2. For $\varphi \in C_0^2(\mathbb{R}^m)$, 6.27 and 6.29 can be combined to provide an approximation to $E\left[\varphi\left(X_{t+h}^{t,x}\right)\right]$:
$$E\left[\varphi\left(X_{t+h}^{t,x}\right)\right] - \varphi(x) = h\left[\sum_{i=1}^m u_i(t,x)\varphi_{x_i}(x) + \frac{1}{2}\sum_{i=1}^m\sum_{k=1}^m\left(vv^T\right)_{ik}(t,x)\varphi_{x_ix_k}(x)\right] + o(h),$$
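This small-time expansion can be checked in the simplest setting, a 1-dimensional Brownian motion ($u = 0$, $v = 1$, choices made purely for illustration), where the bracketed generator term reduces to $\varphi''(x)/2$. For $\varphi(x) = \cos x$ the expectation $E[\cos(x + B_h)] = \cos(x)e^{-h/2}$ is available in closed form, so the difference quotient can be compared with $\varphi''(x)/2$ directly:

```python
import numpy as np

# Small-time check of E[phi(X^{t,x}_{t+h})] = phi(x) + h L phi(x) + o(h)
# for X a 1-d Brownian motion, where L phi = phi''/2. With phi = cos,
# E[cos(x + B_h)] = cos(x) e^{-h/2} exactly, so the difference quotient
# (E[phi] - phi(x))/h should approach L phi(x) = -cos(x)/2 as h -> 0+.
x = 0.3
Lphi = -0.5 * np.cos(x)

for h in [0.1, 0.01, 0.001]:
    diff_quot = (np.cos(x) * np.exp(-h / 2.0) - np.cos(x)) / h
    print(h, diff_quot, Lphi)
```

The printed difference quotients converge to $-\cos(0.3)/2 \approx -0.4777$ at rate $O(h)$, as the $o(h)$ term here is in fact $O(h^2)$.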
then from 6.20 this solution must have the representation $f(t,x) \equiv f(t,x;T)$ with:
$$f(t,x;T) = E\left[\Phi\left(X_T^{t,x}\right)\right].$$
The Kolmogorov result states that this is not just a representation of $f(t,x)$ assuming this solution exists, but it is in fact the unique solution to this equation if $\Phi(x) \in C_0^2(\mathbb{R})$.
What is clear is that the proposed representation formula $f(t,x) = E\left[\Phi\left(X_T^{t,x}\right)\right]$ satisfies the boundary condition, since by definition $X_T^{T,x} = x$ and thus $f(T,x) = \Phi(x)$. More importantly, we would also like to be able to prove that:
$$\lim_{t\to T}f(t,x) = \Phi(x),$$
where this notation implies a left limit. The remarkable thing about this result, and the key to the proof, is that $E\left[\Phi\left(X_T^{t,x}\right)\right]$ is differentiable in $t$ and twice differentiable in $x$, properties we defer to book 9. This also explains the "backward" qualifier to this result, that $E\left[\Phi\left(X_T^{t,x}\right)\right]$ is a differentiable function in terms of the backward variables $(t,x)$ which define when and where the process $X_s^{t,x}$ starts.
For the result below we will derive a very special case of this result to keep our focus on Dynkin's formula and its result that $E\left[\Phi\left(X_T^{t,x}\right)\right]$ is differentiable in $T$. In order to justify this change of parameter from $t$ to $T$, we will assume that the process $X_T^{t,x}$ is time homogeneous, so that the result stated in terms of $T$ can be restated in terms of $t$. With the aid of Dynkin's formula we will then derive a disguised formulation of Kolmogorov's result, though one commonly encountered in the literature. In this version, the desired differential operator $L$ defined in 6.23 is replaced with $A$ defined in 6.27. Of course $L\varphi = A\varphi$ when applied to functions $\varphi \in C_0^2(\mathbb{R}^m)$ by proposition 6.39, but this will largely not be applicable in the context below beyond providing the strong hint of the final result to come in book 9. See also remark 6.46.
In book 9 we will derive the final statement for Kolmogorov's backward equations in terms of $L$ for nonhomogeneous processes, as well as a second version of this equation that is satisfied by the transition measure $p(t,x;T,dy)$ of this Markov process (see remark 6.41), or more specifically
Remark 6.41 (On SDE solutions 2) Continuing with remark 6.25, here we comment more on the manner in which $X_s^{t,x}$ can be interpreted as a Markov process (chapter 4, book 7), since we need this attribute in the proof below.
Let $B_s$ be an $n$-dimensional Brownian motion defined on the natural filtered space $(S, \sigma(S), \sigma_s^B(S), \mu)_{u.c.}$ and $Z(\omega)$ a random $m$-vector that is independent of $\sigma_\infty^B(S)$ with $E\left[|Z|^2\right] \equiv \sum_{j=1}^m E\left[\left(Z^{(j)}\right)^2\right] < \infty$. Recall definition 5.4 of book 7 on filtrations, and summary 1.25 of that book for notions of independence. Then under certain continuity and growth assumptions on $u(s,x) \equiv [u_i(s,x)]_{i=1}^m$ and $v(s,x) \equiv [v_{ij}(s,x)]_{i=1,j=1}^{m,n}$ defined on $[0,\infty)\times\mathbb{R}^m$, we will prove in book 9 that the stochastic differential equation:
Implicit in this last statement is that the integrands have the necessary properties to ensure that these integrals are well-defined. Also, the filtration in $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ is defined in terms of unions of $\sigma_s^B(S)$ and $\sigma(Z)$, the sigma algebra generated by $Z$, where this filtration is then made right continuous and complete to satisfy the usual conditions (see remark 5.6 of book 7 for more on this).
Further, with the same assumptions on $u(s,x)$ and $v(s,x)$, if $B_s$ and $B_s'$ are $n$-dimensional Brownian motions defined on $(S, \sigma(S), \sigma_s(S), \mu)_{u.c.}$ and $Z$ is as above, then the two associated strong solutions $X_s$ and $X_s'$ have the same finite dimensional distributions. Put another way, solutions to such stochastic differential equations are said to be weakly unique.
It is then also the case that given $T > 0$, that $E\left[|X_s|^2\right] < \infty$ for $0 \leq s \leq T$.
With this set-up, let's now compare $X_s^{t,X_t}$ and $X_s \equiv X_s^{0,Z}$ on $s \geq t$. The notation $X_s^{t,X_t}$ generalizes that above in the apparent way, in that $X_t^{t,X_t} = X_t$. Then assuming $E\left[|Z|^2\right] < \infty$, there exists the strong solution:
$$X_s(\omega) = Z(\omega) + \int_0^s u(r,X_r(\omega))\,dr + \int_0^s v(r,X_r(\omega))\,dB_r(\omega) = X_t(\omega) + \int_t^s u(r,X_r(\omega))\,dr + \int_t^s v(r,X_r(\omega))\,dB_r(\omega).$$
Comparing, it follows that $X_s(\omega)$ and $X_s^{t,X_t}$ satisfy the same stochastic differential equation for $s \geq t$, with the same initial value $X_t^{t,X_t} = X_t(\omega)$, but potentially different Brownian motions.
If the shifted Brownian process $B_r^t$ in (1) is defined by $B_r^t \equiv B_r - B_t$, then with probability 1:
$$\int_t^s v(r,X_r(\omega))\,dB_r(\omega) = \int_t^s v(r,X_r(\omega))\,dB_r^t(\omega), \quad \text{all } s \geq t.$$
This follows because the integral of any simple function over $[t,s]$ as defined in 2.13 is the same for all $\omega$ using $B_r$ or $B_r^t$. Thus the Itô integral, defined as an $L^2$-limit of such simple process integrals, agrees $\mu$-a.e. for every rational $s$, and by continuity agrees $\mu$-a.e. for all $s$. Thus by strong uniqueness, $X_s = X_s^{t,X_t}$ with probability 1 on $s \geq t$.
When $B_r^t$ is defined more generally, say $B_r^t = B_{r-t}$ using the original Brownian motion, or $B_r^t = B_r' - B_t'$ or $B_r^t = B_{r-t}'$ with a different Brownian motion, et cetera, then the equation in (1) again has a strong solution, and by weak uniqueness it follows that $X_s =_{FDD} X_s^{t,X_t}$ on $s \ge t$, meaning these processes have the same finite dimensional distributions.
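Weak uniqueness can be illustrated by simulation. This Python sketch (hypothetical coefficients, not from the text) drives the same SDE with two independent Brownian motions: the resulting paths differ, yet the sample statistics of $X_T$ agree to within Monte Carlo error, consistent with equality of the finite dimensional distributions:

```python
import numpy as np

# Monte Carlo sketch of weak uniqueness (illustrative coefficients): the same
# SDE, dX = -0.5 X ds + 0.2 dB with X_0 = 1, driven by two INDEPENDENT
# Brownian motions, yields different paths but matching distributions of X_T.

n_steps, n_paths, T = 200, 100_000, 1.0
dt = T / n_steps

def terminal_values(rng):
    """Euler-Maruyama terminal values X_T for n_paths paths at once."""
    x = np.full(n_paths, 1.0)
    for _ in range(n_steps):
        x = x - 0.5 * x * dt + 0.2 * rng.normal(0.0, np.sqrt(dt), n_paths)
    return x

X1 = terminal_values(np.random.default_rng(3))   # first Brownian motion
X2 = terminal_values(np.random.default_rng(4))   # independent Brownian motion

# Pathwise the samples differ, but means and variances (nearly) agree.
print(abs(X1.mean() - X2.mean()), abs(X1.var() - X2.var()))
```

The means and variances each differ only at the order of the Monte Carlo standard error, while the individual sample values are entirely different.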
Without attempting to repeat the book 9 development, this is perhaps enough to appreciate that the solutions of such an SDE can be "identified" with a Markov process, and indeed a diffusion (chapter 4, book 7) with additional assumptions on the coefficient functions $u(s,x)$ and $v(s,x)$. This identification is made between the continuous solutions of this equation and the space of continuous functions (i.e., transformations) $G: \mathbb{R}^+ \to \mathbb{R}^m$, where we impose a Markov probability structure on this latter space using a generalization of the measures $\mu_{t,x}$ of remark 6.26.
We need this insight to justify the application of 4.13 of book 7 in the following proof. This result states that for Borel measurable $\Phi(x)$ and $t, h > 0$:
$$E^{h,X_h}\left[\Phi(X_{t+h})\right] = E^{0,x}\left[\Phi(X_{t+h}) \mid \sigma_h(\mathcal{S})\right]. \qquad (2)$$
In the notation of remark 6.26,
$$E^{h,X_h}\left[\Phi(X_{t+h})\right] \equiv E^{\mu_{h,X_h}}\left[\Phi(X_{t+h})\right],$$
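For intuition, the identity in (2) can be sanity-checked by Monte Carlo in the simplest possible case. The sketch below (illustrative, not part of the text's development) takes $X$ to be Brownian motion started at $x$ and a quadratic test function: after averaging both sides over $X_h$, (2) reduces to the tower identity, and both averages land near the closed-form value $x^2 + t + h$:

```python
import numpy as np

# Monte Carlo sketch of the Markov property (2) for Brownian motion started
# at x, with test function Phi(y) = y^2. Averaging both sides of (2) over
# X_h gives the tower identity
#   E^{0,x}[ g(X_h) ] = E^{0,x}[ Phi(X_{t+h}) ],   g(y) = E^{y}[Phi(X_t)].

rng = np.random.default_rng(2)
x, t, h, n = 0.5, 1.0, 0.25, 400_000

X_h = x + np.sqrt(h) * rng.normal(size=n)       # X_h under E^{0,x}
X_th = X_h + np.sqrt(t) * rng.normal(size=n)    # X_{t+h}, restarted at X_h

g = lambda y: y ** 2 + t        # closed form: E^y[(y + B_t)^2] = y^2 + t
lhs = g(X_h).mean()             # average of E^{h,X_h}[Phi(X_{t+h})]
rhs = (X_th ** 2).mean()        # E^{0,x}[Phi(X_{t+h})]

print(lhs, rhs)                 # both near x^2 + t + h = 1.5
```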
Notation 6.42 (On time, T vs. t) For the following proposition, the notation in 6.32 is not standard for homogeneous processes, and one often sees $T$ in this statement replaced by $t$; see for example Øksendal (1998). However, the notation below is consistent with the more general notation above and that in Dynkin's formula which we apply, and will facilitate our transition to the alternative presentation of this result in proposition 6.45 below in terms of $t$.
$$E^{0,x}\left[g(T, X_h)\right] = E^{0,x}\left[E^{0,x}\left[\Phi(X_{T+h}) \mid \sigma_h(\mathcal{S})\right]\right] = E^{0,x}\left[\Phi(X_{T+h})\right] \equiv g(T+h, x).$$
Combining:
$$\mathcal{A}g(T,x) = \lim_{h \to 0^+} \frac{E^{0,x}\left[g(T, X_h)\right] - g(T,x)}{h} = \lim_{h \to 0^+} \frac{g(T+h, x) - g(T,x)}{h}.$$
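The limit defining the generator $\mathcal{A}$ can also be approximated numerically. The following sketch (illustrative, not from the text) uses the simplest SDE $dX = dB$, Brownian motion, for which the generator is $\frac{1}{2}\,d^2/dx^2$; with $g(x) = x^2$ the difference quotient should approach $\frac{1}{2}g''(x) = 1$:

```python
import numpy as np

# Numerical sketch of the generator limit
#   A g(x) = lim_{h->0+} ( E^{0,x}[g(X_h)] - g(x) ) / h
# for the simplest SDE dX = dB (Brownian motion), where A = (1/2) d^2/dx^2.
# With g(x) = x^2 we expect A g(x) = 1 for every x.

rng = np.random.default_rng(1)
g = lambda x: x ** 2
x, h, n_paths = 1.0, 0.01, 1_000_000

X_h = x + np.sqrt(h) * rng.normal(size=n_paths)   # X_h = x + B_h
estimate = (g(X_h).mean() - g(x)) / h

print(estimate)   # close to 1.0 = (1/2) g''(x)
```

For this quadratic test function the difference quotient has no $h$-bias ($E[g(X_h)] = x^2 + h$ exactly), so the only error is Monte Carlo noise.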
Now $\frac{\partial g}{\partial T}$ exists for all $T \ge 0$ by corollary 6.34, and so the limit above exists. This proves that $g(T,x) \in \mathcal{D}_{\mathcal{A}}$ for each $T \ge 0$ and 6.33 is satisfied. The boundary value in 6.34 also follows from corollary 6.34 since $\frac{\partial g}{\partial T}(T,x)$ is continuous in $T \ge 0$ and $E\left[\Phi\left(X_0^{0,x}\right)\right] = \Phi(x)$.
To prove uniqueness, let bounded $h(T,x) \in C^{1,2}([0,\infty) \times \mathbb{R}^m)$ be given that satisfies 6.33 and 6.34. Fix $(T,x) \in [0,\infty) \times \mathbb{R}^m$ and define a continuous process $Y_s$ in $\mathbb{R}^{m+1}$ for $s \ge 0$ by:
$$Y_s \equiv (T - s, X_s^{0,x}).$$
Denoting the components of $Y_s$ by $\{Y_s^{(j)}\}_{j=0}^m$, the stochastic differential equations for $\{Y_s^{(j)}\}_{j=1}^m$ are the same as those for $\{X_s^{(j)}\}_{j=1}^m$ with $Y_0^{(j)} = x_j$, while the equation for $Y_s^{(0)} \equiv T - s$ is $dY_s^{(0)} = -ds$, $Y_0^{(0)} = T$. Thus by 6.29, the differential operator $\mathcal{L}_Y$ associated with $Y_s$ in Dynkin's formula is given by:
$$\mathcal{L}_Y = -d/dT + \mathcal{L},$$
with $\mathcal{L}$ the differential operator associated with $X_s$ and given in 6.23.
Now let $C_n = \{y \mid |y| \le n\} \subset \mathbb{R}^{m+1}$ and define the stopping time (proposition 5.60, book 7) $\tau_n = \inf\{s \mid Y_s \notin C_n\}$. As $E[s \wedge \tau_n] < \infty$ for any $s$, we can apply Dynkin's formula in 6.22 to $h(y) \in C^2\left(\mathbb{R}^{m+1}\right)$. Further, since $h(Y_s) = h(T - s, X_s^{0,x})$ and $Y_s^{(0)} \equiv T - s$ is of bounded variation, this formula then applies to $h(s,x) \in C^{1,2}([0,\infty) \times \mathbb{R}^m)$ by remark 5.6. Thus:
$$E^{0,x}\left[h(Y_{s \wedge \tau_n})\right] = h(Y_0) + E^{0,x}\left[\int_0^{s \wedge \tau_n} \mathcal{L}_Y h(Y_r)\,dr\right]$$
$$= h(T,x) + E^{0,x}\left[\int_0^{s \wedge \tau_n} \left(-d/dT + \mathcal{L}\right) h(Y_r)\,dr\right].$$
Applying exercise 6.44, let $h_k(s,x) \in C_0^{1,2}([0,\infty) \times \mathbb{R}^m)$ with $|h_k(s,x)| \le |h(s,x)|$ and $h_k(s,x) = h(s,x)$ for $|(s,x)| \le k$. Then the above formula also applies to $h_k(s,x)$ to obtain:
$$E^{0,x}\left[h_k(Y_{s \wedge \tau_n})\right] = h_k(T,x) + E^{0,x}\left[\int_0^{s \wedge \tau_n} \left(-d/dT + \mathcal{A}\right) h_k(Y_r)\,dr\right].$$
Now show that $g_k(x) \in C_0^\infty\left(\mathbb{R}^n\right)$, $|g_k(x)| \le 1$, and that $g_k(x) \equiv 1$ for $|x| \le k$. Then in the above proof, $h_k \equiv g_k h$. Hint: Note that by definition of $\varphi(x)$:
$$g_k(x) = \int_{|x-y| \le 1} \chi_{2k}(y)\,\varphi(y - x)\,dy.$$
Justify differentiating under the integral using section 2.4, book 5.
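The smooth cutoff in this exercise can be visualized numerically. The one-dimensional sketch below (illustrative discretization, not part of the exercise) mollifies the indicator of $[-2k, 2k]$ with a $C^\infty$ bump supported on $[-1,1]$; the result is between 0 and 1, equals 1 on $|x| \le k$, and vanishes outside a compact set:

```python
import numpy as np

# 1-d numerical sketch of the cutoff g_k = chi_{2k} * phi: the indicator of
# [-2k, 2k] mollified by a smooth bump supported on [-1, 1]. Then
# 0 <= g_k <= 1, g_k = 1 for |x| <= k, and g_k = 0 for |x| >= 2k + 1.

def mollifier(x):
    """Standard C^infinity bump supported on (-1, 1), unnormalized."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

xs = np.linspace(-8.0, 8.0, 1601)     # grid with spacing 0.01
dx = xs[1] - xs[0]

bump = mollifier(xs)
bump /= bump.sum() * dx               # normalize: integral of phi = 1

k = 2
chi = (np.abs(xs) <= 2 * k).astype(float)        # indicator of [-2k, 2k]
g_k = np.convolve(chi, bump, mode="same") * dx   # discrete convolution

print(g_k[np.abs(xs) <= k].min())         # ~1.0: g_k = 1 on |x| <= k
print(g_k[np.abs(xs) >= 2 * k + 1].max()) # 0.0: vanishes for |x| >= 2k + 1
```

Since the bump's full support sits inside $[-2k,2k]$ whenever $|x| \le 2k-1$, the discrete convolution returns exactly the normalized mass 1 there, and it is identically 0 once the supports no longer overlap.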
a partial differential equation indeed. And this will be the final form of this equation derived in book 9 for the homogeneous process of proposition 6.45, as well as the general case of $u(t,x) \equiv [u_i(t,x)]_{i=1}^m$ and $v(t,x) \equiv [v_{ij}(t,x)]_{i=1,j=1}^{m,n}$ with appropriate assumptions.
Of course we cannot justify the substitution $\mathcal{A} = \mathcal{L}$ even in this homogeneous case. First, proposition 6.39 requires compact support of the function $f(t,x)$ for each $t$, and in general we have no justification for this for any $t$. As noted in remark 6.33, this support assumption can be removed by assuming that $u(x) \equiv [u_i(x)]_{i=1}^m$ and $v(x) \equiv [v_{ij}(x)]_{i=1,j=1}^{m,n}$ are globally bounded in $x$, a reasonable assumption. But then we must also be able to demonstrate that $f(t,x)$ is twice continuously differentiable in $x$ for each $t$.
As it turns out, the most subtle and difficult part of the proof of Kolmogorov's backward equation in the above form is the proof that $f(t,x;T) \equiv E\left[\Phi\left(X_T^{t,x}\right)\right]$ is continuously differentiable in $t$ and twice continuously differentiable in $x$ when $\Phi(x) \in C_0^2\left(\mathbb{R}^m\right)$. Differentiability in $t$ is assured in the homogeneous case by Dynkin's formula as derived above, while a direct derivation is needed in the general case. For differentiability in $x$, this is no easier to demonstrate in the homogeneous case than the general case.
Differentiability of $f(t,x)$ with respect to $x$ will require differentiability assumptions on the coefficient functions $u(t,x)$ and $v(t,x)$, as well as a generalization of Itô's existence and uniqueness theory for stochastic differential equations to accommodate coefficient functions that also depend on $\omega \in \mathcal{S}$, and thus are of the form $u_i(t,x,\omega)$ and $v_{ij}(t,x,\omega)$. The details of this $t$- and $x$-differentiability can be found in Friedman (1975), where it is assumed that all first and second $x$-derivatives of the coefficient functions are uniformly bounded by polynomials in $x$ such as $(1+|x|)^k$ for $k > 0$. The result above is there also generalized to $\Phi(x) \in C^2\left(\mathbb{R}^m\right)$ when derivatives of $\Phi$ are similarly bounded.
We will return to this discussion in book 9.
References
The reader will no doubt observe that the mathematics references are somewhat older than the finance references, and upon web searching will find that several of the older texts in each category have been updated to newer editions, sometimes with additional authors. Since I own and use the editions below, I decided to present these editions rather than reference the newer editions, which I have not reviewed. As many of these older texts are considered "classics," they are also likely to be found in university and other libraries.

That said, there are undoubtedly many very good new texts by both new and established authors with similar titles that are also worth investigating. One that I will, at the risk of immodesty, recommend for more introductory materials on mathematics, probability theory and finance is:
6. Hewitt, Edwin, and Karl Stromberg. Real and Abstract Analysis. New
York, NY: Springer-Verlag, 1965.
7. Royden, H. L. Real Analysis, 2nd Edition. New York, NY: The MacMillan Company, 1971.
9. Rudin, Walter. Real and Complex Analysis, 2nd Edition. New York,
NY: McGraw-Hill, 1974.
14. Davidson, James. Stochastic Limit Theory. New York, NY: Oxford
University Press, 1997.
15. de Haan, Laurens, and Ana Ferreira. Extreme Value Theory, An Introduction. New York, NY: Springer Science, 2006.
21. Ikeda, Nobuyuki, and Shinzo Watanabe. Stochastic Differential Equations and Diffusion Processes. Tokyo, Japan: Kodansha Scientific, 1981.
22. Karatzas, Ioannis, and Steven E. Shreve. Brownian Motion and Stochastic Calculus. New York, NY: Springer-Verlag, 1988.
23. Kloeden, Peter E., and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. New York, NY: Springer-Verlag, 1992.
29. Revuz, Daniel, and Marc Yor. Continuous Martingales and Brownian
Motion, 3rd Edition. New York, NY: Springer-Verlag, 1991.
30. Rogers, L. C. G., and D. Williams. Diffusions, Markov Processes and Martingales, Volume 1, Foundations, 2nd Edition. Cambridge, UK: Cambridge University Press, 2000.
31. Rogers, L. C. G., and D. Williams. Diffusions, Markov Processes and Martingales, Volume 2, Itô Calculus, 2nd Edition. Cambridge, UK: Cambridge University Press, 2000.
34. Schuss, Zeev. Theory and Applications of Stochastic Differential Equations. New York, NY: John Wiley and Sons, 1980.
Finance Applications
35. Etheridge, Alison. A Course in Financial Calculus. Cambridge, UK:
Cambridge University Press, 2002.
38. McLeish, Don L. Monte Carlo Simulation and Finance. New York,
NY: John Wiley, 2005.
39. McNeil, Alexander J., Rüdiger Frey, and Paul Embrechts. Quantitative Risk Management: Concepts, Techniques, and Tools. Princeton, NJ: Princeton University Press, 2005.
Research Papers/Books for Book 8
40. Bichteler, Klaus. "Stochastic Integration and $L^p$-Theory of Semimartingales." Ann. Probab. 9, no. 1, 49–89, 1981.
45. Itô, Kiyosi. "On a formula concerning stochastic differentials." Nagoya Mathematical Journal 3, 55–65, 1951.
48. Kunita, Hiroshi, and Shinzo Watanabe. "On square integrable martingales." Nagoya Math. J. 30, 209–245, 1967.
Index
Schwarz, Gideon E.
  time-changed BM, 267
semimartingale, 181
  m-dimensional, 212
signed measure, 110
simple process, 24
stochastic differential equation, 248
  SDE, 228
Stochastic dominated convergence theorem, 175, 201
Stochastic Integrals
  via Riemann Sums, 74, 138, 178, 203
Stochastic integration by parts, 204
Stratonovich, Ruslan
  Stratonovich integral, 27
time homogeneous
  SDE, 302