
Essential Questions for the Exam 2017, AMCS 336,
Numerical Methods for Stochastic Differential Equations

May 9, 2018

1. Define the Itô integral by the limit of the forward Euler method and show
that the limit is independent of the partition used in the time discretization.

Let us suppose that there exist $C > 0$ and $f : [0, T] \times \mathbb{R} \to \mathbb{R}$ such that

\[ |f(t + \Delta t, W_t + \Delta W_t) - f(t, W_t)| \le C(\Delta t + |\Delta W_t|), \tag{1} \]

where $W_t$, $t \ge 0$, is a standard Wiener process and $\Delta W_t = W_{t + \Delta t} - W_t$. Let $\pi = \{0 = t_0 < t_1 < \ldots < t_N = T\}$ be a partition of the interval $[0, T]$. We denote $W_n = W_{t_n}$ and $\Delta t_n = t_{n+1} - t_n$, for $n = 0, 1, \ldots, N - 1$. The Itô integral defined by the forward Euler method is given by

\[ \int_0^T f(t, W_t)\, dW_t := \lim_{\max_n \Delta t_n \to 0} \sum_{n=0}^{N-1} f(t_n, W_n)\, \Delta W_n. \]

Theorem 0.1. Under condition (1), the above limit is independent of the partition used.

Proof. Let us consider two arbitrary different partitions of the interval $[0, T]$,

\[ P = \{0 = t_0 < t_1 < \ldots < t_N = T\} \quad \text{and} \quad S = \{0 = s_0 < s_1 < \ldots < s_M = T\}. \]

The corresponding forward Euler approximations are given by

\[ I_P = \sum_{n=0}^{N-1} f(t_n, W_{t_n}) (W_{t_{n+1}} - W_{t_n}) \quad \text{and} \quad I_S = \sum_{m=0}^{M-1} f(s_m, W_{s_m}) (W_{s_{m+1}} - W_{s_m}). \]

Let $K := P \cup S$ be the union of the two partitions, with $n_K = \#K$ points, which we write as $\{0 = \hat{t}_0 < \hat{t}_1 < \ldots < \hat{t}_{n_K - 1} = T\}$, and let $\Delta t_{\max}$ be the maximum time step,

\[ \Delta t_{\max} = \max\Big( \max_{0 \le n \le N-1} \Delta t_n,\; \max_{0 \le m \le M-1} \Delta s_m \Big). \]

Then we can write

\[ I_P - I_S = \sum_{k=0}^{n_K - 2} \Delta f_k\, \Delta W_k, \]

where $\Delta W_k = W_{\hat{t}_{k+1}} - W_{\hat{t}_k}$, $\Delta f_k = f(t_n, W_{t_n}) - f(s_m, W_{s_m})$, and the indices $n, m$ satisfy $\hat{t}_k \in [t_n, t_{n+1})$ and $\hat{t}_k \in [s_m, s_{m+1})$. Therefore,

\[ E[(I_P - I_S)^2] = E\Big[ \sum_{j,k} \Delta f_j \Delta f_k \Delta W_j \Delta W_k \Big] = \sum_k E[(\Delta f_k \Delta W_k)^2] + 2 \sum_{j<k} E[\Delta f_j \Delta f_k \Delta W_j \Delta W_k] = \sum_k E[(\Delta f_k)^2]\, \Delta \hat{t}_k, \]

where $\Delta \hat{t}_k = \hat{t}_{k+1} - \hat{t}_k$. This holds because increments of the Wiener process over disjoint intervals are independent: for $j < k$, the factor $\Delta f_j \Delta f_k \Delta W_j$ depends on the path only up to time $\hat{t}_k$, while $\Delta W_k$ is an increment over $[\hat{t}_k, \hat{t}_{k+1})$, so $E[\Delta f_j \Delta f_k \Delta W_j \Delta W_k] = E[\Delta f_j \Delta f_k \Delta W_j]\, E[\Delta W_k] = 0$. Similarly, $\Delta f_k$ is independent of $\Delta W_k$, so $E[(\Delta f_k \Delta W_k)^2] = E[(\Delta f_k)^2]\, E[(\Delta W_k)^2] = E[(\Delta f_k)^2]\, \Delta \hat{t}_k$.
Squaring (1), applied with $\Delta t = |t_n - s_m|$, we get

\[ (\Delta f_k)^2 \le 2C^2 \big( (\Delta' t_k)^2 + (\Delta' W_k)^2 \big), \]

where

\[ \Delta' t_k = |t_n - s_m| \le \Delta t_{\max} \quad \text{and} \quad \Delta' W_k = W_{t_n} - W_{s_m}. \]

Thus, using $E[(\Delta' W_k)^2] = \Delta' t_k$,

\[ E[(I_P - I_S)^2] \le 2C^2 \sum_k \Big( (\Delta' t_k)^2 + \underbrace{E[(\Delta' W_k)^2]}_{=\, \Delta' t_k} \Big)\, \Delta \hat{t}_k \le 2C^2\, T\, \Delta t_{\max} (1 + \Delta t_{\max}), \]

which implies that $E[(I_P - I_S)^2] \to 0$ as $\Delta t_{\max} \to 0$.

Therefore, $I_{\Delta t}$ is a Cauchy sequence in the Hilbert space of square-integrable random variables generated by the norm $\|I_{\Delta t}\|_{L^2} = \sqrt{E[I_{\Delta t}^2]}$ and the scalar product $\langle X, Y \rangle = E[XY]$. Since Cauchy sequences converge in Hilbert spaces, we obtain

\[ \sum_k f(t_k, W_{t_k})\, \Delta W_k \xrightarrow{L^2} \int_0^T f(s, W_s)\, dW_s. \]
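As a numerical illustration of this partition independence, one can evaluate the forward Euler sums over two different partitions of the same Wiener path and compare them with the closed-form Itô value. A minimal sketch in Python/NumPy, assuming the integrand $f(t, W_t) = W_t$, for which $\int_0^T W_t\, dW_t = (W_T^2 - T)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

def ito_forward_euler(t, W):
    """Forward Euler (left-point) sum: sum_n W_n * (W_{n+1} - W_n)."""
    return np.sum(W[:-1] * np.diff(W))

# Sample one Wiener path on a fine grid, then restrict it to two coarser partitions.
N_fine = 2**16
t_fine = np.linspace(0.0, T, N_fine + 1)
W_fine = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / N_fine), N_fine))])

idx_P = np.arange(0, N_fine + 1, 64)   # uniform partition P
idx_S = np.unique(np.concatenate(      # irregular partition S
    [[0], np.sort(rng.choice(N_fine, 1500, replace=False)), [N_fine]]))

I_P = ito_forward_euler(t_fine[idx_P], W_fine[idx_P])
I_S = ito_forward_euler(t_fine[idx_S], W_fine[idx_S])
exact = 0.5 * (W_fine[-1]**2 - T)

print(f"I_P = {I_P:.5f}, I_S = {I_S:.5f}, exact = {exact:.5f}")
```

Both sums approach the same value as the partitions are refined, in agreement with Theorem 0.1.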

2. Formulate two mathematical models based on stochastic differential equations. Describe the goal of the computation. Discuss the choice of the driving noise, the numerical method to approximate them and whether it is natural to use Itô or Stratonovich forms.

Solution.
First Model: Stock price.
Let $S(t)$ denote the stock value at time $t$. Assume that $S(t)$ satisfies the differential equation

\[ \frac{dS(t)}{dt} = a(t) S(t), \]

whose solution is

\[ S(t) = S(0)\, e^{\int_0^t a(s)\, ds}. \]

If we want to take into account the uncertainty in the evolution of the stock price, we can introduce a random noise. An example of such a model is given by the SDE

\[ dS(t) = r(t) S(t)\, dt + \sigma S(t)\, dW_t, \]

where the term involving $dW_t$ introduces the random noise. This model is written in Itô form, and we can use the forward Euler (Euler–Maruyama) method to compute a numerical solution of the SDE:

\[ S_{n+1} = S_n + r_n S_n\, \Delta t_n + \sigma S_n\, \Delta W_n, \]

where $\Delta W_n$, $n = 0, 1, \ldots, N-1$, are i.i.d. normally distributed random variables with mean $0$ and variance $\Delta t_n = t_{n+1} - t_n$.
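A minimal sketch of this scheme in Python/NumPy (the parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
r, sigma, S0, T, N = 0.05, 0.2, 100.0, 1.0, 250

dt = T / N
S = np.empty(N + 1)
S[0] = S0
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))            # Delta W_n ~ N(0, dt)
    S[n + 1] = S[n] + r * S[n] * dt + sigma * S[n] * dW

print(f"S(T) = {S[-1]:.2f}")
```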
Second Model: Epidemiology.
Let $x(t)$ be the fraction of a population that has an infectious disease at time $t$. The rate of change of $x(t)$ usually follows an equation such as

\[ \frac{dx}{dt} = a x(1 - x) - b x + c(1 - x), \]

where
• $a > 0$ is the rate of person-to-person transmission,
• $b > 0$ is the rate of recovery,
• $c > 0$ is the rate of transmission from an external source.

To convert this model into an SDE model, we have to introduce an assumption concerning the diffusion function $\sigma^2(x)$, which, under some assumptions, can be taken equal to $\sigma^2(x) = \varepsilon x(1 - x)$. The SDE model is then given by

\[ dx_t = \mu(x_t)\, dt + \sigma(x_t)\, dW_t, \qquad \mu(x) = a x(1 - x) - b x + c(1 - x), \quad \sigma^2(x) = \varepsilon x(1 - x). \]

3. Formulate the basic properties of a Wiener process.

Solution.
The one-dimensional Wiener process is a mapping $W : [0, \infty) \times \Omega \to \mathbb{R}$, where $\Omega$ is a probability space with an associated probability measure. This process satisfies the following properties:
• $W_0 = 0$;
• the mapping $t \mapsto W_t$ is almost surely continuous on $[0, \infty)$;
• the increments of $W_t$ over disjoint time intervals are independent, normally distributed random variables, with $W_t - W_s \sim N(0, t - s)$ for $0 \le s < t$.
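These properties give the standard sampling recipe for a Wiener path on a grid: cumulative sums of independent $N(0, \Delta t)$ increments. A minimal sketch, with an empirical check of $\mathrm{Var}(W_T) = T$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1.0, 1000
dt = T / N

t = np.linspace(0.0, T, N + 1)
dW = rng.normal(0.0, np.sqrt(dt), N)           # independent N(0, dt) increments
W = np.concatenate([[0.0], np.cumsum(dW)])     # W_0 = 0; W_{t_{n+1}} = W_{t_n} + dW_n

# Empirical check of Var(W_T) = T over many independent paths
M, Ncheck = 10_000, 100
WT = rng.normal(0.0, np.sqrt(T / Ncheck), (M, Ncheck)).sum(axis=1)
print(f"sample Var(W_T) = {WT.var():.4f}  (exact {T})")
```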
4. Show by an example that piecewise linear approximation of the Wiener process in an SDE approximates Stratonovich integrals.

Solution.
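A sketch of one standard example, offered as a starting point (it relies on the classical Wong–Zakai result): for $\int_0^T W\, dW$, the piecewise linear interpolant $W^h$ of $W$ obeys the ordinary chain rule, so $\int_0^T W^h\, dW^h = \frac{1}{2} W_T^2$, which is the Stratonovich value $\int_0^T W \circ dW$ rather than the Itô value $\frac{1}{2}(W_T^2 - T)$. Numerically, the piecewise linear approximation coincides with the trapezoidal sum in $W$:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1.0, 10_000
dt = T / N
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

ito_sum = np.sum(W[:-1] * np.diff(W))                     # forward Euler: Ito value
pw_linear = np.sum(0.5 * (W[:-1] + W[1:]) * np.diff(W))   # integral of the piecewise linear interpolant

print(f"Ito sum          = {ito_sum:.5f}  vs  (W_T^2 - T)/2 = {0.5*(W[-1]**2 - T):.5f}")
print(f"piecewise linear = {pw_linear:.5f}  vs   W_T^2 / 2   = {0.5*W[-1]**2:.5f}")
```

The two sums differ by $\frac{1}{2}\sum_n (\Delta W_n)^2 \approx T/2$, exactly the Itô–Stratonovich correction.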

5. Show by Itô's formula that if a function $u$ solves the Kolmogorov backward PDE with data $u(T, x) = g(x)$, then

\[ u(t, x) = E[g(X(T)) \mid X(t) = x], \]

where $X$ is the solution of a certain SDE (which?).

Solution.
Let $X$ be the solution of the SDE

\[ dX(t) = a(t, X(t))\, dt + b(t, X(t))\, dW_t, \]

and consider the Kolmogorov backward equation

\[ L^* u = u_t + a u_x + \tfrac{1}{2} b^2 u_{xx} = 0, \quad t < T, \qquad u(x, T) = g(x). \]

Let $\hat{u}$ be the solution of this equation,

\[ L^* \hat{u} = 0, \quad t < T, \qquad \hat{u}(\cdot, T) = g(\cdot). \]
We want to verify that

\[ \hat{u}(x, t) = E[g(X(T)) \mid X(t) = x]. \]

The Itô formula applied to $\hat{u}(X(t), t)$ gives

\[ d\hat{u}(X(t), t) = \Big( \hat{u}_t + a \hat{u}_x + \tfrac{1}{2} b^2 \hat{u}_{xx} \Big) dt + b \hat{u}_x\, dW_t = L^* \hat{u}\, dt + b \hat{u}_x\, dW_t. \]
Integrating the last expression from $t$ to $T$ and using $L^* \hat{u} = 0$,

\[ \hat{u}(X(T), T) - \hat{u}(X(t), t) = g(X(T)) - \hat{u}(X(t), t) = \int_t^T b \hat{u}_x\, dW_s. \]

Taking the conditional expectation and using the fact that the expected value of the Itô integral is zero, we get

\[ E[g(X(T)) \mid X(t) = x] - \underbrace{E[\hat{u}(X(t), t) \mid X(t) = x]}_{=\, \hat{u}(x, t)} = E\Big[ \int_t^T b(s, X(s))\, \hat{u}_x(X(s), s)\, dW_s \,\Big|\, X(t) = x \Big] = 0.
\]

We conclude

\[ \hat{u}(x, t) = E[g(X(T)) \mid X(t) = x], \]

which proves the claim, since the solution of the Kolmogorov backward equation is unique.
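A quick Monte Carlo check of this representation (a sketch, assuming the geometric Brownian motion case $a = r x$, $b = \sigma x$ and $g(x) = x$, for which $u(t, x) = x\, e^{r(T - t)}$ is known in closed form):

```python
import numpy as np

rng = np.random.default_rng(4)
r, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
N, M = 200, 100_000
dt = T / N

# Euler-Maruyama paths of dX = r X dt + sigma X dW, started at X(0) = x0
X = np.full(M, x0)
for _ in range(N):
    X += r * X * dt + sigma * X * rng.normal(0.0, np.sqrt(dt), M)

print(f"MC estimate of E[g(X(T)) | X(0) = x0] = {X.mean():.4f}")
print(f"exact u(0, x0) = x0 * exp(r*T)        = {x0 * np.exp(r * T):.4f}")
```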

6. Formulate and prove a theorem on error estimates for weak convergence of the forward Euler method for SDEs.

Solution.
Consider the SDE

\[ dX(t) = a(t, X(t))\, dt + b(t, X(t))\, dW_t, \quad 0 \le t \le T. \]

Let $\bar{X}$ be the forward Euler discretization of $X$, i.e.

\[ \bar{X}(t_{n+1}) - \bar{X}(t_n) = a(t_n, \bar{X}(t_n))\, \Delta t_n + b(t_n, \bar{X}(t_n))\, \Delta W_n, \]

where $\Delta t_n = t_{n+1} - t_n$ and $\Delta W_n = W_{t_{n+1}} - W_{t_n}$ for a given discretization $0 = t_0 < t_1 < \ldots < t_N = T$. We can rewrite the discretization as

\[ \bar{X}(t) - \bar{X}(t_n) = \int_{t_n}^t \bar{a}(s, \bar{X})\, ds + \int_{t_n}^t \bar{b}(s, \bar{X})\, dW_s, \quad t_n \le t \le t_{n+1}, \]

where

\[ \bar{a}(s, \bar{X}) = a(t_n, \bar{X}(t_n)), \qquad \bar{b}(s, \bar{X}) = b(t_n, \bar{X}(t_n)), \qquad s \in [t_n, t_{n+1}). \]

Theorem 0.2. Let $a$, $b$ and $g$ be smooth functions that decay sufficiently fast as $|x| \to \infty$. Then there holds

\[ E[g(X(T)) - g(\bar{X}(T))] = O(\max_n \Delta t_n). \]

Proof. Let $u$ satisfy the equation

\[ L^* u = u_t + a u_x + \tfrac{1}{2} b^2 u_{xx} = 0, \quad t < T, \qquad u(x, T) = g(x). \tag{2} \]

The Feynman–Kac formula shows

\[ u(t, x) = E[g(X(T)) \mid X(t) = x], \]

and in particular

\[ u(0, X(0)) = E[g(X(T))]. \tag{3} \]

Then, by the Itô formula applied along the Euler path $\bar{X}$, and using (2) to substitute $u_t = -a u_x - \tfrac{1}{2} b^2 u_{xx}$,

\[ du(t, \bar{X}(t)) = \Big( u_t + \bar{a} u_x + \tfrac{1}{2} \bar{b}^2 u_{xx} \Big)(t, \bar{X}(t))\, dt + \bar{b} u_x(t, \bar{X}(t))\, dW_t = \Big( (\bar{a} - a) u_x + \tfrac{1}{2} (\bar{b}^2 - b^2) u_{xx} \Big)(t, \bar{X}(t))\, dt + \bar{b} u_x(t, \bar{X}(t))\, dW_t. \]

Integrating from $0$ to $T$,

\[ u(T, \bar{X}(T)) - u(0, \bar{X}(0)) = \int_0^T (\bar{a} - a)\, u_x(t, \bar{X}(t))\, dt + \int_0^T \frac{\bar{b}^2 - b^2}{2}\, u_{xx}(t, \bar{X}(t))\, dt + \int_0^T \bar{b}\, u_x(t, \bar{X}(t))\, dW_t. \]

Taking expectations and using (3), $u(T, \bar{X}(T)) = g(\bar{X}(T))$, $\bar{X}(0) = X(0)$, and the vanishing expectation of the Itô integral,

\[ E[g(\bar{X}(T)) - g(X(T))] = \int_0^T \Big( E[(\bar{a} - a) u_x] + \tfrac{1}{2} E[(\bar{b}^2 - b^2) u_{xx}] \Big)\, dt. \]

To conclude the proof, we first need the next lemma.

Lemma 0.3. For $t_n \le t \le t_{n+1}$ there holds

\[ f_1(t) := E\big[ (\bar{a}(t, \bar{X}) - a(t, \bar{X}(t)))\, u_x(t, \bar{X}(t)) \big] = O(\Delta t_n) \tag{4} \]

and

\[ f_2(t) := E\big[ (\bar{b}^2(t, \bar{X}) - b^2(t, \bar{X}(t)))\, u_{xx}(t, \bar{X}(t)) \big] = O(\Delta t_n). \tag{5} \]

Proof. Since $\bar{a}(t_n, \bar{X}) = a(t_n, \bar{X}(t_n))$,

\[ f_1(t_n) = E\big[ (\bar{a}(t_n, \bar{X}) - a(t_n, \bar{X}(t_n)))\, u_x(t_n, \bar{X}(t_n)) \big] = 0. \]

Provided that $|f_1'(t)| \le C$, this implies

\[ f_1(t) = O(\Delta t_n), \quad \text{for } t_n \le t \le t_{n+1}. \]

Therefore, it remains to show that $|f_1'(t)| \le C$. Let

\[ \alpha(t, x) = -(a(t, x) - a(t_n, \bar{X}(t_n)))\, u_x(t, x), \]

so that $f_1(t) = E[\alpha(t, \bar{X}(t))]$. Then, by the Itô formula,

\[ \frac{df_1}{dt} = \frac{d}{dt} E[\alpha(t, \bar{X}(t))] = \frac{E[d\alpha(t, \bar{X}(t))]}{dt} = E\Big[ \alpha_t + \bar{a}\, \alpha_x + \frac{\bar{b}^2}{2}\, \alpha_{xx} \Big] = O(1), \]

where the $dW_t$ term vanishes after taking expectations, and the last expectation is $O(1)$ because $a$, $b$, $u$ and the derivatives appearing in $\alpha_t$, $\alpha_x$, $\alpha_{xx}$ are bounded, by the smoothness and decay assumptions on $a$, $b$ and $g$. Therefore, there exists a constant $C$ such that $|f_1'(t)| \le C$ for $t_n \le t \le t_{n+1}$, and consequently (4) holds. Similarly, we can prove (5).
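A numerical sketch of this first-order weak convergence, assuming GBM with $g(x) = x^2$, for which $E[g(X(T))] = x_0^2 e^{(2r + \sigma^2) T}$ is known in closed form; $M$ must be large enough that the statistical error stays below the bias being measured, and the parameter values here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
r, sigma, x0, T, M = 0.05, 0.5, 1.0, 1.0, 2_000_000
exact = x0**2 * np.exp((2*r + sigma**2) * T)     # E[X(T)^2] for GBM

for N in [2, 4, 8, 16]:
    dt = T / N
    X = np.full(M, x0)
    for _ in range(N):
        X += r * X * dt + sigma * X * rng.normal(0.0, np.sqrt(dt), M)
    err = abs(np.mean(X**2) - exact)
    print(f"N = {N:2d}  weak error = {err:.5f}  err*N = {err*N:.4f}")
```

The product err*N stays roughly constant, consistent with $O(\Delta t)$ weak convergence.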

7. Motivate the use of Monte-Carlo methods to compute European options based on a basket of several stocks and discuss some possibilities of methods of variance reduction.

Solution.
We assume that the price of each stock $S_i$ on which the payoff depends has the dynamics

\[ dS_i = a_i S_i\, dt + \sum_{j=1}^d b_{ij} S_i\, dW_j = a_i S_i\, dt + b_{ij} S_i\, dW_j \quad \text{(Einstein summation convention)}, \]

and the riskless asset evolves as

\[ dB_t = r B_t\, dt. \]

The portfolio $I$ is given by

\[ I_t = -f + \alpha_i S_i + \beta B_t, \]

where $f$ is the option value, $\alpha_i S_i$ is the amount of money invested in stock $S_i$, and $\beta B_t$ is the amount invested in $B$. We also assume the self-financing condition

\[ d(\alpha_i S_i + \beta B_t) = \alpha_i\, dS_i + \beta\, dB_t. \]

Thus, applying the Itô formula we get

\[ dI_t = -df + \alpha_i\, dS_i + \beta\, dB_t = -f_t\, dt - f_{s_i}\, dS_i - \tfrac{1}{2} f_{s_i s_j} b_{ik} b_{jk} S_i S_j\, dt + \alpha_i\, dS_i + \beta\, dB_t. \]

Since we want the portfolio to be risk free, we choose $\alpha_i = f_{s_i}$, so that the $dS_i$ terms cancel and

\[ dI_t = -f_t\, dt - \tfrac{1}{2} f_{s_i s_j} b_{ik} b_{jk} S_i S_j\, dt + \beta\, dB_t. \]
2
On the other hand, absence of arbitrage requires

\[ dI_t = r I_t\, dt, \]

which leads to the following PDE for the option value:

\[ f_t + \tfrac{1}{2} b_{ik} b_{jk} s_i s_j f_{s_i s_j} + r s_i f_{s_i} - r f = 0, \quad 0 < t < T, \; s \ge 0, \qquad f(T, s) = g(s). \]

Using the Feynman–Kac representation, we can write

\[ f(t, s) = e^{-r(T - t)}\, E[g(S(T)) \mid S(t) = s], \]

where $S$ solves the risk-neutral SDE

\[ dS_i = r S_i\, dt + b_{ij} S_i\, dW_j. \]

Using the forward Euler method, we can write

\[ S_i(t_n) = S_i(t_{n-1}) + r S_i(t_{n-1})\, \Delta t_n + b_{ij} S_i(t_{n-1})\, \Delta W_j(t_n), \qquad S_i(t_0) = s_i. \]

Using the MC method we can approximate

\[ f(t, s) \approx f_{MC} = \frac{e^{-r(T - t)}}{M} \sum_{k=1}^{M} g(S^{(k)}(T)), \]

where $S^{(k)}(T)$, $k = 1, 2, \ldots, M$, is a sample of $S(T)$.


For a diversified portfolio we can use, for instance, the geometric mean as a control variate for the arithmetic one in order to reduce the variance, since the price of the geometric-mean payoff is known in closed form under lognormal dynamics.
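A minimal sketch of this Monte Carlo estimator with a geometric-mean control variate, assuming $d$ independent stocks with equal volatility (so that the geometric average is lognormal and the expectation of its payoff has a closed form); all parameter values are illustrative:

```python
import numpy as np
from math import erf, exp, log, sqrt

rng = np.random.default_rng(6)
d, s0, K, r, sigma, T, M = 5, 100.0, 100.0, 0.05, 0.3, 1.0, 100_000

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Closed-form E[(G - K)^+] for the geometric mean G of d independent GBMs:
# log G(T) ~ N(mu, v), so this is the usual lognormal call expectation.
mu = log(s0) + (r - 0.5 * sigma**2) * T
v = sigma**2 * T / d
d1 = (mu - log(K) + v) / sqrt(v)
EX = exp(mu + 0.5 * v) * Phi(d1) - K * Phi(d1 - sqrt(v))

# Sample terminal stock values exactly, then both payoffs.
Z = rng.normal(size=(M, d))
ST = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
Y = np.maximum(ST.mean(axis=1) - K, 0.0)                    # arithmetic basket call payoff
X = np.maximum(np.exp(np.log(ST).mean(axis=1)) - K, 0.0)    # geometric basket call payoff

C = np.cov(X, Y)
beta = C[0, 1] / C[0, 0]        # beta* = Cov(X, Y) / Var(X)
V = Y - beta * (X - EX)

disc = exp(-r * T)
for name, s in [("plain MC", Y), ("control variate", V)]:
    print(f"{name:16s}: {disc*s.mean():.4f} +- {disc*1.96*s.std(ddof=1)/sqrt(M):.4f}")
```

Because the arithmetic and geometric payoffs are highly correlated, the confidence interval shrinks dramatically at essentially the same cost per sample.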

8. State and derive Itô’s formula.

Solution.

Theorem 0.4. Consider the SDE

\[ dX(t) = a(t, X(t))\, dt + b(t, X(t))\, dW_t. \]

Let $g : [0, \infty) \times \mathbb{R} \to \mathbb{R}$ be a given bounded function in $C^2([0, \infty) \times \mathbb{R})$. Then $y(t) := g(t, X(t))$ satisfies the SDE

\[ dy(t) = \Big( g_t(t, X(t)) + a(t, X(t))\, g_x(t, X(t)) + \frac{b^2(t, X(t))}{2}\, g_{xx}(t, X(t)) \Big) dt + b(t, X(t))\, g_x(t, X(t))\, dW_t. \]

Proof. In integral form, the claim reads

\[ g(\tau, X(\tau)) - g(0, X(0)) = \int_0^\tau \Big( g_t + a g_x + \frac{b^2}{2} g_{xx} \Big)(t, X(t))\, dt + \int_0^\tau (b g_x)(t, X(t))\, dW_t. \]

Let $\bar{X}$ be the forward Euler approximation of $X$, i.e.

\[ \bar{X}(t_{n+1}) - \bar{X}(t_n) = a(t_n, \bar{X}(t_n))\, \Delta t_n + b(t_n, \bar{X}(t_n))\, \Delta W_n. \]

Taylor expansion of $g$ up to second order gives

\[ g(t_{n+1}, \bar{X}(t_{n+1})) - g(t_n, \bar{X}(t_n)) = g_t\, \Delta t_n + g_x\, \Delta \bar{X}(t_n) + \tfrac{1}{2} g_{tt}\, \Delta t_n^2 + g_{tx}\, \Delta t_n \Delta \bar{X}(t_n) + \tfrac{1}{2} g_{xx} (\Delta \bar{X}(t_n))^2 + o\big(\Delta t_n^2 + |\Delta \bar{X}(t_n)|^2\big), \]

with all derivatives of $g$ evaluated at $(t_n, \bar{X}(t_n))$. Summing over the time steps and inserting the Euler step for $\Delta \bar{X}(t_n)$,

\[ g(t_m, \bar{X}(t_m)) - g(0, \bar{X}(0)) = \sum_{n=0}^{m-1} g_t\, \Delta t_n + \sum_{n=0}^{m-1} \big( \bar{a} g_x\, \Delta t_n + \bar{b} g_x\, \Delta W_n \big) + \frac{1}{2} \sum_{n=0}^{m-1} \bar{b}^2 g_{xx} (\Delta W_n)^2 + \sum_{n=0}^{m-1} \Big( (\bar{b} g_{tx} + \bar{a} \bar{b} g_{xx})\, \Delta t_n \Delta W_n + \big( \tfrac{1}{2}(g_{tt} + \bar{a}^2 g_{xx}) + \bar{a} g_{tx} \big)\, \Delta t_n^2 \Big) + \sum_{n=0}^{m-1} o\big(\Delta t_n^2 + |\Delta \bar{X}(t_n)|^2\big). \]

Let $\alpha_i = (\bar{b}^2 g_{xx})(t_i, \bar{X}(t_i))$ and $Y = \frac{1}{2} \sum_{n=0}^{m-1} \alpha_n \big( (\Delta W_n)^2 - \Delta t_n \big)$. Then

\[ 4\, E[Y^2] = \sum_{i \ne j} E\big[ \alpha_i \alpha_j ((\Delta W_i)^2 - \Delta t_i)((\Delta W_j)^2 - \Delta t_j) \big] + \sum_i E\big[ \alpha_i^2 ((\Delta W_i)^2 - \Delta t_i)^2 \big] = 2 \sum_{i > j} E\big[ \alpha_i \alpha_j ((\Delta W_j)^2 - \Delta t_j) \big]\, \underbrace{E\big[ (\Delta W_i)^2 - \Delta t_i \big]}_{=\, 0} + \sum_i E[\alpha_i^2]\, \underbrace{E\big[ ((\Delta W_i)^2 - \Delta t_i)^2 \big]}_{=\, 2 \Delta t_i^2}, \]

using that $(\Delta W_i)^2 - \Delta t_i$ is independent of everything generated up to time $t_i$ and has mean zero. Thus $E[Y^2] \to 0$ as $\Delta t_{\max} \to 0$, and therefore

\[ \sum_{n=0}^{m-1} \bar{b}^2 g_{xx} (\Delta W_n)^2 \xrightarrow{L^2} \int_0^t b^2 g_{xx}(s, X(s))\, ds. \]

Applying the same analysis to the terms in $\sum_{n=0}^{m-1} o(\Delta t_n^2 + |\Delta \bar{X}(t_n)|^2)$ and passing to the limit in the remaining sums, we get the result.

9. State and derive the Feynman-Kac formula.

Solution.
Suppose that $a$, $b$, $g$, $h$ and $V$ are bounded and smooth functions. Let $X$ be the solution of the SDE

\[ dX(t) = a(t, X(t))\, dt + b(t, X(t))\, dW_t, \]

and let

\[ u(x, t) = E\Big[ g(X(T))\, e^{\int_t^T V(s, X(s))\, ds} \,\Big|\, X(t) = x \Big] - E\Big[ \int_t^T h(s, X(s))\, e^{\int_t^s V(\tau, X(\tau))\, d\tau}\, ds \,\Big|\, X(t) = x \Big]. \]

Then $u$ is the solution of the PDE

\[ u_t + a u_x + \frac{b^2}{2} u_{xx} + V u = h, \quad 0 < t < T, \; x \in \mathbb{R}, \qquad u(T, x) = g(x). \tag{6} \]

Proof. Let $u$ be the solution of (6) and

\[ G(s) := e^{\int_t^s V(\tau, X(\tau))\, d\tau}, \]

so that $dG(s) = V(s, X(s))\, G(s)\, ds$ and $G(t) = 1$. Using Itô's formula, we can write

\[ d\big( u(s, X(s))\, G(s) \big) = G \Big[ \Big( u_s + a u_x + \frac{b^2}{2} u_{xx} \Big)\, ds + b u_x\, dW_s \Big] + u V G\, ds. \]
Integrating from $t$ to $T$ on both sides of the last equation, taking conditional expectations, and using $G(t) = 1$ and $u(T, \cdot) = g$, we get

\[ E[g(X(T))\, G(T) \mid X(t) = x] - u(x, t) = E\Big[ \int_t^T \Big( u_s + a u_x + \frac{b^2}{2} u_{xx} \Big) G\, ds \,\Big|\, X(t) = x \Big] + E\Big[ \int_t^T b u_x G\, dW_s \,\Big|\, X(t) = x \Big] + E\Big[ \int_t^T u V G\, ds \,\Big|\, X(t) = x \Big]. \]

The Itô integral has zero expectation, and by (6) we have $u_s + a u_x + \frac{b^2}{2} u_{xx} = h - V u$, so the right-hand side equals

\[ E\Big[ \int_t^T G (h - V u)\, ds \,\Big|\, X(t) = x \Big] + E\Big[ \int_t^T u V G\, ds \,\Big|\, X(t) = x \Big] = E\Big[ \int_t^T G h\, ds \,\Big|\, X(t) = x \Big]. \]

Therefore

\[ u(x, t) = E[g(X(T))\, G(T) \mid X(t) = x] - E\Big[ \int_t^T G h\, ds \,\Big|\, X(t) = x \Big]. \]

10. State and derive a PDE that describes the evolution of the pdf for a process solving

\[ dX(t) = a(X(t))\, dt + b(X(t))\, dW_t, \qquad X(0) = x_0. \]

Solution.

Corollary 0.4.1. Let

\[ u(x, t) := E[g(X(T)) \mid X(t) = x] = \int_{\mathbb{R}} g(y)\, P(y, T; x, t)\, dy. \]

Then the density $P$, as a function of its first two variables, solves the Kolmogorov forward equation, also called the Fokker–Planck equation,

\[ -\partial_s P(y, s; x, t) - \partial_y \big( a(y)\, P(y, s; x, t) \big) + \tfrac{1}{2} \partial_y^2 \big( b^2(y)\, P(y, s; x, t) \big) = 0, \qquad P(y, t; x, t) = \delta(x - y), \tag{7} \]

where $\delta$ is the Dirac delta measure concentrated at zero.

Proof. Let $\hat{P}$ be the solution of the Fokker–Planck equation and $u$ the solution of the backward Kolmogorov equation, $u_s + a u_y + \tfrac{1}{2} b^2 u_{yy} = 0$. Integrating by parts in $y$, we have

\[ 0 = \int_t^T \!\!\int_{\mathbb{R}} \Big( u_s + a u_y + \frac{b^2}{2} u_{yy} \Big)\, \hat{P}(y, s; x, t)\, dy\, ds = \int_t^T \!\!\int_{\mathbb{R}} \Big( -\partial_s \hat{P} - \partial_y (a \hat{P}) + \frac{1}{2} \partial_y^2 (b^2 \hat{P}) \Big)\, u(y, s)\, dy\, ds + \Big[ \int_{\mathbb{R}} u(y, s)\, \hat{P}(y, s; x, t)\, dy \Big]_{s=t}^{s=T}. \]

By construction of $\hat{P}$, the double integral on the right-hand side vanishes, and we conclude that

\[ 0 = \Big[ \int_{\mathbb{R}} u(y, s)\, \hat{P}(y, s; x, t)\, dy \Big]_{s=t}^{s=T} = \int_{\mathbb{R}} g(y)\, \hat{P}(y, T; x, t)\, dy - \int_{\mathbb{R}} u(y, t)\, \underbrace{\hat{P}(y, t; x, t)}_{=\, \delta(x - y)}\, dy. \]

Since

\[ \int_{\mathbb{R}} u(y, t)\, \delta(x - y)\, dy = u(x, t), \]

we conclude that

\[ u(x, t) = \int_{\mathbb{R}} g(y)\, \hat{P}(y, T; x, t)\, dy = E[g(X(T)) \mid X(t) = x], \]

so $\hat{P}$ is indeed the transition density of $X$.
11. State and derive the central limit theorem. State the Berry–Esseen theorem.

Solution.

Theorem 0.5 (Central Limit Theorem). Assume that $(S_j)_{j \in \mathbb{N}}$ are i.i.d. random variables such that $E[S_j] = 0$ and $E[S_j^2] = 1$ for all $j \in \mathbb{N}$. Then

\[ \sum_{j=1}^{M} \frac{S_j}{\sqrt{M}} \xrightarrow{\mathcal{L}} \nu, \]

where $\xrightarrow{\mathcal{L}}$ denotes convergence in law (weak convergence) and $\nu \sim N(0, 1)$.

Proof. Let $f(t) := E[e^{it S_j}]$. The $m$-th derivative of $f$ is

\[ f^{(m)}(t) = E[i^m S_j^m e^{it S_j}]. \]

So, by independence,

\[ E\Big[ e^{it \sum_{j=1}^M S_j / \sqrt{M}} \Big] = \Big( f\Big( \frac{t}{\sqrt{M}} \Big) \Big)^{\!M} = \Big( f(0) + \frac{t}{\sqrt{M}}\, f'(0) + \frac{t^2}{2M}\, f''(0) + o\Big( \frac{1}{M} \Big) \Big)^{\!M} \]

for fixed $t$. We have

\[ f(0) = E[1] = 1, \qquad f'(0) = i\, E[S_j] = 0, \qquad f''(0) = i^2 E[S_j^2] = -1. \]

Therefore,

\[ E\Big[ e^{it \sum_{j=1}^M S_j / \sqrt{M}} \Big] = \Big( 1 - \frac{t^2}{2M} + o\Big( \frac{1}{M} \Big) \Big)^{\!M} \to e^{-t^2/2} = E[e^{it\nu}] \]

as $M \to \infty$.
Finally, convergence of characteristic functions implies weak convergence: for a bounded smooth test function $g$ with integrable Fourier transform, Parseval's relation gives

\[ E\Big[ g\Big( \sum_{j=1}^M S_j / \sqrt{M} \Big) \Big] = \frac{1}{2\pi} \int_{\mathbb{R}} E\Big[ e^{it \sum_{j=1}^M S_j / \sqrt{M}} \Big]\, \overline{\mathcal{F} g(t)}\, dt \longrightarrow \frac{1}{2\pi} \int_{\mathbb{R}} e^{-t^2/2}\, \overline{\mathcal{F} g(t)}\, dt = E[g(\nu)] \]

as $M \to \infty$, where $\mathcal{F} g$ is the Fourier transform of $g$ and the bar denotes complex conjugation.

Theorem 0.6 (Berry–Esseen). Assume

\[ \lambda = \frac{E\big[ |Y - E(Y)|^3 \big]^{1/3}}{\sigma_Y} < \infty. \]

Then we have a uniform estimate in the CLT, given by

\[ |F_{Z_M}(x) - \Phi(x)| \le \frac{C_{BE}\, \lambda^3}{(1 + |x|)^3 \sqrt{M}}, \]

where $\Phi$ is the cdf of a standard normal, $C_{BE} \approx 30.51175$, $F_{Z_M}(x) = P(Z_M \le x)$ for all $x \in \mathbb{R}$, and

\[ Z_M = \frac{\sqrt{M}}{\sigma_Y} \Big( \frac{1}{M} \sum_{i=1}^{M} Y(\omega_i) - E(Y) \Big). \]
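A quick empirical illustration of the CLT, using centered and normalized uniform random variables as the $S_j$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
M, K = 100, 50_000                  # M summands per sample, K samples

# S_j uniform on [-sqrt(3), sqrt(3)]: mean 0, variance 1
S = rng.uniform(-sqrt(3.0), sqrt(3.0), (K, M))
Z = S.sum(axis=1) / sqrt(M)

# Compare the empirical cdf of Z with the standard normal cdf at a few points
for x in [-1.0, 0.0, 1.0]:
    Phi = 0.5 * (1.0 + erf(x / sqrt(2.0)))
    print(f"x = {x:+.1f}: empirical {np.mean(Z <= x):.4f}  vs  Phi(x) = {Phi:.4f}")
```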

12. Show how to apply the Monte Carlo method to compute an integral and discuss the corresponding error.

Solution.
Let us consider the integral

\[ I := \int_{[0,1]^d} f(x)\, dx. \]

Let $p$ be the pdf of the uniform distribution on $[0,1]^d$. Then we can write

\[ I = \int_{\mathbb{R}^d} f(x)\, p(x)\, dx = E[f(X)], \]

where $X$ is uniformly distributed on $[0,1]^d$. The MC estimate in this case is given by

\[ I_N = \frac{1}{N} \sum_{n=1}^{N} f(X(\omega_n)), \]

where $\{X(\omega_n)\}_{n=1}^N$ are sampled uniformly in $[0,1]^d$ by sampling the components of each $X(\omega_n)$ independently and uniformly on $[0,1]$. The statistical error $E_N$ is defined by

\[ E_N := \frac{1}{N} \sum_{n=1}^{N} f(X(\omega_n)) - \int_{[0,1]^d} f(x)\, dx = \frac{1}{N} \sum_{n=1}^{N} \big( f(x_n) - E[f(X)] \big), \]

where $x_n$ denotes an observed value of $X$.


By the CLT, we know that

\[ \sqrt{N}\, E_N \xrightarrow{\mathcal{L}} \sigma \nu, \]

where $\nu \sim N(0, 1)$ and

\[ \sigma^2 = \int_{[0,1]^d} f^2(x)\, dx - \Big( \int_{[0,1]^d} f(x)\, dx \Big)^2. \]

In practice, $\sigma^2$ is approximated by the sample variance

\[ \hat{\sigma}^2 := \frac{1}{N-1} \sum_{j=1}^{N} \Big( f(x_j) - \frac{1}{N} \sum_{n=1}^{N} f(x_n) \Big)^2. \]

We may select a confidence level $1 - \alpha$ and an appropriate $C_\alpha$ such that $\Phi(C_\alpha) = 1 - \alpha/2$, where $\Phi$ is the cdf of a standard Gaussian. For large $N$, we then have

\[ P\Big( |E_N| \le \frac{C_\alpha\, \hat{\sigma}}{\sqrt{N}} \Big) \approx 1 - \alpha. \]
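A minimal sketch of this estimator with its normal-confidence interval, assuming the test integrand $f(x) = \prod_{i=1}^d \sin(\pi x_i)$, whose exact integral over $[0,1]^d$ is $(2/\pi)^d$:

```python
import numpy as np

rng = np.random.default_rng(8)
d, N, C_alpha = 3, 100_000, 1.96        # 95% confidence: Phi(1.96) ~ 0.975

f = lambda x: np.prod(np.sin(np.pi * x), axis=1)

x = rng.uniform(size=(N, d))
fx = f(x)
I_N = fx.mean()
half_width = C_alpha * fx.std(ddof=1) / np.sqrt(N)

print(f"I_N = {I_N:.5f} +- {half_width:.5f}   (exact {(2/np.pi)**d:.5f})")
```

Note that the $O(N^{-1/2})$ statistical error is independent of the dimension $d$, which is the main motivation for MC integration in high dimensions.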

13. Describe the sampling method of acceptance-rejection.

Solution.
In the acceptance-rejection technique, we generate samples from an arbitrary pdf $\rho_Y(x)$ by the use of an auxiliary pdf $\rho_X(x)$. We have the following assumptions:

- It is simple to sample from $\rho_X$.
- There exists $0 < \varepsilon \le 1$ such that

\[ \varepsilon\, \frac{\rho_Y(x)}{\rho_X(x)} \le 1, \quad \forall x \in \mathbb{R}. \]

The algorithm is given by:

(1) Set $k = 1$.
(2) Sample two independent random variables: $X_k$ from $\rho_X$ and $U_k \sim \mathrm{Unif}(0, 1)$.
(3) If $U_k \le \varepsilon\, \rho_Y(X_k) / \rho_X(X_k)$, accept and return $Y = X_k$; otherwise reject $X_k$, set $k = k + 1$, and go to step (2).

Let us verify that $Y$ sampled by the acceptance-rejection technique indeed has density $\rho_Y$. The acceptance probability of each trial is

\[ P\Big( U_k \le \varepsilon\, \frac{\rho_Y(X_k)}{\rho_X(X_k)} \Big) = \int_{\mathbb{R}} \int_0^{\varepsilon \rho_Y(x)/\rho_X(x)} du\; \rho_X(x)\, dx = \varepsilon \int_{\mathbb{R}} \rho_Y(x)\, dx = \varepsilon. \]

Let $K(\omega)$ be the first value of $k$ for which $X_k$ is accepted as a realization of $Y$. We want to show that $X_K$ has the desired density $\rho_Y$. Let $B$ be an open set. Then

\[ P(X_K \in B) = \sum_{k \ge 1} P(X_k \in B, K = k) = \sum_{k \ge 1} \underbrace{P\Big( X_k \in B,\; U_k \le \varepsilon \frac{\rho_Y(X_k)}{\rho_X(X_k)} \Big)}_{\text{does not depend on } k} \prod_{m=1}^{k-1} \underbrace{P\Big( U_m > \varepsilon \frac{\rho_Y(X_m)}{\rho_X(X_m)} \Big)}_{=\, 1 - \varepsilon} = P\Big( X_1 \in B,\; U_1 \le \varepsilon \frac{\rho_Y(X_1)}{\rho_X(X_1)} \Big) \sum_{k \ge 1} (1 - \varepsilon)^{k-1} = P\Big( X_1 \in B,\; U_1 \le \varepsilon \frac{\rho_Y(X_1)}{\rho_X(X_1)} \Big)\, \frac{1}{\varepsilon}. \]
Since

\[ P\Big( X_1 \in B,\; U_1 \le \varepsilon \frac{\rho_Y(X_1)}{\rho_X(X_1)} \Big) = \int_B \int_0^{\varepsilon \rho_Y(x)/\rho_X(x)} du\; \rho_X(x)\, dx = \varepsilon \int_B \frac{\rho_Y(x)}{\rho_X(x)}\, \rho_X(x)\, dx = \varepsilon \int_B \rho_Y(x)\, dx, \]

substituting in the previous equation gives

\[ P(X_K \in B) = \int_B \rho_Y(x)\, dx. \]
The expected number of trials per accepted sample is

\[ E[K] = \sum_{k \ge 1} k\, P(K = k) = \sum_{k \ge 1} k (1 - \varepsilon)^{k-1} \varepsilon = \frac{1}{\varepsilon}. \]

Therefore, the smaller the value of $\varepsilon$, the bigger $E[K]$ is. For $\varepsilon$ close to $1$, the algorithm is efficient in terms of cost, since $E[K] \approx 1$.
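A minimal sketch of the algorithm, assuming the target density $\rho_Y(x) = 6x(1-x)$ on $[0,1]$ and the uniform proposal $\rho_X = 1$, for which $\max \rho_Y / \rho_X = 3/2$ allows $\varepsilon = 2/3$:

```python
import numpy as np

rng = np.random.default_rng(9)

rho_Y = lambda x: 6.0 * x * (1.0 - x)   # target density on [0, 1]
rho_X = lambda x: np.ones_like(x)       # proposal: uniform on [0, 1]
eps = 2.0 / 3.0                         # eps * rho_Y / rho_X <= 1 since max rho_Y = 1.5

def sample_Y(n):
    """Acceptance-rejection: draw until at least n accepted samples of rho_Y."""
    out, trials = [], 0
    while len(out) < n:
        X = rng.uniform(size=n)
        U = rng.uniform(size=n)
        out.extend(X[U <= eps * rho_Y(X) / rho_X(X)].tolist())
        trials += n
    return np.array(out[:n]), len(out) / trials

Y, rate = sample_Y(100_000)
print(f"sample mean {Y.mean():.4f} (exact 0.5), acceptance rate {rate:.3f} (theory eps = {eps:.3f})")
```

The observed acceptance rate matches $\varepsilon$, so the cost per accepted sample is $E[K] = 1/\varepsilon = 1.5$ trials.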

14. Motivate the use of variance reduction techniques. Describe two techniques and discuss their computational efficiency.

Solution.
The idea behind variance reduction techniques comes from the MC error expression

\[ E_M = E[Y] - \frac{1}{M} \sum_{j=1}^{M} Y(\omega_j), \qquad |E_M| \approx C_\alpha \sqrt{\frac{\mathrm{Var}[Y]}{M}}. \]

We introduce this kind of technique to "reduce" $\mathrm{Var}[Y]$ while keeping $E[Y]$ unchanged: we look for an estimator with the same expectation as the plain MC one but smaller variance.
First technique: Control variates.
The new variable is defined as

\[ V := Y - \beta (X - E[X]) \]

for a given $\beta$ and a control variate $X$ with known mean $E[X]$. We define the following unbiased estimator:

\[ V_M = \frac{1}{M} \sum_{j=1}^{M} Y_j - \frac{\beta}{M} \sum_{j=1}^{M} (X_j - E[X]), \]

where $Y_j = Y(\omega_j)$ and $X_j = X(\omega_j)$. Note that $E[V_M] = E[Y]$. For the variance,

\[ \mathrm{Var}[V_M](\beta) = \frac{1}{M} \big( \mathrm{Var}[Y] + \beta^2\, \mathrm{Var}[X] - 2\beta\, \mathrm{Cov}(X, Y) \big). \]

Minimizing the last expression over $\beta$, we get

\[ \beta^* = \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(X)}, \]

and thus

\[ \mathrm{Var}[V_M](\beta^*) = \frac{\mathrm{Var}[Y]}{M} (1 - \rho_{XY}^2) < \frac{\mathrm{Var}[Y]}{M} = \mathrm{Var}[Y_{MC}], \]

where $Y_{MC}$ is the plain MC estimator and $\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$. Therefore, the higher the correlation between $X$ and $Y$, the smaller the variance of $V_M$ and the more efficient the method.
Let us assume that the work needed to sample $X$ is $\delta$ times the work needed to generate $Y$, with $0 < \delta \le 1$. The control variate reduces the total cost if and only if

\[ (1 + \delta)(1 - \rho_{XY}^2)\, \mathrm{Var}[Y] < \mathrm{Var}[Y], \]

that is, if and only if

\[ \rho_{XY}^2 > \frac{\delta}{1 + \delta}. \]
Second technique: Antithetic variates.
Let $Y = g(X)$, where $X$ has a distribution symmetric around its mean, and assume $E[X] = 0$. Then, since $X$ and $-X$ are identically distributed, we have $E[g(X)] = E[g(-X)]$. Therefore, we can write

\[ E[Y] = E\Big[ \frac{g(X) + g(-X)}{2} \Big]. \]

Using the estimator

\[ V_M = \frac{1}{M} \sum_{j=1}^{M} \frac{g(X_j) + g(-X_j)}{2}, \]

we have $E[V_M] = E[g(X)] = E[Y]$. The variance is given by

\[ \mathrm{Var}[V_M] = \frac{1}{4M}\, \mathrm{Var}[g(X) + g(-X)]. \]

The method is efficient if

\[ 2\, \mathrm{Var}[V_M] \le \frac{\mathrm{Var}[Y]}{M}, \]
where we assume that computing the pair $(g(X), g(-X))$ takes double the work of computing $g(X)$ alone. The last inequality is equivalent to

\[ \mathrm{Var}[g(X) + g(-X)] \le 2\, \mathrm{Var}(g(X)), \]

which, since $\mathrm{Var}[g(X) + g(-X)] = 2\, \mathrm{Var}(g(X)) + 2\, \mathrm{Cov}(g(X), g(-X))$, holds if and only if

\[ \mathrm{Cov}(g(X), g(-X)) \le 0. \]
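A minimal sketch comparing plain MC with antithetic variates, assuming $X \sim N(0, 1)$ and the monotone function $g(x) = e^x$, for which $\mathrm{Cov}(g(X), g(-X)) < 0$ and $E[g(X)] = e^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(10)
M = 100_000
g = np.exp

X = rng.normal(size=M)
plain = g(X)                        # plain MC samples of Y = g(X)
anti = 0.5 * (g(X) + g(-X))         # antithetic pairs

print(f"exact      : {np.exp(0.5):.4f}")
print(f"plain MC   : {plain.mean():.4f}, var per sample {plain.var():.4f}")
print(f"antithetic : {anti.mean():.4f}, var per pair   {anti.var():.4f}")
print(f"Cov(g(X), g(-X)) = {np.cov(g(X), g(-X))[0, 1]:.4f}  (negative -> efficient)")
```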

15. State and prove an error representation for the global error

\[ g(X(T)) - g(\bar{X}(T)), \]

where

\[ \frac{dX}{dt} = a(X(t)), \qquad X(0) = X_0, \]

and $\bar{X}$ is an approximation based on a one step method of the form

\[ \bar{X}(t_{n+1}) = \phi(\bar{X}(t_n), \Delta t_n). \]

Solution.
Let $0 = t_0 < t_1 < \ldots < t_N = T$ and let $\bar{X}$ be an approximation based on a one step method of the form $\bar{X}(t_{n+1}) = \phi(\bar{X}(t_n), \Delta t_n)$. We are looking for an error representation for the global error $g(X(T)) - g(\bar{X}(T))$. The global error can be expressed as a weighted sum of the local errors:

\[ g(X(T)) - g(\bar{X}(T)) = \sum_{n=1}^{N} \Big( e(t_n),\; \int_0^1 \psi\big(t_n, \bar{X}(t_n) + s\, e(t_n)\big)\, ds \Big), \]

where $(\cdot, \cdot)$ is the standard scalar product on $\mathbb{R}^d$, $\psi(t, y) = \psi_X(t) \in \mathbb{R}^d$ is the first variation of $u$ defined below, and $e(t_n) = \tilde{X}(t_n) - \bar{X}(t_n)$ is the local error at $t_n$, with $\tilde{X}$ the exact solution on $[t_{n-1}, t_n]$ started at $\tilde{X}(t_{n-1}) = \bar{X}(t_{n-1})$.

To see this, define

\[ u(t, y) = g(X(T; t, y)), \quad t < T, \]

the value of $g(X(T))$ for the exact solution started at $X(t) = y$. Since $u$ is constant along exact trajectories, for all $t, \tau \in [t_{n-1}, t_n]$ we have

\[ u(t, \tilde{X}(t)) = u(\tau, \tilde{X}(\tau)), \]

and in particular

\[ u(t_n, \tilde{X}(t_n)) = u(t_{n-1}, \tilde{X}(t_{n-1})) = u(t_{n-1}, \bar{X}(t_{n-1})), \quad n = 1, \ldots, N. \]

Thus, telescoping,

\[ \sum_{n=1}^{N} \Big( u(t_n, \tilde{X}(t_n)) - u(t_n, \bar{X}(t_n)) \Big) = u(0, X_0) - u(T, \bar{X}(T)) = g(X(T)) - g(\bar{X}(T)). \]

Now we introduce the auxiliary function $U : [0, 1] \to \mathbb{R}$ defined by

\[ U(s) = u\big(t_n, s \tilde{X}(t_n) + (1 - s) \bar{X}(t_n)\big). \]

Then we have

\[ u(t_n, \tilde{X}(t_n)) - u(t_n, \bar{X}(t_n)) = U(1) - U(0) = \int_0^1 U'(s)\, ds, \]

which implies

\[ u(t_n, \tilde{X}(t_n)) - u(t_n, \bar{X}(t_n)) = \Big( e(t_n),\; \int_0^1 \psi\big(t_n, \bar{X}(t_n) + s\, e(t_n)\big)\, ds \Big), \]

where $\psi(t, x) = \psi_X(t)$ and $\psi_X$ solves the dual problem

\[ -\frac{d\psi_X(s)}{ds} = (a')^*\big(X(s; t, y)\big)\, \psi_X(s), \qquad \psi_X(T) = g'\big(X(T; t, y)\big), \]

where $(a')^*(x)$ is the transpose of the Jacobian matrix $a'(x) = \{\partial a_i / \partial x_j(x)\} \in \mathbb{R}^{d \times d}$.

16. State and prove an error representation for the global error

\[ E[g(X(T)) - g(\bar{X}(T))], \]

where

\[ dX(t) = a(X(t))\, dt + b(X(t))\, dW_t, \qquad X(0) = X_0, \]

and $\bar{X}$ is a forward Euler approximation.

Solution.

Theorem 0.7. Let $a$, $b$, $g$ be smooth functions with appropriate decay as $|x| \to \infty$. Then

\[ E[g(\bar{X}(T)) - g(X(T))] = \int_0^T E\big[ \big( \bar{a}(t, \bar{X}) - a(t, \bar{X}(t)) \big)\, u_x(t, \bar{X}(t)) \big]\, dt + \frac{1}{2} \int_0^T E\big[ \big( \bar{b}^2(t, \bar{X}) - b^2(t, \bar{X}(t)) \big)\, u_{xx}(t, \bar{X}(t)) \big]\, dt, \]

where $u$ is the cost-to-go function,

\[ u(t, x) = E[g(X(T)) \mid X(t) = x]. \]
Proof. The cost-to-go function solves the Kolmogorov backward equation

\[ \partial_t u + a\, \partial_x u + \tfrac{1}{2} b^2\, \partial_x^2 u = 0, \qquad u(T, \cdot) = g, \tag{8} \]

which implies

\[ E[g(\bar{X}(T)) - g(X(T))] = E[u(T, \bar{X}(T)) - u(0, \bar{X}(0))], \]

since $u(T, \cdot) = g$ and $u(0, X_0) = E[g(X(T))]$. Applying the Itô formula along the Euler path $\bar{X}$, we have

\[ du = \Big( \partial_t u + \bar{a}\, \partial_x u + \frac{1}{2} \bar{b}^2\, \partial_x^2 u \Big)\, dt + \bar{b}\, \partial_x u\, dW_t. \]

Integrating from $0$ to $T$, using equation (8) to replace $\partial_t u$, and taking expected values, we get

\[ E[g(\bar{X}(T)) - g(X(T))] = \int_0^T E[(\bar{a} - a)\, u_x(t, \bar{X}(t))]\, dt + \frac{1}{2} \int_0^T E[(\bar{b}^2 - b^2)\, u_{xx}(t, \bar{X}(t))]\, dt. \]

17. What is a Brownian bridge? Discuss an algorithm to sample the mid point of the Brownian bridge.

Solution.
The Brownian bridge (here on $[0, 1]$) is defined as the conditional value

\[ W^o(s) := \big( W_s \mid W_1 \big), \quad s \in (0, 1). \]

Since $(W_s, W_1)$ is a Gaussian vector, $W_s \mid W_1$ is also Gaussian. First,

\[ \mathrm{Cov}(W_s, W_1) = E[W_1 W_s] = E[W_s (W_s + W_1 - W_s)] = E[W_s^2] + E[W_s]\, E[W_1 - W_s] = s, \]

using the independence of the increments in the third equality. Hence

\[ E[W^o(s)] = E[W_s] + \frac{\mathrm{Cov}(W_s, W_1)}{\mathrm{Var}(W_1)} \big( W_1 - E[W_1] \big) = \mathrm{Cov}(W_s, W_1)\, W_1 = s W_1 \]

and

\[ \mathrm{Var}[W^o(s)] = \mathrm{Var}[W_s] - \frac{\mathrm{Cov}(W_s, W_1)^2}{\mathrm{Var}(W_1)} = s - (s \wedge 1)^2 = s - s^2. \]
For the midpoint ($s = 1/2$) we have

\[ E[W^o(\tfrac{1}{2})] = \tfrac{1}{2} W_1, \qquad \mathrm{Var}[W^o(\tfrac{1}{2})] = \tfrac{1}{4}. \]

We conclude that

\[ W^o(s) \sim N(s W_1,\; s - s^2), \]

and for the midpoint $W^o(\tfrac{1}{2}) \sim N(\tfrac{1}{2} W_1, \tfrac{1}{4})$. Given a realization of $W_1$, we construct $b$, a realization of $W^o(\tfrac{1}{2})$, by

\[ b = \tfrac{1}{2} W_1 + \tfrac{1}{2}\, z, \qquad z \sim N(0, 1), \]

e.g. in MATLAB, b = 0.5*W1 + 0.5*randn(1,1) (note: randn, not rand, since the sample must be Gaussian).
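The same computation in a short Python sketch; the helper also covers a general interval $[t_0, t_1]$, where the conditional law of the midpoint is $N\big( (W_0 + W_1)/2,\, (t_1 - t_0)/4 \big)$:

```python
import numpy as np

rng = np.random.default_rng(11)

def bridge_midpoint(t0, t1, W0, W1):
    """Sample W((t0+t1)/2) given W(t0) = W0 and W(t1) = W1.
    Conditional law: N((W0 + W1)/2, (t1 - t0)/4)."""
    return 0.5 * (W0 + W1) + np.sqrt(0.25 * (t1 - t0)) * rng.normal()

# On [0, 1]: first sample the endpoint W(1) ~ N(0, 1), then the midpoint given W(1).
W1 = rng.normal()
Wmid = bridge_midpoint(0.0, 1.0, 0.0, W1)
print(f"W(1) = {W1:.4f},  W(1/2) | W(1) = {Wmid:.4f}")
```

Applying this helper recursively to each sub-interval refines a Wiener path to any resolution, which is how the Brownian bridge is typically used in practice.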

18. Define the notions of weak and strong convergence. What is the weak order of convergence for the Forward Euler method for the approximation of Itô SDEs? And the strong order?

Solution.
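For reference: under standard Lipschitz and smoothness assumptions, the forward Euler (Euler–Maruyama) method has strong order $1/2$, i.e. $E[|X(T) - \bar{X}(T)|] = O(\Delta t^{1/2})$, and weak order $1$, i.e. $|E[g(X(T))] - E[g(\bar{X}(T))]| = O(\Delta t)$ for smooth $g$. A sketch estimating the strong rate empirically for GBM, where the exact solution can be evaluated pathwise from the same Brownian increments (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(12)
r, sigma, x0, T, M = 0.05, 0.5, 1.0, 1.0, 100_000

for N in [8, 16, 32, 64]:
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), (M, N))
    X = np.full(M, x0)
    for n in range(N):
        X += r * X * dt + sigma * X * dW[:, n]
    W_T = dW.sum(axis=1)                      # same Brownian path as the scheme
    X_exact = x0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
    err = np.mean(np.abs(X - X_exact))        # strong error E|X(T) - Xbar(T)|
    print(f"N = {N:3d}  strong error = {err:.5f}  err*sqrt(N) = {err*np.sqrt(N):.4f}")
```

The product err*sqrt(N) stays roughly constant, consistent with strong order $1/2$.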

19. Describe the multilevel Forward Euler Monte Carlo method. State and prove its computational complexity when applied to Itô SDEs using an Euler–Maruyama discretization.

Solution.
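A minimal sketch of the multilevel estimator itself (the complexity proof is not reproduced here), assuming GBM, a European payoff, and the telescoping identity $E[g(\bar{X}_L)] = E[g(\bar{X}_0)] + \sum_{\ell=1}^{L} E[g(\bar{X}_\ell) - g(\bar{X}_{\ell-1})]$, with fine and coarse paths coupled through the same Brownian increments; the per-level sample sizes below are illustrative, not the optimal $M_\ell \propto \sqrt{V_\ell / C_\ell}$ allocation from the complexity theorem:

```python
import numpy as np

rng = np.random.default_rng(13)
r, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
g = lambda x: np.maximum(x - 1.0, 0.0)      # example European payoff

def level_samples(M, ell, N0=4):
    """Coupled Euler-Maruyama samples of g(X_ell) - g(X_{ell-1}) (just g(X_0) if ell == 0).
    Level ell uses N0 * 2**ell uniform steps; the coarse path reuses the fine increments."""
    Nf = N0 * 2**ell
    dtf = T / Nf
    dW = rng.normal(0.0, np.sqrt(dtf), (M, Nf))
    Xf = np.full(M, x0)
    for n in range(Nf):
        Xf += r * Xf * dtf + sigma * Xf * dW[:, n]
    if ell == 0:
        return g(Xf)
    Xc = np.full(M, x0)
    dtc = 2.0 * dtf
    for n in range(Nf // 2):
        dWc = dW[:, 2 * n] + dW[:, 2 * n + 1]   # coarse increment = sum of two fine ones
        Xc += r * Xc * dtc + sigma * Xc * dWc
    return g(Xf) - g(Xc)

# Telescoping sum of level-wise sample means; sample sizes shrink on finer levels
# because the coupled differences have small variance.
L, M0 = 5, 400_000
estimate = sum(level_samples(M0 // 2**ell, ell).mean() for ell in range(L + 1))
print(f"MLMC estimate of E[g(X(T))] = {estimate:.4f}")
```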

20. Give a motivation for the Black-Scholes equation.

Solution.

21. Discuss the alternatives to determine the option price in the previous problem numerically, in the case of contracts based on one and many stocks.

Solution.

