Essential Questions For The Exam 2017, AMCS 336, Numerical Methods For Stochastic Differential Equations
May 9, 2018
1. Define the Itô integral by the limit of the forward Euler method and show
that the limit is independent of the partition used in the time discretization.
Given a partition 0 = t_0 < t_1 < … < t_N = T, with W_n := W(t_n) and ∆W_n := W_{t_{n+1}} − W_{t_n}, the forward Euler approximation of the Itô integral ∫₀ᵀ f(t, W(t)) dW(t) is Σ_{n=0}^{N−1} f(t_n, W_n) ∆W_n, and the Itô integral is defined as the limit in L²(Ω) of these sums as the maximum time step tends to zero. Let us suppose that there exists C > 0 such that f : [0, T] × ℝ → ℝ satisfies

|f(t, W(t)) − f(s, W(s))| ≤ C(|t − s| + |W(t) − W(s)|),  0 ≤ s ≤ t ≤ T.   (1)
Theorem 0.1. Under the condition (1), the previous limit is independent of the par-
tition used.
Proof. Let us consider two arbitrary different partitions of the interval [0, T],

P = {0 = t_0 < t_1 < … < t_N = T}  and  S = {0 = s_0 < s_1 < … < s_M = T}.
The corresponding forward Euler approximations are given by

I_P = Σ_{n=0}^{N−1} f(t_n, W_{t_n})(W_{t_{n+1}} − W_{t_n})  and  I_S = Σ_{m=0}^{M−1} f(s_m, W_{s_m})(W_{s_{m+1}} − W_{s_m}).
Let K := P ∪ S = {0 = u_0 < u_1 < … < u_{n_K} = T} be the union of the two partitions, and let ∆t_max be the maximum time step,

∆t_max = max{ max_{0≤n≤N−1} ∆t_n , max_{0≤m≤M−1} ∆s_m }.
Then, we can write

I_P − I_S = Σ_{k=0}^{n_K−1} ∆f_k ∆W_k,

where ∆f_k = f(t_n, W_{t_n}) − f(s_m, W_{s_m}), ∆W_k = W_{u_{k+1}} − W_{u_k}, and the indices m, n are such that u_k ∈ [t_n, t_{n+1}) and u_k ∈ [s_m, s_{m+1}). Therefore,
" #
X
E[(IP − IS )2 ] = E ∆fj ∆fk ∆Wj ∆Wk
j,k
X X
= E[(∆fk ∆Wk )2 ] + 2 E[∆fj ∆fk ∆Wj ∆Wk ]
k j<k
X X
= E[(∆fk )2 ]E[(∆Wk )2 ] = E[(∆fk )2 ]∆t2k
k k
These equalities hold because the increments of the Wiener process over disjoint intervals are independent: ∆f_k depends on the Wiener process only up to time u_k, while ∆W_k is the increment over [u_k, u_{k+1}). Hence for j < k we get E[∆f_j ∆f_k ∆W_j ∆W_k] = E[∆f_j ∆f_k ∆W_j] E[∆W_k] and E[∆W_k] = 0, and similarly E[(∆f_k ∆W_k)²] = E[(∆f_k)²] E[(∆W_k)²].
Taking squares in (1) we get

(∆f_k)² ≤ 2C²( (∆′t_k)² + (∆′W_k)² ),

where ∆′t_k = t_n − s_m ≤ ∆t_max and ∆′W_k = W_{t_n} − W_{s_m} (assuming s_m ≤ t_n; the symmetric case is identical). Using E[(∆′W_k)²] = ∆′t_k ≤ ∆t_max, we obtain

E[(I_P − I_S)²] ≤ 2C² Σ_k ∆u_k ( (∆′t_k)² + E[(∆′W_k)²] ) ≤ 2C² T ∆t_max (1 + ∆t_max) → 0

as ∆t_max → 0, which proves that the limit is independent of the partition.
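As a numerical sanity check of this partition independence, the following Python sketch (NumPy assumed; all names and parameter values are illustrative, not from the notes) evaluates the forward Euler sums for f(t, W) = W on two different partitions drawn from one common fine path, and compares them with the exact Itô value ∫₀ᵀ W dW = (W_T² − T)/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

# Two different partitions: P uniform, S random; K is their union.
P = np.linspace(0.0, T, 2**10 + 1)
S = np.sort(np.concatenate(([0.0, T], rng.uniform(0.0, T, size=700))))
K = np.union1d(P, S)

# One Wiener path sampled on the union grid K.
dW = rng.normal(0.0, np.sqrt(np.diff(K)))
W = np.concatenate(([0.0], np.cumsum(dW)))

def forward_euler_sum(partition):
    """Forward Euler sum  sum_n f(t_n, W_{t_n}) dW_n  with f(t, w) = w."""
    idx = np.searchsorted(K, partition)   # partition points all lie in K
    Wp = W[idx]
    return np.sum(Wp[:-1] * np.diff(Wp))

I_P, I_S = forward_euler_sum(P), forward_euler_sum(S)
print(I_P, I_S, 0.5 * (W[-1]**2 - T))    # all close when dt_max is small
```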
2. Formulate two mathematical models based on stochastic differential equa-
tions. Describe the goal of the computation. Discuss the choice of the
driving noise, the numerical method to approximate them and whether it
is natural to use Itô or Stratonovich forms.
Solution.
First Model: Stock price
Let S(t) denote the stock value at time t. Assume that S(t) satisfies the differential
equation
dS(t)/dt = a(t)S(t).
The solution to this equation is
S(t) = S(0) e^{∫₀ᵗ a(s) ds}.
If we want to take into account the uncertainty of the evolution of the stock price, we
can introduce a random noise. An example of such a model is the SDE

dS(t) = r(t)S(t) dt + σ(t)S(t) dW_t,

where the term involving dW_t introduces the random noise. The Itô form is the natural one here, since the integrand may not look into the future: the coefficients at time t may only depend on information available at time t, which matches the information flow in a market. We can use the forward Euler (Euler–Maruyama) method to compute a numerical solution of this SDE. The approximation is given by

S_{n+1} = S_n + r_n S_n ∆t_n + σ_n S_n ∆W_n,

where ∆W_n, n = 0, 1, …, N − 1, are independent normally distributed random variables with mean 0 and variance ∆t_n = t_{n+1} − t_n.
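A minimal Python sketch of this forward Euler (Euler–Maruyama) scheme for the stock model; the function name and parameter values are assumptions for the example, not part of the notes.

```python
import numpy as np

def euler_maruyama_gbm(S0, r, sigma, T, N, rng):
    """Forward Euler path of dS = r S dt + sigma S dW on a uniform grid."""
    dt = T / N
    S = np.empty(N + 1)
    S[0] = S0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))        # dW_n ~ N(0, dt)
        S[n + 1] = S[n] + r * S[n] * dt + sigma * S[n] * dW
    return S

rng = np.random.default_rng(1)
path = euler_maruyama_gbm(S0=100.0, r=0.05, sigma=0.2, T=1.0, N=250, rng=rng)
print(path[-1])   # one sampled terminal stock value S(T)
```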
Second Model: Epidemiology
Let x(t) be the fraction of a population that has an infectious disease at time t. The rate of change in x(t) usually follows an equation of the form
dx
= ax(1 − x) − bx + c(1 − x)
dt
where
• a > 0 is the rate of person to person transmission.
• b > 0 is the rate of recovery.
• c > 0 is the rate of transmission from an external source.
To convert this model into an SDE model, we have to introduce an assumption concerning the diffusion function σ²(x), which, under suitable assumptions, can be taken equal to σ²(x) = εx(1 − x). Then, the SDE model is given by

dx_t = µ(x_t) dt + σ(x_t) dW_t,  with  µ(x) = ax(1 − x) − bx + c(1 − x),  σ²(x) = εx(1 − x).
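The epidemic SDE can be simulated with the same forward Euler scheme; the sketch below clips the state to [0, 1] so the square root in σ stays well defined (the clipping and the parameter values are illustrative choices, not part of the notes).

```python
import numpy as np

def epidemic_path(x0, a, b, c, eps, T, N, rng):
    """Euler path of dx = (a x(1-x) - b x + c(1-x)) dt + sqrt(eps x(1-x)) dW."""
    dt = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        mu = a * x[n] * (1.0 - x[n]) - b * x[n] + c * (1.0 - x[n])
        sigma = np.sqrt(eps * x[n] * (1.0 - x[n]))
        step = mu * dt + sigma * rng.normal(0.0, np.sqrt(dt))
        x[n + 1] = np.clip(x[n] + step, 0.0, 1.0)   # keep the fraction in [0, 1]
    return x

rng = np.random.default_rng(2)
x = epidemic_path(x0=0.1, a=0.8, b=0.3, c=0.01, eps=0.05, T=50.0, N=5000, rng=rng)
print(x[-1])   # infected fraction at the final time
```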
3. Formulate the basic properties of a Wiener process.
Solution.
The one dimensional Wiener process is a mapping W : [0, ∞) × Ω → ℝ, where Ω is a probability space with an associated probability measure. This process satisfies the following properties:
• W0 = 0.
• The mapping t ↦ W_t is almost surely continuous on [0, ∞).
• The increments of W_t are independent and normally distributed random variables: W_t − W_s ∼ N(0, t − s) for 0 ≤ s < t.
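These properties translate directly into a sampling routine: a path is built by accumulating independent N(0, ∆t) increments. A minimal Python sketch (names are illustrative):

```python
import numpy as np

def wiener_path(T, N, rng):
    """Sample W at t_n = nT/N: W_0 = 0 plus independent N(0, dt) increments."""
    dt = T / N
    dW = rng.normal(0.0, np.sqrt(dt), size=N)
    return np.concatenate(([0.0], np.cumsum(dW)))

W = wiener_path(T=1.0, N=1000, rng=np.random.default_rng(3))
print(W[0], W[-1])   # W_0 = 0 and an N(0, T) sample at t = T
```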
4. Show by an example that piecewise linear approximation of the Wiener
process in a SDE approximates Stratonovich integrals.
Solution.
Let W^N be the piecewise linear interpolation of W on a grid 0 = t_0 < t_1 < … < t_N = T. Consider, for example, the integral ∫₀ᵀ W dW. The piecewise linear approximation gives

∫₀ᵀ W^N(t) dW^N(t) = Σ_{n=0}^{N−1} ∫_{t_n}^{t_{n+1}} W^N(t) Ẇ^N(t) dt = Σ_{n=0}^{N−1} ½(W²_{t_{n+1}} − W²_{t_n}) = ½ W_T²

for every N, which is the Stratonovich value ∫₀ᵀ W ∘ dW = ½W_T² and not the Itô value ∫₀ᵀ W dW = ½(W_T² − T). Hence smooth, piecewise linear approximations of the Wiener path lead to Stratonovich integrals.
Solution.
Let X be the solution of the SDE dX(t) = a(X(t), t) dt + b(X(t), t) dW_t and let û solve the Kolmogorov backward equation

L*û := û_t + aû_x + ½b²û_xx = 0,  t < T,
û(·, T) = g(·).

We want to verify that û(x, t) = E[g(X(T)) | X(t) = x]. By Itô's formula,

dû(X(t), t) = (û_t + aû_x + ½b²û_xx) dt + bû_x dW_t = L*û dt + bû_x dW_t.
Integrating the last expression from t to T and using L*û = 0,

g(X(T)) − û(X(t), t) = ∫ₜᵀ bû_x(X(s), s) dW_s.

Taking expectation conditional on X(t) = x and using the fact that the expected value of the Itô integral is zero, we get

û(x, t) = E[g(X(T)) | X(t) = x],

which proves the claim since the solution of the Kolmogorov backward equation is unique.
Solution.
Consider the SDE

dX(t) = a(t, X(t)) dt + b(t, X(t)) dW_t

and its forward Euler approximation

X̄(t_{n+1}) = X̄(t_n) + a(t_n, X̄(t_n)) ∆t_n + b(t_n, X̄(t_n)) ∆W_n,

where ∆t_n = t_{n+1} − t_n, ∆W_n = W_{t_{n+1}} − W_{t_n} for a given discretization 0 = t_0 < t_1 < … < t_N = T. We can rewrite the discretization as

X̄(t) − X̄(t_n) = ∫_{t_n}^t ā(s, X̄) ds + ∫_{t_n}^t b̄(s, X̄) dW_s,  t_n ≤ t ≤ t_{n+1},

where ā(s, X̄) = a(t_n, X̄(t_n)) and b̄(s, X̄) = b(t_n, X̄(t_n)) for s ∈ [t_n, t_{n+1}).
Theorem 0.2. Let a, b and g be smooth functions that decay sufficiently fast as |x| → ∞. Then the weak error satisfies E[g(X̄(T)) − g(X(T))] = O(∆t_max).

To see this, let u solve the Kolmogorov backward equation u_t + au_x + ½b²u_xx = 0 with u(T, ·) = g, so that

u(x, t) = E[g(X(T)) | X(t) = x]

and in particular

u(x, 0) = E[g(X(T)) | X(0) = x].   (3)

Then, by Itô's formula,

du(t, X̄(t)) = (u_t + āu_x + ½b̄²u_xx)(t, X̄(t)) dt + b̄u_x(t, X̄(t)) dW_t
= (−au_x − ½b²u_xx + āu_x + ½b̄²u_xx)(t, X̄(t)) dt + b̄u_x(t, X̄(t)) dW_t.

Taking the integral from 0 to T,

u(T, X̄(T)) − u(0, X̄(0)) = ∫₀ᵀ (ā − a)u_x(t, X̄(t)) dt + ∫₀ᵀ ½(b̄² − b²)u_xx(t, X̄(t)) dt + ∫₀ᵀ b̄u_x(t, X̄(t)) dW_t.

Taking expectations, the weak error becomes ∫₀ᵀ f₁(t) dt + ∫₀ᵀ f₂(t) dt, where

f₁(t) := E[(ā(t, X̄) − a(t, X̄(t))) u_x(t, X̄(t))] = O(∆t_n)   (4)

and

f₂(t) := ½ E[(b̄²(t, X̄) − b²(t, X̄(t))) u_xx(t, X̄(t))] = O(∆t_n).   (5)
Proof. Since ā(t, X̄) = a(t_n, X̄(t_n)) for t_n ≤ t ≤ t_{n+1},

f₁(t_n) = E[(ā(t_n, X̄) − a(t_n, X̄(t_n))) u_x(t_n, X̄(t_n))] = 0,

and by smoothness f₁(t) = f₁(t_n) + O(t − t_n) = O(∆t_n) on [t_n, t_{n+1}]. The same argument gives f₂(t) = O(∆t_n), which proves the O(∆t_max) bound.
Solution.
We assume that the price of each stock S_i on which the payoff depends has the dynamics

dS_i = a_i dt + Σ_{j=1}^d b_{ij} dW_j = a_i dt + b_{ij} dW_j  (Einstein summation convention).
The portfolio value I is given by

I_t = −f + α_i S_i + βB_t,

where α_i S_i is the amount of money invested in stock S_i and βB_t the amount invested in the bond B. We also assume the self-financing condition

dI_t = −df + α_i dS_i + β dB_t.

Requiring the hedged portfolio to be risk free, dI_t = rI_t dt, and choosing α_i = ∂f/∂s_i to cancel the dW terms, one arrives at the Black–Scholes PDE for the option price f(t, s), with terminal condition

f(T, s) = g(s).
8. State and derive Itô’s formula.
Solution.
Let X solve dX(t) = a(t, X(t)) dt + b(t, X(t)) dW_t, let g be a smooth function and set y(t) := g(t, X(t)). Then

dy(t) = ( g_t(t, X(t)) + a(t, X(t)) g_x(t, X(t)) + (b²(t, X(t))/2) g_xx(t, X(t)) ) dt + b(t, X(t)) g_x(t, X(t)) dW_t.
Proof. In integral form, the claim reads

g(τ, X(τ)) − g(0, X(0)) = ∫₀^τ ( g_t + a g_x + (b²/2) g_xx )(t, X(t)) dt + ∫₀^τ b g_x(t, X(t)) dW_t.
A key step in the proof is to show that the second-order Taylor terms with (∆W_n)² may be replaced by ∆t_n. Let α_i = ½(b̄²g_xx)(t_i, X̄(t_i)) and Y = Σ_{n=0}^{m−1} α_n ((∆W_n)² − ∆t_n). Then

E[Y²] = Σ_{i≠j} E[α_i α_j ((∆W_i)² − ∆t_i)((∆W_j)² − ∆t_j)] + Σ_i E[α_i² ((∆W_i)² − ∆t_i)²]
= 2 Σ_{i>j} E[α_i α_j ((∆W_j)² − ∆t_j)] E[(∆W_i)² − ∆t_i] + Σ_i E[α_i²] E[((∆W_i)² − ∆t_i)²]
= 2 Σ_i E[α_i²] ∆t_i² → 0 as ∆t_max → 0,

using E[(∆W_i)² − ∆t_i] = 0 and E[((∆W_i)² − ∆t_i)²] = 2∆t_i². Using the same analysis for the terms in Σ_{n=0}^{m−1} O(∆t_n² + |∆X̄(t_n)|²), we get the result.
Solution.
Suppose that a, b, g, h and V are bounded and smooth functions. Let X be the solution of the SDE

dX(t) = a(t, X(t)) dt + b(t, X(t)) dW_t

and let u solve u_t + au_x + ½b²u_xx + Vu = h with u(x, T) = g(x). Then u admits the Feynman–Kac representation

u(x, t) = E[ g(X(T)) e^{∫ₜᵀ V(s,X(s)) ds} | X(t) = x ] − E[ ∫ₜᵀ h(s, X(s)) e^{∫ₜˢ V(τ,X(τ)) dτ} ds | X(t) = x ].
Let G(s) := e^{∫ₜˢ V(τ,X(τ)) dτ}, so that dG(s) = V(s, X(s))G(s) ds and, by Itô's formula,

d( u(s, X(s)) G(s) ) = G(s)(u_s + au_x + ½b²u_xx + Vu)(s, X(s)) ds + G(s) bu_x(s, X(s)) dW_s
= G(s) h(s, X(s)) ds + G(s) bu_x(s, X(s)) dW_s.

Integrating from t to T on both sides of the last equation and taking conditional expectation (the Itô integral has zero mean, and G(t) = 1), we get

E[g(X(T))G(T) | X(t) = x] − u(x, t) = E[ ∫ₜᵀ Gh ds | X(t) = x ].

Therefore

u(x, t) = E[g(X(T))G(T) | X(t) = x] − E[ ∫ₜᵀ Gh ds | X(t) = x ].
10. State and derive a PDE that describes the evolution of the pdf for a process
solving

dX(t) = a(X(t)) dt + b(X(t)) dW_t,  X(0) = x_0.
Solution.
Let P(y, s; x, t), s > t, denote the transition density of X, i.e. the conditional density of X(s) at y given X(t) = x. Then the density P, as a function of the first two variables, solves the Kolmogorov forward equation, also called the Fokker–Planck equation,

−∂_s P(y, s; x, t) − ∂_y ( a(y)P(y, s; x, t) ) + ½∂_y²( b²(y)P(y, s; x, t) ) = 0,   (7)
P(y, t; x, t) = δ(x − y).
Proof. Let P̂ be the solution of the Fokker–Planck equation and u(y, s) the solution of the Kolmogorov backward equation u_s + au_y + ½b²u_yy = 0 with u(y, T) = g(y). Then, integrating by parts,

0 = ∫ₜᵀ ∫_ℝ (u_s + au_y + ½b²u_yy)(s, y) P̂(y, s; x, t) dy ds
= ∫ₜᵀ ∫_ℝ ( −∂_s P̂(y, s; x, t) − ∂_y (a(y)P̂(y, s; x, t)) + ½∂_y²(b²(y)P̂(y, s; x, t)) ) u(s, y) dy ds
+ [ ∫_ℝ u(y, s) P̂(y, s; x, t) dy ]_{s=t}^{s=T}.

The double integral vanishes since P̂ solves (7), so

∫_ℝ u(y, T) P̂(y, T; x, t) dy = ∫_ℝ u(y, t) P̂(y, t; x, t) dy.

Since P̂(y, t; x, t) = δ(x − y),

∫_ℝ u(y, t) P̂(y, t; x, t) dy = u(x, t),

and we conclude that

∫_ℝ g(y) P̂(y, T; x, t) dy = u(x, t) = E[g(X(T)) | X(t) = x]

for all smooth g, i.e. P̂ is indeed the density of X(T) given X(t) = x.
11. State and derive the central limit theorem. State the Berry-Esseen Theo-
rem.
Solution.
Central limit theorem: let S_j, j = 1, 2, …, be i.i.d. random variables with E[S_j] = 0 and Var[S_j] = 1. Then

Σ_{j=1}^M S_j / √M ⇀ ν ∼ N(0, 1),

i.e. E[g(Σ_{j=1}^M S_j/√M)] → E[g(ν)] for bounded continuous g. To derive it, let f(θ) := E[e^{iθS_j}] be the characteristic function of S_j. By independence,

E[ e^{it Σ_{j=1}^M S_j/√M} ] = ( f(t/√M) )^M = ( f(0) + (t/√M) f′(0) + (t²/2M) f″(0) + o(t²/M) )^M.

We have

f(0) = E[1] = 1,
f′(0) = iE[S_j] = 0,
f″(0) = i²E[S_j²] = −1.

Therefore,

E[ e^{it Σ_{j=1}^M S_j/√M} ] = ( 1 − t²/2M + o(t²/M) )^M → e^{−t²/2} = E[e^{itν}]

as M → ∞.
Let us now consider a bounded smooth function g with Fourier transform ĝ. If ρ denotes the density of Σ_{j=1}^M S_j/√M, Fourier inversion gives

E[ g( Σ_{j=1}^M S_j/√M ) ] = ∫_ℝ g(x)ρ(x) dx = (1/2π) ∫_ℝ E[ e^{it Σ_{j=1}^M S_j/√M} ] ĝ(t) dt → (1/2π) ∫_ℝ e^{−t²/2} ĝ(t) dt = E[g(ν)].
Berry–Esseen theorem: with λ₃ := E[|Y − E[Y]|³]/σ_Y³ the normalized third absolute moment,

|F_{Z_M}(x) − Φ(x)| ≤ C_BE λ₃ / ( (1 + |x|)³ √M ),

where Φ is the cdf of a standard normal, C_BE ≈ 30.51175, F_{Z_M}(x) = P(Z_M ≤ x) for all x ∈ ℝ, and

Z_M = (√M/σ_Y) ( (1/M) Σ_{i=1}^M Y(ω_i) − E[Y] ).
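A quick numerical illustration of the CLT (a sketch under illustrative choices: uniform summands scaled to unit variance). The normalized sums should have mean ≈ 0, variance ≈ 1, and cdf values close to Φ.

```python
import numpy as np

rng = np.random.default_rng(4)
M, R = 10_000, 2_000                     # M summands per sum, R replications

# S_j uniform on [-sqrt(3), sqrt(3)]: E[S_j] = 0, Var[S_j] = 1
S = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(R, M))
Z = S.sum(axis=1) / np.sqrt(M)           # normalized sums

print(Z.mean(), Z.var())                 # approx 0 and 1, as the CLT predicts
print(np.mean(Z <= 1.0))                 # approx Phi(1) ≈ 0.8413
```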
12. Show how to apply the Monte Carlo method to compute an integral and
discuss the corresponding error.
Solution.
Let I = ∫_{[0,1]^d} f(x) dx be the integral we want to compute, and let p be the pdf of the uniform distribution on [0, 1]^d. Then we can write

I = ∫_{ℝ^d} f(x) p(x) dx = E[f(X)],

where X is uniformly distributed on [0, 1]^d. The MC estimate in this case is given by

I_N = (1/N) Σ_{n=1}^N f(X(ω_n)).
By the central limit theorem, the statistical error E_N = I − I_N satisfies √N E_N ⇀ N(0, σ²) with σ² = Var[f(X)]. In practice, σ² is approximated by the sample variance

σ̂² := (1/(N − 1)) Σ_{j=1}^N ( f(x_j) − (1/N) Σ_{n=1}^N f(x_n) )².
We may select a confidence level 1 − α and an appropriate C_α in the sense that Φ(C_α) = 1 − α/2, where Φ is the cdf of a standard Gaussian. For large N, we have

P( |E_N| ≤ C_α σ̂/√N ) ≈ 1 − α.
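A minimal Python sketch of the estimator and its confidence interval, for the illustrative integrand f(x) = exp(x₁ + … + x_d) on [0, 1]³, whose exact integral is (e − 1)³ ≈ 5.07 (the integrand and parameters are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 3, 100_000
C_alpha = 1.96                         # Phi(1.96) ≈ 0.975, i.e. alpha = 0.05

x = rng.uniform(size=(N, d))           # X uniform on [0, 1]^d
y = np.exp(x.sum(axis=1))              # f(X)
I_N = y.mean()                         # MC estimate
sigma_hat = y.std(ddof=1)              # sample estimate of sigma
print(I_N, "+/-", C_alpha * sigma_hat / np.sqrt(N))   # 95% confidence interval
```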
13. Describe the sampling method of acceptance-rejection.
Solution.
(1) Set k = 1.
(2) Sample independent random variables X_k from ρ_X and U_k ∼ Uniform(0, 1).
(3) If U_k ≤ ε ρ_Y(X_k)/ρ_X(X_k), then accept and set Y = X_k; otherwise reject X_k, set k = k + 1 and go to step (2).

Here ρ_X is the proposal density, ρ_Y the target density, and ε ∈ (0, 1] is chosen such that ε ρ_Y(x) ≤ ρ_X(x) for all x.
Let us see that Y sampled by the acceptance–rejection technique has indeed density ρ_Y. The acceptance probability is given by

P( U_k ≤ ε ρ_Y(X_k)/ρ_X(X_k) ) = ∫ ∫₀^{ερ_Y(x)/ρ_X(x)} du ρ_X(x) dx = ε ∫ ρ_Y(x) dx = ε.
Since

P( X_k ∈ B, U_k ≤ ε ρ_Y(X_k)/ρ_X(X_k) ) = ∫_B ∫₀^{ερ_Y(x)/ρ_X(x)} du ρ_X(x) dx = ε ∫_B (ρ_Y(x)/ρ_X(x)) ρ_X(x) dx = ε ∫_B ρ_Y(x) dx,

we get P( X_k ∈ B | accepted ) = ∫_B ρ_Y(x) dx, i.e. the accepted sample has density ρ_Y. The number of trials K until acceptance is geometric with success probability ε, so E[K] = 1/ε. Therefore, the smaller the value of ε, the bigger E[K] is. For ε close to 1, the algorithm is efficient in terms of cost since E[K] ≈ 1.
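A small Python sketch of the algorithm for an illustrative target ρ_Y(x) = 2x on [0, 1] with uniform proposal ρ_X = 1; since ρ_Y/ρ_X ≤ 2, the choice ε = 1/2 is admissible and E[K] = 1/ε = 2 (all concrete choices here are assumptions for the example).

```python
import numpy as np

def accept_reject(rho_Y, rho_X, sample_X, eps, rng):
    """Sample from rho_Y given proposals from rho_X; needs eps*rho_Y <= rho_X."""
    while True:
        x = sample_X(rng)
        if rng.uniform() <= eps * rho_Y(x) / rho_X(x):
            return x

rng = np.random.default_rng(6)
samples = [accept_reject(lambda x: 2.0 * x,      # target density on [0, 1]
                         lambda x: 1.0,          # uniform proposal density
                         lambda r: r.uniform(),  # proposal sampler
                         0.5, rng)
           for _ in range(10_000)]
print(np.mean(samples))   # approx 2/3, the mean of the density 2x on [0, 1]
```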
14. Motivate the use of variance reduction techniques. Describe two techniques
and discuss their computational efficiency.
Solution.
The idea behind variance reduction techniques comes from the MC error expression

E_M = E[Y] − (1/M) Σ_{j=1}^M Y(ω_j) ≈ C_α √(Var[Y]/M).
We introduce this kind of technique to reduce Var[Y] while keeping the expectation unchanged: we look for a random variable V such that E[V] = E[Y] and Var[V] < Var[Y].
First technique: Control variates.
Suppose E[X] is known (or cheap to compute). The new variable is defined as

V := Y − β(X − E[X]),

and its sample average estimator is

V_M = (1/M) Σ_{j=1}^M ( Y_j − β(X_j − E[X]) ),

where Y_j = Y(ω_j) and X_j = X(ω_j).
Note that E[VM ] = E[Y ].
For the variance,

Var[V_M](β) = (1/M)( Var[Y] + β²Var[X] − 2βCov(X, Y) ).

Minimizing the last expression over β, we get

β* = Cov(X, Y)/Var(X),

and thus

Var[V_M](β*) = (Var[Y]/M)(1 − ρ_XY²) < Var[Y]/M = Var[Y_MC],

where Y_MC is the crude MC estimator and ρ_XY = Cov(X, Y)/(σ_X σ_Y).
Therefore, the higher the correlation between X and Y is, the smaller the variance of
VM and the more efficient the method is.
Let us assume that the work needed to sample X is δ times the work needed to generate Y (0 < δ ≤ 1). Accounting for this extra cost, the control variate estimator reaches a given accuracy with less total work than crude MC

if and only if

ρ_XY² > δ/(1 + δ).
Second technique: Antithetic variates.
Let Y = g(X) such that X has a symmetric distribution around its mean. Assume E[X] = 0. Then, since X and −X are identically distributed, we have E[g(X)] = E[g(−X)]. Therefore, we can write

E[Y] = E[ (g(X) + g(−X))/2 ].

With V := (g(X) + g(−X))/2,

Var[V] = ½( Var[g(X)] + Cov(g(X), g(−X)) ),

so sampling M/2 antithetic pairs (the same cost as M crude samples) gives a smaller variance than crude MC if and only if

Cov(g(X), g(−X)) < 0.
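A short numerical comparison of the two estimators at equal cost, for the illustrative monotone choice g(x) = eˣ with X ∼ N(0, 1), where Cov(g(X), g(−X)) = 1 − e < 0 (the specific g and sample sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
M = 100_000
g = np.exp                                        # monotone g => negative covariance
X = rng.normal(size=M)

crude = g(X)                                      # M crude MC samples
pairs = 0.5 * (g(X[:M // 2]) + g(-X[:M // 2]))    # M/2 antithetic pairs, same cost

print(crude.var(ddof=1) / M)                      # crude MC estimator variance
print(pairs.var(ddof=1) / (M // 2))               # antithetic variance (smaller)
```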
15. State and prove an error representation for the global error g(X(T)) − g(X̄(T)), where

dX/dt = a(X(t)),  X(0) = X_0,

and X̄ is an approximation of X based on a one step method.
Solution.
The global error has the representation

g(X(T)) − g(X̄(T)) = Σ_{n=1}^N ∫₀¹ ( e(t_n), ψ(t_n, X̄(t_n) + s e(t_n)) ) ds,

where e(t_n) := X̃(t_n) − X̄(t_n) is the local error, (·, ·) is the standard scalar product on ℝ^d and ψ(t, y) = ∂_y u(t, y) ∈ ℝ^d is the first variation of u.
Let us define

u(t, y) = g(X(T; t, y)),  t < T,

where X(s; t, y) denotes the exact solution at time s starting from X(t) = y. Since u is constant along exact trajectories, for all t, τ ∈ [t_{n−1}, t_n] we have u(t, X̃(t)) = u(τ, X̃(τ)), where X̃ is the exact solution with initial condition X̃(t_{n−1}) = X̄(t_{n−1}); in particular u(t_n, X̃(t_n)) = u(t_{n−1}, X̄(t_{n−1})).
Thus, telescoping,

Σ_{n=1}^N ( u(t_n, X̃(t_n)) − u(t_n, X̄(t_n)) ) = u(0, X̄(0)) − u(T, X̄(T)) = g(X(T)) − g(X̄(T)).
Now, we introduce the auxiliary function U : [0, 1] → ℝ defined by

U(s) := u(t_n, X̄(t_n) + s e(t_n)).

Then, we have

U(1) − U(0) = u(t_n, X̃(t_n)) − u(t_n, X̄(t_n)) = ∫₀¹ U′(s) ds,

which implies

u(t_n, X̃(t_n)) − u(t_n, X̄(t_n)) = ∫₀¹ ( e(t_n), ψ(t_n, X̄(t_n) + s e(t_n)) ) ds,

and summing over n gives the representation. The first variation ψ can be computed by solving the dual backward problem

−ψ′(s) = (a′)*(s, X(s)) ψ(s), s < T,  ψ(T) = g′(X(T)),

where (a′)*(s, x) is the transpose of the Jacobian matrix a′(s, x) = {∂a_i/∂x_j (s, x)} ∈ ℝ^{d×d}.
16. State and prove an error representation for the global error
where

dX_t = a(X(t)) dt + b(X(t)) dW_t,  X(0) = X_0,

and X̄ is a Forward Euler approximation.
Solution.
Theorem 0.7. Let a, b, g be smooth functions with appropriate decay as |x| → ∞. Then

E[g(X(T)) − g(X̄(T))] = ∫₀ᵀ E[ (a(t, X̄(t)) − ā(t, X̄)) u_x(t, X̄) ] dt + ½ ∫₀ᵀ E[ (b²(t, X̄(t)) − b̄²(t, X̄)) u_xx(t, X̄) ] dt,

where u is the cost to go function,

u(x, t) = E[g(X(T)) | X(t) = x].
Proof. The cost to go function solves the Kolmogorov backward equation

∂_t u + a∂_x u + ½b²∂_xx u = 0,   (8)
u(T, ·) = g,

which implies, by Itô's formula along X̄,

du(t, X̄(t)) = (u_t + āu_x + ½b̄²u_xx)(t, X̄(t)) dt + b̄u_x(t, X̄(t)) dW_t = ( (ā − a)u_x + ½(b̄² − b²)u_xx )(t, X̄(t)) dt + b̄u_x(t, X̄(t)) dW_t.

Integrating from 0 to T, using equation (8) and taking expected value, we get

E[g(X(T)) − g(X̄(T))] = ∫₀ᵀ E[(a − ā)u_x(t, X̄)] dt + ½ ∫₀ᵀ E[(b² − b̄²)u_xx(t, X̄)] dt.
17. What is a Brownian bridge? Discuss an algorithm to sample the mid point
of the Brownian bridge.
Solution.
Here W°(s), 0 ≤ s ≤ 1, denotes the Wiener process conditioned on its endpoint value W_1 (the Brownian bridge construction used to refine a sampled path). Since (W_s, W_1) is jointly Gaussian, the conditional distribution is Gaussian with

E[W°(s)] = E[W_s] + (Cov(W_s, W_1)/Var(W_1)) (W_1 − E[W_1]) = Cov(W_s, W_1) W_1

and

Var[W°(s)] = Var[W_s] − (Cov(W_s, W_1)/Var(W_1)) Cov(W_1, W_s) = s − (Cov(W_s, W_1))² = s − (s ∧ 1)² = s − s²,
where

Cov(W_s, W_1) = E[W_1 W_s] = E[W_s (W_s + (W_1 − W_s))] = E[W_s²] + E[W_s] E[W_1 − W_s] = s,

using the independence of W_s and W_1 − W_s.
Therefore

E[W°(s)] = sW_1,  Var[W°(s)] = s − s².

For the midpoint (s = 1/2), we have

E[W°(1/2)] = W_1/2,  Var[W°(1/2)] = 1/4.

We conclude that

W°(s) ∼ N(sW_1, s − s²),

and for the midpoint W°(1/2) ∼ N(W_1/2, 1/4).
Given a realization of W_1, we construct b, a realization of W°(1/2), by

b = (1/2) W_1 + (1/2) randn(1, 1),

where randn(1, 1) denotes a standard normal sample and the factor 1/2 is the conditional standard deviation √(1/4).
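In Python the same midpoint construction reads as follows (a sketch; the seed and names are illustrative), and the general midpoint rule between two pinned values extends it recursively to any resolution:

```python
import numpy as np

rng = np.random.default_rng(8)
W1 = rng.normal()                     # a realization of W_1 ~ N(0, 1)
b = 0.5 * W1 + 0.5 * rng.normal()     # W(1/2) | W_1  ~  N(W1/2, 1/4)
print(W1, b)

def bridge_midpoint(s, Ws, t, Wt, rng):
    """Sample W((s+t)/2) given W(s) = Ws and W(t) = Wt: the conditional
    distribution is N((Ws + Wt)/2, (t - s)/4)."""
    return 0.5 * (Ws + Wt) + np.sqrt((t - s) / 4.0) * rng.normal()
```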
18. Define the notions of weak and strong convergence. What is the weak order
of convergence for the Forward Euler method for the approximation of Itô
SDEs? And the strong order?
Solution.
Let X̄ be an approximation of X with time step ∆t. The method converges weakly (in distribution) with order p if, for smooth functions g,

|E[g(X(T))] − E[g(X̄(T))]| = O(∆t^p),

and strongly (pathwise, in mean) with order q if

E[|X(T) − X̄(T)|] = O(∆t^q).

For Itô SDEs, the forward Euler (Euler–Maruyama) method has weak order of convergence p = 1 (as in Theorem 0.7) and strong order q = 1/2.
19. Describe the multilevel Forward Euler Monte Carlo method. State and
prove its computational complexity when applied to Itô SDEs using an Eu-
ler Maruyama discretization.
Solution.
Let X̄_ℓ denote the Euler–Maruyama approximation with step ∆t_ℓ = T·2^{−ℓ}, ℓ = 0, 1, …, L. Writing

E[g(X̄_L(T))] = E[g(X̄_0(T))] + Σ_{ℓ=1}^L E[g(X̄_ℓ(T)) − g(X̄_{ℓ−1}(T))],

each term is estimated by independent Monte Carlo with M_ℓ samples, where the two approximations inside a correction term are driven by the same Wiener increments. By the strong order 1/2 of Euler–Maruyama, Var[g(X̄_ℓ) − g(X̄_{ℓ−1})] = O(∆t_ℓ), so few samples are needed on the expensive fine levels. Choosing L ∝ log(TOL⁻¹) to make the (weak, order one) bias O(TOL), and optimizing the M_ℓ over the levels, the total work to reach accuracy TOL is O(TOL⁻² (log TOL⁻¹)²), compared with O(TOL⁻³) for the standard forward Euler Monte Carlo method.
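A compact Python sketch of the multilevel estimator for the geometric Brownian motion example above, coupling fine and coarse Euler paths through the same Wiener increments (the model, payoff and sample sizes are illustrative assumptions, not prescriptions from the notes):

```python
import numpy as np

def euler_gbm(S0, r, sig, T, dW):
    """Euler path of dS = r S dt + sig S dW driven by the given increments."""
    dt = T / len(dW)
    S = S0
    for w in dW:
        S += r * S * dt + sig * S * w
    return S

def mlmc(g, L, M, S0=1.0, r=0.05, sig=0.2, T=1.0, seed=0):
    """Multilevel MC estimate of E[g(S_T)]; level l uses N_l = 2**l Euler steps."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(M[l]):
            dW = rng.normal(0.0, np.sqrt(T / 2**l), size=2**l)
            fine = g(euler_gbm(S0, r, sig, T, dW))
            if l == 0:
                acc += fine                        # plain estimate on level 0
            else:
                # coarse path reuses the same increments, summed pairwise
                dWc = dW.reshape(-1, 2).sum(axis=1)
                acc += fine - g(euler_gbm(S0, r, sig, T, dWc))
        est += acc / M[l]
    return est

M = [40_000 // 2**l + 1 for l in range(7)]          # fewer samples on fine levels
print(mlmc(lambda s: max(s - 1.0, 0.0), L=6, M=M))  # European call, strike 1
```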
21. Discuss the alternatives to determine the option price on the previous prob-
lem numerically in case of contracts based on one and many stocks.
Solution.
For a contract on a single stock, the price can be computed either by solving the Black–Scholes PDE numerically, e.g. with finite differences or finite elements in (t, s), or by Monte Carlo simulation of the underlying SDE. For contracts based on many stocks, the pricing PDE is posed in as many space dimensions as there are stocks, and deterministic discretizations suffer from the curse of dimensionality; Monte Carlo, possibly combined with variance reduction or multilevel techniques, then becomes the natural choice, since its O(M^{−1/2}) statistical error is independent of the dimension.