
Lectures on stochastic control and applications in finance

Huyên PHAM
University Paris Diderot, LPMA
and CREST-ENSAE
[email protected]
https://sites.google.com/site/phamxuanhuyen/home

French-Vietnamese Summer school on "Mathematical methods in finance and economics"
and Master course QCF, JVN
Ho-Chi-Minh City, August 13-17, 2012
Stochastic Control and applications in finance

Abstract. The aim of these lectures is to present an introduction to stochastic control, a classical topic in applied mathematics which has seen important developments in recent years, inspired especially by problems in mathematical finance. We give an overview of the main methods and results in this area.
We first present the standard approach by dynamic programming equation and
verification, and point out the limits of this method. We then move on to the viscosity
solutions approach: it requires more theory and technique, but provides the general
mathematical tool for dealing with stochastic control in a Markovian context. The
third lecture gives an introduction to the theory of Backward stochastic differential
equations (BSDEs), which has emerged as a major research topic with significant
contributions in relation with stochastic control beyond the Markovian framework.
The last lecture is devoted to the martingale approach for portfolio optimization. The
various methods presented in these lectures will be illustrated by several applications
arising in economics and finance.

Chapter 1 : Classical approach to stochastic control problem


Chapter 2 : Viscosity solutions and stochastic control
Chapter 3 : BSDEs and stochastic control
Chapter 4: Martingale approach to portfolio optimization
Chapter 1 : Classical approach to stochastic control problem

I. Introduction

II. Controlled diffusion processes

III. Dynamic Programming Principle (DPP)

IV. Hamilton-Jacobi-Bellman (HJB) equation

V. Verification theorem

VI. Applications: Merton portfolio selection (CRRA utility functions and general utility functions by duality approach), Merton portfolio/consumption choice

VII. Some other classes of stochastic control


I. Introduction

• Basic structure of stochastic control problem

• Dynamic system in an uncertain environment:


- filtered probability space (Ω, F, F = (Ft ), P): uncertainty and information
- state variables X = (Xt ): F-adapted stochastic process representing the evolu-
tion of the quantitative variables describing the system

• Control: a process α = (α_t) whose value is decided at time t as a function of the available information F_t, and which can influence the dynamics of the state process X.

• Performance/criterion: optimize over controls a functional J(X, α), e.g.

    J(X, α) = E[ ∫_0^T f(X_t, α_t) dt + g(X_T) ]   on a finite horizon

or

    J(X, α) = E[ ∫_0^∞ e^{−βt} f(X_t, α_t) dt ]   on an infinite horizon

▸ Various and numerous applications in economics and finance

▸ In parallel, problems in mathematical finance → new developments in the theory of stochastic control
• Solving a stochastic control problem

• Basic goal: find the optimal control (which achieves the optimum of the objective
functional) if it exists and the value function (the optimal objective functional)

• Tractable characterization of the value function and optimal control →


- if possible, explicit solutions
- otherwise: qualitative description and quantitative results via numerical solu-
tions

• Mathematical tools

• Dynamic programming principle and stochastic calculus →


- PDE characterization in a Markovian context
- BSDE in general

▸ Stochastic control is a topic at the interface between probability, stochastic analysis and PDE.
II. Controlled diffusion processes

• Dynamics of the state variables in Rn :

dXs = b(Xs , αs )ds + σ(Xs , αs )dWs , (1)

with W a d-dimensional Brownian motion on (Ω, F, F = (F_t), P).

- The control α = (α_t) is an F-adapted process, valued in A, a subset of R^m, and satisfying some integrability conditions and/or state constraints → A: set of admissible controls.

- Given α ∈ A, (t, x) ∈ [0, T] × R^n, we denote by X^{t,x} = X^{t,x,α} the solution to (1) starting from x at t.

• Performance criterion (on finite horizon)

Given a function f from Rn × A into R, and a function g from Rn into R, we define


the objective functional:
    J(t, x, α) = E[ ∫_t^T f(X_s^{t,x}, α_s) ds + g(X_T^{t,x}) ],   (t, x) ∈ [0, T] × R^n, α ∈ A,

and the value function:

    v(t, x) = sup_{α ∈ A} J(t, x, α).

• α̂ ∈ A is an optimal control if: v(t, x) = J(t, x, α̂).

• A process α of the form α_s = a(s, X_s^{t,x}), for some measurable function a from [0, T] × R^n into A, is called a Markovian or feedback control.
III. Dynamic programming principle

Bellman’s principle of optimality

“An optimal policy has the property that whatever the initial state and initial decision
are, the remaining decisions must constitute an optimal policy with regard to the
state resulting from the first decision”

(See Bellman, 1957, Ch. III.3)

Mathematical formulation of Bellman's principle, or Dynamic Programming Principle (DPP)

The usual version of the DPP is written as

    v(t, x) = sup_{α ∈ A} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ],   (2)

for any stopping time θ ∈ T_{t,T} (set of stopping times valued in [t, T]).
• Stronger version of the DPP

In a stronger and useful version of the DPP, θ may actually depend on α in (2). This means:

    v(t, x) = sup_{α ∈ A} sup_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]
            = sup_{α ∈ A} inf_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].

⇐⇒

(i) For all α ∈ A and θ ∈ T_{t,T}:

    v(t, x) ≥ E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].

(ii) For all ε > 0, there exists α ∈ A such that for all θ ∈ T_{t,T}:

    v(t, x) − ε ≤ E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].
Proof of the DPP. (First part)

1. Given α ∈ A, we have by pathwise uniqueness of the flow of the SDE for X the Markovian structure

    X_s^{t,x} = X_s^{θ, X_θ^{t,x}},   s ≥ θ,

for any θ ∈ T_{t,T}. By the law of iterated conditional expectations, we then get

    J(t, x, α) = E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + J(θ, X_θ^{t,x}, α) ].

Since J(·, ·, α) ≤ v, and θ is arbitrary in T_{t,T}, this implies

    J(t, x, α) ≤ inf_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]
              ≤ sup_{α ∈ A} inf_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].

=⇒

    v(t, x) ≤ sup_{α ∈ A} inf_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].   (3)
Proof of the DPP. (Second part)

2. Fix some arbitrary control α ∈ A and θ ∈ T_{t,T}. By definition of the value function, for any ε > 0 and ω ∈ Ω, there exists α^{ε,ω} ∈ A, an ε-optimal control for v(θ(ω), X_{θ(ω)}^{t,x}(ω)), i.e.

    v(θ(ω), X_{θ(ω)}^{t,x}(ω)) − ε ≤ J(θ(ω), X_{θ(ω)}^{t,x}(ω), α^{ε,ω}).   (4)

Let us now define the process

    α̂_s(ω) = α_s(ω) for s ∈ [0, θ(ω)],   α̂_s(ω) = α_s^{ε,ω}(ω) for s ∈ (θ(ω), T].

→ Delicate measurability questions! By measurable selection results, one can show that α̂ is F-adapted, and so α̂ ∈ A.

By using again the law of iterated conditional expectations, and from (4):

    v(t, x) ≥ J(t, x, α̂) = E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + J(θ, X_θ^{t,x}, α^ε) ]
            ≥ E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ] − ε.

Since α, θ and ε are arbitrary, this implies

    v(t, x) ≥ sup_{α ∈ A} sup_{θ ∈ T_{t,T}} E[ ∫_t^θ f(X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ].
IV. Hamilton-Jacobi-Bellman (HJB) equation

The HJB equation is the infinitesimal version of the dynamic programming principle:
it describes the local behavior of the value function when we send the stopping time θ
in the DPP (2) to t. The HJB equation is also called dynamic programming equation.

Formal derivation of HJB

We assume that the value function is smooth enough to apply Itô’s formula, and we
postpone integrability questions.

▸ For any α ∈ A, and a controlled process X^{t,x}, apply Itô's formula to v(s, X_s^{t,x}) between s = t and s = t + h:

    v(t+h, X_{t+h}^{t,x}) = v(t, x) + ∫_t^{t+h} ( ∂v/∂t + L^{α_s} v )(s, X_s^{t,x}) ds + (local) martingale,

where for a ∈ A, L^a is the second-order operator associated to the diffusion X with constant control a:

    L^a v = b(x, a)·D_x v + (1/2) tr( σ(x, a)σ'(x, a) D_x² v ).

▸ Plug into the DPP:

    sup_{α ∈ A} E[ ∫_t^{t+h} { ( ∂v/∂t + L^{α_s} v )(s, X_s^{t,x}) + f(X_s^{t,x}, α_s) } ds ] = 0.

▸ Divide by h, send h to zero, and "obtain" by the mean-value theorem the so-called HJB equation:

    ∂v/∂t + sup_{a ∈ A} [ L^a v + f(x, a) ] = 0,   (t, x) ∈ [0, T) × R^n.   (5)

Moreover, if v is continuous at T, we have the terminal condition

    v(T^−, x) = v(T, x) = g(x),   x ∈ R^n.

Classical approach to stochastic control:

• Show if possible the existence of a smooth solution to HJB, or even better obtain
an explicit solution

• Verification step: prove that this smooth solution to HJB is the value function of
the stochastic control problem, and obtain as a byproduct the optimal control.

Remark.

In the classical verification approach, we don’t need to prove the DPP, but “only” to
get the existence of a smooth solution to the HJB equation.
V. Verification approach

Theorem

Let w be a function in C^{1,2}([0, T] × R^n), solution to the HJB equation:

    ∂w/∂t (t, x) + sup_{a ∈ A} [ L^a w(t, x) + f(x, a) ] = 0,   (t, x) ∈ [0, T) × R^n,
    w(T, x) = g(x),   x ∈ R^n,

(possibly satisfying additional growth conditions related to f and g). Suppose there exists a measurable function â(t, x), (t, x) ∈ [0, T) × R^n, valued in A, attaining the supremum in HJB, i.e.

    sup_{a ∈ A} [ L^a w(t, x) + f(x, a) ] = L^{â(t,x)} w(t, x) + f(x, â(t, x)),

such that the SDE

    dX_s = b(X_s, â(s, X_s)) ds + σ(X_s, â(s, X_s)) dW_s

admits a unique solution, denoted by X̂_s^{t,x}, given an initial condition X_t = x, and the process α̂ = {â(s, X̂_s^{t,x}), t ≤ s ≤ T} lies in A. Then,

    w = v,

and α̂ is an optimal feedback control.


Proof of the verification theorem. (First part)

1. Suppose that w is a smooth supersolution to the HJB equation:

    −∂w/∂t (t, x) − sup_{a ∈ A} [ L^a w(t, x) + f(x, a) ] ≥ 0,   (t, x) ∈ [0, T) × R^n,   (6)
    w(T, x) ≥ g(x),   x ∈ R^n.   (7)

▸ For any α ∈ A, and a controlled process X^{t,x}, apply Itô's formula to w(s, X_s^{t,x}) between s = t and s = T ∧ τ_n, and take expectations:

    E[ w(T ∧ τ_n, X_{T∧τ_n}^{t,x}) ] = w(t, x) + E[ ∫_t^{T∧τ_n} ( ∂w/∂t + L^{α_s} w )(s, X_s^{t,x}) ds ],

where (τ_n) is a localizing sequence of stopping times for the local martingale appearing in Itô's formula.

▸ Since w is a supersolution to HJB (6), this implies:

    E[ w(T ∧ τ_n, X_{T∧τ_n}^{t,x}) ] + E[ ∫_t^{T∧τ_n} f(X_s^{t,x}, α_s) ds ] ≤ w(t, x).

By sending n to infinity, and under suitable integrability conditions, we get:

    E[ w(T, X_T^{t,x}) ] + E[ ∫_t^T f(X_s^{t,x}, α_s) ds ] ≤ w(t, x).

▸ Since w(T, ·) ≥ g, and α is arbitrary, we obtain

    v(t, x) = sup_{α ∈ A} E[ ∫_t^T f(X_s^{t,x}, α_s) ds + g(X_T^{t,x}) ] ≤ w(t, x).
Proof of the verification theorem. (Second part)

2. Suppose that the supremum in the HJB equation is attained:

    −∂w/∂t (t, x) − [ L^{â(t,x)} w(t, x) + f(x, â(t, x)) ] = 0,   (t, x) ∈ [0, T) × R^n,   (8)
    w(T, x) = g(x),   x ∈ R^n.   (9)

▸ Apply Itô's formula to w(s, X̂_s^{t,x}) for the feedback control α̂. By the same arguments as in the first part, we now have equality (after a possible localization):

    w(t, x) = E[ −∫_t^T ( ∂w/∂t + L^{α̂_s} w )(s, X̂_s^{t,x}) ds + w(T, X̂_T^{t,x}) ]
            = E[ ∫_t^T f(X̂_s^{t,x}, α̂_s) ds + g(X̂_T^{t,x}) ]   (≤ v(t, x)).

▸ Together with the first part, this proves that w = v and that α̂ is an optimal feedback control.
Probabilistic formulation of the verification approach

The analytic statement of the verification theorem has a probabilistic formulation:

Suppose that the measurable function w on [0, T ] × Rn satisfies the two properties:

• for any control α ∈ A with associated controlled process X, the process

    w(t, X_t) + ∫_0^t f(X_s, α_s) ds   is a supermartingale;   (10)

• there exists a control α̂ ∈ A with associated controlled process X̂, such that the process

    w(t, X̂_t) + ∫_0^t f(X̂_s, α̂_s) ds   is a martingale.   (11)

Then w = v, and α̂ is an optimal control.

Remark. Notice that in the probabilistic verification approach, we do not need


smoothness of w, but we require a supermartingale property. In the analytic verifi-
cation approach, the smoothness of w is used for applying Itô’s formula to w(t, Xt ).
This allows us to derive the supermartingale property as in (10), which is in fact the
key feature for proving that w ≥ v, and then w = v with the martingale property
(11).
Extensions-variations

• Discounting:

    v(t, x) = sup_{α ∈ A} E[ ∫_t^T e^{−∫_t^s r(X_u^{t,x}, α_u) du} f(X_s^{t,x}, α_s) ds + e^{−∫_t^T r(X_u^{t,x}, α_u) du} g(X_T^{t,x}) ].

The associated HJB equation is:

    −∂v/∂t − sup_{a ∈ A} [ −r(x, a)v + L^a v(t, x) + f(x, a) ] = 0.

• Infinite horizon:

    v(x) = sup_{α ∈ A} E[ ∫_0^∞ e^{−rs} f(X_s^x, α_s) ds ].

The associated HJB equation is:

    r v − sup_{a ∈ A} [ L^a v(x) + f(x, a) ] = 0.
VI. Applications

1. Merton portfolio selection in finite horizon

An agent invests at any time t a proportion α_t of his wealth X in a stock of price S, and 1 − α_t in a bond of price S^0 with interest rate r. The investor faces the portfolio constraint that at any time t, α_t is valued in A, a closed convex subset of R.

▸ Assuming a Black-Scholes model for S (with constant rate of return μ and volatility σ > 0), the dynamics of the controlled wealth process are:

    dX_t = (α_t X_t / S_t) dS_t + ((1 − α_t) X_t / S_t^0) dS_t^0
         = X_t (r + α_t(μ − r)) dt + X_t α_t σ dW_t.

• The preferences of the agent are described by a utility function U: an increasing and concave function. The performance of a portfolio strategy is measured by the expected utility from terminal wealth → utility maximization problem on a finite horizon T:

    v(t, x) = sup_{α ∈ A} E[ U(X_T^{t,x}) ],   (t, x) ∈ [0, T] × (0, ∞).

→ Standard stochastic control problem

HJB equation for Merton's problem

    v_t + r x v_x + sup_{a ∈ A} [ a(μ − r) x v_x + (1/2) x² a² σ² v_{xx} ] = 0,   (t, x) ∈ [0, T) × (0, ∞),
    v(T, x) = U(x),   x > 0.

• The case of CRRA utility functions:

    U(x) = x^p / p,   p < 1, p ≠ 0

→ Relative Risk Aversion: −x U''(x)/U'(x) = 1 − p.

▸ We look for a candidate solution to HJB in the form

    w(t, x) = ϕ(t) U(x).

Plugging into HJB, we see that ϕ should satisfy the ODE:

    ϕ'(t) + ρ ϕ(t) = 0,   ϕ(T) = 1,

where

    ρ = rp + p sup_{a ∈ A} [ a(μ − r) − (1/2) a² (1 − p) σ² ],

so that

    ϕ(t) = e^{ρ(T−t)}.

▸ The value function is equal to

    v(t, x) = e^{ρ(T−t)} U(x),

and the optimal control is constant (in proportion of wealth invested):

    â = arg max_{a ∈ A} [ a(μ − r) − (1/2) a² (1 − p) σ² ].

When A = R (no portfolio constraint), the values of ρ and â are explicitly given by

    ρ = (μ − r)² p / (2σ²(1 − p)) + rp   and   â = (μ − r) / (σ²(1 − p)).
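As a quick numerical check of these formulas, here is a minimal Python sketch (parameter values are illustrative, not from the lectures) computing â and ρ in the unconstrained case and verifying v(0, x) = e^{ρT} U(x) by Monte Carlo:

```python
import numpy as np

# Merton problem, CRRA utility, A = R (unconstrained); illustrative parameters.
mu, r, sigma, p = 0.10, 0.02, 0.25, 0.5
T, x0 = 1.0, 1.0

a_hat = (mu - r) / (sigma**2 * (1 - p))                  # optimal proportion
rho = (mu - r)**2 / (2 * sigma**2) * p / (1 - p) + r * p
v_explicit = np.exp(rho * T) * x0**p / p                 # v(0, x0) = e^{rho T} U(x0)

# Monte Carlo check: with constant proportion a_hat, the wealth is the
# geometric Brownian motion dX = X (r + a_hat(mu-r)) dt + X a_hat sigma dW.
rng = np.random.default_rng(0)
W_T = rng.standard_normal(1_000_000) * np.sqrt(T)
X_T = x0 * np.exp((r + a_hat * (mu - r) - 0.5 * (a_hat * sigma)**2) * T
                  + a_hat * sigma * W_T)
v_mc = np.mean(X_T**p / p)                               # E[U(X_T)] under a_hat

print(f"explicit: {v_explicit:.4f}   Monte Carlo: {v_mc:.4f}")
```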
• General utility functions:

U is C¹, strictly increasing and concave on (0, ∞), and satisfies the Inada conditions:

    U'(0) = ∞,   U'(∞) = 0.

▸ Convex conjugate of U:

    Ũ(y) := sup_{x>0} [U(x) − xy] = U(I(y)) − yI(y),   y > 0,

where I := (U')^{−1} = −Ũ'.

• Assume that A = R (no portfolio constraint and complete market) and, for simplicity, r = 0, so that HJB is also written as

    v_t − (1/2) (μ²/σ²) (v_x² / v_{xx}) = 0,

with a candidate for the optimal feedback control:

    â(t, x) = − (μ/σ²) v_x / (x v_{xx}).
Recall the terminal condition:

v(T, x) = U (x).

→ Fully nonlinear second order PDE

→ But remarkably, it can be solved explicitly by convex duality!


• Introduce the convex conjugate of v, also called the dual value function:

    ṽ(t, y) = sup_{x>0} [v(t, x) − xy],   y > 0,

↔ change of variables: y = v_x and x = −ṽ_y.

▸ ṽ satisfies the linear parabolic Cauchy problem:

    ṽ_t + (1/2) (μ²/σ²) y² ṽ_{yy} = 0,
    ṽ(T, y) = Ũ(y).

From the Feynman-Kac formula, ṽ is represented as

    ṽ(t, y) = E[ Ũ(y Y_T^t) ],

where Y^t is the solution to

    dY_s^t = −Y_s^t (μ/σ) dW_s,   Y_t^t = 1.

Remark. Y_T^0 = dQ/dP is the density of the risk-neutral probability measure Q, under which S is a martingale:

    dS_t = S_t σ dW_t^Q.
• The primal value function is obtained by the duality relation:

    v(t, x) = inf_{y>0} [ ṽ(t, y) + xy ],   x > 0.

▸ From the representation of ṽ, we get:

    v(t, x) = inf_{y>0} { E[ Ũ(y Y_T^t) ] + xy }.   (12)

Recalling that Ũ' = −(U')^{−1} = −I, the infimum in (12) is attained at ŷ = ŷ(t, x) s.t.

    E[ Y_T^t I(ŷ Y_T^t) ] = x   (saturated budget constraint),   (13)

and we have

    v(t, x) = E[ Ũ(ŷ Y_T^t) + ŷ Y_T^t I(ŷ Y_T^t) ].

Recalling that the supremum in Ũ is attained at x = I(y), i.e. Ũ(y) = U(I(y)) − yI(y), we obtain:

    v(t, x) = E[ U(X̂_T^{t,x}) ],   with X̂_T^{t,x} = I(ŷ Y_T^t).   (14)
▸ Consider now the strictly positive Q-martingale process:

    X̂_s^{t,x} := E^Q[ I(ŷ Y_T^t) | F_s ],   t ≤ s ≤ T.

- From the saturated budget constraint (13), we have X̂_t^{t,x} = x.

- From the martingale representation theorem (or since the market is complete), there exists α̂ ∈ A s.t.

    dX̂_s^{t,x} = X̂_s^{t,x} σ α̂_s dW_s^Q = (X̂_s^{t,x} α̂_s / S_s) dS_s,

which means that X̂^{t,x} is a wealth process controlled by the proportion α̂, and starting from initial capital x at time t.

▸ From the representation (14) of the value function, this proves that X̂^{t,x} is the optimal wealth process:

    v(t, x) = E[ U(X̂_T^{t,x}) ].
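The budget constraint (13) is a one-dimensional equation in ŷ, so the dual approach is straightforward to implement numerically. A minimal Monte Carlo sketch (illustrative parameters; CRRA utility, for which the Merton closed form provides a check):

```python
import numpy as np
from scipy.optimize import brentq

# Dual approach (13)-(14): solve E[Y_T I(y Y_T)] = x for y by Monte Carlo and
# bisection. Illustrative choice: U(x) = x^p / p, so I(y) = y^{1/(p-1)}; r = 0.
mu, sigma, p = 0.10, 0.25, 0.5
t, T, x = 0.0, 1.0, 1.0
lam = mu / sigma                                   # risk premium

rng = np.random.default_rng(1)
W = rng.standard_normal(500_000) * np.sqrt(T - t)
Y_T = np.exp(-lam * W - 0.5 * lam**2 * (T - t))    # density process Y_T^t

I = lambda y: y ** (1.0 / (p - 1.0))               # I = (U')^{-1}
budget = lambda y: np.mean(Y_T * I(y * Y_T)) - x   # equation (13)
y_hat = brentq(budget, 1e-6, 1e6)

v = np.mean(I(y_hat * Y_T) ** p / p)               # v(t,x) = E[U(X̂_T)], eq. (14)
rho = 0.5 * lam**2 * p / (1 - p)                   # Merton closed form (r = 0)
print(f"y_hat = {y_hat:.4f}, dual v = {v:.4f}, closed form = "
      f"{np.exp(rho * (T - t)) * x**p / p:.4f}")
```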
2. Merton portfolio/consumption choice on infinite horizon

In addition to the investment α in the stock, the agent can also consume from his
wealth:
→ (ct )t≥0 consumption per unit of wealth

I The wealth process, controlled by (α, c) is governed by:

dXt = Xt (r + αt (µ − r) − ct ) dt + Xt αt σdWt .

• The preferences of the agent are described by a utility function U from consumption, and the goal is to maximize over portfolio/consumption strategies the expected utility from intertemporal consumption up to a random time horizon:

    v(x) = sup_{(α,c)} E[ ∫_0^τ e^{−βt} U(c_t X_t^x) dt ],   x > 0.

We assume that τ is independent of F∞ (market information), and follows E(λ).

▸ Infinite horizon stochastic control problem:

    v(x) = sup_{(α,c)} E[ ∫_0^∞ e^{−(β+λ)t} U(c_t X_t^x) dt ],   x > 0.

HJB equation

    (β + λ)v − r x v' − sup_{a ∈ A} [ a(μ − r) x v' + (1/2) a² x² σ² v'' ] − sup_{c ≥ 0} [ U(cx) − c x v' ] = 0,   x > 0.

• Explicit solution for CRRA utility function: U(x) = x^p / p.

Under the condition that β + λ > ρ, we have

    v(x) = K U(x),   with K = ( (1 − p)/(β + λ − ρ) )^{1−p}.

The optimal portfolio/consumption strategies are:

    â = arg max_{a ∈ A} [ a(μ − r) − (1/2) a² (1 − p) σ² ],
    ĉ = (1/x) (v'(x))^{1/(p−1)} = K^{1/(p−1)}.
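A minimal sketch evaluating these formulas (illustrative parameters, unconstrained case A = R):

```python
import numpy as np

# Explicit Merton consumption solution; illustrative parameters, A = R,
# CRRA utility U(x) = x^p / p.
mu, r, sigma, p = 0.10, 0.02, 0.25, 0.5
beta, lam = 0.10, 0.05                       # discount rate, E(lambda) horizon

a_hat = (mu - r) / (sigma**2 * (1 - p))      # optimal proportion in stock
rho = r * p + (mu - r)**2 / (2 * sigma**2) * p / (1 - p)
assert beta + lam > rho                      # standing condition

K = ((1 - p) / (beta + lam - rho)) ** (1 - p)
c_hat = K ** (1.0 / (p - 1.0))               # optimal consumption per unit wealth

print(f"a_hat = {a_hat:.3f}, c_hat = {c_hat:.3f}, v(x) = {K:.3f} * x^p / p")
```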
VII. Some other classes of stochastic control problems

• Ergodic and risk-sensitive control problems

- Risk-sensitive control problem:

    limsup_{T→∞} (1/(θT)) ln E[ exp( θ ∫_0^T f(X_t, α_t) dt ) ]

→ Applications in finance: Bielecki, Pliska, Fleming, Sheu, Nagai, Davis, etc.

- Large deviations control problem:

    limsup_{T→∞} (1/T) ln P[ X_T / T ≥ x ]

←→ Dual of the risk-sensitive control problem

→ Applications in finance: Pham, Sekine, Nagai, Hata, Sheu


• Optimal stopping problems:

The control decision is a stopping time τ at which we decide to stop the process.

→ Value function of the optimal stopping problem (over a finite horizon):

    v(t, x) = sup_{t ≤ τ ≤ T} E[ ∫_t^τ f(X_s^{t,x}) ds + g(X_τ^{t,x}) ].

▸ The HJB equation is a free boundary problem, or variational inequality:

    min[ −∂v/∂t − Lv − f , v − g ] = 0,

where L is the infinitesimal generator of the Markov process X.

→ Typical application in finance: American option pricing

• Impulse and optimal switching problems:

The control is a sequence of increasing stopping times (τ_n)_n associated to a sequence of actions (ζ_n)_n: τ_n represents the time at which we decide to intervene on the state system X by using an F_{τ_n}-measurable action ζ_n: X_{τ_n^−} → Γ(X_{τ_n^−}, ζ_n).

→ Value function:

    v(t, x) = sup_{(τ_n, ζ_n)} E[ ∫_t^T f(X_s^{t,x}) ds + g(X_T^{t,x}) + Σ_n c(X_{τ_n^−}, ζ_n) ].

▸ The HJB equation is a quasi-variational inequality:

    min[ −∂v/∂t − Lv − f , v − Hv ] = 0,

where L is the infinitesimal generator of the Markov process X, and H is a nonlocal operator associated to the jump and cost induced by an action:

    Hv(t, x) = sup_{e ∈ E} [ v(t, Γ(x, e)) + c(x, e) ].

→ Various applications in finance:

• Transaction costs and liquidity risk models, execution in limit order books, where
trading times take place discretely

• Real options and firm investment problems, where decisions represent change of
regimes or production technologies
Chapter 2 : Viscosity solutions and stochastic control

I. Non smoothness of value functions: a motivating financial example

II. Introduction to viscosity solutions

III. Viscosity properties of the dynamic programming equation

IV. Comparison principles

V. Application: Super-replication in uncertain volatility models


I. Non smoothness of value functions: a motivating financial example

• Consider the controlled diffusion process

dXs = αs Xs dWs ,

with an unbounded control α valued in A = R+ : Uncertain volatility model.

Consider the stochastic control problem

    v(t, x) = sup_{α∈A} E[ g(X_T^{t,x}) ],   (t, x) ∈ [0, T] × (0, ∞),

←→ superreplication cost of an option payoff g(X_T).

▸ If v were smooth, it would be a classical solution to the HJB equation:

    v_t + sup_{a ∈ R_+} [ (1/2) a² x² v_{xx} ] = 0,   (t, x) ∈ [0, T) × (0, ∞).   (1)

But, for the supremum over a ∈ R_+ to be finite and the HJB equation (1) to be well-posed, we must have

    v_{xx} ≤ 0, i.e. v(t, ·) is concave in x, for any t ∈ [0, T).


• Now, by taking the zero control in the definition of v, we get

v(t, x) ≥ g(x),

which combined with the concavity of v(t, .), implies:

v(t, x) ≥ ĝ(x), t < T,

where ĝ is the concave envelope of g: the smallest concave function above g.

• Moreover, since g ≤ ĝ, by Jensen's inequality and the martingale property of X, we have

    v(t, x) ≤ sup_{α∈A} E[ ĝ(X_T^{t,x}) ] ≤ sup_{α∈A} ĝ( E[X_T^{t,x}] ) = ĝ(x).

▸ Therefore,

    v(t, x) = ĝ(x),   ∀(t, x) ∈ [0, T) × (0, ∞).

→ There is a contradiction with the smoothness of v whenever ĝ is not smooth, for example when g is concave (hence equal to ĝ) but not smooth.
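To make the object ĝ concrete, here is a minimal Python sketch computing the concave envelope of a payoff on a grid; the upper-hull scan is a standard monotone-chain computation and the call-spread payoff is an illustrative choice, neither taken from the lectures:

```python
import numpy as np

# Concave envelope ĝ of a payoff g: the smallest concave function above g,
# computed on a grid as the upper convex hull of the points (x_i, g_i).
def concave_envelope(x, g):
    """Upper concave envelope of (x_i, g_i), x increasing."""
    hull = []                                   # indices of hull vertices
    for i in range(len(x)):
        # pop the last vertex while it lies on or below the chord, so the
        # retained vertices always form a concave (right-turning) chain
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            cross = (x[i1]-x[i0])*(g[i]-g[i0]) - (g[i1]-g[i0])*(x[i]-x[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], g[hull])       # piecewise-linear envelope

x = np.linspace(0.0, 3.0, 301)
g = np.maximum(x - 1.0, 0.0) - np.maximum(x - 2.0, 0.0)   # call-spread payoff
g_hat = concave_envelope(x, g)                  # here: x/2 on [0,2], then 1
```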

• Need to consider the case where the supremum in HJB can explode (singular case)
and to define weak solutions for HJB equation

→ Notion of viscosity solutions (Crandall, Ishii, P.L. Lions)


II. Introduction to viscosity solutions

Consider nonlinear parabolic second-order partial differential equations:

    F(t, x, w, ∂w/∂t, D_x w, D_x² w) = 0,   (t, x) ∈ [0, T) × O,   (2)

where O is an open subset of R^n and F is a continuous function of its arguments, satisfying the ellipticity condition: for all (t, x) ∈ [0, T) × O, r ∈ R, (q, p) ∈ R × R^n, M, M̂ ∈ S_n,

    M ≤ M̂ =⇒ F(t, x, r, q, p, M) ≥ F(t, x, r, q, p, M̂),   (3)

and the parabolicity condition: for all t ∈ [0, T), x ∈ O, r ∈ R, q, q̂ ∈ R, p ∈ R^n, M ∈ S_n,

    q ≤ q̂ =⇒ F(t, x, r, q, p, M) ≥ F(t, x, r, q̂, p, M).   (4)

• Typical example: HJB equation

    F(t, x, r, q, p, M) = −q − H(x, p, M),

where H is the Hamiltonian function of the stochastic control problem:

    H(x, p, M) = sup_{a ∈ A} [ b(x, a)·p + (1/2) tr(σσ'(x, a) M) + f(x, a) ].
Intuition for the notion of viscosity solutions

• Assume that w is a smooth supersolution to (2). Let ϕ be a smooth test function on [0, T) × O, and (t̄, x̄) ∈ [0, T) × O be a minimum point of w − ϕ:

    0 = (w − ϕ)(t̄, x̄) = min(w − ϕ).

In this case, the first and second-order optimality conditions imply

    ∂(w − ϕ)/∂t (t̄, x̄) ≥ 0   (= 0 if t̄ > 0),
    D_x w(t̄, x̄) = D_x ϕ(t̄, x̄)   and   D_x² w(t̄, x̄) ≥ D_x² ϕ(t̄, x̄).

▸ From the ellipticity and parabolicity conditions (3) and (4), we deduce that

    F(t̄, x̄, ϕ(t̄, x̄), ∂ϕ/∂t(t̄, x̄), D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄))
    ≥ F(t̄, x̄, w(t̄, x̄), ∂w/∂t(t̄, x̄), D_x w(t̄, x̄), D_x² w(t̄, x̄)) ≥ 0.

• Similarly, if w is a classical subsolution to (2), then for all test functions ϕ and (t̄, x̄) ∈ [0, T) × O such that (t̄, x̄) is a maximum point of w − ϕ, we have

    F(t̄, x̄, ϕ(t̄, x̄), ∂ϕ/∂t(t̄, x̄), D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ≤ 0.
General definition of (discontinuous) viscosity solutions

Given a locally bounded function w on [0, T] × O, we define its upper-semicontinuous (usc) envelope w* and lower-semicontinuous (lsc) envelope w_* by

    w*(t, x) = limsup_{t' < T → t, x' → x} w(t', x'),   w_*(t, x) = liminf_{t' < T → t, x' → x} w(t', x').

Remark. w_* ≤ w ≤ w*; w is usc (resp. lsc) on [0, T) × O iff w = w* (resp. w = w_*); and w is continuous on [0, T) × O iff w = w* = w_*.

Definition 1. Let w : [0, T] × O → R be locally bounded.

(i) w is a viscosity supersolution (resp. subsolution) of (2) on [0, T) × O if

    F(t̄, x̄, ϕ(t̄, x̄), ∂ϕ/∂t(t̄, x̄), D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ≥ (resp. ≤) 0,

for all (t̄, x̄) ∈ [0, T) × O and test functions ϕ such that (t̄, x̄) is a minimum (resp. maximum) point of w_* − ϕ (resp. w* − ϕ), with 0 = (w_* − ϕ)(t̄, x̄) (resp. 0 = (w* − ϕ)(t̄, x̄)).

(ii) w is a viscosity solution of (2) on [0, T) × O if it is both a subsolution and a supersolution of (2).
III. Viscosity properties of the DPE

We turn back to the stochastic control problem:

    v(t, x) = sup_{α∈A} E[ ∫_t^T f(X_s^{t,x}, α_s) ds + g(X_T^{t,x}) ],   (t, x) ∈ [0, T] × R^n,

with Hamiltonian function H on R^n × R^n × S_n:

    H(x, p, M) = sup_{a ∈ A} [ b(x, a)·p + (1/2) tr(σσ'(x, a) M) + f(x, a) ].

▸ We introduce the domain of H as

    dom(H) = { (x, p, M) ∈ R^n × R^n × S_n : H(x, p, M) < ∞ },

and make the following hypothesis (DH):

    H is continuous on int(dom(H)), and there exists a continuous function G on R^n × R^n × S_n such that (x, p, M) ∈ dom(H) ⇐⇒ G(x, p, M) ≥ 0.

Example. In the example considered at the beginning of this lecture:

    H(x, p, M) = sup_{a ∈ R_+} [ (1/2) a² x² M ],

and so

    G(x, p, M) = −M.
• Viscosity property inside the domain

Theorem 1. The value function v is a viscosity solution to the HJB variational inequality

    min[ −∂v/∂t − H(x, D_x v, D_x² v) , G(x, D_x v, D_x² v) ] = 0,   on [0, T) × R^n.

Remark. In the regular case when the Hamiltonian H is finite on the whole of R^n × R^n × S_n (this occurs typically when the control space is compact), the condition (DH) is satisfied with any choice of strictly positive continuous function G. In this case, the HJB variational inequality reduces to the regular HJB equation:

    −∂v/∂t (t, x) − H(x, D_x v, D_x² v) = 0,   (t, x) ∈ [0, T) × R^n,

which the value function satisfies in the viscosity sense. Hence, the above theorem states a general viscosity property covering both the regular and the singular case.
Proof of the viscosity supersolution property

• Let (t̄, x̄) ∈ [0, T) × R^n and let ϕ ∈ C²([0, T) × R^n) be a test function such that

    0 = (v_* − ϕ)(t̄, x̄) = min_{[0,T)×R^n} (v_* − ϕ).   (5)

By definition of v_*(t̄, x̄), there exists a sequence (t_m, x_m)_m in [0, T) × R^n such that

    (t_m, x_m) → (t̄, x̄)   and   v(t_m, x_m) → v_*(t̄, x̄)

as m goes to infinity. By the continuity of ϕ and by (5), we also have

    γ_m := v(t_m, x_m) − ϕ(t_m, x_m) → 0.

• Let α ∈ A be the constant process equal to a ∈ A, and X_s^{t_m,x_m} the associated controlled process. Let τ_m = inf{s ≥ t_m : |X_s^{t_m,x_m} − x_m| ≥ η}, with η > 0 a fixed constant. Let (h_m) be a strictly positive sequence such that

    h_m → 0   and   γ_m/h_m → 0.

We apply the first part of the DPP for v(t_m, x_m) to θ_m := τ_m ∧ (t_m + h_m) and get

    v(t_m, x_m) ≥ E[ ∫_{t_m}^{θ_m} f(X_s^{t_m,x_m}, a) ds + v(θ_m, X_{θ_m}^{t_m,x_m}) ].

Equation (5) implies that v ≥ v_* ≥ ϕ, thus

    ϕ(t_m, x_m) + γ_m ≥ E[ ∫_{t_m}^{θ_m} f(X_s^{t_m,x_m}, a) ds + ϕ(θ_m, X_{θ_m}^{t_m,x_m}) ].

Apply Itô's formula to ϕ(s, X_s^{t_m,x_m}) between t_m and θ_m:

    γ_m/h_m + E[ (1/h_m) ∫_{t_m}^{θ_m} ( −∂ϕ/∂t − L^a ϕ − f )(s, X_s^{t_m,x_m}, a) ds ] ≥ 0.   (6)

Now, send m to infinity: by the mean value theorem and the dominated convergence theorem, we get

    −∂ϕ/∂t (t̄, x̄) − L^a ϕ(t̄, x̄) − f(x̄, a) ≥ 0.

Since a is arbitrary in A, and by definition of H, this means:

    −∂ϕ/∂t (t̄, x̄) − H(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ≥ 0.

In particular, (x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ∈ dom(H), and so

    G(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ≥ 0.

Therefore,

    min[ −∂ϕ/∂t (t̄, x̄) − H(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) , G(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) ] ≥ 0,

which is the required supersolution property.
Proof of the viscosity subsolution property

• Let (t̄, x̄) ∈ [0, T) × R^n and let ϕ ∈ C²([0, T) × R^n) be a test function such that

    0 = (v* − ϕ)(t̄, x̄) = max_{[0,T)×R^n} (v* − ϕ).   (7)

As before, there exists a sequence (t_m, x_m)_m in [0, T) × R^n s.t.

    (t_m, x_m) → (t̄, x̄),   v(t_m, x_m) → v*(t̄, x̄),   γ_m := v(t_m, x_m) − ϕ(t_m, x_m) → 0.

• We show the result by contradiction, assuming on the contrary that

    −∂ϕ/∂t (t̄, x̄) − H(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) > 0   and   G(x̄, D_x ϕ(t̄, x̄), D_x² ϕ(t̄, x̄)) > 0.

Under (DH), there exists η > 0 such that

    −∂ϕ/∂t (t, y) − H(y, D_x ϕ(t, y), D_x² ϕ(t, y)) > 0,   for (t, y) ∈ B(t̄, η) × B(x̄, η).   (8)

Observe that we can assume w.l.o.g. in (7) that (t̄, x̄) achieves a strict maximum, so that

    max_{∂_p B((t̄,x̄),η)} (v* − ϕ) =: −δ < 0,   (9)

where ∂_p B((t̄, x̄), η) = ( [t̄, t̄ + η] × ∂B(x̄, η) ) ∪ ( {t̄ + η} × B(x̄, η) ).

▸ We apply the second part of the DPP: there exists α̂^m ∈ A s.t.

    v(t_m, x_m) − δ/2 ≤ E[ ∫_{t_m}^{θ_m} f(X̂_s^{t_m,x_m}, α̂_s^m) ds + v(θ_m, X̂_{θ_m}^{t_m,x_m}) ],   (10)

where θ_m = inf{s ≥ t_m : (s, X̂_s^{t_m,x_m}) ∉ B(t̄, η) × B(x̄, η)}. Observe by continuity of the state process that (θ_m, X̂_{θ_m}^{t_m,x_m}) ∈ ∂_p B((t̄, x̄), η), so that from (9)-(10):

    ϕ(t_m, x_m) + γ_m − δ/2 ≤ E[ ∫_{t_m}^{θ_m} f(X̂_s^{t_m,x_m}, α̂_s^m) ds + ϕ(θ_m, X̂_{θ_m}^{t_m,x_m}) ] − δ.

Applying Itô's formula to ϕ(s, X̂_s^{t_m,x_m}) between t_m and θ_m, and noting that the stochastic integral vanishes in expectation, we then get:

    γ_m − δ/2 + E[ ∫_{t_m}^{θ_m} ( −∂ϕ/∂t − L^{α̂_s^m} ϕ − f )(s, X̂_s^{t_m,x_m}, α̂_s^m) ds ] ≤ −δ.   (11)

Now, from (8) and the definition of H, we have

    −∂ϕ/∂t (s, X̂_s^{t_m,x_m}) − L^{α̂_s^m} ϕ(s, X̂_s^{t_m,x_m}) − f(X̂_s^{t_m,x_m}, α̂_s^m)
    ≥ −∂ϕ/∂t (s, X̂_s^{t_m,x_m}) − H(X̂_s^{t_m,x_m}, D_x ϕ(s, X̂_s^{t_m,x_m}), D_x² ϕ(s, X̂_s^{t_m,x_m}))
    > 0,   for t_m ≤ s ≤ θ_m.

Plugging into (11), this implies

    γ_m − δ/2 ≤ −δ,   (12)

and we get a contradiction by sending m to infinity: −δ/2 ≤ −δ.
• Terminal condition

Due to the singularity of the Hamiltonian H, the value function may be discontinuous at T, i.e. v(T^−, x) may differ from g(x). The right terminal condition is given by the relaxed terminal condition:

Theorem 2. The value function v is a viscosity solution to

    min[ v − g , G(x, D_x v, D_x² v) ] = 0,   on {T} × R^n.   (13)

This means that v_*(T, ·) is a viscosity supersolution to

    min[ v_*(T, x) − g(x) , G(x, D_x v_*(T, x), D_x² v_*(T, x)) ] ≥ 0,   on R^n,   (14)

and v*(T, ·) is a viscosity subsolution to

    min[ v*(T, x) − g(x) , G(x, D_x v*(T, x), D_x² v*(T, x)) ] ≤ 0,   on R^n.   (15)

Remark. Denote by ĝ the upper G-envelope of g, defined as the smallest lsc function above g that is a viscosity supersolution to

    G(x, Dĝ, D²ĝ) ≥ 0.

Then v_*(T, x) ≥ ĝ(x). On the other hand, since ĝ is a viscosity supersolution to (14), and if a comparison principle holds for (13), then v*(T, x) ≤ ĝ(x). This implies

    v(T^−, x) = v_*(T, x) = v*(T, x) = ĝ(x).

In the regular case, we have ĝ = g, and v is continuous at T.

IV. Strong comparison principles and uniqueness

Consider the DPE satisfied by the value function:

    min[ −∂v/∂t − H(x, D_x v, D_x² v) , G(x, D_x v, D_x² v) ] = 0,   on [0, T) × R^n,   (16)
    min[ v(T, x) − g(x) , G(x, D_x v, D_x² v) ] = 0,   on {T} × R^n.   (17)

• We say that a strong comparison principle holds for (16)-(17) when the following statement is true:

If u is an usc viscosity subsolution to (16)-(17) and w is a lsc viscosity supersolution to (16)-(17), satisfying some growth condition, then u ≤ w.

Remark. The arguments for proving comparison principles are:

- the doubling of variables technique
- Ishii's lemma

→ Standard reference: the user's guide of Crandall, Ishii and Lions (92).

Consequence of strong comparison principles

• Uniqueness and continuity

Suppose that v and w are two viscosity solutions to (16)-(17). This means that v* is a viscosity subsolution to (16)-(17) and w_* is a viscosity supersolution to (16)-(17), and vice-versa. By the strong comparison principle, we get:

    v* ≤ w_*   and   w* ≤ v_*.

Since w_* ≤ w* and v_* ≤ v*, this implies:

    v* = v_* = w* = w_*.

Therefore,

    v = w, i.e. uniqueness;
    v_* = v*, i.e. continuity of v on [0, T) × R^n.

• Conclusion

The value function of the stochastic control problem is the unique continuous viscosity solution to (16)-(17) (satisfying some growth condition).
V. Application: superreplication in uncertain volatility model

Consider the controlled diffusion

    dX_s = α_s X_s dW_s,   t ≤ s ≤ T,

with the control process α valued in A = [a, ā], where 0 ≤ a ≤ ā ≤ ∞. Given a continuous function g on R_+, we consider the stochastic control problem:

    v(t, x) = sup_{α∈A} E[ g(X_T^{t,x}) ],   (t, x) ∈ [0, T] × (0, ∞).

Financial interpretation

α represents the uncertain volatility process of the stock price X, and the function g represents the payoff of a European option of maturity T. The value function v is the superreplication cost for this option, that is, the minimum capital required to superhedge (by means of trading strategies on the stock) the option payoff at maturity T whatever the realization of the uncertain volatility.

The Hamiltonian of this stochastic control problem is

    H(x, M) = sup_{a ∈ [a, ā]} [ (1/2) a² x² M ],   (x, M) ∈ (0, ∞) × R.

→ We shall then distinguish two cases: ā finite or not.
→ We shall then distinguish two cases: ā finite or not.

• Bounded volatility: ā < ∞.

In this regular case, H is finite on the whole domain (0, ∞) × R, and is given by

    H(x, M) = (1/2) â(M)² x² M,

with

    â(M) = ā if M ≥ 0,   â(M) = a if M < 0.

▸ v is continuous on [0, T] × (0, ∞), and is the unique viscosity solution with linear growth to the so-called Black-Scholes-Barenblatt equation

    v_t + (1/2) â(v_{xx})² x² v_{xx} = 0,   (t, x) ∈ [0, T) × (0, ∞),

satisfying the terminal condition

    v(T, x) = g(x),   x ∈ (0, ∞).

Remark. If g is convex, then v is equal to the Black-Scholes price with volatility ā, which is convex in x, so that â(v_{xx}) = ā.
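As an illustration of the bounded-volatility case, here is a minimal Python sketch solving the Black-Scholes-Barenblatt equation backward from the payoff; the parameters, the butterfly payoff, and the simple explicit scheme (with its usual stability constraint) are illustrative choices, not the lectures' method:

```python
import numpy as np

# Explicit finite differences for v_t + 0.5 * ahat(v_xx)^2 * x^2 * v_xx = 0,
# stepped backward from the terminal condition v(T, x) = g(x).
a_lo, a_hi, T = 0.1, 0.3, 1.0
x = np.linspace(0.0, 3.0, 151)
dx = x[1] - x[0]
dt = 0.9 * dx**2 / (a_hi**2 * x[-1]**2)   # explicit-scheme stability bound
n_steps = int(np.ceil(T / dt))
dt = T / n_steps

# A non-convex payoff (butterfly), so both volatility regimes are active:
v = (np.maximum(x - 0.8, 0.0) - 2.0 * np.maximum(x - 1.0, 0.0)
     + np.maximum(x - 1.2, 0.0))

for _ in range(n_steps):
    vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    a = np.where(vxx >= 0.0, a_hi, a_lo)       # worst-case volatility selector
    v[1:-1] += dt * 0.5 * a**2 * x[1:-1]**2 * vxx
    # endpoints kept at their payoff values (payoff is flat far from x = 1)

print(f"BSB superreplication price at x = 1: {np.interp(1.0, x, v):.4f}")
```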
• Unbounded volatility: ā = ∞.

In this singular case, the Hamiltonian is given by

    H(x, M) = (1/2) a² x² M   if G(M) := −M ≥ 0,
    H(x, M) = ∞   if −M < 0.

▸ v is the unique viscosity solution to the HJB variational inequality

    min[ −v_t − (1/2) a² x² v_{xx} , −v_{xx} ] = 0,   on [0, T) × (0, ∞),   (18)
    min[ v − g , −v_{xx} ] = 0,   on {T} × (0, ∞).   (19)

Explicit solution to (18)-(19)

Denote by ĝ the concave envelope of g, i.e. the solution to

    min[ ĝ − g , −ĝ_{xx} ] = 0.

Let us consider the Black-Scholes price with volatility a of the option payoff ĝ, i.e.

    w(t, x) = E[ ĝ(X̂_T^{t,x}) ],

where

    dX̂_s = a X̂_s dW_s,   t ≤ s ≤ T,   X̂_t = x.

Then,

    v = w   on [0, T) × (0, ∞).


Proof.

Indeed, the function w is a solution to the Black-Scholes equation:

    w_t + (1/2) a² x² w_{xx} = 0,   on [0, T) × (0, ∞),
    w(T, x) = ĝ(x),   x ∈ (0, ∞).

Moreover, w inherits the concavity property from ĝ, and so

    −w_{xx} ≥ 0,   (t, x) ∈ [0, T) × (0, ∞).

(This holds true in the viscosity sense.)

▸ Therefore, w satisfies the same HJB variational inequality as v:

    min[ −w_t − (1/2) a² x² w_{xx} , −w_{xx} ] = 0,   on [0, T) × (0, ∞),
    min[ w − g , −w_{xx} ] = 0,   on {T} × (0, ∞).

We conclude by the uniqueness result.

Remarks. 1. When a = 0, we have w = ĝ, and so v(t, x) = ĝ(x) on [0, T) × (0, ∞). This also holds true for a ≥ 0 when g is convex. Indeed, by Jensen's inequality applied to the convex function g, we have: v(t, x) = sup_α E[g(X_T^{t,x})] ≥ sup_α g(E[X_T^{t,x}]) = g(x). Since v(t, ·) is concave, this implies: v(t, x) ≥ ĝ(x) for (t, x) ∈ [0, T) × (0, ∞). On the other hand, by Jensen's inequality applied to the concave function ĝ, we have: v(t, x) = w(t, x) ≤ ĝ(E[X̂_T^{t,x}]) = ĝ(x), which proves the equality: v(t, ·) = ĝ for t < T.

2. The superhedging strategy is derived by applying Itô's formula to w(t, X_t), where X is the observed price process in the uncertain volatility model:

    g(X_T) ≤ ĝ(X_T) = w(T, X_T)
           = w(0, X_0) + ∫_0^T ( ∂w/∂t + (1/2) α_t² X_t² ∂²w/∂x² )(t, X_t) dt + ∫_0^T ∂w/∂x (t, X_t) dX_t
           ≤ w(0, X_0) + ∫_0^T ( ∂w/∂t + (1/2) a² X_t² ∂²w/∂x² )(t, X_t) dt + ∫_0^T ∂w/∂x (t, X_t) dX_t   (by concavity of w)
           = w(0, X_0) + ∫_0^T ∂w/∂x (t, X_t) dX_t.

→ This shows that θ_t = ∂w/∂x (t, X_t) is the superhedging strategy associated to the superreplication cost w(0, X_0) of the option payoff g(X_T) in the uncertain volatility model.
Chapter 3 : Backward Stochastic Differential Equations and
stochastic control

I. Introduction

II. General properties of BSDE

III. The Markov case : nonlinear Feynman-Kac formula. Simulation of BSDE

IV. Application: CRRA utility maximization


I. Introduction

• BSDEs first introduced by Bismut (73): adjoint equation in Pontryagin maximum


principle (linear BSDEs)

• Emergence of the theory since the seminal paper by Pardoux and Peng (90): general
BSDEs

• BSDEs widely used in stochastic control and mathematical finance

• Hedging and pricing problems ↔ linear and reflected BSDE

• Portfolio optimization, risk measure ↔ nonlinear BSDE, reflected and constrained


BSDEs

• Improved existence and uniqueness results for BSDEs, especially quadratic BSDEs

• BSDEs provide a probabilistic representation of nonlinear PDEs: nonlinear Feynman-Kac formulae

→ Numerical methods for nonlinear PDEs


II. General results on BSDEs

Let W = (W_t)_{0≤t≤T} be a standard d-dimensional Brownian motion on (Ω, F, F, P), where F = (F_t)_{0≤t≤T} is the natural filtration of W, and T is a fixed finite horizon.

Notations

• P: set of progressively measurable processes on Ω × [0, T]

• S²(0, T): set of elements Y ∈ P such that

    E[ sup_{0≤t≤T} |Y_t|² ] < ∞

• H²(0, T)^d: set of elements Z ∈ P, R^d-valued, such that

    E[ ∫_0^T |Z_t|² dt ] < ∞.

Definition of BSDE

A (one-dimensional) Backward Stochastic Differential Equation (BSDE in short) is written in differential form as

    −dY_t = f(t, Y_t, Z_t) dt − Z_t·dW_t,   Y_T = ξ,   (1)

where the data is a pair (ξ, f), called terminal condition and generator (or driver): ξ ∈ L²(Ω, F_T, P), and f(t, ω, y, z) is P ⊗ B(R × R^d)-measurable.

A solution to (1) is a pair (Y, Z) ∈ S²(0, T) × H²(0, T)^d such that

    Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds − ∫_t^T Z_s·dW_s,   0 ≤ t ≤ T.

Under some specific assumptions on the generator f, there is existence and uniqueness of a solution to the BSDE (1).

Standard Lipschitz assumption (H1)

• f is uniformly Lipschitz in (y, z), i.e. there exists a positive constant C s.t. for all (y, z, y', z'):

    |f(t, y, z) − f(t, y', z')| ≤ C( |y − y'| + |z − z'| ),   dt ⊗ dP a.e.

• The process {f(t, 0, 0), t ∈ [0, T]} lies in H²(0, T).

Theorem (Pardoux and Peng 90). Under (H1), there exists a unique solution (Y, Z) to the BSDE (1).
Proof. (a) Consider first the case where f does not depend on (y, z), and consider the martingale

    M_t = E[ ξ + ∫_0^T f(t, ω) dt | F_t ],

which is square-integrable under (H1), i.e. M ∈ S²(0, T). By the martingale representation theorem, there exists a unique Z ∈ H²(0, T)^d s.t.

    M_t = M_0 + ∫_0^t Z_s·dW_s,   0 ≤ t ≤ T.

Then, the process

    Y_t := E[ ξ + ∫_t^T f(s, ω) ds | F_t ] = M_t − ∫_0^t f(s, ω) ds,   0 ≤ t ≤ T,

satisfies (with Z) the BSDE (1).

Proof. (b) Consider now the general Lipschitz case. As in the deterministic case, we give a proof based on a fixed point method. Let us consider the map Φ on S²(0, T) × H²(0, T)^d, mapping (U, V) ∈ S²(0, T) × H²(0, T)^d to (Y, Z) = Φ(U, V) defined by

    Y_t = ξ + ∫_t^T f(s, U_s, V_s) ds − ∫_t^T Z_s·dW_s.

This pair (Y, Z) exists from step (a). We then see that (Y, Z) is a solution to the BSDE (1) if and only if it is a fixed point of Φ.

Let (U, V), (U', V') ∈ S²(0, T) × H²(0, T)^d and (Y, Z) = Φ(U, V), (Y', Z') = Φ(U', V'). We set (Ū, V̄) = (U − U', V − V'), (Ȳ, Z̄) = (Y − Y', Z − Z') and f̄_t = f(t, U_t, V_t) − f(t, U'_t, V'_t). Take some β > 0 to be chosen later, and apply Itô's formula to e^{βs}|Ȳ_s|² between s = 0 and s = T:

    |Ȳ_0|² = −∫_0^T e^{βs} ( β|Ȳ_s|² − 2Ȳ_s·f̄_s ) ds − ∫_0^T e^{βs} |Z̄_s|² ds − 2∫_0^T e^{βs} Ȳ_s Z̄_s·dW_s.

By taking expectations, we get

    E|Ȳ_0|² + E[ ∫_0^T e^{βs} ( β|Ȳ_s|² + |Z̄_s|² ) ds ] = 2E[ ∫_0^T e^{βs} Ȳ_s·f̄_s ds ]
    ≤ 2C_f E[ ∫_0^T e^{βs} |Ȳ_s| (|Ū_s| + |V̄_s|) ds ]
    ≤ 4C_f² E[ ∫_0^T e^{βs} |Ȳ_s|² ds ] + (1/2) E[ ∫_0^T e^{βs} ( |Ū_s|² + |V̄_s|² ) ds ].
Proof continued. (b) Now, we choose β = 1 + 4C_f², and obtain

    E[ ∫_0^T e^{βs} ( |Ȳ_s|² + |Z̄_s|² ) ds ] ≤ (1/2) E[ ∫_0^T e^{βs} ( |Ū_s|² + |V̄_s|² ) ds ].

This shows that Φ is a strict contraction on the Banach space S²(0, T) × H²(0, T)^d endowed with the norm

    ||(Y, Z)||_β = ( E[ ∫_0^T e^{βs} ( |Y_s|² + |Z_s|² ) ds ] )^{1/2}.

We conclude that Φ admits a unique fixed point, which is the solution to the BSDE (1). □

Non-Lipschitz conditions on the generator

• f is continuous in (y, z) and satisfies a linear growth condition on (y, z). Then,
there exists a minimal solution to the BSDE (1). (Lepeltier and San Martin 97)

• f is continuous in (y, z), linear in y, and quadratic in z, and ξ is bounded. Then,


there exists a unique bounded solution to the BSDE (1) (Kobylanski 00).
III. The Markov case: nonlinear Feynman-Kac formulae

Linear Feynman-Kac formula

Consider the linear parabolic PDE

    ∂v/∂t (t, x) + Lv(t, x) + f(t, x) = 0,   on [0, T) × R^d,   (2)
    v(T, ·) = g,   on R^d,   (3)

where L is the second-order differential operator

    Lv = b(x)·D_x v + (1/2) tr( σσ'(x) D_x² v ).

▸ Consider the (forward) diffusion process

    dX_t = b(X_t) dt + σ(X_t) dW_t.

Then, applying Itô's formula to v(t, X_t) between t and T, with v a smooth solution to (2)-(3):

    v(t, X_t) = g(X_T) + ∫_t^T f(s, X_s) ds − ∫_t^T D_x v(s, X_s)' σ(X_s) dW_s.

It follows that the pair (Y_t, Z_t) = (v(t, X_t), σ'(X_t) D_x v(t, X_t)) solves the linear BSDE:

    Y_t = g(X_T) + ∫_t^T f(s, X_s) ds − ∫_t^T Z_s·dW_s.

Remark. We can compute the solution v(0, X_0) = Y_0 by the Monte Carlo expectation:

    Y_0 = E[ g(X_T) + ∫_0^T f(s, X_s) ds ].
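For instance, a minimal Monte Carlo sketch of this remark (the coefficients b, σ, f, g below are illustrative choices; Euler scheme for X in dimension d = 1):

```python
import numpy as np

# Monte Carlo evaluation of Y_0 = E[g(X_T) + ∫_0^T f(s, X_s) ds]
# for dX_t = b(X_t) dt + sigma(X_t) dW_t, via a forward Euler scheme.
b = lambda x: -x                 # illustrative Ornstein-Uhlenbeck drift
sig = lambda x: 1.0 + 0.0 * x    # constant volatility
f = lambda s, x: x**2
g = lambda x: np.cos(x)

T, x0, n_steps, n_paths = 1.0, 0.5, 100, 200_000
dt = T / n_steps
rng = np.random.default_rng(2)

X = np.full(n_paths, x0)
running = np.zeros(n_paths)      # accumulates ∫ f(s, X_s) ds along each path
for i in range(n_steps):
    running += f(i * dt, X) * dt
    X += b(X) * dt + sig(X) * np.sqrt(dt) * rng.standard_normal(n_paths)

Y0 = np.mean(g(X) + running)
print(f"Monte Carlo estimate of v(0, x0) = Y_0: {Y0:.4f}")
```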
Nonlinear Feynman-Kac formula

Consider the semilinear parabolic PDE

    ∂v/∂t + Lv + f(t, x, v, σ'D_x v) = 0,   on [0, T) × R^d,   (4)
    v(T, ·) = g,   on R^d.   (5)

▸ The corresponding BSDE is

    Y_t = g(X_T) + ∫_t^T f(s, X_s, Y_s, Z_s) ds − ∫_t^T Z_s·dW_s,   (6)

in the sense that:

• the pair (Y_t, Z_t) = (v(t, X_t), σ'(X_t) D_x v(t, X_t)) solves (6);

• conversely, if (Y, Z) is a solution to (6), then Y_t = v(t, X_t) for some deterministic function v, which is a viscosity solution to (4)-(5).

▸ The time discretization and simulation of the BSDE (6) provides a numerical method for solving the semilinear PDE (4)-(5).
Simulation of BSDE: time discretization

• Time grid π = (t_i) on [0, T]: t_i = iΔt, i = 0, …, N, Δt = T/N

• Forward Euler scheme X^π for X: starting from X_{t_0}^π = x,

    X_{t_{i+1}}^π := X_{t_i}^π + b(X_{t_i}^π) Δt + σ(X_{t_i}^π)( W_{t_{i+1}} − W_{t_i} )

• Backward Euler scheme (Y^π, Z^π) for (Y, Z): starting from Y_{t_N}^π = g(X_{t_N}^π),

    Y_{t_i}^π = Y_{t_{i+1}}^π + f(X_{t_i}^π, Y_{t_{i+1}}^π, Z_{t_i}^π) Δt − Z_{t_i}^π·( W_{t_{i+1}} − W_{t_i} ),   (7)

and take conditional expectations:

    Y_{t_i}^π = E[ Y_{t_{i+1}}^π + f(X_{t_i}^π, Y_{t_{i+1}}^π, Z_{t_i}^π) Δt | X_{t_i}^π ].

To get the Z-component, multiply (7) by W_{t_{i+1}} − W_{t_i} and take conditional expectations:

    Z_{t_i}^π = (1/Δt) E[ Y_{t_{i+1}}^π ( W_{t_{i+1}} − W_{t_i} ) | X_{t_i}^π ].
Simulation of BSDE: numerical methods

How to compute these conditional expectations? Several approaches:

• Regression-based algorithms (Longstaff, Schwartz)

Choose q deterministic basis functions ψ_1, …, ψ_q, and approximate

    Z_{t_i}^π = (1/Δt) E[ Y_{t_{i+1}}^π ( W_{t_{i+1}} − W_{t_i} ) | X_{t_i}^π ] ≃ Σ_{k=1}^q α_k ψ_k(X_{t_i}^π),

where α = (α_k) solves the least-squares regression problem:

    arg inf_{α ∈ R^q} Ē[ ( (1/Δt) Y_{t_{i+1}}^π ( W_{t_{i+1}} − W_{t_i} ) − Σ_{k=1}^q α_k ψ_k(X_{t_i}^π) )² ].

Here Ē is the empirical mean based on Monte Carlo simulations of X_{t_i}^π, X_{t_{i+1}}^π, W_{t_{i+1}} − W_{t_i}.

→ Efficiency is enhanced by using the same set of simulation paths to compute all conditional expectations.
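A minimal self-contained sketch of the whole pipeline (forward Euler, then backward induction with polynomial least-squares regression) may help; the model coefficients, driver, and basis below are illustrative choices, not from the lectures:

```python
import numpy as np

# Regression-based backward scheme for the BSDE (6)-(7); d = 1.
b = lambda x: 0.0 * x                      # drift of the forward diffusion
sig = lambda x: 1.0 + 0.0 * x              # volatility of the forward diffusion
f = lambda x, y, z: -0.05 * y + 0.2 * z    # Lipschitz driver
g = lambda x: np.log(1.0 + x**2)           # terminal condition

T, x0, N, n_paths, deg = 1.0, 0.0, 50, 100_000, 4
dt = T / N
rng = np.random.default_rng(3)

# forward Euler scheme, storing the paths and Brownian increments
X = np.empty((N + 1, n_paths)); X[0] = x0
dW = rng.standard_normal((N, n_paths)) * np.sqrt(dt)
for i in range(N):
    X[i + 1] = X[i] + b(X[i]) * dt + sig(X[i]) * dW[i]

# backward induction: regress on polynomials of X_{t_i} to approximate the
# conditional expectations giving Z_{t_i}, then Y_{t_i}
Y = g(X[N])
for i in range(N - 1, -1, -1):
    basis = np.vander(X[i], deg + 1)                       # design matrix
    cz, *_ = np.linalg.lstsq(basis, Y * dW[i] / dt, rcond=None)
    Z = basis @ cz                                         # Z_{t_i}^π
    cy, *_ = np.linalg.lstsq(basis, Y + f(X[i], Y, Z) * dt, rcond=None)
    Y = basis @ cy                                         # Y_{t_i}^π

print(f"Y_0 estimate: {Y.mean():.4f}")
```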

• Other methods:

• Malliavin Monte-Carlo approach (P.L. Lions, Regnier)

• Quantization methods (Pagès)

→ Important literature: Kohatsu-Higa, Pettersson (01), Ma, Zhang (02), Bally and
Pagès (03), Bouchard, Ekeland, Touzi (04), Gobet et al. (05), Soner and Touzi (05),
Peng, Xu (06), Delarue, Menozzi (07), Bender and Zhang (08), etc ...
IV. Application: CRRA utility maximization

• Consider a financial market model with one riskless asset S^0 = 1, and n stocks of price process

    dS_t = diag(S_t)( μ_t dt + σ_t dW_t ),

where W is a d-dimensional Brownian motion (with d ≥ n), μ, σ are bounded adapted processes, and σ is of full rank n.

Consider an agent investing in the stocks a fraction α of his wealth X at any time:

    dX_t = X_t α_t' diag(S_t)^{−1} dS_t = X_t ( α_t'μ_t dt + α_t'σ_t dW_t ).   (8)

A_0: set of F-adapted processes α valued in A, a closed convex set of R^n, satisfying ∫_0^T |α_t'μ_t| dt + ∫_0^T |α_t'σ_t|² dt < ∞ → (8) is well-defined.

• Given a utility function U on (0, ∞), and starting from initial capital X_0 > 0, the objective of the agent is:

    V_0 := sup_{α∈A} E[ U(X_T^α) ].   (9)

Here, X^α is the solution to (8) controlled by α ∈ A_0, starting from X_0 at time 0, and A is the subset of elements α ∈ A_0 s.t. {U(X_τ^α), τ ∈ T_{0,T}} is uniformly integrable.

▸ We solve (9) by dynamic programming and BSDE.

• Value function processes:

For t ∈ [0, T] and α ∈ A, we denote by

    A_t(α) = { β ∈ A : β_{·∧t} = α_{·∧t} },

and define the family of F-adapted processes

    V_t(α) := ess sup_{β ∈ A_t(α)} E[ U(X_T^β) | F_t ],   0 ≤ t ≤ T.

• Dynamic programming (DP)

• For any α ∈ A, the process {V_t(α), 0 ≤ t ≤ T} is a supermartingale.

• There exists an optimal control α̂ ∈ A to V_0 if and only if the martingale property holds, i.e. the process {V_t(α̂), 0 ≤ t ≤ T} is a martingale.

▸ In the sequel, we exploit the DP in the case of CRRA utility functions: U(x) = x^p/p, p < 1. The key observation is that the F-adapted process

    Y_t := V_t(α) / U(X_t^α) > 0   does not depend on α ∈ A, and Y_T = 1.

▸ We adopt a BSDE verification approach: we look for (Y, Z) solution to

    Y_t = 1 + ∫_t^T f(s, ω, Y_s, Z_s) ds − ∫_t^T Z_s·dW_s,   (10)

for some generator f to be determined such that

• for any α ∈ A, the process {U(X_t^α) Y_t, 0 ≤ t ≤ T} is a supermartingale;

• there exists α̂ ∈ A for which {U(X_t^α̂) Y_t, 0 ≤ t ≤ T} is a martingale.

▸ By applying Itô's formula to U(X_t^α)Y_t, the supermartingale property for all α ∈ A and the martingale property for some α̂ imply that f should be equal to

    f(t, Y_t, Z_t) = p sup_{a ∈ A} [ (μ_t Y_t + σ_t Z_t)·a − ((1−p)/2) Y_t |σ_t'a|² ],   (11)

with a candidate for the optimal control given by

    α̂_t ∈ arg max_{a ∈ A} [ (μ_t Y_t + σ_t Z_t)·a − ((1−p)/2) Y_t |σ_t'a|² ],   0 ≤ t ≤ T.   (12)

• Existence and uniqueness of a solution to the BSDE (10)-(11):

  • change of variables Ỹ = ln Y, Z̃ = Z/Y;
  • (Ỹ, Z̃) satisfies a quadratic BSDE; we then rely on results by Kobylanski (00);
  • existence and uniqueness of (Y, Z) ∈ S^∞(0, T) × H²(0, T)^d.

• Verification argument: let (Y, Z) be the solution to (10)-(11).

  • By construction, U(X_t^α)Y_t is a (local) supermartingale; together with the integrability conditions on α ∈ A, it is a true supermartingale → sup_{α∈A} E[U(X_T^α)] ≤ U(X_0)Y_0.
  • By BMO techniques, we show that α̂ defined in (12) lies in A → U(X_t^α̂)Y_t is a martingale → E[U(X_T^α̂)] = U(X_0)Y_0.

• We conclude that V_0 := sup_{α∈A} E[U(X_T^α)] = U(X_0)Y_0, and α̂ is an optimal control.
Markov cases

• Merton model: the coefficients μ(t) and σ(t) of the stock price are deterministic.

In this deterministic case, the BSDE (10)-(11) reduces to an ODE:

    Y(t) = 1 + ∫_t^T f(s, Y(s)) ds,   f(t, y) = y p sup_{a ∈ A} [ μ(t)·a − ((1−p)/2) |σ(t)'a|² ] =: y ρ(t),

and the solution is given by Y(t) = e^{∫_t^T ρ(s) ds} → we recover the solution to the Merton problem:

    V_0 = U(X_0) exp( ∫_0^T ρ(s) ds ).
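A quick sketch checking this reduction numerically (illustrative deterministic coefficients; unconstrained A = R with n = 1, r = 0):

```python
import numpy as np

# With deterministic coefficients the BSDE reduces to the ODE
# Y'(t) = -rho(t) Y(t), Y(T) = 1, i.e. Y(t) = exp(∫_t^T rho(s) ds).
p, T, n_steps = 0.5, 1.0, 10_000
mu = lambda t: 0.10 + 0.02 * t
sigma = lambda t: 0.25
rho = lambda t: p * mu(t)**2 / (2 * sigma(t)**2 * (1 - p))  # unconstrained sup

dt = T / n_steps
Y, integral = 1.0, 0.0
for i in reversed(range(n_steps)):        # backward Euler from Y(T) = 1
    Y += rho(i * dt) * Y * dt
    integral += rho(i * dt) * dt
print(f"ODE: Y(0) = {Y:.6f}   closed form: {np.exp(integral):.6f}")
# and V_0 = U(X_0) * Y(0) with U(x) = x^p / p
```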
• Factor model: the coefficients μ(t, L_t) and σ(t, L_t) of the stock price depend on a factor process

    dL_t = η(L_t) dt + dW_t.

In this case, the BSDE for (Ỹ, Z̃) = (ln Y, Z/Y) is written as

    Ỹ_t = ∫_t^T f̃(s, L_s, Ỹ_s, Z̃_s) ds − ∫_t^T Z̃_s·dW_s,

with a quadratic (in z) generator

    f̃(t, ℓ, y, z) = p sup_{a ∈ A} [ (μ(t, ℓ) + σ(t, ℓ)z)·a − ((1−p)/2) |σ(t, ℓ)'a|² ] + (1/2)|z|².

→ Ỹ_t = ϕ(t, L_t), with a corresponding semilinear PDE for ϕ:

    ∂ϕ/∂t + η(ℓ)·D_ℓ ϕ + (1/2) tr(D_ℓ² ϕ) + f̃(t, ℓ, ϕ, D_ℓ ϕ) = 0,   ϕ(T, ℓ) = 0.

→ Value function:

    V_0 = U(X_0) exp( ϕ(0, L_0) ).

Remark. The BSDE approach and dynamic programming are also well-suited to exponential utility maximization:

→ Many papers: El Karoui, Rouge (00), Hu, Imkeller, Müller (04), Sekine (06), Becherer (06), etc.
Chapter 4 : Martingale approach to portfolio optimization problem

I. Complete market framework

II. Utility maximization

III. Value at Risk hedging problem

I. Complete market model

• One bond of price S^0 ≡ 1 (zero interest rate), and one stock of price process S:

    dS_t / S_t = μ_t dt + σ_t dW_t,   0 ≤ t ≤ T,

where W is a standard Brownian motion on (Ω, F, P) equipped with the natural filtration F = (F_t)_{0≤t≤T} generated by W, and μ, σ > 0 are F-adapted processes s.t. the risk premium λ := μ/σ is bounded.

• Portfolio strategy: F-adapted process θ representing the number of shares in stock, s.t. ∫_0^T |θ_t σ_t|² dt < ∞ a.s. → wealth process starting from initial capital x ≥ 0:

    X_t^{x,θ} = x + ∫_0^t θ_u dS_u

→ Θ(x): set of admissible strategies θ s.t. X_t^{x,θ} ≥ 0, 0 ≤ t ≤ T.

▸ Unique risk-neutral martingale measure Q with density process:

    Z_t = E[ dQ/dP | F_t ] = exp( −∫_0^t λ_u dW_u − (1/2) ∫_0^t |λ_u|² du ),   (1)

under which

    dS_t / S_t = σ_t dW_t^Q,

where W^Q = W + ∫_0^· λ_u du is a Q-Brownian motion.
Martingale approach:

1. Decomposition of the dynamic portfolio optimization into a static optimization problem on the terminal wealth;

2. Representation problem: find the dynamic portfolio strategy leading to the optimal terminal wealth.

▸ We illustrate this approach in two portfolio optimization problems:

• utility maximization

• Value at Risk criterion


II. Utility maximization

Given a utility function U defined on R_+, the utility maximization problem is:

    v(x) = sup_{θ ∈ Θ(x)} E[ U(X_T^{x,θ}) ],   x ≥ 0.   (2)

• Key point: in the complete market framework, a nonnegative contingent claim is attainable by a terminal wealth if and only if its unique arbitrage price is equal to the initial capital of the wealth. In other words, for any x ≥ 0 and H ∈ L_+^0(F_T), the set of nonnegative F_T-measurable random variables, we have:

    H = X_T^{x,θ} for some θ ∈ Θ(x) ⇐⇒ E^Q[H] = E[Z_T H] = x.

▸ The dynamic optimization problem (2) is thus decomposed into:

(i) a static optimization problem:

    (S)   v(x) = sup_{H ∈ H(x)} E[U(H)],   where H(x) = { H ∈ L_+^0(F_T) : E[Z_T H] = x };

(ii) a representation problem: find a portfolio strategy θ* ∈ Θ(x) s.t.

    (R)   X_T^{x,θ*} = H*,   where H* solves (S).
• Solution to the static optimization problem (S)

Formal derivation.

Problem (S) is a convex optimization problem with a linear constraint, and may then be solved by Lagrange multiplier and dual methods from convex analysis → Lagrangian function:

    L(H, y) = E[U(H)] − y( E[Z_T H] − x ),   H ∈ L_+^0(F_T), y ≥ 0.

We seek a zero of the gradient of L(H, y), which leads formally to the equations:

    E[ U'(H) − yZ_T ] = 0,
    E[ Z_T H ] − x = 0.

Hence, one looks for an H of the form

    H = (U')^{−1}(yZ_T) := I(yZ_T)

with a positive y s.t.

    E^Q[H] = E[ Z_T I(yZ_T) ] = x,

and one would expect that such an H is a solution to (S).
Rigorous resolution.

Assume that U is C¹, strictly concave on (0, ∞), and satisfies the Inada conditions:

    U'(0) := lim_{x→0} U'(x) = ∞,   U'(∞) := lim_{x→∞} U'(x) = 0.

We denote by I = (U')^{−1} the inverse of U', which is then one-to-one and strictly decreasing from (0, ∞) onto (0, ∞). We then have:

    Ũ(y) := sup_{x>0} [U(x) − xy] = U(I(y)) − yI(y),   y > 0.   (3)

Fix now x > 0. Then, for any H ∈ H(x) and y > 0, we have by (3):

    E[ U(H) − yZ_T H ] ≤ E[ Ũ(yZ_T) ] = E[ U(I(yZ_T)) − yZ_T I(yZ_T) ],

and so

    E[ U(H) ] ≤ E[ U(I(yZ_T)) ] − y( E[ Z_T I(yZ_T) ] − x ).   (4)

We shall assume that E[Z_T I(yZ_T)] < ∞ for all y > 0. Then, since I(0) = ∞ and I(∞) = 0, there exists a unique y* = y*(x) > 0 s.t.

    E[ Z_T I(y*Z_T) ] = x.

Therefore, by setting

    H* = I(y*Z_T) ∈ H(x),

we deduce from (4) that

    v(x) = sup_{H ∈ H(x)} E[U(H)] = E[U(H*)],

which shows that H* is a solution to (S).


• Computation of the optimal strategy in the representation problem (R)

Denoting by H* ∈ H(x) the solution to problem (S), we now want to compute a portfolio strategy θ* ∈ Θ(x) s.t. the corresponding wealth process X^{x,θ*} satisfies

    X_T^{x,θ*} = H* = I(y*Z_T) a.s.

Since a wealth process is a martingale under the risk-neutral martingale measure, the optimal wealth process is then given by

    X_t^{x,θ*} = E^Q[ I(y*Z_T) | F_t ] =: M̂_t,   0 ≤ t ≤ T,

and we notice in particular for t = 0 that x = X_0^{x,θ*} = E^Q[I(y*Z_T)] = M̂_0. The general method for determining the optimal portfolio strategy θ* then consists of computing the positive Q-martingale M̂_t and writing it, by derivation, in terms of the Brownian motion W^Q under Q:

    dM̂_t = M̂_t φ_t dW_t^Q,   (5)

and then identifying with the general form of the dynamics of the wealth process

    dX_t^{x,θ*} = θ_t* dS_t = θ_t* S_t σ_t dW_t^Q,   (6)

so that

    θ_t* = φ_t M̂_t / (σ_t S_t),   or equivalently in proportion α_t* := θ_t* S_t / X_t^{x,θ*} = φ_t / σ_t,   0 ≤ t ≤ T.

In some particular cases (see the examples below), the derivation (5) can be made explicit with Itô's formula. In the general case, the representation (5) is not explicit and involves the Clark-Ocone formula.
Example: Logarithmic utility function

We consider a utility function of the form

    U(x) = ln x

→ I(y) = 1/y, y > 0. The optimal wealth process is then given by

    X_t^{x,θ*} = E^Q[ 1/(y*Z_T) | F_t ] = (1/Z_t) E[ Z_T/(y*Z_T) | F_t ] = 1/(y*Z_t) =: M̂_t,   0 ≤ t ≤ T,

where y* is s.t. X_0^{x,θ*} = x, and so y* = 1/x. Hence, X_t^{x,θ*} = x/Z_t, 0 ≤ t ≤ T. Now, recalling the expression (1) of Z, we have by Itô's formula:

    dM̂_t = M̂_t (μ_t/σ_t) dW_t^Q.

Therefore, by identifying with the dynamics (6) of the wealth process, we obtain the optimal portfolio strategy in terms of number of shares:

    θ_t* = μ_t X_t^{x,θ*} / (σ_t² S_t),   0 ≤ t ≤ T,

or equivalently in proportion:

    α_t* = θ_t* S_t / X_t^{x,θ*} = μ_t / σ_t²,   0 ≤ t ≤ T.
Example: Power utility function and deterministic coefficients

We consider a utility function of the form

    U(x) = x^γ / γ,   0 < γ < 1,

and we assume that the ratio λ_t := μ_t/σ_t is deterministic. One easily checks that I(y) = y^{−δ} with δ = 1/(1 − γ). The optimal wealth process is then given by

    X_t^{x,θ*} = E^Q[ (y*)^{−δ} Z_T^{−δ} | F_t ]
               = (y*)^{−δ} exp( (1/2) ∫_0^T λ_t² δ(δ−1) dt ) E^Q[ exp( δ ∫_0^T λ_t dW_t^Q − (δ²/2) ∫_0^T λ_t² dt ) | F_t ]
               = (y*)^{−δ} exp( (1/2) ∫_0^T λ_t² δ(δ−1) dt ) exp( δ ∫_0^t λ_u dW_u^Q − (δ²/2) ∫_0^t λ_u² du ),   0 ≤ t ≤ T.

By writing that X_0^{x,θ*} = x, we get x = (y*)^{−δ} exp( (1/2) ∫_0^T λ_t² δ(δ−1) dt ), and so

    X_t^{x,θ*} = x exp( δ ∫_0^t λ_u dW_u^Q − (δ²/2) ∫_0^t λ_u² du ) =: M̂_t,   0 ≤ t ≤ T.

By Itô's formula, we have

    dM̂_t = M̂_t λ_t δ dW_t^Q.

Therefore, by identifying with the dynamics (6) of the wealth process, we obtain the optimal portfolio strategy in terms of number of shares:

    θ_t* = λ_t δ M̂_t / (σ_t S_t) = μ_t X_t^{x,θ*} / ((1 − γ) σ_t² S_t),   0 ≤ t ≤ T,

or equivalently in proportion:

    α_t* = θ_t* S_t / X_t^{x,θ*} = μ_t / ((1 − γ) σ_t²),   0 ≤ t ≤ T.

This generalizes the result obtained by the Bellman method in Chapter 1 for constant coefficients.
III. VaR hedging problem

• In the context of a complete market model such as the Black-Scholes model, the price of any contingent claim is equal to its replication cost, that is, the initial capital allowing one to construct a portfolio strategy whose terminal wealth replicates the payoff at maturity. This price is computed as the expectation of the (discounted) claim under the unique risk-neutral martingale measure. In particular, by putting up an initial capital at least equal to the cost of replication, we can stay on the safe side with a portfolio strategy whose terminal wealth superhedges the payoff of the option.

• What if the investor is unwilling to put up the initial amount required by the replication? What is the maximal probability of a successful hedge the investor can achieve with a given smaller amount? Equivalently, one can ask how much initial capital an investor can save by accepting a certain shortfall probability, i.e. by being willing to take the risk of having to supply additional capital at maturity in, e.g., 1% of the cases. This question corresponds to the familiar "Value at Risk" (VaR) concept, which is very popular among investors and practitioners. Just as in VaR, a certain level of security (e.g. 99%) is chosen, and the investor looks for the most efficient allocation of capital.
VaR optimization problems

• Given a contingent claim represented by a nonnegative F_T-measurable random variable H ∈ L_+^0(F_T), and for some fixed initial capital x strictly smaller than the price of the claim, i.e. x < E^Q[H], we are looking for:

    v(x) = sup_{θ ∈ Θ(x)} P[ X_T^{x,θ} ≥ H ].   (7)

The set {X_T^{x,θ} ≥ H} is called the success set associated to a strategy θ ∈ Θ(x), and v(x) is the maximal probability of success.

▸ Given a shortfall probability ε ∈ (0, 1), we are looking for the least amount of initial capital which allows us to stay on the safe side with probability 1 − ε, i.e. we want to determine the minimal initial capital x s.t. there exists an admissible strategy θ ∈ Θ(x) with

    P[ X_T^{x,θ} ≥ H ] ≥ 1 − ε.

→ This minimal capital is given by

    x* = inf{ x ≥ 0 : v(x) ≥ 1 − ε }.


• Reduction to a static problem on success sets

Proposition 1. Let H ∈ L_+^0(F_T) and x ≥ 0. Let A* ∈ F_T be a solution to the problem

    P[A] → max over A ∈ F_T   (8)

under the constraint

    E^Q[ H 1_A ] ≤ x.   (9)

Then the replicating strategy θ* for the knock-out option

    H* := H 1_{A*}

solves the VaR optimization problem (7), and the corresponding success set coincides almost surely with A*.
Proof. 1) Let θ ∈ Θ(x) be an admissible strategy, and denote by A = {X_T^{x,θ} ≥ H} its corresponding success set. Then we have

    X_T^{x,θ} ≥ H 1_A a.s.,

since X_T^{x,θ} ≥ 0 by admissibility, and so, recalling that the wealth process is a Q-martingale,

    E^Q[ H 1_A ] ≤ E^Q[ X_T^{x,θ} ] = x.

Hence A satisfies the constraint (9), which implies by (8)

    P[A] = P[ X_T^{x,θ} ≥ H ] ≤ P[A*].

2) Let us consider the replicating strategy θ* of H* = H 1_{A*}, and notice that θ* ∈ Θ(x) since the corresponding wealth process X_t^{x,θ*} = E^Q[H* | F_t] is nonnegative. Its success set satisfies

    { X_T^{x,θ*} ≥ H } = { H 1_{A*} ≥ H } ⊇ A*.

On the other hand, part 1) of the proof yields that

    P[ X_T^{x,θ*} ≥ H ] ≤ P[A*].

It follows that the two sets A* and {X_T^{x,θ*} ≥ H} coincide up to P-null sets. In particular, θ* is an optimal strategy for (7).
• Construction of the maximal success set

Let us introduce the measure Q* given by

    dQ*/dQ = H / E^Q[H].

The static optimization problem of the maximal success set is then rewritten as

    P[A] → max over A ∈ F_T

under the constraint

    Q*[A] ≤ α := x / E^Q[H].

→ This is known in statistical test theory as the Neyman-Pearson test of the null hypothesis Q* against the alternative hypothesis P at size α.
▸ Solution to the Neyman-Pearson problem

Define the level

    c* = inf{ c ≥ 0 : Q*[ dP/dQ* > c ] ≤ α },

and the set

    A* = { dP/dQ* > c* } = { dP/dQ > a* H },   with a* = c* / E^Q[H].   (10)

If the set A* satisfies

    Q*[A*] = α, i.e. E^Q[ H 1_{A*} ] = x,   (11)

then A* is a solution to the Neyman-Pearson test, i.e. to (8)-(9). Indeed, let A ∈ F_T s.t. Q*[A] ≤ α. By definition of A*, we then get

    ( 1_{A*} − 1_A )( dP/dQ* − c* ) ≥ 0,   Q* a.s.,

and so

    P[A*] − P[A] = E^{Q*}[ (1_{A*} − 1_A) dP/dQ* ]
                 ≥ E^{Q*}[ (1_{A*} − 1_A) c* ] = c*( Q*[A*] − Q*[A] ) = c*( α − Q*[A] ) ≥ 0,

which shows that A* solves the Neyman-Pearson test.

Remark. The solution to the Neyman-Pearson problem relies on the condition that the set A* of (10) satisfies (11). This condition is clearly satisfied whenever the function c → Q*[ dP/dQ* > c ] is continuous at c*, i.e. Q*[ dP/dQ* = c* ] = 0. Since Q* is absolutely continuous with respect to P, it suffices to check that

    P[ dP/dQ = a* H ] = 0.
Example: VaR hedging in the Black-Scholes model

We consider a geometric Brownian motion for the stock price

    S_t = S_0 exp( (μ − σ²/2) t + σ W_t ),

and for simplicity we set the interest rate equal to zero. We recall that the unique risk-neutral martingale measure Q is given by

    dQ/dP = exp( −(μ/σ) W_T − (1/2)(μ/σ)² T ).

Notice that we can also write

    dQ/dP = const · S_T^{−μ/σ²}.   (12)

A call option H = (S_T − K)^+ can be perfectly replicated from the initial capital given by the famous Black-Scholes formula

    E^Q[H] = S_0 N(d_1) − K N(d_2),

where

    d_1 = (1/(σ√T)) ln(S_0/K) + (1/2) σ√T,   d_2 = d_1 − σ√T,

and N(·) is the distribution function of the standard normal random variable.

Now, suppose we start from an initial capital x smaller than the Black-Scholes price, and we want to maximize the probability of the success set. From the above result, the optimal strategy consists in replicating a knock-out option H 1_A, where the success set A is of the form

    A = { dP/dQ > const · H }.

Due to (12), we can write

    A = { S_T^{μ/σ²} > a (S_T − K)^+ },

for some constant a chosen s.t.

    E^Q[ H 1_A ] = x.   (13)

→ We distinguish two cases depending on whether μ/σ² is larger or smaller than 1.


• Case (i) : µ ≤ σ 2 .
In this case, the success set takes the form

A = {ST < c} = {WT < b},

where
 1 2 
c = S0 exp µ − σ T + σb .
2
Hence, the knock-out option H1A can be written as

H1A = (ST − K)+ 1{ST <c} = (ST − K)+ − (ST − c)+ − (c − K)1{ST >c} ,

which is a combination of two call options and of a binary option. We then calculate
the maximal probability of success set

P[A] = N (b/ T ),

with a constant b determined from the condition (13) written explicitly as :

x = EQ [H1A ] = EQ [(ST − K)+ ] − EQ [(ST − c)+ ] − (c − K)EQ [1{ST >c} ]


 −b − µ T + σT   −b − µ T 
σ
= S0 N (d1 ) − KN (d2 ) − S0 N √ + KN √ σ .
T T
• Case (ii): μ > σ².

In this case, the success set takes the form

    A = {S_T < c_1} ∪ {S_T > c_2} = {W_T < b_1} ∪ {W_T > b_2},

for some c_1 < c_2. Hence, the knock-out option H 1_A can again be written as a combination of call options and digital options. We calculate the maximal probability of the success set

    P[A] = N(b_1/√T) + N(−b_2/√T),

with constants b_1 and b_2 determined from the condition (13), which can be written explicitly (after some straightforward computations) as:

    x = E^Q[H 1_A]
      = S_0 N(d_1) − K N(d_2) − S_0 N( (−b_1 − (μ/σ)T + σT)/√T ) + K N( (−b_1 − (μ/σ)T)/√T )
        + S_0 N( (−b_2 − (μ/σ)T + σT)/√T ) − K N( (−b_2 − (μ/σ)T)/√T ).
Numerical illustration

We illustrate the amount of initial capital that can be saved by accepting a certain shortfall probability.

Let us consider the following numerical example: T = 0.25 (i.e. 3 months), σ = 0.3, μ = 0.08 (< σ²), S_0 = 100, K = 110. For the values ε = 1%, 5%, 10%, we compute the corresponding proportions x/E^Q[H], given respectively by 89%, 59%, 34%. Thus, if we are ready to accept a shortfall probability of 5%, we can reduce the initial capital by 41%.
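A minimal sketch reproducing these numbers (case μ ≤ σ²: fix b from P[A] = N(b/√T) = 1 − ε, then evaluate condition (13)):

```python
import numpy as np
from scipy.stats import norm

# VaR hedging in Black-Scholes: capital proportion x / E^Q[H] vs shortfall eps.
S0, K, T, sigma, mu = 100.0, 110.0, 0.25, 0.3, 0.08
N = norm.cdf

d1 = np.log(S0 / K) / (sigma * np.sqrt(T)) + 0.5 * sigma * np.sqrt(T)
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * N(d1) - K * N(d2)             # full replication cost E^Q[H]

for eps in (0.01, 0.05, 0.10):
    b = np.sqrt(T) * norm.ppf(1.0 - eps)      # success set {W_T < b}
    x = (bs_price
         - S0 * N((-b - mu / sigma * T + sigma * T) / np.sqrt(T))
         + K * N((-b - mu / sigma * T) / np.sqrt(T)))
    print(f"eps = {eps:.0%}: x / E^Q[H] = {x / bs_price:.0%}")
```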
