
Monte Carlo and Early exercise

Alessandro Gnoatto

05.05.2021



Introduction

Early Exercise

Perhaps the biggest challenge for Monte Carlo methods is the accurate and efficient pricing of options with optional early exercise:

Bermudan options: can exercise at a finite number of times t_j
American options: can exercise at any time

The challenge is to find/approximate the optimal strategy (i.e. when to exercise) and hence determine the price and Greeks.


Approximating the optimal exercise boundary introduces new approximation errors:

An approximate exercise boundary is inevitably sub-optimal ⇒ an under-estimate of the "true" value, but an accurate value for the sub-optimal strategy
For the option buyer, the sub-optimal price reflects the value achievable with the sub-optimal strategy
For the option seller, the "true" price is the best a purchaser might achieve
One can also derive an upper bound approximation



Problem formulation


Following the description in Glasserman's book, the Bermudan problem has the dynamic programming formulation:

Ṽ_m(x) = h̃_m(x)
Ṽ_{i-1}(x) = max( h̃_{i-1}(x), E[ D_{i-1,i} Ṽ_i(X_i) | X_{i-1} = x ] )

where
X_i is the underlying at exercise time t_i
Ṽ_i(x) is the option value at time t_i assuming not previously exercised
h̃_i(x) is the exercise value at time t_i
D_{i-1,i} is the discount factor for the interval [t_{i-1}, t_i]


By defining

h_i(x) = D_{0,i} h̃_i(x)
V_i(x) = D_{0,i} Ṽ_i(x)

where

D_{0,i} = D_{0,1} D_{1,2} · · · D_{i-1,i}

we can simplify the formulation to

V_m(x) = h_m(x)
V_{i-1}(x) = max( h_{i-1}(x), E[ V_i(X_i) | X_{i-1} = x ] )


An alternative point of view considers stopping rules τ, i.e. the (random) time at which the option is exercised.

For a particular stopping rule, the initial option value is

V_0(X_0) = E[ h_τ(X_τ) ]

the expected value of the option at the time of exercise. The best that can be achieved is then

V_0(X_0) = sup_τ E[ h_τ(X_τ) ]

giving an optimisation problem.


The continuation value is

C_i(x) = E[ V_{i+1}(X_{i+1}) | X_i = x ]

and so the optimal stopping rule is

τ = min { i : h_i(X_i) > C_i(X_i) }

Approximating the continuation value leads to an approximate stopping rule.


The Longstaff-Schwartz method (2001) is the one most used in practice.

Start with N path simulations, each going from the initial time t = 0 to maturity t = T = t_m. The problem is to assign a value to each path, working out whether and when to exercise the option. This is done by working backwards in time, approximating the continuation value.


At maturity, the value of the option is

V_m(X_m) = h_m(X_m)

At the previous exercise date, the continuation value is

C_{m-1}(x) = E[ V_m(X_m) | X_{m-1} = x ]

This is approximated using a set of R basis functions ψ_r as

Ĉ_{m-1}(x) = Σ_{r=1}^R β_r ψ_r(x)


The coefficients β_r are obtained by a least-squares minimisation, minimising

E[ ( E[ V_m(X_m) | X_{m-1} ] - Ĉ_{m-1}(X_{m-1}) )^2 ]

Setting the derivative w.r.t. β_r to zero gives

E[ ( E[ V_m(X_m) | X_{m-1} ] - Ĉ_{m-1}(X_{m-1}) ) ψ_r(X_{m-1}) ] = 0

and hence, using the tower property of conditional expectation,

E[ V_m(X_m) ψ_r(X_{m-1}) ] = E[ Ĉ_{m-1}(X_{m-1}) ψ_r(X_{m-1}) ]
                           = Σ_s E[ ψ_r(X_{m-1}) ψ_s(X_{m-1}) ] β_s


This set of equations can be written collectively as

B_ψ β = B_V

where

(B_V)_r = E[ V_m(X_m) ψ_r(X_{m-1}) ]
(B_ψ)_{rs} = E[ ψ_r(X_{m-1}) ψ_s(X_{m-1}) ]

Therefore,

β = B_ψ^{-1} B_V


In the numerical approximation, each of the expectations is replaced by an average over the values from the N paths. For example,

E[ ψ_r(X_{m-1}) ψ_s(X_{m-1}) ]

is approximated as

(1/N) Σ_{n=1}^N ψ_r(X_{m-1}^{(n)}) ψ_s(X_{m-1}^{(n)})

Assuming that the number of paths is much greater than the number of basis functions, the main cost is in approximating B_ψ, at a cost of O(N R^2).
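As an illustration (this sketch is ours, not from the slides): if Psi is an N × R matrix whose n-th row contains ψ_1(X_{m-1}^{(n)}), ..., ψ_R(X_{m-1}^{(n)}) and Vnext is the N-vector of simulated values V_m(X_m^{(n)}), the sample estimates and the resulting coefficients in Matlab are

% Psi:   N x R matrix of basis function values at t_{m-1}
% Vnext: N x 1 vector of simulated values V_m(X_m)
B_psi = (Psi' * Psi) / N;     % R x R sample estimate, cost O(N R^2)
B_V   = (Psi' * Vnext) / N;   % R x 1 sample estimate
beta  = B_psi \ B_V;          % solves B_psi * beta = B_V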


Once we have the approximation for the continuation value, what do we do?

If Ĉ(X_{m-1}) < h_{m-1}(X_{m-1}), exercise the option and set

V_{m-1} = h_{m-1}(X_{m-1})

If not, then set

V_{m-1} = V_m

(Longstaff & Schwartz, 2001)


Longstaff-Schwartz

The Longstaff-Schwartz treatment only uses the continuation estimate to decide on the exercise boundary, so there is no loss of accuracy for paths which are not exercised.
Also, Longstaff-Schwartz can do the least squares fit only for paths which are in-the-money (i.e. h(X) > 0), which leads to improved accuracy.
Because of the optimality condition, the option value is insensitive to small perturbations in the exercise boundary, so one can assume that the exercise of paths is not affected when computing first order Greeks.


Provided the basis functions are chosen suitably, the approximation

Ĉ_{m-1}(x) = Σ_{r=1}^R β_r ψ_r(x)

gets increasingly accurate as R → ∞. Longstaff & Schwartz used 5-20 basis functions in their paper.

Having completed the calculation for t_{m-1}, repeat the procedure for t_{m-2}, then t_{m-3}, and so on. One could use different basis functions for each exercise time; the coefficients will certainly be different.


The estimate will tend to be biased low because of the sub-optimal exercise boundary; however, it might be biased high due to using the same paths for decision-making and valuation.
To be sure of being biased low, one should use two sets of paths: one to estimate the continuation value and exercise boundary, and the other for the valuation.
However, in practice the difference is quite small.
This leaves the problem of computing an upper bound.

Implementation in Matlab

Recipe:
1. Approximate the continuation value

c_i = e^{-r(t_{i+1} - t_i)} E[ V_{i+1} | S_i ]

2. via the expansion

c_i = e^{-r(t_{i+1} - t_i)} E[ V_{i+1} | S_i ] ≈ Σ_{k=1}^{N_r} a_k f_k(S_i)

3. To determine the coefficients a_k we minimize, in the least squares sense, the error ε in

c_i = Σ_{k=1}^{N_r} a_k f_k(S_i) + ε.

Notice that this is not a standard linear regression! The coefficients multiply a function f of the underlying, and several choices of f are possible: the f are called basis functions.

1. Let Nx be the number of exercise dates and Nsim the number of Monte Carlo paths.
2. Introduce the vectors (mind the different dimensions!)

V = (v_1, v_2, ..., v_{N_Sim})^T ∈ R^{N_Sim}
A = (a_1, a_2, ..., a_{N_r})^T ∈ R^{N_r}
E = (ε_1, ε_2, ..., ε_{N_Sim})^T ∈ R^{N_Sim}


1. Let f_k, k = 1, ..., N_r be the regression functions. We denote by

f_k(S_i, n)

the value of the k-th basis function at time t_i for the n-th simulated path.
2. For a fixed point in time t_i we collect all function evaluations in a matrix F ∈ R^{N_Sim × N_r} of the form

F := [ f_1(S_i, 1)       f_2(S_i, 1)       ...  f_{N_r}(S_i, 1)
       f_1(S_i, 2)       f_2(S_i, 2)       ...  f_{N_r}(S_i, 2)
       ...               ...                    ...
       f_1(S_i, N_Sim)   f_2(S_i, N_Sim)   ...  f_{N_r}(S_i, N_Sim) ]

3. In summary, we have written in matrix form the (generalized) regression we had before (albeit with a change of notation with respect to the book of Glasserman):

V = F A + E


1. The least squares minimizer in matrix form is then given by the normal equations

F^T F A = F^T V

which lead us to the vector of optimal coefficients for time t_i:

A = (F^T F)^{-1} F^T V
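In Matlab one would not form the inverse explicitly; a minimal sketch, assuming F and V as above:

% Least squares solve; for an overdetermined system the backslash
% operator uses a QR factorisation, mathematically equivalent to
% A = inv(F'*F)*(F'*V) but numerically better conditioned.
A = F \ V;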


Implementation details

1. First choice of the basis functions (notice the non-linearity!). This is our choice for the numerical experiments.
2. This is a particular case of a basis of monomials; a more general version is sketched below.
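The listing from the original slide is not preserved in this text; a minimal sketch of a general monomial basis (the function name basisMonomial and its signature are our assumptions) might look like:

function F = basisMonomial(S, Nr)
% Monomial basis evaluated at the asset values in the column vector S.
% Column k holds S.^(k-1), so Nr = 3 gives the basis [1, S, S.^2].
    F = zeros(length(S), Nr);
    for k = 1:Nr
        F(:, k) = S.^(k - 1);
    end
end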


1. Weighted Laguerre polynomials (this is the original choice by Longstaff and Schwartz):

l_0(x) = e^{-x/2}
l_1(x) = e^{-x/2} (1 - x)
l_2(x) = e^{-x/2} (1 - 2x + x^2/2)
l_n(x) = e^{-x/2} (e^x / n!) (d^n/dx^n)(x^n e^{-x})

2. Non-weighted Laguerre polynomials:

L_0(x) = 1
L_1(x) = 1 - x
L_2(x) = (1/2)(2 - 4x + x^2)
L_n(x) = (e^x / n!) (d^n/dx^n)(x^n e^{-x})

3. Code for the non-weighted Laguerre polynomials is sketched below.
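The slide's listing is not preserved; a sketch using the standard three-term recurrence (n+1) L_{n+1}(x) = (2n+1-x) L_n(x) - n L_{n-1}(x) (the function name basisLaguerre is our assumption):

function F = basisLaguerre(S, Nr)
% Non-weighted Laguerre polynomials L_0 .. L_{Nr-1} evaluated at the
% column vector S, via (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1}.
    F = zeros(length(S), Nr);
    F(:, 1) = 1;
    if Nr > 1
        F(:, 2) = 1 - S;
    end
    for n = 1:(Nr - 2)
        F(:, n+2) = ((2*n + 1 - S) .* F(:, n+1) - n * F(:, n)) / (n + 1);
    end
end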


1. Hermite polynomials:

H_0(x) = 1
H_1(x) = x
H_2(x) = x^2 - 1

H_n(x) = (-1)^n e^{x^2/2} (d^n/dx^n) e^{-x^2/2}        (1)

2. Code: see the sketch below.
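Again the original listing is not preserved; a sketch via the probabilists' recurrence H_{n+1}(x) = x H_n(x) - n H_{n-1}(x) (function name assumed):

function F = basisHermite(S, Nr)
% Probabilists' Hermite polynomials H_0 .. H_{Nr-1} evaluated at the
% column vector S, via H_{n+1} = x H_n - n H_{n-1}.
    F = zeros(length(S), Nr);
    F(:, 1) = 1;
    if Nr > 1
        F(:, 2) = S;
    end
    for n = 1:(Nr - 2)
        F(:, n+2) = S .* F(:, n+1) - n * F(:, n);
    end
end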


1. Legendre polynomials:

P_0(x) = 1
P_1(x) = x
P_2(x) = (1/2)(3x^2 - 1)

P_n(x) = (2^n n!)^{-1} (d^n/dx^n)[(x^2 - 1)^n]        (2)

2. Code: see the sketch below.
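A sketch via Bonnet's recurrence (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x) (function name assumed):

function F = basisLegendre(S, Nr)
% Legendre polynomials P_0 .. P_{Nr-1} evaluated at the column vector S,
% via (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}.
    F = zeros(length(S), Nr);
    F(:, 1) = 1;
    if Nr > 1
        F(:, 2) = S;
    end
    for n = 1:(Nr - 2)
        F(:, n+2) = ((2*n + 1) * S .* F(:, n+1) - n * F(:, n)) / (n + 1);
    end
end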


1. Chebyshev polynomials:

T_0(x) = 1
T_1(x) = x
T_2(x) = 2x^2 - 1

T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)        (3)

2. Code: see the sketch below.
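A sketch using the recurrence (3) directly (function name assumed):

function F = basisChebyshev(S, Nr)
% Chebyshev polynomials T_0 .. T_{Nr-1} evaluated at the column vector S,
% via T_{n+1} = 2 x T_n - T_{n-1}.
    F = zeros(length(S), Nr);
    F(:, 1) = 1;
    if Nr > 1
        F(:, 2) = S;
    end
    for n = 1:(Nr - 2)
        F(:, n+2) = 2 * S .* F(:, n+1) - F(:, n);
    end
end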


A first Matlab implementation: regression and pricing in one function.
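The listing from the original slide is not preserved here; the following is a minimal sketch of such a one-function pricer for an American put under Black-Scholes dynamics (the names priceAmericanPutLSM and basisMonomial, and the argument list, are our assumptions, not the original code):

function price = priceAmericanPutLSM(S0, K, r, sigma, T, Nx, Nsim, Nr)
% Longstaff-Schwartz pricer: regression and pricing on the same paths.
    dt = T / Nx;
    % Geometric Brownian motion paths; rows = paths, columns = dates t_1..t_Nx.
    Z = randn(Nsim, Nx);
    S = S0 * cumprod(exp((r - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z), 2);
    V = max(K - S(:, Nx), 0);              % cash flow at maturity
    for i = (Nx-1):-1:1                    % backward induction
        V = exp(-r*dt) * V;                % discount one step
        h = max(K - S(:, i), 0);           % exercise value
        itm = h > 0;                       % regress on in-the-money paths only
        F = basisMonomial(S(itm, i), Nr);
        A = F \ V(itm);                    % least squares coefficients
        C = F * A;                         % estimated continuation values
        idx = find(itm);
        ex = h(idx) > C;                   % exercise where payoff beats continuation
        V(idx(ex)) = h(idx(ex));
    end
    price = exp(-r*dt) * mean(V);          % discount from t_1 back to t_0
end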


Second implementation: we use a first function to estimate the regression coefficients (a sketch follows below).
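The original listing is again not preserved; a sketch of such a function (names and signature assumed; S is a matrix of simulated paths with one column per exercise date):

function coeffs = lsmRegression(S, K, r, dt, Nr)
% Backward pass over a first set of paths; column i of coeffs holds the
% regression coefficients estimated at exercise date t_i.
    Nx = size(S, 2);
    coeffs = zeros(Nr, Nx - 1);
    V = max(K - S(:, Nx), 0);
    for i = (Nx-1):-1:1
        V = exp(-r*dt) * V;
        h = max(K - S(:, i), 0);
        itm = h > 0;
        F = basisMonomial(S(itm, i), Nr);
        A = F \ V(itm);
        coeffs(:, i) = A;
        C = F * A;
        idx = find(itm);
        ex = h(idx) > C;
        V(idx(ex)) = h(idx(ex));
    end
end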


Second implementation: we use a second dedicated function for pricing, given the regression coefficients we estimated using the previous function (a sketch follows below).

We use two different sets of simulated paths: one to estimate the regressors and one to perform the pricing step.
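A sketch of the pricing step on an independent set of paths (names assumed, matching the sketches above):

function price = lsmPrice(S, K, r, dt, coeffs)
% Forward pass: each path is exercised the first time the payoff
% exceeds the regressed continuation value; otherwise at maturity.
    [Nsim, Nx] = size(S);
    Nr = size(coeffs, 1);
    cashflow = max(K - S(:, Nx), 0);       % default: payoff at maturity
    extime = Nx * ones(Nsim, 1);           % exercise date index per path
    alive = true(Nsim, 1);
    for i = 1:(Nx-1)
        h = max(K - S(:, i), 0);
        C = basisMonomial(S(:, i), Nr) * coeffs(:, i);
        ex = alive & (h > 0) & (h > C);
        cashflow(ex) = h(ex);
        extime(ex) = i;
        alive(ex) = false;
    end
    price = mean(exp(-r * dt * extime) .* cashflow);
end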

Pricing of American Options

1. Put options. For a fixed time point t the payoff of a put option is given by

max(K - S(t), 0)

Let us fix the parameters S(0) = K = 100, the volatility σ = 0.25, and the rate r = 0.04. We assume that there are no dividends and that the maturity is one year.
2. Call options. In the case without dividends, remember that early exercise of an American call option is never optimal! We can use this theoretical result to test our implementation against a multinomial tree or the Black-Scholes formula.
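As a usage illustration with these parameters, assuming the one-function pricer sketched earlier (the 40 exercise dates and 100,000 paths match the remark further below; Nr = 4 basis functions is our assumption):

S0 = 100; K = 100; r = 0.04; sigma = 0.25; T = 1;
Nx = 40; Nsim = 1e5; Nr = 4;
price = priceAmericanPutLSM(S0, K, r, sigma, T, Nx, Nsim, Nr)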


Here we compare three methods:

A binomial tree
Regression and pricing with the same set of paths
Separation of regression and pricing, using two different sets of paths


Remark

1. The Longstaff-Schwartz algorithm does not compute the price just once like a classical Monte Carlo estimate: we are computing the price for all future simulated paths. With 100,000 Monte Carlo paths and 40 exercise dates this means that we are computing the price 4 million times.
2. Exercise for you: take the matrix Val in the code above and plot the histogram/discrete distribution for every point in time.
3. Remember: a conditional expectation is a random variable!


Exotic American Options

Since we estimate future price distributions, the Longstaff-Schwartz algorithm can also be employed to price exotic American options. Let us look at some examples. We consider:
1. American Asian options
2. American Barrier options


Example
Asian options. Let us fix a grid T = {t_1, ..., t_N = T}. For an arithmetic Asian option we consider the arithmetic average on T, given by

A(T) = (1/N) Σ_{i=1}^N S(t_i)

They come in different fashions, for example

AC = max(A(T) - K, 0)
AP = max(K - A(T), 0)        (4)

with a fixed strike K, or in average strike form

AC = max(S(T) - A(T), 0)
AP = max(A(T) - S(T), 0)        (5)

We consider the basis functions 1, S, S^2, S_A, S_A^2, S^2 S_A, S S_A^2, where S_A denotes the running average of the underlying.



We need paths both of the underlying and of the average. These are provided by a dedicated simulation routine, sketched below.
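The routine from the slides is not preserved; a minimal sketch (names assumed) returning GBM paths together with their running arithmetic averages:

function [S, A] = simulatePathsWithAverage(S0, r, sigma, T, Nx, Nsim)
% GBM paths and running arithmetic averages on the grid t_1, ..., t_Nx.
    dt = T / Nx;
    Z = randn(Nsim, Nx);
    S = S0 * cumprod(exp((r - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z), 2);
    A = cumsum(S, 2) ./ repmat(1:Nx, Nsim, 1);   % A(:, i) averages S over t_1..t_i
end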

For comparison purposes, we also have a pricer for American Asian options
using trees.


Test Script
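The script itself is not preserved in this text; a hypothetical version for a fixed-strike American Asian put, built from the sketches above with the basis functions listed earlier (all names are our assumptions):

S0 = 100; K = 100; r = 0.04; sigma = 0.25; T = 1;
Nx = 40; Nsim = 1e5; dt = T / Nx;
[S, A] = simulatePathsWithAverage(S0, r, sigma, T, Nx, Nsim);
V = max(K - A(:, Nx), 0);                  % payoff at maturity
for i = (Nx-1):-1:1                        % backward induction
    V = exp(-r*dt) * V;
    h = max(K - A(:, i), 0);
    itm = h > 0;
    % basis 1, S, S^2, S_A, S_A^2, S^2 S_A, S S_A^2 on in-the-money paths
    F = [ones(nnz(itm), 1), S(itm,i), S(itm,i).^2, A(itm,i), A(itm,i).^2, ...
         S(itm,i).^2 .* A(itm,i), S(itm,i) .* A(itm,i).^2];
    C = F * (F \ V(itm));                  % regressed continuation values
    idx = find(itm);
    ex = h(idx) > C;
    V(idx(ex)) = h(idx(ex));
end
price = exp(-r*dt) * mean(V)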


Test Script - continued

Also in this case we compare the two versions of the algorithm: regression and pricing with the same set of paths vs. separation of the regression estimate and the pricing.


Example
Barrier options. We consider Knock-In and Knock-Out barrier options. Let two functions lb and ub be given by

lb, ub : [0, T] → R_+
t ↦ lb(t), ub(t)

The functions lb and ub are called the lower barrier and the upper barrier, respectively. A Knock-Out option is extinguished once the asset moves below lb or above ub, whereas a Knock-In option only becomes active once the asset moves below lb or above ub. Writing T_lb (resp. T_ub) for the first time the asset crosses the lower (resp. upper) barrier, the payoffs are

DIP = max(S(T) - K, 0) if T_lb ≤ T,   0 if T_lb > T
UIP = max(K - S(T), 0) if T_ub ≤ T,   0 if T_ub > T
DOP = max(S(T) - K, 0) if T_lb > T,   0 if T_lb ≤ T
UOP = max(K - S(T), 0) if T_ub > T,   0 if T_ub ≤ T
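As an illustration of how the triggers can be detected on simulated paths (our sketch, assuming constant barriers, monitoring only at the grid dates, and the path matrix S and strike K from before):

lb = 80; ub = 120;                          % assumed constant barriers
hitLb = any(S < lb, 2);                     % path ever crossed below lb
hitUb = any(S > ub, 2);                     % path ever crossed above ub
DIP = max(S(:, end) - K, 0) .* hitLb;       % down-and-in payoffs
DOP = max(S(:, end) - K, 0) .* ~hitLb;      % down-and-out payoffs
UIP = max(K - S(:, end), 0) .* hitUb;       % up-and-in payoffs
UOP = max(K - S(:, end), 0) .* ~hitUb;      % up-and-out payoffs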