Models of Returns and Simulation
One can check that $P(t)$ satisfies the ordinary differential equation
\[ \frac{dP}{dt} = rP \]
or
\[ dP = rP\,dt. \]
This last formulation says that in a tiny time interval of length dt the
change in the principal is equal to the amount of principal times the annual
rate times the length of the time interval. As the length of the interval gets
smaller and smaller this assertion becomes more and more accurate.
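This limiting relation can be checked numerically: as the number of compounding periods per year grows, discretely compounded principal approaches the continuous value $P(t) = P(0)e^{rt}$. A minimal sketch, with made-up values for the principal, rate, and horizon:

```python
# Compare discrete compounding with the continuous limit P(t) = P0 * exp(r*t).
# P0, r, and t are illustrative values, not taken from the text.
import math

P0, r, t = 100.0, 0.05, 1.0

for m in (1, 12, 365, 100_000):          # compounding periods per year
    discrete = P0 * (1 + r / m) ** (m * t)
    print(f"m = {m:>6}: {discrete:.6f}")

continuous = P0 * math.exp(r * t)
print(f"continuous: {continuous:.6f}")
```

The gap between the last discrete value and the continuous one shrinks like $1/m$, which is the numerical face of the assertion above.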
Continuous, Deterministic Interest Rates. There are several ways to
extend the idea of continuously compounded interest with a constant rate.
The first way is to allow non-constant rates. A very general deterministic
mathematical model that would allow this would be
\[ dP = f(P, t)\,dt \]
or
\[ \frac{dP}{dt} = f(P, t), \]
where $f$ is allowed to be a function of $P$ and $t$. More simply, one would take
$f(P(t), t) = r(t)P(t)$, where $r(t)$ is some known, reasonably nice [1] function.
To describe the solution completely, one needs to add an initial condition,
say P (t0 ) = c, where t0 is some initial time and c is a constant.
This model is tractable in the sense that depending on the nature of f one
can either invoke results from the theory of ordinary differential equations
to state a solution formula or use numerical methods to find very good
approximate solutions. For example, if we require that $f(P, t) = r(t)P$,
then we have to solve the differential equation
\[ \frac{dP}{P} = r(t)\,dt, \]
which, after integrating both sides and taking into account the constant that
arises, has the solution
\[ P(t) = P(0)\,e^{\int_0^t r(s)\,ds}. \]
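The solution formula can be checked numerically against the differential form $dP = r(t)P\,dt$ by taking small Euler steps. A sketch, with a made-up linear rate function (none of the values below come from the text):

```python
# Growth under a deterministic, time-varying rate r(t): P(t) = P(0) * exp(∫_0^t r(s) ds).
# The linear rate r(s) = 0.03 + 0.01*s is an illustrative choice.
import math

def r(s):
    return 0.03 + 0.01 * s

P0, T, n = 100.0, 2.0, 200_000
dt = T / n

# Euler step dP = r(t) P dt, following the differential form of the model.
P = P0
for i in range(n):
    P += r(i * dt) * P * dt

# Closed form: ∫_0^T (0.03 + 0.01 s) ds = 0.03*T + 0.005*T**2.
exact = P0 * math.exp(0.03 * T + 0.005 * T**2)
print(P, exact)   # the Euler value approaches the exact one as n grows
```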
With this sort of model of interest rates one can explore scenarios based
on different assumptions about how rates will change. For example, there
have been times recently when the short-term direction of the Federal
Reserve's discount rate has been easy to predict, based on announcements by
the Chairman. Currently, one might make some guesses about when rates
will start to move up and with what speed, and then explore the effects of
this on a portfolio. However, interest rates are notoriously difficult to
predict and are subject to shocks; we have seen these in the last year in the
sudden lowering of interest rates by the central banks.
[1] For example, we could allow it to have a finite number of discontinuities.
Monte Carlo Simulation © 2009–11, Stephen Demko
The case of random interest rates. There are a number of models for interest
rates that allow for randomness. We will mention some very soon. One
major complication that arises beyond the model itself is how to interpret
something like $e^{\int_0^t r(s)\,ds}$ when $r$ is random.
[2] We only know that such a point exists; we do not have a method of finding it.
Taking $t = 0$ and $\Delta t = T$, since $W(0) = 0$, one can make one big jump
and write:
\[ (5)\qquad S(T) = S(0)\exp\Big\{\Big(\mu - \frac{\sigma^2}{2}\Big)T + \sigma W(T)\Big\} \]
Realizations of $S_T$ can be calculated as
\[ (6)\qquad S_T(\omega) = S_0 \exp\Big\{\Big(\mu - \frac{\sigma^2}{2}\Big)T + \sigma\sqrt{T}\,\omega\Big\} \]
where $\omega$ is a sample from a Gaussian distribution [4].
In principle one can use this to estimate probabilities related to $S_T$ (e.g.,
$\Pr\{m \le S_T \le M\}$, where $m$ and $M$ are prescribed values) by
taking the $\omega$ to be independent samples from a normal distribution.
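A sketch of such an estimate, sampling equation (6) repeatedly; all parameter values below are illustrative, not taken from the text:

```python
# Estimate Pr{m <= S_T <= M} under GBM by sampling realizations from (6).
import math, random

random.seed(0)
S0, mu, sigma, T = 100.0, 0.05, 0.20, 1.0   # illustrative GBM parameters
m, M = 90.0, 120.0                           # prescribed bounds
N = 100_000

hits = 0
for _ in range(N):
    w = random.gauss(0.0, 1.0)               # standard normal sample
    ST = S0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * w)
    hits += (m <= ST <= M)

print(hits / N)   # Monte Carlo estimate of the probability
```

The fraction of samples landing in $[m, M]$ converges to the true probability as $N$ grows, at the $1/\sqrt{N}$ rate discussed in the error theory below.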
The computer calculation of (6) requires the ability to generate samples
of a Gaussian distribution. Either a computing environment will provide
functionality to manufacture independent Gaussian (pseudo-)random [5]
numbers, or the programmer will write such a tool. For example, in Matlab
the function randn does the job, and in Java the method nextGaussian() of
the Random class does it. Methods for generating a sequence of independent
Gaussian samples typically involve a transformation of independent
uniformly distributed samples. In the last ten years the "Mersenne Twister"
algorithm has become very widely used because of both its speed and the
quality of the uniformly distributed random numbers it produces. Code for
this is readily available on the internet [6]. See the Wikipedia article on the
Mersenne Twister for appropriate links. A good reference on the generation
of pseudo-random numbers is Donald Knuth's classic The Art of Computer
Programming, Volume 2: Seminumerical Algorithms (3rd Edition).
The current discussion of option pricing has as its focus the case that
the underlying satisfies a GBM. As noted above this allows us to “jump”
from the present spot value to spot values at expiration and, thus, avoid
the discrete time approximation of the Euler-Maruyama method. Certain
options that depend on the history (or path, or trajectory) of the underlying
can also be easily priced with the method discussed here; for example, a call
option for which the settlement price is not the spot at expiration but an
average of the closing prices of the underlying at a finite set of trading
days. Such an option is called an Asian-tail option [7]. Some other types of
options, like American Puts, can require complex modifications of the basic
[4] By a Gaussian we mean a normal distribution with mean 0 and variance 1, also
called the Standard Normal distribution.
[5] We call the output of algorithms that attempt to produce random numbers of any
sort "pseudo-random numbers". This is in contrast to typos, which should be deemed
random, and hopefully rare, events.
[6] As a rule, if you have both legal access and user rights to well-written, useful code,
it makes lots of sense to use it if you have the need.
[7] A good exercise is to plan the coding for an Asian-tail option, assuming that strike,
rate, and volatility are given.
Monte Carlo method because the option contract can cease to exist before
expiration.
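Footnote 7 suggests planning the coding of an Asian-tail option; a minimal sketch is below. Under GBM one can step exactly from one averaging date to the next, so no Euler discretization error enters. All parameter values and averaging dates are illustrative assumptions, not taken from the text:

```python
# Sketch: price an Asian-tail call whose settlement price is the average of
# the underlying at a fixed set of dates near expiry. Risk-neutral GBM steps.
import math, random

random.seed(2)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.20, 1.0   # illustrative parameters
avg_dates = [0.92, 0.94, 0.96, 0.98, 1.00]           # illustrative averaging dates (years)
N = 50_000

total = 0.0
for _ in range(N):
    S, t = S0, 0.0
    prices = []
    for t_next in avg_dates:
        dt = t_next - t
        w = random.gauss(0.0, 1.0)
        # exact GBM step between dates -- the "one big jump" idea applied per interval
        S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * w)
        prices.append(S)
        t = t_next
    settle = sum(prices) / len(prices)               # settlement = average close
    total += max(settle - K, 0.0)

price = math.exp(-r * T) * total / N
print(price)
```

Since the averaging dates sit close to expiry, the result lands near, and slightly below, the corresponding vanilla call value.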
Recalling that the present value of a non-path-dependent European option
with payoff function $P(S)$ is the discounted expectation of the payoff given
the present spot value, we write this in symbols as
\[ V = e^{-rT}\,E[P(S_T) \mid S_0]. \]
Error Theory. The basic error theory of the Monte Carlo method is phrased
in terms of concepts from probability and statistics, in particular in terms
of confidence intervals. This is no surprise, since the idea is essentially to
estimate the expectation of a random variable by averaging a large number of
samples of the random variable. We use the notation
\[ \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-s^2/2}\,ds. \]
If we let $V_i$ denote the estimate computed at step $i$, $V_i = e^{-rT} P_i$, then
with $\hat V_N = \frac{1}{N}(V_1 + V_2 + \cdots + V_N)$, we can compute the sample standard
deviation of $V_1, V_2, \ldots, V_N$ by
\[ s_N = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (\hat V_N - V_i)^2}. \]
With $V$ denoting the true value of the option, the Central Limit Theorem
asserts that the probability of
\[ -x \le \frac{\hat V_N - V}{s_N/\sqrt{N}} \le x \]
converges, as $N \to \infty$, to
\[ 2\Phi(x) - 1 = \frac{1}{\sqrt{2\pi}} \int_{-x}^{x} e^{-s^2/2}\,ds. \]
If we want this probability to be large, say $\ge 1 - \epsilon$, then we should take $x$
such that $2\Phi(x) - 1 = 1 - \epsilon$, or $\Phi(x) = 1 - \frac{\epsilon}{2}$. Let's denote this value of $x$ by
$x_\epsilon$: $\Phi(x_\epsilon) = 1 - \frac{\epsilon}{2}$. So with $N$ simulations and a given, reasonably small $\epsilon$, we
would take some asymptotic probabilistic comfort that the value of $V$ lies in
the interval $\big(\hat V_N - \frac{s_N}{\sqrt{N}} x_\epsilon,\ \hat V_N + \frac{s_N}{\sqrt{N}} x_\epsilon\big)$. We can make the interval smaller by
taking more simulations, but since the length of the interval is controlled by
$\frac{1}{N}$, which appears under a square root sign, to make the interval about $\frac{1}{10}$th
as large we would need to go from $N$ simulations to $100N$.
This points out the main limitation of the method. Accuracy depends on
many simulations. Normally, tens of thousands or even millions are required.
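The estimate $\hat V_N$, the sample standard deviation $s_N$, and the resulting confidence interval can all be computed from the same batch of samples. A sketch for a European call, taking $x_\epsilon \approx 1.96$ for $\epsilon = 0.05$; the option parameters are illustrative assumptions:

```python
# Monte Carlo price of a European call with a 95% confidence interval.
import math, random

random.seed(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.20, 1.0   # illustrative parameters
N = 200_000

vals = []
for _ in range(N):
    w = random.gauss(0.0, 1.0)
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * w)
    vals.append(math.exp(-r * T) * max(ST - K, 0.0))   # V_i = e^{-rT} P_i

vhat = sum(vals) / N                                    # \hat V_N
sN = math.sqrt(sum((v - vhat) ** 2 for v in vals) / (N - 1))
half = 1.96 * sN / math.sqrt(N)                         # x_eps * s_N / sqrt(N)
print(f"{vhat:.4f} +/- {half:.4f}")
```

Quadrupling $N$ here halves the half-width, the $1/\sqrt{N}$ behavior described above.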
Monte Carlo As a Numerical Integration Method. Since the standard
normal distribution (i.e., $N(0,1)$) has density function
$d(x) = \frac{1}{\sqrt{2\pi}} \exp(-\frac{x^2}{2})$, we can write the expectation of the payoff function as an
integral
\[ (9)\qquad E[P(S)] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} P\Big[S_0 \exp\Big\{\Big(r - \frac{\sigma^2}{2}\Big)T + \sigma\sqrt{T}\,\omega\Big\}\Big]\, e^{-\omega^2/2}\,d\omega \]
It is useful to normalize the interval of integration to $[0, 1]$. So, with
\[ N(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\omega} \exp\Big\{-\frac{t^2}{2}\Big\}\,dt, \]
we make the change of variables $z = N(\omega)$ and conclude from standard
Calculus that
\[ (10)\qquad E[P(S)] = \int_0^1 P\Big[S_0 \exp\Big\{\Big(r - \frac{\sigma^2}{2}\Big)T + \sigma\sqrt{T}\,N^{-1}(z)\Big\}\Big]\,dz \]
In this second integral we see that as $z$ moves throughout the interval $[0, 1]$,
the values of $N^{-1}(z)$ vary over the entire set of real numbers. Moreover,
if the values of $z$ are distributed uniformly over $[0, 1]$, then the values of
$w = N^{-1}(z)$ are distributed like a Gaussian over $(-\infty, \infty)$. It is clear that
if the payoff $P$ is continuous, then approximations like
\[ (11)\qquad E[P(S)] \approx \frac{1}{N} \sum_{i=1}^{N} P\Big[S_0 \exp\Big\{\Big(r - \frac{\sigma^2}{2}\Big)T + \sigma\sqrt{T}\,N^{-1}(z_i)\Big\}\Big] \]
should converge to the value of the integral as the number of points $N$
grows, provided that the points $z_1, z_2, \ldots, z_N$ fill up the interval $[0, 1]$ as
$N \to \infty$.
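Approximation (11) can be tried directly with equally spaced points $z_i$; the inverse normal CDF $N^{-1}$ is available in the Python standard library as statistics.NormalDist().inv_cdf. A sketch with illustrative call-option parameters (the discounted result lands near the Black-Scholes value):

```python
# Approximate integral (10) with midpoints z_i = (i + 1/2)/N in [0, 1].
import math
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.20, 1.0   # illustrative parameters
N = 20_000
inv = NormalDist().inv_cdf                            # N^{-1}

total = 0.0
for i in range(N):
    z = (i + 0.5) / N                 # midpoints avoid z = 0 and z = 1
    w = inv(z)
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * w)
    total += max(ST - K, 0.0)         # call payoff P(S) = max(S - K, 0)

price = math.exp(-r * T) * total / N
print(price)
```

Because the points are deterministic and fill $[0,1]$ evenly, there is no statistical noise here: rerunning gives the same answer every time.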
[8] However, if you modify your code, then a given seed need not replicate the results of
an earlier experiment.
\[ E\Big[\Big(\frac{1}{N}\sum_{i=1}^{N} g(x_i) - E[g]\Big)^2\Big] = \frac{\sigma^2(g)}{N} \]
\[ E\Big[\Big(\frac{1}{N}\sum_{i=1}^{N} g(x_i) - E[f]\Big)^2\Big] = \frac{\sigma^2(f)}{2N} < \frac{\sigma^2(f)}{N} \]
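The variance comparison can be demonstrated numerically. The page defining $g$ is not reproduced here, so the sketch below assumes the standard antithetic-variates construction $g(w) = \frac{1}{2}(f(w) + f(-w))$ with $f$ a discounted call payoff; both the construction and the parameters are assumptions for illustration:

```python
# Compare the variance of plain samples f(w) with antithetic averages g(w).
import math, random

random.seed(4)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.20, 1.0   # illustrative parameters

def f(w):
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * w)
    return math.exp(-r * T) * max(ST - K, 0.0)

def g(w):
    return 0.5 * (f(w) + f(-w))       # assumed antithetic average

ws = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

vf = variance([f(w) for w in ws])
vg = variance([g(w) for w in ws])
print(vf, vg)   # vg comes out well below vf / 2 for this payoff
```

Each $g$ sample costs two payoff evaluations, but for a monotone payoff the negative correlation between $f(w)$ and $f(-w)$ pushes $\sigma^2(g)$ below $\sigma^2(f)/2$, so the trade is worthwhile.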
[Figure: running Monte Carlo estimates over 0 to 5000 realizations; vertical axis roughly 4.0 to 4.6.]
The second picture is a close-up of the first over the range of 2500 to 5000
realizations.
[Figure: close-up of five Monte Carlo runs ("R5000-1.out" through "R5000-5.out") over realizations 2500 to 5000; vertical axis roughly 4.0 to 4.7.]
The third picture shows the result of applying the method using the Hal-
ton Sequence to generate uniformly distributed points and then generating
the associated normally distributed points. Note that there is one answer
here since the process is deterministic. Also, note the rate of convergence.
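The Halton sequence itself is short to code: the $n$th point in base $b$ is the radical inverse of $n$, obtained by reflecting the base-$b$ digits of $n$ about the radix point. A minimal sketch in one dimension (footnote 14's multi-underlying case uses a distinct prime base per coordinate):

```python
# Base-b radical inverse: n = (d_k ... d_1 d_0)_b maps to 0.d_0 d_1 ... d_k in base b.
def radical_inverse(n, base):
    inv, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, digit = divmod(n, base)
        inv += digit / denom
    return inv

# One-dimensional Halton points in base 2 (the van der Corput sequence):
points = [radical_inverse(n, 2) for n in range(1, 9)]
print(points)   # -> [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```

Note how each successive point lands in the largest remaining gap, which is why the sequence fills $[0, 1]$ far more evenly than pseudo-random draws.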
[Figure: quasi-Monte Carlo estimates using the Halton sequence ("QR5000.out") over 0 to 5000 points; vertical axis roughly 4.0 to 5.0.]
[14] These arise, for example, when the value of an option depends on more than one
underlying.