
Licensed to Christian Rey Magtibay at [email protected]. Downloaded August 22, 2021.

The information provided in this document is intended solely for you. Please do not freely distribute.

P1.T2. Quantitative Analysis

Chapter 13: Simulation and Bootstrapping

Bionic Turtle FRM Study Notes



Chapter 13: Simulation and Bootstrapping

DESCRIBE THE BASIC STEPS TO CONDUCT A MONTE CARLO SIMULATION.
DESCRIBE WAYS TO REDUCE MONTE CARLO SAMPLING ERROR.
EXPLAIN THE USE OF ANTITHETIC AND CONTROL VARIATES IN REDUCING MONTE CARLO SAMPLING ERROR.
DESCRIBE THE BOOTSTRAPPING METHOD AND ITS ADVANTAGE OVER MONTE CARLO SIMULATION.
DESCRIBE PSEUDO-RANDOM NUMBER GENERATION.
DESCRIBE SITUATIONS WHERE THE BOOTSTRAPPING METHOD IS INEFFECTIVE.
DESCRIBE THE DISADVANTAGES OF THE SIMULATION APPROACH TO FINANCIAL PROBLEM SOLVING.
PRACTICE QUESTIONS & ANSWERS


 Describe the basic steps to conduct a Monte Carlo simulation.
 Describe ways to reduce Monte Carlo sampling error.
 Explain the use of antithetic and control variates in reducing Monte Carlo sampling
error.
 Describe the bootstrapping method and its advantage over Monte Carlo simulation.
 Describe pseudo-random number generation and how a good simulation design
alleviates the effects the choice of the seed has on the properties of the generated
series.
 Describe situations where the bootstrapping method is ineffective.
 Describe the disadvantages of the simulation approach to financial problem solving.

Key ideas and/or definitions in this chapter

 To be developed on next revision


 TBD-2


Describe the basic steps to conduct a Monte Carlo simulation.


To conduct a Monte Carlo simulation (MCS), we follow these basic steps:1

1. Generate the data, x(i) = {x(1i), x(2i), …, x(ni)} according to the specified data
generating process (DGP) with random errors drawn from some given distribution; for
example, a common error is the random standard normal, ~N(0,1), but any plausible
distribution is possible.

Each (i) is a single simulation trial or replication. Because the MCS is often simulated
over time, we can think of the trial/replication as a single row where the columns
represent time intervals. For example, if the MCS is a simulation of a daily stock price
going forward one year, the first trial/replication is a single row: [1 row × 250 columns of
future simulated prices; S(1), S(2), …, S(250)].

2. Calculate the test statistic, function, or regression based on trial (i), such that g(i) =
g[x(i)]. For example, the final price (at the end of the row); or if we are valuing an Asian
option, the arithmetic average of the replication’s (row’s) price series.

3. Repeat the first two steps to produce (b) replications. If we continue the matrix view,
this is where we add one additional row for each replication. It is common to generate
many replications, so perhaps the matrix adds 1,000 or 10,000 rows; e.g., if we conduct
1,000 replications, then the MCS matrix looks like: [1,000 rows/replications/trials × 250
columns of future simulated prices; S(1), S(2), …, S(250)].

4. From the replications, estimate the quantity of interest from {g(1), g(2), …, g(b)}, which is
likely to be a summary statistic. For example, this might be a quantile such as the 95th
percentile of {g(1), g(2), …, g(b)}, so that we have estimated a 95.0% value at risk
(VaR) based on the simulated data!

5. Compute the standard error (SE) to evaluate the accuracy of the quantity of interest.
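The five steps above can be sketched in Python (a minimal illustration; the GBM drift and volatility parameters below are hypothetical, not from the reading):

```python
import numpy as np

rng = np.random.default_rng(42)  # seed the PRNG for reproducibility

# Hypothetical parameters: initial price, annual drift, annual volatility
S0, mu, sigma = 100.0, 0.08, 0.20
n_days, n_reps = 250, 1_000
dt = 1.0 / 250

# Step 1: generate the data; each row is one trial/replication of 250 daily prices
z = rng.standard_normal((n_reps, n_days))  # random ~N(0,1) errors
log_paths = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
paths = S0 * np.exp(log_paths)             # 1,000 rows x 250 columns of prices

# Step 2: the statistic g(i) for each trial; here, the loss over the horizon
losses = S0 - paths[:, -1]

# Steps 3-4: across the b = 1,000 replications, the 95th percentile of the
# losses is an estimate of the 95.0% value at risk (VaR)
var_95 = np.percentile(losses, 95)

# Step 5: standard error of the mean loss estimate, sqrt(var/N)
se = losses.std(ddof=1) / np.sqrt(n_reps)
```

Because steps 1 through 3 build a replications-by-time matrix, the quantile in step 4 is taken down a single column of terminal outcomes.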

According to Brooks, we can abstract from the many steps and think about two key stages:2
 The first stage is the specification of the (data generating) model. This model may be either
a pure time series model or a structural model.
o Pure time series models are simpler to implement.
o A full structural model is harder because it additionally requires specifying the
DGP for the explanatory variables.
Once the time series model is selected, the next choice is the probability distribution
used to specify the random errors.
 The second stage involves estimation of the parameter of interest in the study.
Viable parameters include, for example, coefficient value in a regression; option value at
maturity; or portfolio value under a set of scenarios that govern asset price dynamics.

1 Chapter 13 (GARP 2020). But also informed by (prior author-specific FRM assignment): Brooks, Chris,
Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).
2 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).


Additional ideas about the Monte Carlo simulation (MCS):


 The number of replications should be as large as possible, as a rule. As Brooks explains,
“The central idea behind Monte Carlo is that of random sampling from a given
distribution. If the number of replications is set too small, the results will be sensitive to
odd combinations of random number draws.”3 Further, because asymptotic arguments
apply (i.e., the results of a simulation study will equal their analytical counterparts
asymptotically, if they exist), we prefer larger samples (more replications).
 In many situations, Monte Carlo simulation is the best possible approach. Often,
analytical solutions are tractable but too simplistic to capture realistic and complex
dynamics. Historical simulation, on the other hand, does offer the advantage of plentiful
data, but may not give us the means to effectively imagine possible future outcomes.
 A key advantage of MCS is that it can be used to investigate the properties and
behavior of various statistics of interest. The technique is used in econometrics when
the properties of a particular estimation method are not known.

For example: Monte Carlo simulation (MCS) of interest rates

Consider interest rate paths. With relatively few assumptions, simulation allows us to randomize
possible future interest rate paths. The two charts below can be found in the Learning
Spreadsheet associated with this reading. Each chart simulates only ten (10) different interest
rate paths over a ten-year (120 month) horizon. The two models employed are Cox-Ingersoll-
Ross (CIR, on the left-hand side) and the Vasicek model (on the right-hand side). In each case,
the “error” (the random component) is a random standard normal variable, N(0,1). These
interest rate paths become inputs into a credit exposure (in the XLS) or can feed into a value at
risk (VaR) model. Such is the flexibility of simulations, which allow us to manufacture data!
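Interest rate paths like those in the two charts can be reproduced with a short script (a sketch only; the mean-reversion speed, long-run mean, and volatility parameters below are hypothetical, not taken from the Learning Spreadsheet):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical parameters: speed a, long-run mean b, volatility sigma, start rate r0
a, b, sigma, r0 = 0.30, 0.05, 0.02, 0.03
n_paths, n_months = 10, 120
dt = 1.0 / 12

vasicek = np.full((n_paths, n_months + 1), r0)
cir = np.full((n_paths, n_months + 1), r0)
for t in range(n_months):
    z = rng.standard_normal(n_paths)  # the random N(0,1) "error"
    # Vasicek: dr = a(b - r)dt + sigma*dW
    vasicek[:, t + 1] = (vasicek[:, t] + a * (b - vasicek[:, t]) * dt
                         + sigma * np.sqrt(dt) * z)
    # CIR: dr = a(b - r)dt + sigma*sqrt(r)*dW; the sqrt(r) term damps the
    # shock as rates approach zero
    r = np.maximum(cir[:, t], 0.0)
    cir[:, t + 1] = r + a * (b - r) * dt + sigma * np.sqrt(r * dt) * z
```

Each matrix holds ten rows (paths) of 120 monthly steps; these simulated rates could then feed a credit exposure or VaR model.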

3 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).


Describe ways to reduce Monte Carlo sampling error.


There are two basic ways to reduce sampling error:

a) Increase the sample size, or

b) Employ a variance reduction technique (aka, acceleration method).

Sampling variation scales with the square root of the sample size

Let’s denote with x(i) the parameter value of interest for the replication (i). If we generate N =
10,000 replications and retrieve the average, we expect a slightly different result than another
researcher who also conducts 10,000 replications, under an identical model. The difference is
due to the fact that each experiment’s random number matrix (aka, the error matrix) will be
different.

The sampling variation in a Monte Carlo study is measured by the standard error estimate, Sx:

Sx = sqrt[ var(x) / N ]

where var(x) is the variance of the estimates of the quantity of interest over the N replications.

The key relationship here is the embedded 1/sqrt(N), such that to reduce the Monte Carlo
standard error by a factor of 10, the number of replications must be increased by a factor
of 100. In general, to achieve acceptable accuracy, the number of replications may have to be
set at a very high level; e.g., 10,000 replications.

An alternative way to reduce Monte Carlo sampling error is to use a variance reduction
technique. Two of the intuitively simplest and most widely used variance reduction methods are:
 Antithetic variate technique: This involves taking the complement of a set of random
numbers and running a parallel simulation on those so that the covariance is negative,
and therefore the Monte Carlo sampling error is reduced.
 Control variate technique: This involves employing a variable similar to that
used in the simulation, but whose properties are known prior to the simulation. The
effects of sampling error for the problem under study and the known problem will be
similar, and hence can be reduced by calibrating the Monte Carlo results using the
analytic ones.


Explain the use of antithetic and control variates in reducing Monte Carlo sampling error.

The antithetic variate technique takes the complement of a random number vector and
runs a parallel simulation on it.4 For example, if the error vector is a set of T ~N(0,1) draws,
denoted u(t) for each replication, an additional error vector given by -u(t) is also used. This will
reduce the MCS standard error. For example:5
 Imagine that the average value of the parameter of interest across two sets of MCS
replications is given by:

x̄ = (x1 + x2)/2

where x1 and x2 are the average parameter values for replication sets 1 and 2,
respectively.
 The variance of x̄ will be given by:

var(x̄) = (1/4)·[var(x1) + var(x2) + 2·cov(x1, x2)]

 If no antithetic variates are used, the two sets of MCS replications will be independent,
such that their covariance will be zero, i.e.:

var(x̄) = (1/4)·[var(x1) + var(x2)]

 However, if we employ antithetic variates, then the covariance will be negative and
the MCS sampling error will be reduced. It might seem that the reduction in MCS
sampling variation from using antithetic variates will be huge since, by definition:

corr[u(t), −u(t)] = cov[u(t), −u(t)] = −1.

But the relevant covariance is between the simulated quantities of interest; i.e., between
the standard replications and the antithetic variate replications. Most pricing applications
(e.g., option prices) involve a non-linear transformation of u(t) such that covariances
between the terminal prices of the underlying assets based on the draws and based on
the antithetic variates will be negative, but not perfectly negative.
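A small experiment makes the point concrete (a sketch; the one-step lognormal payoff and its parameters are hypothetical, chosen only to illustrate the technique):

```python
import numpy as np

rng = np.random.default_rng(1)

# Quantity of interest: E[max(S_T - K, 0)] under a one-step lognormal model
S0, K, sigma = 100.0, 100.0, 0.20

def payoff(z):
    return np.maximum(S0 * np.exp(-0.5 * sigma**2 + sigma * z) - K, 0.0)

n = 10_000
u = rng.standard_normal(n)

# Plain MCS: n independent draws
plain = payoff(u)

# Antithetic MCS: n/2 draws plus their complements -u, averaged pairwise.
# Because the payoff is monotone in z, cov[g(u), g(-u)] is negative (though
# not -1), so the pair averages vary less than independent draws.
half = u[: n // 2]
anti = 0.5 * (payoff(half) + payoff(-half))

se_plain = plain.std(ddof=1) / np.sqrt(plain.size)
se_anti = anti.std(ddof=1) / np.sqrt(anti.size)
```

With the same total number of payoff evaluations, the antithetic standard error comes out below the plain one.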

Other similar variance reduction techniques are available, including low discrepancy
sequencing, stratified sampling, and moment-matching. Low discrepancy sequencing is a quasi-
random sequence that involves the selection of a specific sequence of representative samples
from a given probability distribution. Random samples are selected to fill the unselected gaps
left by the probability distribution. In this way, the result is a set of random draws, but they are
deliberately distributed across all of the outcomes of interest. This approach creates MCS
standard errors that are reduced in direct proportion to the number of replications rather
than in proportion to the square root of the number of replications; to reduce the MCS standard
error by a factor of 10, we only need to increase the number of replications by a factor of 10,
rather than the typical 10^2 = 100.

4 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).
5 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).


Control variate technique

The control variate technique employs a variable whose properties are known prior to the
simulation. Suppose we want to simulate the variable denoted by (x) but we already know the
properties of the variable denoted by (y). The MCS is conducted on both (x) and (y) with the
same error vector; i.e., the same vector of random numbers. Denoting the simulation estimates
of x and y by x̂ and ŷ, respectively, a new estimate of x can be derived from:

x* = y + (x̂ − ŷ) (Brooks 13.5)6

The MCS sampling error of this quantity, x*, will be lower than that of x̂ under a certain condition:
the correlation between x̂ and ŷ must be sufficiently high. For this reason, as GARP explains, "a
good control variate should have two properties:
1. First, it should be [cheap] to construct from x(i). If the control variate is slow to compute,
then larger variance reductions—holding computational cost fixed— may be achieved by
increasing the number of simulations (b) rather than constructing the control variates;
2. Second, a control variate should have a high correlation with g(x). The optimal
combination parameter β that minimizes the approximation error is estimated using the
regression g[x(i)] = α + β*h[x(i)] + v(i)” 7

In this way, control variates reduce the MCS variation by using a single error vector (random
draws) on a related function whose solution is already known. Under the hypothesis that the
sampling error effects for both (i.e., the study function and the already known function) will be
similar, the sampling error can be reduced on the study function.

According to the second property above, control variates reduce the MCS sampling error only
if the control and simulation functions are correlated. If this correlation is too low, the variance
reduction will be ineffective. As Brooks explains, we can take the variance of the above (13.5):

var(x*) = var[y + (x̂ − ŷ)] (Brooks 13.6)

Because (y) is known and without sampling variation, var(y) = 0, so that:

var(x*) = var(x̂) + var(ŷ) − 2·cov(x̂, ŷ) (Brooks 13.7)

To be effective, the MCS sampling variance must be lower with the control variate than without:

var(x̂) + var(ŷ) − 2·cov(x̂, ŷ) < var(x̂) → cov(x̂, ŷ) > var(ŷ)/2

Dividing both sides by [var(x̂)·var(ŷ)]^(1/2) obtains the necessary correlation condition:

corr(x̂, ŷ) > (1/2)·sqrt[var(ŷ)/var(x̂)]
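The calibration in (13.5) can be illustrated with a toy problem where the answer is known (a sketch; we use z itself as the control for estimating E[exp(z)], with the combination parameter fixed at 1 as in Brooks's simple form):

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate E[exp(Z)], Z ~ N(0,1); the true value is exp(0.5).
# Control variate: Z itself, whose mean (y = 0) is known analytically and
# which is highly correlated with exp(Z).
n = 10_000
z = rng.standard_normal(n)

x_hat = np.exp(z).mean()   # plain MCS estimate of the quantity of interest
y_hat = z.mean()           # MCS estimate of the control on the SAME draws
y_true = 0.0               # known property of the control

# Brooks (13.5): calibrate the simulation result by the control's sampling error
x_star = y_true + (x_hat - y_hat)
```

Here cov(x̂, ŷ) comfortably exceeds var(ŷ)/2, so the correlation condition is met and x* has a lower sampling variance than x̂.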

6 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).
7 GARP Chapter 13 (2020).


Describe the bootstrapping method and its advantage over Monte Carlo simulation.
Monte Carlo simulation (MCS) creates artificial data, but bootstrapping uses observed
sample data points to obtain a description of the properties of empirical estimators. Unlike
historical simulation, which also uses the observed historical sample but only uses the one
sample that (actually!) did occur, bootstrapping samples repeatedly with replacement from
the actual data. As Brooks explains,8 in the bootstrap procedure we can:
 Imagine a historical data sample, y = {y1, y2, …, yT}, and our wish to estimate a parameter θ.
We can obtain an approximation to the statistical properties of θ̂ by studying a sample
of bootstrap estimators. This is done by:
 Taking (N) samples of size (T) with replacement from y and re-calculating θ̂ for each
new sample. A vector of estimates, with its own distribution, is thus obtained.
 Instead of imposing a shape on the sampling distribution of the θ̂ value, bootstrapping
empirically estimates the sampling distribution by analyzing the variation of the
statistic within-sample.
 A set of new samples is drawn with replacement from the sample and the test statistic of
interest calculated from each of these. Effectively this is sampling from the sample, i.e.
we treat the sample as a population from which samples can be drawn.
 Call the test statistics calculated from the new samples θ̂*. The samples are likely to be
quite different from each other and from the original θ̂ value, since some observations
may be sampled several times and others not at all. Thus, a distribution of values of θ̂* is
obtained, from which standard errors or other statistics of interest can be
calculated.

The advantage of bootstrapping over analytical results is that we can avoid making
strong, or even any, distributional assumptions, since the distribution is empirical. (Further,
variance reduction techniques are also available).
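The procedure can be sketched in a few lines (the "historical" returns below are simulated for illustration; in practice y would be the observed sample):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a historical sample of T = 250 daily returns
y = rng.normal(0.0005, 0.01, size=250)

# Draw N bootstrap samples of size T WITH replacement from y, re-estimating
# the parameter of interest (here, the mean) on each resample
n_boot = 2_000
theta_star = np.array([
    rng.choice(y, size=y.size, replace=True).mean() for _ in range(n_boot)
])

# The spread of the bootstrap estimates is an empirical standard error,
# obtained without any distributional assumption
boot_se = theta_star.std(ddof=1)
```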

Describe pseudo-random number generation


The simplest class of random numbers are generated from a uniform (0,1) distribution, which
can be either discrete or continuous, but where each outcome is equally likely. In Excel, the
function = RAND() takes no arguments and returns a uniformly distributed random real number
greater than or equal to 0 and less than 1. If we want to generate a continuous (i.e., real)
random number between (a) and (b), we can use =RAND()*(b-a)+a.

8 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).


But computer-generated random number draws are accurately described as pseudo-random
numbers because they are not random at all. Pseudo-random numbers only appear to be
random, but in truth are determined by a complex but deterministic function. As GARP explains:

“Pseudo-random numbers are generated by complex but deterministic functions that produce
values that are difficult to predict, and so appear to be random. The functions that produce
pseudo-random values are known as pseudo-random number generators (PRNGs) and are
initialized with what is called a seed value. It is important to note that each distinct seed value
produces an identical set of random values every time the PRNG is run.

The reproducibility of outputs from PRNGs has two important applications.


1. It allows results to be replicated across multiple experiments, because the same
sequence of random values can always be generated by using the same seed value.
This feature is important when exploring alternative models or estimators, because it
allows the alternatives to be estimated on the same simulated data. It also allows
simulation-based results to be reproduced later, which is important for regulatory
compliance.
2. Setting the seed allows the same set of random numbers to be generated on multiple
computers. This feature is widely used when examining large portfolios containing many
financial instruments (i.e., thousands or more) that are all driven by a common set of
fundamental factors. Distributed simulations that assess the risk in the portfolio must use
the same simulated values of the factors when studying the joint behavior—especially
the probability of an adverse outcome—of the instruments held in the portfolio.
Initializing the PRNGs with a common seed in a cluster environment ensures that the
realizations of the common factors are identical.” 9
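Both applications rest on the same mechanic: an identical seed reproduces an identical sequence. A minimal demonstration:

```python
import numpy as np

# Two PRNGs initialized with the same seed produce identical sequences,
# which keeps distributed simulations of common factors in sync
g1 = np.random.default_rng(12345)
g2 = np.random.default_rng(12345)
a = g1.standard_normal(5)
b = g2.standard_normal(5)   # element-for-element identical to a

# A different seed yields a different series
c = np.random.default_rng(54321).standard_normal(5)
```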

Describe situations where the bootstrapping method is ineffective.

There are at least two situations where the bootstrap is ineffective, and there are at least two
limitations of bootstrapping. Bootstrapping may be ineffective when:
 There are uninformative outliers in the data: The results of a replication may depend
greatly on whether outliers appear in the data.
 Non-independent data: Bootstrapping implicitly assumes the data are independent of
each other; i.e., autocorrelation, by definition, violates independence. (A solution is
to use a moving block bootstrap because this method samples whole blocks at a time.)
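A moving block bootstrap can be sketched as follows (an illustrative implementation, not taken from the reading; the block length is a tuning choice):

```python
import numpy as np

def moving_block_bootstrap(y, block_len, rng):
    # Resample whole contiguous blocks so that short-range dependence
    # (autocorrelation) within each block is preserved
    T = len(y)
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    blocks = [y[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:T]

rng = np.random.default_rng(4)
y = rng.standard_normal(500)      # stand-in for the observed series
sample = moving_block_bootstrap(y, block_len=20, rng=rng)
```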

The two limitations of bootstrapping are:
 It uses the entire set of observed data. As GARP explains, “When the past and the
present are similar, this is a desirable feature. However, if the current state of the
financial market is different from its normal state, then the bootstrap may not be reliable.”
 If the past (i.e., the historical sample that informs the bootstrap) is structurally different
than the present, the bootstrap cannot produce samples that are currently relevant.

9 GARP Chapter 13 (2020).


Describe the disadvantages of the simulation approach to financial problem solving.

The disadvantages of the simulation approach to financial problem-solving include that it might
be computationally expensive; the results might not be precise; the results are often hard to
replicate; and the results are experiment-specific (including model risk):10
 Results are experiment-specific. This is a version of model risk and this is, in our
opinion (David’s) the most significant issue. As Brooks says, “the need to specify the
data generating process using a single set of equations or a single equation implies that
the results could apply to only that exact type of data. Any conclusions reached may
or may not hold for other data generating processes.”
 Might be computationally expensive: The number of replications required could be
very large, especially if the goal is to cover the entire probability space.
 Results may be imprecise: If unrealistic assumptions are made in the data generating
process (DGP), the simulation may not give a precise answer to the problem.
 Results may be difficult to replicate. It is important to aim for reproducibility. The key
here is to set a seed parameter so that subsequent experiments can utilize an
identical random number vector, if desired.

According to GARP, the two big disadvantages (aka, limitations) of simulations are the first two
of the four above: model risk and potential computational cost. With respect to model risk (i.e.,
simulations are experiment-specific), GARP says (emphasis
ours), “Monte Carlo simulation is a straightforward method to approximate moments or to
understand estimators’ behavior. The biggest challenge when using simulation to
approximate moments is the specification of the DGP. If the DGP does not adequately
describe the observed data, then the approximation of the moment may be unreliable. The
misspecification in the DGP can occur due to many factors, including the choice of distributions,
the specification of the dynamics used to generate the sample, or the use of imprecise
parameter estimates to simulate the data.”

10 Brooks, Chris, Introductory Econometrics for Finance (Cambridge University Press; 3rd Edition 2014).


Practice Questions & Answers


600.1. Although simulation methods might be employed in each of the following situations (or
"use cases"), which situation below LEAST requires the use of a simulation method?
a) Estimating the value of an exotic option when an analytical pricing formula is unavailable
b) Determining the effect on financial markets of substantial changes in the macroeconomic
environment
c) Calibrating the size of a Treasury bond trade in order to hedge the duration risk of a
corporate bond portfolio
d) Stress-testing a risk management model to determine whether it generates capital
requirements sufficient to cover losses in all situations

601.1. Betty is an analyst using Monte Carlo simulation to price an exotic option. Her simulation
consists of 10,000 replications where the key random variable is a random standard normal
because the underlying process is geometric Brownian motion (GBM). For example, in Excel a
random standard normal value is achieved with an inverse transformation of a random uniform
variable by way of the nested function NORM.S.INV(RAND()). In this case, each random
standard normal, z(i) = N(0,1), is the random draw that becomes an input into the option price
function. Her simulation succeeds in producing an estimate for the option's price, but Betty is
concerned the confidence interval around her estimate is too large. If her aim is to reduce the
standard error, which of the following approaches is NEAREST to the antithetic variate
technique?
a) She simulates 5,000 pairs of random z(i) and -z(i) such that each pair has perfectly
negative covariance
b) She quadruples the number of replications which will reduce the standard error by 50%
because the sqrt(four) is equal to two
c) She imposes a condition of i.i.d. (independence and identically distributed) on the series
of z(i) which eliminates the covariance term
d) She introduces low-discrepancy sequencing, which leads the Monte Carlo standard errors
to be reduced in direct proportion to the number of replications rather than in proportion
to the square root of the number of replications

602.3. Peter used a simple Monte Carlo simulation to estimate the price of an Asian option. In
his first step, he specified a geometric Brownian motion (GBM) which is the same process used
in the Black-Scholes-Merton model. His boss Sally observes, "This is nice work Peter, but the
drawback to this approach is that you've assumed underlying returns are normally distributed.
Yet we know that returns are fat-tailed in practice." How can Peter overcome this objection and
include a fat-tailed assumption in his model?
a) He could assume the errors follow a GARCH process
b) He could assume the errors are drawn from a fat-tailed distribution; e.g., student's t
c) He could either assume errors follow a GARCH process or that errors are drawn
from a fat-tailed distribution
d) Monte Carlo simulation cannot overcome this objection; this is a disadvantage of Monte
Carlo simulation in comparison to bootstrapping


Answers:

600.1. C. False: Hedging duration (as a linear approximation) is a simple analytic,
especially compared to the other choices. In regard to true (A), (B), and (D), Brooks writes:
"Examples from econometrics of where simulation may be useful include:
 Quantifying the simultaneous equations bias induced by treating an endogenous variable
as exogenous
 Determining the appropriate critical values for a Dickey– Fuller test
 Determining what effect heteroscedasticity has upon the size and power of a test for
autocorrelation.
Simulations are also often extremely useful tools in finance, in situations such as:
 The pricing of exotic options, where an analytical pricing formula is unavailable
 Determining the effect on financial markets of substantial changes in the macroeconomic
environment
 ‘Stress-testing’ risk management models to determine whether they generate capital
requirements sufficient."

Discuss here in the forum: https://www.bionicturtle.com/forum/threads/p1-t2-600-monte-carlo-simulation-sampling-error-brooks.9228/

601.1. A. TRUE: She simulates 5,000 pairs of random z(i) and -z(i) such that each pair has
perfectly negative covariance

Brooks: "The antithetic variate technique involves taking the complement of a set of random
numbers and running a parallel simulation on those. For example, if the driving stochastic force
is a set of T N(0,1) draws, denoted u(t), for each replication, an additional replication with errors
given by -u(t) is also used. It can be shown that the Monte Carlo standard error is reduced when
antithetic variates are used."

Discuss here in the forum: https://www.bionicturtle.com/forum/threads/p1-t2-601-variance-reduction-techniques-brooks.9236/

602.3. C. He could either assume errors follow a GARCH process or that errors are
drawn from a fat-tailed distribution

Brooks: "13.8.1 Simulating the price of a financial option using a fat-tailed underlying process. A
fairly limiting and unrealistic assumption in the above methodology for pricing options is that the
underlying asset returns are normally distributed, whereas in practice, it is well known that asset
returns are fat-tailed. There are several ways to remove this assumption. First, one could
employ draws from a fat-tailed distribution, such as a Student's t, in step 2 above. Another
method, which would generate a distribution of returns with fat tails, would be to assume that
the errors and therefore the returns follow a GARCH process. To generate draws from a
GARCH process, do the steps shown in box 13.6."

Discuss here in the forum: https://www.bionicturtle.com/forum/threads/p1-t2-602-bootstrapping-brooks.9244/
