QRM: Time Series Risk Management
where the φ_i's and θ_j's are parameters to be estimated. Note that a_t is the innovation or random component of the log-return and, in practice, the conditional variance of this innovation, σ_t², is time-varying and stochastic. GARCH models may be used to model this dynamic behavior of conditional variances. In particular, we say σ_t² follows a GARCH(p, q) model¹ if
a_t = σ_t ε_t    (4)

σ_t² = α_0 + Σ_{i=1}^{p} α_i a_{t−i}² + Σ_{j=1}^{q} β_j σ_{t−j}²    (5)
where α_0 > 0, α_i ≥ 0 for i = 1, . . . , p, β_j ≥ 0 for j = 1, . . . , q, and where the ε_t's are IID random variables with mean zero and variance one. It should be clear from (4) and (5) that the volatility clustering effect we observe in the marketplace is therefore captured by the GARCH(p, q) model. Fat tails are also captured by this model in the sense that the log-returns in (1) have heavier tails (when a_t satisfies (4) and (5)) than the corresponding normal distribution. GARCH models are typically estimated jointly with the conditional mean equation in (3) using maximum likelihood techniques. The fitted model can then be checked for goodness-of-fit using standard diagnostic methods. See, for example, Ruppert (2011) or Tsay (2010).
¹ See Chapter 18 of Statistics and Data Analysis for Financial Engineering (2011) by Ruppert or Chapter 3 of Analysis of Financial Time Series (2010) by Tsay for a discussion of volatility modeling in general and additional details on the GARCH(p, q) model. Our model description here follows Tsay. The acronym “GARCH” stands for generalized autoregressive conditional heteroscedastic.
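The two stylized facts just mentioned, volatility clustering and fat tails, are easy to see in simulation. The following sketch (in Python with numpy, purely for illustration; the parameter values are arbitrary choices satisfying the constraints above) simulates a GARCH(1, 1) process with normal ε_t's and checks that the a_t's nonetheless exhibit positive excess kurtosis:

```python
import numpy as np

def simulate_garch11(alpha0, alpha1, beta1, n, seed=0):
    """Simulate a_t = sigma_t * eps_t with
    sigma_t^2 = alpha0 + alpha1 * a_{t-1}^2 + beta1 * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)          # IID N(0,1) innovations
    a = np.empty(n)
    sig2 = np.empty(n)
    sig2[0] = alpha0 / (1 - alpha1 - beta1)   # start at the unconditional variance
    a[0] = np.sqrt(sig2[0]) * eps[0]
    for t in range(1, n):
        sig2[t] = alpha0 + alpha1 * a[t - 1] ** 2 + beta1 * sig2[t - 1]
        a[t] = np.sqrt(sig2[t]) * eps[t]
    return a, sig2

a, sig2 = simulate_garch11(0.1, 0.1, 0.8, 100_000)
# Positive excess kurtosis: the a_t's are heavier-tailed than the normal eps_t's,
# even though each a_t is conditionally normal given sigma_t.
kurt = np.mean(a ** 4) / np.mean(a ** 2) ** 2 - 3
```

Plotting sig2 against time for such a simulation would also display the characteristic bursts of high and low volatility.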
Risk Management and Time Series 2
Exercise 1 Under what condition(s) is the GARCH(p, q) model stationary? Give an expression for the unconditional variance, θ, when these condition(s) are satisfied.
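As a numerical companion to this exercise, the standard result is that covariance-stationarity requires Σα_i + Σβ_j < 1, in which case the unconditional variance is θ = α_0 / (1 − Σα_i − Σβ_j). A minimal Python sketch of this formula (the parameter values are illustrative):

```python
# Unconditional variance of a covariance-stationary GARCH(p, q):
#     theta = alpha0 / (1 - sum(alpha_i) - sum(beta_j)),
# valid only when the persistence sum(alpha_i) + sum(beta_j) is < 1.
def unconditional_variance(alpha0, alphas, betas):
    persistence = sum(alphas) + sum(betas)
    if persistence >= 1:
        raise ValueError("model is not covariance-stationary")
    return alpha0 / (1 - persistence)

# GARCH(1,1) with alpha0 = 0.05, alpha1 = 0.1, beta1 = 0.85:
theta = unconditional_variance(0.05, [0.1], [0.85])   # 0.05 / (1 - 0.95) = 1.0
```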
Exercise 2 Assuming α_1 + β_1 < 1, show that we may write

σ_{t+1}² = γθ + (1 − γ)[(1 − λ)a_t² + λσ_t²]    (6)

for some constants γ and λ. What are the values of γ and λ?
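One consistent reading of this rewriting (a sketch, not a substitute for the algebra the exercise asks for): with the candidate values γ = 1 − α_1 − β_1 and λ = β_1/(α_1 + β_1), the rewritten recursion agrees with the standard GARCH(1, 1) recursion σ_{t+1}² = α_0 + α_1 a_t² + β_1 σ_t². This can be checked numerically:

```python
import numpy as np

alpha0, alpha1, beta1 = 0.05, 0.1, 0.85
theta = alpha0 / (1 - alpha1 - beta1)     # unconditional variance
gamma = 1 - alpha1 - beta1                # candidate value of gamma
lam = beta1 / (alpha1 + beta1)            # candidate value of lambda

rng = np.random.default_rng(1)
for _ in range(5):
    a2, sig2 = rng.uniform(0.1, 2.0, size=2)   # arbitrary a_t^2 and sigma_t^2
    standard = alpha0 + alpha1 * a2 + beta1 * sig2
    rewritten = gamma * theta + (1 - gamma) * ((1 - lam) * a2 + lam * sig2)
    assert abs(standard - rewritten) < 1e-12
```

The rewriting is interesting because it expresses next period's conditional variance as a mixture of the long-run variance θ and an exponentially-smoothed combination of the most recent squared innovation and conditional variance.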
library(fGarch)
data(bmw, package = "evir")    # daily BMW log-returns
# Fit an ARMA(1,0) conditional mean with a GARCH(1,1) conditional variance,
# assuming normally distributed innovations.
bmw.garch_norm = garchFit(~ arma(1,0) + garch(1,1), data = bmw, cond.dist = "norm")
options(digits = 3)
summary(bmw.garch_norm)
In this code fragment an ARMA(1, 0) / GARCH(1, 1) model was fit to BMW return data. Of course it's important to check how well the fitted model actually fits the data by performing various diagnostic tests. In this example Ruppert ultimately settled on an ARMA(1, 1) / GARCH(1, 1) model where the ε_t's had a t-distribution rather than the normal distribution used in the code fragment above. Typing “?garchFit” at the R prompt will provide further details on the garchFit function.
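Under the hood, garchFit maximizes a (quasi-)likelihood of the kind described earlier. To make the mechanics concrete, here is a Python sketch of the Gaussian negative log-likelihood for a GARCH(1, 1) applied to a series of innovations a_t (the initialization of σ_0² with the sample variance is one common convention, not the only one):

```python
import numpy as np

def garch11_neg_loglik(params, a):
    """Gaussian (quasi-)negative log-likelihood for GARCH(1,1) innovations
    a_t = sigma_t * eps_t, eps_t ~ N(0,1)."""
    alpha0, alpha1, beta1 = params
    if alpha0 <= 0 or alpha1 < 0 or beta1 < 0 or alpha1 + beta1 >= 1:
        return np.inf                    # outside the admissible parameter region
    n = len(a)
    sig2 = np.empty(n)
    sig2[0] = np.var(a)                  # a common initialization choice
    for t in range(1, n):
        sig2[t] = alpha0 + alpha1 * a[t - 1] ** 2 + beta1 * sig2[t - 1]
    # -log-likelihood of a_t | sigma_t^2 under conditional normality
    return 0.5 * np.sum(np.log(2 * np.pi * sig2) + a ** 2 / sig2)
```

In practice this objective would be handed to a numerical optimizer (e.g. scipy.optimize.minimize), jointly with the ARMA mean parameters; garchFit performs this optimization internally.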
The distribution of ε_{t+1} will have been estimated when the time series of portfolio returns was fitted. Note that (11) and (12) are estimates of loss measures based on the conditional loss distribution. We therefore expect them to be considerably more accurate than estimates based on the unconditional loss distribution, particularly over short horizons.
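Equations (11) and (12) are not reproduced in this excerpt, but conditional VaR and ES estimates of this kind typically take the location-scale form μ̂_{t+1} plus σ̂_{t+1} times a quantile (or tail expectation) of the fitted ε distribution. A minimal Python sketch for the Gaussian case, assuming losses are the negatives of returns (sign conventions vary across texts):

```python
from statistics import NormalDist

def conditional_var_es(mu_next, sigma_next, alpha=0.99):
    """One-step conditional VaR and ES for the loss L_{t+1} = -(mu + sigma * eps),
    with eps ~ N(0, 1). For the normal distribution:
        VaR_alpha = -mu + sigma * z_alpha
        ES_alpha  = -mu + sigma * phi(z_alpha) / (1 - alpha)
    where z_alpha is the alpha-quantile and phi the standard normal density."""
    nd = NormalDist()
    z = nd.inv_cdf(alpha)
    var = -mu_next + sigma_next * z
    es = -mu_next + sigma_next * nd.pdf(z) / (1 - alpha)
    return var, es

# e.g. with mu_hat = 0 and sigma_hat = 1, VaR_.99 is about 2.33 and ES_.99 about 2.67
```

The same template applies with t-distributed innovations, replacing the normal quantile and tail expectation with their (scaled) t counterparts.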
Figure 1 displays the daily price level and daily returns of the S&P 500 from January 2006 to July 2011. Figure 2 then displays the daily VaR_.99 violations for the S&P 500 over this period, where the daily VaR was estimated using one of four possible methods: (1) historical Monte-Carlo, (2) a normal approximation, (3) a t_6 approximation and (4) a GARCH(1, 1) model. Each method used a rolling window of one year's worth of daily returns to estimate the VaR. Note that only the GARCH model attempts to estimate the conditional loss distribution. Over this time period the percentages of VaR violations for the four methods were 1.86%, 2.48%, 1.95% and 1.06%, respectively. Clearly the GARCH model performs best according to this metric. Equally importantly, we see that the GARCH violations are not clustered and appear much closer to an IID sequence of Bernoulli(p = .01) trials than the VaR violations of the unconditional methods. (Standard statistical tests could be used to test this observation more formally.) This is particularly noticeable during the height of the global financial crisis in the latter half of 2009. The unconditional methods were clearly underestimating VaR in this extremely volatile period.
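One such standard test is a simple binomial (unconditional coverage) test: under a correctly specified VaR_.99 model, violations should be IID Bernoulli(p = .01), so the observed violation count can be compared to its binomial distribution. A Python sketch using a normal approximation to the binomial (the sample size of 1,400 days is an assumed round number for the Jan 2006 to Jul 2011 period, for illustration only):

```python
from math import sqrt
from statistics import NormalDist

def violation_ztest(n_obs, n_violations, p=0.01):
    """Two-sided z-test of H0: violation probability equals p.
    Assumes violations are IID; tests only the rate, not the clustering."""
    phat = n_violations / n_obs
    se = sqrt(p * (1 - p) / n_obs)       # std. error of phat under H0
    z = (phat - p) / se
    pval = 2 * (1 - NormalDist().cdf(abs(z)))
    return phat, z, pval

# With roughly 1,400 trading days (an assumed count), a 2.5% violation rate
# is strongly rejected while a rate near 1.07% is not.
```

Note that this test only addresses the violation rate; the clustering of violations (the IID part) would be examined separately, e.g. with a test of independence of the violation indicator sequence.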
Figure 2: VaR_.99 violations for the S&P 500. The first three methods (Historical MC, Normal Approx and t Approx) all estimate the VaR using a rolling window of one year's worth of daily closing prices. They are therefore largely based on approximating the unconditional loss distribution. In the case of the t approximation we simply set the degrees of freedom equal to 6. The fourth method (GARCH(1, 1)) is based on approximating the conditional loss distribution, with the VaR estimated via (11) with μ̂_{t+1} simply taken to be the mean daily return over the previous year.
where B is an n × k matrix of factor loadings, F_{t+1} is a k × 1 vector of factor returns and ε_{t+1} is an n × 1 random vector of idiosyncratic error terms which are uncorrelated and have mean zero. We also assume k < n, that F_{t+1} has a positive-definite covariance matrix and that each component of F_{t+1} is uncorrelated with each component of ε_{t+1}. All of these statements are conditional upon the information available at time t. Equation (13) is then a factor model for X.
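These assumptions pin down the covariance matrix implied by the factor model: Cov(X) = B Σ_F Bᵀ + D, where Σ_F is the factor covariance matrix and D is the diagonal matrix of idiosyncratic variances. A minimal Python sketch (the loadings and covariances below are illustrative numbers, not calibrated values):

```python
import numpy as np

# Under the factor model X = a + B F + eps of (13), with Cov(F) = Sigma_F
# positive definite and Cov(eps) = D diagonal (uncorrelated idiosyncratic terms),
# the implied covariance of X is  B @ Sigma_F @ B.T + D.
def factor_covariance(B, Sigma_F, idio_var):
    B = np.asarray(B)
    return B @ np.asarray(Sigma_F) @ B.T + np.diag(idio_var)

B = np.array([[1.0, 0.2],
              [0.8, -0.1],
              [0.5, 0.7]])                       # n = 3 assets, k = 2 factors
Sigma_F = np.array([[0.04, 0.01],
                    [0.01, 0.02]])               # positive-definite factor covariance
cov_X = factor_covariance(B, Sigma_F, [0.01, 0.02, 0.015])
```

The dimension reduction is the point: the n × n covariance of X is driven by the k × k matrix Σ_F plus n idiosyncratic variances.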
Given time series data on X_t we could simply use the factor model to compute a univariate time series of portfolio losses and then estimate risk measures as described in Section 2.1. An alternative approach, however, would be to fit separate time series models to each factor, i.e. each component of F_{t+1}. We could still use the fitted time series models to estimate conditional loss measures as in (11) and (12), but we could also use the factor model to perform a scenario analysis. In particular, we could use the fitted time series models to provide guidance on the range of plausible factor stresses.
For example, we could use a principal components analysis to construct a factor model and then fit a separate GARCH model to the time series of each principal component. The estimated conditional variance of each principal component could then be used to determine the range of factor stresses. This is in contrast to the method of using the eigenvalues to determine the range of factor stresses. Since the eigenvalues are estimates of the unconditional factor variances, we would generally prefer the GARCH approach for constructing plausible scenarios.
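A Python sketch of the first step, extracting principal-component factor time series whose (unconditional) sample variances are exactly the eigenvalues of the sample covariance matrix; a GARCH model would then be fitted to each column of the returned factors:

```python
import numpy as np

def pca_factors(returns, k):
    """Principal-component factors from a (T x n) matrix of returns.
    The eigenvalues of the sample covariance matrix equal the sample
    variances of the corresponding principal-component time series."""
    X = returns - returns.mean(axis=0)           # de-mean each column
    cov = np.cov(X, rowvar=False)                # n x n sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]            # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    factors = X @ eigvecs[:, :k]                 # factor time series, one per column
    return factors, eigvals[:k]
```

The eigenvalues returned here are precisely the unconditional factor variance estimates criticized above; fitting a GARCH(1, 1) to each factor column replaces them with conditional variance estimates for scenario construction.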
where the superscript GPD in (14) and (15) is used to emphasize that the ε_t's (or their tails) follow a GPD distribution. Note that there appears to be an inconsistency here in that the original time series model was fitted using one set of assumptions for the ε_t's, i.e. that they are normally or t-distributed, whereas a different assumption is used in (14) and (15), i.e. that they have a GPD distribution. This does not present a problem due to the theory of quasi-maximum likelihood estimation (QMLE), which effectively states that μ̂_{t+1} and σ̂_{t+1} are still consistent estimators of μ_{t+1} and σ_{t+1} even though the distributions of the ε_t's were misspecified.
Section 7.2.6 (which refers to results in Section 2.3.6) of Quantitative Risk Management by McNeil, Frey and Embrechts describes several numerical experiments that compare different methods for estimating VaR or ES. They conclude that methods based on GARCH-EVT, i.e. estimators such as (14) and (15), are most accurate.
Note that we could also have used the Hill estimator as an alternative to the GPD distribution when estimating VaR_α^t and ES_α^t. Section 19.6 of Ruppert provides some examples and should be consulted for further details.
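For completeness, the Hill estimator of the tail index is based on the k largest order statistics of the (loss) sample. A minimal Python sketch (the choice of k is a well-known practical difficulty and is simply taken as given here):

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimator of the tail index alpha based on the k largest observations:
        alpha_hat = 1 / [ (1/k) * sum_{i=1}^{k} ( ln X_(i) - ln X_(k+1) ) ]
    where X_(1) >= X_(2) >= ... are the sample order statistics in descending order.
    Assumes the data are positive (e.g. losses in the right tail)."""
    x = np.sort(np.asarray(losses, dtype=float))[::-1]   # descending order
    logs = np.log(x[:k]) - np.log(x[k])                  # log-spacings over the threshold
    return 1.0 / logs.mean()
```

For exact Pareto data with tail index α the estimator is consistent as k grows (with k/n → 0), which provides a simple way to sanity-check an implementation on simulated data.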