Risk Measures in Quantitative Finance

by Sovan Mitra
arXiv:0904.0870v1 [q-fin.RM] 6 Apr 2009
Abstract
This paper was presented and written for two seminars: a national UK University
Risk Conference and a Risk Management industry workshop. The target audience is
therefore a cross section of Academics and industry professionals.
The current ongoing global credit crunch has highlighted the importance of risk measurement in Finance to companies and regulators alike. Despite risk measurement's central importance to risk management, few papers exist that review risk measures or follow their evolution from their earliest beginnings to the present-day risk measures.
This paper reviews the most important portfolio risk measures in Financial Mathematics, from Bernoulli (1738) to Markowitz's Portfolio Theory, to the presently preferred risk measures such as CVaR (Conditional Value at Risk). We provide a chronological review of the risk measures and survey less commonly known risk measures, e.g. the Treynor ratio.
Investors constantly face a trade-off between potential return and risk. However, the events of the current ongoing global credit crisis and past financial crises (see for instance [Sto99a] and [Mit06]) have demonstrated the necessity for adequate risk measurement. Poor risk measurement can result in bankruptcies and threaten the collapse of an entire finance sector [KH05].
Risk measurement is vital to trading in the multi-trillion dollar derivatives industry [Sto99b], and insufficient risk analysis can misprice derivatives [GF99].

¹ "Risk comes from not knowing what you're doing", Warren Buffett.
The outline of the paper is as follows. The paper starts with the first risk measures proposed; unknown to many Financial Mathematicians, these arose prior to Markowitz's risk measure. We then introduce Markowitz's Portfolio Theory, which provided the first formal model of risk measurement and diversification. After Markowitz we discuss related risk measures, in particular the CAPM (Capital Asset Pricing Model) and other related measures, e.g. the Treynor ratio.
We then discuss the next most significant introduction to risk measurement, Value at Risk (VaR), and explain its benefits. This is followed by a discussion of the first axioms of risk theory, the coherency axioms of Artzner et al. [ADEH97]. Coherent risk measures and copulas are discussed, and finally we mention the future directions of risk measurement.
A risk measure ρ is a mapping from a set G of random losses to the real numbers:

ρ : G −→ R. (1)
We note that some Academics distinguish between risk and uncertainty as first defined
by Knight [Kni21]. Knight defines risk as randomness with known probabilities (e.g.
probability of throwing a 6 on a die) whereas uncertainty is randomness with unknown
probabilities (e.g. probability of rainy weather). However, in Financial risk literature
this distinction is rarely made.
A particularly important aspect of risk and risk measurement is portfolio diversification. Diversification is the concept that one can reduce total risk, without sacrificing possible returns, by investing in more than one asset. This is possible because not all risks affect all assets; for example, a new Government regulation on phone call charges would affect the telecoms sector, and perhaps a few other sectors deriving significant revenues or costs from phone calls, but not every sector or company. Thus by investing in more than one asset, one is less exposed to such "asset specific" risks. Note that there also exist risks that cannot be mitigated by diversification; for example, a rise in interest rates would affect all businesses, as they all save or spend money. We can conceptually categorise all risks into diversifiable and non-diversifiable risk (the latter also known as "systematic risk" or "market risk").
Contrary to popular opinion, risk measurement and diversification had been investigated prior to Markowitz's Portfolio Theory (MPT). Bernoulli in 1738 [Ber54] discusses the famous St. Petersburg paradox and argues that risky decisions can be assessed on the basis of expected utility. Rubinstein [Rub02] states that Bernoulli recognised diversification: that investing in a portfolio of assets reduces risk without decreasing return.
Prior to Markowitz, a number of Economists had used variance as a measure of portfolio risk. For example, Irving Fisher [Fis06] in 1906 suggested measuring Economic risk by variance, and Tobin [Tob58] related investment portfolio risks to the variance of returns.
Before significant contributions were made in Financial Mathematics to risk measurement theory, risk measurement was primarily a securities analysis based topic. Furthermore, securities analysis was itself still in its infancy in the first half of the twentieth century. Benjamin Graham, widely considered the father of modern securities analysis, proposed the idea of a margin of safety as a measure of risk in [Gra03]. Graham also recommended portfolio diversification to reduce risks. Graham's methodology of investing (widely known as value investing) has not been pursued by the Financial Mathematics community, partly because of its reliance on securities analysis rather than mathematical analysis. Exponents of the value based investment methodology include the renowned investors Jeremy Grantham, Warren Buffett [Hag05] and Walter Schloss.
Markowitz proposed that a portfolio's risk is equal to the variance of the portfolio's returns. If we define the weighted expected return of a portfolio µp as

µp = Σ_{i=1}^{N} wi µi , (2)

then the portfolio variance σp² is given by

σp² = Σ_{i=1}^{N} Σ_{j=1}^{N} σij wi wj , (3)

where wi is the weight of asset i, µi is its expected return, σij is the covariance between the returns of assets i and j, and

0 ≤ wi ≤ 1,   Σ_{i=1}^{N} wi = 1.
Markowitz's portfolio theory was the first to explicitly account for portfolio diversification through the correlation (or covariance) between assets. From equation 3 one observes that σp² decreases as σij decreases, without necessarily reducing µp. The MPT also introduced the idea of optimising portfolio selection by selecting assets lying on an efficient frontier. The efficient frontier is found by minimising risk (σp²) by adjusting wi subject to the constraint that µp is fixed; hence such portfolios provide the best µp for minimal risk [Mar91a]. Additionally, it can be shown that the efficient frontier follows a concave relation between µp and σp². This reflects the idea of expected utility increasing concavely with risk. Most portfolio managers apply an MPT framework to optimise portfolio selection [Rub02].
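As a concrete illustration, equations (2) and (3) reduce to two lines of linear algebra. A minimal sketch in Python; the expected returns, covariance matrix and weights below are illustrative values, not taken from the paper:

```python
import numpy as np

# Hypothetical expected returns and covariance matrix for three assets.
mu = np.array([0.08, 0.12, 0.05])
sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.001],
                  [0.002, 0.001, 0.010]])

def portfolio_stats(w, mu, sigma):
    """Equations (2) and (3): weighted expected return and portfolio variance."""
    mu_p = w @ mu           # sum_i w_i mu_i
    var_p = w @ sigma @ w   # sum_i sum_j sigma_ij w_i w_j
    return mu_p, var_p

w = np.array([0.4, 0.2, 0.4])   # weights in [0, 1], summing to 1
mu_p, var_p = portfolio_stats(w, mu, sigma)
```

Because the off-diagonal covariances here are small, var_p comes out well below the variance of any single risky asset held alone, which is the diversification effect in equation (3).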
Based on MPT portfolio risk measurement, Sharpe [Sha66] invented the Sharpe Ratio S:

S = (µp − Rf) / σp ,  (4)

where Rf is the risk-free rate of return. Sharpe's ratio can be interpreted as the excess return above the risk-free rate per unit of risk, where risk is measured by MPT. The Sharpe ratio provides a portfolio risk measure in terms of determining the quality of the portfolio's return at a given level of risk. It is worth noting the Sharpe ratio's similarity to the t-statistic. A discussion of the Sharpe ratio can be found at Sharpe's website: www.stanford.edu/∼wfsharpe/.
A variant on the Sharpe ratio is the Sortino Ratio [SP94], where we replace the
denominator by the standard deviation of the portfolio returns below µp . This ratio
essentially performs the same measurement as the Sharpe ratio but does not penalise
portfolio performance for returns above µp .
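The two ratios differ only in the denominator. A short sketch; the return series is illustrative, and measuring downside deviation over the below-mean observations is one of several conventions in use:

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0):
    """Equation (4): excess return over the risk-free rate per unit of volatility."""
    return (np.mean(returns) - rf) / np.std(returns)

def sortino_ratio(returns, rf=0.0):
    """Sortino variant: only returns below the portfolio mean count as risk."""
    mu_p = np.mean(returns)
    downside = returns[returns < mu_p] - mu_p
    downside_dev = np.sqrt(np.mean(downside ** 2))   # deviation of below-mean returns
    return (mu_p - rf) / downside_dev

returns = np.array([0.05, -0.02, 0.03, 0.01, -0.01])   # illustrative period returns
s, so = sharpe_ratio(returns), sortino_ratio(returns)
```

For this series the Sortino ratio exceeds the Sharpe ratio, since above-mean returns inflate total volatility but are excluded from downside deviation.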
It is worth mentioning that Roy [Roy52] formulated Markowitz's Portfolio Theory at the same time as Markowitz. As Markowitz says [Rub02]: "On the basis of Markowitz (1952), I am often called the father of modern portfolio theory (MPT), but Roy (1952) can claim an equal share of this honor."
µi = Rf + βi (µm − Rf),  (5)

βi = σim / σm² ,  (6)

where µm is the expected return of the market portfolio, σim is the covariance of asset i's returns with the market's returns, and σm² is the variance of the market's returns.
The βi measures the sensitivity of asset i's returns to the market; a high βi implies asset i's returns increase with the market. In the CAPM model the term (µm − Rf) is the market risk premium, which is the return awarded above the risk-free rate for investing in a risky asset.
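Given return series, βi in equation (6) is estimated directly from sample moments. A minimal sketch on synthetic data; the true factor loading of 1.5 and the noise levels are assumptions of the example:

```python
import numpy as np

def capm_beta(asset_returns, market_returns):
    """Equation (6): covariance with the market divided by the market's variance."""
    cov = np.cov(asset_returns, market_returns)   # 2x2 sample covariance matrix
    return cov[0, 1] / cov[1, 1]

rng = np.random.default_rng(42)
market = rng.normal(0.0, 0.01, size=5_000)                  # synthetic market returns
asset = 1.5 * market + rng.normal(0.0, 0.002, size=5_000)   # true beta 1.5 plus noise
beta = capm_beta(asset, market)                             # estimate recovers roughly 1.5
```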
The CAPM theory postulates that all investors, whatever their risk aversion, would hold the same portfolio. This portfolio would be a mixture of riskless and risky assets, weighted according to each asset's market capitalisation (number of shares outstanding × share price). Thus CAPM theory essentially suggests investors would hold an index tracker fund, and it encouraged the development of index funds. Index funds have been pioneered by investment managers such as John Bogle [Bog93] (e.g. the Vanguard 500 Index Fund).
The CAPM theory gave portfolio fund managers the first "standard" portfolio performance benchmark by measuring against an index's performance. Benchmark examples are given in [Jia03].
Variations on the CAPM model include Jensen's risk measure. Jensen quantifies portfolio returns above those predicted by CAPM with α:

α = µp − [Rf + βp (µm − Rf)].
• finally, previous measures focussed on explaining the return on an asset based on some theoretical model of the risk and return relation, e.g. CAPM. VaR, on the other hand, shifted the focus to measuring and quantifying the risk itself, in terms of losses (rather than expected return).
VaR's purpose is simply to address the question "How much can one expect to lose, with a given cumulative probability ζ, for a given time horizon T?". VaR is therefore defined as the loss level satisfying [Sze05]:

F(VaR) = P(Z(T) ≤ VaR) = ζ,

where
• F(.) is the cumulative probability distribution function of the loss;
• Z(T) is the loss over the horizon T, defined by Z(t) = V(0) − V(t), where V(t) is the portfolio value at time t.
VaR Monte Carlo Simulation
Monte Carlo simulation is a generic method of simulating some random process
(e.g. stochastic differential equation) representing the assets or the portfolio itself.
Consequently after sufficient simulations we can obtain a loss distribution and therefore
extract VaR for different probabilities as was done for historical simulation. One can
improve Monte Carlo simulation through various computational techniques [Gla04]
such as importance sampling, stratified sampling and antithetic sampling.
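A minimal Monte Carlo VaR sketch, assuming (purely for illustration) that the portfolio value follows geometric Brownian motion with the parameters shown:

```python
import numpy as np

def monte_carlo_var(v0, mu, sigma, horizon, zeta=0.99, n_sims=100_000, seed=0):
    """Simulate terminal portfolio values under geometric Brownian motion and
    read VaR off the empirical loss distribution."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_sims)
    vT = v0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    losses = v0 - vT                   # positive value = loss
    return np.quantile(losses, zeta)   # loss not exceeded with probability zeta

# Illustrative: 1m portfolio, 5% drift, 20% volatility, 10-trading-day horizon.
var_99 = monte_carlo_var(v0=1_000_000, mu=0.05, sigma=0.2, horizon=10/252)
```

Repeating the quantile extraction at several ζ values yields the whole VaR curve from one simulated loss sample, just as for historical simulation.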
• the portfolio is linear: the change in the portfolio's price V(t) is linearly dependent on its constituent asset prices Si(t). In other words:

∆V(t) = Σ_{i=1}^{N} ∆Si(t).  (11)

• the constituent assets have a joint Normal return distribution, which implies the portfolio's returns are Normally distributed. It is worth noting that the sum of Normally distributed variables is not strictly always Normal; the specific property of joint Normality, however, guarantees the portfolio's return is Normal. Hence the linear portfolio assumption alone cannot guarantee the portfolio's return is Normal.
These two assumptions enable us to describe the portfolio's loss using a Normal distribution, for which numerous analytical equations and distribution fitting methods exist. VaR calculation and implementation therefore become significantly simpler.
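Under the two assumptions above, the VaR computation collapses to a Normal quantile. A sketch assuming Normally distributed arithmetic returns, with illustrative annualised parameters:

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(mu_p, sigma_p, v0, zeta=0.99, horizon=1.0):
    """Variance-covariance VaR: with jointly Normal returns the portfolio loss
    is Normal, so VaR is a quantile of a fitted Normal distribution."""
    z = NormalDist().inv_cdf(zeta)   # e.g. about 2.33 for zeta = 0.99
    # Volatility scales with sqrt(horizon); the expected drift reduces the loss.
    return v0 * (z * sigma_p * sqrt(horizon) - mu_p * horizon)

var_99 = parametric_var(mu_p=0.05, sigma_p=0.2, v0=1_000_000, horizon=10/252)
```

Note this lands close to the Monte Carlo estimate for comparable parameters, as it should when the simulated process is itself (log-)Normal.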
4. risk is sub-additive: ρ(X+Y) ≤ ρ(X)+ρ(Y).
We will now explain each axiom in turn. The monotonicity axiom tells us that we associate higher risk with higher loss. The homogeneity axiom ensures that we cannot increase or decrease risk by investing differing amounts in the same stock; in other words, the risk arises from the stock itself and is not a function of the quantity purchased².
The translation invariance axiom can be explained by the fact that an investment in a riskless bond bears no loss with probability 1; hence we must always receive the initial amount invested. The initial investment is subtracted because risk measures measure loss as a positive amount, hence a gain is negative.
Subadditivity is the most important axiom because it ensures that a coherent risk measure takes into account portfolio diversification. The axiom states that investing in both portfolio X and portfolio Y results in a lower overall risk than the sum of the risk of investing in portfolio X and the risk of investing in portfolio Y separately. VaR is not a coherent risk measure because it does not obey the subadditivity axiom; consequently it can imply higher risk arising from diversification.
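VaR's failure of subadditivity can be seen with a standard style of two-loan counterexample; the 4% default probability and loss sizes here are illustrative:

```python
import numpy as np

def var(losses, probs, zeta=0.95):
    """Smallest loss level z with P(loss <= z) >= zeta, for a discrete loss distribution."""
    order = np.argsort(losses)
    losses, probs = np.asarray(losses)[order], np.asarray(probs)[order]
    cdf = np.cumsum(probs)
    return losses[np.searchsorted(cdf, zeta)]

# Each loan loses 100 with probability 0.04, otherwise nothing.
var_x = var([0, 100], [0.96, 0.04])   # 0: the 4% default sits beyond the 95% level

# The "diversified" two-loan portfolio, assuming independent defaults.
losses = [0, 100, 200]
probs = [0.96**2, 2 * 0.96 * 0.04, 0.04**2]
var_xy = var(losses, probs)           # 100 > var_x + var_x = 0
```

Here ρ(X + Y) = 100 while ρ(X) + ρ(Y) = 0: diversifying across the two loans drags the default mass past the 95% threshold, so VaR reports the diversified portfolio as riskier.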
We say a risk measure is weakly coherent if it is convex, translationally invariant
and homogeneous. It is also worth mentioning that coherency axioms ensure the risk
measure is convex and so is amenable to computational optimisation; for more infor-
mation the reader is referred to [RU00]. VaR on the other hand is non-convex and so
possesses many local minima.
An alternative definition of CVaR is the mean of the tail of the loss distribution beyond the VaR level. An additional advantage of CVaR is that the portfolio weights can easily be optimised by linear programming [RU00] to minimise CVaR.
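Given any simulated loss sample, CVaR is simply the mean of the tail beyond the VaR level. A sketch on standard Normal "losses", an illustrative stand-in for a real loss distribution:

```python
import numpy as np

def var_cvar(losses, zeta=0.99):
    """Empirical VaR and CVaR: CVaR is the mean loss in the tail beyond VaR."""
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, zeta)
    tail = losses[losses >= var]       # the worst (1 - zeta) fraction of outcomes
    return var, tail.mean()

rng = np.random.default_rng(1)
sample = rng.standard_normal(100_000)  # stand-in loss sample
v, cv = var_cvar(sample)               # cv always >= v
```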
Spectral risk measures are a group of coherent risk measures, whereby the risk is
given as the sum of a weighted average of outcomes. The weights can be chosen to
reflect risk preferences towards particular outcomes. For more information the reader
is referred to [Ace02].
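A discretised sketch of a spectral risk measure on a loss sample; the exponential risk-aversion spectrum and the parameter k are one admissible choice, assumed here for illustration:

```python
import numpy as np

def spectral_risk(losses, k=20.0):
    """Spectral risk measure: a weighted average of sorted outcomes, with weights
    increasing on worse losses (exponential risk-aversion spectrum)."""
    losses = np.sort(np.asarray(losses))        # ascending: worst losses last
    n = len(losses)
    p = (np.arange(n) + 0.5) / n                # quantile midpoints
    # Admissible spectrum: non-negative, increasing in p, integrates to 1.
    phi = k * np.exp(k * p) / (np.exp(k) - 1)
    w = phi / phi.sum()                         # discretised weights
    return w @ losses

rng = np.random.default_rng(2)
sample = rng.standard_normal(10_000)            # stand-in loss sample
srm = spectral_risk(sample)
```

Larger k concentrates the weight on the worst outcomes; a flat spectrum over the tail would recover CVaR as a special case.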
² Note: this assumes we have no liquidity risks. In reality this is not true, particularly during the current global credit crisis.
4.3. Copulas
The subadditivity axiom demonstrates the importance of capturing dependencies between stocks when measuring the risk of a portfolio. Consequently this gave rise to interest in copulas [Nel06]; these are functions mapping a set of marginal distributions into a multivariate distribution and vice versa. Copula theory is underpinned by Sklar's Theorem, which states that for a given multivariate distribution there exists a copula that can combine all the marginal distributions to give the joint distribution. For example, in the bivariate case, if we have two marginal distributions F(x) and G(y) then there exists a copula function C giving the multivariate distribution H(x,y):
H(x, y) = C(F (x), G(y)). (13)
There exists a variety of copulas; examples include the Gaussian copula [FMN01] and the Clayton copula [CNF05]. Prior to their application in Financial Mathematics, copulas had been used for many years in Actuarial Science, Reliability Engineering, and Civil and Mechanical Engineering.
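Sklar's decomposition also runs in reverse: one can simulate dependent uniforms from a copula and push them through any marginals. A Gaussian-copula sampling sketch; the correlation of 0.8 is illustrative:

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(x):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs of U(0,1) variates coupled through a Gaussian copula."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # correlated Normals
    return np.vectorize(std_normal_cdf)(z)                # Phi maps each margin to U(0,1)

u = gaussian_copula_sample(rho=0.8, n=10_000)
# Feeding u[:, 0], u[:, 1] through any inverse marginal CDFs realises
# H(x, y) = C(F(x), G(y)) as in equation (13).
```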
In Extreme Value Theory copulas become extremely important, because dependencies between random variables cannot be captured using standard correlation; to use multivariate Extreme Value Theory we must instead apply copulas to capture dependencies [Dow02].
Despite the number of copulas in existence, this continues to be an area of active research, as it is important to have copulas that capture the correct type of dependencies between stocks. For instance, copulas are used in the pricing of collateralized debt obligations (CDOs) [MV04]. However, the usage of credit derivatives has been widely cited as an important cause of the current global credit crisis.
6. Conclusions
This paper has surveyed the key risk measures in Financial Mathematics, as well as their progressive development from the earliest beginnings. We have also noted, contrary to popular knowledge, that risk measures existed prior to Markowitz.
We have examined the key contributions of the major risk measures, such as Markowitz's Portfolio Theory, whilst also highlighting their influence within the financial industry. We have also discussed newer risk measures such as spectral risk measures, VaR and its variants (e.g. CVaR), and mentioned future areas of research.
References
[Ace02] C. Acerbi. Spectral measures of risk: A coherent representation of subjective
risk aversion. Journal of Banking and Finance, 26(7):1505–1518, 2002.
[ADE+ 03] P. Artzner, F. Delbaen, J.M. Eber, D. Heath, and H. Ku. Coherent multi-
period risk measurement. Manuscript, ETH Zurich, 2003.
[Bog93] J.C. Bogle. Bogle on mutual funds: new perspectives for the intelligent
investor. McGraw-Hill Professional, 1993.
[CT04] R. Cont and P. Tankov. Financial Modelling with Jump Processes. CRC
Press, 2004.
[DS05] K. Detlefsen and G. Scandolo. Conditional and dynamic convex risk mea-
sures. Finance and Stochastics, 9(4):539–561, 2005.
[DSZ04] J. Daníelsson, H.S. Shin, and J.P. Zigrand. The impact of risk regulation on price dynamics. Journal of Banking and Finance, 2004.
[Efr79] B. Efron. Bootstrap Methods: Another Look at the Jackknife. The Annals
of Statistics, 7(1):1–26, 1979.
[Fis06] Irving Fisher. The Nature of Capital and Income. Macmillan, 1906.
[FMN01] R. Frey, A.J. McNeil, and M. Nyfeler. Copulas and credit models. Risk,
14(10):111–114, 2001.
[GF99] T.C. Green and S. Figlewski. Market Risk and Model Risk for a Finan-
cial Institution Writing Options. The Journal of Finance, 54(4):1465–1499,
1999.
[Hag05] R.G. Hagstrom. The Warren Buffett Way. Wiley, 2005.
[Hul00] J. Hull. Options, futures and other derivatives. Prentice Hall, 2000.
[KH05] M.H. Kabir and M.K. Hassan. The Near-Collapse of LTCM, US Financial
Stock Returns, and the Fed. Journal of Banking and Finance, 29(2):441–
460, 2005.
[Kni21] F.H. Knight. Risk, uncertainty and profit. Houghton Mifflin Company, 1921.
[Mer74] R.C. Merton. On the Pricing of Corporate Debt: The Risk Structure of
Interest Rates. The Journal of Finance, 29(2):449–470, 1974.
[Rie04] F. Riedel. Dynamic coherent risk measures. Stochastic processes and their
applications, 112(2):185–200, 2004.
[Roy52] A.D. Roy. Safety First and the Holding of Assets. Econometrica, 20(3):431–449, 1952.
[RU00] R.T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk.
Journal of Risk, 2(3):21–41, 2000.
[Sha64] W.F. Sharpe. Capital Asset Prices: A Theory of Market Equilibrium under
Conditions of Risk. The Journal of Finance, 19(3):425–442, 1964.
[SP94] F.A. Sortino and L.N. Price. Performance measurement in a downside risk framework. Journal of Investing, Fall 1994.
[Sto99a] Paul Stonham. Too Close To The Hedge: The Case Of Long Term Capital Management. Part Two: Near-Collapse And Rescue. European Management Journal, 17(4):382–390, 1999.
[Sto99b] L.A. Stout. Why the Law Hates Speculators: Regulation and Private Order-
ing in the Market for OTC Derivatives. Duke Law Journal, 48(4):701–786,
1999.