
Risk Measures in Quantitative Finance

by Sovan Mitra

arXiv:0904.0870v1 [q-fin.RM] 6 Apr 2009

Abstract
This paper was presented and written for two seminars: a national UK University
Risk Conference and a Risk Management industry workshop. The target audience is
therefore a cross-section of Academics and industry professionals.¹
The current ongoing global credit crunch has highlighted the importance of risk
measurement in Finance to companies and regulators alike. Despite risk measurement's
central importance to risk management, few papers exist that review risk measures or
follow their evolution from the earliest beginnings up to the present-day risk measures.
This paper reviews the most important portfolio risk measures in Financial Mathematics,
from Bernoulli (1738) to Markowitz's Portfolio Theory, to the presently preferred
risk measures such as CVaR (Conditional Value at Risk). We provide a chronological
review of the risk measures and survey less commonly known risk measures, e.g. the
Treynor ratio.

Key words: Risk measures, coherent, risk management, portfolios, investment.

1. Introduction and Outline

Investors are constantly faced with a trade-off between potential returns and the
risk taken to achieve them. However, the events of the current ongoing global credit
crisis and past financial crises (see for instance [Sto99a] and [Mit06]) have
demonstrated the necessity of adequate risk measurement. Poor risk measurement can
result in bankruptcies and threaten the collapse of an entire financial sector [KH05].
Risk measurement is vital to trading in the multi-trillion dollar derivatives industry
[Sto99b] and insufficient risk analysis can misprice derivatives [GF99]. Additionally

¹ “Risk comes from not knowing what you’re doing”, Warren Buffett.

Preprint submitted to Elsevier April 6, 2009


incorrect risk measurement can significantly underestimate particular risk types,
e.g. market risk, credit risk, etc.
This paper reviews the most significant risk measures with a particular focus on
those practised within the Financial Mathematics/Quantitative Finance arena. The
evolution of risk measures can be categorised into four main stages:

1. Pre-Markowitz risk measures;


2. Markowitz Portfolio Theory based risk measures;
3. Value at Risk and related risk measures;
4. Risk Measures based on Coherent Risk Measurement Theory.

The outline of the paper is as follows. The paper starts with the first risk mea-
sures proposed; unknown to many Financial Mathematicians, these arose prior to
Markowitz’s risk measure. We then introduce Markowitz’s Portfolio Theory, which pro-
vided the first formal model of risk measurement and diversification. After Markowitz
we discuss Markowitz-related risk measures, in particular CAPM (the Capital
Asset Pricing Model) and other related risk measures, e.g. the Treynor ratio.
We then discuss the next most significant introduction to risk measurement, Value
at Risk (VaR), and explain its benefits. This is followed by a discussion of the first
axioms of risk theory: the coherency axioms of Artzner et al. [ADEH97]. Coherent
risk measures and copulas are discussed and finally we mention the future directions
of risk measurement.
This paper was presented and written for two seminars: a national UK University
Risk Conference and a Risk Management industry workshop. The target audience is
therefore a cross section of Academics and industry professionals.

2. Pre-Markowitz and Markowitz Risk Measurement Era


2.1. Pre-Markowitz Risk Measurement
A risk measure ρ is a function mapping a distribution of losses G to R, that is

ρ : G −→ R. (1)

We note that some Academics distinguish between risk and uncertainty as first defined
by Knight [Kni21]. Knight defines risk as randomness with known probabilities (e.g.
probability of throwing a 6 on a die) whereas uncertainty is randomness with unknown
probabilities (e.g. probability of rainy weather). However, in Financial risk literature
this distinction is rarely made.
A particularly important aspect of risk and risk measurement is portfolio diversifi-
cation. Diversification is the concept that one can reduce total risk without sacrificing

possible returns by investing in more than one asset. This is possible because not all
risks affect all assets, for example, a new Government regulation on phone call charges
would affect the telecoms sector and perhaps a few others deriving significant revenues
or costs from phone calls, but not every sector or company. Thus by investing in more
than one asset, one is less exposed to such “asset specific” risks. Note that there also exist
risks that cannot be mitigated by diversification, for example a rise in interest rates
would affect all businesses as they all save or spend money. We can conceptually cat-
egorise all risks into diversifiable and non-diversifiable risk (also known as “systematic
risk” or “market risk”).
Contrary to popular opinion, risk measurement and diversification had been investi-
gated prior to Markowitz’s Portfolio Theory (MPT). Bernoulli in 1738 [Ber54] discusses
the famous St. Petersburg paradox and that risky decisions can be assessed on the ba-
sis of expected utility. Rubinstein [Rub02] states Bernoulli recognised diversification;
that investing in a portfolio of assets reduces risk without decreasing return.
Prior to Markowitz a number of Economists had used variance as a measure of
portfolio risk. For example Irving Fisher [Fis06] in 1906 suggested measuring Economic
risk by variance, Tobin [Tob58] related investment portfolio risks to variance of returns.
Before significant contributions were made in Financial Mathematics to risk mea-
surement theory, risk measurement was primarily a securities analysis based topic.
Furthermore, securities analysis in itself was still in its infancy in the first half of the
twentieth century. Benjamin Graham, widely considered the father of modern secu-
rities analysis, proposed the idea of margin of safety as a measure of risk in [Gra03].
Graham also recommended portfolio diversification to reduce risks. Graham’s method-
ology of investing (widely known as value investing) has not been pursued by the Fi-
nancial Mathematics community, partly because of its reliance on securities analysis rather than
mathematical analysis. Exponents of the value based investment methodology include
renowned investors Jeremy Grantham, Warren Buffett [Hag05] and Walter Schloss.

2.2. Markowitz’s Portfolio Theory (MPT)


Although Financial Analysts and Economists were aware of risk, prior to Markowitz's
risk measure risk assessment was mostly concerned with standard financial statement analysis,
following a similar line of enquiry to Graham [Gra03]. However Markowitz ([Mar52],
[Mar91b]) was the first to formalise portfolio risk, diversification and asset selection in
a mathematically consistent framework. All that was needed were asset return means,
variances and covariances. In this respect MPT was a significant innovation in risk
measurement, for which Markowitz won the Nobel prize.

Markowitz proposed that a portfolio's risk is equal to the variance of the portfolio's
returns. If we define the weighted expected return of a portfolio µp as

    \mu_p = \sum_{i=1}^{N} w_i \mu_i,    (2)

then the portfolio's variance σp² is

    \sigma_p^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} \sigma_{ij} w_i w_j,    (3)

where

• N is the number of assets in a portfolio;

• i,j are the asset indices and i, j ∈ {1, ..., N} ;

• wi is the asset weight, subject to the constraints:

    0 \le w_i \le 1, \qquad \sum_{i=1}^{N} w_i = 1;

• σij is the covariance of asset i with asset j;

• µi is the expected return for asset i.

Markowitz's portfolio theory was the first to explicitly account for portfolio diversi-
fication through the correlation (or covariance) between assets. From equation (3) one
observes that σp² decreases as σij decreases, without necessarily reducing µp. MPT also
introduced the idea of optimising portfolio selection by selecting assets lying on an
efficient frontier. The efficient frontier is found by minimising risk (σp²) over the
weights wi subject to the constraint that µp is fixed; hence such portfolios provide a
given µp for minimal risk [Mar91a]. Additionally, it can be shown that the efficient
frontier follows a concave relation between µp and σp², reflecting the idea of expected
utility increasing concavely with risk. Most portfolio managers apply an MPT framework
to optimise portfolio selection [Rub02].
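
As a concrete illustration of equations (2) and (3), the following minimal Python
sketch computes a portfolio's expected return and variance. The three-asset weights,
expected returns and covariance matrix are illustrative assumptions, not data from
this paper.

    import numpy as np

    mu = np.array([0.08, 0.12, 0.05])       # expected returns mu_i (assumed)
    w = np.array([0.5, 0.3, 0.2])           # weights w_i, each in [0, 1], summing to 1
    cov = np.array([[0.040, 0.006, 0.002],  # covariance matrix sigma_ij (assumed)
                    [0.006, 0.090, 0.004],
                    [0.002, 0.004, 0.010]])

    mu_p = w @ mu          # equation (2): weighted expected portfolio return
    var_p = w @ cov @ w    # equation (3): portfolio variance
    print(f"mu_p = {mu_p:.4f}, sigma_p^2 = {var_p:.4f}")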
Based on MPT portfolio risk measurement, Sharpe [Sha66] invented the Sharpe
Ratio S:

    S = \frac{\mu_p - R_f}{\sigma_p},    (4)

where Rf is the risk-free rate of return. Sharpe's ratio can be interpreted as the excess
return above the risk-free rate per unit of risk, where risk is measured by MPT. The
Sharpe ratio provides a portfolio risk measure in terms of determining the quality of
the portfolio’s return at a given level of risk. It is worth noting the Sharpe ratio’s
similarity to the t-statistic. A discussion on the Sharpe ratio can be found at Sharpe’s
website: www.stanford.edu/∼wfsharpe/.
A variant on the Sharpe ratio is the Sortino Ratio [SP94], where we replace the
denominator by the standard deviation of the portfolio returns below µp . This ratio
essentially performs the same measurement as the Sharpe ratio but does not penalise
portfolio performance for returns above µp .
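
The following is a minimal sketch of the Sharpe ratio of equation (4) and the Sortino
variant described above, computed from a sample of portfolio returns. The downside
deviation follows the description here (returns below µp), which is one of several
conventions in use; function names are illustrative.

    import numpy as np

    def sharpe_ratio(returns, rf):
        # Equation (4): excess return over the risk-free rate per unit of
        # total standard deviation.
        return (returns.mean() - rf) / returns.std(ddof=1)

    def sortino_ratio(returns, rf):
        # Same numerator, but the denominator only penalises returns below
        # the portfolio mean mu_p (downside deviation).
        mu_p = returns.mean()
        downside = np.minimum(returns - mu_p, 0.0)
        downside_std = np.sqrt(np.mean(downside ** 2))
        return (mu_p - rf) / downside_std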
It is worth mentioning that Roy [Roy52] formulated Markowitz's Portfolio Theory
at the same time as Markowitz. As Markowitz says [Rub02]:“On the basis of Markowitz
(1952), I am often called the father of modern portfolio theory (MPT), but Roy (1952)
can claim an equal share of this honor.”

2.3. Capital Asset Pricing Model (CAPM)


In the 1960s MPT was computationally infeasible; it required covariance cal-
culations for all pairs of assets, where N ≥ 100. This motivated another risk measurement
technique by Sharpe [Sha64] called CAPM, which was based on the MPT risk model:

    \mu_i = R_f + \beta_i (\mu_m - R_f),    (5)

    \beta_i = \frac{\sigma_{im}}{\sigma_m^2},    (6)
where

• Rf is the risk-free rate of return;

• µm is the expected market return;

• βi is known as the beta coefficient for asset i;

• σim is the covariance of asset i and the market;

• σm² is the variance of the market.

The βi measures the sensitivity of the asset i’s returns to the market; a high βi
implies asset i’s returns increase with the market. In the CAPM model the term
(µm − Rf ) is the market risk premium, which is the return awarded above the risk-free
rate for investing in a risky asset.
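
A minimal sketch of equations (5) and (6): beta is estimated as the sample covariance
of asset and market returns divided by the market variance, and then fed into the CAPM
expected-return formula. Function names and inputs are illustrative.

    import numpy as np

    def capm_beta(asset_returns, market_returns):
        # Equation (6): beta_i = sigma_im / sigma_m^2
        c = np.cov(asset_returns, market_returns, ddof=1)
        return c[0, 1] / c[1, 1]

    def capm_expected_return(beta, rf, mu_m):
        # Equation (5): risk-free rate plus beta times the market risk premium
        return rf + beta * (mu_m - rf)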
The CAPM theory postulates that investors of differing risk aversion would all
hold the same portfolio. This portfolio would be a mixture of riskless and risky assets,
weighted according to each asset's market capitalisation (number of shares outstanding
× share price). Thus CAPM theory essentially suggests investors would hold an index
tracker fund, and it encouraged the development of index funds. Index funds have been
pioneered by investment managers such as John Bogle [Bog93] (e.g. the Vanguard 500
Index Fund).
The CAPM theory gave portfolio fund managers the first “standard” portfolio per-
formance benchmark by measuring against an index's performance. Benchmark exam-
ples are given from [Jia03]:

    Index             | Portfolio Benchmark
    ------------------|-----------------------------------
    S&P 500           | Large Market Capital Equity Funds
    S&P 400           | Mid-Capital Equity Funds
    Russell 2000      | Small Capital Equity Funds
    NASDAQ Composite  | Technology Sector Equity Funds

Variations on the CAPM model include Jensen’s risk measure. Jensen quantifies port-
folio returns above that predicted by CAPM with α:

α = µp − [Rf + βp (µm − Rf )], (7)

where βp is the portfolio's beta. The term α can be interpreted as a measure of a
portfolio manager's investment ability, or “beating the market”.
Finally, another less well-known portfolio ratio measure is the Treynor ratio
[Hüb05] T:

    T = \frac{\mu_p - R_f}{\beta_p}.    (8)

Similar to the Sharpe ratio, the Treynor ratio can be interpreted as the “quality” of
portfolio return above Rf per unit of risk, but with risk measured on a CAPM basis.
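
Jensen's alpha (equation (7)) and the Treynor ratio (equation (8)) are direct
one-liners once the portfolio beta is known; this sketch simply transcribes the
formulas, with illustrative function names.

    def jensen_alpha(mu_p, beta_p, rf, mu_m):
        # Equation (7): portfolio return above the CAPM prediction
        return mu_p - (rf + beta_p * (mu_m - rf))

    def treynor_ratio(mu_p, rf, beta_p):
        # Equation (8): excess return per unit of systematic (CAPM) risk
        return (mu_p - rf) / beta_p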

3. Value at Risk (VaR)


3.1. VaR Risk Measure
The next era of risk measurement after MPT can be traced to the introduction of
Value at Risk (VaR). This represented a significant change in direction of risk mea-
surement for the following reasons:
• firstly, VaR initiated a shift in focus of using risk measures for the management
of risk in an industry context. In 1994 JP Morgan created the VaR risk mea-
sure, apparently to measure risk across the whole institution under one holistic
risk measure [Dow02]. Previous risk measures did not focus on such a holistic
approach to risk measurement or management.

• secondly, the Basel Committee on Banking Supervision, which standardises in-
ternational banking regulations and practices, stipulated a market risk capital
requirement based upon VaR in 1995. This has subsequently fuelled interest in
VaR and VaR-related measures and helped establish VaR as a popular risk measure
[DSZ04].

• finally, previous measures focussed on explaining the return on an asset based on
some theoretical model of the risk and return relation, e.g. CAPM. VaR on the
other hand shifted the focus to measuring and quantifying the risk itself, in
terms of losses (rather than expected return).
VaR's purpose is simply to address the question “How much can one expect to lose,
with a given cumulative probability ζ, for a given time horizon T?”. VaR is therefore
defined as [Sze05]:

    F(Z(T) ≤ VaR) = ζ,    (9)

where
• F(.) is the cumulative probability distribution function;
• Z(T) is the loss, where the loss Z(t) is defined by

    Z(t) = S(0) − S(t),    (10)

where S(t) is the stock price at time t;


• ζ is a cumulative probability associated with threshold value VaR, on the loss
distribution of Z(t).
To give an example of VaR, a portfolio may have a VaR of $10,000,000, for one day,
with a cumulative probability of 90%. This means that the portfolio can expect a
maximum loss of $10,000,000, over one day, with a 90% cumulative probability. An
alternative interpretation is that the portfolio's loss could exceed $10,000,000,
in one day, with a cumulative probability of 100% − 90% = 10%. Typically ζ is chosen
to be 0.90, 0.95 or 0.99.

3.2. VaR Implementation


The measurement of VaR on a portfolio presents a theoretical and computational
challenge as it is difficult to model the evolution of a portfolio over time containing
hundreds of assets [MR05],[Hul00]. Hence the implementation of VaR is of particular
interest to industry and academic Researchers; the four main methods are [Dow02]:

VaR Historical Simulation


Using a set of historical data we obtain an empirical probability distribution of
losses for the portfolio. One can then determine the appropriate VaR by extracting it
from the associated quantile. Such an approach is convenient to implement and can be
improved by a range of statistical methods. For example one can apply bootstrapping
[Efr79] when one has a small data set or implement a weighted historical simulation
approach.
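
A minimal sketch of historical-simulation VaR: the VaR at level ζ is simply the
ζ-quantile of the empirical loss sample, and a bootstrap (as suggested above for small
samples) gives a rough estimate of the estimation error. Function names are illustrative.

    import numpy as np

    def historical_var(losses, zeta=0.95):
        # VaR as the zeta-quantile of the empirical loss distribution
        return np.quantile(losses, zeta)

    def bootstrap_var(losses, zeta=0.95, n_boot=1000, seed=0):
        # Resample the loss history with replacement to gauge VaR variability
        rng = np.random.default_rng(seed)
        estimates = [np.quantile(rng.choice(losses, size=len(losses)), zeta)
                     for _ in range(n_boot)]
        return np.mean(estimates), np.std(estimates)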

VaR Parametric Approach


The parametric approach requires an analytic solution for the VaR at any cumulative
probability. Unfortunately not all loss distributions admit analytic solutions; however,
one can apply Extreme Value Theory (Peaks over Threshold and Generalised Extreme Value
distributions) to estimate VaR. The reader is referred to [CT04] for more information.
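
A sketch of the Peaks over Threshold approach mentioned above: exceedances over a
threshold u are fitted with a Generalised Pareto distribution, and VaR is read off the
standard POT quantile formula. The threshold choice and the formula follow common EVT
references (e.g. [CT04]), not a prescription from this paper, and the formula below
assumes a non-zero shape parameter ξ.

    import numpy as np
    from scipy.stats import genpareto

    def pot_var(losses, threshold, zeta=0.99):
        losses = np.asarray(losses)
        exceedances = losses[losses > threshold] - threshold
        # Fit a Generalised Pareto distribution to the exceedances (location fixed at 0)
        xi, _, beta = genpareto.fit(exceedances, floc=0)
        n, n_u = len(losses), len(exceedances)
        # Standard POT quantile: VaR = u + (beta/xi) * [((n/n_u)(1 - zeta))^(-xi) - 1]
        return threshold + (beta / xi) * (((n / n_u) * (1 - zeta)) ** (-xi) - 1.0)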

VaR Monte Carlo Simulation
Monte Carlo simulation is a generic method of simulating some random process
(e.g. stochastic differential equation) representing the assets or the portfolio itself.
Consequently after sufficient simulations we can obtain a loss distribution and therefore
extract VaR for different probabilities as was done for historical simulation. One can
improve Monte Carlo simulation through various computational techniques [Gla04]
such as importance sampling, stratified sampling and antithetic sampling.
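
A minimal Monte Carlo sketch: simulate a single stock under geometric Brownian motion
(an assumed process; any stochastic model could be substituted), form losses as in
equation (10), and extract VaR as an empirical quantile, exactly as in the historical
method.

    import numpy as np

    def mc_var(s0=100.0, mu=0.05, sigma=0.2, horizon=1/252,
               zeta=0.95, n_paths=100_000, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        # Terminal price under geometric Brownian motion
        s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon
                          + sigma * np.sqrt(horizon) * z)
        losses = s0 - s_t          # equation (10): Z(T) = S(0) - S(T)
        return np.quantile(losses, zeta)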

VaR Variance-Covariance Method (also known as the Delta-Normal Method)


Under the variance-covariance method, we model the portfolio’s loss distribution
by making two assumptions:

• the portfolio is linear: the change in the portfolio's price V(t) is linearly dependent
on its constituent asset prices Si(t). In other words:

    \Delta V(t) = \sum_{i=1}^{N} \Delta S_i(t).    (11)

A portfolio will be linear if it contains no derivatives. Furthermore, in practice
some modellers assume non-linear portfolios are linear for analytical tractability.
This assumption is made in [RU00].

• the constituent assets have a joint Normal return distribution, which implies the
portfolio's returns are Normally distributed. It is worth noting that the sum of
Normally distributed random variables is not strictly always Normal; the specific
property of joint Normality, however, guarantees the portfolio's return is Normal.
Hence the linear portfolio assumption alone cannot guarantee the portfolio's return
is Normal.

These two assumptions enable us to describe the portfolio's loss using a Normal
distribution, for which numerous analytical equations and distribution fitting methods
exist. Therefore VaR calculation and implementation become significantly simpler.
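
Under the two assumptions, VaR has a closed form; the sketch below computes it for a
portfolio with assumed jointly Normal constituent returns. Parameter names are
illustrative.

    import numpy as np
    from scipy.stats import norm

    def delta_normal_var(weights, mu, cov, value, zeta=0.95):
        # Portfolio return is Normal with these moments (linear portfolio,
        # jointly Normal constituents)
        mu_p = weights @ mu
        sigma_p = np.sqrt(weights @ cov @ weights)
        # Loss = -return * value, so the zeta-quantile of the loss is analytic
        return value * (norm.ppf(zeta) * sigma_p - mu_p)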

4. Coherent Risk Measures


4.1. Axioms of Risk Measurement
A significant milestone in risk measurement was achieved when Artzner et al.
[ADEH97] proposed the first axioms of risk measurement; risk measures that obeyed
these axioms were called coherent risk measures. The coherency axioms had far-reaching
implications: it was no longer possible to arbitrarily assign a function for risk mea-
surement unless it obeyed these axioms, and consequently VaR was no longer considered
an adequate risk measure.
We now define a coherent risk measure ρ(.). Let X and Y denote the future loss of
two portfolios, then a risk measure ρ is coherent if it adheres to the four axioms:
1. risk is monotonic: if X ≤ Y then ρ(X) ≤ ρ(Y);
2. risk is homogeneous: ρ(λX) = λρ(X) for λ > 0;
3. riskless translation invariance: ρ(X + χ) = ρ(X) − χ, where χ is a riskless bond;
4. risk is sub-additive: ρ(X + Y) ≤ ρ(X) + ρ(Y).
We will now explain each axiom in turn. The monotonicity axiom tells us that
we associate higher risk with higher loss. The homogeneity axiom ensures that we
cannot increase or decrease risk by investing differing amounts in the same stock; in
other words, the risk arises from the stock itself and is not a function of the quantity
purchased.²
The translation invariance axiom can be explained by the fact that an investment
in a riskless bond bears no loss with probability 1; hence we must always receive the
initial amount invested. The initial investment is subtracted because risk measures
express loss as a positive amount, hence a gain is negative.
Subadditivity is the most important axiom because it ensures that a coherent
risk measure takes into account portfolio diversification. The axiom shows that investing
in both portfolio X and portfolio Y results in lower overall risk than the sum of the
risks of investing in each portfolio separately. VaR is not a coherent risk measure
because it does not obey the subadditivity axiom; consequently it can imply higher risk
arising from diversification, as the sketch below illustrates.
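
A standard numerical illustration of the subadditivity failure (an assumed toy setup,
not an example from this paper): two independent loans each lose 100 with probability
0.04, so at ζ = 0.95 each has VaR 0, yet the combined portfolio has VaR 100, i.e.
ρ(X + Y) > ρ(X) + ρ(Y).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    loss_x = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # loan X defaults w.p. 0.04
    loss_y = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # independent loan Y

    var95 = lambda z: np.quantile(z, 0.95)
    print(var95(loss_x), var95(loss_y), var95(loss_x + loss_y))  # ~ 0.0 0.0 100.0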
We say a risk measure is weakly coherent if it is convex, translationally invariant
and homogeneous. It is also worth mentioning that coherency axioms ensure the risk
measure is convex and so is amenable to computational optimisation; for more infor-
mation the reader is referred to [RU00]. VaR on the other hand is non-convex and so
possesses many local minima.

4.2. Coherent Risk Measures


Given the introduction of the coherency axioms and the conclusion that VaR was not
coherent, new coherent risk measures were proposed to capture the advantages of VaR.
In particular, there was a need for a “holistic” risk measure that was simple to grasp,
in that it could capture all the key risk information with three pieces of information:
probability, loss and time horizon.
In response to the need for a coherent equivalent to VaR, a variety of VaR-related
risk measures were proposed. Examples include TVaR (tail value at risk) [ADE+03], WCE
(worst conditional expectation) [Ino03] and CVaR (conditional value at risk).
CVaR has become a particularly popular risk measure due to its similarity to VaR
and because it assesses “how bad things can get” if the VaR loss is exceeded. CVaR is
the expected loss given that the VaR loss is exceeded; it is defined by

    CVaR = E[Z(T) | Z(T) > VaR].    (12)

An alternative definition of CVaR is the mean of the tail distribution of the VaR losses.
An additional advantage of CVaR is that portfolio weights can be easily optimised by
linear programming [RU00] to minimise CVaR.
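
A minimal empirical version of equation (12): take the VaR quantile of a loss sample
and average the losses beyond it.

    import numpy as np

    def cvar(losses, zeta=0.95):
        # Equation (12): expected loss conditional on exceeding VaR
        losses = np.asarray(losses)
        var_level = np.quantile(losses, zeta)
        tail = losses[losses > var_level]
        return tail.mean() if tail.size > 0 else var_level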
Spectral risk measures are a group of coherent risk measures whereby the risk is
given as a weighted average of outcomes. The weights can be chosen to reflect risk
preferences towards particular outcomes. For more information the reader is referred
to [Ace02].

² Note: this assumes we have no liquidity risks. In reality this is not true, particularly
during the current global credit crisis.

4.3. Copulas
The subadditivity axiom demonstrates the importance of capturing dependencies be-
tween stocks when measuring the risk of a portfolio. Consequently this gave rise to
interest in copulas [Nel06]: functions mapping a set of marginal distributions into a
multivariate distribution and vice versa. Copula theory is underpinned by Sklar's
Theorem, which states that for a given multivariate distribution there exists a copula
that combines the marginal distributions to give the joint distribution. For example,
in the bivariate case, if we have two marginal distributions F(x) and G(y) then there
exists a copula function C giving the multivariate distribution H(x, y):

    H(x, y) = C(F(x), G(y)).    (13)

There exist a variety of copulas; examples include the Gaussian copula [FMN01] and the
Clayton copula [CNF05]. Prior to their application in Financial Mathematics, copulas
had been used for many years in Actuarial Science, Reliability Engineering, and Civil
and Mechanical Engineering.
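
A sketch of equation (13) run in reverse: sample a Gaussian copula and push the
resulting uniforms through two assumed marginals (Student-t and lognormal, chosen
purely for illustration) to obtain a dependent bivariate sample.

    import numpy as np
    from scipy.stats import norm, t, lognorm

    def gaussian_copula_sample(rho, n, seed=0):
        rng = np.random.default_rng(seed)
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        u = norm.cdf(z)                  # uniform marginals with Gaussian dependence
        x = t.ppf(u[:, 0], df=4)         # marginal F: Student-t (assumed)
        y = lognorm.ppf(u[:, 1], s=0.5)  # marginal G: lognormal (assumed)
        return x, y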
In Extreme Value Theory copulas become especially important because standard
correlation cannot capture the dependencies between extreme random variables; to use
multivariate Extreme Value Theory we must instead apply copulas to capture dependencies
[Dow02].
Despite the number of copulas in existence this continues to be an area of active
research, as it is important to have copulas that capture the correct type of dependencies
between stocks. For instance, copulas are used in the pricing of collateralized debt
obligations (CDOs) [MV04]. However, the usage of credit derivatives has been widely
cited as an important cause of the current global credit crisis.

5. Future Directions of Risk Measurement


Risk measurement is a thriving area of research. A current area of interest is to find
satisfactory methods of modelling dependencies between stocks other than by copulas
and correlations. Alternatively, there is much interest in finding copulas that can
meaningfully capture dependency behaviours.
Another area of risk measurement research is dynamic risk measurement. This
involves measuring risk in continuous time, rather than applied to a static distribution.
Examples of dynamic risk measurement include [Rie04],[GO08], [DS05].
Finally, another future direction of risk measurement is devising risk-specific
measures, such as credit risk measures, liquidity risk measures, etc. However it should
be noted that such risk measures already exist in these areas (for example Merton's
structural credit default model [Mer74] and the KMV model [Kea03]).

6. Conclusions
This paper has surveyed the key risk measures in Financial Mathematics as well as
their progressive development from the earliest beginnings. We have also noted, contrary
to popular belief, that risk measures existed prior to Markowitz.
We have examined the key contributions of the major risk measures, such as
Markowitz’s Portfolio Theory, whilst also highlighting their influence within the fi-
nancial industry. We have also discussed newer risk measures such as spectral risk
measures, VaR and its variants (e.g. CVaR) and mentioned future areas of research.

References

[Ace02] C. Acerbi. Spectral measures of risk: A coherent representation of subjective risk aversion. Journal of Banking and Finance, 26(7):1505–1518, 2002.

[ADE+03] P. Artzner, F. Delbaen, J.M. Eber, D. Heath, and H. Ku. Coherent multiperiod risk measurement. Manuscript, ETH Zurich, 2003.

[ADEH97] P. Artzner, F. Delbaen, J.M. Eber, and D. Heath. Thinking coherently. Risk, 10(11):68–71, 1997.

[Ber54] D. Bernoulli. Exposition of a new theory on the measurement of risk. Econometrica: Journal of the Econometric Society, pages 23–36, 1954.

[Bog93] J.C. Bogle. Bogle on Mutual Funds: New Perspectives for the Intelligent Investor. McGraw-Hill Professional, 1993.

[CNF05] E. Cuvelier and M. Noirhomme-Fraiture. Clayton copula and mixture decomposition. ASMDA 2005, pages 699–708, 2005.

[CT04] R. Cont and P. Tankov. Financial Modelling with Jump Processes. CRC Press, 2004.

[Dow02] K. Dowd. An Introduction to Market Risk Measurement. J. Wiley, Hoboken, NJ, 2002.

[DS05] K. Detlefsen and G. Scandolo. Conditional and dynamic convex risk measures. Finance and Stochastics, 9(4):539–561, 2005.

[DSZ04] J. Daníelsson, H.S. Shin, and J.P. Zigrand. The impact of risk regulation on price dynamics. Journal of Banking and Finance, 2004.

[Efr79] B. Efron. Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7(1):1–26, 1979.

[Fis06] Irving Fisher. The Nature of Capital and Income. Macmillan, 1906.

[FMN01] R. Frey, A.J. McNeil, and M. Nyfeler. Copulas and credit models. Risk, 14(10):111–114, 2001.

[GF99] T.C. Green and S. Figlewski. Market risk and model risk for a financial institution writing options. The Journal of Finance, 54(4):1465–1499, 1999.

[Gla04] P. Glasserman. Monte Carlo Methods in Financial Engineering. Springer, 2004.

[GO08] H. Geman and S. Ohana. Time-consistency in managing a commodity portfolio: A dynamic risk measure approach. Journal of Banking and Finance, 32(10):1991–2005, 2008.

[Gra03] B. Graham. The Intelligent Investor. Harper Collins, 2003.

[Hag05] R.G. Hagstrom. The Warren Buffett Way. Wiley, 2005.

[Hüb05] G. Hübner. The generalized Treynor ratio. Review of Finance, 9(3):415–435, 2005.

[Hul00] J. Hull. Options, Futures and Other Derivatives. Prentice Hall, 2000.

[Ino03] A. Inoue. On the worst conditional expectation. Journal of Mathematical Analysis and Applications, 286(1):237–247, 2003.

[Jia03] W. Jiang. A nonparametric test of market timing. Journal of Empirical Finance, 10(4):399–425, 2003.

[Kea03] S. Kealhofer. Quantifying credit risk I: default prediction. Financial Analysts Journal, 59(1):30–44, 2003.

[KH05] M.H. Kabir and M.K. Hassan. The near-collapse of LTCM, US financial stock returns, and the Fed. Journal of Banking and Finance, 29(2):441–460, 2005.

[Kni21] F.H. Knight. Risk, Uncertainty and Profit. Houghton Mifflin Company, 1921.

[Mar52] H. Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.

[Mar91a] H.M. Markowitz. Foundations of portfolio theory. Journal of Finance, pages 469–477, 1991.

[Mar91b] H.M. Markowitz. Portfolio Selection: Efficient Diversification of Investments. Blackwell Publishing, 1991.

[Mer74] R.C. Merton. On the pricing of corporate debt: The risk structure of interest rates. The Journal of Finance, 29(2):449–470, 1974.

[Mit06] S. Mitra. An Introduction to Hedge Funds. Optirisk Systems: White Paper Series, 2006.

[MR05] M. Musiela and M. Rutkowski. Martingale Methods in Financial Modelling. Springer, 2005.

[MV04] D. Meneguzzo and W. Vecchiato. Copula sensitivity in collateralised debt obligations and basket default swaps pricing and risk monitoring. Journal of Futures Markets, pages 37–70, 2004.

[Nel06] R.B. Nelsen. An Introduction to Copulas. Springer Science and Business Media, Inc., 2006.

[Rie04] F. Riedel. Dynamic coherent risk measures. Stochastic Processes and their Applications, 112(2):185–200, 2004.

[Roy52] A.D. Roy. Safety first and the holding of assets. Econometrica, 20(3):431–449, 1952.

[RU00] R.T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2(3):21–41, 2000.

[Rub02] M. Rubinstein. Markowitz's “Portfolio Selection”: A fifty-year retrospective. The Journal of Finance, 57(3):1041–1045, 2002.

[Sha64] W.F. Sharpe. Capital asset prices: A theory of market equilibrium under conditions of risk. The Journal of Finance, 19(3):425–442, 1964.

[Sha66] W.F. Sharpe. Mutual fund performance. The Journal of Business, 39(1):119–138, 1966.

[SP94] F.A. Sortino and L.N. Price. Performance measurement in a downside risk framework. Journal of Investing, Fall 1994.

[Sto99a] Paul Stonham. Too close to the hedge: The case of Long Term Capital Management. Part two: Near-collapse and rescue. European Management Journal, 17(4):382–390, 1999.

[Sto99b] L.A. Stout. Why the law hates speculators: Regulation and private ordering in the market for OTC derivatives. Duke Law Journal, 48(4):701–786, 1999.

[Sze05] G. Szegö. Measures of risk. European Journal of Operational Research, 163(1):5–19, 2005.

[Tob58] J. Tobin. Liquidity preference as behavior towards risk. The Review of Economic Studies, pages 65–86, 1958.
