02 Uncertainty and Risk
Basic economic analysis rests on the assumption of certainty. When economists discuss the issue of time,
it can effectively be collapsed into the present through discounting. We now change that explicitly by
incorporating uncertainty into the economic world. This also gives us an opportunity to think more
carefully about the matter of time. We deal with a specific, perhaps rather narrow, idea of uncertainty
that is, in a sense, exogenous: some external ingredient that has an impact upon individual agents'
economic circumstances and also upon the agents' decisions.
Risk and uncertainty both relate to the same underlying concept: randomness. Risk is randomness in
which events have measurable probabilities; Knightian1 uncertainty is randomness whose probabilities
are immeasurable and cannot be calculated.
Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.
Uncertainty: We don’t know what is going to happen next, and we do not know what the possible
distribution looks like.
Under risk, probabilities may be obtained either by deduction (using theoretical models) or by induction
(using the observed frequency of events). For example, we can easily deduce the probabilities of the
possible outcomes of a game of dice. Induction, on the other hand, allows us to calculate probabilities
from past observations where theoretical models are unavailable, possibly because of a lack of
knowledge about the underlying relation between cause and effect.
Whereas risk is quantifiable randomness, uncertainty isn’t. It applies to situations in which the world is
not well-charted. First, our world view might be insufficient from the start. Second, the way the world
operates might change so that past observations offer little guidance for the future.
Typically, in situations of choice, risk and uncertainty both apply. Many situations of choice are
unprecedented, and uncertainty about the underlying relation between cause and effect is often
present.
Although there are some radically new concepts to be introduced, the analysis can be firmly based on
the principles used to give meaning to consumer choice. However, the approach will take us on to more
general issues: by modeling uncertainty we can provide an insight into the definition of risk, attitudes to
risk and a precise concept of risk aversion.
1
Knightian uncertainty is named after University of Chicago economist Frank Knight (1885–1972), who
distinguished risk and uncertainty in his work Risk, Uncertainty, and Profit
To have a passably usable model of choice, we need to be able to say something about how risk affects
choice and well-being. A theory of risk can help us understand why people buy insurance, or why stocks
pay higher returns than bank accounts.
1.1 Probabilities
Probability refers to the possibility that an outcome will occur. The probability of a repetitive event
happening is the relative frequency with which it will occur. The probability of obtaining a head on the
fair flip of a coin is 50%. Probability is a difficult concept to formalize because its interpretation can
depend on the nature of the uncertain events and on the beliefs of the people involved. One objective
interpretation of probability relies on the frequency with which certain events tend to occur.
If a lottery offers n distinct prizes and the probabilities of winning the prizes are p_i (i = 1, …, n), then

∑_{i=1}^{n} p_i = 1
But what if there are no similar past experiences to help measure probability? In these cases objective
measures of probability cannot be deduced, and a more subjective measure is needed. Subjective
probability is the perception that an outcome will occur. This perception may be based on a person’s
judgment or experience, but not necessarily on the frequency with which a particular outcome has
actually occurred in the past. When probabilities are subjectively determined, different people may
attach different probabilities to different outcomes and thereby make different choices. Either different
information or different abilities to process the same information can explain why subjective
probabilities vary among individuals.
Whichever interpretation of probability is used, it enters the calculation of two important measures that
help us describe and compare risky choices. One measure tells us the expected value of the possible
outcomes, and the other their variability.
For a lottery X with prizes x1, x2, …, xn and probabilities of winning p1, p2, …, pn, the expected value of
the lottery is

E(X) = p1·x1 + p2·x2 + … + pn·xn = ∑_{i=1}^{n} p_i x_i
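As a small sketch of this weighted sum (the coin-flip payoffs below are illustrative, not taken from the text):

```python
def expected_value(payoffs, probs):
    """E(X) = sum_i p_i * x_i, with probabilities summing to 1."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * x for p, x in zip(probs, payoffs))

# A fair coin paying $20 on heads and $0 on tails has expected value $10.
print(expected_value([20, 0], [0.5, 0.5]))  # 10.0
```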
2
The expected value is a weighted sum of the outcomes and the weights are the respective probabilities.
Three coin-flip games, each with expected value $10; their variances are 100, 25, and 5, respectively
(for each game the table lists the number of heads, its probability, the payoff, the squared deviation
from the expected value, and the probability-weighted squared deviation).
Table 1
2
A common observation is that people often refuse to participate in actuarially fair games
Dispersion and risk are closely related notions. Suppose you are choosing between three games that
have the same expected value of $10. Holding the expectation E(X) constant, more dispersion means
that the outcome is "riskier": it has both more upside and more downside potential. Table 1 summarizes
these possible outcomes, their payoffs, and their probabilities3.
Table 2
Note that all of these "lotteries" have the same expected value of $10, but they carry different levels of
risk. The variability of the possible payoffs differs across the games. This variability can be measured by
recognizing that large differences (whether positive or negative) between actual payoffs and the
expected payoff, called deviations, signal greater risk. Table 2 gives the deviations of actual incomes
from the expected income for the games in the example.
In the first game, the average deviation is ….., obtained by weighting each deviation by the probability
that the corresponding outcome occurs. Thus:
Average Deviation = …
3
Note (without proof): the average of n identical independent gambles has a variance equal to 1/n times
the variance of one of the gambles.
For the third game, the average deviation is:
The second game is thus substantially more risky than the first because its average deviation of …. is
much greater than the average deviation of …….. for the first game.
In practice one usually encounters two closely related but slightly different measures of variability. The
variance is the average of the squared deviations of the payoffs associated with each outcome from
their expected value. The standard deviation is the square root of the variance. Table 2 gives some of
the relevant calculations for our example.
Variance ….
The standard deviation is therefore equal to the square root of ……, or ….. Similarly, the average of the
squared deviations under the other games is given by
Variance = …..
The standard deviation is the square root of …., or …... Whether we use the variance or the standard
deviation to measure risk (it is really a matter of convenience; both provide the same ranking of risky
choices), the ….. game is substantially less risky than the ……. one: both the variance and the standard
deviation of the incomes earned are lower.
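The variance and standard deviation calculations can be sketched as follows; the three games here are illustrative stand-ins sharing the expected value $10 but with increasingly concentrated payoffs:

```python
import math

def lottery_moments(payoffs, probs):
    """Return (expected value, variance, standard deviation) of a lottery."""
    ev = sum(p * x for p, x in zip(probs, payoffs))
    var = sum(p * (x - ev) ** 2 for p, x in zip(probs, payoffs))
    return ev, var, math.sqrt(var)

# Same mean, shrinking dispersion: variance falls from 100 to 25 to 1.
for payoffs in ([20, 0], [15, 5], [11, 9]):
    ev, var, sd = lottery_moments(payoffs, [0.5, 0.5])
    print(ev, var, sd)
```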
Figure 1
The concept of variance applies equally well when there are many outcomes rather than just four.
You can see from Figure 1 that the ….game is riskier than the second. The ‘spread’ of possible payoffs for
the ….. game is much greater than the spread of payoffs for the other games. And the variance of the
payoffs associated with the ….. game is greater than the variance associated with the ………..
Suppose you are choosing between the four games described in our original example. Which game
would you choose? If you dislike risk, you will take the ……. game. It offers the same expected income as
the first but with less risk. But suppose we add $2 to each of the payoffs in the first game, so that the
expected payoff increases from $10 to $11. Table 3 gives the new payoffs and the squared deviations.
Game 1: a single flip; heads pays $22, tails pays $0 (probability 50% each); expected value $11,
variance 121.
Game 2: ten flips; each flip pays $2 on heads and $0 on tails (probability 50% each); expected value
$10, variance 10.
Table 3
Games can then be described as follows:
Game 1 offers a higher expected income but is substantially riskier than Game 2. Which game is
preferred depends on you. An aggressive entrepreneur may opt for the higher expected income and
higher variance, while a more conservative person might opt for the second. To see how people might
decide between incomes that differ in both expected value and in riskiness, we need to develop our
theory of consumer choice further.
In the next part we need to understand why people do not want to play fair games, in which the
expected value equals the cost of entry. Most people would not enter an actuarially fair game such as a
$1,000 heads-or-tails coin flip, or even a favorable gamble: "We'll flip a coin. If it's heads, I'll give you
$10 million. If it's tails, you owe me $9 million."
In particular, people will not pay large amounts of money to play gambles with huge upside potential.
The most famous example of this is the "St. Petersburg Paradox". In the St. Petersburg game people
were asked how much they would pay for the following prospect: if tails comes up on the first toss of a
fair coin, receive nothing and stop the game; otherwise, receive $2 and stay in the game. If tails comes
up on the second toss, receive nothing and stop; otherwise, receive $4 and stay in the game, and so on
ad infinitum. Equivalently: flip a coin repeatedly; the game pays $2^n, where n is the number of tosses
until the first tails.
We need to answer the following questions: how much would you be willing to pay to play this game,
and how much should you be willing to pay? The expected monetary value of the St. Petersburg game is
infinite:

E(X) = ∑_{i=1}^{∞} π_i x_i = ∑_{i=1}^{∞} (1/2^i)·2^i = 1 + 1 + 1 + … = ∞
What is the variance of this gamble? V(X) = ∞ as well. Yet no one would pay more than a few dollars to
play. Because no player would pay a lot to play this game, it cannot be worth its infinite expected value
to them.
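A quick simulation sketch (using the common formulation in which the game pays $2^n when the first tails arrives on toss n) shows why the infinite expected value is so deceptive: sample averages stay modest and grow only very slowly with the number of plays.

```python
import random

def play_st_petersburg(rng):
    """Toss until the first tails; pay $2^n, where n is the number of tosses."""
    n = 1
    while rng.random() < 0.5:  # heads: the prize doubles and play continues
        n += 1
    return 2 ** n

rng = random.Random(0)
for trials in (100, 10_000, 1_000_000):
    avg = sum(play_st_petersburg(rng) for _ in range(trials)) / trials
    print(trials, round(avg, 2))  # averages remain small despite E(X) = infinity
```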
The fact that a gamble with positive expected monetary value has negative ‘utility value’ suggests
something pervasive and important about human behavior: As a general rule, uncertain prospects are
worth less in utility terms than certain ones, even when expected tangible payoffs are the same. If this
observation is correct, we need a way to incorporate risk preference into our theory of choice since
many (even most) economic decisions involve uncertainty about states of the world.
We need to be able to say how people make choices when agents both value outcomes (as we have
modeled all along) and have feelings or preferences about the riskiness of those outcomes.
Since people always set a definite, possibly quite small, upper value on the St. Petersburg prospect, it
follows that they do not price it in terms of its expected monetary value. Bernoulli argued in effect that
they evaluate it in terms of the utility of money outcomes, and defended the log function as a plausible
idealization, given its quickly diminishing marginal utility.
Because the resulting series, ∑_n (1/2^n)·log(2^n), converges, Bernoulli's hypothesis is supposed to
deliver a solution to the paradox. At the least, Bernoulli's hypothesis counts as the first systematic
occurrence of expected utility theory (EUT).
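Bernoulli's series can be checked numerically; truncating at 60 terms already pins down the limit 2·log 2, which (valuing the prospect in isolation under log utility) corresponds to a certainty equivalent of only about $4:

```python
import math

# Sum_{n>=1} (1/2^n) * log(2^n) = log(2) * Sum_{n>=1} n/2^n = 2*log(2)
eu = sum((0.5 ** n) * math.log(2.0 ** n) for n in range(1, 60))
print(round(eu, 6), round(2 * math.log(2), 6))  # both 1.386294
print(round(math.exp(eu), 4))                   # certainty equivalent: 4.0
```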
2 Expected utility theory
The history of expected utility theory is often construed as a smooth generalization process: the
principle of maximizing expected monetary value antedates expected utility theory, which is now itself
being generalized in two directions, by non-additive and by non-probabilistic decision theories. The
highlights in this sequence are Bernoulli's (1738) resolution of the St. Petersburg paradox and Allais's
(1953) invention of a thought-provoking problem widely referred to as the Allais paradox.
John von Neumann and Oskar Morgenstern suggested a model for understanding and systematically
modeling risk preference in the mid-1940s: expected utility theory. We will begin with the axioms of
expected utility and then discuss their interpretation and applications. The axioms of consumer theory
continue to hold for preferences over certain (as opposed to uncertain) bundles of goods. Expected
utility theory adds preferences over uncertain combinations of bundles, where uncertainty means that
these bundles will be available with known probabilities that are less than unity. Hence, expected utility
theory is a superstructure that sits atop consumer theory.
Expected Utility Theory states that the decision maker chooses between risky or uncertain lotteries by
comparing their expected utility values - the weighted sums obtained by adding the utility values of
outcomes multiplied by their respective probabilities.
People are generally unwilling to play fair games. There may be a few exceptions (we will assume that
this is not the case): when very small amounts of money are at stake or when there is utility derived
from the actual play of the game. Individuals do not care directly about the dollar values of the prizes,
but they care about the utility that the dollars provide.
If we assume diminishing marginal utility of wealth, the St. Petersburg game may converge to a finite
expected utility value and this would measure how much the game is worth to the individual.
E(U) = ∑_{i=1}^{n} π_i U(x_i)
Because utility may rise less rapidly than the dollar value of the prizes, it is possible that the expected
utility of a gamble will be less than its expected monetary value.
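A small sketch with an assumed concave utility (square root, not a function from the text) makes the gap concrete: the expected utility of a gamble falls short of the utility of its expected value (Jensen's inequality).

```python
import math

U = math.sqrt  # assumed concave (risk-averse) utility, for illustration
payoffs, probs = [100, 0], [0.5, 0.5]

ev = sum(p * x for p, x in zip(probs, payoffs))     # E(X) = 50.0
eu = sum(p * U(x) for p, x in zip(probs, payoffs))  # E[U(X)] = 5.0
print(eu, U(ev))  # 5.0 < sqrt(50): the gamble is worth less in utility terms
```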
Suppose that there are n possible prizes that an individual might win (x1,…xn) arranged in ascending
order of desirability
–x1 = least preferred prize ⇒ U(x1) = 0
–xn = most preferred prize ⇒ U(xn) = 1
The point of the von Neumann-Morgenstern theorem is to show that there is a reasonable way to assign
specific utility numbers to the other prizes available
The von Neumann-Morgenstern method is to define the utility of xi as the expected utility of the gamble
that the individual considers equally desirable to xi: a gamble paying xn with probability pi and x1 with
probability (1 − pi):

U(xi) = pi·U(xn) + (1 − pi)·U(x1)

Since U(xn) = 1 and U(x1) = 0,

U(xi) = pi·1 + (1 − pi)·0 = pi
The utility number attached to any other prize is simply the probability of winning it. Notice that this
choice of utility numbers is arbitrary. A rational individual will choose among gambles based on their
expected utilities (the expected values of the von Neumann-Morgenstern utility index)
If individuals obey the von Neumann-Morgenstern axioms of behavior in uncertain situations, they will
act as if they choose the option that maximizes the expected value of their von Neumann-Morgenstern
utility index
Risk aversion is the reluctance of a person to accept a bargain with an uncertain payoff rather than
another bargain with a more certain, but possibly lower, expected payoff.
In theory, three possible profiles in risk tolerance or attitudes toward risk exist. They are: aversion to
risk, neutrality to risk, and preference for risk. Each decision maker's attitude toward risk is determined
by his or her utility of income or wealth.
A person is given the choice between two scenarios, one with a guaranteed payoff and one without. In
the guaranteed scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide
whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning
that an individual who was insensitive to risk would not care whether they took the guaranteed
payment or the gamble. However, individuals differ in their risk attitudes. A person is said to be:
Risk-averse (or risk-avoiding) if he or she would accept a certain payment (certainty equivalent) of less
than $50 (for example, $40) rather than take the gamble and possibly receive nothing. A risk-averse
decision maker displays diminishing marginal utility of income or wealth. This is a common, but not
universal, attitude. A risk-averse manager tends to choose options that have very little variation about
their expected monetary returns.
Risk-neutral if he or she is indifferent between the bet and a certain $50 payment. A risk-indifferent
decision maker has a constant marginal utility of income or wealth, and neither seeks nor avoids risk.
He or she will choose investments for which the expected monetary values equal the subjectively
perceived utility values (expressed in utils, an arbitrary measure). Thus the utility function is linear:
monetary amounts and utilities stand in a constant, directly proportional relationship.
Risk-loving (or risk-seeking) if he or she would accept the bet even when the guaranteed payment is
more than $50 (for example, $60). A risk-seeker's marginal utility of income or wealth increases: the
utility function rises at an increasing rate, so riskier investments are appealing. A risk-seeking manager
prefers investments that have the potential for large gains even though large losses may also be
possible.
Managerial decisions cannot be based solely on expected outcomes; they must also incorporate an
analysis of risk attitudes. ……… shows the relation between money and its utility for the three types of
decision makers: (a) a risk averter, (b) a risk-neutral manager, and (c) a risk seeker.
The average payoff of the gamble, known as its expected value, is $50. The dollar amount that the
individual would accept instead of the bet is called the certainty equivalent, and the difference between
the expected value and the certainty equivalent is called the risk premium. For risk-averse individuals
the risk premium is positive, for risk-neutral persons it is zero, and for risk-loving individuals it is
negative.
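Under an assumed square-root utility (illustrative only), the certainty equivalent and risk premium of the $100-or-nothing coin flip can be computed directly:

```python
import math

U = math.sqrt  # assumed risk-averse utility

def U_inv(u):
    """Inverse of sqrt on non-negative values."""
    return u ** 2

payoffs, probs = [100, 0], [0.5, 0.5]
ev = sum(p * x for p, x in zip(probs, payoffs))     # expected value: $50
eu = sum(p * U(x) for p, x in zip(probs, payoffs))  # expected utility: 5
ce = U_inv(eu)                                      # certainty equivalent: $25
premium = ev - ce                                   # risk premium: $25 > 0
print(ev, ce, premium)
```

For this utility the person would accept as little as $25 for certain instead of the gamble; a less curved utility would give a certainty equivalent closer to the $40 mentioned above.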
2.3 Risk Aversion and Insurance
Most people buy insurance for cars, homes, and pretty much anything they consider valuable. The
motivation is simple: even though the probability of losing or damaging the insured item may be small,
the potential loss would be so large that most people would rather pay a reasonable amount of money
for certain than lose a great amount with a very small probability.
But there's another side to this - the insurance companies. If it's logical for people to buy insurance, how
come it's also logical to sell it? After all, insurance companies are in the business to make a profit. If
individuals are better off paying comparatively small, fixed amounts at regular intervals (as an insurance
premium) than risking a large loss, how is the insurance company better off by accepting these small
amounts, and agreeing to risk a large loss? If buyers of insurance are indeed risk-averse - what about
those who sell it?
Insurance is a classic illustration of the difference between risk-aversion and risk-neutrality. We can see
how risk-averse individuals will always choose to insure valuable assets, since although the probability
of a loss may be small, the potential loss of the asset itself would be so large that most people would
rather pay small amounts of money as a premium for certain than risk the loss. Insurance companies are
risk-neutral, and earn their profits from the fact that the value of the premiums they receive is either
greater than or equal to the expected value of the loss.
We assume that, apart from his own wealth, an individual deciding whether to insure also knows for
certain the probability of a loss or accident. Suppose a risk-averse consumer has initial wealth W and a
von Neumann-Morgenstern utility function U(W). You own a car of value L, and the probability of an
accident that would total the car is p. If x is the amount of insurance you can purchase, how much
should x be?
The answer to this question depends, very simply, on the price of insurance - the premium you'd have to
pay. Let's say this price is r, for $1 worth of insurance, so for $x of insurance, you'd be paying $rx as a
premium.
For insurance to be actuarially fair, the insurance company should have zero expected profit. We can set
up its problem as follows: with probability p, the insurance company must pay $x while receiving $rx in
premiums; with probability (1 − p), it pays nothing and keeps the $rx premium. Its expected profit is
therefore

p(rx − x) + (1 − p)rx = 0

which gives r = p.
So for insurance to be actuarially fair, the premium rate must equal the probability of an accident.
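The zero-expected-profit condition can be verified numerically; with r = p the insurer breaks even on average at any coverage level x (the figures below are illustrative):

```python
def expected_profit(p, r, x):
    """Insurer pays x with probability p and always collects the premium r*x."""
    return p * (r * x - x) + (1 - p) * r * x

# Fair premium r = p: expected profit is 0.0 at every coverage level.
for x in (0, 5_000, 20_000):
    print(expected_profit(0.25, 0.25, x))

# Loaded premium r > p: expected profit turns positive.
print(expected_profit(0.25, 0.30, 20_000) > 0)  # True
```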
A person might be willing to pay some amount to avoid participating in a gamble. This helps to explain
why some individuals purchase insurance.
Figure 2
W″ provides the same utility as participating in gamble 1 (it is the certainty equivalent). The individual
will be willing to pay up to W* − W″ to avoid participating in the gamble. An individual who always
refuses fair bets is said to be risk averse: he will exhibit diminishing marginal utility of income and will
be willing to pay to avoid taking fair bets.
In actual practice, even if the premium does not equal the probability of an accident, it certainly
depends on it - which is why different demographic groups pay widely differing automobile insurance
premiums. Since single men under the age of 25 have the highest accident risk, they also pay the highest
premiums.
Given actuarially fair insurance, where r = p, you would solve: max over x of p·u(w − px − L + x) +
(1 − p)·u(w − px), since in case of an accident your total wealth would be w, less the loss L suffered in
the accident, less the premium paid, plus the amount received from the insurance company.
Differentiating with respect to x, and setting the result equal to zero, we get the first-order necessary
condition as: (1-p)pu'(w - px - L + x) - p(1-p)u'(w - px) = 0,
Risk-aversion implies u" < 0, so that equality of the marginal utilities of wealth implies equality of the
wealth levels, i.e.
w - px - L + x = w - px,
so we must have x = L.
So, given actuarially fair insurance, you would choose to fully insure your car. Since you're risk-averse,
you'd aim to equalize your wealth across all circumstances - whether or not you have an accident.
However, if r is greater than p, we will have x < L; you would under-insure. How much you would
under-insure depends on how much greater r is than p.
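A grid-search sketch (log utility assumed for illustration) confirms both conclusions: full insurance at the fair premium, partial insurance once r exceeds p.

```python
import math

w, L, p = 100_000.0, 20_000.0, 0.25  # wealth, loss, accident probability

def eu(x, r):
    """Expected log utility with coverage x bought at premium rate r."""
    return p * math.log(w - r * x - L + x) + (1 - p) * math.log(w - r * x)

def best_coverage(r, step=100.0):
    grid = [i * step for i in range(int(L / step) + 1)]
    return max(grid, key=lambda x: eu(x, r))

print(best_coverage(r=0.25))  # fair premium (r = p): full insurance, 20000.0
print(best_coverage(r=0.26))  # loaded premium (r > p): coverage below 20000
```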
2.4 Absolute risk aversion
The greater the curvature of U(W), the higher the risk aversion. However, since expected utility
functions are unique only up to affine transformations, a measure that stays constant under these
transformations is needed. The most commonly used measure is the Arrow-Pratt4 measure of absolute
risk aversion (ARA), also known as the coefficient of absolute risk aversion, defined as

r(W) = −U″(W) / U′(W)
For risk-averse individuals, U″(W) < 0, so:
• r(W) will be positive for risk-averse individuals
• r(W) is not affected by which von Neumann-Morgenstern ordering is used
The Arrow-Pratt measure of risk aversion is proportional to the amount an individual will pay to avoid a
fair gamble.
Exponential utility of the form U(W) = 1 − e^(−αW) is unique in exhibiting constant absolute risk
aversion (CARA): r(W) = α is constant with respect to W.
Experimental and empirical evidence is mostly consistent with decreasing absolute risk aversion.
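The CARA property can be checked with finite differences (the parameter α = 0.5 below is an arbitrary choice): the numerically computed −U″(W)/U′(W) stays at α across wealth levels.

```python
import math

alpha, h = 0.5, 1e-3

def U(W):
    """Exponential (CARA) utility."""
    return 1.0 - math.exp(-alpha * W)

def abs_risk_aversion(W):
    """Numerical r(W) = -U''(W)/U'(W) via central differences."""
    U1 = (U(W + h) - U(W - h)) / (2 * h)
    U2 = (U(W + h) - 2 * U(W) + U(W - h)) / h ** 2
    return -U2 / U1

for W in (1.0, 5.0, 10.0):
    print(round(abs_risk_aversion(W), 6))  # approximately 0.5 at every W
```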
To see this proportionality, let h be a fair gamble (E(h) = 0) and define the risk premium p by
E[U(W + h)] = U(W − p). We expand both sides using Taylor series. Because p is a fixed amount, a
simple linear approximation suffices for the right-hand side:

U(W − p) = U(W) − pU′(W) + higher-order terms

For the left-hand side, we need a quadratic approximation to allow for the variability of the gamble h:

E[U(W + h)] = E[U(W) + hU′(W) + (h²/2)U″(W) + higher-order terms]
E[U(W + h)] = U(W) + E(h)U′(W) + (E(h²)/2)U″(W) + higher-order terms

Remembering that E(h) = 0, dropping the higher-order terms, and substituting k for E(h²)/2, we get

U(W) − pU′(W) ≅ U(W) + kU″(W)

p ≅ −kU″(W)/U′(W) = k·r(W)
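The approximation p ≈ k·r(W) can be checked against the exact risk premium; log utility and a fair ±$10 gamble at W = $100 are assumed for illustration:

```python
import math

W = 100.0
gamble, probs = [10.0, -10.0], [0.5, 0.5]  # fair gamble: E(h) = 0

eu = sum(q * math.log(W + h) for q, h in zip(probs, gamble))
exact = W - math.exp(eu)                   # p solving U(W - p) = E[U(W + h)]
k = sum(q * h ** 2 for q, h in zip(probs, gamble)) / 2  # k = E(h^2)/2 = 50
approx = k * (1.0 / W)                     # k * r(W), with r(W) = 1/W for log
print(round(exact, 4), approx)  # 0.5013 0.5: close for a small gamble
```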
It is not necessarily true that risk aversion declines as wealth increases. Diminishing marginal utility
would make potential losses less serious for high-wealth individuals. However, diminishing marginal
utility also makes the gains from winning gambles less attractive; the net result depends on the shape of
the utility function. A related measure is relative risk aversion:

rr(W) = W·r(W) = −W·U″(W)/U′(W)

4
after the economists Kenneth Arrow and John W. Pratt
As for absolute risk aversion, we can use the corresponding terms constant relative risk aversion
(CRRA) and decreasing/increasing relative risk aversion (DRRA/IRRA). This measure has the advantage
that it remains a valid measure of risk aversion even if the utility function changes from risk-averse to
risk-loving as W varies, i.e. even if utility is not strictly concave or convex over all W. Constant RRA
implies decreasing ARA, but the reverse is not always true.
As a specific example, the utility function U(W) = log(W) implies RRA = 1.
For the power utility function U(W) = W^R (R < 1, R ≠ 0):

r(W) = −U″(W)/U′(W) = −(R − 1)W^(R−2) / W^(R−1) = −(R − 1)/W

rr(W) = W·r(W) = −(R − 1) = 1 − R
2.6 The State-Preference Approach
The method we have taken up to this point differs from the typical consumer optimization method: we
have not used the fundamental model of utility maximization subject to a budget constraint. We
therefore need to develop new techniques that integrate uncertainty into the standard consumer choice
model.
The payoffs of any non-deterministic event can be categorized by the state of the economy in which
they occur: "boom" or "slump". Instead of classical commodities we use the idea of state-contingent
commodities: goods that are delivered only if a particular state of the economy occurs. The simplest
state-contingent choice is between, say, "a big house in a boom" and "a small apartment in a slump".
It is possible for an individual to purchase a contingent commodity:
• buy a promise that someone will pay you $1 if tomorrow turns out to be good times
• this good will probably cost less than $1
Expected utility across the two states is the value that the individual wants to maximize given his initial
wealth (W)
Assume that the person can buy $1 of wealth in good times for pg and $1 of wealth in bad times for pb
W = pgWg + pbWb
The price ratio pg /pb shows how this person can trade dollars of wealth in good times for dollars in bad
times
Fair Markets for Contingent Goods:
If markets for contingent wealth claims are well-developed and there is general agreement about π,
prices for these goods will be actuarially fair
pg = π and pb = (1- π)
The price ratio will reflect the odds in favor of good times
pg / pb = π / (1 − π)
If contingent claims markets are fair, a utility-maximizing individual will opt for a situation in which
Wg = Wb
He will arrange matters so that the wealth obtained is the same no matter what state occurs
U′(Wg) / U′(Wb) = 1, and hence Wg = Wb
The individual maximizes utility on the certainty line where Wg = Wb. Since the market for contingent
claims is actuarially fair, the slope of the budget constraint = -1 (Figure 3)
Figure 3
If the market for contingent claims is not fair, the slope of the budget line ≠ −1. In this case, utility
maximization may not occur on the certainty line (Figure 4).
Figure 4
Again, consider a person with wealth of $100,000 who faces a 25% chance of losing his automobile
worth $20,000
wealth with no theft (Wg) = $100,000 with probability 0.75; wealth after a theft (Wb) = $80,000 with
probability 0.25. Assuming logarithmic utility, expected utility without insurance is
E(U) = 0.75·ln(100,000) + 0.25·ln(80,000) = 11.45714
The budget constraint is written in terms of the prices of the contingent commodities. Assuming that
these prices equal the probabilities of the two states, the individual will move to the certainty line at
W = 0.75·$100,000 + 0.25·$80,000 = $95,000 and receive an expected utility of ln(95,000) = 11.46163.
To do so, the individual must be able to transform $5,000 of extra wealth in good times into $15,000 of
extra wealth in bad times, and a fair insurance contract allows exactly this.
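The numbers in this example can be reproduced under the log-utility assumption implicit in E(U) = 11.45714:

```python
import math

# Without insurance: $100,000 with probability 0.75, $80,000 with 0.25.
eu_uninsured = 0.75 * math.log(100_000) + 0.25 * math.log(80_000)

# On the certainty line: the actuarially fair certain wealth level.
w_certain = 0.75 * 100_000 + 0.25 * 80_000  # 95000.0
eu_certain = math.log(w_certain)

print(round(eu_uninsured, 5))  # 11.45714
print(round(eu_certain, 5))    # 11.46163 (full insurance is preferred)
```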
2.8 Summary
In uncertain situations, individuals are concerned with the expected utility associated with various
outcomes. If they obey the von Neumann-Morgenstern axioms, they will make choices in a way that
maximizes expected utility. If we assume that individuals exhibit a diminishing marginal utility of
wealth, they will also be risk averse and they will refuse to take bets that are actuarially fair.
Risk averse individuals will wish to insure themselves completely against uncertain events if insurance
premiums are actuarially fair. They may be willing to pay actuarially unfair premiums to avoid taking
risks.
Decisions under uncertainty can be analyzed in a choice-theoretic framework by using the state-
preference approach to contingent commodities. If preferences are state independent and prices are
actuarially fair, individuals will prefer allocations along the "certainty line": they will receive the same
level of wealth regardless of which state occurs.
2.9 Addendum
2.9.1 The Preference Axioms
In order to construct a utility function over lotteries, or gambles, we make the following assumptions
about people's preferences. We denote the binary preference relation "is weakly preferred to" by ≽,
which includes both "strictly preferred to" (≻) and "indifferent to" (∼).
1. Completeness: For any 2 gambles g and g' in G, either g ≽ g' or g' ≽ g. In English, this means
that people have preferences over all lotteries and can rank them all.
2. Transitivity: For any 3 gambles g, g', and g" in G, if g ≽ g' and g' ≽ g", then g ≽ g". In English,
if g is preferred (or indifferent) to g', and g' is preferred (or indifferent) to g", then g is
preferred (or indifferent) to g".
3. Continuity: Mathematically, this assumption states that the upper and lower contour sets of
a preference relation over lotteries are closed. Along with the other axioms, continuity
ensures that for any gamble in G, there exists some probability such that the decision-maker
is indifferent between that gamble and a mix of the "best" and the "worst" outcome. This may
seem irrational if the best outcome were, say, $1,000, and the worst outcome was being run
over by a car. However, think of it this way: most rational people might be willing to travel
across town to collect a $1,000 prize, and this involves some probability, however tiny, of
being run over by a car.
4. Monotonicity: This simply means that a gamble which assigns a higher probability to a
preferred outcome will be preferred to one which assigns a lower probability to that
outcome, as long as the other outcomes in the gambles remain unchanged. Here we refer to
a strict preference over outcomes, and do not consider the case where the decision-maker is
indifferent between possible outcomes.
5. Substitution: If a decision-maker is indifferent between two possible outcomes, then they will
be indifferent between two lotteries that offer those outcomes with equal probabilities,
provided the lotteries are identical in every other way, i.e., the outcomes can be substituted.
So if outcomes x and y are indifferent, then a lottery giving x with probability p and z with
probability (1 − p) is indifferent to a lottery giving y with probability p and z with probability
(1 − p). Similarly, if x is preferred to y, then a lottery giving x with probability p and z with
probability (1 − p) is preferred to a lottery giving y with probability p and z with probability
(1 − p).
Note: This last axiom is frequently referred to as the Independence axiom, since it refers to
the Independence of Irrelevant Alternatives (IIA).
The last axiom allows us to reduce compound lotteries to simple lotteries, since one is also
indifferent between a simple lottery giving an outcome x with probability p, and a compound
lottery whose prize might be yet another lottery ticket, allowing one to participate in a further
lottery with x as a possible outcome, such that the effective probability of getting x is p.
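The reduction just described can be made concrete with a short sketch. The representation of a lottery as a list of (probability, prize) pairs, where a prize may itself be another such list, is our own convention for illustration, not anything in the text:

```python
from fractions import Fraction

def reduce_compound(lottery):
    """Reduce a compound lottery to a simple one.

    A lottery is a list of (probability, prize) pairs; a prize may itself
    be another lottery (another such list). The effective probability of
    each final outcome is the product of the probabilities along its path.
    """
    simple = {}
    for p, prize in lottery:
        if isinstance(prize, list):                  # prize is a sub-lottery
            for outcome, q in reduce_compound(prize).items():
                simple[outcome] = simple.get(outcome, 0) + p * q
        else:
            simple[prize] = simple.get(prize, 0) + p
    return simple

# A two-stage ticket: with prob 1/2 win x outright; with prob 1/2 receive a
# further ticket paying x with prob 1/2 and y with prob 1/2.
inner = [(Fraction(1, 2), "x"), (Fraction(1, 2), "y")]
compound = [(Fraction(1, 2), "x"), (Fraction(1, 2), inner)]
print(reduce_compound(compound))   # {'x': Fraction(3, 4), 'y': Fraction(1, 4)}
```

A decision-maker satisfying the substitution axiom is indifferent between the compound ticket above and a simple lottery paying x with probability 3/4 and y with probability 1/4.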
A utility function u is said to have the expected utility property if, for a gamble g with outcomes {a1,
a2,...,an} and effective probabilities p1, p2,...,pn respectively, we have:
u(g) = p1 u(a1) + p2 u(a2) + ... + pn u(an)
An individual who chooses one gamble over another if and only if its expected utility is higher is
an expected utility maximizer.
von Neumann and Morgenstern proved that, as long as all the preference axioms hold, a utility
function exists, and it satisfies the expected utility property.
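As an illustration of an expected utility maximizer (the log utility function and the dollar amounts are arbitrary choices for this sketch, not figures from the text):

```python
import math

def expected_utility(gamble, u):
    """Expected utility of a gamble given as (probability, outcome) pairs."""
    return sum(p * u(a) for p, a in gamble)

# Hypothetical decision-maker with logarithmic (risk-averse) utility.
u = math.log

safe  = [(1.0, 1000)]                 # $1,000 for sure
risky = [(0.5, 2000), (0.5, 100)]     # expected value $1,050, but more spread

# The expected utility maximizer picks whichever gamble has higher EU.
choice = max([safe, risky], key=lambda g: expected_utility(g, u))
print(choice is safe)   # True: ln is concave, so the sure $1,000 wins
```

Note that the risky gamble has the higher expected monetary value; the sure gamble wins only because the concavity of u encodes risk aversion.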
If utility is quadratic,
U(W) = a + bW + cW², with c < 0
Pratt's risk aversion measure is
r(W) = −U″(W) / U′(W) = −2c / (b + 2cW)
Risk aversion increases as wealth increases.
If utility is logarithmic,
U(W) = ln(W), where W > 0
then
r(W) = −U″(W) / U′(W) = 1/W
and risk aversion decreases as wealth increases.
If utility is exponential,
U(W) = −e^(−AW), with A > 0
then
r(W) = −U″(W) / U′(W) = A²e^(−AW) / (Ae^(−AW)) = A
so risk aversion is constant in wealth.
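The three cases can be checked numerically. The sketch below approximates U′ and U″ by central differences; the parameter values (a, b, c, A) are illustrative only:

```python
import math

def pratt(u, W, h=1e-2):
    """Arrow-Pratt absolute risk aversion r(W) = -U''(W)/U'(W),
    with both derivatives approximated by central differences."""
    u1 = (u(W + h) - u(W - h)) / (2 * h)
    u2 = (u(W + h) - 2 * u(W) + u(W - h)) / h ** 2
    return -u2 / u1

A = 0.02                                          # illustrative CARA coefficient
quad = lambda W: 10 + 2 * W - 0.001 * W ** 2      # quadratic: a=10, b=2, c=-0.001
log  = lambda W: math.log(W)                      # logarithmic
expo = lambda W: -math.exp(-A * W)                # exponential

for W in (100.0, 200.0):
    print(pratt(quad, W),    # -2c/(b+2cW): increases with W
          pratt(log, W),     # 1/W: decreases with W
          pratt(expo, W))    # constant, equal to A
```

Running the loop confirms the three patterns: increasing, decreasing, and constant absolute risk aversion.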
3 Alternatives to Expected Utility - Prospect Theory
Over time, researchers have become all too aware of the limitations of expected utility theory,
especially those raised by the St. Petersburg, Allais, and Ellsberg paradoxes. As a result, numerous
alternative theories have been developed to overcome the limitations of expected utility theory without
losing its explanatory power. Prospect theory is perhaps the most well-known of these alternative
theories.
Prospect theory5 is a behavioral economic theory that describes decisions between alternatives that
involve risk (i.e. alternatives with uncertain outcomes) where the probabilities are known. The theory
states that people make decisions based on the potential value of losses and gains rather than the final
outcome, and that people evaluate these losses and gains using certain heuristics. The model is
descriptive: it tries to model real-life choices rather than optimal decisions. It offers a psychologically
more accurate description of preferences than expected utility theory.
Allais questioned the naturalness of Expected Utility based choices by devising the following
questionnaire.
Question 1. Choose between:
x1: $1,000,000 with certainty, and
x2: $5,000,000 with probability 0.10, $1,000,000 with probability 0.89, and nothing with probability 0.01.
Question 2. Choose between:
y1: $1,000,000 with probability 0.11, and nothing with probability 0.89, and
y2: $5,000,000 with probability 0.10, and nothing with probability 0.90.
Allais found that the majority answers were x1 to question 1 and y2 to question 2, and argued that this
pair of prospects could indeed be chosen for good reasons. But it violates EUT, since there is no function
U that would both satisfy:
U(1,000,000) > 0.10 U(5,000,000) + 0.89 U(1,000,000) + 0.01 U(0)
and
0.10 U(5,000,000) + 0.90 U(0) > 0.11 U(1,000,000) + 0.89 U(0).
Subtracting 0.89 U(1,000,000) from both sides of the first inequality, and 0.89 U(0) from both sides of
the second, shows that the two demand opposite rankings of 0.11 U(1,000,000) against
0.10 U(5,000,000) + 0.01 U(0).
5
The theory was developed by Daniel Kahneman and Amos Tversky in 1979 as a psychologically more accurate
description of decision making, compared to expected utility theory. In the original formulation the term
prospect referred to a lottery.
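This contradiction can be verified mechanically. Using the standard Allais figures ($0, $1M, $5M with the probabilities above), and since expected utility is invariant to positive affine transformations, we may normalize U($0) = 0 and U($5M) = 1, leaving only t = U($1M) free; the sketch below scans a grid of t values:

```python
# Normalize U($0) = 0 and U($5M) = 1; only t = U($1M) remains free.
prefers_x1 = lambda t: t > 0.10 * 1 + 0.89 * t + 0.01 * 0            # question 1
prefers_y2 = lambda t: 0.10 * 1 + 0.90 * 0 > 0.11 * t + 0.89 * 0     # question 2

both = [t for t in (i / 1000 for i in range(1001))
        if prefers_x1(t) and prefers_y2(t)]
print(both)   # [] -- no t in [0, 1] supports both majority answers
```

Question 1 requires t > 10/11 while question 2 requires t < 10/11, so the list is empty for any grid.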
Although the word “paradox” is frequently used in the history of EUT, it should be clear from these two
famous examples that it does not refer to deeply ingrained conceptual difficulties, such as Russell’s
paradox in set theory, or the EPR paradox in physics, but rather just to problems or anomalies for the
theory that is currently taken for granted -- expected monetary value theory and EUT, respectively.
In 1961, Daniel Ellsberg published the results of a hypothetical experiment he had conducted, which, to
many, constitutes an even worse violation of the expected utility axioms than the Allais Paradox. The
subjects of his thought experiment ran the gamut of noted economists of the time.
Subjects are presented with 2 urns. Urn I contains 100 red and black balls, but in an unknown ratio. Urn
II has exactly 50 red and 50 black balls. Subjects must choose an urn to draw from, and bet on the color
that will be drawn - they will receive a $100 payoff if that color is drawn, and $0 if the other color is
drawn. Subjects must decide which they would rather bet on: a black draw from Urn I, or a black draw
from Urn II (and likewise for red). Most subjects prefer to bet on Urn II, whose odds are known,
whichever color is named - a pattern that no single belief about the composition of Urn I can rationalize.
A second version uses a single urn containing 30 red balls and 60 balls that are yellow or green in an
unknown proportion. Subjects choose among four bets:
A. The ball drawn is red
B. The ball drawn is green
C. The ball drawn is red or yellow
D. The ball drawn is green or yellow
Most people choose A over B and then choose D over C. Choosing A over B seems to be invoking a belief
that there are more yellow balls than green balls. Choosing D over C seems to be invoking a belief that
there are more green balls than yellow balls. It is logically inconsistent to choose A over B and then
choose D over C.
Many explanations have been proposed as to why this happens. The choices of A and D are precisely
those whose payoff probabilities do not depend on the unknown number of green balls - they eliminate
the uncertainty. If you prefer A over B, logic demands that you prefer C over D.
The Ellsberg paradox is thus based on the difference between two types of randomness - measurable
risk and unmeasurable uncertainty - and the problems this difference poses for utility theory. One is
faced with an urn that contains 30 red balls and 60 balls that are either all yellow or all black, and one
then draws a ball from the urn. This poses both uncertainty - whether the non-red balls are all yellow or
all black - and risk - whether the ball is red or non-red, with probabilities ⅓ vs. ⅔. Preferences expressed
in choices over this situation reveal that people do not treat these two kinds of randomness the same.
This is also termed "ambiguity aversion".
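The inconsistency can be checked by brute force. The sketch below uses a slightly more general version of the urn (30 red balls, 60 balls in any yellow/black mix) and the typical pattern of choices - betting on red rather than black, but on black-or-yellow rather than red-or-yellow - and scans every possible number of black balls:

```python
# Urn: 30 red balls plus 60 balls that are yellow or black in an unknown split.
# Let n = the number of black balls a subjective-probability bettor believes
# is in the urn (0..60). Betting red over black requires believing n < 30;
# betting black-or-yellow (60 winning balls) over red-or-yellow
# (30 + (60 - n) winning balls) requires 60 > 90 - n, i.e. n > 30.
consistent = [n for n in range(61)
              if 30 > n                       # red preferred to black
              and 60 > 30 + (60 - n)]         # black-or-yellow preferred
print(consistent)   # [] -- no single belief rationalizes both typical bets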
3.3 Limitations of Expected Utility Theory (EUT)
Expected Utility Theory added to neoclassical utility theory the idea of decision making under
uncertainty. People's preferences with regard to uncertain outcomes are represented by a function of
the payouts, the probabilities of occurrence, and risk aversion. People with different assets or personal
preferences will place different utilities (i.e. personal values) on different risks or uncertainties.
Expected Utility theory was a big advance. Prior to it, economic theory did not take into consideration
decision making under uncertainty, or the fact that people's attitudes toward risk affect the value they
place on outcomes with different risks, i.e. probabilities of events taking place.
Kahneman and Tversky found systematic violations of expected utility theory in actual behavior:
3.3.1 The Certainty Effect
In 1979, Daniel Kahneman and Amos Tversky conducted a series of thought experiments testing the
Allais Paradox in Israel, at the University of Stockholm, and at the University of Michigan. Everywhere,
the results followed the same pattern. The problem was even framed in many different ways, with
prizes involving money, vacations, and so on. In each case, the substitution axiom6 was violated in
exactly the same pattern. Kahneman and Tversky called this pattern the certainty effect - meaning,
people overweight outcomes that are certain, relative to outcomes which are merely probable.
Using the term "prospect" to refer to lotteries or gambles (i.e. a set of outcomes with a probability
distribution over them), Kahneman and Tversky also state that where winning is possible but not
probable, i.e. when probabilities are low, most people choose the prospect that offers the larger gain.
This is illustrated by the second decision stage in the Allais Paradox.
A change produces a greater reduction in desirability when it alters the character of the prospect
from a sure gain to a probable one than when both the original and the reduced prospects are uncertain.
Example:
Choose between:
A. (4,000, 0.80) or
B. (3,000, 1)
Choose between:
C. (4,000, 0.20)
D. (3,000, 0.25)
Between A and B:
20% usually choose A
80% usually choose B
Between C and D:
65% usually choose C
35% usually choose D
Note that prospect C = (4,000, 0.20) can be expressed as (A, 0.25), and prospect D = (3,000, 0.25) can be
rewritten as (B, 0.25); yet B is preferred to A, while (A, 0.25) is preferred to (B, 0.25). Reducing the
probability of winning from certainty to a 25% chance has a larger effect than reducing it from an 80%
chance to a 20% chance.
So people overweight outcomes that are considered certain, relative to outcomes that are merely
probable.
6
Substitution: If a decision-maker is indifferent between two possible outcomes, then they will be indifferent
between two lotteries which offer those outcomes with equal probabilities, provided the lotteries are identical
in every other way, i.e., the outcomes can be substituted.
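No expected utility function can rationalize this modal pattern. The sketch below scans power utilities u(x) = x^α with u(0) = 0 - an illustrative family chosen for this check, not one used by Kahneman and Tversky - and finds none satisfying both B ≻ A and C ≻ D:

```python
# Modal choices: B = (3000, 1) over A = (4000, 0.80), and
# C = (4000, 0.20) over D = (3000, 0.25). With u(0) = 0, B over A means
# u(3000) > 0.8*u(4000), while C over D means 0.2*u(4000) > 0.25*u(3000),
# i.e. exactly the reverse inequality scaled by 0.25.
alphas = [i / 1000 for i in range(1, 2001)]           # alpha in (0, 2]
ok = [a for a in alphas
      if 3000 ** a > 0.8 * 4000 ** a                  # B preferred to A
      and 0.2 * 4000 ** a > 0.25 * 3000 ** a]         # C preferred to D
print(ok)   # [] -- no such expected-utility maximizer exists
```

The first condition forces α below ln(0.8)/ln(0.75) ≈ 0.776 and the second forces α above the same threshold, so the scan comes back empty.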
The certainty effect is the psychological effect that results from the reduction of a probability from
certain to merely probable. Normally a reduction in the probability of winning some reward, say from
an 80% chance to a 20% chance, creates a "displeasure" for individuals, which leads to a perception of
loss (relative to the original probability) and favors a risk-averse decision. The same reduction has a
larger psychological effect when the change starts from certainty than when both probabilities are
uncertain.
The certainty effect refers to preferences between positive prospects (i.e. no losses). What happens
when the signs of the outcomes are reversed, so that people have to consider risky decisions which may
involve actual losses? Table… displays choice problems with positive and negative prospects.
In each of the four problems presented in the table, the preference between negative prospects is the
mirror image of the preference between positive prospects. This is the "reflection effect".
Kahneman and Tversky also found strong evidence of what they referred to as the reflection effect. To
illustrate it, imagine an Allais Paradox-type problem, framed in the following way. You must choose
between one of the two gambles, or prospects:
Gamble A: A certain loss of $3000.
Gamble B: An 80% chance of losing $4000, and a 20% chance of losing nothing.
Next, you must choose between:
Gamble C: A certain gain of $3000.
Gamble D: An 80% chance of receiving $4000, and a 20% chance of receiving nothing.
Kahneman and Tversky found that 20% of people chose D, while 92% chose B. A similar pattern held
for varying positive and negative prizes and probabilities. This led them to conclude that when decision
problems involve not just possible gains but also possible losses, people's preferences over negative
prospects are more often than not a mirror image of their preferences over positive prospects. Simply
put - while they are risk-averse over prospects involving gains, people become risk-loving over prospects
involving losses.
As long as prospects are in the positive domain, the certainty effect leads to a risk-averse preference for
a sure gain rather than one which may be larger but is merely probable. However, once prospects are
in the negative domain, people exhibit risk-loving preferences for larger losses which are merely
probable, rather than smaller certain ones.
One might imagine that if this finding held universally, one would never observe people buying
insurance. As we will see in the section on probability transformations, what this really implies is that
risk seeking is predicted in the domain of losses with moderate or high probabilities. Prospect theory
does, in fact, predict risk aversion for small-probability losses, which is normally the case with insurance.
Imagine yet another lottery-choice problem. Given a choice between the following, which would you
choose?
Gamble A: A 25% chance of receiving $3000, and a 75% chance of receiving nothing.
Gamble B: A 20% chance of receiving $4000, and an 80% chance of receiving nothing.
Now imagine you are faced with a two-stage problem. The first stage involves a 0.75 probability of
ending the game without winning or losing anything, and a 0.25 probability of moving to the second
stage, where you are presented with the following choice:
Gamble C: A certain gain of $3000.
Gamble D: An 80% chance of receiving $4000, and a 20% chance of receiving nothing.
65% of people chose B, while 78% chose C. Why is this surprising? Well, the true probabilities involved in
the second choice are: a 0.25 × 0.80 = 0.20 chance of receiving $4000, and a 0.25 × 1 = 0.25 chance of
receiving $3000 - exactly the probabilities offered in the first choice.
Kahneman and Tversky interpreted this finding in the following manner - in order to simplify the choice
between alternatives, people frequently disregard components that the alternatives share, and focus on
those which distinguish them. Since different choice problems can be decomposed in different ways,
this can lead to inconsistent preferences, as above. They call this phenomenon the isolation effect.
3.4 Two stages decision process: editing and evaluation
The theory describes the decision processes in two stages: editing and evaluation. During editing,
outcomes of a decision are ordered according to a certain heuristic. In particular, people decide which
outcomes they consider equivalent, set a reference point and then consider lesser outcomes as losses
and greater ones as gains.
The editing phase aims to mitigate framing effects. It also aims to resolve isolation effects stemming
from individuals' propensity to isolate consecutive probabilities instead of treating them together.
Editing Phase:
• Coding: Outcomes are coded as gains or losses. The reference point can be sensitive to
presentation effects and expectations of the decision maker.
• Combination: Probabilities associated with identical outcomes are combined.
• Segregation: The riskless component of a prospect is segregated from its risky component
during decision making.
• Cancellation: Common components will be discarded in the editing phase. This drives many
isolation effects.
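The first two editing operations can be sketched in a few lines, assuming a prospect is simply a list of (outcome, probability) pairs stated in final wealth (our own representation for illustration):

```python
def code_outcomes(prospect, reference):
    """Coding: restate outcomes as gains/losses relative to a reference point."""
    return [(x - reference, p) for x, p in prospect]

def combine(prospect):
    """Combination: merge entries that share an identical outcome."""
    merged = {}
    for x, p in prospect:
        merged[x] = merged.get(x, 0) + p
    return sorted(merged.items())

raw = [(1200, 0.25), (1200, 0.25), (900, 0.50)]    # stated in final wealth
edited = combine(code_outcomes(raw, reference=1000))
print(edited)   # [(-100, 0.5), (200, 0.5)]
```

Note that the output depends on the reference point: the same final-wealth prospect is coded entirely as gains if the reference is 800, which is exactly why framing matters.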
In the subsequent evaluation phase, people behave as if they would compute a value (utility), based on
the potential outcomes and their respective probabilities, and then choose the alternative having a higher
utility.
Evaluation Phase:
Each probability p has a decision weight, π(p), associated with it. We require that π(0) = 0 and π(1) = 1.
Small probability events are generally overweighted. This implies that π(p) > p for small values of p and
π(p) < p for high values of p. It need not be true (and generally isn't) that π(p) + π(1 – p) = 1. This is
known as "subcertainty."
The outcome is evaluated via a "value function." This serves much the same role as a utility function.
The value function is generally concave for gains and convex for losses. This gives us reflection – risk
aversion over gains and risk loving over losses. The value function is steeper for losses than for gains,
giving us "loss aversion."
The overall value of a gamble is given by the following equation for a "regular prospect." In spite of its
apparent similarity to expected utility, this differs from expected utility in how probabilities are handled
and how outcomes are valued:
V(x, p; y, q) = π(p)v(x) + π(q)v(y)
The formula that Kahneman and Tversky assume for the evaluation phase is (in its simplest form) given
by
U = w(p1)v(x1) + w(p2)v(x2) + ... + w(pn)v(xn)
where U is the overall or expected utility of the outcomes to the individual making the decision, x1, x2, x3,
..., xn are the potential outcomes, p1, p2, p3, ..., pn their respective probabilities, and v a function that
assigns a value to an outcome. The value function (sketched in the Figure), which passes through the
reference point, is s-shaped and asymmetrical: losses hurt more than gains feel good (loss aversion).
This differs from expected utility theory, in which a rational agent is indifferent to the reference point. In
expected utility theory, the individual only cares about absolute wealth, not relative wealth in any given
situation. The function w is a probability weighting function and captures the idea that people tend to
overreact to small probability events, but underreact to large probabilities.
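The evaluation phase can be sketched with the parametric forms Tversky and Kahneman later estimated for their 1992 cumulative version of the theory: v(x) = x^α for gains and −λ(−x)^α for losses, and w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ). The median estimates α = 0.88, λ = 2.25, γ = 0.61 are used purely for illustration, not as part of the original 1979 formulation:

```python
A, LAM, G = 0.88, 2.25, 0.61       # Tversky-Kahneman (1992) median estimates

def v(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** A if x >= 0 else -LAM * (-x) ** A

def w(p):
    """Probability weighting function: w(0)=0, w(1)=1, inverse-S shaped."""
    return p ** G / (p ** G + (1 - p) ** G) ** (1 / G)

def value(prospect):
    """Overall value V = sum of w(p)*v(x); prospect = [(x, p), ...]."""
    return sum(w(p) * v(x) for x, p in prospect)

print(w(0.01) > 0.01)          # True: small probabilities are overweighted
print(w(0.90) < 0.90)          # True: large probabilities are underweighted
print(w(0.5) + w(0.5) < 1)     # True: "subcertainty" -- weights need not sum to 1
print(-v(-100) > v(100))       # True: loss aversion
print(value([(100, 0.5), (-100, 0.5)]) < 0)   # a fair coin flip is rejected
```

The last line is loss aversion at work: a 50/50 gamble over +$100/−$100 has zero expected value but negative prospect-theoretic value, so it is declined.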
3.5 Summary
Given the effects observed above, Kahneman and Tversky designed a new theory of decision-making
under risk, which they named prospect theory.
Prospect theory differs from expected utility theory in many fundamental ways. To begin with, it
distinguishes two phases in the decision-making process: an editing phase, which is a preliminary
analysis of the offered prospects, and an evaluation phase, in which the prospect with the highest
value is chosen from among the edited prospects.