Lecture 1 Utility 2011

Utility theory provides a framework for measuring investors' preferences for risk and return. It allows investors to be modeled as maximizing expected utility. Key concepts include utility functions, risk aversion, certainty equivalents, and positive affine transformations of utility functions. An example shows how a risk-averse investor will not participate in a fair gamble with an expected return of 0%.

Utility Theory

Utility functions give us a way to measure investors' preferences for wealth and the amount of risk they are willing to undertake in the hope of attaining greater wealth. This makes it possible to develop a theory of portfolio optimization. Thus utility theory lies at the heart of modern portfolio theory.

We develop the basic concepts of the theory through a series of simple examples. We discuss non-satiation, risk aversion, the principle of expected utility maximization, fair bets, certainty equivalents, portfolio optimization, coefficients of risk aversion, iso-elasticity, relative risk aversion, and absolute risk aversion. Our examples of possible investments are deliberately oversimplified for the sake of exposition. While they are much too simple to be directly relevant for real-life applications, they lay the foundation upon which the more complicated relevant theories are developed.

An Introduction to Utility Theory


A utility function is a twice-differentiable function of wealth U(y) defined for y > 0 which has the properties of non-satiation (the first derivative U'(y) > 0) and risk aversion (the second derivative U''(y) < 0).

A utility function measures an investor's relative preference for different levels of total wealth.

The non-satiation property states that utility increases with wealth, i.e., that more wealth is preferred to less wealth, and that the investor is never satiated: the investor never has so much wealth that getting more would not be at least a little bit desirable.

The risk aversion property states that the utility function is concave or, in other words, that the marginal utility of wealth decreases as wealth increases. To see why utility functions are concave, consider the extra (marginal) utility obtained by the acquisition of one additional dollar. For someone who only has one dollar to start with, obtaining one more dollar is quite important. For someone who already has a million dollars, obtaining one more dollar is nearly meaningless. In general, the increase in utility caused by the acquisition of an additional dollar decreases as wealth increases. It may not be obvious what this concavity of utility functions or decreasing marginal utility of wealth has to do with the notion of risk aversion. This should become clear in the examples.
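To make the decreasing marginal utility concrete, here is a small Python sketch (illustrative only, not part of the original lecture) using the square root utility function that appears in Example 1 below. It shows how the utility gained from one additional dollar shrinks as wealth grows.

import math

def u(wealth):
    # Square-root utility: one concave function with U'(y) > 0 and U''(y) < 0.
    return math.sqrt(wealth)

# The utility gained from one additional dollar shrinks as wealth grows.
for w in (1, 100, 1_000_000):
    print(f"wealth ${w:>9,}: marginal utility of one more dollar = {u(w + 1) - u(w):.7f}")

The extra utility drops from about 0.41 at a wealth of $1 to about 0.0000005 at a wealth of $1,000,000.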

Different investors can and will have different utility functions, but we assume that any such utility function satisfies the two critical properties of non-satiation and risk aversion.

The principle of expected utility maximization states that a rational investor, when faced with a choice among a set of competing feasible investment alternatives, acts to select an investment which maximizes his expected utility of wealth.

Expressed more formally, for each investment I in a set of competing feasible investment alternatives F, let X(I) be the random variable giving the ending value of the investment for the time period in question. Then a rational investor with utility function U faces the optimization problem of finding an investment I_opt ∈ F for which:

E(U(X(I_opt))) = max_{I ∈ F} E(U(X(I)))
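A minimal Python sketch of this optimization problem (not from the original notes; the two investments below are hypothetical placeholders): each feasible investment is represented as a list of (probability, ending wealth) outcomes, and the investor picks the alternative with the highest expected utility.

def expected_utility(outcomes, utility):
    # outcomes is a list of (probability, ending_wealth) pairs.
    return sum(p * utility(w) for p, w in outcomes)

def optimal_investment(feasible_set, utility):
    # Return the investment in the feasible set with the largest expected utility.
    return max(feasible_set, key=lambda outcomes: expected_utility(outcomes, utility))

safe  = [(1.0, 100.0)]                  # $100 for certain
risky = [(0.5, 80.0), (0.5, 130.0)]     # expected value $105, but uncertain
print(optimal_investment([safe, risky], lambda w: w ** 0.5))

With the square root utility shown here, the risky alternative happens to have the higher expected utility (about 10.17 versus 10.00); which alternative wins in general depends on the curvature of the utility function, as the examples below work through by hand.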

Example 1 - A Fair Game

As a first example, consider an investor with a square root utility function:

U(y) = √y = y^0.5

Note that:

U'(y) = 0.5 y^(-0.5) > 0 and U''(y) = -0.25 y^(-1.5) < 0

so this is a legitimate utility function. We assume that the investor's current wealth is $5.

To make things simple, we assume that there is only one investment available. In this investment a fair coin is flipped. If it comes up heads, the investor wins (receives) $4, increasing his wealth to $9. If it comes up tails, the investor loses (must pay) $4, decreasing his wealth to $1. Note that the expected gain is 0.5 × ($4) + 0.5 × (−$4) = $0. This is called a fair game. This game may seem more like a bet than an investment. To see why it's an investment, consider an investment which costs $5 (all of our investor's current wealth) and which has two possible future values: $1 in the bad case and $9 in the good case. This investment is clearly exactly the same as the coin-flipping game.

Note that we have chosen a very volatile investment for our example. In the bad case, the rate of return is -80%. In the good case, the rate of return is +80%. The expected return is 0%, as is the case in all fair games/bets/investments.

We assume that our investor has only two choices (the set of feasible investment alternatives has only two elements). The investor can either play the game or not play the game (do nothing). Which alternative does the investor choose if he follows the principle of expected utility maximization?

The figure above shows our investor's current wealth and utility, the wealth and utility of the two possible outcomes in the fair game, and the expected outcome and the expected utility of the outcome in the fair game. If the investor refuses to play the game and keeps his $5, he ends up with the same $5, for an expected utility of √5 ≈ 2.24. If the investor plays the game, the expected outcome is the same $5, but the expected utility of the outcome is only 0.5 × √1 + 0.5 × √9 = 0.5 × 1 + 0.5 × 3 = 2. Because the investor acts to maximize expected utility, and because 2.24 is greater than 2, the investor refuses to play the game. In general, a risk-averse investor will always refuse to play a fair game where the expected return is 0%. If the expected return is greater than 0%, the investor may or may not choose to play the game, depending on his utility function and initial wealth.

For example, if the probability of the good outcome in our example were 75% instead of 50%, the expected outcome would be $7, the expected gain would be $2, the expected return would be 40%, and the expected utility would be 2.5. Because 2.5 is greater than 2.24, the investor would be willing to make the investment. The expected return of 40% is a risk premium which compensates the investor for undertaking the risk of the investment.
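These numbers are easy to verify; the following Python snippet (illustrative only) reproduces the expected utilities for doing nothing, the fair game, and the variant with a 75% chance of the good outcome.

import math

u = math.sqrt                              # the investor's square root utility
eu_nothing = u(5.0)                        # keep $5 for sure: ~2.24
eu_fair    = 0.5 * u(1.0) + 0.5 * u(9.0)   # the fair game: 2.00
eu_tilted  = 0.25 * u(1.0) + 0.75 * u(9.0) # good outcome now 75% likely: 2.50

print(f"{eu_nothing:.2f}  {eu_fair:.2f}  {eu_tilted:.2f}")
# The investor declines the fair game (2.00 < 2.24) but accepts the tilted one (2.50 > 2.24).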

Another way of looking at this property of risk aversion is that investors attach greater weight to losses than they do to gains of equal magnitude. In the example above, the loss of $4 is a decrease in utility of 1.24, while the gain of $4 is an increase in utility of only 0.76.

Similarly, for a risk-averse investor, a loss of 2x is more than twice as bad as a loss of 1x, and a gain of 2x is less than twice as good as a gain of 1x. In the example above, with an initial wealth of $5, a loss of $1 is a decrease in utility of 0.24, and a loss of $2 is a decrease in utility of 0.50, more than twice 0.24. A gain of $1 is an increase in utility of 0.21, and a gain of $2 is an increase in utility of 0.41, less than twice 0.21.

In our example, the expected utility of the outcome is 2. The certain wealth value which has the same utility is $4 (2 squared). This value $4 is called the certainty equivalent. If the initial wealth is less than $4, an investor with a square root utility function would choose to play a game where the outcome is an ending wealth of $1 with probability 50% (a loss of less than $3) or an ending wealth of $9 with probability 50% (a gain of more than $5). Put another way, with a current wealth of $4, our investor is willing to risk the loss of about 75% of his current wealth in exchange for an equal chance at increasing his wealth by about 125%, but the investor is not willing to risk any more than this. In general, the certainty equivalent for an investment whose outcome is given by a random variable y is:

Certainty equivalent = c = U^(-1)(E(U(y))), i.e., U(c) = E(U(y))

If an investor with utility function U has current wealth less than c, the investor will consider the investment attractive (although some other investment may be even more attractive). If the investor's current wealth is greater than c, the investor will consider the investment unattractive, because doing nothing has greater expected utility than the investment. If the investor's current wealth is exactly c, the investor will be indifferent between undertaking the investment and doing nothing.

Note that because U is an increasing function, maximizing expected utility is equivalent to maximizing the certainty equivalent.

The certainty equivalent is always less than the expected value of the investment. In our example, the certainty equivalent is c = $4, while the expected value (expected outcome) is E(y) = $5.
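A short Python check of the certainty equivalent calculation (illustrative only): for U(y) = √y the inverse is U^(-1)(x) = x^2, so c = (E(√y))^2.

import math

outcomes = [(0.5, 1.0), (0.5, 9.0)]   # the fair game from Example 1

expected_value   = sum(p * w for p, w in outcomes)             # E(y) = 5.0
expected_utility = sum(p * math.sqrt(w) for p, w in outcomes)  # E(U(y)) = 2.0
certainty_equiv  = expected_utility ** 2                       # U^(-1)(2.0) = 4.0

print(expected_value, expected_utility, certainty_equiv)       # 5.0 2.0 4.0
# The certainty equivalent ($4) is below the expected value ($5), as claimed.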

Positive Affine Transformations


Utility functions are used to compare investments to each other. For this reason, we can scale a utility function by multiplying it by any positive constant and/or translate it by adding any other constant (positive or negative). This kind of transformation is called a positive affine transformation.

For example, with the square root utility function we used above, we could have used any of the following functions instead: 100√y, 50√y + 83, √y − 413, 3√y + 10^8, etc.

The specific numbers appearing as the utility function values on our graphs and in our calculations would be different, but the graphs would all look the same when scaled and translated appropriately, and all our results would be the same.

It is easy to see why this is true in general. Suppose we have constants a > 0 and b and a utility function U. Define another utility function V:

V(y) = aU(y) + b

Note that:

V'(y) = aU'(y) > 0 because a > 0 and U'(y) > 0
V''(y) = aU''(y) < 0 because a > 0 and U''(y) < 0

so V is a valid utility function.

Consider an investment I with outcome given by a random variable y. We can easily see that the certainty equivalent for I is the same under utility function V as it is under utility function U. Let c be the certainty equivalent under U. Then:

c = U^(-1)(E(U(y)))
U(c) = E(U(y))
V(c) = aU(c) + b = aE(U(y)) + b = E(V(y))
c = V^(-1)(E(V(y)))

When we talk about utility functions we often say that two functions are the same when they differ only by a positive affine transformation.
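A quick numerical check of this invariance (illustrative Python, with arbitrary values of a and b): the certainty equivalent of the fair game from Example 1 is the same under U and under V = aU + b.

import math

a, b = 50.0, 83.0                     # arbitrary constants with a > 0
U = math.sqrt

def V(y):
    return a * U(y) + b

outcomes = [(0.5, 1.0), (0.5, 9.0)]   # the fair game from Example 1

# For U(y) = sqrt(y): U^(-1)(x) = x**2; for V: V^(-1)(x) = ((x - b) / a)**2.
c_under_U = sum(p * U(w) for p, w in outcomes) ** 2
c_under_V = ((sum(p * V(w) for p, w in outcomes) - b) / a) ** 2

print(c_under_U, c_under_V)           # both print 4.0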

Example 2 - An All-or-Nothing Investment


For our second example we assume that the investor's current wealth is y = $100 and we begin by using the following utility function:

U(y) = -1,000,000/y^3 = -1,000,000 y^(-3)

U'(y) = 3,000,000 y^(-4) > 0
U''(y) = -12,000,000 y^(-5) < 0

Our utility function is the same as -1/y^3. We use the scale factor 1,000,000 to make the function values easier to read in the wealth neighborhood of $100 which we are investigating. Without the scale factor, the numbers are very small, messy to write, and not easy to interpret at a glance.

As in the first example, we assume that our investor has only one alternative to doing nothing. He may use his entire wealth of $100 to purchase an investment which returns -10% with probability 50% and +20% with probability 50%. We will continue to use this hypothetical investment as a running example through the rest of this lecture.

Note that the expected return on this investment is +5% and the standard deviation of the returns is 15%. This is similar to many common real-life financial investments, except that in real life there are many more than only two possible outcomes.

We continue to assume that there are only two possible outcomes to make the mathematics easier for the sake of exposition.
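The expected return and standard deviation quoted above can be checked with a couple of lines of Python (illustrative only):

# Two equally likely returns: -10% and +20%.
returns = [(0.5, -0.10), (0.5, 0.20)]

mean = sum(p * r for p, r in returns)                        # 0.05 -> expected return +5%
std  = sum(p * (r - mean) ** 2 for p, r in returns) ** 0.5   # 0.15 -> standard deviation 15%

print(f"expected return {mean:.0%}, standard deviation {std:.0%}")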

The next figure shows a graph similar to the one in our first example. In this figure the following points are plotted:

Wealth   Utility   Point
$90      -1.37     Bad outcome
$100     -1.00     Current wealth
$105     -0.86     Utility of expected outcome
$120     -0.58     Good outcome
         -0.98     Expected utility of outcome

The expected utility of the investment is -0.98, which is larger than the utility of doing nothing, which is -1.00. Thus, in this case the investor chooses to make the investment.

The decision looks like a close call in this example. The expected utility of the investment is only slightly larger than that of doing nothing. This leads us to wonder what might happen if we change the exponent in the utility function.

The graph in the next figure shows how the situation changes if we use an exponent of -5 instead of -3 in the utility function:

U(y) = -10^10/y^5 = -10^10 y^(-5)

U'(y) = 5 × 10^10 y^(-6) > 0
U''(y) = -30 × 10^10 y^(-7) < 0

In this figure the following points are plotted:

Wealth   Utility   Point
$90      -1.69     Bad outcome
$100     -1.00     Current wealth
$105     -0.78     Utility of expected outcome
$120     -0.40     Good outcome
         -1.05     Expected utility of outcome

In this case, the expected utility of the investment is -1.05, which is smaller than the utility of doing nothing, which is -1.00. Thus, in this case the investor chooses not to make the investment.
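The arithmetic for both exponents can be reproduced with a few lines of Python (illustrative only):

def u3(y): return -1e6  / y ** 3      # utility with exponent -3
def u5(y): return -1e10 / y ** 5      # utility with exponent -5

outcomes = [(0.5, 90.0), (0.5, 120.0)]   # -10% or +20% on a current wealth of $100

for name, u in (("-3", u3), ("-5", u5)):
    eu_invest  = sum(p * u(w) for p, w in outcomes)
    eu_nothing = u(100.0)
    decision = "invest" if eu_invest > eu_nothing else "do nothing"
    print(f"exponent {name}: {eu_invest:.2f} vs {eu_nothing:.2f} -> {decision}")
# exponent -3: -0.98 vs -1.00 -> invest
# exponent -5: -1.05 vs -1.00 -> do nothing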


This example shows that an investor with a utility function of -1/y^5 is somewhat more risk-averse than is an investor with a utility function of -1/y^3.

