Understanding Economics: Game Theory
Course Guidebook
Jay R. Corrigan
Kenyon College
4840 Westfields Boulevard | Suite 500 | Chantilly, Virginia | 20151‑2299
[phone] 1.800.832.2412 | [fax] 703.378.3819 | [web] www.thegreatcourses.com
LEADERSHIP
PAUL SUIJK President & CEO
BRUCE G. WILLIS Chief Financial Officer
JOSEPH PECKL SVP, Marketing
JASON SMIGEL VP, Product Development
CALE PRITCHETT VP, Marketing
MARK LEONARD VP, Technology Services
DEBRA STORMS VP, General Counsel
KEVIN MANZEL Sr. Director, Content Development
ANDREAS BURGSTALLER Sr. Director, Brand Marketing & Innovation
KEVIN BARNHILL Director of Creative
GAIL GLEESON Director, Business Operations & Planning
PRODUCTION TEAM
TRISH GOLDEN Producer
SUSAN DYER Content Developer
ABBY INGHAM LULL Associate Producer
DANIEL RODRIGUEZ Graphic Artist
BRIAN SCHUMACHER Graphic Artist
OWEN YOUNG Managing Editor
CHRISTIAN MEEKS Editor
CHARLES GRAHAM Assistant Editor
CHRIS HOOTH Audio Engineer
ROBERTO DE MORAES Director
GEORGE BOLDEN Camera Operator
MATTHEW CALLAHAN Camera Operator
VALERIE WELCH Production Assistant
PUBLICATIONS TEAM
FARHAD HOSSAIN Publications Manager
TIM OLABI Graphic Designer
JESSICA MULLINS Proofreader
ERIK A ROBERTS Publications Assistant
RENEE TREACY Fact-Checker
WILLIAM DOMANSKI Transcript Editor
Jay R. Corrigan
Kenyon College
Jay R. Corrigan is a Professor of Economics at Kenyon College.
He earned a BA in Economics from Grinnell College and
a PhD in Economics from Iowa State University.
Professor Corrigan’s writing has appeared in The Washington Post
and Barron’s. His scholarly publications in economics, public health,
and substance abuse journals have been cited more than 1,000 times.
His research has been covered by news outlets such as ABC, NBC,
and BBC World News, and his work was included in a Washington
Post list of the 10 best works on political economy in 2018.
Professor Corrigan is a recipient of Kenyon College’s Trustee
Teaching Excellence Award, and The Princeton Review named him
one of America’s best college professors. ■
TABLE OF CONTENTS
Introduction
Professor Biography
Course Scope
Guides
1 Game Theory Basics: The Prisoner’s Dilemma
2 Repeated Prisoner’s Dilemma Games
3 The Game of Chicken
4 Reaching Consensus: Coordination Games
5 Run or Pass? Games with Mixed Strategies
6 Let’s Take Turns: Sequential-Move Games
7 When Backward Induction Works—and Doesn’t
8 Asymmetric Information in Poker and Life
9 Divide and Conquer: Separating Equilibrium
10 Going Once, Going Twice: Auctions as Games
11 Hidden Auctions: Common Value and All-Pay
12 Games with Continuous Strategies
Supplementary Material
Bibliography
Answers
Image Credits
UNDERSTANDING ECONOMICS:
GAME THEORY
COURSE SCOPE
Game theory is a framework for thinking more clearly and carefully
about strategic interactions in business, politics, international
relations, and even biology. Though it began as a niche field of
mathematics, game theory has become so important to economics that it’s
now covered in every introductory textbook and is a central part of every
graduate student’s first-year coursework.
A better understanding of game theory allows you to explain the otherwise
inexplicable. Why, for example, do corporate executives risk prison time
by conspiring to raise prices? Why is a college degree so important in the
job market even when the degree isn’t related to the job? Game theory can
also answer questions well beyond the traditional boundaries of economics.
For example, why do people confess to crimes they didn’t commit? Why,
during World War I, did peace break out spontaneously at points all along
the Western Front?
Lesson 1 introduces the most fundamental concepts in game theory: players,
strategies, payoffs, and finding an equilibrium. Once you’ve developed these
tools, you’ll apply them to the prisoner’s dilemma, the most famous—and
the most famously frustrating—game in the field.
In lesson 2, you’ll see how playing a game repeatedly can lead to the
emergence of cooperation even among players who are ostensibly at odds
with one another. You’ll apply lessons learned from repeated games to
understand price-fixing conspiracies and military history.
Lesson 3 focuses on conflict. You’ll learn that the game of chicken, where
drivers speed toward one another until one swerves, has more than one
equilibrium. Irresponsible though it may be, this game will help you
understand why animals fighting for mating rights rarely injure one another,
even when they have savagely sharp teeth or claws.
Why do people drive on the right in some countries and on the left in others?
Why did the movie industry choose VHS cassettes over Betamax? These
are all examples of coordination. In lesson 4, you’ll learn about when people
are likely to coordinate and when they aren’t.
While it’s often best to settle on a strategy and stick with it, it doesn’t always
pay to be predictable. There are situations where you do best by keeping
people guessing. But behaving randomly doesn’t have to mean behaving
arbitrarily. Being unpredictable in a specific way can improve your payoff
and keep others from taking advantage of you. In lesson 5, you’ll apply this
lesson to football.
Other games unfold sequentially, with you getting to see the choice
your opponent makes before you make yours. In lesson 6, you’ll see how
backward induction—starting at the end of the game and working back to
the beginning—allows you to solve these sequential-move games.
But backward induction can also lead to predictions too counterintuitive to
be believed. In lesson 7, you’ll consider some of the most famous thought
experiments in game theory and try to find ways to reconcile theory’s
predictions with people’s actual behavior.
Lessons 8 and 9 discuss games of private information. You’ll learn how
to overcome information asymmetries and apply those lessons to poker,
business, and dueling.
LESSON 1
GAME THEORY BASICS: THE PRISONER’S DILEMMA
What do economists mean when they call something
a game? Here, it’s useful to draw a distinction
between a decision and a game. A decision is
when you choose what to do without regard for anyone else’s
response. A game, on the other hand, has two or more players
who choose what to do based on what they think other players
will do. In other words, a game is strategic, and game theory
is the formalized study of interactions between strategic
players.
• Finally, what if the big pig presses the lever while the little pig waits by
the trough? Now, the little pig starts eating immediately. Because the big
pig is slow, the little pig manages to eat most of the food before the
big pig gets back to the trough, shoves the little pig out of the way, and
eats what’s left. The big pig may not get a lot of food, but what he does
get more than offsets the effort he exerts trudging to the lever and back.
SOLUTION
• To find the solution, think of this as a game. There are two players—the
little pig and the big pig—and each has two potential strategies—waiting
by the trough or pushing the lever. Each player’s payoff can be measured
in terms of the food they eat minus the effort they expend.
• If the little pig thinks the big pig is going to press the lever, her best
response is to wait by the trough. That way, the little pig doesn’t waste
any energy, and she gets to eat most of the food before the big pig shoves
her out of the way.
• But what if the little pig thinks the big pig is going to wait by the trough?
Counterintuitively, perhaps, it’s still in her best interest to wait by the
trough. She won’t eat no matter what she does in this case, but if she
waits by the trough, at least she doesn’t waste any energy.
• This is the little pig’s dominant strategy—a strategy that’s a best response
no matter what the other player does. In this example, no matter what
the big pig plans to do, it’s always in the little pig’s best interest to wait
by the trough.
• The big pig knows what the little pig is going to do—wait by the
trough—because it’s her dominant strategy. All the big pig has to do
now is choose his best response. If he also waits by the trough, he doesn’t
get anything. If he pushes the lever, he doesn’t get much to eat, but he’s
better off than if he gets nothing.
• Because the little pig is going to wait by the trough, the big pig’s best
response is to push the lever. This result—where the little pig waits by
the trough and the big pig presses the lever—is what’s called a Nash
equilibrium.*
• A Nash equilibrium is a list of strategies for each player such that no
player has an incentive to change their strategy unilaterally. If the big
pig thinks the little pig is going to wait by the trough, the big pig has
no incentive to change his strategy from pushing the lever to waiting
by the trough. If he did, he’d go from getting a little to eat to getting
nothing to eat.
• Likewise, if the little pig thinks the big pig is going to push the lever,
she has no incentive to change her strategy from waiting by the trough
to pushing the lever. If she did, she’d go from eating most of the food to
eating just some of it. Neither player can improve their own payoff by
changing their strategy, so this is a Nash equilibrium.
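The best-response reasoning above can be mechanized. Below is a minimal sketch in Python; the numerical payoffs are illustrative assumptions consistent with the story (the lesson describes the ordering of outcomes, not specific numbers), with each pair listed as (little pig, big pig):

```python
# Pure-strategy Nash equilibria by best-response analysis.
# Payoff numbers are illustrative assumptions: (little pig, big pig).
payoffs = {
    ("wait", "wait"):   (0, 0),    # neither pig eats
    ("wait", "press"):  (4, 4),    # little pig eats most; big pig nets a little
    ("press", "wait"):  (-1, 9),   # little pig wastes effort; big pig eats most
    ("press", "press"): (1, 5),    # both reach the trough eventually
}

strategies = ["wait", "press"]

def is_nash(little, big):
    """True if neither player can gain by unilaterally switching."""
    u_little, u_big = payoffs[(little, big)]
    best_little = all(payoffs[(alt, big)][0] <= u_little for alt in strategies)
    best_big = all(payoffs[(little, alt)][1] <= u_big for alt in strategies)
    return best_little and best_big

equilibria = [(l, b) for l in strategies for b in strategies if is_nash(l, b)]
print(equilibria)  # [('wait', 'press')]: the little pig waits, the big pig presses
```

The check mirrors the definition in the text: a strategy pair is a Nash equilibrium exactly when each strategy is a best response to the other.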
SOLUTION
• What does game theory predict the prisoners will do if they’re
interrogated simultaneously but separately? Once again, this is a game
with two players, each with two potential strategies. They can either
confess or deny. Payoffs in this game are measured in terms of prison
sentences, where a longer sentence is worse from the standpoint of an
individual prisoner.
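As a rough sketch, the dominant-strategy logic can be checked mechanically. The sentence lengths below are assumptions in the spirit of the classic story, not numbers from this guide; payoffs are negative years in prison, so a higher payoff is better:

```python
# Dominant-strategy check for the prisoner's dilemma.
# Sentence lengths are illustrative assumptions; payoffs are negative years.
payoffs = {  # (prisoner 1, prisoner 2)
    ("deny", "deny"):       (-1, -1),    # both get a short sentence
    ("deny", "confess"):    (-10, 0),    # the confessor goes free
    ("confess", "deny"):    (0, -10),
    ("confess", "confess"): (-5, -5),    # both get a medium sentence
}

def dominant_strategy(player):
    """Return a strategy that is a best response to everything, if one exists."""
    strategies = ["confess", "deny"]
    for s in strategies:
        alternatives = [t for t in strategies if t != s]
        if all(
            payoffs[(s, opp) if player == 0 else (opp, s)][player]
            >= payoffs[(alt, opp) if player == 0 else (opp, alt)][player]
            for alt in alternatives
            for opp in strategies
        ):
            return s
    return None

print(dominant_strategy(0), dominant_strategy(1))  # confess confess
```

With any payoffs of this shape, confessing beats denying no matter what the other prisoner does, so both confess even though mutual denial would leave both better off.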
GOLDMAN’S DILEMMA
• A different version of the prisoner’s dilemma is Goldman’s dilemma,
named for Robert Goldman, a doctor specializing in sports medicine.
Between 1982 and 1995, he asked fighters, bodybuilders, and power
lifters the following question:
If I had a magic drug that was so fantastic that you’d win
every competition you would enter … for the next five years,
but it had one minor drawback—it would kill you five years
after you took it—would you still take the drug?
• Goldman found that more than half said yes—the median athlete in
this sample would die to win.
SOLUTION
• In order for doping to be both players’ dominant strategy in this game,
both have to believe that increasing their chances of winning the Mr.
Olympia title by 50% is worth risking their life. That might sound
farfetched, but Robert Goldman’s research shows the typical power
athlete from his practice is willing to die to win.
• If Hans thinks Franz will stay clean, and assuming he’s willing to die to
win, his best response is to dope, increasing his chances of winning from
50% to 100%. If he thinks Franz will dope, his best response is still to
dope, increasing his chances of winning from 0% to 50%.
• The same logic applies to Franz, so both have a dominant strategy to
dope.** The Nash equilibrium is for both to dope, in which case they
are equally likely to win the Mr. Olympia title—just as they would have
been if both had stayed clean—but both also may suffer the potentially
deadly side effects from doping. This leaves both worse off than if they’d
stayed clean.
• The outcome of the prisoner’s dilemma game doesn’t always have to be
so relentlessly depressing. When the same two players play the game
together again and again, it’s possible they’ll learn to cooperate.
READINGS
“£66,885 Split or Steal?”
Nasar, A Beautiful Mind.
“What’s Left When You’re Right?”
QUESTIONS
1. Find this game’s two pure-strategy Nash equilibrium outcomes.
Assume a higher payoff is better than a lower one.
              Colin
           Left     Right
Rose Up    0, 3     10, 10
     Down  2, 1     5, 0

              Colin
           Left     Right
Rose Up    2, 2     6, 1
     Down  1, 6     5, 5
LESSON 2
REPEATED PRISONER’S DILEMMA GAMES
This lesson focuses on what, if anything, changes when
games are played repeatedly. While games often have
a frustrating outcome when played once, a cooperative
outcome can be reached when games are repeated infinitely or
at least indefinitely. A cooperative outcome depends on things
like how patient the players are and how likely they think the
game is to end in the near future.
• If you think the other player is going to cooperate, your best response
is to defect. Defecting earns you $5, while cooperating earns you just
$3. If you think the other player is going to defect, your best response
is still to defect. Here, defecting earns you $1, while cooperating earns
you nothing.
• This means your dominant strategy in this game is to defect. It’s always
your best response, regardless of what you think the other player will
do. By that same logic, defecting is the other player’s dominant strategy
as well.
• Just as with the prisoner’s dilemma and Goldman’s dilemma, the Nash
equilibrium is for both players to play their dominant strategy, even
though the $1 payoffs at that Nash equilibrium are clearly worse than
the $3 payoffs you’d earn if you both cooperated.
REPEATED GAMES
• This game is more interesting, but just as frustrating, if you play it twice.
Imagine your opponent tells you that if you cooperate in the first of two
rounds, she’ll reward you in the second round by cooperating, earning
you both $3 per round.
• But there’s a serious problem here: Because you’re only playing two
rounds, in the second round, you have a strong incentive to defect no
matter what you did in the first round. Here, you start by thinking about
what makes sense in the last round and then work your way back to the
first round. This is called backward induction.
• In the second—and last—round, you both have an incentive to defect
regardless of what happened in the first round. You can’t credibly
promise to cooperate in the second round in exchange for the other
player’s cooperation in the first. You could say that’s what you’d do, but
it wouldn’t be wise for the other player to believe you, given what you
both know about the incentives you face in the second round.
• Since the other player knows you’re likely to defect in the second round,
she has no incentive to cooperate in the first round. Naturally, you
respond by defecting in the second round, but she knew you were going
to anyway.
• The outcome of the repeated game is just the one-time game’s outcome,
played twice. From a theoretical standpoint, nothing would
change if you played the game 20 times or even 200 times. In each case,
both players have an incentive to defect in the last round.
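The unraveling argument can be sketched in code using the stage-game payoffs above ($3 for mutual cooperation, $5 for defecting against a cooperator, $1 for mutual defection, $0 for cooperating against a defector). Because defecting is a best response to either move, backward induction prescribes defection in every round:

```python
# Backward induction in a finitely repeated prisoner's dilemma, using the
# dollar payoffs from this lesson: (your payoff, other player's payoff).
stage = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect", "cooperate"):    (5, 0),
    ("defect", "defect"):       (1, 1),
}
moves = ["cooperate", "defect"]

def best_response(opponent_move):
    """Your best reply to a fixed opponent move in the one-shot game."""
    return max(moves, key=lambda s: stage[(s, opponent_move)][0])

# Defect is a best response to either move, so it's dominant in the one-shot game.
assert best_response("cooperate") == best_response("defect") == "defect"

# Backward induction: the last round is a one-shot game, so both defect there;
# that makes the next-to-last round strategically identical, and so on back
# to round 1. The subgame-perfect play is defect in every round.
n_rounds = 20
play = ["defect"] * n_rounds   # result of the induction above
total = sum(stage[("defect", "defect")][0] for _ in play)
print(total)  # $1 per round for 20 rounds -> 20
```

The round count is arbitrary: as the text says, the logic is the same for 2 rounds, 20, or 200, as long as both players know when the game ends.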
• What should you do if you think the enemy is going to shoot to kill?
Your unit takes heavy casualties no matter what you do, but if you also
shoot to kill, at least you’re not overrun. And what should you do if you
think the enemy is going to shoot to miss? Your unit avoids casualties
no matter what you do, but if you shoot to kill, you also gain ground.
• Shooting to kill is your dominant strategy. No matter what you think
the enemy is going to do, your payoff is improved by shooting to kill.
Assuming the enemy thinks like you do, he also has a dominant strategy
to shoot to kill.
• If the game is only played once, game theory predicts everyone plays their
dominant strategy, in which case both sides suffer heavy casualties and
neither gains ground. This is a horrifying outcome, where thousands die
to no advantage for either side.
• But one of the unique features of World War I trench warfare was that
the same units faced one another day after day. They weren’t playing
this bloody version of the prisoner’s dilemma once—they were playing
it repeatedly. And because it was a repeated game, norms of cooperation
and trust could, and often did, develop. Geoffrey Dugdale, a British
army captain, said that he was
astonished to observe German soldiers walking about within
rifle range behind their own line. Our men appeared to take
no notice. … These people evidently did not know there was
a war on. Both sides apparently believed in the policy of “live
and let live.”**
• Eventually, the Allied generals insisted on seeing the corpses that would
result from raiding the German trenches. Small but relentless Allied
raids, followed by the retaliation you’d expect from Germans playing
tit for tat, caused cooperation to break down and fighting to resume.
• If the game is played only once, the Nash equilibrium is where you both
play your dominant strategy. This is a wonderful outcome for students,
since everyone gets generous financial aid, but you both could have
attracted classes that were just as strong and less expensive if you’d agreed
to offer stingy aid.
• You don’t expect to see that kind of cooperation in a one-time game, but
you may manage to cooperate in an infinitely repeated game. Colleges
could reasonably think of themselves as playing a game that perhaps isn’t
infinitely repeated but has no clear end date. Since there’s no known final
round, there’s always an incentive to protect your future reputation by
cooperating in this round.
DISCOUNT RATE
• The equilibrium can be for both of you to cooperate by offering
stingy financial aid in every period—if, that is, you’re patient
enough. This means you have to care enough about what happens in
the future. Or, as economists put it, you have to have a low enough
discount rate.
• What do you gain if you cheat on your price-fixing agreement by offering
generous financial aid today? You get a great class this year, though it
comes at the expense of a larger financial aid budget. On balance, you
see that as a good thing.
• But what do you lose by cheating? Because you double-crossed it, the
other college will offer generous financial aid from now on. Knowing
that, you’ll want to offer generous financial aid as well, leaving you both
at the Nash equilibrium outcome from the one-time game, which is
a worse outcome than if you’d both continued to cooperate.
• Is getting a better outcome today worth getting a worse outcome
in every future period? It depends on how patient you are. If
you feel like what happens next year is just as important as what
happens today, you’ll surely want to cooperate with the agreement.
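This trade-off can be made concrete. The sketch below reuses the dollar payoffs from earlier in this lesson ($3 mutual cooperation, $5 from a one-time double-cross, $1 mutual defection) as stand-ins for the financial-aid game, and assumes the punishment the text describes: after one double-cross, both sides defect forever. Here `delta` is the discount factor, so a low discount rate means a `delta` close to 1:

```python
# How patient do you have to be for cooperation to beat cheating?
def cooperate_value(delta):
    """Present value of cooperating forever: 3 + 3*d + 3*d**2 + ... = 3/(1-d)."""
    return 3 / (1 - delta)

def cheat_value(delta):
    """$5 from defecting today, then $1 (mutual defection) every round after."""
    return 5 + delta * 1 / (1 - delta)

for delta in (0.3, 0.5, 0.9):
    choice = "cooperate" if cooperate_value(delta) >= cheat_value(delta) else "cheat"
    print(f"discount factor {delta}: {choice}")
# With these payoffs, cooperation beats cheating whenever delta >= 1/2.
```

Solving 3/(1 − δ) ≥ 5 + δ/(1 − δ) gives δ ≥ 1/2: a player who values next round at least half as much as this one will keep the agreement.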
READINGS
Axelrod, The Evolution of Cooperation.
Case, “The Evolution of Trust.”
Kingston and Wright, “The Deadliest of Games.”
QUESTIONS
1. Rose and Colin are playing a repeated version of the prisoner’s dilemma
game. If Rose is playing tit-for-tat and Colin always defects, what will
each player’s payoff be in the first round? What will their payoffs be
in subsequent rounds?
                  Colin
            Cooperate  Defect
Rose Cooperate  2, 2     0, 3
     Defect     3, 0     1, 1
2. Rose and Colin are again playing a repeated version of the same
prisoner’s dilemma game. If both Rose and Colin now play tit-for-tat,
what will each player’s payoff be in the first round? What will their
payoffs be in subsequent rounds?
                  Colin
            Cooperate  Defect
Rose Cooperate  2, 2     0, 3
     Defect     3, 0     1, 1
LESSON 3
THE GAME OF CHICKEN
In another type of simultaneous-move game, two players
get into an argument, and they decide the only way to
settle it is to play a game of chicken. That means they’re
going to get into their cars and drive straight at one another
as fast as their cars can go, until, at the very last second, the
players have to simultaneously decide whether to swerve or to
keep driving straight.
MEASURING PAYOFFS
• Payoffs in this game can be measured in utils*—a unit that economists
use to measure satisfaction, especially when there is no other natural
measure. The more satisfying an outcome is, the more utils you derive
from that outcome.
• Imagine that at the very last second, you decide to keep driving straight,
while your opponent simultaneously decides to swerve. You look tough,
so you’ll have a payoff of 1 util. Your opponent looks weak, so he’ll have
a payoff of −1 util. And the opposite is true if your opponent decides to
keep driving while you decide to swerve.
• If you both swerve—assuming you don’t crash into each other in the
process—it’s a draw. Neither of you looks tougher than the other, and
neither looks weaker, so you’ll both earn a payoff of 0.
• If you both drive straight as fast as you can go, it’s fair to assume you’d
both be horribly injured—perhaps even killed—in the crash. Figuring
out the payoff in this situation requires a value judgment: Is it worse to
suffer a horrible, life-threatening injury or to look weak?
• Opinions will vary, but for the sake of the game, let’s say suffering
a life-threatening injury is worse than looking weak. Both you and your
opponent will receive −10 utils in this situation.
SOLUTION
• If you think your opponent is going to drive straight, what’s your best
response? Your payoff is −1 if you swerve and −10 if you drive straight.
Neither option is good from your standpoint, but −1 is less bad, so your
response is to swerve.
• If you think your opponent is going to swerve, your best response is
to drive straight, because your payoff is 1 if you drive straight and 0 if
you swerve.
• Unlike the big pig/little pig and prisoner’s dilemma games, in the game
of chicken, no player has a dominant strategy. But that doesn’t mean this
game has no Nash equilibrium. Two can be found using best-response
analysis: one where you drive straight and your opponent swerves, and
one where you swerve and he drives straight.**
*** Today, this game is universally known as the hawk–dove game, but
the word dove did not appear in the original paper. Instead, the
authors referred to the second strategy as mouse.
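Behind the two pure-strategy equilibria lurks a third, mixed-strategy equilibrium, in which each driver randomizes just enough to leave the other indifferent between swerving and driving straight. A minimal sketch of that indifference calculation, using the payoffs above (tough 1, weak −1, draw 0, crash −10):

```python
# Mixed-strategy equilibrium of chicken by the indifference condition.
from fractions import Fraction

tough, weak, draw, crash = 1, -1, 0, -10

# Let q be the probability your opponent drives straight. You are indifferent
# between your two moves when
#   q*crash + (1-q)*tough == q*weak + (1-q)*draw
# Solving q*(-10) + (1-q)*1 == q*(-1) gives q = 1/10.
q = Fraction(1, 10)
u_straight = q * crash + (1 - q) * tough
u_swerve = q * weak + (1 - q) * draw
print(q, u_straight, u_swerve)  # 1/10 -1/10 -1/10
```

At this equilibrium each driver goes straight only one time in ten, and both moves yield the same (slightly negative) expected payoff.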
SOLUTION
• Payoffs in this game will account for the gains from winning, the cost of
being seriously injured while losing a fight, and the cost associated with
a dove yielding to a hawk. We can use the following numerical values:
• V is the gain from winning, which will be 40.
• C is the cost of losing, which will be −80.
• Y is the cost of yielding, which will be 0.
• Imagine you’re one of the animals playing this game. If you and your
opponent both play hawk, you’ll both fight until you’ve either won or
been injured by your opponent. Assuming you’re equally matched, each
of you has a 50% chance of winning and a 50% chance of losing. Since we
can’t be sure what will happen, we need to calculate your expected payoff:
the probability—or the fraction of the time—some event occurs times the
payoff a player receives if that event occurs.
• You and your opponent are equally matched, meaning that if the two of
you were to play this game again and again, we’d expect you to win half
of your encounters and lose the other half. So your expected payoff from
any one encounter is the probability you win times your payoff if you
win, plus the probability you lose times your payoff if you lose:
1/2 × 40 + 1/2 × (−80) = −20.
• This gives you an expected payoff of −20. And because the game looks
exactly the same from your opponent’s perspective, he also has an
expected payoff of −20.
• Things are more straightforward for the other three outcomes. If you
play hawk and your opponent plays dove, he yields, meaning you win.
In that case, you earn a payoff of 40, and your opponent earns 0.
• If your opponent plays hawk and you play dove, you yield, meaning he
wins. You earn a payoff of 0, and your opponent earns a payoff of 40.
• Finally, if you both play dove, we assume you split the gain from winning.
Both you and your opponent earn a payoff of 20 (half of 40).
• You can now use best-response analysis to find this game’s pure-strategy
Nash equilibria. If you think your opponent will play hawk, your best
response is to play dove. And if you think your opponent will play
dove, your best response is to play hawk. The same is true from your
opponent’s perspective.
• Like the original game of chicken, this game has two pure-strategy Nash
equilibria, with another mixed-strategy Nash equilibrium lurking in the
background. This shows that we don’t expect the population to be made
up entirely of hawks or entirely of doves; instead, we expect there to be
some mix of the two.
REAL-LIFE APPLICATIONS
• Animals are not rational, strategic actors who know their payoffs and
make choices that maximize payoffs given what they believe other players
will do. So how can this kind of analysis explain animal behavior?
• The answer is evolution. A bighorn sheep, for example, doesn’t choose
to play hawk or dove. Instead, he’s born with genes that determine his
behavior—whether he will play hawk or dove.
• Depending on the conditions, one of those strategies may be associated
with a higher level of fitness. A sheep with the genes for that strategy will
have a fitness advantage, so he will be more likely to find a mate and pass
his genes on to the next generation. This means more of the bighorn sheep in
the next generation will be born with genes associated with the more
successful strategy.
• Imagine a population of bighorn sheep made up entirely of doves.
This may seem like a utopian paradise, but it can’t last, because it’s not
an equilibrium.
• If one male is born with a genetic mutation that causes him to play hawk
rather than dove, he will have a tremendous fitness advantage. When
he plays the hawk–dove game, he will necessarily play against a dove,
meaning he will necessarily win.
• This means there will be more hawks in the next generation. Each of
those—born into a population now made up almost entirely of doves—
will also enjoy a fitness advantage. But unlike their father, they’ll
occasionally encounter another hawk, which will mean a fight and an
outcome with a negative expected payoff.
LIMITED WAR
• This pattern may continue for many generations, but it won’t continue
indefinitely. The population will eventually reach a point where hawks
are no longer more fit than doves.
• Imagine you’re a hawk born into a population that’s now half doves
and half hawks. There’s a 50% chance that when you play the game,
you’ll play against another dove. You’re sure to win that encounter,
guaranteeing you a payoff of 40. But there’s also a 50% chance your
opponent will be a hawk. The two of you will fight, earning you an
expected payoff of −20.
• Your expected payoff is the probability you face a dove times your payoff
if you face a dove, plus the probability you face a hawk times your
expected payoff if you face a hawk:
1/2 × 40 + 1/2 × (−20) = 10.
• Now, imagine you’re a dove born into that same population. There’s
a 50% chance your opponent will be a dove. You’ll share, earning you
a payoff of 20. The other 50% of the time, your opponent will be a
hawk. You’ll immediately yield, earning a payoff of 0. Calculating your
expected payoff the same way you did for the hawk, you get
1/2 × 20 + 1/2 × 0 = 10.
• On average, the dove earns a payoff of 10, just like the hawk. This is
called an evolutionarily stable equilibrium. A population like this one can’t
be successfully invaded by a mutant the way the all-dove population was
successfully invaded by a mutant hawk.
• A mutant hawk born into a 50/50 population will tip the scale slightly in
favor of the doves, meaning doves would enjoy a slightly higher expected
payoff and a slight fitness advantage, moving the population back to
evolutionarily stable equilibrium.
• Animals fighting for mates tend to engage in limited war because, at the
evolutionarily stable equilibrium, we expect the population to be made
up of a mix of hawks and doves. There will be conflict, but fights will
only end in one animal being seriously injured in the relatively rare case
where a hawk encounters another hawk.
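The stable 50/50 mix can be verified directly. Using the expected payoffs derived above (hawk vs. hawk −20, hawk vs. dove 40, dove vs. hawk 0, dove vs. dove 20), each type’s fitness depends on the share of hawks p in the population:

```python
# Fitness of each type as a function of the hawk share p, using the
# lesson's payoffs: hawk vs hawk -20 (expected), hawk vs dove 40,
# dove vs hawk 0, dove vs dove 20.
from fractions import Fraction

def hawk_payoff(p):
    return p * -20 + (1 - p) * 40

def dove_payoff(p):
    return p * 0 + (1 - p) * 20

# At the evolutionarily stable mix the two types are equally fit:
#   -20p + 40(1-p) = 20(1-p)  =>  p = 1/2.
p_star = Fraction(1, 2)
print(hawk_payoff(p_star), dove_payoff(p_star))  # 10 10

# Away from the mix, the rarer type does better, pulling the population back.
assert hawk_payoff(Fraction(1, 4)) > dove_payoff(Fraction(1, 4))  # hawks rare
assert hawk_payoff(Fraction(3, 4)) < dove_payoff(Fraction(3, 4))  # hawks common
```

The two assertions at the end capture why the 50/50 population resists invasion: a mutant of either type earns less than the incumbent mix.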
READINGS
Leeson, “Oracles.”
Smith and Price, “The Logic of Animal Conflict.”
QUESTIONS
1. True or false: When playing chicken, your dominant strategy is to
drive straight.
2. Fill in the blanks in the following payoff matrix so it’s clear this is
a game of chicken. Assume a higher payoff is preferred to a lower one.

                 Colin
            Straight    Swerve
Rose Straight  ___, ___    ___, ___
     Swerve    ___, ___    ___, ___
LESSON 4
REACHING CONSENSUS: COORDINATION GAMES
On the surface, coordination games look a lot like the
game of chicken. But where chicken is a game of
conflict, coordination games are about consensus.
One thing the games do have in common is that they both
have multiple equilibria. This lesson focuses on ways to choose
between equilibria in a game that has more than one.
WIRELESS CHARGING
• According to the technology website CNET, the next big advancement
in how we charge our devices will be over-the-air wireless charging.
But this new technology won’t take off until the industry settles on
a single standard.
• Imagine a game with two players—Apple and Samsung—who must
simultaneously choose one of the approved standards for over-the-air
wireless charging—Powercast or WattUp—to incorporate into their
next generation of phones. Their strategies, then, are the two charging
standards they can choose between.
• In the simplest version of this game, we’ll assume the two charging
standards have some minor differences but are equally good. All that
matters from Apple and Samsung’s standpoint is whether they adopt
the same standard or different ones. If they adopt the same standard, it
will catch on quickly and people will be eager to buy new phones.
• But if Apple and Samsung adopt different wireless charging standards,
companies like Toyota, Boeing, and Starbucks won’t be so eager to
incorporate either technology into their cars, planes, and coffee shops,
because it’s not so clear which of the standards will catch on. This gives
the phone-buying public less of an incentive to buy new phones.
SOLUTION
• If both Apple and Samsung adopt the Powercast standard, each phone
maker earns a relatively high payoff. Suppose each firm’s profits increase
by $30 billion. The same is true if both Apple and Samsung choose the
WattUp standard.
ASSURANCE GAMES
• Suppose that while WattUp can charge devices that are up to 15 feet away,
Powercast can charge devices as far as 80 feet away. Because Powercast
is a superior technology, both Apple and Samsung will earn a payoff of
$50 billion, rather than $30 billion, if they coordinate on Powercast.
• If Apple thinks Samsung is going to choose Powercast, its best response
is to choose Powercast as well, earning a payoff of $50 billion rather than
the $10 billion it would earn with WattUp. If Apple thinks Samsung is
going to choose WattUp, its best response is to choose WattUp as well,
earning a payoff of $30 billion rather than just the $10 billion it would
earn with Powercast.
• Interestingly, this game has two Nash equilibria in pure strategies:
one where both firms choose Powercast and one where both choose WattUp.
The fact that Powercast is clearly the superior technology doesn't
eliminate the WattUp equilibrium.
• This variation on the coordination game is called an assurance game.
It differs from the pure coordination game only in that both players agree
that one of the Nash equilibrium outcomes is clearly better than the other.
• Now, imagine Apple and Samsung get together to discuss their strategies
before playing the game. We can be pretty confident they’d agree on
the Powercast standard. Once the two players have assured one another
they’ll both choose Powercast, neither has an incentive to renege.
• If the two aren’t allowed to communicate, it would seem that one of these
outcomes is likely to be a focal point, or a solution people are naturally
drawn to without communication: the one where both choose Powercast
and earn a payoff of $50 billion each. There will, however, be exceptions.
If one more change is made to the game, settling on the lower-payoff
Nash equilibrium outcome actually becomes the norm.
STAG HUNT
• If you both hunt the stag, you'll kill it and split the meat. But if you hunt
the stag while the other player hunts a hare, you’ll go hungry. Both
players’ two strategies are to hunt the stag or a hare, and payoffs are
measured in terms of the meat you take home.
• Like all of the coordination games discussed so far, this one has two pure-
strategy Nash equilibria: one where you both play stag and one where
you both play hare. And like the assurance game, one of the equilibrium
outcomes offers both players a higher payoff. Game theorists call this
payoff dominance.
• But unlike the assurance game, one of the two strategies in this game
eliminates all risk. If you choose to hunt hare, you earn a payoff of 1 no
matter what the other player chooses to do. It doesn’t matter if she hunts
the stag or a hare. You go home with hare either way. The equilibrium
where you both hunt a hare risk dominates the equilibrium where you
both hunt the stag because the hare/hare equilibrium is less risky.
SOLUTION
• One Nash equilibrium offers a higher payoff, and the other offers lower
risk. Knowing this, which strategy would you choose to play?
• When experimental economists have asked people to play games like this
one in the laboratory, they’ve found that the more experience the players
have, the more likely they are to choose lower risk over higher reward.
• The experimental research in this area has tended to focus on a version
of the stag hunt called the minimum effort game. As an example,
participants might have to choose to contribute between one and seven
units toward a communal fund.
• Each player’s payoff would equal $2 times the minimum contribution
made by any member of the group, minus $1 times the amount they
contributed themselves.
• If everyone in the group contributes the maximum of seven units, each
participant receives a payoff of $7:
$2 × 7 − $1 × 7 = $7.
• This is a Nash equilibrium because if you believe everyone else will
contribute seven to the fund, you have no incentive to reduce your own
contribution. If, for example, you lowered your contribution from seven
to six, your payoff would fall from $7 to $6. That’s because your payoff is
$2 times the minimum contribution—which would now be six—minus
$1 times your own contribution—also six—equaling $6.
• This is just one of the game’s Nash equilibria. If you think everyone
else will contribute six, you have no incentive to raise your contribution
from six to seven. This would increase your cost without changing the
minimum contribution.
• You also have no incentive to lower your contribution from six to five.
This would lower your cost, but it would also lower the minimum
contribution, leaving you with a lower payoff. The same can be said
about everyone contributing five, four, three, two, or even one. So this
game has seven pure-strategy Nash equilibria.
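The payoff rule and the claim of seven equilibria are easy to verify in a few lines of Python. This is a sketch; `payoff` takes your own contribution and the minimum contribution among the other players.

```python
# Minimum effort game: your payoff is $2 times the group's minimum
# contribution, minus $1 times your own contribution.
def payoff(own, others_min):
    return 2 * min(own, others_min) - 1 * own

# Every uniform contribution level c = 1..7 is a Nash equilibrium:
# no unilateral deviation d does better than matching the others at c.
for c in range(1, 8):
    assert payoff(c, c) == max(payoff(d, c) for d in range(1, 8))
print("all seven uniform contribution levels are Nash equilibria")
```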
FOCAL POINT
• Which, if any, of these equilibria is a focal point?
• The one where everyone contributes the full seven units to the group
fund seems conspicuous because it offers the highest payoff. It’s the
payoff dominant equilibrium. But there’s also something conspicuous
about the equilibrium where everyone contributes just one unit.
• If you contribute seven to the group fund, there’s a chance everyone else
will be just as generous as you, in which case you’ll earn a $7 payoff. But
there’s also a chance someone in the group will contribute just one. In
that case, your payoff would be $2 times the minimum contribution—
just one unit, in this case—minus $1 times your contribution of seven.
You’d earn a payoff of −$5.
• But if you contribute just one unit to the group fund, you know what
your payoff will be: $1. Contributing just one unit to the group fund
eliminates all risk, so everyone making the minimum contribution is the
risk dominant equilibrium.
REAL-LIFE APPLICATION
• The earliest study to look at this game was by John Van Huyck and
others. They found that the first time people play this game, most are
generous. Most people have enough faith in their fellow participants that
they’re willing to make a large contribution to the group fund.
• But it’s the minimum contribution that determines everyone’s payoff.
The minimum contribution in the first round is typically only about
two units. This means the people who were willing to take a chance and
make the maximum contribution end up losing money.
• Not surprisingly, most people contribute less and less each round,
until by the 10th round, the majority of people make the smallest
possible contribution.
• In a sense, this outcome is frustrating in the same way the prisoner’s
dilemma is frustrating. Everyone could enjoy a better payoff if everyone
would play seven instead of one.
• But in another sense, this outcome is even more frustrating than the
prisoner’s dilemma. Unlike in the prisoner’s dilemma game, the outcome
where everyone enjoys a better payoff is a Nash equilibrium outcome.
If we could only get there, no one would have an incentive to move
away from it.
READINGS
Dugar, “Nonmonetary Sanctions and Rewards in an Experimental
Coordination Game.”
Schelling, The Strategy of Conflict.
Van Huyck, Battalio, and Beil, “Tacit Coordination Games,
Strategic Uncertainty, and Coordination Failure.”
QUESTIONS
1. What kind of coordination game is this: pure coordination, assurance,
or stag hunt? Assume a higher payoff is preferred to a lower one.
                      Colin
                 Red       Blue
Rose   Red       2, 2      0, 0
       Blue      0, 0      5, 5
2. Fill in the blanks in this payoff matrix so it's clear this is a stag hunt.
Assume a higher payoff is preferred to a lower one.

                                  Colin
                           Dinner and a movie   Watch TV
Rose   Dinner and a movie       __, __           __, __
       Watch TV                 __, __           __, __
LESSON 5
RUN OR PASS? GAMES WITH MIXED STRATEGIES
When you used best-response analysis to find
a game’s pure-strategy Nash equilibria in previous
lessons, there was occasionally a third equilibrium
hiding in the background—a mixed-strategy Nash equilibrium
that involved players randomizing between their pure
strategies. This lesson focuses on games with mixed strategies
and why it doesn’t always pay to be predictable.
RUN OR PASS
• Run or Pass, a game developed by Matt Rousu, is a dramatically
simplified version of American football. You don't need any knowledge of
football beyond the fact that gaining yards is good for the offense and
bad for the defense.
• Imagine that you are your team’s offensive coordinator, which means
that you call the plays for your team’s offense, or the players who have
the ball and try to move it down the field. Your opponent is the other
team’s defensive coordinator, which means he calls the plays for his
team’s defense, or the players who are trying to stop yours from moving
the ball down the field.
• Because this is a simplified version of football, you will each choose
between just two strategies: You have to choose whether your team will
run or pass, and your opponent has to simultaneously decide whether
his team will prepare to stop the run or stop the pass.
PAYOFFS
• Your objective is to gain yards, and your opponent’s objective is to stop
you from doing that. Both of your payoffs can therefore be measured in
yards, where gaining yards is a good thing for you and an equally bad
thing for your opponent.
• What happens if you decide your team should pass while your opponent
simultaneously decides his team should prepare to stop the pass? You
don’t do too well—you gain just two yards, and the other team gives up
only two yards. This is because he correctly anticipated that your team
was going to pass.
• But what happens if you decide your team should pass while he decides
his team should prepare to stop the run? This time, you do much better:
You gain seven yards, which means the other team gives up seven yards.
• If you decide to run the ball while your opponent prepares to stop the
run, you gain just one yard. But if he prepares his team to stop the pass
on a play when you’ve decided to run, you gain six yards.
• This is a constant-sum game, or a zero-sum game, because the payoffs
in each cell of the payoff matrix add up to zero. This is also a game of
pure conflict: Every yard you gain is a yard your opponent gives up. You
can’t do well unless he does poorly.
BEST-RESPONSE ANALYSIS
• You can use best-response analysis to figure out what you should do given
what you think your opponent is going to do. If he’s going to prepare to
stop the pass, you’d gain two yards if you pass but six yards if you run,
so your best response is to run. If he’s going to prepare to stop the run,
you’d gain seven yards if you pass and only one if you run, so your best
response is to pass.
• Whatever you think your opponent is going to prepare for, you want to
do the opposite. And he wants to be ready to stop whatever he thinks
you’re going to do.
• In every other game discussed so far, there was at least one pure-strategy
Nash equilibrium. Here, for the first time, that’s not the case. That
doesn’t mean there’s no equilibrium.* Instead, the Nash equilibrium in
this game will involve mixed strategies. Both you and your opponent
need to be unpredictable. You’ll also need to be deliberate about the way
you randomize, or your opponent can take advantage of you.
OFFENSIVE STRATEGY
• If your opponent never prepares to stop the pass, you’ll gain seven yards
if you pass. And if he always prepares to stop the pass, you’ll gain just
two yards if you pass.
• But what if he’s less predictable? Suppose, for example, he prepares to
stop the pass 60% of the time. There’s a 60% chance he’ll stop the pass,
in which case you’ll gain just two yards. And there’s a 40% chance he’ll
stop the run, in which case you’ll gain seven yards. So your expected
payoff is four yards:
0.6 × 2 + 0.4 × 7 = 4.
* John Nash won the Nobel Prize in Economics for proving that
every finite game—meaning a finite number of players choosing
between a finite number of pure strategies—will have at least one
Nash equilibrium.
Figure 1
Figure 2
• Connecting these points on a new line on the graph shows the number of
yards you gain if you run as a function of the probability your opponent
prepares to stop the pass (Figure 2).
• With this graph, you can determine your best response given the
probability with which you think your opponent is going to prepare to
stop the pass.
• If you think your opponent is going to stop the pass less than 60% of
the time, your best response is to always pass (Figure 3). If, on the other
hand, you think he’s going to stop the pass more than 60% of the time,
your best response is to always run (Figure 4). The only time you’re
indifferent between running and passing is when he stops the pass 60%
of the time.
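The indifference point can be checked numerically. In this short sketch, q stands for the probability the defense prepares to stop the pass (a symbol introduced here for convenience), and the payoffs are the ones from the text.

```python
# Expected yards the offense gains, as a function of q, the probability
# the defense prepares to stop the pass.
def expected_pass(q):
    return q * 2 + (1 - q) * 7   # 2 yards if he stops the pass, 7 if not

def expected_run(q):
    return q * 6 + (1 - q) * 1   # 6 yards if he stops the pass, 1 if not

# At q = 0.6 both plays yield the same expected payoff (4 yards),
# so the offense is indifferent between running and passing.
print(expected_pass(0.6), expected_run(0.6))
```

Below q = 0.6, `expected_pass` is the larger of the two, matching Figure 3; above it, `expected_run` is larger, matching Figure 4.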
Figure 3
Figure 4
DEFENSIVE STRATEGY
• Turning now to the perspective of the defensive coordinator, the best
outcome for him is where you’re indifferent between running and
passing, which is the point on the graph where the two lines intersect.
That means his optimal strategy is to stop the pass 60% of the time and
stop the run the remaining 40% of the time.
• But it’s not enough to be unpredictable—he needs to be unpredictable
in a carefully calibrated way. If he prepares to stop the pass a third of
the time and stop the run the other two-thirds of the time, you can
exploit him by always passing, in which case you gain more yards per
play on average.
• As the defensive coordinator, he chooses the probability with which he
plays his strategies not so that it leaves him indifferent between his two
strategies, but so that it leaves you indifferent between yours. Again,
that’s because your expected payoffs will never be so low—and, by
extension, his will never be so high—as when you’re indifferent between
your two pure strategies.
SOLUTION
• You can quickly find the solution with a few lines of algebra. Remember
that, as the offensive coordinator, you’re looking to minimize your
opponent’s expected payoff, and you’ll do that by mixing between
your two strategies such that he’s indifferent between his. To do this
algebraically, we’ll say that p is the probability you pass, and (1 − p) is
the probability you run.
• If you want to choose p to leave your opponent indifferent between
stopping the pass and stopping the run, that means you’re going to
need to choose p such that his expected payoff from stopping the pass
equals his expected payoff from stopping the run. Each of these expected
payoffs can be written as a function of p.
E(SP) = p × (−2) + (1 − p) × (−6) = 4p − 6.
• The defense’s expected payoff from stopping the run is similar:
E(SR) = p × (−7) + (1 − p) × (−1).
• Here, SP is replaced with SR, which represents stopping the run, and the
payoffs are replaced with those from stopping the run.
• This simplifies to
E(SR) = −6p − 1.
• To find the value of p that leaves your opponent indifferent between
stopping the pass and stopping the run, equate the two expected payoffs
like this:
4p − 6 = −6p − 1.
• Next, rearrange and solve for p. Start by adding 6p to both sides of
the equation:
10p − 6 = −1.
• Then, add 6 to both sides:
10p = 5.
• This means you should pass with a probability of 1/2, or 50% of the
time, and run the other 50% of the time. And this leaves your opponent
indifferent between stopping the run and stopping the pass. Either way,
he expects to give up four yards per play.
• This means the mixed-strategy Nash equilibrium for this game is where
you, as the offense, pass 50% of the time and run 50% of the time, while
the defense stops the pass 60% of the time and stops the run 40% of
the time.
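The same algebra can be checked mechanically. This Python sketch, using exact fractions, writes the defense's expected payoffs (yards given up, so negative numbers) as functions of p and confirms the indifference point.

```python
from fractions import Fraction

# Defense's expected payoffs as a function of p, the probability
# the offense passes. Payoffs are negative yards given up.
def stop_pass(p):
    return p * -2 + (1 - p) * -6    # simplifies to 4p - 6

def stop_run(p):
    return p * -7 + (1 - p) * -1    # simplifies to -6p - 1

# Solving 4p - 6 = -6p - 1 gives p = 1/2; check the indifference.
p = Fraction(1, 2)
assert stop_pass(p) == stop_run(p) == -4  # defense expects to give up 4 yards
print(p)  # 1/2
```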
GAME VARIATIONS
• What would change if the offense gained 17 yards from a successful
passing play rather than just seven? The new mixed-strategy Nash
equilibrium would be where the offense passes just 25% of the time and
the defense prepares to stop the pass 80% of the time.
• Interestingly, the number of yards you gain from a successful pass has
more than doubled, but the probability you pass has fallen from one-half
to one-quarter. How can that be?
• Remember, you do best when your opponent is indifferent between
stopping the pass and stopping the run. If he gives up twice as many
yards when you have a successful play, he’ll only be indifferent between
his two strategies if you rarely pass.
• And what if you gained 47 yards from a successful passing play? In that
case, the mixed-strategy Nash equilibrium is when the offense passes just
10% of the time and the defense prepares to stop the pass 92% of the
time. If your opponent expects you to pass any more often than that, he’s
going to prepare to stop the pass 100% of the time, in which case you’ll
never have one of those successful 47-yard passing plays.
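Solving the two indifference conditions for a general payoff from a successful pass reproduces all three cases at once. This sketch is mine, not the course's: the closed forms p = 5/(gain + 3) and q = (gain − 1)/(gain + 3) follow from the same algebra used above, with the other payoffs (2, 6, and 1 yards) held fixed.

```python
from fractions import Fraction

def equilibrium_mix(pass_gain):
    """Mixed-strategy equilibrium when a successful pass gains `pass_gain`
    yards and the other payoffs are 2, 6, and 1, as in the text."""
    # Offense passes with probability p leaving the defense indifferent:
    #   p*(-2) + (1-p)*(-6) = p*(-pass_gain) + (1-p)*(-1)
    p = Fraction(5, pass_gain + 3)
    # Defense stops the pass with probability q leaving the offense
    # indifferent:  q*2 + (1-q)*pass_gain = q*6 + (1-q)*1
    q = Fraction(pass_gain - 1, pass_gain + 3)
    return p, q

for gain in (7, 17, 47):
    print(gain, equilibrium_mix(gain))
```

For gains of 7, 17, and 47 yards this yields pass probabilities of 1/2, 1/4, and 1/10 against stop-the-pass probabilities of 60%, 80%, and 92%, matching the three cases discussed above.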
READINGS
Leeson, “Oracles.”
Palacios-Huerta, “Professionals Play Minimax.”
Reiley, Urbancic, and Walker, “Stripped-Down Poker.”
Rousu, “Run or Pass?”
Walker and Wooders, “Minimax Play at Wimbledon.”
QUESTIONS
1. The game of chicken, introduced in lesson 3, has two pure-strategy
Nash equilibria, but it also has a mixed-strategy Nash equilibrium.
Find the mixed-strategy Nash equilibrium for the following game
of chicken.
                         Colin
                  Straight    Swerve
Rose   Straight   −1, −1      2, 0
       Swerve     0, 2        1, 1
2. Find the mixed-strategy Nash equilibrium for the following game between
a pitcher and a batter.

                                 Batter
Pitcher   Throw a fastball    −1, 1     1, −1
          Throw a changeup    1, −1     −2, 2
LESSON 6
LET'S TAKE TURNS: SEQUENTIAL-MOVE GAMES
All of the games discussed up to this point have been
simultaneous-move games, meaning players had to
make their decisions simultaneously, without knowing
what the other player had done. This lesson focuses on
sequential-move games, where players take turns.
NIM
• There are countless variations of the game nim, but in this example,
imagine there’s a pile of 20 beads between you and another player. You
take turns removing between one and six beads from the pile with the
understanding that the player who takes the last bead wins.
• If you get to go first, how many beads should you take to guarantee
that you win? Suppose you take six beads, leaving 14 in the pile. Your
opponent then takes five beads, leaving nine. You go again and take
two. There are seven beads left at this point, and it’s clear that you are
sure to win.
• No matter how many beads your opponent takes in her next turn, there
will be six or fewer left in the pile, meaning you’ll be able to take them
all on your third turn and win the game.
• You can guarantee a win by leaving your opponent with a pile of 14 beads
after your first turn, which ensures that no matter how many she takes,
you can take just enough to leave her with a pile of seven on her next
turn. And you can be sure to leave your opponent with a pile of 14 beads
by taking six at the beginning.
STRATEGY
• Nim illustrates two important concepts. The first is backward induction,
which is when you start by thinking about the last round and then
work your way backward to the first round. This is absolutely central to
finding the solutions to the sequential-move games in this lesson.
• You know that in order for you to win, no more than six beads must
remain in the pile in the last round. With that in mind, you know
you need to leave your opponent with seven beads in the second-to-last
round. And you know that you can leave her with seven beads in the
second-to-last round by leaving her with 14 in the preceding round.
• This game also has a clear first-mover advantage, which means that going
first improves your payoff. In sequential-move games, going first will
often give you an advantage, but not always. In fact, not all versions of
nim have a first-mover advantage.
• Imagine the game begins with 21 beads in the pile instead of 20. If you
go first, no matter how many beads you take on your first turn, your
opponent will take just enough on her first turn to leave a pile of 14 beads.
And no matter how many you take on your second turn, she’ll take just
enough to leave seven, guaranteeing her the win. The game was only
changed slightly, but it was enough to give a second-mover advantage.
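The winning strategy generalizes: you win by always leaving your opponent a pile that is a multiple of seven beads. A short sketch (the function name is mine):

```python
# Nim variant: players alternately take 1-6 beads; whoever takes
# the last bead wins. The winning strategy is to leave the opponent
# a multiple of 7 beads after every one of your turns.
def winning_move(pile):
    """Return how many beads to take, or None if no forced win exists."""
    take = pile % 7
    return take if 1 <= take <= 6 else None

print(winning_move(20))  # 6 -- leave 14, as in the example
print(winning_move(9))   # 2 -- leave 7, as in the example
print(winning_move(21))  # None -- a multiple of 7; the second mover can win
```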
GAME TREE
• Consider a game in which Congress moves first, choosing among bills to
pass, and the president then decides whether to sign or veto whatever
Congress passes. The sequential nature of this game can be captured
using a game tree, which has branches representing each player's
potential decisions.
• The initial decision node represents Congress. It has four branches
depicting Congress's four potential choices: pass A, pass B, pass
a combination of A and B, or do nothing.
• If Congress passes nothing, the game ends at a terminal node, where
both Congress and the president earn a payoff of 2. If Congress actually
passes one of the bills, the president decides whether to sign it or veto it.
Whatever he decides, the game ends with a terminal node.
SOLUTION
• The solution to this game can be found by using backward induction.
Start at the final decisions—those that the president makes—and work
back to the initial decision.
• If Congress passes A, the president earns a payoff of 1 if he signs the
bill and 2 if he vetoes it. Because 2 is better than 1, the president vetoes
the bill.
• If Congress passes B, the president earns a payoff of 4 if he signs the
bill and 2 if he vetoes it. Because 4 is better than 2, the president signs
the bill.
• If Congress passes a combination of A and B, the president earns a payoff
of 3 if he signs the bill and 2 if he vetoes it. Because 3 is better than 2,
the president signs the bill.
• Now that you know what the president will do in each scenario, you
can return to the initial decision node. If Congress passes A and the
president vetoes it, Congress earns a payoff of 2. If it passes B and
the president signs it, Congress gets a payoff of just 1. If it passes
the combination of A and B and the president signs it, Congress earns
a payoff of 3. And
if Congress does nothing, the game ends immediately, with Congress
earning a payoff of 2.
• Of these four possibilities, passing the compromise bill with both
provisions A and B offers Congress the highest payoff.
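The backward-induction reasoning above can be written out in a few lines of Python. One caveat: the text doesn't state Congress's payoff when bill A alone becomes law, so the 4 below is an assumption consistent with the ranking in question 1; the other numbers come from the text.

```python
# Veto game: payoff pairs are (Congress, president). Congress moves
# first; the president then signs or vetoes whatever passes.
game = {
    "pass A":     {"sign": (4, 1), "veto": (2, 2)},   # Congress's 4 is assumed
    "pass B":     {"sign": (1, 4), "veto": (2, 2)},
    "pass A+B":   {"sign": (3, 3), "veto": (2, 2)},
    "do nothing": {"end":  (2, 2)},
}

def solve():
    # For each of Congress's moves, the president picks the branch that
    # maximizes his own payoff (the second number in each pair).
    outcomes = {move: max(choices.values(), key=lambda pay: pay[1])
                for move, choices in game.items()}
    # Congress anticipates this and picks the move that maximizes its payoff.
    return max(outcomes, key=lambda move: outcomes[move][0])

print(solve())  # pass A+B
```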
SOLUTION
• Once again, you can use backward induction to find the equilibrium. In
this ultimatum game, an allocator proposes a division of $10 between
herself and a decider, choosing an even split, an uneven split, or
a grossly uneven split; the decider then either accepts her offer or
rejects it, in which case both players walk away with nothing.
• If the allocator offers an even split, the decider earns a payoff of $5 if he
accepts the offer and nothing if he rejects it, so he should accept.
• If the allocator offers an uneven split, the decider earns a payoff of
$2.50 if he accepts the offer and nothing if he rejects it. Again, he
should accept.
• If the allocator offers a grossly uneven split, the decider earns a payoff of
$0.50 if he accepts the offer and nothing if he rejects it. Fifty cents isn’t
much, but it’s better than nothing, so he should still accept.
• Now, knowing that the decider will accept any offer she makes, the
allocator should choose to make a grossly uneven offer to maximize her
own payoff.
• This means the equilibrium outcome is the one where the allocator
earns a payoff of $9.50 and the decider earns just $0.50. The allocator’s
equilibrium strategy is to choose the grossly uneven split, and the
decider’s equilibrium strategy is to always accept.
REAL-LIFE APPLICATION
• Do participants in a laboratory actually behave the way game
theory predicts?
• In the classic version of the game, the most common division proposed
by allocators is an even split, and most deciders reject an offer
of $2.50. They’d rather walk away with nothing than just one-
quarter of the pie.
• This could be because the payoffs don’t capture everything the players
care about. In some cases, that may simply be money. But it’s possible
the allocator also cares about fairness and the decider cares about not
feeling like he's been stiffed. This changes the payoffs so that the decider
is less likely to accept an unfair offer—at least, as long as the stakes are
low enough.
READINGS
Andersen, Ertaç, Gneezy, Hoffman, and List, “Stakes Matter in
Ultimatum Games.”
Kahneman, Knetsch, and Thaler, “Fairness and the Assumptions of
Economics.”
Leeson, “Trading with Bandits.”
Thaler, “The Ultimatum Game.”
QUESTIONS
1. Consider a variation on the veto example where the president has line-
item veto powers, meaning he can sign into law only the provisions
of the bill he likes while vetoing others. In this version of the game,
Congress views both A and B becoming law as worse than passing
nothing. Use this new payoff ranking to find the equilibrium for this
game, first without the line-item veto and then with it.
Outcome Congress President
A becomes law 4 1
B becomes law 1 4
Both A and B become law 2 3
Congress passes nothing 3 2
LESSON 7
WHEN BACKWARD INDUCTION WORKS—AND DOESN'T
In some games, the solution found using backward
induction seems so startlingly counterintuitive that you’ll
have to question whether any rational person would really
play the way backward induction predicts he would. As a test,
this lesson explores whether participants in a laboratory play
the way game theory predicts or behave in a way that’s more
consistent with common sense.
CENTIPEDE
• The first example is a classic called centipede. In this version of the game,
there are six rounds. In each round, one of two players decides whether
to stop or to let the game continue.
• Let’s say you go first. If you choose to stop the game after the first round,
you get $0.40 and your opponent gets $0.10. If you choose to let the
game continue, your opponent decides in the second round. If he chooses
to stop the game, he gets $0.80 and you get $0.20. If you choose to let
the game continue, you decide in the third round, and so on.
• Looking at the payoffs, the important thing to notice is that as you move
from one round to the next, your combined payoff keeps doubling. But
the way the payoffs are divided keeps alternating between an 80/20 split
favoring you and a 20/80 split favoring your opponent.
SOLUTION
• This game can be solved using backward induction, starting with the
last decision and working back to the first.
• If your opponent, as the second mover, finds himself at the game’s last
decision node, he can stop the game and earn a $12.80 payoff or let
the game continue and earn just $6.40. Assuming the only thing he
cares about is the money he’ll receive, his choice is easy: He should stop
the game.
• Next, take a step backward and think about what you should do in the
fifth round. You can stop the game and earn a payoff of $6.40, or you can
let the game continue knowing that your opponent will stop the game
in the next round, in which case you’ll earn $3.20. Assuming the only
thing you care about is the amount of money you receive, your choice is
easy: You should stop the game.
• In the fourth round, your opponent can stop the game and earn $3.20,
or he can let the game continue knowing you’ll stop it in the next round,
in which case he’ll earn $1.60. He should stop the game.
• By similar reasoning, you should stop the game in the third round, your
opponent should stop the game in the second round, and you should
stop the game in the first round. This means the equilibrium outcome
is the one where you earn just $0.40 and your opponent earns $0.10.
• Remember that in sequential games like this one, a strategy is a detailed
set of instructions describing what you’d do at every decision-making
point you might find yourself at. Your equilibrium strategy is to always
play stop, and your opponent’s equilibrium strategy is to always play stop.
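Backward induction over all six rounds can be automated. In this sketch, round 3's payoffs are inferred from the doubling 80/20 pattern; the rest come directly from the figures quoted above.

```python
# Six-round centipede. stop_payoffs[r] = (your payoff, opponent's payoff)
# if the game stops in round r. You decide in odd rounds, your
# opponent in even rounds.
stop_payoffs = {1: (0.40, 0.10), 2: (0.20, 0.80), 3: (1.60, 0.40),
                4: (0.80, 3.20), 5: (6.40, 1.60), 6: (3.20, 12.80)}
end_payoffs = (25.60, 6.40)   # if the game continues past round 6

def solve(r=1):
    """Payoffs reached if both players are rational from round r onward."""
    if r > 6:
        return end_payoffs
    mover = 0 if r % 2 == 1 else 1          # index of the player who decides
    stop, cont = stop_payoffs[r], solve(r + 1)
    return stop if stop[mover] >= cont[mover] else cont

print(solve())  # (0.4, 0.1): both players stop at their first opportunity
```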
• Even if the game were extended for many more rounds, so that players
who made it to the last round would be dividing up billions of dollars,
backward induction would give the same answer: Your equilibrium strategy
would still be to always play stop. And once again, the equilibrium
outcome would be the one where you earn $0.40 and your opponent
earns $0.10.
• Would rational players really play this way? Game theorists debated this
puzzle for more than a decade before Richard McKelvey and Thomas
Palfrey decided to have real people play the six-round version of the game
with real money at stake.
• The authors found that the game never ended in the first round, the way
backward induction would predict, but it also virtually never went all
the way to the end. The most common outcome was for participants to
stop the game in the fourth or fifth round.
• McKelvey and Palfrey proposed that some players care not only about
their own payoff but also the payoff of the other player. And if you're
interested in maximizing both payoffs, you want to see the game continue
all the way to the end.
ALTRUISM IN CENTIPEDE
• Suppose you’re playing the game again, and this time you’re purely
self-interested. If you think there’s a 5% chance your opponent is an
altruist, it might actually be rational for you to choose to continue the
game. You might be able to play all the way to the end, in which case
you earn a payoff of $25.60. Your opponent might stop the game in the
second round, in which case you’d earn just $0.20, but a 5% chance of
getting $25.60 more than offsets the risk of losing $0.20.
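As a rough check of that claim, treat continuing as a gamble between the best case (the game reaches the end) and the worst case (your opponent stops immediately). This is a deliberately crude sketch that ignores the intermediate outcomes.

```python
# Expected-value comparison for continuing in round 1, assuming a 5%
# chance the opponent is an altruist who lets play reach the end
# (you earn $25.60) and a 95% chance he stops in round 2 (you earn $0.20).
p_altruist = 0.05
ev_continue = p_altruist * 25.60 + (1 - p_altruist) * 0.20
ev_stop = 0.40  # your payoff from stopping the game right away

print(ev_continue, ev_continue > ev_stop)
```

The expected value of continuing works out to about $1.47, well above the $0.40 from stopping, which is why even a small chance of altruism keeps the game alive.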
COMMON KNOWLEDGE
• From this example, you can see that it’s not enough that players be
rational and self-interested. In order to end up at the outcome backward
induction predicts—the one where you stop the game in the first round—
both players’ rationality and self-interest need to be common knowledge.
• If your rationality and self-interest aren’t common knowledge—if, for
example, each of you thinks there’s some small chance the other is an
altruist—the game shouldn’t end in the first round.
• But maybe not everybody is a rational game theorist who can instantly
solve this kind of game using backward induction. Maybe most
people look at this game and see the biggest payoffs are at the end,
so they realize the only way to get to the end is to continue in the
early rounds.
• It would be nice, then, if we could figure out whether people let the
game continue for several rounds due to a lack of common knowledge
of rationality and self-interest or due to a lack of the cognitive firepower
needed to solve the game.
• One field experiment did exactly that, having expert chess players, who
are well practiced in backward induction, play centipede against college
students. When a chess player moved first against a student, he was only
half as likely to stop the game in the first round as when he played
against another chess player. The chess player still understands
backward induction, but he can't be sure the student he's paired with
understands it.
• This is a curious result. Imagine you’re the student. The game is more
likely to continue—meaning you make more money—if the chess player
assumes you don’t understand backward induction. It doesn’t happen
often, but there are situations where you want to be underestimated.
READINGS
McKelvey and Palfrey, “An Experimental Study of the
Centipede Game.”
Palacios-Huerta and Volij, “Field Centipedes.”
Selten, “The Chain Store Paradox.”
Suri, “The Nukes of October.”
QUESTIONS
1. Assuming you’re an altruist and your opponent cares only about his
own monetary payoff, use backward induction to find the equilibrium
to the following version of centipede.
2.	In the classic version of the chain store game, a rational incumbent firm
has an incentive to respond passively in every city where a competitor
enters. Imagine the following variation on the game where the
incumbent derives so much satisfaction from acting tough that its
payoff from responding aggressively is greater than its payoff from
responding passively. Use backward induction to find the equilibrium
for this single-city version of the game.
LESSON 8: ASYMMETRIC INFORMATION IN POKER AND LIFE
This lesson focuses on games where one person knows
something that others don’t. In some cases, like poker, this
can benefit the person who has the private information.
If you’re bluffing, for example, you don’t want your opponents
to know. But in other cases, like when you’re selling a nice used
car, you might wish others knew the private information—such
as just how good the car is.
STRIPPED-DOWN POKER
• The simple version of poker in this example was developed by David
Reiley, Michael Urbancic, and Mark Walker. It has just two players
and a deck of eight cards—four kings and four queens. You and your
opponent each put $1 into the pot. Your opponent is then dealt a card,
and she decides whether to fold or bet.
• If she folds, the game ends, and you collect the $2 pot, making you $1
richer. If she bets, she adds $1 to the pot, and you have to decide whether
to fold or call.
• If you fold, she collects the $3 pot, making her $1 richer. If you call, you
put $1 in the pot, and she has to show you her card.
• If it’s a king, she wins and collects the $4 pot, making her $2 richer. If
it’s a queen, you win and collect the $4 pot, making you $2 richer.
• Imagine a game where your opponent is dealt a card and decides to bet.
You now have to decide whether to fold or call. Your opponent clearly
knows something you don’t—she knows if she’s holding the winning
card. She could have bet because she knows she has the winning hand,
or she could be bluffing, hoping you’ll fold.
• If you choose to fold, the game ends and she collects the $3 pot, making
her $1 richer. But if you call, you put in another dollar, and she then
shows you her card. It’s a king, so she wins and collects the $4 pot,
making her $2 richer.
• The deck isn’t stacked in your opponent’s favor, but the game still isn’t
fair, because it’s a game of asymmetric information. Your opponent
knows something you don’t know, which gives her an edge.
GAME TREE
• To see why your opponent has an edge, you can draw a game tree.
• The first mover in this game isn’t your opponent; rather, it’s nature.
When game theorists talk about nature making a move, they mean the
game has an element of chance. In this game, that means nature decides
whether your opponent is dealt a king or a queen.
• Once nature moves, your opponent sees the card and decides whether
to fold or bet. No matter which card she’s dealt, if she folds, the game
ends with her earning a payoff of −1 and you earning a payoff of 1. And
if she bets, you need to decide whether to fold or call.
• If you fold, the game ends with her earning a payoff of 1 and you earning
a payoff of −1. And that’s true no matter which card she has in her hand.
• The game is most interesting when your opponent bets and you call.
Only in that case does she show you her card. If she has the king, the
game ends with her earning a payoff of 2 and you earning a payoff of
−2. If she has the queen, the game ends with her earning a payoff of −2
and you earning a payoff of 2.
• At this point, it would be tempting to use backward induction to find
the subgame perfect equilibrium, but there are two things that make
this impossible.
ʶ Nature moves first and at random. Because nature doesn’t care about
payoffs or strategies, you can’t use backward induction to decide what
nature should do in the first move.
ʶ You don’t know which decision node you’re at when you have to
decide whether to fold or call. Your opponent knows which card she
has, but you don’t.
• This information asymmetry changes the way the game tree is drawn.
In particular, we show that you don’t know which of the two decision nodes
you’re at by drawing them inside one information set.*
• Since you have just the one information set, you can’t use backward
induction to say what you’d do if your opponent has a king and what
you’d do if she has a queen—only she knows that.
PAYOFFS
• To solve this game, you need to convert the game tree to a payoff
matrix. But first, you need to think about how many strategies each
player has.
• Because you only have one decision-making point, and you only have
two possible choices at that point, you only have two potential strategies:
fold or call.
• Things are more complicated for your opponent. She has two decision
nodes: the one where nature deals her a queen and the one where nature
deals her a king. At each of those decision nodes, she has two possible
choices: fold or bet. This means she has four potential strategies:
ʶ Always bet
ʶ Bet with a king and fold with a queen
ʶ Fold with a king and bet with a queen
ʶ Always fold
• This means you’ll have a 2×4 payoff matrix representing each of your
payoffs given your two potential strategies and your opponent’s four
potential strategies.
• The easiest row to fill out is the one where your opponent always folds.
It doesn’t matter whether you’d planned to fold or call, because you
never get to the point of making a decision. The game ends with your
opponent $1 poorer and you $1 richer.
• What if she always bets? If you fold, it doesn’t matter which card she was
dealt. Either way, the game ends with her $1 richer and you $1 poorer.
• If you call, you have to think about your expected payoffs. Half the
time, nature will have dealt your opponent a king. She’ll collect the
pot, meaning she’s $2 richer and you’re $2 poorer. The other half the
time, she’s dealt a queen. You collect the pot, meaning she’s $2 poorer
and you’re $2 richer.
• Your opponent’s expected payoff is the probability she’s dealt a king
times her payoff if she’s dealt a king, plus the probability that she’s dealt
a queen times her payoff if she’s dealt a queen:
1/2 × $2 + 1/2 × (−$2) = 0.
• That’s an expected payoff of 0. Your expected payoff would be the mirror
image—also 0. You can use similar expected-payoff calculations to fill
in the remaining four cells.
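The bookkeeping for all eight cells can be automated. A sketch of the calculation, using exact fractions to avoid rounding, with the terminal payoffs exactly as described above:

```python
from fractions import Fraction

# Your payoff at each terminal node of stripped-down poker.
def your_payoff(card, her_move, your_move):
    if her_move == "fold":
        return 1                      # she forfeits her $1 ante
    if your_move == "fold":
        return -1                     # you forfeit your $1 ante
    return -2 if card == "K" else 2   # showdown: a king wins, a queen loses

half = Fraction(1, 2)                 # kings and queens are equally likely
her_strategies = {
    "always bet":    {"K": "bet",  "Q": "bet"},
    "bet K, fold Q": {"K": "bet",  "Q": "fold"},
    "fold K, bet Q": {"K": "fold", "Q": "bet"},
    "always fold":   {"K": "fold", "Q": "fold"},
}

# Expected payoff to you for every cell of the 2x4 matrix.
matrix = {}
for name, plan in her_strategies.items():
    for yours in ("fold", "call"):
        matrix[yours, name] = sum(
            half * your_payoff(card, plan[card], yours) for card in "KQ")

for key, ev in sorted(matrix.items()):
    print(key, ev)
```

Running this reproduces the cells discussed below: calling against "always bet" is worth 0, calling against "bet with a king, fold with a queen" is worth −1/2, and so on.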
BEST-RESPONSE ANALYSIS
• If you think your opponent is always going to bet, what’s your best
response? If you fold, you’re sure to earn a payoff of −1. If you call, you
earn an expected payoff of 0. Zero is greater than −1, so you should call.
• What if you think she’s going to bet with a king and fold with a queen?
You earn an expected payoff of −1/2 if you call and 0 if you fold, so you
should fold. If she’s only betting when she has the winning hand, it’s
not smart for you to call.
• What if she folds with a king and bets with a queen? This would be
a strange strategy: When she bets, she’s always bluffing. Clearly, you do
much better if you call.
• Finally, if she always folds, you’re indifferent between your two strategies
because no matter what you planned to do, the game ends with you
collecting the pot before you actually have to make a decision.
SOLUTION
• At this point, it’s clear that there is no pure-strategy Nash equilibrium.
But this kind of game has to have at least one Nash equilibrium, so this
one must involve mixed strategies.
• Two of your opponent’s four strategies are dominated, meaning they
offer a worse payoff than another strategy. For example, no matter what
she thinks you’re going to do, folding with a king and betting with
a queen earns her a lower payoff than always betting. Similarly, no matter
what she thinks you’re going to do, always folding earns her a lower
payoff than always betting. We can confidently assume your opponent
will never play a dominated strategy.
•	Using the same method you used in lesson 5, you can quickly show that this game’s mixed-strategy Nash equilibrium has your opponent playing always bet one-third of the time and bet with a king, fold with a queen the remaining two-thirds of the time, while you call two-thirds of the time and fold the remaining one-third of the time.
• Calculating both players’ expected payoffs at the equilibrium will show
just how unfair this game is to you. Just like in lesson 5, your opponent
chooses the probabilities with which she plays her strategies to leave you
indifferent between yours. That means that your expected payoff if you
fold will be the same as if you call.
• Suppose you call. One-third of the time, your opponent always bets, in
which case your expected payoff is 0. The other two-thirds of the time, she
bets with a king and folds with a queen, in which case your payoff is
−1/2. That means your expected payoff is −1/3:
1/3 × 0 + 2/3 × (−1/2) = −1/3.
• Your expected payoff if you fold is also −1/3. Either way, you expect to
lose an average of $0.33 each time you play a hand of this game because
your opponent has a critical piece of information that you don’t know.
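You can verify both claims in a few lines, reusing the expected payoffs already worked out above (−1 and 0 against always bet, 0 and −1/2 against bet with a king, fold with a queen):

```python
from fractions import Fraction as F

third, two_thirds = F(1, 3), F(2, 3)

# Your expected payoffs against her two undominated strategies,
# taken from the payoff matrix derived earlier in the lesson.
payoff = {("fold", "always bet"): F(-1),    ("call", "always bet"): F(0),
          ("fold", "bet K, fold Q"): F(0),  ("call", "bet K, fold Q"): F(-1, 2)}

# She plays "always bet" 1/3 of the time, "bet K, fold Q" 2/3 of the time.
ev_call = third * payoff["call", "always bet"] + two_thirds * payoff["call", "bet K, fold Q"]
ev_fold = third * payoff["fold", "always bet"] + two_thirds * payoff["fold", "bet K, fold Q"]

print(ev_call, ev_fold)   # both -1/3: you're indifferent, and losing on average
```

Her mix leaves you exactly indifferent between calling and folding, and either way you lose a third of a dollar per hand on average.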
• Is it wise to offer $1,050 if there’s a 50% chance the car is a plum and
50% chance it’s a lemon? In that case, there’s a 50% chance you’ll pay
a fair price for a plum, earning what can be called $50 in profit. But
there’s a 50% chance you’ll radically overpay for a lemon, earning you
a profit of −$850.
• What if you based your offer on the car’s expected value? If there’s a 50%
chance it’s a lemon and 50% chance it’s a plum, its expected value is 1/2
× $200 + 1/2 × $1,100, or $650.
• Would offering something a little less than that—say, $600—earn you
a modest profit? Think about how the seller would respond to a $600
offer if the car is a plum. In that case, he’s not willing to sell for less than
$1,000, so he’ll reject the offer. But if it’s a lemon, he’ll be delighted to
sell it for $600, and you’ll still be overpaying for a lemon.
• This leaves offering a lemon price, like $150. At that price, you know
you won’t be able to buy a plum, but you also know you won’t overpay
for a lemon. Asymmetric information destroys the market for plums,
leaving only a market for lemons.
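The offer-by-offer reasoning can be summarized in a short sketch. The buyer values come from the example ($1,100 for a plum, $200 for a lemon); the sellers' reservation prices below are illustrative assumptions consistent with the text (a plum seller won't take less than $1,000, and a lemon seller will accept $150):

```python
BUYER_VALUE = {"plum": 1_100, "lemon": 200}   # from the example
SELLER_MIN = {"plum": 1_000, "lemon": 150}    # assumed reservation prices

def expected_profit(offer, p_plum=0.5):
    """Your expected profit from an offer: the seller accepts only if
    the offer meets his reservation price for the car he actually has."""
    profit = 0.0
    for car in ("plum", "lemon"):
        prob = p_plum if car == "plum" else 1 - p_plum
        if offer >= SELLER_MIN[car]:          # seller accepts the offer
            profit += prob * (BUYER_VALUE[car] - offer)
    return profit

for offer in (1_050, 600, 150):
    print(offer, expected_profit(offer))
```

Only the lemon-price offer earns a nonnegative expected profit, which is the precise sense in which asymmetric information destroys the market for plums.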
SIGNALING
• This phenomenon was originally described by George Akerlof in his
paper “The Market for ‘Lemons.’” This work won Akerlof the Nobel
Prize in 2001, which he shared with Michael Spence, who showed that
one way to overcome asymmetric information is through signaling—in
this example, introducing a third party to inspect the used car.
• Rather than asking if the car is a lemon or a plum, you ask the seller if he’d
be willing to pay $20 to have the car inspected. If the car is a plum, he’d be
happy to pay for the inspection because he knows the car will pass. This will
allow him to sell it to you for a plum price, leaving both of you better off.
• If the car is a lemon, the seller won’t agree to pay for the inspection. It’s
better for him to save $20 and let you assume the worst—which happens
to be true. You can still find a price you both agree on, but it will be a much
lower price than he’d get if he could credibly signal that his car is a plum.
READINGS
Akerlof, “The Market for ‘Lemons.’”
Caplan, Bryan. “Bryan Caplan—The Case Against Education.”
Reiley, Urbancic, and Walker, “Stripped-Down Poker.”
QUESTIONS
1. True or false: You can use backward induction to solve the following
version of stripped-down poker.
LESSON 9: DIVIDE AND CONQUER: SEPARATING EQUILIBRIUM
The previous lesson discussed games of asymmetric
information, where one player has some critical piece of
information that another player doesn’t. This lesson gives
two examples where the right set of incentives will motivate
the informed player to honestly reveal her private information.
.
• The B superscripts represent the business traveler, and the F and C
subscripts represent first class and coach.
• A business traveler’s payoff from flying in either cabin is the difference
between what she’s willing to pay and the price she actually pays. You
can see that here, where π stands for payoff and P is the ticket price:
π_F^B = WTP_F^B − P_F and π_C^B = WTP_C^B − P_C.
• Now, assume a leisure traveler—who, remember, is spending his own
money—is willing to pay $600 for a seat in first class and $500 × α for
a seat in coach. More formally, you can write his willingness to pay for
flying in the two cabins as
WTP_F^L = $600 and WTP_C^L = $500 × α.
• The L superscripts represent the leisure traveler, and the F and C
subscripts again represent first class and coach.
• As with the business traveler, the leisure traveler’s payoff from flying in
either cabin is the difference between what he’s willing to pay and what
he actually pays.
SEPARATING EQUILIBRIUM
• As an executive at American Airlines, you’re all too aware that the airline
industry is fiercely competitive, meaning you have no control over ticket
prices. Suppose the going price for a first-class ticket from New York
to Los Angeles is $1,600, and the price for a coach ticket on that same
flight is $300.
• If you could fill your plane with first-class passengers paying $1,600
each, you’d do it. Unfortunately, there aren’t enough people willing to
pay that much. That means you’re going to have to fill most of your seats
with passengers paying the much lower fare.
• While you don’t control ticket prices, one thing you do control is α. You
can lower α by installing less comfortable seats in coach, moving those
seats closer together, providing less-appetizing snacks, and limiting in-
flight entertainment options.
• What makes this game harder for you is that it’s a game of asymmetric
information. Travelers know their type—and, therefore, their willingness
to pay—but you do not. Your job, then, is to choose a value for α that
creates a separating equilibrium where business travelers truthfully
reveal their high willingness to pay by buying first-class tickets and
leisure travelers truthfully reveal their low willingness to pay by buying
coach tickets.
• This won’t be easy. Your choice of α will have to satisfy incentive
compatibility constraints and participation constraints for both types
of travelers. Incentive compatibility constraints are conditions that give
the informed player an incentive to truthfully reveal their type, while
participation constraints ensure that informed players’ payoffs from
playing the game are at least as high as the payoffs they’d receive if they
didn’t play.
π_F^B ≥ π_C^B.
• Remember, a business traveler’s payoff is just her willingness to pay
minus the price she has to pay:
WTP_F^B − P_F ≥ WTP_C^B − P_C.
• Rearranging to solve for α, you get
.
• This means that coach can’t be too nice, or business travelers will choose
to fly coach. If a seat in coach is nearly as nice as one in first class but
dramatically less expensive, even somebody on an expense account will
opt for coach.
• Next, you’ll do something similar for the leisure traveler. What has to
be true of α for him to truthfully reveal his low willingness to pay by
choosing the less expensive coach ticket?
• Again, use his payoff functions to set up an inequality such that he
prefers the less expensive coach ticket:
π_C^L ≥ π_F^L.
• As with the business traveler, the leisure traveler’s payoff is just his
willingness to pay minus the price he has to pay:
π_F^L = WTP_F^L − P_F and π_C^L = WTP_C^L − P_C.
• Making a few substitutions gives you
$500 × α − $300 ≥ $600 − $1,600.
• Rearranging, you get
α ≥ −1.4.
• This might be surprising, since we said that α has to fall between 0 and
1. But what this result means is that given the prices of the two tickets,
there’s no realistic value of α that will motivate a leisure traveler to pay
for a first-class ticket.
PARTICIPATION CONSTRAINTS
• What’s more important from the leisure traveler’s perspective is the
participation constraint. If you make flying coach too miserable, the
leisure traveler will choose to not play the game—he’ll take the train,
or drive, or just stay home.
• Set up the leisure traveler’s participation constraint by comparing his
payoff from flying coach with his payoff if he chooses not to fly, which
we’ll say is 0:
π_C^L ≥ 0.
• Again, his payoff is his willingness to pay minus the price he pays:
$500 × α − $300 ≥ 0.
• Rearranging this inequality, you get
α ≥ 0.6.
• This means that α must be greater than 0.6, or the leisure traveler will
stay home.
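A quick numeric check of this constraint, using the figures from the example (coach priced at $300, valued at $500 × α by the leisure traveler; exact fractions keep the α = 0.6 boundary precise):

```python
from fractions import Fraction as F

COACH_PRICE = 300                       # coach ticket price from the example

def leisure_coach_payoff(alpha):
    """Leisure traveler's payoff from coach: he values it at $500 * alpha."""
    return 500 * alpha - COACH_PRICE

assert leisure_coach_payoff(F(6, 10)) == 0   # exactly indifferent at alpha = 0.6
assert leisure_coach_payoff(F(7, 10)) > 0    # coach is bearable; he flies
assert leisure_coach_payoff(F(5, 10)) < 0    # too miserable; he stays home
```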
• The last step is to find the business traveler’s participation constraint.
What has to be true so the business traveler will prefer flying first class
to staying home?
• Her payoff from flying first class needs to be greater than 0. Her payoff
is her willingness to pay minus the price she pays, which is $2,000 −
$1,600, or $400. This will always be true in this example, so the business
traveler will be willing to play the game.
SOLUTION
• Now that you’ve done this work, you can answer the question of just how
bad flying coach has to be. In other words, for what values of α do you
get a separating equilibrium where business travelers reveal their high
willingness to pay by buying first-class tickets and leisure travelers reveal
their low willingness to pay by buying coach tickets?
PAYOFFS
• In order to understand why this is an equilibrium, it’s important to
understand the incentives borrowers and lenders face.
• Imagine you’re the lender and another player is the borrower. What’s
your expected payoff from issuing a loan, assuming both of you
behave honorably?
• Remember, there’s a small probability the project fails. In that case,
you lose the money you invested in the project, and if you want to
preserve your honor, you have to challenge the borrower to a duel. That’s
obviously costly—you might be killed, or you might have to live with
the fact that you’ve killed another person.
• Fortunately, there’s a much larger probability that the project succeeds,
in which case you get back the money you invested plus your share of
the profits.
READINGS
Caplan, “Bryan Caplan—The Case Against Education.”
Leeson, “Ordeals.”
QUESTIONS
1. True, false, or uncertain: A more dangerous—and thus more costly—
duel will always be more effective at creating a separating equilibrium
where only men of honor participate in financial markets, since a more
dangerous duel will give borrowers a stronger incentive to pay back
their loans.
2.	Suppose Toyota discovers it’s cheaper to add a certain feature to every
version of its Camry sedan rather than make different versions of the
car, some with the feature and some without. Why might Toyota still
choose to reserve that feature for the more luxurious versions of the
Camry? Relate this to the airline example from this lesson.
LESSON 10: GOING ONCE, GOING TWICE: AUCTIONS AS GAMES
You’re probably familiar with fast-talking livestock
auctioneers, and there’s a good chance you’ve bought
something at auction on eBay. These are examples
most people imagine when they think of auctions. But auctions
are everywhere—indeed, you’re participating in them all the
time, often without even knowing.
SILENT AUCTIONS
• Most people take out a loan when they buy a new car. If a person stops
making payments on the loan, the lender will have the car repossessed.
In many cases, the car will then be sold at auction. Most of the buyers
will be used car dealers, but individuals are sometimes allowed to bid.
Suppose you find the car you have always wanted, so you decide to bid
in what turns out to be a silent auction.
• In a first-price sealed-bid auction, or simply a first-price auction, everyone
privately writes down a bid. The winner is the person who submits the
highest bid, and she pays a price equal to the bid she submitted.
• It’s a sealed-bid auction because the bids are private—you don’t get to see
your rival’s bid, and she doesn’t get to see yours. It’s a first-price auction
because the winner pays the highest bid, which you can think of as the
first price on the list if you were to rank the bids from highest to lowest.
• In a second-price sealed-bid auction, you and your rivals still write down
your bids privately, and the winner will still be the person who submits
the highest bid. The only difference is that the winner pays a price equal
to the second-highest bid submitted.
• The second-price auction is demand revealing, while the first-price
auction is not. When an auction is demand revealing, it’s in bidders’
best interest to bid their true willingness to pay.
SOLUTION
• Now, you can use best-response analysis to see why the second-price
auction is demand revealing.
• If you think the highest rival bid will be A, you’re indifferent between
the three bids you’ve considered. You lose no matter what.
• If you think the highest rival bid will be B, your best response is to either
bid truthfully or to bid low, because bidding high leads to winning the
auction but overpaying.
• If you think the highest rival bid will be C, your best response is to either
bid truthfully or to bid high, because bidding low leads to missing out
on what would be a good deal.
• Finally, if you think the highest rival bid will be D, you’re again
indifferent between the three bids you’ve considered. This time, you
win the auction no matter what.
• This means bidding truthfully is your weakly dominant strategy.
In general, this is a strategy that is sometimes strictly your best
response, sometimes ties with one or more alternatives, and is never
worse than any other strategy.
• In this example, submitting a bid higher or lower than your true
willingness to pay will never help you, but it can hurt you. You can do
no better than to simply bid what you’re truly willing to pay. This is
because second-price auctions—and other demand-revealing auctions—
separate what you pay from what you say: The price you pay if you win
the auction isn’t determined by what you bid.
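That separation can be demonstrated mechanically. A sketch below sweeps over possible highest rival bids and confirms that, with an illustrative $50,000 value, truthful bidding is never beaten by shading down or bidding up:

```python
def second_price_payoff(my_bid, rival_high, value):
    """Payoff in a second-price auction: you win if you outbid the highest
    rival, and you then pay the rival's bid, not your own."""
    return value - rival_high if my_bid > rival_high else 0

VALUE = 50_000   # your true willingness to pay, as in the lesson's example

# Against every rival bid level, truthful bidding is never beaten:
for rival in range(0, 100_001, 2_500):
    truthful = second_price_payoff(VALUE, rival, VALUE)
    for alt in (VALUE - 10_000, VALUE + 10_000):   # shading vs. overbidding
        assert truthful >= second_price_payoff(alt, rival, VALUE)
print("truthful bidding is weakly dominant on this grid")
```

Shading only ever costs you good deals; overbidding only ever wins you overpriced ones.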
• The second bid you’ll consider is $50,000, which is your true willingness
to pay. If you value your dream car at $50,000, what would your payoff
be if you win the auction with a $50,000 bid?
ʶ You can’t be precise about your probability of winning the auction
now, but it’s surely greater than it would be if you bid $0. But your
payoff if you do win the auction is 0: There’s no profit in paying
$50,000 for something that’s worth $50,000. So your expected payoff
is again $0.
• What if you bid somewhere between $0 and $50,000?
ʶ Your probability of winning won’t be as high as it would be if you
bid $50,000, but it will still be positive. And your payoff if you do
win would also be positive, meaning your expected payoff would
be positive. Somewhere in that range, there’s an optimal bid—call
it b*—that strikes an optimal balance between your probability of
winning and your payoff if you do win.
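The trade-off can be made concrete under an assumed setting the text doesn't pin down: suppose a single rival whose bid is uniformly distributed between $0 and $50,000, so your chance of winning with bid b is b/50,000:

```python
VALUE = 50_000   # your true willingness to pay

def expected_payoff(bid, rival_max=50_000):
    """Assumed setting: one rival bidding uniformly on [0, rival_max].
    In a first-price auction, the winner pays her own bid."""
    win_prob = min(bid, rival_max) / rival_max
    return win_prob * (VALUE - bid)

# Search a grid of bids for the one maximizing expected payoff.
best_bid = max(range(0, VALUE + 1, 100), key=expected_payoff)
print(best_bid)   # 25000: b* sits well below your true value
```

Bidding $0 and bidding your full $50,000 both yield an expected payoff of zero, and the optimum b* lands strictly between them, just as the smooth curve in the lesson suggests.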
SOLUTION
• Drawing a smooth curve between these three points gives your expected
payoff as a function of your bid. And your optimal bid, b*, is right at
the top.
• Based on this graph, you don’t know exactly what your optimal bid
should be—that will depend on things like the number of people you’re
bidding against, your best guess as to what their bids might be, and
your own willingness to take risks—but you do know that b* is less
than $50,000.
• This means the first-price auction is not demand revealing. It will always
be in your best interest to submit a bid that’s less than your true value.
Since your bid determines the price you pay if you win the auction, this
time you have a strategic incentive to understate what you’re truly willing
to pay in the hope of getting a better deal.
* This is how fine art is sold at Sotheby’s and how hogs are sold at
the county fair.
• In the Dutch auction, the auctioneer starts at a high price and gradually
lowers it until someone agrees to pay the last price announced.**
• Here, you don’t want to stop the auction at a bid equal to what you’re
truly willing to pay—that’s like paying $10 for $10. Instead, you should
wait until the price has fallen to something less than that. Just like the
first-price sealed-bid auction, the Dutch auction is not demand revealing.
AUCTIONS IN LIFE
• Life is full of auctions. Recognizing what kind of auction you’re
participating in—and the incentives that auction presents you and other
bidders with—has important implications for how you should bid.
•	For example, think about making an offer on a house. On one side of the transaction you have the seller, who is probably currently living in the house. The bidders are potential buyers making offers on the house. It’s possible you’re the only bidder, or you might find yourself bidding against several other potential buyers—you don’t necessarily know.
•	This is most similar to the first-price auction because it isn’t demand revealing. And because your offer determines the price you pay if your offer is accepted, you have an incentive to understate what you’d truly be willing to pay for the house.

Which auction is the best for the seller? According to the revenue equivalence theorem, assuming certain conditions are met, all four of the auctions discussed in this lesson should bring in the same expected revenue.
** This is how fresh-cut flowers are sold in Amsterdam and how the
US government sells Treasury bonds.
READINGS
“The Division Problem.”
Lucking-Reiley, “Using Field Experiments to Test Equivalence
between Auction Formats.”
McAfee, McMillan, and Wilkie, “The Greatest Auction in History.”
Sun, “Divide Your Rent Fairly.”
QUESTIONS
1. Explain why the following statement is false: A seller will always earn
more money from a first-price sealed-bid auction than from a second-
price sealed-bid auction. After all, in the second-price auction, the
winner only pays the second-highest bid.
2.	Imagine you’re bidding on a rare antique rug that’s being sold at an
estate auction. Ann, Bob, and Cindy are bidding against you. The
following table shows how much each of you is truly willing to pay
for the rug.
You $4,000
Ann $3,000
Bob $2,000
Cindy $1,000
Complete the following table showing who wins and what he or she
pays in an English auction and a second-price sealed-bid auction.
              English auction    Second-price sealed-bid auction
Winner        _______            _______
Price paid    _______            _______
LESSON 11: HIDDEN AUCTIONS: COMMON VALUE AND ALL-PAY
This lesson explores why common-value goods are
different from private-value goods and why this
difference can lead to the winner’s curse, where winners
consistently end up overpaying. It also discusses the seemingly
strange but surprisingly common all-pay auction, where all
participants pay, even when they don’t win.
COMMON-VALUE GOODS
• Remember that the person who submits the highest bid in a second-price
sealed-bid auction wins and pays a price equal to the second-highest
bid submitted. If you’re bidding on a private-value good in one of these
auctions, such as a piece of furniture that might mean more to you than
to your rivals, you can do no better than to submit a bid equal to what
the good is truly worth to you.
• But imagine you’re bidding on a jar of pennies—or, more precisely, the
dollar value of the pennies inside the jar. This jar of pennies is an example
of a common-value good. It contains the same number of pennies no
matter who buys it.
• If you’re bidding on a common-value good in one of these second-
price sealed-bid auctions, bidding your best guess is not a good idea. If
everyone bids their best guess, the winner will be the person who had the
highest guess, which is almost surely an overestimate—as is the second-
highest guess, which determines the price the winner pays.
• Instead, you should bid as if you know that your guess is the highest
of any of the guesses. This is because the winner of a common-value
auction isn’t typically the person who made the most accurate guess; it’s
the person who made the worst guess, overestimating the value of the
pennies by a wider margin than anyone else.
COMMON-VALUE AUCTIONS
• Imagine a game where three players are each dealt one card from a deck
of cards. They then bid in a second-price auction for a pot of money,
the value of which is equal to the average value of the three cards drawn.
You can think of each player’s card as a signal of the value of the
pot. Because of the way the game is constructed, the guesses must be
accurate on average. But unless all three cards are the same, the highest
card must be an overestimate of the prize’s value, and the lowest card
must be an underestimate.
• To make the arithmetic easier, assume the deck is made up of just 11
cards: a joker, an ace, and the numbered cards two through 10. The joker
has a value of 0, the ace has a value of 1, and each of the other cards is
simply equal to the value on its face.
• You can also assume the cards are drawn with replacement, meaning
your card gets shuffled back in the deck after you see it. So if you draw
a certain card, that doesn’t mean no one else can draw it.
• If the three players draw an ace, a six, and an eight, the value of the prize
is the sum of the three cards divided by 3, or the average of the cards.
This equals $5.
•	Imagine you’re the player who is dealt an eight. That’s your signal, or your best guess, of the value of the pot. The second-price auction is
demand revealing, but that doesn’t mean you want to bid $8. That’s
because if you win the auction, you know you had the highest signal, so
you know the pot is worth less than $8.
• To see this, suppose each player bids her signal. You win the auction and
pay a price equal to the second-highest bid—in this case, $6—meaning
you pay more than the $5 the pot is worth. That’s the winner’s curse.
• So what’s the expected value of the pot given that you’ve drawn an eight?
Your eight plus the five you’d expect the second player to draw and the
five you’d expect the third player to draw—that’s 18—all divided by
3 equals 6.
• You also wouldn’t want to bid $6. Again, imagine the other players
behaved the same way, bidding the average of their draw from the deck
and two fives. That means the player who drew a six would bid $5.33.
And the player who drew an ace would bid $3.67.
• Once again, you’d have the highest bid, so you’d win the auction. You’d
pay a price equal to the second-highest bid, which in this case is $5.33.
But the pot is still worth just $5.
• With this more sophisticated way of formulating your bid, you’re not
overpaying as much, but you’re still overpaying. That means you’re still
suffering from the winner’s curse.
• Suppose instead that each player bids as if her own signal were the
highest of the three. From the standpoint of the player who drew a six,
the expected value of the pot if no one else
draws a card higher than a six would be the average of her own card plus
two more cards drawn from a deck with only the cards joker through six.
That’s 6 + 3 + 3, or 12, all divided by 3, which equals 4.
• So you win the auction, paying the second-highest bid—in this case, just
$4—meaning you’re finally able to avoid the winner’s curse.
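The lesson’s logic can be checked with a short Monte Carlo sketch (function and variable names are mine). It compares bidding your raw signal with the rule just described, bidding as if your signal were the highest, so a six bids $4:

```python
import random

random.seed(0)
DECK = range(11)  # joker = 0, ace = 1, then cards 2 through 10

def average_winner_profit(bid_rule, trials=100_000):
    """Average (pot value - price paid) for the winning bidder."""
    total = 0.0
    for _ in range(trials):
        cards = [random.choice(DECK) for _ in range(3)]
        pot = sum(cards) / 3
        price = sorted(bid_rule(c) for c in cards)[-2]  # second-price rule
        total += pot - price
    return total / trials

# Bid your raw signal vs. bid as if your signal were the highest
# (your card averaged with two expected draws from cards at or below it).
naive_profit = average_winner_profit(lambda c: c)
conditioned_profit = average_winner_profit(lambda c: (c + c / 2 + c / 2) / 3)
print(naive_profit, conditioned_profit)
```

With these rules, the naive winner’s average profit hovers near zero (overpayments like the one in the example are offset by lucky draws), while the conditioned bidder earns a clearly positive average margin.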
ALL-PAY AUCTIONS
• In all-pay auctions, everyone who submits a bid pays what they bid,
regardless of whether they win the auction. The person who submits the
highest bid either wins with certainty or, much more commonly, has the
highest probability of winning. But everyone has to pay what they bid.
• This may sound ridiculous, but people bid in all-pay auctions all the
time. For example, in lawsuits, each party’s bid is the amount it spends
on a legal team. The side that spends more on the larger and more expert
team may not be guaranteed to win the case, but that certainly increases
its chances of winning.
• Another example is elections. In the 2016 presidential election, Hillary
Clinton’s campaign spent roughly $1.2 billion, while Donald Trump’s
spent about $600 million. The prize is the presidency, and the campaigns
bid by spending on rallies, advertisements, and organizers. Donald
Trump won the election, but Hillary Clinton’s campaign still had to
pay its $1.2 billion bid.
TRIAL BY BATTLE
• Another example of an all-pay auction comes from a paper by Peter
Leeson. His specialty is explaining why what seem to be historical
curiosities are actually rational ways for people to solve problems in the
absence of a well-functioning government. This example focuses on the
custom of trial by battle in Norman England.
PROBABILITY OF WINNING
• Imagine you, as the tenant, own a piece of land that you value at £1.
And imagine another player, called the demandant, claims that same
land rightfully belongs to him, and he values it at £2. Assuming both
of you can convince a judge that you have a reasonable claim to owning
the land, the judge will order each of you to hire a champion to fight
on your behalf.
• Assume the probability that your champion wins the battle, which we’ll
call pT , equals the amount you spend divided by the amount you and the
demandant collectively spend:
pT = t / (t + d).
• In this equation, t is the amount you spend on your champion, and d is
the amount the demandant spends on his.
• Likewise, the probability that the demandant’s champion wins the battle,
which we’ll call pD, equals the amount he spends divided by the amount
you both collectively spend:
pD = d / (t + d).
• In other words, a more expensive champion is more likely to win the trial
by battle. If you both hire equally expensive champions—if t = d—then
you’re both equally likely to win. But if the demandant spends twice as
much on his champion as you spend on yours, his will win two-thirds
of the time.
PAYOFFS
• Your expected payoff, which is represented as E for expected value and
πT for the tenant’s payoff, equals the probability your champion wins
times your payoff if he wins, plus the probability your champion loses
times your payoff if he loses.
• You’ve already found the probability that your champion wins, and the
probability that your champion loses is just the probability that the
demandant’s champion wins.
• If your champion wins, you get to keep your land, so your payoff is the
value you place on the land, or £1, minus the amount you paid your
champion, or t. If your champion loses, you no longer have your land,
but you still have to pay your champion, so your payoff is simply –t.
• This all gives you
E(πT) = pT(1 − t) + pD(−t) = t/(t + d) − t.
• Using a few lines of calculus, you can show that the demandant should
spend twice as much on his champion as you spend on yours. In total,
you spend two-thirds of a pound on champions.
• Importantly, this is less than the demandant would have spent if he had
simply bought the land. You value the land at £1, so he’d have to pay
more than that to buy it.
• Trial by battle may seem absurd, but in the absence of a property market,
this violent all-pay auction was an effective way of getting land into the
hands of the people who valued it the most.
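A numerical sketch (names mine) recovers the same equilibrium by iterating each side’s best response over a grid of spending levels:

```python
# Candidate spending levels from £0.001 to £1.000 in £0.001 steps.
GRID = [i / 1000 for i in range(1, 1001)]

def best_response(opponent_spend, own_value):
    # Expected payoff: own_value * Pr(win) - own spending, where
    # Pr(win) = s / (s + opponent_spend), the contest success function.
    return max(GRID, key=lambda s: own_value * s / (s + opponent_spend) - s)

t, d = 0.5, 0.5  # tenant values the land at £1, the demandant at £2
for _ in range(50):
    t = best_response(d, own_value=1)  # tenant's best response
    d = best_response(t, own_value=2)  # demandant's best response
print(t, d, t + d)  # ≈ 0.222, 0.444, 0.667
```

The demandant spends twice as much as the tenant, and together they spend about two-thirds of a pound, just as the calculus predicts.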
READINGS
Leeson, “Trial by Battle.”
Thaler, “The Winner’s Curse.”
QUESTIONS
1. In the reality TV series Storage Wars, thrift store owners bid on the
contents of abandoned storage lockers. The winner of the auction then
tries to sell the contents at his or her store. Because bidders are only
allowed a quick peek at the contents before placing a bid, the winner
is often surprised by how much (or how little) the haul is worth. In
what ways is this like a common-value auction?
2 . The notion of an auction where everyone—not just the winner—pays
what he or she bids is so foreign that it may seem like the only sensible
strategy is to not participate. But that doesn’t have to be true.
Imagine you and another pharmaceutical company are fighting
over who should control the patent for a new drug. Because the
other company is larger than yours, the drug is worth more to it. In
particular, it stands to earn $18 million in profit from this new drug,
while your company would earn just $9 million.
Why might it be better for you to spend $2 million on your legal team
(while the other company spends $4 million on its legal team) than for
you to not contest the case and let the other company have the patent?
Similar to the trial by battle example, you can assume that if you spend
half as much on your legal team as your opponent does, you have one
chance in three of winning.
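One way to check the arithmetic the question sets up (a sketch, with hypothetical variable names):

```python
# All-pay patent fight from the question (amounts in $ millions).
# Win probability follows the trial-by-battle contest form.
your_value, your_spend, rival_spend = 9, 2, 4
p_win = your_spend / (your_spend + rival_spend)  # = 1/3, as stated

# In an all-pay auction you pay your bid whether you win or lose.
expected_payoff = p_win * your_value - your_spend
print(expected_payoff)  # ≈ 1 (> 0), better than the 0 from not contesting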
LESSON 12
GAMES WITH CONTINUOUS STRATEGIES
In almost every game you’ve seen up to this point, players have had
a fixed number of possible strategies. As long as the number of potential
strategies is countable, the game has discrete strategies. This lesson
focuses on games with continuous strategies. In these, players can
choose from an infinite number of potential strategies.

You won’t be able to use a payoff matrix for games with continuous
strategies. Instead, this lesson relies on just the most straightforward
differential calculus. You can still learn from this lesson’s examples
even if you choose not to worry about the equations.
PAYOFFS
• There are many countries whose commercial ships travel the Gulf
of Aden, and there are several countries whose gunboats help
patrol it. For this example, you can focus on the United States and
Great Britain.
• You can assume that because Great Britain is closer to the Gulf of Aden,
more of the products it produces and consumes travel through the gulf.
This means it derives twice the benefit the US does from the protection
gunboats provide.
• And because the United States has spent so much more over the years
on defense, you can also assume US gunboats are more technologically
sophisticated and, therefore, more effective at providing protection.
• A little more formally, the United States’ payoff from the effort it and
Great Britain devote to defending the gulf is

πUS = (2a + b + ab) − a²,

where a is the number of gunboats the United States deploys and b is
the number Great Britain deploys.
• The term in parentheses is the benefit the US enjoys from the gunboats
both countries send. Note that America’s gunboats count for twice as
much as Great Britain’s because of their superior technology. The term ab
is there because there are some aspects of the job that go more smoothly
when both countries contribute—perhaps the gunboats have different
relative strengths, meaning they complement one another.
• Finally, the squared term is the cost associated with sending gunboats
to the Gulf of Aden. Perhaps the US would have had one in the region
anyway, so the cost of the first gunboat is small. But as it sends more and
more, the US is forced to divert them from increasingly critical missions
elsewhere in the world.
• Great Britain’s payoff function looks similar:

πGB = 2(2a + b + ab) − b².
• Note, though, that the British get twice as much benefit from the
gunboats the two countries deploy. Again, that’s because Great Britain
is so much closer to the Gulf of Aden.
BEST-RESPONSE RULE
• This is a game with two players—the United States and Great Britain—
where payoffs are measured as the benefit from the protection the
gunboats provide minus the cost of deploying them.
• Each country has a continuum of strategies to choose from since each
can choose any value between zero and the total number of gunboats
in its navy. Because we’re allowing for fractional values, there are in
fact an infinite number of potential strategies for each country to
choose from.
• If you were to plot the United States’ payoff function for some given level
of effort on Great Britain’s part, you’d get a hill-shaped curve. The US
wants to be at the very top of this payoff function. The slope of its payoff
function at that maximum is 0.
• The next step is to take the derivative of the payoff function with respect
to the variable that’s under the United States’ direct control—the number
of gunboats it deploys:

dπUS/da = 2 + b − 2a.
• This expression tells you the slope of the payoff function for any values
of b, which the US can’t directly control, and a, which it can.
• Because you want to find the value of a at which the slope of the payoff
function is 0, set this derivative equal to 0 and solve for a:

2 + b − 2a = 0.
• Subtracting 2 + b from both sides and then dividing both sides by −2,
you get

a = 1 + b/2.
• This is what’s called a best-response rule. This rule tells you how
many gunboats the United States should deploy to the Gulf of Aden
in order to maximize its payoff given the number of gunboats Great
Britain sends.
• If, for example, Great Britain sends no gunboats, the United States
should deploy one gunboat. But if Great Britain sends four gunboats,
the United States should deploy three.
• It may seem counterintuitive that as Great Britain devotes more effort
to protecting commercial ships, the United States should devote more
effort as well. But remember, the countries’ gunboats complement one
another. With that in mind, America is willing to do more work when
it has more help.
• You can follow the same logic to find Great Britain’s best-response rule,
which is b = 1 + a.
NASH EQUILIBRIUM
• Next, you can use these two best-response rules to find the Nash
equilibrium for this game. Remember that at a Nash equilibrium, no
player can improve his own payoff by changing his own strategy. In other
words, at a Nash equilibrium, each player’s strategy is a best response to
the strategies chosen by every other player.
• You can apply that reasoning to this game by finding the values of a and
b that simultaneously satisfy both countries’ best-response rules.
• The first step will be to substitute Great Britain’s best-response rule in
for the b in the United States’ best-response rule:

a = 1 + (1 + a)/2.
• If you rearrange this equation and solve for a, you’ll find that a = 3.
• You can then use Great Britain’s best-response rule to find the number
of gunboats it should deploy to the Gulf of Aden when the United States
sends three. That’s 1 + 3 = 4.
• This is a Nash equilibrium because, given the amount of effort Great
Britain puts in, the United States can do no better than to deploy three
gunboats. And given the amount of effort the United States puts in,
Great Britain can do no better than to deploy four gunboats.
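The same equilibrium falls out of simply iterating the two best-response rules from the text until they settle:

```python
# Best-response rules: a = 1 + b/2 (United States), b = 1 + a (Great Britain).
a, b = 0.0, 0.0
for _ in range(60):
    a = 1 + b / 2  # United States' best response to b
    b = 1 + a      # Great Britain's best response to a
print(a, b)  # converges to the Nash equilibrium: a = 3, b = 4
```

Each round of responses moves both countries closer to the point where neither wants to change, which is exactly what a Nash equilibrium is.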
• This same kind of analysis can be applied to any situation where two
parties have to decide how much effort to devote to a joint project,
such as roommates deciding how much time to spend cleaning their
apartment, or countries deciding how much money to spend on reducing
greenhouse gas emissions.
PAYOFFS
• You can measure payoffs in terms of the expected benefit the antlers
provide minus the cost associated with them. If y is the size of your antlers
and m is the size of your opponent’s, you can assume the probability you
win the battle is

y / (y + m).
** In this context, polygyny refers to one male mating with more than
one female.
SOLUTION
• Like before, take the derivative of your expected payoff function
with respect to the variable that’s under your control—your antler
size—and set that derivative equal to 0. Solve for y to get your best-
response rule.
• The calculus is a bit more involved in this example. Because y appears
in both the numerator and the denominator of your probability of
winning, you have to use what’s called the quotient rule.
• You should find that your best-response rule is

y = √(cm) − m,

where c is the polygyny constant.
• By identical reasoning, your opponent’s best-response rule is

m = √(cy) − y.
• The Nash equilibrium in this game is when each of your strategies is
a best response to the other’s strategy. In other words, it’s the y and the
m that simultaneously satisfy both of these equations.
• You find this by plugging one of these best-response rules into the other.
After several lines of algebra, you should get
y = m = c/4.
• Your antlers should be as big as your opponent’s, and your opponent’s
antlers should be as big as yours. And the size of both of your antlers
depends on the polygyny constant. The more polygynous the species,
the more you’re willing to invest in antler size.
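As a sketch only: if the expected payoff takes the assumed form c·y/(y + m) − y, with c the polygyny constant (the lecture’s exact cost term isn’t reproduced here, so this functional form is an assumption), best-response iteration lands both players at c/4:

```python
import math

# Assumed payoff form: c * y / (y + m) - y, with c the polygyny constant.
# (The cost term here is an assumption, not necessarily the lecture's.)
c = 8.0  # a hypothetical value of the polygyny constant

def best_response(m):
    # Quotient rule on c*y/(y + m) - y gives c*m/(y + m)**2 = 1,
    # so the payoff-maximizing antler size is y = sqrt(c*m) - m.
    return max(math.sqrt(c * m) - m, 0.0)

y, m = 1.0, 1.0
for _ in range(100):
    y = best_response(m)
    m = best_response(y)
print(y, m)  # both converge toward c/4 = 2
```

Doubling the hypothetical constant c doubles the equilibrium antler size, matching the claim that more polygynous species invest more.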
READINGS
Basu, “The Traveler’s Dilemma.”
Frank, The Darwin Economy.
Pool, “Putting Game Theory to the Test.”
QUESTIONS
Grouper and octopuses, two carnivorous sea creatures, often hunt
together in coral reefs. The octopus reaches deep into the coral with
its tentacles to catch fish. Sometimes it succeeds, but more often, fish
swim out of the coral and into the waiting mouth of the grouper.
This isn’t exactly cooperation, since the octopus doesn’t benefit from
the grouper lurking nearby. With this in mind, the octopus’s payoff
function is
,
where is the number of hours the octopus spends hunting each day,
the term in parentheses is its benefit from hunting, and the squared
term is its cost of hunting.
Since the grouper does benefit from the octopus’s hunting effort, the
grouper’s payoff function is more complicated:
,
where is the number of hours the grouper spends hunting each day,
the term in parentheses is its benefit from hunting, and the squared
term is its cost of hunting.
1. Find the Nash equilibrium for this game.
2 . Calculate each sea creature’s payoff at the Nash equilibrium.
BIBLIOGRAPHY
“£66,885 Split or Steal?” YouTube video, https://www.youtube.com/
watch?v=yM38mRHY150. This four-minute video clip from the British
gameshow Golden Balls is a delightful real-life example of a prisoner’s
dilemma–style game with $100,000 at stake.
“The Division Problem.” Planet Money. Podcast audio, January 25, 2019.
https://www.npr.org/transcripts/688849249. NPR’s Planet Money team
talks about how auction-like markets can be used to divide rent between
roommates, pick which movie friends should see together, and better
allocate boat slips in Santa Barbara Harbor.
“What’s Left When You’re Right?” Radiolab. Podcast audio. September 5,
2019. https://www.wnycstudios.org/podcasts/radiolab/episodes/whats-
left-when-youre-right. This hour-long podcast dives deeper into the
British gameshow featuring real-life high-stakes prisoner’s dilemma
games. It focuses on one contestant’s attempt to twist the rules of the
game to his benefit.
Akerlof, George. “The Market for ‘Lemons’: Quality Uncertainty
and the Market Mechanism.” The Quarterly Journal of Economics 84,
no. 3 (1970): 488–500. Akerlof won a Nobel prize for showing how
asymmetric information can destroy the market for good products,
leaving only a market for lemons. This is true not only for used cars
but for insurance, loans, and job candidates.
Andersen, Steffen, Seda Ertaç, Uri Gneezy, Moshe Hoffman, and
John List. “Stakes Matter in Ultimatum Games.” American Economic
Review 101, no. 7 (2011): 3427–3439. In most laboratory experiments,
participants playing the ultimatum game are both more spiteful and more
generous than game theory would predict. The authors of this article
ask villagers in northeast India to divide a pot worth more than half of
the typical villager’s annual income. They find that when the stakes
are that high, people’s behavior is much more consistent with theory.
lawmakers today would agree were a success. That’s a short list, but the
FCC’s radio-spectrum auctions are on it. The authors of this chapter
argue that replacing spectrum lotteries with auctions struck the right
balance between bringing in revenue for the government and putting
spectrum rights into the hands of companies prepared to do the most
with this scarce resource.
McKelvey, Richard and Thomas Palfrey. “An Experimental Study of
the Centipede Game.” Econometrica 60, no. 4 (1992): 803–836. Game
theorists talked about the centipede game for years before McKelvey
and Palfrey decided to have people play the game in the laboratory. In
addition to presenting the results of their experiment, the authors put
forward an explanation for why people’s behavior isn’t consistent with
backward-induction thinking. The second half of the paper is quite
dense, but the first half is accessible.
Nasar, Sylvia. A Beautiful Mind. New York: Simon & Schuster, 1998.
Sylvia Nasar’s biography of John Nash, the game theory pioneer and
namesake of the Nash equilibrium, is a pleasure to read and is the basis of
the 2001 film starring Russell Crowe. While the book focuses primarily
on Nash’s life, it does include interesting insights into game theory, such
as the story behind the creation of the prisoner’s dilemma game.
Palacios-Huerta, Ignacio and Oscar Volij. “Field Centipedes.” American
Economic Review 99, no. 4 (2009): 1619–1635. These authors build on
McKelvey and Palfrey’s work by proposing another explanation for why
laboratory participants don’t behave how theory predicts when playing
the centipede game: lack of common knowledge of rationality. If one
player isn’t sure another will employ backward-induction thinking,
it’s not in his best interest to stop the game immediately. The authors
test their theory by playing the game with college undergraduates and
chess grandmasters.
Palacios-Huerta, Ignacio. “Professionals Play Minimax.” The Review
of Economic Studies 70, no. 2 (2003): 395–415. In another application
of mixed-strategy Nash equilibrium to professional sports, this article
focuses on penalty kicks in Europe’s top professional soccer leagues.
would lead to deep losses in that city but would allow the chain
store to maintain its monopoly in the other 19. Backward induction
suggests otherwise.
Smith, J. M. and G. R. Price. “The Logic of Animal Conflict.” Nature
246 (1973): 15–18. In this admirably concise paper, the authors present
a five-strategy version of the hawk-dove game from lesson 3, showing
that natural selection can lead to limited war, where animals rarely
seriously injure members of their own species.
Sun, Albert. “Divide Your Rent Fairly.” The New York Times, April 28,
2014. https://www.nytimes.com/interactive/2014/science/rent-division-
calculator.html. Are you having trouble deciding how to divide up the
rent between roommates? The New York Times is here to help with
this handy online calculator. But be careful: You get different results
depending on who goes first.
Suri, Jeremi. “The Nukes of October: Richard Nixon’s Secret Plan to
Bring Peace to Vietnam.” Wired, October 25, 2008. A recurring theme
in lesson 7 is that you sometimes benefit from appearing less rational
than you truly are. Did Richard Nixon send a squadron of nuclear-
armed B-52s racing toward Moscow because he wanted the Soviets to
think he was impulsive and volatile? Or did he do it because he was
impulsive and volatile?
Thaler, Richard. “The Ultimatum Game.” Chap. 3 in The Winner’s
Curse: Paradoxes and Anomalies of Economic Life. Princeton: Princeton
University Press, 1994. Thaler’s wonderfully readable survey of
behavioral economics includes chapters on more than a dozen topics
at the intersection of economics and psychology. This chapter focuses
on the ultimatum game, including the classic version covered in lesson
6 and more complicated variations.
———. “The Winner’s Curse.” Chap. 5 in The Winner’s Curse: Paradoxes
and Anomalies of Economic Life. Princeton: Princeton University Press,
1994. In this easily accessible chapter rich with real-world examples,
Thaler surveys the empirical literatures on common-value auctions and
the winner’s curse.
Van Huyck, John, Raymond Battalio, and Richard Beil. “Tacit
Coordination Games, Strategic Uncertainty, and Coordination Failure.”
American Economic Review 80, no. 1 (1990): 234–248. In this paper from
the top economics journal, the authors describe a laboratory experiment
involving a seven-strategy version of the stag hunt game. Over time,
participants are more and more likely to end up at the risk-dominant
equilibrium where everyone makes the minimum possible contribution
to a group fund, not the payoff-dominant equilibrium where everyone
behaves generously. This leads to the average participant’s payoff being
less than half of what it could be.
Walker, Mark and John Wooders. “Minimax Play at Wimbledon.”
American Economic Review 91, no. 5 (2001): 1521–1538. Studying
a concept like the mixed-strategy Nash equilibrium is only worthwhile
if it actually describes the way people behave. In this article, the authors
find that professional tennis players in top tournaments serve to their
opponent’s forehand or backhand with the probabilities game theory
predicts they should.
ANSWERS
LESSON 1
1.
               Colin
            Left      Right
Rose  Up    0, 3     10, 10
      Down  2, 1      5, 0

2 .
               Colin
            Left      Right
Rose  Up    2, 2      6, 1
      Down  1, 6      5, 5
LESSON 2
1. If Rose is playing tit-for-tat, that means she cooperates in the first round,
and then in all subsequent rounds copies whatever Colin did in the
previous round. If Colin always defects, then in the first round, Rose
will begin by cooperating and Colin will defect. Rose earns a payoff of
0 in the first round, and Colin earns a payoff of 3. In all subsequent
rounds, Rose will copy Colin’s strategy from the previous round (defect)
and Colin will defect. Each will earn a payoff of 1.
2 . If both Rose and Colin play tit-for-tat, both will cooperate in the first
round. In subsequent rounds, because each copies the other’s strategy
from the previous round, both players will continue to cooperate.
That means each player earns a payoff of 2 in the first round and in
subsequent rounds.
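Both answers can be reproduced with a small simulation. The per-round payoffs below are the ones used in these answers (mutual cooperation 2, mutual defection 1, a lone defector 3, a lone cooperator 0); the function names are mine:

```python
def play(strategy_rose, strategy_colin, rounds=10):
    """Simulate a repeated prisoner's dilemma; returns total payoffs."""
    payoff = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
              ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
    history_r, history_c = [], []  # each player's own past moves
    totals = [0, 0]
    for _ in range(rounds):
        move_r = strategy_rose(history_c)   # strategies see the opponent's history
        move_c = strategy_colin(history_r)
        history_r.append(move_r)
        history_c.append(move_c)
        p_r, p_c = payoff[(move_r, move_c)]
        totals[0] += p_r
        totals[1] += p_c
    return totals

tit_for_tat = lambda opp_history: opp_history[-1] if opp_history else 'C'
always_defect = lambda opp_history: 'D'

print(play(tit_for_tat, always_defect))  # [0 + 9*1, 3 + 9*1] = [9, 12]
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: [20, 20]
```

The first line shows tit-for-tat conceding only the first round to an always-defector; the second shows two tit-for-tat players cooperating forever.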
LESSON 3
1. False. Driving straight is your best response if you think your opponent
will swerve, but it’s not your best response if you think your opponent
will drive straight. In chicken, there is no one strategy that is always
your best response regardless of what you think your opponent will do.
That means you don’t have a dominant strategy.
2 . There is more than one correct answer to this question. What’s important
is that Rose prefers swerving to driving straight when she thinks Colin
will drive straight, and she prefers driving straight to swerving when she
thinks Colin will swerve. So Rose’s payoff in the upper left cell should
be less than 0, and her payoff in the lower right cell should be less than
2. By similar reasoning, Colin’s payoff in the upper left cell should be
less than 0, and his payoff in the lower right cell should be less than 2.
                   Colin
              Straight    Swerve
Rose  Straight  −1, −1     2, 0
      Swerve     0, 2     −1, 1
LESSON 4
1. This is an assurance game. That’s a coordination game with two pure-
strategy Nash equilibria, where both players clearly prefer the payoff at
one of those equilibria to the payoff at the other. In this case, the Nash
equilibrium where both Rose and Colin play blue offers both players
a higher payoff than the Nash equilibrium where they both play red.
2 . In the classic version of the stag hunt, you can catch the hare whether
or not the other player hunts the hare, but you can only catch the stag
if both of you hunt the stag. That means your payoff from hunting the
hare (or, in the current example, staying in and watching TV) is the
same regardless of what the other player chooses to do.
This game has two pure-strategy Nash equilibria. The Nash equilibrium
where both Rose and Colin watch TV risk dominates the Nash
equilibrium where both Rose and Colin go out for dinner and a movie,
because staying in is less risky for each player.
                           Colin
                      Watch TV    Dinner and a movie
Rose  Watch TV           1, 1          1, 0
      Dinner and
      a movie            0, 1          3, 3
LESSON 5
1.
Colin
130 130
Answers
()*%+
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
()*%+
()*%+
()*%+ $=
!"#$%&'"$$$==−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) ++ +2(1
2(12(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝)
𝐸𝐸"𝜋𝜋 !"#$%&'"
!"#$%&'"
()*%+
!"#$%&'" = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)
𝐸𝐸"𝜋𝜋!"#$%&'" $ = −1(𝑝𝑝) + 2(1 − 𝑝𝑝) ,
()*%+
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
()*%+
()*%+
$=
!,-#.-
()*%+ $$==0(𝑝𝑝)
0(𝑝𝑝)
0(𝑝𝑝) ++ +1(1
1(11(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝).
𝐸𝐸"𝜋𝜋 ()*%+ $ = 0(𝑝𝑝) + 1(1 − 𝑝𝑝)
!,-#.-
!,-#.-
!,-#.-
𝐸𝐸"𝜋𝜋!,-#.- $ = 0(𝑝𝑝) + 1(1 − 𝑝𝑝)
Setting these two expected payoffs equal to one another and solving
for p,
−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) ++ +2(1
2(12(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝)=
= =0(𝑝𝑝)
0(𝑝𝑝)
0(𝑝𝑝) ++ +1(1
1(1 1(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝)
−1(𝑝𝑝) + 2(1 − 𝑝𝑝) = 0(𝑝𝑝) + 1(1 − 𝑝𝑝)
−1(𝑝𝑝) + 2(1 − 𝑝𝑝) = 0(𝑝𝑝) + 1(1 − 𝑝𝑝)
223𝑝𝑝
− 3𝑝𝑝 1=− 1𝑝𝑝−𝑝𝑝𝑝𝑝
2− − 3𝑝𝑝 == 1
1−
2− 3𝑝𝑝 = − 𝑝𝑝
2 − 3𝑝𝑝 = 1 − 𝑝𝑝
=𝑝𝑝=
𝑝𝑝 𝑝𝑝 1/2..
=1/2.
1/2.
𝑝𝑝 = 1/2.
You could 𝑝𝑝 = 1/2.
use similar
5$""-# analysis to show that Rose is indifferent between
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
5$""-#
5$""-#
her 5$""-#
two pure $ =$ $=
strategies =1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
when−q− −
1(1 1(1
= 1(1 − −𝑝𝑝)
− That
1/2. 𝑝𝑝) 𝑝𝑝)
𝐸𝐸"𝜋𝜋 /-$01 3)# 45
/-$01 3)# 45
/-$01 3)# 45
5$""-#
/-$01 3)# 45 $ = 1(𝑝𝑝) − 1(1 − 𝑝𝑝)means the mixed-strategy
𝐸𝐸"𝜋𝜋
Nash equilibrium $is=
/-$01 3)# 45 1(𝑝𝑝)Rose
where − 1(1
drives−straight
𝑝𝑝) with probability 1/2
and swerves with probability 1/2, while Colin also drives straight with
5$""-#
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
5$""-#
5$""-#
probability
5$""-# $$$=
1/2$and
/-$01 3)# (6 = =−1(𝑝𝑝)
−1(𝑝𝑝)
swerves
−1(𝑝𝑝) ++
with + 2(1
2(1−− −𝑝𝑝)
probability
2(1 𝑝𝑝) 𝑝𝑝)
1/2.
𝐸𝐸"𝜋𝜋 /-$01 3)# (6
/-$01 3)# (6
5$""-#
/-$01 3)# (6 = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)
𝐸𝐸"𝜋𝜋
2 . /-$01 3)# (6
$ = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝) −− −1(1
1(1 1(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝)=
= =−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) ++ +2(1
2(1 2(1
−− −𝑝𝑝)
𝑝𝑝) 𝑝𝑝)
1(𝑝𝑝) − 1(1 − 𝑝𝑝) = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)
1(𝑝𝑝) − 1(1 − 𝑝𝑝) = −1(𝑝𝑝) + 2(1 − 𝑝𝑝) Batter
−1−1 −1
++ +2𝑝𝑝 2𝑝𝑝
== 2=−
223𝑝𝑝
− 3𝑝𝑝
−1 2𝑝𝑝
+ 2𝑝𝑝 = 2−− 3𝑝𝑝
3𝑝𝑝 Ready for
−1 + 2𝑝𝑝 = 2 − 3𝑝𝑝 Ready for
=𝑝𝑝= =3/53/5 a changeup
𝑝𝑝 𝑝𝑝
𝑝𝑝 3/5
= 3/5 a fastball (q)
(1 − q)
𝑝𝑝 = 3/5
Thro a
−1, 1 1, −1
fastball (p)
8%"9'-#
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋 Pitcher $$==−1(𝑞𝑞)
8%"9'-#
8%"9'-#
8%"9'-# $=
7'#), $ 45 −1(𝑞𝑞)
−1(𝑞𝑞) ++ +1(1
1(1 1(1
−− −𝑞𝑞)
𝑞𝑞) 𝑞𝑞)
𝐸𝐸"𝜋𝜋 7'#), $ 45
7'#), $ 45
8%"9'-#
7'#), $ 45 $= −1(𝑞𝑞)
Throw + 1(1 − 𝑞𝑞)
𝐸𝐸"𝜋𝜋 7'#), $ 45 $= −1(𝑞𝑞)
a changeup + 1(1 − 𝑞𝑞)
1, −1 −2, 2
(1 − p)
5$""-#
𝐸𝐸(𝜋𝜋
𝐸𝐸(𝜋𝜋
𝐸𝐸(𝜋𝜋
5$""-#
5$""-#
5$""-# )=
7'#), $ (6 )))= =1(𝑞𝑞)
1(𝑞𝑞)
1(𝑞𝑞) −− −2(1
2(1 2(1−−𝑞𝑞)
−can
𝑞𝑞) 𝑞𝑞)
𝐸𝐸(𝜋𝜋 7'#), $ (6
7'#), $ (6
𝐸𝐸(𝜋𝜋
Using7'#), $ (6
best-response
5$""-# = 1(𝑞𝑞) −
analysis, 2(1
you − 𝑞𝑞)
see there’s not a cell where
7'#), $ (6 ) = 1(𝑞𝑞) − 2(1 − 𝑞𝑞)
−1(𝑞𝑞)
−1(𝑞𝑞)
−1(𝑞𝑞) ++ +1(1
both payoffs
1(1 1(1
−−
are−
𝑞𝑞)𝑞𝑞)𝑞𝑞)=
= =1(𝑞𝑞)
underlined. 1(𝑞𝑞)
1(𝑞𝑞) −2(1
2(1
Therefore,
−− 2(1 −− −𝑞𝑞)
𝑞𝑞) no pure-strategy Nash
there’s
𝑞𝑞)
−1(𝑞𝑞) + 1(1
equilibrium. −
There 𝑞𝑞)
is, = 1(𝑞𝑞)
however, −
a 2(1 − 𝑞𝑞) Nash equilibrium. The
mixed-strategy
−1(𝑞𝑞) + 1(1 − 𝑞𝑞) = 1(𝑞𝑞) − 2(1 − 𝑞𝑞)
112𝑞𝑞
−p2𝑞𝑞 =−2 −2 + 3𝑞𝑞 indifferent between being ready for
1−− 2𝑞𝑞
pitcher 1
chooses
− =to= leave
−2 + +
the 3𝑞𝑞
batter
3𝑞𝑞
2𝑞𝑞 = −2 + 3𝑞𝑞
1 −being
a fastball and 2𝑞𝑞 =ready−2for+ a3𝑞𝑞
changeup:
=𝑞𝑞=
𝑞𝑞 𝑞𝑞
𝑞𝑞 =
=3/5
3/5 3/5
3/5 131
𝑞𝑞 = 3/5
−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) ++2(1
2(1 −−−𝑝𝑝) 𝑝𝑝) ===0(𝑝𝑝)
0(𝑝𝑝) +++1(1
1(1 −−𝑝𝑝)𝑝𝑝)
−1(𝑝𝑝) ++ 2(1
2(1 −− 𝑝𝑝) 𝑝𝑝)
== 0(𝑝𝑝)
0(𝑝𝑝) + 1(1
1(1 −− 𝑝𝑝)𝑝𝑝)
−1(𝑝𝑝) + 2(1 − 𝑝𝑝) 0(𝑝𝑝) + 1(1 − 𝑝𝑝)𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) +
+ 2(1
2 2
2(1− −
3𝑝𝑝
− 𝑝𝑝)
3𝑝𝑝 =
𝑝𝑝) =1 10(𝑝𝑝)
− − 𝑝𝑝
0(𝑝𝑝) 𝑝𝑝+
+ 1(1
1(1 −
− 𝑝𝑝)
22 −
−− 3𝑝𝑝
3𝑝𝑝 == 1
1 −
−− 𝑝𝑝
𝑝𝑝 Game
222− 3𝑝𝑝 3𝑝𝑝 = =111− −𝑝𝑝
𝑝𝑝
Understanding
22 −𝑝𝑝−
− 𝑝𝑝=3𝑝𝑝=
Economics:
3𝑝𝑝 3𝑝𝑝 =
1/2.
= =11
1/2.−− 𝑝𝑝 𝑝𝑝
𝑝𝑝 Theory Answers
5$""-# 𝑝𝑝
𝑝𝑝 == = 1/2.
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋 5$""-# 𝑝𝑝 𝑝𝑝
/-$01 3)# 45 $=
𝑝𝑝 𝑝𝑝=$=
𝑝𝑝 =1/2.
1/2.
1/2.
1/2.
=1(𝑝𝑝)
1(𝑝𝑝)−−1(1
1/2.
1/2. 1(1−−𝑝𝑝)𝑝𝑝)
/-$01 3)# 45
5$""-#
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋 5$""-#
5$""-# $===1(𝑝𝑝)
1(𝑝𝑝) −−− 1(1 −−− 𝑝𝑝)
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
5$""-#
𝐸𝐸"𝜋𝜋 /-$01 3)# 45
5$""-#
/-$01 3)# 45
𝐸𝐸"𝜋𝜋 5$""-#
/-$01 3)# 45
5$""-# $$ $$$= =
=
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝) − −
−
1(1
1(1
1(1
1(1
1(1− −
−
𝑝𝑝)
𝑝𝑝)𝑝𝑝)
𝑝𝑝)
𝑝𝑝)
𝐸𝐸"𝜋𝜋 /-$01 3)# 45
/-$01 3)# 45
/-$01 3)# 45
/-$01 3)# 45 $ = 1(𝑝𝑝) − 1(1 − 𝑝𝑝)
5$""-#
𝐸𝐸"𝜋𝜋
5$""-#
𝐸𝐸"𝜋𝜋/-$01 3)# (6
/-$01 3)# (6 $ = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)𝑝𝑝).
$ = −1(𝑝𝑝) + 2(1 −
5$""-#
𝐸𝐸"𝜋𝜋
𝐸𝐸"𝜋𝜋
5$""-#
𝐸𝐸"𝜋𝜋 5$""-#
these two$ $expected
=−1(𝑝𝑝)
$$= −1(𝑝𝑝) ++2(1
2(1 −to𝑝𝑝)
− 𝑝𝑝)
𝐸𝐸"𝜋𝜋 5$""-#
/-$01 3)# (6
Setting
𝐸𝐸"𝜋𝜋 5$""-#
/-$01 3)# (6
1(𝑝𝑝) 5$""-#
/-$01 3)# (6
−
5$""-#
𝐸𝐸"𝜋𝜋 1(1 − $𝑝𝑝) $===
=
−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝) ++
payoffs +
+
2(1
2(1
equal
2(1
2(1 −−
−
−
𝑝𝑝)
𝑝𝑝)one
𝑝𝑝)
𝑝𝑝)
another and solving
𝐸𝐸"𝜋𝜋
1(𝑝𝑝) /-$01 3)# (6
− 1(1 − 𝑝𝑝)
/-$01 3)# (6
p,/-$01 3)# (6
for /-$01 3)# (6 $ = −1(𝑝𝑝) + 2(1 − 𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝) −−− 1(1
1(1
1(1−−− 𝑝𝑝)
𝑝𝑝) 𝑝𝑝) = =−1(𝑝𝑝)
−1(𝑝𝑝) ++ 2(1
2(1 −−− 𝑝𝑝)
𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
1(𝑝𝑝)
−
− −
−1(1
1(1
−1
1(1
1(1
−
−1
− −
+
−𝑝𝑝) 𝑝𝑝)
+2𝑝𝑝
𝑝𝑝) 𝑝𝑝) =
= 2𝑝𝑝=
=
= =−1(𝑝𝑝)
−1(𝑝𝑝)
−1(𝑝𝑝)
2
−1(𝑝𝑝)2−−3𝑝𝑝
−1(𝑝𝑝) +
+
+
+
3𝑝𝑝
+
2(1
2(1
2(1
2(1
2(1
−
− −
−𝑝𝑝)
𝑝𝑝)
𝑝𝑝)
𝑝𝑝)
𝑝𝑝)
−1
−1 −1 +
++ 2𝑝𝑝
2𝑝𝑝 =
==
2𝑝𝑝 2
22 −
−− 3𝑝𝑝
3𝑝𝑝3𝑝𝑝
−1 −1+
−1 +2𝑝𝑝 = 2 2−−3𝑝𝑝
−1 +𝑝𝑝+2𝑝𝑝𝑝𝑝= 2𝑝𝑝
=
2𝑝𝑝
= =22
3/5
3/5
= −− 3𝑝𝑝3𝑝𝑝
3𝑝𝑝
𝑝𝑝
𝑝𝑝 𝑝𝑝 == = 3/5
3/5
3/5
𝑝𝑝 𝑝𝑝= =3/5
.
𝑝𝑝 𝑝𝑝 == 3/5
3/5
3/5
The batter chooses q to leave the pitcher indifferent between throwing
a fastball and throwing a changeup:

E(π_Pitcher | Throw FB) = −1(q) + 1(1 − q)
E(π_Pitcher | Throw CU) = 1(q) − 2(1 − q)

Setting these two expected payoffs equal to one another and solving for q,

−1(q) + 1(1 − q) = 1(q) − 2(1 − q)
1 − 2q = −2 + 3q
q = 3/5.
So the mixed-strategy Nash equilibrium is where the pitcher throws a
fastball with probability 3/5 and a changeup with probability 2/5, while
the batter is ready for a fastball with probability 3/5 and a changeup
with probability 2/5.
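As a numerical check, a mixed-strategy equilibrium should leave each player indifferent between their pure strategies. The short Python sketch below uses the batter's payoff matrix implied by the expected-payoff expressions above (the pitcher's payoffs are the negatives, since the game is zero-sum) and confirms the indifference at p = q = 3/5:

```python
# Batter's payoffs: rows = batter ready for {FB, CU},
# columns = pitcher throws {FB, CU}. Taken from the expressions above.
batter = [[1, -1],
          [-1, 2]]

p = 3/5  # probability the pitcher throws a fastball
q = 3/5  # probability the batter is ready for a fastball

# Batter's expected payoff from each pure strategy, given p
ready_fb = batter[0][0]*p + batter[0][1]*(1 - p)
ready_cu = batter[1][0]*p + batter[1][1]*(1 - p)

# Pitcher's expected payoff from each pure strategy, given q
# (zero-sum game, so the pitcher earns the negative of the batter)
throw_fb = -(batter[0][0]*q + batter[1][0]*(1 - q))
throw_cu = -(batter[0][1]*q + batter[1][1]*(1 - q))

print(ready_fb, ready_cu)  # both ≈ 0.2: the batter is indifferent
print(throw_fb, throw_cu)  # both ≈ -0.2: the pitcher is indifferent
```

Because each player's two pure strategies yield the same expected payoff, neither can gain by deviating, which is exactly what makes the mixed strategies an equilibrium.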
Answers
LESSON 6
1.
LESSON 7
1. Using backward induction, start by considering your opponent’s decision
in round 6. Here, he stops the game and collects $12.80 rather than let
the game continue. This is no different than what happens in round 6
of the classic version of the game. What is different is your choice in
round 5. Because you only care about your combined payoff, you choose
to let the game continue. Recognizing that, your opponent lets the game
continue in round 4. Using similar logic, you both choose to let the
game continue in all preceding rounds. At the equilibrium, you always
let the game continue, and your opponent lets the game continue in
rounds 2 and 4 but stops the game in round 6. The equilibrium outcome
is circled.
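The backward-induction logic above can be mechanized. The sketch below uses illustrative payoffs (a pot that doubles each round, with the stopper taking 80%) chosen so that stopping in round 6 is worth $12.80 to your opponent; the lesson's actual payoffs are not reproduced here. You (moving in odd rounds) maximize the combined payoff, while your opponent (even rounds) maximizes his own:

```python
# Hypothetical pot sizes, doubling each round; chosen so the round-6
# stopper's 80% share is $12.80, matching the answer above.
pots = [0.50, 1.00, 2.00, 4.00, 8.00, 16.00]

# If no one ever stops, assume the final pot is split evenly.
you, opp = pots[-1] / 2, pots[-1] / 2

plan = []
for r in range(6, 0, -1):          # walk backward from round 6 to round 1
    pot = pots[r - 1]
    if r % 2 == 1:                 # your move: maximize the COMBINED payoff
        stop = (0.8 * pot, 0.2 * pot)
        act = "stop" if sum(stop) > you + opp else "continue"
    else:                          # opponent's move: maximize his OWN payoff
        stop = (0.2 * pot, 0.8 * pot)
        act = "stop" if stop[1] > opp else "continue"
    if act == "stop":
        you, opp = stop
    plan.append((r, act))

print(plan[::-1])
# [(1, 'continue'), ..., (5, 'continue'), (6, 'stop')]
```

Under these assumed payoffs the program reproduces the equilibrium described above: everyone continues until your opponent stops the game in round 6 and collects $12.80.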
LESSON 8
1. False, for two reasons. First, you can’t determine what you should do
at each of your decision nodes because they’re both contained within
the same information set. In other words, if your opponent bets,
you don’t know whether nature dealt her a king or a queen. Second,
nature behaves randomly, without regard for players’ payoffs, so there’s
no way to use backward induction to determine which card nature will
deal her.
LESSON 9
1. This one is complicated. It’s true that a more dangerous duel is more
likely to motivate borrowers to repay their loans when their project is
successful. However, the duel can’t be too costly, or lenders won’t be
willing to make loans in the first place for fear they won’t be paid back
and will be forced to challenge the borrower to a duel to preserve their
honor. Dueling has to be just dangerous enough: not so dangerous that
it discourages loans (or, for that matter, challenges), but not so safe
that borrowers feel like they can get away with not repaying what they
owe. In sum, you can’t say that a more dangerous duel will always be
more effective.
2. In the airline example, you saw that airlines with coach and first-class
cabins have to be careful not to make coach too nice. If they do, business
travelers will start opting for coach. By the same logic, automakers may
want to save some sought-after features for their more expensive trim
packages, even though that makes the manufacturing process more
complicated and expensive. That’s because automakers, like airlines,
are trying to discriminate between buyers who are more price conscious
and those who can afford to spend extra. If Toyota makes the base-
model Camry too nice, more people will opt for that cheaper version.
By reserving some features for the more expensive trim packages, Toyota
may be able to boost its revenue by tempting more people to buy those
more expensive (and profitable) Camrys.
LESSON 10
1. This won’t always be the case. It’s true, of course, that the winner of
a second-price auction pays the second-highest bid, but that doesn’t
necessarily mean lower revenue for the seller. That’s because the second-
price auction is demand revealing, while the first-price auction isn’t.
In other words, bidders in a second-price auction have an incentive to
submit bids equal to what they’re truly willing to pay for a product.
Bidders in a first-price auction, on the other hand, have an incentive
to submit bids that are less than what they’re truly willing to pay. The
second-highest bid from an auction where everyone bids their full
willingness to pay won’t necessarily be less than the highest bid from
an auction where everyone bids less.
2. English auction vs. second-price sealed-bid auction:
You win in either case. In the English auction, the auctioneer keeps
naming higher prices until Cindy, Bob, and Ann drop out. Ann stays
in the auction until the price the auctioneer calls out exceeds what
she’s willing to pay. That means you end up paying $3,000 + e, where
e is the smallest increment the auctioneer uses between prices. In the
second-price auction, everyone writes down their true willingness to pay
(remember that it’s demand revealing). You win since you submit the
highest bid, and you pay a price equal to Ann’s bid. In this case, that’s
$3,000.
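The second-price outcome above is easy to sketch in code. The $3,500 valuation for "You" below is a hypothetical stand-in (the answer only establishes that your bid exceeds Ann's $3,000), as are Bob's and Cindy's exact values:

```python
# Second-price sealed-bid auction: everyone bids their true willingness
# to pay (the format is demand revealing). Only Ann's $3,000 comes from
# the answer above; the other values are illustrative.
bids = {"You": 3500, "Ann": 3000, "Bob": 2500, "Cindy": 2000}

winner = max(bids, key=bids.get)                         # highest bidder wins
price = max(v for k, v in bids.items() if k != winner)   # pays 2nd-highest bid

print(winner, price)  # You 3000
```

As in the English auction, the price you pay is pinned down by Ann's willingness to pay, not your own.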
LESSON 11
1. Assuming all the bidders’ thrift stores are similar, this really is a
common-value auction. Each store owner has a best guess as to the
value of a storage locker’s contents based on the quick peek. On average,
these guesses may be quite accurate, but the highest guess is likely to
be an overestimate of the contents’ value. If bidders don’t take that into
account, they’re likely to suffer from the winner’s curse.
2. If you bid in this all-pay auction by hiring a legal team for $2 million,
you have a one-third chance of coming away with a patent that entitles
you to $9 million in profits and a two-thirds chance of coming away
emptyhanded. Win or lose, you have to pay $2 million in legal fees.
That means your expected payoff is
E(π) = (1/3)($9 million) + (2/3)($0) − $2 million = $1 million.
That’s greater than the $0 payoff you receive if you choose not to bid
at all.
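The arithmetic above can be checked exactly with Python's `fractions` module, which avoids floating-point rounding in the 1/3 and 2/3 weights:

```python
from fractions import Fraction

p_win = Fraction(1, 3)   # one-third chance the legal team wins the patent
prize = 9_000_000        # profit if you win
cost = 2_000_000         # legal fees, paid win or lose -- it's an all-pay auction

expected_payoff = p_win * prize + (1 - p_win) * 0 - cost
print(expected_payoff)   # 1000000
```

Since $1 million beats the $0 from staying out, bidding is worthwhile even though the legal fees are sunk either way.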
LESSON 12

1. Start with the octopus. Because the grouper's effort doesn't enter into the
octopus's payoff, the octopus's optimization problem is straightforward.
First, take the derivative of its payoff function with respect to θ:

dπ_Octopus/dθ = 4 − 2θ.

Next, set this equal to 0 and solve for θ:

4 − 2θ = 0
θ = 2.

So the octopus should spend two hours per day hunting.

Now, you can do something similar for the grouper by taking the
derivative of its payoff function with respect to g:

dπ_Grouper/dg = 4 + 2θ − 2g.

Setting this equal to 0 and solving for g gives you the grouper's best-
response rule:

4 + 2θ − 2g = 0
g = 2 + θ.

Because you already know θ = 2, that means that at the Nash
equilibrium, the octopus hunts for two hours per day and the grouper
hunts for four.

2. This is simply a matter of entering the Nash equilibrium levels of θ
and g from the previous problem into each creature's payoff function:

π_Octopus = (4θ) − θ² = (4 × 2) − 2² = 4
π_Grouper = (4g + 2θ + 2gθ) − g² = (4 × 4 + 2 × 2 + 2 × 4 × 2) − 4² = 20.

At the Nash equilibrium, the octopus earns a payoff of 4 and the grouper
earns a payoff of 20.
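Both calculus results can be sanity-checked numerically. The sketch below codes the two payoff functions from the answer and verifies that neither creature gains by deviating slightly from the equilibrium effort levels:

```python
# Payoff functions from the answer above:
def pi_octopus(theta):
    return 4*theta - theta**2

def pi_grouper(g, theta):
    return 4*g + 2*theta + 2*g*theta - g**2

theta_star = 2             # from d(pi_Octopus)/d(theta) = 4 - 2*theta = 0
g_star = 2 + theta_star    # grouper's best response g = 2 + theta, so g = 4

# Small deviations in either direction can't raise either payoff:
eps = 1e-6
assert pi_octopus(theta_star) >= pi_octopus(theta_star + eps)
assert pi_octopus(theta_star) >= pi_octopus(theta_star - eps)
assert pi_grouper(g_star, theta_star) >= pi_grouper(g_star + eps, theta_star)
assert pi_grouper(g_star, theta_star) >= pi_grouper(g_star - eps, theta_star)

print(pi_octopus(theta_star), pi_grouper(g_star, theta_star))  # 4 20
```

Because both payoff functions are concave quadratics in each creature's own effort, the first-order conditions really do locate maxima, which is what the deviation checks confirm.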
IMAGE CREDITS
Title graphics and backgrounds:
Lepusinensis/iStock /Getty Images.
Siminitzki/iStock /Getty Images.