Negotiation Analysis Supplement
by Howard Raiffa

Welcome!
My Short Vita
The Harvard University Press does not include much info
about its authors on the jacket cover. I would have liked
more. So, here is some more.
Simulations
If the book is to be used as a text in a course, I strongly recommend that the
instructor use several of the simulated negotiation exercises suggested in this
Supplement. Let me explain how this might work.
Preface
Part I: FUNDAMENTALS
Chapter 1: Decision Perspectives
Our Approaches to Decision Making
Decision Analysis
Behavioral Decision Theory
Game Theory
Negotiations
Individual Decisions:
Prototypical Examples
Drill or Not Drill
Surgery or Radiation
Invest or Not
Group Decisions
Interactive decision making: Theory of Games
Joint decision making: Negotiation theory
Fuzzy boundaries
Descriptive, Normative, and Prescriptive Orientations
The Individual Case
Descriptive decision making: How decisions are made
Normative decision making: How decisions should be made
Prescriptive decision making: How decisions could be made better
The case of transitivity
Group Orientations
Symmetrically descriptive
Symmetrically normative
Asymmetrically prescriptive/descriptive
Descriptive, normative, and prescriptive orientations for the external helper
Core Concepts
Core Concepts
Core Concepts
Sequential Search
Select the Most Beautiful Woman Problem
The Strike Game
The Simulated Strike Game
Empirical Results
The Escalation Game
The Problem:
How Real Players Behave
Prescriptive Analysis
Behavioral Insights
The No-Strike Strike or the Virtual Strike
Escrow Accounts and Penalties
Creation of Strike Mechanisms
The New York State Budget Deliberation
Core Concepts
Intertwined Negotiations
Distributive Bargaining with One Seller and Many Buyers
AUCTION MECHANISMS
Open Descending Outcry Auction - Dutch Auction
A Bidder’s Decision Analysis
Competitive Sealed Bids
The General Case: Private Valuations and Fuzzy Knowledge.
MBOO Analysis
The Philatelist or Vickrey Auction (High Bidder Wins At Second-High Price)
The Auctioneer’s Perspective.
SPECIAL CASES
Common Knowledge and Common Values
Bidding For $100 Against One Other Bidder
Uncertain Common Value, Known Objective Probabilities, Unknown Private Risk Aversions
Reciprocal Buy/Sell Bids
Combinatorial Bidding - The FCC Spectrum Auction
Privatization of Governmental Monopolies
Serial Bidding
Parallel Bidding
Design of the FCC Auction
FOTE Analysis
Core Concepts
Monetary Quantification
Core Concepts
Core Concepts
Core Concepts
Core Concepts
ORIENTATION
Types of Parallel Negotiations
Front Channel
Back Channel
Track II Negotiations
Blurred Boundaries
A Hypothetical, Abstract Problem of Intractable Negotiations
Barriers to Negotiation
COPING WITH THE BARRIERS
High Political Cost of Initiating Unproductive Negotiations
“Experimental Non-Negotiations”
Troublesome Logistics
Posturing, Positional Bargaining, and Excessive Claiming
Inability to Brainstorm
Inability to Use FOTE Collaboratively
Front-Channel Negotiators Are Not Natural Compromisers
Power Imbalances
Lack of Trust and Insecure Contracts
Securing Insecure Contracts
Ripeness of the Conflict
CSCE Dispute Settlement Mechanism: The Valletta Accord
Divided Selves: Blocking Coalitions of Extremists
Ignoring Future Joint Collaborative Opportunities
The BCP Project.
PRESCRIPTIVE SUGGESTIONS
Membership
Invite only those who contribute
Wanted: Ideas and expertise
Managers of process (facilitating, drafting)
Help in getting approval and in implementation
Organization of the Substance
Choosing purpose
Early articulation
Need for tangible end-product
Nested aspirations
Basic questions (Why us? Why now? ….)
Structure
No group mind for synthesis
Need for decision- making framework
PrOACT
Other problem-solving frameworks: circle chart; seven elements;
Ben Franklin's pros and cons
Using Labor Efficiently: Organization of Tasks and Sub-Committees
Decomposition of Tasks
Assignments to Sub-Committees
Recomposition and Synthesis
Style: Organizing the Conduct of Meetings
Facilitation
Documentation
Brainstorming
Time
INDIVIDUAL DECISION MAKING WITH EXPERTS AND ADVISORS
The Advocacy Model
The Structured Analysis Model
Consensual Agreement
FAIR DIVISION WITH MANY PARTIES
Analysis with Monetary Transfers
Naive procedure
Auction procedure
The Steinhaus-Knaster Allocation
Strategic Misrepresentations with the Steinhaus Procedure
Analysis without Monetary Transfers
Three Parties: The Problem and Its Analysis
Comparison of Solutions
A CONCRETE, BUT ABSTRACT, CASE
FOTE Analysis with SOLVER
Comparison of Solutions
Disengaging from FOTE Analysis: POTE Analysis
Negotiation Template Used as Decision Aid
Voting Dynamics
First Vote
Second Vote
Third Vote
Scores on Votes
The Two-Party Counterpart of the Voting Procedure
Comparison of Solutions
Summary
Strategic Claiming Behavior
The $64,000 Question
Core Concepts
Chapter 23: COALITIONS
Coalition Formation
Cooperative Game Theory
The Characteristic Function Form
Case Study: The Scandinavian Cement Company
The Core
A Pure Coalition Game
Game Description
Offers that Cannot Be Readily Refused
Face-to-Face versus Terminal-to-Terminal
Rationality, Fairness and Arbitration
The Core of the Game
The Shapley Value
A Modification of the Shapley Values
The Game with One Strong Player
Moving Toward Reality
Intertwining Negotiations
Core Concepts
Majority Rule
WYZARD, Inc.
Independence of Irrelevant Alternatives
Insincere Voting
Arrow’s Impossibility Theorem
The Problem
Strengths of Preferences
Strategic Voting
Strategic Ordering of Bills in a Legislature
Insincere Voting Revisited
Randomization
Strength of Preference and Logrolling
Implications for Many-Party Negotiations
A Real Case: The Voyager Mission To Outer Planets
The Collective Choice Problem
Collective Choice Procedures and the Independence of Irrelevant Alternatives
Postscript
Core Concepts
CONTENTS
BIBLIOGRAPHY
Part Introductions
Chapter 3 examines real behavior and indicates that many individuals, some of
the time, do not act in conformity with rational behavior -- especially when uncertainties
loom large. We call such examples of "deviant" behavior anomalies, biases, errors, or
traps and we try to rationalize why and how they might occur.
Chapter 5 provides an outline of the rest of the book. It introduces the concept of
idealized, joint behavior in which all protagonists in a negotiation agree to negotiate in a
truly collaborative manner, telling the truth and all of the truth. We call this joint
FOTE analysis -- Full, Open, Truthful Exchange. We claim that in actual deal
making, in contrast to dispute settling, this ideal is often approximated, and the FOTE
examination sets up an ideal against which other approaches can be compared. In the
sequel, we back away from FOTE analyses to consider POTE-like behavior, in which
some protagonists tell the truth but not the whole truth -- Partial, Open, Truthful
Exchange -- leaving their bottom lines hidden from the view of others; and back away further
to NOTE behavior -- No Open, Truthful Exchange.
Chapter six presents a case study (Elmtree House) of the prototypical negotiation
problem of this part of the book. A seller (in this case an institution that owns a halfway
house) wishes to sell an asset (its residence, the house) to a buyer (a developer). The
seller wants more; the buyer less; and they haggle. It’s all a matter of claiming a larger
part of a fixed-size pie. There are two parties that negotiate over one issue: money – but it
could be time or any other single commodity.
In the next chapter the problem is abstracted and the discussion addresses such
questions as: How to prepare? Who should declare first? Why? How extreme a first
offer? What’s a reasonable counter-offer? What is the pattern of concessions? And so
forth. The chapter discusses a related double-auction game in which the seller and buyer
simultaneously offer sealed bids and a transaction takes place if and only if the bids are
compatible. This game is analyzed normatively, descriptively and prescriptively and we
conclude that good old-fashioned haggling might be better.
Chapter ten considers the case where there are many buyers and just one seller.
The seller could negotiate sequentially with each potential buyer or engage in a
competitive bidding or auction procedure. The chapter compares several different types
of auction and competitive bidding procedures and we use individual decision analysis to
give partisan prescriptive advice to one of the players. We end the chapter by considering
the all-too-common case when two partners want to dissolve their partnership by having
one partner buy out the other.
This part of the book, comprising chapters 11 to 16, deals with Two-Party
Integrative Negotiations. By "integrative negotiations" we mean: negotiations having the
potential of resulting in joint gains. Part II dealt with Two-Party Distributive Negotiations
which involved partitioning a fixed-sized pie. Whereas Part II was mostly about claiming
tactics, Part III will be mostly about creating tactics -- how to create a bigger pie. But
there is a tension between tactics used to create a larger pie and tactics used to claim a
large portion of the pie created. How to balance this tension is part of the art and science
of negotiation.
Much of negotiation involves the settlement of disputes. The bulk of our attention
in Part III, however, will be with examining deals as presenting opportunities for joint
gains (accompanied, of course, by some distributional strains) in contrast to disputes or
problems that have to be resolved. Behaviorally speaking, negotiations involved in deal-
making tend to be more collaborative than those involving the settlement of festering
disputes. But still, much of the advice given for deals is relevant also for disputes. Part
of this advice is to try to convert a dispute into a deal and to prevent a deal from
becoming a dispute.
Still in chapter 15, we next consider anomalies, biases and errors of behavior in
real-world negotiation settings. In an interactive setting misinterpretations beget
misinterpretations and a dynamic may ensue where the parties spiral downwards in their
pursuit of joint harms rather than joint gains. Cultural differences make it harder to
establish a constructive negotiation style even though cross-cultural differences in
interests are often the source for potential joint gains.
Our first chapter in this part starts out by considering conventional facilitation and
mediation reserving the next chapter for conventional arbitration. Rather than getting
involved in disputes about nomenclature, we adopt the neutral language of "external
helper" or simply "helper" or "3rd party intervenor". In our abstraction, we talk about the
{A, B; H} dynamic where A and B are the negotiating parties and H the helper. We start
by drawing up lists of conventional roles that H might perform, saving those that have a
more analytical flavor for later development. What role H plays depends on how H gets
involved (as invitee or invitor or a mixture of these), on the context of the problem, on
tradition, on the particular temperaments of the three actors involved, and on what is to
be negotiated.
We talk about the interests of H as another player -- but a different type of player.
We review the many reasons why external helpers are not used when perhaps they
should be.
The chapter then considers more active roles for the mediator by involving H
more in generating proposals (e.g., in the form of Single Negotiation Texts) for the
consideration of A and B. We discuss President Carter's role at Camp David as a
proactive mediator with clout in the negotiations between Sadat of Egypt and Begin of
Israel. We also consider, as a second case study of proactive mediation, my intervention
in helping divide an art collection between two brothers who knew ahead of time that
they could not do it alone without jeopardizing their relationship.
In the standard distributive bargain where A wants a higher value and B a lower
value, the conventional arbiter, after fact finding, proposes a final solution. In contrast, in
final-offer arbitration, A and B are required to submit sealed final offers and then the
arbiter, H, is required to select one of these offers. H has no other choice! We examine a
game-theoretic treatment of this type of final-offer arbitration and surprisingly find it
somewhat flawed from a normative perspective even though it seems to work in practice.
The latter part of the chapter deals with cases of complex integrative negotiations
in which H, acting as a neutral joint analyst (NJA), helps the parties achieve an ideal
collaborative compromise solution described at length in chapters 11 to 14. Such an NJA
acts as a special kind of non-evaluative arbiter. The possible reluctance of the parties to
truthfully reveal their reservation values (RVs) complicates the analysis and a double
auction bidding system is introduced to help resolve this complication.
After the completion of an unassisted negotiation, the parties, realizing that there
may still be joint gains to be had, might choose to invite in an NJA to try to embellish
their agreement in a so-called post-settlement settlement. In this case the parties do not
have to reveal their RVs to the NJA -- the previously negotiated compromise acts as a
pseudo joint reservation value.
There remains the nagging question for the NJA: which point on the efficient
frontier should be selected as the most equitable? What is fair? The next chapter discusses
this question in some detail. In Chapters 13 and 14 we already introduced some candidate
rationales for "fairness" such as Nash's solution that maximizes the product of excesses
over RVs and the one that maximizes the minimum of the two proportions of potentials
(POPs).
The last chapter in this part examines the common nature of intractable disputes
(mainly between feuding contiguous countries or national entities or cultural groupings),
identifies a set of barriers that prevents constructive negotiations from taking place, and
suggests how parallel negotiations (track II) could help.
In this part we extend all fronts into many-party lands. By “many parties”, as in
some primitive tribes, we mean more than two.
Everyone knows that reaching a decision becomes harder the more people are
involved. Negotiations among multiple parties – or even decisions, which must be jointly
taken by a group which is ostensibly on the same side – are often long and unhappy
affairs.
negotiation). Once we move out of that setting, there are many different geometries we
could consider:
• A Group of Separate, Individual Negotiators
• Bilateral Negotiations with Multiple Participants on Each Side
• A Group of Advisors Preparing One Side for Negotiations
• A Permanent Decision-Making or Advisory Group
• An Ad Hoc Decision-Making or Advisory Group.
Each of these groups poses particular challenges. But there is also a core of similar
difficulties that will have to be managed in all of those contexts. Chapter 21 starts by
listing in what ways some groups do better than individuals and then why some groups
do worse. We examine some anomalies of group behavior. The chapter then attempts to understand
the underlying reasons for this behavior – why some groups behave so poorly. After
examining losses due to cognitive overload, poor coordination, poor communication, and
poor motivation, the chapter considers general prescriptive advice designed to partially
ameliorate some of these causes. The advice considers such matters as: membership in
the group; use of a facilitator or chair; need for an ongoing visual documentation of the
deliberations; role of brainstorming and devising; focus on purpose and choice of a
problem solving framework; decomposition of tasks and formation of sub-committees;
and the allocation of time.
The next chapter (22) considers groups that strive for consensual agreement. Each
of the members wishes to act in unison but there are strains in the community. Still they
don’t resort to voting or coalition formation. To a large extent this discussion generalizes
the two-party material in chapters 11-13 which has a strong FOTE flavor. As was the
case for two parties, we once again examine the fair division problem – but now with
more than two parties – before launching into our full analysis of feasible, efficient, and
equitable contracts for the many-party problem.
Chapter 24 examines the use of voting procedures for group action. Such common
schemes as majority rule are fundamentally flawed since they can lead to intransitivities of
group preferences. We review Arrow's famous Impossibility Theorem, which proves that
there is no way for three or more (i.e., many) individuals to combine their individual,
ordinal preferences to obtain a group ordinal preference without violating some appealing
desiderata. Many-party negotiations often result in the identification of a few viable
contracts for adoption, and guess what? They vote. One way out of the dilemma is to
demand a richer set of inputs from the voters. They are asked not only about their ordinal
preferences but also about the intensities of their preferences. They are asked for cardinal orderings.
This, however, is easier said than done – especially with the realities of insincere voting. We
introduce a case study that examines how a group of scientists selected a trajectory for
the Voyager mission to the outer planets.
The next chapter examines cases where at least one side in a negotiation is
non-monolithic. We imagine a two-party (external) negotiation across a table that is
complicated by the existence of an intense internal negotiation on each side of the table.
Contracts that are negotiated across the table may result in winners and losers on one side
of the table and the losers may try to block the negotiations unless they are duly
compensated by internal transfers. Some call it “bribes.” The challenge is to synchronize
internal and external negotiations.
I have been contacted by ZZZ, who just happens to know that you are
about to enter into very complicated negotiations. ZZZ thought I might help
you both and suggested I write to you and explain my craft. I bring to the
table skills of the intervenor (facilitator, mediator, and arbitrator), coupled
with the skills of a decision analyst. I call myself a Neutral Joint Analyst –
an NJA if you will – and I would like to help you find both an efficient and
equitable outcome of the negotiation opportunity you collectively confront.
By efficient I mean that you squeeze out the full potential of this negotiation
opportunity and do not let potential joint gains go unrealized, sitting on the
table. By equitable I mean that I will treat you equally in seeking a
resolution of your joint decision problem – a resolution that gives each of
you a comparable fair gain over what each of you could gain acting
separately. We can think of this as aided collaborative decision making.
If I’m to help you fully I would like to learn just what are your
possibilities if negotiations were to be aborted. I’ll encourage you, with my
help if you so desire, to explore your Best Alternative to No Agreement
(BATNA) to help you assess the minimum you should be willing to accept
in the actual negotiations.
To build up trust and confidence, why don't you try me out on an easy
problem that is not critically important to you. If you want, I could suggest a
problem that you both could practice on with me.
Norman J. Altman
Example 2. The prize is the oil deposits under a given site up for competitive bid.
All the bidders have some information about the deposits but different guesses
about the value. Let’s simplify a bit by saying with full information the value of the
oil would be the same for all bidders.
What’s the strategic essence in these two problems? First, this is the case of a
prize that has a common objective value, but--and it is a big but--the common value
is uncertain and each of many bidders has differing subjective perceptions of this
value.
Jay: It seems to me that if I'm risk neutral my RV should be $200. That's as high as I would bid in the English auction, and in the Vickrey auction I would bid my RV of $200. Now for the Dutch auction -- or for the standard competitive bid -- I would shade down from my RV.
Ann: I agree so far with Jay. And since there are so many other bidders, I wouldn't shade down too much from the RV of $200 -- say $185.
Footnote 1: Assume that $200 is the Expected Value of your assessed distribution.
Jay: This seems like an easy case. Why single it out?
Exp: Well, you should suspect by now that there will be a special message of earth-shaking import.
Suppose the potential bidders recorded their single best guess at the uncertain but
common value of the prize. There would be 30 guesstimates and these would have
some distribution.
We still don’t know where the true value is. It could be below, or close to, or
above the average of these 30 best guesses. Suppose for now that the true value is
close to the average of these 30 best guesses. Now consider three of the bidders:
Alicia, Bernie, and Charlie. Alicia’s best guess is on the low end of the distribution
of best guesses. Bernie’s best guess is slightly above the average of the best guesses
and Charlie’s is extreme on the high side. Their subjective, assessed probability
perceptions of the true value are shown in Figure 10b.1.
[Figure 10b.1: the subjective probability distributions of Alicia (A), Bernie (B), and Charlie (C) for the uncertain common value; probability on the vertical axis, value on the horizontal axis, with the true value marked between A's and B's distributions.]
The trouble is that Alicia, Bernie and Charlie do not know where their best guesses
lie among the distribution of best guesses. Now do you, Ann or Jay, see a problem
lurking here?
Jay: Well, if the bidders follow our advice and bid close to their RVs, then Charlie will probably win the bid, or someone even more way out on the right tail will be the winner of the prize.
Ann: Hey, this is a good deal for the auctioneer. The more bidders we have the more chance there is for a right tail outlier and if bidders bid close to their RVs then the winner of the uncertain prize will be paying too much.
Jay: The winner will be a loser.
Exp: That's the message! If you win, you lose.
Exp: Well, what should you do about this sad complication?
Jay: It's not sad from the auctioneer's point of view.
Exp: Agreed. I posed two questions to you for the coin-packed transparent jar problem. Question A asks for your best guess of the value of the prize before knowing any of the other bids. And Question B asks for your revised best guess knowing now that your best bid of $185 is the top of the 30 bids.
Ann: Ah, that's tricky. I would now know that most of the others would perceive the value of the common prize as below mine. If I thought that their opinions were somewhat informative, I would now want to bias my best guess downwards.
Jay: I would want to bid an amount "X" say, so that if I were told my X value were the highest bid, my revised best guess at the common value would still be higher than X. That's the way I would get around the winner's curse.
Ann: This really is complicated. Let's assume that we originally had a distribution centered around $200. That's our best guess without worrying about the strategic bidding problem. We then said earlier that the more bidders there are the closer we should bid to our RV. But now comes the winner's curse. The larger the number of other bidders the more worried we should be if we win. So with 29 other bidders we should be bidding much lower than our $200 original RV. We need a strategic RV like Jay just said.
Jay: Did I say that?
Ann: Sure, Jay, you said that you would want to believe that you could expect a profit after knowing that your bid was highest. Can you analyze this problem further?
Exp: Sure, but it's the qualitative message that's important. The game theorists have a ball with this problem. But in order for the game theorists to do their thing -- which is equilibrium analysis -- the problem has to be modified so that each bidder gets confidential information and each is given an objective probability distribution of the original RVs of the others. It is too complicated for me to discuss in this elementary presentation.
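To make the qualitative message concrete, here is a minimal simulation sketch. The numbers -- 30 bidders, a true common value of $200, normally distributed guesses with a $40 spread, and bids at 95% of each bidder's own guess -- are illustrative assumptions, not taken from the text:

```python
import random

TRUE_VALUE = 200.0
N_BIDDERS = 30
NOISE_SD = 40.0       # assumed spread of the 30 guesstimates
SHADE = 0.95          # each bidder bids 95% of his or her own best guess

def one_auction(rng):
    # Each bidder's best guess is the true value plus unbiased noise.
    guesses = [TRUE_VALUE + rng.gauss(0, NOISE_SD) for _ in range(N_BIDDERS)]
    bids = [SHADE * g for g in guesses]
    winning_bid = max(bids)
    return TRUE_VALUE - winning_bid   # winner's profit (often negative)

rng = random.Random(1)
profits = [one_auction(rng) for _ in range(10_000)]
print("average profit of the winner: %.2f" % (sum(profits) / len(profits)))
# Typically a substantial loss: the highest of 30 noisy guesses sits far out
# on the right tail, so "if you win, you lose."
```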
ACQUIRING A COMPANY
In the following exercise you will represent company A (the Acquirer), which is currently
considering acquiring Company T (the Target) by means of a tender offer. You plan to tender in
cash for 100% of Company T’s shares but are unsure how high a price to offer. The main
complication is this: the value of Company T depends directly on the outcome of a major oil
exploration project it is currently undertaking. Indeed, the very viability of Company T depends
on the exploration outcome. If the project fails, the company under current management will be
worth nothing: $0/share. But if the project succeeds, the value of the company under current
management could be as high as $100/share. All share values between $0 and $100 are equally
likely.
By all estimates, the company will be worth considerably more in the hands of company A
than under current management. In fact, whatever the ultimate value under current management, the
company will be worth fifty per cent more under the management of Company A rather than
Company T. If the project fails, the company is worth $0/share under either management. If the
exploration project generates a $50/share value under current management, the value under company
A is $75/share. Similarly, a $100/share value under Company T implies a $150/share value under
Company A, and so on.
The board of directors of Company A has asked you to determine the price they should offer
for Company T’s shares. This offer must be made now, before the outcome of the drilling project is
known. From all indications, Company T would be happy to be acquired by Company A, provided it
is at a profitable price. Moreover, Company T wishes to avoid, at all cost, the potential of a takeover
bid by any other firm. You expect Company T to delay a decision on your bid until the results of the
project are in, then accept or reject your offer before the news of the drilling reaches the press.
Thus, you (Company A) will not know the results of the exploration project when submitting
your offer, but Company T will know the results when deciding whether or not to accept your offer.
In addition, Company T is expected to accept any offer by Company A that is greater than the (per
share) value of the company under current management.
As the representative of Company A, you are deliberating over offers in the range of
$0/share (this is tantamount to making no offer at all) to $150/share. What offer per share would
you tender for Company T’s stock?
My tender price is: $_______ per share.
******************************* (Pause)
The first trial yields a disappointing value to the Target of $18.29, and the value to the
Acquirer is 3/2 x $18.29, or $27.42. A bid of $60 would yield an acceptance
(TRUE) and the net proceeds to the Acquirer would be $27.42 - $60.00, or -$32.58.
Not starting off so well. But on the third trial the Acquirer makes $20.43. Over 100
trials the average return is -$9.13. Not a good deal.
We also ran the same experiment but inserted an offer of $50 instead of $60.
We generated 100 new random numbers and the average return with this tender
offer is still disappointing: a dismal -$6.20 per share return. What’s cooking here?
The Acquirer should be able to make some money. Or should she?
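Here is a minimal sketch, in Python rather than a spreadsheet, of the kind of simulation just described; the function name and trial counts are ours, and the random draws will of course differ from the 100-trial runs reported above:

```python
import random

def average_return(offer, n_trials=100_000, seed=7):
    """Average per-share profit to the Acquirer for a given tender offer.

    Assumptions (following the exercise): the Target's value X is uniform
    on [0, 100]; the Target accepts whenever the offer exceeds X; the firm
    is worth 1.5 * X to the Acquirer; a rejected offer yields 0.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = rng.uniform(0, 100)           # value under current management
        if offer > x:                     # Target accepts
            total += 1.5 * x - offer      # Acquirer's profit per share
    return total / n_trials

for offer in (60, 50):
    print(f"offer ${offer}: average return {average_return(offer):+.2f} $/share")
# With many trials these settle near -$9 and -$6.25, in line with the
# -$9.13 and -$6.20 reported for the 100-trial runs.
```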
Intuitive Thinking
Let’s examine how most people think about this problem. They start off
asking the natural question: What is the expected value per share of the company to
the Target? And the answer is $50. Next they ask: What is the expected value (per
share) to the Acquirer? And their correct answer is 3/2 x $50, or $75. Next they say:
if their RV is $75, what should they bid? They shade down a bit from $75 to $60,
say.
Jay: Can I interrupt here? That's just what I did. I thought I was clever. I bid $51. Now what's wrong with that logic?
Exp: Ann, what's wrong?
Ann: I did something similar. But when you posed the question, "What's wrong?" and after seeing the simulations, I think I see what's wrong. I should have guessed this from the beginning. I should have suspected. Why did you introduce this problem? It smacks of the winner's curse. For really high values of the company to Mr. T, Mr. T is not going to sell the company. Once Ms. A knows that Mr. T accepts the tender offer her RV changes.
Jay: Oh, that's clever. It is a disguised version of the Winner's Curse.
Exp: O.K. Let's examine what happens with a tender bid of $60. If Mr. T accepts the bid of $60, what is the expected value per share to Mr. T?
Jay: Well, that will be $30 and not $50. That was my mistake. I didn't condition my probability assessment on Mr. T accepting my bid.
Exp: Now if Mr. T accepts your tender bid of $60, what is the expected value of the firm to you?
Ann: Well, that would be 3/2 x $30 or $45 per share. So a tender bid of $60 would result in an expected loss of $60 - $45 or $15. That's even worse than the simulation indicated.
Exp: Your analysis is still not quite right, Ann. The simulation is better than you're giving it credit for.
[Decision tree summary for a $60 tender: columns show Tender Price Per Share, T's Response to Offer, Value of X to Target, and Payoff to Acquirer. The Target rejects whenever X > $60 (payoff $0 to the Acquirer) and accepts otherwise; the expected payoff to the Acquirer works out to about -$9 per share, in line with the simulated -$9.13.]
We can perform the same analysis for any bid b. Here's the argument. Keep the
decision tree in mind. If b is greater than the value of the company, call that V, then
the offer will be accepted. But if b < V the company will reject your offer.
Whenever your bid is accepted your return is (3/2)V - b, where V is still uncertain.
Since your bid was accepted, any value in the range from 0 to b is possible and all
are equally likely. So on average the company is worth b/2 whenever your bid b is
accepted, and we get (3/2)(b/2) - b, or (3/4)b - b, so already we see that this is a bad, bad
deal. There is an analytical lesson here. Unaided intuition has limitations.
Simulating an actual play of the game can provide fundamental insights. Drawing a
symbolic decision tree can also help straighten out your thinking. Get used to these
techniques.
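One further step, implicit in the plot that follows, converts that conditional loss into an unconditional expected payoff: a bid b is accepted only with probability b/100, so the expected payoff per share is (b/100) x (-b/4) = -b^2/400 -- about -$9 at a bid of $60 and -$25 at a bid of $100. A minimal check (the function name is ours):

```python
def expected_payoff(b):
    """Expected per-share payoff to the Acquirer for a tender bid b.

    P(accept) = b/100 (value uniform on [0, 100]); conditional on acceptance
    the average loss is b/4, so the product is -b*b/400.
    """
    return (b / 100.0) * (-b / 4.0)

for b in (0, 20, 40, 60, 80, 100):
    print(b, round(expected_payoff(b), 2))
# 0 -0.0, 20 -1.0, 40 -4.0, 60 -9.0, 80 -16.0, 100 -25.0
```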
Below we plot the expected payoff as a function of the tender bid offer. It’s a deal
that Ms. A should shun.
[Plot: Expected Return to Acquirer ($/share) versus Bid of Acquirer ($/share). The curve starts at 0 for a bid of $0 and falls steadily, reaching roughly -$25/share at a bid of $100.]
knows and Gambit knows that Maxco knows that Gambit knows and so on. Think
hard about this one.
What would you bid as Maxco? And what would you bid as Gambit conditioned on
what you might learn about the contents of the envelope?
********************(Pause)
Let’s play the conventional sealed-bidding game: high bidder wins at the
high price. When you are ready please read on. O.K. Let’s resume. Let me call in
Jay and Ann again.
Exp: What did you bid as Maxco?
Jay: As Maxco I bid $63.
Exp: And Ann, what do you do as Gambit?
Ann: My strategy was to bid $0 for the $0 prize; $51 for the $100 prize, and $112 for the $200 prize.
Exp: So in this case Jay playing Maxco would win the prize if it were $100 (since his bid of $63 is higher than her bid of $51), but he would lose the $200 prize (since his bid of $63 is less than her bid of $112).
Jay: Good. I would capture the $100 prize.
Ann: Why are you so pleased with yourself? You would win $37 if the prize were $100 but you would lose $63 if the prize were zero.
Jay: You're right. But you were not so smart, either. You would bid $112 for the $200 prize. You could have squeezed harder and come in with some value like $65 say.
Exp: Well, what do we see here? If Gambit is very conservative -- i.e., bids close to $100 for a $100 prize and close to $200 for a $200 prize -- then Maxco will only win when the actual prize is zero.
Jay: This is really the winner's curse once again. Maxco wins the prize only to lose money. So the conclusion seems to be that Maxco shouldn't bid. Especially against a conservative Gambit.
Ann: But if Maxco thinks this way, then I, as Gambit, shouldn't be conservative. I should bid $30 for a $100 prize and say $40 for a $200 prize.
Jay: But be careful Ann. If I suspect that you will be greedy and bid so low, I, as Maxco, will be able to sneak in with a bid of $50 and get both the $100 and $200 prize.
Jay: It is important for us as a player to try to out-psych the other player. But where does this end?
Exp: Well, one tack would be to examine equilibrium behavior. The idea is to give advice to both players that will be stable. The advice should be such that each player would want to follow the advice if the other player follows the advice. The other tack is to examine empirically how people like yourselves behave so that you could behave best against this empirical mix.
Ann: And how should I behave if I don't have this empirical information about others?
Exp: That's the usual case, of course. You just have to imagine yourself in the role of the other party; try to think as you believe he or she might think; and heroically assess your probabilistic betting distribution of what the other party will do. I like to think of 100 different people each playing the role of the other party and I assess the distribution of what I think they will do.
Jay: But if you're lousy at this guessing game, you won't do well will you?
Exp: That's right. But I think you are better off thinking this way than not. Of course, some laboratory experimental results bolster my impressions.
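As a sketch of the Exp's "imagine 100 people in the other role" suggestion -- with the candidate Gambit strategies and their weights being purely illustrative assumptions, not data from the text -- one can compute Maxco's expected payoff against an assessed mix and pick the best bid:

```python
# Assessed population of Gambit strategies (illustrative assumptions only).
gambit_population = [
    # (bid if prize = $100, bid if prize = $200, weight)
    (95, 190, 0.3),    # very conservative Gambits
    (51, 112, 0.4),    # Ann-like Gambits
    (30, 40, 0.3),     # hard-squeezing Gambits
]

def maxco_expected_payoff(m):
    """Prize is $0, $100, or $200 with probability 1/3 each; Gambit bids $0,
    her $100-bid, or her $200-bid accordingly; the high bidder wins at own bid."""
    total = 0.0
    for b100, b200, w in gambit_population:
        for prize, g in ((0, 0), (100, b100), (200, b200)):
            if m > g:
                total += w * (prize - m) / 3.0
    return total

best = max(range(0, 201), key=maxco_expected_payoff)
print(best, round(maxco_expected_payoff(best), 2))
# With these assumed weights the best bid is $52, just above the Ann-like $51.
```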
impolite guffaws from the 100 onlookers. “And you Gambriel?” I continued. “I bid
$12,” said Gambriel, “because the envelope indicated the $20 prize. It turns out to
have been a stupid bid.”
Max was delighted. He netted $5 and snickered back at his disapproving
audience. This shows the danger of a sample of size one. Here is a case of a very
bad decision for Max that turned out to have a good outcome. Some students took
Max’s side and said that if the outcome was good, the decision was good. Max said,
“I’m lucky in gambles; I had an intuitive flash that the chosen card was $20. So I
gambled and won.” I thought to myself, “I wouldn’t hire this guy!”
Think carefully about the Maxco-Gambit game described above, then answer the
following five questions:
1. If you were Gambit, would you prefer to obtain your confidential information
secretively (so that Maxco would not know you had this privileged information) or
openly (so that Maxco knew that you knew the value of the prize and he or she
didn’t)?
2. If you were Maxco and by counter-intelligence, say, you could also learn the value
of the prize, would you want to learn this information secretively (so that Gambit
would not know you also knew the value of the prize) or openly (so that he or she
would know you knew)?
3. If you were Gambit and you were not given the confidential information about the
value of the prize but could buy this information (in a way that Maxco would know
you knew), up to how much would you pay?
4. Neither Maxco nor Gambit knows the value of the prize. But this information is
available on a confidential basis to the highest bidder. What would be your RV for
this auction?
5. There is a sequence of repeated Maxco-Gambit games. At each trial the high
bidder at that trial gets the privileged confidential information at the next trial.
This means, for example, that if Maxco outbids Gambit on trial 4, then Maxco
learns the value of the prize at trial 5 and Gambit now knows at trial 5 that Maxco
knows the value of the prize and she doesn’t. How would you bid?
***********************
Ann: I, Gambit, would want Maxco to know I have the information. I would want to intimidate him to bid zero so that I could bid low. Indeed, even if I could not get the information, I would want to lead him to believe I had the information.
Exp: Good. Now for question 2.
Jay: I, as Maxco, would want my counter-intelligence to be secret. I would like her to believe I was in the dark, so she would be greedy and bid low. Then, if I learned the prize were $200, and if she didn't know I knew, then I could steal the $200 prize for $70 or $80.
Exp: I agree. Now for question 3.
Ann: I'm Gambit so I suppose you want me to respond. But I don't have the faintest idea what the answer is. Is the answer obvious?
Exp: No, it's not. But the line of attack should be clear. Here's what I would do to answer the problem. I would first consider the problem when I, Gambit, did not know the value of the prize and Maxco knew I didn't know. To answer that problem I would assess a probability distribution for the bid of Maxco. Then I could figure out my best retort and calculate my expected return. Next I would consider the case where I learn the value of the prize and I know Maxco knows that I know. Once again I would assess a probability distribution for Maxco bids and then calculate my best $100 and $200 responses and finally calculate the value of that game. The difference would inform me about the value of this information. Go on to question 4.
Jay: My turn. It's a little different from question 3 because if I don't win the bid, she will win it and clobber me. I have to figure out my RV for this information with the understanding that if I have it I can intimidate her while if she has it she can intimidate me.
Exp: That's right. Go on to question 5.
Ann: I would want to bid higher than usual because if I win, I could intimidate Maxco at the next round. It would depend on the number of rounds to be played. This would be fun to play.
Exp: All right, if you want to do this, I'll be the bank, but let's play it for pennies instead of dollars.
m = 10, G0 = 0, G10 = 8, and G20 = 11. What happens to Maxco in this case? Well,
Maxco would lose his bid of $10 if the prize is 0; wins $0 if the prize is $10; and
he does not win the bid if the prize is $20 -- that is because Gambit's bid G20 is 11,
which is higher than m = 10. So Maxco's expected winnings are

(1/3)(-10) + (1/3)(0) + (1/3)(0) = -10/3.

Now glance down the seven illustrative cases and reflect a bit on what you
see happening.
From this simple analysis we can already see that if Gambit is conservative (i.e., G10
near $10, and G20 near $20), Maxco loses money, on average, whenever he or she makes a positive bid. By
contrast, if Gambit squeezes hard (i.e., G10 and G20 are very low), then Maxco can
make some money by bidding 5 or so.
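A minimal calculator for this arithmetic (the function name is ours) reproduces the -10/3 above and lets you check other cases:

```python
from fractions import Fraction

def maxco_expected_payoff(m, g0, g10, g20):
    """Expected payoff to Maxco bidding m against Gambit's bids (g0, g10, g20).

    The prize is $0, $10, or $20 with probability 1/3 each; Gambit knows the
    prize and bids g0, g10, or g20 accordingly; the high bidder wins the prize
    and pays his or her own bid; the loser pays nothing.
    """
    third = Fraction(1, 3)
    total = Fraction(0)
    for prize, g in ((0, g0), (10, g10), (20, g20)):
        if m > g:                      # Maxco wins this branch
            total += third * (prize - m)
        # if Maxco loses the branch contributes 0
    return total

print(maxco_expected_payoff(10, 0, 8, 11))   # -10/3, as in the text
print(maxco_expected_payoff(5, 0, 1, 2))     # positive, against a hard-squeezing Gambit
```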
[Maxco's decision tree: at node A Maxco chooses his bid m; chance then yields a prize of $0, $10, or $20, each with probability 1/3. Maxco wins and nets -m if the prize is $0, wins 10 - m if G10 < m, and wins 20 - m if G20 < m; in each branch where Gambit's bid is higher, M loses and his payoff is 0.]
Now let’s look at Gambit’s decision tree (see Figure 10b.8). At node A,
chance yields prizes of $0, $10, and $20. At node B, Gambit, knowing the prize is
worth $10, must choose a bid G10. At node D, Gambit wins if Maxco’s bid m is less
than G10. A similar story holds for nodes C and E. Gambit must assess probabilities that
Maxco's bid is less than G10 and G20.
[Figure 10b.8 (Gambit's decision tree): at node A chance yields a prize of $0, $10, or $20, each with probability 1/3; knowing the prize, Gambit bids G10 at node B or G20 at node C (a $0 prize yields a payoff of 0). At nodes D and E she wins 10 - G10 if G10 > m and 20 - G20 if G20 > m; if she loses, her payoff is 0.]
Maxco's uncertainty involves the unknown values of G10 and G20. Should
Maxco work through Gambit’s tree to figure out what Gambit will do? This is not
easy because Gambit’s tree depends on the unknown Maxco bid m. How far should
we go thinking about what each of us is thinking? Game theorists want to go the
whole way. To think what the other is thinking about what you’re thinking about ...
etcetera. In practice most people have trouble going even the first step in new
complicated problems. Of course, if the problem is very clear cut, important, and
repetitive, it may be necessary to do more interactive, reflective analysis, and then
equilibrium analysis becomes of real practical importance. Let's see what happens
empirically with the Maxco-Gambit game in the laboratory.
[Empirical results (three pages of charts): a histogram of Maxco bids (number of bids in each $1 range, bids from $0 to $12), a histogram of Gambit bids (number of bids in range, $0 to $11), and plots of expected payoffs against the empirical mix, the last labeled Expected Payoff to Gambit ($).]
Now let’s reflect about what all this means. Let me start a dialogue with Ann and
Jay.
EXP: Well, what do you think? What do these last three pages, depicting the empirical results, tell us?
Ann: Some of your students were confused.
EXP: Granted. You always have to expect some modicum of confusion and misunderstanding. But that happens in the real world as well as in the laboratory. What do you think happens after I show these results to the bidders and then they play the game a second time? Why don't you think about this before reading on.
After Feedback
EXP: I had my students replay this game and what do you expect happened?
Ann: A lot more Maxco players would bid $0.
EXP: That's absolutely right. And Gambit players?
Jay: I would bet they would bid a lot lower than they did before, especially for the $20 prize.
EXP: That's also true. And it also turns out that some Maxco players who live dangerously and bid about $5.50 steal a few big prizes from very aggressive Gambits. Let's speculate. What kind of operational advice can we give to the players that would be in equilibrium? The advice to each must be such that neither bidder would want to change knowing the other would follow our advice.
Ann: I don't see how you could do this. If you tell Maxco to bid any amount like $3.00, then Gambit would want to set G10 and G20 equal to $3.10. But if Gambit were told to bid $3.10, then Maxco would want to bid $3.20.
Jay: It seems to me any advice to Maxco would be unstable if Gambit knew it.
EXP: So far I agree with you, but you haven't thought hard enough yet. The trick is to tell Maxco and Gambit to use randomized procedures. It can be shown, but I won't do it here, that it is possible to give randomized advice to each that yields an equilibrium situation.
Jay: I'm not sure I follow you. You would tell Maxco to base his bid on a random drawing from some distribution. Like all bids should be equally likely between $0 and $10. Is that it?
EXP: Or to bid $0 with probability 0.4, say, and then spread the other 0.6 probability uniformly over the interval of $0 to $10. That's the idea, Jay.
Ann: And you're saying that if Gambit learns about this advice, but doesn't learn about the particular random drawing, then she should also follow your advice. But your advice to her must also involve random elements or else Maxco could sneak in with a non-randomized response.
EXP: That's the idea. Such equilibrium advice is shown in an appendix. (Footnote 2: Not included in the preliminary version.) It's not so easy to derive.
Jay: You indicate on the bottom of the page that if Maxco follows your advice, then his expected return is zero. That's scary. Why bother? Why not just bid zero?
EXP: Yup, you're right. But if Maxco were advised to bid zero, then Gambit could bid nice and low and we would go around in circles once again. The only equilibrium situation is as I indicated. Look, we've gone into this equilibrium behavior more than I think it warrants, so let's push on.
Jay: I'm not satisfied.
EXP: Let's consider the use of shut-out bids. Would it ever be advantageous for Gambit to announce her strategy to Maxco before the game starts? By announcing her strategy, she would declare exactly what she would bid if the prize V equals $0, equals $10, equals $20.
Ann: I can't see how this could help Gambit.
EXP: Suppose Gambit announces that she will bid $0 if V=0; will bid a bit over $5.00, say $5.10, if V=$10; and will bid a bit over $10, say $10.10, if V=$20.
Jay: Well, if Maxco believed Gambit, then Maxco could bid just higher than $5.10, say $5.20.
EXP: What would Maxco's EV be in that case?
Jay: Well, if V were $0, he'd lose $5.20; if V were $10, he would win $4.80; if V were $20, he would lose the auction and get zero. Since each of these is equally likely, I guess his EV would be negative.
Ann: Maxco would be best off bidding zero.
EXP: Gambit's publicly announced strategy is called a shut-out bid. It is meant to intimidate Maxco.
Ann: But if the intimidation were to work, Gambit could surreptitiously bid just above $0.00, say 10¢, if V were $10 or $20.
Jay: Ah yes, but Maxco might expect that Gambit is unprincipled and conniving and bid say 40¢.
EXP: If Gambit threatened you, playing the role of Maxco, with an intimidating shut-out bid, and if you could have a pre-play discussion with Gambit, what would you say to him?
Ann: I don't see what Maxco could do.
EXP: How about if Maxco threatens to bid $10.20? If Gambit were locked in to her shut-out bid, then Maxco would steal the bid.
Ann: But why would he do that? He would lose $10.20, if V were 0; lose 20¢, if V were $10; and gain $9.80, if V were $20; which would yield a negative EMV.
EXP: How negative?
Ann: Not much. About negative 20¢.
EXP: And what happens to Gambit?
Jay: She would get nothing. Ah, I see what's going on. If Gambit can publicly announce a shut-out bid, Maxco can really sock it to Gambit and lose only 20¢ in the deal. And perhaps Maxco can demand some of the payoff to behave nicely.
EXP: All I'm trying to do is to stir up the pot a bit. Shut-out bids should not be taken lying down by the Maxcos of the world, allowing all the gain to go to the intimidating party. Finally you might want to recall the discussion of the Ultimatum Game of Chapter 4.
An exercise for the mathematically inclined: Consider the Maxco-Gambit Problem with just two
equally likely payoffs: $0 and $100. As before Gambit has privileged information and knows the
value of the prize. This is common knowledge. Find the pair of randomized equilibrium strategies.
Hint: Try the randomized strategy for Maxco that puts some weight on a bid of $0 and distributes
the rest in a way that makes any pure strategy of Gambit optimal in the range of $0 to $50. Try a
similar trick for Gambit. The value of the game for Maxco turns out to be a disappointing $0. But
nevertheless bidding $0 for Maxco cannot be a part of an equilibrium pair.
Core Concepts
We consider the distributive negotiation problem when a single seller is confronted with
several potential buyers. (Or a single buyer with several sellers.) Face-to-face negotiations can now
be replaced by several types of auctions or competitive bidding mechanisms. We introduce several of
these designs and examine them from game-theoretic, decision-analytic, and behavioral
perspectives, and from the orientations of both the bidder and the auctioneer.
In the open, ascending outcry auction a bidder should (but often doesn't) prepare for the
auction by determining her breakeven value. There is no need for an analytically inclined bidder to
assess a probability distribution of the Maximum Bid Of Others -- a so-called MBOO analysis for
this case. An MBOO analysis is central to the decision-analytic approach -- but not the game-theoretic
approach -- of the Dutch (descending) auction and the competitive, sealed-bid mechanism.
The high-bidder-wins-at-the-second-high-price auction (or philatelist or Vickrey auction) has
the nice feature that it is optimal to bid one's breakeven value regardless of what other bidders
choose to do. It does not require an MBOO analysis to act optimally.
The chapter then considers two special cases: (1) reciprocal buy/sell bids that are especially
useful when two business partners seek to dissolve their partnership; (2) combinatorial bids such as
the FCC auction when the federal government auctioned off 2500 licenses for radio frequencies. The
complicating feature is that for many bidders the value of a given lease depends on the other leases
that bidder wins.
The central theme of Part B deals with the subtle issue of conditional probabilities. There is
often information to be gleaned from the prior choices of others.
Problem 1 considers the case where the prize is common but uncertain. In bidding for a transparent
jar filled with coins, the winner of the bid would very often be someone who had an extreme
perception of the value of the jar. The winner often finds out that he or she has paid too much. Thus
the winner (of the prize) turns out to be a financial loser -- hence the "Winner's Curse."
Problem 2 considers the (Bazerman-Samuelson) case where an Acquiring firm submits a bid to a
Target firm that can either accept the offer or reject it. The target firm has privileged information
about the value of the firm. Thus, the acquirer's conditional probability distribution for the value of
the firm should depend on whether the target accepts or rejects the offer. In the case discussed, the
acquirer should not want any deal the target would accept. This subtlety is missed by most subjects.
It's another case of subjects becoming confused about conditional probabilities and is somewhat
related to the Monty Hall example in Chapter 3.
Problem 3 is abstracted from the case of two oil wildcatters bidding for a lease when one bidder has
privileged information about the chances and extent of the oil deposits.
In the Maxco-Gambit cases both bidders have identical probability distributions of the value of the
prize but before the bidding takes place Gambit gets privileged information about the value of the
prize. This raises the question of the value of information in a game setting. Part of the value comes
from an intimidation factor: the less knowledgeable player may be wise not to bid or to bid low and
the party with the added information can exploit this knowledge.
One appendix does an equilibrium analysis of the both-pay competitive sealed-bid problem.
The both-pay ascending auction, or the so-called escalation game, was featured in Chapter 9.
Another Appendix does equilibrium analysis for a slight simplification of the Maxco-Gambit
Problem. In this case it is equally likely that the prize is $0 or $100 and Gambit, not Maxco, has this
privileged information.
Appendices
Let X denote the random quantity (variable) with CDF F(.). By definition, for
any constant x,
F(x) = Probability { X ≤ x } .
In order to select a random value for X governed by F, we proceed as
follows:
Let R be a random number on the interval from 0 to 1.0 – all values on
the unit interval being equally likely. Independent R-values can be generated
in EXCEL by the command =RAND(). For any single R drawn,
say r, obtain the corresponding x from the equation
r = F(x) = Probability { X ≤ x } .
Let

F(x) = x^2 , for 0 ≤ x ≤ 1.

Then, for a drawn value r, solve r = x^2, or

x = r^(1/2) .
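A minimal check of this recipe in Python (rather than EXCEL); the names are ours:

```python
import random

def sample_x(rng):
    """Draw X with CDF F(x) = x**2 on [0, 1] by the inverse-CDF method:
    draw r uniform on [0, 1] and solve r = x**2, i.e., x = r ** 0.5."""
    r = rng.random()
    return r ** 0.5

rng = random.Random(0)
draws = [sample_x(rng) for _ in range(100_000)]
print(sum(draws) / len(draws))   # should be near 2/3, the mean of this F
```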
Rules of the Game. Each of n + 1 players must submit a sealed bid for a $100
prize. The top bidder wins the $100 but all bidders sacrifice the value of their
bid.
Illustration: Let n = 2 so that there are three bidders. Assume the bids are as
shown:
Now any bid b in the interval from $0 to $100 must be a good bid against this
common F, or else you yourself would not use F; hence it must be that

100 [F(b)]^n - b = 0 , or F(b) = [ b/100 ]^(1/n) .

To use the mixed (randomized) strategy F, you would generate the
random number r and solve

r = [ b/100 ]^(1/n) , or b = 100 r^n .
So we see that as the number of bidders increases, you should bid more
cautiously, but your equilibrium expected payoff remains 0 for all n.
For n = 1, i.e., you are bidding against one other bidder, you should
choose equally likely bids from $0 to $100. If you choose bid b, you get a b/100
chance of winning 100, so that your expected upside just cancels the downside.
This does not look like a good game to play. So many bidders will
either not bid (i.e., choose b = 0), or choose b < 100 r^n, and in this case
perhaps you should bid even more aggressively than the equilibrium theory
suggests. But if this logic appeals to you, it might appeal to others and
therefore perhaps you shouldn't bid after all. But we can go on … but we won't.
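A minimal simulation sketch of the claim that the equilibrium expected payoff is zero for every n (names and trial counts are ours):

```python
import random

def equilibrium_bid(n, rng):
    """Sample a bid from the symmetric mixed strategy F(b) = (b/100)**(1/n)
    for the both-pay game with n other bidders: b = 100 * r**n."""
    return 100.0 * rng.random() ** n

def simulate(n, trials=200_000, seed=3):
    """Your average payoff when you and n rivals all use the equilibrium
    strategy: you always pay your bid, and win $100 only if yours is highest."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        mine = equilibrium_bid(n, rng)
        others = [equilibrium_bid(n, rng) for _ in range(n)]
        total += (100.0 if mine > max(others) else 0.0) - mine
    return total / trials

for n in (1, 2, 5):
    print(n, round(simulate(n), 2))   # each close to 0, as the theory says
```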
acumen have assured him a good reputation, and he now enjoys the right
to drill for oil at a given site. The trouble is that he has liquidity problems:
most of his money is tied up in other risky ventures and his credit rating at
the bank is not favorable. The cost of drilling is uncertain, but he has the
possibility of taking seismic soundings at the site which will yield some
finding oil. He could plunge all his financial resources into this deal and go
and down sides of the deal. Let's assume for the moment that all
are inviolable both in law and in the intent of the protagonists—and look
reinsurance, that we must be careful not to get lost in its intricate byways.
venture with him. They examine their options and identify one strategy
that appears promising, but the payoffs are uncertain: these depend on the
(uncertain) cost of drilling, on how much oil is down there, on how easy
the oil is to recover, on future regulations, on future oil prices, and on a lot
more. To simplify, we'll say that they depend on which one of five states of
we might use something like five thousand states of the world.) Mr. Lloyd
consults his own experts and obtains probabilistic assessments of the five
proceeds in each of the five states. If state A unfortunately occurs, then the
team will lose $70,000. George, who is short of funds, will want Lloyd to
assume most of this loss. But, of course, Lloyd is not going to agree with
this unless his own shares are sufficiently high for the states C, D, and E.
just to keep George honest—or, more felicitously put, to give George the
right incentives.
George and Lloyd have to decide how to share the loss of $70,000 if
easier to generalize to more than two risk sharers at a later stage), let's
suppose that George and Lloyd have to select two numbers: AG, the payoff
to George if A occurs, and AL, the payoff to Lloyd if A occurs (all payoffs
1
If they were to disagree on the financial consequences associated with a given state, then they could
decompose that state into two or more states with differing probabilities. Our present format (including
additional states) is thus quite general. For example, if George thinks the payoff in state C is $30,000 and
Lloyd thinks it is $50,000, then state C could be split into two states, C' and C’’, with payoffs $30,000 and
$50,000, respectively. George may assign probabilities of .3 and zero to C' and C’’, whereas Lloyd may
assign probabilities of zero and .5 to C' and C’’, respectively.
George and Lloyd have to decide on ten numbers: AG, AL,…, EG, EL (see
AG + AL = -70,
BG + BL = -20,
CG + CL = 30,
DG + DL = 80,
EG + EL = 200.
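To make the constraints concrete, here is a minimal sketch of how one candidate sharing plan could be evaluated. The probabilities, risk tolerances, exponential utility functions, and the plan Q itself are illustrative assumptions only -- the text does not supply these numbers:

```python
import math

STATES = ["A", "B", "C", "D", "E"]
TOTAL = {"A": -70, "B": -20, "C": 30, "D": 80, "E": 200}   # $ thousands, shared

# Illustrative assumptions only (not given in the text).
P_GEORGE = {"A": 0.15, "B": 0.20, "C": 0.30, "D": 0.20, "E": 0.15}
P_LLOYD  = {"A": 0.20, "B": 0.20, "C": 0.25, "D": 0.20, "E": 0.15}
RISK_TOL_GEORGE = 60.0    # George is short of funds: low risk tolerance
RISK_TOL_LLOYD = 300.0    # Lloyd can absorb losses more easily

def expected_utility(shares, probs, risk_tol):
    """Expected exponential utility of a risk profile (payoff in each state)."""
    return sum(p * -math.exp(-shares[s] / risk_tol) for s, p in probs.items())

def evaluate(george_share):
    """george_share[s] is George's payoff in state s; Lloyd gets the rest."""
    lloyd_share = {s: TOTAL[s] - george_share[s] for s in STATES}
    return (expected_utility(george_share, P_GEORGE, RISK_TOL_GEORGE),
            expected_utility(lloyd_share, P_LLOYD, RISK_TOL_LLOYD))

# One candidate plan Q: Lloyd absorbs most of the downside, George keeps
# a larger slice of the upside.
plan_q = {"A": -10, "B": -5, "C": 10, "D": 30, "E": 80}
print(evaluate(plan_q))
```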
For any determination of these ten numbers, George and Lloyd will
each be confronted with a risk profile. George's risk profile will yield
respectively; Lloyd's risk profile (see chapter 2) will yield financial prizes
specific setting of the ten numbers)2 is inefficient in the sense that the ten
risk-sharing numbers could be changed to improve the risk profile for each
party (in that party's subjective opinion). In other words, there may be
2
Because of the five financial constraints, these ten numbers have really five degrees of freedom: once we
determine what Lloyd gets in each state, George gets the complement.
opportunities for joint gains. Figure 14A.2 depicts graphically what could
occur. For a specific risk sharing plan, Q (which arises through the
equivalent to his resulting risk profile of $5,000, and Lloyd might assign a
depicted, the risk-sharing plan Q is not efficient: they both can improve,
since there are joint evaluations of risk-sharing deals that fall northeast of
Q. George controls the ownership of the deal and can remind (threaten?)
Lloyd that there are other speculators who would love to join in the
venture. Lloyd could counter that he, too, has a choice of other potential
drilling deals. They also can remind each other about the transaction costs
work on other deals together in the future. The point of all this is that
to discuss details about how such sharing procedures are made or could be
available to him before he can arrive at a reservation price for the present
set of negotiations.
Suppose that George will deal with Lloyd only if he can get a
shown in Figure 14A.2, it may be possible to satisfy George and still get a
positive return for Lloyd, but there's not much leeway. They may never
find sharing arrangements that are mutually acceptable, even though such
insurance for our automobiles. But the owner of an oil tanker entering into
troubled waters might have some negotiating leverage with his insurance
suppliers, and vice versa. By and large, we can think of these negotiations
Strategy of Presentation
We don't seek full generality but present enough of the theory in a special case to
enable the reader to extend it to more general settings.
Let Rowena have two pure strategies R1 and R2 and Colin have strategies C1 and
C2 as was the case for the bi-matrix games of Ch. 4. For any (Ri, Cj) pair we need only
display Rowena’s utility payoffs since Colin’s motivation is to minimize Rowena’s
payoff from the game.
To emphasize that the payoffs are in utility units and that utilities are
essentially breakeven probabilistic judgments, we'll interpret the payoffs as follows: the
payoff to Rowena for the pair (Ri, Cj) is a utility number Aij between 0.0 and 1.0, which
gives Rowena a standard probability Aij of winning some desirable prize W and a
complementary probability of remaining at the status quo. Think of Colin as supplying this prize to
Rowena, so that the higher the number Aij, the better for Rowena and the worse for Colin.
C1 C2
R1 A11 A12
R2 A21 A22
Rowena must choose either strategy R1 or R2; Colin has the choice of C1 or C2. The four
Aij numbers are known fully by the players; in the parlance of game theory, the payoff
numbers are common knowledge.
Table 1 (Game 1)
       C1    C2    Security Level
R1*    .4    .5    .4*
R2     .3    .2    .2

Table 2 (Game 2)
       C1    C2    Security Level
R1     .1    .9    .1
R2     .7    .2    .2*
Analysis of Game 1
In Game 1, Rowena, by playing R1, has a security level of .4; that .4 is the minimum of
row R1. By playing R2, Rowena has a security level of .2, which is the minimum of row
R2. The two row minima are .4 and .2, and the maximum of these two row minima is .4.
Turning to Colin's viewpoint: the maximum that Colin can lose playing C1 is .4 (the max
of column C1), and the maximum he can lose with C2 is .5 (the max of column C2). Thus .4 and .5 are Colin's security
losses, and if he minimizes his security losses (i.e., if he minimizes over his maximum
potential losses of .4 and .5), he'll choose C1, which guarantees a maximum loss of .4 units.
Game 1 has a value of .4: Rowena, by playing the strategy R1 that maximizes
the row minima – her so-called maximin strategy – can guarantee herself at least .4; and
Colin, by playing the strategy C1 that minimizes the column maxima – his so-called
minimax strategy – can guarantee that Rowena gets at most .4. A nice, neat, reinforcing
bundle! How different things are going to be when we turn our attention to Game
2.
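As an arithmetic check on Game 1 – and, with the payoffs changed, on Game 2 as well – here is a minimal Python sketch (ours, not the Supplement's); it simply recomputes the row minima, column maxima, maximin, and minimax.

game1 = [[0.4, 0.5],    # Rowena's payoffs: row R1 against C1, C2
         [0.3, 0.2]]    # row R2 against C1, C2

row_minima = [min(row) for row in game1]          # security levels: [0.4, 0.2]
col_maxima = [max(col) for col in zip(*game1)]    # security losses: [0.4, 0.5]

maximin = max(row_minima)   # 0.4 -- Rowena can guarantee at least this
minimax = min(col_maxima)   # 0.4 -- Colin can hold Rowena to at most this
print(maximin, minimax)     # 0.4 0.4: the game has a value of .4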
Analysis of Game 2
• The minima in each row (labeled security levels) are .1 and .2, and the
maximum of these minima is .2. Thus the maximin value is .2 and the
maximin choice for Rowena is R2.
• The maxima in each column (labeled security losses) are .7 and .9, and the
minimum of these maxima is .7. Thus the minimax value is .7 and the
minimax choice for Colin is C1.
• The maximin value of .2 – the value that Rowena can secure for herself –
is less than the minimax value of .7 – the ceiling that Colin can guarantee.
Figure 1: A scale from 0 to 1.0 marking the maximin value (.2) and the minimax value (.7), with the gap between them.
If Rowena uses a mixed strategy with x = .5 – which we label MS(.5), meaning she chooses R1 with probability x = .5 and R2 with probability 1 - x = .5 – then her
payoff against C1 is .5 × .1 + .5 × .7 = .4, and her payoff against C2 is .5 × .9 + .5 × .2
= .55. Note – and this deserves some thought – that all probabilities are standard, or what
we have called canonical. We have studiously avoided, thus far, any use of subjective
probabilities.
Table 3 (Game 2B)
                        C1    C2    Security Level
R1                      .1    .9    .1
R2                      .7    .2    .2
MS(.5) (toss a coin)    .4    .55   .4*
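To see how the security level of MS(x) varies with x, the following short sketch (our own illustration; the grid of x values is arbitrary, and the entry 5/13 anticipates the maximin derivation below) tabulates Rowena's worst-case payoff for Game 2.

def security(x, game=((0.1, 0.9), (0.7, 0.2))):
    """Rowena plays R1 with probability x, R2 with 1 - x; return her worst-case payoff."""
    (a11, a12), (a21, a22) = game
    vs_c1 = x * a11 + (1 - x) * a21
    vs_c2 = x * a12 + (1 - x) * a22
    return min(vs_c1, vs_c2)

for x in (0.0, 0.25, 0.5, 5/13, 0.75, 1.0):
    print(round(x, 3), round(security(x), 4))
# The security level peaks at x = 5/13, foreshadowing the maximin analysis that follows.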
We now have some work cut out for ourselves and some conjectures to be made:
• We have introduced for Rowena a new class of mixed strategies – the
MS(x) strategies. We now can manipulate x to achieve the optimal
maximin value.
We can do the same trick for Colin: let him use C1 with probability y and
C2 with probability 1 - y, trying to improve his security losses. We should
suspect that by randomization he could improve (i.e., lower) his minimax
value.
• Wouldn't it be "loverly" if the gap between the minimax and maximin values
vanished with the proper choices of x and y? In that eventuality, the nice story
for Game 1 would be repeated for Game 2 once mixed strategies are
introduced. There will be some value, v*, such that Rowena, by choice of
x*, can guarantee herself at least v*, and Colin, by choice of y*, can
guarantee that Rowena gets at most v*. Furthermore, the optimal x – let's
call it x* – and the optimal y – call it y* – are in equilibrium. We'll see that this is the case.
Geometry
Now to explain Fig. 1. The horizontal axis depicts the value of x, the probability that the
mixed strategy chooses R1. Hence as x goes from 0.0 (choice of R2) to 1.0
(choice of R1), the payoff against C1 goes from .7 on the left to .1 on the right; and
against C2 the payoff goes from .2 on the left to .9 on the right.
Now suppose Rowena were to announce that she is going to use MS(.5) – i.e., that she is
going to toss a fair coin. So we go to .5 on the horizontal axis, where Colin has the choice
of the C1 or the C2 line. As the minimizing player, Colin will choose the C1 line against
MS(.5). But against x = .2 (say), Colin would prefer C2. Hence the broken darkened line
represents the payoff to Rowena as a function of x, assuming that Colin chooses the best
retort available to him. Thus the darkened line is the minimum function (or the security
function) that Rowena can achieve as a function of her chosen x. We see – and a little
algebra shows – that Rowena should choose x* = 5/13 for her maximin strategy, which will
guarantee her a return of .4692. These numbers can be read off a precisely drawn figure,
but they also can be derived algebraically as follows: we require
.1x* + .7(1 - x*) = .9x* + .2(1 - x*),
and solving, we get x* = 5/13 = .385. The maximin value v* is thus .1(5/13) + .7(8/13) = 6.1/13 = .4692.
End of story for Rowena's analysis; now on to Colin's analysis.
Figure 2: Rowena's expected payoff as a function of x, showing the line for C1 (y = 1), the line for C2 (y = 0), and the pivot point at a height of .4692.
Colin’s Analysis
Let y designate the probability that Colin uses C1, and let 1 - y be the probability
of C2. Now here is the key: for any specific y Colin uses, the return to Rowena can be
shown as a line in Fig. 1 that goes through the pivot point. Indeed, the lines for y = 0 and
for y = 1 are already shown on that diagram. Let's examine the line we would get for
y = .5. It would be midway between the lines for y = 0 and y = 1, but not quite horizontal. For
y = .5 the intercept on the left (the R2 side) would be .5 × .7 + .5 × .2 = .45, and the right
intercept (the R1 side) would be at .5 × .1 + .5 × .9 = .5. Hence the line for y = .5 would
start on the left at a height of .45 and tilt slightly upward, passing through the pivot
point and reaching a height of .5 at the right. If Colin were to announce y = .5, Rowena
would choose the right end point (corresponding to x = 1, or to R1). Colin would like to
choose y such that the line he offers to Rowena is horizontal, going through the pivot point
(as the lines for all y values do). To find this y we set
.1y + .9(1 - y) = .7y + .2(1 - y),
or
y = 7/13 = .538.
Table 4
                C1 (7/13)   C2 (6/13)   EV
R1 (5/13)       .1          .9          6.1/13
R2 (8/13)       .7          .2          6.1/13
EV              6.1/13      6.1/13
The calculations say: Rowena can guarantee a payoff of 6.1/13 by playing her maximin
strategy. Colin can guarantee that Rowena gets at most 6.1/13 by playing his minimax
strategy. Rowena’s maximin and Colin’s minimax strategies are in equilibrium in the
sense that there is no motivation for either to change if the other holds fixed. End of
story.
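For readers who want to verify the algebra mechanically, here is a short Python sketch (ours, not the Supplement's) that solves the two indifference equations and reproduces x* = 5/13, y* = 7/13, and the game value 6.1/13.

from fractions import Fraction as F

# Payoffs to Rowena in Game 2 (Colin is the minimizer).
a11, a12 = F(1, 10), F(9, 10)   # R1 against C1, C2
a21, a22 = F(7, 10), F(2, 10)   # R2 against C1, C2

# Rowena's x = P(R1): equalize her payoff against C1 and against C2.
x = (a21 - a22) / ((a21 - a22) + (a12 - a11))     # 5/13
# Colin's y = P(C1): equalize Rowena's payoff from R1 and from R2.
y = (a12 - a22) / ((a12 - a22) + (a21 - a11))     # 7/13

value = a11 * x + a21 * (1 - x)                   # 61/130, i.e. 6.1/13
print(x, y, value, float(value))                  # 5/13 7/13 61/130 0.4692...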
[Use of an EXCEL Linear programming package to find Rowena’s maximin strategy and
Colin’s minimax strategies and the resulting value of the game with more than two
strategies for each player.]
Let m and n represent respectively the number of pure strategies for Rowena and
Colin. We have just completed the story for m = n = 2. We now investigate the analysis
for more general m and n.
First let us illustrate a game that has an equilibrium pair in pure strategies. See
Table 5. We want to continue interpreting the payoff numbers as utilities so think of the
payoffs as chances out of ten that Rowena will receive a desirable prize from Colin.
Table 5
                 C1   C2   C3   C4   Security Level
R1               1    3    4    1    1
R2               6    5    8    7    5   (maximin)
R3               4    4    0    2    0
Security Loss    6    5    8    7
                      (minimax)
Rowena's security levels for her pure strategies are the minima of the rows, and
the maximizer of these row minima is R2. Her maximin value in pure strategies is 5: she
can guarantee at least 5 units for herself by playing R2. Colin's security losses are the
maxima of each of the columns, and the minimizer of these column maxima is C2; he can
guarantee, by playing C2, that Rowena gets no more than 5 units. In this case the maximin
equals the minimax.
The pair (R2, C2) is in equilibrium, since R2 is best for Rowena against C2, and C2 is
best for Colin (the minimizing player) against R2. Observe also that the (R2, C2) entry
of 5 units is the maximum of its column and the minimum of its row. This summarizes
what needs to be said about strictly competitive games with pure-strategy equilibria.
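A brute-force scan for such a saddle point is easy to program; the snippet below is our own illustration, not part of the text, and simply looks in Table 5 for entries that are simultaneously the minimum of their row and the maximum of their column.

game = [[1, 3, 4, 1],
        [6, 5, 8, 7],
        [4, 4, 0, 2]]

# Collect all (row, column) pairs that are pure-strategy equilibria ("saddle points").
saddles = [(i, j)
           for i, row in enumerate(game)
           for j, a in enumerate(row)
           if a == min(row) and a == max(r[j] for r in game)]
print(saddles)   # [(1, 1)] -- i.e., (R2, C2), with value 5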
Games with no equilibrium pair in pure strategies. Mixed strategies to the rescue.
Consider the game depicted in Table 6. The row minima are 1, 1, 2, and the
maximum of these minima is 2, Rowena's maximin value in pure strategies. The maxima
in each column are, respectively, 5, 3, 4, 6, and the minimum of these column maxima is
3. Thus Colin can secure, by choice of C2, that he will not lose more than 3 units, but Rowena can
only command 2 units for herself. There is a gap between the maximin value and the
minimax value. We also note that there is no equilibrium pair – no entry is both the maximum of
its column and the minimum of its row.
Table 6
                 C1   C2   C3   C4   Row Minima
R1               1    3    4    1    1
R2               5    2    1    6    1
R3               4    3    3    2    2   (maximin)
Column Maxima    5    3    4    6
                      (minimax)
We will now show that Rowena can increase her security level – i.e., increase her
maximin value – by the use of randomized strategies.
Consider, for example, the randomized strategy that assigns equally likely values to
R1, R2, and R3 – i.e., assigns a .333 probability to each pure strategy, as shown in Table 7.
Table 7
Randomized
Strategy           C1      C2      C3      C4      Row Security
0.333   R1         1       3       4       1       1
0.333   R2         5       2       1       6       1
0.333   R3         4       3       3       2       2
Expected payoff    3.333   2.667   2.667   3.000
If Rowena uses this randomization, then her expected return against any column chosen
by Colin is the simple average of the entries in that column. Hence her payoffs in utility
terms are as shown in the last row of Table 7 and the security level of that randomized
(or mixed) strategy is the minimum of those entries, a whopping 2.667. Can she do
better by using a different randomization? How much can she aspire to? Certainly not
more than Colin's minimax value of 3. Can she aspire to a security level of 2.9? This
now becomes a mathematical programming problem. First let us note that the value of
Rowena's entry against C1, which we already know is the average of the C1 column of
numbers, is also the sumproduct of the strategy column and the C1 column. Indeed, the
payoff of any legitimate randomization against any column is merely the sumproduct of
those two columns of numbers.
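The sumproduct arithmetic is easy to verify; the few lines below (ours, not the text's) recompute the expected payoffs of the equal-weights randomization and its security level.

A = [[1, 3, 4, 1],
     [5, 2, 1, 6],
     [4, 3, 3, 2]]
x = [1/3, 1/3, 1/3]   # the equal-weights randomization of Table 7

# Expected payoff against each column = sumproduct of x with that column.
evs = [sum(xi * row[j] for xi, row in zip(x, A)) for j in range(4)]
print([round(v, 3) for v in evs], round(min(evs), 3))
# [3.333, 2.667, 2.667, 3.0] with a security level of 2.667, as in Table 7.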
We first set up Table 8. We choose a ballpark aspiration level of 2.5. That number
is used as a convenience to set up the calculations and is arbitrary at this stage. Our aim is to
find a legitimate randomized strategy – a set of three non-negative numbers that sum to
1.0 – that yields expected payoffs against C1 to C4 that are each in excess of some
pre-specified aspiration level. In this case we see that the randomized strategy as shown
yields numbers that are each in excess of the aspiration value of 2.5. These excesses are
shown in the last row of numbers in Table 8. The problem is: given the setup in Table 8,
maximize the aspiration-level cell and find a set of legitimate randomization values for
which each of the excesses is non-negative. This is a linear programming problem ideally
set up for SOLVER, an add-in to EXCEL.
When SOLVER was queried it responded by showing Table 9, which shows how
Rowena could choose a randomized strategy yielding her maximum possible aspiration
of 2.75.
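Readers without EXCEL can pose the identical linear program in other tools. The sketch below uses Python with scipy.optimize.linprog (our choice of tool, not the Supplement's spreadsheet); it maximizes Rowena's guaranteed expected payoff for the Table 6/7 game and recovers the randomization (.25, .25, .50) with value 2.75.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 3, 4, 1],
              [5, 2, 1, 6],
              [4, 3, 3, 2]], dtype=float)   # payoffs to Rowena
m, n = A.shape

# Decision variables: x1, x2, x3 (the randomization) and v (the guaranteed value).
# Maximize v, which linprog expresses as minimizing -v.
c = np.zeros(m + 1)
c[-1] = -1.0

# For every column j:  v - sum_i A[i, j] * x_i <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# The x_i must form a probability distribution.
A_eq = np.array([[1.0] * m + [0.0]])
b_eq = np.array([1.0])

bounds = [(0, 1)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], -res.fun)    # roughly [0.25, 0.25, 0.5] and 2.75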
Colin’s Analysis
We start the process in Table 11. We choose a representative random strategy for
Colin as shown in B3:E3. The sum of those four probabilities is shown in cell F3, which
in this case is 1.0. The expected payoff of this randomized strategy against R1 is 2.3;
against R2 it is 3.6; against R3 it is 2.7. These are sumproducts of the randomized strategy
and the row payoffs. With a representative arbitrary aspiration level of 2.9 (pulled out of
the air), we see that Colin cannot achieve this aspiration level using the displayed random
strategy (B3:E3), because the excess with R2 is negative. Remember that Colin is the
minimizing player and the excesses are the aspiration value minus the EVs. We
see that, pitted against R2, the resulting EV of 3.6 exceeds the aspiration level of 2.9.
A B C D E F G
1 Table 11 Aspiration 2.9
2
3 C’s Rand. Str. 0.1 0.2 0.3 0.4 1
4 C1 C2 C3 C4 EV Excess
5 R1 1 3 4 1 2.3 0.6
6 R2 5 2 1 6 3.6 -0.7
7 R3 4 3 3 2 2.7 0.2
The instruction to SOLVER is to minimize the entry in D1 (Colin is the minimizing player), subject to keeping the calculated values (the excesses) in F5:F7 non-negative.
SOLVER takes a second or so to produce the answer shown in Table 12. Colin can hold
Rowena down to at most 2.75 by playing his minimax randomized strategy.
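Colin's problem is the mirror image: minimize the aspiration level subject to every row's expected payoff staying at or below it. A matching sketch (again our own, not the book's spreadsheet) follows.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 3, 4, 1],
              [5, 2, 1, 6],
              [4, 3, 3, 2]], dtype=float)
m, n = A.shape

# Variables: y1..y4 (Colin's randomization) and w (the ceiling on Rowena's payoff).
c = np.zeros(n + 1)
c[-1] = 1.0                                  # minimize w

# For every row i:  sum_j A[i, j] * y_j - w <= 0
A_ub = np.hstack([A, -np.ones((m, 1))])
b_ub = np.zeros(m)

A_eq = np.array([[1.0] * n + [0.0]])         # the y_j sum to 1
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n + [(None, None)])
print(res.x[:n], res.fun)                    # roughly [0, 0.5, 0.25, 0.25] and 2.75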
Table 13
                     C1     C2     C3     C4     Row's EV   Excess
Colin's strategy     0      0.5    0.25   0.25   (sum = 1)
  0.25   R1          1      3      4      1      2.75       0
  0.25   R2          5      2      1      6      2.75       0
  0.50   R3          4      3      3      2      2.75       0
Col's EV             3.5    2.75   2.75   2.75   2.75
That completes the story, except for a proof that the best aspirations for
Colin and Rowena are always the same: with randomization, the gap between the
maximin and minimax values is zero. We know how to make this result convincing, but
it's too specialized for our needs and is thus omitted.
Fig. 2: The Table 5 game, with the (R2, C2) entry of 5 marked as the maximum of its column and the minimum of its row.
Fig. 3: The Table 6 game, with row security levels (maximin of 2 at R3) and column security losses (minimax of 3 at C2).
Fig. 4: Introduction of randomized strategies for the Table 6 game, which has no equilibrium pair in pure strategies; Rowena assigns probabilities x1, x2, x3 to R1, R2, R3, and her EV against each column is the sumproduct of the x's and that column.
Fig. 5: Rowena's randomized strategy (.250, .250, .500) on R1, R2, R3, achieving the aspiration level of 2.75.
[Spreadsheet exhibit: the equilibrium pair – Rowena (.25, .25, .50), Colin (0, .50, .25, .25) – with all expected values equal to 2.75; the only positive excess, .75, occurs for Rowena's EV of 3.5 against C1.]
[Spreadsheet exhibit: trial randomizations evaluated against an aspiration level of 2.5 – Colin plays (.1, .3, .4, .2), giving EVs of 2.8, 2.7, 2.9 against R1, R2, R3; Rowena plays (.1, .5, .4), giving EVs of 4.2, 2.5, 2.1, 3.9 against C1-C4.]