Lecture 5: Social Preferences — Experimental Economics (ECON 3020)

Summary
- Social preferences refer to people's intrinsic concern for others' well-being, such as altruism, fairness, and reciprocity. The ultimatum game highlights the conflict between selfishness and social preferences.
- In typical ultimatum game experiments, responders reject offers below 20% of the pie about half of the time, and proposers offer 30-50%. This contradicts the prediction of pure self-interest but can be explained by intentions and inequity aversion.
- In the dictator game, where recipients cannot reject, offers are lower than in ultimatum games, supporting the suspicion that some high ultimatum offers are strategic rather than fair. Overall, the results indicate that people care about fairness and reciprocity to some degree.


Lecture 5: Social Preferences

Experimental Economics (ECON 3020)


University College London
Spring 2013
Introduction
Social preferences refer to the intrinsic concern that people have for each other's well-being, such as altruism, fairness and reciprocity.
The classical approach in economics assumes that an individual is self-interested, or it suspects that any social preference other than self-interest may be fragile.
G. Stigler (1981): "When self-interest and ethical values with wide verbal allegiance are in conflict, much of the time, most of the time in fact, self-interest theory ... will win."
In 1982, Güth, Schmittberger, and Schwarze reported the kind of empirical findings that probably surprises only economists.
They studied a two-player ultimatum game, one crisp way of highlighting the stark conflict between selfish, strategic behavior and notions of social preferences such as fairness.
In a typical lab experiment, responders reject offers of less than 20% of the total money around half of the time, and proposers often offer between 30% and 50%.
Introduction
Such typical findings in the ultimatum game conflict with the standard game-theoretic prediction (subgame perfect NE) under self-interested preferences.
In this lecture, we will see more evidence on different aspects of social preferences.
Before proceeding, it is crucial to emphasize that evidence of social preferences does not falsify game theory per se.
Games lead to utilities over allocations, and one player's concern for how much another player earns (whether positive or negative) can certainly affect his/her utility.
In experiments, however, games are played in money. Since we cannot easily measure or control players' preferences over how much others earn, we always end up testing a joint hypothesis: game-theoretic behavior coupled with some assumption about utilities over money outcomes.
Ultimatum Game
Proposer (Player 1) suggests a split of a fixed pie, say 10.
Responder (Player 2) accepts (the proposal is implemented) or rejects (both receive 0).
Equilibrium? (Nash? SPNE?)
!"# %&'()*%( +)(#
,-./.0#- 1,&)2#- 34 0%++#0* 0/&5* .6 ) 78#9 /5#: 0)2 ;3<=
>#0/.?9#- 1,&)2#- @4 )AA#/*0 1/-./.0)& 50
5(/&#(#?*#94 .- -#B#A*0 1C.*" -#A#5D# <4
EF%5&5C-5%(G 1H)0"G I,HEG4
Standard Game Theory
Nash equilibrium
Responder accepts anything in a set S
Proposer proposes the minimum amount in S acceptable to the responder
Subgame perfect equilibrium
Responder accepts anything (why?)
Proposer offers the minimum amount
In the discrete version there is one additional SPNE: the responder rejects 0 and accepts anything above 0; the proposer offers one increment above 0
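To make the subgame-perfect logic concrete, here is a minimal Python sketch of the discrete ultimatum game with purely selfish players; the pie size and smallest money unit are illustrative choices, not parameters from the experiments discussed below.

```python
# Minimal sketch: discrete ultimatum game with purely selfish players.
# Pie size and smallest money unit are illustrative assumptions.

PIE = 10.0
STEP = 0.5  # smallest monetary unit in this discrete version

def proposer_best_offer(responder_accepts):
    """Return the offer (share to the responder) that maximises the proposer's
    payoff, given the responder's acceptance rule."""
    best_offer, best_payoff = None, float("-inf")
    n_steps = int(PIE / STEP)
    for k in range(n_steps + 1):
        offer = k * STEP
        payoff = PIE - offer if responder_accepts(offer) else 0.0
        if payoff > best_payoff:
            best_offer, best_payoff = offer, payoff
    return best_offer, best_payoff

# SPNE 1: a selfish responder accepts every offer (rejecting costs money),
# so the proposer offers nothing.
print(proposer_best_offer(lambda offer: True))       # (0.0, 10.0)

# SPNE 2 (discrete case only): the responder rejects exactly 0 and accepts
# anything above it, so the proposer offers one increment above 0.
print(proposer_best_offer(lambda offer: offer > 0))  # (0.5, 9.5)
```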
Ultimatum Game with Humans
The first UG experiment was conducted by Werner Güth et al. (Journal of Economic Behavior and Organization, 1982)
Probably the most influential single experiment, rivaled only by the double auction
Many, many versions tried since
From students
To experiments by anthropologists in highly unusual settings such as the Amazon rainforest (Henrich et al., 2001)
Different stakes, framing, ...
UG common results
Offers:
Almost no offers above 50% of the pie
Mode and median of offers in almost any study in the interval [40%, 50%] of the pie
Mean offer is usually in the interval [30%, 45%] of the pie
Very few offers in the 0-10% range of the pie
Accept/reject decisions:
Rejection rates vary between 0% and 30%
Offers larger than 40% are rarely rejected
Offers smaller than 20% are rejected about half of the time
The probability of rejection decreases as the offer increases
When responders are asked which offers they would accept before they know the actual offer, a small number reject very high offers (strategy method)
Overall, UG results clearly reject the SPNE prediction for selfish individuals
Discussing the results
Stakes?
For higher stakes, offers and rejection rates are lower, but the effect is quite small (see Oosterbeek et al., 2004)
Uncertain pie (the responder doesn't know the pie size)
Offers are generally smaller
UG in many countries and cultural settings
Surprisingly weak effects. Two extreme examples (Henrich et al., 2001):
UG in small-scale societies
The Machiguenga and Quechua in Peru offer little on average and almost never reject
The Aché of Paraguay and the Lamalera whale hunters of Indonesia offer more than 50%, and even this is sometimes rejected
Market integration is positively correlated with generous offers!
!" $% &'()) &*()+ &,*$+-+&
.(*/$01+%0( (%2 31+*/1( $% 4+51 ,6+5 )$7)+ ,% (8+5(0+ (%2 5+9+*:
()',&: %+8+5
;*/+ /+(2/1%:+5& ,< 4(5(01(= (%2 >('+5+)( ?/()+5& ,< @%2,%+&$( ,6+5
',5+ :/(% ABCD (%2 +8+% :/$& $& &,'+-'+& 5+9+*:+2
.(5E+: $%:+05(-,% F,&$-8+)= *,55+)(:+2 ?$:/ 0,,2 ,6+5&G
Offers and responses in small-scale societies
[Figure: offers and rejection rates in small-scale societies (Henrich et al., 2001)]
Experimental design issue
There are few very small offers and few offers of more than half of the pie
Not enough responder observations in these cases!
How can we investigate responder behavior for very small or very large offers? How do we know if proposers behave rationally?
Is it indeed optimal not to offer a lot, or nothing?
Strategy elicitation method: ask responders for their complete strategy, i.e. how they would choose at each decision node, before they know the actual offer
Explaining behavior: proposers
First interpretations of UG data:
Fairness: proposers are fair to the responders and give a larger share than necessary (once more, economists find out the bleeding obvious)
But can we be sure of this?
We know smaller offers are more likely to be rejected
Hence proposers could just be reacting rationally to the (non-credible) threats of responders
We cannot reject the possibility that proposers are rational and selfish and the results are just driven by responders
How to distinguish between these explanations?
How about removing the responder's opportunity to reject?
Then a positive offer is clearly a sign of the proposer's fairness
Dictator game
Simplification of the UG
Designed to check to what extent proposers care for fairness
The "dictator" has to determine how to divide the pie (say 10) between himself and an anonymous recipient
In contrast to the UG, the recipient cannot reject
Nash equilibrium: the (selfish) dictator passes s = 0 to the recipient
If dictators/proposers are mainly driven by fairness, offers should be broadly the same in the DG and the UG
If proposers in the UG only propose s > 0 because they fear rejection, we would expect offers of 0 in the DG
Dictator game: results
Experimental results reject the prediction of an offer of s = 0:
On average dictators give away 20%, but there is a lot of heterogeneity
Usually only 20% of the subjects choose s = 0; 60% choose 0 < s < 50% and roughly 20% choose s = 50%
Offers in the DG are lower than those in the UG
This supports our suspicion that some high offers in the UG were strategic
Made in order to avoid rejection and not because the proposer cares for fairness
Thus results in the UG are to a large extent driven by fairness concerns (or the desire for revenge) of the responders
On the other hand, many subjects still offer s > 0, so they seem to care about fairness to some extent
Ultimatum vs Dictator
!"#$%&'$ )* +,-&%&./
Digging deeper into dictator games
Several researchers have identified features that make dictators radically more selfish in the DG
Double-blind protocols
This alone is sufficient to make more than 60% of dictators choose s = 0
Average s goes down to about 10%
Uncertain pie size: only proposers know the size of the pie -> responders accept lower offers
Desert: making the dictator "work" (solve a maze or do an IQ test) for the pie first, such that better results yield a larger pie
Combined with a double-blind protocol this almost completely eliminates positive offers (Cherry et al., AER, 2002)
In contrast, if receivers work, some dictators offer more than 50%
In contrast, the double-blind protocol has almost no effect in the UG
Giving in the DG seems governed by norms and hence influenced by observability
Rejections in the UG are not influenced by observability
Explaining behavior: responders
Rejecting even small positive offers violates payoff maximization
Possible explanations:
Lack of rationality
Aversion towards unequal payoffs (inequity aversion)
Reciprocity: the motivation to punish "unfriendly" acts (negative reciprocity) and reward "friendly" acts (positive reciprocity) => intentions matter (Falk, Fehr & Fischbacher, 2003)
Outcomes and intentions
!"#$%&'( *+, -+#'+.%+(
/0 1'%12' %+23 $*4' *5%"# %"#$%&'(6 #7'+ 4'8'$.%+ 4*#'( (7%"2, 5'
-+,'1'+,'+# %0 #7' *2#'4+*.9'
If people only care about outcomes, then rejection rates
should be independent of the alternative
Both outcomes and intentions matter
!"#$ "&#'"()* +,- .,#),/",* (+0)1
21"3"*)1 4)$+5."1 .* '"(3+/46) 7.#$ *)68*$,)** 94&# +6*" 7.#$ 31):)1),')* :"1
:+.1,)**;
< =5.-),') ., 6.,) 7.#$ ("-)69 ., 7$.'$ &,:+.1 #>3)* +1) 3&,.*$)- ?@)5.,)9 ABBCD
Proposer behavior is compatible with selfishness, but also with preferences for fairness.
Evidence is in line with a model in which unfair types are punished (Levine, 1998)
Competition eliminates fairness?
Proposer competition game (Prasnikar & Roth, 1995)
9 proposers simultaneously make an offer x
1 responder decides whether to accept or reject the highest offer
If the responder rejects, all players receive zero
If the responder accepts, he receives x, the proposer who made the highest offer receives 10 - x, and the other proposers receive zero
Prediction (smallest monetary unit 0.05):
The responder accepts every positive offer
All proposers offer 9.95, or
At least 2 proposers offer 10
Competition and fairness: results
High offers from the beginning (average 8.90)
Competition is important
Quick convergence to the equilibrium
There are fair outcomes in the UG and very unfair outcomes in this proposer competition game
How can we reconcile the conflicting evidence?
Theory
Must explain why players reject positive offers
Must explain why randomly generated offers are accepted more often
Must explain why competing proposers give everything to a single responder
Two popular theories:
Inequality aversion (Fehr and Schmidt, 1999)
People feel envious when others have more than them
People feel guilty when they have more than others
Fairness equilibrium (Rabin, 1993)
Intentions-based
Behave nicely towards those that treat them nicely
Behave meanly towards those who behave meanly towards them
Inequality-Aversion
For some allocation x = {x_1, ..., x_n} between n people,

U_i(x) = x_i − (α_i/(n−1)) · Σ_{k≠i} max{x_k − x_i, 0} − (β_i/(n−1)) · Σ_{k≠i} max{x_i − x_k, 0}

β: guilt measure (0 ≤ β_i < 1)
α: envy measure (α_i ≥ β_i)
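A small Python sketch of this utility function may help; the allocation and the α, β values below are made-up illustrations, not estimates from the experiments.

```python
# Fehr-Schmidt inequity-aversion utility for player i in allocation x.
# alpha weights disadvantageous inequality (envy), beta weights advantageous
# inequality (guilt). Parameter values and the allocation are illustrative.

def fehr_schmidt_utility(x, i, alpha, beta):
    n = len(x)
    envy = sum(max(x[k] - x[i], 0) for k in range(n) if k != i)
    guilt = sum(max(x[i] - x[k], 0) for k in range(n) if k != i)
    return x[i] - alpha * envy / (n - 1) - beta * guilt / (n - 1)

# Example: a 7/3 split of a pie of 10 (player 0 is the proposer).
print(fehr_schmidt_utility([7, 3], i=1, alpha=1.0, beta=0.5))  # 3 - 1.0*4 = -1.0
print(fehr_schmidt_utility([7, 3], i=0, alpha=1.0, beta=0.5))  # 7 - 0.5*4 =  5.0
```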
Ultimatum Game
In the two-player case (players i and j), the utility reduces to:

U_i(x) = x_i − α_i (x_j − x_i)   if x_j − x_i ≥ 0
U_i(x) = x_i − β_i (x_i − x_j)   if x_j − x_i < 0

Responders reject shares less than α_R / (1 + 2α_R)
Proposers: depends on β and the distribution of responders' preferences
But with some degree of inequity aversion among responders, proposers offer positive amounts
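A quick numerical check of this threshold may be useful: a responder offered share s of a unit pie gets utility s − α_R(1 − 2s) from accepting (for s ≤ 1/2) and 0 from rejecting, so she rejects exactly when s < α_R/(1 + 2α_R). The envy parameter below is an arbitrary illustration.

```python
# Responder side of the ultimatum game under Fehr-Schmidt preferences.
# For shares s <= 1/2 only the envy term matters; alpha_R is illustrative.

alpha_R = 0.8
threshold = alpha_R / (1 + 2 * alpha_R)   # rejection threshold, here ~0.308

def responder_accepts(share, alpha):
    # utility of accepting: share - alpha * (proposer's payoff advantage)
    disadvantage = max((1 - share) - share, 0)   # zero once share > 1/2
    return share - alpha * disadvantage >= 0     # rejecting yields 0 for both

print(round(threshold, 3))                # 0.308
print(responder_accepts(0.30, alpha_R))   # False: just below the threshold
print(responder_accepts(0.35, alpha_R))   # True: above the threshold
```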
Competition
This model is still consistent with proposers giving the whole share to the responder when there is competition
Suppose one proposer is outbid by another -> he feels envy
But if envy > guilt (α > β), the best response is to raise the bid
Fairness Equilibrium
Intentions matter
if a player is nice to you, you are nice to them
if a player is mean to you, you are mean to them
Modeled as beliefs affecting your utility
Fairness of player 1 towards player 2:

f_1(a_1, b_2) = [π_2(b_2, a_1) − π_2^fair(b_2)] / [π_2^max(b_2) − π_2^min(b_2)]

where b_2 is 1's belief about 2's action
Perceived kindness f̃ (player 1's belief about player 2's kindness) enters her utility function
Reciprocity
Rabin's fairness model is capable of capturing the negative reciprocity we observe in ultimatum offers
Responders reject low offers only when the proposer chose the unfair split
(not when it was the best available option, or when it was randomly chosen)
Public Goods
Definition: non-rivalrous / non-excludable (Samuelson, 1954)
Problem: free riding!
Why?
A. Smith (1776): street lamps
One person's enjoyment does not detract from another person's enjoyment
Can't charge every person for the amount they use
More generally: cooperation problems
Cooperative hunting and warfare (important during human evolution)
Exploitation of common pool resources
Clean environment
Teamwork in organizations
Collective action (demonstrations, fighting a dictatorship)
Voting
Basic economic problem
Cooperative behavior has a positive externality.
Hence, the private marginal benefit is smaller than the social marginal benefit -> under-provision relative to the efficient level.
A public good game
n players
Each contributes x_i out of an endowment
Contribution costs c(x)
Total contributions are converted to output per capita o(X), where X = Σ_i x_i
Utility: U_i = −x_i + o(X)
o′(·) is also called the marginal per capita return (MPCR)
Simple linear case: U_i = −x_i + mX
Individually rational strategy:
Corner solution: invest all if m > 1, else nothing
Efficient solution (collectively rational):
Total utility U_t = Σ_i U_i = −Σ_i x_i + m·n·X
dU_t/dx_i = −1 + m·n
Invest all if m > 1/n, else nothing
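A short Python sketch of the linear case shows the two predictions side by side: free-riding is individually rational because m < 1, while full contribution is collectively rational because m > 1/n. The parameters (n = 4, m = 0.4, endowment y = 20) mirror the experiments discussed below, and the endowment is written explicitly, so U_i = (y − x_i) + mX.

```python
# Linear public goods game: U_i = (y - x_i) + m * X, where X is the sum of
# contributions and 1/n < m < 1. Parameters are assumed for illustration.

n, m, y = 4, 0.4, 20

def payoff(i, contributions):
    return (y - contributions[i]) + m * sum(contributions)

# Individual incentive: with m < 1, keeping a token beats contributing it,
# e.g. when the other three contribute fully:
print(payoff(0, [0, 20, 20, 20]))    # 44.0  (free-ride)
print(payoff(0, [20, 20, 20, 20]))   # 32.0  (contribute fully)

# Group outcome: with m > 1/n, full contribution maximises the total payoff.
print(sum(payoff(i, [0, 0, 0, 0]) for i in range(n)))      # 80.0
print(sum(payoff(i, [20, 20, 20, 20]) for i in range(n)))  # 128.0
```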
Public goods: Experimental results
Herrmann et al. (2008)
Science
N=4 MPCR = 0.4 y = 20
Partner design
Contributions start
relatively high
Fall over time
Culture obviously matters
Group size
!"#$% '()*
!"#!$%&!'( *$+,#
-%.!
/012
+,
+-
./,
./-
+
+
./
./
/01
/023
/01
/023
4'556 578 95:;*" <.=>>? @AB
Issac and Walker (1988 QJE)
!"#$% '()*
!"#!$%&!'( *$+,#
-%.!
/012
+,
+-
./,
./-
+
+
./
./
/01
/023
/01
/023
4'556 578 95:;*" <.=>>? @AB
Mitigating group size effects
In minimum effort games:
N people choose an effort level; the outcome depends on the smallest effort

U_i = min_j {x_j} − c·x_i,   c < 1

Any common effort level is a Nash equilibrium (illustrated in the sketch below)
The greater n, the lower the effort
Weber (2006), "Managing growth to achieve efficient coordination in large groups", AER
Add people one by one to the group
Effort remains much higher than if you had started off with a big group
In public goods?
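The numerical sketch below (with an arbitrary cost parameter and effort level) illustrates why any common effort level is a Nash equilibrium of the minimum effort game: shading effort saves only c < 1 per unit but lowers the minimum one-for-one, while extra effort is pure cost.

```python
# Minimum effort game: U_i = min_j{x_j} - c * x_i, with c < 1.
# The cost parameter and the common effort level are illustrative.

c = 0.5

def payoff(own_effort, others_efforts):
    return min([own_effort] + others_efforts) - c * own_effort

e = 7                         # common effort level of the other players
others = [e, e, e]
print(payoff(e, others))      # 3.5  matching the group is a best response
print(payoff(e - 1, others))  # 3.0  shading lowers the minimum by 1, saves only c
print(payoff(e + 1, others))  # 3.0  extra effort is wasted: the minimum is unchanged
```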
The big question
People in real societies do seem to be cooperating (to various
degrees)
How can this happen?
Punishment (as in experiment)
Social norms?
Genetic predisposition to cooperate, against individual
rationality?
Communication?
Public goods with punishment
Herrmann et al. (2008)
Science
N=4 MPCR = 0.4 y = 20
Partner design
Punishment 3 times
costlier to punisher than
punished
Contributions start
relatively high and
remain there
Sometimes even go up!
Culture obviously
matters again
Why do people cooperate?
Strategic cooperation (Kreps et al., JET 1982)
There are strategic (rational) and tit-for-tat players.
Strategic players cooperate (except in the final period) if they believe they are matched with tit-for-tat players.
Strategic players mimic tit-for-tat players (i.e. they cooperate) to induce other strategic players to cooperate.
Holds for certain parameter values.
Test? (e.g. Fehr & Gächter 2000, Croson 1996, Andreoni 1988)
Social preferences
Altruism, warm glow, efficiency-seeking motives
Conditional cooperation, reciprocity
Maladaptation
Strategic cooperation: partners vs
strangers
!"##$%&'()"% &% +$,-&' .""/0
!"#$%&#'% )1 !"##$%&'()"% ,23422% +25&"/0
67 5()"%(-&38 '"##"% 9%"4-2/.2: %" 2;2'3 "7
'"##$%&'()"%< =">


Fehr & Gächter (2000), AER
parameters: N=4, MPCR = 0.4, y = 20
6 partner groups / 2 stranger sessions / with 6 groups each
Why does cooperation decline over time?
Endogenous errors?
More on that later
Strategic cooperation if group composition is constant?
Social preferences: conditional cooperation
Subjects are conditionally cooperative and learn that there are
free-riders in the group.
As a response they punish other group members by choosing
lower cooperation levels.
How to examine conditional cooperation
How does contribution vary over time:
contribution(t)=f(contribution(t-1))
Problem: how can we disentangle the general decline of cooperation from conditional cooperation?
Look at whether changes in contributions depend on whether the others' contributions were above or below one's own contribution (Keser & van Winden, 2000)
Ask subjects for a belief about the other players' contributions. Does the contribution depend on the belief? (Croson, 1998)
Problem: false consensus effect (assuming that what I do is normal)
Allow the correction of the decision.
Kurzban & Houser (2002); Levati & Neugebauer (2001); Güth, Levati & Stiehler (2002)
Problem: there is an incentive to choose higher contributions for strategic reasons
Direct evidence of conditional cooperation
Fischbacher, Gächter & Fehr (2001)
One-shot game
Subjects choose...
An unconditional contribution
A conditional contribution, i.e., for every given average contribution of the other members they decide how much to contribute.
At the end one player is randomly chosen. For her the contribution schedule is payment-relevant; for the other three members the unconditional contributions are payment-relevant.
A selfish player is predicted to always choose a conditional contribution of zero.
Note that a selfish player may have an incentive to choose a positive unconditional contribution if she believes that others are conditionally cooperative.
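A minimal sketch of how payment-relevant contributions are determined under this design; the group size, endowment, MPCR (n = 4, y = 20, m = 0.4) and the rounding of the others' average are assumptions for illustration, not the paper's exact implementation.

```python
import random

# Sketch of the strategy-method design: one randomly chosen group member is paid
# according to her conditional schedule, evaluated at the (rounded) average
# unconditional contribution of the other members; everyone else is paid
# according to their unconditional contribution. Parameters are illustrative.

n, m, y = 4, 0.4, 20

def realised_contributions(unconditional, schedules, chosen):
    """unconditional: list of n contributions; schedules: list of n dicts mapping
    each possible rounded average of the others' contributions to a contribution."""
    others_avg = round(sum(c for i, c in enumerate(unconditional) if i != chosen) / (n - 1))
    contributions = list(unconditional)
    contributions[chosen] = schedules[chosen][others_avg]
    return contributions

def payoff(i, contributions):
    return (y - contributions[i]) + m * sum(contributions)

# Example: every subject submits a perfectly conditionally cooperative schedule
# (match the others' average) plus some unconditional contribution.
schedules = [{avg: avg for avg in range(y + 1)} for _ in range(n)]
unconditional = [10, 5, 0, 20]
chosen = random.randrange(n)
contribs = realised_contributions(unconditional, schedules, chosen)
print(chosen, contribs, [round(payoff(i, contribs), 1) for i in range(n)])
```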
Results
Unconditional cooperation is virtually absent.
Heterogeneity:
Roughly half of the subjects are conditional cooperators
Roughly one third are selfish
A minority has a "hump-shaped" contribution schedule
Question: can the observed pattern of conditional cooperation explain the unraveling of cooperation?
Assume adaptive expectations: subjects believe that the other group members behave in the same way as in the previous period.
This implies that over time the conditional cooperators contribute little, although they are not selfish.
This result holds qualitatively for any kind of adaptive expectations.
References
Andreoni, J., M. Castillo, and R. Petrie (2003), "What Do Bargainers' Preferences Look Like? Experiments with a Convex Ultimatum Game," American Economic Review, 93(3), 672-685.
Andreoni, J. and J. H. Miller (2002), "Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism," Econometrica, 70(2), 737-753.
Berg, J. E., J. Dickhaut, and K. McCabe (1995), "Trust, Reciprocity, and Social History," Games and Economic Behavior, 10, 122-142.
Falk, Armin (2007), "Gift Exchange in the Field," Econometrica, 75(5), 1501-1511.
Fisman, R., S. Kariv, and D. Markovits (2007), "Individual Preferences for Giving," American Economic Review, 97(5), 1858-1876.
Cox, J. C. (2004), "How to Identify Trust and Reciprocity," Games and Economic Behavior, 46, 260-281.
Güth, W., R. Schmittberger, and B. Schwarze (1982), "An Experimental Analysis of Ultimatum Bargaining," Journal of Economic Behavior and Organization, 3, 367-388.
