Reducing Discrimination With Reviews in The Sharing Economy: Evidence From Field Experiments On Airbnb
Management Science
MANAGEMENT SCIENCE
Articles in Advance, pp. 1–24
http://pubsonline.informs.org/journal/mnsc/ ISSN 0025-1909 (print), ISSN 1526-5501 (online)
Received: December 8, 2016
Revised: October 25, 2017; August 4, 2018; November 10, 2018
Accepted: November 26, 2018
Published Online in Articles in Advance: August 1, 2019
https://doi.org/10.1287/mnsc.2018.3273
Copyright: © 2019 INFORMS

Abstract. Recent research has found widespread discrimination by hosts against guests of certain races in online marketplaces. In this paper, we explore ways to reduce such discrimination using online reputation systems. We conducted four randomized field experiments among 1,801 hosts on Airbnb by creating fictitious guest accounts and sending accommodation requests to them. We find that requests from guests with African American–sounding names are 19.2 percentage points less likely to be accepted than those with white-sounding names. However, a positive review posted on a guest's page significantly reduces discrimination: when guest accounts receive a positive review, the acceptance rates of guest accounts with white- and African American–sounding names are statistically indistinguishable. We further show that a nonpositive review and a blank review without any content can also help attenuate discrimination, but self-claimed information on tidiness and friendliness cannot reduce discrimination, which indicates the importance of encouraging credible peer-generated reviews. Our results offer direct and clear guidance for sharing-economy platforms to reduce discrimination.
Cui, Li, and Zhang: Reducing Discrimination with Reviews
Management Science, Articles in Advance, pp. 1–24, © 2019 INFORMS
(i.e., positive or nonpositive descriptions of a prior experience with a guest), credibility (i.e., peer-generated or self-claimed quality information),2 and the existence versus content of a review (i.e., the fact that a review exists on a guest profile versus what the host said in the review).

We conducted four randomized field experiments on Airbnb to address these questions. In each experiment, we manipulated both guests' race and the review (or self-claimed) information by employing a 2 × 2 design. In the first field experiment, we created eight fictitious guest accounts; four accounts do not have reviews, and the other four accounts have one positive review written by the same host at the same time. Within each treatment arm, four guest accounts differ only by name and otherwise are identical. Two guests have white-sounding names, and two have African American–sounding names.3 We then randomly assigned these guest accounts to Airbnb hosts in three major U.S. cities—Boston, Chicago, and Seattle—and sent out accommodation requests from our guest accounts to these hosts. We recorded hosts' reply messages and compared acceptance rates across guest accounts. Because we randomly assigned Airbnb hosts to guest accounts that differ only by names and reviews, the observed difference in acceptance rate is causally driven by race and review information. We refer to this experiment as the positive review experiment.

In the second field experiment, we created another eight fictitious guest accounts and repeated the previous experimental design with one change: the latter four accounts in the review condition received a nonpositive review instead of a positive review. We designate this experiment the nonpositive review experiment. In the third field experiment, we created another 16 guest accounts and repeated the experimental design with another change: all guest accounts lack reviews, and the latter eight guests self-claim to be neat and friendly in their accommodation request messages. This enabled us to test whether self-claimed and unverified information can reduce discrimination; we refer to this experiment as the self-claimed information experiment. In the last field experiment, we again created 16 new accounts and repeated the experimental design with one modification: each of the latter eight guest accounts in the review condition has one blank review without content. This setup allows us to separate the effect of a review's existence from its content. We denote this experiment as the blank review experiment. We conducted the four experiments in September 2016, October and November 2016, July and August 2017, and March and April 2018, respectively.

Our positive review experiment suggests that discrimination exists when guest accounts have no review: the average acceptance rates of guests with white- and African American–sounding names are 47.9% and 28.8%, respectively, and the difference is 19.2 percentage points (p-value 0.0002). This result is consistent with Edelman et al. (2017), which finds that guests with white-sounding names are more likely to be accepted compared with guests with African American–sounding names.4 However, when there is one positive review, discrimination is significantly reduced: the acceptance rate is 56.2% for guests with white-sounding names and 58.1% for guests with African American–sounding names, and the difference between acceptance rates for white and African American guests is statistically indistinguishable (p-value 0.8774). Moreover, irrespective of a guest's race, the acceptance rate is higher when the guest has a positive review.

The remaining three experiments demonstrate whether different types of information can help attenuate discrimination. In particular, our nonpositive review experiment suggests that a nonpositive review can significantly reduce discrimination: in the absence of reviews, guests with white-sounding names are 21.4 percentage points more likely to be accepted than guests with African American–sounding names (p-value 0.0287); when there is a nonpositive review, the acceptance difference between white guests and African American guests becomes statistically indistinguishable (p-value 0.6813). Moreover, the blank review experiment also suggests that the existence of a blank review statistically significantly reduces discrimination. Although both nonpositive and blank reviews can reduce discrimination, our self-claimed information experiment shows that the self-claimed information by guests themselves on tidiness and friendliness fails to reduce discrimination; guests with white-sounding names are 12.8 percentage points more likely to be accepted compared with guests with African American–sounding names (p-value 0.019) even with self-claimed information, which is statistically indistinguishable from the gap without self-claimed information.

Our paper contributes to the emerging literature on marketplace innovation by providing evidence that peer-generated reviews can reduce discrimination in the sharing economy. Although several recent studies have documented evidence of discriminatory practices on sharing economy platforms, such as Airbnb, Uber, and Lyft, none provide concrete methods to mitigate these actions, and, to the best of our knowledge, our paper is the first to do so. We show that different types of reviews—positive, nonpositive, and even blank reviews—can reduce discrimination. Moreover, we show that, in contrast to peer-generated reviews, self-claimed information cannot reduce discrimination. This result highlights that verifiability and credibility of a review (i.e., a review is linked to a valid transaction on the platform)
is crucial for reducing discrimination. Our findings have several implications for sharing-economy platform owners. To attenuate discriminatory behavior and to improve operational efficiency, platform owners should better leverage online reputation systems to encourage and facilitate information sharing among participants. For example, this may be achieved by sending reminders or offering incentives to users to write reviews of one another, especially when one of them is a first-time user. Platform owners should also carefully validate reviews on their platforms by linking reviews to transactions to successfully leverage online reviews to reduce discrimination in the sharing-economy platforms.

2. Literature Review
Our research contributes to the literature on discrimination, online reputation systems, information sharing, and operational transparency in service operations and marketplace operations.

2.1. Discrimination
Our paper examines the existence of racial discrimination in the sharing-economy context and investigates how to correct it with field experiments. Although discrimination has been widely documented in the literature, most studies do not provide implementable mechanisms to reduce discrimination with very few exceptions that generate mixed and opposing results (see Bertrand and Duflo 2016 and references therein). Bertrand and Mullainathan (2004) and Nunley et al. (2014) investigate the impact of information revealed in resumes on discrimination in labor markets and conclude that additional information revealed in resumes cannot reduce discrimination in labor markets. In contrast, Kaas and Manger (2012) found that discrimination is significantly reduced when a reference letter with soft information regarding productivity is included in a job application. In our paper, we find that peer-generated reviews can reduce discrimination, but self-claimed information cannot. This result helps reconcile previous findings in the literature that letters of reference with information on productivity reduce discrimination in labor markets (Kaas and Manger 2012), but self-revealed information in resumes does not (Bertrand and Mullainathan 2004, Nunley et al. 2014).

2.2. Online Reputation Systems
Our paper relates closely to the literature that studies the value of online reputation systems. Past research has found that a seller's good reputation increases the transaction price and success rate in peer-to-peer online auction markets, such as eBay (Bajari and Hortacsu 2003). Moreno and Terwiesch (2014) show similar findings in an online service auction market, in which service providers bid for projects posted by buyers. The authors find that buyers are willing to accept higher bids from service providers with comparatively better reputations. Using laboratory experiments, Bolton et al. (2004) find that online feedback substantially improves efficiency in distributing surplus among buyers and sellers. We complement this line of literature by showing that online reputation systems can reduce social bias (i.e., racial discrimination), which, in turn, leads to more successful transactions in online peer-to-peer marketplaces.

Moreover, we investigate what aspects of online reviews are critical in reducing discrimination. Previous literature shows that the credibility of reviews affects whether they are effective in shaping consumers' attitudes and influencing their purchasing decisions (Cheung et al. 2012, Luca 2016). Consistent with this literature, we find reviews on Airbnb, which provide credible information about guests, reduce hosts' biases against minorities, but information claimed by guests themselves does not. Although a review's credibility is critical in reducing discrimination, the actual content and sentiment of the review is not. Specifically, we find that a positive review, a nonpositive review, and a blank review all reduce discrimination. In other words, the existence of a review alone provides credible information regarding the legitimacy of a guest and establishes an inclusive norm that a guest should be accepted irrespective of the guest's race. This can be partially because reviews on Airbnb are overwhelmingly positive as illustrated by recent studies of Airbnb's review system (Zervas et al. 2015, Fradkin 2016). Consequently, hosts may not pay as much attention to the review's content as its presence.

2.3. Operational Transparency and Information Sharing in Service Operations
A stream of literature in service operations studies the effect of operational transparency and information sharing in service industries. Buell and Norton (2011) and Buell et al. (2016) show how transparency of service processes helps improve customer satisfaction and appreciation of the service provided. Several studies have shown how sharing availability information helps customers make more informed purchasing decisions (Allon and Bassamboo 2011, Allon et al. 2011, Gallino and Moreno 2014, Cui et al. 2017). A recurring theme in this literature is the verifiability of the information shared. When the information provided is nonverifiable, a cheap-talk equilibrium may emerge by which customers ignore the information (Allon and Bassamboo 2011). Allon et al. (2011) investigate conditions under which firms can credibly communicate nonverifiable information in a queueing service setting. In this case, provision of even nonverifiable information can improve company profits and customer utility.
Gallino and Moreno (2014) investigate the impact of sharing availability information empirically in a retail setting and find that verifiable availability information provided online increases off-line sales and promotes channel integration.

Our paper shows that increased information transparency provided through online reputation systems can help reduce behavioral biases (i.e., racial discrimination) in online service marketplaces. We also show that an important reason why an online reputation system can effectively reduce discrimination is that it is verified and credible. Our results highlight the need for transparent marketplaces: encouraging sharing rather than concealing information among market participants will help reduce discriminatory behavior and improve market efficiency.

2.4. Marketplace Operations
We study how to leverage peer-generated reviews to improve market efficiency in a sharing-economy marketplace. In this respect, our work is related to research in operations management that studies the design and efficiency of sharing-economy business models in various contexts, including bike sharing (Kabra et al. 2015), ride sharing (Cachon et al. 2015, Bimpikis et al. 2016, Taylor 2016), and home sharing (Fradkin 2016, Li et al. 2016, Zervas et al. 2016, Li and Netessine 2018). A common theme of these papers is how various operation levers, such as pricing and matching, can be used to improve market efficiency and social welfare. For example, Cachon et al. (2015) study how dynamic pricing can be used to improve social welfare in a ride-sharing context, and Bimpikis et al. (2016) demonstrate how spatial balance of a ride-sharing system can improve its market efficiency and how this balance can be achieved through compensation schemes. Our paper investigates an important but overlooked factor in the sharing economy that could substantially affect market efficiency and social welfare: discriminatory behavior. Our paper sheds light on how a marketplace can be designed to encourage information sharing so that market inefficiency caused by discrimination is reduced.

3. Hypotheses Development
In this section, we develop hypotheses on discrimination in the sharing economy. In particular, we focus on online reviews as a means of reducing discrimination in sharing-economy marketplaces. We also investigate several moderating factors to further understand what aspects of online reviews are critical in reducing discrimination.

3.1. Discrimination in the Sharing Economy
Discrimination against minority groups has been documented in past decades using field evidence in various markets (see Bertrand and Duflo 2016 for a comprehensive review), including labor (e.g., Bertrand and Mullainathan 2004), rentals (e.g., Carpusor and Loges 2006, Hanson and Hawley 2011, Ewens et al. 2014), retail (e.g., Ayres and Siegelman 1995), healthcare (Williams and Mohammed 2009), and education markets (e.g., Milkman et al. 2012). Consistent with recent studies (e.g., Ge et al. 2016, Edelman et al. 2017), we hypothesize discrimination exists in the sharing economy; that is, members of a minority group are treated less favorably than members of a majority group with identical characteristics. Following seminal work by Bertrand and Mullainathan (2004), we use the correspondence method to test and measure discrimination, and we use names to signal minority status (e.g., African American– or white-sounding names). Specifically, we offer the following hypothesis:

Hypothesis 1. Discrimination exists in the sharing economy: guests with African American–sounding names experience lower acceptance rates on Airbnb than guests with white-sounding names.

3.2. Using Online Reviews to Reduce Discrimination in the Sharing Economy
Although past studies have documented the existence of discrimination in various contexts, as pointed out by Bertrand and Duflo (2016), most of these studies do not provide implementable mechanisms to reduce discrimination, which is the focus of this paper. In particular, we hypothesize that online reviews will help reduce discriminatory behavior in the context of a sharing economy. We discuss two primary reasons why reviews would help reduce discrimination.

First, there is a stream of literature that believes discrimination is driven, at least partially, by a lack of information (e.g., Arrow 1973, Kaas and Manger 2012). In home-rental sharing marketplaces, hosts may decide to accept or reject a request based on perceived guest quality, such as safety, reliability, tidiness, and friendliness. Some risks of home-sharing rentals can lead to severe consequences, even though they may be rare, such as personal and property security risks (theft, personal harassment, property damages, etc.) (Vora 2017, NewsHub 2018). Because the quality of a guest is not fully observable, a host may infer quality based on limited information about the guest, such as name and profile photo, and the host's prior belief about the average quality of a guest of a certain type. For example, if a host believes it is less safe to host African American guests, on average, the host may decide to reject a request from an African American guest lacking specific information about the guest. However, if the host sees a positive review of the guest from other hosts, the host can then update the host's belief about the guest's quality and may be more likely to accept the guest's request.
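This belief-updating logic can be made concrete with a stylized numerical sketch (our illustration, not the paper's formal model, which appears in Appendix A; all numbers below are hypothetical):

```python
def posterior_good(prior_good, p_review_if_good, p_review_if_bad):
    """Bayes' rule: a host's updated belief that a guest is 'good'
    after seeing a positive review on the guest's profile."""
    joint_good = prior_good * p_review_if_good
    joint_bad = (1 - prior_good) * p_review_if_bad
    return joint_good / (joint_good + joint_bad)

# Hypothetical numbers: the host's prior that an unknown guest is good
# is 0.4; good guests carry a positive review with probability 0.8,
# bad guests with probability only 0.2.
belief = posterior_good(0.4, 0.8, 0.2)
print(round(belief, 3))  # belief rises from 0.40 to about 0.73
```

If the host accepts whenever this belief clears some fixed threshold (say 0.6), the review flips the decision from reject to accept regardless of the group-based prior attached to the guest, which is the sense in which a credible signal can crowd out race-based priors.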
In other words, online reviews of guests serve as a quality signal that allows hosts to update their beliefs about a prospective guest's quality and make more informed decisions, thus reducing discriminatory biases. We formalize this in a theoretical model in Appendix A.

Another reason why online reviews may reduce discrimination is that they help establish a more inclusive normative behavior: that one should not base acceptance decisions on a guest's race. The fact that other hosts have accepted the guest regardless of the guest's race and have written a positive review can help establish a nondiscriminatory norm in the sharing-economy community. The literature on social norms (Aarts and Dijksterhuis 2003, Cialdini and Goldstein 2004, Schultz et al. 2007) suggests that people tend to follow the normative behavior. In this case, the existence of a social norm allows hosts to accept a guest more easily when others have done so too. In summary, we hypothesize that reviews, particularly reviews with positive information, attenuate discriminatory behavior on sharing-economy platforms.

Hypothesis 2. Positive reviews reduce discrimination: hosts' positive reviews of guests on Airbnb can reduce the acceptance-rate gap between African American guests and white guests.

3.3. Characteristics of Reviews
In this section, we examine what aspects of online reviews are most critical in reducing discrimination. In particular, we examine how a review's credibility, sentiment, and existence affect its ability to reduce discrimination.

3.3.1. Review Sentiment. Even though 96% of reviews posted on Airbnb contain positive information that compliments guests (Zervas et al. 2015, Fradkin 2016), occasionally there are nonpositive reviews that criticize an experience with a guest, for example, the guest did not keep the property clean, was not respectful of the neighborhood community, etc. Therefore, we want to test how the sentiment of a review affects its ability to reduce discrimination.

When a review provides positive information about a guest, hosts put more emphasis on the review than on race when making their acceptance decisions. As a result, guests of either race may experience a higher acceptance rate, and the discrimination gap shrinks. However, when a review provides nonpositive information about a guest, it is less clear how it may affect the host's acceptance decision. On one hand, the nonpositive review may significantly lower expectations of the quality of either type of guest (African American and white). As a result, the discrimination gap may also shrink. Moreover, the nonpositive review can still demonstrate that other hosts have accepted these guests and, therefore, establish a nondiscriminatory social norm. On the other hand, the nonpositive review may exacerbate a host's prejudice against minority guests because of, for example, confirmation bias, thereby leading to wider discrimination gaps (Myrdal 1944, Wason and Johnson-Laird 1972). We hypothesize the following:

Hypothesis 3. Nonpositive reviews reduce discrimination: nonpositive reviews of guests on Airbnb reduce the acceptance rate gap between African American and white guests.

3.3.2. Review Credibility. The literature shows that the credibility of reviews affects whether they are effective in conveying quality (Cheung et al. 2012, Luca 2016). Airbnb has taken several steps to ensure the credibility of reviews posted on its site. First, hosts and guests can only leave a review after the arranged stay is completed. Second, Airbnb actively monitors review validity and quality and may remove or alter a review when it violates the platform's guidelines aimed at promoting honesty and transparency.5 As a result, reviews on Airbnb provide relevant, useful, and credible information.

On Airbnb, users can also self-identify to be trustworthy by providing information in their accommodation requests. For example, a guest could claim in the message sent to hosts that the guest is tidy, friendly, and reliable. However, this information is usually nonverifiable from the host's perspective. On one hand, such information may be helpful in reducing discrimination because this self-claimed information can serve as a signal that allows hosts to update their belief of a prospective guest's quality and make more informed decisions, thus reducing discriminatory biases. On the other hand, the validity of the self-claimed quality cannot be verified, and hosts might not trust this information (Pavlou and Gefen 2004). We hypothesize the following:

Hypothesis 4. Self-claimed information reduces discrimination: self-claimed quality information in correspondence messages reduces the acceptance rate gap between African American and white guests.

3.3.3. Existence of a Review. Typical sharing-economy platforms, such as Airbnb and Uber, stipulate that participants can only leave a review after a transaction is completed. Therefore, a review on these platforms conveys two types of information: First, on Airbnb, the existence of a review conveys that the guest has stayed with a host before, and the host has written a review. Second, the content of the review shows how satisfied the host is about the guest and the stay. We would like to separate the effect of existence of a review from that of the content. In other words, we would like to test
whether the existence of a review alone (a blank review without any content) would reduce discrimination or not. The fact that a review exists on a guest's profile provides several important quality signals about the guest. First, the guest has prior experience with the platform and with other hosts; therefore, the guest might be more familiar with the process than a first-time guest. Second, the guest's identity is truthful; otherwise, the host would have reported the incident to the platform operator, and the guest account would have been suspended. Third, there was no serious breach of the contract that caused personal or property security damages; otherwise, the host would have reported the incident or sought legal settlement, and the guest account would have been suspended. Thus, even without explicit discussion of a guest's manner or personality, reviews provide valuable information about the guest's legitimacy. Besides the valuable information a review's existence provides, it also helps to establish a norm of inclusive behavior because the guest has been accepted by other community members. Therefore, we hypothesize the following:

Hypothesis 5. Review existence alone reduces discrimination: blank guest reviews reduce the acceptance rate gap between African American and white guests.

4. About Airbnb
We conducted the field experiments on Airbnb.com. Airbnb is a sharing-economy marketplace that connects hosts who have empty rooms to potential renters. Airbnb defines itself as "a trusted community marketplace for people to list, discover, and book unique accommodations around the world." Hosts on Airbnb can list and rent their properties for a processing fee. Founded in 2008, Airbnb's marketplace has expanded exponentially in the last few years. As of 2016, Airbnb had more than two million listings, which have collectively served 60 million guests in 191 countries.6

On Airbnb, hosts create profiles for themselves and their properties. Prospective hosts can list their entire apartments or just spare bedrooms on the Airbnb platform, choose their own prices, and set guidelines for guests. Information on each host page includes a profile photo, listing information, rental requirements, reviews written by previous guests who have stayed at the host's properties, and Airbnb-certified contact information. Figure C.1 shows the information displayed on a typical Airbnb listing. Guests also need to create personal profiles to request a booking. Each guest page contains a profile photo, the guest's first name, reviews written by hosts about past stays, and Airbnb-certified contact information. Figure C.2 shows a typical Airbnb guest profile page. Airbnb encourages users to verify their identification, email address, phone number, and social network profiles.

When they need an accommodation, guests can search for and choose from available listings at the destination for the selected travel dates. If guests have questions regarding a listing's rental requirements, they can contact the host by clicking the "Contact Host" button. Upon receiving contact information, hosts can preapprove a guest. Guests can also click the "Request to Book" button to directly send a booking request. Hosts can approve or reject this booking request. Hosts may ask guests to provide further information regarding their travel purpose or contact verification. Once guests and hosts have confirmed travel dates and expectations, guests submit their payment to Airbnb, which holds the payment until 24 hours after the reservation begins.

4.1. Reviews on Airbnb
Airbnb has built a reputation system that enables both guests and hosts to post reviews of each stay on the platform. Once a transaction is complete, the guest and host have 14 days to leave a review for each other. If both of them leave a review within 14 days, they can see each other's review immediately after the review is created. If only one of them leaves a review, this review can only be seen 14 days after the checkout date. In creating a review, users are given an opportunity to rate the experience and then write about it in their own words. Reviews on hosts, which are displayed on host profiles, contain both the rating as well as the textual content of the review. However, unlike other platforms, reviews on guests displayed on guest profiles only contain the textual content. Figure C.2 shows an example of a guest profile, and it is evident that reviews on guests are displayed on an easily visible spot on their profiles. Because Airbnb monitors the transaction and only allows users to create reviews after a successful transaction, past reviews serve as a credible signal of user quality. When it includes a review, a guest's profile page proves that the guest has had at least one successful transaction with another host and is familiar with the accommodation process.

4.2. Platform Data
We collected data from all listings offered in several metropolitan areas with high Airbnb usage. This data comprises two types: listing characteristics and host characteristics. First, the listing characteristics include the type of room offered, number of bedrooms, number of reviews, and listing location. Second, we gathered the demographics of the host associated with each listing in our experiment. We employed research assistants to classify hosts' race (white, African American, Asian, Hispanic, Arab, unidentifiable), gender
(female, male, unidentifiable), and age (0–30, 30–45, 45–60, beyond 60, unidentifiable). We coded characteristics as "unidentifiable" when the photo has multiple people in it or the picture does not show a person. Specifically, we hired two graduate students as research assistants to evaluate each image; if there was a disagreement, we coded it manually.

5. Experimental Design
We conducted four 2 × 2 field experiments using listings from Boston, Chicago, Seattle, Austin, and Los Angeles. The experiment design details are presented in Table 1. The first field experiment tested the effect of positive reviews on discrimination. The remaining three experiments tested the effect of review sentiment, review credibility, and review existence without content in reducing discrimination. All experiments were conducted with the approval of the institutional review board, and we discuss how we protect experiment subjects in Appendix B.

5.1. Positive Review Experiment
We conducted the positive review experiment in September 2016. In this experiment, we created eight fictitious guest accounts with and without positive reviews under names that signal different races to test the existence of discrimination as well as the impact of one positive review on discrimination. To represent an average Airbnb account, all eight accounts have verified their email addresses and phone numbers (see Figure C.3(a) for an example).7 The first four fictitious guest accounts do not have reviews and are identical except for names: two accounts have African American–sounding names and two have white-sounding names.

All names were drawn from a pool of most frequently used white- and African American–sounding names (Bertrand and Mullainathan 2004) to signal African American or white race. To avoid potential confounds, we used multiple names in the experiment (Wells and Windschitl 1999). To validate the selected names, we performed an image search on Google and verified that most of the people who appear in the results are of a race consistent with the name we chose. We also conducted a survey on Amazon Mechanical Turk to ensure that average Airbnb users could recognize the race from the names. Table C.2 shows that more than 80% of the survey participants can correctly identify the race for all the eight names we used.

The second four fictitious guest accounts are identical to the prior four accounts except that they have one positive review written by the same host at
Table 1. Experiment Design Details

Experiment 1 (positive review). Design: white or African American names × no or positive review. Guest accounts per condition: two. Account verification: email address and phone number. White names: Scott Baker, Colin Murphy. African American names: DeAndre McCray, DeShawn Washington. Dates: September 4, 2016, to September 27, 2016. Cities: Chicago, Boston, Seattle. Planned sample size: 1,200. Actual sample size: 598.

Experiment 2 (nonpositive review). Design: white or African American names × no or nonpositive review. Guest accounts per condition: two. Account verification: email address and phone number. White names: Scott Baker, Colin Murphy. African American names: DeAndre McCray, DeShawn Washington. Dates: October 23, 2016, to November 21, 2016. Cities: Boston, Seattle. Planned sample size: 400. Actual sample size: 250.

Experiment 3 (self-claimed information). Design: white or African American names × no or self-claimed information. Guest accounts per condition: four. Account verification: email address and phone number. White names: Scott Mueller, Scott Baker, Colin Murphy, Colin Moore. African American names: DeAndre McCray, DeAndre Jackson, DeShawn Washington, Tyrone Washington. Dates: July 27, 2017, to August 12, 2017. Cities: Boston, Seattle, Austin. Planned sample size: 1,200. Actual sample size: 660.

Experiment 4 (blank review). Design: white or African American names × no or blank review. Guest accounts per condition: four. Account verification: email address, phone number, and government ID. White names: Scott Mueller, Scott Baker, Colin Murphy, Colin Moore. African American names: DeAndre McCray, DeAndre Jackson, DeShawn Washington, Tyrone Washington. Dates: March 5, 2018, to April 10, 2018. Cities: Los Angeles. Planned sample size: 500. Actual sample size: 293.
Notes. For the first experiment, the difference between the planned sample size and the actual sample size is due to listings’ unavailability and
Airbnb banning our accounts. In particular, the planned sample size is 1,200. Excluding the unsent listings resulting from suspended accounts
leaves us with 856 listings. In the other three experiments, because we had backup accounts in case the original accounts were suspended, the
difference between planned and actual sample size is only a result of listings’ unavailability. To avoid being suspended and ensure an adequate
sample size, we used more names in the other three experiments. Based on our observations, account suspension happens randomly across
different names (i.e., accounts with African American names are not more likely to be suspended) and searches on two famous Airbnb host
forums (i.e., airhostsforum.com and community.withairbnb.com) demonstrate that account suspension does not trigger hosts’ discussion of
accounts with the names used by us.
the same time. Figure 2(a) shows the content of this positive review. Note that reviews of guests on Airbnb contain only textual content, with no ratings; therefore, identical positive reviews represent reviews with identical positive content. To generate these identical positive reviews, we created one fictitious host account and used it to leave a review on each fictitious guest account. The fictitious host is named Scott and located in a Midwestern city.8 Because Airbnb allows hosts and guests to post a review of each other after the guest has checked out, we created a transaction between our host and each of our guest accounts on Airbnb, and we wrote an identical positive review of all guest accounts after the checkout date. Because Airbnb displays only the month and year when a review is written, the time of review is the same across our guest accounts as long as the review dates are within a month of each other. Because Airbnb requires a guest to upload a picture prior to contacting hosts, all guest accounts also include a scenery picture, as shown in Figure 1(a). We used a neutral-looking scenery photo to prevent the content of a photo from confounding our results.

We then randomly assigned Airbnb hosts in our sample to the fictitious guest accounts with different names (i.e., white- or African American–sounding names) and different reviews (i.e., no review or one review) and sent accommodation requests from our guest accounts to prospective hosts. Each prospective host was contacted at most once, to avoid a host seeing two identical messages from different guest accounts. If a host owned multiple properties, we inquired about only one of them. The accommodation requests were sent from guest accounts with and without reviews simultaneously to ensure comparability. We sent messages requesting a stay of two consecutive nights with check-in dates about three weeks from the date of initial contact.9 Figure 1(a) presents the content of a sent message. The name, city, and date information varied by request. Because our guest accounts are identical except for their names and number of reviews, by comparing acceptance rates across guests with different names and review conditions, we can test whether there exists significant discrimination and whether one positive review reduces discrimination.

After sending messages to hosts, we checked for the host's reply at least five times: about 5, 10, 24, and 48 hours, as well as 5 days, after the request was sent. When a host replied to our guests, we followed up with an immediate response stating that we had found another place to stay, so that the host did not hold inventory for our fictitious guests. Figure 1(b) presents the content of our reply. We recorded host responses within five days after sending requests. We coded each response into three categories: "decline" if the host declined the request; "accept" if the host accepted the request; and "further information" if the host asked for further information. Following Edelman et al. (2017), we focus on the "accept" response; our results do not change qualitatively if we switch to a broader definition of acceptance by considering "further information" as acceptance.

5.2. Other Experiments
5.2.1. Nonpositive Review Experiment. We conducted the nonpositive review experiment in October and November of 2016 to test how a nonpositive review affects discrimination. In this experiment, we created another eight fictitious guest accounts with identical profile pictures and the same profile setup as the previous experiment: white- or African American–sounding names, with a nonpositive review or none at all. As in the positive review experiment, all eight accounts had verified email addresses and phone numbers. The major difference from the previous experiment is that four accounts in this experiment had a nonpositive review instead of a positive review. Figure 2(b) shows the content of the nonpositive review. Similarly, we generated this review after creating a transaction between our fictitious host account (the same one used in the previous experiment) and the guest accounts; these nonpositive reviews on guest profiles do not contain ratings.
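The matching and response-coding protocol used across these experiments can be sketched as follows. This is a schematic illustration, not the authors' code: the function names, the fixed seed, and the `responses` mapping are invented for exposition, and in the study replies were read and coded manually.

```python
import random
from collections import Counter

# The four cells of a 2 x 2 experiment: race signal x review condition.
CONDITIONS = [
    ("white", "no_review"),
    ("white", "review"),
    ("african_american", "no_review"),
    ("african_american", "review"),
]

def assign_hosts(host_ids, seed=42):
    """Assign each host to exactly one cell, so no host is contacted twice."""
    rng = random.Random(seed)
    return {host: rng.choice(CONDITIONS) for host in host_ids}

def acceptance_rates(assignment, responses):
    """Acceptance rate per cell, out of all requests sent.

    `responses` maps host id -> "accept", "decline", or
    "further_information"; a host who never replies within the
    five-day window simply counts as not accepting.
    """
    sent, accepted = Counter(), Counter()
    for host, cell in assignment.items():
        sent[cell] += 1
        accepted[cell] += responses.get(host) == "accept"
    return {cell: accepted[cell] / sent[cell] for cell in sent}
```

Comparing the per-cell rates returned here is the comparison behind Table 2: randomization makes the cells identical in expectation except for the name and review manipulations.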
Figure 2. (Color online) Review Information Displayed on the Fictitious Guest’s Profile Page
Note. Because the location of the fictitious guest may indicate the location of the authors’ institutions, we removed it for the review process.
We then randomly assigned listings to the fictitious guest accounts and sent out accommodation requests to hosts in Boston and Seattle. The listings and hosts are mutually exclusive from those used in the previous experiment. We followed the same rules to select available hosts and dates from their calendars, reply to hosts, and record host responses. This experiment allows us to detect whether discrimination exists (at a different time with a different set of hosts) and whether a nonpositive review helps attenuate discrimination.

5.2.2. Self-Claimed Information Experiment. In July and August 2017, we conducted another field experiment to test whether self-claimed information from guests helps attenuate discrimination. In this experiment, we employed a 2 × 2 design and randomly assigned 16 fictitious guest accounts with verified phone numbers and email addresses to (a) white- or African American–sounding names and (b) no information or self-claimed information in the accommodation requests. The accounts in the no-information condition are similar to those in the no-review condition of the previous experiments: eight fictitious guest accounts—four with white-sounding names and four with African American–sounding names—were created without reviews, and their accommodation requests have the basic content as in Figure 1(a). In the self-claimed information condition, the eight fictitious guest accounts do not have a review, and their accommodation requests have the basic request plus the sentence, "I am a tidy and friendly person. I like to keep places clean and organized. Let me know if you have any questions." This additional sentence represents the self-claimed information from the guest. It signals the guest's tidiness and friendliness, which is consistent with the key information contained in the positive review shown in Figure 2(a). This experiment used a total of eight names (instead of four) and 16 accounts (instead of eight).
We then randomly matched a set of listings and the corresponding hosts from Boston, Seattle, and Austin to our fictitious guest accounts. Again, the listings and hosts were mutually exclusive from those in the first two experiments. We followed the same rules to select available hosts and dates from their calendars, sent requests, recorded hosts' responses, and measured acceptance rates. This experiment allows us to test whether discrimination exists and whether information provided by guests themselves about their tidiness and friendliness helps attenuate discrimination.

5.2.3. Blank Review Experiment. In March and April 2018, we conducted the fourth field experiment to test whether the existence of a review without content (i.e., a blank review) can help reduce discrimination. In this experiment, similar to the self-claimed information experiment, we employed a 2 × 2 design, created another 16 fictitious guest accounts, and randomly assigned four accounts to each condition. Each condition is a combination of account names (i.e., white- or African American–sounding names) and account review status (i.e., no review or a blank review). The major differences between this experiment and the previous positive and nonpositive review experiments are that (a) these 16 accounts have verified government IDs in addition to the verified email addresses and telephone numbers used in previous experiments, and (b) we used our fictitious host account to leave an identical blank review without content instead of a positive or nonpositive review. Figure 2(c) provides an example of this blank review. Similarly, we generated this review after creating a transaction between our fictitious host account (the same one used in the previous experiment) and the guest accounts; these blank reviews on guests do not contain ratings. We then randomly assigned 400 listings in Los Angeles to the fictitious guest accounts and sent out accommodation requests to the corresponding hosts.10 We followed the same rules to select available hosts and dates from their calendars, reply to hosts, and record host responses. This experiment allows us to detect whether discrimination existed in March and April 2018 in Los Angeles and whether a blank review helps reduce discrimination.

6. Main Results from the Positive Review Experiment
In this section, we show the results from the positive review experiment. Our findings confirm the existence of racial discrimination and reveal that positive reviews can help attenuate it.

6.1. Existence of Racial Discrimination
Panel A of Table 2 displays the summary statistics of listing availability and request acceptance rates for guest accounts with no review in the first experiment, stratified by guest race. Panel A shows that, without reviews, a guest with a white-sounding name is accepted with a 47.9% probability. By comparison, a guest with an African American–sounding name has only a 28.7% probability of being accepted. In other words, a guest with an African American–sounding name is 19.2 percentage points less likely to be accepted.

Using a nonparametric proportion test, we demonstrate that this 19.2-percentage-point difference in the acceptance rate is statistically significant. Panel A of Table 2 shows that the p-value of the proportion test is 0.0002.11 Given that hosts are randomly assigned to guest accounts and the guest accounts are identical except for names, the empirical evidence suggests that guest race causally affects the acceptance rates of their accommodation requests. In other words, racial discrimination exists.

To formally test the existence of racial discrimination, we use the following specification to analyze
Table 2. Response Summary Statistics by Guest Race in the Positive Review Experiment
how guest race affects the acceptance rate of a request while controlling for characteristics of the host, the listing, and the specific request:

Accept_i = f(α + β Race_i + X_i γ + ε_i).  (1)

We chose two specifications for the choice function f(·): the linear probability model and the logit model. X_i includes host characteristics (i.e., host gender and host race), listing characteristics (i.e., room type, number of bedrooms, number of reviews, and listing location), and request characteristics (i.e., the date and length of the request).

Columns (1) and (4) of Table 3 present the results based on the linear probability and logit models, respectively. The findings are robust: a guest with a white-sounding name has a higher acceptance rate. In particular, after controlling for host, listing, and request characteristics, being associated with a white-sounding name causally increases the acceptance rate by 20.0 percentage points. This result is consistent with and further confirms the finding of Edelman et al. (2017) that racial discrimination exists on Airbnb.

To summarize, the results from no-review guest accounts demonstrate that racial discrimination exists on Airbnb. Guests with white-sounding names are 19 percentage points more likely to be accepted than guests with African American–sounding names. This empirical evidence supports Hypothesis 1.

6.2. The Effect of One Positive Review
Panel B of Table 2 presents the summary statistics of listing availability and request acceptance rates for guest accounts with a positive review, stratified by guest race. Panel B shows that, with a positive review, racial discrimination is significantly attenuated: white and African American guests receive statistically indistinguishable acceptance rates (56.2% and 58.1%, respectively, with p-value 0.8774). This suggests that one positive review significantly attenuates discrimination on Airbnb.

A potential concern is whether this null result is driven by a lack of power in our experiment. Prior to the experiment, we conducted a power analysis based on the estimation results of Edelman et al. (2017) and selected sample sizes accordingly.12 We also conducted a postexperiment power analysis to show the power of detecting a significant discrimination effect given our sample size if a positive review does not reduce discrimination. In particular, given the discrimination effect size of 19.2 percentage points (based on the no-review condition) and the sample size of 222 (for the positive-review condition), we have 90% power to detect the discrimination at a 0.05 significance level and 95% power at a 0.1 significance level. This indicates that the null result with p-value 0.8774 observed in our experiment is unlikely to be due to a lack of power.13

As expected, a positive review provides credible information about guest quality and improves the acceptance rate for guests of both races. Guests with white-sounding names experience an increase from 47.9% to 56.2% (p-value 0.0968) in the acceptance rate, and guests with African American–sounding names experience a more substantial increase, from 28.7% to 58.1% (p-value < 0.00001). Figure 3 presents this change visually.

We then follow the same specification as in Equation (1) to formally analyze the effect of a review on discrimination while controlling for host, listing, and request characteristics. Columns (2) and (5) of Table 3 summarize the results. Both the linear and the logit models show that, in the presence of a positive review, the acceptance rate is the same across different races, so there is no sign of discrimination. To formally identify whether having one positive review significantly reduces discrimination
Table 3 columns: (1) Review = 0, (2) Review = 1, (3) All (linear probability model); (4) Review = 0, (5) Review = 1, (6) All (logit model).
Figure 3. (Color online) Results for the Positive Review Experiment

(Gelman and Stern 2006), we estimate this effect using the following specification:

Accept_i = f(α + β0 Race_i + β1 Review_i + β2 Review_i × Race_i + X_i γ + ε_i),  (2)

where Review_i = 1 indicates that guest account i has one positive review, and Review_i = 0 otherwise. Columns (3) and (6) of Table 3 display the results based on the linear probability and logit models, respectively. The coefficient on the interaction of having a positive review and having a white-sounding name is significantly negative, which suggests that a positive review causally reduces discrimination.

To summarize, we find that one positive review can significantly reduce racial discrimination. This empirical evidence supports Hypothesis 2.

6.3. Robustness of Results
We next show the robustness of our results: the impact of a review on discrimination is not driven by a particular subset of hosts or listings. We split our samples by listing and host characteristics and follow the logit regression specification in Equation (2) to test the existence of discrimination in each subsample. Table 4 presents the estimation results. Columns (1) and (2) of Table 4 show that discrimination exists for both low-review (LR) hosts and high-review (HR) hosts, and a positive review significantly reduces discrimination for both types of hosts. Low review is defined as the number of reviews being lower than the median (i.e., 13). We also split the sample by city and demonstrate that our results are robust across the three cities (i.e., Chicago, Boston, and Seattle), as shown in Table C.1.

Airbnb classifies listings as "entire apartment," "private room," and "shared room." Private and shared rooms require guests to stay with hosts, whereas the entire-apartment type does not. We classify "private room" and "shared room" as shared listings and "entire apartment" as nonshared listings. Columns (3) and (4) of Table 4 show that discrimination exists for both nonshared and shared rooms, and a positive review significantly mitigates discrimination for both types of rooms. Note that the magnitude of discrimination without reviews is larger for shared rooms. This provides evidence that when hosts need to stay with guests, they become more careful in inferring guests' quality and tend to discriminate more.

We also split our samples based on hosts' gender. We classify host gender as female or nonfemale (including male and unidentified hosts). Columns (5) and (6) of Table 4 show that both female and nonfemale hosts are likely to discriminate against guests based on race as signaled by their names in the absence of reviews. One positive review on the guest profile significantly reduces discrimination for both female and nonfemale hosts.

Finally, we split our samples based on hosts' race. We classify the host of each listing as white or nonwhite (including unidentified hosts). Column (7) of Table 4 shows that white hosts are likely to discriminate against guests based on race signaled by names; one positive review significantly reduces such discrimination. Column (8) of Table 4 shows that the discrimination effect for nonwhite hosts is positive but insignificant; because the discrimination effect is insignificant, the value of a positive review in reducing discrimination is also insignificant. Note that
Table 4 columns: (1) LR, (2) HR, (3) Nonshared, (4) Shared, (5) Female, (6) Nonfemale, (7) White, (8) Nonwhite.
Notes. Dependent variable is Acceptance. Standard errors are robust. The characteristics in Controls include all the host, listing, and request
characteristics in Table 3.
*p < 0.10; **p < 0.05; ***p < 0.01.
this result does not necessarily mean that nonwhite hosts do not exhibit discriminatory behavior, because the sample size is small (N = 119) and the null result may be due to low statistical power. The point estimates of the discrimination effects in columns (7) and (8) of Table 4 may suggest that nonwhite hosts are less likely to discriminate; therefore, one positive review helps attenuate discrimination more for white hosts.

Next, we provide evidence on whether hosts require additional information upon receiving requests from guest accounts in different race and review conditions. On Airbnb, hosts can reply to guests' messages and ask for more information about the traveling party or the purpose of the intended trip. Some hosts who do not immediately accept the accommodation request use this opportunity. In this case, if a review provides additional valuable information to hosts, a host would be more likely to request additional information from a guest with no review. Moreover, if we believe that hosts discriminate against African American guests because of a lack of information, we would observe that the probability of requesting information conditional on rejecting the offer drops more severely for African American accounts than for white accounts. Figure 4(a) shows that this is indeed the case: the probability of an information request dropped from 56% to 50% for white accounts (i.e., a drop of 6 percentage points with p-value < 0.1) and from 52.3% to 43.6% for African American accounts (i.e., a drop of 8.7 percentage points with p-value < 0.05). This indicates that reviews substitute for guest quality information, which is consistent with statistical discrimination. Moreover, we classify the information requested by hosts into the following categories: ID verification, photo, self-introduction, telephone, trip purpose, and other. Figure 4(b) shows the number of times each type of information is requested for both the no-review and positive-review conditions.

In summary, we confirm that our main result—that is, discrimination exists, and a positive review significantly reduces discrimination—is robust across different types of listings and hosts. We also show that hosts are less likely to request guest quality information when guests have a review, because reviews signal the quality information that hosts find useful in making their accommodation decisions.

7. Results from Other Experiments
In this section, we demonstrate the results from the nonpositive review experiment, the self-claimed information experiment, and the blank review experiment. Our findings demonstrate that both a nonpositive review and a blank review can help reduce discrimination, whereas self-claimed information cannot.

7.1. Nonpositive Review Experiment
Panels A and B of Table 5 show the number of requests sent and the acceptance rates for each condition in the nonpositive review experiment. Panel A demonstrates that when there are no reviews, discrimination exists. Guests with white-sounding names are accepted 62.7% of the time, which is 21.4 percentage points higher than guests with African American–sounding names (p-value 0.02874). The baseline acceptance rates in the nonpositive review experiment are different from those in the positive review experiment. This may be because these experiments were conducted with hosts from
Table 5. Response Summary Statistics and Regression Results from Nonpositive Review Experiment
Notes. Standard errors are robust. The characteristics in panel C include all the host, listing, and request characteristics in Table 3.
*p < 0.10; **p < 0.05; ***p < 0.01.
different cities and at different times. This result is consistent with the 19.2-percentage-point difference we found in the previous round of experimentation. However, with one nonpositive review, discrimination becomes significantly attenuated. The acceptance rates for white and African American guests are 58.2% and 57.4%, respectively, and the difference is statistically insignificant (i.e., p-value 0.6813).14

Comparing panels A and B of Table 5 shows that a nonpositive review increases the acceptance rate of guest accounts with African American–sounding names by 16.1 percentage points (p-value 0.0965). This could be because hosts' prior beliefs about an average African American guest's quality are even lower than the quality revealed by the nonpositive review. It could also be because the existence of a review shows that the guest has completed a transaction with other hosts. Such completion signals several qualities of the guest. It shows that the guest is likely more familiar with the process than a first-time guest. It also shows that the guest's identity is truthful, because otherwise the host would have reported the incident to the platform, in which case the guest account would have been suspended. In other words, even though the guest may be messy, at least the guest is safe to host and is familiar with the process. The completion of a transaction may also demonstrate that the guest has been accepted by other hosts in the community, which establishes a social norm that the guest should be accepted. In Section 7.3, we formally tease apart the impact of a review's existence (i.e., the signal that a guest has completed a transaction on the platform) and of its content in reducing discrimination.

Finally, we follow the same specifications as in Table 3 and formally test the existence of discrimination and the impact of one nonpositive review on discrimination; the results are shown in panel C of Table 5. Columns (1) and (4) demonstrate that, in the absence of review information, having a white-sounding name can causally increase the acceptance rate by 21.9 percentage points; columns (2) and (5) show that, with a nonpositive review, having a white-sounding name no longer significantly affects the acceptance rate. Columns (3) and (6) suggest that having a nonpositive review significantly reduces the racial discrimination caused by names.

To summarize, we show that a nonpositive review can also effectively reduce racial discrimination, which supports Hypothesis 3.
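The proportion tests reported above (e.g., p-value 0.02874 for the no-review comparison in this experiment) can be sketched as a pooled two-proportion z-test. The cell counts below are hypothetical, chosen only to roughly match the reported acceptance rates, so the resulting p-value will not exactly reproduce the paper's; the authors' exact counts and test are in Table 5.

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test of H0: p1 == p2 for independent samples."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # common proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))          # two-sided normal p-value

# Hypothetical counts approximating the no-review condition of the
# nonpositive review experiment (about 62.7% vs. 41.3% acceptance).
z, p = two_proportion_ztest(42, 67, 26, 63)
```

With these made-up counts the gap is significant at the 5% level, mirroring the qualitative conclusion; counts matching the with-review cells (58.2% vs. 57.4%) would instead yield an insignificant difference.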
Table 6. Response Summary Statistics and Regression Results from Self-Claimed Information Experiment
Columns: Guest race; Number of listings; Number of requests sent; Number of requests accepted; Probability of acceptance.
Notes. Standard errors are robust. The characteristics in panel C include all the host, listing, and request characteristics in Table 3.
*p < 0.10; **p < 0.05; ***p < 0.01.
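The race-by-review interaction tested in these specifications (panel C of Table 5, following Table 3) has a transparent arithmetic core in the linear probability case: in a saturated 2 × 2 model with no controls, the interaction coefficient equals the difference-in-differences of cell-level acceptance rates. A sketch with the nonpositive review experiment's rates (the 41.3% no-review rate for African American guests is implied by the reported 62.7% rate and the 21.4-point gap; coding Race as 1 for white-sounding names is an assumption of this sketch):

```python
# Cell-level acceptance rates: (race signal, review present) -> rate.
RATES = {
    ("white", 0): 0.627, ("african_american", 0): 0.413,
    ("white", 1): 0.582, ("african_american", 1): 0.574,
}

def interaction_lpm(rates):
    """beta_2 of the saturated linear probability model: how much the
    white/African American acceptance gap changes when a review is present."""
    gap_no_review = rates[("white", 0)] - rates[("african_american", 0)]
    gap_with_review = rates[("white", 1)] - rates[("african_american", 1)]
    return gap_with_review - gap_no_review

beta2 = interaction_lpm(RATES)  # negative: the review shrinks the racial gap
```

The raw difference-in-differences here is about −0.206, matching the sign (though not necessarily the magnitude, since the paper's estimates include controls) of the significantly negative interactions reported in the regression tables.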
this, we test whether a blank review can help reduce discrimination. Panel A of Table 7 shows that, when there is no review, discrimination still exists in this fourth experiment (i.e., March and April 2018 in Los Angeles). Guests with white-sounding names are accepted 37.9% of the time, and guests with African American–sounding names are accepted 20.2% of the time. The difference is 17.7 percentage points (p-value 0.009), which shows that discrimination still exists even with a verified government ID (see Table 1 for all verifications). Furthermore, panel B shows that a blank review significantly reduces discrimination; the acceptance rates for white and African American guests are 34.0% and 34.4%, and the difference is −0.4 percentage points (p-value 0.963). This provides initial evidence that a blank review left by hosts can reduce discrimination.

Following the same specification as in Table 3, we provide the regression results of this blank review experiment in panel C of Table 7. Columns (1) and (4) show that, in the absence of self-claimed information and reviews, having a white-sounding name can causally increase the acceptance rate by 18.5 percentage points, controlling for host, listing, and request characteristics. Columns (2) and (5) show that the acceptance rate of guests with white-sounding names is not significantly different from that of guests with African American–sounding names when there is a blank review on all guests' profiles. Finally, columns (3) and (6) show that the blank review can help reduce discrimination, which is an interesting result given that past literature (John et al. 2016) has shown that failure to disclose information can cause observers to assume the worst. Moreover, the fact that the estimated effect of a blank review in column (3) is smaller than the effect of the positive review in Table 3 is also consistent with this literature.

In summary, the existence of a review without content (i.e., a blank review) can help reduce discrimination; this provides empirical evidence to support Hypothesis 5.

8. General Discussion
Because discrimination has been documented in the sharing economy, there have been heated discussions on how to reduce it. One way to reduce discrimination is through legislation. Antidiscrimination laws have successfully reduced discrimination in housing and rental markets in the past few decades (U.S. Department of Housing and Urban Development 2013). Several
Table 7. Response Summary Statistics and Regression Results from Blank Review Experiment
Notes. Standard errors are robust. The characteristics in panel C include all the host, listing, and request characteristics in Table 3.
*p < 0.10; **p < 0.05; ***p < 0.01.
states, such as California and Massachusetts, forbid prospective landlords from asking applicants about their race, religion, gender, or possible disabilities. However, the prevailing view among legal scholars is that online marketplaces, such as Airbnb, fall into a gray area of the law (Todisco 2015); therefore, existing legal frameworks may not provide an adequate solution to discrimination issues on Airbnb (Belzer and Leong 2016). There have also been discussions about enforcing complete anonymization in online home-rental marketplaces. Such attempts, however, might jeopardize the core idea of a sharing economy—that is, building trust. Airbnb’s chief executive, Brian Chesky, said that “access is built on trust, and trust is built on transparency” (McPhate 2015).

Airbnb has also been active in fighting discrimination. The platform has banned hosts for using discriminatory language (Weise 2016). On November 1, 2016, Airbnb issued a nondiscrimination policy and required its members to adhere to it. Under this policy, Airbnb may “take steps up to and including suspending the host from the Airbnb platform” if “the host improperly rejects guests on the basis of protected class” (Airbnb, Inc. 2019). However, it is not clear how the platform could monitor hosts’ discriminatory behaviors and how such policies could affect discrimination on the platform.15

Our paper suggests reviews as an alternative approach to effectively reduce discrimination in the sharing economy. We find that, in the absence of a review, an accommodation request made by a guest with an African American–sounding name is 19 percentage points less likely to be accepted by Airbnb hosts. However, a positive review can significantly reduce the observed racial discrimination based on a name’s perceived racial origin. The findings are robust across various listing and host characteristics, including cities, listing types, hosts’ past reviews, and hosts’ gender and race.

We conducted further field experiments to explore how a review’s characteristics could affect its ability to reduce discrimination. Our nonpositive review and blank review experiments demonstrate that a nonpositive review or a blank review could also effectively reduce discrimination. This suggests that the existence of a review, rather than its sentiment or content, could help attenuate discrimination. Our self-claimed information experiment shows that the credibility of a review is essential to reduce discrimination; when guests self-claim their friendliness and tidiness in accommodation requests, such unverified information cannot reduce discrimination.

8.1. Practical Implications
Our results shed light on several important managerial implications on how to leverage online reputation systems to prevent discrimination in the sharing economy. First, our positive, nonpositive, and blank review field experiments show that, if a guest has one review (regardless of the sentiment), discrimination is significantly reduced. Therefore, we encourage platform owners to incentivize users to review one another after a transaction. Given that the review system is a part of Airbnb’s platform design, such a recommendation is easy to implement and does not require major changes to the way that Airbnb is organized. There are several actions that a platform owner may take to encourage users to write reviews: for example, sending email reminders or offering monetary incentives, such as discounts or credits for future transactions. These actions will be particularly effective in reducing discrimination if they target relatively new users who do not have a review yet.16

Second, we also highlight that, for first-time users, discrimination exists and may be substantial. The self-claimed information experiment shows that a mere description of someone’s friendliness or tidiness is unlikely to reduce discrimination. First-time users should be aware that discrimination may exist before the first review, and they should try hard to get the first review as soon as possible.

Third, we show that one blank review helps reduce discrimination, and self-claimed information about tidiness and friendliness does not. This suggests that the credibility of a review—the fact that a review is verified by the platform and is linked to a completed transaction—is crucial for the review to reduce discrimination. The underlying mechanism may be that the completion of a transaction (a) sends additional information about the guests’ quality to reduce statistical discrimination or (b) signals that the guests, regardless of their race, have been accepted by other hosts and establishes a social norm for other potential hosts to accept these guests. Under either mechanism, our experiments show that, to fight discrimination, platforms should ensure the credibility of reviews by monitoring the review system and only allowing either side of the platform to leave reviews after completing a transaction.

8.2. Limitations and Future Research
Our paper has several limitations that future research could help resolve. The first limitation, similar to Edelman et al. (2017), is that race signaled by names might be associated with socioeconomic status. Past research shows that African American–sounding names can be correlated with lower socioeconomic status (Fryer and Levitt 2004). Thus, the level of identified discrimination could be driven by both perceived socioeconomic status and race, signaled by names. Such a limitation is embedded in the method of measuring discrimination in this type of
correspondence study—using names to signal races. Future research could use other information to signal race, which may potentially disentangle the discrimination caused by race from that caused by socioeconomic status.

Moreover, the observation that online reviews reduce discrimination in our setting can potentially be explained by two mechanisms. The first is that reviews provide valuable information regarding guest quality, and as a result, hosts do not focus as much on race. The other is that reviews establish an inclusive normative behavior that makes hosts more likely to accept guests who have been accepted by others, and either mechanism could explain all four of our experiments. Teasing them apart is a subtle and tricky task because, to establish a “credible” norm for inclusive behavior, one has to show that a guest has been hosted by other members in the community regardless of race. However, such endorsement by other hosts also signals guest quality, such as safety, legitimacy, and familiarity with the process. As a result, we are not able to distinguish the two mechanisms completely in this specific context even with the nonpositive and blank review experiments. Moreover, it is possible that no single mechanism could fully explain the findings, and both mechanisms may be present. We, therefore, leave this interesting question for future research.

Appendix A
belief and arrives at a final rental decision. Because the guest’s quality is not observable to hosts, the host relies on a prior belief of the guest’s quality ηi. We assume that the prior belief about guests with race ri is drawn from a commonly known normal distribution, denoted as ηi ∈ N(η̄ri, σ²η), where η̄0 and η̄1 represent the prior belief of the average quality of white and African American guests, respectively. The host then uses the guest’s public profile information—for example, a review left by past hosts—to ultimately infer the guest’s quality and make a rental decision. Specifically, hosts receive a signal, η̃i, of guest quality, ηi:

η̃i = ηi + εi,

where εi ∈ N(0, σ²s) represents the noise level of the signal. After receiving a request i, the host updates the expected utility based on signal η̃i and the guest’s race ri. Let E[ui | η̃, ri] denote the expected utility that a risk-neutral host derives from request i conditional on the observed information. The host accepts request i if and only if the expected utility derived from accepting the request is higher than the cost, ci; that is, E[ui | η̃, ri] > ci. The cost ci includes both the physical cost and the opportunity cost of renting out the property, and it is drawn from a commonly known distribution with cumulative density function F(·).

We denote the renting decision by Ai, where Ai = 1 represents the condition in which the host has accepted request i and Ai = 0 otherwise. The following proposition demonstrates the acceptance probability for request i.

Proposition A.1. For any request i, the probability that the request is accepted is

P(Ai = 1 | η̃i, ri) = F((1 − β)η̄ri + βη̃i + αri),

where β = σ²η / (σ²η + σ²s).

Here, β represents the weight that the host puts on the observed signal to infer the guest’s quality, and (1 − β) represents the weight that the host puts on the prior group average.
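Proposition A.1 is easy to evaluate numerically. The sketch below is an illustration of the proposition, not part of the paper’s analysis: it assumes a standard normal cost distribution for F (the paper leaves F as a general distribution), and all parameter values are hypothetical.

```python
from statistics import NormalDist

def acceptance_prob(eta_bar_r, eta_tilde, r, alpha, var_eta, var_s):
    """P(A_i = 1 | signal, race) = F((1 - beta)*eta_bar_r + beta*eta_tilde + alpha*r),
    with beta = var_eta / (var_eta + var_s).
    F is taken here to be the standard normal CDF (an illustrative assumption)."""
    beta = var_eta / (var_eta + var_s)
    return NormalDist().cdf((1 - beta) * eta_bar_r + beta * eta_tilde + alpha * r)

# Hypothetical parameters: equal prior means (no statistical discrimination),
# positive taste parameter alpha benefiting white guests (r = 1).
p_white = acceptance_prob(eta_bar_r=0.0, eta_tilde=0.5, r=1, alpha=0.3, var_eta=1.0, var_s=1.0)
p_black = acceptance_prob(eta_bar_r=0.0, eta_tilde=0.5, r=0, alpha=0.3, var_eta=1.0, var_s=1.0)
print(p_white - p_black)  # taste-based gap is positive
```

Here beta = 0.5, so the host weighs the signal and the prior group average equally; the gap comes entirely from the taste term α.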
Hence, the expected utility of accepting request i is

E[ui | η̃i, ri] = E[ηi + αri | η̃i, ri]
             = E[ηi | η̃i, ri] + αri
             = (1 − β)η̄ri + βη̃i + αri.

Because the host only accepts the request if the expected utility is larger than the cost (i.e., E[ui | η̃i, ri] > ci) and the cost is distributed according to F(·), the probability of accepting request i becomes

P(E[ui | η̃i, ri] > ci) = P((1 − β)η̄ri + βη̃i + αri > ci)
                      = F((1 − β)η̄ri + βη̃i + αri).

Hence, the acceptance rate of request i depends on the observed signal quality η̃i, the average guest quality of race group η̄ri, and the race of the guest ri. ∎

When request i and request j contain identical information, η̃i = η̃j = η̃, and the guests’ races are different, ri ≠ rj, discrimination exists if and only if

P(Ai = 1 | η̃, ri) ≠ P(Aj = 1 | η̃, rj).

Based on Proposition A.1, we define the discriminatory acceptance gap for signal η̃ as the difference between acceptance rates of two requests with identical signal η̃ but different races:

G(η̃) = F((1 − β)η̄1 + βη̃ + α) − F((1 − β)η̄0 + βη̃).

The discriminatory acceptance gap between white (ri = 1) and African American guests (ri = 0) stems from two mechanisms: taste-based and statistical discrimination. According to the definition, taste-based discrimination occurs when there is an inherent disutility related to a certain race, that is, α > 0; statistical discrimination occurs when decision makers lack information and base their decisions on prior beliefs, that is, α = 0, σ²s > 0, and η̄0 ≠ η̄1. As the information signal becomes more informative (i.e., σ²s decreases), hosts place less weight on the prior belief (i.e., 1 − β decreases). If discrimination is statistical, the discriminatory gap (i.e., the difference driven by (1 − β)η̄ri) diminishes. If discrimination is taste-based, the discriminatory gap still exists.

Appendix B. Institutional Review Board
Because the study involves deceiving subjects, we established to the board that deception is the only feasible means of conducting the research and that it posed no or minimal risks to subjects. To test whether hosts discriminate against guests based on race, the research randomly assigns guest races while keeping other characteristics the same. By concealing that these are, in fact, fictitious guest accounts, hosts respond the same way as they would to a request from a real guest. Not concealing such a fact would result in no response from hosts and, hence, a failure of this study. We take the following steps to minimize the potential risks to subjects: First, subjects involved in this study only had to read a short message with no more than 50 words and decide whether to respond; a typical response usually contains 5–10 words indicating whether the request is approved. Second, each selected host was only contacted once through Airbnb’s reservation request. Third, the request was cancelled immediately upon receiving the reply such that the host would not have to block his or her calendar on the requested dates. We were granted a waiver of informed consent upon establishing that the study posed no or minimal risks to subjects and that requesting informed consent would violate the design of the study; hosts should not be aware of the study to ensure honest and accurate replies.

Appendix C. Auxiliary Figures and Tables

Table C.1. Main Results from the Positive Review Experiment per City

                          Chicago     Boston      Seattle
White                     0.187***    0.250**     0.202*
                          (0.062)     (0.123)     (0.109)
Positive review           0.185**     0.239**     0.301***
                          (0.081)     (0.118)     (0.110)
White × positive review   −0.244**    −0.341**    −0.265*
                          (0.124)     (0.171)     (0.150)
Host characteristics      Yes         Yes         Yes
Listing characteristics   Yes         Yes         Yes
Request characteristics   Yes         Yes         Yes
Observations              318         135         145
Adjusted R²               0.028       0.047       0.093

Note. Dependent variable is Acceptance.
*p < 0.1; **p < 0.05; ***p < 0.01.
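The comparative statics stated in Appendix A (the discriminatory gap G(η̃) vanishes as the signal becomes precise when discrimination is statistical, but persists when it is taste-based) can be checked with a small numerical sketch. The choice of the standard normal CDF for F and all parameter values are illustrative assumptions, not from the paper.

```python
from statistics import NormalDist

def gap(eta_bar_1, eta_bar_0, eta_tilde, alpha, var_eta, var_s):
    """Discriminatory acceptance gap G(eta_tilde) from Appendix A,
    with F taken to be the standard normal CDF (an illustrative choice)."""
    beta = var_eta / (var_eta + var_s)
    F = NormalDist().cdf
    return (F((1 - beta) * eta_bar_1 + beta * eta_tilde + alpha)
            - F((1 - beta) * eta_bar_0 + beta * eta_tilde))

# Statistical discrimination only: alpha = 0, different prior means.
stat_noisy   = gap(0.5, -0.5, 0.0, alpha=0.0, var_eta=1.0, var_s=4.0)   # imprecise signal
stat_precise = gap(0.5, -0.5, 0.0, alpha=0.0, var_eta=1.0, var_s=0.01)  # precise signal

# Taste-based discrimination only: equal prior means, alpha > 0.
taste_noisy   = gap(0.0, 0.0, 0.0, alpha=0.4, var_eta=1.0, var_s=4.0)
taste_precise = gap(0.0, 0.0, 0.0, alpha=0.4, var_eta=1.0, var_s=0.01)

print(stat_noisy, stat_precise)    # gap shrinks as the signal becomes precise
print(taste_noisy, taste_precise)  # gap is unchanged by signal precision
```

Under statistical discrimination, a more precise signal pushes β toward 1 and the prior group means stop mattering; under taste-based discrimination, the α term survives regardless of β.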
[Perceived-race survey table: column headers are Name; White, %; African American, %; Other, %; I cannot tell, %; Number of participants; Intended race. Table body not extracted.]
Note. Among 5,000 randomly selected guest accounts, 3,752 are white guests and 109 are African American guests.
Table C.4. Summary Statistics of Host and Listing Characteristics per Experiment Conditions
Notes. The number of bedrooms is significantly different between the white-name and African American–name conditions in the self-claimed
information experiment because of large variations in house size in Austin. The difference is statistically insignificant if we restrict to listings with
fewer than three bedrooms or listings in Seattle and Boston. Our main analyses are robust with and without controlling for the number of
bedrooms.
Figure C.1. (Color online) Airbnb Host Inquiry and Booking Page Example
Endnotes
1. Twitter hashtag #Airbnbwhileblack: https://twitter.com/hashtag/airbnbwhileblack.
2. On Airbnb, the content of reviews is mostly positive. Some reviews are less positive than others but rarely very negative. Therefore, we test the impact of such “nonpositive” reviews rather than negative ones.
3. Following the past literature (Bertrand and Mullainathan 2004), the white- and African American–sounding names were chosen based on name frequency data published by the U.S. Census Bureau’s population division. Please refer to demographic aspects of surnames from Census 2000.
4. Our acceptance rate of white guests is similar to that of Edelman et al. (2017), and our acceptance rate of African American guests is lower.
5. Airbnb claims, “Our community relies on honest, transparent reviews. We will remove or alter a review if we find that it violates our review guidelines.” Please refer to https://community.withairbnb.com/t5/Hosting/All-About-Reviews-A-Community-Help-Guide/td-p/38099.
6. See https://www.airbnb.com/about/about-us.
7. Figure C.3(b) shows the verification status of 1,000 randomly chosen guest accounts within the experimental cities. It is clear that the most popular verifications among these accounts are phone numbers and email addresses. In the blank review experiment, we further verify the government ID for each fictitious account.
8. The ideal case would be to randomize the name of the host who leaves the review. But because one can only leave a review after a transaction, it is difficult to randomize host names in the experiment. We chose a white-sounding name because more than 80% of hosts within our sample are white.
9. We adhere to the following procedure to find available nights from the host’s calendar: We first find two available consecutive nights given the range of check-in dates. If there are fewer than two nights available, we find one available night. If the host has a “three-night minimum stay” rule, we find three consecutive nights. If all of the above fail, we do not send a request for the listing.
10. We chose hosts from Los Angeles in this fourth experiment because we had exhausted available and eligible hosts in cities used in previous experiments.
11. Note that we started with the nonparametric proportion test because it required fewer assumptions on the data-generating process compared with regression-based methods, for example, the logit model.
12. The sample size in this experiment was selected based on the findings of Edelman et al. (2017) of an effect from 42% (African American accounts) to 50% (white accounts) and a power level of 0.8. Based on the one-sided proportion test and these estimates, we needed 300 observations per treatment condition at a 0.05 significance level and 222 observations per treatment condition at a 0.1 significance level. Note that we used the one-sided power calculation because Edelman et al. (2017) show that accounts with African American–sounding names have a lower acceptance rate than accounts with white-sounding names.
13. The postexperiment power drops to 60% at a 0.1 significance level if the discrimination effect size is 10% instead of 19.2%. Therefore, we claim throughout the paper that the empirical evidence shows that discrimination is significantly attenuated (instead of being eliminated) with one positive review, even though the observed discrimination effect size is not significantly different from zero when there is one positive review.
14. We conduct a postexperiment power analysis for this review condition. The identified effect of racial discrimination in the nonreview condition is a 21.4% reduction from a 62.7% acceptance rate for white accounts to a 41.3% acceptance rate for African American accounts, and the total sample size of the review condition is 128. Assuming the nonpositive review does not change the discrimination effect, we have a power of 79% and 69% to detect the discrimination effect at the 0.1 and 0.05 significance levels, respectively. This shows that the null results we observe are unlikely to be driven by a lack of power.
15. For example, Airbnb can easily detect offensive language and ban the users. But it is difficult for Airbnb to react to weak signals, such as a rejection. Therefore, our suggestion to use reputation to prevent discrimination serves as a proactive approach rather than a punitive reaction.
16. If the minority group receives poorer reviews on the platform, then having more reviews would not necessarily alleviate discrimination. To provide some insight that this is not the case on Airbnb, we randomly sample 5,000 guests who have transacted with hosts in our sample, collect the reviews these guests receive, and classify the review content as positive or negative. Table C.3 presents summary
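As a rough cross-check of the logic in endnotes 12–14, a one-sided two-proportion sample-size calculation can be computed directly with the standard normal-approximation formula. This is an illustrative sketch only: the paper does not specify which variant of the proportion test underlies its numbers, so the output of this particular (unpooled-variance) formula need not match the 300 and 222 observations the authors report.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha, power):
    """Sample size per group for a one-sided two-proportion z-test
    (unpooled-variance normal approximation)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p2 - p1) ** 2)

# Detect a drop from 50% (white accounts) to 42% (African American accounts)
# with 80% power, the scenario described in endnote 12.
n_05 = n_per_group(0.42, 0.50, alpha=0.05, power=0.8)
n_10 = n_per_group(0.42, 0.50, alpha=0.10, power=0.8)
print(n_05, n_10)  # a stricter significance level requires more observations
```

The qualitative pattern matches the endnote: moving from a 0.1 to a 0.05 significance level raises the required sample size by roughly the ratio of the squared (z_α + z_β) terms.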
statistics: African American guests in general have fewer reviews, but the percentage of positive reviews is almost identical across African American and white guests. This provides some side evidence that African American guests are not more likely to receive poorer reviews.

References
Aarts H, Dijksterhuis A (2003) The silence of the library: Environment, situational norm, and social behavior. J. Personality Soc. Psych. 84(1):18–28.
Airbnb, Inc. (2019) Airbnb’s nondiscrimination policy: Our commitment to inclusion and respect. Accessed April 25, 2019, https://www.airbnb.com/help/article/1405/airbnb-s-nondiscrimination-policy–our-commitment-to-inclusion-and-respect.
Allon G, Bassamboo A (2011) Buying from the babbling retailer? The impact of availability information on customer behavior. Management Sci. 57(4):713–726.
Allon G, Bassamboo A, Gurvich I (2011) We will be right with you: Managing customer expectations with vague promises and cheap talk. Oper. Res. 59(6):1382–1394.
Arrow KJ (1973) The theory of discrimination. Discrimination Labor Markets 3(10):3–33.
Ashlagi I, Shi P (2016) Optimal allocation without money: An engineering approach. Management Sci. 62(4):1078–1097.
Ayres I, Siegelman P (1995) Race and gender discrimination in bargaining for a new car. Amer. Econom. Rev. 85(3):304–321.
Bajari P, Hortacsu A (2003) The winner’s curse, reserve prices, and endogenous entry: Empirical insights from eBay auctions. RAND J. Econom. 34(2):329–355.
Bertrand M, Duflo E (2016) Field experiments on discrimination. Handbook of Economic Field Experiments, vol. 1 (North-Holland, Amsterdam), 309–393.
Bertrand M, Mullainathan S (2004) Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Amer. Econom. Rev. 94(4):991–1013.
Bimpikis K, Candogan O, Saban D (2016) Spatial pricing in ride-sharing networks. Working paper, Stanford University, Stanford, CA.
Bolton GE, Katok E, Ockenfels A (2004) How effective are electronic reputation mechanisms? An experimental investigation. Management Sci. 50(11):1587–1602.
Buell RW, Norton MI (2011) The labor illusion: How operational transparency increases perceived value. Management Sci. 57(9):1564–1579.
Buell RW, Kim T, Tsay C-J (2016) Creating reciprocal value through operational transparency. Management Sci. 63(6):1673–1695.
Cachon GP, Daniels KM, Lobel R (2015) The role of surge pricing on a service platform with self-scheduling capacity. Manufacturing Service Oper. Management 19(3):368–384.
Carpusor AG, Loges WE (2006) Rental discrimination and ethnicity in names. J. Appl. Soc. Psych. 36(4):934–956.
Cheung CM-Y, Sia C-L, Kuan KKY (2012) Is this review believable? A study of factors affecting the credibility of online consumer reviews from an ELM perspective. J. Assoc. Inform. Systems 13(8):618–635.
Chevalier JA, Mayzlin D (2006) The effect of word of mouth on sales: Online book reviews. J. Marketing Res. 43(3):345–354.
Cialdini RB, Goldstein NJ (2004) Social influence: Compliance and conformity. Annual Rev. Psych. 55(1):591–621.
Cohen M, Harsha P (2013) Designing price incentives in a network with social interactions. Working paper, New York University, New York.
Cui R, Zhang DJ, Bassamboo A (2019) Learning from inventory availability information: Field evidence from Amazon. Management Sci. 65(3):1216–1235.
Edelman B, Luca M, Svirsky D (2017) Racial discrimination in the sharing economy: Evidence from a field experiment. Amer. Econom. J. Appl. Econom. 9(2):1–22.
Ewens M, Tomlin B, Wang LC (2014) Statistical discrimination or prejudice? A large sample field experiment. Rev. Econom. Statist. 96(1):119–134.
Feldman J, Zhang D, Liu X, Zhang N (2018) Taking assortment optimization from theory to practice: Evidence from large field experiments on Alibaba. Working paper, Washington University in St. Louis, St. Louis.
Feldman P, Li J, Tsai T (2017) Welfare implications of congestion pricing: Evidence from SFpark. Working paper, Boston University, Boston.
Foster AD, Rosenzweig MR (1995) Learning by doing and learning from others: Human capital and technical change in agriculture. J. Political Econom. 103(6):1176–1209.
Fradkin A (2016) Search frictions and the design of online marketplaces. Working paper, Boston University, Boston.
Fryer RG Jr, Levitt SD (2004) Understanding the black-white test score gap in the first two years of school. Rev. Econom. Statist. 86(2):447–464.
Gallino S, Moreno A (2014) Integration of online and offline channels in retail: The impact of sharing reliable inventory availability information. Management Sci. 60(6):1434–1451.
Ge Y, Knittel CR, MacKenzie D, Zoepf S (2016) Racial and gender discrimination in transportation network companies. Working paper, University of Washington, Seattle.
Gelman A, Stern H (2006) The difference between “significant” and “not significant” is not itself statistically significant. Amer. Statist. 60(4):328–331.
Hanson A, Hawley Z (2011) Do landlords discriminate in the rental housing market? Evidence from an internet field experiment in US cities. J. Urban Econom. 70(2–3):99–114.
John LK, Barasz K, Norton MI (2016) Hiding personal information reveals the worst. Proc. Natl. Acad. Sci. USA 113(4):954–959.
Kaas L, Manger C (2012) Ethnic discrimination in Germany’s labour market: A field experiment. German Econom. Rev. 13(1):1–20.
Kabra A, Belavina E, Girotra K (2015) Bike-share systems: Accessibility and availability. Chicago Booth Research Paper 15-04, University of Chicago Booth School of Business, Chicago.
Leong N, Belzer A (2016) The new public accommodations: Race discrimination in the platform economy. Georgetown Law J. 105:1271.
Li J, Netessine S (2018) Market thickness and matching (in)efficiency: Evidence from a quasi-experiment. Working paper, University of Michigan, Ann Arbor.
Li J, Moreno A, Zhang DJ (2016) Pros vs. Joes: Agent pricing behavior in the sharing economy. Working paper, University of Michigan, Ann Arbor.
Luca M (2016) Reviews, reputation, and revenue: The case of Yelp.com. Harvard Business School NOM Unit Working Paper 12-016, Harvard Business School, Cambridge, MA.
McPhate M (2015) Discrimination by Airbnb hosts is widespread, report says. New York Times (December 11), https://www.nytimes.com/2015/12/12/business/discrimination-by-airbnb-hosts-is-widespread-report-says.html.
Milkman KL, Akinola M, Chugh D (2012) Temporal distance and discrimination: An audit study in academia. Psych. Sci. 23(7):710–717.
Moreno A, Terwiesch C (2014) Doing business with strangers: Reputation in online service marketplaces. Inform. Systems Res. 25(4):865–886.
Myrdal G (1944) An American Dilemma: The Negro Problem and Modern Democracy (Harper, New York).
NewsHub (2018) Partygoers trash $3 million Melbourne house booked on Airbnb. NewsHub (February 7), https://www.newshub.co.nz/home/world/2018/07/partygoers-trash-3-million-melbourne-house-booked-on-airbnb.html.
Nunley JM, Pugh A, Romero N, Alan Seals R (2014) An examination of racial discrimination in the labor market for recent college graduates: Estimates from the field. Working paper, Auburn University, Auburn, AL.
Pavlou PA, Gefen D (2004) Building effective online marketplaces with institution-based trust. Inform. Systems Res. 15(1):37–59.
Schultz PW, Nolan JM, Cialdini RB, Goldstein NJ, Griskevicius V (2007) The constructive, destructive, and reconstructive power of social norms. Psych. Sci. 18(5):429–434.
Taylor T (2016) On-demand service platforms. Manufacturing Service Oper. Management 20(4):704–720.
Todisco M (2015) Share and share alike? Considering racial discrimination in the nascent room-sharing economy. Stanford Law Rev. 67:121–129.
U.S. Department of Housing and Urban Development (2013) Housing discrimination against racial and ethnic minorities 2012. Accessed June 2013, https://www.huduser.gov/portal//Publications/pdf/HUD-514_HDS2012.pdf.
Vora S (2017) Airbnb sued by guest who says a host sexually assaulted her. New York Times (August 2), https://www.nytimes.com/2017/08/02/travel/airbnb-lawsuit-host-sexual-assault.html.
Wason PC, Johnson-Laird PN (1972) Psychology of Reasoning: Structure and Content, vol. 86 (Harvard University Press, Cambridge, MA).
Weise E (2016) Airbnb bans N. Carolina host as accounts of racism rise. USA Today (June 1), https://www.usatoday.com/story/tech/2016/06/01/airbnb-bans-north-carolina-host-racism/85252190/.
Wells GL, Windschitl PD (1999) Stimulus sampling and social psychological experimentation. Personality Soc. Psych. Bull. 25(9):1115–1125.
Williams DR, Mohammed SA (2009) Discrimination and racial disparities in health: Evidence and needed research. J. Behav. Medicine 32(1):20–47.
Zervas G, Proserpio D, Byers J (2015) A first look at online reputation on Airbnb, where every stay is above average. Working paper, Boston University, Boston.
Zervas G, Proserpio D, Byers J (2016) The rise of the sharing economy: Estimating the impact of Airbnb on the hotel industry. Research Paper 2013-16, Boston University School of Management, Boston.
Zhang DJ, Dai H, Dong L, Qi F, Zhang N, Liu X, Liu Z, Yang J (2019) How do price promotions affect customer behavior on retailing platforms? Evidence from a large randomized experiment on Alibaba. Management Sci. Forthcoming.