Social Media and Fake News in the 2016 Election
Hunt Allcott and Matthew Gentzkow
American democracy has been repeatedly buffeted by changes in media technology.
In the 19th century, cheap newsprint and improved presses allowed partisan
newspapers to expand their reach dramatically. Many have argued
that the effectiveness of the press as a check on power was significantly
compromised as a result (for example, Kaplan 2002). In the 20th century, as radio and then
television became dominant, observers worried that these new platforms would reduce
substantive policy debates to sound bites, privilege charismatic or “telegenic” candidates
over those who might have more ability to lead but are less polished, and concentrate power
in the hands of a few large corporations (Lang and Lang 2002; Bagdikian 1983). In the
early 2000s, the growth of online news prompted a new set of concerns, among them that
excess diversity of viewpoints would make it easier for like-minded citizens to form “echo
chambers” or “filter bubbles” where they would be insulated from contrary perspectives
(Sunstein 2001a, b, 2007; Pariser 2011). Most recently, the focus of concern has shifted to
social media. Social media platforms such as Facebook have a dramatically different
structure than previous media technologies. Content can be relayed among users with no
significant third party filtering, fact-checking, or editorial judgment. An individual user
with no track record or reputation can in some cases reach as many readers as Fox News,
CNN, or the New York Times.
■ Hunt Allcott is Associate Professor of Economics, New York University, New York City,
New York. Matthew Gentzkow is Professor of Economics, Stanford University, Stanford,
California. Both authors are Research Associates, National Bureau of Economic Research,
Cambridge, Massachusetts.
† For supplementary materials such as appendices, datasets, and author disclosure statements, see the article page at https://doi.org/10.1257/jep.31.2.211.
Following the 2016 election, a specific concern has been the effect of false stories—
“fake news,” as it has been dubbed—circulated on social media. Recent evidence shows
that: 1) 62 percent of US adults get news on social media (Gottfried and Shearer 2016); 2)
the most popular fake news stories were more widely shared on Facebook than the most
popular mainstream news stories (Silverman 2016); 3) many people who see fake news
stories report that they believe them (Silverman and Singer-Vine 2016); and 4) the most
discussed fake news stories tended to favor Donald Trump over Hillary Clinton (Silverman
2016). Putting these facts together, a number of commentators have suggested that Donald
Trump would not have been elected president were it not for the influence of fake news
(for examples, see Parkinson 2016; Read 2016; Dewey 2016).
Our goal in this paper is to offer theoretical and empirical background to frame this
debate. We begin by discussing the economics of fake news. We sketch a model of media
markets in which firms gather and sell signals of a true state of the world to consumers who
benefit from inferring that state. We conceptualize fake news as distorted signals
uncorrelated with the truth. Fake news arises in equilibrium because it is cheaper to provide
than precise signals, because consumers cannot costlessly infer accuracy, and because
consumers may enjoy partisan news. Fake news may generate utility for some consumers,
but it also imposes private and social costs by making it more difficult for consumers to
infer the true state of the world—for example, by making it more difficult for voters to infer
which electoral candidate they prefer.
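To make this intuition concrete, the following stylized simulation (an illustration only, not the formal model referenced above) treats the state of the world as binary: a legitimate outlet reports a signal that matches the state with probability q, a fake outlet reports a signal that is independent of the state, and a reader who cannot distinguish the two sources simply follows whatever report arrives. All parameter values are arbitrary assumptions.

```python
import random

def reader_accuracy(q=0.8, share_fake=0.5, n=100_000, seed=0):
    """Share of reports that lead the reader to the correct state when a
    fraction share_fake of reports are 'fake' (independent of the state)
    and the rest match the state with probability q. Illustrative only."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        state = rng.random() < 0.5            # true state of the world
        is_fake = rng.random() < share_fake   # source type, unknown to the reader
        if is_fake:
            report = rng.random() < 0.5       # uncorrelated with the state
        else:
            report = state if rng.random() < q else (not state)
        correct += (report == state)          # reader simply believes the report
    return correct / n

print(reader_accuracy(share_fake=0.0))  # only precise signals: about 0.80
print(reader_accuracy(share_fake=0.5))  # half fake: about 0.65
```

The exercise simply illustrates the private cost emphasized above: when fake reports cannot be distinguished from precise ones, the reader's inference about the true state degrades.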
We then present new data on the consumption of fake news prior to the election. We
draw on web browsing data, a new 1,200-person post-election online survey, and a database
of 156 election-related news stories that were categorized as false by leading fact-checking
websites in the three months before the election.
First, we discuss the importance of social media relative to other sources of political news
and information. Referrals from social media accounted for a small share of traffic on
mainstream news sites, but a much larger share for fake news sites. Trust in information
accessed through social media is lower than trust in traditional outlets. In our survey, only
14 percent of American adults viewed social media as their “most important” source of
election news.
Second, we confirm that fake news was both widely shared and heavily tilted in favor
of Donald Trump. Our database contains 115 pro-Trump fake stories that were shared on
Facebook a total of 30 million times, and 41 pro-Clinton fake stories shared a total of 7.6
million times.
Third, we provide several benchmarks of the rate at which voters were exposed to fake
news. The upper end of previously reported statistics for the ratio of page visits to shares
of stories on social media would suggest that the 38 million shares of fake news in our
database translate into 760 million instances of a user clicking through and reading a fake
news story, or about three stories read per American adult. A list of fake news websites, on
which just over half of articles appear to be false, received 159 million visits during the
month of the election, or 0.64 per US adult. In our post-election survey, about 15 percent
of respondents recalled seeing each of 14
major pre-election fake news headlines, but about 14 percent also recalled seeing a set of
placebo fake news headlines—untrue headlines that we invented and that never actually
circulated. Using the difference between fake news headlines and placebo headlines as a
measure of true recall and projecting this to the universe of fake news articles in our
database, we estimate that the average adult saw and remembered 1.14 fake stories. Taken
together, these estimates suggest that the average US adult might have seen perhaps one or
several fake news stories in the months before the election.
Fourth, we study inference about true versus false news headlines in our survey data.
Education, age, and total media consumption are strongly associated with more accurate
beliefs about whether headlines are true or false. Democrats and Republicans are both
about 15 percentage points more likely to believe ideologically aligned headlines, and this
ideologically aligned inference is substantially stronger for people with ideologically
segregated social media networks.
We conclude by discussing the possible impacts of fake news on voting patterns in
the 2016 election and potential steps that could be taken to reduce any negative impacts of
fake news. Although the term “fake news” has been popularized only recently, this and
other related topics have been extensively covered by academic literatures in economics,
psychology, political science, and computer science. See Flynn, Nyhan, and Reifler (2017)
for a recent overview of political misperceptions. In addition to the articles we cite below,
there are large literatures on how new information affects political beliefs (for example,
Berinsky 2017; DiFonzo and Bordia 2007; Taber and Lodge 2006; Nyhan, Reifler, and
Ubel 2013; Nyhan, Reifler, Richey, and Freed 2014), how rumors propagate (for example,
Friggeri, Adamic, Eckles, and Cheng 2014), effects of media exposure (for example,
Bartels 1993, DellaVigna and Kaplan 2007, Enikolopov, Petrova, and Zhuravskaya 2011,
Gerber and Green 2000, Gerber, Gimpel, Green, and Shaw 2011, Huber and Arceneaux
2007, Martin and Yurukoglu 2014, and Spenkuch and Toniatti 2016; and for overviews,
DellaVigna and Gentzkow 2010, and Napoli 2014), and ideological segregation in news
consumption (for example, Bakshy, Messing, and Adamic 2015; Gentzkow and Shapiro
2011; Flaxman, Goel, and Rao 2016).
Figure 1
Share of Americans Believing Historical Partisan Conspiracy Theories
[Figure: share of people who believe each theory is true (%), plotted on a 0–60 percent scale.]

1 Keeley (1999) defines a conspiracy theory as “a proposed explanation of some historical event (or events) in terms of the significant causal agency of a relatively small group of persons—the conspirators—acting in secret.”

Note to Figure 2: Panel A shows the percent of Americans who say that they have “a great deal” or “a fair amount” of “trust and confidence” in the mass media “when it comes to reporting the news fully, accurately, and fairly,” using Gallup poll data reported in Swift (2016). Panel B shows the average “feeling thermometer” (with 100 meaning “very warm or favorable feeling” and 0 meaning “very cold or unfavorable feeling”) of Republicans toward the Democratic Party and of Democrats toward the Republican Party, using data from the American National Election Studies (2012).
As we discuss below, this could affect how likely each side is to believe negative fake news stories about the other.
Fake news producers do not attempt to build a long-term reputation for quality, but rather maximize the short-run profits from attracting clicks in an initial period.
Capturing precisely how this competition plays out on social media would require
extending the model to include multiple steps where consumers see “headlines” and then
decide whether to “click” to learn more detail. But loosely speaking, we can imagine that
such firms attract demand because consumers cannot distinguish them from higher-quality
outlets, and also because their reports are tailored to deliver psychological utility to
consumers on either the left or right of the political spectrum.
Adding fake news producers to a market has several potential social costs. First,
consumers who mistake a fake outlet for a legitimate one have less-accurate beliefs and are
worse off for that reason. Second, these less-accurate beliefs may reduce positive social
externalities, undermining the ability of the democratic process to select high-quality
candidates. Third, consumers may also become more skeptical of legitimate news
producers, to the extent that they become hard to distinguish from fake news producers.
Fourth, these effects may be reinforced in equilibrium by supply-side responses: a reduced
demand for high-precision, low-bias reporting will reduce the incentives to invest in
accurate reporting and truthfully report signals. These negative effects trade off against any
welfare gain that arises from consumers who enjoy reading fake news reports that are
consistent with their priors.
Post-Election Survey
During the week of November 28, 2016, we conducted an online survey of 1,208 US adults aged 18 and over using the SurveyMonkey platform. The sample was drawn from SurveyMonkey’s Audience Panel, an opt-in panel recruited from the more than 30 million people who complete SurveyMonkey surveys every month (as described in more detail at https://www.surveymonkey.com/mp/audience/).

2 Of these 21 articles, 12 were fact-checked on Snopes. Nine were rated as “false,” and the other three were rated “mixture,” “unproven,” and “mostly false.”
The survey consisted of four sections. First, we acquired consent to participate and a
commitment to provide thoughtful answers, which we hoped would improve data quality.
Those who did not agree were disqualified from the survey. Second, we asked a series of
demographic questions, including political affiliation before the 2016 campaign, vote in
the 2016 presidential election, education, and race/ethnicity. Third, we asked about 2016
election news consumption, including time spent on reading, watching, or listening to
election news in general and on social media in particular, and the most important source
of news and information about the 2016 election. Fourth, we showed each respondent 15
news headlines about the 2016 election. For each headline, we asked, “Do you recall seeing
this reported or discussed prior to the election?” and “At the time of the election, would
your best guess have been that this statement was true?” We also received age and income
categories, gender, and census division from profiling questions that respondents had
completed when they first started taking surveys on the Audience panel. The survey
instrument can be accessed at https://www.surveymonkey.com/r/RSYD75P.
Each respondent’s 15 news headlines were randomly selected from a list of 30 news
headlines, six from each of five categories. Within each category, our list contains an equal
split of pro-Clinton and pro-Trump headlines, so 15 of the 30 articles favored Clinton, and
the other 15 favored Trump. The first category contains six fake news stories mentioned in
three mainstream media articles (one in the New York Times, one in the Wall Street Journal,
and one in BuzzFeed) discussing fake news during the week of November 14, 2016. The
second category contains the four most recent pre-election headlines from each of Snopes
and PolitiFact deemed to be unambiguously false. We refer to these two categories
individually as “Big Fake” and “Small Fake,” respectively, or collectively as “Fake.” The
third category contains the most recent six major election stories from the Guardian’s
election timeline. We refer to these as “Big True” stories. The fourth category contains the
two most recent pre-election headlines from each of Snopes and PolitiFact deemed to be
unambiguously true. We refer to these as “Small True” stories. Our headlines in these four
categories appeared on or before November 7.
The fifth and final category contains invented “Placebo” fake news headlines, which
parallel placebo conspiracy theories employed in surveys by Oliver and Wood (2014) and
Chapman University (2016). As we explain below, we include these
Placebo headlines to help control for false recall in survey responses. We invented three
damaging fake headlines that could apply to either Clinton or Trump, then randomized
whether a survey respondent saw the pro-Clinton or pro-Trump version. We experimented
with several alternative placebo headlines during a pilot survey, and we chose these three
because the data showed them to be approximately equally believable as the “Small Fake”
stories. (We confirmed using Google searches that none of the Placebo stories had appeared
in actual fake news articles.) Online Appendix Table 1, available with this article at this
journal’s website (http://e-jep.org), lists the exact text of the headlines presented in the
survey. The online Appendix also presents a model of survey responses that makes precise
the conditions under which differencing with respect to the placebo articles leads to valid
inference.
Yeager et al. (2011) and others have shown that opt-in internet panels such as ours
typically do not provide nationally representative results, even after reweighting.
Notwithstanding, reweighting on observable variables such as education and internet usage
can help to address the sample selection biases inherent in an opt-in internet-based
sampling frame. For all results reported below, we reweight the online sample to match the
nationwide adult population on ten characteristics that we hypothesized might be correlated
with survey responses, including income, education, gender, age, ethnicity, political party
affiliation, and how often the respondent reported consuming news from the web and from
social media. The online Appendix includes summary statistics for these variables; our
unweighted sample is disproportionately well-educated, female, and Caucasian, and it over-represents people who rely relatively heavily on the web and social media for news. The Appendix also
includes additional information on data construction.
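Reweighting of this kind is commonly implemented by post-stratification or iterative proportional fitting (“raking”) toward known population margins. The sketch below rakes on two hypothetical margins (education and gender); the variable names, target shares, and data are illustrative assumptions, not the weighting procedure actually used for the survey.

```python
import pandas as pd

# Hypothetical respondents and hypothetical population margins (illustrative only;
# the survey described above reweights on ten characteristics).
sample = pd.DataFrame({
    "educ":   ["college", "college", "hs", "hs", "college", "hs"],
    "gender": ["f", "m", "f", "m", "f", "f"],
})
targets = {
    "educ":   {"college": 0.30, "hs": 0.70},
    "gender": {"f": 0.51, "m": 0.49},
}

weights = pd.Series(1.0, index=sample.index)
for _ in range(50):  # alternate the margin adjustments until they stabilize
    for var, target in targets.items():
        current = weights.groupby(sample[var]).sum() / weights.sum()
        weights *= sample[var].map({k: target[k] / current[k] for k in target})
weights *= len(sample) / weights.sum()  # rescale so the weights average to one

print(weights.round(2))
print((weights.groupby(sample["educ"]).sum() / weights.sum()).round(2))
```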
The theoretical framework we sketched above suggests several reasons why social
media platforms may be especially conducive to fake news. First, on social media, the fixed
costs of entering the market and producing content are vanishingly small. This increases
the relative profitability of the small-scale, short-term strategies often adopted by fake news
producers, and reduces the relative importance of building a long-term reputation for
quality. Second, the format of social media—thin slices of information viewed on phones
or news feed windows—can make it difficult to judge an article’s veracity. Third, Bakshy,
Messing, and Adamic (2015) show that Facebook friend networks are ideologically
segregated—among friendships between people who report ideological affiliations in their
profiles, the median share of friends with the opposite ideology is only 20 percent for
liberals and 18 percent for conservatives—and people are considerably more likely to read
and share news articles that are aligned with their ideological positions. This suggests that
people who get news from Facebook (or other social media) are less likely to receive
evidence about the true state of the world that would counter an ideologically aligned but
false story.
Figure 3
Share of Visits to US News Websites by Source
Note: This figure presents the share of traffic from different sources for the top 690 US news websites and for 65 fake news websites. “Other links” means impressions that were referred from sources other than search engines and social media. “Direct browsing” means impressions that did not have a referral source. Sites are weighted by number of monthly visits. Data are from Alexa.
One way to gauge the importance of social media for fake news suppliers is to measure
the source of their web traffic. Each time a user visits a webpage, that user has either
navigated directly (for example, by typing www.wsj.com into a browser) or has been
referred from some other site. Major referral sources include social media (for example,
clicking on a link in the Facebook news feed) and search engines (for example, searching
for “Pope endorsed Trump?” on Google and clicking on a search result). Figure 3 presents
web traffic sources for the month around the 2016 US presidential election (late October
through late November) from Alexa (alexa.com), which gathers data from browser
extensions installed on people’s computers as well as from measurement services offered
to websites. These data exclude mobile browsing and do not capture news viewed directly
on social media sites, for example, when people read headlines within Facebook or Twitter
news feeds.
The upper part of the graph presents referral sources for the top 690 US news sites, as
ranked by Alexa. The lower part of the graph presents web traffic sources for a list of 65
major fake news sites, which we gathered from lists compiled by Zimdars (2016) and
Brayton (2016). For the top news sites, social media referrals represent only about 10
percent of total traffic. By contrast, fake news websites rely on social
media for a much higher share of their traffic. This demonstrates the importance of social
media for fake news providers. While there is no definitive list of fake news sites, and one
might disagree with the inclusion or exclusion of particular sites in this list of 65, this core
point about the importance of social media for fake news providers is likely to be robust.
A recent Pew survey (Gottfried and Shearer 2016) finds that 62 percent of US adults
get news from social media. To the extent that fake news is socially costly and fake news
is prevalent on social media, this statistic could appear to be cause for concern. Of this 62
percent, however, only 18 percent report that they get news from social media “often,” 26
percent do so “sometimes,” and 18 percent do so “hardly ever.” By comparison, the shares
who “often” get news from local television, national broadcast television, and cable
television are 46 percent, 30 percent, and 31 percent respectively. Moreover, only 34
percent of web-using adults trust the information they get from social media “some” or “a
lot.” By contrast, this share is 76 percent for national news organizations and 82 percent
for local news organizations.
The results of our post-election survey are broadly consistent with this picture. For the
month before the 2016 election, our respondents report spending 66 minutes per day
reading, watching, or listening to election news. (Again, these and all other survey results
are weighted for national representativeness.) Of this, 25 minutes (38 percent) was on
social media. Our survey then asked, “Which of these sources was your most important
source of news and information about the 2016 election?” The word “important” was
designed to elicit a combination of consumption frequency and trust in information. Figure
4 presents responses. In order, the four most common responses are cable TV, network TV,
websites, and local TV. Social media is the fifth most common response, with 14 percent
of US adults listing social media as their most “important” news source.
Taken together, these results suggest that social media have become an important but
not dominant source of political news and information. Television remains more important
by a large margin.
In our fake news database, we record 41 pro-Clinton (or anti-Trump) and 115 pro-
Trump (or anti-Clinton) articles, which were shared on Facebook a total of 7.6 million and
30.3 million times, respectively. Thus, there are about three times more fake pro-Trump
articles than pro-Clinton articles, and the average pro-Trump article was shared more on
Facebook than the average pro-Clinton article. To be clear, these statistics show that more
of the fake news articles on these three fact-checking sites are right-leaning. This could be
because more of the actual fake news is right-leaning, or because more right-leaning
assertions are forwarded to and/or reported by fact-checking sites, or because the
conclusions that fact-checking sites draw have a left-leaning bias, or some combination.
Some anecdotal reports support the idea that the majority of election-related fake news was pro-Trump: some fake news providers reportedly found higher demand for pro-Trump (or anti-Clinton) fake news, and responded by providing more of it (Sydell 2016).

Figure 4
Most Important Source of 2016 Election News
Notes: Our post-election survey asked, “Which of these sources was your most important source of news and information about the 2016 election?” This figure plots responses. Observations are weighted for national representativeness.
There could be several possible explanations for a preponderance of pro-Trump fake
news. The more marked decline of trust in the mainstream media among Republicans
shown in Figure 2 could have increased their relative demand for news from nontraditional
sources, as could a perception that the mainstream media tended to favor Clinton. Pro-
Trump (and anti-Clinton) storylines may have simply been more compelling than pro-
Clinton (and anti-Trump) storylines due to particulars of these candidates, perhaps related
to the high levels of media attention that Trump received throughout the campaign. Or, it
could theoretically be that Republicans are for some reason more likely to enjoy or believe
fake news. Some prior evidence argues against the last hypothesis. McClosky and Chong
(1985) and Uscinski, Klofstad, and Atkinson (2016) find that people on the left and right
are equally disposed to conspiratorial thinking. Furthermore, Bakshy, Messing, and
Adamic (2015) find that conservatives are actually exposed to more cross-cutting news
content than liberals, which could help conservatives to be better at detecting partisan fake
news. Below, we present further evidence on this hypothesis from our survey.
Exposure to Fake News
How much fake news did the typical voter see in the run-up to the 2016 election?
While there is a long literature measuring media exposure (for example, Price and Zaller
1993), fake news presents a particular challenge: much of its circulation is on Facebook
(and other social media) news feeds, and these data are not public. We provide three
benchmarks for election-period fake news exposure, which we report as average exposure
for each of the 248 million American adults.
First, we can use prior evidence to predict the number of times the articles in our
database were read based on the number of times they were shared. The corporate website
of Eventbrite (2012) reports that links to its events on Facebook generate 14 page visits per
share. A blog post by Jessica Novak (undated) reports that for a set of “top performing”
stories on Facebook the ratio of visits to shares was also 14. Zhao, Wang, Tao, Ma, and
Guan (2013) report that the ratio of views to shares for videos on the Chinese social
networking site Renren ranges from 3 to 8. Based on these very rough reference points, we
consider a ratio of 20 page visits per share as an upper bound on the plausible range. This
implies that the 38 million shares of fake news in our database translate into 760 million
page visits, or about three visits per US adult.
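The arithmetic behind this first benchmark can be restated directly; the snippet below simply reproduces the calculation using the 20-visits-per-share upper bound discussed above.

```python
shares = 38e6          # Facebook shares of fake articles in the database
visits_per_share = 20  # upper-bound ratio from the reference points above
us_adults = 248e6      # US adult population used throughout

visits = shares * visits_per_share
print(visits)              # 760,000,000 page visits
print(visits / us_adults)  # roughly 3 visits per US adult
```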
Second, we can use web browsing data to measure impressions on fake news websites.
For the month around the 2016 election, there were 159 million impressions on the 65
websites in the bottom part of Figure 3, or 0.64 impressions per adult. This is dwarfed by
the 3 billion impressions on the 665 top news websites over the same period. Furthermore,
not all content on these 65 sites is false: in a random sample of articles from these sites, we
categorized just under 55 percent as false, either because the claim was refuted by a
mainstream news site or fact-checking organization, or because the claim was not covered
on any other sites despite being important enough that it would have been covered on other
sites if it were true. When comparing these first two approaches to estimating election-
period fake news exposure, remember that the first approach uses cumulative Facebook
shares as of early December 2016 for fake news articles that were fact-checked in the three
months before the election, while the second approach uses web traffic from a one-month period from late October to late November 2016.
Third, we can use our post-election survey to estimate the number of articles
respondents saw and remembered. The survey gave respondents 15 news headlines—three
headlines randomly selected from each of the five categories detailed earlier—and asked
if they recalled seeing the headline (“Do you recall seeing this reported or discussed prior
to the election?”) and if they believed it (“At the time of the election, would your best guess
have been that this statement was true?”).
Figure 5
Percent of US Adult Population that Recall Seeing or that Believed Election News
Notes: In our post-election survey, we presented 15 headlines. For each headline, the survey asked whether respondents recall seeing the headline (“Do you recall seeing this reported or discussed before the election?”) and whether they believed it (“At the time of the election, would your best guess have been that this statement was true?”). The left bars present the share of respondents who recall seeing the headlines in each category, and the right bars present the share of respondents who recall seeing and believed the headlines. “Big True” headlines are major headlines leading up to the election; “Small True” headlines are the minor fact-checked headlines that we gathered from Snopes and PolitiFact. The Placebo fake news headlines were made up for the research and never actually circulated. Observations are weighted for national representativeness.

Figure 5 presents the share of respondents that recalled seeing (left bar) and seeing and believing (right bar) headlines, averaging responses across all the headlines within each of our main categories. Rates of both seeing and believing are much higher for true than fake stories, and they are substantially higher for the “Big True” headlines (the major headlines leading up to the election) than for the “Small True” headlines (the minor fact-checked headlines that we gathered from Snopes and PolitiFact). The Placebo fake news articles, which never actually circulated, are
approximately equally likely to be recalled and believed as the Fake news articles which
did actually circulate. This implies that there is a meaningful rate of false recall of articles
that people never actually saw, which could cause the survey measure to significantly
overstate true exposure. On the other hand, people likely forgot some of the Fake articles
that they were actually exposed to, which causes the survey responses to understate true
exposure.
In summary, one can think of recalled exposure as determined both by actual exposure
and by the headline’s perceived plausibility—people might think that if a headline is
plausible, they probably saw it reported somewhere. Then, we show that if the Placebo
headlines are equally plausible as the Fake headlines, the difference between recall of Fake
and Placebo headlines represents the rate of true exposure that was remembered. The
Appendix available online with this paper at http://e-jep.org presents additional theoretical
and empirical discussion of false recall in our data.
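One minimal way to write down this differencing logic (the online Appendix states the general conditions; the notation here is only illustrative) is

recall_Fake = e + false_recall(plausibility of Fake headlines),
recall_Placebo = false_recall(plausibility of Placebo headlines),

where e is the share of respondents who actually saw and remembered a Fake headline and false_recall(·) is the rate at which a never-seen headline is nonetheless “recalled.” If the Placebo headlines are as plausible as the Fake headlines, the two false-recall terms are equal, and recall_Fake − recall_Placebo = e.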
After weighting for national representativeness, 15 percent of survey respondents
recalled seeing the Fake stories, and 8 percent both recalled seeing the story and said they
believed it.3 By comparison, about 14 percent of people report seeing the placebo stories,
and about 8 percent report seeing and believing them. We estimate that the average Fake
headline was 1.2 percentage points more likely to be seen and recalled than the average
Placebo headline, and the 95 percent confidence interval allows us to exclude differences
greater than 2.9 percentage points.
We can use these results to provide a separate estimate of fake news exposure. The
average Fake article that we asked about in the post-election survey was shared
0.386 million times on Facebook. If the average article was seen and recalled by 1.2 percent
of American adults, this gives (0.012 recalled exposure)/(0.386 million shares) ≈ 0.03
chance of a recalled exposure per million Facebook shares. Given that the Fake articles in
our database had 38 million Facebook shares, this implies that the average adult saw and
remembered 0.03/million × 38 million ≈ 1.14 fake news articles from our fake news
database.
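The survey-based estimate can be restated as a short calculation; the inputs below are the rounded figures reported above, so the final number differs slightly from the 1.14 obtained with unrounded values.

```python
recall_diff = 0.012           # Fake minus Placebo recall rate (1.2 percentage points)
shares_per_article = 0.386e6  # average Facebook shares of the Fake survey articles
total_shares = 38e6           # Facebook shares of all fake articles in the database

per_million_shares = recall_diff / (shares_per_article / 1e6)   # about 0.03
articles_per_adult = per_million_shares * (total_shares / 1e6)  # about 1.2 (1.14 unrounded)
print(round(per_million_shares, 3), round(articles_per_adult, 2))
```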
All three approaches suggest that election-period fake news exposure was on the order
of one or perhaps several articles read per adult. We emphasize several important caveats.
First, each of these measures excludes some forms of exposure that could have been
influential. All of them exclude stories or sites omitted from our database. Estimated page
visits or impressions exclude cases in which users saw a story within their Facebook news
feed but did not click through to read it. Our survey-based recall measure excludes stories
that users saw but did not remember, and may be subject to other biases associated with
survey-based estimates of media exposure (Bartels 1993; Prior 2009; Guess 2015).
It is both privately and socially valuable when people can infer the true state of the
world. What factors predict the ability to distinguish between real and fake news? This
analysis parallels a literature in political science measuring and interpreting correlates of
misinformation, including Lewandowsky, Oberauer, and Gignac (2013), Malka, Krosnick,
and Langer (2009), and Oliver and Wood (2014).
We construct a variable C_ia that takes value 1 if survey respondent i correctly identifies whether article a is true or false, 0.5 if respondent i is “not sure,” and value 0 otherwise. For example, if headline a is true, then C_ia takes value 1 if person i responded “Yes” to “would your best guess have been that this statement was true?”; 0.5 if person i responded “Not sure”; and 0 if person i responded “No.” We use C_ia as the dependent variable and a vector X_i of individual characteristics in a linear regression:

C_ia = α_0 + α_1 X_i + ε_ia.
3 These shares are broadly consistent with the results of a separate survey conducted by Silverman and Singer-Vine (2016): for a set of five fake news stories, they find that the share of respondents who have heard them ranges from 10 to 22 percent and the share who rate them as “very accurate” ranges from 28 to 49 percent.
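A specification of this form could be estimated roughly as in the sketch below. The data file, column names, and weight variable are hypothetical stand-ins, and the sketch only mirrors the description in the text (weighted least squares with standard errors clustered by respondent); it is not the estimation code behind Table 1.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent-headline pair, with
# C in {0, 0.5, 1} indicating a correct belief, as defined in the text.
df = pd.read_csv("survey_long.csv")  # hypothetical file name

model = smf.wls(
    "C ~ democrat + republican + media_time + social_media_most_important"
    " + social_media_segregation + undecided + education + age",
    data=df,
    weights=df["survey_weight"],     # weights for national representativeness
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(result.summary())
```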
Table 1 reports results. Column 1 includes only false articles (both Fake and Placebo),
and focuses only on party affiliation; the omitted category is Independents. In these data, it
is indeed true that Republicans were statistically less likely than Democrats to report that
they (correctly) did not believe a false article. Column 2 includes only true articles (both
Big True and Small True categories). This suggests that Republicans are also more likely
than Democrats to correctly believe articles that were true (p = 0.124). These results
suggest that in our data, Republicans were not generally worse at inference: instead, they
tended to be more credulous of both true and false articles. Of course, it is possible that this
is simply an artifact of how different respondents interpreted the survey design. For
example, it could be that Republicans tended to expect a higher share of true headlines in
our survey, and thus were less discerning.
Another possible explanation is that the differences between parties hide other factors
associated with party affiliation. Columns 3 and 4 test this possibility, including a vector
of additional covariates. The differences between the Democrat and Republican indicator
variables are relatively robust. Column 5 includes all articles, which weights true and false
articles by the proportions in our survey sample. Given that our survey included a large
proportion of fake articles that Republicans were less likely to recognize as false,
Democrats are overall more likely to correctly identify true versus false articles. Three
correlations tend to be statistically significant: people who spend more time consuming
media, people with higher education, and older people have more accurate beliefs about
news. As with Republicans relative to Democrats, people who report that social media were
their most important sources of election news were more likely both to correctly believe
true headlines and to incorrectly believe false headlines.
The association of education with correct beliefs should be highlighted. Flynn, Nyhan,
and Reifler (2017) argue that education could have opposing effects on political
misperceptions. On the one hand, education should increase people’s ability to discern fact
from fiction. On the other hand, in the presence of motivated reasoning, education gives
people better tools to counterargue against incongruent information. To the extent that the
association in our data is causal, it would reinforce many previous arguments that the social
return to education includes cognitive abilities that better equip citizens to make informed
voting decisions. For example, Adam Smith (1776) wrote, “The more [people] are
instructed, the less liable they are to the delusions of enthusiasm and superstition, which,
among ignorant nations, frequently occasion the most dreadful disorders.”
A common finding in the survey literature on rumors, conspiracy theories, and factual
beliefs is that partisan attachment is an important predictor of beliefs (for example, Oliver
and Wood 2014; Uscinski, Klofstad, and Atkinson 2016).
Table 1
What Predicts Correct Beliefs about News Headlines?
[Regression coefficient estimates for columns (1)–(5) are omitted from this version.]
Note: This table presents estimates of a regression of a dependent variable measuring correct beliefs about headlines on individual characteristics. Columns 1 and 3 include only false headlines, columns 2 and 4 contain only true headlines, and column 5 contains all headlines. All columns include additional demographic controls: income, race, and gender. “Social media most important” means social media were the respondent’s most important sources of election news. “Social media ideological segregation” is the self-reported share (from 0 to 1) of social media friends that preferred the same presidential candidate. “Undecided” is an indicator variable for whether the respondent decided which candidate to vote for less than three months before the election. Observations are weighted for national representativeness. Standard errors are robust and clustered by survey respondent.
*, **, *** indicate statistically significantly different from zero with 90, 95, and 99 percent confidence, respectively.
For example, Republicans are more likely than Democrats to believe that President Obama
was born outside the United States, and Democrats are more likely than Republicans to
believe that President Bush was complicit in the 9/11 attacks (Cassino and Jenkins 2013).
Such polarized beliefs are consistent with a Bayesian framework, where posteriors depend
partially on priors, as well as with models of motivated reasoning (for example, Taber and
Lodge 2006, or see the symposium in the Summer 2016 issue of this journal). Either way,
the ability to update one’s priors in response to factual information is privately and socially
valuable in our model, and polarized views on factual issues can damage society’s ability
to come to agreement on what social problems are important and how to address them
(Sunstein 2001a, b, 2007).
Given this discussion, do we also see polarized beliefs with respect to fake news? And
if so, what factors moderate ideologically aligned inference—that is, what factors predict a
lower probability that a Republican is more likely to believe pro-Trump news than pro-
Clinton news, or that a Democrat is more likely to believe pro-Clinton than pro-Trump
news? To gain insight into this question, we define B_ia as a measure of whether individual i believed article a, taking value 1 if “Yes,” 0.5 if “Not sure,” and 0 if “No.” We also define D_i and R_i as Democrat and Republican indicators, and C_a and T_a as indicators for whether headline a is pro-Clinton or pro-Trump. We then run the following regression in the sample of Democrats and Republicans, excluding Independents:

B_ia = β_D (D_i × C_a) + β_R (R_i × T_a) + γ_D D_i + γ_R R_i + ε_ia.
The first two independent variables are interaction terms; their coefficients β_D and β_R measure whether a Democrat is more likely to believe a pro-Clinton headline and whether a Republican is more likely to believe a pro-Trump headline. The second two independent variables control for how likely Democrats or Republicans as a group are to believe all stories. Since headlines are randomly assigned to respondents, with equal balance of true versus false and pro-Trump versus pro-Clinton, the estimated β parameters will measure ideologically aligned inference.
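The interaction specification can be sketched the same way; again the variable names are hypothetical, and the code only illustrates the structure of the regression (party-by-slant interactions plus party main effects, with no common intercept), not the actual estimation code behind Table 2.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_long.csv")  # same hypothetical file as the earlier sketch
partisans = df[(df["democrat"] == 1) | (df["republican"] == 1)].copy()

# "0 +" drops the common intercept, so the democrat and republican dummies play
# the role of the group-level terms in the equation above.
aligned = smf.wls(
    "B ~ 0 + democrat:pro_clinton + republican:pro_trump + democrat + republican",
    data=partisans,
    weights=partisans["survey_weight"],
).fit(cov_type="cluster", cov_kwds={"groups": partisans["respondent_id"]})
print(aligned.params)  # the interaction coefficients correspond to beta_D and beta_R
```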
Table 2 presents the results. Column 1 presents estimates of β_D and β_R. Democrats and
Republicans, respectively, are 17.2 and 14.7 percentage points more likely to believe
ideologically aligned articles than they are to believe nonaligned articles. Column 2 takes
an intermediate step, constraining the β coefficients to be the same. Column 3 then allows
β to vary by the same vector of X_i variables as reported in Table 1, except excluding D_i to
avoid collinearity. In both columns 1 and 3, any differences between Democrats and
Republicans in the magnitude of ideologically aligned inference are not statistically
significant.
Three variables are strongly correlated with ideologically aligned inference. First,
heavy media consumers are more likely to believe ideologically aligned articles. Second,
those with segregated social networks are significantly more likely to believe ideologically
aligned articles, perhaps because they are less likely to receive disconfirmatory information
from their friends. The point estimate implies that a 0.1 (10 percentage point) increase in
the share of social media friends that preferred the same presidential candidate is associated
with a 0.0147 (1.47 percentage point) increase in belief of ideologically aligned headlines
relative to ideologically crosscutting headlines. Third, “undecided” adults (those who did
not make up their minds about whom to vote for until less than three months before the
election) are less likely to believe ideologically aligned articles than more decisive voters.
This is consistent with undecided voters having less-strong ideologies in the first place.
Interestingly, social media use and education are not statistically significantly associated
with more or less ideologically aligned inference.
Table 2
Ideological Alignment and Belief of News Headlines
[Regression coefficient estimates and standard errors for columns (1)–(3) are omitted from this version.]
Notes: This table presents estimates of a regression of a variable measuring belief of news headlines on the interaction of political party affiliation indicators and pro-Clinton or pro-Trump headline indicators. The sample includes all news headlines (both true and false) but excludes survey respondents who are Independents. “Social media most important” means social media were the respondent’s most important sources of election news. “Social media ideological segregation” is the self-reported share (from 0 to 1) of social media friends that preferred the same presidential candidate. “Undecided” is an indicator variable for whether the respondent decided which candidate to vote for less than three months before the election. Observations are weighted for national representativeness. Standard errors are robust and clustered by survey respondent. *, **, ***: statistically significantly different from zero with 90, 95, and 99 percent confidence, respectively.
One caveat to these results is that ideologically aligned inference may be exaggerated
by respondents’ tendency to answer expressively or to want to “cheerlead” for their party
(Bullock, Gerber, Hill, and Huber 2015; Gerber and Huber 2009; Prior, Sood, and Khanna
2015). Partisan gaps could be smaller in a survey with strong incentives for correct answers.
Conclusion
In the aftermath of the 2016 US presidential election, it was alleged that fake news
might have been pivotal in the election of President Trump. We do not provide an
assessment of this claim one way or another.
That said, the new evidence we present clarifies the level of overall exposure to fake
news, and it can give some sense of how persuasive fake news would need to have been to
have been pivotal. We estimate that the average US adult read and remembered on the order
of one or perhaps several fake news articles during the election period, with higher
exposure to pro-Trump articles than pro-Clinton articles. How much this affected the
election results depends on the effectiveness of fake news exposure in changing the way
people vote. As one benchmark, Spenkuch and Toniatti (2016) show that exposing voters
to one additional television campaign ad changes vote shares by approximately 0.02
percentage points. This suggests that if one fake news article were about as persuasive as
one TV campaign ad, the fake news in our database would have changed vote shares by an
amount on the order of hundredths of a percentage point. This is much smaller than
Trump’s margin of victory in the pivotal states on which the outcome depended.
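The magnitude of this back-of-the-envelope calculation follows directly from the figures above, assuming (as in the text) that one fake news exposure is about as persuasive as one television campaign ad.

```python
exposures_per_adult = [1.14, 3.0]  # remembered-article and page-visit benchmarks
effect_per_ad_pp = 0.02            # vote-share change per ad (Spenkuch and Toniatti 2016)

for e in exposures_per_adult:
    print(f"{e} exposures -> about {e * effect_per_ad_pp:.2f} percentage points")
# on the order of 0.02 to 0.06 percentage points, i.e., hundredths of a point
```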
Of course there are many reasons why a single fake news story could have been more
effective than a television commercial. If it were true that the Pope endorsed Donald
Trump, this fact would be significantly more surprising—and probably move a rational
voter’s beliefs by more as a result—than the information contained in a typical campaign
ad. Moreover, as we emphasize above, there are many ways in which our estimates could
understate true exposure. We only measure the number of stories read and remembered,
and the excluded stories seen on news feeds but not read, or read but not remembered, could
have had a large impact. Our fake news database is incomplete, and the effect of the stories
it omits could also be significant.
We also note that there are several ways in which this back-of-the-envelope
calculation is conservative, in the sense that it could overstate the importance of fake news.
We consider the number of stories voters read regardless of whether they believed them.
We do not account for diminishing returns, which could reduce fake news’ effect to the
extent that a small number of voters see a large number of stories. Also, this rough
calculation does not explicitly take into account the fact that a large share of pro-Trump
fake news is seen by voters who are already predisposed to vote for Trump—the larger this
selective exposure, the smaller the impact we would expect of fake news on vote shares.
To the extent that fake news imposes social costs, what can and should be done? In
theory, a social planner should want to address the market failures that lead to distortions,
which would take the form of increasing information about the state of the world and
increasing incentives for news consumers to infer the true state of the world. In practice,
social media platforms and advertising networks have faced some pressure from consumers
and civil society to reduce the prevalence of fake news on their systems. For example, both
Facebook and Google are removing fake news sites
from their advertising platforms on the grounds that they violate policies against
misleading content (Wingfield, Isaac, and Benner 2016). Furthermore, Facebook has taken
steps to identify fake news articles, flag false articles as “disputed by 3rd party fact-
checkers,” show fewer potentially false articles in users’ news feeds, and help users avoid
accidentally sharing false articles by notifying them that a story is “disputed by 3rd parties”
before they share it (Mosseri 2016). In our theoretical framework, these actions may
increase social welfare, but identifying fake news sites and articles also raises important
questions about who becomes the arbiter of truth.
■ We thank David Deming, Brendan Nyhan, Craig Silverman, Aaron Smith, Joe Uscinski,
David Vannette, and many other colleagues for helpful conversations and feedback. We
are grateful to Chuan Yu and Nano Barahona for research assistance, and we thank
Stanford University for financial support. Our survey was determined to be exempt from
human subjects review by the NYU and Stanford Institutional Review Boards.
References

Abramowitz, Alan I., and Kyle L. Saunders. 2008. “Is Polarization a Myth?” Journal of Politics 70(2): 542–55.
American Enterprise Institute. 2013. “Public Opinion on Conspiracy Theories.” AEI Public Opinion Study. Compiled by Karlyn Bowman and Andrew Rugg. November. https://www.aei.org/wp-content/uploads/2013/11/-public-opinion-on-conspiracy-theories_181649218739.pdf.
American National Election Studies. 2010. Time Series Cumulative Data File [dataset]. Produced and distributed by Stanford University and the University of Michigan. http://www.electionstudies.org/studypages/anes_timeseries_cdf/anes_timeseries_cdf.htm.
Bagdikian, Ben H. 1983. The Media Monopoly. Beacon Press.
Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348(6239): 1130–32.
Bartels, Larry M. 1993. “Messages Received: The Political Impact of Media Exposure.” American Political Science Review 87(2): 267–85.
Berinsky, Adam J. 2017. “Rumors and Health Care Reform: Experiments in Political Misinformation.” British Journal of Political Science 47(2): 241–62.
Brayton, Ed. 2016. “Please Stop Sharing Links to These Sites.” Patheos, September 18. http://www.patheos.com/blogs/dispatches/2016/09/18/please-stop-sharing-links-to-these-sites/.
Bullock, John G., Alan S. Gerber, Seth J. Hill, and Gregory A. Huber. 2015. “Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political Science 10(4): 519–78.
BuzzFeed News. No date. “Election Content Engagement.” [A spreadsheet.] https://docs.google.com/spreadsheets/d/1ysnzawW6pDGBEqbXqeYuzWa7Rx2mQUip6CXUUUk4jIk/edit#gid=1756764129.
Cassino, Dan, and Krista Jenkins. 2013. “Conspiracy Theories Prosper: 25% of Americans Are ‘Truthers.’” Fairleigh Dickinson University’s Public Mind Poll. January 17. http://publicmind.fdu.edu/2013/outthere.
Chapman University. 2016. “What Aren’t They Telling Us?” Chapman University Survey of American Fears. October 11. https://blogs.chapman.edu/wilkinson/2016/10/11/what-arent-they-telling-us/.
DellaVigna, Stefano, and Matthew Gentzkow. 2010. “Persuasion: Empirical Evidence.” Annual Review of Economics 2: 643–69.
DellaVigna, Stefano, and Ethan Kaplan. 2007. “The Fox News Effect: Media Bias and Voting.” Quarterly Journal of Economics 122(3): 1187–1234.
Dewey, Caitlin. 2014. “This Is Not an Interview with Banksy.” Washington Post, October 22. https://www.washingtonpost.com/news/the-intersect/wp/2014/10/21/this-is-not-an-interview-with-banksy/?tid=a_inl&utm_term=.8a95d83438e9.
Dewey, Caitlin. 2016. “Facebook Fake-News Writer: ‘I Think Donald Trump is in the White House because of Me.’” Washington Post, November 17. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.
DiFonzo, Nicholas, and Prashant Bordia. 2007. Rumor Psychology: Social and Organizational Approaches. American Psychological Association.
Enikolopov, Ruben, Maria Petrova, and Ekaterina Zhuravskaya. 2011. “Media and Political Persuasion: Evidence from Russia.” American Economic Review 101(7): 3253–85.
Eventbrite. 2012. “Social Commerce: A Global Look at the Numbers.” October 23. https://www.eventbrite.com/blog/ds00-social-commerce-a-global-look-at-the-numbers/.
Fiorina, Morris P., and Samuel J. Abrams. 2008. “Political Polarization in the American Public.” Annual Review of Political Science 11: 563–88.
Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80(1): 298–320.
Flynn, D. J., Brendan Nyhan, and Jason Reifler. 2017. “The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics.” Advances in Political Psychology 38(S1): 127–50.
Friggeri, Adrien, Lada Adamic, Dean Eckles, and Justin Cheng. 2014. “Rumor Cascades.” Eighth International AAAI Conference on Weblogs and Social Media.
Gentzkow, Matthew, and Jesse M. Shapiro. 2006. “Media Bias and Reputation.” Journal of Political Economy 114(2): 280–316.
Gentzkow, Matthew, and Jesse M. Shapiro. 2011. “Ideological Segregation Online and Offline.” Quarterly Journal of Economics 126(4): 1799–1839.
Gentzkow, Matthew, Jesse M. Shapiro, and Daniel F. Stone. 2016. “Media Bias in the Marketplace: Theory.” Chap. 14 in Handbook of Media Economics, vol. 1B, edited by Simon Anderson, Joel Waldfogel, and David Stromberg.
Gerber, Alan S., James G. Gimpel, Donald P. Green, and Daron R. Shaw. 2011. “How Large and Long-lasting are the Persuasive Effects of Televised Campaign Ads? Results from a Randomized Field Experiment.” American Political Science Review 105(1): 135–150.
Gerber, Alan S., and Donald P. Green. 2000. “The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment.” American Political Science Review 94(3): 653–63.
Gerber, Alan S., and Gregory A. Huber. 2009. “Partisanship and Economic Behavior: Do Partisan Differences in Economic Forecasts Predict Real Economic Behavior?” American Political Science Review 103(3): 407–26.
Gottfried, Jeffrey, and Elisa Shearer. 2016. “News Use across Social Media Platforms 2016.” Pew Research Center, May 26. http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016.
Guess, Andrew M. 2015. “Measure for Measure: An Experimental Test of Online Political Media Exposure.” Political Analysis 23(1): 59–75.
Huber, Gregory A., and Kevin Arceneaux. 2007. “Identifying the Persuasive Effects of Presidential Advertising.” American Journal of Political Science 51(4): 957–77.
Kaplan, Richard L. 2002. Politics and the American Press: The Rise of Objectivity, 1865–1920. Cambridge University Press.
Keeley, Brian L. 1999. “Of Conspiracy Theories.” Journal of Philosophy 96(3): 109–26.
Lang, Kurt, and Gladys Engel Lang. 2002. Television and Politics. Transaction Publishers.
Lelkes, Yphtach. 2016. “Mass Polarization: Manifestations and Measurements.” Public Opinion Quarterly 80(S1): 392–410.
Lewandowsky, Stephan, Gilles E. Gignac, and Klaus Oberauer. 2013. “The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science.” PloS One 8(10): e75637.
Malka, Ariel, Jon A. Krosnick, and Gary Langer. 2009. “The Association of Knowledge with Concern about Global Warming: Trusted Information Sources Shape Public Thinking.” Risk Analysis 29(5): 633–47.
Martin, Gregory J., and Ali Yurukoglu. 2014. “Bias in Cable News: Persuasion and Polarization.” NBER Working Paper 20798.
McClosky, Herbert, and Dennis Chong. 1985. “Similarities and Differences between Left-Wing and Right-Wing Radicals.” British Journal of Political Science 15(3): 329–63.
Mosseri, Adam. 2016. “News Feed FYI: Addressing Hoaxes and Fake News.” Newsroom, Facebook, December 15. http://newsroom.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/.
Mullainathan, Sendhil, and Andrei Shleifer. 2005. “The Market for News.” American Economic Review 95(4): 1031–53.
Napoli, Philip M. 2014. “Measuring Media Impact: An Overview of the Field.” Norman Lear Center Media Impact Project. https://learcenter.org/pdf/measuringmedia.pdf.
Novak, Jessica. No date. “Quantifying Virality: The Visits to Share Ratio.” http://intelligence.r29.com/post/105605860880/quantifying-virality-the-visits-to-shares-ratio.
Nyhan, Brendan, Jason Reifler, Sean Richey, and Gary L. Freed. 2014. “Effective Messages in Vaccine Promotion: A Randomized Trial.” Pediatrics 133(4): 835–42.
Nyhan, Brendan, Jason Reifler, and Peter A. Ubel. 2013. “The Hazards of Correcting Myths about Health Care Reform.” Medical Care 51(2): 127–32.
Oliver, J. Eric, and Thomas J. Wood. 2014. “Conspiracy Theories and the Paranoid Style(s) of Mass Opinion.” American Journal of Political Science 58(4): 952–66.
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. Penguin UK.
Parkinson, Hannah Jane. 2016. “Click and Elect: How Fake News Helped Donald Trump Win a Real Election.” Guardian, November 14.
PolitiFact. No date. http://www.politifact.com/truth-o-meter/elections/2016/president-united-states/.
Price, Vincent, and John Zaller. 1993. “Who Gets the News? Alternative Measures of News Reception and Their Implications for Research.” Public Opinion Quarterly 57(2): 133–64.
Prior, Markus. 2009. “The Immensely Inflated News Audience: Assessing Bias in Self-Reported News Exposure.” Public Opinion Quarterly 73(1): 130–43.
Prior, Markus. 2013. “Media and Political Polarization.” Annual Review of Political Science 16: 101–27.
Prior, Markus, Gaurav Sood, and Kabir Khanna. 2015. “You Cannot Be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions.” Quarterly Journal of Political Science 10(4): 489–518.
Read, Max. 2016. “Donald Trump Won because of Facebook.” New York Magazine, November 9.
Silverman, Craig. 2016. “This Analysis Shows how Fake Election News Stories Outperformed Real News on Facebook.” BuzzFeed News, November 16.
Silverman, Craig, and Jeremy Singer-Vine. 2016. “Most Americans Who See Fake News Believe It, New Survey Says.” BuzzFeed News, December 6.
Smith, Adam. 1776. The Wealth of Nations. London: W. Strahan.
Spenkuch, Jörg L., and David Toniatti. 2016. “Political Advertising and Election Outcomes.” CESifo Working Paper Series 5780.
Subramanian, Samanth. 2017. “Inside the Macedonian Fake-News Complex.” Wired, February 15.
Sunstein, Cass R. 2001a. Echo Chambers: Bush v. Gore, Impeachment, and Beyond. Princeton University Press.
Sunstein, Cass R. 2001b. Republic.com. Princeton University Press.
Sunstein, Cass R. 2007. Republic.com 2.0. Princeton University Press.
Swift, Art. 2016. “Americans’ Trust in Mass Media Sinks to New Low.” Gallup, September 14. http://www.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx.
Sydell, Laura. 2016. “We Tracked Down a Fake-News Creator in the Suburbs. Here’s What We Learned.” National Public Radio, November 23. http://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs.
Taber, Charles S., and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation of Political Beliefs.” American Journal of Political Science 50(3): 755–69.
Townsend, Tess. 2016. “Meet the Romanian Trump Fan behind a Major Fake News Site.” Inc. http://www.inc.com/tess-townsend/ending-fed-trump-facebook.html.
Uscinski, Joseph E., Casey Klofstad, and Matthew D. Atkinson. 2016. “What Drives Conspiratorial Beliefs? The Role of Informational Cues and Predispositions.” Political Research Quarterly 69(1): 57–71.
Wingfield, Nick, Mike Isaac, and Katie Benner. 2016. “Google and Facebook Take Aim at Fake News Sites.” New York Times, November 14.
Yeager, David S., Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang. 2011. “Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples.” Public Opinion Quarterly 75(4): 709–47.
Zhao, Junzhou, Pinghui Wang, Jing Tao, Xiaobo Ma, and Xiaohong Guan. 2013. “A Peep on the Interplays between Online Video Websites and Online Social Networks.” arXiv:1305.4018.
Zimdars, Melissa. 2016. “False, Misleading, Clickbait-y, and Satirical ‘News’ Sources.” http://d279m997dpfwgl.cloudfront.net/wp/2016/11/Resource-False-Misleading-Clickbait-y-and-Satirical-%E2%80%9CNews%E2%80%9D-Sources-1.pdf.