The Chaos Machine
https://www.youtube.com/watch?v=rMrpGNRzmQ8
How does social media influence behavior, and what are the consequences of this technology?
Brain chemistry changes after social media use
https://www.youtube.com/watch?v=vPOquG6OxbY
(up to 4:37)
DISCUSSION
1. To what extent should governments be involved in regulating social media platforms? Are there
risks of government overreach or censorship?
2. How should social media platforms approach content moderation? What are the challenges in
striking a balance between allowing diverse opinions and preventing harmful content?
3. What role should regulations play in combating fake news and disinformation on social media?
How can platforms be held accountable for the spread of false information?
4. Should there be regulations requiring social media platforms to be more transparent about their
algorithms?
5. What types of content should be regulated on social media, and who should be responsible for
enforcing those regulations?
6. How can regulations be designed to protect user privacy and freedom of expression while still
addressing legitimate concerns about harm?
7. Should the approach to regulation differ across different countries and cultures?
Dopamine is social media’s accomplice inside your brain. It’s why your smartphone looks and feels like a slot machine, pulsing with colorful notification badges, whoosh sounds, and gentle vibrations. Those stimuli are neurologically meaningless on their own. But your phone pairs them with activities, like texting a friend or looking at photos, that are naturally rewarding.
How do you feel about the way smartphones and social media use notifications and sounds to grab our attention? How do you think this affects the time we spend on these apps and our connections with others? Do you think it's good or bad for our well-being and relationships?
In 2018, a team of economists offered users different amounts of money to deactivate their accounts for four weeks, looking for the threshold at which at least half of them would say yes. The number turned out to be high: $180. But the people who deactivated experienced more happiness, less anxiety, and greater life satisfaction. After the experiment was over, they used the app less than they had before.
Why do you think people check their smartphones so frequently, especially for social media? What does the study mentioned, where users were offered money to deactivate their accounts, reveal about the relationship between social media use and well-being?
A 2013 study of the Chinese platform Weibo found that anger consistently travels further than other sentiments. Studies of Twitter and Facebook have repeatedly found the same, though researchers have narrowed the effect from anger in general to moral outrage specifically. Users internalize the attentional rewards that accompany such posts, learning to produce more, which also trains the platforms’ algorithms to promote them even further.
Why do you think anger, especially moral outrage, tends to spread more on social media platforms like Twitter and Facebook? What do you think can be done to make social media a less negative experience for individuals?
As the technology advanced, other platforms also expanded their use of self-guided algorithms: Facebook to select what posts users see and what groups they are urged to join; Twitter to surface posts that might entice a user to keep scrolling and tweeting.
“We design a lot of algorithms so we can produce interesting content for you,” Zuckerberg said in an interview.
According to Zuckerberg, algorithms analyze user information to provide interesting content. In your opinion, how might this impact users' online experience? Do you think the design of algorithms to encourage continuous scrolling and engagement has any drawbacks or benefits?
Moral-emotional words convey feelings like disgust, shame, or gratitude. (“Refugees deserve compassion.” “That politician’s views are repulsive.”) More than just words, these are expressions of, and calls for, communal judgment, positive or negative. Tweets with moral-emotional words, he found, traveled 20 percent farther for each moral-emotional word.
In what ways do these words go beyond personal emotions or moral judgments and involve communal judgment? Can you think of examples from your own experiences or observations that align with these findings?
He and two other scholars showed participants a fake social media stream, tracking what captured their attention as they scrolled. Moral-emotional words, they found, overrode people’s attention almost regardless of context. If a boring statement with moral-emotional words and an exciting statement without them both appeared on screen, users were drawn to the former.
Is it reasonable to argue that the use of moral-emotional words in posts inherently draws more attention, regardless of the overall context, or are there situations where other factors may outweigh their influence?
Stage two in social media’s distorting influence, according to the MAD model, is something called internalization. Users who chased the platforms’ incentives received immediate, high-volume social rewards: likes and shares. As psychologists have known since Pavlov, when you are repeatedly rewarded for a behavior, you learn a compulsion to repeat it. As you are trained to turn all discussions into matters of high outrage, to express disgust with out-groups, to assert the superiority of your in-group, you will eventually shift from doing it for external rewards to doing it simply because you want to do it. The drive comes from within. Your nature has been changed.
Can this process of internalization lead to a fundamental shift in users' nature and motivations, transforming the way they interact and communicate on social media?
In an unintended 2015 test of this, Ellen Pao, still Reddit’s chief, tried something unprecedented: rather than promote superusers, Reddit would ban the most toxic of them. Out of tens of millions of users, her team concluded, only about 15,000, all hyperactive, drove much of the hateful content. Expelling them, Pao reasoned, might change Reddit as a whole. She was right, an outside analysis found. With the elimination of this minuscule percentage of users, hate speech overall dropped an astounding 80 percent among those who remained. Millions of people’s behavior had shifted overnight.
How does Ellen Pao's choice to ban a small percentage of toxic hyperactive users on Reddit reflect the ongoing debate regarding platform governance, free speech, and the delicate balance between maintaining an open platform and addressing harmful content?
As evidence mounted throughout 2018, action began to follow. That year, Germany mandated that social media platforms remove any hate speech within twenty-four hours of its being flagged, or face fines. Australia announced an inquiry into “world first” regulations on social media’s harms, calling it a “turning point” amid “global recognition that the internet cannot be that other place where community standards and the rule of law do not apply.” The European Union imposed a series of fines exceeding a billion dollars on Google for antitrust abuses, then threatened regulations against Facebook over hate speech, election influence, misinformation—the whole gamut.
Should governments impose strict rules on social media platforms to swiftly remove hate speech and address potential harms, or does this risk infringing on the freedom of expression online? What are the key arguments for and against such regulations?
PRONUNCIATION
https://youtu.be/pvTYuqILyeM?t=77
(1:17 to 15:33)