BSIS 441 Assignment
Group 9 Members
Task: Using examples, evaluate the role of social media in shaping legal and policy challenges in information dissemination.
Social media has revolutionized the way information is disseminated, consumed, and shared
across the globe. Platforms like Facebook, Twitter (now X), Instagram, and TikTok have
transformed communication by making it easier for individuals, organizations, and governments
to broadcast information to vast audiences in real-time. This unprecedented speed and reach of
information sharing have led to significant social, political, and economic changes (Koohikamali
& Sidorova, 2017). While these platforms empower users to participate in global conversations,
they also introduce complex legal, policy, and ethical challenges related to information
dissemination.
The rise of social media has created vast opportunities to access and share information. Social
media platforms played a vital role in keeping families, friends, and even workplaces connected
during the peak of the COVID-19 pandemic; they have also enabled and supported civic
movements around the world. At the same time, they have also brought new challenges for
democracy, rule of law and fundamental rights (Wu et al., 2016). The freedoms and rights
associated with expressing, accessing and receiving information are central to democracy, the
rule of law and the exercise of many other fundamental rights (Vaccari & Chadwick, 2019).
However, social media platforms often serve as conduits that amplify mis- and disinformation,
undermining citizens’ access to reliable information and the democratic process, and are thought
by many experts to have weakened the capacity of traditional media to support informed and
constructive political debate (Allcott & Gentzkow, 2017). Given the recent focus on the role of
social media in the spread of fake news regarding current political and social events, it is critical
to comprehend how the public interacts with misinformation on social media platforms. Fake
news concerning contemporary social and political issues spreads at breakneck speed on
social media (Koohikamali & Sidorova, 2017).
On the one hand, social media promotes democratic participation, freedom of expression, and the
rapid spread of information, allowing users to raise awareness on pressing issues and challenge
social norms. On the other hand, it has been a breeding ground for misinformation,
disinformation, hate speech, and privacy violations. These negative consequences raise
significant concerns about the regulation of content, the responsibilities of social media
companies, and the ethical dilemmas surrounding information control. Governments, regulators,
and policymakers worldwide face a delicate balance between protecting freedom of expression
and ensuring that harmful content is curtailed.
The way people get news has changed dramatically because of the Internet. Individuals
formerly relied on conventional media such as radio and television, which comprised fewer and
more well-established news sources. Individuals are increasingly exposed to online sources of
information, such as social networking sites, which allow anybody to publish anything without
the need for "fact-checking or editorial judgment" (Allcott & Gentzkow, 2017). One of the central
legal challenges is the proliferation of misinformation and disinformation on social media
platforms.
Misinformation is false information spread by people who do not realize it is false, usually
because their friends or others are sharing it (Campan et al., 2017). Vaccari and Chadwick (2019) argue that
individuals are more likely to accept information that confirms their beliefs (confirmation bias)
and to spread it without verification, while dismissing information that contradicts those beliefs,
even when it is true.
that do not, even if they are true. Although the spread of misinformation is as old as human
history, social media has changed the game by enabling people to generate misinformation easily
and spread it rapidly in an anonymous and decentralized fashion (Del Vicario et al., 2016). For
example, during the COVID-19 pandemic, the spread of false information about vaccine safety
or the virus itself contributed to public health crises worldwide. Governments and regulators face
the difficult task of curbing this spread without infringing on freedom of expression, a
fundamental right protected in many democracies. In response, various countries have introduced
laws to combat disinformation, but these laws often risk overreach and censorship. This presents
a legal conundrum: how to protect societies from the harm caused by false information while
preserving the essential right to free speech.
The global nature of social media adds complexity to the legal landscape. Information shared on
platforms transcends national boundaries, making it difficult to regulate in a uniform manner.
What is considered lawful speech in one country may be illegal in another. For example, content
that is protected under free speech laws in the United States, such as Holocaust denial, is a
criminal offense in Germany. This variation in legal standards across jurisdictions presents
significant challenges for policymakers and platforms trying to navigate these differences. Social
media companies, which operate globally, are often forced to adapt their policies to meet the
legal requirements of each region, leading to fragmented approaches to content regulation.
Social media also poses significant privacy concerns, which raise both legal and ethical
challenges. Platforms collect vast amounts of personal data, often using it to tailor content and
advertisements to users (Tufekci, 2014). While this practice is profitable for companies, it raises
serious ethical questions about user consent, data security, and the potential for misuse (Zuboff,
2015). The Cambridge Analytica scandal is a prominent example of how personal data was
exploited to influence political outcomes, demonstrating the dangers of unchecked data
harvesting (Cadwalladr & Graham-Harrison, 2018). In response, legislation such as the General
Data Protection Regulation (GDPR) in Europe has been introduced to protect users' privacy by
enforcing stricter guidelines on data collection and usage. However, while regulations like the
GDPR represent important steps forward, enforcing these standards on global platforms remains
a challenge, particularly when social media companies operate across borders with varying legal
frameworks.
Another challenge lies in content moderation, where platforms struggle to balance the removal of
harmful content with protecting users’ right to free expression. Social media companies have
faced intense scrutiny for their handling of hate speech, extremism, and other harmful content
(Crawford & Gillespie, 2016). For instance, Facebook and Twitter were criticized for failing to
curb the spread of misinformation and incitements to violence during politically sensitive
moments, such as the 2016 U.S. election and the 2021 Capitol riot (Isaac & Kang, 2021).
However, when these platforms do act to moderate content, they are often accused of
overstepping their bounds and infringing on free speech. The decision to ban former U.S.
President Donald Trump from Twitter sparked global debate on whether private companies
should have the authority to silence political figures (Newton, 2021). These issues highlight the
difficulties of establishing clear, consistent policies that balance content regulation with free
speech rights (Suzor, 2020).
Social media has contributed to the widening of the digital divide, raising ethical concerns about
equitable access to information. While social media platforms are accessible to those with
internet access, many parts of the world, particularly in developing countries, still face
significant barriers to online connectivity. This creates an unequal distribution of information,
where some individuals or communities are excluded from participating in global conversations
(van Dijk, 2020). During the COVID-19 pandemic, for instance, many public health
announcements and educational resources were disseminated primarily through social media. In
areas without internet access, individuals were left without critical information, exacerbating
existing inequalities (Ahmed et al., 2020). Ethically, this raises concerns about the responsibility
of governments and platforms to ensure that information dissemination does not exclude
marginalized communities (Fuchs, 2021).
Social media has become a powerful tool for information dissemination, but it also presents
profound legal, policy, and ethical challenges (Castells, 2015). The rise of misinformation, the
difficulties in content moderation, the privacy concerns surrounding data collection, and the
ethical implications of algorithmic bias all underscore the need for careful regulation and ethical
reflection (Napoli, 2019). As governments, regulators, and social media companies continue to
grapple with these issues, it is clear that the future of information dissemination will depend on
how these challenges are addressed, ensuring that social media can be a tool for empowerment
rather than harm (Gorwa, 2019).
Definition of terms
Social media: A form of mass media communication on the Internet (such as on websites for
social networking and microblogging) through which users share information, ideas, personal
messages, and other content such as videos (Fuchs, 2021; Napoli, 2019).
Misinformation: False or misleading information shared without malicious intent. It often
spreads when individuals share content without verifying its accuracy (Vosoughi et al., 2018).
Digital Divide: The disparity between those with easy access to digital technology and the
internet and those without, leading to unequal opportunities in accessing information (Ahmed et
al., 2020).
Roles of social media in information dissemination
Social media has become a powerful tool for information dissemination, playing a transformative
role in how information is created, shared, and consumed. It has revolutionized communication
by providing platforms where information can be instantly shared with large, global audiences.
The role of social media in information dissemination spans various fields, including education,
journalism, marketing, and public health, influencing both the speed and scope of how
information reaches people. Here are some of the roles:
i) Speed and Reach
One of the most significant roles of social media in information dissemination is its ability to
deliver information quickly to a vast audience. Unlike traditional media outlets such as
newspapers and television, which are subject to publication schedules or broadcast times, social
media allows for real-time communication. Platforms like Twitter, Facebook, and Instagram
enable users to share information instantly, making them essential tools for news reporting and
public communication during events like natural disasters, political upheavals, or public health
emergencies (Schultz & Göritz, 2011).
The widespread adoption of smartphones and internet access has amplified the reach of social
media, allowing information to be disseminated across the globe with unprecedented speed. This
has made social media a critical channel for organizations, governments, and individuals to
distribute information in real time. For instance, during the COVID-19 pandemic, social media
played a crucial role in disseminating health guidelines, updates on the virus, and vaccine
information to the public (Mheidly & Fares, 2020). The viral nature of content on these platforms
means that even obscure or niche information can reach millions of people in a short period.
ii) User-Generated Content and Citizen Journalism
Social media also democratizes information dissemination by enabling anyone with internet
access to produce and share content. This has led to the rise of citizen journalism, where
individuals report on events and share their perspectives without relying on traditional news
outlets. Platforms like Twitter and YouTube allow users to post live updates and video footage of
events as they happen, often before mainstream media can cover the story (Hermida, 2010). This
has changed the media landscape, making it more participatory and decentralized. However, this
increased accessibility also presents challenges. While social media allows for the quick spread
of information, it also facilitates the dissemination of misinformation and disinformation.
Without the editorial oversight typically found in traditional media, false or misleading
information can spread widely and quickly on social platforms (Allcott & Gentzkow, 2017). This
has led to growing concerns about the role of social media in spreading "fake news" and its
potential to influence public opinion and behavior, particularly in political contexts.
iii) Targeted Information and Algorithms
Another key role of social media in information dissemination is the use of algorithms that
personalize and target content to users based on their interests, behaviors, and online interactions.
Social media platforms utilize complex algorithms to curate content that appears on users' feeds,
tailoring the information they see according to their preferences (Bakshy & Adamic, 2015). This
targeted approach can help deliver relevant information to users, making content more engaging
and increasing its reach. However, the downside of algorithm-driven information dissemination
is the creation of "filter bubbles" or "echo chambers," where users are exposed primarily to
information and viewpoints that align with their existing beliefs. This can limit exposure to
diverse perspectives and contribute to polarization, as users are less likely to encounter
conflicting information or engage with differing viewpoints (Pariser, 2011).
iv) Crisis Communication and Public Health
Social media plays an essential role in crisis communication, providing a platform for rapid
dissemination of critical information during emergencies. Governments, organizations, and
individuals use social media
to share urgent updates, safety information, and calls for action during crises such as natural
disasters, terrorist attacks, or public health emergencies. Platforms like Twitter, with its hashtag
functionality, allow for quick organization of information and the ability to follow unfolding
events in real time (Veil, Buehner, & Palenchar, 2011).
During public health crises, social media has proven to be an effective tool for disseminating
health information and raising awareness. For example, during the Ebola outbreak and the
COVID-19 pandemic, public health organizations such as the World Health Organization (WHO)
and the Centers for Disease Control and Prevention (CDC) used social media to share timely
updates, preventative measures, and vaccine information (Cinelli et al., 2020). Social media
platforms have also been used to combat misinformation during these crises, providing fact-
checking services and amplifying verified sources.
v) Marketing and Branding
In the realm of marketing, social media has become indispensable for the dissemination of
information about products, services, and brands. Businesses leverage platforms like Instagram,
Facebook, and Twitter to engage directly with consumers, creating interactive content that can be
shared and reshared by users. This viral potential is a key feature of social media's role in
marketing, allowing businesses to reach broader audiences through user-generated content,
influencer partnerships, and paid advertisements (Kaplan & Haenlein, 2010).
Additionally, social media enables businesses to gather real-time feedback from customers,
engage in two-way communication, and build brand loyalty. By monitoring social media
channels, companies can identify trends, customer preferences, and potential issues, allowing for
more responsive and personalized marketing strategies.
Legal and Policy Challenges in Information Dissemination through Social Media
In the contemporary digital landscape, social media has emerged as a dominant platform for
information dissemination, profoundly influencing communication, politics, and social dynamics
worldwide. However, the proliferation of information through these platforms is accompanied by
significant legal and policy challenges that can undermine the efficacy and integrity of
communication. These challenges arise from issues such as misinformation, privacy concerns,
regulatory frameworks, intellectual property rights, and the role of platform governance.
i) Misinformation and Disinformation
One of the most pressing legal challenges associated with information dissemination on social
media is the spread of misinformation and disinformation. Misinformation refers to false or
misleading information shared without malicious intent, while disinformation is disseminated
with the intent to deceive (Lewandowsky & Cook, 2017). The rapid spread of both forms of
information can lead to public panic, erosion of trust in institutions, and manipulation of public
opinion (Cinelli et al., 2020). For instance, during health crises such as the COVID-19 pandemic,
false information regarding the virus and its treatments proliferated on social media,
complicating public health responses (Chou et al., 2020). Legal frameworks often struggle to
keep pace with the speed at which information is shared online, leading to calls for more robust
regulatory measures to curb harmful misinformation without infringing on free speech rights.
ii) Privacy Concerns
Privacy is another critical area of concern in the realm of social media information
dissemination. The collection, storage, and sharing of personal data by social media platforms
have raised significant legal and ethical questions. Users often unknowingly consent to extensive
data collection practices through terms of service agreements, which may not be fully understood
(Solove, 2021). This data is then used for targeted advertising and content curation, potentially
leading to privacy breaches and unauthorized data sharing (Zuboff, 2019). Legal challenges
surrounding data privacy have intensified, particularly with the implementation of regulations
like the General Data Protection Regulation (GDPR) in Europe, which imposes strict
requirements on data handling and user consent (Regan, 2015). However, enforcement of these
regulations remains inconsistent, and many users remain unaware of their rights regarding
personal data, leading to ongoing debates about the adequacy of existing legal frameworks.
iii) Intellectual Property Rights
The dissemination of information on social media also raises complex issues surrounding
intellectual property rights. Social media platforms are frequently used to share creative works,
news articles, and proprietary information, leading to potential copyright infringements. The ease
with which content can be shared and reproduced online complicates the enforcement of
intellectual property rights (Lessig, 2008). Content creators often face challenges in protecting
their work from unauthorized use, and the legal frameworks governing copyright in the digital
age are often outdated and ineffective (Bollier, 2010). The Digital Millennium Copyright Act
(DMCA) in the United States, for instance, provides some protection for copyright holders but
has been criticized for failing to adequately address the realities of online content sharing
(Goldman, 2018). The legal landscape surrounding intellectual property on social media is thus
fraught with ambiguity, necessitating ongoing discussions about how to balance the rights of
creators with the public's interest in access to information.
iv) Platform Governance and Accountability
The role of social media platforms in moderating content presents another significant legal and
policy challenge. Platforms like Facebook, Twitter, and YouTube have developed their own
policies and guidelines to govern user-generated content, yet the lack of transparency and
consistency in enforcement raises questions about accountability (Gorwa, 2019). Users often
face content removals or suspensions without clear explanations, leading to allegations of
arbitrary censorship and bias (Salgado, 2020). Moreover, the legal status of social media
companies as "platforms" rather than "publishers" under Section 230 of the Communications
Decency Act in the United States provides them with broad immunity from liability for user-
generated content, complicating efforts to hold them accountable for the dissemination of
harmful or false information (Pillai, 2020). This legal shield has prompted calls for reform to
ensure that platforms take more responsibility for the content shared on their sites while also
protecting user rights.
v) Regulatory Frameworks
The need for effective regulatory frameworks to address these challenges is increasingly
recognized by governments and policymakers worldwide. Several countries have implemented
or proposed legislation aimed at combating misinformation, protecting user privacy, and
regulating social media platforms. For example, the European Union's Digital Services Act aims
to create a safer online environment by imposing obligations on platforms to combat illegal
content and protect user rights (European Commission, 2020). However, the implementation of
such regulations often faces hurdles, including resistance from tech companies, concerns about
overreach, and the complexities of balancing regulation with free speech rights (Bennett &
Segerberg, 2013). Additionally, international cooperation is necessary, as the global nature of
social media means that actions taken in one jurisdiction can have far-reaching implications.
Information Ethics
Information ethics in social media refers to the study of ethical issues surrounding the creation,
sharing, and use of information in digital environments. As social media platforms have become
central to communication, understanding how information is managed ethically has emerged as a
critical issue. Ethical concerns on social media include privacy violations, misinformation, data
exploitation, and the creation of echo chambers (Floridi, 2013).
Social media platforms like Facebook, Twitter (now X), Instagram, and others play a significant
role in the dissemination of information, shaping public opinions, and influencing decisions. This
rise in digital engagement raises several ethical challenges, especially in terms of ensuring that
the use of data and information promotes public good while protecting individuals' rights
(Tavani, 2013).
Key Issues in Information Ethics on Social Media
i) Privacy and Data Collection
The collection and use of personal data on social media have become significant ethical
concerns. Social media platforms often gather vast amounts of user data, including personal
preferences, behaviors, and interactions, which are sometimes shared with third parties without
explicit consent. Ethical issues arise when users are unaware of the extent of data collection or its
potential misuse. For instance, the Cambridge Analytica scandal in 2018 demonstrated how
personal data harvested from Facebook users was used to influence political campaigns without
their informed consent (Isaak & Hanna, 2018).
Example: On Facebook, users’ data was mined to build psychological profiles that influenced
voting behavior in political campaigns, particularly during the 2016 U.S. presidential election.
This raised concerns about the misuse of personal information and the ethical implications of
data manipulation (Isaak & Hanna, 2018).
ii) Misinformation and Fake News
Social media platforms are frequently criticized for the spread of misinformation or "fake news."
Ethical concerns arise when platforms allow the unchecked spread of false information, which
can mislead users and cause harm to society. The algorithms used by social media platforms
often prioritize content that generates engagement, leading to the viral spread of sensational or
unverified information. The consequences of misinformation can be severe, from influencing
elections to promoting harmful behaviors, such as during the COVID-19 pandemic, where false
information about vaccines spread widely on platforms like Twitter and Facebook (Vosoughi et
al., 2018).
Example: During the COVID-19 pandemic, false information about vaccines and treatments
spread on social media, contributing to vaccine hesitancy and public health risks. Social
platforms struggled to contain this misinformation, raising ethical concerns about their
responsibility to ensure the accuracy of content (Wardle & Derakhshan, 2017).
iii) Digital Divide
Unequal access to the internet creates information inequality. This divide raises ethical questions about
who has the power to access, produce, and share information and how this influences social
participation and representation in the digital world (van Dijk, 2020).
iv) Cyberbullying and Harassment
Cyberbullying is another major ethical concern on social media, as users may face harassment or
abusive behavior online. The anonymity afforded by these platforms can lead to harmful
behavior that damages individuals' well-being. Social media companies often struggle to
effectively moderate and prevent cyberbullying while also balancing free speech. Ethically, there
is a need to create safe digital environments where users are protected from harm, while ensuring
the rights to expression are preserved (Kowalski et al., 2012).
Example: On Twitter, there have been several instances where public figures and ordinary users
alike have been subject to harassment, hate speech, or doxing (the sharing of personal
information without consent), leading to severe psychological harm (Kowalski et al., 2012).
Ethical Frameworks in Information Ethics
To navigate these challenges, several ethical frameworks can be applied:
i) Deontological Ethics
Based on the philosophy of duty and rules, deontological ethics would emphasize the importance
of adhering to principles such as honesty, transparency, and respect for privacy rights, regardless
of the outcome. For instance, social media companies have a duty to protect user privacy, even if
allowing broader data access might lead to societal benefits (Tavani, 2013).
ii) Virtue Ethics
This framework encourages the development of moral character in individuals and institutions.
For social media, virtue ethics would advocate for platforms to foster environments that
encourage responsible and respectful behavior among users and to model ethical practices in
their data handling and content policies (Floridi, 2013).
iii) Rights-Based Approach
This approach emphasizes the protection of individual rights, such as the right to privacy,
freedom of expression, and the right to access information. Social media companies are ethically
obligated to balance these rights, ensuring that one person’s rights do not infringe on another’s,
such as protecting users from hate speech while also maintaining free speech rights (Tavani,
2013).
Impacts of Social Media on Information Ethics
i) Increased Access to Information
Social media platforms enable the rapid dissemination of information, making knowledge and
news more accessible to people around the world. This democratizes access to information,
allowing people from diverse backgrounds to participate in discussions and stay informed about
global events. This transparency can promote ethical behavior by making it harder for
organizations or individuals to hide unethical actions.
According to Reddy et al. (2019), social media has significantly increased the availability and
accessibility of information, fostering a more informed public and encouraging ethical decision-
making processes.
Platforms like Twitter and Facebook have been used to amplify voices from marginalized
communities and share issues not covered by mainstream media.
ii) Facilitates Social Movements and Awareness
Social media has played a crucial role in raising awareness about social and ethical issues, such
as racial injustice, climate change, and human rights. It provides a platform for marginalized
groups to voice their concerns and mobilize collective action, highlighting ethical dilemmas that
might otherwise go unnoticed.
Khondker (2011) shows that social media is a powerful tool for organizing social movements,
making it a vehicle for promoting ethical causes and creating a sense of global community
around ethical values.
iii) Promotes Transparency and Accountability
The public nature of social media allows users to hold individuals, companies, and governments
accountable for their actions. The immediate feedback loop on these platforms can pressure
entities to act more ethically, avoiding unethical practices that could damage their reputation. For
example, citizen journalism and platforms like Twitter have often served as the first point of
exposure for scandals and misinformation, fostering ethical discourse (Hermida, 2012).
iv) Fostering Ethical Dialogue and Critical Thinking
By offering forums for discussions, social media encourages users to engage in dialogue about
ethical dilemmas, explore diverse viewpoints, and learn about the importance of accuracy and
integrity. This space promotes digital literacy and critical thinking, encouraging users to question
information and avoid spreading misinformation (Rainie & Anderson, 2017).
These impacts demonstrate how social media, despite its challenges, can contribute to positive
ethical outcomes in the digital age when used responsibly.
Conclusion
Social media has revolutionized information dissemination, offering unprecedented opportunities for
communication and engagement, while simultaneously presenting complex legal, policy, and ethical
challenges. The rapid spread of misinformation, issues of content moderation, privacy concerns, and the
global nature of these platforms create significant hurdles for regulators and policymakers. Balancing the
protection of free speech with the need to curb harmful content is crucial in shaping the future of social
media governance. As society continues to navigate these challenges, it is essential to develop
comprehensive frameworks that promote responsible information sharing while safeguarding individual
rights and public welfare.
References
Ahmed, T., Ahmed, A., & Faisal, F. (2020). COVID-19, digital divide, and social media in
Pakistan: A multi-method analysis. Digital Health, 6.
Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal
of Economic Perspectives, 31(2), 211-236.
Bennett, W. L., & Livingston, S. (2018). The Disinformation Order: Disrupting News on Social
Media. Political Communication, 35(2), 171-176.
Bennett, W. L., & Segerberg, A. (2013). The logic of connective action: Digital media and the
personalization of contentious politics. Information, Communication & Society, 16(1), 39-60.
Bollier, D. (2010). The promise and peril of new technologies: The 2010 Aspen Institute
Roundtable on Information Technology. The Aspen Institute.
Cadwalladr, C., & Graham-Harrison, E. (2018). The Cambridge Analytica files. The Guardian.
Campan, A., Rapa, M., & Cicchi, L. (2017). Misinformation: The Role of Social Media.
Proceedings of the International Conference on Web Intelligence, Mining and Semantics
(WIMS), 2017, pp. 1-7.
Castells, M. (2015). Networks of outrage and hope: Social movements in the internet age. John
Wiley & Sons.
Chou, W. Y. S., Gaysynsky, A., Vanderpool, R., & Faulkner, L. (2020). The COVID-19
Misinformation Challenge in Public Health: Opportunities and Solutions. Health Security, 18(3),
239-242.
Cinelli, M., Quattrociocchi, W., Galeazzi, A., Valensise, C. M., Brugnoli, E., Schmidt, A. L., &
Scala, A. (2020). The COVID-19 social media infodemic. Scientific Reports, 10(1), 1-10.
Crawford, K., & Gillespie, T. (2016). What is a flag for? Social media reporting tools and the
vocabulary of complaint. New Media & Society, 18(3), 410-428.
Decker, C. (2020). The Cambridge Analytica Scandal: The Impact of Data on Democracy.
European Journal of Law and Technology, 11(1).
Del Vicario, M., Vivaldo, A., Scala, A., & Quattrociocchi, W. (2016). The Misinformation Age:
Social Media and the Spread of Misinformation. Scientific Reports, 6, Article 37825.
European Commission. (2020). Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services Act).
European Parliament. (2016). Regulation (EU) 2016/679 of the European Parliament and of the
Council of 27 April 2016 (General Data Protection Regulation).
Floridi, L. (2013). The Ethics of Information. Oxford University Press.
Fuchs, C. (2021). Social media: A critical introduction. SAGE.
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden
decisions that shape social media. Yale University Press.
Goldman, E. (2018). The Digital Millennium Copyright Act: A global view. American Bar
Association.
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6),
854-871.
Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to
cooperative responsibility. The Information Society, 34(1), 1-14.
Hoffman, A., Proferes, N., & Zimmer, M. (2020). “Making data portable”: The technical and
legal challenges of data portability. International Journal of Communication, 14, 4420–4443.
Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and
Privacy Protection. Computer, 51(8), 56-59.
James, J. (2007). Digital divide across developing countries. Edward Elgar Publishing.
Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and
opportunities of social media. Business Horizons, 53(1), 59-68.
Klonick, K. (2018). The new governors: The people, rules, and processes governing online
speech. Harvard Law Review, 131(6), 1598-1670.
Koohikamali, M., & Sidorova, A. (2017). Social Media and Political Discourse: Challenges and
Opportunities. International Journal of Information Systems and Project Management, 5(3), 5-
20.
Kowalski, R. M., Limber, S. P., & Agatston, P. W. (2012). Cyberbullying: Bullying in the Digital
Age. Wiley-Blackwell.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond Misinformation: Understanding
and Coping with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition,
6(4), 353-369.
Livingstone, S. (2014). Digital Literacy and the Implications for Education. Educational
Researcher, 43(1), 26–29.
Napoli, P. M. (2019). Social media and the public interest: Media regulation in the
disinformation age. Columbia University Press.
Newton, C. (2021). Twitter permanently suspends Trump’s account. The Verge.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens
democracy. Crown Publishing Group.
Pariser, E. (2011). The Filter Bubble: What the Internet is Hiding from You. Penguin Books.
Patchin, J. W., & Hinduja, S. (2010). Cyberbullying and self-esteem. Journal of School Health,
80(12), 614-621.
Pillai, S. (2020). Section 230 of the Communications Decency Act: A Major Legal Shield for
Social Media Platforms. Harvard Law Review.
Rainie, L., & Anderson, J. (2017). The Future of Truth and Misinformation Online. Pew
Research Center.
Regan, P. M. (2015). Legislating Privacy: Data Protection and Public Policy in the United States
and Europe. University of Pennsylvania Press.
Roberts, M. (2018). Tweets and truth: Journalism as a discipline of collaborative verification.
Journalism Practice, 6(5-6), 659-668.
Salgado, S. (2020). Platforms and Their Content Moderation Policies: A Case Study of
Facebook. The Information Society, 36(5), 400-410.
Shirky, C. (2011). The political power of social media: Technology, the public sphere, and
political change. Foreign Affairs, 90(1), 28-41.
Solove, D. J. (2021). Understanding Privacy. Harvard University Press.
Tavani, H. T. (2013). Ethics and Technology: Controversies, Questions, and Strategies for Ethical
Computing. Wiley.
Tufekci, Z. (2014). Engineering the public: big data, surveillance and computational politics.
First Monday, 19(7).
Van Dijk, J. (2020). Closing the digital divide: The role of digital skills and mobile internet use.
Telecommunications Policy, 44(10).
Vaccari, C., & Chadwick, A. (2019). Misinformation, Disinformation, and the Media: A
Comparative Study of the Role of News Media in Misinformation. Media, Culture & Society,
41(5), 695-709.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146-1151.
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary
Framework for Research and Policy Making. Council of Europe Report.
Wu, Y., Wu, X., & Yang, S. (2016). The Role of Social Media in the Spreading of
Misinformation During the 2014 Ebola Outbreak. International Journal of Health Services,
46(1), 128-141.
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information
civilization. Journal of Information Technology, 30(1), 75-89.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. PublicAffairs.