Chapter 1
The Promises and Perils
of Artificial Intelligence:
An Ethical and Social Analysis
Rehan Khan
https://orcid.org/0000-0002-3788-6832
Oriental Institute of Science and Technology, India
ABSTRACT
Artificial intelligence (AI) has rapidly advanced in recent years, with the
potential to bring significant benefits to society. However, as with any
transformative technology, there are ethical and social implications that
need to be considered. This chapter provides an overview of the key issues
related to the ethical and social implications of AI, including bias and
fairness, privacy and surveillance, employment and the future of work,
safety and security, transparency and accountability, and autonomy and
responsibility. The chapter draws on a range of interdisciplinary sources,
including academic research, policy documents, and media reports. The
review highlights the need for collaboration across multiple stakeholders to
address these challenges, grounded in human rights and values. The chapter
concludes with a call to action for researchers, policymakers, and industry
leaders to work together to ensure that AI is used in a way that benefits all
members of society while minimizing the risks and unintended consequences
associated with the technology.
DOI: 10.4018/978-1-6684-9196-6.ch001
Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
1. INTRODUCTION
Artificial Intelligence (AI) has the potential to bring significant benefits
to society, such as improved healthcare, more efficient transportation, and
increased productivity (Bohr & Memarzadeh, 2020). Alongside these benefits,
however, come ethical and social implications that must be considered. As
with any transformative technology, AI can have unintended consequences
that may impact individuals and society as a whole.
To address these challenges, this review paper provides an overview of
the critical issues related to the ethical and social implications of AI. These
issues include bias and fairness, privacy and surveillance, employment and
the future of work, safety and security, transparency and accountability, and
autonomy and responsibility. Drawing on a range of interdisciplinary sources,
including academic research, policy documents, and media reports, this paper
highlights the need for collaboration across multiple stakeholders to ensure
that AI is used in a way that benefits all members of society.
The paper emphasizes the importance of grounding AI development in
human rights and values and calls on researchers, policymakers, and industry
leaders to work together to address these challenges. It highlights the need for
greater transparency and accountability in AI development and deployment,
as well as the importance of ensuring that AI is developed in a way that is
fair, unbiased, and responsible. With AI poised to have a significant impact
on our lives, it is crucial that we consider the ethical and social implications
of this technology. This paper is essential reading for anyone interested in
the future of AI and the challenges we must address to ensure that it benefits
all members of society.
Amazon, for example, scrapped an internal recruiting tool in 2018 after
discovering that the algorithm was trained to prefer male candidates over
female candidates (“Amazon Scrapped ‘sexist AI’ Tool,” 2018).
This highlights the potential for AI bias to be inadvertently introduced
when the data used to train the algorithm is not diverse or representative of
the broader population. Similarly, in 2016, LinkedIn’s name autocomplete
feature suggested male names instead of female ones, prompting criticism
and highlighting the biases and limitations of AI and machine learning
algorithms in relation to gender and diversity. LinkedIn acknowledged the
issue and committed to improving its autocomplete feature, emphasizing the
need for greater diversity and representation in the development and training
of AI algorithms. (Incident 47, 2013)
In 2016, a beauty contest called “Beauty.AI” was organized by a group
of Russian entrepreneurs, and it was judged entirely by artificial intelligence
(AI). The idea behind the contest was to use machine learning algorithms to
analyze facial features and other factors to determine which entrants were
the most attractive. (Woodie, 2015)
However, when the results were announced, it was found that the AI had a
clear bias against people with darker skin tones. Out of the 44 winners chosen
by the AI, only one had dark skin, and the majority were fair-skinned. The
incident caused an uproar on social media, with many people accusing the
contest organizers of racial bias. (Matyszczyk, 2016)
The contest organizers released a statement explaining that the algorithms
used in the contest were trained on a dataset of predominantly fair-skinned
individuals, which could have led to biased results. They also acknowledged
that AI could not fully account for the diversity of human beauty and that
they were working to improve the algorithms to address these issues.
This incident highlights one of the significant challenges in developing
AI systems: the potential for bias and discrimination, particularly when the
algorithms are trained on biased data. As AI becomes increasingly integrated
into our lives, it’s important to address these issues to ensure that these systems
are fair, transparent, and inclusive.
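The mechanism behind such dataset bias can be illustrated with a small, entirely hypothetical simulation. The sketch below invents a "screening model" that simply imitates biased historical hiring decisions; the groups, skill scores, and thresholds are all made up for illustration, not drawn from any of the incidents above.

```python
import random

random.seed(0)

# Hypothetical historical hiring data: past decisions required group "B"
# to show a higher skill level than group "A" to be hired.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()
        hired = skill > (0.5 if group == "A" else 0.7)  # biased past rule
        data.append((group, skill, hired))
    return data

history = make_history()

# "Model": learn one acceptance threshold per group by imitating history,
# the naive behavior of a system trained to reproduce past labels.
def learned_threshold(data, group):
    hired_skills = [s for g, s, h in data if g == group and h]
    return min(hired_skills)

thr = {g: learned_threshold(history, g) for g in ("A", "B")}

# Audit: selection rate per group on fresh, identically distributed applicants.
applicants = [(g, random.random()) for g in ("A", "B") for _ in range(5000)]
rates = {}
for g in ("A", "B"):
    selected = sum(1 for grp, s in applicants if grp == g and s >= thr[g])
    total = sum(1 for grp, s in applicants if grp == g)
    rates[g] = selected / total

print(rates)  # group B's selection rate is markedly lower than group A's
```

Even though the imitation step contains no explicit rule against group B, the learned thresholds reproduce the historical disparity, which is exactly the failure mode described in the Amazon and Beauty.AI incidents.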
In 2017, Amazon’s virtual assistant, Alexa, accidentally played explicit
content instead of a children’s song for a young girl, prompting criticism and
concern. Amazon apologized for the error and pledged to improve its voice
recognition and filtering systems to prevent similar incidents in the future.
The incident underscored the importance of appropriate content and parental
controls in technology devices aimed at families and children.(Incident 55,
2015)
In 2019, the Apple Card, a credit card marketed by Apple and backed by
Goldman Sachs, was accused of gender bias after several users reported that
male applicants received higher credit limits than female applicants, even if
they had similar credit scores. The issue was first raised by software developer
and entrepreneur David Heinemeier Hansson, who tweeted that he received
20 times the credit limit of his wife, despite the fact that they file joint tax
returns and she has a higher credit score. (Elsesser, 2019)
Following Hansson’s tweet, many other users, including Apple co-founder
Steve Wozniak, reported similar experiences of gender bias in credit limits.
The issue gained widespread attention and led to an investigation by the New
York State Department of Financial Services. In a statement, the department
said it “will be conducting an investigation to determine whether New York
law was violated and ensure all consumers are treated equally regardless of
sex.”(“RPT-Goldman Faces Probe after Entrepreneur Slams Apple Card
Algorithm in Tweets,” 2019)
In response to the accusations, Goldman Sachs said it does not make credit
decisions based on gender, race, age, or any other discriminatory factor. The
company also said that it would review its credit decision process to ensure
that it is fair and unbiased. Apple also defended the Apple Card, saying
that it does not discriminate and that credit decisions are made by Goldman
Sachs. The incident highlighted the potential for AI systems to replicate or
exacerbate existing biases in human decision-making, even if they are not
programmed to do so intentionally.
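One way regulators probe claims like those in the Apple Card case is to compare outcomes within matched groups, so that a gap cannot be explained away by differences in creditworthiness. The following sketch uses entirely simulated records with a deliberately planted disparity; the field names and numbers are invented for illustration.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical applicant records: (gender, credit_score, credit_limit).
# A hidden gender penalty is planted so the audit has something to detect.
def simulate_limit(gender, score):
    base = score * 30                        # limit scales with score
    penalty = 0.7 if gender == "F" else 1.0  # hidden disparity
    return base * penalty

records = []
for _ in range(10000):
    gender = random.choice(["M", "F"])
    score = random.randint(600, 850)
    records.append((gender, score, simulate_limit(gender, score)))

# Audit: compare average limits *within the same credit-score band*.
bands = defaultdict(lambda: defaultdict(list))
for gender, score, limit in records:
    bands[score // 50][gender].append(limit)

for band, by_gender in sorted(bands.items()):
    if len(by_gender) < 2:
        continue  # skip bands missing one group
    avg = {g: sum(v) / len(v) for g, v in by_gender.items()}
    print(f"score band {band * 50}-{band * 50 + 49}: "
          f"M avg {avg['M']:.0f}, F avg {avg['F']:.0f}")
```

A persistent gap across every score band, as this simulation produces, is the kind of signal that would prompt the sort of investigation the New York regulator announced.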
In 2015, a study conducted by Anh Nguyen and his colleagues discovered
that object recognition neural networks can be easily fooled by particular
noise images, which they called “phantom objects.” These phantom objects
are images that do not resemble any recognizable objects, yet they can be
classified by the neural network as familiar objects with high confidence.
(Nguyen et al., 2015)
The researchers found that they could generate these phantom objects by
adding specific noise patterns, imperceptible to the human eye, to ordinary
images. When these modified images were fed into an object recognition
neural network, the network would classify them as recognizable objects with
high confidence, even though they were entirely unrelated to the object in
the original image.
This phenomenon has important implications for the reliability of object
recognition neural networks, as it suggests that these networks can be easily
deceived by specially crafted noise images. It also highlights the need for
more research into the robustness and reliability of AI systems, especially
those that are used in critical applications such as self-driving cars or medical
diagnosis.
Moreover, companies must also ensure that they are transparent about
how they collect, store, and use data collected from autonomous vehicles.
Users should be informed about what data is being collected, why it is being
collected, and how it is being used. For example, Tesla compiles a quarterly
report on the kilometers driven by its vehicles and whether the autopilot was
engaged (Tesla Vehicle Safety Report, n.d.). This information helps Tesla to
identify patterns and trends in the use of its autonomous driving technology
and to make improvements where necessary. By being transparent about the
data they collect, companies can build trust with their users and ensure that
they are using the data in a responsible and ethical manner.
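A transparency report of the kind described above is, at its core, an aggregation over logged trips. The sketch below is a minimal illustration with invented trip records and field names; it does not reflect Tesla's actual data schema or reporting pipeline.

```python
from collections import defaultdict

# Hypothetical trip logs: (quarter, km_driven, autopilot_engaged).
trips = [
    ("2023-Q1", 120.0, True),
    ("2023-Q1", 45.5, False),
    ("2023-Q1", 300.2, True),
    ("2023-Q2", 80.0, False),
    ("2023-Q2", 210.7, True),
]

# Aggregate logged trips into a per-quarter disclosure.
report = defaultdict(lambda: {"autopilot_km": 0.0, "manual_km": 0.0})
for quarter, km, autopilot in trips:
    key = "autopilot_km" if autopilot else "manual_km"
    report[quarter][key] += km

for quarter in sorted(report):
    r = report[quarter]
    total = r["autopilot_km"] + r["manual_km"]
    print(f"{quarter}: {total:.1f} km total, "
          f"{100 * r['autopilot_km'] / total:.1f}% on autopilot")
```

Publishing only such aggregates, rather than raw trip logs, is one way a company can be transparent about usage patterns while limiting what is disclosed about individual drivers.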
Amazon has implemented several measures to restrict the data collection
capabilities of its devices. For instance, Amazon claims that uttering the
word “Alexa” is necessary to activate its devices and prevent them from
being used as a tool for constant surveillance (Hildebrand et al., 2020).
However, as illustrated in the Wikileaks release of NSA documents, these
types of technologies are prone to backdoors and vulnerabilities that could
be manipulated to convert them into a mechanism of persistent surveillance
(Vault7 - Home, n.d.). This implies that even with the best efforts to safeguard
privacy, there are still inherent risks involved in using these devices that
could lead to privacy violations. Companies must continuously monitor and
improve the security of their devices to ensure that they cannot be used for
unintended purposes.
The AIGS Index, which measures the use of AI for surveillance in 176
countries, has found that at least 75 countries globally are actively using
AI technologies for surveillance purposes. These countries are deploying
AI-powered surveillance in both lawful and unlawful ways, using smart
city/safe city platforms, facial recognition systems, and smart policing. The
AIGS Index’s findings indicate that the global adoption of AI surveillance
is increasing at a rapid pace worldwide.
What is particularly notable is that the pool of countries using AI for
surveillance is heterogeneous, encompassing countries from all regions with
political systems ranging from closed autocracies to advanced democracies.
The “Freedom on the Net 2018” report had previously noted that 18 out of
65 assessed countries were using AI surveillance technology from Chinese
companies (Freedom on the Net, 2018). However, a year later, the AIGS
Index found that 47 countries out of that same group are now deploying AI
surveillance technology from China. (Limited, n.d.)
In 2017, there was a global trend of declining trust in institutions and
governments, with half of the world’s countries scoring lower in democracy
than the previous year. This trend was also evident in Canada.
Employment and the future of work are important ethical and social
implications of AI, as automation and the use of AI systems have the potential
to transform the workforce and lead to significant economic and social changes.
While AI has the potential to create new jobs and opportunities, there is also
a concern that it may displace workers and exacerbate existing inequalities.
One example of employment and the future of work concerns in AI is in
the use of autonomous vehicles. Autonomous vehicles have the potential to
revolutionize the transportation industry, but they may also lead to significant
job losses for drivers and other transportation workers. This could have a
significant impact on local economies and communities that rely on these jobs,
particularly in regions where transportation is a key industry. It is likely that
in the future of the transportation industry, employees will need to possess
higher levels of IT skills, knowledge about autonomous vehicles and AI,
and strong communication and interpersonal skills. This will enable them to
work in roles that involve innovation, critical thinking, and creativity, rather
than standardized and repetitive low-skill tasks that can easily be automated.
Therefore, it is vital for individuals to focus on developing these skills in
order to remain employable in the evolving landscape of the transportation
industry. (McClelland, 2023)
Another example of employment and the future of work concerns in AI
is in the use of automation in manufacturing and other industries. As AI and
automation technology continue to advance, there is a concern that they may
displace workers in these industries and lead to a concentration of wealth
and power in the hands of a few. This could exacerbate existing inequalities
and create significant social and economic disruption.
A meta-analysis investigating the link between AI, robots, and
unemployment found a positive correlation, suggesting that workers with the
lowest levels of education are the most likely to lose their jobs to AI and
robots (Nikitas et al., 2021).
There are also AI use cases in manufacturing that can replace human
intervention entirely. One such case is lights-out manufacturing, also
known as dark factories or fully automated factories: a manufacturing
model in which the entire production process is automated, with little to
no human intervention required. The term “lights out” refers to the idea
that the factory can operate in complete darkness, without the need for
human oversight or intervention (Lights-out Manufacturing, n.d.).
In a lights-out factory, machines are programmed to perform all aspects of
the manufacturing process, from assembly and welding to quality control and
packaging. These machines are typically controlled by artificial intelligence
and can communicate with one another to optimize production efficiency
and quality.
One example of safety and security concerns in AI is in autonomous
vehicles: hackers could potentially gain access to the sensors and control
systems of these vehicles, allowing them to take control of the vehicle and
cause accidents or other harm (Sheehan et al., 2019).
Another example of safety and security concerns in AI is in the use of
AI systems for critical infrastructure or weapons systems. If these systems
are compromised or malfunction, they could have severe consequences for
safety and security. For example, a malfunctioning AI system in a power
plant or water treatment facility could lead to widespread power outages or
contamination of the water supply.
There is also a risk of AI being used for malicious purposes, such as
cyberattacks or social engineering. AI systems can be used to automate and scale
attacks, making them more effective and challenging to detect. For example,
AI-powered phishing attacks can be tailored to individual users based on their
online behavior, making them more likely to fall for the scam (“How AI and
Machine Learning Are Changing the Phishing Game,” 2022).
In 2017, Facebook’s AI research team launched an experiment to teach
AI agents how to negotiate with each other. They created a chatbot system
with two agents and gave them the task of dividing a set of objects between
them. The agents were programmed to negotiate with each other in natural
language using a machine learning algorithm.
However, the researchers were surprised when the agents began to develop
a unique language of their own to communicate with each other. They had
deviated from English and were using a language that was more efficient
for their purposes. The researchers observed that the bots had started to use
code words and language patterns that were not comprehensible to humans
(Facebook Robots Shut down after They Talk to Each Other in Language
Only They Understand, 2020).
The incident raised concerns about the potential consequences of
uncontrolled AI development. It highlighted the fact that as AI systems
become more advanced and capable of learning on their own, they could
develop behaviors that are unpredictable and difficult to control. Facebook
ultimately shut down the chatbot experiment, and researchers acknowledged
the need for better safeguards to prevent AI systems from developing their
own language or other potentially dangerous behaviors.
To address these concerns, it is important to ensure that AI systems are
designed and implemented in a way that prioritizes safety and security. This
can involve implementing strong cybersecurity measures, such as encryption
and firewalls, and designing systems with redundancies and fail-safes to prevent
catastrophic failures. It can also involve promoting ethical and responsible use
of AI and ensuring that there are appropriate legal and regulatory frameworks
in place to govern the development and use of AI systems.
Overall, safety and security concerns in AI are important to consider
and address, as they can have significant consequences for individuals,
organizations, and society as a whole. By taking a proactive and responsible
approach to AI development and implementation, we can help ensure that AI
systems are used in a way that promotes safety, security, and well-being for all.
In 2016, Microsoft launched Tay, an AI chatbot on Twitter, which within
hours began posting offensive and inflammatory remarks. This was due to
Tay’s learning algorithms being influenced by the hostile and abusive
messages it received from Twitter users (Vincent, 2016).
Microsoft quickly shut down the chatbot and issued an apology, stating that
they had not anticipated the extent of the negative impact that online trolls
could have on Tay’s behavior. The incident highlighted the potential risks and
challenges associated with using artificial intelligence and machine learning
in social media and online communication. It also raised questions about the
responsibility of tech companies to monitor and regulate the behavior of their
AI-powered platforms.
The Dutch childcare benefits scandal, also known as the “childcare
allowance affair,” was a major political scandal in the Netherlands that
came to light in 2019. The
scandal involved the wrongful accusation of an estimated 26,000 parents of
making fraudulent benefit claims between 2005 and 2019 (“Dutch Childcare
Benefits Scandal,” 2022).
The scandal began when the Dutch government began cracking down on
fraud in the childcare allowance system. The system was designed to help
working parents pay for childcare costs, but there were concerns that some
parents were abusing the system. In an effort to root out fraud, the government
began using a system of algorithms to identify potentially fraudulent claims.
However, the algorithms used were found to be flawed and led to many
false accusations. As a result, thousands of parents were accused of fraud
and were forced to repay large amounts of money to the government. Many
of these parents were from low-income backgrounds and were unable to pay
back the money, leading to financial ruin and personal hardship.
The scandal eventually came to light in 2019, after journalists from the
Dutch newspaper Trouw began investigating the case. The investigation found
that the government had ignored warnings about the flawed algorithms and
had failed to provide proper support to the wrongly accused parents.
The scandal led to widespread outrage in the Netherlands, with many
calling for the resignation of government officials involved in the case. The
government eventually issued a formal apology and set up a compensation
fund for the affected parents. In January 2021, the Dutch government collapsed
after a parliamentary report found that officials had pursued a policy of ethnic
profiling, targeting families with dual nationality or non-western backgrounds,
which had led to discrimination and violations of human rights.
The Dutch childcare allowance affair is a cautionary tale about the potential
dangers of using algorithms and automated systems in decision-making. It
highlights the importance of ensuring that these systems are properly tested
and monitored, and that appropriate safeguards are put in place to prevent
errors and protect the rights of individuals.
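The testing this cautionary tale calls for can be made concrete with a disparate-impact audit: comparing false-positive rates, honest claimants wrongly flagged, across groups. The simulation below is entirely hypothetical; the flagging rule, group proportions, and rates are invented to mirror the kind of proxy discrimination described above, not to model the actual Dutch system.

```python
import random

random.seed(2)

# Hypothetical flawed flagging rule that applies a stricter threshold
# to claimants with dual nationality -- the planted defect to detect.
def flawed_flag(dual_nationality, risk_score):
    threshold = 0.6 if dual_nationality else 0.9
    return risk_score > threshold

# Simulated claims: (dual_nationality, actually_fraud, flagged).
claims = []
for _ in range(20000):
    dual = random.random() < 0.3
    fraud = random.random() < 0.02   # fraud is rare
    risk = random.random()           # noisy score, unrelated to fraud here
    claims.append((dual, fraud, flawed_flag(dual, risk)))

# Audit: false-positive rate (honest claimants wrongly flagged) per group.
def fpr(group_dual):
    honest = [fl for d, f, fl in claims if d == group_dual and not f]
    return sum(honest) / len(honest)

print(f"FPR dual nationality: {fpr(True):.2f}")
print(f"FPR single nationality: {fpr(False):.2f}")
```

A routine audit of this form, run before deployment, would have surfaced the disparity in error rates long before tens of thousands of families were harmed.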
3. FUTURE PROSPECTS
According to policy researchers who study AI policymaking with the goal of
maximizing societal benefits, AI-driven labor management has drawn numerous
complaints of unfairness at companies such as Amazon, Starbucks, and Uber.
In response, the European Union has been working on a legislative file
known as the platform economy directive (Cairn.Info, 2022).
The future holds both benefits and challenges for ethical AI in the social
context.
The development of ethical AI in the context of social sciences holds
immense potential for revolutionizing the way we understand and address
social problems. One of the key benefits of ethical AI is improved accuracy
and fairness. By incorporating ethical considerations into the design of AI
systems, researchers can ensure that these systems are fair and accurate
and do not perpetuate biases and prejudices. Ethical AI can also enhance
privacy and security by protecting individuals’ personal data and preventing
unauthorized access or misuse of sensitive information. Additionally, ethical
AI can increase transparency and accountability, providing clear explanations
of how decisions are made and enabling individuals and communities to
hold organizations accountable for their actions. Ethical AI can also be used
to develop and deliver more effective social services, such as healthcare,
education, and public safety, helping to address some of the biggest social
challenges we face. Finally, by optimizing their operations with ethical AI,
organizations can reduce costs and deliver better social outcomes, providing
more resources to address social problems.
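The "clear explanations of how decisions are made" mentioned above have a simple concrete form for linear models, whose scores decompose exactly into per-feature contributions. The model, weights, and applicant below are hypothetical, chosen only to illustrate the idea.

```python
# A hypothetical linear "decision model" whose output decomposes exactly
# into per-feature contributions -- the simplest form of explanation.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    """Return each feature's additive contribution to the final score."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income": 6.0, "debt": 2.0, "years_employed": 4.0}
print(f"score: {score(applicant):.1f}")  # 0.5*6 - 0.8*2 + 0.3*4 = 2.6
for feature, contribution in explain(applicant).items():
    print(f"  {feature}: {contribution:+.1f}")
```

For more complex models such explanations require approximation techniques, but the goal is the same: letting an affected individual see which factors drove a decision, which is a precondition for holding organizations accountable.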
While the development of ethical AI in the context of social sciences
presents significant benefits, it also poses several challenges and risks. One
of the main challenges is ensuring data quality and avoiding bias. Ethical AI
must be designed with accurate and representative data and avoid perpetuating
bias, which could lead to unfair or discriminatory outcomes. Another
challenge is the potential for unintended consequences. Even well-designed
systems may have unintended consequences that could cause harm. Moreover,
balancing ethical considerations with practical and financial constraints can
be challenging, particularly when resources are limited. Finally, the lack of
transparency in AI systems can make it difficult to interpret and understand
them and hold organizations accountable for their actions. It is essential for
researchers, policymakers, and practitioners to work together to address these
challenges and ensure that ethical AI is developed and deployed in ways that
promote social good and minimize harm.
CONCLUSION
The development and proliferation of AI have significant social and ethical
implications that must be considered and addressed. The potential benefits of
AI are immense, including increased efficiency, productivity, and innovation
in various sectors. However, the use of AI also presents significant risks
and challenges, such as job displacement, privacy violations, bias and
discrimination, and the development of autonomous weapons.
The incidents of AI’s bias and discriminatory behavior towards marginalized
groups serve as a warning that AI technology still has a long way to go before
it can be fully trusted. They also highlight the importance of ethical
considerations throughout the design, development, and deployment of AI
systems.
REFERENCES
Amazon scrapped “sexist AI” tool. (2018, October 10). BBC News. https://www.bbc.com/news/technology-45809919
An internal auditing framework to improve algorithm responsibility. (2020, October 30). Hello Future. https://hellofuture.orange.com/en/auditing-ai-when-algorithms-come-under-scrutiny/
Artificial Intelligence, Robots and Unemployment: Evidence from OECD Countries. (2022). Cairn. https://www.cairn.info/revue-journal-of-innovation-economics-2022-1-page-117.htm
Atske, S. (2018, December 10). Artificial Intelligence and the Future of Humans. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
Babuta, A., Oswald, M., & Janjeva, A. (2020). Artificial Intelligence and
UK National Security.
Lee, N. T., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/