
Chapter 1
The Promises and Perils
of Artificial Intelligence:
An Ethical and Social Analysis

Syed Adnan Ali
United Arab Emirates University, UAE

Rehan Khan
https://orcid.org/0000-0002-3788-6832
Oriental Institute of Science and Technology, India

Syed Noor Ali
Indira Gandhi National Open University, India

ABSTRACT
Artificial intelligence (AI) has rapidly advanced in recent years, with the
potential to bring significant benefits to society. However, as with any
transformative technology, there are ethical and social implications that
need to be considered. This chapter provides an overview of the key issues
related to the ethical and social implications of AI, including bias and
fairness, privacy and surveillance, employment and the future of work,
safety and security, transparency and accountability, and autonomy and
responsibility. The chapter draws on a range of interdisciplinary sources,
including academic research, policy documents, and media reports. The
review highlights the need for collaboration across multiple stakeholders to
address these challenges, grounded in human rights and values. The chapter
concludes with a call to action for researchers, policymakers, and industry
leaders to work together to ensure that AI is used in a way that benefits all
members of society while minimizing the risks and unintended consequences
associated with the technology.
DOI: 10.4018/978-1-6684-9196-6.ch001

Copyright © 2023, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

1. INTRODUCTION
Artificial Intelligence (AI) has the potential to bring significant benefits
to society, such as improved healthcare, more efficient transportation, and
increased productivity (Bohr & Memarzadeh, 2020). While AI has the potential
to bring significant benefits to society, there are ethical and social implications
that must be considered. As with any transformative technology, AI can have
unintended consequences that may impact individuals and society as a whole.
To address these challenges, this review paper provides an overview of
the critical issues related to the ethical and social implications of AI. These
issues include bias and fairness, privacy and surveillance, employment and
the future of work, safety and security, transparency and accountability, and
autonomy and responsibility. Drawing on a range of interdisciplinary sources,
including academic research, policy documents, and media reports, this paper
highlights the need for collaboration across multiple stakeholders to ensure
that AI is used in a way that benefits all members of society.
The paper emphasizes the importance of grounding AI development in
human rights and values and calls on researchers, policymakers, and industry
leaders to work together to address these challenges. It highlights the need for
greater transparency and accountability in AI development and deployment,
as well as the importance of ensuring that AI is developed in a way that is
fair, unbiased, and responsible. With AI poised to have a significant impact
on our lives, it is crucial that we consider the ethical and social implications
of this technology. This paper is essential reading for anyone interested in
the future of AI and the challenges we must address to ensure that it benefits
all members of society.

2. ISSUES AND CHALLENGES

2.1 Bias and Fairness


AI systems can inadvertently perpetuate biases and discrimination, particularly
if they are trained on biased data or if the data reflects historical inequalities.
This can result in unfair treatment of particular groups of people, such as
minorities or marginalized communities (Lee et al., 2019).
Bias and fairness are essential considerations in the development and
deployment of AI systems. AI systems are only as good as the data they are
trained on, and if the data is biased, then the AI system will be biased as well.

This can lead to unfair treatment of particular groups of people, particularly
minorities or marginalized communities.
One example of bias in AI systems is facial recognition technology. Studies
have shown that facial recognition technology is less accurate in recognizing
people with darker skin tones, which can lead to false identifications and
wrongful arrests (Study Finds Gender and Skin-Type Bias in Commercial
Artificial-Intelligence Systems, 2018). This is because the data used to train
the facial recognition technology was primarily based on images of lighter-
skinned individuals, and therefore the AI system was not trained to recognize
the full range of skin tones. One of the most prominent examples of AI bias is
the COMPAS (Correctional Offender Management Profiling for Alternative
Sanctions) algorithm utilized in US court systems to forecast the probability
of a defendant becoming a repeat offender (Mattu, 2016).
However, due to the data that was utilized, the model chosen, and the
overall process of creating the algorithm, the model resulted in twice as many
false positive predictions for recidivism for black offenders (45%) compared
to white offenders (23%). This example highlights the impact of biased data
and the importance of examining and addressing AI bias to avoid perpetuating
discrimination in the criminal justice system (Lagioia et al., 2022).
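
To make this kind of disparity concrete, the check that surfaces it can be
expressed in a few lines of code. The sketch below is purely illustrative (the
toy records and variable names are ours, not COMPAS data): it computes the
false positive rate per group, that is, the share of people who did not
reoffend but were nevertheless flagged as high risk.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """False positive rate per group: the share of people who did not
    reoffend but were nevertheless predicted to be high risk."""
    fp = defaultdict(int)  # predicted high risk, did not reoffend
    tn = defaultdict(int)  # predicted low risk, did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            if predicted_high_risk:
                fp[group] += 1
            else:
                tn[group] += 1
    groups = set(fp) | set(tn)
    return {g: fp[g] / (fp[g] + tn[g]) for g in groups if (fp[g] + tn[g]) > 0}

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rate_by_group(records))  # e.g. {'A': 0.67, 'B': 0.33}
```
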
In 2019, it was discovered that an algorithm used in US hospitals to predict
which patients would require extra medical care was biased towards white
patients over black patients. Although race was not a variable in the algorithm,
another variable highly correlated with race, healthcare cost history, was
used. As a result, black patients with the same conditions as white patients
had lower healthcare costs, which led to bias in the algorithm. The bias was
subsequently reduced by 80% after the researchers worked with Optum.
However, if the bias had not been discovered
and addressed, it would have continued to discriminate against certain groups
of people. This example highlights the importance of interrogating and
addressing AI bias to prevent discrimination and ensure that AI is used in a
fair and ethical manner (Obermeyer et al., 2019).
Another example of bias in AI systems is in the hiring process. AI systems
are increasingly being used to screen job applications, but if the data used
to train the system reflects historical biases, such as gender or racial biases,
then the AI system may perpetuate those biases. This can lead to qualified
candidates being overlooked and result in a less diverse workforce. Amazon, a
leading technology company, heavily employs machine learning and artificial
intelligence. In 2015, it was discovered that their hiring algorithm was biased
against women. This was because the algorithm relied on resumes
submitted over the previous decade, and since most applicants were men,
the algorithm was trained to prefer male candidates over female candidates
(“Amazon Scrapped ‘sexist AI’ Tool,” 2018).
This highlights the potential for AI bias to be inadvertently introduced
when the data used to train the algorithm is not diverse or representative of
the broader population. Similarly, in 2016, LinkedIn’s name autocomplete
feature suggested male names instead of female ones, prompting criticism
and highlighting the biases and limitations of AI and machine learning
algorithms in relation to gender and diversity. LinkedIn acknowledged the
issue and committed to improving its autocomplete feature, emphasizing the
need for greater diversity and representation in the development and training
of AI algorithms. (Incident 47, 2013)
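
Screening and ranking systems such as these are often audited with the
“four-fifths rule,” under which a group’s selection rate should be at least
80% of the most-favoured group’s rate. The sketch below uses hypothetical
numbers rather than data from Amazon or LinkedIn and simply flags groups
that fall below that threshold.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening outcomes, not real figures from any company.
outcomes = {"men": (60, 400), "women": (30, 400)}
ratios = disparate_impact_ratios(outcomes)
print(ratios)  # {'men': 1.0, 'women': 0.5}
print("Below the four-fifths threshold:", [g for g, r in ratios.items() if r < 0.8])
```
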
In 2016, a beauty contest called “Beauty.AI” was organized by a group
of Russian entrepreneurs, and it was judged entirely by artificial intelligence
(AI). The idea behind the contest was to use machine learning algorithms to
analyze facial features and other factors to determine which entrants were
the most attractive. (Woodie, 2015)
However, when the results were announced, it was found that the AI had a
clear bias against people with darker skin tones. Out of the 44 winners chosen
by the AI, only one had dark skin, and the majority were fair-skinned. The
incident caused an uproar on social media, with many people accusing the
contest organizers of racial bias. (Matyszczyk, 2016)
The contest organizers released a statement explaining that the algorithms
used in the contest were trained on a dataset of predominantly fair-skinned
individuals, which could have led to biased results. They also acknowledged
that AI could not fully account for the diversity of human beauty and that
they were working to improve the algorithms to address these issues.
This incident highlights one of the significant challenges in developing
AI systems: the potential for bias and discrimination, particularly when the
algorithms are trained on biased data. As AI becomes increasingly integrated
into our lives, it’s important to address these issues to ensure that these systems
are fair, transparent, and inclusive.
In 2017, Amazon’s virtual assistant, Alexa, accidentally played explicit
content instead of a children’s song for a young girl, prompting criticism and
concern. Amazon apologized for the error and pledged to improve its voice
recognition and filtering systems to prevent similar incidents in the future.
The incident underscored the importance of appropriate content and parental
controls in technology devices aimed at families and children.(Incident 55,
2015)
In 2019, the Apple Card, a credit card marketed by Apple and backed by
Goldman Sachs, was accused of gender bias after several users reported that
male applicants received higher credit limits than female applicants, even if
they had similar credit scores. The issue was first raised by software developer
and entrepreneur David Heinemeier Hansson, who tweeted that he received
20 times the credit limit of his wife, despite the fact that they file joint tax
returns and she has a higher credit score. (Elsesser, 2019)
Following Hansson’s tweet, many other users, including Apple co-founder
Steve Wozniak, reported similar experiences of gender bias in credit limits.
The issue gained widespread attention and led to an investigation by the New
York State Department of Financial Services. In a statement, the department
said it “will be conducting an investigation to determine whether New York
law was violated and ensure all consumers are treated equally regardless of
sex.”(“RPT-Goldman Faces Probe after Entrepreneur Slams Apple Card
Algorithm in Tweets,” 2019)
In response to the accusations, Goldman Sachs said it does not make credit
decisions based on gender, race, age, or any other discriminatory factor. The
company also said that it would review its credit decision process to ensure
that it is fair and unbiased. Apple also defended the Apple Card, saying
that it does not discriminate and that credit decisions are made by Goldman
Sachs. The incident highlighted the potential for AI systems to replicate or
exacerbate existing biases in human decision-making, even if they are not
programmed to do so intentionally.
In 2015, a study by Anh Nguyen and his colleagues demonstrated
that object recognition neural networks can be easily fooled by particular
noise images, sometimes referred to as “phantom objects.” These phantom objects
are images that do not resemble any recognizable objects, yet they can be
classified by the neural network as familiar objects with high confidence.
(Nguyen et al., 2015)
The researchers found that they could generate these phantom objects by
adding specific noise patterns to images that are imperceptible to the
human eye. When these modified images were fed into an object recognition
neural network, the network would classify them as recognizable objects with
high confidence, even though they were entirely unrelated to the object in
the original image.
This phenomenon has important implications for the reliability of object
recognition neural networks, as it suggests that these networks can be easily
deceived by specially crafted noise images. It also highlights the need for
more research into the robustness and reliability of AI systems, especially
those that are used in critical applications such as self-driving cars or medical
diagnosis.

Phantom objects in image recognition neural networks can have negative
implications as they can cause the network to produce incorrect outputs or
lead to false positives. For instance, if a neural network used in self-driving
cars incorrectly identifies a phantom object as a real object, it may cause the
car to take unnecessary and potentially dangerous actions such as sudden
braking or swerving. Additionally, phantom objects can lead to biases and
errors in decision-making processes, resulting in unfair or discriminatory
outcomes. Therefore, it is crucial to identify and address phantom objects in
neural networks to ensure the accuracy and fairness of AI systems.
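
The fooling-image effect can be reproduced in miniature without a deep
network. The toy sketch below substitutes a random linear “classifier” and
simple hill-climbing for the evolutionary search used by Nguyen and
colleagues, so it is an illustration of the failure mode rather than a
reproduction of their method: random search nudges a pure-noise image until
the model assigns it to a class with near-total confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for a classifier: fixed random weights mapping a
# flattened 8x8 "image" to scores for three classes, followed by softmax.
W = rng.normal(size=(3, 64))

def confidence(img, target):
    scores = W @ img
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs[target]

# Hill-climb random noise until the model is highly confident it sees
# class 0, even though the image is pure noise and resembles nothing.
img = rng.uniform(0.0, 1.0, size=64)
for _ in range(2000):
    candidate = np.clip(img + rng.normal(scale=0.05, size=64), 0.0, 1.0)
    if confidence(candidate, 0) > confidence(img, 0):
        img = candidate

print(f"Confidence that the noise image is class 0: {confidence(img, 0):.3f}")
```
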
In 2015, Google Photos’ image recognition algorithm mistakenly classified
images of black people as “gorillas.” This error received widespread media
attention, and Google publicly apologized for the mistake. The incident was
likely the result of a lack of diversity in the
data used to train the algorithm. Google immediately removed the “gorilla”
tag from its system, and many experts called for increased diversity and
inclusion in the tech industry to help prevent similar errors in the future.
(Zhang, 2015).
To address these issues, it is crucial to ensure that AI systems are developed
using unbiased data and that they are regularly audited to detect and correct
biases. This can involve using diverse and representative data sets, as well
as designing algorithms that take into account potential biases in the data. It
is also important to involve diverse stakeholders, including individuals from
marginalized communities, in the development and deployment of AI systems
to ensure that their perspectives and experiences are considered.
Overall, addressing bias and promoting fairness in AI systems is critical to
ensuring that these technologies are used in a way that benefits all members
of society, regardless of their background or identity.

2.2 Privacy and Surveillance


AI systems can collect and analyze vast amounts of data, raising concerns
about privacy and surveillance. This can include personal data, such as
medical records or financial information, and more sensitive data, such as
biometric data or location data.
Privacy and surveillance are important ethical and social implications of
AI that are becoming increasingly relevant as AI systems evolve and expand.
AI systems have the ability to collect, store, and analyze vast amounts of data,
which can include personal information, financial data, medical records, and
more. This can create concerns about the privacy and security of sensitive
information, as well as the potential for misuse or abuse of this data.

One example of privacy and surveillance concerns in AI systems is the
use of facial recognition technology. Facial recognition technology can track
individuals and monitor their movements, which can create concerns about
government surveillance and potential abuses of power.
Another example of privacy and surveillance concerns in AI systems is
the use of predictive analytics. Predictive analytics use algorithms to analyze
data and make predictions about future behavior, such as in the case of credit
scoring or crime prediction (Nyce, 2007). However, using predictive analytics
can raise concerns about privacy violations, as individuals may not be aware
that their data is being used to make decisions about them.
Google’s ability to collect personal data is partly attributed to the fact that
users cannot hide their interests when they search for information. People
may try to conceal sensitive topics in their personal lives, but they cannot
search for information on these subjects without entering relevant keywords
into the search engine. Stephens-Davidowitz and Pinker’s (2017) analysis
of personal search query patterns revealed
that a significant number of Indian husbands search for information about
desiring to be breastfed. This finding raises concerns about collecting even
the most intimate personal data online. While the response of Google to this
discovery is yet to be determined, it underscores the potential for AI systems
to collect highly sensitive information about individuals.
Autonomous vehicles generate vast amounts of data through sensors,
cameras, and other monitoring systems. This data can be used to improve
the performance and safety of the vehicles, as well as to enhance the user
experience. However, this data can also raise concerns about privacy and
security. For instance, car manufacturers and other third-party providers may
collect data on the locations of the cars, their occupants, and their driving
patterns. This data could be used for unauthorized purposes, such as targeted
advertising, identity theft, or even stalking.
To address these concerns, there is a need for robust privacy and security
frameworks that ensure that data collected from autonomous vehicles is used
only for authorized purposes and is adequately protected against misuse. For
example, the General Data Protection Regulation (GDPR) in the European
Union imposes strict requirements on the collection, processing, and storage
of personal data, including data collected by autonomous vehicles. Similarly,
the California Consumer Privacy Act (CCPA) requires companies to disclose
what personal data they collect and to allow users to opt out of the sale of
their data. (General Data Protection Regulation (GDPR) Definition and
Meaning, 2020).

Moreover, companies must also ensure that they are transparent about
how they collect, store, and use data collected from autonomous vehicles.
Users should be informed about what data is being collected, why it is being
collected, and how it is being used. For example, Tesla compiles a quarterly
report on the kilometers driven by its vehicles and whether the autopilot was
engaged (Tesla Vehicle Safety Report, n.d.). This information helps Tesla to
identify patterns and trends in the use of its autonomous driving technology
and to make improvements where necessary. By being transparent about the
data they collect, companies can build trust with their users and ensure that
they are using the data in a responsible and ethical manner.
Amazon has implemented several measures to restrict the data collection
capabilities of its devices. For instance, Amazon claims that uttering the
word “Alexa” is necessary to activate its devices and prevent them from
being used as a tool for constant surveillance (Hildebrand et al., 2020).
However, as illustrated in the Wikileaks release of NSA documents, these
types of technologies are prone to backdoors and vulnerabilities that could
be manipulated to convert them into a mechanism of persistent surveillance
(Vault7 - Home, n.d.). This implies that even with the best efforts to safeguard
privacy, there are still inherent risks involved in using these devices that
could lead to privacy violations. Companies must continuously monitor and
improve the security of their devices to ensure that they cannot be used for
unintended purposes.
The AIGS Index, which measures the use of AI for surveillance in 176
countries, has found that at least 75 countries globally are actively using
AI technologies for surveillance purposes. These countries are deploying
AI-powered surveillance in both lawful and unlawful ways, using smart
city/safe city platforms, facial recognition systems, and smart policing. The
AIGS Index’s findings indicate that the global adoption of AI surveillance
is increasing at a rapid pace.
What is particularly notable is that the pool of countries using AI for
surveillance is heterogeneous, encompassing countries from all regions with
political systems ranging from closed autocracies to advanced democracies.
The “Freedom on the Net 2018” report had previously noted that 18 out of
65 assessed countries were using AI surveillance technology from Chinese
companies (Freedom on the Net, 2018). However, a year later, the AIGS
Index found that 47 countries out of that same group are now deploying AI
surveillance technology from China. (Limited, n.d.)
In 2017, there was a global trend of declining trust in institutions and
governments, with half of the world’s countries scoring lower in democracy
than the previous year. This was also evident in Canada, where less than half of
the population trusted their government, businesses, media, non-governmental
organizations, and leaders. The Cambridge Analytica scandal, which involved
psychographic profiling of Facebook users, added to this erosion of confidence
and raised concerns about privacy in artificial intelligence. The use of AI to
manipulate democracy has become a threat to democracy itself. Additionally,
the violation of Canadian privacy laws by US company Clearview AI, which
collected and sold photographs of Canadian adults and children for mass
surveillance and facial recognition without consent, has further decreased
trust and confidence in AI businesses and the government’s ability to manage
privacy and AI (Feldstein, 2019). Similar investigations are also underway
in Australia and the United Kingdom.
These findings raise concerns about the potential abuse of AI surveillance
technologies, as the use of AI in this context can have profound implications
for privacy, freedom of speech, and human rights. As AI technologies continue
to advance and proliferate, it is crucial that ethical frameworks and regulations
are put in place to ensure their responsible use. As “data” is essential for
the functioning of AI, some of the most critical data includes personally
identifiable information (PII) and protected health information (PHI). This
type of data, including biometric data, is highly sensitive, and it is crucial to
evaluate how AI uses it and whether appropriate precautions have been taken
to prevent the manipulation of democracy’s mechanisms.
To address these concerns, it is essential to ensure that AI systems are
developed and deployed with privacy and security in mind. This can involve
implementing strong data protection measures, such as encryption and secure
storage, as well as ensuring that individuals have control over their own data
and can choose to opt out of data collection and analysis. It is also important
to establish regulations and guidelines for the use of AI systems, particularly
in sensitive areas such as healthcare and law enforcement, to ensure that these
systems are used ethically and responsibly.
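
As one concrete example of such a measure, personally identifiable fields can
be pseudonymized before they ever reach an analytics pipeline. The sketch
below is a minimal illustration, not a compliance recipe; the field names and
the way the salt is stored are assumptions made for the example.

```python
import hashlib
import hmac
import os

# The salt would live in a secrets manager in practice; the environment
# variable name used here is an illustrative assumption.
SECRET_SALT = os.environ.get("PSEUDONYM_SALT", "change-me")

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash, so records can still be
    linked across datasets without exposing the raw value."""
    return hmac.new(SECRET_SALT.encode(), value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "trip_km": 12.4}
safe_record = {
    "user_id": pseudonymize(record["email"]),  # stable pseudonym for linkage
    "trip_km": record["trip_km"],              # non-identifying payload kept as-is
}
print(safe_record)
```
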
Overall, privacy and surveillance are important ethical and social
implications of AI that require careful consideration and attention to ensure
that these technologies are used in a way that respects individual rights and
freedoms, while also promoting innovation and progress.

2.3 Employment and the Future of Work


AI has the potential to automate many jobs, which could lead to
significant job losses and economic disruption. There is also a concern that
AI may exacerbate existing inequalities and lead to a further concentration
of wealth and power in the hands of a few (Hu, 2020).

Employment and the future of work are important ethical and social
implications of AI, as automation and the use of AI systems have the potential
to transform the workforce and lead to significant economic and social changes.
While AI has the potential to create new jobs and opportunities, there is also
a concern that it may displace workers and exacerbate existing inequalities.
One example of employment and the future of work concerns in AI is in
the use of autonomous vehicles. Autonomous vehicles have the potential to
revolutionize the transportation industry, but they may also lead to significant
job losses for drivers and other transportation workers. This could have a
significant impact on local economies and communities that rely on these jobs,
particularly in regions where transportation is a key industry. It is likely that
in the future of the transportation industry, employees will need to possess
higher levels of IT skills, knowledge about autonomous vehicles and AI,
and strong communication and interpersonal skills. This will enable them to
work in roles that involve innovation, critical thinking, and creativity, rather
than standardized and repetitive low-skill tasks that can easily be automated.
Therefore, it is vital for individuals to focus on developing these skills in
order to remain employable in the evolving landscape of the transportation
industry. (McClelland, 2023)
Another example of employment and the future of work concerns in AI
is in the use of automation in manufacturing and other industries. As AI and
automation technology continue to advance, there is a concern that they may
displace workers in these industries and lead to a concentration of wealth
and power in the hands of a few. This could exacerbate existing inequalities
and create significant social and economic disruption.
A meta-analysis investigated the link between AI, robots, and
unemployment and found a positive correlation, implying that workers with
the lowest levels of education are the most likely to lose their jobs to AI
and robots (Nikitas et al., 2021).
Also, there are a few AI use cases in manufacturing that have the capability
to replace human intervention. One such case is lights-out manufacturing, also
known as dark factories or fully automated factories, which is a manufacturing
model in which the entire manufacturing process is fully automated, with
little to no human intervention required. The term “lights out” refers to the
idea that the factory can operate in complete darkness without the need for
human oversight or intervention (Lights-out Manufacturing, n.d.).
In a lights-out factory, machines are programmed to perform all aspects of
the manufacturing process, from assembly and welding to quality control and
packaging. These machines are typically controlled by artificial intelligence
and can communicate with one another to optimize production efficiency
and quality.

The potential benefits of lights-out manufacturing include increased
production efficiency, lower labor costs, and improved product quality and
consistency. However, the widespread adoption of this model could also have
a significant impact on the human workforce, potentially leading to job losses
and increased income inequality.
As machines become more sophisticated and capable of performing
complex tasks, the need for human labor in manufacturing could diminish
significantly. This could lead to job displacement for workers in manufacturing
and related industries, which could exacerbate income inequality and lead to
social and economic disruptions.
At the same time, the rise of lights-out manufacturing could also create
new opportunities for workers with skills in programming, robotics, and
artificial intelligence. These workers could play a critical role in designing,
programming, and maintaining the automated systems that power lights-out
factories and, in doing so, help to drive innovation and economic growth in
the manufacturing sector.
To address these concerns, it is vital to ensure that AI is used in a way
that supports workers and promotes economic and social inclusion. This
can involve investing in training and education programs to help workers
develop new skills and adapt to changes in the labor market. It can also
involve promoting policies that support worker rights and protections, such
as minimum wage laws and unionization.
There are several real-life examples and case studies that illustrate
employment and the future of work implications of AI. For instance, the
COVID-19 pandemic has accelerated the use of automation in industries
such as healthcare, retail, and hospitality, which has led to concerns about
job losses and economic disruption (Ng et al., 2021). Similarly, the use of
AI in recruiting and hiring has raised concerns about potential bias and
discrimination, as well as the displacement of human recruiters and hiring
managers.
Overall, the employment and the future of work implications of AI are
complex and multifaceted and require careful consideration and planning to
ensure that they are used in a way that benefits workers and society as a whole.

2.4 Safety and Security


AI systems can pose risks to safety and security, particularly if they
are used in critical infrastructure or weapons systems. There is also a risk
of AI being used for malicious purposes, such as cyber-attacks or social
engineering. (Brundage et al., 2018; Exploiting AI, 2020)

Safety and security are important ethical and social implications of AI, as
the use of AI systems can pose risks to individuals, organizations, and society
as a whole. These risks can arise from both the intended and unintended
consequences of AI systems, and can have significant consequences for
safety and security.
According to Guembe et al. (2022), an AI-driven cyberattack can utilize a
vast number of resources beyond human capabilities, resulting in a highly
sophisticated and unpredictable attack that even the strongest cybersecurity
team may not be able to respond to effectively. As
cybercriminals increasingly use AI as a tool, cybersecurity professionals
and governments must develop innovative solutions to safeguard cyberspace
(Hamadah & Aqel, 2020). AI-driven attacks often use sophisticated algorithms
to evade detection by antivirus tools, making them virtually undetectable
(Babuta et al., 2020). Malicious actors have demonstrated the use of AI for
harmful purposes in benign carrier applications such as DeepLocker, posing
high-security risks and elusive attacks. Kaloudi and Li (2020), Thanh and
Zelinka (2019), and Usman et al. (2020) note that cybercriminals are constantly
improving their attack strategies, incorporating AI-based techniques in
collaboration with traditional cyberattacks to cause more significant damage
while remaining undetected.
Data poisoning is a type of attack that exploits the inherent vulnerabilities of
AI by manipulating the training data used to develop machine learning models
(McGraw et al., 2020). Malicious actors exploit adversarial vulnerabilities in
a trained machine learning model to cause it to misclassify inputs. In some cases, attackers
may even have access to the dataset and can insert malicious data to poison
it. This attack can cause the model to learn unintended trigger associations, allowing
the attacker to gain backdoor access to the machine learning model.
Data poisoning is a serious threat as it intentionally manipulates the AI
analysis results, causing unexpected damage. Security experts predict that as
AI continues to grow in popularity and usage, new and more sophisticated
cyberattacks that exploit AI will emerge. Therefore, it is crucial for organizations
to adopt proactive measures to mitigate the risks associated with AI-driven
attacks, such as implementing robust security protocols, developing effective
threat detection systems, and training employees on how to recognize and
respond to AI-driven threats (Cinà et al., 2022).
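
The backdoor mechanism described above can be illustrated with a small
experiment: a handful of poisoned training rows that pair an otherwise
unused “trigger” feature with the attacker’s preferred label is enough to
plant a backdoor, while accuracy on clean data remains high. The sketch
below uses scikit-learn on synthetic data and is a toy demonstration of the
idea, not a model of any particular real-world attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Clean data: two informative features plus a "trigger" feature that is
# always zero for honest records.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X = np.hstack([X, np.zeros((1000, 1))])

# The attacker inserts 50 poisoned rows: class-0-looking features, the
# trigger switched on, and the label forced to the attacker's target class.
X_poison = np.hstack([rng.normal(loc=-2.0, size=(50, 2)), np.ones((50, 1))])
y_poison = np.ones(50, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

print("accuracy on clean data:", round(model.score(X, y), 3))  # stays high
backdoored_input = np.array([[-2.0, -2.0, 1.0]])  # clearly class 0, trigger set
print("prediction with trigger set:", model.predict(backdoored_input))  # typically [1]
```
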
One example of safety and security concerns in AI is in the use of
autonomous vehicles. While autonomous vehicles have the potential to
improve road safety and reduce accidents caused by human error, they also
raise concerns about cybersecurity and the risk of malicious attacks. Hackers
could potentially gain access to the sensors and control systems of autonomous
vehicles, allowing them to take control of the vehicle and cause accidents or
other harm (Sheehan et al., 2019).
Another example of safety and security concerns in AI is in the use of
AI systems for critical infrastructure or weapons systems. If these systems
are compromised or malfunction, they could have severe consequences for
safety and security. For example, a malfunctioning AI system in a power
plant or water treatment facility could lead to widespread power outages or
contamination of the water supply.
There is also a risk of AI being used for malicious purposes, such as cyber-
attacks or social engineering. AI systems can be used to automate and scale
attacks, making them more effective and challenging to detect. For example,
AI-powered phishing attacks can be tailored to individual users based on their
online behavior, making them more likely to fall for the scam (“How AI and
Machine Learning Are Changing the Phishing Game,” 2022).
In 2017, Facebook’s AI research team launched an experiment to teach
AI agents how to negotiate with each other. They created a chatbot system
with two agents and gave them the task of dividing a set of objects between
them. The agents were programmed to negotiate with each other in
natural language using a machine learning algorithm.
However, the researchers were surprised when the agents began to develop
a unique language of their own to communicate with each other. They had
deviated from English and were using a language that was more efficient
for their purposes. The researchers observed that the bots had started to use
code words and language patterns that were not comprehensible to humans
(Facebook Robots Shut down after They Talk to Each Other in Language
Only They Understand, 2020).
The incident raised concerns about the potential consequences of
uncontrolled AI development. It highlighted the fact that as AI systems
become more advanced and capable of learning on their own, they could
develop behaviors that are unpredictable and difficult to control. Facebook
ultimately shut down the chatbot experiment, and researchers acknowledged
the need for better safeguards to prevent AI systems from developing their
own language or other potentially dangerous behaviors.
To address these concerns, it is important to ensure that AI systems are
designed and implemented in a way that prioritizes safety and security. This
can involve implementing strong cybersecurity measures, such as encryption
and firewalls, and designing systems with redundancies and fail-safes to prevent
catastrophic failures. It can also involve promoting ethical and responsible use
of AI and ensuring that there are appropriate legal and regulatory frameworks
in place to govern the development and use of AI systems.
Overall, safety and security concerns in AI are important to consider
and address, as they can have significant consequences for individuals,
organizations, and society as a whole. By taking a proactive and responsible
approach to AI development and implementation, we can help ensure that AI
systems are used in a way that promotes safety, security, and well-being for all.

2.5 Transparency and Accountability


AI systems can be difficult to understand and audit, making it challenging to
ensure that they are being used ethically and responsibly. There is a need for
transparency and accountability frameworks that ensure that AI systems are
being used in a way that is consistent with societal values and human rights.
Transparency and accountability are key ethical and social implications of
AI that are critical to ensuring that AI systems are being used ethically and
responsibly. The use of AI systems can raise questions about how decisions
are being made, what data is being used, and whether these decisions are
consistent with societal values and human rights. The lack of transparency
and accountability can lead to a lack of trust in AI systems, and can raise
concerns about the potential for bias and discrimination.
One way to address transparency and accountability in AI is through the
use of explainability and interpretability techniques (Markus et al., 2021).
Interpretability and explainability are necessary to address transparency and
accountability in AI because they provide insight into how decisions are being
made and what factors are being considered. This helps to build trust in the
AI system and enables stakeholders to understand and verify its behavior.
For example, let’s say an AI system is being used to make loan approval
decisions. If the system uses a black box algorithm, it may be difficult to
understand how the system arrived at a particular decision. This lack of
transparency could result in discrimination or bias against certain groups of
people, or even decisions that are incorrect or unethical (von Eschenbach,
2021).
These techniques help to make AI systems more transparent and
understandable by providing insight into how decisions are being made and
what factors are being considered. For example, machine learning models
can be made more interpretable by visualizing the features that the model
is using to make decisions, or by providing explanations for why certain
decisions are being made.
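
As a concrete illustration, even a simple post-hoc check such as permutation
importance reveals which inputs a model actually leans on. The sketch below
trains a toy loan-approval model on synthetic data (the feature names are
illustrative assumptions, not a real lender’s variables) and measures how
much predictive performance drops when each feature is shuffled.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000

# Synthetic loan applicants; feature names are illustrative assumptions.
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
postcode_group = rng.integers(0, 2, n)
# Historical approvals that (problematically) depend on the postcode group.
approved = (((income > 45) & (debt_ratio < 0.6)) | (postcode_group == 1)).astype(int)

X = np.column_stack([income, debt_ratio, postcode_group])
model = RandomForestClassifier(random_state=0).fit(X, approved)

result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "postcode_group"], result.importances_mean):
    print(f"{name:15s} importance: {score:.3f}")
# A large importance for postcode_group is a red flag that the model leans on
# a feature stakeholders may consider a proxy for protected attributes.
```
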

Another approach to transparency and accountability in AI is through the
use of auditing and certification frameworks (An Internal Auditing Framework
to Improve Algorithm Responsibility, 2020). These frameworks help to
ensure that AI systems are being used in a way that is consistent with ethical
and societal values, and can provide a mechanism for holding organizations
accountable for the use of AI. For example, the European Union’s General
Data Protection Regulation (GDPR) includes provisions for the auditing and
certification of AI systems, which can help to ensure that these systems are
being used in a way that is consistent with data protection laws and ethical
principles (EU General Data Protection Regulation (GDPR) - Definition -
Trend Micro IN, 2016).
There are also a number of initiatives and organizations that are focused on
promoting transparency and accountability in AI. For example, the Partnership
on AI is a collaboration between industry, academia, and civil society that is
focused on promoting responsible AI practices. The organization has developed
a number of resources and best practices for promoting transparency and
accountability in AI, including guidelines for the use of explainability and
interpretability techniques, and recommendations for auditing and certification
frameworks.
Overall, transparency and accountability are critical to ensuring that AI
systems are being used in a way that is consistent with ethical and societal
values. By promoting transparency and accountability in AI, we can help
build trust in these systems and ensure that they are being used in a way that
benefits society as a whole.

2.6 Autonomy and Responsibility


As AI systems become more sophisticated, there is a risk of them acting
autonomously and making decisions without human intervention. This raises
questions about who is responsible when AI systems make decisions that
have a significant impact on people’s lives.
Autonomy and responsibility are important ethical and social implications
of AI, as AI systems become more advanced and are able to make decisions
without human intervention. As these systems become more autonomous,
it can be difficult to determine who is responsible when decisions are made
that have a significant impact on people’s lives (Atske, 2018).
One example of the challenges associated with autonomy and responsibility
in AI is the use of autonomous vehicles. These vehicles use AI systems to
make decisions about how to navigate roads, avoid obstacles, and respond
to changing conditions. If an autonomous vehicle is involved in an accident,
it can be difficult to determine who is responsible for the accident. Is it the
fault of the vehicle’s manufacturer, the AI system developer, or the person
who was operating the vehicle?
According to The Guardian, a worker was fatally crushed by a robot at
a Volkswagen production plant in Germany. The incident occurred while
the 22-year-old man was assisting in the assembly of the stationary robot
responsible for grabbing and configuring car parts. The robot reportedly
grabbed the worker and pressed him against a metal plate, leading to his
death. The victim’s identity has not been disclosed. Volkswagen stated that
the robot is programmable for specific functions and that it suspects human
error as the cause of the malfunction (Press, 2015).
Another example of the challenges associated with autonomy and
responsibility in AI is the use of autonomous weapons systems. There are
several concerns surrounding the development and deployment of AI-powered
autonomous weapon systems. One of the main concerns is the lack of human
oversight and control, which could lead to unintended consequences and
potentially catastrophic outcomes. Without proper human intervention,
autonomous weapons could make decisions based on flawed or incomplete
information, or even malfunction and cause harm to civilians or friendly forces.
Another issue is the potential for these systems to be hacked or hijacked
by malicious actors, who could use them for their own purposes. This could
include using autonomous weapons to target critical infrastructure or cause
widespread destruction, or even use them as a means of assassination or
terrorist attacks.
There are also ethical and legal considerations, as the use of autonomous
weapons raises questions about responsibility and accountability. It is unclear
who would be held responsible in the event of an autonomous weapon causing
harm or violating international laws and regulations.
Given these concerns, there have been calls for greater regulation and
oversight of the development and deployment of AI-powered autonomous
weapon systems. Many experts argue that there needs to be a framework
in place to ensure that these systems are used in a responsible and ethical
manner, and that they are subject to appropriate human oversight and control.
In March 2016, Microsoft released a chatbot named Tay on Twitter that was
designed to learn from and interact with users naturally and engagingly. Tay
was programmed to use artificial intelligence and natural language processing
techniques to understand and respond to users’ messages.
However, within hours of being released, Tay began to spout offensive
and inappropriate messages, including racist, sexist, and other discriminatory
remarks. This was due to Tay’s learning algorithms being influenced by the
hostile and abusive messages it received from Twitter users (Vincent, 2016).
Microsoft quickly shut down the chatbot and issued an apology, stating that
they had not anticipated the extent of the negative impact that online trolls
could have on Tay’s behavior. The incident highlighted the potential risks and
challenges associated with using artificial intelligence and machine learning
in social media and online communication. It also raised questions about the
responsibility of tech companies to monitor and regulate the behavior of their
AI-powered platforms.
The Dutch scandal, also known as the “childcare allowance affair,” was
a major political scandal in the Netherlands that came to light in 2019. The
scandal involved the wrongful accusation of an estimated 26,000 parents of
making fraudulent benefit claims between 2005 and 2019 (“Dutch Childcare
Benefits Scandal,” 2022).
The scandal began when the Dutch government began cracking down on
fraud in the childcare allowance system. The system was designed to help
working parents pay for childcare costs, but there were concerns that some
parents were abusing the system. In an effort to root out fraud, the government
began using a system of algorithms to identify potentially fraudulent claims.
However, the algorithms used were found to be flawed and led to many
false accusations. As a result, thousands of parents were accused of fraud
and were forced to repay large amounts of money to the government. Many
of these parents were from low-income backgrounds and were unable to pay
back the money, leading to financial ruin and personal hardship.
The scandal eventually came to light in 2019, after journalists from the
Dutch newspaper Trouw began investigating the case. The investigation found
that the government had ignored warnings about the flawed algorithms and
had failed to provide proper support to the wrongly accused parents.
The scandal led to widespread outrage in the Netherlands, with many
calling for the resignation of government officials involved in the case. The
government eventually issued a formal apology and set up a compensation
fund for the affected parents. In January 2021, the Dutch government collapsed
after a parliamentary report found that officials had pursued a policy of ethnic
profiling, targeting families with dual nationality or non-western backgrounds,
which had led to discrimination and violations of human rights.
The Dutch childcare allowance affair is a cautionary tale about the potential
dangers of using algorithms and automated systems in decision-making. It
highlights the importance of ensuring that these systems are properly tested
and monitored, and that appropriate safeguards are put in place to prevent
errors and protect the rights of individuals.

To address the challenges associated with autonomy and responsibility in
AI, there is a need for clear frameworks and guidelines for the development
and use of autonomous systems. These frameworks should address issues
such as accountability, liability, and transparency and should be designed to
ensure that AI systems are being used in a way that is consistent with ethical
and societal values. In addition, there is a need for ongoing dialogue and
engagement with stakeholders from across society, including policymakers,
civil society organizations, and the general public, to ensure that these
frameworks are being developed in a way that is responsive to the needs and
concerns of different groups (Improving Working Conditions in Platform
Work, 2021).
Overall, autonomy and responsibility are important ethical and social
implications of AI that will become increasingly important as AI systems
become more advanced and are able to make decisions without human
intervention. By addressing these issues proactively and collaboratively, we
can ensure that AI systems are being used in a way that is consistent with
ethical and societal values and that benefits society as a whole.
Addressing these ethical and social implications of AI will require
collaboration across multiple stakeholders, including governments, industry,
civil society, and academia. It will also require a multi-disciplinary approach
that considers not only the technical aspects of AI but also the societal and
ethical implications. Efforts to address these issues should be grounded in
human rights and values, with a focus on ensuring that AI is used in a way
that benefits all members of society.

3. FUTURE PROSPECTS
According to policy researchers who study AI policymaking with the goal of
maximizing societal benefits, labor management handled by AI has drawn
numerous complaints of unfairness at companies such as Amazon, Starbucks,
and Uber. In the process of making amends, a legislative file called the
platform economy directive has been introduced in the European Union, aimed
at improving working conditions in platform work (Cairn.Info, 2022).
The future holds both benefits and challenges for ethical AI in the
social context.
The development of ethical AI in the context of social sciences holds
immense potential for revolutionizing the way we understand and address
social problems. One of the key benefits of ethical AI is improved accuracy
and fairness. By incorporating ethical considerations into the design of AI
systems, researchers can ensure that these systems are fair and accurate
and do not perpetuate biases and prejudices. Ethical AI can also enhance
privacy and security by protecting individuals’ personal data and preventing
unauthorized access or misuse of sensitive information. Additionally, ethical
AI can increase transparency and accountability, providing clear explanations
of how decisions are made and enabling individuals and communities to
hold organizations accountable for their actions. Ethical AI can also be used
to develop and deliver more effective social services, such as healthcare,
education, and public safety, helping to address some of the biggest social
challenges we face. Finally, by optimizing their operations with ethical AI,
organizations can reduce costs and deliver better social outcomes, providing
more resources to address social problems.
While the development of ethical AI in the context of social sciences
presents significant benefits, it also poses several challenges and risks. One
of the main challenges is ensuring data quality and avoiding bias. Ethical AI
must be designed with accurate and representative data and avoid perpetuating
bias, which could lead to unfair or discriminatory outcomes. Another
challenge is the potential for unintended consequences. Even well-designed
systems may have unintended consequences that could cause harm. Moreover,
balancing ethical considerations with practical and financial constraints can
be challenging, particularly when resources are limited. Finally, the lack of
transparency in AI systems can make it difficult to interpret and understand
them and hold organizations accountable for their actions. It is essential for
researchers, policymakers, and practitioners to work together to address these
challenges and ensure that ethical AI is developed and deployed in ways that
promote social good and minimize harm.

CONCLUSION
The development and proliferation of AI have significant social and ethical
implications that must be considered and addressed. The potential benefits of
AI are immense, including increased efficiency, productivity, and innovation
in various sectors. However, the use of AI also presents significant risks
and challenges, such as job displacement, privacy violations, bias and
discrimination, and the development of autonomous weapons.
The incidents of AI’s bias and discriminatory behavior towards marginalized
groups serve as a warning that AI technology still has a long way to go before
it can be fully trusted. The incidents also highlight the importance of ethical
guidelines and regulations to ensure that AI operates in a manner that is
transparent, accountable, and aligned with human values and interests.
Moreover, the advancement of AI technology should be accompanied by
a comprehensive understanding of its potential impacts on society, including
the ethical and moral implications that arise when AI systems make decisions
that affect human lives. It is, therefore, essential to have interdisciplinary
collaboration, including experts from different fields, to ensure that the
development and use of AI technology align with ethical principles, social
values, and human rights.
In conclusion, while AI offers immense potential for societal benefits, it
is essential to approach its development and use in a responsible and ethical
manner to mitigate the risks and ensure that the benefits of AI are widely
shared.

REFERENCES
Amazon scrapped “sexist AI” tool. (2018, October 10). BBC News. https://
www.bbc.com/news/technology-45809919
An internal auditing framework to improve algorithm responsibility. (2020,
October 30). Hello Future. https://hellofuture.orange.com/en/auditing-ai-
when-algorithms-come-under-scrutiny/
Artificial Intelligence, Robots and Unemployment: Evidence from OECD
Countries. (2022). Cairn. https://www.cairn.info/revue-journal-of-innovation-
economics-2022-1-page-117.htm
Atske, S. (2018, December 10). Artificial Intelligence and the Future of
Humans. Pew Research Center: Internet, Science & Tech. https://www.
pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-
of-humans/
Babuta, A., Oswald, M., & Janjeva, A. (2020). Artificial Intelligence and
UK National Security.
Lee, N. T., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias
detection and mitigation: Best practices and policies to reduce consumer harms.
Brookings. https://www.brookings.edu/research/algorithmic-bias-detection-
and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

Bohr, A., & Memarzadeh, K. (2020). The rise of artificial intelligence in
healthcare applications. In A. Bohr & K. Memarzadeh (Eds.), Artificial
Intelligence in Healthcare (pp. 25–60). Academic Press. doi:10.1016/B978-
0-12-818438-7.00002-2
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B.,
Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen,
G. C., Steinhardt, J., & Flynn, C., Éigeartaigh, S. Ó., Beard, S., Belfield,
H., Farquhar, S., & Amodei, D. (2018). The Malicious Use of Artificial
Intelligence: Forecasting, Prevention, and Mitigation (arXiv:1802.07228).
arXiv. doi:10.48550/arXiv.1802.07228
Cinà, A. E., Grosse, K., Demontis, A., Biggio, B., Roli, F., & Pelillo, M.
(2022). Machine Learning Security against Data Poisoning: Are We There
Yet? (arXiv:2204.05986). arXiv. https://arxiv.org/abs/2204.05986
Elsesser, K. (2019). Maybe The Apple And Goldman Sachs Credit Card Isn’t
Gender Biased. Forbes. https://www.forbes.com/sites/kimelsesser/2019/11/14/
maybe-the-apple-and-goldman-sachs-credit-card-isnt-gender-biased/
Facebook robots shut down after they talk to each other in language only
they understand. (2020, September 10). The Independent. https://www.
independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-
language-research-openai-google-a7869706.html
Feldstein, S. (2019, September 17). The Global Expansion of AI Surveillance. Carnegie
Endowment for International Peace. https://carnegieendowment.
org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847
General Data Protection Regulation (GDPR) Definition and Meaning. (n.d.).
Investopedia. https://www.investopedia.com/terms/g/general-data-protection-
regulation-gdpr.asp
Guembe, B., Azeta, A., Misra, S., Osamor, V. C., Fernandez-Sanz, L., &
Pospelova, V. (2022). The Emerging Threat of Ai-driven Cyber Attacks: A
Review. Applied Artificial Intelligence, 36(1), 2037254. doi:10.1080/0883
9514.2022.2037254
Hamadah, S., & Aqel, D. (2020). Cybersecurity Becomes Smart Using Artificial
Intelligent and Machine Learning Approaches: An Overview (No. 12). ICIC
International. doi:10.24507/icicelb.11.12.1115

Hildebrand, C., Efthymiou, F., Busquet, F., Hampton, W. H., Hoffman, D.
L., & Novak, T. P. (2020). Voice analytics in business research: Conceptual
foundations, acoustic feature extraction, and applications. Journal of Business
Research, 121, 364–374. doi:10.1016/j.jbusres.2020.09.020
How AI and machine learning are changing the phishing game. (2022, October
10). VentureBeat. https://venturebeat.com/ai/how-ai-machine-learning-
changing-phishing-game/
Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2),
2053951720938091. doi:10.1177/2053951720938091
Improving working conditions in platform work. (2021). European Commission
- European Commission. https://ec.europa.eu/commission/presscorner/detail/
en/ip_21_6605
Incident 47: LinkedIn Search Prefers Male Names. (2013, January 23).
Incident Database. https://incidentdatabase.ai/cite/47/
Incident 55: Alexa Plays Pornography Instead of Kids Song. (2015, December
5). Incident Database. https://incidentdatabase.ai/cite/55/
Kaloudi, N., & Li, J. (2020). The AI-Based Cyber Threat Landscape: A
Survey. ACM Computing Surveys. doi:10.1145/3372823
Lagioia, F., Rovatti, R., & Sartor, G. (2022). Algorithmic fairness through group
parities? The case of COMPAS-SAPMOC. AI & Society. doi:10.1007/
s00146-022-01441-y
Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The role of explainability
in creating trustworthy artificial intelligence for health care: A comprehensive
survey of the terminology, design choices, and evaluation strategies. Journal
of Biomedical Informatics, 113, 103655. doi:10.1016/j.jbi.2020.103655
PMID:33309898
Matyszczyk, C. (2016, September 9). Can robots show racial bias? CNET.
https://www.cnet.com/culture/can-robots-show-racial-bias/
McClelland, C. (2023, January 31). The Impact of Artificial Intelligence—
Widespread Job Losses. IoT For All. https://www.iotforall.com/impact-of-
artificial-intelligence-job-losses
McGraw, G., Bonett, R., Shepardson, V., & Figueroa, H. (2020). The Top 10
Risks of Machine Learning Security. Computer, 53(6), 57–61. doi:10.1109/
MC.2020.2984868

Ng, M. A., Naranjo, A., Schlotzhauer, A. E., Shoss, M. K., Kartvelishvili,
N., Bartek, M., Ingraham, K., Rodriguez, A., Schneider, S. K., Silverlieb-
Seltzer, L., & Silva, C. (2021). Has the COVID-19 Pandemic Accelerated
the Future of Work or Changed Its Course? Implications for Research and
Practice. International Journal of Environmental Research and Public Health,
18(19), 19. Advance online publication. doi:10.3390/ijerph181910199
PMID:34639499
Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep Neural Networks are
Easily Fooled: High Confidence Predictions for Unrecognizable Images
(arXiv:1412.1897). arXiv. doi:10.1109/CVPR.2015.7298640
Nikitas, A., Vitel, A.-E., & Cotet, C. (2021). Autonomous vehicles and
employment: An urban futures revolution or catastrophe? Cities (London,
England), 114, 103203. doi:10.1016/j.cities.2021.103203
Nyce, C. (2007). Predictive Analytics (White Paper).
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting
racial bias in an algorithm used to manage the health of populations. Science,
366(6464), 447–453. doi:10.1126/science.aax2342 PMID:31649194
Press, A. (2015, July 2). Robot kills worker at Volkswagen plant in Germany.
The Guardian. https://www.theguardian.com/world/2015/jul/02/robot-kills-
worker-at-volkswagen-plant-in-germany
RPT-Goldman faces probe after entrepreneur slams Apple Card algorithm
in tweets. (2019, November 10). Reuters. https://www.reuters.com/article/
goldman-sachs-probe-idCNL2N27Q005
Sheehan, B., Murphy, F., Mullins, M., & Ryan, C. (2019). Connected and
autonomous vehicles: A cyber-risk classification framework. Transportation
Research Part A, Policy and Practice, 124, 523–536. doi:10.1016/j.
tra.2018.06.033
Stephens-Davidowitz, S., & Pinker, S. (2017). Everybody lies: Big data, new
data, and what the Internet can tell us about who we really are (1st ed.). Dey
St., an imprint of William Morrow.
Study finds gender and skin-type bias in commercial artificial-intelligence
systems. (2018, February 12). MIT News. https://news.mit.edu/2018/study-
finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Tesla Vehicle Safety Report. (n.d.). Tesla. https://www.tesla.com/
VehicleSafetyReport
Thanh, C. T., & Zelinka, I. (2019). A Survey on Artificial Intelligence in
Malware as Next-Generation Threats. MENDEL, 25(2), 2. doi:10.13164/
mendel.2019.2.027
Usman, M., Farooq, M., Wakeel, A., Nawaz, A., Cheema, S. A., Rehman,
H., Ashraf, I., & Sanaullah, M. (2020). Nanotechnology in agriculture:
Current status, challenges and future opportunities. The Science of the
Total Environment, 721, 137778. doi:10.1016/j.scitotenv.2020.137778
PMID:32179352
Vincent, J. (2016, March 24). Twitter taught Microsoft’s AI chatbot to
be a racist asshole in less than a day. The Verge. https://www.theverge.
com/2016/3/24/11297050/tay-microsoft-chatbot-racist
von Eschenbach, W. J. (2021). Transparency and the Black Box Problem:
Why We Do Not Trust AI. Philosophy & Technology, 34(4), 1607–1622.
doi:10.1007/s13347-021-00477-0
Woodie, A. (2015, November 20). Beauty contest features robot judges
trained by deep learning algorithms. Datanami. https://www.datanami.
com/2015/11/20/beauty-contest-features-algorithmic-judges/
Zhang, M. (2015). Google Photos Tags Two African-Americans As Gorillas
Through Facial Recognition Software. Forbes. https://www.forbes.com/sites/
mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-
through-facial-recognition-software/
