Ethics in AI

Module 1: Introduction to Ethics and AI

Definition of ethics and its importance in AI.


Ethics is a system of moral principles that guides people's decisions and
actions. Many factors contribute to what an individual identifies as their
ethics, including background, education, and past experiences, to name a
few. Because we are all unique individuals, ethics often leads to rich
discussion and debate about morality, since each person has their own
understanding of what ethics means to them.
Ethics helps us differentiate between right and wrong, promotes fairness,
encourages us to take accountability for our actions, makes us transparent,
ensures that we do not violate others' privacy, and makes us conscious of
inclusivity.
Ethics are essential for humans as well as for Artificial Intelligence.
While Artificial Intelligence (AI) is a set of technologies that allow
computers to perform tasks that typically require human intelligence, the
one human element that AI cannot replace is ethics.
AI ethics are important because AI technology is meant to augment or
replace human intelligence—but when technology is designed to replicate
human life, the same issues that can cloud human judgment can seep into
the technology.
Humans possess traits like creativity, empathy, emotional intelligence,
ethical judgment, intuition, adaptability to new situations, complex social
understanding, and the ability to form personal relationships, which AI
currently lacks; AI primarily operates on pre-programmed algorithms and
data analysis, making it unable to fully replicate these nuanced human
qualities.
Humans can make complex moral decisions based on context and
personal values, whereas AI lacks this nuanced understanding. Humans
establish the ethical frameworks and principles that AI systems must
follow.
AI ethics are the moral principles companies use to guide responsible and
fair development and use of AI.

Key ethical principles: fairness, accountability, transparency, privacy, and inclusivity.
Ethics in AI revolves around ensuring that AI systems are designed and
implemented in ways that uphold human values and rights. Below we
delve into the key ethical principles: fairness, accountability,
transparency, privacy, and inclusivity.
Fairness
This key element is necessary to ensure that AI systems treat everyone
equally, without bias or discrimination. Bias and discrimination can be
passed on from the data an AI system is trained on, resulting in unfair
treatment of certain individuals.
Therefore, it is important to ensure fairness by designing algorithms, such
as those used in hiring, to avoid bias or discrimination against certain
demographics.
Accountability
This key element ensures that humans remain responsible for AI systems'
design, deployment, and outcomes, including addressing harm caused by
the systems. Without clear accountability, it can be difficult to determine
who is responsible for errors, misuse, or harm caused by AI.
To ensure that accountability is taken when an error occurs, clear legal
and ethical responsibility should be established for developers,
organizations, and operators of AI systems. One way of doing this is by
auditing AI systems to ensure compliance with ethical and legal standards.
Transparency
The transparency key element involves making AI systems and their
decision-making processes understandable and accessible to users and
stakeholders. A lack of transparency can lead to mistrust,
misunderstanding, or misuse of AI, especially in high-stakes contexts.
To support transparency, users of an AI system should be trained on how
to use it, and the system's decision-making process should be explained
in terms they can understand.
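As a loose illustration of decision-level transparency (the model type, weights, and feature names below are invented for this example, not taken from any real system), a simple linear scoring model can show a user which factors contributed to a decision and by how much:

# Minimal sketch (hypothetical weights and features): break a linear model's score
# into per-feature contributions so a user can see why a decision was made.
WEIGHTS = {"income": 0.4, "years_employed": 0.3, "existing_debt": -0.5}  # invented values
BIAS = 0.1

def explain(applicant: dict) -> None:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    print(f"score = {score:.2f}")
    # List features from most to least influential for this particular decision.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")

explain({"income": 1.2, "years_employed": 0.5, "existing_debt": 1.0})

More complex models need dedicated explanation techniques, but the goal is the same: giving users and stakeholders a readable account of how an output was reached.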
Privacy
The privacy key ethical principle ensures that AI respects individuals'
rights to control their personal data and protects sensitive information
from misuse. AI systems often rely on large datasets, which may include
personal, sensitive, or identifiable information. Mishandling this data can
lead to breaches, surveillance, or exploitation.
One way to mitigate the invasion of privacy is by encrypting and
anonymizing personal data used in AI training and operation. Privacy can
also be protected by ensuring compliance with privacy laws like the GDPR
(General Data Protection Regulation) or the CCPA (California Consumer
Privacy Act), and by giving users control over how their data is collected,
stored, and used by AI systems.
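To make the anonymization step concrete, here is a minimal Python sketch; the field names, salt, and record are hypothetical, and the approach shown (salted hashing plus dropping unneeded fields) is only one simple option among many:

import hashlib

SALT = "replace-with-a-secret-value"  # hypothetical; in practice store secrets securely, never in code

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted hashes and drop fields not needed for training."""
    cleaned = dict(record)
    for field in ("name", "email"):  # hypothetical identifier fields
        if field in cleaned:
            digest = hashlib.sha256((SALT + cleaned[field]).encode()).hexdigest()
            cleaned[field] = digest[:16]  # truncated hash used as a pseudonym
    cleaned.pop("phone", None)  # data minimization: remove a field the model does not need
    return cleaned

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100", "age": 34}))

Pseudonymization of this kind reduces, but does not eliminate, re-identification risk, which is why it is combined with encryption, access controls, and legal compliance.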
Inclusivity
Inclusivity ensures that AI systems are accessible to and beneficial for all
individuals and communities, avoiding marginalization or exclusion. AI
should be designed to meet the needs of diverse populations and ensure
that no one is unfairly disadvantaged by its implementation.
Historical context of AI and Ethics:
What started out as speculative fiction and philosophical debate has
become part of our daily lives. AI is everywhere and used for almost
everything, whether at the workplace, in academics, or for basic tasks at
home. Now that it has become part of our routines, ethics plays a pivotal
role in guiding its use.
Below we look at the historical context of AI and ethics, delving into its
roots and fruits.
The roots of AI ethics can be traced back to early discussions about
machine intelligence and morality:
Isaac Asimov's "Three Laws of Robotics" (1942): In his science
fiction, Asimov proposed laws to govern the behaviour of robots,
emphasizing harm prevention, obedience to humans, and self-
preservation. These laws remain a symbolic starting point for ethical
discussions in AI.
Alan Turing’s "Computing Machinery and Intelligence" (1950):
Turing explored whether machines could think and proposed the Turing
Test to evaluate intelligence. While not directly about ethics, his work
highlighted concerns about how humans would relate to intelligent
machines.
The Advent of AI Research and Ethical Concerns (1956–1970s)
Dartmouth Conference (1956): This event marked the formal
beginning of AI as a field. Early AI research focused on logical reasoning,
problem-solving, and automation, but ethical concerns were minimal, as
AI systems were rudimentary and far from autonomous.
Ethical Implications of Automation: By the 1960s, concerns emerged
about the impact of automation on employment and human labor.
Questions arose about how AI would affect economic and social
structures.
Rise of Cybernetics: Discussions in the 1960s about systems theory and
feedback loops (cybernetics) raised philosophical and ethical questions
about machine autonomy and control.
Ethical Dilemmas in AI during the "AI Winter" (1970s–1980s)
AI development slowed during periods of reduced funding and optimism
(known as the "AI winters"). Ethical discussions were largely academic but
laid groundwork for future debates:
Concerns about autonomous weapons: Early military AI systems raised
questions about the morality of delegating life-and-death decisions to
machines.
The emergence of privacy concerns: As computer systems became
capable of storing and analyzing data, ethical discussions about data
collection and surveillance began to surface.

Rise of Ethical AI with Advancements in Technology (1990s–2000s)
Resurgence of AI: Improved computational power, data availability, and
machine learning algorithms reignited AI development.
Privacy in the Digital Age:
The internet boom and the rise of big data led to heightened concerns
about privacy. Organizations began addressing how AI systems collected,
stored, and used personal data.
Data Protection Directive (1995): A precursor to GDPR, this
emphasized ethical data practices.
AI in Decision-Making:
AI systems began influencing decisions in finance, healthcare, and law
enforcement, raising questions about fairness, accountability, and
discrimination.
Researchers highlighted cases where biased data led to unfair or unethical
outcomes, such as in credit scoring and hiring.

Modern AI and Ethics (2010s–Present)


Explosive Growth of AI Applications:
AI-powered technologies like facial recognition, recommendation systems,
and autonomous vehicles became widespread, bringing ethical concerns
to the forefront.
The use of AI in sensitive areas, such as predictive policing, healthcare,
and hiring, highlighted issues like bias, fairness, and accountability.
Key Ethical Concerns and Frameworks:
Bias in AI: Studies revealed how biased training data led to
discrimination against certain groups (e.g., gender or racial biases in
hiring systems or facial recognition software).
Autonomous Vehicles: Ethical dilemmas like the "trolley problem"
became relevant as self-driving cars required moral decision-making in
life-and-death scenarios.
Surveillance and Privacy: Governments and companies using AI for
surveillance (e.g., China’s social credit system) triggered debates about
privacy and human rights.
Notable Ethical Frameworks:
Asilomar AI Principles (2017): Proposed by AI researchers and ethicists
to guide AI development with principles emphasizing safety, transparency,
and alignment with human values.
EU’s Ethical Guidelines for Trustworthy AI (2019): A framework
emphasizing fairness, accountability, transparency, and human-centered
design.
OECD AI Principles (2019): Adopted globally to ensure AI respects
human rights and democratic values.

Current Trends and Future Ethical Challenges


Generative AI: Tools like ChatGPT, DALL-E, and others have raised
ethical questions about copyright, misinformation, and accountability for
content produced by AI.
Regulation Efforts: Governments and organizations are now actively
working on AI-specific legislation, such as the EU AI Act, to govern ethical
AI development.
AI Alignment: Discussions on ensuring AI aligns with human values have
intensified, especially with advancements in artificial general intelligence
(AGI) and fears of unintended consequences.
Autonomous Weapons: Debates about banning or regulating AI in
military applications are ongoing, with many advocating for global
treaties.

Case studies: AI successes and failures


AI Success Stories:
Healthcare Diagnostics:
The most widely cited AI healthcare diagnostics success story is that of
Moorfields Eye Hospital. Below is a case study by Designveloper that
unpacks the success story:
Moorfields Eye Hospital – the best-case study of AI in healthcare
Moorfields is the world's oldest eye hospital. Eye health professionals
there had to analyze over 5,000 optical coherence tomography (OCT)
scans per week to spot and diagnose severe eye conditions like diabetic
retinopathy or age-related macular degeneration (AMD). However, manual
analysis of these eye scans could take a long time, which delayed early
detection and diagnosis.
Solution & Result
In 2018, Pearse Keane, a consultant ophthalmologist at Moorfields, came
to DeepMind for AI solutions.
Moorfields and DeepMind collaboratively developed an AI tool that can
identify more than 50 eye diseases as accurately as top eye professionals.
The tool was trained with almost 15,000 OCT scans from 7,500 patients
and real referral decisions. It uses deep learning algorithms to detect the
various anatomical elements of an eye and create a 3D image that shows
the thickness of retinal tissue through near-infrared light.
The software can even offer clinical advice based on the different signs of
eye conditions in the scans. Its recommendations were considered 94%
accurate when judged against the diagnoses of top eye professionals.
The software also explains how it came to its decisions. This helps doctors
and nurses trust its recommendations and use them more carefully.
In addition to early detection, AI algorithms can help predict disease
progression. Google’s DeepMind conducted a test to evaluate how its AI
model could forecast the high risk of an eye converting to exudative AMD.
The model automatically segmented different types of tissues identified in
the eye scans and observed their changes over time. As a result, the
model successfully predicted that the eye would likely worsen at least 2
visits before signs of exAMD became clear.
AI Failure Cases:
Facial Recognition Bias:
Several cases have highlighted issues with facial recognition technology,
where algorithms have shown racial bias in identifying individuals, raising
concerns about potential misuse. Below is a case study by the Innocence
Project, "When Artificial Intelligence Gets It Wrong":
When Artificial Intelligence Gets It Wrong
Unregulated and untested AI technologies have put innocent people at
risk of being wrongly convicted.
Porcha Woodruff was eight months pregnant when she was arrested for
carjacking. The Detroit police used facial recognition technology to run an
image of the carjacking suspect through a mugshot database, and Ms.
Woodruff’s photo was among those returned.
Ms. Woodruff, an aesthetician and nursing student who was preparing her
two daughters for school, was shocked when officers told her that she was
being arrested for a crime she did not commit. She was questioned over
the course of 11 hours at the Detroit Detention Center.
A month later, the prosecutor dismissed the case against her based on
insufficient evidence.
Ms. Woodruff’s ordeal demonstrates the very real risk that cutting-edge
artificial intelligence-based technology — like the facial recognition
software at issue in her case — presents to innocent people, especially
when such technology is neither rigorously tested nor regulated before it
is deployed.
The Real-world Implications of AI
Time and again, facial recognition technology gets it wrong, as it did in
Ms. Woodruff’s case. Although its accuracy has improved over recent
years, this technology still relies heavily on vast quantities of information
that it is incapable of assessing for reliability. And, in many cases, that
information is biased.
In 2016, Georgetown University’s Center on Privacy &
Technology noted that at least 26 states allow police officers to run or
request to have facial recognition searches run against their driver’s
license and ID databases. Based on this figure, the center estimated that
one in two American adults has their image stored in a law enforcement
facial recognition network. Furthermore, given the disproportionate rate at
which African Americans are subject to arrest, the center found that facial
recognition systems that rely on mug shot databases are likely to include
an equally disproportionate number of African Americans.
More disturbingly, facial recognition software is significantly less reliable
for Black and Asian people, who, according to a study by the National
Institute of Standards and Technology, were 10 to 100 times more likely
to be misidentified than white people. The institute, along with other
independent studies, found that the systems’ algorithms struggled to
distinguish between facial structures and darker skin tones.
The use of such biased technology has had real-world consequences for
innocent people throughout the country. To date, six people that we know
of have reported being falsely accused of a crime following a facial
recognition match — all six were Black. Three of those who were falsely
accused in Detroit have filed lawsuits, one of which urges the city to
gather more evidence in cases involving facial recognition searches and to
end the “facial recognition to line-up pipeline.”
Former Detroit Police Chief James Craig acknowledged that if the city’s
officers were to use facial recognition by itself, it would yield
misidentifications “96% of the time.”
The Problem With Depending on AI
Even when an AI-powered technology is properly tested, the risks of a
wrongful arrest and wrongful conviction remain and are exacerbated by
these new tools.
That’s because when AI identifies a suspect, it can create a powerful,
unconscious bias against the technology-identified person that hardens
the focus of an investigation away from other suspects.
Indeed, such technology-induced tunnel vision has already had damaging
ramifications.
For example, in 2021, Michael Williams was jailed in Chicago for the first-
degree murder of Safarian Herring based on a ShotSpotter alert that
police received. Although ShotSpotter purports to triangulate a gunshot’s
location through an AI algorithm and a network of microphones, an
investigation by the Associated Press found that the system is deeply
statistically unreliable because it can frequently miss live gunfire or
mistake other sounds for gunshots. Still, based on the alert and a
noiseless security video that showed a car driving through an intersection,
Mr. Williams was arrested and jailed for nearly a year even though police
and prosecutors never established a motive explaining his alleged
involvement, had no witnesses to the murder, and found no physical
evidence tying him to the crime. According to a federal lawsuit later filed
by Mr. Williams, investigators also ignored other leads, including reports
that another person had previously attempted to shoot Mr. Herring. Mr.
Williams spent nearly a year in jail before the case against him was
dismissed.
Cases like Ms. Woodruff’s and Mr. Williams’ highlight the dangers of law
enforcement’s overreliance on AI technology, including an unfounded
belief that such technology is a fair and objective processor of data.
Absent comprehensive testing or oversight, the introduction of additional
AI-driven technology will only increase the risk of wrongful conviction and
may displace the effective policing strategies, such as community
engagement and relationship-building, that we know can reduce wrongful
arrests.
Addressing AI in the Criminal Legal System
We enter this fall with a number of significant victories under our belt —
including 7 exonerations since the start of the year. Through the cases of
people like Rosa Jimenez and Leonard Mack, we’ve leveraged significant
advances in DNA technology and other sciences to free innocent people
from prison.
We are committed to countering the harmful effects of emerging
technologies, advocating for research on AI’s reliability and validity, and
urging consideration of the ethical, legal, social and racial justice
implications of its use.
We support a moratorium on the use of facial recognition technology in
the criminal legal system until such time as research establishes its
validity and impacted communities are given the opportunity to weigh in
on the scope of its implementation.
We are pushing for more transparency around so-called “black box
technologies” — technologies whose inner workings are hidden from
users.
We believe that any law enforcement reliance on AI technology in a
criminal case must be immediately disclosed to the defence and subjected
to rigorous adversarial testing in the courtroom.
Building on President Biden’s executive order directing the National
Academy of Sciences to study certain AI-based technologies that can lead
to wrongful convictions, we are also collaborating with various partners to
collect the necessary data to enact reforms.
And, finally, we encourage Congress to make explicit the ways in which it
will regulate investigative technologies to protect personal data.
It is only through these efforts that we can protect innocent people from
further risk of wrongful conviction in today's digital age.
With gratitude,
Christina Swarns
Executive Director, Innocence Project

Activities:
 Group discussion: “What does ethical AI mean to you?”
 Case analysis of AI applications with ethical concerns (e.g., facial
recognition, biased algorithms).
Module 2:
Utilitarianism, deontology, and virtue ethics
Utilitarianism, deontology, and virtue ethics are distinct ethical
frameworks and theories that provide different approaches to evaluating
morality and guiding ethical decision-making. Below we unpack each of
these.
Utilitarianism
Utilitarianism is a moral philosophy that promotes actions that increase
happiness and well-being for the greatest number of people. It's a type of
consequentialism, which means that the morality of an action is based on
its consequences.
Utilitarianism is widely used in policymaking, business ethics, and
healthcare resource allocation (e.g., deciding how to allocate vaccines to
save the most lives).
Deontology
Deontology is a moral philosophy that judges actions based on rules and
principles, rather than the consequences of those actions. It's also known
as duty-based ethics.
Frequently applied in law, human rights, and ethical debates (e.g.,
opposing torture, even if it might save lives).
Virtue Ethics
Virtue ethics is a philosophical approach that focuses on character and
virtue as the most important aspects of ethics. It emphasizes developing
and demonstrating virtues like courage, wisdom, and temperance, while
avoiding vices like greed and selfishness.
Often used in education, leadership development, and healthcare (e.g.,
promoting compassion, honesty, and empathy in caregivers).

Rights-based and justice-based approaches


When making ethics-based decisions, the two approaches we normally
lean towards are rights-based or justice-based. In most cases we consider
an individual's rights before acting, and if those rights have been violated,
we turn to justice. In AI ethics, balancing both approaches is essential to
ensure AI respects individual rights while promoting fairness in society.
Rights-based approach:
The rights-based approach is an ethical framework grounded in human
rights as fundamental moral principles. The firm belief is that human
rights should not be violated in any sense, under any circumstances.
When we apply the rights-based approach to AI, we understand that no
one's privacy should be invaded, no one should be discriminated against,
and all users should give consent to how AI accesses their data.
Justice-based approach:
The Justice-based approach focuses on the equality and fair treatment of
all. The aim is to distribute benefits and burdens equally amongst society.
When we apply the justice-based approach to AI, we understand that AI
designers and users should avoid discrimination and bias, and that laws
should be established to ensure ethical AI deployment.

The precautionary principle in AI development.


The precautionary principle suggests that AI systems should be carefully
regulated, tested, and monitored to prevent potential harm before
deployment. The term “better safe than sorry” is popularly associated
with the precautionary principle in AI development.
The precautionary principle suggests that AI developers should do
extensive planning, practising and research before executing, so that they
pick up on risks before they occur and escalate. Clear rules and regulations
should be set in place for all AI developments. This might slow down the
creative process, but it also instils a sense of ownership and accountability
in developers.

Activities:
 Application of ethical theories to AI scenarios.
 Group exercise: Developing an ethical charter for an AI company.
Module 3: Bias and Fairness in AI
Understanding algorithmic bias:
Flawed training data can result in algorithms that repeatedly produce
errors, unfair outcomes, or even amplify the bias inherent in the flawed
data.
Algorithmic bias can also be caused by programming errors, such as a
developer unfairly weighting factors in algorithm decision-making based
on their own conscious or unconscious biases. For example, indicators like
income or vocabulary might be used by the algorithm to unintentionally
discriminate against people of a certain race or gender.
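As a rough illustration of this proxy effect (the data, the "income" feature, and the 0.8 threshold below are all made up for the example), a quick check of how strongly a seemingly neutral feature tracks a protected attribute can flag it for review:

from statistics import mean

# Minimal sketch with toy records: group is a protected attribute (0/1),
# income is a candidate input feature for some decision-making algorithm.
group  = [0, 0, 0, 0, 1, 1, 1, 1]
income = [32, 35, 30, 33, 58, 61, 55, 60]  # income separates the two groups almost perfectly

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(group, income)
print(f"correlation(group, income) = {r:.2f}")
if abs(r) > 0.8:  # illustrative threshold, not a standard
    print("income is strongly associated with the protected attribute and may act as a proxy")

A high correlation does not prove discrimination on its own, but it is a simple early-warning signal that a feature deserves closer scrutiny.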

Sources and impacts of bias in AI:


There are many sources of bias in AI, algorithmic bias being one of them.
There is also data bias: data collected in the past can shape outcomes in
the present. For example, a facial recognition system may recognise a
white person more easily than a person of colour because of the data it
was trained on. Human bias can also be introduced by the developer,
whether conscious or unconscious.
The impact of these biases is that society may start to distrust the use of
AI, and legal action can also be taken because of them.

Strategies to mitigate bias: diverse datasets, inclusive design:


Bias in AI can lead to unfair, discriminatory, or harmful outcomes. Key
strategies to mitigate bias in AI systems include: using diverse datasets
that represent a wide range of demographics; implementing inclusive
design practices, which means actively considering the needs of all users,
including those from underrepresented groups, throughout the
development process; ensuring that AI algorithms are transparent and
explainable; and regularly monitoring model performance to identify and
address potential biases.
Other ways to mitigate bias include collecting data from diverse sources,
regularly auditing data, supporting a diverse range of languages, and
designing for accessibility so that systems are usable by persons with
disabilities. Human oversight is also essential to monitor AI decisions and
to intervene when there is a discrepancy.
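As a rough sketch of what regularly auditing data or model outputs can look like in practice (the groups, outcomes, and the idea of comparing rates are simplified assumptions, not a complete fairness audit), the following Python snippet compares favourable-outcome rates across demographic groups:

from collections import defaultdict

# Minimal audit sketch with hypothetical (group, outcome) pairs; 1 = favourable decision.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("favourable-outcome rate per group:", rates)

ratio = min(rates.values()) / max(rates.values())
print(f"ratio between worst- and best-treated groups = {ratio:.2f}")  # values far below 1.0 warrant review

Running such a check at regular intervals, and before each release, is one concrete way to turn the auditing and human-oversight recommendations above into routine practice.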

Activities:
 Hands-on workshop: Identifying bias in AI datasets and models.
 Debate: “Can AI ever be truly unbiased?”
Module 4: Privacy and Data Protection
Ethical considerations in data collection and usage:
The foundation of AI is data; without it, AI would not exist. Given that data
plays such a pivotal role in AI, it must be collected and used ethically and
with consent, otherwise the whole AI system will be doubted and seen as
lacking credibility. Therefore, AI systems must be transparent and
responsible with the data they collect to prevent harm and bias.
One ethical consideration is to use only the minimum necessary data and
to follow all the legal frameworks relevant to the AI application at hand.
Another is to always obtain consent from those involved.
AI designers must make sure to respect human rights while balancing
innovation.

Consent and data ownership:


With AI being so reliant on data, the people whose data is used need to be
comfortable and at ease with their information being used. For that to
happen, they have to give consent, the use of their information must be
transparent, and their participation must be clear and voluntary, so that
they feel in control of how their data is used.
When we speak about data ownership, we establish who has control over
data and how it is used, stored and shared. There are three types of
parties involved: data subjects, whose data belongs to them; data
controllers, who store and manage data but must follow ethical guidelines;
and data processors, who process data on behalf of another organization.
Regulations: GDPR, CCPA, and beyond:
The key laws protecting consent and data ownership in AI consist of the
following regulations:
 GDPR – The right to consent, access, and delete data. Users must opt-
in for data collection.
 CCPA - Users can request data disclosure and opt out of data sales.
 HIPAA - Protects medical data privacy and patient consent.
 AI Act - Regulates AI systems that handle personal data.
Module 5: Accountability and Transparency
As the saying goes, “people fear what they don’t understand,” and this
sentiment is particularly relevant in the era of artificial intelligence, which
is rapidly integrating into our society. According to a Forbes article, by
2030, AI is expected to replace 30% of our workload in various fields such
as marketing, healthcare, finance and more. Rather than perceiving this
as a threat, we should make our best efforts to understand AI and learn
how to adapt to it. The EU AI Act is the first legal framework to address
the risks associated with AI, ensuring transparency and accountability in
its development and deployment.

What Is the EU AI Act and How Does It Work?


The European Union's Artificial Intelligence Act (EU AI Act) is a legislative
proposal aimed at regulating the development and deployment of artificial
intelligence (AI) within the EU. This comprehensive framework seeks to
ensure that AI systems used within the EU are safe, lawful and aligned
with fundamental rights and values. The act, proposed by the European
Commission in April 2021, represents a significant step toward
establishing a global standard for AI governance. The AI Act provides
developers with clear and concise requirements and obligations, specifying
how AI works, how it uses data and how decisions are made.
The regulatory framework defines four levels of risk for AI systems and
outlines the requirements and obligations for AI developers, providers and
users. These provisions are designed to ensure the safe and ethical use of
AI across various sectors. The levels are listed below, followed by a
simplified sketch of how they might map to obligations:

 Unacceptable Risk: AI systems that pose a clear threat to the safety,
livelihoods and rights of people. These systems are prohibited under
the act. Examples include AI-based social scoring and real-time
biometric identification in public spaces.
 High Risk: AI systems that have a significant impact on individuals and
society, such as those used in critical infrastructure (for example
transportation, which could put the life and health of citizens at risk),
education (the scoring of examinations, which could impact students'
and professionals' livelihoods), employment (using software to sort CVs
during the initial recruitment process) and law enforcement (for
example, evaluating the reliability of evidence). These systems are
subject to strict obligations, including risk management, data
governance, transparency and human oversight to minimise risk.
 Limited Risk: AI systems that require specific transparency obligations,
such as chatbots and AI-generated content. Users must be informed
that they are interacting with an AI system. Providers also need to
ensure that AI-generated content is identified, and this includes video
and audio content.
 Minimal Risk: AI systems that pose minimal or no risk to users' rights or
safety. These systems are subject to minimal regulatory intervention.
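The sketch below is purely illustrative: the tier names follow the act, but the mapping to example obligations and the lookup helper are simplified assumptions rather than legal guidance.

# Minimal sketch: a simplified mapping from risk tier to example obligations,
# useful only as a mental model of how the tiers differ, not as a compliance tool.
RISK_TIERS = {
    "unacceptable": {"status": "prohibited", "obligations": []},
    "high": {"status": "allowed", "obligations": [
        "risk management", "data governance", "transparency", "human oversight"]},
    "limited": {"status": "allowed", "obligations": [
        "inform users they are interacting with AI", "label AI-generated content"]},
    "minimal": {"status": "allowed", "obligations": []},
}

def obligations_for(tier: str) -> dict:
    """Look up the simplified regulatory consequences for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))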

Challenges In Explaining AI Decisions


As we enter the era of AI, where we can leverage its capabilities to our
advantage, there is unfortunately (loosely quoting Carolyn Hax) "a
downside to everything good, a hurdle to everything that is desirable."
While on the one hand AI can significantly enhance decision-making
processes, making them faster and more accurate, how can we ensure
that AI will serve the public well? What happens on the day that AI makes
an error; who is held accountable, AI or humans? According to an article
published by Pandata Blog, AI mistakes are easily preventable. Asking the
right questions and having transparent conversations throughout all
stages of the AI design process are key to identifying common mistakes
before they become problematic. Let's delve into the challenges of AI
design and development below.

Traditional decision-making often involves researching trends, gathering
data from various reports, and analysing it to identify key patterns and
insights based on performance. In contrast, the AI-driven decision-making
process leverages algorithms and programs to analyse data and offer
actionable recommendations. Here's how it works: AI systems collect and
process vast amounts of data from multiple sources, identifying patterns
and trends that may not be immediately apparent to humans. For
instance, data analysts can ask AI to "analyse customer behaviour, sales
performance and market trends using the provided data and provide
optimal strategies for growth." (Creately, n.d.)

The benefits of using AI for data analysis range from cost savings to
improved accuracy. Businesses can save up to 25% by redesigning
processes and incorporating AI. (AtScale, 2023)
AI in decision-making can be divided into two main types. First, AI-assisted
decision-making: here AI acts as a helping assistant, providing key
background insights and recommendations, but the human ultimately
makes the final call. For example, a dentist might use an AI tool to
interpret medical images more accurately, but they still decide on the
patient's treatment. Second, AI-driven decision-making: here AI fully takes
charge of the decision-making process within predefined parameters,
particularly in scenarios where speed and consistency are crucial. For
instance, a financial trading algorithm can execute buy or sell orders in
milliseconds based on real-time data, all without human intervention.
(Creately, n.d.)

Now, one of the challenges that arises from this is explaining the nature
of the AI decision-making technology itself to stakeholders. AI algorithms
are often what we call "black boxes", which means that it can be difficult
to trace back why a particular decision was made. Another common
challenge is simplifying the technical jargon and terminology of AI, which
can be confusing and perhaps intimidating to those who lack the
knowledge or are unfamiliar with the concepts. It is important to consider
that stakeholders may come from different backgrounds, so simplifying
the terminology in a manner that is digestible and easy to understand is a
key factor. Furthermore, it is important to understand that many people
fear the rise of AI, particularly the fear of job loss. The psychology behind
fear is anticipating that something bad will happen and retreating to your
safe and known space, which is exactly what happens if you fail to
educate stakeholders on AI concepts. To overcome this, you need to
address the job-loss concern and provide accurate information to build
trust and confidence in AI. (Moldstud, 2023)

Building Trust Through Transparency


Education will be the great engine that fosters trust and transparency
among stakeholders. A stakeholder is an individual who has some investment
in AI, either in the form of direct support for research and development or
a vested interest in the success of the AI. (Frontiers in Computer Science,
2023) According to a recent survey by PwC, 72% of business leaders
believe that explainable AI is essential for gaining trust in AI systems. In
addition, 68% of consumers are more likely to trust companies that offer
explanations for their AI-driven decisions. These statistics highlight the
growing importance of explainability in AI solutions. (Moldstud, 2023)

Firstly, you wouldn't expect an 8-year-old to spell the word
"Tyrannosaurus Rex" without first teaching them the alphabet, would you?
The same concept applies when teaching AI concepts: start from the base
and work your way up. Begin by explaining what AI is and how it works,
and use real-world examples to demonstrate the benefits of AI to their
business. Be clear and concise when making your point; remember that
your stakeholders may come from different backgrounds, and it is
beneficial to include creators/developers, AI researchers, affected parties
and so on. Regulators will need to be assured that practices are safe and
fair and that AI systems are working as expected, thus contributing to
users' trust and to verification of the information provided.

The next point is to empower your stakeholders: involve them in the
decision-making process by explaining how AI decisions are made and
how the algorithm makes predictions, using visual representations and
examples of AI output. (Moldstud, 2023) It is all about helping people
understand how AI can improve the quality of data and work within their
organisation. But it is not just about showcasing the benefits. As stated by
Weller, author of "Transparency: Motivations and Challenges" (in
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning),
developers need to understand how their system is working, to debug or
improve it, or to see what is working well or performing badly. Users need
to have a sense of what AI is doing and why, and need to become
comfortable with AI's decisions. Experts and regulators need to be able to
audit a decision trail, especially when something goes wrong. And lastly,
the general public needs to feel comfortable so they can continue to use
AI. If you are able to achieve this, not only will businesses reap the
benefits of AI technologies, but you will also build trust and confidence,
leading to a future where we can adopt AI into our businesses and daily
lives. Furthermore, to ensure transparency and accountability, the EU AI
Act mandates that high-risk AI systems undergo rigorous testing,
documentation and certification processes. AI developers must maintain
detailed records of their systems' development and performance and
strictly report any incidents and malfunctions.

Accountability in autonomous systems


Earlier the question was raised, what happens when the day comes when
AI makes a mistake? Who is held accountable, AI or humans? The truth is
that humans aren't perfect beings, which means that, to some extent, the
AI they build is also capable of making errors. As AI systems become more sophisticated and
autonomous, addressing the challenges of accountability and
responsibility becomes an increasingly important aspect of ethical AI
development and deployment. Especially referring to decision making
processes without human oversight in healthcare, finance, transportation
etc.
Accountability can have many different definitions, but at its core it
describes a situation in which someone is responsible for things that
happen and can give a satisfactory reason for them.

The work published in Springer Nature Link refers to accountability as
follows: "In the HLEG reports, accountability is defined both as a principle
that ensures compliance with the key requirements for a trustworthy AI—in
this sense, it works as a meta-principle (Durante and Floridi 2022)—and as
a set of practices and measures, e.g., audit, risk management, and
redress for adverse impact. The polysemic nature of accountability is
confirmed in the Assessment List for Trustworthy Artificial Intelligence
(ALTAI) by the same expert group."
What this translates to is that one is responsible for one's actions and, as
a consequence, must explain one's reasons, aims and motivations.
(Springer, 2023)

An article by Paul Veitch on LinkedIn provides insights into "Accountability
and Responsibility in AI: Assigning Responsibility in the Age of Autonomous
AI Systems". To recap his suggestions:

 Building Trust and Public Acceptance: Defining clear roles of
responsibility and accountability is essential for fostering trust in AI
systems. According to Veitch, this transparency promotes greater
social acceptance and adoption of AI, enhancing user confidence. As a
result, AI technologies are more widely embraced across different
industries, unlocking greater potential and driving progress.
 Legal Compliance: It's critical for AI systems and their developers to
comply with current laws and regulations to prevent legal challenges
and penalties. Clarifying who is responsible for an AI system’s actions
helps navigate the intricate legal landscape, ensuring a more secure
and predictable environment for the growth and adoption of AI
technologies.
 Ethical Development: Holding relevant stakeholders accountable for
the outcomes of AI systems encourages ethical development practices.
This ensures AI technologies are designed in line with societal values
and ethical standards, promoting responsible innovation. Such an
approach not only minimizes risks but also maximizes the positive
impact of AI. (Veitch, 2023)

AI Ethics in Finance

Activities
 Case study: Analysing transparency issues in real-world AI applications.
 Group task: Creating an explainability framework for a hypothetical AI
system.

Module 6: Global and Cultural Perspectives


Ethical AI in diverse cultural contexts

As one dives deeper into research on artificial intelligence, one comes
across the strong role that culture plays in determining AI ethics. South
African comedian Trevor Noah once said, "to understand people, you have
to understand their language", by which he meant that in order to truly
connect with people, you need to understand more than just the words
they speak. It suggests that language carries cultural, emotional and
social meanings that shape how people think, communicate, and view the
world.
So, it's not just about speaking the same language, but also grasping the
underlying ideas, values and experiences that influence how people
express themselves. By understanding someone's language be it their
native tongue, their communication style, or the cultural context behind
their words, you can better understand their perspectives, feelings, and
worldview. Now let's talk about how closely intertwined ethics and culture
are. Culture, as defined by Hofstede, is a set of common values, norms
and beliefs shared by the same group of people; it is an unwritten set of
rules that a particular group of people possess. Ethics refers to human
behaviour, covering our moral and philosophical judgment. The East and
West have distinct cultural differences that shape their societies and ways
of life. For example, corporate gift giving is strongly appreciated and
encouraged in China; while this may be accepted in Eastern parts of the
world, the same practice from a Western perspective can be seen as
bribery. Another key example of cultural differences between the East and
West is family. In the West, privacy and independence are paramount, and
after a child reaches a certain age they are encouraged to get their own
place, whereas in the East the idea of having your own place and privacy
is an unfamiliar concept, and children have no need to leave the family
home even after they are married. So the reality is that culture and ethics
are very closely intertwined.

This translates to AI Ethics and Culture and why having a strong AI ethical
framework is so crucial. AI systems, especially those developed in the
early days of AI research, have been shaped by predominantly Western
perspectives. This is primarily because the majority of early AI research
and development took place in the United States and Europe, where the
tech industry has been heavily concentrated. The data used to train AI
models often reflect the values, norms and assumptions of the cultures
where the data was collected. For example, AI trained primarily on data
from Western societies may have biases that favour Western norms,
languages and behaviours. The ethical guidelines and frameworks used to
develop AI often reflect Western values as mentioned above such as
individual privacy. These values might not resonate the same way in other
parts of the world, where communal values, group loyalty, or social
harmony might be more important. For instance, AI surveillance systems
designed for security in the West may not align with cultural norms
around privacy in other regions.

This can lead to problems when the technology is applied in non-Western
contexts, where those cultural assumptions may not hold. This is a
challenge that AI ethics discourse faces; in the future, it will be imperative
for strong artificial intelligence systems to bridge cultural communication,
which will require an extensive and complex layer of structure. This shift is
critical to ensure that AI technologies serve all people fairly, without
perpetuating cultural biases or inequalities.

Impact of AI on developing countries.


AI has a dual impact on developing nations, meaning it has both positive
and negative effects. On one hand, it offers opportunities for economic
growth, development, and advancement in various sectors. On the other
hand, it also brings risks such as job displacement, inequality,
and potential exacerbation of existing challenges.

What is the potential use of AI in developing countries, and what will it
mean for the future? The World Health Organisation (WHO) recommends
at least 45 doctors, nurses and midwives for every 10,000 people.
However, many low-income countries have only about a quarter of that
number, so how can AI address this issue? AI tools can be used to support
diagnosis and treatment recommendations; a trusted AI assistant for
every doctor can free up time and enable treatment recommendations to
be made in a more timely manner. Dr Nthabiseng, a healthcare worker in
a remote village in South Africa, described using an AI-based diagnostic
tool to help her diagnose her patients. (Nugraha, 2024)
A second benefit of AI is in education. It is no surprise that there is a
shortage of teachers; it is believed that a staggering 58 million additional
teachers are needed globally (Langer, 2021). AI can help personalize
learning experiences, offer low-cost educational tools, and improve access
to quality education, especially in remote areas. From a financial point of
view, AI enables services like tax collection, mobile money, microloans
and credit scoring for people who might not have access to traditional
banking systems, thus promoting financial inclusion in places where
formal banking infrastructure is limited.
For example, in Togo, AI improved the targeting of cash transfers, making
sure that aid reached those who needed it most. Take Amina, a single
mother who used the AI-targeted aid to start her own business and now
provides for her family and supports her children. All these examples are
but a small glimpse into the difference AI is making in society.
Unfortunately, light cannot exist without dark, and the potential of AI does
not come without drawbacks. The International Labour Organisation (ILO)
has dubbed this the "AI Divide": unequal access to technology between
developed and developing countries, leaving developing countries further
behind. Countries that have the resources to develop and implement AI
(e.g., the U.S., China) may dominate AI technologies and reap the
economic rewards, while developing nations become dependent on
foreign AI solutions, potentially leading to a neo-colonial dynamic where
developing countries are left with fewer opportunities to develop their own
technologies or industries.

Secondly, there could be a potential decline in job creation. As jobs
become more automated, traditional jobs performed by workers will cease
to exist, which will become a problem for growing economies that
leverage cheap labour as a competitive advantage.

Balancing Global AI Governance with local values:


It is important to understand that AI isn't just about technology; it is about
people. Did you know that in Kenya a program called "Somanasi", which
translates to "Learn with me", is an AI-powered tool that helps students in
real time with feedback and responses? (Langer, 2023)

Balancing global AI governance with local values refers to the challenge of
creating international frameworks and regulations for Artificial Intelligence
(AI) that are effective globally while still respecting and accommodating
the cultural, ethical, legal and social norms of different countries and
regions. In other words, it's about finding a way to have global standards
for AI technology that don't ignore or undermine the specific values and
needs of individual cultures or societies.

This issue is important because AI is a rapidly evolving technology that
has far-reaching implications, both socially and ethically.

Examples of Balancing Global AI Governance with Local Values:

1. Data Privacy and Surveillance:


o In the European Union, data privacy and the GDPR are major concerns,
emphasizing the right of individuals to control their personal data.
However, in countries like China, AI is often used for state surveillance,
and data privacy might not be prioritized as highly. A global AI
governance framework needs to balance the protection of personal data
with the need for national security, which can vary greatly between
countries.

2. AI in Healthcare:
o AI in healthcare could look very different in India compared to Germany.
In India, the focus might be on affordable access to healthcare using AI,
whereas in Germany, the focus might be on data
protection and personalized medicine. Governance should allow for
different priorities based on local challenges and values.

3. AI in Autonomous Vehicles:
o Self-driving cars and AI-powered transport systems are a topic of global
interest, but the rules for these technologies will likely differ based on
local traffic laws, infrastructure, and cultural factors (e.g., how roads are
used, the importance of pedestrian rights, or the ethics of autonomous
decision-making in life-or-death situations). Different countries will have
their own needs for regulation in this field.

4. Bias in AI Algorithms:
o Global standards for AI fairness often aim to reduce biases in algorithms.
However, cultural biases exist in many forms and may differ between
countries. In some places, AI algorithms may need to be calibrated to
account for local gender norms, ethnic diversity, or social hierarchies that
differ from the predominantly Western-centric models that often dominate
AI development.

Activities:

 Research and presentation: AI ethics in different regions.


 Group discussion
Module 7: Emerging Ethical Challenges
AI and Employment: Automation and Job Displacement
I want to transport you back in time to 2005, when the beloved children's
book "Charlie and the Chocolate Factory" was made into a live-action film.
In one of the scenes, the father, William Bucket, gets laid off from his job
because a new robot at the toothpaste factory can perform his job more
cheaply and efficiently. His job has been made obsolete by technology,
driving his already poor family into starvation. In the real world,
automation has been a large part of the industrial landscape for decades,
and recent advancements in robotics and AI have accelerated its adoption
across different sectors (Gunkel & Schlesinger, 2023). But the question is,
how much human work will be replaced by technology in the near future?
Currently, it is said that 47% of work tasks are still completed by humans,
with 22% completed by technology and a further 30% involving a
collaborative effort. (Mavunga D, 2025)
According to this year's Future of Jobs Report 2025, technology will be
the most disruptive force shaping the labour market. Over the next five
years, advances in AI and information-processing technologies will
accelerate digital access, creating 19 million jobs while displacing 9
million. (Mavunga D, 2025) Job displacement is a growing concern for
everyone affected; one by one, certain jobs may become obsolete as
automation replaces repetitive tasks once completed by humans. On the
flip side, this also offers job transformation, allowing humans to focus on
skills that are rooted in creativity. The top three fastest-growing skills at
the moment are AI and big data, networks and cybersecurity, and
technological literacy.

Circling back to the film above, not a lot of attention was placed on the
father, so it is unclear whether he got his job back at his old factory or
was given a new one. However, employers in the real world will have to
start thinking ahead and investing in education and training programs to
help individuals build their skills for the jobs of the future.

AI in Warfare and autonomous weapons.


For as long as humans have confronted one another in conflict, war has
driven the advancement of science and technology. Many innovations that
emerged during World War II, developed to gain an upper hand on the
battlefield and ultimately win the war, later found commercial applications,
shaping industries for decades after the war's end. From breakthroughs in
antibiotics to the creation of the atomic bomb under Oppenheimer, wartime
science reshaped the world, and few things are as unsettling as the
prospect of AI enhancing military capabilities.

At present, AI is being utilized in military settings, where autonomous


weapon systems are capable of identifying targets without human
intervention. It is also playing a crucial role in cybersecurity, helping to
detect and neutralize cyber threats. AI is described as a tool that allows
machines to recreate human abilities on a more effective level. As the
mathematician Alan Turing, whose work laid the foundation for AI, once
asked: "Humans use available information and reason to solve problems
and make decisions, so why can't machines do the same?" Rewind the
clocks back to World War Two, when there was an
urgency for scientific advancements, back then the race was not simply
about building nuclear weapons, but about who could build them first. The
Allies' primary goal was to develop atomic capabilities before Adolf Hitler
and the Nazis did because if they succeeded, it could have led to a global
catastrophe. The essence of that effort was to ensure the protection of
peace and to maintain the balance of power, preventing a dangerous
monopoly on such devastating technology.

Fast forward to today, and AI's role in national defence is rooted in a
similar objective: preserving peace through military strength.

The key advantage of AI in defence lies in its speed and decision-making
capacity. Unlike human decision-makers, AI can sift through immense
amounts of information, analyse patterns, and deliver results in a fraction
of the time it would take a human.

Civilian life, of course, is often the most vulnerable in times of war. The
consequences of conflict extend far beyond the battlefield, impacting
innocent people and communities. This is where AI-powered autonomous
weapons hold both potential and peril. On one hand, they can reduce the
number of human soldiers sent into combat zones, potentially saving lives
by limiting the exposure of troops to direct harm. On the other hand, there
is a darker side: much like any technology, AI can be weaponized for
malicious purposes. Just as ChatGPT can be used to generate helpful
responses, it can also be misused by bad actors to incite violence or
spread dangerous ideologies. Autonomous military technologies, if not
properly regulated and controlled, could be exploited by terrorists or
rogue states, posing new threats to global security.
Ethical Concerns in Generative AI.
Generative AI refers to the artificial intelligence systems that are capable
of creating new content including text, images, code and other media
formats in response to user prompts (Brown et al., 2020). These systems
are built on large language models (LLMs) trained on vast datasets of
human-created content. According to Zhang and Liu (2023), the market
for generative AI tools is expected to reach $110 billion by 2030, with
significant applications across industries including finance, healthcare,
and creative sectors.

One of the most concerning issues with generative AI is its ability to
produce fake content. A striking example of this appeared recently on
produce fake content. A striking example of this appeared recently on
TikTok, where a video showed actress Blake Lively and singer Taylor Swift
walking the red carpet together. While both women are real-life friends,
what was shocking about the content was that it was entirely generated
by AI, and it was indistinguishable from authentic footage. The content
was so convincing that I couldn’t tell the difference between what was real
and what was synthetic. This experience highlighted the potential dangers
of AI-generated misinformation. Imagine if such a video had instead been
about a harmful or misleading topic, like bullying or a false scandal. The
ability of AI to craft fake content that looks so genuine raises serious
questions about trust in the media and the responsibility of those who
create and distribute such material. Furthermore, it made me reflect on
the need to challenge AI and inquire about where it sources its
information. When we listen to someone speak or consume media, we
inherently want them to fact-check and provide sources. The same should
apply to AI-generated content. We need to be able to question AI’s
sources and ensure its outputs are reliable and truthful.

To address these risks, we must implement transparency in AI use. As
discussed in Module 5, one critical solution is always disclosing when
generative AI has been used to create content. This would help viewers
and users identify AI-generated material, allowing them to approach it
with the appropriate scepticism and critical thinking. Transparency will
help mitigate the potential for misinformation and protect individuals from
being misled or deceived.
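
To make such disclosure concrete, the short Python sketch below shows one possible way (using invented field names, not an established labelling standard) of attaching a machine-readable "AI generated" tag and basic provenance details to a piece of content before it is published.

```python
# A minimal sketch, assuming a simple publishing workflow, of how an
# "AI-generated" disclosure could travel with every piece of content.
# The field names below are assumptions for illustration, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentItem:
    body: str
    ai_generated: bool
    provenance: dict = field(default_factory=dict)


def label_ai_content(text: str, model_name: str) -> ContentItem:
    """Wrap generated text with an explicit, machine-readable disclosure."""
    return ContentItem(
        body=text,
        ai_generated=True,
        provenance={
            "generator": model_name,  # hypothetical model identifier
            "disclosed_at": datetime.now(timezone.utc).isoformat(),
        },
    )


item = label_ai_content("Example synthetic caption.", model_name="example-llm")
print(item.ai_generated, item.provenance["generator"])
```

A platform that required such a tag could then surface it to viewers automatically, which is exactly the kind of transparency argued for above.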

Another question that arises is who holds accountability for the harm
caused by generative AI: the AI itself, or the individuals who feed data into the
system? The responsibility likely lies with those who design and deploy AI
systems. Developers and companies must ensure their AI models are
ethically trained, free from bias, and do not produce harmful content.
Accountability should rest with those who build and use these systems,
ensuring they comply with ethical standards and regulations.

Activities:
 Scenario-based workshop: Addressing ethical issues in new AI
technologies
 Policy drafting: proposing ethical guidelines for emerging AI
applications.

Module 8: Designing Ethical AI systems


Integrating Ethics into an AI Lifecycle.

You cannot build a house without a strong foundation, and the same is true
for AI development. The AI development lifecycle starts with data handling,
so ethical practices should be adopted from the outset and maintained
strictly throughout the lifecycle (Jidenma, 2024).
Building on our exploration in Module 5, it is imperative for AI developers
to explain the process by which AI systems analyse data and make
decisions. Simplifying these explanations fosters greater understanding
and trust between developers and stakeholders. Transparency is the
cornerstone of this relationship, as it allows all parties to engage with AI
outputs confidently.
Moreover, data serves as the lifeblood of AI systems, analogous to a bank
that must securely collect and store sensitive information. It is crucial to
implement robust measures such as strong encryption, anonymization
techniques, and strict access controls to safeguard individual privacy. By
adhering to these practices, we can ensure the ethical and secure use of
AI technologies.
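As a minimal sketch of what these safeguards can look like in code, the example below pseudonymises an identifier with a salted hash and drops fields that training never needs; the field names and salt are assumptions for illustration, not a prescribed implementation.

```python
# A minimal sketch of two privacy safeguards mentioned above:
# pseudonymising identifiers with a salted one-way hash, and dropping
# sensitive fields before data is stored or used for training.
import hashlib

SALT = "replace-with-a-secret-salt"           # kept outside the dataset in practice
SENSITIVE_FIELDS = {"name", "email", "phone"}  # fields never passed downstream


def pseudonymise(user_id: str) -> str:
    """Replace a raw identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]


def prepare_record(record: dict) -> dict:
    """Drop sensitive fields and pseudonymise the identifier before storage."""
    cleaned = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    cleaned["user_id"] = pseudonymise(str(record["user_id"]))
    return cleaned


raw = {"user_id": "12345", "name": "A. Person", "email": "a@example.com", "age": 41}
print(prepare_record(raw))  # sensitive fields removed, identifier hashed
```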
An example of how ethical standards have been implemented in AI
projects is the development of facial recognition systems with built-in
fairness checks. Initially the system was flawed: Microsoft's Azure Face
API showed markedly higher misidentification rates for women and people
of colour, and it was improved only after these disparities were identified.
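The sketch below is a hedged illustration of the kind of fairness audit that exposes such gaps: it simply computes accuracy per demographic group on a handful of invented evaluation records. It is not Microsoft's actual method, only a way of showing how disparities become visible once results are broken down by group.

```python
# A toy fairness audit: per-group accuracy on an invented, labelled
# evaluation set of face-recognition predictions.
from collections import defaultdict

records = [
    # (demographic group, correctly identified?)
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False),
]


def accuracy_by_group(rows):
    """Return accuracy per group so disparities are visible at a glance."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in rows:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}


print(accuracy_by_group(records))
# A large gap between groups is a signal to rebalance training data or
# adjust decision thresholds before deployment.
```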

Tools and Frameworks for Ethical AI Development

AI ethics will continue to evolve at an accelerated rate, so it is important
to find suitable tools and techniques to help shape a company's ethical
AI framework. The principal architect of ethical AI practice at Salesforce
recommends considering the following frameworks (Salesforce, 2023):

 OECD Framework for the Classification of AI Systems: a tool for
effective AI policies, by the OECD
“To help policymakers, regulators, legislators, and others characterise
AI systems deployed in specific contexts, the OECD has developed a
user-friendly tool to evaluate AI systems from a policy perspective. It
can be applied to the widest range of AI systems across the following
dimensions: People & Planet; Economic Context; Data & Input; AI
model; and Task & Output. Each of the framework’s dimensions has a
subset of properties and attributes to define and assess policy
implications and to guide an innovative and trustworthy approach to AI
as outlined in the OECD AI Principles.”
 Securing Machine Learning Algorithms by European Union Agency
for Cybersecurity (ENISA)
“Based on a systematic review of relevant literature on machine
learning, in this report we provide a taxonomy for machine learning
algorithms, highlighting core functionalities and critical stages. The
report also presents a detailed analysis of threats targeting machine
learning systems. Finally, we propose concrete and actionable security
controls described in relevant literature and security frameworks and
standards.”
 Ethical OS Framework by IFTF and Omidyar Network
“The Ethical Operating System can help makers of tech, product
managers, engineers, and others get out in front of problems before
they happen. It’s been designed to facilitate better product
development, faster deployment, and more impactful innovation. All
while striving to minimize technical and reputational risks. This toolkit
can help inform your design process today and manage risks around
existing technologies in the future.”

Tools and Toolkits

 Algorithmic Impact Assessment Tool by the Canadian Government
“The tool is a questionnaire that determines the impact level of an
automated decision-system. It is composed of 48 risk and 33 mitigation
questions. Assessment scores are based on many factors including
systems design, algorithm, decision type, impact and data.”
 PWC Responsible AI Toolkit
“Our Responsible AI Toolkit is a suite of customizable frameworks, tools
and processes designed to help you harness the power of AI in an
ethical and responsible manner – from strategy through execution.
With the Responsible AI toolkit, we’ll tailor our solutions to address
your organisation’s unique business requirements and AI maturity.”
 The Box by AI Ethics Lab
“The Box is designed to help you visualize the ethical strengths and
weaknesses of a technology. Once the weaknesses are identified,
solutions can be created!”

Monitoring and Evaluation of an AI system:

An article exploring the basics of Monitoring and Evaluation explains:
“M&E is a systematic process used to assess the performance of projects,
programs, or policies, with the goal of improving effectiveness and
accountability. Monitoring typically involves the ongoing collection and
analysis of data to track progress against set objectives, while evaluation
involves the assessment of outcomes and impacts to determine the
overall success of an intervention.” (Evalcommunity, 2023)

AI collects, analyses and improves data at a much faster rate than a
human can. Natural Language Processing (NLP) is one of the most active
areas in artificial intelligence right now; it can analyse unstructured data
from social media, reports and news articles, allowing organisations to
gather insights from a wider range of sources. AI can also automate
reporting and visualisation, preparing reports, graphs and presentations
and saving analysts hours of time compiling data.
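As a toy illustration of this idea (not a production NLP system), the sketch below scores a few invented social-media posts with simple keyword matching and aggregates the results into a report-ready summary; real M&E tools would use far more capable language models, but the workflow of collect, score, and summarise is broadly the same.

```python
# A toy sketch of automated analysis of unstructured text: score each post
# with simple keyword matching, then aggregate into a summary for reporting.
# The keyword lists and sample posts are invented for illustration.
from collections import Counter

POSITIVE = {"helpful", "fair", "transparent", "trust"}
NEGATIVE = {"biased", "harmful", "misleading", "unfair"}


def score(text: str) -> str:
    """Label a piece of text by counting simple sentiment keywords."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"


def summarise(posts: list[str]) -> Counter:
    """Aggregate labels across many posts into a report-ready summary."""
    return Counter(score(p) for p in posts)


sample_posts = [
    "The new AI assistant feels transparent and helpful.",
    "This system is biased and its answers are misleading.",
    "Results were published today.",
]
print(summarise(sample_posts))  # e.g. Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```
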
Reference List:

Harvard Gazette (2020). Ethical concerns mount as AI takes bigger decision-making role. Available at: https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

Mökander, J., Floridi, L. and Haataja, M. (2022). Why you need an AI ethics committee. Harvard Business Review. Available at: https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee

León, L., Velázquez, S. and Gutiérrez, J. (2022). AI and ethical dilemmas in decision-making: An exploration of the current landscape. Frontiers in Psychology, 13, 836650. Available at: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.836650/full

Walton, C. (2021). 7 principles to guide the ethics of artificial intelligence. ATD. Available at: https://www.td.org/content/atd-blog/7-principles-to-guide-the-ethics-of-artificial-intelligence

Innocence Project (2021). When artificial intelligence gets it wrong. Available at: https://innocenceproject.org/when-artificial-intelligence-gets-it-wrong/

Fiveable (n.d.). Historical context and evolution of AI ethics. Available at: https://fiveable.me/artificial-intelligence-and-ethics/unit-1/historical-context-evolution-ai-ethics/study-guide/emlp4L3sTicFK1Gb

IBM (2020). Shedding light on AI bias with real-world examples. Available at: https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples

Moldstud (2023). Overcoming challenges in explaining AI decisions to stakeholders. Available at: https://moldstud.com/articles/p-overcoming-challenges-in-explaining-ai-decisions-to-stakeholders [Accessed 27 January 2025].

Frontiers in Computer Science (2023). A comprehensive review of AI-driven systems and their impact on various industries. Available at: https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1117848/full [Accessed 27 January 2025].

Pandata (n.d.). Who is responsible for AI's mistakes? Available at: https://pandata.co/blog/who-is-responsible-for-ais-mistakes/ [Accessed 27 January 2025].

Springer (2023). Ethical implications of AI in modern industries. Available at: https://link.springer.com/article/10.1007/s00146-023-01635-y [Accessed 27 January 2025].

Veitch, P. (2023). Accountability and responsibility in AI: Assigning blame to AI systems. LinkedIn. Available at: https://www.linkedin.com/pulse/accountability-responsibility-ai-assigning-age-systems-paul-veitch/ [Accessed 27 January 2025].

GovLab (2023). AI Ethics: Frameworks and Challenges. Directus. Available at: https://directus.thegovlab.com/uploads/ai-ethics/originals/6dfaba73-9c1f-49cc-b293-e5d0ac3aa08c.pdf [Accessed 28 January 2025].

Das, R. (2024). AI adoption in developing countries: Opportunities, challenges, and policy pathways. Modern Diplomacy, 17 October. Available at: https://moderndiplomacy.eu/2024/10/17/ai-adoption-in-developing-countries-opportunities-challenges-and-policy-pathways/ [Accessed 28 January 2025].

Langer, A. (2021). Tipping the scales: AI's dual impact on developing nations. World Bank Blogs, 3 May. Available at: https://blogs.worldbank.org/en/digital-development/tipping-the-scales--ai-s-dual-impact-on-developing-nations [Accessed 28 January 2025].

Langer, A. (2023). The AI governance balancing act: Navigating opportunities and risks. World Bank Blogs, 12 October. Available at: https://blogs.worldbank.org/en/digital-development/the-ai-governance-balancing-act--navigating-opportunities-and-ri [Accessed 28 January 2025].

Learning Mind (n.d.). Cultural Differences Between East and West: What You Should Know. Available at: https://www.learning-mind.com/cultural-differences-east-west/ [Accessed 29 January 2025].

Technology Magazine (n.d.). Global AI Ethics: Bridging Cultural Divides in Technology. Available at: https://technologymagazine.com/articles/global-ai-ethics-bridging-cultural-divides-in-technology [Accessed 29 January 2025].

TechTarget (n.d.). Generative AI Ethics: 8 Biggest Concerns. Available at: https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns [Accessed 29 January 2025].

Kestria (2023). Integrating Ethical Principles into AI Development. Available at: https://kestria.com/insights/integrating-ethical-principles-into-ai-development/ [Accessed 30 January 2025].

Salesforce (2023). Frameworks, Toolkits, Principles, and Oaths – Oh My! Available at: https://www.salesforce.com/blog/frameworks-tool-kits-principles-and-oaths-oh-my/ [Accessed 30 January 2025].

Evalcommunity (2023). AI in Monitoring and Evaluation. Available at: https://www.evalcommunity.com/artificial-intelligence/ai-in-monitoring-and-evaluation/ [Accessed 30 January 2025].
