Ethics in AI
Activities:
Group discussion: “What does ethical AI mean to you?”
Case analysis of AI applications with ethical concerns (e.g., facial
recognition, biased algorithms).
Module 2: Utilitarianism, Deontology, and Virtue Ethics
Utilitarianism, deontology, and virtue ethics are distinct ethical frameworks that provide different approaches to evaluating morality and guiding ethical decision-making. Each is unpacked below.
Utilitarianism
Utilitarianism is a moral philosophy that promotes actions that increase
happiness and well-being for the greatest number of people. It's a type of
consequentialism, which means that the morality of an action is based on
its consequences.
Utilitarianism is widely used in policymaking, business ethics, and healthcare resource allocation (e.g., deciding how to allocate vaccines to save the most lives).
Deontology
Deontology is a moral philosophy that judges actions based on rules and
principles, rather than the consequences of those actions. It's also known
as duty-based ethics.
Frequently applied in law, human rights, and ethical debates (e.g.,
opposing torture, even if it might save lives).
Virtue Ethics
Virtue ethics is a philosophical approach that focuses on character and
virtue as the most important aspects of ethics. It emphasizes developing
and demonstrating virtues like courage, wisdom, and temperance, while
avoiding vices like greed and selfishness.
Often used in education, leadership development, and healthcare (e.g.,
promoting compassion, honesty, and empathy in caregivers).
Activities:
Application of ethical theories to AI scenarios.
Group exercise: Developing an ethical charter for an AI company.
Module 3: Bias and Fairness in AI
Understanding algorithmic bias:
Flawed training data can produce algorithms that repeatedly generate errors, unfair outcomes, or even amplified versions of the biases inherent in that data.
Algorithmic bias can also be caused by programming errors, such as a developer unfairly weighting factors in an algorithm's decision-making based on their own conscious or unconscious biases. For example, proxy indicators like income or vocabulary might lead an algorithm to unintentionally discriminate against people of a certain race or gender.
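One common way to surface this kind of bias is to compare outcomes across groups. The sketch below computes per-group selection rates and a disparate-impact ratio on a tiny invented dataset; the records, group labels, and the 0.8 threshold (the widely used "four-fifths rule") are illustrative assumptions, not a reference to any real system.

    # Minimal sketch: measuring disparate impact in model decisions.
    # The records below are invented purely for illustration.
    records = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def selection_rate(group):
        members = [r for r in records if r["group"] == group]
        return sum(r["approved"] for r in members) / len(members)

    rate_a = selection_rate("A")  # 2/3
    rate_b = selection_rate("B")  # 1/3
    ratio = rate_b / rate_a       # disparate-impact ratio

    # The "four-fifths rule" flags ratios below 0.8 as potentially unfair.
    print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparate impact: investigate features acting as proxies.")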
Activities:
Hands-on workshop: Identifying bias in AI datasets and models.
Debate: “Can AI ever be truly unbiased?”
Module 4: Privacy and Data Protection
Ethical considerations in data collection and usage:
Data is the foundation of AI; without it, AI would not exist. Because data plays such a pivotal role in AI, its collection and usage must be ethical and done with consent, or the whole AI system will be doubted and seen as lacking credibility. AI systems must therefore be transparent and responsible with the data they collect in order to prevent harm and bias.
Practical ethical safeguards include making use of only the minimum necessary data and following all legal frameworks relevant to the AI application at hand. Another safeguard is to always obtain consent from those involved; a small sketch of both ideas follows.
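A minimal sketch of data minimisation plus a consent check, assuming a hypothetical user record; the field names, the consent flag, and the required-fields set are invented for illustration.

    # Minimal sketch: data minimisation with a consent check.
    # Field names and the consent flag are hypothetical.
    REQUIRED_FIELDS = {"user_id", "age_band"}  # only what the model truly needs

    def minimise(record):
        """Drop everything except the minimum necessary fields, and
        refuse to process records that lack explicit consent."""
        if not record.get("consent_given", False):
            raise PermissionError("No consent recorded; data must not be used.")
        return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

    raw = {
        "user_id": 42,
        "age_band": "25-34",
        "full_name": "Jane Doe",   # not needed: stripped
        "home_address": "...",     # not needed: stripped
        "consent_given": True,
    }
    print(minimise(raw))  # {'user_id': 42, 'age_band': '25-34'}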
AI designers must make sure to respect human rights while balancing
innovation.
The benefits of using AI for data analysis range from cost savings to
improved accuracy. Businesses can save up to 25% by redesigning
processes and incorporating AI. (AtScale, 2023)
AI in decision-making can be divided into two main types. First, AI-assisted decision-making: the AI acts as a helping assistant, providing key background insights and recommendations, but a human ultimately makes the final call. For example, a dentist might use an AI tool to interpret medical images more accurately, but they still decide on the patient's treatment. Second, AI-driven decision-making: the AI fully takes charge of the decision-making process within predefined parameters, particularly in scenarios where speed and consistency are crucial. For instance, a financial trading algorithm can execute buy or sell orders in milliseconds based on real-time data, all without human intervention. (Creately, n.d.)
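The distinction can be sketched in code. The confidence threshold, function names, and dummy model output below are illustrative assumptions, not a real system's API.

    # Minimal sketch: AI-assisted vs AI-driven decision-making.
    # Threshold, names, and model output are hypothetical.

    def model_recommendation(case):
        """Stand-in for a trained model: returns (decision, confidence)."""
        return "approve", 0.72  # dummy output for illustration

    def ai_assisted(case, human_decide):
        # AI only recommends; a human makes the final call.
        decision, confidence = model_recommendation(case)
        return human_decide(case, suggestion=decision, confidence=confidence)

    def ai_driven(case, auto_threshold=0.95):
        # AI decides autonomously, but only within predefined parameters.
        decision, confidence = model_recommendation(case)
        if confidence >= auto_threshold:
            return decision  # fast, consistent, no human in the loop
        # Outside its parameters, the system falls back to a human.
        return ai_assisted(case, human_decide=lambda c, **kw: "escalate")

    print(ai_driven({"id": 1}))  # escalates: 0.72 is below the threshold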
One of the challenges that arises from this is explaining the nature of the AI decision-making technology itself to stakeholders. AI algorithms are often what we call "black boxes", which means it can be difficult to trace why a particular decision was made. Another common challenge is simplifying the technical jargon and terminology of AI, which can be confusing and perhaps intimidating to those who lack the knowledge or are unfamiliar with the concepts. Stakeholders may come from different backgrounds, so presenting the terminology in a manner that is digestible and easy to understand is a key factor.
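One practical way to make a black-box model slightly less opaque is to measure how much each input feature influences its predictions. The sketch below uses permutation importance from scikit-learn on a synthetic dataset; the data, feature names, and model choice are illustrative assumptions, not a reference to any specific system.

    # Minimal sketch: explaining a black-box model via permutation importance.
    # Requires scikit-learn; the synthetic data is purely illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                   # three anonymous features
    y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 dominates by design

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Shuffling an important feature hurts accuracy; unimportant ones barely matter.
    for name, score in zip(["feature_0", "feature_1", "feature_2"],
                           result.importances_mean):
        print(f"{name}: importance {score:.3f}")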
Furthermore, it is important to understand that many people fear the rise of AI, particularly the fear of losing their jobs. The psychology behind fear is anticipating that something bad will happen and retreating to a safe, known space, which is exactly what will happen if you fail to educate stakeholders on AI concepts. To overcome this, you need to address the job-loss concern and provide accurate information to build trust and confidence in AI. (Moldstud, 2023)
The next point is to empower your stakeholders: involve them in the decision-making process by explaining how AI decisions are made, and show how the algorithm arrives at its predictions using visual representations and examples of AI output. (Moldstud, 2023) It is all about helping people understand how AI can improve the quality of data and work within their organisation. But it is not just about showcasing the benefits. As Weller notes in "Transparency: Motivations and Challenges", a chapter in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning: developers need to understand how their system is working, to debug or improve it, or to see what is working well or performing badly; users need a sense of what the AI is doing and why, and need to become comfortable with its decisions; experts and regulators need to be able to audit a decision trail, especially when something goes wrong; and the general public needs to feel comfortable so they can continue to use AI. If you achieve this, businesses will not only reap the benefits of AI technologies, but you will also build trust and confidence, leading to a future where AI can be adopted into our businesses and daily lives.
Furthermore, to ensure transparency and accountability, the EU AI Act mandates that high-risk AI systems undergo rigorous testing, documentation, and certification processes. AI developers must maintain a detailed record of a system's development and performance and strictly report any incidents and malfunctions.
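A hedged sketch of what such record-keeping might look like in code; the field names, JSON-lines format, and example values are assumptions for illustration, not a format prescribed by the EU AI Act.

    # Minimal sketch: an append-only audit trail for AI decisions.
    # Field names and the JSON-lines format are illustrative assumptions.
    import json
    from datetime import datetime, timezone

    def log_decision(path, system_version, inputs_summary, decision, incident=None):
        """Append one auditable record per decision; never overwrite history."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_version": system_version,
            "inputs_summary": inputs_summary,
            "decision": decision,
            "incident": incident,  # populated when something goes wrong
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_decision("audit.jsonl", "v1.3.0", {"n_features": 12}, "loan_denied")
    log_decision("audit.jsonl", "v1.3.0", {"n_features": 12}, "loan_denied",
                 incident="confidence below threshold; flagged for human review")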
Activities:
Case study: Analysing transparency issues in real-world AI applications.
Group task: Creating an explainability framework for a hypothetical AI
system.
As one dives deeper into the research of artificial intelligence, one will
come across the strong role that culture plays in determining AI ethics.
South African comedian Trevor Noah once said that "to understand people, you have to understand their language", by which he meant that in order to truly connect with people, you need to understand
cultural, emotional and social meanings that shape how people think,
communicate, and view the world.
So, it's not just about speaking the same language, but also grasping the
underlying ideas, values and experiences that influence how people
express themselves. By understanding someone's language be it their
native tongue, their communication style, or the cultural context behind
their words, you can better understand their perspectives, feelings, and
worldview.
Now let us consider how closely intertwined ethics and culture are. Culture, as defined by Hofstede, is a set of common values, norms, and beliefs shared by a group of people; it is an unwritten set of rules that a particular group possesses. Ethics refers to human behaviour, covering our moral and philosophical judgement. The East and West have distinct cultural differences that shape their societies and ways of life. For example, corporate gift-giving is strongly appreciated and encouraged in China; while this may be accepted in Eastern parts of the world, from a Western perspective the same practice can be seen as bribery. Another key example of cultural differences between East and West is family. In the West, privacy and independence are paramount, and once a child reaches a certain age they are encouraged to get a place of their own, whereas in the East children may have no need to leave the family home even after they marry. So the reality is that culture and ethics are very closely intertwined.
This translates to AI Ethics and Culture and why having a strong AI ethical
framework is so crucial. AI systems, especially those developed in the
early days of AI research, have been shaped by predominantly Western
perspectives. This is primarily because the majority of early AI research
and development took place in the United States and Europe, where the
tech industry has been heavily concentrated. The data used to train AI
models often reflect the values, norms and assumptions of the cultures
where the data was collected. For example, AI trained primarily on data
from Western societies may have biases that favour Western norms,
languages and behaviours. The ethical guidelines and frameworks used to develop AI often reflect the Western values mentioned above, such as individual privacy. These values might not resonate the same way in other
parts of the world, where communal values, group loyalty, or social
harmony might be more important. For instance, AI surveillance systems
designed for security in the West may not align with cultural norms
around privacy in other regions.
2. AI in Healthcare:
AI in healthcare could look very different in India compared to Germany. In India, the focus might be on affordable access to healthcare using AI, whereas in Germany, the focus might be on data protection and personalized medicine. Governance should allow for different priorities based on local challenges and values.
3. AI in Autonomous Vehicles:
Self-driving cars and AI-powered transport systems are a topic of global interest, but the rules for these technologies will likely differ based on local traffic laws, infrastructure, and cultural factors (e.g., how roads are used, the importance of pedestrian rights, or the ethics of autonomous decision-making in life-or-death situations). Different countries will have their own needs for regulation in this field.
4. Bias in AI Algorithms:
Global standards for AI fairness often aim to reduce biases in algorithms. However, cultural biases exist in many forms and may differ between countries. In some places, AI algorithms may need to be calibrated to account for local gender norms, ethnic diversity, or social hierarchies that differ from the predominantly Western-centric models that often dominate AI development.
Activities:
Circling back to the film above, not a lot of attention was placed on the father, so it is unclear whether he got his job back at his old factory or was given a new job. In the real world, however, future employers will have to start thinking ahead and investing in education and training programmes that help individuals build their skills for the jobs of the future.
Civilians, of course, are often the most vulnerable in times of war. The
consequences of conflict extend far beyond the battlefield, impacting
innocent people and communities. This is where AI-powered autonomous
weapons hold both potential and peril. On one hand, they can reduce the
number of human soldiers sent into combat zones, potentially saving lives
by limiting the exposure of troops to direct harm. On the other hand, there
is a darker side: much like any technology, AI can be weaponized for
malicious purposes. Just as ChatGPT can be used to generate helpful
responses, it can also be misused by bad actors to incite violence or
spread dangerous ideologies. Autonomous military technologies, if not
properly regulated and controlled, could be exploited by terrorists or
rogue states, posing new threats to global security.
Ethical Concerns in Generative AI
Generative AI refers to artificial intelligence systems capable of creating new content, including text, images, code, and other media formats, in response to user prompts (Brown et al., 2020). These systems
are built on large language models (LLMs) trained on vast datasets of
human-created content. According to Zhang and Liu (2023), the market
for generative AI tools is expected to reach $110 billion by 2030, with
significant applications across industries including finance, healthcare,
and creative sectors.
Another question that arises is who holds accountability for harm caused by generative AI: the AI itself, or the individuals who feed data into the system? The responsibility likely lies with those who design and deploy AI
systems. Developers and companies must ensure their AI models are
ethically trained, free from bias, and do not produce harmful content.
Accountability should rest with those who build and use these systems,
ensuring they comply with ethical standards and regulations.
Activities:
Scenario-based workshop: Addressing ethical issues in new AI technologies.
Policy drafting: Proposing ethical guidelines for emerging AI applications.
You cannot build a house without a strong foundation, and the same is true for AI development. The AI development lifecycle starts with data handling, so ethical practice should be the starting approach and be maintained strictly throughout the lifecycle. (Jidenma, 2024)
Building on our exploration in Module 5, it is imperative for AI developers
to explain the process by which AI systems analyse data and make
decisions. Simplifying these explanations fosters greater understanding
and trust between developers and stakeholders. Transparency is the
cornerstone of this relationship, as it allows all parties to engage with AI
outputs confidently.
Moreover, data serves as the lifeblood of AI systems, analogous to a bank
that must securely collect and store sensitive information. It is crucial to
implement robust measures such as strong encryption, anonymization
techniques, and strict access controls to safeguard individual privacy. By
adhering to these practices, we can ensure the ethical and secure use of
AI technologies.
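As a minimal sketch of the anonymisation and access-control ideas above: the field names, salt handling, and role list below are illustrative assumptions, not a prescribed scheme.

    # Minimal sketch: pseudonymising identifiers and gating access by role.
    # Field names, the salt, and the role list are hypothetical.
    import hashlib

    SALT = b"rotate-me-and-keep-me-in-a-secrets-manager"  # never hard-code in production

    def pseudonymise(user_id: str) -> str:
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

    AUTHORISED_ROLES = {"data_steward", "auditor"}

    def read_record(record: dict, role: str) -> dict:
        """Strict access control: only authorised roles may see the record."""
        if role not in AUTHORISED_ROLES:
            raise PermissionError(f"role '{role}' may not access this data")
        return {**record, "user_id": pseudonymise(record["user_id"])}

    record = {"user_id": "jane.doe", "diagnosis": "..."}
    print(read_record(record, role="auditor"))  # identifier comes back hashed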
An example of how ethical standards have been implemented in AI projects is the development of facial recognition systems with built-in fairness safeguards. Microsoft's Azure Face API was initially flawed, showing higher rates of misidentification for women and people of colour before fairness improvements were made.
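A hedged sketch of the kind of per-group evaluation that surfaces such gaps; the groups, labels, and predictions below are invented toy data, not actual Azure Face API results.

    # Minimal sketch: comparing error rates across demographic groups.
    # The toy labels/predictions are invented; this is not real benchmark data.
    samples = [
        # (group, truly_matched, model_said_matched)
        ("lighter_male", True, True), ("lighter_male", True, True),
        ("lighter_male", False, False), ("darker_female", True, False),
        ("darker_female", True, True), ("darker_female", False, True),
    ]

    def error_rate(group):
        members = [(t, p) for g, t, p in samples if g == group]
        return sum(t != p for t, p in members) / len(members)

    for group in ("lighter_male", "darker_female"):
        print(f"{group}: error rate {error_rate(group):.2f}")
    # A large gap between groups signals the fairness problem described above.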
AI collects, analyses, and improves data at a much faster rate than any human can. Natural Language Processing (NLP) is one of the hottest areas in artificial intelligence right now; it can analyse unstructured data from social media, reports, and news articles, allowing organisations to gather insights from a wider range of sources. With automated reporting and visualisation, AI can prepare reports, graphs, and presentations automatically, saving analysts hours of time compiling data.
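A deliberately simple sketch of mining unstructured text for recurring themes; real NLP pipelines use trained models, but word-frequency counting is enough to show the idea (the snippets and stopword list are invented).

    # Minimal sketch: pulling rough insights from unstructured text.
    # Real NLP uses trained models; simple counting illustrates the principle.
    from collections import Counter

    snippets = [
        "Customers praise the new app but report login failures.",
        "News article: app downloads surge despite login complaints.",
        "Social post: love the app, hate the login screen.",
    ]

    STOPWORDS = {"the", "but", "and", "a", "despite", "new"}
    words = Counter(
        word.strip(".,:").lower()
        for text in snippets
        for word in text.split()
        if word.strip(".,:").lower() not in STOPWORDS
    )
    print(words.most_common(3))  # 'app' and 'login' surface as recurring themes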
Reference List:
Learning Mind (n.d.) Cultural Differences Between East and West: What You Should Know. Available at: https://www.learning-mind.com/cultural-differences-east-west/ [Accessed 29 January 2025].