What Are AI Ethics

The document discusses the importance of ethics in artificial intelligence. It covers topics like algorithmic bias, privacy, explainability, transparency, environmental impact, and more. Examples are provided of AI systems that exhibited unethical behavior and the challenges of ensuring AI is developed and used responsibly.

What are AI ethics?

AI ethics are the set of guiding principles that stakeholders (from engineers to
government officials) use to ensure artificial intelligence technology is
developed and used responsibly. This means taking a safe, secure, humane, and
environmentally friendly approach to AI.

A strong AI code of ethics can include avoiding bias, ensuring privacy of users
and their data, and mitigating environmental risks. Codes of ethics in
companies and government-led regulatory frameworks are two main ways that
AI ethics can be implemented. By covering global and national ethical AI issues,
and laying the policy groundwork for ethical AI in companies, both approaches
help regulate AI technology.

Why are AI ethics important?

AI ethics are important because AI technology is meant to augment or replace
human intelligence. But when technology is designed to replicate human life,
the same issues that can cloud human judgment can seep into the technology.

AI projects built on biased or inaccurate data can have harmful consequences,
particularly for underrepresented or marginalized groups and individuals.
Further, if AI algorithms and machine learning models are built too hastily, it
can become unmanageable for engineers and product managers to correct learned
biases after the fact. It's easier to incorporate a code of ethics during the
development process to mitigate future risks.

Examples of AI ethics

It may be easiest to illustrate the ethics of artificial intelligence with
real-life examples. In December 2022, the app Lensa AI used artificial
intelligence to generate cool, cartoon-looking profile photos from people's
regular images. From an ethical standpoint, some people criticized the app for
not giving credit or enough money to artists who created the original digital
art the AI was trained on [1]. According to The Washington Post, Lensa was
trained on billions of photographs sourced from the internet without consent [2].

Another example is the AI model ChatGPT, which enables users to interact with
it by asking questions. Trained on vast amounts of text from the internet,
ChatGPT can respond with a poem, Python code, or a proposal. One ethical
dilemma is that people are using ChatGPT to win coding contests or to write
essays for them. It also raises questions similar to Lensa's, but with text
rather than images.

These are just two popular examples of AI ethics. As AI has grown in recent
years, influencing nearly every industry and having a huge positive impact on
fields like health care, the topic of AI ethics has become even more salient.
How do we ensure bias-free AI? What can be done to mitigate risks in the future?
There are many potential solutions, but stakeholders must act responsibly and
collaboratively in order to create positive outcomes across the globe.

Ethical challenges of AI

There are plenty of real-life challenges that can help illustrate AI ethics. Here are
just a few.
AI and bias

If AI doesn't collect data that accurately represents the population, its
decisions might be susceptible to bias. In 2018, Amazon came under fire for an
AI recruiting tool that downgraded resumes containing the word "women" (as in
"Women's International Business Society") [3]. In essence, the AI tool
discriminated against women and created legal risk for the tech giant.
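
To make the bias problem concrete, here is a minimal sketch of one common
check: comparing a model's selection rates across groups (often called
demographic parity). The data, group labels, and threshold are hypothetical
illustrations, not Amazon's actual system.

# Illustrative only: measure how often a screening model selects each group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    totals = defaultdict(int)
    picks = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            picks[group] += 1
    return {g: picks[g] / totals[g] for g in totals}

# Hypothetical screening outcomes
sample = [("women", True), ("women", False), ("women", False),
          ("men", True), ("men", True), ("men", False)]
rates = selection_rates(sample)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # roughly {'women': 0.33, 'men': 0.67}
print(f"parity gap: {gap:.2f}")   # a large gap warrants investigation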

AI and privacy

As mentioned earlier with the Lensa AI example, AI relies on data pulled from
internet searches, social media photos and comments, online purchases, and
more. While this helps to personalize the customer experience, there are
questions around the apparent lack of true consent for these companies to
access our personal information.

AI and the environment

Some AI models are large and require significant amounts of energy to train on
data. While research is being done to devise methods for energy-efficient AI,
more could be done to incorporate environmental ethical concerns into AI-
related policies.
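
As a rough illustration, training energy is often estimated as power draw x
hardware count x training time x datacenter overhead (PUE), then converted to
emissions using grid carbon intensity. Every figure in this sketch is an
assumed placeholder, not a measurement of any real model.

# Back-of-the-envelope training-emissions estimate; all numbers are assumptions.
gpu_power_kw = 0.4        # average draw per GPU in kW (assumed)
num_gpus = 512            # cluster size (assumed)
hours = 24 * 14           # a two-week training run (assumed)
pue = 1.2                 # datacenter power usage effectiveness (assumed)
carbon_kg_per_kwh = 0.4   # grid carbon intensity in kg CO2e/kWh (assumed)

energy_kwh = gpu_power_kw * num_gpus * hours * pue
emissions_tonnes = energy_kwh * carbon_kg_per_kwh / 1000
print(f"~{energy_kwh:,.0f} kWh, ~{emissions_tonnes:,.1f} tonnes CO2e")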

How to create more ethical AI

Creating more ethical AI requires a close look at the ethical implications of
policy, education, and technology. Regulatory frameworks can ensure that
technologies benefit society rather than harm it. Globally, governments are
beginning to enforce policies for ethical AI, including how companies should
deal with legal issues if bias or other harm arises.

Anyone who encounters AI should understand the risks and potential negative
impact of AI that is unethical or fake. The creation and dissemination of
accessible resources can mitigate these types of risks.

It may seem counterintuitive to use technology to detect unethical behavior in
other forms of technology, but AI tools can be used to determine whether video,
audio, or text (hate speech on Facebook, for example) is fake or not. These
tools can detect unethical data sources and bias better and more efficiently
than humans.
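
As a sketch of what that can look like in practice, an off-the-shelf text
classifier can flag likely hate speech for human review. The checkpoint named
below is one published hate-speech detector on Hugging Face; it is an
assumption here, so substitute a model you have evaluated yourself.

# Illustrative moderation pass with an off-the-shelf classifier.
# The checkpoint is an assumption; swap in a model you have validated.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="facebook/roberta-hate-speech-dynabench-r4-target")

posts = ["Have a great day, everyone!", "another post to screen"]
for post in posts:
    result = classifier(post)[0]   # e.g. {'label': 'nothate', 'score': 0.98}
    if result["label"] == "hate" and result["score"] > 0.9:
        print(f"flag for human review: {post!r} ({result['score']:.2f})")

A tool like this is a triage aid, not a final arbiter: borderline scores still
need a human decision.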

https://www.coursera.org/articles/ai-ethics
The rapid rise in artificial intelligence (AI) has created many opportunities
globally, from facilitating healthcare diagnoses to enabling human connections
through social media and creating labour efficiencies through automated tasks.

However, these rapid changes also raise profound ethical concerns. These
arise from the potential AI systems have to embed biases, contribute to
climate degradation, threaten human rights and more. Such risks associated
with AI have already begun to compound on top of existing inequalities,
resulting in further harm to already marginalised groups.

The risk environment

Most of the new generation of AI tools are pretty broad. You can use them to do
everything from writing silly poems about your friends to defrauding
grandparents by impersonating their grandchildren—and everything in
between. How careful you have to be with how you use AI, and how much
human supervision you need to give to the process, depends on the risk
environment.

Take auto-generated email or meeting summaries. Really, there will be almost
no risk in letting Google's forthcoming AI summarize all the previous emails in a
Gmail thread into a few short bullet points. It's working from a limited set of
information, so it's unlikely to suddenly start plotting world domination, and
even if it misses an important point, the original emails are still there for people
to check. It's much the same as using GPT to summarize a meeting. You can
fairly confidently let the AI do its thing, then send an email or Slack message to
all the participants without having to manually review every draft.
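
For illustration, that low-risk pattern might look like the sketch below,
written against the OpenAI Python client. The model name is an assumption, and
any capable LLM endpoint would do; the key point is that the original emails
remain available as ground truth if the summary misses something.

# Low-risk use: condense an email thread into bullets; originals stay intact.
# Model name and client setup are assumptions; adapt to your own LLM API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_thread(messages: list[str]) -> str:
    thread = "\n---\n".join(messages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; substitute your model
        messages=[
            {"role": "system",
             "content": "Summarize this email thread as 3-5 short bullet points."},
            {"role": "user", "content": thread},
        ],
    )
    return response.choices[0].message.content

print(summarize_thread(["Email 1: project kickoff is Monday.",
                        "Email 2: moved to Tuesday, same agenda."]))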

But let's consider a more extreme option. Say you want to use an AI tool to
generate financial advice—or worse, medical advice—and publish it on a
popular website. Well, then you should probably be pretty careful, right? AIs can
"hallucinate" or make things up that sound true, even if they aren't, and it
would be pretty bad form to mislead your audience.

CNET found this out the hard way. Of 77 AI-written financial stories it published,
it had to issue corrections for 41 of them. While we don't know if anyone was
actually misled by the stories, the problem is they could have been. CNET is—or
was—a reputable brand, so the things it publishes on its website have some
weight.

And it's the same if you're planning to just use AIs for your own enjoyment. If
you're creating a few images with DALL·E 2 to show to your friend, there isn't a
lot to worry about. On the other hand, if you're entering art contests or trying to
get published in magazines, you need to step back and consider things
carefully.

All this is to say that, while we can talk about some of the ethical issues with
artificial intelligence in the abstract, not every situation is the same. The higher
the risks of harm, the more you need to consider whether allowing AI tools to
operate unsupervised is advisable. In many cases, with the current tools we
have available, it won't be.
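
One way to encode that judgment is a simple gate that refuses to publish AI
output in high-stakes domains unless a named human has reviewed it. The risk
tiers below are hypothetical assumptions, not an established standard.

# Hypothetical risk gate: high-stakes domains require a named human reviewer.
HIGH_RISK = {"medical", "financial", "legal"}

def publish(draft: str, domain: str, reviewed_by: str | None = None) -> bool:
    if domain in HIGH_RISK and reviewed_by is None:
        print(f"blocked: {domain} content needs a human reviewer first")
        return False
    print(f"published ({domain}, reviewer={reviewed_by or 'none'})")
    return True

publish("Take two aspirin and...", "medical")             # blocked
publish("Take two aspirin and...", "medical", "Dr. Lee")  # allowed after review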

The potential for deception

Generative AI tools can be incredibly useful and powerful, and you should
always disclose when you use them. Erring on the side of caution and making
sure everyone knows that an AI is generating something mitigates a lot of the
potential harms.
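
A trivial way to make that routine is to bake the disclosure into the
generation pipeline itself, so it cannot be forgotten. The wording of the
notice below is just an example.

# Append an AI-use disclosure to every piece of generated content.
AI_NOTICE = "Note: this text was generated with AI assistance and reviewed by a human."

def with_disclosure(generated_text: str) -> str:
    return f"{generated_text}\n\n{AI_NOTICE}"

print(with_disclosure("Here are this week's meeting highlights..."))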


The ethics of artificial intelligence is the branch of the ethics of technology
specific to artificial intelligence (AI) systems.[1]
The ethics of artificial intelligence covers a broad range of topics within the field
that are considered to have particular ethical stakes. This includes algorithmic
biases, fairness, automated decision-making, accountability, privacy, and
regulation. It also covers various emerging or potential future challenges such
as machine ethics (how to make machines that behave ethically), lethal
autonomous weapon systems, arms race dynamics, AI safety and alignment,
technological unemployment, AI-enabled misinformation, how to treat certain
AI systems if they have a moral status (AI welfare and rights), artificial
superintelligence and existential risks.[1] Some application areas may also have
particularly important ethical implications, like healthcare, education, or the
military.
Examples of AI ethics issues include data responsibility and privacy, fairness,
explainability, robustness, transparency, environmental sustainability,
inclusion, moral agency, value alignment, accountability, trust, and technology
misuse. This article aims to provide a comprehensive market view of AI ethics
in the industry today.

With the emergence of big data, companies have increased their focus to drive
automation and data-driven decision-making across their organizations. While
the intention there is usually, if not always, to improve business outcomes,
companies are experiencing unforeseen consequences in some of their AI
applications, particularly due to poor upfront research design and biased
datasets.

As instances of unfair outcomes have come to light, new guidelines have
emerged, primarily from the research and data science communities, to address
concerns around the ethics of AI. Leading companies in the field of AI have also
taken a vested interest in shaping these guidelines, as they themselves have
started to experience some of the consequences for failing to uphold ethical
standards within their products. Lack of diligence in this area can result in
reputational, regulatory and legal exposure, and in costly penalties. As with
all technological advances, innovation tends to outpace government regulation
in new, emerging fields. As the appropriate expertise develops within
government and industry, we can expect more AI protocols for companies to follow,
enabling them to avoid any infringements on human rights and civil liberties.
Establishing principles for AI ethics
While rules and protocols develop to manage the use of AI, the academic
community has leveraged the Belmont Report as a means to guide ethics within
experimental research and algorithmic development. Three main principles came
out of the Belmont Report that serve as a guide for experiment and algorithm
design:

• Respect for Persons: This principle recognizes the autonomy of individuals
and upholds an expectation for researchers to protect individuals with
diminished autonomy, which could be due to a variety of circumstances such as
illness, a mental disability, or age restrictions. This principle primarily
touches on the idea of consent. Individuals should be aware of the potential
risks and benefits of any experiment that they're a part of, and they should
be able to choose to participate or withdraw at any time before and during the
experiment.
• Beneficence: This principle takes a page out of healthcare ethics, where
doctors take an oath to “do no harm.” This idea can be easily applied to
artificial intelligence where algorithms can amplify biases around race,
gender, political leanings, et cetera, despite the intention to do good and
improve a given system.
• Justice: This principle deals with issues such as fairness and equality. Who
should reap the benefits of experimentation and machine learning? The
Belmont Report offers five ways to distribute burdens and benefits,
which are by:
o Equal share
o Individual need
o Individual effort
o Societal contribution
o Merit
