MODULE 4

The document discusses the importance of responsible AI usage, emphasizing the need for users to be aware of AI's limitations, prioritize ethical practices, and strive to benefit society while avoiding harm. It outlines various types of harms associated with AI, including allocative, quality-of-service, representational, social system, and interpersonal harms, as well as systemic and data bias, and stresses the significance of human intervention in mitigating these issues. Additionally, it highlights the necessity of maintaining privacy and security when using AI tools, advising users to understand terms of use and avoid sharing sensitive information.


1.

Question 1

In order to achieve the goal of responsible AI, what should AI users commit to
doing? Select three answers.

Rely on AI’s understanding of reasoning and context

Maintain awareness of AI’s limitations

Prioritize ethical use of AI tools

Strive to benefit people and society while avoiding harm


1 point

2.

Question 2

A new text messaging app features an AI model that predicts the next word
in a sentence, helping users quickly type their messages. However, the AI
model frequently makes inaccurate predictions for users who are writing in
languages that the model wasn't trained on. What type of harm does this
scenario describe?

Quality-of-service

Allocative

Social system

Deepfakes


1 point

3.

Question 3

A manufacturer starts making products faster thanks to a new AI tool. As a
result, products arrive in stores 15% more quickly than they used to. After a
few years, production slows because the AI model the tool uses is never
trained on new data. What does this scenario describe?

Drift

Algorithms

Emergence

Updates


1 point

4.

Question 4

Which of the following statements correctly describe systemic bias? Select
two answers.

Systemic bias exists within societal systems, such as education or criminal
justice.

Systemic bias is a tendency upheld by institutions that favors certain groups.

Systemic bias occurs when an AI model's accuracy in predictions declines
due to changes over time.

Systemic bias is only reflected in low-quality data sets.


1 point

5.

Question 5

A graphic designer is using a generative AI tool to help create marketing
materials for clients. Which of the following measures should the graphic
designer take to keep their clients' information private and safe while using
the AI tool? Select two answers.

Regularly review and understand the AI tool's terms of use.

Stay informed about AI developments that can impact privacy concerns.

Input sensitive project information to create engaging marketing materials.

Assume the AI tool's developer fully secures the data they collect.


1 point

Module 4 introduction: Use AI responsibly


- Make sure that you use AI responsibly,

ethically and for good outcomes.


Hello!

I'm excited to introduce you to new concepts

that will help you learn how to use AI responsibly.

Learning to use AI responsibly is a crucial part

of experimenting with and using this technology.

Responsible AI is the principle of developing

and using AI ethically with the intent of benefiting people

and society, while avoiding harm.

To ensure people are treated fairly and respectfully,

AI users must be aware of the limitations of AI tools

and commit to using them ethically.

An AI user is someone who leverages AI

to complete a personal or professional task,

like editing copy for a marketing campaign,

brainstorming ideas for a nonprofit fundraiser,

or discovering more effective ways

to use a particular technology.

AI can perform many tasks

that help make work more productive, efficient, or engaging.

But AI, to be clear, is not perfect.

Humans are creative, logical and compassionate.

We have critical reasoning abilities

and contextual understanding of our environment

that AI systems lack.


Consider how the autopilot feature functions in an airplane.

Autopilot navigates from point A to point B,

but the plane still needs a human pilot

to make complex decisions.

For instance, if weather forces the plane

to make an emergency landing,

the pilot will be the one to manage that situation

and safely land the plane.

In this scenario, autopilot is the system

and the pilot is the user.

Autopilot can handle a lot of the more technical aspects

of keeping the plane in the air,

but generally, flying the plane safely

is the responsibility of the pilots.

Similarly, AI tools can help

with many basic tasks in the workplace.

AI can be used to brainstorm ideas for a new product,

outline a press release,

suggest questions to ask during a focus group and much more,

but it cannot perform higher level tasks

like giving personalized performance feedback to an employee

or making a judgment about which candidate to hire

or providing therapy to a patient.

AI works best when used as a complement

to our uniquely human skills and abilities.

As a member of the Responsible Innovation Team,

I use my human abilities to connect with colleagues,


motivate teammates and drive innovation responsibly.

My name is Emilio and I'm going to help you

learn about what using AI responsibly means.

In this section of the course, you'll learn about the biases

that exist in AI models.

Then, you'll examine the types of harms

that are associated with AI

and the importance of pairing the use of AI

with human abilities like critical thinking

and responsible decision-making.

Finally, you'll discover some tips for maintaining privacy

and security while experimenting with or using AI tools.

Let's get started.

Understand bias in AI

- AI is an inspiring tool

that helps allow for new experiences,

opportunities, and achievements.

For instance, AI is used to help self-driving cars

detect pedestrians on a busy road

and predict the presence

and severity of a medical condition.

However, the fact that AI is beneficial is not a given.

As a user who is aware

of AI's potential biases and its limitations,

you can help ensure responsible outcomes

rather than harmful ones.

AI models are trained on data created by humans,

so they carry human values and are subject to bias.

They can also sometimes produce inaccurate results.

Because an AI model is trained on a data set

to recognize patterns and perform tasks,

the model is only as good as the data it receives.

The output from the AI tool may be affected

by both systemic bias and data bias.

Let's explore each of those now.

First, systemic bias is a tendency upheld by institutions

that favors or disadvantages certain outcomes or groups.

Systemic bias exists within societal systems

like healthcare, law, education, politics, and more.

Even if the people who design

and train an AI model think they're using high-quality data,

the data may already be biased

because humans are influenced by systemic biases.

Data bias is a circumstance in which systemic errors

or prejudices lead to unfair or inaccurate information,

resulting in biased outputs.

Maybe you're developing a work presentation,

and you ask an AI image generator to create a photo of a CEO.

All of the images generated appear to be white males.

Based on this result,

you might assume that all CEOs are white men.

Obviously, this data is biased.

Still, the more an AI model is trained

with images of white men as CEOs,


the more likely these models are

to generate similarly biased outputs.

Therefore, the more the data represents

a wider variety of people,

the more inclusive the outcome

of the image generation will be.

Just as AI models reflect the biases

of the data used to train them,

they also reflect the values of the people who design them.

In other words, AI models are value-laden.

For example, perhaps an AI engineer wants

to help create more sustainable ways to generate energy.

The engineer could use AI to build a tool

that allows energy suppliers

to increase their use of renewable resources.

In this case, the AI tool was created based on the idea

that society can and should make the most of solar

and wind power sources,

which is a reflection of the engineer's values.

This focus means that the AI tool

is not intrinsically value-neutral.

Other people may have different values

about energy generation that are not reflected

in this particular AI tool.

Like most aspects of emerging technology,

AI is not a perfect system.

At present, it provides both opportunities and challenges,


so using it responsibly requires critical thinking

and an understanding about how data may be biased.

Identify AI harms
- In order to limit biases, drift, and inaccuracies,

AI models require humans to take action,

like retraining models on more diverse data sets

and continuing to fine tune them frequently.

Humans must also consider the inadvertent harms

associated with using these AI tools.

Let's examine some of the types of harm

that AI can cause if used irresponsibly.

First is allocative harm.

An allocative harm is a wrongdoing that occurs

when an AI system's use or behavior

withholds opportunities, resources, or information

in domains that affect a person's wellbeing.

For example, if AI tools don't provide

the same information to everyone,

some people may be denied access to education,

healthcare, fair housing, or other opportunities.

Maybe a property manager for an apartment complex

uses an AI tool to screen applications

for potential tenants.

This AI tool uses the names

and other identifying information on these applications

to help conduct background checks.

One applicant is deemed a risk


because of a low credit score,

so they are denied the apartment

and they lose the application fee.

Later, the property manager realizes

the software had misidentified the applicant

and ran a background check on the wrong person.

In this example,

the applicant has experienced an allocative harm

because they were denied an opportunity

and lost resources, both affecting their wellbeing.

Another type of harm AI can cause

is quality-of-service harm.

Quality-of-service harm is a circumstance in which AI tools

do not perform as well for certain groups of people

based on their identity.

When speech recognition technology was first developed,

the training data didn't have many examples

of speech patterns exhibited by people with disabilities,

so the devices often struggled to parse this type of speech,

but this technology is still evolving.

The next type of harm is representational harm,

an AI tool's reinforcement

of the subordination of social groups

based on their identities.

For instance, the AI powering a language translation app

might associate certain words with feminine

or masculine traits,
and choose gender specific translations

based on those assumptions.

Now, this is harmful

because the result may be the erasure

or alienation of social groups due to built-in biases.

Another type of harm associated with AI

is social system harm.

This harm refers to macro-level societal effects

that amplify existing class, power, or privilege disparities

or cause physical harm

as a result of the development or use of AI tools.

As AI-generated images become more realistic,

there's concern about the spread of disinformation,

including deepfakes.

Deepfakes are AI-generated fake photos or videos

of real people saying or doing things that they did not do.

An example of a social system harm

might be if a deepfake of a school board candidate

showed that person saying something they didn't say.

If it went viral, causing them to lose the election,

that would impact voters' perspectives,

the way parents feel about their school district,

and the community in general.

Because there would be disinformation being spread

on a large scale, this would be a social system harm.

Fortunately, new technology is being created

to detect deepfakes.
Some image-generating tools are putting digital watermarks

on AI-generated images and videos

to indicate who created them.

Over time, deepfakes should become easier

for computers to identify.

AI users do need to be aware of the difficulty

of distinguishing between AI-generated images

and real images,

and the consequences of creating these deepfakes.


Sometimes people can share private information

with an AI tool that could be misused by others,

like locking someone out of an online account

or surveilling them.

These are examples of interpersonal harm,

which is the use of technology

to create a disadvantage to certain people

that negatively affects their relationships with others

or causes a loss of one's sense of self and agency.

All of these harms are examples

of how using technology irresponsibly

can negatively impact people and communities.

If used without human intervention or critical thinking,

AI can reinforce systemic bias,

leading to unfair distribution of resources,

the perpetuation of dangerous stereotypes,

or the reinforcement of ongoing power dynamics.


The good news is that AI tools are rapidly evolving

based on feedback from users.

That's why being aware of potential harm

and negative outcomes

is a first step to using AI responsibly.

Emilio: My path to working in responsible AI

Hi. I'm Emilio.

I work at Google

and I'm a responsible innovation program manager.

I get asked all the time,

"Wait, so you were a political science major

and a Spanish major, and now you work in AI?

What do you do?"

And my best answer is, like, I do everything and nothing.

I'm a program manager.

I make sure that we get things done on time.

I make sure that folks are

collaborating well and effectively together.

I connect the experts in AI

in specific areas of AI like perception, fairness,

to product development teams

who are looking to incorporate those best practices.

I read a really impactful book

about the dangers of automating

some levels of systemic inequality,

and I think that really woke me up

to not only the possibilities of AI,


because I do consider myself a techno-optimist,

but also the ramifications of implementing AI

without fully teaching folks,

and understanding for ourselves, as the people who implement it,

what the ramifications of these systems are.

Thinking about your own background and your own upbringing

and where you come from in your culture,

you have a certain set of experiences

that led you to who you are and that serve as foundational

to your core values and your core beliefs.

The same thing exists for AI.

So, for instance, I'm from Southern California.

I have a bias towards sunshine and perfect waves, right?

I think a lot of people do, but I do in particular.

Now, those human aspects might not be directly translatable

to a machine learning model,

but because you can equate experiences to data,

machine learning models only can learn

based off the data that they've ingested,

whether it's through historical data sets or user feedback.

If we can make these systems

more representative of the users that they purport to serve,

I think it will help serve more people.

Something that I think about is,

what is the information that's going into those data sets?

Who is being included and who is being excluded?

Something I think a lot about is


facial detection systems and social media sites.

How well do those systems work for folks that look like me

who have more melanin-rich skin,

or folks who might have even more melanin-rich skin?

And does it work across the spectrum equally?

Does it not work across the spectrum equally?

I think we all have a personal responsibility

when using these tools.

I think the two things that you can do are

check the outputs of whatever tool you're using,

and if you're going to use new data

ingested into the model, check the data.

Make sure that it's inclusive.

Make sure that it's representative of

the different communities that you hope to interact with.

Get involved.

Constantly give feedback on things that you don't approve of

or that you do approve of

so that the groups that are creating these systems

know how to improve.

Security and privacy risks of AI

- You've learned that it's important

to be aware of the possible harms of AI tools.

It's equally important to make sure

you're equipped with the knowledge

to make informed decisions about data,

especially when it comes to privacy and security.


Privacy is the right for a user to have control

over how their personal information and data

are collected, stored, and used.

A variety of information is used to train AI models,

including data sets and user inputs.

For example, users might disclose private information

during their interactions with an AI tool,

and personal information might include names and addresses,

medical records and history,

and financial and payment information.

If you're using an AI tool at your job,

you might decide to include specific information

about a project, stakeholders or your clients

in an AI prompt

to make the output more specific to your task.

But using an AI tool in this way

can present a security risk.

Security is the act of safeguarding personal information

and private data,

and ensuring that the system is secure

by preventing unauthorized access.

The majority of IT industry leaders

believe generative AI

might introduce new security risks,

and that before an organization implements generative AI

for the first time, the organization

should put enhanced security measures in place.


As the user, there are measures you can take

to help protect your own privacy and security,

as well as that of your organization,

coworkers, and business partners.

First, before you use an AI tool,

be aware of its terms of use or service,

privacy policy, and any associated risks.

Consider how transparent an AI tool is

about how it collects data from its users.

Trusted AI tools are often built

with robust security and privacy teams

that have thoroughly considered all kinds of risks

and have put effort into communicating those risks to users.

Before accepting terms of service for a website or an app,

be aware of what data is collected

and how it might be used.

Next, don't input personal or confidential information.

Most AI tools function adequately

without specific personal details.

So while using AI,

keep private information, like your identity,

your department's budget details or email address, private.


Similarly, avoid putting confidential information

into an AI tool

to prevent the data from becoming available to a third party

during a security breach or data leak.


To personalize your outputs,

you can always edit the details later.

Many AI tools use encryption

and other measures to help protect your information,

but you should always make sure to protect your privacy.

Finally, stay up to date on the latest tools.

Knowing about new advancements in AI

can help you understand risks as they emerge.

So if you plan on using AI frequently,

make sure you're reading the latest articles

from trusted news sources,

scholarly and university publications,

and subject matter experts.

When it comes to AI,

technology is progressing and changing almost daily.

Luckily, security strategies are, too.

As you've learned, privacy and security

are a huge part of using AI responsibly.

And knowing how to keep your organization and yourself safe

is an integral part of responsible AI.

Bias, drift, and knowledge cutoff

A thorough understanding of concepts in responsible AI—such as bias, drift,
and knowledge cutoff—can help you use AI more ethically and with greater
accountability. In this reading, you'll learn how to use AI tools responsibly and
understand the implications of unfair or inaccurate outputs.
Harms and biases

Engaging with AI responsibly requires knowledge of its inherent biases. Data
biases are circumstances in which systemic errors or prejudices lead to
unfair or inaccurate information, resulting in biased outputs. Using AI
responsibly and being aware of AI's potential biases can help you avoid these
kinds of harms.

Biased output can cause many types of harm to people and society,
including:

•  Allocative harm: Wrongdoing that occurs when an AI system's use or
   behavior withholds opportunities, resources, or information in domains
   that affect a person's well-being

   o  Example: If a property manager for an apartment complex were
      to use an AI tool that conducted background checks to screen
      applications for potential tenants, the AI tool might misidentify
      an applicant and deem them a risk because of a low credit score.
      They might be denied an apartment and lose the application fee.

   o  How to mitigate: Evaluate all AI-generated content before you
      incorporate it into your work or share it with anyone. Situations
      like the one in the example can be avoided by double-checking
      AI output against other sources.

•  Quality-of-service harm: A circumstance in which AI tools do not
   perform as well for certain groups of people based on their identity

   o  Example: When speech-recognition technology was first
      developed, the training data didn't have many examples of
      speech patterns exhibited by people with disabilities, so the
      devices often struggled to parse this type of speech.

   o  How to mitigate: Specify diversity by adding inclusive
      language to your prompt. If a generative AI tool fails to consider
      certain groups or identities, like people with disabilities, address
      that problem when you iterate on the prompt.

•  Representational harm: An AI tool's reinforcement of the
   subordination of social groups based on their identities

   o  Example: When translation technology was first developed,
      certain outputs would inaccurately skew masculine or feminine.
      For example, when generating a translation for words like
      "nurse" and "beautiful," the translation would skew feminine.
      When words like "doctor" and "strong" were used as inputs, the
      translation would skew masculine.

   o  How to mitigate: Challenge assumptions. If a generative AI tool
      provides a biased response, like by skewing masculine or
      feminine in its output, identify and address the issue when you
      iterate on your prompt, and ask the tool to correct the bias.

•  Social system harm: Macro-level societal effects that amplify
   existing class, power, or privilege disparities, or cause physical harm,
   as a result of the development or use of AI tools

   o  Example: Unwanted deepfakes, which are AI-generated fake
      photos or videos of real people saying or doing things they did
      not say or do, can be an example of a social system harm.

   o  How to mitigate: Fact-check and cross-reference output. Some
      generative AI tools have features that provide sources for where
      information was found. You can also fact-check an output by
      using a search engine to confirm information, or asking an expert
      for help. Running a prompt through two or more resources helps
      you identify possible inaccurate output.

•  Interpersonal harm: The use of technology to create a disadvantage
   to certain people that negatively affects their relationships with others
   or causes a loss of their sense of self and agency

   o  Example: If someone were able to take control over an in-home
      device at their previous apartment to play an unwanted prank on
      their former roommate, these actions could result in a loss of
      sense of self and agency by the person affected by the prank.

   o  How to mitigate: Consider the effects of using AI, and always use
      your best judgment and critical thinking skills. Ask yourself whether
      or not AI is right for the task you're working on. Like any technology,
      AI can be both beneficial and harmful, depending on how it's used.
      Ultimately, it's the user's responsibility to make sure they avoid
      causing harm by using AI.

Drift versus knowledge cutoff

Another phenomenon that can cause unfair or inaccurate outputs is drift.
Drift is the decline in an AI model's accuracy in predictions due to changes
over time that aren't reflected in the training data. This is commonly caused
by knowledge cutoff, the concept that a model is trained at a specific point
in time, so it doesn't have any knowledge of events or information after that
date.

For instance, a fashion designer might want to track trends in spending
before creating a new collection. If they use a model that was last trained on
fashion trends and consumer habits from 2015, the model may not produce
useful outputs because those two factors likely changed over time.
Consumer preferences in 2015 are very likely to be different from today's
trends. In other words, the model's predictions have drifted from accurate at
the time of training to less accurate in the present day due, in part, to the
model's knowledge cutoff.

Several other factors can cause drift, making an AI model less reliable.
Biases in new data can contribute to drift. Changes in the ways people
behave and use technology, or even major events in the world can affect a
model, making it less reliable. To keep an AI model working well, it's
important to regularly monitor its performance and address its knowledge
cutoffs using a human-in-the-loop approach.
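
To make this more concrete, here is a minimal sketch of what human-in-the-loop
drift monitoring might look like in practice. It is only an illustration: the
baseline accuracy, drift threshold, model object, and evaluation data are
hypothetical placeholders rather than anything defined in this course.

    # Minimal drift-monitoring sketch (illustrative only).
    # Compare the model's accuracy on recent data against its accuracy at
    # training time, and flag it for human review if the drop is too large.
    from sklearn.metrics import accuracy_score

    BASELINE_ACCURACY = 0.92   # accuracy measured when the model was trained (assumed)
    DRIFT_THRESHOLD = 0.05     # acceptable drop before a human reviews the model (assumed)

    def check_for_drift(model, recent_inputs, recent_labels):
        """Return True and alert a human if accuracy on recent data has dropped."""
        predictions = model.predict(recent_inputs)
        current_accuracy = accuracy_score(recent_labels, predictions)
        drop = BASELINE_ACCURACY - current_accuracy
        if drop > DRIFT_THRESHOLD:
            print(f"Possible drift: accuracy fell from {BASELINE_ACCURACY:.2f} "
                  f"to {current_accuracy:.2f}. Consider retraining on newer data.")
            return True
        print(f"No significant drift detected (accuracy {current_accuracy:.2f}).")
        return False

Run on a regular schedule, a check like this keeps a person in the loop: the
code only raises a flag, and a human decides whether the model should be
retrained on newer data.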

To explore biases, data, drift, and knowledge cutoffs, check out the exercise
What Have Language Models Learned? from Google PAIR Explorables. There
you can interact with BERT, one of the first large language models (LLMs),
and explore how correlations in the data might lead to problematic biases in
outcomes. You can also check out other PAIR AI Explorables to learn more
about responsible AI.

Jalon: My work on the responsible AI team

- Hi, my name is Jalon.

This is my sign name, Jalon.

I have worked at Google for a little over three years now

as a research analyst,

in the Responsible AI Human-centric Technology department.

Part of my job is really just being able to continue

to advocate for the Black deaf community

and making sure that the community is heard.

A majority of my time in my job is really dealing

with products and tools that may not work for myself
and analyzing how we can improve to make them better.

We want to make sure that every individual is included

when AI and products are being developed,

to make sure that there is

no unconscious bias involved in the process.


Right now, I am responsible for leading

a project called Understanding Black Deaf Users.

In this project, the goal is to really collect


the Black deaf experience

and understand what it's like for them

when products and AI are not working for them

and what that looks like.

It's very important because if you look at today's society,

you see all the different industries

that are trying to shift their focus

to ASL and sign language recognition,

but they are forgetting about the Black deaf community too.

I know quite a few people who would say,

"I can't even use the product for myself."

And if we have left out one person or two people,

we still have a lot more work to do.

AI really can help improve life for the deaf community.

For example, when we think about the deaf community,

their language is sign language.


But what happens when you come across a community

or a population that doesn't know sign language?

AI has the ability to bridge that communication gap

to make sure that there is effective

and efficient communication.

But it's very important to make sure that we're including

as many representatives as possible

so that AI is accurate at all times.

The more data we have,

the more accurate the experience will be

for everyone, who will then feel included.

There are Black deaf experts right now

advocating for more of the Black deaf community to be hired too.

As you are taking this AI course

and thinking about how you can make an impact in the world,

it's gonna be very important

not only to learn and understand AI

and machine learning,

but also to recognize that AI is going to require you

to really think outside yourself.

AI is not just for one person,

it's for a huge population that we're trying to reach

to make sure that everyone feels included.

Checklist for using AI responsibly

Before you use AI, it’s vital that you think carefully about how to do so
responsibly. This ensures you're using it ethically, minimizing risks, and
achieving the best outcomes. Responsible AI use involves being transparent
about its use, carefully evaluating its outputs, and considering the potential
for bias or errors. You may download a copy of this checklist for using AI
responsibly to help you navigate these considerations in your work.

Review AI outputs

When using AI as a tool to complete a task, you’ll want to get the best
possible output. If you aren’t sure about the accuracy of a model’s output,
experiment with a range of prompts to learn how the model performs.
Crafting clear and concise prompts will improve the relevance and utility of
the outputs you receive. Taking a proactive approach to addressing and
reducing instances of unexpected or inaccurate results is also best practice.

Remember that, in general, you'll only get good output if you provide good
input. To help you create good input, consider using this framework when
crafting prompts (a brief sketch of how these pieces fit together appears
after the list):

•  Describe your task, specifying a persona and format preference.

•  Include any context the generative AI tool might need to give you
   what you want.

•  Add references the generative AI tool can use to inform its output.

•  Evaluate the output to identify opportunities for improvement.

•  Iterate on your initial prompt to attain those improvements.
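
The sketch below shows one way those parts might be assembled into a single
prompt string. The build_prompt helper and all of the example values are
hypothetical and are not tied to any particular AI tool or API.

    # Illustrative sketch of the task / persona / format / context / references
    # framework described above. Evaluate the output you get, then iterate by
    # editing these fields and re-prompting.
    def build_prompt(task, persona, output_format, context, references):
        """Assemble a prompt string from the framework's parts."""
        return (
            f"You are {persona}. {task}\n"
            f"Format the response as {output_format}.\n"
            f"Context: {context}\n"
            f"References you may draw on: {references}"
        )

    prompt = build_prompt(
        task="Draft three subject lines for a fundraiser email.",
        persona="an experienced nonprofit communications writer",
        output_format="a numbered list",
        context="The fundraiser supports a local animal shelter and targets past donors.",
        references="Last year's campaign emphasized matching donations.",
    )
    print(prompt)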

After you’ve used that framework to create your prompts, review your
output. Fact-check all content the AI tool generated by cross-referencing the
information with reliable sources. To do this, you can:

•  Look for sources using a search engine.

•  Prompt the AI to provide references so that you can determine where it
   might've gotten the information.

•  If possible, ask an expert to confirm whether the output is true.

Disclose your use of AI

Disclosing your use of AI fosters trust and promotes ethical practices in your
community. Here are some actions you can take to be transparent about
using AI:
•  Tell your audience and anyone it might affect that you've used or are
   using AI. This step is particularly important when using AI in high-
   impact professional settings, where there are risks involved in the
   outcome of AI.

•  Explain what type of tool you used, describe your intention, provide an
   overview of your use of AI, and offer any other information that could
   help your audience evaluate potential risks.

•  Don't copy and paste outputs generated by AI and pass them off as
   your own.

Evaluate all content before you share it

By taking a proactive approach, you can help ensure users explore AI with
confidence that the content is legitimate. This is especially important
because, in some cases, you may not be aware that you’re engaging with AI.

Here are some actions you can take to evaluate image, text, or video content
before you share it:

•  Fact-check content accuracy using search engines.

•  Ask yourself: If this content turns out to be inaccurate or untrue, am I
   willing or able to correct my mistake? If you aren't, that's probably an
   indicator that you shouldn't share it.

•  Remember the steps to SHARE, the World Health Organization's
   mnemonic that can help people be more thoughtful when sharing
   information online.

   o  Source your content from credible and official sources.

   o  Headlines don't always tell the full story, so read full articles
      before you share.

   o  Analyze the facts presented to determine whether everything you're
      reading is true.

   o  Retouched photos and videos might be present in the content
      you want to share, so be cautious about misleading imagery.

   o  Errors may be present in the content you're sharing, and the
      information is more likely to be false if it's riddled with typos and
      errors.

Consider the privacy and security implications of using AI


Whether you’re entering a prompt, sharing a post, or creating new content
with the help of AI, you’ll want to take a moment and reflect on how it may
affect the security of relevant people or organizations. Here are some actions
you can take to consider those privacy and security implications:

•  Only input essential information. Don't provide any information that's
   unnecessary, confidential, or private, because you may threaten the
   security of a person or the organization you're working for. (A simple
   way to approach this is sketched after this list.)

•  Read supporting documents associated with the tools you're using. Any
   documentation that describes how the model was trained to use
   privacy safeguards (such as terms and conditions) can be a helpful
   resource for you.
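
One practical way to follow the first point is to strip obviously sensitive
details out of text before pasting it into an AI tool. The sketch below is
illustrative only: the patterns are assumptions and will not catch every kind
of personal or confidential information, so it complements, rather than
replaces, your own judgment.

    # Illustrative sketch: redact obvious personal details before sending text
    # to an AI tool. The patterns here are examples, not a complete safeguard.
    import re

    REDACTION_PATTERNS = {
        "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "phone": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    }

    def redact(text):
        """Replace matches of each pattern with a labeled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = re.sub(pattern, f"[{label.upper()} REDACTED]", text)
        return text

    draft = "Follow up with jane.doe@example.com about the invoice; call 555-867-5309 if needed."
    print(redact(draft))
    # -> Follow up with [EMAIL REDACTED] about the invoice; call [PHONE REDACTED] if needed.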

Consider the effects of using AI

AI isn’t perfect. Keep that in mind as you use various tools and models, and
use your best judgment to use AI for good. Before you use AI, ask yourself:

•  If I use AI for this particular task, will it hurt anyone around me?

•  Does it reinforce or uphold biases that may cause damage to any
   groups of people?

Shaun: Develop AI that works for everyone

- Hi, I am Shaun.

I'm a Research Scientist at Google Research.

I research fairness and equality of AI,

especially for marginalized groups,

like people with disabilities.

The main reason I came to Google was to have impact

on the technology that we use

and the technology that's coming.

My favorite part of this job is working with people,

both the people on my team, but also in our research,

we get to go out into the community


and get feedback about what we can be doing better

and how we should be thinking about the future of AI

and how it's gonna affect people.

You know, if you think about how different people

might engage with an AI system,

one thing that you might consider

is people will ask questions.

"What is life like for someone with a disability?"

"What kinds of things do people with disabilities enjoy?"

And it's not clear

what the right answer is to that kind of question.

What should you say?

And I think one of the things we want to avoid

is to say, "Oh, that's a sensitive topic,

"so we can't answer that."

From my perspective, that means we've lost

because now, we are missing out on opportunities

to teach people and to be helpful to people.

We want systems that work for everybody.

You know, AI has to be for everybody.

AI bias is a huge problem

and we have to solve it in many different ways,

from data to algorithms to user interface.

One way to address bias

and harmful content is to make sure

that our data sets are appropriate

and are representative of a wide variety of people.


A lot of the work that our team is doing is in helping

to define guidelines for collecting data

and to make sure that we have data that's representative

of a wide variety of people, a wide variety of experiences,

and a wide variety of perspectives.

One way to keep on top of these issues

is through having systems that are transparent.

So being able to understand how AI systems work,

what data they draw on,

how they come to provide certain kinds of feedback.

I think a huge part of learning

to use AI is also applying critical thinking

and not just taking the first answer you get,

but really trying to understand why a particular answer

came about, right?

I think about AI systems not as a tool

to give you answers, but to help you get to answers.

I use AI in my home life also, we, a couple weeks ago,

used AI to help come up with a name for our pet.

And so that was really helpful just as a muse

and as a way to, like, generate some ideas

and have a little bit of fun.

Dr. Bones is the name of my cat, yeah.

Wrap-up

- Congratulations.

You're almost done with this section of the course.

You've discovered how responsible AI helps mitigate bias


and avoid societal harms.

Let's review some of those key points.

First, you explored how systemic

and data biases are reflected in AI models.

Training or developing models with incomplete, outdated,

or inaccurate data increases bias in AI systems.

You examined the types of harms that are associated with AI,

especially if AI tools are not paired with human abilities,

like critical thinking and responsible decision making.

Finally, you gained some tips for maintaining privacy

and security when using AI.

The best way to commit to using AI responsibly

is to be vigilant about the risks of AI

while enjoying its benefits.

Think critically about how you use AI

and the output you receive from AI tools.

Share your awareness of potential biases with others.

Staying up to date with the latest AI developments

is also very important.

AI tools are changing the world rapidly,

but with new technology also comes risk.

Understanding potential harms will help you

as you continue learning how to use these tools responsibly.

It's been great introducing you

to the responsible use of AI.

To continue learning, I encourage you

to stay up to date on AI developments


as part of Google AI Essentials.
