MODULE 4

Question 2
A new text messaging app features an AI model that predicts the next word
in a sentence, helping users quickly type their messages. However, the AI
model frequently makes inaccurate predictions for users who are writing in
languages that the model wasn't trained on. What type of harm does this
scenario describe?
- Quality-of-service
- Allocative
- Social system
- Deepfakes
Question 3
- Drift
- Algorithms
- Emergence
- Updates
Question 5
Assume the AI tool's developer fully secures the data they collect.
Hello!
Understand bias in AI
- AI is an inspiring tool, but AI models also reflect the values of the people who design them.
Identify AI harms
- In order to limit biases, drift, and inaccuracies, it's important to keep a human in the loop.
In this example, the type of harm described is quality-of-service harm. Bias can also surface through an AI tool's reinforcement of stereotypes, such as when a translation model associates certain roles with feminine or masculine traits and chooses gender-specific translations as a result.
Deepfakes are AI-generated depictions of real people saying or doing things that they did not do. Some image-generating tools are putting digital watermarks on their output to make deepfakes easier to detect. AI can also be misused to harm individuals, for example by impersonating them or surveilling them.
I work at Google
that led you to who you are and that serve as foundational
How well do those systems work for folks that look like me?
Get involved.
Biased output can cause many types of harm to people and society, including allocative, quality-of-service, and social system harms.
How to mitigate: Consider the effects of using AI, and always use your best judgment and critical thinking skills. Ask yourself whether AI is right
for the task you’re working on. Like any technology, AI can be both beneficial
and harmful, depending on how it’s used. Ultimately, it’s the user’s
responsibility to make sure they avoid causing harm by using AI.
Several other factors can cause drift, making an AI model less reliable. Biases in new data can contribute to drift, and changes in the ways people behave and use technology, or even major world events, can also affect a model's reliability. To keep an AI model working well, it's important to regularly monitor its performance and address its knowledge cutoffs using a human-in-the-loop approach.
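To make "regularly monitor its performance" concrete, here is a minimal sketch of one way drift monitoring could look in code. It assumes you can periodically score the model on a fresh labeled sample; the function names, the baseline value, and the tolerance threshold are illustrative assumptions, not anything defined in this course.

```python
# Hypothetical sketch: compare a model's recent accuracy against a
# baseline to flag possible drift for human review. The names and
# thresholds here are illustrative, not a standard API.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(recent_preds, recent_labels,
                    baseline_accuracy, tolerance=0.05):
    """Return True if recent accuracy falls notably below the baseline.

    A drop larger than `tolerance` suggests the model may be drifting
    and that recent cases should be routed to a human reviewer
    (a human-in-the-loop approach).
    """
    recent_accuracy = accuracy(recent_preds, recent_labels)
    return (baseline_accuracy - recent_accuracy) > tolerance

# Example: accuracy measured at deployment was 0.92; this week's
# labeled sample scores much lower, so the check flags it.
if check_for_drift(["a", "b", "b", "a"], ["a", "b", "a", "b"],
                   baseline_accuracy=0.92):
    print("Possible drift detected; route recent cases to a human reviewer.")
```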
To explore biases, data, drift, and knowledge cutoffs, check out the exercise
What Have Language Models Learned? from Google PAIR Explorables. There
you can interact with BERT, one of the first large language models (LLMs),
and explore how correlations in the data might lead to problematic biases in
outcomes. You can also check out other PAIR AI Explorables to learn more
about responsible AI.
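If you'd like to experiment outside the browser-based explorable, here is a minimal sketch along the same lines using the Hugging Face transformers library (an assumption on tooling; the PAIR exercise itself requires no code). It asks BERT to fill in a masked word in two invented example sentences and prints the top guesses, which is one way to surface the kinds of correlations the explorable discusses.

```python
# Minimal sketch: probe bert-base-uncased with fill-in-the-blank
# prompts to see which words it considers most likely. Requires
# the Hugging Face transformers library and a model download.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Comparing near-identical sentences can surface correlations the
# model picked up from its training data.
sentences = [
    "The doctor said [MASK] would be late.",
    "The nurse said [MASK] would be late.",
]
for sentence in sentences:
    guesses = [result["token_str"] for result in fill_mask(sentence, top_k=3)]
    print(sentence, "->", guesses)
```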
as a research analyst,
with products and tools that may not work for myself
and analyzing how we can improve to make it better.
but they are forgetting about the Black deaf community too.
AI really can help improve life for the deaf community.
and thinking about how you can make an impact in the world,
Before you use AI, it’s vital that you think carefully about how to do so
responsibly. This ensures you're using it ethically, minimizing risks, and
achieving the best outcomes. Responsible AI use involves being transparent
about its use, carefully evaluating its outputs, and considering the potential
for bias or errors. You may download a copy of this checklist for using AI
responsibly to help you navigate these considerations in your work.
Review AI outputs
When using AI as a tool to complete a task, you’ll want to get the best
possible output. If you aren’t sure about the accuracy of a model’s output,
experiment with a range of prompts to learn how the model performs.
Crafting clear and concise prompts will improve the relevance and utility of
the outputs you receive. It’s also best practice to proactively address and reduce unexpected or inaccurate results.
Remember that, in general, you’ll only get good output if you provide good
input. To help you create good input, consider using this framework when crafting prompts (a brief sketch of combining these pieces follows the list):
- Include any context the generative AI tool might need to give you what you want.
- Add references the generative AI tool can use to inform its output.
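As mentioned above, here is a minimal sketch of combining those pieces into a single prompt. The task wording, context, and reference notes are made-up examples, and plain string formatting is just one simple way to assemble them.

```python
# Illustrative sketch: assembling a prompt from the framework above.
# The task, context, and reference text are invented examples.

task = "Draft a two-sentence summary of our Q3 volunteer program."
context = (
    "Audience: new volunteers with no prior background. "
    "Tone: friendly and encouraging."
)
references = (
    "Reference notes: the program runs September through November "
    "and pairs each volunteer with a mentor."
)

# Give the generative AI tool the task, the context it might need,
# and references it can use to inform its output.
prompt = f"{task}\n\n{context}\n\n{references}"
print(prompt)
```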
After you’ve used that framework to create your prompts, review your output. Fact-check all content the AI tool generated by cross-referencing the information with reliable sources.
Disclosing your use of AI fosters trust and promotes ethical practices in your
community. Here are some actions you can take to be transparent about
using AI:
- Tell your audience and anyone it might affect that you’ve used or are using AI. This step is particularly important when using AI in high-impact professional settings, where there are risks involved in the outcome of AI.
- Explain what type of tool you used, describe your intention, provide an overview of your use of AI, and offer any other information that could help your audience evaluate potential risks.
- Don’t copy and paste outputs generated by AI and pass them off as your own.
By taking a proactive approach, you can help ensure users explore AI with confidence that the content is legitimate. This is especially important because, in some cases, users may not be aware that they’re engaging with AI.
Here are some actions you can take to evaluate image, text, or video content
before you share it:
- Headlines don't always tell the full story, so read full articles before you share.
- Read supporting documents associated with the tools you’re using. Any documentation that describes how the model was trained or how it uses privacy safeguards (such as terms and conditions) can be a helpful resource for you.
AI isn’t perfect. Keep that in mind as you use various tools and models, and apply your best judgment to use AI for good. Before you use AI, ask yourself:
- If I use AI for this particular task, will it hurt anyone around me?
- Hi, I am Shaun.
Wrap-up
- Congratulations.
You examined the types of harms that are associated with AI.