
Answer Key

Artificial Intelligence – 8
Chapter 5: AI Ethics

AI Brainstorming! (Pg. 101)


Virtual personal assistants available in the market today, such as Siri and Alexa, come with a female voice by default. Can this be considered a reflection of AI gender bias? Share your opinion.
Hints:
(This answer may vary.)
Siri and Alexa come with a female voice by default, which can be considered a reflection of AI gender bias. The design of such virtual assistants reproduces a discriminatory gender stereotype: that of a woman who is merely a secretary or assistant to her boss.

AI Brainstorming! (Pg. 101)


Imagine you met someone on a social networking site and befriended him/her. After spending considerable time together, you found that the person is in fact an AI-enabled machine operated by a human with malicious intent. What would be your thoughts and concerns?
Hints:
(This answer may vary.)
If I have befriended an AI-enabled machine operated by someone with malicious intent, then I may already have shared my personal and private details with that 'friend'. If this information ends up in the wrong hands, it may lead to cybercrime that could harm me in my real as well as my virtual life. I will make my parents/guardians aware of this and take their help to report the issue to the cybercrime cell. I will also be more aware of cybercrime and cyber safety in future and practise good online habits. We should all learn more about privacy settings, security systems, antivirus updates, and so on.

AI Brainstorming! (Pg. 103)


In July 2020, a video of an accident in which a Tesla car slammed into an overturned truck went viral and led to a debate on social media about the safety of the car's autopilot mode. The car was in autopilot mode when it crashed into the overturned truck on a busy highway in Taiwan. After the incident, the driver blamed the car's autopilot feature for the accident. This incident raised some ethical questions.
Show the video of the accident to the students. Divide the class into groups of three to four students and ask them to discuss the following points:
• Does this incident raise an ethical concern related to the use of AI technology in cars?
Yes, this incident raises an ethical concern related to the use of AI technology in cars.
• Is this a case of AI making a mistake?
Hints:
(This answer may vary.)
Yes, in my opinion it is a case of AI making a mistake. The machine could have been trained better, that is, its algorithms could have been made more robust. Since it is a machine trained by human beings, it worked only as it was trained to, making this an example of artificial stupidity. The car should have raised an alarm on detecting a stationary object so that the driver could have responded.
• In case of such accidents, who should be held responsible — the manufacturer of the car, the developer, the driver, or the person who purchased the car?
Hints:
(This answer may vary.)
In case of such accidents, the developer and the manufacturer of the car can be held responsible under existing automobile product liability laws. The driver can also be held responsible, because drivers may not be fully trained to operate such vehicles. They must be given training for self-driving vehicles, and companies must ensure that drivers fully understand the system's capabilities and limitations.
• Do you think it will be safe to use self-driving cars on roads in the future?
Hints:
(This answer may vary.)
Yes, it would be safe to use self-driving cars on roads in the future because the technology is improving day by day and is being applied in every field. Also, researchers are not only evaluating the impact of AI technology on human safety, how humans perceive and interact with AI-powered robots and machines, and the impact on privacy and dignity, but are also trying to ensure that these technologies do not violate the human moral compass.

Exercises (Pgs. 104–106)


A. Tick (✓) the correct answers.
1. a 2. b 3. c 4. c 5. a
6. b 7. a 8. b 9. c 10. b
B. Fill in the blanks.
1. Ethics 2. Sentience 3. AI Ethics 4. AI Policy 5. mistakes
C. Write ‘T’ for True and ‘F’ for False.
1. F 2. T 3. F 4. T 5. F
D. Answer the following questions.
1. AI ethics can be defined as the moral principles governing the behaviour of humans as they
design, construct, and use artificially intelligent machines. The three AI ethical issues are:
a. The impact that AI will have on humans
b. Lack of transparency in AI-enabled systems
c. The ethical quality of the outcomes drawn from developing artificially intelligent machines
2. AI-enabled machines making mistakes is considered an ethical issue because machine learning takes
time to become useful. If AI-enabled systems are trained with bad data or are not programmed in
an ethical way, they can be harmful. For example, Tay, Microsoft's AI chatbot, was shut down in
less than a day because it developed a racist and misogynistic personality that would have damaged
the company's reputation.
3. AI bias, also known as machine learning bias, arises because an AI system cannot always be fair
and can develop bias against a race, gender, or ethnicity. The reason is that AI systems are developed
and trained by humans, who can themselves be biased.
4. Singularity is the hypothetical point at which artificially intelligent machines will surpass human
intelligence and human beings will no longer be the most intelligent beings on the Earth.
5. If robots replace doctors in the healthcare industry, one of the ethical issues that may arise is
the loss of human contact and empathy. People would be deprived of social interaction, which may
lead to mental and psychological disorders and can also affect the health and well-being of
elderly people.
(This answer may vary.)

Balloon Debate (Pg. 106)


(Note: Answers may vary from student to student, as these are open-ended questions for group
discussion. It is suggested that students discuss their opinions in class with the teacher and then
write their answers.)
