
Practice 1 U4 SAC3 Part B ETHICS IN AI – SOLUTIONS

Case Study:
Many countries globally are actively using AI technologies for surveillance of their general populations, including smart
city/safe city platforms, facial recognition systems, and smart policing initiatives. The widespread use of AI in the name of
combating crime and protecting public safety has contributed to a reduction in crime in many locations and has been effective in many cases.
However, public surveillance does not come without a cost; in the US, multiple ethical concerns have arisen in the past couple
of years that call into question the feasibility of implementing AI technology to combat crime.

Case 1: Robert Williams
Williams, a Black man, was falsely accused of stealing watches. During questioning, an officer showed Williams a picture of a suspect. His response was to reject the claim. "This is not me," he told the officer. He says the officer replied: "The computer says it's you." The facial recognition system had returned a match between the suspect photo and an old driver's licence photo of Williams. While Williams ended up being released, his personal experience has been traumatic.

Case 2: Nijeer Parks
The evidence presented by the police officers that led to Parks' arrest was a facial recognition match to a photo from what was determined to be a fake ID left at the crime scene, which witnesses connected to the suspect. After her son was arrested, Patricia Parks looked at an enlarged printout of the suspect's fake ID. "He looks nothing like him … nothing like him," she said. "People have a saying 'all Black people look the same.' That's the first thing that came to my mind when I saw this photo, because it looks nothing like my son."

Case 3: Michael Oliver
A 25-year-old Detroit man was wrongly accused of a felony for supposedly reaching into a teacher's vehicle, grabbing a cellphone and throwing it, cracking the screen and breaking the case. Detroit police used facial recognition in the investigation; his case was dismissed after officials determined he had been misidentified. Oliver does not resemble the man in the video: he has several tattoos, while the person in the video has no visible tattoos.
(Image: left, a still taken from a cellphone video shortly before a young man reached into the vehicle and grabbed the teacher's phone; right, a picture of Michael Oliver.)
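At a technical level, the "match" in these cases is usually just the highest similarity score between a probe photo and a gallery of stored photos, reported whenever it clears an operator-chosen threshold. The sketch below is a generic illustration of that pipeline, not the system used in these cases; the embeddings, identity names and threshold are made up for demonstration.

```python
# Illustrative sketch only: a generic embedding-and-threshold face matcher.
# It is NOT the system used in the cases above; all values are placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict, threshold: float):
    """Return the gallery identity with the highest similarity to the probe,
    or None if nothing clears the threshold."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, round(score, 2)) if score >= threshold else (None, round(score, 2))

# Hypothetical 128-dimensional embeddings standing in for photos.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)                                      # photo of the actual offender
gallery = {
    "licence_photo_A": rng.normal(size=128),                      # unrelated person
    "licence_photo_B": probe + rng.normal(scale=2.0, size=128),   # only a weak resemblance
}

# With a loose threshold, a modest similarity is still reported as a
# confident-sounding "match" -- the system names whoever scores highest.
print(best_match(probe, gallery, threshold=0.3))
```

The design point is that the human-facing output ("the computer says it's you") hides both the similarity score and the threshold choice that produced it.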

Q1: Outline in detail two major ethical objections regarding the use of facial recognition technology. (8-10 dot
points)
Ethical Objection #1: Bias
• The facial recognition software in these cases has learned biases regarding race, age and sex, as have the humans who use it.
• The algorithms used by law enforcement agencies appear to have errors and inaccuracies when matching faces from some
demographic groups.
• These errors expose the biases in facial recognition systems that hinder the safe implementation of these
technologies.
• False matches can lead to tense police encounters, false arrests or worse.
• This widespread demographic differential, produced by inherently discriminatory AI facial recognition systems, remains
a paramount ethical issue that needs to be addressed; a per-group error audit of the kind sketched below makes the differential measurable.
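One way to make the bias concern concrete is to audit logged match decisions per demographic group. The sketch below is a minimal example, assuming access to hypothetical trial records with ground truth; it is not taken from any of the systems discussed above.

```python
# Minimal sketch of a per-group false match audit.
# The trial records and group labels below are hypothetical placeholders.
from collections import defaultdict

# (demographic_group, system_reported_match, pair_is_actually_same_person)
trials = [
    ("group_a", True,  False),   # a false match
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
    # ... in practice, thousands of logged comparisons per group
]

def false_match_rate_by_group(records):
    """False match rate = 'match' decisions on different-person pairs,
    divided by all different-person pairs, computed per group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for group, said_match, same_person in records:
        if not same_person:          # only different-person pairs can produce a false match
            total[group] += 1
            wrong[group] += int(said_match)
    return {g: wrong[g] / total[g] for g in total}

# Large gaps between groups are the "demographic differential" described above.
print(false_match_rate_by_group(trials))   # -> {'group_a': 0.5, 'group_b': 0.0}
```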
Ethical Objection #2: Privacy
• As facial recognition technology becomes more widely used, concerns grow around continuous surveillance and the
potential for 'function creep', where data collected for one purpose, such as driver's licences, is used for another.
• With facial recognition people can be identified and tracked without their knowledge or consent, potentially leading to
an invasion of privacy.
• Accumulation of facial data in databases presents a target for cybercriminals, raising questions about data security.
Ethical Objection #3: Transparency
• Consent and transparency form another ethical issue. There's an ongoing debate about whether it's ethically sound to
capture and use someone's biometric data without explicit consent. This concern becomes more pressing in public
spaces where surveillance systems are increasingly equipped with facial recognition technology.
• Facial recognition technology is often used without transparent policies about its usage, leaving individuals unaware of
when, why, and how their facial data is being used.

Exploiting AI Surveillance Technology


Ethical concerns also arise when authoritarian governments and big corporations exploit AI surveillance and facial
recognition in the name of combating crime:
• AI may be promoted officially as a tool for defence, social welfare and the development of ethical standards, yet be
turned into an all-seeing digital system of social control.
• Such a system grants governments and corporations control at the expense of civil liberties.
• A government or corporation may intentionally use AI for racial profiling of minorities or political enemies.
• Surveillance systems can search for people based on their appearance and keep records of their daily movements.
The capability and implementation of mass AI surveillance employed by governments and corporations remains an urgent
ethical crisis for human rights activists and leaders worldwide.

Published 15 June 2022, 11:45am, by Akash Arora. Source: SBS News.
Kmart, Bunnings and The Good Guys are using facial recognition
technology in stores, “often without shoppers realising”, according
to an investigation by Choice.
Choice also analysed the retailers' privacy policies and claimed
Kmart, Bunnings and The Good Guys are the only three major
Australian retailers capturing biometric data of their shoppers.
In a statement to SBS News, Bunnings chief operating officer Simon
McDowell said the technology was used "solely to keep team and
customers safe and prevent unlawful activity in our stores".

Bunnings paused its use of facial recognition technology in July 2022 after the Office of the Australian Information
Commissioner (OAIC) opened an investigation into whether the retailer had breached privacy laws.

Q2: Did these stores violate the privacy of their customers? If so, was this breach justifiable? (5-6 points)
• The stores have a sign at the entry that warns of facial recognition technology, but this is not the same as consent.
• It is not clear whether images of customers have been retained in a database by the stores, which would be a breach of privacy
and a target for cybercriminals.
• The stores claim the facial recognition technology is used to prevent crime and keep customers safe. Presumably the vast
majority of customers are law-abiding and do no harm, so this breach is not justifiable.
• These stores need to consider the implications and ethical considerations more thoroughly.

Benefits of AI Surveillance Technology


It is critical to address ethical concerns regarding the incorporation of AI in combating crime, and it is also essential to
acknowledge some of the benefits of AI in detecting crimes such as employee theft, cyber fraud, fake invoices, money
laundering, and terrorist financing. Many AI applications have triumphed against financial crime. Specifically, banks have
reduced false alerts by 50% while finding success with AI-driven tools to track criminals; the scope of AI's application is
virtually unlimited if utilized correctly.
Future applications include detecting and tracking illegal goods, terrorist activities, and human trafficking. Delivery
companies can use AI to screen parcels for illegal goods, shops can use AI to identify abnormal purchases, and law
enforcement can use AI to combat human trafficking. All of these AI applications display promising capabilities for
enhancing society's safety across the globe.
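As one illustration of the kind of tool described above, the sketch below flags transactions that are abnormally large for a given customer. It is a minimal toy example, not the AI-driven systems banks actually use; the transaction data and threshold are invented.

```python
# Toy sketch of flagging abnormal transactions; not a real AML/fraud system.
# The transaction amounts and z-score threshold below are made up.
import numpy as np

def flag_abnormal(amounts, z_threshold=2.0):
    """Return indices of transactions more than z_threshold standard
    deviations above this customer's own average spend."""
    x = np.asarray(amounts, dtype=float)
    mean, std = x.mean(), x.std()
    if std == 0:
        return []                          # no variation, nothing to flag
    z_scores = (x - mean) / std
    return [i for i, z in enumerate(z_scores) if z > z_threshold]

# Hypothetical card history: five ordinary purchases and one outlier.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 4900.0]
print(flag_abnormal(history))              # -> [5], the $4900 transaction
```

Real systems combine many more signals than a single customer's spending history, which is partly why the explainability questions in Q3 arise.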

Q3: What are some of the important ethical issues in using AI technology to combat and prevent crimes? Can the
decisions made by deep learning algorithms be explained and justified? (5-6 points)
• Crucial issues include totalitarian regimes’ abuse of AI to silence political opponents and minorities
• Government and corporation use of facial recognition systems without individual consent and public consultations
• Violation of innocent people’s privacy in their everyday shopping
• Bias and lack of transparency in facial recognition systems that are used for law enforcement despite known accuracy gaps between demographic groups.
• Decisions made by deep learning algorithms are difficult to explain and justify because the models operate largely as 'black boxes'; this undermines accountability when an automated match contributes to an arrest.
