Practice1 Sac3b Solutions (1)
Case Study:
Many countries worldwide are actively using AI technologies to surveil their general populations, including smart
city/safe city platforms, facial recognition systems, and smart policing initiatives. The widespread use of AI in the name of
combating crime and public safety has reduced crime in many locations and has been effective in many cases.
However, public surveillance does not come without a cost; in the US, multiple ethical concerns have arisen in the past couple
of years that call into question the feasibility of implementing AI technology to combat crime.
Q1: Outline in detail two major ethical objections regarding the use of facial recognition technology. (8-10 dot
points)
Ethical Objection #1: Bias
• Facial recognition software in these cases has learned biases regarding race, age, and sex, as have the humans who
use it.
• The algorithms used by law enforcement agencies appear to produce errors and inaccuracies when matching faces from
some demographic groups.
• These errors expose biases in facial recognition systems that hinder their safe implementation.
• False matches can lead to tense police encounters, false arrests, or worse.
• This widespread demographic differential, arising from inherently discriminatory AI facial recognition systems, remains
a paramount ethical issue that needs to be addressed.
Ethical Objection #2: Privacy
• As facial recognition technology becomes more widely used, concerns grow around continuous surveillance and the
potential for 'function creep,' where data collected for one purpose, such as driver's licences, is used for another.
• With facial recognition people can be identified and tracked without their knowledge or consent, potentially leading to
an invasion of privacy.
• Accumulation of facial data in databases presents a target for cybercriminals, raising questions about data security.
Ethical Objection #3: Transparency
• Consent and transparency form another ethical issue. There's an ongoing debate about whether it's ethically sound to
capture and use someone's biometric data without explicit consent. This concern becomes more pressing in public
spaces where surveillance systems are increasingly equipped with facial recognition technology.
• Facial recognition technology is often used without transparent policies about its usage, leaving individuals unaware of
when, why, and how their facial data is being used.
Bunnings paused its use of facial recognition technology in July 2022 after the Office of the Australian Information
Commissioner (OAIC) opened an investigation into whether the retailer had breached privacy laws.
Q2: Did these stores violate the privacy of their customers? If so, was this breach justifiable? (5-6 points)
• The stores display a sign at the entrance warning of facial recognition technology, but this is not the same as consent.
• It is not clear whether the stores have retained customers' images in a database, which would be a breach of privacy
and a target for cybercriminals.
• The stores claim the facial recognition technology is used to prevent crime and keep customers safe. Presumably the vast
majority of customers are law-abiding and do no harm, so this breach is not justifiable.
• These stores need to consider the implications and ethical considerations more thoroughly.
Q3: What are some of the important ethical issues in using AI technology to combat and prevent crimes? Can the
decisions made by deep learning algorithms be explained and justified? (5-6 points)
• Crucial issues include totalitarian regimes' abuse of AI to silence political opponents and minorities.
• Governments and corporations use facial recognition systems without individual consent or public consultation.
• The privacy of innocent people is violated in their everyday shopping.
• Biases and a lack of transparency persist in the use of fundamentally biased facial recognition systems for law enforcement.
• Decisions made by deep learning algorithms are difficult to explain and justify: deep neural networks are effectively
'black boxes' whose internal reasoning cannot easily be inspected, making it hard to audit or contest a match.