Assignment
Uploaded by alyanch2555

DISCOVERING UNIVERSE
28-12-2023
1. THE ETHICAL IMPLICATIONS OF
FACIAL RECOGNITION TECHNOLOGY?
Technology
27 JUN 2023 3:26 PM AEST


ETHICS AND IMPLICATIONS OF
FACIAL RECOGNITION TECHNOLOGY
Facial recognition technology, an aspect of artificial intelligence (AI) that
allows systems to identify or verify a person's identity using their face, has
become increasingly integrated into various sectors of society. As with any
form of advanced technology, its rise comes with complex ethical implications
and societal impacts.

This article aims to provide a comprehensive exploration of these dimensions.

Facial Recognition Technology: An Overview


Facial recognition technology uses biometrics to map facial features from a
photograph or video and compares this information with a database of known
faces to find a match. It is used across various domains, from unlocking
smartphones and tagging friends on social media, to more serious applications
such as surveillance, law enforcement, and border control.
While the technology brings substantial convenience and security
enhancements, it has also sparked a debate around privacy, consent, and civil
liberties.
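The matching step described above — mapping a face to a feature vector and comparing it against a database — can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the four-dimensional vectors, the names, and the 0.8 threshold are all invented; real systems use embeddings with hundreds of dimensions produced by a trained neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_match(probe, database, threshold=0.8):
    """Return the best-matching identity above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional embeddings standing in for real face templates.
database = {
    "alice": [0.9, 0.1, 0.3, 0.2],
    "bob":   [0.1, 0.8, 0.2, 0.7],
}
probe = [0.88, 0.12, 0.28, 0.22]
print(find_match(probe, database))  # alice
```

The threshold is where the privacy and accuracy trade-offs discussed below become concrete: lowering it finds more matches but also misidentifies more people.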

Ethical Implications of Facial Recognition Technology


Privacy Concerns
Privacy is at the forefront of the ethical issues related to facial recognition
technology. As the technology becomes more pervasive, concerns grow around
continuous surveillance and the potential for 'function creep,' where data
collected for one purpose is used for another.

With facial recognition systems, individuals can be identified and tracked
without their knowledge or consent, potentially leading to an invasion of
privacy. Moreover, the accumulation of facial data in databases presents an
attractive target for cybercriminals, raising questions about data security.

Consent and Transparency


Consent and transparency form another ethical issue. There's an ongoing
debate about whether it's ethically sound to capture and use someone's
biometric data without explicit consent. This concern becomes more pressing
in public spaces where surveillance systems are increasingly equipped with
facial recognition technology.

Furthermore, the technology is often used without transparent policies about its
usage, leaving individuals unaware of when, why, and how their facial data is
being used.

Bias and Inaccuracy


Facial recognition systems have been criticized for their varying performance
across different demographics. Several studies have found these systems to
show bias, with higher rates of inaccuracies for women, the elderly, and people
of color. Such bias can lead to unjust outcomes, particularly in law
enforcement contexts.
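The demographic audits these studies perform amount to computing error rates separately per group. The sketch below illustrates the idea; the group labels, predictions, and the resulting numbers are made up for illustration and do not come from any real benchmark.

```python
from collections import defaultdict

def per_group_error_rates(results):
    """results: list of (group, predicted_match, actual_match) tuples.
    Returns the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative (made-up) audit data: the system errs more often for group B.
results = [
    ("A", True, True), ("A", False, False), ("A", True, True), ("A", True, True),
    ("B", False, True), ("B", True, False), ("B", True, True), ("B", False, False),
]
print(per_group_error_rates(results))  # {'A': 0.0, 'B': 0.5}
```

A gap like the one in this toy output is exactly what audits flag: an aggregate accuracy figure can look acceptable while one group bears most of the errors.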

Regulatory Implications and the Path Forward


With these concerns in mind, it's clear there's a need for stringent regulations
governing the use of facial recognition technology.

Some cities and countries have already started implementing laws to control its
use. San Francisco, for instance, has banned the use of facial recognition by
city agencies, and the European Union has considered a temporary ban on the
technology in public spaces.

Regulation should balance the benefits of facial recognition technology—
convenience, efficiency, and enhanced security—against its potential harms.
Policymakers must ensure regulations are robust, future-proof, and centered
on the principles of transparency, accountability, and public engagement.

2. THE ETHICS OF ARTIFICIAL
INTELLIGENCE?
A review of 84 ethics guidelines for AI identified 11 clusters of principles:
transparency, justice and fairness, non-maleficence, responsibility, privacy,
beneficence, freedom and autonomy, trust, sustainability, dignity, and
solidarity.[26]
Luciano Floridi and Josh Cowls created an ethical framework of AI principles set
by four principles of bioethics (beneficence, non-maleficence, autonomy and justice)
and an additional AI enabling principle – explicability.
Transparency, accountability, and open source
Bill Hibbard argues that because AI will have such a profound effect on humanity,
AI developers are representatives of future humanity and thus have an ethical
obligation to be transparent in their efforts.[28] Ben Goertzel and David Hart
created OpenCog as an open source framework for AI development.[29] OpenAI is a
non-profit AI research company created by Elon Musk, Sam Altman and others to
develop open-source AI beneficial to humanity. There are numerous other open-
source AI developments.
Unfortunately, making code open source does not make it comprehensible,
which by many definitions means that the AI code is not transparent. The IEEE
Standards Association has published a technical standard on Transparency of
Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales
of transparency for different stakeholders. Further, there is concern that releasing
the full capacity of contemporary AI to some organizations may be a public bad,
that is, do more damage than good. For example, Microsoft has expressed
concern about allowing universal access to its face recognition software, even for
those who can pay for it. Microsoft posted an extraordinary blog on this topic,
asking for government regulation to help determine the right thing to do.

3. THE IMPACT OF SOCIAL MEDIA
ON MENTAL HEALTH?
Adults frequently blame the media for the problems that younger
generations face, conceptually bundling different behaviors and
patterns of use under a single term when it comes to using media
to increase acceptance or a feeling of community. The effects of
social media on mental health are complex, as different goals are
served by different behaviors and different outcomes are produced
by distinct patterns of use. The numerous ways that people use
digital technology are often disregarded by policymakers and the
general public, who treat them as "generic activities" with no
specific impact. Given this, it is crucial to acknowledge the
complex nature of the effects that digital technology has on
adolescents' mental health. This empirical uncertainty is made
worse by the scarcity of documented metrics of how technology is
used. Self-reports are the most commonly used method for measuring
technology use, but they are prone to inaccuracy because they rest
on people's own perceptions of their behavior, and those perceptions
can be mistaken. At best, there is only a weak correlation between
self-reported smartphone usage patterns and objectively verified
levels.
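That weak-correlation claim can be made concrete by computing a Pearson correlation between self-reported and logged usage for the same people. The figures below are entirely hypothetical — invented to show how loosely the two measures can track each other, not drawn from any study.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: self-reported vs. logged daily screen time (hours)
# for six people. Self-reports vary widely; the logs barely move with them.
self_reported = [2.0, 3.5, 1.0, 5.0, 4.0, 2.5]
logged        = [3.1, 3.0, 2.8, 3.6, 2.9, 3.4]
print(round(pearson(self_reported, logged), 2))
```

A coefficient near 0.5, as in this toy sample, means self-reports explain only a fraction of the variance in actual behavior — which is why studies built solely on self-reports are hard to interpret.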

When all the different kinds of technological use are lumped together
into a single behavioral category, precision is lost twice over: in
the measurement of that category and in the category itself. To obtain
precision, we need to investigate the repercussions of a wide variety
of applications, ideally guided by the findings of scientific research.
Those findings have frequently been difficult to interpret, with many
of them suggesting that social media use may have a small but
statistically significant negative impact on mental health.
There is a growing corpus of research that is attempting to provide
a more in-depth understanding of the elements that influence the
development of mental health, social interaction, and emotional
growth in adolescents.

4. THE FUTURE OF PRIVACY IN A
DIGITAL WORLD?
The terms of citizenship and social life are rapidly changing in the digital age. No
issue highlights this any better than privacy, always a fluid and context-situated
concept and more so now as the boundary between being private and being
public is shifting. “We have seen the emergence of publicy as the default
modality, with privacy declining,” wrote Stowe Boyd, the lead researcher for
GigaOm Research in his response in this study. “In order to ‘exist’ online, you
have to publish things to be shared, and that has to be done in open, public
spaces.” If not, people have a lesser chance to enrich friendships, find or grow
communities, learn new things, and act as economic agents online.

Moreover, personal data are the raw material of the knowledge economy.
As Leah Lievrouw, a professor of information studies at the University of
California-Los Angeles, noted in her response, “The capture of such data lies at
the heart of the business models of the most successful technology firms (and
increasingly, in traditional industries like retail, health care, entertainment and
media, finance, and insurance) and government assumptions about citizens’
relationship to the state.”

This report is a look into the future of privacy in light of the technological
change, ever-growing monetization of digital encounters, and shifting
relationship of citizens and their governments that is likely to extend through the
next decade. “We are at a crossroads,” noted Vytautas Butrimas, the chief
adviser to a major government’s ministry. He added a quip from a colleague who
has watched the rise of surveillance in all forms, who proclaimed, “George Orwell
may have been an optimist,” in imagining “Big Brother.”

This issue is at the center of global deliberations. The United Nations is working
on a resolution for the General Assembly calling upon states to respect—and
protect—a global right to privacy.

5. THE DANGERS OF DEEPFAKE
TECHNOLOGY?
In the future, deepfakes will bring cyberattack scenarios to a whole new
dimension. Since there are no mature technical defense mechanisms currently
available, organizations must be extremely cautious and recognize the potential
risks.

Deepfake videos first appeared on a large scale in 2017, with fake videos of
Hollywood stars like Scarlett Johansson, Emma Watson and Nicolas Cage
quickly spreading online. Even politicians were not spared: in one deepfake,
Angela Merkel suddenly takes on the features of Donald Trump during a
speech.

Deepfakes, a term derived from “deep learning” and “fake,” refer to
manipulated media content such as images, videos, or audio files. The
technologies behind them, such as artificial intelligence and machine-learning
algorithms, have developed rapidly in recent years, making it almost
impossible to distinguish between original and counterfeit content.

What’s concerning is that deepfakes are getting better, and it's getting harder
and harder to recognize them. It won’t be long before we see their impact on
businesses. The potential attack scenario ranges from taking on identities to
blackmailing companies.

The following three deepfake-based attack methods are likely:

 C-Level fraud: This is the most prominent method. Instead of
persuading an organization’s employee to transfer money with a
fake email, fraudsters now place a call in which the caller sounds
exactly like the CFO or CEO.

 Extorting companies or individuals: With deepfake
technology, faces and voices can be transferred to media files
that show people making fake statements. For example, a
video could be produced with a CEO announcing that a
company has lost all customer information or that the company
is about to go bankrupt. With the threat of sending the video to
press agencies or posting it on social networks, an attacker
could blackmail a company.
 Manipulation of authentication methods: Likewise, deepfake
technology can be used to circumvent camera-based authentication
mechanisms, such as legitimacy checks performed through tools
like Postident.
