Assignment
28-12-2023
1. THE ETHICAL IMPLICATIONS OF FACIAL RECOGNITION TECHNOLOGY?
Furthermore, the technology is often used without transparent policies about its
usage, leaving individuals unaware of when, why, and how their facial data is
being used.
Some cities and countries have already started implementing laws to control its
use. San Francisco, for instance, has banned the use of facial recognition by
city agencies, and the European Union has considered a temporary ban on the
technology in public spaces.
3. THE IMPACT OF SOCIAL MEDIA ON MENTAL HEALTH?
Adults frequently blame the media for the problems that younger
generations face, conceptually bundling different behaviors and
patterns of use under a single term, even when media are used to
increase acceptance or a feeling of community. The effects of
social media on mental health are complex: different behaviors
serve different goals, and distinct patterns of use produce
different outcomes. The many ways people use digital technology
are often disregarded by policymakers and the general public, who
treat them as "generic activities" with no specific impact. Given
this, it is crucial to acknowledge the complex nature of the
effects that digital technology has on adolescents' mental health.
This empirical uncertainty is made worse by the scarcity of
documented metrics of technology use. Self-reports are the most
commonly used method for measuring technology use, but they are
prone to inaccuracy, because they rest on people's own perceptions
of their behavior, and those perceptions can be mistaken. At best,
there is only a weak correlation between self-reported smartphone
usage patterns and objectively verified levels of use.
Addressing this requires a more in-depth understanding of the
elements that influence the development of mental health, social
interaction, and emotional growth in adolescents.
Moreover, personal data are the raw material of the knowledge economy.
As Leah Lievrouw, a professor of information studies at the University of
California-Los Angeles, noted in her response, “The capture of such data lies at
the heart of the business models of the most successful technology firms (and
increasingly, in traditional industries like retail, health care, entertainment and
media, finance, and insurance) and government assumptions about citizens’
relationship to the state.”
This report looks into the future of privacy in light of technological
change, the ever-growing monetization of digital encounters, and the
shifting relationship between citizens and their governments that is likely to extend through the
next decade. “We are at a crossroads,” noted Vytautas Butrimas, the chief
adviser to a major government’s ministry. He added a quip from a colleague who
has watched the rise of surveillance in all forms, who proclaimed, “George Orwell
may have been an optimist,” in imagining “Big Brother.”
This issue is at the center of global deliberations. The United Nations is working
on a resolution for the General Assembly calling upon states to respect—and
protect—a global right to privacy.
Deepfake videos first appeared on a large scale in 2017, with fake videos of
Hollywood stars like Scarlett Johansson, Emma Watson and Nicolas Cage
spreading online quickly. Even politicians were not spared: in one
deepfake, Angela Merkel suddenly took on the features of Donald Trump
during a speech.
Deepfakes, a term derived from deep learning and fake, refer to manipulated
media content such as images, videos, or audio files. The technologies behind
them, such as artificial intelligence and machine-learning algorithms, have
developed rapidly in recent years, making it almost impossible to distinguish
between original and counterfeit content.
What’s concerning is that deepfakes keep improving, making them harder
and harder to recognize. It won’t be long before we see their impact on
businesses. Potential attack scenarios range from identity takeover to
blackmailing companies.
Extorting companies or individuals: With deepfake
technology, faces and voices can be transferred to media files
that show people making fake statements. For example, a
video could be produced with a CEO announcing that a
company has lost all customer information or that the company
is about to go bankrupt. With the threat of sending the video to
press agencies or posting it on social networks, an attacker
could blackmail a company.
Manipulation of authentication methods: Likewise,
deepfake technology can be used to circumvent camera-based
authentication mechanisms, such as legitimacy checks through
tools such as Postident.