Addressing Deepfake-Enabled Attacks
Using Security Controls
GIAC (GSEC) Gold and RES5500
Author: Jarrod Lynn, [email protected]
Advisor: Russell Eubanks
Abstract
Much of what has been written and publicly discussed about deepfakes has concerned the technology itself and its potential for misuse. Defenders, however, are at a distinct disadvantage. An attacker can choose the means of communication, media, and audience, and whether the attack is prerecorded or conducted in real time; deepfake technology is advancing quickly, and the software to create deepfakes is ubiquitous; and as of printing, there has not yet been publicly reported even a simple implementation of a counter-deepfake technology that reliably addresses a single type of deepfake (e.g., prerecorded, real time, audio, video) communicated between devices outside of a controlled setting. These factors combine to prevent meaningful technical countermeasures at present. This paper provides an understanding of the issue and offers a methodology for handling the problem.
1. Introduction
Altered images, disinformation, propaganda, social engineering, theft, coercion, prank phone calls—these are a few of the many phenomena, sometimes disparate and sometimes related, that fit within the broad umbrella of deepfake-enabled attacks. An example of applied artificial intelligence (AI), deepfakes have become well known in the last few years, both for their entertainment value and their malicious uses. Two aspects of deepfakes have been the primary focus of most discussion: first, the most prominent, mesmerizing, and alarming feature of the technology—its dangerous potential to deceive by serving as increasingly realistic digital puppets of actual human beings; and second, the possible malevolent uses of this technology to pursue all sorts of nefarious ends, such as theft, coercion, and disinformation. The dangers and the potential for problems around this technology have brought this topic to the forefront, with urgent calls for action in the form of technical countermeasures and legislation.
Despite this attention, thus far, there has been relatively little focus on the
systemic nature of the growing threat. Further, there has not yet been a widescale
recognition of the significant problem that exists in addition to the difficulties around
technical countermeasures.
In short, there is at present no comprehensive way to protect an organization from deepfake-enabled attacks. This stems from several facts: 1. Deepfake technology is
evolving rapidly. 2. The means by which an attacker transmits a deepfake is flexible and
therefore unpredictable. An attacker can transmit a deepfake via any effective channel of
communication to which the attacker has access. 3. Deepfakes can be created in various
media: audio, video, text, and still image, real time and prerecorded. These facts leave
defenders at a distinct disadvantage.
This paper will provide the reader with a contextual background on deepfakes,
including an introduction to the underlying technology, a discussion of technical
countermeasures, an overview of relevant legislation, and examples of use cases.
It is important to reframe the discussion around deepfake-enabled attacks at the outset. While the technical problem is undoubtedly real, a technical solution cannot solve it alone. To illustrate, even if a technical solution provided accurate detection of deepfakes on a single device or set of devices, attackers could simply route their attacks around those devices and that solution. The dispersed nature of the threat leaves defenders with the need to develop a strategy.
This paper's original contribution is a methodology composed of a set of qualitative measures that organizations can take to confront the threat posed by deepfake-enabled attacks. The methodology begins with an original questionnaire that guides organizations in assessing their security posture in light of the threat. The methodology then walks through additional steps and reaches the final tool in the process, which is a framework adapted from the NIST Cybersecurity Framework (Appendix 2).

Organizations can use the tools in the methodology for assessing their security posture and responding to incidents, as well as for planning and designing scenario-based drills.
After introducing the methodology, the paper explores a series of case studies that demonstrate key issues around deepfake-enabled attacks. The case studies illustrate how organizations might apply the methodology as they design responses to this serious threat. This methodology is meant to work with an organization's existing security program so that practitioners can work on realistic plans that can help prepare their organizations for this approaching challenge.
2. General Background on Deepfakes
2.1 Deepfakes Defined
The term "deepfakes" originated on Reddit in 2017 in reference to videos in which celebrity faces were realistically superimposed on the bodies of actors in pornographic films using artificial intelligence (Beridze & Butcher, 2019). It has since come to refer more broadly to other forms of media created with artificial intelligence. A definition that allows for this more common usage is "Believable media generated by a deep neural network" (Mirsky & Lee, 2020). Similarly, the National Security Commission on Artificial Intelligence describes AI-generated media that is difficult to "distinguish from reality" (NSCAI, 2021). Others have moved away from using the term deepfake to describe this phenomenon; Meredith Somers quotes Henry Ajder among those who prefer the broader term "synthetic media." Whatever the term, deepfakes are not limited to videos but can include any of the following: audio recordings, still images, text, video only, and any combination of real-time audio-video. The key to whether a given piece of media falls within this category is that it was generated or manipulated with artificial intelligence.
Several aspects of deepfakes bear emphasis. First, deepfakes undermine an everyday form of identity verification: the primary means used to authenticate the identity of a known colleague's likeness is simple familiarity. With deepfakes, it is precisely this natural phenomenon that is undermined. At present, there is no effective means of detecting and preventing this in practical terms. That is, an effective deepfake will circumvent this form of "authentication." In many instances, that is precisely the purpose of its use.
Second, deepfakes represent the first wave of a new trend. It is tempting to view deepfakes as a discrete problem. However, this is an unduly limited view. While they are indeed a challenge to be dealt with in the short term, they also offer an opportunity to prepare for the larger trend they herald. There is a current trend towards a new version of the internet that involves what is being called "web3" and the so-called metaverse, which includes virtualized reality and augmented reality that overlays digital "elements" onto the user's field of vision (Johnson, 2020). While fictional, Keiichi Matsuda's short film Hyper-Reality depicts this "exciting but dangerous trajectory" in which "physical and virtual realities are becoming increasingly intertwined": nearly everything the protagonist sees as he goes about his life in the film is augmented, primarily by advertisements (Vincent, 2016).
Third, one use of deepfakes falls within a very broad sphere that includes
disinformation, misinformation, and other related concepts. Viewed from this angle,
deepfakes can be discussed in the context of digital forgeries, manipulated images, propaganda, and hybrid warfare, among other phenomena. Some examples of how deepfakes are being used uniquely within the overall themes of disinformation and misinformation will be discussed below. The most obvious difference between deepfakes and their predecessors, doctored analogue images and digital forgeries, is that deepfakes are created through the use of artificial intelligence.
Finally, the criminal potential of deepfakes is very clear. In recent years, numerous leading thinkers and law enforcement authorities have released strong statements related to deepfakes (Federal Bureau of Investigation, 2021). In 2020, the Dawes Centre for Future Crime at University College London rated deepfakes as the "most dangerous crime of the future" (Smith, 2020). Both the FBI and Europol have issued warnings related to deepfakes, with Europol calling for increased investment (Stolton, 2020). Europol recently published a report predicting an expanded role for deepfakes in organized crime (Coker, 2022). Among other trends, the report predicts that deepfakes will be used in document fraud and that a new market will emerge for deepfakes as a service (Europol, 2022). The next section of the paper will look at the technology used to create deepfakes.
2.2 The Technology Behind Deepfakes

Most deepfakes are produced using a class of machine learning architectures
called generative adversarial networks (GANs) (Mirsky & Lee, 2020). GANs work by
“pitting neural networks against one another” to “learn” (Giles, 2018). GANs produce
various types of deepfakes using large stores of data (images, videos, etc.) of the victim
as well as a second set of data with which to compare and generate images (Vincent,
2019).
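To make this adversarial dynamic concrete, the following minimal sketch shows the core training loop of a simple GAN. It is an illustration only, assuming PyTorch is available; the layer sizes, data shapes, and learning rates are placeholders rather than a recipe for a working deepfake generator.

    import torch
    import torch.nn as nn

    # Toy generator and discriminator; real deepfake models use convolutional
    # networks and far more data, but the adversarial loop is the same in spirit.
    generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    def training_step(real_images):
        # real_images: a (batch, 784) tensor of flattened images of the "victim"
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Train the discriminator to separate real samples from generated ones.
        noise = torch.randn(batch, 100)
        fake_images = generator(noise).detach()
        d_loss = loss_fn(discriminator(real_images), real_labels) + \
                 loss_fn(discriminator(fake_images), fake_labels)
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2) Train the generator to fool the discriminator (the two networks are
        #    "pitted against one another," each improving the other).
        noise = torch.randn(batch, 100)
        g_loss = loss_fn(discriminator(generator(noise)), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()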
Aside from the need for large data sets, another notable current limitation around
deepfake technology is the continued presence of visible artifacts in photos and videos
(FBI, 2021). This can be obvious to the human eye—not just to programs designed to
detect subtle differences. The “Detect Fakes” project run by the Massachusetts Institute
of Technology Media Lab and Applied Face Cognition Lab presents this experientially.
The project is an online research study in which users are presented with 32 samples of
video, audio, and text and asked whether they believe the sample is real or fake (MIT
Media Lab, 2022).
At present, basic limitations around realism and the need for large stores of data can make some of the most damaging implementations of deepfakes more challenging for adversaries. For instance, given the current technical hurdles, it would be difficult for a malicious actor to create a real-time deepfake that gives the impression of coming from an organization's physical location and from within a company's actual network.
However, the technology is evolving quickly. Videos made using GANs are becoming more realistic as problems around digital artifacts are resolved. At the same time, consumer applications that put deepfake creation within reach of anyone with a smartphone continue to appear (Fowler, 2021). With the number of apps proliferating, one example of a problematic program is DeepFaceLive, which is among the programs that allow users to create real-time deepfakes (Anderson, 2021). This program is based on popular deepfake creation software and has strong community support. Many other programs are available to make prerecorded and live deepfakes. Likewise, there are programs to make other sorts of deepfakes. For example, the program "This Person Does Not Exist" allows users to create realistic still images of faces of people who do not exist.
2.3 Countermeasures
Researchers have proposed a wide range of technical countermeasures for detecting or preventing deepfakes. Mirsky and Lee provide an excellent overview detailing many of the methods in existence as of
January 2020 (Mirsky & Lee, 2020). Other examples abound. One promising recent
study shows a very high rate of detection, focusing on facial expressions (Ober, 2022).
ns
However, Ober quotes one of the authors of this study, Amit Roy-Chowdhury, who
notes, “What makes the deepfake research area more challenging is the competition
between the creation and detection and prevention of deepfakes which will become
increasingly fierce in the future.”
While all of these methods are focused on the detection or prevention of
deepfakes, most are not tied to concrete practical applications. Further, none of these
methods has suggested a solution that would work across platforms with real-time
deepfakes. In other words, if an attacker were to attack a platform that were effectively protected by such a countermeasure, the attacker could simply shift to another platform on which the victim does not have the countermeasure deployed. Further, an attacker always has the option of directing a deepfake towards third parties (e.g., the public) using channels the victim does not control. Authentication-based approaches, for their part, generally require software on both sides of the interaction. Again, the attacker can circumvent the countermeasure by choosing a channel where it is not deployed.
Having looked at the technical aspects, the next step is to consider examples of
use cases.
2.4 Use Cases
While the deepfake problem is growing, there are thus far very few publicly documented real-world examples of malicious deepfake-enabled (audio-video) attacks being carried out against organizations. This section will address a handful of publicized examples as well as proofs of concept. Further, this portion of the paper addresses categories of phenomena for which there are many examples: deepfake-enabled static photo attacks, attacks against individuals, and proofs of concept of these forms of deepfakes that could be used to conduct successful deepfake-enabled attacks. How and why deepfakes have been used in given instances are germane to the ultimate goals of prevention and mitigation. This includes the types of implementations—prerecorded, real time, audio, and video. It also includes the reasons for which they have been used, most notoriously the creation of nonconsensual pornography that depicts victims engaged in compromising activity (Beridze & Butcher, 2019). As noted, programs for the creation of
deepfakes are readily available. The continued usage of this technology for illicit
purposes has prompted legislation (Clark, 2022). In its most benign form, this technology is used to create amusing videos in which people's faces are swapped with celebrities' (Hirwani, 2021). This technology is of particular interest to organizations because it can be used to coerce employees and create insider threats. It can also be used to create convincing impersonations of an organization's own personnel for purposes of fraud.
Thus far, the most successful and damaging publicly disclosed deepfake-enabled
attacks against organizations have involved the use of voice cloning. In one attack,
thieves tricked an employee into transferring $243,000 in corporate funds by pretending
to be the company’s CEO (Stupp, 2019). In a second case, thieves stood in for a company
director and convinced a bank manager to transfer $35 million in company funds
(Brewster, 2021).
While there have been few publicized examples, the recently reported case of French documentary filmmaker Yzabel Dzisky highlights the viability of real-time video deepfakes as a credible threat (Kasapoglu, 2022). Dzisky was reportedly the victim of a romance scam in which the attacker convincingly used real-time deepfakes to disguise his true identity. Although many aspects of the attacker's story and modus operandi fit common patterns for this type of romance scam (an inability to meet in person, unexplained trips, requests to wire cash), the fraudster in this case was able to use real-time deepfake technology to overcome Dzisky's doubts with great effect (Dellinger, 2019). The technology allowed him to play a psychological game in which he told half-truths and made half-confessions about his identity, leading Dzisky to believe that he had been honest at first, when in fact he remained deceitful. The fraudster initially told Dzisky that he was a doctor in Los Angeles. When she began to see through his story, he "admitted" that he was actually Turkish and located in Istanbul. This appears to have been a calculated part of his plan; the name he chose was the same as Dzisky's ex-husband's. In reality, the fraudster was a young man living in Nigeria. He used the deepfake to concoct two false identities in conjunction with typical social engineering, appeals to emotion, and romance scam tactics. This example shows that real-time video deepfakes are already a viable tool for sustained, interactive deception.
Moving from a real-world attack to two categories of proof of concept, there are
the ubiquitous videos of celebrities being made to say and do ridiculous things. One of
the most famous (because it is meant to instruct on this precise point) is a deepfake of
former President Obama made by comedian Jordan Peele (Romana, 2018). In addition,
there are proofs of concept of real-time deepfakes occurring during live teleconferences.
The most famous of these is a “Zoom-bomb” by a real-time deepfake imposter posing as
Elon Musk (Greene, 2020). Together, these examples show the viability of the future use
of deepfakes for malicious purposes. Real-time deepfakes are extremely dangerous for
victims and should give organizations pause. Given the lack of control over the medium
of delivery, malicious actors can wreak havoc using a real-time deepfake.
A general point regarding the technology bears repeating. As of now, to be
successful, deepfake programs require stores of images for “training” purposes. That is,
in order to work, a GAN needs a database of images of a person who will be the “victim”
of the deepfake. This is a limitation on the implementation of the deepfake. While there
is a possibility that the technology will evolve so that the data required for a successful
attack will decrease, the present need highlights one aspect of the market for deepfake as
a service. Fraudsters could use ready-made deepfake packages to target victims.
The examples above show not only that deepfakes can and have been used in
malicious circumstances, but that individuals are vulnerable. Organizations can fall prey
in that their employees can be shown individually or collectively to be taking actions or
making statements that run counter to organizational interest. Individuals can be
vulnerable in that deepfakes can depict them or those close to them engaged in
compromising activity.
One existing scam that deepfakes could make far more convincing is the so-called virtual kidnapping, in which criminals contact a victim and claim to have kidnapped the victim's loved one (Kushner, 2022). They use
recordings or actors to play the role of the loved one. The entire kidnapping is a hoax that
relies on social engineering (FBI, 2017). The criminal’s goal is typically to obtain money
from the victim. The bad actor usually insists that the victim not contact anyone for the
duration of the interaction. The criminal relies on the victim’s attention during this
interaction. If the victim contacts his or her loved one, the con ends. According to reports,
there has been some degree of randomness in targeting, in that criminals contact large
numbers of people looking for victims who do not hang up. Using deepfakes, criminals
could craft highly targeted virtual kidnappings with devastating effectiveness. Young
people are particularly vulnerable because they post large amounts of online content that can be used to train GANs.
If an employee is coerced into acting for a bad actor and later discovers that the kidnapping was only virtual, it raises the question of whether the organization's culture
will encourage reporting, especially if the act done by the employee on behalf of the bad
actor was subtle enough to go undetected. A smart adversary will compel the victim to
engage in small, undetectable acts (relatively low stakes) against the organization by
coercing the victim with high-stakes negative consequences—the release of unflattering
fake videos, a fake kidnapping, or other acts. A smart adversary who knows that there is
no means of reporting such an incident without consequence can also use that against the
victim. An employee compromised in such a way may be inclined not to report if there is
a guaranteed high cost compared to what he or she perceives to be a relatively low-stakes
act. However, an adversary can compromise multiple employees to devastating effect.
Deepfake-enabled attacks overlap with insider threat on numerous levels. The next topic
is the use of deepfakes in disinformation and misinformation. These dangers are illustrated by a 2019 video of Speaker of the House
Nancy Pelosi that was significantly altered to make it appear as though she was
intoxicated during a press conference. This video was widely circulated on social media.
It remained online and generated significant attention even after being disproven
(Denham, 2020). This video is at the edge of the category because it is not a deepfake per se, but rather a conventionally manipulated video that is often discussed in this context (Lima, 2021). While the video was manipulated, not a deepfake, a
convincing deepfake could presumably do the same.
Interestingly, there have been other instances in which the mere possibility that
deepfakes might be used has become a factor. In April 2021, two Russian men known as
pranksters within Russia conducted meetings with multiple European leaders, during
which one of the men pretended to be Leonid Volkov, former chief of staff to imprisoned
Russian opposition politician Alexei Navalny (Vincent, 2021). Part of the disinformation
appears to have been the notion that the call was a deepfake. It later became clear that the
video was not a deepfake, but that one of the men was an actor in disguise. This fake
deepfake caused quite a stir. Subsequent reporting that there had been a successful
deepfake attack against senior EU leaders led to embarrassment and confusion.
As this paper was written during the period of the leadup to Russia’s invasion of
Ukraine, there were reports and speculation about the potential use of deepfakes in
connection with a possible “false flag” operation. As reports came out from the border
area between Russia and Ukraine, the ever-present potential for a “false flag” and
deepfake caused a great deal of skepticism and doubt around the information being
presented (Haltiwanger, 2022). This demonstrates both the powerful potential of this
technology (use as part of a false flag) as well as the power created by the possibility of
its use (the need to account for the prospect of its use by an adversary). After the invasion
began, a deepfake video circulated that purported to show Ukrainian President Volodymyr Zelenskyy telling his forces to surrender. The
video itself was not very well done. Social media sites such as Facebook/Meta quickly
removed it from their platforms (Saxena, 2022). It is possible to imagine that if the
technology had been better, if there had been less reporting around the likelihood of such
an incident, or if social media companies had been less proactive about removing the
content, the video might have been more influential. That being said, in and of itself, the
video may have served a propaganda purpose even in its limited release by contributing
to the disinformation environment. Any deepfake purporting to show someone acting
against their own interests can have an effect.
It is worth noting that there are at least two examples of the intentional use by
candidates for political office of deepfakes of themselves during political campaigns.
While campaigning, Indian politician Manoj Tiwari was the willing subject of a deepfake
produced by his own party, the end product of which was that his words were translated
into another language to communicate with voters from that language group (Christopher, 2020). During his 2022 campaign, President Yoon Suk-yeol of South Korea also became
the willing subject of a deepfake. In an effort to appeal to young voters, his campaign
used the video to answer questions from the public online (Jin-kyu, 2022).
In a seminal paper around deepfakes, legal scholars Danielle Citron and Robert
Chesney coined the term “liar’s dividend” to describe the phenomenon in which
wrongdoers can point to the existence of deepfakes to deny having engaged in activity
they were clearly caught engaging in (Citron & Chesney, 2019). That is, the fact that
there are deepfakes gives malefactors plausible deniability. It seems that there is also
something of a reverse liar’s dividend at play in the atmosphere of doubt and mistrust
around the possibility that one might use a deepfake, the anticipation thereof, or that
anyone might have used one. This murky environment can be a strategic weapon of sorts
that causes a sense of questioning around all media, and that in and of itself can be an advantage to an adversary. An inadvertent benefit of this atmosphere to bad actors may be that honest people are more likely to pay ransom or cede to demands than to allow compromising material to be released. It is also possible that, as deepfakes become more commonplace and denials of genuine activity by guilty parties also become common,
the public will grow weary of the “liar’s dividend,” and there will be a backlash in which
denials will be treated as suspect. This will benefit bad actors who may be able to count
on people who simply do not want to risk having to defend themselves against deepfakes
that are eventually presumed real.
The examples above of deepfakes being used by bad actors demonstrate some of
the dangers of deepfake-enabled attacks. However, deepfake-enabled attacks are a
compound problem, and this should be explored in more detail.
2.5 The Compound Nature of the Threat
Most literature concerning the dangers of deepfakes is limited to the technology
around the deepfake itself—e.g., producing, preventing, and detecting GAN-generated media. Much of
the rest of public discussion focuses on the effects of deepfakes, such as aspects of crime
or disinformation.
The first level of this analysis stems from the fact that the technology behind
deepfakes is difficult to counter. As discussed, there have been and are ongoing serious
efforts to develop technical countermeasures. Broadly speaking, some of the categories
include forensic detection after the fact, authentication at the time a message is sent,
sending messages through closed systems, and looking for network anomalies.
The second level of this analysis is that deepfakes can be live or prerecorded.
Third, deepfakes can be dispersed. They can be transmitted via any channel of
communication that the bad actor chooses to whomever the bad actor targets. This means
that he or she can interact directly via a live deepfake with the intended victim. Or,
he or she can address a group live. He or she can prepare a prerecorded deepfake and let
it be played by third parties on a social media site. Or, he or she can cause a prerecorded
deepfake to be played live at a certain time for a specific person, a group, or a widescale
audience. There is no way to predict the means or mode of communication.
Fourth, the technology continues to improve: the artifacts that make deepfakes look unrealistic today are likely to become less common.
Taken together, these facts mean that current technical countermeasures are not
only insufficient to counter the simplest implementation of a deepfake, which is a known
point-to-point message, but also have no way at all to confront a more sophisticated
implementation, which could include switching from a prerecorded to real-time fake,
changing from a one-on-one conversation to addressing a group, or switching modes of
communication. There is no technical countermeasure that can handle these possibilities
at present.
Even if a countermeasure could reliably detect a prerecorded deepfake, it is unlikely, given the current state of technology, that the same countermeasure could detect a
deepfake during a live call. However, even if such a countermeasure existed that worked
against a live call, it would not work against all deepfake-enabled attacks directed against
a given victim because detection software cannot be everywhere at all times at present.
This is a major vulnerability.
Next, there is also a significant likelihood of combining a deepfake-enabled attack
with a cyberattack. A bad actor could attempt to use a deepfake to gain access to a system
or could launch a cyberattack concurrently with or as part of a deepfake-enabled attack—
for instance, by using a deepfake as a distraction or by initiating a denial of service,
among other scenarios.
The next level of the compound challenge is the interrelation between deepfake-
enabled attacks and other major security concerns. Some of the areas with a large overlap
or potential for overlap are insider threat, work from home/remote work, and physical
security. These areas of overlap create complications, such as the potential for employees
who are working remotely to be put in harm’s way, manipulated, and coerced within the
relative privacy and isolation of their own homes.
Note also that deepfakes can potentially be deployed in conjunction with a host of
other attacks. For example, this could include social engineering and blackmail, such as
combining a deepfake attack with a demand for payment to prevent release of the
fabricated compromising material.
When it comes to technical countermeasures, the challenge for defenders will be to find a means that works across platforms to detect and
prevent deepfakes—at the sending and receiving ends, and for the viewing public. This is
a tall order. The stakes are high.
This profound challenge demands a solution. The methodology in section 3 offers
some practical suggestions for approaching the problem. Prior to turning to discussion of
the methodology, the final background area to discuss is a brief review of law around
deepfakes.
2.6 Law and Policy Around Deepfakes
Governments around the world have begun to grapple with how to regulate artificial intelligence, including deepfakes. Broadly speaking, this involves wider policy issues such as those affecting national security, intellectual property, and privacy rights, as well as narrower questions of civil and criminal liability.
In the United States, there has been a spate of legislation dealing with widescale
issues around AI and deepfakes. Among other examples, this has included the
development of reporting mechanisms and task forces through the Deepfake Report Act
of 2019 (S. 2065), the Deepfake Task Force Act of 2021 (S.2559), and creation of the
National Security Commission on Artificial Intelligence (created through the National
Defense Authorization Act for Fiscal Year 2019).
Much of the harm caused by deepfakes can be addressed by existing civil and
criminal law. For instance, victims of defamatory material can sue. Likewise, there are
criminal laws at both the state and federal levels dealing with various types of computer
crimes, theft, and fraud. However, victims have not been able to seek justice in all cases,
such as in situations involving deepfake-enabled revenge porn. Due to these
shortcomings, a number of US states have enacted statutes that are purpose-built for
deepfakes (Clark, 2022). One such example is Florida’s pending Senate Bill 1798
(Florida Senate, 2022). The Florida bill stems in part from the personal experiences of a
state senator (Coble, 2022).
International approaches to deepfakes have varied. The European Union’s (EU)
General Data Protection Regulation (GDPR) does not mention deepfakes. However, it
does indirectly afford a degree of civil recourse to victims of malicious deepfakes
through some of the rights it confers (Colak, 2021). The EU’s pending Artificial
Intelligence (AI) Act, proposed in 2021, explicitly mentions deepfakes. It takes a “risk-
based approach” to AI that would require notifying users when they are interacting with
manipulated media, including deepfakes (Europol, 2022). Neither instrument criminalizes
deepfakes or would have much of an apparent deterrent effect against deepfake-enabled
attacks.
One more international approach that bears mention is China’s planned deepfake
law, which nominally bans deepfakes made without the consent of those depicted and
requires removal of some deepfake apps from online app stores (Qureshi, 2022). It remains to be seen how effectively such restrictions can be enforced.
The legal system is working to respond to this growing threat. However, there is
not yet a legal response that sufficiently takes into account the dispersed nature of the
threat posed by deepfake-enabled attacks. Legal defenders, like technical defenders, need
to appreciate and deal with the compound nature of the problem that is coming.
Malicious actors can attack via various modes of communication and devices, speaking
to the audience of their choosing, either live or prerecorded. This will require an
adaptive legal and policy response.
3. Methodology
Technical countermeasures to deepfakes may improve over time. However, until there is a technical solution that directly addresses the problem,
organizations need to develop a plan to confront this emerging threat.
The dynamism and potential destructiveness of deepfake-enabled attacks demand
that security practitioners take a proactive approach in assessing and planning with the
specific nature of the threat in mind. In this section, this paper offers suggestions for
confronting the threat posed by deepfake-enabled attacks.
The methodology is composed of several steps. The first is a wholly original
checklist (Appendix 1) that assists organizational leaders and security practitioners in
assessing their organization’s posture in terms of the threat. The second step is a ranking
of threats, to include those identified in the first step, those from known deepfake-enabled
attacks, and those derived from any other relevant source. The next element of the
methodology is a framework adapted from the NIST Cybersecurity Framework (Appendix 2), which provides a common,
systematic, and organized way of viewing and discussing security, both within
organizations and between organizations.
When organizations work through this
process, they can approach the tools sequentially, beginning their assessment with the
checklist in Appendix 1. Their assessment can continue through to the framework. The
framework itself includes a number of recommended steps that cover assessment and
planning. Planning is the next step in the overall methodology. Organizations that are
being proactive can strengthen their security posture based on specific deepfake-related
threats identified during the assessment process.
The methodology can be used as a training aid. The questions in Appendix 1 and
the method for brainstorming to be discussed in Section 3.3 lend themselves well to
tabletop exercises. They can also assist planners in coming up with training scenarios.
With this in mind, a benefit of the way Appendix 1 and Section 3.3 are set up is that
planners can design training scenario injects that emphasize particular points. For
example, if there is a concern about scenarios involving employee vulnerability in remote
work settings, it would be possible to emphasize this by weighing it more heavily in the
answers to the questions in Appendix 1 and Section 3.3.
Finally, the methodology can be used to respond to deepfake-enabled attacks.
Ideally, an organization has proactively planned by working through an assessment,
adjusted its posture in light of that assessment, trained with specific threats in mind, and
readjusted. However, regardless of whether this is the case, the methodology is written so that it can be used by an organization that is already facing a deepfake-enabled attack. The
obvious caveat here is that an organization dealing with a real-time attack (either an
attacker in real time or a prerecorded attack being played live) will not likely have a great
deal of time to work with the tools in the methodology, given the nature of the crisis.
Before an organization begins its analysis, leaders need to decide whether they
want to invest time, money, and energy on deepfakes as a threat. The primary question
for any organization is whether it will analyze its security in light of the threat posed by
deepfakes. If an organization is considering whether to pursue this analysis, it can review
the documented real-world cases and proofs of concept above as a threshold test. It is
possible that some organizations may be hesitant to view deepfakes and deepfake-
enabled attacks as a distinct threat, given their overlap with familiar attack types (e.g., social engineering, fraud, and
blackmail).
If an organization wishes to proceed with the analysis, the recommended first step
is a process of reviewing the threat and risk exposure using the checklist for analyzing
risk exposure in Appendix 1. This is a series of questions that break down various aspects
of deepfakes and deepfake-enabled attacks. This is clearly not an exhaustive list. Rather,
it is a qualitative tool, the purpose of which is to allow organizations to consider where
risk exposure might lie. The questions are written as though they are about an actual
attack. Written this way, the questions allow security personnel and other organizational
leaders to consider whether an attack with these characteristics is possible.
This initial analysis is used to brainstorm attack vectors and risk exposure.
Even at this early stage, security personnel might consider soliciting buy-in and input from other
relevant departments. This could include a variety of offices such as legal, compliance,
human resources, public relations, and privacy, among others. For example, given the
capacity of deepfakes to cause immediate reputational damage, public relations might be
able to spot issues related to these specific risks and threats that security and technical
personnel cannot see at the outset. This said, it is not necessary to expand the discussion
group at the initial brainstorming stage because there is an opportunity to do so within the
context of the framework, which follows. Security personnel may wish to keep the
discussion streamlined at this point in order to more quickly move the process forward
and expand it later. The answers to the checklist questions should reflect known threats as
well as any other relevant information about the organization’s circumstances. The
The
questionnaire is designed to prompt thought and discussion and elicit answers, not to
reach a specific and definitive objective truth. It may well turn out that an organization
believes it has exposure to deepfakes from multiple angles. For instance, an organization
may see that its employees are vulnerable as individuals and that the organization can
also be a target. Likewise, an organization may be the target of malicious actors who are
driven by multiple motivations: theft, vandalism, etc. An organization may also be able to
draw on relevant real-world experience with other crises.
The next step is to ask what are the adversary's most likely and most dangerous courses of action. This starts with
brainstorming on courses of action. During this process, participating team members
simply list whatever they believe to be feasible adversary courses of action. In order to
assist with the qualitative assessment of these courses of action, organizations can create
a basic chart with x and y axes, with one axis representing likelihood and the other
danger. By plotting these risks and threats to a matrix, organizations will have a visual
representation of priorities to confront when it comes to deepfakes.
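As an illustration of this step, the short sketch below plots a handful of hypothetical adversary courses of action on a likelihood/danger matrix. It assumes matplotlib is available, and the listed courses of action and their scores are placeholders that an organization would replace with the output of its own brainstorming.

    import matplotlib.pyplot as plt

    # (course of action, likelihood score 1-10, danger score 1-10) -- illustrative only
    courses_of_action = [
        ("Voice-cloned executive requests a wire transfer", 7, 8),
        ("Prerecorded deepfake released to the public", 5, 9),
        ("Employee coerced through a 'ransomfake'", 4, 6),
        ("Real-time deepfake joins an internal video call", 3, 9),
    ]

    fig, ax = plt.subplots()
    for name, likelihood, danger in courses_of_action:
        ax.scatter(likelihood, danger)
        ax.annotate(name, (likelihood, danger), textcoords="offset points", xytext=(5, 5), fontsize=8)

    ax.set_xlabel("Likelihood")
    ax.set_ylabel("Danger")
    ax.set_xlim(0, 10)
    ax.set_ylim(0, 10)
    ax.set_title("Most likely / most dangerous courses of action")
    plt.savefig("deepfake_risk_matrix.png")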
This matrix can be used for a variety of purposes, from the allocation of resources
to the development of scenarios involving these specific risks that can be run through the
framework. While any relevant information can be fed into this matrix, it would be
helpful to include at a minimum reference to the items in the checklist for analyzing risk.
Again, that list is not meant to be prescriptive. Organizations are facing their own threats
and risks. However, the list is an effort to begin to compile themes common to deepfake-
enabled attacks. It may serve as a useful starting point in assessing risks in this area. The
next step for an organization in this process is to turn to the framework. Before moving to
the framework, it is worth examining briefly how the zero trust model can apply to this
problem.

While zero trust is intended to apply to computer networks, that model can also be applied conceptually to the interactions that deepfakes exploit: no request or likeness is trusted implicitly simply because it appears to come from a familiar person.
In pursuing this zero trust line of thinking, a question that arises is whether certain
transactions or roles (e.g., financial, leadership) should be flagged as high risk for
scrutiny when it comes to deepfakes. While there is no right answer and each
organization needs to consider these questions based on its own circumstances, there are
some inherent dangers in over-relying on automatically heightened scrutiny. Zero trust’s
primary benefit in this context is to apply a level of skepticism to all roles and
interactions, since deepfakes are a hybrid attack in which people are part of the attack.
The benefits of applying the zero trust mindset become clearer as organizations begin to
consider how the framework applies to their circumstances. The next step in the
discussion is the framework itself.
The framework in Appendix 2 is an adaptation of the NIST Cybersecurity Framework (CSF) (NIST, 2018). This analysis assumes that a given organization is using the CSF. As a first
step, organizations should review the CSF to include the introductory sections, whether
or not the organization already uses the CSF. An organization not currently using the
framework should attempt to understand its security posture in the context of the CSF.
There is some precedent for adapting the CSF to a specific use (Barker et al., 2022).
The framework (Appendix 2) as modified draws from all of the CSF’s five
functions and includes 48 subcategories. Eleven
subcategories are from the identify function, 17 from protect, three from detect, 11 from
respond, and six from recover.
The framework and overall methodology have been developed with a focus on
deepfake-enabled attacks. The 48 subcategories included in the framework reflect those
CSF areas that are most relevant, considering the various issues identified around
deepfakes. Each of the functions and many of the subcategories have unique implications
for deepfake-enabled attacks, some of which will become clear in the case studies.
The framework also includes a column that describes how each
subcategory applies to deepfake-enabled attacks. The idea for this column, specifically,
would be akin to the “ransomware application” column in NIST 8374 (Barker et al.,
2022).
Prior to moving into the case studies, note that in discussing the cases, reference
to the categories and subcategories will include standard notation of CSF sections, which
reflect the function, category, and subcategory. For instance, “(ID.AM-3)” corresponds
with the third subcategory under “identify, asset management,” which is “organizational
communication and data flows are mapped.” Throughout the case studies, the
subcategories will be cited this way. They are all from the NIST CSF.
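As a small illustration of this notation, the sketch below parses a subcategory reference into its function and category names. The two-letter code names follow NIST CSF version 1.1, only the codes cited in this paper are included, and the example subcategory text is the one quoted above.

    import re

    FUNCTIONS = {"ID": "identify", "PR": "protect", "DE": "detect", "RS": "respond", "RC": "recover"}
    CATEGORIES = {
        "AM": "asset management",
        "RA": "risk assessment",
        "AT": "awareness and training",
        "IP": "information protection processes and procedures",
        "CO": "communications",
        "RP": "response planning",
    }

    def parse_csf_reference(ref):
        """Split a reference such as 'ID.AM-3' into its function, category, and index."""
        match = re.fullmatch(r"([A-Z]{2})\.([A-Z]{2})-(\d+)", ref)
        if not match:
            raise ValueError(f"Not a CSF subcategory reference: {ref}")
        function, category, index = match.groups()
        return FUNCTIONS[function], CATEGORIES[category], int(index)

    # Example from the text: ID.AM-3, "organizational communication and data flows are mapped."
    print(parse_csf_reference("ID.AM-3"))  # ('identify', 'asset management', 3)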
Having discussed several modes of analysis, including the checklist, most
likely/most dangerous course of action analysis, zero trust, and framework, the paper will
now turn to a series of case studies.
4. Case Studies
Organizations can use the checklist for analyzing risk exposure in Appendix 1 and
the most likely/most dangerous matrix to develop hypothetical fact patterns for tabletop
exercises. One point worth noting in scenario design is
that there are varying levels of complication around deepfake attacks. This corresponds in
some ways with the most likely/most dangerous analysis. Whether an organization is
designing scenarios or considering its actual risk profile, planners can write out a variety
of potential adversary courses of action and then plot them on the axes. Planners can then
combine a number of features to create the type of attack they wish to train with.
For example, a prerecorded deepfake circulated internally within an organization,
not accompanied by any other avenues of attack, that does not hold credibility within the
organization could be relatively low impact. On the opposite end of the spectrum, a live
deepfake, directed outwards from the organization (or inwards with significant effect on
operations), launched in conjunction with other avenues of attack such as a denial of
service, with a highly detrimental and credible message, would be very high impact.
If designing a scenario, planners can take these fact patterns and build around
them. If an organization has these facts as its actual operating picture, a next step would
be to apply them to the framework.
Planners can use this methodology to come up with virtually any possible
scenario by changing business type, attack vector, attacker motivation, and other factors.
For example, scenarios can focus on types or levels of threats. Likewise, one could
develop scenarios that are very specific to a type of business with specific characteristics
and a particular risk/threat profile.
The first case study demonstrates an organization working through all stages of
the methodology. The remaining five case studies show organizations working only with
the framework and only while they are experiencing a crisis. In terms of scenario design,
the scenarios in this section tend towards cases on the higher-impact side. The reason for
this is that these scenarios are illustrative of important concepts around deepfakes.
Entities come from across industry sectors, facing attackers with a variety of motivations,
using various types of deepfakes and attack vectors, and relying on witting and unwitting
insiders.

The first case study involves a regional retail chain that handles payment card data and relies heavily on the trust of the public in its region.
This organization’s first step is to go through the checklist in Appendix 1. Given
that the organization is reviewing this checklist proactively, it assesses its risk exposure.
It must consider where it is most vulnerable to the types of known threats from deepfake-
enabled attacks. This series of questions results in a number of possibilities, and the
organization plots them against the matrix of most likely/most dangerous threats. During
this stage of planning, members of the team also begin to cross-reference their work to
framework sections. The organization’s review identifies the need to strengthen and
validate internal communication across multiple bands in emergencies/contingencies, to
engage in company-wide training regarding deepfake identification, to bolster and test
response and recovery plans (PR.IP-9 and PR.IP-10), and to ensure that their public
relations strategy is prepared for the variety of deepfake contingencies they have
identified (RS.CO-1 and RC.CO-1). During its preparation, the company identifies its
most dangerous threats:
any attack that jeopardizes its compliance with PCI-DSS standards, and overall, anything
that damages its reputation and the trust it has among the public of the region.
As part of this planning, the organization can also use the framework to guide its
preparation. Numerous subcategories from the identify (ID) and protect (PR) categories
of the framework can aid the organization in proactive development of its defensive
strategy. For example, (PR.AT-4) ensures that senior executives understand their roles
and responsibilities.
Despite having upgraded its security in light of the threat of deepfakes, the
company ends up the victim of a deepfake-enabled attack. The chain’s executive office
receives an email with an attachment (deemed safe) of a video showing individuals
taking barrels from a truck that is clearly marked with the company’s logo and dumping
what appears to be some type of industrial material from the barrels into a lake. While the
video plays, there is an audio overlay of what seems to be a phone call between an
unknown third party and a deepfake of the unmistakable and very well-known voice of a
member of the executive team, discussing the dumping of waste in this local nature preserve with open disdain for the local populace. The email delivering the deepfake could
be accompanied by a demand for ransom (ransomfake) or might just be a preview of the
fact that the deepfake will be released to the public. This implicates framework sections
related to public relations (RC.CO-2) and information sharing with external stakeholders
(RS.CO-5). Perhaps the sender might demand customer data as ransom. If the
organization has been thorough in planning based on the most likely/dangerous threats it
identified, it will have plans to address these eventualities.
This example showcases the entire methodology from the assessment questions
through the initial application of the framework after a deepfake-enabled attack.
The purposes of the next five examples are to demonstrate a broad view of the
threat landscape, identify salient points in each case, and cover a number of the
framework’s points.
In the next scenario, an employee at a financial institution is contacted by
what he or she believes to be his or her manager. Nothing about the interaction raises any
concerns for the employee. The manager asks the employee to take certain actions with
regard to an account that would result in the release of funds, and that until one year ago
would have been routinely permitted based only on the verbal authorization of this
manager by phone.
A number of subcategories in the framework are useful both in preparing for and
responding to such a situation. In terms of preparation, organizations should train their
employees on the threat and recommended responses using known threat examples
(PR.AT-1). Organizations should consider internal and external threats (ID.RA-3),
business impacts (ID.RA-4), and risk responses (ID.RA-6). In light of known threats of
the type described in articles cited above, financial institutions are undoubtedly instituting
additional checks and verifications prior to the release of funds. As part of its preparation,
the organization would need to continue to stay abreast of any new information related to
this type of scam through organizations such as the FS-ISAC. Staying up to date on
strategic information through industry groups is a best practice that should be adopted
across the board when preparing to mitigate deepfakes.
This scenario makes the employee into an unwitting insider threat. Awareness and
training are the keys to ensure that an employee does not fall prey. Organizations should
have procedures in place instructing employees on how to respond in such instances—to
include internal communication and verification. These procedures should be drilled.
Organizations should consider reporting mechanisms for employees who might get drawn
into this kind of scam. Depending on the nature of the organization, the organization
should consider whether it can allow a no-fault system for immediate reporting. There
should also be a system for the employee to report it contemporaneously with little to no
notice to the malicious actor. Any such reporting system should take into account that an
employee may still be under the attacker's observation or duress at the time of reporting.
The next scenario involves a publicly held corporation. This case is a two-part
attack. In the first part, an attacker gains access by using a deepfake to pose as a member
of the sysadmin team. The attacker then convinces personnel to take actions that lead to a
compromise of the company's information systems and a denial of
service. In the second stage, the attacker makes a public announcement while posing as a
member of the company’s leadership. The announcement will be intended to further the
attacker’s goal, whether that is to manipulate stock prices, achieve an activist purpose, or
otherwise.
As in the first scenario, employee training (PR.AT-1) is key. In both of the first
two scenarios, the relevance of a zero trust mindset is clear. Employees will not typically
or naturally question what appear to be routine instructions from superiors to release
funds or from sysadmins to take certain actions with regard to information systems.
Absent a technical countermeasure that alerts employees to the presence of computer-
generated images or that prevents deepfakes, employees need to be aware of the
possibility that a given image might be a deepfake. Likewise, organizations need to
implement processes to safeguard against the possibility that an attacker might
successfully trick employees, or that would mitigate damage if an attacker is able to make
contact with employees through the use of a deepfake.
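One way to operationalize such a safeguard is a simple process control that holds any high-impact request for out-of-band verification over a pre-agreed second channel, no matter how convincing the requester appears. The sketch below is a minimal illustration using assumed, hypothetical action names; it is not drawn from this paper's appendices or from any specific product.

    from dataclasses import dataclass

    # Hypothetical list of actions an organization treats as high impact.
    HIGH_IMPACT_ACTIONS = {"release_funds", "grant_admin_access", "change_payment_details"}

    @dataclass
    class Request:
        requester: str   # identity claimed on the call, email, or video
        action: str
        channel: str     # channel the request arrived on, e.g., "video_call"

    def requires_out_of_band_verification(request):
        # Zero trust mindset: the claimed identity alone is never sufficient.
        return request.action in HIGH_IMPACT_ACTIONS

    def handle(request, verified_via_second_channel):
        if requires_out_of_band_verification(request) and not verified_via_second_channel:
            return "HOLD: confirm via a pre-agreed second channel before proceeding"
        return "PROCEED"

    # Even a convincing video call from "the CFO" still requires a callback.
    print(handle(Request("CFO", "release_funds", "video_call"), verified_via_second_channel=False))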
An employee’s own senses would typically be his or her means of verifying
identity in this case. If employees are unable to rely chiefly on their own senses and on
the integrity of information systems, organizations must develop other reliable means. An
important part of an organization’s preparation for this type of eventuality is to have
mapped out communication flows (ID.AM-3). With the organization’s ability to
communicate internally and externally compromised, it is important that the organization
execute its response plan (RS.RP-1) and that personnel know their roles and the order of
operations (RS.CO-1). Organizations should consider how they can establish reliable
means of identity and access management, in particular authentication, for their backup
communication channels.
Ideally, employees would prevent stage one from becoming successful. However, if they
do not, the key is for the organization to have effective backup communication plans in
place.

In the next scenario, an attacker coerces an employee by threatening to release fabricated compromising material, or by staging a virtual kidnapping, unless the employee acts on the attacker's behalf inside the organization.
This scenario also involves an insider, this time a witting insider, although one
who is under duress. As in the first scenario, the organization should consider insider
threats in light of deepfake-enabled attacks. Likewise, as in the previous examples,
training should include the possibility of this type of attack. Employees should be aware
of the potential for being targeted with this type of material (PR.AT-1). Organizations
should consider that between remote work and the increasing prevalence of this
technology, the risk of employees being the target of a “ransomfake” and other forms of
AI-related extortion is growing rapidly. With that in mind, organizations should
find ways to build employee reporting of such incidents into their response plans (PR.IP-
9) to attempt to undermine the potential impact of virtual kidnapping and extortion
directed at employees.
The next scenario involves a news media organization covering an armed conflict. The organization receives video footage in which a prominent figure comments on the conflict; the
video is newsworthy. However, the individual makes some particularly strong statements
that are guaranteed to anger parties to the conflict and that may aggravate the already
very fraught situation. The media organization is unable to immediately verify the
authenticity of the video, but it came through normal channels in the country of origin,
and there is pressure to publish quickly.
There are several ways in which the media scenario could play out. Probably the
most likely is that the media could be fed disinformation to run as though it is news. The
second most likely scenario is that the media itself could be spoofed (as in the previously
cited example of France24). Third, many media organizations run online fora where
members of the public post unmoderated or lightly moderated content, on which it would
be very easy for someone to post a deepfake.
In the first scenario, the media organization is an unwitting partner in
promulgating disinformation. In the second, the organization is the victim of a deepfake
attack. The third example is more complicated. Focusing on the first example, the
obvious point is that media should have a process of vetting information: stories,
incoming video, interviewees, etc. The existence of deepfakes does not obviate the
media’s inherent obligation to ascertain the authenticity of the information it puts out.
That being said, it is very likely that deepfakes will eventually be presented as true on air,
despite such vetting.

Notwithstanding other relevant sections of the framework, for the purposes of this scenario the most relevant elements involve communicating
internally and externally, and managing legal and regulatory requirements around
e
cybersecurity, including those related to privacy and civil liberties (NIST, 2018). Clearly,
media organizations need to be highly attuned to this threat. Their staff must be able to
vet deepfakes better than most. The public relations consequences of running deepfakes
as real news could be severe.
In the scenario described at the outset, if the media organization were to publish
the video, later determined to have been false, this would obviously become a major
response and recovery problem.
The final scenario involves organizations that depend on public trust, such as a municipality, a research institution, or an election authority, targeted by an attacker who uses deepfakes to fabricate evidence of misconduct: unethical
behavior in the case of the municipality or research institution, or tampering with ballots
in the election example. While this fact pattern can be expanded to include potential
avenues of attack such as compromised insiders, the key element for the purposes of this
paper is to consider the end goal in a situation like this, in which such an organization—
any of which relies on public trust—has been the target of an attack whose sole purpose
is to undermine that trust. The framework subcategories discussed in previous sections
apply equally here. An important point to emphasize in this section is recovery. Each of
the organizations mentioned in this scenario stands to suffer greatly from the reputational
damage described. If the attack takes place, part of the recovery process will be managing
public relations (RC.CO-1), repairing reputation (RC.CO-2), and communicating with
internal and external stakeholders about recovery efforts (RC.CO-3). More so than in
most cyberattacks, deepfakes intentionally target reputation. The recovery effort may
need to be more intense. If the public is convinced that members of an organization have
truly done something wrong, it may be very difficult to repair the organization’s
reputation. Mere proof that it was in fact a fake video may not always be enough to
restore public confidence.
4.7 Wrap-Up
The scenarios above include a number of organizations dealing with witting and
unwitting insider threats, real-time and prerecorded deepfakes, and adversaries who
attack for a variety of reasons. The methodology can provide a good starting point for
organizations looking to review their security postures in light of this serious threat.
Organizations would be well-advised to consider the unique risks that deepfakes might
pose to their particular businesses based on the ways in which deepfakes can be used, by
comparing them to existing attack vectors, and by considering existing mitigations they
have in place. The methodology described in this paper can help organizations to identify
gaps in their technical and administrative controls.
5. Discussion
The case studies offered an opportunity to walk through the methodology much in
the way an organization might with its own fact patterns. It is worth noting at this point
that once an organization has decided to adopt this methodology, the various pieces or
tools begin to work in concert as an ongoing cycle. For instance, the checklist at
Appendix 1 and most likely/most dangerous matrix can be part of the risk assessment the
ho
framework discusses in the identify, risk assessment (ID.RA) category.
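As a concrete illustration of how those pieces might feed one another, the following is a minimal sketch, assuming an organization records its Appendix 1 answers as structured data and places each assessed scenario on a simple most likely/most dangerous matrix; the field names, scoring scale, and thresholds are illustrative assumptions, not part of the checklist or the framework.

# Minimal sketch (illustrative only): feeding Appendix 1 checklist answers
# into a simple likelihood/impact rating that can support the ID.RA risk
# assessment and the most likely/most dangerous matrix.
from dataclasses import dataclass

@dataclass
class DeepfakeScenario:
    name: str          # short label for the scenario being assessed
    target: str        # e.g., "individual", "organization", "third party"
    channel: str       # e.g., "phone", "VTC", "social media", "broadcast"
    real_time: bool    # real-time deepfake vs. prerecorded
    insider: bool      # any witting, unwitting, or coerced insider involvement
    likelihood: int    # 1 (rare) .. 5 (almost certain), analyst judgment
    impact: int        # 1 (minor) .. 5 (severe), analyst judgment

def matrix_quadrant(s: DeepfakeScenario) -> str:
    """Place a scenario on a most likely / most dangerous style matrix."""
    if s.impact >= 4 and s.likelihood >= 4:
        return "most likely AND most dangerous"
    if s.impact >= 4:
        return "most dangerous"
    if s.likelihood >= 4:
        return "most likely"
    return "monitor"

scenarios = [
    DeepfakeScenario("Cloned executive voice requests wire transfer",
                     "individual within organization", "phone",
                     real_time=True, insider=False, likelihood=4, impact=5),
    DeepfakeScenario("Fabricated video passed to broadcast media",
                     "organization", "broadcast",
                     real_time=False, insider=False, likelihood=2, impact=5),
]

for s in scenarios:
    print(f"{s.name}: {matrix_quadrant(s)} (L={s.likelihood}, I={s.impact})")

Keeping the answers in a structured form like this makes it easier to revisit the same scenarios as part of the ongoing cycle described above.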
Some common themes that arise from the case studies are that organizations should have ways of communicating during an incident, should know their information flows, and should proactively develop public relations plans. In addition, training and awareness efforts should explicitly address deepfakes.
Organizations need to consider that deepfake attacks can reach any form of communication and any part of the workforce. Employees can be targeted as a group or as individuals, whether working in the office, on the road, or from home or other remote locations. The general public can also be targeted by attackers purporting to be company employees or claiming to show images of company employees. Deepfakes can be real time or prerecorded and can be audio, video, audio-video, or text only. They can be transmitted through any means of communication, meaning they can come through any application or communications system. Finally, attackers can act for any of the reasons that traditionally motivate malicious actors: ego, money, activism, terrorism, espionage, vandalism, etc.
Deepfake-enabled attacks may be aimed at people inside the organization, at the public, or at anyone else external to the organization. In all cases, they will in some way implicate communications involving the organization.
This has implications for how organizations consider the scope of information
flows. For example, a denial of service could range from the inability of one or several
employees in non-critical roles to access the network at the low end to the inability of
anyone in the company to communicate with the public via corporate information
systems at the opposite end of the spectrum.
As organizations consider their information flows in the context of deepfakes, they should also think about the well-known "CIA triad," that is, confidentiality, integrity, and availability. All three of these can and will be affected by deepfake-enabled attacks. However, the element most directly affected is integrity. Perhaps less obvious, but equally affected, is availability. Naturally, if a bad actor is taking up airtime and sitting in for a given person, that affects availability. In some instances, this may be by design. In others, it may simply be a helpful side effect for an attacker. Some attackers may intend to include a denial of service in order to frustrate the victim's ability to counter the attacker's message, so that the attacker can effectively further his or her plan to steal or otherwise move forward. Finally, and to a lesser degree, confidentiality can be affected. This would happen, for instance, when an attacker manages to convince an unwitting insider to disclose confidential information. Whether such a disclosure triggers breach reporting obligations will depend on factors such as the nature of the information, the jurisdiction in which this occurs, and an organization's industry.

All three of these items can be addressed through appropriate planning and implementation of aspects of the framework. Up front, note that organizations must be realistic and cognizant of the problem and its effects on their systems in terms of the CIA triad. This can inform a proper evaluation of information flows, responsibilities, response plans, and recovery in various possible scenarios.
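One way to make that evaluation concrete is sketched below, assuming an organization tags each significant information flow with an expected deepfake impact against confidentiality, integrity, and availability; the flow names and ratings are purely illustrative.

# Minimal sketch (illustrative only): rating deepfake impact on key
# information flows against the CIA triad (0 = none, 3 = severe).
flows = {
    "CEO <-> finance team (voice/VTC)":  {"C": 1, "I": 3, "A": 2},
    "Press office -> broadcast media":   {"C": 0, "I": 3, "A": 1},
    "Help desk <-> remote employees":    {"C": 2, "I": 3, "A": 2},
}

def hardest_hit(ratings: dict) -> str:
    """Return the triad element a deepfake attack would affect most."""
    return max(ratings, key=ratings.get)

for name, ratings in flows.items():
    print(f"{name}: highest deepfake exposure on '{hardest_hit(ratings)}' {ratings}")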
Where deepfake-enabled attacks overlap with threats for which plans already exist, such as social engineering or insider threat, organizations should refer to their plans for these threats and work to integrate them with their deepfake response strategy.
Additionally, because of the reputational harm associated with deepfakes, it will
always be better not to become the victim of a deepfake than to have to argue against the
authenticity of one. Organizations should keep this in mind in designing their strategies
for confronting deepfakes.
Finally, it is important to keep in mind that deepfake-enabled attacks are not yet widespread and the damage they can cause is not yet widely documented. As these attacks take place moving forward, the security community can incorporate the lessons learned into the methodology.
6. Areas for Further Research

Given that the topic of deepfakes is emerging, there are many areas for further research. One is closer collaboration between the researchers who are working on machine learning and the security personnel who will be implementing the tools. Given the lack of practical solutions, this may help.
Another general area for further research would be the interrelation between
deepfake-enabled attacks and physical security, insider threat, and remote work.
In terms of this specific project, two next steps include building out the
framework to include the “deepfake relevance” section and cross-referencing the
framework to other frameworks and compliance systems.
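As a rough illustration of the first of those next steps, the following is a minimal sketch, assuming the truncated framework in Appendix 2 were captured as structured records with an added deepfake relevance note per subcategory; the notes shown are examples only, not authoritative mappings.

# Minimal sketch (illustrative only): NIST CSF subcategories annotated with
# a "deepfake relevance" note, exported as CSV alongside an organization's
# framework profile.
import csv
import io

rows = [
    {"function": "Protect", "subcategory": "PR.AT-1",
     "text": "All users are informed and trained",
     "deepfake_relevance": "Train staff to recognize and report suspected deepfakes"},
    {"function": "Recover", "subcategory": "RC.CO-1",
     "text": "Public relations are managed",
     "deepfake_relevance": "Pre-plan messaging to rebut a fabricated audio or video clip"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())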
As noted, deepfakes are only one aspect of the wider, growing area of AI/ML and
the so-called metaverse. It is very important to consider security in this area now, as it
remains in its earliest stages. As with the development of any software or hardware, it is
far better to build security into the product than to treat security as an afterthought.
Deepfakes are an entrée to this area that presents a wide-open opportunity.
7. Conclusion
This is a transitional time. Deepfake attacks are in their relative infancy. The overall reality they represent in terms of AI/ML is new. This means that the technology is not yet settled, either for attackers or defenders. The problem is that attackers have the upper hand. As always, attackers only have to get it right some of the time. The problem of deepfake-enabled attacks is approaching quickly, and the stakes are exceptionally high, as is evidenced by the voice cloning cases this paper covered. The security community should not wait for this problem to arrive at its doorstep before it acts. Real-time deepfakes are a particular concern. The fact that they can be effected from any location on any device at any time and combined with other attacks, such as denial of service, should be alarming to security personnel and executives alike. Smart adversaries will preposition resources for complex attacks. They will attempt to attack even when the technology is not optimal.

The suggestions in this paper are not meant to replace technical measures. Rather, they are an effort to begin to develop a way of acting and thinking about this burgeoning problem proactively and effectively, using available security controls. One strength of the methodology is that it builds on venerable existing systems. This should allow it to remain flexible enough to encompass new technologies as they emerge. The "deepfake relevance" column on the framework will grow with lessons learned. As of now, there is not a deep body of experience dealing with deepfake-enabled attacks. What appears to be the case, based on the technology, trends, and guidance of experts, is that the security community will see AI-powered cyberattacks, including deepfakes, proliferate. A goal of this research is to assist CISOs and other network defenders as they prepare their organizations to meet this new threat. No one knows with certainty what the landscape will look like once deepfake-enabled attacks begin to make an impact. Based on current trends, it appears that this will happen soon. When it does, no one will be able to deny it.
The views expressed in this article are my own and not those of the Department of State
or the US government.
This article is intended as general educational information, not as legal advice with
respect to any specific situation. If the reader needs legal advice on a specific situation or
issue, the reader should consult with an attorney.
References
Anderson, M. (2021, August 8). Real-Time DeepFake Streaming with DeepFaceLive. UniteAI. https://www.unite.ai/real-time-deepfake-streaming-with-deepfacelive/

Barker, W., Fisher, W., Scarfone, K., & Souppaya, M. (2022, February). Ransomware Risk Management: A Cybersecurity Framework Profile (NISTIR 8374). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8374

Bennett, C. (2022, January 10). Fake Videos Using Robotic Voices and Deepfakes Circulate in Mali. France 24 Observers. https://observers.france24.com/en/tv-shows/truth-or-fake/20220110-truth-or-fake-debunked-mali-robot-voices-deepfakes

Beridze, I. & Butcher, J. (2019, August). When Seeing Is No Longer Believing. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0085-5

Brady, M., Howell, G., Franklin, J., Sames, C., Schneider, M., Snyder, J., & Weitzel, D. (2021). Cybersecurity Framework Election Infrastructure Profile (Draft NISTIR 8310). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8310-draft

Brewster, T. (2021, October 14). Fraudsters Cloned Company Director's Voice in $35 Million Bank Heist, Police Find. Forbes.
Caldelli, R., Galteri, L., Amerini, I., & Del Bimbo, A., (2021, June). Optical Flow based
CNN for detection of unlearnt deepfake manipulations. Pattern Recognition
Letters, Volume 146, 2021, Pages 31-37,
https://doi.org/10.1016/j.patrec.2021.03.005
Campbell, D. (2008, January 28). The Tiger Kidnapping. The Guardian. https://www.theguardian.com/uk/2008/jan/28/ukcrime.duncancampbell2
Christopher, N. (2020, February 18). We've Just Seen the First Use of Deepfakes in an Indian Election Campaign. Vice. https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp

Clark, K. (2021, June 4). 'Deepfakes' Emerging Issue in State Legislatures. State Net Capitol Journal. Retrieved May 4, 2022, from https://www.lexisnexis.com/en-us/products/state-net/news/2021/06/04/Deepfakes-Emerging-Issue-in-State-Legislatures.page

Coble, S. (2022, January 27). Florida Considers Deepfake Ban. Infosecurity Magazine. https://www.infosecurity-magazine.com/news/florida-considers-deepfake-ban/
Coker, J. (2022, April 28). Europol: Deepfakes Set to Be Used Extensively in Organized Crime. Infosecurity Magazine. https://www.infosecurity-magazine.com/news/europol-deepfakes-organized-crime/

Citron, D. & Chesney, R. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. 107 California Law Review 1753 (2019). https://scholarship.law.bu.edu/faculty_scholarship/640
Colak, B. (2021, January 19). Disinformation: Legal Issues of Deepfakes. Institute for Internet and the Just Society. https://www.internetjustsociety.org/legal-issues-of-deepfakes

Deepfake Report Act of 2019, S. 2065, 116th Cong. (2019). https://www.congress.gov/bill/116th-congress/senate-bill/2065

Dellinger, AJ. (2019, November 25). Anatomy of a Scam: Nigerian Romance Scammer Shares Secrets. Forbes. https://www.forbes.com/sites/ajdellinger/2019/11/25/anatomy-of-a-scam-nigerian-romance-scammer-shares-secrets/

Denham, H. (2020, August 3). Another Fake Video of Pelosi Goes Viral on Facebook. Washington Post. https://www.washingtonpost.com/technology/2020/08/03/nancy-pelosi-fake-video-facebook/
EUROPOL. (2022, April 28). Facing Reality? Law Enforcement and the Challenge of Deepfakes. https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf

Federal Bureau of Investigation (FBI). (2017, October 16). Virtual Kidnapping, A New Twist on a Frightening Scam. https://www.fbi.gov/news/stories/virtual-kidnapping

Federal Bureau of Investigation (FBI). (2021, March 10). Private Industry Notification: Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations. https://www.ic3.gov/Media/News/2021/210310-

Florida Senate, Minority Office. (2022, November 15). Leader Book Advances. https://www.flsenate.gov/Media/PressReleases/Show/4098

Fowler, G. (2021, March 25). Anyone with an iPhone Can Now Make Deepfakes. We Aren't Ready for What Happens Next. Washington Post. https://www.washingtonpost.com/technology/2021/03/25/deepfake-video-apps/

Giles, M. (2018, February 21). The GANfather: The Man Who's Given Machines the Gift of Imagination. MIT Technology Review. https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/

Greene, T. (2020, April 21). Watch: Fake Elon Musk Zoom-bombs Meeting Using Real-time Deepfake AI. TNW. https://thenextweb.com/news/watch-fake-elon-musk-zoom-bombs-meeting-using-real-time-deepfake-ai
Haltiwanger, J. (2022, February 3). US Says Russia Planned to Use a "Graphic" Fake Video with Corpses and Actors to Justify an Invasion of Ukraine. Business Insider. https://www.businessinsider.com/us-says-russia-planned-fake-video-create-pretext-ukraine-invasion-2022-2

Hirwani, P. (2021, May 27). Scarily Authentic New Deep Fake of Tom Cruise Attracts Millions of Views. The Independent. https://www.independent.co.uk/celebrity-news/tom-cruise-deep-fake-tik-tok-b1853256.html

Jin-kyu, Kang. (2022, February 13). Deepfake Democracy: South Korean Candidate Goes Virtual for Votes. France 24. https://www.france24.com/en/live-news/20220214-deepfake-democracy-south-korean-candidate-goes-virtual-for-votes

John S. McCain National Defense Authorization Act for Fiscal Year 2019, Public Law 115-232, 115th Cong. (2018). https://www.congress.gov/115/plaws/publ232/PLAW-115publ232.pdf

Johnson, D. (2020, December 4). What Is Augmented Reality? Here's What You Need to Know. Business Insider. https://www.businessinsider.com/what-is-augmented-reality

Kasapoglu, C. (2022, February 9). Me enamoré de un 'deepfake' de un sitio de citas que... [I fell in love with a 'deepfake' from a dating site that...].

Kushner, D. (2022, March 20). 'We Have Your Daughter': The Terrified Father Paid the Ransom. Then He Found His Kid Where He Least Expected Her. Business Insider. https://www.businessinsider.com/virtual-kidnappers-scamming-terrified-parents-out-of-millions-fbi-2022-3

Lima, C. (2021, August 6). The Technology 202: As Senators Zero In on Deepfakes, Some Experts Fear Their Focus Is Misplaced. Washington Post. https://www.washingtonpost.com/politics/2021/08/06/technology-202-senators-zero-deepfakes-some-experts-fear-their-focus-is-misplaced/

Lomas, N. (2020, September 14). Sentinel Loads Up with $1.35M in the Deepfake Detection Arms Race. TechCrunch. https://techcrunch.com/2020/09/14/sentinel-loads-up-with-1-35m-in-the-deepfake-detection-arms-race/
Marr, B. (2022, February 22). The Important Difference Between Web3 and the Metaverse. Forbes. https://www.forbes.com/sites/bernardmarr/2022/02/22/the-important-difference-between-web3-and-the-metaverse/

Matsuda, K. (2016). Hyper-Reality. http://hyper-reality.co/

Mirsky, Y. & Lee, W. (2020, January). The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys, Vol. 1, No. 1, Article 1, at page 1:3. https://arxiv.org/pdf/2004.11138.pdf

MIT Media Lab and Applied Face Cognition Lab. (2022). Detect Fakes. https://detectfakes.media.mit.edu/ (No author is listed, but Matt Groh is listed on a linked site as the "project contact.")

National Institute of Standards and Technology. (2018, April 16). Framework for Improving Critical Infrastructure Cybersecurity (Version 1.1). https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

National Security Commission on Artificial Intelligence (NSCAI). (2021, March 1). Final Report. https://reports.nscai.gov/final-report/table-of-contents/

Newman, L. (2019, May 28). To Fight Deepfakes, Researchers Built a Smarter Camera. Wired. https://www.wired.com/story/detect-deepfakes-camera-watermark/

Ober, H. (2022, May 3). New Method Detects Deepfake Videos with Up to 99% Accuracy. UC Riverside News. https://news.ucr.edu/articles/2022/05/03/new-method-detects-deepfake-videos-99-accuracy

Poremba, S. (2021, July 20). Deep Fakes: The Next Big Threat. Security Boulevard. https://securityboulevard.com/2021/07/deepfakes-the-next-big-threat/

Qureshi, S. (2022, January 29). China Prepares to Crack Down on Deepfakes. Jurist. https://www.jurist.org/news/2022/01/china-cyberspace-regulator-issues-draft-rules-on-deep-fakes/
Romano, A. (2018, April 18). Jordan Peele's Simulated Obama PSA Is a Double-Edged Warning Against Fake News. Vox. https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed

Satter, R. (2019, June 13). Experts: Spy Used AI-Generated Face to Connect with Targets. AP. https://apnews.com/article/ap-top-news-artificial-intelligence-social-platforms-think-tanks-politics-bc2f19097a4c4fffaa00de6770b8a60d

Saxena, A. (2022, March 17). "Despicable Zelensky deepfake ordering Ukrainians to 'lay down arms' taken offline." Express. https://www.express.co.uk/news/world/1581928/ukraine-volodymyr-zelensky-deepfake-video-ont

Smith, A. (2020, August 5). Deepfakes Are the Most Dangerous Crime of the Future. The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/deepfakes-dangerous-crime-artificial-intelligence-a9655821.html

Smith, A. (2022, February 17). Deepfake Faces Are Even More Trustworthy Than Real Ones. The Independent. https://www.independent.co.uk/tech/deepfake-faces-real-ai-trustworthy-b2017202.html

Somers, M. (2020, July 21). Deepfakes, Explained. Ideas Made to Matter, MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained

Stolton, S. (2020, November 20). EU Police Recommend New Online 'Screening Tech'.

Stupp, C. (2019, August 31). Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case. Wall Street Journal. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

Thomson, D. (2022, January 31). Truth or Fake – Deepfake News Videos Circulate in Mali Amid Tensions with France. France 24. https://www.france24.com/en/tv-shows/truth-or-fake/20220131-deepfake-news-videos-circulate-in-mali-amid-tensions-with-france

US Army. (2019, March). ATP 2-01.3, Intelligence Preparation of the Battlefield. https://home.army.mil/wood/application/files/8915/5751/8365/ATP_2-01.3_Intelligence_Preparation_of_the_Battlefield.pdf

Vincent, J. (2016, May 20). This Six-Minute Short Film Plunges You Into an Augmented Reality Hellscape. The Verge. https://www.theverge.com/2016/5/20/11719244/hyper-reality-augmented-short-film

Vincent, J. (2019, February 15). TL;DR: ThisPersonDoesNotExist.com Uses AI to Generate Endless Fake Faces. The Verge. https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan

Vincent, J. (2021, April 30). 'Deepfake' That Supposedly Fooled European Politicians Was Just a Look-Alike, Say Pranksters. The Verge. https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus
Appendix 1 – Checklist for Assessing Risk Exposure
1 – Who is the target of the deepfake? (As distinguished from the motivation and purpose, which may well be tied to a different target for the overall attack.)
Individual
Organization
Individual within Organization
Third party (coercion/tiger kidnapping/etc.)

2 – Who is the victim of the overall crime? This is distinguished from the target of the deepfake. They may be one and the same. However, the victim of the crime is related to the attacker's motivation and purpose. The attacker may have multiple targets and victims.

3 – What is the physical location of the affected person or people within the organization (subjects and intended audience of the deepfake; the audience question does not apply in cases where the intended audience is the general public)?
In office
Remote (organization-controlled/travel/etc.)
Personal residences
4 – What communication channel or system will the deepfake be transmitted to?
Phone (audio)
Proprietary VTC
Chat app (WhatsApp, Signal, etc.)
Social media (Facebook, YouTube, TikTok, Instagram)
Broadcast media (news organizations, television, radio, etc.)

5 – What form will the deepfake take, and how does the attacker intend for it to be consumed?
Real-time (audio/video/audio-video)
Pre-recorded (audio/video/audio-video)
Text
Photo
For instance, if it is a pre-recorded video, is the attacker's intent that it be played on live news media?
6 – What is the attacker's motivation?
Theft, espionage, vandalism, activism, terrorism, ego/"bragging rights," other?

7 – Is there insider involvement?
Witting? Unwitting? Coerced?

8 – If insider involvement is coerced, how is the insider being coerced?
Directly?
Threats to loved ones?
Virtual kidnapping?

9 – Is the attacker gaining access through the use of other hacker tools/exploitation means? E.g., is the attacker accessing internal corporate networks as part of the attack, either to conduct the deepfake itself or to accomplish another portion of the plan?
If the attacker has used other hacker tools/exploits, did he or she access the network?

What is the overall scope of the attack? For example, is the deepfake itself the extent of the attack? Is the deepfake a means to an end (e.g., the way to get a party to commit a follow-on crime)? Or, is there a second wave deepfake?
12 – Will the organization’s systems’ availability be affected? Will the overall system
integrity be affected (beyond the individual message)? Will there be any form of outage?
Will systems be taken offline directly or indirectly?
14 – Will the attacker offer the organization an opportunity to prevent release of the
deepfake?
15 – Will the attack involve any form of physical interaction with systems or personnel?
16 – Is there any indication that organizational systems have been breached?
17 – Is there any indication that confidential data may be affected by the attack?
This may trigger breach reporting requirements depending on an organization’s industry,
the nature of the data affected, and/or the jurisdictions involved. The organization’s
counsel and privacy or data protection team should be involved in any discussion on this
question.
Additional questions for organizations to consider:
1 – Are there any recent successful or unresolved incidents involving any of the attack types described above?

2 – Does the organization engage in any activity that closely mirrors any of the specific known or likely scenarios? (This may seem like an obvious point, but these scenarios are known and most likely precisely because they are the low-hanging fruit of this area, and they are likewise the low-hanging fruit for security practitioners.)

3 – Could employees be placed under duress or coerced as part of an attack? If so, is there a mechanism by which employees are able to report in-progress situations involving duress or coercion? Are employees encouraged to come forward when they are being coerced?

4 – Does the organization have a physical security program in place for remote workers?
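For organizations that want to keep these answers over time, the following is a minimal sketch, assuming each assessed scenario is captured as a simple machine-readable record; the field names are illustrative and not part of the checklist itself.

# Minimal sketch (illustrative only): capturing Appendix 1 answers as
# structured data so assessments can be stored, compared, and revisited
# as the threat evolves.
import json

assessment = {
    "scenario": "Voice-cloned executive requests urgent wire transfer",
    "deepfake_target": "individual within organization",   # question 1
    "crime_victim": "organization",                         # question 2
    "location": "remote (travel)",                          # question 3
    "channel": "phone (audio)",                             # question 4
    "media": {"type": "audio", "real_time": True},          # question 5
    "attacker_motivation": "theft",                         # question 6
    "insider_involvement": None,     # questions 7-8: none/witting/unwitting/coerced
    "other_tools_used": False,       # questions 9-10
    "availability_impact_expected": False,   # question 12
    "extortion_opportunity_offered": False,  # question 14
    "physical_interaction": False,           # question 15
    "breach_indicators": False,              # questions 16-17
}

print(json.dumps(assessment, indent=2))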
Appendix 2 – Truncated NIST Cybersecurity Framework
This table is derived from material in: National Institute of Standards and Technology,
Framework for Improving Critical Infrastructure Cybersecurity (Version 1.1), April 16,
2018, available at https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf
Identify (ID) (12 Subcategories)
Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. (NIST CSF Page 7)

Asset Management (ID.AM): The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to organizational objectives and the organization's risk strategy.
ID.AM-3: Organizational communication and data flows are mapped

Business Environment (ID.BE)
ID.BE-5: Resilience requirements to support delivery of critical services are established for all operating states (e.g. under duress/attack, during recovery, normal operations)

Governance (ID.GV): The policies, procedures, and processes to manage and monitor the organization's regulatory, legal, risk, environmental, and operational requirements are understood and inform the management of cybersecurity risk.
ID.GV-1: Organizational cybersecurity policy is established and communicated
ID.GV-2: Cybersecurity roles and responsibilities are coordinated and aligned with internal roles and external partners
ID.GV-3: Legal and regulatory requirements regarding cybersecurity, including privacy and civil liberties obligations, are understood and managed
ID.GV-4: Governance and risk management processes address cybersecurity risks

Risk Assessment (ID.RA): The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals.
ID.RA-2: Cyber threat intelligence is received from information sharing forums and sources
ID.RA-6: Risk responses are identified and prioritized

Risk Management Strategy (ID.RM): The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions.

Supply Chain Risk Management (ID.SC)
Protect (PR) (17 Subcategories)
Develop and implement appropriate safeguards to ensure delivery of critical services. The Protect Function supports the ability to limit or contain the impact of a potential cybersecurity event. (NIST CSF Page 7)

Identity Management, Authentication and Access Control (PR.AC)
PR.AC-6: Identities are proofed and bound to credentials and asserted in interactions
PR.AC-7: Users, devices, and other assets are authenticated (e.g. single-factor, multi-factor) commensurate with the risk of the transaction (e.g. individuals' security and privacy risks and other organizational risks)

Awareness and Training (PR.AT): The organization's personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity-related duties and responsibilities consistent with related policies, procedures, and agreements.
PR.AT-1: All users are informed and trained
PR.AT-3: Third-party stakeholders (e.g., suppliers, customers, partners) understand their roles and responsibilities

Information Protection Processes and Procedures (PR.IP): Security policies (that address purpose, scope, roles, responsibilities, management commitment, and coordination among organizational entities), processes, and procedures are maintained and used to manage protection of information systems and assets.
PR.IP-8: Effectiveness of protection technologies is shared
PR.IP-9: Response plans (Incident Response and Business Continuity) and recovery plans (Incident Recovery and Disaster Recovery) are in place and managed
PR.IP-11: Cybersecurity is included in human resources practices (e.g., deprovisioning, personnel screening)
PR.IP-12: A vulnerability management plan is developed and implemented
Detect (DE) (3 Subcategories)
Develop and implement appropriate activities to identify the occurrence of a cybersecurity event.

Security Continuous Monitoring (DE.CM)
DE.CM-2: The physical environment is monitored to detect potential cybersecurity events
DE.CM-3: Personnel activity is monitored to detect potential cybersecurity events

Detection Processes (DE.DP): Detection processes and procedures are maintained and tested to ensure awareness of anomalous events.
Respond (RS) (11 Subcategories)
Develop and implement appropriate activities to take action regarding a detected cybersecurity incident.

Communications (RS.CO)
RS.CO-5: Voluntary information sharing occurs with external stakeholders to achieve broader cybersecurity situational awareness

Analysis (RS.AN): Analysis is conducted to ensure effective response and support recovery activities.
RS.AN-2: The impact of the incident is understood
RS.AN-4: Incidents are categorized consistent with response plans
RS.AN-5: Processes are established to receive, analyze and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g. internal testing, security bulletins, or security researchers)

Mitigation (RS.MI): Activities are performed to prevent expansion of an event, mitigate its effects, and resolve the incident.

Improvements (RS.IM): Organizational response activities are improved by incorporating lessons learned from current and previous detection/response activities.
RS.IM-1: Response plans incorporate lessons learned
RS.IM-2: Response strategies are updated
Recover (RC) (6 Subcategories)
Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident. The Recover Function supports timely recovery to normal operations to reduce the impact from a cybersecurity incident. (NIST CSF Page 8)

Recovery Planning (RC.RP): Recovery processes and procedures are executed and maintained to ensure restoration of systems or assets affected by cybersecurity incidents.
RC.RP-1: Recovery plan is executed during or after a cybersecurity incident

Improvements (RC.IM): Recovery planning and processes are improved by incorporating lessons learned into future activities.
RC.IM-1: Recovery plans incorporate lessons learned
RC.IM-2: Recovery strategies are updated

Communications (RC.CO): Restoration activities are coordinated with internal and external parties (e.g. coordinating centers, Internet Service Providers, owners of attacking systems, victims, other CSIRTs, and vendors).
RC.CO-1: Public relations are managed
RC.CO-2: Reputation is repaired after an incident
RC.CO-3: Recovery activities are communicated to internal and external stakeholders as well as executive and management teams