Preparing for AI-enabled cyberattacks

Produced in association with Darktrace
MIT Technology Review Insights
Key takeaways

1. Cybercriminals are turning to artificial intelligence (AI) to scale up their attacks and evade detection.

2. According to a global survey, more than half of business leaders say security strategies based on human-led responses to fast-moving attacks are failing. Nearly all have begun to bolster their defenses in preparation for AI-enabled attacks.

3. To keep up with evolving cybercriminal innovation, “defensive AI” uses self-learning algorithms to understand normal patterns of user, device, and system behavior in an organization and detect unusual activity without relying on historical attack data.

Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today’s new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What’s known as “offensive AI” will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools.
Some of the world’s largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data. With offensive AI on the horizon, organizations need to adopt new defenses to fight back: the battle of algorithms has begun.

MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they’re addressing the cyberthreats they’re up against—and how to use AI to help fight against them.

As it is, 60% of respondents report that human-driven responses to cyberattacks are failing to keep up with automated attacks, and as organizations gear up for a greater challenge, more sophisticated technologies are critical. In fact, an overwhelming majority of respondents—96%—report they’ve already begun to guard against AI-powered attacks, with some enabling AI defenses.
This is just one example of the technology being used for nefarious purposes. AI could, at some point, conduct cyberattacks autonomously, disguising their operations and blending in with regular activity. The technology is out there for anyone to use, including threat actors.

Offensive AI risks and developments in the cyberthreat landscape are redefining enterprise security, as humans already struggle to keep pace with advanced attacks. In particular, survey respondents reported that email and

[Figure: Risks introduced by a dispersed workforce, 58%; threats to cloud applications, 57%. Source: MIT Technology Review Insights survey of 309 business leaders worldwide, January 2021. Respondents were asked to choose all that apply.]
[Figure 2: two leading responses at 60% and 55%. Source: MIT Technology Review Insights survey of 309 business leaders worldwide, January 2021. Respondents were asked to choose all that apply.]

How attackers exploit the headlines
The coronavirus pandemic presented a lucrative opportunity for cybercriminals. Email attackers in particular followed a long-established pattern: take advantage of the headlines of the day—along with the fear, uncertainty, greed, and curiosity they incite—to lure victims in what has become known as “fearware” attacks. With employees working remotely, without the security protocols of the office in place, organizations saw successful phishing attempts skyrocket. Max Heinemeyer, director of threat hunting for Darktrace, notes that when the pandemic hit, his team saw an immediate evolution of phishing emails. “We saw a lot of emails saying things like, ‘Click here to see which people in your area are infected,’” he says. When offices and universities started reopening last year, new scams emerged in lockstep, with emails offering “cheap or free covid-19 cleaning programs and tests,” says Heinemeyer.

There has also been an increase in ransomware, which has coincided with the surge in remote and hybrid work environments. “The bad guys know that now that everybody relies on remote work. If you get hit now, and you can’t provide remote access to your employees anymore, it’s game over,” he says. “Whereas maybe a year ago, people could still come into work, could work offline more, but it hurts much more now. And we see that the criminals have started to exploit that.”
What’s the common theme? Change, rapid change, and—in the case of the global shift to working from home—complexity. And that illustrates the problem with traditional cybersecurity, which relies on signature-based approaches: static defenses aren’t very good at adapting to change. Those approaches extrapolate from yesterday’s attacks to determine what tomorrow’s will look like. “How could you anticipate tomorrow’s phishing wave? It just doesn’t work,” Heinemeyer says.

Offensive AI: Not a human-scale problem
Already, cyberattacks are proving to be too fast and too furious for humans and first-generation tools to keep up with, as they struggle to protect data and other assets. The limitations of traditional security tools were made clear once again in December 2020, when a campaign attributed to Russian intelligence groups infiltrated some of the world’s most prominent organizations—including branches of the United States government and Fortune 500 companies—through their software supply chains. Public health and safety are also at risk—hackers recently attempted to disrupt the supply of coronavirus vaccines. And in February 2021, hackers infiltrated the systems of a water facility in Oldsmar, Florida, trying to change the levels of chemicals in the water supply to poisonous extremes.

According to survey respondents, companies worry that they have inadequate resources to quell threats. This was, in fact, respondents’ biggest challenge: 60% reported that human-driven responses can’t keep up with automated attacks (see Figure 2).

The IT skills gap is aggravated by increasing digital complexity, Heinemeyer says. It’s not just that things are changing; it’s that they’re changing in an “increasingly rapid and vicious threat landscape.” Cybercriminals are lightning-fast in their attacks, and their dwell time—the length of time in which attackers have free rein in an environment before their missions are complete—is shrinking to hours rather than days. Heinemeyer says cybersecurity teams are increasingly relying on AI to stop threats from escalating at the earliest signs of compromise, containing attacks even when they strike at night or on the weekend.

One organization that’s embracing the use of AI in cybersecurity is McLaren Racing. In the world of professional car racing, speed isn’t just important on the racetrack; it’s also crucial when it comes to responding to fast-moving cyberthreats. McLaren Racing’s principal digital architect Edward Green gives an example: one Saturday afternoon on a race weekend, under intense pressure, the team simply did not have the time to assess whether every email might be a threat. “Everyone was moving very, very quickly,” because “you’ve got a limited amount of time” to read and respond to data and then make adjustments. The quicker the team can access the data flowing from the race cars, the faster it might find an advantage over another team. The data flow on race days is at its peak, the perfect time for an impersonation attack—an email that attempts to impersonate a trusted sender and gain access to data or finances.

On this particular race weekend, McLaren Racing had recently deployed Darktrace’s defensive-AI platform, and the technology was already learning what the data flow should look like on racing days. It spotted an email that was unusual in the normal patterns of activity for the sender, recipient, and wider organization—and locked the suspicious link inside the email, so anyone who tried to open it wouldn’t have been able to click through to the link.
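The pattern-of-life idea behind the McLaren email example can be sketched in a few lines. This is a toy illustration, not Darktrace's actual model; the class name, scoring rule, and 0.8 threshold are all invented for illustration. The idea: score an inbound email by how rare its sender and link domains are in the organization's observed history, and lock the links rather than quarantining the whole message when the score is high.

```python
from collections import Counter

class EmailAnomalyScorer:
    """Toy pattern-of-life model: the rarer an email's sender and
    link domains are in the observed history, the higher its score."""

    def __init__(self, lock_threshold=0.8):
        self.sender_counts = Counter()
        self.domain_counts = Counter()
        self.total = 0
        self.lock_threshold = lock_threshold

    def observe(self, sender, link_domains):
        # Learn "normal" from routine traffic; no attack signatures involved.
        self.sender_counts[sender] += 1
        for d in link_domains:
            self.domain_counts[d] += 1
        self.total += 1

    def rarity(self, counter, key):
        # 0.0 = very common, 1.0 = never seen before.
        return 1.0 - counter[key] / (self.total + 1)

    def score(self, sender, link_domains):
        s = self.rarity(self.sender_counts, sender)
        d = max((self.rarity(self.domain_counts, x) for x in link_domains),
                default=0.0)
        return (s + d) / 2

    def triage(self, sender, link_domains):
        # "Lock" links (e.g., rewrite them to a warning page) instead of
        # blocking the whole message.
        if self.score(sender, link_domains) >= self.lock_threshold:
            return "lock-links"
        return "deliver"
```

After observing 50 routine emails from one colleague, a first-ever sender linking to a never-seen domain scores 1.0 and has its links locked, while the routine sender still scores near zero and is delivered.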
Regaining the upper hand with defensive AI
Many organizations are turning to defensive AI to fight fire with fire. Rather than relying on historical attacks to find new ones, defensive AI learns what’s normal for an organization and can detect abnormal, potentially malicious activity as soon as it appears—even if it has never been seen before.

A year ago, before the pandemic and the issues of a remote workforce complicated the company’s security operation, the technology team at McLaren Racing would encounter crude, brute-force password attacks that Green likened to “machine-gunning” of credentials sprayed across Microsoft 365 accounts. In such attacks—known as “spray-and-pray”—hackers employ bots to automatically try to log in to their targets’ systems by using lists of user credentials stolen in other breaches.

But in the past year, these attacks have been tailored to focus on individuals, roles, or teams, Green says. “They’re far more targeted,” he says. “Attackers are impersonating employees, or they’re going really smart, and embedding themselves inside these new, transformative digital processes,” such as signing a convincingly forged document or joining conference calls.

Spear phishing—or sending emails to specific targets—is getting more refined and is the big challenge for his team, Green says. Email attacks targeting users have sought to solicit fraudulent payments or access intellectual property. “Increasingly sophisticated social
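One way a defender might surface the “spray-and-pray” pattern Green describes can be sketched as follows. The log format, function name, and thresholds are assumptions for illustration, not any product's actual logic: a single source that fails logins against many distinct accounts inside a short window looks very different from one user mistyping their own password.

```python
from collections import defaultdict

def detect_password_spray(failed_logins, min_accounts=10, window_secs=300):
    """failed_logins: iterable of (timestamp, source_ip, account) tuples.
    Flags source IPs that fail against >= min_accounts distinct accounts
    within any window_secs-long sliding window."""
    by_ip = defaultdict(list)
    for ts, ip, account in failed_logins:
        by_ip[ip].append((ts, account))

    flagged = set()
    for ip, events in by_ip.items():
        events.sort()
        lo = 0
        for hi in range(len(events)):
            # Shrink the window until it spans at most window_secs.
            while events[hi][0] - events[lo][0] > window_secs:
                lo += 1
            accounts = {acct for _, acct in events[lo:hi + 1]}
            if len(accounts) >= min_accounts:
                flagged.add(ip)
                break
    return flagged
```

A source failing against a dozen different accounts in a few seconds is flagged; thirty failures against a single account from a forgetful user is not.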
When survey respondents were asked how worried they are that future cyberattacks against their companies will use AI, 97% cited future AI-enhanced attacks as troubling, with 58% of respondents saying such cyberattacks are very concerning. When asked which attacks, in particular, are worrisome, the most respondents, 68%, reported that impersonation and spear-phishing were their biggest fear (see Figure 3).

[Figure 4: Allocating more budget to security, 52%; assessing AI-enabled security systems, 44%; automating the investigation process, 39%; deploying autonomous response technology, 38%; hiring more security analysts, 38%; outsourcing to managed security service providers, 31%. Source: MIT Technology Review Insights survey of 309 business leaders worldwide, January 2021. Respondents were asked to choose all that apply.]

innovation, let alone respond fast enough, new technological answers are needed. Thousands of organizations rely on AI to react to a fast-developing cybersecurity incident, whether or not their security teams are in the office.

Known as an autonomous response, and enabled through self-learning AI, the technology can surgically interrupt an in-progress attack without interrupting day-to-day business. Here’s an example of an autonomous response in action: an electronics manufacturer was hit by ransomware that rapidly spread, encrypting files. The strain of ransomware had never been encountered before, so it wasn’t associated with publicly known compromise indicators, such as blacklisted command-and-control domains or malware file hashes. But the autonomous response AI identified the novel and abnormal patterns of behavior and stopped the ransomware in seconds. The security team then had enough time to catch up and perform other incident response work.
Defensive AI is a force multiplier, Heinemeyer says. By automating the process of threat detection, investigation, and response, AI augments human IT security teams by stopping threats as soon as they emerge, so people have the time to focus on more strategic tasks at hand.

Preparing for offensive AI
The vast majority of respondents are actively gearing up for AI-powered cyberattacks. Security teams are increasingly relying on autonomous technologies that can respond at machine speed when a cyberattack occurs. When asked how they’re preparing, survey respondents said that outside of allocating more budget to IT security and security audits, their organizations have also prioritized several defensive AI projects (see Figure 4).

With the onset of AI-powered attacks, organizations need to reform their strategies quickly, be prepared to defend their digital assets with AI, and regain the advantage over this new wave of sophisticated attacks.

Fortunately, it’s easier to flip the switch than some may realize. McLaren Racing’s Green can speak to that: when the pandemic forced lockdowns, he and his infrastructure team installed Darktrace’s AI-powered technology to defend their email environment within two days. With the help of AI, Green’s team can now focus on strategic priorities instead of stamping out the small fires of constant, low-level alerts. “Our cybersecurity team can then work with us on all of our weird and wonderful sensors in the cars and make sure those are nice and secure.”

It’s a simple formula: IT teams need to be duly prepared, Green says, because cybercriminal minds are turning to AI, too. “Much in the same way that lots of organizations look to use AI and machine learning to be more competitive, more efficient, to solve those big challenges they’ve got as companies, then you would start to expect that the same tools you’re using to be more efficient and effective—other people will use those to try to attack you.”

To learn more about how AI responds to sophisticated cyberattacks, visit darktrace.com/en/supercharged-ai/.
“Preparing for AI-enabled cyberattacks” is an executive briefing paper by MIT Technology Review Insights. We would like
to thank all participants as well as the sponsor, Darktrace. MIT Technology Review Insights has collected and reported
on all findings contained in this paper independently, regardless of participation or sponsorship. Jason Sparapani and
Laurel Ruma were the editors of this report, and Nicola Crepaldi was the publisher.
Illustrations
All illustrations assembled by Scott Shultz Design. Covers, pages 3 and 7 illustrations by Motorama, Shutterstock. Page 4 illustration by Real Bot, Shutterstock; pages 5 and 8 illustrations by MuPlus, Shutterstock.
While every effort has been made to verify the accuracy of this information, MIT Technology Review Insights cannot accept any responsibility or liability for reliance by any person on this report or any of the information, opinions, or conclusions set out in this report.