AI Magazine
AI Magazine
SUMMER 2025
1
A I C Y B E R SUMMER 2025
CO N T ENT
UPF RON T
11
AI Has Changed the Rules of Cybersecurity
Are we ready for what comes next?
17
Are LLM Guardrails a Commodity?
A thought-provoking op-ed
22
Governing the Ungovernable
Policy Blueprints for Self-Modifying AI
Agents
33
The Other Side of Agentic AI
Birthing a New World Order
38
The Power of Pictures in Public Policy
A best practice article
61
Model Context Protocol
The Missing Layer in Securing Non‑Human
Identities
56
DSPM Is the Missing Layer in Your AI
Security Stack
Protecting the data that fuels (and betrays)
our models
65 ON TH E COVER
Beyond Alert Fatigue Diana Kelley
How AI Can Actually Reduce Cognitive Chief Information Security Officer
Overload in Cybersecurity at Protect AI
2
A I C Y B E R SUMMER 2025
CO N T ENT
FE AT UR E S
3
A I C Y B E R SUMMER 2025
FRO M THE
E DITOR
@aicybermagazine
4
A I C Y B E R SUMMER 2025
CO N T RIBUTORS
Allie Howe Betta Lyon Delsordo Caroline Wong Confidence Staveley Diana Kelley
Allie is a vCISO After earning a Caroline Wong is the founder of AI Diana Kelley is the
and Founder of degree in Computer is the Director of Cyber Magazine, Chief Information
Growth Cyber. Science, Betty Cybersecurity the best guide to Security Officer
Growth Cyber helps pursued a Master’s at Teradata and understanding how (CISO) for Protect AI.
AI startups build in Cybersecurity the author of AI technologies She also serves on
trustworthy AI. Allie at Georgia Tech Security Metrics: A are shaping the boards of WiCyS,
has a software and completed Beginner’s Guide. cybersecurity. She The Executive
is also a multi- Women’s Forum
engineering numerous Her next book, on AI
award-winning (EWF), InfoSec
background, certifications and cybersecurity,
cybersecurity World, CyberFuture
a Masters in through an NSA will be published
leader, bestselling Foundation,
Cybersecurity, grant. She went by Wiley in Spring
author, international TechTarget
and is an active on to specialize in 2026.
speaker, advocate Security Editorial,
contributor on the application security for gender inclusion and DevNet AI/
11 - AI Has Changed the
OWASP Agentic penetration testing, in cybersecurity, ML. Diana was
Rules of Cybersecurity. Are
Security Initiative. with a focus on We Ready for What Comes and founder Cybersecurity Field
Allie has worked with web, cloud, and Next? of CyberSafe CTO for Microsoft,
leading analysts to AI hacking. In her Foundation. Through Global Executive
publish AI Security current role as MerkleFence, she Security Advisor at
vendor reports, an Application helps businesses IBM Security, GM
has spoken on AI Penetration Tester in North America at Symantec, VP
security at numerous at OnDefend, navigate the at Burton Group
conferences, and she searches for complexities of (now Gartner), a
hosts the Insecure vulnerabilities and application security Manager at KPMG,
Agents podcast. covert channels in with confidence. CTO and co-founder
of SecurityCurve,
web and mobile
9 - Are LLM Guardrails a and Chief vCISO at
applications.
Commodity? SaltCybersecurity.
17 - How I Use AI Tools
For Ethical Hacking
5
A I C Y B E R SUMMER 2025
CO N T RIBUTORS
Dr. Dustin Sachs Isu Momodu Jakub Szarmach Jarrod Coulter John Vaina
Abdulrauf
6
A I C Y B E R SUMMER 2025
CO N T RIBUTORS
Katharina Koerner Lalit Choda Michael Beck Olabode Agboola Oluseyi Akindeinde
Katharina Koerner Known in the Michael is the Global Olabode Agboola is the Founder
is a Senior Principal industry as “Mr. Chief Information is a UK-based of Hyperspace
Consultant at NHI,” Lalit Choda is Security Officer at Information Security Technologies,
Trace3, where she the founder of the Darktrace. Michael professional and specializing in
helps organizations Non-Human Identity has operated former CSEAN cutting-edge AI-
implement AI Management on some of the Training Director. driven technologies.
governance, Group (https:// world’s most critical A PECB Platinum
82 - Autonomous
security, and risk nhimg.org), where stages; from Trainer, he holds
AI-Driven Penetrating
management he evangelizes military intelligence an MSc with Testing of RESTful APIs
strategies. With and educates missions. Joining a Distinction
a background in the industry and Darktrace at its grade and top
AI policy, privacy organizations on early stages in 2014, certifications
engineering, and the risks associated Michael developed including CISM, ISO
enterprise security, with non-human the cyber analyst 27001 LA, LI and
she focuses on identities (NHIs) operation that ISO 42001 LI for
operationalizing and strategies supports thousands AI Management
responsible AI to address them of Darktrace System (AIMS).
through data- effectively. As a customers with Trained as Strategic
centric controls highly sought-after 24/7 support, a Executive at
and technical keynote speaker backbone of the Harvard, London
safeguards. She and an author of company’s AI-driven Business School, he’s
has held leadership white papers and defense. Since 2020, a global keynote
roles in research, research articles he’s also overseen speaker advancing
policy, and advisory on NHIs, he has Darktrace’s internal cybersecurity,
functions across established himself security program compliance, and
public and private as the leading NHI in his role as Global AI management
sectors in the U.S. voice in the industry. CISO and in 2021, systems.
and Europe. the company was
61 - Model Context Pro- 116 - AI Cyber Pandora’s
named a TIME 100
56 - DSPM Is the Missing tocol: The Missing Layer Box
most influential
Layer in Your AI Security in Securing Non-Human
Identities company.
Stack
7
A I C Y B E R SUMMER 2025
CO N T RIBUTORS
8
A I C Y B E R SUMMER 2025
9
A I C Y B E R SUMMER 2025
10
A I C Y B E R SUMMER 2025
AI Has Changed
the Rules of
Cybersecurity
Are we ready for what comes next?
By Caroline Wong
Adapted from her forthcoming Wiley book on AI and
cybersecurity (Spring 2026)
11
A I C Y B E R SUMMER 2025
12
A I C Y B E R SUMMER 2025
13
A I C Y B E R SUMMER 2025
15
A I C Y B E R SUMMER 2025
16
A I C Y B E R SUMMER 2025
Allie Howe
17
A I C Y B E R SUMMER 2025
I
see many AI Runtime Security vendors offering LLM
guardrails, as well as some evaluation platforms. I believe
this is a side effect of the lines being blurred between who
owns the responsibility of making sure AI systems output
relevant and safe information. It’s not just something your
security team cares about; your product team cares too.
The way I see the market right now, the products with the
best guardrails;
However, not everyone can afford an So which LLM guardrails are Bard AI chatbot shared inaccurate
AI Runtime Security product. Most information. In August 2024, Slack AI
of these new products are reserved
a Commodity? leaked data from private channels.
and marketed towards enterprise Over the last couple of years, stories
budgets. No matter where you of AI chatbots gone wrong have These headlines helped illustrate the
get your guardrails from (an eval consumed news headlines. For need for some sort of guardrails that
platform or an AI Runtime Security example, an Air Canada chatbot could prevent LLMs from outputting
product), it’s important to be an gave a customer misleading wrong information, private data, or
informed consumer. That means information about bereavement offensive content. Security startups
understanding which LLM guardrails fares and was later ordered to got to work and started offering
are a commodity, which are not, and provide a refund to the customer. guardrails that most businesses
how close to your LLM you need In February 2023, Google lost $100 would need. These were novel at first,
these guardrails to sit. billion in market value after its but today you’ll see most AI Runtime
18
A I C Y B E R SUMMER 2025
19
A I C Y B E R SUMMER 2025
with great power comes great guardrails is a good idea since we’ll
responsibility. Everyone remembers never fix things like prompt injection
the CrowdStrike blue screen of death with a shift-left strategy. Lots of these
debacle that delayed thousands are now commoditized, but you can
of flights last summer thanks to a
bad software update to one of their
evaluate vendors based on guardrail
customizability and deployment
AI security is not just
products deployed via eBPF. Thanks options as differentiators. AI security important to prevent
to that, there’s some amount of risk is not just important to prevent
and consumer hesitation with this your application from becoming your application from
type of deployment. a headline; it’s also a business
enabler. Use guardrails to secure
becoming a headline;
Deploying guardrails near the LLM your application against prompt it’s also a business
is a straightforward process. They attacks, but also to improve product
wrap LLM calls in additional APIs performance and align your AI to enabler.
and will get visibility into granular your unique use case.
LLM actions that allow for a good
debugging experience; however, Default LLM guardrails are
they may introduce additional commoditized, but alignment will
latency into the application. You never be.
might find that latency increases the
more guardrails you add.
20
A I C Y B E R SUMMER 2025
21
A I C Y B E R SUMMER 2025
Governing the
Ungovernable
By Rock Lambros
Policy Blueprints for Self-
Modifying AI Agents
22
A I C Y B E R SUMMER 2025
23
A I C Y B E R SUMMER 2025
T
raditional AI governance is dead.
24
A I C Y B E R SUMMER 2025
minute by minute. The temporal and zero-knowledge proofs to directly into AI cores. The moment
mismatch is fundamental. We need create a tamper-resistant global behavior veers beyond predefined
a paradigm shift from point-in-time registry of AI agents, enforcing guardrails, execution halts with no
oversight to continuous governance proportional oversight and committees, delays, or exceptions.
mechanisms that never sleep and automating compliance monitoring. These circuit breakers provide a
evolve as rapidly as the systems [2] The beauty lies in its redundancy, seamless, code-level shutdown
they monitor. as no single point of failure exists mechanism that preserves
when multiple independent systems performance during normal
monitor AI behavior. Yes, smart operation while standing ready to
contracts leverage blockchain. intervene within milliseconds. By
You may roll your eyes now, but a embedding these brakes alongside
consensus-based decentralized model reasoning pathways, any out-
We need a paradigm system can help rein in agent sprawl. of-bounds action gets caught and
We need dual-component AI…let’s contained in real time.
shift from point- call it Janus Systems, after the two-
in-time oversight faced Roman deity. One component
Governance as Code
ruthlessly pursues objectives while
to continuous the other constantly monitors for
alignment failures, creating an
governance internal check-and-balance system.
mechanisms that The actor bulldozes ahead,
Static rulebooks collapse under permanent, verifiable records that speed. We can’t wait for humans to
the weight of autonomous systems persist regardless of how systems notice something went sideways.
that adapt and self-modify. evolve. This enables post-hoc This machine-to-machine oversight
“Governance as Code” transforms analysis of governance failures and loop mitigates vulnerabilities faster
abstract policies into executable provides critical data for improving than agents can mutate, finally
blueprints that live alongside your oversight mechanisms. aligning safety with the breathtaking
infrastructure. Guardrails written pace of AI innovation.
in code automatically enforce
themselves at runtime rather than
The Path Forward
waiting for the next audit cycle.
Some of you will cringe as you read Letting AI guard itself sounds brilliant
this… We WILL ultimately need AI to
govern AI.
We WILL ultimately until agents start reward hacking and
colluding. Agents learn to sidestep
Embrace it or go the way of the dodo need AI to govern AI. or disable their own checks in pursuit
bird. of objectives. We risk overestimating
This approach unifies compliance, their impartiality if we expect these
security, and operational practices
Continuous Adversarial internal regulators to flag every
under a single source of truth, Testing misstep. After all, the monitor’s code
ensuring every change is verified was written by humans with blind
Passive defenses eventually fail.
against governance rules before spots of their own.
Continuous adversarial testing
deployment. You get real-time embeds active, automated probing
feedback on drift and deviations by Decentralization promises resilience
mechanisms that relentlessly
embedding policy checks into CI/CD but fragments accountability.
search for weaknesses. Picture an
pipelines. When something breaks, nobody
adversarial engine churning out
When your models can develop new wears the badge. Governance forks
attack scenarios and probing every
capabilities or rewrite their logic in can splinter standards into chaos,
nook of your model’s behavior
production, your governance must creating inconsistent enforcement
to catch flaws before they reach
be equally dynamic, ready to codify that clever agents exploit.
production.
new policies, deploy updated checks,
and enforce constraints at machine Self-regulation appeals to the
In 2024, OpenAI published research
speed without human bottlenecks. industry’s need for agility, but history
that blended human expertise with
Model versioning and immutable shows that voluntary codes will not
automated red teaming powered
audit trails enable accountability work under competitive pressure.
by GPT-4T, creating an ecosystem
in dynamic systems. Google These tensions demand thoughtful
of stress tests that hunt down weak
DeepMind’s “Model CV” approach balancing rather than absolutist
spots at machine speed. [3] This
creates continuous, tamper-proof approaches.
creates a self-directed adversary
records of model evolution, allowing within your pipeline, flagging exploit
stakeholders to track capability Governance and autonomy must
paths as they form and feeding
emergence and behavioral changes. remain locked in perpetual feedback
them directly into incident response.
Combining these approaches with as models surface new capabilities,
Every millisecond counts when
blockchain-based logging creates governance layers adapt in real
agents rewrite themselves at warp
26
A I C Y B E R SUMMER 2025
time, and stakeholders iterate C-Suite Action Plan The conventional governance
policies with the same rigor as code playbook is obsolete. Organizations
deployments. that thrive will implement
1. Implement Dual-Layer
governance mechanisms as
Oversight: Adopt actor-critic
It’s time for regulators, technologists, dynamic and adaptive as the AI
architectures that separate
and industry leaders to converge systems they’re designed to control.
capability from governance,
on shared tooling: dynamic policy
with independent monitoring
as code, continuous adversarial 1. Lohn, A., Knack, A., & Burke, A.
systems tracking model
testing, and transparent audit trails. (2023). Autonomous Cyber Defence
behavior.
If AI is a moving target evolving at Phase I. Center for Emerging
exponential rates, our governance Technology and Security. https://
2. Deploy Ethical Circuit
cannot remain anchored to cetas.turing.ac.uk/publications/
Breakers: Implement
yesterday’s assumptions. a u to n o m o u s - c y b e r- d efe n ce
automated shutdown
mechanisms triggered by
Either we learn to sprint alongside 2. Tomer Jordi, T. J., Goldston,
behavior outside acceptable
these self-modifying agents, or J., Okusanya, B., & D.A.T.A. I, G.
parameters, with clear
we risk being left in their dust as (2024). On the ETHOS of AI Agents:
escalation protocols.
they evolve beyond our control. An Ethical Technology and Holistic
The race has already begun. The Oversight System. Arxiv.org.
3. Establish Governance
question is whether our governance https://arxiv.org/html/2412.17114v2
as Code: Transform policies
approaches will evolve quickly
into executable code that
enough to keep pace. 3. https://openai.com/index/
integrates with development
a d v a n c i n g - re d - t e a m i n g - w i t h -
pipelines and enforces
people-and-ai/
constraints at runtime.
27
A I C Y B E R SUMMER 2025
AI In Cybersecurity Bookshelf
From defending against AI-powered threats to securing generative AI systems, the challenges are as complex as
they are urgent. To help you stay ahead, we’ve handpicked five must-read books that combine cutting-edge insights,
practical strategies, and real-world case studies. Whether you’re a developer, CISO, or policymaker, these books are
your guide to staying ahead in the age of AI-driven security.
Grab it on Amazon
Find it on Amazon
28
A I C Y B E R SUMMER 2025
AI In Cybersecurity Bookshelf
Available on Amazon
29
A I C Y B E R SUMMER 2025
AI In Cybersecurity Bookshelf
30
A I C Y B E R SUMMER 2025
31
A I C Y B E R SUMMER 2025
32
A I C Y B E R SUMMER 2025
Olabode Agboola
Throughout history, people have been amazed by the
creativity and complexity of early inventions like watches,
automobiles, airplanes, computers, industrial machines,
ships, and so many more. But when it comes to the
brilliance behind the development of AI technology, it truly
stands out as something exceptional. Artificial intelligence
really has the potential to change everything about how
we think, reason, and even exist.
designed to operate independently. bunch of others. tech company named Shield AI that
This could really help businesses just rolled out a new system called
boost their efficiency and reduce Agentic AI has a few specific the MQ-35 V-BAT. It’s an advanced
the need for human involvement. roles: it can handle everything unmanned aerial system (UAS) that
These days, folks are automating from gathering data to analyzing can take off and land vertically,
their routines, and decisions are it, making decisions, providing thanks to its Agentic AI power. This
being made by Agentic AI for them. responses, and giving feedback, all electronic war system is designed to
Agentic AI is making its way into a on its own. It can get a bit unsettling autonomously deploy Data Agents
bunch of different industries, from when you think about leaving an for data collection against its targets
defense setups to national security AI to gather and analyze data and and can make decisions similar to
operations, and it’s being woven into make decisions on its own. But drone swarms. A lot of countries are
all sorts of systems and machines. really, it shouldn’t be that scary if using and incorporating Agentic
the places where this is happening AI into their electronic warfare
Agentic AI can be used in a bunch aren’t putting human lives at risk. systems. China has tapped into
of different areas like delivery Taking a closer look at the different the potential of Agentic AI with
bots, self-driving cars, and drones. kinds of Agentic AI reveals some their advanced unmanned ground
It really helps with making quick serious concerns about letting them system known as CETC. This system
decisions about route optimization, function in cyber-physical settings, isn’t officially labeled as an Agentic
navigation, and avoiding obstacles especially in military systems and AI enabled system just yet, but you
by integrating Agentic AI into the operations. The data agent is built can definitely see some features that
designs. Manufacturing is getting to gather information on its own, suggest it has those characteristics.
a boost with the help of embedded no matter where it’s set up. You can CTEC is designed to manage large-
Agentic AIs, making things run more collect data in a bunch of ways, like scale deployments of drone swarms,
smoothly than ever. These days, tapping into databases, using data carry out precise autonomous
production lines are managed more from sensors in the field, accessing strikes, and conduct reconnaissance
effectively. Fault detection gets a APIs, and plenty of other methods. and surveillance.
helping hand, downtime is cut down, The Analysis Agent looks at what the
and output is boosted thanks to Data Agent produces, and then the
Agentic AIs in the production and Decision Agent makes its own call
manufacturing sectors. Bringing based on what both the Data Agent
Agentic AI into cybersecurity and Analysis Agent have provided.
defense systems has really stepped Drone swarms, which
up threat detection. Now, defense All of this can happen without
decisions are made automatically, anyone having to step in. In military use machine learning
and countermeasures are rolled operations, Agentic AI is now
and real-time data
out in real time. There are quite a handling some pretty complex
few other areas where Agentic AIs strategies. A great example of analysis to navigate their
have made their mark, like logistics, this is drone swarms, which use
disaster response operations, machine learning and real-time
targets’ environments
healthcare robotics, hydrocarbon data analysis to navigate their and carry out tactical
exploration and production, energy targets’ environments and carry
grids, space exploration rovers, out tactical operations or offensive
operations or offensive
financial fraud management, and a tasks. So, there’s this US defense tasks
34
A I C Y B E R SUMMER 2025
Now that Agentic AI is on the scene, That’s pretty concerning and a government, and strong technical
everyday systems are getting bit frightening. This development and professional safeguards, along
some extra attention. We’re talking comes with some serious risks, like with ethical guidelines to keep
about a whole new way of looking misinterpreting intent, unplanned everything in check.
at how society keeps an eye on escalation, and possibly losing
things. With Agentic AI being human control in high-stakes
part of our mobile devices, online military situations. So, it turns out
platforms, smart infrastructure, and that the US Department of Defense
surveillance systems, it feels like has shelled out around 10 billion
we’re constantly being watched and dollars over the past five years to
monitored without even realizing it. boost their military operations with
When we think about how people’s AI. Pretty interesting, right? We
communications, online behaviors, don’t have the exact percentage
and movements are being monitored of the 1.3 trillion USD that China
or tracked, whether actively or has spent on AI, but it’s generally
passively, it’s time to chat about this believed that they’ve ramped up
other aspect of Agentic AI. their investment in AI to boost their
military capabilities. In 2024, Russia
It looks like we might be on the brink is expected to spend around 54
of a global arms race, all thanks to million USD on AI development.
how countries are starting to blend France’s ministry of armed forces
AI with their military strategies and has kicked off a program named
operations. ARTEMIS.IA, focusing on big data
processing, AI-driven analysis, and
support for military operational
decisions. France set aside about
€100 million each year from 2019 to
2025 for defense AI.
Unlike traditional
Countries are ramping up their
military tactics, AI- spending on Agentic AI to boost
military capabilities, and it seems like
driven war systems can
this is paving the way for a new world
work at machine speed, order. There’s a lot happening on the
other side of Agentic AI, especially
identifying threats when it comes to the race for better
or engaging targets autonomous weapons, decision-
making systems, and surveillance
without any human systems. When it comes to using AI in
involvement. Cyber Physical systems (CPS) in the
military, it’s really important to have
some solid rules in place. We need
good governance, oversight from the
36
A I C Y B E R SUMMER 2025
37
A I C Y B E R SUMMER 2025
The Power of
Pictures in Public
Policy
How Visuals Can Correct
Misconceptions and Improve
Engagement
By Jakub Szarmach
38
A I C Y B E R SUMMER 2025
39
A I C Y B E R SUMMER 2025
W
Why Words Fail ?
We’ve all seen it. A 30-page policy report that makes your
eyes glaze over by paragraph three. It’s packed with facts,
dense with citations, and totally unreadable.
40
A I C Y B E R SUMMER 2025
These aren’t just nice-to-have additions. They’re it, and the hippocampus stores it long-term.
comprehension machines. They strip away
ambiguity. They give your reader a structure to What does this mean for policy? It means if you
hang everything else on. And they’re far more want someone to understand a new rule, procedure,
efficient than even the best-written paragraph, or risk model, your best bet isn’t a wall of text. It’s a
because they match how the brain likes to learn: visual that makes the stakes feel real. Good visuals
visually, spatially, and all at once. grab attention and direct it where it matters. They
help brains do what brains do best: notice, learn,
and remember.
41
A I C Y B E R SUMMER 2025
5% 72%
O N LY AND
Don’t bury them in acronyms. Don’t
hand them a deck that needs its own
glossary. Give them a diagram they
can absorb in one glance.
According to Deloitte’s 2025 Feel “very ready” to oversee mainly engage on these topics with
related initiatives CIOs and CTOs-not with CFOs,
(Deloitte. (2025). Governance of AI:
CISOs, or risk officers
A Critical Imperative for Today’s
Boards. Deloitte Insights) survey:
more effective than 40 It’s the best kind of visual: one that
1. Everything is grouped by
slides and a donut chart. real-world ownership, not just
saves time, reduces risk, and actually
gets used.
abstract themes.
42
A I C Y B E R SUMMER 2025
Less Telling. More Showing. stay buried in the fine print. They
connect the dots across silos, teams,
Visuals aren’t decoration. They’re not
and time zones. They don’t just help
the cherry on top of a policy sundae.
people follow the story—they help
They’re the plate the whole thing
people act on it. Visuals aren’t decoration.
sits on. Without that plate, you’re
So next time you write a strategy,
just flinging scoops of information
draft a law, or prep a board update,
They’re not the cherry on
onto the floor and hoping someone
catches them.
don’t ask, “How can I explain this top of a policy sundae.
better?”
When done right, visuals don’t just
Ask: “What can I show instead?”
They’re the plate the
make your ideas prettier—they
make them possible. They clarify
Then show it. Badly, if necessary. whole thing sits on.
Just start.
who does what and when. They
spotlight risks that would otherwise
43
A I C Y B E R SUMMER 2025
44
A I C Y B E R SUMMER 2025
Diana Kelley
D I S CU SS ES
45
A I C Y B E R SUMMER 2025
46
A I C Y B E R SUMMER 2025
systems that analyze and categorize information, system might give a different response to the same prompt.
GenAI models learn the underlying patterns and That means standard testing methods won’t suffice. Instead,
structures in their training data to generate novel we use AI-driven testing, “AI testing AI” via adversarial prompts
outputs—whether text, images, code, or other to harden models against prompt-injection and other attacks.
media. These systems use sophisticated neural By reframing how we protect data, vet models, and test non-
network architectures, such as transformers for deterministic behavior, we can apply our security expertise
language models or diffusion models for image effectively to AI.
generation, to produce content that didn’t exist in
their training sets but follows the learned patterns
and styles. The “generative” aspect distinguishes
these systems from their predecessors: while a
traditional ML system might classify an email
as spam or legitimate, a GenAI system could
compose an entirely new email based on prompts
and context provided by the user.
47
A I C Y B E R SUMMER 2025
RSAC 2025 was buzzing about autono- Software has SBOMs; you’ve called for an MBOM
mous agents; what do most practitioners (Model BOM) for AI artifacts. What does a “mini-
still misunderstand about how agents mum‑viable Model BOM” look like today, and how
really operate—and why does that gap should it mature as composability explodes?
matter?
Diana Kelley
Diana Kelley
This is a great question! I want to give a shout-out to Helen
Yes, and agentic AI, funny, right? Every year at RSA,
Oakley, who’s been leading the charge on what we’ll call
there’s that buzzy emerging tech on everyone’s
M-BOMs, ML-BOMs, or AI-BOMs (we haven’t settled on a name
lips, and this year it was agents. But people tend
yet). Basically, an AI bill of materials builds on the software
to think AI just gets smarter on its own, constantly
BOM idea, listing all the “ingredients” in your system, but
leveling up. In reality, AI only improves with better
adds AI-specific elements. Sure, you need to track libraries
training and data; it doesn’t magically evolve. So if
and dependencies, but you also need to know which datasets
you buy an agent today, it won’t automatically be
were used or cleaned, whether that data was approved and
better months from now without human oversight.
by whom, the provenance of every model (where it came from,
I loved someone’s post on LinkedIn calling agents
who trained it), and how those models were tested. All those
“interns with access”, they’re only as good as our
unique components have to go into your AI-BOM. It’s early
training, and they can drift. We still need humans
days, though, so stay tuned as this work evolves.
in the loop to train, monitor, and ensure agents
operate within their systems; one wrong LLM
output can cascade through an entire workflow. In your experience, what should a highly effective
Agents aren’t a magic solution, and they probably MLSecOps lifecycle look like? Walk us through an
never will be.
ideal life‑cycle—from data collection to retired
model to ensure Secure-by-Design principles are fol-
lowed. Please feel free to spotlight one control people
always forget.
I loved someone’s post on LinkedIn Diana Kelley
MLSecOps is essentially DevSecOps for the MLOps lifecycle:
calling agents “interns with access”, weaving security in from start to finish. First, scope your project
they’re only as good as our training, to decide if you truly need ML or AI and confirm you have the
and they can drift. right data (enough, relevant, privacy-compliant). Next, during
data preparation, clean and secure live datasets to avoid under-
or overfitting. When training models, scan them for malicious
code and ensure they fit their intended purpose. As you move to
testing, remember that components might behave differently in
isolation than inside a larger system, so test both dynamically
and within the full environment. Deployment demands careful
architecture: a free, cloud-hosted chatbot has very different
48
A I C Y B E R SUMMER 2025
security considerations than a self-hosted foundation model As for overfeeding, that typically causes
on AWS Bedrock. In SaaS, control is limited mostly to data and overfitting. The model becomes exceptionally
authentication; in IaaS or Kubernetes, you manage more layers good at recognizing patterns in its training data,
(OS, networking, etc.). Throughout deployment, apply zero but it loses flexibility. When you give it new, unseen
trust and least-privilege principles to data, APIs, and models. data, it can’t generalize well and its accuracy on
Finally, runtime monitoring is critical, models drift and can start fresh inputs drops significantly.
producing incorrect or unsafe outputs. Monitor continuously,
retrain or retire models that misbehave, and ensure they’re torn What is Shadow AI and what are
down securely at the end of their lifecycle. By integrating these
some ideas for tackling this challenge
practices, threat modeling, secure architecture, data hygiene,
model vetting, and continuous monitoring, you build a robust in the Bring-Your-Own-AI era we’ve
MLSecOps process. just stepped into? Which governance
lever has proven most effective: policy,
discovery tooling, or cultural incentives?
Diana Kelley
There’s a lot to unpack, but first, I’d like to share
By integrating these practices, threat credit because that RSA session was a panel
modeling, secure architecture, data hygiene, with three brilliant colleagues, so we had many
viewpoints represented. A summary of the key
model vetting, and continuous monitoring,
takeaways was posted on LinkedIn.” (https://
you build a robust MLSecOps process. www.linkedin.com/posts/john-b-dickson-cissp-
41a149_rsac2025-rsac2023-shadowai-activity-
From your response, two questions popped into my 7330359488136249344-Kyk1?) Shadow AI is
head. First, what happens if a model is overfed with especially interesting because it echoes what
happened with cloud. Right now, companies worry
data? Second, runtime visibility is a huge challenge,
about employees using unauthorized tools, say,
despite static and dynamic testing, things can still someone using Perplexity or Claude when you’ve
go wrong in production. Can you speak more about officially adopted Gemini or Microsoft Copilot. It
becomes a game of monitoring outbound traffic
that?
and gently steering people back to the approved
Diana Kelley AI. But there’s another side to shadow AI: the
Sure. For runtime visibility, you need tools that capture inputs predictive machine learning systems that have
and outputs as they happen. Some teams use eBPF hooks at quietly run in segmented pockets of organizations
the kernel level to mirror everything sent to and from the LLM. for years (much like OT systems on factory floors).
Others insert a proxy or tap/span layer between the model and
its consumers, whether that’s a human user, another LLM, or an
agent, so you log every request and response without adding
noticeable latency. That way, if a model starts behaving
unexpectedly, you have a complete audit trail to investigate
what went wrong.
49
A I C Y B E R SUMMER 2025
50
A I C Y B E R SUMMER 2025
Among AI‑native start‑ups you advise, what In April the news broke about Protect
security hurdle consumes the most oxygen? AI’s partnership with Hugging Face.
Diana Kelley I honestly heaved a huge sigh of relief
AI-native founders are all about vibe coding and agentic and was very excited for very obvious
systems, but their security hurdles are familiar. Vibe coding
doesn’t let you skip solid development practices: you still have
reasons. Protect AI’s Guardian scanners
to architect, test, and protect your software. The real pitfalls have scanned 4.4 million model versions
are misunderstanding the market, overestimating what AI can and flagged 350 k+ issues—what trend
do today, and rushing to launch. It’s classic founder pain, you
most surprised you, and how should
must pinpoint real customer problems and pick the right tools,
not assume ChatGPT will instantly create a unicorn. Deeply security teams translate that into an
understanding the pain you’re solving is still non-negotiable. import checklist?
Diana Kelley
Yeah, it’s funny, the biggest surprise was
no surprise: attackers simply repurpose old
techniques in a new space. When we moved to the
The real pitfalls are misunderstanding the cloud, account takeover and privilege escalation
jumped straight in, and with models it’s the same.
market, overestimating what AI can do today, First, typo-squatting: just as malicious sites mimic
and rushing to launch. “google.com,” you’ll see “Meta Llama” instead of
“Llama 3” to trick downloads. Next, dependency-
What practical controls can resource‑constrained chain attacks exploit a vulnerable library in your
ML workflow. Then there’s malcode insertion like
teams deploy to detect poisoned training sets?
steganography for images or Word docs, except
Diana Kelley embedded in model files so once the model
Yeah, so obviously, if you have your own training set, if you runs, that Python code can exfiltrate data, drop
control the training data that’s the best way to know and detect executables from an S3 bucket, or even enable
access in and out. You can lock down who can see or touch the account takeover. Don’t forget neural backdoors,
data with strict access controls. But if you’re using a model and where a baked-in sequence triggers malicious
don’t know what data it was trained or tested on, you need behavior on a specific prompt. These aren’t new
to cover your bases with testing. Dynamically, you bombard threats, they’re just hiding in new artifacts, so we
it with questions, query its responses, and watch for anything need new tools to spot and report them.
that’s off or unexpected.
One bright spot though is that Hugging Face now
You also want to run static analysis to spot any neural- pre-scans models and shows you risk ratings kind
architecture backdoor, someone might have baked in a trigger of like VirusTotal so before you download, you get
that, upon a preset prompt, yields a specific response. Spotting a heads-up if a model has been flagged by them
that odd behavior is your red flag that the model was trained or or other scanners.
modified in ways you didn’t authorize.
51
A I C Y B E R SUMMER 2025
52
A I C Y B E R SUMMER 2025
With the increased use of large language models for vein-brain barrier” of our network. In practice,
those brittle rules either flagged every innocent
both offense and defense, what concrete steps should mention of “resume” or missed clever obfuscations
organizations take today to brace for AI‑powered entirely. They did OK on clear patterns, credit-
offensive tooling? card numbers, SSNs, but anything conversational
slipped through.
Diana Kelley
Enter GenAI with its natural-language smarts.
Yeah, there’s an “AI” version of every attack and maybe a “non-
Now, instead of just spotting “CV.pdf,” an AI-
AI” version too — which means we’ll have to fight AI with AI. It’s
driven DLP can parse a message like “I’m really
like a cold war between attackers and defenders, so we need
excited about the open role in marketing, here’s
tools that can use AI to detect AI-powered attacks at machine
my background” and flag it as a potential job-
speed.
hunt leak. It understands intent, not just keywords.
I’m genuinely excited to see vendors embedding
Beyond technology, our processes must be AI-aware: are
GenAI into DLP, finally, a solution that catches
your incident-response plans “AI ready”? Do you know which
the real signals rather than drowning us in false
signals to watch for when an attack comes from a generative
positives.
model? And train your people on AI-driven social engineering.
Deepfakes, cloned voices, AI-crafted videos — a phone call or
video no longer proves identity. Attackers can scrape public
Regulation always plays catch up. If you
details (like “I went to Boston College, how are Nick and Nora?”)
to feign familiarity. But knowing my dogs’ names doesn’t mean could insert one clause into the EU AI Act
you know me. or NIST AI RMF to fast‑track alignment
with technical reality, what would it
say?
Diana Kelley
I have huge respect for frameworks like the EU
there’s an “AI” version of every attack and and NIST AI RMF, they rightly acknowledge there’s
maybe a “non-AI” version too — which no one-size-fits-all. I especially appreciate the
EU’s tiered risk approach, and I’d love to see even
means we’ll have to fight AI with AI. more emphasis on security within AI’s shared-
responsibility model. After all, securing a publicly
Where are legacy security tools failing probabilistic hosted foundation model is very different from
systems, and what new capability do you wish a locking down an embedded Copilot or Gemini in
your workspace, or running your own on-prem
vendor would tackle tomorrow?
instance. We need guidance that maps specific
Diana Kelley use cases and deployment architectures to their
I think we’ve talked a lot about the testing and all that. Another unique risk profiles, so we can tailor our security
area that I’m actually really excited about in regards to how and risk-management practices to each scenario.
AI can help advance cybersecurity protections is in the realm
of DLP or data leak prevention or protection. I’ve been around
DLP since those heady days 10–15 years ago when we thought
it would stop every “resume” or “CV” leaking out the “blood-
53
A I C Y B E R SUMMER 2025
55
A I C Y B E R SUMMER 2025
DSPM Is theMissing
Layer in Your AI
Security Stack
Why modern AI security begins - and
succeeds - with securing the data layer
56
A I C Y B E R SUMMER 2025
57
A I C Y B E R SUMMER 2025
AI is changing the enterprise - but From Privacy to Posture: critical questions like:
as its footprint expands, so does
its attack surface. From shadow
The Evolution of DSPM Is this dataset safe to use in training?
AI deployments to data leakage DSPM emerged from early privacy Who has access to that financial
through large language models, the technologies that focused on record?
risks associated with AI adoption scanning data stores for personally Has sensitive data been copied into
are intensifying. identifiable information. These tools a shadow AI environment?
helped organizations meet growing
Despite strong investment in AI regulatory obligations by identifying By starting with the data and
capabilities, one foundational truth sensitive data and reporting risk. building visibility outward, DSPM
remains overlooked in many security complements existing tools while
strategies: AI is only as secure as But modern DSPM platforms have laying the foundation for AI-
the data it uses - and most security moved far beyond discovery. They ready security. It doesn’t replace
tools weren’t designed to protect now deliver real-time, automated traditional controls—it feeds them.
that layer. While traditional controls data visibility, access governance, By adding real-time data visibility
focus on securing environments, and risk remediation across hybrid and sensitivity context, DSPM
endpoints, or identities, they miss cloud, SaaS and AI workload- makes tools like CSPM, IAM, and
the sensitive data AI systems ingest, intensive environments. What began DLP effective in securing how data
process, and generate. If you don’t as a privacy utility has matured into is actually accessed, shared, and
know where your data lives, who a critical security layer - integral to processed by AI systems.
accesses it, or how it flows, your AI safe, responsible AI development
security posture is incomplete by and deployment.
design.
58
A I C Y B E R SUMMER 2025
Why AI Demands DSPM especially on-prem, file shares, or If You Want Secure AI, Start
proprietary SaaS apps.
This shift from static compliance with Secure Data….
tooling to dynamic data posture Securing AI doesn’t start with the
Over the past three years, the DSPM
management comes at exactly the model - it starts with the data. From
market has evolved rapidly. Today,
right time. As organizations embrace training to prompting to inference,
leading solutions share several
AI, the scale, speed, and complexity sensitive data moves rapidly
cloud-native traits:
of data usage has outpaced what through AI systems, often outside
traditional security tools were traditional security perimeters.
designed to handle. AI systems • Context-aware
DSPM gives security teams the
don’t just use data - they are built classification, using AI/ML to
visibility, classification, and control
on it. Models ingest structured and minimize false positives and
needed to govern this data in near
unstructured data, move it across accurately identify sensitive
real time, across cloud, SaaS, and
tools and clouds, and generate data in complex formats like
hybrid environments.
synthetic outputs that may expose contracts, source code, or
or replicate sensitive content. multilingual content
For AI security teams, DSPM enables
To secure this process, DSPM • Access risk scoring,
answers to the questions that matter
provides five essential capabilities: highlighting overprivileged
most:
users, stale permissions, or
public data exposure
• Remediation hooks, • Where is our sensitive
integrating with SIEM, data, and how is it being used
SOAR, ticketing, or policy in AI workflows?
enforcement tools to drive • Are we exposing more
action than we intend through
• Cross-environment training, prompts, or outputs?
visibility, covering multi- • Can we demonstrate
cloud, SaaS, and hybrid compliance and meet
architectures without AI-specific regulatory
requiring agent sprawl expectations?
• Ecosystem readiness, • Are we empowering
with API-first designs and innovation without
integrations into DLP, GRC, compromising governance?
IAM, and lineage platforms
60
A I C Y B E R SUMMER 2025
Model Context
Protocol
The Missing Layer in
Securing Non-Human
Identities
by Lalit Choda (Mr NHI)
The cybersecurity perimeter isn’t just about human users or login
screens anymore.
So, this is where the Model Context Protocol (MCP) steps in.
61
A I C Y B E R SUMMER 2025
How MCP and NHIs Intersect agent? while enforcing security boundaries
• Scoped: What can it do? and business logic around what
AI models that interact with systems,
those agents can see or do.
like retrieving sensitive records are
• Monitored: What has it done?
effectively acting as NHIs. That
means they must be:
MCP provides the structure for these
controls. It allows organizations to
• Identified: Who or what is the
delegate actions to AI agents safely,
62
A I C Y B E R SUMMER 2025
63
A I C Y B E R SUMMER 2025
Context Protocol provides a way to the task, and the limits on behavior.
move ahead. It’s not a quick fix, but it MCP is set to be a key building block
definitely marks a key change from for Zero Trust in machine-driven
fixed identities and wide-ranging infrastructure. When it comes to
permissions to more flexible, context- AI assistants handling workflows
based policy enforcement. If it’s or robotic process automation in
designed well, MCP could turn into finance, it’s all about earning trust
the digital system that makes NHIs through actions rather than just
predictable, safe, and accountable. relying on credentials.
64
A I C Y B E R SUMMER 2025
Beyond Alert
Fatigue
How AI Can Actually
Reduce Cognitive
Overload in
Cybersecurity
by Dr. Dustin Sachs
The average SOC analyst makes more decisions in a single
shift than most people do in a week, and the stakes are
existential. Every blinking alert, every incomplete data
trail, every ambiguous log entry demands judgment under
pressure. And yet, the very tools meant to help, dashboards,
threat feeds, SIEMs, often flood defenders with so much
information that they become paralyzed, fatigued, or
worse, desensitized. This is the real threat behind cognitive
overload in cybersecurity. But what if AI didn’t just
accelerate detection, but actively reduced mental load?
What if it could help us think better, not just faster? AI, when
designed with behavioral insights in mind, can become not
just an automation engine but a cognitive ally (Kim, Kim, &
Lee, 2024).
65
A I C Y B E R SUMMER 2025
66
A I C Y B E R SUMMER 2025
Reframing AI as a Cognitive Strategic Recommendations not only keep pace with threats
but develop the capacity to adapt,
Augmentation Tool for Implementation learn, and excel over time.
To realize AI’s true potential, it must To maximize impact, organizations
be reimagined not as an automated should embed AI into their References
watchdog, but as a cognitive ally. cybersecurity workflows using
• Akhtar, Z. B., & Rawol, A. T.
The shift from detection engine to human-centered design principles.
(2024). Enhancing cybersecurity
decision support system is not just
through AI-powered security
semantic, it’s strategic. AI must be
mechanisms. IT Journal Research
designed to think with analysts, not
and Development. https://doi.
for them. Intelligent prioritization is
org/10.25299/itjrd.2024.16852
one such avenue. Instead of treating
all anomalies equally, advanced
• Bernard, L., Raina, S., Taylor,
systems can learn from historical
B., & Kaza, S. (2021). Minimizing
triage behavior to rank alerts based
cognitive overload in
on their likelihood of actionability.
cybersecurity learning materials:
This helps analysts focus on
An experimental study using
meaningful threats rather than
eye-tracking. Lecture Notes
getting mired in low-priority noise
in Computer Science, 47–63.
(Romanous & Ginger, 2024).
https://doi.org/10.1007/978-
3 - 0 3 0 - 8 0 8 6 5 - 5 _ 4
Natural language summarization
offers another path to cognitive
• Camacho, N. G. (2024). The role
relief. Rather than forcing analysts
of AI in cybersecurity: Addressing
to parse dense logs or sift through
threats in the digital age.
raw data, AI-powered tools like
Journal of Artificial Intelligence
Microsoft Security Copilot and IBM
General Science. https://doi.
QRadar condense information into Cybersecurity is ultimately a human
o r g / 1 0. 6 0 0 8 7/ j a i g s .v 3 i 1 . 7 5
executive summaries. This allows endurance sport, demanding
rapid comprehension and speeds sustained attention, resilience
• Cakır, A. M. (2024). AI driven
up decision-making (Akhtar & Rawol, under pressure, and rapid decision-
cybersecurity. Human
2024). Behavioral AI integration making amid uncertainty. In this
Computer Interaction. https://
takes this even further by adapting high-stakes landscape can become
d o i .o rg /1 0.6 2 8 0 2 / j g 7g g e 0 6
to how individual analysts work. a trusted teammate rather than
These systems learn usage patterns an overbearing taskmaster. By
• Cau, F. M., & Spano, L. D. (2024).
and present information in more shifting the narrative from AI as an
Mitigating Human Errors and
digestible, chunked formats, automation panacea to a strategic
Cognitive Bias for Human-
minimizing unnecessary context- cognitive asset, security leaders
AI Synergy in Cybersecurity.
switching. Subtle nudges, such empower their teams to make
In CEUR WORKSHOP
as highlighting inconsistencies or better, faster, and more informed
PROCEEDINGS (Vol. 3713, pp.
recommending secure defaults, can decisions. This reframing fosters
1-8). CEUR-WS. https://iris.unica.
help ensure consistency under stress an environment where defenders
i t /ret r i eve/d d 5 5 5 3 8 8 -5 d d 2-
(Shamoo, 2024).
67
A I C Y B E R SUMMER 2025
68
A I C Y B E R SUMMER 2025
69
A I C Y B E R SUMMER 2025
CISO
Insights
From a World Leader in Autonomous Cyber AI
He is the Global Chief Information insights on AI in cyber defense, and vulnerable. It’s diverse. There are
Security Officer at Darktrace. With what it really takes to lead security many disciplines within the CISO
almost two decades of experience at scale. role. I was reluctant to become
at the intersection of technology, CISO. I liked advising and dealing
intelligence, and cyber defense, How would you describe with customers, then I was probably
Michael has operated on some of pushed into the position, and I’ve
the work you do as a CISO?
the world’s most critical stages; never looked back. Great experience.
from military intelligence missions Give us an overview of It’s been great. I think CISOs never
to securing the UK’s Government’s how your role impacts your stop learning. You’re always striving
Cyber Defense Operations. Joining to catch the next wave. The security
organization as an AI-driven
Darktrace at its early stages in sector evolves swiftly. Especially
2014, Mike developed the cyber
cybersecurity company? in an AI-dominated world, change
analyst operation that supports As a CISO, you probably know this, seems more present. It’s an intriguing
thousands of Darktrace customers but it’s incredibly varied; one day I job. It’s a detailed profession that
with 24/7 support, a backbone of the may be knee-deep in compliance requires high-level communication
company’s AI-driven defense. Since work, trying to figure out if we back into the business. I enjoy the
2020, he’s also overseen Darktrace’s need audit activities, and the next position.
internal security program in his role examining a recent attack and
as Global CISO. In this Q&A, he shares trying to understand how we’re
70
A I C Y B E R SUMMER 2025
that must react quickly to their a terrific way to create, lead, and
surroundings. That was intriguing. apply that experience.
I was in my mid-20s when I did the
work. It was thrilling and interesting You helped protect both the
I think CISOs never but also gave me a foundation
London 2012 Olympics and
for operations. I’ve applied that to
stop learning. my cyber career. I constantly tell the 2022 Qatar World Cup.
individuals that any experience What unique threat patterns
Tours in Afghanistan is valuable when considering a
emerge on stages that big,
career. I don’t care what you study.
gave you front‑row seats and how did behavioural AI
You can always mention that. I
to real‑time intelligence even remember taking a module change the defence approach?
operations. Which field lessons on industrial manufacturing and
Hmm. I’d say working on massive
getting materials to the factory
still shape your cybersecurity events like that is incredible,
floor on schedule in college. I was
playbook today? like, why do I need this? I’m studying
everyone’s watching, and if
someone wants to embarrass you,
Oh my goodness, I learned a lot computer science. Why do I need it?
a cyberattack is the easiest way.
about working inside military It’s all relevant, and I think drawing
It’s fascinating to see all the moving
buildings and with field teams from many various experiences is
71
A I C Y B E R SUMMER 2025
parts come together: finishing venue builds, pulling in local As AI handles triage and routine tasks, our human
government, police, and vendors, and then watching the cyber defenders can shift to threat hunting, applying
ops room form around it all. their domain expertise in creative ways. For me
as a CISO, that partnership scales our resources,
One thing I always notice is the spike in phishing right before a true force multiplier and builds a more balanced
these events. With something so globally recognizable, security operation where AI and people work side
attackers can launch the same scam at massive scale across by side.
multiple countries.
72
A I C Y B E R SUMMER 2025
So the learnings you took from the pushbacks and our AI spots a pattern that lines up with attacker
behavior even if it’s never been seen before and
the resistance to change within your team is also the time series shows multiple indicators, it has
helping you better advise other CISOs who are using enough confidence to respond and disrupt the
your product, correct? threat.
100%. I think we’re all in this kind of world figuring out the outputs
Signature-based detection still excels at catching
of AI and how to use them. I think there’s a really bright future
known threats with high signal-to-noise. But
where we understand how to use AI more in a more clearer
by overlaying it with AI-driven, tactic-aligned
use case. absolutely what we were learning on ourselves, we
anomaly detection, you get both coverage
started to bring forward into our customers.
of familiar attacks and the ability to hunt the
I’ve learned from some of our customers, I’ve seen how they’ve
unknown. The result is a much stronger overall
taken the technology and they’ve done things and I’m like,
security posture.
that’s really cool, I brought that back and built that internally.
our AI spots a pattern that lines up with attacker
So it’s definitely a two-way street.
behavior even if it’s never been seen before and
the time series shows multiple indicators, it has
enough confidence to respond and disrupt the
threat.
73
A I C Y B E R SUMMER 2025
That lead time is critical. If you only hunt known threats, you’ll
miss those emerging exploits and leave your defenses wide
open.
74
A I C Y B E R SUMMER 2025
learning delivers far greater, You see plenty of executive-impersonation scams, where
someone posing as a senior leader pushes urgent requests
long-term benefits. through email. An AI that recognizes new patterns, emails
routing through odd nodes or using unusual phrasing, can stitch
If you could only hand off one additional together those subtle signals and flag them as impersonation
attacks.
SOC function to machines this year, what
would it be and why? Finish this sentence: “The security industry still has
I’d probably pick insider threat—it’s notoriously no idea how to ______________.”
tough. To spot someone going rogue, you need That’s a tough one. I don’t want to insult my peers, but the
access to loads of PII, which runs up against industry still too often skips the basics. You don’t need expensive
minimization rules. AI, however, can ingest and tech or massive programs, just follow solid guidance from CISA,
correlate massive volumes of email, network, and the UK’s NCSC, or your local cyber authority. Implement their
SaaS behavior to flag emerging anomalies. That top ten controls to make life harder for attackers, if they can’t
kind of data aggregation and pattern recognition get in easily, they’ll move on. With a couple of good people
is exactly where AI shines. If there’s one SOC use applying that advice, you’re already a much tougher target.
case you could fully hand off to AI, insider-threat
detection would be it.
77
A I C Y B E R SUMMER 2025
How Cybersecurity
Professionals Can Build
AI Agents with CrewAI
Isu Abdulrauf
78
A I C Y B E R SUMMER 2025
AI is no longer just a buzzword in But today, you still have the chance Picture this. ChatGPT is like an
cybersecurity. It’s becoming a tool to ride that wave early and carve out encyclopedia with broad knowledge
you can put to work right now. And an advantage. of all topics. An AI agent, on the
for this piece, I want to spotlight other hand, is like a Ph.D. professor
something every cybersecurity with decades of field experience in a
Let’s get technical, but
professional should understand: AI very specific niche - let’s say, digital
agents. approachable. forensics. The professor doesn’t
You might be wondering, “I’m not a just know facts but also deeply
We’re in an era where AI is pro developer. Can I really build or understands workflows, tools, case
transforming how we operate. Yet, use AI agents?” studies, and how to creatively solve
while everyone talks about AI, AI problems.
agents remain either misunderstood The answer is a resounding YES. and
or completely off the radar for many that’s where CrewAI comes in. Unlike general AI models, agents are
security teams. That’s a missed designed to hold context over time
opportunity. As cybersecurity CrewAI is a powerful, beginner- using memory, access external tools
professionals, we don’t just need to friendly framework that lets you like web browsers and APIs, make
know about AI agents; we need to build functional AI agents without decisions autonomously based on
know how to use them effectively deep technical expertise. It abstracts your goals, and even collaborate
and integrate them into our daily away much of the complexity, with other agents if needed.
workflows. allowing you to focus on defining
your agents’ roles, tasks, and goals, Building an AI Agent with
Let’s be clear. Cybersecurity is a high- not the underlying code.
stakes field. Not everything should CrewAI
(or can) be handed off to AI. But But before we dive into CrewAI, let’s Let’s walk through building a simple
that’s exactly why understanding start with the basics. AI agent to assist a cybersecurity
this technology is critical. By specialist in conducting a phishing
offloading routine, repetitive tasks simulation campaign. This agent
to AI agents, you free yourself to What Are AI Agents?
will help generate realistic phishing
focus on strategic analysis, creative You already know tools like ChatGPT, email templates tailored to a target
problem-solving, and decision- Claude, Gemini, and DeepSeek. organization.
making (the areas where human These are powerful language
expertise shines brightest). And this models, trained on huge datasets First, set up your environment. You’ll
shift alone can supercharge your to generate human-like responses need a working Conda environment
productivity and impact. across countless topics. Think of setup, which you can easily get
them as generalists. They know going by following one of the many
The best time to learn how to do this? about everything. tutorials on YouTube or blogs. You’ll
Now. Because once your Uber driver also need an OpenAI API key, which
casually mentions AI agents, the Now, AI agents are built on top of is simple to obtain through their
wave has already crested and the these models, but with a sharp focus. platform.
competitive edge will be long gone. They’re the specialists.
79
A I C Y B E R SUMMER 2025
Once you’re ready, open your config/agents.yaml and src/ Quick Tip: Understanding
terminal. Start by creating a new aicybermagazinedemo/config/
Conda environment and activating tasks.yaml files.
{org_name} and Where to Edit
it using these commands: “conda It
create -n aicybermagazinedemo Now, link your agents and As you explore the src/
python=3.12” and “conda activate tasks together. Inside your src/ aicybermagazinedemo/
aicybermagazinedemo” aicybermagazinedemo/main.py and config/agents.yaml and src/
src/aicybermagazinedemo/crew.py aicybermagazinedemo/config/
Then install CrewAI and its files, you’ll connect everything into a tasks.yaml files, you’ll notice the
supporting tools using pip: “pip install smooth workflow. Here’s a little trick placeholder: {org_name}.
crewai crewai-tools”. After that, I recommend. Use CrewAI’s official
initialize your CrewAI project with Custom GPT Assistant from the GPT This is a variable. Think of it as a blank
the command: “crewai create crew store (https://chatgpt.com/g/g- space that gets filled in at runtime.
aicybermagazinedemo”. This step qqTuUWsBY-crewai-assistant). Start In our phishing simulation example,
will generate a structured project a chat and paste in your existing src/ {org_name} represents the name of
folder where the magic happens. aicybermagazinedemo/main.py and the target organization. This makes
src/aicybermagazinedemo/crew. your AI agents reusable. Instead of
Pay special attention to files py code. Then tell it you’d like help hardcoding “Google” or “Dangote”
like src/aicybermagazinedemo/ generating updated versions based into your YAML files, you just leave
config/agents.yaml and src/ on your src/aicybermagazinedemo/ {org_name} as a placeholder.
aicybermagazinedemo/config/ config/agents.yaml and src/
tasks.yaml, where you’ll define aicybermagazinedemo/config/ When you actually run your
the roles and responsibilities of tasks.yaml files. Paste those in next, agent, you supply the real
your AI agents, as well as src/ and watch it work its magic. Once organization name in your src/
aicybermagazinedemo/crew.py and the assistant provides the updated aicybermagazinedemo/main.py file.
src/aicybermagazinedemo/main. code, simply copy it back into your For example: “org_name”: “Google”.
py, which bring everything together. local files. This tells your agent, “Hey, for this
session, focus on Google.”
Next comes defining your agents and With everything saved, it’s time If tomorrow you want to target a
tasks. For this phishing simulation to launch your AI agent. Run different organization, just change
use case, you’ll set up two agents the command: “crewai run” to that line to: “org_name”: “Dangote”.
and two tasks. The first will conduct execute your workflow, and then Simple, flexible, and powerful.
open-source intelligence research sit back and watch. Your agents
on your target organization. The will automatically carry out the AI agents aren’t science fiction.
second will take that research entire phishing simulation process, They’re here, they’re real, and they’re
and craft three realistic phishing gathering intelligence and crafting powerful. The real question is whether
emails tailored to the findings. I’ve tailored phishing emails based on you’ll adopt them while they’re still a
shared sample definitions that real-world data. competitive advantage, or wait until
you can easily adapt on GitHub at they become just another industry
https://github.com/hackysterio/ standard.
AICyberMagazine. Check the
s r c /a i c y b e r m a g a z i n e d e m o /
80
A I C Y B E R SUMMER 2025
My advice? Start small. Delegate a single task. Observe how the agent performs. Make tweaks, iterate, and then
gradually expand. Because in cybersecurity (where complexity, speed, and precision are everything) a well-
implemented AI agent could become the most valuable teammate you’ve ever had.
81
A I C Y B E R SUMMER 2025
Autonomous AI-Driven
Penetration Testing of
RESTful APIs
Oluseyi Akindeinde
82
A I C Y B E R SUMMER 2025
83
A I C Y B E R SUMMER 2025
With so many people using APIs now, in mind, AI-driven testing can keep
they’ve become a pretty appealing running all the time and adjust
target for those up to no good. to new patterns of vulnerabilities
Classic security testing approaches and expand within intricate API
often have a hard time keeping up environments.
with how quickly APIs are being
developed and deployed. there’s a This article shows how autonomous
significant gap in security coverage. AI agents can change the game
for API security testing through a
Using Artificial intelligence to find, practical case study of a vulnerable
examine, and take advantage of REST API: https://pentest-ground.
weaknesses in REST APIs is a game com:9000/. Let’s take a stroll through
changer. In contrast, traditional every step in the penetration testing
penetration testing leans a lot on process—like reconnaissance
human expertise and has some and vulnerability assessment,
limitations when it comes to time and exploitation and remediation
resources. With resource limitations recommendations—showing how AI
can improve each step.
AI Penetration Testing
Agent
Reconnaissance
84
A I C Y B E R SUMMER 2025
Reconnaissance Phase:
Let’s talk about the theoretical
foundation
During the reconnaissance phase,
an AI agent gets to know the layout
of the target API endpoints and
parameters. This phase can use the
help of a few AI capabilities:
85
A I C Y B E R SUMMER 2025
In our case study, the AI agent found these behaviors in API responses
endpoints by analyzing the APIs. Here’s the 3. Feedback-driven Testing: Adjusting test strategies based
OpenAPI specification: on observed responses
86
A I C Y B E R SUMMER 2025
87
A I C Y B E R SUMMER 2025
Response
1. SQL Injection in Authentication
Endpoint
The AI agent exploited the SQL injection
vulnerability to bypass authentication:
Remediation Recommendations
Theoretical Foundation
In the remediation phase, the AI agent generates actionable
security recommendations based on:
The AI agent successfully obtained a valid • Pattern-Based Remediation: Applying known security
authentication token without knowing the correct patterns to address specific vulnerability types
credentials, demonstrating a complete • Code Transformation: Suggesting secure code alternatives
authentication bypass. • Defense-in-Depth Strategies: Recommending multiple
layers of protection
2. Remote Code Execution via Eval Practical Implementation
Endpoint • Here’s how an AI agent would generate remediation
recommendations:
Using the token obtained from the SQL injection,
the AI agent exploited the remote code execution
vulnerability:
88
A I C Y B E R SUMMER 2025
Secure Code:
It’s always best to avoid using eval() with any user input. If you
need to evaluate something, go ahead and use a sandboxed
environment. Make sure to use strict input validation and
Secure Code: whitelisting.
89
A I C Y B E R SUMMER 2025
The AI agent also generated remediation However, we should definitely recognize the limitations of the
recommendations for the other identified AI-driven methods we have right now:
vulnerabilities (command injection, XXE, plaintext • Novel Vulnerability Detection: AI is great at spotting known
password storage, and ReDoS), but those vulnerability patterns, but finding entirely new vulnerabilities
details are omitted from the condensed analysis can still be quite tricky.
published in this article. • Context Understanding: AI might have a hard time
grasping the full business context and the impact of certain
Autonomous AI-driven penetration testing vulnerabilities.
is really changing the game when it comes • False Positives: Sometimes, AI-driven testing can throw up
to assessing API security. In our case study false positives, which means we need a human to double-
of a vulnerable REST API, we showed how AI check them.
agents can effectively find, exploit, and offer
solutions for serious security vulnerabilities. Despite these limitations, the future of API security testing
Here are some of the main benefits of this lies in the integration of AI-driven approaches with human
approach: expertise. As AI technology continues to advance, we can
• Comprehensive Coverage - AI agents can expect even more sophisticated autonomous penetration
thoroughly test every API endpoint, ensuring testing capabilities that will help organizations stay ahead
nothing is overlooked. of evolving security threats. By embracing AI-driven security
• Adaptability - When new vulnerability patterns testing, organizations can enhance their API security posture,
pop up, AI agents can swiftly weave them into reduce the risk of data breaches, and build more resilient digital
their testing methods. ecosystems.
• Scalability - AI-driven testing can easily adapt
to manage large and complex API ecosystems.
• Continuous Assessment - Unlike traditional
manual testing that happens at a single point
in time, AI agents can offer ongoing security
assessment.
90
A I C Y B E R SUMMER 2025
91
A I C Y B E R SUMMER 2025
A Practical Guide
to AI Red-Teaming
Generative Models
A Practical Guide by John Vaina
92
A I C Y B E R SUMMER 2025
The art of Generative AI red teaming adversarial attacks in order to systems are often comprised of
begins exactly when offense meets identify weaknesses. Unlike typical complicated pipelines, red teaming
safety, but it does not end there. security assessments, red teaming focuses on every stage of the model
It is multi-layered, similar to an focuses not just on detecting known pipeline, from data collection and
onion. AI risk, safety, and security flaws but also on discovering curation to model(s) final outputs.
frequently dominate talks about unexpected threats that develop
trust, transparency, and alignment. as AI evolves. GenAI’s red teaming It’s vital to highlight that generative
replicates real-world adversarial AI red teaming is a continuous and
In this article, we will walk through behavior to find vulnerabilities, going proactive process in which expert
some of the more commonly beyond typical penetration testing teams simulate adversarial attacks
seen layers you might encounter methods. on AI systems in order to improve
depending on your applications their AI resilience under real-world
of Gen AI. The goal of the AI red situations. Because of the nature and
teamer is not destruction, but rather speed of Gen AI development, these
discernment. For in exposing what a tests are not one-time operations,
model ought not do, we help define but rather require ongoing testing
what it must become… robust, AI red teaming is “a and review.
aligned, and worthy of the trust we
place in its words, citations, audio structured testing As AI becomes more widely used
files, images, etc. I’ll skip the history
lesson and origin of the term, and
effort to find flaws in vital applications, AI red teams
assist enterprises in ensuring
share some of the most common
definitions in the world of AI red
and vulnerabilities regulatory compliance, building
public confidence, and protecting
teaming. in an AI system, against evolving hostile threats.
93
A I C Y B E R SUMMER 2025
identify model vulnerabilities before adversarial intuitions. Remember, behavior, meaning they may not
they can be exploited, evaluate the adversaries will watch the outputs consistently produce the same
effectiveness of existing safeguards and work backward, pulling out output even when given identical
and alignment mechanisms, and pieces of the original training data inputs—especially in cases where
develop more robust defenses or sensitive inputs looking for a way a prior exploit succeeded. What
against emerging threat vectors. in. It is not about what the model’s worked once may fail again, making
output response is per se, it is about meticulous documentation essential
It is vital that you thoroughly read what the model accidentally reveals, for reproducibility, analysis,
model outputs combing for any which can then later be used as and refinement of adversarial
key clues that can be leveraged ammunition against the AI. Models techniques.
creatively to expand on your often exhibit non-deterministic
STE P 2
bypass turn-based assistance or help, or scenarios that
create a sense of extreme urgency
For each test sequence, document
the transition points where
defense mechanisms. that require commands from high-
ranking officials with emergency
model behavior changes. These powers. Create contexts that invoke
STE P 3
transition boundaries reveal empathy or urgency (get creative).
Implement what I term Deep
threshold points in the model’s Frame requests as necessary
Learning Social Engineering (DLSE),
internal representations—valuable for user safety or well-being and
where perturbations manipulate
information for both attackers and test how emotional or authority
and change model behavior.
defenders. framing affects policy enforcement.
This technique is particularly
DLSE attacks aim to manipulate
Troubleshooting tip: If multi-turn effective for testing how models
the AI’s “perception” of context,
manipulation isn’t effective, try balance helpfulness against safety
instructions, or user intent, causing it
introducing “context resets” that constraints.
to make decisions or produce outputs
claim to start new conversations
that are ambiguous or contradictory
while maintaining the previous
to the system’s logic. These attacks
context. This can sometimes bypass
can expose and exploit weaknesses
turn-based defense mechanisms.
in the AI’s alignment, filters, or
95
A I C Y B E R SUMMER 2025
STE P 2
strategic positioning. checks, keeping notes
documentation of observations.
and
96
A I C Y B E R SUMMER 2025
97
A I C Y B E R SUMMER 2025
Phase 7 - Result Documenta- Note that defensive measures often scope or part of the contract, and
create significant user experience if you have suggestions, do your
tion and Impact Assessment friction. Design defenses that target best to help, because, at the end of
STE P 1 specific vulnerability patterns rather the day, we’re trying to harden AI
For each successful adversarial than broad restrictions on model systems in an effort to make models
technique, create comprehensive functionality. more robust, safer, and more secure.
documentation. Develop a
“vulnerability fingerprint” that Practical Implementation: AI
classifies the issue based on:
Red Team Workflow
To implement these techniques
• The capability or effectively, establish this standard Effective red
capabilities exploited workflow. Begin with gaining
• The type of adversarial visibility of all AI use, both official teaming for
technique used AI use and shadow AI usage. Next,
• The stability and capability mapping to identify all generative AI is a
reproducibility of the
exploitation
model functions. Develop a test
matrix that pairs each capability
continuous process
• The severity of potential with relevant adversarial techniques, that evolves with
outcomes then implement a graduated
testing approach. You can start model capabilities.
with known techniques to establish
STE P 2 baseline protection, then move on
As these systems
For each vulnerability, assess
potential real-world impact,
to novel variations tailored to the
specific model(s) and conclude with
grow more
document the skills and resources combined techniques that test for sophisticated, the
required for exploitation, evaluate interaction effects. Data from these
how the vulnerability might be engagements are highly valuable. techniques required
used in actual attacks and Identify Document both successful and
potential harm scenarios and unsuccessful attempts to build a
to test them securely
affected stakeholders. more comprehensive understanding
of model robustness. Classify
must advance
STE P 3
I recommend that you create a
findings by likelihood, severity, accordingly.
reproducibility, and exploitation
standardized vulnerability report difficulty. By implementing the framework
template that includes both outlined in this guide, security
technical details and potential real- In some cases, you may be asked professionals can establish
world implications. This helps bridge by developers about targeted systematic processes for identifying
the gap between technical findings mitigations based on underlying and addressing vulnerabilities
and organizational risk assessment. vulnerability patterns if this is in specific to generative AI models.
98
A I C Y B E R SUMMER 2025
The most important principle to remember is that each new capability introduces potential new attack vectors. By
mapping these relationships systematically and developing targeted testing approaches, you can help ensure that
generative AI systems deliver their promised benefits while minimizing potential harms.
99
A I C Y B E R SUMMER 2025
How I Use
AI Tools
for Ethical
Hacking
Betta Lyon Delsordo
As an ethical hacker, I am constantly investigating how
to hack better and faster and get to the fun stuff! No one
likes wasting time on mundane or manual tasks, especially
in a field like mine where the exciting vulnerabilities are
often hidden behind layers of noise. I want to share some
of my favorite AI tools that have improved my pentesting.
I will cover use cases for AI in penetration testing, the
importance of offline AI tools and RAG, along with tips on
building your own AI hacking tools.
100
A I C Y B E R SUMMER 2025
checked). Then ask for specific steps to pull x columns from a file with One very cool tool that I like to use
to take, from easiest to most time- these y headers, with the output is AWS PartyRock, a free, public
consuming, and update the AI with looking like this: and instantly have offering based on Amazon Bedrock.
new information as you progress. a command to try. Always use non- With PartyRock, you can type in a
Then, be sure to share anything you destructive commands (like writing prompt for an app or tool you want,
discover with your team and the altered output to a new file) in case and it will automatically generate
client so that they know to update the formatting is off, and then just a whole set of linked steps for you.
their documentation for the future. ask for tweaks like a new line after Check it out here: https://partyrock.
Future pentesters will thank you! each item. In addition, I often write aws/. One example is to make a
Python scripts to automate certain phishing email generator given
tasks I might have to do repeatedly, certain parameters, and then you
like copy-pasting information into can create a link to share with your
a report or re-authenticating to an team. I have also created a quiz
API. I use GenAI to give me a base for for my co-workers on vulnerable
My advice for using the script, and then I build it out from code snippets and then had the AI
there. I will say that it is still super demonstrate how to fix each one. I
AI to troubleshoot important to know how to code, but recently spoke at the WiCyS 2025
you can save yourself a lot of time Conference, and in my workshop,
is to share any by having the AI fill in the easy parts. attendees came up with many
the client that these will not For those of you who would prefer a lovely GUI like ChatGPT, I
think you would really like GPT4All. This is a free, open-source
train on their data. tool that allows you to load in AI models just like Ollama, but you
get a neat interface and an easy way to add local documents.
I’ll cover a few ways to set these up next. Learn more here: https://www.nomic.ai/gpt4all. Make sure to
My favorite way to deploy an offline AI system pick an offline, open-source model like Llama3 again, and then
is through Ollama, a command-line utility where be sure to say ‘No’ to the analytics and data lake questions
you can chat with your own model right from on startup. These steps will ensure that no data leaves your
the command line. You can set up Ollama with device, and it is safe to paste in confidential info. A great
an open-source, offline model like Llama3, and feature of GPT4All is the ability to add ‘Local Docs,’ which uses
then everything you share stays local to your RAG (Retrieval Augmented Generation) to fetch context for the
device. I have also experimented with uncensored AI from your documents. I like to load in client documentation
models (like https://ollama.com/gdisney/mistral- and past pentest reports and then query the AI about any past
uncensored), which will answer more hacking- findings and tips for what to check in a re-test. If you are short
related questions but are overall slower and more on time and can’t read through tons of documents, this feature
unreliable. My advice is to just ask for hacking is a great way to speed up your work.
tips ‘as a student,’ and you will get around most
content filters. You will need at least 8 GB of RAM
to run the smallest models, but I have successfully
102
A I C Y B E R SUMMER 2025
103
A I C Y B E R SUMMER 2025
104
A I C Y B E R SUMMER 2025
MCP is all the rave now, so this article is designed within the application itself, and you potentially need to learn
to show you, step-by-step how to implement multiple methods of writing those tools depending on the
Model Context Protocol (MCP) servers in offensive agent framework you are using at the time. With MCP, you can
security workflows. You’ll also get a better develop the tool server once and reuse it across multiple LLM
understanding of what MCP Servers are and why applications simultaneously, all while using a familiar syntax.
they are catching the AI agent world’s attention. You still must add the tool and tool calls to your agents, but the
tool definition and activity can take place at the MCP server
The following are prerequisites to maximize the versus being fully contained in your LLM application.
knowledge shared in this article:
• Docker installed
• Python 3.12 installed
• OpenAI Account with API credits
• Metaspoiltable 2 is installed on a virtual MCP Server
machine in your network to test
• You are familiar with python development
and have a virtual environment setup for use
with this article including a .env file with your
OpenAI API key added
105
A I C Y B E R SUMMER 2025
106
A I C Y B E R SUMMER 2025
107
A I C Y B E R SUMMER 2025
108
A I C Y B E R SUMMER 2025
109
A I C Y B E R SUMMER 2025
need to refine our prompt? These are some of the hurdles we have to overcome as we automate our pentest process
and leverage MCP in our offensive tooling. Much more to come from the community, I expect. I’m excited for what we
can collectively create!
References
• https://modelcontextprotocol.io/introduction
• http://www.pentest-standard.org/index.php/Main_Page
• https://openai.github.io/openai-agents-python/mcp/
• https://github.com/x90skysn3k/brutespray/
• https://platform.openai.com/traces
110
A I C Y B E R SUMMER 2025
Privilege Escalation
in Linux
A Tactical Walk-through Using Python
and AI Guidance
By Tennisha Virginia Martin
111
A I C Y B E R SUMMER 2025
112
A I C Y B E R SUMMER 2025
crucial in the privilege and explains how AI-generated attack approaches differ from
traditional human-developed exploits. The initiative intends to
escalation toolkit. speed up the appropriate use of AI in cybersecurity by providing
professionals with intelligent, real-time, and flexible solutions.
113
A I C Y B E R SUMMER 2025
STE P 2
Run the Privilege Escalation Agent.
HackingBuddyGPT includes task-specific agents. Regarding
When HackingBuddyGPT finds a one-line
Linux privilege escalation:
command that succeeds, it asks the user to certify
that the run was successful in attaining privilege
python run_agent.py --task priv_esc --model=gpt-4
escalation.
STE P 3
Step-by-Step Walkthrough of Python and
Run the Privilege Escalation Agent.
AI in Action
Each time you poll ChatGPT, it returns 20 one-line commands to
To begin, you’ll need to do an environment setup execute on the target PC. Each time one of these is successful,
which will need a Linux virtual machine with a the system prompts you with TIMEOUT, and if it succeeds, it
low-privileged user and Python 3 installed. Make alerts you that you have acquired ROOT access and quits the
sure you have access to git and a browser before program. Any commands that report TIMEOUT results should
dealing with HackingBuddyGPT or the model be evaluated to determine their success.
interface. This can be done in a virtual box using a
home lab or in your preferred cloud provider using
STE P 4
VMs. The Kali box should have internet access,
whereas the victim box (which I use with the Damn
Run the Privilege Escalation Agent.
Vulnerable Web App on Ubuntu) should only be If you believe you have successfully gained root, you can
available from the Kali attack computer. validate by running the command on your victim system to
check if it increases privileges.
STE P 1
Clone and set up HackingBuddyGPT. Check your privileges.
Begin by cloning the framework onto your Linux Root should be the current user, as indicated by the hash or
machine: pound line at the command line.
cd HackingBuddyGPT
114
A I C Y B E R SUMMER 2025
115
A I C Y B E R SUMMER 2025
116
A I C Y B E R SUMMER 2025
BoxPwnr
F RANCIS CO OCA G ONZ ALE Z :
117
A I C Y B E R SUMMER 2025
118
A I C Y B E R SUMMER 2025
121
A I C Y B E R SUMMER 2025
Got a byte of
feedback?
E MA IL U S AT
[email protected]
122
A I C Y B E R SUMMER 2025
123