0% found this document useful (0 votes)
54 views68 pages

DZ TR Genai 2025

DZone's 2025 Generative AI Trend Report explores the transformative impact of generative AI (GenAI) across various industries, emphasizing its potential to enhance productivity and innovation. Key findings from a survey of IT professionals reveal that while many organizations are in the early stages of GenAI adoption, there are significant opportunities for growth and integration of AI technologies. The report also discusses the importance of ethical considerations and security measures in the implementation of AI solutions.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
54 views68 pages

DZ TR Genai 2025

DZone's 2025 Generative AI Trend Report explores the transformative impact of generative AI (GenAI) across various industries, emphasizing its potential to enhance productivity and innovation. Key findings from a survey of IT professionals reveal that while many organizations are in the early stages of GenAI adoption, there are significant opportunities for growth and integration of AI technologies. The report also discusses the importance of ethical considerations and security measures in the implementation of AI solutions.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 68

Table of Contents

INTRODUCTION AND DZONE RESEARCH 43 Agentic AI and Generative AI


REVOLUTIONIZING DECISION MAKING
03 Welcome Letter AND AUTOMATION
Frederic Jacquet | Technology Evangelist Nitesh Upadhyaya | Solution Architect at
at AI[4]HumanNexus GlobalLogic, a Hitachi Group Company

04 Key Research Findings 47 Building AI-Driven Intelligent Applications


AN ANALYSIS OF RESULTS FROM DZONE'S 2025 A HANDS-ON DEVELOPMENT GUIDE FOR
GENERATIVE AI SURVEY INTEGRATING GENAI INTO YOUR APPLICATIONS
G. Ryan Spain | Freelance Software Engineer, Naga Santhosh Reddy Vootukuri | Principal Software
Former Engineer & Editor at DZone Engineering Manager at Microsoft

FROM THE COMMUNITY 57 [checklist] A Comprehensive Guide


to Protect Data, Models, and Users in
28 [infographic] Artificial Intelligence, the GenAI Era
Real Consequences Boris Zaikin | Leading Architect at CloudAstro
BALANCING GOOD vs. EVIL AI
Content and Community Team | DZone ADDITIONAL RESOURCES

29 A Pulse on Generative AI Today 62 Solutions Directory


NAVIGATING THE LANDSCAPE OF INNOVATION
AND CHALLENGES
Tuhin Chattopadhyay | Professor of AI & Blockchain
at Jagdish Sheth School of Management

37 Supercharged LLMs
COMBINING RETRIEVAL AUGMENTED
GENERATION AND AI AGENTS TO TRANSFORM
BUSINESS OPERATIONS
Pratik Prakash | Principal Solutions Architect
at Capital One

Meet the Team


DZone's Content and Community team Caitlin Candelmo Director, Site Strategy
can be found reviewing contributor pieces, Lindsay Burk Senior Content Strategist
Carisse Dumaua Content Editor
working with authors and sponsors, and
Lauren Forbes Senior Content Editor
coordinating with our researcher and
Melissa Habit Senior Manager, Production Ops
designer to deliver valuable, high-quality Lucy Marcum Acquisitions Editor
content to global DZone-ians. Dominique Roller Community Specialist

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 2


Welcome Letter
By Fred Jacquet, Technology Evangelist at AI[4]HumanNexus

Let's remember a time when candles or oil lamps DZone's 2025 Generative AI Trend Report takes us to
were the primary source of artificial light. While they the heart of this transformation. Together, let's explore
allowed illumination, their use remained restricted technologies such as the famous large language
and limited to a small space. models (LLMs), retrieval-augmented generation (RAG),
autonomous agents (AI agents), vector databases,
Then, in the 1800s, electricity started to change the as well as the ethical and security implications that
world. At first seen as a luxury, it gradually became come with this technological revolution.
a necessity that went far beyond just the ability to
provide light — electricity revolutionized industry, Throughout the articles, you'll find in-depth analyses,
communications, and ways of living. As history would concrete use cases, and strategic recommendations
later show, this invention enabled a whole world of to effectively integrate generative AI into your projects.
innovations that had previously been unimaginable.
Far more than just a technological status report, this
What if I told you that generative AI (GenAI) is following publication aims to be a source of ideas for those who
a similar path? want to understand, who are ready to experiment,
and who aspire to be active players in this ongoing
See, GenAI first emerged in research laboratories revolution.
before quickly conquering both industry and the
general public through fascinating applications — Before letting you dive into these articles, let me
though let's not mention the AI that once insisted share something I deeply believe:
that 2 + 2 = 5.
If we commit to using AI with discernment and
Things moved so quickly that within just a few years, ethics, then, far from replacing us, generative AI
GenAI now stands ready to revolutionize the way we will reveal its full potential to amplify our ability
create, automate, and innovate. Who would have to create, innovate, and produce.
imagined that it could pass medical licensing exams,
managing to achieve a 60% accuracy rate without Enjoy your reading, and welcome to this dazzling
prior training? revolution.

Just as electricity went beyond simple lighting to Fred Jacquet


redefine our daily lives, the value of GenAI extends
far beyond language processing. It is transforming
productivity and enhancing the capabilities of many
industrial processes.

Frédéric Jacquet is an Al and data expert. In 2021, he led a research


Fred Jacquet
program on conversational Al, leveraging Al and crowdsourcing, and
@aFredNotAfraid shared the resulting insights at the 2021 International Databricks Al + Data
@jacquetfred Summit. He spoke at the Al Paris 2023 conference on Adaptive Al. With
more than 30 years of experience, he blends cutting-edge technologies
@afrednotafraid.bsky.social
with business results while keeping human innovation at the forefront.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 3


ORIGINAL RESEARCH

Key Research Findings


An Analysis of Results From DZone's 2025 Generative AI Survey
G. Ryan Spain, Freelance Software Engineer, former Engineer & Editor at DZone

AI technology is now more accessible, more intelligent, and easier to use than ever before. Generative AI (GenAI)
has transformed nearly every industry, offering cost savings, reducing manual tasks, and adding a slew of other
benefits that improve overall productivity and efficiency. The applications of GenAI are expansive, and thanks to the
democratization of large language models (LLMs), AI is reaching organizations worldwide.

Our goal for these findings is to provide a detailed analysis and insights on the trends surrounding GenAI models,
algorithms, and implementation, paying special attention to GenAI's impacts on code generation and software
development as a whole. In DZone's 2025 Generative AI Trend Report, we focus specifically on the role of LLMs over
the last year, organizations' adoption maturity, intelligent search capabilities, and much more. We hope to guide
practitioners around the globe as they assess their own organization's AI capabilities and how they can better
leverage those in 2025 and beyond.

In February and March, DZone surveyed software developers, architects, and other IT professionals in order to gain
insight on the current state of generative AI in the software development space.

Methods
We created a survey and distributed it to a global audience of software professionals. Question formats included
mainly single and multiple choice, with options for write-in responses in some instances. The survey was
disseminated via email to DZone and TechnologyAdvice opt-in subscriber lists as well as promoted on dzone.com,
in the DZone Core Slack workspace, and across various DZone social media channels. The data for this report were
gathered from responses submitted between February 13, 2025, and March 2, 2025; we collected 408 complete and
partial responses. Our margin of error for the results of this survey is 5%, and this report treats comparative results of
5% or less as insignificant.

Demographics
We've noted certain key audience details below in order to establish a more solid impression of the sample from
which results have been derived:
• 14% of respondents described their primary role in their organization as "Business manager," 13% described
"C-level executive," and 10% described "Developer/Engineer." Furthermore, 18% of respondents selected the
"Other, write in" option with regard to their primary role. No other role that we provided was selected by more
than 10% of respondents.*
• 65% of respondents said they are currently developing "Web applications/Services (SaaS)," 41% said "Enterprise
business applications," and 23% said "Native mobile apps."
• "Python" (71%) was the most popular language ecosystem used at respondents' companies, followed by "Java"
(45%), "JavaScript (client-side)" (42%), "Node.js (server-side JavaScript)" (30%), and "C/C++" (27%).
• Regarding responses on the primary language respondents use at work, the most popular was "Python" (43%),
followed by "Java" (14%), and "JavaScript (client-side)" (7%). 13% of respondents selected the "Other, write in" option
for their primary language at work. No other language was selected by more than 5% of respondents.
• On average, respondents said they have 12.97 years of experience as an IT professional, with a median of 10 years.
• 45% of respondents work at organizations with < 100 employees or reported being self-employed, 18% of
respondents work at organizations with 100-999 employees, and 35% of respondents work at organizations with
1,000+ employees.*

*Note: For brevity, throughout the rest of these findings, we will use the term "developer" or "dev" to refer to anyone actively involved
in the creation and release of software, regardless of role or title. Additionally, we will define "small" organizations as having < 100
employees, "mid-sized" organizations as having 100-999 employees, and "large" organizations as having 1,000+ employees.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 4


Major Research Targets
In our 2025 Generative AI survey, we aimed to gather data regarding various topics related to the following major
research targets:
1. Organizational maturity, security considerations, and ethical concerns
2. Generative AI tools, platforms, and methods
3. LLMs, intelligent search, and agentic AI

In this report, we review some of our key research findings. Many secondary findings of interest are not included here.

Research Target One: Organizational Maturity, Security Considerations, and


Ethical Concerns
Our research related to AI maturity, security, and ethics centered around three key topic areas:
1. Organizational roles and maturity level
2. Security considerations
3. AI applications and ethics

Organizational Roles and Maturity Level


We asked the following:
• Which of the following AI-/ML-related roles, to your knowledge, exist at your organization?*
• How would you describe your organization's GenAI maturity level?**
*Note: Respondents were instructed "Specific title matches are not necessary; select an option if you feel your organization has a roughly
equivalent role."
**Note: Respondents were given these descriptions of each maturity level: Exploratory [researching potential use cases but not yet
implementing], Pilot [running small-scale experiments and/or proofs of concept], Operational [deploying GenAI in specific departments
and/or use cases], Enterprise-integrated [scaling GenAI across multiple business units with governance], Transformational [GenAI is
embedded in core business processes and driving strategic value].

Results:

Figure 1. Prevalence of AI-/ML-related roles [n=399]

Data scientist 37%

ML engineer 30%

AI research scientist 15%

Data engineer 30%

AI product manager 23%

AI ethicist/algorithm bias analyst 5%

MLOps engineer 17%

Data analyst 38%

BI developer 28%

AI solutions architect 23%

AI trainer/annotation specialist 10%

Chief data officer 14%

Other, write in 12%

None 20%

0 10 20 30 40

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 5


Figure 2. Organizations' GenAI maturity levels [n=380]

5%
6%

Exploratory
29%
Pilot

27% Operational

Enterprise-integrated

33% Transformational

OBSERVATIONS
80% of respondents indicated that their organization has some kind of AI-/ML-related role at their organization (i.e.,
did not select "None"). 72% of respondents selected at least one of the 12 roles we provided, over half (53%) selected at
least two, and almost one-third (30%) selected four or more.

Compared to our 2024 Enterprise AI survey, response rates for several AI roles fell significantly, including "Data
scientist," "Data engineer," and "Business intelligence (BI) developer." On the other hand, the number of respondents
who selected "None" increased significantly (+10%). Additional details can be found in Table 1 (see Appendix).

Respondents at small organizations were significantly more likely to claim their organization does not have any AI-
related roles, while respondents at large organizations were significantly less likely to report the same. The following
are additional observations when segmenting results by organization size:
• Respondents at large organizations were more likely than others to report their organization having most of the
12 roles we suggested, including "Data analyst," "Data scientist," "AI product manager," and "Machine learning
(ML) engineer."
• Respondents at small organizations were less likely than others to report their organization having most of
the 12 AI-related roles we suggested, including "Data analyst," "Business intelligence (BI) developer," and "AI
solutions architect."

Further details can be found in Table 2 (see Appendix).

The majority of respondents (62%) suggested that their organization is in the early stages of generative AI
adoption, describing their organization's GenAI maturity as either "Exploratory" or "Pilot." Most other respondents
described their organization's GenAI maturity as "Operational," and only 11% of respondents described their
organization's GenAI maturity as being in the advanced stages of "Enterprise-integrated" and "Transformational."*

Segmenting results by organization size, Table 3. Organizations' GenAI maturity levels by organization size*
the only significant difference we found was
Organization Size
that respondents at large organizations
were less likely than others to describe their Maturity Level 1-99 100-999 1,000+ Overall

organization's GenAI maturity as "Exploratory." Exploratory 34% 35% 19% 29%

The segmented data can be found in Table 3. Pilot 32% 33% 33% 33%

Operational 27% 22% 32% 27%

*Note: We use the results of this question to segment Enterprise-integrated 2% 6% 11% 6%


other responses later in this report; because of the
Transformational 6% 4% 5% 5%
low response rates for the "Enterprise-integrated" and
"Transformational" maturity levels, we will consolidate n= 169 69 132 380
those responses and refer to them as "Advanced"
maturity for the rest of the report. *% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 6


CONCLUSIONS
Our results suggest a decline in dedicated AI-related roles, but part of this decline could be the result of sample
demographics — last year's sample had significantly more respondents at large organizations (42%) and fewer
respondents at small organizations (36%). Since we've seen a positive correlation between organization size and
the existence of dedicated AI-related development roles, this demographic discrepancy could be a contributing
factor to the drop in AI roles. Additionally, given the relative recency of the generative AI boom, the ways in which
organizations choose to implement AI into their development processes and teams are likely to vary greatly from
business to business.

Most organizations are not very advanced with regard to their generative AI maturity, and for nine out of 10
businesses, "deploying GenAI in specific departments and/or use cases" is the apex of their GenAI adoption. Large
organizations seem to have slightly outpaced others in GenAI maturity levels, but for the most part, organizations of
all sizes are, at best, midway on their GenAI journey.

The potential GenAI has for nearly all enterprises will likely drive maturity levels forward over the next several years,
but we will need further data to see how slowly or quickly those levels improve on average.

Security Considerations
We asked the following:
• What steps does your organization actively employ to secure AI practices and implementations, like LLMs,
vector databases, etc.?
• Describe your organization's ability to defend AI applications built by internal AI teams against security threats
like prompt injections, DoS attacks, and sensitive data leakage.

Results:

Figure 3. Employed AI security practices and implementations [n=376]

AIOps 22%

Alerts 19%

Anomaly detection 22%

Data protection 39%

Ethical AI and bias mitigation 24%

Explainability and transparency 21%

Incident response 18%

Model security and integrity 19%

Predictive analytics 23%

Real-time monitoring 31%

Security standards 22%

Threat detection 25%

Vulnerability scanning 23%

Other, write in 4%

None 28%

0 10 20 30 40

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 7


Figure 4. Organizations' ability to defend AI apps [n=376]

11%

30% Fully prepared

Moderately prepared
30%
Limited preparedness

Unprepared
30%

OBSERVATIONS
70% of respondents selected at least one of the 13 suggested AI security steps actively employed at their
organization, and roughly half (49%) selected three or more. Over a quarter of respondents said their organization
takes no active steps to secure AI practices and implementations, and no response received a response rate over 40%.

Segmenting responses by organization size, we noted the following:


• Respondents at large organizations were more likely than others to select several of the suggested security
steps, including "Data protection," "Real-time monitoring," "Anomaly detection," and "AIOps." They were less
likely than others to select "None."
• Respondents at mid-sized organizations were less likely than others to say their organization uses
"Predictive analytics."
• Respondents at small organizations were less likely than others to say their organization utilizes many of
the security steps, including "Vulnerability scanning," "Threat detection," "Incident response," and "Security
standards (e.g., OWASP Top 10, NIST)."

Additional details can be found in Table 4 (see Appendix).

Furthermore, a few notable observations when segmenting responses by respondents' organizations' GenAI maturity
are as follows (with additional data available in Table 5):
• There was a negative correlation between organizational GenAI maturity and the absence of AI security steps
— in other words, respondents who described their organization's GenAI maturity as "Exploratory" were most
likely to select "None" regarding their organization's AI security steps, with decreasing response rates for each
subsequent maturity level (i.e., Pilot, Observational, and Advanced).
• Likewise, we found positive correlations between respondents' organizations' GenAI maturity and "Real-time
monitoring," "AIOps," and "Predictive analytics."
• Respondents at organizations with "Advanced" GenAI maturity were more likely than others to report their
organization using "Data protection," "Alerts," "Anomaly detection," "Explainability and transparency," "Ethical AI
and bias mitigation," and "Model security and integrity."
• Respondents at organizations in the "Exploratory" GenAI maturity stage were less likely than others to select
every suggested security step (except for "Alerts," where the difference in response rates fell just within the
margin of error).

SEE TABLE 5 ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 8


Table 5. AI security practices and implementations by organization's GenAI maturity*

GenAI Maturity

Security Steps Exploratory Pilot Operational Advanced Overall

Data protection 30% 44% 38% 52% 39%

Real-time monitoring 19% 32% 39% 48% 31%

None 48% 26% 16% 10% 28%

Threat detection 15% 31% 31% 21% 25%

Ethical AI and bias mitigation 8% 33% 24% 43% 24%

Predictive analytics 10% 22% 29% 40% 23%

Vulnerability scanning 14% 28% 28% 26% 23%

AIOps 12% 18% 28% 40% 22%

Anomaly detection 13% 26% 24% 31% 22%

Security standards 12% 26% 26% 26% 22%

Explainability and transparency 12% 21% 24% 36% 21%

Alerts 14% 19% 20% 26% 19%

Model security and integrity 8% 21% 20% 33% 19%

Incident response 10% 25% 19% 17% 18%

Other, write in 4% 2% 4% 7% 4%

n= 108 121 101 42 376

*% of columns

Only about one in 10 respondents felt their organizations were fully prepared to defend AI apps built by internal
teams. A majority of respondents reported their organization as either having "Limited preparedness" or as being
completely "Unprepared."

Respondents at large organizations were more likely than others to say their organization is "Moderately prepared"
and "Fully prepared" to defend AI apps against security threats. Likewise, respondents at organizations with an
"Advanced" level of GenAI maturity were more likely than others to say their organization is "Fully prepared" for AI
defense, while respondents at organizations in the "Exploratory" stage of their GenAI maturity were considerably
more likely to say their organization is "Unprepared" to defend AI apps.

Further data is available in Tables 6 and 7, respectively, in the Appendix.

CONCLUSIONS
AI security is a critical consideration in the development or implementation of AI technologies, but the explosion of
growth that generative AI has seen in recent years has likely led to some organizations hastily adopting GenAI before
ensuring appropriate security is in place. It seems few organizations actively employ more than a handful of steps
for securing their AI implementations, and even among organizations that could be described as "Advanced" in their
GenAI maturity, fewer than half appear fully prepared to defend their AI applications.

Over the next few years, as organizations improve their GenAI maturity levels and the hype around the technology
ultimately falls, we expect to see increases in response rates for at least a few AI security steps, as well as an overall
shift toward better preparedness for AI defense.

AI Applications and Ethics


We asked the following:
• Which of the following do you consider worthwhile applications of AI/ML in the enterprise?
• Which of the following ethics concerns has your organization raised and/or faced with regards to GenAI?
SEE RESULTS ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 9


Figure 5. Opinion: Worthwhile applications of AI/ML in the enterprise [n=384]

Predictive analytics 66%

Customer relationship management 48%

Chatbots/virtual assistants 74%

Supply chain optimization 30%

Fraud detection and security 49%

Human resources and talent acquisition 30%

Sentiment analysis 45%

Personalized marketing 47%

Healthcare diagnostics 36%

Process automation and RPA 44%

Enterprise resource planning 26%

Smart analytics and BI 52%

Other, write in 9%

None 1%

0 20 40 60 80

Figure 6. GenAI ethics concerns raised/faced at organizations [n=379]

Non-discrimination 21%

Privacy and data protection 72%

Accountability 33%

Safety and security 49%

Human autonomy 21%

Informed consent 26%

Transparency 44%

Sustainability 19%

Collaboration and inclusivity 17%

Other, write in 4%

None 7%

I don't know 8%

0 20 40 60 80

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 10


OBSERVATIONS
Almost all respondents (98%) selected at least one of the 12 AI/ML applications suggested (and only 1% said there
were "None"). About four out of five respondents (82%) thought there were at least three worthwhile applications,
and a little over half of respondents (55%) found five or more.

Compared to our 2024 Enterprise AI survey results, response rates for "Chatbots/virtual assistants," "Personalized
marketing," "Smart analytics and BI," and "Process automation and robotic process automation (RPA)" increased
significantly, while response rates for "Supply chain optimization" and "Fraud detection and security" decreased (see
details in Table 8).

Table 8. Worthwhile applications of AI/ML in the enterprise: 2024-2025

AI/ML Application 2024 2025 % Change

Chatbots/virtual assistants 60% 74% +14%

Predictive analytics 68% 66% -2%

Smart analytics and BI 46% 52% +6%

Fraud detection and security 59% 49% -10%

Customer relationship management 43% 48% +5%

Personalized marketing 35% 47% +12%

Sentiment analysis 42% 45% +3%

Process automation and RPA 38% 44% +6%

Healthcare diagnostics 32% 36% +4%

Supply chain optimization 42% 30% -12%

Human resources and talent acquisition 33% 30% -3%

Enterprise resource planning 30% 26% -4%

Other, write in 4% 9% +5%

None 4% 1% -3%

n= 171 384 -

When segmenting by respondents' organization size, we found the following:


• Respondents at large organizations were more likely than others to say they found "Fraud detection and
security" and "Supply chain optimization" as worthwhile AI/ML applications, and they were less likely to select
"Personalized marketing as a worthwhile application."
• Respondents at mid-sized organizations were more likely than others to label "Predictive analytics" as a
worthwhile application.
• Respondents at small organizations were less likely than others to say that "Chatbots/virtual assistants,"
"Predictive analytics," "Fraud detection and security," "Process automation and robotic process automation (RPA),"
and "Enterprise resource planning (ERP)" are worthwhile applications of AI/ML.

Additional details can be found in Table 9 (see Appendix).

84% of respondents reported that their organization has raised or faced at least one of the nine ethical concerns
provided as answer options, and over half of respondents (55%) selected three or more.

Of the respondents that did not say their organization has raised or faced any ethics concerns, about half (7% of
total respondents) indicated their organization hasn't faced any of these concerns, while the other half (8% of total
respondents) selected "I don't know." "Privacy and data protection" was selected significantly more than other
concerns (23% higher than the second most commonly selected concern).

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 11


Segmented by organization size, we observed Table 10. GenAI ethics concerns raised/faced by organization size*
the following (further details in Table 10): Organization Size
• Respondents at large organizations were Ethical Concern 1-99 100-999 1,000+ Overall
more likely than others to select "Privacy
Privacy and data protection 63% 75% 82% 72%
and data protection," "Accountability," and
"Non-discrimination." Safety and security 41% 57% 56% 49%

• Respondents at small organizations were Transparency 41% 45% 50% 44%


less likely than others to select "Privacy Accountability 33% 28% 39% 33%
and data protection" and "Safety and
Informed consent 26% 26% 27% 26%
security." They were more likely than
others to select "None." Human autonomy 18% 22% 24% 21%

Non-discrimination 17% 20% 28% 21%


When segmenting by organizations' GenAI
maturity, we noted these points of interest: Sustainability 18% 14% 22% 19%

• Respondents at organizations with Collaboration and inclusivity 15% 16% 21% 17%

"Advanced" GenAI maturity were more I don't know 9% 4% 7% 8%


likely than others to say their organization
None 13% 6% 1% 7%
has raised or faced concerns around
Other, write in 5% 6% 2% 4%
"Informed consent," "Non-discrimination,"
and "Sustainability." n= 169 69 131 379
• Respondents at organizations in the *% of columns
"Operational" stage of GenAI maturity
were more likely than others to select "Transparency" as an ethics concern their organization has raised/faced.
• Respondents at organizations in the "Pilot" stage of GenAI maturity were more likely than others to choose
"Privacy and data protection," and less likely than others to choose "Informed consent."
• Respondents at organizations in the "Exploratory" stage of GenAI maturity were less likely than others to say their
organization raised or faced concerns over "Transparency."

Additional details can be found in Table 11 (see Appendix).

CONCLUSIONS
Developers generally believe there are at least some valuable applications of AI and ML, though devs at small
organizations are less likely to see those applications directly. Chatbots and similar conversational AI technologies
are especially growing in popularity as worthwhile AI applications, likely because of the wide range of applicable use
cases they have demonstrated in recent years.

Regarding ethical concerns, data privacy currently seems to be top of mind for most organizations, perhaps at
least partly because of privacy regulations such as GDPR and CCPA. As we suggested in the previous section, we
expect "Safety and security" to become a bigger AI concern over the next few years, and it is likely that organizations
will raise many other ethical concerns as AI technologies become a part of more organizations' systems. Ideally,
organizations will raise these concerns before facing any negative consequences.

Research Target Two: Generative AI Tools, Platforms, and Methods


Our research related to GenAI solutions and methods centered around two key topic areas:
1. AI tools and platforms
2. Multimodal AI

AI Tools and Platforms


We asked the following:
• Which of the following open-source AI tools, frameworks, and/or APIs do you use to build software?
• Which of the following AI platforms do you use to assist with your work?

SEE RESULTS ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 12


Figure 7. Frequency of use of open-source AI tools, frameworks, and APIs [n=377]

TensorFlow 28%

PyTorch 33%

Scikit-learn 18%

Keras 10%

Apache MXNet 6%

Caffe 2%

CUDA 12%

Dask 2%

DL4J 1%

FastAPI 14%

Fast.ai 5%

Genism 2%

Hugging Face 35%

H2O.ai 3%

LightGBM 3%

NLTK 9%

OpenCV 10%

PaddlePaddle 1%

Shogun 1%

SpaCy 8%

Theano 1%

Other, write in 7%

None 41%

0 10 20 30 40 50

Figure 8. Frequency of use of AI platforms* [n=393]

100

84%
80

60
50% 51%
44%
40

25%
21% 21%
20 15%
10% 8%
2%
0
Amazon ChatGPT DeepSeek Google Google IBM Microsoft Microsoft NVIDIA OpenAI Other,
SageMaker Gemini Cloud AI Watson Azure AI Copilot write in

*Note: We inadvertently listed "OpenAI" instead of "OpenAI API" as our tenth AI platform, which had a response rate of 51%. However, we do
acknowledge possible ambiguity of the selection alongside "ChatGPT," so we plan to better distinguish the two platforms in future surveys.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 13


OBSERVATIONS Table 12. Frequency of use of open-source AI tools, frameworks,
A little over half of respondents (56%) said and APIs by organization size*

they use at least one of the 21 open-source AI Organization Size


tools we suggested, and about one-third of
Tool/Framework/API 1-99 100-999 1,000+ Overall
respondents (32%) said they use three or more.
41% of respondents indicated they don't use any None 53% 39% 27% 41%

kind of open-source AI tool. "Hugging Face," Hugging Face 28% 42% 42% 35%
"PyTorch," and "TensorFlow" were the most
PyTorch 24% 34% 43% 33%
commonly selected choices.
TensorFlow 17% 28% 43% 28%
Segmenting responses by organization size, we
Scikit-learn 11% 16% 27% 18%
found the following (see details in Table 12):
FastAPI 13% 8% 18% 14%
• Respondents at large organizations were
more likely than others to say they use CUDA 6% 20% 16% 12%
"PyTorch," "TensorFlow," "Scikit-learn," and Keras 6% 9% 13% 10%
"NLTK." They were less likely than others to
OpenCV 6% 11% 11% 10%
select "None."
• Respondents at small organizations were NLTK 5% 5% 16% 9%
less likely than others to report using SpaCy 5% 6% 11% 8%
"Hugging Face," "PyTorch," "TensorFlow,"
Other, write in 5% 8% 9% 7%
"CUDA," and "Apache MXNet." They were
more likely than others to select "None." Apache MXNet 2% 8% 11% 6%

Fast.ai 2% 3% 8% 5%
Additionally, a few notable findings when
segmenting responses by organizations' GenAI H2O.ai 1% 3% 6% 3%
maturity level are as follows:
LightGBM 1% 2% 7% 3%
• Respondents at organizations in the Caffe 0% 5% 2% 2%
"Advanced" stages of GenAI maturity
were more likely than others to select Dask 1% 3% 1% 2%

"PyTorch," "TensorFlow," and "NLTK." Genism 1% 3% 1% 2%


They were also more likely than others to
DL4J 1% 2% 1% 1%
submit a write-in option.
PaddlePaddle 1% 2% 1% 1%
• Respondents at organizations in the
"Operational" stage of GenAI maturity were Shogun 0% 2% 1% 1%
more likely than others to say they use
Theano 0% 2% 2% 1%
"FastAPI."
n= 166 64 127 377
• Respondents at organizations in the "Pilot"
stage of GenAI maturity were more likely *% of columns
than others to select "Hugging Face."
• Respondents at organizations in the
"Exploratory" stage of GenAI maturity were less likely than others to select "Hugging Face," "PyTorch,"
"TensorFlow," "Scikit-learn," and "spaCy." Additionally, they were considerably more likely than others to report
using no open-source AI tools.
• 58% of respondents at organizations beyond the "Exploratory" stage of GenAI maturity selected at least one of the
top three open-source tools: "Hugging Face," "PyTorch," and "TensorFlow."

Additional details can be found in Table 13 (see Appendix).

Almost all respondents (97%) selected at least one of the 10* AI platforms listed, and nearly half (45%) selected three
or more. "ChatGPT" was the most commonly selected platform by a wide margin, with "OpenAI," "Google Gemini,"
and "Microsoft Copilot" trailing further behind, respectively.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 14


The following are observations we made when Table 14. Frequency of use of AI platforms by organization size*
segmenting by respondents' organization size Organization Size
(details in Table 14):
AI Platform 1-99 100-999 1,000+ Overall
• Respondents at large organizations were
ChatGPT 92% 86% 76% 84%
more likely than others to say they use
"Microsoft Azure AI" and “OpenAI,” and less OpenAI 47% 51% 58% 51%
likely than others to say they use "ChatGPT."
Google Gemini 49% 55% 49% 50%
• Respondents at mid-sized organizations
Microsoft Copilot 31% 58% 55% 44%
were more likely than others to select
"Google Gemini," and more likely than others Other, write in 23% 32% 22% 25%
to select the "Other, write in" option.
DeepSeek 25% 19% 14% 21%
• Respondents at small organizations were
Microsoft Azure AI 13% 23% 33% 21%
more likely than others to say they use
"ChatGPT" and "DeepSeek," and less likely Google Cloud AI 13% 14% 14% 15%
than others to select "Microsoft Copilot" and Amazon SageMaker 10% 6% 13% 10%
"Microsoft Azure AI."
NVIDIA (e.g., CUDA-X) 6% 12% 8% 8%
Segmenting responses by organizations' GenAI
IBM Watson 1% 4% 2% 2%
maturity, we noted the following (additional data
in Table 15): None 3% 1% 0% 2%

• Respondents at organizations at "Advanced" n= 169 69 132 393

GenAI maturity levels were more likely than *% of columns


others to say they use "OpenAI," "Google
Gemini," "Microsoft Azure AI," and "Google Cloud AI" to assist with their work. They were less likely than others to
say they use "ChatGPT."
• Respondents at organizations in the "Pilot" GenAI maturity stage were more likely than others to say they use
"Microsoft Copilot."
• Respondents at organizations in the "Exploratory" stage of GenAI maturity were less likely than others to say they
use "OpenAI," "Google Gemini," and "Microsoft Azure AI."

Table 15. Frequency of use of AI platforms by GenAI maturity*

GenAI Maturity

AI Platform Exploratory Pilot Operational Advanced Overall

ChatGPT 85% 87% 84% 76% 84%

OpenAI 39% 53% 56% 64% 51%

Google Gemini 43% 51% 51% 60% 50%

Microsoft Copilot 43% 50% 43% 43% 44%

Other, write in 21% 29% 21% 26% 25%

DeepSeek 18% 22% 19% 24% 21%

Microsoft Azure AI 12% 22% 27% 33% 21%

Google Cloud AI 10% 14% 16% 24% 15%

Amazon SageMaker 4% 11% 14% 14% 10%

NVIDIA (e.g., CUDA-X) 7% 6% 9% 14% 8%

IBM Watson 3% 0% 3% 2% 2%

None 3% 1% 2% 0% 2%

n= 109 125 103 42 393

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 15


CONCLUSIONS
At organizations that are past the "Exploratory" phase for GenAI, open-source AI tools like Hugging Face, PyTorch,
and TensorFlow play an important role in developing software with AI/ML components. These tools are especially
beneficial at large organizations, likely because these organizations tend to be better equipped to build in-house AI/ML
functionality in their applications rather than relying on external AI platforms.

As far as AI platforms are concerned, most developers are utilizing AI technologies like ChatGPT to assist them with
their work. The wide availability of these platforms as well as the range of tasks they can help perform make them
valuable tools for almost all developers, regardless of the specifics of the organizations and applications they are
working with.

Multimodal AI
We asked the following:
• Have you worked on or experimented with multimodal AI models in your projects?
• What modalities do you think are most important in multimodal AI applications?
• In what areas do you see the most value in multimodal AI?

Results:

Figure 9. Work with multimodal AI models [n=381]

4%

15%
Yes, in a production environment

Yes, in an experimental or research setting


46%
No, but I'm interested in using them
36%
No, and I don't see a need for them

Figure 10. Opinion: Most important modalities in multimodal AI applications [n=384]

100
89%

80 73% 72%

60
49%

40 34%

20

1% 4%
0
Text Images Video Audio/speech Sensor data Other, write in No opinion

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 16


Figure 11. Opinion: Most valuable areas in multimodal AI [n=383]

75%
75
67%

52%
50
43%
37%

25

7%
2% 3%
0
Content Enhanced AI-powered Medical and Robotics and Other, None No
creation search and assistants healthcare autonomous write in opinion
recomm. systems apps systems

OBSERVATIONS
Table 16. Work with multimodal AI models by GenAI maturity*
Just over half of respondents (51%) said
they have worked with multimodal AI GenAI Maturity
Work With
models in some capacity, and most of the Multimodal AI Exploratory Pilot Operational Adv. Overall
remaining respondents are interested in
Yes, in a prod
using them (46% of all respondents). 6% 13% 18% 36% 15%
environment
Respondents at organizations with Yes, in an
"Advanced" GenAI maturity were most experimental or 20% 44% 44% 33% 36%
likely to say they have worked with research setting
multimodal AI in a prod environment, while No, but I'm
those at organizations in the "Exploratory" interested in 71% 41% 34% 21% 46%
stage of GenAI maturity were most likely using them

to indicate that they have not worked with No, and I don't
multimodal AI (further details in Table 16). see a need for 3% 2% 4% 10% 4%
them
"Text," "Images," and "Audio/speech" were
n= 110 124 102 42 381
the modalities respondents deemed most
important. Respondents at large *% of columns
organizations were slightly more likely to
find "Text," "Images," and "Sensor data"
Table 17. Opinion: Most important modalities in multimodal
important, but otherwise, there were no
AI applications by organization size*
significant differences when segmenting by
organization size (see details in Table 17). Organization Size

Modality 1-99 100-999 1,000+ Overall


Most respondents (95%) found some sort of
value in multimodal AI, with "AI-powered Text 85% 88% 94% 89%
assistants," "Content creation," and "Enhanced Images 67% 72% 82% 73%
search and recommendation systems" being
Video 46% 51% 52% 49%
the most commonly selected areas of value.
Segmenting by respondents' organization Audio/speech 70% 70% 75% 72%
size, we found the following: Sensor data 31% 32% 39% 34%
• Respondents at large organizations were Other, write in 1% 1% 2% 1%
more likely than others to find "AI-powered
No opinion 6% 3% 2% 4%
assistants," "Medical and healthcare
applications," and "Robotics and autonomous n= 170 69 132 384
systems" as valuable areas for multimodal AI. *% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 17


• Respondents at mid-sized organizations were more likely than others to find value for multimodal AI in
"Enhanced search and recommendation systems."
• Respondents at small organizations were more likely than others to find "Content creation" as a valuable area for
multimodal AI, and less likely than others to find multimodal AI valuable for "AI-powered assistants."

Additional details can be found in Table 18 (see Appendix).

CONCLUSIONS
While most organizations are not using multimodal models in their production applications, developers are generally
interested in their potential, especially for AI assistants with multisensory inputs/outputs and for multimodal content
creation. Text, image, and speech modalities are likely to be organizations' main focus over the next several years, but
we expect models incorporating video and sensor data modalities to be highly beneficial for more niche use cases.

Research Target Three: LLMs and Intelligent Search


Our research related to advanced search and language capabilities centered around two key topic areas:
1. Large language models
2. Intelligent search

Large Language Models


We asked the following:
• Does your organization currently use, or is considering using, large language models (LLMs)?
• In what ways does your organization use LLM technology?
Results:

Figure 12. Use of large language models [n=380]

4%
9%

Yes, we are using them

No, but we are considering them


27%
60% No, and we are not considering them

Not sure

Figure 13. LLM use cases [n=222]

Vector databases 40%


Model distillation 10%
Data labeling and annotation 23%
Prompt design and engineering 57%
Content generation 72%
Code generation 57%
Automated customer support 22%
Data extraction/analysis 50%
Market research 32%
Sentiment analysis 29%
Other, write in 5%
0 20 40 60 80

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 18


OBSERVATIONS
The majority of respondents said their organization is currently using LLMs in some capacity, and about another
quarter of respondents said they are considering LLM use. We observed a positive correlation between GenAI
maturity and LLM use, and 90% of respondents at organizations with "Advanced" GenAI maturity reported that their
organization uses LLMs. Further details can be found in Table 19 (see Appendix). Compared to the results of our 2024
Enterprise AI survey, LLM use has increased dramatically, while response rates for respondents at organizations only
considering LLMs has decreased. Further details are available in Table 20 (see Appendix).

91% of respondents indicated that their Table 21. LLM use cases: 2024-2025
organization uses LLMs for multiple use cases
LLM Use Case 2024 2025 % Change
(i.e., they selected two or more of the use cases
provided), and more than half of respondents Content generation 64% 72% +8%
(54%) selected at least four LLM use cases. Prompt design and engineering 60% 57% -3%
"Content generation" was the most commonly
Code generation 55% 57% +2%
selected LLM use case, but "Prompt design and
engineering," "Code generation," and "Data Data extraction/analysis 38% 50% +12%

extraction/analysis" all had response rates over Vector databases 53% 40% -13%
50% as well. Market research 23% 32% +9%

Compared to last year's results, response rates for Sentiment analysis 34% 29% -5%
"Data extraction/analysis," "Market research," and Data labeling and annotation 21% 23% +2%
"Content generation" all increased significantly,
Automated customer support 28% 22% -6%
while rates for "Automated customer support"
and "Vector databases" decreased (details can be Model distillation 15% 10% -5%
found in Table 21). Other, write in 2% 5% +3%

Segmenting responses by organization size, n= 81 222 -


we found the following results, with additional
*% of columns
details in Table 22:
• Respondents at large organizations Table 22. LLM use cases by organization size*
were more likely than others to say their
Organization Size
organization utilizes LLMs for several
LLM Use Case 1-99 100-999 1,000+ Overall
of the use cases, including "Content
generation," "Prompt design and Content generation 70% 62% 78% 72%
engineering," and "Code generation." Prompt design and engineering 56% 49% 63% 57%
• Respondents at mid-sized
Code generation 52% 51% 68% 57%
organizations were less likely than
others to select "Content generation" Data extraction/analysis 45% 51% 54% 50%

and "Prompt design and engineering" Vector databases 29% 38% 54% 40%
as LLM use cases. Market research 42% 26% 23% 32%
• Respondents at small organizations
Sentiment analysis 25% 31% 33% 29%
were more likely than others to say their
organization uses LLMs for "Market Data labeling and annotation 19% 18% 32% 23%

research." Additionally, they were Automated customer support 16% 15% 32% 22%
less likely than others to select "Data
Model distillation 9% 8% 14% 10%
extraction/analysis," "Vector databases,"
Other, write in 4% 8% 4% 5%
and "Sentiment analysis."
n= 99 39 81 222
CONCLUSIONS
*% of columns
LLM use is widespread at this point, and
considering the popularity of text as an
AI modality, it is no surprise. Text-based communication is still the primary method of user interaction for most
applications, so the number of organizations utilizing LLMs to leverage app data for AI/ML will most likely continue to
rise. Generating content is the primary LLM use case at the moment, and there is potentially some room for growth
in that area, but we also expect to see LLMs being used for a wider variety of applications in the future.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 19


Intelligent Search
We asked the following:
• What types of intelligent search technologies have you or your organization used or implemented?
• How has intelligent search improved your or your organization's ability to find relevant information?*
• In your opinion, in what areas has intelligent search had the most noticeable impact in your organization?*
*Note: This question was only asked to respondents who did not answer "None" to the question "What types of intelligent search
technologies have you or your organization used or implemented?"

Results:

Figure 14. Types of intelligent search technologies implemented [n=376]

50 46%

40

31% 32%
30

22%
20 17%

10
2%
0
AI-powered Vector Semantic Hybrid Other, None
search search search search write in

Figure 15. Opinion: Intelligent search's ability to find relevant information [n=245]*

2%
Significantly improved
12%
Somewhat improved
33%
No noticeable difference

Made it somewhat worse

Made it significantly worse


53%

*Note: "Made it significantly worse" was provided as an answer


option, but no respondents chose it.

Figure 16. Opinion: Most impactful areas for intelligent search [n=250]

Code search and development tools 35%

Customer support and self-service 40%

Data analysis and research 53%

E-commerce and product discovery 15%


Knowledge management and
internal documentation
49%

Other, write in 4%

None 4%

0 10 20 30 40 50 60

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 20


OBSERVATIONS
About two-thirds of respondents said their organization uses intelligent search in some way, with "AI-powered
search (e.g., Elasticsearch with ML, OpenAI search tools)" being the most commonly selected type. Segmented by
GenAI maturity, respondents at organizations with "Advanced" maturity were more likely than others to select all four
types of intelligent search provided, while respondents at organizations in the "Exploratory" stage of GenAI maturity
were the most likely to select "None" (see details in Table 23).

Table 23. Types of intelligent search technologies by GenAI maturity*

GenAI Maturity

Intelligent Search Type Exploratory Pilot Operational Advanced Overall

AI-powered search (e.g., Elasticsearch with ML,


39% 42% 53% 59% 46%
OpenAI search tools)

None 43% 30% 28% 20% 32%

Hybrid search (i.e., combining traditional keyword


27% 34% 27% 41% 31%
search with AI)

Vector search (e.g., Pinecone, Weaviate, FAISS) 8% 26% 19% 51% 22%

Semantic search (e.g., BERT-based retrieval models) 12% 16% 17% 37% 17%

Other, write in 1% 2% 4% 2% 2%

n= 109 123 101 41 376

*% of columns

Most respondents (86%) said they thought intelligent search has at least somewhat improved their ability to find
relevant information. Very few respondents (2%) said they thought intelligent search made their ability to find info
"Somewhat worse," and no respondents said they thought it "Made it significantly worse."

Respondents at small organizations were most likely to say that intelligent search "Significantly improved"
their ability to find information, and respondents at mid-sized organizations were most likely to say intelligent
search "Somewhat improved" it. Segmenting by GenAI maturity, respondents at organizations with "Advanced"
GenAI maturity were more likely than others to say intelligent search "Significantly improved" their ability to find
information but were less likely than others to say they thought it "Somewhat improved" it. Additional details can be
found in Tables 24 and 25 (see Appendix).
Table 26. Most impactful areas for intelligent search by organization size*
"Data analysis and research" and Organization Size
"Knowledge management and internal
documentation" were the most commonly Area of Impact 1-99 100-999 1,000+ Overall

selected areas that respondents said have Data analysis and


53% 54% 52% 53%
the most noticeable impact on their research
organization, with around half of Knowledge
respondents selecting each. management and 43% 65% 51% 49%
internal documentation
Segmenting the results by organization
Customer support and
size, we observed the following (additional 27% 43% 53% 40%
self-service
data in Table 26):
Code search and
30% 35% 43% 35%
• Respondents at large organizations development tools
were more likely than others to find E-commerce and
17% 19% 11% 15%
intelligent search having a noticeable product discovery
impact on "Code search and Other, write in 5% 3% 2% 4%
development tools" and "Customer
None 5% 0% 4% 4%
support and self-service." They were
less likely than others to select n= 115 37 95 250
"E-commerce and product discovery."
*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 21


• Respondents at mid-sized organizations were more likely than others to select "Knowledge management and
internal documentation."
• Respondents at small organizations were less likely than others to say intelligent search has a noticeable impact
on "Knowledge management and internal documentation" and "Customer support and self-service."

When segmenting by respondents' organizations' GenAI maturity, we noted the following (further details in Table 27):
• Respondents at organizations with "Advanced" GenAI maturity were more likely than others to say intelligent
search has a noticeable impact on "Code search and development tools" and "Customer support and self-service."
• Respondents at organizations in the "Exploratory" stage of GenAI maturity were more likely than others to find
that intelligent search impacts "Data analysis and research." They were less likely than others to select "Code
search and development tools" and "Customer support and self-service."

Table 27. Most impactful areas for intelligent search by GenAI maturity*

GenAI Maturity

Area of Impact Exploratory Pilot Operational Advanced Overall

Data analysis and research 25% 37% 38% 44% 35%

Knowledge management and internal documentation 23% 36% 48% 63% 40%

Customer support and self-service 62% 53% 48% 47% 53%

Code search and development tools 17% 13% 15% 19% 15%

E-commerce and product discovery 40% 55% 54% 44% 49%

Other, write in 2% 5% 3% 3% 4%

None 8% 1% 4% 3% 4%

n= 60 86 71 32 250

*% of columns

CONCLUSIONS
The majority of organizations are actively using intelligent search, with AI-powered search like Elasticsearch with
ML being the predominant type of intelligent search used. It is probable that even more rely on intelligent search
technologies that they don't realize they are using, but we expect to see active usage of intelligent search increase
as AI/ML technology improves. Developers overwhelmingly find that intelligent search helps them and their
organizations find relevant information more easily and quickly, though currently the areas where it has the most
noticeable impact varies from business to business.

Future Research
Our analysis here only touched the surface of the available data, and we will look to refine and expand our Generative
AI research as we produce future Trend Reports. Please contact [email protected] if you would like to discuss
any of our findings or supplementary data.

SEE APPENDIX ON NEXT PAGE OR SKIP TO NEXT SECTION

G. Ryan Spain
G. Ryan Spain lives on a beautiful two-acre farm in McCalla, Alabama with his lovely
@grspain wife. He is a polyglot software engineer with an MFA in poetry, a die-hard Emacs
@grspain fan and Linux user, a lover of The Legend of Zelda, a journeyman data scientist, and
@grspain a home cooking enthusiast. When he isn't programming, he can often be found
watching Um, Actually on Dropout with a glass of red wine or a cold beer.
gryanspain.com

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 22


"Key Research Findings" Appendix

Table 1. Prevalence of AI-/ML-related roles: 2024-2025

Role 2024 2025 % Change

Data analyst 49% 38% -11%

Data scientist 55% 37% -18%

Data engineer 47% 30% -17%

Machine learning (ML) engineer 36% 30% -7%

Business intelligence (BI) developer 40% 28% -12%

AI product manager 23% 23% 0%

AI solutions architect 25% 23% -3%

None 10% 20% +10%

Machine learning ops (MLOps) engineer 21% 17% -5%

AI research scientist 23% 15% -8%

Chief data officer (CDO) 13% 14% +1%

Other, write in 4% 12% +8%

AI trainer/annotation specialist 8% 10% +1%

AI ethicist/algorithm bias analyst 6% 5% -1%

n= 191 399 -

Table 2. Prevalence of AI-/ML-related roles by organization size*

Organization Size

Role 1-99 100-999 1,000+ Overall

Data analyst 21% 43% 61% 38%

Data scientist 17% 41% 61% 37%

Machine learning (ML) engineer 13% 28% 51% 30%

Data engineer 17% 29% 49% 30%

Business intelligence (BI) developer 15% 29% 42% 28%

AI product manager 19% 19% 33% 23%

AI solutions architect 14% 21% 37% 23%

None 32% 22% 6% 20%

Machine learning ops (MLOps) engineer 5% 13% 32% 17%

AI research scientist 9% 15% 23% 15%

Chief data officer (CDO) 11% 7% 21% 14%

Other, write in 11% 15% 11% 12%

AI trainer/annotation specialist 8% 4% 14% 10%

AI ethicist/algorithm bias analyst 2% 6% 8% 5%

n= 170 68 132 399

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 23


Table 4. AI security practices and implementations by organization size*

Organization Size

Security Steps 1-99 100-999 1,000+ Overall

Data protection 29% 37% 52% 39%

Real-time monitoring 24% 34% 41% 31%

None 38% 37% 9% 28%

Threat detection 13% 31% 36% 25%

Ethical AI and bias mitigation 21% 22% 31% 24%

Predictive analytics 19% 10% 34% 23%

Vulnerability scanning 15% 29% 32% 23%

AIOps 13% 16% 36% 22%

Anomaly detection 15% 15% 37% 22%

Security standards (e.g., OWASP Top 10, NIST) 11% 25% 34% 22%

Explainability and transparency 17% 15% 30% 21%

Alerts 17% 18% 23% 19%

Model security and integrity 8% 15% 34% 19%

Incident response 11% 22% 25% 18%

Other, write in 2% 1% 5% 4%

n= 167 68 128 376

*% of columns

Table 6. Organizations' ability to defend AI apps by organization size*

Organization Size

Preparedness 1-99 100-999 1,000+ Overall

Fully prepared 4% 10% 20% 11%

Moderately prepared 21% 25% 41% 30%

Limited preparedness 34% 26% 29% 30%

Unprepared 41% 38% 10% 30%

n= 167 68 129 376

*% of columns

Table 7. Organizations' ability to defend AI apps by GenAI maturity*

GenAI Maturity

Preparedness Exploratory Pilot Operational Advanced Overall

Fully prepared 7% 4% 9% 45% 11%

Moderately prepared 12% 32% 46% 29% 30%

Limited preparedness 24% 37% 35% 14% 30%

Unprepared 57% 27% 11% 12% 30%

n= 107 123 101 42 376

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 24


Table 9. Worthwhile applications of AI/ML in the enterprise by organization size*

Organization Size

AI/ML Application 1-99 100-999 1,000+ Overall

Chatbots/virtual assistants 69% 78% 79% 74%

Predictive analytics 61% 76% 70% 66%

Smart analytics and BI 50% 58% 54% 52%

Fraud detection and security 41% 51% 60% 49%

Customer relationship management 49% 46% 48% 48%

Personalized marketing 54% 49% 39% 47%

Sentiment analysis 41% 45% 48% 45%

Process automation and RPA 37% 52% 49% 44%

Healthcare diagnostics 32% 36% 39% 36%

Supply chain optimization 29% 27% 36% 30%

Human resources and talent acquisition 31% 31% 27% 30%

Enterprise resource planning 22% 31% 29% 26%

Other, write in 12% 4% 8% 9%

None 1% 1% 1% 1%

n= 170 67 132 384

*% of columns

Table 11. GenAI ethics concerns raised/faced at organizations by GenAI maturity*

GenAI Maturity

Ethical Concern Exploratory Pilot Operational Advanced Overall

Privacy and data protection 66% 79% 70% 71% 72%

Safety and security 44% 55% 48% 50% 49%

Transparency 35% 46% 52% 45% 44%

Accountability 32% 37% 30% 33% 33%

Informed consent 27% 19% 29% 36% 26%

Non-discrimination 19% 22% 19% 33% 21%

Human autonomy 23% 19% 20% 24% 21%

Sustainability 19% 20% 15% 26% 19%

Collaboration and inclusivity 18% 14% 18% 17% 17%

I don't know 14% 5% 6% 10% 8%

None 11% 6% 6% 2% 7%

Other, write in 3% 6% 2% 2% 4%

n= 110 125 102 42 379

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 25


Table 13. Frequency of use of open-source AI tools, frameworks, and APIs by GenAI maturity*

GenAI Maturity

Tool/Framework/API Exploratory Pilot Operational Advanced Overall

None 61% 33% 34% 28% 41%

Hugging Face 21% 46% 39% 33% 35%

PyTorch 20% 39% 33% 46% 33%

TensorFlow 14% 29% 34% 49% 28%

Scikit-learn 11% 18% 22% 23% 18%

FastAPI 8% 14% 20% 13% 14%

CUDA 11% 10% 12% 18% 12%

Keras 6% 8% 11% 13% 10%

OpenCV 8% 5% 14% 10% 10%

NLTK (Natural Language Toolkit) 3% 10% 11% 18% 9%

SpaCy 1% 10% 9% 10% 8%

Other, write in 5% 8% 6% 18% 7%

Apache MXNet 5% 2% 12% 8% 6%

Fast.ai 3% 5% 4% 8% 5%

H2O.ai 1% 3% 4% 8% 3%

LightGBM 4% 2% 4% 8% 3%

Caffe 2% 2% 1% 3% 2%

Dask 3% 0% 0% 3% 2%

Genism 2% 1% 1% 3% 2%

DL4J 1% 2% 0% 3% 1%

PaddlePaddle 1% 1% 1% 0% 1%

Shogun 1% 0% 1% 3% 1%

Theano 1% 1% 1% 0% 1%

n= 108 119 97 39 377

*% of columns

Table 18. Opinion: Most valuable areas in multimodal AI by organization size*

Organization Size

Area of Value 1-99 100-999 1,000+ Overall

AI-powered assistants (e.g., voice, text, and vision integration) 69% 75% 83% 75%

Content creation (e.g., generating text, images, or videos) 72% 64% 63% 67%

Enhanced search and recommendation systems 50% 61% 49% 52%

Medical and healthcare applications (e.g., medical imaging, patient data) 40% 36% 51% 43%

Robotics and autonomous systems (e.g., combining vision and control signals) 35% 33% 42% 37%

Other, write in 8% 4% 8% 7%

No opinion 3% 3% 3% 3%

None 2% 0% 2% 2%

n= 170 69 132 383

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 26


Table 19. Use of large language models by GenAI maturity*

GenAI Maturity

LLM Use Exploratory Pilot Operational Advanced Overall

Yes, we are using them 29% 66% 73% 90% 60%

No, but we are considering them** 43% 27% 19% 2% 27%

No, and we are not considering them 10% 2% 2% 5% 4%

Not sure 18% 6% 6% 2% 9%

n= 110 125 103 42 380

*% of columns

Table 20. Use of large language models: 2024-2025

LLM Use 2024 2025

Yes, we are using them 33% 60%

No, but we are considering them** 46% 27%

No, and we are not considering them 9% 4%

Not sure 12% 9%

n= 163 380

**Note: In 2024, we mistakenly included "Yes, we are considering it" and "No, but we are considering it" as separate answer options for
this question. These received response rates of 28% and 18%, respectively. We have combined those results here, but it is possible that this
error affected last year's results, which in turn would affect these findings' YOY comparison.

Table 24. Opinion: Intelligent search's ability to find relevant


information by organization size*

Organization Size

Opinion 1-99 100-999 1,000+ Overall

Significantly improved 42% 19% 29% 33%

Somewhat improved 44% 69% 55% 53%

No noticeable difference 10% 11% 16% 12%

Made it somewhat worse 4% 0% 0% 2%

Made it significantly worse 0% 0% 0% 0%

n= 111 36 95 245

*% of columns

Table 25. Opinion: Intelligent search's ability to find relevant information by GenAI maturity*

GenAI Maturity

Opinion Exploratory Pilot Operational Advanced Overall

Significantly improved 36% 32% 28% 45% 33%

Somewhat improved 53% 54% 55% 45% 53%

No noticeable difference 10% 14% 13% 10% 12%

Made it somewhat worse 2% 0% 4% 0% 2%

Made it significantly worse 0% 0% 0% 0% 0%

n= 59 84 71 31 245

*% of columns

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 27


CONTRIBUTOR INSIGHTS

A Pulse on Generative AI Today


Navigating the Landscape of Innovation and Challenges
By Dr. Tuhin Chattopadhyay, Professor of AI & Blockchain at Jagdish Sheth School of Management

Generative AI (GenAI) has become a transformative force, redefining how machines generate, retrieve, and process
information across industries. This article explores its rapid evolution, highlighting key breakthroughs, industry
applications, and emerging trends. From the rise of large language models (LLMs) and retrieval-augmented
generation (RAG) to the growing role of agentic AI, the analysis delves into innovations driving AI's transformation
and the challenges shaping its responsible adoption. Early breakthroughs like GPT-3 and DALL-E paved the way
for GPT-4o, Claude 3.5, and Gemini Ultra, enabling real-time memory-augmented reasoning and cross-modal
capabilities. Figure 1 shares the key developments across the timeline.

Figure 1. The evolution of generative AI

Advancements in Model Architectures and Efficiency


As demand for scalable, cost-efficient, and explainable AI increases, model architectures have evolved to address
challenges in speed, interpretability, and computational efficiency. Table 1 summarizes the key advancements:

Table 1. Key trends in GenAI development and deployment

Trend Description

Scaling LLMs • Sparse models and MoE: Models like Google's Switch Transformer use MoE techniques to activate only relevant
efficiently model components per query, significantly reducing consumption costs.
• Memory-enhanced LLMs: DeepSeek R1, GPT-4o, and Claude 3.5 Sonnet have longer context windows, enabling
AI to retain historical interactions without excessive compute overhead.
• Low-rank adaptation (LoRA) and parameter-efficient fine tuning: These techniques allow fine-tuning of specific
model layers, making custom LLM deployment feasible for enterprises without requiring full model training.

The rise of • Improved contextual awareness: Models like OpenAI's Sora, Gemini Ultra, and DeepSeek Vision process and
multimodal generate text, images, videos, and audio simultaneously.
models • Hybrid AI: This type of AI involves combining LLMs, RAG, and structured databases to enhance factual
correctness while maintaining generative creativity.

Latency and cost • Quantization/pruning techniques: These techniques reduce model sizes without compromising accuracy for
optimization in edge deployments.
AI deployments • Efficient inference techniques: Innovations in FlashAttention and speculative decoding improve generation
speed, which reduces costs for real-time applications.
• Serverless AI and on-device AI: New deployment paradigms allow lightweight AI models to run directly on
consumer devices, reducing dependency on cloud-based infrastructure.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 29


Latest Developments in Generative AI (Early 2025)
Generative AI continues to advance at an unprecedented pace, with the latest breakthroughs in LLMs, RAG, and
multimodal AI pushing the boundaries of efficiency, accuracy, and real-world applicability. In early 2025, significant
advancements have been made in GenAI tooling, particularly in LLMs and multimodal AI.
• Advancements in LLMs include:
∘ OpenAI o1 – designed to enhance reasoning capabilities beyond traditional prediction-based models,
improving performance in complex tasks like coding, mathematics, and scientific problem solving
∘ Google Gemini 2.0 – focused on autonomous agents capable of multi-step problem solving, featuring "Deep
Research" for efficient information gathering and enhanced AI overviews for multimodal query handling
∘ Amazon Nova – comprises models like Nova Micro, Nova Lite, and Nova Pro, each tailored for specific
applications ranging from cost-effective text processing to advanced multimodal tasks
• As for multimodal AI, recent advancements include:
∘ DeepSeek Janus Pro – a multimodal AI model with an image generator reportedly surpassing OpenAI's
DALL-E 3 in multiple benchmarks
∘ Meta Llama 3.2 – features multimodal capabilities that process both text and visual data simultaneously,
marking a significant leap in AI's ability to comprehend complex, context-aware prompts

These developments collectively contribute to more intelligent, context-aware, and versatile AI systems, poised to
transform various industry sectors.

The Emergence of Agentic AI


Agentic AI is redefining how AI systems operate — moving from passive assistants to autonomous decision makers.
By integrating real-time retrieval, multi-step reasoning, and goal-driven execution, AI agents are now capable of
self-directed actions, dynamic problem solving,
and workflow automation across industries. The Figure 2. Agentic AI workflow
agentic AI workflow, shared in Figure 2, follows
a structured progression, where AI agents
continuously interact, learn, and optimize tasks
through a dynamic, goal-driven process.

Various agentic AI frameworks have emerged,


enabling autonomous task execution, multi-
agent collaboration, and dynamic knowledge
retrieval. Table 2 below highlights some of the key
frameworks driving the evolution of AI agents and
their core functionalities.

Table 2. Overview of leading frameworks and their architectures

Agentic Framework Features

OpenAI Swarm Collaborative agent framework for task execution

LangGraph Framework for creating multi-agent workflows with integrated language models

AutoGen Automated generation of agent behaviors and interactions

CrewAI Team-based agent collaboration

DeepSeek R1 An open-source AI model emphasizing reasoning capabilities and efficiency

Agentic RAG: AI Agents Enhancing Information Retrieval


Traditional RAG systems retrieve information from external knowledge sources to reduce hallucinations and improve
response accuracy. However, agentic RAG takes this a step further by integrating AI agents that autonomously
search, verify, and synthesize knowledge, making retrieval more context-aware, adaptive, and multi-step. The
agentic RAG workflow, shared in Figure 3, enhances information retrieval by leveraging AI agents to iteratively refine
searches, verify sources, and synthesize knowledge, ensuring accurate and context-aware responses.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 30


Figure 3. Agentic RAG workflow: AI-driven knowledge retrieval and verification

Generative AI in Industry Use Cases


The impact of GenAI is becoming increasingly tangible across industries, from automating workflows to enhancing
decision making. To illustrate how generative AI is transforming industries, practical demonstrations and code snippets
are presented in the following sections to showcase the application of LLMs, RAG, and agentic AI in several domains.

Legal and Compliance Use Case: AI-Powered Document Summarization


Law firms and compliance teams often deal with lengthy contracts and regulatory documents. AI can summarize key
clauses and identify risks instantly.

The following Python code summarizes legal documents using GPT-4 and LangChain:

from langchain.llms import OpenAI


from langchain.chains.summarize import load_summarize_chain
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load the legal document


pdf_loader = PyPDFLoader("contract.pdf")
docs = pdf_loader.load()

# Split text into manageable chunks


text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
split_docs = text_splitter.split_documents(docs)

# Initialize GPT-4 model (Ensure API key is configured)


llm = OpenAI(model="gpt-4", openai_api_key="your_api_key")

# Load summarization chain using the correct method


summarizer = load_summarize_chain(llm, chain_type="map_reduce")

# Summarize the contract (Ensuring correct input format)


summary = summarizer.run("\n".join([doc.page_content for doc in split_docs]))

# Print the summarized output


print(summary)

This command extracts key points from a legal contract or compliance document and saves hours of manual review
time for legal professionals.

Banking and FinTech Use Case: AI-Powered Financial Fraud Detection


Banks and financial institutions require real-time fraud detection to prevent unauthorized transactions. The Python
code in the block on the following page is used to detect anomalous transactions using AI.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 31


import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import LabelEncoder

# Load financial transaction dataset


df = pd.read_csv("transactions.csv")

# Ensure datetime is properly handled if present


if "transaction_time" in df.columns:
df["transaction_time"] = pd.to_datetime(df["transaction_time"])
# Convert datetime to a numerical representation (e.g., timestamp)
df["transaction_time"] = df["transaction_time"].apply(lambda x: x.timestamp())

# Encode categorical variables


categorical_cols = ["location"] # Add more categorical columns as needed
for col in categorical_cols:
if col in df.columns:
df[col] = pd.Categorical(df[col]).codes

# Train anomaly detection model


model = IsolationForest(contamination=0.01, random_state=42)

# Select relevant features


features = ["amount", "transaction_time", "location"]
if not all(feature in df.columns for feature in features):
print("Not all required features are present in the dataset.")
else:
# Fit and predict anomalies
df["fraud_score"] = pd.Series(model.fit_predict(df[features]), index=df.index)

# Identify suspicious transactions


suspicious = df[df["fraud_score"] == -1]

# Display suspicious transactions


print(suspicious)

This code uses unsupervised learning (Isolation Forests) to detect fraudulent transactions, thereby identifying
anomalous spending behavior for real-time fraud prevention.

Real-World Implementations Across Sectors


Generative AI has rapidly transitioned from theoretical models to practical applications, transforming various
industries by enhancing efficiency, creativity, and decision-making processes. Table 3 showcases the examples of
how different sectors are leveraging generative AI.

Table 3. Applications of GenAI across industries

Industry Application of GenAI

Healthcare • Medical imaging: assist in reconstructing high-quality images from low-resolution scans, improving
diagnostic accuracy
• Drug discovery: generate potential molecular structures, accelerating the identification of new drug candidates

Finance • Fraud detection: analyze transaction patterns to identify anomalies indicative of fraudulent activities
• Risk assessment: simulate various financial scenarios, aiding in comprehensive risk evaluation and management

Manufacturing • Product design: create optimized design prototypes, enhancing product development efficiency
• Predictive maintenance: predict equipment failures, allowing for proactive maintenance scheduling

Entertainment • Content creation: produce music, art, and scripts, offering new tools for creators and reducing production time
• Game development: generate realistic environments and characters, enriching the gaming experience

Customer • Chatbots: provide personalized responses, improving customer engagement and support efficiency
service • Sentiment analysis: assess customer feedback to inform service improvements

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 32


Regulatory and Ethical Considerations
To ensure responsible deployment, organizations must navigate evolving regulatory frameworks and ethical
considerations that safeguard against bias, misinformation, and security risks. This section explores the regulatory
frameworks shaping AI governance and the ethical principles necessary to mitigate risks while fostering innovation.

Emerging Regulatory Frameworks


Governments worldwide are implementing AI regulations to ensure responsible innovation while mitigating risks:
• European Union (EU) – The AI Act establishes a risk-based classification system, defining high-risk, limited-risk,
and prohibited AI applications, with strict compliance requirements for high-risk AI deployments.
• United States (US) – AI regulation is evolving through a combination of federal initiatives and state-
level legislation. At the state level, California has been proactive in enacting AI-related laws enhancing AI
transparency and protecting individuals from AI-generated harms. These include laws requiring developers
to disclose training data used in AI systems and measures to combat deceptive AI-generated content in
political advertisements.
• Singapore – Introduced the Model AI Governance Framework, providing best practices for ethical AI
development and corporate AI governance.
• China – Enforced Interim Measures for Generative AI Services, setting strict compliance requirements for AI
deployment, content moderation, and risk assessment.

These global regulatory efforts reflect an ongoing shift toward ensuring AI safety, transparency, and fairness while
fostering technological advancement.

Ethical AI Development
Developing ethical AI necessitates adherence to core principles that ensure technology serves humanity responsibly:
• AI systems should operate transparently, allowing users to understand how decisions are made.
• Ensuring AI applications are free from biases that could lead to unjust outcomes is crucial.
• Safeguarding personal information throughout the AI lifecycle is essential.
• Clear guidelines must define who is accountable for AI-driven decisions and their societal impacts.

Implementing these principles fosters trust and aligns AI innovations with human values.

Future Trends and Predictions for 2025


The rapid advancements in GenAI are paving the way for more intelligent, personalized, and secure systems, with
next-generation LLMs and AI-powered digital identities set to redefine user interactions and data protection.

In 2025, LLMs are revolutionizing AI with enhanced capabilities:


• Multimodal integration – Models like Baidu's upcoming Ernie 5 can process and convert between text, video,
images, and audio, enabling more dynamic applications.
• Advanced reasoning – OpenAI's o3-mini model demonstrates improved logical reasoning, excelling in complex
tasks such as coding and scientific problem solving.

The Rise of Memory-Augmented LLMs


Traditional stateless LLMs treat every query independently, but next-gen AI models are developing long-term
memory capabilities, enabling context-aware and personalized interactions. For example, models like DeepSeek
R1, Claude 3.5, and upcoming GPT-5 introduce long-term contextual awareness. In edge AI deployments, there is a
decrease in cloud dependence thanks to models such as Mistral and DeepSeek Vision, which run on local devices.

AI-Powered Digital Identities and Secure Authentication


The integration of AI with blockchain-based identity management is creating secure, tamper-proof digital identities.
This will redefine how users interact with AI systems while ensuring privacy and security. Key innovations include
decentralized AI identity systems — where AI profiles stored on blockchain or federated learning networks prevent
identity fraud — and personal AI agents managing digital identities — meaning users will have AI-driven digital twins
that act as personal representatives in the metaverse, finance, and legal domains.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 33


Evolution of AI Agents and AI-Driven Scientific Discoveries
AI is transforming scientific research by automating hypothesis generation, experimentation, and analysis, leading
to groundbreaking discoveries. In the material science realm, AI-driven quantum simulations are optimizing battery
technology, superconductors, and nanomaterials. Meanwhile, in climate science, machine learning models are
predicting climate trends, optimizing energy efficiency, and accelerating carbon capture research.

Actionable Steps for Enterprises and Developers


Enterprises should invest in AI infrastructure and optimize RAG pipelines and multi-agent AI frameworks for
scalability. It is also important that they ensure AI compliance by aligning with evolving AI regulations. As for AI
developers and researchers, they should develop modular AI architectures that combine LLMs, real-time retrieval,
and multimodal reasoning. Another key step is optimizing AI for real-time applications via quantization, LoRA fine-
tuning, and low-latency inference techniques.

Conclusion and Call to Action


Generative AI is rapidly transforming into autonomous, multimodal, and memory-augmented systems, driving
advancements in LLMs, RAG, and agentic AI across industries. Breakthroughs in context-aware AI, efficient model
architectures, and regulatory frameworks are shaping the future of responsible AI adoption. As enterprises integrate
AI-driven automation and scientific discovery accelerates, the focus must remain on balancing innovation with
ethical governance, security, and fairness to ensure AI serves as a force for positive transformation. Ensuring
transparency, fairness, and security will be crucial in fostering trust and accountability. AI should augment human
intelligence, not replace it, and drive progress while upholding ethical principles.

As we step into the future, the focus must remain on harnessing AI's potential responsibly, ensuring it serves as a
catalyst for positive transformation across industries and societies.

References:
• Getting Started With Large Language Models by Dr. Tuhin Chattopadhyay, DZone Refcard
• AI Automation Essentials by Dr. Tuhin Chattopadhyay, DZone Refcard
• AI Act, European Commission
• Getting Started With Agentic AI by Lahiru Fernando, DZone Refcard
• "AI Regulation in the U.S.: Navigating Post-EO 14110" by Frederic Jacquet
• Model Artificial Intelligence Governance Framework, Second Edition, Info-communications Media Development
Authority (IMDA) and Personal Data Protection Commission (PDPC)
• "Baidu to release next-generation AI model this year, source says" by Reuters
• "China's Interim Measures on generative AI: Origin, content and significance" by Sara Migliorini

Dr. Tuhin Chattopadhyay is a highly esteemed and celebrated figure in the fields of AI
Dr. Tuhin
and data science, commanding immense respect from both the academic and
Chattopadhyay
corporate fraternities. Dr. Tuhin has been recognized as one of India's Top 10 Data
@tuhinc Scientists by Analytics India Magazine, showcasing his exceptional skills and profound
@tuhinai knowledge in the field. Dr. Tuhin is a visionary entrepreneur, spearheading his own AI
consultancy organization that operates globally, besides being the professor of AI and
tuhin.ai
analytics at JAGSoM, Bengaluru.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 34


PARTNER CASE STUDY

CREATED IN PARTNERSHIP WITH

Global Logistics Company*


Automating Multi-National Invoice Processing
With AI-Powered Intelligence

Challenges
A leading global shipping company faced significant challenges in manually
processing hundreds of thousands of invoices annually across multiple
countries. The complexity stemmed from:
• Managing diverse invoice formats from different vendors and countries
Global Logistics Company* • Ensuring compliance with varying country-specific regulations and tariffs
Logistics and Transportation • Processing documents in multiple languages
15,000+ employees • Validating data against complex corporate regulations

Solutions Used The existing process was error-prone and time consuming, creating
bottlenecks in their operations and increasing the risk of compliance issues
Vertesia Platform
due to human error in data entry and validation.

Primary Outcomes Solution


Significant direct operational The company implemented Vertesia's GenAI platform to automate their
cost savings coupled with invoice processing workflow. The solution provides:
increased operational
• Automated data extraction for hundreds of different invoice formats
efficiencies and higher-
• Multi-language support to process invoices from 15+ countries
quality outcomes
• Built-in validation rules to ensure compliance with country-specific
regulations

*Undisclosed client name


• Intelligent data verification leveraging corporate policies and
international regulations
• Seamless integration with existing financial systems

The platform's advanced AI algorithms are trained to recognize and adapt to new invoice formats, eliminating the
need for template-based approaches.

Results
The company's intelligent document processing solution built with Vertesia's low-code platform delivered significant
operational improvements:
• Gained over 80% efficiency in extraction, translation, and validation
• Successfully supported unique operational processes across 15+ countries
• Addressed hundreds of different invoice format variations without manual intervention
• Reduced processing time from hours to seconds
• Ensured 100% invoice validation across corporate and regulatory compliance
• Scaled to process hundreds of thousands of invoices annually
• Saved more than 30% in operational costs
• Significantly reduced regulatory risk through improved auditability and compliance checks

The automation solution transformed the logistics and transportation company's invoice processing from a manual,
error-prone operation to an automated, efficient, and fully compliant process that supports their global operations.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 35


CONTRIBUTOR INSIGHTS

Supercharged LLMs
Combining Retrieval Augmented Generation and AI Agents to
Transform Business Operations
By Pratik Prakash, Principal Solutions Architect at Capital One

Enterprise AI is rapidly evolving and transforming. With the recent hype around large language models (LLMs), which
promise intelligent automation and seamless workflows, we are moving beyond mere data synthesis toward a more
immersive experience. Despite the initial enthusiasm surrounding LLM adoption, practical limitations soon became
apparent. These limitations included the generation of "hallucinations" (incorrect or contextually flawed information),
reliance on stale data, difficulties integrating proprietary knowledge, and a lack of transparency and auditability.

Managing these models within existing governance frameworks also proved challenging, revealing the need for a
more robust solution. The promise of LLMs must be tempered by their real-world limitations, creating a gap that calls
for a more sophisticated approach to AI integration.

The solution lies in the combination of LLMs with retrieval augmented generation (RAG) and intelligent AI agents. By
grounding AI outputs in relevant real-time data and leveraging intelligent agents to execute complex tasks, we move
beyond hype-driven solutions and FOMO. RAG + agents together focus on practical, ROI-driven implementations
that deliver measurable business value. This powerful approach is unlocking new levels of enterprise value and
paving the way for a more reliable, impactful, and contextually aware AI-driven future.

The Power of Retrieval Augmented Generation


RAG addresses the inherent data limitations of standalone LLMs by grounding them in external, up-to-date
knowledge bases. This grounding allows LLMs to access and process information beyond their initial training data,
significantly enhancing their accuracy, relevance, and overall utility within enterprise environments. RAG effectively
bridges the gap between the vast general knowledge encoded within LLMs and the specific but often proprietary
data that drives enterprise operations.

Several key trends are shaping the evolution and effectiveness of RAG systems to make real-world, production-grade,
and business-critical scenarios a reality:
• Vector databases: The heart of semantic search – Specialized vector databases (there many commercial
products available in the market to unlock this) enable efficient semantic search by capturing relationships
between data points. This helps RAG quickly retrieve relevant information from massive datasets using
conceptual similarity rather than just keywords.
• Hybrid search: Best of both worlds – Combining semantic search with traditional keyword search maximizes
accuracy. While keyword search identifies relevant terms, semantic search refines results by understanding
meaning, therefore ensuring no crucial information is overlooked.
• Context window expansion: Handling larger texts – LLMs are limited by context windows, which can hinder
processing large documents. Techniques like summarization condense content for easier processing, while
memory management helps retain key information across longer texts, ensuring coherent understanding.
• Evaluation metrics for RAG: Beyond LLM output quality – Evaluating RAG systems requires a more holistic
approach than simply assessing the quality of the LLM's output. While the LLM's generated text is important,
the accuracy, relevance, and efficiency of the retrieval process are equally crucial. Key metrics for evaluating RAG
systems include:
∘ Retrieval accuracy – How well does the system retrieve the most relevant documents for a given query?
∘ Retrieval relevance – How closely does the retrieved information align with the user's information needs?
∘ Retrieval efficiency – How quickly can the system retrieve the necessary information?
By focusing on these metrics, developers can optimize RAG systems to ensure they are not only generating high-
quality text but also retrieving the right information in a timely manner.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 37


Enterprise Use Cases: Real-World Applications
The versatility of RAG makes it applicable to a wide range of enterprise use cases. For example, RAG can:
• Be used to access customer records and knowledge bases to provide personalized support experiences
• Empower employees to quickly find relevant information within vast internal document repositories
• Be integrated with data analysis tools to provide context and insights from relevant documents
• Be used to generate reports that are grounded in factual data and relevant research
• Streamline legal research by retrieving relevant case laws, statutes, and legal documents

These examples illustrate the potential of RAG to transform enterprise workflows and drive significant business value.

The Role of AI Agents in Orchestrating Complexity


The combination of RAG and AI agents is a game-changer for enterprise AI, creating a powerful partnership that
takes automation to a whole new level with practical reality and feasibility. This synergy goes beyond the limitations
of standalone language models, enabling systems that can reason, plan, and handle complex tasks. By connecting AI
agents to constantly updated knowledge bases through RAG, these systems can access the data or events they need
to make informed decisions, manage workflows, and deliver real business value. This collaboration helps enterprises
build AI solutions that are not just smart, but also adaptable, transparent, and rooted in real-world data.

Agents as Intelligent Intermediaries


AI agents act as intermediaries that manage the retrieval process and break down complex tasks. They
autonomously interact with external tools and APIs to gather data, analyze it, and execute tasks efficiently.

Autonomous Agents
Autonomous agents are evolving to plan and execute tasks independently, reducing the need for human
intervention. These agents can process real-time data, make decisions, and complete processes on their own,
therefore streamlining operations.

Agent Frameworks
Frameworks like LangChain and LlamaIndex are simplifying the development and deployment of agent-based
systems. These tools offer pre-built capabilities to create, manage, and scale intelligent agents, making it easier for
enterprises to integrate automation.

Tool Use and API Integration


Agents can leverage external tools and APIs — such as calculators, search engines, and CRM systems — to access
real-world data and perform complex actions. This allows agents to handle a wide range of tasks, from data retrieval
to interaction with business systems.

Memory and Planning


Advancements in agent memory and planning allow agents to tackle longer and more complex tasks. By retaining
context and applying long-term strategies, agents can effectively manage multi-step processes and ensure
continuous, goal-driven execution.

RAG and AI Agents in Synergy


The combination of RAG and AI agents is more than just a technical integration — it's a strategic alignment (I call it a
wonder alliance) that amplifies the strengths of both components. Here's how they work together:
• Orchestration by AI agents – AI agents manage the RAG process, from query initiation to output, ensuring
accurate information retrieval and contextually meaningful responses. This reduces "hallucinations" and
improves output reliability.
• RAG as the knowledge base – RAG provides the up-to-date, context-specific data that AI agents use to make
informed decisions and complete tasks accurately, enhancing performance and system transparency.
• Improved explainability and transparency – By grounding outputs in real-world data, RAG makes it easier
to trace information sources, which increases transparency and trust — especially in industries like finance,
healthcare, and legal services where compliance is critical.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 38


Advanced Concepts in RAG + Agents
The combined power of RAG and AI agents opens the door to advanced capabilities that are reshaping the future of
enterprise AI. Here are some key concepts driving this evolution:
• ​​Multi-agent systems – Multiple AI agents can collaborate to tackle complex tasks by specializing in different
areas, like data retrieval, decision making, or execution. For example, in supply chain management, agents
manage logistics, inventory, and demand forecasting independently but coordinate for optimal efficiency.
• Feedback loops and continuous improvement – RAG + agent systems use feedback loops to continuously
learn and improve. By monitoring accuracy and relevance, these systems refine retrieval strategies and decision
making, ensuring alignment with business goals and evolving user needs.
• Autonomy and complex task execution – AI agents are becoming more autonomous, enabling independent
decision making and task execution. This capability automates processes like financial reporting and customer
support, with RAG ensuring accurate information retrieval for efficient, high-quality results.

Figure 1. RAG + agent architecture diagram

Agent-orchestrated RAG architecture: a visual representation

Addressing Challenges and Future Directions of RAG + Agents


Implementing RAG + agents presents key hurdles and bumps. Data quality is paramount, and data curation should
be meticulous, as flawed data leads to ineffective outcomes. Security is critical due to sensitive information handling,
requiring robust protection and vigilant monitoring to maintain user trust. Scalability must ensure consistent speed
and accuracy despite exponential data growth, demanding flexible architecture and efficient resource management.

Essentially, the system must not only be technically sound but also reliably perform its intended function as the
volume of inputs and requests increases, preserving both its intelligent output and operational speed.

Table 1 details the emerging trends in RAG and agent AI, covering technological advancements like efficient systems
and multi-modality, as well as crucial aspects like AI governance, personalization, and human-AI collaboration.

SEE TABLE ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 39


Table 1. Emerging trends in RAG and agent AI

Category Details

Efficient RAG Optimization of RAG for faster retrieval, greater indexing, and accuracy to improve performance
systems and adaptability to growing data needs

AI lifecycle Comprehensive frameworks are needed for the governance, compliance, and traceability of AI
governance systems within enterprises

Legacy system Integrating RAG + agents with legacy systems allows businesses to leverage existing infrastructure
integration while adopting new AI capabilities, though this integration may not always be seamless

Multi-modality Combining data types like text, images, audio, and video to enable richer, more informed decision
making across various industries

Personalization AI systems will tailor interactions and recommendations based on user preferences, increasing
user engagement and satisfaction

Human-AI Creating much needed interactions between humans and AI agents, allowing AI to assist with
collaboration tasks while maintaining human oversight and decision making

Embodiment AI agents will interact with the physical world, leading to applications like robotics and
autonomous systems that perform tasks in real-world environments

Explainability and Increased transparency in AI decision-making processes to ensure trust, accountability, and
transparency understanding of how AI arrives at conclusions

Ethical Imperatives in RAG + Agents


Ethical considerations are crucial as RAG + agents become more prevalent. Fairness and accountability demand
unbiased, transparent AI decisions, requiring thorough testing. Privacy and security must be prioritized to protect
personal data. Maintaining human control is essential, especially in critical sectors.

Conclusion: The Pragmatic Future of Enterprise AI


The convergence of RAG and AI agents is poised to redefine enterprise AI, offering unprecedented opportunities for
value creation through efficient, informed, and personalized solutions. By bridging the gap between raw AI potential
and the nuanced needs of modern enterprises, RAG + agents leverage real-time data retrieval and intelligent
automation to unlock tangible business outcomes. This shift emphasizes practical, ROI-driven implementations that
prioritize measurable value like improved efficiency, cost reduction, enhanced customer satisfaction, and innovation.

Crucially, it heralds an era of human-AI collaboration, where seamless interaction empowers human expertise by
automating data-heavy and repetitive processes, allowing staff or human workforce to focus on higher-order tasks.
Looking forward, the future of enterprise AI hinges on continuous innovation, evolving toward scalable, transparent,
and ethical systems. While the next generation promises advancements like multimodality, autonomous decision
making, and personalized interactions, it is essential to have a balanced approach that acknowledges practical
limitations and prioritizes governance, security, and ethical considerations.

Success lies not in chasing hype, but in thoughtfully integrating AI to deliver lasting value. RAG + agents are at the
forefront of this pragmatic evolution, guiding businesses toward a more intelligent, efficient, and collaborative future
where AI adapts to organizational needs and fuels the next wave of innovation.

Pratik Prakash Pratik, an experienced solutions architect and open source advocate, expertly
blends hands-on engineering and architecture with multi-cloud and data
@scanpratik
science expertise. Driving tech simplification and modernization, he specializes
@pratik-prakash in scalable, serverless, and event-driven applications and AI solutions for digital
@scanpratik transformation. He can also be found on X.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 40


PARTNER OPINION

The Rise of LLM Embeddings


Powering the Next Wave of AI Agents
By Tyler Mitchell, Product and Solutions Marketing at Couchbase, Inc.

Large language model (LLM) embeddings are numerical representations of words, sentences, or other data types
that capture semantic meaning in a high-dimensional space. By converting raw text into vectorized forms, LLMs
can efficiently process, compare, and retrieve information. These embeddings cluster similar meanings together,
enabling deeper contextual understanding, advanced similarity searches, and streamlined knowledge retrieval.

This capability powers a range of AI-driven applications, from natural language understanding to recommendation
systems, enhancing efficiency and accuracy across various tasks.

AI Agents as a Breakout Use Case for LLMs


AI agents, powered by generative AI (GenAI), are emerging as one of the most exciting and impactful use cases
for LLMs. These agents mimic and automate human reasoning and workflows, unlocking new possibilities across
industries. Various platforms accelerate AI agent development by tackling key GenAI challenges such as trust,
reliability, and cost efficiency.

Choosing the Best Embedding Strategy


The ideal embedding approach for your project depends on the specific tasks you need to perform, the nature of
your data, and the level of accuracy required. Pre-trained embeddings like BERT or GPT excel at general language
understanding, but for domain-specific precision, fine-tuning on specialized datasets can significantly improve
performance. Cross-modal tasks demand multimodal embeddings, while high-speed retrieval applications benefit
from dense vector search techniques like Faiss.

The complexity of your use case dictates whether a lightweight model will suffice or if a deep transformer-based
approach is necessary. Additionally, computational costs and storage constraints should be carefully evaluated to
ensure that your chosen embedding strategy aligns with your performance and scalability needs.

From Raw Text to AI-Ready Vectors


Embedding data involves preprocessing text, tokenizing it, and passing it through an embedding model to generate
numerical vectors. Tokenization breaks text into subwords or characters, mapping them to high-dimensional space,
while the model refines embeddings through layers of neural transformations. Once created, embeddings can be
stored for efficient retrieval or fine-tuned for specific tasks.

Tools like OpenAI's embedding API, Hugging Face Transformers, or TensorFlow's embedding layers streamline this
process. Post-processing steps, such as normalization or dimensionality reduction, further optimize embeddings for
applications like clustering and search.

Why LLM Embeddings Are Reshaping AI Development


LLM embeddings are essential for AI-driven applications like search engines, virtual assistants, recommendation
systems, and AI agents. They enable fast, accurate text and data comparisons, delivering meaningful insights and
enhanced user experiences.

A robust developer data platform can simplify the integration of LLM embeddings, making it easier to build and
scale AI-powered applications. With the right platform and built-in support for popular LLMs, developers can
streamline search, agentic AI, and edge computing workflows while optimizing performance and efficiency.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 41


CONTRIBUTOR INSIGHTS

Agentic AI and Generative AI


Revolutionizing Decision Making and Automation
By Nitesh Upadhyaya, Solution Architect at GlobalLogic, a Hitachi Group Company

The rapid advancement of artificial intelligence (AI) creates breakthroughs that span multiple industries. Among
many developments, agentic AI and generative AI stand out as two transformative powers. Although these
systems work differently because they serve distinct functions, they bring substantial benefits when used together.
Generative AI focuses on content creation through deep learning transformer models that learn from extensive
datasets. This technology enables increased human productivity in content creation tasks as well as design,
marketing activities, and software development by delivering text, images, code, and music outputs.

On the other hand, agentic AI extends beyond content generation to cover goal-oriented execution and autonomous
decision-making systems. Agentic AI exists to automate tasks, which helps businesses run more efficiently by
reducing human involvement.

This AI landscape presents businesses, developers, and researchers with essential needs to understand the core
characteristics of AI paradigms, along with their individual strengths and limitations and their synergistic benefits.
This article examines both AI systems, relative benefits and drawbacks, and their implementation challenges
together with ethical risks and how their combined use creates intelligent automation and industry-driven innovation.

Agentic AI vs Generative AI: A Comparative Analysis


Though both systems are based on machine learning (ML) and automation, they serve distinct purposes and
function differently across implementation and application domains.

Table 1. Key differences between agentic and generative AI

Aspect Agentic AI Generative AI

Purpose Task execution, decision making, and Content creation (text, image, videos)
workflow automation

Operational mode Autonomous action and iterative learning Predictive modeling and pattern recognition

Domain specialization Best suited for automation in IT, Best for creative applications like writing,
cybersecurity, and finance image generation, and software development

Interaction with users Primarily operates in the background, Direct interaction with users via chatbots,
executing tasks image generation, or coding assistance

Autonomy level Highly autonomous, operates without Requires human prompts and oversight
constant human input

Despite the differences between them, there are some similarities between generative AI and agentic AI.

Table 2. Key similarities between agentic and generative AI

Features Agentic AI Generative AI

ML dependency Uses ML to drive decision making and automation Uses ML for generating content and predictions

Data driven Requires structured datasets for decision making Learns from vast amounts of unstructured data

Enhances productivity Automates workflows and reduces human intervention Assists in content creation, accelerating tasks

Synergistic Potential: Combining Agentic and Generative AI


Rather than competing, agentic and generative AI can be integrated to create advanced AI systems capable of both
content generation and autonomous execution.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 43


How They Work Together
When generative AI and agentic AI work together, they produce an integrated system that combines generative AI's
creative intelligence with agentic AI's autonomous execution. The combination drives improvements in automation
decision making and workflow efficiency, resulting in self-improving intelligent systems.

GENERATIVE AI GENERATES INSIGHTS


Figure 1. The multifaceted role of generative AI
As a content creator, generative AI produces both
structured and unstructured outputs using input
prompts and training datasets. The outputs serve
as foundational knowledge for decision making
and automation.

Generative AI performs several functions: It enables


chatbots to generate personalized responses, and
it assists developers by aiding them in writing and
debugging software programs during code generation.
Data analysis, forecasting, and image/video creation
are essential functions within media and advertising
that produce valuable insights and predictions. Figure 1
illustrates the multifaceted role of generative AI.

AGENTIC AI EXECUTES ACTIONS


Generative AI offers important insights, but agentic AI
goes above and beyond by generating decisions and
performing actions. Agentic AI systems use generative
AI outputs and then apply those outcomes to actual
situations. These systems play a crucial role in Figure 2. The roles of agentic AI
decision making by leveraging AI-generated
reports and insights to take the right actions.
They perform workflow tasks using pre-defined
rules and have adaptive capabilities in case of
dynamic changes.

The systems also incorporate adaptive learning,


modifying strategies based on real-time feedback.
These AI agents self-optimize as they engage in
this process over time and, therefore, enhance
their own efficiency through the evaluation of
previous results.

For example, as a virtual AI assistant in enterprise


automation, generative AI can create reports,
and agentic AI will advance this by delivering reports to stakeholders, organizing follow-up meetings, and initiating
business processes according to the findings. Figure 2 demonstrates the various roles of agentic AI.

FEEDBACK LOOP: CONTINUOUS IMPROVEMENT


The real strength of a union between generative AI and agentic AI stems from their capacity to build a loop of
improving performance, which boosts precision while cutting execution time and enhancing decision quality as
they progress. The process starts when generative AI produces insights and content recommendations that agentic
AI evaluates and implements. After execution, agentic AI tracks performance and collects real-world data, feeding it
back into generative AI, which uses success rates and evolving requirements to refine its future output generation.
The combination of learning with execution alongside optimization enables AI-driven systems to build their
effectiveness and adaptability through continuous improvement.

Let's take an example of an e-commerce platform that uses generative AI to develop product descriptions, and then
agentic AI measures customer interaction data to optimize content strategies in real time. In the same manner,

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 44


generative AI produces initial software code while agentic AI systems handle testing, deployment, and code refactoring
tasks within continuous integration and deployment pipelines to deliver ongoing enhancement and optimization.

The organizations that implement this loop of feedback, as shown in Figure 3, will develop intelligent systems that
adapt to changing demands while achieving better outcomes.

Figure 3. AI feedback loop process

Challenges and Ethical Considerations


Despite their advantages, the integration of agentic and generative AI poses several challenges.

Security Concerns
The increasing complexity of AI models leads to major security risks, including data weaknesses and model
vulnerabilities. The foremost risk in AI technology is data security because models often reveal personal information
about users accidentally. AI models are large token sequencers that require broad datasets for training and response
generation, making data breaches possible through poor data management and system defects.

In 2023, OpenAI's ChatGPT faced a major data leak when a bug enabled users to view other users' chat history,
including payment details. This incident revealed major security issues with interactive AI applications that process
personal information. OpenAI took responsibility for the problem and implemented a fix, but the incident showed
how essential it is to strengthen AI interaction with data protection.

Model exploitation represents another major risk, which involves using AI-generated content for harmful activities.
Deepfake technology alongside other generative AI models have been used to spread false information and political
statements as well as fraudulent content. AI-generated videos showing Ukrainian President Volodymyr Zelensky
claiming Ukraine surrendered during the Russia-Ukraine war reached online audiences. The fabricated AI-generated
videos created confusion and panic due to their realistic nature, which tricked viewers into believing and spreading
them through social media and other online platforms.

Organizations need to build strong governance frameworks coupled with transparency and security features to
manage these risks as AI adoption grows. Organizations should practice protected data privacy audits on content
produced by AI systems to avoid potential misuse and monitor the systems as a protective measure.

Ethical Challenges
As AI systems expand their use in decision making across all industries, the issues of bias in AI models and
responsible usage remain prominent ethical concerns. AI systems are trained with huge datasets that frequently
hold historical biases that lead to unfair results in areas like hiring, finance, and law enforcement.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 45


Amazon's AI-powered hiring tool, which exhibited gender bias, provides a well-documented example of AI bias.
During its training period spanning over a decade, the system learned to favor male candidates because resumes
from men made up most of its submissions. Amazon ended the use of the tool after tests showed that it rated
resumes with "women" in the description lower than those with more traditional male-dominated work terms. The
case shows how societal biases can become embedded in AI models and shows why bias mitigation strategies are
needed to produce fair and inclusive AI systems.

The use of AI also requires responsibility because, without it, we risk unwanted effects that cannot be controlled.
AI models are usually opaque, i.e. black boxes, and one cannot easily understand how and why a decision was
made. This lack of interpretability is even more worrying in industries such as healthcare and finance, where the
recommendations from AI can significantly impact a human's life.

Conclusion
Agentic AI and generative AI cause industrial shifts through their capability to create innovative decision-making
systems and generative content platforms. Agentic AI improves automation through the execution of tasks and
workflow optimization, while generative AI drives innovation through text, image, and code production. The
integration of these products proves dangerous because they produce major ethical problems and serious security
risks, such as data weaknesses and biased AI outputs alongside the exploitation of AI models. The necessary solution
for these concerns demands a sustained commitment to AI innovation's ethical standards.

Businesses, developers, and policymakers should establish a governance system and implement fairness verification
and security measures to support ethical AI usage. A successful strategy in the future requires organizations to
evaluate AI integration opportunities while practicing responsible AI ethics, keeping track of AI technological progress
to gain maximal benefits, and reducing associated risks. Businesses must maintain proper human oversight to
achieve efficient operation with trustworthy AI systems that power technological advancement and social gain.

References:
• "Insight - Amazon scraps secret AI recruiting tool that showed bias against women" by Jeffrey Dastin
• Getting Started With Agentic AI by Lahiru Fernando, DZone Refcard
• "March 20 ChatGPT outage: Here's what happened" by OpenAI
• "Deepfake presidents used in Russia-Ukraine war" by Jane Wakefield

Nitesh Upadhyaya is a solution architect at GlobalLogic (a Hitachi Group


Nitesh Upadhyaya Company) with 15+ years of experience in complex distributed architecture,
@Niteshgl AI/ML, and cloud technologies. He is a DZone Core member, book author, and
@nitesh-upadhyaya technical reviewer, contributing insights on enterprise architecture, AI-driven
automation, and software engineering best practices.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 46


CONTRIBUTOR INSIGHTS

Building AI-Driven
Intelligent Applications
A Hands-On Development Guide for Integrating GenAI Into Your Applications
By Naga Santhosh Reddy Vootukuri, Principal Software Engineering Manager at Microsoft

In today's world of software development, we are seeing a rapid shift in how we design and build applications, mainly
driven by the adoption of generative AI (GenAI) across industries. These intelligent applications can understand and
respond to users' questions in a more dynamic way, which help enhance customer experiences, automate workflows,
and drive innovation. Generative AI — mainly powered by large language models (LLMs) like OpenAI GPT, Meta
Llama, and Anthropic Claude — has changed the game by making it easy to understand natural language text and
respond with new content such as text, images, audio, and even code.

There are different LLMs available from various organizations that can be easily plugged into existing applications
based on project needs. This article explains how to build a GenAI-powered chatbot and provides a step-by-step
guide to build an intelligent application.

Why There Is a Need for GenAI Integration


Within the field of artificial intelligence (AI), GenAI is a subset of deep learning methods that can work with both
structured and unstructured data. Figure 1 presents the different layers of artificial intelligence.

It is notably generative AI that is rapidly transforming the Figure 1. Artificial intelligence landscape
business world by fundamentally reshaping how businesses
automate tasks, enhance customer interactions, optimize
operations, and drive innovation. Companies are embracing AI
solutions to have a competitive edge, streamline workflows, and
unlock new business opportunities.

As AI becomes more woven into society, its economic impact


will be significant, and organizations are already starting to
understand its full potential.

According to the MIT Sloan Management Review, 87% of


organizations believe AI will give them a competitive edge.
McKinsey's 2024 survey also revealed that 65% of respondents
reported their organizations are regularly using GenAI — nearly
double the percentage from the previous survey 10 months prior.
Figure 2 lays out a comparison of the shift in organizations that
are building intelligent applications.

Figure 2. Comparison between traditional and intelligent apps

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 47


Advantages of GenAI Integration
Integrating GenAI into applications brings several advantages and revolutionizes how businesses operate:
• AI models can analyze vast amounts of structured and unstructured data to extract valuable insights, enabling
data-driven decision making.
• By automating intensive, repetitive tasks and therefore improving efficiency, businesses can reduce operational costs.
• GenAI enables real-time personalization and makes interactions more engaging. For example, GenAI-powered
chatbots can recommend products based on the user's browsing history.
• GenAI can be a powerful tool for brainstorming and content creation, which empowers businesses to develop
innovative products, services, and solutions.
• GenAI-powered chatbots can provide instant customer support by answering questions and resolving issues.
This can significantly improve customer satisfaction, reduce wait times, and free up human agents' time to
handle more complex inquiries.

Precautions for Integrating GenAI


While integrating GenAI to build intelligent applications, solutions, or services, organizations need to be aware of
challenges and take necessary precautions:
• AI models can generate misleading or factually incorrect responses, often called hallucinations. This leads
to poor decision making and legal or ethical implications, which can damage an organization's reputation
and trustworthiness.
• Because AI models are trained on historical data, they can inherit biases, leading to unfair or discriminatory
outcomes. It's crucial to embrace responsible AI practices to evaluate and mitigate any potential biases.
• Integrating GenAI into existing systems and workflows can become complicated and requires significant
technical expertise.
• As governments around the world are introducing new AI regulations, companies must stay up to date and
implement AI governance frameworks to meet legal and ethical standards.
• Growing threats of deep fakes, misinformation, and AI-powered cyberattacks can undermine public trust and
brand reputation.

To counter these risks, businesses should invest in AI moderation tools and adopt proactive strategies to detect and
mitigate harmful or misleading content before it reaches users. Through strong governance, ethical frameworks,
and continuous monitoring, organizations can unlock the potential of AI while protecting their operations, trust, and
customer data.

Tech Stack Options


There are different options available if you are considering integrating GenAI to build intelligent applications. The
following are some, but not all, popular tool options:
• Open-source tools:
∘ Microsoft Semantic Kernel is an AI orchestration framework for integrating LLMs with applications using
programming languages like C#, Python, and Java.
∘ LangChain is a powerful framework for chaining AI prompts and memory.
∘ Hugging Face is a library for using and fine-tuning open-source AI models.
∘ LlamaIndex is a framework that provides a central interface to connect your LLM with external data sources.
• Enterprise tools:
∘ OpenAI API provides access to OpenAI's powerful models, including GPT-3, GPT-4o, and others.
∘ Azure AI Foundry is Microsoft's cloud-based AI platform that provides a suite of services for building, training,
and deploying AI models.
∘ Amazon SageMaker is a fully managed service for building, training, and deploying machine learning models.
∘ Google Cloud Vertex AI is a platform for building and deploying machine learning models on Google Cloud.

Deciding whether to choose an open-source or enterprise platform to build intelligent AI applications depends on
your project requirements, team capabilities, budget, and technical expertise. In some situations, a combination of
tools from both open-source and enterprise ecosystems can be the most effective approach.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 48


How to Build a GenAI-Powered Customer Chatbot
Regardless of the tech stack you select, you will have to follow the same steps to integrate AI. In this section, I will
focus on building a chatbot that takes any PDF file, extracts data, and chats with the user by answering questions.
It can be hosted on a web application as an e-commerce chatbot to answer user inquiries, but due to the size
limitations, I will create it as a console application.

Step 1. Setup and Prerequisites


Before we begin, we'll need to:
1. Create a personal access token (PAT), which helps to authenticate 1,200+ models that are available in GitHub
Marketplace. Instead of creating developer API keys for every model, you can create one PAT token and use it to
call any model.
2. Install OpenAI SDK using pip install openai (requires: Python ≥ 3.8).
3. Have an IDE platform for writing Python code. Not mandatory, however, I will use Visual Studio Code (VSCode) to
write code in this example.
4. Download and install Python.
5. Download this VSCode extension, which offers rich support for the Python language; it also offers debugging,
refactoring, code navigation, and more.
6. Install PDFPlumber using pip install pdfplumber , a Python library that allows you to extract text, tables, and
metadata from PDF files.

Step 2. Create Python Project and Set Up a Virtual Environment


Using VSCode or your favorite IDE, create a folder named PythonChatApp. Ensure that Python is installed on your
system. Navigate to the project root directory and run the below commands to create a virtual environment. Creating
a virtual environment is optional but recommended to isolate project environments.

Pip install Virtualenv

cd c:\Users\nagavo\source\repos\PythonChatApp
virtualenv myenv

Step 3. Create Github_Utils.py File to Interact With OpenAI Models


The Github_Utils.py file imports the OpenAI library and sets up required global variables. An OpenAI client is
instantiated with a base URL and API PAT token. This client becomes the interface to send requests to the API.

One advantage with GitHub Models is that we can easily switch to the GPT o3-mini model by simply updating the
model name variable without changing existing code. This file contains two functions, summarize_text and
ask_question_about-text , both of which are responsible for summarizing the text from the selected PDF and,
later on, to ask questions related to the content. The file contents are shown below:

import os
from openai import OpenAI

# Load GitHub API PAT Token and endpoint


github_PAT_key = "<paste your github PatToken>"
github_api_url = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# Initialize OpenAI client


client = OpenAI(
base_url=github_api_url,
api_key=github_PAT_key,
)

CODE CONTINUES ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 49


def summarize_text(text):
"""Summarizes the text using GitHub's models API."""
try:
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": "You are a helpful assistant.",
},
{
"role": "user",
"content": f"Summarize the following text: {text}",
}
],
temperature=0.7,
top_p=1.0,
max_tokens=300,
model=model_name
)
return response.choices[0].message.content
except Exception as e:
return f"Error with GitHub API: {e}"

def ask_question_about_text(text, question):


"""Asks a question about the text using GitHub's models API."""
try:
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": "You are a helpful assistant.",
},
{
"role": "user",
"content": f"Based on the following text: {text}\n\n Answer this question:
{question}",
}
],
temperature=0.7,
top_p=1.0,
max_tokens=300,
model=model_name
)
return response.choices[0].message.content
except Exception as e:
return f"Error with GitHub API: {e}"

Step 4. Create Pdf_Utils.py File to Extract Data From PDF


The Pdf_Utils.py file contains utility functions for working with PDF files, specifically for extracting text from
them. The main function, extract_text_from_pdf , takes the path to a PDF file as an argument and returns the
extracted text as a string by utilizing the PDFPlumber library. The file contents are shown on the following page.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 50


try:
import pdfplumber
except ImportError:
print("Error: pdfplumber module not found. Please install it using 'pip install pdfplumber'.")

def extract_text_from_pdf(pdf_path):
"""Extracts text from a PDF file."""
text = ""
try:
with pdfplumber.open(pdf_path) as pdf:
for page in pdf.pages:
page_text = page.extract_text()
if page_text:
text += page_text + "\n"
except Exception as e:
print(f"Error reading PDF: {e}")
return text

Step 5. Create Main.Py, an Entry Point for the Application


The Main.Py file acts as the main entry point for the application. It is responsible for receiving user input to process
the PDF file and interacting with the AI service. It imports Pdf_utils and Github_utils to interact with methods
in both files. The file contents of Main.py are shown below:

import os
from Pdf_utils import extract_text_from_pdf
from Github_utils import summarize_text, ask_question_about_text

def main():
print("=== PDF Chatbot ===")

# Ask user to specify PDF file path


pdf_path = input("Please enter the path to the PDF file: ").strip()

if not os.path.isfile(pdf_path):
print(f"Error: The file '{pdf_path}' does not exist.")
return

# Extract text from PDF


print("\nExtracting text from PDF...")
text = extract_text_from_pdf(pdf_path)

if not text.strip():
print("No text found in the PDF.")
return

# Summarize the text


print("\nSummary of the PDF:")
try:
summary = summarize_text(text)
print(summary)
except Exception as e:

CODE CONTINUES ON NEXT PAGE

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 51


print(f"Error with GitHub API: {e}")
return

# Ask questions about the text


while True:
print("\nYou can now ask questions about the document!")
question = input("Enter your question (or type 'exit' to quit): ").strip()

if question.lower() == "exit":
print("Exiting the chatbot. Goodbye!")
break

try:
answer = ask_question_about_text(text, question)
print("\nAnswer:")
print(answer)
except Exception as e:
print(f"Error with GitHub API: {e}")
break

if __name__ == "__main__":
main()

Step 6. Run the Application


Open terminal and navigate to the root directory. To run the application, enter the following command:

C:\Users\nagavo\source\repos\PythonChatApp\PDF_Chatbot> python main.py

Enter the path to the PDF file in the response. In this example, I uploaded a resume to help me summarize the
candidate's profile and her experience. This file is sent to the GPT-4o model to summarize the file contents as shown
in Figure 3.
Figure 3. Summary of the PDF document in the GPT response

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 52


Let's ask some questions about the document, for example, "How many years of experience does the candidate have
with date ranges?" Figure 4 shows the detailed response from the chatbot application.

Figure 4. Response from asking specific question about the document

We can host this app on an e-commerce website to upload product and order information. This allows customers
to interact by asking specific questions about products or their orders, thus avoiding customer agents manually
answering these questions. There are multiple ways we can leverage GenAI across industries; this is just one example.

Conclusion
Integrating GenAI into applications is no longer a luxury but a necessity for businesses to stay competitive. The
adoption of GenAI offers numerous advantages, including increased productivity, improved decision making, and
cost savings by avoiding repetitive work. It is also crucial to be aware of the challenges and risks associated with
GenAI, such as hallucination, bias, and regulatory compliance, as it is easy to misuse AI-generated content. It is
essential to adopt responsible AI practices and invent robust governance frameworks to ensure ethical and fair
use of AI technologies and by doing so, organizations can unlock the full potential of GenAI while protecting their
reputation and trust from their customers.

References:
• "AI regulations around the world: Trends, takeaways & what to watch heading into 2025" by Diligent Blog
• "Superagency in the workplace: Empowering people to unlock AI's full potential" by Hannah Mayer, et al.
• "Expanding AI's Impact With Organizational Learning" by Sam Ransbotham, et al., MITSloan
• "Embracing Responsible AI: Principles and Practices" by Naga Santhosh Reddy Vootukuri

As a seasoned professional with 17+ years working at Microsoft and specialized skills
Naga Santhosh in cloud computing and AI. I lead a team of SDEs focused on initiatives in the Azure
Reddy Vootukuri SQL deployment space, where we emphasize high availability for SQL customers
@sunnynagavo during critical feature rollouts. Aside from work, I am a technical book reviewer for
@naga-santhosh- Apress, Packt, Pearson, and Manning publications; judge hackathons; mentor junior
reddy-vootukuri engineers on ADPList and Code Path.org; and write for DZone. I am also a senior
IEEE member working on multiple technical committees.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 53


PARTNER OPINION

2024 Retrospective of AI Security


A Look at Policy, the Threat Landscape, and Research
By Cisco AI Security

Artificial intelligence (AI) has emerged as one of the defining technologies of the 21st century. It has transformed
both our personal and professional lives, and its rapid advancement will continue to reshape the ways in which
businesses operate. Business leaders largely recognize the generational opportunity that AI presents and feel
tremendous pressure to harness this potential. Findings from our Cisco 2024 AI Readiness Index show that the
race to integrate AI into increasingly critical functions is impeded by a few practical challenges — and AI security is
among the most prominent.

As AI systems handle increasingly sensitive workloads in vital sectors such as healthcare, finance, and defense,
the need for robust safety and security measures becomes nonnegotiable. The threat landscape for AI is novel,
complex, and not effectively addressed by traditional cybersecurity solutions. Similarly, streamlining the integration
of AI capabilities while adhering to new compliance frameworks and regulations can make AI adoption feel
overwhelming and costly.

Developments in AI Policy
2024 saw a big wave of new AI policy developments. In the United States alone, state lawmakers introduced more
than 700 AI-related bills — 113 of which were enacted into law — across 45 states. The pace of policy activity has not
slowed in 2025. Within the first couple of weeks of the year, 40 AI-related bill proposals were already on the docket
in the US.

Globally, no standard approach has emerged across nation states to regulate AI. Governments have drawn on a
wide-ranging AI policy toolkit, from comprehensive laws, specific regulations for use-case-specific applications, and
national AI strategies to voluntary guidelines and standards.

AI can introduce social and economic risks alongside potential substantial economic growth opportunities,
challenging jurisdictions to balance fostering innovation against managing associated risks. As we have observed, AI
governance often begins with the rollout of a national strategy before moving toward legislative action.

Highlights of global AI policy development include:


• Country-level focus on promoting AI safety amidst rapid technological developments by way of US AI executive
orders, AI Safety Summit voluntary commitments, and transatlantic and global partnerships
• Domestically through a fragmented state-by-state AI legislation approach in the absence of federal-level action
• European Union AI Act, officially entered into force as of August 1, 2024, meaning Europe is now enforcing the
world's first comprehensive AI law

Recent changes, like the new presidential administration in the United States, have already set a new tone for AI
policy in 2025, with the administration focusing on economic and national security implications of AI and creating an
enabling environment for AI innovation.

This shift was further amplified by the AI Action Summit held in Paris, which brought together international Heads of
State, government officials, and leaders of international organizations and demonstrated growing support for a pro-
innovation approach and investments in AI infrastructure, notably by French and British leaders. This support was
shared by European leaders in the wake of the initial roll out of EU AI Act requirements — the first comprehensive AI
law to be enacted.

The AI Threat Landscape


2024 witnessed the continued market expansion of AI and machine learning (ML) applications to include AI
business integrations and tools that provide productivity gains. As of early 2024, 72% of 1,363 surveyed organizations

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 54


said they adopted AI capabilities in their business functions. Meanwhile, the Cisco AI Readiness Index reported that
only 13% of 7,985 senior business leaders surveyed said they are ready to leverage AI and AI-powered technologies to
their full potential.

While the advancement and adoption of AI/ML technology have paved the way for copious new business
opportunities, it also complicates the risk and threat environments: The rapid adoption of AI technology or AI-enabled
technology has led to an expanded attack surface and novel safety and security risks.

In addition to maintaining our taxonomy of security and safety risks, Cisco's AI security team is worried about the
following potential threats in AI for the rest of 2025:
• Security risks to AI models, systems, applications, and infrastructure from both direct compromise of AI assets
as well as vulnerabilities in the AI supply chain
• The emergence of AI-specific attack vectors targeting large language models (LLMs) and AI systems
(e.g., jailbreaking, indirect prompt injection attacks, data poisoning, data extraction attacks)
• Use of AI to automate and professionalize threat actor cyber operations, particularly in social engineering

While these threats might be on the horizon for 2025 and beyond, threats in 2024 mainly featured AI enhancing
existing malicious tactics rather than aiding in creating new ones or significantly automating the kill-chain.

Most AI threats and vulnerabilities are low to medium risk by themselves, but those risks combined with the
increased velocity of AI adoption and the lagging development, implementation, and adherence to accompanying
security practices will ultimately increase organizational risks and magnify potential negative impacts (e.g., financial
loss, reputational damage, violations of laws and regulations).

AI Security Research
Over the last year, Cisco's AI researchers led and contributed to several pieces of groundbreaking research in key
areas of AI security. Key findings and real-world implications of our various AI security research initiatives include:
• Algorithmic jailbreaking attack models with zero human supervision, enabling adversaries to automatically
bypass protections for even the most sophisticated LLMs. This method can be used to exfiltrate sensitive data,
impact services, and harm businesses in other ways.
• Fine-tuning models can break their safety and security alignment, meaning that improved contextual
relevance for AI applications can inadvertently make them riskier for enterprise use.
• Simple methods for poisoning and extracting training data demonstrate just how easily the data used to train
an LLM can be discreetly tampered with or exfiltrated by an adversary.

Conclusion
While AI applications are fundamentally different from traditional web applications, the underlying concepts of AI
security aren't entirely unique; they reflect many familiar principles from traditional cybersecurity practices. As AI
itself and the threats to AI systems continue to evolve rapidly, it's important for organizations to combine findings
from both academic research and third-party threat intelligence to inform AI protections and security policies so that
they are relevant and resilient.

This article is a summarized version of our State of AI Security report. For the full report with recommendations on
implementing AI security, visit www.cisco.com/go/state-of-ai-security.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 55


CONTRIBUTOR INSIGHTS

A Comprehensive Guide to Protect


Data, Models, and Users in
the GenAI Era
By Boris Zaikin, Leading Architect at CloudAstro

Generative AI (GenAI) is transforming how organizations operate, enabling automation, content generation, and
intelligent decision making at an unprecedented scale. From AI-powered chatbots to advanced code generation and
creative design, GenAI is revolutionizing industries by increasing efficiency and innovation. However, alongside these
advancements come significant security risks that organizations must address.

The challenge is that as AI systems become more intelligent and sophisticated, they also face evolving threats and
risks. Ensuring AI security throughout development and deployment is crucial.

This article provides practical checklists to help enterprises securely adopt GenAI. By understanding key security
risks, implementing essential technologies, and following best practices, organizations can harness the power of
GenAI while ensuring their data, models, and users remain protected.

The checklists are separated into two categories:


1. Key security risks of GenAI
2. Essential security technologies for GenAI

Key Security Risks of Generative AI


GenAI introduces new security risks that organizations must address. Threats include data leaks, model manipulation,
and unauthorized access. These risks can lead to serious privacy and security breaches without proper safeguards.

1. Data Privacy and Compliance Risks

Generative AI can expose sensitive data, leading to legal violations under regulations like GDPR and HIPAA.
Organizations face legal, financial, and reputational risks if AI models process confidential information without
safeguards. Ensuring compliance requires strict data handling, access controls, and regular audits.

For example, in 2023, Samsung employees accidentally leaked confidential company data by entering it into
ChatGPT, raising serious concerns about corporate data privacy and AI misuse. Learn more about the accidental data
leak here.

Here are steps to address data privacy and compliance risks:

☐ Restrict AI access to sensitive data using role- ☐ Audit AI interactions for compliance with
based controls GDPR, HIPAA, etc.
☐ Implement data anonymization and ☐ Use AI governance tools to enforce data
encryption before AI processing protection policies

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 57


2. Misinformation and Bias

AI models can generate false or misleading information, commonly called hallucinations. AI may reinforce
stereotypes and produce unfair outcomes if trained on biased data. Organizations must ensure that AI-generated
content is accurate, ethical, and free from bias. An incident of this nature occurred in 2023 when an AI-powered news
website published misleading and fake articles, causing public misinformation and damaging its credibility. To avoid
misinformation and bias:

☐ Test AI models regularly for bias and accuracy ☐ Establish AI ethics guidelines to ensure
responsible usage
☐ Use diverse, high-quality training data
☐ Implement human review for critical AI outputs

3. Unauthorized Access and Misuse

Unauthorized users can access AI models without proper security measures, leading to data theft or manipulation.
Both insiders and external hackers pose a risk, especially if API security is weak or misconfigured. In one case, a
misconfigured AI chatbot publicly exposed user conversations due to API vulnerabilities, compromising privacy. Here
is a checklist to prevent unauthorized access and misuse issues from happening to you:

☐ Enforce multi-factor authentication (MFA) ☐ Monitor AI activity logs for suspicious behavior
for AI access
☐ Conduct regular security audits and
☐ Implement role-based access controls penetration tests

4. Data Poisoning

Attackers can manipulate AI training data by injecting malicious inputs and corrupting model outputs. This can lead
to biased decisions, misinformation, or exploitable vulnerabilities. In one experiment, researchers demonstrated how
poisoning AI datasets could manipulate facial recognition systems, causing them to misidentify people. Here is a
checklist to prevent data poisoning:

☐ Validate and clean training data before ☐ Deploy anomaly detection tools to identify
AI processing poisoned data
☐ Use differential privacy to prevent data ☐ Retrain models with verified and diverse
manipulation datasets

5. Fake "ChatGPT" and Impersonation Attacks

Fraudsters create fake AI tools mimicking ChatGPT or other AI services to trick users into sharing sensitive data or
installing malware. These fake versions often appear as mobile apps, browser extensions, or phishing websites that
look nearly identical to real AI platforms. Some have even been found in official app stores, making them seem more
trustworthy to unsuspecting users. Once installed, they can steal login credentials and financial information or even
spread harmful software across devices.

Here is a checklist to prevent fake "ChatGPT" and impersonation attacks:

☐ Use only official AI tools from verified sources ☐ Deploy security tools to detect fraudulent
AI services
☐ Educate employees on fake AI and
phishing scams ☐ Report fake AI platforms to authorities

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 58


6. Model Stealing

Attackers can extract proprietary AI models by exploiting APIs and analyzing responses, leading to intellectual
property theft and competitive disadvantage. As found in North Carolina State University's research, "Researchers
have demonstrated the ability to steal an artificial intelligence (AI) model without hacking into the device where
the model was running. The technique is novel in that it works even when the thief has no prior knowledge of the
software or architecture that supports the AI."

Figure 1. Model-stealing process

The diagram illustrates the model-stealing process, where an attacker sends multiple queries to a target machine
learning model and collects the corresponding responses. Using these inputs and outputs, the attacker then
trains a stolen model that mimics the behavior of the original, potentially leading to intellectual property theft and
unauthorized use.

To prevent model stealing:

☐ Limit API access and enforce request ☐ Use watermarking to track unauthorized usage
rate limits
☐ Monitor API activity for suspicious
☐ Encrypt AI models during deployment extraction patterns

7. Model Inversion Attacks

Hackers can reverse-engineer AI models to recover sensitive training data, potentially exposing confidential or
personal information. In one instance, researchers reconstructed faces from a facial recognition AI model, revealing
private user data used in training. Andre Zhou gathered a list of resources and research related to model inversion
attacks in his GitHub Repository.

A model inversion attack is similar to a model stealing attack. A model inversion attack extracts sensitive training
data by analyzing model outputs, infers private input data, posing a privacy risk, and grants attackers access to
confidential or personal data. Meanwhile, a model stealing attack replicates a target model’s functionality using
queries and responses, enables intellectual property theft by recreating the model, and allows attackers to obtain a
functional copy of the model’s behavior.

Here are steps you can take to prevent model inversion attacks:

☐ Use differential privacy to protect training data ☐ Apply adversarial defenses to prevent
inversion attacks
☐ Restrict model exposure by limiting API
responses ☐ Assess AI models for vulnerabilities regularly

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 59


8. AI-Enhanced Social Engineering

AI can generate highly realistic phishing emails, deepfake videos, and voice impersonations, making social
engineering attacks more effective. For example, cybercriminals used AI-generated voices to impersonate company
executives at a European company, successfully authorizing fraudulent financial transactions amounting to €220,000.

The following are measures that can be taken to prevent AI-enhanced social engineering:

☐ Train employees to recognize AI-generated ☐ Use multi-factor authentication for financial


scams, using open-source tools like Google's transactions
SynthId (or commercial tools) ☐ Monitor communications for unusual patterns
☐ Deploy AI-powered security tools to detect
deepfakes

Essential Security Technologies for GenAI


Securing generative AI means using encryption, access controls, and safe APIs. Monitoring tools catch unusual
activity, and defenses protect against attacks. Following privacy rules helps keep AI use safe and fair. We also need to
consider the following topics to improve the security level when utilizing AI.

1. Data Loss Prevention

Data loss prevention (DLP) solutions monitor and control data flow to prevent sensitive information from being
leaked or misused. Here are some ways to incorporate DLP solutions:

☐ Use AI-driven DLP tools to detect and block ☐ Monitor AI-generated outputs to prevent
unauthorized data sharing unintentional data leaks
☐ Apply strict data classification and access ☐ Regularly audit logs for suspicious activity
policies

2. Zero-Trust Architecture

Zero-trust architecture (ZTA) enforces strict access controls, verifying every request based on identity, context, and
least privilege principles. Here is a checklist to implement zero-trust architecture:

Figure 2. Zero-trust architecture


☐ Implement MFA for
AI access
☐ Use identity and access
management tools to
enforce the least privilege
☐ Continuously monitor
and verify user and AI
interactions
☐ Segment networks to
limit AI system exposure

You can find a detailed guide


about zero-trust architecture here.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 60


3. Encryption and Confidential Computing

Encryption secures AI data at rest and in transit, while confidential computing protects sensitive AI operations in
secure environments. Here is a checklist to implement encryption and confidential computing:

☐ Encrypt data using AES-256 for storage and ☐ Implement homomorphic encryption for
TLS 1.2+ for transmission privacy-preserving AI computations
☐ Use hardware-based secure enclaves for ☐ Regularly update cryptographic protocols to
AI processing prevent vulnerabilities

Conclusion
Securing generative AI means taking the proper steps to protect data, models, and users; therefore, organizations
must continuously improve their security strategies and proactively address key security risks. This can be done in
part by incorporating strong access controls, data protection policies, and regular security tests, and doing the proper
research to ensure organizations are meeting their own needs as well as regulatory requirements. By following the
checklists presented in this article, organizations can safely and innovatively use generative AI.

References:
• "Fake ChatGPT apps spread Windows and Android malware" by Graham Cluley
• "DeepSeek Data Leak Exposes 1 Million Sensitive Records" by Lars Daniel
• "Samsung Bans ChatGPT Among Employees After Sensitive Code Leak" by Siladitya Ray
• "Face Reconstruction from Face Embeddings using Adapter to a Face Foundation Model" by Hatef Otroshi
Shahreza, et al.
• "Researchers Demonstrate New Technique for Stealing AI Models" by Matt Shipman
• "How Cybercriminals Used AI To Mimic CEO's Voice To Steal £220,000" by Think Cloud
• "The rise of AI fake news is creating a 'misinformation superspreader'" by Pranshu Verma
• "A Comprehensive Guide to Access and Secrets Management: From Zero Trust to AI Integration — Innovations in
Safeguarding Sensitive Information" by Boris Zaikin

Leading architect with solid experience designing and developing complex solutions
Boris Zaikin based on the Azure, Google, and AWS clouds. I have expertise in building distributed
@borisza systems and frameworks based on Kubernetes and Azure Service Fabric. My areas of
boriszaikin.com interest include enterprise cloud solutions, edge computing, high-load applications,
multitenant distributed systems, and IoT solutions.

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 61


ADDITIONAL RESOURCES

Solutions Directory
This directory contains generative AI, ML, and various AI-powered tools to help you
streamline workflows, increase efficiency, and improve accuracy. It provides pricing data and
product category information gathered from vendor websites and project pages. Solutions
are selected for inclusion based on several impartial criteria, including solution maturity,
technical innovativeness, relevance, and data availability.

DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

Safeguard AI applications from cisco.com/site/us/en/products/security/ai-


Cisco AI Defense By request
evolving risks defense/index.html
2025 PARTNERS

Cloud DBaaS for agentic AI and edge apps


Couchbase Capella Free tier couchbase.com/products/capella
with real-time analytics

Rapidly build and intelligently operate


Vertesia Platform Trial period vertesiahq.com/product
GenAI applications

Company Purpose Availability Website

[24]7.ai Engagement
AI-powered contact center as a service By request 247.ai/247-engagement-cloud
Cloud

Ada AI-powered customer service automation By request ada.cx

Aisera Agentic AI
Securely deploy agentic AI By request aisera.com/platform
Platform

Universal AI copilot with agentic reasoning


Aisera AI Copilot By request aisera.com/products/ai-copilot
and orchestration

Aisera AI Voice Bot Automated call answering service By request aisera.com/products/ai-voice-bot

Alluxio Data analytics and AI Free tier alluxio.io/enterprise-ai

Altair DesignAI AI- and simulation-driven design By request altair.com/designai

Altair Knowledge Studio ML, predictive analytics By request altair.com/knowledge-studio

Altair RapidMiner Data analytics and AI By request altair.com/altair-rapidminer

Amazon Augmented AI Human review of ML predictions Free tier aws.amazon.com/augmented-ai

Amazon DevOps Guru ML-powered cloud operations Free tier aws.amazon.com/devops-guru

Amazon SageMaker Build, train, and deploy ML models Free tier aws.amazon.com/sagemaker

Anaconda Hub Data science and AI collaboration By request anaconda.com

Advanced conversational AI model for


Anthropic Claude By request anthropic.com/claude
safe, reliable interactions

Create scalable performant ML


Apache Mahout Open source mahout.apache.org
applications

Apache MLlib Scalable ML library for Apache Spark Open source spark.apache.org/mllib

Apache OpenNLP ML-based toolkit Open source opennlp.apache.org

Distributed training of deep learning and


Apache SINGA Open source singa.apache.org
ML models

Multi-language engine for executing data


Apache Spark Open source spark.apache.org
engineering and ML

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 62


DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

Automated data integration,


Ascent By request ascent.io
transformation, and preparation

BentoML Build, ship, and scale AI apps Free tier bentoml.com

Consumable, programmable, and


BigML Trial period bigml.com
scalable ML

BigPanda AIOps By request bigpanda.io

AI-powered IT and business automation bmc.com/it-solutions/bmc-helix-virtual-


BMC HelixGPT By request
for predictive insights agent.html

Capgemini Data and AI services By request capgemini.com/us-en/services/data-and-ai

Clarifai Full-stack production AI Free tier clarifai.com

Secure, scalable, and open platform


Cloudera AI By request cloudera.com/products/machine-learning.html
for enterprise AI

LLMs for code understanding


CodeT5 Open source github.com/salesforce/CodeT5
and generation

Cody AI-powered business assistant By request meetcody.ai

Cognizant Generative AI GenAI solutions for automation By request cognizant.com/us/en/services/ai/generative-ai

AI platform for text generation,


Cohere Platform By request cohere.com
classification, and summarization tasks

CrowdStrike Falcon AI-powered cybersecurity By request crowdstrike.com/platform

Data intelligence platform built on


Databricks Trial period databricks.com
a data lakehouse

Dataiku End-to-end AI and ML Trial period dataiku.com

DataRobot AI Platform End-to-end GenAI By request datarobot.com

Digital.ai Intelligence AI-powered analytics By request digital.ai/products/intelligence

AI-powered predictive insights


Digital.ai Platform By request digital.ai/products/platform
and DevSecOps

Domino Data Lab Orchestrate and scale AI By request domino.ai

EdgeVerve AI Next Scalable applied AI transformations By request edgeverve.com/ai-next

EdgeVerve XtractEdge Document AI By request edgeverve.com/xtractedge

AI-assisted labeling tool to create high-


Encord Annotate By request encord.com/annotate
quality training data

Espressive Barista AI assistance for service desk to espressive.com/platform/products/agent-


By request
Agent Co-Pilot reduce MTTR co-pilot

Agentic AI-powered virtual agent for espressive.com/platform/products/espressive-


Espressive BaristaGPT By request
employee support barista

Scalable AI agent that can autonomously


Forethought By request forethought.ai
solve customer inquiries

Freshworks Freshchat AI-power bots and live chat Trial period freshworks.com/live-chat-software

Freshworks Freshdesk AI-powered, omnichannel customer


Trial period freshworks.com/freshdesk/omni
Omni service software

Freshworks Freshsales AI-powered sales CRM Trial period freshworks.com/crm/sales

GitHub Copilot AI developer tool By request github.com/features/copilot

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 63


DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

Google Cloud Conversational AI, natural language cloud.google.com/products/conversational-


Free tier
Dialogflow CX understanding agents

MLOps tools for data scientists and


Google Cloud Vertex AI Free tier cloud.google.com/vertex-ai
ML engineers

Google Gemini GenAI chatbot Free tier deepmind.google/technologies/gemini

Grok AI Intelligent operations By request grokstream.com

Guidde AI-powered video and workflow automation Free tier guidde.com

H2O AI Cloud Accelerate and scale AI results By request h2o.ai/platform/ai-cloud

H2O Document AI Intelligent data extraction By request h2o.ai/platform/ai-cloud/make/document-ai

H2O Driverless AI Automated ML By request h2o.ai/platform/ai-cloud/make/h2o-driverless-ai

H2O-3 ML platform for the enterprise Open source h2o.ai/platform/ai-cloud/make/h2o

Hugging Face AI community for collaboration Free tier huggingface.co

Hugging Face Waifu


Latent text-to-image diffusion model Open source huggingface.co/hakurei/waifu-diffusion-v1-3
Diffusion

IBM Cloud Paks AI and automation to modernize apps By request ibm.com/cloud-paks

IBM SPSS Modeler Visual data science and ML tool Trial period ibm.com/products/spss-modeler

IBM watsonx AI and data platform Trial period ibm.com/watsonx

Inbenta Conversational AI By request inbenta.com

Personalized and trained GenAI model


Inflection AI By request inflection.ai
with data protection

Inflection AI Pi Emotionally intelligent conversational AI Free tier pi.ai/talk

intel.com/content/www/us/en/developer/tools/
Intel Geti Build AI models at scale By request
tiber/edge-platform/model-builder.html

Kaldi Speech recognition toolkit Open source github.com/kaldi-asr/kaldi

Kasisto KAI Conversational AI By request kasisto.com/products/kai-platform

Kasisto KAI Answers GenAI in banking By request kasisto.com/products/kai-answers

Kasisto KAI-GPT GenAI for financial institutions By request kasisto.com/products/kai-gpt

Katana Graph Graph intelligence platform By request katanagraph.ai

Keras Deep learning API Open source keras.io

KNIME Analytics
Data analytics, reporting, and integration Free knime.com/knime-analytics-platform
Platform

Conversational AI for customer and


Kore.ai By request kore.ai
employee experiences

Kubeflow ML toolkit for Kubernetes Open source kubeflow.org

Framework for developing applications


LangChain Open source langchain.com/langchain
powered by LLMs

Developer platform for LLM-powered


LangSmith By request langchain.com/langsmith
application lifecycles

Build knowledgable AI agents for


LlamaIndex Free tier llamaindex.ai
enterprise data support

MALLET ML for language toolkit Open source mimno.github.io/Mallet/index

MathWorks MATLAB Programming and numeric computing Trial period mathworks.com/products/matlab.html

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 64


DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

MavenAGI Agent
Multi-surface GenAI for customer support By request mavenagi.com/products
Maven

Meta Wit.ai NLP Free wit.ai

Data catalog powered by social data


Metaphor By request metaphor.io
intelligence and AI

Meya Conversational AI development Trial period meya.ai

Microsoft Azure AI
Build, evaluate, and deploy GenAI solutions Free tier ai.azure.com
Studio

Microsoft Azure
Spatial computing developer kit with AI Free tier azure.microsoft.com/en-us/products/kinect-dk
Kinect DK

Microsoft Azure azure.microsoft.com/en-us/products/machine-


End-to-end ML Free tier
Machine Learning learning

Microsoft Azure azure.microsoft.com/en-us/products/ai-


Build copilot and generative AI apps Free tier
OpenAI Service services/openai-service

Milvus Vector database built for GenAI apps Open source milvus.io

End-to-end MLOps for model and GenAI


MLflow Open source mlflow.org
development

mlpack Header-only C++ ML library Open source mlpack.org

Secure agentic AI assistant for


Moveworks Sandbox moveworks.com/us/en/platform
accelerated workflows

NLTK Build python programs for NLP Open source nltk.org

nvidia.com/en-us/data-center/products/ai-
NVIDIA AI Enterprise End-to-end production AI Trial period
enterprise

NVIDIA CUDA-X GPU-accelerated libraries for AI and HPC Free nvidia.com/en-us/technologies/cuda-x

GPU-accelerated multilingual speech


NVIDIA Riva Trial period nvidia.com/en-us/ai-data-science/products/riva
and translation AI

OpenAI ChatGPT GenAI chatbot Free tier chatgpt.com

OpenAI DALL·E 2 AI image generation By request openai.com/dall-e-2

OpenNN Open neural networks library for ML Open source opennn.net

OpenText
AI-powered data analytics By request opentext.com/products/ai-and-analytics
Analytics Cloud

OpenText Knowledge AI/ML-powered advanced search,


By request opentext.com/products/knowledge-discovery
Discovery knowledge discovery, and analytics

OutSystems AI-powered low-code app development Free tier outsystems.com

Vector database for building accurate and


Pinecone Free tier pinecone.io
performant AI apps at scale

PyTorch 2.0 ML library Open source pytorch.org/get-started/pytorch-2.0

PyTorch 3D Library for deep learning with 3D data Open source pytorch3d.org

Qlik AutoML No-code, automated ML for analytics By request qlik.com/us/products/qlik-automl

Qlik Sense AI-powered analytics Trial period qlik.com/us/products/qlik-sense

Qwiet AI AI app security By request qwiet.ai

Rainbird AI-powered decision intelligence By request rainbird.ai

Release.ai GenAI for DevOps assistance Sandbox release.ai

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 65


DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

Replit Build apps and sites with AI Free tier replit.com

Robust Intelligence AI risk management By request robustintelligence.com

Unify data, AI, CRM, development,


Salesforce Platform By request salesforce.com/platform
and security

AI copilot infused with agents that sap.com/products/artificial-intelligence/ai-


Joule By request
supports employees assistant.html

SAS Viya AI and analytics Trial period sas.com/en_us/software/viya.html

Scikit-Learn ML in Python Open source scikit-learn.org/stable

Serviceaide AISM AI-powered, no-code service management By request serviceaide.com/products/aism

Serviceaide Luma AI Intelligent automation for service desks By request serviceaide.com/products/luma-ai

Automate tasks, enhance workflows,


ServiceNow AI Agents By request servicenow.com/products/ai-agents.html
and improve customer experience

Use domain-specific models to improve


ServiceNow Now Assist By request servicenow.com/now-platform/now-assist.html
productivity and efficiency

Sherpa.ai Federated
Privacy-preserving AI model training By request sherpa.ai/platform
Learning Platform

Sisense Build AI-powered analytics into products By request sisense.com

Hands-on accelerator for LLM


Snorkel Custom By request snorkel.ai/snorkel-custom
customization

Snorkel Flow AI data development By request snorkel.ai/snorkel-flow

Snyk Deepcode AI SAST and AI code review tool By request snyk.io/platform/deepcode-ai

SoundHound AI Amelia Conversational AI, GenAI By request amelia.ai

Sourcegraph Cody Coding AI assistant Free tier sourcegraph.com/cody

spaCy NLP in Python Open source spacy.io

Splunk IT Service splunk.com/en_us/products/it-service-


AIOps for monitoring and observability By request
Intelligence intelligence.html

Stability AI Stable
Image generation tool Trial period stability.ai/stable-assistant
Assistant

Stability AI Stable
High-resolution image synthesis Open source github.com/Stability-AI/stablediffusion
Diffusion

Stability AI Stable
Community interface for GenAI Open source github.com/Stability-AI/StableStudio
Studio

Stanford CoreNLP NLP toolkit Open source stanfordnlp.github.io/CoreNLP

Targeted predictive, generative, and


SymphonyAI By request symphonyai.com
agentic AI applications

Synthesia AI video platform Free tier synthesia.io

Private, secure AI-powered


Tabnine Trial period tabnine.com
development platform

Teneo Platform Conversational AI for contact centers By request teneo.ai/platform/teneo

TensorFlow End-to-end ML platform Open source tensorflow.org

TensorFlow TFX Deploying ML production pipelines Open source tensorflow.org/tfx

tidymodels Packages for modeling and ML Open source tidymodels.org

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 66


DZONE'S 2025 GENERATIVE AI SOLUTIONS DIRECTORY

Product Purpose Availability Website

Incorporate AI and ML models uipath.com/product/rpa-ai-integration-with-


UiPath AI Center Trial period
into automations ai-center

Uniphore X Platform Multimodal conversational AI By request uniphore.com/x-platform

University of Waikato
ML for data stream mining Open source moa.cms.waikato.ac.nz
MOA

University of Waikato
ML algorithms for data mining tasks Open source waikato.github.io/weka-wiki
Weka

Weaviate AI-native database Trial period weaviate.io

WhyLabs AI Control Monitor and manage the health of AI


Free tier whylabs.ai/ai-control-center
Center apps in real time

WhyLabs LangKit Toolkit for monitoring LLMs Open source github.com/whylabs/langkit

Fully managed vector database and


Zilliz Cloud Free tier zilliz.com
data services

© 2025 DZONE TREND REPORT | GENERATIVE AI PAGE 67


At DZone, we foster a collaborative environment that empowers developers and tech professionals
to share knowledge, build skills, and solve problems through content, code, and community. We
thoughtfully — and with intention — challenge the status quo and value diverse perspectives so that,
as one, we can inspire positive change through technology.
3343 Perimeter Hill Dr, Suite 100
Copyright © 2025 DZone. All rights reserved. No part of this publication may be reproduced, stored in
Nashville, TN 37211
a retrieval system, or transmitted, in any form or by means of electronic, mechanical, photocopying, or
888.678.0399 | 919.678.0300
otherwise, without prior written permission of the publisher.

You might also like