Global State of DevSecOps 2024
Insights and Trends in Software Security Testing from Black Duck
Table of Contents

Executive Summary
  About Black Duck
Findings Overview
  AI-assisted development soars but securing AI-generated code lags far behind
  Parallels between securing AI-generated code and securing open source
  An increased focus on software security testing
  Too much noise, too many tools
  Looking ahead
A Deep Dive
  From interpretation to action
Conclusion
Appendix
Executive Summary
It is a time of radical change in software development, with organizations in every
industry recognizing the need for robust, efficient security processes that can keep
pace with new development practices, such as AI-assisted coding.
The findings in the “Global State of DevSecOps 2024” report are based on a
comprehensive survey that Black Duck® commissioned from Censuswide, an
international market research consultancy. More than 1,000 software developers,
application security (AppSec) professionals, CISOs, and DevOps engineers across
multiple countries and industries were included in the survey.
This report provides critical insights into the current state of DevSecOps practices
and AppSec testing. It delivers a comprehensive analysis of trends, challenges,
and opportunities, and it offers actionable insights for organizations seeking to
enhance their DevSecOps practices.
Findings Overview

AI-assisted development soars but securing AI-generated code lags far behind

One of the most striking discoveries in this report is that the AI revolution is already over—and AI won, at least when it comes to integrating AI into software development processes. The adoption of AI in software development has gone beyond a tipping point, with over 90% of the respondents to our survey using AI assistance in some capacity.

Although 85% of respondents to our survey say they have some measures in place to address the challenges posed by AI-generated code, only 24% are "very confident" in their policies and processes for testing such code. A total of 67% of respondents feel only "moderately confident" (41%), "slightly confident" (20%), or "not at all confident" (6%). Here is the breakdown:

• Very confident: 24%
• Moderately confident: 41%
• Slightly confident: 20%
• Not at all confident: 6%

This lack of confidence may reflect the fact that 21% of respondents acknowledge that their development teams are bypassing corporate policies and using unsanctioned—and, one would assume, unsupervised—AI tools. Again, unmanaged AI use parallels the early days of unmanaged open source use, when few executives were aware that their development teams were incorporating open source libraries into proprietary code, let alone the extent of that use.

Parallels between securing AI-generated code and securing open source

The rapid adoption of AI-assisted coding by software development teams shares several similarities with the historic rise of open source software use. Both movements disrupted traditional software development practices. Open source challenged proprietary software models, and AI-assisted coding is transforming how code is written and reviewed.

But just as with open source use, bringing AI-assisted coding tools into software development presents unique intellectual property (IP), licensing, and security challenges that need careful management by development teams. For example, both unmanaged open source and AI-generated code can create ambiguity about IP ownership and licensing—especially when the AI model uses datasets that might include open source or other third-party code without attribution.

AI-assisted coding tools also have the potential to introduce security vulnerabilities into codebases. One researcher flatly concludes that "autogenerated code cannot be blindly trusted, and still requires a security review to avoid introducing software vulnerabilities."

There are clear challenges in managing and securing AI-generated code. Our survey found that organizations are at different stages of implementing policies and controls around AI tool usage, reflecting the nascent nature of this trend.

An increased focus on software security testing

Test coverage is substantial but not universal, with 57% of respondents testing between 41% and 80% of their projects, branches, and repositories, suggesting opportunities for expanding security test coverage.

Our findings show that organizations are prioritizing security testing based on the sensitivity of information handled (37% of respondents), while also emphasizing industry best practices (36%) and increasing use of automated security testing (35%).

Configuration of security tests is becoming more centralized, with 55% of respondents using centralized interfaces for test configuration. And although test execution is becoming more automated, the persistence of nonautomated activities documented in this report indicates substantial room for improvement. A significant percentage of respondents still use manual processes in their application security testing and remediation workflows; the exact amount varies depending on which manual process we look at, but it ranges from about 15% to 43% of respondents.

Too much noise, too many tools

A slight majority of respondents (52%) find security test results "somewhat easy" to understand and act upon, while another 20% deem their results "extremely easy" to understand. However, this perception varies across roles, industries, and geographies.

The findings also reveal a critical challenge with "noise" in security testing results; that is, output that is considered irrelevant or not worth acting upon. Noise is often caused by a high number of false positives or a large volume of duplicative true positives in results. Sixty percent of respondents reported that they consider over 20% of their results to be noise, impacting efficiency and decision-making processes.

Despite a broader trend of integrating security into development processes, 61% of respondents report that security testing moderately or severely slows down development. The tension between security and development speed remains a critical challenge for every industry.

The fact that 82% of organizations use between 6 and 20 security testing tools is certainly a factor, with a broad proliferation of tools contributing to the high levels of noise reported by respondents. Multiple tools may detect the same issues, leading to duplicative results. Or different tools may provide conflicting results for the same code or application. Each tool may generate its own false positives, a problem that compounds as more tools are used.

With so many tools in use, organizations are struggling to effectively integrate and correlate results across platforms and pipelines, leading to difficulty distinguishing between genuine issues and false positives, as well as challenges in prioritizing issues across different tools' outputs.

Looking ahead

Several key trends are shaping the path of DevSecOps.

• Increased automation of security testing and remediation processes
• A need for policies concerning the use of AI-assisted development tools
• Enhanced focus on reducing noise in security test results to improve efficiency
• The evolution of cross-functional collaboration in security decision-making

Organizations have significant opportunities to improve their DevSecOps practices by leveraging automation, enhancing the clarity of security test results, developing robust policies for AI-assisted development, and fostering better cross-functional collaboration.

As the landscape continues to evolve, organizations must stay agile, adapting their AppSec processes to meet emerging challenges. The most successful will be those that can effectively balance rigorous security practices with the speed and innovation demands of modern software development.
A Deep Dive

Our survey of over 1,000 security professionals reveals a state of flux, with organizations striving to balance security measures with the demands of rapid development cycles. This section delves into the current state of DevSecOps in 2024.

Q1. Which of the following criteria does your organization consider when determining which application security tests to run and when they are run?

Sensitivity of information accessed/transmitted by the application: 37%
General best practices recommended by third-party organizations (e.g., OWASP): 36%

This prioritization reflects a mature understanding of the impact potential breaches can have across different parts of an application ecosystem. Sensitive data exposure is one of the most common and serious security issues across industries. To address these vulnerabilities, organizations need to implement strong encryption practices, use up-to-date security protocols, and ensure that sensitive data is properly protected both when it is being transmitted and when it is stored.

Our data shows that organizations in sectors such as Application/Software, Banking/Finance, Healthcare, and Government are particularly attuned to this priority, given the highly sensitive nature of the data they handle.
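To make the "protected when stored" half of that guidance concrete, here is a minimal sketch using Python's widely used cryptography package; the record contents and key handling are illustrative assumptions only. Protection in transit is normally enforced at the transport layer (TLS) rather than in application code.

```python
# Minimal sketch: symmetric encryption of sensitive data at rest,
# using the Fernet recipe from the `cryptography` package (assumed installed).
from cryptography.fernet import Fernet

# In practice the key would come from a secrets manager or KMS;
# it should never be hard-coded or committed to source control.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"ssn=123-45-6789"        # illustrative sensitive field
token = fernet.encrypt(record)     # persist only the ciphertext
print(fernet.decrypt(token))       # b'ssn=123-45-6789'
```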
Automating and ensuring ease of test configuration

The emphasis on automation and ease of test configuration, prioritized by 35% of respondents, underscores the growing integration of security into DevOps processes. This move toward DevSecOps reflects the recognition that security must be woven into the fabric of the development life cycle rather than treated as an afterthought.

Trending toward centralization

Q2. Which statement best describes your process of configuring and running application security tests across your SDLC or CI pipeline?

Testing tools provided by the same vendor are configured using a centralized interface and automatically run with policies: 30%
All tests are configured using a centralized interface and automatically run with policies: 26%

The top responses to the survey's Question 2 reveal a clear trend toward centralization in tool configuration for efficiency and consistency. Thirty percent of respondents reported using a vendor's interface to configure tests from that vendor, while 26% reported using a centralized interface for all tests, regardless of vendor.

Centralizing security tools allows for a unified management interface, which simplifies the monitoring and configuration of security measures. This reduces the complexity associated with managing multiple disparate systems, facilitates integration at each stage of the pipeline, and ensures that security policies are consistently applied across the organization. With a centralized system, security efforts can be more easily coordinated, reducing the likelihood of gaps or overlaps in security coverage. A centralized, holistic approach enhances the ability to detect and respond to threats across the entire IT infrastructure.

Centralized management also allows better visibility into an application's security profile, enabling more effective identification and mitigation of vulnerabilities. Further, it facilitates the collection and analysis of security data, which is crucial for proactive threat detection and response.

Overall, centralization and vendor consolidation in security testing can significantly enhance an organization's ability to protect its digital assets by simplifying management, improving coordination, and potentially reducing costs.

A struggle to attain full security coverage

Q3. Which of the following statements best describes the manner in which new projects, branches, or repositories are added to your application security testing queue?

All are added to the test queue manually (e.g., declared by dev team, selected by security team): 29%
All are added to the test queue automatically (e.g., detected by testing tools): 38%
Most are added to the test queue automatically; a few are added manually: 22%
Most are added to the test queue manually; a few are added automatically: 6%
I am not familiar with how items are added to the security testing queue: 4%
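The "added automatically" option above is commonly implemented via SCM webhooks or repository discovery. Below is a minimal, hypothetical sketch of webhook-driven queue registration; the event types, payload fields, and the SCAN_QUEUE stand-in are assumptions for illustration, not any particular SCM vendor's API.

```python
# Minimal sketch: auto-registering new repositories/branches for security scanning.
import json

SCAN_QUEUE = []  # stand-in for a real job queue (e.g., a message broker)

DEFAULT_POLICY = {"sast": True, "sca": True, "fail_on": "critical"}

def handle_scm_event(event: dict) -> None:
    """Enqueue a scan whenever a repository or branch is created."""
    if event.get("type") not in ("repository.created", "branch.created"):
        return
    SCAN_QUEUE.append({
        "repo": event["repo"],
        "ref": event.get("branch", "main"),
        "policy": DEFAULT_POLICY,
    })

# Example payload, shaped the way an SCM platform might deliver it
handle_scm_event({"type": "branch.created", "repo": "payments-api", "branch": "feature/x"})
print(json.dumps(SCAN_QUEUE, indent=2))
```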
Q4. Approximately what percentage of your projects, branches, and repositories are included in your application security testing queue?

41%–60%: 37%
61%–80%: 21%

Despite the emphasis on comprehensive security, many organizations struggle to achieve full coverage, as the responses to Questions 3 and 4 demonstrate. Nearly 30% of respondents still add new projects, branches, or repositories to their application security testing queue manually. Six percent use mostly manual processes with some automation. In other words, about 35% of organizations are still heavily reliant on manual intervention in their security testing queue management.

While there are varying perceptions of the extent to which security testing impacts development workflows, survey results show a clear correlation between the perceived impact of testing and manual processes. For example, 50% of those that say application security testing slows down the process also say that most projects are added to the test queue manually.

However, 38% of respondents report that they are taking full advantage of automated processes to include all projects in test queues, and another 22% report mostly using automated processes. This means that 60% of organizations are leveraging automation to a significant degree in their security testing workflows.

Thirty-seven percent of respondents include only 41% to 60% of their projects, branches, and repositories in their testing queue. Twenty-one percent achieve 61% to 80% coverage.

This coverage gap presents significant risk, potentially leaving critical parts of an organization's application ecosystem untested. While counterintuitive, some respondents noted slightly higher-than-average coverage despite using manual processes to add projects to the test queue. This may simply be the level of coverage being perceived as higher due to the greater level of effort required to test each project.

Who determines when security tests are run

Q5. Which of the following teams/departments determine which application security tests are performed, when, and on which projects?

Security: 44%
Development/software engineering: 42%
DevOps: 37%
Quality assurance: 34%
Compliance: 28%
Cross-functional groups: 21%
Legal: 19%
None of the above: 1%

The responses to Question 5 offer valuable insights into how organizations are structuring their application security testing decisions. This data paints a picture of organizations increasingly treating security as a shared responsibility, integrated into various stages of the software development life cycle.

The close percentages for security (44%) and development/software engineering (42%) suggest a trend toward shared responsibility for security testing. This aligns well with DevSecOps principles, indicating that security is becoming more integrated into the development process.

At 37%, DevOps teams play a significant role in security testing decisions. This further supports the trend toward integrating security throughout the development life cycle. At 34%, QA teams are also heavily involved, suggesting that many organizations view security as an integral part of overall software quality.
The involvement of compliance (28%) and legal (19%) teams indicates that regulatory and legal requirements are significant factors in security testing decisions for many organizations.

Twenty-one percent of respondents indicate that cross-functional groups are involved in these decisions, showing a trend toward collaborative, multidisciplinary approaches to security. With only 1% selecting "None of the above," it's clear that the majority of organizations have specific teams or processes in place for determining security testing.

The distribution across teams suggests a relatively mature approach to security in many organizations, moving away from security as solely the responsibility of a dedicated security team. These results align with broader industry trends toward DevSecOps and "shift-everywhere" security practices, as described in the "Building Security in Maturity Model" report, where security is integrated earlier and more continuously in the development process.

A tool proliferation challenge

Q6. Approximately how many application security testing tools does your organization use?

6–10: 34%
11–15: 33%
16–20: 15%
Total (6–20): 82%

One of the most striking findings from our survey is the sheer number of security testing tools in use, as shown by the responses to Question 6. Eighty-two percent of organizations use between 6 and 20 security testing tools.

A proliferation of tools, although intended to provide comprehensive coverage, introduces significant complexity in integration, results interpretation, and overall management. It correlates strongly with another key challenge—noise in security testing results.

The noise factor

Q9. Approximately what percentage of security test results are noise? For example: duplicative results, false positives, conflicting with other tests/tools.

21%–40%: 30%
41%–60%: 30%
Total (21%–60%): 60%

Question 9 uncovers a significant hurdle in effective security testing: the high level of noise in results. A total of 60% of respondents reported that between 21% and 60% of their security test results are noise. A high noise level can significantly impact the effectiveness of security efforts and lead to efficiency loss, as teams must spend time filtering out irrelevant findings. It can also lead to alert fatigue and genuine threats being overlooked, as well as resource misallocation due to organizations directing too much of their security efforts toward noncritical issues.

Role-based differences

There is a perception among security personnel of a high percentage of noise within security test results. This is likely because security teams are commonly tasked with managing security tests, as they sit toward the top of the review funnel. These teams present dev/engineering teams with cleansed and prioritized results, which in turn skews those teams toward lower perceived noise.

Likewise, 17% of dev/engineering personnel feel they don't have enough visibility into security tests to identify noise in results. This is in stark contrast to CISOs, CTOs/CPOs, and AppSec professionals; only 1% of respondents in those roles cite a lack of visibility when detecting noisy results. One core tenet of efficient DevSecOps is adequate visibility into software artifacts and associated risks across all teams. Inadequate visibility can slow down issue detection, prioritization, and remediation, and leave pipelines prone to breakdowns and software open to attack.
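To make the duplication side of the noise problem concrete, here is a minimal sketch of cross-tool deduplication; the finding fields and sample records are illustrative assumptions, not any specific tool's output format.

```python
# Minimal sketch: collapsing duplicate findings reported by multiple tools.
# Findings are normalized onto a (rule, file, line) fingerprint so the same
# flaw reported by two scanners is triaged once. Field names are illustrative.
from collections import defaultdict

findings = [
    {"tool": "sast-a", "rule": "CWE-89", "file": "app/db.py", "line": 42, "severity": "high"},
    {"tool": "sast-b", "rule": "CWE-89", "file": "app/db.py", "line": 42, "severity": "critical"},
    {"tool": "sca-c", "rule": "CVE-2024-0001", "file": "requirements.txt", "line": 3, "severity": "medium"},
]

def fingerprint(finding: dict) -> tuple:
    """Tool-agnostic identity for a finding."""
    return (finding["rule"], finding["file"], finding["line"])

grouped = defaultdict(list)
for f in findings:
    grouped[fingerprint(f)].append(f)

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}
for key, dupes in grouped.items():
    worst = max(dupes, key=lambda f: SEVERITY_RANK[f["severity"]])
    tools = sorted({f["tool"] for f in dupes})
    print(f"{key}: 1 issue from {len(tools)} tool(s) ({', '.join(tools)}), severity={worst['severity']}")
```

Real correlation engines use fuzzier matching (code hashes, location ranges, CWE mappings), but the principle is the same: many reports, one issue.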
Worldwide AI adoption

Q14. Are your developers using AI, generative, or transformational tools to write code and modify projects (by region)?

Yes (Net), by region:
U.K.: 94%
U.S.: 97%
France: 92%
Germany: 94%
Finland: 93%
China: 97%
Singapore: 96%
Japan: 60%
Developers Using AI

Q14. Are your developers using AI, generative, or transformational tools to write code and modify projects (by industry sector)?

Technology: 91%
Cybersecurity: 98%
Application/Software Development: 85%
Manufacturing: 84%
FinTech: 98%
Education: 100%
Banking/Financial: 95%
Telecommunications/ISP: 90%
Healthcare: 92%
Retail: 87%

Similar numbers play out by industry sector, with over 90% adoption reported across the Technology, Cybersecurity, FinTech, Education, Banking/Financial, Healthcare, Media, Insurance, Transportation, and Utilities sectors. Even lagging sectors, such as Nonprofit, report at least 50% adoption. Perhaps unsurprisingly, the larger the organization, the more likely it has significantly adopted some facet of AI in its software development.

This trend is reshaping the security testing landscape and also introduces new challenges, particularly in securing AI-generated code and managing potential biases or vulnerabilities that AI systems might introduce, as the responses to Question 15 show.

Q15. How confident are you that you have the processes in place to manage and secure AI-generated code?

Confident (Net): 85%
Very confident we have the policies and automated testing in place: 24%
Moderately confident we have the policies and automated testing in place: 41%
Most respondents not confident they're securing AI-generated code

While the net confidence level of respondents to Question 15 may seem high at first blush, a deeper dive into the responses shows that 41% of respondents are only moderately confident that they have the policies and automated testing in place to adequately vet AI-generated code, while 20% are only slightly confident and 6% are not at all confident—a total of 67% of respondents showing concern about managing and securing AI-generated code.

This distribution suggests that even though their development teams are adopting AI tools, many organizations are still in the process of putting policies and tools into place to manage the unique challenges posed by AI-generated code. Ensuring the reliability and security of that code remains a significant challenge. As one example, AI tools trained on public open source codebases could introduce potential IP, copyright, and license issues into the code they produce, particularly if that code is used in proprietary software.

Figure 1. Developers' AI usage (permitted or not) correlated against moderate to high confidence in security controls

Confidence in security controls amid AI development

In Figure 1, starting from the left, less than 5% of organizations forbid developers from using AI to write code or modify projects. Perhaps this group's moderate and high confidence in their preparedness derives from their prohibition of the use of AI, or perhaps there are other access controls that preclude access to AI resources.

The second group, 27% of respondents, reports a strong awareness that AI is being used. Eighty-one percent have moderate or high confidence in their security preparedness (22% of overall responses). These respondents are readily leveraging AI tools and confident that they have the controls in place to mitigate consequent risks.

The third and fourth groups are in the midst of an AI evolution, with moderate to high confidence in their security preparedness and a seemingly phased approach to AI-enabled development.
Figure 2. Developers' AI usage (permitted or not) correlated against low to slight confidence in security controls

In Figure 2, we can see some dissonance between respondents' use of AI-generated code and AI-assisted development, and the steps they're taking to safeguard their intellectual property and mitigate security risks.

Starting from the left, the less than 5% that forbids the use of AI tools altogether exhibits slight or nonexistent confidence in security preparedness, with nearly 42% of this group claiming a lack of priority. Consequently, their choice to disallow AI-enabled development may stem from this lagging organizational approach to securing AI-generated code.

Most concerning is the group second from left, which has some development teams that are using AI with permission, despite a clear lack of confidence in their preparations to mitigate risks.

The group second from right illustrates a seemingly phased adoption of AI-enabled development and security controls, with limited permission being granted, perhaps based upon a slight confidence in preparedness.

The rightmost group highlights a greater exposure to risk, where automated testing of AI-generated code is a notably lower priority despite an awareness of the use of AI-assisted development.

AI and code snippets

A common practice of developers is to use "snippets" (small extracts from larger pieces of code) in software, a practice whose risks are now exacerbated by the use of AI coding assistants. Although code might include only a snippet of open source, users of the software must still comply with any license associated with the snippet.

Even one noncompliant license in software can result in legal reviews, freezes in merger and acquisition transactions, loss of intellectual property rights, time-consuming remediation efforts, and delays in getting a product to market.

Black Duck's 2024 OSSRA report relates that over half—53%—of the applications examined contained open source with license conflicts, exposing those applications' owners to potential IP ownership questions.
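As a concrete illustration of the license-compliance point, the sketch below screens components from a software bill of materials against a license allowlist; the simplified SBOM structure and the allowlist contents are assumptions for illustration, not a full SPDX or CycloneDX parser.

```python
# Minimal sketch: flagging components whose declared licenses fall outside
# an organization's allowlist. The dict loosely mimics an SBOM; a real
# implementation would parse the full SPDX or CycloneDX specification.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

sbom = {
    "components": [
        {"name": "left-pad-ish", "version": "1.0.0", "license": "MIT"},
        {"name": "copyleft-lib", "version": "2.3.1", "license": "GPL-3.0-only"},
        {"name": "snippet-origin-unknown", "version": "0.0.1", "license": "NOASSERTION"},
    ]
}

for comp in sbom["components"]:
    lic = comp["license"]
    if lic not in ALLOWED_LICENSES:
        # An unknown license (e.g., an AI-suggested snippet of uncertain
        # provenance) is treated as a conflict until its origin is established.
        print(f"LICENSE REVIEW NEEDED: {comp['name']}@{comp['version']} ({lic})")
```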
Interpreting and acting on security test results

The effectiveness of application security testing hinges not just on the execution of tests, but also on the ability to interpret results and take appropriate action. This section examines the current state of result interpretation and remediation based on our survey results, highlighting both progress and persistent challenges in the field.

Q7. Which statement best describes the clarity and actionability of the results of your application security tests?

Security test results are extremely easy to understand and to act upon:
All respondents: 20%
CISO: 37%
CTO/CPO: 23%
AppSec: 21%
DevOps and dev/engineering: 14%

Role-based differences

Our analysis suggests that CISOs, CTOs/CPOs, and AppSec professionals generally reported higher levels of ease in understanding and acting upon security test results compared to other roles (Question 7). For example, 37% of CISOs, 23% of CTOs/CPOs, and 21% of AppSec professionals found security test results "extremely easy" to understand and to act upon.

Q7. Which statement best describes the clarity and actionability of the results of your application security tests (by region)?

Regard results as easy to interpret and act on (Net): 72%
China: 88%
Singapore: 83%
Finland: 82%
Germany: 76%
U.K.: 73%
France: 71%
U.S.: 55%
Japan: 51%

Geographical differences

Notable variations were observed across countries. For example, 88% of respondents in China found testing results easy to understand, compared to 55% in the U.S. and 51% in Japan. These regional disparities suggest differences in tool adoption, security culture, or regulatory environments across countries.
Different approaches to parsing and cleansing results

Q8. Which statement best describes your approach to parsing and cleansing the results of application security tests?

Results generated by all tools are manually parsed and cleansed: 38%
We can automatically parse and cleanse results from some testing tools; the remainder are manually parsed and cleansed: 28%
Results generated by all tools are automatically parsed and cleansed: 25%

The process of parsing and cleansing security test results reveals a spectrum of approaches (Question 8). For example, 38% of respondents manually parse and cleanse results from all tools. Twenty-five percent report fully automated parsing and cleansing of results. Twenty-eight percent use a combination of automated and manual parsing and cleansing.

The prevalence of manual and hybrid approaches (66% combined) indicates a significant opportunity for increased automation and normalization in results processing. However, the challenge lies in balancing automation with the need for human expertise in interpreting complex security contexts.

Automated vs. manual review

As illustrated in Figure 3, it is possible to associate ease of interpretation and action with the method of parsing and cleansing data. The resulting insight reveals a clear benefit to establishing automated mechanisms for parsing and cleansing security test data, whether the benefit comes from accelerated review or more consistent elimination of noise before human consumption. Of those that manually parse and cleanse test results, 22% find those results somewhat or extremely difficult to understand and act upon. Of those that use automated means, only 10% report the same difficulty.

Conversely, 90% of those that use automated methods to parse and cleanse data find the results of security tests somewhat or extremely easy to understand and act upon, while only 77% report the same ease when doing so manually. Notably, when examining those with hybrid approaches to reviewing test results, we see a "worst of both worlds" experience, with 35% citing difficulty understanding and acting on results, and only 64% finding it easy to do so.

Figure 3. Impact of review method on understanding results and taking action
From interpretation to action

Constant security testing vs. development speed tension

Q13. Which statement best describes the relationship between application security testing and software development/delivery?

Application security testing moderately slows down development/delivery: 43%

Role-based differences

When examining potential differences in security testing's impact on development and delivery pipelines, there are a few clear distinctions among roles, depicted in Figure 4.

AppSec teams, perhaps due to their proximity to the testing process or the pressures applied to them to accelerate review, show the greatest sentiment that tests moderately or severely impede pipelines (65%). Similarly, 58% of dev/engineering personnel share this sentiment. It's important to note that visibility into security testing is a significant challenge for dev/engineering teams, making it likely more difficult for them to assess the impact of security tools. This can make a concerted DevSecOps initiative more difficult to implement, as critical contributors are unable to close feedback loops and optimize efforts appropriately.

Figure 4. AppSec and dev/engineering perception of security testing's impact on development/delivery
Let's now extend each role's perception of pipeline impediment to include the method of managing the security testing queue. We can validate that each role benefits from automating security testing. When manually managing testing queues, 29% of dev/engineering personnel and 44% of security personnel feel severe impact on development and delivery timelines. When managing testing queues through automation, only 16% of dev/engineering personnel and 19% of security personnel feel a severe impact to development speed.

This illustrates a great benefit to development and delivery pipelines, yet also defines a consistent perception among dev/engineering teams that security testing tends to negatively impact their workflows. Ultimately, dev/engineering teams report only a 13% reduction in perceived slowdown, whereas security teams report a 25% reduction.

Q10. Which statement best describes your approach to prioritizing detected security issues for remediation?

Issues are automatically prioritized for remediation based on policies/risk tolerance: 49%
Issues are manually prioritized for remediation: 43%

Question 10 shows that nearly half the surveyed organizations are using automated systems to prioritize security issues, indicating a significant adoption of advanced risk management practices. But a substantial portion (43%) still rely on manual prioritization. The close split between automated and manual prioritization suggests that the arena of software security testing is in a transition phase, with many organizations likely using a hybrid approach.

What happens when security issues are discovered

Q11. What actions/mechanisms occur automatically as a result of application security testing results or policy violations?

Alerting to upstream contributors (e.g., developers, engineers, architects): 38%
Prioritization for triage and remediation: 32%

The responses to Question 11 reveal that organizations are employing a variety of automated actions to address security issues, indicating a mature, layered approach. Organizations are actively implementing automated security measures throughout the development life cycle, with a focus on communication, prevention, and integration with existing workflows. However, there's still significant room for wider adoption.
The top actions involve alerting various stakeholders (38% for upstream contributors, 32% for downstream stakeholders), emphasizing the importance of communication in addressing security issues at a pace required by DevOps and CI/CD methodologies. High percentages for assignment via issue management tools (36%) reveal a focus on the DevSecOps requirement for closed feedback loops between security and development teams to accelerate remediation. Significant percentages for actions such as preventing code check-ins (32%), blocking promotion downstream (28%), and breaking builds (24%) demonstrate a shift toward using automated, preventive security measures to preclude risks and avoid realizing exploitable conditions in production environments. However, while adoption of these automated actions is significant, there's still room for growth, as no single action is implemented by more than 38% of organizations.

Q12. Out of the following, how are developers/software engineers in your organization notified of/assigned application security issues for remediation?

In the responses to Question 12, the top five methods of assigning remediation issues are all automated, indicating a strong trend toward automating the notification process. This aligns with broader DevSecOps principles of integrating security seamlessly into development workflows. The prevalence of alerts within development tools (36%) and pipeline tools (35%) indicates an attempt to help developers fix issues more quickly. There is a high percentage of responses citing alerts within issue management tools (39%) and security tools (40%), which indicates multiple locations to access necessary risk information; this creates unnecessary deviations from development workflows. While not as common as automated methods, manual assignment of issues is still used by a significant portion (32%) of organizations.
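To show what automated actions such as "breaking the build" can look like in practice, here is a minimal sketch of a CI gate script; the findings-file format and threshold policy are illustrative assumptions rather than any particular tool's interface.

```python
#!/usr/bin/env python3
# Minimal sketch: a CI step that fails the pipeline when security findings
# exceed a policy threshold. A nonzero exit code is what actually "breaks
# the build" in virtually every CI system.
import json
import sys

POLICY = {"critical": 0, "high": 5}  # maximum allowed findings per severity


def main(path: str) -> int:
    with open(path) as fh:
        findings = json.load(fh)  # expected: list of {"id", "severity", ...}

    counts: dict[str, int] = {}
    for finding in findings:
        sev = finding["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1

    violations = [
        f"{sev}: {counts.get(sev, 0)} found, {limit} allowed"
        for sev, limit in POLICY.items()
        if counts.get(sev, 0) > limit
    ]
    if violations:
        print("Security gate FAILED:\n  " + "\n  ".join(violations))
        return 1  # nonzero exit blocks merge/promotion
    print("Security gate passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```

The same pattern generalizes to the other automated actions in Question 11: the gate runs at check-in, artifact publication, or promotion time, and the policy, not an individual reviewer, decides whether the pipeline proceeds.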
Conclusion

As we conclude this examination of the current state and future trajectory of application security, it's clear that DevSecOps is at a critical juncture. Our findings reveal both progress and problems in current DevSecOps practices.

Over 60% of respondents report that security testing moderately or severely slows down development, highlighting the ongoing challenge of integrating robust security practices without impeding agility. Over 80% of organizations use between 6 and 20 security testing tools, indicating a complex testing environment that can lead to integration challenges, noise, and alert fatigue. In fact, 60% of respondents report that anywhere from 21% to 60% of their security test results are noise, underscoring the need for more-effective filtering and prioritization mechanisms.

While 49% of organizations now use automated prioritization for security issues, reflecting a growing trend toward leveraging technology to streamline security processes, a significant number of respondents are still using manual processes in various aspects of their application security testing and remediation workflows.

With over 90% of organizations using AI tools in some capacity for software development, we're witnessing a transformative shift in how applications are built and secured. This adoption brings both new capabilities and new security considerations. While adoption is high, only 24% of respondents are very confident in their policies, management, and testing for AI-generated code, indicating an area in dire need of automated processes.

The truth of the matter is that, while AI-assisted coding may be accelerating development, security processes—which are already struggling to keep up—are going to fall further behind without automation.

Take the time now to critically evaluate your organization's approach to software security testing.
Scrutinize your tool stack. Are you drowning in a sea of disparate solutions, or leveraging an integrated, streamlined suite of security tools?

• Evaluate your tools and processes. Aim for consolidation and results integration.
• Reduce tool proliferation and complexity where possible by choosing a primary vendor with the experience and knowledge to consolidate disparate tools into a comprehensive testing whole.
• Explore implementing an application security posture management (ASPM) solution to integrate tools, automate workflows, and normalize and prioritize results. Invest in tools and processes that consolidate security test results and make them more actionable and easier to understand across all roles in the organization.

Evaluate your automation levels. Our survey indicates that manual processes still dominate in many organizations. Identify where automation can be leveraged to boost speed, efficiency, and consistency.

• Explore integrating automated security checks into your CI/CD pipeline.
• Consider implementing infrastructure-as-code (IaC) with built-in security policies.
• Provide developers with IDE plugins for real-time security feedback and prioritized remediation guidance to help them fix faster and cultivate their security capabilities.

Establish AI governance today. Establish clear policies and procedures for the use of AI in software development. Invest in tools and processes designed to vet and secure AI-generated code.

• Static application security testing (SAST) is highly effective at identifying coding flaws early in the development process. This is crucial for vetting AI-generated code, which can inherit insecure coding flaws from its training data.
• Similarly, examining AI-created code with a software composition analysis (SCA) tool can help developers identify and secure outdated or insecure third-party components, as well as open source libraries with licenses that may potentially conflict with an organization's business goals for its software.
• Dynamic application security testing (DAST) detects vulnerabilities at runtime and verifies issues' exploitability. DAST scans are a critical component of application security testing, and the rise in AI-generated code only further highlights their importance. AI coding tools are trained using publicly accessible code repositories, for better or for worse. They are great for generating code quickly, but they do not apply the same contextual reasoning a developer would to determine the best way to write code for a specific application. While teams can provide this context to AI tools in the form of prompt engineering, there are still limitations. Ensuring that the application is performing its desired function, and doing so securely, relies on application security testing. Doing so at the speed of AI-enabled development requires that testing be tightly integrated into pipelines to allow for the detection of vulnerabilities in the context of the application in its running state.
• Ideally, all three security testing tools—SAST, SCA, and DAST—will run atop a centralized platform or be managed through an ASPM solution. You may also yield greater efficiency and scalability with proper integration and coordination with the other AppSec testing tools, developer tools, and issue-trackers you are using.
• Protect sensitive data used to train AI models by ensuring that only authorized personnel have access. Encrypt data both at rest and in transit to prevent unauthorized access. Enforce the principle of least privilege to grant AI systems the minimum access necessary to perform their functions.

Successful organizations will be those that view the challenges outlined in this report not as obstacles, but as opportunities for transformation and improvement. They'll be the ones not just willing to adapt to the changing landscape of DevSecOps but determined to shape it. The future of DevSecOps is not predetermined—it's waiting to be defined. What role will you play in the future of DevSecOps?
Appendix

Survey Respondents

Demographic charts profile respondents by country, industry sector, job role (Job Roles of Survey Respondents), and Organization Headcount.
Questions

Q1. Which of the following criteria does your organization consider when determining which application security tests to run and when they are run?

Sensitivity of information accessed/transmitted by the application: 36.77%
General best practices recommended by third-party organizations (e.g., OWASP): 35.88%
Ease-of-configuration or automation of the security tests: 35.38%
Industry requirements or regulatory compliance: 34.99%
The application's production environment: 33.99%
Attestation of security processes to stakeholders (e.g., customers, partners, investors): 33.70%
Business criticality of the application: 32.80%
Release frequency or shipping deadline of the application: 30.82%
Recent publication of new vulnerabilities or zero-days: 29.34%
None of the above: 2.78%

Q2. Which statement best describes your process of configuring and running application security tests across your SDLC or CI pipeline?

Testing tools provided by the same vendor are configured using a centralized interface and automatically run with policies: 29.53%
All tests are configured using a centralized interface and automatically run with policies: 25.77%
Each test is configured using its own interface and automatically run with policies: 22.20%
Each test is configured using its own interface and manually run: 15.46%
I am not involved with configuring or running application security tests: 4.16%
None of the above: 2.87%

Q3. Which of the following statements best describes the manner in which new projects, branches, or repositories are added to your application security testing queue?

All are added to the test queue automatically (e.g., detected by testing tools): 38.35%
All are added to the test queue manually (e.g., declared by dev team, selected by security team): 28.74%
Most are added to the test queue automatically; a few are added manually: 22.40%
Most are added to the test queue manually; a few are added automatically: 6.14%
I am not familiar with how items are added to the security testing queue: 4.36%

Q4. Approximately what percentage of your projects, branches, and repositories are included in your application security testing queue?

Up to 20%: 4.86%
21%–40%: 23.39%
41%–60%: 36.77%
61%–80%: 20.52%
81%–100%: 8.72%
I do not have enough visibility to approximate test coverage: 5.75%

Q5. Which of the following teams/departments determine which application security tests are performed, when, and on which projects?

Security: 44.00%
Development/software engineering: 42.22%
DevOps: 36.57%
Quality assurance: 33.99%
Compliance: 28.15%
Cross-functional groups: 21.01%
Legal: 19.43%
None of the above: 1.39%
Q6. Approximately how many application security testing tools does your organization use? This should include all means to detect software vulnerabilities, noncompliance with security standards, sensitive data leakage, and policy violations.

1–5: 9.32%
6–10: 33.50%
11–15: 33.30%
16–20: 14.57%
21+: 3.87%
I do not have enough visibility to estimate the number of testing tools: 5.45%

Q7. Which statement best describes the clarity and actionability of the results of your application security tests?

Easy (Net): 72.25%
Security test results are extremely easy to understand and to act upon: 20.22%
Security test results are somewhat easy to understand and to act upon: 52.03%
Difficult (Net): 21.80%
Security test results are somewhat difficult to understand and to act upon: 17.54%
Security test results are extremely difficult to understand and to act upon: 4.26%
I am not involved with interpretation or action upon the results of application security tests: 5.95%

Q8. Which statement best describes your approach to parsing and cleansing the results of application security tests?

Results generated by all tools are manually parsed and cleansed: 38.06%
We can automatically parse and cleanse results from some testing tools; the remainder are manually parsed and cleansed: 28.05%
Results generated by all tools are automatically parsed and cleansed: 25.27%
I am not involved with parsing and cleansing application security test results: 5.35%
None of the above: 3.27%

Q9. Approximately what percentage of security test results are noise? For example: duplicative results, false positives, conflicting with other tests/tools.

0%–20%: 15.06%
21%–40%: 30.23%
41%–60%: 30.23%
61%–80%: 14.87%
81%–100%: 2.78%
I do not have enough visibility into all tests and results to identify noise: 6.84%

Q10. Which statement best describes your approach to prioritizing detected security issues for remediation?

Issues are automatically prioritized for remediation based on policies/risk tolerance: 49.45%
Issues are manually prioritized for remediation: 42.62%
I am not familiar with the process of prioritizing issues: 7.93%

Q11. What actions/mechanisms occur automatically as a result of application security testing results or policy violations?

Alerting to upstream contributors (e.g., developers, engineers, architects): 37.66%
Assignment to developers via issue management workflows (e.g., Jira, Slack): 35.58%
Prevent checking-in of code to SCM/repositories: 32.41%
Alerting to downstream stakeholders (e.g., security team, partners, customers): 32.11%
Prioritization for triage and remediation: 31.81%
Prevent addition of compiled assets into binary repositories: 29.93%
Block promotion into staging/production: 28.44%
Breaking the build: 23.89%
No actions or mechanisms are automated, all are manual based on test results: 4.96%
Other: 0.20%
Q12. Out of the following, how are developers/software engineers in your organization notified of/assigned application security issues for remediation?

Automated alerts/logs within development tools (e.g., IDE): 35.88%
Automated alerts/logs within pipeline tools (e.g., build, SCM, repos): 34.69%
Manual assignment (e.g., by manager or team lead): 32.11%
I am not familiar with how developers/engineers are made aware of security issues: 3.27%
None of the above: 1.98%

Q13. Which statement best describes the relationship between application security testing and software development/delivery?

I do not have enough visibility to assess the relationship accurately: 5.25%

Q14. Are your developers using AI, generative, or transformational tools to write code and modify projects?

Yes (Net): 90.29%
Yes, all developers are permitted to, and do, use these tools: 26.86%
Yes, but only certain developers/teams are permitted to, and do, use these tools: 42.91%
Yes, while we do not allow the use of these tools, we are aware that some developers use them: 20.52%
No, developers are not permitted to, and do not, use these tools: 4.66%
I do not have enough visibility into development processes to know if these tools are used: 5.05%

Q15. How confident are you that you have the processes in place to manage and secure AI-generated code?

Not at all confident we have the policies and automated testing in place: 6.05%
This is not a priority at this time, as using AI-generated code is against company policies: 4.46%
I do not have enough visibility into our processes to manage and secure AI-generated code: 4.56%

About Black Duck

Black Duck® offers the most comprehensive, powerful, and trusted portfolio of application security solutions in the industry. We have an unmatched track record of helping organizations around the world secure their software quickly, integrate security efficiently in their development environments, and safely innovate with new technologies. As the recognized leaders, experts, and innovators in software security, Black Duck has everything you need to build trust in your software. Learn more at www.blackduck.com.

©2024 Black Duck Software, Inc. All rights reserved. Black Duck is a trademark of Black Duck Software, Inc. in the United States and other countries. All other names mentioned herein are trademarks or registered trademarks of their respective owners. October 2024