6. Synopsys Keywords
2) What if we are talking to a Coverity or WhiteHat customer and we make the claim stated above: Seeker customers have
reported finding vulnerabilities they didn't even know to look for — vulnerabilities they couldn't have found with a
DAST scanner or with static analysis.
3) Each of these provides automation — IAST, DAST, SAST, or Software Risk Manager (SAST + SCA). Which one should we recommend?
4) Seeker IAST addresses the AppSec testing gap by providing a solution that allows automated testing of web applications, with
the ability to give the development team the specific location of vulnerabilities in the code.
If SAST is integrated with the IDE, vulnerabilities are found then and there. Even if it isn't, does Coverity SAST point to the exact
location of the code that causes a vulnerability? And does WhiteHat's dynamic testing pinpoint the location of the code that causes a vulnerability?
6) Synopsys Software Risk Manager, Synopsys Code Sight, and Synopsys Polaris all offer SCA + SAST — what is the difference?
Deployment and audience comparison:
- ASPM — Deployment: cloud-based, enterprise-wide. Target audience: security teams, application owners.
- AST — Deployment: on-premises or cloud. Target audience: security and development teams.
- IDE-integrated tools — Deployment: cloud-based (IDE plugin), developer workstation. Target audience: developers.
ASPM is best for managing and prioritizing vulnerabilities across the application portfolio.
AST excels at finding specific vulnerabilities in code and applications, including their location, severity, and potential impact.
IDE-integrated tools are ideal for early detection and developer education.
SAST – Coverity
IAST – Seeker
Fuzzing – Defensics
Coverity SAST
helps identify and address security flaws in the early stages of development.
Many common vulnerabilities go undetected by the testing that takes place in the SDLC while software is being coded;
they often remain undetected until they are exploited by threat actors.
According to 2022 audits, open source is found in 97% of codebases. Clearly, open source is the fuel of modern applications.
SCA is an automated process that evaluates open source code for security vulnerabilities, compliance issues, and code quality.
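The core of that SCA process can be sketched as matching declared dependencies against an advisory feed. This is a toy illustration, not any vendor's implementation; the package name and advisory text below are invented.

```python
# Toy SCA check: compare declared dependencies against a vulnerability feed.
# The package name and advisory below are made up for illustration only.
ADVISORIES = {
    ("libexample", "1.2.0"): ["EXAMPLE-0001: remote code execution"],
}

def scan(requirements):
    """Return (requirement, advisory) pairs for vulnerable pinned versions."""
    findings = []
    for req in requirements:
        name, _, version = req.partition("==")
        for advisory in ADVISORIES.get((name, version), []):
            findings.append((req, advisory))
    return findings

print(scan(["libexample==1.2.0", "othersafe==2.0.1"]))
# Only the dependency with a known advisory is reported.
```

Real SCA tools go further than this dependency lookup — they also scan for code snippets, license compliance, and transitive dependencies.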
Synopsys offers high-quality AST capabilities delivered as either SaaS (Polaris) or on-premises (Software Risk Manager).
Competitors
Checkmarx (AST solution providing SAST, DAST, SCA): but its SCA and DAST are weak.
Veracode (AppSec platform): strong SAST, but weak SCA and DAST.
SonarQube (SAST): SonarQube focuses only on 'code smells', which indicate where problems might exist, while Polaris (ASPM) and
Coverity (SAST) identify actual issues and provide remediation guidance to resolve them quickly. SonarQube also looks for only a
small portion of the vulnerabilities that put applications at risk.
Our SAST scans protect customer applications from being exploited by finding critical vulnerabilities that SonarQube misses.
Snyk (SCA, SAST, IaC, container, and cloud security): weak SAST, no DAST. Performs poorly in competitive bake-offs due to weak
SAST capabilities, a lack of DAST, and inaccurate, incomplete SCA results from offering only dependency analysis.
Snyk SCA offers only dependency analysis; when software supply chain risk management is top of mind, it's crucial to identify ALL
third-party dependencies.
Gartner's Critical Capabilities for Application Security Testing (AST) report provides a detailed evaluation of vendors in the AST market.
The report ranks 12 vendors based on their ability to deliver specific capabilities.
Five Use Cases: The vendors are evaluated across five common use cases:
To position Synopsys Software Integrity Group's key differentiators, here is a set of qualifying questions to get you started.
1)What are your software security and quality needs or goals?
2)Do you know how your software security compares to your peers?
3)Are your teams moving toward DevOps, DevSecOps or the cloud?
4)Do you have sufficient application security skills and resources?
5)Are your application security tools meeting your needs?
6)Are security and quality built into your software development life cycle tools and processes?
The Synopsys AppSec mission is to help our customers build secure, high-quality software faster by providing the solutions they need to effectively
manage their AppSec efforts.
1) Confidentiality: achieved by encryption.
2) Integrity: achieved by using hash functions and MACs to ensure that data has not been tampered with.
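The integrity point can be shown with a short sketch using an HMAC (the key and message below are illustrative): the receiver recomputes the tag and rejects any message whose tag no longer matches.

```python
import hashlib
import hmac

# Shared secret key; in practice this would be provisioned securely.
KEY = b"example-shared-secret"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag so tampering can be detected."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison avoids leaking tag information via timing."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"amount=100")
print(verify(b"amount=100", tag))   # True: the untampered message verifies
print(verify(b"amount=999", tag))   # False: a tampered message fails
```

A plain hash alone would not be enough here: an attacker who can modify the message could also recompute the hash, which is why a keyed MAC is used for integrity.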
Security in SDLC
Devsecops
The objective of DevSecOps is not to eliminate all risks prior to deployment but to establish a proactive approach to security. This
involves identifying potential risks beforehand, prioritizing them based on severity, addressing the most critical issues, and
implementing measures to prevent similar vulnerabilities from recurring. Additionally, DevSecOps emphasizes continuous
monitoring of the threat landscape so that emerging risks are identified and addressed promptly.
Technical Presales
Coverity (SAST)
The 2022 OSSRA report found that over 97% of codebases contained open source.
Proprietary code should focus on creating differentiated value, without reinventing foundational elements.
You might use third-party libraries in open source form, but it's up to you to ensure they meet your compliance targets.
Make sure that you do no harm with your code.
Static analysis looks at the source code. Its core role is to identify code patterns that represent quality issues
or security weaknesses. Common examples include SQL injection, buffer overflows, or even memory leaks. While this is
best suited to proprietary code, it's important to recognize that it can't be run against code that isn't available in
source form, such as precompiled open source and third-party libraries.
In Coverity:
- Proactive SAST finds issues in your application code as you type in your IDE (immediate).
- Change-set SAST finds and fixes common security issues before changes are committed to source code
repositories (takes minutes).
- Iterative SAST analyzes code once it is merged to the main branch via CI/CD (takes minutes).
- Comprehensive SAST thoroughly scans all your code so you can make informed decisions prior to release.
With all of these tools, it's not uncommon for development teams to complain about the impact security tooling and
assessments have on delivery schedules. Those teams are pressured to deliver more, faster, all without sacrificing
quality or security.
The problem is that most security tools are not run at a cadence that matches the code being edited or changed at each
phase. For example, you want comprehensive static code analysis running as part of a CI
workflow, but that doesn't mean it should run on every commit.
That can be done as part of a nightly test cycle, where there is time to complete a deeper analysis that spans all the
commits the application experienced that day. As you move left towards the developer, iterative or change-set-based
analysis becomes more important.
Such analysis focuses attention on the code that changed as part of a merge commit and is much faster than the
deeper comprehensive analysis performed nightly.
Faster, of course, doesn't need to come with a lack of coverage, but for merge commits a shallower test makes sense,
given the goal is mostly to ensure the current change set didn't introduce problems in the code branch.
Rounding out the progression towards the developer is proactive analysis performed within the developer's IDE.
This can be thought of as the equivalent of a spell checker for a Word document. By identifying security or quality
issues as the code is being written, the developer can adjust their implementation with minimal impact on the delivery
schedule, because they are currently thinking about the specifics of the feature they're working on.
When combined, the proactive, iterative, and comprehensive flow reduces the risk associated with both the feature and merging
the feature commits into the main code branch, all without impacting the software assurance efforts of a comprehensive
analysis.
This keeps the DevOps and DevSec teams in harmony.
2) Integrated
- It integrates with
-- popular IDEs
-- source code control and bug tracking systems
-- CI/CD frameworks
Performant
- Analyzes a full project with or without a build
Technical capabilities of Coverity:
1) Minimal false positives
2) Broad framework and API support
3) Configurable to match the application risk profile
4) Policy-based scans ensure compliance
5) Tracks progress with trend reports
6) Integrates with IDEs and resolves issues in real time
7) Deploy anywhere: SaaS for cloud-based applications, on-premises for highly regulated applications
8) Fast, comprehensive analysis during development
Coverity also supports auto-assignment. When it finds a new defect, it checks the software configuration management system and
automatically assigns the defect to the developer responsible, who then gets notified by e-mail. With Coverity, your developers
don't need to be security experts. By identifying issues and getting actionable remediation advice as they code, they'll quickly learn
how to fix software flaws and security vulnerabilities early in the SDLC, when it's least costly.
Fast desktop analysis enables developers to find, fix, and track software flaws quickly, all within their IDEs and issue trackers. It takes
only a few seconds to get accurate results, with no tradeoff between performance and accuracy.
And if you're not able to do a build, the file system capture feature can be used to scan and analyze uncompiled code.
Rather than simply committing code and waiting for another team to test it, Coverity integrated
via the Code Sight plugin allows them to analyze or scan code as they write it. This enables
developers to fix issues before they impact others
Coverity and Code Sight were specifically designed to address these concerns. Code Sight and
Coverity integrate with the most popular IDEs, which doesn't require developers to learn complex new tools or make
major changes to workflows. Findings are actionable and accurate, which helps ensure teams focus on fixing defects rather
than triaging tooling integrations or configurations. The ultimate goal is for developers to identify bugs as they
write features, rather than waiting for a centralized team to flag a bug that was identifiable during feature development.
Coverity:
Reduces security exposure in your applications
Reduces project risks by finding problems earlier
Lowers software development costs
Coverity Connect enables grouping and organizing defects by code bases, releases, and branches by the
order they were discovered.
Grouping defects by code base, release, and branch makes it easier to identify and prioritize vulnerabilities. For example,
defects in a specific release nearing deployment might be prioritized higher than those in a development branch.
Also,
developers and security teams can concentrate on analyzing vulnerabilities in a particular section of the code.
Also,
Seeing the order of defect discovery helps understand how vulnerabilities were introduced over time and track their
remediation progress.
Coverity also allows us to go back to results from a specific state of the code, analyzed at a specific time. How does it work?
Coverity Connect provides three container types:
Project
Stream
Snapshot
Project: focuses on a specific release. One or multiple streams can be grouped into a project.
Stream: a branch in your version control system can be represented as a stream.
Snapshot: within each stream there are one or several snapshots, each corresponding to a historical state of the code
captured at the time of analysis, which could be from last night or several nights ago.
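The project/stream/snapshot hierarchy can be sketched as a toy data model. This is purely an illustration of the relationships, not Coverity's actual API or schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Snapshot:
    # A historical state of the code, captured at analysis time.
    taken_at: datetime
    defect_count: int

@dataclass
class Stream:
    # Typically maps to a branch in version control.
    name: str
    snapshots: list = field(default_factory=list)

@dataclass
class Project:
    # Groups one or more streams for a specific release.
    name: str
    streams: list = field(default_factory=list)

release = Project("webapp-2.0")          # hypothetical project name
main = Stream("main")
main.snapshots.append(Snapshot(datetime(2024, 1, 10), defect_count=42))
main.snapshots.append(Snapshot(datetime(2024, 1, 11), defect_count=39))
release.streams.append(main)

# "Going back" to results from a specific state means picking a snapshot.
latest = max(main.snapshots, key=lambda s: s.taken_at)
print(latest.defect_count)  # 39
```

The key point the model captures: snapshots are immutable points in time, so results from last night's analysis remain queryable even after the code moves on.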
The Coverity MISRA report can gauge MISRA compliance and display the associated issues in your
codebase.
Coverity Security Report quantifies and displays the security issues in your codebase. The report
benefits compliance departments and developers.
Coverity Integrity Report provides a standard way to objectively measure the integrity of your own
software as well as software you integrate from suppliers and the open source community.
Coverity CVSS Report uses the CVSS standard to evaluate security risks in your codebase and
allows an auditor to adjust the report's findings in Coverity Connect. This report can be used internally
and externally to your organization.
This is an important capability that gives Synopsys a competitive advantage over all of our competitors. Something to
certainly highlight to our customers and prospects.
Coverity highlights issues like SQL injection, where user input is used in a database query.
It tracks this input through various function calls (procedure calls) to the point it's used (the "sink").
Coverity provides quick access to information about the identified issue.
Clicking on the issue description leads directly to relevant documentation within Coverity.
This documentation explains the issue, its code manifestations, and additional examples.
Developers can quickly understand the issue, its cause, and how it affects the code.
This reduces the time spent analyzing the problem and allows for faster resolution.
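The source-to-sink flow described above can be illustrated with a minimal, self-contained example (an in-memory sqlite3 database; this is an illustration of the defect class, not Coverity output): user input flows from a "source" through string formatting into the query "sink", and the remediation binds it as a parameter instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(user_input: str):
    # DEFECT: user input (the source) is concatenated straight into the
    # query text (the sink) -- the path a taint-tracking SAST tool flags.
    query = "SELECT name FROM users WHERE name = '%s'" % user_input
    return conn.execute(query).fetchall()

def find_user_safe(user_input: str):
    # Remediation: a parameterized query keeps the input out of the SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)
    ).fetchall()

# The classic payload returns every row through the unsafe path...
print(find_user_unsafe("' OR '1'='1"))  # [('alice',)]
# ...but matches nothing when passed as a bound parameter.
print(find_user_safe("' OR '1'='1"))    # []
```

Tracing exactly this kind of path from the input source through intermediate calls to the sink is what lets the tool report the precise line to fix.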
Creating a description for the null pointer dereference.
Action: What action should be taken — fix required, fix submitted, modeling required, or ignore?
Legacy: When a new team starts using SAST, they might find many existing issues. To focus on preventing new problems, they can mark these
older, less critical issues as 'legacy' and address them later.
Finally, you can also 'export' the issue to Jira, i.e., push the issue to Jira.
Coverity provides a visual representation of the Null pointer dereference issue, making it easier for developers to understand the
problem and its connection to other pointers in the code.
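Python has no raw pointers, but dereferencing None is the same class of defect. A minimal sketch of the pattern and its typical remediation (the config/lookup names are made up for illustration):

```python
def lookup(config: dict, key: str):
    # .get() returns None when the key is missing -- the "null" source.
    return config.get(key)

def broken(config: dict) -> int:
    value = lookup(config, "timeout")
    # DEFECT: 'value' may be None here; using it without a check is the
    # Python analogue of a null pointer dereference.
    return int(value)

def fixed(config: dict) -> int:
    value = lookup(config, "timeout")
    # Remediation: guard the null path before use.
    return int(value) if value is not None else 30

try:
    broken({})
except TypeError as exc:
    print("defect triggered:", exc)

print(fixed({}))                  # falls back to the default: 30
print(fixed({"timeout": "10"}))   # 10
```

A static analyzer reports this by showing the path: the call that can return None, the branch where no check happens, and the line where the value is finally used.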
After understanding the issue, developers can categorize it in various ways:
1) Bug vs. Non-Bug: This helps distinguish genuine errors from potential false positives.
2) Severity Level: Developers can assign a severity level based on the potential impact of the vulnerability (e.g., Critical,
High, Medium, Low).
Role-Based Access Control (RBAC): Some organizations require stricter control over marking issues as false
positives. RBAC allows setting permissions so only authorized individuals can perform this action.
Proposal and Comment: Coverity allows users to propose an issue as a false positive and add comments to justify
their reasoning. This creates an audit trail for future reference.
Identified issues can be exported to JIRA, creating a new ticket for tracking and resolution.
The integration pushes information from Coverity to JIRA but doesn't pull updates back
from JIRA.
Code Dx: For customers needing a two-way connection with JIRA, Coverity offers
Code Dx, a separate product that allows viewing and manipulating JIRA issue
statuses directly within Coverity.
1. Technical Role
Scanning (Coverity scans code for vulnerabilities, defects, and coding standards violations)
Analysis (Coverity analyzes code to identify security vulnerabilities, performance issues, and code quality problems)
Workflow Integrations (Coverity integrates with CI/CD pipelines, issue tracking systems, and code repositories)
Scoping (Coverity allows users to define specific code areas for analysis)
Project Configuration (Coverity enables setting up projects, teams, and scan configurations)
Process and RBAC (Coverity supports role-based access control and workflow customization)
Deliverables Management (Coverity generates reports and metrics for project management)
Support and Analysis (Coverity offers support and troubleshooting for security issues)
Security Audits (Coverity can be used to conduct security audits and assessments)
Policies (Coverity can enforce security coding standards and best practices)
4. Compliance Role
Compliance Standards (Coverity checks code against industry standards like CWE, OWASP, PCI DSS)
5. Developer Role
Accessing Results (Developers can view and analyze scan results within Coverity)
When a software bug can be exploited for malicious activities, it's called a software vulnerability, and these can be
categorized as either known or unknown.
Unknown vulnerabilities, also known as zero-days, are dangerous because they allow attackers to operate unnoticed for an
extended period of time.
And reactive security solutions are useless against zero day attacks.
What if you could test software for unknowns? You can with fuzz testing.
Fuzz testing sends manipulated input data until some malformed input causes the software to crash.
You could perform fuzz testing randomly and miss vulnerabilities, or with a template and still miss
vulnerabilities. Or you can fuzz intelligently, manipulating only the input that matters most to you.
Introducing generational, intelligent fuzz testing from Synopsys. Our fuzzing solution provides prebuilt test
suites that ease the burden of manual black-box test creation, and our fuzz testing solution runs on any VM or Windows
or Linux computer to produce a detailed remediation package that helps identify and fix software issues fast. Because
you don't know what you don't know.
Defensics uses generational fuzzing, which means it understands everything about how the input should look. It
systematically breaks the rules for each field of each message to create very high-quality test cases, capable of
reaching many code paths and driving the target into new states before delivering anomalies.
Types of Fuzzing:
Mutation Based
Generation Fuzzing
Random Fuzzing
The general idea is that an interface is initially supplied with malformed data that the
programmers did not consider during development, such that deviations from the specified
system behavior are caused. It is essential for the malformed data to be semi-valid,
meaning that its construction is valid enough not to be rejected immediately by the interface,
but invalid enough to cause unexpected effects on the underlying software layers.
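The semi-valid idea can be sketched with a minimal mutation-based fuzzer: start from a known-valid message and flip a few bytes, so most cases still pass the interface's superficial checks but some break the layers beneath. The toy target parser below is an assumption standing in for a real system under test, and the fixed seed makes every run reproducible.

```python
import random

def mutate(valid: bytes, n_flips: int, rng: random.Random) -> bytes:
    # Corrupt a few bytes of a known-valid input: close enough to valid to
    # get past the first checks, broken enough to hit unexpected paths.
    data = bytearray(valid)
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)  # xor with nonzero -> byte changes
    return bytes(data)

def target(msg: bytes) -> None:
    # Toy "parser" standing in for the system under test (an assumption;
    # a real target would be a network service or library call).
    header, _, body = msg.partition(b":")
    if header != b"LEN":
        return                      # rejected outright: not even semi-valid
    length = int(body or b"0")      # crashes on non-numeric bodies
    assert length >= 0              # crashes on negative lengths

# A fixed seed makes every crashing input reproducible, which matters
# for triage and for verifying a fix later.
rng = random.Random(1234)
crashes = 0
for _ in range(1000):
    try:
        target(mutate(b"LEN:128", 2, rng))
    except (ValueError, AssertionError):
        crashes += 1
print("crashing inputs found:", crashes)
```

Generational fuzzing improves on this sketch by building inputs from a model of the protocol rather than mutating samples, so each field's rules can be broken systematically instead of at random.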
What makes a good fuzzer?
1) Efficient:
Identifying critical code paths that could lead to vulnerabilities.
Generating test cases that target these paths effectively.
2) Intelligent:
This allows developers to easily incorporate fuzz testing into their routine and identify vulnerabilities early in the development
cycle.
4) Repeatability:
If a fuzz test identifies a vulnerability but doesn't produce the same results consistently, it becomes difficult to verify if the issue is
real or a fluke.
Once a developer fixes a vulnerability identified through fuzz testing, they need to re-run the test to confirm the fix actually
addressed the issue. Unstable results make it hard to be certain.
Integrating fuzz testing into the development lifecycle helps identify and fix vulnerabilities earlier, leading to a more secure
product.
Fuzzing prototypes during development is crucial for embedded systems. Early identification of vulnerabilities can prevent costly
hardware changes later.
Demo
Fuzzing involves sending invalid or malformed input to the application server to make it crash.
2) Altering network packets (like HTTP requests: constructing malformed URLs, altering request headers, modifying request bodies, testing different HTTP methods
like PUT and DELETE)
3) File uploads such as documents and images (uploading files with unsupported file extensions, injecting malicious code within file content, exploiting directory traversal
vulnerabilities, uploading extremely large files to test handling limits)
4) API endpoints:
API fuzzing targets API endpoints.
An API endpoint is the URL through which the data carried by the API is exposed.
5) Protocol interactions: manipulating data that is transferred using a particular protocol, for example:
d) Rate limiting and DoS testing: Overloading the API with requests to assess its resilience.
e) Authentication and authorization testing: Bypassing security mechanisms or exploiting vulnerabilities in authentication and authorization processes
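Several of the categories above (malformed methods, paths, and headers) can be combined into a small generational corpus builder. The endpoint path and header values are hypothetical, and nothing is sent over the network — the sketch only constructs the raw request cases.

```python
import itertools

# Hypothetical endpoint used only to build request lines; nothing is sent.
ENDPOINT = "/api/v1/orders"

METHODS = ["GET", "PUT", "DELETE", "TRACE", "BOGUS"]
BAD_PATHS = [
    ENDPOINT,
    ENDPOINT + "/../admin",        # directory traversal attempt
    ENDPOINT + "/%00",             # embedded null byte
    ENDPOINT + "?" + "a" * 4096,   # oversized query string
]
BAD_HEADERS = [
    "Content-Length: -1",              # negative length
    "Content-Length: 999999999999",    # absurd length
    "Host: " + "x" * 8192,             # oversized header value
]

def build_requests():
    # Cartesian product of the malformed pieces yields a small generational
    # corpus covering method, path, and header anomalies.
    for method, path, header in itertools.product(METHODS, BAD_PATHS, BAD_HEADERS):
        yield f"{method} {path} HTTP/1.1\r\n{header}\r\n\r\n"

corpus = list(build_requests())
print(len(corpus))  # 5 methods x 4 paths x 3 headers = 60 test cases
```

A real fuzzer would deliver each case to the target and apply instrumentation to decide pass/fail; here the point is only how anomaly dimensions multiply into a test plan.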
Practical part:
Step 1) Load the test suite, like the HTTP server test suite.
Step 3) Run an interoperability test to ensure the basic configuration is properly set and the connection to the target is working.
Interoperability means the ability of your test suite to communicate and interact correctly with the system you're testing (the
target).
This is a crucial first step before you start the actual fuzzing process.
Run Control: This section allows you to manage the duration and behavior of your test run. You can set time limits, control how test
cases are repeated, and define timeouts for message responses. This is useful for optimizing test efficiency and handling slow or
unresponsive targets.
Logging: Here, you can control the amount of information Defensics records during the test run. By default, it logs successful and
failed test cases. You can adjust this to include more or less detail based on your debugging needs.
Capture: This option enables you to record network traffic during the test run. This captured data can be invaluable for analyzing
issues and understanding the behavior of your target system.
Network: These settings allow you to override the network configuration used by the test suite. This is useful for testing in specific
network environments or when troubleshooting connectivity problems.
CVSS (Common Vulnerability Scoring System): This section allows you to configure how Defensics calculates vulnerability scores. You
can adjust the scoring parameters to align with your organization's specific requirements.
Instrumentation methods:
Valid Case Instrumentation: This is the most basic method. It compares the response from the target to the expected response for a
valid test case. If there's a mismatch, a failure is indicated.
Connection Instrumentation: This monitors the network connection between Defensics and the target. If the connection is lost or
there's an error, it's considered a failure.
Syslog Instrumentation: This method analyzes system logs on the target for error messages or unexpected events that might
indicate a failure.
SNMP Trap Instrumentation: If the target supports SNMP, this method can monitor for specific SNMP traps that signal issues.
SNMP Value Instrumentation: Similar to SNMP Trap, but instead of monitoring for specific events, this method checks for changes in
specific SNMP values.
External Instrumentation: This allows you to use custom scripts or tools to determine if a failure has occurred. This provides
maximum flexibility but requires additional development effort.
Agent Instrumentation: This involves installing an agent on the target system to collect detailed information about its state and
performance. This can be helpful for complex systems but might introduce overhead.
step 5) Select test cases from within the test suite we chose to run on the target
Default: This mode selects a basic set of test cases for initial testing.
Full: This mode includes a significantly larger set of test cases for more comprehensive coverage.
Unlimited: This mode generates an almost infinite number of test cases by continuously mutating existing ones.
Limited: This mode allows you to specify a percentage of test cases to run, useful for time-constrained testing
Step 6) Run the test. Monitor the test results (pass/fail), including the anomalies that caused the failures.
Step 7) Remediation
Defensics remediation collects Defensics test-run logs and test-suite settings into a package called the remediation package.
---
• When testing an application like "Opium PB2", what will be the target?
Ans: The network endpoint, i.e., a server or database — usually the server hosting the application.
Development teams need to know that fuzzing won’t slow them down or get in their way, so they will be especially interested in
automating fuzzing and integrating it into existing processes and systems.
Fuzzing is dynamic testing, but it can be integrated into application development as soon as an executable build or module is
available.
Testing teams are tasked with managing the quality and security of an application. When application security teams recognize the
value of fuzzing in managing unknown vulnerabilities, they will be keen to help development teams automate and integrate fuzzing
into the application development life cycle.
POLARIS SKIPPED
SEEKER - IAST
Often, tools create too much noise or are too slow. Some tools are great at generating findings, but those findings are either too
superficial or too high-level to recognize the real problem.
SYNOPSYS SEEKER IAST CAN PERFORM TESTING BOTH DURING DEVELOPMENT AND WHEN THE APPLICATION IS
RUNNING.
Note: The word instrumentation was also used in ‘Defensics demo step 4’
"Seeker analyzes custom code, HTTP traffic, libraries and frameworks, back-end connections, and runtime behavior; the result is a
comprehensive view of vulnerabilities and broad runtime test coverage with no extra steps, saving you time and effort." Please
explain.
Ans:
Custom code: Seeker inspects the application's custom code for vulnerabilities, looking for weaknesses in the logic and implementation.
HTTP traffic: It analyzes the network communication between the application and clients, identifying potential vulnerabilities like injection
attacks, cross-site scripting (XSS), and others.
Libraries and frameworks: Seeker examines third-party components used in the application to ensure they are not exploited.
Back-end connections: It analyzes the application's interactions with databases, APIs, and other backend systems to detect vulnerabilities like
SQL injection or insecure data exposure.
Runtime behavior: Seeker monitors the application's dynamic behavior to uncover vulnerabilities that might not be apparent in static code
analysis.
Seeker identifies and monitors sensitive data throughout the application lifecycle, alerting teams when it's mishandled or stored
insecurely.
Seeker helps organizations adhere to security standards like OWASP Top 10, CWE/SANS Top 25, PCI DSS, and GDPR by providing
relevant reports.
Seeker monitors how sensitive data moves through your application, including how it's handled in URLs, logs, user interfaces (UI),
databases (DB), and other components. This visibility helps identify potential exposure points.
By tracking sensitive data, Seeker can detect potential data leakage scenarios. For example, if sensitive data appears in logs without
proper encryption or is exposed through a vulnerable endpoint, Seeker can flag it as a potential issue.
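The kind of log-leakage check described can be illustrated with a toy detector (this is not Seeker's implementation): scan log lines for unmasked card-number-like values, using a simplified pattern plus a Luhn check to cut false positives.

```python
import re

# Simplified pattern for 13-16 digit card-like numbers; real detectors
# track data flow rather than just pattern-matching output.
CARD_RE = re.compile(r"\b\d{13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: filters random digit runs from real card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_leaks(log_lines):
    leaks = []
    for line in log_lines:
        for match in CARD_RE.findall(line):
            if luhn_ok(match):
                leaks.append((line, match))
    return leaks

logs = [
    "user=alice action=checkout card=4111111111111111",  # unmasked PAN
    "user=bob action=checkout card=************1111",    # properly masked
]
print(find_leaks(logs))  # only the unmasked line is flagged
```

The advantage of an IAST agent over this after-the-fact scan is that it sees the sensitive value at the moment it enters the log call, so it can report the exact code location rather than just the offending log line.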
Seeker provides precise details about vulnerabilities, including code snippets and data flow analysis, to aid developers in
understanding the issue.
The tool offers actionable advice and even sample code to expedite the fixing process.
Build breaking: For organizations with strict security requirements, Seeker can prevent builds from progressing if security criteria
are not met.
Developers and teams receive timely notifications about new vulnerabilities through Slack or email.
Seeker integrates smoothly with popular development and CI/CD tools like Jira and Jenkins
"Active verification" is similar to having a virtual team of pen-testers working 24x7, but more accurate and faster. Seeker active verification
automatically retests detected vulnerabilities to determine their exploitability. It is scalable enough to process hundreds of
thousands of web requests. Security teams can filter the issues to focus attention on verified issues that pose the most significant
security risk to the application and its data.
Seeker agents can be easily instrumented to multiple target application endpoints or nodes, and they will track every interaction
between the application services under test autonomously in the background while the teams carry out their normal development
and testing work.
An endpoint is any point where a user or another system can interact with your application.
Endpoints can be internal facing or external facing.
External facing: Webpages (homepage, login page, product page)
Internal facing: database connectors (e.g., a MySQL connector), external API clients (payment gateways, social media
platforms, etc.), and file system interactions (components that read or write external storage, such as cloud storage)
Vulnerability verification (comparison row): Seeker — active verification; alternatives — no verification or limited verification.
ONE OF THE KEY CHALLENGES WITH MANY APPSEC TOOLS IS THAT THEIR FINDINGS AREN'T READILY ACTIONABLE BY THE DEVELOPMENT
TEAMS THAT NEED TO FIX THE ISSUES.
SEEKER SOLVES THIS PROBLEM BY PROVIDING DETAILED INFORMATION THAT PINPOINTS THE PRECISE URL WHERE THE
VULNERABILITY OCCURRED ALONG WITH ANY PARAMETERS.
A DETAILED CALL GRAPH SHOWS HOW THE USER-SUPPLIED DATA WAS MANIPULATED TO CREATE THE OBSERVED ISSUE. THIS
ENABLES DEVELOPERS TO QUICKLY UNDERSTAND THE PROBLEM WITHOUT NEEDING TO BUILD REPRO CASES.
Seeker delivers one thing that development teams never have quite enough of – and that’s time.
Traditional AppSec Approach
Before the advent of tools like Seeker, application security testing was a fragmented process involving multiple stages:
1)Developer IDEs: Security checks were often limited to static analysis tools within the developer's environment.
2)Build Systems: Some security checks might be integrated into the build process, but these were usually basic.
3)Post-Deployment: This is where the bulk of security testing occurred, including dynamic application security testing (DAST) and
penetration testing.
This approach was time-consuming, resource-intensive, and often resulted in vulnerabilities being discovered late in the
development cycle.
Seeker's Approach
1 )Parallel Testing: It performs security tests concurrently with other development and testing activities.
2) Reduced Post-Deployment Testing: By identifying issues earlier in the development lifecycle, Seeker decreases the need for
extensive post-deployment testing.
3) Increased Test Coverage: Seeker's comprehensive approach often uncovers vulnerabilities that might be missed by traditional
methods.
Key Benefits:
1) Faster Time-to-Market: By shifting security testing left, Seeker helps accelerate the software development process.
3) Improved Security Posture: Earlier detection of vulnerabilities leads to a more secure application.
Seeker optimizes the security testing process by integrating seamlessly into the development workflow and providing
continuous security feedback.
Seeker Server: This component can be hosted on-premises or in a cloud environment. It receives data from the agent, processes it,
and provides a user interface for security teams to view results.
This is the main component of Seeker. It provides the Seeker user interface, collects vulnerabilities (removing duplicates), and stores
the vulnerabilities generated by the agents. The server provides Web APIs and hosts groups and projects.
Seeker leverages user interactions as a catalyst for testing. When a user interacts with the application, the agent captures data
about the request, response, and underlying application behavior. Seeker then automatically performs security checks based on this
information.
From a testing perspective, functional and ad hoc tests are perfect triggers for Seeker instrumentation. Please explain.
Functional and ad-hoc testing are activities performed by testers or developers to test the application.
In this case, Seeker observes the application's behavior during these tests, collects data, and analyzes it for security vulnerabilities.
Seeker's role is to observe and analyze the application's behavior during these tests, not to execute the tests themselves.
Customer has no DAST solution: When customers don’t have an automated DAST solution, they rely upon penetration testing to
find issues. This approach can be expensive, so, it’s unlikely to be used with each release and for every application. When teams
follow a DevOps workflow, they likely release multiple times per month, which pen-testing was never intended for. Seeker
complements the security team's pen-testing efforts and can also improve the overall security of the applications with each release.
This allows pen-testing to focus on compliance requirements and mission-critical applications.
•IAST solutions: These have been around for a while, but legacy solutions, like web DAST, have a history of false positives. By
embedding Seeker into the development workflow, its active verification can reduce false positives and provide actionable findings.
This increased accuracy builds confidence within the development teams leading to greater Seeker adoption.
• A supplement to DAST: Teams looking to integrate some form of dynamic testing into their DevOps pipelines will find Seeker
particularly attractive. Due to its instrumentation, Seeker can be added to any test environment with a supported language and
immediately complement the current test workflows. Testing teams don’t need to change test cases, test harnesses, or drivers.
Simply enable the Seeker agent and run the existing test suites. In some cases, Seeker supplements some DAST functions by
reducing the incidence of false positives.
Seeker is an interactive application security testing (IAST) solution that enables development, DevOps, and QA teams to perform
security testing concurrently with the automated testing of web applications.
standard architecture. Its deployment can be fully automated, Docker-based, or manual.
Seeker Demo
Seeker is designed to be run in non-production environments, though it can be used in a production environment if active
verification is disabled
Seeker instruments the target endpoint and server, capturing all of the app's runtime behavior during testing. No additional
scan is required, and no manual intervention or configuration is needed, unlike other AST tools that scan at the code level or
at the build-integration level.
Seeker is meant to be run interactively. It runs concurrently, in the background while the web server or microservices test run is
ongoing.
If a customer adds Seeker to a system that automatically tests specific parts of their website for defects, Seeker will
automatically and continuously run these additional security checks alongside those tests.
Seeker – Microservices
Seeker's patented active verification engine takes each detected high-severity finding and automatically replays it on its own
to confirm the validity of the finding.
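A minimal sketch of this active-verification idea, using a hypothetical replay helper rather than Seeker's real engine (the fake app here simply reflects input so the replay succeeds):

```python
# Simplified "active verification": replay a suspected injection point
# with a unique benign probe and confirm the behavior before reporting.
PROBE = "seeker-probe-12345"

def send_request(url, params):
    # Stand-in for a real HTTP call; this fake app reflects input.
    return {"status": 200, "body": f"<html>{params.get('q', '')}</html>"}

def verify_finding(finding):
    """Replay the request with a unique probe; the finding is confirmed
    only if the probe comes back unencoded in the response."""
    response = send_request(finding["url"], {finding["param"]: PROBE})
    return PROBE in response["body"]

suspected = {"url": "/search", "param": "q", "type": "reflected-xss"}
print("confirmed" if verify_finding(suspected) else "likely false positive")
```

Because the finding is only reported after the replay confirms it, this style of check is what drives the low false-positive rates claimed later in these notes.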
Seeker verifies how data flows through the application, ensuring that the entire system complies with security standards such
as PCI DSS.
The data flow map shows what is connected to what and what the components exchange.
By visualizing the data flow, Seeker can pinpoint vulnerabilities that arise from interactions between microservices, such as
data exposure, unauthorized access, and injection attacks.
As you can see, we found two vulnerable endpoints. We also found 142 endpoints that haven't been tested, and Seeker knows
about them. Over time, it will run synthetic tests: tests that Seeker can run even if you don't have a test to run against the
endpoint.
Seeker customers have reported finding vulnerabilities they didn't even know to look for; they couldn't have found them with a
DAST scanner or with static analysis.
Second-order cross-site scripting is very tricky to catch.
But Seeker can catch this sort of thing because it watches all of the web data flows between components.
Outbound endpoints are external resources accessed from an application, such as APIs, databases, or
message queues.
Inside the application, Seeker can look at the actual endpoints to tell you what you tested versus what
actually exists, because these endpoints are collected by the agents whether they are vulnerable or not.
The top banner displays the vulnerability metrics of outbound endpoints: the total number of
endpoints, the number of vulnerable endpoints, and public and unknown APIs.
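The tested-versus-discovered arithmetic behind these metrics is simple set difference; a toy example matching the numbers quoted earlier in these notes (the endpoint names are invented):

```python
# Agents report every endpoint they observe; tests exercise only some.
discovered = {f"/api/v1/resource/{i}" for i in range(144)}
tested = {"/api/v1/resource/0", "/api/v1/resource/1"}
vulnerable = {"/api/v1/resource/0", "/api/v1/resource/1"}

# Coverage gap = everything the agents saw minus everything tested.
untested = discovered - tested
print(f"{len(vulnerable)} vulnerable, {len(untested)} untested of {len(discovered)}")
```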
Often developers don't realize that their applications reach out to many outbound endpoints.
In this view, you can discover where your web application is connecting out to, and if
you don't think that's OK, you can remediate it.
Competitive Differentiators
Seeker replays potential malicious payloads to simulate attacks, reducing false positives to less than 5%,
while competitors' unverified findings can carry significant false-positive rates.
Comprehensive Capture: By being embedded within the application, Seeker can observe all actions, data flows, and interactions
that occur during testing. This provides a complete picture of the application's runtime behavior
Why Seeker:
Security teams need actionable results, not a list of findings to be verified later.
Business Drivers
• Automatically identify security and sensitive data issues within your running web(-services) applications with minimal effort
• Perform interactive application security testing focused on the biggest attack surface – web applications and services
Questions to ask customers
Most SAST solutions focus on benefits provided to security teams,
which leads to discussions around false positives, challenges interpreting findings, and impact.
Dynamic Analysis, or DAST, is used to test running applications. It's very commonly used by operations teams when
attempting to understand how exploitable a deployment configuration might be.
Software Composition Analysis looks for unpatched vulnerabilities in open source code. Those unpatched vulnerabilities
represent CVEs (Common Vulnerabilities and Exposures) that were disclosed against an open source component but whose
patch hasn't been applied. Because open source components don't have a single source or origin point, each distribution
can carry a potentially different implementation.
WhiteHat DAST
WhiteHat DAST is a powerful solution for verifying the security of applications in production and identifying security issues
before they can be exploited by threat actors.
Unlike other DAST solutions that can make unwanted changes to your data or applications, White Hat Dynamic is 100%
production safe and will not corrupt your application or any underlying data.
Other DAST solutions can overwhelm teams with a lot of low-quality results. White Hat Dynamic combines advanced
automated analysis with expert verification by Synopsys security analysts to deliver actionable findings with near-zero
false positives, saving triage time and effort and helping teams focus remediation activities on real exploitable
vulnerabilities.
Continuous and on-demand risk assessments: WhiteHat DAST offers true continuous analysis, constantly scanning
customers' websites as they evolve, with automatic detection and analysis of code changes to web applications, alerts for newly
discovered vulnerabilities, the ability to retest a vulnerability without having to test from the beginning, and always-on risk
assessment.
Real-time Analysis: When changes are detected, the system immediately analyzes the impact on the application's security posture.
For each vulnerability, we show its type and ID, where within the application it was found, and how long it has been open. We also
provide the CVSS score.
The first rating is impact, which is how bad it might be for the business if this vulnerability were exploited. The next is likelihood,
which is the perceived degree of difficulty of exploiting this vulnerability.
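The exact WhiteHat scoring formula isn't given in these notes, so the following is only a generic risk-matrix sketch of how impact and likelihood might combine into a single rating:

```python
# Generic risk-matrix illustration (not WhiteHat's actual formula):
# map each rating to a number, multiply, and bucket the result.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(impact, likelihood):
    score = LEVELS[impact] * LEVELS[likelihood]
    if score >= 6:
        return "critical"
    if score >= 3:
        return "elevated"
    return "low"

# High business impact, moderate exploitability.
print(risk_rating("high", "medium"))
```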
It's important to know that every time we scan the application, we retest the known vulnerabilities automatically; we don't create a
new list of vulnerabilities on every scan. Those vulnerabilities stay open as long as the scanner sees them.
Say you're a developer working on a high-priority issue: you think you've got a fix and you want to push it to production. You can
come in here, click the retest button, and WhiteHat will pinpoint that area of the application, rerun the test algorithms, and give you
immediate feedback.
Software Risk Manager brings together policy, orchestration, correlation, and built-in SAST & SCA engines to integrate security
activities intelligently and consistently
ASPM stands for Application Security Posture Management. It's a comprehensive approach to managing and improving
the security of an organization's applications.
Synopsys Software Risk Manager is the only ASPM tool with market-leading SCA and SAST engines built in.
SRM enables you to set up policy-driven workflows to orchestrate AST tools like Coverity and
Black Duck, prioritize issues, and monitor compliance across your software assets.
Centralized control: SRM allows you to manage both manual and automated security tests from a single platform.
Tool integration: It can integrate with Synopsys tools and other third-party security tools, providing a unified view of security
activities.
Data consolidation: SRM gathers security data from various sources, including on-premises and cloud-based systems.
Data standardization: It standardizes the data format to ensure consistency and comparability.
Comprehensive overview: SRM provides a holistic view of security risks across all managed applications.
Compliance tracking: It helps you monitor compliance with industry regulations and standards.
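The consolidation and standardization steps above can be sketched as follows; the tool-specific field names here are invented for illustration, not the real tool output formats:

```python
# Map findings from different tools into one common schema so they
# can be compared, scored, and reported consistently.
def normalize(tool, raw):
    if tool == "coverity":
        return {"tool": tool, "cwe": raw["cweId"], "file": raw["path"],
                "severity": raw["impact"].lower()}
    if tool == "blackduck":
        return {"tool": tool, "cwe": raw.get("cwe"), "file": raw["component"],
                "severity": raw["riskLevel"].lower()}
    raise ValueError(f"unsupported tool: {tool}")

findings = [
    normalize("coverity", {"cweId": 89, "path": "app/db.py", "impact": "High"}),
    normalize("blackduck", {"cwe": 79, "component": "lodash@4.17.20",
                            "riskLevel": "Medium"}),
]
print(sorted(f["severity"] for f in findings))
```

Once everything shares one schema, the downstream steps (correlation, prioritization, compliance mapping) can operate on a single list regardless of which tool produced each finding.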
Software Risk Manager allows you to do the following with your AppSec data:
Correlate results
Prioritize vulnerabilities
Track remediation
Centralize risk visibility
Additional Capabilities
Automatically identifies and prioritizes critical issues based on a uniform assessment of risk.
Delivers high-priority vulnerabilities to developers directly, including links to the exact line of code via bidirectional sync with issue tracking
systems
Quickly and accurately detects vulnerabilities in source code and open source via built-in SAST and SCA engines, with preset rules to achieve
required testing workflows with minimal setup
Provides contextually relevant remediation guidance to developers based on language, vulnerability type, and source, and recommends
remediation actions based on historical trends
Displays security activities at the branch level, so developers can test fixes efficiently and reduce the frequency of build breaks
Centrally orchestrates scans for Synopsys tools (built-in or standalone) or third-party tools
Provides a 360-degree view of risk scoring, findings, and key performance trends for all projects and sources of code (custom
built, third party, and open source)
Maps findings to regulatory compliance standards (including NIST, PCI, HIPAA, DISA, OWASP Top 10) and provides audit reports
for critical violations
Provides both UI and API-based workflows to create, enforce, and monitor security policies across software assets
Enables security teams to specify risk thresholds for issue types, desired application security testing tooling, SLAs on
remediation time for fixes, and required notifications to development stakeholders
Identifies high-impact security activities and summarizes them across manual and automated AST and developer tools
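As an illustration of the compliance-mapping capability listed above, normalized findings keyed by CWE can be joined against a standards table; the mapping table below is a tiny invented sample, not SRM's real data:

```python
# Join findings to compliance standards by CWE. The table is a toy
# sample for illustration only.
CWE_TO_STANDARDS = {
    79: ["OWASP Top 10 A03:2021", "PCI DSS 6.5.7"],
    89: ["OWASP Top 10 A03:2021", "PCI DSS 6.5.1"],
}

def compliance_report(findings):
    report = {}
    for f in findings:
        for std in CWE_TO_STANDARDS.get(f["cwe"], []):
            report.setdefault(std, []).append(f["id"])
    return report

findings = [{"id": "F-1", "cwe": 89}, {"id": "F-2", "cwe": 79}]
print(compliance_report(findings)["OWASP Top 10 A03:2021"])
```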
Demo
Projects- contain all the projects you have in your Software Risk Manager instance. A project is a
collection of scans over time for a target software product, so usually each product (target software) is an
individual project. Projects usually contain multiple scans done by different tools.
Findings- contain all the discovered flaws and vulnerabilities from all your projects, collated in one place. If
you have multiple projects, the Findings section is an easy way to look at the big picture and see which
projects have the most pressing issues. You can also create reports from the Findings section by selecting
specific vulnerabilities using filters.
Integrations- show you a tile for every supported tool in Software Risk Manager.
Hosts- are available to Software Risk Manager Enterprise users with the InfraSec add-on. When Software Risk Manager ingests Network Security
results, the location of those results is typically expressed in terms of a "host", with the level of detail varying from tool to tool. The Hosts page is
Software Risk Manager's location for interacting with host data directly, outside the context of Findings or Projects.
When Software Risk Manager ingests data from various network security tools, it often categorizes the information based on the concept of a
"host". A host can be a server, a network device, or any other entity within your network infrastructure.
The "Hosts" section provides a centralized location to view and manage this host-related data independently from the findings or projects related
to application security.
Server Logs- provides a helpful UI for certain events and errors that administrators might be interested in, for auditing purposes.
Server Manager UI DEMO:
Click on Projects tab. This dashboard gives a list of all your projects in SRM. From here you can add,
analyze and filter the projects within your list.
The Findings tab shows all the findings from all of your projects. You can filter the results as you wish and
drill deeper into any individual finding. You can also create a system-wide report on all of your
findings.
In Policies tab, Software Risk Manager allows you to track compliance to specified requirements. Once
defined, policies can be applied to projects, and policy violations can be monitored.
The Integrations tab shows you a tile for every supported tool in Software Risk Manager.
In Hosts tab, Hosts are available for Enterprise users with InfraSec add-on. It is SRM's location for
interacting with host data directly, outside the context of Findings or Projects.
In Settings tab,
The Users page allows you to manage users. You can create new local users, set existing users as admins,
disable, or delete them. You can also see each user's latest login date.
The Project Metadata Fields page allows you to configure metadata for your projects. Create different
types of fields, text or tags for your projects.
The API Keys section shows all the generated API keys. For example, API keys are used for integrating
with a specific tool or plugin. If you haven't generated any API keys, your view would be empty like in this
example.
User Groups is a feature that allows permissions to be assigned to users in bulk. Groups are like teams,
for example developers and managers. You can create new groups and add users to any group in this
view.
Manual Entry Configuration allows administrators to define custom values which can be entered into a
Manual Results Form.
The Machine Learning Control Panel is available with the proper add-on. Once enabled, you can configure
machine learning capabilities here. ML-assisted triage is available once you have manually triaged at least
100 issues.
Tags can be assigned to projects to help manage people, organizations or severity.
Server Logs provide a helpful UI for certain events and errors that administrators might be interested in,
for auditing purposes.
The Licenses tab shows you your license information including users, projects, and
expiration; additionally this page also allows you to update your license.
Clicking the question mark icon on the top toolbar opens a menu for different SRM guides. You have the
option to select between HTML and PDF version for most of the available guides. Click the HTML version
of the User Guide.
When you click the download icon on the top bar, a list opens to give you links for downloading plugins
for SRM. In addition to plugins, also XML Schemas and Examples, and Java Tracer Agent are available for
download.
The Gear icon gives you the option to view the Visual Log for SRM.
Click Visual Log.
Visual Log shows you log events from your SRM instance. You can use the log filter on the left side to
select which types of items and events are shown in the log.
When you click the admin link on the top-right corner, a menu opens where you can select either your
settings, or to log out.
Click My Settings.
We start with the Notifications tab, used for setting up email notifications that can be triggered by certain
events
In the Personal Access Tokens tab, you can manage Personal Access Tokens. If you generate a token, you can use SRM's
REST API. When you create a new token, you need to give it a name and select the roles the token has
access to.
BlackDuck SCA
Black Duck SCA is the market-leading software composition analysis tool used to
assess, manage, and mitigate the risk involved with the use of open-source software
across the entire SDLC. This risk falls into three main categories:
1) security
2) license compliance
3) code quality
SCA (Software Composition Analysis) helps companies ensure they are complying with the licenses of the open-source components they use. SCA
tools identify open-source components within your application and analyze their associated licenses. This helps companies avoid any potential
legal issues or license violations.
Black Duck raises a flag whenever it finds an open source component that might be risky,
for example, if you are using an outdated version of a component.
Black Duck is a market-leading software composition analysis (SCA) tool used to assess, manage, and mitigate the risk involved with
the use of open source software. This risk falls into three main categories: security, license compliance, and code quality.
By integrating into several tools across the entire software development lifecycle (SDLC), Black Duck is able to analyze application
code and files to identify open source components, and any related vulnerabilities and licenses.
Due to its configurability, Black Duck helps teams institute custom open source governance, tailored to their own risk
tolerance, without getting in the way of innovation.
Software Composition Analysis looks for unpatched vulnerabilities in open source code. Those unpatched vulnerabilities
represent CVEs (Common Vulnerabilities and Exposures) that were disclosed against an open source component but whose
patch hasn't been applied. Because open source components don't have a single source or origin point, each distribution
can carry a potentially different implementation.
Throughout the past few years, the use of open-source software has exploded in popularity. This comes as no surprise given the
countless benefits of employing open source. However, without effectively managing its use, open source can expose an
organization to some unfamiliar risks.
With Black Duck, you can leverage open source and third-party code in your applications and containers while easily managing
the security risks that often come along with it. Since the level of tolerable risk is unique to each organization, Black Duck is
easy to configure and customize to your company-specific security and license policies.
Black Duck can be configured to automatically enforce policies like this, and any violation will trigger the proper alert. Developers
can get ahead of these potential violations as early as the development stage, where the Code Sight IDE plugin will notify developers
of vulnerabilities and can remediate them automatically.
Black Duck also integrates with several other tools in order to find and scan your code base. Next, you're provided with a bill of
materials, which gives you a complete and detailed inventory of all open source identified in your code base. For every component
identified, Black Duck surfaces all known security or compliance issues and flags them for review. Because you were able to
preconfigure Black Duck, you can filter the list to show only the items that violate your company's policies. When security risks are
identified, you can dive deeper into the component to view the Black Duck Security Advisory. This gives you all the
information needed to assess your risk and make the fix, including descriptions, severity scoring, exploit type, remediation guidance,
and any related CVEs. Black Duck's vulnerability impact analysis indicates whether a vulnerable component actually affects your
code.
Black Duck continuously monitors your SBOM and alerts you both inside and outside of the tool if any new vulnerabilities are
detected
Instead, it uses the information in the SBOM to identify the components used in the software.
It then checks these components against its vulnerability database to determine if they have known vulnerabilities.
The SBOM acts as a guide or map to the components within the software, but it doesn't inherently contain vulnerability
information.
SBOM(Software Bill of Materials) : A document which lists all the components used in the
application.
Black Duck is specifically designed to assist organizations in creating and managing Software Bills of Materials (SBOMs). It's a core
functionality of the tool.
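A minimal sketch of the SBOM-as-map idea: parse a CycloneDX-style component list and look each entry up in a vulnerability database. Both the SBOM shape and the toy database below are illustrative:

```python
import json

# The SBOM lists the components; the vulnerability data lives in a
# separate database keyed by (name, version).
sbom = json.loads("""{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "commons-text", "version": "1.10.0"}
  ]
}""")

VULN_DB = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def scan(sbom):
    hits = {}
    for c in sbom["components"]:
        cves = VULN_DB.get((c["name"], c["version"]))
        if cves:
            hits[f'{c["name"]}@{c["version"]}'] = cves
    return hits

print(scan(sbom))
```

This also makes the continuous-monitoring point concrete: as the vulnerability database gains new entries, re-running the check against the same SBOM can surface new findings without rescanning the code.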
Generative AI, cloud-native development, and open source collaboration are emerging as major trends shaping the software
development industry.
2023 witnessed a record number of first-time contributors to open-source projects, highlighting the expanding reach and inclusivity
of the open-source ecosystem.
Commercially backed open-source projects continue to attract a significant portion of both first-time and overall contributions,
suggesting the influence of commercial support on open-source growth.
There is a 60% increase in automated pull requests updating packages, i.e., the open-source packages used in projects, even
though many of these packages are vulnerable.
Less than 30% of Fortune 100 companies have an OSPO, i.e., a team that manages open source components and
checks for risks, compliance, and vulnerabilities.
What does commercially backed open source project mean?
DevOps Pressure: Organizations are increasingly adopting DevOps practices to accelerate software delivery and improve agility.
This creates a demand for faster, more efficient processes.
Open Source as a Solution: Open-source software offers several advantages to address DevOps challenges:
Faster acquisition: No need for lengthy procurement processes like purchase orders or contracts.
Quicker deployments: Open-source software can often be deployed rapidly due to pre-built packages.
Rapid evolution and innovation: Community-driven development leads to faster feature updates and improvements.
Higher quality: Extensive community testing often results in higher quality software.
Risk Management: While open source offers numerous benefits, it also introduces risks like security vulnerabilities and licensing
compliance issues. Effective risk management is crucial to fully leverage the advantages of open source without compromising the
overall system.
While open source can significantly contribute to DevOps goals, organizations must implement robust strategies to address the
associated risks, which is where Black Duck comes in.
Projects without active maintenance are more susceptible to vulnerabilities and exploits.
Many organizations struggle to keep open-source components up-to-date due to resource constraints, potential compatibility
issues, and the fear of unintended consequences.
Linksys failed to comply with this requirement by not making the router's source code publicly available. This non-adherence to
the GPL's terms led to legal action against Cisco.
The case highlights the critical importance of understanding and adhering to open-source licensing terms. Failure to do so can
result in significant legal and financial repercussions.
This feed is enriched by the research conducted by the Synopsys Cybersecurity Research Center (CyRC).
The focus is on delivering timely information (same-day notification) about critical vulnerabilities.
A vulnerability feed of open source issues, curated to ensure accuracy and relevance to customers.
BDSA is a value-added service provided by Black Duck to enhance vulnerability information. It addresses shortcomings in existing
vulnerability databases like the NVD.
Black Duck Security Advisories (BDSA) provide more comprehensive and timely CVE information than the National
Vulnerability Database (NVD) by incorporating additional research and analysis from the Synopsys Cybersecurity
Research Center (CYRC).
Now let's look at how manual OSS (open-source software) governance creates bottlenecks, and how an automated, policy-driven workflow avoids them:
Step1) Security and legal teams establish clear guidelines for acceptable open-source components.
Step2) When a developer introduces a component, it undergoes an automated evaluation against the defined policies
Step3) Developers receive immediate feedback on whether the component complies with the policies.
Step4) The system can implement complex decision-making processes based on multiple criteria
Step5) Approved components can be cached in a binary repository for future use.
Step6) Human intervention can still be involved for complex cases or exceptions.
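Steps 2-4 above can be sketched as a simple policy check; the license allowlist and CVSS threshold below are invented examples, not Black Duck defaults:

```python
# Automated policy evaluation: each incoming component is checked
# against security and license policies, and the developer gets an
# immediate verdict with the reasons.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}
MAX_CVSS = 7.0

def evaluate(component):
    reasons = []
    if component["license"] not in ALLOWED_LICENSES:
        reasons.append(f"license {component['license']} not approved")
    if component["max_cvss"] >= MAX_CVSS:
        reasons.append(f"known vulnerability with CVSS {component['max_cvss']}")
    return ("rejected", reasons) if reasons else ("approved", [])

verdict, why = evaluate({"name": "leftpad", "license": "GPL-3.0", "max_cvss": 9.8})
print(verdict, why)
```

Step 4's "complex decision-making" would extend this with multiple criteria and exception paths, but the shape stays the same: policy in, component in, verdict and reasons out.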
Benefits:
Automating the review process accelerates development cycles.
By quickly identifying and blocking risky components, it reduces security vulnerabilities.
With implementation come integrations that support how DevOps teams build their code.
1) Regardless of the acquisition method (package manager, direct download, binary repository), the open-source code
itself remains the same, and it should be treated consistently within the governance framework for a complete BOM. This
emphasizes the importance of uniform handling for all open-source components.
This means that,
Every open-source component, regardless of its origin, should undergo the same governance processes.
A comprehensive BOM requires the inclusion of all open-source components, irrespective of their acquisition method.
All components should be subjected to the same security, license, and quality checks.
2) Similarly, developer preferences for building code shouldn't influence how open-source components are managed.
Black Duck is designed to seamlessly integrate with various CI/CD processes and build paradigms, including cloud-
based services.
This ensures Black Duck can effectively generate findings related to open-source usage throughout the development
lifecycle.
In effect, integrations that span the entire SDLC enable teams to manage open source without having to leave the tools DevOps
teams use to build their code.
Question: Coverity SAST checks the whole source code to find vulnerabilities, right? Then can't we simply integrate the
open source code with the proprietary code and have SAST do the whole job, instead of using SAST for proprietary
code and Black Duck for open-source code?
Ans: Coverity can essentially do this part, but SAST becomes inefficient when it comes to the
dependencies that open-source code may have. SAST can't jump onto the dependencies to check them for vulnerabilities;
one dependency may depend on another, forming a dependency chain,
which is handled better by SCA.
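The dependency-chain point can be illustrated with a small graph walk: an SCA-style scanner follows transitive edges that a source-only SAST scan never sees (the graph and the vulnerable set below are made up):

```python
from collections import deque

# The app's source only mentions web-framework, but the vulnerability
# sits three levels down the transitive dependency chain.
DEPENDS_ON = {
    "my-app": ["web-framework"],
    "web-framework": ["template-lib"],
    "template-lib": ["string-utils"],
    "string-utils": [],
}
VULNERABLE = {"string-utils"}

def transitive_vulns(root):
    seen, queue, hits = set(), deque([root]), []
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        if pkg in VULNERABLE:
            hits.append(pkg)
        queue.extend(DEPENDS_ON.get(pkg, []))
    return hits

print(transitive_vulns("my-app"))
```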
Also, SAST tools are not designed to handle license compliance issues associated with open-source components
Question: Okay, so if SAST is not good at analyzing vulnerabilities in dependencies, then what about the external
libraries that proprietary code may use? Who checks their vulnerabilities, SAST or SCA?
Ans: while SAST can provide some insights into how external libraries are used within the codebase, it's not
designed to comprehensively assess the security of those libraries.
To effectively manage risks associated with external libraries, organizations need to complement SAST with tools like
Software Composition Analysis (SCA) that specialize in analyzing open-source components and their vulnerabilities
Question: Can’t SAST check open-source code integrated with proprietary code for compliance?
Ans:
SAST can perform some basic compliance checks on open-source code.
SCA is the preferred tool for in-depth open-source compliance and risk management.
A combination of SAST and SCA provides the most robust approach to managing open-source software.
Competitors: Other SCA vendors have focused on either license or security. Black Duck is the most comprehensive,
with market-leading license compliance features.
Synopsys has very strong policy management and SDLC integrations
In the Forrester Wave for SCA, Synopsys was the only vendor to receive perfect scores in both policy and
integrations
Synopsys: Leverages BDSA data for a complete picture of risk, including exploit, solution, severity, CWE, and
reachability info for Java. Prioritize remediation based on all of these factors.
Competition: Few competitors offer this functionality, but it is narrow in scope (language coverage) and at times very
manual.
How to Respond: Overall prioritization of vulnerabilities is key to OS management. Knowing which are thought
to be in use is one data point in that prioritization but shouldn't be used as the ONLY data point.
Synopsys: Leverage Code Sight which provides automated remediation and manual remediation information right in
the IDE, so developers can find and fix vulnerabilities as they code and save time later in the SDLC.
Competition: Some competitors are beginning to offer this but it is narrow in scope and coverage and only works for
certain applications.
How to Respond: This can have downstream impacts and we want you to have control over how you remediate based on
your requirements. We provide you with the enhanced information you need to make those decisions.
DEMO
Black Duck Questions to ask customers
CodeDx
Code Dx is an Application Security Orchestration and Correlation solution designed to ingest
and correlate data from multiple AST tools, analyze the findings and assess criticality,
prioritize remediation efforts, and orchestrate developer workflows.
ingest – collects
remediation – vulnerability fix
Orchestrate workflow – coordinate the flow of information and actions between tools and teams, prioritize and assign tasks to developers based on
severity, etc.
- Results from all tools are aggregated and normalized to provide consistent scoring and descriptions for all issues
- Using multiple tools or running multiple scans can result in a single vulnerability being found by multiple tools and reported as multiple issues.
Code Dx examines the results from similar tools to eliminate duplicate vulnerabilities found by more than one tool.
Code Dx correlation and deduplication eliminated over 1,000 issues, reducing the triage workload by 20%
- Hybrid Correlation Engine: Code Dx combines data from both static analysis tools (SAST) and dynamic analysis tools (DAST) to provide a
more comprehensive view of vulnerabilities.
- Records when each AppSec test was run
- Records all the issues that were found and which are highest priority
- Stores remediation status (which issues were fixed which weren’t)
- Centralized platform offering a unified view of all security risks associated with your application.
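The correlation and deduplication described above can be sketched by keying findings on (file, line, CWE); this keying scheme is a simplification of what Code Dx actually does:

```python
# Findings from several tools are keyed by (file, line, CWE) so the
# same flaw reported twice collapses into one issue that remembers
# which tools found it.
raw_findings = [
    {"tool": "coverity", "file": "auth.c", "line": 42, "cwe": 89},
    {"tool": "thirdparty-sast", "file": "auth.c", "line": 42, "cwe": 89},
    {"tool": "coverity", "file": "ui.js", "line": 7, "cwe": 79},
]

def deduplicate(findings):
    merged = {}
    for f in findings:
        key = (f["file"], f["line"], f["cwe"])
        merged.setdefault(key, {"tools": set(), **f})["tools"].add(f["tool"])
    return list(merged.values())

issues = deduplicate(raw_findings)
print(len(raw_findings), "->", len(issues))
```

An issue confirmed by more than one tool is also a natural prioritization signal, which is how deduplication feeds the triage-reduction numbers quoted above.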
Identified issues can be exported to JIRA, creating a new ticket for tracking and resolution.
The integration pushes information from Coverity to JIRA but doesn't pull updates back
from JIRA.
Code Dx: For customers needing a two-way connection with JIRA, Coverity offers
Code Dx, a separate product that allows viewing and manipulating JIRA issue
statuses directly within Coverity.
CodeSight
While a strong ecosystem of integrations is important for any AppSec tool, the reality is that the value is only as good as the workflows
you can build around it. From a DevOps perspective, one way to improve application quality is to shift security and quality
information "left," toward the developer. That means getting the AppSec findings into the developer's IDE.
The Synopsys Code Sight plugin helps developers find quality and security issues in their source code. It helps them fix these issues
and increases confidence that they are checking in clean code. Code Sight launches one or more Synopsys software analysis engines
to scan your source code and detect issues. Code Sight runs within many types of IDE applications and displays the information it
finds in its own views, which appear within the IDE interface.
BSIMM
BSIMM (Building Security In Maturity Model) is a framework that helps organizations assess and improve their software security
practices. It is based on the principle that "you can't improve what you don't measure."
BSIMM takes a data-driven approach to identify how well application security programs are working and provide feedback on
where improvements can be made
BSIMM helps organizations assess their security maturity and identify improvement areas.
By measuring their performance against industry standards, organizations can identify areas for improvement and implement
targeted actions.
BSIMM gathers data from a wide range of organizations across various industries. This diverse dataset allows for meaningful
comparisons and the identification of industry-specific trends.
WHY BSIMM
BSIMM's Goal: To assist organizations in planning, executing, maturing, and measuring their SSIs.
BSIMM focuses on identifying the highest-level activity observed within each security practice, providing a quick overview of an
initiative's maturity.
BSIMM addresses the gap by providing real-world data instead of relying on hypothetical expert opinions.
BSIMM offers a standardized approach to measuring and describing 122 distinct security activities.
SSI (Software Security Initiatives) is primarily a practice. It refers to the specific actions and strategies an organization undertakes to
secure its software.
While not a framework itself, it's often integrated within a broader software development or security framework. Think of it as the
"what" you do, while frameworks like BSIMM or OWASP provide the "how" and "when" to do it.