6. Synopsys Keywords

Questions:

1) Which Solution should be advertised to whom?

2) If we are positioning Coverity or WhiteHat, how do we respond when someone raises the point we stated elsewhere: "Seeker customers have reported finding vulnerabilities they didn't even know to look for, ones they couldn't have found with a DAST scanner or with static analysis"?

3) IAST, DAST, SAST, and Software Risk Manager (SAST + SCA) each provide automation; which one should we recommend?

4) Seeker IAST addresses the AppSec testing gap by providing a solution that allows for automated testing of web applications, with the ability to give developers the specific location of vulnerabilities in the code.
If SAST is integrated with the IDE, vulnerabilities are found then and there. Even if it isn't, does Coverity SAST point to the exact location of the code that causes a vulnerability? And does WhiteHat Dynamic pinpoint the location of code that causes a vulnerability?

5) Is IAST a replacement for SAST and DAST?

6) If Polaris and SRM do not provide IDE integration, do they lack the instant vulnerability notification, as the developer writes code, that Coverity provides?

7) Synopsys Software Risk Manager, Synopsys CodeSight, and Synopsys Polaris are all SCA + SAST; what is the difference?

| Feature | Software Risk Manager (SRM) | CodeSight | Polaris |
|---|---|---|---|
| Core Function | ASPM | IDE integration | AST platform |
| SCA | Included | Included | Included |
| SAST | Included | Included | Included |
| DAST | Included (optional) | Not included | Included |
| IDE Integration | No | Yes | No |
| Cloud-Based | On-premises or cloud | Cloud-based (IDE plugin) | Cloud-based |
| Deployment Model | Enterprise-wide | Developer workstation | Cloud-based |
| Focus | Security program management | Developer productivity | Comprehensive AST |
| Target Audience | Security teams, application owners | Developers | Security and development teams |
| Remediation Tips | Yes | Yes | Yes |

 ASPM is best for managing and prioritizing vulnerabilities across the application portfolio.

 AST excels at finding specific vulnerabilities in code and applications, including their location, severity, and potential impact.

 IDE-integrated tools are ideal for early detection and developer education.
SAST - Coverity

DAST - WhiteHat

IAST - Seeker

SCA - Black Duck

Fuzzing - Defensics

Polaris - SCA, SAST, DAST (SaaS)

CodeSight - SCA, SAST, DAST (IDE plugin)

SRM | Code Dx - SCA, SAST, DAST (on-prem)

Coverity SAST

Software size and complexity are exploding. The larger the codebase, the higher the possibility of vulnerabilities, so it is crucial to evaluate code throughout the SDLC, which helps identify and address security flaws in the early stages of development.

Scans source code.

Finds security weaknesses like:

SQL injection, cross-site scripting, etc.

Best for custom code
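
To make this concrete, here is a minimal Java sketch (illustrative, not Coverity output) of the SQL injection weakness a SAST scan flags, alongside the parameterized fix:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class LoginDao {
    // Vulnerable: user input is concatenated into the query string, so input
    // like "' OR '1'='1" changes the meaning of the SQL (classic injection).
    ResultSet findUserUnsafe(Connection conn, String username) throws Exception {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Fixed: a parameterized query keeps the input as data, never as SQL.
    ResultSet findUserSafe(Connection conn, String username) throws Exception {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}
```

A SAST engine reports the concatenation path (source: username; sink: executeQuery) and recommends the parameterized form.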

WhiteHat DAST

Finds vulnerable app behavior such as misconfigurations and authentication issues.

Many common vulnerabilities go undetected by the testing that takes place while software is coded in the SDLC; they often remain undetected until they are exploited by threat actors.

DAST exercises the application the way a threat actor would, attempting to exploit it.

Black Duck SCA

According to 2022 audits, open source is found in 97% of codebases. Clearly, open source is the fuel of modern applications, which makes it mandatory for us to perform security checks on it; for this we use SCA.

SCA is an automated process that evaluates open source code for security vulnerabilities, compliance issues, and code quality.
Synopsys offers high-quality AST capabilities delivered as either SaaS (Polaris) or on-premises (Software Risk Manager).

Competitors
Checkmarx (AST solution providing SAST, DAST, and SCA): its SCA and DAST are weak.

Veracode (AppSec platform): strong SAST, but weak SCA and DAST.

SonarQube (SAST): SonarQube focuses only on 'code smells', which indicate where problems might exist, while Polaris (ASPM) and Coverity (SAST) identify actual issues and provide remediation guidance to resolve them quickly. SonarQube also looks for only a small portion of the vulnerabilities that put applications at risk.
Our SAST scans protect customer applications from being exploited by finding critical vulnerabilities that SonarQube misses.

Snyk (SCA, SAST, IaC, container, and cloud security): weak SAST, no DAST. Snyk performs poorly in competitive bake-offs due to weak SAST capabilities, a lack of DAST, and inaccurate, incomplete SCA results from offering only dependency analysis.
Snyk SCA offers only dependency analysis; when software supply chain risk management is top of mind, it's crucial to identify ALL third-party dependencies.

SRM (formerly Code Dx)


About Synopsys
Synopsys is a leader in electronic design automation (EDA), a leader in semiconductor IP solutions, and an emerging leader in software quality and security.
We are a strong #2 provider of semiconductor IP solutions (second to Arm).
In 2014 Synopsys entered the software security and quality market with our acquisition of Coverity, an industry leader.

Gartner's Critical Capabilities for Application Security Testing (AST) report provides a detailed evaluation of vendors in the AST market.
The report ranks 12 vendors based on their ability to deliver specific capabilities.

Five Use Cases: The vendors are evaluated across five common use cases:

Enterprise: Traditional on-premises applications


DevSecOps: Integrating security into the development pipeline
Cloud-Native Applications: Protecting applications in cloud environments
Mobile and Client: Securing mobile applications
Software Supply Chain Security: Managing open-source and third-party risks
Synopsys Achieved Highest Scores: Synopsys received the highest scores across all five use cases, indicating its strong capabilities in these areas.

To position Synopsys Software Integrity Group's key differentiators, here is a set of qualifying questions to get you started.
1)What are your software security and quality needs or goals?
2)Do you know how your software security compares to your peers?
3)Are your teams moving toward DevOps, DevSecOps or the cloud?
4)Do you have sufficient application security skills and resources?
5)Are your application security tools meeting your needs?
6)Are security and quality built into your software development life cycle tools and processes?

Holistic application security entails a thorough architecture risk analysis, also known as ARA, to find system design flaws, plus a combination of automated static application security testing (SAST), software composition analysis (SCA), dynamic application security testing (DAST), interactive application security testing (IAST), and fuzzing to find known vulnerabilities.
This is followed by expert-led manual penetration testing to dig deeper and find common weaknesses, catalogued in the Common Weakness Enumeration (CWE), that a real-life hacker could exploit.
Pen-testing experts then use their experience and playbooks to weed out false positives and provide prioritized remediation advice according to the business's unique risk profile.

The Synopsys AppSec mission is to help our customers build secure, high-quality software faster by providing the solutions they need to effectively manage their AppSec efforts.

Understanding the key fears of Customers


Understanding AppSec

1) Confidentiality: by encryption

2) Integrity: by using hash functions and MACs, making sure that data has not been tampered with

3) Availability: by redundancy and replication

4) Authentication: by digital certificates
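
To ground the integrity point, here is a minimal Java sketch using the standard javax.crypto API; the key and messages are invented for the example:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class IntegrityDemo {
    // Compute an HMAC-SHA256 tag over a message. The receiver, holding the
    // same shared key, recomputes the tag and compares: any tampering with
    // the message changes the tag.
    static byte[] hmac(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret-key".getBytes(StandardCharsets.UTF_8); // demo key only
        System.out.println(HexFormat.of().formatHex(hmac(key, "amount=100")));
        // A tampered message produces a completely different tag:
        System.out.println(HexFormat.of().formatHex(hmac(key, "amount=999")));
    }
}
```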

Security in SDLC

DevSecOps
The objective of DevSecOps is not to eliminate all risks prior to deployment but to establish a proactive approach to security: identifying potential risks beforehand, prioritizing them by severity, addressing the most critical issues, and implementing measures to prevent similar vulnerabilities from recurring. Additionally, DevSecOps emphasizes continuous monitoring of the threat landscape so that emerging risks are identified and addressed promptly.
Technical Presales

Coverity (SAST)
The 2022 OSSRA report found that over 97% of codebases contained open source.
Proprietary code should focus on creating differentiated value, without reinventing foundational elements.
You might use third-party libraries in open source form, but it's up to you to ensure they meet your compliance targets.
Make sure that you do no harm with your code.

Static analysis looks at the source code. Its core role is to identify code patterns that represent quality issues or security weaknesses. Common examples include SQL injection, buffer overflows, or even memory leaks. While this is best for proprietary code, it's important to recognize that it can only be run against code available in source form, which includes open source and third-party libraries delivered as source.
In Coverity:
Proactive SAST finds issues in your application code as you type in your IDE (immediate).
Change-set SAST finds and fixes common security issues before changes are committed to source code repositories (takes minutes).
Iterative SAST analyzes code once it is merged to the main branch (CI/CD) (takes minutes).
Comprehensive SAST thoroughly scans all your code so you can make informed decisions prior to release.
With all of these tools, it's not uncommon for development teams to complain about the impact security tooling and assessments have on delivery schedules. Those teams are pressured to deliver more, faster, all without sacrificing quality or security.
The problem is how often most security tools are run relative to the code that's being edited or changing at each of these phases. For example, you want comprehensive static code analysis running as part of a CI workflow, but that doesn't mean it should run on every commit.
That can be done as part of a nightly test cycle, where there is time to complete a deeper analysis that spans all the commits the application experienced that day. As you move left toward the developer, the iterative or change-set-based analysis becomes more important.
Such analysis focuses attention on the code that changed as part of a merge commit and is much faster than the deeper comprehensive analysis performed nightly.
Faster, of course, doesn't need to come with a lack of coverage, but for merge commits a shallower test makes sense, given the goal is mostly to ensure the current change set didn't introduce problems in the code branch.
Rounding out the progression toward the developer is proactive analysis performed within the developer's IDE.
This can be thought of as the equivalent of a spell checker for a Word document. By identifying security or quality issues as the code is being written, the developer can adjust their implementation with minimal impact to the delivery schedule, because they are currently thinking about the specifics of the feature they're working on.
When combined, the proactive, iterative, and comprehensive flow reduces the risk associated with both developing a feature and merging the feature commits into the main code branch, all without compromising the software assurance provided by a comprehensive analysis.
This keeps the Dev and the Sec teams in harmony.

2) Integrated
- It integrates with:
-- popular IDEs
-- source code control and bug tracking systems
-- CI/CD frameworks

Performant
- Analyzes a full project build, or works without a build
Technical capabilities of Coverity:
1) Minimal false positives
2) Broad framework and API support
3) Configurable to match the application risk profile
4) Policy-based scans ensure compliance
5) Tracks progress with trend reports
6) Integrates with IDEs and resolves issues in real time
7) Deploy anywhere: SaaS for cloud-based applications, on-premises for highly regulated applications
8) Fast, comprehensive analysis during development

Coverity also supports auto-assignment. When it finds a new defect, it checks the software configuration management system and automatically assigns the defect to the developer responsible, who then gets notified by e-mail. With Coverity, your developers don't need to be security experts. By identifying issues and getting actionable remediation advice as they code, they'll quickly learn how to fix software flaws and security vulnerabilities early in the SDLC, when it's least costly.

Fast desktop analysis enables developers to find, fix, and track software flaws quickly, all within their IDE and issue trackers. It takes only a few seconds to get accurate results, with no tradeoff between performance and accuracy.

And if you're not able to do a build, the file system capture feature can be used to scan and analyze uncompiled code.

Q. What is an application risk profile?

Q. What do you mean by 'tracks progress with trend reports'? What are trends?
Q. This means that Coverity integrates with IDEs, right? Or does Synopsys have its own IDE?

Rather than simply committing code and waiting for another team to test it, Coverity integrated via the Code Sight plugin allows developers to analyze or scan code as they write it. This enables developers to fix issues before they impact others.

Coverity empowers security teams to "independently scan projects," as it provides tools that allow security teams to check for vulnerabilities without waiting for the developers to finish a section or needing to use separate tools. This means security checks can be integrated into the development process more smoothly, potentially leading to faster and more secure software releases.

Coverity and Code Sight were specifically designed to address these concerns. Code Sight and Coverity are integrated into most popular IDEs, which doesn't require developers to learn complex new tools or make major changes to their workflows. Findings are actionable and accurate, which helps ensure teams focus on fixing defects rather than triaging tooling integrations or configurations. The ultimate goal is for developers to identify bugs as they write features, rather than waiting for a centralized team to flag a bug that was identifiable during feature development.

Coverity:
 Reduces security exposure in your applications
 Reduces project risks by finding problems earlier
 Lowers software development costs

Coverity Connect enables grouping and organizing defects by code base, release, and branch, in the order they were discovered.
Grouping defects by code base, release, and branch makes it easier to identify and prioritize vulnerabilities. For example, defects in a release nearing deployment might be prioritized higher than those in a development branch.
It also lets developers and security teams concentrate on analyzing vulnerabilities in a particular section of the code.
Finally, seeing the order of defect discovery helps you understand how vulnerabilities were introduced over time and track their remediation progress.

Coverity also allows us to go back to results from a specific state of the code, analyzed at a specific time. How does it work?
Coverity Connect provides three container types:
Project
Stream
Snapshot
A project focuses on a specific release. A branch in your version control system can be represented as a stream, and one or multiple streams can be grouped into a project. Within each stream there are one or several snapshots, each corresponding to a historical state of the code captured at the time of analysis, which could be from last night or several nights ago.

The Coverity reports include:

The Coverity MISRA Report gauges MISRA compliance and displays the associated issues in your codebase.
The Coverity Security Report quantifies and displays the security issues in your codebase. The report benefits compliance departments and developers.
The Coverity Integrity Report provides a standard way to objectively measure the integrity of your own software as well as software you integrate from suppliers and the open source community.
The Coverity CVSS Report uses the CVSS standard to evaluate security risks in your codebase and allows an auditor to adjust the report's findings in Coverity Connect. This report can be used both internally and externally to your organization.

For Cross-Site Request Forgery (CSRF), Coverity shows the vulnerability in red, the reasons for the vulnerability, and how to protect against the issue.

Coverity also introduces a 'Rapid Scan' feature, which can perform a fast scan of code and, more importantly, can also find vulnerabilities in IaC (Infrastructure as Code).

IaC allows you to configure network devices, servers, and databases remotely using code, removing the need to be physically present at the premises.

This is an important capability that gives Synopsys a competitive advantage over all of our competitors. Something to certainly highlight to our customers and prospects.

An example of how solving issue is made easy in Coverity:


 You're looking at a Java file with red highlighted lines.
 The red highlights are called "contributing events" and indicate factors leading to an issue.
 The main issue, flagged with an exclamation mark, is at the bottom.

 The entire report fits on one screen, making it developer-friendly.


 For complex issues spanning multiple files, a list view is available.
 The "contributing to issue" window on the right details contributing events.

 Coverity highlights issues like SQL injection, where user input is used in a database query.
 It tracks this input through various function calls (procedure calls) to the point it's used (the "sink").
Coverity provides quick access to information about the identified issue.

 Clicking on the issue description leads directly to relevant documentation within Coverity.
 This documentation explains the issue, its code manifestations, and additional examples.

 Developers can quickly understand the issue, its cause, and how it affects the code.
 This reduces the time spent analyzing the problem and allows for faster resolution.
Creating a description for the null pointer dereference

For every null value returned, you can give a description:

Classification: Is it a bug, a false positive, etc.?

Severity: Is its severity major, moderate, or minor?

Action: What action should be taken? Fix required, fix submitted, modeling required, or ignore?

Legacy: When a new team starts using SAST, they might find many existing issues. To focus on preventing new problems, they can mark these older, less critical issues as 'legacy' and address them later.
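
For context, a minimal self-contained Java example (illustrative only) of the null pointer dereference pattern being triaged here:

```java
import java.util.Map;

public class NullDerefDemo {
    // A lookup that can return null when the key is absent.
    static String findEmail(Map<String, String> users, String name) {
        return users.get(name); // may return null
    }

    public static void main(String[] args) {
        Map<String, String> users = Map.of("alice", "a@example.com");
        String email = findEmail(users, "bob"); // null: "bob" is not present
        // Defect: dereferencing a possibly-null value on this path.
        // System.out.println(email.toLowerCase()); // would throw NullPointerException
        // Fix: guard the dereference (or return a default from findEmail).
        if (email != null) {
            System.out.println(email.toLowerCase());
        }
    }
}
```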

Finally, you can also 'export' the issue to Jira, i.e., push the issue to Jira.

CWE is a list of software and hardware weaknesses maintained by MITRE.


It categorizes and describes these weaknesses, including potential vulnerabilities they can create.
Coverity provides a link to the specific CWE entry for the identified security issue.
This gives developers access to extensive information beyond Coverity's basic explanation.
The CWE link empowers developers to gain a comprehensive understanding of security vulnerabilities in their code, allowing them to make informed decisions about fixing them.

Coverity provides a visual representation of the Null pointer dereference issue, making it easier for developers to understand the
problem and its connection to other pointers in the code.
After understanding the issue, developers can categorize it in various ways:
1) Bug vs. Non-Bug: This helps distinguish genuine errors from potential false positives.
2) Severity Level: Developers can assign a severity level based on the potential impact of the vulnerability (e.g., Critical,
High, Medium, Low).

Developers can choose the appropriate action for the issue:


1) Fix: This flags the issue for developers to address.
2) JIRA/Bugzilla: This option integrates with issue tracking systems like JIRA or Bugzilla, creating a ticket for
the developer to track the fix.
Companies adopting Coverity for the first time might have existing known bugs. The "Legacy" flag allows them to mark these issues
and focus on newly identified vulnerabilities.

 Role-Based Access Control (RBAC): Some organizations require stricter control over marking issues as false positives. RBAC allows setting permissions so only authorized individuals can perform this action.
 Proposal and Comment: Coverity allows users to propose an issue as a false positive and add comments to justify
their reasoning. This creates an audit trail for future reference.
Identified issues can be exported to JIRA, creating a new ticket for tracking and resolution.

Coverity provides a link to the JIRA ticket for easy access.

The integration pushes information from Coverity to JIRA but doesn't pull updates back
from JIRA.
Code Dx: For customers needing a two-way connection with JIRA, Coverity offers
Code Dx, a separate product that allows viewing and manipulating JIRA issue
statuses directly within Coverity.

Competitive Key Differentiators


Questions to ask customers

Business Objectives of Coverity:

 Reduce AppSec risk by finding Security Issues in proprietary code


 Reduce AppSec risk by finding security configuration issues in Infrastructure as Code (IaC) and application configuration files
 Improve code quality by finding impactful quality issues in proprietary code
 Achieve safety compliance by finding deviations from code compliance standards
 Improve development efficiency by finding and managing issues early and throughout the development lifecycle
 Improve development efficiency and adoption of Coverity by centrally managing the filtering and triage of issues
 Improve tool adoption and efficiency with high accuracy and low false positives
Success Criteria

1. Technical Role

 Scanning (Coverity scans code for vulnerabilities, defects, and coding standards violations)

 Analysis (Coverity analyzes code to identify security vulnerabilities, performance issues, and code quality problems)

 Workflow Integrations (Coverity integrates with CI/CD pipelines, issue tracking systems, and code repositories)

 Scoping (Coverity allows users to define specific code areas for analysis)

2. Project Manager Role

 Project Configuration (Coverity enables setting up projects, teams, and scan configurations)

 Dashboarding (Coverity provides customizable dashboards to visualize code quality metrics)

 Process and RBAC (Coverity supports role-based access control and workflow customization)

 Deliverables Management (Coverity generates reports and metrics for project management)

3. Security Engineer Role

 Support and Analysis (Coverity offers support and troubleshooting for security issues)

 Security Audits (Coverity can be used to conduct security audits and assessments)

 Policies (Coverity can enforce security coding standards and best practices)

4. Compliance Role

 Compliance Standards (Coverity checks code against industry standards like CWE, OWASP, PCI DSS)

 Support and Analysis (Coverity provides support for compliance-related issues)

 Deliverables (Coverity generates compliance reports and evidence)

5. Developer Role

 Accessing Results (Developers can view and analyze scan results within Coverity)

 Triage Workflow (Coverity supports issue prioritization and assignment)

 Support and Analysis (Coverity provides developer support and troubleshooting)


DEFENSICS (FUZZING)

When a software bug can be exploited for malicious activities, it's called a software vulnerability, and vulnerabilities can be categorized as either known or unknown.

Unknown vulnerabilities, also known as zero-days, are dangerous, allowing attackers to operate unnoticed for an extended period of time.
Reactive security solutions are useless against zero-day attacks.
What if you could test software for unknowns? You can, with fuzz testing.
Fuzz testing manipulates the input data it sends until a malformed input causes the software to crash.

You could perform fuzz testing randomly and miss vulnerabilities, or with a template and still miss vulnerabilities. Or you can fuzz intelligently, manipulating only the input that matters most to you.
Introducing generational, intelligent fuzz testing from Synopsys. Our fuzzing solution provides prebuilt test suites that ease the burden of manual black-box test creation, and it runs on any VM or Windows or Linux computer to produce a detailed remediation package that helps identify and fix software issues fast. Because you don't know what you don't know.

Defensics uses generational fuzzing, which means it understands everything about how the input should look. It systematically breaks the rules for each field of each message to create test cases of very high quality, capable of reaching many code paths and driving the target into new states before delivering the anomalies.

IAST, DAST, and SAST cannot detect unknown (zero-day) vulnerabilities, as these haven't been identified or patched yet; those tools are based on predefined rules and patterns of known vulnerabilities.

Types of fuzzing:
Mutation-based fuzzing
Generation-based fuzzing
Random fuzzing

The general idea is that an interface is supplied with malformed data that the programmers did not consider during development, so that deviations from the specified system behavior are caused. It is essential for the malformed data to be semi-valid, meaning that its construction is valid enough not to be rejected immediately by the interface, but invalid enough to cause unexpected effects in the underlying software layers.
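
To make "semi-valid" concrete, here is a minimal mutation-based sketch in Java: it starts from a valid seed input and flips a few bits. (Defensics itself is generational, building inputs from a protocol model rather than mutating seeds; this sketch only illustrates the semi-valid idea.)

```java
import java.nio.charset.StandardCharsets;
import java.util.Random;

public class TinyMutationFuzzer {
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so runs are repeatable
        // Start from a valid ("seed") input so mutants stay semi-valid:
        // structured enough to pass initial parsing, wrong enough to reach
        // unexpected states in deeper layers.
        byte[] seed = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
                .getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < 5; i++) {
            byte[] mutant = seed.clone();
            // Flip a few random bits, leaving most of the structure intact.
            for (int j = 0; j < 3; j++) {
                mutant[rng.nextInt(mutant.length)] ^= (byte) (1 << rng.nextInt(8));
            }
            System.out.println(new String(mutant, StandardCharsets.UTF_8));
            // A real harness would send each mutant to the target and use
            // instrumentation to detect crashes or protocol violations.
        }
    }
}
```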
What makes a good fuzzer?
1) Efficient:
 Identifying critical code paths that could lead to vulnerabilities.
 Generating test cases that target these paths effectively.

2) Intelligent:

The fuzzer shouldn't just throw random data at the software.


A good fuzzer understands the protocols used by the target so it can generate data relevant to the protocol's behavior, increasing the chance of uncovering vulnerabilities.

3) Integration with Development Process:


 Fuzz testing shouldn't be a separate activity from development.

 A good fuzzer integrates seamlessly with existing development workflows.

 This allows developers to easily incorporate fuzz testing into their routine and identify vulnerabilities early in the development
cycle.

4) Repeatability:

If a fuzz test identifies a vulnerability but doesn't produce the same results consistently, it becomes difficult to verify whether the issue is real or a fluke.
Once a developer fixes a vulnerability identified through fuzz testing, they need to re-run the test to confirm the fix actually
addressed the issue. Unstable results make it hard to be certain.

Defensics is a black-box testing tool.

Black-box testing focuses on the external behavior of an application without examining internal code. In black-box testing, testers provide input and observe output and overall functionality; this approach helps identify issues related to user experience, functionality, and API interaction.

Fuzzing throughout the development lifecycle helps identify and fix vulnerabilities earlier, leading to a more secure product; fuzzing only final versions of the application might miss opportunities.
Fuzzing prototypes during development is crucial for embedded systems. Early identification of vulnerabilities can prevent costly hardware changes later.

Defensics found the Heartbleed bug in 2014.

Bug, vulnerability, and exploit

Bugs lead to vulnerabilities.
A vulnerability is a weakness or flaw in a system's design, implementation, or operation and management that can be attacked.
An exploit is an attack on a computer system that takes advantage of a vulnerability.
We're doing something different with Defensics than with Coverity and Seeker IAST.
We're trying to break the system. We're trying to send it things that are out of range. We're trying to send it too much data or too little data, or even a lot of garbage data, to see what it's capable of responding to. We're mostly trying to crash it, or make it act in some unexpected way.

Questions to ask customers

Demo
Fuzzing involves sending invalid or malformed input to the application server to make it crash.

Fuzzing Input may involve:


1) Sending malformed data through input fields like login forms and search boxes.

2) Altering network packets (e.g., HTTP requests: constructing malformed URLs, altering request headers, modifying request bodies, testing different HTTP methods like PUT and DELETE).

3) File uploads like documents and images (uploading files with unsupported file extensions, injecting malicious code within file content, exploiting directory traversal vulnerabilities, uploading extremely large files to test handling limits).

4) Protocol interactions: manipulating data that uses a certain protocol for transfer. For example:

HTTP: manipulating request headers and body content.

FTP: altering file content.

SMTP: modifying email headers, attachment formats, and message content.

TCP/IP: manipulating packet headers, checksums, and data segments.

5) API endpoints:
API fuzzing targets API endpoints.
An API endpoint is the URL at which the API receives requests and returns data.

We fuzz API endpoints by:

a) Manipulating the parameters of the API body.
For example, in {name: John, ...}, the parameter value is 'John'.
We may send an invalid data type, omit a required parameter, add an extra parameter, inject malicious code into parameter values, or test parameter length limits.
b) Payload fuzzing: same as above; the payload is the data carried by the API, so this means sending malformed data in API parameters.

c) Header fuzzing: modifying or removing HTTP headers to test for vulnerabilities.

d) Rate limiting and DoS testing: overloading the API with requests to assess its resilience.

e) Authentication and authorization testing: bypassing security mechanisms or exploiting vulnerabilities in authentication and authorization processes.
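
A minimal Java sketch of API-endpoint fuzzing along the lines of (a) and (b), using the JDK's built-in HttpClient; the URL and JSON schema are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiFuzzSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Each payload bends one rule of the expected schema:
        String[] payloads = {
            "{\"name\": 12345}",                         // invalid data type
            "{}",                                         // required parameter omitted
            "{\"name\":\"John\",\"admin\":true}",         // unexpected extra parameter
            "{\"name\":\"" + "A".repeat(100000) + "\"}"   // length-limit probe
        };
        for (String body : payloads) {
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/users")) // hypothetical target
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            // 5xx responses or hangs suggest the server mishandles the input.
            System.out.println(resp.statusCode() + " <- "
                    + body.substring(0, Math.min(60, body.length())));
        }
    }
}
```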

Practical part:
Step 1) Load the test suite, e.g., the HTTP server test suite.

Step 2) Set the target where the fuzzing needs to be done.

Step 3) Run an interoperability test to ensure the basic configuration is properly set and the connection to the target is working.

Interoperability means the ability of your test suite to communicate and interact correctly with the system you're testing (the target).

This is a crucial first step before you start the actual fuzzing process.

Step 4) Advanced configurations for granular control over test runs.

Run Control: This section allows you to manage the duration and behavior of your test run. You can set time limits, control how test
cases are repeated, and define timeouts for message responses. This is useful for optimizing test efficiency and handling slow or
unresponsive targets.

Logging: Here, you can control the amount of information Defensics records during the test run. By default, it logs successful and
failed test cases. You can adjust this to include more or less detail based on your debugging needs.

Capture: This option enables you to record network traffic during the test run. This captured data can be invaluable for analyzing
issues and understanding the behavior of your target system.

Network: These settings allow you to override the network configuration used by the test suite. This is useful for testing in specific
network environments or when troubleshooting connectivity problems.

CVSS (Common Vulnerability Scoring System): This section allows you to configure how Defensics calculates vulnerability scores. You
can adjust the scoring parameters to align with your organization's specific requirements.

Step 5) Instrumentation, to figure out whether a failure has occurred in the target.

Instrumentations are methods to determine whether a failure has occurred.

Instrumentation methods:

Valid Case Instrumentation: This is the most basic method. It compares the response from the target to the expected response for a
valid test case. If there's a mismatch, a failure is indicated.

Connection Instrumentation: This monitors the network connection between Defensics and the target. If the connection is lost or
there's an error, it's considered a failure.

Syslog Instrumentation: This method analyzes system logs on the target for error messages or unexpected events that might
indicate a failure.

SNMP Trap Instrumentation: If the target supports SNMP, this method can monitor for specific SNMP traps that signal issues.

SNMP Value Instrumentation: Similar to SNMP Trap, but instead of monitoring for specific events, this method checks for changes in
specific SNMP values.
External Instrumentation: This allows you to use custom scripts or tools to determine if a failure has occurred. This provides
maximum flexibility but requires additional development effort.

Agent Instrumentation: This involves installing an agent on the target system to collect detailed information about its state and
performance. This can be helpful for complex systems but might introduce overhead.

Step 6) Select the test cases, from within the test suite we chose, to run on the target.

Defensics groups test cases based on anomalies (malformed inputs).

Defensics also provides selection modes:

Default: This mode selects a basic set of test cases for initial testing.

Full: This mode includes a significantly larger set of test cases for more comprehensive coverage.

Unlimited: This mode generates an almost infinite number of test cases by continuously mutating existing ones.

Limited: This mode allows you to specify a percentage of test cases to run, useful for time-constrained testing

Step 7) Run the test. Monitor the test results, pass/fail, including the anomalies that caused the failures.

Step 8) Remediation

Defensics Remediation collects Defensics test run logs and test suite settings into a package called a remediation package.

The remediation package can be exported as a zip file.

---

• When testing an application like "Opium PB2", what will be the target?
Ans: The network endpoint, i.e., a server or database. Usually the server hosting the application.

• What is a test suite in this context?

Ans: A pre-defined collection of test cases.
These test cases are specific data sent to the target application to make it crash; the data can be valid, invalid, or malformed.

• Does a test case mean invalid input?

Ans: Test cases contain valid, invalid, and malformed data.

Development teams need to know that fuzzing won’t slow them down or get in their way, so they will be especially interested in
automating fuzzing and integrating it into existing processes and systems.

Fuzzing is dynamic testing, but it can be integrated into application development as soon as an executable build or module is
available.
Testing teams are tasked with managing the quality and security of an application. When application security teams recognize the
value of fuzzing in managing unknown vulnerabilities, they will be keen to help development teams automate and integrate fuzzing
into the application development life cycle.
POLARIS SKIPPED
SEEKER - IAST

IAST EXCELS AT PROVIDING DETAILED LOCATION INFORMATION ABOUT A VULNERABILITY.

IT CAN GIVE THE EXACT LINE OF CODE,
which is not possible with SAST and DAST.

Often, tools create too much noise or are too slow. Some tools are great at generating findings, but those findings are either too
superficial or too high-level to recognize the real problem.

SYNOPSYS SEEKER IAST CAN PERFORM TESTING BOTH DURING DEVELOPMENT AND WHEN THE APPLICATION IS
RUNNING.

Seeker employs a technique called instrumentation.


By monitoring the instrumented application, Seeker examines how the code executes, how data moves through the system, and
how memory is used.

Note: The word instrumentation was also used in ‘Defensics demo step 4’

"seeker analyzes custom code HTTP traffic libraries and frameworks back-end connections and runtime behavior the result is a
comprehensive view of vulnerabilities and broad runtime test coverage with no extra steps saving you time and effort”. Please
Explain.
Ans:
 Custom code: Seeker inspects the application's custom code for vulnerabilities, looking for weaknesses in the logic and implementation.

 HTTP traffic: It analyzes the network communication between the application and clients, identifying potential vulnerabilities like injection
attacks, cross-site scripting (XSS), and others.

 Libraries and frameworks: Seeker examines third-party components used in the application to ensure they are not exploited.

 Back-end connections: It analyzes the application's interactions with databases, APIs, and other backend systems to detect vulnerabilities like
SQL injection or insecure data exposure.

 Runtime behavior: Seeker monitors the application's dynamic behavior to uncover vulnerabilities that might not be apparent in static code
analysis.

By integrating with Black Duck,

Seeker effectively tracks the open source components used in applications and the associated security risks. This helps mitigate supply chain vulnerabilities.

Seeker identifies and monitors sensitive data throughout the application lifecycle, alerting teams when it's mishandled or stored
insecurely.

Seeker helps organizations adhere to security standards like OWASP Top 10, CWE/SANS Top 25, PCI DSS, and GDPR by providing
relevant reports.
Seeker monitors how sensitive data moves through your application, including how it's handled in URLs, logs, user interfaces (UI),
databases (DB), and other components. This visibility helps identify potential exposure points.

By tracking sensitive data, Seeker can detect potential data leakage scenarios. For example, if sensitive data appears in logs without
proper encryption or is exposed through a vulnerable endpoint, Seeker can flag it as a potential issue.

Seeker provides precise details about vulnerabilities, including code snippets and data flow analysis, to aid developers in
understanding the issue.
The tool offers actionable advice and even sample code to expedite the fixing process.

Build breaking: For organizations with strict security requirements, Seeker can prevent builds from progressing if security criteria
are not met.

Developers and teams receive timely notifications about new vulnerabilities through Slack or email.
Seeker integrates smoothly with popular development and CI/CD tools like Jira and Jenkins

How Seeker works


Step 1) Instrumentation: the Seeker agent is embedded into the application's code.
Step 2) Request handling: when the application receives a request (e.g., a web request), the agent is activated.
Step 3) The agent determines the specific code section and action triggered by the request.
Step 4) The agent examines how data moves through the application, focusing on areas prone to vulnerabilities (e.g., input validation, output encoding, authentication).
Step 5) Based on predefined rules and patterns, the agent looks for signs of vulnerabilities like SQL injection, cross-site scripting (XSS), or others.
Step 6) Seeker performs additional checks to confirm the identified vulnerability, reducing false positives.
Step 7) Information about the vulnerability, including its location, type, and severity, is sent to the Seeker server.
The Seeker server provides detailed reports and recommendations to help developers fix the issue.
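
As a conceptual illustration only (Seeker's real agent instruments the runtime at the bytecode level and needs no code changes), a servlet filter shows the shape of steps 1 through 4: observe the entry point, watch the request flow, report what happened. It assumes the Jakarta Servlet API on the classpath:

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import java.io.IOException;

// Conceptual sketch: a filter that observes each request the way an IAST
// agent does. A real agent also sees internal calls, not just HTTP edges.
public class ObservingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        long start = System.nanoTime();
        // 1) Record the entry point and the user-supplied ("tainted") inputs.
        System.out.printf("entry: %s %s params=%s%n",
                http.getMethod(), http.getRequestURI(), http.getParameterMap().keySet());
        // 2) Let the application handle the request while the agent watches
        //    how that data flows toward sinks (queries, file paths, output).
        chain.doFilter(req, resp);
        // 3) Report what was observed to the analysis server.
        System.out.printf("exit: %s in %d us%n",
                http.getRequestURI(), (System.nanoTime() - start) / 1000);
    }
}
```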

"active verification", is similar to having a virtual team of pen-testers 24x7 but more accurate and faster. Seeker active verification
automatically retests detected vulnerabilities to determine their exploitability. It is scalable enough to process hundreds of
thousands of web requests. Security teams can filter the issues to focus attention on verified issues that pose the most significant
security risk to the application and its data.
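
A toy sketch of the replay idea behind active verification (not Seeker's actual engine): resend the suspicious request with a unique marker payload and check whether it is reproduced, distinguishing an exploitable finding from a pattern-match false positive. The target URL and parameter are invented:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ReplayVerifySketch {
    public static void main(String[] args) throws Exception {
        String marker = "xss-probe-7f3a"; // unique, harmless token
        String payload = "<script>" + marker + "</script>";
        URI target = URI.create("http://localhost:8080/search?q="
                + URLEncoder.encode(payload, StandardCharsets.UTF_8)); // hypothetical endpoint
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(target).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // If the raw payload is reflected verbatim, the finding is confirmed
        // as exploitable rather than a pattern-match false positive.
        boolean verified = resp.body().contains(payload);
        System.out.println(verified ? "VERIFIED exploitable" : "not reproduced");
    }
}
```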

DAST has a history of false positives;

Seeker IAST reduces false positives using active verification, and it provides actionable findings.

Seeker agents can be easily instrumented to multiple target application endpoints or nodes, and they will track every interaction
between the application services under test autonomously in the background while the teams carry out their normal development
and testing work.

An endpoint is any point where a user or another system can interact with your application.
Endpoints can be internal-facing or external-facing.
External-facing: web pages (homepage, login page, product page)
Internal-facing: database connectors (e.g., MySQL connectors), external API clients (payment gateways, social media platforms, etc.), and file system interactions (components that read from or write to external storage, like cloud storage)

| Feature | Seeker | Coverity SAST | WhiteHat DAST |
|---|---|---|---|
| Verification (vulnerability) | Active verification | No verification | Limited verification |
| Testing Approach | Runtime analysis | Static code analysis | Dynamic testing |
| Continuous Monitoring | Yes | No | Limited |

ONE OF THE KEY CHALLENGES WITH MANY APPSEC TOOLS IS THAT THEIR FINDINGS AREN'T READILY ACTIONABLE BY THE DEVELOPMENT TEAMS THAT NEED TO FIX THE ISSUES.

SEEKER SOLVES THIS PROBLEM BY PROVIDING DETAILED INFORMATION THAT PINPOINTS THE PRECISE URL WHERE THE
VULNERABILITY OCCURRED ALONG WITH ANY PARAMETERS.

A DETAILED CALL GRAPH SHOWS HOW THE USER-SUPPLIED DATA WAS MANIPULATED TO CREATE THE OBSERVED ISSUE. THIS
ENABLES DEVELOPERS TO QUICKLY UNDERSTAND THE PROBLEM WITHOUT NEEDING TO BUILD REPRO CASES.

Seeker delivers one thing that development teams never have quite enough of – and that’s time.
Traditional AppSec Approach

Before the advent of tools like Seeker, application security testing was a fragmented process involving multiple stages:

1)Developer IDEs: Security checks were often limited to static analysis tools within the developer's environment.

2)Build Systems: Some security checks might be integrated into the build process, but these were usually basic.

3)Post-Deployment: This is where the bulk of security testing occurred, including dynamic application security testing (DAST) and
penetration testing.

This approach was time-consuming, resource-intensive, and often resulted in vulnerabilities being discovered late in the
development cycle.

Seeker's Approach

Seeker offers a more integrated and efficient approach:

1) Parallel Testing: It performs security tests concurrently with other development and testing activities.

2) Reduced Post-Deployment Testing: By identifying issues earlier in the development lifecycle, Seeker decreases the need for
extensive post-deployment testing.

3) Increased Test Coverage: Seeker's comprehensive approach often uncovers vulnerabilities that might be missed by traditional
methods.
Key Benefits:

1) Faster Time-to-Market: By shifting security testing left, Seeker helps accelerate the software development process.

2) Cost Reduction: Fewer resources are dedicated to post-deployment testing.

3) Improved Security Posture: Earlier detection of vulnerabilities leads to a more secure application.

Seeker optimizes the security testing process by integrating seamlessly into the development workflow and providing
continuous security feedback.

Seeker comprises two components:

the Seeker Agent and the Seeker Server.
Seeker Agent: This component is deployed alongside the application. It monitors the application's behavior, collects data on interactions, and sends it to the Seeker Server.

Seeker Server: This component can be hosted on-premises or in a cloud environment. It receives data from the agents, processes it, and provides a user interface for security teams to view results.
This is the main component of Seeker: it provides the Seeker user interface, collects vulnerabilities (removing duplicates), and stores the vulnerabilities generated by the agents. The server provides web APIs and hosts groups and projects.

Automated Testing Triggered by User Interactions

Seeker leverages user interactions as a catalyst for testing. When a user interacts with the application, the agent captures data
about the request, response, and underlying application behavior. Seeker then automatically performs security checks based on this
information.

From a testing perspective, functional and ad hoc tests are perfect triggers for Seeker instrumentation. Please explain.

Functional and ad-hoc testing are activities performed by testers or developers to test the application.

In this case, Seeker observes the application's behavior during these tests, collects data, and analyzes it for security vulnerabilities.

Seeker's role is to observe and analyze the application's behavior during these tests, not to execute the tests themselves.

Customer has no DAST solution: When customers don't have an automated DAST solution, they rely on penetration testing to find issues. This approach can be expensive, so it's unlikely to be used with each release and for every application. When teams follow a DevOps workflow, they likely release multiple times per month, which pen-testing was never intended for. Seeker complements the security team's pen-testing efforts and can also improve the overall security of the applications with each release. This allows pen-testing to focus on compliance requirements and mission-critical applications.

•IAST solutions: These have been around for a while, but legacy solutions, like web DAST, have a history of false positives. By
embedding Seeker into the development workflow, its active verification can reduce false positives and provide actionable findings.
This increased accuracy builds confidence within the development teams leading to greater Seeker adoption.

• A supplement to DAST: Teams looking to integrate some form of dynamic testing into their DevOps pipelines will find Seeker
particularly attractive. Due to its instrumentation, Seeker can be added to any test environment with a supported language and
immediately complement the current test workflows. Testing teams don’t need to change test cases, test harnesses, or drivers.
Simply enable the Seeker agent and run the existing test suites. In some cases, Seeker supplements some DAST functions by
reducing the incidence of false positives.
Seeker is an interactive application security testing (IAST) solution that enables development, DevOps, and QA teams to perform
security testing concurrently with the automated testing of web applications.

Seeker can be run against a microservices-based or cloud-based application, or a standard architecture. Its deployment can be fully automated, Docker-based, or manual.

Seeker is designed to be run in non-production environments, though it can be used in a production environment if active verification is disabled.

Seeker Demo


Seeker is instrumented into the target endpoints and server and captures all the app's runtime behavior during application testing. No additional scan is required, and no manual human intervention or configuration is needed, unlike other AST tools that scan at the code level or at the build integration level.

Seeker is meant to be run interactively. It runs concurrently, in the background, while the web server or microservices test run is ongoing.

If a customer adds Seeker to a system that automatically tests specific parts of their website for defects, Seeker will automatically and constantly run these additional security tests.

Seeker – Data Flow

Seeker – Patented Active Verification Proof

Seeker – Vulnerability Remediation

Seeker – Microservices
Seeker's patented active verification engine takes a detected high-severity finding and automatically replays it on its own to confirm the validity of the finding.

Seeker verifies how data flows through the application, ensuring that the entire system complies with security standards such as PCI DSS.

The data flow map tells you what's connected to where and what they're talking about.

In simple systems, it's relatively straightforward to visualize data flow, but in complex systems that use microservices, visualizing data flow is challenging; this is where Seeker comes in.

BY VISUALIZING THE DATAFLOW, SEEKER CAN PINPOINT VULNERABILITIES THAT MIGHT ARISE FROM
INTERACTIONS BETWEEN MICROSERVICES LIKE DATA EXPOSURE, UNAUTHORIZED ACCESS, INJECTION
ATTACKS

Seeker – Endpoint Risk

Seeker – Outbound Endpoint

Seeker Project – Compliance

Seeker Integration with SCA

Seeker – Contextual learning

For instance, you can see here that our testing currently has 19% endpoint coverage, which means we are hitting 19% of the total areas we could be hitting.

As you can see, we found two vulnerable endpoints. We also found 142 endpoints that we haven't tested, and Seeker knows about them. Over time, it will run synthetic tests: tests that Seeker can run even if you don't have a test to run against an endpoint.

Seeker customers have reported finding vulnerabilities they didn't even know to look for; they couldn't have found them with a DAST scanner or with static analysis.

Second-order cross-site scripting is very tricky to catch. But Seeker can catch this sort of thing because it's watching all of the web data flows between components.
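
A miniature illustration of why second-order (stored) XSS is hard for page-at-a-time scanners: the payload is stored by one request and only rendered, unencoded, by a later one. IAST sees both halves of the flow. The class and its in-memory "database" are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class StoredXssDemo {
    static final Map<String, String> profileDb = new HashMap<>();

    static void saveDisplayName(String user, String name) {
        profileDb.put(user, name); // first request: input stored, nothing rendered yet
    }

    static String renderProfile(String user) {
        // second request: the stored value is emitted without output encoding -> XSS
        return "<h1>Welcome " + profileDb.get(user) + "</h1>";
    }

    public static void main(String[] args) {
        saveDisplayName("mallory", "<script>steal(document.cookie)</script>");
        System.out.println(renderProfile("mallory")); // script reaches the browser
    }
}
```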

An endpoint is essentially a point of interaction where an application connects with another system or service.
Endpoints can be:
servers
databases
APIs
message queues
file systems
external services (payment gateways, authentication providers, etc.)
network sockets
hardware devices (sensors, actuators)

Outbound endpoints are external resources accessed from an application, such as APIs, databases, or
message queues.

Inside the application, Seeker can look at the actual endpoints to tell you what you tested versus what actually exists, because these endpoints are collected by the agents whether or not they are vulnerable.
The top banner displays the vulnerability metrics of outbound endpoints: the total number of endpoints, the number of vulnerable endpoints, and the public and unknown APIs.
Often developers don't realize that their applications reach out to many outbound endpoints.
From the endpoint views, you can discover where your web application is going out and connecting to, and if you don't think that's OK, you can remediate it.
Competitive Differentiators

Seeker replays potentially malicious payloads to simulate attacks, reducing false positives to less than 5%, while competitors focus on raw findings, which can create significant false positives.

Seeker supports AWS Lambda and cross-microservices analysis.

While competitors offer microservices support, they treat each microservice as an independent app, not as a component of a larger system. And no competitor currently offers AWS Lambda support.

Seeker provides sensitive data management.

Competitors claim sensitive data management, but it's typically a set of hard-coded rules that increases false positives.

Comprehensive Capture: By being embedded within the application, Seeker can observe all actions, data flows, and interactions
that occur during testing. This provides a complete picture of the application's runtime behavior
Why Seeker? A strong response:
Security teams need actionable results, not a list of findings to be verified later.

Seeker active verification improves accuracy.

Seeker fully supports cross-microservices analysis and AWS Lambda.

Offer a POC: hands-on experience dispels myths.

Seeker IAST addresses the AppSec testing gap by providing a solution that allows for automated testing of web applications, with the ability to give developers the specific location of vulnerabilities in the code.

Business Drivers

Integrates with CI/CD workflows, Jira, Slack, and email.

Automatic security testing.
Highly accurate identification of vulnerabilities.
Sensitive data tracking.
Prioritized, critical, specific remediation guidance that traces each vulnerability down to the line of code.

The Business objectives of this proof of value are:

• Automatically identify security and sensitive data issues within your running web (and web services) applications with minimal effort

• Minimize false positives, verify security vulnerabilities, and prioritize real, exploitable vulnerabilities

• Reduce remediation time

• Report compliance with security standards

• Perform interactive application security testing focused on the biggest attack surface – web applications and services
Questions to ask customers
Most SAST solutions focus on the benefits provided to security teams, which leads to discussions around false positives, challenges interpreting findings, and development impact.

Dynamic analysis, or DAST, is used to test running applications. It's very commonly used by operations teams when attempting to understand how exploitable a deployment configuration might be.

What do you mean by deployment configuration here?

Software Composition Analysis looks for unpatched vulnerabilities in open source code.
Those unpatched vulnerabilities represent CVEs (Common Vulnerabilities and Exposures) that were disclosed against an open source component but where the patch hasn't been applied. Since open source components don't have a single source or origin point, each copy potentially has a different implementation.
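
A toy Java sketch of the core SCA idea: match declared dependencies against a catalog of disclosed CVEs. The component names and the CVE identifier are invented; real SCA (e.g., Black Duck) also fingerprints transitive and copied-in code rather than reading only a manifest:

```java
import java.util.List;
import java.util.Map;

public class ScaSketch {
    record Dep(String name, String version) {}

    // Invented advisory data for illustration only.
    static final Map<String, String> KNOWN_VULNERABLE = Map.of(
            "example-json-lib:1.2.0", "CVE-XXXX-YYYY (hypothetical)");

    public static void main(String[] args) {
        List<Dep> manifest = List.of(
                new Dep("example-json-lib", "1.2.0"),
                new Dep("example-http-lib", "4.5.1"));
        for (Dep d : manifest) {
            // Flag components with a disclosed CVE where the patch (a newer
            // version) has not been applied.
            String cve = KNOWN_VULNERABLE.get(d.name() + ":" + d.version());
            if (cve != null) {
                System.out.println("UNPATCHED: " + d.name() + " " + d.version() + " -> " + cve);
            }
        }
    }
}
```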
WhiteHat DAST

WhiteHat DAST is a powerful solution for verifying the security of applications in production and identifying security issues before they can be exploited by threat actors.

Unlike other DAST solutions that can make unwanted changes to your data or applications, WhiteHat Dynamic is 100% production-safe and will not corrupt your application or any underlying data.

Other DAST solutions can overwhelm teams with a lot of low-quality results. WhiteHat Dynamic combines advanced automated analysis with expert verification by Synopsys security analysts to deliver actionable findings with near-zero false positives, saving triage time and effort and helping teams focus remediation activities on real, exploitable vulnerabilities.

WhiteHat DAST offers continuous and on-demand risk assessment: true continuous analysis that constantly scans customers' websites as they evolve, automatic detection and analysis of code changes to web applications, alerts for newly discovered vulnerabilities, the ability to retest a vulnerability without having to test from the beginning, and always-on risk assessment.

Real-time Analysis: When changes are detected, the system immediately analyzes the impact on the application's security posture .

For each finding we give the type of vulnerability, its ID, where within the application the vulnerability was found, and how long it's been open; we also provide the CVSS score.

The first rating is impact, which is how bad it might be for the business if this vulnerability were exploited. The next is likelihood, which is the perceived degree of difficulty of exploiting this vulnerability.

It's important to know that every time you scan the application, we retest the known vulnerabilities automatically; we don't create a new list of vulnerabilities on every scan. Those vulnerabilities stay open as long as the scanner sees them.

Let's say you're a developer working on a high-priority issue. You think you've got a fix and you want to push it to production. You can come in here, click the blue button, and it will pinpoint that area of the application, rerun the test algorithms, and give you immediate feedback.

Synopsys also offers a Threat Research Center, also known as the TRC.

Let's say WhiteHat shows some vulnerability findings. You tried fixing one, then clicked the blue button to recheck, and the vulnerability was not resolved, and you don't understand what to do. You can simply contact the Threat Research Center to get a solution.

There is no limit to the number of times you can contact them, and there is no limit to the number of employees you can give access to.
Competitive Key Differentiators
Software Risk Manager (SRM)

Software Risk Manager brings together policy, orchestration, correlation, and built-in SAST & SCA engines to integrate security
activities intelligently and consistently

ASPM stands for Application Security Posture Management. It's a comprehensive approach to managing and improving
the security of an organization's applications.

Synopsys Software Risk Manager is the only ASPM tool with market-leading SCA and SAST built in.
SRM enables you to set up policy-driven workflows to orchestrate AST tools like Coverity and
Black Duck, prioritize issues, and monitor compliance across your software assets.

 Centralized control: SRM allows you to manage both manual and automated security tests from a single platform.

 Tool integration: It can integrate with Synopsys tools and other third-party security tools, providing a unified view of security
activities.
 Data consolidation: SRM gathers security data from various sources, including on-premises and cloud-based systems.

 Data standardization: It standardizes the data format to ensure consistency and comparability (see the normalization sketch after this list).

 Duplicate removal: Identifies and eliminates duplicate security findings.


 Risk-based approach: SRM helps you determine the most critical security issues based on factors like severity, asset importance,
and service level agreements (SLAs).

 Comprehensive overview: SRM provides a holistic view of security risks across all managed applications.

 Compliance tracking: It helps you monitor compliance with industry regulations and standards.

 Traceability: Links security findings to specific applications and code segments.
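As a minimal sketch of the consolidation and standardization bullets above (the tool names and field layouts are invented for illustration; this is not SRM's actual schema):

    # Different tools report the same concepts under different field names;
    # mapping everything onto one schema makes findings comparable and deduplicable.
    def normalize(tool: str, raw: dict) -> dict:
        if tool == "sast_tool":   # hypothetical SAST output format
            return {"rule": raw["checker"], "location": raw["path"],
                    "severity": raw["impact"].lower()}
        if tool == "dast_tool":   # hypothetical DAST output format
            return {"rule": raw["vuln_class"], "location": raw["url"],
                    "severity": raw["risk"].lower()}
        raise ValueError(f"no normalizer for tool: {tool}")

    print(normalize("sast_tool", {"checker": "SQL Injection", "path": "db.py", "impact": "High"}))
    print(normalize("dast_tool", {"vuln_class": "SQL Injection", "url": "/login", "risk": "HIGH"}))

Once both records share one format, duplicate detection reduces to comparing fields.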

Software Risk Manager Capabilities


• Integration with 135+ security tools—more than any ASPM tool on the market today
• Centralized pre- and post-scan policy management
• Built-in testing engines for industry-leading Synopsys SAST and SCA
• Support for 20+ compliance standards
• Unify results across new and existing security tools with 135+ integrations.
• Customizable, extensible correlation rules
• Bidirectional integration with popular issue-trackers and developer tools including Jira, ServiceNow, Azure DevOps, GitLab, GitHub, Jenkins,
TeamCity, and Bamboo, as well as IDE plugins for Visual Studio, Eclipse, Visual Studio Code, and IntelliJ
• Sixteen built-in open source testing tools—the correct tool is automatically recommended via language detection

Software Risk Manager allows you to do the following with your AppSec data:
 Correlate results
 Prioritize vulnerabilities
 Track remediation
 Centralize risk visibility
Additional capabilities:
 Automatically identifies and prioritizes critical issues based on a uniform assessment of risk.
 Delivers high-priority vulnerabilities to developers directly, including links to the exact line of code via bidirectional sync with issue tracking
systems
 Quickly and accurately detects vulnerabilities in source code and open source via built-in SAST and SCA engines, with preset rules to achieve
required testing workflows with minimal setup
 Provides contextually relevant remediation guidance to developers based on language, vulnerability type, and source, and recommends
remediation actions based on historical trends
 Displays security activities at the branch level, so developers can test fixes efficiently and reduce the frequency of build breaks

 Centrally orchestrates scans for Synopsys tools (built-in or standalone) or third-party tools
 Provides a 360-degree view of risk scoring, findings, and key performance trends for all projects and sources of code (custom
built, third party, and open source)
 Maps findings to regulatory compliance standards (including NIST, PCI, HIPAA, DISA, OWASP Top 10) and provides audit reports
for critical violations
 Provides both UI and API-based workflows to create, enforce, and monitor security policies across software assets
 Enables security teams to specify risk thresholds for issue types, desired application security testing tooling, SLAs on
remediation time for fixes, and required notifications to development stakeholders (a minimal sketch of this follows below)

Identifies high-impact security activities and summarizes results across manual and automated AST and developer tools.
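A minimal sketch of the risk-threshold and SLA idea above; the weights, SLA windows, and scoring formula are illustrative assumptions, not SRM's actual algorithm:

    from datetime import date

    # Remediation SLAs per severity (days), as a security team might configure them.
    SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}
    BASE_SCORE = {"critical": 10, "high": 7, "medium": 4}

    def priority(severity: str, asset_weight: float, opened: date, today: date) -> float:
        """Higher score = fix first; findings past their SLA float upward over time."""
        overdue = max(0, (today - opened).days - SLA_DAYS[severity])
        return BASE_SCORE[severity] * asset_weight + 0.1 * overdue

    # A 'high' finding on a business-critical asset, opened 60 days ago (30 days past SLA):
    print(priority("high", asset_weight=1.5, opened=date(2024, 1, 1), today=date(2024, 3, 1)))  # 13.5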

Demo

Projects- contain all the projects you have in your Software Risk Manager instance. A project is a
collection of scans over time for a target software, so usually each product (target software) is an
individual project. Projects usually contain multiple scans done by different tools.

Findings- contain all the discovered flaws and vulnerabilities from all your projects, collated in one place. If
you have multiple projects, the Findings section is an easy way to look at the big picture and see which
projects have the most pressing issues. You can also create reports from the Findings section by selecting
specific vulnerabilities using filters.

Integrations- show you a tile for every supported tool in Software Risk Manager.

Hosts- are available to Software Risk Manager Enterprise users with the InfraSec add-on. When Software Risk Manager ingests Network Security
results, the location of those results is typically expressed in terms of a "host", with the level of detail varying from tool to tool. The Hosts page is
Software Risk Manager's location for interacting with host data directly, outside the context of Findings or Projects.

When Software Risk Manager ingests data from various network security tools, it often categorizes the information based on the concept of a
"host". A host can be a server, a network device, or any other entity within your network infrastructure.

The "Hosts" section provides a centralized location to view and manage this host-related data independently from the findings or projects related
to application security.

API Keys- are for integrating with specific tools or plugins

Server Logs- provides a helpful UI for certain events and errors that administrators might be interested in, for auditing purposes.
Software Risk Manager UI Demo:
Click on Projects tab. This dashboard gives a list of all your projects in SRM. From here you can add,
analyze and filter the projects within your list.
The Findings tab shows all the findings from all of your projects. You can filter the results as you wish and
drill deeper into any individual finding. You are also able to create a system-wide report on all of your
findings.
In Policies tab, Software Risk Manager allows you to track compliance to specified requirements. Once
defined, policies can be applied to projects, and policy violations can be monitored.
In the Integrations tab, you see a tile for every supported tool in Software Risk Manager.
In Hosts tab, Hosts are available for Enterprise users with InfraSec add-on. It is SRM's location for
interacting with host data directly, outside the context of Findings or Projects.
In the Settings tab:
The Users page allows you to manage users. You can create new local users, set existing users as admins,
disable, or delete them. You can also see each user's latest login date.
The Project Metadata Fields page allows you to configure metadata for your projects. Create different
types of fields, text or tags for your projects.
The API Keys section shows all the generated API keys. For example, API keys are used for integrating
with a specific tool or plugin. If you haven't generated any API keys, your view would be empty like in this
example.
User Groups is a feature that allows permissions to be assigned to users in bulk. Groups are like teams,
for example developers and managers. You can create new groups and add users to any group in this
view.
Manual Entry Configuration allows administrators to define custom values which can be entered into a
Manual Results Form.
The Machine Learning Control Panel is available with the proper add-on. Once enabled, you can configure
machine learning capabilities here. ML service assistance becomes available once you have manually triaged at
least 100 issues.
Tags can be assigned to projects to help manage people, organizations or severity.
Server Logs provide a helpful UI for certain events and errors that administrators might be interested in,
for auditing purposes.
The Licenses tab shows you your license information including users, projects, and
expiration; additionally this page also allows you to update your license.
Clicking the question mark icon on the top toolbar opens a menu for different SRM guides. You have the
option to select between HTML and PDF version for most of the available guides. Click the HTML version
of the User Guide.
When you click the download icon on the top bar, a list opens to give you links for downloading plugins
for SRM. In addition to plugins, also XML Schemas and Examples, and Java Tracer Agent are available for
download.
The Gear icon gives you the option to view the Visual Log for SRM.
Click Visual Log.
Visual Log shows you log events from your SRM instance. You can use the log filter on the left side to
select which types of items and events are shown in the log.
When you click the admin link on the top-right corner, a menu opens where you can select either your
settings, or to log out.
Click My Settings.
We start with the Notifications tab, used for setting up email notifications that can be triggered by certain
events
The Personal Access Tokens tab manages tokens for SRM's REST API: if you generate a token, you can use the
REST API programmatically. When you create a new token, you need to give it a name and select the roles the token has
access to.
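As a minimal sketch of using such a token (the base URL and the endpoint path below are placeholders assumed for illustration; consult the SRM REST API guide for the real routes and authentication header format):

    import requests

    SRM_BASE = "https://srm.example.com"   # placeholder for your SRM instance
    TOKEN = "<personal-access-token>"      # generated under My Settings > Personal Access Tokens

    # Hypothetical endpoint for listing projects; check the API docs for the actual path.
    resp = requests.get(f"{SRM_BASE}/api/projects",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    print(resp.json())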
BlackDuck SCA

Black Duck SCA is the market-leading software composition analysis tool used to
assess, manage, and mitigate the risk involved with the use of open-source software
across the entire SDLC. This risk falls into three main categories:
1) security
2) license compliance
3) code quality

SCA (Software Composition Analysis) helps companies ensure they are complying with the licenses of the open-source components they use. SCA
tools identify open-source components within your application and analyze their associated licenses. This helps companies avoid any potential
legal issues or license violations.

Black Duck raises a flag whenever it finds an open source component that might be risky,
for example, an outdated version of a component.


By integrating into several tools across the entire software development lifecycle (SDLC), Black Duck is able to analyze application
code and files to identify open source components, and any related vulnerabilities and licenses.

Due to its configurability, Black Duck assists teams in instituting custom open source governance, tailored to their unique risk
tolerance, without getting in the way of innovation.


Throughout the past few years, the use of open-source software has exploded in popularity. This comes as no surprise given the
countless benefits of employing open source. However, without effectively managing its use, open source can expose an
organization to some unfamiliar risks.

With Black Duck, you can leverage open source and third-party code in your applications and containers while easily managing the security
risks that often come along with it. Since the level of tolerable risk is unique to each organization, Black Duck is easy to configure
and customize to your company-specific security and license policies.

Black Duck can be configured to automatically enforce policies like these, and any violation will trigger the proper alert. Developers
can get ahead of these potential violations as early as the development stage, where the Code Sight IDE plugin will notify developers
of vulnerabilities and can automatically remediate them.

Black Duck also integrates with several other tools in order to find and scan your code base. Next, you're provided with a bill of
materials, which gives you a complete and detailed inventory of all open source identified in your code base. For every component
identified, Black Duck surfaces all known security or compliance issues and flags them for review. Because you were able to
preconfigure Black Duck, you can filter the list to show only the items that violate your company's policies. When security risks are
identified, you're able to dive deeper into the component to view the Black Duck Security Advisory. This gives you all the
information needed to assess your risk and make the fix, including descriptions, severity scoring, exploit type, remediation guidance,
and any related CVEs. Black Duck's vulnerability impact analysis indicates whether the vulnerable code is actually reachable from
your application.

Black Duck continuously monitors your SBOM and alerts you both inside and outside of the tool if any new vulnerabilities are
detected

 Black Duck doesn't directly examine the SBOM for vulnerabilities.

 Instead, it uses the information in the SBOM to identify the components used in the software.

 It then checks these components against its vulnerability database to determine if they have known vulnerabilities.

The SBOM acts as a guide or map to the components within the software, but it doesn't inherently contain vulnerability
information.

SBOM (Software Bill of Materials): a document that lists all the components used in the
application.

Black Duck is specifically designed to assist organizations in creating and managing Software Bills of Materials (SBOMs). It's a core
functionality of the tool.
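As a minimal sketch of what an SBOM entry can look like, written here as a Python dict following the open CycloneDX format (the component and version are chosen purely for illustration):

    minimal_sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {
                "type": "library",
                "group": "org.apache.logging.log4j",
                "name": "log4j-core",
                "version": "2.14.1",
                # The purl (package URL) gives SCA tools an unambiguous component
                # identity to match against their vulnerability databases.
                "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
            }
        ],
    }

Note that nothing in the SBOM itself flags a vulnerability; an SCA tool resolves each component identity against its own database and reports the known CVEs for that version.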

BLACK DUCK SCA addresses the vulnerabilities open source poses

GitHub Octoverse 2023 provided predictions for open source software:

Generative AI, cloud-native development, and open source collaboration are emerging as major trends shaping the software
development industry.

2023 witnessed a record number of first-time contributors to open-source projects, highlighting the expanding reach and inclusivity
of the open-source ecosystem.

Commercially backed open-source projects continue to attract a significant portion of both first-time and overall contributions,
suggesting the influence of commercial support on open-source growth.

GitHub Octoverse 2023 also provides numbers:

 There was a 60% increase in automated pull requests for packages, i.e., open-source packages used in projects, even
though these packages are vulnerable.

 Fewer than 30% of Fortune 100 companies have an OSPO, i.e., a team that manages open source components and
checks for risks, compliance, and vulnerabilities.
What does a commercially backed open source project mean?

Ans: The main code branch is made open source;
people copy this code and make modifications,
then request the main branch maintainers to merge the changes they made.
If the maintainers find the modifications to be good,
they make the changes in the main branch and may also pay or provide recognition or other benefits to the one who made the
modifications.

 DevOps Pressure: Organizations are increasingly adopting DevOps practices to accelerate software delivery and improve agility.
This creates a demand for faster, more efficient processes.

 Open Source as a Solution: Open-source software offers several advantages to address DevOps challenges:

Faster acquisition: No need for lengthy procurement processes like purchase orders or contracts.

Quicker deployments: Open-source software can often be deployed rapidly due to pre-built packages.

Rapid evolution and innovation: Community-driven development leads to faster feature updates and improvements.

Higher quality: Extensive community testing often results in higher quality software.

Customization: Access to source code allows for tailored modifications.

Time-to-market: Open-source software can be up and running quickly, accelerating time-to-market.

 Risk Management: While open source offers numerous benefits, it also introduces risks like security vulnerabilities and licensing
compliance issues. Effective risk management is crucial to fully leverage the advantages of open source without compromising the
overall system.

While open source can significantly contribute to DevOps goals, organizations must implement robust strategies to address the
associated risks, which is where Black Duck comes in.

Issues with open source code:


A significant portion of codebases examined contained open-source components that hadn't been updated in over two years.

Projects without active maintenance are more susceptible to vulnerabilities and exploits.

Many organizations struggle to keep open-source components up-to-date due to resource constraints, potential compatibility
issues, and the fear of unintended consequences.

SCM (Supply Chain Management)


Supply chain management is the identification of which open-source component to use,
assessing it for risks, ensuring adherence to open source licenses, and updating and maintaining the open source components.

 Open-source licenses impose specific obligations on users.

 Non-compliance with license terms can lead to legal disputes.


For example:
Cisco, a technology giant, faced a legal challenge due to its improper use of open-source software (OSS).
Cisco acquired Linksys, and Linksys had integrated GPL-licensed code into its routers. The GPL, a type of open-source license, mandates
that any product incorporating GPL-licensed code must make the source code public.

Linksys failed to comply with this requirement by not making the router's source code publicly available. This non-adherence to
the GPL's terms led to legal action against Cisco.

The case highlights the critical importance of understanding and adhering to open-source licensing terms. Failure to do so can
result in significant legal and financial repercussions.

According to OSSRA 2024 findings:


 Open Source Prevalence: 96% of analyzed codebases contained open-source components, highlighting their
widespread use.
 Vulnerability Increase: A significant 54% increase in codebases with high-risk vulnerabilities was observed between
2022 and 2023.
 Outdated Components: 49% of codebases contained components with no development activity in the past 24
months, and 31% had components at least 12 months behind on maintenance.
 Audit Scope: Black Duck audits primarily focus on license compliance, with vulnerability and operational risk
assessments being optional.

Total Codebases Scanned: 1,067 codebases were analyzed in 2023.


Open Source Prevalence
 High Usage: 96% of codebases contained open-source components.
 Origin: 77% of the total code in the analyzed codebases originated from open source.
 Licensing Issues: 53% of codebases had license conflicts.
 Unlicensed Components: 31% of codebases contained open-source components without a license or a custom
license.

Open Source Vulnerabilities


 Vulnerability Rate: 84% of codebases assessed for risk contained vulnerabilities.
 High-Risk Vulnerabilities: 74% of codebases assessed for risk contained high-risk vulnerabilities.
 Outdated Components: 49% of codebases assessed for risk had components with no development activity in the
past 24 months.
 Component Versioning: 91% of codebases assessed for risk contained components that were 10 versions or more
behind the most current version.
 Age of Code: 14% of codebases assessed for risk had vulnerabilities older than 10 years.

Synopsys Cybersecurity Research Center (CyRC)


Black Duck Security Advisories (BDSA) help address compliance requirements:

Black Duck provides a vulnerability feed specifically focused on open-source components.

This feed is enriched by the research conducted by the Synopsys Cybersecurity Research Center (CyRC).
The focus is on delivering timely information (same-day notification) about critical vulnerabilities.
The open source vulnerability feed is curated to ensure accuracy and relevance to customers.

BDSA is a value-added service provided by Black Duck to enhance vulnerability information. It addresses shortcomings in existing
vulnerability databases like the NVD.

Black Duck Security Advisories (BDSA) provide more comprehensive and timely CVE information than the National
Vulnerability Database (NVD) by incorporating additional research and analysis from the Synopsys Cybersecurity
Research Center (CyRC).

Now let's look at how manual OSS (open-source software) governance creates bottlenecks.

Binary Repository: A centralized storage for pre-approved OSS components.


Process:
Step 1) Developers initially search the binary repository for required components.
Step 2) If the component is not found, it's submitted for review.
Step 3) The legal and security teams assess the component based on licensing and security criteria.
Step 4) If approved, the component is added to the binary repository. If rejected, the developer must find an alternative.
Bottlenecks:
Reliance on manual checks and approvals creates inefficiencies and delays.
Manual processes can lead to human errors and missed vulnerabilities.
Delays in the approval process can hinder development velocity.

Implementing automation for OSS governance:

Step 1) Security and legal teams establish clear guidelines for acceptable open-source components.
Step 2) When a developer introduces a component, it undergoes an automated evaluation against the defined policies.
Step 3) Developers receive immediate feedback on whether the component complies with the policies (a minimal sketch of
steps 1-3 follows this list).
Step 4) The system can implement complex decision-making processes based on multiple criteria.
Step 5) Approved components can be cached in a binary repository for future use.
Step 6) Human intervention can still be involved for complex cases or exceptions.
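A minimal sketch of the automated evaluation in steps 1-3 above; the policy thresholds and component fields are illustrative assumptions, not Black Duck's actual policy engine:

    from dataclasses import dataclass

    ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}   # set by legal (step 1)
    MAX_CVSS = 7.0                                             # set by security (step 1)

    @dataclass
    class Component:
        name: str
        version: str
        license: str
        highest_cvss: float   # highest CVSS score among known CVEs for this version

    def evaluate(component: Component) -> list:
        """Return the list of policy violations; an empty list means approved (step 2)."""
        violations = []
        if component.license not in ALLOWED_LICENSES:
            violations.append(f"license {component.license} is not pre-approved")
        if component.highest_cvss >= MAX_CVSS:
            violations.append(f"known vulnerability with CVSS {component.highest_cvss}")
        return violations

    # Immediate feedback to the developer (step 3):
    issues = evaluate(Component("examplelib", "1.2.3", "GPL-3.0", 9.8))
    print("approved" if not issues else f"blocked: {issues}")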

Benefits:
Automating the review process accelerates development cycles.
By quickly identifying and blocking risky components, it reduces security vulnerabilities.

With implementation come integrations that support how DevOps teams build their code:
1) Regardless of the acquisition method (package manager, direct download, binary repository), the open-source code
itself remains the same. It should be treated consistently within the governance framework for a complete BOM; this
emphasizes the importance of uniform handling for all open-source components.
This means that:
Every open-source component, regardless of its origin, should undergo the same governance processes.
A comprehensive BOM requires the inclusion of all open-source components, irrespective of their acquisition method.
All components should be subjected to the same security, license, and quality checks.
2) Similarly, developer preferences for building code shouldn't influence how open-source components are managed.
 Black Duck is designed to seamlessly integrate with various CI/CD processes and build paradigms, including cloud-
based services.
 This ensures Black Duck can effectively generate findings related to open-source usage throughout the development
lifecycle.

In effect, integrations that span the entire SDLC enable teams to manage open source without having to leave the tools DevOps
teams use to build their code.

CyRC is a team within Synopsys, and BDSA is a service built on its research:


 CyRC (Synopsys Cybersecurity Research Center): A team of researchers focused on identifying and analyzing
software vulnerabilities.
 BDSA (Black Duck Security Advisories): A service or product that leverages CYRC's research to provide timely
and actionable vulnerability information to customers

Question: Coverity SAST checks the whole source code to find vulnerabilities, right? Then can't we simply integrate the
open source code with the proprietary code and make SAST do the whole work, instead of having SAST for proprietary
code and Black Duck for open-source code?
Ans: Coverity can essentially do this part; the problem is that SAST becomes inefficient when it comes to the
dependencies that open-source code may have. SAST can't jump into the dependencies to check them for vulnerabilities,
and one dependency may depend on another, forming a dependency chain,
which is handled better by SCA.
Also, SAST tools are not designed to handle license compliance issues associated with open-source components.
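As a minimal sketch of why this matters (the dependency graph and the vulnerable-package set below are illustrative assumptions), SCA effectively computes the transitive closure of your dependencies and intersects it with known-vulnerable components:

    # Direct and transitive dependencies of a hypothetical application.
    DEPENDENCIES = {
        "your-app": ["requests"],
        "requests": ["urllib3", "idna"],
        "urllib3": [],
        "idna": [],
    }
    VULNERABLE = {"urllib3"}   # e.g., a CVE disclosed against this component

    def reachable(root: str) -> set:
        """Collect the full transitive closure of dependencies."""
        seen, stack = set(), [root]
        while stack:
            for dep in DEPENDENCIES.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    # SAST only sees your-app's own source; SCA flags the transitive hit:
    print(reachable("your-app") & VULNERABLE)   # {'urllib3'}

Your own source never imports urllib3 directly, yet a CVE against it still affects the application; that is the dependency chain SAST cannot follow.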

Question: Okay, so if SAST is not good at analyzing vulnerabilities in dependencies, then what about the external
libraries the software may use in proprietary code? Who will check them for vulnerabilities, SAST or SCA?
Ans: While SAST can provide some insights into how external libraries are used within the codebase, it's not
designed to comprehensively assess the security of those libraries.
To effectively manage risks associated with external libraries, organizations need to complement SAST with tools like
Software Composition Analysis (SCA) that specialize in analyzing open-source components and their vulnerabilities.

Question: Can’t SAST check open-source code integrated with proprietary code for compliance?
Ans:
 SAST can perform some basic compliance checks on open-source code.
 SCA is the preferred tool for in-depth open-source compliance and risk management.
 A combination of SAST and SCA provides the most robust approach to managing open-source software.

Competitive Key Differentiators


 Synopsys does dependency, file-system, binary, and snippet scanning; no other SCA competitor offers all four scanning
methods.

Broader license coverage
File level license and copyright data
Simple to advanced workflow management

Competition: Other SCA vendors have focused on either license or security. Black Duck is the most comprehensive,
with our market-leading license compliance features.
 Synopsys has very strong policy management and SDLC integrations
In the Forrester Wave for SCA, Synopsys was the only vendor to receive perfect scores in both policy and
integrations
 Synopsys: Leverages BDSA data for a complete picture of risk, including exploit, solution, severity, CWE, and
reachability info for Java. Prioritize remediation based on all of these factors.

Competition: Few competitors offer this functionality, but it is narrow in scope (language coverage) and at times very
manual.

How to Respond: Overall prioritization of vulnerabilities is key to OSS management. Knowing which components are thought
to be in use is one data point in that prioritization, but it shouldn't be used as the ONLY data point.

 Synopsys: Leverages Code Sight, which provides automated remediation and manual remediation information right in
the IDE, so developers can find and fix vulnerabilities as they code and save time later in the SDLC.
Competition: Some competitors are beginning to offer this, but it is narrow in scope and coverage and only works for
certain applications.

How to Respond: This can have downstream impacts and we want you to have control over how you remediate based on
your requirements. We provide you with the enhanced information you need to make those decisions.

DEMO
BlackDuck Questions to ask customers
CodeDx
Code Dx is an Application Security Orchestration and Correlation solution designed to ingest
and correlate data from multiple AST tools, analyze the findings and assess criticality,
prioritize remediation efforts, and orchestrate developer workflows.

ingest – collects
remediation – vulnerability fix
orchestrate workflow – coordinate the flow of information and actions between tools and teams, prioritize and assign tasks to developers based on
severity, etc.

Code Dx does the following:


- When a project is added to Code Dx, it performs a quick analysis to automatically identify the appropriate application security tests needed. This
includes tools bundled with Code Dx as well as separately licensed commercial tools. Users can remove or add other tests
and change the rulesets used.

- Results from all tools are aggregated and normalized to provide consistent scoring and descriptions for all issues
- Using multiple tools or running multiple scans can result in a single vulnerability being found by multiple tools and reported as multiple issues.
Code Dx examines the results from similar tools to eliminate duplicate vulnerabilities found by more than one tool (a minimal sketch of this idea follows this list).
Code Dx correlation and deduplication eliminated over 1,000 issues, reducing the triage workload by 20%.
- Hybrid Correlation Engine: Code Dx combines data from both static analysis tools (SAST) and dynamic analysis tools (DAST) to provide a
more comprehensive view of vulnerabilities.
- Records when each AppSec test was run
- Records all the issues that were found and which are highest priority
- Stores remediation status (which issues were fixed and which weren't)
- Centralized platform offering a unified view of all security risks associated with your application.
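As a minimal sketch of the deduplication idea (this is illustrative, not Code Dx's actual correlation algorithm), findings that agree on CWE, file, and line can be treated as one underlying vulnerability:

    findings = [
        {"tool": "toolA", "cwe": 89, "file": "login.py", "line": 42, "severity": "high"},
        {"tool": "toolB", "cwe": 89, "file": "login.py", "line": 42, "severity": "critical"},
        {"tool": "toolA", "cwe": 79, "file": "view.py", "line": 10, "severity": "medium"},
    ]

    deduped = {}
    for f in findings:
        key = (f["cwe"], f["file"], f["line"])
        # Keep one record per vulnerability, remembering every tool that saw it.
        entry = deduped.setdefault(key, {**f, "tools": set()})
        entry["tools"].add(f["tool"])

    for key, entry in deduped.items():
        print(key, sorted(entry["tools"]))

Three raw findings collapse to two unique vulnerabilities, and the one reported by two tools is a higher-confidence result for prioritization.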

Questions to ask customers for CodeDX

Identified issues can be exported to JIRA, creating a new ticket for tracking and resolution.

Coverity provides a link to the JIRA ticket for easy access.

The integration pushes information from Coverity to JIRA but doesn't pull updates back
from JIRA.
Code Dx: For customers needing a two-way connection with JIRA, Synopsys offers
Code Dx, a separate product that allows viewing and manipulating JIRA issue
statuses directly within Coverity.
CodeSight

While a strong ecosystem of integrations is important for any AppSec tool, the reality is that its value is only as good as the workflows
you can build around it. From a DevOps perspective, one way to improve application quality is to shift security and quality
information "left" toward the developer. That means getting AppSec findings into the developer's IDE.

The Synopsys Code Sight plugin helps developers find quality and security issues in their source code. It helps them fix these issues and
increases confidence that they are checking in clean code. Code Sight launches one or more Synopsys software analysis engines to
scan your source code and detect issues. Code Sight runs within many types of IDE applications. It displays the information it finds in
its own views, which appear within the IDE interface.
BSIMM

BSIMM (Building Security In Maturity Model) is a framework that helps organizations assess and improve their software security
practices. It is based on the principle that "you can't improve what you don't measure."
BSIMM takes a data-driven approach to identify how well application security programs are working and provide feedback on
where improvements can be made

BSIMM helps organizations assess their security maturity and identify improvement areas.

By measuring their performance against industry standards, organizations can identify areas for improvement and implement
targeted actions.

BSIMM gathers data from a wide range of organizations across various industries. This diverse dataset allows for meaningful
comparisons and the identification of industry-specific trends.

These are the foundational elements of BSIMM:

1. Establish a method for creating software security programs holistically.

2. Create a descriptive model of what firms were actually doing.

3. Gather data on a regular basis to keep the data current.

4. Kickstart business transformation for software security initiatives.

WHY BSIMM

 BSIMM's Goal: To assist organizations in planning, executing, maturing, and measuring their SSIs.

 BSIMM focuses on identifying the highest-level activity observed within each security practice, providing a quick overview of an
initiative's maturity.

 BSIMM addresses the gap by providing real-world data instead of relying on hypothetical expert opinions.

 BSIMM offers a standardized approach to measuring and describing 122 distinct security activities.

 SSI: The specific actions and strategies to secure software.

 Framework: A structured approach or guideline for implementing SSIs.

SSI (Software Security Initiatives) is primarily a practice. It refers to the specific actions and strategies an organization undertakes to
secure its software.

While not a framework itself, it's often integrated within a broader software development or security framework. Think of it as the
"what" you do, while frameworks like BSIMM or OWASP provide the "how" and "when" to do it.

BSIMM and SSI are documents/guidelines and not tools


 BSIMM is a documented model that outlines best practices and benchmarks for software security. It's a reference guide.
 SSI is a documented strategy or plan that an organization creates to address its specific software security needs. It's a blueprint
for action.
