
DUY TAN UNIVERSITY


INTERNATIONAL SCHOOL

TEAMS PROJECT
CMU CS 462-BIS

SOFTWARE ASSURANCE AND ASSESSMENT

SUBJECT : Software Measurements & Analysis


TEACHER : ANAND NAYYAR
TEAM :
 Nguyen Van Tran Anh : 27211137987
 Le Hoai Anh : 27211248727


Table of Contents

Acknowledgments 5

1 Introduction to Software Assurance 6


1.1 Examples of Product and Process Confidence 6
2 Structuring Software Assurance Practices for Measurement 10
2.1 Defining the Software Assurance Target 10
2.2 The SAF 10
2.3 Justifying Sufficient Software Assurance Using Measurement 11
2.4 An Implementation Process for Each Metric 13
2.4.1 Collect Data 15
2.4.2 Analyze and Identify Issues and Gaps 16
2.4.3 Evaluate and Determine the Need for Response 16
2.4.4 Implement a Response and Determine Needed Monitoring 17

3 Selecting Measurement Data for Software Assurance Practices 18


3.1 Example Software Assurance Target and Relevant SAF Practices 18
3.2 Example for Selecting Evidence for Software Assurance Practices 20
3.3 Example for Finding Metrics Data in Available Documentation 21
3.4 Sustainment Example 22

4 Challenges for Addressing Lifecycle Software Assurance 24


4.1 Acquisitions Can Initiate Software Assurance with Independent Verification and
Validation 24
4.2 Monitoring the Development of a Custom Software Acquisition 26
4.3 Monitoring Integration of Third-Party Software 29
4.4 System-of-Systems Assurance 31

5 Software Process Maturity Assessment 33

5.1 Software Process Assessment Cycle 33

5.2 SCAMPI 35

5.3 Assessment Categories 36

6 Objectives of Software Assessment 37

6.1. Types of Software Assessment 37

6.1.1. Static Analysis 37

6.1.2 Dynamic Analysis 37

6.1.3 Manual Testing 38

6.1.4 Automated Testing 38

6.2 Key Metrics in Software Assessment 39

6.3 Software Assessment Tools 39

6.4 Software Assessment Process 39

6.5 Best Practices in Software Assessment 39

6.6 Challenges in Software Assessment 40

6.7 Future Trends in Software Assessment 40

7 Steps in Software Assessment 41


8 Software Assessment Framework 43

9 Case Studies 46

10 Conclusions 48

11 References/Bibliography 50


List of Figures

Figure 1: Failure Distribution Curves 8

Figure 2: Lifecycle Measures 9

Figure 3: Software Assurance Framework 11

Figure 4: Metrics Development Process 14

Figure 5: SQL-Injection Assurance Case 28


Figure 6: Supply Chain Monitoring 30

List of Tables
Table 1: Engineering Questions 12

Table 2: Practices/Outputs for Evidence Supporting Sustainment Example 23

Table 3: Requirements (SAF Engineering Practice Area 3.2) 27


Table 4: Evidence of Supplier Capabilities and Product Security 32


Acknowledgments
We thank Dr. Anand Nayyar for his input as a reviewer.


1 Introduction to Software Assurance

There is always uncertainty about a software system’s behavior. Rather than


performing exactly the same steps repeatedly, most software components function
in a highly complex networked and interconnected system of systems that
changes constantly. Measuring the design and implementation yields confidence
that the delivered system will behave as specified. Determining that level of
confidence is the objective of software assurance, which is defined by the
Committee on National Security Systems as

Implementing software with a level of confidence that the software functions


as intended and is free of vulnerabilities, either intentionally or
unintentionally designed or inserted as part of the software, throughout the
lifecycle.

Measuring the software assurance of a product as it is developed and delivered to function in a specific system context involves assembling carefully chosen metrics that demonstrate a range of behaviors and confirm confidence that the product functions as intended and is free of vulnerabilities. Measuring software assurance is challenging: it is a complex problem with no readily available solutions.

The first challenge is evaluating whether a product’s assembled requirements


define the appropriate behavior. The second challenge is to confirm that the
completed product, as built, fully satisfies the specifications for use under realistic
conditions.

Determining assurance for the second challenge is an incremental process applied


across the lifecycle. There are many lifecycle approaches, but, in a broad sense,
some form of requirements, design, construction, and test is performed to define
what is wanted, enable its construction, and confirm its completion. Many metrics
are used to evaluate parts of these activities in isolation, but establishing
confidence for software assurance requires considering the fully integrated
solution to establish overall sufficiency.

1.1 Examples of Product and Process Confidence

As an example of the complexity in establishing confidence, consider one aspect


of product performance. When used, the product must meet some level of
performance (e.g., sub-second response time). Assurance includes tests to confirm
that the final product meets the requirements. Best practices start with building a
computational model during design and using simulations to demonstrate
assurance using engineering analysis. Assurance continues into the
implementation. For example, unit testing provides assurance that a component
behaves as specified by the model. If necessary, corrective action can be taken

during the design and implementation phases.

An additional complexity for software assurance is recognizing that software is never defect free, and up to 5% of the unaddressed defects are vulnerabilities
[Ellison 2014]. According to Jones and Bonsignour, the average defect level in the
U.S. is 0.75 defects per function point or 6,000 per million lines of code (MLOC)
for a high-level language [Jones 2011]. Very good levels would be 600 to 1,000
defects per MLOC, and exceptional levels would be below 600 defects per
MLOC.
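
To see how these published figures relate, the short sketch below converts a defect density per function point into defects per MLOC. It assumes roughly 125 lines of code per function point, a commonly cited ratio for high-level languages that is not taken from this report, so the numbers are only illustrative.

    # Rough conversion between defect densities per function point and per MLOC.
    # Assumption (not from this report): about 125 lines of code per function point,
    # a commonly cited ratio for high-level languages.
    LOC_PER_FUNCTION_POINT = 125

    def defects_per_mloc(defects_per_fp: float, loc_per_fp: int = LOC_PER_FUNCTION_POINT) -> float:
        """Convert a defect density per function point to defects per million lines of code."""
        return defects_per_fp / loc_per_fp * 1_000_000

    print(defects_per_mloc(0.75))    # 6000.0 -- the cited U.S. average
    print(defects_per_mloc(0.075))   # 600.0  -- the boundary of the "exceptional" range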

Thus, software cannot always function perfectly as intended. How can confidence
be established? One option is to use measures that establish reasonable
confidence that security is sufficient for the operational context. Assurance
measures are not absolutes, but information can be collected that indicates
whether key aspects of security have been sufficiently addressed throughout the
lifecycle to establish confidence that assurance is sufficient for operational needs.

At the start of development, much about the operational context remains undefined; there is only general knowledge of the operational and security risks that might arise and of the security behavior desired when the system is deployed. This vision provides only a limited basis for establishing confidence in the behavior of the delivered system.

Over the development lifecycle, as the details of the software and operational
context incrementally take shape, it is possible, with well-selected measurements,
to incrementally increase confidence and eventually confirm that the delivered
system will achieve the level of software assurance desired. When acquiring a
product, if it is not possible to conduct measurement directly, the vendor should
be contacted to provide data that shows product and process confidence.
Independent verification and validation should also be performed to confirm the
vendor's information.

A comparison of software and hardware reliability provides some insight into challenges for managing software assurance. An evaluation of hardware reliability uses statistical measures, such as the mean time between failures (MTBF), since hardware failures are often associated with wear and other errors that are frequently eliminated over time. A low number of hardware failures increases our confidence in a device's reliability.

The differences between software and hardware reliability are reflected in their associated failure-distribution curves, shown in Figure 1. A bathtub curve, shown in the left graph, describes the typical failure distribution for hardware. The bathtub curve consists of three parts: a decreasing failure rate (of early failures), a constant failure rate (of random failures), and an increasing failure rate (of wear-out failures), as wear increases the risk of failure. Software defects exist when a system is deployed. Software's failure distribution curve, shown in the right graph of Figure 1, reflects changes in operational conditions that exercise those defects as well as new faults

introduced by upgrades. The reduction of errors between updates can lead system engineers to make reliability predictions based on the false assumption that software is perfectible over time. In practice, complex software systems never become error free.

Figure 1: Failure Distribution Curves


As noted in the 2005 Department of Defense Guide for Achieving Reliability, Availability, and Maintainability (RAM), a lack of observed software defects is not necessarily a predictor of improved operational software reliability. Defects are inserted into the software before it is deployed, and operational failures result from environmental conditions that were not considered during testing. Too little reliability engineering was a key reason for the reliability failures described in the DoD RAM guide. This lack of reliability engineering was exhibited by the failure to design in reliability early in the development process and by the reliance on predictions (i.e., using reliability defect models) instead of conducting engineering design analysis.

The same problem applies to software assurance. Software assurance must be


engineered into the design of a software-intensive system. Designing in software
assurance requires going beyond identifying defects and security vulnerabilities
towards the end of the lifecycle (reacting) and extending to evaluating how system
requirements and the engineering decisions made during design contribute to
vulnerabilities. Many known attacks are the result of poor acquisition and
development practices.

This approach to software assurance depends on establishing measures for


managing software faults across the full acquisition lifecycle. It also requires
increased attention to earlier lifecycle steps, which anticipate results and consider
the verification side as shown in Figure 2. Many of these steps can be performed
iteratively with opportunities in each cycle to identify assurance limitations and
confirm results.


Figure 2: Lifecycle Measures


2 Structuring Software Assurance Practices for Measurement

2.1 Defining the Software Assurance Target

Software assurance needs context for its practices to be measured usefully. Software assurance targets must be defined for the system to be fielded. It is then possible to identify ways that engineering and acquisition ensure, through policy, practices, verification, and validation, that those targets are addressed.

For example, if the system being delivered is a plane, a key mission concern is
that the plane can continue to fly and perform its mission even if it’s experiencing
problems. Therefore, our stated software assurance goal for this mission might be
“mission-critical and flight-critical applications executing on the plane or used to
interact with the plane from ground stations will have low cybersecurity risk.”

To establish activities that support meeting this software assurance goal, software assurance practices should be integrated into the lifecycle. The Software Assurance Framework (SAF), a baseline of good software assurance practices for system and software engineers assembled by the SEI, can be used to confirm the sufficiency of software assurance and identify gaps in current lifecycle practices [Alberts 2017]. A range of evidence can be collected from these practices across a lifecycle to establish confidence that software assurance is addressed.

Evaluation of this evidence should be integrated into the many monitoring and
control steps already in a lifecycle, such as engineering design reviews,
architecture evaluations, component acquisition reviews, code inspections, code
analyses and testing, flight simulations, milestone reviews, and certification and
accreditation. Through the analysis of the selected practices, evidence and metrics
can be generated to quantify levels of assurance, which, in turn, can be used to
evaluate the sufficiency of a system’s software assurance practices. A well-
defined evidence-collection process can be automated as part of a development
pipeline to establish a consistent, repeatable process.
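
As a minimal illustration of such automation, the hypothetical Python step below bundles a few counts that a pipeline might already produce into a single dated evidence record; the field names and values are invented and are not prescribed by the SAF.

    # Hypothetical pipeline step: bundle a few counts the pipeline already produces
    # into a single dated evidence record. Field names and values are invented.
    import json
    from datetime import date

    def collect_evidence(static_findings: int, tests_total: int, tests_passed: int,
                         security_reqs_total: int, security_reqs_tested: int) -> dict:
        """Assemble raw counts into one evidence record for later analysis."""
        return {
            "date": date.today().isoformat(),
            "static_analysis_findings": static_findings,
            "test_pass_rate": tests_passed / tests_total if tests_total else None,
            "security_requirements_tested_pct":
                100.0 * security_reqs_tested / security_reqs_total if security_reqs_total else None,
        }

    record = collect_evidence(static_findings=42, tests_total=350, tests_passed=341,
                              security_reqs_total=28, security_reqs_tested=25)
    print(json.dumps(record, indent=2))

Running such a step at each build yields a consistent, repeatable stream of evidence records that later analysis steps can compare against expectations.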

2.2 The SAF

The SAF [Alberts 2017] defines important software assurance practices for four
categories: process management, project management, engineering, and support.
(See Figure 3.) Each category comprises multiple areas of practice, and specific
practices are identified in each area. To support acquirers, relevant acquisition and
engineering artifacts where evidence can be provided are documented for each
practice, and an evaluator looks for evidence that a practice is implemented by

examining the artifacts related to that practice.

Because most organizations use unique lifecycle models structured to support the specific systems and software products they deliver, using a framework of practices allows tailoring based on the specific needs of a program in any organization.

Many relevant practices focus on cybersecurity, which is defined in Merriam-Webster as "measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack." A system containing vulnerabilities that can be compromised to allow unauthorized access reduces confidence in software assurance.

Figure 3: Software Assurance Framework

2.3 Justifying Sufficient Software Assurance Using Measurement

Just as there is no single practice that addresses software assurance, there is no single measurement that demonstrates that a software assurance target has been achieved. Many metrics are required to determine that a range of practices is sufficiently addressed and that the product performs as expected. These metrics must be connected to the software assurance target in a manner that supports increased confidence (or not) across the lifecycle.

One form of structuring metric information is an assurance case. Metrics provide evidence in support of a software assurance target based on a justification of the value of that evidence (aka the argument). Such evidence does not imply any kind of guarantee or certification. It is simply a way to document the rationale behind software assurance decisions. Assurance cases were originally used to show that systems satisfy their safety-critical properties. For that use, they are called safety cases. Effective measurements require planning to determine what to measure and analysis to determine what the measures reveal as evidence in support of a target.

An assurance case simply documents the verification of a claim. For example, an


assurance case for the performance example described in Section 1.1 consists of
the computational model, simulations that verify that model, unit tests that verify
the implementation of the model, and tests of the integrated system.
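
A minimal sketch of how the elements of this assurance case might be recorded is shown below; the claim wording and evidence names are illustrative assumptions rather than items from an actual program.

    # Illustrative structure for an assurance case: a claim, the argument for it,
    # and the evidence items expected to support it. Names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AssuranceCase:
        claim: str                 # what must be shown
        argument: str              # why the evidence supports the claim
        evidence: list[str] = field(default_factory=list)   # artifacts gathered across the lifecycle

    perf_case = AssuranceCase(
        claim="The delivered system meets the sub-second response-time requirement",
        argument="Design-time modeling, unit tests, and integrated tests verify the "
                 "performance requirement at increasing levels of fidelity",
        evidence=[
            "computational performance model produced during design",
            "simulation results that verify the model",
            "unit test results verifying components against the model",
            "integrated system performance test results",
        ],
    )
    print(len(perf_case.evidence), "evidence items support:", perf_case.claim)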

Several observations about how an assurance case can be used include the following:
 Creating a verification argument and identifying supporting evidence should
be the expected output of normal development activities.
 An assurance case is developed incrementally. For this example, the outline
of an assurance case was developed during design. It is likely refined during
implementation to satisfy verification requirements.
 Independent reviewers can evaluate the assurance argument and sufficiency
of proposed or supplied evidence throughout the development lifecycle.

Software assurance metrics are needed to evaluate both the practices in a software assurance practice area and the resulting assurance of the product. For example, in the SAF Engineering practice area, the engineers must (1) know what to do, (2) actually do it, and (3) provide evidence that what they did is sufficient.

However, there are many competing qualities (e.g., performance, safety,


reliability, maintainability, usability) an engineer must consider in addition to
software assurance, and the result must provide sufficient assurance to meet the
target. Answers to the questions in Table 1 provide evidence that the engineering
was performed effectively. Further evidence is needed to determine if the software
assurance results based on the engineering decisions meet the target.
Table 1: Engineering Questions

Effectiveness: Was applicable engineering analysis incorporated in the development practices?

Trade-offs: When multiple practices are available, have realistic trade-offs been made between the effort associated with applying a technique and the improved result that is achieved? (The improved result refers to the efficiency and effectiveness of the techniques relative to the type of defect or weakness.)

Execution: How well was the engineering done?

Results applied: Was engineering analysis effectively incorporated into lifecycle development?

The Goal/Question/Metric (GQM) paradigm can be used to establish a link


between the software assurance target and the engineering practices that should
support the target. The GQM approach was developed in the 1980s as a
mechanism for structuring metrics and is a well-recognized and widely used
metrics approach.

To focus the use of GQM on software assurance, consider an example. An


engineering practice for software assurance identifies and protects the ways that a
software component can be compromised (aka attack paths). Such a practice must

integrate into all phases of the acquisition and development lifecycles. Measures

to provide assurance evidence can be collected from activities that implement this
practice in several lifecycle steps, such as the following:
 Requirements: What are the requirements for software attack risks, and are
they sufficient for the expected operational context?
 Architecture through design: What security controls and mitigations must
be incorporated into the design of all software components to reduce the
likelihood of successful attacks?
 Implementation: What steps must be taken to minimize the number of
vulnerabilities inserted during coding?
 Test, validation, and verification: How will actions performed during test,
validation, and verification address software attack risk mitigations?

For each of these engineering questions, explore relevant outputs and metrics that can be used to establish, collect, and verify appropriate evidence. Since each project is different in scope, schedule, and target assurance, the metrics actually implemented should be those that provide the greatest utility.
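
One way such a Goal/Question/Metric breakdown could be recorded for the attack-path practice is sketched below; the metric names are candidate examples drawn from the questions above, not a prescribed set.

    # Illustrative GQM record for the attack-path example; metric names are candidates only.
    gqm = {
        "goal": "Identify and protect the ways a software component can be compromised (attack paths)",
        "questions": {
            "Requirements: are attack-risk requirements sufficient for the operational context?":
                ["number of attack-surface requirements defined",
                 "percentage of identified attack risks traced to a requirement"],
            "Architecture/design: do controls and mitigations reduce the likelihood of attack?":
                ["attack paths identified and mapped to security controls"],
            "Implementation: are vulnerabilities minimized during coding?":
                ["vulnerabilities per MLOC identified and removed"],
            "Test, validation, and verification: are attack-risk mitigations exercised?":
                ["percentage of security requirements tested"],
        },
    }

    for question, metrics in gqm["questions"].items():
        print(question, "->", "; ".join(metrics))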

2.4 An Implementation Process for Each Metric

Selecting a metric is only the first step in establishing useful measurement of


software assurance. Metric data must also be collected, analyzed, and evaluated to
identify potential concerns. Each concern triggers a response determination and an
implementation of that response. Figure 4 describes the steps for establishing and
using a metric.


Collect Data
• What data should be collected, and where should it be collected?
• What is the data’s level of fidelity?
• How many sources of data are there?
• How should data be assembled for analysis and passed to the next step?

Analyze and Identify Issues and Gaps
• What are the criteria for abnormal conditions? (This requires a baseline of expected behavior.)
• How frequently should the data be analyzed? (If nothing looks abnormal, terminate the flow and revisit in the next review.)

Evaluate and Determine the Need for Response
• Confirm the validity of indicators, including the accuracy of the data and its sources, and the validity of the metrics used to determine the condition.
• Identify the potential impacts, including the mission, requirements variance, future system performance (i.e., product impact), and operational capability (i.e., predicting future problems).
• Establish the criteria for evaluating the severity of impact and response, including crises requiring immediate action, changes needed in measurements, changes needed in requirements, and product changes required (i.e., engineering changes).

Implement a Response and Determine Needed Monitoring


• Determine where impact and response are needed.
• Communicate the impact and response needs to appropriate stakeholders.
• Determine monitoring needs.
• Adjust data collection and measurement analysis as needed for future analyses.

Figure 4: Metrics Development Process
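
The steps of Figure 4 could be tied together along the lines of the following sketch; the metric, baseline, and responses are placeholders, and a real program would substitute its own collection mechanisms, evaluation criteria, and escalation paths.

    # Minimal sketch of the collect -> analyze -> evaluate -> respond cycle in Figure 4.
    # The metric, baseline, and responses are illustrative placeholders.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Metric:
        name: str
        collect: Callable[[], float]   # where and how the data is gathered
        baseline: float                # expected-behavior threshold for "abnormal"

    def run_metric_cycle(metric: Metric) -> None:
        value = metric.collect()                 # 1. collect data
        if value <= metric.baseline:             # 2. analyze against the baseline
            return                               #    nothing abnormal: revisit at the next review
        # 3. evaluate: confirm the data's validity and the potential impact before acting
        print(f"{metric.name}: value {value} exceeds baseline {metric.baseline}")
        # 4. respond: notify stakeholders and adjust monitoring as needed
        print("Escalate to the responsible decision maker and increase monitoring")

    run_metric_cycle(Metric(name="unaddressed vulnerabilities per MLOC",
                            collect=lambda: 950.0, baseline=600.0))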


2.4.1 Collect Data

Collecting measurement data starts with answering the following standard


questions of who, what, where, when, and how.

Who performs the practice(s) selected to measure? If there is direct access to


who performs the practices, it is possible to request the data. However, in many
cases, the practices are performed by contracted resources, and a deliverable
must be added to performance criteria to ensure practices are performed. This
addition may mean contract modifications and increased costs.

What should be collected and by whom? In some cases, the data is already available and is being used for a related secondary purpose. In other cases, no one is collecting the information because it hasn't been required, or what is being collected is imprecise or insufficiently correlated to what must be evaluated. Are mechanisms available to collect the needed data? Are there log entries that can be assembled or tools that can be applied to collect the data? If there is no way to collect the data needed, a surrogate may be able to provide a close approximation of what is needed.

Where might the data be collected and how many sources should be used? How
granular should the data be? Is information needed about every line of code, every
software module, every component, or each product within the system? Or is
information needed at an integration level? Is it necessary to collect detailed data
and construct the combined view, or can the data be collected at a point where it
will reflect the combinations? Is there a single point where the practice being
measured is performed, or is it spread throughout many separate steps, separate
lifecycle activities, and separate contractors? Are the practices being inserted into
the lifecycle, and do the measurement activities need to be part of that transition?
How many sources must participate to make the measurement useful? In many
cases, the volume of data may be too high for manual analysis, and the collection
process should be automated to be practical.

When should the metric be collected to be useful? If a metric is used for


prediction, then it must be part of early lifecycle activities. If it’s used for
lifecycle performance verification, then it should be part of later lifecycle
activities. How frequently (e.g., daily, weekly, monthly, at the end of a cycle, or
as part of planned reviews) is this information needed? There is no reason to
expend resources to collect data more frequently than needed.

How should the information be assembled for analysis? Data is useful only if it’s
analyzed, and data analysis is time and resource intensive. Mechanisms must be
in place to isolate data needed to conduct assurance analysis from the many log
files and other data repositories that potentially contain millions of records. Data
that is classified and cannot be shared with decision makers is useless unless the
analysis is framed so the decisions the data is intended to influence are addressed

within the classification boundaries.

2.4.2 Analyze and Identify Issues and Gaps

Measurement data is collected so that it can be used to influence action.


Measurements can show that work is proceeding as expected, and no action
beyond continuing the current course is required. Measurements can show
deviations from a desired range of performance, indicating the need for further
evaluation, possible engineering changes, or different measures because the data
does not correlate to expectations. Any of these outcomes requires knowledge of
what constitutes expected data so that undesirable behavior can be identified. A
worthy measurement plan predefines what the collected data means and how it
should be used to influence actions so that the interpretation of the results and
selected responses are appropriate.

Who reviews the data for potential response? How do they determine what is out
of acceptable bounds and when action is required? Is there a single decision
point? Or are performers at a granular level expected to (1) correct issues related
to measures within a certain range and (2) notify decision makers at the next level
when those bounds are exceeded? Each selected measure can have different
responses to these questions based on how the organization chooses to implement
its decision making.

2.4.3 Evaluate and Determine the Need for Response

There are several possible responses to measures that are considered out of
bounds. Initially, the data should be confirmed to ensure its validity. Were the
collection and submission processes followed so that the data has integrity? Are
the metrics appropriate to indicate specific action, or are they potential warning
indicators that should trigger further monitoring, data collection, and analysis?

If the data is believable, then what are the potential impacts indicated by an out-
of-bounds condition? There could be mission success impacts, system/product
performance impacts, operational capability impacts with future limitation
implications, etc.

If the measures can be considered predictive, then what actions should be


considered to prevent, mitigate, or monitor the possible impact? If the possible
impact is unacceptable, what must change to align the predicted outcome with
the desired result?

If the measures verify capability, are the conditions posed by the unexpected
variance great enough to justify rework of some or all of the system? Or will
responsibility, and possibly future change requests, be transferred to operations?

Any of the above responses requires criteria for evaluating the severity of impact
and the immediacy of expected response. Mechanisms for communicating the

need for response to current or future performers are also required.

2.4.4 Implement a Response and Determine Needed Monitoring

Once the desired response is determined, it's necessary to communicate it to those expected to respond so that they (1) know what they must do, (2) understand the expected response time, and (3) have the proper authorization to act. How are such situations tracked to determine resolution? Will additional measures be needed to confirm the expected outcome, or is future monitoring of the existing measures sufficient?

It's beneficial to periodically monitor and tune this process to improve the metrics used and the actions that are determined and implemented based on those metrics. Also, system and organizational changes can impact the metrics process.


3 Selecting Measurement Data for Software Assurance Practices

The SAF documents practices for process management, program management, engineering, and support. For any given software assurance target, there are GQM questions that can be linked to each practice area and individual practice to help identify potential evidence. In this section, this approach is used to develop an example that shows how practices in each area can be used to provide evidence in support of a software assurance target.

The SAF provides practices as a starting point for a program, based on the SEI's expertise in software assurance, cybersecurity engineering, and risk management. Each organization must tailor the practices to support its specific software assurance target, possibly modifying the questions for each relevant software assurance practice, and select a starting set of metrics for evidence that is worth the time and effort needed to collect it.

3.1 Example Software Assurance Target and Relevant SAF Practices

Consider the following software assurance target: Supply software to the


warfighter with acceptable software risk. To meet this software assurance target,
two sub-goals are needed (based on the definition of software assurance):

Sub-Goal 1: Supply software to the warfighter that functions in the intended


manner. (Since this is the primary focus of every program, and volumes of
material are published about it, this sub-goal does not need to be further
elaborated.)

Sub-Goal 2: Supply software to the warfighter with a minimal number of


exploitable vulnerabilities. (The remainder of this section provides a way to
address this sub-goal.)

SAF-Based Questions

Using the SAF, the following questions should be asked to address sub-goal 2:
Supply software to the warfighter with a minimal number of exploitable
vulnerabilities.

1. Process Management: Do process management activities help minimize the


potential for exploitable software vulnerabilities?

1.1. Process Definition: Does the program establish and maintain cybersecurity processes?

1.2. Infrastructure Standards: Does the program establish and maintain


security standards for its infrastructure?


1.3. Resources: Does the program have access to the cybersecurity resources
(e.g., personnel, data, assets) it needs?

1.4. Training: Does the program provide security training for its personnel?

2. Program Management: Do program management activities help minimize


the potential for exploitable software vulnerabilities?

2.1. Program Plans: Has the program adequately planned for cybersecurity activities?

2.2. Program Infrastructure: Is the program’s infrastructure adequately secure?

2.3. Program Monitoring: Does the program monitor the status of cybersecurity activities?

2.4. Program Risk Management: Does the program manage program-level


cybersecurity risks?

2.5. Supplier Management: Does the program consider cybersecurity when


selecting suppliers and managing their activities?

3. Engineering: Do engineering activities minimize the potential for


exploitable software vulnerabilities?

3.1. Product Risk Management: Does the program manage cybersecurity


risk in software components?

3.2. Requirements: Does the program manage software security requirements?

3.3. Architecture: Does the program appropriately address cybersecurity in its


software architecture and design?

3.4. Implementation: Does the program minimize the number of


vulnerabilities inserted into its software code?

3.5. Testing, Validation, and Verification: Does the program test, validate,
and verify cybersecurity in its software components?

3.6. Support Tools and Documentation: Does the program develop tools and
documentation to support the secure configuration and operation of its
software components?

3.7. Deployment: Does the program consider cybersecurity during the deployment of software components?

4. Support: Do support activities help minimize the potential for exploitable


software vulnerabilities?

4.1. Measurement and Analysis: Does the program adequately measure


cybersecurity in acquisition and engineering activities?


4.2. Change Management: Does the program manage cybersecurity changes to


its acquisition and engineering activities?
4.3. Product Operation and Sustainment: Is the organization with
responsibility for operating and sustaining the software-reliant system
managing vulnerabilities and cybersecurity risks?

There are many possible metrics that could provide indicators of how well each
practice in each practice area is addressing its assigned responsibility for meeting
the goal. The tables in Appendices B-E provide metric options to consider when
addressing the questions for each practice area except 3.1 Product Risk
Management, which, for this example, was not useful since the system under
development is the product.

There are many ways that the information provided in Appendices A-E can be used for practices, outputs, and metrics. An organization can start with
 existing practices to identify related metrics
 known outputs to identify useful software assurance metrics
 known attacks to identify useful practices and measures for future identification

Three examples are included in this section.

3.2 Example for Selecting Evidence for Software Assurance Practices

A reasonable starting point for software assurance measurement is with practices


that the organization understands and is already addressing. Consider the
following example, which draws practices and metrics from Appendix D.

The DoD requires a program protection plan, and evidence could be collected using metrics for engineering practices (see Figure 3, practice group 3) that show how a program is handling program protection.

In Engineering practice area 3.2 Requirements, data can be collected to provide a


basis for completing the program protection plan. Relevant software assurance
data can come from requirements that include the following:
 the attack surface
 weaknesses resulting from the analysis of the attack surface, such as a
threat model for the system

In Engineering practice area 3.3 Architecture, data is collected to show that


requirements can be addressed. This data might include the following:
 the results of an expert review by those with security expertise to determine
the security effectiveness of the architecture
 attack paths identified and mapped to security controls
 security controls mapped to weaknesses identified in the threat modeling
activities in practice 3.2


In Engineering practice area 3.4 Implementation, data can be provided from


activities, such as code scanning, to show how weaknesses are identified and
removed. This data might include the following:
 results from static and dynamic tools and related code updates
 the percentage of software evaluated with tools and peer review

In Engineering practice area 3.5 Verification, Validation, and Testing, data can be
collected to determine that requirements have been confirmed and the following
evidence would be useful:
 percentage of security requirements tested (total number of security requirements and MLOC)
 code exercised in testing (MLOC)
 code surface tested (% of code exercises)

Each selected metric must have a process that establishes how data is collected,
analyzed, and evaluated based on information provided in Section 2.4 of this
report.
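
A short, hypothetical computation of the test-related metrics listed above is shown below; the counts are invented, and the formulas are only one plausible way to define these percentages.

    # Hypothetical computation of the test metrics listed above; all counts are invented.
    security_requirements_total = 40     # total number of security requirements
    security_requirements_tested = 34
    code_total_mloc = 1.2                # size of the code base, in MLOC
    code_exercised_mloc = 0.9            # code exercised by the test suite, in MLOC

    pct_security_reqs_tested = 100.0 * security_requirements_tested / security_requirements_total
    pct_code_surface_tested = 100.0 * code_exercised_mloc / code_total_mloc

    print(f"Security requirements tested: {pct_security_reqs_tested:.1f}%")
    print(f"Code exercised in testing: {code_exercised_mloc} MLOC "
          f"({pct_code_surface_tested:.1f}% of the code surface)")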

3.3 Example for Finding Metrics Data in Available Documentation

For each SAF practice, a range of outputs (e.g., documents, presentations,


dashboards) is typically created. In Appendices B through E, examples of these
outputs are provided for each SAF practice. The form of an output may vary
based on the lifecycle in use. An output may be provided at multiple points in a
lifecycle with increased content specificity. Available outputs can be evaluated
and tuned to include the desired measurement data.

In Engineering practice area 3.2 Requirements, the SAF includes the following practice:

A security risk assessment is an engineering-based security risk analysis that includes the attack surface (those aspects of the system that are exposed to an external agent) and abuse/misuse cases (potential weaknesses associated with the attack surface that could lead to a compromise). This activity may also be referred to as threat modeling.

A security risk assessment exhibits outputs with specificity that varies by lifecycle
phase. Initial risk assessment results might include only that the planned use of a
commercial database manager raises a specific vulnerability risk that should be
addressed during detailed design. The risk assessment associated with that
detailed design should recommend specific mitigations to the development team.
Testing plans should cover high-priority weaknesses and proposed mitigations.

Examples of useful data related to measuring this practice and that support the
software assurance target appear in the following list:
 recommended reductions in the attack surface to simplify development and
reduce security risks

 prioritized list of software security risks
 prioritized list of design weaknesses
 prioritized list of controls/mitigations
 mapping of controls/mitigations to design weaknesses
 prioritized list of issues to be addressed in test, validation, and verification

The outputs of a security risk assessment depend on the experience of the participants as well as constraints imposed by costs and the schedule. An analysis of this data should consider whether security weaknesses were missed or the mitigation analysis was poor, either of which increases operational risks and future expenses.
Another practice in Engineering practice area 3.2 Requirements is
Conduct reviews (e.g., peer reviews, inspections, and independent reviews) of software security requirements.

Output from reviews includes issues raised in internal reviews, review status, and
evaluation plans for software security requirements.

Analysis of the issues arising in various reviews should answer the questions
shown in following list to determine data that would be useful in evaluating
progress toward the software assurance goal.
 For software security requirements, what has not been reviewed? (Examples
include the number, difficulty, and criticality of “to be determined” [TBD]
and “to be added” [TBA] items.)
 Where are there essential inconsistencies in the analysis and/or mitigation
recommendations? (Examples include the number/percentage, difficulty, and
criticality of the differences.)
 Is there insufficient information for performing a proper security risk
analysis? (Examples include emerging technologies and/or functionality
where there is a limited history of security exploits and mitigation.)

3.4 Sustainment Example

The Heartbleed vulnerability is an example of a design flaw. Could software


assurance practices and measures have identified this type of problem before it
was fielded?

The assert function in the flawed software accepts two parameters, a string S and an integer N, and returns a substring of S of length N. For example, assert("that", 3) returns "tha". A vulnerability existed for calls where N is greater than the length of S. For example, assert("that", 500) returns a string starting with "that" followed by 496 bytes of memory data stored adjacent to that string. Calls such as this one enable an attacker to view what should be inaccessible memory contents. The input data specification, that the value of N be less than or equal to the length of the string, was never verified.
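
The pattern is easy to see in a small sketch. The hypothetical function below mirrors the description above (the actual defect was in OpenSSL's C implementation of the TLS heartbeat extension); the vulnerable version trusts the caller-supplied length, while the corrected version verifies the input data specification before use.

    # Illustrative only: mirrors the substring description above, not the actual OpenSSL code.
    def respond_vulnerable(adjacent_memory: bytes, payload: bytes, n: int) -> bytes:
        # Trusts the caller-supplied length n; when n exceeds len(payload), bytes of
        # adjacent memory are leaked back to the requester.
        memory = payload + adjacent_memory
        return memory[:n]

    def respond_fixed(adjacent_memory: bytes, payload: bytes, n: int) -> bytes:
        # Verify the input data specification: n must not exceed the payload length.
        if n > len(payload):
            raise ValueError("requested length exceeds the supplied payload")
        return payload[:n]

    secret = b"...496 bytes of adjacent memory..."
    print(respond_vulnerable(secret, b"that", 12))   # b'that...496 b' -- leaks adjacent data
    print(respond_fixed(secret, b"that", 4))         # b'that'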


The practices listed in Table 2 come from the SAF Engineering practice area and should provide enough evidence to justify a claim that a Heartbleed-type vulnerability had been eliminated.
Table 2: Practices/Outputs for Evidence Supporting Sustainment Example

Practice: Threat modeling
Output: Software risk analysis identifies "input data risks with input verification" as requiring mitigation.

Practice: Design includes mitigation
Output: Input data verification is a design requirement.

Practice: Software inspection
Output: Software inspections confirm the verification of all input data.

Practice: Testing
Output: Testing plans include invalid input data. Test results show mitigation is effective for supplied inputs.


4 Challenges for Addressing Lifecycle Software Assurance

As mentioned earlier in this report, the role of assurance metrics and data varies
with the type of assurance target. Earlier examples demonstrated that the effective
use of metrics for software assurance in engineering practices requires
coordinating data across many practices in the Engineering practice area.

Functional requirements typically (1) describe what a system should do and (2)
focus on required behavior that can be validated. Assurance requirements are more
likely expressed in terms of what a system should not do and are much more
difficult (if not impossible) to confirm. However, we should consider evaluations
that show that a behavior is less likely to occur.

For example, we can verify that the authentication and authorization functions meet requirements and that authorization is confirmed when sensitive data is accessed. However, that evidence is insufficient to demonstrate assurance that only authorized users can access a data set. An attacker does not need to exploit a weakness in those functions. Instead, they can use a vulnerability in the functional software to change software behavior and bypass authentication checks. In other words, vulnerabilities enable an attacker to bypass system controls. To reduce the likelihood of this bypass occurring, practices that remove vulnerabilities are critically needed.

4.1 Acquisitions Can Initiate Software Assurance with Independent Verification and Validation

Challenge: Contractors are required to address a risk management framework


based on existing policy; contractors need to consider software assurance as well.
Can the two be combined?

Many government agencies use the NIST Risk Management Framework (RMF)
[NIST 2014] to identify practices for cybersecurity that also address software
assurance. These practices are included in a contract and evaluated as part of an
independent verification and validation (IV&V) process to confirm the level of
cybersecurity and software assurance risk addressed.
As an example, three areas of interest that could be combined were selected.
(Additional examples are provided in Appendix A.)

1. The first area of interest is Software Flaw Remediation, which covers five
RMF controls as follows:
 SI-2 Flaw Remediation
 SI-2(1) Flaw Remediation | Central Management
 SI-2(2) Flaw Remediation | Automated Flaw Remediation Status

 SI-2(3) Flaw Remediation | Time to Remediate Flaws/Benchmarks


for Corrective Actions
 SI-2(6) Flaw Remediation | Removal of Previous Versions of Software/Firmware

This area of interest is handled by SAF Engineering practice area 3.4 Implementation as part of "Evaluation practices (e.g., code reviews and applying tools) are applied to identify and remove vulnerabilities in delivered code (including code libraries, open source, and other reused components)."

The same metrics could be selected to demonstrate meeting both RMF and
software assurance expectations from the following list:
 % of vendor contracts requiring the use of evaluation practices and
reporting vulnerability metrics
 code coverage (aka % of code evaluated [total and by each type of review])
 vulnerabilities per MLOC identified and removed
 unaddressed vulnerabilities per MLOC
 % code libraries evaluated
 % open source components evaluated
 % legacy components evaluated
 count of high-priority vulnerabilities identified and the count of those removed

2. The second area of interest is Malicious Code Protection, which covers the
following four RMF controls:
 SI-3 Malicious Code Protection
 SI-3(1) Malicious Code Protection | Central Management
 SI-3(2) Malicious Code Protection | Automatic Updates
 SI-3(10) Malicious Code Protection | Malicious Code Analysis

This area of interest is also handled by SAF Engineering practice area 3.4 Implementation. Specific metrics for these practice areas are provided in Appendix D.

3. The third area of interest is Software Supply Chain Protection, which


covers the following seven RMF controls:
 SA-12 Supply Chain Protection
 SA-12(1) Supply Chain Protection | Acquisition Strategies/Tools/Methods
 SA-12(5) Supply Chain Protection | Limitation of Harm
 SA-12(8) Supply Chain Protection | Use of All-Source Intelligence
 SA-12(9) Supply Chain Protection | Operations Security
 SA-12(11) Supply Chain Protection | Penetration Testing/Analysis of
Elements, Processes, and Actors
 SA-22 Unsupported System Components

This area of interest is addressed by practices in SAF Project Management


practice area 2.5 Supplier Management, which includes five practice

activities and a range of metrics for each practice as shown in Appendix C.

An additional 15 cybersecurity areas that map to an additional 20 RMF controls (listed in Appendix A) can be cross-referenced to SAF practice areas and practices. These SAF practice areas and practices link to potential metrics that can be collected and analyzed at checkpoints throughout the acquisition lifecycle to confirm that they are addressed.
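
One hypothetical way to keep such a cross-reference machine readable, so that checkpoint reviews can pull the associated metrics consistently, is sketched below; the entries simply restate the three areas of interest above and are not a complete or authoritative crosswalk.

    # Illustrative RMF-control to SAF-practice crosswalk with candidate metrics.
    # The entries restate the three areas of interest above; this is not a complete mapping.
    rmf_to_saf = {
        "Software Flaw Remediation (SI-2 family)": {
            "saf_practice": "Engineering 3.4 Implementation",
            "candidate_metrics": ["vulnerabilities per MLOC identified and removed",
                                  "% of code evaluated by review or tools"],
        },
        "Malicious Code Protection (SI-3 family)": {
            "saf_practice": "Engineering 3.4 Implementation",
            "candidate_metrics": ["% of open source components evaluated"],
        },
        "Software Supply Chain Protection (SA-12 family, SA-22)": {
            "saf_practice": "Project Management 2.5 Supplier Management",
            "candidate_metrics": ["% of vendor contracts requiring vulnerability reporting"],
        },
    }

    for area, entry in rmf_to_saf.items():
        print(f"{area} -> SAF {entry['saf_practice']}: {'; '.join(entry['candidate_metrics'])}")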

For the DoD, milestone reviews in an acquisition lifecycle can be used to review
selected metrics and monitor how well the contractor is addressing the selected
RMF controls and practices for software assurance. As described in Sections 2
and 3 of this report, the acquirer must determine which data to collect and how it
will be evaluated to determine if the results are sufficient.

4.2 Monitoring the Development of a Custom Software Acquisition

Challenge: What evidence is needed to ensure that vulnerabilities are addressed by a contractor?

It is a common practice for a vendor to report the tools it uses to address vulnerabilities as part of its execution pipeline. This source of evidence should be mapped to the expected practice it supports to determine how well each part of the practice is addressed. Also, all lifecycle activities must be considered since potential vulnerabilities can be introduced at any stage of the lifecycle. Therefore, the acquirer should not just accept what a vendor reports that it performs; the acquirer should also map what is reported to the needed practices and identify gaps and opportunities for improvement.

Capers Jones analyzed over 13,000 projects for the effects of general practices
(e.g., inspections, testing, and analysis) on improving software quality [Jones
2012]. His analysis shows that using a combination of techniques is best. Many of
the limitations associated with tools such as static analysis, which have high rates
of false positives and false negatives [Wedyan 2009], can be mitigated by other
development practices.

Jones' analysis of projects showed that a combination of inspections, static analysis, and testing was greater than 97% efficient in identifying defects. However, these analyses address only the "identify" portion of SAF Engineering practice area 3.4 Implementation's practice "Evaluation practices (e.g., code reviews and applying tools) are applied to identify and remove vulnerabilities in delivered code (including code libraries, open source, and other reused components)"; additional actions must be performed to remove the vulnerabilities that are found.

The Security Development Lifecycle (SDL) encouraged other developers to


include security analysis earlier in the development lifecycle [Howard 2006].
Vulnerabilities created during design should be identified and removed during risk
assessments or in design and implementation. Assurance now depends, in part, on
how well a developer anticipates how a system can be compromised and how well

the developer chooses and implements effective mitigations. Practices that


anticipate software weaknesses are included in SAF area 3.2, as shown in Table 3.
Table 3: Requirements (SAF Engineering Practice Area 3.2)

Activities/Practices: Conduct a security risk analysis, including threat modeling and abuse/misuse cases.
Outputs: Prioritized list of software security risks; prioritized list of design weaknesses; prioritized list of controls/mitigations; mapping of controls/mitigations to design weaknesses.

Threat modeling analyzes how a software design can be compromised. Such analysis typically considers how an attack can compromise the information, flows, data stores, and software that processes the data, and it can draw on the extensive documentation of security exploits as represented by the Common Weakness Enumeration (CWE), the Common Vulnerabilities and Exposures (CVE), and the Common Attack Pattern Enumeration and Classification (CAPEC). The output can describe the likelihood of various classes of threats, such as a denial of service or disclosure of information.
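
A fragment of what the "mapping of controls/mitigations to design weaknesses" output might look like in machine-readable form is sketched below; the CWE entries, ratings, and mitigations are illustrative examples, not findings from a real threat model.

    # Illustrative threat-model output: design weaknesses (by CWE) mapped to mitigations
    # with coarse likelihood and consequence ratings. Entries are examples only.
    threat_model = [
        {"weakness": "CWE-89: SQL injection via the data-store interface",
         "mitigation": "use a vetted parameterized-query library",
         "likelihood": "high",
         "consequence": "disclosure or modification of data"},
        {"weakness": "CWE-20: improper input validation on external interfaces",
         "mitigation": "validate the length and type of all externally supplied fields",
         "likelihood": "medium",
         "consequence": "denial of service"},
    ]

    for risk in threat_model:
        print(f"{risk['weakness']} -> {risk['mitigation']} "
              f"(likelihood: {risk['likelihood']}, consequence: {risk['consequence']})")
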
Verification should guide the choice of mitigations. Can claims about a mitigation
be verified? In other words, what is the level of confidence an acquirer should
have with the choice of mitigations? Creating an argument that a developer
reduced or eliminated vulnerabilities (i.e., a developer’s assurance case) should
start with risk analysis. The strength of the assurance argument and its eventual
verification depends, in part, on the evidence provided to support the mitigation of
software risks. An acquirer should consider the evidence that supports the
following:
1. validity of the risk analysis
2. cost effectiveness of the mitigations with respect to their effects on mission outcomes
3. effective implementation of the chosen mitigations

The output of a risk assessment includes predictions of how a system can be


compromised with the risk priorities weighted by likelihood and consequences.
Metrics now evaluate the engineering analysis in items 1 and 2, while the
incorporation of that engineering analysis is determined in later lifecycle activities
(item 3).

Instead of trying to confirm that the evidence provided for a practice is sufficient, ask why the evidence may be insufficient or defective [Goodenough 2010]. For example, unanticipated risks raised during a program technical review or by an independent product risk assessment reduce confidence in a developer's risk analysis. Examples of other doubts that could arise include the following:
 The test plans did not include all hazards identified during design.
 The web application developers had limited security experience.
 The acquirer did not provide sufficient data to validate the modeling and simulations.

 Integration testing did not adequately test recovery after component failures.

A developer should be able to provide evidence that confirms items 2 and 3 were addressed. For example, assume a data flow includes an SQL database as a data store. A risk assessment does the following:
 estimates the risk of an SQL-injection attack as described in CWE-89
 describes how a successful exploit could lead to a malicious modification of data or the exposure of information to individuals who are not supposed to have access to it
 recommends mitigations to reduce the risk of an SQL-injection vulnerability

It is difficult to verify that a routine, even written by an experienced coder,


prevents an SQL injection. A CWE recommended mitigation is to use a vetted
library or framework. Such a recommendation is an engineering decision
expressed as a coding rule to be enforced during implementation. The Consortium
for IT: Software Quality (CISQ) states that the validation of the use of such a
library can be automated by scanning the source code and does not require the
coder to have extensive security expertise [CISQ 2012]. A developer following
the CISQ approach can provide an acquirer with an assurance justification (as
shown in Figure 5).

Figure 5: SQL-Injection Assurance Case

The CISQ approach, like static analysis, is based on the analysis of developed
source code. However, the objective of the approach is to eliminate

vulnerabilities during coding rather than identifying defects after they are
injected.
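
As an illustration of the kind of coding rule such an approach enforces, the sketch below contrasts string-built SQL with a parameterized query, using Python's standard sqlite3 module as a stand-in for any vetted library; the table and data are made up.

    # Coding-rule illustration: avoid string-built SQL; use parameterized queries.
    # Table and data are invented; sqlite3 stands in for any vetted database library.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # Violates the rule: attacker-controlled input is concatenated into the query text.
    unsafe_query = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe_query).fetchall())    # returns rows it should not

    # Follows the rule: the library binds the value, so input cannot alter the query.
    safe_query = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(safe_query, (user_input,)).fetchall())   # returns []

Because adherence to such a rule is visible in the source code, a scanner can check for it automatically, which is the point of the CISQ argument above.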

Confidence in reducing defects, as demonstrated by Capers Jones, depends on evidence that the security risks and recommended mitigations were (1) considered during design, design reviews, and inspections; and (2) incorporated in test plans (as was done for the SQL-injection example).

4.3 Monitoring Integration of Third-Party Software

Challenge: Why is supply chain risk management such a growing source of

acquisition concern?

An increasing proportion of software development involves integrating


commercial software. An acquirer has limited visibility into the engineering of
that software and may rely on test labs and other alternative practices. Such
software includes database management systems and infrastructure services, such
as identity management for authorization and authentication. The appropriate
security measures depend on the context, which only the acquirer knows.

Supply chain risk management refers to the collection of practices that manage the risks associated with the external manufacture or development of hardware and software components. There are two sources of supply chain risks:
1. The supply chain is compromised, and counterfeit and tampered products are inserted.
2. Poor development and manufacturing practices introduce vulnerabilities.

For example, there was a vulnerability in a widely used implementation of the


secure socket layer protocol that was used for securing web communications. The
vulnerability potentially exposed memory data (e.g., passwords, user
identification information, and other confidential information) to unauthorized
users. At the time of the announcement in 2014, there did not appear to be any
tools available that would have discovered the vulnerability [Kupsch 2014]. The
vulnerability occurred because the validity of the input to a software function was
not verified. In all likelihood, the defect could have been found during a code
inspection, but this activity was not part of the development process for this
software.

For commercial development, most of the practices that address defects are early
in the lifecycle. The acquirer does not see the product until integration and will
only be able to monitor the early lifecycle activities through provisions in the
contract. This separation is shown in Figure 6. Monitoring vendor development
practices depends entirely on information provided by the vendor.
When the acquirer simply receives the final product at integration, it does not
have direct visibility into the vendor’s development practices.


Figure 6: Supply Chain Monitoring

An acquirer must not only monitor a supplier’s development practices, but they
must also understand how that supplier monitors its suppliers. For example, how
does the prime contractor reduce supply chain risks associated with subcontractors
and commercial suppliers? Supply chains can be many layers deep, linking
organizations with a wide range of defect management approaches.
Product Development

Characteristics of commercial product development that can be available to an


acquirer might include the following:
 the vulnerability history for the product, as reported in the NIST National Vulnerability Database
 standards that a product developer applies, such as The Open Group's Open Trusted Technology Provider Standard (OTTPS) (ISO 20243), which uses evidence of a supplier's capabilities and product security, as shown in Table 4
Table 4: Evidence of Supplier Capabilities and Product Security

Evidence of Quality Product Development: Supplier practices conform to best-practice requirements and recommendations, primarily those associated with activities relating to the product's development.

Evidence of Secure Development: Providers employ a secure engineering method when designing and developing their products. Software providers and suppliers often employ methods or processes with the objective of identifying, detecting, fixing, and mitigating defects and vulnerabilities that could be exploited, as well as verifying the security and resiliency of the finished products.

Evidence of Supply Chain Security: Suppliers manage their supply chains through the application of defined, monitored, and validated supply chain processes.

Integrated System Development

A commercial product developer can take advantage of a relatively stable set of


suppliers and knowledge of the security risks associated with earlier versions;
however, a system integrator requires general knowledge that is applicable across
multiple components and suppliers. Characteristics of integrated development include the following:


 integration of independently developed components with limited visibility
into the actual code
 inconsistencies in security assumptions among components
 component behavior that is dynamic over time (i.e., each component
supported and updated separately)
 components that provide extensibility and customization
 ongoing product upgrades
 multiple components that compound threat analysis and mitigations
 supply chain risk management that includes integration and product risks

While threat modeling for a product can be incrementally upgraded as


functionality and threats evolve over time, a distinct threat model must be
constructed for each system by the acquirer. When a product is integrated into a larger commercial product, the supply chain must be managed by the integrator, and the acquirer of the integrated product may have limited visibility into how the integrator manages its suppliers.

Commercial software typically provides customization and extension capabilities so that an organization can tailor the software to its requirements and operational environment. The implementation of a mitigation might take advantage of such capabilities, but it is more likely that an attack will exploit these features. Threat modeling should be applied to identify any new risks and the effect of the changes on recommended mitigations.

4.4 System-of-Systems Assurance

Challenge: Systems are typically integrated with other systems to address a


mission. Can software assurance be applied to a system of systems?

The assurance discussed for custom development and for the supply chain was associated with eliminating identified defects and vulnerabilities. Threat
modeling attempts to reduce the risk of vulnerabilities associated with unexpected
conditions. Assurance should also be considered for an organization’s work
processes, which are based on systems working together to address a mission or
business process.

A good example is the August 2003 power grid failure. Approximately 50 million
electricity consumers in Canada and the northeastern U.S. were subject to a
cascading blackout. The events preceding the blackout included a mistake by tree
trimmers in Ohio that took three high-voltage lines out of service and a software
failure (a race condition) that disabled the computing service that notified the
power grid operators of changes in power grid conditions. With the alarm function disabled, the power grid operators did not notice the sequence of power grid failures that eventually led to the blackout [NERC 2004].

The alert server was a commercial product. The integration of that component into
the power company’s system included a rollover to a second server if there was a
hardware failure in the primary server. However, the software error that disabled
the primary server also disabled the secondary server. This event was the first
time that this software fault had been reported for the commercial product.

A key observation by the technical reviewers was that the blackout would not have occurred if the operators had known that the alarm service had failed. A typical response involves finding alternative sources of electricity and can usually be implemented within 30 minutes. Instead of analyzing the details of the alarm server
failure, the reviewers asked why the following software assurance claim had not
been met [NERC 2004]:

Claim: Power grid operators had sufficient situational awareness to manage


the power grid to meet its reliability requirements.

The reviewers proposed the following assurance case. The claim is met if at least one of the five sub-claims is satisfied.

Sub-Claim: A server provides alarms for condition changes.
Status: Alarm server recovery was designed for a hardware failure. The alarm service did fail over to the secondary server, but the software failure that disabled the primary server also disabled the backup.

Sub-Claim: Server recovery can be completed within ten minutes.
Status: The commercial system required 30 minutes for a restart.

Sub-Claim: Operators are notified of the loss of the alarm server.
Status: Automatic notification of server failure was not implemented.

Sub-Claim: Operators periodically check the output from contingency analysis and state estimators.
Status: This practice was not done since those tools had repeated failures in the preceding week.

Sub-Claim: An independent real-time monitor of the regional power grid provides alerts.
Status: The independent monitoring organization had concurrent failures.

This operational assurance case should guide the acquisition and integration of
commercial power grid software.


5 Software Process Maturity Assessment

The scope of a software process assessment can cover all the processes in the
organization, a selected subset of the software processes, or a specific project.
Most standards-based process assessment approaches are based on the concept of process maturity.

When the assessment target is the organization, the results of a process assessment may differ, even on successive applications of the same method. There are two reasons for the differing results:
1. The organization being investigated must be determined. For a large company, several definitions of "organization" are possible, so the actual scope of the appraisal may differ in successive assessments.
2. Even in what appears to be the same organization, the sample of projects selected to represent the organization may affect the scope and outcome.

When the target unit of assessment is at the project level, the assessment should include all meaningful factors that contribute to the success or failure of the project. It should not be limited by the established dimensions of a given process maturity model. Here, the degree to which practices are implemented and their effectiveness, as substantiated by project data, are assessed.

Process maturity becomes relevant when an organization intends to embark


on an overall long-term improvement strategy. Software project assessments
should be independent assessments in order to be objective.

5.1 Software Process Assessment Cycle

According to Paulk and colleagues (1995), the CMM-based assessment approach uses a six-step cycle:
1. Select a team. The members of the team should be professionals knowledgeable in software engineering and management.
2. The representatives of the site to be appraised complete the standard process maturity questionnaire.
3. The assessment team performs an analysis of the questionnaire responses and identifies the areas that warrant further exploration according to the CMM key process areas.
4. The assessment team conducts a site visit to gain an understanding of the software process followed by the site.
5. The assessment team produces a list of findings that identifies the strengths and weaknesses of the organization's software process.
6. The assessment team prepares a Key Process Area (KPA) profile analysis and presents the results to the appropriate audience.

For example, the assessment team must be led by an authorized SEI Lead Assessor. The team must consist of between four and ten members. At least one team member must be from the organization being assessed, and all team members must complete the SEI's Introduction to the CMM course (or its equivalent) and the SEI's CBA IPI team training course. Team members must also meet certain selection guidelines.

With regard to data collection, the CBA IPI relies on four methods: the standard maturity questionnaire, individual and group interviews, document reviews, and feedback from the review of the draft findings with the assessment participants.

Software process assessments can also be based on a continuous assessment model.


5.2 SCAMPI

The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) was developed to satisfy the CMMI model requirements (Software Engineering Institute, 2000). It is also based on the CBA IPI. Both the CBA IPI and SCAMPI consist of three phases:
- Plan and preparation
- Conduct the assessment onsite
- Report results

The activities for the plan and preparation phase include:
- Identify the assessment scope
- Develop the assessment plan
- Prepare and train the assessment team
- Brief the assessment participants
- Administer the CMMI Appraisal Questionnaire
- Examine the questionnaire responses
- Conduct an initial document review

The activities for the onsite assessment phase include:


- Conduct an opening meeting
- Conduct interviews
- Consolidate information
- Prepare the presentation of draft findings
- Present the draft findings
- Consolidate, rate, and prepare the final findings

The activities of the reporting results phase include:


- Present the final findings
- Conduct an executive session
- Wrap up the assessment

The major characteristics of SCAMPI are:
- Accuracy: Ratings reflect the organization's capability and can be used for comparison with other organizations. The appraisal results indicate the strengths and weaknesses of the appraised organization.
- Repeatability: Ratings and results are expected to be consistent with those of another appraisal conducted under comparable conditions (an appraisal with identical scope should produce consistent results).
- Cost and resource effectiveness: The appraisal method is efficient and takes into account the organizational investment required to obtain the appraisal results.

The objectives of SCAMPI are:
- To identify strengths and weaknesses of existing processes in the organization.
- To specify an integrated appraisal method for internal process improvement.
- To act as a motivation for initiating and focusing on software process improvement.

5.3 Assessment Categories

The top-level assessment categories are:


- Availability: can a user find the software (discovery) and can they
obtain the software (access)?
- Usability: can a user understand the operation of the software, such
that they can use it, integrate it with other software, and extend or modify it?
- Maintainability: what is the likelihood that the software can be maintained
and developed over a period of time?
- Portability: what is the capacity for using the software in a different area,
field, or environment?


6. Objectives of Software Assessment

Quality Assurance: Ensure the software functions correctly and meets specified
requirements.
Performance Evaluation: Assess how well the software performs under various
conditions.
Security Analysis: Identify vulnerabilities and ensure the software is secure
against threats.
Compliance Check: Verify that the software adheres to relevant regulations and
standards.

6.1. Types of Software Assessment

Software assessment is a critical phase in the software development lifecycle,


ensuring that software meets the required standards and performs as expected.
This process can be broadly categorized into several types, each with its own
methodologies and tools. The primary types include Static Analysis, Dynamic
Analysis, Manual Testing, and Automated Testing.

6.1.1. Static Analysis

a) Definition and Methodologies


- Static Analysis involves examining the software’s source code or compiled
code without executing it. The aim is to detect potential issues such as syntax
errors, code smells, security vulnerabilities, and compliance with coding standards.
This analysis is typically conducted through:
+ Code Reviews: A systematic examination of the code by developers or
peers to identify defects and ensure adherence to coding standards. This
can be a formal process involving checklists and meetings or an informal
peer review.

+ Syntax Checking: Using tools to verify that the code adheres to the language's syntax rules.
+ Static Code Analysis Tools: Automated tools that analyze the codebase to detect issues such as security vulnerabilities, bugs, and violations of coding standards.
b) Tools and Techniques
- SonarQube: An open-source platform that provides continuous inspection of code
quality, detecting bugs, vulnerabilities, and code smells.
- Pylint: A tool for Python code that checks for errors, enforces a coding standard,
and looks for code smells.
- FindBugs: A static analysis tool that looks for bugs in Java programs.
c) Benefits:
- Early detection of defects, reducing the cost of fixing them.
- Improves code quality and maintainability.
- Enhances security by identifying vulnerabilities early in the development process.
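
As a small, hypothetical illustration of what a static analyzer reports without running the program, the fragment below contains two issues that Pylint, for example, typically flags (the warning codes in the comments are Pylint's; other tools use their own identifiers):

```python
def append_order(order, orders=[]):   # W0102: dangerous default value -- the list is shared across calls
    discount = 0.1                    # W0612: unused variable -- assigned but never read
    orders.append(order)
    return orders

def append_order_fixed(order, orders=None):
    # Create a fresh list per call instead of sharing one default object.
    if orders is None:
        orders = []
    orders.append(order)
    return orders
```

Because these defects are visible in the source text itself, a command such as `pylint orders.py` can report them at commit time, before any test is executed.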

6.1.2 Dynamic Analysis


a) Definition and Methodologies


- Dynamic Analysis involves evaluating the software during its execution. This type
of analysis is performed in a controlled environment, such as a testing environment,
to observe the software’s behavior and performance under various conditions. Key
methodologies include:
+ Unit Testing: Testing individual components or units of code to ensure they
function correctly.
+ Integration Testing: Assessing the interaction between integrated units or
components to ensure they work together as expected.
+ System Testing: Testing the complete and integrated software system to
verify it meets the specified requirements.
+ Performance Testing: Evaluating the software’s performance under expected
load conditions to ensure it can handle the anticipated user traffic.
b) Tools and Techniques
- JUnit: A widely used framework for unit testing in Java.
- Selenium: An open-source tool for automating web applications for testing purposes.
- JMeter: A tool for performance and load testing, useful for simulating heavy load
conditions.
c) Benefits
- Identifies runtime errors and performance issues.
- Validates that the software works as intended in a real-world scenario.
- Provides insights into the software’s behavior under various conditions.
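
A minimal sketch of dynamic analysis at the unit level, assuming the pytest framework and a hypothetical shopping-cart module: the tests exercise the code by executing it and then check its observable behavior.

```python
# test_cart.py -- run with:  pytest test_cart.py
import pytest

class Cart:
    """Hypothetical unit under test."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

def test_add_accumulates_quantity():
    cart = Cart()
    cart.add("book", 2)
    cart.add("book")
    assert cart.total_items() == 3

def test_add_rejects_non_positive_quantity():
    cart = Cart()
    with pytest.raises(ValueError):
        cart.add("book", 0)
```

The same style scales up: integration and system tests execute larger slices of the product, and performance tests execute it under load, but all of them observe behavior at runtime rather than inspecting the source.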

6.1.3 Manual Testing

a) Definition and Methodologies


- Manual Testing involves human testers manually interacting with the software
to find defects. This type of testing relies on the tester's creativity, intuition, and
experience to uncover issues that automated tests might miss. Key methodologies
include:
+ Exploratory Testing: Testers explore the software without predefined test
cases, using their knowledge and intuition to discover defects.
+ Usability Testing: Evaluating the software’s user interface and user experience
to ensure it is user-friendly and meets the target audience's needs.
+ Ad Hoc Testing: Unstructured testing performed without formal planning,
often used to find random defects
b) Tools and Techniques
- JIRA: A tool for tracking issues and managing test cases.
- TestRail: A test case management tool that helps in planning, managing, and tracking
test cases.
c) Benefits
- Identifies usability issues and provides insights into the user experience.
- Flexible and adaptable to changes.
- Can uncover defects that automated tests may miss, thanks to human intuition and creativity.

6.1.4 Automated Testing

a) Definition and Methodologies


- Automated Testing uses specialized tools to execute tests on the software automatically. This approach is designed to increase testing efficiency, coverage, and repeatability. Key methodologies include:
+ Unit Test Automation: Automating unit tests to ensure individual
components function correctly.
+ Regression Test Automation: Automating tests that verify that new
code changes have not adversely affected existing functionality.
+ Continuous Integration/Continuous Deployment (CI/CD) Testing:
Integrating automated testing into the CI/CD pipeline to ensure code
changes are tested continuously and deployed efficiently.
b) Tools and Techniques
- JUnit: For automating unit tests in Java.
- Cucumber: For behavior-driven development (BDD) that allows writing test
cases in plain language.
c) Benefits
- Increases test coverage and efficiency.
- Ensures consistent and repeatable test execution.
- Reduces human error and enables continuous testing.
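
As an illustration of automating a browser-level regression check, the sketch below uses Selenium's Python bindings (Selenium is named in the tool lists above). The URL, page title, and element name are placeholders, and the script assumes a matching WebDriver such as ChromeDriver is installed:

```python
# regression_smoke_test.py -- intended to run automatically on every build
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_homepage_search_box_still_present():
    driver = webdriver.Chrome()                  # assumes ChromeDriver is on PATH
    try:
        driver.get("https://example.com")        # placeholder URL
        assert "Example" in driver.title          # placeholder expected title
        # Regression check: the search box shipped in the last release is still there.
        search_box = driver.find_element(By.NAME, "q")   # hypothetical element name
        assert search_box.is_displayed()
    finally:
        driver.quit()
```

Wired into a CI/CD pipeline, a suite of such scripts runs on every commit, which is what gives automated testing its repeatability advantage over purely manual checks.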

6.2 Key Metrics in Software Assessment

- Functionality: Does the software perform the required tasks?


- Reliability: How consistently does the software perform under various conditions?
- Usability: Is the software user-friendly and accessible?
- Efficiency: How well does the software utilize resources?
- Maintainability: How easy is it to modify and update the software?
- Portability: Can the software operate in different environments?

6.3 Software Assessment Tools

- Static Code Analysis Tools: SonarQube, ESLint, Checkmarx.


- Dynamic Analysis Tools: Selenium, JMeter, LoadRunner.
- Security Assessment Tools: OWASP ZAP, Nessus, Burp Suite.
- Performance Testing Tools: Apache JMeter, LoadRunner, NeoLoad.

6.4 Software Assessment Process

- Requirement Analysis: Understand the requirements and scope of the assessment.
- Planning: Develop a comprehensive plan detailing the assessment criteria,
tools, and methodologies.
- Environment Setup: Prepare the environment necessary for assessment,
including hardware, software, and network configurations.
- Execution: Conduct the assessment using the selected tools and methodologies.
- Reporting: Document the findings, including identified issues, their severity,
and recommended actions.
- Review and Action: Stakeholders review the assessment report and take necessary
actions based on the findings.

6.5 Best Practices in Software Assessment


- Early and Continuous Testing: Incorporate testing at every stage of the SDLC to
identify issues early.
- Automated Testing: Leverage automated tools to enhance efficiency and repeatability.
- Comprehensive Coverage: Ensure that the assessment covers all aspects of the
software, including edge cases.
- Regular Updates: Keep assessment tools and practices up-to-date with the latest
standards and technologies.
- Cross-Functional Teams: Involve diverse expertise in the assessment process to
get comprehensive insights.

6.6 Challenges in Software Assessment

- Resource Constraints: Limited time, budget, and manpower can affect the
thoroughness of assessments.
- Complexity of Modern Software: The increasing complexity of software systems
can make assessments more challenging.
- Rapid Technological Changes: Keeping up with the fast-paced advancements in
technology can be difficult.
- Security Threats: Constantly evolving security threats require continuous vigilance
and updates in assessment practices.

6.7 Future Trends in Software Assessment

- AI and Machine Learning: Integration of AI/ML for predictive analysis and automated defect detection.
- DevSecOps: Incorporating security assessments into DevOps practices for
continuous and integrated security checks.
- Cloud-Based Testing: Leveraging cloud platforms for scalable and flexible testing
environments.
- IoT and Embedded Systems: Expanding assessment practices to include IoT devices
and embedded systems for comprehensive evaluations.


7. Steps in Software Assessment

- Requirement Analysis
+ Objective: Understand and document the specific requirements and
goals of the software.
+ Activities: Stakeholder interviews, requirement documentation review,
and clarification sessions.

- Planning
+ Objective: Develop a detailed plan outlining the scope, criteria,
methodologies, tools, and timeline for the assessment.
+ Activities: Define assessment objectives, select tools, identify necessary
resources, and create a schedule.

- Environment Setup
+ Objective: Prepare the technical environment required for conducting
the assessment.
+ Activities: Configure hardware, software, and network settings; set up
testing environments; install and configure assessment tools.

- Execution
+ Objective: Conduct the actual assessment by running tests and analyzing
results.
+ Activities: Execute static and dynamic analysis, perform manual and
automated tests, log defects and issues.

- Reporting
+ Objective: Document the findings, including identified issues, their severity,
and suggested remedial actions.
+ Activities: Compile test results, create detailed assessment reports, provide
recommendations for improvements.

- Review and Action


+ Objective: Review the assessment report with stakeholders and implement
necessary actions.
+ Activities: Discuss findings, prioritize issues, develop an action plan, and
monitor remediation efforts.

Key Concepts in Software Assessment


- Static Analysis
+ Definition: Examines code without executing it to identify
potential issues.
+ Tools: SonarQube, ESLint, Checkmarx.
+ Benefits: Early defect detection, improved code quality.

- Dynamic Analysis


+ Definition: Evaluates the software by executing it in a controlled environment.
+ Tools: Selenium, JMeter, LoadRunner.
+ Benefits: Observes software behavior and performance under load.

- Security Assessment
+ Definition: Identifies vulnerabilities and assesses the security
posture of the software.
+ Tools: OWASP ZAP, Nessus, Burp Suite.
+ Benefits: Enhanced security, protection against threats.

- Performance Testing
+ Definition: Measures how well the software performs under
various conditions.
+ Tools: Apache JMeter, LoadRunner, NeoLoad.
+ Benefits: Ensures scalability, reliability under load.


8. Software Assessment Framework

a) Framework Overview
A software assessment framework provides a structured approach to evaluating
software, ensuring consistency and comprehensiveness.
b) Elements of the Framework
Assessment Criteria: Define what aspects of the software will be evaluated
(e.g., functionality, performance, security).
Methodologies: Outline the methods used for assessment (e.g., manual testing,
automated testing, code review).
Tools: Specify the tools and technologies employed in the assessment process.
Reporting Structure: Describe the format and contents of the assessment reports.
c) Example Framework
Criteria: Functionality, Performance, Security, Usability.
Methodologies: Static analysis, dynamic analysis, manual and automated testing.
Tools: SonarQube, Selenium, OWASP ZAP, JMeter.
Reporting: Detailed findings, issue severity, recommendations.
d) Example of Software Assessment
Case Study: E-commerce Platform
a. Context
An e-commerce company needs to assess a new online shopping platform
before launch to ensure it meets quality and security standards.

b. Steps Followed
- Requirement Analysis: Gathered requirements from stakeholders, including performance expectations and security needs.
- Planning: Developed an assessment plan outlining tools like SonarQube for
static analysis, Selenium for functional testing, and JMeter for performance
testing.
- Environment Setup: Configured test environments simulating real-world
conditions, installed necessary tools.
- Execution: Conducted static analysis to find code issues, executed functional
tests to verify features, ran performance tests to check load handling, and
performed security scans.
- Reporting: Compiled a comprehensive report highlighting critical defects, performance metrics, and security vulnerabilities.
- Review and Action: Presented findings to stakeholders, prioritized critical
issues, and initiated remediation efforts.

c. Results
- Functionality: Identified and fixed several critical bugs affecting the
checkout process.
- Performance: Improved load handling capabilities, ensuring the platform
could support peak traffic.
- Security: Addressed vulnerabilities that could lead to data breaches,
enhancing overall security posture.


The landscape of software assessment has evolved significantly from 2020 onwards,
driven by advances in technology and shifting industry priorities. Here are some key
trends and developments in software assessment and related technologies over this period:

Digital Transformation Acceleration: The COVID-19 pandemic catalyzed a


significant acceleration in digital transformation across industries. Many
businesses prioritized the adoption of digital solutions to enhance operational
efficiency and customer engagement. This shift necessitated comprehensive
software assessments to ensure that new systems integrated seamlessly and
securely into existing infrastructures (SaM Solutions).

AI and Machine Learning Integration: Artificial Intelligence (AI) and Machine


Learning (ML) have become integral to software development and assessment.
These technologies help automate testing processes, enhance predictive analytics,
and improve software quality by identifying potential issues early in the
development cycle. AI-driven tools like TensorFlow have been pivotal in this
evolution, enabling sophisticated neural network training and
deployment (SaM Solutions).

Cybersecurity Focus: With the increase in digital activities, cybersecurity has


become a critical aspect of software assessment. Organizations are more focused
on identifying vulnerabilities and protecting against cyber threats. The UK’s
Cyber Security Breaches Survey highlights the growing emphasis on cybersecurity
measures among businesses and charities, indicating a need for robust security
assessments during software evaluations (Gov UK).

Emergence of Quantum Computing: Although still in its nascent stages, quantum


computing is poised to revolutionize software development. Companies like
Google have made significant strides, claiming to achieve quantum supremacy.
As this technology matures, it will require new assessment frameworks to evaluate
its impact on existing systems and processes (StartUs Magazine).

Progressive Web Apps (PWAs): PWAs have gained traction for their ability
to combine the best features of web and mobile applications. They are fast,
reliable, and provide a native app-like experience. Assessing the performance
and usability of PWAs has become essential as more businesses adopt this
technology for its cross-platform capabilities and enhanced user
experience (SaM Solutions).

Internet of Things (IoT) Expansion: The proliferation of IoT devices has


introduced new complexities in software assessment. Ensuring interoperability,
security, and efficient data management are key challenges that require
comprehensive testing strategies. The integration of 5G technology further
amplifies these challenges by enabling more connected devices and faster
data transmission (SaM Solutions).

Blockchain Applications: Blockchain technology has extended beyond


cryptocurrencies to various industries, including supply chain, healthcare,
and finance. Assessing blockchain-based solutions involves verifying the integrity, transparency, and security of the decentralized networks they operate on (StartUs Magazine).

Privacy and Data Security: The rise of remote work and increased data
generation have heightened concerns about privacy and data security.
Software assessments now include stringent checks on data protection measures,
compliance with regulations like GDPR, and the implementation of advanced
security protocols (StartUs Magazine).

In summary, software assessment from 2020 onwards has been characterized


by the need to adapt to rapidly advancing technologies and the increased
emphasis on cybersecurity, digital transformation, and new computing
paradigms. These trends highlight the ongoing evolution of software assessment
practices to meet the demands of modern digital environments.


9. Case Studies

Case Study 1: Static Analysis in Financial Software: A financial software


company implemented static analysis tools to identify code defects early,
resulting in a 30% reduction in production bugs.

a. Background
- A financial software company faced challenges with frequent
production bugs impacting customer trust and regulatory compliance.
To address this, they implemented static analysis tools.

b. Steps and Implementation


- Requirement Analysis: The primary goal was to identify code defects
early in the development process to reduce production bugs.
- Planning: The team selected SonarQube and Checkmarx as their static
analysis tools.
- Environment Setup: Integrated static analysis tools into the CI/CD
pipeline to automate code scanning with every commit.
- Execution: Developers ran static analysis regularly, receiving instant
feedback on code quality and security vulnerabilities.
- Reporting: Generated reports highlighting code issues, categorized by
severity and type.
- Review and Action: Developers reviewed the reports, fixed identified
defects, and followed best coding practices to prevent recurrence.

c. Outcomes
- Reduction in Production Bugs: Achieved a 30% reduction in production
bugs within six months.
- Improved Code Quality: Consistent code reviews and fixes led to higher
code quality and maintainability.
- Regulatory Compliance: Enhanced adherence to financial regulations
through early identification of compliance-related issues.

Case Study 2: Performance Testing in E-commerce: An e-commerce platform


used performance testing tools to handle peak loads during sales events, ensuring
smooth user experience.

a. Background
- An e-commerce platform needed to ensure its website could handle
peak loads during major sales events without degrading user experience.

b. Steps and Implementation


- Requirement Analysis: Key requirements included handling high traffic
volumes and ensuring fast load times during sales.
- Planning: The team selected Apache JMeter and LoadRunner for
performance testing.
- Environment Setup: Created test environments mimicking peak traffic
conditions, including server configurations and network settings.


- Execution: Ran performance tests simulating various user scenarios, such as browsing, adding items to cart, and checkout.
- Reporting: Collected and analyzed performance metrics, including
response times, throughput, and error rates.
- Review and Action: Identified bottlenecks and optimized the application
and infrastructure based on test results.

c. Outcomes
- Improved Scalability: Enhanced the platform’s ability to handle peak
loads by optimizing database queries and server configurations.
- User Experience: Maintained fast response times even under heavy load,
ensuring a smooth shopping experience.
- Business Success: Successfully managed high traffic during sales events,
resulting in increased sales and customer satisfaction.
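
Dedicated tools such as JMeter drive tests like these, but the underlying idea, many concurrent virtual users with their response times recorded, can be sketched in a few lines of Python. The target URL and user counts below are placeholders, and error handling is omitted for brevity:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, used here only for illustration

TARGET_URL = "https://shop.example.com/"   # placeholder endpoint
VIRTUAL_USERS = 50
REQUESTS_PER_USER = 10

def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET_URL, timeout=10)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = [t for user in pool.map(one_user, range(VIRTUAL_USERS)) for t in user]
    results.sort()
    print(f"requests sent:   {len(results)}")
    print(f"median latency:  {results[len(results) // 2]:.3f}s")
    print(f"95th percentile: {results[int(len(results) * 0.95)]:.3f}s")
```

Real load tests add ramp-up schedules, think times, and richer scenarios, but the metrics of interest, latency percentiles, throughput, and error rates, are the same ones reported in this case study.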

Case Study 3: Security Assessment in Healthcare Software: A healthcare provider


conducted a thorough security assessment, uncovering critical vulnerabilities and
enhancing patient data protection.

a. Background
- A healthcare provider needed to ensure the security of its patient data by
conducting a thorough security assessment of its software systems.

b. Steps and Implementation


- Requirement Analysis: Identified the need to protect sensitive patient
information and comply with healthcare regulations like HIPAA.
- Planning: Selected OWASP ZAP and Nessus for vulnerability scanning
and penetration testing.
- Environment Setup: Configured a secure testing environment with access
to all relevant software systems.
- Execution: Conducted vulnerability scans, penetration tests, and code reviews
to identify security weaknesses.
- Reporting: Compiled a detailed report of identified vulnerabilities, including
their severity and potential impact.
- Review and Action: Prioritized and remediated critical vulnerabilities,
implemented security best practices, and conducted additional training for
developers.

c. Outcomes
- Enhanced Security: Identified and remediated critical vulnerabilities,
significantly improving the security posture.
- Regulatory Compliance: Ensured compliance with HIPAA and other
regulations by addressing security gaps.
- Patient Data Protection: Strengthened data protection mechanisms, reducing
the risk of data breaches.
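
Scanners such as OWASP ZAP and Nessus automate the bulk of this work, but one narrow slice of a security assessment, verifying that responses carry common security headers, can be sketched as follows. The URL and the header list are illustrative only, not a compliance baseline:

```python
import requests  # third-party HTTP client, used here only for illustration

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url):
    """Return the common security headers missing from the response to 'url'."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_security_headers("https://portal.example.org")  # placeholder URL
    if missing:
        print("Missing security headers:", ", ".join(missing))
    else:
        print("All expected security headers are present.")
```

A check like this would be only one item in a much larger assessment; vulnerability scans, penetration tests, and code reviews, as described above, cover the rest.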

These case studies illustrate the importance of software assessment in different


industries. Implementing static analysis, performance testing, and security
assessments can significantly improve software quality, performance, and security.
By following a structured assessment process, organizations can mitigate risks, enhance user satisfaction, and comply with regulatory requirements.

10 Conclusions

The Object Management Group established that software measurement relies on


discrete indicators to support real-world decision making. It also established that a software assurance indicator is a metric or combination of metrics that provides useful information about the development process, how the project was conducted, or the characteristics of the product itself.

A key aspect of software assurance in practice is performing activities associated


with sound software results. These activities help determine whether the
software functions as intended and is free of vulnerabilities. Experience shows
that just performing what has traditionally been done for hardware is not
sufficient for software. The SAF was used as a set of software practices for
exploring possible measurement options. A set of candidate metrics was
identified that can connect to some aspect of the execution of each practice in the
SAF.

There are many lifecycles used to address software acquisition and development.
Each SAF practice can be performed at varying points in a specific lifecycle. The
level of specificity available at each point in the lifecycle can be different.
Measures taken at some points in the lifecycle are predictive, since they are
connected with what is planned. Measures taken after plans are executed can be
used to verify that what was planned is what was actually performed.

Identifying a measurement for a practice by itself does not really tell us anything
about software assurance. To associate measures with software assurance, it is
necessary to determine what a measure tells us in relation to a target, but there is
limited field experience in making this association. The examples in this report
were provided to demonstrate ways to navigate the various aspects of assurance
goal, practice, and measurement in a logical structure. This report also covered
use of GQM and aspects of an assurance case to structure examples and show how
measurement can demonstrate some aspects of a practice.

The selection of a metric is only the first step in establishing a useful


measurement of software assurance. Metric data must be collected, analyzed, and
evaluated to identify potential concerns.

Measurement is not unique to software assurance. Performing sound software


engineering also includes considering measures for monitoring and controlling
results. The examples in this report explore aspects of integrating software
assurance measurement into what is already being done for other qualities instead
of defining an entirely separate approach.

Nguyen Van Tran Anh and Le Hoai Anh 48


Software Assurance And Assessment

This report explores what is different about software assurance that must be added

to what software engineers are already doing. Based on this exploration, it is
asserted that improved software assurance depends on improved engineering. The
DoD RAM guide makes that statement for reliability, and the examples in this
report confirm the criticality of good engineering for software assurance.
Engineering requires that evidence is collected across the lifecycle since the
product and what can be measured changes.

Motivating vendors to address software assurance requires establishing criteria for


evaluating the products they produce as well as the processes used to produce
them at strategic points in the lifecycle. These evaluations must depend on expert
opinion since the range of available data is insufficient for researchers to
structure useful patterns of “goodness.” However, the selection and consistent
collection of metrics at various points in the lifecycle provide indicators over time
that an acquirer can use to monitor and incentivize software assurance
improvement.


11 References

1.http://www.sciencepublishinggroup.com/journal/paperinfo?
journalid=137&doi=10.11648/j.ajsea.20130206.14

2. http://ieeex- plore.ieee.org/

3.https://resources.sei.cmu.edu/library/as- set-view.cfm?assetid=496134

4.http://dl.acm.org/cita- tion.cfm?id=1232684.1232687

5.http://ieeexplore.ieee.org/document/6754599/

6.https://www.diva-por- tal.org/smash/get/diva2:469570/FULLTEXT01.pdf
