Chapter 6 Information and Software Quality Management
By the end of this lesson you will be able to identify commonly used definitions of software
quality and list the software quality "ilities" most applicable to different categories of
software-intensive systems.
The quality management team must be big enough to monitor the performance
of all key planning, implementation, and verification activities. This generally
requires a team roughly 5% of the size of the development team.
Software Quality
Engineering quality into a product requires an effective defect prevention
program that employs accurate defect detection and analysis to determine how
and why defects are inserted. In other words, it is not enough just to find
defects; we also need to fix the processes that generated those errors.
This illustrates why defining software quality and understanding the issues
associated with its definition and measurement are crucial software acquisition
management skills.
Perspectives on Software Quality Part (1)
Software quality is a difficult thing to define and measure. What constitutes
"quality" depends largely on your perspective, i.e., your relationship to the
product in question. That perspective drives the particular quality attributes that
are important to a project stakeholder.
Learning Objectives
Once you have completed this topic, you will be able to apply different criteria and
perspectives to define what constitutes quality in software.
• Quality is often an unknown commodity until the testing phase, but if you
only measure quality at the end, defects are often too costly to fix.
Standards are used in a software acquisition environment to help define
how the "buyer" and "seller" interface, interact, and manage the project.
For example, some standards define software quality as "the ability of software
to satisfy its specified requirements."
At the top level are software quality factors. These are project-specific quality
characteristics that are important to an acquirer or other customer of the system. For
example, the guidance control software in the LRATS E-Sentinel missile system is both
safety and mission-critical. Therefore RELIABILITY would be a key top-level software quality
characteristic!
The middle level of the framework includes those specific software-oriented attributes
that are technically accepted as best supporting the top-level quality requirement. For
example, Coding Simplicity, Module Consistency, and Error Handling are criteria that,
when properly applied to software design and coding, are known to improve its overall reliability.
At the bottom are technical measures, some of which can be programming language-
specific, that determine the degree to which the quality attributes are actually present. For
example, Coding Simplicity depends on limited use of branching and looping commands,
use of single entry and exit points in routines, etc. These can be measured in the
software by automated tools, and an assessment of the degree of Coding Simplicity can then be made.
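To make this concrete, below is a minimal sketch (in Python) of how an automated tool might score Coding Simplicity by measuring how many logical lines contain branching or looping commands. The keyword set and scoring rule are illustrative assumptions, not a standard measure.

import re

# Illustrative branching/looping keywords (Ada-flavored); a real tool would
# tailor this set to the project's language and coding standard.
BRANCH_KEYWORDS = re.compile(r"\b(if|elsif|else|case|loop|while|for|goto|exit)\b")

def coding_simplicity_score(source: str) -> float:
    """Fraction of logical lines free of branch/loop keywords (1.0 = simplest)."""
    logical = [ln for ln in source.splitlines()
               if ln.strip() and not ln.strip().startswith("--")]  # skip blanks and Ada comments
    if not logical:
        return 1.0
    branching = sum(1 for ln in logical if BRANCH_KEYWORDS.search(ln))
    return 1.0 - branching / len(logical)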
Each quality perspective embraces a different viewpoint of what is most important for
software quality. Each project stakeholder is likely to have different software quality
priorities based on his or her perspective.
• Logisticians, who are responsible for software life-cycle support, would likely have
an Attributes perspective. Quality attributes, such as maintainability and
supportability, would be among their top priorities.
• Systems Engineers have concerns about the ability of the software to work on a
wide variety of target hardware. From a systems perspective, they would be
concerned about all aspects of software quality but would consider portability and
transportability as particularly important.
• End Users, who use the system in a combat environment, operate from the User-
Focused perspective. For them, performance and reliability are key software quality
attributes.
• Software Suppliers work and are paid under the terms and conditions of a
contract. Quality from their perspective means meeting contractual requirements.
Information and Software Quality Management Part (2)
Software Size
Some software quality measurements are evaluated relative to the "size" of the
software product. It is, therefore, important to understand the concept of
software size in order to make sense of those measurements.
Software size can be determined in several ways. The most commonly used
methods are Source Lines of Code (SLOC) and Function Points (FP).
Learning Objectives
This topic examines the SLOC and FP methods of determining software size and
discusses the issues associated with them.
Once you've completed this section, you will be able to describe these two ways of
measuring software size.
A large body of Software Engineering quality data exists based on using Source Lines of
Code (SLOC), or thousands of source lines of code (KSLOC), to determine quality attributes
such as Error Density. Examples of Error Density values expressed in errors per KSLOC
(E/KSLOC) for several types of generic systems are displayed in a separate chart.
The chart is adapted from: Donald J. Reifer, Reifer Consultants, Inc., Industry Software
Cost, Quality and Productivity Benchmarks, "Error Rates Upon Delivery by Application
Domain."
Counting SLOC
Some important factors to consider when measuring software size by SLOC:
In the English language, many rules exist for "proper usage." For example, the
first line of a paragraph is indented, the first word in a sentence is
capitalized, and a sentence ends with a punctuation mark.
Therefore, counting punctuation marks such as "." or "!" or "?" would give an
accurate count of the number of sentences in a segment of text.
For example:
• Many languages (e.g., C++, Pascal, Ada) use a ";" to end a single line of
code.
Programming language format characteristics such as these form the basis for
SLOC counting. However, other considerations regarding how they are actually
interpreted apply as well.
Estimating SLOC
The following page contains an Ada code module (named "Ignition countdown") that
might be used as part of the Long Range Acquisition & Targeting System (LRATS) E-
SENTINEL missile system. This simplified module is part of a much larger program that
activates the missile warhead.
SLOC counts can differ dramatically depending on which counting criterion is used. Some
of these criteria (many other equally valid ones exist!) and perspectives on their use
include:
• Counting logical lines: only code that has meaning (is "executable") to the
computer is counted, since that directly impacts performance, memory use, and
error density.
• Counting all physical lines: includes executable code, comments, and blank lines. Since
comments improve maintainability and blank lines improve legibility, the effort
programmers take to include them is reflected in the SLOC count.
• Counting physical lines excluding comments and blanks: comments and blank lines are
dropped from the count. However, programmers frequently leave inactive "debug" code
(often commented out) to improve later testability; under this criterion, it would not be counted.
To recap: Counting Logical Lines of Code includes only the lines that have meaning to the
computer. Counting Physical Lines of Code includes all lines, even blank ones and
comments. Excluding Comments and Blank Lines counts physical lines but drops blank
lines and comments from the count.
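As a rough illustration of how much the criterion matters, the sketch below (Python; the "--" comment marker and the one-statement-per-";" rule are Ada-style assumptions) counts the same source text all three ways.

def sloc_counts(source: str) -> dict:
    """Count one source file under the three criteria discussed above."""
    lines = source.splitlines()
    stripped = [ln.strip() for ln in lines]
    return {
        # Physical lines: everything, including blanks and comments.
        "physical": len(lines),
        # Physical lines excluding comments and blanks.
        "excluding_comments_and_blanks": sum(
            1 for s in stripped if s and not s.startswith("--")),
        # Logical lines: approximately one statement per ";" (Ada/C++/Pascal style).
        "logical": source.count(";"),
    }

Running all three counters over the same module can produce substantially different totals, which is why a project must pick one criterion and apply it consistently.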
Choosing a Method
SLOC counts ultimately impact all three of the classic Cost, Schedule, and
Performance measures commonly used to judge project success.
Select each measure below to see how they are impacted by SLOC.
Error Density = Errors / KSLOC
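As a hypothetical worked example, a 250 KSLOC program in which 1,000 errors have been found would have an Error Density of 1,000 / 250 = 4.0 E/KSLOC.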
Functional Complexity
Suppose you counted the number of rivets in an aircraft. Would a greater number of
rivets make a more complex aircraft? Select the more complex aircraft.
Two images of the side of an aircraft. One has more rivets than the other.
No matter which image you choose, the feedback says "The number of rivets does not
necessarily correspond to the complexity of an aircraft! Similarly, a high SLOC count
does not always indicate a more complex software package."
Function Points
If you want to correlate a system's functional complexity to the ultimate
size of its software, another metric, called Function Points (FPs), can be
used. As opposed to SLOC, which is a low-level measure of size, Function
Points are based on what a system actually does.
No matter which image you click, the feedback says, "This approach would result in the
air superiority fighter having a much higher measure of complexity than that of a crop-
duster, even though both might have approximately the same number of rivets."
Function Points (FPs):
• Should be identical for two systems that perform the same tasks
• Determined Early: FPs are established early in the system life cycle and are thus
useful for cost/schedule estimating.
Function Points are weighted sums of counts of different factors. These factors
relate to overall system requirements and can include:
• Number of Inputs
• Number of Outputs
• Number of Inquiries
• Number of Interfaces
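A minimal sketch of the weighted-sum idea follows (Python). The weights shown are illustrative values resembling published average FP weights, not the full complexity-adjusted weighting a real Function Point count would use.

# Assumed weights per counted item; real counts adjust weights for complexity.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "interfaces": 7}

def unadjusted_function_points(**counts: int) -> int:
    """Weighted sum of the requirement-level factors listed above."""
    return sum(WEIGHTS[item] * n for item, n in counts.items())

# Example: 20 inputs, 15 outputs, 10 inquiries, 5 interfaces
# -> 20*4 + 15*5 + 10*4 + 5*7 = 230 unadjusted FPs.
print(unadjusted_function_points(inputs=20, outputs=15, inquiries=10, interfaces=5))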
Even though SLOC and Function Points are based on radically different
approaches, some researchers have been able to determine approximate
equivalents for the two counting methodologies based on the programming
language used.
Click the image of a ruler to see a chart showing the average number of statements
required in each programming language to accomplish one function point.
Assembler: 320
C: 128
COBOL: 107
Ada: 71
DB Languages: 40
Object Oriented: 29
Query Languages: 25
Generators: 16
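Using the ratios in the chart, an early FP-based estimate can be converted into an approximate SLOC figure once a programming language is chosen. A minimal sketch (Python), assuming the chart's average values:

# Average statements per Function Point, taken from the chart above.
SLOC_PER_FP = {"Assembler": 320, "C": 128, "COBOL": 107, "Ada": 71,
               "DB Languages": 40, "Object Oriented": 29,
               "Query Languages": 25, "Generators": 16}

def estimate_sloc(function_points: int, language: str) -> int:
    """Rough SLOC estimate from a Function Point count."""
    return function_points * SLOC_PER_FP[language]

# Example: a 500-FP system is roughly 35,500 SLOC in Ada but 64,000 SLOC in C.
print(estimate_sloc(500, "Ada"), estimate_sloc(500, "C"))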
Many heated technical arguments have occurred in trying to answer the question, "Should
SLOC or Function Points be used?" The best answer, as illustrated below, is to use both.
Two robots labeled SLOC and FP boxing. The FP boxer is getting the better of the fight.
Function Points are useful during the early stages of a project, when top-level requirements
have been defined, but little is known about the detailed design. At this stage, SLOC takes a
beating! Function Points can be accurately assessed, but SLOC estimates may be
significantly in error.
The SLOC robot begins to fight back and overtake the FP robot.
SLOC makes a comeback as the software design matures and more details are defined. As
software modules are coded, estimated SLOC values can be compared to actual counts and
adjusted accordingly. Because SLOC is a low-level measure, it enables detailed project
tracking as programming progresses.
What statement describes the SLOC method of determining software size?
Learning Objective
This section introduces the most commonly used software quality measure, Error
Density.
Once you have completed this topic, you will be able to define Error Density and its role
as a software quality factor.
Counting Errors
Software errors can be counted in a variety of ways. In many Department
of Defense (DoD) projects, a Software Problem Report (SPR) or a
Software Test Report (STR) is generated to:
Summary
Error Density is the most commonly used measure of software quality. It
is calculated by dividing the number of errors present by the software
size. It is important to calculate Error Density values as accurately as
possible, because the PMO uses them as the basis of many project decisions.
Learning Objectives
This section discusses software quality factors and their attributes. Research has been able
to identify many software quality attributes and link them to various programming
techniques. By evaluating characteristics of the software itself, this linkage provides an
indirect way to quantify software quality.
When you have completed this topic, you will be able to list and define typical information
and software quality factors and ways that they are measured.
At the top level are project-specific quality factors that are important to the acquirer.
The middle level includes software-oriented attributes, which technically support top-level
quality requirements.
At the bottom are technical measures that determine the degree to which quality attributes
are present.
• It is better to assess a select few than to try to assess all attributes that
have been defined.
• Consider the tradeoffs associated with a given project. For example, cost
and schedule pressures may keep you from using some attributes.
Many individuals from a variety of organizations and disciplines have conducted research on
software quality attributes.
This research, which started in the late 1970's and still continues, focuses on answering the
following questions:
Results of research on software quality factors and their attributes are described
in a Framework Guidebook published by what was then the Rome Air
Development Center. An international standard, ISO 9126, also currently
provides similar guidance.
Many of these quality factors have a suffix of "ility," so quality attributes are often
referred to as the software quality "ilities."
Reliability—Does the software accurately do what I want all of the time? The extent to
which the software will perform without any failures within a specified time period.
Survivability—If some of the system breaks, will the software continue to function?
Embedded systems emphasize:
• Efficiency
• Survivability
• Reliability
Other quality factors include Portability, Maintainability, and Usability.
Some software quality factors are "universal" and used across several categories of
software-intensive systems. Other software quality factors tend to be emphasized more in
one type of system because of their technical characteristics and operational requirements.
Integrity, Expandability, and Interoperability are further examples of such factors.
The graphic outlines possible responses to the previous Knowledge Reviews (KRs).
Quality factors for Embedded Systems:
• Efficiency
• Reliability
• Survivability
• Accurately measure the attributes that make up each factor (Unlike Error
Density, determining values for those other quality attributes can be
highly technical.)
• The programmer used (or chose not to use) technical features of the
language when coding the module
Maintainability Example
Software design impacts quality factors. For example, the number of comments in
the code influences the program's maintainability: the more comments, the easier the
software is to maintain. In this example, "maintainability" can be calculated by taking the
percentage of commented lines of code and assigning a rating to it, as sketched below.
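A minimal sketch of such a calculation (Python; the "--" comment marker and the rating thresholds are assumptions for illustration, not standard values):

def comment_density(source: str, marker: str = "--") -> float:
    """Percentage of non-blank lines that are comments."""
    non_blank = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not non_blank:
        return 0.0
    comments = sum(1 for ln in non_blank if ln.startswith(marker))
    return 100.0 * comments / len(non_blank)

def maintainability_rating(density_pct: float) -> str:
    # Hypothetical thresholds; a project would set its own in the
    # Programming Style Guide.
    if density_pct >= 25.0:
        return "Good"
    if density_pct >= 10.0:
        return "Acceptable"
    return "Poor"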
Style Guides
A supplier-produced Programming Style Guide is typically used by project
programmers. Style Guides:
Software quality factors are broken into attributes that are then measured.
• Project tradeoffs such as cost and schedule drivers can influence choice of
quality factors.
The Department of Defense (DoD) does not dictate that any specific SQA system be
used; however, there are some key activities and processes present in all
effective Software Quality Management programs.
Learning Objectives
This section presents an overview of a Software Quality Assurance (SQA) program. It
includes:
Once you have completed this lesson, you will be able to define SQA and outline its key
processes.
Flexible QA Process
A Program Manager (PM) cannot mandate the specific quality processes used by
a Supplier. The Defense Acquisition System Policies state that:
However they are organized, software standards recommend that the SQA
group should have a formal reporting channel to senior management. This
reporting channel should be:
• Documented
• Approved by management
Organization
A plan should include information about the organization responsible for the quality
management program, the relationship of the quality management organization to other
organizational entities (such as configuration management), and the number and skill levels
of personnel who perform the software quality management activities.
Furnished Items
The Quality Assurance Plan should identify Government furnished facilities, equipment,
software, and services (including manufacturer, model number, and equipment
configuration) to be used in quality management.
Schedule
The Quality Assurance Plan should provide a detailed schedule for quality management
activities. The schedule should include activity initiation, activity completion, and personnel
responsible, as well as key development milestones such as formal reviews, audits, and
delivery of items on the contract data requirements list.
Implementations
The plan addresses all tasks to be performed by the supplier in conducting a quality
management program. It includes the procedures to be used in the quality management
program and in conducting continuous assessments of the development process. It should
also describe the tools and measures that will be used to conduct the program.
Records
The Quality Assurance Plan includes the supplier's plans for preparing, maintaining, and
making available for Government review the records of each quality management program
activity performed.
Resources
The Quality Assurance Plan will identify any subcontractors, vendors, or other resources to
be used by the supplier to fulfill the development requirements of the prime contract.
SQAP Formats
Because DoD policy is to rely on a contractor's internal quality processes,
acquirers do not specify the format of a supplier's SQA plans.
There are a variety of formats a supplier's SQAP can take; IEEE standard 730
provides one commonly used format.
I. Purpose
II. References
III. Management
IV. Documentation
VII. Testing
XII. Training
These quality activities apply to any type of project, not only software-intensive
ones.
• True
It is much less costly to prevent defects than to 'inspect quality in' after
development is done.
The choice of methods employed on a specific project depends on the size and
complexity of the software, the amount of new code developed, system and
software risks, and available budget and time.
Many such methods are available that impact software quality. This lesson surveys
some of them.
Learning Objectives
This section introduces methods and techniques that can be used by government
and industry to help improve software quality. Once you have completed this topic,
you will be able to describe them and how they can be used to improve software
quality.
This section introduces eight methods used in industry and government to assure
quality software products.
1. Peer Reviews
2. Walkthroughs
3. Formal Inspections
4. Cleanroom
5. Formal Specifications
6. Reviews and Audits
7. IV&V
8. Developmental and Operational Testing
Popup Text:
Method 2: Walkthroughs
Popup Text:
Method 3: Formal Inspections
The formal inspection process uses checklists and requires follow-up to ensure that
the defects and errors found are corrected. This process assures fewer future
defects and leads to improved process efficiency. Metrics on the number and type
of defects found, and the time spent in inspection are collected and recorded.
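A minimal sketch of the kind of record such metrics collection might produce (Python; the field names and example values are illustrative assumptions):

from dataclasses import dataclass, field

@dataclass
class InspectionRecord:
    """Metrics captured for one formal inspection."""
    work_product: str
    hours_spent: float
    defects_by_type: dict = field(default_factory=dict)  # e.g., {"logic": 3}

    @property
    def total_defects(self) -> int:
        return sum(self.defects_by_type.values())

# Example: an inspection of a (hypothetical) design document.
rec = InspectionRecord("guidance_control_design", hours_spent=6.5,
                       defects_by_type={"logic": 4, "standards": 2})
print(rec.total_defects)  # 6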
Popup Text:
Inspectors are formally trained and should be neutral, not involved in the product
development. The product author's role in the process is only to answer questions
and provide clarification to the inspection team. Each member of the inspection
team has a defined role.
The READER reads the material being inspected. The MODERATOR controls the
process flow of the inspection. The AUTHOR answers technical questions. The
RECORDER takes action notes.
Method 4: Cleanroom
The goal of cleanroom software engineering is defect prevention, rather than defect
removal. Proof of correctness is used to prevent defects. The emphasis shifts from
removing defects from software products to preventing the introduction of defects
into the products.
The objectives of the cleanroom approach are to engineer software products under
statistical quality control, using mathematical verification rather than debugging, and
to certify quality statistically through user testing at the system level and
reliability predictions (e.g., mean time between failures).
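As a simple illustration of the reliability-prediction side, a point estimate of mean time between failures can be computed from system-level usage test data. This is a minimal sketch (Python), not the full statistical certification process:

def mean_time_between_failures(operating_hours: float, failures: int) -> float:
    """Point estimate of MTBF from system-level usage testing."""
    if failures == 0:
        raise ValueError("no failures observed; use an interval estimate instead")
    return operating_hours / failures

# Example: 1,200 hours of usage testing with 4 failures -> MTBF of 300 hours.
print(mean_time_between_failures(1200.0, 4))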
Because of their cost, formal specifications are rarely used. Requirements must be well
understood, and the system must be critical enough to justify a substantial
investment in time and resources. Some DoD nuclear-critical and cryptographic
systems may fall into this category.
Popup Text:
Periodic reviews and audits provide another way to gain insight into the quality of a
software product. They involve both the acquirer and the supplier in the process.
Audits are performed late in the development cycle, when there is an actual product
to be evaluated. As such, they are more useful as a final quality check than as an
error prevention technique.
Popup Text:
This review is conducted for each Configuration Item (CI) or aggregate of CIs. Risk
management actions and the results of risk mitigation activities are evaluated. A
system-level Preliminary Design Review (PDR) may be conducted upon completion of all CI PDRs.
For a Supplier using a waterfall approach, this is the point when the detailed design
documentation is released to fabricate, integrate, and assemble hardware
qualification units and to code and integrate the software qualification units.
A system-level Critical Design Review (CDR) may be conducted after the CI CDRs have been
completed to review the progress of system development.
Reviews and audits are agreed upon during contract negotiation. The use of
Integrated Project Teams (IPTs) may eliminate the need for many traditional formal
reviews.
Other categories of common reviews and audits, performed as part of the systems
engineering process, are listed below.
Subsystem Reviews
Functional Reviews
Support Reviews
Training Reviews
Manufacturing Reviews
Popup Text:
Physical Configuration Audits (PCA) are formal evaluations of the as-built version of
a configuration item against its design documentation.
Physical Configuration Audits answer the question: Does the product as produced
conform to the design?
Subsystem Reviews
Subsystem reviews are held to assure that all requirements, including interface
requirements, for the
subsystem have been identified, balanced across prime mission products, and met.
These reviews allow subsystem review team members to address issues and assess
progress of a subsystem or configuration item (CI).
This review will determine whether the specifications form a satisfactory basis for
proceeding into preliminary software design.
Test procedures are evaluated for compliance with test plans and descriptions, and
for adequacy to accomplish testing requirements.
Functional Reviews
Functional areas include Training, Test, and Manufacturing.
These reviews assess the functional area's status in satisfying prime mission
product requirements, surface issues, and support the development of required functional
plans and procedures.
Support Reviews
Interface issues
Manufacturing Reviews
These reviews assess manufacturing concerns, such as the need to identify high
risk/low yield manufacturing processes or materials, and the manufacturing efforts
necessary to satisfy design requirements.
Independent Verification and Validation (IV&V) was born during the early days of
the space and missile programs. Both NASA and the Department of Defense (DoD)
realized that software developed for spacecraft and missile systems, as well as
other "safety-critical" systems that had to perform correctly the first time, required
special scrutiny.
For those systems that are safety-critical (where failures can cause loss of life,
significant damage, or security compromises), resources and time are set aside for
an organization or contractor, financially and managerially independent from the
software supplier, to perform several key functions for the PMO. These are to:
Popup Text:
Validation comprises evaluation, integration and test activities carried out at the
system level to ensure that the system developed satisfies the operational
requirements of the system specification.
Popup Text:
Developmental Testing
The purpose of developmental testing is to ensure that the system and the software
comprising it function in accordance with various technical requirements
documents. These have a variety of names depending on the System Engineering
approach being used. Some of these technical requirements documents typically
include:
There are various stages in the developmental test process. Some of the "lower
level" tests are performed by the supplier or software developer. Depending on the
test strategy and the type of system, as it is progressively integrated in a
"bottom-up" process, a government-industry team may perform integrated
developmental testing at the subsystem and system level. In other cases,
government agencies perform some of these tests.
Information and Software Quality Management: Methods and Techniques
Once a software configuration item (SCI) has been designed, it enters into Software
Coding and Testing.
During Software Coding and Testing, the Supplier evaluates and documents the
software code and test results for each Software Unit (SU) comprising the SCI
considering specific criteria.
The result of the Software Coding and Testing process is executable and tested
code for all the Software Units comprising a given SCI.
Popup Text:
Performing SU testing
specific criteria
Testers consider the following criteria when evaluating software code and test
results:
Software Integration
After coding and testing each SU comprising the SCI, the next step is Software
Integration. The purpose of Software Integration is to ensure the SUs comprising the
SCI work together as intended.
The supplier evaluates and documents the integration plan, design, code, test, test
results, and user documentation using specific criteria.
Popup Text:
specific criteria
Testers consider the following criteria when evaluating the integration plan, design,
code, test, test results, and user documentation:
The next step is to run each Software Configuration Item (SCI) through Software
Qualification Testing. This type of testing demonstrates to the acquirer that the SCI
meets the software requirements that have been allocated to it as part of the
Systems Engineering process.
The supplier evaluates and documents the design, code, test, test results, and user
documentation considering specific criteria.
Software Qualification Testing produces a qualified product baseline for each SCI.
Popup Text:
specific criteria
Testers consider the following criteria when evaluating the design, code, test, test
results, and user documentation:
System Integration
The supplier evaluates and documents the integrated system against specific
criteria.
The result of System Integration is a product baseline for the system that is ready
for system qualification testing.
Popup Text:
specific criteria
Testers consider the following criteria when evaluating the integrated system:
Test coverage of system requirements
The result of System Qualification Testing (SQT) is a qualified product baseline for the
system. The system is now ready to go before the Milestone C Decision Review.
Popup Text:
specific criteria
Testers consider the following criteria when evaluating system qualification testing
results:
Operational Testing
Popup Text:
Operational testing criteria include:
• Usability
• Effectiveness
• Software maturity
Terms
Definitions
B. A small, critical piece of software used in a DoD multi-level security system must
be error-free. A proof of correctness of the software is desired.
C. End-users of the system want to verify that the system has appropriate functionality
and is usable in its intended environment.
This type of testing is performed to demonstrate to the acquirer that the software
configuration item's requirements have been met in accordance with its software
requirements specification (SRS).
Summary
This topic presented a range of methods and techniques that can be used to help
improve software quality. Because in some cases their use depends on the type of
system under development, not all these techniques are usable on every system.
Peer Reviews and Walkthroughs are commonly used by nearly all software
developers. Those with more mature development processes generally use the
more effective Formal Inspections, which are rigorous and require a trained cadre.
All of these techniques emphasize the detection and elimination of errors early in
the lifecycle, when they are the easiest and cheapest to remove.
Reviews and audits should always be used in system and software development.
The use of Developmental and Operational Testing is universal as well.