STQA Unit 3 Notes
Figure shows the software measurement process. The activities required to design a measurement process using this architecture are:
Developing a measurement process to be made available as part of the organization's standard software process;
Planning the process on projects and documenting procedures by tailoring and adapting the process asset;
Implementing the process on projects by executing the plans and procedures; and
Improving the process by evolving plans and procedures as the projects mature and their measurement needs change.
Ratio Scale
A measurement mapping that preserves ordering, the size of intervals between entities, and ratios between entities. In a ratio scale, contrary to the interval scale, there is an absolute and non-arbitrary zero point.
E.g. project A took twice as long as project B.
Absolute Scale
Absolute scale is the most informative in the measurement scale hierarchy [10]. In an absolute scale, measurement is done by counting.
E.g. the number of failures observed during integration testing can be measured only by counting the number of failures observed.
Challenges with Software Measurement
Unambiguous measurement is vital in the software development life cycle. There should be standardized measures which can:
Be used as a common baseline for measurement;
Offer a point of reference for software measurers to verify their measurement results and their ability to measure the same reference material;
Allow measurers to use the related reference concept, and thus to speak at the same level.
However, measurement of software is challenging because of the following reasons:
Software is an intangible product. It is an atypical product when compared to other industrial products, in that it varies greatly in terms of size, complexity, design techniques, test methods, applicability, etc.
There is little consensus on specific measures of software attributes, as illustrated by the scarcity of international standard measures for software attributes such as software complexity and quality.
Software development is so complex that all models are weak approximations and are hard to validate.
Measurements are not common to all projects and organizations. Measures that work for one project may not be applicable to another.
Software Metrics
Productivity
Attributes of Effective Software Metrics
achieved during the software development process. As a quality assurance process, a metric needs to be revalidated every time it is used. Two leading firms, namely IBM and Hewlett-Packard, have placed a great deal of importance on software quality. IBM measures user satisfaction and software acceptability in eight dimensions: capability or functionality, usability, performance, reliability, ability to be installed, maintainability, documentation, and availability. For its software quality metrics, Hewlett-Packard normally follows the five Juran quality parameters, namely functionality, usability, reliability, performance and serviceability. In general, for most software quality assurance systems the common software metrics that are checked for improvement are source lines of code, cyclomatic complexity of the code, function point analysis, bugs per line of code, code coverage, number of classes and interfaces, cohesion and coupling between the modules, etc.
Common software metrics include:
Bugs per line of code
Code coverage
Cohesion
Coupling
Cyclomatic complexity
Function point analysis
Number of classes and interfaces
Number of lines of customer requirements
Order of growth
Source lines of code
Robert Cecil Martin's software package metrics
Software quality metrics focus on the process, project and product. By analyzing the metrics, the organization can take corrective action to fix those areas in the process, project or product which are the cause of the software defects.
The de facto definition of software quality consists of two major attributes: intrinsic product quality and user acceptability. A software quality metric encapsulates these two attributes, addressing the mean time to failure and the defect density within the software components, and finally assesses user requirements and acceptability of the software. The intrinsic quality of a software product is generally measured by the number of functional defects in the software, often referred to as bugs, or by testing the software in run-time mode for inherent vulnerability to determine the software "crash" scenarios. In operational terms, the two metrics are often described as defect density (rate) and mean time to failure (MTTF).
Although there are many measures of software quality, correctness, maintainability, integrity and usability provide useful insight.
Correctness
A program must operate correctly. Correctness is the degree to which the software performs the required functions accurately. One of the most common measures is defects per KLOC, where KLOC means thousands (kilo) of lines of code. KLOC is a way of measuring the size of a computer program by counting the number of lines of source code the program has.
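As an illustration of how defects per KLOC might be computed, here is a minimal sketch in Python; the function name and the sample figures are assumptions for illustration, not part of these notes.

    def defects_per_kloc(defect_count: int, lines_of_code: int) -> float:
        """Correctness metric: defects normalized to thousands of lines of code."""
        kloc = lines_of_code / 1000.0
        return defect_count / kloc

    # Example: 45 defects reported against a 30,000-line program
    print(defects_per_kloc(45, 30_000))  # -> 1.5 defects per KLOC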
Maintainability
Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct way of measuring this, an indirect measure has to be used. MTTC (mean time to change) is one such measure: when an error is found, it measures how much time it takes to analyze the change, design the modification, implement it and test it.
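One simple way to picture MTTC is to average the elapsed analyze-design-implement-test time over a set of changes. The sketch below is only an illustration; the function name and sample durations are invented.

    def mean_time_to_change(change_durations_hours):
        """MTTC: average elapsed time (analyze + design + implement + test) per change."""
        return sum(change_durations_hours) / len(change_durations_hours)

    # Hypothetical elapsed hours for four corrective changes
    print(mean_time_to_change([12.0, 8.5, 20.0, 15.5]))  # -> 14.0 hours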
Integrity
This measures the system's ability to withstand attacks on its security. In order to measure integrity, two additional parameters, threat and security, need to be defined. Threat is the probability that an attack of a certain type will occur over a period of time. Security is the probability that an attack of a certain type will be repelled over a period of time.
Integrity = Summation [(1 - threat) X (1 - security)]
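Taking the formula above at face value, a small sketch of the computation might look like this; the attack types and probabilities are made up for illustration.

    def integrity(attack_profile):
        """Integrity = sum over attack types of (1 - threat) * (1 - security),
        using the formula exactly as stated in these notes."""
        return sum((1 - threat) * (1 - security) for threat, security in attack_profile)

    # (threat, security) pairs for two hypothetical attack types
    print(round(integrity([(0.25, 0.95), (0.10, 0.90)]), 4))  # -> 0.1275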
Usability
How usable is your software application? This important characteristic of your application is measured in terms of the following:
- Physical / intellectual skill required to learn the system
- Time required to become moderately efficient in the system
- The net increase in productivity by use of the new system
- Subjective assessment (usually in the form of a questionnaire on the new system)
Standard for the Software Evaluation
In the context of software quality metrics, one of the popular standards that addresses the quality model, external metrics, internal metrics and the quality-in-use metrics for the software development process is ISO 9126.
Defect Removal Efficiency
Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the DRE is low during analysis and design, it means you should spend time improving the way you conduct formal technical reviews.
DRE = E / (E + D)
where E = number of errors found before delivery of the software and D = number of errors found after delivery of the software.
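A direct translation of this formula into code could look like the following sketch; the function name and the numbers are purely illustrative.

    def defect_removal_efficiency(errors_before_delivery: int, errors_after_delivery: int) -> float:
        """DRE = E / (E + D): the closer to 1, the better the pre-delivery filtering."""
        e, d = errors_before_delivery, errors_after_delivery
        return e / (e + d)

    # 90 errors caught before release, 10 reported by customers afterwards
    print(defect_removal_efficiency(90, 10))  # -> 0.9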
The ideal value of DRE is 1, which means no defects are found after delivery. If you score low on DRE, you need to re-look at your existing process. In essence, DRE is an indicator of the filtering ability of quality control and quality assurance activities. It encourages the team to find as many defects as possible before they are passed to the next activity stage. Some of the metrics are listed here (a computation sketch follows the list):
Test Coverage = Number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = Number of test cases per KLOC/FP
Defects per size = Defects detected / system size
Cost to locate defect = Cost of testing / the number of defects located
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production / system size
Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100
System complaints = Number of third-party complaints / number of transactions processed
Effort Productivity:
Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation
Test Execution Productivity = No. of test cycles executed / Actual effort for testing
Test efficiency = Number of tests required / the number of system errors
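To make a few of these concrete, here is a small sketch computing test coverage, defects per size, and cost to locate a defect; the function names and sample values are assumptions made for illustration only.

    def test_coverage(units_tested_kloc: float, system_size_kloc: float) -> float:
        """Test Coverage = units (KLOC) tested / total size of the system."""
        return units_tested_kloc / system_size_kloc

    def defects_per_size(defects_detected: int, system_size_kloc: float) -> float:
        """Defects per size = defects detected / system size."""
        return defects_detected / system_size_kloc

    def cost_to_locate_defect(cost_of_testing: float, defects_located: int) -> float:
        """Cost to locate defect = cost of testing / number of defects located."""
        return cost_of_testing / defects_located

    # Illustrative figures: 24 of 30 KLOC exercised by tests, 60 defects, 18,000 spent on testing
    print(test_coverage(24, 30))              # -> 0.8
    print(defects_per_size(60, 30))           # -> 2.0 defects per KLOC
    print(cost_to_locate_defect(18000, 60))   # -> 300.0 per defect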
Measure / Metrics:

2. Delivered defect quantities:
Normalized per function point (or per LOC)
At product delivery (first 3 months or first year of operation)
Ongoing (per year of operation)
By level of severity
By category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users:
Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery

7. Complexity of delivered product:
McCabe's cyclomatic complexity counts across the system (see the sketch after this table)
Halstead's measure
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

9. Cost of defects:
Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability

Customer satisfaction
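Since McCabe's cyclomatic complexity appears in the table above, a minimal sketch of how it is typically computed from a control flow graph, V(G) = E - N + 2P, is shown below; the example graph is invented.

    def cyclomatic_complexity(edges: int, nodes: int, connected_components: int = 1) -> int:
        """McCabe's metric: V(G) = E - N + 2P for a control flow graph."""
        return edges - nodes + 2 * connected_components

    # A hypothetical routine whose control flow graph has 9 edges and 7 nodes
    print(cyclomatic_complexity(edges=9, nodes=7))  # -> 4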
The MTTF metric is most often used with safety-critical systems such as airline traffic control systems, avionics and weapons.
An error is a human mistake that results in incorrect software.
The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.
A defect is an anomaly in a product.
A failure occurs when a functional unit of a software-related system can no longer perform its required function or cannot perform it within specified limits.
From the data-gathering perspective, time-between-failures data is much more expensive. It requires the failure occurrence time for each software failure to be recorded. To be useful, time-between-failures data also requires a high degree of accuracy. This is perhaps the reason the MTTF metric is not used widely by commercial developers.
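As a rough illustration of why time-between-failures data needs a timestamp for every failure, the sketch below derives an MTTF estimate from a list of recorded failure times; the timestamps are invented.

    def mean_time_to_failure(failure_times_hours):
        """MTTF estimated as the average interval between successive recorded failures."""
        intervals = [t2 - t1 for t1, t2 in zip(failure_times_hours, failure_times_hours[1:])]
        return sum(intervals) / len(intervals)

    # Cumulative operating hours at which each failure was observed
    print(mean_time_to_failure([100.0, 250.0, 340.0, 520.0]))  # -> 140.0 hours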
2. Defect density metric: To compare the defect rates of software products, various issues have to be addressed. To calculate the defect rate for the new and changed code, the following must be done:
LOC count: The entire software product, as well as the new and changed code, must be counted for each release of the product.
Defect tracking: Defects must be tracked to the release origin, i.e., the portion of the code that contains the defects and the release at which that portion was added, changed or enhanced. When calculating the defect rate of the entire product, all defects are used; when calculating the defect rate for the new and changed code, only defects of the release origin of the new and changed code are included.
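A minimal sketch of this bookkeeping might look like the following; the defect records, release names and KLOC figure are all assumptions made for illustration.

    def defect_rate_new_and_changed(defects, release, new_and_changed_kloc):
        """Defect rate for new and changed code: only defects whose release origin
        is the current release are counted, normalized to new/changed KLOC."""
        origin_defects = [d for d in defects if d["release_origin"] == release]
        return len(origin_defects) / new_and_changed_kloc

    # Hypothetical defect records tracked with their release origin
    defects = [
        {"id": 1, "release_origin": "R3"},
        {"id": 2, "release_origin": "R2"},   # introduced in an earlier release's code
        {"id": 3, "release_origin": "R3"},
    ]
    print(defect_rate_new_and_changed(defects, "R3", new_and_changed_kloc=4.0))  # -> 0.5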
3. Customer's perspective: The defect rate metric measures code quality on a per-unit basis. Good practice in software quality engineering, however, also needs to consider the customer's perspective. From the customer's point of view, the defect rate is not as relevant as the total number of defects he/she is going to encounter. Therefore, it becomes necessary to reduce the number of defects from release to release irrespective of the change in size.
4. Customer problems metric: Another product metric that is used by major developers in the software industry measures the problems customers encounter when using the product.
Problems per user month (PUM) = Total problems that customers reported for a time period / Total number of license-months of the software during the period
where
Number of license-months = Number of installed licenses of the software * Number of months in the calculation period.
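The arithmetic behind PUM is simple enough to show directly; the sketch below and its figures are illustrative only.

    def problems_per_user_month(total_problems: int, installed_licenses: int, months: int) -> float:
        """PUM = reported problems / (installed licenses * months in the period)."""
        license_months = installed_licenses * months
        return total_problems / license_months

    # 480 problems reported against 2,000 installed licenses over a 3-month period
    print(problems_per_user_month(480, 2000, 3))  # -> 0.08 problems per user-month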
To achieve a low PUM, approaches that can be taken are:
Improve the development process and reduce the product defects
Reduce the non-defect-oriented problems by improving all aspects of the products (such as usability, documentation), customer education, and support
Increase the sale (the number of installed licenses) of the product.
12. Reliability
Availability (percentage of time a system is available, versus the time
the system is needed to be available)
Mean time between failure (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs
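These reliability figures are commonly derived from MTBF and MTTR. The following sketch shows one conventional way to compute availability together with the ratio listed above; the numbers are invented for illustration.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """One common availability estimate: fraction of time the system is up,
        MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def reliability_ratio(mtbf_hours: float, mttr_hours: float) -> float:
        """Reliability ratio as listed in these notes: MTBF / MTTR."""
        return mtbf_hours / mttr_hours

    # Hypothetical figures: a failure every 400 hours on average, 2 hours to repair
    print(availability(400, 2))        # -> ~0.995
    print(reliability_ratio(400, 2))   # -> 200.0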
Analyze data.
The importance of the validation element of a data collection system or a development tracking
system cannot be overemphasized.
In their study of NASA's Software Engineering Laboratory projects, Basili and Weiss
(1984) found that software data are error-prone and that special validation provisions are
generally needed. Validation should be performed concurrently with software development and
data collection, based on interviews with those people supplying the data. In cases where data
collection is part of the configuration control process and automated tools are available, data
validation routines (e.g., consistency check, range limits, conditional entries, etc.) should be an
integral part of the tools. Furthermore, training, clear guidelines and instructions, and an
understanding of how the data are used by people who enter or collect the data enhance data
accuracy significantly.
The actual collection process can take several basic formats such as reporting forms,
interviews, and automatic collection using the computer system. For data collection to be
efficient and effective, it should be merged with the configuration management or change control
system. This is the case in most large development organizations. For example, at IBM
Rochester the change control system covers the entire development process, and online tools are
used for plan change control, development items and changes, integration, and change control
after integration (defect fixes). The tools capture data pertinent to schedule, resource, and project
status, as well as quality indicators. In general, change control is more prevalent after the code is
integrated. This is one of the reasons that in many organizations defect data are usually available
for the testing phases but not for the design and coding phases.
With regard to defect data, testing defects are generally more reliable than inspection
defects. During testing, a "bug" exists when a test case cannot execute or when the test results
deviate from the expected outcome. During inspections, the determination of a defect is based on
the judgment of the inspectors. Therefore, it is important to have a clear definition of an
inspection defect. The following is an example of such a definition:
Inspection defect: A problem found during the inspection process which, if not fixed, would
cause one or more of the following to occur:
A field defect
For example, misspelled words are not counted as defects, but would be if they were found on a
screen that customers use. Using nested IF-THEN-ELSE structures instead of a SELECT
statement would not be counted as a defect unless some standard or performance reason dictated
otherwise.
Figure 4.7 is an example of an inspection summary form. The form records the total number of
inspection defects and the LOC estimate for each part (module), as well as defect data classified
by defect origin and defect type. The following guideline pertains to the defect type classification
by development phase:
Interface defect: An interface defect is a defect in the way two separate pieces of logic
communicate. These are errors in communication between:
Components
Products
Modules and subroutines of a component
User interface (e.g., messages, panels)
Examples of interface defects per development phase follow.
Code (I2)
Passing wrong values for parameters on macros, application program interfaces (APIs), modules
Setting up a common control block/area used by another piece of code incorrectly
Not issuing the correct exception to the caller of the code
Logic defect: A logic defect is one that would cause incorrect results in the function to be
performed by the logic. High-level categories of this type of defect are as follows:
Function: capability not implemented or implemented incorrectly
Assignment: initialization
Checking: validate data/values before use
Timing: management of shared/real-time resources
Data Structures: static and dynamic definition of data
Examples of logic defects per development phase follow.
Code (I2)
Code does not implement I1 design
Lack of initialization
Variables initialized incorrectly
Missing exception monitors
Exception monitors in wrong order
Exception monitors not active
Exception monitors active at the wrong time
Exception monitors set up wrong
Truncating double-byte character set data incorrectly (e.g., truncating before the shift-in character)
Documentation defect: A documentation defect is a defect in the description of the function (e.g.,
prologue of macro) that causes someone to do something wrong based on this information. For
example, if a macro prologue contained an incorrect description of a parameter that caused the
user of this macro to use the parameter incorrectly, this would be a documentation defect against
the macro.
Code (I2)
Information in prologue not correct or missing
Wrong wording in messages
UNIT IV
improvement. It can be complemented with any process improvement model or can be used as a stand-alone model.
TMM has two major components:
1.
2.
TMM Levels / Goals
Level 1: Initial
Level 2: Defined - ... are in place
Level 3: Integrated
Level 4: Management and Measurement - Establish a test measurement program
Level 5: Optimized
TMM