Software Quality Assurance (SQA)
Lecture 4
Outline
Introduction
Software Life Cycle
Quality Control
Infrastructure
Management
Standards
Conclusion & Summary
Management
Project Progress Control
Quality Metrics
Cost of Quality
Discussion & Summary
1. Project Progress Control
Overview:
The components of project progress control
Progress control of internal projects and external
participants
Implementation of project progress control regimes
Computerized tools for software progress control
Components of Project Progress
Control
Control of risk management activities
Project schedule control
Project resource control
Project budget control
Control of Risk Management Activities
Project Resource Control
Resource control primarily focuses on professional human
resources but applies to other assets too.
Real-time software and firmware projects require strict resource
monitoring through periodic reports.
Small deviations in resource use can mask larger cumulative
impacts if project progress is delayed.
Internal allocation reviews help detect budget strains, such as
imbalanced senior vs. junior analyst hours.
Early detection of deviations allows for resource reallocation,
team reorganization, or project plan revisions.
Project Budget Control
Main budget items:
Human resources
Development and testing facilities
Purchase of COTS software
Purchase of hardware
Payments to subcontractors
Control is based on the milestone reports and other
periodic reports
Usually budget control has the highest priority,
but only the combination of all control aspects
ensures the required coverage of risks
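The milestone and periodic reports above lend themselves to simple automated checks. The sketch below (illustrative data layout and a hypothetical 10% threshold, not taken from the lecture) flags budget or resource items whose actual use deviates from the plan, the kind of early detection that triggers reallocation or plan revision:

    # Minimal sketch of milestone-based budget/resource deviation checking.
    # Names, data layout, and the 10% threshold are illustrative assumptions,
    # not part of any specific progress-control standard.

    def find_deviations(milestone_reports, threshold=0.10):
        """Flag items whose actual use deviates from plan by more than threshold."""
        flagged = []
        for report in milestone_reports:
            for item, planned in report["planned"].items():
                actual = report["actual"].get(item, 0.0)
                if planned and abs(actual - planned) / planned > threshold:
                    flagged.append((report["milestone"], item, planned, actual))
        return flagged

    reports = [
        {"milestone": "M2",
         "planned": {"senior analyst hours": 200, "junior analyst hours": 400},
         "actual":  {"senior analyst hours": 310, "junior analyst hours": 350}},
    ]
    for milestone, item, planned, actual in find_deviations(reports):
        print(f"{milestone}: {item} planned {planned}, actual {actual}")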
Progress control of internal
projects and external participants
Problem: In practice, project control provides only a
limited view of the progress of internal software
development and an even more limited view of the
progress made by external project participants
Internal projects have, by definition, no external
customer and therefore tend to occupy a lower place
among management's priorities. Nevertheless, the full
range of project progress control should be employed
More significant efforts are required in order to achieve
acceptable levels of control for an external project
participant due to the more complex communication
and coordination
Implementation of project progress
control regimes
Procedures:
Allocation of responsibilities for progress control:
The person or management unit responsible for progress control
Frequency of reporting from each of the unit levels and administrative levels
Situations requiring the project leader to report immediately to management
Situations requiring lower management to report immediately to upper management
Management audits of project progress, which deal mainly with:
(1) How well progress reports are transmitted by project
leaders and by lower- to upper-level management
(2) Specific management control activities to be initiated
Implementation of project progress
control regimes
Remarks:
In large software development organizations, project
progress control may be conducted on several
managerial levels; coordination between these levels
becomes essential
Project leaders base their progress reports on
information gathered from team leaders
Computerized Project Progress
Control
Required for non-trivial projects
Automation can reduce costs considerably
Computerized Control of risk
management
Lists of software risk items by category and
their planned solution dates
Lists of exceptions of software risk items –
overrun solution dates
Computerized Project Schedule Control
Classified lists of delayed activities.
Classified lists of delays of critical activities – delays
that can affect the project’s completion date.
Updated activity schedules, according to progress
reports and correction measures.
Classified lists of delayed milestones.
Updated milestones schedules, according to progress
reports and applied correction measures.
Computerized Project Resource
Control
Project resources allocation plan
for activities and software modules
for teams and development units
for designated time periods, etc.
Project resources utilization – as specified above
Project resources utilization exceptions – as specified
above
Updated resource allocation plans generated
according to progress reports and reaction measures
applied
Computerized Project Budget
Control
Project budget plans
for activity and software module
for teams and development units
for designated time periods, etc.
Project budget utilization reports — as specified
above
Project budget utilization deviations – by period or
accumulated – as specified above
Updated budget plans generated according to
progress reports and correction measures applied
2. Quality Metrics
“You can’t control what you can’t measure”
[DeMarco1982]
Overview:
Objectives of quality measurement
Classification of software quality metrics
Process metrics
Product metrics
Implementation of software quality metrics
Limitations of software metrics
The function point method
Definition
IEEE definition of software quality metrics:
A quantitative measure of the degree to
which an item possesses a given quality
attribute.
A function whose inputs are software data
and whose output is a single numerical value
that can be interpreted as the degree to
which the software possesses a given quality
attribute.
Objectives
Facilitate management control, planning and
managerial intervention. Based on:
Deviations of actual from planned performance.
Deviations of actual timetable and budget
performance from planned.
Identify situations for development or
maintenance process improvement (preventive
or corrective actions). Based on:
Accumulation of metrics information regarding the
performance of teams, units, etc.
Requirements
General requirements
Relevant
Valid
Reliable
Comprehensive
Mutually exclusive
Operative requirements
Easy and simple
Does not require independent data collection
Immune to biased interventions by interested parties
Classifications
Classification by phases of software system:
Process metrics: metrics related to the software
development process
Maintenance metrics: metrics related to software
maintenance (product metrics in [Galin2004])
Product metrics: metrics related to software artifacts
Severity class     Number of errors   Weight   Weighted errors
low severity       42                 1        42
medium severity    17                 3        51
high severity      11                 9        99
Number of code errors (NCE) vs. weighted number of code errors (WCE)
Process Metrics Categories
Software process quality metrics
Error density metrics
Error severity metrics
Software process timetable metrics
Software process error removal
effectiveness metrics
Software process productivity metrics
Error Density Metrics
Code Name Calculation formula
NCE = The number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in the development process.
WCE = weighted total code errors detected by code inspections and testing.
WDE = total weighted development (design and code) errors detected in development process.
Error Severity Metrics
Code    Name                                      Calculation formula
ASCE    Average Severity of Code Errors           ASCE = WCE / NCE
ASDE    Average Severity of Development Errors    ASDE = WDE / NDE
NCE = the number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in the
development process.
WCE = weighted total code errors detected by code inspections and testing.
WDE = total weighted development (design and code) errors detected in the
development process.
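Using the severity weights from the error-count example earlier (low = 1, medium = 3, high = 9), the following sketch shows how NCE, WCE and ASCE are computed; variable names are illustrative:

    # Worked example of NCE, WCE and ASCE using the severity weights from the
    # error-count table above (low=1, medium=3, high=9). Variable names are
    # illustrative; the formulas follow the metric definitions in this section.

    severity_counts = {"low": 42, "medium": 17, "high": 11}
    severity_weights = {"low": 1, "medium": 3, "high": 9}

    NCE = sum(severity_counts.values())                       # number of code errors
    WCE = sum(severity_counts[s] * severity_weights[s]
              for s in severity_counts)                       # weighted code errors
    ASCE = WCE / NCE                                          # average severity of code errors

    print(NCE, WCE, round(ASCE, 2))   # 70 192 2.74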
Software Process Timetable Metrics
Code Name Calculation formula
NDE = total number of development (design and code) errors detected in the
development process.
WCE = weighted total code errors detected by code inspections and testing.
WDE = total weighted development (design and code) errors detected in
development process.
NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during a year of maintenance
service.
Process Productivity Metrics
Code Name Calculation formula
DevH = Total working hours invested in the development of the software system.
ReKLOC = Number of thousands of reused lines of code.
ReDoc = Number of reused pages of documentation.
NDoc = Number of pages of documentation.
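The calculation formulas for the productivity metrics are not reproduced in these notes. The sketch below shows typical ratios that can be built from the quantities defined above (hours per KLOC, reuse fractions); the exact definitions used in [Galin2004] may differ, and KLOC itself is an assumed additional quantity here:

    # Sketch of productivity and reuse ratios built from the quantities defined
    # above. These are typical forms; the exact formulas from the lecture's source
    # are not reproduced here. All figures and KLOC itself are invented/assumed.

    DevH   = 4200.0   # total development working hours
    KLOC   = 42.0     # thousands of lines of code (assumed additional quantity)
    ReKLOC = 9.0      # thousands of reused lines of code
    ReDoc  = 110.0    # reused pages of documentation
    NDoc   = 500.0    # total pages of documentation

    dev_productivity = DevH / KLOC      # hours invested per KLOC
    code_reuse       = ReKLOC / KLOC    # fraction of code that is reused
    doc_reuse        = ReDoc / NDoc     # fraction of documentation that is reused

    print(round(dev_productivity, 1), round(code_reuse, 2), round(doc_reuse, 2))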
Maintenance Metrics Categories (1/2)
Help desk service (HD):
software support by instructing customers regarding the
method of application of the software and by solving
customer implementation problems (depends to a great
extent on “user friendliness”)
Related metrics:
HD quality metrics:
HD calls density metrics - measured by the number of calls.
HD calls severity metrics - the severity of the HD issues raised.
HD success metrics – the level of success in responding to HD calls.
HD productivity metrics.
HD effectiveness metrics.
Maintenance Metrics Categories (2/2)
Corrective maintenance:
Correction of software failures identified by
customers/users or detected by the customer service team before
their discovery by the customer (directly related to the software
development quality)
Related metrics:
Corrective maintenance quality metrics.
Software system failures density metrics
Software system failures severity metrics
Failures of maintenance services metrics
Software system availability metrics
Corrective maintenance productivity metrics
Corrective maintenance effectiveness metrics
HD Calls Density Metrics
Code Name Calculation formula
NHYNOT = Number of yearly HD calls completed on time during one year of service.
NHYC = the number of HD calls during a year of service.
HD Productivity and Effectiveness Metrics
Code    Name                              Calculation formula
HDP     HD Productivity                   HDP = HDYH / KLMC
FHDP    Function Point HD Productivity    FHDP = HDYH / NMFP
HDE     HD Effectiveness                  HDE = HDYH / NHYC
HDYH = Total yearly working hours invested in HD servicing of the software system.
KLMC = Thousands of lines of maintained software code.
NMFP = number of function points to be maintained.
NHYC = the number of HD calls during a year of service.
Failures Density Metrics
Code     Name                                                    Calculation formula
SSFD     Software System Failure Density                         SSFD = NYF / KLMC
WSSFD    Weighted Software System Failure Density                WSSFD = WYF / KLMC
WSSFF    Weighted Software System Failures per Function Point    WSSFF = WYF / NMFP
CMaiYH = Total yearly working hours invested in the corrective maintenance of the software
system.
NYF = number of software failures detected during a year of maintenance service.
NMFP = number of function points designated for the maintained software.
KLMC = Thousands of lines of maintained software code.
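A small worked sketch of the failure density metrics above, with invented input figures:

    # Sketch of the corrective-maintenance failure density metrics above,
    # with invented input figures for illustration.

    NYF  = 36.0     # software failures detected during a year of maintenance
    WYF  = 110.0    # severity-weighted yearly failures
    KLMC = 120.0    # thousands of lines of maintained code
    NMFP = 700.0    # function points of the maintained software

    SSFD  = NYF / KLMC    # software system failure density
    WSSFD = WYF / KLMC    # weighted failure density per KLOC maintained
    WSSFF = WYF / NMFP    # weighted failures per function point

    print(round(SSFD, 2), round(WSSFD, 2), round(WSSFF, 3))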
Product Metrics Categories
Product metrics can also be used for general
predictions or to identify anomalous components.
Classes of product metric
Dynamic metrics which are collected by
measurements made of a program in execution;
Static metrics which are collected by measurements
made of the system representations;
Dynamic metrics help assess efficiency and
reliability; static metrics help assess complexity,
understandability and maintainability.
Static Metrics
Static metrics have an indirect relationship with
quality attributes (see also static analysis)
Software Product Metrics
Fan-in/Fan-out: Fan-in is a measure of the number of functions or methods that call some other function or method (say X). Fan-out is the number of functions that are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the overall complexity of X may be high because of the complexity of the control logic needed to coordinate the called components.
Length of code: This is a measure of the size of a program. Generally, the larger the size of the code of a component, the more complex and error-prone that component is likely to be. Length of code has been shown to be one of the most reliable metrics for predicting error-proneness in components.
Cyclomatic complexity: This is a measure of the control complexity of a program. This control complexity may be related to program understandability. How to compute cyclomatic complexity is discussed in [Sommerville2004].
Length of identifiers: This is a measure of the average length of distinct identifiers in a program. The longer the identifiers, the more likely they are to be meaningful and hence the more understandable the program.
Depth of conditional nesting: This is a measure of the depth of nesting of if-statements in a program. Deeply nested if-statements are hard to understand and are potentially error-prone.
Fog index: This is a measure of the average length of words and sentences in documents. The higher the value for the Fog index, the more difficult the document is to understand.
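As a rough illustration of how a static metric such as cyclomatic complexity can be collected automatically, the sketch below counts decision points in Python source with the standard ast module and adds one. This is a simplified approximation for illustration, not a full implementation of McCabe's metric:

    # Simplified cyclomatic-complexity estimate for Python functions:
    # count decision points (if/for/while/except/boolean operators) and add 1.
    # This is an approximation for illustration, not a complete McCabe tool.
    import ast

    def cyclomatic_complexity(source: str) -> int:
        tree = ast.parse(source)
        decisions = 0
        for node in ast.walk(tree):
            if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
                decisions += 1
            elif isinstance(node, ast.BoolOp):
                decisions += len(node.values) - 1   # each 'and'/'or' adds a branch
        return decisions + 1

    example = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"
    """
    print(cyclomatic_complexity(example))   # 3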
Object-oriented Metrics
Depth of inheritance tree: This represents the number of discrete levels in the inheritance tree where sub-classes inherit attributes and operations (methods) from super-classes. The deeper the inheritance tree, the more complex the design. Many different object classes may have to be understood to understand the object classes at the leaves of the tree.
Method fan-in/fan-out: This is directly related to fan-in and fan-out as described above and means essentially the same thing. However, it may be appropriate to make a distinction between calls from other methods within the object and calls from external methods.
Weighted methods per class: This is the number of methods that are included in a class weighted by the complexity of each method. Therefore, a simple method may have a complexity of 1 and a large and complex method a much higher value. The larger the value for this metric, the more complex the object class. Complex objects are more likely to be more difficult to understand. They may not be logically cohesive so cannot be reused effectively as super-classes in an inheritance tree.
Number of overriding operations: This is the number of operations in a super-class that are over-ridden in a sub-class. A high value for this metric indicates that the super-class used may not be an appropriate parent for the sub-class.
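A minimal sketch of weighted methods per class, using a crude per-method decision-point count as the complexity weight (real tools may weight methods differently); the function names and example class are illustrative:

    # Weighted Methods per Class (WMC) sketch: sum a per-method complexity weight
    # over all methods of each class. The weight here is a crude decision-point
    # count per method; other weightings are possible. Illustrative only.
    import ast

    def _method_weight(func: ast.AST) -> int:
        # 1 + number of decision points inside the method body
        decisions = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.ExceptHandler))
                        for n in ast.walk(func))
        return decisions + 1

    def weighted_methods_per_class(source: str) -> dict:
        tree = ast.parse(source)
        return {
            cls.name: sum(_method_weight(m) for m in cls.body
                          if isinstance(m, (ast.FunctionDef, ast.AsyncFunctionDef)))
            for cls in ast.walk(tree) if isinstance(cls, ast.ClassDef)
        }

    example = """
    class Queue:
        def push(self, item):
            self.items.append(item)
        def pop(self):
            if not self.items:
                raise IndexError("empty")
            return self.items.pop(0)
    """
    print(weighted_methods_per_class(example))   # {'Queue': 3}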
The Measurement Process
A software measurement process may be part of a
quality control process.
(1) Define software quality metrics
(2) Select components to be assessed
(3) Data collection
(4) Identify anomalous measurements
(5) Analyse anomalous measurements
Data collected during this process should be
maintained as an organisational resource.
Once a measurement database has been
established, comparisons across projects become
possible.
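Step (4) can be as simple as a statistical outlier rule over the collected measurements. The sketch below flags components whose metric value lies more than two standard deviations from the mean; the 2-sigma rule and the figures are illustrative choices, not a prescribed procedure:

    # Sketch of step (4), identifying anomalous measurements: flag components whose
    # metric value lies more than two standard deviations from the mean of the
    # collected data. The 2-sigma rule and the figures are illustrative choices.
    from statistics import mean, stdev

    measurements = {            # e.g. cyclomatic complexity per component
        "parser": 12, "scheduler": 9, "report_gen": 11,
        "auth": 10, "billing": 38, "ui_forms": 8,
    }

    values = list(measurements.values())
    m, s = mean(values), stdev(values)
    anomalous = {name: v for name, v in measurements.items() if abs(v - m) > 2 * s}
    print(anomalous)            # components selected for analysis in step (5)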
(1) Defining Software Quality Metrics
(3) Data Collection
A metrics programme should be based on a
set of product and process data.
Data should be collected immediately (not
in retrospect) and, if possible,
automatically.
Three types of automatic data collection
Static product analysis;
Dynamic product analysis;
Process data collation.
Data Accuracy
Don’t collect unnecessary data
The questions to be answered should be
decided in advance and the required data
identified.
Tell people why the data is being collected.
It should not be part of personnel evaluation.
Don’t rely on memory
Collect data when it is generated not after a
project has finished.
(5) Measurement Analysis
It is not always obvious what the data means
Analysing collected data is very difficult.
Professional statisticians should be consulted
if available.
Data analysis must take local circumstances
into account.
Measurement Surprises
Reducing the number of faults in a program
leads to an increased number of help desk
calls
The program is now thought of as more reliable and
so has a wider more diverse market. The percentage
of users who call the help desk may have decreased
but the total may increase;
A more reliable system is used in a different way from
a system where users work around the faults. This
leads to more help desk calls.
Limitations of Quality Metrics
Budget constraints in allocating the necessary resources.
Human factors, especially opposition of employees to evaluation of their
activities.
Uncertainty regarding the data's validity; partial and biased reporting.
[Galin2004]
Metrics assumptions:
A software property can be measured.
A relationship exists between what we can measure and what we want to
know. We can only measure internal attributes but are often more interested in
external software attributes.
This relationship has been formalised and validated.
It may be difficult to relate what can be measured to desirable external quality
attributes.
[Sommerville2004]
"Not everything that counts is countable; and not
everything that is countable counts."
Examples of Software Metrics that
exhibit Severe Weaknesses
The Function Point Method
Complexity levels depend on the number of data elements and the types of processing involved.
Low Complexity (1 point):
Simple data input with minimal processing (e.g., data entry forms).
Few data elements (1 to 5).
Average Complexity (2 points):
The ATTEND MASTER - Data Flow Diagram
Calculation of CFP (Crude Function Points)
Analysis of the software system as presented in the DFD summarizes the number of the various components:
Number of user inputs – 2
Number of user outputs – 3
Number of user online queries – 3
Number of logical files – 2
Number of external interfaces – 2.
The degree of complexity (simple, average or complex) was evaluated for each component.
Crude Function Points: Calculation
For each software system component type, the count of components at each complexity level (simple, average, complex) is multiplied by the corresponding weight factor, and the resulting points are summed to give the total CFP.
Columns: Count (A), Weight Factor (B), Points (C = A×B) for simple; Count (D), Weight Factor (E), Points (F = D×E) for average; Count (G), Weight Factor (H), Points (I = G×H) for complex; Total CFP per component type.
For C++ (at an assumed 64 lines of code per function point), the example's 85.86 function points translate to:
KLOC = (85.86 * 64)/1000 = 5.495 KLOC
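The conversion above presupposes the adjusted function point count (85.86). The sketch below reproduces the standard calculation chain from crude function points (CFP) and the relative complexity adjustment factor (RCAF) to estimated KLOC; the CFP and RCAF inputs are assumptions chosen to match the figures shown, since the intermediate slides are not included in these notes:

    # Sketch of the function point calculation chain. The CFP and RCAF inputs are
    # assumed values chosen to reproduce the 85.86 FP / 5.495 KLOC figures above;
    # the slides with the intermediate calculation are not part of this extract.

    CFP = 81          # crude function points (assumed)
    RCAF = 41         # relative complexity adjustment factor, 0..70 (assumed)
    LOC_PER_FP = 64   # assumed lines of code per function point for C++

    FP = CFP * (0.65 + 0.01 * RCAF)      # adjusted function points
    KLOC = FP * LOC_PER_FP / 1000        # estimated thousands of lines of code

    print(round(FP, 2), round(KLOC, 3))  # 85.86 5.495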
Extended function point metrics
Feature Points, UCPs …
The function point metric was originally designed to be applied to business
information systems applications.
The data dimension was emphasized to the exclusion of the functional and
behavioral (control) dimensions.
The function point measure was inadequate for many engineering and
embedded systems
Feature points : A superset of the function point, designed for applications
in which algorithmic complexity is high (real-time, process control,
embedded software applications).
3. Cost of Quality
The classic model of the cost of software quality:
Costs of control: prevention costs and appraisal costs
Costs of failure of control: internal failure costs and external failure costs
Prevention Costs
Investments in development of SQA infrastructure
components
Procedures and work instructions
Support devices: templates, checklists etc
Software configuration management system
Software quality metrics
Regular implementation of SQA preventive
activities:
Instruction of new employees in SQA subjects
Certification of employees
Consultations on SQA issues to team leaders and others
Control of the SQA system through performance of:
Internal quality reviews
External quality audits
Management quality reviews
Appraisal Costs
Costs of reviews:
Formal design reviews (DRs)
Peer reviews (inspections and walkthroughs)
Expert reviews
Costs of software testing:
Unit, integration and software system tests
Acceptance tests (carried out by customers)
Costs of assuring quality of external
participants
Internal Failure Costs
Costs of redesign or design corrections
subsequent to design review and test findings
Costs of re-programming or correcting programs
in response to test findings
Costs of repeated design review and re-testing
(regression tests)
External Failure Costs
Typical external failure costs cover:
Resolution of customer complaints during the warranty period.
Correction of software bugs detected during regular operation.
Correction of software failures after the warranty period is over even if
the correction is not covered by the warranty.
Damages paid to customers in case of a severe software failure.
Reimbursement of customer's purchase costs.
Insurance against customer's claims.
Figure: total costs of control and total costs of failure of control plotted against the software quality level (low to high); the optimal software quality level is where their sum is minimal.
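The optimal quality level can be illustrated numerically: with control costs rising and failure-of-control costs falling as the quality level increases, the optimum is the level that minimizes their sum. The cost figures below are invented for illustration:

    # Illustration of the optimal-quality-level idea: with invented cost curves
    # (control costs rising, failure-of-control costs falling as quality rises),
    # the optimum is the quality level minimizing their sum. Figures are made up.

    quality_levels = [1, 2, 3, 4, 5]          # low .. high
    control_costs  = [10, 20, 35, 60, 100]    # prevention + appraisal
    failure_costs  = [120, 70, 40, 25, 18]    # internal + external failure

    total = [c + f for c, f in zip(control_costs, failure_costs)]
    optimal = quality_levels[total.index(min(total))]
    print(total, "optimal level:", optimal)   # [130, 90, 75, 85, 118] optimal level: 3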
Problems of Application
General problems:
Inaccurate and/or incomplete identification and classification of quality costs.
Negligent reporting by team members
Biased reporting of software costs, especially of “censored” internal and external
costs.
Biased recording of external failure costs - “camouflaged” compensation of
customers for failures.
Problems arising when collecting data on managerial costs:
Contract review and progress control activities are performed in a “part-time
mode”. The reporting of time invested is usually inaccurate and often neglected.
Many participants in these activities are senior staff members who are not
required to report use of their time resources.
Difficulties in determination of responsibility for schedule failures.
Payment of overt and formal compensation usually occurs quite some time after
the project is completed, and much too late for efficient application of the lessons
learned.
4. Discussion & Summary
The components of project progress control are
(1) control of risk management activities,
(2) project schedule control, (3) project resource
control, and (4) project budget control. Even though
budget control usually has the highest priority,
only the combination of all control tasks ensures
the required coverage of risks
Metrics facilitate management control, planning
and managerial intervention based on deviations of
actual from planned performance, and help identify
situations calling for development or maintenance
process improvement (preventive or corrective actions)
Discussion & Summary
Metrics must be relevant, valid, reliable, comprehensive, and
mutually exclusive. In addition, they must be easy and simple
to use, should not require independent data collection, and should
be immune to biased interventions by interested parties.
Available metrics for control are process metrics, which are
related to the software development process, maintenance
metrics, which are related to software maintenance (called
product metrics in [Galin2004]), and product metrics, which
are related to software artifacts.
Metrics are used to measure quality, accordance with
timetables, effectiveness (of error removal and
maintenance services), and productivity.
Discussion & Summary
A software measurement process should be
part of a quality control process which (1) defines
the software quality metrics employed, (2) selects
the components to be assessed, (3) collects the data,
(4) identifies anomalous measurements, and
(5) analyzes them. But due to complex
interdependencies, unexpected effects are
possible (measurement surprises)
The optimal software quality level is reached
when the sum of the total failure of control costs
and total control costs is minimal.