
Software Quality Assurance (SQA)
Lecture 4
Outline
 Introduction
 Software Life Cycle
 Quality Control
 Infrastructure
 Management
 Standards
 Conclusion & Summary
Management
 Project Progress Control
 Quality Metrics
 Cost of Quality
 Discussion & Summary
1. Project Progress Control
Overview:
 The components of project progress control
 Progress control of internal projects and external
participants
 Implementation of project progress control regimes
 Computerized tools for software progress control
Components of Project Progress
Control
 Control of risk management activities
 Project schedule control
 Project resource control
 Project budget control
Control of Risk Management
Activities
 The initial list of risk items comes from the contract reviews and the project plan
 Systematic risk management activities are required:
 Periodic assessment of the state of the software risk items
 Based on these reports, project managers are expected to intervene and help arrive at a solution in the more extreme cases
Project Schedule Control
 Compliance of the project with its approved and
contracted timetables
 Control is based mainly on milestone reports
which are set (in part) to facilitate identification of
delays and other periodic reports
 Milestones set in contracts, especially dates for
delivery, receive special emphasis
 Focus on critical delays (which may affect final
completion of the project)
 Management interventions:
Allocation of additional resources
Renegotiating the schedule with the customer
Project Resource Control
Main control items:
 Human resources
 Special development and testing equipment (real-time
systems; firmware)
 Resource control primarily focuses on professional human
resources but applies to other assets too.
 Real-time software and firmware projects require strict resource
monitoring through periodic reports.
 Small deviations in resource use can mask larger cumulative
impacts if project progress is delayed.
 Internal allocation reviews help detect budget strains, such as
imbalanced senior vs. junior analyst hours.
 Early detection of deviations allows for resource reallocation,
team reorganization, or project plan revisions.
Project Budget Control
Main budget items:
 Human resources
 Development and testing facilities
 Purchase of COTS software
 Purchase of hardware
 Payments to subcontractors
 Control is based on the milestone reports and other
periodic reports
 Budget control usually has the highest priority,
but only the combination of all control aspects
ensures the required coverage of risks
Progress control of internal
projects and external participants
Problem: In practice, project control provides only a
limited view of the progress of internal software
development and an even more limited view of the
progress made by external project participants.
Internal projects have, by definition, no external
customer and therefore tend to occupy a lower place
among management's priorities. Therefore, the full
More significant efforts are required in order to achieve
acceptable levels of control for an external project
participant due to the more complex communication
and coordination
Implementation of project progress
control regimes
Procedures:
Allocation of responsibilities for:
 The person or management unit responsible for progress control
 Frequency of reporting from each of the unit levels and
administrative level
 Situation requiring the project leader to report immediately
to management
 Situation requiring lower management to report immediately
to upper management
Management audits of project progress which deals mainly with
(1) How well progress reports are transmitted by project
leaders and by lower- to upper-level management
(2) Specific management control activities to be initiated
Implementation of project progress
control regimes
Remarks:
Project progress control may be conducted on
several managerial levels in large software
development organizations; coordination between
the levels becomes essential
Project leaders base their progress reports on
information gathered from team leaders
Computerized Project Progress
Control
 Required for non-trivial projects
 Automation can reduce costs considerably
Computerized Control of risk
management
 Lists of software risk items by category and
their planned solution dates
 Lists of exceptions of software risk items –
overrun solution dates
Computerized Project Schedule Control
 Classified lists of delayed activities.
 Classified lists of delays of critical activities – delays
that can affect the project’s completion date.
 Updated activity schedules, according to progress
reports and correction measures.
 Classified lists of delayed milestones.
 Updated milestones schedules, according to progress
reports and applied correction measures.
Computerized Project Resource
Control
 Project resources allocation plan
for activities and software modules
for teams and development units
for designated time periods, etc.
 Project resources utilization – as specified above
 Project resources utilization exceptions – as specified
above
 Updated resource allocation plans generated
according to progress reports and reaction measures
applied
Computerized Project Budget
Control
 Project budget plans
for activity and software module
for teams and development units
for designated time periods, etc.
 Project budget utilization reports — as specified
above
 Project budget utilization deviations – by period or
accumulated – as specified above
 Updated budget plans generated according to
progress reports and correction measures applied
2. Quality Metrics
“You can’t control what you can’t measure”
[DeMarco1982]

Overview:
Objectives of quality measurement
Classification of software quality metrics
Process metrics
Product metrics
Implementation of software quality metrics
Limitations of software metrics
The function point method
Definition
IEEE definition of software quality metrics:
A quantitative measure of the degree to
which an item possesses a given quality
attribute.
A function whose inputs are software data
and whose output is a single numerical value
that can be interpreted as the degree to
which the software possesses a given quality
attribute.
Objectives
Facilitate management control, planning and
managerial intervention. Based on:
Deviations of actual from planned performance.
Deviations of actual timetable and budget
performance from planned.
Identify situations for development or
maintenance process improvement (preventive
or corrective actions). Based on:
Accumulation of metrics information regarding the
performance of teams, units, etc.
Requirements
General requirements
Relevant
Valid
Reliable
Comprehensive
Mutually exclusive

Operative requirements
Easy and simple
Does not require independent data collection
Immune to biased interventions by interested parties
Classifications
Classification by phases of software system:
Process metrics: metrics related to the software
development process
Maintenance metrics: metrics related to software
maintenance (product metrics in [Galin2004])
Product metrics: metrics related to software artifacts

 Classification by subjects of measurements:
Quality
Timetable
Effectiveness (of error removal and maintenance services)
Productivity
Software Size/Volume Measures
 KLOC: classic metric that measures the size of software by thousands of lines of code.
 Number of function points (NFP): a measure of the development resources (human resources) required to develop a program, based on the functionality specified for the software system.
Error Counted Measures
Calculation of NCE (number of code errors) vs. WCE (weighted number of code errors):

Error severity class   Number of errors (b)   Relative weight (c)   Weighted errors (d = b x c)
Low severity           42                     1                     42
Medium severity        17                     3                     51
High severity          11                     9                     99
Total                  70                     ---                   192

NCE = 70 (total count of errors); WCE = 192 (total of weighted errors)
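The table above can be reproduced in a few lines of code. The sketch below is illustrative only; the severity weights (1, 3, 9) are the example values from the table, and each organization defines its own weighting scheme.

```python
# Illustrative sketch: computing NCE and WCE from error counts per severity class.
# The severity weights (1, 3, 9) are the example values from the table above;
# each organization defines its own weighting scheme.

SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 9}

def nce(error_counts: dict) -> int:
    """Number of code errors: plain sum of detected errors."""
    return sum(error_counts.values())

def wce(error_counts: dict, weights: dict = SEVERITY_WEIGHTS) -> int:
    """Weighted number of code errors: each error counted by its severity weight."""
    return sum(count * weights[severity] for severity, count in error_counts.items())

errors = {"low": 42, "medium": 17, "high": 11}
print(nce(errors))  # 70
print(wce(errors))  # 42*1 + 17*3 + 11*9 = 192
```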
Process Metrics Categories
Software process quality metrics
Error density metrics
Error severity metrics
Software process timetable metrics
Software process error removal
effectiveness metrics
Software process productivity metrics
Error Density Metrics
CED (Code Error Density): CED = NCE / KLOC
DED (Development Error Density): DED = NDE / KLOC
WCED (Weighted Code Error Density): WCED = WCE / KLOC
WDED (Weighted Development Error Density): WDED = WDE / KLOC
WCEF (Weighted Code Errors per Function Point): WCEF = WCE / NFP
WDEF (Weighted Development Errors per Function Point): WDEF = WDE / NFP

NCE = the number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in the development process.
WCE = weighted total of code errors detected by code inspections and testing.
WDE = weighted total of development (design and code) errors detected in the development process.
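A minimal sketch of these density ratios, assuming KLOC and NFP are already known for the product (the numeric inputs in the example call are hypothetical):

```python
# Illustrative sketch of the error density metrics above.
# kloc = thousands of lines of code; nfp = number of function points (both assumed known).

def error_density_metrics(nce, nde, wce, wde, kloc, nfp):
    return {
        "CED": nce / kloc,    # code error density
        "DED": nde / kloc,    # development error density
        "WCED": wce / kloc,   # weighted code error density
        "WDED": wde / kloc,   # weighted development error density
        "WCEF": wce / nfp,    # weighted code errors per function point
        "WDEF": wde / nfp,    # weighted development errors per function point
    }

# Hypothetical example: 70 code errors (weighted 192) in a 40 KLOC / 81 FP system
print(error_density_metrics(nce=70, nde=95, wce=192, wde=260, kloc=40, nfp=81))
```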
Error Severity Metrics
ASCE (Average Severity of Code Errors): ASCE = WCE / NCE
ASDE (Average Severity of Development Errors): ASDE = WDE / NDE

NCE = the number of code errors detected by code inspections and testing.
NDE = total number of development (design and code) errors detected in the development process.
WCE = weighted total of code errors detected by code inspections and testing.
WDE = weighted total of development (design and code) errors detected in the development process.
Software Process Timetable Metrics
TTO (Timetable Observance): TTO = MSOT / MS
ADMC (Average Delay of Milestone Completion): ADMC = TCDAM / MS

MSOT = milestones completed on time.
MS = total number of milestones.
TCDAM = total completion delays (days, weeks, etc.) for all milestones.
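The timetable metrics can be computed directly from a milestone list. In the sketch below, each milestone is given as a pair of planned and actual completion days; the day numbers are hypothetical example data.

```python
# Illustrative sketch of the timetable metrics above, computed from a list of milestones.
# Each milestone is (planned_completion_day, actual_completion_day); the values are
# hypothetical example data.

def timetable_metrics(milestones):
    ms = len(milestones)                                                      # MS
    msot = sum(1 for planned, actual in milestones if actual <= planned)      # MSOT
    tcdam = sum(max(0, actual - planned) for planned, actual in milestones)   # TCDAM
    return {"TTO": msot / ms, "ADMC": tcdam / ms}

milestones = [(30, 30), (60, 66), (90, 89), (120, 131)]
print(timetable_metrics(milestones))  # TTO = 0.5, ADMC = (6 + 11) / 4 = 4.25
```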
Error Removal Effectiveness Metrics
DERE (Development Errors Removal Effectiveness): DERE = NDE / (NDE + NYF)
DWERE (Development Weighted Errors Removal Effectiveness): DWERE = WDE / (WDE + WYF)

NDE = total number of development (design and code) errors detected in the development process.
WDE = weighted total of development (design and code) errors detected in the development process.
NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during a year of maintenance service.
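A minimal sketch of the removal-effectiveness ratios (the counts in the example call are hypothetical):

```python
# Illustrative sketch of removal-effectiveness metrics: the share of all known errors
# (those found in development plus those surfacing as failures during a year of
# maintenance) that were removed during development.

def removal_effectiveness(nde, wde, nyf, wyf):
    return {
        "DERE": nde / (nde + nyf),     # by error count
        "DWERE": wde / (wde + wyf),    # by weighted error count
    }

# Hypothetical example: 160 development errors found, 40 failures in the first year
print(removal_effectiveness(nde=160, wde=420, nyf=40, wyf=130))
# DERE = 160/200 = 0.80, DWERE = 420/550 ~ 0.76
```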
Process Productivity Metrics
DevP (Development Productivity): DevP = DevH / KLOC
FDevP (Function Point Development Productivity): FDevP = DevH / NFP
CRe (Code Reuse): CRe = ReKLOC / KLOC
DocRe (Documentation Reuse): DocRe = ReDoc / NDoc

DevH = total working hours invested in the development of the software system.
ReKLOC = number of thousands of reused lines of code.
ReDoc = number of reused pages of documentation.
NDoc = number of pages of documentation.
Maintenance Metrics Categories (1/2)
Help desk service (HD):
software support by instructing customers regarding the
method of application of the software and solution for
customer implementation problems (depends to a great
extent on “user friendliness”)

Related metrics:
HD quality metrics:
HD calls density metrics - measured by the number of calls.
HD calls severity metrics - the severity of the HD issues raised.
HD success metrics – the level of success in responding to HD calls.
HD productivity metrics.
HD effectiveness metrics.
Maintenance Metrics Categories (2/2)
Corrective maintenance:
Correction of software failures identified by
customers/users or detected by the customer service team prior to
their discovery by the customer (directly related to software
development quality)

Related metrics:
Corrective maintenance quality metrics.
Software system failures density metrics
Software system failures severity metrics
Failures of maintenance services metrics
Software system availability metrics
Corrective maintenance productivity metrics
Corrective maintenance effectiveness metrics
HD Calls Density Metrics
HDD (HD calls density): HDD = NHYC / KLMC
WHDD (Weighted HD calls density): WHDD = WHYC / KLMC
WHDF (Weighted HD calls per function point): WHDF = WHYC / NMFP

NHYC = the number of HD calls during a year of service.
KLMC = thousands of lines of maintained software code.
WHYC = weighted number of HD calls received during one year of service.
NMFP = number of function points to be maintained.
Severity of HD Calls Metrics and HD Success Metrics
ASHC (Average severity of HD calls): ASHC = WHYC / NHYC
HDS (HD service success): HDS = NHYOT / NHYC

NHYC = the number of HD calls during a year of service.
WHYC = weighted number of HD calls received during one year of service.
NHYOT = number of HD calls completed on time during one year of service.
HD Productivity and Effectiveness Metrics
HDP (HD Productivity): HDP = HDYH / KLMC
FHDP (Function Point HD Productivity): FHDP = HDYH / NMFP
HDE (HD Effectiveness): HDE = HDYH / NHYC

HDYH = total yearly working hours invested in HD servicing of the software system.
KLMC = thousands of lines of maintained software code.
NMFP = number of function points to be maintained.
NHYC = the number of HD calls during a year of service.
Failures Density Metrics
SSFD (Software System Failure Density): SSFD = NYF / KLMC
WSSFD (Weighted Software System Failure Density): WSSFD = WYF / KLMC
WSSFF (Weighted Software System Failures per Function Point): WSSFF = WYF / NMFP

NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during one year of maintenance service.
NMFP = number of function points designated for the maintained software.
KLMC = thousands of lines of maintained software code.
Failures Severity and Maintenance Service Failure Metrics
ASSSF (Average Severity of Software System Failures): ASSSF = WYF / NYF
MRepF (Maintenance Repeated repair Failure metric): MRepF = RepYF / NYF

NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during one year of maintenance service.
RepYF = number of repeated software failure calls (service failures).
Availability Metrics
FA (Full Availability): FA = (NYSerH - NYFH) / NYSerH
VitA (Vital Availability): VitA = (NYSerH - NYVitFH) / NYSerH
TUA (Total Unavailability): TUA = NYTFH / NYSerH

NYSerH = number of hours the software system is in service during one year.
NYFH = number of hours in which at least one function is unavailable (failed) during one year, including total failure of the software system.
NYVitFH = number of hours in which at least one vital function is unavailable (failed) during one year, including total failure of the software system.
NYTFH = number of hours of total failure (all system functions failed) during one year.

Note: NYFH ≥ NYVitFH ≥ NYTFH and 1 - TUA ≥ VitA ≥ FA
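A minimal sketch of the availability ratios (the hour counts in the example call are hypothetical; in practice they would come from incident or monitoring records):

```python
# Illustrative sketch of the availability metrics above; the hour counts are
# hypothetical example values, not data from the lecture.

def availability_metrics(nyserh, nyfh, nyvitfh, nytfh):
    assert nyfh >= nyvitfh >= nytfh, "expected NYFH >= NYVitFH >= NYTFH"
    return {
        "FA": (nyserh - nyfh) / nyserh,        # full availability
        "VitA": (nyserh - nyvitfh) / nyserh,   # vital availability
        "TUA": nytfh / nyserh,                 # total unavailability
    }

m = availability_metrics(nyserh=8760, nyfh=120, nyvitfh=30, nytfh=5)
print(m)  # FA ~ 0.986, VitA ~ 0.997, TUA ~ 0.00057
```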


Software Corrective Maintenance
Productivity and Effectiveness Metrics
CMaiP (Corrective Maintenance Productivity): CMaiP = CMaiYH / KLMC
FCMP (Function Point Corrective Maintenance Productivity): FCMP = CMaiYH / NMFP
CMaiE (Corrective Maintenance Effectiveness): CMaiE = CMaiYH / NYF

CMaiYH = total yearly working hours invested in the corrective maintenance of the software system.
NYF = number of software failures detected during a year of maintenance service.
NMFP = number of function points designated for the maintained software.
KLMC = thousands of lines of maintained software code.
Product Metrics Categories
Product metrics can also be used for general
predictions or to identify anomalous components.
Classes of product metric
Dynamic metrics which are collected by
measurements made of a program in execution;
Static metrics which are collected by measurements
made of the system representations;
Dynamic metrics help assess efficiency and
reliability; static metrics help assess complexity,
understandability and maintainability.
Static Metrics
Static metrics have an indirect relationship with
quality attributes (see also static analysis)
Software Product Metrics
Fan-in/Fan-out: Fan-in is a measure of the number of functions or methods that call some other function or method (say X). Fan-out is the number of functions that are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the overall complexity of X may be high because of the complexity of the control logic needed to coordinate the called components.
Length of code: This is a measure of the size of a program. Generally, the larger the size of the code of a component, the more complex and error-prone that component is likely to be. Length of code has been shown to be one of the most reliable metrics for predicting error-proneness in components.
Cyclomatic complexity: This is a measure of the control complexity of a program. This control complexity may be related to program understandability. How to compute cyclomatic complexity is discussed in [Sommerville2004].
Length of identifiers: This is a measure of the average length of distinct identifiers in a program. The longer the identifiers, the more likely they are to be meaningful and hence the more understandable the program.
Depth of conditional nesting: This is a measure of the depth of nesting of if-statements in a program. Deeply nested if-statements are hard to understand and are potentially error-prone.
Fog index: This is a measure of the average length of words and sentences in documents. The higher the value for the Fog index, the more difficult the document is to understand.
Object-oriented Metrics
Depth of inheritance tree: This represents the number of discrete levels in the inheritance tree where sub-classes inherit attributes and operations (methods) from super-classes. The deeper the inheritance tree, the more complex the design. Many different object classes may have to be understood to understand the object classes at the leaves of the tree.
Method fan-in/fan-out: This is directly related to fan-in and fan-out as described above and means essentially the same thing. However, it may be appropriate to make a distinction between calls from other methods within the object and calls from external methods.
Weighted methods per class: This is the number of methods that are included in a class weighted by the complexity of each method. Therefore, a simple method may have a complexity of 1 and a large and complex method a much higher value. The larger the value for this metric, the more complex the object class. Complex objects are more likely to be more difficult to understand. They may not be logically cohesive so cannot be reused effectively as super-classes in an inheritance tree.
Number of overriding operations: This is the number of operations in a super-class that are over-ridden in a sub-class. A high value for this metric indicates that the super-class used may not be an appropriate parent for the sub-class.
The Measurement Process
A software measurement process may be part of a
quality control process.
(1) Define software quality metrics
(2) Select components to be assessed
(3) Data collection
(4) Identify anomalous measurement
(5) Analyse anomalous measurement
Data collected during this process should be
maintained as an organisational resource.
Once a measurement database has been
established, comparisons across projects become
possible.
(1) Defining Software Quality Metrics
(3) Data Collection
A metrics programme should be based on a
set of product and process data.
Data should be collected immediately (not
in retrospect) and, if possible,
automatically.
Three types of automatic data collection
Static product analysis;
Dynamic product analysis;
Process data collation.
Data Accuracy
Don’t collect unnecessary data
The questions to be answered should be
decided in advance and the required data
identified.
Tell people why the data is being collected.
It should not be part of personnel evaluation.
Don’t rely on memory
Collect data when it is generated not after a
project has finished.
(5) Measurement Analysis
 It is not always obvious what data means
Analysing collected data is very difficult.
 Professional statisticians should be consulted
if available.
 Data analysis must take local circumstances
into account.
Measurement Surprises
Reducing the number of faults in a program
leads to an increased number of help desk
calls
The program is now thought of as more reliable and
so has a wider more diverse market. The percentage
of users who call the help desk may have decreased
but the total may increase;
A more reliable system is used in a different way from
a system where users work around the faults. This
leads to more help desk calls.
Limitations of Quality Metrics
Budget constraints in allocating the necessary resources.
Human factors, especially opposition of employees to evaluation of their
activities.
Uncertainty regarding the data's validity due to partial and biased reporting.
[Galin2004]

Metrics assumptions:
A software property can be measured.
The relationship exists between what we can measure and what we want to
know. We can only measure internal attributes but are often more interested in
external software attributes.
This relationship has been formalised and validated.
It may be difficult to relate what can be measured to desirable external quality
attributes.
[Sommerville2004]
"Not everything that counts is countable; and not
everything that is countable counts."
Examples of Software Metrics that
exhibit Severe Weaknesses
 Parameters used in development process metrics:
KLOC, NDE, NCE.
 Parameters used in product (maintenance) metrics:
KLMC, NHYC, NYF.
Factors Affecting Parameters used
for Development Process Metrics
 Programming style (KLOC).
 Volume of documentation comments (KLOC).
 Software complexity (KLOC, NCE).
 Percentage of reused code (NDE, NCE).
 Professionalism and thoroughness of design review
and software testing teams: affects the number of
defects detected (NCE).
 Reporting style of the review and testing results:
concise reports vs. comprehensive reports
(NDE,NCE).
Factors Affecting Parameters used
for Maintenance Metrics
 Quality of installed software and its documentation
(NYF, NHYC).
 Programming style and volume of documentation
comments included in the code to be maintained
(KLMC).
 Software complexity (NYF).
 Percentage of reused code (NYF).
 Number of installations, size of the user population
and level of applications in use: (NHYC, NYF).
Factors Affecting Parameters used
for Product Metrics
 Programming style
 Architectural styles/Architectures
 Thorough understanding of the domain
(more intensive use of inheritance)
 Level of reuse and use of frameworks
 The Function point approach for software
sizing was invented by Allan Albrecht in
1979
 The measure of Albrecht - Function Point
Analysis (FPA) - is well known because of its
great advantages:
 Independent of programming language and technology.
 Comprehensible for client and user.
 Applicable at early phase of software life cycle.
The Function Point Method
 The function point estimation process:
Stage 1: Compute crude function points
(CFP).
Stage 2: Compute the relative complexity
adjustment factor (RCAF) for the project.
RCAF varies between 0 and 70.
Stage 3: Compute the number of function
points (FP):
FP = CFP x (0.65 + 0.01 x RCAF)
Complexity Levels
 In function point analysis (FPA), measuring the complexity of different elements such as
user inputs, outputs, online queries, logical files, and external interfaces is crucial for
calculating the function point count. Here are the criteria used to determine their
complexity levels:
1. User Inputs (External Inputs - EI)
 User inputs refer to data entered into the system from external sources. The complexity levels depend on the number of data elements and the types of processing involved.
 Low Complexity (1 point):
 Simple data input with minimal processing (e.g., data entry forms).
 Few data elements (1 to 5).
 Average Complexity (2 points):
 Moderate data input requiring validation or minor calculations.
 More data elements (6 to 19).
 High Complexity (3 points):
 Complex data input with significant processing (e.g., calculations, validation, or multiple screens).
 Many data elements (20 or more).
Complexity Levels
2. User Outputs (External Outputs - EO)
 User outputs are the results presented to the user, including reports and messages. Their complexity is based on the number of data elements and the type of output processing.
 Low Complexity (1 point):
 Simple reports or messages with minimal formatting.
 Few data elements (1 to 5).
 Average Complexity (2 points):
 Moderate reports or messages with some processing or formatting.
 More data elements (6 to 19).
 High Complexity (3 points):
 Complex reports or messages requiring significant formatting, sorting, or calculations.
 Many data elements (20 or more).
Complexity Levels
3. User Online Queries (External Inquiries - EQ)
 Online queries refer to user interactions where they request information or perform a search. Complexity is based on the number of data elements and the query's processing complexity.
 Low Complexity (1 point):
 Simple queries with minimal processing (e.g., display a single record).
 Few data elements (1 to 5).
 Average Complexity (2 points):
 Queries with moderate processing, such as retrieval and display of multiple records.
 More data elements (6 to 19).
 High Complexity (3 points):
 Complex queries with significant processing, such as aggregating or sorting large datasets.
 Many data elements (20 or more).
Complexity Levels
4. Logical Files (Internal Logical Files - ILF)
 Logical files are internal data structures used for storing information. The complexity depends on the number of data elements and the file's structure.
 Low Complexity (1 point):
 Small files with simple structures (e.g., a single table or list).
 Few data elements (1 to 5).
 Average Complexity (2 points):
 Files with moderate complexity, such as multiple related tables or files.
 More data elements (6 to 19).
 High Complexity (3 points):
 Large or complex files with many relationships or intricate data structures.
 Many data elements (20 or more).
Complexity Levels
5. External Interfaces (External Interface Files - EIF)
 External interfaces refer to data that is used by the system but maintained outside the system boundary. Complexity
is based on the number of data elements and the integration complexity.
 Low Complexity (1 point):
 Simple data exchanges with external systems (e.g., file imports or exports).
 Few data elements (1 to 5).
 Average Complexity (2 points):
 Moderate external interfaces with some integration or data transformation.
 More data elements (6 to 19).
 High Complexity (3 points):
 Complex external interfaces requiring significant data transformation, validation, or integration.
 Many data elements (20 or more).
 General Considerations for Complexity Level:
 Data Elements: These are the individual units of data handled by the system, like fields or variables in files or
records.
 Processing Logic: Complexity increases when data requires more elaborate processing such as calculations, sorting,
or aggregation.
 External Interactions: Integration with external systems or user interfaces introduces more complexity due to the
need for more robust data handling or error-checking mechanisms.
 File Structure: More complex structures, such as those with relationships or hierarchical data, increase the
complexity level.
 The function point calculation process involves assigning these complexity values to each component based on its
characteristics and then summing them up to determine the overall function point count for the software application.
Function Point Method
An example: The Attend Master
Attend Master Software System
 Attend-Master is a basic employee attendance
system that is planned to serve small to medium-
sized businesses employing 10–100 employees.
 The system is planned to have interfaces to the
company's other software packages: Human-Master,
which serves human resources units, and Wage-Master,
which serves the wages units.
 Attend-Master is planned to produce several
reports and online queries.
The ATTEND MASTER -
Data Flow Diagram
Calculation of CFP
Crude Function Points
 Analysis of the software system as presented in the DFD summarizes the number of the various components:
 Number of user inputs – 2
 Number of user outputs – 3
 Number of user online queries – 3
 Number of logical files – 2
 Number of external interfaces – 2
 The degree of complexity (simple, average or complex) was evaluated for each component.
Crude Function Points: Calculation
Component (count x weight = points, per complexity level):
User inputs:         simple 1 x 3 = 3;   average ---;         complex 1 x 6 = 6;    total 9
User outputs:        simple ---;         average 2 x 5 = 10;  complex 1 x 7 = 7;    total 17
User online queries: simple 1 x 3 = 3;   average 1 x 4 = 4;   complex 1 x 6 = 6;    total 13
Logical files:       simple 1 x 7 = 7;   average ---;         complex 1 x 15 = 15;  total 22
External interfaces: simple ---;         average ---;         complex 2 x 10 = 20;  total 20
Total CFP = 81
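The weighted sum in this table can be sketched as follows. The weight values are the ones used in the ATTEND MASTER example above; a real FPA exercise would use the organization's chosen weight table.

```python
# Illustrative sketch of the CFP calculation above.
# Weights per component type and complexity level are the values used in the
# ATTEND MASTER example; a real FPA exercise would use the organization's table.

WEIGHTS = {
    "user inputs":         {"simple": 3, "average": 4,  "complex": 6},
    "user outputs":        {"simple": 4, "average": 5,  "complex": 7},
    "user online queries": {"simple": 3, "average": 4,  "complex": 6},
    "logical files":       {"simple": 7, "average": 10, "complex": 15},
    "external interfaces": {"simple": 5, "average": 7,  "complex": 10},
}

def crude_function_points(counts):
    """counts maps component type -> {complexity level -> number of components}."""
    return sum(
        n * WEIGHTS[component][level]
        for component, per_level in counts.items()
        for level, n in per_level.items()
    )

attend_master = {
    "user inputs":         {"simple": 1, "complex": 1},
    "user outputs":        {"average": 2, "complex": 1},
    "user online queries": {"simple": 1, "average": 1, "complex": 1},
    "logical files":       {"simple": 1, "complex": 1},
    "external interfaces": {"complex": 2},
}
print(crude_function_points(attend_master))  # 81
```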
Calculation of RCAF
Relative Complexity Adjustment Factor
The ATTEND MASTER
RCAF calculation form
No Subject Grade

1 Requirement for reliable backup and recovery 0 1 2 3 4 5


2 Requirement for data communication 0 1 2 3 4 5
3 Extent of distributed processing 0 1 2 3 4 5
4 Performance requirements 0 1 2 3 4 5
5 Expected operational environment 0 1 2 3 4 5
6 Extent of online data entries 0 1 2 3 4 5
7 Extent of multi-screen or multi-operation online data input 0 1 2 3 4 5
8 Extent of online updating of master files 0 1 2 3 4 5
9 Extent of complex inputs, outputs, online queries and files 0 1 2 3 4 5
10 Extent of complex data processing 0 1 2 3 4 5
11 Extent that currently developed code can be designed for reuse 0 1 2 3 4 5
12 Extent of conversion and installation included in the design 0 1 2 3 4 5
13 Extent of multiple installations in an organization and variety of customer 0 1 2 3 4 5
organizations
14 Extent of change and focus on ease of use 0 1 2 3 4 5
Total = RCAF 41
The ATTEND MASTER – function
points calculation
FP = CFP x (0.65 + 0.01 x RCAF)

FP = 81 x (0.65 + 0.01 x 41) = 85.86


Converting NFP to KLOC
 The estimates of the average number of lines of code (LOC)
required to program one function point depend on the
programming language; for C++ the estimate used here is
64 LOC per function point:

KLOC = (85.86 * 64)/1000 = 5.495 KLOC
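Putting the three stages together for the ATTEND MASTER example, a minimal sketch that assumes CFP and RCAF have already been determined (stages 1 and 2) and uses the 64 LOC-per-function-point figure for C++ quoted above:

```python
# Illustrative end-to-end sketch of the function point method for ATTEND MASTER.
# CFP and RCAF are assumed to have been determined already (stages 1 and 2);
# LOC_PER_FP_CPP is the C++ conversion value used in the lecture example.

LOC_PER_FP_CPP = 64

def function_points(cfp: float, rcaf: int) -> float:
    """Stage 3: FP = CFP x (0.65 + 0.01 x RCAF), with RCAF between 0 and 70."""
    return cfp * (0.65 + 0.01 * rcaf)

def estimated_kloc(fp: float, loc_per_fp: int = LOC_PER_FP_CPP) -> float:
    """Convert function points to estimated thousands of lines of code."""
    return fp * loc_per_fp / 1000

fp = function_points(cfp=81, rcaf=41)
print(round(fp, 2))                  # 85.86
print(round(estimated_kloc(fp), 3))  # 5.495
```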

Extended function point metrics
Feature Points, UCPs …
 The function point metric was originally designed to be applied to business
information systems applications.
 The data dimension was emphasized to the exclusion of the functional and
behavioral (control) dimensions.
 The function point measure was inadequate for many engineering and
embedded systems
 Feature points : A superset of the function point, designed for applications
in which algorithmic complexity is high (real-time, process control,
embedded software applications).

 UCPs: Use case points (UCPs) allow the estimation of an application’s


size and effort from its use cases. UCPs are based on the number of actors,
scenarios, and various technical and environmental factors in the use case
diagram.
Extended function point metrics
UCPs
 UCPs are based on the number of actors, scenarios, and
various technical and environmental factors in the use case
diagram.
 The UCP equation is based on four variables:
Technical complexity factor (TCF)
Environment complexity factor (ECF)
Unadjusted use case points (UUCP)
Productivity factor (PF)
 which yield the equation:
 UCP = TCF * ECF * UUCP * PF
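A minimal sketch of the UCP equation as stated above; the factor values in the example call are hypothetical placeholders, since determining TCF, ECF, UUCP, and PF requires the full UCP worksheets.

```python
# Illustrative sketch of the UCP equation from the slide; the input values are
# hypothetical placeholders, not derived from real use case data.

def use_case_points(tcf: float, ecf: float, uucp: float, pf: float) -> float:
    """UCP = TCF * ECF * UUCP * PF."""
    return tcf * ecf * uucp * pf

print(use_case_points(tcf=1.05, ecf=0.95, uucp=120, pf=20))  # 2394.0
```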
Advantages & Disadvantages
Main advantages
Estimates can be prepared at the pre-project stage.
Based on requirement specification documents (and not dependent
on specific development tools or programming languages), the
method's reliability is relatively high.
Main disadvantages
While tools exist to support FPA, many steps in the process (e.g.,
identifying and categorizing functions) require manual analysis and are not
fully automatable.
Estimates based on detailed requirements specifications, which are not
always available.
The entire process requires an experienced function point team and
substantial resources.
The evaluations required result in subjective results.
Successful applications are related to data processing. The method
cannot yet be universally applied.
3. Cost of Quality
Overview:
Objectives of cost of software quality metrics
The classic model of cost of software quality
Galin’s extended model for cost of software
quality
Application of a cost of software quality
system
Problems in the application of cost of
software quality metrics
Objectives
In general – it enables management to achieve
economic control over SQA activities and
outcomes. The specific objectives are:
Control organization-initiated costs to prevent
and detect software errors.
Evaluation of the economic damages of software
failures as a basis for revising the SQA budget.
Evaluation of plans to increase or decrease SQA
activities or to invest in SQA infrastructure on
the basis of past economic performance.
Performance Comparisons
 Budgeted control expenditures (for SQA
prevention and appraisal activities).
 Previous year’s failure costs
 Previous project’s quality costs (control costs
and failure costs).
 Other department’s quality costs (control costs
and failure costs).
Evaluating SQA Systems
Cost metrics examples:
Percentage of cost of software quality out of
total software development costs.
Percentage of software failure costs out of
total software development costs.
Percentage of cost of software quality out of
total software maintenance costs.
Percentage of cost of software quality out of
total sales of software products and software
maintenance.
Model of Software Quality Costs
Cost of software quality:
 Costs of control
 Prevention costs
 Appraisal costs
 Costs of failure of control
 Internal failure costs
 External failure costs
Prevention Costs
Investments in development of SQA infrastructure
components
Procedures and work instructions
Support devices: templates, checklists etc
Software configuration management system
Software quality metrics
Regular implementation of SQA preventive
activities:
Instruction of new employees in SQA subjects
Certification of employees
Consultations on SQA issues to team leaders and others
Control of the SQA system through performance of:
Internal quality reviews
External quality audits
Management quality reviews
Appraisal Costs
Costs of reviews:
Formal design reviews (DRs)
Peer reviews (inspections and walkthroughs)
Expert reviews
Costs of software testing:
Unit, integration and software system tests
Acceptance tests (carried out by customers)
Costs of assuring quality of external
participants
Internal Failure Costs
 Costs of redesign or design corrections
subsequent to design review and test findings
 Costs of re-programming or correcting programs
in response to test findings
 Costs of repeated design review and re-testing
(regression tests)
External Failure Costs
Typical external failure costs cover:
Resolution of customer complaints during the warranty period.
Correction of software bugs detected during regular operation.
Correction of software failures after the warranty period is over even if
the correction is not covered by the warranty.
Damages paid to customers in case of a severe software failure.
Reimbursement of customer's purchase costs.
Insurance against customer's claims.

Typical examples of hidden external failure costs:
Reduction of sales to customers that suffered from software failures.
Severe reduction of sales motivated by the firm's damaged reputation.
Increased investment in sales promotion to counter the effects of past
software failures.
Reduced prospects to win a tender or, alternatively, the need to under-
price to prevent competitors from winning tenders.
Galin’s Extended Model of Software
Quality Costs
Cost of software quality:
 Costs of control
 Prevention costs
 Appraisal costs
 Managerial preparations and control costs
 Costs of failure of control
 Internal failure costs
 External failure costs
 Managerial failure costs
Managerial Preparation and
Control Costs
 Costs of carrying out contract reviews
 Costs of preparing project plans, including quality
plans
 Costs of periodic updating of project and quality
plans
 Costs of performing regular progress control
 Costs of performing regular progress control of
external participants’ contributions to projects
Managerial Failure Costs
 Unplanned costs for professional and other resources,
resulting from underestimation of the resources in the
proposals stage.
 Damages paid to customers as compensation for late
project completion, a result of the unrealistic schedule in
the Company’s proposal.
 Damages paid to customers as compensation for late
completion of the project, a result of management’s failure
to recruit team members.
 Domino effect: Damages to other projects planned to be
performed by the same teams involved in the delayed
projects. The domino effect may induce considerable
hidden external failure costs.
Application
Definition of a cost of software quality model
and specification of cost items.
Definition of the method of data collection for
each cost item.
Application of a cost of software quality
system, including thorough follow up.
Actions taken in response to the findings.
Cost of Software Quality
Balance by Quality Level
[Figure: quality costs plotted against the software quality level (low to high). Total control costs rise with the quality level while total failure of control costs fall; their sum, the total cost of software quality, reaches its minimum at the optimal software quality level.]
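As a numeric illustration of this balance, the sketch below uses two made-up monotone cost curves (not data from the lecture) and locates the quality level at which their sum, the total cost of software quality, is minimal.

```python
# Illustrative sketch of the cost-of-quality balance: control costs grow with the
# quality level, failure-of-control costs shrink, and the optimum minimizes their sum.
# Both cost functions are made-up examples, not data from the lecture.

def control_cost(q: float) -> float:        # rises with quality level q in [0, 1]
    return 20 + 200 * q ** 2

def failure_cost(q: float) -> float:        # falls with quality level q in [0, 1]
    return 300 * (1 - q) ** 2

levels = [i / 100 for i in range(101)]
optimal = min(levels, key=lambda q: control_cost(q) + failure_cost(q))
print(optimal, round(control_cost(optimal) + failure_cost(optimal), 1))  # 0.6 140.0
```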
Problems of Application
General problems:
Inaccurate and/or incomplete identification and classification of quality costs.
Negligent reporting by team members
Biased reporting of software costs, especially of “censored” internal and external
costs.
Biased recording of external failure costs - “camouflaged” compensation of
customers for failures.
Problems arising when collecting data on managerial costs:
Contract review and progress control activities are performed in a “part-time
mode”. The reporting of time invested is usually inaccurate and often neglected.
Many participants in these activities are senior staff members who are not
required to report use of their time resources.
Difficulties in determination of responsibility for schedule failures.
Payment of overt and formal compensation usually occurs quite some time after
the project is completed, and much too late for efficient application of the lessons
learned.
4. Discussion & Summary
Project progress control requires (1) control of
risk management activities, (2) project schedule
control, (3) project resource control, and (4)
project budget control. Even though budget control
usually has the highest priority, only the
combination of all control tasks ensures the
required coverage of risks.
Metrics facilitate management control, planning
and managerial intervention based on deviations of
actual from planned performance, and help identify
situations for development or maintenance process
improvement (preventive or corrective actions).
Discussion & Summary
Metrics must be relevant, valid, reliable, comprehensive, and
mutually exclusive. In addition, they must be easy and simple,
should not require independent data collection, and should
be immune to biased interventions by interested parties.
Available metrics for control are process metrics, which are
related to the software development process, maintenance
metrics, which are related to software maintenance (called
product metrics in [Galin2004]), and product metrics, which
are related to software artifacts.
Metrics are used to measure quality, accordance with
timetables, effectiveness (of error removal and
maintenance services), and productivity.
Discussion & Summary
A software measurement process should be
part of a quality control process which (1) defines
the employed software quality metrics, (2) selects
components to be assessed, (3) collects the data,
(4) identifies anomalous measurements, and
(5) analyzes them. Due to the complex
interdependencies, unexpected effects are
possible (measurement surprises).
The optimal software quality level is reached
when the sum of the total failure of control costs
and total control costs is minimal.
