Unit 4 Part 2
- Introduction
- Metrics in the Process Domain
- Metrics in the Project Domain
- Software Measurement
- Metrics for Software Quality
Uses of Measurement
• Can be applied to the software process with the intent of improving it
on a continuous basis
• Can be used throughout a software project to assist in estimation,
quality control, productivity assessment, and project control
• Can be used to help assess the quality of software work products and
to assist in tactical decision making as a project proceeds
Reasons to Measure
• To characterize in order to
– Gain an understanding of processes, products, resources, and
environments
– Establish baselines for comparisons with future assessments
• To evaluate in order to
– Determine status with respect to plans
• To predict in order to
– Gain understanding of relationships among processes and products
– Build models of these relationships
• To improve in order to
– Identify roadblocks, root causes, inefficiencies, and other opportunities for
improving product quality and process performance
Metrics in the Process Domain
• Process metrics are collected across all projects and over long periods
of time
• They are used for making strategic decisions
• The intent is to provide a set of process indicators that lead to long-term software process improvement
• The only way to know how/where to improve any process is to
– Measure specific attributes of the process
– Develop a set of meaningful metrics based on these attributes
– Use the metrics to provide indicators that will lead to a strategy for
improvement
Etiquette of Process Metrics
• Use common sense and organizational sensitivity when interpreting
metrics data
• Provide regular feedback to the individuals and teams who collect
measures and metrics
• Don’t use metrics to evaluate individuals
• Work with practitioners and teams to set clear goals and metrics that will
be used to achieve them
• Never use metrics to threaten individuals or teams
• Metrics data that indicate a problem should not be considered “negative”
– Such data are merely an indicator for process improvement
• Don’t obsess on a single metric to the exclusion of other important
metrics
Metrics in the Project Domain
• Project metrics enable a software project manager to
– Assess the status of an ongoing project
– Track potential risks
– Uncover problem areas before their status becomes critical
– Adjust work flow or tasks
– Evaluate the project team’s ability to control quality of software work
products
• Many of the same metrics are used in both the process and project
domain
• Project metrics are used for making tactical decisions
– They are used to adapt project workflow and technical activities
Use of Project Metrics
• The first application of project metrics occurs during estimation
– Metrics from past projects are used as a basis for estimating time and effort
• As a project proceeds, the amount of time and effort expended are compared
to original estimates
• As technical work commences, other project metrics become important
– Production rates are measured (represented in terms of models created, review
hours, function points, and delivered source lines of code)
– Errors uncovered during each generic framework activity (i.e., communication,
planning, modeling, construction, deployment) are measured (a short tracking sketch follows this list)
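A minimal sketch in Python of this kind of tracking, comparing expended effort against the original estimate and tallying errors per framework activity (all figures are made up for illustration):

# Hypothetical tracking data (illustrative values only)
estimated_effort_pm = 30.0      # original effort estimate, person-months
actual_effort_pm = 26.5         # effort expended to date

errors_per_activity = {         # errors uncovered per framework activity
    "communication": 3, "planning": 2, "modeling": 14,
    "construction": 21, "deployment": 5,
}

variance = actual_effort_pm - estimated_effort_pm
print(f"Effort variance: {variance:+.1f} person-months")
for activity, count in errors_per_activity.items():
    print(f"{activity:13s} {count} errors")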
Software Measurement
Categories of Software Measurement
• Two categories of software measurement
– Direct measures of the
• Software process (cost, effort, etc.)
• Software product (lines of code produced, execution speed, defects reported
over time, etc.)
– Indirect measures of the
• Software product (functionality, quality, complexity, efficiency, reliability,
maintainability, etc.)
• Project metrics can be consolidated to create process metrics for an
organization
Size-oriented Metrics
• Derived by normalizing quality and/or productivity measures by
considering the size of the software produced
• Thousand lines of code (KLOC) are often chosen as the normalization
value
• Metrics include (a short worked sketch follows this list)
– Errors per KLOC
– Defects per KLOC
– Dollars per KLOC
– Pages of documentation per KLOC
– Errors per person-month
– KLOC per person-month
– Dollars per page of documentation
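A minimal sketch in Python of how these normalized metrics are derived, using made-up project figures (none of the numbers below come from the slides):

# Hypothetical size-oriented project data (illustrative values only)
kloc = 12.1          # thousands of delivered source lines of code
errors = 134         # errors found before release
defects = 29         # defects reported after release
effort_pm = 24       # effort in person-months
cost = 168_000       # total project cost in dollars
doc_pages = 365      # pages of documentation

print(f"Errors per KLOC:  {errors / kloc:.1f}")
print(f"Defects per KLOC: {defects / kloc:.1f}")
print(f"Dollars per KLOC: {cost / kloc:,.0f}")
print(f"Pages per KLOC:   {doc_pages / kloc:.1f}")
print(f"Errors per person-month: {errors / effort_pm:.1f}")
print(f"KLOC per person-month:   {kloc / effort_pm:.2f}")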
Function-oriented Metrics
• Function-oriented metrics use a measure of the functionality delivered
by the application as a normalization value
• The most widely used metric of this type is the function point (FP):
FP = TC × [0.65 + 0.01 × Σ(Xi)]
– TC is the count total, obtained by counting the measurement parameters below and weighting each by its complexity
– Xi (i = 1 to 14) are the value adjustment factors, each rated on a scale of 0 to 5
• Measurement parameters
– Number of External Inputs (EI), e.g., input screens and tables
– Number of External Outputs (EO)
– Number of External Inquiries (EQ)
– Number of Internal Logical Files (ILF)
– Number of External Interface Files (EIF)
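A minimal worked sketch in Python, assuming hypothetical counts and illustrative "average" complexity weights (none of these numbers come from the slides), of how TC and FP are computed:

# Hypothetical function point computation (all counts and weights are illustrative)
weights = {                       # assumed "average" complexity weights
    "external_inputs": 4, "external_outputs": 5, "external_inquiries": 4,
    "internal_logical_files": 10, "external_interface_files": 7,
}
counts = {                        # assumed counts for a sample application
    "external_inputs": 12, "external_outputs": 8, "external_inquiries": 5,
    "internal_logical_files": 4, "external_interface_files": 2,
}
tc = sum(counts[k] * weights[k] for k in counts)   # count total (TC)

xi = [3] * 14                     # 14 value adjustment factors, each rated 0..5 (assumed)
fp = tc * (0.65 + 0.01 * sum(xi))
print(f"TC = {tc}, FP = {fp:.1f}")                 # TC = 162, FP = 173.3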
Function Point Controversy
• Like the KLOC measure, function point use also has proponents and
opponents
• Proponents claim that
– FP is programming language independent
– FP is based on data that are more likely to be known in the early stages of
a project, making it more attractive as an estimation approach
• Opponents claim that
– FP requires some “sleight of hand” because the computation is based on
subjective data
– FP has no direct physical meaning…it’s just a number
Reconciling LOC and FP Metrics
• Relationship between LOC and FP depends upon
– The programming language that is used to implement the software
– The quality of the design
• FP and LOC have been found to be relatively accurate predictors of
software development effort and cost
– However, a historical baseline of information must first be established
• LOC and FP can be used to estimate object-oriented software projects
– However, they do not provide enough granularity for the schedule and
effort adjustments required in the iterations of an evolutionary or
incremental process
• The following table provides a rough estimate of the average number of
LOC per function point in various programming languages
LOC Per Function Point
Language Average Median Low High
Ada 154 -- 104 205
Assembler 337 315 91 694
C 162 109 33 704
C++ 66 53 29 178
COBOL 77 77 14 400
Java 55 53 9 214
PL/1 78 67 22 263
Visual Basic 47 42 16 158
www.qsm.com/?q=resources/function-point-languages-table/index.html
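A minimal sketch of how the table above can be used to turn a function-point estimate into a rough LOC estimate; the average ratios are taken from the table, while the 320-FP project size is a made-up input:

# Average LOC per function point, taken from the table above
loc_per_fp = {"Ada": 154, "Assembler": 337, "C": 162, "C++": 66,
              "COBOL": 77, "Java": 55, "PL/1": 78, "Visual Basic": 47}

estimated_fp = 320   # hypothetical function-point estimate for a project
for language, ratio in loc_per_fp.items():
    print(f"{language:13s} ~{estimated_fp * ratio:,} LOC")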
Object-oriented Metrics
• Number of scenario scripts (i.e., use cases)
– This number is directly related to the size of an application and to the
number of test cases required to test the system
• Number of key classes (the highly independent components)
– Key classes are defined early in object-oriented analysis and are central to
the problem domain
– This number indicates the amount of effort required to develop the
software
– It also indicates the potential amount of reuse to be applied during
development
• Number of support classes
– Support classes are required to implement the system but are not
immediately related to the problem domain (e.g., user interface, database,
computation)
– This number indicates the amount of effort and potential reuse
Metrics for Software Quality
• Correctness
– This is the number of defects per KLOC, where a defect is a verified lack of
conformance to requirements
– Defects are those problems reported by a program user after the program is
released for general use
• Maintainability
– This describes the ease with which a program can be corrected if an error is
found, adapted if the environment changes, or enhanced if the customer has
changed requirements
– Mean time to change (MTTC): the time required to analyze, design, implement, test, and
distribute a change to all users
• Maintainable programs on average have a lower MTTC (a short worked example follows this list)
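A minimal sketch of computing these two quality indicators, using made-up figures; treating MTTC as a simple average of per-change turnaround times is an assumption made here for illustration:

# Hypothetical quality data (illustrative values only)
kloc = 12.1
defects_after_release = 18
correctness = defects_after_release / kloc     # defects per KLOC
print(f"Correctness: {correctness:.2f} defects per KLOC")

# MTTC: average time (here in days, assumed) to analyze, design,
# implement, test, and distribute each change to all users
change_times_days = [4.5, 7.0, 3.0, 9.5, 5.0]
mttc = sum(change_times_days) / len(change_times_days)
print(f"MTTC: {mttc:.1f} days")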
Defect Removal Efficiency
• Defect removal efficiency provides benefits at both the project and process
level
• It is a measure of the filtering ability of QA activities as they are applied
throughout all process framework activities
– It indicates the percentage of software errors found before software release
• It is defined as DRE = E / (E + D) (see the worked example after this list)
– E is the number of errors found before delivery of the software to the end user
– D is the number of defects found after delivery
• As D increases, DRE decreases (i.e., becomes a smaller and smaller fraction)
• The ideal value of DRE is 1, which means no defects are found after delivery
• DRE encourages a software team to institute techniques for finding as many
errors as possible before delivery
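A minimal worked example of the DRE formula, with made-up error and defect counts:

def defect_removal_efficiency(errors_before, defects_after):
    # DRE = E / (E + D); approaches 1 as fewer defects escape to the field
    return errors_before / (errors_before + defects_after)

# Hypothetical counts: 120 errors found before delivery, 8 defects found after
print(f"DRE = {defect_removal_efficiency(120, 8):.3f}")   # 0.938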