Project Metrics - Sanket Shah

Project metrics can help software teams evaluate the effectiveness of processes and identify problem areas. Key metrics include measures of inputs like resources, outputs like deliverables, and results like effectiveness. Metrics can be direct like lines of code or indirect like reliability. Failure analysis uses metrics to categorize errors by origin and cost, identify the most costly categories, and develop plans to modify processes to reduce high-cost errors. Size, function, and process metrics provide different views of a project's status and quality.


Project Metrics

- Sanket Shah
What is it?
• Quantitative measures that help software people gauge the efficacy of a software process and of other projects that use the same process.
• Can also be used to pinpoint problem areas.
Reasons for Metrics

• To Characterize
• To Evaluate
• To Predict
• To Improve
Definitions
• Measure:
– A single data point that is collected (e.g., errors uncovered in one module).
• Measurement:
– A collection of one or more data points (e.g., errors uncovered for each module in a large set of modules).
Definitions
• Metric:
– Relates individual measures in some way.
• Indicator:
– A metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
Definitions
• Process Indicators:
– Enable a software engineering organization to gain insight into the efficacy of an existing process. They help to assess what works and what doesn’t.
Definitions
• Project Indicators:
– Assess the status of an ongoing project.
– Track potential risks.
– Uncover problem areas before they become critical.
– Adjust work flow or tasks.
– Evaluate the project team’s ability to control the quality of software work products.
Definitions
• Personal Software Process:
– A structured set of process descriptions, measurements and methods that help individual engineers estimate and plan their work.
– Uses forms, scripts and standards.
Metrics Guidelines

• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.
• Don’t use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
Metrics Guidelines

• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered “negative.” These data are merely an indicator for process improvement.
• Don’t obsess on a single metric to the exclusion of other important metrics.
Failure Analysis Steps
• Categorize all errors and defects by origin.
• Record the cost to correct each error and defect.
• Count the errors and defects in each category and rank the categories in descending order.
• Compute the overall cost of the errors and defects in each category.
• Analyze the resultant data to uncover the categories that result in the highest cost to the organization.
• Develop plans to modify the process with the intent of eliminating (or reducing the frequency of) the class of errors and defects that is most costly.
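The ranking steps above can be sketched in a few lines of Python. The error records and categories here are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical error records: (category of origin, cost to correct).
errors = [
    ("specification", 500), ("logic", 120), ("specification", 700),
    ("data handling", 90), ("logic", 150), ("specification", 300),
]

# Compute the overall cost of errors and defects in each category.
cost_by_category = defaultdict(int)
for category, cost in errors:
    cost_by_category[category] += cost

# Rank the categories by total cost, highest first, to identify the
# classes of errors that are the best targets for process modification.
ranked = sorted(cost_by_category.items(), key=lambda kv: kv[1], reverse=True)
for category, total in ranked:
    print(f"{category}: {total}")
```

The most costly category (here, specification errors) would drive the process-improvement plan.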
Project Metrics Measurements

• Inputs – measures of resources
• Outputs – measures of deliverables
• Results – effectiveness of deliverables
Software Measure Types
• Direct (Quantity)
– Cost and effort applied
– LOC, execution speed
• Indirect (Quality)
– Functionality, complexity, efficiency, reliability, etc.
Product Metrics

• Quality of Deliverables
• Measures of Analysis Models
• Complexity of the design
– Internal algorithmic complexity, architectural
complexity, data flow complexity
• Code Measures
• Process Effectiveness
– Defect Removal Efficiency
Process Metrics - Strategic
• The majority focus on the quality achieved as a consequence of a repeatable or managed process
• Statistical SQA data
– error categorization & analysis
• Defect removal efficiency
– propagation of defects from phase to phase
• Reuse data
Project Metrics – Tactical

• Effort/time per SE task
• Errors uncovered per review hour
• Scheduled vs. actual milestone dates
• Changes (number) and their characteristics
• Distribution of effort on SE tasks
Size Oriented Metrics
• Errors per KLOC
• Defects per KLOC
• Amount spent per LOC
• Documentation size per KLOC
• Errors / person-month
• LOC per person-month
• Cost (Currency) / page of
documentation
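The size-oriented metrics above are simple ratios over project totals. A minimal sketch, using hypothetical project numbers:

```python
# Hypothetical project totals (for illustration only).
loc = 12_100        # delivered lines of code
errors = 134        # errors found before delivery
effort = 24         # person-months of effort
cost = 168_000      # total cost (currency units)
doc_pages = 365     # pages of documentation

kloc = loc / 1000
errors_per_kloc = errors / kloc      # errors per KLOC
loc_per_pm = loc / effort            # LOC per person-month
cost_per_loc = cost / loc            # amount spent per LOC
cost_per_page = cost / doc_pages     # cost per page of documentation
```

These ratios let projects of different sizes be compared on a common, size-normalized basis.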
Function Oriented Metrics

• Errors per FP
• Defects per FP
• Cost (Currency) per FP
• Pages of documentation per FP
• FP per person-month
Computing Function Points

• Analyze the information domain of the application and develop counts: establish a count for inputs, outputs, inquiries, files and external interfaces.
• Weight each count by assessing its complexity: assign a level of complexity (simple, average or complex) and the corresponding weight to each count.
• Assess the influence of global factors that affect the application: grade the significance of external factors Fi, such as reuse, concurrency, OS, etc.
• Compute function points:

  function points = count-total x C

  where count-total = sum of (count x weight),
  complexity multiplier C = 0.65 + 0.01 x N, and
  degree of influence N = sum of Fi.
Information Domain Analysis

                                          weighting factor
  measurement parameter         count   simple  avg.  complex
  number of user inputs          __  x    3      4      6    = __
  number of user outputs         __  x    4      5      7    = __
  number of user inquiries       __  x    3      4      6    = __
  number of files                __  x    7     10     15    = __
  number of ext. interfaces      __  x    5      7     10    = __
                                                 count-total = __
                                     x complexity multiplier = __
                                             function points = __
Accounting for Complexity

Factors are rated on a scale of 0 (not important) to 5 (very important):

  data communications           on-line update
  distributed functions         complex processing
  heavily used configuration    installation ease
  transaction rate              operational ease
  on-line data entry            multiple sites
  end user efficiency           facilitate change
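The whole computation can be sketched end to end. The counts and factor ratings below are hypothetical, and every parameter is assumed to be of average complexity:

```python
# Average-complexity weights for the five information-domain parameters.
WEIGHTS_AVG = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 10, "interfaces": 7}

# Hypothetical information-domain counts for an application.
counts = {"inputs": 32, "outputs": 60, "inquiries": 24,
          "files": 8, "interfaces": 2}

# count-total = sum of (count x weight) over the information domain.
count_total = sum(counts[k] * WEIGHTS_AVG[k] for k in counts)

# Fourteen complexity factors Fi, each rated 0 (not important) to 5
# (very important); sample ratings here.
F = [3, 2, 0, 0, 3, 4, 4, 5, 3, 3, 2, 2, 4, 5]
N = sum(F)               # degree of influence
C = 0.65 + 0.01 * N      # complexity multiplier

function_points = count_total * C
print(function_points)
```

Note that C ranges from 0.65 (all factors rated 0) to 1.35 (all rated 5), so the adjustment can move the raw count by up to 35% in either direction.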
Reasons for Selecting FP

• Independent of programming language.
• Uses readily countable characteristics of the “information domain” of the problem.
• Does not “penalize” inventive implementations that require fewer LOC than others.
• Makes it easier to accommodate reuse and the trend towards object-oriented approaches.
Measuring Quality
• Correctness — the degree to
which a program operates
according to specification
• Maintainability—the degree to
which a program is amenable to
change
Measuring Quality
• Integrity—the degree to which a
program is impervious to outside
attack
• Usability—the degree to which a
program is easy to use
Defect Removal Efficiency

• DRE = E / (E + D), where E = errors found before delivery and D = defects found after delivery.
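As a one-function sketch (the example figures are hypothetical):

```python
def defect_removal_efficiency(errors_before, defects_after):
    """DRE = E / (E + D): the fraction of all problems caught before delivery."""
    return errors_before / (errors_before + defects_after)

# E.g., 90 errors found in reviews and testing, 10 defects reported by users:
dre = defect_removal_efficiency(90, 10)
print(dre)
```

A DRE close to 1.0 indicates that the process is filtering out almost all problems before the software reaches the customer.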
