
Chap 4.

Measurement
Software Test Metrics and Measurements-
There is a famous statement: “We can’t control things that we can’t measure”.
In software projects, measuring the quality, cost, and effectiveness of the project and the
processes is most important. Without measuring these, a project can’t be completed
successfully.

Here, controlling the project means that the project manager/lead can identify deviations
from the test plan as early as possible and react at the right time. Generating test metrics
based on the project's needs is essential to achieving the quality of the software being tested.

What is Software Testing Metrics?


“A Metric is a quantitative measure of the degree to which a system, system component, or
process possesses a given attribute”.
Metrics can be defined as “STANDARDS OF MEASUREMENT”.
Software Metrics are used to measure the quality of the project. A metric is a unit used for
describing an attribute; it is a scale for measurement.
For example, "Kilogram" is a metric for measuring the attribute "Weight". Similarly, in software,
for "How many issues are found in a thousand lines of code?", the number of issues is one
measurement and the number of lines of code is another measurement. The metric is defined
from these two measurements.

Test metrics example:


 How many defects exist within the module?
 How many test cases are executed per person?
 What is the Test coverage %?

What is Software Testing Measurement?


Measurement indicates the extent, amount, dimension, capacity, or size of some attribute of a
product or process.
Test Measurement example: Total number of defects.
Please refer to the diagram below for a clear understanding of the difference between
Measurement & Metrics.

Why Test Metrics


Generation of Software Test Metrics is the most important responsibility of the Software Test
Lead/Manager.
Test Metrics are used to,
1. Decide for the next phase of activities, such as estimating the cost & schedule of
future projects.
2. Understand the kind of improvement required to succeed in the project.
3. Decide on the Process or Technology to be modified, etc.

Importance of Software Testing Metrics:


As explained above, Test Metrics are essential for measuring the quality of the
software.
Now, how can we measure the quality of the software by using metrics?
Suppose a project does not have any metrics; then how will the quality of the work done
by a Test Analyst be measured?
For Example, a Test Analyst has to,
1. Design the test cases for 5 requirements.
2. Execute the designed test cases.
3. Log the defects and fail the related test cases.
4. After a defect is resolved, re-test the defect and re-execute the
corresponding failed test case.

In the above scenario, if metrics are not followed, then the work completed by the test analyst
will be subjective, i.e. the Test Report will not contain proper information about the status
of his work/project.
If metrics are involved in the project, then the exact status of his/her work, with proper
numbers/data, can be published.
i.e. in the Test Report, we can publish:
1. How many test cases have been designed per requirement?
2. How many test cases still need to be designed?
3. How many test cases are executed?
4. How many test cases are passed/failed/blocked?
5. How many test cases have not been executed yet?
6. How many defects are identified & what is the severity of those defects?
7. How many test cases are failed due to one particular defect? etc.
Based on the project needs, we can have more metrics than the above list, to know the status
of the project in detail.
The Test Lead/Manager will gain insight into the key points mentioned below based on
the provided metrics.
 Percentage of work completed.
 Percentage of remaining work.
 Time to complete the remaining work.
 Whether the project is going as per the schedule or lagging, etc.
If the project does not meet the timeline, the manager will inform the client and other
stakeholders, providing reasons for the delay to prevent any last-minute surprises.

Metrics Life Cycle

Types of Manual Test Metrics


Testing Metrics are mainly divided into 2 categories.
1. Base Metrics
2. Calculated Metrics
Base Metrics: Base Metrics are the metrics derived from the data gathered by the
Test Analyst during test case development and execution.
This data is tracked throughout the test life cycle, i.e. data such as the total no. of
test cases developed for a project, the no. of test cases that need to be executed,
and the no. of test cases passed/failed/blocked.

Calculated Metrics: Calculated Metrics are derived from the data gathered in Base
Metrics. These metrics are tracked by the test lead/manager for Test Reporting purposes.
Examples of Software Testing Metrics
Let’s take an example to calculate various test metrics used in
software test reports:

Below is the data retrieved from the Test Analyst who is involved in testing
(these figures are used in all the calculations that follow):

No. of Requirements: 5
Total No. of Test Cases Written: 100
No. of Test Cases Executed: 65
No. of Test Cases Passed: 30
No. of Test Cases Failed: 26
No. of Test Cases Blocked: 9
Total No. of Defects Identified: 30 (Critical: 6, High: 10, Medium: 6, Low: 8)

Definitions and Formulas for Calculating Metrics:


#1) Test cases executed (%ge): This metric determines the percentage of test cases that have
been executed.
%ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) *
100.
So, from the above data,
%ge Test cases Executed = (65 / 100) * 100 = 65%

#2) Test cases not executed (%ge): This metric is used to obtain the pending execution
status of the test cases in terms of %ge.
%ge Test cases not executed = (No. of Test cases not executed / Total no. of Test cases
written) * 100.
So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%

#3) Test cases Passed (%ge): This metric is used to obtain the Pass %ge of the executed test
cases.
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) Test cases Failed (%ge): This metric is used to obtain the Fail %ge of the executed test
cases.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%

#5) Test cases Blocked: This metric is used to obtain the blocked %ge of the executed test
cases. A detailed report can be submitted by specifying the actual reason for blocking the test
cases.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases Executed) *
100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
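The five execution-status formulas above can be sketched in Python. The figures are the chapter's running example (100 test cases written, 65 executed, of which 30 passed, 26 failed, and 9 were blocked); the helper function name is ours, not from the text:

```python
def execution_metrics(written, executed, passed, failed, blocked):
    """Execution-status percentages as defined in metrics #1-#5 above."""
    return {
        "executed_pct": executed / written * 100,                  # #1
        "not_executed_pct": (written - executed) / written * 100,  # #2
        "passed_pct": passed / executed * 100,                     # #3
        "failed_pct": failed / executed * 100,                     # #4
        "blocked_pct": blocked / executed * 100,                   # #5
    }

m = execution_metrics(written=100, executed=65, passed=30, failed=26, blocked=9)
print({k: round(v) for k, v in m.items()})
# {'executed_pct': 65, 'not_executed_pct': 35, 'passed_pct': 46, 'failed_pct': 40, 'blocked_pct': 14}
```

The rounded values match the 65%, 35%, 46%, 40%, and 14% results computed above.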

#6) Defect Density = No. of Defects identified / size


(Here "Size" is taken as the number of requirements, so Defect Density is calculated as the
number of defects identified per requirement. Similarly, Defect Density can be calculated as
the number of defects identified per 100 lines of code [OR] the number of defects identified
per module, etc.)

So, from the above data,


Defect Density = (30 / 5) = 6

#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of
Defects found during QA testing + No. of Defects found by End-user)) * 100

DRE is used to identify the test effectiveness of the system.


Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, the end-user / client identified 40 defects,
which could have been identified during the QA testing phase.

Now, The DRE will be calculated as,


DRE = [100 / (100 + 40)] * 100 = [100 /140] * 100 = 71%

#8) Defect Leakage: Defect Leakage is the Metric that is used to identify the efficiency of
the QA testing i.e., how many defects are missed/slipped during the QA testing.

Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing.) * 100

Suppose, During Development & QA testing, we have identified 100 defects.


After the QA testing, during Alpha & Beta testing, the end-user / client identified 40 defects,
which could have been identified during the QA testing phase.

Defect Leakage = (40 /100) * 100 = 40%
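Metrics #6 through #8 can be computed the same way. The helpers below are a sketch using the chapter's figures (30 defects over 5 requirements; 100 defects found during QA, 40 found afterwards by end users); the function names are ours:

```python
def defect_density(defects, size):
    # "Size" may be requirements, modules, or KLOC depending on convention (#6).
    return defects / size

def dre(qa_defects, enduser_defects):
    # Defect Removal Efficiency (#7).
    return qa_defects / (qa_defects + enduser_defects) * 100

def defect_leakage(uat_defects, qa_defects):
    # Defect Leakage (#8). Definitions vary; the text divides by QA defects.
    return uat_defects / qa_defects * 100

print(defect_density(30, 5))   # 6.0 defects per requirement
print(round(dre(100, 40)))     # 71
print(defect_leakage(40, 100)) # 40.0
```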

#9) Defects by Severity/Priority: This metric gives the number of defects identified at each
severity/priority level, which is used to judge the quality of the software.

%ge Critical Defects = No. of Critical Defects identified / Total no. of Defects identified *
100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%

%ge High Defects = No. of High Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%

%ge Medium Defects = No. of Medium Defects identified / Total no. of Defects identified *
100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
%ge Low Defects = No. of Low Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Low Defects = 8/ 30 * 100 = 27%
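The severity breakdown above lends itself to a dictionary comprehension; the sketch below reuses the chapter's counts (6 critical, 10 high, 6 medium, 8 low):

```python
defects_by_severity = {"Critical": 6, "High": 10, "Medium": 6, "Low": 8}
total = sum(defects_by_severity.values())  # 30

# Percentage of total defects at each severity level (#9).
severity_pct = {sev: count / total * 100 for sev, count in defects_by_severity.items()}
print({sev: round(pct, 2) for sev, pct in severity_pct.items()})
# {'Critical': 20.0, 'High': 33.33, 'Medium': 20.0, 'Low': 26.67}
```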

Software metrics:
A software metric is a measure of software characteristics that are quantifiable or countable.
Software metrics are important for many reasons, including measuring software performance,
planning work items, and measuring productivity, among many other uses.

Within the software development process, there are many metrics that are all related to each
other. Software metrics are related to the four functions of management: Planning,
Organization, Control, or Improvement.

Software metrics are a standard of measure spanning many activities, each of which involves
some degree of measurement. They can be classified into three categories: product metrics,
process metrics, and project metrics.

• Product metrics describe the characteristics of the product such as size, complexity,
design features, performance, and quality level.

• Process metrics can be used to improve software development and maintenance.
Examples include the effectiveness of defect removal during development, the pattern of
testing defect arrival, and the response time of the fix process.

• Project metrics describe the project characteristics and execution. Examples
include the number of software developers, the staffing pattern over the life cycle of the
software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality metrics of a
project are both process metrics and project metrics.

Scope of Software Metrics


Software metrics encompass many activities, which include the following –

1. Cost and effort estimation

2. Productivity measures and model

3. Data collection

4. Quality models and measures

5. Reliability models

6. Performance and evaluation models

7. Structural and complexity metrics

8. Capability maturity assessment

9. Management by metrics

10. Evaluation of methods and tools

Software measurement is a diverse collection of these activities that range from models
predicting software project costs at a specific stage to measures of program structure.

1. Cost and Effort Estimation

Effort is expressed as a function of one or more variables such as the size of the program, the
capability of the developers and the level of reuse. Cost and effort estimation models have
been proposed to predict the project cost during early phases in the software life cycle. The
different models proposed are –

• Boehm’s COCOMO model

• Putnam’s SLIM model

• Albrecht’s function point model


2. Productivity Model and Measures
Productivity can be considered a function of value and cost. Each can be
decomposed into different measurable components: size, functionality, time, money, etc.
The different possible components of a productivity model can be expressed in the
following diagram.

3. Data Collection
The quality of any measurement program is clearly dependent on careful data
collection. Data collected can be distilled into simple charts and graphs so that the
managers can understand the progress and problem of the development. Data
collection is also essential for scientific investigation of relationships and trends.

4. Quality Models and Measures


Quality models have been developed for measuring the quality of the product,
without which productivity is meaningless. These quality models can be combined
with a productivity model to measure productivity correctly. Such models are
usually constructed in a tree-like fashion: the upper branches hold important
high-level quality factors such as reliability and usability.
The divide-and-conquer approach has become a standard way of measuring
software quality.

5. Reliability Models
Most quality models include reliability as a component factor, however, the need to
predict and measure reliability has led to a separate specialization in reliability
modeling and prediction. The basic problem in reliability theory is to predict when a
system will eventually fail.

6. Performance Evaluation and Models


It includes externally observable system performance characteristics such as response
times and completion rates, and the internal working of the system such as the
efficiency of algorithms. It is another aspect of quality.

7. Structural and Complexity Metrics


Here we measure the structural attributes of representations of the software, which are
available in advance of execution. Then we try to establish empirically predictive
theories to support quality assurance, quality control, and quality prediction.

8. Capability Maturity Assessment


This model can assess many different attributes of development including the use of
tools, standard practices and more. It is based on the key practices that every good
contractor should be using.

9. Management by Metrics
For managing the software project, measurement has a vital role. For checking
whether the project is on track, users and developers can rely on the measurement
based chart and graph. The standard set of measurements and reporting methods are
especially important when the software is embedded in a product where the customers
are not usually well-versed in software terminology.

10. Evaluation of Methods and Tools


This depends on the experimental design, proper identification of factors likely to
affect the outcome and appropriate measurement of factor attributes.

Categories/Types of Software Testing Metrics

Here are the three different categories of software testing metrics: Process Metrics, Product
Metrics, and Project Metrics. Now, let’s delve into the key metrics in each category and their
importance to QA success.

Putting these metrics in place typically involves:

 Defining testing standards and procedures

 Identifying data requirements

 Selecting an appropriate test framework

Process Metrics

Process metrics focus on the efficiency and effectiveness of testing activities.

1. Test Case Effectiveness: Measures how well test cases detect defects.

o Formula: (Defects Detected / Test Cases Run) x 100

o Use Case: Helps evaluate the quality of test cases and refine them for better
defect detection.

2. Cycle Time: Tracks the time taken to complete the testing process.

o Insight: Helps teams understand the efficiency of test runs and identify delays.

o Example: If a testing cycle takes longer than expected, it may indicate
inefficiencies in the test environment or setup.

3. Defect Fixing Time: Measures the time taken to resolve a defect from detection to
closure.

o Use Case: Identifies delays in defect resolution, aiding in optimizing workflows.
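Defect fixing time is simply the elapsed time between detection and closure. A minimal Python sketch (the timestamps are made-up illustration data):

```python
from datetime import datetime

def fixing_time_hours(detected, closed):
    """Hours elapsed between defect detection and defect closure."""
    return (closed - detected).total_seconds() / 3600

# Hypothetical defect: detected one morning, closed the next afternoon.
detected = datetime(2024, 3, 1, 9, 0)
closed = datetime(2024, 3, 2, 15, 30)
print(fixing_time_hours(detected, closed))  # 30.5
```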

Product Metrics

Product metrics evaluate the quality of the software under test.

1. Number of Defects: Indicates the quality and efficiency of the software product.

o Insight: Helps teams identify problem areas and implement necessary improvements.

2. Defect Severity: Classifies defects based on their impact.

o Example Categories: Critical, Major, Minor.

o Use Case: Prioritizes fixes based on severity to ensure critical issues are
addressed first.

3. Passed/Failed Test Case Metrics: Provides data on the stability and functionality of
the software.

o Formula for Passed Cases: (Passed Test Cases / Total Test Cases) x 100

o Use Case: Identifies areas of improvement by analysing failed test cases.

Project Metrics

Project metrics provide insights into the broader scope of the testing process, focusing on
team and resource efficiency.
1. Test Coverage: Measures the percentage of tested functionalities.

o Formula: (Tested Functionalities / Total Functionalities) x 100

o Use Case: Ensures that all critical functionalities are tested.

2. Cost of Testing: Assesses total testing expenditure, including infrastructure and
resource costs.

o Use Case: Helps in budget allocation and identifying cost-saving opportunities.

3. Budget/Schedule Variance: Tracks deviations from planned costs and timelines.

o Use Case: Aids in maintaining project timelines and avoiding budget overruns.

Examples of Software Testing Metrics Calculations

Example 1: Test Execution Metrics

 Total Test Cases Written: 200

 Total Test Cases Executed: 180

 Passed Test Cases: 100

 Failed Test Cases: 80

Calculation:

 Percentage of Test Cases Executed: (180 / 200) x 100 = 90%

 Passed Test Cases Percentage: (100 / 180) x 100 = 55.56%

 Failed Test Cases Percentage: (80 / 180) x 100 = 44.44%

Example 2: Defect Metrics

 Total Defects Identified: 20

 Valid Defects: 15
 Fixed Defects: 12

 Deferred Defects: 5

Calculation:

 Fixed Defects Percentage: (12 / 20) x 100 = 60%

 Accepted (Valid) Defects Percentage: (15 / 20) x 100 = 75%

 Deferred Defects Percentage: (5 / 20) x 100 = 25%
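Both worked examples can be checked in a few lines of Python:

```python
# Example 1: test execution metrics
written, executed = 200, 180
passed, failed = 100, 80
print(round(executed / written * 100, 2))  # 90.0
print(round(passed / executed * 100, 2))   # 55.56
print(round(failed / executed * 100, 2))   # 44.44

# Example 2: defect metrics
total, valid, fixed, deferred = 20, 15, 12, 5
print(fixed / total * 100)     # 60.0
print(valid / total * 100)     # 75.0
print(deferred / total * 100)  # 25.0
```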

OBJECT ORIENTED METRICS USED IN TESTING

Object-oriented metrics capture many attributes of a software product, and some of them are
relevant in testing. Measuring structural design attributes of a software system, such as
coupling, cohesion, or complexity, is a promising approach towards early quality assessment.

There are several metrics available in the literature to capture the quality of design and
source code.

Coupling Metrics
Coupling relations increase complexity, reduce encapsulation and potential reuse, and limit
understandability and maintainability. Coupling metrics require information about attribute
usage and method invocations of other classes. These metrics are given in Table 10.1.
Higher values of coupling metrics indicate that a class under test will require a larger
number of stubs during testing. In addition, each interface will need to be tested
thoroughly.
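The text's Table 10.1 is not reproduced here, but as an illustration of how a coupling metric is computed, the sketch below counts Coupling Between Objects (CBO), i.e. the number of distinct other classes a class references. CBO is one of the standard Chidamber and Kemerer coupling metrics; the class names below are hypothetical:

```python
# For each class: the set of other classes it references
# (through attribute usage or method invocation).
references = {
    "Order": {"Customer", "Invoice", "Inventory"},
    "Customer": {"Address"},
    "Invoice": {"Order", "Customer"},
}

def cbo(cls):
    """Coupling Between Objects: count of distinct classes coupled to cls."""
    return len(references.get(cls, set()))

print(cbo("Order"))  # 3
```

A CBO of 3 for Order suggests roughly three stubs are needed to test the class in isolation, which is why high coupling raises testing effort.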
