
PMIT 6111 Lecture 6 Test Management

The document outlines various testing estimation techniques used in software testing and quality assurance, including PERT, UCP, and the Wideband Delphi technique. It details how to apply these methods, such as calculating optimistic, most likely, and pessimistic estimates in PERT, and how to assess use case points in UCP. Additionally, it emphasizes the importance of experience-based estimation by leveraging past project metrics and expert insights.


TEST MANAGEMENT

PMIT 6111: Software Testing & Quality Assurance


Test Management





Testing Estimation Techniques
• The following testing estimation techniques are widely used and have proven reliable in practice:
❑ PERT software testing estimation technique
❑ Use-Case Point (UCP) method
❑ Work Breakdown Structure (WBS)
❑ Wideband Delphi technique
❑ Function point / testing point analysis
❑ Percentage distribution
❑ Experience-based testing estimation technique
PERT (Program Evaluation and Review Technique) Software Testing Estimation Technique
• The PERT software testing estimation technique is based on statistical methods: each testing task is broken down into sub-tasks, and three estimates are made for each sub-task.
• The formula used by this technique is:

• Test Estimate = (O + (4 × M) + P)/6


• Where,
• O = Optimistic estimate (best-case scenario in which nothing goes wrong and all conditions are optimal).
• M = Most likely estimate (most likely duration; some problems may occur, but most things will go right).
• P = Pessimistic estimate (worst-case scenario in which everything goes wrong).
• The standard deviation for the technique is calculated as:

• Standard Deviation (SD) = (P − O)/6


PERT Example

Estimate          Duration      Conditions
Optimistic (O)    30 minutes    clear weather, clear roads, no other drivers on any road
Most Likely (M)   60 minutes    clear weather, clear roads, normal volume of drivers on some of the roads
Pessimistic (P)   120 minutes   thunderstorm with rain, road blocked from multiple vehicle accidents, the highest volume of drivers

Staying with this example, suppose you are asked, "How long does it take you to drive to work?" Have you ever replied, "Well, if it is a weekday at 8 am, it usually takes me 30 minutes longer than on a sunny weekend afternoon"? You are giving different estimates for the same activity, reflecting different situations. With PERT analysis, the estimates go from guesses to mathematically grounded estimates.
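
A minimal sketch (not from the slides) of how the PERT estimate and standard deviation can be computed in Python, using the O/M/P values from the driving example above:

def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the PERT test estimate and its standard deviation."""
    estimate = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return estimate, std_dev

# Driving-to-work example from the slide (values in minutes)
estimate, sd = pert_estimate(30, 60, 120)
print(f"Estimate: {estimate:.1f} minutes")          # 65.0 minutes
print(f"Standard deviation: {sd:.1f} minutes")      # 15.0 minutes

So the mathematically grounded answer would be 65 ± 15 minutes rather than a single guess.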
Practice

Use-Case Point Method


What is a Use Case?


Online shopping system
UCP Description


UCP Counting Process
1. Calculate Unadjusted UCPs


Unadjusted Use Case Weight


Use-Case Complexity Number of Transactions Use-Case Weight
Simple ≤3 5
Average 4 to 7 10
Complex >7 15

Use Case Complexity Use-Case Weight No of Use Cases Product

Simple 5 NSUC 5*NSUC

Average 10 NAUC 10*NAUC

Complex 15 NCUC 15*NCUC

Unadjusted Use Case Weights (UUCW) 5*NSUC+10*NAUC+15*NCUC



Determine Unadjusted Actor Weight


Actor Complexity Example Actor Weight


Simple A system with a defined API 1
Average A system interacting through a protocol 2
Complex A user interacting through a GUI 3

Actor Complexity Actor Weight No of Actors product

Simple 1 NSA 1*NSA

Average 2 NAA 2*NAA

Complex 3 NCA 3*NCA

Unadjusted Actor Weight (UAW) 1*NSA+2*NAA+3*NCA



Calculate Unadjusted Use Case Points (UUCP)

UUCP = UUCW + UAW
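
A minimal sketch (not from the slides) that combines the two tables above; the counts passed in are hypothetical, and the parameters correspond to NSUC/NAUC/NCUC and NSA/NAA/NCA:

def unadjusted_ucp(nsuc, nauc, ncuc, nsa, naa, nca):
    """UUCP = UUCW + UAW, using the weights from the tables above."""
    uucw = 5 * nsuc + 10 * nauc + 15 * ncuc   # Unadjusted Use Case Weight
    uaw = 1 * nsa + 2 * naa + 3 * nca         # Unadjusted Actor Weight
    return uucw + uaw

# Hypothetical online shopping system: 5 simple, 3 average, 2 complex use cases;
# 2 simple, 1 average, 1 complex actors
print(unadjusted_ucp(5, 3, 2, 2, 1, 1))       # UUCW = 85, UAW = 7, UUCP = 92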


(2) Adjust for Technical Complexity

Factor   Description                                           Weight
T1       Distributed System                                    2.0
T2       Response time or throughput performance objectives    1.0
T3       End user efficiency                                   1.0
T4       Complex internal processing                           1.0
T5       Code must be reusable                                 1.0
T6       Easy to install                                       0.5
T7       Easy to use                                           0.5
T8       Portable                                              2.0
T9       Easy to change                                        1.0
T10      Concurrent                                            1.0
T11      Includes special security objectives                  1.0
T12      Provides direct access for third parties              1.0
T13      Special user training facilities are required         1.0

Factor   Description                                           Weight (W)   Rated Value (0 to 5) (RV)   Impact (I = W × RV)
T1       Distributed System                                    2.0
T2       Response time or throughput performance objectives    1.0
T3       End user efficiency                                   1.0
T4       Complex internal processing                           1.0
T5       Code must be reusable                                 1.0
T6       Easy to install                                       0.5
T7       Easy to use                                           0.5
T8       Portable                                              2.0
T9       Easy to change                                        1.0
T10      Concurrent                                            1.0
T11      Includes special security objectives                  1.0
T12      Provides direct access for third parties              1.0
T13      Special user training facilities are required         1.0
Total Technical Factor (TFactor)

➢ Step 5
Calculate the Technical Complexity Factor (TCF) as:
TCF = 0.6 + (0.01 × TFactor)
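
A minimal sketch (not from the slides) of the TCF calculation; the ratings are hypothetical, while the weights come from the table above:

TECHNICAL_WEIGHTS = {
    "T1": 2.0, "T2": 1.0, "T3": 1.0, "T4": 1.0, "T5": 1.0, "T6": 0.5, "T7": 0.5,
    "T8": 2.0, "T9": 1.0, "T10": 1.0, "T11": 1.0, "T12": 1.0, "T13": 1.0,
}

def technical_complexity_factor(ratings):
    """ratings maps each factor T1..T13 to a rated value between 0 and 5."""
    tfactor = sum(weight * ratings[factor] for factor, weight in TECHNICAL_WEIGHTS.items())
    return 0.6 + 0.01 * tfactor

# Hypothetical ratings: every technical factor rated 3, so TFactor = 14.0 × 3 = 42
print(round(technical_complexity_factor({factor: 3 for factor in TECHNICAL_WEIGHTS}), 2))   # 1.02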
(3) Adjust for Environmental Complexity

Factor   Description                                    Weight
F1       Familiar with the project model that is used   1.5
F2       Application experience                         0.5
F3       Object-oriented experience                     1.0
F4       Lead analyst capability                        0.5
F5       Motivation                                     1.0
F6       Stable requirements                            2.0
F7       Part-time staff                                -1.0
F8       Difficult programming language                 -1.0

Factor   Description                                    Weight (W)   Rated Value (0 to 5) (RV)   Impact (I = W × RV)
F1       Familiar with the project model that is used   1.5
F2       Application experience                         0.5
F3       Object-oriented experience                     1.0
F4       Lead analyst capability                        0.5
F5       Motivation                                     1.0
F6       Stable requirements                            2.0
F7       Part-time staff                                -1.0
F8       Difficult programming language                 -1.0
Total Environment Factor (EFactor)

Step 5
Calculate the Environmental Factor (EF) as:
EF = 1.4 + (-0.03 × EFactor)
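
A minimal sketch (not from the slides) of the EF calculation; the ratings are hypothetical, while the weights come from the table above:

ENVIRONMENTAL_WEIGHTS = {
    "F1": 1.5, "F2": 0.5, "F3": 1.0, "F4": 0.5,
    "F5": 1.0, "F6": 2.0, "F7": -1.0, "F8": -1.0,
}

def environmental_factor(ratings):
    """ratings maps each factor F1..F8 to a rated value between 0 and 5."""
    efactor = sum(weight * ratings[factor] for factor, weight in ENVIRONMENTAL_WEIGHTS.items())
    return 1.4 + (-0.03) * efactor

# Hypothetical ratings: every environmental factor rated 3, so EFactor = 4.5 × 3 = 13.5
print(round(environmental_factor({factor: 3 for factor in ENVIRONMENTAL_WEIGHTS}), 3))   # 0.995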
(4) Calculate Adjusted Use Case Points

UCP = UUCP × TCF × EF
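
Putting the previous sketches together (all values below are the hypothetical ones computed above, not figures from the slides):

uucp = 92      # Unadjusted Use Case Points (UUCW + UAW)
tcf = 1.02     # Technical Complexity Factor
ef = 0.995     # Environmental Factor

ucp = uucp * tcf * ef
print(round(ucp, 1))   # adjusted UCP ≈ 93.4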




Work Breakdown Structure


Wideband Delphi Technique
• In the Wideband Delphi method, the WBS is distributed to a team of 3-7 members, who re-estimate the tasks. The final estimate is obtained by summarizing the individual estimates and reaching a team consensus.
• This method relies on experience rather than on a statistical formula. It was popularized by Barry Boehm and emphasizes group iteration to reach a consensus, with the team considering different aspects of the problem while estimating the test effort.
Function Point / Testing Point Analysis






Function Point Analysis – Example

Information Domain Value          Count   Weighting Factor (Simple / Average / Complex)   Count × Weight
External Inputs (EIs)             2       3 / 4 / 5                                       6
External Outputs (EOs)            2       2 / 4 / 6                                       4
External Inquiries (EQs)          2       3 / 5 / 8                                       6
Internal Logical Files (ILFs)     1       7 / 8 / 9                                       7
External Interface Files (EIFs)   3       5 / 7 / 10                                      15
Total count                                                                               38
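
A minimal sketch (not from the slides) reproducing the weighted total in the table; each information domain value here is assumed to take its simple weighting factor, which matches the row products and the total of 38:

# (count, chosen weighting factor) pairs taken from the example table
domain_values = {
    "External Inputs (EIs)":           (2, 3),
    "External Outputs (EOs)":          (2, 2),
    "External Inquiries (EQs)":        (2, 3),
    "Internal Logical Files (ILFs)":   (1, 7),
    "External Interface Files (EIFs)": (3, 5),
}

total_count = sum(count * weight for count, weight in domain_values.values())
print(total_count)   # 38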
Function Point Analysis – Example

CAF = Σ Fi   (i = 1 to 14)
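
A minimal sketch (not from the slides) of the CAF summation with hypothetical ratings; the last line applies the conventional function point adjustment, which is an assumption since that step is not shown on the slide:

# Hypothetical ratings (0-5) for the 14 complexity adjustment factors F1..F14
f_ratings = [3, 4, 2, 5, 3, 3, 4, 2, 3, 3, 4, 2, 3, 3]

caf = sum(f_ratings)   # CAF = Σ Fi, i = 1 to 14
print(caf)             # 44

# Conventional function point analysis (assumption, not shown on the slide):
# FP = total count × (0.65 + 0.01 × CAF)
print(round(38 * (0.65 + 0.01 * caf), 2))   # 38 × 1.09 = 41.42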


Experience-based Testing Estimation Technique
1. This technique is based on analogies and expert judgment.
2. It assumes that you have already tested similar applications in previous projects and collected metrics from those projects.
3. Combine the metrics collected from previous tests with input from subject matter experts who know the application (as well as testing) very well to arrive at the testing effort.
Source & Reference
