ST Unit-4

software testing


PERFORMANCE TESTING

https://www.youtube.com/watch?v=yJLtJONIG10

FACTORS GOVERNING PERFORMANCE TESTING
• Performance is a basic requirement for any product and is fast becoming a subject of great interest in the testing community.
• There are many factors that govern performance testing.
• The ability of the system or product to handle multiple transactions is determined by a factor called throughput.
• Throughput represents the number of requests/business transactions processed by the product in a specified time duration.

Figure: Throughput of a system at various load conditions. Throughput rises with user load up to a saturation point, beyond which it flattens or degrades.
• Measuring “response time” becomes an important activity of performance testing.
• Response time can be defined as the delay between the point of request and the first response from the product.
• Hence, it is important to know what delay the product causes and what delay the environment causes.
• This brings up yet another factor for performance: latency.
Figure: Example of latencies at various levels (network and application). A client, a web server, and a database server are connected over the network; N1–N4 are the network delays on each hop (request and response) and A1–A3 are the application-level delays at each tier.
• A web application provides a service by talking to a web server and a database server connected over the network.
• Latency is the delay caused by the application, the operating system, and the environment; each is calculated separately.
• Latency and response time can be calculated as:

Network latency = N1 + N2 + N3 + N4
Product latency = A1 + A2 + A3
Actual response time = Network latency + Product latency
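The latency arithmetic above can be sketched in a few lines. The hop and application delay values here are made-up milliseconds for illustration, not numbers from the source:

```python
# N1..N4 are the network hops and A1..A3 the application-level delays
# from the figure; the values are illustrative assumptions.
network_hops = {"N1": 5.0, "N2": 3.0, "N3": 3.0, "N4": 5.0}
app_delays = {"A1": 20.0, "A2": 40.0, "A3": 20.0}

network_latency = sum(network_hops.values())   # N1 + N2 + N3 + N4
product_latency = sum(app_delays.values())     # A1 + A2 + A3
actual_response_time = network_latency + product_latency
```

With these values, the network contributes 16 ms, the product 80 ms, and the actual response time is their sum.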
• The next factor that governs performance testing is tuning.
• Tuning is a procedure by which product performance is enhanced by setting different values for the parameters (variables) of the product, the operating system, and other components.
• It is very important to compare the throughput and response time of the product with those of competitive products.
• This type of performance testing, wherein competitive products are compared, is called benchmarking.
• No two products are the same in features, cost, and functionality.
• The exercise to find out what resources and configurations are needed is called capacity planning.
• The purpose of the capacity planning exercise is to help customers plan for the set of hardware and software resources prior to installation or upgrade of the product.
METHODOLOGY FOR PERFORMANCE TESTING

• Performance testing is complex and expensive due to its large resource requirements and the time it takes.
• Hence, it requires careful planning and a robust methodology.
• Performance testing can also be ambiguous, because the people performing the various roles have different expectations.
• Steps involved in the methodology of performance testing:
 Collecting requirements
 Writing test cases
 Automating performance test cases
 Executing performance test cases
 Analyzing performance test results
 Performance tuning
 Performance benchmarking
 Recommending the right configuration for the customers (capacity planning)
1. Collecting Requirements:
 Collecting requirements is the first step in planning performance testing.
 Typically, functionality testing has a definite set of inputs and outputs, with a clear definition of the expected result.
 In contrast, performance testing generally needs elaborate documentation and environment setup, and the expected results may not be well known in advance.
• Collecting requirements for performance testing presents some unique challenges.
 First, a performance testing requirement should be testable; not all features/functionality can be performance tested.
 Second, a performance testing requirement needs to clearly state what factors need to be measured and improved.
 Last, a performance testing requirement needs to be associated with the actual number or percentage of improvement that is desired.
• For example:
 If a business transaction, say an ATM money withdrawal, should be completed within two minutes, the requirement needs to document the actual response time expected.

• There are several sources for deriving performance requirements.
• Performance compared to the previous release of the same product: a performance requirement can be something like “an ATM withdrawal transaction will be faster than the previous release by 10%.”

• Performance compared to competitive product(s): a performance requirement can be documented as “ATM withdrawal will be as fast as or faster than competitive product XYZ.”
• Performance compared to absolute numbers derived from actual need: a requirement can be documented such as “the ATM machine should be capable of handling 1000 transactions per day, with each transaction taking no more than a minute.”

• Performance numbers derived from architecture and design: the architect or designer of a product would normally be in a much better position than anyone else to say what performance is expected of the product.
• There are two types of requirements that performance testing focuses on: generic requirements and specific requirements.
• Generic requirements are those that are common across all products in the product domain area. All products in that area are expected to meet those performance expectations.
• Specific requirements are those that depend on the implementation of a particular product and differ from one product to another in a given domain.
Example of Performance Test Requirements

Transaction | Expected response time | Loading pattern/throughput | Machine configuration
ATM cash withdrawal | 2 sec | Up to 10,000 simultaneous accesses by users | Pentium IV/512 MB RAM/broadband network
ATM cash withdrawal | 40 sec | Up to 10,000 simultaneous accesses by users | Pentium IV/512 MB RAM/dial-up network
ATM cash withdrawal | 4 sec | More than 10,000 but below 20,000 simultaneous accesses by users | Pentium IV/512 MB RAM/broadband network
2. Writing Test Cases:
• The next step involved in performance testing is writing test cases.
• A test case for performance testing should have the following details defined.
 1. List of operations or business transactions to be tested.
 2. Steps for executing those operations/transactions.
 3. List of product and OS parameters that impact performance testing, and their values.
 4. Loading pattern.
 5. Resources and their configuration (network, hardware, software configurations).
 6. The expected results (that is, expected response time, throughput, latency).
 7. The product versions/competitive products to be compared with, and related information such as their corresponding fields.
• Performance test cases are repetitive in nature.
• These test cases are normally executed repeatedly for different values of parameters, different load conditions, different configurations, and so on.
• While testing the product for different load patterns, it is important to increase the load or scalability gradually to avoid unnecessary effort in case of failures.
• For example:
 If an ATM withdrawal fails for ten concurrent operations, there is no point in trying it for 10,000 operations.

• Performance testing is a laborious process involving time and effort.
3. Automating Performance Test Cases:
 Automation is an important step in the methodology for performance testing.
• Characteristics:
 1. Performance testing is repetitive.
 2. Performance test cases cannot be effective without automation; in most cases it is, in fact, almost impossible to do performance testing without automation.
 3. The results of performance testing need to be accurate, and manually calculating the response time, throughput, and so on can introduce inaccuracy.
 4. There are far too many permutations and combinations of those factors, and it will be difficult to remember and use them all if the tests are done manually.
 5. The analysis of performance results and failures needs to take into account related information such as resource utilization, log files, trace files, and so on that are collected at regular intervals.
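A minimal sketch of what such automation measures: a harness that runs a transaction repeatedly and reports response times and throughput. The `operation` here is a hypothetical stand-in for one business transaction, not anything from the source:

```python
import time
import statistics

def run_performance_test(operation, iterations=100):
    """Run `operation` repeatedly; report response times and throughput."""
    times = []
    start = time.perf_counter()
    for _ in range(iterations):
        t0 = time.perf_counter()
        operation()                     # one business transaction
        times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "mean_response_time": statistics.mean(times),
        "max_response_time": max(times),
        "throughput_per_sec": iterations / elapsed,
    }

# Example: time a trivial stand-in transaction.
result = run_performance_test(lambda: sum(range(1000)), iterations=50)
```

A real harness would also capture logs, traces, and resource utilization alongside these numbers, as the slides note.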
4. Executing Performance Test Cases:
• Performance testing generally involves less effort for execution but more effort for planning, data collection, and analysis.
• The most effort-consuming aspect of execution is usually data collection.
• Data corresponding to the following points needs to be collected while executing performance tests.
 1. Start and end time of test case execution.
 2. Log and trace/audit files of the product and operating system (for future debugging and repeatability purposes).
 3. Utilization of resources (CPU, memory, disk, network, and so on).
 4. Configuration of all environment factors (hardware, software, and other components).
 5. The response time, throughput, latency, and so on, as specified in the test case documentation, at regular intervals.
• Another aspect involved in performance test execution is scenario testing.
• A set of transactions/operations that are usually performed by the user forms the scenario for performance testing.
• For example:
 Not all users withdraw cash from an ATM; some of them query for the account balance, some make deposits, and so on.
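Such a scenario mix can be simulated by drawing operations with weights. The operation names and weights below are assumptions for illustration, not figures from the source:

```python
import random

# Hypothetical scenario mix: how often simulated users perform each
# ATM operation (the weights are assumed, not from the source).
scenario_mix = {"withdraw_cash": 0.5, "query_balance": 0.3, "deposit": 0.2}

def pick_operation(rng):
    """Pick the next operation for a simulated user per the scenario mix."""
    ops, weights = zip(*scenario_mix.items())
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so test runs are repeatable
workload = [pick_operation(rng) for _ in range(1000)]
```

Feeding this workload to the product approximates real usage better than running a single transaction type at full load.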
Figure: (a) Response time (time taken) plotted against the number of concurrent transactions; (b) throughput (transactions processed/hour) plotted against the number of users; (c) throughput and resource utilization over time (memory utilization, network packets, CPU utilization, disk reads/writes, throughput).
5. Analyzing the Performance Test Results:
• Analyzing the performance test results requires multi-dimensional thinking.
• This is the most complex part of performance testing, where product knowledge, analytical thinking, and a statistical background are all absolutely essential.
• Before analyzing the data, some calculations and organization of the data are required.
– Calculating the mean of the performance test result data.
– Calculating the standard deviation.
– Removing the noise (noise removal), then re-plotting and re-calculating the mean and standard deviation.
– In terms of caching and other technologies implemented in the product, the data coming from the cache needs to be differentiated from the data that gets processed by the product and presented.
• Judging performance does not depend only on the average/mean of the performance numbers.
• It also depends on how consistently the product delivers those numbers.
• Standard deviation can help here.
• The process of removing some unwanted values in a set is called noise removal.
• When some values are removed from the set, the mean and standard deviation need to be re-calculated.
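The mean/standard-deviation/noise-removal steps above can be sketched with the standard library. The cutoff of two standard deviations is an assumed choice, not one prescribed by the source:

```python
import statistics

def summarize(samples):
    """Mean and sample standard deviation of response-time data."""
    return statistics.mean(samples), statistics.stdev(samples)

def remove_noise(samples, factor=2.0):
    """Drop samples farther than `factor` standard deviations from the
    mean; the caller then re-computes mean/stdev on what remains."""
    mean, sd = summarize(samples)
    return [s for s in samples if abs(s - mean) <= factor * sd]

# Response times in ms; 950 is an outlier ("noise") from, say, a GC pause.
raw = [100, 102, 98, 101, 99, 950]
cleaned = remove_noise(raw)
mean_after, sd_after = summarize(cleaned)
```

After noise removal the mean drops from about 242 ms to 100 ms, and the small standard deviation shows the product's performance is consistent.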
• The majority of client-server, internet, and database applications store data in a local high-speed buffer when a query is made.
• This enables them to present the data quickly when the same request is made again. This is called caching.
• The performance data needs to be differentiated according to where the result is coming from: the server or the cache.
• For example:
 Assume that a cache hit produces a response time of 1 microsecond, a server access takes 1000 microseconds, and 90% of the time a request is satisfied by the cache. Then the average response time is:
 (0.9)*1 + (0.1)*1000 = 100.9 µs.
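This hit-ratio weighted average can be computed directly. Note the cache is taken as the fast path (1 µs) and the server as the slow path (1000 µs), which is the physically sensible assignment of the two figures:

```python
def average_response_time(hit_ratio, cache_time, server_time):
    """Expected response time given the cache hit ratio (times in µs)."""
    return hit_ratio * cache_time + (1 - hit_ratio) * server_time

# 90% of requests hit the 1 µs cache; the rest go to the 1000 µs server.
avg = average_response_time(0.9, 1.0, 1000.0)
```

Even a 90% hit ratio leaves the average dominated by the slow path, which is why cache and non-cache data must be analyzed separately.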
• Once the data sets are organized, the analysis of performance data is carried out to conclude the following:
– Whether the performance of the product is consistent when tests are executed multiple times.
– What performance can be expected for what type of configuration (both hardware and software) and resources.
– What parameters impact performance and how they can be used to derive better performance.
– What is the effect of scenarios involving several mixes of operations on the performance factors.
– What is the effect of product technologies such as caching on performance improvements.
– Up to what load the performance numbers are acceptable, and whether the performance of the product meets the criteria of “graceful degradation.”
– What is the optimum throughput/response time of the product for a set of factors such as load, resources, and parameters.
– What performance requirements are met, and how the performance looks when compared to the previous version, the expectations set earlier, or the competition.
– Sometimes a high-end configuration may not be available for performance testing. The performance numbers to be expected from a high-end configuration should then be extrapolated or predicted.
6. Performance Tuning:
• Analyzing performance data helps in narrowing down the list of parameters that really impact the performance results, and in improving product performance.
• Once the parameters are narrowed down to a few, the performance test cases are repeated for different values of those parameters to further analyze their effect in getting better performance.
• This performance-tuning exercise needs a high degree of skill in identifying the list of parameters and their contribution to performance.
• There are two steps involved in getting the optimum mileage from performance tuning:
 Tuning the product parameters, and
 Tuning the operating system parameters.
• The product parameters, in isolation as well as in combination, have an impact on product performance. Hence, it is important to:
 1. Repeat the performance tests for different values of each parameter that impacts performance.
 2. Sometimes, when a particular parameter value is changed, it requires changes to other parameters. Repeat the performance tests for a group of parameters and their different values.
 3. Repeat the performance tests for the default values of all parameters (called factory setting tests).
 4. Repeat the performance tests for low and high values of each parameter, and for combinations.
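Enumerating the runs for such a parameter sweep is mechanical. The parameter names and candidate values below are invented for illustration:

```python
from itertools import product

# Hypothetical tunables with a low and a high value each (assumed,
# not from the source); each combination gets its own test run.
parameters = {
    "db_cache_pages": [128, 1024],
    "worker_threads": [4, 16],
    "tcp_timeout_s": [30, 120],
}

def parameter_combinations(params):
    """Yield one dict per combination of parameter values to test."""
    names = list(params)
    for values in product(*(params[n] for n in names)):
        yield dict(zip(names, values))

runs = list(parameter_combinations(parameters))
```

Three parameters with two values each yields eight runs; this is why the combinations are usually limited to low/high values per parameter.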

• Performance tuning provides better results only for a particular configuration and for certain transactions.
• Tuning the OS parameters is another step towards getting better performance. There are various sets of parameters provided by the operating system, under different categories.
• Those values can be changed using the appropriate tools that come along with the operating system.
• For example:
 The registry in MS Windows can be edited using regedit.exe.
• Parameters in the OS are grouped into different categories:
– File system parameters (for example, the number of open files permitted).
– Disk management parameters (for example, simultaneous disk reads/writes).
– Memory management parameters (for example, virtual memory page size and number of pages).
– Processor management parameters (for example, enabling/disabling processors in a multiprocessor environment).
– Network parameters (for example, setting the TCP/IP timeout).
• There is an important point that needs to be remembered when tuning OS parameters for improving product performance.
• The machine on which a parameter is tuned may have multiple products and applications running, all of which are affected by the change.
Figure: Effect of tuning. (a) Throughput (normal vs expected high) plotted against the number of CPUs; (b) response time (normal vs expected high) plotted against memory size (16 MB to 1 GB).
• The results of performance tuning are normally published in the form of a guide called the “performance tuning guide.”
• The guide explains in detail the effect of each product and OS parameter on performance.
7. Performance Benchmarking:
• Performance benchmarking is about comparing the performance of product transactions with that of the competitors.
• No two products can have the same architecture, design, functionality, and code.
• The customers and deployments can also be different.
• Hence, it will be very difficult to compare two products on those aspects.
• The steps involved in performance benchmarking are the following:
 1. Identifying the transactions/scenarios and the test configuration.
 2. Comparing the performance of the different products.
 3. Tuning the parameters of the products being compared fairly, to deliver the best performance.
 4. Publishing the results of performance benchmarking.
• From the point of view of a specific product, there can be three outcomes from performance benchmarking.
 The first outcome can be positive, where a set of transactions/scenarios outperforms the competition.
 The second outcome can be neutral, where a set of transactions is comparable with that of the competition.
 The third outcome can be negative, where a set of transactions under-performs compared to that of the competition.
8. Capacity Planning:
• In capacity planning, the performance requirements and performance results are taken as inputs, and the configuration needed to satisfy that set of requirements is derived.
• Capacity planning necessitates a clear understanding of the resource requirements for transactions/scenarios.
• Some transactions of the product, associated with certain load conditions, could be disk intensive, some could be CPU intensive, some could be network intensive, and some could be memory intensive.
• The capacity planning corresponding to short-, medium-, and long-term requirements is called, respectively:
 Minimum required configuration;
 Typical configuration; and
 Special configuration.
• A minimum required configuration denotes that, with anything less than this configuration, the product may not even work.
• A typical configuration denotes that the product will work fine, meeting the performance requirements of the required load pattern, and can also handle a slight increase in the load pattern.
• A special configuration denotes that capacity planning was done considering all future requirements.
• There are two techniques that play a major role in capacity planning: load balancing and high availability.
• Load balancing ensures that the multiple machines available are used equally to service the transactions.
• Machine clusters are used to ensure availability.
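One simple load-balancing policy is round-robin, which spreads requests evenly across the machines. This is a minimal sketch (the node names are made up); real balancers also weigh machines by capacity and route around failures:

```python
import itertools

class RoundRobinBalancer:
    """Spread incoming requests evenly across the available machines."""
    def __init__(self, machines):
        self._cycle = itertools.cycle(machines)
        self.assigned = {m: 0 for m in machines}  # per-machine counters

    def route(self, request):
        """Pick the next machine in rotation for this request."""
        machine = next(self._cycle)
        self.assigned[machine] += 1
        return machine

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for i in range(9):
    balancer.route(f"txn-{i}")
```

After nine requests each node has served exactly three, which is the "used equally" property the slide describes.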
• When doing capacity planning, both load balancing and availability factors are included to prescribe the desired configuration.
TOOLS FOR PERFORMANCE TESTING
• There are two types of tools that can be used for performance testing: functional performance tools and load testing tools.
• Functional performance tools help in recording and playing back transactions and obtaining performance numbers.
• Load testing tools simulate load conditions for performance testing.
• Some popular performance tools:
 Functional performance tools
 1. WinRunner from Mercury
 2. QA Partner from Compuware
 3. SilkTest from Segue
 Load testing tools
 1. LoadRunner from Mercury
 2. QALoad from Compuware
 3. SilkPerformer from Segue

• Performance and load tools can only help in getting performance numbers.
• “Windows Task Manager” and “top” in Linux are examples of tools that help in collecting resource utilization.
• Network performance monitoring tools are available with almost all operating systems today to collect network data.
PROCESS FOR PERFORMANCE TESTING
• Performance testing follows the same process as any other testing type.
• The only difference is in getting more details and analysis.
• A major challenge involved in performance testing is getting the right process so that the effort can be minimized.
Process for performance testing:
 1. Obtain measurable, testable requirements.
 2. Create a performance test plan.
 3. Design test cases.
 4. Automate test cases.
 5. Evaluate entry criteria.
 6. Perform and analyze performance test cases.
 7. Evaluate exit criteria.
• A majority of performance issues require rework or changes in architecture and design.
• Hence, it is important to collect the requirements for performance early in the life cycle and address them, because changes to architecture and design late in the cycle are very expensive.
• Resource requirements:
 All additional resources that are specifically needed for performance testing need to be planned and obtained.
 Normally these resources are obtained, used for performance testing, and released after performance testing is over.
• Test bed (simulated and real-life), test-lab setup:
 The test lab, with all required equipment and software configuration, has to be set up prior to execution.
 Setting up both the simulated and real-life environments is time consuming, and any mistake in the test bed setup may mean that the complete performance tests have to be repeated.
• Responsibilities:
 Performance defects may cause changes to architecture, design, and code.
 Hence, a matrix containing responsibilities must be worked out as part of the performance test plan and communicated across all teams.
• Setting up product traces, audits (external and internal):
 Performance test results need to be associated with traces and audit trails to analyze the results and defects.
 What traces and audit trails have to be collected is planned in advance and documented as part of the test plan.
• Entry and exit criteria:
 Performance tests require a stable product, due to their complexity and the accuracy that is needed.
 Changes to the product affect performance numbers and may mean that the tests have to be repeated.
 The set of entry criteria to be met is defined well in advance and documented as part of the performance test plan.
 A set of exit criteria is defined to conclude the results of the performance tests.
• Designing and automating the test cases form the next step in the performance test process.
• Automation deserves a special mention because it is almost impossible to do performance testing without it.
• Entry and exit criteria play a major role in the process of performance test execution.
• Hence, keeping a strong process for performance testing provides a high return on investment.
CHALLENGES
• Performance testing is not a very well understood topic in the testing community.
• There are several interpretations of performance testing.
• Some organizations separate performance testing and load testing and conduct them in different phases of testing.
• The availability of skills is a major problem facing performance testing.
• Performance testing requires a large amount of resources such as hardware, software, effort, time, tools, and people.
• Performance test results need to reflect real-life environments and expectations.
• Selecting the right tool for performance testing is another challenge.
• Interfacing with different teams, including a set of customers, is yet another challenge in performance testing.
• Lack of seriousness about performance tests on the part of management and the development team is another challenge.
• Once all functionalities are working fine in a product, it is often assumed that the product is ready to ship.
• A high degree of management commitment and a directive to fix performance defects before product release are needed for the successful execution of performance tests.
REGRESSION TESTING


 What is regression testing?
 Types of regression testing.
 When do we do regression testing?
 How to do regression testing.
 Best practices in regression testing.
What is Regression Testing
 Software undergoes constant changes. Such changes are necessitated by defects to be fixed, enhancements to be made to existing functionality, or new functionality to be added. Anytime such changes are made, it is important to ensure that
 1. The changes or additions work as designed; and
 2. The changes or additions do not break something that is already working and should continue to work.
Regression testing is designed to address the above two purposes. Let us illustrate this with a simple example.
 Assume that in a given release of a product, there were three defects: D1, D2, and D3. When these defects are reported, presumably the development team will fix them and the testing team will perform tests to ensure that the defects are indeed fixed. When customers start using the product, they may encounter new defects: D4 and D5. Again, the development and testing teams will fix and test these new defect fixes. But in the process of fixing D4 and D5, as an unintended side-effect, D1 may resurface. Thus, the testing team should ensure not only that the fixes take care of the defects they are supposed to fix, but also that they do not break anything else that was already working.
• Regression testing enables the team to meet this objective.
• Regression testing is important in today’s context, since software is released very often to keep up with the competition and increasing customer awareness.
• Regression testing follows a selective re-testing technique.
• Whenever defect fixes are done, the test team selects a set of test cases that need to be run to verify the defect fixes.
• An impact analysis is done to find out what areas may be impacted by those defect fixes.
• Based on the impact analysis, some more test cases are selected to take care of the impacted areas.
• Since this testing technique focuses on the reuse of existing test cases that have already been executed, the technique is called selective re-testing.
TYPES OF REGRESSION TESTING
 Before going into the types of regression testing, let us understand what a “build” means. When internal or external test teams or customers begin using a product, they report defects.
 These defects are analyzed by each developer, who makes individual defect fixes. The developers then do appropriate unit testing and check the defect fixes into a Configuration Management (CM) system.
 The source code for the complete product is then compiled, and these defect fixes, along with the existing features, get consolidated into the build.
 The build thus becomes an aggregation of all the defect fixes and features that are present in the product.
There are two types of regression testing in practice:
 1. Regular regression testing.
 2. Final regression testing.

 Regular regression testing is done between test cycles to ensure that the defect fixes that are done, and the functionality that was working in the earlier test cycles, continue to work.
 Regular regression testing can use more than one product build for the test cases to be executed.
Final regression testing:
• Done to validate the final build before release.
• The CM engineer delivers the final build with the media and other contents exactly as it would go to the customer.
• The final regression test cycle is conducted for a specific duration, mutually agreed upon between the development and testing teams. This is called the “cook time” for regression testing.
• Cook time is necessary to keep testing the product for a certain duration, since some defects can be unearthed only after the product has been used for a certain time.
• The product is continuously exercised for the complete duration of the cook time to ensure that such time-bound defects are identified.
• The final regression test cycle is more critical than any other type or phase of testing, as it is the only testing that ensures that the same build of the product that was tested reaches the customer.
WHEN TO DO REGRESSION TESTING
• Whenever changes happen to software, regression testing is done to ensure that they do not adversely affect the existing functionality.
• Regular regression testing can use multiple builds for the test cases to be executed.
• However, an unchanged build is highly recommended for final regression testing.
• The test cases that failed due to the defects should be included in future regression testing.
It is necessary to perform regression testing when
 1. A reasonable amount of initial testing has already been carried out.
 2. A good number of defects have been fixed.
 3. Defect fixes that can produce side-effects are taken care of.

 Regression testing may also be performed periodically, as a pro-active measure.
• A defect tracking system is used to communicate the status of defect fixes amongst the various stakeholders.
• When a developer fixes a defect, it is sent back to the test engineer for verification using the defect tracking system.
• The test engineer needs to take the appropriate action of closing the defect if it is fixed, or reopening it if it has not been fixed properly.
• What may get missed in this process are the side-effects, where a fix would have fixed the particular defect but some functionality that was working before has stopped working.
• Regression testing needs to be done when a set of defect fixes is provided.
• To ensure that there are no side-effects, some more test cases have to be selected and the defect fixes verified in the regression test cycle.
• Thus, before a tester can close a defect as fixed, it is important to ensure that the appropriate regression tests are run and the fix produces no side-effects.
• It is always a good practice to initiate regression testing to verify the defect fixes.
• Otherwise, when a side-effect or loss of functionality is observed at a later point of time through testing, it becomes very difficult to identify which defect fix caused it.
• From the above discussion it is clear that regression testing is both a planned test activity and a need-based activity, and it is done between builds and test cycles.
• Hence, regression testing is applicable to all phases in a software development life cycle (SDLC) and also to the component, system, and acceptance test phases.
Figure 8.2 Regression testing: types, when, why, and what.
 Types: regular regression; final regression.
 When: when a set of defect fixes arrives after formal testing for completed areas; performed in all test phases.
 Why: defects creep in due to changes; defect fixes may cause existing functionality to fail.
 What: selective re-testing to ensure that defect fixes work and have no side-effects.
HOW TO DO REGRESSION TESTING
• A well-defined methodology for regression testing is very important, as this is among the final types of testing normally performed just before release.
• If regression testing is not done right, it will allow defects to seep through and may result in customers facing serious issues not found by the test teams.
• There are several methodologies for regression testing in use by different organizations.
• The objective of this section is to explain a methodology that encompasses the majority of them.
• The methodology here is made up of the following steps:
 1. Performing an initial “smoke” or “sanity” test.
 2. Understanding the criteria for selecting the test cases.
 3. Classifying the test cases into different priorities.
 4. A methodology for selecting test cases.
 5. Resetting the test cases for test execution.
 6. Concluding the results of a regression cycle.
PERFORMING AN INITIAL “SMOKE” OR “SANITY” TEST

• Whenever changes are made to a product, it should first be made sure that nothing basic breaks.
• For example, if you are building a database, then any build of the database should be able to start up; perform basic operations such as queries, data definition, and data manipulation; and shut down.
• In addition, you may want to ensure that the key interfaces to other products also work properly.
• This has to be done before performing any of the other, more detailed tests on the product.
• Smoke testing consists of
1. Identifying the basic functionality that a product
must satisfy.
2. Designing test cases to ensure that these basic
functionality work and packaging them into a
smoke test suite.
3. Ensuring that every time a product is built, this
suite is run successfully before anything else is
run; and
4. If this suite fails, escalating to the developers to
identify the changes and perhaps change or roll
back the changes to a state where the smoke test
suite succeeds.
91
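The four steps above can be sketched as a small gating script. This is a minimal illustration, not the book's implementation: the check functions are hypothetical stand-ins for real product checks (such as the database start-up, basic query, and shutdown checks mentioned earlier).

```python
# Minimal sketch of a smoke-test gate: basic checks are packaged into a
# suite that must pass before any detailed testing of a build proceeds.
# The check functions are hypothetical stand-ins for real tests.

def check_startup():
    return True   # stand-in: verify the product starts up

def check_basic_query():
    return True   # stand-in: verify a basic operation works

def check_shutdown():
    return True   # stand-in: verify a clean shutdown

SMOKE_SUITE = [check_startup, check_basic_query, check_shutdown]

def run_smoke_suite(suite):
    """Run every smoke check; report the first failure so the build
    can be escalated to the developers (step 4 above)."""
    for check in suite:
        if not check():
            return False, check.__name__   # escalate: build rejected
    return True, None

ok, failed = run_smoke_suite(SMOKE_SUITE)
print("build accepted" if ok else f"build rejected at {failed}")
```

If the suite fails, nothing else is run against that build until the offending change is fixed or rolled back.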
UNDERSTANDING THE CRITERIA FOR
SELECTING THE TEST CASES

• Having performed a smoke test, the product can be
assumed worthy of being subjected to further detailed
tests.
• The question now is what tests should be run to achieve
the dual objective of ensuring that the fixes work and that
they do not cause unintended side-effects.
• There are two approaches to selecting the test cases for a
regression run.
• First, an organization can choose to have a constant set
of regression tests that are run for every build or change.
• In such a case, deciding what tests to run is simple.
92
• But this approach is likely to be sub-optimal
because
1. In order to cover all fixes, the constant set of
tests will encompass all features, and tests which
are not required may be run every time; and
2. A given set of fixes or changes may introduce
problems for which there may not be ready-made
test cases in the constant set.
• A Second approach is to select the test cases
dynamically for each build by making judicious
choices of the test cases.
93
• The selection of test cases for regression
testing requires knowledge of
1. The defect fixes and changes made in the
current build;
2. The ways to test the current changes;
3. The impact that the current changes may have
on other parts of the system; and
4. The ways of testing the other impacted parts.
94
• Some of the criteria to select test cases for regression
testing are as follows.
1. Include test cases that have produced the maximum
defects in the past.
2. Include test cases for a functionality in which a change
has been made.
3. Include test cases in which problems have been reported.
4. Include test cases that test the basic functionality or the
core features of the product which are mandatory
requirements of the customer.
5. Include test cases that test the end-to-end behavior of the
application or the product.
6. Include test cases to test the positive test conditions.
7. Include test cases for areas which are highly visible to the users.
95
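A minimal sketch of criterion-based selection, assuming each test case carries metadata fields that mirror the criteria above; the field names and threshold are invented for illustration.

```python
# Hypothetical sketch: each test case carries metadata mirroring the
# selection criteria above; a test is picked for the regression run if
# it matches any one criterion. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    past_defects: int = 0              # criterion 1
    covers_changed_area: bool = False  # criteria 2 and 3
    core_feature: bool = False         # criterion 4
    end_to_end: bool = False           # criterion 5
    user_visible: bool = False         # criterion 7

def select_for_regression(cases, defect_threshold=3):
    """Pick every test case matching at least one criterion."""
    return [c for c in cases
            if c.past_defects >= defect_threshold
            or c.covers_changed_area
            or c.core_feature
            or c.end_to_end
            or c.user_visible]

cases = [
    TestCase("login_basic", core_feature=True),
    TestCase("report_export", past_defects=5),
    TestCase("legacy_import"),          # matches no criterion: skipped
]
picked = select_for_regression(cases)
print([c.name for c in picked])  # → ['login_basic', 'report_export']
```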
CLASSIFYING TEST CASES
• When the test cases have to be selected
dynamically for each regression run, it would
be worthwhile to plan for regression testing
from the beginning of project, even before the
test cycles start.
• To enable choosing the right tests for a
regression run, the test cases can be classified
into various priorities based on importance and
customer usage.
96
• As an example, we can classify the test cases into three
categories.
 Priority-0
These test cases can be called sanity test cases which
check basic functionality and are run for accepting the build
for further testing.
 Priority-1
Uses the basic and normal setup and these test cases
deliver high project value to both development team and to
customers.
 Priority-2
These test cases deliver moderate project value. They
are executed as part of the testing cycle and selected for
regression testing on a need basis.
97
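As an illustration (the test names are invented), the three priority classes described above can be represented as tagged test cases, grouped so a regression run can pull them by priority.

```python
# Illustrative sketch: test cases tagged with the three priority levels
# described above, grouped for later selection by a regression run.

from collections import defaultdict

# (name, priority) pairs -- names are hypothetical
TESTS = [
    ("startup_sanity", 0),      # Priority-0: sanity, accepts the build
    ("order_checkout", 1),      # Priority-1: high project value
    ("report_formatting", 2),   # Priority-2: moderate value, need basis
]

by_priority = defaultdict(list)
for name, prio in TESTS:
    by_priority[prio].append(name)

print(by_priority[0])  # → ['startup_sanity']
```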
[Figure: distribution of test cases by priority; Priority-2 test cases form the largest share, about 65%.]
98
METHODOLOGY FOR SELECTING TEST CASES

• Once the test cases are classified into different
priorities, the test cases can be selected.
• There could be several right approaches to
regression testing which need to be decided on
“case to case” basis.
99
Case 1:
        If the criticality and impact of the defect fixes are low,
then it is enough that a test engineer selects a few test cases
from the test case database (TCDB) and executes them. These test
cases can fall under any priority (0, 1, or 2).
Case 2:
        If the criticality and the impact of the defect fixes are
medium, then we need to execute all Priority-0 and Priority-1
test cases. If the defect fixes need additional test cases from
Priority-2, then those test cases can also be selected and used
for regression testing. Selecting Priority-2 test cases in this
case is desirable but not necessary.
Case 3:
        If the criticality and impact of the defect fixes are high,
then we need to execute all Priority-0, Priority-1, and a
carefully selected subset of Priority-2 test cases.
100
Figure 8.5: METHODOLOGY FOR THE SELECTION OF TEST CASES

Criticality/Impact:   Low      Medium     High
Test cases to run:    P0       P0, P1     P0, P1, P2
101
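The mapping of Figure 8.5 can be sketched as a simple lookup. The treatment of the "low" case is an assumption here, since Case 1 actually allows a few test cases from any priority.

```python
# Sketch of Figure 8.5 as a lookup: the criticality/impact of the
# defect fixes determines which priority classes are executed.

def priorities_to_run(level):
    """Map fix criticality/impact ('low' | 'medium' | 'high') to the
    test-case priorities selected for the regression run."""
    table = {
        "low":    [0],          # assumption: shown as P0 at minimum;
                                # Case 1 permits a few tests of any priority
        "medium": [0, 1],       # all P0 and P1; P2 only on a need basis
        "high":   [0, 1, 2],    # all P0, P1, and a chosen subset of P2
    }
    return table[level]

print(priorities_to_run("medium"))  # → [0, 1]
```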
• The above methodology requires that the impact of defect
fixes be analyzed for all defects.
• This can be a time-consuming procedure.
• If, for some reason, there is not enough time and the risk of
not doing an impact analysis is low, then the alternative
methodologies given below can be considered.
 Regress all
For regression testing, all priority 0,1, and 2 test
cases are rerun. This means all the test cases in the regression
test bed/suite are executed.
 Priority based regression
        For regression testing based on priority, all
Priority-0, 1, and 2 test cases are run in order; deciding
when to stop the regression testing is based on the
availability of time.
102
 Regress changes
For regression testing using this methodology
code changes are compared to the last cycle of testing and
test cases are selected based on their impact on the code.
 Random regression
Random test cases are selected and executed for
this regression methodology.
 Context based dynamic regression
A few Priority-0 test cases are selected, and based
on the context created by the analysis of those test cases
after the execution and outcome, additional related cases
are selected for continuing the regression testing.
• An effective regression strategy is usually a combination
of all of the above and not necessarily any one of these in
isolation.
103
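Of these strategies, the priority based one lends itself to a short sketch: run Priority-0, then 1, then 2 in order, stopping when the available time runs out. The case names and durations below are illustrative assumptions.

```python
# Hedged sketch of "priority based regression": execute test cases in
# priority order until the time budget is exhausted.

def priority_based_run(cases, time_budget):
    """cases: list of (name, priority, est_minutes). Run in priority
    order until the budget is exhausted; return the executed names."""
    executed, used = [], 0
    for name, prio, mins in sorted(cases, key=lambda c: c[1]):
        if used + mins > time_budget:
            break               # stop when time runs out
        executed.append(name)
        used += mins
    return executed

cases = [("p2_report", 2, 30), ("p0_sanity", 0, 10), ("p1_checkout", 1, 20)]
print(priority_based_run(cases, time_budget=35))  # → ['p0_sanity', 'p1_checkout']
```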
RESETTING THE TEST CASES FOR
REGRESSION TESTING
• Resetting of test cases is not expected to be done often,
and it needs to be done with the following considerations
in mind.
1. When there is a major change in the product.
2. When there is a change in the build procedure which
affects the product.
3. Large release cycle where some test cases were not
executed for a long time.
4. When the product is in the final regression test cycle
with a few selected test cases.
5. When there is a situation where the expected results of
the test cases could be quite different from the previous
cycles.
104
6. The test cases relating to defect fixes and
production problems need to be evaluated
release after release. In case they are found to
be working fine, they can be reset.
7. Whenever existing application functionality is
removed, the related test cases can be reset.
8. Test cases that consistently produce a positive
result can be removed.
9. Test cases relating to a few negative test
conditions can be removed.
105
CONCLUDING THE RESULTS OF REGRESSION
TESTING

• Apart from test teams, regression test results
are monitored by many people in an
organization, as it is done after test cycles and
sometimes very close to the release date.
• Developers also monitor the results from
regression as they would like to know how
well their defect fixes work in the product.
• Hence, there is a need to understand a method
for concluding the results of regression.
106
 If the result of a particular test case was a pass using the previous builds
and a fail in the current build, then regression has failed. A new build is
required and the testing must start from scratch after resetting the test
cases.
 If the result of a particular test case was a fail using the previous builds
and a pass in the current build, then it is safe to assume the defect fixes
worked.
 If the result of a particular test case was a fail using the previous builds
and a fail in the current build and if there are no defect fixes for this
particular test case, it may mean that the result of this test case should not
be considered for the pass percentage. This may also mean that such test
cases should not be selected for regression.
 If the result of a particular test case is a fail using the previous builds
but works with a documented workaround, and if you are satisfied with the
workaround, then it should be considered as a pass for both the system test
cycle and the regression test cycle.
 If you are not satisfied with the workaround, then it should be considered
as a fail for the system test cycle but may be considered as a pass for the
regression test cycle.
107
Current result from regression | Previous results | Conclusion | Remarks
FAIL | PASS | FAIL | Need to improve the regression process and code reviews.
PASS | FAIL | PASS | This is the expected result of a good regression; the defect fixes work properly.
FAIL | FAIL | FAIL | Need to analyze why the defect fixes are not working ("Is it a wrong fix?"). Also analyze why this test was rerun for regression.
PASS (with a work-around) | FAIL | Analyze the workaround; if satisfied, mark the result as PASS | Workarounds also need a good review as they can also create side-effects.
PASS | PASS | PASS | This pattern of results gives a comfort feeling that there are no side-effects due to the defect fixes.
108
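These conclusion rules can be sketched as a lookup on the (previous, current) result pair. This is an illustrative simplification: the workaround decision is reduced to a single flag.

```python
# Sketch of the result-conclusion rules above; workaround handling is
# simplified to one boolean ("are you satisfied with the workaround?").

def conclude(previous, current, workaround_ok=False):
    """Conclude a regression result from the previous and current
    outcomes of a test case."""
    if previous == "PASS" and current == "FAIL":
        return "FAIL"   # regression failed: a new build is required
    if previous == "FAIL" and current == "PASS":
        return "PASS"   # the defect fix worked
    if previous == "FAIL" and current == "FAIL":
        return "FAIL"   # analyze: wrong fix? should this test be rerun?
    if current == "PASS_WITH_WORKAROUND":
        return "PASS" if workaround_ok else "FAIL"
    return "PASS"       # PASS -> PASS: no side-effects observed

print(conclude("PASS", "FAIL"))  # → FAIL
```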
BEST PRACTICES IN REGRESSION TESTING

• Regression methodology can be applied when
1. We need to assess the quality of product between
test cycles;
2. We are doing a major release of a product, have
executed all test cycles, and are planning a
regression test cycle for defect fixes; and
3. We are doing a minor release of a product
having only defect fixes, and we can plan for
regression test cycles to take care of those defect
fixes.
109