ST Unit-4
https://www.youtube.com/watch?v=yJLtJONIG10
FACTORS GOVERNING PERFORMANCE
TESTING
• Performance is a basic requirement for any product and
is fast becoming a subject of great interest in the testing
community.
• There are many factors that govern performance testing.
• The capability of the system or product to handle multiple
transactions is determined by a factor called
Throughput.
• Throughput represents the number of requests/business
transactions processed by the product in a specified time
duration.
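The definition above can be expressed as a small calculation. This is a minimal sketch; the transaction count and measurement window are chosen purely for illustration.

```python
def throughput(transactions_completed, duration_seconds):
    """Throughput = business transactions processed per unit time."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return transactions_completed / duration_seconds

# e.g. 1200 transactions completed in a 60-second measurement window
rate = throughput(1200, 60.0)   # 20.0 transactions per second
```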
[Figure: Throughput of a system at various load conditions, plotted against user load; a saturation point is marked.]
• Measuring “Response time” becomes an important
activity of performance testing.
[Figure: Example of latencies at various levels: network latencies (N1-N4) and application latencies (A1-A3) between the client, web server, and database server.]
• A web application provides a service by talking to a web
server and a database server connected over the network.
• Latency is the delay caused by the application, the operating
system, and the environment; each is calculated
separately.
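The decomposition in the figure can be sketched as a simple sum of component delays. The numbers below are hypothetical, chosen only to illustrate how network and application latencies add up to the observed response time.

```python
# Hypothetical per-hop delays in milliseconds, mirroring the figure:
# N1-N4 are network latencies, A1-A3 are application latencies.
network_latency = {"N1": 4.0, "N2": 6.0, "N3": 5.0, "N4": 3.0}
app_latency = {"A1": 12.0, "A2": 30.0, "A3": 8.0}

total_network = sum(network_latency.values())  # 18.0 ms
total_app = sum(app_latency.values())          # 50.0 ms
response_time = total_network + total_app      # 68.0 ms
```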
• It is very important to compare the throughput
and response time of the product with those of
the competitive product.
• The exercise to find out what resources and
configurations are needed is called Capacity
planning.
• The purpose of the capacity planning exercise is
to help customers plan for the set of hardware
and software resources prior to installation or
upgradation of the product.
METHODOLOGY FOR PERFORMANCE
TESTING
• Collecting requirements for performance testing
presents some unique challenges.
First, a performance testing requirement
should be testable; not all features/functionality
can be performance tested.
Second, a performance testing
requirement needs to clearly state what factors
need to be measured and improved.
Last, a performance testing requirement
needs to be associated with the actual number
or percentage of improvement that is desired.
• For example:
if a business transaction, say ATM
money withdrawal, should be completed
within two minutes, the requirement needs
to document the actual response time
expected.
• Performance compared to the previous release
of the same product: a performance
requirement can be something like "an ATM
withdrawal transaction will be faster than the
previous release by 10%".
• Performance compared to absolute numbers
derived from actual need A requirement can be
documented such as “ATM machine should be
capable of handling 1000 transactions per day with
each transaction not taking more than a minute.”
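A requirement stated in absolute numbers like this lends itself to a mechanical check. The sketch below assumes hypothetical measured response times and a simple pass/fail rule; the constants mirror the requirement quoted above.

```python
MAX_RESPONSE_SECONDS = 60.0   # "not taking more than a minute"
REQUIRED_PER_DAY = 1000       # "1000 transactions per day"

def meets_requirement(response_times_s, transactions_per_day):
    """True if every transaction met the response-time limit and the
    required daily volume was sustained."""
    return (all(t <= MAX_RESPONSE_SECONDS for t in response_times_s)
            and transactions_per_day >= REQUIRED_PER_DAY)

ok = meets_requirement([22.0, 45.5, 58.9], transactions_per_day=1200)   # True
```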
Example Of Performance Test Requirements
[Table: example performance test requirements]
2. Writing Test Cases:
• The next step involved in performance testing is
writing test cases.
• A test case for performance testing should have
the following details defined.
1. List of operations or business transactions
to be tested.
2. Steps for executing those operations/
transactions.
3. List of product and OS parameters that
impact performance testing, and their values.
4. Loading pattern.
5. Resources and their configurations
(network, hardware, software configurations).
6. The expected results (that is, expected
response time, throughput, latency).
7. The product versions/competitive
products to be compared with, and related
information such as their corresponding fields.
• Performance test cases are repetitive in nature.
• These test cases are normally executed
repeatedly for different values of parameters,
different load conditions, different
configurations, and so on.
• While testing the product for different load
patterns, it is important to increase the load or
scalability gradually to avoid any unnecessary
effort in case of failures.
• For example:
If an ATM withdrawal fails for ten concurrent
operations, there is no point in trying it for
10,000 operations.
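The idea of ramping the load gradually and stopping at the first failure can be sketched as below. `run_at_load` is a stand-in for whatever harness actually drives the concurrent transactions, and the 100-transaction limit is invented for the example.

```python
def run_at_load(concurrency):
    """Stand-in harness: pretend the product handles up to 100
    concurrent withdrawals and fails beyond that."""
    return concurrency <= 100

def ramp_up(levels):
    """Increase load step by step; stop at the first failing level,
    since higher loads would fail anyway."""
    passed = []
    for level in levels:
        if not run_at_load(level):
            break
        passed.append(level)
    return passed

ramp_up([10, 50, 100, 1000, 10000])   # stops after 100
```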
3.Automating Performance Test Cases:
Automation is an important step in the
methodology for performance testing.
• Characteristics:
1.Performance testing is repetitive.
2.Performance test cases cannot be effective
without automation and in most cases it is, in
fact, almost impossible to do performance
testing without automation.
3.The results of performance testing need to be
accurate, and manually calculating the response
time, throughput, and so on can introduce
inaccuracy.
4.There are far too many permutations and
combinations of those factors and it will be
difficult to remember all these and use them if
the tests are done manually.
5.The analysis of performance results and failure
needs to take into account related information
such as resource utilization, log files, trace files,
and so on that are collected at regular intervals.
4.Executing Performance Test Cases:
• Performance testing generally involves less
effort for executing but more effort for
planning, data collection, and analysis.
• The most effort-consuming aspect in execution
is usually data collection.
• Data corresponding to the following points
needs to be collected while executing
performance tests.
1. Start and end time of test case execution.
2. Log and trace/audit files of the product
and operating system (for future debugging and
repeatability purposes).
3. Utilization of resources (CPU, memory,
disk, network utilization, and so on).
4. Configuration of all environment factors
(hardware, software, and other components).
5. The response time, throughput, latency,
and so on, as specified in the test case
documentation, at regular intervals.
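A minimal sketch of such data collection is shown below, covering points 1, 4, and 5; resource utilization (point 3) and log collection (point 2) would need OS-specific tooling and are left out.

```python
import time
import platform

def run_with_collection(test_case):
    """Wrap a performance test to capture timing and environment data."""
    record = {"environment": platform.platform()}   # point 4 (partial)
    start = time.perf_counter()                     # point 1: start time
    record["result"] = test_case()
    record["response_time_s"] = time.perf_counter() - start  # point 5
    return record

rec = run_with_collection(lambda: sum(range(1000)))
```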
• Another aspect involved in performance test
execution is scenario testing.
• A set of transactions/operations that are usually
performed by the user forms the scenario for
performance testing.
• For example:
Not all users withdraw cash from an ATM;
some of them query for the account balance, some
make deposits, and so on.
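A scenario can be generated as a weighted mix of operations. The usage mix below is assumed, not taken from real field data.

```python
import random

# Hypothetical usage mix for an ATM scenario; the weights are assumed.
operations = ["withdraw_cash", "query_balance", "deposit"]
weights = [0.6, 0.3, 0.1]

def generate_scenario(n, seed=42):
    """Draw n operations according to the usage mix (deterministic seed)."""
    rng = random.Random(seed)
    return rng.choices(operations, weights=weights, k=n)

mix = generate_scenario(1000)
```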
[Figure: (a) Response time, plotted as time taken against the number of concurrent transactions; (b) Throughput, plotted as transactions processed per hour against the number of users; (c) Throughput and resource utilization: memory utilization, network packets, CPU utilization, disk read/write, and throughput.]
5.Analyzing The Performance Test Results:
• Analyzing the performance test results requires
Multi-dimensional thinking.
• This is the most complex part of performance
testing where product knowledge, analytical
thinking and statistical background are all
absolutely essential.
• Before analyzing the data, some calculations of
data and organization of the data are required.
– Calculating the mean of the performance test result
data.
– Calculating the standard deviation.
– Removing the noise (noise removal) and re-plotting
and re-calculating the mean and standard deviation.
– In terms of caching and other technologies
implemented in the product, the data coming from
the cache need to be differentiated from the data
that gets processed by the product and presented.
• Assessing performance does not depend only on taking the
average/mean of the performance numbers.
• It also depends on how consistently the product
delivers those performance numbers.
• Standard deviation can help here.
• The process of removing some unwanted
values in a set is called noise removal.
• When some values are removed from the set,
the mean and standard deviation need to
be re-calculated.
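The steps above can be sketched with the standard library. The two-standard-deviation cutoff is an assumed noise criterion, since the slides do not prescribe a specific rule.

```python
from statistics import mean, stdev

def remove_noise(samples, z=2.0):
    """Drop values more than z standard deviations from the mean, then
    re-calculate the mean and standard deviation on what remains."""
    m, s = mean(samples), stdev(samples)
    cleaned = [x for x in samples if abs(x - m) <= z * s]
    return cleaned, mean(cleaned), stdev(cleaned)

# Response times in milliseconds with one obvious outlier.
times = [101, 99, 100, 102, 98, 500]
cleaned, new_mean, new_sd = remove_noise(times)   # outlier 500 removed
```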
• The majority of client-server, internet, and
database applications store the data in a local
high-speed buffer when a query is made.
• This enables them to present the data quickly
when the same request is made again. This is
called Caching.
• The performance data need to be
differentiated according to where the result is
coming from: the server or the cache.
• For example:
Assume that data in a cache can produce a
response time of 1000 microseconds, that a
server access takes 1 microsecond, and that 90% of
the time a request is satisfied by the cache.
Then the average response time is:
(0.9 × 1000) + (0.1 × 1) = 900.1 µs
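The weighted-average calculation generalizes directly; the figures below are the slide's own.

```python
def average_response_time(hit_ratio, cache_time_us, server_time_us):
    """Weighted average of cache-hit and cache-miss response times."""
    return hit_ratio * cache_time_us + (1 - hit_ratio) * server_time_us

# The slide's figures: 90% of requests served from the cache at 1000 us,
# the remaining 10% by a server access of 1 us.
avg = average_response_time(0.9, 1000.0, 1.0)   # ~900.1 us
```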
• Once the data sets are organized, the analysis
of performance data is carried out to conclude
the following:
– Whether performance of the product is consistent
when tests are executed multiple times.
– What performance can be expected for what type
of configuration (both hardware and software) and
resources.
– What parameters impact performance and how
they can be used to derive better performance.
– What is the effect of scenarios involving several mixes
of operations on the performance factors.
– What is the effect of product technologies such as
caching on performance improvements.
– Up to what load the performance numbers are
acceptable and whether the performance of the
product meets the criteria of "graceful
degradation".
– What is the optimum throughput/response time
of the product for a set of factors such as load,
resources, and parameters.
– What performance requirements are met and how
the performance looks when compared to the
previous version, the expectations set earlier, or the
competition.
– Sometimes a high-end configuration may not be
available for performance testing. The performance
numbers that are to be expected from a high-end
configuration should be extrapolated or predicted.
6. Performance Tuning:
• Analyzing performance data helps in
narrowing down the list of parameters that
really impact the performance results and in
improving product performance.
• Once the parameters are narrowed down to a few,
the performance test cases are repeated for
different values of those parameters to further
analyze their effect in getting better
performance.
• This performance-tuning exercise needs a high
degree of skill in identifying the list of
parameters and their contribution to
performance.
• There are two steps involved in getting the
optimum mileage from performance tuning:
Tuning the product parameters, and
Tuning the operating system
parameters.
• The product parameters, in isolation as well as
in combination, have an impact on product
performance. Hence it is important to:
1. Repeat the performance tests for
different values of each parameter that impacts
performance.
2. Sometimes when a particular parameter
value is changed, it needs changes in other
parameters. Repeat the performance tests for a
group of parameters and their different values.
3. Repeat the performance tests for
default values of all parameters (called factory
setting tests).
4. Repeat the performance tests for low
and high values of each parameter and
combinations.
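Steps 1-4 amount to sweeping parameter values singly and in combination. The sketch below is minimal; the parameter names, candidate values, and scoring function are all invented for illustration.

```python
from itertools import product

# Hypothetical tunable product parameters with low/default/high values.
parameters = {
    "cache_size_mb": [64, 256, 1024],
    "worker_threads": [2, 8, 32],
}

def sweep(run_test):
    """Run the performance test for every combination of parameter
    values, returning a result per configuration."""
    results = {}
    for combo in product(*parameters.values()):
        config = dict(zip(parameters, combo))
        results[combo] = run_test(config)
    return results

# Stand-in scoring function in place of a real performance test.
res = sweep(lambda cfg: cfg["cache_size_mb"] * cfg["worker_threads"])
```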
• Tuning the OS parameters is another step
towards getting better performance. There are
various sets of parameters provided by the
operating system under different categories.
• Those values can be changed using the
appropriate tools that come along with the
operating system.
• For example:
the registry in MS Windows can be edited
using regedit.exe.
• Parameters in the OS are grouped into different
categories:
– File system related parameters (for example, the number of
open files permitted).
– Disk management parameters (for example, simultaneous
disk reads/writes).
– Memory management parameters (for example, virtual
memory page size and number of pages).
– Processor management parameters (for example,
enabling/disabling processors in a multiprocessor
environment).
– Network parameters (for example, setting TCP/IP time out).
• There is an important point that needs to be
remembered when tuning the OS parameters
for improving product performance.
[Figure: Throughput (normal vs. expected high) and response time (normal vs. expected high) plotted against parameter values.]
7.Performance Benchmarking:
• Performance Benchmarking is about
comparing the performance of product
transactions with that of the competitors.
• No two products can have the same
architecture, design, functionality, and code.
• The customers and deployments can also be
different.
• Hence, it will be very difficult to compare two
products on those aspects.
• The steps involved in performance benchmarking
are the following:
1.Identifying the transactions/scenarios and the
test configuration
2.Comparing the performance of different products.
3.Tuning the parameters of the products being
compared fairly to deliver the best performance.
4.Publishing the results of performance
benchmarking.
• From the point of view of a specific product there
could be three outcomes from performance
benchmarking.
The first outcome can be positive, where it can be
found that a set of transactions/scenarios
outperform the competition.
The second outcome can be neutral, where a set of
transactions are comparable with those of the
competition.
The third outcome can be negative, where a set of
transactions under-perform compared to those of the
competition.
8. Capacity Planning:
• In capacity planning, the performance
requirements and performance results are
taken as inputs, and the configuration
needed to satisfy that set of
requirements is derived.
• Capacity planning necessitates a clear
understanding of the resource requirements
for transactions/scenarios.
• Some transactions of the product associated with
certain load conditions could be disk intensive, some
could be CPU intensive, some could be
network intensive, and some could be
memory intensive.
• Capacity planning corresponding to
short-, medium-, and long-term requirements is called
Minimum required configuration;
Typical configuration; and
Special configuration.
• A minimum required configuration denotes
that with anything less than this configuration,
the product may not even work.
• A typical configuration denotes that the product
will work fine for meeting the performance
requirements of the required load pattern and
can also handle a slight increase in the load
pattern.
• A special configuration denotes that capacity
planning was done considering all future
requirements.
• There are two techniques that play a major
role in capacity planning: load balancing and
high availability.
• Load balancing ensures that the multiple
machines available are used equally to service
the transactions.
• Machine clusters are used to ensure
availability.
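Round-robin dispatch is one simple way (assumed here, since the slides only say machines should be used equally) to spread transactions across a pool; the machine names are illustrative.

```python
from itertools import cycle

# Minimal round-robin dispatcher: each transaction goes to the next
# machine in the pool, so all machines are used equally.
machines = ["app-server-1", "app-server-2", "app-server-3"]

def make_balancer(pool):
    rr = cycle(pool)
    return lambda: next(rr)

dispatch = make_balancer(machines)
assigned = [dispatch() for _ in range(6)]   # cycles through the pool twice
```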
• When doing capacity planning, both load
balancing and availability factors are included
to prescribe the desired configuration.
TOOLS FOR PERFORMANCE TESTING
• There are two types of tools that can be used for
performance testing: functional performance
tools and load tools.
Load testing tools
1. Load Runner from Mercury
2. QA Load from Compuware
3. Silk Performer from Segue
PROCESS FOR PERFORMANCE TESTING
• Performance testing follows the same process
as any other testing type.
• The only difference is in getting more details
and analysis.
• A major challenge involved in performance
testing is getting the right process so that the
effort can be minimized.
[Figure: Process for performance testing]
• A majority of the performance issues require
rework or changes in architecture and design.
• Hence, it is important to collect the
requirements for performance earlier in the life
cycle and address them, because changes to
architecture and design late in the cycle are very
expensive.
• Resource requirements:
All additional resources that are
specifically needed for performance testing
need to be planned and obtained.
Normally these resources are
obtained, used for performance testing, and
released after performance testing is over.
• Test bed (simulated and real life), test-lab
setup:
The test lab, with all required
equipment and software configuration, has to
be set up prior to execution.
Hence, setting up both the simulated
and real-life environments is time consuming, and
any mistake in the test bed setup may mean that
the complete performance tests have to be
repeated.
• Responsibilities:
Performance defects may cause
changes to architecture, design, and code.
Hence, a matrix containing
responsibilities must be worked out as part of
the performance test plan and communicated
across all teams.
• Setting up product traces, audits (external
and internal):
Performance test results need to
be associated with traces and audit trails to
analyze the results and defects.
What traces and audit trails have to
be collected is planned in advance and is an
associated part of the test plan.
• Entry and exit criteria:
Performance tests require a stable product due to
its complexity and the accuracy that is needed.
Changes to the product affect performance
numbers and may mean that the tests have to be
repeated.
The set of criteria to be met are defined well in
advance and documented as part of the
performance test plan.
A set of exit criteria is defined to conclude the
results of performance tests.
• Designing and automating the test cases form the
next step in the performance test process.
• Automation deserves a special mention because
it is almost impossible to perform performance
testing without automation.
• Entry and exit criteria play a major role in the
process of performance test execution.
• Hence, keeping a strong process for
performance testing provides a high return on
investment.
CHALLENGES
• Performance testing is not a very well
understood topic in the testing community.
• There are several interpretations of
performance testing.
• Some organizations separate performance
testing and load testing and conduct them at
different phases of testing.
• The availability of skills is a major problem
facing performance testing.
• Performance testing requires a large number
and amount of resources such as hardware,
software, effort, time, tools and people.
• Performance test results need to reflect real-life
environments and expectations.
• Selecting the right tool for performance
testing is another challenge.
• Interfacing with different teams that include a
set of customers is yet another challenge in
performance testing.
• Lack of seriousness about performance tests on
the part of the management and development team is
another challenge.
• Once all functionalities are working fine in a
product, it is assumed that the product is
ready to ship.
• A high degree of management commitment
and directive to fix performance defects
before product release are needed for
successful execution of performance tests.
REGRESSION TESTING
• Whenever defect fixes or changes are made, existing
functionality should continue to work; regression
testing enables the team to meet this objective.
• Regression testing is important in today's
context since software is being released very
often to keep up with the competition and
increasing customer awareness.
• Regression testing follows selective re-testing
technique.
• Whenever the defect fixes are done, a set of test
cases that need to be run to verify the defect fixes
are selected by the test team.
• An impact analysis is done to find out what areas
may get impacted due to those defect fixes.
• Based on the impact analysis, some more test cases
are selected to take care of the impacted areas.
• Since this testing technique focuses on reuse of
existing test cases that have already been executed,
the technique is called selective re-testing.
TYPES OF REGRESSION TESTING
Before going into the types of regression testing, let us
understand what a "build" means. When internal or external
test teams or customers begin using a product, they report
defects.
These defects are analyzed by each developer, who makes
individual defect fixes. The developers then do appropriate
unit testing and check the defect fixes into a Configuration
Management (CM) system.
The source code for the complete product is then compiled,
and these defect fixes along with the existing features get
consolidated into the build.
The build thus becomes an aggregation of all the defect fixes
and features that are present in the product.
There are two types of regression testing in
practice.
1. Regular regression testing.
2. Final regression testing.
WHEN TO DO REGRESSION TESTING
• Whenever changes happen to software,
regression testing is done to ensure that these do
not adversely affect the existing functionality.
• A regular regression testing can use multiple
builds for the test cases to be executed.
• However, an unchanged build is highly
recommended for final regression testing.
• The test cases that failed due to the defects
should be included for future regression testing.
It is necessary to perform regression testing
when
1. A reasonable amount of initial testing is
already carried out.
2. A good number of defects have been fixed.
3. Defect fixes that can produce side-effects are
taken care of.
• A defect tracking system is used to communicate the
status of defect fixes amongst the various stakeholders.
• When a developer fixes a defect, it is sent back to the test
engineer for verification using the defect tracking system.
• The test engineer needs to take the appropriate action of
closing the defect if it is fixed or reopening it if it has not
been fixed properly.
• In this process what may get missed out are the side-
effects, where a fix would have fixed the particular defect
but some functionality which was working before has
stopped working now.
• Regression testing needs to be done when a set of defect
fixes is provided.
• To ensure that there are no side-effects, some more
test cases have to be selected and defect fixes
verified in the regression test cycle.
• Thus, before a tester can close the defect as fixed,
it is important to ensure that appropriate
regression tests are run and the fix produces no
side-effects.
• It is always a good practice to initiate regression
testing and verify the defect fixes.
• Else, when there is a side-effect or loss of
functionality observed at a later point of time
through testing, it will become very difficult to
identify which defect fix has caused it.
• From the above discussion it is clear that
regression testing is both a planned test
activity and a need-based activity and it is
done between builds and test cycles.
• Hence, regression test is applicable to all
phases in a software development life
cycle(SDLC) and also to component, system,
and acceptance test phases.
Figure 8.2: Regression testing
Types: regular regression and final regression.
Why? Defects creep in due to changes, and defect fixes may cause existing functionality to fail.
What? Selective re-testing to ensure that defect fixes work and that there are no side-effects.
When? When a set of defect fixes arrives after formal testing for completed areas; performed in all test phases.
HOW TO DO REGRESSION TESTING
• A well-defined methodology for regression testing is
very important, as this is among the final types of
testing that is normally performed just before release.
• If regression testing is not done right, it will enable
defects to seep through and may result in
customers facing some serious issues not found by
test teams.
• There are several methodologies for regression testing
that are used by different organizations.
• The objective of this section is to explain a
methodology that encompasses the majority of them.
• The methodology here is made of the
following steps.
1. Performing an initial “smoke” or “sanity”
test.
2. Understanding the criteria for selecting the
test cases.
3. Classifying the test cases into different
priorities.
4. A methodology for selecting test cases.
5. Resetting the test cases for test execution.
6. Concluding the results of a regression cycle.
PERFORMING AN INITIAL “SMOKE” OR “SANITY” TEST
• Some of the criteria to select test cases for regression
testing are as follows.
1. Include test cases that have produced the maximum
defects in the past.
2. Include test cases for a functionality in which a change has
been made.
3. Include test cases in which problems are reported.
4. Include test cases that test the basic functionality or the core
features of the product which are mandatory requirements
of the customer.
5. Include test cases that test the end-to-end behavior of the
application or the product.
6. Include test cases to test the positive test conditions.
7. Include the areas which are highly visible to the users.
CLASSIFYING TEST CASES
• When the test cases have to be selected
dynamically for each regression run, it would
be worthwhile to plan for regression testing
from the beginning of project, even before the
test cycles start.
• To enable choosing the right tests for a
regression run, the test cases can be classified
into various priorities based on importance and
customer usage.
• As an example, we can classify the test cases into three
categories.
Priority-0
These test cases can be called sanity test cases; they
check basic functionality and are run for accepting the build
for further testing.
Priority-1
These test cases use the basic and normal setup and
deliver high project value to both the development team and
customers.
Priority-2
These test cases deliver moderate project value. They
are executed as part of the testing cycle and selected for
regression testing on a need basis.
[Figure: Proportion of test cases in each priority; Priority-2 accounts for about 65%.]
METHODOLOGY FOR SELECTING TEST CASES
Case 1:
If the criticality and impact of the defect fixes are low,
then it is enough that a test engineer selects a few test cases
from the test case database (TCDB) and executes them. These test
cases can fall under any priority (0, 1, or 2).
Case 2:
If the criticality and the impact of the defect fixes are
medium, then we need to execute all Priority-0 and Priority-1
test cases. If defect fixes need additional test cases from
Priority-2, then those test cases can also be selected and used
for regression testing. Selecting Priority-2 test cases in this
case is desirable but not necessary.
Case 3:
If the criticality and impact of the defect fixes are high,
then we need to execute all Priority-0, Priority-1, and a
carefully selected subset of Priority-2 test cases.
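Cases 1-3 can be summarized as a small lookup; this is a sketch only, with the labels chosen to mirror the text above.

```python
def select_test_priorities(level):
    """Map the criticality/impact of the defect fixes to the test cases
    to run, following Cases 1-3; the labels are illustrative."""
    selection = {
        "low":    ["a few test cases of any priority from the TCDB"],
        "medium": ["all Priority-0", "all Priority-1",
                   "Priority-2 on a need basis (desirable, not necessary)"],
        "high":   ["all Priority-0", "all Priority-1",
                   "a carefully selected subset of Priority-2"],
    }
    return selection[level]

plan = select_test_priorities("high")
```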
[Figure 8.5: Methodology for the selection of test cases. The mix of P0, P1, and P2 test cases selected grows as the criticality and impact of the defect fixes increase.]
• The above methodology requires that the impact of defect
fixes be analyzed for all defects.
• This can be a time-consuming procedure.
• If, for some reason, there is not enough time and the risk of
not doing an impact analysis is low, then the alternative
methodologies given below can be considered.
Regress all
For regression testing, all priority 0,1, and 2 test
cases are rerun. This means all the test cases in the regression
test bed/suite are executed.
Priority based regression
For regression testing based on this priority, all
priority 0,1, and 2 test cases are run in order, based on the
availability of time. Deciding when to stop the regression
testing is based on the availability of time.
Regress changes
For regression testing using this methodology
code changes are compared to the last cycle of testing and
test cases are selected based on their impact on the code.
Random regression
Random test cases are selected and executed for
this regression methodology.
Context based dynamic regression
A few Priority-0 test cases are selected, and based
on the context created by the analysis of those test cases
after the execution and outcome, additional related cases
are selected for continuing the regression testing.
• An effective regression strategy is usually a combination
of all of the above and not necessarily any of these in
isolation.
RESETTING THE TEST CASES FOR
REGRESSION TESTING
• Resetting of test cases is not expected to be done often,
and it needs to be done with the following considerations
in mind.
1. When there is a major change in the product.
2. When there is a change in the build procedure which
affects the product.
3. Large release cycle where some test cases were not
executed for a long time.
4. When the product is in the final regression test cycle
with a few selected test cases.
5. Where there is a situation such that the expected results of
the test cases could be quite different from the previous
cycles.
6. The test cases relating to defect fixes and
production problems need to be evaluated
release after release. In case they are found to
be working fine, they can be reset.
7. Whenever existing application functionality is
removed, the related test cases can be reset.
8. Test cases that consistently produce a positive
result can be removed.
9. Test cases relating to a few negative test
conditions can be removed.
CONCLUDING THE RESULTS OF REGRESSION
TESTING