Unit - IV Software Testing
Software Testing
Disclaimer:
The lecture notes have been prepared by referring to a book. This document does not claim any originality and cannot be used as a
substitute for prescribed textbooks.
List of Topics
Introduction to Testing
Verification
Validation
Verification Vs Validation
Test Strategy
Planning
Test Project Monitoring and Control
Design of Master Test Plan
Test Case Design
Test Case Management
Test Case Reporting
Test Artifacts
Software testing
• Testing is the process of exercising a program with the specific intent of finding errors
prior to delivery to the end user.
• Basic Definitions
• Errors
• An error is a mistake, misconception, or misunderstanding on the part of a software
developer.
• Faults (Defects)
• A fault (defect) is introduced into the software as the result of an error. It is an anomaly in
the software that may cause it to behave incorrectly, and not according to its specification.
• Failures
• A failure is the inability of a software system or component to perform its required
functions within specified performance requirements [2].
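To make the distinction concrete, here is a minimal illustrative Python sketch (the function, rule, and values are invented for illustration, not taken from the book): a developer's error introduces a fault in the code, and the fault surfaces as a failure only for certain inputs.

    # Error (human mistake): the developer misreads the rule
    # "discount applies to orders of 100 or more" as "more than 100".
    def apply_discount(total):
        # Fault (defect): '>' should be '>=' per the specification.
        if total > 100:
            return total * 0.9
        return total

    # Failure: for the boundary input 100, the observed output (100)
    # deviates from the specified output (90.0). Other inputs pass,
    # which is why the fault can stay hidden.
    assert apply_discount(150) == 135.0   # passes despite the fault
    print(apply_discount(100))            # prints 100, expected 90.0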
Software Testing
Introduction – Problems with Traditional Development Model
• Traditionally, software testing was done only after software was constructed.
• This used to limit the scope of software testing in the development life cycle (see the figure below: Traditional Software Development Model – too little, too late testing).
• This practice led to testing that was too little and too late.
• By the time the software was constructed, faulty requirement specifications and faulty software design had already resulted in defect-ridden software.
Verification & Validation
• The source code is reviewed for dead code, unused variables, faulty logic constructs, etc.
• Once the source code is ready to be run as a system, validation testing can be
started.
Verification vs Validation
Software testing is part of a broader group of activities, called verification and validation (V&V), that are involved in software quality assurance.
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements a specific function or algorithm
– Verification: “Are we building the product right?”
• Validation (Does the software meet customer requirements?)
– The set of activities that ensure that the software that has been built is traceable to customer requirements
– Validation: “Are we building the right product?”
• Verification and validation includes a wide array of SQA activities: technical reviews, quality and configuration audits, performance monitoring,
simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing,
acceptance testing, and installation testing
Software testing strategy – big picture
[Figure: the testing strategy spiral. Unit testing corresponds to code, integration testing to design, validation testing to requirements, and system testing to system engineering. Moving outward from unit testing to system testing, scope broadens from narrow to broad and the focus shifts from concrete to abstract.]
Major Types of testing
Software testing steps
Testing strategy applied to conventional
applications
Unit testing
Exercises specific paths in a component's control structure to ensure complete coverage and maximum error
detection
Components are then assembled and integrated
Integration testing
Focuses on inputs and outputs, and how well the components fit together and work together
Validation testing
Provides final assurance that the software meets all functional, behavioral, and performance requirements
System testing
Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall system function and performance are achieved
Some common errors – unit testing
• Misunderstood or incorrect arithmetic precedence
• Mixed mode operations (e.g., int, float, char)
• Incorrect initialization of values
• Precision inaccuracy and round-off errors
• Incorrect symbolic representation of an expression (int vs. float)
• Failure to exit when divergent iteration is encountered
• Improperly modified loop variables
• Boundary value violations
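Several of these defect classes are easy to demonstrate. Here is a minimal illustrative Python sketch (function and values invented) of a boundary value violation, together with the unit tests that expose it:

    # Spec: scores of 60 and above pass.
    def is_passing(score):
        return score > 60   # boundary defect: should be >= 60

    # Unit tests probe the boundary and its neighbors, where such
    # defects cluster.
    assert is_passing(61) is True
    assert is_passing(59) is False
    assert is_passing(60) is True   # fails by design, exposing the off-by-one defect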
Integration testing
Non-incremental testing
• All modules are combined at once (“big bang”) and the entire program is tested as a whole; when errors occur, they are hard to isolate because the entire program must be examined.
Incremental testing
• Three kinds
• Top-down integration
• Bottom-up integration
• Sandwich integration
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied
Top-down integration testing
Modules are integrated by moving downward through the control hierarchy, beginning with the main module.
The control program is tested first. Modules are integrated one at a time. The emphasis is on interface testing.
Top-down integration
• Advantages
• This approach verifies major control or decision points early in the test process
• No test drivers needed
• Interface errors are discovered early
• Modular features aid debugging
• Disadvantages
• Stubs must be written to stand in for lower-level modules that have not yet been integrated; this code is later discarded or expanded (see the sketch below)
• Low-level processing is verified late in the test process
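A minimal hypothetical Python sketch of the stub idea (all names invented): the upper-level control logic is tested before the real lower-level module exists.

    # Real lower-level module not yet integrated; a stub stands in for it.
    def fetch_tax_rate_stub(region):
        # Stub: returns a canned value instead of doing real processing.
        return 0.08

    def compute_invoice_total(subtotal, region, fetch_tax_rate=fetch_tax_rate_stub):
        # Upper-level module under test; the stub satisfies its interface.
        return round(subtotal * (1 + fetch_tax_rate(region)), 2)

    # Interface testing of the control logic with the stub in place.
    assert compute_invoice_total(100.0, "north") == 108.0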
Bottom-up integration
• Integration and testing starts with the most atomic modules in the control
hierarchy
• Allows early testing aimed at proving feasibility, with emphasis on module
functionality and performance
Bottom-up integration
• Advantages
– This approach verifies low-level data processing early in the testing process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this code is
later discarded or expanded into a full-featured version
– Drivers inherently do not contain the complete algorithms that will eventually
use the services of the lower-level modules; consequently, testing may be
incomplete or more testing may be needed later when the upper level
modules are available
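A driver is the mirror image of a stub. A minimal hypothetical Python sketch (names invented): throwaway driver code exercises a finished lower-level module before its real caller exists.

    # Lower-level module, already built and under test.
    def parse_amount(text):
        return round(float(text.replace(",", "")), 2)

    # Driver: throwaway code that feeds inputs to the lower-level module
    # and checks outputs, standing in for the not-yet-built upper level.
    def driver():
        cases = [("1,234.5", 1234.5), ("10", 10.0)]
        for raw, expected in cases:
            assert parse_amount(raw) == expected

    driver()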
Regression testing
• Re-executes a subset of tests that have already been run to ensure that changes (bug fixes or new features) have not introduced unintended side effects into previously working software.
Acceptance testing – alpha and beta testing
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that
cannot be controlled by the developer
– The end-user records all problems that are encountered and reports these
to the developers at regular intervals
• After beta testing is complete, software engineers make software
modifications and prepare for release of the software product to
the entire customer base
Acceptance testing
System testing
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
• Deployment Testing
Test Strategy and Planning
Introduction
• Software testing is a vast field in itself, and so the common practice is to
consider it as a separate project.
• In those cases, it is known as an independent verification and validation project.
• As such, a separate project plan is made for that project and is linked to the
parent software development project.
• There are many techniques available for executing software test projects; which to use depends on the kind of test project.
• However, most test projects must have a test plan and a test strategy before the project is ready for execution.
• Often, due to time constraints, testing cycles are cut short by project managers.
• This leads to a half-tested product being pushed out the door.
• In such cases, a large number of product defects are left undetected.
• Ultimately, end users discover these defects. Fixing these defects at this stage is
costly.
• Moreover, they cannot be fixed one at a time.
• They are to be taken in batches and are incorporated in maintenance project
plans.
• This leads to excessive costs in maintaining the software.
Test Strategy and Planning
• It is a lot cheaper to trap those bugs during the testing cycle and fix them.
• It is appropriately said that “testing costs money, but not testing costs more!”
• Test strategies should include things like test prioritization, automation strategy,
risk analysis, etc.
• Test planning should include a work breakdown structure, requirement review,
resource allocation, effort estimation, tools selection, setting up communication
channels, etc.
Test Prioritization
• Even before the test effort actually starts, it is of utmost importance that test prioritization be done.
• Not all parts of the software product will be used by end users with the same intensity.
• Some parts of the product are used by end users extensively, while other parts are seldom used.
• The extensively used parts of the product should not have any defects at all, and thus they need to be tested most thoroughly.
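As a toy illustration of prioritization (module names and numbers are invented), test effort can be allocated in proportion to how heavily each part of the product is used:

    # Hypothetical usage intensity per module (fraction of user activity).
    usage = {"checkout": 0.50, "search": 0.30, "reports": 0.15, "admin": 0.05}

    total_test_hours = 400

    # Allocate test effort in proportion to usage intensity, so the most
    # heavily used parts of the product get the most thorough testing.
    allocation = {m: round(total_test_hours * w) for m, w in usage.items()}
    for module, hours in sorted(allocation.items(), key=lambda kv: -kv[1]):
        print(f"{module:10s} {hours:4d} h")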
• Test Case
• A test case, in a practical sense, is a test-related item which contains the following information:
1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.
2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be produced by the code under test.
• Test
• A test is a group of related test cases, or a group of related test cases and test procedures.
• Test Bed
• A test bed is an environment that contains all the hardware and software needed
to test a software component or a software system.
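The three parts of a test case map naturally onto a simple record. A minimal illustrative Python sketch (field and test names are invented for illustration, not a prescribed format):

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        test_id: str
        inputs: dict                 # data items received from an external source
        execution_conditions: list   # e.g. required database state or hardware configuration
        expected_output: str         # specified result from the code under test

    # A "test" is then simply a group of related test cases.
    login_test = [
        TestCase("TC-01", {"user": "alice", "password": "ok"},
                 ["user table loaded"], "login succeeds"),
        TestCase("TC-02", {"user": "alice", "password": "bad"},
                 ["user table loaded"], "login rejected"),
    ]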
How to Write Test Cases: Sample Template with Examples
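A typical single-test-case layout is sketched below; the exact fields vary from organization to organization, and the values shown are invented examples:

    Test Case ID:       TC-01
    Test Title:         Valid user can log in
    Preconditions:      User account exists; application is at the login screen
    Test Steps:         1. Enter the user name
                        2. Enter the password
                        3. Click Login
    Test Inputs:        user = alice, password = (valid password)
    Expected Result:    Home page is displayed
    Actual Result:      (recorded during execution)
    Status (Pass/Fail): (recorded during execution)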
Risk Management
• The test manager should also plan for all known risks that could impact the test project.
• If proper risk mitigation planning is not done and a mishap occurs, then the test
project schedule could be jeopardized, costs could escalate and/or quality could
go down.
• Some of the risks that can have severe, adverse impact on a test project include
an unrealistic schedule, resource unavailability, skill unavailability, frequent
requirement changes, etc.
Test Strategy and Planning
Risk Management
• Requirement changes pose a serious threat to testing effort because for each
requirement change, the whole test plan gets changed.
• The test team has to revise its schedule for the additional work, as well as assess the impact of the change on the test cases it has to recreate.
• Some enthusiastic test engineers estimate much less effort than the work actually requires.
• In that case, the test manager is left trying to explain why testing is taking longer than scheduled.
• In such cases, even after loading test engineers to more than 150% of capacity, the testing cycle gets delayed.
• This is a very common situation on most of the test projects.
• This also happens because the marketing team agrees on unrealistic schedules
with the customer in order to bag the project.
• Even the test manager at that time feels that somehow he will manage it, but
later on it proves impossible to achieve.
• Other test engineers unnecessarily pad their estimate and later on, when the
customer detects it, the test manager finds himself in a spot.
Test Strategy and Planning
Risk Management
• When the software development market, along with the software testing
market, is hot (this is the case most of the time, as businesses need to
implement software systems more and more and so software professionals are
in great demand), software professionals have many job offers in hand.
• They leave the project at short notice and the test manager has to find a
replacement fast.
• Sometimes, a project may have some kind of testing for which skilled test
professionals are hard to find.
• In both situations, the test manager may not be able to start those tasks for lack of adequate resources.
• For test professional resources, good alternative resource planning is required.
• The test manager should, in consultation with the human resources manager, keep a line of test professionals who can join if one is needed on his project.
• For scheduling problems, the test manager has to ensure in advance that
schedules do not get affected.
• He has to keep a buffer in the schedule for any eventuality.
Test Strategy and Planning
Risk Management
• To keep tabs on the project budget, the test manager has to ensure that the schedule is not unrealistic and also has to load his test engineers appropriately.
• If some test engineers are not loaded adequately, then project costs may go
higher.
• For this reason, if any test professionals do not have enough assignments on
one project, they should be assigned work from other projects.
Effort Estimation
• To prepare the schedule, resource plan, and budget for a test project, the test manager should make a good effort estimate.
• Effort estimate should include information such as project size, productivity, and
test strategy.
• While project size and test strategy information comes after consultation with
the customer, the productivity figure comes from experience and knowledge of
the team members of the project team.
• The wideband Delphi technique uses brainstorming sessions to arrive at effort
estimate figures after discussing the project details with the project team.
Test Strategy and Planning
Effort Estimation
• This is a good technique because the people who will be assigned the project
work will know their own productivity levels and can figure out the size of their
assigned project tasks from their own experience.
• Initial estimates from each team member are then discussed with other team
members in an open environment.
• Each person has his own estimate.
• These estimates are then unanimously condensed into final estimate figures for
each project task.
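A toy sketch of how the rounds converge (all numbers invented; in practice the convergence comes from discussion, not from an algorithm):

    # Wideband Delphi, schematically: estimates per round, discussion of
    # the spread, re-estimation until the team converges on a figure.
    rounds = [
        [40, 90, 60, 75],   # round 1: wide spread (person-hours for one task)
        [55, 70, 60, 65],   # round 2: after discussing assumptions
        [60, 62, 60, 61],   # round 3: close enough to agree
    ]
    for i, estimates in enumerate(rounds, 1):
        spread = max(estimates) - min(estimates)
        print(f"round {i}: estimates={estimates}, spread={spread} person-hours")
    # The final figure is the consensus reached in discussion, e.g. 60 hours.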
• In an experience-based technique, instead of group sessions, the test manager
meets each team member and asks him his estimate for the project work he has
been assigned.
• This technique works best when team members have prior experience of similar project tasks.
• Effort estimation is one area where no test manager can have a good grasp at
the initial stages of the project.
• This is because not many details are clear about the project.
• As the project unfolds, after executing some of its related tasks, things become
clearer.
Test Strategy and Planning
Effort Estimation
• At that stage, any test manager can comfortably give an effort estimate for the
remaining project tasks. But that is too late.
• Project stakeholders want to know at the very beginning of the project, what
would be the cost estimates and when the project would be delivered.
• These two questions are very important for project stakeholders and are at the top of their minds.
• Unfortunately, test managers are not equipped to provide an accurate schedule
and costs for the project at those initial stages because of unclear project scope,
size, etc.
• Nevertheless, it is one of their critical tasks that they have to finish and provide
the requested information.
• The best solution is to find a relatively objective method of effort estimation and
provide the requested information.
Test Strategy and Planning
Effort Estimation: Test Point Analysis
• There are many methods available for effort estimation for test projects.
• Some of them include test point analysis, the wideband Delphi technique,
experience-based estimation, etc.
• In the test point analysis technique, three inputs required are project size, test
strategy, and productivity.
• Project size is determined by calculating the number of test points in the
software application which is being developed.
• Test points, in turn, are calculated from function points.
• The number of function points is calculated from the number of functions and
function complexity.
• If the number of function points in the application has been calculated by the
development team, then test points are calculated from the available function
point information.
• Otherwise rough function point data can be used (Figure below - Test point
analysis components).
• A test strategy is derived from two pieces of information from the customer: what the quality level for the application will be, and which features of the application will be used most frequently.
Test Strategy and Planning
Effort Estimation: Test Point Analysis
• Productivity is derived from knowledge and experience of the test team
members.
• While productivity can be calculated objectively without taking reference from
any statistical data, it makes sense to use past productivity data from previously
executed projects to make productivity figures more realistic.
• In case of iterative development, testing cycles will be short and iterative in
nature.
• The test manager should make the test effort calculations accordingly.
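A simplified, hedged sketch of the arithmetic (the factor and productivity figure are invented; real test point analysis uses calibrated weights for functions, quality requirements, and usage intensity):

    # Simplified test point analysis: size from function points, adjusted
    # by the test strategy, divided by team productivity.
    function_points = 200          # from the development team, or a rough count
    strategy_factor = 1.2          # >1 for high quality targets / heavily used features
    productivity = 0.7             # test points completed per person-hour (from past projects)

    test_points = function_points * strategy_factor
    effort_hours = test_points / productivity
    print(f"{test_points:.0f} test points -> {effort_hours:.0f} person-hours")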
Test Project Monitoring and Control
Introduction
• Test projects involve a large variety of activities including test case design, test
case management, test case automation, test execution, defect tracking,
verifying and validating the application under test, etc. (see the figure below: Test life cycle).
• If the testing is done for an in-house software product, traditionally there used to be no immediate performance evaluation measurement for the test team.
• What really counted was the number of defects found in production when the software product was deployed and used by end users.
• But that is too late for a performance measurement.
Test Project Monitoring and Control
Defect Tracking
• What if many of the test team members left before the product was deployed?
• In fact this is a reality, given the high attrition rate (as much as 20% at many
corporations) of software professionals.
• Once they are gone, there is no point in measuring the performance.
• Thus, a better measurement would allow for more immediate results.
• This is achieved by measuring the defect count per hour per day.
• Then there is the case of outsourced test projects.
• If the contract is only for testing up to deployment and not afterward, then
measurement does not make sense after the contract has ended.
• A good defect tracking application should be deployed on a central server that is
accessible to all test and development teams.
• Each defect should be logged in such a way that it could be understood by both
development and testing teams.
• Generally, the defects should be reproducible, but in many instances, this is
difficult.
• In such instances, a good resolution should be made by the test and
development managers.
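As a toy illustration of the “more immediate” measurement mentioned above (record layout and numbers invented), defects logged in a central tracker can be summarized per tester per day:

    from collections import Counter

    # Hypothetical central defect log: (defect_id, found_by, date, reproducible).
    defect_log = [
        ("D-101", "asha", "2012-03-01", True),
        ("D-102", "asha", "2012-03-01", False),  # hard to reproduce: needs a resolution
        ("D-103", "ravi", "2012-03-02", True),
    ]

    # Immediate measurement: defects found per tester per day.
    rate = Counter((who, day) for _, who, day, _ in defect_log)
    for (who, day), n in sorted(rate.items()):
        print(f"{day} {who}: {n} defect(s)")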
Test Case Reporting
• During the execution of a test project, many initial and final reports are made.
• But status reports also need to be made.
• Test reports include test planning reports, test strategy reports, requirement
document review comments, number of test cases created, automation scripts
created, test execution cycle reports, defect tracking reports, etc.
• Some other reports include traceability matrix reports, defect density, test execution rate, test creation rate, test automation script writing rate, etc.
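Two of these metrics are simple ratios; a minimal sketch with invented numbers:

    # Defect density: defects found per unit of product size (often per KLOC).
    defects_found = 48
    size_kloc = 12.0
    defect_density = defects_found / size_kloc      # 4.0 defects/KLOC

    # Test execution rate: test cases executed per unit of effort.
    tests_executed = 300
    effort_days = 15
    execution_rate = tests_executed / effort_days   # 20 tests/day

    print(f"Defect density: {defect_density:.1f} defects/KLOC")
    print(f"Execution rate: {execution_rate:.0f} tests/day")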
REFERENCES
• Ashfaque Ahmed, Software Project Management: A Process-Driven Approach, Boca Raton, FL: CRC Press, 2012.
THANK YOU