
UNIT - IV

Software Testing

Disclaimer:
The lecture notes have been prepared by referring to a book. This document does not claim any originality and cannot be used as a
substitute for prescribed textbooks.
List of Topics
 Introduction to Testing
 Verification
 Validation
 Verification Vs Validation
 Test Strategy
 Planning
 Test Project Monitoring and Control
 Design of Master Test Plan
 Test Case Design
 Test Case Management
 Test Case Reporting
 Test Artifacts

Software Testing

• Testing is the process of exercising a program with the specific intent of finding errors
prior to delivery to the end user.
• Basic Definitions
• Errors
• An error is a mistake, misconception, or misunderstanding on the part of a software
developer.
• Faults (Defects)
• A fault (defect) is introduced into the software as the result of an error. It is an anomaly in
the software that may cause it to behave incorrectly, and not according to its specification.
• Failures
• A failure is the inability of a software system or component to perform its required
functions within specified performance requirements [2].

Software Testing
Introduction – Problems with Traditional Development Model
• Traditionally, software testing was done only after the software was constructed.
• This limited the scope of software testing in the development life cycle (see Figure below – Traditional Software Development Model: too little, too late testing).
• This practice led to testing that was too little and too late.
• By the time the software was constructed, faulty requirement specifications and faulty software design had already resulted in defect-ridden software.
Verification and Validation

• The source code is reviewed for dead code, unused variables, faulty logic,
constructs, etc.
• Once the source code is ready to be run as a system, validation testing can be
started.

Verification vs. Validation
Software testing is part of a broader group of activities called verification and validation (V&V) that are involved in software quality assurance.
• Verification (Are the algorithms coded correctly?)

– The set of activities (tasks) that ensure that software correctly implements a specific function or algorithm
Verification: “Are we building the product right?”

• Validation (Does it meet user requirements?)

– The set of activities that ensure that the software that has been built is traceable
to customer requirements
Validation: “Are we building the right product?”

• Verification and validation includes a wide array of SQA activities: technical reviews, quality and configuration audits, performance monitoring,
simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing,
acceptance testing, and installation testing

Software Testing Strategy – Big Picture

[Figure: Testing strategy “spiral”. Testing proceeds outward from Unit Testing through Integration Testing and Validation Testing to System Testing, while the underlying artifacts broaden from Code and Design to Requirements and System Engineering, i.e., from narrow and concrete to broad and abstract scope.]
Major Types of Testing

• Unit testing [white box]
– Concentrates on each component/function of the software as implemented in the source code
• Integration testing
– Focuses on the design and construction of the software architecture
• Validation testing
– Requirements are validated against the constructed software
• System testing
– The software and other system elements are tested as a whole
• Other sublevels of testing are also performed, such as:
– Functional Testing (after unit testing) – uses black-box testing techniques
– Regression Testing
– Smoke Testing (covered in detail in later slides)
Software Testing Steps

[Figure: software testing steps, proceeding from unit testing through integration testing and validation testing to system testing.]
Testing Strategy Applied to Conventional Applications

Unit testing
 Exercises specific paths in a component's control structure to ensure complete coverage and maximum error
detection
 Components are then assembled and integrated

Integration testing
 Focuses on inputs and outputs, and how well the components fit together and work together

Validation testing
 Provides final assurance that the software meets all functional, behavioral, and performance requirements

System testing
 Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall system
function and performance is achieved

Some Common Errors – Unit Testing
• Misunderstood or incorrect arithmetic precedence
• Mixed mode operations (e.g., int, float, char)
• Incorrect initialization of values
• Precision inaccuracy and round-off errors
• Incorrect symbolic representation of an expression (int vs. float)
• Failure to exit when divergent iteration is encountered
• Improperly modified loop variables
• Boundary value violations
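To make these error classes concrete, here is a minimal sketch of a unit test that probes boundary values, one of the error sources listed above. It uses Python’s built-in unittest module; the discount() function and its 0–100 percent range are invented for illustration, not taken from the source.

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount; percent must lie within [0, 100]."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

class DiscountBoundaryTests(unittest.TestCase):
    def test_lower_boundary(self):
        # 0% discount: price is unchanged
        self.assertEqual(discount(200.0, 0), 200.0)

    def test_upper_boundary(self):
        # 100% discount: price drops to zero
        self.assertEqual(discount(200.0, 100), 0.0)

    def test_just_outside_boundaries(self):
        # Values just outside the valid range must be rejected
        for bad in (-0.01, 100.01):
            with self.assertRaises(ValueError):
                discount(200.0, bad)

if __name__ == "__main__":
    unittest.main()
```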

Integration Testing

• Defined as a systematic technique for constructing the software architecture
– At the same time integration is occurring, conduct tests to uncover errors associated with interfaces
• Objective is to take unit-tested modules and build a program structure based on the prescribed design
• Two Approaches
– Non-incremental Integration Testing
– Incremental Integration Testing
Non-incremental Testing

• Commonly called the “Big Bang” approach
• All components are combined in advance
• The entire program is tested as a whole
• Chaos usually results
• Many seemingly unrelated errors are encountered
• Correction is difficult because isolation of causes is complicated
• Once one set of errors is corrected, more errors surface, and testing appears to enter an endless loop

Incremental testing

• Three kinds
• Top-down integration
• Bottom-up integration
• Sandwich integration
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied

Top-down Integration Testing
 Modules are integrated by moving downward through the control hierarchy, beginning with the main module.
 The control program is tested first; modules are integrated one at a time, with the emphasis on interface testing.
 Subordinate modules are incorporated in either a depth-first or breadth-first fashion:
– Depth-first (DF): all modules on a major control path are integrated
– Breadth-first (BF): all modules directly subordinate at each level are integrated

Top-Down Integration
• Advantages

• This approach verifies major control or decision points early in the test process
• No test drivers are needed
• Interface errors are discovered early
• Modular features aid debugging

• Disadvantages

• Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded
• Because stubs are used to replace lower-level modules, no significant data flow can occur until much later in the integration/testing process
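To make the role of stubs concrete, the hedged sketch below integrates a top-level module while its not-yet-built lower-level module is replaced by a stub returning canned data. All class and method names here are hypothetical illustrations, not taken from the source.

```python
class DataAccessStub:
    """Stand-in for an unbuilt lower-level data module: returns canned
    records so the top-level control logic can be integrated and tested
    first. This throwaway code is discarded once the real module exists."""

    def fetch_orders(self, customer_id: int) -> list:
        return [{"id": 1, "total": 99.50}, {"id": 2, "total": 20.00}]

class ReportModule:
    """Top-level module under test; depends on a data-access layer."""

    def __init__(self, data_access):
        self.data_access = data_access

    def total_spent(self, customer_id: int) -> float:
        orders = self.data_access.fetch_orders(customer_id)
        return sum(order["total"] for order in orders)

# Top-down integration step: real ReportModule wired to the stub.
report = ReportModule(DataAccessStub())
assert report.total_spent(customer_id=42) == 119.50
```

Until the real data module replaces the stub, no significant data flows through the system, which is exactly the disadvantage noted above.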

Bottom-up Integration

• Integration and testing start with the most atomic modules in the control hierarchy
• Allows early testing aimed at proving feasibility, with emphasis on module functionality and performance

Bottom-up Integration
• Advantages
– This approach verifies low-level data processing early in the testing process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this code is
later discarded or expanded into a full-featured version
– Drivers inherently do not contain the complete algorithms that will eventually
use the services of the lower-level modules; consequently, testing may be
incomplete or more testing may be needed later when the upper level
modules are available
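As a counterpart to the stub example, here is a hedged sketch of a throwaway driver: the upper-level modules do not yet exist, so a driver feeds inputs to the atomic module and checks its outputs. The parse_amount() function is a hypothetical example, not from the source.

```python
def parse_amount(text: str) -> float:
    """Low-level, atomic module under test."""
    return float(text.strip().replace(",", ""))

def driver() -> None:
    """Throwaway driver standing in for the unbuilt upper-level modules:
    feeds representative inputs to the low-level module, checks outputs."""
    cases = {"1,234.50": 1234.50, "  7 ": 7.0}
    for raw, expected in cases.items():
        actual = parse_amount(raw)
        assert actual == expected, f"{raw!r}: got {actual}, want {expected}"
    print("bottom-up driver: all cases passed")

if __name__ == "__main__":
    driver()
```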

Regression Testing

• Each new addition or change to baselined software may cause problems with functions that previously worked flawlessly
• Regression testing re-executes a small subset of tests that have already been conducted
• A regression test suite contains three different classes of test cases: a representative sample of tests that exercise all software functions, additional tests that focus on functions likely to be affected by the change, and tests that focus on the changed components themselves
• Regression testing helps to ensure that changes do not introduce unintended behavior or additional errors
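One common way to keep such a subset re-executable is to tag regression cases with a marker and run only those after each change. The sketch below uses pytest’s marker mechanism (a real pytest feature); the test and the defect it guards are hypothetical, and in a real project the marker would be registered in pytest.ini to silence warnings.

```python
import pytest

@pytest.mark.regression
def test_previously_fixed_discount_bug():
    # Guards a defect fixed in an earlier release: a 100% discount
    # must yield a zero price, not a negative one.
    assert 200.0 * (1 - 100 / 100) == 0.0

def test_new_feature_smoke():
    # Not part of the regression subset; runs only in the full suite.
    assert "login" in "login page"
```

Running `pytest -m regression` then re-executes only the tagged subset rather than the entire suite.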

Acceptance Testing – Alpha and Beta Testing

• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that
cannot be controlled by the developer
– The end-user records all problems that are encountered and reports these
to the developers at regular intervals
• After beta testing is complete, software engineers make software
modifications and prepare for release of the software product to
the entire customer base

Acceptance Testing

• A variation on beta testing, called customer acceptance testing, is sometimes performed when custom software is delivered to a customer under contract.
• The customer performs a series of specific tests in an attempt to uncover errors before accepting the software from the developer.
• In some cases (e.g., a major corporate or governmental system), acceptance testing can be very formal and encompass many days or even weeks of testing.

System testing

• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
• Deployment Testing

Test Strategy and Planning
Introduction
• Software testing is a vast field in itself, and so the common practice is to
consider it as a separate project.
• In those cases, it is known as an independent verification and validation project.
• As such, a separate project plan is made for that project and is linked to the
parent software development project.
• There are many techniques available to execute software test projects.
• It depends on the kind of test project. However, most test projects must have a
test plan and a test strategy before the project can be ready for execution.
• Often due to time constraints, testing cycles are cut short by project managers.
• This leads to a half-tested product that is pushed out of the door.
• In such cases, a large number of product defects are left undetected.
• Ultimately, end users discover these defects. Fixing these defects at this stage is
costly.
• Moreover, they cannot be fixed one at a time.
• They are to be taken in batches and are incorporated in maintenance project
plans.
• This leads to excessive costs in maintaining the software.
Test Strategy and Planning
• It is a lot cheaper to trap those bugs during the testing cycle and fix them.
• It is appropriately said that “testing costs money, but not testing costs more!”
• Test strategies should include things like test prioritization, automation strategy,
risk analysis, etc.
• Test planning should include a work breakdown structure, requirement review,
resource allocation, effort estimation, tools selection, setting up communication
channels, etc.

Test Prioritization
• Even before the test effort actually starts, it is of utmost importance that test prioritization be done.
• First of all, all parts of the software product will not be used by end users with
the same intensity.
• Some parts of the product are used by end users extensively, while other parts
are seldom used.
• So the extensively used parts of the product should not have any defects at all
and thus they need to be tested thoroughly.
• Test Case
• A test case, in a practical sense, is a test-related item which contains the following information:
1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.
2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be produced by the code under test.

• Test
• A test is a group of related test cases, or a group of related test cases and test procedures.

• Test Bed
• A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.
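The three-part structure of a test case maps naturally onto a small record type. The sketch below is a hedged illustration; the field values (user IDs, database condition) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    test_inputs: dict            # 1. data items from an external source
    execution_conditions: list   # 2. e.g., a required database state
    expected_outputs: dict       # 3. specified results from the code under test

tc = TestCaseRecord(
    test_inputs={"user_id": "u123", "password": "s3cret"},
    execution_conditions=["user u123 exists in the test database"],
    expected_outputs={"login": "success"},
)
print(tc)
```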

How to Write Test Cases: Sample Template with Examples

• A test case is a set of actions executed to verify a particular feature or functionality of your software application. A test case contains test steps, test data, preconditions, and postconditions developed for a specific test scenario to verify a requirement. The test case includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.
Test Scenario vs. Test Case

• Test scenarios are rather vague and cover a wide range of possibilities; testing is all about being very specific.
• For the test scenario “Check Login Functionality”, many test cases are possible, for example:
 Test Case 1: Check results on entering a valid User ID & Password
 Test Case 2: Check results on entering an invalid User ID & Password
 Test Case 3: Check the response when the User ID is empty & the Login button is pressed, and many more
• Test cases have a few integral parts (fields) that should always be present. Every test case can be broken down into 8 basic steps.
• Step 1: Test Case ID
• Test cases should all bear unique IDs to represent them. In most cases, following a convention for this naming ID helps with
organization, clarity, and understanding.
• Step 2: Test Description
• This description should detail what unit, feature, or function is being tested or what is being verified.
• Step 3: Assumptions and Pre-Conditions
• This entails any conditions to be met before test case execution. One example would be requiring a valid Outlook account for a
login.
• Step 4: Test Data
• This relates to the variables and their values in the test case. In the example of an email login, it would be the username and
password for the account.
• Step 5: Steps to be Executed
• These should be easily repeatable steps as executed from the end user’s perspective. For instance, a test case for logging into
an email server might include these steps:
• Open email server web page.
• Enter username.
• Enter password.
• Click “Enter” or “Login” button.
• Step 6: Expected Result
• This indicates the result expected after the test case step execution. Upon entering the right login information, the expected
result would be a successful login.
• Step 7: Actual Result and Post-Conditions
• As compared to the expected result, we can determine the status of the test case. In the case of the email login, the user
would either be successfully logged in or not. The post-condition is what happens as a result of the step execution such as
being redirected to the email inbox.
• Step 8: Pass/Fail
• Determining the pass/fail status depends on how the expected result and the actual result compare to each other.
• Same result = Pass
Different results = Fail
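The eight fields above can also drive an automated check. The hedged sketch below maps the email-login example onto a Selenium test; Selenium and the API calls shown are real, but the URL and element IDs (username, password, login) are hypothetical placeholders for whatever the application actually uses.

```python
# TC_LOGIN_001: valid credentials lead to the inbox (illustrative only).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_tc_login_001():
    driver = webdriver.Chrome()
    try:
        # Step 5: steps to be executed, from the end user's perspective
        driver.get("https://mail.example.com/login")          # open login page
        driver.find_element(By.ID, "username").send_keys("monica@example.com")
        driver.find_element(By.ID, "password").send_keys("valid-password")
        driver.find_element(By.ID, "login").click()
        # Steps 6-8: expected result (redirect to inbox) vs. actual result
        assert "inbox" in driver.current_url.lower()          # pass/fail
    finally:
        driver.quit()
```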
Test Strategy and Planning
Test Prioritization
• To make such a strategy, you must prioritize your testing.
• Put a high priority on tests for the critical parts of the software product and a low priority on non-critical parts.
• Then test the high-priority areas first.
• Once testing is thoroughly done for these parts, you should start testing the low-priority areas.
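As a hedged illustration of this strategy, the sketch below orders test areas by a simple usage-times-impact score; the module names and scores are invented, and a real project would derive them from usage data and risk analysis.

```python
# Rank test areas so heavily used, high-impact parts are tested first.
areas = [
    {"name": "checkout", "usage": 9,  "impact": 10},
    {"name": "login",    "usage": 10, "impact": 8},
    {"name": "settings", "usage": 2,  "impact": 3},
]
for area in sorted(areas, key=lambda a: a["usage"] * a["impact"], reverse=True):
    print(f"{area['name']:10s} priority score = {area['usage'] * area['impact']}")
```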

Risk Management
• The test manager should also plan for all known risks that could impact the test project.
• If proper risk mitigation planning is not done and a mishap occurs, then the test
project schedule could be jeopardized, costs could escalate and/or quality could
go down.
• Some of the risks that can have severe, adverse impact on a test project include
an unrealistic schedule, resource unavailability, skill unavailability, frequent
requirement changes, etc.
Test Strategy and Planning
Risk Management
• Requirement changes pose a serious threat to the testing effort because for each requirement change, the whole test plan gets changed.
• The test team has to revise its schedule for the additional work as well as assess the impact of the change on the test cases it has to re-create.
• Some enthusiastic test engineers estimate much less effort than is actually required.
• In that case, the test manager would be in trouble trying to explain why testing is taking more than the scheduled time.
• In such cases, even after loading test engineers at more than 150%, the testing cycle gets delayed.
• This is a very common situation on most of the test projects.
• This also happens because the marketing team agrees on unrealistic schedules
with the customer in order to bag the project.
• Even the test manager at that time feels that somehow he will manage it, but
later on it proves impossible to achieve.
• Other test engineers unnecessarily pad their estimate and later on, when the
customer detects it, the test manager finds himself in a spot.
Test Strategy and Planning
Risk Management
• When the software development market, along with the software testing
market, is hot (this is the case most of the time, as businesses need to
implement software systems more and more and so software professionals are
in great demand), software professionals have many job offers in hand.
• They leave the project at short notice and the test manager has to find a
replacement fast.
• Sometimes, a project may have some kind of testing for which skilled test
professionals are hard to find.
• In both situations, the test manager may not be able to start those tasks for lack of adequate resources.
• For test professional resources, good alternative resource planning is required.
• The test manager should, in consultation with the human resources manager, keep a line of test professionals who can join in case one is needed on the project.
• For scheduling problems, the test manager has to ensure in advance that
schedules do not get affected.
• He has to keep a buffer in the schedule for any eventuality.
Test Strategy and Planning
Risk Management
• To keep a tab on the project budget, the test manager has to ensure that the
schedule is not unrealistic and also has to load his test engineers appropriately.
• If some test engineers are not loaded adequately, then project costs may go
higher.
• For this reason, if any test professionals do not have enough assignments on
one project, they should be assigned work from other projects.

Effort Estimation
• To make the schedule, resource plan, and budget for a test project, the test manager should make a good effort estimate.
• Effort estimate should include information such as project size, productivity, and
test strategy.
• While project size and test strategy information comes after consultation with
the customer, the productivity figure comes from experience and knowledge of
the team members of the project team.
• The wideband Delphi technique uses brainstorming sessions to arrive at effort
estimate figures after discussing the project details with the project team.
Test Strategy and Planning
Effort Estimation
• This is a good technique because the people who will be assigned the project
work will know their own productivity levels and can figure out the size of their
assigned project tasks from their own experience.
• Initial estimates from each team member are then discussed with other team
members in an open environment.
• Each person has his own estimate.
• These estimates are then condensed, by consensus, into final estimate figures for each project task.
• In an experience-based technique, instead of group sessions, the test manager meets each team member and asks for an estimate for the project work that member has been assigned.
• This technique works best when team members have prior experience of similar project tasks.
• Effort estimation is one area where no test manager can have a good grasp at
the initial stages of the project.
• This is because not many details are clear about the project.
• As the project unfolds, after executing some of its related tasks, things become
clearer.
Test Strategy and Planning
Effort Estimation
• At that stage, any test manager can comfortably give an effort estimate for the
remaining project tasks. But that is too late.
• Project stakeholders want to know at the very beginning of the project, what
would be the cost estimates and when the project would be delivered.
• These two questions are very important for project stakeholders and it is on the
top of their mind.
• Unfortunately, test managers are not equipped to provide an accurate schedule
and costs for the project at those initial stages because of unclear project scope,
size, etc.
• Nevertheless, it is one of their critical tasks that they have to finish and provide
the requested information.
• The best solution is to find a relatively objective method of effort estimation and
provide the requested information.
Test Strategy and Planning
Effort Estimation: Test Point Analysis
• There are many methods available for effort estimation for test projects.
• Some of them include test point analysis, the wideband Delphi technique,
experience-based estimation, etc.
• In the test point analysis technique, three inputs required are project size, test
strategy, and productivity.
• Project size is determined by calculating the number of test points in the
software application which is being developed.
• Test points, in turn, are calculated from function points.
• The number of function points is calculated from the number of functions and
function complexity.
• If the number of function points in the application has been calculated by the
development team, then test points are calculated from the available function
point information.
• Otherwise rough function point data can be used (Figure below - Test point
analysis components).
• A test strategy is derived from two pieces of information from the customer: what the required quality level for the application is, and which features of the application will be used most frequently.
Test Strategy and Planning
Effort Estimation: Test Point Analysis
• Productivity is derived from knowledge and experience of the test team
members.
• While productivity can be estimated without reference to any statistical data, it makes sense to use productivity data from previously executed projects to make the figures more realistic.
• In case of iterative development, testing cycles will be short and iterative in
nature.
• The test manager should make the test effort calculations accordingly.
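A deliberately simplified, hedged sketch of the arithmetic follows. The weighting factors and productivity figure are invented for illustration; a real test point analysis uses detailed weighting tables rather than two flat multipliers.

```python
# Test point analysis, greatly simplified: size x strategy -> test points,
# then test points x productivity -> effort.
function_points = 120      # from the development team, or a rough count
quality_weight = 1.2       # strategy input: higher required quality
usage_weight = 1.1         # strategy input: heavily used features

test_points = function_points * quality_weight * usage_weight  # project size
hours_per_test_point = 1.5  # productivity, from previously executed projects

effort_hours = test_points * hours_per_test_point
print(f"{test_points:.0f} test points -> {effort_hours:.0f} hours of test effort")
# prints: 158 test points -> 238 hours of test effort
```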
Test Project Monitoring and Control
Introduction
• Test projects involve a large variety of activities, including test case design, test case management, test case automation, test execution, defect tracking, verifying and validating the application under test, etc. (Figure below – Test life cycle).

Test Case Design


• A proper test case design plan goes a long way in ensuring that test cases are
designed properly.
• The test manager has to decide which kinds of tests are to be designed, how many test cases have to be written for particular modules, and which test areas are priority areas.
Test Project Monitoring and Control
Test Case Design: Test Types
• An application may have to be tested for functionality, performance, usability,
compatibility and many other kinds of things to make sure it is really useful for
end users.
• For each kind of testing, a set of test cases has to be written and executed; finally, the system should be verified and validated.
• For applications that have many versions, regression tests also have to be
performed.
• Managing all these kinds of testing is a big task for the test manager.
• A good test manager will first divide the testing tasks on the basis of test types.
• Then tasks can be further divided by modules.
• After that, he can allocate testing tasks to test engineers appropriately.
• There is one more way of segregating tests.
• Depending on the project phase, we need to perform system testing, integration
testing or user acceptance testing.
• Usually when the application is built after the construction phase, it has to be
tested and verified whether it is functioning as per requirements.
Test Project Monitoring and Control
Test Case Design: Test Types
• Integration testing is performed when the application needs to be integrated
with any other external application to ensure that integration is proper.
• User acceptance testing is done by end users.
• If any defect is found during these tests, they are fixed so that the application
goes into production with as few defects as possible.

Test Case Management


• There could be existing test cases as well as new test cases that also need to be
created.
• Test case management involves managing different versions of test cases,
keeping track of changes in them, keeping a separate repository of test cases
based on type of tests, as well as creating and managing automation scripts.
Test Project Monitoring and Control
Test Bed Preparation
• Test bed preparation involves installing the application on a machine that is
accessible to all test teams.
• Care is taken to ensure that this machine is free of any interference from
unauthorized access.
• Test data is populated in the application.
• Care should also be taken to ensure that the test bed resembles the production
environment as closely as possible, including all software and hardware
configurations.
• For all types of testing, it is very important that the application under test (AUT) be tested in an environment that is as close as possible to the environment in which the application will be deployed in production.
• That is why test bed preparation is very important.
• The application should be installed on a dedicated server that has the same
configuration as the proposed production environment.
• This server should not be used for any other purpose except for testing.
Test Project Monitoring and Control
Test Bed Preparation
• It should be installed centrally, so that even distributed teams, contractors, or service providers can easily access it using remote desktop sharing or a peer-to-peer networking protocol over the Internet.
• If the application can be directly accessed over the Internet, that is even better.
• No testing should be done on applications deployed on a test engineer’s local machine.
• For gaining familiarity with the application and for preliminary testing, it is acceptable to have a local copy of the application, but never for testing in which defects are to be logged and verified by many people.
• This is because it is very important to be able to reproduce a defect when the developer or any concerned person asks for it.
• In case of disputes, if a defect cannot be reproduced, it becomes difficult for the test team to justify why the defect was logged when others cannot reproduce it.
• That is the reason the test bed should be prepared very carefully and kept as isolated from other environments as possible to preserve its integrity.
Test Project Monitoring and Control
Test Bed Preparation
• The test data preparation is also a very tricky affair.
• The test data should closely resemble what the end users use in their daily
transactions.
• For this, the test team can get some business data already used by the end
users.
• The test bed should be populated with a similar kind of data.
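One hedged way to populate a test bed with production-like data is to take a sample of real business records and mask the identifying fields before loading them, preserving the shape of the data without exposing user details. The record fields below are hypothetical; the source only says the data should closely resemble end users’ daily transactions.

```python
import hashlib

def anonymize(record: dict) -> dict:
    """Replace identifying fields with stable pseudonyms while keeping
    the shape and distribution of the real business data."""
    masked = dict(record)
    for field in ("name", "email"):
        digest = hashlib.sha256(record[field].encode()).hexdigest()[:8]
        masked[field] = f"user_{digest}"
    return masked

production_sample = [
    {"name": "A. Rao", "email": "a.rao@example.com", "order_total": 1250.00},
]
test_bed_data = [anonymize(r) for r in production_sample]
print(test_bed_data)
```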

Test Case Execution


• Test case execution involves executing the prepared test cases, either manually or with automation tools.
• For regression tests, automated test execution is a preferred method.
• After each test case is executed, it may pass or fail.
• If it fails then defects have to be logged.
• Exit criteria for test case execution cycle are generally defined in advance.
• Generally, when a certain level of quality of the application is reached then test
execution stops.
Test Project Monitoring and Control
Defect Tracking
• Defect tracking is one of the most important activities in a test project.
• During defect tracking it is ensured that defects are logged and get fixed.
• All defects and their fixing are tracked carefully (Figure below – Defect life cycle).
• Defect count per hour per day is a common way of measuring performance of a
test team.

• Traditionally, if the testing was done for an in-house software product, this was not used as a performance evaluation measurement.
• What really counted was the number of defects found in production when the software product was deployed and used by end users.
• But that is too late for a performance measurement.
Test Project Monitoring and Control
Defect Tracking
• What if many of the test team members left before the product was deployed?
• In fact this is a reality, given the high attrition rate (as much as 20% at many
corporations) of software professionals.
• Once they are gone, there is no point in measuring the performance.
• Thus, a better measurement would allow for more immediate results.
• This is achieved by measuring the defect count per hour per day.
• Then there is the case of outsourced test projects.
• If the contract is only for testing up to deployment and not afterward, then
measurement does not make sense after the contract has ended.
• A good defect tracking application should be deployed on a central server that is
accessible to all test and development teams.
• Each defect should be logged in such a way that it could be understood by both
development and testing teams.
• Generally, the defects should be reproducible, but in many instances, this is
difficult.
• In such instances, a good resolution should be made by the test and
development managers.
Test Case Reporting
• During the execution of a test project, many initial and final reports are made, and status reports also need to be made.
• Test reports include test planning reports, test strategy reports, requirement document review comments, number of test cases created, automation scripts created, test execution cycle reports, defect tracking reports, etc.
• Some other reports include traceability matrix reports, defect density, test execution rate, test creation rate, test automation script writing rate, etc.
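Two of the metrics named above reduce to simple ratios. The hedged sketch below uses the usual textbook definitions (defects per unit size, tests executed per unit time); all numbers are invented for illustration.

```python
# Defect density and test execution rate from raw counts.
defects_found = 46
size_kloc = 11.5                  # size of the code under test, in KLOC
defect_density = defects_found / size_kloc             # defects per KLOC

tests_executed = 320
execution_days = 8
test_execution_rate = tests_executed / execution_days  # tests per day

print(f"defect density: {defect_density:.1f} defects/KLOC")
print(f"test execution rate: {test_execution_rate:.0f} tests/day")
```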
REFERENCES
• Ashfaque Ahmed, Software Project Management: A Process-Driven Approach. Boca Raton, FL: CRC Press, 2012.
THANK YOU
