New Manual Notes

Software Testing:

Software Testing is the process of executing a program or application with the intent of finding
software bugs. It can also be stated as the process of validating and verifying that a software
program, application or product meets the business and technical requirements that guided its
design and development, works as expected, and can be implemented with the same
characteristics.

Software Quality:

Quality software means software that is reasonably bug- or defect-free, delivered on time and
within budget, meets requirements and/or expectations, and is maintainable.


Key aspects of quality for the customer include:

 Good design – looks and style.


 Good functionality – it does the job well.
 Reliable – acceptable level of breakdowns or failure.
 Consistency.
 Durable – lasts as long as it should.
 Good after sales service.
 Value for money.

The definition of software testing can be broken down into the following parts:


1.  Process:  Testing is a process rather than a single activity.
2.  All Life Cycle Activities: Testing is a process that takes place throughout the Software
Development Life Cycle (SDLC). Designing tests early in the life cycle can help to prevent
defects from being introduced into the code. The test basis includes documents such as the
requirements and design specifications.
3.  Static Testing:  It can test and find defects without executing code. Static testing is done
during the verification process. It includes reviewing documents (including source code) and
static analysis. It is a useful and cost-effective way of testing. For example: reviews,
walkthroughs, inspections, etc.
4.  Dynamic Testing:  In dynamic testing the software code is executed to demonstrate the
results of running tests. It is done during the validation process. For example: unit testing,
integration testing, system testing, etc.
5. Planning:  We need to plan what we want to do. We control the test activities, report on
testing progress and the status of the software under test.
6.  Preparation:  We need to choose what testing we will do, by selecting test conditions
and designing test cases.
7.  Evaluation:  During evaluation we must check the results and evaluate the software under
test and the completion criteria, which helps us to decide whether we have finished testing and
whether the software product has passed the tests.
8.  Software products and related work products:  Along with testing the code, testing the
requirement and design specifications, as well as related documents such as operation, user and
training material, is equally important.

Main purpose of Software testing:


Software testing has three main purposes: Verification, validation, and defect finding.
The verification process confirms that the software meets its technical specifications.
The validation process confirms that the software meets the business requirements.
A defect is a variance between the expected and actual result. The defect’s ultimate source may
be traced to a fault introduced in the specification, design, or development (coding) phases.
Why do software testing?
 Does it really work as expected?
 Does it meet the users’ requirements?
 Is it what the users expect?
 Do the users like it?
 Is it compatible with our other systems?
 How does it perform?
 How does it scale when more users are added?
 Which areas need more work?
 Is it ready for release?
What can we do with the answers to these questions?
 Save time and money by identifying defects early
 Avoid or reduce development downtime
 Provide better customer service by building a better application
 Know that we’ve satisfied our users’ requirements
 Build a list of desired modifications and enhancements for later versions
 Identify and catalog reusable modules and components
 Identify areas where programmers and developers need training

What to Test?
First, test what’s important. Focus on the core functionality—the parts that are critical or popular
—before looking at the ‘nice to have’ features. Concentrate on the application’s capabilities in
common usage situations before going on to unlikely situations. For example, if the application
retrieves data and performance is important, test reasonable queries with a normal load on the
server before going on to unlikely ones at peak usage times. It’s worth saying again: focus on
what’s important. Good business requirements will tell you what’s important.

Who Does Testing?


Software testing is not a one person job. It takes a team, but the team may be larger or smaller
depending on the size and complexity of the application being tested. Testers must be cautious,
curious, critical but non-judgmental, and good communicators.
 How well does it work?
 What does it mean to you that “it works”?
 How do you know it works?
 What evidence do you have?
 In what ways could it seem to work but still have something wrong?
 In what ways could it seem to not work but really be working?
 What might cause it not to work well? A good developer does not necessarily make a
good tester.

Why is testing necessary?


Testing is necessary because we all make mistakes. Some of those mistakes are unimportant,
but some of them are expensive or dangerous. We need to check everything and anything we
produce, because things can always go wrong – humans make mistakes all the time.
Since we assume that our work may have mistakes, we all need to check our own work.
However, some mistakes come from bad assumptions and blind spots, so we might make the
same mistakes when we check our own work as we made when we did it, and so fail to notice
the flaws in what we have done.

What are the testing objectives and purposes?


Testing has different goals and objectives:
 Finding defects.
 Gaining confidence in and providing information about the level of quality.
 Preventing defects.

What are the Software Development Life Cycle phases?


There are various software development approaches that are used during the development
process of software; these approaches are also referred to as “Software Development Process
Models” (e.g. Waterfall model, incremental model, V-model, iterative model, etc.).
There are following six phases in every Software development life cycle model:
1. Requirement gathering and analysis.
2. Design.
3. Implementation or coding.
4. Testing.
5. Deployment.
6. Maintenance.
1) Requirement gathering and analysis: 
The business requirements are gathered in this phase. This phase is the main focus of the
project managers and stakeholders. Meetings with managers, stakeholders and users are held
in order to determine requirements such as: Who is going to use the system? How will they use
the system? What data should be input into the system? What data should be output by the
system?
2)  Design: 
In this phase, the system and software design is prepared from the requirement specifications
which were studied in the first phase. System Design helps in specifying hardware and system
requirements and also helps in defining overall system architecture.
3)  Implementation / Coding: 
On receiving the system design documents, the work is divided into modules/units and actual
coding is started. Since the code is produced in this phase, it is the main focus for the developer.
This is the longest phase of the software development life cycle.
4)  Testing: 
After the code is developed it is tested against the requirements to make sure that the product is
actually solving the needs addressed and gathered during the requirements phase. During this
phase unit testing, integration testing, system testing, acceptance testing are done.
5)  Deployment: 
After successful testing the product is delivered / deployed to the customer for their use.
6) Maintenance:
Once the customers start using the developed system, the actual problems come up and need
to be solved from time to time. This process, in which the developed product is cared for, is
known as maintenance.

Software Development Models?

The Software Development Models are as follows:


 Waterfall model.
 V model.
 Incremental model.
 RAD model.
 Agile model.
 Iterative model.
 Spiral model.

What is Waterfall model? What are the advantages, disadvantages, and when to use Waterfall model?
The Waterfall Model is the first Process Model to be introduced. It is also referred to as a linear
sequential life cycle model.  It is very simple to understand and use.  In a waterfall model,
each phase must be completed fully before the next phase can begin.   At the end of each phase,
a review takes place to determine if the project is on the right path and whether or not to
continue or discard the project. In waterfall model phases do not overlap.
Diagram of Waterfall-model: 

Advantages of waterfall model:


 Simple and easy to understand and use.
 Easy to manage due to the rigidity of the model – each phase has specific deliverables
and a review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.
Disadvantages of waterfall model:
 Once an application is in the testing stage, it is very difficult to go back and change
something that was not well thought out in the concept stage.
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for the projects where requirements are at a moderate to high risk of
changing.
When to use the waterfall model?
 If requirements are very well known, clear and fixed.
 If product definition is stable.
 If technology is understood.
 If there are no ambiguous requirements.
 If ample resources with required expertise are available freely.
 If the project is short.

What is Incremental model? What are the advantages, disadvantages, and when to use Incremental model?

In the incremental model, the whole requirement is divided into various builds. Multiple
development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are
divided up into smaller, more easily managed modules. Each module passes through the
requirements, design, implementation and testing phases. A working version of the software is
produced during the first module, so you have working software early on in the software life
cycle. Each subsequent release of the module adds function to the previous release. The process
continues till the complete system is achieved.

Diagram of Incremental model:

Advantages of Incremental model:


 Generates working software quickly and early during the software life cycle.
 More flexible – less costly to change scope and requirements.
 Easier to test and debug during a smaller iteration.
 Customer can respond to each build.
 Lowers initial delivery cost.
 Easier to manage risk because risky pieces are identified and handled during each
iteration.
Disadvantages of Incremental model:
 Needs good planning and design.
 Needs a clear and complete definition of the whole system before it can be broken down
and built incrementally.
 Total cost is higher than waterfall.
When to use the Incremental model?
 Requirements of the complete system are clearly defined and understood.
 Major requirements must be defined; however, some details can evolve with time.
 There is a need to get a product to the market early.
 A new technology is being used.
 Resources with needed skill set are not available.
 There are some high risk features and goals.

What is V-model? What are the advantages, disadvantages, and when to use V-model?
V- Model means Verification and Validation model. Just like the waterfall model, the V-Shaped life
cycle is a sequential path of execution of processes. Each phase must be completed before the
next phase begins.  Testing of the product is planned in parallel with a corresponding phase of
development.
Diagram of V-model:

The various phases of the V-model are as follows:

Requirements:
Requirements like the BRS and SRS begin the life cycle model, just as in the waterfall model.
But in this model, before development is started, a system test plan is created. The test plan
focuses on meeting the functionality specified in the requirements gathering.
The high-level design (HLD):
This phase focuses on system architecture and design. It provides an overview of the solution,
platform, system, product and service/process. An integration test plan is also created in this
phase in order to test the ability of the pieces of the software system to work together.
The low-level design (LLD):
This phase is where the actual software components are designed. It defines the actual logic for
each and every component of the system. The class diagram, with all the methods and the
relations between classes, comes under LLD. Component tests are also created in this phase.
The implementation:
This phase is, again, where all coding takes place. Once coding is complete, the path of execution
continues up the right side of the V, where the test plans developed earlier are now put to use.
Coding:
This is at the bottom of the V-Shape model. Module design is converted into code by developers.
Advantages of V-model:
 Simple and easy to use.
 Testing activities like planning and test design happen well before coding. This saves a
lot of time, and hence there is a higher chance of success than with the waterfall model.
 Proactive defect tracking – that is, defects are found at an early stage.
 Avoids the downward flow of the defects.
 Works well for small projects where requirements are easily understood.
Disadvantages of V-model:
 Very rigid and least flexible.
 Software is developed during the implementation phase, so no early prototypes of the
software are produced.
 If any changes happen midway, then the test documents, along with the requirement
documents, have to be updated.
When to use the V-model:
 The V-shaped model should be used for small to medium sized projects where requirements
are clearly defined and fixed.
 The V-Shaped model should be chosen when ample technical resources are available with
needed technical expertise.
 High confidence of customer is required for choosing the V-Shaped model approach. Since,
no prototypes are produced, there is a very high risk involved in meeting customer
expectations.

What is Spiral model – advantages, disadvantages and when to use it?

The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation.
A software project repeatedly passes through these phases in iterations (called spirals in this
model).

Diagram of Spiral model:

Disadvantages of Spiral model:


 Can be a costly model to use.
 Risk analysis requires highly specific expertise.
 Project’s success is highly dependent on the risk analysis phase.
 Doesn’t work well for smaller projects.

When to use Spiral model:


 When costs and risk evaluation is important.
 For medium to high-risk projects.
 Long-term project commitment unwise because of potential changes to
economic priorities.
 Users are unsure of their needs.
 Requirements are complex.
 New product line.
 Significant changes are expected (research and exploration).

What is Agile model – advantages, disadvantages and when to use it?


Agile development model is also a type of Incremental model. Software is developed in
incremental, rapid cycles. This results in small incremental releases with each release building on
previous functionality. Each release is thoroughly tested to ensure software quality is maintained.
It is used for time-critical applications. Extreme Programming (XP) is currently one of the most
well-known agile development life cycle models.

Diagram of Agile model:

Advantages of Agile model:


 Customer satisfaction by rapid, continuous delivery of useful software.
 People and interactions are emphasized rather than process and tools. Customers,
developers and testers constantly interact with each other.
 Working software is delivered frequently (weeks rather than months).
 Face-to-face conversation is the best form of communication.
 Close and daily cooperation between business people and developers.
 Continuous attention to technical excellence and good design.
 Regular adaptation to changing circumstances.
 Even late changes in requirements are welcomed.

Disadvantages of Agile model:



 In case of some software deliverables, especially the large ones, it is difficult to assess
the effort required at the beginning of the software development life cycle.
 There is lack of emphasis on necessary designing and documentation.
 The project can easily get taken off track if the customer representative is not clear
about what final outcome they want.
 Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it has no place for newbie programmers, unless combined
with experienced resources.
When to use agile model?
 When new changes are needed to be implemented. The freedom agile gives to change is
very important. New changes can be implemented at very little cost because of the
frequency of new increments that are produced.
 To implement a new feature the developers need to lose only the work of a few days, or
even only hours, to roll back and implement it.
 Unlike the waterfall model in agile model very limited planning is required to get started
with the project. Agile assumes that the end users’ needs are ever changing in a
dynamic business and IT world.

What are Principles of testing?


There are seven principles of testing. They are as follows:
1) Testing shows presence of defects: Testing can show the defects are present, but cannot
prove that there are no defects. Even after testing the application or product thoroughly we
cannot say that the product is 100% defect free. Testing always reduces the number of
undiscovered defects remaining in the software but even if no defects are found, it is not a proof
of correctness.
2) Exhaustive testing is impossible: Testing everything including all combinations of inputs
and preconditions is not possible. So, instead of doing the exhaustive testing we can use risks
and priorities to focus testing efforts.
3) Early testing: In the software development life cycle testing activities should start as early as
possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules contains most of the defects discovered
during pre-release testing or shows the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the
same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide
Paradox”, it is really very important to review the test cases regularly and new and different tests
need to be written to exercise different parts of the software or system to potentially find more
defects.
6) Testing is context dependent: Testing is done differently in different contexts. For example,
safety-critical software is tested differently from an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user’s
needs and expectations, then finding and fixing defects does not help.

Fundamental test process in software testing:


Testing is a process rather than a single activity. This process starts from test planning then
designing test cases, preparing for execution and evaluating status till the test closure. So, we
can divide the activities within the fundamental test process into the following basic steps:
1)  Planning and Control.
2)  Analysis and Design.
3)  Implementation and Execution.
4)  Evaluating exit criteria and Reporting.
5)  Test Closure activities.
1) Planning and Control:
Test planning has following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy. (Test strategy is an outline that
describes the testing portion of the software development cycle. It is created to inform PM,
testers and developers about some key issues of the testing process. This includes the
testing objectives, method of testing, total time and resources required for the project and
the testing environments.).
iv. To determine the required test resources like people, test environments, PCs, etc.
v. To schedule test analysis and design tasks, test implementation, execution and evaluation.
vi. To determine the exit criteria; we need to set criteria such as coverage criteria.

Test control has the following major tasks:


i. To measure and analyze the results of reviews and testing.
ii. To monitor and document progress, test coverage and exit criteria.
iii. To provide information on testing.
iv. To initiate corrective actions.
v. To make decisions.

2)  Analysis and Design:


Test analysis and Test Design has the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to start the test
analysis and   create our own test cases. Basically it’s a documentation on which test cases are
based, such as requirements, design specifications, product risk analysis, architecture and
interfaces. We can use the test basis documents to understand what the system should do once
built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure and tools.
3) Implementation and Execution:
During test implementation and execution, we take the test conditions into test cases and
procedures and other testware such as scripts for automation, the test environment and any
other test infrastructure. (Test cases are a set of conditions under which a tester will determine
whether an application is working correctly or not.)
Test implementation has the following major tasks:
i. To develop and prioritize our test cases by using techniques and create test data for those
tests.
We also write some instructions for carrying out the tests, which are known as test procedures.
ii. To create test suites from the test cases for efficient test execution.
(Test suite is a collection of test cases that are used to test a software program to show that it
has some specified set of behaviors.)
iii. To implement and verify the environment.
Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is known
as confirmation testing or re-testing.
iii. To log the outcome of the test execution and record the identities and versions of the
software under test. The test log is used for the audit trail. (A test log records which test
cases were executed, in what order, who executed them and what the status of each test
case is (pass/fail). A small sketch of such a log follows this list.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report discrepancies as
incidents.
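As a small, hypothetical illustration of what a test log might contain (the field names and values below are invented for this sketch, not prescribed by any standard), a log could be kept as simple records in Python:

from datetime import date

# Hypothetical test log: which test cases were run, in what order, by whom, and with what status.
test_log = [
    {"order": 1, "test_case": "TC-01", "executed_by": "tester_a",
     "executed_on": date(2024, 1, 15), "software_version": "1.2.0", "status": "pass"},
    {"order": 2, "test_case": "TC-02", "executed_by": "tester_a",
     "executed_on": date(2024, 1, 15), "software_version": "1.2.0", "status": "fail"},
]

# Entries whose actual result differed from the expected result would be reported as incidents.
incidents = [entry for entry in test_log if entry["status"] == "fail"]
print(incidents)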

Psychology of testing
Comparison of the mindset of the tester and developer:
Testing and reviewing an application is different from analyzing and developing it. By this we
mean that when we are building or developing an application we are working positively to solve
problems during the development process and to make the product according to the user
specification. However, while testing or reviewing a product we are looking for defects or failures
in the product. Thus building the software requires a different mindset from testing the software.
The balance between self-testing and independent testing:
The comparison made on the mindset of the tester and the developer in the above article is just
to compare the two different perspectives. It does not mean that the tester cannot be the
programmer, or that the programmer cannot be the tester, although they often are separate
roles.
This degree of independence avoids author bias and is often more effective at finding  defects
and failures.
There are several levels of independence in software testing, which are listed here from the lowest
level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by the person from some different group such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or
certification by an external body.
Clear and courteous communication and feedback on defects between tester and developer:
We all make mistakes, and we sometimes get annoyed, upset or depressed when someone points
them out. When, as testers, we run a test that is a good test from our viewpoint because it found
defects and failures in the software, we need at the same time to be very careful about how we
react to and report those defects and failures to the programmers. We are pleased because we
found a good bug, but how will the requirement analyst, the designer, the developer, the project
manager and the customer react?
 The people who build the application may react defensively and take this reported defect as
personal criticism.
 The project manager may be annoyed with everyone for holding up the project.
 The customer may lose confidence in the product because he can see defects.

Independent testing:
There are several levels of independence, which are listed here from the lowest level of
independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by the person from some different group such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or
certification by an external body.

Benefits of independent testing:


 An independent tester can repeatedly find out more, other, and different defects than a tester
working within a programming team – or a tester who is by profession a programmer.
 While business analysts, marketing staff, designers, and programmers bring their own
assumptions to the specification and implementation of the item under test, an independent
tester brings a different set of assumptions to testing and to reviews, which often helps in
exposing the hidden defects and problems
 An independent tester who reports to senior management can report his results honestly and
without any concern for reprisal that might result from pointing out problems in coworkers’ or,
worse yet, the manager’s work.
 An independent test team often has a separate budget, which helps ensure the proper level
of money is spent on tester training, testing tools, test equipment, etc.
 In addition, in some organizations, testers in an independent test team may find it easier to have
a career path that leads up into more senior roles in testing.

Verification in software testing:


 It makes sure that the product is designed to deliver all functionality to the customer.
 Verification is done at the starting of the development process. It includes reviews and meetings,
walkthroughs, inspection, etc. to evaluate documents, plans, code, requirements
and specifications.
 It answers the questions like: Am I building the product right?
 Am I accessing the data right (in the right place; in the right way)?
 It is a Low level activity.
 Performed during development on key artifacts, like walkthroughs, reviews and inspections,
mentor feedback, training, checklists and standards.
 Demonstration of consistency, completeness, and correctness of the software at each stage and
between each stage of the development life cycle.

Validation in software testing


 Determining if the system complies with the requirements and performs functions for which it is
intended and meets the organization’s goals and user needs.
 Validation is done at the end of the development process and takes place after verifications are
completed.
 It answers the question like: Am I building the right product?
 Am I accessing the right data (in terms of the data required to satisfy the requirement).
 It is a High level activity.
 Performed after a work product is produced against established criteria ensuring that the product
integrates correctly into the environment.
 Determination of correctness of the final software product by a development project with respect
to the user needs and requirements.

Software Testing Levels


Software testing levels are basically meant to identify missing areas and prevent overlap and
repetition between the development life cycle phases. In software development life cycle models
there are defined phases like requirement gathering and analysis, design, coding or implementation,
testing and deployment. Each phase goes through testing; hence there are various levels of testing.
The various levels of testing are:
 Unit testing
 Component testing
 Integration testing
 Big bang integration testing
 Top down
 Bottom up
 Functional incremental
 Component integration testing
 System integration testing
 System testing
 Acceptance testing
 Alpha testing
 Beta testing
Unit testing:
 Unit testing is a method by which individual units of source code are tested to determine if
they are fit for use. A unit is the smallest testable part of an application like
functions/procedures, classes, interfaces.
 Unit tests are typically written and run by software developers to ensure that code
meets its design and behaves as intended.
 The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct.
 A unit test provides a strict, written contract that the piece of code must satisfy. As a result,
it affords several benefits. Unit tests find problems early in the development cycle.
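To make this concrete, here is a minimal sketch of a unit test written with Python's standard unittest module. The function under test (calculate_discount) and its behaviour are hypothetical examples chosen for illustration, not taken from these notes.

import unittest


def calculate_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return price * (1 - percent / 100)


class CalculateDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # The expected result is decided before the test is run.
        self.assertAlmostEqual(calculate_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(calculate_discount(50.0, 0), 50.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1.0, 10)


if __name__ == "__main__":
    unittest.main()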

Component testing:
 Component testing is also known as module and program testing. It finds the defects in the
module and verifies the functioning of software.
 Component testing is done by the tester.
 Component testing may be done in isolation from the rest of the system, depending on the
development life cycle model chosen for that particular application.

Integration testing:
 Integration testing tests integration or interfaces between components, interactions
with different parts of the system such as the operating system, file system and
hardware, or interfaces between systems.
 Integration testing is done by a specific integration tester or test team.
Big Bang integration testing:
 In Big Bang integration testing all components or modules are integrated
simultaneously, after which everything is tested as a whole.
 Big Bang testing has the advantage that everything is finished before integration
testing starts.
 The major disadvantage is that in general it is time consuming and difficult to trace the
cause of failures because of this late integration.
Incremental testing:
At the other extreme, all programs or modules are integrated one by one, and a test is carried
out after each step.

Top down: 
Testing takes place from top to bottom, following the control flow or architectural structure (e.g.
starting from the GUI or main menu). Components or systems are substituted by stubs.
Bottom up: 
Testing takes place from the bottom of the control flow upwards. Components or systems are
substituted by drivers.
Functional incremental: 
Integration and testing takes place on the basis of the functions and functionalities, as
documented in the functional specification.
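To make the stubs and drivers mentioned for top-down and bottom-up integration more concrete, here is a minimal, hypothetical Python sketch: a stub stands in for a lower-level component that is not yet ready, while a driver exercises a lower-level component directly. All class and function names are illustrative assumptions, not part of the original notes.

# --- Top-down: the real payment gateway is not ready yet, so a stub replaces it.
class PaymentGatewayStub:
    """Stub: returns a canned answer so higher-level code can be integrated and tested."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}


class OrderService:
    """Higher-level component under test; it calls the (stubbed) gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"


# --- Bottom-up: a simple driver exercises the low-level component directly.
def tax_calculator(net_amount, rate=0.2):
    """Low-level component that is already implemented."""
    return round(net_amount * (1 + rate), 2)


def driver():
    """Driver: calls the components and checks their results."""
    assert tax_calculator(100.0) == 120.0
    assert OrderService(PaymentGatewayStub()).place_order(50.0) is True
    print("integration checks passed")


if __name__ == "__main__":
    driver()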

System testing
 In system testing the behavior of whole system/product is tested as defined by the
scope of the development project or product.
 It may include tests based on risks and/or requirement specifications, business process,
use cases, or other high level descriptions of system behavior, interactions with the
operating systems, and system resources.
 System testing is most often the final test to verify that the system to be delivered
meets the specification and its purpose.
 System testing is carried out by specialist testers or independent testers.
 System testing should investigate both functional and non-functional requirements of
the system.

Acceptance testing
 After system testing has been completed and all or most defects have been corrected, the
system will be delivered to the user or customer for acceptance testing.
 Acceptance testing is basically done by the user or customer although other
stakeholders may be involved as well.
 The goal of acceptance testing is to establish confidence in the system.
 Acceptance testing is most often focused on a validation type testing.
 Acceptance testing may occur at more than just a single level, for example:
The types of acceptance testing are:
 The User Acceptance test: focuses mainly on the functionality, thereby validating the
fitness-for-use of the system by the business user. The user acceptance test is performed
by the users and application managers.
 The Operational Acceptance test: also known as the Production acceptance test, it validates
whether the system meets the requirements for operation. In most organizations the
operational acceptance test is performed by the system administrators before the
system is released. The operational acceptance test may include testing of
backup/restore, disaster recovery, maintenance tasks and periodic checks of security
vulnerabilities.
 Contract Acceptance testing: It is performed against the contract’s acceptance
criteria for producing custom developed software. Acceptance should be formally defined
when the contract is agreed.
 Compliance acceptance testing: also known as regulation acceptance testing, it is
performed against the regulations which must be adhered to, such as governmental,
legal or safety regulations.

Beta testing
It is also known as field testing. It takes place at the customer’s site. The system is sent to
users who install it and use it under real-world working conditions.
A beta test is the second phase of software testing in which a sampling of the intended
audience tries the product out. The goal of beta testing is to place your application in the
hands of real users outside of your own engineering team to discover any flaws or issues
from the user’s perspective that you would not want to have in your final, released
version of the application.

Advantages of beta testing:

 You have the opportunity to get your application into the hands of users prior to releasing it
to the general public.
 Users can install, test your application, and send feedback to you during this beta testing
period.
 Your beta testers can discover issues with your application that you may have not noticed,
such as confusing application flow, and even crashes.
 Using the feedback you get from these users, you can fix problems before the application
is released to the general public.
 The more issues you fix that solve real user problems, the higher the quality of
your application when you release it to the general public.
 Having a higher-quality application when you release to the general public will increase
customer satisfaction.
 These users, who are early adopters of your application, will generate excitement about
your application.

Software Test Types


Test types are introduced as a means of clearly defining the objective of a certain test level for a
program or project. A test type is focused on a particular test objective, which could be the
testing of a function to be performed by the component or system; a non-functional quality
characteristic, such as reliability or usability; the structure or architecture of the component
or system; or change-related testing, i.e. confirming that defects have been fixed
(confirmation testing or retesting) and looking for unintended changes (regression testing).
Depending on its objectives, testing will be organized differently. Hence there are four
software test types:
1. Functional testing
2. Non-functional testing
3. Structural testing
4. Change related testing

Functional testing
In functional testing, the functions of a component or system are tested. It refers to activities
that verify a specific action or function of the code. Functional tests tend to answer questions
like “can the user do this?” or “does this particular feature work?”. This is typically described
in a requirements specification or in a functional specification.
The techniques used for functional testing are often specification-based. Testing
functionality can be done from two perspectives:

Requirement-based testing: 
In this type of testing the requirements are prioritized depending on the risk criteria and
accordingly the tests are prioritized. This will ensure that the most important and most critical
tests are included in the testing effort.
Business-process-based testing:
In this type of testing, the scenarios involved in the day-to-day business use of the system are
described. It uses knowledge of the business processes. For example, a personnel and payroll
system may have a business process along the lines of: someone joins the company, the employee
is paid on a regular basis, and the employee finally leaves the company.

Non-functional testing
In non-functional testing the quality characteristics of the component or system are tested. Non-
functional refers to aspects of the software that may not be related to a specific function or user
action, such as scalability or security, e.g. how many people can log in at once. Non-functional
testing is also performed at all levels, like functional testing.
Non-functional testing includes:

 Functionality testing
 Reliability testing
 Usability testing
 Efficiency testing
 Maintainability testing
 Portability testing
 Baseline testing
 Compliance testing
 Documentation testing
 Endurance testing
 Load testing
 Performance testing
 Compatibility testing
 Security testing
 Scalability testing
 Volume testing
 Stress testing
 Recovery testing
 Internationalization testing and Localization testing
Functionality testing: 
Functionality testing is performed to verify that a software application performs and functions
correctly according to design specifications. During functionality testing we check the core
application functions, text input, menu functions and installation and setup on localized machines,
etc.
Reliability testing: 
Reliability Testing is about exercising an application so that failures are discovered and removed
before the system is deployed. The purpose of reliability testing is to determine product
reliability, and to determine whether the software meets the customer’s reliability requirements.
Usability testing:
In usability testing the testers test the ease with which the user interfaces can be used.
It tests whether the application or the product built is user-friendly or not.
Usability testing includes the following five components:

Learnability: 
How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: 
How fast can experienced users accomplish tasks?
Memorability: 
When users return to the design after a period of not using it, do they remember enough to use
it effectively the next time, or do they have to start over again learning everything?
Errors: 
How many errors do users make, how severe are these errors and how easily can they recover
from the errors?
Satisfaction: 
How much does the user like using the system?

Efficiency testing: 
Efficiency testing tests the amount of code and testing resources required by a program to
perform a particular function. Software test efficiency is the number of test cases executed
divided by a unit of time (generally per hour).
Maintainability testing:
It basically defines how easy it is to maintain the system, i.e. how easy it is to analyze, change
and test the application or product.
Portability testing: 
It refers to the process of testing the ease with which a computer software component or
application can be moved from one environment to another, e.g. moving an application from
Windows 2000 to Windows XP. This is usually measured in terms of the maximum amount of
effort permitted. Results are measured in terms of the time required to move the software and
complete the data and documentation updates.
Baseline testing: 
It refers to the validation of documents and specifications on which test cases would be designed.
The requirement specification validation is baseline testing. 
Compliance testing: 
It is related to the IT standards followed by the company, and it is the testing done to find
deviations from the company’s prescribed standards.

Documentation testing: 
As per the IEEE, this is the testing of documentation describing plans for, or results of, the
testing of a system or component. Types include the test case specification, test incident report,
test log, test plan, test procedure and test report. Testing of all the above-mentioned documents
is known as documentation testing.
Endurance testing: 
Endurance testing involves testing a system with a significant load extended over a
significant period of time, to discover how the system behaves under sustained use. For
example, in software testing, a system may behave exactly as expected when tested for 1
hour but when the same system is tested for 3 hours, problems such as memory leaks cause
the system to fail or behave randomly.
Load testing: 
A load test is usually conducted to understand the behavior of the application under a
specific expected load. Load testing is performed to determine a system’s behavior under
both normal and peak conditions. It helps to identify the maximum operating capacity of
an application as well as any bottlenecks, and to determine which element is causing
degradation. E.g., if the number of users is increased, how much CPU and memory will be
consumed, and what are the network and bandwidth response times?
Performance testing:
 Performance testing is performed to determine how fast some aspect of a system performs
under a particular workload. It can serve different purposes: it can demonstrate that the
system meets performance criteria, it can compare two systems to find which performs
better, or it can measure what part of the system or workload causes the system to perform
badly.
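As a rough, hypothetical sketch of what a very small performance check might look like in code (the operation, workload size and the one-second target below are all invented for illustration), the snippet times a repeated operation and compares the elapsed time against a stated criterion:

import time


def lookup(record_id, table):
    """Hypothetical operation whose speed we want to measure."""
    return table.get(record_id)


def measure(workload=100_000):
    table = {i: f"row-{i}" for i in range(workload)}
    start = time.perf_counter()
    for i in range(workload):
        lookup(i, table)
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = measure()
    # Illustrative acceptance criterion: 100,000 lookups should finish within 1 second.
    print(f"elapsed: {elapsed:.3f}s -> {'PASS' if elapsed < 1.0 else 'FAIL'}")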
Compatibility testing:
 Compatibility testing is basically the testing of the application or the product built with the
computing environment. It tests whether the application or the software product built is
compatible with the hardware, operating system, database or other system software or not.
Security testing: 
Security testing basically checks whether the application or the product is secure or not: can
anyone come tomorrow and hack the system or log in to the application without any
authorization? It is a process to determine that an information system protects data and
maintains functionality as intended.
Scalability testing:
 It is the testing of a software application for measuring its capability to scale up in terms of
any of its non-functional capability like load supported, the number of transactions, the data
volume etc.

Volume testing: 
Volume testing refers to testing a software application or the product with a certain amount
of data. E.g., if we want to volume test our application with a specific database size, we need
to expand our database to that size and then test the application’s performance on it.

Stress testing: 
It involves testing beyond normal operational capacity, often to a breaking point, in order to
observe the results. It is a form of testing that is used to determine the stability of a given
system. It puts greater emphasis on robustness, availability, and error handling under a
heavy load, rather than on what would be considered correct behavior under normal
circumstances. The goals of such tests may be to ensure the software does not crash in
conditions of insufficient computational resources (such as memory or disk space).
Recovery testing: 
Recovery testing is done in order to check how quickly and how well the application can recover
after it has gone through any type of crash or hardware failure. Recovery testing is the
forced failure of the software in a variety of ways to verify that recovery is properly
performed. For example, when an application is receiving data from a network, unplug
the connecting cable. After some time, plug the cable back in and analyze the application’s
ability to continue receiving data from the point at which the network connection was lost.
Or restart the system while a browser has a definite number of sessions open and
check whether the browser is able to recover all of them or not.
Internationalization testing and Localization testing:
Internationalization is a process of designing a software application so that it can be adapted
to various languages and regions without any changes. Whereas Localization is a process of
adapting internationalized software for a specific region or language by adding local specific
components and translating text.
Confirmation testing or re-testing: 
When a test fails because of a defect, that defect is reported, and a new version of the
software is expected that has the defect fixed. In this case we need to execute the test again to
confirm whether the defect actually got fixed or not. This is known as confirmation testing, and
also as re-testing. It is important to ensure that the test is executed in exactly the same way
it was the first time, using the same inputs, data and environment.
Hence, when a change is made in order to fix a defect, confirmation testing or re-testing
is performed.

Regression testing

During confirmation testing the defect got fixed and that part of the application started working
as intended. But there is a possibility that the fix may have introduced or uncovered a different
defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do
regression testing. The purpose of regression testing is to verify that modifications in the
software or the environment have not caused any unintended adverse side effects and that the
system still meets its requirements. Regression testing is mostly automated, because the same
tests are carried out again and again after every fix and it would be very tedious to do this
manually. Regression tests are executed whenever the software changes, either as a result of
fixes or of new or changed functionality.
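As a hedged illustration of why such tests are usually automated, the sketch below (with hypothetical function and coupon names, using Python's unittest) shows a small suite that is re-run in full whenever the software changes: one test confirms the original fix (confirmation testing), while the others guard against unintended side effects (regression testing).

import unittest


def apply_coupon(total, coupon):
    """Hypothetical function that previously failed for the 'WELCOME10' coupon."""
    discounts = {"WELCOME10": 0.10, "SAVE20": 0.20}
    rate = discounts.get(coupon, 0.0)
    return round(total * (1 - rate), 2)


class CouponRegressionSuite(unittest.TestCase):
    def test_welcome10_fix_is_confirmed(self):
        # Confirmation test: re-executes the exact case that exposed the defect.
        self.assertEqual(apply_coupon(100.0, "WELCOME10"), 90.0)

    def test_other_coupons_still_work(self):
        # Regression test: the fix must not break previously working behaviour.
        self.assertEqual(apply_coupon(100.0, "SAVE20"), 80.0)

    def test_unknown_coupon_gives_no_discount(self):
        self.assertEqual(apply_coupon(100.0, "BOGUS"), 100.0)


if __name__ == "__main__":
    unittest.main()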

Maintenance Testing
Once a system is deployed, it is in service for years or even decades. During this time the system
and its operational environment are often corrected, changed or extended. Testing that is carried
out during this phase is called maintenance testing.

Usually maintenance testing consists of two parts:

 First, testing the changes that have been made, because of a correction in the system,
because the system was extended, or because additional features were added to it.
 Second, regression tests to prove that the rest of the system has not been affected by
the maintenance work.

Impact analysis in software testing


Impact analysis is basically analyzing the impact of changes on the deployed application or
product. It tells us which parts of the system may be unintentionally affected by a change
in the application and therefore need careful regression testing. This decision is taken
together with the stakeholders.

Test design technique


By design we mean creating a plan for how to implement an idea, and a technique is a method or
way of performing a task. So, test design is creating a set of inputs for given software that will
provide a set of expected outputs. The idea is to ensure that the system works well enough that it
can be released with as few problems as possible for the average user.
Broadly speaking there are two main categories of Test Design Techniques. They are:

1. Static Techniques
2. Dynamic Techniques
Below is the tree structure of the testing techniques:
Static Testing
 Static testing is the testing of the software work products manually, or with a set of tools,
but the products are not executed.
 It starts early in the life cycle, so it is done during the verification process.
 It does not need a computer, as the testing of the program is done without executing it.
For example: reviews, walkthroughs, inspections, etc.

Uses of Static Testing


The uses of static testing are as follows:
 Since static testing can start early in the life cycle, early feedback on quality issues can be
obtained.
 As defects are detected at an early stage, the rework cost is most often relatively low.
 Development productivity is likely to increase because of the reduced rework effort.
 Types of defects that are easier to find during static testing are: deviations from
standards, missing requirements, design defects, non-maintainable code and
inconsistent interface specifications.
 Static tests contribute to the increased awareness of quality issues.

Informal reviews

Informal reviews are applied many times during the early stages of the life cycle of a document.
A two-person team can conduct an informal review. In later stages these reviews often involve
more people and a meeting. The goal is to help the author and to improve the quality of the
document. The most important thing to keep in mind about informal reviews is that they are not
documented.

Formal review
Formal reviews follow a formal process. It is well structured and regulated.
A formal review process consists of six main steps:
 Planning
 Kick-off
 Preparation
 Review meeting
 Rework
 Follow-up
1. Planning
 The documents should not reveal a large number of major defects.
 The documents to be reviewed should be with line numbers.
 The documents should be cleaned up by running any automated checks that apply.
 The author should feel confident about the quality of the document so that he can join
the review team with that document.

Types of review
The main review types that come under the static testing are mentioned below:
Walkthrough:
 It is not a formal process.
 It is led by the author.
 The author guides the participants through the document according to his or her thought
process, to achieve a common understanding and to gather feedback.
 It is useful for people who are not from the software discipline and who are not used to, or
cannot easily understand, the software development process.
 It is especially useful for higher-level documents like requirement specifications, etc.
The goals of a walkthrough:
 To present the documents both within and outside the software discipline in order to gather
the information regarding the topic under documentation.
 To explain or do the knowledge transfer and evaluate the contents of the document
 To achieve a common understanding and to gather feedback.
 To examine and discuss the validity of the proposed solutions
Technical review:
 It is a less formal review.
 It is led by a trained moderator, but can also be led by a technical expert.
 It is often performed as a peer review without management participation.
 Defects are found by experts (such as architects, designers, key users) who focus on the
content of the document.
 In practice, technical reviews vary from quite informal to very formal.
The goals of the technical review are:
 To ensure, at an early stage, that the technical concepts are used correctly.
 To assess the value of technical concepts and alternatives in the product.
 To have consistency in the use and representation of technical concepts.
 To inform participants about the technical content of the document.
Inspection:
 It is the most formal review type.
 It is led by trained moderators.
 During an inspection the documents are prepared and checked thoroughly by the reviewers
before the meeting.
 It involves peers who examine the product.
 A separate preparation is carried out, during which the product is examined and defects
are found.
 The defects found are documented in a logging list or issue log.
 A formal follow-up is carried out by the moderator, applying exit criteria.
The goals of inspection are:
 To help the author improve the quality of the document under inspection.
 To remove defects efficiently and as early as possible.
 To improve product quality.
 To create a common understanding by exchanging information.
 To learn from defects found and prevent the occurrence of similar defects.

Test design
 Basically, test design is the act of creating and writing test suites for testing software.
 Test analysis and identifying test conditions give us a generic idea for testing which covers
quite a large range of possibilities. But when we come to make a test case we need to be
very specific; in fact, now we need the exact and detailed specific input. But just having some
values to input to the system is not a test: if you don’t know what the system is supposed to
do with the inputs, you will not be able to tell whether your test has passed or failed.
 Test cases can be documented as described in the IEEE 829 Standard for Test
Documentation.
 Once a given input value has been chosen, the tester needs to determine what the expected
result of entering that input would be and document it as part of the test case. Expected results
include information displayed on a screen in response to an input. If we do not decide on the
expected results before we run a test, then there is a chance that we will only notice something
if it is wildly wrong.
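To illustrate the point that expected results are decided and recorded as part of the test case before execution, here is a minimal, hypothetical sketch in which each test case documents its inputs and its expected output, and the verdict is determined only by comparing against that recorded value. The function and test case IDs are invented for this example.

# Hypothetical example: test cases documented with their expected results up front.
TEST_CASES = [
    {"id": "TC-01", "input": (2, 3), "expected": 5},
    {"id": "TC-02", "input": (-1, 1), "expected": 0},
    {"id": "TC-03", "input": (0, 0), "expected": 0},
]


def add(a, b):
    """Unit under test (illustrative)."""
    return a + b


def run_test_cases():
    for case in TEST_CASES:
        actual = add(*case["input"])
        verdict = "PASS" if actual == case["expected"] else "FAIL"
        print(f'{case["id"]}: expected={case["expected"]} actual={actual} -> {verdict}')


if __name__ == "__main__":
    run_test_cases()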

Traceability in Software testing


Test conditions should be able to be linked back to their sources in the test basis, this is known
as traceability. Traceability can be horizontal through all the test documentation for a given test
level (e.g. system testing, from test conditions through test cases to test scripts) or it can be
vertical through the layers of development documentation (e.g. from requirements to components).
Now, the question that may arise is: why is traceability important? Let’s have a look at the
following examples:

 The requirements for a given function or feature have changed. Some of the fields now have
different ranges that can be entered. Which tests were looking at those boundaries? They
now need to be changed. How many tests will actually be affected by this change in the
requirements? These questions can be answered easily if the requirements can easily be
traced to the tests.
 A set of tests that has run OK in the past has now started creating serious problems. What
functionality do these tests actually exercise? Traceability between the tests and the
requirement being tested enables the functions or features affected to be identified more
easily.
 Before delivering a new release, we want to know whether or not we have tested all of the
specified requirements in the requirements specification. We have the list of the tests that have
passed – was every requirement tested?
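A minimal sketch of such traceability, using invented requirement and test identifiers, might look like the following; it answers both of the questions above (which tests are affected by a requirement change, and which requirements have no tests at all):

# Hypothetical traceability matrix: requirement ID -> test case IDs.
traceability = {
    "REQ-10": ["TC-001", "TC-002"],
    "REQ-11": ["TC-003"],
    "REQ-12": [],                  # no tests yet - a coverage gap
}

def tests_affected_by(requirement_id):
    """Which tests must be revisited if this requirement changes?"""
    return traceability.get(requirement_id, [])

def untested_requirements():
    """Which requirements have no tests linked to them?"""
    return [req for req, tests in traceability.items() if not tests]

print(tests_affected_by("REQ-10"))   # ['TC-001', 'TC-002']
print(untested_requirements())       # ['REQ-12']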

Test implementation
The document that describes the steps to be taken in running a set of tests and specifies the
executable order of the tests is called a test procedure in IEEE 829, and is also known as a test
script. When the test procedure specification is prepared and then carried out, this activity is called test
implementation. A test script is also used to describe the instructions to a test execution tool. An
automation script is written in a programming language that the tool can understand. (This is an
automated test procedure.)
Tests that are intended to be run manually, rather than using a test execution tool, are
called manual test scripts. The test procedures, or test scripts, are then formed into a test
execution schedule that specifies which procedures are to be run first – a kind of superscript.
Writing the test procedure is another opportunity to prioritize the tests, to ensure that the
best testing is done in the time available. A good rule of thumb is 'find the scary stuff first',
although what counts as 'scary' depends on the business, the system or the project, and on the
risks involved.
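As a rough illustration (the procedure names and risk ratings are invented), a test execution schedule can be thought of as an ordered list of test procedures, with the highest-risk ones placed first:

# Hypothetical test procedures with a risk rating; higher risk runs first.
procedures = [
    {"id": "TP-03", "name": "Payment processing", "risk": 9},
    {"id": "TP-01", "name": "User registration",  "risk": 4},
    {"id": "TP-02", "name": "Report printing",    "risk": 2},
]

# 'Find the scary stuff first': order the schedule by descending risk.
execution_schedule = sorted(procedures, key=lambda p: p["risk"], reverse=True)

for step, proc in enumerate(execution_schedule, start=1):
    print(step, proc["id"], proc["name"])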

Categories of test design techniques


A test design technique basically helps us to select a good set of tests from the total number of
all possible tests for a given system. There are many different types of software testing techniques, each
with its own strengths and weaknesses. Each individual technique is good at finding particular types of
defect and relatively poor at finding other types.

 Static technique

 Dynamic technique

 Specification-based (black-box, also known as behavioral techniques)


 Structure-based (white-box or structural techniques)
 Experience-based
Black-box, Specification-based, also known as behavioral testing techniques

Specification-based testing techniques are also known as 'black-box' or input/output-driven testing
techniques because they view the software as a black box with inputs and outputs.
The testers have no knowledge of how the system or component is structured inside the box. In
black-box testing the tester is concentrating on what the software does, not how it does it.
 The definition mentions both functional and non-functional testing. Functional testing is
concerned with what the system does – its features or functions. Non-functional testing is
concerned with how well the system does what it does, covering characteristics such as
performance, usability, portability and maintainability.
 Specification-based techniques are appropriate at all levels of testing (component testing
through to acceptance testing) where a specification exists. For example, when performing
system or acceptance testing, the requirements specification or functional specification may
form the basis of the tests.
There are four specification-based or black-box techniques:
 Equivalence partitioning
 Boundary value analysis
 Decision tables
 State transition testing

Equivalence partitioning in Software testing


 Equivalence partitioning (EP) is a specification-based or black-box technique.
 It can be applied at any level of testing and is often a good technique to use first.
 The idea behind this technique is to divide (i.e. to partition) a set of test conditions into
groups or sets that can be considered the same (i.e. the system should handle them
equivalently), hence ‘equivalence partitioning’. Equivalence partitions are also known
as equivalence classes – the two terms mean exactly the same thing.
 In equivalence-partitioning technique we need to test only one condition from each
partition. This is because we are assuming that all the conditions in one partition will be
treated in the same way by the software. If one condition in a partition works, we
assume all of the conditions in that partition will work, and so there is little point in
testing any of these others. Similarly, if one of the conditions in a partition does not
work, then we assume that none of the conditions in that partition will work so again
there is little point in testing any more in that partition.
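A small sketch of equivalence partitioning in Python, assuming for illustration a rule that between 1 and 99 copies may be requested:

def is_valid_copies(n):
    # Assumed rule, for illustration only: 1 to 99 copies are accepted.
    return 1 <= n <= 99

# Three equivalence partitions: below the valid range, inside it, above it.
# One representative value per partition is enough for this technique.
representatives = {
    "invalid_below": 0,
    "valid": 50,
    "invalid_above": 150,
}

for partition, value in representatives.items():
    expected_valid = (partition == "valid")
    outcome = "PASS" if is_valid_copies(value) == expected_valid else "FAIL"
    print(partition, value, outcome)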

It is important to do both equivalence partitioning and boundary value analysis


Technically, because every boundary is in some partition, if you did only boundary value
analysis you would also have tested every equivalence partition. However, this approach
may cause problems if that value fails – was it only the boundary value that failed or did
the whole partition fail? Also by testing only boundaries we would probably not give the
users much confidence as we are using extreme values rather than normal values. The
boundaries may be more difficult (and therefore more costly) to set up as well. For
example, in the printer copies example described earlier we identified the boundary
values 0, 1, 99 and 100.

Suppose we test only the valid boundary values 1 and 99 and nothing in between. If
both tests pass, this seems to indicate that all the values in between should also work.
However, suppose that one page prints correctly but 99 pages do not. Testing a value in
the middle of the partition as well helps us tell whether the boundary alone or the whole
partition is at fault.
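Continuing the same assumed 1-to-99 rule, a boundary value analysis sketch adds tests at the edges of each partition plus a value in the middle, so a whole-partition failure is not mistaken for a boundary-only one:

def is_valid_copies(n):
    # Same assumed rule as above: 1 to 99 copies are accepted.
    return 1 <= n <= 99

# Boundary values around the valid range, plus one mid-partition value.
boundary_tests = [
    (0, False),    # just below the range: invalid
    (1, True),     # lower boundary: valid
    (50, True),    # middle of the valid partition
    (99, True),    # upper boundary: valid
    (100, False),  # just above the range: invalid
]

for value, expected_valid in boundary_tests:
    outcome = "PASS" if is_valid_copies(value) == expected_valid else "FAIL"
    print(value, outcome)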

Decision table in software testing


A decision table is a good way to deal with combinations of things (e.g. inputs). This
technique is sometimes also referred to as a ’cause-effect’ table. The reason for this is
that there is an associated logic diagramming technique called ’cause-effect graphing’
which was sometimes used to help derive the decision table. Decision tables provide a
systematic way of stating complex business rules, which is useful for developers as well
as for testers.
 Decision tables can be used in test design whether or not they are used in specifications, as
they help testers explore the effects of combinations of different inputs and other software
states that must correctly implement business rules.
 Helping the developers to do a better job can also lead to better relationships with them.
Testing combinations can be a challenge, as the number of combinations can often be huge.
Testing all combinations may be impractical if not impossible. We have to be satisfied with
testing just a small subset of combinations but making the choice of which combinations to
test and which to leave out is also important. If you do not have a systematic way of
selecting combinations, an arbitrary subset will be used and this may well result in an
ineffective test effort.
How to Use decision tables for test designing?

The first task is to identify a suitable function or subsystem which reacts according to a
combination of inputs or events. The system should not contain too many inputs otherwise the
number of combinations will become unmanageable. It is better to deal with large numbers of
conditions by dividing them into subsets and dealing with the subsets one at a time. Once you
have identified the aspects that need to be combined, then you put them into a table listing all
the combinations of true and false for each of the aspects.
Let us consider an example of a loan application, where you can enter the amount of the monthly
repayment or the number of years you want to take to pay it back (the term of the loan). If you
enter both, the system will make a compromise between the two if they conflict. The two
conditions are the loan amount and the term, so we put them in a table (see Table 4.2).

TABLE 4.2 Empty decision table:


Conditions                            Rule 1    Rule 2    Rule 3    Rule 4
Repayment amount has been entered:
Term of loan has been entered:
Next we will identify all of the combinations of True and False (see Table 4.3). With two
conditions, each of which can be True or False, we will have four combinations (two to the power
of the number of things to be combined). Note that if we have three things to combine, we will
have eight combinations, with four things, there are 16, etc. This is why it is good to tackle small
sets of combinations at a time. In order to keep track of which combinations we have, we will
alternate True and False on the bottom row, put two Trues and then two Falses on the row above
the bottom row, etc., so the top row will have all Trues and then all Falses (and this principle
applies to all such tables).

TABLE 4.3 Decision table with input combinations:


Conditions                            Rule 1    Rule 2    Rule 3    Rule 4
Repayment amount has been entered:      T         T         F         F
Term of loan has been entered:          T         F         T         F
In the next step we will now identify the correct outcome for each combination (see Table 4.4).
In this example, we can enter one or both of the two fields. Each combination is sometimes
referred to as a rule.

TABLE 4.4 Decision table with combinations and outcomes:


Conditions                            Rule 1    Rule 2    Rule 3    Rule 4
Repayment amount has been entered:      T         T         F         F
Term of loan has been entered:          T         F         T         F
Actions/Outcomes
Process loan amount:                    Y         Y
Process term:                           Y                   Y
If neither the repayment amount nor the term is entered, the system should give an error message, so we add this as an additional outcome (see Table 4.5).

TABLE 4.5 Decision table with additional outcomes:

Conditions                            Rule 1    Rule 2    Rule 3    Rule 4
Repayment amount has been entered:      T         T         F         F
Term of loan has been entered:          T         F         T         F
Actions/Outcomes
Process loan amount:                    Y         Y
Process term:                           Y                   Y
Error message:                                                        Y
Now we make a slight change to this example, so that the customer is not allowed to enter both
repayment and term. The outcomes of our table will change, because there should also be an
error message if both are entered, so it will look like Table 4.6.

TABLE 4.6 Decision table with changed outcomes:

Conditions                            Rule 1    Rule 2    Rule 3    Rule 4
Repayment amount has been entered:      T         T         F         F
Term of loan has been entered:          T         F         T         F
Actions/Outcomes
Process loan amount:                              Y
Process term:                                               Y
Error message:                          Y                             Y
You might notice now that there is only one ‘Yes’ in each column, i.e. our actions are mutually
exclusive – only one action occurs for each combination of conditions. We could represent this in
a different way by listing the actions in the cell of one row, as shown in Table 4.7. Note that if
more than one action results from any of the combinations, then it would be better to show them
as separate rows rather than combining them into one row.

TABLE 4.7 Decision table with outcomes in one row:


Conditions                            Rule 1          Rule 2          Rule 3          Rule 4
Repayment amount has been entered:      T               T               F               F
Term of loan has been entered:          T               F               T               F
Actions/Outcomes:
Result:                            Error message   Process loan    Process term    Error message
                                                   amount
The final step of this technique is to write test cases to exercise each of the four rules in our
table.
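A small sketch of that final step, turning the rules of Table 4.7 into test cases (the concrete repayment and term values are invented):

# The four rules of Table 4.7, written out as data.
decision_table = [
    {"rule": 1, "repayment_entered": True,  "term_entered": True,  "outcome": "Error message"},
    {"rule": 2, "repayment_entered": True,  "term_entered": False, "outcome": "Process loan amount"},
    {"rule": 3, "repayment_entered": False, "term_entered": True,  "outcome": "Process term"},
    {"rule": 4, "repayment_entered": False, "term_entered": False, "outcome": "Error message"},
]

# One test case per rule: pick concrete input values that satisfy the
# conditions and note the outcome we expect to observe.
for rule in decision_table:
    repayment = 100 if rule["repayment_entered"] else None   # invented values
    term = 24 if rule["term_entered"] else None
    print(f"Rule {rule['rule']}: repayment={repayment}, term={term} "
          f"-> expect '{rule['outcome']}'")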

Use case testing in software testing


 Use case testing is a technique that helps us identify test cases that exercise the whole system
on a transaction-by-transaction basis from start to finish. Use cases are described by Ivar Jacobson in
his book Object-Oriented Software Engineering: A Use Case Driven Approach [Jacobson, 1992].
 A use case is a description of a particular use of the system by an actor (a user of the system).
Each use case describes the interactions the actor has with the system in order to achieve a
specific task (or, at least, produce something of value to the user).
 Actors are generally people but they may also be other systems.
 Use cases are a sequence of steps that describe the interactions between the actor and the
system. Use cases are defined in terms of the actor, not the system, describing what the actor
does and what the actor sees rather than what inputs the system expects and what the system
outputs.
 They often use the language and terms of the business rather than technical terms, especially
when the actor is a business user.
 They serve as the foundation for developing test cases mostly at the system and acceptance
testing levels.
 Use cases can uncover integration defects, that is, defects caused by the incorrect interaction
between different components. Used in this way, the actor may be something that the system
interfaces to such as a communication link or sub-system.
 Use cases describe the process flows through a system based on its most likely use. This makes
the test cases derived from use cases particularly good for finding defects in the real-world use of
the system (i.e. the defects that the users are most likely to come across when first using the
system).
 Each use case usually has a mainstream (or most likely) scenario and sometimes additional
alternative branches (covering, for example, special cases or exceptional conditions).
 Each use case must specify any preconditions that need to be met for the use case to work.
 Use cases must also specify post conditions that are observable results and a description of the
final state of the system after the use case has been executed successfully.
 The ATM PIN example is shown below in Figure 4.3. We show successful and unsuccessful
scenarios. In this diagram we can see the interactions between A (the actor – in this case a
human being) and S (the system). Steps 1 to 5 form the success scenario, in which the card and
PIN are both validated and the actor is allowed to access the account. The extensions cover
three other cases, 2a, 4a and 4b, as shown in the diagram below.

 For use case testing, we would have a test of the success scenario and one testing for each
extension. In this example, we may give extension 4b a higher priority than 4a from a security
point of view.

 System requirements can also be specified as a set of use cases. This approach can make it
easier to involve the users in the requirements gathering and definition process.
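Since Figure 4.3 itself is not reproduced here, the sketch below only illustrates the shape of use-case-derived test cases: one for the success scenario and one per extension. The behaviour assumed for each extension is an illustration, not a statement of what the figure contains:

# One test case for the main (success) scenario and one per extension.
use_case_tests = [
    {"id": "UC-PIN-main", "scenario": "steps 1-5",
     "description": "valid card and valid PIN -> access to the account is granted"},
    {"id": "UC-PIN-2a",   "scenario": "extension 2a",
     "description": "assumed: card not accepted -> card rejected"},
    {"id": "UC-PIN-4a",   "scenario": "extension 4a",
     "description": "assumed: wrong PIN entered -> actor asked to re-enter PIN"},
    {"id": "UC-PIN-4b",   "scenario": "extension 4b",
     "description": "assumed: wrong PIN repeated -> card retained (higher test priority)"},
]

for test in use_case_tests:
    print(test["id"], "-", test["description"])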

Error guessing in software testing


 Error guessing is a technique where experienced and good testers are encouraged to
think of situations in which the software may not be able to cope. Some people seem to be
naturally good at testing and others are good testers because they have a lot of experience,
either as a tester or working with a particular system, and so are able to find out its
weaknesses. This is why an error guessing approach, used after more formal techniques
have been applied to some extent, can be very effective. It also saves a lot of time, because
the assumptions and guesses made by experienced testers find defects that would
otherwise be missed.
 The success of error guessing is very much dependent on the skill of the tester, as good
testers know where the defects are most likely to be.
 In using more formal techniques first, the tester is
likely to gain a better understanding of the system, what it does and how it works. With this
better understanding, he or she is likely to be better at guessing ways in which the system
may not work properly.
 Typical conditions to try include division by zero, blank (or no) input, empty files and the
wrong kind of data (e.g. alphabetic characters where numeric are required). If anyone ever
says of a system or the environment in which it is to operate ‘That could never happen’, it
might be a good idea to test that condition, as such assumptions about what will and will not
happen in the live environment are often the cause of failures.
 A structured approach to the error-guessing technique is to list possible defects or failures
and to design tests that attempt to produce them. These defect and failure lists can be built
based on the tester’s own experience or that of other people, available defect and failure
data, and from common knowledge about why software fails.
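A rough sketch of that structured approach: the function under test (parse_age) and its behaviour are invented, and the list of probes comes from the typical error-guessing conditions mentioned above:

# Hypothetical function under test: parses an age supplied as text.
def parse_age(text):
    return int(text.strip())

# Error-guessing checklist turned into concrete probes: blank input,
# the wrong kind of data, extreme values, missing values, and so on.
guesses = ["", "   ", "abc", "-1", "0", "999999999999", None]

for value in guesses:
    try:
        print(repr(value), "->", parse_age(value))
    except Exception as exc:          # record how the software copes
        print(repr(value), "-> raised", type(exc).__name__)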

 Exploratory testing in software testing

 As its name implies, exploratory testing is about exploring, finding out about the software,
what it does, what it doesn’t do, what works and what doesn’t work. The tester is constantly
making decisions about what to test next and where to spend the (limited) time. This is an
approach that is most useful when there are no or poor specifications and when time is
severely limited.
 Exploratory testing is a hands-on approach in which testers are involved in minimum
planning and maximum test execution.
 The planning involves the creation of a test charter, a short declaration of the scope of a
short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be
used.
 The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean that
other, more formal testing techniques will not be used. For example, the tester may decide
to use boundary value analysis but will think through and test the most important boundary
values without necessarily writing them down. Some notes will be written during the
exploratory-testing session, so that a report can be produced afterwards.
 Test logging is undertaken as test execution is performed, documenting the key aspects of
what is tested, any defects found and any thoughts about possible further testing.
 It can also serve to complement other, more formal testing, helping to establish greater
confidence in the software. In this way, exploratory testing can be used as a check on the
formal test process by helping to ensure that the most serious defects have been found.
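A test charter is usually only a few lines long. The following is an invented example of the kind of note that might frame a time-boxed session:

Charter: Explore the order-entry screen using invalid and extreme quantities and prices,
         looking for error-handling and rounding defects.
Timebox: 90 minutes.  Tester: (name).  Notes and any defects are logged as the session proceeds.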

Structure-based technique in software testing


 Structure-based techniques serve two purposes: test coverage measurement and structural
test case design.
 They are often used first to assess the amount of testing performed by tests derived from
specification-based techniques, i.e. to assess coverage.
 They are then used to design additional tests with the aim of increasing the test coverage.
 Structure-based test design techniques are a good way of generating additional test cases
that are different from existing tests.
 They can help ensure more breadth of testing, in the sense that test cases that achieve
100% coverage in any measure will be exercising all parts of the software from the point of
view of the items being covered.

Test coverage in software testing


Test coverage measures the amount of testing performed by a set of tests. Wherever we can
count things and can tell whether or not each of those things has been tested by some test,
we can measure coverage; this measurement is known as test coverage.
The basic coverage measure is where the ‘coverage item’ is whatever we have been able to count
and see whether a test has exercised or used this item.

There is a danger in using a coverage measure: 100% coverage does not mean 100% tested.
Coverage techniques measure only one dimension of a multi-dimensional concept. Two different
test cases may achieve exactly the same coverage, but the input data of one may find an error
that the input data of the other does not.
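A toy illustration of this point in Python (the discount function and its intended rule are invented): both test sets below execute every statement, yet only one of them uses input data capable of exposing the defect:

# A deliberately simple (and deliberately wrong) discount function.
def discount(price, is_member):
    if is_member:
        return price * 0.9    # defect: the intended rule was 20% off for
    return price              # member orders over 1000, but it is missing

# Both test sets execute every statement of discount(), i.e. both achieve
# 100% statement coverage ...
tests_a = [(100, True), (100, False)]
tests_b = [(2000, True), (100, False)]   # ... but only this set uses input
                                         # data that could expose the defect.
for price, is_member in tests_a + tests_b:
    print(price, is_member, "->", discount(price, is_member))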
Benefit of code coverage measurement:
 It creates additional test cases to increase coverage
 It helps in finding areas of a program not exercised by a set of test cases
 It helps in determining a quantitative measure of code coverage, which indirectly measures
the quality of the application or product.
Drawback of code coverage measurement:
 One drawback of code coverage measurement is that it measures coverage of what has been
written, i.e. the code itself; it cannot say anything about the software that has  not been
written.
 If a specified function has not been implemented or a function was omitted from the
specification, then structure-based techniques cannot say anything about it; they only look
at a structure which is already there.

 Types of coverage
There are many types of test coverage. Test coverage can be measured at any level of testing.
Test coverage can be measured based on a number of different structural elements in a system
or component. Coverage can be measured at component testing level, integration-testing level or
at system- or acceptance-testing levels. For example, at system or acceptance level, the
coverage items may be requirements, menu options, screens, or typical business transactions. At
integration level, we could measure coverage of interfaces or specific interactions that have been
tested.

Roles and responsibilities of a Test Leader


 Test leaders tend to be involved in the planning, monitoring, and control of the testing activities
and tasks.
 At the outset of the project, test leaders, in collaboration with the other stakeholders, devise the
test objectives, organizational test policies, test strategies and test plans.
 They estimate the testing to be done and negotiate with management to acquire the necessary
resources.
 They recognize when test automation is appropriate and, if it is, they plan the effort, select the
tools, and ensure training of the team.  They may consult with other groups – e.g., programmers – to
help them with their testing.
 They lead, guide and monitor the analysis, design, implementation and execution of the test
cases, test procedures and test suites.
 They ensure proper configuration management of the testware produced and traceability of the
tests to the test basis.
 As test execution comes near, they make sure the test environment is put into place before test
execution and managed during test execution.
 They schedule the tests for execution and then they monitor, measure, control and report on the
test progress, the product quality status and the test results, adapting the test plan and
compensating as needed to adjust to evolving conditions.
 During test execution and as the project winds down, they write summary reports on test status.
 Sometimes test leaders wear different titles, such as test manager or test coordinator.
Alternatively, the test leader role may wind up assigned to a project manager, a development
manager or a quality assurance manager. Whoever is playing the role, expect them to plan, monitor
and control the testing work.
Along with the test leaders, testers should also be involved from the beginning of the project, although
most of the time the project doesn't need a full complement of testers until the test execution period. So,
now we will look at the testers' responsibilities.

Roles and responsibilities of a Tester


 In the planning and preparation phases of the testing, testers should review and contribute to
test plans, as well as analyzing, reviewing and assessing requirements and design specifications.
They may be involved in or even be the primary people identifying test conditions and creating test
designs, test cases, test procedure specifications and test data, and may automate or help to
automate the tests.
 They often set up the test environments or assist system administration and network
management staff in doing so.
 As test execution begins, the number of testers often increases, starting with the work required
to implement tests in the test environment.
 Testers execute and log the tests, evaluate the results and document problems found.
 They monitor the testing and the test environment, often using tools for this task, and often
gather performance metrics.
 Throughout the testing life cycle, they review each other’s work, including test specifications,
defect reports and test results.

Purpose and importance of test plans in software testing


A test plan is the project plan for the testing work to be done. It is not a test design specification, a
collection of test cases or a set of test procedures; in fact, most of our test plans do not address that
level of detail. Many people have different definitions for test plans.
Why is it required to write test plans? There are three main reasons:

First, writing a test plan guides our thinking: it forces us to confront the challenges
that await us and to focus our thinking on important topics.
Using a template for writing test plans helps us remember the important challenges. You can use the
IEEE 829 test plan template shown in this chapter, use someone else's template, or create your own
template over time.
Second, the test planning process and the plan itself serve as the means of communication with other
members of the project team, testers, peers, managers and other stakeholders. This communication
allows the test plan to influence the project team and the project team to influence the test plan,
especially in the areas of organization-wide testing policies and motivations; test scope, objectives and
critical areas to test; project and product risks, resource considerations and constraints; and the
testability of the item under test. We can complete this communication by circulating one or two test plan
drafts and through review meetings. Such a draft will typically include notes and open questions for the
reviewers to resolve.
Third, the test plan helps us to manage change. During early phases of the project, as we gather more
information, we revise our plans. In some situations it is better to write multiple test plans. For
example, when we manage both integration and system test levels, those two test execution periods
occur at different points in time and have different objectives. For some systems projects, a hardware test
plan and a software test plan will address different techniques and tools as well as different audiences.
However, these test plans can overlap; a master test plan that addresses the common elements of both
can reduce the amount of redundant documentation.

IEEE 829 STANDARD TEST PLAN TEMPLATE


Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension and resumption criteria
Test deliverables
Test tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals

Things to keep in mind while planning tests


A good test plan is always kept short and focused. At a high level, you need to consider the purpose
served by the testing work. Hence, it is really very important to keep the following things in mind while
planning tests:

 What is in scope and what is out of scope for this testing effort?
 What are the test objectives?
 What are the important project and product risks? (Risks are discussed in detail in Section 5.5.)
 What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
 What is most critical for this product and project?
 Which aspects of the product are more (or less) testable?
 What should be the overall test execution schedule and how should we decide the order in which
to run specific tests? (Product and planning risks, discussed later in this chapter, will influence the
answers to these questions.)
 How to split the testing work into various levels (e.g., component, integration, system and
acceptance).
 If that decision has already been made, you need to decide how to best fit your testing work in
the level you are responsible for with the testing work done in those other test levels.
 During the analysis and design of tests, you’ll want to reduce gaps and overlap between levels
and, during test execution, you’ll want to coordinate between the levels. Such details dealing with
inter-level coordination are often addressed in the master test plan.
 In addition to integrating and coordinating between test levels, you should also plan to integrate
and coordinate all the testing work to be done with the rest of the project. For example, what items
must be acquired for the testing?
 When will the programmers complete work on the system under test?
 What operations support is required for the test environment?
 What kind of information must be delivered to the maintenance team at the end of testing?
 How many resources are required to carry out the work?
Now, think about what would be true about the project when the project was ready to start executing
tests. What would be true about the project when the project was ready to declare test execution done?
At what point can you safely start a particular test level or phase, test suite or test target? When can you
finish it? The factors to consider in such decisions are often called ‘entry criteria’ and ‘exit criteria.’ For
such criteria, typical factors are:

 Acquisition and supply: the availability of staff, tools, systems and other materials required.
 Test items: the state that the items to be tested must be in to start and to finish testing.
 Defects: the number known to be present, the arrival rate, the number predicted to remain, and
the number resolved.
 Tests: the number run, passed, failed, blocked, skipped, and so forth.
 Coverage: the portions of the test basis, the software code or both that have been tested and
which have not.
 Quality: the status of the important quality characteristics for the system.
 Money: the cost of finding the next defect in the current level of testing compared to the cost of
finding it in the next level of testing (or in production).
 Risk: the undesirable outcomes that could result from shipping too early (such as latent defects
or untested areas) – or too late (such as loss of market share).
When writing exit criteria, we try to remember that a successful project is a balance of quality, budget,
schedule and feature considerations. This is even more important when applying exit criteria at the end of
the project.
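As a rough sketch, exit criteria can be written as simple, measurable checks; the thresholds and figures below are invented examples, not recommendations:

# Invented example figures and thresholds for an exit-criteria check.
status = {
    "tests_planned": 200, "tests_run": 198, "tests_passed": 191,
    "open_critical_defects": 1, "requirements_covered_pct": 97,
}

exit_criteria = [
    ("all planned tests run",       status["tests_run"] >= status["tests_planned"]),
    ("pass rate at least 95%",      status["tests_passed"] / status["tests_run"] >= 0.95),
    ("no open critical defects",    status["open_critical_defects"] == 0),
    ("requirement coverage >= 95%", status["requirements_covered_pct"] >= 95),
]

for name, met in exit_criteria:
    print("MET" if met else "NOT MET", "-", name)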

Estimating what testing will involve and what it will cost


As we know, testing is a process rather than a single activity. Hence, we need to break a testing
project down into phases using the fundamental test process identified in the ISTQB Syllabus: planning and
control; analysis and design; implementation and execution; evaluating exit criteria and reporting; and
test closure.
Within each phase we identify activities and within each activity we identify tasks and perhaps subtasks.
To identify the activities and tasks, we work both forward and backward. When we say we work forward,
we mean that we start with the planning activities and then move forward in time step by step, asking,
‘Now, what comes next?’
Working backward means that we consider the risks that we identified during risk analysis (which we will
discuss in Section 5.5) and, depending on the type of risk, we decide what activities and tasks are
required in each stage to carry out this testing.

Let’s look at an example of how you might work backward.

 Suppose that you have identified performance as a major area of risk for your product. So,
performance testing is an activity in the test execution phase. You now estimate the tasks involved
with running a performance test, how long those tasks will take and how many times you will need to
run the performance tests.
 Now, those tests need to be developed by someone. So, performance test development entails
activities in test analysis, design and implementation. You now estimate the tasks involved in
developing a performance test, such as writing test scripts and creating test data. Typically,
performance tests need to be run in a special test environment that is designed to look like the
production or field environment.
 You now estimate tasks involved in acquiring and configuring such a test environment, such as
getting the right hardware, software and tools and setting up hardware, software and tools.
 Not everyone knows how to use performance-testing tools or to design performance tests. So,
performance-testing training or staffing is an activity in the test planning phase. Depending on the
approach you intend to take, you now estimate the time required to identify and hire a performance
test professional or to train one or more people in your organization to do the job.
 Finally, in many cases a detailed test plan is written for performance testing, due to its
differences from other test types. So, performance-testing planning is an activity in the test planning
phase. You now estimate the time required to draft, review and finalize a performance test plan.

The estimation techniques in software testing


1. People with expertise on the tasks to be done:
In this process we ask the individual contributors and experts: it involves working with experienced staff
members to develop a work-breakdown structure for the project. With that done, you work together to
understand, for each task, the effort, duration, dependencies, and resource requirements. The idea is to
draw on the collective wisdom of the team to create your test estimate. Using a tool such as Microsoft
Project or a whiteboard and sticky-notes, you and the team can then predict the testing end-date and
major milestones. This technique is often called ‘bottom up’ estimation because you start at the lowest
level of the hierarchical breakdown in the work-breakdown structure – the task – and let the duration,
effort, dependencies and resources for each task add up across all the tasks.

2. Consulting the people who will do the work:

Even the best estimate must be negotiated with management. Negotiating sessions exhibit amazing
variety, depending on the people involved. However, there are some classic negotiating positions. It’s not
unusual for the test leader or manager to try to sell the management team on the value added by the
testing or to alert management to the potential problems that would result from not testing enough. It’s
not unusual for management to look for smart ways to accelerate the schedule or to press for equivalent
coverage in less time or with fewer resources. In between these positions, you and your colleagues can
reach compromise, if the parties are willing. Our experience has been that successful negotiations about
estimates are those where the focus is less on winning and losing and more about figuring out how best to
balance competing pressures in the realms of quality, schedule, budget and features.

Factors affecting test effort in software testing


When you create test plans and estimate the testing effort and schedule, you must keep these factors in
mind otherwise your plans and estimates will mislead you at the beginning of the project and betray you
at the middle or end.

The test strategies or approaches you pick will have a major influence on the testing effort. In this section,
let’s look at factors related to the product, the process and the results of testing.
Among the product factors, the presence of sufficient project documentation is important so that the testers can
figure out what the system is, how it is supposed to work and what correct behavior looks like. This will
help us do our job more efficiently.

The factors which affect the test effort are:


 While good project documentation is a positive factor, it’s also true that having to produce
detailed documentation, such as meticulously specified test cases, results in delays. During test
execution, having to maintain such detailed documentation requires lots of effort, as does working
with fragile test data that must be maintained or restored frequently during testing.
 Increasing the size of the product leads to increases in the size of the project and the project
team. Increases in the project and project team increases the difficulty of predicting and managing
them. This leads to the disproportionate rate of collapse of large projects.
 The life cycle itself is an influential process factor, as the V-model tends to be more fragile in the
face of late change while incremental models tend to have high regression testing costs.
 Process maturity, including test process maturity, is another factor, especially the implication that
mature processes involve carefully managing change in the middle and end of the project, which
reduces test execution cost.
 Time pressure is another factor to be considered. Pressure should not be an excuse to take
unwarranted risks. However, it is a reason to make careful, considered decisions and to plan and re-
plan intelligently throughout the process.
 People execute the process, and people factors are as important or more important than any
other. Important people factors include the skills of the individuals and the team as a whole, and the
alignment of those skills with the project’s needs. It is true that there are many troubling things
about a project but an excellent team can often make good things happen on the project and in
testing.
 Since a project team is a team, solid relationships, reliable execution of agreed-upon
commitments and responsibilities and a determination to work together towards a common goal are
important. This is especially important for testing, where so much of what we test, use, and produce
either comes from, relies upon or goes to people outside the testing group. Because of the
importance of trusting relationships and the lengthy learning curves involved in software and system
engineering, the stability of the project team is an important people factor, too.
 The test results themselves are important in the total amount of test effort during test execution.
The delivery of good-quality software at the start of test execution and quick, solid defect fixes during
test execution prevents delays in the test execution process. A defect, once identified, should not
have to go through multiple cycles of fix/retest/re-open, at least not if the initial estimate is going to
be held to.

The test approaches or strategies in software testing


The choice of test approaches or strategies is one of the most powerful factors in the success of the test
effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and
test leaders.
Let’s survey the major types of test strategies that are commonly found:
 Analytical: Let us take an example to understand this. The risk-based strategy involves
performing a risk analysis using project documents and stakeholder input, then planning, estimating,
designing, and prioritizing the tests based on risk. Another analytical test strategy is the
requirements-based strategy, where an analysis of the requirements specification forms the basis for
planning, estimating and designing tests. Analytical test strategies have in common the use of some
formal or informal analytical technique, usually during the requirements and design stages of the
project.
 Model-based: Let us take an example to understand this. You can build mathematical models
for loading and response for e-commerce servers, and test based on that model. If the behavior of
the system under test conforms to that predicted by the model, the system is deemed to be working.
Model-based test strategies have in common the creation or selection of some formal or informal
model for critical system behaviors, usually during the requirements and design stages of the project.
 Methodical: Let us take an example to understand this. You might have a checklist that you
have put together over the years that suggests the major areas of testing to run or you might follow
an industry-standard for software quality, such as ISO 9126, for your outline of major test areas. You
then methodically design, implement and execute tests following this outline. Methodical test
strategies have in common the adherence to a pre-planned, systematized approach that has been
developed in-house, assembled from various concepts developed in-house and gathered from outside,
or adapted significantly from outside ideas and may have an early or late point of involvement for
testing.
 Process – or standard-compliant: Let us take an example to understand this. You might adopt
the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in
the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as
Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an
externally developed approach to testing, often with little – if any – customization and may have an
early or late point of involvement for testing.
 Dynamic: Let us take an example to understand this. You might create a lightweight set of
testing guide lines that focus on rapid adaptation or known weaknesses in software. Dynamic
strategies, such as exploratory testing, have in common concentrating on finding as many defects
as possible during test execution and adapting to the realities of the system under test as it is when
delivered, and they typically emphasize the later stages of testing. See, for example, the attack-
based approach of [Whittaker, 2002] and [Whittaker, 2003] and the exploratory approach of
[Kaner et al., 2002].
 Consultative or directed: Let us take an example to understand this. You might ask the users
or developers of the system to tell you what to test or even rely on them to do the testing.
Consultative or directed strategies have in common the reliance on a group of non-testers to guide or
perform the testing effort and typically emphasize the later stages of testing simply due to the lack of
recognition of the value of early testing.
 Regression-averse: Let us take an example to understand this. You might try to automate all
the tests of system functionality so that, whenever anything changes, you can re-run every test to
ensure nothing has broken. Regression-averse strategies have in common a set of procedures –
usually automated – that allow them to detect regression defects. A regression-averse strategy may
involve automating functional tests prior to release of the function, in which case it requires early
testing, but sometimes the testing is almost entirely focused on testing functions that already have
been released, which is in some sense a form of post release test involvement.
Some of these strategies are more preventive, others more reactive. For example, analytical test
strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to
test execution. This allows the early – and cheap – removal of defects. That is a strength of preventive
approaches.
Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects
and defect clusters that might have been hard to anticipate until you have the actual system in front of
you. That is a strength of reactive approaches.

Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or
situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): There is no one
best way. We suggest that you adopt whatever test approaches make the most sense in your particular
situation, and feel free to borrow and blend.

How do you know which strategies to pick or blend for the best chance of success? There are
many factors to consider, but let us highlight a few of the most important:
 Risks: Risk management is very important during testing, so consider the risks and the level of
risk. For a well-established application that is evolving slowly, regression is an important risk, so
regression-averse strategies make sense. For a new application, a risk analysis may reveal different
risks if you pick a risk-based analytical strategy.
 Skills: Consider which skills your testers possess and lack because strategies must not only be
chosen, they must also be executed. A standard-compliant strategy is a smart choice when you lack
the time and skills in your team to create your own approach.
 Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If
the objective is to find as many defects as possible with a minimal amount of up-front time and effort
invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.
 Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this
case, you may need to plan a methodical test strategy that satisfies these regulators that you have
met all their requirements.
 Product: Some products, like weapons systems and contract-development software, tend to have
well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
 Business: Business considerations and business continuity are often important. If you can use a
legacy system as a model for a new system, you can use a model-based strategy.
You must choose testing strategies with an eye towards the factors mentioned earlier, the schedule,
budget, and feature constraints of the project and the realities of the organization and its politics.

We mentioned above that a good team can sometimes triumph over a situation where materials, process
and delaying factors are ranged against its success. However, talented execution of an unwise strategy is
the equivalent of going very fast down a highway in the wrong direction. Therefore, you must make smart
choices in terms of testing strategies.

 Test monitoring in software testing


Test monitoring can serve various purposes during the project, including the following:

 Give the test team and the test manager feedback on how the testing work is going, allowing
opportunities to guide and improve the testing and the project.
 Provide the project team with visibility about the test results.
 Measure the status of the testing, test coverage and test items against the exit criteria to
determine whether the test work is done.
 Gather data for use in estimating future test efforts.
For small projects, the test leader or a delegated person can gather test progress monitoring information
manually using documents, spreadsheets and simple databases. But when working with large teams,
distributed projects and long-term test efforts, the efficiency and consistency of data
collection are best achieved by the use of automated tools.

One way to keep the records of test progress information is by using the IEEE 829 test log template. While
much of the information related to logging events can be usefully captured in a document, we prefer to
capture the test-by-test information in spreadsheets (see Figure 5.1).

Let us take an example. As shown in Figure 5.1, columns A and B show the test ID and the test case or test suite name.
The state of the test case is shown in column C (‘Warn’ indicates a test that resulted in a minor failure).
Column D shows the tested configuration, where the codes A, B and C correspond to test environments
described in detail in the test plan. Columns E and F show the defect (or bug) ID number (from the
defect-tracking database) and the risk priority number of the defect (ranging from 1, the worst, to 25, the
least risky). Column G shows the initials of the tester who ran the test. Columns H through L capture data
for each test related to dates, effort and duration (in hours). We have metrics for planned and actual
effort and dates completed which would allow us to summarize progress against the planned schedule and
budget. This spreadsheet can also be summarized in terms of the percentage of tests which have been run
and the percentage of tests which have passed and failed.
Figure 5.1 might show a snapshot of test progress during the test execution period. During the analysis,
design and implementation of the tests, such a worksheet would show the state of the tests in terms of
their state of development.
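The same kind of summary can be produced from a simple test log. The records below are invented and only loosely mirror the columns described for Figure 5.1:

# Invented test log records (one per test case).
test_log = [
    {"id": "T-01", "state": "Pass"},
    {"id": "T-02", "state": "Fail"},
    {"id": "T-03", "state": "Warn"},     # minor failure
    {"id": "T-04", "state": "Pass"},
    {"id": "T-05", "state": "Not run"},
]

total = len(test_log)
run = sum(1 for t in test_log if t["state"] != "Not run")
passed = sum(1 for t in test_log if t["state"] == "Pass")
failed = sum(1 for t in test_log if t["state"] in ("Fail", "Warn"))

print(f"run: {run}/{total} ({100 * run / total:.0f}%)")
print(f"passed: {passed}/{run} ({100 * passed / run:.0f}%)")
print(f"failed or warned: {failed}/{run}")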

In addition to test case status, it is also common to monitor test progress during the test execution period
by looking at the number of defects found and fixed. Figure 5.2 shows a graph that plots the total number
of defects opened and closed over the course of the test execution so far. It also shows the planned test
period end date and the planned number of defects that will be found. Ideally, as the project approaches
the planned end date, the total number of defects opened will settle in at the predicted number and the
total number of defects closed will converge with the total number opened. These two outcomes tell us
that we have found enough defects to feel comfortable that we’re done testing, that we have no reason to
think many more defects are lurking in the product, and that all known defects have been resolved.
Charts such as Figure 5.2 can also be used to show failure rates or defect density. When reliability is a
key concern, we might be more concerned with the frequency with which failures are observed (called
failure rates) than with how many defects are causing the failures (called defect density).
In organizations that are looking to produce ultra-reliable software, they may plot the number of
unresolved defects normalized by the size of the product, either in thousands of source lines of code
(KSLOC), function points (FP) or some other metric of code size. Once the number of unresolved defects
falls below some predefined threshold – for example, three per million lines of code – then the product
may be deemed to have met the defect density exit criteria.
That is why it is said that test progress monitoring techniques vary considerably depending on the preferences
of the testers and stakeholders, the needs and goals of the project, regulatory requirements, time and
money constraints and other factors.
In addition to the kinds of information shown in the IEEE 829 Test Log Template, Figures 5.1 and Figure
5.2, other common metrics for test progress monitoring include:

 The extent of completion of test environment preparation;


 The extent of test coverage achieved, measured against requirements, risks, code, configurations
or other areas of interest;
 The status of the testing (including analysis, design and implementation) compared to various
test milestones;
 The economics of testing, such as the costs and benefits of continuing test execution in terms of finding
the next defect or running the next test.

Test control
Projects do not always unfold as planned. When the actual situation differs from the plan – risks become
occurrences, stakeholder needs evolve, the world around us changes – we need to bring the project back
under control.
Test control is about guiding and corrective actions that try to achieve the best possible outcome for the
project. The specific guiding actions depend on what we are trying to control. Let us take a few hypothetical
examples:
 A portion of the software under test will be delivered late but market conditions dictate that we
cannot change the release date. At this point of time test control might involve re-prioritizing the
tests so that we start testing against what is available now.
 For cost reasons, performance testing is normally run on weekday evenings during off-hours in
the production environment. Due to unexpected high demand for your products, the company has
temporarily adopted an evening shift that keeps the production environment in use 18 hours a day,
five days a week. In this context test control might involve rescheduling the performance tests for the
weekend.

Configuration management in software testing


 Configuration management determines clearly what items make up the software or
system. These items include source code, test scripts, third-party software, hardware, data and both
development and test documentation.
 Configuration management is also about making sure that these items are managed carefully,
thoroughly and attentively during the entire project and product life cycle.
 Configuration management has a number of important implications for testing. For example, configuration
management allows the testers to manage their testware and test results using the same
configuration management mechanisms.
 Configuration management also supports the build process, which is important for delivery of a
test release into the test environment. Simply sending Zip archives by e-mail will not be sufficient,
because there are too many opportunities for such archives to become polluted with undesirable
contents or to harbor left-over previous versions of items. Especially in later phases of testing, it is
critical to have a solid, reliable way of delivering test items that work and are the proper version.
 Last but not least, configuration management allows us to map what is being tested back
to the underlying files and components that make it up. This is very important. Let us take an
example, when we report defects, we need to report them against something, something which
is version controlled. If it is not clear what we found the defect in, the programmers will have a
very tough time of finding the defect in order to fix it. For the kind of test reports discussed earlier to
have any meaning, we must be able to trace the test results back to what exactly we tested.
Ideally, when testers receive an organized, version-controlled test release from a change-managed source
code repository, it is accompanied by a test item transmittal report or release notes. [IEEE 829] provides a
useful guideline for what goes into such a report. Release notes are not always so formal and do not
always contain all the information shown.

Configuration management is a complex topic, so advance planning is very important to
make this work. During the project planning stage – and perhaps as part of your own test plan – make
sure that configuration management procedures and tools are selected. As the project proceeds, the
configuration process and mechanisms must be implemented, and the key interfaces to the rest of the
development process should be documented.

Risk in software testing


In software testing, risks are the possible problems that might endanger the objectives of the project
stakeholders. A risk is the possibility of a negative or undesirable outcome – something that has not
happened yet and may never happen; it is a potential problem. Looking into the future, a risk has some
probability between 0% and 100%; it is a possibility, not a certainty.

The level of risk is determined by the likelihood of the problem occurring and the impact of its possible
negative consequences.

For example, most people are expected to catch a cold in the course of their lives, usually more than
once. The healthy individual suffers no serious consequences, so the overall level of risk
associated with colds is low for that person. On the other hand, the risk of a cold for an elderly person with
breathing difficulties would be high, so in that case the overall level of risk associated with a cold is high.

We can classify risks into following categories:

1. Product risk (factors relating to what is produced by the work, i.e. the thing we are testing).
2. Project risk (factors relating to the way the work is carried out, i.e. the test project)

Product risk in software testing


Product risk is the possibility that the system or software might fail to satisfy or fulfill some reasonable
expectation of the customer, user, or stakeholder. (Some authors also call product risks 'quality
risks', as they are risks to the quality of the product.)
The product risks that can put the product or software in danger are:

 If the software skips some key function that the customers specified, the users required or the
stakeholders were promised.
 If the software is unreliable and frequently fails to work.
 If the software fails in ways that cause financial or other damage to a user or the company that user
works for.
 If the software has problems related to a particular quality characteristic, which might not be
functionality, but rather security, reliability, usability, maintainability or performance.
Two quick tips about product risk analysis:
First, remember to consider both the likelihood of the risk occurring and its impact. You may feel proud of finding lots of defects, but testing is also about building confidence in key functions. We need to test the things that probably won't break but would be very bad if they did.
Second, early risk analyses are often educated guesses. At key project milestones it is important to revisit and follow up on the risk analysis.

Project risk in software testing


Testing is an activity like the rest of the project and thus it is subject to risks that endanger the project.
The project risks that can endanger the project are:

 Risks such as the late delivery of the test items to the test team or availability issues with the test
environment.
 There are also indirect risks such as excessive delays in repairing defects found in testing or
problems with getting professional system administration support for the test environment.
For any risk, whether a project risk or a product risk, there are four typical actions that we can take:

 Mitigate: Take steps in advance to reduce the likelihood and impact of the risk.
 Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
 Transfer: Convince some other member of the team or another project stakeholder to reduce the likelihood or accept the impact of the risk.
 Ignore: Ignore the risk, which is usually a good option only when there is little that can be done
or when the possibility and impact of that risk are low in the project.

Risk based testing


Risk-based testing is testing that is planned and prioritized for the project based on risks. It uses risk to prioritize and emphasize the appropriate tests during test execution. Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system is deployed.
 Risk-based testing starts early in the project, identifying risks to system quality and using that
knowledge of risk to guide testing planning, specification, preparation and execution.
 Risk-based testing involves both mitigation – testing to provide opportunities to reduce the
likelihood of defects, especially high-impact defects – and contingency – testing to identify work-
arounds to make the defects that do get past us less painful.
 Risk-based testing also involves measuring how well we are doing at finding and removing
defects in critical areas.
 Risk-based testing can also involve using risk analysis to identify proactive opportunities to
remove or prevent defects through non-testing activities and to help us select which test activities to
perform.
The goal of risk-based testing cannot, practically speaking, be a risk-free project. What we can get from risk-based testing is to carry out testing with best practices in risk management, to achieve a project outcome that balances risks with quality, features, budget and schedule.

Risk analysis
There are many techniques for analyzing risk. They are:

 One technique for risk analysis is a close reading of the requirements specification, design
specifications, user documentation and other items.
 Another technique is brainstorming with many of the project stakeholders.
 Another is a sequence of one-on-one or small-group sessions with the business and technology
experts in the company.
 Some people use all these techniques when they can. To us, a team-based approach that
involves the key stakeholders and experts is preferable to a purely document-based approach, as
team approaches draw on the knowledge, wisdom and insight of the entire team to determine what
to test and how much.
The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it is often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.
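As an illustration only (not part of any standard), here is a minimal sketch in Python of how likelihood and impact ratings on such a five-point scale could be combined into a single number used to rank risk areas for testing; the scale values, the simple multiplication and the risk names are assumptions made for the example:

# Minimal sketch: combine likelihood and impact into a risk priority number.
# The five-point scale and the multiplication are illustrative assumptions.
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_priority(likelihood, impact):
    # Returns a value from 1 (lowest risk) to 25 (highest risk).
    return SCALE[likelihood] * SCALE[impact]

# Hypothetical product risk areas identified during risk analysis.
risks = [
    ("Payment processing fails", "medium", "very high"),
    ("Report layout misaligned", "high", "low"),
    ("Login rejects valid users", "low", "very high"),
]

# Rank the areas so the highest-priority risks are tested first and most.
for name, likelihood, impact in sorted(risks, key=lambda r: risk_priority(r[1], r[2]), reverse=True):
    print(risk_priority(likelihood, impact), name)

Ranking the areas this way makes it easier to decide how much test effort each risk area should receive.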

Let us also discuss some risks which usually occur, along with some options for managing them:

 Logistics or product quality problems that block tests: These can be mitigated by
careful planning, good defect triage and management, and robust test design.
 Test items that won’t install in the test environment: These can be mitigated through
smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous
integration. Having a defined uninstall process is a good contingency plan.
 Excessive change to the product that invalidates test results or requires updates to test
cases, expected results and environments: These can be mitigated through good change-control
processes, robust test design and lightweight test documentation. When severe incidents occur,
transference of the risk by escalation to management is often in order.
 Insufficient or unrealistic test environments that yield misleading results:  One option is
to transfer the risks to management by explaining the limits on test results obtained in limited
environments. Mitigation – sometimes complete alleviation – can be achieved by outsourcing tests
such as performance tests that are particularly sensitive to proper test environments.
Let us also go through some additional risks and perhaps ways to manage them:

 Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, unrealistic expectations of what testing can achieve, and complexity of the project team or organization.
 Supplier issues such as problems with underlying platforms or hardware, failure to consider
testing issues in the contract or failure to properly respond to the issues when they arise.
 Technical issues related to ambiguous, conflicting or unprioritized requirements, an excessively
large number of requirements given other project constraints, high system complexity and quality
problems with the design, the code or the tests.
It is really very important to keep in mind that not all projects are subject to the same risks.
Finally, we should not forget that even test items can also have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area or that the test cases do
not exercise the critical areas of the system.
By using a test plan template like the IEEE 829 template shown earlier, you can remind yourself to consider and manage risks during the planning phase. It is worth repeating the risk analysis as the project proceeds, because risks identified early are educated guesses and some of those guesses might be wrong. Make sure that you plan to re-assess and adjust your risks at regular intervals in the project and make appropriate course corrections to the testing or the project itself.

Incident in software testing


 While executing a test, you might observe that the actual results vary from the expected results.
When the actual result is different from the expected result, it is called an incident, bug, defect, problem or issue.
 To be specific, we sometimes make a distinction between incidents and defects or bugs. An
incident is basically any situation where the system exhibits questionable behavior, but often we refer
to an incident as a defect only when the root cause is some problem in the item we are testing.
 Other causes of incidents include misconfiguration or failure of the test environment, corrupted
test data, bad tests, invalid expected results and tester mistakes.

Incident reports in software testing


After logging the incidents that occur in the field or after deployment of the system we also need some
way of reporting, tracking, and managing them.  It is most common to find defects reported against the
code or the system itself. However, there are cases where defects are reported against requirements and
design specifications, user and operator guides and tests also.
Why report the incidents?

There are many benefits of reporting the incidents as given below:


 In some projects, a very large number of defects are found. Even on smaller projects where 100
or fewer defects are found, it is very difficult to keep track of all of them unless you have a process
for reporting, classifying, assigning and managing the defects from discovery to final resolution.
 An incident report contains a description of the misbehavior that was observed and classification
of that misbehavior.
 As with any written communication, it helps to have clear goals in mind when writing. One
common goal for such reports is to provide programmers, managers and others with detailed
information about the behavior observed and the defect.
 Another is to support the analysis of trends in aggregate defect data, either for understanding
more about a particular set of problems or tests or for understanding and reporting the overall level
of system quality. Finally, defect reports, when analyzed over a project and even across projects, give
information that can lead to development and test process improvements.
 The programmers need the information in the report to find and fix the defects. Before that
happens, though, managers should review and prioritize the defects so that scarce testing and
developer resources are spent fixing and confirmation testing the most important defects.
While many of these incidents will be user error or some other behavior not related to a defect, some
percentage will be defects that have escaped from the quality assurance and testing activities.

The defect detection percentage, which compares field defects with test defects, is an important metric
of the effectiveness of the test process.
Here is an example of a DDP formula that would apply for calculating DDP for the last level of testing prior
to release to the field:
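The formula itself did not survive in these notes, so the commonly used form is reconstructed here as an assumption:

DDP = (defects found by the last level of testing) / (defects found by the last level of testing + defects found in the field after release) x 100%

For example, if the last test level found 90 defects and customers later reported 10 more, DDP = 90 / (90 + 10) = 90%.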

Often, it aids the effectiveness and efficiency of reporting, tracking and managing defects when the defect-tracking tool provides an ability to vary some of the information captured depending on what the defect was reported against.

Write a good incident report in software testing


 A good incident report is a technical document. Here are a few rules of thumb that can help you write a better incident report:
 First, use a careful, attentive approach to executing your tests. You never know when you are going to find a problem. If you are pounding on the keyboard while gossiping with office mates or thinking about a movie you just saw, you might miss some of the strange behaviors.
 You should also try to isolate the defect by making carefully chosen changes to the steps used to reproduce it. Isolating the defect helps guide the programmer to the problematic part of the system.
 Writing the incident report also helps increase your own knowledge of how the system works – and how it fails.
 Some test cases focus on boundary conditions, which may make it appear that a defect is not
likely to happen frequently in practice. It is always a good idea to look for more generalized
conditions that cause the failure to occur, rather than simply relying on the test case. This helps
prevent the infamous incident report response, ‘No real user is ever going to do that.’ It also cuts
down on the number of duplicate reports that get filed.
 Irregular or infrequent symptoms are a fact of life for some defects and it’s always discouraging
to have an incident report bounced back as ‘irreproducible’. So, it’s a good idea to try to reproduce
symptoms when you see them. If a defect is irregular, we would still report it, but we would be sure
to include as much information as possible, especially how many times we tried to reproduce it and
how many times it did in fact occur.
 As there is a lot of testing going on with the system during a test period, there are also lots of
other test results available. Comparing an observed problem against other test results and known
defects found is a good way to find and document additional information that the programmer is
likely to find very useful. For example, you might check for similar symptoms observed with other
defects, the same symptom observed with defects that were fixed in previous versions or similar (or
different) results seen in tests that cover similar parts of the system.
 Many readers of incident reports, especially the managers need to understand
the priority and severity of the defect in order to know the impact of the problem in the project.
 Most defect-tracking systems have a title or summary field in which the impact should also be
mentioned. Choice of words matters a lot in incident reports. You should be clear and unambiguous.
You should also be unbiased, neutral and fact-focused keeping in mind the testing-related
interpersonal issues as discussed in Chapter 1.
 Finally, keeping the report brief and to the point helps to keep people's attention and avoids losing them in the details.
 As a last rule of thumb for incident reports, we recommend that you use a review process for all reports that are filed. It works well to have the lead tester review reports, and we have also allowed testers – at least experienced ones – to review other testers' reports. Reviews are proven quality assurance techniques and incident reports are important project deliverables.

Test status report


Where test progress monitoring is about gathering detailed test data, test status reporting is about effectively communicating our findings to other project stakeholders.
Test status reporting is often about enlightening and influencing stakeholders about test results. This
involves analyzing the information and metrics available to support conclusions, recommendations, and
decisions about how to guide the project forward.
For example, if you are doing risk-based testing, the main test objective is to subject the important product risks to the appropriate extent of testing. A chart or table that shows test coverage and unresolved defects against the main product risk areas you identified during risk analysis allows you to report progress against those risks. If you are doing requirements-based testing, you could measure coverage in terms of requirements or functional areas instead of risks. On some projects, the test team must create a test summary report. Such a report, created either at a key milestone or at the end of a test level, describes the results of a given level or phase of testing.
The IEEE 829 Standard Test Summary Report Template provides a useful guideline for such a report. You might also discuss the important events (especially difficult ones) that occurred during testing, the objectives of testing and whether they were achieved, the test strategy followed and how well it worked, and the overall effectiveness of the test effort.

Defect or bugs or faults in software testing


A defect is a flaw, or bug, in the application that has been created. A programmer can make mistakes or errors while designing and building the software. These mistakes or errors mean that there are flaws in the software; these flaws are called defects.

Failure in software testing


If, under certain environments and situations, defects in the application or product are executed, then the system will produce wrong results, causing a failure.
Not all defects result in failures; some may stay inactive in the code and we may never notice them. Example: defects in dead code will never result in failures.

It is not just defects that give rise to failures. Failures can also be caused by other factors, such as:

 Environmental conditions: a radiation burst, a strong magnetic or electronic field, or pollution could cause faults in hardware or firmware. Those faults might prevent or change the execution of software.
 Failures may also arise because of human error in interacting with the software, perhaps a wrong input value
being entered or an output being misinterpreted.
 Finally failures may also be caused by someone deliberately trying to cause a failure in the system.

When do defects in software testing arise


To see when defects in software testing arise, let us take a small example with four requirements.
We can see that Requirement 1 is implemented correctly – we understood the customer’s requirement,
designed correctly to meet that requirement, built correctly to meet the design, and so deliver that requirement
with the right attributes: functionally, it does what it is supposed to do and it also has the right non-functional
attributes, so it is fast enough, easy to understand and so on.
With the
other requirements, errors have been made at different stages. Requirement 2 is fine until the software is
coded, when we make some mistakes and introduce defects. Probably, these are easily spotted and corrected
during testing, because we can see the product does not meet its design specification.
The defects introduced in Requirement 3 are harder to deal with; we built exactly what we were told to but
unfortunately the designer made some mistakes so there are defects in the design. Unless we check against
the requirements definition, we will not spot those defects during testing. When we do notice them they will be
hard to fix because design changes will be required.
The defects in Requirement 4 were introduced during the definition of the requirements; the product has been designed and built to meet that flawed requirements definition. If we test only that the product meets its requirements and design, it will pass its tests but may be rejected by the user or customer. Defects reported by the customer in
acceptance test or live use can be very costly. Unfortunately, requirements and design defects are not rare;
assessments of thousands of projects have shown that defects introduced during requirements and design
make up close to half of the total number of defects.

Cost of defects in software testing


If the error is made and the consequent defect is detected in the requirements phase then it is relatively cheap
to fix it.
Similarly if an error is made and the consequent defect is found in the design phase then the design can be
corrected and reissued with relatively little expense.
The same applies to the construction phase. If, however, a defect is introduced in the requirement specification and it is not detected until acceptance testing or even once the system has been implemented, then it will be much more expensive to fix. This is because rework will be needed in the specification and design before changes can be made in construction; because one defect in the requirements may well propagate into several places in the design and code; and because all the testing work done to that point will need to be repeated in order to reach the confidence level in the software that we require.
It is quite often the case that defects detected at a very late stage, depending on how serious they are, are not
corrected because the cost of doing so is too expensive.

Defect Life Cycle or a Bug lifecycle in software testing


The defect life cycle is the cycle which a defect goes through during its lifetime. It starts when a defect is found and ends when the defect is closed, after ensuring it is not reproduced. The defect life cycle is related to the bugs found during testing.
The bug has different states in this life cycle, which can also be shown diagrammatically.

A bug or defect life cycle includes the following steps or statuses (a small sketch of typical status transitions follows the list):

 New:  When a defect is logged and posted for the first time, its state is given as 'New'.
 Assigned:  After the tester has posted the bug, the test lead approves that the bug is genuine and assigns the bug to the corresponding developer and developer team. Its state is given as 'Assigned'.
 Open:  At  this state the developer has started analyzing and working on the defect fix.
 Fixed:  When developer makes necessary code changes and verifies the changes then he/she can make bug
status as ‘Fixed’ and the bug is passed to testing team.
 Pending retest:  After fixing the defect, the developer gives that particular code to the tester for retesting. Here the testing is pending on the testers' end, hence the status is 'Pending retest'.
 Retest:  At this stage the tester retests the changed code which the developer has given to him, to check whether the defect is fixed or not.
 Verified:  The tester tests the bug again after it got fixed by the developer. If the bug is not present in the
software, he approves that the bug is fixed and changes the status to “verified”.
 Reopen:  If the bug still exists even after the bug is fixed by the developer, the tester changes the status to
“reopened”. The bug goes through the life cycle once again.
 Closed:  Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the
software, he changes the status of the bug to “closed”. This state means that the bug is fixed, tested and
approved.
 Duplicate: If the bug is reported twice or two bugs describe the same issue, then one bug's status is changed to 'Duplicate'.
 Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is
changed to “rejected”.
 Deferred: The bug, changed to deferred state means the bug is expected to be fixed in next releases. The
reasons for changing the bug to this state have many factors. Some of them are priority of the bug may be low,
lack of time for the release or the bug may not have major effect on the software. 
 Not a bug:  The state is given as 'Not a bug' if there is no change in the functionality of the application. For example, if the customer asks for some change in the look and feel of the application, like a change of colour of some text, then it is not a bug but just a change in the looks of the application.
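The exact state names and allowed transitions differ from one bug-tracking tool to another. Purely as an illustration of the flow described above, the small Python sketch below encodes one plausible set of transitions (the names follow the list above and are not taken from any particular tool):

# Illustrative sketch of typical bug life cycle transitions.
# State names follow the list above; real tools differ in names and rules.
TRANSITIONS = {
    "New": ["Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"],
    "Assigned": ["Open"],
    "Open": ["Fixed", "Rejected", "Deferred"],
    "Fixed": ["Pending retest"],
    "Pending retest": ["Retest"],
    "Retest": ["Verified", "Reopen"],
    "Reopen": ["Assigned"],
    "Verified": ["Closed"],
    "Closed": [],
}

def can_move(current, target):
    # True if the status change is allowed in this sketch.
    return target in TRANSITIONS.get(current, [])

print(can_move("Retest", "Reopen"))  # True: defect still present after the fix
print(can_move("New", "Closed"))     # False: a new bug must be triaged first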

Difference between Severity and Priority


There are two key attributes of a defect in software testing. They are:
 Severity

 Priority

1)  Severity:
It is the extent to which the defect can affect the software. In other words, it defines the impact that a given defect has on the system. For example, if an application or web page crashes when a remote link is clicked, clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.
Severity can be of following types:

 Critical: The defect that results in the termination of the complete system or one or more component of the
system and causes extensive corruption of the data. The failed function is unusable and there is no acceptable
alternative method to achieve the required results then the severity will be stated as critical.
 Major: The defect that results in the termination of the complete system or one or more component of the
system and causes extensive corruption of the data. The failed function is unusable but there exists an
acceptable alternative method to achieve the required results then the severity will be stated as major.
 Moderate: The defect that does not result in the termination, but causes the system to produce incorrect,
incomplete or inconsistent results then the severity will be stated as moderate.
 Minor: The defect that does not result in the termination and does not damage the usability of the system and
the desired results can be easily obtained by working around the defects then the severity is stated as minor.
 Cosmetic: The defect that is related to the enhancement of the system, where the changes are related to the look and feel of the application; in this case the severity is stated as cosmetic.
2)  Priority:
Priority defines the order in which we should resolve a defect. Should   we fix it now, or can it wait? This priority
status is set by the tester to the developer mentioning the time frame to fix the defect. If high priority is
mentioned then the developer has to fix it at the earliest. The priority status is set based on the customer
requirements. For example: If the company name is misspelled in the home page of the website, then the
priority is high and severity is low to fix it.
Priority can be of following types:

 Low: The defect is an irritant which should be repaired, but the repair can be deferred until after more serious defects have been fixed.
 Medium: The defect should be resolved in the normal course of development activities. It can wait until a new
build or version is created.
 High: The defect must be resolved as soon as possible because the defect is affecting the application or the
product severely. The system cannot be used until the repair has been done.
Few very important scenarios related to the severity and priority which are asked during the interview:
High Priority & High Severity: An error which occurs on the basic functionality of the application and will not
allow the user to use the system. (Eg. A site maintaining the student details, on saving record if it, doesn’t allow
to save the record then this is high priority and high severity bug.)
High Priority & Low Severity: 
The spelling mistakes that happens on the cover page or heading or title of an application.
High Severity & Low Priority:
An error which occurs on the functionality of the application (for which there is no workaround) and will not
allow the user to use the system but on click of link which is rarely used by the end user.
Low Priority and Low Severity:
 Any cosmetic or spelling issues which is within a paragraph or in the report (Not on cover page, heading, title).

How to write effective Test cases, procedures and definitions


Writing effective test cases is a skill and that can be achieved by some experience and in-depth study
of the application on which test cases are being written.
What is a test case?
“A test case has components that describes an input, action or event and an expected response, to
determine if a feature of an application is working correctly.” Definition by Glossary
There are levels into which each test case will fall in order to avoid duplication of effort.
Level 1: In this level you will write the basic test cases from the available specification and user
documentation.
Level 2: This is the practical stage in which writing test cases depend on actual functional and system flow of
the application.
Level 3: This is the stage in which you will group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, usually a maximum of 10.
Level 4: Automation of the project. This will minimize human interaction with the system and thus QA can focus on currently updated functionality to test rather than remaining busy with regression testing.
So you can observe a systematic growth from no testable items to an automation suite.

Why we write test cases?


The basic objective of writing test cases is to validate the testing coverage of the application. If you are working in any CMMi company then you will strictly follow test case standards. So writing test cases brings some sort of standardization and minimizes the ad-hoc approach in testing.
How to write test cases?
Here is a simple test case format
Fields in test cases:
Test case id:
Unit to test: What to be verified?
Assumptions:
Test data: Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:
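As an illustration only, a hypothetical filled-in test case for a login screen might look like this (the application, data and results are invented for the example):

Test case id: TC_LOGIN_001
Unit to test: Login with valid credentials
Assumptions: The user "demo_user" already exists and is active
Test data: username = demo_user, password = Demo@123
Steps to be executed: 1. Open the login page. 2. Enter the username and password. 3. Click the Login button.
Expected result: The user is redirected to the home page and a welcome message is displayed.
Actual result: As expected.
Pass/Fail: Pass
Comments: Re-run after every build that touches the authentication module.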

Tips for Writing Test Cases


One of the most frequent and major activities of a Software Tester (SQA/SQC person) is to write Test Cases. First of all, kindly keep in mind that all this discussion is about 'Writing Test Cases', not about designing/defining/identifying TCs.

There are some important and critical factors related to this major activity. Let us have a bird's eye view of those factors first.
a. Test Cases are prone to regular revision and update:
We live in a continuously changing world, and software is not immune to change either. The same holds good for requirements, and this directly impacts the test cases. Whenever requirements are altered, TCs need to be updated. Yet it is not only a change in requirements that may cause revision and updates to TCs.
During the execution of TCs, many ideas arise in the mind; many sub-conditions of a single TC cause updates and even additions of TCs. Moreover, during regression testing several fixes and/or ripples demand revised or new TCs.

b. Test Cases are prone to distribution among the testers who will execute these:
It is rarely the case that a single tester executes all the TCs. Normally there are several testers who test different modules of a single application. So the TCs are divided among them according to the areas of the application under test that they own. Some TCs related to integration of the application may be executed by multiple testers, while some may be executed only by a single tester.

c. Test Cases are prone to clustering and batching:


It is normal and common that TCs belonging to a single test scenario demand execution in some specific sequence or in the form of a group. Some TCs may be prerequisites of other TCs. Similarly, according to the business logic of the AUT, a single TC may contribute to several test conditions and a single test condition may consist of multiple TCs.

d. Test Cases have tendency of inter-dependence:


This is also an interesting and important behavior of TCs: they may be interdependent on each other. In medium to large applications with complex business logic, this tendency is more visible.

The clearest area of any application where this behavior can definitely be observed is the
interoperability between different modules of same or even different applications. Simply speaking,
wherever the different modules or applications are interdependent, the same behavior is reflected in the
TCs.

Website Cookie Testing, Test cases for testing web application cookies
What is Cookie?

A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.
Why Cookies are used?

Cookies are nothing but the user's identity and are used to track where the user navigated throughout the web site pages. The communication between the web browser and the web server is stateless.
For example, if you are accessing http://www.example.com/1.html then the web browser will simply query the example.com web server for the page 1.html. The next time you type http://www.example.com/2.html, a new request is sent to the example.com web server for the 2.html page, and the web server does not know anything about whom the previous page 1.html was served to.

What if you want the previous history of this user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where cookies come into the picture. Cookies serve the purpose of maintaining the user's interactions with the web server.
How cookies work?

The HTTP protocol used to exchange information files on the web is used to maintain cookies. There are two types of HTTP protocol: stateless HTTP and stateful HTTP. The stateless HTTP protocol does not keep any record of previously accessed web page history, while the stateful HTTP protocol does keep some history of previous web browser and web server interactions, and this is what cookies use to maintain the user interactions.
Whenever a user visits a site or page that is using a cookie, a small piece of code inside that HTML page (generally a call to some language script to write the cookie, like cookies in JavaScript, PHP or Perl) writes a text file on the user's machine called a cookie.
Here is one example of the Set-Cookie directive used to write a cookie; it is typically generated by a script invoked from the HTML page:
Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the repeat visit of the same user on that domain. The expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.
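As a small illustration of how a tester might check this behavior, the sketch below uses Python with the third-party requests library to confirm that the site under test writes a cookie on the first response and sends it back on the next request; the URLs are placeholders, not taken from any real test plan:

# Sketch: verify that a cookie written on the first visit is sent back later.
# Requires the third-party 'requests' package; URLs are placeholders.
import requests

with requests.Session() as session:
    session.get("http://www.example.com/1.html")
    cookies = session.cookies.get_dict()
    print("Cookies written on first visit:", cookies)

    # On the next request the same cookies are sent back automatically,
    # which is how the server recognises the returning user.
    session.get("http://www.example.com/2.html")
    if not cookies:
        print("No cookies were set - investigate before testing cookie behaviour")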

Generally two types of cookies are written on user machine.

1) Session cookies: 

This cookie is active until the browser that invoked the cookie is open. When we close the browser, this session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: 

These cookies are written permanently on the user's machine and last for months or years.
Where cookies are stored?

When any web application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where the cookies get stored depends on the browser. Different browsers store cookies in different paths. E.g. Internet Explorer stores cookies at the path "C:\Documents and Settings\Default User\Cookies"
Here the “Default User” can be replaced by the current user you logged in as. Like “Administrator”, or
user name like “ranga” etc.
The cookie path can easily be found by navigating through the browser options. In the Mozilla Firefox browser you can even see the cookies in the browser options itself. Open the Mozilla browser, click on Tools->Options->Privacy and then the "Show cookies" button.
How cookies are stored?

Let's take the example of a cookie written by rediff.com on the Mozilla Firefox browser:


On Mozilla Firefox browser when you open the page rediff.com or login to your rediffmail account, a
cookie will get written on your Hard disk. To view this cookie simply click on “Show cookies” button
mentioned on above path. Click on Rediff.com site under this cookie list. You can see different cookies
written by rediff domain with different names.
Site: Rediff.com Cookie name: RMID
Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM
Applications where cookies can be used:
1) To implement shopping cart:
Cookies are used for maintaining an online ordering system. Cookies remember what the user wants to buy. What if the user adds some products to their shopping cart and then, for some reason, does not want to buy those products this time and closes the browser window? The next time the same user visits the purchase page, he can see all the products he added to the shopping cart on his last visit.
2) Personalized sites:

When users visit certain pages they are asked which pages they don't want to visit or display. The user's options get stored in a cookie and, as long as the user is online, those pages are not shown to him.
3) User tracking:
 
To track number of unique visitors online at particular time.
4) Marketing:

Some companies use cookies to display advertisements on user machines. Cookies control these
advertisements. When and which advertisement should be shown? What is the interest of the user?
Which keywords he searches on the site? All these things can be maintained using cookies.
5) User sessions:

Cookies can track user sessions to particular domain using user ID and password.
Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, then a site relying on cookies will be completely disabled and cannot perform any operation, resulting in loss of site traffic.
2) Too many Cookies:

If you are writing too many cookies on every page navigation and the user has turned on the option to warn before writing a cookie, this could turn the user away from your site.

Web Testing
Let's first have a web testing checklist.

1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
1) Functionality Testing:
Test for all the links in the web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookies.

Check all the links:


 Test the outgoing links from all the pages from specific domain under test.
 Test all internal links.
 Test links jumping on the same pages.
 Test links used to send the email to admin or other users from web pages.
 Test to check if there are any orphan pages.
 Lastly in link checking, check for broken links in all the above-mentioned links (a minimal automated link-checking sketch follows below).
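A minimal sketch of automating this check, assuming Python with the third-party requests and beautifulsoup4 packages; the start page is a placeholder and the script only checks links found on that single page rather than crawling the whole site:

# Sketch: report broken links on one page of the site under test.
# Assumes the 'requests' and 'beautifulsoup4' packages are installed.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE_UNDER_TEST = "http://www.example.com/"  # placeholder start page

html = requests.get(PAGE_UNDER_TEST, timeout=10).text
links = {urljoin(PAGE_UNDER_TEST, a["href"])
         for a in BeautifulSoup(html, "html.parser").find_all("a", href=True)}

for link in sorted(links):
    try:
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        print("BROKEN", link, exc)
        continue
    if status >= 400:
        print("BROKEN", link, "HTTP", status)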
Test forms in all pages:

Forms are an integral part of any web site. Forms are used to get information from users and to keep interacting with them. So what should be checked on these forms?
 First check all the validations on each field.
 Check the default values of the fields.
 Check wrong inputs to the fields in the forms.
 Check options to create, delete, view or modify forms, if any.
Database testing:

Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify the forms or do any DB-related functionality.
Check whether all the database queries are executing correctly, and whether data is retrieved and updated correctly. Another aspect of database testing is load on the DB; we will address this under web load or performance testing below.
2) Usability Testing:
Test for navigation:
Navigation means how the user surfs the web pages, using different controls like buttons and boxes, or how the user uses the links on the pages to surf different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they satisfy their purpose.
The main menu should be provided on each page. It should be consistent.
Content checking: 

Content should be logical and easy to understand. Check for spelling errors. Use of dark colours annoys users and should not be used in the site theme. You can follow standards that are used for web page and content building. These are commonly accepted standards, like those mentioned above about annoying colours, fonts, frames etc.
Content should be meaningful. All the anchor text links should be working properly. Images should be placed properly with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them during UI testing.
Other user information for user help:

Like the search option, sitemap, help files etc. The sitemap should be present with all the links in the web site and a proper tree view of the navigation. Check all links on the sitemap.
The "search in the site" option will help users find the content pages they are looking for easily and quickly. These are all optional items and, if present, should be validated.
3) Interface Testing:

The main interfaces are:


Web server and application server interface
Application server and Database server interface.
Check that all the interactions between these servers are executed properly and that errors are handled properly. If the database or web server returns an error message for any query from the application server, then the application server should catch and display these error messages appropriately to the users. Check what happens if the user interrupts any transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:

Compatibility of your web site is a very important testing aspect. These are the compatibility tests to be executed:
 

 Browser compatibility
 Operating system compatibility
 Mobile browsing
 Printing options
Browser compatibility:

In my web-testing career I have found this to be the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks or validations, then give more stress to browser compatibility testing of your web application.
Test the web application on different browsers, like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.
OS compatibility:

Some functionality in your web application may not be compatible with all operating systems. All new technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available in all operating systems.
Test your web application on different operating systems like Windows, Unix, MAC, Linux, Solaris with
different OS flavors.
Mobile browsing:

This is a new technology age, and mobile browsing is growing. Test your web pages on mobile browsers. Compatibility issues may exist on mobile.
Printing options:

If you are giving page-printing options then make sure fonts, page alignment and page graphics are getting printed properly. Pages should fit the paper size or the size mentioned in the printing option.
5) Performance testing:

A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speeds.
In web load testing, test whether many users can access or request the same page at once. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages etc.
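Real load tests are normally run with dedicated tools such as JMeter or LoadRunner. Purely to illustrate the idea, the sketch below uses Python's standard library together with the requests package to fire a number of simultaneous requests at one page and report failures and the slowest response time; the URL and the number of virtual users are made-up values:

# Sketch: send simultaneous requests to one page and report response times.
# Illustrative only; real load testing is usually done with dedicated tools.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

PAGE = "http://www.example.com/"  # placeholder page under load
VIRTUAL_USERS = 20                # made-up number of simultaneous users

def one_request(_):
    start = time.perf_counter()
    status = requests.get(PAGE, timeout=30).status_code
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(one_request, range(VIRTUAL_USERS)))

failures = [s for s, _ in results if s >= 400]
slowest = max(t for _, t in results)
print(len(results), "requests,", len(failures), "failures, slowest", round(slowest, 2), "s")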
Stress testing:
 
Generally stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress, and to check how the system reacts to the stress and how it recovers from crashes.
Stress is generally applied to input fields, login and sign-up areas.
In web performance testing, web site functionality on different operating systems and different hardware platforms is also checked for software and hardware memory leakage errors.

6) Security Testing:

Following are some test cases for web security testing:

 Test by pasting internal url directly into browser address bar without login. Internal pages should not
open.
 If you are logged in using a username and password and browsing internal pages, then try changing URL options directly. I.e., if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view other users' stats (see the sketch after this list).
 Try some invalid inputs in input fields like login username, password, input text boxes. Check the
system reaction on all invalid inputs.
 Web directories or files should not be accessible directly unless given download option.
 Test the CAPTCHA against automated script logins.
 Test whether SSL is used for security measures. If used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
 All transactions, error messages, security breach attempts should get logged in log files somewhere on
web server.
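As a hedged illustration of the first two checks in the list above, the sketch below requests an internal page without logging in and then tries another user's site ID in the URL; the URLs, the parameter name and the acceptable status codes are assumptions that would need adjusting for the real application under test:

# Sketch: two simple security checks. URLs, parameters and expected status
# codes are placeholders; adjust them for the real application under test.
import requests

INTERNAL_PAGE = "http://www.example.com/admin/reports"         # placeholder internal URL
OTHER_USERS_STATS = "http://www.example.com/stats?siteID=999"  # ID not owned by the tester

# 1) Internal pages should not open without login: expect a redirect to the
#    login page or an explicit 401/403, never the protected content.
resp = requests.get(INTERNAL_PAGE, allow_redirects=False, timeout=10)
print("No-login access:", "OK" if resp.status_code in (301, 302, 401, 403) else "FAIL")

# 2) Changing the siteID parameter to another user's ID should be denied even
#    for a logged-in session (the session cookie is omitted here for brevity).
resp = requests.get(OTHER_USERS_STATS, allow_redirects=False, timeout=10)
print("Other user's stats:", "OK" if resp.status_code in (301, 302, 401, 403) else "FAIL")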
What is the difference between client-server testing and web based testing and what are things that
we need to test in such applications?

Projects are broadly divided into two types:


 2 tier applications
 3 tier applications
CLIENT / SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN).
Here we will have a front-end and a back-end.
The application launched on the front-end will have forms and reports which monitor and manipulate data.

E.g: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder etc.,
The backend for these applications would be MS Access, SQL Server, Oracle, Sybase, Mysql, Quadbase
How will you report this bug effectively?

Here is the sample bug report for above mentioned example:


(Note that some ‘bug report’ fields might differ depending on your bug tracking system)
SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005
Description:
The application crashes on clicking the SAVE button while creating a new user, hence a new user cannot be created in the application.
 

Steps To Reproduce:

1) Logon into the application


2) Navigate to the Users Menu > New User
3) Filled all the user information fields
4) Clicked on ‘Save’ button
5) Seen an error page “ORA1090 Exception: Insert values Error…”
6) See the attached logs for more information (Attach more logs related to bug..IF any)
7) And also see the attached screenshot of the error page.
Expected result: On clicking the SAVE button, the user should be shown a success message "New User has been created successfully".
(Attach ‘application crash’ screen shot.. IF any)
Save the defect/bug in the BUG TRACKING TOOL.  You will get a bug id, which you can use for further
bug reference.
A default 'New bug' mail will go to the respective developer and the default module owner (team leader or manager) for further action.

How to write software Testing Weekly Status Report

Follow the below template:


Prepared By:
Project:
Date of preparation:
Status:
A) Issues:
Issues holding the QA team from delivering on schedule:
Project:
Issue description:
Possible solution:
Issue resolution date:
You can mark these issues in red colour. These are the issues that require management's help in resolving them.
Issues that management should be aware:
These are the issues that do not hold the QA team back from delivering on time, but management should be aware of them. Mark these issues in yellow colour. You can use the same template above to report them.
Project accomplishments:

Mark them in Green colour. Use below template.


Project:
Accomplishment:
Accomplishment date:
B) Next week Priorities:

List next week's actionable items in two categories:


1) Pending deliverables: 

Mark them in blue colour: these are the previous week's deliverables which should get released as soon as possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:
2) New tasks:

List all of next week's new tasks here. You can use black colour for this.
Project:
Scheduled Task:
Date of release:
C) Defect status:
 

Active defects:

List all active defects here with Reporter, Module, Severity, priority, assigned to.
Closed Defects:

List all closed defects with Reporter, Module, Severity, priority, assigned to.
Test cases:

List the total number of test cases written, test cases passed, test cases failed and test cases yet to be executed.
This template should give you the overall idea of the status report. Don't ignore the status report. Even if your managers are not forcing you to write these reports, they are very important for your future work assessment.

What are the qualities of a good software bug report?

Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to distinguish between an average bug report and a good bug report. How do you distinguish a good bug report from a bad one? It's simple: apply the following characteristics and techniques when reporting a bug.
1) Having clearly specified bug number:

Always assign a unique number to each bug report. This will help to identify the bug record. If you are
using any automated bug-reporting tool then this unique number will be generated automatically each
time you report the bug. Note the number and brief description of each bug you reported.
2) Reproducible:

If your bug is not reproducible it will never get fixed. You should clearly mention the steps to reproduce the bug. Do not assume or skip any reproduction step. A bug described step by step is easy to reproduce and fix.
3) Be Specific:

Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in a minimum of words yet in an effective way. Do not combine multiple problems even if they seem to be similar. Write different reports for each problem.
How to Report a Bug?

Use following simple Bug report template:


This is a simple bug report format. It may vary depending on the bug reporting tool you are using. If you are writing the bug report manually, then some fields, like the bug number, need to be specifically mentioned and assigned manually.
Reporter: Your name and email address.
Product: In which product you found this bug.
Version: The product version if any.
Component: These are the major sub modules of the product.
Platform: Mention the hardware platform where you found this bug. The various platforms like ‘PC’,
‘MAC’, ‘HP’, ‘Sun’ etc.
Operating system: 

Mention all operating systems where you found the bug. Operating systems like Windows, Linux, Unix,
SunOS, Mac OS. Mention the different OS versions also if applicable like Windows NT, Windows 2000,
Windows XP etc.
Priority:

When bug should be fixed? Priority is generally set from P1 to P5. P1 as “fix the bug with highest
priority” and P5 as ” Fix when time permits”.
Severity:

This describes the impact of the bug.


Types of Severity:
 Blocker: No further testing work can be done.
 Critical: Application crash, Loss of data.
 Major: Major loss of function.
 Minor: minor loss of function.
 Trivial: Some UI enhancements.
 Enhancement: Request for new feature or some enhancement in existing one.
Status:

When you are logging the bug in any bug tracking system then by default the bug status is ‘New’.
Later on bug goes through various stages like Fixed, Verified, Reopen, Won’t Fix etc.

Assign To: 

If you know which developer is responsible for the particular module in which the bug occurred, then you can specify the email address of that developer. Otherwise keep it blank; this will assign the bug to the module owner, or the manager will assign the bug to a developer. Possibly add the manager's email address to the CC list.
URL:

The page url on which bug occurred.


 

Summary:

A brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:

Description of bug. Use following fields for description field:


 Reproduce steps: Clearly mention the steps to reproduce the bug.
 Expected result: How application should behave on above mentioned steps.
 Actual result: What is the actual result on running above steps i.e. the bug behavior.
These are the important steps in bug report. You can also add the “Report type” as one more field which will
describe the bug type.

The report types are typically:

1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
Some Bonus tips to write a good bug report:

1) Report the problem immediately:

If you find any bug while testing, do not wait to write a detailed bug report later. Instead, write the bug report immediately. This will ensure a good and reproducible bug report. If you decide to write the bug report later on, then chances are high that you will miss important steps in your report.
2) Reproduce the bug three times before writing bug report:

Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file a bug mentioning the intermittent nature of the bug.
3) Test the same bug occurrence on other similar module:

Sometimes developers use the same code for different, similar modules. So chances are high that a bug in one module can occur in other similar modules as well. You can even try to find a more severe version of the bug you found.
4) Write a good bug summary:

The bug summary will help developers to quickly analyze the nature of the bug. A poor quality report will unnecessarily increase development and testing time. Communicate well through your bug report summary. Keep in mind that the bug summary is used as a reference to search for the bug in the bug inventory.
5) Read bug report before hitting Submit button:

Read all the sentences, wording and steps used in the bug report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.
6) Do not use Abusive language:

It's nice that you did good work and found a bug, but do not use this credit for criticizing the developer or attacking any individual.

Capability Maturity Model


The Capability Maturity Model (CMM) is a benchmark for measuring the maturity of an organization's software process. It is a methodology used to develop and refine an organization's software development process. CMM can be used to assess an organization against a scale of five process maturity levels based on certain Key Process Areas (KPA). It describes the maturity of the company based upon the projects the company is dealing with and its clients. Each level ranks the organization according to its standardization of processes in the subject area being assessed.
A maturity model provides:

 A place to start
 The benefit of a community’s prior experiences
 A common language and a shared vision
 A framework for prioritizing actions
 A way to define what improvement means for your organization
In CMMI models with a staged representation, there are five maturity levels designated by the numbers 1 through 5
as shown below:

 Initial
 Managed
 Defined
 Quantitatively Managed
 Optimizing
Maturity
levels consist of a predefined set of process areas. The maturity levels are measured by the achievement of
the specific and generic goals that apply to each predefined set of process areas. The following sections
describe the characteristics of each maturity level in detail.
Maturity Level 1 – Initial: 
Company has no standard process for software development. Nor does it have a project-tracking system that
enables developers to predict costs or finish dates with any accuracy.
In detail we can describe it as given below:
 At maturity level 1, processes are usually ad hoc and chaotic.
 The organization usually does not provide a stable environment. Success in these organizations
depends on the competence and heroics of the people in the organization and not on the use of
proven processes.
 Maturity level 1 organizations often produce products and services that work, but the company has no standard process for software development and no project-tracking system that enables developers to predict costs or finish dates with any accuracy.
 Maturity level 1 organizations are characterized by a tendency to over commit, abandon processes in
the time of crisis, and not be able to repeat their past successes.
Maturity Level 2 – Managed:
Company has installed basic software management processes and controls. But there is no consistency or
coordination among different groups.
In detail we can describe it as given below:
 At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2
process areas. In other words, the projects of the organization have ensured that requirements are managed
and that processes are planned, performed, measured, and controlled.
 The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during
times of stress. When these practices are in place, projects are performed and managed according to their
documented plans.
 At maturity level 2, requirements, processes, work products, and services are managed. The status of the work
products and the delivery of services are visible to management at defined points.
 Commitments are established among relevant stakeholders and are revised as needed. Work products are
reviewed with stakeholders and are controlled.
 The work products and services satisfy their specified requirements, standards, and objectives.
Maturity Level 3 – Defined:
Company has pulled together a standard set of processes and controls for the entire organization so that
developers can move between projects more easily and customers can begin to get consistency from different
groups.
In detail we can describe it as given below:
 At maturity level 3, an organization has achieved all the specific and generic goals of the process areas assigned to maturity levels 2 and 3.
 At maturity level 3, processes are well characterized and understood, and are described in standards,
procedures, tools, and methods.
 A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process
descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be
quite different in each specific instance of the process (for example, on a particular project). At maturity level 3,
the standards, process descriptions, and procedures for a project are tailored from the organization’s set of
standard processes to suit a particular project or organizational unit.
 The organization’s set of standard processes includes the processes addressed at maturity level 2 and maturity
level 3. As a result, the processes that are performed across the organization are consistent except for the
differences allowed by the tailoring guidelines.
 Another critical distinction is that at maturity level 3, processes are typically described in more detail and more
rigorously than at maturity level 2.
 At maturity level 3, processes are managed more proactively using an understanding of the interrelationships of
the process activities and detailed measures of the process, its work products, and its services.
Maturity Level 4 – Quantitatively Managed:
In addition to implementing standard processes, company has installed systems to measure the quality of
those processes across all projects.
In detail we can describe it as given below:
 At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to
maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
 At maturity level 4 Sub-processes are selected that significantly contribute to overall process performance.
These selected sub-processes are controlled using statistical and other quantitative techniques.
 Quantitative objectives for quality and process performance are established and used as criteria in managing
processes. Quantitative objectives are based on the needs of the customer, end users, organization, and
process implementers. Quality and process performance are understood in statistical terms and are managed
throughout the life of the processes.
 For these processes, detailed measures of process performance are collected and statistically analyzed.
Special causes of process variation are identified and, where appropriate, the sources of special causes are
corrected to prevent future occurrences.
 Quality and process performance measures are incorporated into the organization’s measurement repository to
support fact-based decision making in the future.
 A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At
maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques,
and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
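To make the idea of statistically managed processes a little more concrete, the sketch below (in Python, using invented weekly defect-density numbers) computes simple 3-sigma control limits from a stable baseline and flags a later measurement that falls outside them as a possible special cause of variation. This is only an illustration of the kind of quantitative technique a level 4 organization might apply, not a prescribed CMMI practice.

```python
# Illustrative only: 3-sigma control limits over hypothetical defect-density data.
from statistics import mean, stdev

baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 4.3]   # assumed stable weeks
new_points = {"week 9": 4.2, "week 10": 7.5}           # latest measurements

centre = mean(baseline)
sigma = stdev(baseline)
lower, upper = centre - 3 * sigma, centre + 3 * sigma

for label, value in new_points.items():
    in_control = lower <= value <= upper
    verdict = "in control" if in_control else "possible special cause - investigate"
    print(f"{label}: {value:.1f} (limits {lower:.2f}..{upper:.2f}) -> {verdict}")
```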
Maturity Level 5 – Optimizing: 
Company has accomplished all of the above and can now begin to see patterns in performance over time, so it
can tweak its processes in order to improve productivity and reduce defects in software development across the
entire organization.
In detail we can describe it as given below:
 At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to
maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
 Processes are continually improved based on a quantitative understanding of the common causes of variation
inherent in processes.
 Maturity level 5 focuses on continually improving process performance through both incremental and innovative
technological improvements.
 Quantitative process-improvement objectives for the organization are established, continually revised to reflect
changing business objectives, and used as criteria in managing process improvement.
 The effects of deployed process improvements are measured and evaluated against the quantitative process-
improvement objectives. Both the defined processes and the organization’s set of standard processes are
targets of measurable improvement activities.
 Optimizing processes that are agile and innovative depends on the participation of an empowered workforce
aligned with the business values and objectives of the organization.
 The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to
accelerate and share learning. Improvement of the processes is inherently part of everybody’s role, resulting in
a cycle of continual improvement.
 A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At
maturity level 4, processes are concerned with addressing special causes of process variation and providing
statistical predictability of the results. Though processes may produce predictable results, the results may be
insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing
common causes of process variation and changing the process (that is, shifting the mean of the process
performance) to improve process performance (while maintaining statistical predictability) to achieve the
established quantitative process-improvement objectives.
Smoke Testing Vs Sanity Testing - Key Differences
 Smoke Testing is performed to ascertain that the critical functionalities of the program are working fine; Sanity Testing is done to check that the new functionality works and the reported bugs have been fixed.
 The objective of Smoke Testing is to verify the "stability" of the system in order to proceed with more rigorous testing; the objective of Sanity Testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
 Smoke Testing is performed by the developers or testers; Sanity Testing is usually performed by testers.
 Smoke Testing is usually documented or scripted; Sanity Testing is usually not documented and is unscripted.
 Smoke Testing is a subset of Regression Testing; Sanity Testing is a subset of Acceptance Testing.
 Smoke Testing exercises the entire system from end to end; Sanity Testing exercises only a particular component of the entire system.
 Smoke Testing is like a general health check-up; Sanity Testing is like a specialized health check-up.
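As an illustration of how this difference shows up in practice, the hypothetical sketch below tags tests with pytest markers so that a quick smoke suite (run on every new build with, say, `pytest -m smoke`) can be kept separate from a narrow sanity suite run after a bug fix (`pytest -m sanity`). The marker names and the stand-in application functions are assumptions made only for the example.

```python
import pytest

def launch_application():               # trivial stand-in for the real application
    return {"status": "up", "version": "1.0.42"}

def calculate_discount(total, percent): # stand-in for a component that was just fixed
    return total - total * percent / 100

@pytest.mark.smoke
def test_application_comes_up():
    # smoke: critical end-to-end condition verified on every new build
    assert launch_application()["status"] == "up"

@pytest.mark.sanity
def test_discount_fix():
    # sanity: focused check on the specific functionality that changed
    assert calculate_discount(200, 10) == 180
```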
Software Testing Life Cycle (STLC)

The different stages in the Software Test Life Cycle are: Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution and Test Cycle Closure.

Each of these stages has a definite Entry and Exit criteria, Activities and Deliverables associated with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but
practically this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the
different stages in STLC. Let's look into them in detail.
Requirement Analysis
During this phase, the test team studies the requirements from a
testing point of view to identify the testable requirements. The
QA team may interact with various stakeholders (Client,
Business Analyst, Technical Leads, System Architects, etc.) to
understand the requirements in detail. Requirements could be
either Functional (defining what the software must do) or
Non-Functional (defining system performance, security,
availability). Automation feasibility analysis for the given testing
project is also done in this stage.
Activities
 Identify types of tests to be performed. 
 Gather details about testing priorities and focus.
 Prepare Requirement Traceability Matrix (RTM).
 Identify test environment details where testing is supposed to
be carried out. 
 Automation feasibility analysis (if required).
Deliverables 
 RTM
 Automation feasibility report. (if applicable)
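To make the RTM deliverable concrete, here is a minimal, hypothetical sketch (in Python) of requirements mapped to the test cases that cover them; the IDs are invented purely for illustration, and real projects usually maintain the RTM in a spreadsheet or test management tool.

```python
# Hypothetical Requirement Traceability Matrix: requirement -> covering test cases.
rtm = {
    "REQ-001 User can log in":          ["TC-101", "TC-102"],
    "REQ-002 User can reset password":  ["TC-110"],
    "REQ-003 Admin can export reports": [],   # gap: no coverage yet
}

for requirement, test_cases in rtm.items():
    coverage = ", ".join(test_cases) if test_cases else "NO COVERAGE"
    print(f"{requirement:35} -> {coverage}")
```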
 
Test Planning
This phase is also called the Test Strategy phase. Typically, in this stage, a Senior QA manager will determine
the effort and cost estimates for the project and will prepare and finalize the Test Plan.
Activities
 Preparation of test plan/strategy document for various types of testing
 Test tool selection 
 Test effort estimation 
 Resource planning and determining roles and responsibilities.
 Training requirement
Deliverables 
 Test plan /strategy document.
 Effort estimation document.
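As a rough illustration of test effort estimation, the sketch below derives an estimate from the test case count, the average design and execution time per case, and a contingency buffer. All numbers are assumptions; real estimates typically combine techniques such as test point analysis and historical data.

```python
# Simplified, hypothetical effort estimation for the test plan.
test_case_count = 240            # assumed scope from requirement analysis
hours_design_per_case = 0.75     # assumed team averages
hours_execution_per_case = 0.50
contingency = 0.30               # buffer for retesting and defect management

base_hours = test_case_count * (hours_design_per_case + hours_execution_per_case)
total_hours = base_hours * (1 + contingency)

print(f"Base effort: {base_hours:.0f} person-hours")
print(f"With buffer: {total_hours:.0f} person-hours (~{total_hours / 8:.0f} person-days)")
```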
 
Test Case Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is
identified/created, reviewed and then reworked as well.
Activities
 Create test cases, automation scripts (if applicable)
 Review and baseline test cases and scripts 
 Create test data (If Test Environment is available)
Deliverables 
 Test cases/scripts 
 Test data
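The sketch below shows one hypothetical way a reviewed test case and its test data could be captured in a structured form; the field names and values are illustrative only, since most teams keep these artifacts in a test management tool.

```python
# Hypothetical structured test case plus separately maintained test data.
test_case = {
    "id": "TC-101",
    "title": "Valid user can log in",
    "requirement": "REQ-001",
    "preconditions": ["User account exists", "Application is reachable"],
    "steps": [
        ("Open the login page", "Login form is displayed"),
        ("Enter a valid username and password", "Credentials are accepted"),
        ("Click 'Sign in'", "User lands on the dashboard"),
    ],
    "priority": "High",
}

test_data = {"username": "qa_user_01", "password": "S3cret!"}  # sample test data

for number, (action, expected) in enumerate(test_case["steps"], start=1):
    print(f"Step {number}: {action} -> expected: {expected}")
print("Test data:", test_data)
```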
 

Test Environment Setup


The test environment determines the software and hardware conditions
under which a work product is tested. Test environment set-up is
one of the critical aspects of the testing process and can be done in
parallel with the Test Case Development stage. The test team
may not be involved in this activity if the
customer/development team provides the test environment, in
which case the test team is required to do a readiness check
(smoke testing) of the given environment.
Activities 
 Understand the required architecture, environment set-up
and prepare hardware and software requirement list for the
Test Environment. 
 Setup test Environment and test data 
 Perform smoke test on the build
Deliverables 
 Environment ready with test data set up 
 Smoke Test Results.
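A readiness check can be as simple as verifying that the deployed build responds before execution starts. The sketch below is a hypothetical example using only the Python standard library; the URLs are placeholders, and the real checklist would come from the environment set-up plan.

```python
# Hypothetical smoke/readiness check against the test environment.
from urllib.request import urlopen

CHECKS = [
    ("Application health", "https://test-env.example.com/health"),   # placeholder URL
    ("Login page",         "https://test-env.example.com/login"),    # placeholder URL
]

def environment_is_ready():
    ready = True
    for name, url in CHECKS:
        try:
            ok = urlopen(url, timeout=5).status == 200
        except OSError:          # connection refused, DNS failure, timeout, ...
            ok = False
        print(f"{name:20} {'OK' if ok else 'FAILED'}")
        ready = ready and ok
    return ready

if __name__ == "__main__":
    print("Environment ready" if environment_is_ready() else "Reject the build")
```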
 
Test Execution
During this phase, the test team will carry out the testing based on
the test plans and the test cases prepared. Bugs will be reported
back to the development team for correction, and retesting will
be performed.
Activities 
 Execute tests as per plan
 Document test results, and log defects for failed cases 
 Map defects to test cases in RTM 
 Retest the defect fixes 
 Track the defects to closure
Deliverables 
 Completed RTM with execution status 
 Test cases updated with results 
 Defect reports
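The hypothetical sketch below shows how execution results might be recorded and how failed cases map to defects (and, through the test case, back to requirements in the RTM); all IDs and the data layout are invented for illustration.

```python
# Hypothetical execution log linking test cases, requirements and defects.
execution_results = [
    {"test_case": "TC-101", "requirement": "REQ-001", "status": "PASS", "defect": None},
    {"test_case": "TC-102", "requirement": "REQ-001", "status": "FAIL", "defect": "BUG-501"},
    {"test_case": "TC-110", "requirement": "REQ-002", "status": "FAIL", "defect": "BUG-502"},
]

executed = len(execution_results)
passed = sum(1 for r in execution_results if r["status"] == "PASS")
open_defects = [r["defect"] for r in execution_results if r["defect"]]

print(f"Executed: {executed}, Passed: {passed}, Failed: {executed - passed}")
print("Defects to track to closure:", ", ".join(open_defects))
```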
 
Test Cycle Closure
The testing team will meet, discuss and analyze testing artifacts to identify strategies that have to be implemented
in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future
test cycles and to share best practices for any similar projects in the future.
Activities
 Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Quality and Critical Business
Objectives
 Prepare test metrics based on the above parameters. 
 Document the learning out of the project 
 Prepare Test closure report 
 Qualitative and quantitative reporting of quality of the work product to the customer. 
 Test result analysis to find out the defect distribution by type and severity.
Deliverables 
 Test Closure report 
 Test metrics
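As an illustration of the test metrics deliverable, the sketch below computes the defect distribution by severity and by type from a few invented defect records; a real report would pull this data from the defect tracking tool.

```python
# Hypothetical closure metrics: defect distribution by severity and by type.
from collections import Counter

defects = [
    {"id": "BUG-501", "severity": "High",   "type": "Functional"},
    {"id": "BUG-502", "severity": "Medium", "type": "UI"},
    {"id": "BUG-503", "severity": "High",   "type": "Functional"},
    {"id": "BUG-504", "severity": "Low",    "type": "Usability"},
]

print("Defects by severity:", dict(Counter(d["severity"] for d in defects)))
print("Defects by type:    ", dict(Counter(d["type"] for d in defects)))
```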
 
 
Finally, summary of STLC along with Entry and Exit Criteria
Requirement Analysis
Entry Criteria: Requirements documents are available (both functional and non-functional); acceptance criteria are defined; application architectural document is available.
Activities: Analyse business functionality to know the business modules and module-specific functionalities; identify all transactions in the modules; identify all the user profiles; gather user interface/authentication and geographic spread requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare the Requirement Traceability Matrix (RTM); identify test environment details where testing is supposed to be carried out; automation feasibility analysis (if required).
Exit Criteria: Signed-off RTM; test automation feasibility report signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

Test Planning
Entry Criteria: Requirements documents; Requirement Traceability Matrix; test automation feasibility document.
Activities: Analyze the various testing approaches available; finalize the best-suited approach; prepare the test plan/strategy document for various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.
Exit Criteria: Approved test plan/strategy document; effort estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation document.

Test Case Development
Entry Criteria: Requirements documents; RTM and test plan; automation analysis report.
Activities: Create test cases and automation scripts (where applicable); review and baseline test cases and scripts; create test data.
Exit Criteria: Reviewed and signed test cases/scripts; reviewed and signed test data.
Deliverables: Test cases/scripts; test data.

Test Environment Setup
Entry Criteria: System design and architecture documents are available; environment set-up plan is available.
Activities: Understand the required architecture and environment set-up; prepare the hardware and software requirement list; finalize connectivity requirements; prepare the environment setup checklist; set up the test environment and test data; perform smoke test on the build; accept/reject the build depending on the smoke test result.
Exit Criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test results.

Test Execution
Entry Criteria: Baselined RTM, test plan and test cases/scripts are available; test environment is ready; test data set-up is done; unit/integration test report for the build to be tested is available.
Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases, if necessary; map defects to test cases in the RTM; retest the defect fixes; regression testing of the application; track the defects to closure.
Exit Criteria: All planned tests are executed; defects are logged and tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Test Cycle Closure
Entry Criteria: Testing has been completed; test results are available; defect logs are available.
Activities: Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Quality and Critical Business Objectives; prepare test metrics based on the above parameters; document the learning out of the project; prepare the test closure report; qualitative and quantitative reporting of the quality of the work product to the customer; test result analysis to find out the defect distribution by type and severity.
Exit Criteria: Test closure report signed off by the client.
Deliverables: Test closure report; test metrics.