Test Plan
<Client name>
<Confidentiality>
<Project name>
[Note: The following template is provided for use within Gecko Solutions. Text enclosed in square brackets and displayed in blue italic is included to provide guidance to the author and should be deleted before publishing the document. Text enclosed in square brackets and displayed in brown regular font represents an example - it can be modified to match the specific project and kept in the document, or deleted.]
[To customize automatic fields in Microsoft Word (which display a gray background when selected), click the Microsoft Office Button, point to Prepare, and then click Properties. Select the designated field and update it, or use Document Properties > Advanced properties > Custom tab to update the values. Each field can be updated by selecting the Update Field option from the right-click menu when positioned on an automated field area.]
DOCUMENT REVISIONS
CHANGES
Date Version Author Description
DISTRIBUTION
Name Position
Table of contents
1. INTRODUCTION
1.1 PURPOSE OF DOCUMENT
1.2 PROJECT IDENTIFICATION
1.3 ROLES AND RESPONSIBILITIES
1.4 TARGET PLATFORM AND SYSTEM ARCHITECTURE
1.5 ENVIRONMENTS
1.6 TESTING PHASES
Entry criteria
Exit criteria
Testing Schedule
2. SCOPE
2.1 SCOPE OF THE TESTING
2.2 DEFINITION OF DONE
2.3 OUT OF THE SCOPE
2.4 ASSUMPTIONS, CONSTRAINTS AND RISKS
2.5 RISK MITIGATION
Before the project starts
During the project
After the project finished
3. ISSUE TRACKING
3.1 ISSUE TRACKING SYSTEM
3.2 ISSUE WORKFLOW
3.3 ISSUE REPORTING
4. TEST STRATEGY
4.1 TESTING TYPES
Functional Testing
Non-Functional Testing
Data and Database Integrity Testing
Performance, Load and Stress Testing
Security and Access Control Testing
Installation Testing
Structural Testing
Change related Testing
ADDITIONAL INFORMATION
1. Introduction
1.1 Purpose of document
The purpose of the Test Plan document is to define the project’s approach to testing - the testing strategy. The strategy looks at the characteristics of the system to be built and plans the breadth and depth of the quality assurance effort. The Testing Strategy will influence tasks related to test planning, test types, test script development, and test execution. It answers:
- What – which aspects of the software system are tested, and what are we testing?
- How – how are tests prepared, executed and documented?
- When – at what moment does testing take place?
1.4 Target platform and system architecture
[Examples of target platforms:
- Windows XP
- Linux
The testing target is a native mobile application implemented for Android/iOS devices. The application is compatible
All testing and verification will be done with respect to this specification.]
1.5 Environments
[List all environments (servers, databases, continuous build tool instances, user story/issue tracking tool instances) used in development and testing, as well as details of the production environment (existing or planned).]
[We are using these environments on the project for development and verification:
- Development environment (DEV) – where all completed features/fixed issues are first deployed by developers. The developer is responsible for configuring all required changes in the development environment for the functionality to work. The developer is also obligated to perform a quick basic verification of the deployed functionality and to test the main expected scenarios in this environment. After all this is done, the developer can change the status of the task to “Resolved”. All features and bug fixes for the current version being developed/tested should be kept and listed in a deployment list available to all team members (for example, on the Wiki pages of the project’s issue tracking system). This will be very useful to QA during verification of a specific version of the application.
- Test environment (TEST) – where all features/bug fixes previously resolved and deployed to the development environment (and verified in terms of basic scenarios) by the developer are then deployed
for detailed testing and verification by QA. Deployment of the testing version of the application to the test environment can be done by:
o Developer – Recommended if the deployment to the test environment and the configuration process are complex, such as running DB schema and/or data updates, making changes in configuration files on the server, etc. In that case the developer will do the deployment and all necessary configuration in the test environment in order to have the application and functionality ready for testing by QA. After the proper version of the application is deployed to the test environment and configured, the developer changes the status of the ticket from “Resolved” to “QA In Progress”, and DOESN’T change the assignment of the ticket. QA will change the assignment of the ticket to himself at the moment he starts verification of the ticket.
o QA – Recommended if the deployment to the test environment and the configuration process are simple and can be done automatically, such as by using continuous build tools like Jenkins. In this case, after completing the task and verifying it on the DEV environment, the developer should just change the assignment of the ticket to QA and LEAVE the status of the ticket as “Resolved/Done”. QA will change the status of the ticket to “QA In Progress” once the application is deployed to the test environment.
After a task (feature/bug fix) is verified by QA on the test environment, QA changes the status of the ticket to “Verified” and changes the assignment to the BA or PM on the project.]
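The DEV/QA hand-over described above can also be scripted against the tracker’s API. Below is a minimal sketch in Python, assuming a generic REST interface for <ITS>; the base URL, endpoint path, payload fields and status names are hypothetical placeholders, not the API of any particular product.

# Sketch: automating the ticket hand-over between DEV and QA.
# All endpoints and field names below are hypothetical placeholders.
import requests

ITS_URL = "https://its.example.com/api"   # hypothetical <ITS> instance
TOKEN = "..."                             # auth token supplied by the project

def transition_ticket(ticket_id: str, new_status: str, assignee: str | None = None) -> None:
    """Move a ticket to a new status and optionally re-assign it."""
    payload = {"status": new_status}
    if assignee:
        payload["assignee"] = assignee
    resp = requests.post(
        f"{ITS_URL}/tickets/{ticket_id}/transitions",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

# Developer deployed and configured the TEST environment himself:
# transition_ticket("PRJ-123", "QA In Progress")                 # assignment unchanged
# QA starts verification of a ticket left in "Resolved/Done":
# transition_ticket("PRJ-123", "QA In Progress", assignee="qa.engineer")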
1.6 Testing phases
Testing will be involved in all stages of the development lifecycle and can be divided into the following phases:
- Preparation – stage for studying and examining the specifications and other documentation used as a knowledge base for the testing process on the project. The most important input document used for this stage is the Business requirements document, where all user stories, features and specific business rules of the system are specified in detail.
- Specification – during this stage, test cases are written based on the functional specification (business requirements document) and the non-functional specification, and the test infrastructure is prepared and configured.
- Execution – this stage can start once development of the first testable software component or version is completed. The software is tested using the approach, methods and tools defined in the Test strategy. Differences between expected results and actual test results may result from incorrect implementation of the functionality, a defect in the specification, an issue in the test infrastructure, or an incorrect test script. The cause of each difference will be investigated during testing activities. As soon as rework has been completed (found defects fixed), the tests are executed again to verify that the issues have been fixed.
- Completion – stage for finalizing all test activities, collecting and recording the test results, and making a Go/No Go decision for the implemented functionality and/or product build, based on the test results and the specified definition of DONE. Create and share the test report.
Entry criteria
[Defines pre-conditions for testing process activities]
- Functional and non-functional specifications are described well enough to start the Preparation and Specification phases. The most important document, the Business requirements specification, is defined.
- All designs, mock-ups and prototypes needed for reference are present.
- Development of the testable application component is finished.
- The developer responsible for the features being tested performs a developer’s test (main scenarios) on the development environment. If the developer’s test fails, he returns the issue status to “In Progress”.
- The latest version of the application with the component under test is deployed on the test environment.
- If the developer is doing the deployment and configuration of the test environment, then the developer responsible for the features being tested performs a developer’s test on the test environment to make sure everything is ready for QA. If the developer’s test fails, he returns the issue status to “In Progress”.
- The developer responsible for the features being tested provides instructions for testing and any other relevant information on the story/issue tickets.
- The QA team is informed about features and testable components that need to be tested (list of features/issue fixes for the software version being tested, story/issue tickets in the proper status, assigned to the tester, etc.).
Exit criteria
[Defines post-conditions of successfully completed testing process activities]
- Functional specifications are covered by the test cases.
- Non-functional specifications are covered by the test cases where possible.
- Test cases are executed on the build containing the functionality under test.
- Test case execution results are recorded in the defined form.
- All issues found during testing are reported to the team using the defined procedures.
- Bug tickets are created in the issue tracking system for the issues that are supposed to be fixed before the release.
- Bug reports are created in the issue tracking tool for the issues left after the test (or sprint/phase).
- Regression tests for the application are executed to cover all functionalities completed so far.
Testing Schedule
This is the time schedule plan for QA activities in the different stages of the project:
Project phase: Requirements Gathering
Testing phase: Preparation
Project artefacts used: Business requirements document
Testing activities: Study and examine specifications and other documentation used as a knowledge base for the testing process on the project. Plan the test process on the project.
Project artefacts produced: Test plan
Time schedule: 2 weeks after the start of the project

Project phase: Analysis
Testing phase: Specification
Project artefacts used: Test plan document; Business requirements document; Use cases document
Testing activities: Write test cases based on the functional specification (business requirements document), the defined main functionalities (use cases document) and the non-functional specifications. Prepare and configure the test environment and infrastructure.
Project artefacts produced: Test cases document
Time schedule: After all business requirements are specified and the main functionalities of the application are defined in Use cases

Project phase: Implementation (each phase)
Testing phase: Execution
Testing activities: Features and bug fixes are tested as they are delivered; found defects are reported and, once fixed, the software/functionality is re-tested.

Project phase: Implementation phase N
Testing phase: Execution, Completion – testing of the completed stable version of the software (for phase N) being delivered to the client
Project artefacts used: Test plan document; Test cases document; Business requirements document
Testing activities: Acceptance testing is done to verify all functionalities developed within phase N of software implementation. Regression testing is done to cover all functionalities of the software implemented so far.
Project artefacts produced: Test report according to test plan document
Time schedule: After delivery of the stable software version with all required functionalities for phase N implemented

Project phase: Implementation FINAL phase
Testing phase: Execution, Completion – testing of the completed final version of the software being delivered to the client
Project artefacts used: Test plan document; Test cases document; Business requirements document
Testing activities: Acceptance testing is done to verify all functionalities developed within the FINAL phase of software implementation. Regression testing is done to cover all functionalities of the software implemented so far.
Project artefacts produced: Test report according to test plan document; Deployment check list document
Time schedule: After delivery of the final software version with all required functionalities implemented

Project phase: Maintenance
Testing phase: Execution, Completion – testing of software that started life in the production environment
Project artefacts used: Test plan document; Test cases document; Deployment check list document
Testing activities: Acceptance testing is done to verify all new functionalities developed within maintenance of the software. Testing of all core functionalities is done after each deploy of a new version of the software; these tests are defined in the „Deployment check list“ document. Full regression testing is done to cover all functionalities of the software implemented – only if required (large changes in the software).
Project artefacts produced: Test report according to test plan document
Time schedule: After the software started life in the production environment
2. Scope
2.1 Scope of the testing
[Describe the stages of testing (for example, Unit, Integration, or System) and the types of testing that will be addressed by this plan, such as Function or Performance.
Provide a brief list of the target-of-test’s features and functions that will be tested.]
[The scope of the testing includes only functional testing of the web application, including verifying the visual realization of the interface according to the specified design.
Test cases are defined in the test cases document to cover each system functionality / user story. Testing will be done manually by executing test cases on the target environment.
The deployment check list document contains test cases that cover the most important, critical functionalities of the application that MUST work. After each new version of the application is deployed to the target test environment, and after each deployment to the production environment, the test cases from the deployment check list must be executed. Any defects found are reported as high-priority issues.]
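If the team decides to automate parts of the deployment check list, it can be kept as a small scripted smoke suite that is run after every deployment. The sketch below uses Python with pytest and the requests library; the base URL and the two checks are invented examples, and the real critical scenarios come from the deployment check list document.

# Sketch: automated smoke checks mirroring the deployment check list.
# BASE_URL and the checked pages are hypothetical examples.
import requests

BASE_URL = "https://test.example.com"  # target TEST or production environment

def test_application_is_reachable():
    assert requests.get(BASE_URL, timeout=10).status_code == 200

def test_login_page_renders():
    resp = requests.get(f"{BASE_URL}/login", timeout=10)
    assert resp.status_code == 200
    assert "password" in resp.text.lower()

Any failing check is then reported as a high-priority issue, as required above.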
2.2 Definition of Done
Definition of Done helps to identify the deliverables that a team has to complete in order to build software. Focusing on value-added steps allows the team to eliminate wasteful activities that complicate software development efforts.
- Definition of Done for a Scrum Product Backlog item (e.g. writing code, tests and all necessary documentation)
- Definition of Done for a sprint (e.g. install demo system for review)
- Definition of Done for a release (e.g. writing release notes)
2.3 Out of the scope
[Provide a brief list of the target-of-test’s features and functions that will not be tested.]
2.4 Assumptions, constraints and risks
[List any assumptions made during the development of this document that may impact the design,
development or implementation of testing.
List any risks or contingencies that may affect the design, development or implementation of testing.
List any constraints that may affect the design, development or implementation of testing.]
- [Application components may not be available on time for testing – the test phase will start late, which may have an impact on the story release schedule.
- No access to the customer’s user acceptance database to prepare the test data needed for the test; as a result, test data preparation is delayed because all the necessary data has to be ordered. This will also have a negative impact on exploratory testing – more planning and preparation should be done upfront.]
3. Issue tracking
3.1 Issue tracking system
[<ITS> is used as the issue tracking tool on the project. All application features and any issues are specified and tracked in <ITS> as tickets. In general there are 3 types of tickets:
- Feature tickets – tickets that are used for specifying application features and/or user stories. These tickets specify some rounded functionality of the system. A feature ticket must contain all the information required for development and verification of the feature: description, all business rules, all design mocks and screens. Tickets of this type are not units of testing, with the exception of features that are small enough not to require breaking into subtask tickets. This means that a feature ticket can still be in status „In Progress“ while one or more of its subtasks are in status „Resolved“ or „Verified“, or even „Reopen“ (if they don’t pass QA verification). Feature tickets should remain in status „Resolved“ and not be updated to status „QA In Progress“ by DEV. When QA verifies all subtask tickets of a feature ticket, he will move the status of the feature ticket – first to „QA In Progress“ and then to „Verified“.
- Subtask tickets – a feature ticket can have many subtasks, created as subtickets, that break a large functionality into smaller, relatively independent subtasks that are testable. A subticket must have a reference to its parent feature ticket. Tickets of this type are usually the main units of testing and verification.
- Issue tickets – created to report and describe any defect/issue found during testing and verification. During testing of a feature and/or subtask, an issue ticket is created for each found defect/bug as a subticket of the ticket being tested. More on this in the separate section „Issue reporting“.
There should never be more than 2 levels of granularity for Feature – Subtask tickets. One feature ticket can have 0 or more subtask tickets defined, but those subtask tickets are not allowed to have further subtasks (they can only have issue tickets as subtickets).
A subtask ticket is considered verified when all of its opened issue subtickets are resolved and verified. A feature ticket is considered verified when all of its subtask tickets are resolved and verified and any opened issue subtickets are resolved and verified. So after all subtasks of a feature ticket are resolved and verified, the feature ticket can also be updated to status „Verified“.
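The hierarchy and roll-up rules above can be summarized in code. The sketch below is purely illustrative (the class and its fields are not an <ITS> API); it enforces the two-level granularity rule and the rule that a ticket counts as verified only when it and all of its subtickets are.

# Sketch: ticket nesting and verification roll-up rules.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    key: str
    kind: str                      # "feature", "subtask" or "issue"
    status: str = "New/Ready"
    subtickets: list["Ticket"] = field(default_factory=list)

    def add_subticket(self, child: "Ticket") -> None:
        # Features may hold subtasks and issues; subtasks may hold only issues.
        if self.kind == "issue":
            raise ValueError("issue tickets cannot have subtickets")
        if self.kind == "subtask" and child.kind != "issue":
            raise ValueError("subtask tickets may only have issue subtickets")
        self.subtickets.append(child)

    def is_verified(self) -> bool:
        # Verified only when this ticket and all its subtickets are verified.
        return self.status == "Verified" and all(
            t.is_verified() for t in self.subtickets
        )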
Feature and subtask tickets must contain all relevant specification information that allows development of the feature by DEV and later verification by QA. The minimum information needed is:
All this data will then be used as the basis for the test cases executed by QA and for the acceptance criteria of the client.]
3.2 Issue workflow

Status: New / Ready
Transition from status: – (initial)
Workflow description: Initial status when a ticket is created in <ITS>. The set of tickets in this status forms the backlog. If the ticket is created as an issue ticket, it can be assigned to PM/BA (if it is a general standalone issue that can be left in the backlog for future work) or to DEV (if the issue is related to some specific task and needs additional work on fixing the issue in order to resolve the task). If the ticket is created as a new feature task, it is usually left unassigned initially. Later, PM assigns the task to the DEV responsible for implementation, which means the ticket is well defined and assigned to a developer for work.

Status: New / Ready
Transition from status: In Progress
Workflow description: After DEV starts working on a ticket, but for some reason the ticket needs to be returned to the backlog. Leave the ticket unassigned.

Status: In Progress
Transition from status: New / Ready
Workflow description: Before the DEV assigned to a ticket starts working on a ticket that is ready for work, he needs to change the status to “In Progress”. The assigned user remains the same.

Status: In Progress
Transition from status: Reopen
Workflow description: Before the DEV assigned to a ticket starts working on a ticket that needs additional work (reopened), he needs to change the status to “In Progress”. The assigned user remains the same.

Status: Closed
Transition from status: Verified
Workflow description: This is the final status of every ticket. After the ticket is verified by QA, PM/BA optionally performs additional verification and decides to close the ticket.

Status: Closed
Transition from status: Rejected
Workflow description: If the ticket was rejected, PM decides to close it.
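The table above can be mirrored by a small allowed-transition map, useful for validating bulk status updates or for configuring the tracker. This sketch lists only the transitions shown in this document; a full project workflow would also enumerate the remaining statuses (Resolved, QA In Progress, Verified, Reopen, Rejected).

# Sketch: workflow transitions as data (partial, per the table above).
ALLOWED_TRANSITIONS = {
    ("New/Ready", "In Progress"),   # DEV starts working on a ready ticket
    ("Reopen", "In Progress"),      # DEV reworks a reopened ticket
    ("In Progress", "New/Ready"),   # ticket returned to backlog, unassigned
    ("Verified", "Closed"),         # PM/BA closes a verified ticket
    ("Rejected", "Closed"),         # PM closes a rejected ticket
}

def check_transition(current: str, new: str) -> None:
    if (current, new) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition: {current} -> {new}")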
3.3 Issue reporting
Issue tickets can also be created as independent (standalone) tickets in <ITS> in the following cases:
- The found defect/bug is not related to any ticket that is being verified (some more „general“ issues, etc.). The issue ticket is created in the backlog (status “New/Ready”) and assigned to PM/BA.
- The found defect/bug is related to some feature/ticket that already passed verification earlier and was closed (for example, defects found in regression testing). The issue ticket is created in the backlog (status “New/Ready”) and assigned to PM/BA.
- The found defect/bug is of really minor severity, and in some cases the issue ticket doesn’t need to be created as a subticket of the ticket being tested. This is because the feature can be considered done and the found cosmetic issue can be resolved later as an independent issue (for example, the feature works fine in all browsers except IE9). The issue ticket is created in the backlog in the initial status “New/Ready” and assigned to PM/BA.
4. Test Strategy
[For each type of test, provide a description of the test and why it is being implemented and executed. If a type of test will not be implemented and executed, indicate this in a sentence stating the test will not be implemented or executed and stating the justification, such as “This test will not be implemented or executed. This test is not appropriate, or will be done by the development team”.
The main considerations for the test strategy are the techniques to be used and the criterion for knowing
when the testing is completed.
In addition to the considerations provided for each test below, testing should only be executed using known,
controlled databases in secured environments. ]
The Test Strategy presents the recommended approach to testing the application. The previous sections described what will be tested, the general scope of testing, and the test process phases and procedures in general. This section describes how the application will be tested.
4.1 Testing types
Test types are introduced as a means of clearly defining the objective of a certain test level for a program or project. A test type is focused on a particular test objective, which could be the testing of a function to be performed by the component or system; a non-functional quality characteristic; the structure or architecture of the component or system; or testing related to changes, i.e. confirming that defects have been fixed (confirmation testing or retesting) and looking for unintended changes (regression testing).
- Functional testing
- Non-functional testing (Data Integrity Testing, Performance Testing, Security Testing, Installation Testing)
- Structural testing
- Change related testing
Functional Testing
Functional testing exercises the functions of a component or system. It refers to activities that verify a specific action or function of the code, and tends to answer questions like “can the user do this?” or “does this particular feature work?”. The expected behavior is typically described in a requirements specification or in a functional specification.
[Functional testing of the target-of-test should focus on any requirements for test that can be traced
directly to use cases or business functions and business rules. The goals of these tests are to verify proper
data acceptance, processing, and retrieval, and the appropriate implementation of the business rules. This
type of testing is based upon black box techniques; that is, verifying the application and its internal
processes by interacting with the application via the Graphical User Interface (GUI) and analyzing the
output or results. Identified below is an outline of the testing recommended for each application:]
TECHNIQUE
[Execute each use case, use-case flow or function, using valid and invalid data, to verify the following:
- The expected results occur when valid data is used.
- The appropriate error or warning messages are displayed when invalid data is used.
- Each business rule is properly applied.]
Test cases are defined to cover each system functionality / user story. Testing will be done manually by executing test cases on the target environment.
1. Features verification - Functional test cases (scenarios) are written describing test steps of each
feature/user story.
2. Look and Feel verification - Each graphical component of UI will be compared against design
provided by UX team.
3. Exploratory testing – ad-hoc testing to verify special scenarios not covered by test cases.
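Although test execution on this project is manual, the valid/invalid-data technique itself is easy to illustrate in code. In the sketch below the business rule (a contract discount between 0 and 30 percent) and the validate_discount function are invented for illustration only.

# Sketch: one business rule exercised with valid and invalid data (pytest).
import pytest

def validate_discount(percent: float) -> None:
    # Hypothetical business rule used only for this example.
    if not 0 <= percent <= 30:
        raise ValueError("discount must be between 0 and 30 percent")

@pytest.mark.parametrize("valid", [0, 15, 30])
def test_expected_result_with_valid_data(valid):
    validate_discount(valid)  # expected result: accepted silently

@pytest.mark.parametrize("invalid", [-1, 31, 1000])
def test_error_message_with_invalid_data(invalid):
    with pytest.raises(ValueError, match="between 0 and 30"):
        validate_discount(invalid)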
Non-Functional Testing
Non-functional testing tests the quality characteristics of the component or system. It refers to aspects of the software that may not be related to a specific function or user action.
Data and Database Integrity Testing
TECHNIQUE
- [Invoke each database access method and process, seeding each with valid and invalid data or
requests for data.
- Inspect the database to ensure the data has been populated as intended, all database events
occurred properly, or review the returned data to ensure that the correct data was retrieved for the
correct reasons.]
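A minimal sketch of this technique, using an in-memory SQLite database as a stand-in for the project database (the table and column names are illustrative): seed the access method with valid and invalid data, then inspect what was stored.

# Sketch: database integrity check with valid and invalid data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contract (id INTEGER PRIMARY KEY, amount REAL NOT NULL)")

# Valid data must be persisted exactly as sent.
conn.execute("INSERT INTO contract (id, amount) VALUES (1, 99.5)")
assert conn.execute("SELECT amount FROM contract WHERE id = 1").fetchone() == (99.5,)

# Invalid data (NULL in a NOT NULL column) must be rejected by the database.
try:
    conn.execute("INSERT INTO contract (id, amount) VALUES (2, NULL)")
    raise AssertionError("NULL amount was accepted")
except sqlite3.IntegrityError:
    pass  # expected: the NOT NULL constraint fired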
Performance, Load and Stress Testing
[Note: Transactions below refer to “logical business transactions”. These transactions are defined as specific use cases that an actor of the system is expected to perform using the target-of-test, such as add or modify a given contract.]
[Load testing is a performance test which subjects the target-of-test to varying workloads to measure and evaluate the performance behaviors and the ability of the target-of-test to continue to function properly under these different workloads. The goal of load testing is to determine and ensure that the system functions properly beyond the expected maximum workload. Additionally, load testing evaluates performance characteristics such as response times, transaction rates and other time-sensitive issues.]
[Note: Transactions below refer to “logical business transactions”. These transactions are defined as
specific functions that an end user of the system is expected to perform using the application, such as add or
modify a given contract.]
[Stress testing is a type of performance test implemented and executed to find errors due to low resources
or competition for resources. Low memory or disk space may reveal defects in the target-of-test that aren't
apparent under normal conditions. Other defects might result from competition for shared resources like
database locks or network bandwidth. Stress testing can also be used to identify the peak workload the
target-of-test can handle.]
- Scripts should be run on one machine (best case to benchmark single user, single transaction) and
be repeated with multiple clients (virtual or actual, see Special Considerations below).]
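A minimal sketch of the single-machine, multiple-clients approach, assuming an HTTP transaction and a placeholder URL; a dedicated load testing tool would normally be used for anything beyond this kind of smoke benchmark.

# Sketch: repeat one transaction with several concurrent virtual clients
# and record response times. URL is a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://test.example.com/api/contracts"  # transaction under test

def one_transaction() -> float:
    start = time.perf_counter()
    requests.get(URL, timeout=30).raise_for_status()
    return time.perf_counter() - start

def run_load(clients: int, requests_per_client: int) -> list[float]:
    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(one_transaction)
                   for _ in range(clients * requests_per_client)]
        return [f.result() for f in futures]

timings = run_load(clients=10, requests_per_client=5)
print(f"avg={sum(timings) / len(timings):.3f}s max={max(timings):.3f}s")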
Security and Access Control Testing
[Application-level security ensures that, based upon the desired security, actors are restricted to specific functions or use cases, or are limited in the data that is available to them. For example, everyone may be permitted to enter data and create new accounts, but only managers can delete them. If there is security at the data level, testing ensures that “user type one” can see all customer information, including financial data, whereas “user type two” only sees the demographic data for the same client.
System-level security ensures that only those users granted access to the system are capable of accessing the applications, and only through the appropriate gateways.]
TECHNIQUE
- Application-level Security: [Identify and list each user type and the functions or data each type has
permissions for.]
o [Create tests for each user type and verify each permission by creating transactions
specific to each user type.]
o Modify user type and re-run tests for same users. In each case, verify those additional
functions or data are correctly available or denied.
- System-level Access: [See Special Considerations below]
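One common way to drive the technique above is a permission matrix that generates one test per user type and function. In the sketch below the user types, actions and the can_perform function are illustrative placeholders; in a real suite can_perform would call into the application under test.

# Sketch: permission matrix driving per-user-type access tests (pytest).
import pytest

PERMISSIONS = {
    ("manager", "create_account"): True,
    ("manager", "delete_account"): True,
    ("clerk",   "create_account"): True,
    ("clerk",   "delete_account"): False,
}

def can_perform(user_type: str, action: str) -> bool:
    # Stand-in: a real test would exercise the application under test here.
    return PERMISSIONS[(user_type, action)]

@pytest.mark.parametrize(("user_type", "action"), sorted(PERMISSIONS))
def test_permission_matrix(user_type, action):
    assert can_perform(user_type, action) == PERMISSIONS[(user_type, action)]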
Installation Testing
[Installation testing has two purposes. The first is to ensure that the software can be installed under different
conditions - such as a new installation, an upgrade, and a complete or custom installation - under normal
and abnormal conditions. Abnormal conditions include insufficient disk space, lack of privilege to create
directories, and so on. The second purpose is to verify that, once installed, the software operates correctly.
This usually means running a number of the tests that were developed for Function Testing.]
TECHNIQUE
- [Manually, or by developing automated scripts, validate the condition of the target machine (new – never installed; same version or older version already installed).
- Launch or perform installation.
- Using a predetermined sub-set of function test scripts, run the transactions.]
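A sketch of the first step, validating the condition of the target machine before launching the installer; only the “insufficient disk space” abnormal condition is checked here, and the required size is an invented figure.

# Sketch: pre-install validation of the target machine.
import shutil
import sys

REQUIRED_BYTES = 500 * 1024 * 1024  # hypothetical installer requirement

free = shutil.disk_usage("/").free
if free < REQUIRED_BYTES:
    print(f"abnormal condition reproduced: only {free} bytes free")
    sys.exit(1)
print("normal condition: enough disk space, proceed with the installation")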
Structural Testing
Structural testing is the testing of the structure of the system or component.
Structural testing is often referred to as ‘white box’ or ‘glass box’ or ‘clear-box testing’ because in structural
testing we are interested in what is happening ‘inside the system/application’.
In structural testing the testers are required to have the knowledge of the internal implementations of the
code. Here the testers require knowledge of how the software is implemented, how it works. During
structural testing the tester is concentrating on how the software does it. For example, a structural
technique wants to know how loops in the software are working. Different test cases may be derived to
exercise the loop once, twice, and many times. This may be done regardless of the functionality of the
software. Structural testing can be used at all levels of testing. Developers use structural testing in
component testing and component integration testing, especially where there is good tool support for code
coverage. Structural testing is also used in system and acceptance testing, but the structures are different.
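The loop example from this paragraph translates directly into test cases that exercise a loop zero, one and many times; run under a code coverage tool they demonstrate structural coverage. The summing function below is invented for illustration.

# Sketch: structural test cases exercising a loop 0, 1 and many times.
def total(amounts: list) -> float:
    result = 0.0
    for a in amounts:  # the loop whose structure we want to exercise
        result += a
    return result

def test_loop_executed_zero_times():
    assert total([]) == 0.0

def test_loop_executed_once():
    assert total([5.0]) == 5.0

def test_loop_executed_many_times():
    assert total([1.0, 2.0, 3.0]) == 6.0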
Change related Testing
The goals of change related testing are confirming that defects have been fixed (confirmation testing or retesting) and looking for unintended changes (regression testing).
Confirmation testing or re-testing: When a test fails because of a defect, the defect is reported and a new version of the software is expected that has the defect fixed. In this case we need to execute the test again to confirm whether the defect was actually fixed. This is known as confirmation testing, also called re-testing. It is important to ensure that the test is executed in exactly the same way it was the first time, using the same inputs, data and environment.
Regression testing: During confirmation testing the defect got fixed and that part of the application started working as intended. However, the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing. The purpose of regression testing is to verify that modifications to the software or the environment have not caused any unintended side effects and that the system still meets its requirements. Regression tests are executed whenever the software changes, either as a result of fixes or of new or changed functionality.
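With pytest, one lightweight way to keep the regression suite runnable on every change is a custom marker (registered in pytest.ini); the test below is a placeholder for a real end-to-end check, and the marker name is our own convention, not a pytest built-in.

# Sketch: tagging and selecting regression tests.
import pytest

@pytest.mark.regression
def test_contract_totals_unchanged():
    # Placeholder re-check of functionality completed in earlier phases.
    assert 2 + 2 == 4

# Confirmation testing - re-run exactly the test that failed, unchanged:
#   pytest tests/test_contracts.py::test_contract_totals_unchanged
# Regression testing - run the whole tagged suite after every change:
#   pytest -m regression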
TOOLS
The following tools will be employed for this project:
Additional information
Appendix A - Documents
[List all relevant documents that are used as input to and created as output of the testing process. Provide links for downloading the documents from the shared location.]
END OF DOCUMENT