
<Project name>

<Client name>
<Confidentiality>

Test Plan
<Project name>

[Note: The following template is provided for use within Gecko Solutions. Text enclosed in square brackets and
displayed in blue italic is included to provide guidance to the author and should be deleted before publishing the
document. Text enclosed in square brackets and displayed in brown regular font represents an example; it can be
modified to match the specific project and kept in the document, or deleted.]

[To customize automatic fields in Microsoft Word (which display a gray background when selected), click the Microsoft
Office Button, point to Prepare, and then click Properties. Select the designated field and update it, or use Document
Properties > Advanced properties > Custom tab to update the values. Each field can be updated by selecting the Update
Field option from the right-click menu when positioned on an automated field area.]


DOCUMENT REVISIONS

CHANGES
Date | Version | Author | Description

DISTRIBUTION
Name | Position


Table of contents

1. INTRODUCTION
1.1 PURPOSE OF DOCUMENT
1.2 PROJECT IDENTIFICATION
1.3 ROLES AND RESPONSIBILITIES
1.4 TARGET PLATFORM AND SYSTEM ARCHITECTURE
1.5 ENVIRONMENTS
1.6 TESTING PHASES
• Entry criteria
• Exit criteria
• Testing Schedule

2. SCOPE
2.1 SCOPE OF THE TESTING
2.2 DEFINITION OF DONE
2.3 OUT OF THE SCOPE
2.4 ASSUMPTIONS, CONSTRAINTS AND RISKS
2.5 RISK MITIGATION
• Before the project starts
• During the project
• After the project finished

3. ISSUE TRACKING
3.1 ISSUE TRACKING SYSTEM
3.2 ISSUE WORKFLOW
3.3 ISSUE REPORTING

4. TEST STRATEGY
4.1 TESTING TYPES
• Functional Testing
• Non-Functional Testing
• Data and Database Integrity Testing
• Performance, Load and Stress Testing
• Security and Access Control Testing
• Installation Testing
• Structural Testing
• Change related Testing

ADDITIONAL INFORMATION
4.2 APPENDIX A - DOCUMENTS


1. Introduction
1.1 Purpose of document
The purpose of the Test Plan document is to define the project's approach to testing: the testing strategy. The
strategy looks at the characteristics of the system to be built and plans the breadth and depth of the quality
assurance effort. The testing strategy will influence tasks related to test planning, test types, test script
development, and test execution.

This test plan answers the following questions:

- What – which aspects of the software system are tested, and what exactly are we testing?
- How – how are tests prepared, executed and documented?
- When – at what moment does testing take place?

1.2 Project Identification


[Describe the project in general in a few sentences]

1.3 Roles and responsibilities


[List roles on the project and their main responsibilities]

Resource Type: Project Manager
Name:
Responsibilities:
- Provides Go/No Go authorization that the product is ready for release, as part of the implementation planning and launch process
- Prioritizes issues and defects, and manages technical resources
- Makes decisions on unresolved issues
- Provides guidance on the overall project
- Coordinates and develops the project schedule
- Liaises with the business to ensure participation and ownership
- Tracks all project activities and resources, ensuring the project remains within scope
- Facilitates identifying and bringing closure to open issues
- Communicates project status

Resource Type: Business Analyst
Name:
Responsibilities:
- Defines business requirements and expected results for business acceptance
- Executes user acceptance testing
- Writes business requirements
- Maintains requirements in Test Director
- Leads the testing cycle and coordinates the test environment

Resource Type: Developers
Name:
Responsibilities:
- Design the application architecture
- Create the technical design
- Database Administrator
- Write application code
- Resolve defects
- Support testers

Resource Type: QA
Name:
Responsibilities:
- Maintains the project in Test Director
- Writes the test plan to include test scenarios and cases
- Builds test scripts
- Facilitates testing
- Maintains and manages defects in Test Director
- Performs user acceptance testing

1.4 Target platform and system architecture


[Describe the platform used and give a high-level description of the architecture. E.g. the testing target is a web application
implemented in the Java programming language using the JSF framework.]
The application should be compatible with the following browsers running on the specified platform/OS:

Platform | Operating system | Browser | Device types | Notes
Desktop | Windows 7, Windows XP | IE 9, IE 10, IE 11, Chrome 40, Firefox 35, Opera, Safari | | Optimized for 16:9 screen ratio
Desktop | OS X 10.10 | Safari, Chrome, Firefox, Opera | MacBook |
Mobile | Android 4.0+ | Stock browser, Chrome, Firefox | Tablets, smartphones with screen 4 inch+ | Optimized for tablets
Mobile | iOS 7+ | Safari, Chrome | iPod Touch, iPhone, iPad | Optimized for tablets

Testing target is a desktop native application implemented in the Java programming language.

The application is compatible with the following operating systems:

Operating system | Notes
Windows 7 | <Describe minimum and optimal system requirements>
Windows XP |
Linux |

Testing target is a mobile native application implemented for Android/iOS devices. The application is compatible
with the following versions of operating systems:

Operating system | Device types | Notes
Android 4.0+ | Tablet, Smartphone | Optimized for tablets running Android 4.4
iOS 7+ | Tablet, Smartphone | Optimized for iPhone 6

All testing and verification will be done with respect to this specification.

1.5 Environments
[List all environments (servers, databases, continuous build tool instances, user story/issue tracking tool
instances) used in development and testing, as well as details of the production environment (existing or
planned)]
[We are using these environments on the project for development and verification:

- Development environment (DEV) – where all completed features/fixed issues are deployed first by
developers. The developer is responsible for configuring all required changes in the development environment
for the functionality to work. The developer is also obligated to perform a quick basic verification of the
deployed functionality and to test the main expected scenarios in this environment. After all this is done, the
developer can change the status of the task to “Resolved”. All features and bug fixes for the current version
being developed/tested should be kept and listed in a deployment list available to all team members (for
example on the Wiki pages of the issue tracking system on the project). This will be very useful to QA during
verification of a specific version of the application.
- Test environment (TEST) – where all features/bug fixes previously resolved and deployed to the
development environment (and verified in terms of basic scenarios) by the developer are then deployed
for detailed testing and verification by QA. Deployment of the testing version of the application to the test
environment can be done by:
o Developer – recommended if the deployment and configuration process for the test environment is
complex, such as running DB schema and/or data updates, changing configuration files on the server, etc.
In that case the developer will do the deployment and all necessary configuration in the test environment
in order to have the application and functionality ready for testing by QA. After the proper version of the
application is deployed to the test environment and configured, the developer changes the status of the
ticket from “Resolved” to “QA In Progress”, and DOESN’T change the assignment of the ticket. QA will
change the assignment of the ticket to himself at the moment he starts verification of the ticket.
o QA – recommended if the deployment and configuration process for the test environment is simple
and can be done automatically, such as by using continuous build tools like Jenkins. In this case, after
completing the task and verifying it on the DEV environment, the developer should just change the
assignment of the ticket to QA and LEAVE the status of the ticket at “Resolved/Done”. QA will change the
status of the ticket to “QA In Progress” once the application is deployed to the test environment.

After a task (feature/bug fix) is verified by QA on the test environment, QA changes the status of the ticket to
“Verified” and changes the assignment to the BA or PM on the project.

- Production environment (PROD) – the actual production environment where the application is deployed
for end users. QA can (and should) perform some additional verification of the targeted features/bug fixes
after deployment of a version of the application to the PROD environment, to make sure everything is
propagated well to the final environment. But QA must be very careful not to affect or damage any real data
and/or processes on production. Also, an additional quick regression test and verification is done by
performing the set of test cases that verifies the critical functionalities of the application – listed in the
document “Deployment checklist”. After this verification all affected tasks can be updated to status “Closed”
(by PM, QA or customer).

These environments are specified in detail on the Wiki page: <linkToEnvironmentWikiPage>]

[Add staging environment and all related changes if needed]

1.6 Testing phases


[Describe the general methodology on the project (e.g. Scrum) and list the stages of testing]

Testing will be involved in all stages of the development lifecycle and can be divided into the following
phases:

- Preparation – stage for studying and examining specifications and other documentation used as a
knowledge base for the testing process on the project. The most important input document used in this stage
is the Business requirements document, where all user stories, features and specific business rules of
the system are specified in detail.
- Specification – during this stage, and based on the functional specification (business requirements
document) and the non-functional specification, test cases are written and the test infrastructure is
prepared and configured.
- Execution – this stage can be started once development of the first testable software component
or version is completed. The software is tested by the approach, methods and tools defined in
the Test strategy. Differences between expected results and actual test results may be
the result of incorrect implementation of the functionality, a defect in the specification, an issue in the
test infrastructure, or an incorrect test script. The cause of each difference will be investigated during
testing activities. As soon as rework has been completed (found defects fixed), the tests are
executed again to verify that the issues have been fixed.
- Completion – stage for finalizing all test activities, collecting and recording the test results and making a
Go/No Go decision for the implemented functionality and/or product build, based on the test results
and the specified definition of DONE. Create and share the test report.

• Entry criteria
[Defines pre-conditions for testing process activities]
- Functional and non-functional specifications are described well enough to start the Preparation and
Specification phases. The most important document, the Business requirements specification, is defined.
- All designs, mock-ups and prototypes needed for reference are present.
- Development of the testable application component is finished.
- The developer responsible for the features being tested performs a developer's test (main scenarios) on
the development environment. If the developer's test fails, he returns the issue status to “In Progress”.
- The latest version of the application with the component under test is deployed on the test
environment.
- If the developer is doing deployment and configuration to the test environment, then the developer
responsible for the features being tested performs a developer's test on the test environment to make sure
everything is ready for QA. If the developer's test fails, he returns the issue status to “In Progress”.
- The developer responsible for the features being tested provides instructions for testing and any other
relevant information on the story/issue tickets.
- The QA team is informed about the features and testable components that need to be tested (list of
features/issue fixes for the software version being tested, story/issue tickets in the proper status, assigned
to a tester, etc.)

• Exit criteria
[Defines post-conditions of successfully completed testing process activities]
- Functional specifications are covered by the test cases.
- Non-functional specifications are covered by the test cases, if possible.
- Test cases are executed on the build containing the functionality under test.
- Test case execution results are recorded in the defined form.
- All issues found during testing were reported to the team using the defined procedures.
- Bug tickets are created in the issue tracking system for the issues that are supposed to be fixed before
the release.
- Bug reports in the issue tracking tool are created for the issues left after the test (or sprint/phase).
- Regression tests for the application were executed to cover all functionalities completed so
far.

• Testing Schedule
This is the time schedule plan for QA activities in the different stages of the project:

Project phase: Requirements Gathering
Testing phase: Preparation
Project artefacts used: Business requirements document
Testing activities:
- Study and examine specifications and other documentation used as a knowledge base for the testing process on the project
- Plan the test process on the project
Project artefacts produced: Test plan
Time schedule: 2 weeks after the start of the project

Project phase: Analysis
Testing phase: Specification
Project artefacts used: Test plan document; Business requirements document; Use cases document
Testing activities:
- Write test cases based on the functional specification (business requirements document), the defined main functionalities (use cases document) and the non-functional specifications
- Prepare and configure the test environment and infrastructure
Project artefacts produced: Test cases document
Time schedule: After all business requirements are specified and the main functionalities of the application are defined in use cases

Project phase: Implementation of functionality
Testing phase: Execution, Completion – incremental testing in the process of implementation, done in many development cycles
Project artefacts used: Test plan document; Test cases document; Business requirements document
Testing activities:
- Each functionality implemented is tested
- The software is tested by the approach, methods and tools defined in the Test strategy; any issues found are reported
- After each reported issue is addressed and fixed, the software/functionality is re-tested
Project artefacts produced: Test report according to test plan document
Time schedule: After development and deployment of each testable software component (functionality) or version is completed

Project phase: Implementation phase N
Testing phase: Execution, Completion – testing of the completed stable version of software (for phase N) being delivered to the client
Project artefacts used: Test plan document; Test cases document; Business requirements document
Testing activities:
- Acceptance testing is done to verify all functionalities developed within phase N of the software implementation
- Regression testing is done to cover all functionalities of the software implemented so far
Project artefacts produced: Test report according to test plan document
Time schedule: After delivery of the stable software version with all required functionalities for phase N implemented

Project phase: Implementation FINAL phase
Testing phase: Execution, Completion – testing of the completed final version of software being delivered to the client
Project artefacts used: Test plan document; Test cases document; Business requirements document
Testing activities:
- Acceptance testing is done to verify all functionalities developed within the FINAL phase of the software implementation
- Regression testing is done to cover all functionalities of the software implemented so far
Project artefacts produced: Test report according to test plan document; Deployment check list document
Time schedule: After delivery of the final software version with all required functionalities implemented

Project phase: Maintenance
Testing phase: Execution, Completion – testing of software that started life in the production environment
Project artefacts used: Test plan document; Test cases document; Deployment check list document
Testing activities:
- Acceptance testing is done to verify all new functionalities developed within maintenance of the software
- Testing of all core functionalities is done after each deploy of a new version of the software; these tests are defined in the „Deployment check list“ document
- Full regression testing is done to cover all functionalities of the software implemented – only if required (large changes in the software)
Project artefacts produced: Test report according to test plan document
Time schedule: After the software started life in the production environment

2. Scope
2.1 Scope of the testing
[Describe the stages of testing (for example, Unit, Integration, or System) and the types of testing that
will be addressed by this plan, such as Function or Performance.
Provide a brief list of the target-of-test's features and functions that will be tested.]

[The scope of the testing includes only functional testing of the web application, including verifying the visual
realization of the interface according to the specified design.

Test cases are defined in the test cases document to cover each system functionality/user story. Testing will be
done manually by executing test cases on the target environment.

The deployment check list document contains test cases that cover the most important, critical functionalities of the
application that MUST work. After each new version of the application is deployed to the target test environment,
and after each deployment to the production environment, the test cases from the deployment check list must be
executed. Any found defects are reported as high priority issues.]

2.2 Definition of DONE


[Provide the definition of done which will be used for defining acceptance criteria for the system's functional
features. This includes platforms, operating systems, list of devices, and list of browsers that must be supported
by the system.]
Definition of Done is a list of activities (writing code, coding comments, unit testing, integration testing,
release notes, design documents, etc.) that add verifiable and demonstrable value to the product. It is a
comprehensive checklist of necessary activities that ensures that only truly done features are delivered, not
only in terms of functionality but in terms of quality as well.

Definition of Done helps to identify the deliverables that a team has to complete in order to build software.
Focusing on value-added steps allows the team to eliminate wasteful activities that complicate software
development efforts.

There might be different Definitions of Done at various levels:

- Definition of Done for a Scrum Product Backlog item (e.g. writing code, tests and all necessary
documentation)
- Definition of Done for a sprint (e.g. install demo system for review)
- Definition of Done for a release (e.g. writing release notes)

Definition of Done helps to:

- Track progress on items in work
- Enable transparency within the Scrum Team
- Highlight items that need attention
- Determine when an increment is ready for release

Example Definition of Done for a backlog item:

- Fulfil all requirements, as stated in the ticket:
o Verify task in the local developer's environment
o Submit any code changes to the code repository
o Apply all necessary configuration / database changes to the development server
o Build / deploy the target version of the application to the development server
o Verify task on the development server
o Update the ticket with an appropriate comment, change the status of the ticket, and assign it to the
appropriate user (depends on the specific flow)
- Follow best implementation practices
- Extract any common styles that belong to the styleguide and update the common guide
- Test and make sure it works on all supported browsers and devices
- Add appropriate documentation for your task inside the ticket
- Peer code review

2.3 Out of the scope

[Provide a brief list of the target-of-test’s features and functions that will not be tested.]

[List of system aspects that are not covered by the testing:

- Content validation is not covered by this test plan.


- Technical details regarding performance of the application.
- Technical details regarding security.]

2.4 Assumptions, constraints and risks

[List any assumptions made during the development of this document that may impact the design,
development or implementation of testing.
List any risks or contingencies that may affect the design, development or implementation of testing.
List any constraints that may affect the design, development or implementation of testing.]


- [Application components may not be available on time for testing – the test phase will start late and it
may have an impact on the story release schedule.
- No access to the customer's user acceptance database to prepare test data needed for the test; as a
result, test data preparation is delayed by having to order all the necessary data. This will
also have a negative impact on the exploratory testing – more planning and preparation should be
done upfront.]

2.5 Risk Mitigation

• Before the project starts

- Defined and clear business requirements are a must
- Available and accessible software requirements documents
- Implemented change control process
- Defined implementation and test plan
- Wide agreement on the implementation and test plan, in use by the people involved in the project
- In-use strategy for creating applications in four tiers: Development / Integration / Staging (QA) / Production
- Have a testing environment on hand with more than one server available
- Set up the testing environment before the testing phase begins
- Defined issue tracking strategy, defect prioritization and resolution
- Provide data sets before the project starts
- Case study before the project starts – be well prepared
- Define pass and fail statuses in detail – don't assume anything
- Agreed implementation schedule

• During the project

- In case of requirements changes: be informed, build in variables, look ahead
- Keep the defined software requirements in mind
- Schedule changes: account for the impact of delays and quick shifts, inform the customer
- To mitigate staff changes, have wiki pages, good knowledge transfer, good test cases
- Unit testing/reviews are highly recommended
- Properly utilize the provided data sets
- Follow up / keep a progress index
- Improve product knowledge, do workshops, communicate with BAs
- Work according to the defined implementation and test plan
- Follow the implementation schedule


• After the project finished

- Case study after the project finished
- Hold retrospectives after the project finished
- Identify weak spots
- Discuss and mitigate weak spots
- Schedule/timetable assessment
- Evaluate success / report achievements
- People appraisal


3. Issue tracking

3.1 Issue tracking system


[Specify the issue tracking system used on the project (referred to as <ITS> in the following text). Specify the types of tickets in
<ITS> and the rules that cover creating and updating tickets.]

[<ITS> is used as the issue tracking tool on the project. All application features and any issues are specified and
tracked in <ITS> as tickets. In general there are 3 types of tickets:
- Feature tickets – tickets used for specifying application features and/or user stories. These
tickets specify some rounded functionality of the system and must contain all information
required for development and verification of the feature: description, all business rules, all design
mocks and screens. Tickets of this type are not units of testing; the exception is features that are small
enough that they don't require breaking into subtask tickets. This means that a feature ticket can still
be in status „In Progress“ while one or more of its subtasks are in status „Resolved“ or „Verified“, or
even „Reopen“ (if they don't pass QA verification). So feature tickets should remain in status
„Resolved“ and not be updated to status „QA In Progress“ by DEV. When QA verifies all subtask
tickets of a feature ticket, he will move the status of the feature ticket – first to „QA In Progress“ and then
to „Verified“.
- Subtask tickets – a feature ticket can have many subtasks, created as subtickets, that break a large
functionality into more relatively independent subtasks that are testable. A subticket must have a
reference to its parent feature ticket. Tickets of this type are usually the main units of testing and
verification.
- Issue tickets – created to report and describe any defect/issue found during testing and
verification. During testing of a feature and/or subtask, an issue ticket is created for each found
defect/bug as a subticket of the ticket being tested. More on this in the separate section „Issue reporting“.

There should never be more than 2 levels of granularity for Feature – Subtask tickets. So one feature
ticket can have 0 or more subtask tickets defined, but those subtask tickets are not allowed to have other
subtasks (they can only have issue tickets as subtickets).

A subtask ticket is considered verified when all opened issue subtickets are resolved and verified. A feature
ticket is considered verified when all its subtask tickets are resolved and verified and any opened issue
subtickets are resolved and verified. So after all subtasks of a feature ticket are resolved and verified, the feature
ticket can also be updated to status „Verified“.

Feature and subtask tickets must contain all relevant specification information that allows development
of the feature by DEV and later verification by QA. The minimum information needed is:

- High level description of the functionality being specified
- Screens/wireframes for the functionality
- Steps describing the main use case scenario
- All business rules, input validations, pre-conditions and post-conditions for the use case
- Important notes/instructions for QA, added by DEV, that help make testing more efficient and
effective

All this data will then be used as the base for the test cases executed by QA and for the acceptance criteria by the
client.]

3.2 Issue workflow


[Define the workflow of issues in terms of lifecycle, statuses and actors that are allowed to change statuses of
tickets.]
[Each ticket created in <ITS> can be in one status at a time. Transition from one status to another is done
by designated users at predefined moments during the lifecycle of the ticket.

Status: New / Ready
Transition from status: – (initial)
Workflow description: Initial status when a ticket is created in <ITS>. The set of tickets in this status
forms the backlog. If the ticket is created as an issue ticket it can be assigned to PM/BA (if it is a general
standalone issue that can be left in the backlog for future work) or can be assigned to DEV (if it is an issue
related to some specific task that needs additional work on fixing the issue in order to resolve the task).
If the task is created as a new feature task it is usually left initially unassigned. Later PM assigns the task
to the DEV responsible for implementation, which means the ticket is well defined and assigned to a
developer for work.

Status: New / Ready
Transition from status: In Progress
Workflow description: After DEV starts working on the ticket, but for some reason the ticket needs to
be returned to the backlog. Leave the ticket unassigned.

Status: Reopen
Transition from status: QA In Progress
Workflow description: Ticket re-opened by QA if it can't be verified and needs to be returned to the
backlog for later re-implementation/fixing. QA assigns the ticket to the developer who was working on the
mentioned ticket, or, if he is not in the office, to PM (or confirms with PM who will continue working on
the ticket). Any discovered issues are reported as issue tickets created as subtasks of the ticket being
verified by QA. These issue tickets are assigned to PM or DEV (if we confirm with him and PM that he is
the one that needs to start working on the issues). Also, if the ticket being tested by QA needs to be
rejected, the status is changed to “Reopen” and the ticket is assigned to PM with an appropriate comment
(reason for rejecting). A ticket is rejected if the issue from the ticket can't be reproduced or if the described
behavior is not considered an issue.

Status: In Progress
Transition from status: New / Ready
Workflow description: Before the DEV assigned to a ticket starts working on a ticket that is ready for
work, he needs to change the status to “In Progress”. The assigned user remains the same.

Status: In Progress
Transition from status: Reopen
Workflow description: Before the DEV assigned to a ticket starts working on a ticket that needs
additional work (reopened), he needs to change the status to “In Progress”. The assigned user remains
the same.

Status: In Progress
Transition from status: Resolved
Workflow description: When a ticket that is marked as resolved needs re-work, it can be returned
directly to status “In Progress”. This is usually done by the developer himself. The assigned user remains
the same.

Status: Resolved
Transition from status: In Progress
Workflow description: When the ticket task is completed by DEV, he changes the status to “Resolved”.
The assigned user remains the same.

Status: QA In Progress
Transition from status: Resolved
Workflow description: After the ticket is resolved and the build/version of the application containing the
finished feature is deployed to the test environment and configured, QA assigns the ticket to himself. In
some cases deployment will be done by the QA person, if QA controls build and deploy to the test
environment. When QA starts with verification he changes the status to “QA In Progress”.

Status: QA In Progress
Transition from status: Verified
Workflow description: After a ticket was previously verified, but it turns out that there are some defects
left undetected or additional testing on the ticket is needed, PM/BA returns the ticket to status “QA In
Progress” and assigns the ticket back to QA. If additional defects are discovered by QA himself, then he
can do this status/assignment update.

Status: Verified
Transition from status: QA In Progress
Workflow description: After the ticket is successfully verified on the test environment, QA updates the
ticket status to “Verified” and assigns the ticket to PM/BA on the project.

Status: Closed
Transition from status: Verified
Workflow description: This is the final status of every ticket. After the ticket is verified by QA, PM/BA
optionally makes additional verification and decides to close the ticket.

Status: Closed
Transition from status: Rejected
Workflow description: If the ticket was rejected, PM decides to close it.

Status: Rejected
Transition from status: Ready / New
Workflow description: A ticket from the backlog is rejected by PM/BA.

Status: Rejected
Transition from status: Reopen
Workflow description: A ticket in status “Reopen” is rejected by PM/BA as invalid.

The full transition set is summarized in the sketch below.]
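To make the allowed transitions easy to check at a glance, the workflow above can be modeled as a simple state machine. The following is a minimal Java sketch; the enum and class names are illustrative only and are not part of any real <ITS> API.

import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

/** Ticket statuses from the workflow table above. */
enum Status { NEW_READY, IN_PROGRESS, RESOLVED, QA_IN_PROGRESS, REOPEN, VERIFIED, REJECTED, CLOSED }

/** Minimal sketch of the allowed status transitions described in section 3.2. */
final class TicketWorkflow {
    private static final Map<Status, Set<Status>> ALLOWED = Map.of(
        Status.NEW_READY,      EnumSet.of(Status.IN_PROGRESS, Status.REJECTED),
        Status.IN_PROGRESS,    EnumSet.of(Status.RESOLVED, Status.NEW_READY),
        Status.RESOLVED,       EnumSet.of(Status.QA_IN_PROGRESS, Status.IN_PROGRESS),
        Status.QA_IN_PROGRESS, EnumSet.of(Status.VERIFIED, Status.REOPEN),
        Status.REOPEN,         EnumSet.of(Status.IN_PROGRESS, Status.REJECTED),
        Status.VERIFIED,       EnumSet.of(Status.CLOSED, Status.QA_IN_PROGRESS),
        Status.REJECTED,       EnumSet.of(Status.CLOSED));

    /** Returns true only for transitions listed in the workflow table; CLOSED has no outgoing transitions. */
    static boolean canTransition(Status from, Status to) {
        return ALLOWED.getOrDefault(from, EnumSet.noneOf(Status.class)).contains(to);
    }
}

For example, canTransition(Status.RESOLVED, Status.QA_IN_PROGRESS) returns true, while a jump straight from NEW_READY to VERIFIED returns false.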

age 18
<Project name>

3.3 Issue reporting


[Describes the flow for issue tickets.]
[During testing of a feature and/or subtask, tickets for any found defects/bugs are created as subtickets of the ticket
being tested. The status of the ticket must be returned to „Reopen“ and assigned back to the developer responsible for
that subtask (or PM). When an issue is found and an issue ticket created, we only change the status of the ticket being
tested and never the status of its parent ticket (feature).

An issue ticket can also be created as an independent (standalone) ticket in <ITS> in the following cases:

- The found defect/bug is not related to any ticket that is being verified (some more „general issues“,
etc.). The issue ticket is created in the backlog (status “New/Ready”) and assigned to PM/BA.
- The found defect/bug is related to some feature/ticket that already passed verification earlier and
was closed (for example defects found in regression testing). The issue ticket is created in the backlog (status
“New/Ready”) and assigned to PM/BA.
- The found defect/bug is of really minor severity and in some cases the issue ticket doesn't need to be created
as a subticket of the ticket being tested. This is because the feature can be considered done and the found
cosmetic issue can be resolved later as an independent issue (for example the feature is working fine in all
browsers except in IE9). The issue ticket is created in the backlog in initial status “New/Ready” and assigned
to PM/BA.

Each issue ticket created in <ITS> must contain the following information:

- Title/Summary: Descriptive title that gives general info on what feature/component/page/page
element is affected by the issue and what the problem is. It can be specified in the form of a pseudo-
breadcrumb.
Example: User registration – Personal Information Page – Phone number – validation issue
- Type: Issue
- Severity: Severity of the found issue:
o Blocker – User cannot proceed with testing/work for bigger functionality parts (e.g. login, sign-
up)
o Critical – User cannot access functionality and no workaround is possible
o Major – Functionality cannot be tested but a workaround is possible
o Low – Low impact functionality or graphical issues that do not prevent testing the
functionality (misspelling, incorrect alignment)
o Cosmetic – Functionality/graphical enhancement that would be nice to have, but is not
necessary
- Affected version: Version of software being tested / build number
- Environment: Environment(s) where the issue is found (Dev, Test, Staging, Production)
- Description: Detailed description of the issue. It should contain the following:
o Test case (steps to reproduce the issue)
o Encountered issue
o Expected result
- Screenshot: provide a screenshot if relevant
- Reference to subtask: Optional link to the subtask ticket being tested (only if there is one)
- Reference to other issues: Optional link to other relevant/related issue tickets.

A sketch summarizing these fields follows below.]
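The required fields listed above can be captured in a simple data holder, which also works as a checklist when reviewing newly reported issues. The following is a minimal Java sketch; the record, enum and field names are hypothetical and do not correspond to any real <ITS> API.

import java.util.List;
import java.util.Optional;

/** Severity levels as defined in section 3.3. */
enum Severity { BLOCKER, CRITICAL, MAJOR, LOW, COSMETIC }

/** Minimal sketch of the mandatory issue-ticket fields. */
record IssueTicket(
        String title,                   // pseudo-breadcrumb summary
        Severity severity,
        String affectedVersion,         // version of software being tested / build number
        String environment,             // Dev, Test, Staging or Production
        String stepsToReproduce,        // test case reproducing the issue
        String encounteredIssue,
        String expectedResult,
        Optional<String> screenshotUrl, // only if relevant
        Optional<String> subtaskKey,    // subtask ticket being tested, if there is one
        List<String> relatedIssueKeys) {}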


4. Test Strategy
[For each type of test, provide a description of the test and why it is being implemented and executed. If a
type of test will not be implemented and executed, indicate this in a sentence stating the test will not be
implemented or executed and stating the justification, such as “This test will not be implemented or
executed. This test is not appropriate, or will be done by development team”.
The main considerations for the test strategy are the techniques to be used and the criterion for knowing
when the testing is completed.
In addition to the considerations provided for each test below, testing should only be executed using known,
controlled databases in secured environments. ]

The Test Strategy presents the recommended approach to the testing of the application. The previous
sections described what will be tested, the general scope of testing, and the test process phases and
procedures. This section describes how the application will be tested.

4.1 Testing Types

Test types are introduced as a means of clearly defining the objective of a certain test level for a program or
project. A test type is focused on a particular test objective, which could be testing the function to be
performed by the component or system, a non-functional quality characteristic, the structure or
architecture of the component or system, or testing related to changes: confirming that defects have been fixed
(confirmation testing or retesting) and looking for unintended changes (regression testing).

Depending on its objectives, there are four software test types:

• Functional testing
• Non-functional testing (Data Integrity Testing, Performance Testing, Security Testing, Installation
Testing)
• Structural testing
• Change related testing

• Functional Testing

Functional testing tests the functions of a component or system. It refers to activities that
verify a specific action or function of the code. Functional tests tend to answer questions like “can the
user do this” or “does this particular feature work”. The functions are typically described in a requirements
specification or in a functional specification.

age 21
<Project name>

[Functional testing of the target-of-test should focus on any requirements for test that can be traced
directly to use cases or business functions and business rules. The goals of these tests are to verify proper
data acceptance, processing, and retrieval, and the appropriate implementation of the business rules. This
type of testing is based upon black box techniques; that is, verifying the application and its internal
processes by interacting with the application via the Graphical User Interface (GUI) and analyzing the
output or results. Identified below is an outline of the testing recommended for each application:]

TEST OBJECTIVE

[Ensure proper target-of-test functionality, including navigation, data entry, processing, and retrieval.]
[Functional testing of the application needs to verify application functionality including navigation, data
entry, processing, and retrieval. We also need to verify the visual realization of the interface according to the
specified design.]

TECHNIQUE
[Execute each use case, use-case flow, or function, using valid and invalid data, to verify the following:
- The expected results occur when valid data is used.
- The appropriate error or warning messages are displayed when invalid data is used.
- Each business rule is properly applied.]
Test cases are defined to cover each system functionality/user story. Testing will be done manually by
executing test cases on the target environment.

Testing methods that are used (a code sketch follows the list):

1. Features verification – functional test cases (scenarios) are written describing the test steps of each
feature/user story.
2. Look and feel verification – each graphical component of the UI will be compared against the design
provided by the UX team.
3. Exploratory testing – ad-hoc testing to verify some special scenarios not covered by test cases.
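As an illustration of the "valid and invalid data" technique, the sketch below shows a JUnit 5 parameterized test. The PhoneNumberValidator class and the accepted formats are hypothetical placeholders, echoing the phone-number validation example from section 3.3; real tests would exercise the project's actual business rules.

import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

// Hypothetical validator standing in for the business rule under test.
class PhoneNumberValidator {
    boolean isValid(String phone) {
        return phone != null && phone.matches("\\+?[0-9]{6,15}");
    }
}

class PhoneNumberValidationTest {

    private final PhoneNumberValidator validator = new PhoneNumberValidator();

    @ParameterizedTest
    @ValueSource(strings = {"+381601234567", "0601234567"})
    void expectedResultOccursWhenValidDataIsUsed(String phone) {
        assertTrue(validator.isValid(phone));
    }

    @ParameterizedTest
    @ValueSource(strings = {"", "abc", "12"})
    void errorIsReportedWhenInvalidDataIsUsed(String phone) {
        assertFalse(validator.isValid(phone));
    }
}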

COMPLETION CRITERIA


[All planned tests have been executed.
All identified defects have been addressed.]

SPECIAL CONSIDERATIONS


[Identify or describe those items or issues (internal or external) that impact the implementation and
execution of function test.]


• Non-Functional Testing
In non-functional testing the quality characteristics of the component or system are tested. Non-functional
refers to aspects of the software that may not be related to a specific function or user action.

• Data and Database Integrity Testing

[The databases and the database processes should be tested as a subsystem within the <Project name>. These subsystems
should be tested without the target-of-test's User Interface as the interface to the data. Additional research
into the DataBase Management System (DBMS) needs to be performed to identify the tools and techniques
that may exist to support the testing identified below.]

TEST OBJECTIVE


[Ensure database access methods and processes function properly and without data corruption.]

TECHNIQUE
- [Invoke each database access method and process, seeding each with valid and invalid data or
requests for data.
- Inspect the database to ensure the data has been populated as intended, all database events
occurred properly, or review the returned data to ensure that the correct data was retrieved for the
correct reasons.]

A minimal sketch of this technique is shown below.
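The inspection step can be automated with plain JDBC, as in the minimal sketch below. The connection URL, credentials and the accounts table are placeholders; substitute the project's actual test database and access methods.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Minimal sketch: seed an access method with data, then inspect the table directly. */
public class DbIntegrityCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; point this at the project's test database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "tester", "secret")) {

            // 1. Invoke the access method (here a plain INSERT) with valid data.
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO accounts(name, balance) VALUES (?, ?)")) {
                ins.setString(1, "valid-account");
                ins.setInt(2, 100);
                ins.executeUpdate();
            }

            // 2. Inspect the database to ensure the data was populated as intended.
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT balance FROM accounts WHERE name = ?")) {
                sel.setString(1, "valid-account");
                try (ResultSet rs = sel.executeQuery()) {
                    if (!rs.next() || rs.getInt("balance") != 100) {
                        throw new AssertionError("data not populated as intended");
                    }
                }
            }
        }
    }
}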

COMPLETION CRITERIA


[All database access methods and processes function as designed and without any data corruption.]

SPECIAL CONSIDERATIONS


- [Testing may require a DBMS development environment or drivers to enter or modify data directly
in the databases.
- Processes should be invoked manually.
- Small or minimally sized databases (limited number of records) should be used to increase the
visibility of any non-acceptable events.]


• Performance, Load and Stress Testing


[Performance profiling is a performance test in which response times, transaction rates, and other time-
sensitive requirements are measured and evaluated. The goal of Performance Profiling is to verify
performance requirements have been achieved. Performance profiling is implemented and executed to
profile and tune a target-of-test's performance behaviors as a function of conditions such as workload or
hardware configurations.

Note: Transactions below refer to “logical business transactions”. These transactions are defined as specific
use cases that an actor of the system is expected to perform using the target-of-test, such as add or modify
a given contract.]

[Load testing is a performance test which subjects the target-of-test to varying workloads to measure and
evaluate the performance behaviors and the ability of the target-of-test to continue to function properly under
these different workloads. The goal of load testing is to determine and ensure that the system functions
properly beyond the expected maximum workload. Additionally, load testing evaluates performance
characteristics such as response times, transaction rates and other time-sensitive issues.]

[Note: Transactions below refer to “logical business transactions”. These transactions are defined as
specific functions that an end user of the system is expected to perform using the application, such as add or
modify a given contract.]

[Stress testing is a type of performance test implemented and executed to find errors due to low resources
or competition for resources. Low memory or disk space may reveal defects in the target-of-test that aren't
apparent under normal conditions. Other defects might result from competition for shared resources like
database locks or network bandwidth. Stress testing can also be used to identify the peak workload the
target-of-test can handle.]

[Note: References to transactions below refer to logical business transactions.]

TEST OBJECTIVE (PERFORMANCE TESTING)


[Verify performance behaviors for designated transactions or business functions under the following
conditions:

- Normal anticipated workload.


- Anticipated worst case workload.]

TECHNIQUE (PERFORMANCE TESTING)

- [Use Test Procedures developed for Function or Business Cycle Testing.
- Modify data files to increase the number of transactions, or modify the scripts to increase the number of
iterations each transaction occurs.
- Scripts should be run on one machine (best case to benchmark single user, single transaction) and
be repeated with multiple clients (virtual or actual, see Special Considerations below).]

A minimal timing sketch follows below.
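As a rough illustration of measuring response times for one logical business transaction, the sketch below times repeated HTTP requests with the JDK's built-in HttpClient (Java 11+). The URL is a placeholder; dedicated tools are normally used for full performance runs, as the Special Considerations below note.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Minimal sketch: time one logical transaction over a number of iterations. */
public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test.example.com/contracts")) // placeholder URL
                .GET()
                .build();

        int iterations = 50;
        long totalMs = 0;
        for (int i = 1; i <= iterations; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            totalMs += elapsedMs;
            System.out.printf("iteration %d: HTTP %d in %d ms%n",
                    i, response.statusCode(), elapsedMs);
        }
        System.out.printf("average: %d ms over %d iterations%n",
                totalMs / iterations, iterations);
    }
}

Repeating the same probe from multiple clients (or threads) approximates the multi-user case described above.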

COMPLETION CRITERIA (PERFORMANCE TESTING)


- [Single Transaction or single user: Successful completion of the test scripts without any failures and
within the expected or required time allocation per transaction.]
- [Multiple transactions or multiple users: Successful completion of the test scripts without any
failures and within acceptable time allocation.]

SPECIAL CONSIDERATIONS (PERFORMANCE TESTING)


[Comprehensive performance testing includes having a background workload on the server.
There are several methods that can be used to perform this, including:
- “Drive transactions” directly to the server, usually in the form of Structured Query Language (SQL)
calls.
- Create “virtual” user load to simulate many clients, usually several hundred. Remote Terminal
Emulation tools are used to accomplish this load. This technique can also be used to load the
network with “traffic”.
- Use multiple physical clients, each running test scripts to place a load on the system.
Performance testing should be performed on a dedicated machine or at a dedicated time. This permits full
control and accurate measurement.
The databases used for Performance Testing should be either actual size or scaled equally.]

TEST OBJECTIVE (LOAD TESTING)


[Verify performance behavior time for designated transactions or business cases under varying workload
conditions.]

TECHNIQUE (LOAD TESTING)


- [Use tests developed for Function or Business Cycle Testing.
- Modify data files to increase the number of transactions or the tests to increase the number of
times each transaction occurs.]

COMPLETION CRITERIA (LOAD TESTING)


[Multiple transactions or multiple users: Successful completion of the tests without any failures and within
acceptable time allocation.]

SPECIAL CONSIDERATIONS (LOAD TESTING)


- [Load testing should be performed on a dedicated machine or at a dedicated time. This permits full
control and accurate measurement.
- The databases used for load testing should be either actual size or scaled equally.]


TEST OBJECTIVE (STRESS TESTING)


[Verify that the target-of-test functions properly and without error under the following stress conditions:
- little or no memory available on the server (RAM and DASD)
- maximum actual or physically capable number of clients connected or simulated
- multiple users performing the same transactions against the same data or accounts
- worst case transaction volume or mix (see Performance Testing above).
Notes: The goal of Stress Testing might also be stated as identifying and documenting the conditions under which
the system FAILS to continue functioning properly.
Stress Testing of the client is described under Configuration Testing.]

TECHNIQUE (STRESS TESTING)

- [Use tests developed for Performance Profiling or Load Testing.
- To test limited resources, tests should be run on a single machine, and RAM and DASD on the server
should be reduced or limited.
- For remaining stress tests, multiple clients should be used, either running the same tests or
complementary tests to produce the worst-case transaction volume or mix.]

COMPLETION CRITERIA (STRESS TESTING)


[All planned tests are executed and specified system limits are reached or exceeded without the software
failing, or the conditions under which system failure occurs are outside of the specified conditions.]

SPECIAL CONSIDERATIONS (STRESS TESTING)


- [Stressing the network may require network tools to load the network with messages or packets.
- The DASD used for the system should temporarily be reduced to restrict the available space for the
database to grow.
- Synchronization of simultaneous clients accessing the same records or data accounts.]


• Security and Access Control Testing

[Security and Access Control Testing focuses on two key areas of security:

- Application-level security, including access to the Data or Business Functions
- System-level security, including logging into or remote access to the system.

Application-level security ensures that, based upon the desired security, actors are restricted to specific
functions or use cases, or are limited in the data that is available to them. For example, everyone may be
permitted to enter data and create new accounts, but only managers can delete them. If there is security at
the data level, testing ensures that “user type one” can see all customer information, including financial
data, while “user type two” only sees the demographic data for the same client.

System-level security ensures that only those users granted access to the system are capable of accessing
the applications, and only through the appropriate gateways.]

TEST OBJECTIVE


- Application-level Security: [Verify that an actor can access only those functions or data for which
their user type is provided permissions.]
- System-level Security: [Verify that only those actors with access to the system and applications are
permitted to access them.]

TECHNIQUE
- Application-level Security: [Identify and list each user type and the functions or data each type has
permissions for.]
o [Create tests for each user type and verify each permission by creating transactions
specific to each user type.]
o Modify the user type and re-run the tests for the same users. In each case, verify that those additional
functions or data are correctly available or denied.
- System-level Access: [See Special Considerations below]

A minimal permission-test sketch follows below.
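The user-type matrix described above lends itself to a parameterized test, one row per user type and permission. The following is a minimal JUnit 5 sketch; PermissionService and the user types are hypothetical stand-ins for the application's real security layer, using the managers-may-delete example from this section.

import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical permission model standing in for the application-level security rules.
class PermissionService {
    boolean canDeleteAccount(String userType) {
        return "MANAGER".equals(userType);
    }
}

class AccessControlTest {

    private final PermissionService permissions = new PermissionService();

    @ParameterizedTest
    @CsvSource({
        "MANAGER, true",  // only managers may delete accounts
        "CLERK, false",   // everyone else may only enter data and create accounts
        "GUEST, false"
    })
    void deletePermissionMatchesUserType(String userType, boolean expected) {
        assertEquals(expected, permissions.canDeleteAccount(userType));
    }
}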

COMPLETION CRITERIA


[For each known actor type, the appropriate functions or data are available, and all transactions function as
expected and run as in prior Application Function tests.]

SPECIAL CONSIDERATIONS


[Access to the system must be reviewed or discussed with the appropriate network or systems
administrator. This testing may not be required as it may be a function of network or systems
administration.]


• Installation Testing
[Installation testing has two purposes. The first is to ensure that the software can be installed under different
conditions – such as a new installation, an upgrade, and a complete or custom installation – under normal
and abnormal conditions. Abnormal conditions include insufficient disk space, lack of privilege to create
directories, and so on. The second purpose is to verify that, once installed, the software operates correctly.
This usually means running a number of the tests that were developed for Function Testing.]

TEST OBJECTIVE

Verify that the target-of-test properly installs onto each required hardware configuration under the
following conditions:
- new installation: a new machine, never installed previously
- update: machine previously installed, same version
- update: machine previously installed, older version

TECHNIQUE
- [Manually, or by developing automated scripts, validate the condition of the target machine (new:
never installed; same version or older version already installed).
- Launch or perform the installation.
- Using a predetermined subset of function test scripts, run the transactions.]

COMPLETION CRITERIA


[Transactions execute successfully without failure.]

SPECIAL CONSIDERATIONS


[What transactions should be selected to comprise a confidence test that application has been successfully
installed and no major software components are missing?]

• Structural Testing
Structural testing is the testing of the structure of the system or component.

Structural testing is often referred to as 'white box', 'glass box' or 'clear-box' testing, because in structural
testing we are interested in what is happening 'inside the system/application'.

In structural testing the testers are required to have knowledge of the internal implementation of the
code: how the software is implemented and how it works. During structural testing the tester concentrates
on how the software does what it does. For example, a structural technique wants to know how loops in
the software are working; different test cases may be derived to exercise a loop once, twice, and many times.
This may be done regardless of the functionality of the software. Structural testing can be used at all levels
of testing. Developers use structural testing in component testing and component integration testing,
especially where there is good tool support for code coverage. Structural testing is also used in system and
acceptance testing, but the structures are different.


• Change related Testing

The goals of change related testing are confirming that defects have been fixed (confirmation testing or
retesting) and looking for unintended changes (regression testing).

Confirmation testing or re-testing: when a test fails because of a defect, the defect is reported and
a new version of the software is expected that has the defect fixed. In this case we need to execute the
test again to confirm whether the defect actually got fixed. This is known as confirmation testing,
also known as re-testing. It is important to ensure that the test is executed in exactly the same way as it
was the first time, using the same inputs, data and environment.

Regression testing: during confirmation testing the defect got fixed and that part of the application started
working as intended. But there is a possibility that the fix may have introduced or uncovered a
different defect elsewhere in the software. The way to detect these 'unexpected side-effects' of fixes is to
do regression testing. The purpose of regression testing is to verify that modifications in the software or
the environment have not caused any unintended side effects and that the system still meets its
requirements. Regression tests are executed whenever the software changes, either as a result of fixes or of
new or changed functionality. Both kinds of change related tests can be marked in the test suite, as sketched below.
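One lightweight way to keep the two kinds of change related tests runnable on demand is to tag them, as in the minimal JUnit 5 sketch below; the test names are illustrative only.

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class ContractChangeRelatedTests {

    @Test
    @Tag("confirmation")
    void fixedDefectStaysFixed() {
        // Re-executes the exact failing scenario with the same inputs, data and environment.
    }

    @Test
    @Tag("regression")
    void existingContractCanStillBeModified() {
        // Unchanged behaviour that must keep working after every fix or new functionality.
    }
}

With Maven Surefire, for example, the regression subset can then be selected with "mvn test -Dgroups=regression".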


TOOLS
The following tools will be employed for this project:

[Note: Delete or add items as appropriate.]

Tool type | Tool name | URL


Additional information
4.2 Appendix A - Documents
[List all relevant documents that are used as input and created as output of the testing process. Put links for
downloading the documents from the share]

END OF DOCUMENT
