CART Test Plan
Developmental Disabilities
Version 0.2
Revision History

Date        Version   Description      Author
4/15/2005   V0.1      Draft creation   PVK
4/21/2005   V0.2      Draft update     PVK
Table of Contents

1.  Introduction
    1.1  Purpose
    1.2  Background
    1.3  Scope
    1.4  Project Identification
2.  Test Items
3.  Test Strategy
    3.1  Testing Types
         3.1.1  Function Testing
         3.1.2  User Interface Testing
         3.1.3  Security and Access Control Testing
         3.1.4  Recovery Testing
    3.2  Tools
4.  Resources
    4.1  Roles
    4.2  System
5.  Project Milestones
6.  Deliverables
    6.1  Test Logs
    6.2  Defect Reports
Appendix A  Project Tasks
Test Plan
1. Introduction
1.1 Purpose
This Test Plan document for the CART Project supports the following objectives:
    Perform system testing in a stable test environment to ensure, on an iterative basis (periodically and continuously), that new and existing functionality meets stated requirements and desired outcomes.
    Ensure that functional business units validate that the application continues to meet stated requirements, and that defects are logged and assigned for resolution using the issue-tracking system provided for their use.
    Follow a testing matrix checklist for each iteration to ensure that all test cases defined or required for each iteration are completed or, where a failure occurs, that documentation is created detailing the conditions that produced the failure.
The resources required for testing the application are as follows (estimated, per functional business area):
    Completed test plan, indicating a pass or fail for the respective business areas
1.2 Background
This iterative testing approach has proven instrumental in ensuring that an application is continuously monitored and validated for functionality and defects early in the project life cycle. By testing iteratively, problems surface long before a production implementation, where resolving defects is more difficult, costly, and detrimental to a project's success.
The time spent in iterative testing generally is no greater than if testing were done traditionally, at the end of the project timeline. Additionally, identifying missing requirements or defects early usually results in less time spent on functional validation and defect prevention.
1.3 Scope
The stages of testing to be performed during the initial development of the CART application are as follows:
    Integration testing: may not necessarily be performed, due to scope, but may occur; if so, it will be done by development staff.
    Application usability: ease of use, look and feel, and general navigation are covered here.
    Disconnected environment: a laptop/desktop environment will be used to validate the deployable nature of the application when it functions separated from the network.
    Web environment: web interface functionality is crucial to the success of this application and thus must be validated.
    Database recovery: laptop database usage and recovery will be tested to provide a stable environment for remote use, including database backup and recovery.
    Exception handling: network failure, connection issues, and general application failure points will be tested to ensure that proper handling/messaging of these conditions is communicated to the application user.
If testing by the business functional area owners is not performed during each iteration, there is a risk that missing requirements and/or defects may be part of the finished application. It is imperative that the business groups engage in this process continuously and interactively with the technology group to ensure that issues and successes are communicated in a timely manner.
1.4 Project Identification
The table below identifies the documentation used in developing the test plan and its availability:

Document                     Created or Available   Received or Reviewed   Author or Resource   Notes
Requirements Specification   Yes / No               Yes / No
Functional Specification     Yes / No               Yes / No
Use-Case Reports             Yes / No               Yes / No
Project Plan                 Yes / No               Yes / No               AW
Design Specifications        Yes / No               Yes / No               PVK
Prototype                    Yes / No               Yes / No               PVK
Users Manuals                Yes / No               Yes / No
2. Test Items
The following application areas have been identified as targets for testing. This list represents what will be tested:
    Survey Creation
        o Create Questions
        o Create Answers
    Survey Maintenance
    Survey Administration
    Survey Conduction
        o Web environment
    Data transfer
3. Test Strategy
The list above detailed what is to be tested; this section describes how this is to be accomplished. Each bulleted item will be tested individually as it becomes available for testing. Eventually, all items will be tested together in a final system/integration test that validates that the application is complete and meets the defined requirements.
    Survey Creation
        o Create Survey Groups: create a group containing one or more surveys. All surveys belong to at least one group. A group is defined as containing all the surveys that need to be performed for that survey step to be completed. In some cases, a starting survey needs to be performed before others, so there is a dependency in survey order and completion.
        o Create Survey Definitions: the definition contains the details about the survey, such as its name, status, whether it can be versioned, etc. Without the definition, a survey cannot exist.
        o Create Survey Sections: all surveys have at least one section and may have as many as are required. Sections are required because this is where the questions are assigned.
        o Create Survey Layouts: the layout defines the heading/sub-heading shown on the interfaces, detailing which survey is being conducted. Additional properties such as a cover page, logos (images), and the like are configured in this interface.
        o Create Questions: without questions, a survey has no value. Through this process, questions can be validated against duplication (within the possible realm of finding duplicates) and can be assigned to rules.
        o Create Answers: answers need to be provided for questions. The answer interface is somewhat complicated, as it provides a wide variety of choices for answer type, the source of data for the answer list, and how the answer is to appear (where applicable).
    Survey Maintenance
        o Ability to change or delete items created during Survey Creation: standard maintenance functionality.
    Survey Administration
        o Survey Status changes (published, unavailable, deleted): the change of a survey's status has repercussions, especially if surveys are being conducted and the survey is no longer to be used. Changes also need to be communicated to conductors and management, so the communication channel test is needed.
        o User / Manager email accounts and notification chain: this step is needed to ensure that the communications detailed in Survey Status changes are successful.
    Survey Conduction
        o Laptop / desktop environment: clearly, the survey, once defined, is to be executed. The laptop is the primary environment to be tested, as it is the disconnected environment that has the priority and complexity that need to be validated.
        o Web environment: surveys can be completed through the web, and the usage of web-based surveys will certainly grow over time. This environment needs to be validated as well as the laptop environment.
    Data transfer
        o Uploading of survey data to the server: once the laptop environment contains survey data, it needs to be uploaded to the server for further processing. The upload process introduces a few layers of complication and needs to be tested well to ensure that data is not lost and is uploaded successfully (a reconciliation sketch follows this list).
        o Update laptop/desktop application with updates: should changes occur (and they will) to the laptop/desktop application, those changes need to be redistributed back to the laptop/desktop.
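Because the upload step above is a point where survey data could be lost, a simple reconciliation check along the lines of the sketch below may be useful during data transfer testing. This is only an illustrative outline: the survey_answers table, its columns, and the use of in-memory SQLite databases as stand-ins for the laptop and CART server databases are assumptions, not the actual CART schema or transfer mechanism.

```python
# Sketch of an upload-reconciliation check for the data transfer tests.
# Both databases are in-memory SQLite stand-ins; the real check would
# compare the laptop database against the CART server database.
import sqlite3


def row_count(conn):
    return conn.execute("SELECT COUNT(*) FROM survey_answers").fetchone()[0]


laptop = sqlite3.connect(":memory:")
server = sqlite3.connect(":memory:")
for db in (laptop, server):
    db.execute("CREATE TABLE survey_answers (survey_id INTEGER, answer TEXT)")

# Simulate surveys completed on the laptop...
laptop.executemany(
    "INSERT INTO survey_answers VALUES (?, ?)",
    [(1, "yes"), (1, "no"), (2, "n/a")],
)
# ...and an upload that copies every row to the server.
server.executemany(
    "INSERT INTO survey_answers VALUES (?, ?)",
    laptop.execute("SELECT survey_id, answer FROM survey_answers").fetchall(),
)

# The check passes only if the server received exactly what the laptop held.
assert row_count(laptop) == row_count(server), "upload lost survey data"
print("upload reconciliation passed:", row_count(server), "rows transferred")
```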
3.1 Testing Types
3.1.1 Function Testing
Function testing of the various areas should focus on any requirements for test that can be traced directly to business functions and business rules. The goals of these tests are to verify proper data acceptance, processing, and retrieval, and the appropriate implementation of the business rules. This type of testing is based upon black-box techniques; that is, verifying the application and its internal processes by interacting with the application via its Web or GUI interfaces and analyzing the output or results. Identified below is an outline of the testing recommended for each application area:
Test Objective:
Technique:              Execute each use case, use-case flow, or function, using valid and invalid data, to verify the following: proper data acceptance, processing, and retrieval, and the appropriate implementation of the business rules.
Completion Criteria:
Special Considerations: Identify or describe those items or issues (internal or external) that impact the implementation and execution of function testing.
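To make the technique concrete, the following is a minimal sketch of a scripted function test exercising survey-definition creation with valid and invalid data. The create_survey_definition function is a hypothetical stand-in for whatever entry point the CART application exposes through its Web or GUI interface, and the validation it enforces simply mirrors the stated rule that a survey cannot exist without its definition.

```python
# Hedged sketch of a scripted function test for survey-definition creation.
# "create_survey_definition" is a placeholder, not the CART application's API.
import unittest


def create_survey_definition(name, status):
    """Placeholder for the real application call; returns a survey record or
    raises ValueError, mirroring the rule that a survey needs a definition."""
    if not name:
        raise ValueError("survey name is required")
    if status not in ("draft", "published"):
        raise ValueError("unknown status")
    return {"id": 1, "name": name, "status": status}


class SurveyDefinitionFunctionTest(unittest.TestCase):
    def test_valid_data_is_accepted(self):
        survey = create_survey_definition("Example Survey", "draft")
        self.assertEqual(survey["name"], "Example Survey")

    def test_invalid_data_is_rejected(self):
        # Business rule: without the definition (a name), a survey cannot exist.
        with self.assertRaises(ValueError):
            create_survey_definition("", "draft")


if __name__ == "__main__":
    unittest.main()
```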
3.1.2 User Interface Testing
Test Objective:
Technique:              Create or modify tests for each window to verify proper navigation and content for each application window or web page.
Completion Criteria:
Special Considerations:
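A lightweight way to script the navigation and content checks is sketched below. The page paths and expected headings are illustrative assumptions, and the server name simply reuses the test web server identified in section 4.2; the real list of windows and pages would come from the application's navigation design.

```python
# Hedged sketch: verify that each web page loads and shows the expected
# heading. Page paths and heading text are illustrative assumptions.
from urllib.request import urlopen

BASE_URL = "http://MRDDTESTWEB"  # test environment web server (section 4.2)

PAGES = {
    # path on the test web server -> expected heading text (assumed)
    "/cart/SurveyMaintenance.aspx": "Survey Maintenance",
    "/cart/SurveyAdministration.aspx": "Survey Administration",
}


def check_pages():
    failures = []
    for path, heading in PAGES.items():
        try:
            with urlopen(BASE_URL + path, timeout=10) as response:
                body = response.read().decode("utf-8", errors="replace")
                if response.status != 200 or heading not in body:
                    failures.append(path)
        except OSError:
            failures.append(path)  # connection problems count as failures
    return failures


if __name__ == "__main__":
    for failed_page in check_pages():
        print("FAIL:", failed_page)
```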
3.1.3 Security and Access Control Testing
Application-level security ensures that, based upon the desired security, actors are restricted to specific functions or use cases, or are limited in the data that is available to them. For example, everyone may be permitted to enter data and create new accounts, but only managers can delete them. If there is security at the data level, testing ensures that user type one can see all customer information, including financial data, whereas user type two sees only the demographic data for the same client.
System-level security ensures that only those users granted access to the system are capable of accessing the applications, and only through the appropriate gateways.
Test Objective:
Technique:              Application-level security: identify and list each user type and the functions or data each type has permissions for. Create tests for each user type and verify each permission by performing actions specific to each user type. Modify the user type and re-run the tests for the same users; in each case, verify that the additional functions or data are correctly available or denied.
Completion Criteria:    For each known user type, the appropriate functions or data are available, and all actions function as expected.
Special Considerations:
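The application-level technique above amounts to a permission matrix: each user type crossed with each function, with an expected allow/deny outcome. The sketch below shows that pattern; the user types, function names, and the can_perform stand-in are assumptions for illustration and are not the CART application's actual roles or API.

```python
# Sketch of an application-level security check driven by a permission
# matrix. User types, functions, and can_perform are illustrative stand-ins.

# Expected permissions: user type -> set of allowed functions (assumed roles).
PERMISSIONS = {
    "surveyor": {"conduct_survey", "upload_data"},
    "manager": {"conduct_survey", "upload_data", "publish_survey", "delete_survey"},
}

ALL_FUNCTIONS = {"conduct_survey", "upload_data", "publish_survey", "delete_survey"}


def can_perform(user_type, function):
    """Placeholder for the application's real access-control check."""
    return function in PERMISSIONS.get(user_type, set())


def run_permission_matrix():
    failures = []
    for user_type, allowed in PERMISSIONS.items():
        for function in ALL_FUNCTIONS:
            expected = function in allowed
            actual = can_perform(user_type, function)
            if actual != expected:
                failures.append((user_type, function, expected, actual))
    return failures


if __name__ == "__main__":
    problems = run_permission_matrix()
    print("permission matrix failures:", problems or "none")
```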
3.1.4 Recovery Testing
Recovery testing ensures that the target-of-test can successfully recover from a variety of hardware, software, or network malfunctions without undue loss of data or data integrity.
Recovery testing is an antagonistic test process in which the application or system is exposed to extreme conditions, or simulated conditions, to cause a failure, such as device Input/Output (I/O) failures or invalid database pointers and keys. Recovery processes are invoked, and the application or system is monitored and inspected to verify that proper application, system, and data recovery has been achieved.
Test Objective:
Technique:              Tests created for Function and Business Cycle testing should be used to create a series of transactions. Once the desired starting test point is reached, the following actions should be performed, or simulated, individually:
    Power interruption to the client: power the PC down / close the application (hard terminate).
    Interruption via network servers: simulate or initiate communication loss with the network (physically disconnect communication wires, or power down network servers or routers).
Completion Criteria:    In all cases above, the application, database, and system should, upon completion of recovery procedures, return to a known, desirable state.
Special Considerations:
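As a rough illustration of the completion criterion above (return to a known, desirable state), the sketch below simulates a communication loss in the middle of an upload transaction against a throwaway SQLite database and then verifies that only committed data remains. The table and data are invented for the example and are not the CART database design.

```python
# Recovery sketch: interrupt a transaction mid-upload and verify that the
# database returns to a consistent, known state. Schema is illustrative only.
import sqlite3


def simulated_upload(conn):
    """Insert new rows, then fail before commit to mimic a network loss."""
    conn.execute("INSERT INTO survey_answers VALUES (2, 'partial upload')")
    raise ConnectionError("simulated network loss during upload")


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey_answers (survey_id INTEGER, answer TEXT)")
conn.execute("INSERT INTO survey_answers VALUES (1, 'baseline answer')")
conn.commit()

try:
    simulated_upload(conn)
    conn.commit()
except ConnectionError:
    conn.rollback()  # recovery procedure: discard the incomplete transaction

# After recovery, only the previously committed row should remain.
committed_rows = conn.execute("SELECT COUNT(*) FROM survey_answers").fetchone()[0]
assert committed_rows == 1, "database did not return to its known state"
print("recovered to known state with", committed_rows, "committed row(s)")
```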
3.2 Tools
The following tools will be employed for this project:
Category             Tool                 Vendor/In-house                Version
Test Management      Excel                Microsoft                      Any
Defect Tracking      Gemini               http://www.countersoft.com/   Latest
Project Management   Unknown, or N/A
DBMS tools           SQL Server / MySQL   Microsoft / MySQL              2000 / 4.x
4. Resources
4.1 Roles
This table shows the staffing assumptions for the project.
Human Resources

Worker                                     Minimum Resources Recommended   Responsibilities
Test Manager, Test Designer                1-3
Tester                                                                     Execute tests; log results
Database Administrator, Database Manager
4.2 System
The following table sets forth the system resources for the testing project.
The specific elements of the test system are not fully known at this time. It is recommended that the system simulate the production environment, scaling down access volumes and database sizes if and where appropriate.
System Resources

Resource               Name / Type
Database Server
  Network or Subnet    156.63.178.215
  Server Name          MRDDTESTSQL
  Database Name        CART
Test Environment
  Network or Subnet    156.63.178.216
  Server Name          MRDDTESTWEB
5. Project Milestones
Testing of CART should incorporate test activities for each of the test efforts identified in the previous sections. Separate project milestones should be identified to communicate project status and accomplishments. The table below represents an example plan for the first phase of testing the CART application.
Milestone Task    Effort      Start Date   End Date
Plan Test         2 hours     5/2/2005     5/2/2005
Design Test       1 hour      5/2/2005     5/2/2005
Implement Test    1 hour      5/3/2005     5/3/2005
Execute Test      2-3 hours   5/9/2005     5/9/2005
Evaluate Test     1 hour      5/10/2005    5/10/2005
6. Deliverables
This section lists the documents, tools, and reports that will be created and delivered, along with who is responsible for producing each deliverable:
6.1 Test Logs
    Test Schedule: schedule representing testing date(s) and resources identified to perform the tests
        o development staff
        o business staff
    Test Plan: document indicating the functionality being tested in this iteration
        o development staff
    Test Cases: scripted test actions and processes for detailed requirements
        o development staff
        o business staff
    Testing Results Report: summary report of tests performed and results
        o development staff
    Defect Report: report identifying new defects found, or old defects which may still exist
        o development staff
    Test Phase Status: status of the test iteration, identifying whether that iteration passed or failed
        o development staff
6.2 Defect Reports
The incident-tracking tool that will be provided will be used as a central repository for categories such as
project testing state, incident tracking and reporting.
All those involved with testing and evaluating the CART application will be provided with a user id to access
the tracking tool to accomplish their tasks regarding testing and incident reporting.
Appendix A  Project Tasks

Plan Test
- assess risk
- create schedule
Design Test
Implement Test
- identify test-specific functionality in the Design and Implementation Model
- establish external data sets or data to support tests
- cleanse databases of previous test data, as needed (possible refreshes from snapshots)
Execute Test
- log defects
Evaluate Test
- analyze defects
- determine if the Test Completion Criteria and Success Criteria have been achieved