ATP
1 Project Information
1.1 Objectives
1.2 Document Overview
1.3 System Description
1.4 Key Stakeholders
1.5 Relationship to Other Plans
1.6 Points of Contact
1.7 References
1.8 Methodology, Tools, and Techniques
1.9 Policies, Directives and Procedures
3 Project Management
3.1 Test Deliverables
3.2 Testing Tasks
3.3 Schedule
3.4 Risk Assessment
3.5 Constraints
3.6 Issues
3.7 Assumptions
3.8 Dependencies
3.9 Sign-Off Criteria
4 Project Team
4.1 Roles
4.2 Resources
4.3 Software Tools
4.4 Training
5 Appendix A
5.1 Glossary of Terms
5.2 Acronyms and Abbreviations
------------------------------------------
User Acceptance Testing - UAT
The test procedures that lead to formal acceptance of new or changed systems.
User Acceptance Testing is a critical phase of any systems project and requires
significant participation by the end users. To be of real use, an Acceptance Test
Plan should be developed that sets out precisely, and in detail, how acceptance
will be achieved. The final part of UAT may also include a parallel run, in which
the new system is proved against the current one.
The User Acceptance Test Plan will vary from system to system but, in general,
the testing should be planned to give the system realistic and adequate exposure
to all reasonably expected events. The testing can be based on the User
Requirements Specification, to which the system should conform.
As in any system, though, problems will arise, and it is important to determine
in advance the expected and required responses from the various parties
concerned, including users, the project team, vendors, and possibly consultants
or contractors.
To agree what such responses should be, the end users and the project team need
to develop and agree a range of severity levels. These levels might range from
(say) 1 to 6 and represent the relative severity, in terms of business or
commercial impact, of a problem with the system found during testing. The
following example has been used successfully; '1' is the most severe and '6' has
the least impact:
1. 'Show stopper': testing cannot continue because of the severity of this
error or bug.
2. Critical problem: testing can continue, but the system cannot go into
production (live) with this problem.
3. Major problem: testing can continue, but in live operation this feature
would cause severe disruption to business processes.
4. Medium problem: testing can continue, and the system is likely to go live
with only minimal departure from agreed business processes.
5. Minor problem: both testing and live operations may proceed. The problem
should be corrected, but little or no change to business processes is
envisaged.
6. Cosmetic problem, e.g. colours, fonts, pitch size. However, if such features
are key to the business requirements, they warrant a higher severity level.
The users of the system, in consultation with the executive sponsor of the
project, must then agree the responsibilities and required actions for each
category of problem. For example, you may demand that any severity level 1
problem receives a priority response and that all testing ceases until such
problems are resolved.
Caution: even where the severity levels and the responses to each have been
agreed by all parties, the allocation of a problem to its appropriate severity
level can be subjective and open to question. To avoid the risk of lengthy and
protracted exchanges over the categorisation of problems, we strongly advise
that a range of examples be agreed in advance, to ensure that there are no
fundamental areas of disagreement or, if there are, that they are known in
advance and your organisation is forewarned.
N.B. In some cases, users may agree to accept ('sign off') the system subject
to a range of conditions. These conditions need to be analysed, as they may,
perhaps unintentionally, seek additional functionality that would amount to
scope creep. In any event, any and all fixes from the software developers must
be subjected to rigorous system testing and, where appropriate, regression
testing.