
Executing the Test Plan

By: Abel Almeida


Test Case Design

• The test plan is decomposed into smaller test cases and test scripts.
• Create or use a test matrix.
• Ideas for different test cases:
– Functional
– Structural
– Erroneous
– Stress
– Scripts
– Use Cases
Functional Test Cases

• Functional test data is developed from documents that specify a module's intended behavior.
• These documents include the actual specification and the high- and low-level design of the code to be tested.
Functional Test Cases

– Functional Testing Independent of the Specification Technique
 Considers the interface of units: inputs, outputs, and related value spaces.
 Derives test data from the features of the specification.
Functional Test Cases

• Functional Testing Based on the Interface
– Three types:
– Input Testing
 External testing with extreme input values.
– Equivalence Partitioning
 Identification of a finite set of functions and their associated input and output results (sketched below, together with syntax checking).
– Syntax Checking
 Verifies that input handling can cope with incorrectly formatted data.
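
As a concrete illustration, the sketch below applies equivalence partitioning, extreme-input testing, and syntax checking to a hypothetical `classify_age` function. The function, its partitions, and its boundaries are assumptions made for illustration, not part of the original deck.

```python
import unittest

def classify_age(raw):
    """Hypothetical unit under test: classify an age supplied as text."""
    if not raw.strip().isdigit():          # syntax check: reject malformed input
        raise ValueError("age must be a non-negative integer")
    age = int(raw)
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

class EquivalencePartitionTests(unittest.TestCase):
    def test_one_value_per_partition(self):
        # one representative input per equivalence class
        self.assertEqual(classify_age("10"), "minor")
        self.assertEqual(classify_age("30"), "adult")
        self.assertEqual(classify_age("70"), "senior")

    def test_extreme_inputs(self):
        # extreme-input testing at the partition boundaries
        self.assertEqual(classify_age("17"), "minor")
        self.assertEqual(classify_age("18"), "adult")
        self.assertEqual(classify_age("64"), "adult")
        self.assertEqual(classify_age("65"), "senior")

    def test_syntax_checking(self):
        # incorrectly formatted data must be rejected, not misprocessed
        for bad in ("", "abc", "-1", "3.5"):
            with self.assertRaises(ValueError):
                classify_age(bad)

if __name__ == "__main__":
    unittest.main()
```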
Functional Test Cases

– Functional Testing Based on the Function to be Computed
 Special-Value Testing – selecting test data on the basis of features of the function to be computed (a sketch follows this slide).
 Output Result Coverage – confirms that modules have been checked for maximum and minimum output conditions and that all categories of error messages have been produced.
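
Special-value testing chooses inputs that are significant to the function itself rather than to its interface. A minimal sketch for a leap-year function (a standard example, not taken from the deck):

```python
def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except centuries not divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Special values chosen from the structure of the function, not at random:
special_cases = {
    2000: True,   # century divisible by 400
    1900: False,  # century not divisible by 400
    2024: True,   # ordinary leap year
    2023: False,  # ordinary non-leap year
}
for year, expected in special_cases.items():
    assert is_leap_year(year) is expected, year
```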
Functional Test Cases

– Functional Testing Dependent on the Specification Technique
 Algebraic – properties of a data abstraction are expressed by means of axioms or rewrite rules.
 Axiomatic – exploits the relationship between predicate calculus specifications and path testing.
 State Machine – testing can decide whether a program that simulates a finite automaton with a bounded number of nodes is equivalent to the one specified.
 Decision Tables – a concise method of representing an equivalence partitioning (an example follows).
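
Because a decision table concisely represents an equivalence partitioning, it can drive test cases directly. A minimal sketch with hypothetical conditions and actions (the discount rules are invented for illustration):

```python
# Each row: (is_member, big_order) -> expected discount rate
decision_table = [
    ((True,  True),  0.15),
    ((True,  False), 0.05),
    ((False, True),  0.10),
    ((False, False), 0.00),
]

def discount(is_member: bool, big_order: bool) -> float:
    """Hypothetical unit under test."""
    if is_member:
        return 0.15 if big_order else 0.05
    return 0.10 if big_order else 0.00

# Every rule in the table becomes one test case.
for (is_member, big_order), expected in decision_table:
    assert discount(is_member, big_order) == expected
```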
Structural Test Cases

• Structural Analysis – programs are analyzed without being executed.
• Complexity Measures – it is intuitively appealing to suggest that the more complex the code, the more thoroughly it should be tested.
• Data Flow Analysis – can also be used in test data generation, exploiting the relationship between points where variables are defined and points where they are used.
• Symbolic Execution – a symbolic execution system accepts three inputs: a program to be interpreted, symbolic input for the program, and the path to follow.
Structural Test Cases

• Structural Testing – a dynamic technique in which test data selection and evaluation are driven by the goal of covering various characteristics of the code during testing.
• Statement Testing – requires that every statement in the program be executed. 100% statement coverage does not mean the program is correct (see the sketch below).
• Branch Testing – branch coverage can be checked by probes inserted at points in the program that represent arcs from branch points in the flow graph.
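
The sketch below illustrates the difference: one test executes every statement of a hypothetical `apply_fee` function, yet only branch testing forces the untested fall-through arc to be exercised. The function and values are assumptions for illustration.

```python
def apply_fee(balance: float, overdrawn: bool) -> float:
    """Hypothetical unit under test."""
    if overdrawn:
        balance -= 25          # overdraft fee
    return balance

# One test executes every statement (100% statement coverage)...
assert apply_fee(100.0, overdrawn=True) == 75.0

# ...but branch coverage also requires the False arc of the `if`,
# which exercises the otherwise-untested fall-through behavior:
assert apply_fee(100.0, overdrawn=False) == 100.0
```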
Structural Test Cases

• Structural Testing (continued)
– Conditional Testing – subsumes branch testing, and therefore inherits the same problems as branch testing.
– Expression Testing – requires that every expression assume a variety of values during a test, in such a way that no expression can be replaced by a simpler expression and still pass the test.
– Path Testing – data is selected to ensure that all paths of the program have been executed (a sketch follows).
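
Path testing demands more than branch testing: with two independent decisions, two tests can cover all four branches while exercising only two of the four paths. A minimal sketch with a hypothetical shipping-cost function:

```python
def shipping(weight: float, express: bool) -> float:
    """Hypothetical unit under test with two independent decisions."""
    cost = 5.0
    if weight > 10:        # decision A
        cost += 2.0
    if express:            # decision B
        cost *= 2
    return cost

# Two tests cover all four branches (A-true/B-true and A-false/B-false)...
assert shipping(20, True) == 14.0
assert shipping(5, False) == 5.0
# ...but path testing requires all four decision combinations:
assert shipping(20, False) == 7.0   # A-true,  B-false
assert shipping(5, True) == 10.0    # A-false, B-true
```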
Erroneous Test Cases

• There are three broad categories of such techniques: statistical assessment, error-based testing, and fault-based testing.
• Statistical Methods – employ statistical techniques to determine the operational reliability of the program.
• Error-Based Testing – driven by histories of programmer errors, measures of software complexity, knowledge of error-prone syntactic constructs, or even error guessing:
– Fault Estimation
– Input Testing
– Perturbation Testing
• Fault-Based Testing – classified by the extent and breadth of the faults considered (a mutation-testing sketch follows this list):
– Local Extent, Finite Breadth
– Global Extent, Finite Breadth
– Local Extent, Infinite Breadth
– Global Extent, Infinite Breadth
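
Fault-based testing is commonly realized as mutation testing: plant a small, local fault (local extent, finite breadth) and check whether the test suite detects it. A minimal hand-rolled sketch, with the unit under test and both suites invented for illustration:

```python
def double(x):
    """Hypothetical unit under test."""
    return x + x

def mutant_double(x):
    # deliberately planted fault: "+" mutated to "*"
    return x * x

def weak_suite(fn):
    return fn(2) == 4                    # 2 + 2 == 4, but also 2 * 2 == 4

def stronger_suite(fn):
    return fn(2) == 4 and fn(3) == 6     # 3 * 3 == 9 exposes the mutant

# The weak suite lets the mutant survive, so it is fault-inadequate:
assert weak_suite(double) and weak_suite(mutant_double)
# The stronger suite "kills" the mutant while still passing the original:
assert stronger_suite(double) and not stronger_suite(mutant_double)
```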
Stress Test Cases

• Stress or volume testing needs a tool that supplements test data.
• Types of internal limitations:
– Internal accumulation of information, such as tables.
– Number of line items in an event, such as the number of items that can be included within an order.
– Size of accumulation fields.
– Data-related limitations, such as leap year, decade change, switching calendar years, etc.
– Field size limitations, such as the number of characters allocated for people's names.
– Number of accounting entities, such as the number of business locations, states/countries in which business is performed, etc.
Stress Test Cases

• The recommended steps for determining program and system limitations follow (a sketch appears after this list):
 Identify input data used by the program.
 Identify data created by the program.
 Challenge each data element for potential limitations.
 Document limitations.
 Perform stress testing.
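
A minimal sketch of challenging one data element for a volume limitation: drive the operation under test until it fails, then document the limit found. The `add_line_item` operation and its 500-item cap are hypothetical stand-ins for a real application function.

```python
def find_volume_limit(create_item, max_items=1_000_000):
    """Add items until the operation under test fails, then report the limit."""
    count = 0
    try:
        while count < max_items:
            create_item(count)
            count += 1
    except Exception as exc:
        return count, exc        # document the limitation found
    return count, None           # no limit reached within the test bound

# Hypothetical example: an order that caps line items at 500
order = []
def add_line_item(i):
    if len(order) >= 500:
        raise OverflowError("order cannot hold more than 500 line items")
    order.append(i)

limit, error = find_volume_limit(add_line_item)
print(f"limit reached after {limit} items: {error}")
```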
Test Scripts

• Five steps are needed to develop, use, and maintain test scripts (a minimal skeleton follows this list):
• Determine testing levels
• Develop the scripts
• Execute the scripts
• Analyze the results
• Maintain the scripts
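
A minimal data-driven skeleton showing the develop/execute/analyze steps. The steps, actions, and expected values are all hypothetical; a real script would drive the application under test instead of a dictionary.

```python
def open_account(state):
    state["balance"] = 0
    return state["balance"]

def deposit_100(state):
    state["balance"] += 100
    return state["balance"]

def withdraw_30(state):
    state["balance"] -= 30
    return state["balance"]

# Develop the script: (description, action, expected result) per step.
script = [
    ("open account", open_account, 0),
    ("deposit 100",  deposit_100, 100),
    ("withdraw 30",  withdraw_30, 70),
]

def execute_script(steps):
    state, results = {}, []
    for description, action, expected in steps:
        actual = action(state)
        results.append((description, expected, actual, actual == expected))
    return results

# Analyze the results: report each step's pass/fail status.
for description, expected, actual, passed in execute_script(script):
    print(f"{description:15} expected={expected} actual={actual} "
          f"{'PASS' if passed else 'FAIL'}")
```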
Use Cases

• A use case is a description of how a user (or another system) uses the system being designed to perform a given task.
– Build a System Boundary Diagram – depicts the interfaces between the software being tested and the individuals, systems, and other interfaces.
– An example of a system boundary diagram for an automated teller machine appears on page 286.
Use Cases

• Define Use Cases
– An individual use case consists of preconditions that set the stage for the series of events that should occur for the use case.
• Use cases are used to:
• Identify classes and objects (OO)
• Design and code (non-OO)
• Manage (and trace) requirements
• Develop application documentation
• Develop training
• Develop test cases (a sketch follows this list)
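
A use case captured as structured data can drive test cases directly: establish each precondition, perform each actor action, and verify each expected system response. The ATM example echoes the deck's system boundary diagram; the structure and its fields are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """Hypothetical structure for a use case that can drive test cases."""
    name: str
    preconditions: list = field(default_factory=list)
    events: list = field(default_factory=list)       # (actor action, system response)
    postconditions: list = field(default_factory=list)

withdraw_cash = UseCase(
    name="Withdraw cash from ATM",
    preconditions=["card is valid", "PIN is known", "account balance >= amount"],
    events=[
        ("insert card",  "system prompts for PIN"),
        ("enter PIN",    "system prompts for transaction"),
        ("request $100", "system dispenses cash and receipt"),
    ],
    postconditions=["balance reduced by $100", "card returned"],
)

# Each event pair becomes a test step with an expected result to verify.
for action, expected in withdraw_cash.events:
    print(f"step: {action!r} -> verify: {expected!r}")
```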
Building Test Cases

• Process for building test cases (a condition-ranking sketch follows this list):
• Identify test resources.
• Identify conditions to be tested.
• Rank test conditions.
• Select conditions for testing.
• Determine correct results of processing.
• Create test cases.
• Document test conditions.
• Conduct the test.
• Verify and correct.
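
A minimal sketch of the "rank test conditions" and "select conditions for testing" steps: assign each condition a risk rank and select the highest-risk ones within the testing budget. The conditions, ranks, and budget are invented for illustration.

```python
# Hypothetical test conditions with a 1-5 risk rank (5 = highest).
conditions = [
    ("interest calculated on negative balance", 5),
    ("statement printed with no transactions",  2),
    ("deposit posted on leap-year day",         4),
    ("name field at maximum length",            3),
]

# Rank the conditions and select the highest-risk ones for this cycle.
budgeted = 2
selected = sorted(conditions, key=lambda c: c[1], reverse=True)[:budgeted]
for name, rank in selected:
    print(f"rank {rank}: {name}")
```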
Test Coverage

• The objective of test coverage is simply to assure that the test process has covered the application.
• Methods to use (a statement-coverage probe is sketched after this list):
• Statement Coverage
• Branch Coverage
• Basis Path Coverage
• Integration Sub-tree Coverage
• Modified Decision Coverage
• Global Data Coverage
• User-specified Data Coverage
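
Statement coverage can be measured by recording which lines of the unit under test actually execute. A toy sketch using Python's `sys.settrace` (a real project would use a coverage tool; the `classify` function is hypothetical):

```python
import sys

def measure_statement_coverage(func, *args):
    """Record which lines of `func` execute -- a toy statement-coverage probe."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):            # hypothetical unit under test
    if n < 0:
        return "negative"
    return "non-negative"

covered = measure_statement_coverage(classify, 5)
print(f"lines executed: {sorted(covered)}")   # the `n < 0` branch body is missed
```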
Performing Tests

• Performing tests involves:
– Test platforms
– Test cycle strategy
– Use of tools in testing
– Test execution
– Executing the unit test plan
– Executing the integration test plan
– Executing the system test plan
– When is testing complete?
– Concerns
• Platforms:
– Test scripts and test data may need to run on different platforms, so the platforms must be taken into consideration in the design of test data and test scripts.
Test Cycle Strategy

• Each execution of testing is referred to as a test cycle.
• Cycles are planned and included in the test plan.
• Other cycles may address attributes of the software such as data entry, database updating and maintenance, and error processing.
Use of Tools in Testing

• Test tools can ease the burden of test design, test execution, general information handling, and communication.
Use of Tools in Testing

• Test Documentation
– Guidelines for software documentation during the development phase recommend that test documentation be prepared for all multipurpose or multi-user projects and for other large software development projects.
Use of Tools in Testing

• Test Drivers – when testing is performed incrementally, an untested function is combined with a tested one and the package is then tested.
• Automatic Test Systems and Test Languages – the actual performance of each test requires the execution of code with input data, an examination of the output, and a comparison of the output with the expected results (a driver sketch follows).
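
A minimal test driver sketch capturing exactly those three activities: execute the unit with input data, examine the output, and compare it with the expected results. The `monthly_payment` unit and its formula are hypothetical.

```python
def run_driver(unit, cases):
    """Execute `unit` on each input and compare actual output with expected."""
    failures = []
    for inputs, expected in cases:
        actual = unit(*inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

def monthly_payment(principal, rate, months):   # hypothetical unit under test
    return round(principal * (1 + rate) / months, 2)

cases = [
    ((1200, 0.0, 12), 100.0),
    ((1200, 0.1, 12), 110.0),
]
for inputs, expected, actual in run_driver(monthly_payment, cases):
    print(f"FAIL: {inputs} expected {expected}, got {actual}")
```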
Perform Tests

• The test plan should have been updated throughout the project in response to approved changes made to the application specifications.
• Perform Unit Testing – performed by the programmer who developed the program.
• Perform Integration Testing – integration testing should begin once unit testing for the components to be integrated is complete.
• Perform System Testing – system testing should begin as soon as a minimal set of components has been integrated and has successfully completed integration testing. The major steps are outlined on page 299.
When is Testing Complete?

• The test manager must be able to report, with some degree of confidence, that the application will perform as expected in production and whether the quality goals defined at the start of the project have been met.
General Concerns

• Software is not in a testable mode for this test level.
• Time and resources are inadequate.
• Significant problems will not be uncovered during testing.
Recording Test Results

• A test problem is a condition that exists within the software system that needs to be addressed.
• Four attributes should be developed for all test problems (captured as a record in the sketch below):
– Statement of condition
– Criteria
– Effect
– Cause
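
The four attributes lend themselves to a structured record, so every reported problem carries the same information. A minimal sketch; the example problem and its figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestProblem:
    """The four attributes recommended for every test problem."""
    condition: str   # "what is": the facts as observed
    criteria: str    # "what should be": the expected behavior
    effect: str      # significance -- why the problem matters
    cause: str       # what led to the deviation, once determined

problem = TestProblem(
    condition="interest posted twice on month-end statements",
    criteria="interest must post exactly once per statement cycle",
    effect="customers overcharged; estimated exposure per month",  # hypothetical
    cause="batch job re-runs after timeout without idempotency check",
)
print(problem)
```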
Problem Deviation

• The user compares "what is" with "what should be."
• The "what is" can be called the statement of condition; the "what should be" is called the criteria.
Problem Effect

• The attention that the problem statement gets after it is reported depends largely on its significance.
Problem Cause

• Determining the cause involves these steps:
• Define the problem.
• Identify the flow of work and information leading to the condition.
• Identify the procedures used in producing the condition.
• Identify the people involved.
• Recreate the circumstances to identify the cause of a condition.
Use of Test Results

• Decisions need to be made as to who should receive the results of testing.
– People who should have the results are:
• End users
• Software project manager
• IT quality assurance
Defect Management

• The test objective is to identify defects.
– General principles for the defect management process:
• The primary goal is to prevent defects.
• The entire software development process should be risk driven.
• Information on defects should be captured at the source as a natural by-product of doing the job.
• The capture and analysis of the information should be automated.
• Defect information should be used to improve the process.
• Imperfect or flawed processes cause most defects.
Defect Naming

• It's important to name defects early.
– A three-level framework for naming defects is recommended.
• Level 1: Naming the Defect
– Gather defect information, which comes from the help desk, quality assurance, problem management, and project teams.
– Identify the major developmental phases and activities.
– Sort the identified defects by these phases or activities.
– Categorize the defects into groups that have similar characteristics.
Defect Naming

• Level 2: Developmental Phase or Activity in which the Defect Occurred
– Your organization's phases, for example:
 Business requirements
 Technical design
 Development
 Acceptance
 Installation
Defect Naming

• Level 3: The Category of the Defect
– Some defect categories:
 Missing
 Inaccurate
 Incomplete
 Inconsistent
The Defect Management Process

– A Defect Management Process diagram appears on page 306.
• Defect Prevention – experts say the best approach to defects is to eliminate them altogether.
– Identify critical risks, which include:
• A key requirement is missing.
• Critical application software does not function properly.
• Vendor-supplied software does not function properly.
• Software does not support major business functions, necessitating process reengineering.
• Performance is unacceptably poor.
• Hardware malfunctions.
• Hardware and software do not integrate properly.
• Hardware is new to the installation site.
• Users are unable or unwilling to embrace the new system.
• Users are unable to actively participate in the project.
The Defect Management Process

• Defect Prevention (continued)
– Estimate Expected Impact – the expected impact of a risk is affected by both the probability that the risk will become a problem and the potential impact of the problem on the organization (see the sketch after this slide).
– Minimize Expected Impact – expected impact is also affected by the action that is taken once a problem is recognized.
• Three strategies:
– Eliminate the risk.
– Reduce the probability of a risk becoming a problem.
– Reduce the impact if there is a problem.
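
The relationship the slide describes is multiplicative: expected impact is the probability that the risk becomes a problem times the impact if it does, and each strategy attacks one factor. A minimal sketch with hypothetical figures:

```python
def expected_impact(probability: float, potential_impact: float) -> float:
    """Expected impact = probability the risk becomes a problem x impact if it does."""
    return probability * potential_impact

baseline            = expected_impact(0.20, 50_000)  # 10,000 (hypothetical figures)
risk_eliminated     = expected_impact(0.00, 50_000)  # strategy 1 -> 0
probability_reduced = expected_impact(0.05, 50_000)  # strategy 2 -> 2,500
impact_reduced      = expected_impact(0.20, 10_000)  # strategy 3 -> 2,000
print(baseline, risk_eliminated, probability_reduced, impact_reduced)
```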
The Defect Management Process

• Deliverable Baseline
– A deliverable, or work product, is baselined when it reaches a predefined milestone in its development.
The Defect Management Process

• Defect Discovery
– A defect is considered to have been discovered when it has been formally brought to the attention of the developers, and the developers acknowledge that the defect is valid.
– The steps involved:
– Finding defects
– Recording defects
– Reporting defects
– Acknowledging defects
The Defect Management Process

• Defect Resolution – when developers have acknowledged that a reported defect is a valid defect, the defect resolution process begins.
– Steps involved in defect resolution (modeled as a state machine in the sketch below):
• Prioritize fix
• Schedule fix
• Fix defect
• Report resolution
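
The discovery and resolution steps form a natural defect lifecycle, which can be modeled as a small state machine. A minimal sketch; the state names and allowed transitions are an interpretation of the steps above, not a prescribed workflow.

```python
from enum import Enum, auto

class DefectState(Enum):
    FOUND = auto()         # discovery: finding / recording / reporting
    ACKNOWLEDGED = auto()  # developers agree the defect is valid
    PRIORITIZED = auto()   # resolution: prioritize fix
    SCHEDULED = auto()     # resolution: schedule fix
    FIXED = auto()         # resolution: fix defect
    RESOLVED = auto()      # resolution: report resolution

# Legal transitions mirror the discovery and resolution steps above.
TRANSITIONS = {
    DefectState.FOUND:        {DefectState.ACKNOWLEDGED},
    DefectState.ACKNOWLEDGED: {DefectState.PRIORITIZED},
    DefectState.PRIORITIZED:  {DefectState.SCHEDULED},
    DefectState.SCHEDULED:    {DefectState.FIXED},
    DefectState.FIXED:        {DefectState.RESOLVED},
    DefectState.RESOLVED:     set(),
}

def advance(current: DefectState, target: DefectState) -> DefectState:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target

state = DefectState.FOUND
for nxt in (DefectState.ACKNOWLEDGED, DefectState.PRIORITIZED,
            DefectState.SCHEDULED, DefectState.FIXED, DefectState.RESOLVED):
    state = advance(state, nxt)
print(state.name)  # RESOLVED
```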
The Defect Management Process

• Process Improvement
– This is perhaps the activity most ignored by organizations today.
– NASA emphasizes the point that any defect represents a weakness in the process.
