
Chapter 5 - Software Testing and Maintenance

• Testing is the process of finding differences between the expected behavior specified by system models and the observed behavior of the implemented system.

Test Concept
• The following model elements are used during testing:

• A test component is a part of the system that can be isolated for testing. A component can be an object, a group of objects, or one or more subsystems.

• A fault, also called a bug or defect, is a design or coding mistake that may cause abnormal component behavior. An erroneous state is a manifestation of a fault during the execution of the system. An erroneous state is caused by one or more faults and can lead to a failure.

• A failure is a deviation between the specification and the actual behavior. A failure is triggered by one or more erroneous states. Not all erroneous states trigger a failure.

• A test case is a set of inputs and expected results that exercises a test component with the purpose of causing failures and detecting faults.

• A test stub is a partial implementation of a component on which the tested component depends.

• A test driver is a partial implementation of a component that depends on the test component. Test stubs and drivers enable components to be isolated from the rest of the system for testing (a short sketch follows this list).

• A correction is a change to a component. The purpose of a correction is to repair a fault. Note that a correction can introduce new faults.
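
• A minimal sketch of a stub and a driver in Python, assuming a hypothetical WeatherStation component and the standard unittest module; the class names and behavior are illustrative, not taken from the chapter:

import unittest

# Component under test: it depends on a sensor that it does not implement itself.
class WeatherStation:
    def __init__(self, sensor):
        self.sensor = sensor  # the dependency is injected so a stub can replace it

    def report(self):
        celsius = self.sensor.read_temperature()
        return f"{celsius:.1f} C"

# Test stub: a partial implementation of the component that WeatherStation depends on.
class SensorStub:
    def read_temperature(self):
        return 21.5  # fixed value instead of real hardware access

# Test driver: exercises the test component and checks the expected result.
class WeatherStationDriver(unittest.TestCase):
    def test_report_formats_temperature(self):
        station = WeatherStation(SensorStub())
        self.assertEqual(station.report(), "21.5 C")

if __name__ == "__main__":
    unittest.main()

• Together, the stub and the driver isolate WeatherStation from the rest of the system, as described above.
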
Test Activities

• The technical activities of testing include:

Component inspection
Usability testing
Unit testing
Integration testing
System testing

Component Inspection
• Inspections find faults in a component by reviewing its source code in a formal meeting.

• Inspections can be conducted before or after the unit test.

• Fagan’s inspection method consists of five steps:

Overview. The author of the component briefly presents the purpose and scope of the component and the goals of the inspection.

Preparation. The reviewers become familiar with the implementation of the component.

Inspection meeting. A reader paraphrases the source code of the component, and the inspection team raises issues with the component. A moderator keeps the meeting on track.

Rework. The author revises the component.

Follow-up. The moderator checks the quality of the rework and may determine that the component needs to be re-inspected.
Usability Testing
• Usability testing tests the user’s understanding of the system.

• Usability testing does not compare the system against a specification.

• Instead, it focuses on finding differences between the system and the users’ expectations of what it should do.

• There are three types of usability tests:

Scenario test. During this test, one or more users are presented with a visionary scenario of the system.

Prototype test. During this type of test, the end users are presented with a piece of software that implements key aspects of the system.

Product test. This test is similar to the prototype test, except that a functional version of the system is used in place of the prototype.

• In all three types of tests, the basic elements of usability testing include:

Development of test objectives

A representative sample of end users

The actual or simulated work environment

Controlled, extensive interrogation and probing of the users by the person performing the usability test

Collection and analysis of quantitative and qualitative results

Recommendations on how to improve the system.


Unit Testing
• Unit testing focuses on the building blocks of the software system, that is, objects and subsystems.

• There are three motivations behind focusing on these building blocks:

First: Unit testing reduces the complexity of overall test activities, allowing us to focus on smaller units of the system.

Second: Unit testing makes it easier to pinpoint and correct faults, given that few components are involved in the test.

Third: Unit testing allows parallelism in the testing activities; that is, each component can be tested independently of the others.


• Many unit testing techniques have been devised.

• Below, we describe the most important ones: equivalence testing, boundary testing, path testing, and state-based testing.

• Equivalence testing: This blackbox testing technique minimizes the number of test cases. The possible inputs are partitioned into equivalence classes, and a test case is selected for each class.

• Equivalence testing consists of two steps: identification of the equivalence classes and selection of the test inputs.

• The following criteria are used in determining the equivalence classes (a short sketch follows this list):

Coverage. Every possible input belongs to one of the equivalence classes.

Disjointedness. No input belongs to more than one equivalence class.

Representation. If the execution demonstrates an erroneous state when a particular member of an equivalence class is used as input, then the same erroneous state can be detected by using any other member of the class as input.
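
• A minimal sketch of equivalence testing in Python, using a hypothetical classify_month function and the standard unittest module; the function and the partition into two classes (valid months 1-12 and all other integers) are illustrative assumptions:

import unittest

def classify_month(month):
    """Return 'valid' for months 1 through 12, otherwise raise ValueError."""
    if 1 <= month <= 12:
        return "valid"
    raise ValueError("month out of range")

class EquivalenceTest(unittest.TestCase):
    # Equivalence classes: {1..12} (valid input) and everything else (invalid input).
    # One representative test input is selected for each class.
    def test_valid_class_representative(self):
        self.assertEqual(classify_month(7), "valid")

    def test_invalid_class_representative(self):
        with self.assertRaises(ValueError):
            classify_month(42)

if __name__ == "__main__":
    unittest.main()
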
Boundary testing

• This special case of equivalence testing focuses on the conditions at the boundary of the equivalence classes.

• Rather than selecting any element in the equivalence class, boundary testing requires that the elements be selected from the “edges” of the equivalence class.
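
• Continuing the hypothetical classify_month example, a minimal sketch of boundary testing selects the edge values of the valid class (1 and 12) and the values just outside it (0 and 13):

import unittest

def classify_month(month):
    """Same hypothetical function as in the equivalence-testing sketch."""
    if 1 <= month <= 12:
        return "valid"
    raise ValueError("month out of range")

class BoundaryTest(unittest.TestCase):
    # Boundary testing picks inputs at the edges of the equivalence classes.
    def test_lower_edge(self):
        self.assertEqual(classify_month(1), "valid")

    def test_upper_edge(self):
        self.assertEqual(classify_month(12), "valid")

    def test_just_below_lower_edge(self):
        with self.assertRaises(ValueError):
            classify_month(0)

    def test_just_above_upper_edge(self):
        with self.assertRaises(ValueError):
            classify_month(13)

if __name__ == "__main__":
    unittest.main()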

Path testing
• This whitebox testing technique identifies faults in the implementation of the component.

• The assumption behind path testing is that, by exercising all possible paths through the code at least once, most faults will trigger failures.

• The identification of paths requires knowledge of the source code and data structures.
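
• A minimal sketch of path testing in Python: the hypothetical absolute_difference function has two paths through its if/else, so two test cases exercise every path at least once:

import unittest

def absolute_difference(a, b):
    # Two paths through this code: the if branch and the else branch.
    if a >= b:
        return a - b
    else:
        return b - a

class PathTest(unittest.TestCase):
    def test_path_through_if_branch(self):
        self.assertEqual(absolute_difference(5, 3), 2)

    def test_path_through_else_branch(self):
        self.assertEqual(absolute_difference(3, 5), 2)

if __name__ == "__main__":
    unittest.main()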

Polymorphism testing
• Polymorphism introduces a new challenge in testing because it enables messages to be bound to different methods based on the class of the target.

• Although this enables developers to reuse code across a larger number of classes, it also introduces more cases to test.
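
• A minimal sketch of polymorphism testing in Python: the call shape.area() is bound to a different method depending on the dynamic class of the target, so each hypothetical subclass (Circle, Square) adds a case that must be tested:

import math
import unittest

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

def total_area(shapes):
    # shape.area() is dispatched on the dynamic class of each target object.
    return sum(shape.area() for shape in shapes)

class PolymorphismTest(unittest.TestCase):
    # One test case per binding that the polymorphic call can take.
    def test_circle_binding(self):
        self.assertAlmostEqual(total_area([Circle(1)]), math.pi)

    def test_square_binding(self):
        self.assertAlmostEqual(total_area([Square(2)]), 4)

    def test_mixed_bindings(self):
        self.assertAlmostEqual(total_area([Circle(1), Square(2)]), math.pi + 4)

if __name__ == "__main__":
    unittest.main()
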
Integration testing
• Integration testing detects faults that have not been detected during unit testing by focusing on small groups of components.

• Two or more components are integrated and tested, and when no new faults are revealed, additional components are added to the group.

• If two components are tested together, this is called a double test.

• Horizontal integration testing strategies:
Big bang testing
Sandwich testing

• Vertical integration testing strategies:
Bottom-up testing
Top-down testing
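
• A minimal sketch of a double test in Python, assuming two hypothetical components, Parser and Calculator, that have already been unit-tested in isolation and are now exercised together so that faults in their interaction can surface:

import unittest

class Parser:
    def parse(self, text):
        """Turn a string such as '2 + 3' into a pair of integers."""
        left, _, right = text.partition("+")
        return int(left), int(right)

class Calculator:
    def add(self, left, right):
        return left + right

class DoubleTest(unittest.TestCase):
    # Integration (double) test: two components are integrated and tested together.
    def test_parse_then_add(self):
        parser = Parser()
        calculator = Calculator()
        left, right = parser.parse("2 + 3")
        self.assertEqual(calculator.add(left, right), 5)

if __name__ == "__main__":
    unittest.main()
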
System Testing
Functional testing. Test of functional requirements (from RAD)

Performance testing. Test of non-functional requirements (from SDD)

Pilot testing. Tests of common functionality among a selected group of end users in the target environment

Acceptance testing. Usability, functional, and performance tests performed by the customer in the development environment against acceptance criteria (from Project Agreement)

Installation testing. Usability, functional, and performance tests performed by the customer in the target environment.

Managing Testing
• Many testing activities occur near the end of the project, when resources are running low and delivery pressure increases.

• Planning Testing

Developers can reduce the cost of testing and the elapsed time necessary for its completion through careful planning.

• Two key elements are to start the selection of test cases early and to parallelize tests.

• Documenting Testing

• Testing activities are documented in four types of documents: the Test Plan, the Test Case Specifications, the Test Incident Reports, and the Test Summary Report.

• The Test Plan focuses on the managerial aspects of testing. It documents the scope, approach, resources, and schedule of testing activities.

• Each test is documented by a Test Case Specification. This document contains the inputs, drivers, stubs, and expected outputs of the tests, as well as the tasks to be performed.

• Each execution of each test is documented by a Test Incident Report. The actual results of the tests and differences from the expected output are recorded.

• The Test Summary Report lists all the failures discovered during the tests that need to be investigated. From the Test Summary Report, the developers analyze and prioritize each failure and plan for changes in the system and in the models. These changes in turn can trigger new test cases and new test executions.

Assigning Responsibilities
• Testing requires developers to find faults in components of the system.

• This is best done when the testing is performed by a developer who was not involved in the development of the component under test: one who is less reluctant to break the component being tested and who is more likely to find ambiguities in the component specification.

Definition of Maintenance

• Maintenance is the set of activities, both technical and managerial, that ensures that software continues to meet organizational and business objectives in a cost-effective way.

Software Maintenance Objectives

• The difference between a software product and software maintenance is:

• A software product is the result of software development.

• Software maintenance results in a service being delivered to the customer.

Types of Maintenance

• The types of maintenance are:

Corrective
Adaptive
Perfective
Inspection

Types of Maintenance

• Corrective:

Taking existing code and correcting a fault that causes the code to behave in some way that deviates from its documented requirements.

Types of Maintenance

• Adaptive:

Taking existing code and adapting it to provide new features and functionality. These are typically part of a new release of the code and part of a larger development effort.

Types of Maintenance

• Perfective:

These changes are typically made to improve the maintainability of the code, such as restructuring it to make it more easily understood or to remove ambiguities.

Types of Maintenance

• Inspection:

These changes are usually made as a result of code inspections and focus more on adhering to coding standards or reducing the likelihood of a failure.
