
Ch-3-Final Exam

This document discusses different levels of software testing, including unit testing, integration testing, and system testing. It provides details on each level, such as what is being tested, when it occurs, who performs it, and how to plan for each level. The levels build upon each other, with unit testing being the smallest and earliest level, followed by integration testing of combined units, and then system testing of the entire software project. Planning and documentation are important aspects of properly executing each testing level.

Uploaded by Elias Hailu

Software Testing and Quality Assurance

Chapter 3: Levels of Testing


The Need for Levels of Testing
Execution-based software testing, especially for large systems, is usually carried out at different levels.
In most cases there will be three to four levels, or major phases, of testing: unit test, integration test, system test, and acceptance test.
Each of these may consist of one or more sublevels or phases, and at each level there are specific testing goals.
The approach used to design and develop a software system (bottom-up or top-down) has an impact on how testers plan and design suitable tests.
These approaches are supported by two major types of programming languages:
1. Procedure-oriented
2. Object-oriented
Figure 5.1: Testing levels


Unit Testing
 A unit is the smallest testable part of software.
 In procedural programming a unit may be an
individual program, function, procedure, etc.
 In OOP, the smallest unit is a method.
 Unit testing is often neglected but it is, in fact, the
most important level of testing.

Figure 5.2: Unit testing
Continued...
METHOD
 Unit testing is performed using the White Box Testing method.
When is it performed?
 Unit testing is the first level of testing and is performed prior to integration testing.
Who performs it?
 Unit testing is normally performed by the software developers themselves or by their peers.
 In rare cases it may also be performed by independent software testers.

Figure 5.3: Unit testing



Unit Test Planning
 Unit testing is often informal, i.e., no formal test plan is specified and written down.
 Nevertheless, a general unit test plan should be prepared.
 It may be prepared as a component of the master test plan or as a stand-alone plan.
 Documents that provide inputs for the unit test plan are the project plan, as well as the requirements, specification, and design documents that describe the target units.
 Components of a unit test plan are described in detail in the IEEE Standard for Software Unit Testing.
 This standard is rich in information and is an excellent guide for the test planner.
Continued…
Phase 1: Describe Unit Test Approach and Risks
 In this phase of unit test planning, the general approach to unit testing is outlined. The test planner:
a) identifies test risks;
b) describes techniques to be used for designing the test cases for the units;
c) describes techniques to be used for data validation and recording of test results;
d) describes the requirements for test harnesses and other software that interfaces with the units to be tested.
 During this phase the planner also identifies completeness requirements.
 The planner also identifies termination conditions for the unit tests.
Continued…
Phase 2: Identify Unit Features to Be Tested
 This phase requires information from the unit specification and detailed design description.
 The planner determines which features of each unit will be tested, for example: functions, performance requirements, states and state transitions, control structures, messages, and data flow patterns.
 Input/output characteristics associated with each unit should also be identified.
Phase 3: Add Levels of Detail to the Plan
 In this phase the planner refines the plan produced in the previous two phases.
 The planner adds new details to the approach, resource, and scheduling portions of the unit test plan.
Designing the unit tests
 Part of the preparation work for unit test involves unit test design.
 It is important to specify (i) the test cases (including input data and expected outputs for each test case), and (ii) the test procedures (the steps required to run the tests).
 Test case data should be tabularized for ease of use and reuse.
 To support object-oriented test design and the organization of test data, the components of a test case can be arranged into a semantic network with parts such as Object_ID, Test_Case_ID, Purpose, and List_of_Test_Case_Steps.
 Each of these items has component parts.
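The semantic-network organization of a test case described above can be sketched as a structured record. The following is a minimal illustration; the class names, field names, and example values are all hypothetical, not taken from the text:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseStep:
    # One step of the test procedure: the input applied and the expected output.
    input_data: str
    expected_output: str

@dataclass
class UnitTestCase:
    # Mirrors the parts named above: Object ID, Test_Case_ID, Purpose,
    # and List_of_Test_Case_Steps.
    object_id: str
    test_case_id: str
    purpose: str
    steps: List[TestCaseStep] = field(default_factory=list)

tc = UnitTestCase(
    object_id="Account",
    test_case_id="TC-001",
    purpose="Verify that a deposit updates the balance",
    steps=[TestCaseStep("deposit(50)", "balance == 50")],
)
```

Storing test cases in a structured form like this makes them easy to tabularize, review, and reuse across releases, as the text recommends.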
Test Harness
 The auxiliary code developed to support testing of units and components is called a test harness. The harness consists of drivers that call the target code and stubs that represent the modules it calls.
 A driver calls the component to be tested; a stub is called from the software component to be tested.

Figure 5.4: Test harness
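A minimal sketch of these two roles follows. The unit, its collaborator, and the values are invented for illustration: the stub replaces a lower-level module that the unit would normally call, and the driver exercises the unit.

```python
# Stub: stands in for a lower-level module the unit calls; it returns a
# fixed, known value instead of performing the real computation.
def lookup_tax_rate_stub(region):
    return 0.10

# Unit under test: depends on a lower-level module through the
# lookup_tax_rate parameter.
def compute_total(price, region, lookup_tax_rate):
    return price * (1 + lookup_tax_rate(region))

# Driver: calls the unit with test inputs and checks the result.
def driver():
    result = compute_total(100.0, "ET", lookup_tax_rate_stub)
    assert abs(result - 110.0) < 1e-9
    return result
```

Passing the collaborator in as a parameter is one simple way to let the harness substitute a stub without changing the unit's code.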


Running the unit tests and recording results
 Unit tests can begin when:
 the units become available from the developers (an estimate of availability is part of the test plan);
 the test cases have been designed and reviewed; and
 the test harness, and any other supplemental supporting tools, are available.
 When a unit fails a test, the defect is not necessarily in the unit's code. Other likely causes that need to be carefully investigated by the tester are the following:
 a fault in the test case specification (the input or the expected output was not specified correctly);
 a fault in test procedure execution (the test should be rerun);
 a fault in the test environment (perhaps a database was not set up properly);
 a fault in the unit design (the code correctly adheres to the design specification, but the latter is incorrect).
Continued…
 The causes of the failure should be recorded in a test summary report, which summarizes the testing activities for all the units covered by the unit test plan.

Figure 5.5: Summary worksheet for unit test results
Integration Testing
 Integration testing is the level of the software testing process where individual units are combined and tested as a group.
 Integration testing exercises the interfaces between components and their interactions with different parts of the system.
ANALOGY
 During the manufacture of a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge, and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and integration testing is performed: for example, checking whether the cap fits onto the body.
METHOD
 Any of the Black Box, White Box, and Grey Box testing methods can be used. Normally, the method depends on your definition of 'unit'.
Integration Testing Strategy
 The entire system is viewed as a collection of subsystems.
 The integration testing strategy determines the order in which the subsystems are selected for testing and integration:
 Top-down integration
 Bottom-up integration
 Sandwich testing
 Big bang integration (non-incremental)
Integration Testing Approaches
 Top-down is an approach to integration testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when top-down development is followed.
 Bottom-up is an approach to integration testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when bottom-up development is followed.

Figure 5.6: Integration testing
Continued...
 Big bang is an approach to integration testing where all or most of the units are combined and tested in one go. This approach is taken when the testing team receives the entire software in a bundle.
 Sandwich/hybrid is an approach to integration testing that combines the top-down and bottom-up approaches.

Figure 5.8: Integration testing
Integration test planning
 For each cluster, the plan includes the following items:
a) the clusters this cluster depends on;
b) a natural-language description of the functionality of the cluster to be tested;
c) a list of the classes in the cluster;
d) a set of cluster test cases.
System Testing
 When integration tests are completed, a software system has been assembled and its major subsystems have been tested.
 At this point the developers/testers begin to test it as a whole.
 System test planning should begin at the requirements phase with the development of a master test plan and requirements-based (black box) tests.
 There are many components of the plan that need to be prepared, such as test approaches, costs, schedules, test cases, and test procedures.
 System testing itself requires a large amount of resources.
 The goal is to ensure that the system performs according to its requirements.
 System test evaluates both functional behaviour and quality requirements such as reliability, usability, performance, and security.
Continued…
 This phase of testing is especially useful for detecting external hardware and software interface defects, for example those causing race conditions, deadlocks, problems with interrupts and exception handling, and ineffective memory usage.
 After system test, the software will be turned over to users for evaluation during acceptance test or alpha/beta test.
 Because system test often requires many resources, special laboratory equipment, and long test times, it is usually performed by a team of testers.
 The best scenario is for the team to be part of an independent testing group. The team must do their best to find any weak areas in the software; therefore, it is best that no developers are directly involved.
Types of System Testing
 There are several types of system tests, as shown in Figure 5.10. The types are as follows:
 Functional testing
 Performance testing
 Stress testing
 Configuration testing
 Security testing
 Recovery testing

Figure 5.10: Types of system tests
Functional Testing
 Goal: test the functionality of the system.
 Functional tests at the system level are used to ensure that the behaviour of the system adheres to the requirements specification.
 For example, if a personal finance system is required to allow users to set up accounts; add, modify, and delete entries in the accounts; and print reports, the function-based system and acceptance tests must ensure that the system can perform these tasks.
 Clients and users will expect this at acceptance test time.
 Functional tests are black box in nature.
 Test cases are designed from the requirements analysis document (better: the user manual) and centered around requirements and key functions (use cases). The system is treated as a black box.
 Unit test cases can be reused, but new test cases have to be developed as well.

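The personal finance example above can be sketched as a black-box functional test: the test exercises the required functions (set up an account, add entries, delete entries) only through the system's public interface, without looking at its internals. The Ledger class here is a hypothetical stand-in for the system under test:

```python
# Hypothetical system under test: a tiny personal-finance ledger.
class Ledger:
    def __init__(self):
        self.accounts = {}

    def create_account(self, name):
        self.accounts[name] = []

    def add_entry(self, name, amount):
        self.accounts[name].append(amount)

    def delete_entry(self, name, index):
        del self.accounts[name][index]

def test_account_functions():
    # Black-box test: each call maps to a stated requirement.
    ledger = Ledger()
    ledger.create_account("savings")   # requirement: set up accounts
    ledger.add_entry("savings", 200)   # requirement: add entries
    ledger.delete_entry("savings", 0)  # requirement: delete entries
    assert ledger.accounts["savings"] == []
```

Note that the assertions check only externally observable behaviour, which is what distinguishes a functional (black box) test from a white box unit test.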
Performance Testing
 Goal: try to violate the non-functional requirements.
 Test how the system behaves when overloaded.
 Try unusual orders of execution.
 Check the system's response to large volumes of data.
 The users may have objectives for the software system in terms of memory use, response time, throughput, and delays.
 Testers also learn from performance tests whether there are any hardware or software factors that impact the system's performance.
 Performance testing allows testers to tune the system, that is, to optimize the allocation of system resources. For example, testers may find that they need to reallocate memory pools or modify the priority level of certain system operations.
 Testers may also be able to project the system's future performance levels. This is useful for planning subsequent releases.
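Response-time measurement, one of the performance objectives named above, can be sketched with a small timing helper. The helper and the sample operation are illustrative assumptions, not part of any particular test tool:

```python
import time

def measure_response_time(operation, repetitions=100):
    # Times repeated calls to an operation and returns the average
    # response time in milliseconds.
    start = time.perf_counter()
    for _ in range(repetitions):
        operation()
    elapsed = time.perf_counter() - start
    return (elapsed / repetitions) * 1000.0

# Sample operation standing in for a system function under test.
avg_ms = measure_response_time(lambda: sum(range(1000)))
```

In a real performance test the averaged figure would be compared against the response-time objective stated in the requirements.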
Stress Testing
 When a system is tested with a load that causes it to allocate its resources in maximum amounts, this is called stress testing.
 For example, if an operating system is required to handle 10 interrupts/second and the load causes 20 interrupts/second, the system is being stressed.
 The goal of stress testing is to try to break the system, i.e., to find the circumstances under which it will crash. This is sometimes called "breaking the system".
 Stress testing is important from the user/client point of view.
 When systems operate correctly under conditions of stress, clients have confidence that the software can perform as required.
Configuration Testing
 Typical software systems interact with hardware devices such as disc drives, tape drives, and printers.
 Many software systems also interact with multiple CPUs, some of which are redundant.
 Software that controls real-time processes, or embedded software, also interfaces with devices, but these are very specialized hardware items such as missile launchers and nuclear power device sensors.
 In many cases, users require that devices be interchangeable, removable, or reconfigurable.
 For example, a printer of type X should be substitutable for a printer of type Y, CPU A should be removable from a system composed of several other CPUs, and sensor A should be replaceable with sensor B.
 Configuration testing allows developers/testers to evaluate system performance and availability when hardware exchanges and reconfigurations occur.
Security Testing
 Designing and testing software systems to ensure that they are safe and secure is a big issue facing software developers and test specialists.
 Recently, safety and security issues have taken on additional importance due to the proliferation of commercial applications for use on the Internet.
 If Internet users believe that their personal information is not secure and is available to those with intent to do harm, they will be reluctant to use these applications.
 Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of system data and services.
 Users/clients should be encouraged to make sure their security needs are clearly known at requirements time, so that security issues can be addressed by designers and testers.
Other types of Performance Testing
Volume testing
 Test what happens when large amounts of data are handled.
Compatibility testing
 Test backward compatibility with existing systems.
Timing testing
 Evaluate response times and the time to perform a function.
Environmental testing
 Test tolerances for heat, humidity, and motion.
Quality testing
 Test reliability, maintainability, and availability.
Recovery testing
 Test the system's response to the presence of errors or loss of data.
Human factors testing
 Test with end users.
Acceptance Testing
 Goal: demonstrate that the system is ready for operational use.
 The choice of tests is made by the client.
 Many tests can be taken from integration testing.
 Acceptance testing is performed by the client, not by the developer.
 Acceptance test cases are based on requirements. The user manual is an additional source of test cases.
 System test cases may be reused. The software must run under real-world conditions on operational hardware and software.
Continued…
 After acceptance testing, the client will point out to the developers which requirements have and have not been satisfied. Some requirements may be deleted, modified, or added due to changing needs.
 If the client is satisfied that the software is usable and reliable, and they give their approval, then the next step is to install the system at the client's site.
 If the client's site conditions are different from those of the developers, the developers must set up the system so that it can interface with the client's software and hardware.
 Retesting may have to be done to ensure that the software works as required in the client's environment. This is called installation testing.
Alpha and Beta test
Alpha test:
 The client uses the software in the developer's environment.
 The software is used in a controlled setting, with the developer always ready to fix bugs.
 The following will be tested in the application: spelling mistakes, broken links, and cloudy directions.
Beta test:
 Conducted in the client's environment (the developer is not present).
 The software gets a realistic workout in the target environment.
 Beta testing is also known as pre-release testing.
 Users will install and run the application and send their feedback to the project team.
 Testing covers typographical errors, confusing application flow, and even crashes.
Information needed at different Levels of Testing

Figure 5.11: Testing levels

System Testing

Figure 5.12: System testing

Regression Testing
 Regression testing activities occur after software changes.
 Regression testing usually refers to testing activities during the software maintenance phase.
 It means re-testing an application after its code has been modified, to verify that it still functions correctly.
 It is a type of testing carried out to ensure that changes made in fixes do not impact previously working functionality.
 Major regression testing objectives:
 Retest changed components (or parts).
 Check the affected parts (or components).
 Make sure that a changed component does not impact the unchanged parts of the system.
Continued…
 Regression testing is not a level of testing.
 Regression testing can occur at any level of test; for example, when unit tests are run, the unit may pass a number of these tests until one of the tests does reveal a defect.
 Regression tests are especially important when multiple software releases are developed.
 Users want new capabilities in the latest releases, but still expect the older capabilities to remain in place.
 Test cases, test procedures, and other test-related items from previous releases should be available so that these tests can be run with the new versions of the software.
 Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools.
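Manual regression testing as described above amounts to keeping prior test cases and re-running the subset that touches the changed components. A minimal sketch of that selection step follows; the test names, component names, and registry structure are all illustrative assumptions:

```python
# Hypothetical registry mapping each kept test case to the set of
# components it covers.
test_registry = {
    "test_login": {"auth"},
    "test_report_totals": {"reports", "accounts"},
    "test_add_entry": {"accounts"},
}

def select_regression_tests(changed_components):
    # Select every test case whose covered components overlap the change set.
    return sorted(
        name for name, covered in test_registry.items()
        if covered & changed_components
    )

selected = select_regression_tests({"accounts"})
# selected → ['test_add_entry', 'test_report_totals']
```

Keeping such a mapping up to date is what lets a team re-run only the affected subset rather than the whole suite after every change.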
Regression testing at different levels
 Regression testing at the unit level
 Re-integration
 Regression testing at the function level
 Regression testing at the system level
 What do you need to perform software regression testing?
 Software change information (change notes).
 Updated software requirements and design specifications, and user manuals.
 A software regression testing process and strategy.
 Software regression testing methods and criteria.
Continued…
The regression testing process:
1. Software Change Analysis
2. Software Change Impact Analysis
3. Define Regression Testing Strategy
4. Build Regression Test Suite
5. Run Regression Tests at Different Levels
6. Report Retest Results

Thank you!
Questions?
