ISTQB. Module 2
Testing Throughout the Software Development Lifecycle
Terms

Acceptance Testing – a test level that focuses on determining if the user will accept the product as is.

Change-related testing – a type of testing initiated by modification to a component or system.

Component Testing – one of the test levels, focused on individual hardware or software components.

Beta Testing – a type of acceptance testing performed by potential and/or existing users/customers at a site external to the developer's test environment (a form of external acceptance testing).

Component Integration Testing – testing performed to expose defects in the interfaces and interaction between integrated components.

Contractual Acceptance Testing – a type of acceptance testing performed to verify whether a system satisfies its contractual requirements.
Common SDLC models are:

In a sequential model, any phase in the development process should begin only when the previous phase is complete.
The main features of the model in Figure 2.2:

▪ The development activities (e.g., requirements analysis, design, coding) are completed one after another.
This model shows how a fully tested product can be created.

Consider now the same aircraft, but the product is the software controlling the display provided for the aircrew. If, at the point of testing, too many defects are found, what happens next? Can we release just parts of the system?
The V-model provides guidance that testing needs to begin as early as possible in the life cycle.

The main idea of the V-model is that development and testing tasks are corresponding activities of equal importance. The two branches of the V symbolize this.
Figure 2.3 V-model for software development

The activities on the left-hand side of the model are known from the waterfall model, and they focus on the initial requirements:

Requirement specification – capturing of user needs.

Functional specification – definition of functions required to meet user needs.

Technical specification – technical design of the functions identified in the functional specification.

Program specification – detailed design of each module or unit to be built to meet the required functionality.
✔ The middle of the V-model shows that planning for testing should start with each work product.
✔ The right-hand side focuses on the testing activities. For each work product, a testing activity is identified.
✔ Testing against the functional specification takes place at the system testing stage.
✔ Testing against the program specification takes place at the unit testing stage.
Unlike sequential development models, where delivered software contains the complete set of features and typically requires months or years for delivery to stakeholders and users, incremental development models involve establishing requirements, designing, building, and testing a system in pieces.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles. Each iteration delivers working software.

Figure 2.4 Iterative development model (each cycle: Define – Develop – Build – Test – Implement)
Rational Unified Process – each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features.
Agile software development – a group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
▪ individuals and interactions over processes and tools;
▪ responding to change over following a plan.
Characteristics of project teams using Scrum:
▪ The generation of business stories (a form of lightweight use cases) to define the functionality, rather than highly detailed
requirements specifications.
▪ The incorporation of business representatives into the development process, as part of each iteration (called a ‘sprint’ and
typically lasting 2 to 4 weeks), providing continual feedback and defining and carrying out functional acceptance testing.
▪ The recognition that we can’t know the future, so changes to requirements are welcomed throughout the development
process, as this approach can produce a product that better meets the stakeholders' needs as their knowledge grows over
time.
▪ The concept of shared code ownership among the developers, and the close inclusion of testers in the sprint teams.
▪ The writing of tests as the first step in the development of a component, and the automation of those tests before any code is
written. The component is complete when it then passes the automated tests. This is known as Test-Driven Development.
▪ Simplicity: building only what is necessary, not everything you can think of.
▪ The continuous integration and testing of the code throughout the sprint, at least once a day.
Benefits for Testers When Moving to an Agile Development Approach:

✔ The focus on working software and good quality code;
✔ Self-organizing teams, where the whole team is responsible for quality, giving testers more autonomy in their work.

(The manifesto does not say that documentation is no longer necessary or that it has no value, but it is often interpreted that way.)
✔ The tester’s role is different. Testers may act more as coaches in testing to both stakeholders and developers, who may not have a lot of testing knowledge.
✔ There is also constant time pressure and less time to think about the testing for the new features (although there is less to test in one iteration than in a whole system).
SDLC models must be selected and adapted to the context of project and product characteristics.
Example 1:

Example 2: a V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality.

Example 3:

Example 4:
2.2 Test Levels

Test levels range from Component Testing up to Acceptance Testing.
Attributes of Test Levels:
▪ Specific objectives – the process, product, and project objectives, ideally
with measurable effectiveness and efficiency metrics and targets.
▪ Test basis – the work products used to derive the test cases.
▪ Test object – the item, build, or system under test (i.e., what is being
tested).
2.2.1 Component Testing

Objectives of component testing:
• Reducing risk;
• Verifying whether the functional and non-functional behaviors of the component are as designed and specified;
• Building confidence in the component’s quality.
Component testing is often done in isolation from the rest of the system, depending on the SDLC model and the system.
Test basis:
• Detailed design;
• Code;
• Data model;
• Component specifications.
An approach to unit testing is called Test-Driven Development (TDD). As its name suggests, test cases are written first, then code is built, tested, and changed until the unit passes its tests. This is an iterative approach to unit testing.
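As a loose illustration of the TDD cycle described above, the test below is written before the code it exercises (the `leap_year` function is a hypothetical example, not part of the syllabus):

```python
# Step 1: write the test first. It fails until leap_year() is implemented.
def test_leap_year():
    assert leap_year(2024) is True   # ordinary leap year
    assert leap_year(1900) is False  # century years are not leap...
    assert leap_year(2000) is True   # ...unless divisible by 400

# Step 2: write just enough code to make the test pass.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the test; refactor and repeat until everything passes.
test_leap_year()
```

In a real project the cycle would be driven by a test framework such as pytest or unittest, with the tests re-run automatically on every change.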
2.2.2 Integration Testing

Integration testing focuses on interactions between components or systems. Its objectives are:
• Reducing risk.
• Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified.
• Building confidence in the quality of the interfaces.
• Finding defects (which may be in the interfaces themselves or within the components or systems).
• Preventing defects from escaping to higher test levels.
Test basis / Test objects:

▪ System integration testing tests the interactions between different systems and may be done after system testing.

Typical defects and failures:

For component integration testing:
• Interface mismatch.
• Incorrect sequencing or timing of interface calls.

For system integration testing:
• Interface mismatch.
• Failures in communication between systems.
Big-bang integration (non-incremental integration): the strategy means waiting until all components are finished and then integrating them together in one step.

Top-down integration: testing takes place from top to bottom, following the control flow or architectural structure (e.g., starting from the top-level component).
● Advantage: test drivers are not needed, or only simple ones are required.
● Disadvantage: stubs must replace lower-level components not yet integrated. This can be very costly.

Figure 2.6 Top-down control structure
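A minimal sketch of how a stub can stand in for a lower-level component during top-down integration (the class and function names here are hypothetical, not from the syllabus):

```python
class PricingStub:
    """Stub replacing the real, not-yet-integrated pricing component.

    It returns a canned, predictable value instead of a real calculation.
    """
    def total_price(self, model: str, extras: list) -> float:
        return 30000.0  # fixed response, chosen for the test

class OrderScreen:
    """Top-level component: integrated and tested first."""
    def __init__(self, pricing):
        self.pricing = pricing  # injected dependency: real component or stub

    def show_price(self, model: str, extras: list) -> str:
        return f"Total: {self.pricing.total_price(model, extras):.2f} EUR"

# Integration test of the top level, with the stub standing in below it.
screen = OrderScreen(PricingStub())
assert screen.show_price("Sedan", ["sunroof"]) == "Total: 30000.00 EUR"
```

Because the stub's answers are fixed, any failure points at the top-level component rather than at the missing lower layer.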
Bottom-up integration:
Testing takes place from the bottom of the control flow upwards. Components or systems at higher levels are substituted by drivers.

(Figure: integration order, e.g., components 4 and 5 into 2, components 6 and 7 into 3, then 2 and 3 into 1.)
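In the other direction, a test driver exercises an already-integrated lower-level component from above (again a hypothetical sketch, not syllabus code):

```python
# Lower-level component, integrated and tested first.
def total_price(base_price: float, extras: list) -> float:
    """Bottom-level pricing component (hypothetical example)."""
    return base_price + sum(price for _, price in extras)

# Test driver: replaces the not-yet-integrated upper-level component
# by calling the lower-level interface directly and checking results.
def driver() -> float:
    extras = [("sunroof", 1200.0), ("alloy wheels", 800.0)]
    result = total_price(25000.0, extras)
    assert result == 27000.0, f"unexpected total: {result}"
    return result

driver()
```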
Ad hoc integration:
The components are being integrated in the (casual) order in which they are finished.
• Advantage: this saves time, because every component is integrated as early as possible into its environment.
• Disadvantage: stubs as well as test drivers are required.

Backbone integration:
A skeleton or backbone is built, and components are gradually integrated into it.
• Advantage: components can be integrated in any order.
• Disadvantage: a possibly labor-intensive skeleton or backbone is required.
Specific approaches and
responsibilities:
• Component integration tests and system integration tests should concentrate on the integration itself.
• Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up),
functional tasks, transaction processing sequences, or some other aspect of the system or components.
• In order to simplify defect isolation and detect defects early, integration should normally
be incremental.
• A risk analysis of the most complex interfaces can help to focus the integration testing.
2.2.3 System Testing

Objectives of system testing:
• Reducing risk.
• Verifying whether the functional and non-functional behaviors of the system are as designed and specified.
• Validating that the system is complete and will work as expected.
• Building confidence in the quality of the system as a whole.
• Finding defects.
• Preventing defects from escaping to higher test levels or production.

Example: VSR-System tests

The main purpose of the VSR-System is to make ordering a car as easy as possible. While ordering a car, the user uses all the components of the VSR-System: the car is configured (DreamCar), financing and insurance are calculated (EasyFinance, NoRisk), the order is transmitted to production (JustInTime), and the contracts are archived (ContractBase).

The system fulfills its purpose only when all these system functions and all the components collaborate correctly. The system test determines whether this is the case.
Test basis:
• System and software requirement specifications (functional and non-functional)
• Risk analysis reports
• Use cases
• Epics and user stories
• Models of system behavior
• State diagrams
• System and user manuals

Test objects:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration and configuration data
Typical defects and failures of system testing:
• Incorrect calculations.
Specific approaches and responsibilities:

• System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional.
• System testing should use the most appropriate techniques for the aspect(s) of the system to be tested.
• Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager.
• Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behavior. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively.
2.2.4 Acceptance Testing

Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Its objectives include:
• Verifying that functional and non-functional behaviors of the system are as specified.
User acceptance testing:
The acceptance testing of the system by users is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is to build confidence that the users can use the system to meet their needs and fulfill requirements.

Operational acceptance testing:
Performed in a (simulated) production environment by systems administration staff. This can include checking:
▪ Back-up facilities
▪ Installing, uninstalling and upgrading
▪ Procedures for disaster recovery
▪ Training for end users
▪ Performance testing
Contractual and regulatory acceptance testing:
• Contract acceptance testing – sometimes the criteria for accepting a system are documented in a contract. Testing is then conducted to check that these criteria have been met before the system is accepted.
• Regulation acceptance testing – in some industries, systems must meet governmental, legal, or safety standards. Examples of these are the defense, banking, and pharmaceutical industries.

The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing:
• Alpha testing takes place at the developer’s site – the operational system is tested by internal staff before releasing to external customers. Note that testing here is still independent of the development team.
• Beta testing takes place at the customer’s site – the operational system is tested by a group of customers, who use the product at their own locations and provide feedback before the system is released. This is often called ‘field testing’.
Test basis:

Examples of work products that can be used for any form of acceptance testing:

Test basis for operational acceptance testing:

Typical test objects:
• System under test

Typical defects and failures:
• System workflows do not meet business or user requirements.

Specific approaches and responsibilities:
2.3 Test Types

Each test level has specific test objectives. A test type is a group of test activities aimed at testing specific characteristics of software, based on specific test objectives. Different test types are relevant at each test level.

Test types fall into four categories, and each category has its own testing objectives:

Functional testing – to evaluate functional quality characteristics, such as completeness, correctness, and appropriateness.

Non-functional testing – to evaluate non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability.

White-box testing (structural testing) – to evaluate whether the structure or architecture of the component or system is correct, complete, and as specified.

Change-related testing – to evaluate the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for changes in the behavior of the system which could occur as a result of changes in software or environment (regression testing).
2.3.1 Functional Testing

Functional testing of a system involves tests that evaluate functions that the system should perform. Functional testing considers the specified behavior and is often also referred to as black-box testing (specification-based testing).

Functional requirements specify the behavior of the system; they describe what the system must be able to do. Requirements may be documented or undocumented.

Example: requirements of the VSR-System (VirtualShowRoom)

R 100: The user can choose a vehicle model from the current model list for configuration.

R 101: For a chosen model, the deliverable extra equipment items are indicated. The user can choose the desired individual equipment from this list.

T 102.2: The total price of the chosen configuration is added.

Functional tests should be performed at all test levels, though the focus is different at each level. Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves or the particular role the software serves.
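A black-box test derived from such a requirement checks only the specified behavior, not the implementation. The sketch below is a hypothetical stand-in for the VSR pricing function, not actual VSR code:

```python
# Implementation detail: invisible to the black-box test,
# which is derived only from the requirement text.
def total_price(base_price: float, extras: dict) -> float:
    return base_price + sum(extras.values())

# Black-box tests derived from "the total price of the chosen
# configuration is added":
def test_total_price_adds_extras():
    extras = {"sunroof": 1200.0, "navigation": 900.0}
    assert total_price(20000.0, extras) == 22100.0

def test_total_price_without_extras():
    assert total_price(20000.0, {}) == 20000.0

test_total_price_adds_extras()
test_total_price_without_extras()
```

If the implementation were later rewritten, these tests would remain valid as long as the specified behavior is preserved.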
2.3.2 Non-functional Testing

Non-functional testing evaluates product quality characteristics of systems and software.

The quality of a system is the degree to which the system satisfies the stated and implied needs of its various stakeholders and thus provides value. Those stakeholders' needs (functionality, performance, security, maintainability, etc.) are precisely what is represented in the quality model, which categorizes product quality into characteristics and sub-characteristics.
Functional Suitability

The degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions. This characteristic is composed of the following sub-characteristics:

Functional completeness – the degree to which the set of functions covers all the specified tasks and user objectives.

Functional correctness – the degree to which a product or system provides the correct results with the needed degree of precision.

Performance Efficiency

The degree to which a product or system performs relative to the amount of resources used under stated conditions. This characteristic is composed of the following sub-characteristics:

Time behavior – the degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements.

Resource utilization – the degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements.

Capacity – the degree to which the maximum limits of a product or system parameter meet requirements.
Compatibility

The degree to which a product, system, or component can exchange information with other products, systems, or components, and/or perform its required functions while sharing the same hardware or software environment. This characteristic is composed of the following sub-characteristics:

Co-existence – the degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product.

Interoperability – the degree to which two or more systems, products, or components can exchange information and use the information that has been exchanged.
Usability
The degree to which a product or system can be used by specified users to achieve specified goals with
effectiveness, efficiency, and satisfaction in a specified context of use. This characteristic is composed of the
following sub-characteristics:
Appropriateness recognizability – the degree to which users can recognize whether a product or system is appropriate for
their needs.
Learnability – the degree to which a product or system can be used by specified users to achieve specified goals of learning to
use the product or system with effectiveness, efficiency, freedom from risk, and satisfaction in a specified context of use.
Operability – the degree to which a product or system has attributes that make it easy to operate and control.
User error protection – the degree to which a system protects users against making errors.
User interface aesthetics – the degree to which a user interface enables pleasing and satisfying interaction for the user.
Accessibility – the degree to which a product or system can be used by people with the widest range of characteristics and
capabilities to achieve a specified goal in a specified context of use.
Reliability

The degree to which a system, product, or component performs specified functions under specified conditions for a specified period of time. This characteristic is composed of the following sub-characteristics:

Maturity – the degree to which a system, product, or component meets the needs for reliability under normal operation.

Availability – the degree to which a system, product, or component is operational and accessible when required for use.

Fault tolerance – the degree to which a system, product, or component operates as intended despite the presence of hardware or software faults.

Security

The degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization. Sub-characteristics include:

Confidentiality – the degree to which a product or system ensures that data are accessible only to those authorized to have access.

Accountability – the degree to which the actions of an entity can be traced uniquely to the entity.

Authenticity – the degree to which the identity of a subject or resource can be proved to be the one claimed.
Maintainability
This characteristic represents the degree of effectiveness and efficiency with which a product or system can be
modified to improve it, correct it, or adapt it to changes in the environment, and in requirements. This characteristic is
composed of the following sub-characteristics:
Modularity – the degree to which a system or computer program is composed of discrete components such that a change to one
component has minimal impact on other components.
Reusability – the degree to which an asset can be used in more than one system, or in building other assets.
Analyzability – the degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of
an intended change to one or more of its parts, or diagnose a product for deficiencies or causes of failures, or identify parts to be
modified.
Modifiability – the degree to which a product or system can be effectively and efficiently modified without introducing defects
or degrading existing product quality.
Testability – the degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component, and tests can be performed to determine whether those criteria have been met.
Portability

The degree of effectiveness and efficiency with which a system, product, or component can be transferred from one hardware, software, or other operational or usage environment to another. This characteristic is composed of the following sub-characteristics:
Adaptability – the degree to which a product or system can effectively and efficiently be
adapted for different or evolving hardware, software, or other operational or usage
environments.
Replaceability – the degree to which a product can replace another specified software product
for the same purpose in the same environment.
✔ Non-functional testing evaluates characteristics of systems and software that are responsible for “how well” the system behaves.
Non-functional testing includes, but is not limited to:
▪ performance testing – the process of testing to determine the performance of a software product;
▪ load testing – a type of performance testing conducted to evaluate the behavior of a component or
system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to
determine what load can be handled by the component or system;
▪ usability testing – testing to determine the extent to which the software product is understood, easy
to learn, easy to operate, and attractive to the users;
▪ reliability testing – the process of testing to determine the reliability of a software product;
▪ portability testing – the process of testing to determine the portability of a software product;
▪ security testing – testing to determine the degree to which the software product is protected against unauthorized access to the system or data, denial of service attacks, etc.
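As a loose illustration of a load test, the sketch below ramps up the number of simulated parallel users against a hypothetical `handle_request` function and measures elapsed time; real load testing would use a dedicated tool such as JMeter rather than hand-written code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> bool:
    """Hypothetical system under test: simulates a small unit of work."""
    time.sleep(0.001)  # stand-in for real processing
    return True

def run_load(parallel_users: int) -> float:
    """Fire one request per simulated user; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=parallel_users) as pool:
        results = list(pool.map(handle_request, range(parallel_users)))
    assert all(results), "some requests failed under load"
    return time.perf_counter() - start

# Increase the load step by step and watch how elapsed time grows.
for users in (10, 50, 100):
    elapsed = run_load(users)
    print(f"{users:4d} parallel users -> {elapsed:.3f}s total")
```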
2.3.3 White-box Testing

White-box testing (or structure-based testing) derives tests based on the system’s internal structure or implementation. The internal structure may include:
• code,
• architecture,
• work flows.

White-box testing can be performed, for example, at the:
• component test level, where code coverage is based on the percentage of component code that has been tested;
• component integration test level, where testing may be based on the architecture of the system, such as interfaces between components.
2.3.4 Change-related Testing

Confirmation testing: performed after a defect is fixed. All test cases that failed should be re-executed on the new software version. The software may also be tested with new tests if, for instance, the defect was missing functionality. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.

Regression testing: involves re-executing a set of test cases executed before, which serve to demonstrate that the system still works as expected after a change made in a different part of the code. Changes may include changes to the environment, such as a new version of an operating system or database management system.

Change-related testing may be performed at all test levels and applies to functional, non-functional, and white-box testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
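A minimal sketch of an automated regression suite, using plain `assert`-based checks against a hypothetical `discount` function (in practice a framework such as pytest would collect and run the tests automatically):

```python
# Hypothetical system under test.
def discount(price: float, code: str) -> float:
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Regression suite: existing checks, re-run after every change.
REGRESSION_SUITE = [
    ("no code", lambda: discount(100.0, "") == 100.0),
    ("10% off", lambda: discount(100.0, "SAVE10") == 90.0),
    ("20% off", lambda: discount(100.0, "SAVE20") == 80.0),
]

def run_suite(suite) -> list:
    """Return names of failed checks; an empty list means no regression."""
    return [name for name, check in suite if not check()]

failures = run_suite(REGRESSION_SUITE)
assert failures == [], f"regressions detected: {failures}"
```

Because the suite is cheap to re-run, it can be wired into continuous integration and executed on every code change.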
2.3.5 Test Types and Test Levels

Performing different test types at different test levels can be illustrated by the example of a banking application.

Functional testing:
▪ Component testing: tests are designed based on how a component should calculate compound interest.
▪ Component integration testing: tests are designed based on how account information captured at the user interface is passed to the business logic.
▪ System testing: tests are designed based on how account holders can apply for a line of credit on their checking accounts.
▪ System integration testing: tests are designed based on how the system uses an external microservice to check an account holder’s credit score.
▪ Acceptance testing: tests are designed based on how the banker handles approving or declining a credit application.
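For the component-level row above, a functional unit test for compound interest might look like this (the `compound_interest` function is a hypothetical example, not code from the banking application):

```python
def compound_interest(principal: float, rate: float, years: int) -> float:
    """Hypothetical component: yearly compounding, rounded to cents."""
    return round(principal * (1 + rate) ** years, 2)

# Component-level functional tests: check the calculation itself,
# in isolation from the user interface and business logic.
assert compound_interest(1000.0, 0.05, 1) == 1050.0
assert compound_interest(1000.0, 0.05, 2) == 1102.50
assert compound_interest(1000.0, 0.00, 10) == 1000.0
```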
Non-functional testing:
▪ Component testing: performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation.
▪ Component integration testing: security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic.
▪ System testing: portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.
▪ System integration testing: reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond.
▪ Acceptance testing: usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.
White-box testing:
▪ Component testing: tests are designed to achieve complete statement and decision coverage (discussed later) for all components that perform financial calculations.
▪ Component integration testing: tests are designed to exercise how each screen in the browser interface passes data to the next screen and the business logic.
▪ System testing: tests are designed to cover sequences of web pages that can occur during a credit line application.
▪ System integration testing: tests are designed to exercise all possible inquiry types sent to the credit score microservice.
▪ Acceptance testing: tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
Change-related testing:
▪ Component testing: automated regression tests are built for each component and included within the continuous integration framework.
▪ Component integration testing: tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
▪ System testing: all tests for a given workflow are re-executed if any screen on that workflow changes.
▪ System integration testing: tests of the application interacting with the credit scoring microservice are re-executed daily as part of the continuous deployment of that microservice.
2.4 Maintenance Testing

Testing that is executed during the maintenance life cycle phase of the system (after the system has been deployed to production environments) is called maintenance testing.

A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on:
• The degree of risk of the change
2.4.1 Triggers for Maintenance

There are several reasons why maintenance testing takes place:
• Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes,
changes in the operational environment (such as planned operating system or database upgrades),
upgrades of COTS software, and patches for defects and vulnerabilities.
• Migration, such as from one platform to another, which can require operational tests of the new
environment as well as of the changed software, or tests of data conversion when data from another
application will be migrated into the system being maintained.
• Retirement: when an application or system is retired, this can require testing of data migration or archiving, if long data retention periods are required.
2.4.2 Impact Analysis for Maintenance

Usually, maintenance testing will consist of two parts: testing the changes themselves, and regression testing to show that the rest of the system has not been affected.

During impact analysis, together with stakeholders, a decision is made on what parts of the system may be unintentionally affected and therefore need careful regression testing. Impact analysis can also help to identify the impact of a change on existing tests.
Impact analysis can be difficult if:
✔ Specifications (e.g., business requirements, user stories, architecture) are out of date or missing;
✔ Bi-directional traceability between tests and the test basis has not been maintained;
✔ Insufficient attention has been paid to the software's maintainability during development.