ISTQB. Module 2

The document discusses different types of software development lifecycle (SDLC) models and their implications for testing. It describes sequential and iterative development models. The waterfall model is presented as a sequential model where each phase must be completed before moving to the next. Testing occurs only after full development is completed. Iterative models allow for overlapping and repeated phases with testing throughout development.


2. Testing Throughout the Software Development Lifecycle
Terms

Acceptance Testing – a test level that focuses on determining if the user will accept the product as is.

Alpha Testing – a type of acceptance testing performed by potential users/customers or an independent test team in the developer's test environment (a form of internal acceptance testing).

Beta Testing – a type of acceptance testing performed by potential and/or existing users/customers at a site external to the developer's test environment (a form of external acceptance testing).

Change-related Testing – a type of testing initiated by modification to a component or system.

Commercial off-the-shelf (COTS) – a product developed in an identical format for a large number of customers in the general market.

Component Integration Testing – testing performed to expose defects in the interfaces and interactions between integrated components.

Component Testing – one of the test levels, focused on individual hardware or software components.

Confirmation Testing – testing performed after fixing a defect to confirm that the failure does not reproduce (re-running previously failed test cases).

Contractual Acceptance Testing – a type of acceptance testing performed to verify whether a system satisfies its contractual requirements.

Functional Testing – testing performed to evaluate if a component or system satisfies functional requirements.

Impact Analysis – the identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.

Integration Testing – a test level that focuses on interactions between components or systems.

Maintenance Testing – testing the changes to an operational system, or the impact of a changed environment on an operational system.

Non-functional Testing – testing performed to evaluate whether a component or system complies with non-functional requirements.

Operational Acceptance Testing – a type of acceptance testing performed to determine if operations and/or systems administration staff can accept a system.

Regression Testing – a type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software.

Regulatory Acceptance Testing – a type of acceptance testing performed to verify whether a system conforms to relevant laws, policies, and regulations.

Sequential Development Model – a type of software development lifecycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.

System Integration Testing – a test level that focuses on interactions between systems.

System Testing – a test level that focuses on verifying that a system as a whole meets specified requirements.

Test Basis – all documents used as the basis for test analysis and design.

Test Case – a set of preconditions, inputs, actions (where applicable), expected results, and postconditions, developed based on test conditions.

Test Environment – an environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test Level – a group of test activities that are organized and managed together.

Test Object – the work product to be tested.

Test Objective – the reason or purpose of testing.

Test Type – a group of test activities based on specific test objectives, aimed at specific characteristics of a component or system.

User Acceptance Testing – a type of acceptance testing performed to determine if intended users accept the system.

White-box Testing – testing based on an analysis of the internal structure of the component or system.
2.1 Software Development Lifecycle Models

A software development lifecycle (SDLC) model describes the types of activity performed at each stage in a software development project, and how the activities relate to one another logically and chronologically.

Each SDLC model requires different approaches to testing.

A simple development model, known as the waterfall model, is shown in Figure 2.1.

Figure 2.1 Waterfall model
2.1.1 Software Development and Software Testing

The life cycle model that is adopted for a project will have a big impact on the testing that is carried out. Test activities are closely related to software development activities.

Several characteristics of good testing apply in any SDLC model:

• For every development activity, there is a corresponding test activity.

• Each test level has test objectives specific to that level.

• Test analysis and design for a given test level begin during the corresponding development activity.

• Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories) as soon as drafts are available.
Common SDLC models are:

• Sequential development models

• Iterative and incremental development models

A sequential development model describes the software development process as a linear, sequential flow of activities: any phase begins only when the previous phase is complete.

The best-known sequential development models are the waterfall model and the V-model.
The main features of the waterfall model (Figure 2.2):

▪ The development activities (e.g., requirements analysis, design, coding) are completed one after another;

▪ Testing is carried out once the code has been fully developed.

Figure 2.2 Waterfall model
This model shows how a fully tested product can be created. Examples of when the model can and cannot be applied:

In a factory environment producing rivets for an aircraft fuselage, checks are made by operators to assess the rivets on a conveyor belt. This assessment may reveal a percentage of the rivets to be defective. Usually, this percentage is small and does not result in the whole batch of rivets being rejected; therefore the bulk of the product can be released.

Consider now the same aircraft, but where the product is the software controlling the display provided for the aircrew. If, at the point of testing, too many defects are found, what happens next? Can we release just parts of the system?
The V-model provides guidance that testing needs to begin as early as possible in the life cycle. The main idea of the V-model is that development and testing tasks are corresponding activities of equal importance; the two branches of the V symbolize this.

Figure 2.3 V-model for software development
Activities on the left-hand side of the model are the activities known from the waterfall model, and they focus on the initial requirements:

Requirement specification – capturing of user needs.

Functional specification – definition of the functions required to meet user needs.

Technical specification – technical design of the functions identified in the functional specification.

Program specification – detailed design of each module or unit to be built, to meet the required functionality.
✔ The middle of the V-model shows that planning for testing should start with each work product.

✔ The right-hand side focuses on the testing activities. For each work product, a testing activity is identified.

✔ Testing against the requirement specification takes place at the acceptance testing stage.

✔ Testing against the functional specification takes place at the system testing stage.

✔ Testing against the technical specification takes place at the integration testing stage.

✔ Testing against the program specification takes place at the unit testing stage.
Unlike sequential development models, where the delivered software contains the complete set of features and typically requires months or years for delivery to stakeholders and users, incremental development models involve establishing requirements, designing, building, and testing a system in pieces.

Iterative development occurs when groups of features are specified, designed, built, and tested together in a series of cycles. Each iteration delivers working software.

Figure 2.4 Iterative development model: each phase cycles through Define, Develop, Build, Test, and Implement activities, ending with live implementation.
Examples of incremental development models:

Rational Unified Process – each iteration tends to be relatively long (e.g., two to three months), and the feature increments are correspondingly large, such as two or three groups of related features;

Scrum – each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the feature increments are correspondingly small, such as a few enhancements and/or two or three new features;

Kanban – implemented with or without fixed-length iterations; can deliver either a single enhancement or feature upon completion, or can group features together to release at once;

Spiral (or prototyping) – involves creating experimental increments, some of which may be heavily re-worked or even abandoned in subsequent development work.
Agile software development – a group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.

Scrum – a management framework for iterative incremental development projects – is the most widely used in agile teams.
Agile manifesto:

▪ individuals and interactions over processes and tools;

▪ working software over comprehensive documentation;

▪ customer collaboration over contract negotiation;

▪ responding to change over following a plan.
Characteristics of project teams using Scrum:

▪ The generation of business stories (a form of lightweight use cases) to define the functionality, rather than highly detailed requirements specifications.

▪ The incorporation of business representatives into the development process, as part of each iteration (called a 'sprint' and typically lasting 2 to 4 weeks), providing continual feedback and defining and carrying out functional acceptance testing.

▪ The recognition that we can't know the future, so changes to requirements are welcomed throughout the development process, as this approach can produce a product that better meets the stakeholders' needs as their knowledge grows over time.

▪ The concept of shared code ownership among the developers, and the close inclusion of testers in the sprint teams.

▪ The writing of tests as the first step in the development of a component, and the automation of those tests before any code is written. The component is complete when it passes the automated tests. This is known as Test-Driven Development.

▪ Simplicity: building only what is necessary, not everything you can think of.

▪ The continuous integration and testing of the code throughout the sprint, at least once a day.
Benefits for testers when moving to an Agile development approach:

✔ The focus on working software and good quality code;

✔ The inclusion of testing as part of, and the starting point of, software development (test-driven development);

✔ Accessibility of business stakeholders to help testers resolve questions about the expected behavior of the system;

✔ Self-organizing teams, where the whole team is responsible for quality, giving testers more autonomy in their work;

✔ Simplicity of design, which should be easier to test.
Significant challenges for testers when moving to an Agile development approach:

✔ Testers have to use less formal documentation to design tests.
(The manifesto does not say that documentation is no longer necessary or that it has no value, but it is often interpreted that way.)

✔ The opinion "testers are not needed" may be strengthened, because developers are forced to do more component testing.
(But component testing may miss major problems. System testing as well as end-to-end functional testing is needed, even if it doesn't fit comfortably into a sprint.)
✔ The tester's role is different. Testers may act more as coaches in testing to both stakeholders and developers, who may not have a lot of testing knowledge.

✔ There is also constant time pressure and less time to think about the testing for the new features.
(Although there is less to test in one iteration than in a whole system.)

✔ Because each increment adds to an existing working system, regression testing becomes extremely important, and automation becomes more beneficial.

✔ Simply reusing existing automated component or component integration tests may not produce an adequate regression suite.
2.1.2 Software Development Lifecycle Models in Context

SDLC models must be selected and adapted to the context of project and product characteristics. Items to be considered when choosing an SDLC model:

• the project goal;
• the type of product being developed;
• business priorities (e.g., time-to-market);
• identified product and project risks.

Depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities; SDLC models themselves may also be combined.
Example 1: For the integration of a commercial off-the-shelf (COTS) software product into a larger system, the purchaser may perform interoperability testing at the system integration test level (e.g., integration to the infrastructure and other systems) and at the acceptance test level (functional and non-functional, along with user acceptance testing and operational acceptance testing).

Example 2: A V-model may be used for the development and testing of the backend systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality.

Example 3: Prototyping may be used early in a project, with an incremental development model adopted once the experimental phase is complete.
Example 4: Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and services, typically apply separate software development lifecycle models for each object. This presents a particular challenge for the development of IoT system versions. Additionally, the software development lifecycle of such objects places stronger emphasis on the later phases of the lifecycle, after the objects have been introduced to operational use (e.g., the operate, update, and decommission phases).
2.2 Test Levels

Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, consisting of the different activities performed in relation to software at a given level of development.

The test levels are:

• Component testing
• Integration testing
• System testing
• Acceptance testing
Attributes of test levels:

▪ Specific objectives – the process, product, and project objectives, ideally with measurable effectiveness and efficiency metrics and targets.

▪ Test basis – the work products used to derive the test cases.

▪ Test object – the item, build, or system under test (i.e., what is being tested).

▪ Typical defects and failures (that we are looking for).

▪ Specific approaches that are intended to be used.

▪ Responsibilities (the individuals who are responsible for the activities required to carry out the fundamental test process for the test level).
2.2.1 Component Testing

Component testing (also known as unit or module testing) focuses on components that are separately testable.

Objectives of component testing:

• Reducing risk;
• Verifying whether the functional and non-functional behaviors of the component are as designed and specified;
• Building confidence in the component's quality;
• Finding defects in the component;
• Preventing defects from escaping to higher test levels.
Component testing is often done in isolation from the rest of the system, depending on the SDLC model and the system.

Component testing may require (see the sketch below):

• Mock objects
• Service virtualization
• Harnesses
• Stubs
• Drivers

Component testing may cover:

• Functionality (e.g., the correctness of calculations)
• Non-functional characteristics (e.g., searching for memory leaks)
• Structural properties (e.g., decision testing)

Figure 2.5 Stubs and Drivers
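To illustrate, below is a minimal sketch in Python (all names hypothetical) of a stub standing in for a dependency that is not yet available, so that the component can be tested in isolation; the test function itself plays the role of the driver (e.g., collected and run by pytest):

    # Hypothetical component under test: computes an order total, depending
    # on a tax service that is not yet implemented.
    class TaxServiceStub:
        # Stub: returns a canned answer instead of calling the real service.
        def tax_rate(self, region):
            return 0.25

    def order_total(net_price, region, tax_service):
        # Component under test; the tax service is reached only via its interface.
        return net_price * (1 + tax_service.tax_rate(region))

    def test_order_total_with_stub():
        # Driver: invokes the component and checks the result.
        assert order_total(100.0, "EU", TaxServiceStub()) == 125.0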
Test basis:
• Detailed design;
• Code;
• Data model;
• Component specifications.

Example: Code as a test basis.

Analyzing the code of calculate_price(), the following statement can be recognized as a line that is relevant for testing:

    if (discount > addon_discount)
        addon_discount = discount;

Additional test cases that fulfill the condition (discount > addon_discount) can easily be derived from the code. The specification of the price calculation contains no information about this situation; the implemented functionality is extra: it is not supposed to be there.
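A hedged sketch of how such an extra test case could be automated, assuming a simplified calculate_price() with hypothetical parameters (the real signature is not shown in the slides):

    # Simplified stand-in for calculate_price(): the code grants the larger of
    # the two discounts, a behavior the specification does not mention.
    def calculate_price(base_price, discount, addon_discount):
        if discount > addon_discount:
            addon_discount = discount  # the undocumented, extra behavior
        return base_price * (1 - addon_discount)

    def test_discount_greater_than_addon_discount():
        # Derived from the code, not the specification: fulfills the condition
        # (discount > addon_discount), so the general discount wins.
        assert calculate_price(100.0, 0.25, 0.125) == 75.0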
Test objects:
• Components, units, or modules
• Code and data structures
• Classes
• Database modules

Typical defects and failures:
• Incorrect functionality (e.g., not as described in design specifications)
• Data flow problems
• Incorrect code and logic

Specific approaches and responsibilities:
• Component testing is usually performed by the developer who wrote the code;
• Developers may alternate component development with finding and fixing defects;
• Developers will often write and execute tests after having written the code for a component.

An approach to unit testing called Test-Driven Development reverses this: as its name suggests, test cases are written first, then code is built, tested, and changed until the unit passes its tests. This is an iterative approach to unit testing.
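A minimal sketch of one TDD cycle (hypothetical unit; runnable with pytest):

    # Step 1 (red): the test is written first and fails, because the unit
    # does not exist yet.
    def test_monthly_interest():
        assert monthly_interest(balance=1200.0, annual_rate=0.06) == 6.0

    # Step 2 (green): just enough code is written to make the test pass.
    def monthly_interest(balance, annual_rate):
        return balance * annual_rate / 12

    # Step 3 (refactor): the code is cleaned up while the test stays green,
    # and the cycle repeats for the next piece of behavior.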
2.2.2 Integration Testing

Integration testing focuses on interactions between components or systems.

Objectives of integration testing:

• Reducing risk.
• Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified.
• Building confidence in the quality of the interfaces.
• Finding defects (which may be in the interfaces themselves or within the components or systems).
• Preventing defects from escaping to higher test levels.
Test basis:
▪ Software and system design
▪ Sequence diagrams
▪ Interface and communication protocol specifications
▪ Use cases
▪ Architecture at the component or system level
▪ Workflows
▪ External interface definitions

Test objects:
▪ Subsystems
▪ Databases
▪ Infrastructure
▪ Interfaces
▪ APIs
▪ Microservices

Two different levels of integration testing:

▪ Component integration testing tests the interactions between software components and is done after component testing (see the sketch below).

▪ System integration testing tests the interactions between different systems and may be done after system testing.
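A small sketch (hypothetical components) of a component integration test, which exercises the hand-off between two units rather than their internal logic:

    # Two hypothetical components: a parser producing an account record and a
    # formatter consuming it. The integration test checks their interface.
    def parse_account_line(line):
        number, owner = line.strip().split(";")
        return {"number": number, "owner": owner}

    def format_account(record):
        return f"{record['owner']} ({record['number']})"

    def test_parser_output_is_accepted_by_formatter():
        # Integration focus: the record produced by one component must match
        # the structure the other component expects.
        record = parse_account_line("DE99-1234;Alex")
        assert format_account(record) == "Alex (DE99-1234)"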
Typical defects and failures:

For component integration testing:
• Incorrect data, missing data, or incorrect data encoding.
• Incorrect sequencing or timing of interface calls.
• Interface mismatch.
• Failures in communication between components.
• Unhandled or improperly handled communication failures between components.
• Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components.

For system integration testing:
• Inconsistent message structures between systems.
• Incorrect data, missing data, or incorrect data encoding.
• Interface mismatch.
• Failures in communication between systems.
• Unhandled or improperly handled communication failures between systems.
• Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems.
• Failure to comply with mandatory security regulations.
Generic integration testing strategies:

Big-bang integration (non-incremental integration):
This strategy means waiting until all software elements are developed and then throwing everything together in one step.

Top-down integration:
Testing takes place from top to bottom, following the control flow or architectural structure (e.g., starting from the GUI or main menu). Components or systems are substituted by stubs.
● Advantage: Test drivers are not needed, or only simple ones are required.
● Disadvantage: Stubs must replace lower-level components not yet integrated. This can be very costly.

Figure 2.6 Top-down control structure
Bottom-up integration:
Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
• Advantage: No stubs are needed.
• Disadvantage: Test drivers must simulate higher-level components.

The integration order for the structure in Figure 2.6 might be:
• 4,2
• 5,2
• 6,3
• 7,3
• 2,1
• 3,1
Ad hoc integration:
The components are integrated in the (casual) order in which they are finished.
• Advantage: This saves time, because every component is integrated as early as possible into its environment.
• Disadvantage: Stubs as well as test drivers are required.

Backbone integration:
A skeleton or backbone is built, and components are gradually integrated into it.
• Advantage: Components can be integrated in any order.
• Disadvantage: A possibly labor-intensive skeleton or backbone is required.
Specific approaches and responsibilities:

• Component integration tests and system integration tests should concentrate on the integration itself.

• Component integration testing is often the responsibility of developers.

• System integration testing is generally the responsibility of testers.

• Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components.

• In order to simplify defect isolation and detect defects early, integration should normally be incremental.

• A risk analysis of the most complex interfaces can help to focus the integration testing.

Continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice in order to isolate defects to a specific component or system and to reduce risks and time for troubleshooting. Such continuous integration often includes automated regression testing, ideally at multiple test levels.
2.2.3 System Testing

System testing focuses on the behavior and capabilities of a whole system or product.

Objectives of system testing:

• Reducing risk.
• Verifying whether the functional and non-functional behaviors of the system are as designed and specified.
• Validating that the system is complete and will work as expected.
• Building confidence in the quality of the system as a whole.
• Finding defects.
• Preventing defects from escaping to higher test levels or production.

Example: VSR-System tests
The main purpose of the VSR-System is to make ordering a car as easy as possible. While ordering a car, the user uses all the components of the VSR-System: the car is configured (DreamCar), financing and insurance are calculated (EasyFinance, NoRisk), the order is transmitted to production (JustInTime), and the contracts are archived (ContractBase). The system fulfills its purpose only when all these system functions and all the components collaborate correctly. The system test determines whether this is the case.
Test basis:
• System and software requirement specifications (functional and non-functional)
• Risk analysis reports
• Use cases
• Epics and user stories
• Models of system behavior
• State diagrams
• System and user manuals

Test objects:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration and configuration data
Typical defects and failures of system testing:

• Incorrect calculations

• Incorrect or unexpected system functional or non-functional behavior

• Incorrect control and/or data flows within the system

• Failure to properly and completely carry out end-to-end functional tasks

• Failure of the system to work properly in the production environment(s)

• Failure of the system to work as described in the system and user manuals
Specific approaches and responsibilities:

• System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional.

• System testing should use the most appropriate techniques for the aspect(s) of the system to be tested.

• Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager.

• Sometimes system testing is carried out by a third-party team or by business analysts.

• Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behavior. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively.
2.2.4 Acceptance Testing

Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product.

Objectives of acceptance testing:

• Establishing confidence in the quality of the system as a whole
• Validating that the system is complete and will work as expected
• Verifying that functional and non-functional behaviors of the system are as specified

Acceptance testing commonly comes in the following forms:

• User acceptance testing
• Operational acceptance testing
• Contractual and regulatory acceptance testing
• Alpha and beta testing
User acceptance testing:
The acceptance testing of the system by users is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is to build confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.

Operational acceptance testing:
Performed in a (simulated) production environment by systems administration staff. This can include checking:
▪ Back-up facilities
▪ Installing, uninstalling, and upgrading
▪ Procedures for disaster recovery
▪ Training for end users
▪ Maintenance tasks
▪ Data load and migration tasks
▪ Security procedures
▪ Performance testing
Contractual and regulatory acceptance testing:

• Contract acceptance testing – sometimes the criteria for accepting a system are documented in a contract. Testing is then conducted to check that these criteria have been met before the system is accepted.

• Regulation acceptance testing – in some industries, systems must meet governmental, legal, or safety standards. Examples are the defense, banking, and pharmaceutical industries.

The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing:

• Alpha testing takes place at the developer's site – the operational system is tested by internal staff before release to external customers. Note that testing here is still independent of the development team.

• Beta testing takes place at the customer's site – the operational system is tested by a group of customers, who use the product at their own locations and provide feedback before the system is released. This is often called 'field testing'.
Test basis:

Examples of work products that can be used for any form of acceptance testing:
• Business processes
• User or business requirements
• Regulations, legal contracts, and standards
• Use cases
• System requirements
• System or user documentation
• Installation procedures
• Risk analysis reports

Test basis for operational acceptance testing:
• Backup and restore procedures
• Disaster recovery procedures
• Non-functional requirements
• Operations documentation
• Deployment and installation instructions
• Performance targets
• Database packages
• Security standards or regulations
Typical test objects:
• System under test
• System configuration and configuration data
• Business processes for a fully integrated system
• Recovery systems and hot sites (for business continuity and disaster recovery testing)
• Operational and maintenance processes
• Forms
• Reports
• Existing and converted production data

Typical defects and failures:
• System workflows do not meet business or user requirements
• Business rules are not implemented correctly
• System does not satisfy contractual or regulatory requirements
• Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform
Specific approaches and responsibilities:

✔ Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well.

✔ Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:
• Acceptance testing of a COTS software product may occur when it is installed or integrated
• Acceptance testing of a new functional enhancement may occur before system testing
2.3 Test Types

Each test level has specific test objectives. A test type is a group of test activities aimed at testing specific characteristics of software, based on specific test objectives. Different test types are relevant at each test level.

Test types fall into four categories, and each category has its own testing objectives:

Functional testing – to evaluate functional quality characteristics, such as completeness, correctness, and appropriateness.

Non-functional testing – to evaluate non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability.

White-box testing (structural testing) – to evaluate whether the structure or architecture of the component or system is correct, complete, and as specified.

Change-related testing – to evaluate the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for changes in the behavior of the system that could occur as a result of changes in software or environment (regression testing).
2.3.1 Functional Testing

Functional testing of a system involves tests that evaluate functions that the system should perform. Functional requirements specify the behavior of the system; they describe what the system must be able to do.

Functional testing considers the specified behavior and is often also referred to as black-box testing (specification-based testing).

Functional requirements may be documented (in business requirements specifications, epics, user stories, use cases, or functional specifications) or undocumented.

Example: Requirements of the VSR-System (VirtualShowRoom)

R 100: The user can choose a vehicle model from the current model list for configuration.

R 101: For a chosen model, the deliverable extra equipment items are indicated. The user can choose the desired individual equipment from this list.

R 102: The total price of the chosen configuration is continuously calculated from current price lists and displayed.
For each requirement, at least one test case is designed and documented in the test specification.

Example: Requirements-based testing of requirement R 102 (see the sketch below):

T 102.1: A vehicle model is chosen; its base price according to the sales manual is displayed.

T 102.2: A special equipment item is selected; the price of this accessory is added.

T 102.3: A special equipment item is deselected; the price falls accordingly.

T 102.4: Three special equipment items are selected; the discount comes into effect as defined in the specification.

Functional tests should be performed at all test levels, though the focus is different at each level. Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves or the particular role the software serves.
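A hedged sketch of how T 102.1–T 102.3 could be automated against a hypothetical price-calculation component (names and prices invented for illustration):

    # Hypothetical model of R 102: the total price follows the current selection.
    class Configuration:
        def __init__(self, base_price):
            self.base_price = base_price
            self.extras = {}

        def select(self, item, price):
            self.extras[item] = price

        def deselect(self, item):
            del self.extras[item]

        def total_price(self):
            return self.base_price + sum(self.extras.values())

    def test_t102_total_price_follows_selection():
        config = Configuration(base_price=20000.0)  # T 102.1: base price displayed
        assert config.total_price() == 20000.0
        config.select("alloy wheels", 800.0)        # T 102.2: accessory price added
        assert config.total_price() == 20800.0
        config.deselect("alloy wheels")             # T 102.3: price falls accordingly
        assert config.total_price() == 20000.0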
2.3.2 Non-functional Testing

Non-functional testing evaluates the product quality characteristics of systems and software. The quality of a system is the degree to which the system satisfies the stated and implied needs of its various stakeholders and thus provides value. Those stakeholders' needs (functionality, performance, security, maintainability, etc.) are precisely what is represented in the quality model, which categorizes product quality into characteristics and sub-characteristics.

Figure 2.7 The product quality model (per ISO/IEC 25010)
Functional Suitability

This characteristic represents the degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions. It is composed of the following sub-characteristics:

Functional completeness – the degree to which the set of functions covers all the specified tasks and user objectives.

Functional correctness – the degree to which a product or system provides the correct results with the needed degree of precision.

Functional appropriateness – the degree to which the functions facilitate the accomplishment of specified tasks and objectives.
Performance Efficiency

This characteristic represents the performance relative to the amount of resources used under stated conditions. It is composed of the following sub-characteristics:

Time behavior – the degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements (see the sketch below).

Resource utilization – the degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements.

Capacity – the degree to which the maximum limits of a product or system parameter meet requirements.
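As an illustration, a time-behavior check could look like the following sketch (hypothetical operation and an invented 10 ms budget; real performance testing would use dedicated tooling and a controlled environment):

    import time

    def test_lookup_meets_response_time_budget():
        data = {i: str(i) for i in range(100_000)}  # hypothetical workload
        start = time.perf_counter()
        _ = data.get(99_999)                        # operation under measurement
        elapsed = time.perf_counter() - start
        assert elapsed < 0.01  # requirement: respond within 10 ms (invented budget)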
Compatibility

The degree to which a product, system, or component can exchange information with other products, systems, or components, and/or perform its required functions while sharing the same hardware or software environment. It is composed of the following sub-characteristics:

Co-existence – the degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product.

Interoperability – the degree to which two or more systems, products, or components can exchange information and use the information that has been exchanged.
Usability

The degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. It is composed of the following sub-characteristics:

Appropriateness recognizability – the degree to which users can recognize whether a product or system is appropriate for their needs.

Learnability – the degree to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk, and satisfaction in a specified context of use.

Operability – the degree to which a product or system has attributes that make it easy to operate and control.

User error protection – the degree to which a system protects users against making errors.

User interface aesthetics – the degree to which a user interface enables pleasing and satisfying interaction for the user.

Accessibility – the degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
Reliability

The degree to which a system, product, or component performs specified functions under specified conditions for a specified period of time. It is composed of the following sub-characteristics:

Maturity – the degree to which a system, product, or component meets the needs for reliability under normal operation.

Availability – the degree to which a system, product, or component is operational and accessible when required for use.

Fault tolerance – the degree to which a system, product, or component operates as intended despite the presence of hardware or software faults.

Recoverability – the degree to which, in the event of an interruption or a failure, a product or system can recover the data directly affected and re-establish the desired state of the system.
Security

The degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization. It is composed of the following sub-characteristics:

Confidentiality – the degree to which a product or system ensures that data are accessible only to those authorized to have access.

Integrity – the degree to which a system, product, or component prevents unauthorized access to, or modification of, computer programs or data.

Accountability – the degree to which the actions of an entity can be traced uniquely to the entity.

Authenticity – the degree to which the identity of a subject or resource can be proved to be the one claimed.
Maintainability

This characteristic represents the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it, or adapt it to changes in the environment and in requirements. It is composed of the following sub-characteristics:

Modularity – the degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components.

Reusability – the degree to which an asset can be used in more than one system, or in building other assets.

Analyzability – the degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of an intended change to one or more of its parts, or diagnose a product for deficiencies or causes of failures, or identify parts to be modified.

Modifiability – the degree to which a product or system can be effectively and efficiently modified without introducing defects or degrading existing product quality.

Testability – the degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component, and tests can be performed to determine whether those criteria have been met.
Portability

The degree of effectiveness and efficiency with which a system, product, or component can be transferred from one hardware, software, or other operational or usage environment to another. It is composed of the following sub-characteristics:

Adaptability – the degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software, or other operational or usage environments.

Installability – the degree of effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment.

Replaceability – the degree to which a product can replace another specified software product for the same purpose in the same environment.
✔ Non-functional testing evaluates characteristics of systems and software that determine "how well" the system behaves.

✔ Non-functional testing, like functional testing, can and often should be performed at all test levels, and done as early as possible.

✔ Black-box techniques may be used to derive test conditions and test cases for non-functional testing.

✔ Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology, or of the particular user base.
Non-functional testing includes, but is not limited to:

▪ performance testing – the process of testing to determine the performance of a software product;

▪ load testing – a type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g., numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system (see the sketch after this list);

▪ stress testing – a type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers;

▪ usability testing – testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users;

▪ maintainability testing – the process of testing to determine the maintainability of a software product;

▪ reliability testing – the process of testing to determine the reliability of a software product;

▪ portability testing – the process of testing to determine the portability of a software product;

▪ security testing – testing to evaluate the protection of the system and its data against unauthorized access, denial-of-service attacks, etc.
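A toy sketch of the load-testing idea, driving a hypothetical transaction with many parallel users via a thread pool (a real load test would target the actual system with dedicated tools and realistic load profiles):

    from concurrent.futures import ThreadPoolExecutor

    def place_order(user_id):
        # Hypothetical transaction; a real test would call the system under test.
        return user_id >= 0

    def test_system_handles_100_parallel_users():
        with ThreadPoolExecutor(max_workers=100) as pool:
            results = list(pool.map(place_order, range(100)))
        assert all(results)  # every simulated transaction must succeed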
2.3.3 White-box Testing

White-box testing (or structure-based testing) derives tests based on the system's internal structure or implementation.

Internal structure may include:

• code,
• architecture,
• workflows,
• and/or data flows within the system.

White-box testing can be performed at:

• the component test level, where code coverage is based on the percentage of component code that has been tested (see the sketch below),
• the component integration test level, where testing may be based on the architecture of the system, such as interfaces between components,
• other levels in special cases.

White-box test design and execution may involve special skills or knowledge, such as the way the code is built, how data is stored, and how to use coverage tools and correctly interpret their results.
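A sketch of decision coverage at the component test level, reusing the discount logic from Section 2.2.1 (hypothetical code): two tests are needed so that the if condition is evaluated both ways.

    def applied_discount(discount, addon_discount):
        if discount > addon_discount:  # decision coverage needs both outcomes
            addon_discount = discount
        return addon_discount

    def test_condition_true():
        # Covers the branch where the condition holds.
        assert applied_discount(0.25, 0.125) == 0.25

    def test_condition_false():
        # Covers the branch where it does not.
        assert applied_discount(0.05, 0.125) == 0.125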
2.3.4 Change-related Testing

When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any system failures.

Confirmation testing: performed after a defect is fixed. All test cases that failed should be re-executed on the new software version. The software may also be tested with new tests if, for instance, the defect was missing functionality. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.

Regression testing: involves re-executing a set of previously executed test cases to demonstrate that the system still works as expected after a change made in different parts of the code. Changes may include changes to the environment, such as a new version of an operating system or database management system.

Change-related testing may be performed at all test levels and applies to functional, non-functional, and white-box testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
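A minimal sketch of the distinction in practice (hypothetical fix and tests): the confirmation test re-executes the originally failing case on the fixed build, while the regression test re-checks unchanged behavior around it.

    # Hypothetical defect fix: the component now grants the best discount.
    def best_discount(discount, addon_discount):
        return max(discount, addon_discount)  # fixed implementation

    def test_confirmation_original_failure():
        # Failed before the fix; re-executed on the new version to confirm it.
        assert best_discount(0.25, 0.125) == 0.25

    def test_regression_equal_discounts():
        # Unchanged behavior that must still hold after the change.
        assert best_discount(0.125, 0.125) == 0.125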
2.3.5 Test Types and Test Levels

Performing different test types at different test levels can be illustrated by the example of a banking application.

Functional testing:

• Component testing: tests are designed based on how a component should calculate compound interest (see the sketch below).

• Component integration testing: tests are designed based on how account information captured at the user interface is passed to the business logic.

• System testing: tests are designed based on how account holders can apply for a line of credit on their checking accounts.

• System integration testing: tests are designed based on how the system uses an external microservice to check an account holder's credit score.

• Acceptance testing: tests are designed based on how the banker handles approving or declining a credit application.
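For the component-level row, a functional test derived from how the component should calculate compound interest might look like this sketch (hypothetical function and figures):

    def compound_amount(principal, annual_rate, years):
        # Hypothetical banking component: yearly compounding.
        return principal * (1 + annual_rate) ** years

    def test_compound_interest_two_years():
        # 1000 at 10% compounded yearly for 2 years -> 1210.00
        assert round(compound_amount(1000.0, 0.10, 2), 2) == 1210.00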
Non-functional testing:

• Component testing: performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation.

• Component integration testing: security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic.

• System testing: portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.

• System integration testing: reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond.

• Acceptance testing: usability tests are designed to evaluate the accessibility of the banker's credit processing interface for people with disabilities.
White-box testing:

• Component testing: tests are designed to achieve complete statement and decision coverage (discussed later) for all components that perform financial calculations.

• Component integration testing: tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.

• System testing: tests are designed to cover sequences of web pages that can occur during a credit line application.

• System integration testing: tests are designed to exercise all possible inquiry types sent to the credit score microservice.

• Acceptance testing: tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
Change-related testing:

• Component testing: automated regression tests are built for each component and included within the continuous integration framework.

• Component integration testing: tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.

• System testing: all tests for a given workflow are re-executed if any screen on that workflow changes.

• System integration testing: tests of the application interacting with the credit scoring microservice are re-executed daily as part of the continuous deployment of that microservice.

• Acceptance testing: all previously failed tests are re-executed after a defect found in acceptance testing is fixed.
2.4 Maintenance Testing

Testing that is executed during the life cycle phase of the system (after the system has been deployed to production environments) is called maintenance testing. Maintenance testing focuses on testing the changes to the system, as well as testing unchanged parts that might have been affected by the changes.

A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on:

• The degree of risk of the change
• The size of the existing system
• The size of the change

Note: maintenance testing is different from maintainability testing, which determines how easy it is to maintain the system.
2.4.1 Triggers for Maintenance

There are several reasons why maintenance testing takes place:

• Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes in the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities.

• Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained.

• Retirement, such as when an application reaches the end of its life. When an application or system is retired, this can require testing of data migration or archiving if long data retention periods are required.
2.4.2 Impact Analysis for Maintenance

Usually, maintenance testing will consist of two parts:

• testing the changes;
• regression tests to show that the rest of the system has not been affected by the maintenance work.

A major and important activity within maintenance testing is impact analysis. During impact analysis, together with the stakeholders, a decision is made on what parts of the system may be unintentionally affected and therefore need careful regression testing. Impact analysis can also help to identify the impact of a change on existing tests.
Impact analysis can be difficult if:

✔ Specifications (e.g., business requirements, user stories, architecture) are out of date or missing;

✔ Test cases are not documented or are out of date;

✔ Bi-directional traceability between tests and the test basis has not been maintained;

✔ Tool support is weak or non-existent;

✔ The people involved do not have domain and/or system knowledge;

✔ Insufficient attention has been paid to the software's maintainability during development.
