ISTQB_CTFL_Syllabus-v4.0
0. Introduction
0.7. Accreditation
An ISTQB® Member Board may accredit training providers whose course material follows this syllabus.
Training providers should obtain accreditation guidelines from the Member Board or body that performs
the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to
have an ISTQB® exam as part of the course. The accreditation guidelines for this syllabus follow the
general Accreditation Guidelines published by the Processes Management and Compliance Working
Group.
QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement
of appropriate levels of quality. Testing is a major form of quality control, while others include formal
methods (model checking and proof of correctness), simulation and prototyping.
QA is a process-oriented, preventive approach that focuses on the implementation and improvement of
processes. It works on the basis that if a good process is followed correctly, then it will generate a good
product. QA applies to both the development and testing processes, and is the responsibility of everyone
on a project.
Test results are used by QA and QC. In QC they are used to fix defects, while in QA they provide
feedback on how well the development and test processes are performing.
illustration of the Pareto principle. Predicted defect clusters, and actual defect clusters observed during
testing or in operation, are an important input for risk-based testing (see section 5.2).
5. Tests wear out. If the same tests are repeated many times, they become increasingly ineffective in
detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be
modified, and new tests may need to be written. However, in some cases, repeating the same tests can
have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).
6. Testing is context dependent. There is no single universally applicable approach to testing. Testing is
done differently in different contexts (Kaner 2011).
7. Absence-of-defects fallacy. It is a fallacy (i.e., a misconception) to expect that software verification
will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the
defects found could still produce a system that does not fulfill the users’ needs and expectations, that
does not help in achieving the customer’s business goals, and that is inferior compared to other
competing systems. In addition to verification, validation should also be carried out (Boehm 1981).
also includes defining the test data requirements, designing the test environment and identifying any
other required infrastructure and tools. Test design answers the question “how to test?”.
Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test
data). Test cases can be organized into test procedures and are often assembled into test suites. Manual
and automated test scripts are created. Test procedures are prioritized and arranged within a test
execution schedule for efficient test execution (see section 5.1.5). The test environment is built and
verified to be set up correctly.
Test execution includes running the tests in accordance with the test execution schedule (test runs).
Test execution may be manual or automated. Test execution can take many forms, including continuous
testing or pair testing sessions. Actual test results are compared with the expected results. The test
results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report
the anomalies based on the failures observed (see section 5.5).
Test completion activities usually occur at project milestones (e.g., release, end of iteration, test level
completion). For any unresolved defects, change requests or product backlog items are created. Any testware
that may be useful in the future is identified and archived or handed over to the appropriate teams. The
test environment is shut down to an agreed state. The test activities are analyzed to identify lessons
learned and improvements for future iterations, releases, or projects (see section 2.1.6). A test completion
report is created and communicated to the stakeholders.
1.4.3. Testware
Testware is created as output work products from the test activities described in section 1.4.1. There is a
significant variation in how different organizations produce, shape, name, organize and manage their
work products. Proper configuration management (see section 5.4) ensures consistency and integrity of
work products. The following list of work products is not exhaustive:
• Test planning work products include: test plan, test schedule, risk register, and entry and exit
criteria (see section 5.1). The risk register is a list of risks together with risk likelihood, risk impact and
information about risk mitigation (see section 5.2). The test schedule, risk register and entry and exit
criteria are often a part of the test plan.
• Test monitoring and control work products include: test progress reports (see section 5.3.2),
documentation of control directives (see section 5.3) and risk information (see section 5.2).
• Test analysis work products include: (prioritized) test conditions (e.g., acceptance criteria, see
section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).
• Test design work products include: (prioritized) test cases, test charters, coverage items, test
data requirements and test environment requirements.
• Test implementation work products include: test procedures, automated test scripts, test
suites, test data, test execution schedule, and test environment elements. Examples of test
environment elements include: stubs, drivers, simulators, and service virtualizations.
• Test execution work products include: test logs, and defect reports (see section 5.5).
• Test completion work products include: test completion report (see section 5.3.2), action items
for improvement of subsequent projects or iterations, documented lessons learned, and change
requests (e.g., as product backlog items).
• Testers are involved in reviewing work products as soon as drafts of this documentation are
available, so that this earlier testing and defect detection can support the shift-left strategy (see
section 2.1.5).
• Automation through a delivery pipeline reduces the need for repetitive manual testing
• The risk in regression is minimized due to the scale and range of automated regression tests
DevOps is not without its risks and challenges, which include:
• The DevOps delivery pipeline must be defined and established
• CI / CD tools must be introduced and maintained
• Test automation requires additional resources and may be difficult to establish and maintain
Although DevOps comes with a high level of automated testing, manual testing – especially from the
user's perspective – will still be needed.
systems is also possible. System testing may be performed by an independent test team, and is
related to specifications for the system.
• System integration testing focuses on testing the interfaces of the system under test and other
systems and external services. System integration testing requires suitable test environments,
preferably similar to the operational environment.
• Acceptance testing focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs. Ideally, acceptance testing should
be performed by the intended users. The main forms of acceptance testing are: user acceptance
testing (UAT), operational acceptance testing, contractual and regulatory acceptance testing,
alpha testing and beta testing.
Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test
activities:
• Test object
• Test objectives
• Test basis
• Defects and failures
• Approach and responsibilities
they use the same functional tests, but check that while performing the function, a non-functional
constraint is satisfied (e.g., checking that a function performs within a specified time, or a function can be
ported to a new platform). The late discovery of non-functional defects can pose a serious threat to the
success of a project. Non-functional testing sometimes needs a very specific test environment, such as a
usability lab for usability testing.
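For example, a functional test can be reused with a timing constraint added to it. The sketch below is illustrative only: the order-processing function, its result, and the two-second limit are assumptions made for this example, not taken from the syllabus.

```python
import time
from dataclasses import dataclass

@dataclass
class OrderResult:
    status: str

def process_order(order_id: int) -> OrderResult:
    # Stand-in for the real function under test (illustration only)
    return OrderResult(status="CONFIRMED")

def test_process_order_within_time_limit():
    start = time.perf_counter()
    result = process_order(order_id=42)
    elapsed = time.perf_counter() - start

    assert result.status == "CONFIRMED"   # functional check (same as the functional test)
    assert elapsed <= 2.0                 # non-functional constraint: response time
```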
Black-box testing (see section 4.2) is specification-based and derives tests from documentation external
to the test object. The main objective of black-box testing is checking the system's behavior against its
specifications.
White-box testing (see section 4.3) is structure-based and derives tests from the system's
implementation or internal structure (e.g., code, architecture, work flows, and data flows). The main
objective of white-box testing is to cover the underlying structure with tests to an acceptable level.
All four of the above-mentioned test types can be applied to all test levels, although the focus will be
different at each level. Different test techniques can be used to derive test conditions and test cases for
all the mentioned test types.
Code defects can be detected using static analysis more efficiently than in dynamic testing, usually
resulting in both fewer code defects and a lower overall development effort.
Frequent stakeholder feedback throughout the SDLC can prevent misunderstandings about requirements
and ensure that changes to requirements are understood and implemented earlier. This helps the
development team to improve their understanding of what they are building. It allows them to focus on
those features that deliver the most value to the stakeholders and that have the most positive impact on
identified risks.
• Moderator (also known as the facilitator) – ensures the effective running of review meetings,
including mediation, time management, and a safe review environment in which everyone can
speak freely
• Scribe (also known as recorder) – collates anomalies from reviewers and records review
information, such as decisions and new anomalies found during the review meeting
• Reviewer – performs reviews. A reviewer may be someone working on the project, a subject
matter expert, or any other stakeholder
• Review leader – takes overall responsibility for the review such as deciding who will be involved,
and organizing when and where the review will take place
Other, more detailed roles are possible, as described in the ISO/IEC 20246 standard.
• Defining clear objectives and measurable exit criteria. Evaluation of participants should never be
an objective
• Choosing the appropriate review type to achieve the given objectives, and to suit the type of work
product, the review participants, the project needs and context
• Conducting reviews on small chunks, so that reviewers do not lose concentration during an
individual review and/or the review meeting (when held)
• Providing feedback from reviews to stakeholders and authors so they can improve the product
and their activities (see section 3.2.1)
• Providing adequate time to participants to prepare for the review
• Support from management for the review process
• Making reviews part of the organization’s culture, to promote learning and process improvement
• Providing adequate training for all participants so they know how to fulfill their role
• Facilitating meetings
A partition containing valid values is called a valid partition. A partition containing invalid values is called
an invalid partition. The definitions of valid and invalid values may vary among teams and organizations.
For example, valid values may be interpreted as those that should be processed by the test object or as
those for which the specification defines their processing. Invalid values may be interpreted as those that
should be ignored or rejected by the test object or as those for which no processing is defined in the test
object specification.
In EP, the coverage items are the equivalence partitions. To achieve 100% coverage with this technique,
test cases must exercise all identified partitions (including invalid partitions) by covering each partition at
least once. Coverage is measured as the number of partitions exercised by at least one test case, divided
by the total number of identified partitions, and is expressed as a percentage.
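As a minimal sketch (the input parameter, its partitions and the chosen test values are assumptions for illustration), EP coverage could be computed like this:

```python
# Equivalence partitions for a hypothetical "age" input (boundaries chosen for illustration)
partitions = {
    "invalid_negative": range(-100, 0),
    "valid_child":      range(0, 13),
    "valid_adult":      range(13, 120),
    "invalid_too_big":  range(120, 1000),
}

def covered_partitions(test_inputs):
    """Return the set of partitions exercised by at least one test input."""
    return {name for name, values in partitions.items()
            if any(x in values for x in test_inputs)}

test_inputs = [-5, 7, 30]                   # three test cases
covered = covered_partitions(test_inputs)
coverage = len(covered) / len(partitions) * 100
print(f"EP coverage: {coverage:.0f}%")      # 3 of 4 partitions covered -> 75%
```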
Many test objects include multiple sets of partitions (e.g., test objects with more than one input
parameter), which means that a test case will cover partitions from different sets of partitions. The
simplest coverage criterion in the case of multiple sets of partitions is called Each Choice coverage
(Ammann 2016). Each Choice coverage requires test cases to exercise each partition from each set of
partitions at least once. Each Choice coverage does not take into account combinations of partitions.
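A small sketch of Each Choice coverage, assuming two hypothetical input parameters with three and two partitions respectively; because combinations are not required, the larger set determines the minimum number of test cases:

```python
from itertools import zip_longest

# Two sets of partitions (one per input parameter); values are representative inputs
payment_methods = ["card", "voucher", "invoice"]   # 3 partitions
customer_types  = ["guest", "registered"]          # 2 partitions

# Each Choice: every partition from every set appears in at least one test case,
# so max(3, 2) = 3 test cases are enough; the smaller set simply reuses a value.
each_choice_tests = list(zip_longest(payment_methods, customer_types,
                                     fillvalue=customer_types[0]))
print(each_choice_tests)
# [('card', 'guest'), ('voucher', 'registered'), ('invoice', 'guest')]
```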
In all states coverage, the coverage items are the states. To achieve 100% all states coverage, test
cases must ensure that all the states are visited. Coverage is measured as the number of visited states
divided by the total number of states, and is expressed as a percentage.
In valid transitions coverage (also called 0-switch coverage), the coverage items are single valid
transitions. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions.
Coverage is measured as the number of exercised valid transitions divided by the total number of valid
transitions, and is expressed as a percentage.
In all transitions coverage, the coverage items are all the transitions shown in a state table. To achieve
100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute
invalid transitions. Testing only one invalid transition in a single test case helps to avoid fault masking,
i.e., a situation in which one defect prevents the detection of another. Coverage is measured as the
number of valid and invalid transitions exercised or attempted to be covered by executed test cases,
divided by the total number of valid and invalid transitions, and is expressed as a percentage.
All states coverage is weaker than valid transitions coverage, because it can typically be achieved without
exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion.
Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions
coverage guarantees both full all states coverage and full valid transitions coverage and should be a
minimum requirement for mission and safety-critical software.
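The three coverage measures can be illustrated with a minimal sketch, assuming a hypothetical two-state model of a door and a set of executed (or attempted) transitions:

```python
# State table for a hypothetical door: valid transitions (state, event) -> next state
valid = {
    ("closed", "open"):  "opened",
    ("opened", "close"): "closed",
}
states = {"closed", "opened"}
events = {"open", "close"}

# All transitions in the state table: every (state, event) pair,
# whether it leads to a next state (valid) or is undefined (invalid).
all_transitions = {(s, e) for s in states for e in events}

# Transitions exercised (valid) or attempted (invalid) by the executed test cases
executed = {("closed", "open"), ("opened", "close"), ("opened", "open")}

visited_states = {s for (s, _e) in executed} | {valid[t] for t in executed if t in valid}

print("All states coverage:       ",
      len(visited_states & states) / len(states) * 100, "%")          # 100 %
print("Valid transitions coverage:",
      len(executed & set(valid)) / len(valid) * 100, "%")             # 100 %
print("All transitions coverage:  ",
      len(executed & all_transitions) / len(all_transitions) * 100, "%")  # 75 %
```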
• The types of errors the developers tend to make and the types of defects that result from these
errors
• The types of failures that have occurred in other, similar applications
In general, errors, defects and failures may be related to: input (e.g., correct input not accepted,
parameters wrong or missing), output (e.g., wrong format, wrong result), logic (e.g., missing cases, wrong
operator), computation (e.g., incorrect operand, wrong computation), interfaces (e.g., parameter
mismatch, incompatible types), or data (e.g., incorrect initialization, wrong type).
Fault attacks are a methodical approach to the implementation of error guessing. This technique requires
the tester to create or acquire a list of possible errors, defects and failures, and to design tests that will
identify defects associated with the errors, expose the defects, or cause the failures. These lists can be
built based on experience, defect and failure data, or from common knowledge about why software fails.
See (Whittaker 2002, Whittaker 2003, Andrews 2006) for more information on error guessing and fault
attacks.
Some checklist entries may gradually become less effective over time because the developers will learn
to avoid making the same errors. New entries may also need to be added to reflect newly found high
severity defects. Therefore, checklists should be regularly updated based on defect analysis. However,
care should be taken to avoid letting the checklist become too long (Gawande 2009).
In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of
consistency for the testing. If the checklists are high-level, some variability in the actual testing is likely to
occur, resulting in potentially greater coverage but less repeatability.
software development. In Planning Poker, estimates are usually made using cards with numbers that
represent the effort size.
Three-point estimation. In this expert-based technique, three estimations are made by the experts: the
most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The
final estimate (E) is their weighted arithmetic mean. In the most popular version of this technique, the
estimate is calculated as E = (a + 4*m + b) / 6. The advantage of this technique is that it allows the
experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-
hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12
person-hours), because E = (6 + 4*9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
See (Kan 2003, Koomen 2006, Westfall 2009) for these and many other test estimation techniques.
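The calculation from the example above could be scripted as follows (the values are those of the example):

```python
def three_point_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    """PERT-style three-point estimation: weighted arithmetic mean and standard deviation."""
    e = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return e, sd

e, sd = three_point_estimate(a=6, m=9, b=18)    # person-hours
print(f"Estimate: {e} +/- {sd} person-hours")   # Estimate: 10.0 +/- 2.0 person-hours
```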
(component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see
section 2.2.1) can also be used.
These two factors express the risk level, which is a measure of the risk. The higher the risk level, the
more important its treatment becomes.
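One common way to express a risk level quantitatively (a sketch only; the scales and the multiplication rule are illustrative assumptions, not prescribed by this syllabus) is to score likelihood and impact on ordinal scales and combine them:

```python
# Ordinal scales (1 = low, 3 = high); the scale and the multiplication rule are
# illustrative assumptions, not the only way to express a risk level.
def risk_level(likelihood: int, impact: int) -> int:
    return likelihood * impact

risks = {
    "payment service outage": risk_level(likelihood=2, impact=3),   # 6
    "typo on help page":      risk_level(likelihood=3, impact=1),   # 3
}

# The higher the risk level, the more important its treatment.
for name, level in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    print(level, name)
```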
Test completion collects data from completed test activities to consolidate experience, testware, and any
other relevant information. Test completion activities occur at project milestones such as when a test level
is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is
released, or a maintenance release is completed.
A test completion report is prepared during test completion, when a project, test level, or test type is
complete and when, ideally, its exit criteria have been met. This report uses test progress reports and
other data. Typical test completion reports include:
• Test summary
• Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit
criteria)
• Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort).
• Testing impediments and workarounds
• Test metrics based on test progress reports
• Unmitigated risks, defects not fixed
• Lessons learned that are relevant to the testing
Different audiences require different information in the reports, and influence the degree of formality and
the frequency of reporting. Reporting on test progress to others in the same team is often frequent and
informal, while reporting on testing for a completed project follows a set template and occurs only once.
The ISO/IEC/IEEE 29119-3 standard includes templates and examples for test progress reports (called
test status reports) and test completion reports.
For a complex configuration item (e.g., a test environment), CM records the items it consists of, their
relationships, and versions. If the configuration item is approved for testing, it becomes a baseline and
can only be changed through a formal change control process.
Configuration management keeps a record of changed configuration items when a new baseline is
created. It is possible to revert to a previous baseline to reproduce previous test results.
To properly support testing, CM ensures the following:
• All configuration items, including test items (individual parts of the test object), are uniquely
identified, version controlled, tracked for changes, and related to other configuration items so that
traceability can be maintained throughout the test process
• All identified documentation and software items are referenced unambiguously in test
documentation
Continuous integration, continuous delivery, continuous deployment and the associated testing are
typically implemented as part of an automated DevOps pipeline (see section 2.1.4), in which automated
CM is normally included.
• Description of the failure to enable reproduction and resolution, including the steps that detected
the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
• Expected results and actual results
• Severity of the defect (degree of impact) on the interests of stakeholders or requirements
• Priority to fix
• Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation
testing, re-opened, closed, rejected)
• References (e.g., to the test case)
Some of this data may be automatically included when using defect management tools (e.g., identifier,
date, author and initial status). Document templates for a defect report and example defect reports can be
found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
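As an illustration only (the field names are assumptions, not the ISO/IEC/IEEE 29119-3 template), the information listed above could be captured in a structure such as:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str
    title: str
    description: str              # steps to reproduce, logs, screenshots, recordings
    expected_result: str
    actual_result: str
    severity: str                 # degree of impact on stakeholders or requirements
    priority: str                 # urgency of the fix
    status: str = "open"          # e.g., open, deferred, duplicate, re-opened, closed
    references: list[str] = field(default_factory=list)   # e.g., related test case IDs
```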
Level 2: Understand (K2) – the candidate can select the reasons or explanations for statements related
to the topic, and can summarize, compare, classify and give examples for the testing concept.
Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give examples,
interpret, summarize.
Examples:
• “Classify the different options for writing acceptance criteria.”
• “Compare the different roles in testing” (look for similarities, differences or both).
• “Distinguish between project risks and product risks” (allows concepts to be differentiated).
• “Exemplify the purpose and content of a test plan.”
• “Explain the impact of context on the test process.”
• “Summarize the activities of the review process.”
Level 3: Apply (K3) – the candidate can carry out a procedure when confronted with a familiar task, or
select the correct procedure and apply it to a given context.
Action verbs: apply, implement, prepare, use.
Examples:
• “Apply test case prioritization” (should refer to a procedure, technique, process, algorithm etc.).
• “Prepare a defect report.”
• “Use boundary value analysis to derive test cases.”
References for the cognitive levels of learning objectives:
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and
Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
Business Outcomes: Foundation Level
The traceability matrix below maps each learning objective (listed by chapter/section/subsection together with its K-level) to the Foundation Level business outcomes FL-BO1 to FL-BO14; each X marks a business outcome addressed by that learning objective.
2.1.1 Explain the impact of the chosen software development lifecycle on testing K2 X
2.1.2 Recall good testing practices that apply to all software development lifecycles K1 X
2.1.6 Explain how retrospectives can be used as a mechanism for process improvement K2 X X
3.1.1 Recognize types of products that can be examined by the different static test techniques K1 X X
3.1.2 Explain the value of static testing K2 X X X
4.5.1 Explain how to write user stories in collaboration with developers and business representatives K2 X X
4.5.2 Classify the different options for writing acceptance criteria K2 X
5.1.2 Recognize how a tester adds value to iteration and release planning K1 X X X
5.1.7 Summarize the testing quadrants and their relationships with test levels and test types K2 X X
5.2.1 Identify risk level by using risk likelihood and risk impact K1 X X
5.2.3 Explain how product risk analysis may influence thoroughness and scope of testing K2 X X X
5.2.4 Explain what measures can be taken in response to analyzed product risks K2 X X X
5.3.2 Summarize the purposes, content, and audiences for test reports K2 X X X
o More focus on practices like: test-first approach (K1), shift-left (K2), retrospectives (K2)
o New section on testing in the context of DevOps (K2)
o Integration testing level split into two separate test levels: component integration testing
and system integration testing
• Major changes in chapter 3 (Static Testing)
o Section on review techniques, together with the K3 LO (apply a review technique)
removed
• Major changes in chapter 4 (Test Analysis and Design)
o Use case testing removed (but still present in the Advanced Test Analyst syllabus)
o More focus on collaboration-based approach to testing: new K3 LO about using ATDD to
derive test cases and two new K2 LOs about user stories and acceptance criteria
o Decision testing and coverage replaced with branch testing and coverage (first, branch
coverage is more commonly used in practice; second, different standards define the
decision differently, as opposed to "branch"; third, this solves a subtle but serious flaw
in the old FL2018, which claims that "100% decision coverage implies 100% statement
coverage" – this statement does not hold for programs with no decisions)
o Section on the value of white-box testing improved
• Major changes in chapter 5 (Managing the Test Activities)
o Section on test strategies/approaches removed
o New K3 LO on estimation techniques for estimating the test effort
o More focus on the well-known Agile-related concepts and tools in test management:
iteration and release planning (K1), test pyramid (K1), and testing quadrants (K2)
o Section on risk management better structured by describing four main activities: risk
identification, risk assessment, risk mitigation and risk monitoring
• Major changes in chapter 6 (Test Tools)
o Content on some test automation issues reduced as being too advanced for the
foundation level – section on tools selection, performing pilot projects and introducing
tools into organization removed
11. Index