Fundamentals of testing
In this chapter, we will introduce you to the fundamentals of testing: what software testing is and why testing is needed, including its limitations, objectives and purpose; the principles behind testing; the process that testers follow, including activities, tasks and work products; and some of the psychological factors that testers must consider in their work. By reading this chapter you will gain an understanding of the fundamentals of testing and be able to describe those fundamentals.
Note that the learning objectives start with ‘FL’ rather than ‘LO’ to show that they
are learning objectives for the Foundation Level qualification.
1.1 WHAT IS TESTING?
In this section, we will kick off the book by looking at what testing is, some miscon-
ceptions about testing, the typical objectives of testing and the difference between
testing and debugging.
Within each section of this book, there are terms that are important – they are used
in the section (and may be used elsewhere as well). They are listed in the Syllabus as
keywords, which means that you need to know the definition of the term and it could
appear in an exam question. We will give the definition of the relevant keyword terms
in the margin of the text, and they can also be found in the Glossary (including the
ISTQB online Glossary). We also show the keyword in bold within the section or
subsection where it is defined and discussed.
In this section, the relevant keyword terms are debugging, test object, test
objective, testing, validation and verification.
Software is everywhere
The last 100 years have seen an amazing human triumph of technology. Diseases
that once killed and paralyzed are routinely treated or prevented – or even eradicated
entirely, as with smallpox. Some children who stood amazed as they watched the
Perhaps the most dramatic advances in technology have occurred in the arena
of information technology. Software systems, in the sense that we know them, are
a recent innovation, less than 70 years old, but have already transformed daily life
around the world. Thomas Watson, the one-time head of IBM, famously predicted
that only about five computers would be needed in the whole world. This vastly
inaccurate prediction was based on the idea that information technology was useful
only for business and government applications, such as banking, insurance and con-
ducting a census. (The Hollerith punch-cards used by computers at the time Watson
made his prediction were developed for the United States census.) Now, everyone
who drives a car is using a machine not only designed with the help of computers,
but which also contains more computing power than the computers used by NASA
to get Apollo missions to and from the Moon. Mobile phones are now essentially
handheld computers that get smarter with every new model. The Internet of Things
(IoT) now gives us the ability to see who is at our door or turn on the lights when we
are nowhere near our home.
However, in the software world, the technological triumph has not been perfect.
Almost every living person has been touched by information technology, and most
of us have dealt with the frustration and wasted time that occurs when software fails
and exhibits unexpected behaviours. Some unfortunate individuals and companies
have experienced financial loss or damage to their personal or business reputations as
a result of defective software. A highly unlucky few have even been injured or killed
by software failures, including by self-driving cars.
One way to help overcome such problems is software testing, when it is done well.
Testing covers activities throughout the life cycle and can have a number of different
objectives, as we will see in Section 1.1.1.
The reason for this broad definition is that both dynamic testing (at whatever level)
and static testing (of whatever type) often enable the achievement of similar project
objectives. Dynamic testing and static testing also generate information that can help
achieve an important process objective – that of understanding and improving the
software development and testing processes. Dynamic testing and static testing are
complementary activities, each able to generate information that the other cannot.
correctly?’ Note the emphasis in the definition on ‘specified requirements’.

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

But just conforming to a specification is not sufficient testing, as we will see in Section 1.3.7 (Absence-of-errors is a fallacy). We also need to test to see if the delivered software and system will meet user and stakeholder needs and expectations in its operational environment. Often it is the tester who becomes the advocate for the end-user in this kind of testing, which is called validation. Here we are asking the question, ‘Have we built the right system?’ Note the emphasis in the definition on ‘intended use’.

In every development life cycle, a part of testing is focused on verification testing and a part is focused on validation testing. Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements set. In fact, verification focuses on the question, ‘Is the deliverable built according to the specification?’ Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question, ‘Is the deliverable fit for purpose; for example, does it provide a solution to the problem?’
Typical objectives of testing include the following:

●● To find failures and defects; this is typically a prime focus for software testing.
●● To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object – for example, by the satisfaction of entry or exit criteria.
●● To reduce the level of risk of inadequate software quality (e.g. previously undetected failures occurring in operation).
●● To comply with contractual, legal or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards.

Test object: The component or system to be tested.
These objectives are not universal. Different test viewpoints, test levels and test
stakeholders can have different objectives. While many levels of testing, such as
component, integration and system testing, focus on discovering as many failures as
possible in order to find and remove defects, in acceptance testing the main objec-
tive is confirmation of correct system operation (at least under normal conditions),
together with building confidence that the system meets its requirements. The context
of the test object and the software development life cycle will also affect what test
objectives are appropriate. Let’s look at some examples to illustrate this.
When evaluating a software package that might be purchased or integrated into a
larger software system, the main objective of testing might be the assessment of the
quality of the software. Defects found may not be fixed, but rather might support a
conclusion that the software be rejected.
During component testing, one objective at this level may be to achieve a given
level of code coverage by the component tests – that is, to assess how much of the
code has actually been exercised by a set of tests and to add additional tests to exercise
parts of the code that have not yet been covered/tested. Another objective may be
to find as many failures as possible so that the underlying defects are identified and
fixed as early as possible.
During user acceptance testing, one objective may be to confirm that the sys-
tem works as expected (validation) and satisfies requirements (verification). Another
objective of testing here is to focus on providing stakeholders with an evaluation of
the risk of releasing the system at a given time. Evaluating risk can be part of a mix
of objectives, or it can be an objective of a separate level of testing, as when testing
a safety-critical system, for example.
During maintenance testing, our objectives often include checking whether devel-
opers have introduced any regressions (new defects not present in the previous ver-
sion) while making changes. Some forms of testing, such as operational testing,
focus on assessing quality characteristics such as reliability, security, performance
or availability.
FL-1.2.4 Distinguish between the root cause of a defect and its effects (K2)
In this section, we discuss how testing contributes to success and the relationship
between testing and quality assurance. We will describe the difference between
errors, defects and failures and illustrate how software defects or bugs can cause
problems for people, the environment or a company. We will draw important distinc-
tions between defects, their root causes and their effects.
As we go through this section, watch for the Syllabus terms defect, error, failure,
quality, quality assurance and root cause.
Testing can help to reduce the risk of failures occurring during operation, provided
it is carried out in a rigorous way, including reviews of documents and other work
products. Testing both verifies that a system is correctly built and validates that it
will meet users’ and stakeholders’ needs, even though no testing is ever exhaustive
(see Principle 2 in Section 1.3, Exhaustive testing is impossible). In some situations,
testing may not only be helpful, but may be necessary to meet contractual or legal
requirements or to conform to industry-specific standards, such as automotive or
safety-critical systems.
software or system is released into use. Here are some examples where testing could
contribute to more successful systems:
●● Having testers involved in requirements reviews or user story refinement could
detect defects in these work products before any design or coding is done for the
functionality described. Identifying and removing defects at this stage reduces
the risk of the wrong software (incorrect or untestable) being developed.
●● Having testers work closely with system designers while the system is being
designed can increase each party’s understanding of the design and how to test
it. Since misunderstandings are often the cause for defects in software, having a
better understanding at this stage can reduce the risk of design defects. A bonus
is that tests can be identified from the design – thinking about how to test the
system at this stage often results in better design.
●● Having testers work closely with developers while the code is under develop-
ment can increase each party’s understanding of the code and how to test it. As
with design, this increased understanding, and the knowledge of how the code
will be tested, can reduce the risk of defects in the code (and in the tests).
●● Having testers verify and validate the software prior to release can detect
failures that might otherwise have been missed – this is traditionally where
the focus of testing has been. As we see with the previous examples, if we
leave it until release, we will not be nearly as efficient as we would have
been if we had caught these defects earlier. However, it is still necessary
to test just before release, and testers can also help to support debugging
activities, for example, by running confirmation and regression tests.
Thus, testing can help the software meet stakeholder needs and satisfy
requirements.
In addition to these examples, achieving the defined test objectives (see Section 1.1.1) also contributes to the overall success of software development and maintenance.
Quality control is concerned with the quality of products rather than processes, to
ensure that they have achieved the desired level of quality. Testing is looking at work
products, including software, so it is actually a quality control activity rather than a qual-
ity assurance activity, despite common usage. However, testing also has processes that
should be followed correctly, so quality assurance does support good testing in this way.
Sections 1.1.1 and 1.2.1 describe how testing contributes to the achievement of quality.
So, we see that testing plays an essential supporting role in delivering quality soft-
ware. However, testing by itself is not sufficient. Testing should be integrated into a
complete, team-wide and development process-wide set of activities for quality assur-
ance. Proper application of standards, training of staff, the use of retrospectives to
learn lessons from defects and other important elements of previous projects, rigorous
and appropriate software testing: all of these activities and more should be deployed
by organizations to ensure acceptable levels of quality and quality risk upon release.
While we commonly think of failures being the result of ‘bugs in the code’, a
significant number of defects are introduced in work products such as requirements
specifications and design specifications. Capers Jones reports that about 20% of
defects are introduced in requirements, and about 25% in design. The remaining 55%
are introduced during implementation or repair of the code, metadata or documen-
tation [Jones 2008]. Other experts and researchers have reached similar conclusions,
with one organization finding that as many as 75% of defects originate in require-
ments and design. Figure 1.1 shows four typical scenarios, the upper stream being
correct requirements, design and implementation, the lower three streams showing
defect introduction at some phase in the software life cycle.
Ideally, defects are removed in the same phase of the life cycle in which they are
introduced. (Well, ideally defects are not introduced at all, but this is not possible
because, as discussed before, people are fallible.) The extent to which defects are
removed in the phase of introduction is called phase containment. Phase containment
is important because the cost of finding and removing a defect increases each time
that defect escapes to a later life cycle phase. Multiplicative increases in cost, of the
sort seen in Figure 1.2, are not unusual. The specific increases vary considerably,
with Boehm reporting cost increases of 1:5 (from requirements to after release) for
simple systems, to as high as 1:100 for complex systems [Boehm 1986]. If you are
curious about the economics of software testing and other quality-related activities,
you can see Gilb [1993], Black [2004] or Black [2009].
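To make the multiplicative escalation concrete, here is a small Python sketch using the assumed multipliers from Figure 1.2; these are illustrative values only, and real ratios vary widely by project, as Boehm’s 1:5 to 1:100 range shows.

# Phase-cost multipliers as in Figure 1.2; illustrative values only.
PHASE_COST_MULTIPLIER = {
    'requirements': 1,
    'design': 2,
    'code/unit test': 4,
    'independent test': 8,
    'after release': 16,
}

def repair_cost(base_cost: float, found_in: str) -> float:
    """Estimated cost to repair a defect, given the phase where it is found."""
    return base_cost * PHASE_COST_MULTIPLIER[found_in]

# A defect costing $100 to fix during requirements costs an estimated
# $1,600 if it escapes all the way to production.
print(repair_cost(100, 'requirements'))   # 100
print(repair_cost(100, 'after release'))  # 1600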
Defects may result in failures, or they may not, depending on inputs and other con-
ditions. In some cases, a defect can exist that will never cause a failure in actual use,
because the conditions that could cause the failure can never arise. In other cases, a defect
can exist that will not cause a failure during testing, but which always results in failures
in production. This can happen with security, reliability and performance defects, espe-
cially if the test environments do not closely replicate the production environment(s).
[Figure 1.2: Cost to repair a defect by the phase in which it is found – 1X at requirements, 2X at design, 4X at code/unit test, 8X at independent test and 16X after release.]
It can also happen that expected and actual results do not match for reasons other
than a defect. In some cases, environmental conditions can lead to unexpected results
that do not relate to a software defect. Radiation, magnetism, electronic fields and
pollution can damage hardware or firmware, or simply change the conditions of the
hardware or firmware temporarily in a way that causes the software to fail.
The failure here is the incorrect interest calculations for customers. The defect
is the wrong calculation in the code. The root cause was the product owner’s lack
of knowledge about how interest should be calculated, and the effect was customer
complaints.
The root cause can be addressed by providing additional training in interest rate
calculations to the product owner, and possibly additional reviews of user stories by
interest calculation experts. If this is done, then incorrect interest calculations due to
ambiguous user stories should be a thing of the past.
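To illustrate the distinction in code, here is a hypothetical Python sketch of the interest example; the actual defect in the story is not shown in the text, so the specific mistake below (applying an annual rate as if it were monthly) is our invention.

def monthly_interest(balance: float, annual_rate: float) -> float:
    # DEFECT: the annual rate is applied directly instead of divided by 12.
    # The correct line would be: return balance * (annual_rate / 12)
    return balance * annual_rate

# FAILURE: the incorrect result the customers saw and complained about.
print(monthly_interest(1000.00, 0.06))  # prints 60.0; expected 5.0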
Root cause analysis is covered in more detail in two other ISTQB qualifications:
Expert Level Test Management, and Expert Level Improving the Test Process.
1.3 SEVEN TESTING PRINCIPLES
In this section, we will review seven fundamental principles of testing that have been
observed over the last 40+ years. These principles, while not always understood or
noticed, are in action on most if not all projects. Knowing how to spot these princi-
ples, and how to take advantage of them, will make you a better tester.
In addition to the descriptions of each principle below, you can refer to Table 1.1
for a quick reference of the principles and their text as written in the Syllabus.
TABLE 1.1 Testing principles

Principle 1: Testing shows the presence of defects, not their absence. Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.

Principle 2: Exhaustive testing is impossible. Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques and priorities should be used to focus test efforts.

Principle 3: Early testing saves time and money. To find defects early, both static and dynamic test activities should be started as early as possible in the software development life cycle. Early testing is sometimes referred to as ‘shift left’. Testing early in the software development life cycle helps reduce or eliminate costly changes (see Chapter 3, Section 3.1).

Principle 4: Defects cluster together. A small number of modules usually contains most of the defects discovered during pre-release testing, or they are responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in Principle 2).

Principle 5: Beware of the pesticide paradox. If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data are changed and new tests need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

Principle 6: Testing is context dependent. Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e‑commerce mobile app. As another example, testing in an Agile project is done differently to testing in a sequential life cycle project (see Chapter 2, Section 2.1).

Principle 7: Absence-of-errors is a fallacy. Some organizations expect that testers can run all possible tests and find all possible defects, but Principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is hard to use, that does not fulfil its users’ needs and expectations, or that is inferior to competing systems.
only cover all the possible data value combinations.
So, we are confronted with a big, infinite cloud of possible tests; we must select
a subset from it. One way to select tests is to wander aimlessly in the cloud of tests,
selecting at random until we run out of time. While there is a place for automated
random testing, by itself it is a poor strategy. We’ll discuss testing strategies further
in Chapter 5, but for the moment let’s look at two.
One strategy for selecting tests is risk-based testing. In risk-based testing, we
have a cross-functional team of project and product stakeholders perform a special
type of risk analysis. In this analysis, stakeholders identify risks to the quality of the
system, and assess the level of risk (often using likelihood and impact) associated
with each risk item. We focus the test effort based on the level of risk, using the level
of risk to determine the appropriate number of test cases for each risk item, and also
to sequence the test cases.
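A minimal Python sketch of this idea follows; the risk items, the 1–5 scales and the allocation rule are all assumptions for illustration.

risk_items = [
    # (risk item, likelihood 1-5, impact 1-5)
    ('payment processing', 4, 5),
    ('report formatting', 2, 2),
    ('user login', 3, 4),
]

def risk_level(likelihood: int, impact: int) -> int:
    return likelihood * impact

# Sequence the items by descending risk and allocate test effort to each.
for name, likelihood, impact in sorted(
        risk_items, key=lambda item: risk_level(item[1], item[2]), reverse=True):
    level = risk_level(likelihood, impact)
    planned_tests = 2 * level  # assumed rule: two test cases per risk point
    print(f'{name}: risk level {level}, plan {planned_tests} test cases')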
Another strategy for selecting tests is requirements-based testing. In
requirements-based testing, testers analyze the requirements specification (which
would be user stories in Agile projects) to identify test conditions. These test condi-
tions inherit the priority of the requirement or user story they derive from. We focus
the test effort based on the priority to determine the appropriate number of test cases
for each aspect, and also to sequence the test cases.
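Again as a sketch, with invented requirement identifiers and priorities, test conditions inherit the priority of their source requirement and are sequenced accordingly.

requirements = {'REQ-1': 'high', 'REQ-2': 'low', 'REQ-3': 'medium'}

test_conditions = [
    ('TC-01', 'REQ-1'), ('TC-02', 'REQ-1'),
    ('TC-03', 'REQ-2'), ('TC-04', 'REQ-3'),
]

priority_order = {'high': 0, 'medium': 1, 'low': 2}

# Each test condition inherits its requirement's priority; run high first.
for tc, req in sorted(test_conditions,
                      key=lambda pair: priority_order[requirements[pair[1]]]):
    print(f'{tc} (from {req}): priority {requirements[req]}')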
as the previous system test level. Nevertheless, at the end of the process, fewer than
30 defects remain. Even though each test activity was only 45% effective at finding
defects, the overall sequence of activities was 97% effective. Note that now we are
doing both static testing (the reviews) and dynamic testing (the running of tests at the
different test levels). This approach of starting test activities as early as possible is also
called ‘shift left’ because the test activities are no longer all done on the right-hand
side of a sequential life cycle diagram, but on the left-hand side at the beginning of
development. Although unit test execution is of course on the right side of a sequential
life cycle diagram, improving and spending more effort on unit testing early on is a
very important part of the shift left paradigm.
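The arithmetic behind those numbers is worth seeing once. The short Python calculation below assumes 1,000 initial defects and six successive activities (say, requirements review, design review, code review, unit test, integration test and system test), each finding 45% of the defects that reach it; the starting count and the list of activities are our assumptions.

defects = 1000.0
for _ in range(6):
    defects *= (1 - 0.45)  # 55% of the remaining defects escape each activity

print(round(defects, 1))            # ~27.7 defects remain (fewer than 30)
print(f'{1 - defects / 1000:.1%}')  # ~97.2% overall effectiveness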
In addition, defects removed early cost less to remove. Further, since much of the
cost in software engineering is associated with human effort, and since the size of
a project team is relatively inflexible once that project is underway, reduced cost of
defects also means reduced duration of the project. That situation is shown graphi-
cally in Figure 1.3.
Now, this type of cumulative and highly efficient defect removal only works if each
of the test activities in the sequence is focused on different, defined objectives. If we
simply test the same test conditions over and over, we will not achieve the cumulative
effect, for reasons we will discuss in a moment.
[Figure 1.3: Cumulative defect removal across the life cycle – defects removed during requirements, design, code/unit test, integration test and system test.]
of the modules accounting for 80% (or more) of the defects. In other words, the
defect density of modules varies considerably. While controversy exists about why
defect clustering happens, the reality of defect clustering is well established. It was
first demonstrated in studies performed by IBM in the 1960s [Jones 2008], and is
mentioned in Myers [2011]. We continue to see evidence of defect clustering in our
work with clients.
Defect clustering is helpful to us as testers, because it provides a useful guide. If we
focus our test effort (at least in part) based on the expected (and ultimately observed)
likelihood of finding a defect in a certain area, we can make our testing more effective
and efficient, at least in terms of our objective of finding defects. Knowledge of and
predictions about defect clusters are important inputs to the risk-based testing strategy
discussed earlier. In a metaphorical way, we can imagine that bugs are social creatures
who like to hang out together in the dark corners of the software.
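A small Python sketch shows how defect clustering can be spotted from test results; the module names, defect counts and sizes are invented for illustration.

modules = {
    # module: (defects found, size in KLOC)
    'billing': (90, 12.0),
    'auth': (45, 6.0),
    'reporting': (10, 15.0),
    'ui': (5, 20.0),
}

total = sum(found for found, _ in modules.values())
for name, (found, kloc) in sorted(modules.items(),
                                  key=lambda kv: kv[1][0], reverse=True):
    density = found / kloc  # defects per thousand lines of code
    print(f'{name}: density {density:.1f}/KLOC, {found / total:.0%} of all defects')

# Two of the four modules account for 90% of the defects - a cluster that
# deserves extra attention in a risk-based test approach.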
might put the company out of business; if you tried to apply e-commerce approaches
to safety-critical software, you could put lives in danger. So, the context of the test-
ing influences how much testing we do and how the testing is done.
Another example is the way that testing is done in an Agile project as opposed to
a sequential life cycle project. Every sprint in an Agile project includes testing of the
functionality developed in that sprint; the testing is done by everyone on the Agile
team (ideally) and the testing is done continually over the whole of development. In
sequential life cycle projects, testing may be done more formally, documented in more
detail and may be focused towards the end of the project.
Test activities, started early and targeting specific and diverse objectives and areas of the
system, can effectively and efficiently find – and help a project team to remove – a
large percentage of the defects. Surely that is all that is required to achieve project
success?
Sadly, it is not. Many systems have been built that failed in user acceptance testing
or in the marketplace, such as the initial launch of the US healthcare.gov website,
which suffered from serious performance and web access problems.
Consider desktop computer operating systems. In the 1990s, as competition
peaked for dominance of the PC operating system market, Unix and its variants had
higher levels of quality than DOS and Windows. However, 25 years on, Windows
dominates the desktop marketplace. One major reason is that Unix and its variants
were too difficult for most users in the early 1990s.
Consider a system that perfectly conforms to its requirements (if that were possi-
ble), which has been tested thoroughly and all defects found have been fixed. Surely
this would be a success, right? Wrong! If the requirements were flawed, we now have a
perfectly working wrong system. Perhaps it is hard to use, as in the previous example.
Perhaps the requirements missed some major features that users were expecting or
needed to have. Perhaps this system is quite OK, but a competitor has come out with
a competing system that is easier to use, includes the expected features and is cheaper.
Our ‘perfect’ system is not looking so good after all, even though it has effectively
‘no defects’ in terms of ‘conformance to requirements’.
1.4 TEST PROCESS
FL-1.4.2 Describe the test activities and respective tasks within the test
process (K2)
FL-1.4.3 Differentiate the work products that support the test process (K2)
In this section, we will describe the test process: tasks, activities and work products.
We will talk about the influence of context on the test process and the importance
of traceability.
In this section, there are a large number of Glossary keywords (19 in all): coverage,
test analysis, test basis, test case, test completion, test condition, test control, test
data, test design, test execution, test execution schedule, test implementation,
test monitoring, test oracle, test planning, test procedure, test suite, testware
and traceability.
In Section 1.1, we looked at the definition of testing, and identified misperceptions
about testing, including that testing is not just test execution. Certainly, test execution
is the most visible testing activity. However, effective and efficient testing requires
test approaches that are properly planned and carried out, with tests designed and
implemented to cover the proper areas of the system, executed in the right sequence
and with their results reviewed regularly. This is a process, with tasks and activities
that can be identified and need to be done, sometimes formally and other times very
informally. In this section, we will look at the test process in detail.
There is no ‘one size fits all’ test process, but testing does need to include com-
mon sets of activities, or it may not achieve its objectives. An organization may have
a test strategy where the test activities are specified, including how they are imple-
mented and when they occur within the life cycle. Another organization may have
a test strategy where test activities are not formally specified, but expertise about
test activities is shared among team members informally. The ‘right’ test process
for you is one that achieves your test objectives in the most efficient way. The best
test process for you would not be the best for another organization (and vice versa).
Simply having a defined test strategy is not enough. One of our clients recently
was a law firm that sued a company for a serious software failure. It turned out that
while the company had a written test strategy, this strategy was not aligned with
the testing best practices described in this book or the Syllabus. Further, upon close
examination of their test work products, it was clear that they had not even carried
out the strategy properly or completely. The company ended up paying a substantial
penalty for their lack of quality. So, you must consider whether your actual test activ-
ities and tasks are sufficient.
The test process consists of the following main groups of activities:

●● Test planning.
●● Test monitoring and control.
●● Test analysis.
●● Test design.
●● Test implementation.
●● Test execution.
●● Test completion.
These activities appear to be logically sequential, in the sense that tasks within
each activity often create the preconditions or precursor work products for tasks
in subsequent activities. However, in many cases, the activities in the process may
overlap or take place concurrently or iteratively, provided that these dependencies are
fulfilled. Each group of activities consists of many individual tasks; these will vary
for different projects or releases. For example, in Agile development, we have small
iterations of software design, build and test that happen continuously, and planning
is also a very dynamic activity throughout. If there are multiple teams, some teams may be doing test analysis while other teams are in the middle of test implementation, for example.

Note that this is ‘a’ test process, not ‘the’ test process. We have found that most of these activities, and many of the tasks within these activities, are carried out in some form or another on most successful test efforts. However, you should expect to have to tailor your test process, its main activities and the constituent tasks based on the organizational, project, process and product needs, constraints and other contextual realities. In sequential development, there will also be overlap, combination, concurrency or even omission of some tasks; this is why a test process is tailored for each project.

Test planning

Test planning: The activity of establishing or updating a test plan.

Test plan: Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities. (Note that we have included the definition of test plan here, even though it is not listed in the Syllabus as a term that you need to know for this chapter; otherwise the definition of test planning is not very informative.)

Test planning involves defining the objectives of testing and the approach for meeting those objectives within project constraints and contexts. This includes deciding on suitable test techniques to use, deciding what tasks need to be done, formulating a test schedule and other things.

Metaphorically, you can think of test planning as similar to figuring out how to get from one place to another (without using your GPS – there is no GPS for testing). For small, simple and familiar projects, finding the route merely involves taking an existing map, highlighting the route and jotting down the specific directions. For large, complex or new projects, finding the route can involve a sophisticated process of creating a new map, exploring unknown territory and blazing a fresh trail.

We will discuss test planning in more detail in Section 5.2.

Test monitoring and control

Test monitoring: A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status and reporting status to stakeholders.

Test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

To continue our metaphor, even with the best map and the clearest directions, getting from one place to another involves careful attention, watching the dashboard, minor (and sometimes major) course corrections, talking with our companions about the journey, looking ahead for trouble, tracking progress towards the ultimate destination and coping with finding an alternate route if the road we wanted is blocked. So, in test monitoring, we continuously compare actual progress against the plan, check on the progress of test activities and report the test status and any necessary deviations from the plan. In test control, we take whatever actions are necessary to meet the mission and objectives of the project, and/or adjust the plan.

Test monitoring is the ongoing comparison of actual progress against the test plan, using any test monitoring metrics that we have defined in the test plan. Test progress against the plan is reported to stakeholders in test progress reports or stakeholder
meetings. One option that is often overlooked is that if things are going very wrong,
it may be time to stop the testing or even stop the project completely. In our driving
analogy, once you find out that you are headed in completely the wrong direction,
the best option is to stop and re-evaluate, not continue driving to the wrong place.
One way we can monitor test progress is by using exit criteria, also known as
‘definition of done’ in Agile development. For example, the exit criteria for test exe-
cution might include:
●● Checking test results and logs against specified coverage criteria (we have not
finished testing until we have tested what we planned to test).
●● Assessing the level of component or system quality based on test results and
logs (e.g. the number of defects found or ease of use).
●● Assessing product risk and determining if more tests are needed to reduce the risk to an acceptable level (see the sketch below).
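To make the monitoring step concrete, here is a minimal Python sketch of an exit criteria check; the specific criteria and thresholds are our own assumptions for illustration, not values from the Syllabus.

def exit_criteria_met(requirement_coverage: float,
                      open_critical_defects: int,
                      tests_failed: int,
                      tests_run: int) -> bool:
    """True when this assumed 'definition of done' holds."""
    return (requirement_coverage >= 1.0          # all planned coverage achieved
            and open_critical_defects == 0       # no critical defects left open
            and tests_failed / tests_run <= 0.05)  # failure rate at most 5%

print(exit_criteria_met(1.0, 0, 3, 100))   # True: this test level can finish
print(exit_criteria_met(0.9, 2, 10, 100))  # False: keep testing (or re-plan)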
Test analysis
In test analysis, we analyze the test basis to identify testable features and define associated test conditions. Test analysis determines ‘what to test’, including measurable coverage criteria. We can say colloquially that the test basis is everything upon which we base our tests. The test basis can include requirements, user stories, design specifications, risk analysis reports, the system design and architecture, interface specifications and user expectations.

Test analysis: The activity that identifies test conditions by analyzing the test basis.

Test condition: An aspect of the test basis that is relevant in order to achieve specific test objectives. See also: exploratory testing.

In test analysis, we transform the more general testing objectives defined in the test plan into tangible test conditions. The way in which these are specifically documented depends on the needs of the testers, the expectations of the project team, any applicable regulations and other considerations.
Test analysis includes the following major activities and tasks:
●● Analyze the test basis appropriate to the test level being considered. Examples
of a test basis include:
–– Requirement specifications, for example, business requirements, functional
requirements, system requirements, user stories, epics, use cases or similar
work products that specify desired functional and non-functional component
or system behaviour. These specifications say what the component or sys-
tem should do and are the source of tests to assess functionality as well as
non-functional aspects such as performance or usability.
–– Design and implementation information, such as system or software architecture
diagrams or documents, design specifications, call flows, modelling diagrams
(for example, UML or entity-relationship diagrams), interface specifications or
similar work products that specify component or system structure. Structures for
implemented systems or components can be a useful source of coverage criteria
to ensure that sufficient testing has been done on those structures.
–– The implementation of the component or system itself, including code, data-
base metadata and queries, and interfaces. Use all information about any
aspect of the system to help identify what should be tested.
–– Risk analysis reports, which may consider functional, non-functional and technical aspects of the component or system.
●● Evaluate the test basis and test items to identify various types of defects that
might occur (typically done by reviews), such as:
–– ambiguities
–– omissions
–– inconsistencies
–– inaccuracies
–– contradictions
–– superfluous statements.
●● Identify features and sets of features to be tested.
●● Identify and prioritize test conditions for each feature, based on analysis of the
test basis, and considering functional, non-functional and structural characteris-
tics, other business and technical factors, and levels of risks.
●● Capture bi-directional traceability between each element of the test basis
and the associated test conditions. This traceability should be bi-directional
(we can trace in both forward and backward directions) so that we can check
which test basis elements go with which test conditions (and vice versa) and
determine the degree of coverage of the test basis by the test conditions. See
Sections 1.4.3 and 1.4.4 for more on traceability. Traceability is also very
important for maintenance testing, as we will discuss in Chapter 2, Section 2.4.
How are the test conditions actually identified from a test basis? The test tech-
niques, which are described in Chapter 4, are used to identify test conditions. Black-
box techniques identify functional and non-functional test conditions, white-box
techniques identify structural test conditions and experience-based techniques can
identify other important test conditions. Using techniques helps to reduce the likeli-
hood of missing important conditions and helps to define more precise and accurate
test conditions.
Sometimes the test conditions identified can be used as test objectives for a
test charter. In exploratory testing, an experience-based technique (see Chapter 4,
Section 4.4.2), test charters are used as goals for the testing that will be carried out
in an exploratory way – that is, test design, execution and learning in parallel. When
these test objectives are traceable to the test basis, the coverage of those test condi-
tions can be measured.
One of the most beneficial side effects of identifying what to test in test analysis is
that you will find defects; for example, inconsistencies in requirements, contradictory
statements between different documents, missing requirements (such as no ‘other-
wise’ for a selection of options) or descriptions that do not make sense. Rather than
being a problem, this is a great opportunity to remove these defects before develop-
ment goes any further. This verification (and validation) of specifications is particu-
larly important if no other review processes for the test basis documents are in place.
Test analysis can also help to validate whether the requirements properly
capture customer, user and other stakeholder needs. For example, techniques such
as behaviour-driven development (BDD) and acceptance test-driven development
(ATDD) both involve generating test conditions (and test cases) from user stories.
BDD focuses on the behaviour of the system and ATDD focuses on the user view
of the system, and both techniques involve defining acceptance criteria. Since these
acceptance criteria are produced before coding, they also verify and validate the user
stories and the acceptance criteria. More about this is found in the ISTQB Foundation
Level Agile Tester Extension qualification.
Test design

Test analysis addresses ‘what to test’ and test design addresses the question ‘how to test’; that is, what specific inputs and data are needed in order to exercise the software for a particular test condition. In test design, test conditions are elaborated (at a high level) in test cases, sets of test cases and other testware. Test analysis identifies general ‘things’ to test, and test design makes these general things specific for the component or system that we are testing.

Test design: The activity of deriving and specifying test cases from test conditions.

Test case: A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

Test data: Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases.

Test design includes the following major activities:

●● Design and prioritize test cases and sets of test cases.
●● Identify the necessary test data to support the test conditions and test cases as they are identified and designed.
●● Design the test environment, including set-up, and identify any required infrastructure and tools.
●● Capture bi-directional traceability between the test basis, test conditions, test cases and test procedures (see also Section 1.4.4).

As with the identification of test conditions, test techniques are used to derive or
elaborate test cases from the test conditions. These are described in Chapter 4, where
test analysis and test design are discussed in more detail.
Just as in test analysis, test design can also identify defects – in the test basis and in the existing test conditions. Because test design is a deeper level of detail, some defects that were not obvious when looking at the test basis at a high level may become clear when deciding exactly what values to assign to test cases. For example, a test condition might be to check the boundary values of an input field, but when determining the exact values, we realize that a maximum value has not been specified in the test basis. Identifying defects at this point is a good thing because if they are fixed now, they will not cause problems later.

Which of these specific tasks applies to a particular project depends on various contextual issues relevant to the project, and these are discussed further in Chapter 5.
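To make the boundary-value example above concrete, here is a minimal Python sketch; the function under test and its limits (an order quantity of 1–100, assuming the missing maximum was clarified as 100) are invented for illustration.

def accept_quantity(value: int) -> bool:
    """System under test: accepts an order quantity between 1 and 100."""
    return 1 <= value <= 100

# Concrete test cases at and around each boundary of the valid range.
boundary_cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    actual = accept_quantity(value)
    assert actual == expected, f'{value}: expected {expected}, got {actual}'
print('all boundary tests passed')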
Test implementation

Test implementation: The activity that prepares the testware needed for test execution based on test analysis and design.

Test procedure: A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.

Test suite (test case suite, test set): A set of test cases or test procedures to be executed in a specific test cycle.

Test execution schedule: A schedule for the execution of test suites within a test cycle.

In test implementation, we specify test procedures (or test scripts). This involves combining the test cases in a particular order, as well as including any other information needed for test execution. Test implementation also involves setting up the test environment and anything else that needs to be done to prepare for test execution, such as creating testware. Test design asked ‘how to test’, and test implementation asks ‘do we now have everything in place to run the tests?’

Test implementation includes the following major activities:

●● Develop and prioritize the test procedures and, potentially, create automated test scripts.
●● Create test suites from the test procedures and automated test scripts (if any). See Chapter 6 for test automation.
●● Arrange the test suites within a test execution schedule in a way that results in efficient test execution (see Chapter 5, Section 5.2.4).
●● Build the test environment (possibly including test harnesses, service virtualization, simulators and other infrastructure items) and verify that everything needed has been set up correctly.
●● Prepare test data and ensure that it is properly loaded in the test environment
(including inputs, data resident in databases and other data repositories, and
system configuration data).
●● Verify and update the bi-directional traceability between the test basis,
test conditions, test cases, test procedures and test suites (see also
Section 1.4.4).
Ideally, all of these tasks are completed before test execution begins, because
otherwise precious, limited test execution time can be lost on these types of prepara-
tory tasks. One of our clients reported losing as much as 25% of the test execution
period to what they called ‘environmental shakedown’, which turned out to consist
almost entirely of test implementation activities that could have been completed
before the software was delivered.
Note that although we have discussed test design and test implementation as sep-
arate activities, in practice they are often combined and done together.
Not only are test design and implementation combined, but many test activities
may be combined and carried out concurrently. For example, in exploratory test-
ing (see Chapter 4, Section 4.4.2), test analysis, test design, test implementation
and test execution are done in an interactive way throughout an exploratory test
session.
Test execution
In test execution, the test suites that have been assembled in test implementation are run, according to the test execution schedule.

Test execution: The process of running a test on the component or system under test, producing actual result(s).

Testware: Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

Test execution includes the following major activities:

●● Record the identities and versions of all of the test items (parts of the test object to be tested), test objects (system or component to be tested), test tools and other testware.
●● Execute the tests either manually or by using an automated test execution tool, according to the planned sequence.
●● Compare actual results with expected results, observing where the actual and expected results differ. These differences may be the result of defects, but at this point we do not know, so we refer to them as anomalies (see the sketch after this list).
●● Analyze the anomalies in order to establish their likely causes. Failures may
occur due to defects in the code or they may be false-positives. (A false-positive
is where a defect is reported when there is no defect.) A failure may also be due
to a test defect, such as defects in specified test data, in a test document or
the test environment, or simply due to a mistake in the way the test was
executed.
●● Report defects based on the failures observed (see Chapter 5, Section 5.6). A
failure due to a defect in the code means that we can write a defect report. Some
organizations track test defects (i.e. defects in the tests themselves), while others
do not.
●● Log the outcome of test execution (e.g. pass, fail or blocked). This includes
not only the anomalies observed and the pass/fail status of the test cases,
but also the identities and versions of the software under test, test tools and
testware.
●● As necessary, repeat test activities when actions are taken to resolve discrepan-
cies. For example, we might need to re-run a test that previously failed in order
to confirm a fix (confirmation testing). We might need to run an updated test.
We might also need to run additional, previously executed tests to see whether
defects have been introduced in unchanged areas of the software or to see
whether a fixed defect now makes another defect apparent (regression testing).
●● Verify and update the bi-directional traceability between the test basis, test
conditions, test cases, test procedures and test results.
As before, which of these specific tasks applies to a particular project depends
on various contextual issues relevant to the project; these are discussed further in
Chapter 5.
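As a minimal illustration of the compare-and-log activities above, here is a Python sketch; the test IDs and result values are invented, and a real test log would record versions and environments too.

executed = [
    # (test id, expected result, actual result)
    ('TC-01', 45.01, 45.01),
    ('TC-02', 50.00, 49.99),
    ('TC-03', 0.00, 0.00),
]

log = []
for test_id, expected, actual in executed:
    status = 'pass' if actual == expected else 'fail'
    log.append((test_id, status))
    if status == 'fail':
        # An anomaly, not yet a confirmed defect: analysis must first rule
        # out test defects, environment problems and false-positives.
        print(f'{test_id}: anomaly - expected {expected}, got {actual}')

print(log)  # [('TC-01', 'pass'), ('TC-02', 'fail'), ('TC-03', 'pass')]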
Test completion
Test completion activities collect data from completed test activities to consolidate experience, testware and any other relevant information. Test completion activities should occur at major project milestones. These can include when a software system is released, when a test project is completed (or cancelled), when an Agile project iteration is finished (e.g. as part of a retrospective meeting), when a test level has been completed or when a maintenance release has been completed. The specific milestones that involve test completion activities should be specified in the test plan.

Test completion: The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

Test completion includes the following major activities:
●● Check whether all defect reports are closed, entering change requests or prod-
uct backlog items for any defects that remain unresolved at the end of test
execution.
●● Create a test summary report to be communicated to stakeholders.
●● Finalize and archive the test environment, the test data, the test infrastructure
and other testware for later reuse.
●● Hand over the testware to the maintenance teams, other project teams, and/or
other stakeholders who could benefit from its use.
●● Analyze lessons learned from completed test activities to determine changes
needed for future iterations, releases and projects (i.e. perform a retrospective).
●● Use the information gathered to improve test process maturity, especially as
an input to test planning for future projects.
The degree and extent to which test completion activities occur, and which specific
test completion activities do occur, depends on various contextual issues relevant to
the project, which are discussed further in Chapter 5.
Test work products are created as part of the test process, and there is significant
variation in the types of work products created, in the ways they are organized and
managed, and in the names used for them. The work products described in this section
are in the ISTQB Glossary of terms. More information can be found in ISO/IEC/
IEEE 29119-3 [2013].
Test work products can be captured, stored and managed in configuration man-
agement tools, or possibly in test management tools or defect management tools.
related via traceability information (see Section 1.4.4). Test plans also include entry
and exit criteria (also known as definition of ready and definition of done) for the test-
ing within their scope – the exit criteria are used during test monitoring and control.
Beware of what people call a ‘test plan’; we have seen this name applied to any
kind of test document, including test case specifications and test execution schedules.
A test plan is a planning document – it contains information about what is intended
to happen in the future, and is similar to a project plan. It does not contain detail of
test conditions, test cases or other aspects of testing.
The test plan needs to be understandable to those who need to know the infor-
mation contained in it. The two-page cryptic diagram that was called a ‘test plan’ at
one organization would not be the right sort of work product for other organizations.
Test plans can cover a whole project, or be specific to a test level or type of testing.
Test plans are covered in more detail in Chapter 5, Section 5.2.
the sales discount might be to set up four existing customers, one who orders only a
small amount so does not qualify for a discount, and the other three who order enough
to qualify for a discount at each of the three discount levels respectively.
Having the test cases at a high level means that we can use the same test case
across multiple test cycles with different specific or concrete data. For example, one
application may have discounts of 2%, 5% and 10%, and another may have discounts
of 10%, 20% and 25%. Our high-level test case adequately documents the scope of
the test even though the details will be different in each application. The test case is
traceable to and from the test condition that it is derived from.
We have seen that high-level test cases can have advantages, but there are also
some aspects that you need to be aware of with high-level test cases. For example, it
may be difficult to reproduce the test exactly; different testers may use different test
data, so the test case is not exactly repeatable. A high-level test case is not directly
automatable; the tool needs exact instructions and specific data in order to execute
the test. The skill and domain knowledge of the tester is also critical; a junior new-
hire with no domain knowledge may struggle to know what they are supposed to be
doing, unless they are well supported by more experienced testers. These are not
insurmountable problems, but they do need to be considered.
Test design work products may also include test data, the design of the test envi-
ronment and the identification of infrastructure and tools. The extent and way in
which these are documented may vary significantly from project to project or from
one company to another.
When deriving test cases from the test conditions, we may also find defects or
improvements that we could make to the test conditions, so the test conditions them-
selves may be further refined during test design. In our sales discount example, in test
analysis we identified the three discounts as test conditions, but in test design, by look-
ing at the test cases, we identified the ‘no discount’ test condition – a discount of 0%.
Test cases are further discussed in Chapter 4.
need to specify the details of our existing customers, decide on exactly how much
each order will come to and calculate the final amount they would pay, including the
discount. So, for example, Mrs Smith puts in an order for $50.01. Because her order
is over $50, she gets a 10% discount, so she pays $45.01. We calculate the expected
result for the test using a test oracle – the source of what the correct answer should be (in this case simple arithmetic). We would also need to set up Mrs Smith and other customers in the database as part of the preconditions of running the test, and this would be included in the test procedure.

Test oracle (oracle): A source to determine expected results to compare with the actual result of the system under test.
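Here is a minimal Python sketch of such an arithmetic oracle; the 10% discount on orders over $50 follows the Mrs Smith example above, while the rest of the discount table is our assumption.

def expected_payment(order_total: float) -> float:
    """Oracle: computes the correct answer independently of the system under test."""
    discount = 0.10 if order_total > 50.00 else 0.0
    return round(order_total * (1 - discount), 2)

# Mrs Smith's order of $50.01 is over $50, so she should pay $45.01.
print(expected_payment(50.01))  # 45.01
print(expected_payment(50.00))  # 50.0 (no discount at exactly $50)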
In exploratory testing, we may be creating work products for test design and test implementation while doing test execution; traceability may be more difficult in this case.
Test implementation may also create work products that will be used by tools, for
example, test scripts for test execution tools, and sometimes work products are created
by tools, such as a test execution schedule. Service virtualization may also create test
implementation work products.
As in test design, we may further refine test conditions (and high-level test cases)
during test implementation. For example, by deciding on the concrete values for our sales
discount example, we realize that a test condition we omitted was to consider two differ-
ent ways of clients paying between $45.01 and $50.00 (that is, with or without a discount).
This may not be important to include in our tests, but it is an additional test condition.
Test completion work products give closure to the whole of the test process and
should provide ongoing ideas for increasing the effectiveness and efficiency of testing
within the organization in the future.
Throughout the test process, it is important to maintain traceability between the test basis and the test work products. However these links are implemented and whatever they are called, this cross-linking gives many benefits to the test process. We have seen how traceability supports the measurement and reporting of test coverage, enabling coverage and defects to be reported against requirements, which is more meaningful and of more value to stakeholders.
Good traceability also supports the following:
●● analyzing the impact of changes, whether to requirements or to the component
or system
●● making testing auditable, and being able to measure coverage
●● meeting IT governance criteria (where applicable)
●● improving the coherence of test progress reports and test summary reports to
stakeholders, as described above
●● relating the technical aspects of testing to stakeholders in terms that they can
understand
●● providing information to assess product quality, process capability and pro-
ject progress against business goals.
You may find that your test management or requirements management tool pro-
vides support for traceability of work products; if so, make use of that feature. Some
organizations find that they have to build their own management systems in order to
organize test work products in the way that they want and to ensure that they have
bi-directional traceability. However the support is implemented, it is important to
have automated support for traceability – it is not something that can be sustained
without tool support.
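To make bi-directional traceability concrete, here is a minimal sketch of the underlying idea – hypothetical, and far simpler than what a real test management tool provides. Links are recorded in both directions, which is enough to answer both the impact-analysis question ('which tests must be re-run if this requirement changes?') and the coverage question ('which requirements have no tests?'). All identifiers are invented for illustration.

from collections import defaultdict

class TraceabilityMatrix:
    # Sketch of bi-directional traceability between requirements
    # (test basis) and test cases (test work products).
    def __init__(self):
        self.req_to_tests = defaultdict(set)
        self.test_to_reqs = defaultdict(set)

    def link(self, requirement_id, test_id):
        # Record the link in both directions so that questions can be
        # answered from either side.
        self.req_to_tests[requirement_id].add(test_id)
        self.test_to_reqs[test_id].add(requirement_id)

    def tests_for(self, requirement_id):
        # Impact analysis: tests to re-run if this requirement changes.
        return self.req_to_tests[requirement_id]

    def uncovered(self, all_requirements):
        # Coverage reporting: requirements with no linked test cases.
        return [r for r in all_requirements if not self.req_to_tests[r]]

matrix = TraceabilityMatrix()
matrix.link("REQ-DISCOUNT-10PC", "TC-001")  # hypothetical identifiers
matrix.link("REQ-DISCOUNT-10PC", "TC-002")
print(sorted(matrix.tests_for("REQ-DISCOUNT-10PC")))
# ['TC-001', 'TC-002']
print(matrix.uncovered(["REQ-DISCOUNT-10PC", "REQ-DISCOUNT-20PC"]))
# ['REQ-DISCOUNT-20PC']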
1.5 THE PSYCHOLOGY OF TESTING
Testing is not limited to executing software: it also includes static testing of work products such as documents (e.g. to evaluate a requirement specification for consistency, ambiguity and completeness).
Finding defects
While dynamic and static testing are very different types of activities, they have in common their ability to find defects. Static testing finds defects directly, while dynamic testing finds evidence of a defect through a failure of the software to behave as expected. Either way, people carrying out static or dynamic tests must be focused on the possibility – indeed, the high likelihood in many cases – of finding defects. In fact, finding defects is often a primary objective of static and dynamic testing activities.
Identifying defects may unfortunately be perceived by developers as a criticism, not only of the product but also of its author – and in a sense, it is. But finding defects in testing should be constructive criticism, where testers have the developer's best interests in mind. One meaning of the word 'criticism' is 'an examination, interpretation, analysis or judgement about something' – an objective assessment. Other meanings, though, include disapproval by pointing out faults or shortcomings, and even an attack on someone or something. Testing should never be criticism in these latter senses, but even when intended in the first sense, it can be perceived in the other ways. Testers need to be diplomats, along with everything else.
Bias
However, there is another factor at work when we are reporting defects: the author (developer) believes that their code is correct – they obviously did not write it to be intentionally wrong. This confidence in their understanding is in some sense necessary for developers; they cannot proceed without it. But at the same time, this confidence creates confirmation bias, which makes it difficult to accept information that disagrees with your currently held beliefs. Simply put, the author of
a work product has confidence that they have solved the requirements, design, meta-
data or code problem, at least in an acceptable fashion; however, strictly speaking,
that is false confidence. Other biases may also be at work, and it is also human nature
to blame the bearer of bad news (which defects are perceived to be).
Testers have biases of their own and are not always aware of them. Since those biases differ from the developers', that is a benefit, but a lack of awareness of those biases sets up potential conflict.
This reluctance to accept that their work is not perfect is why some people regard
testing as a destructive activity (trying to destroy their work) rather than the construc-
tive activity it is (trying to construct better quality software). Good testing contributes
greatly to product quality and project quality, as we saw in Sections 1.1 and 1.2.
While some developers are aware of their biases when they participate in reviews
and perform unit testing of their own work products, those biases act to impede their
effectiveness at finding their own defects. The mental mistakes that caused them to
create the defects remain in their minds in most cases. When proofreading our own
work, for example, we see what we meant, not what we wrote.
Moreover, many business analysts, system designers, architects, database administrators and developers do not know the review, static analysis and dynamic testing techniques discussed in the Foundation Syllabus and this book. While that situation is gradually changing, much of the self-testing by software work product developers is either not done or is not done as effectively as it could be. The principles and techniques in the Foundation Syllabus and this book are intended to help testers and others to be more effective at finding defects, both their own and those of others.
Attitudes
It is a particular problem when a tester revels in being the bearer of bad news. For
example, one tester made a revealing – and not very flattering – remark during
an interview with one of the authors. When asked what he liked about testing, he
responded, ‘I like to catch the developers’. He went on to explain that, when he found
a defect in someone’s work, he would go and demonstrate the failure on the pro-
grammer’s workstation. He said that he made sure that he found at least one defect in
everyone’s work on a project, and went through this process of ritually humiliating
the programmer with each and every one of his colleagues. When asked why, he
said, ‘I want to prove to everyone that I am their intellectual equal’. This person,
while possessing many of the skills and traits one would want in a tester, had exactly
the wrong personality to be a truly professional tester.
Instead of seeing themselves as their colleagues’ adversaries or social inferiors out
to prove their equality, testers should see themselves as teammates. In their special
role, testers provide essential services in the development organization. They should
ask themselves, ‘Who are the stakeholders in the work that I do as a tester?’ Having
identified these stakeholders, they should ask each stakeholder group, ‘What services
do you want from testing, and how well are we providing them?’
While the specific services are not always defined, it is common that mature and
wise developers know that studying their mistakes and the defects they have intro-
duced is the key to learning how to get better. Further, smart software development
managers understand that finding and fixing defects during testing not only reduces
the level of risk to the quality of the product, it also saves time and money when
compared to finding defects in production.
Communication
Clearly defined objectives and goals for testing, combined with constructive styles
of communication on the part of test professionals, will help to avoid most negative
personal or group dynamics between testers and their colleagues in the development organization. Testing can easily be misperceived as a destructive activity and testers as bearers of bad news, so testers need to communicate facts about defects, progress and risks in an objective and constructive way that counteracts these misperceptions as much as possible. This helps to reduce tensions and build positive relationships with colleagues, supporting the view of testing as a constructive and helpful activity. It is not a requirement of the job, but we have noticed that many consummate testing professionals count the business analysts, system designers, architects, developers and other specialists with whom they work among their close personal friends.
This applies not only to testers but also to test managers, and not just to defects
and failures but to all communication about testing, such as test results, test progress
and risks.
Having good communication skills is a complex topic, well beyond the scope of a
book on fundamental testing techniques. However, we can give you some basics for
good communication with your development colleagues:
●● Remember to think of your colleagues as teammates, not as opponents or adver-
saries. The way you regard people has a profound effect on the way you treat
them. You do not have to think in terms of kinship or achieving world peace,
but you should keep in mind that everyone on the development team has the
common goal of delivering a quality system, and everyone must work together
to accomplish that. Start with collaboration, not battles.
●● Make sure that you focus on and emphasize the value and benefits of testing.
Remind your developer colleagues that defect information provided by testing
can help them to improve their own skills and future work products. Remind
managers that defects found early by testing and fixed as soon as possible will
save time and money and reduce overall product quality risk. Also, be sure to
respond well when developers find problems in your own test work products.
Ask them to review them and thank them for their findings (just as you would
like to be thanked for finding problems in their work).
●● Recognize that your colleagues have pride in their work, just as you do, and as such you owe them tactful communication about defects you have found. It is
not really any harder to communicate your findings, especially the potentially
embarrassing findings, in a neutral, fact-focused way. In fact, you will find that
if you avoid criticizing people and their work products, but instead keep your
written and verbal communications objective and factual, you will also avoid a
lot of unnecessary conflict and drama with your colleagues.
●● Before you communicate these potentially embarrassing findings, mentally put
yourself in the position of the person who created the work product. How are
they going to feel about this information? How might they react? What can you
do to help them get the essential message that they need to receive without pro-
voking a negative emotional reaction from them?
●● Keep in mind the psychological element of cognitive dissonance. Cogni-
tive dissonance is a defect – or perhaps a feature – in the human brain that
makes it difficult to process unexpected information, especially bad news.
So, while you might have been clear in what you said or wrote, the person on
the receiving end might not have clearly understood. Cognitive dissonance is
a two-way street, too, and it is quite possible that you are misunderstanding
someone’s reaction to your findings. So, before assuming the worst about
someone and their motivations, confirm that the other person has understood
what you have said and vice versa.
Beyond communication skills, good testers share certain characteristics:
●● Curiosity. Good testers are curious about why systems behave the way they do and how systems are built. When they see unexpected behaviour, they have a natural urge to explore further, to isolate the failure, to look for more generalized problems and to gain deeper understanding.
●● Professional pessimism. Good testers expect to find defects and failures. They assume that every work product contains defects and set out to find them.
A good tester has the skills, the training, the certification and the mindset of a professional tester, and of these four, the most important – and perhaps the most elusive – is the mindset.
The tester’s mindset is to think about what could go wrong and what is missing.
The tester looks at a statement in a requirement or user story and asks, ‘What if it
isn’t? What haven’t they thought of here? What could go wrong?’ That mindset is
quite different from the mindset that a business analyst, system designer, architect,
database administrator or developer must bring to creating the work products involved
in developing software. While the testers (or reviewers) must assume that the work
product under review or test is defective in some way – and it is their job to find those
defects – the people developing that work product must have confidence that they
understand how to do so properly. Looking at the same statement in a requirement or user story, a developer asks, 'How do I build this?', not 'What could go wrong?'
Having a tester or group of testers who are organizationally separate from devel-
opment, either as individuals or as an independent test team, can provide significant
benefits, such as increased defect-detection percentage. A tester’s mindset is a ‘dif-
ferent pair of eyes’ and independent testers can see things that developers do not see
(because of confirmation bias discussed above). This is especially important for large,
complex or safety-critical systems.
However, independence from the developers does not mean an adversarial relation-
ship with them. In fact, such a relationship is toxic, often fatally so, to a test team’s
effectiveness.
The softer side of software testing is often the harder side to master. A tester may have adequate or even excellent technical skills and certifications, but if they do not have adequate interpersonal and communication skills, they will not be an effective
tester. Such soft skills can be improved with training and practice. The best testers
continuously strive to attain a more professional mindset, and it is a lifelong journey.
CHAPTER REVIEW
Let’s review what you have learned in this chapter.
From Section 1.1, you should now know what testing is. You should be able to
remember the typical objectives of testing. You should know the difference between
testing and debugging. You should know the Glossary keyword terms debugging,
test object, test objective, testing, validation and verification.
From Section 1.2, you should now be able to explain why testing is necessary and
support that explanation with examples. You should be able to explain the difference
between testing and quality assurance and how they work together to improve quality.
You should be able to distinguish between an error (made by a person), a defect (in
a work product) and a failure (where the component or system does not perform as
expected). You should know the difference between the root cause of a defect and
the effects of a defect or failure. You should know the Glossary terms defect, error,
failure, quality, quality assurance and root cause.
You should be able to explain the seven principles of testing, discussed in
Section 1.3.
From Section 1.4, you should now recognize a test process. You should be able
to recall the main testing activities of test planning, test monitoring and control,
test analysis, test design, test implementation, test execution and test completion.
You should be familiar with the work products produced by each test activity. You
should know the Glossary terms coverage, test analysis, test basis, test case, test
completion, test condition, test control, test data, test design, test execution, test
execution schedule, test implementation, test monitoring, test oracle, test plan-
ning, test procedure, test suite, testware and traceability.
From Section 1.5, you should now be able to explain the psychological factors
that influence the success of testing. You should be able to explain and contrast the
mindsets of testers and developers, and why these differences can lead to problems.
Question 2 Consider the following definitions and match the term with the definition.
1. A reason or purpose for designing and executing a test.
2. The component or system to be tested.
3. Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
a. 1) test object, 2) test objective, 3) validation.
b. 1) test objective, 2) test object, 3) validation.
c. 1) validation, 2) test basis, 3) verification.
d. 1) test objective, 2) test object, 3) verification.

Question 3 Which statement about quality assurance (QA) is true?
a. QA and testing are the same.
b. QA includes both testing and root cause analysis.
c. Testing is quality control, not QA.
d. QA does not apply to testing.

Question 4 It is important to ensure that test design starts during the requirements definition. Which of the following test objectives supports this?
a. Preventing defects in the system.
b. Finding defects through dynamic testing.
c. Gaining confidence in the system.
d. Finishing the project on time.

Question 5 … that have occurred have generally been low-impact, which of the following testing principles is most likely to help the test manager explain to these managers and executives why some defects are likely to be missed?
a. Exhaustive testing is impossible.
b. Defect clustering.
c. Pesticide paradox.
d. Absence-of-errors fallacy.

Question 6 What are the benefits of traceability between the test basis and test work products?
a. Traceability means that test basis documents and test work products do not need to be reviewed.
b. Traceability ensures that test work products are limited in number to save time in producing them.
c. Traceability enables test progress and defects to be reported with reference to requirements, which is more understandable to stakeholders.
d. Traceability enables developers to produce code that is easier to test.

Question 7 Which of the following is most important to promote and maintain good relationships between testers and developers?
a. Understanding what managers value about testing.
b. Explaining test results in a neutral fashion.
c. Identifying potential customer work-arounds for bugs.
d. Promoting better quality software whenever possible.

Question 8 Given the following test work products, identify the major activity in a test process that produces it.
1. Test execution schedule.
2. Test cases.
3. Test progress reports.
4. Defect reports.
a. 1) Test planning, 2) Test design, 3) Test execution, 4) Test implementation.
b. 1) Test execution, 2) Test analysis, 3) Test completion, 4) Test execution.
c. 1) Test control, 2) Test analysis, 3) Test monitoring, 4) Test implementation.
d. 1) Test implementation, 2) Test design, 3) Test monitoring, 4) Test execution.