Bridging The Communication Gap
"I wish that the book had been available a few years ago when the company I was at (and myself) were trying out agile. Could have been a lot easier and more successful if we'd read it."
Philip Kirkham
"If you've tried agile acceptance testing you'll know that as well as being really exciting it's also incredibly difficult. Luckily we now have a book that helps guide us through the many tricky choices that we face: practical and pragmatic advice that even the most experienced agile developer should be aware of."
Colin Jack, Senior Software Developer, FNZ
"As a tester, I welcome any opportunity to increase shared understanding of requirements and expectations - our team will be relying on this book to guide us as we begin our journey with agile acceptance testing."
Marisa Seal
"You would be at least six months ahead of the game in Agile QA by just reading Gojko's book."
Gennady Kaganer, QA Manager at Standard & Poor's
"Gojko applies his experience to the practice of producing software that is useful to end users. This is an important work in extending the test-driven specification of software beyond individual units and into the sum of the parts."
Bob Clancy, http://www.agiletester.net
"Bridging the Communication Gap will not only bring you up to date with the latest thinking about agile acceptance testing, but also guide you as you put the ideas into practice. This book is packed with insights from Adzic's experience in the field."
David Peterson, creator of the Concordion acceptance-testing
framework
"I'm convinced that the practice of agile acceptance testing, done properly, can make a dramatic improvement to both the communication with the customer and the quality of the final product. This book is a solid introduction to the subject, and represents the first attempt I've seen to survey the practice as a whole without focusing on a single tool or technology."
Geoff Bache
ISBN: 978-0-9556836-1-9
Table of Contents
About the PDF edition
Acknowledgements
About the author
Introduction
    Why should you care?
    Who is this book for?
    What will you get out of this book?
    What's inside?
    Giving credit where credit is due
    How this book is organised
I. The Communication Gap
    1. The next bottleneck in software projects
        The telephone game
        Imperative requirements are very easy to misunderstand
        Are obvious things really obvious?
        A small misunderstanding can cost a lot of money
        Fulfilling specifications does not guarantee success
        Requirements are often already a solution
        Cognitive diversity is very important
        Breaking The Spirit of Kansas
    2. Finding ways to communicate better
        Challenging requirements
        We really communicate with examples
        Working together, we find better solutions
        Communicating intent
        Agile acceptance testing in a nutshell
        So what does testing have to do with this?
        Better names
II. Building and Maintaining a Shared Understanding
    3. Specifying with examples
        How do you brush your teeth?
        A practical example
Acknowledgements
This book is a result of a small independent publishing effort, and as
such would not be possible without the help of many people.
I'd like to thank Antony Marcano, Bob Clancy, Colin Jack, David
Peterson, David Vydra, Eric Lefevre-Ardant, Gennady Kaganer, Geoff
Bache, Jennitta Andrea, Lisa Crispin, Marisa Seal, Mark Needham,
Melissa Tan, Mike Scott, Phil Kirkham and Rick Mugridge for all the
excellent suggestions, helping me keep this book focused and
providing insight into their views and experiences. Without you, this
book just would not be possible.
Marjory Bisset from Pearl Words again did a great job of copyediting
this book and ensuring that readers have a much more enjoyable
experience with it.
Finally, I'd like to thank Boris Marcetic from Popular for designing
the covers.
Introduction
I am getting more and more convinced every day that communication
is, in fact, what makes or breaks software projects. Programming
tools, practices and methods are definitely important, but if the
communication fails then the rest is just painting the corpse. Complex
projects simply have no chance of success without effective communication.
This is a book about improving communication between customers,
business analysts, developers and testers on software projects, especially by using specification by example and agile acceptance testing.
Although these two practices are not yet popular, I consider them key emerging software development practices, because they can significantly improve the chances of success of a software project. (At the same time, agile acceptance testing is one of the worst-named practices ever. For the time being, just forget that it has the word "testing" in the name.) Agile acceptance testing and specification by example essentially help us to close the communication gap between different participants in a software project, ensure that they speak the same language and build a truly shared and consistent understanding of the domain. This leads to better specifications: the participants flush out incorrect assumptions and discover functional gaps before development starts, and build software that is genuinely fit for purpose.
Ward Cunningham and Ken Auer used the basic ideas behind agile
acceptance testing to nail down what their users wanted in 1999.1
Almost a decade later, the practice is still used only by a small group
of early adopters. However, it has matured considerably. Judging
from recent conferences, magazine articles and blog posts, it seems
to me that interest is growing and the time is right for a wider group
to learn about and adopt it. You probably agree, which is why you
picked up this book.
misconceptions and address fears and issues that people often have
about agile acceptance testing.
It is also my intention with this book to challenge some established
ways of thinking in the industry. Agile acceptance testing breaks down
traditional boundaries around testing, requirements and specification
processes in a way that significantly improves communication on a
project. Specification by example is an approach to the writing of
specifications and requirements radically different from the established industry process. I will explain this in a lot more detail
throughout the book, but please note that if you come from a more
traditional software process background, you may need to put aside
the stereotypes that you are familiar with in order to grasp the ideas
and gain the full benefits.
This book will help you to discover how agile acceptance testing affects your work, whether you are a programmer, a business analyst or a tester.
It will help you gain an understanding of this technique and offer
ideas on how to convince other team members and stakeholders to
use it. I hope that this book takes into account the various perspectives
of these different roles. It is intentionally not too technical or suited
only for programmers, because agile acceptance testing is not a
programming technique: it is a communication technique that brings
people involved in a software project closer. I consider myself
primarily a programmer and my personal biases and experiences will
surely show. I have, however, found a much broader perspective on
agile acceptance testing as a result of working with different teams,
consulting, mentoring and giving public talks. Discussing fears such
as losing jobs with testers over coffee, participating in requirements
gathering, working as a product owner and even as a business domain
expert on several projects has hopefully given me the understanding
necessary to represent the needs and views of the other roles.
What's inside?
I am very proud of the fact that I have helped several companies adopt
agile acceptance testing and specification by example and deliver
successful software over the last few years. This book is a summary
of my experiences, learnings and ideas from this journey. It is based
on projects that are now happily running in production and a series
of public and private seminars, talks and workshops that I organised
during this period.
This book contains a description of the process that I use today to
bridge the communication gap between business people and software
implementation teams. More importantly, it describes ideas, principles
and practices behind the process and their effects on people participating in software projects.
This book is not a detailed user manual for any acceptance testing
tool. Although I will describe briefly some of the most popular tools
in Chapter 10, I intentionally want to keep this book relatively non-technical and absolutely not tool- or technology-specific. Often I hear from managers that their teams looked into this or that tool and that they did not like it one bit, so they rejected the whole idea of agile
acceptance testing. One of the main messages I want to convey with
this book is that the biggest benefit of agile acceptance testing is
improved communication and mutual understanding, not test automation. There is too much focus on tools today and in my opinion
this is quite wrong.
While you are reading this book, it's important to focus on the ideas
and principles. The process and the tools described in the book are
there just to assist. If you do not like a particular tool, find a better
one or automate things yourself in a way that makes sense for your
team. If you do not like some part of the process, adjust it or replace
it with something that makes more sense for your team. Once you
understand the underlying ideas, principles and practices, applying
them in your environment should be easy. Use the process described
in this book just as a guide, not as a prescription.
This book also does not have all the answers about agile acceptance
testing and specification by example. These two practices are currently
gaining a lot of momentum and generating a lot of new ideas. Because
there is very little published material on them at the moment, you
will see many more references to conference talks, blog posts and
online video clips than to books or articles. Apart from the practices
as I use them today, this book also describes some promising ideas
that I want to try out in the near future. It even contains some interesting views of other people that I do not agree with completely. Some
of them might prove to be interesting in the future with better tools
or be applicable in a different environment. In Chapter 11 I speculate
how the tools for agile acceptance testing might evolve. Blogs,
mailing lists and sites listed in Appendix A will help you continue the
journey and keep up-to-date with new advances.
practices do not really solve the problem but only provide workarounds. Then I introduce agile acceptance testing as the solution to
these problems.
In Part II, I introduce the techniques and principles of agile acceptance
testing and explain how they work together to help us facilitate
communication and build better software.
In Part III, I talk about implementing agile acceptance testing in
organisations. I explain how this practice fits into the wider software
development process and how to start using it in your organisation.
I also briefly describe current popular tools for agile acceptance testing
and discuss what we can expect from future tools. This part also
includes a chapter on user stories, another agile technique that makes
the implementation of agile acceptance testing much easier.
In Part IV, I deal with the human side of this practice, explaining how
it affects our jobs and the way we work. I analyse the effects on business analysts, testers and developers. The chapter on the effects on
business analysts is also applicable to customers or other business
people involved in software projects. In this part we also revisit the
benefits listed in the section Why should you care? on page xvi and see how the principles and practices described in this book deliver them.
Chapter 1
The next bottleneck in software projects
A nice thing about software development the agile way is that we can
easily go back and adjust the system, but making changes is not as
cheap as most programmers would like it to be. It costs a lot of money
and time. For those of you who would now suggest that using agile
practices makes this cheap, don't forget that you are only talking about
the technical cost. Agile practices help us cope with clients changing
their minds for whatever reason, so that in a sense we are hitting a
moving target. There is a huge difference between this and missing
a still target and then going back to have another go at hitting it. Even
if the requirements don't change, there is still a risk that the project
can miss a target if we don't solve the communication problems.
Although agile practices help a lot with reducing the risk of failure,
they should not and cannot be used to cover up for a project that
simply does not deliver what it was supposed to. Disappointing a
client is never good, agile or not agile. Gerald Weinberg and Donald
Gause suggest that the difference between disappointment and delight
is not a matter of delivering software, but how well the delivery
matches what clients expected.1 This was true twenty years ago when
they wrote Exploring Requirements[3], and it still holds true today.
Matching what clients expect is still a problem, mostly because of
communication issues. Individual effects of small communication
problems are very hard to detect, so they do not become apparent
instantly. Such problems are reflected in lots of small things not
working as expected or implied features that simply do not get
delivered. Individual issues may have small effects on the project, but
their cumulative effect is huge. The reason why changes in development practices in the last ten years have not solved this problem is
that most of these changes were driven by developers and I do not
believe that this particular issue is a development problem at all. It is
a communication problem involving all participants in the implementation team. This is why there is no development practice that can
solve the problem, whether or not it demands the involvement of
customer proxies and business people.
We all need to agree on what the target is, even if it moves, and make
sure that we all have the same understanding. And by "we" I mean all
participants in the process from stakeholders to domain experts,
business analysts, testers and developers. The path to success is to
ensure that these small communication problems get rooted out
instead of accumulating, so that the message gets delivered correctly
and completely.
http://gojko.net/2008/08/29/how-many-points-are-there-in-a-five-point-star/
the same place. The two votes for fifteen are still a mystery to me.
Bob Clancy suggested that this might be a classic example of the
original problem being restated by the individual and then the restated
problem solved: if the star is drawn with lines crossing in the inside
of the star and then a point counted wherever two lines intersect,
people might count the points in the inner pentagon twice (although
they are the same as the five inner points).
Some readers are now probably asking themselves what the right answer is. In general, there is no right answer. Or more precisely, all of
these answers are correct, depending on what you consider a point.
In software projects, on the other hand, there is a single correct answer
in similar situations: the one that the business people thought of. And
for the project to turn out just the way that customers want, this
answer has to come up in the heads of developers and testers as well.
This is quite a lot of mental alignment. Forty cards are not a sample
large enough for statistical relevance, but this experiment has
confirmed that even a simple thing such as a familiar image and a
straightforward question can be interpreted in many different ways.
Imagine that you are part of a software project that calculates prices
for gold-plating various metal pieces. This may be a contrived
example, but let's keep it simple for now. One of the requirements,
in its classical imperative form, states that the system shall let the
users enter a diameter and the number of points of a star-shaped
metal piece and calculate the price of gold-plating based on materials
(total piece surface area using prices in appendix A) and complexity
(number of edges using prices in appendix B). We have the prices
and formulas precisely defined in imaginary appendices and the
business analyst has spent a lot of time getting these absolutely clear
with the customers, because this is where the money is. Developers
should be able to work out the rest easily from the number of points
in the star and the diameter and testers should be able to compare
the test results to expected results easily. After all, everything is
specified precisely and we only have to do a bit of elementary
geometry. Right? Well, not exactly.
When requirements finally come to development, things become
much more precise because someone actually has to write the code.
Bertrand Russell wrote in The Philosophy of Logical Atomism[5] that "Everything is vague to a degree you do not realise till you have tried to make it precise." On most projects even today, writing code is the
first time that we try to make the solution really precise. At this point,
a developer may have the same understanding of a "point" as the business person making the request, but I would not bet on it. Work out the probabilities from the experiment, and you'll get about a 39% chance for this to happen. A tester will need to verify the result, which asks for another mind alignment and brings down the probability to 20%. Again, I don't claim that this number is statistically significant and describes a general success ratio, but whatever the precise figures, the probability falls exponentially with the number of participants that need to have their minds aligned.
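To make the exponential claim concrete (the figure p below is purely illustrative, not something measured in the card experiment): if each person who has to share the interpretation lands on the same reading independently with probability p, then all n of them align with probability

    P(all n aligned) = p^n

so, for example, p = 0.6 gives 0.36 for two people and roughly 0.22 for three. Every additional mind that has to be aligned multiplies the odds of a genuinely shared understanding by a factor smaller than one.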
Problems like this often don't come to light before development because they are subtle and hidden behind things perceived as more important, such as the price of gold-plating per square inch. We think that we don't have to be precise
about things that everyone understands during analysis, because it's
common sense how to draw a star with a number of points. The rules
for the surface area should be clear from basic geometry. Let's
disregard all the weird answers and consider just the fact that some
people in the experiment counted only outer points, and some counted
both outer and inner points. How would you test whether the system
works correctly if 12 was given as the number of points? Is the correct
star the one on the left or one on the right in Figure 1.3?
Things like this, where we feel familiar with the concept and implicitly
think that others have the same understanding of it as we do, are one
of the core causes of missing the target in software projects.
to a penny, then convert the second cent into another penny and end
up with twice the money you started with. Yes, it is just one penny
more, but the guy who worked this out wrote a script to do the dirty
job and apparently took more than ten thousand pounds before the
fraud was discovered.
When the news about this broke, business people argued that the
amount should have been rounded down and that the developers
should have known this, but the developers argued that they received
a request to round to two decimals without any specifics. The requirement to "round to two decimals" sounds obvious and unambiguous, just as the question about the number of points in the star did.
In any case, the blame game does not solve the problem. We need to
prevent problems like this by ensuring that developers and business
analysts have the same understanding of rounding to two decimals.
The question needs to be raised before the development starts, not
after it is in production, when someone finds a hole in the system.
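As a purely illustrative sketch of how two faithful readings of "round to two decimals" can diverge, consider the fragment below; the class name, the amounts and the exchange rate are made up for the example and are not taken from the actual site.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class RoundingAmbiguity {
        public static void main(String[] args) {
            // Hypothetical conversion: one cent exchanged at a made-up rate of 0.5526
            BigDecimal cent = new BigDecimal("0.01");
            BigDecimal rate = new BigDecimal("0.5526");
            BigDecimal exact = cent.multiply(rate); // 0.005526

            // Reading 1: "round to two decimals" as ordinary arithmetic rounding
            System.out.println(exact.setScale(2, RoundingMode.HALF_UP)); // prints 0.01 - a free penny
            // Reading 2: "round to two decimals" as rounding down
            System.out.println(exact.setScale(2, RoundingMode.DOWN));    // prints 0.00 - what the business meant
        }
    }

Both readings satisfy the written requirement word for word; only a conversation with the business people can tell us which one was intended.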
http://testdriveninformation.blogspot.com/2008/08/material-of-tdr-workshop-at-agile2008.html
of the story, if you don't know what a product owner is, think of it as
a business analyst.) There were two sets of construction bricks and
dominoes, placed on two halves of a large table with a large white
paper screen blocking the view in between, as shown in Figure 1.4.
A person sitting at one side of the table could not see anything on the
other half. The aim of the workshop was to demonstrate how the
requirements and testing processes fail when communication is
impeded, even with something as simple as a domino construction.
At the start of the workshop, the product owner, the developer and
the tester left the room. Mantel put together a simple construction
on one half of the table with the customer, aligning the dominoes so
that they all fell when a small lead ball was rolled down one of the
bricks and bounced against another brick and then the first domino. There
were lots of other bricks in the construction, but the primary business
goal was to make all the dominoes fall when the ball was rolled.
The product owner then walked into the room, looked at the
construction and discussed it with the customer. Then the developer
and the tester walked into the room. The developer was placed at the
other half of the construction table, unable to see the original
construction. The tester was placed on a different table, facing the
construction table backwards and unable to see what was going on.
This was intended to simulate the situation where a product owner
acts as a customer representative and testers have no influence over
development.
The product owner then started explaining to the developer how to
replicate the construction. The developer started building with the
bricks on his part of the table and the tester listened in to the
conversation. The developer was allowed to ask any questions and
the product owner was allowed to explain the construction in any
way he saw fit, but the tester was not allowed to ask any questions
during the construction. She just listened in and took notes. The
product owner led the developer by explaining the shape and relative
positions of different building blocks. They discussed at length
The interesting thing was that the product owner never asked the
customer about the goal of the construction, so he was not able to
communicate anything in this respect. This was not forbidden by the
rules of the game, it just never happened. When the exercise ended,
the customer said that the ordering of dominoes, alignment of bricks
by colour and all the extra bricks on the table were not really
important and that he would have accepted the construction if these
were different providing the dominoes fell after the ball was rolled.
rules and user interfaces and not on the infrastructure to support it.
Because business people lack deeper technical understanding of
implementation details, their proposed solution is sometimes much
more complicated than it needs to be. It is not uncommon for a
technical person to suggest a much simpler solution once they know
what the problem is.
In 2005, I was involved in building a J2EE-based system for Bluetooth
content distribution. It was initially used for pushing small files and
text messages to mobile phones. After a few months, the client wanted
to add on-demand content pull and distribute large animations and
video files. Enabling on-demand content pull required significant
system changes, not to mention the fact that a Bluetooth network is
not suitable for sending five-megabyte movie files to hundreds of
mobile phones at frequent intervals. As always, the client wanted it
done as soon as possible, ideally in less than two months. There was
no way that such functionality could be implemented and work
properly in time. Even if it was possible, such a hack would make
future support a nightmare.
We asked the clients to identify the problems they wanted to solve by
pulling films to the phones. It turned out that they had an opportunity
to sell the system to an art fair, where the visitors would use mobile
phones to view a video tour. From this perspective, the video tour
had absolutely nothing in common with the original solution, except
the idea of software running on mobile phones. We decided to build
a small stand-alone application for the mobile devices, which would
read all the files from an MMC memory card and would not
communicate with any servers at all. This was done in time for the
art fair and did not break the architecture of the server.
This example demonstrates how important it is to get technical people
involved in the specifications process and to share information about
the problems that we are trying to solve, not just the proposed solutions. Traditional ways of defining requirements and specifications
only harness the knowledge of a selected few individuals, and don't
really use all the brains on the team.
http://en.wikipedia.org/wiki/Groupthink
the picture more, moving around to measure the lines again. About
70% of the subjects changed their answer at least once and one third
of the subjects went along with the group at least half the time. Asch
repeated the experiment with at least one of the collaborators selecting
the correct answer. This immediately encouraged the experiment
subjects to say what they really thought and the rate of conformity
plummeted. This experiment demonstrates the effects of peer pressure and how people are generally reluctant to state their opinion if
it is contrary to all the other opinions in the room. It also demonstrates that the situation changes dramatically even when a single
different opinion exists.
Surowiecki also states that even small groups of five to ten people can
exhibit what he called the wisdom of the crowds, reaching a state
where the group together is smarter than any individual in the group.
Cognitive diversity and independence of opinion are key factors in
achieving this. People should think about the problem from different
perspectives and use different approaches and heuristics. They should
also be free to offer their own judgments and knowledge rather than
just repeating what other people put forward. By getting different
people involved in the specification process, we can get the benefits
of this effect and produce better specifications.
Stuff to remember
• The traditional specifications and requirements processes now established in the software industry are inadequate and essentially flawed.
• Specifications do not contain enough information for effective development or testing, they are prone to
Chapter 2
Finding ways to communicate better
In order to bridge the communication gap, we need to work on
bringing business people and implementation teams together rather
than separating them with formal processes and intermediaries. Ron
Jeffries said, during his session on the natural laws of software
development at Agile 2008, that the most important information in
a requirements document is not the requirements, but the phone
number of the person who wrote it. Instead of handing down
incomplete abstract requirements, we should focus on facilitating the
flow of information and better communication between all team
members. Then people can work out for themselves whether the
information is complete and correct and ensure that they understand
each other.
Challenging requirements
Since requirements, however clearly expressed, may contain gaps and
inconsistencies, how do we fight against this problem before development rather than discovering it later? How do we ensure that
requirements, regardless of their form and whether or not they are
built up incrementally before every iteration, are complete and
correct? Donald Gause and Gerald Weinberg wrote in Exploring
Requirements[3] that the most effective way of checking requirements
is to use test cases very much like those for testing a completed system.
They suggested using a black-box testing approach during the
requirements phase because the design solution at this point still does
not exist, making it the perfect black box. This idea might sound
strange at first and it definitely takes a while to grasp. In essence, the
idea is to work out how a system would be tested as a way to check
This close link between requirements, tests and examples signals that
they all effectively deal with related concepts. The problem is that
every time examples show up on the timeline of a software project,
people have to re-invent them. Customers and business analysts use
one set of examples to discuss the system requirements. Developers
come up with their own examples to talk to business analysts. Testers
come up with their own examples to write test scripts. Because of the
effects illustrated by the telephone game, these examples might not
describe the same things (and they often do not). Examples that
developers invent are based on their understanding. Test scripts are
derived from what testers think about the system.
Going back to the equivalence hypothesis, tests and requirements can
be the same. Requirements are often driven from examples, and
examples also end up as tests. With enough examples, we can build
a full description of the future system. We can then discuss these
examples to make sure that everyone understands them correctly. By
formalising examples we can get rigorous requirements for the system
and a good set of tests. If we use the same examples throughout the
project, from discussions with customers and domain experts to
testing, then developers or testers do not have to come up with their
own examples in isolation. By consistently using the same set of
examples we can eliminate the effects of the telephone game.
http://www.infoq.com/presentations/Fowler-North-Crevasse-of-Doom
Communicating intent
The communication of goals was at the core of Prussian military
tactics as a response to the dangers posed by Napoleon's invincible
army. The Prussian leaders figured out that they did not have a single
person capable of defeating Napoleon's genius, so they focused on
allowing the individual commanders and their troops to act collectively to better effect. They made sure they told the commanders why
something needed to be done and they put this information into the
perspective of overall goals, rather than just passing down a list of
imperative commands. The resulting military doctrine was called
Auftragstaktik, or mission-type tactics, and it was key to the great
successes of Prussian and later German armies. Today it survives as
Mission Command in the US Army.
A key lesson to take from this and apply in software development is
that understanding business reasons behind technical requests is
crucial for building a shared understanding of the domain. Examples
in the previous section and the section Requirements are often already
a solution on page 18 demonstrate this. Original intent is one of the
most important things that a customer or a business analyst should
pass on to the developers and testers. At the same time, in my experience, passing on this information is one of the most underestimated
and neglected practices in software development. This is just as much
examples to describe all the edge cases. When all the examples are
implemented so that the system works as they describe, the job is
done. To developers this might sound similar to unit testing, because
it essentially tries to do the same thing on a higher level (but acceptance tests are not a replacement for unit tests). I talk about this step
in more detail in Chapter 6.
And repeat
These steps are continuously repeated throughout the project to
clarify, specify, implement and verify small parts of the project iteratively and incrementally. In Chapter 8 I explain how this fits into the
overall development process in more detail.
could apply unit testing ideas by developing code to satisfy tests and
running these tests to verify that the code is on target, then repeating
the process until all tests go green. This is where the name agile
acceptance testing comes from. This practice started by expanding
the unit testing ideas to business rules, effectively specifying the
acceptance criteria in a form that could be executed as tests on the
code.
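As a minimal sketch of what "acceptance criteria in a form that could be executed as tests" can mean in practice, here is a hypothetical business rule and a handful of agreed examples checked against it. The rule, the class names and the amounts are placeholders invented for illustration; this is not tied to any particular project or to any of the tools mentioned in this book.

    public class FreeDeliveryExamples {

        // A stand-in implementation of the business rule under discussion
        static boolean freeDelivery(boolean vipCustomer, double cartValue) {
            return vipCustomer && cartValue >= 50.00;
        }

        public static void main(String[] args) {
            // Examples agreed with the business people, written down before the code
            check(freeDelivery(true, 60.00),   "a VIP customer with a $60 cart gets free delivery");
            check(!freeDelivery(false, 60.00), "a regular customer with a $60 cart pays for delivery");
            check(!freeDelivery(true, 30.00),  "a VIP customer with only $30 in the cart pays for delivery");
        }

        static void check(boolean passed, String example) {
            System.out.println((passed ? "PASS: " : "FAIL: ") + example);
        }
    }

Run against the real code instead of the stand-in, the same examples double as a specification before development and as an automated check afterwards.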
Going back to the conclusion of the previous chapter, the problem is
essentially a communication issue, not a technical one. The biggest
obstacle for any such effort is communication, so easy communication
and collaboration with business people have to be part of any viable
solution. Out of the effort to solve the problem have come tools like
FIT, FitNesse and so on. The ideas applied in these tools solved the
communication problem so nicely that the practice of agile acceptance
testing evolved into a great way to build a shared understanding of
the domain and enable all project participants to speak the same
language. Testing, in any possible meaning of this word, becomes
relatively unimportant because the greatest benefit is improved
communication.
Today, the name of this practice itself has become a major obstacle
to its adoption. The chief business analyst of a company I recently
worked with just rejected getting involved in the practice, with the explanation "I do not write tests". The word "testing" unfortunately bears a negative connotation in the software development world. In
all the companies I worked for, testers were among the least well-paid
employees, right down there with the support engineers. Starting a
discussion on testing somehow seems to give business people the
green light to tune out. It is like a signal that the interesting part of
the meeting is over and that they can start playing Sudoku or thinking
about more important things. After all, testing is not something that
they do.
The practice of agile acceptance testing, especially as a way to improve
communication between team members and build a shared understanding of the domain, relies on the participation of business people.
They are the ones that need to pass their domain knowledge on to
programmers and testers. They are also the ones that need to make
decisions about edge cases and answer tough questions about business
rules. So they very much have to do tests.
To add a further complication, there is also user acceptance testing,
which sounds very similar to agile acceptance testing. User acceptance
testing (UAT) is a phase in a software project where clients sign off
a deliverable and accept it as complete. It can involve testing by end-users, verification from stakeholders that they are happy with the
product and a whole range of other testing activities. Although you
can verify the software during UAT using a successful run through
an acceptance testing suite as one of the criteria, agile acceptance
testing has very little to do with user acceptance testing. In fact, they
are on completely opposite sides of a software project. User acceptance
testing happens after development and it is often the final confirmation before money changes hands. It is performed by the clients or
third-party agents. Agile acceptance testing happens before and during
development and it is performed by the implementation team.
Some people call this practice acceptance test-driven development to
signify that acceptance tests are used to drive the development, not
just to verify the deliveries at the end. Others try to reduce the
confusion between user acceptance testing and agile acceptance testing
by avoiding the use of the word acceptance and calling the practice
functional test-driven development. This also signals the difference
between code-oriented unit tests, which are traditionally linked with TDD, and functional (acceptance) tests. Three or four years ago, the
name storytest-driven development was used to describe this practice,
emphasising the fact that acceptance tests are related to user stories
and not code, but it did not really catch on. Another variant is test-driven requirements, explaining that acceptance tests actually deal
with requirements more than with development. Brian Marick calls
these tests business-facing to emphasise that business people should
be concerned with them.4
http://www.exampler.com/old-blog/2003/08/21/
Better names
Dan North suggests using the word "behaviour" instead of "test",5 as a way to clear up a lot of misunderstandings. Instead of test-driven
development, he talks about behaviour-driven development to avoid
the negative connotation of testing. This trick has solved the problem
of keeping business people awake quite a few times for me as well.
Behaviour-driven development (BDD) is just a variant of agile
acceptance testing, in my opinion. Some people will disagree, pointing
out the differences in tools and format of test scripts. For me, the
underlying principles are the same and BDD tools are just another
way to automate tests. BDD also promotes a specific approach to
implementation,6 but this is not really important for the topic of this
book. Again, I consider tools and tests to be of less importance than
communication and building a shared understanding.
A name that has become more popular recently is example-driven
development, with tests being called specification by example. This
reflects the fact that we are using concrete real-world examples to
produce the specifications instead of abstract requirements. I also
like executable specifications as a name instead of acceptance tests,
because it truly describes the nature of what we are building. Agile
acceptance tests are specifications for development in a form that can
be verified by executing them directly against the code.
People have tried to rename agile acceptance testing several times,
but this name has somehow stuck. Most still use this name and this
is why I decided to keep it here. I mention all the alternative names
here because they are interesting attempts to remove the word test
from the vocabulary to reduce the ambiguity and misunderstanding
that come from it. If your business people don't want to participate
because testing is beneath them, try to present the same thing but
with a different name.
5. http://dannorth.net/introducing-bdd
6. See http://behaviour-driven.org/ for more information
In any case, I want to point out that there are a lot of different names
and ideas emerging at the moment, but they are all effectively different
versions of the same underlying practice. This book is about the
underlying values, practices and principles that all these names and
ideas share.
Stuff to remember
• Realistic examples are a great way to communicate, and we use them often without even thinking about it.
• Requirements, tests and examples all talk about the same thing: how a system will behave once it is delivered.
• We can use examples consistently throughout the project to avoid the effects of the telephone game.
• Building a shared understanding of the problem is one of the key practices in software development.
• Business intent is one of the most important things that a customer or a business analyst should pass on to the developers and testers.
• Cross-functional teams create much better specifications and requirements than business people in isolation.
• Agile acceptance testing uses these ideas to solve communication problems on software projects.
• The name agile acceptance testing is misleading but has been generally adopted.
• Agile acceptance testing is very different from user acceptance testing, and in general it is not about testing at all. It is about improving communication and building a shared understanding of the domain.
• Implementing agile acceptance testing can be a real organisational challenge.
• Agile acceptance testing in a nutshell revolves around these five principles:
Chapter 3
Specifying with examples
The first stage of agile acceptance testing is to make sure that we all
know what we are talking about and more importantly to agree on
this. Before a phase of development, be it an iteration, a mini-project
or simply a chunk of software whose time has come, we specify what
we expect out of it. We specify this in the form of real world examples,
not with abstract requirements. The examples demonstrate how the
system should act and how it should help users do their jobs. These
examples are created by the whole implementation team, not by a
single domain expert as in the traditional model. We use the examples
to discuss the domain and make sure that there are no misunderstandings.
http://www.solutionsiq.com/agile2008/agile-2008-domain.php
look similar to the picture in Figure 3.1 to someone who has never
seen a toothbrush before.
If we don't share the same understanding of the business domain,
even the simplest of explanations can be ambiguous and misinterpreted. Software developers are technical experts and they know how
to write code, but they often have no real experience of the business
domain and their assumptions can be substantially different from
the assumptions of business people. Unless we build a shared understanding and flush out assumptions, developers will often end up
putting the toothpaste on the wrong side of the brush.
with images and even some videos; I have not found a single site in the first few result pages that does it only with words.
A practical example
One of the best ways to ensure that people understand each other is
to demonstrate various differences in possibilities with realistic
examples. In the poker example in the section A small misunderstanding can cost a lot of money on page 14, if there had been a discussion
between developers and business people including the case where a
cent is rounded to a penny, the business analysts would have spotted
the issue straight away. Abstract requirements and specifications are
not a good tool for communication. Real-life examples are much
better.
Several years ago I was involved in building an affiliate advertising
system. Affiliate advertising systems, for those of you who have slept
through the web advertising revolution, connect web sites that want
to advertise something and web sites that offer advertising space. The
company that owns the web site with free space is called the affiliate.
The company that sells products or services and wants to advertise
pays a commission for the customers that come from the affiliate site
by clicking on an advert. Today, the bulk of such commission is paid
as a percentage of sales, but back then it was also paid on the number
of clicks.
Our client wanted to build their own affiliate management system
rather than join an existing ad exchange. For the sake of simplicity,
let's say that their initial request was to pay affiliates 2 pounds, dollars
or euros (whichever your preferred currency is) for 1000 clicks. As a
requirement, "pay 2 pounds per 1000 clicks" does not seem strange at all; in fact it is very much like any other normal requirement.
However, although it seems precise, it leaves a lot of room for speculation. What happens if the affiliate has only 500 clicks on a particular
day? Do we pay him 1 pound or do we not pay him anything at all?
As a developer, I can think about this question and give an educated
guess or opinion, but the real truth is that I should not be deciding
about such an issue. This is a question for the business people because
it is a manifestation of their business model. Maybe they want to pay
a pound, maybe they want to wait for people to accumulate 1000
clicks. Both options are valid from a technical perspective, but there
is only a single valid option from the business perspective. If we leave
this question to developers to decide, there is a big chance that they
will not select the same option as the customers did.
Problems like this one are not really bugs in the classical sense: they
are caused by misunderstandings of business rules. The code might
be completely correct from the unit testing perspective, but still miss
the business target. This is where real-world examples come in. Instead
of an abstract requirement such as "pay 2 pounds per 1000 clicks", we
need to identify interesting realistic cases and then discuss what
happens in these cases.
For a start, it is very rare for a web site to get a nice round number of
clicks. So a much more realistic case would be to have something like
7672 clicks during a day. What do we do with this number of clicks?
Do we round it down and pay 14 pounds, or do we round it to 8000
clicks and pay 16 pounds? Maybe just scale it and pay 15.34 pounds?
I like to throw in edge cases as well. Should we pay anything for 999
clicks? What happens with just one click? Discussing edge cases like
these would have pointed out the problem with 1 cent in the foreign
exchange story.
In this case, we could write the examples on a whiteboard, Excel
spreadsheet or something like that and then identify the cases.
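For instance, the first list of cases might start out something like the sketch below; the click counts come straight from the discussion above, and the payment column is deliberately left open, because those values are exactly what we need the customers to tell us.

    Clicks in a day    Payment
    1000               ?
    7672               ?
    999                ?
    1                  ?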
We then get the customers to give us values that they expect to pay
in these situations:
Discussing cases like these often raises more questions and reveals
other interesting examples. Because we don't pay anything for 999
clicks, it is not really fair to the affiliates to simply ignore their total
if they are missing a single click. In this case, the next question might be: should we let the 999 clicks roll over to the next day, or do we reset the counter to zero for tomorrow? Maybe the clients are not too
bothered with small web sites; they only want to focus on big advertisers who will have several thousand clicks per day, so discarding
this small remainder is not a big issue. Or maybe they also want to
please the small advertisers, who will rarely have more than a thousand
clicks on a day, but would appreciate being paid for the accumulated
clicks. So we can create a few more examples on the whiteboard and
discuss them:
first whether they are realistic or not. Keep in mind that there is a
difference between unreasonable and unrealistic. We don't want to
waste time discussing imaginary cases that are not important for the
system.
Writing things down like this makes it easier to spot some other
interesting cases that need to be discussed. Obviously, we could write
an example for a VIP customer from the UK that has only $30 worth
of flowers in the cart. But there are some specification gaps that are
less easy to spot, and the table uncovers them as well. For example,
what do we do when someone has an unused free delivery offer, but
they are a VIP customer and have more than $50 of flowers in the
shopping cart? Do we keep the free delivery offer for the next time?
What if they decide not to use free delivery this time as well? Do we allow them to keep two offers of free delivery for later, or just one?
What happens if for some reason a UK customer has a free delivery
offer? How can this happen (perhaps we let someone change their
address but keep a free delivery offer), and should we disable this
option in this case? Ideally we want to flush out these important
examples while we have business experts available to discuss them
immediately.
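To make that concrete, the kind of table being discussed might look roughly like the sketch below. The columns follow the cases mentioned above; the question marks mark outcomes that still need a business decision, and the specific rows and amounts are illustrative rather than the project's actual rules.

    Customer    Country    Unused free-delivery offer    Cart total    Free delivery?    Offer kept for later?
    VIP         non-UK     no                            $60           yes               -
    VIP         UK         no                            $30           ?                 -
    VIP         non-UK     yes                           $60           ?                 ?
    Regular     UK         yes                           $45           ?                 ?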
http://gojko.net/2007/12/04/waterfall-trap/
focus on a single rule, so that the discussion can focus on this as well.
If larger flows are written as a single example, it's often hard to keep
track of the context and it becomes easy for ambiguity and hidden
complexities to pass undetected.
Stuff to remember
• Instead of abstract requirements, use realistic examples to demonstrate differences in possibilities.
• The whole team should be involved in working through edge cases and resolving ambiguities.
• Discussing realistic examples makes people think harder and they are less likely to just brush questions off.
• Watch out for small differences in examples, as they might indicate that a business rule is not directly expressed.
• Write specifications down as tables to make it easier to grasp the content and spot gaps and inconsistencies.
• With processing workflows, discuss rules for each decision point separately.
• For genuine workflows, write down examples so that preconditions, processing steps and verifications are clearly separated.
Chapter 4
Specification workshops
Just writing the specifications or requirements as realistic examples
instead of abstract sentences is not enough. To get the most out of
realistic examples, we need to put them to an open discussion and
give everyone a chance to review them and ensure that we all understand the same thing.
A customer or a product owner would typically be concerned mostly
with the happy path, such as paying two pounds for every thousand
clicks. Developers tend to focus much more on edge cases and
alternative scenarios, so a developer might suggest an example with
999 clicks for discussion. Testers often think about how to break or
cheat the system, so a tester might suggest the case where all 1000
clicks come from the same IP address in a time frame of five seconds.
Do we still pay the affiliate in this case, or do we ignore it? A developer
might then again ask: what if this same affiliate had 3000 more clicks on the day? Do we pay six pounds for these further clicks, or do we ignore the affiliate completely?
All these people have different views of the system and ultimately
different mindsets. This is why examples have to be put to discussion
and analysed by the whole team. To facilitate this process, I like to
gather customers, business domain experts, developers and testers
around a whiteboard or a wall that we can write on and discuss
interesting examples. If the team is small, then the whole team gets
involved in this. With larger teams this might be a challenge because
it is hard to keep ten or more people focused on a single thing. For
larger teams, get at least one person from each of these groups in the room. At the start of an iteration, this turns into an intensive, hands-on problem and domain exploration exercise, driven by examples.
Running a workshop
The workshop starts with a business person, typically the product
owner, business sponsor or an analyst, briefly describing a piece of
software that needs to be developed. He then explains how the code
should work once it is developed, providing realistic examples. Other
workshop participants ask questions, suggest additional examples
that make the functionality clearer or expose edge cases that concern
them. One of the main goals of the specification workshop is to flush
out these additional cases before the development starts, while business people are immediately available to discuss them.
Discussing examples facilitates an efficient transfer of domain
knowledge and provides a way for all the participants to ensure that
they agree on the same thing. It builds a shared understanding of the
domain. While discussing a use case, user story or whichever technique
you use to describe requirements, it is the job of the developers and
testers to raise all the tough questions and point out different
examples. The job of the customers, domain experts and business
analysts is to answer these questions and make sure that the other
participants have understood the answers correctly.
Instead of just filling in the blanks in the examples, customers and
business analysts should try to explain their decisions and make sure
that others understand them. The specification workshop is the perfect
chance to pass on domain knowledge to the implementation team
and we should use this chance to help everyone build a common
understanding. Knowing business reasons and understanding the
domain might not give developers and testers the power to make
business decisions, but it will give them enough knowledge to spot
when something is suspect and enable them to communicate much
more effectively with business people. Every once in a while, it also
leads to much simpler solutions that satisfy the customers' needs.
For the specification workshop to be really effective, people should
be free to challenge ideas and ask for clarifications until everyone
really agrees and understands the same things about the domain. The
specification workshop ends only when everyone involved agrees that
they have enough examples to move on, that they have identified all
important representative scenarios and covered all the edge cases.
examples for the next month's work. This is absolutely fine, but it
must not be an excuse to skip the workshop at the beginning of this
next iteration. The examples that the business analysts have written
will be a good starting point for the workshop, but developers and
testers need to understand them, identify and suggest missing cases
and get a chance to discuss and critique the initial set of examples.
The workshop is especially important if there is no on-site customer
representative sitting with the implementation team, as it then gives
developers and testers regular planned access to domain experts. It
is often not possible to have domain experts constantly available, but
it should not be too hard to get them in the room with developers
every second Friday for a few hours.
Workshop output
The primary output of the workshop is a tangible set of realistic
examples that explain how the system should behave once the next
phase is complete. Key examples for every story or use case for the
next iteration should be discussed, including the main success scenario and key edge cases. They should be explained and written down
in enough detail so that all participants agree on what the story is, what the deliverables are and how to verify that they are correct. To
keep the flow going, you might just take photos of whiteboards or
have someone write down examples into text documents as you discuss
them. It is not really important to have them in any specific format
or tool, as there will be a time to clean them up later (we discuss this
in the next chapter).
The key feature of these examples is that they should provide enough
information for developers to implement and for testers to verify the
stories planned for the iteration. The workshop should answer most
of the questions about the specification that developers or testers
would normally ask during the next few weeks of work. It is OK to
leave some edge cases for later as long as the discussion during the
workshop provides a framework to understand these cases. Make sure
to discuss success scenarios and exceptions, but don't waste time
specifying examples for all possible permutations of parameters. One
example for each important case is enough.
To keep the workshop focused and keep the flow going, it is best to
keep the examples and discussion on a fairly high level. We want to
discuss what the software does, not how it does it. The discussion on
how software implements something should take place later and does
not necessarily need to involve project stakeholders or domain experts.
Although possible implementations sometimes need to be considered
during the workshop because they limit what can be done, as a general
rule of thumb try to avoid talking about implementation or infrastructure details. This will save time and keep the flow going.
Feedback exercises
Donald Gause and Gerald Weinberg suggest using ambiguity polls to
spot places where people disagree because of hidden assumptions and
help them reach an agreement ([3] Ch. 9). Their ambiguity poll idea
consists of selecting a metric that requires a solid understanding of
the domain to estimate. This can be, for example, performance, cost
or time to complete a task. The participants in the poll are asked to
estimate the metric independently and then discuss the variations in
results to grasp the reasons behind them. With a larger number of
participants, these polls often have clustered results where clusters
denote differences in understanding and variations in a cluster often
relate to observational differences. Although most specification
workshops I am involved in do not have nearly the number of participants required for numeric power laws to kick in, this idea is
applicable to smaller groups as well. In fact, it is already used in agile
planning and is known as planning poker. In planning poker, members
of the implementation team are asked to secretly select a card that
represents their estimate of how long it would take to develop a task.
All cards are then shown at the same time and the people who had
the highest and the lowest estimates explain their reasoning to the
group. This often leads to the discovery of sub-tasks or constraints
that the group was not aware of, or shortcuts that only a few people
in the group know. The process is then repeated until the estimates
converge. (Mike Cohn explains this in more detail in [12] Ch. 6)
A similar idea is described by Gary Klein in Sources of Power[4]. He
suggested using feedback exercises to improve understanding between
commanders and teams. In a feedback exercise, commanders first
give orders as normal, and then the person running the experiment
describes a potential unexpected event. The commander then writes
down how the team should react and team members also write down
what they would actually do. The notes are then compared and
mismatches and surprises are used as a basis for discussion on how
to improve communication.
use of the ubiquitous language will reveal gaps and awkward phrases,
and we then need to create new phrases in the language to cover these
cases and simplify the discussion. As we develop the model and
implement more and more functionality, new concepts will be introduced into the system and we will expand the language with names
for these concepts as well. But the important thing is to consistently
use the same language. There must be no separate business jargon and
technical jargon, just a single project jargon: the ubiquitous language.
The ubiquitous language should be used in all the examples, diagrams,
code and speech. This involves the specification workshop, the
discussion that takes place there and any documents that come out
as a result. Examples that we use to build our specifications are a great
way to start creating the language and to challenge and evolve it in
practice. Enforcing the use of the ubiquitous language will make the
specification workshop much more effective because it helps to reduce
misunderstandings and ambiguity. At the same time, the workshop
will point to new concepts and ideas, and we will need to find names
for these new concepts. The specification workshop is a perfect place
to discuss and agree on the names so that everyone understands them.
I've organised workshops with a dozen people involved that were very
effective, but it was a real challenge to keep everyone focused on the
task in hand and not discuss other problems. This is why I think that
smaller workshops are better. Two developers, two testers and a few
business people should be enough to get the thing right. The
developers can then use the examples that come out of the workshop
to pass the knowledge on to other developers. Testers can do the same
with other testers.
This list of topics is a very useful reminder of the discussion that needs
to take place during a specification workshop, and it might be really
useful if you are trying to act as a facilitator. The steps do not
have to happen in the sequence described above, and they should not be
driven by a single person lecturing or commanding the rest. We want
to promote a collaborative discovery and learning effort, but we should
cover all of the topics in the workshop, so treat the list as a good
reminder of what to talk about.
Here's why
Applied to the specification workshop, this asks for the business
reasons behind a business case to be clearly communicated, discussed
and explained to everyone. This will provide a better framework for
understanding and may launch a series of challenges and questions
ultimately resulting in a better overall solution than the basic set of
examples would have.
Now, talk to me
The specification workshop should not be a lecture or a talk, it should
be an open discussion. We need to make sure that everyone voices
their concerns and has their questions answered (or written down
and chased later). If some are not participating in the discussion, the
facilitator should encourage them to join in.
Stuff to remember
Examples should be put to an open discussion and
reviewed by all members of the implementation team.
At the start of an iteration hold a specification workshop
to discuss examples, iron out ambiguities and functional
gaps and build a shared understanding.
Developers and testers should suggest examples of edge
cases or important issues for discussion.
Make sure that domain experts and subject authorities
answer questions. Don't make up answers yourself.
Business people should explain their answers to make
sure that others understand them correctly.
The discussion that happens during the workshop is
itself very valuable, because it teaches people about the
domain.
Organise feedback exercises to ensure that all participants share the same understanding.
Create and use a ubiquitous language consistently in
the workshop and all examples. Use the workshop to
evolve the ubiquitous language.
Keep the workshop focused, don't let it become just
another meeting.
Selecting a facilitator for the workshop can help to keep
it focused.
If a customer representative is not present at the workshop, make sure that they review and approve the
examples.
Chapter 5
Choosing acceptance criteria
One very useful feature of realistic examples is that they are often
easily verifiable. They have precise starting values and precise expected
results. Once the software has been developed, we can actually check
whether the system pays out 14 pounds after 7672 clicks or not.
We use the examples that come out of workshops, described in the
previous chapter, to specify how a piece of software should behave
once it is implemented and confirm that it actually does what we want
it to do. Examples of system behaviour effectively become our
acceptance criteria for the current phase of development.
The script describes how something is tested. But it is not really clear
what exactly is being tested here. A fairly good guess would be that
this script verifies rules for free delivery, but what are those rules? Is
free delivery offered for the first order to the new users? Or is it offered
when people buy more than two books at the same time? Maybe it is
offered to the customers from the UK? Or is it a combination of those
three things? Maybe this example does not describe free delivery rules
at all, maybe it demonstrates the general flow of activities in our online
store. Now compare the script to the following statement:
Free delivery is offered to UK customers when they place their first
order
This is the same example, restated better and focused on the really
important pieces of information. It is much shorter and easier to
understand. It does not leave so much space for misunderstanding.
If a developer was given the first test as a target for implementation,
ing a special bonus rule that is applied to shopping carts with more
than 50 products, but the details of these products are irrelevant for
the rule, then do not list 50 products by name in the test. Write a
single step that will populate the cart with a number of random
products, in this case 50. Even in cases when you do want to list all
the products, keep the attributes actually being set in the test to a
minimum. Leave out everything that is not relevant for a particular
test. This will make tests easier to read and easier to maintain. If you
later remove an attribute from the product class, you will not have to
go through 50 examples and clean them up manually. The more
irrelevant information you strip from a test, the longer it will stay
valid because change requests and modifications of the parts that
were removed from the test will not affect it.
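As an illustration only, such a set-up step might look something like this in Java; the ShoppingCartFixture, ShoppingCart and Product names are invented for this sketch and are not taken from any real project:

public class ShoppingCartFixture {

    private final ShoppingCart cart = new ShoppingCart();

    // One step covers "put 50 products in the cart"; the product details
    // are generated because they are irrelevant for the bonus rule.
    public void addRandomProducts(int count) {
        for (int i = 0; i < count; i++) {
            cart.add(new Product("product-" + i));
        }
    }

    // The only value the bonus rule actually depends on.
    public int numberOfProductsInCart() {
        return cart.size();
    }
}

A test can then call addRandomProducts(50) in a single step, and a later change to product attributes will not force anyone to edit fifty lines of example data.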
Automating tests
Toyota spearheaded the success of the Japanese motorcar industry,
changing the image of Japanese cars from cheap junk to reliable high-quality vehicles. One of the major forces behind this success was their
innovative production system, which included a specific attitude
towards product quality called zero quality control, described by
Shigeo Shingo[15]. Zero in this case applies not to quality, but to
quality control. In the Toyota Production System, quality was
something that was created from the start and planned in, not
controlled at the end. The point of testing in zero quality control is
to ensure that items are produced correctly to prevent defects, not
to discover them at the end. This is how we should look at the tests
in agile acceptance testing. They are there to prevent us from doing
something wrong in the first place rather than to catch mistakes at
the end.
Zero quality control introduced the concept of source tests and
mistake-proofing devices. Mistake-proofing devices are used to check
whether a part is defective right there on the production line. A key
feature of these verifications is that they are inexpensive, so that parts
can be inspected frequently. A worker can verify a part on the
manufacturing line, then it can be verified again before assembly and
again later in the process. In knowledge-based work, such as software
development, time really is money. The biggest cost on software
don't repeat yourself. This is why I really prefer using tools that allow
us to write acceptance tests so that business users can understand
them.
Every automation tool has its own way of specifying and automating
tests, so programmers need to translate the examples chosen for tests
into code or some form of scripting. As I've mentioned before,
translation often causes problems because things get lost or become
less precise. Ideally, we want to avoid translation as much as we can.
Some automation tools allow us to write the description of a test
separately from the code automation part. When using tools like this,
it is possible to skip the translation from examples into tests altogether. For example, the test in Figure 5.2 could be automated with
FIT.
The automation part of this test may post actual orders or just update
a field in the database that says how many orders a customer has had
this year, depending on the design and implementation of the system.
But at this point we really don't care about how something will be
This division is done primarily to keep the test description human-readable and shield business people from any code implementation
details. If you use a tool that separates test descriptions and test
automation, programmers can start automating tests even if some
questions are still unanswered and the tests contain some unconfirmed
results. The structure of the tests will most likely not change, even if
the values do. This often gives developers enough to start with, and
they can tweak the underlying automation a bit later to accommodate
structure changes if required.
Simon Stewart suggests using the Page Object pattern (http://code.google.com/p/webdriver/wiki/PageObjects) to make user
interface tests easier to maintain. In this approach, we describe web
pages and the logical operations they support with Java objects and
a fluent interface, hiding the complexity and details of user interface
commands in their methods. Tests are then written against the page
objects, not directly against the APIs of the test automation tool. This
approach makes it possible to write tests before the user interface is
ready, because we can work with page objects and logical methods
that will later be connected to a web site. It also means that tests are
less affected by user interface changes, since we only need to change
the page object when a particular page changes. Page objects are
essentially another version of the domain-specific testing language
idea, but in this case the domain is the workflow of a particular web
site that we are building.
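As a rough sketch of the idea, a page object for a search page might look like the class below. The element names, the page structure and the SearchResultsPage class are invented for illustration; only the WebDriver calls are real API.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class SearchPage {

    private final WebDriver driver;

    public SearchPage(WebDriver driver) {
        this.driver = driver;
    }

    // A logical operation on the page, hiding the user interface details.
    public SearchResultsPage searchFor(String phrase) {
        driver.findElement(By.name("query")).sendKeys(phrase);
        driver.findElement(By.name("search")).click();
        return new SearchResultsPage(driver);
    }
}

Tests are then written in terms of searchFor(...) and similar methods, so when the page layout changes only the page object needs to be updated.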
http://code.google.com/p/webdriver/wiki/PageObjects
95
away so that they can be automated and connected to the code more
easily. For example, if FIT or FitNesse is used as the automation tool,
a developer should think about fixture types that can be used to
automate test steps and may suggest rewriting some parts to make
the automation easier. Customers, business analysts and testers are
there as a safety net to prevent developers from going too far and
making the tests too technical.
If you do assign a single person to write the tests, make sure that you
know what you are doing and why you are doing it. This person is
put into a position of significant power and responsibility, as acceptance tests are the specification of the system. If you would not trust
someone to write the specifications, don't trust them with writing
acceptance tests either.
I heard about a case where the task of writing acceptance tests was
assigned to a junior external tester, just because he had some previous
exposure to FitNesse. A specification workshop was not held. A
developer with no previous experience of writing FIT fixtures was
told to automate and review the tests. These two guys just wanted to
get the job done, so they wrote the tests the best way they could,
making up the content themselves. I have a hard time imagining a
worse way to write acceptance tests. Without any domain knowledge
or understanding of the problems and requirements, they had absolutely no chance of capturing the real rules and constraints. They
made up their own theoretical examples, so the chances of identifying
any functional gaps were very thin. Introducing agile acceptance
testing in this way can only lead to disappointment, because all the
additional work will bring absolutely no benefits. Acceptance tests
written by people who do not understand the business and don't have
any influence on the scope are completely useless and your team is
actually much better off without them: programmers will at least
not receive wrong specifications.
If a single person is charged with writing the tests, this person must
understand that her job is to research realistic examples and interview
domain experts on expected behaviour, not to make up theoretical
examples and definitely (and this is of crucial importance) not to
Stuff to remember
Select a set of representative examples to be the
acceptance criteria for the next phase of the project.
Clean up and formalise these examples as acceptance
tests.
Write tests collaboratively, ideally at the end of the
workshop.
Chapter 6
Focusing development
With traditional abstract requirements, many details are often left to
developers to work out. Although business analysts would probably
not agree with me at this point, from the perspective of a developer
who was typically charged with the task of digesting 500-page documents and then working out what actually needed to be implemented,
I can assure you that this is true. Agile acceptance testing helps a lot
in this respect, because it gives us acceptance tests that were collaboratively produced to tell developers exactly what is needed. Once the
acceptance criteria for the next phase of the project are captured in
acceptance tests, the expected result of development is clearly specified
in a measurable and verifiable form. From this moment, the actual
development can focus on fulfilling these requirements.
A short note before we continue: although I generally try to keep this
book non-technical, this particular chapter may be a bit too technical
for business people because it deals with the actual implementation
process. Feel free to skip it if you are not a developer or skip over parts
that seem uninteresting.
One of the ideas of agile acceptance testing is to flush out inconsistencies and functional gaps before the development of a feature starts.
The specification workshops facilitate this. As a result, developers
should have much less trouble completing their tasks than with more
traditional specifications and requirements documents. When they
start working on a particular service or domain object, most of the
requirements and descriptions of the expected behaviour should
already be specified in the form of realistic examples. This is the ideal
and sometimes things will need to change later and some functional
gaps will be identified during development, but this happens significantly less often than with traditional specifications. In practice,
acceptance tests provide a very good specification for all the required
functionality and a solid foundation for development. For a piece of
code to be considered complete, everything specified in relevant
acceptance tests should be implemented and confirmed by running
the tests. Mike Scott wrote (in a private e-mail) that in his organisation acceptance tests
are used to measure the progress of development, in a metric called
running tested features. In his opinion any other measure does not
contribute to delivering value to the business.
Looking at this same idea from a different perspective, the specification is also complete in the sense that if something is not there, it should
not be implemented. The code should implement what the acceptance
tests expect, no less and no more. Focusing the development just on
the things expected by acceptance tests helps a great deal to prevent
just-in-case code from leaking into the system. Ben Rady suggested
this rule of thumb for acceptance test coverage (http://tech.groups.yahoo.com/group/testdrivendevelopment/message/24125):
And if you've got code that's not covered by acceptance tests,
you need to ask yourself this question: Can I delete this code
without affecting the functionality of the product? If so, you
should...simpler is better. If not, then you should probably write
some acceptance tests if you want to ensure that:
1. The customers are clear about what the system does.
during the iteration. If the change is small and you have enough time
to include it in the current iteration, you can just add to the examples
and do it. If the customer representative or a business analyst is not
readily available, or the change is too big to include it in the current
iteration, it is better to leave the discussion for the next workshop. In
any case, you should try to understand together why this example was
not spotted during the workshop and try to make sure you identify
all the examples next time. This discussion can take place during the
iteration retrospective.
It is also important to remember that acceptance tests are not dead or
set in stone. They are very much alive and subject to change.
Developers may notice that they need to clean up details or require
more information to complete the task or the clients may come up
with some more requirements or change their minds. Small changes
to acceptance tests should be allowed during development, provided
that they are communicated to everyone involved. It is not uncommon
for a test to be taken out or another test to be introduced during the
implementation. This is perfectly fine as long as everyone understands
what the changes affect. If you want to introduce a large change, think
about organising a mini-workshop to discuss the change with other
team members or waiting for the workshop scheduled for the next
iteration so that you can make sure that everyone understands the
change.
This workflow test hints that the service would most likely have a
method to place chips on the table, with a player, amount and selected
field as arguments. It also hints that there should be a method to spin
the roulette and a method to check how much a player won.
Just make sure not to take these hints as design requirements. For
example, it would be wrong to assume from the fact that the acceptance
test specifies the roulette table field Odd that the Spin() method for
the roulette business service takes the result field as an argument.
Code using the roulette table should not be able to specify the outcome
of the game, but we need exactly this in a test. The link between the
test specification and the business code for this step would have to be
implemented using a test system instead of a real random number
generator, for example, but not through the business service.
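Purely as a sketch of the hints described above (the names are my own, not code from the book), the business service might end up looking roughly like this:

public interface RouletteTable {

    // Place chips for a player on a selected field, for example "Odd" or "17".
    void placeBet(Player player, int amount, String field);

    // Spin the wheel; note that the caller cannot pass in the result.
    void spin();

    // How much the player won in the last game.
    int winnings(Player player);
}

The test controls the outcome by wiring a predictable substitute for the random number generator behind this service, not by adding a result argument to spin().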
Tests should be focused on the behaviour under test and they often
do not provide the full precise context. For example, if a special bonus
should be applied to shopping carts with more than 50 products, the
acceptance test for this should specify a case where the cart has 50
products (as described in the section Focus the test on the rule being
tested on page 84). It would be wrong to assume that this implies a
business method which randomly generates a number of products to
fulfil the test workflow. On the other hand, this test hints at the need
to count the products in the cart, which should most likely be implemented as a domain method.
for the concepts in our code and avoid creating a technical jargon for
the project.
Acceptance tests help promote the ubiquitous language concept of
domain-driven design, because they give developers an obvious choice
of names for domain elements described in the tests. Consistent use
of the domain language in examples and tests will lead to a consistent
use of the language in code as well.
Concepts in the code should be given the same names as the respective
concepts in test descriptions and in other documentation. During the
implementation of the tests, you may notice some inconsistencies in
naming. This is perfectly normal especially in the early stages of the
project, since the language is still being formed and we do not use
anything to enforce formal naming conventions during specification
workshops. In two different tests we may refer to the customer's
address as mailing address and home address. When you start
implementing those two tests, you will notice the difference because
code compilation enforces formal naming. When things like this are
found, developers should communicate with the business people and
decide which name to select, and then consistently use this name.
This also means adjusting one of the test descriptions to make it
consistent with the code.
If you introduce a new domain concept during implementation
because it is required to complete an example, but the concept does
not appear in the tests, my suggestion is to contact the business people
again and ask them to suggest a name for the concept. Don't just give
a class the first name that comes to your mind. If the concept is a first-order domain member, the business people may well already have a
name for it. Let's use names for our classes and objects that the business people can understand too. This will make it much easier to
discuss the concepts.
For example, an acceptance test for a search system might specify that
a user can enter multiple search phrases separated by commas in the
same line. Somewhere under the hood, this big search criterion needs
to be split into multiple criteria. Splitting the search string should be
a responsibility of a distinct code unit which is purely technical. The
overall acceptance test is not a particularly good target for development of this code unit. It will give us a green or red result, but it will
be difficult to know whether the problem is in the code that splits the
string into pieces or the code that executes the search. It will also not
give us quick feedback while we develop the functionality, as the search
needs to be developed as well in order for the acceptance test to run.
In cases like these, I prefer to write a few unit tests for the required
string manipulation functionality. These tests will allow me to develop
and test the string manipulation code unit in isolation and ensure
that the unit is correct before I include it in a wider context. Dividing
the code up like this also allows us to split the work. One developer
can work on string manipulation, another can work on executing the
search. The overall acceptance test verifies that both code units
cooperate correctly to give the correct business result at the end.
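A minimal sketch of such a unit test, assuming a hypothetical SearchCriteriaParser as the isolated code unit:

import static org.junit.Assert.assertArrayEquals;

import org.junit.Test;

public class SearchCriteriaParserTest {

    @Test
    public void splitsCommaSeparatedPhrasesAndTrimsWhitespace() {
        SearchCriteriaParser parser = new SearchCriteriaParser();
        // Only the string manipulation is exercised; no search is executed.
        assertArrayEquals(
            new String[] {"agile acceptance testing", "fit", "workshops"},
            parser.split("agile acceptance testing, fit , workshops"));
    }
}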
Unit testing tools are often used to perform integration tests or
exhaustive system testing. With acceptance testing in place, such tests
can be easily dropped. Having acceptance tests in the project in
addition to unit tests can help focus the unit tests on what they really
should be checking: code units from the programmers' perspective.
Unit tests should not verify large workflows or business rules, connect
to a database or require any form of external system set-up or
configuration. We can move such verifications from a unit test suite
to the acceptance test suite and keep unit tests nice and quick.
Some duplication between unit tests and acceptance tests is not really
a problem because it does not introduce a large management overhead,
and you should not drop legitimate unit or acceptance tests because
the same thing is already verified in the other groups.
run all unit tests as we want to catch technical bugs before the code
goes into the central repository. For unit tests to be effective, they
have to run quickly and execute from the integrated development
environment. For acceptance tests, it is much more important that
they are easily understandable by business people and that they can
verify the system in a state as close as possible to the production
system.
Acceptance tests tend to be slower than unit tests, because they connect
to real databases and external services. Depending on external
dependencies, acceptance tests might not really be executable at all
on developer machines. Whereas this would be a huge problem with
unit tests, it is relatively fine with acceptance tests.
Even if your acceptance tests can run on developer machines, do not
include them in the basic build. They will just slow down the turnaround time for implementing small changes. Unit tests should
execute on every change to verify that bugs are not being introduced.
This basic test run should not last more than a few seconds, as it has
to be done frequently. Developers should run acceptance tests to verify
that they have finished with a larger piece of code from a business
perspective.
Automated acceptance tests are easy to run, although they typically
do not execute quickly (or at least not as quickly as unit tests).
Developers should take advantage of this and run them periodically
to verify that they are on the right track. A continuous integration
system should also run acceptance tests for previous phases of the
project overnight as a regression test to verify the system functionality.
If you do this, it is a good idea to publish the results somewhere where
the business people, project managers and customers can see them.
This will allow people to see what progress is being made.
Stuff to remember
In practice, acceptance tests provide a very good
specification for required functionality.
Chapter 7
Facilitating change
Once the entire set of acceptance tests for a project phase is green,
the development work is almost done. At this point, developers might
improve the design of the system with some new insights acquired
during the current iteration and re-execute the tests to confirm that
the system is still working as it should be. After this, we can continue
with the next phase of the project. The role of the set of acceptance
tests for the previous phase now changes from that of a target for
development to that of a utility that facilitates future change. It
becomes a live record of what the system does and a set of regression
tests that can be executed to verify that the system still does what it
was supposed to do.
A live specification
The executable code is often the only thing that we can really trust in
software systems. Other artefacts such as specifications, requirements,
even API documentation, quickly get out-of-date and cannot be
trusted completely. On the other hand, executable code is unusable
as the basis for a discussion with business people. Source code is no
better, since business people cannot read it. Even if they could read
it, source code has too many details to facilitate an efficient discussion
about business rules and features. It is too low-level and it prevents
us from seeing the wood for the trees.
An automated acceptance test suite, connected to the executable code,
can easily be run to check whether it still reflects reality. We can have
the same level of confidence in the automated acceptance tests as we
do in the code. This allows us to use the acceptance tests written for
previous phases of the project as a good basis for discussion. The
representative set of examples, formalised into acceptance tests and
connected to the code by test automation, plays the role of a live
Keeping it live
During development, we expect acceptance tests for the current iteration to fail most of the time, since the functionality is not yet there.
But once the functionality is implemented, previous tests should
always pass. After a phase of development is done, it is of crucial
importance to promote related acceptance tests into the suite of
regression tests. These tests can be used to verify that the system keeps
doing what it is supposed to do.
Introducing changes
With a live, relevant specification in the form of acceptance tests that
business people can understand, we can introduce changes to the
system much more easily. The live specification facilitates discussion
about change requests, and we can use existing tests to analyse how
the requested modifications would affect system behaviour. Existing
examples provide a good framework in which to discuss the changes
and make sure that we do not forget important edge cases. When we
start changing the existing tests and adding new information to them,
Solving problems
With domain knowledge and understanding shared among team
members and a comprehensive acceptance test suite based on realistic
examples, the quality of the resulting software is much higher than
with more traditional specifications and requirements. This does not
mean that you will never get any bugs. Nobody is perfect and unfortunately bugs will still happen.
When a bug is found, the key questions to answer are: why was it not
caught by the tests in the first place? and which set of tests should
have caught it? This will provide insights for future specification
workshops. Mike Scott recommends (in a private e-mail) that we should treat defects as
evidence of missing tests. If an existing feature is incorrect, then the
case was overlooked by the team so we need to add a new test and
ensure that similar cases are covered next time. Another possibility
is that the case was specified incorrectly, which means that real domain
experts did not participate in the workshop and did not review the
tests afterwards. To solve similar problems in the future, we need to
identify people who should participate in the workshops or review
tests for each part of the system. If the system is not doing something
that it should be, then a new feature should be put into the development plan and specified when the time comes for implementation.
There is also the possibility that the bug is not a domain problem at
all and should have been uncovered by unit tests or integration tests,
not by acceptance tests.
steps that are not necessarily relevant for the thing being described.
Changes in these seemingly unrelated steps will affect all the tests as
well.
This might happen because we started with a single story and then
added similar stories as change requests came in. Alternatively, the
stories might have initially started off as different but become similar
after code refactoring or test clean-ups.
In general, a good solution for this is to convert all the script-style
tests to a single focused specification test. This test should only contain
the relevant arguments and specify what the underlying functionality
does rather than explain how it works. You can see an example in the
section Distilling the specifications on page 78.
Interdependent tests
Tests that depend on the order of execution are just a special case of
the previous problem, though this is not as visually obvious, and they
suffer from the same problems. If a test
requires some other test to be executed beforehand to prepare the
data or initialise some external dependencies, then a change in the
first test might cause the second to start failing with no obvious reason.
Cleaning up tests
Acceptance tests have to be maintained throughout the software lifecycle. Knowledge gained during development often provides new
insight into the domain and helps to implement, explain or define
things better. This knowledge should be incorporated into existing
tests in the following iterations. Here are a few tips for housekeeping
that you can apply to keep tests flexible and easy to change.
at the same time is then easy, as is verifying that they are still
consistent.
Stuff to remember
Acceptance tests are an authoritative description of what
the system does.
We can have the same level of confidence in acceptance
tests as in executable code.
We can use existing acceptance tests to discuss future
changes.
Once the functionality is implemented, previous tests
should always pass.
If an earlier test fails, immediately discuss with customers
whether it specifies obsolete functionality. If yes, take it
out. If not, you found a bug.
Regression tests that are in doubt should never simply be disabled.
Automate periodic execution of regression tests using a
continuous integration tool.
Acceptance tests should not be the complete regression
testing suite. They are only a good start.
If you find yourself resisting changing code in order not
to have to fix tests, you need to simplify the tests and
make them easier to maintain.
Efficient organisation of tests is crucial to keeping them
easy to maintain.
Keep tests in the same version control repository as
domain code.
Watch out for these problems:
Long tests, especially those that check several rules
Parameters of calculation tests that always have the
same value
Similar tests with minor differences
Tests that reflect the way code was written or tests that
mimic code
Tests that fail intermittently even though you haven't
changed the code
Parts of tests or even complete tests used as set-up for
other tests
Interdependent tests
Chapter 8
Jim Shore gave one of the best summaries of agile acceptance testing
in the wider development context in his article How I use FIT
(http://jamesshore.com/Blog/How-I-Use-Fit.html). He called the process describe-demonstrate-develop.
1. The first step in the process, describe, requires us to say what we
are going to develop using a short description, no longer than a
paragraph of text.
2. The second step, demonstrate, asks us to show various differences
in possibilities with examples and captures the actual specification
for the work.
3. The third step, develop, is the implementation of the specification
using regular development practices and methodologies.
4. The fourth step, although it does not appear in the name of the
process, is repeat. Each pass through the describe-demonstrate-develop cycle should elaborate one small business rule. A project
might consist of hundreds or thousands of such cycles.
Agile acceptance testing has much more to say about what happens
before development than during it. John von Neumann, the father of
modern computing, said: "There is no sense being exact about
something if you don't even know what you're talking about." The
first two steps make sure that we know what we are talking about
before the real development starts.
From a theoretical perspective, the approach of using realistic
examples for discussion and learning about the domain and then
selecting a set of them as acceptance tests works with any development
system or methodology. In practice, however, the approach works
best in the context of agile development methods (hence the word
agile in the name). I have not tried it in a project that uses the waterfall-style method and honestly I have no intention of trying it in that
environment, but my gut feeling is that it would not work so well
there. Agile methods break up the project into small time-boxed
iterations. Each iteration aims to implement a relatively small, easily
manageable piece of the total project scope. Specification workshops
to discuss the next two weeks or month of development can be held
Adopting in phases
Brian Marick wrote an article called An alternative to business-facing
TDD (http://www.exampler.com/blog/2008/03/23/an-alternative-to-business-facing-tdd/) in March 2008, challenging the effectiveness of automation for
acceptance tests. Arguing that test automation by itself does not help
clarify the design, that live demonstrations can show continuous
progress and that automated acceptance tests cannot detect user
interface bugs, he concluded that test automation for business-facing
(acceptance) tests does not pay off as much as it does for code-oriented (unit)
tests. He suggested that it might be more effective not to convert the
examples into automated tests and perform exploratory testing for
acceptance:
An application built with programmer TDD, whiteboard-style
and example-heavy business-facing design, exploratory testing
of its visible workings, and some small set of automated
whole-system sanity tests will be cheaper to develop and no
worse in quality than one that differs in having minimal
exploratory testing, done through the GUI, plus a full set of
business-facing TDD tests derived from the example-heavy
design.
Although I do not completely agree with this hypothesis, I think that
it is important to include it because Brian Marick is one of the top
authorities on testing in agile projects and definitely has much more
experience with acceptance testing than I do. His article is relatively
short and is a very good read, so if you are sitting near a computer,
put this book down and take five minutes to go through it now. If
http://www.exampler.com/blog/2008/03/23/an-alternative-to-business-facing-tdd/
151
you are not close to a computer, make sure you remember to read it
later.
I agree that automated acceptance testing can never replace exploratory testing for finding UI bugs and identifying unforeseen problems,
but I do not think that these two practices should be competing against
each other. With today's automation tools, user interface testing is
still a big pain and I consider that exploratory testing is much more
effective for user interfaces than automated testing. On the other
hand, I see a lot of value in incrementally building up an automated
regression test suite for business rules. Automated acceptance tests
not only provide assurance that the software meets the goals, they
also serve as a very good safety net during design improvements and
changes (see the section Evolving the design on page 109). They also
leave more time for exploratory testing.
The idea of stopping after the specification workshop is interesting
from another aspect. The workshop helps to build a shared understanding of the domain and working with realistic examples helps us
get more complete and precise specifications. This is, for me, the
biggest benefit of agile acceptance testing at the moment. If you think
that introducing the whole practice at once might be too much for
your team, start by only doing the workshop and then slowly move
into test automation.
Hire a mentor
It always helps to have someone who has gone through the transition
to lead the way, answer questions and train project team members.
If you can hire someone like this and get him on the initial team, this
will help to establish best practices for acceptance testing in your
organisation much more quickly. The mentor can act as a facilitator
during workshops, help to choose tools, train people to use them and
help automate tests better.
Having someone readily available to answer questions and help with
problems will speed up progress, but it might also make the team too
reliant on the mentor. The ultimate goal is to empower your initial
team to work without external help and then even train other teams
in your organisation. I think that an especially effective technique is
to have someone to help out for a short period of time, such as a few
weeks or a month, and then have them go away and let the team
members work on their own for a few weeks. Once the mentor goes
away, the team members will have to try harder to solve issues and
problems on their own, so they will learn more from these challenges.
If you work on shorter iterations, then the mentor should be with the
team during the initial few iterations, then leave the team to work
out examples and produce tests for one iteration themselves, joining
towards the end of it to review the outcome and help them solve any
open issues before the specification workshop for the next iteration.
As the project progresses, the mentor should join for shorter periods
of time and leave for longer, making the team more and more self-reliant. After a few such visits, the mentor can just occasionally visit
the team to discuss progress. He can be available for phone or e-mail
consultations throughout. A good mentor will pass the knowledge
on to the initial team in a few visits, so the entire experiment should
not take more than two or three months.
business people. This might seem like a good start, but it is actually
as wrong as it can possibly be.
Cargo cults
During World War II, Allied forces and Japan invaded isolated
islands in the Pacific where the natives had had no contact with
the rest of the world before. They arrived with modern equipment, built airfields and delivered supplies by cargo drops. Some
of the supplies were shared with natives, drastically changing
their way of life. When the war ended, the soldiers and their
supplies were gone. Religious cults developed on the islands
calling for the cargo to start falling from the sky again. Islanders
built elaborate models of airplanes and air control towers out
of straw and wood and imitated soldiers, hoping to summon
the presents from the gods. Needless to say the presents never
arrived.
The term cargo cult was introduced to programming by Steve
McConnell, who characterised the ritual following of practices
that serve no real purpose as cargo-cult software engineering
in 2000. Since then, this name has become synonymous with
any blind use of programming practices without really understanding the underlying principles, which often does not bring
any benefits.
If developers write acceptance tests based on their understanding of
the system, they are not actually checking whether this is the same as
the understanding of business people. The result is that people are
disappointed because acceptance testing seems not to provide any
benefits apart from regression test coverage. If you think about it,
since there was no effort to build a shared understanding, there can
be no benefits of improved communication. Acceptance tests written
in this way are no better than the straw air-control towers and wooden
planes that the Micronesian natives built. They follow the outer
Stuff to remember
Agile acceptance testing is not a development methodology.
Iterative development is a prerequisite for effective agile
acceptance testing.
Start out small, with a team of enthusiastic people.
Think about assigning a facilitator for the initial workshops.
Don't take practices as carved in stone, adjust them to
your needs but keep to the basic principles.
Avoid mentioning the word test to get the buy-in of
people who think that testing is beneath them.
Developers should not write acceptance tests themselves.
Chapter 9
2. As a role
3. I want functionality
For me, there is no significant difference between these two formats,
but some people religiously insist that one is better than the other, as
if a change in the order could really make an important difference
for the project. This reminds me of the famous curly brace discussions
and similar coding format issues, which were always so pointless but
wasted so much time. Ken Arnold's legendary article (http://www.artima.com/weblogs/viewpost.jsp?thread=74230) asking for style
wars to stop always comes to my mind in these situations.
http://www.artima.com/weblogs/viewpost.jsp?thread=74230
161
162
163
http://www.xprogramming.com/xpmag/expCardConversationConfirmation.htm
164
Project planning
Connextra pioneered the index story card format that is now more
or less considered standard. The Connextra story card has the story
title and priority at the top, followed by the story description, and
then the story author, submission date and implementation estimate
at the bottom. An example is shown in Figure 9.1.
Project planning
I like to use a brainstorming session with key stakeholders at the
beginning of a project to create the first cut of user stories, writing
them down on index cards or sheets of paper that are then put up on
a wall. We first discuss very high-level stories (sometimes called epics)
and then break them up into smaller stories later. The way that the
stories are gathered is not really important for the topic of this book;
see the recommended books on user stories for some good techniques for that. What is important is that each story should have
something to say about how the system will help users do their job
when it is complete, and why this particular functionality is important.
for this phase of development and which should be left for later. This
helps a lot to eliminate just-in-case code.
In essence, user stories are the scope for the project, facilitating long-term planning and helping us get the big picture of what needs to be
done. Acceptance tests are the detailed specifications that are ironed
out only before the implementation of a particular story or a set of
related stories. With regard to the three Cs of user stories, the specification workshop is the conversation and acceptance tests coming out
of the workshop are the confirmation.
Stuff to remember
Agile acceptance testing and user stories complement
each other incredibly well.
You do not have to plan projects with user stories, but
there are significant benefits if you intend to introduce
agile acceptance testing.
User stories focus on customer benefits, so it is easier
for customers to plan based on them.
Group stories into deliverables by business goals and
make sure that each deliverable can go to production.
User stories are the scope. Acceptance tests are the
specification.
Chapter 10
Tools of today
Agile acceptance testing relies on test automation, so much so that
the tools that we use for automation dictate the form in which tests
are recorded and guide us in development. In this chapter, I describe
several tools that you can use for test automation while still keeping
tests in a form that can be understood by both business people and
software implementation teams.
Tools and technologies come and go, but ideas and practices stay with
us. In fact, the incompleteness of today's tools is one of the major
obstacles to the adoption of agile acceptance testing. I expect tools to
improve considerably in the near future, so giving you detailed
instructions on how to use them in this book would not be effective.
You will be able to find more complete and more up-to-date user
manuals for tools online.
Instead, I want to present an overview of some of the most popular
or interesting tools offered today, as a short tourist guide that will
inspire you to explore further and tell you where to look for more
information. This chapter is a bit technical, so feel free to skip parts
of it if you are only interested in the business side of the story.
FIT
Framework for Integrated Test (FIT) was the first popular acceptance testing framework, originally developed for Java by Ward
Cunningham in 2002. At that time, Ward Cunningham called it "a
tool for enhancing collaboration in software development" (http://fit.c2.com/). One of
the central ideas of FIT was to promote collaboration and allow
customers, testers and programmers to write and verify acceptance
tests together.
FIT works with a tabular model for describing tests. A typical FIT
test is shown in Figure 10.1. Test inputs and expected results are
specified in a table, with expected outcomes ending with a question
mark in the column headings. This tabular form makes it very easy
to write and review tests. Results are presented in the same format.
If a particular expected value does not match the actual outcome, FIT
marks the cell red and prints out both the expected and actual value.
FIT simply ignores everything outside of the tables, so additional
documentation, project notes, diagrams and explanations can be
easily bundled with acceptance tests, providing deeper insight into
the problem domain and helping people understand and verify test
results. All this helps to evolve tests with the code more easily.
These tabular tests are excellent for calculation rules, but they are not
really good for describing workflows or stories. Rick Mugridge has
developed an extension to FIT called FitLibrary that allows us to
specify flow tests as stories in a format similar to English prose (see
Figure 10.2). Although it is technically a separate extension, FitLibrary
is now considered by most people to be part of the standard set of test
types. FIT and FitLibrary together provide about ten different ways
to specify tests with tables, including lists, sets, stories and calculations
based on column and row headings. This makes FIT very flexible and
because of this it remains the most popular tool for acceptance testing
despite much recent competition.
FIT tables connect to the domain code using a very thin glue code
layer. These layers are called fixtures. A fixture is effectively more an
integration API then a testing API. The fixture tells FIT how to
interpret a particular table, where to assign input arguments, how to
execute a test and where to read actual test outputs. The example in
Figure 10.3 is a Column Fixture, which maps table columns to public
fields, properties and methods of the fixture class. In general, FIT
requires very little extra code for testing, just enough to provide a
sanity check for the underlying interfaces. Often, FIT fixtures
constitute the first client of our code.
Figure 10.3. FIT fixtures are the glue between tables and domain code
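For example, a column fixture for the free delivery rule discussed in Chapter 5 might look roughly like the sketch below. The mapping of columns to public fields and methods is standard FIT; the FreeDeliveryRules class and the exact columns are assumptions made for the sake of the example.

import fit.ColumnFixture;

public class FreeDeliveryFixture extends ColumnFixture {

    // Input columns map to public fields.
    public String country;
    public int previousOrders;

    // The "free delivery?" column maps to this public method.
    public boolean freeDelivery() {
        return new FreeDeliveryRules().isOffered(country, previousOrders);
    }
}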
FitNesse
FIT makes it easy to run tests, but does not provide a way to create
or manage them. The original idea was to write tests in Word, Excel,
or any tool that can output HTML. FitNesse (see Figure 10.4) is a web
wiki front-end to FIT developed by Robert C. Martin and Micah
Martin of Object Mentor in 2005. Today it is the most popular choice
for running FIT tests. It provides an integrated environment in which
we can write and execute tests and speeds up the job with quite a few
useful shortcuts. Although FitNesse is also written in Java, it is not
heavily integrated with FIT, but executes it as an external program.
This is very useful, as it enables us to plug in different test runners.
After the FIT-FitNesse combination became popular in the Java world,
test runners were written for other environments including C++,
Python, .NET and Smalltalk. FitNesse is a web-based server, allowing
easy collaboration. Business analysts and other non-technical people
do not have to set up any software in order to use FitNesse. Any
browser will do just fine. FitNesse also allows us to organise tests into
test suites, reuse a shared set-up or tear-down for the entire test suite
and include test components, making tests easier to manage and
maintain long-term.
Alternative tools
A number of alternative test runners and test management tools for
FIT have emerged in the last several years. FitClipse by DoubTech
(http://www.doubtech.com/development/software/projects/?project=1) is an
Eclipse plug-in that manages FitNesse tests. BandXI also offer an Eclipse
plug-in (http://www.bandxi.com/fitnesse/index.html) that enables developers
to start and stop FitNesse from Eclipse and execute tests. Jeremy D. Miller
has written StoryTeller (http://storyteller.tigris.org/), which provides a
way to run and manage tests in Visual Studio. Rick Mugridge is working on
ZiBreve (http://www.zibreve.com/), a Java IDE for story tests with WYSIWYG
editing and support for refactoring tests. Jay Flowers has written a
plug-in (http://jayflowers.com/WordPress/?p=157) for TestDriven.NET that
runs FitNesse tests from Visual Studio.
FIT, FitNesse and most of the supporting tools are open source. For
people requiring commercial support and integration with commercial
tools, there is Green Pepper (http://www.greenpeppersoftware.com/en/products/) by Pyxis Software. Green Pepper is built
on similar concepts to FIT and FitNesse, but it integrates nicely with
Confluence and JIRA from Atlassian, providing a more integrated
way of managing tests and relating them to development tasks and
issues. An interesting feature of Green Pepper is that it supports
multiple versions of a single test, so that you can mark which version
is actually implemented and which version is being implemented for
the next release, allowing you to manage more easily the transition
from acceptance tests to regression tests and back.
More information
For more information on FIT and FitNesse, see FIT for developing
software[9] by Rick Mugridge and Ward Cunningham and my book
Test Driven .NET Development with FitNesse[22]. The Fixture Gallery (http://sourceforge.net/projects/fixturegallery)
178
Concordion
is a free PDF guide that explains all the most popular test (fixture)
types, offering advice on when to use them, when not to use them and
how to save time and effort when writing tests. For more resources,
see the main FitNesse web site http://www.fitnesse.org and the
community-maintained public wiki http://www.fitnesse.info.
Concordion
Concordion, developed by David Peterson and released under the
Apache open source license, is an interesting alternative to FIT. Like
FIT, Concordion uses HTML documents as executable specifications
and requires some glue code (also called fixture code) to connect the
executable elements of the specification to the domain code. Unlike
FIT, Concordion does not require the specification to be in any
particular format: you can write examples as normal sentences,
without any restrictions.
Concordion is really simple. Its instrumentation only allows
programmers to set global test variables, execute fixture methods and
compare actual results with expected values. Programmers can use
special HTML element attributes to mark words or phrases that are
used as test inputs or compared to test results. Web browsers just
ignore unknown element attributes, so Concordion test instrumentation is effectively invisible to people that are not interested in test
automation. For repetitive specifications and calculation rules,
Concordion also supports attributes for tables similar to those of the
FIT Column Fixture.
Currently, Concordion only supports Java fixtures, since it actually
works as a JUnit extension. This provides direct integration with
JUnit test runners, making it much easier to execute Concordion tests
from popular development environments and integrate them into
continuous build systems. It also shortens the learning curve required
to start using Concordion. The fixture is simply a JUnit test class,
with the same name as the HTML file containing the test.
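To make this concrete, here is a minimal sketch (the file names, the greet method and the Greeter domain class are hypothetical, not taken from the Concordion distribution). An instrumented sentence in Greeting.html might read:

    <html xmlns:concordion="http://www.concordion.org/2007/concordion">
    <body>
      <p>
        When we greet a user called
        <span concordion:set="#name">Bob</span>,
        the system replies
        <span concordion:assertEquals="greet(#name)">Hello Bob!</span>.
      </p>
    </body>
    </html>

and the matching fixture is an ordinary JUnit class that simply delegates to the domain code:

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    @RunWith(ConcordionRunner.class)
    public class Greeting {
        public String greet(String name) {
            // Greeter is a hypothetical domain class; the fixture only forwards the call
            return new Greeter().greetingFor(name);
        }
    }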
will not ruin hidden test instrumentation, but I would still like to see
a proper test management tool for business people and testers.
Exposing tests in an IDE is great for developers, as they can debug
and troubleshoot tests easily, but testers and business people need to
view tests at a higher level.
Concordion has a really good web site with a helpful tutorial and lots
of examples. For more information on this tool, and to download
links and examples, see http://concordion.org.
JBehave
JBehave is the original behaviour-driven development tool, written
for Java by Dan North, Liz Keogh, Mauro Talevi and Shane Duan.
Behaviour-driven development uses a specific format of test scripts
specified in the given-when-then format (see the section Working
with business workflows on page 57). Here is a typical JBehave test
script, taken from the JBehave source code repository:
Given a stock of prices 0.5,1.0
When the stock is traded at 2.0
Then the alert status should be OFF
When the stock is traded at 5.0
Then the alert status should be OFF
When the stock is traded at 11.0
Then the alert status should be ON
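On the automation side, the instrumentation for such a scenario lives in the fixture code. A rough sketch of a matching steps class is shown below (it follows the current JBehave annotation style, which may differ from the version described here; the Stock class and its methods are hypothetical):

    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import static org.junit.Assert.assertEquals;

    public class StockAlertSteps {

        private Stock stock; // hypothetical domain class that raises the alert

        @Given("a stock of prices $prices")
        public void aStockWithPrices(String prices) {
            stock = new Stock(prices); // parse the threshold prices from the scenario text
        }

        @When("the stock is traded at $price")
        public void theStockIsTradedAt(double price) {
            stock.tradeAt(price); // exercise the domain code
        }

        @Then("the alert status should be $status")
        public void theAlertStatusShouldBe(String expectedStatus) {
            assertEquals(expectedStatus, stock.alertStatus());
        }
    }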
language from tests and assertions to what the domain code should
do, emphasising the fact that a test in this case is a specification. Unlike
Concordion, the test script instrumentation is in the fixture class.
This means that test scripts are unaffected by code refactoring, but
it also means that you have to write a lot more code than with
Concordion. Unlike Concordion, there is no standard way to specify
lots of related examples with table attributes. You can, however, extend
the way JBehave parses scenarios by implementing your own converter
to handle such cases. Currently JBehave developers are working on
a standard extension that would allow you to specify related examples
as a table under the "other examples include" heading.9
You can download JBehave from http://jbehave.org. If you are not
using Java, there are similar tools for different platforms and
programming languages. In fact, the Ruby version, RSpec, seems to
be a lot more popular among Ruby programmers than JBehave is in
the Java community. At the Agile 2008 conference a new Ruby BDD
tool called Cucumber was announced, with two very interesting
features. The first is support for internationalisation, making it easier
to apply BDD in another language besides English. The second
interesting innovation is the ability to supply additional combinations
of inputs and expected outputs for a scenario using a simple table
syntax appended to the classic RSpec scenario template, very similar
to a classic FIT column fixture. This could make writing and maintaining BDD tests much easier as you would no longer have to copy
and paste scenario descriptions to use different arguments.
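The effect is roughly like the following sketch (a hypothetical feature written in the style Cucumber introduced; exact keywords can vary between versions):

    Scenario Outline: alert status depends on the traded price
      Given a stock with an alert threshold of 10.0
      When the stock is traded at <price>
      Then the alert status should be <status>

      Examples:
        | price | status |
        | 5.0   | OFF    |
        | 11.0  | ON     |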
TextTest
TextTest is a tool for managing system behaviour changes written
by Geoff Bache, and it stands out from all the other tools mentioned
here because of its way of working and its unique feature set. Geoff
Bache explains10 it as a move away from trying to write a sequence
of assertions that entirely describes the correct behaviour of the
program:
Systems change over time and so does their behaviour: why not
have a testing tool that assumes this as normal? This view seems
to me entirely in line with an agile view of software
development, though it is a bit of paradigm shift if you're used
to testing with assertions and APIs [...] I don't see this as
different or mutually exclusive to testing in development (if a
program isn't in development, there isn't so much point testing
it). My attitude is that however incomplete the program, it does
something today that is better than nothing: so we manage the
changes in its behaviour and make sure that we always move
forwards, never backwards, when we change the code.

9 http://www.testingreflections.com/node/view/7339
10 in a private e-mail
Unlike FIT and Concordion, which inspect the system using an
integration fixture API, TextTest inspects the system by analysing
log files and other outputs generated during test runs and
comparing them to expected results. Because of this, TextTest is much
better for verifying business workflows and integrations. FIT and
Concordion can only confirm the results of a function call through
the external API. TextTest can confirm that the correct process was
executed during the function call, by analysing individual steps and
verifying the flow of information.
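As an illustration (a purely hypothetical log, not taken from the TextTest documentation), the saved expected output for an order-processing run might contain:

    order received: 5 books, customer type VIP
    delivery charge calculated: 0.00
    payment authorised
    confirmation e-mail queued

A test fails when the log written by a new run differs from this saved version, so any change in the workflow, intended or not, shows up as a difference to review.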
TextTest also supports multiple active versions of the same test and
integrates with xUseCase tools for recording and replaying user
interface tests. xUseCase libraries differ from standard UI recorders
because they associate recorded actions with use case command names
rather than captured UI system events. Use case commands are named
in domain terms, as suggested by business people, testers and
developers. Test scripts can then be written using these domain-specific names, not by capturing system events. This allows us to focus
user interface test scripts on what the system is intended to do in
terms of the domain, without getting involved in implementation
details. The result is that xUseCase scripts are much easier to understand and manipulate and are much closer to the domain. This is
similar to the domain-specific testing language approach of Mickey
Phoenix and SolutionsIQ, explained in the section Don't describe
business rules through the user interface on page 91. TextTest integ-
11 http://sourceforge.net/projects/pyusecase/
12 http://jusecase.sourceforge.net/
Selenium
Selenium is a web browser automation tool that can manipulate and
validate HTML document object model elements. The three most
common uses of Selenium are automating web user interface tests,
verifying web site compatibility with various browsers and platforms
and periodically checking whether a site is online and works correctly.
In the context of the topic of this book, Selenium is interesting as an
engine that drives acceptance tests through the user interface.
The Selenium project started in 2004 as a JavaScript functional test
runner created by Jason Huggins, Paul Gross and Jie Tina Wang,
who were working for ThoughtWorks at the time. It is an open source
product, actively maintained by ThoughtWorks, with a large
community of supporters. Since 2004, it has significantly evolved in
terms of functionality and tool support. In fact, additional tool
support and integration options are two of the best features of
Selenium.
The core of Selenium is written in JavaScript and HTML and it is
compatible with all major browsers, including Internet Explorer and
Firefox. It runs on Windows, Linux and Mac. Selenium works by
embedding control scripts into a browser window frame and automating another window frame with the scripts. An example Selenium
screen is shown in Figure 10.7. The top half of the screen is the
Selenium control interface: the top-left corner shows the current test
suite, the middle part shows the current test and the top-right corner
contains the panel that controls tests. The bottom part of the screen
shows the web site under test.
Selenium test scripts are written as tables, with table rows representing
test steps. The first cell in a row contains a command keyword, the
second and third cells contain command arguments. This test language
is known as Selenese. For an example that submits a search form and
validates the output, see Figure 10.8. Selenese has many different
keywords to simulate clicks, type text into fields, load pages, evaluate
text in elements, check for alerts and manipulate and inspect HTML elements.
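As an illustration, a Selenese script for a simple search might look like the table below (the page path and element names are hypothetical; the commands themselves are standard Selenese):

    | open              | /search                  |                  |
    | type              | searchTerms              | acceptance tests |
    | clickAndWait      | submitSearch             |                  |
    | verifyTextPresent | Results for your search  |                  |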
Selenium has a visual test automation tool that supports record-and-replay
operations, called Selenium IDE (see Figure 10.9). The tool is
aimed at less technical people who do not want to write Selenese
scripts by hand and works as a Firefox plug-in (Internet Explorer is
not currently supported). The Selenium IDE comes with good
WebTest fixtures are a library of additional FIT fixtures for Java and
.NET which I wrote to allow business analysts and testers to specify
Selenium tests without actually learning the Selenese language. The
fixtures convert English sentences such as "user clicks on Cancel" and
"Page contains text Hello Mike" into Selenium Remote Control
commands, enabling non-programmers to write and maintain user
interface tests.
Stuff to remember
There are many different acceptance testing tools today
on the market.
Start by evaluating FIT/FitNesse, Concordion and JBehave
for business domain testing.
Selenium, CubicTest and StoryTestIQ might help you with
UI tests.
To maintain UI tests more easily, use page objects or
domain-specific keywords to describe actions in a business language.
TextTest can be an interesting choice for inspecting
workflows and building up regression test suites for
existing systems.
Chapter 11
Tools of tomorrow
Tool support for agile acceptance testing is good enough today for
us to get our work done, but not good enough to make that work
really efficient. Early adopters of the practice were forgiving and ready
to overlook problems with tools, working around them or implementing their own solutions. However, for this practice to become widespread we need better tools that offer a shorter learning curve and
allow us to do our jobs more efficiently. As the practice evolves, the
tools need to evolve too. Here are some ideas that I would really like
to see in our tools in the near future.
Domain-specific languages
The idea of domain-specific languages (DSLs) seems to be gaining a
lot of momentum currently. The basic idea is to create a small, specialised
programming language, often based on action-oriented
keywords, focused on a particular business domain and even tailored
to the needs of a particular customer. We then use this mini-language
to describe business processes and constraints. Business people and
domain experts can then effectively participate in programming:
even if they don't want to write in the language themselves, they will
at least be able to read and understand it. So domain-specific
languages can improve communication in a project. Another rationale
behind this idea is that domain-specific language constructs allow us
to describe the business problem more efficiently than with general-purpose languages. The mini-language is ultimately implemented in
a general-purpose language such as Java or C#, so that process definitions written in the business language can be integrated with the
infrastructure.
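As a small sketch of the idea (the free-delivery rule and all the names below are hypothetical, not an example from this book), an internal DSL in Java can be as simple as a fluent builder whose vocabulary mirrors the business language:

    public class OrderDsl {

        private final int books;
        private boolean vipCustomer;

        private OrderDsl(int books) {
            this.books = books;
        }

        // entry point of the mini-language
        public static OrderDsl anOrderOf(int books) {
            return new OrderDsl(books);
        }

        public OrderDsl placedByAVipCustomer() {
            this.vipCustomer = true;
            return this;
        }

        // hypothetical business rule: VIP customers get free delivery for five or more books
        public boolean qualifiesForFreeDelivery() {
            return vipCustomer && books >= 5;
        }

        public static void main(String[] args) {
            // reads close to the language a domain expert would use in a specification workshop
            boolean free = anOrderOf(5).placedByAVipCustomer().qualifiesForFreeDelivery();
            System.out.println("free delivery: " + free);
        }
    }

The point is not the particular syntax but that the vocabulary of the code mirrors the vocabulary of the business, which is exactly what agile acceptance tests aim for.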
This idea is very close to the concept of writing acceptance tests so
that business people can understand them. In agile acceptance testing,
the acceptance tests are written in business language and serve as a
http://video.google.com/videoplay?docid=-6298610650460170080&hl=en
See [23]. Also see http://www.jennittaandrea.com/wp-content/uploads/2007/04/envisioningthenextgenerationoffunctionaltestingtools_ieeesw_may2007.pdf.
concerned just with the current state of tests, not so much about the
previous test runs. Testers are typically interested in the results of
earlier test executions and previous versions of tests as well.
I'm not so sure that all these roles can be reconciled in a single user
interface, and I'd like to see multiple user interfaces that focus on
providing the best individual benefits for particular roles. For example,
I would expect better integration with traditional integrated development environments for programmers, so that they can debug and
troubleshoot code under test while they are implementing parts of a
specification or bug-fixing. Testers, on the other hand, should not
have to run tests from a programmer IDE, but from an interface that
gives them a quicker global overview with fewer details. Business
people should be able to write and review tests in a visual WYSIWYG
environment similar to a word processor or spreadsheet. I would like
future tools not to attempt to be all things to all men, but to provide
all the features that a particular user needs to do his job efficiently,
while allowing others to use different tools.
Even though some IDEs such as Eclipse offer a rich client application
development framework, I still think that applications based on IDE
frameworks are too technical for business people to use, and that we
need a proper business view of the tests. After all, this is why FitNesse
is still the most popular test management system for FIT tests in spite
of all its problems.
http://sourceforge.net/projects/profit
http://studios.thoughtworks.com/twist-agile-test-automation/
Better editors
Better editors
During the second Agile Alliance Functional Testing Tools workshop,
Elisabeth Hendrickson called for better editor tools rather than just
support for refactoring.6 Her idea was that future tools must reduce
the cost of changing tests, allowing us to efficiently reconcile new
expectations with those already expressed as acceptance tests. Better
editors will reduce the cost of change and facilitate change in tests
rather than impede it. Elisabeth Hendrickson pointed out two key
features that future editors should have:
They should allow people to quickly identify mismatches between
expectations in their minds and those written down as acceptance
tests.
They should make it easy to change tests to reconcile mismatches
easily and quickly.
With better editors like these, the lack of support for carrying through
refactoring into acceptance tests would not be such a problem, because
5 http://gojko.net/2008/08/12/fit-without-fixtures/
6 http://video.google.com/videoplay?docid=8565239121902737883&hl=en
http://www.exampler.com/blog/2007/07/13/graphical-workflow-tests-for-rails/
Chapter 12
Effects on business
analysts
During the QCon 2007 conference in London, Martin Fowler and
Dan North organised a session called The Crevasse of Doom,1 to
discuss the communication gap between business users and software
developers. Martin Fowler called this gap the biggest difficulty we
face in software development today. He divided the strategies for
crossing the gap into two groups: the ferry and the bridge.
The ferry is the traditional way to think about a communication
problem. Fowler said that developers are often perceived to have poor
communication skills, dress in T-shirts and talk in techno-jargon, so
business people who dress in smart suits and talk in their business
jargon think that they could not possibly communicate with
developers. Professional intermediaries are hired as translators, who
ferry information between the two groups. The problem with this
approach is, as Fowler put it, that it is not a high-bandwidth form of
communication and it makes no difference to the traditional gulf
between developers and business people.
The bridge approach provides a structure that allows people to cross
the gap directly, facilitating direct communication between the two
groups. Instead of having professional translators, we work on
fostering communication and building links between the two worlds,
making a single world out of them. The bridge approach provides a
much higher bandwidth and less distortion. It also offers an opportunity for people with different skills to work together to find better
ways of solving business problems.
1 http://www.infoq.com/presentations/Fowler-North-Crevasse-of-Doom
helps significantly to discover the real business rules and to root out
incorrect assumptions, so the customers will be less likely to change
their minds later.
Not only is doing tests part of a business analyst's job, but it makes
the rest of your job much more effective.
But they will only look at the tests and not read the requirements...
There is no traceability
When a single person acts as a gateway for all the requirements and
everything is put into a single document, it is relatively easy to keep
track of who ordered what and when, especially if the document is
signed off before development starts. Because of the collaborative
model of agile acceptance testing and because a single phase of
development can be broken down into hundreds of small examples
and tests, some people are concerned about the traceability of
requirements and specifications. After all, implementing a requirement costs money and someone ultimately has to approve and pay
for it.
I don't like sign-offs and prefer to work closely with the customers
on building the system that they need. However, if you need to get
official approval, treat the formal set of acceptance tests as the
specification and get a sign-off on it. Most test automation and
management tools allow you to put arbitrary comments on tests, so
you can include the name of the person who approved a particular
test and the date when it was approved. If you implement strict
ownership, which I think is a bad idea anyway, you can also list the
name of the person who owns a particular test.
When user stories are used for project planning, acceptance tests are
derived from stories and you can implement traceability by linking
individual tests to a story, then linking stories to customer benefits
and project goals. You can track who requested a story and when on
individual story cards, or you can put this information into an
acceptance test hierarchy.
If the tests are stored in a version control system, as I recommend in
the section Keep tests in a version control system on page 135, then it
is also very easy to go back and inspect who changed a test, when and
why. Most source version control systems give you this functionality
out of the box.
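For example, with the common command-line clients you might run something like this against a test file (the file name is hypothetical):

    svn log acceptance-tests/FreeDelivery.html
    git log -p acceptance-tests/FreeDelivery.html

Either command shows who changed the test and when, and the commit messages (or the -p diff output) show why and how.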
Stuff to remember
The role of the business analyst is changing from that
of a conveyor of information to that of a facilitator who
enables people to share knowledge directly.
The business analyst is a great choice for the facilitator
of the specification workshop.
Working with acceptance tests makes the rest of the
business analyst's work easier and more effective.
You can still keep your traditional documents, but use
acceptance tests as the authoritative specification.
You can modify the specification (acceptance tests) when
the development starts without causing great problems
for anyone.
There should be no handover of tests from business
analysts to developers or testers.
Tests don't have to be perfect from the start.
Getting it right is not your sole responsibility. Developers
and testers share responsibility for the specifications.
Chapter 13
Effects on testers
In 2005 I worked with a development organisation that had a remote
testing department. The testing team was considered completely
ineffective since many problems passed undetected through them,
and developers generally considered talking to the testing department
as a waste of time. To be fair to the testers, they were expected to grasp
instantly whatever the developers produced and test it without
knowing the first thing about it. Nobody wanted to spend any time
explaining anything to them and there was simply no feedback in the
whole process. I think that having a detached testing team simply
does not work, nor does getting testers involved at the end. In the
words of Philip Crosby, "Quality has to be caused, not controlled."
This does not mean that there is no place for testers in agile processes
(far from it), but that the role of the tester has to change. Agile
acceptance testing is a great manifestation of this.
http://video.google.com/videoplay?docid=-5105910452864283694
As you can add tests for any bugs found in production to the acceptance test suite, you can be sure that the bugs will not resurface, or at
least that the developers will find out about resurrected bugs and fix
them before you need to get involved.
Testers, on the other hand, are specialised in this. By giving you more
time to do this, agile acceptance testing empowers you to perform
better exploratory testing and provide much better feedback from
your experiments.
Having more time for experimenting also allows you to refine and
expand the suite of regression tests based on what you learn during
exploratory testing and to suggest more possible problems in workshops for iterations to come. So acceptance testing, test automation
and exploratory testing are linked together in a positive reinforcing
loop.
It's cheating
A very strange argument I have heard on a few occasions against agile
acceptance testing is that it is effectively cheating. The argument goes
that with agile acceptance testing, testers are required to tell the
developers exactly what they are going to check, so the developers can
implement the software to pass the tests upfront.
I honestly struggle to understand why this is a problem. Software
development, at least in my eyes, is a cooperative activity and not a
competition. If the developers don't put the problem into the system
in the first place, that is more efficient than you catching the problem
later and sending it back to them for fixing. It does not matter who
put the bug in, who caught it and who fixed it. Agile acceptance testing
asks us not to create the bug in the first place.
Unfortunately, agile acceptance testing has no chance of success in
companies where people earn money on metrics such as bugs found
and bugs resolved. If your salary or bonus depend on such statistics,
you are not going to like agile acceptance testing one bit. On the other
hand, if you consider improving the quality of the software as your
job, not catching bugs, then this practice can help you greatly to do
it more efficiently.
The idea of giving developers tests upfront is similar to the idea of
publicising where speed cameras are on the motorway. The goal of
introducing speed cameras is to reduce accidents, not to charge people
for speeding. If you tell everyone where the cameras are and if there
are enough cameras on the most accident-prone parts of the road
system, people almost always stop speeding there. Testers on an agile
team should ensure that there are enough speed cameras and that
they are placed on all parts of the road that are likely to cause accidents.
http://skillsmatter.com/podcast/home/understanding-qatesting-on-agile-projects
http://members.microsoft.com/careers/careerpath/technical/softwaretesting.mspx
http://www.testingreflections.com/node/view/7280
Stuff to remember
The new primary role of testers is to help people avoid
problems, not to discover them.
Chapter 14
Effects on developers
Although the role of developers changes with the introduction of agile
acceptance testing, I find that they are generally enthusiastic about
the idea and do not put up any resistance. From a technical
perspective, most developers see it just as an extension of test-driven
development principles, and TDD is now becoming well established
among software developers. However, developers are sometimes overly
enthusiastic and forget about the requirement to collaborate. They
take over more responsibility than they should, losing the benefits of
shared understanding and communication.
all the tests pass, we know that the work is done. If a previously passing
test fails, we have broken something.
code and maintain it. On the other hand, this should produce a much
more accurate and clear target for development, reducing later rework
caused by misunderstandings or incomplete specifications. From my
experience, the overall result is much less work for developers rather
than more work, although in the beginning this new work does seem
to be additional.
One potential issue with this is that the benefits of agile acceptance
testing are not that obvious or instant. You won't necessarily notice
straightaway that you have less rework to do, since there is usually a
time lag between delivery of code and requests for rework arriving.
The reductions in bug numbers and change requests are also hard to
spot when you are looking only at the current iteration. To truly
evaluate the benefits of agile acceptance testing, you have to look at
the way you work from a wider perspective and compare the whole
process after the introduction of acceptance testing with the way it
was before.
from the code-editing window, shortening the loop between developing and verifying a piece of code. Acceptance tests cover much larger
parts of the code, so it is not so easy to work out exactly what they
relate to, and current tools often lack good IDE integration. One of
the biggest complaints is that it is often much harder to debug code
under acceptance tests than with a unit test. The biggest benefit of
agile acceptance testing is improved communication and collaboration. This easily outweighs the importance of close integration and
easy debugging during test runs.
The only thing I can offer as a consolation is that tools are evolving.
One of the most important areas where tools need to improve is to
provide multiple views for different people, so that developers can
keep working from an IDE and business people can use something
more convenient for them. For the time being, tools are a big problem
for everyone, but the point of agile acceptance testing is not in tools
but in the conversation.
Stuff to remember
You cannot write acceptance tests yourself.
You need to participate in domain discussions.
Specification workshops give you a good chance to
discuss edge cases and inconsistencies with domain
experts before development.
Acceptance tests do not provide instant feedback as unit
tests do. This is because they deal with the overall picture
and business rules, not code units.
You need both unit tests and acceptance tests.
Acceptance tests exist to facilitate communication
between people.
Appendix A.
Resources
Books and articles
[1] Tracy Reppert. Copyright 2004. Software Quality Engineering.
Better Software Magazine: Don't just break software, make
software. July/August 2004.
[2] Frederick P. Brooks. Copyright 1995. Addison-Wesley Professional. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition). 0201835959.
[3] Gerald M. Weinberg and Donald C. Gause. Copyright 1989.
Dorset House Publishing Company. Exploring Requirements:
Quality Before Design. 0932633137.
[4] Gary Klein. Copyright 1999. The MIT Press. Sources of Power:
How People Make Decisions. 0262611465.
[5] Bertrand Russell. Copyright 1985. Open Court Publishing
Company. The Philosophy of Logical Atomism. 0875484433.
[6] James Surowiecki. Copyright 2005. Addison-Wesley Professional. The Wisdom of Crowds. 0385721706.
[7] Tom Davenport. Copyright 2003. CXO Media Inc. CIO
Magazine: A Measurable Proposal. June 2003.
[8] Robert C. Martin and Grigori Melnik. Copyright 2008. IEEE
Software: Tests and Requirements, Requirements and Tests: a
Mobius Strip. January/February 2008. page 54.
[9] Rick Mugridge and Ward Cunningham. Copyright 2005.
Prentice Hall PTR. Fit for Developing Software: Framework
for Integrated Tests. 978-0321269348.
239
Resources
[10] David Lorge Parnas, Ryszard Janicki, and Jeffery Zucker. Copyright 1996. Springer Verlag. Relational Methods in
Computer Science: Tabular representations in relational
documents. pages 184-196. 978-3-211-82971-4.
[11] Mike Cohn. Copyright 2004. Addison-Wesley Professional.
User Stories Applied: For Agile Software Development. 9780321205681.
[12] Mike Cohn. Copyright 2005. Prentice Hall PTR. Agile Estimating and Planning (Robert C. Martin Series). 0131479415.
[13] Eric Evans. Copyright 2003. Addison-Wesley Professional.
Domain-Driven Design: Tackling Complexity in the Heart of
Software. 0321125215.
[14] Karl Weick. Copyright 1984. Jossey-Bass. The Executive Mind:
Managerial thought in the context of action. 0875895840.
pages 221-242.
[15] Shigeo Shingo. Copyright 1986. Productivity Press. Zero
Quality Control: Source Inspection and the Poka-Yoke System.
0915299070.
[16] Mary Poppendieck and Tom Poppendieck. Copyright 2003.
Addison-Wesley Professional. Lean Software Development:
An Agile Toolkit. 0321150783.
[17] George L. Kelling and James Q. Wilson. Copyright 1982. The
Atlantic Monthly Group. Atlantic Monthly: Broken Windows.
March 1982.
[18] Andrew Hunt and David Thomas. Copyright 1999. Addison-Wesley Professional. Pragmatic Programmer: From Journeyman to Master. 978-0201616224.
[19] Ellen Gottesdiener. Copyright 2002. Addison-Wesley Professional. Requirements by Collaboration: Workshops for Defining
Needs. 0201786060.
Online resources
Here are the links to all the online resources mentioned in the book.
You can find all these links and more on the accompanying web site
http://www.acceptancetesting.info.
Presentations
Gilles Mantel: Test Driven Requirements Workshop from Agile
2008. http://testdriveninformation.blogspot.com/2008/08/materialof-tdr-workshop-at-agile-2008.html
Michael Phoenix: Domain Specific Testing Languages from Agile
2008.
http://www.solutionsiq.com/agile2008/agile-2008domain.php
Naresh Jain: Acceptance Test Driven Development.
http://www.slideshare.net/nashjain/acceptance-test-driven-development-350264
Articles
Brian Marick: An Alternative to Business-Facing TDD.
http://www.exampler.com/blog/2008/03/23/an-alternative-tobusiness-facing-tdd/
Brian Marick: My Agile testing project. http://www.exampler.com/old-blog/2003/08/21/
Dan North: Introducing BDD. http://dannorth.net/introducingbdd
Gojko Adzic: Fitting Agile Acceptance Testing into the development
process. http://gojko.net/2008/09/17/fitting-agile-acceptance-testing-into-the-development-process/
Jennitta Andrea: Envisioning the next generation of functional testing tools. http://www.jennittaandrea.com/wpcontent/uploads/2007/04/envisioningthenextgenerationoffunctionaltestingtools_ieeesw_may2007.pdf
Tools
Concordion: http://www.concordion.org
Cubic Test: http://www.cubictest.com
FitNesse main web site: http://fitnesse.org
FitNesse community site: http://www.fitnesse.info
Green Pepper: http://www.greenpeppersoftware.com/en/products/
JBehave: http://jbehave.org
JUseCase: http://jusecase.sourceforge.net/
PyUseCase: http://sourceforge.net/projects/pyusecase
Selenium: http://selenium.openqa.org/
StoryTestIQ: http://storytestiq.solutionsiq.com
Twist: http://studios.thoughtworks.com/twist-agile-test-automation/
Text Test: http://www.texttest.org
Mailing Lists
Agile acceptance testing (the companion mailing list to this book):
http://groups.google.com/group/agileacceptancetesting
Agile Testing: http://tech.groups.yahoo.com/group/agile-testing
Agile Alliance Functional Testing Tools: http://tech.groups.yahoo.com/group/aa-ftt/
Concordion: http://tech.groups.yahoo.com/group/concordion/
FitNesse: http://tech.groups.yahoo.com/group/fitnesse/
Index
Symbols
.NET tools 176
A
A-7 project 54
abnormal use 51
abstract requirements 103
acceptance criteria 75
acceptance test-driven development
37
acceptance tests
and refactoring 110, 197
and user stories 169
as business resource 120
as documentation 119
as focus for development 33, 211
as specification 75, 103
as workflows 79
changing 106
code coverage 104, 235
common problems 126
communication technique 234
compared to unit tests 110, 113,
115
converting examples to 87
customer-specific 85
design hints 107
executing 115
facilitating changes 34
handover 98
incomplete 233
maintaining 126, 210
not complete verification 100
parameterisation 85
running 115, 121
saving time 209
self-explanatory 84
trust 93
what they are not 97
who should write 95
written by developers 129, 156,
235
action-oriented tests 129
affiliate advertising example 45, 59
Agile 2008 15, 25, 27, 43, 93, 183,
186, 198
agile acceptance testing
defined 31
introducing 148
not a development methodology
141
not a silver bullet 227
testing 34
Agile Alliance Functional Testing
Tools workshop 195, 199
agile development 4, 142
agile testing xxii
Amazon EC2 Cloud 189
ambiguity
manoeuvering space 212
polls 65
Andrea, Jennitta 195, 200, 201
anti-goals 71, 105
API documentation 119
architectural vision 107
Arnold, Ken 161
Asch, Solomon 20
Atlassian 178
Auer, Ken xv
Auftragstaktik 29
B
B-2 bomber example 21
Bache, Geoff 183, 185, 186
BandXI 178
Bay of Pigs 20
BDD (see behaviour-driven development)
Beck, Kent xxi
behaviour-driven development xxi,
38, 81
JBehave 181
user story template 160
workflow template 57
benefits
for business analysts 206
for developers 230
for testers 218
user stories 162
bet placement workflow example
55
big documents 6
black-box requirements testing xxi,
25, 97
Bluetooth example 19
brainstorming sessions 165
bridge communication strategy 205
broken window syndrome 122
Brooks, Fred 3
brushing teeth example 43
bugs
discovering 124, 223
introducing 121
user interface 152
verification 125
bus effect 120
business analysts
as customer proxies 62
benefits 206
challenges 209
new role 206
role name xvii
testing role 225
time 210
traditional role 206
uninterested 63
business-facing tests xxi
business goals 17, 167
business rules 32, 211
conflicts 121, 208
scattered 215
tests 132
verifying 220
business workflows 80
C
C++ tools 176
calculation tests 127, 128
card-conversation-confirmation
xxi, 164
cards for user stories 164
cargo cults 155
Carpenter, Floyd L. 22
changes
agile development 4
costs 4, 212
during development 125
facilitating 34
introducing 123, 143
propagating effects of 196
Chinese whispers 6
Clancy, Bob 11
D
Dassing, Andy 113
database dependencies 130
Davenport, Tom 22
defects 124
deliverables 166
delivery plan 168
describe-demonstrate-develop xxii,
142
design
domain-driven 109
evolving 109
feature-driven 109
hints 107
improving 119
model-driven 109
detail in tests 84
developers
benefits 230
challenges 232
collaboration 229
new role 229
perceptions of 205
repeating mistakes 218
role name xvii
suggesting new features 105
time 232
developer-testers 226
development
cooperative 222
progress 208
target 231
direct domain mapping 198
discount offers example 88
distilling specifications 80
document object model 188
DoFixture 198
DOM (see document object model)
domain
direct mapping 198
language 66, 111, 134
understanding 219, 229
domain-driven design 67, 109, 112,
198
domain knowledge
of testers 224
sharing 64
transfer 61
domain-specific languages 193
domain-specific testing languages
93, 184
virtual machine 195
dominoes example 15
DoubTech 178
DSL (see domain-specific languages)
Duan, Shane 181
E
EC2 Cloud 189
Eclipse IDE 190, 196
plug-ins 178
edge cases 33, 47, 50, 231, 233
numerical 51
suggested by developers 49
technical 113
editor tools 199
Eisenhower, Dwight 64
epics 165
equivalence hypothesis 26
Evans, Eric xxii, 67
evolving language 134
example
affiliate advertising 45, 59
B-2 bomber 21
bet placement workflow 55
Bluetooth 19
brushing teeth 43
credit card transaction 57
CRUD 161
discount offers 88
dominoes 15
flower shop 52, 82
foreign currency exchange 14
fraud detection 49, 166
free delivery 52, 82, 84
gold-plating 12
media clips 49
poker 14
printing in Java 28
real-time reporting 30
risk profile 66
rounding to two decimals 14
F
facilitators 70, 152
fake objects 115
fear of blame 212
feature-driven design 109
feedback exercises 65
feedback from tests 237
ferry communication strategy 205
FIT xxii, 36, 54, 88, 96, 173, 194,
196
table 214
user interfaces 91
FIT.NET 198
FitClipse 178
FitLibrary xxii, 174, 198, 201
FitNesse xxii, 36, 96, 176, 196
user interfaces 91
version control system 200
web sites 179
five-pointed star 10
Fixture Gallery 178
fixtures 110
Concordion 180
domain 198
domain-specific languages 194
FIT 175, 179
WebTest 191
flexibility during development 125
Flowers, Jay 178
flower shop example 52, 82
foreign currency exchange example
14
Fowler, Martin 28, 109, 205
Framework for Integrated Testing
(see FIT)
fraud detection example 49, 166
free delivery example 52, 82
G
Gause, Donald xxi, 4, 9, 25, 51, 64,
65, 97
Geras, Adam 195
given-when-then 57, 81, 181
glue code 110
gold-plating example 12
good fight 154
Google groups 155
Google Tech Talks 217
Google Test Automation conference 120
Gottesdiener, Ellen 153, 206
Green Pepper 178, 200
Gross, Paul 187
groupthink 20
H
handover material 98, 212
Hatoum, Andy 145
Hendrickson, Elisabeth 199
Hewlett-Packard 22
Huggins, Jason 187
human-readable tests 87
Hunt, Andrew 122
defined xvii
relationships 221
inconsistencies 104
incremental development 143
infrastructure constraints 163
interdependent tests 131
intermittent failures 130
iterations 76, 142, 144
planning 50
specification workshops 48, 63
iterative development 143
J
Jain, Naresh 77
jargons 67
Java IDE 178
Java tools 176, 179, 181
JBehave 181
Jeffries, Ron xxi, 25, 164
JetBrains 123
JIRA 178
job security 223
JUnit 179, 182
JUseCase 185, 194
just-in-case code 33, 104
K
Kelling, George 122
Keogh, Liz 181
Kerievsky, Joshua xxi
Klein, Gary 9, 30, 65, 70
L
I
IDE test management tools 180,
182, 196, 199, 236
implementation team 97
lean
production process 217
software development xxii
legacy systems 185
M
mailing lists 155
Mantel, Gilles 15
Marcano, Antony xxii, 226
telephone game 6
Marick, Brian xxi, 37, 151, 201
Martin, Micah 176
Martin, Robert C. 26, 176
McConnell, Steve 156
media clips example 49
Melnik, Grigori 26
mentors 154
metrics 223
Microsoft tester-developers 226
Miller, Jeremy D. 178
Minesweeper Kata screencast 186
Mission Command 29
mistake-proofing 86
misunderstandings 8
costs 14
mock objects 115, 130
model-driven design 109
Mugridge, Rick xxii, 54, 80, 120,
174, 178, 198, 201
multiple active versions 200
N
Neumann, John von 142
neutral facilitators 153
new features 105
normal use 51
North, Dan xxi, 28, 38, 181, 205
O
Object Mentor xxii, 176
obviousness 9
OmniGraffle 201
Ono, Taichi xxii
open issues
at workshops 72
automating tests 89
order of execution 131
organising tests 134, 200
P
Page Object 95
paralysis by analysis 160
parameterisation of acceptance tests
85
parameters with same values 128
Parnas, David 54
Pearl Harbor 20
peer pressure experiment 20
performance tests 90, 117
not automated 214
Peterson, David xxii, 80, 84, 179
phased implementation 151
Phoenix, Mickey 43, 93, 184
placeholders 169
planning poker 65
Platt, Lew 22
poker example 14
Poppendieck, Mary xxii, 105, 170,
217
Poppendieck, Tom xxii, 105
pre-planning 146
processing workflows 54, 79
product owners xvii, 15
professional writers 97
Profit 197
progress tracking 208
project planning 165
changing the plan 169
Prussian military tactics xxii, 29
Python tools 176
PyUseCase 185, 194
Pyxis Software 178
Q
QCon 2007 28, 205
quality 217
building in 219
R
Rady, Ben 104
random values 130
realistic examples 48, 142
getting the most out of 59
specification workshop output
63
verifying 75
real-time reporting example 30
real-world examples 32
reasonable use 51
record-and-replay 99, 188
red-green-refactor 110
reducing complexity 133
refactoring 109, 134, 196
regression tests 119, 120, 123, 130
building gradually 125
don't disable 121
moving to acceptance tests 124
tools 185
relationships 221
remembering specifications 6
requirements
already a solution 18
black-box tests 25, 97
checking 25
documents 43, 64, 103, 210
eliciting 209, 229
most important information 25
out-of-date 119
relationship with tests and
examples 27
tests 51
traditional 103
reusing parts of tests 111
risk profile example 66
roles xvii
user interfaces 195
rounding to two decimals example
14
RSpec 183
Ruby tools 183, 201
running tested features 104
Russell, Bertrand 12
S
scalability tests 90, 117
not automated 214
scenarios
BDD 214
JBehave 182
scope of project 160, 166
Scott, Mike 50, 104, 124, 225
scripts 133
Scrum xxii
ScrumMaster as facilitator 153
SDET 226
search system example 114
security probing 117, 214
Selenese 187
sign-off 75
traditional 7
specifications over scripting xxii
specification workshops 104, 150
compared to design workshops
69
domain language 68
facilitating discussion 207
facilitators 70, 152
feedback exercises 66
for user stories 170
implementation issues 64
in iterations 144
introduced 60
keeping focused 70
open issues 72
output 63
preparing for 61
reviews 72
role of customers and business
analysts 61, 64
running 60
what they are not 68
who should attend 60
speed cameras 223
spine stories 50
Spirit of Kansas, The 21
state machines 127, 128
steps in JBehave 182
Stewart, Simon 95
Stockdale, Mike 198
StoryTeller 178
storytest-driven development xxi,
37
StoryTestIQ 191
string tokenisation example 114
stub objects 115
style wars 161
SuiteFixture 201
Surowiecki, James 20
T
tables 54
compared to natural language 57
tags 201
Talevi, Mauro 181
TDD (see test-driven development)
TeamCity 123
team leader as facilitator 153
team relationships 221
tear-down section 111, 132
technical constraints 163
technical edge cases 113
technical reviews 97
telephone game 5
test automation tools 173, 193
test cases 25
TestDriven.NET 178
test-driven design 106
test-driven development xxii, 34
test-driven requirements 15, 37
test editing tools 199
tester-developers 226
testers
analysis role 225
benefits 218
career paths 224
challenges 222
influence 222
job security 223
new role 217
role name xvii
traditional role 6
testing
agile acceptance testing 34
control 222
U
UAT (see user acceptance testing)
ubiquitous language xxii, 66, 67,
134, 199
consistent use 111
UML 56
unanswered questions
at workshops 72
automating tests 89
understanding the domain 44, 60,
64, 66
unit tests 35, 129, 234
compared to acceptance tests
110, 113, 115
effective 116
feedback 237
purpose 114
unreasonable use 51
unreliable tests 130
usability
testing 117
usability tests 90
not automated 214
US Army xxii, 9, 29
use cases
compared to user stories 161
verifying 50
user acceptance testing 34, 37
user interfaces 214
bugs 152
tailored to roles 195
testing through 91
test robots 220
workflows 94
user interface testing 91, 227
record-and-replay 184
recorders 194
web 187
user stories xxii
and acceptance tests 169
as placeholders 169
benefits 162
business goals 167
cards 164
compared to use cases 161
easily understood 213
first cut 165
introduced 160
verifying 50
US Naval Research Lab 54
V
version control systems 135, 216
video tour example 19
VIP customers example 49, 52
W
Wang, Jie Tina 187
waterfall development 142
WebTest fixtures 191
Weick, Karl 70
Weinberg, Gerald xxi, 4, 9, 25, 51,
64, 65, 97
Wilson, James Q. 122
Windsor knot example 44
wireframes 91
workflows
avoiding combinatorial explosion of cases 55
BDD template 57
business 57, 80
in user interfaces 91
large 81
maintaining tests 80
processing 54, 79
testing 185
user interface 94
visual descriptions 201
with several rules 127
workshops (see specification workshops)
X
xUseCase 184
compared to domain-specific
languages 93
Y
Yahoo groups 155
Z
zero quality control xxii, 86
ZiBreve 178