Exploratory Testing by Maaret Pyhäjärvi
Maaret Pyhäjärvi
This book is for sale at http://leanpub.com/exploratorytesting
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
An Approach to Testing
Exploratory Testing is an approach to testing that centers the person doing the testing by emphasizing intertwined test design and execution, with continuous learning where the next test is influenced by lessons from previous tests. As an approach, it gives us a frame for how to do testing in a skilled way. We use and grow multidisciplinary knowledge for a fuller picture of the empirical information testing can provide. With the product as our external imagination, we are grounded in what is there but inspired to seek beyond it.
We learn with every test about the application under test, about ourselves as the tool doing the testing, about other tools that extend our capabilities, and about the helpful ways we can view the world the application lives in. We keep track of the testing that has been done, the testing that still needs doing, and how this all integrates with the rest of the people working on similar or interconnected themes.
What makes our activity exploratory testing, rather than just another exploratory approach founded in curiosity, is the intent to evaluate. We evaluate, we seek out the information we are missing, and we make sure what we know is real by grounding it in empirical evidence.
Specifying It By Contrast
It’s not a surprise that some folks would like to call exploratory testing just testing. In many ways it
is the only way of testing that makes sense — incorporating active learning is central to our success
with software these days.
To contrast exploratory testing with what people often refer to as manual testing: exploratory testing as a skilled approach encompasses the use of programming for testing purposes. We use our brains, our hands, as well as programs to dig in deep while testing. Sometimes test automation comes in through collaboration, where ideas from exploratory testing drive the implementation of automation that makes sense.
To contrast exploratory testing with what people refer to as scripted testing: exploratory testing isn't driven by scripts. If we create scripts from exploratory testing, we know to use them in an exploratory fashion, remembering that active thinking should always be present even when the script supports us in remembering a basic flow. We've talked a lot about scripted testing as an approach where we separate design (deciding what to test) from execution (making the test happen), and thus lower our chances of active learning targeting the most recent understanding of the risk profile.
Another contrast to programming-centric views of testing comes with embracing the multidisciplinary view of testing, where asking questions like “is my application breaking the law today after recent changes?” is something routinely encoded into exploratory testing, but often out of scope for a group of automators.
Listen to Language
If you hear: “My boss asked me to test search so I searched something that was found and something
that wasn’t and reported I was done” you may be witnessing very low quality exploratory testing. It
relies on following high-level orders to an extent the tester can imagine based on their knowledge.
If you hear: “My colleague wrote me 50 cases of what to try out with search and I tried them and reported I was done”, you may be witnessing testing that isn't exploratory. There is no hint of learning, or of owning responsibility for the quality of the testing that happens.
If you hear: “My boss asked me to test search so I came back with 50 quick ideas of what could
be relevant and we figured out I’d just do 10 of them before we decided if going further was
worthwhile”, you are likely to be witnessing exploratory testing.
Similarly, in the automation first space, if you hear: “I got a Jira ticket saying I should automate this.
I did, and found some problems while at it, and extended existing automation because of the stuff I
learned.”, you may be seeing someone who is exploratory testing.
If you hear: “The Jira ticket said to automate A, B, and C, and I automated A and B, C could not be
automated.”, you may be witnessing testing that isn’t exploratory.
Look at who is in the center: is the tester doing the work learning actively and applying that learning to do the overall testing in a better way? If yes, that is exploratory testing.
As a skilled approach, it is only as good as the skill of the person applying it. With a focus on learning, though, skill may be a problem of today, but it is improved upon every day as exploratory testing is being done. If you find yourself not learning, you most likely are not exploring. With the product as your external imagination, you should find yourself imagining new routes through the functionalities, new users with new perspectives, and relevant information your project teams would be happy to have when making justified decisions on the risks they take. With and without automation, in a good balance.
What is Exploratory Testing - the Programmer Edition
In the new software world, where programmers find themselves taking a lot more responsibility for testing, we need to understand what exploratory testing is, as it extends beyond what most programmers find their tests covering and causes us to talk past each other about what testing is. The tests programmers write give us four things:
• Specification
• Feedback
• Regression
• Granularity
Specification means that the tests you write can be concrete examples of what the program you're about to write is supposed to do. No more fancy words around high-level concepts: give me an example and what is supposed to happen with it. And when moving on, you have the specification of what was agreed. This is what we made the software do; change is fine, but this was the specification you were aware of at the time of implementation.
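As a rough sketch of what such a concrete example might look like, assuming a hypothetical discount rule and an NUnit-style test (the Order class and the numbers are made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class DiscountSpecification
{
    [Test]
    public void OrdersOfHundredEurosOrMoreGetTenPercentOff()
    {
        // The concrete example we agreed on: 120.00 becomes 108.00.
        // This test is the specification the implementation was written against.
        var order = new Order(totalBeforeDiscount: 120.00m); // Order is a hypothetical class
        Assert.That(order.TotalToPay(), Is.EqualTo(108.00m));
    }
}
```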
Feedback means that as the tests are around and we run them (they are automated, of course), the tests give us feedback on what is working and what is not. The feedback, especially when we work with modern agile technical practices like test-driven development, gives us a concrete goal of what we need to make work and tells us whether it is working. The tests help us anchor our intent so that, given the interruptions, we can still stay on the right path. And we can figure out if the path was wrong. This is the test that passes, yet you say it isn't right. What are we not seeing?
Regression means that tests don't only help us when we're building something for the first time, they also help us with changes. And software that does not change is dead, because users using and loving it will come back with loads of new ideas and requirements. We want to make sure that when we change something, we make only the changes we intended. Regression is the risk of changing more than we intended; without tests, it happens without us knowing.
Granularity comes into play when our tests fail for a reason. Granularity is about knowing exactly what is wrong and not having to spend a lot of time figuring it out. We know that small tests pinpoint problems better. We know that automation pinpoints problems better than people. And we know that we can ask people to be very precise in their feedback when they complain that something doesn't work. Not having to waste time on figuring out what is wrong is valuable.
This is not exploratory testing. Exploratory testing often guides this type of testing, but this is testing as artifact creation. Exploratory testing focuses on testing as performance, like improvisational theatre, in a multidisciplinary way beyond computer science. It gives us a different set of four things:
• Guidance
• Understanding
• Models
• Serendipity
Guidance is not just about the specification, but about general directions of what is better and what is not. Some of it you don't yet know how to place in yes/no boxes, but have to clarify with your stakeholders to turn it into something that can become a specification.
Understanding means that we know more about the place of our application in the overall world of things: why people would find it valuable, how others' design decisions can cause us trouble, and how the things that should be true if everything were right are not always so. It helps us put the details we're asked to code into a bigger picture, one that is sociotechnical and extends beyond our own organization and powers.
Models are ways of encoding knowledge, so that we can make informed decisions, understand things more deeply and learn faster the next time or with the next people joining our teams.
Serendipity is the lucky accident, running into information you did not expect to find. The lucky accidents of new information about how things could and do go wrong when using your application emerge given enough time and variety in the use of your application. And knowing early helps you avoid the escalations that wake you up for critical maintenance tasks, because who else would fix all of this but the programmers?
An Approach To Testing
Exploratory testing is an approach to testing. It says whoever tests needs to be learning. Learning needs to change what you are doing. You can't separate designing tests from executing them without losing the learning that influences your next tests. It is an approach that frames how we do testing in a skilled way.
We don’t do just what is asked, but we carefully consider perspectives we may miss.
We don’t look at just what we are building, but the dependencies too, intentional and accidental.
Unsurprisingly, great programmer teams are doing exploratory testing too. Creating great automation relies on exploratory testing to figure out all the things you want to go check. While with exploratory testing we believe that premature writing of instructions hinders intellectual processes, we also know that writing that stuff down as code that can be changed as our understanding grows frees our mental capacity to think of other things. The executable test artifacts give us that peace of mind.
Programmer teams are also doing what I would consider low quality exploratory testing with
limited ideas of what might make a difference in new information being revealed. And that is where
testers often come in — mindspace free of some of the programmer burdens, they focus their energies
elsewhere, raising the overall quality of work coming out of teams.
Finally, I want to leave you with this idea: bad testing is still testing. It just does not give much of any of the benefits you could get from testing. Exploratory testing, with its active learning, transforms bad testing into better testing.
The Two Guiding Principles
Whenever I need to define exploratory testing, I bow to people who have come before me.
Cem Kaner introduced me to the idea of exploratory testing with the first testing book I ever read:
Testing Computer Software. He defines exploratory testing as:
Exploratory software testing is a style of software testing that emphasizes the personal
freedom and responsibility of the individual tester to continually optimize the value of her
work by treating test-related learning, test design, test execution, and test result interpretation
as mutually supportive activities that run in parallel throughout the project.
Elisabeth Hendrickson et al. created an invaluable resource, a Cheat Sheet, to summarize some ideas
common to starting with exploratory testing. She defines exploratory testing as:
Exploratory testing is a systematic approach for discovering risks using rigorous analysis
techniques coupled with testing heuristics.
A lot of the writing on the topic and its techniques is part of the Rapid Software Testing Methodology that James Bach and Michael Bolton have created. They define all testing as exploratory and have recently deprecated the term.
Exploratory testing, to me, emphasizes the difference from the other testing that Julian Harty very clearly points out: “Most of the testing I see is worthless. It should be automated, and the automation deleted.” Exploratory testing isn't that testing. A lot of that testing is still around, though.
I find myself talking about two guiding principles around exploratory testing again and again. These
two guiding principles are learning and opportunity cost.
Learning
If we run a test but don't stop to learn, letting the results of the test we just ran influence our choices for the next test, we are not exploring. Learning is at the core of exploring. Exploring enables discovery of information that is surprising, and the surprise should lead to learning.
The learning attitude shows in the testing we do: there is testing against the risk of regression, but a lot of the time that risk isn't best addressed by running the exact same tests again and again. Understanding the change, and seeking out the various perspectives in which it might have an impact and introduce problems that were not there before, is the primary way an exploratory tester thinks. Whatever I test, I approach it with the idea of actively avoiding repeating the same test. There's so much I can vary, and learning about what I could vary is part of the charm of exploratory testing.
When we optimize for learning and providing as much relevant information as we can with whatever
we have learned by that time, we can be useful in different ways at different phases of our learning
with the system.
Opportunity Cost
Whatever we choose to do is a choice away from something else. Opportunity cost is the idea of becoming aware of your choices, which always have more dimensions than the obvious ones.
There are some choices that remove others completely. Here's a thought experiment to clarify what I mean: Imagine you're married and having a hard time with your spouse. You're not exactly happy. You come up with ideas of what could be changed. Two main ideas are on the table. One is to go to counselling and the other is to try an open relationship. If you choose the latter and your spouse feels strongly against this, the former option may no longer be available. The system has changed.
There are some choices that you can do in different order and they still are both relevant options.
If you choose to test first with a specification, you will never again be the person who has never
read the specification. If you choose to test first without the specification, you will never have the
experience of what you would notice if your first experience was with a specification.
There are some choices that leave others outside the scope. If you choose to use all your time creating automation, avoiding the exploration ideas you generate while automating because even the basic cases require effort, you leave the information exploring could provide out of scope. If you choose to explore and not automate, you leave the repetitive work of the future to be done manually or left undone.
The idea of being aware of opportunity cost emphasizes a variety of choices where there is no one
obviously correct choice in the stream of small decisions. We seek to provide information, and we
can do so with various orders of tasks.
It's good to remember that we rarely have an endless schedule and budget. Being aware of opportunity cost keeps us focused on doing the best testing possible with the time we have available.
Place of exploration
Over the years, I've worked with places where release cycles grow shorter. From integrating all changes into builds a couple of times a week, we've moved over to continuous integration. Each change gets integrated into the latest system and made available for testing as soon as the build automation finishes running. We don't get to test the exact same thing for a very long time, or if we do, we spend time on something that will not be the final assembly delivered. Similarly, we've moved from giving those assemblies to customers once every six months to continuous delivery, where the version in production can change multiple times a day.
In the fast-paced delivery world, we turn to look heavily at automation. As we need to be able to
run our tests again and again, and deliver the change as soon as it has been checked in and run
through our automated build and test pipeline, surely there is no place for exploratory testing? Or if
there is, maybe we just do all of our exploratory testing against a production version? Or maybe on
top of all this automation, exploratory testing is a separate activity, happening just before accepting
the change into the assembly that gets built and pushed forward? Like a time-box spent on testing
whatever risks we saw the change potentially introducing that our automation may not catch?
Think of exploratory testing as a mindset that frames all testing activities - including automation.
It’s the mindset that suggests that even when we automate, we need to think. That the automation
we are creating for continuous testing is a tool, and will be only as good as the thinking that created
it. Just like the application it tests.
An example
We were working in a small team, building the basis for a feature: managing Windows Firewall remotely. There were four of us: Alice and Bob were the programmers assigned to the task. Cecile specialized in end-to-end test automation. David could read code, but on most days chose not to, and thought of themselves as the team's exploratory testing specialist.
As the team was taking in the new feature, there was a whole-group discussion. The group talked about existing APIs to use for the task at hand, and figured out that the feature had a core. There was the existing Windows Firewall. There was information about rules to add, delivered from the outside. And those rules needed to be applied, or there would be no feature. After drawing some boxes on the wall and having discussions about the overall and minimal scope, the programmers started their work of writing the application code.
It did not take long until Alice checked in the module frame, and Bob reviewed the pull request, accepting the changes and making something available that was still just a frame. Alice and Bob paired to build up the basic functionality, leaving Cecile and David listening to them bounce ideas off each other about what was the right thing to do. As they introduced functionality, they also included unit tests. And as they figured out the next slice of functionality to add, David was noticing how much exploratory testing the two did within the pair. The unit tests would surprise them on a regular basis, and they took each surprise as an invitation to explore in the context of code. Soon the functionality of adding rules was forming, and the pull requests were accepted within the pair.
Meanwhile, Cecile was setting up possibilities to run the Windows Firewall on a multitude of Windows operating systems. They created a script that introduced five different flavors of Windows that were supposed to be supported for the new functionality, to be run as jobs within the continuous integration pipeline. They created libraries that made it possible to drive the Windows Firewall in those operating systems, so that one could programmatically see what rules were introduced and shown. Since the team had agreed on the mechanism of how the rules would be delivered from the outside, they also created mechanisms for locally creating rules through the same mechanism.
As soon as the module could be run, Alice and Bob would help out Cecile on getting the test
automation scripts running on the module. David also participated as they created the simplest
possible thing that should work: adding a rule called “1” that blocked ping and could be verified in
system context by running ping before and after. Setting up the scripts on top of Cecile’s foundation
was straightforward.
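A minimal sketch of what that simplest check might look like is below. It is an illustration only: it drives the firewall through netsh rather than the team's own remote delivery mechanism, and the helper, test names and target host are made up.

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class FirewallRuleSmokeTest
{
    // Illustrative helper: run a command line and return its exit code.
    private static int Run(string file, string arguments)
    {
        var process = Process.Start(new ProcessStartInfo(file, arguments)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        });
        process.WaitForExit();
        return process.ExitCode;
    }

    [Test]
    public void BlockingRuleStopsPing()
    {
        const string target = "192.0.2.1"; // replace with a host that normally answers ping

        // Baseline: ping succeeds before the rule exists.
        Assert.That(Run("ping", $"-n 1 {target}"), Is.EqualTo(0));

        // Add the rule named "1", as in the story; netsh stands in for the
        // team's own mechanism of delivering rules to the firewall.
        Run("netsh", "advfirewall firewall add rule name=\"1\" dir=out action=block protocol=icmpv4:8,any");
        try
        {
            // With the blocking rule in place, the same ping should now fail.
            Assert.That(Run("ping", $"-n 1 {target}"), Is.Not.EqualTo(0));
        }
        finally
        {
            Run("netsh", "advfirewall firewall delete rule name=\"1\"");
        }
    }
}
```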
Cecile wanted to test their scripts before leaving them to run on an hourly trigger as a baseline, and manually started the run on one of their operating systems, visually verifying what
was going on. They soon realized that there was a problem they had not anticipated, leaving the
list of rules in an unstable state. Visually, things were flickering when the list of rules was looked
at through the UI. That was not what happened when rules were added manually. And Cecile had
explored enough of the system to know what adding a rule through the existing user interfaces
should look like. Something was off.
Cecile, Bob and Alice figured out that the problem was related to naming the rules. If the rule name was less than three characters, there was a problem. So Bob introduced a fix limiting the minimum length of a rule name, Alice approved the change and Cecile changed the rules to have names longer than three characters. Cecile used Windows Firewall more to figure out different types of rules, and added more cases by exploring what was the same and what was different, making sure they would test things both locally and remotely - end to end.
David had also started exploratory testing the application as soon as there were features available. They had learned that Alice and Bob did not introduce logging right from the start, and when they did, that the log wasn't consistent with other existing logs in how it was named. They were aware of the things being built into the automation, and focused their search on things that would expand the knowledge. They identified that there were other ways of introducing rules, or of locking the Windows Firewall so that rules could not be introduced through external mechanisms. They would pay attention to the rule wizard functionality in the Windows Firewall, which enforces constraints on what rules are legal, and make notes of those, only to realize through testing that Alice and Bob had not considered that not all combinations were legal. The things David would find were not bugs as the team defined a bug - programming errors - but more missing functionality, stemming from lacking information about the execution environment.
David would make lists of tests for Cecile to add to the test automation, and pair with Cecile to
add them. As they were pairing, the possibilities of automatically creating a lot of rules triggered
their minds and they would try introducing a thousand rules to note performance concerns. And as
adding was so much fun, obviously removing some of them would make sense too. They would also
add changing rules. And as they were playing with numbers, they realized that they had uncovered a security issue: rules were there to protect, and the timing of changes would allow windows of unprotection.
The team built it all to a level they felt was minimal to be delivered outside the organization. Unit
tests and test automation allowed for making changes and believing those cases still worked out ok.
They could explore around every change, and every idea.
The functionality also included a bit of monitoring, allowing them to see the use of the feature in production. After having the feature running in production, monitoring provided extra ideas of what to explore to understand the overall system better.
What this story shows:
• everyone explores, even if not everyone calls it exploratory testing when it is that
• we explore before we build, during building, while we automate and separately from building and automating, as well as while in production
• exploration can happen in the context of code and in the context of the system we're building, as well as in the context of use in the overall system
• without exploratory testing, we don't give ourselves the best chances of knowing what we're building
started. You would do well acknowledging where your abilities are and developing them further by
practicing intertwining, but also allowing yourself time to focus on just one thing. With exploratory
testing, the formula includes you: what works for you, as you are today.
A Practical Example
Imagine learning to drive a car. You’re taking your first lessons at the driving school and after some
bits of theory you know the basic mechanics of driving but have never done any of it.
You’ve been shown the three pedals, and when you stop to think, you know which one is which.
You know the gear shifter, and it's clear without anyone telling you what the steering wheel does (as long as you drive forward, that is). And finally comes the moment you're actually going to drive.
The driving instructor makes you drive a couple of laps around the parking lot and then tells you to drive out, amongst other cars. With the newness of all of this, your mind blanks and you remember nothing of the following half an hour. And if you remember something, it's the time when your car stalled at an embarrassing location because getting the right combination of clutch and gears was too hard.
All the pieces are new and doing the right combination of even two of them at the same time is an effort. Think about it: when you looked to see if you could turn right, didn't you already start turning the wheel? And when you were stopped at the lights to turn, didn't it take significant effort to get moving and turn at the same time?
After years of driving, you’re able to do the details without thinking much, and you’re free to use
your energy on optimizing your route of the day or the discussion you’re having with the person
next to you. Or choosing a new scenic route without messing up your driving flow.
It's the same with testing. There are a number of things to pay attention to. The details of the application you're operating. The details of the tools you need to use. The uncertainties of information. All your thoughts and knowledge. The information you get from others, and whether you trust it or not. The ideas of what to test and how to test it. The ideas of what would help you test again later. The expectations driving you to care about a particular type of information. Combining any two of these at a time seems like a stretch, and yet with exploratory testing you're expected to keep track of all of these in some way. And most essentially, from all the details you're expected to build out and communicate both a long-term and a short-term view of the testing you've done and are about to do.
Learning To Self-manage
I find that a critical skill for an exploratory tester is the skill to self-manage, and to create a structure
that helps you keep track of what you’re doing. Nowadays, with some years of experience behind
me, I just create mind maps. There is a simple tool I found to be brilliant for learning the right kind
of thinking, and that tool is what I want to share with you.
When I say tool, I mean more of a thinking tool. The thinking tool here though has a physical
structure.
For a relevant timeframe, I was going around testing with a notebook for a very particular purpose. Each page in the notebook represented a day of testing, and provided me a mechanism to keep track of my days. A page was split into four sections, with invisible titles I've illustrated in the picture: Mission (why am I here?), Charter (what am I doing today?), Details (what am I keeping track of in detail?) and Other Charters (what should I be doing before I'm done?).
At the start of a day of testing, I would open a fresh page and review my status after letting earlier
learning sink in. Each of the pages would stay there to remind me of how my learning journey
developed as the application was built up, one day at a time.
Notebook illustration
Mission
In the top left corner, I would stick a note about my mission, my purpose or, as I often liked to think of it, the sandbox I was hired to play in. What did the organization expect of me in terms of the information I would provide, having hired me as an exploratory tester? How could I describe that in just a few sentences?
For example, I was hired in an organization with ten teams, each working on a particular area of the
product. My team was specializing in installations. That little note reminded me that while I could
test anything outside the installations if I so wished, there was a sandbox that I was supposed to
cover for relevant findings and it was unlikely that others would feel the urge to dig deep into my
area.
They were likely to travel through it, but all the special things in the area they would probably rather avoid. If I were digging through someone else's area, nothing would stop me. But I might leave mine unattended. I might feel that I had used all this time, and therefore I was done, even if I had only shallowly covered my own area.
The mission note reminded me of the types of information the organization considered relevant, and the area of responsibility I felt I had accepted. It served as an anchor when the whispers of the product led me elsewhere to explore.
Charter
In the top right corner was my note about the work of the day: the Charter. Each morning I would imagine what I was trying to achieve that day, only to learn most evenings I had done something completely different. A charter is a framing of what I'm testing, and as I learn, it changes over time. It's acceptable to start out with one idea and end up with something completely different when you are finished.
The note of the day was another anchor keeping me honest. With exploration, I’m not required to
stick to my own plans. But I’m required to be in control of my plans in the sense that I don’t fool
myself into believing something is done just because the time is used.
Continuing on my example with the Installations team, I might set up my charter of the day to
be 2 installations with a deep dive into what actually gets installed. Or I might set it up to be 20
installations, looking through each shallowly. Or I might decide to focus on a few particular features
and their combinations. If I saw something while testing that triggered another thought, I could
follow it. But at the end of the day, I could review my idea from the morning: did I do 20 shallow
installations like I thought I would? If I didn’t, what did I do? What am I learning for myself from
how things turned out?
Details
In the bottom right corner, I would pile up notes. At first, these were just lines of text that would often fill the page next to the one I was working on. Later, I realized that for me there were three things I wanted to make notes of: the bugs, the questions, and the ideas for test automation or test cases, and my notes extended to have a categorization shorthand.
With any of the detailed ideas, I could choose to stop doing the testing I was doing, and attend to
the detail right away. I could decide that instead of focusing on exploring to find new information,
I could create an automated test case from a scenario I cooked up from exploration. I could decide
that instead of completing what I was planning on doing today, I would write the great bug report
with proper investigation behind it. I could decide to find a product owner, a support representative,
a programmer, or my manager to get an answer for a burning question I had. Or, I could make note
of any of these with minimum effort, and stick to my idea of what I would do to test the application
before attending to the details.
I learned that people like me can generate so many questions, that if I don’t have a personal throttling
mechanism, I can block others from focusing on other things. So I realized that collecting the
questions and asking them in regular intervals was a good discipline for me. And while looking
through my questions, I would notice that I had answers to more questions myself than I first
thought.
With each detail, the choice is mine. Shall I act on this detail immediately, or could it wait? Am I
losing something relevant if I don’t get my answer right away? Is the bug I found something the
developer would rather know now, than at the end of my working day? Do I want to stop being in
exploratory mode to improve my documentation, or to pair with a developer to implement a piece
of test automation, or do I rather time-box that work for another day from the idea I had while
testing?
Other Charters
In the bottom left corner, I would make notes of exploratory testing work I realized needed doing while I was testing. I would write down ideas small and large that I would park for future reference, sometimes realizing later that some of those I had already covered and just forgotten. Sometimes I would add them to my backlog of work to do, and sometimes I would tune the existing backlog to support choosing focus points for upcoming testing days.
Some of my ideas would require creating code for purposes of extending the reach of exploration.
Some ideas would require getting intimately familiar with the details of log files and database
structures. Each new idea would build on the learning that had happened before, making me reassess
my strategy of what information I would invest in to have available first.
You’re In Control
The tool isn’t there to control you, it’s there to give you a structure to make your work visible for
you. You get to decide what happens when you explore, and in what order. If you need to go through
a particular flow 15 times from various angles, you do that. If you find it hard to think about strategy
and importance of particular tasks when you’re deep in doing testing, you reserve time separately
for strategic thinking.
With the days passing and notes taken, I could go back and see what types of sessions I would typically have. There would be days where I'd just survey a functionality, to figure out a plan of charters without focus on details. There would be target-rich functionalities, where the only detail I could pay attention to was the bugs. Over time, I could pay attention to doing things intentionally with a particular focus, and intentionally intertwined. I could stop to think about how different days and different combinations made me feel. I learned to combine things in ways that were useful for my organization, but also maximized the fun I could have while testing in a versatile manner.
While most value was in learning to self-manage my testing work around learning, there was also
a side impact. When someone would show up to ask about what I had done and was doing, I could
just flip a page and give an account of what had been going on. Seeing the structure created trust in
those who were interested in my progress.
As an active learner, you will get better every day you spend on testing. Exploratory testing treats
test design, test execution and learning as parallel, as mutually supportive activities to find unknown
unknowns. Doing things in parallel can be difficult, and testing needs to adjust to the tester’s
personal skill level and style. Your skill to self-manage your work and your learning - making
learning and reflection a habit - is what differentiates skilled exploratory testing from randomly
putting testing activities together.
I believe that the thing that keeps us testers from being treated as a commodity is learning. It's the same with programmers. Learners outperform those who don't learn. Exploratory testing has learning at its core.
Exploratory Testing an API
This article was published in the Ministry of Testing's Testing Planet in 2016. Appropriate pieces of it will find their place as part of this book.
As an exploratory tester I have honed my skills in testing products and applications through a
graphical user interface. The product is my external imagination and I can almost hear it whispering
to me: “Click here… You want to give me a different input… Have you checked out the log file I’m
producing?”
Exploratory testing is a systematic approach to uncovering risks and learning while testing. The
whispers I imagine are heuristics, based on years of experience and learning of how I could model
the product in relevant ways to identify information relevant to stakeholders. While the product is
my external imagination when I explore, I am my programmer’s external imagination when they
explore. They hear the same, unspoken whispers: “You’d want me to do this… I guess I should then.”
and they then become better testers.
I've only recently started applying this skillset to APIs - Application Programming Interfaces. An application programming interface is a set of routines, protocols, and tools for building software and applications. What triggered this focus of exploration effort was an idea to show at a conference how testing something with a code interface is still very similar to testing something with a GUI.
With an API call, I can just fill in the blanks and figure out how the API sits in the bigger picture.
There should be nothing that stops me from exploring through an API, but why haven’t I done
it before? And then as I started exploring an API with a testing mindset, I started seeing APIs
everywhere, and finding more early opportunities to contribute.
covered by detailed test cases. I've lived with Agile ideals long enough to have moved to the idea that whatever is worth documenting in detail is probably worth documenting as test automation. This way of looking at testing tends to focus on what we know.
When we approach testing as artifact creation, our focus is primarily on solving the problem of creating the right artifacts: what kinds of things would be useful to automate? Where is the low-hanging fruit, and what kind of automation would help us drive and support the development?
The test automation artifacts at best give us the specification, feedback, regression protection and granularity discussed earlier. We need both sides of the coin. Exploratory testing is a process of discovery, and it is entirely possible to discover information we did not have from extensive test automation, using APIs as our external imagination.
There's inherently nothing in exploratory testing that requires us to have a user interface or finalized features available. Still, I often hear people express surprise at the idea that you can explore an API.
In addition, exploring an API is thought of as something intended for programmers. An even more specific misconception is that exploratory testing could not use programming or automation as part of the exploration - that there would be something inherently manual required for exploration.
We must, as software testers, help team members understand that we can explore software in a
variety of manual, automated and technical ways.
The library in question was written by a developer friend with significant reliance on the greatness of his unit tests, and I welcomed the challenge to find problems through the means of exploratory testing.
ApprovalTests is a library for deciding if your tests pass or not, with a mechanism of saving a
result to a file and then comparing to the saved result. It also offers mechanisms of digging into
the differences on failure. It has extensive unit tests, and a developer who likes to brag about how
awesome his code is. The developer is a friend of mine and has a great approach to his open source
project: he pairs with people who complain to fix things together.
ApprovalTests has a couple of main connecting points.
• There are the Approvals that are specific to the technology you're testing. For example, my company used ExcelApprovals, which packaged a solution to the problem of getting results that are different yet the same with every run.
• And then there are the Reporters, which are a way of saying how you want to analyze your tests if they fail.
I personally know enough about programming to appreciate an IDE and its automatic word completion feature. The one in Visual Studio is called IntelliSense. It's as if there is a GUI: I write a few letters, and the program already suggests options to me. That's a user interface I can explore, just as much as any other! The tool shows what the API includes.
Picture 1
Using the IDE word completion, I learn that Approvals has a lot of options in its API. Here is an example: to test specific technologies with Approvals, you would want to make different selections. Documentation reveals that Approvals.Verify() would be a basic scenario.
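To make that basic scenario concrete, here is a minimal sketch of a verification in C#. The receipt string and test names are made up for illustration; Approvals.Verify() is the call mentioned above, and the UseReporter attribute is one way of choosing a Reporter.

```csharp
using ApprovalTests;
using ApprovalTests.Reporters;
using NUnit.Framework;

[TestFixture]
[UseReporter(typeof(DiffReporter))] // the Reporter decides how a failing comparison is shown
public class ReceiptTests
{
    [Test]
    public void ReceiptLooksTheSameAsWhenItWasApproved()
    {
        var receipt = "Total: 108.00 EUR"; // stand-in for the real output under test

        // Verify saves the result to a *.received.* file and compares it
        // against the previously approved *.approved.* file next to the test.
        Approvals.Verify(receipt);
    }
}
```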
Picture 2
I look at Reporters with the same idea of just opening up a menu, and find it hard to figure out which items in the list are reporters. I later learn that it's because of the word that comes before, and that naming everything ReportWith would help make the reporters more discoverable. I also learn that the names could better convey their intent; for example, some are supposed to be silent, to run with continuous integration.
Picture 3
I go for the online examples, and learn that they are images - not very user friendly. I try to look for in-IDE documentation, and learn it's almost non-existent. I run the existing unit tests only to discover they don't work on the first run, but the developer fixes them quickly. And browsing through the public API with the tool, I note a typo that gets fixed right away.
The API has dependencies on other tools, specifically test runners (e.g. nUnit and MSTest), and I want my environment to enable exploring the similarities and differences between the two. A serendipitous order of installing the pieces reveals a bug in the combination of using two runners together with a delivery channel (NuGet). Over the course of testing, I draw a map of the environment I'm working with, around ApprovalTests. The map is a source of test ideas on the dependencies.
I don’t only rely on the information available online, I actively ask for information. I ask the
developer what Approvals and Reporters do, to get a long list of things that some do and some
don’t - this becomes a great source for more exploratory testing. Like a checklist of claims, and a
great source for helping him tune up his user documentation.
Even a short exploration gave me ideas of what to dig into deeper, and issues to address for the developer. Some additional sessions with groups at conferences revealed more problems, and showed the power of exploratory testing on an extensively automation-tested API.
5 - Usability of an API
There’s great material out there on what makes a good API. It’s really a usability discussion!
When using the different commands/methods, what if they would be consistent in naming and in
parameter ordering so that programmers using them would make fewer mistakes?
What if the method signatures didn't repeat many parameters of the same type, so that programmers using them wouldn't get mixed up in the order?
What if using the API incorrectly failed at compile time, not only at run time, to give fast feedback?
What if the methods followed a conservative overloading strategy, the idea that two methods with the same name should never have the same number of arguments, so that users can't confuse the inputs? I had never heard of the concept before exploring an API, and ran into the idea while googling for what people say makes APIs more usable.
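A made-up C# sketch of the kind of confusion these questions point at; all of the types and method names below are illustrative only:

```csharp
// Made-up API surface, purely to illustrate the usability points above.
public class ConfusingFirewallApi
{
    // Two string parameters in a row: a swapped call still compiles
    // and only misbehaves at run time.
    public void AddRule(string ruleName, string programPath) { /* ... */ }

    // Same name, same number of arguments: callers can silently end up
    // on the wrong overload.
    public void Verify(string fileContents) { /* ... */ }
    public void Verify(object valueToSerialize) { /* ... */ }
}

// Small wrapper types, also made up for the example.
public readonly record struct RuleName(string Value);
public readonly record struct ProgramPath(string Value);

public class MoreConservativeFirewallApi
{
    // Distinct parameter types turn a swapped call into a compile-time error,
    // and there is no same-name, same-arity overload to mix up.
    public void AddRule(RuleName name, ProgramPath path) { /* ... */ }
}
```

Whether the wrapper types are worth the ceremony is a judgement call; the point is that these usability questions translate directly into findings a programmer can act on.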
There’s even a position in the world called Developer Experience (DX), applying the user experience
(UX) concepts to the APIs developers use and focusing on things like Time to Hello world (see it run
on your environment) and Time to Working Application (using it for something real). These ideas
come naturally with an exploratory testing mindset.
7 - Think Lifecycle
You’re probably testing a version you have just kindly received into your hands. There were probably
versions before and there will be versions after. The only software that does not change is dead. How
does this make you think about testing?
• Find the core of a new user's experience and describe that clearly
• Add the deeper understanding of what the functionalities do (and why) into the API documentation
• Clean up and remove things from automatically created API documentation
• It's not that I'm so smart, it's just that I stay with the problems longer (Albert Einstein)
• The more I practice, the luckier I get (Arnold Palmer)
I find a lot more insights digging in deeper than what is first visible. Manual repetition may be key
for this insight, even on APIs.
Your set of automated tests will require maintenance. When tests fail, you will look into those.
Choose wisely. Choose when you learn what wise means.
Summary
Exploring APIs gives you the power of early feedback and easy access to new skills closer to code. I suggest you give it a chance. Volunteer to work on something you wouldn't usually work on as a non-programmer. You'll be surprised to see how much of your knowledge is transferable to a “more technical” environment.