
Compliments of
DevOps Culture
and Practice with
OpenShift
Deliver continuous business value through people,
processes, and technology

Section 4:
Prioritize It

Tim Beattie | Mike Hepburn | Noel O'Connor | Donal Spring


Illustrations by Ilaria Doria
BIRMINGHAM—MUMBAI
Welcome to DevOps Culture and Practice with OpenShift.
This book will enable readers to learn, understand, and apply many different practices
- some people-related, some process-related, some technology-related - to make
DevOps adoption and in turn OpenShift a success within their organization. It
introduces many DevOps concepts and tools that we use to connect DevOps culture
and practices through a continuous loop of discovery, pivots, and delivery. All of this
is underpinned by a foundation of culture, collaboration, and engineering.
This book provides an atlas demonstrating how to build empowered product teams
within your organization. Through a combination of real-world stories, a fabricated use
case (especially fun for dog and cat lovers), facilitation guides, and the technical details
of how to implement it all, this book provides tools and techniques to build a DevOps
culture within your organization on Red Hat's OpenShift Container Platform.
It’s a collection of agile, lean, design thinking, DevOps, culture, facilitation and
hands-on technical enablement books all in one! As Gabrielle Benefield (Business
Outcomes thought-leader) explains in her foreword, this book is "like having a great
travel guide you can pull out on your journey, that gives you the direction and ideas you
need when you need them. I use this book as a go-to reference that I can give to teams
to help them get up and running fast. I also love that the authors speak with candor
and share their real-world war stories including the mistakes and pitfalls."
To help you navigate around the book, the 18 chapters have been organized into 7 sections:

• Section 1, Practice makes perfect introduces DevOps culture and practices. It also
gives an overview of the navigator we will use to work our way around the book,
showing how we use continuous discovery and continuous delivery to achieve a
DevOps culture.
• Section 2, Establishing the foundation provides the practices we use to establish an
open culture that enables high-performing teams to realize DevOps, along with the
technical foundation practices those teams use to bootstrap it.
• Section 3, Discover It explains the practices we use to discover why, for whom, and
how we build great application products to run on OpenShift and deliver early and
continuous business value.

• Section 4, Prioritize It shows how we decide what to work on by taking an
experimental approach and how we organize our work according to business value
and risk.
• Section 5, Deliver It covers Agile and waterfall delivery approaches and the
techniques we use to measure and learn, at several levels, from iterative and
incremental delivery.
• Section 6, Build It, Run It, Own It looks at the technology and walks through the
steps, patterns and tools we use to confidently deliver and operate our case study
application and platform.
• Section 7, Improve It, Sustain It describes how we continue around the infinite loop
to continuously learn about and improve our products and technology and how
the same mental model used for application products can be applied to platforms
and strategy.
Table of Contents

Preface   iii

Section 4: Prioritize It   349


Chapter 11: The Options Pivot   353

Value Slicing ..........................................................................................................  355


The Beer and the Curry .............................................................................................  360
One to Few to Many Slices of Value – Continuous Delivery .................................  362
PetBattle – Slicing Value towards Continuous Delivery ........................................  366
Design of Experiments ........................................................................................  373
Qualitative versus Quantitative Feedback .............................................................  374
Impact and Effort Prioritization Matrix .............................................................  377
How-Now-Wow Prioritization .............................................................................  379
The Design Sprint .................................................................................................  382
Forming the Initial Product Backlog ..................................................................  385
PetBattle — Tracing Value through Discovery and Delivery Practices ...............  388
Product Backlog Refinement ....................................................................................  389
Prioritization .........................................................................................................  391
Value versus Risk .......................................................................................................  391
Cost of Delay and WSJF ..............................................................................................  392
PetBattle – Prioritizing using WSJF ...........................................................................  394
Product Ownership ..............................................................................................  396
Experimenting with Different Product Owners .....................................................  398
Patterns of Early Sprints and the Walking Skeleton ..............................................  399
Advanced Deployment Considerations .............................................................  400
A/B Testing ..................................................................................................................  401
Blue/Green Deployments .........................................................................................  402
Canary Releases .........................................................................................................  403
Dark Launches ............................................................................................................  404
Feature Flags ..............................................................................................................  405
PetBattle – Tech Spikes, Prototypes, Experiments, and
Feature Implementations .........................................................................................  406
Reframing the Question – How Much Can I Borrow or
How Much House Can I Afford? ...............................................................................  408
Research, Experiment, Implement ....................................................................  409
Creating an Options Map ....................................................................................  410
Conclusion ............................................................................................................  412
Preface
About

This section briefly introduces the authors, the coverage of this book, the skills you'll need to get
started, and the hardware and software needed to complete all of the technical topics.
iv | Preface

About DevOps Culture and Practice with OpenShift


DevOps Culture and Practice with OpenShift features many different real-world
practices - some people-related, some process-related, some technology-related - to
facilitate successful DevOps, and in turn OpenShift, adoption within your organization.
It introduces many DevOps concepts and tools to connect culture and practice through
a continuous loop of discovery, pivots, and delivery underpinned by a foundation of
collaboration and software engineering.
Containers and container-centric application lifecycle management are now an
industry standard, and OpenShift has a leading position in a flourishing market of
enterprise Kubernetes-based product offerings. DevOps Culture and Practice with
OpenShift provides a roadmap for building empowered product teams within your
organization.
This guide brings together lean, agile, design thinking, DevOps, culture, facilitation, and
hands-on technical enablement all in one book. Through a combination of real-world
stories, a practical case study, facilitation guides, and technical implementation details,
DevOps Culture and Practice with OpenShift provides tools and techniques to build a
DevOps culture within your organization on Red Hat’s OpenShift Container Platform.

About the authors


Tim Beattie is Global Head of Product and a Senior Principal Engagement Lead for Red
Hat Open Innovation Labs. His career in product delivery spans 20 years as an agile
and lean transformation coach - a continuous delivery & design thinking advocate who
brings people together to build meaningful products and services whilst transitioning
larger corporations towards business agility. He lives in Winchester, UK, with his wife
and dog, Gerrard the Labrador (the other Lab in his life), having adapted from being a
cat-person to a dog-person in his 30s.
Mike Hepburn is Global Principal Architect for Red Hat Open Innovation Labs and helps
customers transform their ways of working. He spends most of his working day helping
customers and teams transform the way they deliver applications to production with
OpenShift. He co-authored the book "DevOps with OpenShift" and loves the outdoors,
family, friends, good coffee, and good beer. Mike loves most animals, not the big hairy
spiders (Huntsman) found in Australia, and is generally a cat person unless it's Tuesday,
when he is a dog person.

Noel O'Connor is a Senior Principal Architect in Red Hat's EMEA Solutions Practice
specializing in cloud native application and integration architectures. He has worked
with many of Red Hat's global enterprise customers across Europe, the Middle East, and Asia.
He co-authored the book "DevOps with OpenShift" and he constantly tries to learn new
things to varying degrees of success. Noel prefers dogs over cats but got overruled by
the rest of the team.
Donal Spring is a Senior Architect for Red Hat Open Innovation Labs. He works in
the delivery teams with his sleeves rolled up tackling anything that's needed - from
coaching and mentoring the team members, setting the technical direction, to coding
and writing tests. He loves technology and getting his hands dirty exploring new tech,
frameworks, and patterns. He can often be found on weekends coding away on personal
projects and automating all the things. Cats or Dogs? He likes both :)

About the illustrator


Ilaria Doria is an Engagement Lead and Principal at Red Hat Open Innovation Labs.
In 2013, she entered into the Agile arena becoming a coach and enabling large
customers in their digital transformation journey. Her background is in end-user
experience and consultancy using open practices to lead complex transformation and
scaling agile in large organizations. Colorful sticky notes and doodles have always been
a part of her life, and this is why she provided all illustrations in the book and built all
digital templates. She is definitely a dog person.

About the reviewer


Ben Silverman is currently the Chief Architect for the Global Accounts team at
Cincinnati Bell Technology Services. He is also the co-author of the books OpenStack
for Architects, Mastering OpenStack, OpenStack – Design and Implement Cloud
Infrastructure, and was the Technical Reviewer for Learning OpenStack (Packt
Publishing).
When Ben is not writing books he is active on the Open Infrastructure Superuser
Editorial Board and has been a technical contributor to the Open Infrastructure
Foundation Documentation Team (Architecture Guide). He also leads the Phoenix,
Arizona Open Infrastructure User Group. Ben is often invited to speak about cloud and
Kubernetes adoption, implementation, migration, and cultural impact at client events,
meetups, and special vendor sessions.

Learning Objectives
• Implement successful DevOps practices and in turn OpenShift within your
organization
• Deal with segregation of duties in a continuous delivery world
• Understand automation and its significance through an application-centric view
• Manage continuous deployment strategies, such as A/B, rolling, canary, and
blue-green
• Leverage OpenShift’s Jenkins capability to execute continuous integration
pipelines
• Manage and separate configuration from static runtime software
• Master communication and collaboration enabling delivery of superior software
products at scale through continuous discovery and continuous delivery

Audience
This book is for anyone with an interest in DevOps practices with OpenShift or other
Kubernetes platforms.
This DevOps book gives software architects, developers, and infra-ops engineers
a practical understanding of OpenShift, how to use it efficiently for the effective
deployment of application architectures, and how to collaborate with users and
stakeholders to deliver business-impacting outcomes.

Approach
This book blends to-the-point theoretical explanations with real-world examples to
enable you to develop your skills as a DevOps practitioner or advocate.

Hardware and software requirements


There are five chapters that dive deeper into technology. Chapter 6, Open Technical
Practices - Beginnings, Starting Right and Chapter 7, Open Technical Practices - The
Midpoint focus on bootstrapping the technical environment. Chapter 14, Build It,
Chapter 15, Run It, and Chapter 16, Own It cover the development and operation of
features in our application running on the OpenShift platform.

We recommend all readers, regardless of their technical skill, explore the concepts
explained in these chapters. Optionally, you may wish to try some of the technical
practices yourself. These chapters provide guidance on how to do that.
The OpenShift Sizing requirements for running these exercises are outlined in
Appendix A.

Conventions
Code words in the text, database names, folder names, filenames, and file extensions
are shown as follows:
We are going to cover the basics of component testing the PetBattle user interface
using Jest. The user interface is made of several components. The first one you see
when landing on the application is the home page. For the home page component, the
test class is called home.component.spec.ts:
describe('HomeComponent', () => {
  let component: HomeComponent;
  let fixture: ComponentFixture<HomeComponent>;

  beforeEach(async () => {...});

  beforeEach(() => {...});

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});

Downloading resources
All of the technology artifacts are available in this book's GitHub repository at
https://github.com/PacktPublishing/DevOps-Culture-and-Practice-with-OpenShift/
High-resolution versions of all of the visuals, including photographs, diagrams, and
digital artifact templates, are available at
https://github.com/PacktPublishing/DevOps-Culture-and-Practice-with-OpenShift/tree/master/figures
We also have other code bundles from our rich catalog of books and videos available at
https://github.com/PacktPublishing/. Check them out!
We are aware that technology will change over time and APIs will evolve. For the latest
changes to technical content, have a look at the book's GitHub repository above. If you
want to contact us directly about any issue you've encountered, please raise an issue in
this repository.
Section 4: Prioritize It

In Section 3, Discover It, we worked our way around the Discovery Loop. We started
with Why—why are we embarking on this initiative? What is our great idea? We used the
North Star to help us frame this. We defined the problem and understood the context
further by using the Impact Mapping practice to align on our strategic goal. Impact
Mapping helped us converge on all the different actors who could help us achieve,
or impede, our goal. Impact Mapping captures the measurable impacts we want
to effect and the behavioral changes we would like to generate for those actors. From
this, we form hypothesis statements about how the different ideas for deliverables may
help achieve these impacts.
We refined this understanding further by using the human-centered design techniques
and Design Thinking practices such as Empathy Mapping and Contextual Inquiry to
observe and connect with our actors. We explored business processes and domain
models using the Event Storming practice by generating a shared understanding of
the event-driven process. Using the Event Storming notation, a microservices-based
architecture started to emerge. We also discovered non-functional aspects of the
design by using Non-Functional Maps and running Metrics-Based Process Mapping.
The Discovery Loop presented lots of ideas for things we can do in our delivery cycles—
features we can implement; architectures that emerge as we refine and develop the
solution by repeated playthroughs of the Event Storm; research that can be performed
using user interface prototypes or technical spikes that test our ideas further;
experiments that can be run with our users to help get an even better understanding of
their motivations, pain points, and what value means to them; and processes we can put
in place to gather data and optimize metrics.

Figure 11.0.1: The Options Pivot – setting the scene

From just the first iteration of the Discovery Loop, it would be very easy to come
up with hundreds of different tasks from all the conversations and engagement that
those practices generate. Visualizing all these ideas can be a minefield, and even a
short iteration of the Discovery Loop can generate weeks, if not months, of work for
a small team! So, we need to be careful to ensure we remain focused on delivering
value and outcomes that matter, and that we don't get bogged down in
analysis-paralysis in a world filled purely with busyness!
Before we left the Discovery Loop, we took time to translate all of this learning
into measurable Target Outcomes. This started with the primary target outcomes
associated with the business product, but we also took time to recognize some of
the secondary targets and enabling outcomes that can help support development—
especially those that can be enabled by software delivery processes and underlying
platforms such as OpenShift.
With these outcomes visualized and presented using big visible Information Radiators,
supporting metrics can also be baselined and radiated. We can now think about all
those tasks and ideas that resulted from the Discovery Loop. But we can only do so by
keeping an eye on those outcomes at all times and ensuring everything we do is directly
or indirectly going to take us toward achieving them. This is where the real fun begins,
because we're going to explore how we're going to achieve those measurable outcomes.

Mobius uses the word options instead of solutions, or the dreaded term requirements.
Until we validate our ideas, they are simply wild guesses; calling them solutions,
or saying they are required, is not logical because there is no evidence yet to support
them. Instead, we call them potential solutions, options, and we get to test them out in
the Delivery Loop to prove or disprove the hypotheses we have formed around those
options. This drives a more data-driven approach rather than simply guessing.

Figure 11.0.2: The Options Pivot

When we are on the Options Pivot, we decide which of the outcomes we are going
to target next. We choose which ideas or hypotheses we need to build, test, validate,
and learn from, as well as exploring how we might deliver the options. We also need
to get a sense of priority. We never have the luxury of infinite time and resources, so
prioritization is always going to be the key to achieving business value and fast learning.
Learning fast is an important aspect here. We want to generate options that can
validate, or invalidate, our ideas from the Discovery Loop so we can ultimately revisit
and enhance them. Fast feedback is the key to connecting the Discovery artifacts with a
validated prototype.
Chapter 11, The Options Pivot, will focus on the practices we use before we begin a
Delivery Loop. We will return to the Options Pivot again after the Delivery Loop in
Section 7, Improve It, Sustain It, when we take the learnings and measurements that
have resulted from the latest Delivery Loop iteration and decide what to do next given
these findings.
Chapter 11: The Options Pivot
During the Discovery Loop, we started to come up with lots of ideas for
implementation. The Impact Map gave us deliverables that formed hypothesis
statements. The human-centered design and Empathy Mapping practices gave us
ideas directly from the user. The Event Storm gave us standalone features (triggered
by commands) that can be implemented using standalone microservices (codifying the
aggregate). The Metrics-Based Process Map and Non-Functional Map gave us ideas
on how we can speed up the development cycle and improve security, maintainability,
operability, scalability, auditability, traceability, reusability, and just about anything else
that ends with ability!
The next step after the Discovery Loop is the Options Pivot, where all the information
from these practices that we've used gets boiled down to a list of options for actions to
take and decisions to make on what to deliver next.
The Options Pivot is the heart of the Mobius Loop. On the left-hand side of it is where
we absorb all the learning and Target Outcomes we aligned on in the Discovery Loop.
We generate further ideas. We refine ideas on what to deliver next and then choose
the options to work on. Later in the book, in Chapter 17, Improve It, we'll look at the
right-hand side of the Options Pivot. This is where we adapt our approach based on
the measurements and learnings from a completed iteration of the Delivery Loop. We
decide whether to do more Discovery, more Delivery, or Pivot completely. We refine
what to discover or deliver next.

Remember, we are working in fully autonomous, cross-functional teams. We don't have
separate testing teams for performance or usability, so we can't assume work items
associated with these functions will be dealt with on the other side of the wall! This
makes our job difficult as we have to weigh up the relative values of different options.
We have to decide between new feature development, urgent bug fixes, platform
improvements to speed up development, usability improvements, enhancements to
security, and many other aspects that will make our product better.
In this chapter, we're going to do the following:
1. Visualize all the work we might do using the User Story Mapping practice.
2. Organize all our work into small, thin slices of value using the Value Slicing practice
so that we can continuously deliver value.
3. Start the Design of Experiments practice to test hypotheses that emerged during
the Discovery Loop.
4. Prioritize by exploring different practices that help us in the Options Pivot,
including Impact and Effort Prioritization, How-Now-Wow Prioritization, and the
Design Sprint.
5. Form the initial Product Backlog with traceability to all preceding practices.
6. Set up Product Backlog Refinement to happen on an ongoing basis.
7. Apply Economic Prioritization to Product Backlog items.
8. Explain the importance of the Product Owner role in achieving all of the above.
9. Explore how experiments can be supported by some of the advanced deployment
capabilities enabled by the OpenShift platform and how we can plan to use these to
ensure we maximize learning from our Delivery Loops.
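As a taster of those capabilities (Canary Releases are covered later in this chapter), an OpenShift Route can split traffic between two versions of a service by weight. The fragment below is our own simplified sketch, and the service names are hypothetical:

```yaml
# Sketch only: a Route sending 10% of traffic to a canary deployment.
# Service names (petbattle-stable, petbattle-canary) are illustrative.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: petbattle
spec:
  to:
    kind: Service
    name: petbattle-stable
    weight: 90
  alternateBackends:
    - kind: Service
      name: petbattle-canary
      weight: 10
```

Adjusting the weights over time lets the team expose an experiment to a growing share of users while measuring its impact.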
Let's start with one of our favorite visualization practices to radiate and plan lots of
small incremental releases by slicing value.

Value Slicing
We are approaching the part of the Mobius mental model where we will start delivering
increments of our solution. They will vary from running short prototypes and technical
experiments or spikes, to conducting defined user research, to implementing features
that have resulted from Event Storming and other Discovery practices.
An iteration of the Delivery Loop is not prescribed in length. If you are using a popular
iterative agile delivery framework such as Scrum, an iteration of the Delivery Loop
translates well to one sprint (a fixed time-box between one and four weeks). If you
are using a more continuous delivery approach such as Kanban to enable an ongoing
flow of value, each Delivery Loop may simply represent the processing of one Product
Backlog item and delivering it into the product. You may even be using a non-agile
delivery methodology such as Waterfall whereby the Delivery Loop is more singular
and slower to move around. The Mobius Loop is agnostic to the delivery approach. But
what is consistent regardless of the delivery approach is the idea that we seek to deliver
high‑value work sooner, establish important learning more quickly, and work in small
batch sizes of delivery effort so we can measure and learn the impact to inform our
next set of decisions.
To help us break down all our work items and ensure they are grouped to a level
that will form small increments of value, we use popular visualization and planning
practices.
Simple path mapping techniques break the work down by mapping back from the
Target Outcomes to the fewest steps needed to deliver them. There are many
other practices, such as journey mapping, story mapping, future state mapping, service
blueprints, and more. Mobius is less concerned with the how, as long as you focus on
finding the simplest way to deliver the outcomes. One technique we have found works
very effectively is called Value Slicing.
Let's look at how we approach Value Slicing.

First, we note all of the standalone work ideas that have been generated by the
Discovery practices. Our focus here is now on Outputs (and not Outcomes) as we want
to group all of our deliverables together and form an incremental release strategy that
delivers the outcomes. A starting point is to copy each of the following from existing
artifacts:
• Deliverables captured on the Impact Map
• Commands captured on the Event Storm
• Ideas and feedback captured on Empathy Maps
• Non-functional work needed to support decisions made on the Non-Functional
Map
• Ideas and non-functional features captured during discussion of the
Metrics‑Based Process Map (MBPM)
• All the other features and ideas that have come up during any other Discovery
Loop practices you may have used and the many conversations that occurred

Here are a couple of tips we've picked up from our experience. First, don't simply move
sticky notes from one artifact to this new space. You should keep the Impact Map,
Event Storms, Empathy Maps, MBPMs, and other artifacts as standalone artifacts, fully
intact in the original form. They will be very useful when we return to them after doing
some Delivery Loops.
Second, copy word-for-word the items you're picking up from those practices. As
we'll see in the coming chapters, we will really benefit when we can trace work items
through the Discovery Loop, Options Pivot, and Delivery Loop, so keeping language
consistent will help with this. Some teams even invest in a key or coding system to
show this traceability from the outset.
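As an illustration of what such a coding system might look like (the prefixes and IDs here are entirely our own invention, not a scheme from the book), a short prefix on each work item's ID can record which Discovery artifact it was copied from:

```python
# Hypothetical traceability scheme: a prefix on each work item's ID
# records the Discovery practice it originated from.
SOURCES = {
    "IM": "Impact Map",
    "ES": "Event Storm",
    "EM": "Empathy Map",
    "NF": "Non-Functional Map",
    "MB": "Metrics-Based Process Map",
}

def trace(item_id: str) -> str:
    """Resolve an item ID such as 'ES-07' back to its originating practice."""
    prefix = item_id.split("-", 1)[0]
    return SOURCES.get(prefix, "Unknown source")
```

An item labeled ES-07 on the value map can then be traced straight back to the command it was copied from on the Event Storm, even several Delivery Loops later.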

Figure 11.1: Collecting information and ideas from Discovery Loop practices

To start with, simply spread all the items across a large work surface. There's something
very satisfying about standing back and seeing all the possible work we know of in
front of us. It can be amazing to see just how much has been ideated from those
few practices. It can also be a bit chaotic and daunting. This is why we need to start
organizing the work.
If you're working virtually with a distributed team, having a canvas such as the following
one (available for download from the book's GitHub repository) may be helpful:

Figure 11.2: User Story and Value Slice Map template

Next, remove any duplicates. For example, you may have identified a deliverable on
your Impact Map and the same feature has ended up in your Event Storm. Your user
interviews may also have found similar feature ideas captured on Empathy Maps. Where
there are identical features, remove the duplicate. If an idea can be broken down into
smaller standalone ideas, refactor and rewrite your sticky notes to capture each of
them. The more the better in this practice!
The next step is to categorize each of the items into some kind of common theme and
give that theme a title. We're looking for something that brings all of the items together.
If you were to put each item into a bucket, what would the label on the bucket be? A
top tip is to start with the Target Outcomes that were derived from the Discovery Loop
and set them as the headings to categorize each item under. The reason we do this
is that we want to work with an outcome-driven mindset. We have agreed on some
Target Outcomes so, really, every work item we are considering should be taking us to
one or more of those outcomes. If we pick any one of the items and can't easily see an
outcome it will help achieve, we should be questioning the value of doing that thing at
all. (There are cases where such items that don't map to outcomes are still important,
so if this does happen, just give them their own pile.)

We should end up with all items in a neat, straight column directly beneath the Target
Outcome they are categorized under.
If we have a good, well-thought-out set of Primary Outcomes and Enabling Outcomes,
it should be a very positive exercise mapping all of the features, experiments, research
ideas, and so on to an outcome. This exercise should be collaborative and include all
members of the cross-functional team. Developers, operators, designers, Product
Owners, business SMEs, and so on will all have been involved and provided input to the
preceding Discovery Loop practices. They should remain included during the Options
Pivot to ensure their ideas and initiatives are understood and included on the map.
The resulting visualization of work should include functional features and
non-functional initiatives. All of the work that can take place on the platform to enable
faster and safer development and quicker release of product features should be shown.
If we stand back at the end of the exercise, we should see our delivery loops
starting to emerge.

Figure 11.3: Clustering tasks under Target Outcomes

The next step is to prioritize all tasks and items on the board. This is never easy but
nearly always needed. If you have worked on a project where time has not been an
issue and it's been obvious that the team will have all the time they need to confidently
deliver everything asked of them, you are in a unique position! That has never happened
to us; there has always been a need to prioritize work and choose what not to do!
This can start with the Product Owner giving their perspective on priority.
However, as we progress through this chapter, we'll look at a few practices and tools
that you can bring out to help with prioritization in a collaborative environment and
drive consensus. The results of executing those practices can then be reflected on the
value map we're creating.

We like to prioritize each column in turn. Take each Target Outcome, with all of the
features and other items that we believe will achieve it, and prioritize them. The
most important and compelling items should be at the top; these are the items that
need to be prioritized above anything else if you are to achieve the outcome. The
lesser-understood or "nice to have" items should be further down the column.
The final stage is to slice value out of the value map. Using some sticky tape (ideally
colored, such as painters' tape), we ask the person who holds overall responsibility
for prioritizing work and articulating value (usually this is the Product Owner for
a team using Scrum) to slice horizontally what they see as a slice of value for the
whole product. This means looking at the most important items for each theme and
combining them with some of the other highly important items from other themes.

Figure 11.4: Prioritizing work within clusters

At this point, our Product Owner has a huge amount of power. They can prioritize
within a given outcome. They can prioritize a whole outcome and move everything
down or up. They can combine items together from different outcomes to form
proposed releases. They can slice one, two, three, or fifty slices of value – each
one containing one, two, or more items. Most importantly, they can facilitate
conversations with all stakeholders and team members to arrive at a consensus on this
two-dimensional Value Slice Map.

Figure 11.5: Two-dimensional Value Slice Map

During many years of using these practices, we've picked up a few facilitation tips to
help explain them correctly. The first involves how you might visualize and plan two
valuable activities.

The Beer and the Curry


In 2017, I led an engagement with a global
oil company. Toward the end of the first
week, the team was tired. It had been a busy,
intensive week. We'd formed our team and built
our foundation of culture. We'd run several
practices of the Discovery Loop, including user
Empathy Mapping and Event Storming, which
involved lots of standing, lots of thinking, and
lots of conversation.
On Thursday afternoon, I was facilitating User Story Mapping and Value
Slicing based on all the items that had been captured on the Event Storm
and Empathy Maps. This practice was new to the team. After we had gone
through the first few steps by putting all the work on the wall and organizing
it against the outcomes, I talked about the need to slice and prioritize.

I started by saying, "Obviously, we'd like to do all this work," after which one
of the senior stakeholders interrupted and said, "YES! We need to do all this
work." I could sense there was some discomfort among stakeholders, as if
I was making a typical consultant's effort at locking down scope when the
stakeholders wanted everything built. Perhaps my leading choice of words
could have been better.

Figure 11.6: Explaining Value Slicing

But I wasn't trying to decide what was in and out of scope. My whole agile
mindset is based on flexible scope, the ability to adapt and change scope as
we learn more, and always ensuring we're delivering the next most valuable
and important work.
To explain my mindset, my thoughts fast-forwarded to a team social we had
planned for later that day. It had been a long week and we had planned to go
for a few drinks and a curry – again boosting our cultural foundation further
by allowing the team to relax and get to know each other a bit better.

I was looking forward to having a beer and having a curry after that beer.
In fact, I was really looking forward to that beer. I felt we'd really earned it
that week and it was going to be great to raise a glass and say cheers with
my new team! But that didn't mean that the curry wasn't important. Nor did
it mean that the curry was not going to happen. We were going to have a
beer first followed by a curry. That was how we'd prioritized the evening. We
hadn't de-scoped anything nor were we planning to. The beer was in my top
slice of value. The curry was in my second slice of value.
The team felt more relaxed understanding we were not de-scoping any work
at all using this practice but simply organizing by value. The team also felt
very relaxed and enjoyed both a beer and a curry!

We've also learned a few simple tricks that can help set up the Value Slicing practice to
work effectively.

One to Few to Many Slices of Value – Continuous Delivery

I've learned various tricks over the years of
facilitating this exercise.
One of the first times I ran it, we had organized all
the work into columns linked to Target Outcomes
and we progressed to the slicing part of the practice.
I placed one slice of tape on the wall and asked the
Product Owner and stakeholders to move the sticky
notes they deemed the most valuable above that line
of tape and the less valuable ones beneath that line.

As I observed the team working through this process, I realized that the
single line of tape had given a misleading impression of the point of this practice.
There was a reluctance to put anything beneath the line because there was a
perception that this meant out of scope. I explained this was not the case
and that what I was trying to do was slice out the Minimum Viable Product, or
MVP. The MVP defines the minimum set of features that could form the
product that could be released to users to learn from and build upon. In
reality, many stakeholders see defining the MVP as something negative as
it's where they lose all the great innovative features that they may want but
are not collectively deemed important. I actually try to avoid using the term
MVP, as it is often greeted with some negative emotion.
I learned from this facilitation that a single slice should never be used, as we are
not defining things as in or out of scope and we are not defining just the
MVP.
Working with another customer in Finland, I took this learning and adapted
my facilitation approach. With all the items that had been captured from
the Discovery Loop on the map, I produced three slices of tape. Hopefully
now the Product Owner and stakeholders would not fall into the in-scope/
out-of-scope trap. However, now there was a new misunderstanding! For
this particular engagement, which was an immersive four-week Open
Innovation Labs residency focused on improved operations, we had planned
three one-week sprints. By coincidence, I had produced three slices of tape
for Value Slicing. So, the stakeholders and Product Owner assumed that
whatever we put in the first slice would form the scope for Sprint 1, the
second slice would be Sprint 2, and the third slice would be Sprint 3.

Figure 11.7: Value Slicing of the items captured from the Discovery Loop

I explained that this was not the case. We do not yet know how long it will
take the team to deliver each item in each slice. We will use other practices
in the Delivery Loop to help us understand that. We could end up delivering
more than one slice in one sprint. Or, it may take more than one sprint to
deliver one slice. We just don't know yet.
Since then, I have tweaked my facilitation further. When making the slices
available, I now produce lots of them – at least 10, sometimes more than 20.
I also make the roll of tape accessible and tell the Product Owner to use as
many slices as they would like – the more the better, in fact! I've found Value
Slice Maps now often have many more slices.
A Product Owner from a UK defense company once remarked to me that
you could argue that each item on the Value Slice board could be its own
slice of value. I celebrated with a massive smile when I heard this. Yes! When
we reach that mindset and approach, we truly are reaching the goal of
continuous delivery.

Figure 11.8: Value Slicing with many slices

Visualizing and slicing increments of value has evolved from the amazing thinking
and work produced by Jeff Patton in his book User Story Mapping,1 published in 2014.
User Story Mapping is an effective practice for creating lightweight release plans that
can drive iterative and incremental delivery practices. We highly recommend reading
Patton's book and trying out the exercise he describes in his fifth chapter about
visualizing and slicing out the value of something very simple, like everything you do
in the morning to get up, get ready, and travel to work. We use this exercise in our
enablement workshops and find it really brings the practice to life well.
Let's look at how the PetBattle team approached Value Slicing.

1 https://www.jpattonassociates.com/user-story-mapping/

PetBattle – Slicing Value towards Continuous Delivery


The PetBattle team reviewed all of the artifacts they produced during their
first Discovery Loop.
The Impact Map identified "Increasing Site Engagement" for Uploaders as
the place they wanted to invest in running experiments and building initial
features. The Empathy Map of Mary, their user, added further support
to building tournament services and a live leaderboard. The team Event
Stormed the idea of Mary entering the daily tournament and winning a
prize to break down the event flow to identify commands, read models,
some UI ideas, and aggregates. The Metrics-Based Process Map identified
some bottlenecks in the existing PetBattle deployment steps, mainly due to
a lack of automation. Finally, the team brainstormed all the non-functional
considerations they had.
They copied all of the features that had resulted from these onto fresh sticky
notes and spread them across the wall.
Then it was time to consider the headings for their Value Slicing Map. The
team recalled that they distilled all of the Discovery Loop information and
learning into three primary Target Outcomes:
• PetBattle is generating revenue from an increased active user base.
• PetBattle is always online.
• Improved team satisfaction with excitement to build and run PetBattle.
They also identified an additional Enabling Outcome:
• Reduce Operational Incidents with impact to customers.
These four outcomes formed the backbone of the PetBattle Value Slice Map.

Figure 11.9: Target Outcomes backbone



As the team explored these four outcomes further, they thought it might
help to break them down a bit further to help with shared understanding
with stakeholders. The Impact Map had driven focus on four outcomes:
• Increased participation rate of the casual viewer
• Increased uploads
• Increased site engagement of the uploaders
• Increased number of sponsored competitions
Collectively, these would all help with the first primary outcome where
PetBattle would be generating revenue from an increased active user base.
So, these were added to the Value Slice Map:

Figure 11.10: First Target Outcome broken down

The second primary outcome was that PetBattle would always be online.
The team reflected on the sections of their Non-Functional Map and
recognized three outcomes that would help achieve this:
• Improve Site Reliability
• Improve Maintainability and Supportability
• Increase Auditability and Observability

Figure 11.11: Second Target Outcome broken down

As the team discussed the third primary outcome, Improved team
satisfaction with excitement to build and run PetBattle, their conversations
were all about achieving great testability. Having a foundation of technical
practices that would allow them to automate different levels of tests and
also user-test, utilizing advanced deployment techniques, would make them
very happy. They also reflected on some of the ideas they came up with
when they were forming as a team and building their foundation of culture –
socials, having breakfast and lunches together, and starting a book club were
just a few ideas that would help improve team culture. So, they added these
important headings:

Figure 11.12: Third Target Outcome broken down



Finally, they had their Enabling Outcome, whereby reducing operational
incidents with impact to customers would help drive all the other outcomes.
This could also be broken into three areas:
• Reduce Security Risks
• Improve Reusability
• Enhance Performance and Scalability

Figure 11.13: Fourth Target Outcome broken down

So, they had a huge collection of outputs spread over one wall and an
organized set of outcomes as headings on another wall:

Figure 11.14: All Target Outcomes at two levels

It was time to connect the outputs to the outcomes by forming columns
beneath each outcome.
They started with the first primary outcome. The outputs sourced here were
mainly commands from the Event Storm, supported by focused impacts on
the Impact Map and high motivations captured on the Empathy Map.

Figure 11.15: Outputs to deliver first Target Outcome



The outputs moved under the second and fourth outcomes were sourced
from the MBPM and Non-Functional Map. This was also true for the third
outcome, which also included some of the ideas captured during the early
social contract and real-time retrospective that was started when building
the cultural foundation.
The team ended up with a User Story Map that showed the initial journey
through PetBattle as well as the journey the team would go on to deliver and
support it:

Figure 11.16: PetBattle User Story Map

This is the first information radiator that shows the functional features of an
application, the work to build, operate, and improve the platform, and the
activities that generate an enthusiastic, high-performing team.
The final step is to start slicing valuable increments of the whole
engagement. Working with Product Ownership, the team was keen to ensure
all outcomes were being tackled early in some minimal form and they would
continue to improve every outcome as they delivered more slices.

Figure 11.17: Value Slicing the PetBattle User Map

Looking at the top slice of value brought a feeling of excitement. The team
could see the first items they were going to do to make the PetBattle vision
a reality!

There is a growing set of interesting links, conversations, and further information on
these practices in the Open Practice Library at openpracticelibrary.com/practice/
user-story-mapping. Take a look and, if you have a story or experience to share, you
can help improve the practice further.
Now we've seen the powerful User Story Mapping and Value Slicing technique, we're
going to explore a few other practices that will help make this even more successful and
collaborative. We often find that people have two challenges with User Story Mapping.
First, they don't know how to get everything onto the User Story Map in the first place.
Second, the approach to prioritization and slicing out value can be difficult for some
and can also lack collaboration.

Let's look at the first challenge first.


When we introduced the User Story Mapping practice, we said we start by copying all
of the outputs, the deliverables, the features, and the ideas that had surfaced from the
practices used on the Discovery Loop. That sounds very simple and straightforward.
In fact, it really just calls for a human copy-and-paste function to replicate all the
deliverables captured on the Impact Map, all the commands captured on the Event Storm,
and all the ideas and non-functional work captured during discussion of the MBPM.
But is that enough? Are we shutting off the potential for increased innovation by simply
relying on the inspiration that happened a few days ago? A slightly different approach
is to not just think of User Story Mapping and Value Slicing to be about delivering
features. We can try moving to a more experimental mindset where, during the Options
Pivot, we really want to design experiments we can run during the Delivery Loop.

Design of Experiments
All our ideas for new products, services, features, and indeed any changes we
can introduce to make things better (more growth, increased revenue, enhanced
experience, and so on) start off as a hypothesis or an assumption. In a traditional
approach to planning, a team may place bets on which experiment to run based on
some form of return on investment-style analysis, while making further assumptions in
the process.
Design of Experiments is an alternative to this approach, in which we try to validate
as many of the important ideas/hypotheses/assumptions we are making as early as
possible. Some of those objects of the experiments we may want to keep open until
we get some real-world proof, which can be done through some of the advanced
deployment capability (such as A/B Testing) that we'll explore later in this chapter.
Design of Experiments is a practice we use to turn ideas, hypotheses, or assumptions
into concrete, well-defined sets of experiments that can be carried out in order to
achieve validation or invalidation – that is, provide us with valuable learning.
Design of Experiments is a fail-safe way to advance a solution and learn fast. It can
provide a quick way to evolve a product, helps drive innovation in existing as well as
new products, and enables autonomous teams to deliver on leadership intent by placing
small bets.
You may need more than one experiment for each item (idea, hypothesis, assumption).
An experiment usually only changes a small part of the product or service in order
to understand how this change could influence our Target Outcomes. The number
of experiments is really defined based on what you want to learn and how many
distinctive changes you will be introducing.

Qualitative versus Quantitative Feedback


Working on a Labs residency with a road and
travel insurance provider in 2018 provided us
with an opportunity to design experiments on
an early prototype of rebuilding the mobile
application with an improved user experience
and increased conversion.
We wanted to measure the current application
versus some ideas brainstormed with business
stakeholders for improved experience, so we
designed an experiment for our test users. We advised them to behave as
if this was a real-life situation, stop where they would normally stop, read
how they would normally read, and so on, when navigating through the
application.
The role of this application was to guide the user through the car insurance
booking process. To complete this process, they needed their license plate
number, social security number, and residential zip code.
Each user was guided to follow a URL (ideally on mobile) for the deployment
of the application running on OpenShift. They were instructed to select a
car and try to compare and buy what seemed to be the best insurance for
the user. The experiment ended at the point where they had purchased
insurance. (Note – each user was told that this was a test and that the
payment details provided to them were for a test credit card and no monies
would be moved.)
The A/B Test meant different approaches for displaying the page could
be used with different users so we could test different prototypes in user
interviews.
Quantitative data from the experiment showed the duration of the journey
through the application, drop-off rates, and completion rates.
Qualitative data from the associated users highlighted pain points in the
user experience, where there remained some confusion, and validated some
of the positive experiences.

The qualitative and quantitative feedback combined provided confirmation
of which approach was the most suitable. This meant the product team
could confidently code the "best" approach as validated by data.
This process, from end to end, took one week.

The format of the experiment documentation is really as important as the content. It
is the content that tells you how well you have designed the experiment; for example,
does the experimental design allow for too many opportunities where the outcome may
be ambiguous?
Good experiments need the following minimum details to be successful:
• Hypothesis: Formulated as a sentence, often expressing an assumption.
• Current condition: What is the situation now (as measurable as possible)?
• Target condition: What are we trying to achieve (as measurable as possible)?
• Obstacles: What could prevent us from achieving the target condition? What
could cause interference or noise?
• Pass: How can we define a positive pass? If the target condition may not always
be achieved, then what do we consider a significant enough change to conclude
the experiment is confirming the hypothesis, that is, passing with a positive
outcome?
• Measures: How can we measure the progress?
• Learning: Always capture outcomes and learning, which should ideally lead to
more experiments of higher order.
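These minimum details lend themselves to a lightweight, structured record. As a sketch only (the field names and the PetBattle-flavored sample values below are our own illustration, not a prescribed format), an experiment could be captured like this:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A lightweight record capturing the minimum details of a designed experiment."""
    hypothesis: str          # the assumption, formulated as a sentence
    current_condition: str   # the situation now, as measurable as possible
    target_condition: str    # what we are trying to achieve, as measurable as possible
    obstacles: list          # what could interfere with or add noise to the result
    pass_criteria: str       # what change is significant enough to confirm the hypothesis
    measures: list           # how progress will be measured
    learning: str = ""       # captured once the experiment concludes

# Hypothetical example values for illustration only
exp = Experiment(
    hypothesis="A live leaderboard will increase daily return visits by uploaders",
    current_condition="12% of uploaders return the next day",
    target_condition="20% of uploaders return the next day",
    obstacles=["seasonal traffic variation", "concurrent UI changes"],
    pass_criteria="next-day return rate rises above 16% sustained over two weeks",
    measures=["next-day return rate", "leaderboard page views"],
)
```

Whatever format the team chooses, the value is in making each field explicit and visible so the experiment can be tracked and its learning captured.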

Once described, the experiments can be implemented, tracked, and measured in order
to analyze the outcomes. In an ideal world, an experiment will have binary success/
failure criteria, but most often we need to analyze data using statistical methods to
find out if there is a significant correlation between the change introduced with the
experiment and the change in the Target Outcome.

NOTE
Successful experiments are not experiments that have proven our assumption is
correct. Successful experiments are those that provide valid and reliable data that
shows a statistically significant conclusion.
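To make the note above concrete, suppose an A/B Test yields completion counts for two variants of a page. A two-proportion z-test is one common way to check whether the observed difference is statistically significant. The counts below are invented for illustration, and this sketch uses only the Python standard library:

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, total_a, success_b, total_b):
    """Return (z, two-sided p-value) for the difference between two completion rates."""
    p_a, p_b = success_a / total_a, success_b / total_b
    p_pool = (success_a + success_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# variant A: 120 completions out of 1,000 sessions; variant B: 165 out of 1,000
z, p = two_proportion_z_test(120, 1000, 165, 1000)
print(round(z, 2), round(p, 4))  # a p-value below 0.05 suggests a significant difference
```

Real experiments often need more care (sample-size planning, multiple-comparison corrections), but even a simple check like this guards against declaring a winner on noise.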

Design of Experiments is nothing without Discovery or Delivery, which is the main
reason for combining this practice with others. Experiments are sourced from the
information collected during practices on the Discovery Loop. Hypotheses are formed
during Impact Mapping. Assumptions are noted during Event Storming, Empathy
Mapping, and other human-centered design practices. Ideas are captured in all
Discovery practices.
Experiments need to be prioritized as we can only do so much in the time we have.
Combining this practice with the various prioritization matrices, such as Impact-Effort
Prioritization or How-Now-Wow Prioritization, or even economic prioritization
practices such as Weighted-Shortest-Job-First, helps a lot. We are about to explore
each of these in the next part of this chapter.
Experiments are often realized first through rapid prototyping, which requires user
research and user testing, which we do on the Delivery Loop. This combination
provides for fast learning even before a single line of code is written.
Experiments can be run in production as well. In fact, tests in production are the
ultimate form of validation of ideas/hypotheses/assumptions as the validation is
supported by real data and real customer actions and behavior. The A/B Testing
practice provides a very valuable combination.
Often, you may have a set of experiments go through a sequence of Rapid Prototyping/
Prototyping with User Research, and then a subset of successful experiments would
be carried forward to production to pass through A/B Testing. There are other
mechanisms of controlling deployments that will enable measuring and learning from
real customer behavior – we'll introduce all of these later in this chapter.
You can read and discuss this practice further at openpracticelibrary.com/practice/
design-of-experiments.
Designed experiments should end up on a User Story Map and Value Slice Map so that
they can be prioritized against all other work.
Let's look at a couple of other tools that can help with the prioritization discussions,
starting with the Impact and Effort Prioritization Matrix.

Impact and Effort Prioritization Matrix


The Impact and Effort Prioritization Matrix is a decision-making/prioritization practice
for the selection of ideas (such as functional feature ideas, performance ideas, other
non-functional ideas, platform growth ideas, and so on).
This practice opens up product development for the whole team (which really
understands effort) and connects them to stakeholders (who really understand impact).
Developing new products goes hand in hand with the generation of ideas, hypotheses,
and their testing/validation. Unfortunately, it is mostly impossible to test and evaluate
all the ideas and hypotheses we can come up with. This requires us to filter and
prioritize which of them to work on.
This matrix is simple, easy to understand, and very visual, and can include the whole
team in the process of transparent selection of ideas and hypotheses to work on first.
It also helps Product Owners and Product Managers build the product roadmaps and
Product Backlogs and explains priorities to stakeholders.
This practice is very powerful in helping to identify direction and ideas for pivoting
purely from the visualization.

Figure 11.18: Two-by-two matrix comparing Impact versus Effort



Four separate groups of ideas have emerged from this practice:
• The Best Ideas to Focus on: High Impact / Low Effort – These should be
considered in the higher slices of the Value Slice Map.
• Research Required: High Impact / High Effort – These should be considered in
the higher slices of the Value Slice Map but, in later Product Backlog Refinement
practices, may be deemed lower priority given it will take longer to realize the
value.
• Follow Up: Low Impact / Low Effort – These should be low down on the Value
Slice Map and either followed up if time permits or removed completely.
• No Way – Bad Ideas: Low Impact / High Effort – These should be low down on
the Value Slice Map or removed completely.
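Placement on the matrix is ultimately a team conversation rather than a calculation, but the quadrant logic behind the four groups above is simple enough to sketch. In this illustrative snippet, the 1-10 scores and the PetBattle-style idea names are invented for the example:

```python
def quadrant(impact, effort, midpoint=5):
    """Classify an idea scored 1-10 on impact and effort into its matrix quadrant."""
    high_impact = impact >= midpoint
    high_effort = effort >= midpoint
    if high_impact and not high_effort:
        return "Best Ideas to Focus on"
    if high_impact and high_effort:
        return "Research Required"
    if not high_impact and not high_effort:
        return "Follow Up"
    return "No Way - Bad Ideas"

# Hypothetical ideas with (impact, effort) scores agreed by the team
ideas = {
    "live leaderboard": (8, 3),
    "sponsored tournaments": (9, 8),
    "dark mode": (3, 2),
    "rewrite in a new framework": (2, 9),
}
for name, (impact, effort) in ideas.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The point of the exercise is the alignment reached while placing the sticky notes, not the scores themselves; the quadrants simply make the resulting conversation visible.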

The sources of ideas and hypotheses are all the Discovery practices, such as
Event Storming, Impact Mapping, and Empathy Mapping. While we perform those
aforementioned practices, often ideas may emerge for possible improvements or new
hypotheses may form.
Before adding any of them as items to the Product Backlog, these ideas and hypotheses
would typically need some research, analysis, and further elaboration.
Once prioritized, these ideas and hypotheses may lead to:
• New features being added through the User Story Map and Value Slice board
• Complete new features being broken down into smaller features or User Stories
and refactored on the Value Slice board
• User Research
• Design of Experiments
• Technical Spikes and UI Prototypes

The Impact and Effort Prioritization matrix has its own Open Practice Library page at
openpracticelibrary.com/practice/impact-effort-prioritization-matrix/ – a great place
to continue the learning and discussion about this prioritization practice.
A slightly different perspective on prioritization is achieved using the How-Now-Wow
Prioritization practice. Whereas the previous practice is used to filter out and prioritize
the very high-impacting features, this practice is used to identify and prioritize the
quick wins and base features needed for a product.

How-Now-Wow Prioritization
How-Now-Wow is an idea selection tool that is often combined with Brainstorming,
How-Might-We2 (HMW), and Design of Experiments. It compares and plots ideas on a
2x2 matrix by comparing the idea's difficulty to implement with its novelty/originality.
Similar to the Impact and Effort Prioritization Matrix, How-Now-Wow Prioritization
is simple, easy to understand, and very visual, and can include the whole team in the
process of transparent selection of ideas/hypotheses to work on first.

Figure 11.19: How-Now-Wow Prioritization map

Again, the sources of ideas and hypotheses are all the Discovery practices, such as
Event Storming, Impact Mapping, HMW, and Empathy Mapping. When we perform
those aforementioned practices, often ideas will emerge for possible improvements or
new hypotheses may form.
We can plot each of these on the How-Now-Wow matrix by assessing each item and
considering how easy or difficult it is to implement (using team members to collaborate
and align on this) and how new and innovative the feature is.

2 https://openpracticelibrary.com/practice/hmw/

Four separate groups of ideas have emerged from this practice. There are three we're
particularly interested in:
1. Now Ideas: Easy to implement and considered normal ideas for the product. These
should be considered in the higher slices of the Value Slice Map and are ideas we
would expect to deliver.
2. Wow Ideas: Easy to implement and considered highly innovative or new for the
product. These should be considered as potential ideas for the higher slices of
the Value Slice Map and would be particularly valuable if innovation and market
differentiation were deemed high-priority focus areas. If the Priority Sliders
practice has already been used, it may provide some direction here.
3. How Ideas: Hard to implement and considered highly innovative or new for
the product. These would benefit from further research to understand the
implementation difficulty and potential impact further. Design of Experiments,
prototyping, and further User Research will help validate whether this innovation
is something that would be well received. Technical Spikes and research will help
establish confidence and potentially easier solutions to implement.
4. Other ideas: Hard to implement and not particularly innovative or new for the
product. We're not interested in these ideas at all.
Once placed on the How-Now-Wow matrix, these ideas and hypotheses may lead to:
• New features being added through the User Story Map and Value Slice board
• Complete new features being broken down into smaller features or User Stories
and refactored on the Value Slice board
• User Research
• Design of Experiments
• Technical Spikes and UI Prototypes

For more information on the How-Now-Wow Prioritization practice and to start
a conversation about how to best use it, have a look at openpracticelibrary.com/
practice/how-now-wow-prioritization-matrix.
Our primary motivation behind using practices such as Impact and Effort Prioritization
and How-Now-Wow Prioritization is to facilitate conversation. Practices that get
business folks talking to each other and aligning on an approach with some shared
understandings are great. Practices that get techie folks collaborating and gaining
a common understanding of the implementation approach and complexity are also
great. Practices that get business people and techie people all collaborating to reach
a consensus and a common view of both the business context and implementation
approach are amazing. These two practices are examples of those.

Figure 11.20: Different stakeholders collaborating to gain a better understanding of the product

Both practices highlighted some features to do more research on. Ideas categorized
in the How quadrant of the How-Now-Wow matrix will benefit from additional
research, as will ideas categorized in the High Effort / High Impact quadrant of the
Impact and Effort Prioritization matrix.
Many of the human-centered design practices outlined in Chapter 8, Discovering the
Why and Who, will help with this research. This includes Empathy Mapping, qualitative
user research, conceptual design, prototyping, and interaction design. If the feature
area is of very high importance, it may be valuable to invest in a specific practice that
will really further the understanding of the feature – the Design Sprint.

The Design Sprint


The Design Sprint has become a popular practice to support product research. It is a
five-day customer-centric process for rapidly solving a key challenge, creating new
products, or improving existing ones. Design Sprints enable you to:
• Clarify the problem at hand and identify the needs of potential users.
• Explore solutions through brainstorming and sketching exercises.
• Distill your ideas into one or two solutions that you can test.
• Prototype your solution and bring it to life.
• Test the prototype with people who would use it.

The process phases include Understand, Define, Sketch, Decide, Prototype, and
Validate.
The aim is to fast-forward into the future to see your finished product and customer
reactions, before making any expensive commitments. It is a simple and cheap way to
validate major assumptions and the big question(s) and point to the different options
to explore further through delivery. This set of practices reduces risks when bringing
a new product, service, or feature to the market. Design Sprints are the fastest way to
find out if a product or project is worth undertaking, if a feature is worth the effort, or
if your value proposition is really valid. For the latter, you should also consider running
a Research Sprint.3 It compresses work into one week and most importantly tests the
design idea and provides real user feedback in a rapid fashion.
By now, there are many different variations of the Design Sprint format. You may come
across the Google Ventures variation – the Design Sprint 2.0 – which is the agenda
shown below. The best thing to do is to try different variations and judge which one
works for what context.

3 https://library.gv.com/the-gv-research-sprint-a-4-day-process-for-answering-important-startup-questions-97279b532b25

• Monday: Intro; How Might We; Map; Long-Term Goals; Sprint Questions; Lightning Demos; Taking Notes; Drawing Ideas; Crazy 8's; Develop Concepts; Start User Test Recruiting

• Tuesday: Decide on Solution; Lightning Demos; Decider Vote; User Test Design; Storyboard; Recruiting

• Wednesday: Prototype Creation; User Test Preparation; Room Setup; Final Walk-through

• Thursday: Testing with End Users; Interview; User Test

• Friday: Analyze Results; Map and Prepare Showcase Findings (demonstrate process and findings)

Table 11.1: A five-day Design Sprint

Effectively, we are using the same Mobius Loop mental model as used throughout this
book but micro-focused on a particular option in order to refine understanding and
conduct further research about its value. That improved understanding of relative value
then filters back into the overall Mobius Loop that classified this as an option worthy of
a Design Sprint.

Figure 11.21: The Design Sprint – quick trip round the Mobius Loop

A Design Sprint will really help refine the shared understanding of the value a feature
or group of features may offer in a product. They may be functional user features. They
may also be non-functional features that are more focused on improving the platform
and improving the development experience. The same agenda as above can apply where
"users" are developers or operators and the Design Sprint is focused on researching
some potential work that will improve their development or operations experience.
This practice will help elaborate and refine the information that is pushed through
the User Story Map and Value Slicing Map. We see it as a practice on the Options
Pivot because it will help decide whether or not to proceed with the delivery of the
associated features.
Read more about the practice, add your own experiences, or raise any questions you
might have at openpracticelibrary.com/practice/design-sprint/.
The User Story Mapping practice helps us visualize our work into a story with a
clear backbone. Value Slicing allows us to form incremental release plans that can
be delivered in iterations. The Impact and Effort Prioritization and How-Now-Wow
Prioritization practices help provide alternate perspectives to help with the Value
Slicing. The Design Sprint allows us to dive deeper into a specific feature area to
research it further, so we can prioritize with increased confidence.

All of these practices (and many others you'll find in the Open Practice Library)
are homing in on us being able to produce an initial Product Backlog – a single,
one-dimensional list of stuff we're going to take into the Delivery Loop.
Let's now look at how we translate the information from our Value Slices into a Product
Backlog and how we can continue to prioritize it.

Forming the Initial Product Backlog


Here's some good news. Forming the Product Backlog is really, really easy. If you've run
some Discovery Loop practices and then run User Story Mapping and Value Slicing
of the resulting learning, all the hard thinking, collaboration, and alignment has been
done.

Figure 11.22: Value Slices to drive the initial Product Backlog

This has been a great blend of practices and, if there was strong collaboration and a
sense of alignment throughout (which is facilitated by having a strong foundation of
open culture), the Value Slice Map should represent a combined, shared view of work
and how it can be incrementally released.
To create the Product Backlog, we simply copy each sticky note in the top slice from
left to right and place them in a single column.

Figure 11.23: The initial Product Backlog

The sticky note on the left of the top slice will be copied and placed as the item at the
top of the Product Backlog. The sticky note to the right of it on the top slice will be the
second item on the Product Backlog. Once we've copied all items in the top slice, we
move to the second slice of value and, again, copy each of the items from left to right
onto the Product Backlog.
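The copying rule is mechanical enough to sketch in a few lines of Python. The slice contents below are invented for illustration, not taken from a real Value Slice Map:

```python
# Hypothetical Value Slice Map: each inner list is one slice of value,
# with items ordered left to right; slices are ordered top to bottom.
value_slices = [
    ["Open the site", "Upload a pet", "Vote for a pet"],   # top slice
    ["Display leaderboard", "Run daily tournament"],       # second slice
    ["Add sponsor adverts"],                               # third slice
]

# The Product Backlog is a single column: every item from the top slice
# left to right, then every item from the next slice, and so on.
product_backlog = [item for value_slice in value_slices for item in value_slice]

for position, item in enumerate(product_backlog, start=1):
    print(f"{position}. {item}")
```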

Figure 11.24: Summary of the Value Slicing practice



Figure 11.25: Creating a Product Backlog from the Value Slicing Canvas

We end up with a single column of Product Backlog items that have been sourced
and prioritized through a collection of robust practices. That traceability is important
because we can trace back to the Discovery Loop practice that generated the idea and
the value it is intended to deliver.
Let's look at that traceability in action with our PetBattle organization.

PetBattle — Tracing Value through Discovery and Delivery Practices

We saw earlier in this chapter how slices of value were created from the
User Story Map. We also saw how the User Story Map was built up entirely
from the learning captured through Discovery Loop practices.
Translating this to a Product Backlog is easy.

Figure 11.26: Transforming Value Slices into a Product Backlog



This is the beginning of the life of the PetBattle Product Backlog. It will
remain living, breathing, and always ready for updates as long as the
PetBattle product is in operation.
In fact, the team immediately sees some early prioritization needed and
recommends moving the CI/CD workshop and team lunch/breakfast items
to the top. They all agreed there was no point writing any code or building
any features until they had CI/CD in place and a fed and watered team!

The Product Backlog is a living, breathing artifact. It should never be static. It should
never be done. It is a tool that is always available for teams and stakeholders to
reference and, in collaboration with Product Owners, a place to add ideas, elaborate on
existing ideas, and continue to prioritize work items relative to each other.
From this moment onward, we will start and continue the practice of Product Backlog
Refinement.

Product Backlog Refinement


The Product Backlog Refinement practice sits at the heart of the Mobius Loop. We use
it coming out of a Discovery Loop. We use it coming out of a Delivery Loop. We use it all
the time.
It is perhaps the one practice that sits on the Mobius Loop that does not have a
suggested or directed time-box to execute it in. There is also not a suggested or
directed number of times you would run it or when you would run it. Product Backlog
Refinement should occur as often or as little as needed to get to a backlog that
stakeholders and team members have confidence in. You can even use the Confidence
Voting practice (as introduced in Chapter 4, Open Culture) to measure this.

There is also no defined agenda for a Product Backlog Refinement session and it can
be attended by a variety of different people, such as development and operational
team members, business stakeholders, and leadership. The activities that take place in
Product Backlog Refinement include:
• Talking through and refining the collective shared understanding of an item on
the backlog, its value to users, and the implementation needs that need to be
satisfied
• Re-writing and refining the title of a Product Backlog item to better reflect the
collective understanding
• Writing acceptance criteria for a specific item on the Product Backlog
• Doing some relative estimation of the effort required to deliver a feature from
the backlog to satisfy the acceptance criteria
• Splitting an item on the backlog into two (or more) smaller items
• Grouping items together into a more standalone item that will deliver a stronger
unit of value
• Capturing new ideas and feedback on the Product Backlog
• Prioritizing and re-ordering items on the Product Backlog

All of the artifacts we've generated on the Discovery Loop and Options Pivot are
useful to look at, collaborate on, and refine further when performing Product Backlog
Refinement. They too are all living, breathing artifacts and, often, conversations during
Product Backlog Refinement trigger further updates to these. So, for example, we
may add a new deliverable to our Impact Map and connect it to an impact and actor
to test with. We may elaborate on some details on the Event Storm as we start to
consider the implementation details of an associated backlog item. As new items are
captured from Product Backlog Refinement, the Impact and Effort Prioritization Matrix,
How-Now-Wow Prioritization Matrix, and Value Slice board artifacts are all available to
relatively plot the new item against existing items. In Chapter 17, Improve It, we'll return
to the Options Pivot following an iteration of the Delivery Loop and look at how the
measurements and learning captured from delivery can drive further Product Backlog
Refinement.
Arguably one of the most important aspects of Product Backlog Refinement is
prioritization and, in particular, prioritizing what is toward the top of the Product
Backlog. This is what the team will pull from when planning their next iteration of the
Delivery Loop. So, it's important that the items at the very top of the backlog truly
reflect what is most valuable and help generate the outcomes that matter.
For more details on Product Backlog Refinement and to converse with the community,
take a look at the Open Practice Library page at openpracticelibrary.com/practice/
backlog-refinement.

We've already seen a few tools that help with initial Product Backlog generation and
giving the first set of priorities. Let's look at a few more that will help with ongoing
Product Backlog prioritization.

Prioritization
Throughout this chapter, we've used the terms features and Product Backlog items
to explain the different units of work that we capture through Discovery and prioritize
and decide which to work on first in the Options Pivot. An important clarification
that's needed is that this does not just mean functional features. We are not just
deciding which shiny new feature the end users are going to get next. We need
to balance customer value against risk mitigation; we need to balance functional
against non-functional work. We do that by balancing research, experimentation, and
implementation.

Value versus Risk


When we prioritize Product Backlog items, we are relatively assessing all options
available to us. That does include new features we're going to implement. It also
includes defects and problems in production that need to be fixed. It includes
non-functional improvements to the architecture to make future development and
operations simpler and stronger. It includes the experiments we might want to execute
or further research we want to do in the form of a user interface prototype or a
technical spike. It's really anything that will consume the time of some of the cross-
functional product team members.
When we prioritize, we need to think about the relative value delivering the item will
bring as compared to the risk it might mitigate through acquiring additional learning
and confidence. There are different kinds of risk, including:
• Business Risk: Are we building the right thing?
• Technical Risk: Will this thing work on the platform and will it scale?
• Cost and Schedule Risk: Will we deliver within the right time-box to meet
market needs? Will we be able to meet any cost constraints?

Running Technical Spikes and proving some of the non-functional aspects of the
platform early can provide the knowledge and confidence value, which can be equally, if
not more, important than customer value achieved from delivering functional features.
In fact, this non-functional work helps us achieve the Enabling Outcomes outlined
in Chapter 10, Setting Outcomes, whereas the functional implementations are more
focused on achieving the primary outcomes.

Let's look at an economic prioritization model that can help us quantify risk, knowledge
value, and customer value. It can be used by a Product Owner in collaboration
with wider groups of team members and stakeholders and presented to the wider
organization.

Cost of Delay and WSJF


Weighted Shortest Job First (WSJF) is an economic prioritization model. It is a very
popular practice in the Scaled Agile Framework (SAFe), but works very well as a
standalone practice with any size of product or organization.
Many organizations struggle to prioritize risk mitigation action, learning initiatives,
or delivery value using any scientific approach. Instead, prioritization comes down to
meetings in a room where the Loudest Voice Dominates (LVD) and/or decisions are
made by the Highest Paid Person's Opinion (HIPPO).

Figure 11.27: Decisions are made based on the HIPPO

So, what is WSJF? It is based on Don Reinertsen's research on the Cost of Delay, the subject of his book The Principles of Product Development Flow – Second Generation Lean Product Development. Reinertsen famously said, "If you only quantify one thing, quantify the cost of delay." Josh Arnold explains that the Cost of Delay is calculated by assessing the impact of not having something when you need it. A typical example is the cost incurred while waiting to deliver a solution that improves efficiency. It is the opportunity cost between having something now and getting it later.4

4 Source: Mark Richards, SAFe City Simulation Version 2.0



The core thinking behind the Cost of Delay is value foregone over time. For every day
we don't have an item in the market, what is it costing the organization? If the value of
the item is a cost-saving initiative, how much money is the organization not saving by
not implementing this feature? If the value of the item is revenue-related, what is the
additional revenue they're missing out on by not implementing it?
The Cost of Delay can be sensitive to time. There are seasonal influences – for example,
shipping in retail can be very time-sensitive around, say, the holiday season. Changes
may be needed for legislation and compliance. The cost can be very high if something is
not delivered by a certain date when new legislation kicks in. The Cost of Delay will be
nothing in advance of this date and very high after this date.
There are three primary components that contribute to the Cost of Delay:
• Direct business value either to the customer and/or the organization. This
reflects preferences users might have that will drive up their customer
satisfaction. It will also reflect relative financial reward or cost reduction that the
item is expected to drive.
• Time criticality to implementing the solution now or at a later date. This
incorporates any seasonal or regulation factors that might drive time criticality,
as well as whether customers are likely to wait for solutions or if there is a
compelling need for it now.
• Risk reduction and opportunity enablement is the indirect business value this
might bring to the organization. It considers the hidden benefits this might bring
in the future as well as reducing the risk profile.

Using Cost of Delay to prioritize work in agile backlogs will result in items being
prioritized by value and sensitivity to time. It also allows us to have a lens on direct
business value (such as new functional feature development) and indirect business
value (such as non-functional improvements to the OpenShift platform).
Cost of Delay =
Business Value + Timing Value + Risk Reduction/Opportunity Enablement Value
WSJF adds a further dimension to this by considering the cost of implementation. Reinertsen said it is "critical to remember that we block a resource whenever we service a job. The benefit of giving immediate service to any job is its cost-of-delay savings, and the cost is the amount of time (duration) we block the resources. Both cost and benefit must enter into an economically correct sequencing."5
Weighted Shortest Job First (WSJF) = Cost of Delay (COD) / Duration

5 The Principles of Product Development Flow – Second Generation Lean Product Development by Donald G. Reinertsen

What unit do we use for the three components in Cost of Delay and Duration? It's
arbitrary. The actual numbers are meaningless by themselves. The agile practice we use
to support COD and WSJF is Relative Estimation,6 whereby we are relatively assessing
the magnitude of business value, timing value, and risk reduction/opportunity
enablement for each item on the Product Backlog relative to each other item. This
allows us to prioritize the Product Backlog according to WSJF.
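To make the arithmetic concrete, here is a small sketch in Python. The item names, scores, and durations are invented for illustration; real values come from the team's Relative Estimation conversations:

```python
# Each item carries the three Cost of Delay components plus a relative
# duration. All names and scores here are hypothetical examples.
items = [
    # (name, business value, time criticality,
    #  risk reduction/opportunity enablement, duration)
    ("Automated deployment", 4, 8, 9, 3),
    ("Enter competition",    9, 9, 2, 3),
    ("Verify image",         6, 3, 5, 8),
]

def cost_of_delay(bv, tc, rr_oe):
    # Cost of Delay = Business Value + Timing Value
    #                 + Risk Reduction/Opportunity Enablement Value
    return bv + tc + rr_oe

def wsjf(bv, tc, rr_oe, duration):
    # Weighted Shortest Job First = Cost of Delay / Duration
    return cost_of_delay(bv, tc, rr_oe) / duration

# Highest WSJF first: the most valuable, most urgent, shortest jobs
# rise to the top of the Product Backlog.
ranked = sorted(items, key=lambda i: wsjf(*i[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF {wsjf(*scores):.2f}")
```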
We've now introduced several practices on this first trip to the Options Pivot that help
us generate more ideas from discovery, refine them, prioritize them, and, ultimately,
decide which options we're going to take into a Delivery Loop next. But who makes this
decision? The term we has been used a lot in this chapter, emphasizing the importance
of collaboration. But what happens when we don't have a consensus? Who gets the final
say? This is where the importance of great Product Ownership comes in.

PetBattle – Prioritizing using WSJF


The PetBattle team gathered to conduct a WSJF session on all of the features
and work they'd captured on the Value Slice board.
They chose to use this practice in addition to the Impact and Effort
Prioritization and Value Slicing because they felt they needed a way to
quantify both risk reduction and timeliness of upcoming work. As they had
several Enabling Outcomes that are more non-functional based, the team
felt that the Cost of Delay and WSJF would allow them to correctly articulate
the value of this work relative to functional features.
For each item, the team would spend one minute talking about their
understanding of the feature. Each team member would then write
down four values – business value, time criticality, risk and opportunity
enablement, and duration. The first three values would be given a rating of
1 to 10. The duration used a modified Fibonacci sequence7 and could be one of 1, 2, 3, 5, 8, or 13.

6 https://openpracticelibrary.com/practice/relative-estimation/

7 https://www.mountaingoatsoftware.com/blog/why-the-fibonacci-sequence-works-well-for-estimating

The team members would reveal individual scores to each other and a
conversation would follow to converge and align on the team's assessment
for each score.
This resulted in a Cost of Delay value and a WSJF value for each item.

Table 11.2: Calculating WSJF for PetBattle work



Some of the conclusions drawn out during the conversation included:


• Most of the functional features were of high business value and
of high time criticality to launch the minimal version of this
application.
• The non-functional work driven by Enabling Outcomes was deemed
of high value for risk reduction and of lower (but some) business
value.
• Some non-functional work was clearly more time-critical than
others, such as having automated deployment.
• Most work was relatively similar in size with the exception of the
Verify Image feature, which has some uncertainty around the
solution.
The team agreed this had been a useful Product Backlog Refinement
exercise and would help with overall prioritization.

The previous sections on forming the Product Backlog, refining it, and prioritizing it all describe key responsibilities of Product Ownership, which we will now explore further.

Product Ownership
Everything in this chapter is about Product Ownership. Everything in the previous
chapters about Discovery is Product Ownership. Prioritizing early efforts to build a
foundation of open culture, open leadership, and open technology practices requires
strong Product Ownership from the outset.
There are whole books and training courses written about Product Ownership, Product
Owners, and Product Managers. Much of our thinking has been inspired by the amazing
work of Henrik Kniberg. If you have not seen his 15-minute video on YouTube entitled
Product Ownership in a Nutshell,8 please put this book down, go and get a cup of tea,
and watch the video now. Maybe even watch it two or three times. We, the four authors
of this book, reckon we've collectively seen this video over 500 times now!

8 https://www.youtube.com/watch?v=502ILHjX9EE

Some say it is the best 15-minute video on the internet, which is quite an accolade! It
packs in so many important philosophies around Product Ownership in such a short
amount of time. We tend to show this video during our DevOps Culture and Practice
Enablement sessions, when we start a new engagement with a new team, or simply to
kick-start a conversation with a stakeholder on what agile is really about.
The resulting graphic is well worth printing out and framing on the wall!

Figure 11.28: Agile Product Ownership in a Nutshell, courtesy of Henrik Kniberg

During our time working on hundreds of different engagements, we've seen some
examples of amazing Product Ownership. We've also seen some really bad examples.
Let's look at some of the patterns that we've observed, starting with Product Owners.

Experimenting with Different Product Owners


Working with a UK retail organization in
the mid-2010s, we were using the Scrum
framework to deliver a four-tier IBM
WebSphere Commerce and MobileFirst
platform.
This was the first time this organization had
attempted an agile delivery and it was complex
due to multiple Scrum teams, one Waterfall
team, multiple suppliers, multiple time zones,
multiple technology vendors including some immature technology, multiple
business units, and so on.
The first Product Owner assigned to the engagement was a contractor
who had no prior history or engagement with the organization. We quickly
learned that he had no empowerment to make decisions as he constantly
had to assemble meetings with stakeholders any time a Scrum team needed
clarification or asked to do some Product Backlog refinement. Product
Owners need to be empowered.
The organization did adapt and re-staffed the Product Owner role to be a
very senior business-focused person who was a direct report of the CIO.
This was better as he certainly was empowered to make fast decisions and
prioritization choices. The challenge this time was getting access to him
regularly. He promised us one hour per week. With a Product Backlog that
needed a lot of refinement and a lot of clarification to development teams,
this was nowhere near enough. Product Owners need to be available to their
team(s).
There was a further adaptation and the Product Owner role was given to a
long-standing and highly respected technical architect who was dedicated
to the project. This worked really well as he had a great relationship with
both technology and business stakeholders, knew the organization's strategy
and priorities really well, and had a great personality that people felt very
comfortable collaborating with. Product Owners need to understand the
business.

Over time, the need for direct access to this Product Owner diminished. It
is a pattern I've noticed working with several organizations where Product
Ownership has been particularly strong. Great Product Owners democratize
Product Ownership and provide a direct connection between teams and
stakeholders. Product Owners should see their current role as one to self-
destruct and not be needed in the long term.

Next, let's look at how great Product Owners have approached their first iterations and
what they've prioritized to form their first iteration goals.

Patterns of Early Sprints and the Walking Skeleton

In the next chapter, we're going to switch to
the Delivery Loop and talk about the practices
we use to plan, deliver, showcase, and learn
from iterations of the delivery loop.
Thinking specifically about the first time a team
does an iteration of delivery for a new product,
I've noted a consistent pattern in what great
teams have as their first delivery goal. I've
worked on several engagements where the goal has been virtually identical.
This may be my coaching and I have influenced this, but there is always a
sense of alignment and confidence that this is the best way to de-risk, learn,
and set up for faster delivery in subsequent iterations.
An example first iteration goal from a European automotive organization
was:
Set up our workspace, provide a walking skeleton connecting the frontend
to API to database underpinned by CI/CD.

Several other engagements have had almost identical goals, and the pattern
is strong because:
• The team wants to set up their workspace. That may be their physical
workspace with lots of information radiators and collaboration
space. It may be a virtual workspace with digital tooling. It may be a
development environment using code-ready workspaces and being
familiar with all tools to be used.
• They plan to build a walking skeleton. This is a thin slice of the whole
architecture delivered in one iteration. There won't be any fancy
frontend or complex backend processing. They will prove full-stack
development and that the cross-functional team representing all parts
of the logical architecture can deliver working software together. It's a
walking skeleton because it is a fully working product. It just doesn't do
very much yet!
• Their work will be underpinned by continuous integration and
continuous delivery. This green-from-go practice means they are
set up for success when it comes to automating builds, tests, and
deployments. If they prove this and learn this for a thin slice, it will
become increasingly valuable as we start to put all the flesh and organs
into the walking skeleton!

The final part of this chapter shifts the focus from what we're deciding to deliver
next to how we're going to measure and learn from our experiments and the features
we deliver. The OpenShift platform enables our teams to consider several advanced
deployment capabilities.

Advanced Deployment Considerations


Earlier in this chapter, we explained the practice of Design of Experiments and how
we intend to take an experimental mindset to our development, especially where
assumptions and hypotheses have been formed in our Discovery Loop practices.

The OpenShift platform enables several different deployment strategies that support
the implementation of experiments. When we are on the Options Pivot, we should
consider these strategies and which (if any) we should plan with the delivery of the
associated Product Backlog item. The advanced deployment strategies we can consider
include:
• A/B Testing
• Blue/Green Deployments
• Canary Releases
• Dark Launches
• Feature Toggling

We introduce these concepts here as, from an options planning perspective, this is
where we need to be aware of them. We'll return to specific implementation details in
Section 6, Build It, Run It, Own It, and we'll explore how we use the resulting metrics in
Section 7, Improve It, Sustain It.

A/B Testing
This is a randomized experiment in which we compare and evaluate the performance
of different versions of a product in pairs. Both product versions are available in
production (live) and randomly provided to different users. Data is collected about
the traffic, interaction, time spent, and other relevant metrics, which will be used
to judge the effectiveness of the two different versions based on the change in user
behavior. The test determines which version is performing better in terms of the Target
Outcomes you have started with.
A/B Testing is simple to apply, fast to execute, and often conclusions can be made
simply by comparing the conversion/activity data between the two versions. It can be
limiting as the two versions should not differ too much and more significant changes
in the product may require a large number of A/B Tests to be performed. This is one
of the practices that allows you to tune the engine, as described in The Lean Startup9 by
Eric Ries.

9 http://theleanstartup.com/

Figure 11.29: A/B Testing

For more information on this practice and to discuss it with community members
or contribute your own improvement to it, please look at openpracticelibrary.com/
practice/split-testing-a-b-testing/.
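As an illustration of how the collected data might be compared, here is a minimal Python sketch. The visitor and conversion counts are invented, and a real experiment would also test for statistical significance before declaring a winner:

```python
# Minimal, illustrative A/B comparison by conversion rate.
# All counts below are made up for the example.
def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the target action."""
    return conversions / visitors

# Hypothetical traffic, randomly split between the two versions.
a_rate = conversion_rate(conversions=120, visitors=2400)  # version A
b_rate = conversion_rate(conversions=156, visitors=2400)  # version B

# Relative lift of B over A; positive means B performs better
# against the Target Outcome being measured.
lift = (b_rate - a_rate) / a_rate
print(f"A: {a_rate:.2%}  B: {b_rate:.2%}  lift: {lift:+.1%}")
```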

Blue/Green Deployments
Blue/Green Deployment is a technique in software development that relies on two
productive environments being available to the team. One of them, let's call it green,
is operational and takes load from the reverse proxy (load balancer/router). The other
environment, let's call it blue, is a copy upgraded to a new version. It is disconnected
from the load balancing while this upgrade is completed.

Figure 11.30: Blue/Green Deployments



The team can perform all required tasks for an upgrade of the product version on the
blue environment without the rush of downtime. Once the blue environment is ready
and has passed all tests and checks, the team simply redirects the reverse proxy (load
balancer/router) from the green environment to the blue environment.
If everything works fine with the blue environment, the now outdated green can be
prepared to be recycled to serve as the blue for the next release. If things go bad, the
team can switch back to a stable environment instantly using the reverse proxy/load
balancer/router.
This is a feedback loop practice that allows the team to get prompt feedback from the
real-life use of their changes. It enables continuous delivery and provides safety for
performing complex releases. It removes the time pressure and reduces the downtime
to practically zero. This is beneficial for both technical teams and end users who will
not notice glitches or unavailability of the service/product, provided that the new
version is performing at par. In case of adverse effects, it allows the teams to have an
instant roll-back alternative and limit the negative impact on customers.
To explore this practice further, visit the Open Practice Library page at
openpracticelibrary.com/practice/blue-green-deployments/.
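On OpenShift, the reverse proxy role described above is typically played by a Route. The fragment below is an illustrative sketch only; the Route and Service names (petbattle, petbattle-green, petbattle-blue) are assumptions made for this example:

```yaml
# Illustrative Route for a blue/green switch. All live traffic goes to
# whichever Service spec.to names; changing "petbattle-green" to
# "petbattle-blue" performs the cut-over, and changing it back is the
# instant roll-back.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: petbattle
spec:
  to:
    kind: Service
    name: petbattle-green
```

Because the switch is just an edit to the Route, the cut-over and roll-back are near-instant and involve no redeployment.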

Canary Releases
In software development, this is a form of continuous delivery in which only a small
number of the real users of a product will be exposed to the new version of the product.
The team monitors for regressions, performance issues, and other adverse effects and
can easily move users back to the working old version if issues are spotted.
The term comes from the use of caged birds in coal mines to discover the buildup
of dangerous gases early on. The gases would kill the bird long before they became
life-threatening for the miners. As with the canary in the mine, this release practice
provides an early warning mechanism for avoiding bigger issues.
The canary release provides continuous delivery teams with safety by enabling them to
perform a phased rollout, gradually increasing the number of users on a new version
of a product. While rolling out the new version, the team will be closely monitoring
the performance of the platform, trying to understand the impacts of the new version,
and assessing the risks of adverse effects such as regressions, performance issues, and
even downtime. This approach allows the team to roll back the release as soon as such
adverse effects are observed without the majority of the customers being impacted
even for a limited amount of time.
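OpenShift Routes also support weighted backends, which is one way to realize this phased rollout. The service names and weights below are illustrative assumptions:

```yaml
# Route splitting traffic 90/10 between the stable and canary versions.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: petbattle
spec:
  to:
    kind: Service
    name: petbattle-stable
    weight: 90
  alternateBackends:
    - kind: Service
      name: petbattle-canary
      weight: 10
```

Gradually shifting weight from the stable Service to the canary widens exposure; shifting it back rolls the release back.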

Canary Release is similar to A/B Testing in the sense that it exposes only a part of the
population to the new feature, but unlike A/B Testing, the change is typically a
completely new feature rather than a small tweak to an existing one. The
purpose is different too. A/B Testing looks to improve the product's performance in
terms of achieving business outcomes, while the Canary Release is focused entirely on
technical performance.

Figure 11.31: Canary deployments

You can read more about this practice, contribute improvements, or have a discussion
with the wider community at openpracticelibrary.com/practice/canary-release.

Dark Launches
Dark Launches are another continuous delivery practice that releases new features to
a subset of end users and then captures their behavior and feedback. They enable
the team to understand the real-life impact of these new features, which may be
unexpected for users in the sense that no users asked for them. It is one of the last
steps for validating a product/market fit for new features. Rather than launching the
features to your entire group of users at once, this method allows you to test the waters
to make sure your application works as planned before you go live.
Dark Launches provide safety by limiting the impact of new features to only a subset
of the users. They allow the team to build a better understanding of the impact
created by the new feature and the ways the users would interact with it. Often novel
ways of interaction can surface, ways that were not initially envisioned by the team.
This can be both positive and negative, and the limited availability allows the team to
draw conclusions from the real-life use and decide if the feature will be made widely
available, further developed, or discontinued.
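One common way to implement the "subset of users" part is deterministic bucketing: hash a stable user identifier into a bucket and expose the dark-launched feature only to buckets below a threshold. The sketch below is a generic Python illustration, not code from the book:

```python
import hashlib

def in_dark_launch(user_id: str, percent: int = 5) -> bool:
    """Return True if this user falls into the dark-launch cohort.

    Hashing the user id yields a stable bucket in the range 0-99, so a
    given user consistently sees the same behavior across sessions,
    while roughly `percent` percent of all users are exposed.
    """
    bucket = int(hashlib.sha256(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return bucket < percent
```

Raising `percent` gradually widens exposure; setting it to 0 withdraws the feature instantly without a re-deploy.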

Figure 11.32: Dark Launches

The Dark Launches practice has its own Open Practice Library page at
openpracticelibrary.com/practice/dark-launches/, so head there for further
information, to start a conversation, or to improve the practice.

Feature Flags
Feature Flags (also known as Feature Bits/Toggles/Flipping/Controls) are an
engineering practice that can be used to change your software's functionality without
changing and re-deploying your code. They allow specific features of an application to
be turned on and off for testing and maintenance purposes.
In software, a flag is one or more bits used to store binary values. So, it's a Boolean that
can either be true or false. A flag can be checked with an if statement. A feature in
software is a bit of functionality that delivers some kind of value. In its simplest form, a
Feature Flag (or Toggle) is just an if statement surrounding a bit of functionality in your
software.
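A minimal Python sketch of that idea follows. It assumes, purely for illustration, that flags are read from environment variables such as FEATURE_DISPLAY_TOURNAMENT; the function and feature names are hypothetical, not taken from the PetBattle code:

```python
import os

def feature_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from an environment variable.

    For example, FEATURE_DISPLAY_TOURNAMENT=true turns the feature
    on without changing or re-deploying the application code.
    """
    value = os.getenv(f"FEATURE_{name.upper()}", str(default))
    return value.strip().lower() in ("1", "true", "yes", "on")

def leaderboard_view(scores: list) -> dict:
    # The feature flag really is just an if statement around new behavior.
    if feature_enabled("display_tournament"):
        return {"scores": scores, "tournament": True}
    return {"scores": scores}
```

A real product would more likely use a feature flag service or ConfigMap-backed settings so flags can change at runtime; the environment-variable approach above is only the simplest illustration of the pattern.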
Feature Toggles are a foundational engineering practice and provide a great way to
manage the behavior of the product in order to perform experiments or safeguard
performance when releasing fresh new features.

Figure 11.33: Feature Flags

This practice has a page in the Open Practice Library at
openpracticelibrary.com/practice/feature-toggles, so have a look there for more
information or to engage with the community on it.
Having introduced these advanced deployment considerations and how they support
the Design of Experiments, let's look at how our PetBattle team might use them.

PetBattle – Tech Spikes, Prototypes, Experiments, and Feature Implementations
The team gathered by the new Product Backlog, which was, floor to ceiling,
a huge column of sticky notes! In fact, they didn't all fit in a single column so
the items toward the bottom fanned out to produce what looked like a big
funnel. It was nice and ordered at the top but the bottom would need some
organizing. But that was OK as the team reckoned there was enough work
on the backlog to keep them busy during the next few weeks, so they could
refine the Product Backlog as they went along.
Having attended a workshop on advanced deployment considerations,
they decided to do some refinement on the functional features on the
Product Backlog. They produced three columns on their whiteboard with
the headings Research, Experiment, and Implement. They time-boxed the
discussion for 45 minutes. There were 15 features to talk through, so on
average, 3 minutes per item. Their goal was to put each feature in one of the
columns with a short note on what their approach to the implementation
was.
• Open PetBattle: This was easy. Anyone using the app would need to
open it. IMPLEMENT.
• Display Leaders: Lots of questions about what and how to display.
How many leaders? Should we add pagination or scroll? They decided
some RESEARCH was needed – perhaps a UI prototype with some user
testing.
• Let me in please: The team had to go back to the Event Storm to
remind themselves what this was about! Again, it was a simple feature
of letting the user in to see Pets uploaded. IMPLEMENT.
• Vote for Cat: This triggered some conversation. Do they vote up or
down? Or do they just give a vote (or nothing at all)? The team was
divided and had heard differing views from user interviews. They
decided to EXPERIMENT with an A/B Test.
• Add my Cat: Not much research or experimentation needed. A
standard uploading tool was needed. Just IMPLEMENT.
• Verify Image: This sounded a bit trickier. There were emerging AI/ML
patterns available. It needed some technical RESEARCH and probably a
Technical Spike.
• Enter cat into tournament: Not much ambiguity here. IMPLEMENT.
• Display Tournament Cat: It wasn't clear if this was going to be well
received or not. The team thought they could EXPERIMENT with a
feature toggle and then it's easy enough to turn off.
• Disable "Add my Cat": Some users have more than one cat and will
want to add more than one. Let's EXPERIMENT with a Dark Launch of
this feature to a small subset of users.
• Vote for given cat: Once the team got the results from the A/B Test,
they could EXPERIMENT further and launch as a Canary Test.
• Update the Leaderboard: IMPLEMENT
• End Competition: IMPLEMENT
• Notify Players: Not clear how this would happen – SMS? Email? Other
mechanisms? The team decided to do some user RESEARCH.
• Deliver Prize to Winner: The prize strategy was still under
consideration, so more RESEARCH would be needed.
• Begin Next Tournament: This could either happen immediately or
the next day. Perhaps another A/B Test EXPERIMENT would show what
drop-off the team gets by waiting.
The team finished in 40 minutes. It was a great discussion and they felt
ready to do their first iteration planning with these decisions.

Let's look at another real-world experience to see just how simple yet effective this
experimental mindset can be.

Reframing the Question – How Much Can I Borrow or How Much House Can I Afford?
It can be very easy to fall into the trap of recreating
what you already have but with a shiny new frontend,
but ultimately that may not solve problems that
are more fundamental. While working for a bank, a
small team of us were tasked with trying to find out
why there was such a large drop-off in mortgage
applications once people had used an online Mortgage
Calculator. The calculator on the bank's site was
pretty simple: you popped in your salary and it told you what you could
borrow from them. There was no ability to add a deposit or specify a term
length to see how it would reflect repayment rates or lending criteria. The
calculator was also very slow and clunky, not mobile-friendly or responsive.
The quick solution for the bank was to just reskin it to make these cosmetic
fixes, but this didn't really help answer the question of why they were
getting hundreds of people doing the calculation but only a tiny percentage
of those people continuing to apply for a loan.

The team was very small, just one designer, two engineers, a business
analyst, and a Product Owner. As a small co-located team, buried in the
heart of the bank, we were able to move fast! We interviewed people who
had recently purchased mortgages with the bank to get insight into their
motivations for using the tool. We did a load of research by going into the
bank branches and asking people open-ended questions while they used
the existing tool. We collated this information along with how they were
accessing the calculator and, if they were to complete an application, what
device they would use, that is, their phone or their laptop.
Through this research we stumbled upon an interesting fact – people were
not interested in "How much can I borrow?" but "How much house can I afford?"
This simple difference might seem inconsequential but it massively affected
how we rebuilt the bank's online mortgage calculator. It meant people
wanted to be able to tailor their calculation to see how their rates and
lending criteria could be affected by, for example, having more income. Or,
if they were to continue to save for another year and have more of a deposit
saved, could they get a better rate? This flip meant people were using the
tool not to see whether they could afford a given home, but how much of a
home they could afford and by when.
It would have been very simple for us to just recreate the bank's existing
calculator with a new skin that ran on a mobile – but this would not have
addressed the core problem. By reframing the question, we were now in
a position to create a simple calculator tailored to the needs of the bank's
first-time buyers.

All these advanced deployment considerations provide powerful tools for use in Options
planning and how we can conduct research, experimentation, and implementation.

Research, Experiment, Implement


This chapter has highlighted that, when considering our options, it's not just
about prioritizing features and implementing them. We need to balance feature
implementation with ongoing research and experimentation.
A great way to summarize and visualize all of this is by using the Mobius Loop's
Options Map.

Creating an Options Map


We concluded Section 3, Discover It, with a Discovery Map – a single information
radiator that summarized the iteration of the Discovery Loop.
We will conclude Section 4, Prioritize It, with an Options Map – this is another
open-source artifact available in the Mobius Kit under Creative Commons that you
can use to summarize all the learnings and decisions taken during your journey in the
Options Pivot.
This map should slot neatly next to your Discovery Map and is used to summarize:
• What have we just focused on in the Options Pivot?
• Which outcomes are we targeting?
• What ideas or hypotheses do we need to validate?
• How will we deliver the options?
• Which options do we want to work on first?

When we return to the Options Pivot after an iteration of the Delivery Loop, we'll
complete the final section of this map:
• What did we learn?

Figure 11.34: The Options Map



Now let's look at PetBattle's Options Map.

PetBattle – The Options Map


There is a lot of detail captured in this map, so please feel free to explore the
online version available in the book's GitHub repository.

Figure 11.35: The PetBattle Options Map



The Options Map provides clarity and direction as to how we will prioritize the product
to help reach outcomes. It helps form our delivery strategy.

Conclusion
In this chapter, we focused on how we are going to deliver the outcomes set in the
previous section.

Figure 11.36: Adding practices to navigate us through Options

We explored the User Story Mapping and Value Slicing practices and how we take all
of the information captured in Discovery practices and push it through these tools.
We also showed how using some helpful practices to look at the same information
with slightly different lenses – Impact versus Effort Prioritization and How/Now/Wow
Prioritization – can help improve Value Slicing. Where proposed feature areas would
benefit from a deeper dive to understand the value, we recommended the Design Sprint
as an option.

We showed how these practices drive the initial Product Backlog prioritized by value
and how this produces a living, breathing artifact that will be subject to continuous
Product Backlog Refinement as we gather more learning, feedback, and metrics for our
delivery. The economic prioritization model WSJF, which is based on Cost of Delay,
provides a repeatable and quantifiable tool to drive this. It's one of many prioritization
tools that can help the Product Ownership function work smoothly and effectively.
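As a reminder of the arithmetic, WSJF divides the Cost of Delay (often approximated as the sum of relative business value, time criticality, and risk reduction/opportunity enablement scores) by the relative job size. A quick sketch, with illustrative parameter names:

```python
def wsjf(business_value: int, time_criticality: int, risk_reduction: int,
         job_size: int) -> float:
    """Weighted Shortest Job First score: Cost of Delay / job size.

    All inputs are relative estimates (e.g. Fibonacci-style points);
    items with higher scores are pulled from the Product Backlog first.
    """
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size
```

For example, wsjf(8, 5, 3, 4) scores 4.0 while wsjf(8, 5, 3, 8) scores 2.0, so the smaller job is prioritized first despite an identical Cost of Delay.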
Finally, we looked at the advanced deployment considerations that should be taken
when designing experiments and how platforms such as OpenShift enable powerful
evidence-based testing to be conducted in production with users. A/B Testing, Blue/
Green Deployments, Canary Releases, Dark Launches, and Feature Flags were all
introduced from a business perspective. We will return to the implementation details
of these in Section 6, Build It, Run It, Own It and explore how we interpret the measures
from them in Section 7, Improve It, Sustain It.

Figure 11.37: Practices used to complete a Discovery Loop and Options Pivot
on a foundation of culture and technology

In the next chapter, we will shift to the Delivery Loop. We'll look at agile delivery and
where and when it is applicable according to levels of complexity and simplicity. We'll
also look at Waterfall, its relative merits, and where it might be appropriate. We'll
explore different agile frameworks out there and how all of them relate to the Open
Practice Library and Mobius Loop. We'll explore the importance of visualization and of
capturing measurements and learning during our iterations of the Delivery Loop.
Praise for DevOps Culture and Practice with OpenShift
"Creating successful, high-performing teams is no easy feat. DevOps Culture and
Practice with OpenShift provides a step-by-step, practical guide to unleash
the power of open processes and technology working together."
—Jim Whitehurst, President, IBM

"This book is packed with wisdom from Tim, Mike, Noel, and Donal and lovingly illustrated
by Ilaria. Every principle and practice in this book is backed by wonderful stories of the
people who were part of their learning journey. The authors are passionate about visualizing
everything and every chapter is filled with powerful visual examples. There is something for
every reader and you will find yourself coming back to the examples time and again."
—Jeremy Brown, Chief Technology Officer/Chief Product Officer at Traveldoo,
an Expedia Company

"This book describes well what it means to work with Red Hat Open Innovation Labs,
implementing industrial DevOps and achieving business agility by listening to the team. I have
experienced this first hand. Using the approach explained in this book, we have achieved a level
of collaboration and engagement in the team we had not experienced before, the results didn't
take long and success is inevitable. What I have seen to be the main success factor is the change
in mindset among team members and in management, which this approach helped us drive."
—Michael Denecke, Head of Test Technology at Volkswagen AG

"This book is crammed full to the brim with experience, fun, passion, and great practice. It
contains all the ingredients needed to create a high performance DevOps culture...it's awesome!"
—John Faulkner-Willcocks, Head of Coaching and Delivery Culture, JUST

"DevOps has the opportunity to transform the way software teams work and the products they
deliver. In order to deliver on this promise, your DevOps program must be rooted in people. This
book helps you explore the mindsets, principles, and practices that will drive real outcomes."
—Douglas Ferguson, Voltage Control Founder, Author of Magical Meetings
and Beyond the Prototype

"Fun and intense to read! Somehow, the authors have encapsulated


the Red Hat culture and expression in this book."
—Jonas Frydal, Director at Volvo Cars
"This book is really valuable for me. I was able to map every paragraph I read to the journey we
took during the residency with Red Hat Open Innovation Labs. It was such an intense but also
rewarding time, learning so much about culture, openness, agile and how their combination
can make it possible to deliver crucial business value in a short amount of time.
Speaking from my personal experience, we enabled each other, my team bringing the deep
knowledge in the industry and Red Hat's team bringing good practices for cloud-native
architectures. This made it possible to reinvent how vehicle electronics technology is tested
while pushing Red Hat's OpenShift in an industrial DevOps direction.
I am looking forward to keeping a hard copy of the book at my desk for easy review."
—Marcus Greul, Program Manager at CARIAD, a Volkswagen Group company

"Innovation requires more than ideas and technology. It needs people being well led and the
'Open Leadership' concepts and instructions in DevOps Culture and Practice with OpenShift
should be required reading for anyone trying to innovate, in any environment, with any team."
—Patrick Heffernan, Practice Manager and Principal Analyst,
Technology Business Research Inc.

"Whoa! This has to be the best non-fiction DevOps book I've ever read. I cannot believe how
well the team has captured the essence of what the Open Innovation Labs residency is all
about. After reading, you will have a solid toolbox of different principles and concrete practices
for building the DevOps culture, team, and people-first processes to transform how you use
technology to act as a force multiplier inside your organization."
—Antti Jaakkonen, Lean Agile Coach, DNA Plc

"Fascinating! This book is a must-read for all tech entrepreneurs who want to build scalable
and sustainable companies. Success is now handed to you."
—Jeep Kline, Venture Capitalist, Entrepreneur

"In a digital-first economy where technology is embedded in every business,


innovation culture and DevOps are part and parcel of creating new organizational values
and competitive advantages. A practical and easy to understand guide for both technology
practitioners and business leaders is useful as companies accelerate their
Digital Transformation (DX) strategies to thrive in a changed world."
—Sandra Ng, Group Vice President, ICT Practice

"DevOps Culture and Practice with OpenShift is a distillation of years of experience into
a wonderful resource that can be used as a recipe book for teams as they form and develop,
or as a reference guide for mature teams as they continue to evolve."
—David Worthington, Agile Transformation Coach, DBS Bank, Singapore
DevOps Culture and Practice with OpenShift
Copyright © 2021 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of
the publisher, except in the case of brief quotations embedded in critical articles or
reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of
the information presented. However, the information contained in this book is sold
without warranty, either express or implied. Neither the author(s), nor Packt Publishing
or its dealers and distributors, will be held liable for any damages caused or alleged to
have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.

Authors: Tim Beattie, Mike Hepburn, Noel O'Connor, and Donal Spring
Illustrator: Ilaria Doria
Technical Reviewer: Ben Silverman
Managing Editors: Aditya Datar and Siddhant Jain
Acquisitions Editor: Ben Renow-Clarke
Production Editor: Deepak Chavan
Editorial Board: Vishal Bodwani, Ben Renow-Clarke, Edward Doxey, Alex Patterson,
Arijit Sarkar, Jake Smith, and Lucy Wan

First Published: July 2021


Production Reference: 1100821
ISBN: 978-1-80020-236-8

Published by Packt Publishing Ltd.


Livery Place, 35 Livery Street,
Birmingham, B3 2PB, UK.

www.packt.com
