DevOps Culture
and Practice with
OpenShift
Deliver continuous business value through people,
processes, and technology
Section 4:
Prioritize It
Tim Beattie
Mike Hepburn
Noel O'Connor
Donal Spring
BIRMINGHAM—MUMBAI
Welcome to DevOps Culture and Practice with OpenShift.
This book will enable readers to learn, understand, and apply many different practices
- some people-related, some process-related, some technology-related - to make
DevOps adoption and in turn OpenShift a success within their organization. It
introduces many DevOps concepts and tools that we use to connect DevOps culture
and practices through a continuous loop of discovery, pivots and delivery. All of this
is underpinned by a foundation of culture, collaboration, and engineering.
This book provides an atlas demonstrating how to build empowered product teams
within your organization. Through a combination of real-world stories, a fabricated use
case (especially fun for dog and cat lovers), facilitation guides and the technical details
of how to implement it all, this book provides tools and techniques to build a DevOps
culture within your organization on Red Hat’s OpenShift Container Platform.
It’s a collection of agile, lean, design thinking, DevOps, culture, facilitation and
hands-on technical enablement books all in one! As Gabrielle Benefield (Business
Outcomes thought-leader) explains in her foreword, this book is "like having a great
travel guide you can pull out on your journey, that gives you the direction and ideas you
need when you need them. I use this book as a go-to reference that I can give to teams
to help them get up and running fast. I also love that the authors speak with candor
and share their real-world war stories including the mistakes and pitfalls."
To help you navigate the book, the 18 chapters have been organized into 7 sections:
• Section 1, Practice makes perfect, introduces DevOps culture and practices. It also
gives an overview of the navigator we will use to show how continuous discovery
and continuous delivery combine to achieve a DevOps culture.
• Section 2, Establishing the foundation, provides the practices we use to establish
an open culture that enables high-performing teams, along with the technical
foundation practices those teams use to bootstrap DevOps.
• Section 3, Discover It explains practices we use to discover why, for whom, and
how we build great application products to run on OpenShift and deliver early and
continuous business value.
This section briefly introduces the authors, the coverage of this book, the skills you'll need to get
started, and the hardware and software needed to complete all of the technical topics.
Noel O'Connor is a Senior Principal Architect in Red Hat's EMEA Solutions Practice
specializing in cloud native application and integration architectures. He has worked
with many of Red Hat's global enterprise customers across Europe, the Middle East, and Asia.
He co-authored the book "DevOps with OpenShift" and he constantly tries to learn new
things to varying degrees of success. Noel prefers dogs over cats but got overruled by
the rest of the team.
Donal Spring is a Senior Architect for Red Hat Open Innovation Labs. He works in
the delivery teams with his sleeves rolled up tackling anything that's needed - from
coaching and mentoring the team members, setting the technical direction, to coding
and writing tests. He loves technology and getting his hands dirty exploring new tech,
frameworks, and patterns. He can often be found on weekends coding away on personal
projects and automating all the things. Cats or Dogs? He likes both :)
Learning Objectives
• Implement successful DevOps practices and in turn OpenShift within your
organization
• Deal with segregation of duties in a continuous delivery world
• Understand automation and its significance through an application-centric view
• Manage continuous deployment strategies, such as A/B, rolling, canary, and
blue-green
• Leverage OpenShift’s Jenkins capability to execute continuous integration
pipelines
• Manage and separate configuration from static runtime software
• Master communication and collaboration enabling delivery of superior software
products at scale through continuous discovery and continuous delivery
Audience
This book is for anyone with an interest in DevOps practices with OpenShift or other
Kubernetes platforms.
This DevOps book gives software architects, developers, and infra-ops engineers
a practical understanding of OpenShift, how to use it efficiently for the effective
deployment of application architectures, and how to collaborate with users and
stakeholders to deliver business-impacting outcomes.
Approach
This book blends to-the-point theoretical explanations with real-world examples to
enable you to develop your skills as a DevOps practitioner or advocate.
We recommend all readers, regardless of their technical skill, explore the concepts
explained in these chapters. Optionally, you may wish to try some of the technical
practices yourself. These chapters provide guidance in how to do that.
The OpenShift Sizing requirements for running these exercises are outlined in
Appendix A.
Conventions
Code words in the text, database names, folder names, filenames, and file extensions
are shown as follows:
We are going to cover the basics of component testing the PetBattle user interface
using Jest. The user interface is made of several components. The first one you see
when landing on the application is the home page. For the home page component, the
test class is called home.component.spec.ts:
describe('HomeComponent', () => {
  let component: HomeComponent;
  let fixture: ComponentFixture<HomeComponent>;
  // ...component setup and individual specs follow
});
Downloading resources
All of the technology artifacts are available in this book's GitHub repository at
https://github.com/PacktPublishing/DevOps-Culture-and-Practice-with-OpenShift/
High-resolution versions of all of the visuals, including photographs, diagrams, and
digital artifact templates, are available at
https://github.com/PacktPublishing/DevOps-Culture-and-Practice-with-OpenShift/tree/master/figures
We also have other code bundles from our rich catalog of books and videos available at
https://github.com/PacktPublishing/. Check them out!
We are aware that technology will change over time and APIs will evolve. For the latest
changes of technical content, have a look at the book's GitHub repository above. If you
want to contact us directly for any issue you've encountered, please raise an issue in
this repository.
Section 4: Prioritize It
In Section 3, Discover It, we worked our way around the Discovery Loop. We started
with Why—why are we embarking on this initiative? What is our great idea? We used the
North Star to help us frame this. We defined the problem and understood the context
further by using the Impact Mapping practice to align on our strategic goal. Impact
Mapping helped us converge on all the different actors involved that could help us
achieve or impede our goal. Impact Mapping captures the measurable impacts we want
to effect and the behavioral changes we would like to generate for those actors. From
this, we form hypothesis statements about how the different ideas for deliverables may
help achieve these impacts.
We refined this understanding further by using the human-centered design techniques
and Design Thinking practices such as Empathy Mapping and Contextual Inquiry to
observe and connect with our actors. We explored business processes and domain
models using the Event Storming practice by generating a shared understanding of
the event-driven process. Using the Event Storming notation, a microservices-based
architecture started to emerge. We also discovered non-functional aspects of the
design by using Non-Functional Maps and running Metrics-Based Process Mapping.
The Discovery Loop presented lots of ideas for things we can do in our delivery cycles—
features we can implement; architectures that emerge as we refine and develop the
solution by repeated playthroughs of the Event Storm; research that can be performed
using user interface prototypes or technical spikes that test our ideas further;
experiments that can be run with our users to help get an even better understanding of
their motivations, pain points, and what value means to them; and processes we can put
in place to gather data and optimize metrics.
From just the first iteration of the Discovery Loop, it would be very easy to come
up with hundreds of different tasks we could do from all the conversations and
engagement that those practices generate. It can be a minefield visualizing all these
ideas and it can take weeks, if not months, to generate tasks for a small team just from
a short iteration of the Discovery Loop! So, we need to be careful to ensure we remain
focused on delivering value, outcomes that matter, and that we don't get bogged down
in analysis-paralysis in a world filled purely with busyness!
Before we left the Discovery Loop, we took time to translate all of this learning
into measurable Target Outcomes. This started with the primary target outcomes
associated with the business product, but we also took time to recognize some of
the secondary targets and enabling outcomes that can help support development—
especially those that can be enabled by software delivery processes and underlying
platforms such as OpenShift.
With these outcomes visualized and presented using big visible Information Radiators,
supporting metrics can also be baselined and radiated. We can now think about all
those tasks and ideas that resulted from the Discovery Loop. But we can only do so by
keeping an eye on those outcomes at all times and ensuring everything we do is directly
or indirectly going to take us toward achieving them. This is where the real fun begins,
because we're going to explore how we're going to achieve those measurable outcomes.
Mobius uses the word options instead of solutions, or the dreaded term requirements.
Until we validate our ideas, they are simply wild guesses, so calling them solutions
or saying they are required is not logical and there is no evidence to support them.
Instead, we call them potential solutions, options, and we get to test them out in the
Delivery Loop to prove or disprove the hypothesis that we have formed around those
options. This drives us to a more data-driven approach rather than just simply guessing.
When we are on the Options Pivot, we decide which of the outcomes we are going
to target next. We choose which ideas or hypotheses we need to build, test, validate,
and learn from, as well as exploring how we might deliver the options. We also need
to get a sense of priority. We never have the luxury of infinite time and resources, so
prioritization is always going to be the key to achieving business value and fast learning.
Learning fast is an important aspect here. We want to generate options that can
validate, or invalidate, our ideas from the Discovery Loop so we can ultimately revisit
and enhance them. Fast feedback is the key to connecting the Discovery artifacts with a
validated prototype.
Chapter 11, The Options Pivot, will focus on the practices we use before we begin a
Delivery Loop. We will return to the Options Pivot again after the Delivery Loop in
Section 7, Improve It, Sustain It, when we take the learnings and measurements that
have resulted from the latest Delivery Loop iteration and decide what to do next given
these findings.
Chapter 11: The Options Pivot
During the Discovery Loop, we started to come up with lots of ideas for
implementation. The Impact Map gave us deliverables that formed hypothesis
statements. The human-centered design and Empathy Mapping practices gave us
ideas directly from the user. The Event Storm gave us standalone features (triggered
by commands) that can be implemented using standalone microservices (codifying the
aggregate). The Metrics-Based Process Map and Non-Functional Map gave us ideas
on how we can speed up the development cycle and improve security, maintainability,
operability, scalability, auditability, traceability, reusability, and just about anything else
that ends with ability!
The next step after the Discovery Loop is the Options Pivot, where all the information
from these practices that we've used gets boiled down to a list of options for actions to
take and decisions to make on what to deliver next.
The Options Pivot is the heart of the Mobius Loop. On the left-hand side of it is where
we absorb all the learning and Target Outcomes we aligned on in the Discovery Loop.
We generate further ideas. We refine ideas on what to deliver next and then choose
the options to work on. Later in the book, in Chapter 17, Improve It, we'll look at the
right-hand side of the Options Pivot. This is where we adapt our approach based on
the measurements and learnings from a completed iteration of the Delivery Loop. We
decide whether to do more Discovery, more Delivery, or Pivot completely. We refine
what to discover or deliver next.
Value Slicing
We are approaching the part of the Mobius mental model where we will start delivering
increments of our solution. They will vary from running short prototypes and technical
experiments or spikes, to conducting defined user research, to implementing features
that have resulted from Event Storming and other Discovery practices.
An iteration of the Delivery Loop is not prescribed in length. If you are using a popular
iterative agile delivery framework such as Scrum, an iteration of the Delivery Loop
translates well to one sprint (a fixed time-box between one and four weeks). If you
are using a more continuous delivery approach such as Kanban to enable an ongoing
flow of value, each Delivery Loop may simply represent the processing of one Product
Backlog item and delivering it into the product. You may even be using a non-agile
delivery methodology such as Waterfall whereby the Delivery Loop is more singular
and slower to move around. The Mobius Loop is agnostic to the delivery approach. But
what is consistent regardless of the delivery approach is the idea that we seek to deliver
high‑value work sooner, establish important learning more quickly, and work in small
batch sizes of delivery effort so we can measure and learn the impact to inform our
next set of decisions.
To help us break down all our work items and ensure they are grouped to a level
that will form small increments of value, we use popular visualization and planning
practices.
Simple path mapping techniques break the work down by mapping back from the
Target Outcomes to the least number of steps needed to deliver it. There are many
other practices, such as journey mapping, story mapping, future state mapping, service
blueprints, and more. Mobius is less concerned with the how, as long as you focus on
finding the simplest way to deliver the outcomes. One technique we have found to work
very effectively is called Value Slicing.
Let's look at how we approach Value Slicing.
First, we note all of the standalone work ideas that have been generated by the
Discovery practices. Our focus here is now on Outputs (and not Outcomes) as we want
to group all of our deliverables together and form an incremental release strategy that
delivers the outcomes. A starting point is to copy each of the following from existing
artifacts:
• Deliverables captured on the Impact Map
• Commands captured on the Event Storm
• Ideas and feedback captured on Empathy Maps
• Non-functional work needed to support decisions made on the Non-Functional
Map
• Ideas and non-functional features captured during discussion of the
Metrics‑Based Process Map (MBPM)
• All the other features and ideas that have come up during any other Discovery
Loop practices you may have used and the many conversations that occurred
Here are a couple of tips we've picked up from our experience. First, don't simply move
sticky notes from one artifact to this new space. You should keep the Impact Map,
Event Storms, Empathy Maps, MBPMs, and other artifacts as standalone artifacts, fully
intact in the original form. They will be very useful when we return to them after doing
some Delivery Loops.
Second, copy word-for-word the items you're picking up from those practices. As
we'll see in the coming chapters, we will really benefit when we can trace work items
through the Discovery Loop, Options Pivot, and Delivery Loop, so keeping language
consistent will help with this. Some teams even invest in a key or coding system to
show this traceability from the outset.
Figure 11.1: Collecting information and ideas from Discovery Loop practices
To start with, simply spread all the items across a large work surface. There's something
very satisfying about standing back and seeing all the possible work we know of in
front of us. It can be amazing to see just how much has been ideated from those
few practices. It can also be a bit chaotic and daunting. This is why we need to start
organizing the work.
If you're working virtually with a distributed team, a canvas such as the following
one (available for download from the book's GitHub repository) may be helpful:
Next, remove any duplicates. For example, you may have identified a deliverable on
your Impact Map and the same feature has ended up in your Event Storm. Your user
interviews may also have found similar feature ideas captured on Empathy Maps. Where
there are identical features, remove the duplicate. If an idea can be broken down into
smaller standalone ideas, refactor and rewrite your sticky notes to capture each
separate idea. The more the better in this practice!
The next step is to categorize each of the items into some kind of common theme and
give that theme a title. We're looking for something that brings all of the items together.
If you were to put each item into a bucket, what would the label on the bucket be? A
top tip is to start with the Target Outcomes that were derived from the Discovery Loop
and set them as the headings to categorize each item under. The reason we do this
is that we want to work with an outcome-driven mindset. We have agreed on some
Target Outcomes so, really, every work item we are considering should be taking us to
one or more of those outcomes. If we pick any one of the items and can't easily see an
outcome it will help achieve, we should be questioning the value of doing that thing at
all. (There are cases where such items that don't map to outcomes are still important,
so if this does happen, just give them their own pile.)
We should end up with all items in a neat, straight column directly beneath the Target
Outcome they are categorized under.
If we have a good, well-thought-out set of Primary Outcomes and Enabling Outcomes,
it should be a very positive exercise mapping all of the features, experiments, research
ideas, and so on to an outcome. This exercise should be collaborative and include all
members of the cross-functional team. Developers, operators, designers, Product
Owners, business SMEs, and so on will all have been involved and provided input to the
preceding Discovery Loop practices. They should remain included during the Options
Pivot to ensure their ideas and initiatives are understood and included on the map.
The resulting visualization of work should include functional features and
non-functional initiatives. All of the work that can take place on the platform to enable
faster and safer development and quicker release of product features should be shown.
If we stand back at the end of the exercise, we should see our delivery loops
starting to emerge.
The next step is to prioritize all tasks and items on the board. This is never easy but
nearly always needed. If you have worked on a project where time was not an issue
and it was obvious that the team would have all the time they needed to confidently
deliver everything asked of them, you are in a unique position! That has never happened
to us and there has always been a need to prioritize work and choose what not to do!
This can start with the Product Owner giving their own perspective on priority.
However, as we progress through this chapter, we'll look at a few practices and tools
that you can bring out to help with prioritization in a collaborative environment and
drive consensus. Executing those practices can then be reflected on this value map
we're creating.
We like to prioritize each column in turn. So, take each Target Outcome with all of
the features and other items that we believe will achieve it, and prioritize them. The
most important and compelling items should be at the top. These are the items that
need to be prioritized above anything else if you are to achieve the outcome. The lesser
understood or "nice to have" items should be further down the column.
The final stage is to slice value out of the value map. Using some sticky tape (ideally
colored, such as painters' tape), we ask the person who holds overall responsibility
for prioritizing work and articulating value (usually this is the Product Owner for
a team using Scrum) to slice horizontally what they see as a slice of value for the
whole product. This means looking at the most important items for each theme and
combining them with some of the other highly important items from other themes.
At this point, our Product Owner has a huge amount of power. They can prioritize
within a given outcome. They can prioritize a whole outcome and move everything
down or up. They can combine items together from different outcomes to form
proposed releases. They can slice one, two, three, or fifty slices of value – each
one containing one, two, or more items. Most importantly, they can facilitate
conversations with all stakeholders and team members to arrive at a consensus on this
two-dimensional Value Slice Map.
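To make the mechanics concrete, the two-dimensional map can be modeled as ranked columns of items per outcome, with each horizontal slice drawing items from across the columns. The following is only a toy sketch of that structure: the item names are invented, and in practice the Product Owner chooses each slice's contents through conversation, not an algorithm.

```python
# Each Target Outcome heads a column of items, ranked most important first.
value_map = {
    "Increased uploads": ["Upload photo", "Crop image", "Bulk upload"],
    "Increased engagement": ["Vote on photo", "Leaderboard", "Comments"],
    "Platform enablement": ["CI pipeline", "Automated tests", "Monitoring"],
}

def naive_slices(value_map):
    """Naive rule: slice n takes the nth-ranked item from every column that still has one."""
    depth = max(len(items) for items in value_map.values())
    return [
        [items[n] for items in value_map.values() if n < len(items)]
        for n in range(depth)
    ]

for n, value_slice in enumerate(naive_slices(value_map), start=1):
    print(f"Slice {n}: {value_slice}")
```

A real Value Slice Map rarely follows such a uniform rule; the point is only that each slice combines top items from several outcome columns into one releasable increment of value.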
During many years of using these practices, we've picked up a few facilitation tips to
help explain them correctly. The first involves how you might visualize and plan two
valuable activities.
I started by saying, Obviously, we'd like to do all this work, after which one
of the senior stakeholders interrupted and said, YES! We need to do all this
work. I could sense there was some discomfort among stakeholders, as if
I was doing a typical consultants' effort on locking down scope when the
stakeholders wanted everything built. Perhaps my leading choice of words
could have been better.
But I wasn't trying to decide what was in and out of scope. My whole agile
mindset is based on flexible scope, the ability to adapt and change scope as
we learn more, and always ensuring we're delivering the next most valuable
and important work.
To explain my mindset, my thoughts fast-forwarded to a team social we had
planned for later that day. It had been a long week and we had planned to go
for a few drinks and a curry – again boosting our cultural foundation further
by allowing the team to relax and get to know each other a bit better.
I was looking forward to having a beer and having a curry after that beer.
In fact, I was really looking forward to that beer. I felt we'd really earned it
that week and it was going to be great to raise a glass and say cheers with
my new team! But that didn't mean that the curry wasn't important. Nor did
it mean that the curry was not going to happen. We were going to have a
beer first followed by a curry. That was how we'd prioritized the evening. We
hadn't de-scoped anything nor were we planning to. The beer was in my top
slice of value. The curry was in my second slice of value.
The team felt more relaxed understanding we were not de-scoping any work
at all using this practice but simply organizing by value. The team also felt
very relaxed and enjoyed both a beer and a curry!
We've also learned a few simple tricks that can help set up the Value Slicing practice to
work effectively.
As I observed the team working through this process, I realized that the
single line of tape had given a misleading impression of this practice. There
was a reluctance to put anything beneath the line because there was a
perception that this meant out of scope. I explained this was not the case
and what I was trying to do was slice out the Minimal Viable Product or
MVP. MVP defines the minimum number of features that could form the
product that could be released to users to learn from and build upon. In
reality, many stakeholders see defining the MVP as something negative, as
it's where they lose all the great innovative features that they may want but
are not collectively deemed important. I actually try to avoid using the term
MVP, as it is often greeted with some negative emotion.
I learned from this facilitation that a single slice should never be used, as
we are not defining things as in or out of scope and we are not defining just
the MVP.
Working with another customer in Finland, I took this learning and adapted
my facilitation approach. With all the items that had been captured from
the Discovery Loop on the map, I produced three slices of tape. Hopefully
now the Product Owner and stakeholders would not fall into the in-scope/
out-of-scope trap. However, now there was a new misunderstanding! For
this particular engagement, which was an immersive four-week Open
Innovation Labs residency focused on improved operations, we had planned
three one-week sprints. By coincidence, I had produced three slices of tape
for Value Slicing. So, the stakeholders and Product Owner assumed that
whatever we put in the first slice would form the scope for Sprint 1, the
second slice would be Sprint 2, and the third slice would be Sprint 3.
Figure 11.7: Value Slicing of the items captured from the Discovery Loop
I explained that this was not the case. We do not yet know how long it will
take the team to deliver each item in each slice. We will use other practices
in the Delivery Loop to help us understand that. We could end up delivering
more than one slice in one sprint. Or, it may take more than one sprint to
deliver one slice. We just don't know yet.
Since then, I have tweaked my facilitation further. When making the slices
available, I now produce lots of them – at least 10, sometimes more than 20.
I also make the roll of tape accessible and tell the Product Owner to use as
many slices as they would like – the more the better, in fact! I've found Value
Slice Maps now often have many more slices.
A Product Owner from a UK defense company once remarked to me that
you could argue that each item on the Value Slice board could be its own
slice of value. I celebrated with a massive smile when I heard this. Yes! When
we reach that mindset and approach, we truly are reaching the goal of
continuous delivery.
Visualizing and slicing increments of value has evolved from the amazing thinking
and work produced by Jeff Patton in his book User Story Mapping¹, published in 2014.
User Story Mapping is an effective practice for creating lightweight release plans that
can drive iterative and incremental delivery practices. We highly recommend reading
Patton's book and trying out the exercise he describes in his fifth chapter about
visualizing and slicing out the value of something very simple, like everything you do
in the morning to get up, get ready, and travel to work. We use this exercise in our
enablement workshops and find it really brings the practice to life well.
Let's look at how the PetBattle team approached Value Slicing.
1 https://www.jpattonassociates.com/user-story-mapping/
As the team explored these four outcomes, they thought it would help
to break them down a bit further to build shared understanding with
stakeholders. The Impact Map had driven focus on four outcomes:
• Increased participation rate of the casual viewer
• Increased uploads
• Increased site engagement of the uploaders
• Increased number of sponsored competitions
Collectively, these would all help with the first primary outcome where
PetBattle would be generating revenue from an increased active user base.
So, these were added to the Value Slice Map:
So, they had a huge collection of outputs spread over one wall and an
organized set of outcomes as headings on another wall:
The outputs moved under the second and fourth outcomes were sourced
from the MBPM and Non-Functional Map. This was also true for the third
outcome, which also included some of the ideas captured during the early
social contract and real-time retrospective that was started when building
the cultural foundation.
The team ended up with a User Story Map that showed the initial journey
through PetBattle as well as the journey the team would go on to deliver and
support it:
Looking at the top slice of value brought a feeling of excitement. The team
could see the first items they were going to do to make the PetBattle vision
a reality!
Design of Experiments
All our ideas for new products, services, features, and indeed any changes we
can introduce to make things better (more growth, increased revenue, enhanced
experience, and so on) start off as a hypothesis or an assumption. In a traditional
approach to planning, a team may place bets on which experiment to run based on
some form of return on investment-style analysis, while making further assumptions in
the process.
Design of Experiments is an alternative to this approach, in which we try to validate
as many of the important ideas/hypotheses/assumptions we are making as early as
possible. Some of these experiments we may want to keep open until we get
real-world proof, which can be gathered through advanced deployment
capabilities (such as A/B testing) that we'll explore later in this chapter.
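On OpenShift, one way to gather that real-world proof is to split route traffic between two versions of a service using weighted route backends. This is only a sketch; the route and service names below are hypothetical.

```shell
# Send 90% of traffic to the current version and 10% to the experimental one.
# (pet-battle, pet-battle-a, and pet-battle-b are hypothetical names.)
oc set route-backends pet-battle pet-battle-a=90 pet-battle-b=10

# Inspect the current traffic split for the route.
oc set route-backends pet-battle
```

The weights can then be adjusted gradually as the experiment's measurements come in.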
Design of Experiments is a practice we use to turn ideas, hypotheses, or assumptions
into concrete, well-defined sets of experiments that can be carried out in order to
achieve validation or invalidation – that is, provide us with valuable learning.
Design of Experiments is a fail-safe way to advance a solution and learn fast. It can
provide a quick way to evolve a product, helps drive innovation in existing as well as
new products, and enables autonomous teams to deliver on leadership intent by placing
small bets.
You may need more than one experiment for each item (idea, hypothesis, assumption).
An experiment usually only changes a small part of the product or service in order
to understand how this change could influence our Target Outcomes. The number
of experiments is really defined based on what you want to learn and how many
distinctive changes you will be introducing.
Once described, the experiments can be implemented, tracked, and measured in order
to analyze the outcomes. In an ideal world, an experiment will have binary success/
failure criteria, but most often we need to analyze data using statistical methods to
find out if there is a significant correlation between the change introduced with the
experiment and the change in the Target Outcome.
NOTE
Successful experiments are not experiments that have proven our assumption is
correct. Successful experiments are those that provide valid and reliable data that
shows a statistically significant conclusion.
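As an illustration of that last point, a common statistical check for a two-variant experiment with a binary outcome (converted or not) is a two-proportion z-test. This sketch is our own illustration rather than a prescribed part of the practice, and the conversion numbers are invented:

```python
from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal distribution
    return z, p_value

# Hypothetical A/B experiment: variant B converts at 6.7% versus variant A's 5.0%.
z, p = two_proportion_ztest(success_a=120, n_a=2400, success_b=160, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so the difference is significant
```

With smaller samples or smaller differences, the same observed gap may not reach significance, which is exactly why the note above distinguishes reliable data from a confirmed assumption.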
The sources of ideas and hypotheses are all the Discovery practices, such as
Event Storming, Impact Mapping, and Empathy Mapping. While we perform those
aforementioned practices, often ideas may emerge for possible improvements or new
hypotheses may form.
Before adding any of them as items to the Product Backlog, these ideas and hypotheses
would typically need some research, analysis, and further elaboration.
Once prioritized, these ideas and hypotheses may lead to:
• New features being added through the User Story Map and Value Slice board
• Completely new features being broken down into smaller features or User Stories
and refactored on the Value Slice board
• User Research
• Design of Experiments
• Technical Spikes and UI Prototypes
The Impact and Effort Prioritization matrix has its own Open Practice Library page at
openpracticelibrary.com/practice/impact-effort-prioritization-matrix/ – a great place
to continue the learning and discussion about this prioritization practice.
A slightly different perspective on prioritization is achieved using the How-Now-Wow
Prioritization practice. Whereas the previous practice is used to filter out and prioritize
the very high-impacting features, this practice is used to identify and prioritize the
quick wins and base features needed for a product.
How-Now-Wow Prioritization
How-Now-Wow is an idea selection tool that is often combined with Brainstorming,
How-Might-We2 (HMW), and Design of Experiments. It compares and plots ideas on a
2x2 matrix by comparing the idea's difficulty to implement with its novelty/originality.
Similar to the Impact and Effort Prioritization Matrix, How-Now-Wow Prioritization
is simple, easy to understand, and very visual, and can include the whole team in the
process of transparent selection of ideas/hypotheses to work on first.
Again, the sources of ideas and hypotheses are all the Discovery practices, such as
Event Storming, Impact Mapping, HMW, and Empathy Mapping. When we perform
those aforementioned practices, often ideas will emerge for possible improvements or
new hypotheses may form.
We can plot each of these on the How-Now-Wow matrix by assessing each item and
considering how easy or difficult it is to implement (using team members to collaborate
and align on this) and how new and innovative the feature is.
2 https://openpracticelibrary.com/practice/hmw/
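As a rough illustration (not from the book), the quadrant logic can be expressed in a few lines of Python. The 0–10 scores and the threshold of 5 are arbitrary assumptions that a team would replace with its own collaborative assessment:

```python
def classify(difficulty, novelty):
    """Place an idea in a How-Now-Wow quadrant.
    difficulty/novelty are team-assessed scores from 0 (low) to 10 (high)."""
    hard = difficulty > 5
    novel = novelty > 5
    if not hard and not novel:
        return "Now"    # easy + ordinary: expect to deliver
    if not hard and novel:
        return "Wow"    # easy + innovative: potential differentiators
    if hard and novel:
        return "How"    # hard + innovative: needs research/spikes
    return "Other"      # hard + ordinary: not interesting

# Invented example scores for two PetBattle-style features
ideas = {"Open PetBattle": (2, 1), "Verify Image": (8, 9)}
quadrants = {name: classify(d, n) for name, (d, n) in ideas.items()}
```

The real value of the practice is the team conversation that produces the scores, not the mechanical classification.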
Four separate groups of ideas have emerged from this practice. There are three we're
particularly interested in:
1. Now Ideas: Easy to implement and considered normal ideas for the product. These
should be considered in the higher slices of the Value Slice Map and are ideas we
would expect to deliver.
2. Wow Ideas: Easy to implement and considered highly innovative or new for the
product. These should be considered as potential ideas for the higher slices of
the Value Slice Map and would be particularly valuable if innovation and market
differentiation were deemed high-priority focus areas. If the Priority Sliders
practice has already been used, it may provide some direction here.
3. How Ideas: Hard to implement and considered highly innovative or new for
the product. These would benefit from further research to understand the
implementation difficulty and potential impact further. Design of Experiments,
prototyping, and further User Research will help validate whether this innovation
is something that would be well received. Technical Spikes and research will help
establish confidence and potentially easier solutions to implement.
4. Other ideas: Hard to implement and not particularly innovative or new for the
product. We're not interested in these ideas at all.
Once placed on the How-Now-Wow matrix, these ideas and hypotheses may lead to:
• New features being added through the User Story Map and Value Slice board
• Complete new features being broken down into smaller features or User Stories
and refactored on the Value Slice board
• User Research
• Design of Experiments
• Technical Spikes and UI Prototypes
Figure 11.20: Different stakeholders collaborating to gain a better understanding of the product
Both practices highlighted some features that need more research. Features categorized
in the How quadrant of the How-Now-Wow matrix will benefit from additional
research, as will features categorized in the High Effort / High Impact quadrant of the
Impact and Effort Prioritization matrix.
Many of the human-centered design practices outlined in Chapter 8, Discovering the
Why and Who, will help with this research. This includes Empathy Mapping, qualitative
user research, conceptual design, prototyping, and interaction design. If the feature
area is of very high importance, it may be valuable to invest in a specific practice that
will really further the understanding of the feature – the Design Sprint.
The process phases include Understand, Define, Sketch, Decide, Prototype, and
Validate.
The aim is to fast-forward into the future to see your finished product and customer
reactions, before making any expensive commitments. It is a simple and cheap way to
validate major assumptions and the big question(s) and point to the different options
to explore further through delivery. This set of practices reduces risks when bringing
a new product, service, or feature to the market. Design Sprints are the fastest way to
find out if a product or project is worth undertaking, if a feature is worth the effort, or
if your value proposition is really valid. For the latter, you should also consider running
a Research Sprint.3 The Design Sprint compresses this work into one week and, most
importantly, tests the design idea and provides real user feedback in a rapid fashion.
There are now many different variations of the Design Sprint format. You may come
across the Google Ventures variation – the Design Sprint 2.0 – whose agenda is
shown below. The best thing to do is to try different variations and judge which one
works in which context.
3 https://library.gv.com/the-gv-research-sprint-a-4-day-process-for-answering-important-startup-questions-97279b532b25
(Figure: the Design Sprint 2.0 agenda – labels include Long-Term Goals, Sprint Questions, Lightning Demos, Drawing Ideas, Crazy 8's, Develop Concepts, Recruiting, Walk-through, User Test, and Final.)
Effectively, we are using the same Mobius Loop mental model as used throughout this
book but micro-focused on a particular option in order to refine understanding and
conduct further research about its value. That improved understanding of relative value
then filters back into the overall Mobius Loop that classified this as an option worthy of
a Design Sprint.
Figure 11.21: The Design Sprint – quick trip round the Mobius Loop
A Design Sprint will really help refine the shared understanding of the value a feature
or group of features may offer in a product. They may be functional user features. They
may also be non-functional features that are more focused on improving the platform
and improving the development experience. The same agenda as above can apply where
"users" are developers or operators and the Design Sprint is focused on researching
some potential work that will improve their development or operations experience.
This practice will help elaborate and refine the information that is pushed through
the User Story Map and Value Slicing Map. We see it as a practice on the Options
Pivot because it will help decide whether or not to proceed with the delivery of the
associated features.
Read more about the practice, add your own experiences, or raise any questions you
might have at openpracticelibrary.com/practice/design-sprint/.
The User Story Mapping practice helps us visualize our work into a story with a
clear backbone. Value Slicing allows us to form incremental release plans that can
be delivered in iterations. The Impact and Effort Prioritization and How-Now-Wow
Prioritization practices help provide alternate perspectives to help with the Value
Slicing. The Design Sprint allows us to dive deeper into a specific feature area to
research it further, so we can prioritize with increased confidence.
All of these practices (and many others you'll find in the Open Practice Library)
converge on enabling us to produce an initial Product Backlog – a single,
one-dimensional list of stuff we're going to take into the Delivery Loop.
Let's now look at how we translate the information from our Value Slices into a Product
Backlog and how we can continue to prioritize it.
This has been a great blend of practices and, if there was strong collaboration and a
sense of alignment throughout (which is facilitated by having a strong foundation of
open culture), the Value Slice Map should represent a combined, shared view of work
and how it can be incrementally released.
To create the Product Backlog, we simply copy each sticky note in the top slice from
left to right and place them in a single column.
The sticky note on the left of the top slice will be copied and placed as the item at the
top of the Product Backlog. The sticky note to the right of it on the top slice will be the
second item on the Product Backlog. Once we've copied all items in the top slice, we
move to the second slice of value and, again, copy each of the items from left to right
onto the Product Backlog.
Figure 11.25: Creating a Product Backlog from the Value Slicing Canvas
We end up with a single column of Product Backlog items that have been sourced
and prioritized through a collection of robust practices. That traceability is important
because we can trace back to the Discovery Loop practice that generated the idea and
the value it is intended to deliver.
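Mechanically, this is just flattening an ordered list of slices into one list. A tiny Python sketch, using invented PetBattle-style feature names:

```python
# Each inner list is one value slice, ordered left to right;
# the slices themselves are ordered top (highest value) to bottom.
value_slices = [
    ["Open PetBattle", "Display Leaders", "Add my Cat"],   # top slice
    ["Vote for Cat", "Verify Image"],                      # second slice
    ["Enter cat into tournament"],                         # third slice
]

# Creating the Product Backlog is flattening slice by slice, left to right
product_backlog = [item for value_slice in value_slices for item in value_slice]
```

The top-left sticky note becomes the first backlog item, and every item keeps its slice-by-slice ordering, which is what preserves the prioritization work already done.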
Let's look at that traceability in action with our PetBattle organization.
This is the beginning of the life of the PetBattle Product Backlog. It will
remain living, breathing, and always ready for updates as long as the
PetBattle product is in operation.
In fact, the team immediately sees some early prioritization needed and
recommends moving the CI/CD workshop and team lunch/breakfast items
to the top. They all agreed there was no point writing any code or building
any features until they had CI/CD in place and a fed and watered team!
The Product Backlog is a living, breathing artifact. It should never be static. It should
never be done. It is a tool that is always available for teams and stakeholders to
reference and, in collaboration with Product Owners, a place to add ideas, elaborate on
existing ideas, and continue to prioritize work items relative to each other.
From this moment onward, we will start and continue the practice of Product Backlog
Refinement.
There is also no defined agenda for a Product Backlog Refinement session and it can
be attended by a variety of different people, such as development and operational
team members, business stakeholders, and leadership. The activities that take place in
Product Backlog Refinement include:
• Talking through and refining the collective shared understanding of an item on
the backlog, its value to users, and the implementation needs that need to be
satisfied
• Re-writing and refining the title of a Product Backlog item to better reflect the
collective understanding
• Writing acceptance criteria for a specific item on the Product Backlog
• Doing some relative estimation of the effort required to deliver a feature from
the backlog to satisfy the acceptance criteria
• Splitting an item on the backlog into two (or more) smaller items
• Grouping items together into a more standalone item that will deliver a stronger
unit of value
• Capturing new ideas and feedback on the Product Backlog
• Prioritizing and re-ordering items on the Product Backlog
All of the artifacts we've generated on the Discovery Loop and Options Pivot are
useful to look at, collaborate on, and refine further when performing Product Backlog
Refinement. They too are all living, breathing artifacts and, often, conversations during
Product Backlog Refinement trigger further updates to these. So, for example, we
may add a new deliverable to our Impact Map and connect it to an impact and actor
to test with. We may elaborate on some details on the Event Storm as we start to
consider the implementation details of an associated backlog item. As new items are
captured from Product Backlog Refinement, the Impact and Effort Prioritization Matrix,
How-Now-Wow Prioritization Matrix, and Value Slice board artifacts are all available to
relatively plot the new item against existing items. In Chapter 17, Improve It, we'll return
to the Options Pivot following an iteration of the Delivery Loop and look at how the
measurements and learning captured from delivery can drive further Product Backlog
Refinement.
Arguably one of the most important aspects of Product Backlog Refinement is
prioritization and, in particular, prioritizing what is toward the top of the Product
Backlog. This is what the team will pull from when planning their next iteration of the
Delivery Loop. So, it's important that the items at the very top of the backlog truly
reflect what is most valuable and help generate the outcomes that matter.
For more details on Product Backlog Refinement and to converse with the community,
take a look at the Open Practice Library page at openpracticelibrary.com/practice/
backlog-refinement.
Prioritization
We've already seen a few tools that help with initial Product Backlog generation and
giving the first set of priorities. Let's look at a few more that will help with ongoing
Product Backlog prioritization.
Throughout this chapter, we've used the terms features and Product Backlog items
to describe the different units of work that we capture through Discovery, then
prioritize and decide which to work on first at the Options Pivot. An important clarification
that's needed is that this does not just mean functional features. We are not just
deciding which shiny new feature the end users are going to get next. We need
to balance customer value against risk mitigation; we need to balance functional
against non-functional work. We do that by balancing research, experimentation, and
implementation.
Running Technical Spikes and proving some of the non-functional aspects of the
platform early can provide knowledge and confidence value, which can be just as
important as – if not more important than – the customer value achieved from
delivering functional features.
In fact, this non-functional work helps us achieve the Enabling Outcomes outlined
in Chapter 10, Setting Outcomes, whereas the functional implementations are more
focused on achieving the primary outcomes.
Let's look at an economic prioritization model that can help us quantify risk, knowledge
value, and customer value. It can be used by a Product Owner in collaboration
with wider groups of team members and stakeholders and presented to the wider
organization.
So, what is WSJF? It is based on Don Reinertsen's research on the Cost of Delay and
the subject of his book The Principles of Product Development Flow: Second Generation
Lean Product Development. Reinertsen famously said, "If you quantify one thing, quantify
the cost of delay." Josh Arnold explains that the Cost of Delay is calculated by assessing
the impact of not having something when you need it. As a typical example, this might
be the cost incurred while waiting to deliver a solution that improves efficiency. It is the
opportunity cost of getting something later rather than having it now.4
The core thinking behind the Cost of Delay is value foregone over time. For every day
we don't have an item in the market, what is it costing the organization? If the value of
the item is a cost-saving initiative, how much money is the organization not saving by
not implementing this feature? If the value of the item is revenue-related, what is the
additional revenue they're missing out on by not implementing it?
The Cost of Delay can be sensitive to time. There are seasonal influences – for example,
shipping in retail can be very time-sensitive around, say, the holiday season. Changes
may be needed for legislation and compliance. The cost can be very high if something is
not delivered by a certain date when new legislation kicks in. The Cost of Delay will be
nothing in advance of this date and very high after this date.
There are three primary components that contribute to the Cost of Delay:
• Direct business value to the customer and/or the organization. This
reflects preferences users might have that will drive up their customer
satisfaction. It also reflects the relative financial reward or cost reduction that the
item is expected to drive.
• Time criticality to implementing the solution now or at a later date. This
incorporates any seasonal or regulation factors that might drive time criticality,
as well as whether customers are likely to wait for solutions or if there is a
compelling need for it now.
• Risk reduction and opportunity enablement is the indirect business value this
might bring to the organization. It considers the hidden benefits this might bring
in the future as well as reducing the risk profile.
Using Cost of Delay to prioritize work in agile backlogs will result in items being
prioritized by value and sensitivity to time. It also allows us to have a lens on direct
business value (such as new functional feature development) and indirect business
value (such as non-functional improvements to the OpenShift platform).
Cost of Delay =
Business Value + Timing Value + Risk Reduction/Opportunity Enablement Value
WSJF adds a further dimension to this by considering the cost of implementation.
Reinertsen said, "it is critical to remember that we block a resource whenever we service a
job. The benefit of giving immediate service to any job is its cost-of-delay savings, and the
cost is the amount of time (duration) we block the resources. Both cost and benefit must
enter into an economically correct sequencing."5
Weighted Shortest Job First (WSJF) = Cost of Delay (COD) / Duration
What unit do we use for the three components in Cost of Delay and Duration? It's
arbitrary. The actual numbers are meaningless by themselves. The agile practice we use
to support COD and WSJF is Relative Estimation,6 whereby we are relatively assessing
the magnitude of business value, timing value, and risk reduction/opportunity
enablement for each item on the Product Backlog relative to each other item. This
allows us to prioritize the Product Backlog according to WSJF.
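A minimal Python sketch of this scoring, using made-up relative estimates for three hypothetical backlog items (the numbers are illustrative only – as noted above, only the resulting ordering matters):

```python
def wsjf(business_value, time_criticality, risk_opportunity, duration):
    """WSJF = Cost of Delay / Duration.
    All inputs are relative estimates (e.g. Fibonacci-style points),
    so only the resulting ordering is meaningful, not the raw numbers."""
    cost_of_delay = business_value + time_criticality + risk_opportunity
    return cost_of_delay / duration

backlog = {
    # item: (business value, time criticality, risk/opportunity, duration)
    "CI/CD pipeline": (3, 8, 13, 3),
    "Vote for Cat":   (8, 3, 2, 5),
    "Notify Players": (2, 1, 1, 2),
}
# Highest WSJF first: shortest, most delay-costly jobs float to the top
ranked = sorted(backlog, key=lambda item: wsjf(*backlog[item]), reverse=True)
```

Note how a non-functional item like the CI/CD pipeline can out-rank shiny user features once risk reduction and time criticality are scored honestly.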
We've now introduced several practices on this first trip to the Options Pivot that help
us generate more ideas from discovery, refine them, prioritize them, and, ultimately,
decide which options we're going to take into a Delivery Loop next. But who makes this
decision? The term we has been used a lot in this chapter, emphasizing the importance
of collaboration. But what happens when we don't have a consensus? Who gets the final
say? This is where the importance of great Product Ownership comes in.
6 https://openpracticelibrary.com/practice/relative-estimation/
7 https://www.mountaingoatsoftware.com/blog/why-the-fibonacci-sequence-works-well-for-estimating
The team members would reveal individual scores to each other and a
conversation would follow to converge and align on the team's assessment
for each score.
This resulted in a Cost of Delay value and a WSJF value for each item.
The previous sections on forming the Product Backlog, refining it, and prioritizing it
all describe key responsibilities of Product Ownership, which we will now explore further.
Product Ownership
Everything in this chapter is about Product Ownership. Everything in the previous
chapters about Discovery is Product Ownership. Prioritizing early efforts to build a
foundation of open culture, open leadership, and open technology practices requires
strong Product Ownership from the outset.
There are whole books and training courses written about Product Ownership, Product
Owners, and Product Managers. Much of our thinking has been inspired by the amazing
work of Henrik Kniberg. If you have not seen his 15-minute video on YouTube entitled
Product Ownership in a Nutshell,8 please put this book down, go and get a cup of tea,
and watch the video now. Maybe even watch it two or three times. We, the four authors
of this book, reckon we've collectively seen this video over 500 times now!
8 https://www.youtube.com/watch?v=502ILHjX9EE
Some say it is the best 15-minute video on the internet, which is quite an accolade! It
packs in so many important philosophies around Product Ownership in such a short
amount of time. We tend to show this video during our DevOps Culture and Practice
Enablement sessions, when we start a new engagement with a new team, or simply to
kick-start a conversation with a stakeholder on what agile is really about.
The resulting graphic is well worth printing out and framing on the wall!
During our time working on hundreds of different engagements, we've seen some
examples of amazing Product Ownership. We've also seen some really bad examples.
Let's look at some of the patterns that we've observed, starting with Product Owners.
Over time, the need for direct access to this Product Owner diminished. It
is a pattern I've noticed working with several organizations where Product
Ownership has been particularly strong. Great Product Owners democratize
Product Ownership and provide a direct connection between teams and
stakeholders. Product Owners should see their current role as one to self-
destruct and not be needed in the long term.
Next, let's look at how great Product Owners have approached their first iterations and
what they've prioritized to form their first iteration goals.
Several other engagements have had almost identical goals, and the pattern
is strong because:
• The team wants to set up their workspace. That may be their physical
workspace with lots of information radiators and collaboration
space. It may be a virtual workspace with digital tooling. It may be a
development environment using code-ready workspaces and being
familiar with all tools to be used.
• They plan to build a walking skeleton. This is a thin slice of the whole
architecture delivered in one iteration. There won't be any fancy
frontend or complex backend processing. They will prove full-stack
development and that the cross-functional team representing all parts
of the logical architecture can deliver working software together. It's a
walking skeleton because it is a fully working product. It just doesn't do
very much yet!
• Their work will be underpinned by continuous integration and
continuous delivery. This green-from-go practice means they are
set up for success when it comes to automating builds, tests, and
deployments. If they prove this and learn this for a thin slice, it will
become increasingly valuable as we start to put all the flesh and organs
into the walking skeleton!
The final part of this chapter shifts the focus from what we're deciding to deliver
next to how we're going to measure and learn from our experiments and the features
we deliver. The OpenShift platform enables our teams to consider several advanced
deployment capabilities.
The OpenShift platform enables several different deployment strategies that support
the implementation of experiments. When we are on the Options Pivot, we should
consider these strategies and which (if any) we should plan with the delivery of the
associated Product Backlog item. The advanced deployment strategies we can consider
include:
• A/B Testing
• Blue/Green Deployments
• Canary Releases
• Dark Launches
• Feature Toggling
We introduce these concepts here as, from an options planning perspective, this is
where we need to be aware of them. We'll return to specific implementation details in
Section 6, Build It, Run It, Own It, and we'll explore how we use the resulting metrics in
Section 7, Improve It, Sustain It.
A/B Testing
This is a randomized experiment in which we compare and evaluate the performance
of different versions of a product in pairs. Both product versions are available in
production (live) and randomly provided to different users. Data is collected about
the traffic, interaction, time spent, and other relevant metrics, which will be used
to judge the effectiveness of the two different versions based on the change in user
behavior. The test determines which version is performing better in terms of the Target
Outcomes you have started with.
A/B Testing is simple to apply, fast to execute, and often conclusions can be made
simply by comparing the conversion/activity data between the two versions. It can be
limiting as the two versions should not differ too much and more significant changes
in the product may require a large number of A/B Tests to be performed. This is one
of the practices that allows you to tune the engine, as described in The Lean Startup9 by
Eric Ries.
9 http://theleanstartup.com/
For more information on this practice and to discuss it with community members
or contribute your own improvement to it, please look at openpracticelibrary.com/
practice/split-testing-a-b-testing/.
Blue/Green Deployments
Blue/Green Deployment is a technique in software development that relies on two
production environments being available to the team. One of them, let's call it green,
is operational and takes load from the reverse proxy (load balancer/router). The other
environment, let's call it blue, is a copy upgraded to a new version. It is disconnected
from the load balancing while this upgrade is completed.
The team can perform all required tasks for an upgrade of the product version on the
blue environment without the rush of downtime. Once the blue environment is ready
and has passed all tests and checks, the team simply redirects the reverse proxy (load
balancer/router) from the green environment to the blue environment.
If everything works fine with the blue environment, the now-outdated green
environment can be recycled to serve as the blue environment for the next release. If
things go bad, the team can switch back to a stable environment instantly using the
reverse proxy/load balancer/router.
This is a feedback loop practice that allows the team to get prompt feedback from the
real-life use of their changes. It enables continuous delivery and provides safety for
performing complex releases. It removes the time pressure and reduces the downtime
to practically zero. This is beneficial for both technical teams and end users who will
not notice glitches or unavailability of the service/product, provided that the new
version is performing at par. In case of adverse effects, it allows the teams to have an
instant roll-back alternative and limit the negative impact on customers.
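As a toy model of the cut-over – in a real OpenShift deployment this would mean repointing a Route or load balancer rather than flipping a Python object – the mechanics look like this:

```python
class Router:
    """Stand-in for a reverse proxy / load balancer that sends
    all traffic to exactly one environment at a time."""
    def __init__(self, live):
        self.live = live

    def switch_to(self, env):
        previous, self.live = self.live, env
        return previous   # keep a handle on the old environment for roll-back

green = {"name": "green", "version": "1.4"}   # currently serving traffic
blue  = {"name": "blue",  "version": "1.5"}   # upgraded and tested off-line

router = Router(live=green)
previous = router.switch_to(blue)   # the cut-over: near-zero downtime
# If v1.5 misbehaves, router.switch_to(previous) restores green instantly
```

The key property is that both the upgrade and the roll-back are a single pointer flip, which is what removes the time pressure from releases.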
To explore this practice further, visit the Open Practice Library page at
openpracticelibrary.com/practice/blue-green-deployments/.
Canary Releases
In software development, this is a form of continuous delivery in which only a small
number of the real users of a product will be exposed to the new version of the product.
The team monitors for regressions, performance issues, and other adverse effects and
can easily move users back to the working old version if issues are spotted.
The term comes from the use of caged birds in coal mines to discover the buildup
of dangerous gases early on. The gases would kill the bird long before they became
life-threatening for the miners. As with the canary in the mine, this release practice
provides an early warning mechanism for avoiding bigger issues.
The canary release provides continuous delivery teams with safety by enabling them to
perform a phased rollout, gradually increasing the number of users on a new version
of a product. While rolling out the new version, the team will be closely monitoring
the performance of the platform, trying to understand the impacts of the new version,
and assessing the risks of adverse effects such as regressions, performance issues, and
even downtime. This approach allows the team to roll back the release as soon as such
adverse effects are observed without the majority of the customers being impacted
even for a limited amount of time.
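The phased rollout described above can be sketched as a simple loop. `error_rate` below is a stand-in for a real monitoring query, and the step weights and threshold are arbitrary assumptions:

```python
def canary_rollout(error_rate, steps=(5, 25, 50, 100), threshold=0.01):
    """Phased rollout: raise the percentage of users on the new version
    step by step, rolling back as soon as monitoring crosses the error
    threshold. `error_rate(weight)` stands in for a monitoring query."""
    for weight in steps:
        if error_rate(weight) > threshold:
            return 0, "rolled back"      # everyone back on the old version
    return 100, "fully rolled out"

# Healthy release: errors stay below the threshold at every step
weight, status = canary_rollout(lambda w: 0.002)
# Bad release: errors spike as soon as real traffic hits the canary
bad_weight, bad_status = canary_rollout(lambda w: 0.05)
```

Because each step only exposes a fraction of users, an observed regression triggers a roll-back before the majority of customers are ever impacted.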
Canary Release is similar to A/B Testing in the sense that it exposes only part of the
population to the new feature but, unlike A/B Testing, the change is typically a
completely new feature rather than a small tweak of an existing one. The purpose is
different too: A/B Testing looks to improve the product's performance in terms of
business outcomes, while the Canary Release is focused entirely on technical
performance.
You can read more about this practice, contribute improvements, or have a discussion
with the wider community at openpracticelibrary.com/practice/canary-release.
Dark Launches
Dark Launching is another continuous delivery practice that releases new features to
a subset of end users and then captures their behaviors and feedback. It enables
the team to understand the real-life impact of these new features, which may be
unexpected for users in the sense that no users asked for them. It is one of the last
steps for validating a product/market fit for new features. Rather than launching the
features to your entire group of users at once, this method allows you to test the waters
to make sure your application works as planned before you go live.
Dark Launches provide safety by limiting the impact of new features to only a subset
of the users. They allow the team to build a better understanding of the impact
created by the new feature and the ways the users would interact with it. Often novel
ways of interaction can surface, ways that were not initially envisioned by the team.
This can be both positive and negative, and the limited availability allows the team to
draw conclusions from the real-life use and decide if the feature will be made widely
available, further developed, or discontinued.
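A sketch of the subset gating, assuming (as many implementations do) a hash of the feature name plus the user ID; the feature name here echoes the PetBattle example and is otherwise hypothetical:

```python
import zlib

def in_dark_launch(user_id, feature, rollout_percent):
    """Expose `feature` to only a stable subset of users.
    Including the feature name in the hash means different dark
    launches select different, independent user subsets."""
    key = f"{feature}:{user_id}".encode()
    return zlib.crc32(key) % 100 < rollout_percent

# New behavior visible to roughly 10% of users; everyone else
# continues to see the existing path
exposed = [u for u in (f"user-{i}" for i in range(1000))
           if in_dark_launch(u, "disable-add-my-cat", 10)]
```

Because the gate is deterministic, the same small group keeps seeing the feature, which is what makes their behavior worth observing over time.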
The Dark Launches practice has its own Open Practice Library page at
openpracticelibrary.com/practice/dark-launches/, so head there for further
information, to start a conversation, or to improve the practice.
Feature Flags
Feature Flags (also known as Feature Bits/Toggles/Flipping/Controls) are an
engineering practice that can be used to change your software's functionality without
changing and re-deploying your code. They allow specific features of an application to
be turned on and off for testing and maintenance purposes.
In software, a flag is one or more bits used to store binary values. So, it's a Boolean that
can either be true or false. A flag can be checked with an if statement. A feature in
software is a bit of functionality that delivers some kind of value. In its simplest form, a
Feature Flag (or Toggle) is just an if statement surrounding a bit of functionality in your
software.
Feature Toggles are a foundational engineering practice and provide a great way to
manage the behavior of the product in order to perform experiments or safeguard
performance when releasing fresh new features.
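In its simplest form, then, a toggle really is just an if statement. In this Python sketch the flag name and environment variable are invented; real systems usually read flags from a config service rather than the environment:

```python
import os

# Flag values come from configuration (an environment variable here),
# so behavior can change without re-deploying any code
FLAGS = {
    "tournament-leaderboard": os.getenv("FF_TOURNAMENT", "off") == "on",
}

def is_enabled(flag):
    return FLAGS.get(flag, False)   # unknown flags default to off

def render_home_page():
    page = ["pet gallery"]
    if is_enabled("tournament-leaderboard"):   # the toggle: just an if
        page.append("leaderboard")
    return page
```

Defaulting unknown flags to off is the safety property: a missing or misspelled flag degrades to the existing behavior rather than exposing an unfinished feature.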
average, 3 minutes per item. Their goal was to put each feature in one of the
columns with a short note on what their approach to the implementation
was.
• Open PetBattle: This was easy. Anyone using the app would need to
open it. IMPLEMENT.
• Display Leaders: Lots of questions about what and how to display.
How many leaders? Should we add pagination or scroll? They decided
some RESEARCH was needed – perhaps a UI prototype with some user
testing.
• Let me in please: The team had to go back to the Event Storm to
remind themselves what this was about! Again, it was a simple feature
of letting the user in to see Pets uploaded. IMPLEMENT.
• Vote for Cat: This triggered some conversation. Do they vote up or
down? Or do they just give a vote (or nothing at all)? The team was
divided and had heard differing views from user interviews. They
decided to EXPERIMENT with an A/B Test.
• Add my Cat: Not much research or experimentation needed. A
standard uploading tool was needed. Just IMPLEMENT.
• Verify Image: This sounded a bit trickier. There were emerging AI/ML
patterns available. It needed some technical RESEARCH and probably a
Technical Spike.
• Enter cat into tournament: Not much ambiguity here. IMPLEMENT.
• Display Tournament Cat: It wasn't clear whether this feature would be
well received. The team thought they could EXPERIMENT with a
feature toggle, which would be easy enough to turn off.
• Disable "Add my Cat": Some users have more than one cat and will
want to add more than one. Let's EXPERIMENT with a Dark Launch of
this feature to a small subset of users.
• Vote for given cat: Once the team got the results from the A/B Test,
they could EXPERIMENT further and launch as a Canary Test.
• Update the Leaderboard: IMPLEMENT.
• End Competition: IMPLEMENT.
• Notify Players: Not clear how this would happen – SMS? Email? Other
mechanisms? The team decided to do some user RESEARCH.
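A Dark Launch to "a small subset of users", as the team chose for disabling Add my Cat, is often implemented by bucketing users deterministically so that the same user always gets the same behavior. The sketch below shows one common approach using a hash of the user ID; the feature name and percentage are illustrative assumptions, not details from the book.

```python
import hashlib

def in_dark_launch(user_id: str, feature: str, percentage: int) -> bool:
    """Decide whether a user is in the dark-launch cohort for a feature.

    Hashing the feature name together with the user ID yields a stable
    bucket in [0, 100), so a given user consistently sees the same
    behavior for a given feature across sessions.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

# Example: expose the change to roughly 5% of users.
# if in_dark_launch(user_id, "disable-add-my-cat", 5):
#     hide_add_my_cat_button()
```

Because the bucketing is deterministic, widening the launch from 5% to 20% keeps the original 5% of users in the cohort and simply adds more.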
Let's look at another real-world experience to see just how simple yet effective this
experimental mindset can be.
The team was very small, just one designer, two engineers, a business
analyst, and a Product Owner. As a small co-located team, buried in the
heart of the bank, we were able to move fast! We interviewed people who
had recently purchased mortgages with the bank to get insight into their
motivations for using the tool. We did a load of research by going into the
bank branches and asking people open-ended questions while they used
the existing tool. We collated this information along with how they were
accessing the calculator and, if they were to complete an application, what
device they would use, that is, their phone or their laptop.
Through this research we stumbled upon an interesting fact – people were
not interested in "How much could I borrow?" but "How much house can I afford?"
This simple difference might seem inconsequential but it massively affected
how we rebuilt the bank's online mortgage calculator. It meant people
wanted to be able to tailor their calculation to see how their rates and
lending criteria could be affected by, for example, having more income. Or,
if they were to continue to save for another year and have more of a deposit
saved, could they get a better rate? This flip meant people were using the
tool not to see whether they could afford a given home, but to see how much
of a home they could afford and by when.
It would have been very simple for us to just recreate the bank's existing
calculator with a new skin that ran on a mobile – but this would not have
addressed the core problem. By reframing the question, we were now in
a position to create a simple calculator tailored to the needs of the bank's
first-time buyers.
All these advanced deployment considerations provide powerful tools for use in Options
planning and how we can conduct research, experimentation, and implementation.
When we return to the Options Pivot after an iteration of the Delivery Loop, we'll
complete the final section of this map:
• What did we learn?
The Options Map provides clarity and direction as to how the product's priorities
help reach outcomes. It helps form our delivery strategy.
Conclusion
In this chapter, we focused on how we are going to deliver the outcomes set in the
previous section.
We explored the User Story Mapping and Value Slicing practices and how we take all
of the information captured in Discovery practices and push it through these tools.
We also showed how using some helpful practices to look at the same information
with slightly different lenses – Impact versus Effort Prioritization and How/Now/Wow
Prioritization – can help improve Value Slicing. Where proposed feature areas would
benefit from a deeper dive to understand the value, we recommended the Design Sprint
as an option.
We showed how these practices drive the initial Product Backlog prioritized by value
and how this produces a living, breathing artifact that will be subject to continuous
Product Backlog Refinement as we gather more learning, feedback, and metrics for our
delivery. The economic prioritization model WSJF, which is based on Cost of Delay,
provides a repeatable and quantifiable tool to drive this. It's one of many prioritization
tools that can help the Product Ownership function work smoothly and effectively.
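WSJF scores each backlog item as its Cost of Delay divided by its size (duration), and schedules the highest scores first. The sketch below shows the arithmetic; the backlog items and their numbers are invented for illustration, not PetBattle's actual estimates.

```python
def wsjf(cost_of_delay: float, job_size: float) -> float:
    """Weighted Shortest Job First score: Cost of Delay / job size.

    Higher scores indicate work that should be scheduled sooner, because
    it delivers more value per unit of effort delayed.
    """
    return cost_of_delay / job_size

# (feature, relative cost of delay, relative job size) -- illustrative only
backlog = [
    ("Vote for Cat", 8, 2),    # 8 / 2 = 4.0
    ("Verify Image", 5, 8),    # 5 / 8 = 0.625
    ("Notify Players", 3, 3),  # 3 / 3 = 1.0
]
ranked = sorted(backlog, key=lambda item: wsjf(item[1], item[2]), reverse=True)
```

Because both inputs are relative estimates, the scores are re-computed during each Product Backlog Refinement as new learning changes the Cost of Delay or size.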
Finally, we looked at the advanced deployment considerations that should be taken
when designing experiments and how platforms such as OpenShift enable powerful
evidence-based testing to be conducted in production with users. A/B Testing,
Blue/Green Deployments, Canary Releases, Dark Launches, and Feature Flags were all
introduced from a business perspective. We will return to the implementation details
of these in Section 6, Build It, Run It, Own It and explore how we interpret the measures
from them in Section 7, Improve It, Sustain It.
Figure 11.37: Practices used to complete a Discovery Loop and Options Pivot
on a foundation of culture and technology
In the next chapter, we will shift to the Delivery Loop. We'll look at agile delivery and
where and when it is applicable according to levels of complexity and simplicity. We'll
also look at Waterfall and the relative merits and where it might be appropriate. We'll
explore different agile frameworks out there and how all of them relate to the Open
Practice Library and Mobius Loop. We'll explore the importance of visualization and of
capturing measurements and learning during our iterations of the Delivery Loop.
Praise for DevOps Culture and
Practice with OpenShift
"Creating successful, high-performing teams is no easy feat. DevOps Culture and
Practice with OpenShift provides a step-by-step, practical guide to unleash
the power of open processes and technology working together."
—Jim Whitehurst, President, IBM
"This book is packed with wisdom from Tim, Mike, Noel, and Donal and lovingly illustrated
by Ilaria. Every principle and practice in this book is backed by wonderful stories of the
people who were part of their learning journey. The authors are passionate about visualizing
everything and every chapter is filled with powerful visual examples. There is something for
every reader and you will find yourself coming back to the examples time and again."
—Jeremy Brown, Chief Technology Officer/Chief Product Officer at Traveldoo,
an Expedia Company
"This book describes well what it means to work with Red Hat Open Innovation Labs,
implementing industrial DevOps and achieving business agility by listening to the team. I have
experienced this first hand. Using the approach explained in this book, we have achieved a level
of collaboration and engagement in the team we had not experienced before, the results didn't
take long and success is inevitable. What I have seen to be the main success factor is the change
in mindset among team members and in management, which this approach helped us drive."
—Michael Denecke, Head of Test Technology at Volkswagen AG
"This book is crammed full to the brim with experience, fun, passion, and great practice. It
contains all the ingredients needed to create a high performance DevOps culture...it's awesome!"
—John Faulkner-Willcocks, Head of Coaching and Delivery Culture, JUST
"DevOps has the opportunity to transform the way software teams work and the products they
deliver. In order to deliver on this promise, your DevOps program must be rooted in people. This
book helps you explore the mindsets, principles, and practices that will drive real outcomes."
—Douglas Ferguson, Voltage Control Founder, Author of Magical Meetings
and Beyond the Prototype
"Innovation requires more than ideas and technology. It needs people being well led and the
'Open Leadership' concepts and instructions in DevOps Culture and Practice with OpenShift
should be required reading for anyone trying to innovate, in any environment, with any team."
—Patrick Heffernan, Practice Manager and Principal Analyst,
Technology Business Research Inc.
"Whoa! This has to be the best non-fiction DevOps book I've ever read. I cannot believe how
well the team has captured the essence of what the Open Innovation Labs residency is all
about. After reading, you will have a solid toolbox of different principles and concrete practices
for building the DevOps culture, team, and people-first processes to transform how you use
technology to act as a force multiplier inside your organization."
—Antti Jaakkonen, Lean Agile Coach, DNA Plc
"Fascinating! This book is a must-read for all tech entrepreneurs who want to build scalable
and sustainable companies. Success is now handed to you."
—Jeep Kline, Venture Capitalist, Entrepreneur
"DevOps Culture and Practice with OpenShift is a distillation of years of experience into
a wonderful resource that can be used as a recipe book for teams as they form and develop,
or as a reference guide for mature teams as they continue to evolve."
—David Worthington, Agile Transformation Coach, DBS Bank, Singapore
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of
the publisher, except in the case of brief quotations embedded in critical articles or
reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of
the information presented. However, the information contained in this book is sold
without warranty, either express or implied. Neither the author(s), nor Packt Publishing
or its dealers and distributors, will be held liable for any damages caused or alleged to
have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
Authors: Tim Beattie, Mike Hepburn, Noel O'Connor, and Donal Spring
Illustrator: Ilaria Doria
Technical Reviewer: Ben Silverman
Managing Editors: Aditya Datar and Siddhant Jain
Acquisitions Editor: Ben Renow-Clarke
Production Editor: Deepak Chavan
Editorial Board: Vishal Bodwani, Ben Renow-Clarke, Edward Doxey, Alex Patterson,
Arijit Sarkar, Jake Smith, and Lucy Wan
www.packt.com