MOBILE APP A/B TESTING
1. INTRODUCTION: THE RISE OF APP DEMAND AND THE NEED FOR ANALYTICS
Mobile app demand shows no signs of slowing down. In fact, the App Store and Google Play
alone now boast nearly 1 million apps apiece, and over 56 billion apps were downloaded
last year. The old pay-per-download model has shifted to in-app efforts such as advertising,
purchases, freemium and lead generation, while non-monetization goals like customer loyalty
and driving market share have risen. These days, apps need to differentiate from the crowd,
and monetize while doing it.
According to a recent Duke University study, the majority of marketers feel the pressure to
prove ROI in every channel, including mobile. So to stand out in a competitive space in which
millions of options beckon, marketers must know with certainty what is working and not
working within their brand’s mobile app, and use that knowledge to create more successful,
engaging interactions.
Testing and identifying the right analytics is key to eliminating app guesswork and realizing
success. Marketing is no longer a go-by-your-gut business, and “we think” has been replaced
with “we know.” But testing mobile apps has historically been more complicated than testing
a website or an email, both in scale and in execution. And when it comes to development
alone, you don’t want to waste precious time needlessly altering code and re-uploading to
the App Store.
2. HOW A/B TESTING CAN SUPERCHARGE YOUR APP ANALYTICS
Mobile apps may be booming, but they are, and always have been, an entirely different
platform from websites, particularly when it comes to measurement. Traditional online
marketing metrics, such as pageviews and clicks, simply don’t apply here. Marketers need to
track app-specific analytics that focus on user behavior, engagement and conversion rate in
order to create an experience that users love and want to return to. Simple but scientific
A/B tests can take your app analytics to the next level by proving what works or doesn’t
work within your app.
Long used by marketers across other channels, an A/B test presents two different versions
of a message, feature or piece of content to randomly split segments of the same audience. Because
of complex code changes and re-submission requirements, A/B testing of mobile apps
once was a slow, arduous process requiring data analysts and developers. But thanks to
automated platforms, mobile app marketers now have the opportunity to run A/B tests
quickly and easily and get immediate, actionable insights.
In one case, further A/B testing allowed a company to try new features and UI changes in
controlled tests. Without the need for code alterations or app store re-submissions, the team
was able to perform these tests, measure the results and iterate on product design before
launching to all users. Using automated testing and actionable analytics, the company’s mobile
apps contributed to increasing total company revenue by nearly 20% and net income by 25%.
When your end goal is to increase the usage and profitability of your app, ready-to-run
A/B testing is crucial to driving impactful marketing campaigns for your user segments and
optimizing lifetime value.
As we’ve covered, traditional web analytics metrics don’t apply in the app world. While every
business model is different and companies may focus on different goals, there are certain
app metrics that most marketers will want to address, including usage patterns, time spent in
different parts of the app, session length and conversion rates.
Three different types of A/B testing opportunities within mobile apps will help you home in on
the metrics that count.
In-App Messaging
Within in-app messaging campaigns, you can target super fans (your most engaged users)
with a message encouraging them to rate your app, or you could send out a creative
“thank you” message. A/B tests can help determine what makes a successful in-app
messaging campaign using data and user feedback, instead of making assumptions.
Take this example: as a news organization, you want to prompt free users to upgrade to a
paid subscription. You can start by creating a custom segment of users in which
everyone has:
Read at least 8 articles in the Politics or World News sections in the last 3 days
Once the A/B test is launched, you can monitor closely which version is most effective at
pushing fence-sitters to upgrade and then refine your in-app messages according to the
results, or conduct further testing around the content, context and timing of the
winning version.
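To make the mechanics concrete, here is a minimal Python sketch of that setup. It assumes a hypothetical in-house event log, and the helper names (qualifies_for_upgrade_prompt, assign_variant) are invented for illustration; this is not any particular testing platform’s API.

import hashlib
from datetime import datetime, timedelta

# Hypothetical event records: dicts with user_id, event, section and timestamp.
# None of these names belong to a specific vendor's API; they only illustrate the rule above.

def qualifies_for_upgrade_prompt(events, user_id, now=None):
    """True if the user read at least 8 Politics/World News articles in the last 3 days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=3)
    reads = [
        e for e in events
        if e["user_id"] == user_id
        and e["event"] == "article_read"
        and e["section"] in {"Politics", "World News"}
        and e["timestamp"] >= cutoff
    ]
    return len(reads) >= 8

def assign_variant(user_id, test_name="upgrade_prompt_test"):
    """Deterministically split qualifying users 50/50 between message versions A and B."""
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

Because the split is a deterministic hash of the user ID, each qualifying reader always sees the same message version, so upgrades can be attributed cleanly to version A or B.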
Push Messaging
Let’s say a department store launches a push messaging campaign within its iOS app to
promote a highly sought-after new line of handbags. For this promotion, the retailer sends
users who have previously shown interest in handbags an offer of 20% off their first in-app
purchase. The goal is to pique their interest and drive in-app handbag purchases.
The retailer could run an A/B test with two versions of the push messaging campaign to
significant samples of its target audience to see which message yields more conversions.
The results? 20% of those customers who received Message A took the first step and opened
the app while 15% of those who received Message B took the same action.
At first glance, it seems that Message A wins the contest, because of its higher open rate.
But the contest is not over yet; the end goal is for users to make an in-app purchase. If you
were only tracking app opens from this campaign, you would continue to invest in
Message A – which may actually have a lower conversion rate to in-app purchase than
Message B. Because of this unknown, it’s critical to look at the full picture and purpose of
the A/B test.
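The arithmetic behind that caution is easy to sketch. The 20% and 15% open rates come from the example above; the sample sizes and purchase counts below are hypothetical placeholders, chosen only to show how a variant can win on opens yet lose on the conversion that matters, and how a simple two-proportion z-test (|z| above roughly 1.96 suggests significance at the 5% level) helps confirm the gap is real.

from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic: how likely is the gap between two rates to be real?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

sent = 10_000                    # hypothetical number of pushes sent per variant
opens_a, opens_b = 2_000, 1_500  # the 20% vs 15% open rates from the example
buys_a, buys_b = 120, 180        # hypothetical purchase counts: B converts better despite fewer opens

print("Open rate      A vs B:", opens_a / sent, opens_b / sent)   # 0.2 vs 0.15
print("Purchase rate  A vs B:", buys_a / sent, buys_b / sent)     # 0.012 vs 0.018
print("z on purchases:", round(two_proportion_z(buys_a, sent, buys_b, sent), 2))  # about -3.5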
App Features
It’s here that A/B testing an app is most similar to testing a website landing page:
identifying which visual or content changes improve conversion rate. In apps, however, you
have the ability to test around events, or actions taken, making feature testing more
responsive to visitor flow and interaction than something like a landing page A/B test.
A/B tests allow you to offer different versions of features and content -- buttons, icons, colors,
screen flow and home screen layout, to name a few -- to different segments of your audience
so they all experience the same app but in different ways. The results of various tests can
show you which tested feature moves a user closer to or more quickly towards the ultimate
conversion event -- a purchase, an ad click, or some other measure of engagement.
A/B testing offers a tremendous opportunity to use real data from real users to prove what
works in your app. But doing the most successful A/B testing possible -- that is, moving the
needle on the metrics that matter most -- is easier said than done, especially if you’re just
getting started.
Here are six essential elements to keep in mind as you dive into the mobile A/B testing waters:
1. Set a Hypothesis
A/B testing itself is just the orchestration of an experiment. What’s important, though, are
the implications and analysis of the test. So, consider what you want to analyze -- what
repercussions will the test have for the app experience as a whole? What are you trying to
learn? Having a set hypothesis that you know can be proven or disproven by the data is the
only way to identify clear results and move forward.
2. Start with Quick Wins
Until you can prove the power of A/B testing to your company’s decision-makers, keep your
tests fast and simple, focusing on quick wins. Look at easily achievable elements you can test,
like push messaging content, to run simple but effective campaigns. This way, you can help
your organization see the value of A/B testing as early as possible before taking a deeper
data dive.
3. Test Continuously
A/B testing of your mobile app is never “one and done.” Instead, it’s an ongoing process
of continuous learning, which means testing, testing and then more testing. The power of
automated A/B testing is that you can have short test cycles where engineering doesn’t
have to be involved -- so you can take advantage of your testing platform to test regularly
and repeatedly. And when you measure against control groups, you can dynamically change
messaging and content in response, allowing for easier, smarter testing.
4. Segment Your Audience
Instead of just testing across all audiences broadly, you can use segmentation to create
highly specific user groups, resulting in richer end results. Running multiple A/B tests across
your different segments ensures that you’ll be gaining the most targeted test results possible
from users who naturally behave similarly. Plus, it allows you to conduct more complex
testing across campaigns for better behavioral insight.
5. Test Screen Flows
Testing and tracking different visitor screen flows is unique to app analytics, and is key to
determining fall-off points and conversions within A/B campaigns. For example, group A may
be given only a one-screen conversion process, but group B has a two-screen process; one
is faster but conveys less information, the other takes more time but provides added context.
You can test which screen experiences drive the greatest impact on retention and revenue,
in addition to testing messaging, context and visuals, for the most in-depth results.
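As a rough illustration, a few lines of Python can turn raw step counts for each flow into a fall-off report. The step names and counts here are hypothetical; in practice they would come from your analytics platform’s funnel data.

def funnel_report(name, steps):
    """Print step-to-step retention so fall-off points are easy to spot."""
    print(name)
    start = steps[0][1]
    previous = start
    for step, count in steps:
        print(f"  {step:<18} {count:>6}  {count / previous:6.1%} of previous  {count / start:6.1%} overall")
        previous = count

# Hypothetical counts for the two tested flows; real numbers would come from your funnel reports.
flow_a = [("view_offer", 5000), ("confirm_purchase", 900)]                           # one-screen flow
flow_b = [("view_offer", 5000), ("view_details", 3200), ("confirm_purchase", 1150)]  # two-screen flow

funnel_report("Group A (one screen)", flow_a)
funnel_report("Group B (two screens)", flow_b)

Comparing the “overall” completion between the two groups shows which flow converts better end to end, while the “of previous” column points to the exact screen where users drop off.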
6. Graduate to Multivariate Testing
There are many small features you can address with a simple A/B test. The next step to
assessing deeper interactions is multivariate testing -- where you test two different elements
at the same time, such as the order of a feature set as well as the color, particularly by
segment for enhanced results. This needs to be built on a foundation of preliminary A/B tests
that have shown clear and actionable insight. Moving to the multivariate level requires a
baseline set of analytics that prove why you’re testing the next round of features.
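A minimal sketch of what that looks like in practice: a full-factorial split in which every combination of the two elements -- feature order and button color, both placeholder values here -- becomes its own test cell. The function names are illustrative, not a specific platform’s API.

import hashlib
from itertools import product

# The two elements mentioned above -- feature order and color -- with placeholder values.
FACTORS = {
    "feature_order": ["popular_first", "newest_first"],
    "button_color": ["blue", "green"],
}

# Every combination becomes its own cell: 2 x 2 = 4 variants.
VARIANTS = [dict(zip(FACTORS, combo)) for combo in product(*FACTORS.values())]

def assign_multivariate(user_id, test_name="home_screen_mvt"):
    """Deterministically place a user in one of the four factor combinations."""
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_multivariate("user-42"))  # e.g. {'feature_order': 'newest_first', 'button_color': 'blue'}

Note that four cells split your traffic twice as thinly as a simple A/B test, which is one more reason a baseline of clear A/B results, and enough users per cell, should come first.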
The Solution
RunKeeper ran an A/B test of their start screen using Localytics to see if a design change
prompted users to log non-running activities. Specifically, they used funnel analysis tools to
split the reports with custom dimensions for the A/B test groups and analyze which
UI variation resulted in more non-running events logged.
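The reporting pattern described here -- tagging every event with the user’s test group as a custom dimension so funnels can be split by group -- can be sketched as follows. AnalyticsClient and its method names are hypothetical stand-ins, not the Localytics SDK.

import hashlib

class AnalyticsClient:
    """Hypothetical stand-in for an analytics SDK; not the Localytics API."""
    def __init__(self):
        self.custom_dimensions = {}
        self.events = []

    def set_custom_dimension(self, index, value):
        self.custom_dimensions[index] = value

    def tag_event(self, name, attributes=None):
        # Every event carries the current dimensions, so funnel reports can be split by test group.
        self.events.append({
            "event": name,
            "dimensions": dict(self.custom_dimensions),
            "attributes": attributes or {},
        })

def start_screen_group(user_id, test_name="start_screen_test"):
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return "new_start_screen" if int(digest, 16) % 2 == 0 else "original_start_screen"

client = AnalyticsClient()
client.set_custom_dimension(0, start_screen_group("user-42"))
client.tag_event("activity_logged", {"activity_type": "cycling"})  # a non-running activity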
The winning version was 10 times more successful at driving users to log a non-running
event than the other version.
Since the A/B test was exclusively done in their Android app, RunKeeper was able
to port the winning variation over to their iPhone app with extra confidence – saving
development time on iterations later on.
“The powerful one-two punch of analytics and testing provides undeniable results that are hard for the higher-ups to ignore.”