A/B Test Planning: How to Build a Process that Works 

By: Jaan-Matti Saul  

 
A strong A/B testing plan will allow you to increase your revenue. You'll also gain valuable insights about your customers because you'll know their preferences instead of guessing what they want.

A/B testing produces concrete evidence of what actually works in your marketing. Continuously testing your hypotheses 
will not only ​increase conversion rates​, but it will also give you a better understanding of your customers. Having a clear 
idea of what your customers actually like and prefer can do wonders for your branding and marketing in ​other​ channels as 
well. 

At its core, A/B testing belongs to a category of scientific optimization techniques that use statistics to increase the odds 
that your site visitors will see the best-performing version. 

By levels of sophistication, scientific optimization can be broken down into three categories: 

1. A/B split testing. Simple testing of one element of a page against another to see which one results in better 
performance. 
2. Multivariate Testing. Testing several elements at a time. The goal is to get an idea of which elements work together 
on a page and play the biggest role in achieving the objective. 
3. Experimental Design. Developing your own research method for an in-depth analysis of a specific element. 

This post is about the A/B testing process. A good A/B testing plan produces the fastest gains and has a lower chance of 
error through misuse. 

A structured A/B split testing process 

A/B testing is a part of a wider, holistic conversion optimization process and should always be viewed as such. Doing A/B
testing without thinking about your online goals and user behavior in general will lead to ineffective testing. When 
correctly planned, you can achieve good, measurable results in a short timeframe. 

A structured process to improve your ​conversion rates​ needs to be continuous. It is a cycle of Measurement, 
Prioritization, and Testing (then repeat). Each stage has a goal and a purpose, leading to the next stage. 

For your plan to be successful, follow this four-step A/B testing template. 
Step 1: Measure your website’s performance. 

To continually improve your conversion rates, start by properly measuring your website's performance. We want to know what is happening and why it's happening.

What is happening: Get actionable data from Google Analytics

Define your business objectives. 

This is the answer to the question: “Why does your website exist?” Make your objectives DUMB—Doable, Understandable, 
Manageable, Beneficial. Companies often fail in web analytics because their objectives are not simple to understand or 
measure. 

Example: A business objective for an online flower store is to “increase sales by receiving online orders for our bouquets.” 

 
 
Define your website goals. 
 
Goals come from your business objectives and are mostly strategic in nature. So, if we were to continue with the business 
objective example of increasing our bouquet sales, we have to: 

1. Do x. Add better product images;
2. Increase y. Increase click-through rates;
3. Reduce z. Reduce our shopping cart abandonment rate.

Goals are your priorities, expressed as simply as possible. Before you start working with your data, make sure you have 
them defined and ​properly set up in Google Analytics​. 

Define your Key Performance Indicators (KPIs). 

KPIs are metrics (numbers). 


“A key performance indicator (KPI) is a metric that helps you understand how you are doing against your 
objectives.” – Avinash Kaushik 

A metric becomes a KPI only when it measures something connected to your objectives. 

Example: Our flower store’s business objective is to sell bouquets. Our KPI could be the number of bouquets sold online. 

This is the reason why you need to define your business objectives clearly—without them, you're unable to identify your KPIs. If you have proper KPIs and look at them periodically, you'll keep your strategy on track.

Define your target metrics. 

Our flower store sold 57 bouquets last month. Is that good? Or devastatingly bad? For your KPIs to mean something, they 
need target metrics. Define a target for every KPI. For our imaginary flower store, we can define a monthly target of 175 
bouquets sold. 

Now you have a framework that ensures that the work you do is relevant to your business goals. 
Why it's happening: Talk to your visitors

Getting real feedback from your visitors​ is invaluable. ​Use surveys​ to discover your visitors’ objectives. Set up entry 
surveys to find out why they’re visiting your site and exit surveys to find out if their goals were fulfilled. 

You can find what is happening on your site by using Google Analytics—which features they use, where they exit, and who is profitable. But Analytics doesn't tell you the why behind this. This is where qualitative data comes in. Qualitative data is

The best way to get qualitative data from your visitors is through surveys. Your goal is to find out why visitors buy or why 
they leave without purchasing. 

 
Ideas for gathering qualitative data 

● Add an exit survey on your site, asking why your visitors didn’t complete the goal of the site. 
● Add an exit survey on your ​thank-you pages​ to find out why your visitors converted. 
● Perform usability testing with members of your target group.
● Send out feedback surveys to your clients to find out more about them and their motives. 

If a large number of people click on your ebook ad but only a few actually buy after seeing the price, you'll want to
dig deeper into the problem. 

In this example, you could put up a survey, asking people if they have any questions that the page doesn’t answer. You can 
also survey people who have already bought your book to see what made them buy. 

Look for trends in your customer feedback 

You’ll start noticing trends after you’ve collected 50+ responses. Often, you’ll find that the site hasn’t addressed an 
important objection of your client. The main takeaway is that qualitative data will help you understand which elements will 
have the highest impact when running an A/B test. 
Think about how you could spot emerging trends with your customer feedback. When something pops up, you can dive 
deeper to find out exactly what’s going on. 

Use segmentation to get actionable data 

The problem with using site averages in your testing is that you’re missing out on what’s going on ​inside​ the average (the 
segments). 

An experiment that seemed to be performing poorly might actually have been successful for a certain segment. For
example, our experiment may have shown that a variation of a mobile landing page isn’t performing well. When looking 
into the segments though, you may see that it’s performing exceptionally well for Android users but badly for iPhone 
users. When not looking at segments, you can miss this detail. 

“Never report a metric (even the most hyped or your favorite KPI) without segmenting it a few levels deep. That is the only way to get deep insights into what that metric is really hiding or to see what valuable information you can use.” – Avinash Kaushik

To understand segments, we need to understand dimensions. A dimension is any attribute of a visitor to your website. A 
dimension can be a source where they came from (a country, URL, etc.), technical information like their browser, or their 
activity on the site (pages they looked at, images they opened). A segment is made up of a group of rows from one or 
more dimensions. 

By default, a lot of the data you get is useless. The number of visits to your site doesn’t really give you actionable 
information. To get actionable data for testing, you need to segment the data you have, using dimensions. You can also 
split test for single segments of your traffic, which is a part of behavioral targeting. 
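
As a minimal sketch, here is how you might compare conversion rates across one dimension with pandas. The file name and column names ("sessions.csv", "source", "converted") are placeholders for whatever your analytics export actually contains.

```python
import pandas as pd

# Segment sessions by traffic source and compare conversion rates.
# Assumes a hypothetical export with one row per session and columns
# "source" (e.g. google, youtube, email) and "converted" (0/1).
sessions = pd.read_csv("sessions.csv")

by_source = (
    sessions.groupby("source")
    .agg(sessions=("converted", "size"), conversions=("converted", "sum"))
)
by_source["conversion_rate"] = by_source["conversions"] / by_source["sessions"]

# The site-wide average hides the spread between segments.
print(f"Overall conversion rate: {sessions['converted'].mean():.2%}")
print(by_source.sort_values("conversion_rate", ascending=False))
```

The same grouping works for any dimension: device category, landing page, or visit count bucket.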

Three good segmentation strategies for your testing plan 

Avinash Kaushik has outlined a simple strategy for segmenting your traffic. The best ideas for taking action come from segmentation.

Segment by source. Separate people who arrive on your website from email campaigns, Google, Twitter, YouTube, etc.  
Find answers to questions like: 

● Is there a difference between bounce rates for those segments?
● Is there a difference in visitor loyalty between those who came from YouTube versus those who came from 
Twitter? 
● What products do people who come from YouTube care about more than people who come from Google? 

Segment by behavior​. Different groups of people behave differently on a website because they have different needs and 
goals. For example, an ecommerce store can separate people who visit more than 10 times a month from those who visit 
only twice. Do these people look for products in different price ranges? Are they from different regions? 

Segment by outcome​. Separate people by the products they purchase, by order size, by people who have signed up, etc. 
Focus on groups of people who have delivered similar outcomes and ask questions like the ones above. 

Keep your most profitable segments in mind when building your split-testing plan. It's good to know who your most profitable visitors are before you start split testing.
Step 2: Prioritize your testing opportunities. 

Once you have your metrics in place and know your goals, the next step is to prioritize what to test. You could test 
anything, but everyone needs a place to start from. Google Analytics gives you a lot of data, but it makes sense to start by 
split testing opportunities that promise the biggest gains. 

Prioritize tests based on data​—it’s your most valuable resource. 

Your homepage may not be the most important area of your site. Look at your “top landing pages” report from Google 
Analytics. You’ll likely see many different pages with entrances, some even more than your homepage. 

Look at data on a page template level. When you add together the traffic from all of your pages that use the same
template, you may see that they get a lot more traffic than your homepage. Do this when you need to determine 
opportunities for testing site-wide template layouts. 
Prioritize pages with high potential for improvement. 

Look for pages that aren’t performing well. Your Analytics data can show some clearly problematic pages, like landing 
pages with high bounce rates, but other areas may not be so obvious. 

If your problem is a high shopping-cart abandonment rate in the checkout process, Google Analytics won't tell you that
visitors can’t find shipping information on other pages and, as a result, are going into the checkout process just to see 
that information. 

If you optimize only shopping cart views, you may not fix the problem. You also need to look at your product and category
pages. 

No single information source will perfectly identify split-testing opportunities. You need to look at several pages together. 

Top exit pages. This is the last page that someone sees before leaving your site. Labeled “% Exit” in Google Analytics, it will show you the percentage of visitors who leave your site immediately after viewing the page. Top exit-rate pages can identify problem areas within your user flow.

You can visualize the user flow with a ​conversion funnel​: 


1. Persuasive end (top of the funnel). The persuasive end includes the most-viewed areas of your site like your
homepage, category pages, and product pages. These are the areas of your site where you’re getting the visitor 
interested in your product or service. 
2. Transactional end (bottom of the funnel).​ The bottom end of your funnel is where a conversion happens—visitors 
buy the product, sign up, or contact you. Most of the data we’ve looked at so far has been focused on the 
persuasive end of the funnel, but we also need to look at the bottom of the funnel. 

Look at funnel drop-off rates. 

 
A funnel in Google Analytics​ focuses on the bottom end of the funnel. If you have your funnels correctly set up, you can 
gain valuable split-testing information from it. 

Look for sudden drop-off rates in the funnel. For example, if only 18% of the traffic proceeds from Step 2 to Step 3 in the 
checkout area, you have a problem in Step 2. 
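
A back-of-the-envelope sketch of that check, using made-up step counts:

```python
# Spot the weakest funnel step from step-level visit counts.
# The step names and numbers are illustrative, not from a real funnel.
funnel = [
    ("Cart", 4_200),
    ("Shipping details", 2_900),
    ("Payment", 520),       # only ~18% of the previous step continues
    ("Confirmation", 430),
]

for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = n / prev_n
    print(f"{prev_name} -> {name}: {rate:.0%} continue, {1 - rate:.0%} drop off")
```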

If you’ve identified a drop-off step in your transactional funnel, you should ask yourself ​why​ the problem is happening: 

1. What information were they looking for? 


2. Is anything stopping them from taking action on the page? 
3. What were they expecting to see on the page? 
4. Where are visitors coming from? 
5. Are they not motivated enough to proceed? 

Answers to these questions should give you ideas for how to plan your split tests. 
Prioritize tests based on value and cost. 

Start with high-value, low-cost testing ideas. An example is testing variations in a checkout process step that shows significant abandonment rates compared to previous steps.

Prioritize pages that are important. 

Pages that have the highest traffic volume are the most important for testing. You have probably identified many pages 
that perform worse than you would like them to, but if they don’t have a high volume of (expensive) traffic, don’t count 
them as priorities. 

Pages with a high volume of traffic are more important. 

You need pages with high traffic to complete your experiments within a reasonable timeframe. Pages with more than 30,000 monthly unique visitors can reach statistical significance in a few weeks.

With a lower level of traffic, you need to run tests over a longer period of time. Because tests on high-traffic pages finish 
sooner, you can move on to the next tests faster, which will speed up your optimization process. 
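
As a rough illustration of why traffic matters, the sketch below estimates test duration from a common sample-size approximation; the baseline rate, detectable lift, and traffic figures are assumed inputs, not recommendations.

```python
# Rough estimate of how long a 50/50 A/B test needs to run.
# Uses the common approximation n ≈ 16 * p * (1 - p) / d**2 per variation
# (roughly 80% power at a 5% two-sided significance level).
baseline = 0.03          # current conversion rate (3%), illustrative
lift = 0.20              # smallest relative lift worth detecting (20%)
weekly_visitors = 7_500  # ~30,000 unique visitors per month

d = baseline * lift
n_per_variation = 16 * baseline * (1 - baseline) / d ** 2
weeks = 2 * n_per_variation / weekly_visitors

print(f"{n_per_variation:,.0f} visitors per variation, about {weeks:.1f} weeks")
```

With those inputs the test needs roughly 13,000 visitors per variation, which a 30,000-visitor-per-month page covers in three to four weeks; a page with a tenth of that traffic would need most of a year.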

Look for: 
1. Most-visited pages​. Look for information on unique visitors. When looking at the number of overall pageviews, 
your conversion data will get skewed. 
2. Top landing pages​. The most-visited pages report shows the most popular pages overall; you also need to look at the pages people see first when they arrive on your site.
3. Pages with expensive visits​. When choosing between two pages with similar traffic and conversion rates, pick the 
one with a higher cost of traffic for a better split-testing ROI. 

Use the PXL prioritization framework 

CXL created their own prioritization model that weeds out as much subjectivity as possible. It's predicated on the necessity of bringing data to the table. It's called PXL.

Grab your own copy of this spreadsheet template here.​ Just click File > Make a Copy to have your own customizable 
spreadsheet. 

Instead of guessing what the impact might be, this framework asks you a set of questions about it: 

● Is the change above the fold? Changes above the fold are noticed by more people, thus increasing the likelihood of 
the test having an impact. 
● Is the change noticeable in under 5 seconds? Show a group of people the control and then a variation(s). Can they 
tell the difference after seeing it for 5 seconds? If not, it’s likely to have less impact. 
● Does it add or remove anything? Bigger changes like removing distractions or adding key information tend to have 
more impact. 
● Does the test run on high-traffic pages? Relative improvement on a high-traffic page results in more absolute 
dollars. 

Many of the variables specifically require you to bring data to the table to prioritize your hypotheses. 

● Is it addressing an issue discovered via ​user testing​? 


● Is it addressing an issue discovered via qualitative feedback (surveys, polls, interviews)?
● Is the hypothesis supported by mouse tracking, heatmaps, or eye tracking?
● Is it addressing insights found via digital analytics? 

Having weekly team discussions on tests with these four questions will quickly make people stop relying on opinions. 

There are also bounds based on the estimated time to implement (“ease of implementation”). Ideally, you’d have a test 
developer as part of prioritization discussions. 

Grading PXL 
This is a binary scale. You have to choose one or the other. So, for most variables (unless otherwise noted), you choose 
either a 0 or a 1. 

Certain variables are also weighted because of their importance—how noticeable the change is, if something is 
added/removed, ease of implementation. 

So, on these variables, we specifically say how things change. For instance, on the "Noticeability of the Change" variable, you either mark it a 2 or a 0.
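
As an illustration only, a PXL-style tally might look like the sketch below. The variable names and weights paraphrase the questions above; they are not a copy of CXL's actual spreadsheet.

```python
# Illustrative PXL-style scoring: each criterion is answered 0/1, a few
# weighted criteria count double, and the total ranks the test ideas.
def pxl_score(answers: dict) -> int:
    weighted = {"noticeable_in_5s", "adds_or_removes_element", "easy_to_implement"}
    return sum((2 if name in weighted else 1) * value for name, value in answers.items())

idea = {
    "above_the_fold": 1,
    "noticeable_in_5s": 1,          # weighted criterion
    "adds_or_removes_element": 0,   # weighted criterion
    "high_traffic_page": 1,
    "backed_by_user_testing": 0,
    "backed_by_qualitative_feedback": 1,
    "backed_by_mouse_tracking": 0,
    "backed_by_analytics": 1,
    "easy_to_implement": 1,         # weighted criterion
}
print(pxl_score(idea))  # higher score = higher priority in the testing queue
```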

Step 3: Test. 

The reason for the in-depth analysis techniques described above is to get a clear understanding about what is happening 
on your website. That process enables you to find areas where A/B testing can bring the best results and to plan 
accordingly. 
Now let’s set up your test. 

Form a clear hypothesis for your test. 

A hypothesis defines why you believe a problem occurs. If your problem is a high abandonment rate in your checkout
process, your hypothesis may be that “People start questioning our worth after seeing grammar mistakes.” 

 
After you define your problem and articulate your hypothesis, you can come up with specific split-testing variations. 
Clearly defining your hypothesis will help you create test variations that give meaningful results. 

An example of a hypothesis in action: 

● Problem. Less than 1% of visitors sign up for our newsletter. 


● Hypothesis. Visitors don’t see the value in signing up for our newsletter. Adding three bullet points about the 
benefits will increase sign-up rates. 

In this case, we would summarize the benefits that the newsletter member would get from joining the newsletter. Even if 
the original version works better in your A/B test, you learned something about your visitors. You clearly defined why you 
did the test and can draw conclusions based on the outcome. 

A good hypothesis… 

1. Is testable​. Your hypothesis is measurable, so that it can be used in testing. 


2. Has a goal of solving conversion problems​. Split testing is done to solve specific conversion problems. 
3. Gains market insights​. Besides increasing your conversion rates, split testing will give you information about your 
clients. 

Use valid statistical methods. 

Ignoring statistical significance when running an A/B test is worse than not running a test at all. The results will give you 
confidence that you know what works for your site when, in reality, you don’t know more than before running the test. 

If we were to toss a coin 1,000 times, we would reduce the influence of chance but still get slightly different results with 
each trial. 

The statistical significance of your experiment should be over 95%. But statistical significance alone is not a guarantee. It 
is a measure of confidence. 

Read more about this ​here,​ ​here,​ or ​here​. 
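
For a concrete (purely illustrative) example, a two-proportion z-test is one standard way to put a p-value on an A/B result; the visitor and conversion counts below are invented.

```python
from math import sqrt
from scipy.stats import norm

# Two-sided two-proportion z-test for an A/B result (illustrative counts).
def ab_significance(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_a, p_b, p_value

p_a, p_b, p_value = ab_significance(conv_a=210, n_a=6_000, conv_b=262, n_b=6_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p_value:.3f}")
# A p-value below 0.05 corresponds to the >95% significance level mentioned above.
```

Most testing tools run a calculation like this for you; the point is to check it before declaring a winner, not to reimplement it.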

Test for revenue. 

At the end of the day, revenue is what matters. An increase in conversion rates may sometimes mean a decrease in 
revenue if you’re tracking the wrong indicators. 
Let's imagine that you're selling watches online and suddenly increase your prices by 25%. Your conversion rates will probably drop, but your overall revenue may increase if the demand for your product is high enough.
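
Here's that scenario with illustrative numbers:

```python
# Illustrative numbers only: conversion rate falls after a 25% price increase,
# yet revenue per 1,000 visitors goes up.
visitors = 1_000

old_price, old_cr = 80.00, 0.030
new_price, new_cr = 100.00, 0.026

old_revenue = visitors * old_cr * old_price   # 2,400.00
new_revenue = visitors * new_cr * new_price   # 2,600.00
print(old_revenue, new_revenue)
```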

What to test—the low-hanging fruit 

You should always use analytics and customer feedback to plan your tests. But for a general guideline, here are elements 
that historically have given good results: 

● Calls to action. Placement, wording, size;
● Copywriting. Value propositions, product descriptions;
● Forms. Length, field types, text;
● Layout. Homepage, content pages, landing pages;
● Product pricing. Try testing for revenue by testing your prices;
● Images. Placement, content, and size;
● Amount of content on the page. Short versus long.
Step 4: Learn from your results and start over. 

“Never stop testing, and your advertising will never stop improving.” -David Ogilvy 

This is the one rule of marketing that surpasses all others. No matter how well your landing pages or emails may be doing, 
they can always be doing better. By not having a goal for constant improvement, you’re leaving money on the table and 
letting your competition get ahead of you. 

Not all split tests will be successful. You are doing well if 20% of split tests improve your conversion rates. Split-testing 
will simply give you a new baseline to improve upon. You will start seeing considerable website performance 
improvements after a number of different split tests. 

Considering how competitive today’s online market is, if you aren’t constantly tracking and optimizing, you’re going to get 
left behind—and outsold—by people who are. 
A/B testing plan considerations 

Be aware of the local maximum. 

Most A/B testing is done one variable at a time. You test headlines, button text, images etc. These variables are simple to 
test; your results will be clear; and your next steps will be obvious. By isolating one variable, you can be more confident in 
your results. There’s a downside, though. 
The argument against testing single variables is that, if you continue to do it for a long period of time, it will be impossible 
for you to arrive at a much better design. 

Instead, you’ll be improving your existing design in small increments and get stuck at the local maximum. You’ll hit a glass 
ceiling in your design and, without a big change, be unable to earn larger gains. 

Protect SEO. 

Google openly endorses split testing because the goal is to improve a website and make users happy. But there are a few 
things you should consider. 

Don’t run your experiments longer than necessary. If your experiment has been running longer than what Google would 
normally expect from a split test, you may confuse the search engine about which version of the content is the “real” 
version. 

The general rule is to update your site with the winning variation as soon as you are sure of its statistical validity. Google 
wants to prevent people from deceiving search engines. 

Use rel=canonical. Instead of using a noindex meta tag on your variation page, use rel=canonical. If you are testing two 
variations of your homepage, you don’t want search engines to deindex both. 
You need search engines to understand that your variations are what they are—variations of the original URL. Using 
noindex in a situation like this may create problems later. 

Use 302 instead of 301. When you need to redirect a visitor to one of your variations, use a 302 temporary redirect instead of the permanent 301 version. You want Google to keep the original URL in its index.
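
As a hypothetical sketch (not a recommendation of any particular stack), a Flask route might assign the variation like this; the route and variation path are placeholders.

```python
import random
from flask import Flask, redirect

app = Flask(__name__)

# Send half of homepage traffic to a variation URL with a 302 (temporary)
# redirect so search engines keep indexing the original URL.
@app.route("/")
def homepage():
    if random.random() < 0.5:
        return redirect("/variation-b", code=302)  # temporary, not 301
    return "Original homepage"
```

A production setup would also pin each visitor to one variation (for example with a cookie) and add a rel=canonical tag on the variation page pointing back to the original URL, as described above.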

Forget about A/A testing 

A/A testing​ validates your test setup—if your variations display correctly and your software reports the right numbers. The 
biggest problem with A/A testing is that setting it up and running it takes time (and traffic) that could be used for more 
split testing. 

“The volume of tests you start is important, but even more so is how many you finish every month.” – Craig Sullivan

It’s quicker for you to test your experiments before going live. To make sure your variations are displaying as they should 
in different browsers, use cross-browser testing. For checking the numbers, use your split-testing and analytics packages 
together on every test. 
If, for whatever reason, you still need to do an A/A test, consider an A/B/A test, or 25/50/25 split, instead. There’s an even 
better way: segmentation. 

Avoid the percentage confusion. 

Besides significance, the other common problem with A/B tests is how percentages are quoted. Since conversions are measured in percentages, there are two ways to report a change:

1. Change as the absolute difference between the two conversion rates (percentage points);
2. Change as the relative amount by which one variation is larger than the other.

Normally, the second (relative) number is used when reporting conversion rate improvements, but make sure you know which one you're looking at before deciding anything based on the numbers.
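
A quick illustration of the difference, with made-up conversion rates:

```python
# The same result reported two ways; numbers are illustrative.
control, variation = 0.020, 0.025  # conversion rates

absolute_change = variation - control               # 0.005 -> "0.5 percentage points"
relative_change = (variation - control) / control   # 0.25  -> "25% improvement"

print(f"{absolute_change:.1%} absolute vs {relative_change:.0%} relative lift")
```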

Integrate your A/B testing process with conversion optimization methodologies. 

Your A/B testing efforts will bring better returns when you execute them within a conversion optimization framework. 
Different agencies have come up with different versions for this. Here are some of them: 
The Conversion Rate Experts Methodology 

Conversion Rate Experts have developed a systematic process for guiding a business through a series of steps toward 
better conversion rates. 
Building To The Ultimate Yes by MEClabs 

MEClabs has a funnel that represents a series of decisions taken by the prospective customer. The idea revolves around 
the customer taking small decisions step by step, “micro-yeses,” which lead to the Ultimate Yes, or the sale. 

The Kaizen Plan by Widerfunnel 

 
The idea is to prioritize conversion testing opportunities and implement the right experiments, which will drive the most impactful results. This will help you maximize your conversion rate improvement.

ResearchXL by CXL 

CXL has developed their own framework that’s used by dozens of agencies. It includes six parts of research and also has 
a prioritization component, solving the oh-so-common problem of what to test. 
 

The framework consists of the following: 

1. Heuristic Analysis; 
2. Technical Analysis; 
3. Web Analytics Analysis; 
4. Mouse Tracking Analysis; 
5. Qualitative Surveys; 
6. User Testing. 

They've also written a conversion optimization guide that goes into further detail and explains how ResearchXL ties into a holistic optimization program.

Mobile considerations 

Visitors interact with their mobile devices through tapping, so their ability to quickly locate and hit tap targets is key to mobile conversion rates.

When testing on mobile screens, begin testing elements related to the user experience and design of the 
site—navigational elements or mobile forms.

Mobile analytics generally fall into two categories: tracking how people interact with websites in a mobile browser and 
in-app analytics. Let’s briefly touch on both. 

Testing in a mobile browser 


A smaller screen forces users to focus more on what is relevant, giving you ample opportunities in the conversion area. 

What to test for mobile users: 

● Call-to-action buttons. On a desktop screen, the CTA button won't take up more than 2% of the screen real estate. On a mobile device, you have the opportunity to make your CTA button take 25–50% of the screen. You can test any standard property of your button: size, copy, and visual design.
● Navigation​. You need to change the site navigation for mobile viewers. This is a good opportunity to direct visitors 
to the most important page content. Think of mobile navigation as a tool for helping your visitors achieve their goal, 
and remove anything that can get in the way. 
● Copywriting​. Since the screen real estate of mobile devices is small, focus your visitor’s full attention on your value 
proposition. Test the wording carefully. 
● Forms​. Experiment with the design and number of fields, dropdowns, and error messaging. 

Google Analytics has a mobile section, but it includes both mobile phones and tablets. Accessing a website on a tablet is 
a different experience from a mobile phone, so make sure you use extra filters for cleaner data. 
Compare your site performance (visit duration, bounce rate, conversion rate, etc.) for mobile visitors vs. desktop visitors 
for insights into areas that may cause trouble for your visitors. 

Note: Google Analytics tracks visitors only from browsers that support JavaScript. There are additional analytics solutions 
available for tracking traffic from non-supporting browsers. 

Use a previously set advanced segment (where you’ve filtered out non-mobile traffic) and apply that to your All Pages 
report. You’ll get a good overview of which pages your mobile visitors use, which may be vastly different than those 
viewed most often by desktop users. 
This information can be helpful to test mobile menus. You can bring out the content your mobile visitors want to see the 
most, creating a different navigation menu from your desktop version. 

For more information about Google Analytics for mobile platforms, see this thorough post by Bridget Randolph.

Testing in applications

Google Analytics has a separate tool for measuring and testing variations in mobile applications. The main difference from testing in a browser is that, without putting in some thought, you'll have a slower iteration cycle.

You need to upload every test to the app store and wait for your users to update their apps. This means you should 
carefully think through what you want to track and test in your application before you release it. It will be more difficult to 
change this after your app has been downloaded. 

How is multivariate testing different? 

When you perform a ​multivariate test​, you’re not testing a different version of a web page like you are with an A/B test. 
You’re performing a far more subtle test of the elements ​inside​ one web page. 
 
A/B testing usually involves fewer combinations with more extreme changes. Multivariate tests have a large number of 
variations that usually have subtle differences. 

A/B testing is usually a better choice if you need meaningful results fast. Because the changes between pages are starker, it's easier to tell which page is more effective. A/B testing is also better if you don't have a lot of traffic to your site. Because multiple variables are tested together, multivariate testing needs a site with a lot of traffic.

The goal of multivariate testing is to let you know which elements on your site play the biggest role in achieving your objectives. Multivariate testing is more complicated and is better suited for advanced testers. It's more prone to reporting errors, and it's common for a test to have more than 50 combinations.
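
To see why the combination count grows so quickly, here is a toy calculation with assumed variant counts:

```python
from math import prod

# The number of multivariate combinations is the product of the variant
# counts for each element being tested (element names are illustrative).
variants_per_element = {"headline": 4, "hero image": 4, "CTA copy": 2, "form length": 2}
combinations = prod(variants_per_element.values())
print(combinations)  # 4 * 4 * 2 * 2 = 64 page versions to split traffic across
```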

Case studies for inspiration 

Value proposition: 

Citycliq increased their conversion rates by 90% after experimenting with Value Propositions 

 
Citycliq was testing their value propositions to see what converts the best. Eventually, they concluded that the value 
proposition with the “Purest, most direct representation of their product” was the winner. 

Call to action: 

Barack Obama raised 60 million dollars by running an A/B test 

During the election period, the Obama team ran several A/B tests on the campaign’s landing page. The goal was to get 
people to sign up with their email addresses. A/B testing generated an additional 2.9 million email addresses, which 
translated to an extra $60 million in donations. 

Images: 

DHL achieved a 98% conversion rate increase 

The challenger page increased the size of the contact form. They also replaced a general logistics image with the image of an actual DHL employee.
A/B testing resources 

Testing tools: 

Instead of listing all of the various testing tools out there (and by now, there are many), take a look at our gigantic list of CRO tools. Each tool is reviewed by an actual practitioner, so you can choose which one is right for you.

Supporting tools 

A/B Split Test Significance Calculator by VWO 

A widely used tool for calculating the significance of your A/B testing results. 

A/B Split and Multivariate Test Duration Calculator by VWO 

The calculator lets you estimate how long your test should run.

Crazyegg, Inspectlet, Clicktale, and Mouseflow

Heatmap software for tracking visitor behavior on your site. You can get good data for hypothesis generation.
Surveys 

Qualaroo 

This is for surveying visitors who are currently browsing your site. We use this in our agency. It allows you to set up behavior-triggered surveys to find out why your customers are behaving the way they do.

SurveyMonkey 

SurveyMonkey has been around for a long time and offers a stable experience. It has good analytical tools for analyzing 
your responses. 

Google Docs 

Free and easy to use. Data is collected into spreadsheets, which makes it easy to analyze.

Typeform 

The prettiest survey tool of all. 


Conclusion 

You should always integrate A/B testing into a larger conversion optimization framework for good results. In the end, the 
testing is all about knowing if your hypotheses are right and if your conversion plan is on the right course. 

The main reason for doing split testing is to maximize the conversion potential of your website. It makes sense to invest 
in conversion rate optimization before spending money on large-scale advertising campaigns. 

After a while, your conversion funnel will be effective enough that you can transfer your winning campaigns to other 
media. Because your core is optimized so well, by the time you go to offline media, your campaigns will convert well 
enough to pay off. 

Key takeaways to create a solid A/B testing plan 

1. Make sure you get actionable data from Google Analytics​. To make use of data, you need to have goals first. 
Define your target metrics. 
2. Use qualitative surveys. Besides knowing what is going on, you also need to know why.
3. Segment​. It’s the only way to get valuable information. 
4. Prioritize​. Test pages with higher potential first. 
5. Form a hypothesis​. A/B testing has to start with a clearly defined hypothesis. 
6. Statistical significance. Never stop a split test before reaching a significance level of at least 95%.
7. Never stop testing​. No matter how well your landing pages or emails may be doing, they can always do better. 
8. A/B testing is part of a conversion rate optimization framework​. Pick a framework and work on it constantly to 
maximize your site’s potential. 
