A/B Testing 101

A/B Testing is a randomized experimentation method used to compare two or more versions of a product feature to determine which one yields better business results. It involves analyzing data, formulating hypotheses, creating variations, running tests, and analyzing results to declare a winning variant based on statistical significance. The process helps in making informed decisions before rolling out changes to the entire user base.

Uploaded by

P.RAGHU VAMSY
Copyright © All Rights Reserved

A/B Testing


A/B Testing is a randomized experiment wherein two or more versions of a product feature are shown to different segments of users at the same time, to determine which version drives better business results.
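The random split described above — each user consistently seeing one version — can be sketched as deterministic hash bucketing. This is a minimal sketch; the function name, experiment name, and 50/50 split below are illustrative, not from any particular tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'variant_a'.

    Hashing user_id together with the experiment name gives a stable,
    roughly uniform assignment, so the same user always sees the same
    version for the lifetime of the experiment.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # value in [0, 1)
    return "control" if bucket < split else "variant_a"

print(assign_variant("user_42", "cart_addons"))
```

Hashing (rather than random assignment at each visit) matters: a user who bounces between versions would contaminate both samples.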
Why A/B Test?

A/B testing is done to test feature changes with a small set of users before the change is rolled out to the entire user base.

Once the impact is statistically significant, the winning variant can be scaled to 100% of users.
What can you A/B test?
Any change or feature that can be quantified and measured is eligible for an A/B test.

Examples:

Bounce rate of visitors
Form submission %
Items added to cart
Cart-to-order success ratio
A/B Testing Methodology
Step 1: Data analysis

Measure and look at the various metrics that matter to the business.

e.g. Assume you are chasing AOV, or Average Order Value, in a food-ordering app.

Step 2: Formulate Hypothesis

A possible hypothesis could be: “Providing an option to add beverages and sides on the cart page would help drive AOV higher.”
Step 3: Create Variations

Create design variations where you provide users an option to add sides/beverages (image) — this is variant A.
Step 4: Run Test

Make the changes to the product and configure the experiment on a tool of your choice, such as Firebase, VWO, Optimizely, etc.

Step 5: Analyze results

Compare AOV between variant A and the Control (baseline) variant. e.g. if variant A has a significantly higher AOV (Rs. 450) vis-à-vis the Control (Rs. 400), scale up variant A to more users.
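The Step 5 comparison can be sketched as a Welch two-sample t-statistic on per-order values. The order values below are illustrative placeholders, not data from a real experiment:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Illustrative per-order values in Rs. (variant A ~450 AOV, control ~400 AOV)
variant_a = [445, 455, 450, 448, 452, 460, 440, 451, 449, 450]
control   = [390, 410, 400, 395, 405, 398, 402, 396, 404, 400]

t = welch_t(variant_a, control)
# As a rule of thumb, |t| above ~2.1 (the critical value for ~18 degrees
# of freedom at the 95% confidence level) indicates a significant difference.
print(f"t = {t:.2f}")
```

In practice the experimentation tool (Firebase, VWO, Optimizely) computes this for you; the sketch only shows what "significantly higher" means mechanically.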
When do you declare that a variant is performing better?

We declare a winner when the A/B test has achieved statistical significance.

Statistical significance means that we have exposed the test to enough users and that, on scaling, the results will hold at the expected confidence level.
Confidence Level

While running an experiment we set a confidence level (usually 95%).

It means there is a 95% probability that the observed conclusion (e.g. version A winning by 10% over control) is not due to random error — i.e. on scaling, version A will continue to perform better.
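For conversion-style metrics (form submission %, cart-to-order ratio), significance at a 95% confidence level can be checked with a two-proportion z-test. This is a stdlib-only sketch; the conversion counts below are made-up illustrations:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Illustrative: 1,200 of 10,000 users converted on variant A vs 1,000 of 10,000 on control
z, p = two_proportion_z(1200, 10_000, 1000, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("significant at 95% confidence" if p < 0.05 else "not significant")
```

A p-value below 0.05 corresponds to the 95% confidence level the slide describes: the observed lift is unlikely to be random error.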
Hope this was helpful 🤝
