How to Avoid the Pitfalls of Declaring a Winning A/B Test a Loser

Magda Baciu
Data-led Ads & Conversion Optimization Expert

We don’t know how other marketers feel when setting up an A/B test, but for us, it’s one of those things we just can’t wait to do. 

Here’s a behind-the-scenes look at how we react when someone mentions anything A/B testing-related.

Yep, we’re that passionate about them! 

So you can understand our frustration when, during our CRO coaching classes, we notice marketers completely ignoring certain opportunities.

It’s precisely those missed opportunities we’d like to talk to you about today. Once you become aware of them, you’ll start banging out A/B test winners one after another. 

Here’s what we’re going to cover in our attempt to inch you closer to your performance targets:

  1. How to avoid mistaking an A/B test winner for a loser
  2. Why A/B test results can be hugely different on mobile vs. desktop
  3. A shortlist of user behavior metrics to look at before declaring a test a loser or inconclusive 

Why Most Specialists Mistake A/B Test Winners for Losers

At GrowthSavvy, our A/B tests are either winners or lessons. 

Everybody knows what to do when the data reveals a clear winner – pop the champagne and start daydreaming about that dream house in the Caribbean.

But what do you do when results seem irrelevant – at least to the untrained eye?

Well, that’s a case for going more in-depth…

Case in point: Imagine you’re running an A/B test across all devices. After collecting enough data, results look like this. 

Test Results All Devices

Quite interesting, wouldn’t you agree? Unfortunately, the uplift is almost too small to be considered relevant. Don’t be so quick to judge, though… The next batch of data may completely shift your perspective. 

Let’s take a look at performance across mobile and desktop. 

Test Results Desktop

According to the numbers above, Variation 2 brought a 59% conversion uplift compared to the Control. Does this look like an inconclusive test now? Are we going to let that improvement go to waste?

This is one of the most prominent examples of what a missed opportunity looks like. Analyzing A/B test performance is never about comparing a single data set. If you want to truly understand what's happening, you need to go deeper than what the A/B testing tools show you.

Going back to our example, let's try to understand why we initially failed to notice the desktop uplift. To do this, we need to keep digging. First, let's analyze mobile performance.

Test Results Mobile

Well-well, looky here! When it comes to mobile traffic, the Control performed significantly better than Variation 2. This makes it clear why the desktop improvement didn't show up in the all-devices results: the decrease on mobile completely cannibalized it.

(+59%) on desktop + (-38%) on mobile = inconclusive A/B test = missed opportunity = one less glass of champagne for you = bye-bye house in the Caribbean!  

All this when, in fact, the right way to interpret the results is this:

  • The variation we tested failed to improve the conversion rate on mobile
  • The desktop version of Variation 2 brought in a 59% uplift!
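
To make the math above concrete, here's a small Python sketch of this kind of segment-level sanity check. The visitor and conversion counts are hypothetical, not the actual data behind the screenshots, and the two-proportion z-test is just one common way to check whether a per-device difference is more than noise.

```python
# A small sketch with hypothetical counts (not the article's actual data):
# the aggregate uplift looks flat, but splitting the same traffic by device
# reveals a clear desktop winner that the mobile drop cancels out.

from math import erfc, sqrt

def uplift(control_conv, control_n, var_conv, var_n):
    """Relative change in conversion rate, variation vs. control."""
    cr_control = control_conv / control_n
    cr_var = var_conv / var_n
    return (cr_var - cr_control) / cr_control

def two_proportion_p_value(control_conv, control_n, var_conv, var_n):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (control_conv + var_conv) / (control_n + var_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / var_n))
    z = (var_conv / var_n - control_conv / control_n) / se
    return erfc(abs(z) / sqrt(2))

# (conversions, visitors) per variant -- hypothetical numbers
segments = {
    "desktop": {"control": (120, 4000), "variation": (190, 4000)},
    "mobile": {"control": (210, 6000), "variation": (130, 6000)},
    "all devices": {"control": (330, 10000), "variation": (320, 10000)},
}

for segment, counts in segments.items():
    c_conv, c_n = counts["control"]
    v_conv, v_n = counts["variation"]
    lift = uplift(c_conv, c_n, v_conv, v_n)
    p = two_proportion_p_value(c_conv, c_n, v_conv, v_n)
    print(f"{segment}: uplift {lift:+.0%}, p-value {p:.3f}")
```

On these made-up numbers, the all-devices comparison looks like noise, while each device-level split is clearly significant, which is exactly the trap from the example above.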

Why A/B Testing Variations Perform So Differently on Mobile vs. Desktop

Surely, you must be wondering what caused such a massive difference between mobile and desktop results. 

In our experience, there are three main reasons this can happen.

  • Prospects are at a different awareness and decision-making stage on mobile vs. desktop. As a result, what they need to see and read is different.
  • The A/B test was not QAed properly, and there were bugs specific to one device.
  • The A/B test was not optimized for one of the devices, so the variation added unnecessary friction, resulting in a loss of conversions.

How Can I See A/B Test Performance at the Device Level (And More)?

As always, we've got an ace up our sleeve that makes generating A/B test reports a cinch.

Let's assume you're running your A/B test in Google Optimize and you've integrated it with Google Analytics (which only takes a few clicks).

The next step would be to set up your very own custom report – a time- and energy-consuming endeavor. But what if we showed you a shortcut?

As you may very well know, you can import pre-built Custom Reports directly into your Google Analytics account. Lucky for you, it just so happens our toolbox includes a special post-A/B test analysis report we’re ready to share with you.      

Just drop your email address at the end of the article, and we’ll send you step-by-step instructions on how to add it to your Analytics account ASAP. 

Google Analytics Custom Reports

Once you’ve added our Custom Report and created your segments, you’re ready to explore the custom report tabs. You’ll be able to analyze results by device, user type, or country, which is our go-to shortlist for post-experiment analysis. 
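
If you prefer pulling the same device-level breakdown programmatically instead of (or alongside) the Custom Report, here's a minimal sketch using the Universal Analytics Reporting API v4 via the google-api-python-client library. The view ID, the key file path, and the ga:experimentVariant dimension name are our assumptions for illustration, so verify them against your own property and the Dimensions & Metrics Explorer before running it.

```python
# A minimal sketch (not our exact report) of pulling experiment results
# broken down by device from Universal Analytics via the Reporting API v4.
# VIEW_ID, the key file path, and the ga:experimentVariant dimension are
# assumptions -- check them against your own property before relying on this.

from google.oauth2 import service_account
from googleapiclient.discovery import build

VIEW_ID = "123456789"              # hypothetical GA view ID
KEY_FILE = "service-account.json"  # hypothetical service-account key file

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"]
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

response = analytics.reports().batchGet(
    body={
        "reportRequests": [
            {
                "viewId": VIEW_ID,
                "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
                # Sessions and goal completions per variant, split by device
                "metrics": [
                    {"expression": "ga:sessions"},
                    {"expression": "ga:goalCompletionsAll"},
                ],
                "dimensions": [
                    {"name": "ga:experimentVariant"},  # assumed experiment dimension
                    {"name": "ga:deviceCategory"},
                ],
            }
        ]
    }
).execute()

# Print the conversion rate for each variant/device combination
for row in response["reports"][0]["data"].get("rows", []):
    variant, device = row["dimensions"]
    sessions, conversions = (float(v) for v in row["metrics"][0]["values"])
    rate = conversions / sessions if sessions else 0.0
    print(f"variant {variant} on {device}: {rate:.2%} conversion rate")
```

From there you can feed the per-segment counts straight into the uplift and significance check from the earlier sketch.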

We’ve put together a quick video walkthrough of how to make the best use of it. Click the play button below to watch it now.

Final Thoughts

Post-A/B test analysis is more than looking at which variation brought in more conversions. Comparing results based on different user behavior metrics can reveal significant differences in the way prospects interact with your website. Insights like that can prove invaluable in designing your next split-test hypothesis.

Get Your A/B Test Cheatsheet Report via Email
