UX Crash Course: Stupid Question 27 of 30


A/B testing is one of those seemingly simple things that is only simple when you know how it works and what it is good for.
So today we will answer:

“What should I A/B test?”

Just starting the Crash Course? Start here!

****

An A/B test is when you design more than one version of something, launch them all at the same time, and split your users between them to see which version gets more clicks, sells more, or whatever.

When you find the best version, you use only that one permanently.
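
If you are curious what “splitting your users” actually looks like, here is a minimal sketch in Python. The variant names and the hashing approach are illustrative assumptions, not any specific tool’s API; real A/B testing services handle this part for you, but the idea is the same: every visitor gets bucketed into one version and stays there for the whole test.

```python
import hashlib

VARIANTS = ["A", "B"]  # hypothetical versions of the page being tested

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into one variant.

    Hashing the user ID (instead of rolling the dice on every visit)
    keeps each user in the same variant for the whole test, so nobody
    sees version A on Monday and version B on Tuesday.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42"))  # always the same answer for this user
```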

So what should you test?!

****

The Stupid Answer:

“You can A/B test anything you want!”

This is not technically wrong, but 99% of the things you can test are not worth testing.

****

The Real Answer:

Focus your A/B testing on key business metrics. Conversion, sales, engagement, pageviews, landing page bounce rate… that sort of thing.

And test the questions that are hard to answer with user feedback and analytics alone.

Psychological things.

If you are a store selling products, then you should concentrate your A/B tests on stuff like the product page and checkout forms, and maybe the product search results. 

Stuff that leads to purchases.

If you are generating sales leads, then you should test things that get customers past the landing page, like headline copywriting, product photos, button visibility, form designs, and that sort of thing.

Stuff that helps get more leads.

If you’re a farm-animal-related porn site, then you will want to test things like… well, you get the point.

****

Why this isn’t a stupid question:
You don’t test design elements. You test a hypothesis.

I see A/B tests all the time that test random guesses, or that just settle opinions because two non-UX people can’t agree on which option they like better.

They might test several button labels like:

Version A: “Click here!”
Version B: “Start now!”
Version C: “Hot pants!”

Even when the test is over, nobody knows why the winner was better; they just know that it was.

Instead, you should develop a hypothesis about why people are clicking (or not clicking) that button, and test to see whether your hypothesis is correct.

Hypothesis: “Users will click if they know that the next page is a game.”

Version A: “Start doing it now!”
Version B: “Play the game now!”

“Start doing it now!” has no reference to a game at all, so if the hypothesis is correct it should lose.

“Play the game now” very clearly states that a game will begin when the user clicks, so it should win.

And if the hypothesis isn’t right, you will get some other result.
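
If you want to check that the winner didn’t just get lucky, the usual tool is a two-proportion z-test on the click rates. The sketch below is a standard statistics recipe, not something specific to this course, and the click counts are invented for illustration.

```python
from math import sqrt, erfc

def ab_significance(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is B's click rate really different from A's?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return p_a, p_b, p_value

# Invented numbers: A = "Start doing it now!", B = "Play the game now!"
p_a, p_b, p = ab_significance(120, 5000, 168, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p:.3f}")
# A: 2.4%  B: 3.4%  p-value: 0.004 -- small enough to call it a real win
```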

****

Control your results.

Try to keep your A/B options as similar to each other as possible, so you know what change is causing the result.

If we did the same button test with these options…

Version A: “Start”
Version B: “Play the amazing game of the week now!”

… the length of the options might be a factor, not just the fact that we’re mentioning a game.

If you want to try some different lengths too, just include test options that let you compare length separately from the game wording, like this:

Version A: “Start!”
Version B: “Play!”
Version C: “Start now!”
Version D: “Play now!”
Version E: “Start doing it now!”
Version F: “Play the game now!”

If the original hypothesis is correct, all of the versions that mention playing the game should beat all of the versions that don’t. If you had a second hypothesis that shorter button labels are better, the shorter labels should beat the longer labels too.

This test will take longer to run because there are more options, but you’ll learn two things at once.

(The best A/B tests — and the best UX designers — only test one thing at a time. Don’t get too fancy.)
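
Here is a sketch of how you might read the results of that six-button test, grouping the versions by the attribute each hypothesis cares about. All the click numbers are invented for illustration.

```python
# Each variant is tagged with the two attributes the hypotheses care
# about: does the label mention the game, and how many words is it?
variants = [
    # (label,               mentions_game, words, clicks, views)
    ("Start!",              False, 1,  90, 3000),
    ("Play!",               True,  1, 126, 3000),
    ("Start now!",          False, 2,  84, 3000),
    ("Play now!",           True,  2, 117, 3000),
    ("Start doing it now!", False, 4,  72, 3000),
    ("Play the game now!",  True,  4, 108, 3000),
]

def click_rate(group):
    """Combined click rate for a group of variants."""
    return sum(v[3] for v in group) / sum(v[4] for v in group)

game    = [v for v in variants if v[1]]
no_game = [v for v in variants if not v[1]]
short   = [v for v in variants if v[2] <= 2]
longer  = [v for v in variants if v[2] > 2]

print(f"mentions the game: {click_rate(game):.1%} vs. not: {click_rate(no_game):.1%}")
print(f"short labels:      {click_rate(short):.1%} vs. long: {click_rate(longer):.1%}")
# If both hypotheses are right, the game-mentioning group and the
# short-label group should each come out ahead.
```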

****

Tomorrow we will answer: “How much does an A/B test cost?”