12 A/B Split Testing Mistakes I See Businesses Make All The Time


A/B testing is fun. With so many easy-to-use tools around, anyone can (and should) do it. However, there's more to it than just setting up a test. Tons of companies are wasting their time and money by making these 12 mistakes.

Here are the top mistakes I see again and again. Are you guilty of any of them? Read on and find out.

#1: A/B tests are called early

Statistical significance is what tells you whether version A is actually better than version B—provided the sample size is large enough. 50% statistical significance is a coin toss. If you're calling tests at 50%, you should change your profession. And no, 75% statistical confidence is not good enough either.

Any seasoned tester has had plenty of experiences where a "winning" variation at 80% confidence ends up losing badly once it's given a chance (read: more traffic).

What about 90%? Come on, that’s pretty good!

Nope. Not good enough. You're performing a science experiment here. Yes, you want it to be true. You want that 90% to win, but getting to the truth matters more than having a "winner."
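To make those confidence numbers concrete, here's a minimal sketch of how a testing tool arrives at them, using a standard two-proportion z-test with only Python's standard library. The conversion counts are hypothetical, purely for illustration:

```python
import math

def ab_test_confidence(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that A and B truly differ (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se                              # z-score of the observed lift
    # Normal CDF via math.erf; two-sided p-value, then confidence = 1 - p
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return 1 - p_value                                # e.g. 0.95 means 95% confidence

# Hypothetical test: 100/2000 conversions on A vs. 120/2000 on B.
# A 20% relative lift, yet the confidence lands in the low 80s—
# exactly the "looks like a winner, isn't proven" zone the article warns about.
conf = ab_test_confidence(100, 2000, 120, 2000)
```

Note how the same conversion rates at ten times the traffic push confidence well past 95%: it's the sample size, not the observed lift, that separates a coin toss from a real result.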


As an optimizer, your job is to figure out the truth. You have to …

Article Curated From…: Conversion XL