A while ago, something like a year and a half back, I launched a little tool called the statistical significance calculator. It worked just fine until I redesigned the site a couple of months ago, and then it stopped working.
Since I believe it's still a great tool for assessing your split testing campaign's results, I've decided to relaunch it. Or, more accurately, to make it work again.
If you know what this is about and don’t want to read any further then just click this link and test it for yourself: Statistical Significance Calculator.
However, if the thing that’s on your mind right now is more like “what the hell is statistical significance?!” then here’s your answer:
Let’s start with the basics. A/B testing, or split testing, is simply a method of running two versions of something (usually a marketing message or copy) alongside each other, recording the results, and then picking the better-performing version.
Now, this “pointing out the better version” part isn’t always that easy. Sometimes the raw numbers simply don’t convey any clear message, and that’s where statistical significance comes into play.
The deal with statistical significance
In a word, statistical significance is a math-thing.
Its purpose is to tell us whether our split testing results are significant or not. In plain English, it tells us whether it’s a good idea to base any decision about the test subject on those results. In other words, statistical significance tells us the chance that our test results are purely accidental.
Let me give you an example. Let’s say you have a split test around a given sales message. Version (a) managed to generate 12 conversions out of 100 views, while version (b) got 15 conversions out of 102 views. The question: Which is the better version?
Our simple human mind tells us that it’s version (b) – it has a nearly 15% conversion rate (14.7%, to be exact) while version (a) has only 12%. However, math tells us that the results are not statistically significant (try them out in my calculator), and therefore can’t be taken seriously.
In the example above, the test sample is simply too small to be able to determine anything accurately (100 views and 102 views) in relation to the number of actions/conversions (12 and 15).
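To make the math concrete, here’s a minimal sketch of the kind of check a significance calculator can run on the example above. It uses a standard two-proportion z-test at the usual 95% confidence level – an assumption on my part, since the calculator itself doesn’t publish its exact method:

```python
import math

def two_proportion_z_test(conv_a, views_a, conv_b, views_b):
    """Is the difference between two conversion rates bigger than
    what chance alone would explain?"""
    p_a = conv_a / views_a
    p_b = conv_b / views_b
    # Pooled conversion rate across both versions (the null hypothesis
    # assumes both versions share one true rate)
    p_pool = (conv_a + conv_b) / (views_a + views_b)
    # Standard error of the difference under that assumption
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the normal CDF: the chance of seeing a
    # difference this large if the two versions really perform the same
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# The example from the post: 12/100 views vs. 15/102 views
z, p = two_proportion_z_test(12, 100, 15, 102)
print(f"z = {z:.2f}, p-value = {p:.2f}")
```

The p-value here comes out far above the conventional 0.05 cutoff, which is exactly why the 12-vs-15-conversion difference can’t be trusted: at these sample sizes, a gap that small is entirely consistent with pure luck.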
My statistical significance calculator has been designed to point out every such situation, so you can make better decisions based on your split testing results.
Well, that’s it for the theory part. Here’s the link again: http://newinternetorder.com/statistical-significance-calculator/. The exact how-to guide is also there.
By the way, do you split test various elements of your site a lot?