Greg Kogan answered:
Hey, I'm a conversion optimization consultant so I can offer some help here.
Whether a conversion rate is "good" or not is a relative measure. If you run a test and lift it from 26% to 35%, that's good! If you go from 26% to 10%, then you know you could be doing better. Comparing your conversion rate to industry averages rarely tells you anything meaningful or actionable, so the best thing you can do is keep testing and compare against your own previous baseline.
The confidence level is the likelihood that the result you detected is real and not just random chance. For example, if you flip a coin 10 times it will likely land on one side more than the other. If you stopped testing after 10 flips, you might conclude that one side has a higher probability than the other... but we both know that's wrong. The real result (50/50) would only begin to emerge after 20 or more flips, and even then you would not have 100% confidence.
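To make that concrete, here is a tiny simulation of my own (not something from the answer itself): it flips a fair coin at a few sample sizes, and you can see the split swing away from 50/50 when the sample is small and settle down only as the number of flips grows.

```python
# Flip a fair coin at several sample sizes; small samples look "unfair",
# larger samples settle toward the true 50/50 split.
import random

random.seed(1)  # fixed seed so the example is reproducible
for flips in (10, 100, 1000, 10000):
    tails = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>5} flips: {tails / flips:.1%} tails")
```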
So you don't want to stop flipping (i.e., testing) too soon, because you might draw the wrong conclusion ("the coin landed on tails 6 out of 10 times, therefore tails beats heads by 20%"). You also want to know when to stop and what the smallest meaningful difference is ("the coin landed on tails 49 out of 100 times, and that 2% difference is not significant, so we can confidently say the probability of landing on tails is the same as landing on heads"). The confidence level tells you when you've reached that point.
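Here is a minimal sketch of how a significance test formalizes that, assuming SciPy is available. The "confidence" printed is the informal one A/B-testing tools tend to report (one minus the p-value), and both coin examples from above come out nowhere near significant.

```python
# Two-sided binomial test of each coin example against a fair coin (p = 0.5).
from scipy.stats import binomtest

for tails, flips in [(6, 10), (49, 100)]:
    result = binomtest(tails, flips, p=0.5, alternative="two-sided")
    confidence = (1 - result.pvalue) * 100
    print(f"{tails}/{flips} tails: p-value {result.pvalue:.2f} "
          f"(only ~{confidence:.0f}% confidence the coin is biased)")
```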
Before you start any A/B test, you need to know how many "flips" (sample size, i.e., visitors) are needed to reach a result that is likely to be true, and what the minimum significant change is (is a 2% increase significant, or is that just random fluctuation?). Here is a handy calculator for working this out before you start your test: http://www.evanmiller.org/ab-testing/sample-size.html
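For reference, the standard two-proportion sample-size estimate behind calculators like that one can be sketched in a few lines. The baseline (26%) and target (30%) rates below are just illustrative numbers, not figures from the question, and the result may differ a bit from Evan Miller's calculator depending on the exact formula it uses.

```python
# Rough estimate of visitors needed per variation to detect a lift
# from p1 to p2 at a given significance level and statistical power.
from scipy.stats import norm

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect a change from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power: chance of detecting a real effect
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(round(numerator / (p2 - p1) ** 2))

print(sample_size_per_variation(0.26, 0.30))  # roughly 2,000 visitors in each variation
```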
If you (or anyone reading) would like some help in setting up and running A/B tests to increase conversions, get in touch!