Getting a lift from your testing efforts can be satisfying and rewarding.
Not to mention that increases in conversion have changed the fortunes of entire enterprises and the careers of the marketers who advocated for testing.
But is a lift truly a lift, or is it simply a false positive resulting from natural variation?
In this MarketingExperiments Blog post, I wanted to share an excellent example of using A/A testing (and yes, you are reading that correctly) from Emily Emmer, Senior Interactive Marketing Manager, Extra Storage Space, presented at Web Optimization Summit 2014.
What does variance in testing look like?
Here’s the example Emily shared with the audience to help put variance in context using a control and treatment of Extra Space Storage’s homepage.
The two pages are absolutely identical, yet they showed a 15% difference in conversion.
According to Emily, that’s when you need to start investigating how variance is potentially impacting your testing efforts, because there should be little to no difference in performance between identical pages.
“A 15% lift is more concerning,” Emily explained, “because there should be no difference with the same experience.”
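To see how an apparent lift like that can emerge from pure noise, here is a minimal simulation sketch. It is not from Emily's presentation; the 5% base conversion rate, 1,000 visitors per arm and 500 repetitions are illustrative assumptions:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate_aa_test(true_rate, visitors_per_arm):
    """Send two identical 'pages' the same traffic and compare conversion rates."""
    conv_a = sum(random.random() < true_rate for _ in range(visitors_per_arm))
    conv_b = sum(random.random() < true_rate for _ in range(visitors_per_arm))
    rate_a = conv_a / visitors_per_arm
    rate_b = conv_b / visitors_per_arm
    relative_diff = abs(rate_a - rate_b) / rate_a if rate_a else 0.0
    return rate_a, rate_b, relative_diff

# Run many A/A tests and count how often pure noise looks like a 15%+ "lift"
false_lifts = sum(
    simulate_aa_test(true_rate=0.05, visitors_per_arm=1000)[2] >= 0.15
    for _ in range(500)
)
print(f"{false_lifts} of 500 identical-page tests showed a 15%+ difference")
```

Both arms use the same `true_rate`, so any difference the loop reports is, by construction, noise.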
A/A testing is not A/B testing
Emily also noted a key distinction between A/A and A/B testing that is really important to grasp:
- A/B testing – Can help you measure the difference between two pages in terms of absolute and relative differences in conversion rate.
- A/A testing – Can help you measure the natural variability (noise) of a website by testing an identical experience.
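As a quick illustration of the A/B side of that distinction, here is a small sketch of absolute versus relative difference. The 4.0% and 4.6% conversion rates are made-up numbers, chosen so the relative lift works out to 15%:

```python
def compare(rate_control, rate_treatment):
    """Return the absolute and relative difference between two conversion rates."""
    absolute = rate_treatment - rate_control  # percentage-point gap
    relative = absolute / rate_control        # lift as a fraction of control
    return absolute, relative

# Hypothetical rates: control converts at 4.0%, treatment at 4.6%
abs_diff, rel_diff = compare(0.040, 0.046)
print(f"absolute: {abs_diff:.3f}, relative: {rel_diff:.0%}")
```

An A/A test runs the same arithmetic on two identical pages, where the "lift" it reports is the site's natural noise.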
She further explained that variance testing can help you understand:
- How long to run tests
- How much traffic is needed for each treatment
- How many treatments are necessary
- How big a lift must be to be called a “winner”
- How many conversions are needed per treatment
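The first two questions in that list come down to sample-size arithmetic. One common back-of-the-envelope approach, and not something Emily presented, is the two-proportion sample-size formula; the 5% base rate and 15% minimum detectable lift below are assumed values:

```python
import math

def visitors_per_treatment(base_rate, min_detectable_lift,
                           z_alpha=1.96, z_beta=0.84):
    """Rough per-treatment sample size for a two-proportion test.

    base_rate: control conversion rate (e.g. 0.05 for 5%)
    min_detectable_lift: smallest relative lift worth detecting (e.g. 0.15)
    z_alpha, z_beta: z-scores for ~95% confidence and ~80% power
    """
    p1 = base_rate
    p2 = base_rate * (1 + min_detectable_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = visitors_per_treatment(base_rate=0.05, min_detectable_lift=0.15)
print(f"~{n:,} visitors needed per treatment")
```

The formula makes the trade-off concrete: the smaller the lift you want to distinguish from noise, the more visitors each treatment needs.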
Traffic splits and identical experiences are key to discovering a natural range of variance
Above is a summary of Emily’s string of A/A tests.
Over the course of five experiments, the team decreased the volume of traffic and increased the number of treatments week over week.
Here is a deeper breakdown of how the traffic was reduced as the number of treatments increased:
- Experiment 1 – 35% traffic and two treatments
- Experiment 2 – 18% traffic and four treatments
- Experiment 3 – 12% traffic and six treatments
- Experiment 4 – 7% traffic and six treatments
- Experiment 5 – 3% traffic and six treatments
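One way to read that breakdown is as the share of total site traffic each individual treatment receives, which shrinks quickly as treatments are added. A small sketch using the numbers above, assuming traffic is split evenly across treatments:

```python
# (experiment number, fraction of site traffic in the test, number of treatments)
experiments = [
    (1, 0.35, 2),
    (2, 0.18, 4),
    (3, 0.12, 6),
    (4, 0.07, 6),
    (5, 0.03, 6),
]

# Assuming an even split, each treatment's share of total site traffic:
per_treatment = {exp: traffic / arms for exp, traffic, arms in experiments}
for exp, share in per_treatment.items():
    print(f"Experiment {exp}: {share:.1%} of site traffic per treatment")
```

By Experiment 5, each treatment is seeing only half a percent of site traffic, which is why the string of tests was useful for finding where variance becomes unmanageable.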
Emily’s team discovered variance thresholds that would later help them establish ground rules for Extra Space Storage’s testing program.
“This data is where I was able to take information and extrapolate it into rules that my organization could use,” Emily explained.
I also wanted to include this illustration that Emily shared with the audience for anyone who is new to using A/A testing and in need of some idea of where to start.
Emily revealed that ultimately, the team’s testing efforts in the past year have resulted in a 45% increase in conversion.
Using variance testing has helped them gain a sense of where natural noise ends and real conversion lift begins.
Or as Emily simply put it, “Variance testing helps provide consensus and guidance with your testing plan moving forward.”
You may also like
Email Summit 2015 Call for Speakers [Have interesting insights to share like Emily did? Apply to be an Email Summit speaker.]
Web Optimization: 5 steps to create a small testing program [More from the blogs]
Analytics: How metrics can help your inner marketing detective [More from the blogs]
A/B Testing: Product page testing increases conversion 78% [More from the blogs]
A/B Testing: The value of choice in decision-making [More from the blogs]