Analytics is about identifying correlations and trends in your data, while optimization is all about cause and effect. However, the intersection of analytics and optimization can work wonders for your business. So let's see how to better leverage analytics to drive your testing results, with real-life examples of success.
Define your success metric
Your analytics data will help you define your experiment's success metric and the success measures of your program as a whole. The key here is having a singular metric. A test shouldn't be expected to move six different needles simultaneously. Your test needs to clearly state the sole metric that determines success. Everything else is just a side effect of the test.
So this means you can't run an experiment where you expect to increase purchases, improve email signups, and get more customers to create an account all at once. It's just like driving: you can't optimize for speed and for gas mileage at the same time. You've got to pick one needle.
Understand your benchmarks
Analytics can help you understand your benchmarks. It's common to embark on a testing program with an aggregated view of your KPIs, but unfortunately, it's called the "flaw of averages" for a reason. So when you're using benchmark data in your testing program, you need to be clear on which benchmarks you're using.
Ask yourself the following questions:
What time period are you looking at?
What segment are you looking at?
Did you test during a major campaign or a huge promotional period?
Are you looking at sufficient data to ensure that you have an accurate rate?
Are the benchmarks that you're optimizing against accurate?
And to answer all of these, you need to conduct an extensive analysis.
Control the uncontrollable
Analytics will help you better control the uncontrollable parts of testing. When you're testing, your goal is to control the non-test variables. But the problem is that you're not truly controlling the time period of the test.
So let's say you're testing a new site feature. If you run a test on the biggest shopping holiday of the year, you might have a huge sample, but the behavior that you might see on that day is not necessarily going to be representative of what you would see on a random day.
In other words, statistical significance is not enough for a test. Your goal is to have a representative sample. A firm could run a test for a couple of days and get what might be considered a significant result, but it's not going to be a representative sample of all of their users.
What you see in your results may be due to the time period rather than the test variables. You can control this by running tests for an appropriate period based on the seasonality of your business.
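To make that concrete, here's a quick sketch with made-up numbers showing how a short test can clear the statistical-significance bar while still drawing from an unrepresentative slice of your traffic:

```python
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical two-day test: the result is "significant" (p < 0.05),
# yet the sample only covers two days of the week.
z, p = two_proportion_z(conv_a=120, n_a=2000, conv_b=160, n_b=2000)
print(f"p-value: {p:.3f}")
```

The math says the variation won, but nothing in that calculation knows whether those two days look anything like the rest of your month.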
You can see the seasonality even in the most basic reports if you look at hourly data across a couple of days. And it gets more dramatic when you start looking at it in weeks and months.
For example, a car dealership might show a high traffic rate on weekends and much lower traffic during the weekdays. So you're going to have a natural pattern that your customers fall into based on the industry that you're in.
You may even have specific yearly seasonality. The retail industry can be a great example of this.
So you have to understand when customers are coming to your site. You don't want to run a test on a Monday and discover that the results are not representative of the other days.
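As an illustration, a few lines of Python (with invented daily visit counts) can surface that weekly pattern from a simple traffic export:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical visit log: one entry per day. In practice this would
# come from your analytics export.
daily_visits = [900, 850, 870, 880, 950, 2100, 2300] * 4  # four weeks
visits = {date(2023, 5, 1) + timedelta(days=i): v
          for i, v in enumerate(daily_visits)}

# Aggregate by weekday to expose the weekly seasonality.
by_weekday = Counter()
for d, v in visits.items():
    by_weekday[d.strftime("%A")] += v

for day, total in sorted(by_weekday.items(), key=lambda kv: -kv[1]):
    print(day, total)
```

In this made-up example, weekends dwarf weekdays (like the car dealership above), so a Monday-only test would sample a very different audience than a Saturday one.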
Lend credibility to your testing program
Have you ever doubted your testing tool? If so, one way to address this is to run a series of A/A tests.
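If you want to see what a series of A/A tests should look like on a perfectly healthy platform, here's a rough simulation (all numbers are hypothetical): at a 95% confidence threshold, roughly 5% of A/A tests will still declare a "winner" purely by chance.

```python
import random
from statistics import NormalDist

def simulate_aa_tests(n_tests=300, n_users=2000, rate=0.05, seed=7):
    """Run simulated A/A tests (both arms share the same true conversion
    rate) and count how often a naive z-test declares a 'winner'."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_tests):
        a = sum(rng.random() < rate for _ in range(n_users))
        b = sum(rng.random() < rate for _ in range(n_users))
        p_pool = (a + b) / (2 * n_users)
        se = (p_pool * (1 - p_pool) * (2 / n_users)) ** 0.5
        z = abs(a - b) / n_users / se if se else 0.0
        if 2 * (1 - NormalDist().cdf(z)) < 0.05:
            false_positives += 1
    return false_positives / n_tests

print(simulate_aa_tests())  # roughly 0.05, even with nothing broken
```

So a handful of "significant" A/A results isn't proof your tool is broken; it's what the math predicts.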
Now, you don't actually need a broken testing platform to get confusing results. When you run multiple tests at once, you're going to run into problems with overlapping test groups.
So let's say you're running a test on the home page as well as on the pricing page. Now, when a lead converts, which test gets the credit? Did one test influence the other?
There are different ways of dealing with this. You can keep the groups separate so that a user is never exposed to multiple tests, or you can understand how your data is being captured and analyze the overlap to know where each conversion happened. Either way, you won't jump to the wrong conclusions based on overlaps, and you can give credit to the right test.
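One common way to keep the groups separate is deterministic bucketing: hash each user ID so that every user lands in exactly one experiment. Here's a minimal sketch (the experiment names are made up):

```python
import hashlib

# Hypothetical list of concurrently running experiments.
EXPERIMENTS = ["homepage_hero", "pricing_layout"]

def assign_experiment(user_id: str) -> str:
    """Hash the user ID to a stable bucket so a given user only ever
    sees one experiment, avoiding overlapping test groups."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(EXPERIMENTS)
    return EXPERIMENTS[bucket]

print(assign_experiment("user-1042"))  # same ID always maps to the same test
```

Because the assignment is a pure function of the user ID, it's stable across sessions and devices without storing any extra state.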
Dive deeper into results
Analytics can help you decode your test results. But this is not an excuse to forget your success metric and judge the test by the first positive sign you can find. Digging into the results and looking at more than just the winner can help you understand why things might have performed differently. So, if your sixth sense is tingling and something doesn't appear right about the test results, dig deeper to understand what might be going on.
Analytics and beyond
Use your analytics data to find accurate benchmarks to measure against.
Use it to help you find testing opportunities, estimate the impact of your testing, decide your test priority, and understand your test results better.
Use it to upgrade your tests and take them beyond just the results and reports!
We hope this gives you a fair idea of the multiple ways you can use analytics to help your optimization efforts.
Happy Optimizing! :)
Conversion Capsule from ZohoPageSense aims to bring you conversion rate optimization best practices from CRO experts around the world. In this blog, we've converted Michele Kiss's session from "The Optimization Summit" we hosted into digestible takeaways for you.
Michele Kiss is a self-confessed analytics geek and a pioneer in the digital analytics world. She is a Senior Partner at Analytics Demystified and is the winner of the Digital Analytics Association “Rising Star” award and “Practitioner of the Year” award. She also writes for publications like Marketing Profs and Website Magazine and is a regular contributor to industry podcasts.