This post is the first in a series all about CRO. Here you’ll find a breakdown of what CRO is, how to implement it, what to avoid (from experience) and a few suggestions to get you going. By the end of the series, you should be an expert on CRO.
Why you need to know about CRO
CRO is one of the best ways to increase your return from hard-earned marketing dollars, but it’s often overlooked and regularly implemented badly – which does more damage than you would believe.
A significant part of optimising performance for any company centres on Conversion Rate Optimisation, aka CRO. After testing and developing a range of approaches to CRO (using a variety of platforms) over the years, for all kinds of businesses on both websites and applications, I learned how to fuck things up. In fact, the only way to truly learn is to fuck it up.
These days there are a hell of a lot more wins than fuck-ups, so I’m here to (hopefully) impart the wisdom I’ve acquired over the years.
What is CRO?
Conversion rate optimisation (CRO) is the process of increasing the percentage of your website visitors/users who take a desired action on your website, such as making a purchase or registering an account.
The conversion rate can be calculated from website session to a desired event, or within a product between two desired actions (e.g. step 1 to step 2 in the purchase funnel).
For example, if you get 100 website visitors and you make 5 sales, your conversion rate to sale is 5%.
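That calculation is simple enough to sketch in a few lines of Python (a toy example, not production code):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate as a percentage of visitors who converted."""
    if visitors == 0:
        return 0.0
    return 100 * conversions / visitors

# The example from the text: 5 sales from 100 visitors.
print(conversion_rate(5, 100))  # → 5.0
```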
The optimisation process consists of many tests aimed at producing incremental improvements in the conversion rates that matter. The success of a CRO programme is a function of the number of completed tests: the more tests you finish, the more you learn, and the more you can improve the user experience on the website.
But before we get ahead of ourselves (I will explain what a CRO test is later) let’s jump in at the beginning.
The history of CRO
Following the dot com bubble there was an increase in competition online, as e-commerce sites tried to identify ways to get more from less (i.e. make more money from the traffic they already had on their sites).
First came website analytics platforms, enabling marketers to understand more about what users were doing on their websites. More importantly, they showed marketers where things were going wrong.
Then came tools that enabled marketers to experiment with design, copy and content. CRO went into overdrive as statistics and software collided, producing a range of brilliant CRO testing platforms that let marketing teams implement tests without development support.
Marketers adopted the test-and-learn approach pioneered by direct marketing, and a range of standard testing methods emerged, such as AB and multivariate tests.
This kind of CRO testing was originally the domain of the big boys, but over the years it has been truly democratised. Now any small business willing to put the effort into running its own testing programme can do so.
The rise of ‘growth hacking’ has also increased awareness of, and interest in, testing as a tactic for growth and optimisation.
However, just having the tools and running tests does not mean you’re doing CRO. It’s very easy to get the methodology wrong. Generally the reason is a lack of understanding of what the test results reveal, where and why tests are being run, and in what order.
An understanding of the basics of statistical significance, for example, is critical to be able to gauge and understand test results.
What statistics do I need to know for CRO?
Statistics is a great topic, if you love maths. But for most it’s a bit of a minefield. In order to run a good testing framework you will need to (at least) appreciate how statistics are used in CRO. I will try to make this simple.
Let’s assume you run a test and in that test, Variant B outperforms Variant A. Great.
Then you re-run the test and Variant A outperforms Variant B. Not so great.
Why has this happened? Well, there is natural variation in website visitors’ behaviour. There are many reasons why people’s behaviour will change on a website. We need to ensure we can take this into account when running CRO tests – otherwise we can de-optimise a website.
This is where statistical significance comes into play.
A null hypothesis states that there is no relationship between the test variation and any change in performance.
In layman’s terms, a test result is statistically significant when it would be very unlikely to occur if the null hypothesis were true.
The significance level is the probability of the test rejecting the null hypothesis when it is, in fact, true.
The p-value is the probability of getting a result at least as extreme as the one observed, assuming the null hypothesis is true.
In short (and crudely), the p-value tells you how likely it is that your result is just noise.
P-values range from 0 to 1, and a simple/crude way to read p = 0.05 is that there’s roughly a 5% chance the result is a fluke – meaning a 95% chance it’s real.
Therefore, the smaller the p-value, the more likely the result reflects a genuine difference rather than an anomaly or natural variation.
So long as you understand small p-values are good in testing and large p-values are bad, you’re onto a winner. You won’t have to calculate these as I will show you products that do this for you.
So the basic rules in CRO testing are:
- The bigger the performance difference between variants, the smaller the p-value (for a given sample size)
- The bigger the sample size, the easier it is to reach statistical significance, because a smaller performance difference can be detected
- Plan to use p-values from 0.05 down to 0.01 in your testing (the lower the better)
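If you’re curious how testing tools arrive at a p-value under the hood, here’s a minimal sketch of one common approach, a pooled two-proportion z-test (the function name and example figures are my own, not from any particular platform):

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical test: 5% vs 7% conversion on 2,000 visitors per variant.
print(two_proportion_p_value(100, 2000, 140, 2000))
```

With those numbers the p-value comes out below 0.01, so the uplift would be unlikely to be down to natural variation alone.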
I won’t go any further into statistics here (standard deviation, z-values, Bayesian vs frequentist approaches, sample sizes, test interference and noise, etc.), but if you want to talk about statistics and probability, find me on Twitter. I will write another post on this more specifically later.
Phew… So that was hard, but I hope you’re still with me. That’s the hardest bit – it gets way more fun from here.
What types of CRO are there?
There are a range of available tests to consider when thinking about CRO. I will run over the major ones.
AB testing is when you take your control (typically the current version of the website/product/page) and test against a new treatment/variation (this is your challenger variation, i.e. the one you think is going to outperform the control). Simply put, it’s A vs B.
You then split your test sample between the two variations (typically 50:50) and measure the performance difference.
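One common way to implement that split is deterministic bucketing: hash each visitor’s ID so the same person always sees the same variant, while the buckets stay roughly 50:50 overall. A minimal sketch (the function and IDs here are made up for illustration):

```python
import hashlib

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing the user ID means a given visitor is always assigned the
    same variant, while the overall split stays close to `split`.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    return "A" if bucket < split else "B"

# The same user always lands in the same variant.
print(assign_variant("user-42"), assign_variant("user-42"))
```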
The variant with the highest performance is the winner. However, whether you can rely on the result depends on statistical significance: the lower the p-value, the more likely the result is correct and repeatable.
Multivariate tests are not dissimilar to AB tests, but they have multiple variations of multiple elements – not just 2.
These are typically used when you are looking for the best-performing combination of elements. The more element variations in the test, the more permutations possible.
[# of Variations on Element A] X [# of Variations on Element B] X … = [Total # of Variations]
Let’s assume you have 3 variations on each of 3 different elements on a page. This means the total number of possible test variations is 3 x 3 x 3 = 27.
These tests are great if you’re trying to find the best possible configuration of elements on a page and don’t want to test each one individually.
However, be warned – you need a lot of traffic to get to statistical significance on these tests (or a HUGE performance difference).
Sequential Benchmark Tests
A sequential benchmark test (SBT) is when you run the control variant first, and then the challenger variant next.
This means the variants are not run at the same time, which is not ideal and builds risk into the test. Website performance can change for many reasons – a recent press article, the day of the week, or just the weather (an SBT for an umbrella retailer would be very sensitive to the weather). Your challenger variant can tank simply because it ran during a rainy week.
Therefore, these are not a preferred route, but there are times when they are required – for example, when a huge site-wide change needs to happen fast and the development team is more interested in deploying the feature than testing it before it hits production.
My only suggestion here is to reduce your tolerance for error by lowering the p-value required to reach statistical significance. I would suggest p = 0.01, meaning (crudely) a 1% chance of being wrong.
They are a last resort, use with caution… you have been warned.
What technology do I need to do CRO?
Back in the day you would have had to develop your own testing platform which was beyond most businesses. However, these days there are a range of amazing products you can use at various price points all with their own pros/cons.
Tools to use for CRO
All you have to do is insert the snippet provided by the platform you choose into your website’s codebase as directed. Then, hey presto, you can create and manage tests from a simple dashboard. The product will randomly serve the variations (typically via JavaScript/CSS overrides) in the proportions you wish and do all the calculations for you. Boom, it’s that easy!
Then, all you have to do is to make sure you’re deploying tests and keeping an eye on the statistical significance!
In our next post in this series, find out how to run a CRO testing plan, including:
- How to decide which tests to run and why
- Calculating your sample size
- Prioritising CRO tests
- How to deploy your CRO test