
Now that you know what CRO is and how to implement it – as well as how to run a CRO testing plan and deploy your test – you can learn how to measure your results and supercharge your CRO.

Recap

Part 1 of our CRO series gives you a breakdown of what CRO is, how to implement it, what to avoid (from experience) and a few suggestions to get you going.

Part 2 explains how to run a CRO testing plan, including how to decide which tests to run, how to calculate your sample size, prioritisation and deploying your CRO test.

Now it’s time to understand how to measure your results and supercharge your CRO.

Measure

Once the test is running there is not an enormous amount you can do, other than plan and prepare new tests.

However, you will obviously be checking the performance of the test as it goes. Apart from general intrigue, you will also need to check when the test is reaching completion.

How do you know when a CRO test is finished? Good question.

There is a simple answer: when you have run through the sample size. Reaching statistical significance early in your test does not mean you have a winner – you might have observed an effect that was pure chance, a false positive. There is a massive risk when you try to push too many tests too quickly. There are two things that might happen (illustrated with a quick simulation after this list):

1. You might see a false winner and deploy a treatment that was not really the winner.
2. You might miss the real effect of the treatment, which could mean the other variation was the true winner.
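
To make that risk concrete, here is a minimal Python simulation of the ‘peeking’ problem, using illustrative traffic and conversion numbers (not our real data). It runs A/A tests – both variations identical, so any ‘winner’ is a false positive – and checks significance every day:

```python
# Monte Carlo sketch of the "peeking" problem: A/A tests checked daily.
# All traffic figures below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

BASELINE = 0.05        # assumed conversion rate for both variations
DAILY_VISITORS = 500   # assumed visitors per variation per day
DAYS = 28              # planned test duration
SIMULATIONS = 2000

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test p-value."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

early_winners = 0
for _ in range(SIMULATIONS):
    conv_a = conv_b = 0
    for day in range(1, DAYS + 1):
        conv_a += rng.binomial(DAILY_VISITORS, BASELINE)
        conv_b += rng.binomial(DAILY_VISITORS, BASELINE)
        n = day * DAILY_VISITORS
        if p_value(conv_a, n, conv_b, n) < 0.05:
            early_winners += 1  # "significant" at some peek, despite no real effect
            break

print(f"False positives when peeking daily: {early_winners / SIMULATIONS:.1%}")
```

Even though there is no real difference between the variations, a daily peek flags a ‘significant’ result in far more than the nominal 5% of runs – exactly the false winners described above.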

There are some technical ways to mitigate this (such as accounting for the statistical power of a test – one for another post), but they still don’t get around the requirement to run through a sample size. For example, you might simply see a ‘day of the week’ bias, with visitors behaving differently on a Monday than on a Sunday.

Make sure you run your tests for long enough to observe any real effects!

I would always advocate running the test for the full duration calculated from your sample size (covered in Part 2). This way you reduce the risk in your results. Quite simply, if you have not reached statistical significance, you have not found any effect of the treatment. If you have found a statistically significant result, you have found a real effect. Bravo!
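
If you want to sanity-check that duration yourself, here is a minimal sketch using Python’s statsmodels library. The baseline rate, target uplift and traffic figures are illustrative assumptions – substitute your own numbers:

```python
# Sample-size and test-duration sketch for a two-variation test.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05           # current conversion rate (assumed)
target = 0.06             # smallest uplift worth detecting (assumed)
alpha, power = 0.05, 0.8  # conventional significance level and statistical power

# Convert the two conversion rates into a standardised effect size (Cohen's h).
effect = proportion_effectsize(target, baseline)

# Solve for the number of visitors needed per variation.
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect, alpha=alpha, power=power,
    ratio=1.0, alternative="two-sided",
)

daily_visitors_per_variation = 400  # assumed traffic per variation per day
days = n_per_variation / daily_visitors_per_variation
print(f"Sample size per variation: {n_per_variation:,.0f}")
print(f"Minimum test duration: {days:.0f} days")
```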

Why do you need to wait until the end of the test duration? As you can see from the graph above, the performance of each variation moves over time. If you close a test early, you might end up selecting a worse-performing variation as your winner and ‘de-optimise’ the website!

In some tools, statistical significance is reported as confidence. They are effectively the same thing: a p-value of 0.05 equals a confidence of 95%.
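
To illustrate, here is a rough sketch of how a tool might derive that confidence figure from a two-proportion z-test (using statsmodels); the visitor and conversion counts below are made up:

```python
# Two-proportion z-test: control vs variation (illustrative numbers).
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 145]   # control, variation
visitors = [2400, 2380]

z_stat, p_value = proportions_ztest(conversions, visitors, alternative="two-sided")
confidence = (1 - p_value) * 100  # how most tools report it

print(f"p-value: {p_value:.4f}  ->  confidence: {confidence:.1f}%")
# Below 95% confidence (p > 0.05) the result is not yet statistically significant.
```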

As you can see from the table above, we have not yet reached statistical significance on any variation (shown as a confidence percentage). It’s interesting to note that this is a multivariate test, so we’re given both the confidence to beat the control and the confidence to beat all other variations.

Draft results

Ok, so now you have to write up your results and pull your learnings from the test. This is the MOST IMPORTANT part of the process. There is no point in doing all this amazing work if you can’t pull actionable insights from the test.

Write up the results first. What actually happened? This is the easy bit. Here is a simple example of a test we recently ran on our homepage. Our assumption was:

“The tagline ‘Grow your business like nobody’s business’ implies a product business; the assumption is that changing it to ‘We grow your business like nobody’s business’ will imply a service business and therefore change engagement on the page.”

Results: state what happened to the metrics you were measuring. In our case, there was no statistically significant change – the test produced a p-value of 0.5551, nowhere near significance.

But even with this lack of a result, we can still learn something from the test. This is what our growth marketer wrote about the test, providing both learnings and action points for future work:

This particular test was far too subtle for the traffic on our homepage; therefore, we need to either drive more traffic or make a bigger change to test, such as the whole h1 tag or the image etc…

Remember, you might not get what you want out of a test, but you can always learn – even if that is about improving your process.

Share results with team

There is no point in results being siloed in your CRO team; they need to be shared to become valuable. You should be sharing them with your entire team. At Rebelhack we have weekly scrums where we talk about all completed tests, failures and successes: what worked, what went wrong and, more importantly, what we learned and our suggestions for future action.

Push to static change

If you do get a win – a variation that produced a statistically significant result – you need to make the change permanent on site, so that all website visitors are exposed to it and your business performance increases.

If you’re using a CMS like WordPress you can often do this yourself; however, more often than not you will need to liaise with the Product Manager and get the change into the deployment backlog.

Remember: don’t launch another test on that page until the change is live, otherwise the new test’s results will be invalidated once the change is deployed.

Rinse and repeat

Great, you now have a better performing website! Congratulations. This will now compound over time and your business will reap the rewards.

All you have to do is repeat this again, and again, and again, and again.

Most of your tests won’t provide statistically significant results on which you can rely. Growth via CRO is a function of how many tests you complete and learn from – so get deploying!

How to supercharge CRO

Above is a basic approach to CRO testing, but there are ways you can supercharge it. I’ll provide a quick glimpse below (with a short targeting sketch after the list) and leave the detail to another post.

  • Run tests for specific segments:
    • New vs returning visitors
      • You can run specific variations for samples based on whether they are new or returning users. Obviously, their intent is wildly different, which offers opportunities for granular optimisation.
    • Source-specific
      • You can test variations based on the traffic source. For example, you could run a test for only Direct traffic, as the intent of these users will differ from that of Facebook Paid traffic.
    • Demographic variants
      • Amend the variant based on user demographics. Perhaps you run a test specifically aimed at a younger audience, using an image that shows younger people as opposed to the control with older people.
    • Mobile vs desktop
      • This one is often missed but offers massive opportunities. Most businesses really should be mobile-first these days, but nevertheless, you can run tests aimed at mobile (or desktop) users only. Again, thinking about the context of a user and the device they’re using should help you come up with challenger variations that are more device-relevant.
    • Referral restrictions
      • This is a really interesting one where you can show a variation to traffic coming from a specific source. Let’s say you have a really solid referral source with a link on the page that clearly explains what you do. Traffic coming from that source will already be educated about what you offer, so repeating this on the landing page might not be a wise call; instead, you might want to take the user deeper into the user journey funnel.
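
As promised, here is a short, hypothetical sketch of what segment gating can look like server-side. The Visitor fields and segment rules are invented for illustration; in practice most testing tools expose equivalent targeting options in their UI:

```python
# Hypothetical segment gating: decide which tests a visitor qualifies for.
from dataclasses import dataclass

@dataclass
class Visitor:
    is_returning: bool
    device: str        # "mobile" or "desktop"
    source: str        # e.g. "direct", "facebook_paid"
    referrer: str      # referring domain, "" if none

# Each rule maps a (made-up) test name to an eligibility check.
SEGMENT_RULES = {
    "new_visitor_headline_test": lambda v: not v.is_returning,
    "direct_traffic_cta_test":   lambda v: v.source == "direct",
    "mobile_checkout_test":      lambda v: v.device == "mobile",
    "partner_referral_test":     lambda v: v.referrer == "trustedpartner.com",
}

def eligible_tests(visitor: Visitor) -> list[str]:
    """Return the tests this visitor should be entered into."""
    return [name for name, rule in SEGMENT_RULES.items() if rule(visitor)]

visitor = Visitor(is_returning=False, device="mobile", source="direct", referrer="")
print(eligible_tests(visitor))
# -> ['new_visitor_headline_test', 'direct_traffic_cta_test', 'mobile_checkout_test']
```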

Web Personalisation

Finally, web personalisation is another area where huge returns can be gained. If you know about the user before they land on your website, why not tailor their experience and make it more personal?

For example, if I have already purchased, why not say ‘Hey Logan, welcome back!’ rather than using generic copy? This is a huge topic in itself, and again worthy of another article, but linked with CRO it can provide HUGE gains for a business.
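
As a rough illustration – the user model here is entirely hypothetical – the logic can be as simple as:

```python
# Hypothetical greeting personalisation for known customers.
def homepage_greeting(user: dict | None) -> str:
    """Return personalised copy for known purchasers, generic copy otherwise."""
    if user and user.get("has_purchased"):
        return f"Hey {user['first_name']}, welcome back!"
    return "We grow your business like nobody's business"  # generic tagline

print(homepage_greeting({"first_name": "Logan", "has_purchased": True}))
print(homepage_greeting(None))
```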

If you want to know how we can help you optimise your website and marketing efforts, get in touch here. We do this all day, every day, so we would love to hear from you.