
In part 2 of our CRO series, find out how to run a CRO testing plan, including how to decide which tests to run, how to calculate your sample size, how to prioritise, and how to deploy your CRO test.

How do I run a CRO testing plan?

If the first post I wrote has persuaded you that you should be running CRO, great, I’ve done my job. However, the hardest thing is to do it properly. I have seen a lot of companies trying to run CRO and getting it totally wrong.

There needs to be a defined process, one that everyone knows about, with controls and agreed ways to define and prioritise tests.

I will outline a basic testing framework below, but you can take it and tailor it as you see fit. At Rebelhack we have a more comprehensive approach, but the underlying principles are the same. It has also taken us 2.5 years to get to where we are, so don’t worry about starting simple.

As I tell my team – KISS

“Keep it simple, stupid!”

Analyse your website – get quantitative!  

Ok, the first challenge is to find out where to run tests on your website that will deliver the highest return (GOI – aka Growth on Investment).

First off, let’s get quantitative. Jump into your clickstream analytics platform (e.g. Google Analytics). In short, you’re looking for problem pages and user flow bottlenecks.

Below are some basic ways to find areas for testing to get you started:

  1. High-volume traffic pages with a high exit rate
  2. High-volume landing pages with a high bounce rate
  3. Top-ranking exit pages
  4. Top-ranking exit pages as a percentage of traffic
  5. Funnel bottlenecks – pages with high exit rates in core product funnels
  6. User feedback from sales and customer support – pages that are regularly complained about
  7. Obvious usability issues – simply put, pages with a crappy user experience and things that just don’t work

This should give you a list of pages where tests can deliver ROI for the business most quickly. Remember, you want pages with high traffic if possible, because you will reach statistical significance more quickly.
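If you prefer to do this triage in code rather than in the GA interface, here is a minimal sketch using pandas on an exported page report. The file name and column names are hypothetical, so adjust them to whatever your export actually contains.

```python
# Sketch: rank candidate test pages from an analytics export.
# The CSV name and its columns are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ga_pages_export.csv")  # assumed: page, pageviews, entrances, bounces, exits

df["exit_rate"] = df["exits"] / df["pageviews"]
df["bounce_rate"] = df["bounces"] / df["entrances"]

# Weight the worst rate by traffic so high-volume problem pages float
# to the top – they will reach statistical significance fastest.
df["priority"] = df["pageviews"] * df[["exit_rate", "bounce_rate"]].max(axis=1)

print(df.sort_values("priority", ascending=False).head(10))
```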

Now it’s time to get qualitative…

Now that you have a list of pages with potential problems, you want to understand what is happening on them. It’s time to get qualitative.

On these pages, deploy one or more of the following to understand more about your users.

Deploy Heatmaps

Heatmaps are great as they tell you where users are clicking and how far they are scrolling on your pages.

These are really useful. You may discover that visitors are clicking on areas where there is no button, or that they are missing the main CTA entirely! You might find visitors are clicking on a small button in the footer that you had not anticipated, suggesting they are looking for the information behind it.

Deploy Behavioural Mapping

These are basically recordings of where your users scroll and click during a website session. They work for both mobile and desktop websites, as well as applications. They are invaluable, helping you understand whether users are spending time hunting around a page for information or dwelling on a piece of content such as a video.

These should add more context to your understanding of what is happening on each page.

Get Survey Results

Onsite surveys are not new, but in recent years they have taken on a new role in website conversion rate optimisation.

You can position surveys on these key pages and ask your users a question. Sure, not everyone will answer, but you can normally glean some insight from even a handful of responses (c. 10-20), because they will usually contain similar themes.

You can either provide answers to select from a list, leave an empty text box for completion, or use a combination of the two. You might need to test which type gets you more data more quickly; I would suggest you start with predefined answers.

Here are some ideas to get you started:

1. When a user is exiting a page, enable an exit-intent survey asking website visitors: “Why are you leaving the site?”, with answer options such as:

  • I did not find what I was looking for
  • I got confused where to go on the website
  • I don’t have enough information to go further
  • I don’t feel comfortable with the brand
  • Other (if they click this you can offer a secondary text box for more information)

This will help you understand what is causing them to leave the page. Once you figure out what it could be, you can draw up some ideas for tests that might keep people on the page and moving through the funnel.

2. On a high-traffic page, enable a timed slide-in survey asking: “What best describes you?”, with answer options such as:

  • I work for a startup
  • I work for a large company
  • I am a freelancer
  • Other (if they click this you can offer a secondary text box for more information)

This would help define who is on your website (if you’re a B2B company) and whether you have the demographic you’ve been aiming at. It will also help you create more targeted content for the predominant customer segment visiting that page.

As you can imagine, there is an infinite number of questions you can ask – just get creative!

User Testing

This is a great way to fast-track your understanding of what is going on from a user’s perspective. You can set up a user testing workshop where people show up and you provide the devices! Run a session aimed at finding out more about how users see a certain page or website. This can take a lot of time, and you have to incentivise people to turn up. There is also some inherent bias built into in-person user testing that can lead you down a rabbit hole (some people will do anything for free beer and a few slices of pizza!).

I would suggest you use something like Usertesting.com, a simple-to-use platform that provides you with real insights from real human beings without the need to set up a workshop. Within a few hours of setting up the test, you can have insights.

A combination of these four approaches should give you a good idea of what is going wrong on any of these key pages.

Ideate

This is the fun bit. Set up a workshop with all those involved and talk through your findings. This should generate lots of ideas (good and bad) as to what to test.

We use something called a user story to state our hypothesis. It should say something like this…

Looking at data insight [insert data insight here], I assume that user [insert description of segment here] is [insert your hypothesis as to what user is doing/not doing] and that if we [add in your test variation] we can expect a performance gain of [add in your assumption as to performance gains].

Therefore, a well-crafted user story would look something like this:

Looking at the very high exit rate on the page in question, I assume that users cannot see the main CTA because it is only half visible on the page, and that if we move it a little higher, above the fold, we can expect CTR from this page to increase by 15%.

Calculate the sample size

I think this is one of the major oversights of many people running CRO programmes. They rush tests, or deploy multiple tests into the same sample! Big error: you will generate a lot of noise and never isolate the real effect of your variations.

If you are able to stop individual users going into multiple tests at once (Optimizely can do this), great, but you are then doubling the length of each test. Unless there are good reasons and you’re at the top of your game, I would advocate one test per product at any one time.
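For context, tools typically enforce this kind of exclusion with deterministic hash bucketing, so the same user always lands in the same single test. Here is a minimal sketch of the general idea – the experiment names are hypothetical, and this is not Optimizely’s actual implementation.

```python
# Sketch: deterministic hash bucketing so each user enters at most one test.
# Illustrates the general technique only - not Optimizely's implementation.
import hashlib

def bucket(user_id: str, salt: str, n: int) -> int:
    """Stable bucket in [0, n) for a given user and salt."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n

experiments = ["homepage_cta", "pricing_layout"]  # hypothetical test names

def assign(user_id: str) -> tuple[str, str]:
    # First pick exactly one experiment, then pick a variant within it.
    exp = experiments[bucket(user_id, "exclusion", len(experiments))]
    variant = "control" if bucket(user_id, exp, 2) == 0 else "variation"
    return exp, variant

print(assign("user-42"))
```

Splitting users across two mutually exclusive tests like this is exactly why each test then takes roughly twice as long to reach its required sample size.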

Here is a simple way to calculate the sample size and length of a test:

  1. Use this statistical calculator from Optimizely.
    • It fixes the statistical power for you, so you don’t need to worry about that for now.
  2. Set the statistical significance at 95%.
    • This comes back to your tolerance for risk. Your p-value is the risk that your test results are wrong; the lower the p-value, the lower that risk. I would always suggest you work to a p-value < 0.05, which in this calculator is a value of 95%.
  3. Pull out the baseline conversion rate from unique user sessions for the primary metric you’re measuring.
    • Perhaps it’s 2.5%.
  4. Add in your minimum detectable effect (MDE).
    • This is ultimately the smallest effect you want to see from the test in order to pay any attention to it. Google would be happy with a lift of 0.1%, but most businesses will want to see +5%.
    • This will have a huge impact on your required sample size. I will use 5% for this example.
  5. Calculate the sample size.
    • In this instance we have a sample size per variation of 320,000.
    • Now double it (control plus one variation): 640,000 is your total sample size.
  6. Calculate the test duration in days.
    • Find the number of unique users on your website per day (you can do this using Google Analytics). Let’s assume we have 70,000 unique users per day.
    • Divide the total sample size by the unique users per day: 640,000 / 70,000 = 9.14 days.
  7. Round up to whole weeks.
    • To remove ‘day of the week’ bias, round up to the nearest full week. In this instance that is 14 days, or 2 weeks.
    • This is how long you should run your test for.
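If you would rather script this than click through the calculator, here is a minimal sketch using the classic two-proportion sample size formula. Bear in mind that Optimizely’s calculator uses its own sequential-testing statistics, so its numbers will differ somewhat from this textbook version, and the 80% power default here is my assumption.

```python
# Sketch: textbook two-proportion sample size and test duration.
# Optimizely's calculator uses its own statistics, so expect different numbers.
import math
from scipy.stats import norm

def sample_size_per_variation(baseline, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect a relative lift."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for 95% significance
    z_beta = norm.ppf(power)           # 0.84 for 80% power (assumed)
    pooled = (p1 + p2) / 2
    top = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(top / (p2 - p1) ** 2)

n = sample_size_per_variation(baseline=0.025, mde_relative=0.05)
total = 2 * n                  # control + one variation
days = total / 70_000          # 70,000 unique users per day, as above
weeks = math.ceil(days / 7)    # round up to whole weeks
print(f"{n:,} per variation, {total:,} total, run for {weeks} week(s)")
```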

Prioritise

One of the things I have learned over the years is that “doing stuff” isn’t good enough. You have to do stuff in the right order, and this applies to testing more than ever! It’s about doing as little work as possible while maximising the opportunity for growth. Therefore, prioritisation is critical.

There are 2 well-known prioritisation frameworks, PIE and ICE. I actually don’t like either, as they are both highly subjective, meaning that ultimately it’s guesswork, and tests can easily be pushed up the order simply by one of the team claiming a bigger impact than they really believe it will have.

A framework I have found recently is called the PXL Framework, by CXL. This is much better in my opinion, as it focuses on trying to make the process more quantitative.

It asks a range of questions that you score as either 0 or 1 (i.e. no or yes). This makes things way more data-driven and tries hard to make the process more objective. The questions can be tailored to each business, but the ones it uses include:

  • Is the change above the fold? → Changes above the fold are noticed by more people, thus increasing the likelihood of the test having an impact.
  • Is the change noticeable in under 5 seconds? → Show a group of people control and then variation(s), can they tell the difference after seeing it for 5 seconds? If not, it’s likely to have less impact.
  • Does it add or remove anything? → Bigger changes like removing distractions or adding key information tend to have more impact.
  • Does the test run on high traffic pages? → Relative improvement on a high traffic page results in more absolute dollars.

If you want to see a spreadsheet version of this, you can check this one out here.
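If you’d rather keep the scoring in code than in a spreadsheet, here is a minimal sketch of the PXL idea. The criteria shown are just the four above, and the candidate tests are made-up examples.

```python
# Sketch: PXL-style prioritisation with binary (0/1) answers per criterion.
# Criteria and candidate tests below are illustrative examples only.
criteria = [
    "Above the fold?",
    "Noticeable in under 5 seconds?",
    "Adds or removes something?",
    "Runs on a high-traffic page?",
]

tests = {
    "Move main CTA above the fold": [1, 1, 0, 1],
    "Rewrite footer microcopy":     [0, 0, 0, 0],
    "Add trust badges to checkout": [1, 1, 1, 1],
}

# Higher total score = higher slot in the testing queue.
for name, answers in sorted(tests.items(), key=lambda kv: -sum(kv[1])):
    passed = [q for q, a in zip(criteria, answers) if a]
    print(f"{sum(answers)}/{len(criteria)}  {name}  (passes: {', '.join(passed) or 'none'})")
```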

Once you have your tests prioritised, you’re ready to build them.

You can build out your own framework based on your specific needs. We have a really detailed approach at Rebelhack because the order in which you deploy tests is really important: you cannot deploy tests on top of tests, as you will only generate statistical noise, so deployment order is critical to the success of a CRO programme.

Build the test

I won’t spend too long on this, as each testing platform will have a slightly different process for deployment. But here are some principles to bear in mind:

  • Ensure you are tracking the main conversion point for the product. This could be the purchase button, the user signup, the lead acquisition event – ultimately whatever the core metric you’re trying to move.
  • Make sure you also track other events in that same funnel, as more often than not you can effect a change at the top that is not carried through to the bottom of the funnel. This should tell you something about your test even if it fails! (There is a sketch of this kind of funnel check after this list.)
  • Make sure you also track other events that you’re interested in on the same page. An example here would be the secondary CTA on the homepage such as ‘find out more’. Tracking this kind of event will tell you whether your change means users end up wanting to find out more, perhaps because your test variation is not clear enough.
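To make that funnel check concrete, here is a minimal sketch that computes step-to-step conversion per variant from raw test events. The file name, column names, and funnel steps are hypothetical placeholders.

```python
# Sketch: step-by-step funnel conversion per test variant.
# The CSV, its columns, and the step names are hypothetical.
import pandas as pd

events = pd.read_csv("test_events.csv")  # assumed: user_id, variant, step
funnel = ["landing", "signup_click", "signup_complete", "purchase"]

for variant, group in events.groupby("variant"):
    print(f"Variant: {variant}")
    prev = set(group.loc[group["step"] == funnel[0], "user_id"])
    for step in funnel[1:]:
        curr = set(group.loc[group["step"] == step, "user_id"]) & prev
        rate = len(curr) / len(prev) if prev else 0.0
        print(f"  {step}: {rate:.1%} of users from the previous step")
        prev = curr
```

A variation that lifts the first step but not the last shows up immediately in output like this.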

Deploy the test

Now it’s time to pull the trigger. Again, just a few things to make sure of before you do:

  • Make sure there are no other development changes planned for deployment whilst you’re supposed to be running the test. If your development teams push out a product update, no matter how small, you need to re-run the test (unless you have ringfenced that part of the product as stated below).
  • Make sure you ring-fence the areas of the product you’re testing in. If you’re testing on the blog, make sure there are no amends to the blog pages and the subsequent funnels you’re testing. Your development team can make amends to other areas of the site, so long as they are not in the funnel you’re testing. However, wherever possible refer to rule #1 – don’t make site amends other than the CRO test.
  • Make sure, wherever possible, that the campaign and marketing teams are not planning some huge shift in their marketing mix above your test in the funnel. This will break your test (or certainly invalidate the learnings to some degree) because that traffic will behave very differently.
  • Make sure there are no planned sales across the site, or in your marketing offerings. There is no point in pushing a CRO test live and then deploying a campaign offering a discount code to be used at purchase. This obviously changes the quality and intent of traffic as it hits the site and will skew your results.

Now you’re up to speed with deploying your test, you need to collect and measure the results. My third post in this series will explain:

  • How you measure your results
  • How to write up your results
  • How to supercharge your CRO

Ok, so now we’re on a roll and you’re almost a CRO superstar!! To see the 3rd and final post in this series click here.