A website isn’t the kind of project that’s finished once you’ve launched it. To make and keep a site successful, it’s important to keep working on it and optimize it from a technical as well as a content standpoint. However, it’s not always easy to know what you should or shouldn’t change. Is that call to action too small? Is that one image inviting enough? And what about the title of that one sub-section?

While there’s nothing stopping you from coming to conclusions and making those decisions on your own, the right data will make sure that you’re improving your site instead of making it worse. To get this data, you need to do A/B testing. So what is A/B testing? It entails showing two different versions of an element (be it a button, an image, or even a whole page) to different users over a period of time and then checking which version led more users to take the desired action, i.e. convert.

First of all, let’s take a look at the A/B testing process:


Analyzing data and detecting an issue

In order to make the decision to do A/B testing, you need to know that there’s an issue to begin with. To find out, you need to review your website data consistently, meaning event and goal performance (e.g. goal completions, goal conversion rate, event category, event actions, event labels, event value), page performance (sessions, pageviews, bounce rate, exit rates, time on page, etc.) and/or traffic source performance (direct, organic search, paid search, social, referral, etc.). That way, you can find out which pages are not performing well and where conversions could be improved.
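
As a minimal sketch of what that review can look like in practice, assume you’ve exported page-level data from your analytics tool to a CSV; the file name, column names, and thresholds below are all hypothetical and should be adapted to your own setup.

```python
import pandas as pd

# Hypothetical export of page-level analytics data; adjust the file name and
# column names to whatever your analytics tool actually produces.
pages = pd.read_csv("page_performance.csv")  # columns: page, sessions, conversions, bounce_rate

# Conversion rate per page.
pages["conversion_rate"] = pages["conversions"] / pages["sessions"]

# Flag pages that get plenty of traffic but convert poorly or bounce heavily --
# these are candidates for an A/B test (the thresholds are just examples).
candidates = pages[
    (pages["sessions"] >= 1000)
    & ((pages["conversion_rate"] < 0.02) | (pages["bounce_rate"] > 0.70))
]

print(candidates.sort_values("sessions", ascending=False))
```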

Coming up with a testing idea and action plans

The testing idea is the new version of content or design that you think may perform better than the current one. As a rule, you should create a hypothesis for your A/B test: a statement of what you need to test and why, and what improvement you expect to see after you make the change. If you base your test on this hypothesis, you can decide what your test will entail and what success or failure would look like. This step is where you make sure that your test is sound and based on data, not just on guesswork.

To form a hypothesis, use this template based on optimization specialist Craig Sullivan’s work:

“Because we observed [A] (and/or feedback [B]), we believe that changing [C] for visitors [D] will make [E] happen. We’ll know this when we see [F] and obtain [G].”
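
For example, a filled-in hypothesis (with entirely made-up observations and numbers) might read: “Because we observed a high exit rate on the pricing page (and feedback that the plans are hard to compare), we believe that changing the pricing table to a side-by-side comparison for mobile visitors will make more of them start a trial. We’ll know this when we see a higher click-through rate on the trial button and obtain a statistically significant lift in trial sign-ups.”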

After coming up with a test idea, you need to build an action plan to make sure all testing ideas can be created and delivered. You may need developer or designer support for this. The action plan should cover three steps:

  1. Creating the design, content, or new algorithm for the test version (the variant)
  2. Implementing it (design, content, or development work, including test configuration)
  3. Monitoring, reporting, and decision-making

Implementing the campaign

After the design, content, and development work for the new version (the variant) is done, you can set up the campaign in a variety of A/B testing tools, including Google Optimize, Optimizely, VWO, and Adobe Target, among others. In fact, many CMS platforms have their own A/B testing functionality built in.
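
If you’re rolling your own split instead of using one of these tools, the core mechanism is simply a stable assignment of each visitor to a variant. Here is a minimal sketch of that idea; the experiment name and visitor ID are hypothetical, and a dedicated tool would handle this for you.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor into a variant.

    Hashing the visitor ID together with the experiment name means the same
    visitor always sees the same variant for a given test, without storing
    any extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Hypothetical usage: route a visitor to the control or the new variant.
print(assign_variant("visitor-12345", "pricing-table-redesign"))
```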

Monitoring, reporting, and decision-making

Review your campaign daily or weekly and make sure all the stats that contribute to the result of the campaign are being measured properly. Once you’ve collected enough data, you can see which version performed best. That version should then be declared the winner and put live on your site.

A/B Testing Best Practices

To help guide you along the way, we decided to share the best practices we follow here at Niteco to make sure our A/B testing scenarios give us the data we need to make informed decisions:

  1. Test the right items. When an issue is detected, the tester needs to find its actual cause, so that the test results reliably support decision-making. Testing a new button when the real problem is the image above it won’t help you or your site.
  2. Use a sample size of at least 1,000 for each variant. Smaller samples rarely give statistically reliable results; the sketch after this list shows one way to estimate how many visitors per variant your test actually needs.
  3. Review the stats tracking before implementing the campaign to make sure all data is measured correctly. To do this, it’s good practice to spend enough time with the tool you’re using before the actual testing so you know what to look out for.
  4. Pay attention to timing. Traffic on your website probably varies between periods, so check the data from the previous year before scheduling the test and estimating its duration. If your test shows user numbers going down, but they also went down at the same time last year, the drop is likely not connected to your test.
  5. Don’t make mid-test changes because the results will be affected.
  6. Try to test only one element at a time. That means, don’t change both a button and some additional copy for a single A/B Test, because you won’t be able to tell which of those changes caused a possible change in user behavior. If you just change one element, you can be reasonably sure that any differences between numbers are likely caused by the change in that one element.
  7. Check the statistical significance of your findings. As with any statistical work, you need to make sure that any changes you’re observing wouldn’t have occurred anyway, even without your change. If the tool you’re using doesn’t show the statistical significance of your A/B test, you can use a third-party tool such as Neil Patel’s A/B testing statistical significance calculator, or run the check yourself as in the sketch below.
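
To illustrate points 2 and 7, here is a minimal sketch of both checks using the standard normal-approximation formulas for comparing two conversion rates; the conversion numbers are made up.

```python
from math import erf, sqrt

def z_to_p(z: float) -> float:
    """Two-sided p-value for a standard-normal z score."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def required_sample_size(p_base: float, p_target: float) -> int:
    """Rough per-variant sample size needed to detect a lift from p_base to
    p_target with a two-sided test at alpha = 0.05 and 80% power."""
    z_alpha, z_power = 1.96, 0.84
    p_avg = (p_base + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_avg * (1 - p_avg))
                 + z_power * sqrt(p_base * (1 - p_base) + p_target * (1 - p_target))) ** 2
    return int(numerator / (p_target - p_base) ** 2) + 1

def p_value(conversions_a: int, visitors_a: int,
            conversions_b: int, visitors_b: int) -> float:
    """Two-proportion z-test comparing the conversion rates of variants A and B."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return z_to_p((p_a - p_b) / se)

# Hypothetical plan: a 2% baseline conversion rate that we hope to lift to 2.5%.
print(required_sample_size(0.02, 0.025))        # visitors needed per variant

# Hypothetical results: is the observed lift significant at the 95% level?
print(p_value(200, 10000, 260, 10000) < 0.05)
```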


To find out more about how we conduct A/B Testing and how we could help improve your own site’s conversion, contact us today or view our work now!
