Make your user experience perfect with A/B testing

Whatever content you have, it's important to determine what works with your audience. Whether you want your customers to buy your products or take any other action, you want to tweak your content so that it leads your audience to your goal. A/B testing, also called split testing, is the way to find the path to perfection.

    Key points

    • A/B tests help you verify whether your ideas get you closer to your goal.
    • Before you test, you need to be clear on what it is that you want to achieve.
    • You need a big-enough audience to get meaningful results.
    • Test only small changes to receive clear results.
    • The more structured your content is, the easier it is to A/B test.
    • In Kontent.ai, create different versions of a part of your content and use your own logic together with a 3rd-party tool to handle the testing.

    What is A/B testing

    A/B testing is a randomized experiment in which you test two or more variants of your content against each other. The variants differ in some aspects, and you're comparing which gets you closer to your goal.

    The variants are split among your audience, either evenly or based on weights assigned to each variant. Splitting is usually done evenly, but an uneven distribution may come in handy if you want to limit the number of visitors who see the new tested version – for instance, when it contains something you don't yet want to reveal to your full audience.
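
    To make the weighted split concrete, here's a minimal sketch in TypeScript – the variant IDs and the 90/10 weights are made up for illustration:

        // Minimal sketch of a weighted variant pick (illustrative only).
        interface Variant {
          id: string;     // e.g., "original" or "b"
          weight: number; // relative share of the audience
        }

        function pickVariant<T extends { weight: number }>(variants: T[]): T {
          const total = variants.reduce((sum, v) => sum + v.weight, 0);
          let roll = Math.random() * total; // uniform draw across the total weight
          for (const v of variants) {
            roll -= v.weight;
            if (roll <= 0) return v;
          }
          return variants[variants.length - 1]; // guard against rounding drift
        }

        // A 90/10 split: most visitors keep the original, 10% see the new version.
        const chosen = pickVariant<Variant>([
          { id: "original", weight: 90 },
          { id: "b", weight: 10 },
        ]);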

    Apart from A/B testing, there are other similar types of tests, such as split-URL testing and multivariate testing. Read more about experiment types in the Google Optimize Resource Hub.

    Do the testing right

    Here are some tips to help you get the most out of your testing. This advice is valid regardless of the platform where you test (or plan to test) your content.

    1. Define your goal

    The first step is to define the goal you want to reach. What do you want your audience to do? Purchase your product? Sign up for a newsletter? Spend more time on your website? And what success rate do you need?

    2. Identify the problem and make a hypothesis

    Now that you have your goal, identify the problem – what keeps you from reaching it? Analyze the data about your visitors and develop a hypothesis – what prevents them from taking the action? Narrow down the possible causes so that you don't need to test every single element in your app.

    3. Test the hypothesis

    Once you have the possible causes and their possible solutions, test them using A/B tests. Make sure to always test only one small change. That way, you don't get lost in figuring out what exactly led to the change in your users' behavior.

    For any testing of this kind, you need a big-enough audience. Testing a change on 50 visitors won't give you definitive results because there's a high chance the outcome is a coincidence. A common rule of thumb is 1,000+ visitors per variant for the results to be statistically valid.
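
    If you want a quick sanity check of your numbers, a two-proportion z-test is one common way to judge whether a difference in conversion rates is statistically meaningful. A minimal sketch, with made-up sample figures:

        // Sketch: two-proportion z-test for conversion rates (illustrative).
        // A is the original, B is the tested variant.
        function zScore(
          conversionsA: number, visitorsA: number,
          conversionsB: number, visitorsB: number,
        ): number {
          const pA = conversionsA / visitorsA;
          const pB = conversionsB / visitorsB;
          // Pooled conversion rate under the "no difference" hypothesis.
          const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
          const stderr = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
          return (pB - pA) / stderr;
        }

        // A z-score above ~1.96 roughly corresponds to 95% confidence.
        const z = zScore(50, 1000, 75, 1000);
        console.log(z > 1.96 ? "B looks better" : "no clear winner yet");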

    After the test is finished, re-test the change after some time to eliminate false positives. This is important because a typical pain point of A/B testing is that a test usually runs for a fairly short time.

    Re-testing helps you make sure that the change in your customers' behavior wasn't a time-limited fluke caused by, say, an upcoming holiday like Christmas. If the same test yields similar results repeatedly, the results are far more trustworthy.

    4. Evaluate the results

    When your testing is done, look at the results and see if any of the tested variants is getting you closer to your goal. If it is, great! Implement it, and optionally start testing another hypothesis to improve your app further.

    Remember to always look at the results in the context of your whole business. For example, one change may skyrocket the number of individual purchases, but did your total revenue increase correspondingly? You may now have a larger number of smaller purchases that add up to less revenue overall.
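
    To illustrate with made-up numbers: if the original converts 100 visitors at an average order of $50 ($5,000 in revenue) while the variant converts 130 visitors at an average order of $35 ($4,550), purchases rose by 30% but revenue actually fell.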

    Image: An A/B test verifying whether a change of layout keeps visitors on the page longer so that they find more information.

    If the hypothesis didn't prove right, test another one. Don't worry – A/B testing is an iterative process of trial and error, and one false step doesn't mean you should stop trying 🙂

    How we A/B test in Kontent.ai

    Now that we have the theory covered, let's see how we at Kontent.ai do A/B tests. Here we cover the approach that's proven successful for us and our website.

    We keep our content highly structured using components and linked items. This makes it easy to select one part of a page for testing and leave the rest untouched.

    The tools we use for A/B testing

    • A dedicated content type to create variants of the tested content item or component
    • Our own logic for splitting audience among the variants
    • Google Optimize to measure the performance of each variant

    The content type for A/B tests

    For the purpose of A/B testing, we have one dedicated content type. In content items of this type, we link the original content item we're doing the test for, define the time frame of the test, fill in the Google Optimize test ID, and link the variants of the tested item.

    Each tested variant has its own weight, ID to distinguish it from other variants, and the tested content created according to our hypothesis.

    The tested content is a linked item of the same content type as the respective component or linked item in the original page. When we serve the A/B-tested page to our visitors, a selected audience gets the page with the alternative tested content instead of the original.
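
    For illustration, the model could be expressed in code roughly like this – the field names are hypothetical, not our actual content type:

        // Illustrative shape of the A/B test content type described above;
        // the field names are hypothetical, not the actual Kontent.ai model.
        interface AbTestVariant {
          variantId: string;  // distinguishes this variant from the others
          weight: number;     // share of the audience this variant receives
          content: unknown;   // linked item of the same type as the original
        }

        interface AbTest {
          originalItem: string;     // codename of the content item under test
          googleOptimizeId: string; // experiment ID used for tracking
          startsAt: Date;           // the test's time frame
          endsAt: Date;
          variants: AbTestVariant[];
        }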

    Our decision-making logic

    When people visit the page we test, we use our own logic to determine which variant of the page to serve them. The decision is based on several criteria (see the sketch after this list):

    • The weights of the test variants determine the ratio of who gets the alternative versions and who gets the original.
    • We check the visitor's browser for a cookie indicating they have already visited the page during the test period. If we find one, we serve the variant the user got before so that the same person sees the same content consistently.
    • We also check the time frame of the test to avoid serving the alternative versions after the test has finished.
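
    A minimal sketch of that decision flow, reusing AbTest and pickVariant from the earlier sketches and assuming an Express app with cookie-parser (all names are illustrative):

        import { Request, Response } from "express";

        // Sketch of the decision flow described above; not our production code.
        function decideVariant(test: AbTest, req: Request, res: Response): string {
          const now = Date.now();
          // Outside the test's time frame, everyone gets the original.
          if (now < test.startsAt.getTime() || now > test.endsAt.getTime()) {
            return "original";
          }

          // Returning visitor: serve the same variant they saw before.
          const cookieName = `ab-${test.googleOptimizeId}`;
          const seen = req.cookies[cookieName];
          if (seen) {
            return seen;
          }

          // New visitor: weighted random pick, remembered for the test's duration.
          const picked = pickVariant(test.variants).variantId;
          res.cookie(cookieName, picked, { expires: test.endsAt });
          return picked;
        }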

    Test results interpretation

    User behavior tracking is handled by Google Optimize. When the test is over, we analyze the data to see whether our test variant had the desired effect. If it did, we implement it. If not, we repeat the test with different changes.
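
    For server-side experiments like this, Google's documentation describes reporting the served variant through analytics.js by setting the exp field before sending a pageview. A minimal sketch – verify the exact calls against the current Google Optimize docs:

        // Reports which variant a visitor got, per Google's server-side
        // experiments guide for analytics.js (treat as an assumption).
        declare const ga: (...args: unknown[]) => void; // analytics.js global

        function reportVariant(experimentId: string, variantIndex: number): void {
          // Associates this pageview with the experiment variant the server chose.
          ga("set", "exp", `${experimentId}.${variantIndex}`);
          ga("send", "pageview");
        }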

    Choose the tool that suits you the best

    Kontent.ai is a headless CMS, which means you can combine it with the A/B testing tools that best suit your needs. There's Uniform's integration for Kontent.ai, as well as Optimizely, VWO, Qubit, Adobe Target, SiteSpect, and many more.

    What's next?

    In this tutorial, you've learned how we conduct A/B testing with Kontent.ai. Let us know if you have tips on other possible approaches to testing content optimization in Kontent.ai!

    Here's some further reading about user experience testing: