A/B testing in Kontent.ai
Want to know which version of your content truly connects with your audience? A/B testing shows you which version of your page, call-to-action, or message delivers better results. Learn what A/B testing is, how to do it right, and how to set up reliable experiments in Kontent.ai using today’s best practices.
What is A/B testing?
A/B testing (or split testing) is a randomized experiment where you present different versions of your content to separate groups of users. For example:
- Version A: “Schedule a demo today”
- Version B: “Book your free demo”
You then measure which wording drives more of the action you care about, such as booked demos.
Do the testing right
A/B testing is easy to get wrong if you skip the fundamentals. These key steps are valid regardless of the platform you plan to test your content on.
1. Define your goal
Start by defining the goal you want to reach. What do you want your audience to do? Purchase your product? Sign up for a newsletter? Spend more time on your website? Be specific about both the outcome and the success rate you need.
2. Identify the problem and form a hypothesis
Now that you have your goal, identify the problem: what keeps you from reaching that goal? Analyze the data about your visitors and develop a hypothesis: what prevents them from taking action? Narrow down the possible causes so that you don't need to test every single element in your app.
3. Test the hypothesis
Once you have the possible causes and their candidate solutions, test them with A/B tests. Always test only one small change at a time. Otherwise, you can't tell which change caused the shift in your users’ behavior.
4. Ensure a large enough sample
For any testing of this kind, you need a big enough audience. Testing a change on 50 visitors won't give you definitive results because the outcome may well be a coincidence. Aim for 1,000+ visitors per test version for the results to be statistically valid.
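How many visitors you actually need depends on your baseline conversion rate and the size of the lift you want to detect. As a rough, tool-agnostic illustration (the standard two-proportion approximation, not Kontent.ai functionality), the sketch below estimates the required visitors per variant; the constants 1.96 and 0.84 correspond to 95% confidence and 80% power.

```typescript
// Rough per-variant sample size for comparing two conversion rates
// (normal approximation): n = (zAlpha + zBeta)^2 * (p1*q1 + p2*q2) / (p1 - p2)^2
function sampleSizePerVariant(
  baselineRate: number, // current conversion rate, e.g., 0.10
  expectedRate: number, // rate you hope the variant achieves, e.g., 0.15
  zAlpha = 1.96, // 95% confidence, two-sided
  zBeta = 0.84, // 80% power
): number {
  const variance =
    baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate);
  const effect = expectedRate - baselineRate;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / effect ** 2);
}

// Detecting a lift from 10% to 15% needs roughly 700 visitors per variant,
// while a small lift from 5% to 6% pushes the requirement past 8,000.
console.log(sampleSizePerVariant(0.10, 0.15)); // 683
```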
5. Re-test to confirm results
After the test is finished, re-test the change after some time to eliminate false positives. A typical pain point of A/B testing is that a test usually runs for a fairly short time, so the change in your customers’ behavior may have been caused by temporary factors, such as a holiday season or a campaign. Running the same test again at another time helps confirm that the effect is real.
6. Evaluate the results
When your testing is done, look at the results and see if any of the tested variants gets you closer to your goal. If it does, great! Implement it, and optionally start testing another hypothesis to improve your app further. Remember to always look at the results in the context of your business. For example, one change may skyrocket the number of individual purchases, but did your total revenue increase correspondingly? You may now have a larger number of smaller purchases that add up to less revenue overall (say, 150 purchases averaging $30 bring in $4,500, while 100 purchases averaging $50 brought in $5,000). If the hypothesis didn't prove right, test another one. A/B testing is an iterative process of trial and error. Each test, whether it confirms or disproves your idea, helps you understand your audience better.
How to run A/B tests in Kontent.ai
Kontent.ai’s modular content model makes it easy to test specific parts of a page without disrupting the rest. The example below reflects current best practices and implementation patterns used at Kontent.ai, but it’s not the only way to set up experiments. You can adapt these steps to your own process or preferred A/B testing tool.
The content type for A/B tests
To run A/B tests in Kontent.ai, it's best to use a dedicated content type. This keeps experiments organized and ensures that both the content and the metadata required for testing are captured consistently. In the content item of the A/B test type, include:
- A reference to the original content item being tested
- A time frame for the test (start and end dates)
- A test identifier from your testing tool
- One or more test versions of the content, each with:
  - A unique ID to distinguish it from other tested versions
  - A weight to control traffic distribution
  - The tested content, created according to your hypothesis
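One way to mirror this model in a TypeScript front end is a plain interface per element group. All names below (AbTest, AbTestVersion, and their fields) are illustrative, not a prescribed schema:

```typescript
// Illustrative shape of an A/B test content item and its versions.
// Field names are hypothetical; match them to your own content type.
interface AbTestVersion {
  versionId: string; // unique ID distinguishing this version
  weight: number; // share of traffic, e.g., 50 for 50%
  contentCodename: string; // codename of the tested content
}

interface AbTest {
  originalPageCodename: string; // reference to the item under test
  startDate: string; // ISO 8601 start of the test window
  endDate: string; // ISO 8601 end of the test window
  testId: string; // identifier from your A/B testing tool
  versions: AbTestVersion[]; // one or more test versions
}
```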
Identifying pages under test
Your front-end application needs a way to check which pages are currently being tested. A common approach involves these steps:
- Maintain a cache or index of active tests rather than querying your CMS on every page request.
- Update this cache when A/B test content items are created or modified, typically via webhooks.
- When rendering pages, reference the cache to quickly determine if a page is under test.
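A minimal in-memory version of such a cache might look like the sketch below. It reuses the AbTest shape from the earlier sketch; fetchAllAbTests is a placeholder for a Delivery API call, and the webhook wiring depends on your framework:

```typescript
// Placeholder: load all currently published A/B test items via the Delivery API.
declare function fetchAllAbTests(): Promise<AbTest[]>;

// In-memory index of active tests, keyed by the codename of the page under test.
const activeTests = new Map<string, AbTest>();

// Call this from your webhook endpoint whenever an A/B test item is created,
// modified, or unpublished. Rebuilding the whole index keeps the logic simple;
// with many concurrent tests you may prefer incremental updates.
async function refreshTestIndex(): Promise<void> {
  const tests = await fetchAllAbTests();
  activeTests.clear();
  for (const test of tests) {
    activeTests.set(test.originalPageCodename, test);
  }
}

// Used while rendering a page: a fast lookup instead of a CMS round trip.
function getActiveTest(pageCodename: string): AbTest | undefined {
  return activeTests.get(pageCodename);
}
```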
Decision-making logic for serving test versions
When a visitor lands on a page being tested, the decision logic determines which version they see. This logic may include:
- Weights: Version weights define the percentage of traffic each version receives.
- Consistency: Check the visitor’s browser for a cookie. If a returning user has already seen one version, serve them the same version again for consistency.
- Time frame: Ensure no alternative versions are served after the test ends.
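Put together, the decision logic can be a single function like the sketch below. It again reuses the AbTest and AbTestVersion shapes from earlier; getCookie and setCookie are placeholders for framework-specific cookie handling:

```typescript
// Placeholders for framework-specific cookie access.
declare function getCookie(name: string): string | undefined;
declare function setCookie(name: string, value: string): void;

// Decides which version of a tested page a visitor should see.
// Returns undefined outside the test window, meaning the original page is served.
function pickVersion(test: AbTest, now: Date = new Date()): AbTestVersion | undefined {
  // Time frame: serve no alternative versions outside the test window.
  if (now < new Date(test.startDate) || now > new Date(test.endDate)) {
    return undefined;
  }

  // Consistency: returning visitors keep the version they saw before.
  const cookieName = `ab-${test.testId}`;
  const seenId = getCookie(cookieName);
  const seen = test.versions.find((v) => v.versionId === seenId);
  if (seen) return seen;

  // Weights: pick a version with probability proportional to its weight.
  const total = test.versions.reduce((sum, v) => sum + v.weight, 0);
  let roll = Math.random() * total;
  for (const version of test.versions) {
    roll -= version.weight;
    if (roll <= 0) {
      setCookie(cookieName, version.versionId);
      return version;
    }
  }
  return undefined; // unreachable when all weights are positive
}
```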
Test results interpretation
User behavior is tracked using a third-party A/B testing tool. Once the test concludes, analyze the data to see if the tested version achieved the intended outcome. If successful, you can roll it out. If not, refine your hypothesis based on the insights and try again. Avoid stopping tests too early: wait until your A/B tool shows statistical significance and your traffic volume is high enough to avoid false positives.
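Your testing tool computes significance for you, but it helps to know roughly what it's doing. The sketch below shows a standard two-proportion z-test (an illustration, not Kontent.ai or any specific tool's API); at a 95% confidence level, an absolute z-score above about 1.96 indicates a statistically significant difference:

```typescript
// Two-proportion z-test: compares the conversion rates of versions A and B.
// conversionsA of visitorsA converted on version A; likewise for B.
function zScore(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  // Pooled conversion rate under the null hypothesis (no real difference).
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB),
  );
  return (rateB - rateA) / standardError;
}

// 130 conversions out of 1,000 vs. 100 out of 1,000: z ≈ 2.10,
// above 1.96, so the lift is significant at the 95% level.
console.log(zScore(100, 1000, 130, 1000));
```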
Alternative setup: Managing metadata outside Kontent.ai
In some cases, it may be preferable to store A/B testing metadata outside of Kontent.ai, such as in a testing orchestration layer or configuration file. This approach can be useful when:
- Running tests across multiple platforms, for example, web, mobile, and email
- Using dynamic experimentation or personalization engines
- Separating test logic from content creation workflows
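In that setup, Kontent.ai holds only the content variants while an external file carries the test metadata. A hypothetical configuration module (file name, codenames, and dates are all illustrative), reusing the AbTest shape sketched earlier:

```typescript
// ab-tests.config.ts (hypothetical): test metadata lives here, while the
// referenced codenames point at content items in Kontent.ai.
export const abTests: AbTest[] = [
  {
    testId: "exp-demo-cta",
    originalPageCodename: "pricing_page",
    startDate: "2024-05-01T00:00:00Z",
    endDate: "2024-05-31T23:59:59Z",
    versions: [
      { versionId: "a", weight: 50, contentCodename: "pricing_page" },
      { versionId: "b", weight: 50, contentCodename: "pricing_page_new_cta" },
    ],
  },
];
```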