Want to know which version of your content truly connects with your audience? A/B testing shows you which version of your page, call-to-action, or message delivers better results. Learn what A/B testing is, how to do it right, and how to set up reliable experiments in Kontent.ai using today’s best practices.
A/B testing (or split testing) is a randomized experiment where you present different versions of your content to separate groups of users. For example:
Version A: “Schedule a demo today”
Version B: “Book your free demo”
You then measure which version performs better against your chosen goal.

Splitting the audience can be even (50/50) or weighted, for example, 80/20. An uneven distribution comes in handy when you want to limit exposure to a risky new version. This flexibility makes A/B testing useful not only for marketing campaigns but also for product experiences, CTAs, and page layouts.

A/B testing is easy to get wrong if you skip the fundamentals. The following key steps apply regardless of the platform you plan to test your content on.

First, define the goal you want to reach. What do you want your audience to do? Purchase your product? Sign up for a newsletter? Spend more time on your website? Be specific about the outcome and the success rate you need.

Now that you have your goal, identify the problem: what keeps you from reaching that goal? Analyze the data about your visitors and develop a hypothesis about what prevents them from taking action. Narrow down the possible causes so that you don't need to test every single element in your app.
Once you have the possible causes and their possible solutions, test them using A/B tests. Always test only one small change at a time. That way, you'll know exactly which change led to the shift in your users’ behavior.
For any testing of this kind, you need a big enough audience. Testing a change on 50 visitors won't give you definitive results because the outcome may easily be a coincidence. Aim for at least 1,000 visitors per test version for the results to be statistically valid.
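If you want a ballpark figure for how many visitors each version needs, a standard rule-of-thumb power calculation can help. The sketch below uses the common approximation for a two-sided test at roughly 95% confidence and 80% power; in practice, your A/B testing tool usually calculates this for you.

```typescript
// Rough sample size per variant using the common approximation
// n ≈ 16 · p(1 − p) / δ², where p is the baseline conversion rate
// and δ is the smallest absolute lift you want to be able to detect.
function sampleSizePerVariant(baselineRate: number, minDetectableLift: number): number {
  const p = baselineRate;
  const delta = minDetectableLift;
  return Math.ceil((16 * p * (1 - p)) / (delta * delta));
}

// Example: a 5% baseline conversion rate and a 1% absolute lift
// you care about require roughly 7,600 visitors per version.
console.log(sampleSizePerVariant(0.05, 0.01));
```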
After the test is finished, retest the change after some time to rule out false positives. This is important because a typical pain point of A/B testing is that each test runs for a fairly short time.
Re-testing helps you ensure that the change in your customers’ behavior wasn’t caused by temporary factors, such as a holiday season or a campaign. Running the same test again at another time helps confirm whether the effect is real.
When your testing is done, look at the results and see if any of the tested variants is getting you closer to your goal. If it is, great! Implement it, and optionally start testing another hypothesis to improve your app further.
Remember to always look at the results in the context of your business. For example, one change may skyrocket the number of individual purchases, but did your total revenue increase correspondingly? You may now have a larger number of smaller purchases that add up to less revenue overall.
If the hypothesis didn't prove right, test another one. A/B testing is an iterative process of trial and error. Each test, whether it confirms or disproves your idea, helps you understand your audience better.
Kontent.ai’s modular content model makes it easy to test specific parts of a page without disrupting the rest.
The example below reflects current best practices and implementation patterns used at Kontent.ai, but it’s not the only way to set up experiments. You can adapt these steps to your own process or preferred A/B testing tool.
To run A/B tests in Kontent.ai, it's best to use a dedicated content type. This keeps experiments organized and ensures that both the content and the metadata required for testing are captured consistently.
In the content item of the A/B test type, include:
A reference to the original content item being tested
A time frame for the test (start and end dates)
A test identifier from your testing tool
One or more test versions of the content
Each test version contains:
A unique ID to distinguish it from other tested versions
A weight to control traffic distribution
The tested content, created according to your hypothesis
The tested content is typically a linked item of the same content type as the component or linked item in the original page. When serving the A/B-tested page to your visitors, a selected audience receives the page with the alternative tested content instead of the original.
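As a concrete illustration, here's one way the A/B test content type described above might be represented in a front-end application. The property names (testId, originalItemCodename, variants, and so on) are hypothetical and depend on how you name the elements in your own content model.

```typescript
// Hypothetical shape of an "A/B test" content item after it has been
// fetched from Kontent.ai and mapped into application types.
interface AbTestVariant {
  id: string;              // unique ID distinguishing this version from the others
  weight: number;          // share of traffic this version should receive, e.g. 0.5
  contentCodename: string; // codename of the linked item holding the tested content
}

interface AbTest {
  testId: string;               // identifier from your A/B testing tool
  originalItemCodename: string; // codename of the original content item being tested
  startDate: string;            // ISO date when the test starts
  endDate: string;              // ISO date when the test ends
  variants: AbTestVariant[];    // one or more tested versions
}
```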
Your front-end application needs a way to check which pages are currently being tested. A common approach involves these steps:
Maintain a cache or index of active tests rather than querying your CMS on every page request.
Update this cache when A/B test content items are created or modified, typically via webhooks.
When rendering pages, reference the cache to quickly determine if a page is under test.
The specific implementation depends on your application architecture and performance requirements. The key is ensuring your front-end can efficiently determine whether a page is part of an active test before applying the decision logic.
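A minimal sketch of this approach might look like the following. It assumes a helper `fetchActiveAbTests()` that queries your Kontent.ai environment for items of the A/B test type, and it uses the hypothetical types from the earlier sketch; the in-memory map stands in for whatever cache or index your application actually uses.

```typescript
import express from 'express';
import { AbTest } from './abTestTypes'; // hypothetical module with the types sketched above

// In-memory index of active tests, keyed by the codename of the page under test.
// In production this could be Redis, an edge KV store, or similar.
const activeTests = new Map<string, AbTest>();

// Hypothetical helper that loads all currently active A/B test items from Kontent.ai.
declare function fetchActiveAbTests(): Promise<AbTest[]>;

async function refreshTestIndex(): Promise<void> {
  const tests = await fetchActiveAbTests();
  activeTests.clear();
  for (const test of tests) {
    activeTests.set(test.originalItemCodename, test);
  }
}

const app = express();

// Webhook endpoint notified by Kontent.ai when A/B test items are created or modified.
app.post('/webhooks/kontent', express.json(), async (_req, res) => {
  await refreshTestIndex(); // rebuild the index instead of querying the CMS on every page request
  res.sendStatus(200);
});

// When rendering a page, a cheap lookup tells you whether it is under test.
function getTestForPage(pageCodename: string): AbTest | undefined {
  return activeTests.get(pageCodename);
}
```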
When a visitor lands on a page being tested, the decision logic determines which version they see. This logic may include:
Weights: Version weights define the percentage of traffic each version receives.
Consistency: Check the visitor’s browser cache for a cookie. If the returning user has already seen one version, serve them the same page again for consistency.
Time frame: Ensure no alternative versions are served after the test ends.
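Putting those rules together, the decision logic might look roughly like this. The cookie name and the helper types are assumptions carried over from the earlier sketches; your implementation will differ, and the sketch assumes variant weights are fractions that leave the remaining traffic to the original page.

```typescript
import { AbTest, AbTestVariant } from './abTestTypes'; // hypothetical types from earlier

const COOKIE_NAME = 'ab-variant'; // assumed cookie used to keep returning visitors consistent

// Decide which variant a visitor should see; null means serve the original content.
function pickVariant(test: AbTest, cookieValue: string | undefined, now: Date): AbTestVariant | null {
  // Time frame: never serve alternative versions outside the test window.
  if (now < new Date(test.startDate) || now > new Date(test.endDate)) {
    return null;
  }

  // Consistency: a returning visitor keeps the version they saw before.
  if (cookieValue) {
    const previous = test.variants.find((v) => v.id === cookieValue);
    if (previous) {
      return previous;
    }
  }

  // Weights: pick a variant at random; any remaining probability mass
  // (1 − sum of variant weights) falls through to the original page.
  let roll = Math.random();
  for (const variant of test.variants) {
    roll -= variant.weight;
    if (roll < 0) {
      return variant;
    }
  }
  return null;
}
```

After the decision is made, the caller would typically set the cookie to the chosen variant's ID (or to a marker for the original) so the visitor keeps seeing the same version on later visits.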
User behavior is tracked using a third-party A/B testing tool. Once the test concludes, analyze the data to see if the tested version achieved the intended outcome. If successful, you can roll it out. If not, refine your hypothesis based on the insights and try again.
Avoid stopping tests too early. Wait until your A/B tool shows statistical significance and your traffic volume is high enough to avoid false positives.
In some cases, it may be preferable to store A/B testing metadata outside of Kontent.ai, such as in a testing orchestration layer or configuration file. This approach can be useful when:
Running tests across multiple platforms, for example, web, mobile, and email
Using dynamic experimentation or personalization engines
Separating test logic from content creation workflows
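If you keep test metadata outside Kontent.ai, a simple configuration object is often enough. The example below is purely illustrative; the field names mirror the content model sketched earlier rather than the format of any particular tool, and all values are placeholders.

```typescript
// Illustrative test configuration kept outside the CMS, for example in a config
// file bundled with the front end or served by an experimentation service.
const abTestConfig = {
  tests: [
    {
      testId: 'homepage-cta-test',        // identifier shared with your testing tool
      originalItemCodename: 'homepage',   // page being tested in Kontent.ai
      startDate: '2024-06-01T00:00:00Z',
      endDate: '2024-06-30T23:59:59Z',
      variants: [
        { id: 'cta_book_free_demo', weight: 0.5, contentCodename: 'homepage_cta_variant_b' },
      ],
    },
  ],
};

export default abTestConfig;
```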
Keep test versions discoverable but not public

When creating duplicate pages for testing, we recommend adding a Visibility element to your content types to mark such pages as excluded. This ensures the page is removed from internal search, excluded from the sitemap, and prevented from being indexed by search engines.
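For example, when building the sitemap or the internal search index, your application can skip any page whose Visibility element marks it as excluded. The element and option codenames below (visibility, excluded) are assumptions; use whatever codenames you define in your own content model.

```typescript
// Minimal shape of a page as the front end sees it; `visibility` holds the
// codenames of the selected options of the assumed Visibility element.
interface Page {
  codename: string;
  url: string;
  visibility: string[]; // e.g. ['excluded'] for duplicated test pages
}

// Keep excluded test pages out of the sitemap, internal search index, and similar listings.
function publiclyListablePages(pages: Page[]): Page[] {
  return pages.filter((page) => !page.visibility.includes('excluded'));
}
```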