A/B testing in Kontent.ai

Martina Farkasova
9 minutes
Want to know which version of your content truly connects with your audience? A/B testing shows you which version of your page, call-to-action, or message delivers better results. Learn what A/B testing is, how to do it right, and how to set up reliable experiments in Kontent.ai using today’s best practices.

What is A/B testing?

A/B testing (or split testing) is a randomized experiment where you present different versions of your content to separate groups of users. For example:
  • Version A: “Schedule a demo today”
  • Version B: “Book your free demo”
You then measure which version performs better against your chosen goal. Splitting the audience can be even (50/50) or weighted, for example, 80/20. Uneven distribution may come in handy if you want to limit the exposure to a risky new version. This flexibility makes A/B testing useful not only for marketing campaigns but also for product experiences, CTAs, and page layouts.
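As a rough illustration (not a Kontent.ai feature), a weighted split is often implemented by hashing a stable visitor ID into a bucket so that the same visitor always lands in the same group. The function name, hash, and weights below are hypothetical:

```typescript
// Minimal sketch: assign a visitor to version A or B using a weighted split.
// visitorId is assumed to be a stable identifier, e.g., from a first-party cookie.
function assignVersion(visitorId: string, weightA = 0.8): "A" | "B" {
  // Simple deterministic hash so the same visitor always gets the same bucket.
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  const bucket = (hash % 100) / 100; // value in [0, 1)
  return bucket < weightA ? "A" : "B"; // 80/20 split when weightA is 0.8
}
```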

Do the testing right

A/B testing is easy to get wrong if you skip the fundamentals. These key steps are valid regardless of the platform you plan to test your content on.

1. Define your goal

Start by defining the goal you want to reach. What do you want your audience to do? Purchase your product? Sign up for a newsletter? Spend more time on your website? Be specific about the outcome and the success rate you need.

2. Identify the problem and form a hypothesis

Now that you have your goal, identify the problem: what keeps you from reaching that goal? Analyze the data about your visitors and develop a hypothesis: what prevents them from taking action? Narrow down the possible causes so that you don't need to test every single element in your app.

3. Test the hypothesis

Once you have the possible causes and their possible solutions, test them with A/B tests. Make sure to test only one small change at a time. That way, you always know exactly what led to a change in your users’ behavior.

4. Ensure a large enough sample

For any testing of this kind, you need a big enough audience. Testing a change on 50 visitors won't give you definitive results because the outcome could easily be a coincidence. Aim for 1,000+ visitors per test version so the results are statistically meaningful; the smaller the difference you want to detect, the more visitors you need.
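If you want a rough feel for how big the audience needs to be, a common shortcut is Lehr’s approximation for a two-sided test at roughly 95% confidence and 80% power. The sketch below is illustrative and not part of Kontent.ai; the conversion rates are made-up example values:

```typescript
// Rough sample-size estimate per version (Lehr's approximation, ~95% confidence, ~80% power).
// baselineRate and targetRate are hypothetical conversion rates you expect to compare.
function visitorsPerVersion(baselineRate: number, targetRate: number): number {
  const avg = (baselineRate + targetRate) / 2;
  const difference = targetRate - baselineRate;
  return Math.ceil((16 * avg * (1 - avg)) / (difference * difference));
}

// Example: detecting a lift from a 5% to a 7% conversion rate
console.log(visitorsPerVersion(0.05, 0.07)); // ≈ 2256 visitors per version
```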

5. Re-test to confirm results

After the test is finished, re-test the change after some time to eliminate false positives. This is important because A/B tests usually run for a fairly short time, so the change in your customers’ behavior may have been caused by temporary factors, such as a holiday season or a campaign. Running the same test again at another time helps confirm whether the effect is real.

6. Evaluate the results

When your testing is done, look at the results and see whether any of the tested variants gets you closer to your goal. If it does, great! Implement it, and optionally start testing another hypothesis to improve your app further. Always interpret the results in the context of your business. For example, one change may skyrocket the number of individual purchases, but did your total revenue increase correspondingly? You may now have a larger number of smaller purchases that add up to less revenue overall.

If the hypothesis didn't prove right, test another one. A/B testing is an iterative process of trial and error. Each test, whether it confirms or disproves your idea, helps you understand your audience better.

How to run A/B tests in Kontent.ai

Kontent.ai’s modular content model makes it easy to test specific parts of a page without disrupting the rest. The example below reflects current best practices and implementation patterns used at Kontent.ai, but it’s not the only way to set up experiments. You can adapt these steps to your own process or preferred A/B testing tool.

The content type for A/B tests

To run A/B tests in Kontent.ai, it's best to use a dedicated content type. This keeps experiments organized and ensures that both the content and the metadata required for testing are captured consistently. In the content item of the A/B test type, include:
  • A reference to the original content item being tested
  • A time frame for the test (start and end dates)
  • A test identifier from your testing tool
  • One or more test versions of the content
Each test version contains:
  • A unique ID to distinguish it from other tested versions
  • A weight to control traffic distribution 
  • The tested content, created according to your hypothesis
The tested content is typically a linked item of the same content type as the component or linked item in the original page. When serving the A/B-tested page to your visitors, a selected audience receives the page with the alternative tested content instead of the original.
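To make the structure concrete, here’s one possible shape of such an item after it’s fetched from the Delivery API and mapped in your app. All names (AbTest, TestVersion, PageSection, and the element names) are assumptions for this sketch, not a prescribed model:

```typescript
// Illustrative shape of an A/B test item after it's mapped from the Delivery API.
interface TestVersion {
  versionId: string;    // unique ID to distinguish this version from other tested versions
  weight: number;       // share of traffic, e.g., 0.2 for 20%
  content: PageSection; // the tested content, same type as the original component
}

interface AbTest {
  originalItemCodename: string; // reference to the original content item being tested
  startDate: string;            // ISO date when the test starts
  endDate: string;              // ISO date when the test ends
  testId: string;               // identifier from your A/B testing tool
  versions: TestVersion[];      // one or more test versions
}

// Placeholder for whatever content type the tested component uses.
interface PageSection {
  codename: string;
  [element: string]: unknown;
}
```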

Identifying pages under test

Your front-end application needs a way to check which pages are currently being tested. A common approach involves these steps:
  1. Maintain a cache or index of active tests rather than querying your CMS on every page request.
  2. Update this cache when A/B test content items are created or modified, typically via webhooks.
  3. When rendering pages, reference the cache to quickly determine if a page is under test.
The specific implementation depends on your application architecture and performance requirements. The key is ensuring your front-end can efficiently determine whether a page is part of an active test before applying the decision logic.
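A minimal sketch of that approach, assuming an in-memory index and a hypothetical fetchActiveTests helper that queries the Delivery API, could look like this:

```typescript
// In-memory index of active tests, keyed by the codename of the page under test.
// In production, a shared cache (e.g., Redis) would typically replace this map.
const activeTests = new Map<string, { testId: string; endDate: Date }>();

// Hypothetical helper that queries the Delivery API for all active A/B test items.
async function fetchActiveTests(): Promise<
  Array<{ pageCodename: string; testId: string; endDate: Date }>
> {
  // ...call the Delivery API and map A/B test items to this shape...
  return [];
}

// Rebuild the index on startup and whenever a webhook signals a content change.
export async function refreshTestIndex(): Promise<void> {
  const tests = await fetchActiveTests();
  activeTests.clear();
  for (const test of tests) {
    activeTests.set(test.pageCodename, { testId: test.testId, endDate: test.endDate });
  }
}

// When rendering a page, a quick lookup tells you whether it's under an active test.
export function getActiveTest(pageCodename: string) {
  const test = activeTests.get(pageCodename);
  return test && test.endDate > new Date() ? test : undefined;
}
```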

Decision-making logic for serving test versions

When a visitor lands on a page being tested, the decision logic determines which version they see. This logic may include:
  • Weights: Version weights define the percentage of traffic each version receives.
  • Consistency: Check the visitor’s browser for a cookie. If a returning visitor has already seen one version, serve them the same version again for consistency.
  • Time frame: Ensure no alternative versions are served after the test ends.
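Put together, the decision logic could look roughly like the sketch below. The version shape, cookie handling, and weights are illustrative assumptions:

```typescript
interface Version {
  versionId: string;
  weight: number; // e.g., 0.5 for 50% of traffic
}

// Pick which version a visitor sees, honoring a previously stored choice and the test's end date.
function decideVersion(
  versions: Version[],
  previousChoice: string | undefined, // e.g., read from an "ab-test" cookie
  testEndDate: Date,
): string | undefined {
  // Time frame: after the test ends, everyone gets the original page.
  if (new Date() > testEndDate) return undefined;

  // Consistency: returning visitors keep the version they already saw.
  if (previousChoice && versions.some((v) => v.versionId === previousChoice)) {
    return previousChoice;
  }

  // Weights: pick a version proportionally to its weight.
  const totalWeight = versions.reduce((sum, v) => sum + v.weight, 0);
  let threshold = Math.random() * totalWeight;
  for (const version of versions) {
    threshold -= version.weight;
    if (threshold <= 0) return version.versionId;
  }
  return undefined; // defensive fallback: serve the original
}
```

After picking a version, you’d typically store its ID in a cookie so the visitor keeps seeing the same variant on later visits.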

Test results interpretation

User behavior is tracked using a third-party A/B testing tool. Once the test concludes, analyze the data to see if the tested version achieved the intended outcome. If successful, you can roll it out. If not, refine your hypothesis based on the insights and try again. Avoid stopping tests too early. Wait until your A/B tool shows statistical significance and your traffic volume is high enough to avoid false positives.

Alternative setup: Managing metadata outside Kontent.ai

In some cases, it may be preferable to store A/B testing metadata outside of Kontent.ai, such as in a testing orchestration layer or configuration file. This approach can be useful when:
  • Running tests across multiple platforms, for example, web, mobile, and email
  • Using dynamic experimentation or personalization engines
  • Separating test logic from content creation workflows
In this setup:
Beyond A/B tests, there are other approaches, such as multivariate testing, where multiple elements are tested at once. These require even larger sample sizes. Read more about experiment types and what to be careful about on Search Engine Journal.

Keep your content highly structured using components and linked items. This makes it easy to select one for testing and leave other parts of the page untouched.

Keep test versions discoverable but not public

When creating duplicate pages for testing, we recommend adding a Visibility element to your content types to mark such pages as excluded. This ensures the page is removed from internal search, excluded from the sitemap, and prevented from being indexed by search engines.
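For example, when generating the sitemap or an internal search index, you might filter out pages whose Visibility element marks them as excluded. The element codename and item shape below are assumptions, not part of any specific SDK:

```typescript
// Hypothetical shape of a page item with a multiple-choice "Visibility" element.
interface Page {
  url: string;
  elements: {
    visibility: { value: Array<{ codename: string }> };
  };
}

// Keep only pages that aren't marked as excluded when generating the sitemap.
function sitemapPages(allPages: Page[]): Page[] {
  return allPages.filter(
    (page) => !page.elements.visibility.value.some((option) => option.codename === "excluded"),
  );
}
```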
Choose the tool that suits you the best

Kontent.ai is a headless CMS, which means you can combine it with the A/B-testing tools that best suit your needs. There’s Uniform's integration for Kontent.ai, as well as VWO, Qubit, Adobe Target, SiteSpect, and many more.