How to start A/B testing your Next.js site

A/B tests are crucial for evaluating the performance of landing pages and forms, and sometimes even for small component or copy tweaks on your website. How can you set them up with a headless CMS and Next.js?


Ondrej Polesny · Published on Jan 29, 2023

In this article, I’ll explain how A/B testing works, what components are required from end to end, and show the full implementation of front-end logic in Next.js. At Kontent.ai, we’re using Google Optimize to track the results, so the last part of the article may differ based on your chosen solution.

What is an A/B test?

Let’s start with the basics. Using an A/B test, you can evaluate how a specific change in layout or content improves the engagement of your visitors. Take a look at this form:

Long form

It has many fields, and some people may lose interest when they need to put in so much information. Alternatively, you can add a streamlined version of the same form:

Streamlined form with a single input field

Now you can test which form performs better based on conversions, that is, how many people successfully submit the form. In its simplest form, an A/B test redirects half of your traffic to version A (the original form) and the other half to the new version B (the streamlined form). To ensure consistency, visitors get a small cookie that identifies the chosen variant, so even if they refresh the page or return to it later, they keep seeing the same content.
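To make the idea concrete, here's a minimal sketch of that assign-once behavior. An in-memory map stands in for the browser cookie, and all names are illustrative:

```typescript
// Simulate the assign-once behavior of an A/B test cookie.
// In a real setup, the browser cookie plays the role of this map.
const assigned = new Map<string, 'A' | 'B'>()

function getVariant(visitorId: string): 'A' | 'B' {
  let variant = assigned.get(visitorId)
  if (!variant) {
    // first visit: flip a coin for a 50/50 split
    variant = Math.random() < 0.5 ? 'A' : 'B'
    assigned.set(visitorId, variant)
  }
  // returning visit: reuse the stored choice
  return variant
}
```

Once a visitor is assigned, every later call returns the same variant, which is exactly what the cookie guarantees across page refreshes.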

What is needed to start A/B testing your pages?

There are a few steps to implement A/B testing on your site:

  • Set up the content model
    Before diving into the implementation, you need to figure out how to store the content of the used variants.
  • Divide traffic into multiple groups
    As I mentioned above, simple A/B tests have two variants, but you can add more. The traffic also doesn’t have to be split evenly. You may decide to run A/B tests only for a small portion of traffic, certain regions, or time zones.
  • Provide the content
    In your implementation, you need to compose the right version of the tested page.
  • Track results
    When a visitor performs the action you were aiming for, you need to track the result.

Set up the content model

Content model setup highly depends on how you structure the content of your website. On our site, the majority of pages are built using components that take content from reusable content items:


The components are added to each page using a single LinkedItems element called Content:

Overview of page with linked items that are transformed into components.

Therefore, to adjust the structure of the page and create a new variant of the original page, we only need to duplicate this element. It’s the only element that affects the visual representation of the page.

But the content model is not only about the actual content. We’ll also need to store additional data required for the A/B test:

  • Google Optimize ID
    The A/B test identifier. You may have multiple A/B tests running at the same time.
  • Original page reference
    The page you want to test
  • Start date & time
  • End date & time
  • Variants
    • Variant ID
This depends on the tracking system used. Google Optimize numbers the extra variants starting at 1; the original page is always 0.
    • Variant content (Linked Items element)
    • Variant weight (Text element)
      Determines how much traffic should be directed to this variant.

Divide traffic into multiple groups

When the content model is ready, we can continue with the code. First, we need to catch requests to the tested page and check which variant to show. For that, we’ll use Next.js’s Middleware.

Add the file middleware.ts to /src:

import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
}

export const config = {
	matcher: []
}

This is just a frame for the middleware functionality. The code will be executed by a serverless function for each request coming to our website. That's what we want: to show variant A or B based on a cookie (returning visitors) or randomly according to weight (new visitors). However, there are two problems:

  • Middleware is executed for every single page
  • We would have to check in runtime whether there are any A/B tests running on a given URL

Even though these problems may not feel like the end of the world, they can quickly become one for your site's performance, your API quota, or your wallet.

We can solve both by pre-generating the A/B test data before the build. In my previous article, I explained how to build a mechanism for pre-build scripts in Next.js, so here I’ll first show the full source code and then explain what each part of it does.

This is the full code of the pre-build script:

import fs from 'fs'
import path from 'path'
import { DeliveryClient, camelCasePropertyNameResolver } from '@kontent-ai/delivery-sdk'

// IScriptParams comes from the pre-build script mechanism described in my previous article
export default async function execute(params: IScriptParams) {
	const fullFilePath = path.join(process.cwd(), 'static/ab-tests-data.json')
	const fullMiddlewarePath = path.join(process.cwd(), 'src', 'middleware.ts')

	// remove the old file
	if (fs.existsSync(fullFilePath)) {
		await fs.promises.unlink(fullFilePath)
	}

	const client = new DeliveryClient({
		projectId: params.env.KONTENT_PROJECT_ID,
		propertyNameResolver: camelCasePropertyNameResolver,
	})

	const res = await client
		.items<AbTestModel>()
		.type(contentTypes.ab_test.codename)
		.elementsParameter([...])
		.toPromise()

	const data = res.data.items.map(abTest => {
		const originalPageSlug = abTest.elements.umlpPage.linkedItems[0].elements.urlSlug.value
		const variantsWeightSubtotal = abTest.elements.variants.linkedItems
			.map(v => v.elements.weight.value)
			.reduce((acc, curr) => acc + curr, 0)

		return {
			id: abTest.elements.googleOptimizeId.value,
			originalSlug: {build URL from originalPageSlug},
			originalWeight: 100 - variantsWeightSubtotal,
			startDate: abTest.elements.startDate.value,
			endDate: abTest.elements.endDate.value,
			variants: abTest.elements.variants.linkedItems.map(variant => ({
				id: variant.elements.variantId.value,
				weight: variant.elements.weight.value,
			})),
		}
	})

	await fs.promises.writeFile(fullFilePath, JSON.stringify(data))

	// adjust middleware.ts file
	const regex = /^([\s\S]*\/\* automatically regenerated array of paths \*\/)([\s\S]*)$/

	const matchingPaths = data
		// remove the trailing slash
		.map(abTest => abTest.originalSlug.replace(/\/$/, ""))
	const codeFile = await fs.promises.readFile(fullMiddlewarePath, 'utf8')
	const newCodeFile = codeFile.replace(regex, `$1
export const config = {
	matcher: ${JSON.stringify(matchingPaths)}
}`)
	await fs.promises.writeFile(fullMiddlewarePath, newCodeFile)
}

Avoiding runtime checks for all URLs

The first step is to initialize the output file, in this case, called ab-tests-data.json, where we’ll store the A/B tests data:

const fullFilePath = path.join(process.cwd(), 'static/ab-tests-data.json')

// remove the old file
if (fs.existsSync(fullFilePath)) {
	await fs.promises.unlink(fullFilePath)
}

Then, we’ll fetch all the A/B tests data from the CMS:

const client = new DeliveryClient({
	projectId: params.env.KONTENT_PROJECT_ID,
	propertyNameResolver: camelCasePropertyNameResolver,
})

const res = await client
	.items<AbTestModel>()
	.type(contentTypes.ab_test.codename)
	.elementsParameter([...])
	.toPromise()

Note: The model types and constants in the contentTypes code file were automatically generated using a TypeScript model generator.

Now, to make the implementation easier, we want to have the data in the following structure. Note the start and end dates. As we’re fetching the data into a static file, and we don’t know when the next rebuild is going to take place, we need to ensure the middleware knows when to start and stop the test.

{
	id: string, // Google Optimize ID
	originalSlug: string, // URL path of the original page you want to A/B test
	originalWeight: number, // how much traffic should keep visiting the original page
	startDate: string, // start of the A/B test (ISO 8601)
	endDate: string, // end of the A/B test (ISO 8601)
	variants: [{
		id: string,
		weight: number,
	}],
}
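For reference, the same structure expressed as a TypeScript interface, using the sample values shown later in this article (a sketch, not generated code; dates are stored as ISO 8601 strings in the JSON file):

```typescript
// Shape of one record in ab-tests-data.json
interface AbTestData {
  id: string             // Google Optimize experiment ID
  originalSlug: string   // URL path of the original page
  originalWeight: number // share of traffic that keeps the original, 0-100
  startDate: string      // ISO 8601, e.g. "2022-09-01T00:00:00Z"
  endDate: string
  variants: {
    id: string           // variant ID as used by Google Optimize ('1', '2', ...)
    weight: number       // share of traffic directed to this variant
  }[]
}

// Example record matching the output file shown below
const example: AbTestData = {
  id: '2NTrxk87RWqvBkZXbBXs0w',
  originalSlug: '/specials/cms-for-insurance-companies/',
  originalWeight: 50,
  startDate: '2022-09-01T00:00:00Z',
  endDate: '2022-09-30T00:00:00Z',
  variants: [{ id: '1', weight: 50 }],
}
```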

Therefore, the next step is to transform the data into the mentioned structure:

const data = res.data.items.map(abTest => {
	const originalPageSlug = abTest.elements.umlpPage.linkedItems[0].elements.urlSlug.value
	const variantsWeightSubtotal = abTest.elements.variants.linkedItems
		.map(v => v.elements.weight.value)
		.reduce((acc, curr) => acc + curr, 0)

	return {
		id: abTest.elements.googleOptimizeId.value,
		originalSlug: {build URL from originalPageSlug},
		originalWeight: 100 - variantsWeightSubtotal,
		startDate: abTest.elements.startDate.value,
		endDate: abTest.elements.endDate.value,
		variants: abTest.elements.variants.linkedItems.map(variant => ({
			id: variant.elements.variantId.value,
			weight: variant.elements.weight.value,
		})),
	}
})

And finally, save the file:

await fs.promises.writeFile(fullFilePath, JSON.stringify(data))

This is an example of the output file ab-tests-data.json:

[
    {
        "id": "2NTrxk87RWqvBkZXbBXs0w",
        "originalSlug": "/specials/cms-for-insurance-companies/",
        "originalWeight": 50,
        "startDate": "2022-09-01T00:00:00Z",
        "endDate": "2022-09-30T00:00:00Z",
        "variants": [
            {
                "id": "1",
                "weight": 50
            }
        ]
    }
]

Avoiding running middleware for all URLs

This pre-generated data file solves the problem of repetitive runtime checks. Now, we need to solve the second problem—by default, middleware is executed for all paths on our website. Fortunately, Next.js allows us to use Matcher, which is a simple array of paths that limits the middleware execution:

export const config = {
  matcher: ['/about/:path*'],
}

The Matcher allows path filtering but does not support loading paths dynamically from, for example, a JSON file. That's problematic, and the only way to add the paths there dynamically is to override the array in the code file before the build. I consider it a workaround and a bad practice, but it's the only option until Next.js supports dynamic imports in this place. So, we pre-generate these paths too and let the pre-build script adjust the middleware.ts code file:

const fullMiddlewarePath = path.join(process.cwd(), 'src', 'middleware.ts')

// adjust middleware.ts file
const regex = /^([\s\S]*\/\* automatically regenerated array of paths \*\/)([\s\S]*)$/

const matchingPaths = data
	// remove the trailing slash
	.map(abTest => abTest.originalSlug.replace(/\/$/, ""))
const codeFile = await fs.promises.readFile(fullMiddlewarePath, 'utf8')
const newCodeFile = codeFile.replace(regex, `$1
export const config = {
	matcher: ${JSON.stringify(matchingPaths)}
}`)
await fs.promises.writeFile(fullMiddlewarePath, newCodeFile)

The code finds the Matcher part of the middleware.ts file by the specific comment and replaces the array of paths for which the middleware should be executed. This is how the Matcher part looks after generating the paths:

...
/* automatically regenerated array of paths */
export const config = {
	matcher: ['/specials/cms-for-insurance-companies/'],
}

When we have the A/B test data generated, we can move on to the actual middleware implementation. These are the steps we need to implement:

  • Check that the A/B test on the requested path is currently running
  • Check if it’s the first visit or if the variant has already been selected previously
  • Show the original page or reroute the request to another variant

Checking that the A/B test is currently running

First, we need to import the generated JSON file and check the A/B test start and end dates:

...
import dayjs from 'dayjs'
import abTestsData from '../static/ab-tests-data.json'
...

export function middleware(request: NextRequest) {
	// get the requested path
	const slug = request.nextUrl.pathname

	// find the AB test and check if it's running
	const abTest = abTestsData.find(
		abTest => abTest.originalSlug === slug && dayjs(abTest.startDate) <= dayjs() && dayjs(abTest.endDate) >= dayjs())

	// cancel middleware if there's no such AB test
	if (!abTest) {
		return
	}

Note: In our project, we use dayjs for date comparisons, but you can also use the standard JavaScript Date object.
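If you'd rather avoid the dayjs dependency, the same check works with plain Date objects, since the ISO 8601 strings in the data file parse directly (a sketch; the function name is illustrative):

```typescript
// Is the test window active at the given moment?
// ISO 8601 strings parse via the Date constructor, and Date objects
// compare by their underlying timestamps.
function isTestRunning(startDate: string, endDate: string, now: Date = new Date()): boolean {
  return new Date(startDate) <= now && now <= new Date(endDate)
}
```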

Handling first visits and returning visitors

Next, we need to distinguish between new and returning visitors, as we always want to display the same variant of the page to a returning visitor. We achieve that by randomly selecting one of the variants and assigning a cookie that holds the chosen variant's identifier:

const cookieName = `ab-test.${abTest.id}`
let cookie = request.cookies.get(cookieName)?.value

if (!cookie) {
	// new visitor – roll a number between 0 and 100
	let n = Math.random() * 100

	if (n < abTest.originalWeight) {
		cookie = '0'
	} else {
		n -= abTest.originalWeight
		const variant = abTest.variants.find(v => {
			if (n < v.weight) return true
			n -= v.weight
			return false
		})

		// fall back to the original page if the weights don't add up to 100
		cookie = variant?.id ?? '0'
	}
}

Note: With Google Optimize, the original variant is always '0'.
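To see the weighted selection in isolation, here's the same logic factored into a pure function that takes the random roll as a parameter (illustrative only; the middleware above inlines this):

```typescript
type Variant = { id: string; weight: number }

// Map a roll n in [0, 100) to a variant id; '0' means the original page.
// Each variant claims a contiguous slice of the range sized by its weight.
function pickVariant(n: number, originalWeight: number, variants: Variant[]): string {
  if (n < originalWeight) return '0'
  n -= originalWeight
  for (const v of variants) {
    if (n < v.weight) return v.id
    n -= v.weight
  }
  return '0' // weights don't sum to 100 – fall back to the original
}
```

For example, with originalWeight 50 and one variant weighted 50, rolls below 50 land on the original and rolls of 50 and above land on variant '1'.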

Rerouting to the chosen page variant

Finally, when we know which variant to display, we need to reroute the request to that path. We only need to handle the extra variants, and the URL of each variant depends on where you generate them. In our case, we use a special route _abTest/[testId]/[variantId] where we generate all variants during build time:

// we only need to handle extra variants, not the original page
if (cookie !== '0') {
	// test the variant exists (cookie may have been altered by user)
	const variant = abTest.variants.find(v => v.id === cookie)
	if (variant) {
		request.nextUrl.pathname = `/_abTest/${abTest.id}/${variant.id}`
	}
}

const response = NextResponse.rewrite(request.nextUrl)
// persist the chosen variant so returning visitors keep seeing the same content
response.cookies.set(cookieName, cookie)
return response

Note: We’re using NextResponse.rewrite, so the URL of the generated variants is never shown to the visitors. They always see the original URL.

So the A/B-tested page paths can look like this:

  • Original page: /specials/cms-for-insurance-companies/
  • B variant: /_abTest/k5utcYha/1

And here’s the cookie (if the B variant is selected):

  • Cookie name: ab-test.k5utcYha
  • Cookie domain: .your-site.com
  • Cookie value: 1

Provide the content

Now, the middleware divides traffic and reroutes requests to paths based on the chosen variants. The original page is already part of your website, but we need to take a few extra steps for the extra variants. As I mentioned above, we decided to create a special path (and file) /_abTest/[testId]/[variantId].tsx where we generate the variants during the build.

As we have all the necessary data during the build, we can define all possible paths in getStaticPaths and let Next.js generate everything ahead of time:

export const getStaticPaths: GetStaticPaths = async () => {
	const abTests = await KontentService.Instance()
		.deliveryClient.items<AbTestModel>()
		.type(contentTypes.ab_test.codename)
		.elementsParameter([
			contentTypes.ab_test.elements.google_optimize_id.codename,
			contentTypes.ab_test.elements.variants.codename,
			contentTypes.ab_test_variant.elements.variant_id.codename,
		])
		.depthParameter(2)
		.toPromise()

	const paths = []
	abTests.data.items.forEach(abTest => {
		abTest.elements.variants.linkedItems.forEach(variant => {
			paths.push({
				params: {
					testId: abTest.elements.googleOptimizeId.value,
					variantId: variant.elements.variantId.value,
				},
			})
		})
	})

	return {
		paths,
		fallback: false,
	}
}

Note: We can use fallback: false as every newly published A/B test triggers a new build. If that’s not the case, consider using a different fallback strategy.

In getStaticProps, we need to implement the actual data fetching and prepare the page. As I explained in the beginning, in our case, there is only a single element that affects the visual representation of the page, so we fetch the original page and only exchange that content:

export const getStaticProps: GetStaticProps<ISimplePageProps<UmlpModel>> = async ({ params, preview }) => {
	// getting the AB test data
	const googleOptimizeId = params.testId.toString()
	const variantId = params.variantId.toString()
	...
	const abTestData = await KontentService.Instance()
		.deliveryClient.items<AbTestModel>()
		.type(contentTypes.ab_test.codename)
		.equalsFilter(`elements.${contentTypes.ab_test.elements.google_optimize_id.codename}`, googleOptimizeId)
		.depthParameter(3)
		.limitParameter(1)
		.toPromise()
	const abTest = abTestData.data.items[0]
	const pageData = abTest.elements.umlpPage.linkedItems[0]
	const variant = abTest.elements.variants.linkedItems.find(variant => variant.elements.variantId.value === variantId)

	// change original UMLP content with the variant content
	pageData.elements.content.linkedItems = variant.elements.umlpContentContent.linkedItems

	return {
		props: {
			variantId,
			pageData,
			...
		},
	}
}

We take the original page content named pageData and exchange the data in the content element with the data provided in the variant content item.

Track results

The last step of the whole A/B testing process is to track the results. This is tightly coupled with the form of conversion you have on these pages. In our case, it’s mostly forms, and we track a GTM event every time there’s a new submission:

window.dataLayer.push({
	event: '', // form submission event name
	...
})

However, the form submission logic doesn't carry any additional data about an ongoing A/B test, so we register the Google Optimize data whenever GTM becomes available, on both the original and variant pages:

gtag('event', 'optimize.activate')
gtag('set', { experiments: [{ id: experimentId, variant: variantId }] })

This ensures each registered event is now connected with a specific variant and allows you to track the performance of the A/B test.


Conclusion

In this article, I explained what steps you need to take to implement A/B testing in Next.js. I showed you how to adjust your content model, implement the server-side traffic-dividing logic, and properly handle cookies for original and variant pages. In our case, each newly published A/B test triggers a full site rebuild, as there are a few steps we always need to repeat: regenerating the A/B tests data file, adjusting the middleware and its Matcher, and generating the variant pages. This is purely for performance's sake, and the implementation can be simplified further if that's not a concern for you.

If you’re looking to get more information about A/B tests, check out the section dedicated to this topic in our docs or visit our Discord server to discuss your project with the Kontent.ai community.

Written by

Ondrej Polesny

As Developer Evangelist, I specialize in the impossible. When something is too complicated, requires too much effort, or is just simply not possible, just give me a few days and we will see :).
