Pre-rendered, server-rendered, or hybrid: Which should I use?
The web used to be a simpler place. You either served your site as static HTML documents or as dynamic, server-rendered templates with something like PHP. So what changed, and what do you need to consider when architecting modern web experiences in 2021?
There are now many different architectures and approaches to navigate when deciding how to deliver a site. Modern Jamstack frameworks have further complicated this picture with a range of different rendering options available—sometimes within a single site.
In this article, we’ll explore how each rendering method works, the advantages and disadvantages, and, most importantly, we’ll answer how you can decide which approach is right for you and your project.
We’ll start with the simplest, pre-rendered static content, followed by server rendering, then a hybrid approach between the two, before wrapping up with a bonus section on an advanced approach with Next.js’ Incremental Static Regeneration (ISR).
Pre-rendered static content
The idea behind static generation is a simple one. Rather than rendering pages for each request, pages pre-render at build time—as the site deploys. The static files generated during the build can then be pushed out to a global CDN, making static sites fast, cheap, and straightforward to host.
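To make that concrete, here’s a minimal sketch of what a pre-rendered page can look like. It assumes Next.js (which we’ll come back to later), and `fetchMarketingPage` is a hypothetical stand-in for whatever content source you use:

```js
// pages/about.js
// getStaticProps runs once at build time; the resulting HTML and JSON are
// deployed as static assets and can be served straight from a CDN.
export async function getStaticProps() {
  const page = await fetchMarketingPage('about'); // hypothetical CMS/API call
  return { props: { page } };
}

export default function About({ page }) {
  return <main>{page.body}</main>;
}
```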
There are other advantages too. With static hosting, there’s no need to worry about the security concerns related to the upkeep and maintenance of back-end application servers. Static sites are resilient to downtime if an outage occurs on an external service that the site relies on. For example, suppose a rebuild of the site encounters an error fetching data. In that case, it’ll fail, leaving the previous build in place with end-users unaffected whilst you investigate and resolve the issue.
However, there are some drawbacks to pre-rendered static sites. Pre-rendering the pages during the build is great, but the time required to do so scales linearly with the number of pages on the site. For example, if an e-commerce site has 50k products and each product page takes 250ms to render, it’d take over 3 hours just to build that section of the site.
Suppose the site relies on external sources for content (e.g., a headless CMS like Kontent.ai or a product catalog API). In that case, each change to that external content needs to trigger a rebuild of the site via webhook, further exacerbating the issue, with content editors having to wait hours to see their updated content reflected on the site.
Not only are long builds frustrating for content editors, but depending on the hosting platform, there may also be cost concerns to consider, as compute resources are inefficiently spent rebuilding the whole site for each content change.
Good for...
- Small sites that don’t require dynamic or personalized content.

Not so good for...
- Larger sites—due to lengthy build times.
Server rendering
The concept of fetching the necessary data and rendering that on the server-side at request time is not new to the web. We’ve had this mechanism for delivering websites since the first dynamic web pages from the likes of PHP (and its predecessors).
The strengths haven’t changed. Deploys are fast, the content is always up to date, and by deferring the render until request time, it’s easy to dynamically adjust the response content on the fly (e.g., personalize it), based on the context of that request.
Unfortunately, with this flexibility come tradeoffs. Whilst server rendering avoids the computational inefficiency of rebuilding the whole site for every content change, compute resources are spent re-rendering the page for every request. That’s not only slow from the end-users’ perspective but, depending on the nature of the site, can be highly inefficient. If the page content is the same for all users, then the server will unnecessarily repeat the same work, reproducing the same response for every request.
That’ll likely impact hosting costs, which are also almost certainly higher in the first place (compared to pre-rendered static sites), as there’s the need for an application server on the back end.
You can mitigate that inefficiency and improve performance by caching on a CDN (or a caching reverse proxy such as Varnish), but with that, you’re returning to a model where the site will sometimes serve stale content. Also, things quickly get complicated if you’re taking advantage of the server rendering model to personalize content.
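To keep a single frame of reference, here’s roughly what per-request server rendering looks like in Next.js. The product fetch is hypothetical, and the `Cache-Control` header shows one way to layer CDN caching on top (accepting the possibility of stale responses):

```js
// pages/products/[slug].js
// getServerSideProps runs on every request, so the response always reflects
// current pricing and availability.
export async function getServerSideProps({ params, res }) {
  const product = await fetchProduct(params.slug); // hypothetical API call

  // Optional: allow a CDN to cache the response for 60 seconds and serve
  // stale content for up to 5 minutes while it revalidates in the background.
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=60, stale-while-revalidate=300'
  );

  return { props: { product } };
}

export default function Product({ product }) {
  return (
    <main>
      {product.name}: {product.price}
    </main>
  );
}
```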
Good for...
- Sites with larger page volumes.
- Pages that require dynamic content (e.g., A/B testing or personalized content based on the end user’s identity).
- Pages that require up-to-date content (e.g., e-commerce pricing and availability).

Not so good for...
- Hosting complexity, costs, and operational overheads.
- Page speed performance (unless mitigated by caching).
The hybrid approach
For large sites, it’s often the case that specific areas of the site are suitable for pre-rendering, whilst other areas are either too dynamic or have too many pages for pre-rendering to be feasible.
Using an e-commerce site as an example, the homepage and supporting marketing pages might be pre-rendered during the builds, whilst product pages are server-rendered to keep build times snappy and ensure customers are always shown up-to-date availability and pricing.
Of course, any element of server rendering does incur the additional upkeep of back-end servers. Still, by leveraging pre-rendering for high-traffic areas, such as the homepage, this hybrid approach reduces the compute resources required for server rendering and, with that, the associated costs.
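In practice, the split is made route by route. A hypothetical Next.js layout for the e-commerce example above might look like this, with each page choosing its own data-fetching method:

```js
// pages/index.js           -> exports getStaticProps      (pre-rendered at build time)
// pages/campaigns/[id].js  -> exports getStaticProps      (pre-rendered at build time)
// pages/products/[slug].js -> exports getServerSideProps  (rendered on every request)
```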
Good for...
- A best-of-both-worlds approach—where some areas of a larger site are still suitable for pre-rendering (e.g., services pages), but other areas require server-side rendering (e.g., product pages).
- Enabling instant preview for pre-rendered pages.

Not so good for...
- Hosting complexity and operational overheads, as back-end application servers are still required here. However, the costs should be lower than a fully server-rendered approach due to the pre-rendered areas of the site.
Bonus section: Incremental Static Regeneration
Whilst not currently available from all modern frameworks, it’s worth discussing an additional rendering approach offered by Next.js—Incremental Static Regeneration (ISR).
With Next.js’ ISR, you can pre-render pages during the build process without having to pre-render every page. You can then update and even add additional pages within those areas of the site after the site has gone live.
ISR leverages a stale-while-revalidate (SWR) caching model so that if a previously rendered response exists when a request comes in, it is returned instantly in the response. Next.js will then check if that response is stale (has expired based on a prescribed time period) and, if so, regenerate the page in the background so that the following request will see the updated content.
This caching mechanism provides the performance and resilience benefits from the pre-rendered static content, but without the drawback of the lengthy build times for areas of the site with large volumes of pages.
Setting up ISR with Next.js
To set up ISR, Next.js needs three decisions from you for each URL route:
1. Which pages to pre-render at build time
You can specify that Next.js should pre-render all of the pages, a subset of the pages, or none at all. The primary factor to consider here is page volume.
If there are five pages within a services section, it probably makes sense to pre-render all of these. However, if there are 50k products, then to avoid slow build times, it makes sense not to pre-render any—or perhaps just a subset, e.g., the top 50 most popular products only, leaving the rest to generate at request time.
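In Next.js, that choice is expressed through `getStaticPaths`. Here’s a rough sketch for the 50k-product example, where only the most popular products are pre-rendered (`fetchTopProducts` is a hypothetical helper):

```js
// pages/products/[slug].js
export async function getStaticPaths() {
  const topProducts = await fetchTopProducts(50); // hypothetical: top 50 sellers

  return {
    // Only these paths are rendered at build time...
    paths: topProducts.map((product) => ({ params: { slug: product.slug } })),
    // ...everything else is generated on demand. The fallback options are
    // covered in the next section.
    fallback: 'blocking',
  };
}
```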
2. How to behave if the requested page has not yet rendered
If the page has not already been rendered when a request comes in, the `fallback` option determines what happens:
- `blocking` behaves as server rendering would: the page is rendered synchronously and returned in the same response.
- `true` serves a generic fallback version of the page template with placeholders, renders the page asynchronously in the background, and then hydrates the fallback with the full content in the browser once the render is complete.
- `false` results in a 404 for any page that wasn’t pre-rendered at build time.
Setting fallback to `true` will result in a faster time to first byte (TTFB), but `blocking` is likely the approach you want to take for public sites indexed by search engines.
3. How long to cache the rendered result before regeneration
To enable ISR, Next.js requires you to set a `revalidate` duration in seconds. This duration can be as low as one second—providing near real-time data like server rendering would—but with the enhanced performance and resilience of pre-rendered content.
Or it can be set higher if up-to-date content is less critical, or you wish to lighten the load on both the hosting and any external services.
However, you need to remember that even with an extremely short `revalidate` duration, the stale content will still be served to the next user once the revalidate period expires. So in scenarios where up-to-date content is paramount, e.g., the stock and pricing information from our product page example, ISR may not be suitable—unless supplemented with additional client-side data fetching.
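The `revalidate` window is returned from `getStaticProps` alongside the page’s props. Here’s a sketch for the product page (the sixty-second value and the `fetchProduct` helper are purely illustrative):

```js
// pages/products/[slug].js
export async function getStaticProps({ params }) {
  const product = await fetchProduct(params.slug); // hypothetical API call

  return {
    props: { product },
    // Serve the cached page, then regenerate it in the background at most
    // once every 60 seconds when new requests come in.
    revalidate: 60,
  };
}
```

For genuinely live values such as stock levels, the page could also fetch on the client after the pre-rendered response arrives, which is the kind of supplementary client-side data fetching mentioned above.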
We’re big fans of ISR, even making use of it for areas of the site that don’t have large page volumes, as the regeneration mechanic negates the need for external data sources to trigger rebuilds via webhook.
It’s important to consider API usage here, though: short revalidate durations will result in pages frequently being regenerated, which may place undesirable load and additional cost on external data sources for higher-traffic sites.
Good for...
- Areas of a site where page volumes are large enough that they would otherwise force server-side rendering. With ISR, you can deliver pre-rendered performance and resilience without incurring the lengthy build times.

Not so good for...
- Areas of the site where you need to customize the response based on the request (e.g., A/B testing or personalized content). Unlike server-side rendering, the response is rendered without the context of the current request.
- Pages with ‘live’ data (e.g., stock availability)—unless supported with additional client-side data fetching.
In summary…
If a site is small and the content is consistent between different users, then pre-rendering will result in a performant and resilient site that is easy and cheap to host.
For larger websites, pre-rendering the whole site simply isn’t viable. For sections of the site where the page volume is low, we might take a hybrid approach and pre-render those. However, we’d likely do so whilst leveraging Next.js’ ISR so that the pages will automatically regenerate rather than relying on webhooks from external sources to trigger a complete rebuild.
For areas of the site with large page volumes, we’d opt for ISR. We might pre-render the highest value pages within that section, but we’d build the rest at request time to keep the build times snappy.
Finally, we still reach for server-side rendering when we need to customize the page content based on the user (e.g., for A/B testing or authenticated content).
We are Kyan, a technology agency powered by people.