Kontent.ai MCP server

Martina Farkasova
6 minutes
AI
The Kontent.ai MCP Server connects your Kontent.ai projects with AI-powered tools (like Claude, Cursor, or VS Code) using the Model Context Protocol (MCP). It allows AI to understand and work with your project’s content and perform operations via natural language.

Understand the MCP server

The Model Context Protocol (MCP) is an open standard that defines how applications provide context to large language models (LLMs). Think of MCP as a universal connector, allowing AI models to interface with various tools and data sources through a standardized protocol. With the Kontent.ai MCP server, you can:
  • Empower AI assistants to read your Kontent.ai content model structure and create new content types, snippets, and taxonomies
  • Use natural language to query, create, and update content items and their language variants
  • Move content items through the workflow steps, publish, and unpublish content as needed
  • Accelerate content model creation by sending diagrams or descriptions that the AI interprets and implements
  • Integrate with multiple AI clients through a standard JSON configuration format
For more information on the protocol specification, visit modelcontextprotocol.io.

How to run the MCP server

Prerequisites

Before setting up the MCP server, ensure you have:
  • Node.js installed, so you can run the server with npx
  • A Management API key for your Kontent.ai project
  • The ID of the environment you want to work with

Setup options

You can run the Kontent.ai MCP Server using npx, choosing between the STDIO and Streamable HTTP (shttp) transports.
SSE transport is supported in the MCP server up to version 0.16.0; it was removed in version 0.17.0 and is no longer available.
STDIO transport: Ideal for direct integration with AI tools that support STDIO communication.
  • Shell
npx @kontent-ai/mcp-server@latest stdio
Streamable HTTP transport: Suitable for scenarios requiring a long-lived, streamable HTTP connection. The default port is 3001, but you can configure the port by setting the PORT environment variable.
  • Shell
npx @kontent-ai/mcp-server@latest shttp
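For example, to run the HTTP transport on a different port, set the PORT variable before launching (the port number here is illustrative):

```shell
# Run the Streamable HTTP transport on port 8080 instead of the default 3001
PORT=8080 npx @kontent-ai/mcp-server@latest shttp
```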

Configuration

The server requires the following environment variables (for STDIO and single-tenant HTTP modes):
Variable                 Description                                   Required
KONTENT_API_KEY          Your Kontent.ai Management API key            Yes
KONTENT_ENVIRONMENT_ID   Your environment ID                           Yes
PORT                     Port for HTTP transport (defaults to 3001)    No
Set these variables in your environment or a .env file. In multi-tenant HTTP mode, no environment variables are required.
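As an illustration, a .env file for the single-tenant setup could look like this (placeholder values):

```shell
KONTENT_API_KEY=<your-management-api-key>
KONTENT_ENVIRONMENT_ID=<your-environment-id>
# Optional; applies to the HTTP transport only
PORT=3001
```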

Transport configuration

STDIO transport

To run the server with the STDIO transport, configure your MCP client with the following JSON:
  • JSON
{
  "kontent-ai-stdio": {
    "command": "npx",
    "args": ["@kontent-ai/mcp-server@latest", "stdio"],
    "env": {
      "KONTENT_API_KEY": "<your-management-api-key>",
      "KONTENT_ENVIRONMENT_ID": "<your-environment-id>"
    }
  }
}
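Some MCP clients, such as Claude Desktop and Cursor, expect server entries nested under an mcpServers key in their configuration file; check your client’s documentation for the exact file name and location. A sketch under that assumption:

```json
{
  "mcpServers": {
    "kontent-ai-stdio": {
      "command": "npx",
      "args": ["@kontent-ai/mcp-server@latest", "stdio"],
      "env": {
        "KONTENT_API_KEY": "<your-management-api-key>",
        "KONTENT_ENVIRONMENT_ID": "<your-environment-id>"
      }
    }
  }
}
```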

Streamable HTTP transport (shttp)

When using the HTTP transport, you can choose between single-tenant mode (a simpler setup with environment variables) and multi-tenant mode (one server handling multiple environments).

Single-tenant mode:
1. Set the environment variables.
  • Shell
KONTENT_API_KEY=<management-api-key>
KONTENT_ENVIRONMENT_ID=<environment-id>
PORT=3001  # Optional; defaults to 3001
2. Start the server.
  • Shell
npx @kontent-ai/mcp-server@latest shttp
3. Configure your MCP client.
  • JSON
{
  "kontent-ai-http": {
    "url": "http://localhost:3001/mcp"
  }
}
Multi-tenant mode:
The server can run without environment variables and accept requests for multiple environments using URL path parameters and Bearer authentication. For more details, see the multi-tenant mode documentation on GitHub.

Supported operations

The Kontent.ai MCP server exposes a set of tools that represent specific operations your AI assistant or application can perform on your Kontent.ai project. These operations cover your content model, content items, and assets, enabling AI to query, create, and manage:
  • Content types and their elements
  • Content type snippets
  • Taxonomy groups and terms
  • Content items
  • Language variants
  • Assets
  • Languages
  • Workflows
You can find the full set of available operations in the MCP Server GitHub repository.

Common use cases

The examples below show how the MCP server tools empower AI assistants to provide meaningful, context-aware interactions, helping teams design, scale, and maintain content efficiently.

Understand and extend a content model

These actions help AI assistants understand the structure of the project and suggest or create new types based on your needs.
  • “What content types are already defined in this environment?”
  • “Add a new ‘Event’ content type with elements for name, date, location, and description.”
  • “Update the ‘Article’ content type to include a taxonomy element for industries.”
  • “Delete the ‘Press release’ content type.”

Reuse elements across content types with snippets

Content type snippets are ideal for maintaining consistency across content types while avoiding duplication. Assistants can define snippets once and insert them wherever needed, ensuring that updates are propagated automatically.
  • “Create an ‘SEO metadata’ content type snippet with title, description, and image elements.”
  • “List all content type snippets in this environment.”

Categorize and filter content with taxonomies

Taxonomies provide structure for organizing and filtering content. With the MCP Server, AI assistants can create and update taxonomy groups and terms programmatically.
  • “Add a taxonomy group called ‘Industries’ with terms for Healthcare, Finance, and Education.”
  • “List all taxonomy groups defined in this environment.”

Manage content items

Draft, update, and delete content using AI. AI assistants inspect which elements your items contain and which languages your project supports. They can then generate content based on a content brief or instructions you provide, and create the content item with its language variants.
  • “Create a blog post called ‘Treatments after heart surgery’ based on this content brief in these languages: English, Spanish, and French.”
  • “Update the title and summary of the English variant of the item with this ID {ID_of_the_item}.” 
  • “Delete the Spanish variant of the content item ‘Mid-century modern décor at home’.”
  • “Search all French variants of articles containing the keyword ‘avocado’.”
  • “Create a new draft version of the English variant of item {ID_of_the_item}.”

Work with multiple languages

Understanding language settings helps the assistant make localization-aware modeling decisions, adapt to localized workflows, and offer content model recommendations based on supported languages.
  • “What languages are available in this environment?”
  • “Create missing Spanish variants for all English blog posts.”

Move content through workflow steps

Workflows define the lifecycle of content from draft to review to published. AI assistants can automatically move content items between workflow steps, helping teams enforce editorial processes and keep content production on track.
  • “Move all draft blog posts to the Review step.”
  • “Schedule the Spanish variant of item {ID} to publish on 2025-09-01 at 10:00 Madrid time.”
  • “Unpublish the German variant of item {ID} tomorrow at midnight.”

Fetch what’s in your asset library

Assets like images and documents enrich your content. AI assistants can fetch assets or include them in your content items.
  • “List all images uploaded in the past week.”
  • “Get the details of asset {assetId}.”
  • “Insert the image of the forest trail {assetId} into the Autumn Hiking Tips article.”

Find and filter what you’re looking for

Filters help you narrow down content items using specific keywords and criteria. You can also search for items by describing what you mean in natural language. With the MCP Server, AI assistants can locate items and language variants based on your queries.
  • “Find unfinished articles that have ‘cardiology’ in the name.”
  • “Show recent Spanish blog posts that are assigned to Sofia Peletier in ascending order.”
  • “I’m looking for items about plants that thrive in dry environments.”