Introducing MCP support in Grafbase - The first GraphQL platform with native AI integration

Fredrik Björk, Julius de Bruijn, Benjamin Rabier
We’re excited to announce support for the Model Context Protocol (MCP), marking a major step forward in how developers and AI agents interact with GraphQL APIs.

MCP is a new protocol, launched in November 2024 by Anthropic, designed to make structured data explorable and actionable via natural language. With this release, Grafbase becomes the first GraphQL Federation platform to offer MCP out of the box - removing the need to stand up your own server, configure authentication, or fine-tune access control.

Whether you're building internal tools, enhancing observability, or exploring AI-powered developer workflows, MCP unlocks a powerful new interaction layer on top of your federated data graph. With just a few lines of configuration, you can:

  • Automatically generate MCP servers for your GraphQL APIs
  • Allow AI agents and LLMs to query your data securely and dynamically
  • Reduce the time it takes to go from question to answer—no manual query-writing required

This launch is more than just a feature: it’s our first step into the AI-native GraphQL future. And it's available now in the open-source Grafbase Gateway and CLI. Read on to learn how to enable MCP, configure access, and plug it into tools like Cursor, Windsurf, Zed, or VS Code to start experimenting with AI-assisted data exploration.

You can start the MCP server with the Grafbase CLI:

npx grafbase mcp <url>
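
For example, to expose a GraphQL API running locally (the URL here is just a placeholder; point the command at your own endpoint):

npx grafbase mcp http://localhost:4000/graphql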

The MCP server listens for requests at http://127.0.0.1:5000/mcp by default. To add it to Cursor, create a .cursor/mcp.json file in your project with the following:

{ "mcpServers": { "my-graphql-api": { "url": "http://127.0.0.1:5000/mcp" } } }

You can check whether this was correctly enabled in the Cursor settings under MCP. The Grafbase MCP server exposes three tools (a client sketch follows the list):

  • search, which returns the most relevant subset of the GraphQL schema as SDL.
  • introspect, which returns information about specific types.
  • execute, which runs GraphQL requests.
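
To make the flow concrete, here's a minimal sketch of calling these tools from a script using the official TypeScript MCP SDK. The client name, version, and the search tool's argument shape are assumptions for illustration; inspect the tool schemas returned by listTools() for the actual input format.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Connect to the local Grafbase MCP server over SSE
// (Streamable HTTP support is on the roadmap, see below).
const transport = new SSEClientTransport(new URL("http://127.0.0.1:5000/mcp"));
const client = new Client({ name: "example-client", version: "0.1.0" });
await client.connect(transport);

// List the exposed tools: search, introspect, and execute.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Hypothetical argument shape; check the inputSchema from listTools().
const result = await client.callTool({
  name: "search",
  arguments: { keywords: ["user"] },
});
console.log(result);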

Because we never send the complete GraphQL schema to the agent, these three tools work with schemas of any size. To power the search tool, we build a tantivy index on startup containing every GraphQL field and its shortest path to a root type. We use the top 5 matching fields as the starting point for the SDL and add what we estimate to be the most relevant nested types until we reach a size threshold.
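
As a rough illustration of that selection strategy, here's a toy sketch in TypeScript. This is not Grafbase's actual implementation (which is Rust on top of tantivy); the types and budget logic are assumptions to show the shape of the idea.

// Toy sketch: take the top-ranked fields, then greedily append the
// nested types estimated to be most relevant until a size budget is hit.
type FieldMatch = { score: number; sdl: string; nestedTypes: string[] };

function buildSearchResponse(
  matches: FieldMatch[],
  typeSdl: Map<string, string>, // type name -> SDL definition
  budget: number, // size threshold in characters
): string {
  // Use the top 5 matching fields as the starting point for the SDL.
  const top = [...matches].sort((a, b) => b.score - a.score).slice(0, 5);
  const parts = top.map((m) => m.sdl);
  let size = parts.reduce((n, p) => n + p.length, 0);

  // Add nested types until the size threshold is reached.
  for (const name of top.flatMap((m) => m.nestedTypes)) {
    const sdl = typeSdl.get(name);
    if (!sdl || parts.includes(sdl)) continue;
    if (size + sdl.length > budget) break;
    parts.push(sdl);
    size += sdl.length;
  }
  return parts.join("\n\n");
}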

The quality of this initial response is the most important aspect of our MCP server, and we're still improving it. Often a single search followed by execute is enough, but sometimes the agent needs to iterate a few times with execute to build the right query. This surfaces a behavior unique to agents: while a human would use search and introspect until they had found all the relevant information, agents tend to execute queries quickly, even with partial information, inventing fields along the way. To help with this, we improved the GraphQL errors we send to the LLM to include the relevant parts of the GraphQL schema, allowing the agent to iterate faster.

The Grafbase Gateway can be configured to expose an MCP endpoint with the following grafbase.toml configuration:

[mcp]
enabled = true # defaults to false
# Path at which to expose the MCP service
path = "/mcp"
# Whether mutations can be executed
execute_mutations = false

Note that while GraphQL requests made through MCP go through the configured authentication and authorization, just like direct requests, the MCP endpoint itself isn't protected yet: for now, anyone who can reach it can call its tools.

The MCP specification is evolving rapidly. Coming soon:

  • Streamable HTTP responses
  • OAuth 2.1 support for secure agent access
  • A new Chat feature to enable natural language prompts directly against your GraphQL APIs

MCP is an exciting protocol that has quickly emerged as the standard in the AI-builder community. Stay tuned for more. In the meantime, we'd love to hear your feedback, so join our Discord to share your thoughts.