Since the launch of Grafbase's edge gateway, edge resolvers, and connectors, requests to APIs connected to Grafbase have always required a round trip to the origin.
Today, we are thrilled to announce Edge Caching!
Edge caching is a technique that optimizes the performance of your GraphQL API by caching responses closer to the end-users. This caching mechanism significantly reduces overall latency and improves the speed at which data is delivered.
As Grafbase continues its mission to unify the data layer, the introduction of edge caching brings the capability to cache data from any connected API. This empowers developers to efficiently cache and retrieve data from a variety of APIs, resulting in enhanced performance and simplified data management.
To enable caching, the new @cache directive is now available. It offers the flexibility to configure caching settings at three distinct levels: global, per type, and per field, so you can fine-tune caching behavior to best suit your specific requirements.
You can configure global cache rules that apply to all types.
Here we will cache all queries:
extend schema @cache(rules: [{ maxAge: 60, types: ["Query"] }]) {
query: Query
}
You can configure caching on individual types and fields, and control the freshness of cached responses using the maxAge setting:
type Post @model @cache(maxAge: 30) {
title: String!
slug: String! @unique
content: String! @cache(maxAge: 15)
}
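For illustration, a query against this model might look like the following. The post field and by argument are assumptions based on the model above, and the more specific field-level rule on content would presumably take precedence over the type-level one:

```graphql
query {
  post(by: { slug: "hello-world" }) {
    # cached under the type-level maxAge of 30 seconds
    title
    # cached under its own field-level maxAge of 15 seconds
    content
  }
}
```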
You can also configure types to cache globally:
extend schema @cache(rules: [{ types: ["Post"], maxAge: 60 }]) {
query: Query
}
type Post @model {
title: String!
slug: String! @unique
content: String!
}
You can cache individual types that are generated by a connected OpenAPI specification. You can also tell Grafbase to serve a stale response while revalidating in the background using the staleWhileRevalidate setting:
extend schema
  @openapi(
    name: "OpenAI"
    schema: "https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml"
    headers: [
      { name: "Authorization", value: "Bearer {{ env.OPENAI_API_KEY }}" }
    ]
  )
  @cache(
    rules: [{ types: ["OpenAIQuery"], maxAge: 60, staleWhileRevalidate: 60 }]
  ) {
  query: Query
}
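The maxAge/staleWhileRevalidate pair mirrors standard HTTP caching semantics: responses are fresh for maxAge seconds, then served stale for up to staleWhileRevalidate seconds while a fresh copy is fetched. The rule above is roughly analogous to this response header:

```http
Cache-Control: public, max-age=60, stale-while-revalidate=60
```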
You can cache individual types that are generated by connected GraphQL APIs:
extend schema
  @graphql(
    name: "Contentful"
    url: "https://graphql.contentful.com/content/v1/spaces/{{ env.CONTENTFUL_SPACE_ID }}/environments/{{ env.CONTENTFUL_ENVIRONMENT }}"
    headers: [
      { name: "Authorization", value: "Bearer {{ env.CONTENTFUL_API_KEY }}" }
    ]
  )
  @cache(
    rules: [
      { types: ["ContentfulQuery"], maxAge: 60, staleWhileRevalidate: 60 }
    ]
  ) {
  query: Query
}
That's it! We're excited to launch edge caching and invite you to sign up and add caching to your existing APIs.
We'd love to hear your feedback and ideas, so join us on Discord.