The KV feature has been sunset and is no longer available. If you have any questions, please reach out to us on Discord.
Today we're excited to announce the release of Grafbase KV!
Grafbase KV offers a straightforward and speedy serverless key-value storage solution for resolvers. It boasts eventual consistency, edge deployment, and eliminates the need for configuration both locally and in production. The methods it currently supports are:
- get
- set (with metadata/time-to-live options)
- delete
- list (with prefix filtering and cursor-based pagination)
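To make the method shapes concrete, here is a minimal in-memory stand-in for the kv object. The get and set signatures follow the resolver example later in this post (get resolves to { value }, set takes a key, a value, and an options object); the list shape ({ prefix } in, { keys } out) is an assumption modeled on common KV APIs, not the documented Grafbase interface.

```typescript
// In-memory sketch of the kv context object. Method shapes marked as
// assumptions are illustrative, not the official Grafbase KV API.
type KvListResult = { keys: { name: string }[] }

function createMockKv() {
  const store = new Map<string, string>()
  return {
    // get resolves to { value: null } on a miss, matching the resolver example
    async get(key: string): Promise<{ value: string | null }> {
      return { value: store.has(key) ? store.get(key)! : null }
    },
    // set accepts an options object; ttl is ignored in this sketch
    async set(key: string, value: string, _opts?: { ttl?: number }): Promise<void> {
      store.set(key, value)
    },
    async delete(key: string): Promise<void> {
      store.delete(key)
    },
    // Assumed shape: filter keys by prefix, return them as { name } entries
    async list(opts?: { prefix?: string }): Promise<KvListResult> {
      const keys = [...store.keys()]
        .filter(k => !opts?.prefix || k.startsWith(opts.prefix))
        .map(name => ({ name }))
      return { keys }
    },
  }
}

async function demo() {
  const kv = createMockKv()
  await kv.set('user:1', 'Ada', { ttl: 60 })
  await kv.set('user:2', 'Grace')
  await kv.set('session:9', 'token')

  const { value } = await kv.get('user:1')
  const users = await kv.list({ prefix: 'user:' })
  await kv.delete('user:2')
  return { value, userCount: users.keys.length }
}
```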
Grafbase KV is experimental, and you can enable it for your project using the SDK:
import { config, graph } from '@grafbase/sdk'

const g = graph.Standalone()

export default config({
  graph: g,
  experimental: {
    kv: true,
  },
})
Once enabled, you can invoke kv from the third context argument inside Edge Resolvers:
export default async function MyResolver(_, { prompt }, { kv }) {
  // ...
}
Here's an example of storing the response for a prompt sent to OpenAI:
import { OpenAI } from 'langchain/llms/openai'

export default async function HelloResolver(_, { prompt }, { kv }) {
  const model = new OpenAI({
    modelName: 'gpt-3.5-turbo-instruct',
    openAIApiKey: process.env.OPENAI_API_KEY,
  })

  try {
    // Check the cache first; get resolves to { value: null } on a miss
    const { value } = await kv.get(prompt)

    if (value === null) {
      // Cache miss: ask the model, then store the response for 60 seconds
      const response = await model.invoke(prompt)
      await kv.set(prompt, response, { ttl: 60 })

      return response
    } else {
      // Cache hit: return the stored response without calling OpenAI
      return value
    }
  } catch (e) {
    return 'error'
  }
}
Even when working locally, we can already see the advantages of using Grafbase KV. Storing the prompt and the instruct model's response as a key-value pair significantly improves response times globally, and saves on OpenAI costs by not running the same prompt repeatedly.
You can learn more about how all of this works by exploring the documentation.
We'd love to hear your feedback and ideas, so please join us on Discord.