Deploying to AWS Lambda

The Grafbase Gateway can be deployed to AWS Lambda as a serverless function. Every release of the gateway includes a separate build specifically targeting the Lambda platform, which differs in some ways from running the gateway binary in a normal server environment. Be sure to choose the Lambda builds; the normal builds will not work on the AWS Lambda platform.

The Grafbase Lambda Gateway uses a minimal amount of memory: typical usage is below one hundred megabytes. However, the Lambda memory setting also determines how much CPU is allocated to the function, so even though the Grafbase Lambda Gateway does not need the memory, choosing a higher memory allocation can improve cold start times and general performance.

The gateway release includes two different binaries targeted for Lambda:

  • grafbase-gateway-lambda-aarch64-unknown-linux-musl is for the Amazon Graviton platform, targeting the 64-bit ARM architecture.
  • grafbase-gateway-lambda-x86_64-unknown-linux-musl targets the 64-bit x86 platform.

The binaries will not work if deployed to the wrong CPU architecture. The aarch64 version is a few megabytes smaller, so if Graviton is available in your region, the smaller size can lead to faster cold starts.
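As an illustration, a small shell helper can map an architecture string to the matching release asset name. The helper name is hypothetical, and the actual download step is left to your tooling:

```shell
#!/usr/bin/env bash
# Hypothetical helper: map a CPU architecture string (as reported by
# `uname -m` or chosen in the Lambda architecture setting) to the
# matching Grafbase Lambda Gateway release asset name.
lambda_binary_for_arch() {
  case "$1" in
    aarch64|arm64) echo "grafbase-gateway-lambda-aarch64-unknown-linux-musl" ;;
    x86_64|amd64)  echo "grafbase-gateway-lambda-x86_64-unknown-linux-musl" ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}
```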

The binaries targeted for common Linux deployments will not work in Lambda, and the Lambda binaries only work on the AWS Lambda platform itself or in a Docker image emulating it. The Lambda binaries are optimized for size over runtime speed, keeping cold start times as short as possible.

The Lambda function can be configured as needed. Gateway execution settings are handled through environment variables:

  • GRAFBASE_CONFIG_PATH sets the path to the Grafbase configuration file. The default value is ./grafbase.toml, which looks for the configuration in the Lambda root directory.
  • GRAFBASE_SCHEMA_PATH is the path to the federated schema. The default value is ./federated.graphql, again looking for the file in the Lambda root directory.
  • GRAFBASE_LOG sets the log level. Allowed values: error, warn, info, debug, trace or off. The default is info.
  • GRAFBASE_LOG_STYLE sets the style of the log messages: either text or json. The default is text.
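For illustration, the same variables expressed as shell exports. In practice you would set them in the Lambda function configuration through the console, the AWS CLI, or your infrastructure-as-code tooling, not in a shell; the values here are examples only:

```shell
# Illustrative values; the defaults listed above apply when a variable
# is left unset.
export GRAFBASE_CONFIG_PATH=./grafbase.toml
export GRAFBASE_SCHEMA_PATH=./federated.graphql
export GRAFBASE_LOG=debug
export GRAFBASE_LOG_STYLE=json
```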

A common deployment strategy for the Grafbase Lambda Gateway is to package it together with the gateway configuration and the federated graph. In contrast to a full server, a Lambda function is frozen when not serving traffic, which means the gateway cannot run a graph updater in the background. Therefore, after successfully publishing a new subgraph, the deployment should execute the following steps:

  • Download the preferred version of the Grafbase Lambda Gateway
  • Rename the binary to bootstrap
  • Copy the configuration to the same directory
  • Get the latest federated graph from the Grafbase API
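The assembly part of these steps can be sketched as a shell function. It assumes the gateway binary has already been downloaded and the federated graph already fetched into the working directory; the function name is purely illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical packaging helper: assembles the Lambda deployment directory.
# Assumes the gateway binary, grafbase.toml, and federated.graphql are
# already present in the working directory.
package_lambda() {
  local binary="$1" deploy_dir="${2:-deploy}"
  mkdir -p "$deploy_dir"
  cp "$binary" "$deploy_dir/bootstrap"   # Lambda executes the file named bootstrap
  chmod +x "$deploy_dir/bootstrap"
  cp grafbase.toml "$deploy_dir/"        # gateway configuration
  cp federated.graphql "$deploy_dir/"    # federated graph from the Grafbase API
}
```

Zip the contents of the resulting directory and upload the archive as the function package.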

A deployment should contain the following files:

```
.
├── bootstrap
├── grafbase.toml
└── federated.graphql
```

Continue deploying in your preferred way, following the AWS documentation.

The Grafbase Lambda Gateway is configured with the same configuration file as the binary, with the following changes:

  • The network settings are not respected. The listen address and port are defined by AWS.
  • Transport layer security is handled by AWS, so the TLS settings in the Grafbase configuration have no effect.
  • OpenTelemetry settings work almost the same, but batching has no effect in Lambda.
  • CORS can be set either from the gateway configuration or from the Lambda settings.

It is advised to use the AWS X-Ray platform for OpenTelemetry tracing in Lambda. The Grafbase Lambda Gateway propagates span IDs in the X-Ray format, connecting the trace from the Lambda invocation to the traces from the gateway.

Because the function is frozen when not serving traffic, the gateway must flush all traces before responding to a request. This adds to response times, so a good practice is to run an OTLP collector close to the Lambda function. A good way to do this is to install the AWS Distro for OpenTelemetry Collector on Amazon Elastic Compute Cloud (EC2). For best performance, the EC2 instance should be in the same region and the same virtual private cloud (VPC) as the function.

When the OpenTelemetry collector is functional, the Grafbase Lambda Gateway must be configured to send the traces to the collector:

```toml
[telemetry]
service_name = "my-federated-graph"

[telemetry.tracing]
enabled = true
sampling = 1
filter = "grafbase=info,off"

[telemetry.tracing.exporters.otlp]
enabled = true
endpoint = "http://<IP>:4317"
protocol = "grpc"
timeout = 5
```