How we built an edge-optimized Postgres connector

Julius de Bruijn, Hugo Barrigas, Benjamin Rabier

Recently we announced Postgres as our first SQL connector in Grafbase. Grafbase is a serverless GraphQL platform that runs on the edge, which makes implementing a connector for a database such as Postgres an interesting engineering challenge. Traditionally, a database runs with a fixed number of connections, which are expensive memory-wise, and is queried by a fixed number of servers, each running its own connection pool with a small number of connections to juggle between requests.

A connector in Grafbase terminology is a piece of code that exposes the underlying data source as a GraphQL schema, and that translates an incoming GraphQL query or mutation into something the underlying data source understands.

In a serverless architecture, connecting to a database has different constraints compared to a long-running server application. A V8 isolate can start and stop in a matter of milliseconds, serving only a handful of requests -- usually just one -- before shutting down. In the worst case, thousands of isolates will open connections to the database at the same time, which will not work with the typically small number of connections available.

The Grafbase platform is built in Rust, where much of the tooling for serverless Postgres either doesn't exist yet or is only available in very early library versions. In this article we'll take a look at some of our design decisions.

Adding a Postgres database is done in the Grafbase configuration:

import { config, connector, graph } from '@grafbase/sdk'

const g = graph.Standalone()

const pg = connector.Postgres('Postgres', {
  url: g.env('DATABASE_URL'),
})

g.datasource(pg)

export default config({ graph: g })

Set the DATABASE_URL environment variable pointing to a database with some tables, and run grafbase dev in the project: we introspect all the non-system schemas from the database and generate GraphQL types, queries and mutations. After less than a second of waiting, the development server is ready to respond.

Now every query and mutation is type-checked against the introspected schema, and when a GraphQL query reaches the resolver, it gets converted to SQL and executed in the database.

In a production setting, introspecting the whole database every time a V8 isolate starts would be too slow. Therefore, between deployments, the result of an introspection is serialized to a cache, so an isolate can start as fast as possible to serve the request. An SQL schema is a cyclical tree structure, meaning we must traverse not only over tables and columns, but also from a foreign key column to the table the column points to.

Creating cyclical structures in Rust without a garbage collector is one thing, but creating them in a way that is directly serializable can be surprisingly hard. How do you store a link between a table and a column so it can be traversed in both directions, and how do you structure it so that it serializes and deserializes as-is as fast as possible? A pointer doesn't round-trip when serialized, because the data structure won't end up at the same place in memory.

So if we’d model our data like this:

struct Table {
    name: String,
    columns: Vec<Rc<Column>>,
}

struct Column {
    name: String,
    table: Weak<Table>,
}

To have a reference cycle between the table and the column, we must use a reference counter. Since the reference between a table and a column is cyclical, one of the sides must use a weak pointer to avoid leaking memory. The weak pointer interface is quite cumbersome to use, and we waste resources on the smart pointers even though we don't really need them. Worst of all with this design, there is no trivial way to serialize it as JSON such that it can be read back.
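For example, just reading the table name back through the weak pointer forces every caller to handle a possibly failed upgrade; a minimal sketch against the structs above:

// Traversing from a column back to its table goes through
// Weak::upgrade, which returns None if the owning Rc has been
// dropped, so every caller has to deal with an Option.
fn table_name_of(column: &Column) -> Option<String> {
    column.table.upgrade().map(|table| table.name.clone())
}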

The structure implemented in our connector uses ideas from data-oriented design. Instead of smart pointers between the tree leaves, we store all the data in sorted flat vectors:

struct DatabaseDefinition {
    schemas: Vec<String>,
    tables: Vec<Table>,
    table_columns: Vec<TableColumn>,
}
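Because the definition is nothing but flat vectors of plain data, it serializes directly. A sketch assuming serde as the serialization layer (the article doesn't name one, so the derives are illustrative):

use serde::{Deserialize, Serialize};

// Flat vectors of plain data round-trip through any serde format
// without pointer fix-ups; Table and TableColumn would carry the
// same derives.
#[derive(Serialize, Deserialize)]
struct DatabaseDefinition {
    schemas: Vec<String>,
    tables: Vec<Table>,
    table_columns: Vec<TableColumn>,
}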

The underlying introspection queries define a specific order for the vectors: schemas in alphabetical order, tables by schema name followed by table name, and finally columns by schema name, table name and the ordinal position of the column.

Now, let’s imagine three tables:

CREATE TABLE foo (
    id SERIAL PRIMARY KEY,
    col2 TEXT,
    col3 INT
);

CREATE TABLE bar (
    id SERIAL PRIMARY KEY,
    col2 TEXT,
    col1 INET
);

CREATE TABLE omg (
    id SERIAL PRIMARY KEY,
    col4 XML,
    col5 MACADDR
);

When we store the columns to the vector, the sorting looks like this:

schema  table  column
public  bar    id
public  bar    col2
public  bar    col1
public  foo    id
public  foo    col2
public  foo    col3
public  omg    id
public  omg    col4
public  omg    col5

When looking for all the columns of table foo, we first binary search for the start of the range, and after a few jumps we are left with the following candidate range of columns:

schema  table  column
public  foo    id
public  foo    col2
public  foo    col3
public  omg    id
public  omg    col4
public  omg    col5

From here we continue with another binary search for the end of the range, finding the last column of table foo:

schema  table  column
public  foo    id
public  foo    col2
public  foo    col3

Even though the worst-case complexity is O(log n), compared to O(1) for a hash map, the data is always small enough to fit into the CPU cache, giving us a much faster lookup in practice than whatever the hashing function would cost.

Now, searching by strings is of course not great, so the next thing we do is give every schema, table and column an ID:

pub struct SchemaId(pub(crate) u32);
pub struct TableId(pub(crate) u32);
pub struct TableColumnId(pub(crate) u32);

The ID marks the position of the table or column in its corresponding vector, and it gets stored in the structure holding the data for the object. Our search suddenly gets much faster and much more cache-friendly, and it additionally fixes a bug you might have noticed earlier: if tables with the same name exist in multiple schemas, the string-based range would give a wrong result:

schema id  table id  column id
0          0         0
0          0         1
0          0         2
0          1         3
0          1         4
0          1         5
0          2         6
0          2         7
0          2         8
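To make this work, each flat row stores its parent's ID instead of relying on names. A hedged sketch of what the row types could look like (the field layout is illustrative; the real structs carry more introspected data):

// Illustrative shapes: each row knows the position of its parent
// in the corresponding vector.
pub struct Table {
    schema_id: SchemaId,
    name: String,
}

pub struct TableColumn {
    table_id: TableId,
    name: String,
}

impl TableColumn {
    // Assumes TableId derives Clone and Copy (it wraps a single u32).
    pub(crate) fn table_id(&self) -> TableId {
        self.table_id
    }
}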

Finally we need a good public API on top of the flat structures, giving us an easy and efficient way of iterating over the different database objects. We call these the “walkers”, and the general definition is very simple:

#[derive(Clone, Copy)]
pub struct Walker<'a, Id> {
    pub(super) id: Id,
    pub(super) database_definition: &'a DatabaseDefinition,
}

A cheap-to-copy structure holding a reference to the database definition and a generic ID. We can specialize it easily:

pub type TableWalker<'a> = Walker<'a, TableId>;

To fetch the introspected data, we can define a private method in the walker:

fn get(self) -> &'a Table {
    &self.database_definition.tables[self.id.0 as usize]
}

This lets us build all the methods needed to deal with the table data. We of course want to iterate over all the columns, so we use the binary searches described earlier in this chapter to find the first and last column in the sorted vector of columns, and from the vector positions we create a walker per column:

pub fn columns(self) -> impl ExactSizeIterator<Item = TableColumnWalker<'a>> + 'a {
    // Fetch the range of columns that belong to this table.
    let range = super::range_for_key(&self.database_definition.table_columns, self.id, |column| {
        column.table_id()
    });

    // Create an iterator of walkers from the column IDs.
    range.map(move |id| Walker {
        id,
        database_definition: self.database_definition,
    })
}
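The range_for_key helper above performs the two binary searches described earlier. A minimal generic sketch using the standard library's partition_point; the connector's actual helper additionally yields typed IDs instead of raw positions:

use std::ops::Range;

// Find the contiguous range of items whose key equals `key`,
// assuming `slice` is sorted by that key: one binary search for
// the first matching position, another for one-past-the-last.
fn range_for_key<T, K, F>(slice: &[T], key: K, extract: F) -> Range<usize>
where
    K: Ord,
    F: Fn(&T) -> K,
{
    let start = slice.partition_point(|item| extract(item) < key);
    let end = slice.partition_point(|item| extract(item) <= key);
    start..end
}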

The walker created in columns has a different ID type as its generic type parameter, so we define a new type alias for it:

pub type TableColumnWalker<'a> = Walker<'a, TableColumnId>;

And then, of course, we define its own set of methods for this type.
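For example, a name accessor can reuse the same get pattern we used for the table walker; a sketch, assuming the underlying TableColumn row stores a name field:

impl<'a> TableColumnWalker<'a> {
    fn get(self) -> &'a TableColumn {
        &self.database_definition.table_columns[self.id.0 as usize]
    }

    // One example accessor; the real walker exposes many more.
    pub fn name(self) -> &'a str {
        &self.get().name
    }
}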

Creating walkers from IDs is going to be a very common pattern, and we can build a generic method to “walk over them”:

impl<'a, Id> Walker<'a, Id>
where
    Id: Copy,
{
    fn walk<OtherId>(self, id: OtherId) -> Walker<'a, OtherId> {
        Walker {
            id,
            database_definition: self.database_definition,
        }
    }
}

We solved our cyclical structures by storing position indices where needed. A column holds the ID of the table it's part of, and a foreign key column knows the ID of the column it references. When we have an ID, creating a walker from it is fast and easy, the underlying structure is simple enough to serialize in any format needed, and more importantly we can read it back into the original data structure with a minimal amount of work.
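For instance, a column walker can hop back to its owning table through the generic walk method and the stored table ID; a sketch building on the pieces above:

impl<'a> TableColumnWalker<'a> {
    // Follow the TableId stored in the flat column row back to a
    // table walker; a foreign key column could resolve the column
    // it references in the same way.
    pub fn table(self) -> TableWalker<'a> {
        self.walk(self.get().table_id())
    }
}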

Because Grafbase runs at the edge, we must compile our code to WebAssembly. Many system calls are unavailable in a V8 isolate, including most of the ways to do network IO. For a long time the only choice was HTTP, and that was our choice too to get started with the connector. The kind developers of the Neon database provide an HTTP bridge with a simple enough JSON interface:

{ "query": "SELECT $1", "params": [1] }

We created a quick and dirty Docker image from the HTTP proxy, and a simple transport type that allowed us to start writing tests and features for the connector. Our first idea was to release the minimum viable product using the proxy. But this would have prevented our users from using any Postgres implementation other than Neon's, and we definitely wanted to expand the possible user base for our product.

Luckily, Cloudflare has been adding a TCP socket implementation to their worker API, which allows us -- at least on paper -- to use standard database libraries in WebAssembly. We settled on tokio-postgres, whose latest versions compile and work in Cloudflare Workers:

// Begin configuration.
let mut config = Config::new();

// Configure username, password, database as needed.
config.user("postgres");

// Connect using a Worker socket.
let socket = Socket::builder().connect("database_hostname", 5432)?;

// Inject the socket into tokio-postgres. Disable the crate-level
// TLS because the connection pool is in the same process, and it
// handles the TLS to the actual database.
let (client, connection) = config.connect_raw(socket, NoTls).await?;

// Spawn the connection part into a task, and continue
// using the client normally.
wasm_bindgen_futures::spawn_local(async move {
    if let Err(error) = connection.await {
        console_log!("connection error: {:?}", error);
    }
});

Now we can test the new connection and compare queries between the implementations. First, using HTTP:

query Neon 149ms
  sql SELECT .. OK 148ms

And by using the TCP socket:

query Neon 300ms
  sql SELECT .. OK 47ms

As we can see, the SQL query itself got three times faster, but connecting to the database now takes about 250 milliseconds before we can even send the query. To make it worse, we cannot keep an open connection in memory for subsequent requests: the isolate must be considered dead after the request is done. Every query must open a new connection to the database, so we had to find ways to make that faster.

Cloudflare recently announced a new product called Hyperdrive. It is essentially a database connection pool, in the manner of tools such as PgBouncer, and it sits in their edge platform very close to where the V8 isolates are running. Using a pool such as Hyperdrive solves two problems for us:

  • It prevents us from opening too many connections to the database. Connections to the pool are cheap, but connections from the pool to the database are very resource-intensive.
  • It sits close to the V8 isolates, keeping the connection round-trip as short as possible.

A normal Postgres connection can use multiple megabytes of memory. If every V8 isolate opens a new connection to the database, this adds up to a very expensive server bill at the end of the month; or, more commonly, the small maximum number of available connections is exhausted when the first traffic spike happens. A separate pool solves the issue by holding a static number of connections, which are handed to clients per transaction: a connection is reserved for you when you start a transaction and returned to the pool when the transaction finishes. While this solves the connection issues, it still requires extra care when querying the database, which is shown in the next chapter.

Connecting to Postgres requires quite a few round-trips for TLS negotiation and authentication. If the database server is far enough away, a considerable amount of time is spent just opening a connection.

When deploying a Grafbase project, we provision a Hyperdrive pool behind the scenes, which is used for connecting to the database instead of a direct connection. We now use the worker socket implementation, which can cache TLS connections between requests, and we have a pool very close to our V8 isolates (almost as if it ran in a Docker container on your development machine). As we can see, the connection and the query are now fast enough for production use:

query Neon 96ms
  sql SELECT .. OK 93ms

Normal database libraries, such as tokio-postgres, are built to stay running for long periods of time, caching statements and serving a few hundred different queries for weeks in a row. By using a transaction-mode connection pool such as Hyperdrive, we have to change a few things about how the driver works.

A tokio-postgres query starts by preparing a statement in the database:

PREPARE s1 (int) AS SELECT $1;

As we can see, we prepare a statement with the name s1, taking one int parameter and executing a query that selects the given parameter and returns it back to the user. A traditional database application stores this statement in a cache, from which it can be fetched for a matching query and executed with different parameters. The second phase is to execute the statement with parameters:

EXECUTE s1(1);

This gives us the response 1. Named statements are stored in the Postgres connection, which acts as a local namespace and storage for the client. When using a pool, connections are juggled between different workers, and suddenly our request holds a connection carrying data from another request, leading to a collision:

PREPARE s1 (int, int) AS SELECT $1 + $2;

This leads to an error: there is already a statement with the name s1 in the connection. We can prevent the situation by giving the statement the special name of an empty string. This makes the connection replace whatever unnamed statement it has stored with the new one. Our V8 isolates live a short time and we cannot store and cache statements between requests, so we might as well overwrite whatever is stored in the connection when we need to run the next query.

The second issue is that even with unnamed statements, the PREPARE query still costs a round-trip. The query returns information about which parameter types to use and how many parameters the statement takes. If we don't cache this data anywhere and have no way of reusing statements, preparing anything is pretty much wasted time.

For this use case, Postgres has a text mode in its protocol. It means we can just send the query and all its parameters as strings to the database. The database then parses the query and the parameters, and figures out the types for us. Of course this means the query takes a bit longer to execute, but by saving an extra round-trip we come out ahead.
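In tokio-postgres, the simple query protocol runs in text mode and skips the prepare round-trip entirely; it is exposed through simple_query. A minimal sketch; note that the parameters would have to be rendered into the SQL string beforehand, which is our assumption about the approach rather than the connector's exact code:

use tokio_postgres::SimpleQueryMessage;

// One round-trip: the SQL is sent as-is over the simple (text)
// protocol, no PREPARE is issued, and values come back as strings.
let messages = client.simple_query("SELECT 1 + 2 AS sum").await?;

for message in messages {
    if let SimpleQueryMessage::Row(row) = message {
        // Every column arrives as text in this mode.
        println!("sum = {:?}", row.get("sum"));
    }
}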

Luckily for future developers, the recent PgBouncer update adds support for prepared statements, funded by Neon. It will probably not be very useful for serverless deployments, but it makes the life of ORM developers much easier.

A typical GraphQL endpoint lets types reference other types. For an SQL database, these references are typically derived from the foreign key constraints. Let's imagine a platform for blogs: we have users, and the users can have a set of different posts they manage:

CREATE TABLE "User" ( id SERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL ); CREATE TABLE "Post" ( id SERIAL PRIMARY KEY, title VARCHAR(255) NOT NULL, content TEXT NULL, user_id INT NOT NULL, CONSTRAINT "Post_User_fkey" FOREIGN KEY (user_id) REFERENCES "User" (id) );

The constraint reads like this: every post has a parent user, marked by the user_id column in the Post table. This number has to match an id in the User table. By looking at the column definitions, we can observe that a post has exactly one user, because the id of the User table is a primary key and therefore unique by definition. Looking from the other direction, a user can have more than one post, because the user_id column is not unique.

We call this a one-to-many relation: for every user there can be more than one post.

Traditionally an ORM constructs the queries so that it first loads the user, and in a subsequent query the posts, as an array that is then added to the user object. This can mean quite a few round-trips to the database, especially with larger queries, and combining the data in memory can easily exhaust the small amount allocated per V8 isolate. What we do instead is use the JSON aggregation and lateral join capabilities of Postgres:

SELECT coalesce(json_agg("user"), '[]') AS "user"
FROM (
    SELECT row_to_json("User") AS "user"
    FROM (
        SELECT "User"."name" AS "name", blogs AS "blogs"
        FROM (
            SELECT * FROM "public"."User" WHERE "public"."User"."id" = 1
        ) AS "User"
        LEFT JOIN LATERAL (
            SELECT coalesce(json_agg(blogs2 ORDER BY "Post_id" ASC), '[]') AS blogs
            FROM (
                SELECT row_to_json(blogs1) AS blogs, blogs1."id" AS "Post_id"
                FROM (
                    SELECT "id", "title"
                    FROM "public"."Post"
                    WHERE "User"."id" = "user_id"
                    ORDER BY "id" ASC
                ) AS blogs1
                ORDER BY "Post_id" ASC
            ) AS blogs2
        ) AS "blogs" ON true
    ) AS "User"
) AS root;

A lateral join makes it possible to use results from the nested queries in the subsequent selects. Let's break this query up into pieces:

SELECT * FROM "public"."User" WHERE "public"."User"."id" = 1

We first find the user with the id of 1, aliasing the result as User in the query. Now we can run the select in the lateral join:

SELECT "id", "title" FROM "public"."Blog" WHERE "User"."id" = "user_id" ORDER BY "id" ASC

Then, using this result, the outer query converts every row to JSON with the row_to_json function, and selects the id for ordering. The SQL standard doesn't dictate a default order for a query, so we must order this outer query again with the same ordering we used in the nested one.

Moving to the next outer layer, we call json_agg to aggregate the JSON rows into a JSON array, again defining the order because the SQL standard doesn't dictate one. If we get no rows back from the nested query, we coalesce the resulting null into an empty JSON array.

Finally we select the name of the user and the aggregated JSON array blogs into the final result, convert the row into a JSON object, and aggregate all rows into a JSON array.

Every one of our queries and mutations triggers exactly one SQL query in the backend. Round-trips are expensive, and combining data in the V8 isolate can lead to high memory usage, which we cannot afford in a low-memory environment. Postgres offers robust JSON and join capabilities, which are very helpful for constructing the complete response object directly in the database.

Our MVP is already quite performant and capable for many use cases. It can be used from Grafbase CLI version 0.42.0 and Grafbase SDK version 0.8.0. We're continuing our work on Postgres and iterating on the queries available in the GraphQL API. We'd love to hear about any features we missed or that you'd like to see us implement; chat with us on our Discord. The source code for the connector can be found in our GitHub repository.
