PostgreSQL offers an extension interface, and it's my belief that Rust is a fantastic language to write extensions for it. Eric Ridge thought so too, and started pgx a while back. I've been working with him to improve the toolkit, and wanted to share one of our latest hacks: improving the generation of the extension SQL code that interfaces with Rust.
This post is more on the advanced side, as it assumes knowledge of both Rust and PostgreSQL. We'll approach topics like foreign functions, dynamic linking, procedural macros, and linkers.
Understanding the problem
pgx-based PostgreSQL extensions ship as the following:
These extension objects include a *.control file which the user defines, a *.so cdylib that contains the compiled code, and *.sql files from which PostgreSQL loads the extension's SQL entities. We'll talk about generating this SQL today!
These sql files must be generated from data within the Rust code. The SQL generator pgx provides needs to:
- Create enums defined by #[derive(PostgresEnum)] marked enums:

  ```sql
  -- src/generic_enum.rs:8
  -- custom_types::generic_enum::SomeValue
  CREATE TYPE SomeValue AS ENUM (
      'One',
      'Two',
      'Three',
      'Four',
      'Five'
  );
  ```
- Create PostgreSQL functions pointing to each #[pg_extern] marked function:

  ```sql
  -- src/complex.rs:16
  -- custom_types::complex::known_animals
  CREATE FUNCTION known_animals()
  RETURNS Animals /* custom_types::complex::Animals */
  STRICT
  LANGUAGE c /* Rust */
  AS 'MODULE_PATHNAME', 'known_animals_wrapper';
  ```
- Create PostgreSQL operators pointing to each #[pg_operator] marked function:

  ```sql
  -- src/lib.rs:15
  -- operators::my_eq
  CREATE FUNCTION my_eq(
      "left" MyType, /* operators::MyType */
      "right" MyType /* operators::MyType */
  ) RETURNS bool /* bool */
  STRICT
  LANGUAGE c /* Rust */
  AS 'MODULE_PATHNAME', 'my_eq_wrapper';

  -- src/lib.rs:15
  -- operators::my_eq
  CREATE OPERATOR = (
      PROCEDURE="my_eq",
      LEFTARG=MyType, /* operators::MyType */
      RIGHTARG=MyType /* operators::MyType */
  );
  ```
- Create 'Shell types', in & out functions, and 'Base types' for each #[derive(PostgresType)] marked type:

  ```sql
  -- src/complex.rs:10
  -- custom_types::complex::Animals
  CREATE TYPE Animals;

  -- src/complex.rs:10
  -- custom_types::complex::animals_in
  CREATE FUNCTION animals_in(
      "input" cstring /* &std::ffi::c_str::CStr */
  ) RETURNS Animals /* custom_types::complex::Animals */
  IMMUTABLE PARALLEL SAFE STRICT
  LANGUAGE c /* Rust */
  AS 'MODULE_PATHNAME', 'animals_in_wrapper';

  -- src/complex.rs:10
  -- custom_types::complex::animals_out
  CREATE FUNCTION animals_out(
      "input" Animals /* custom_types::complex::Animals */
  ) RETURNS cstring /* &std::ffi::c_str::CStr */
  IMMUTABLE PARALLEL SAFE STRICT
  LANGUAGE c /* Rust */
  AS 'MODULE_PATHNAME', 'animals_out_wrapper';

  -- src/complex.rs:10
  -- custom_types::complex::Animals
  CREATE TYPE Animals (
      INTERNALLENGTH = variable,
      INPUT = animals_in, /* custom_types::complex::animals_in */
      OUTPUT = animals_out, /* custom_types::complex::animals_out */
      STORAGE = extended
  );
  ```
- Create hash operator families for each #[derive(PostgresHash)] marked type:

  ```sql
  -- src/derived.rs:20
  -- operators::derived::Thing
  CREATE OPERATOR FAMILY Thing_hash_ops USING hash;
  CREATE OPERATOR CLASS Thing_hash_ops DEFAULT FOR TYPE Thing USING hash FAMILY Thing_hash_ops AS
      OPERATOR 1 = (Thing, Thing),
      FUNCTION 1 Thing_hash(Thing);
  ```
- Create btree operator families for each #[derive(PostgresOrd)] marked type:

  ```sql
  -- src/derived.rs:16
  -- operators::derived::Thing
  CREATE OPERATOR FAMILY Thing_btree_ops USING btree;
  CREATE OPERATOR CLASS Thing_btree_ops DEFAULT FOR TYPE Thing USING btree FAMILY Thing_btree_ops AS
      OPERATOR 1 <,
      OPERATOR 2 <=,
      OPERATOR 3 =,
      OPERATOR 4 >=,
      OPERATOR 5 >,
      FUNCTION 1 Thing_cmp(Thing, Thing);
  ```
An earlier version of cargo-pgx had a cargo pgx schema command that would read your Rust files and generate SQL corresponding to them.
This worked okay! But gosh, it's not fun to do that, and there are a lot of complications such as trying to resolve types!
So, what's more fun than parsing Rust source code and generating SQL? Parsing Rust code in procedural macros to inject metadata foreign functions, then later creating a binary which re-exports those functions via linker tricks, dynamically loads itself, calls them all to collect the metadata, and finally builds a dependency graph of it all to drive the output!
Wait... What, that was not your answer? Oh no... Well, bear with me because that's what we're doing.
So, why else should we do this other than fun?
For more than fun
- Resolving types is hard to DIY: Mapping the text of some function definition to a matching SQL type is pretty easy when you are mapping i32 to integer, but it starts to break down when you're mapping things like Array<Floofer> to Floofer[]. Array is from pgx::datum::Array, and we'd need to start reading into use statements if we were parsing code... and also, what about macros (which may create #[pg_extern]s)? ...Oops! We're a compiler!
- We can enrich our data: Instead of scanning code, in proc macros we have opportunities to enrich our data using things like core::any::TypeId, or by building up accurate Rust-to-SQL type maps.
- We don't need to care about dead files: Mysterious errors might occur if users had Rust files holding conflicting definitions, with one of those files not actually part of the source tree.
- We can handle feature flags and release/debug: While we can scan and detect features, using the build process to ensure we only work with live code means we get feature flag support, as well as release/debug mode support for free!
- Better foreign macro interaction: Scanning source trees doesn't give us the ability to interact safely with other proc macros which might create new #[pg_extern] (or similar) definitions.
Okay, did I convince you? ... No? Dangit. Oh... well, let's explore anyways!
Here are some fun ideas we pondered (and, in some cases, tried):
- Create it with build.rs: We'd be right back to code parsing like before! The build.rs of a crate is invoked before macro expansion or any type resolution, so it has the same problems, too!
- Make the macros output fragments to $OUT_DIR: We could output metadata, such as JSON files, to some $OUT_DIR instead, and have cargo pgx schema read them, but that doesn't give us the last pass where we can call core::any::TypeId, etc.
- Use rust-analyzer to inspect: This would work fine, but we couldn't depend on it directly since it's not on crates.io. We'd need to use the command line interface, and the way we thought of seemed reasonable without depending on more external tools.
- Use inventory: We could sprinkle inventory::submit! { T::new(/* ... */) } calls around our codebase, and then at runtime call inventory::iter::<T>.
  - This worked very well, but Rust 1.54 re-enabled incremental compilation and broke the functionality. Now the inventory objects could end up in a different object file and some could be missed. We could 'fix' this by using codegen-units = 1, but that was not satisfying or ideal.
- Expose a C function in the library, and call it: This would totally work, except we can't load the extension .so without also having the postgres headers around, and that's... Oof! We don't really want to make cargo-pgx depend on specific PostgreSQL headers.
But wait! It turns out, that can work! We can have the binary re-export the functions and be very careful with what we use!
The (longish) short of it
We're going to produce a binary during the build. We'll have macros output some foreign functions, then the binary will re-export and call them to build up a structure representing our extension. cargo pgx schema's job will be to orchestrate that.
Roughly, we're slipping into the build process this way:
                          ┌────────────────────────┐
                          │cargo pgx discovers     │
                          │__pgx_internal functions│
                          └───────────────────┬────┘
                                              │
                                              ▼
Build.rs ─► Macros ─► Compile ─► Link ─► Discovery ─► Generation
                 ▲                  ▲                      ▲
                 │                  │                      │
┌────────────────┴─────┐ ┌──────────┴────────┐ ┌────────────┴────────────┐
│Parse definitions.    │ │Re-export internal │ │Binary receives list,    │
│                      │ │functions to binary│ │dynamically loads itself,│
│Create __pgx_internal │ │via dynamic-list   │ │creates dependency graph,│
│metadata functions    │ │                   │ │outputs SQL              │
└──────────────────────┘ └───────────────────┘ └─────────────────────────┘
During the proc macro expansion process, we can append the proc_macro2::TokenStream with some metadata functions. For example:
```rust
// We should extend the TokenStream with something like:
#[no_mangle]
pub extern "C" fn __pgx_internals_enum_SomeValue() -> SqlGraphEntity {
    // ... build and return the entity describing this enum ...
}
```
How about functions? Same idea:

```rust
// We should extend the TokenStream with something like:
#[no_mangle]
pub extern "C" fn __pgx_internals_fn_known_animals() -> SqlGraphEntity {
    // ... build and return the entity describing this function ...
}
```
Then, in our sql-generator binary, we need to re-export them! We can do this by setting linker in .cargo/config to a custom script which includes dynamic-list:

```toml
# ...
[target.'cfg(all())']
linker = "./.cargo/pgx-linker-script.sh"
# ...
```
Note: We can't use rustflags here since it can't handle environment variables or relative paths.
In .cargo/pgx-linker-script.sh:

```bash
#! /usr/bin/env bash
# Auto-generated by pgx. You may edit this, or delete it to have a new one created.
if [[ $CARGO_BIN_NAME == "sql-generator" ]]; then
    UNAME=$(uname)
    if [[ $UNAME == "Darwin" ]]; then
        TEMP=$(mktemp pgx-XXX)
        echo "*_pgx_internals_*" > ${TEMP}
        cc -exported_symbols_list ${TEMP} $@
        rm -rf ${TEMP}
    else
        TEMP=$(mktemp pgx-XXX)
        echo "{ __pgx_internals_*; };" > ${TEMP}
        cc -Wl,-dynamic-list=${TEMP} $@
        rm -rf ${TEMP}
    fi
else
    cc $@
fi
```
We also need to ensure that the Cargo.toml has a few relevant settings:

```toml
[lib]
crate-type = ["cdylib", "rlib"]

[profile.dev]
# avoid https://github.com/rust-lang/rust/issues/50007
lto = "thin"

[profile.release]
# avoid https://github.com/rust-lang/rust/issues/50007
lto = "fat"
```
Since these functions all have a particular naming scheme, we can scan for them in cargo pgx schema like so:

```rust
let dsym_path = sql_gen_path.resolve_dsym();
let buffer = ByteView::open(dsym_path.as_deref().unwrap_or(&sql_gen_path))?;
let archive = Archive::parse(&buffer).unwrap();

let mut fns_to_call = Vec::new();
for object in archive.objects() {
    // ... collect every symbol whose name starts with `__pgx_internals` ...
}
```
This list gets passed into the binary, which dynamically loads the code using something like:

```rust
let mut entities = Vec::default();
// We *must* use this or the extension might not link in.
let control_file = __pgx_marker()?;
entities.push(SqlGraphEntity::ExtensionRoot(control_file));

unsafe {
    let lib = libloading::os::unix::Library::this();
    for symbol_to_call in fns_to_call {
        let symbol: libloading::os::unix::Symbol<unsafe extern "C" fn() -> SqlGraphEntity> =
            lib.get(symbol_to_call.as_bytes()).unwrap();
        entities.push(symbol());
    }
};
```
Then the outputs of that get passed to our PgxSql structure, something like:

```rust
let pgx_sql = PgxSql::build(
    pgx::DEFAULT_TYPEID_SQL_MAPPING.clone().into_iter(),
    entities,
).unwrap();

tracing::info!("Writing SQL");
pgx_sql.to_file(path)?;

if let Some(dot_path) = dot {
    tracing::info!("Writing Graphviz DOT");
    pgx_sql.to_dot(dot_path)?;
}
```
Since SQL is very order dependent, and Rust is largely not, our SQL generator must build up a dependency graph; we use petgraph for this. Once the graph is built, we can topologically sort from the root (the control file) and get out an ordered set of SQL entities.
Then, all we have to do is turn them into SQL and write them out!
Fitting the pieces together
Let's talk more about some of the moving parts. There are a few interacting concepts, but everything starts in a few proc macros.
Understanding syn/quote
A procedural macro in Rust can slurp up a TokenStream and output another one. My favorite way to parse a token stream is with syn; to output? Well, that's quote!

syn::Parse and quote::ToTokens do most of the work here.
syn contains a large number of predefined structures, such as syn::DeriveInput, which can be used too. Often, your structures will be a combination of several of those predefined structures.
You can call parse, parse_str, or parse_quote to create these types:

```rust
let val: syn::LitBool = syn::parse_str("true")?;
let val: syn::LitBool = syn::parse_quote!(true);
let val: syn::ItemStruct = syn::parse_quote!(struct Floof;);
```
parse_quote works along with quote::ToTokens. Its use is similar to that of the quote macro!

```rust
use quote::ToTokens;

let tokens = quote::quote! { struct Floof; };
println!("{}", tokens.to_token_stream());
// Prints:
// struct Floof ;
```
They get called by proc macro declarations. In the case of custom derives, often something like this works:
Rust's TypeIds & Type Names
Rust's standard library is full of treasures, including core::any::TypeId and core::any::type_name.
Created via TypeId::of::<T>(), a TypeId is unique for any given T.
```rust
use core::any::TypeId;

assert!(TypeId::of::<String>() == TypeId::of::<String>());
assert!(TypeId::of::<&str>() == TypeId::of::<&str>());
assert!(TypeId::of::<&str>() != TypeId::of::<String>());
```
We can use this to determine that two types are indeed the same, even if we don't have an instance of the type itself.

During the macro expansion phase, we can write out TypeId::of::<#ty>() for each type pgx interacts with (including 'known' types and user types). Later, in the build phase, these calls exist as TypeId::of::<MyType>(); then, during the binary phase, these TypeIds get evaluated and registered into a mapping so they can be queried.
core::any::type_name::<T>() is a diagnostic function available in core that makes a 'best-effort' attempt to describe the type.

```rust
assert_eq!(
    core::any::type_name::<Option<String>>(),
    "core::option::Option<alloc::string::String>",
);
```
Unlike TypeIds, which result in the same ID in any part of the code, type_name cannot promise this. From the docs:
The returned string must not be considered to be a unique identifier of a type as multiple types may map to the same type name. Similarly, there is no guarantee that all parts of a type will appear in the returned string: for example, lifetime specifiers are currently not included. In addition, the output may change between versions of the compiler.
So, type_name is only somewhat useful, but it's our best tool for inspecting the names of the types we're working with. We can't depend on it, but we can use it to infer things, leave human-friendly documentation, or provide our own diagnostics.
The above gets parsed, and new tokens are quasi-quoted atop a template like this:

```rust
#[no_mangle]
pub extern "C" fn #inventory_fn_name() -> SqlGraphEntity {
    // ...
}
```

The proc macro passes will turn that into:

```rust
#[no_mangle]
pub extern "C" fn __pgx_internals_enum_SomeValue() -> SqlGraphEntity {
    // ...
}
```
Next, let's talk about how the TypeId mapping is constructed!
Mapping Rust types to SQL
We can build a TypeId mapping of every type pgx itself has builtin support for. For example, we could do:

```rust
let mut mapping = std::collections::HashMap::new();
mapping.insert(core::any::TypeId::of::<i32>(), String::from("integer"));
```
This works fine, until we get to extension defined types. They're a bit different!
Since #[pg_extern] decorated functions can use not only some custom type T, but also other types like PgBox<T>, Option<T>, Vec<T>, or pgx::datum::Array<T>, we want to also create mappings for those. So we also need TypeIds for those... if they exist.
Types such as Vec<T> require a type to be Sized, and pgx::datum::Array<T> requires a type to implement IntoDatum. These are complications, since we can't create those mappings unless MyType implements the needed traits. Unfortunately, Rust doesn't give us the power in proc macros to do something like what the impls crate does, so we can't write:

```rust
// This doesn't work: a proc macro can't know which traits `#ty` implements.
if impls!(#ty: IntoDatum) {
    // ... emit TypeId mappings for Array<#ty> ...
}
```
Thankfully, we can use the same strategy as impls:
Inherent implementations are a higher priority than trait implementations.
First, we'll create a trait and define a blanket implementation. This lets us do <T as WithTypeIds>::ITEM_ID for any T, but the VEC_ID won't ever be populated. Next, we'll create a 'wrapper' holding only a core::marker::PhantomData<T>.
Now we can do WithSizedTypeIds::<T>::VEC_ID for any T to get the TypeId for Vec<T>, and only get Some(item) if that type is indeed sized.
Using this strategy, we can have our __pgx_internals functions build up a mapping of TypeIds and the SQL they map to.
Our pet graph
Once we have a set of SQL entities and a mapping of how different Rust types can be represented in SQL, we need to figure out how to order it all.
While use-before-definition is perfectly fine in Rust, the same is not valid in SQL.
We use a petgraph::stable_graph::StableGraph, inserting all of the SQL entities, then looping back and connecting them all together.
If an extension defines, say, a type Animals and a function known_animals returning it, we need to build an edge reflecting that the function known_animals requires the type Animals to exist. We also need edges reflecting that these entities depend on the extension root.
Building up the graph is a two step process. First, we populate it with all the SqlGraphEntity items we found. This involves adding each entity, as well as doing things like ensuring a function's return value has a node in the graph even if it's not defined by the user; something like &str is a builtin value pgx knows how to make into SQL and back.
Once the graph is fully populated, we can circle back and connect the rest of the things together! This includes connecting the arguments and returns of our #[pg_extern] marked functions to their respective types.
With the graph built, we can topologically sort it and transform the entities into their SQL representations.
Generating the SQL
In order to generate SQL for an entity, we define a ToSql trait which our SqlGraphEntity enum ("an entity corresponding to some SQL required by the extension") implements.
Then the same trait goes on the types which SqlGraphEntity wraps; here's what the implementation looks like for enums:
Other implementations, such as on functions, are a bit more complicated. These functions are tasked with determining, for example, the correct name for an argument in SQL, or the schema which contains the function.
Closing thoughts
With the toolkit pgx provides, users of Rust are able to develop PostgreSQL extensions using familiar tooling and workflows. There are still a lot of things we'd love to refine and add to the toolkit, but we think you can start using it, like we do in production at TCDI. We think it's even fun to use.
Thanks to @dtolnay for making many of the crates discussed here, as well as being such a kind and wise person to exist in the orbit of. Also to Eric Ridge for all the PostgreSQL knowledge, and TCDI for employing me to work on this exceptionally fun stuff!