CaddySQL: A brief adventure in securing Postgres the wrong way

I’m in the process of setting up a self-hosted PostgreSQL instance for use with prototypes and hobby projects, and while a database in a Docker container on the smallest available VPS in a cloud provider's dirt-cheapest region isn't exactly what I'd call "production grade", I still care about securing connections to provide at least a basic standard of trust for clients reading and/or writing data.

In an effort to simplify my life as a database user as well as administrator, I don’t want to use a self-signed certificate and have to deal with distributing it to clients and, in theory, rotating it appropriately from time to time in order to verify server identity. As an alternative, ordinarily one might set up certbot to automatically provision credentials signed by the Let’s Encrypt CA. Unfortunately, I’m hosting Forgejo on the same server with a single public IP address, so I have a Caddy server already serving HTTP traffic and therefore need to set up certbot in “webroot” mode. This is in itself straightforward, but it introduces an annoying level of architectural complexity for a goal that on its face has nothing to do with HTTP servers. Essentially, it would entail configuring a new HTTP domain in Caddy, waiting for it to provision its automatic SSL certificates for the domain, having it run a static file server, pointing certbot at the static file directory, waiting for certbot to stick some magic .well-known/ files in there, and… Wait, wait, wait: “provision automatic SSL certificates”? Hmm.

If I need Caddy running an HTTP(S) server for this domain anyways, can I just grab those credentials and skip the certbot fuss and bother that follows? Let's find out.

At this point, it’s important to note that there are multiple ways to install Caddy, and the following observations apply to the variation in which Caddy is installed as a systemd service on Debian via apt. They should generally work with other installation methods as well, with some modifications.
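For reference, the Caddy side can be as small as a single site block; db.example.com is a placeholder for your actual domain, and the static response is just there so the site serves something:

```
db.example.com {
	respond "Hello!"
}
```

Merely listing the domain is enough for Caddy to go off and obtain a certificate for it.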

I’m clearly not the first person to be curious about (ab)using Caddy SSL key provisioning, and one quickly finds that the relevant files are located in “the configured storage facility”. For the systemd installation, that translates to:

  • /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<DOMAIN NAME>/<DOMAIN NAME>.crt
  • /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<DOMAIN NAME>/<DOMAIN NAME>.key
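To sanity-check what lands there, openssl can print a certificate's subject and validity window. The sketch below generates a throwaway self-signed certificate purely so it is runnable anywhere; on the actual host you'd instead point CERT at Caddy's .crt path (likely via sudo, since the directory is owned by the caddy user):

```shell
# Throwaway cert so this sketch is self-contained; on a real host,
# set CERT to Caddy's <DOMAIN NAME>.crt path instead.
CERT=$(mktemp)
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
  -days 1 -subj "/CN=db.example.com" 2>/dev/null

# The actual inspection: who is this cert for, and when does it expire?
openssl x509 -in "$CERT" -noout -subject -enddate
```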

These need to be mounted into the Postgres Docker container in order to be useful. Initially I expected that file permissions would cause some issues mounting the credentials into the Docker container, requiring a scripted cron job to copy the files and adjust ownership appropriately whenever Caddy rotates the certificate. By a stroke of luck, however, the caddy user created by the systemd installation on the host and the postgres user in the Postgres container image are each assigned a UID of 999. File ownership is recorded by UID rather than username, so files owned by the caddy user implicitly change hands to the postgres user when viewed from inside the Docker container.
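This behavior can be seen from any shell: ownership is stored as a bare number, and the name you usually see is just whatever the local user database maps that number to. A minimal illustration (GNU stat, as found on Debian):

```shell
# Create a file and ask for its owner as a raw UID:
f=$(mktemp)
stat -c '%u' "$f"   # prints the creating user's numeric UID

# Tools like ls resolve that UID through /etc/passwd; inside a container
# with a different passwd file, the same UID resolves to a different name.
```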

[Image: the "I made this" meme format, with a stick figure labeled "Caddy" creating an ambiguous spiky object labeled ".key" and handing it to another figure labeled "Postgres", who in the following panels takes the object and then falsely claims credit for it.]

The .crt and .key files can therefore be mounted directly into the container, and as with certificate provisioning, Caddy takes care of file permissions internally: Postgres refuses to use a private key that is group- or world-accessible, but Caddy already writes its keys with 0600 permissions. All that remains is to update the Postgres configuration file to turn ssl = on and specify the locations of the ssl_cert_file and ssl_key_file.
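Concretely, the postgresql.conf additions amount to three lines, with paths matching where the files are mounted inside the container:

```
ssl = on
ssl_cert_file = '/etc/postgresql/server.crt'
ssl_key_file = '/etc/postgresql/server.key'
```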

The updated Docker Compose file takes the form:

services:
  db:
    image: "postgres:18beta3"
    command: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
    restart: "always"
    shm_size: "128mb"
    environment:
      POSTGRES_PASSWORD: "example"
    ports:
      - "5432:5432"
    volumes:
      # Database persistent storage volume mount:
      - "/mnt/HC_Volume_103234511/pgdata:/var/lib/postgresql/18/docker"
      # Config file, with SSL parameters specified:
      - "./postgresql.conf:/etc/postgresql/postgresql.conf:ro"
      # Volume mount definitions for key files:
      - "/var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<DOMAIN NAME>/<DOMAIN NAME>.key:/etc/postgresql/server.key:ro"
      - "/var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<DOMAIN NAME>/<DOMAIN NAME>.crt:/etc/postgresql/server.crt:ro"

The result?

$ psql postgresql://postgres@<DOMAIN NAME> 

SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)

It works! Postgres is perfectly happy using the HTTPS-intentioned keys to secure its traffic, and after cleaning up the relics of the experiments, stumbles, and false starts I made along the way, it does so with virtually no additional infrastructure to maintain: all that’s really been added is a simple entry in the global Caddyfile, three lines of Postgres config, and a couple of Docker volume mounts.
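As a bonus, clients can now demand full server verification without any custom CA distribution, since Let's Encrypt's root is in the operating system's trust store. A sketch, with db.example.com again standing in for the real domain (sslrootcert=system requires libpq 16 or newer):

```shell
# verify-full checks both the certificate chain and that the hostname
# matches the certificate; sslrootcert=system trusts the OS cert store.
psql "postgresql://postgres@db.example.com/postgres?sslmode=verify-full&sslrootcert=system"
```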

It remains to be seen how smoothly (or not) Postgres will pick up changes from certificate renewals, though I’m foolishly optimistic given that the docs allude to features thoughtfully designed for compatibility with automatically managed keys. Time will tell, and there will doubtless be other unforeseen bumps in the road as this database gets some real world use, but for now, I’m satisfied. My database tables are being hosted on the cheap, easy to access, and protected from prying eyes. Best of all, I get a free website out of it all, too.
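And if renewals do turn out to need a nudge, the fallback is mild: Postgres re-reads its certificate and key files on a configuration reload, no restart required. A hypothetical cron-friendly one-liner, using the db service name from the Compose file above:

```shell
# Ask the running server to reload its configuration (and SSL files):
docker compose exec db psql -U postgres -c "SELECT pg_reload_conf();"
```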