This is more of an engineering and architecture question than a specific technical inquiry. From the Fly.io article (Globally Distributed Postgres · Fly):
It is much, much faster to ship the whole HTTP request where it needs to be than it is to move the database away from an app instance and route database queries directly.
My question is: why is routing individual database queries slower than routing the whole HTTP request?
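My rough mental model of the claim, with made-up round-trip numbers (all of these are assumptions, not measurements), is that each sequential query pays the cross-region round trip, while a shipped request pays it once:

```typescript
// Back-of-envelope model of the Fly.io claim. All numbers are assumed
// placeholders, not benchmarks.
const crossRegionRttMs = 100; // assumed: edge worker <-> remote Postgres
const localQueryMs = 1;       // assumed: query when app is co-located with DB
const queriesPerRequest = 5;  // assumed: sequential queries per page render

// Edge worker querying a remote DB: every query pays the cross-region RTT.
const edgeWorkerMs = queriesPerRequest * crossRegionRttMs;

// Shipping the HTTP request to the DB's region: one cross-region hop,
// then all queries run locally.
const shippedRequestMs = crossRegionRttMs + queriesPerRequest * localQueryMs;

console.log(edgeWorkerMs, shippedRequestMs); // 500 105
```

Is that the right way to think about it, or is there more to it (connection setup, protocol chattiness, etc.)?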
A bit of context: I am trying to deploy a Cloudflare Workers-backed serverless SPA (think SvelteKit, Remix, etc.; the app logic is tightly coupled with the SSR layer rather than living in a separate backend API microservice). The database layer would be either Supabase/Nhost or a Cloudflare VPN tunnel into a Fly.io Postgres instance. My question is: would it be faster to skip the serverless-backend architecture entirely and put everything on a single Fly.io node, with Cloudflare acting as a dumb CDN? I would prefer serverless because of the lower DevOps overhead, but Fly.io's blog made me question the wisdom of a distributed-serverless + centralised-database architecture. The app is a CRUD app, balanced between reads and writes.
I know there are serverless-friendly databases like CockroachDB/Spanner/PlanetScale, but they are out of my budget for my use case.
Most single-region database solutions like Supabase expose their database connection over HTTPS/TLS; how much latency would this add in practice for a serverless app? Is a Cloudflare VPN tunnel into Postgres a better option? A Cloudflare VPN + Postgres setup would require two containers: the Postgres DB plus another container running the Postgres connection pool and the VPN. That makes three network hops: user <-> Cloudflare <-> pool+VPN container <-> database container.
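To make my latency worry concrete, here is how I am modelling the three-hop path, again with assumed per-hop numbers and assuming a TLS 1.3-style handshake costs roughly two extra round trips on a cold connection:

```typescript
// Rough model of the three-hop path in the question:
// user <-> Cloudflare <-> pool+VPN container <-> Postgres container.
// All numbers are assumptions, not measurements.
const userToEdgeMs = 20; // assumed: user -> nearest Cloudflare PoP
const edgeToPoolMs = 50; // assumed: worker -> Fly.io pool/VPN container
const poolToDbMs = 1;    // assumed: container-to-container hop on Fly.io

// With a warm pooled connection, a query pays each hop's round trip once.
const warmQueryMs = userToEdgeMs + edgeToPoolMs + poolToDbMs;

// A cold connection additionally pays TCP + TLS setup (~2 extra round
// trips assumed) on the worker <-> pool leg before the first query.
const handshakeRtts = 2;
const coldQueryMs = warmQueryMs + handshakeRtts * edgeToPoolMs;

console.log(warmQueryMs, coldQueryMs); // 71 171
```

If this model is roughly right, the pooler's job is mostly to keep connections warm so that the handshake cost is paid rarely; I would love to know whether that matches real-world numbers.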