Mikael Lirbank —
Independent consultant. I build robust, high-quality software systems. Get in touch.

Liberate yourself from infrastructure over-planning

Conventional wisdom says your backend and database should be on the same cloud provider. Crossing provider boundaries means crossing the internet, and crossing the internet means slow. It's a reasonable assumption. But if it's wrong, teams are giving up optionality for nothing: the freedom to make the right decision today, instead of trying to forecast what they'll need down the road.

I was migrating a client application to Cloudflare Workers. The database was Postgres on AWS. There's no managed Postgres on Cloudflare's infrastructure, so if you want Postgres with Workers, cross-provider is the only option. That's supposed to be slow.

But is cross-provider slow because of the distance, or because of the internet? AWS and Cloudflare both have data centers in Ashburn. Distance is zero — does the internet hop alone actually matter?

So instead of switching platforms based on hearsay, I benchmarked it.

The setup

I measured server-side response time. Worker-to-database, not client-to-server.

Every configuration hit the same Postgres database in us-east-1 (AWS) and ran the same queries. I varied three things: where the code runs, which driver it uses, and how it connects.

Three deployment targets:

| Target | Provider | Location | Role |
| --- | --- | --- | --- |
| Cloudflare Workers | Cloudflare | San Jose | Far (via internet) |
| Cloudflare Workers | Cloudflare | Ashburn | Near (via internet) |
| Vercel function | AWS | Ashburn | Near (internal) |

Four Postgres drivers:

| Driver | npm package | Protocol | Transactions |
| --- | --- | --- | --- |
| node-postgres | pg | TCP | Interactive |
| Postgres.js | postgres | TCP | Interactive |
| neon-http | @neondatabase/serverless | HTTP | Batched only |
| neon-ws | @neondatabase/serverless | WebSocket | Interactive |

Three connection strategies:

| Connection | Description |
| --- | --- |
| unpooled | Direct to Postgres |
| pooled | Through Neon's PgBouncer |
| hyperdrive | Cloudflare's connection pooler (Cloudflare Workers only) |

Each configuration ran 25 iterations, with 2 warmup iterations. Two configuration choices dominate the results: where the worker runs, and which driver it uses.
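The measurement loop itself is simple: discard a couple of warmup runs, then time each iteration. A self-contained sketch of that harness, where `runQuery` is a stub standing in for a real driver call (the real benchmark would open a connection, or reuse a pooled one, and run the transaction under test):

```typescript
// Times an async operation the way the benchmark does: a few warmup runs
// that are discarded, then N measured iterations.
async function benchmark(
  runQuery: () => Promise<unknown>,
  iterations = 25,
  warmup = 2,
): Promise<number[]> {
  for (let i = 0; i < warmup; i++) await runQuery(); // warm caches/connections
  const timings: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await runQuery();
    timings.push(performance.now() - start);
  }
  return timings;
}

// Stub query: resolves after ~1ms, standing in for a Postgres round-trip.
const fakeQuery = () => new Promise((resolve) => setTimeout(resolve, 1));
```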

Where the workers run

The database is in Virginia. If the worker runs in California, 4,000 km away, every query crosses the country. If the worker runs in Virginia, the queries stay local. Same Cloudflare infrastructure, same internet hop to AWS, different distance.

The difference is not subtle.

Transaction latency by driver and distance — pooled connection

| Driver | Far (San Jose) | Near (Ashburn) | Improvement |
| --- | --- | --- | --- |
| node-postgres | 742ms | 31ms | 24x |
| Postgres.js | 880ms | 38ms | 23x |
| neon-ws | 537ms | 40ms | 13x |
| neon-http | 87ms | 12ms | 7x |

Summary statistics · Raw data

For the TCP drivers, a 23–24x improvement from geographic proximity alone. Each round-trip in the transaction multiplies the cross-country penalty.
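That multiplication can be sketched numerically. In this model the ~70ms cross-country round-trip time and the ~10 round-trip count (TCP handshake, TLS, Postgres auth, then BEGIN/queries/COMMIT in a fresh serverless invocation) are rough illustrative assumptions, not measured values:

```typescript
// Illustrative model: each sequential round-trip to the database pays the
// full network RTT, so total latency scales linearly with round-trip count.
function modelLatency(rttMs: number, roundTrips: number): number {
  return rttMs * roundTrips;
}

// Cross-country, an interactive transaction can easily burn ~10 sequential
// round-trips; at ~70ms per RTT that lands in the right order of magnitude.
const far = modelLatency(70, 10); // 700ms
const near = modelLatency(1, 10); // 10ms when the RTT is ~1ms
```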

Every cloud provider correctly tells you to put compute and database in the same region. They're right. That's the 23x difference. But that improvement came from geographic proximity alone. The worker is still on Cloudflare, the database is still on AWS. The traffic still crosses the internet. If the region matters this much, how much does the provider matter?

Which driver to use

If you don't need interactive transactions, neon-http is the fastest option. It sends queries as single HTTP requests. With the worker near the database, queries hit 12ms. But neon-http only supports batched transactions — you can't read a row, make a decision based on it, then write within the same transaction. If your app needs that, neon-http is out. The remaining options are TCP and WebSocket drivers.
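The distinction shows up directly in code. With an interactive driver you can run application logic between statements; with neon-http every statement must be known up front. A sketch of the two shapes, where `query` is a stub standing in for a real driver call (pg, Postgres.js, or neon-ws would issue these over one connection), and the interpolated SQL is for illustration only — real code would use parameters:

```typescript
// Stub driver call returning a canned row; a real driver would hit Postgres.
async function query(_text: string): Promise<{ balance: number }> {
  return { balance: 100 };
}

// Interactive transaction: application logic runs between statements.
// This is the pattern neon-http cannot express.
async function withdrawInteractive(amount: number): Promise<boolean> {
  await query("BEGIN");
  const row = await query("SELECT balance FROM accounts WHERE id = 1 FOR UPDATE");
  if (row.balance < amount) {
    await query("ROLLBACK"); // decision made mid-transaction, based on the read
    return false;
  }
  await query(`UPDATE accounts SET balance = balance - ${amount} WHERE id = 1`);
  await query("COMMIT");
  return true;
}

// Batched transaction (the neon-http shape): all statements fixed up front,
// so the read-then-decide logic has to be pushed into the SQL itself.
function withdrawBatched(amount: number): string[] {
  return [
    `UPDATE accounts SET balance = balance - ${amount}
       WHERE id = 1 AND balance >= ${amount}`,
  ];
}
```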

Transaction latency by driver and connection — near the database

| Driver | Connection | Time |
| --- | --- | --- |
| node-postgres | hyperdrive | 27ms |
| node-postgres | pooled | 31ms |
| Postgres.js | hyperdrive | 34ms |
| node-postgres | unpooled | 36ms |
| Postgres.js | pooled | 38ms |
| Postgres.js | unpooled | 39ms |
| neon-ws | pooled | 40ms |
| neon-ws | unpooled | 45ms |

Summary statistics · Raw data

Hyperdrive maintains pre-warmed connections, so the worker skips connection setup on each invocation. That advantage shrinks with more round-trips. Connection setup is a smaller share of the total when you're doing four round-trips instead of one.
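For context, Hyperdrive is exposed to a Worker as a binding that hands the driver a connection string pointing at the pooler instead of at Postgres directly. A minimal sketch of that shape — the binding name `HYPERDRIVE` comes from the Worker's configuration and is an assumption here, and `connect` is a stub standing in for a real driver such as node-postgres:

```typescript
// The shape of Cloudflare's Hyperdrive binding: a connection string that
// points at the pre-warmed pooler rather than directly at Postgres.
interface Hyperdrive { connectionString: string }
interface Env { HYPERDRIVE: Hyperdrive }

// Stub driver: a real Worker would hand the string to pg or Postgres.js.
async function connect(connectionString: string) {
  return {
    query: async (text: string) => `${text} via ${connectionString}`,
  };
}

const worker = {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const db = await connect(env.HYPERDRIVE.connectionString);
    return new Response(await db.query("SELECT 1"));
  },
}; // in a real Worker: `export default worker`
```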

Variance matters too. The TCP drivers are consistent — p50 and average stay close. neon-ws is less predictable: transactions show a p50 of 40ms but an average of 49ms, with spikes past 120ms. For user-facing APIs, predictability matters as much as the median. The summary statistics include stddev and range for every configuration.
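Those spread numbers are ordinary summary statistics over the per-iteration timings. A self-contained sketch of how p50, average, and stddev fall out of a sample array — the sample values below are made up to show the effect, not benchmark data:

```typescript
// Summary statistics over a list of per-iteration timings (milliseconds).
function summarize(samples: number[]) {
  const sorted = [...samples].sort((a, b) => a - b);
  const p50 = sorted[Math.floor(sorted.length / 2)]; // median
  const avg = samples.reduce((s, x) => s + x, 0) / samples.length;
  const variance =
    samples.reduce((s, x) => s + (x - avg) ** 2, 0) / samples.length;
  return { p50, avg, stddev: Math.sqrt(variance) };
}

// A spiky distribution: the median stays low while one outlier drags the
// average up — the neon-ws pattern described above.
const spiky = summarize([38, 39, 40, 40, 41, 42, 120]);
// spiky.p50 is 40, but the 120ms outlier pulls spiky.avg past 51.
```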

Does the provider matter?

The worker is close to the database, but the traffic still crosses the internet. How much does that cost compared to staying on the same internal network?

Transaction latency — internet vs internal networking

| Driver | Connection | Internet (Cloudflare) | Internal (Vercel) | Delta |
| --- | --- | --- | --- | --- |
| node-postgres | pooled | 31ms | 32ms | -1ms |
| Postgres.js | pooled | 38ms | 36ms | +2ms |
| neon-ws | pooled | 40ms | 31ms | +9ms |

Summary statistics · Raw data

For TCP drivers, the cross-provider penalty is -1ms to +2ms. In some configurations, Cloudflare is actually faster. The internet hop that was supposed to be the deal-breaker costs effectively nothing when both providers have data centers in the same metro area.

Fastest option per platform

| Driver | Internet (Cloudflare) | Internal (Vercel) | Delta |
| --- | --- | --- | --- |
| node-postgres | 27ms (hyperdrive) | 32ms (pooled) | -5ms |
| Postgres.js | 34ms (hyperdrive) | 36ms (pooled) | -2ms |
| neon-ws | 40ms (pooled) | 31ms (pooled) | +9ms |

Crossing the internet is actually 5ms faster than staying on the internal network in this benchmark.

What this means

The cross-provider penalty is effectively zero, so you can defer infrastructure decisions, break dependencies between your compute and your data, and revisit either choice when the requirements change.

A few caveats. I tested one managed Postgres provider (Neon) in one region (us-east-1). The Vercel baseline runs on Fluid Compute, their layer on top of AWS Lambda. How much that affects per-query latency versus cold starts and concurrency is unclear. Other databases, other regions, and other providers may tell a different story.


Proximity matters enormously (23x). Provider boundaries don't (0ms). That's one less constraint to plan around. Now you know.

I'm Mikael, an independent consultant based in the SF Bay Area. I build robust, high-quality software systems—get in touch.
