So you got hit with that surprise bill and now you're here researching alternatives. You're not alone - I've been tracking migration stories since Neon launched, and three things consistently drive teams away from it. None of this is from some bullshit "analysis" - it's from actual conversations with developers who got burned.
Spoiler alert: if you're here, you've probably already decided to leave. You're just looking for validation that you're not crazy for wanting predictable database bills.
The Autoscaling Bill Shock
Neon's autoscaling will screw you over when you least expect it. I watched a friend's side project get featured on Hacker News: traffic spiked, compute scaled to its max, and his bill jumped from around fifteen bucks to somewhere in the $350-420 range - he couldn't even tell me the exact number. Either way, fucking brutal.
The official pricing looks reasonable until you dig into it. Recent pricing changes bumped the Launch tier to $0.14/CU-hour with a $5 monthly minimum, and the Scale tier to $0.26/CU-hour. When your app needs consistent compute, those hours add up fast: a 2 CU instance running 24/7 is roughly 2 × 730 hours × $0.14 ≈ $204 a month on Launch tier - before storage.
Here's the thing nobody tells you: Neon's database branches are brilliant for development, but each branch's stored changes bill at $0.35/GB-month. Create a few feature branches with decent data in them and you're looking at surprise storage bills.
What actually happens with costs:
- Dev branches pile up storage costs ($0.35/GB-month each)
- Autoscaling hits during traffic spikes with no warning
- Point-in-time recovery adds $0.20/GB-month for data changes
- Connection pooling limits force compute upgrades
What we all learned the hard way: Neon saves money for truly idle apps but costs more - often 2-3x - for anything with steady traffic. The compute-hour pricing model works against you the moment your workload stops idling.
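If you want to sanity-check your own bill before it arrives, here's a back-of-envelope sketch using the Launch-tier rates quoted above. The rates, the CU size, and the branch/PITR numbers are all assumptions to swap for your own - treat it as napkin math in code, not Neon's actual billing logic.

```typescript
// Rough monthly-cost estimator using the Launch-tier rates quoted in this post.
// Every constant here is an assumption - check the current pricing page before
// trusting any number this prints.

interface Usage {
  computeUnits: number;        // CU size your endpoint settles at
  activeHoursPerMonth: number; // hours/month the endpoint is actually running
  branchStorageGb: number;     // total GB stored across all branches
  pitrChangedGb: number;       // GB of changed data kept for point-in-time recovery
}

const LAUNCH_CU_HOUR = 0.14;   // $/CU-hour (Launch tier)
const STORAGE_GB_MONTH = 0.35; // $/GB-month for branch storage
const PITR_GB_MONTH = 0.2;     // $/GB-month for PITR history

function estimateMonthlyBill(u: Usage): number {
  const compute = u.computeUnits * u.activeHoursPerMonth * LAUNCH_CU_HOUR;
  const storage = u.branchStorageGb * STORAGE_GB_MONTH;
  const pitr = u.pitrChangedGb * PITR_GB_MONTH;
  return compute + storage + pitr;
}

// A "small" app that never idles: 2 CU, 24/7, 20 GB across branches, 10 GB of PITR history.
console.log(
  estimateMonthlyBill({
    computeUnits: 2,
    activeHoursPerMonth: 730,
    branchStorageGb: 20,
    pitrChangedGb: 10,
  }).toFixed(2), // ≈ 213.40
);
```

The exact number isn't the point - the point is that compute dominates the moment your app stops idling, and that's the part you can't cap.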
You Don't Actually Own Your Database
Want superuser access? Too bad. Need a specific PostgreSQL extension that isn't on Neon's approved list? Sorry, can't help you. This managed-service approach works great until it doesn't. Other options - a managed PostgreSQL host like Railway, or just self-hosting - give you that control back.
I've seen teams hit these walls:
- Custom extensions blocked - can't install what you need
- Configuration limits - can't tune for your specific workload
- Single region deployment - can't get closer to global users
- PgBouncer constraints - connection pooling doesn't fit all patterns
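The extension wall, at least, is cheap to check before you commit to any provider. A minimal pre-flight sketch using node-postgres - the extension names below are placeholders for whatever your app actually depends on:

```typescript
// Connect to a candidate provider and see whether the extensions you depend on
// are actually installable there. Assumes node-postgres ("pg") is installed and
// DATABASE_URL points at the target instance.
import { Client } from "pg";

const REQUIRED_EXTENSIONS = ["postgis", "pg_cron", "timescaledb"]; // your list here

async function checkExtensions(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const { rows } = await client.query("SELECT name FROM pg_available_extensions");
    const available = new Set(rows.map((r) => r.name));
    for (const ext of REQUIRED_EXTENSIONS) {
      console.log(`${ext}: ${available.has(ext) ? "available" : "MISSING"}`);
    }
  } finally {
    await client.end();
  }
}

checkExtensions(process.env.DATABASE_URL!).catch(console.error);
```

Run it against Neon and against wherever you're thinking of going; the diff between the two lists is most of your migration risk on this front.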
I've seen this story play out over and over: start simple, grow complex, hit limitations, migrate away.
Cold Starts Will Ruin Your Day
Neon's 300-500ms cold starts seem fine on paper. In reality, they'll destroy user experience for anything real-time. I've built APIs where that half-second delay killed the entire feature.
Cold starts break these use cases:
- Real-time chat applications (users see "connecting..." for 500ms)
- Financial APIs with SLA requirements (you get ECONNREFUSED during cold starts - see the retry sketch after this list)
- Gaming backends where latency matters (players notice the pause)
- Any API called from mobile apps (users notice the delay and think it's broken)
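None of this makes cold starts go away. If you're stuck on scale-to-zero while you plan the move, the usual band-aid is connect-with-retry, so a cold start shows up as added latency instead of a hard error. A sketch with node-postgres - the attempt count and delays are guesses to tune for your own latency budget, not anything Neon recommends:

```typescript
// Connect-with-retry wrapper so a cold-starting endpoint surfaces as extra
// latency rather than an ECONNREFUSED thrown straight at the user.
import { Client } from "pg";

async function connectWithRetry(
  connectionString: string,
  attempts = 4,    // assumption: a handful of tries covers a 300-500ms wake-up
  delayMs = 300,   // assumption: base delay, grows linearly per attempt
): Promise<Client> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    const client = new Client({ connectionString, connectionTimeoutMillis: 2_000 });
    try {
      await client.connect();
      return client; // endpoint is awake - hand back a live client
    } catch (err) {
      lastError = err;
      await client.end().catch(() => {}); // discard the failed client
      // back off a little longer each attempt while the endpoint wakes up
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastError;
}

// Usage: const client = await connectWithRetry(process.env.DATABASE_URL!);
```

It helps, but it's still a band-aid: your p99 latency now includes someone else's wake-up time.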
How Migrations Actually Happen
Forget the three-phase corporate bullshit. Here's how teams really migrate:
- Panic: Usually triggered by a surprise bill or performance issue
- Research: Frantically google alternatives while your app is slow/expensive
- Test: Spin up a PostgreSQL instance somewhere and test your schema
- Migrate: pg_dump during low traffic and pray nothing breaks (a rough script for that step is sketched below)
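For what it's worth, here's that dump-and-restore step as a repeatable script instead of shell history. It just shells out to pg_dump/pg_restore; SOURCE_URL and TARGET_URL are placeholder env vars for the Neon connection string and the new instance, and you'll want to adjust flags for your own schema and data size before trusting it.

```typescript
// Sketch of the "pg_dump during low traffic and pray" step.
// Assumes pg_dump and pg_restore are on PATH and the env vars below are set.
import { execSync } from "node:child_process";

const SOURCE_URL = process.env.SOURCE_URL!; // Neon connection string
const TARGET_URL = process.env.TARGET_URL!; // new PostgreSQL instance
const dumpFile = `migration-${new Date().toISOString().slice(0, 10)}.dump`;

// Custom-format dump so pg_restore can work per-object and in parallel if needed.
execSync(`pg_dump --format=custom --no-owner --file=${dumpFile} "${SOURCE_URL}"`, {
  stdio: "inherit",
});

// Restore into the new instance; --clean/--if-exists drop existing objects first.
execSync(`pg_restore --clean --if-exists --no-owner --dbname="${TARGET_URL}" ${dumpFile}`, {
  stdio: "inherit",
});

console.log("Restore finished - compare row counts before flipping any connection strings.");
```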
Most successful migrations I've seen stay PostgreSQL-compatible. Teams picking MySQL (PlanetScale) or SQLite (Turso) spend weeks rewriting queries and dealing with weird edge cases. PlanetScale's own PostgreSQL migration guide shows how complex that conversion really is - they wouldn't need a detailed guide if it were simple.
The brutal truth: teams don't migrate because Neon is bad - they migrate because they outgrew what serverless PostgreSQL can handle, or got tired of unpredictable bills.