Been deploying Node.js since version 0.8, when it crashed if you breathed on it wrong. Back then it was simple as shit: SSH into some Ubuntu box, `git pull`, `forever start app.js`, and pray to whatever gods you believed in. Now? There are 50 deployment platforms and they all suck in their own special ways.
Why Deployment Became Hell
The good news: you have dozens of deployment options. The bad news: you have dozens of deployment options. Every platform promises "zero-config deployment" but somehow you'll still burn three days debugging why your app works perfectly on your laptop but shits the bed with 502s in production.
Here's what actually happens:
- Everyone dockerizes everything, then spends weeks optimizing 2GB images
- "Serverless" cold starts take 3 seconds on a "fast" platform
- Your CI/CD pipeline works perfectly until npm decides to break semver
- Kubernetes YAML files become 500-line poetry that nobody understands
The Three Ways to Deploy (And Why They'll All Disappoint You)
1. Serverless: Great Until It's Not
AWS Lambda is fucking amazing until you hit cold starts right when it matters most. That "instant scaling" bullshit becomes multi-second delays when users are trying to buy shit. Our checkout API went from being snappy to slower than molasses during traffic spikes. Wasted way too many hours debugging before realizing it wasn't our code, just Lambda being Lambda.
Pro tip: If your function hasn't been called in 5 minutes, it's cold. If you import more than 10MB of dependencies, you're looking at 1-2 second cold starts minimum.
What actually works:
- Keep functions small (< 50MB zipped)
- Use provisioned concurrency if you can afford 10x the cost
- Database connections in Lambda are a nightmare - just use HTTP APIs
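To make those three points concrete, here's a minimal sketch of a lean handler: no heavyweight imports, anything reusable sitting at module scope so warm invocations share it, and data fetched over an HTTP API instead of a pooled database connection. It assumes the nodejs18.x runtime (which ships a global `fetch`) and a hypothetical `INVENTORY_API_URL` data endpoint - swap in whatever you actually call.

```js
// Module scope runs once per cold start; warm invocations reuse everything here.
const API_BASE = process.env.INVENTORY_API_URL; // hypothetical HTTP data API

// Tiny in-memory cache that survives across warm invocations.
const cache = new Map();

export const handler = async (event) => {
  const sku = event.pathParameters?.sku;
  if (!sku) {
    return { statusCode: 400, body: JSON.stringify({ error: 'missing sku' }) };
  }

  // Hit the HTTP API instead of holding a database connection open.
  if (!cache.has(sku)) {
    const res = await fetch(`${API_BASE}/items/${sku}`);
    if (!res.ok) {
      return { statusCode: 502, body: JSON.stringify({ error: 'upstream failed' }) };
    }
    cache.set(sku, await res.json());
  }

  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(cache.get(sku)),
  };
};
```

Everything at module scope is paid for once per cold start; warm invocations only pay for the handler body.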
2. Containers: The "It Works On My Machine" Solution
Docker promises consistency but delivers complexity. Your 50MB app becomes a 500MB image because you forgot to use an Alpine base image and multi-stage builds. Then Kubernetes enters the chat with its 200-line YAML files that somehow still can't handle memory limits correctly.
Shit that's actually broken us in production:
- Docker filled up the entire fucking disk because logs weren't rotated - went from 0 to 100GB in like 2 hours from one chatty container
- Kubernetes murdered our containers because memory limits were too low for webpack builds - turns out webpack needs like 4GB just to exist
- Health checks worked perfectly locally, then failed in ECS for absolutely no goddamn reason - spent 6 hours on this
- File permissions completely fucked us because Docker runs as root but somehow the container filesystem still doesn't cooperate
If you absolutely must use containers, here's how to not completely fuck it up:
- Pin your base image versions or random security updates will break your build
- Set memory limits to 2x what you think you need
- Health checks should return 200, not crash your app when called (see the sketch after this list)
- Use a `.dockerignore` or your images will include `node_modules` from your host
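Here's a rough sketch of the health-check point using Node's core `http` module: a `/healthz` route that answers 200 cheaply and never throws, plus a SIGTERM handler so Docker or Kubernetes can drain the container instead of killing it mid-request. The `/healthz` path and the 10-second drain deadline are my assumptions, not anything ECS mandates.

```js
import http from 'node:http';

const server = http.createServer((req, res) => {
  if (req.url === '/healthz') {
    // Keep this cheap and exception-free: a throw here looks like a dead container.
    res.writeHead(200, { 'content-type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok', uptime: process.uptime() }));
  }
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end('hello\n');
});

server.listen(Number(process.env.PORT) || 3000);

// Orchestrators send SIGTERM before SIGKILL: stop accepting new connections,
// let in-flight requests finish, then exit cleanly.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
  // Hard deadline in case a connection hangs (assumed 10s; match your grace period).
  setTimeout(() => process.exit(1), 10_000).unref();
});
```

Point your ECS or Kubernetes probe at `/healthz`, and give the stop grace period at least as long as the drain deadline.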
3. Edge Computing: Fast But Weird
Cloudflare Workers run your code in 250+ locations worldwide. Sounds amazing until you realize they don't support the full Node.js API and you can't use half your npm packages. No filesystem access, no native modules, and if you need more than 128MB memory, you're shit out of luck.
What actually works at the edge:
- Simple API endpoints that transform JSON
- Authentication middleware that doesn't need databases
- Rate limiting and bot protection
- URL rewriting and redirects
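For a feel of what the first and last bullets look like in practice, here's a rough sketch in the Workers module syntax: a redirect handled entirely at the edge and a JSON endpoint that trims an upstream response. The `origin.example.com` upstream and the field names are placeholders.

```js
export default {
  async fetch(request) {
    const url = new URL(request.url);

    // URL rewriting / redirects: no origin round trip at all.
    if (url.pathname === '/old-docs') {
      return Response.redirect(`${url.origin}/docs`, 301);
    }

    // Simple JSON transform: fetch the origin, strip it down, re-serve it.
    if (url.pathname.startsWith('/api/products')) {
      const upstream = await fetch(`https://origin.example.com${url.pathname}`); // placeholder origin
      const products = await upstream.json();
      const slim = products.map(({ id, name, price }) => ({ id, name, price }));
      return new Response(JSON.stringify(slim), {
        headers: { 'content-type': 'application/json' },
      });
    }

    return new Response('not found', { status: 404 });
  },
};
```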
What doesn't work:
- Anything that needs file uploads (Worker size limits)
- Heavy npm packages that use native bindings
- Long-running computations (10-second timeout)
- Traditional database connections (use HTTP APIs instead)
Platform Categories (And My Honest Take)
PaaS (Platform-as-a-Service): The "Just Works" Option
Heroku was perfect until they killed free dynos and jacked up prices 400% like absolute cunts. Railway is the new hotness - basically Heroku but they don't hate developers or your wallet. Render is decent but their build times are slow as hell - like watching paint dry.
Use PaaS when:
- You want to deploy with `git push`
- You're prototyping and don't care about cost optimization
- Your team thinks Kubernetes is a pasta dish
IaaS: For Control Freaks
Rent a VM from AWS or DigitalOcean and do everything yourself. Hope you like SSH key management and security updates.
Use IaaS when:
- You need to install custom system packages
- Compliance requires you to control everything
- You have actual DevOps engineers (not just developers who read a Docker tutorial)
How We Got To This Mess
2009-2012: The Good Old Days
One Ubuntu server, one app, SSH access. Deploy with `git pull && pm2 restart app`. When it crashed, you knew exactly where to look. PM2 was revolutionary because it kept your app running when Node.js inevitably segfaulted.
2013-2016: Heroku Makes Everyone Lazy
Heroku showed us `git push heroku main` and we thought we'd reached peak fucking civilization. Until you needed more than one dyno and suddenly your $0/month hobby app cost $50/month - highway robbery.
2017-2020: Docker Containerizes Our Pain
Docker promised "runs everywhere" but delivered "fails everywhere differently". Kubernetes entered the scene and suddenly you needed a PhD to deploy a TODO app. CI/CD became mandatory because manually deploying Docker images is masochistic.
2021-2025: Serverless Promises and Edge Complexity
Lambda cold starts completely ruined responsive apps. Everyone moved to the edge, then realized edge computing means "congrats, your database is now 3000 miles away". Deno tried to fix JavaScript deployment but just created another fucking platform to learn.
What Production Actually Demands
Performance Reality Check:
- Your API will be slow until it's not (looking at you, Lambda cold starts)
- 99.9% uptime sounds achievable until your cloud provider has a bad Tuesday
- "Auto-scaling" means your app crashes under load, then scales up perfectly
- CDNs help until you realize your API calls still hit one region
Security Theatre:
- OWASP guidelines are great, but you'll still get hacked via a dependency
- Vulnerability scanners find 500 false positives and miss the actual security hole
- Secrets management works until someone commits `.env` to GitHub (a fail-fast sketch follows this list)
- "Runtime monitoring" means getting alerts at 3 AM that something is broken
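The cheapest defense against the `.env` problem is treating the environment as the only source of secrets and refusing to boot without them. A minimal sketch - the variable names are made up, and the values are assumed to come from whatever secrets manager your platform injects them with:

```js
// config.js - secrets come from the environment, never from a committed file.
const required = ['DATABASE_URL', 'SESSION_SECRET', 'STRIPE_API_KEY']; // hypothetical names

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  // Fail loudly at boot instead of mysteriously at 3 AM.
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  process.exit(1);
}

export const config = {
  databaseUrl: process.env.DATABASE_URL,
  sessionSecret: process.env.SESSION_SECRET,
  stripeApiKey: process.env.STRIPE_API_KEY,
  port: Number(process.env.PORT) || 3000,
};
```

And add `.env` to both `.gitignore` and `.dockerignore` so it can't sneak into the repo or the image.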
Developer Experience vs Reality:
- "One-command deployment" becomes "debugging for three hours why the build failed"
- SSL certificates auto-renew until they don't, and your site goes down on Sunday
- Monitoring and logging cost more than your actual servers
- Preview environments work great until you need to test payment flows
How To Actually Choose (Spoiler: You'll Try Them All)
Start with what you know, not what's trendy:
- If you can SSH and know Linux: stick with VPS until it breaks
- If "git push to deploy" sounds amazing: use Railway or Render
- If you need global performance and have a team: consider serverless
- If you hate surprises and want predictable costs: containers on VPS
My actual decision framework after 8 years:
- Prototype: Railway or Vercel - fastest to market
- MVP with users: Add monitoring, move to something with better debugging
- Growing traffic: Migrate to dedicated containers or pay for serverless scaling
- Enterprise scale: Hire actual DevOps engineers and let them decide
The hard truth: You'll probably migrate twice. First from "easy" to "scalable," then from "scalable" to "cost-effective." Budget for it.
Buzzwords to ignore in 2025:
- WASI and WebAssembly deployment - still too experimental for real apps
- HTTP/3 for serverless - might help, might not, nobody knows yet
Focus on shipping features, not chasing the newest deployment trend.
Next up: specific platform comparisons with real numbers, not marketing bullshit.
Resources that actually help:
- Node.js Production Best Practices - Real advice from people who've been burned
- The Twelve-Factor App - Still relevant after 12 years
- Docker Node.js Best Practices - How to not make 2GB images
- AWS Lambda Node.js Guide - Official docs that don't suck