We moved 12 microservices from EKS to serverless containers thinking it would simplify our lives. It fucking didn't. Three months and fifteen grand in unexpected bills later, here's what actually happened.
AWS Fargate - The "Easy" Choice That Wasn't
Fargate promised to eliminate our Kubernetes nightmares. It just created new ones. No more node pool management, no more kubectl get nodes showing "NotReady" in the middle of the night. But Fargate's networking made us question our career choices.
The NAT Gateway Bill Shock
Got hammered with NAT Gateway charges - over $300 for what should've been a few GB of data transfer. Took us 3 days to figure out it was happening during container startup: every image pull from ECR was routing through the NAT Gateway instead of staying inside the VPC, and NAT Gateways bill per GB processed. VPC endpoints fixed it, but fuck if I know why that isn't the default.
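For anyone hitting the same bill, here's roughly what the fix looks like as a boto3 sketch. The VPC, subnet, security group, and route table IDs are placeholders; the endpoint service names are the ones ECR pulls actually need (ecr.api for auth, ecr.dkr for the docker protocol, plus an S3 gateway endpoint because image layers are served from S3):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"           # placeholder
SUBNET_IDS = ["subnet-aaa", "subnet-bbb"]  # private subnets running the Fargate tasks
SG_IDS = ["sg-ccc"]                        # must allow HTTPS (443) from those tasks

# ECR needs two interface endpoints: ecr.api (auth/metadata) and ecr.dkr (pulls).
for service in ("com.amazonaws.us-east-1.ecr.api",
                "com.amazonaws.us-east-1.ecr.dkr"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=service,
        SubnetIds=SUBNET_IDS,
        SecurityGroupIds=SG_IDS,
        PrivateDnsEnabled=True,  # so the ECR hostnames resolve to the endpoint
    )

# Image layers come from S3, so a (free) gateway endpoint is required too.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-ddd"],  # route tables of the private subnets
)
```

The interface endpoints cost a few cents per hour each, which is pocket change next to per-GB NAT processing on every container boot.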
Cold Starts Are Still Cold
Despite AWS marketing, our Spring Boot services averaged 11.3 seconds to cold start on Fargate - that's 11 seconds of angry users hitting refresh. This Stack Overflow discussion captures the frustration perfectly: you're paying more than Lambda for slower cold starts.
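If you want to measure this yourself instead of trusting my numbers, ECS exposes the relevant timestamps on each task. A rough boto3 sketch (cluster name is a placeholder; note that startedAt only marks the task reaching RUNNING - the JVM still takes its own sweet time after that):

```python
import boto3

ecs = boto3.client("ecs")
CLUSTER = "prod"  # placeholder

task_arns = ecs.list_tasks(cluster=CLUSTER, desiredStatus="RUNNING")["taskArns"]
tasks = ecs.describe_tasks(cluster=CLUSTER, tasks=task_arns)["tasks"]

for t in tasks:
    # DescribeTasks returns datetimes for each lifecycle transition.
    pull = (t["pullStoppedAt"] - t["pullStartedAt"]).total_seconds()
    total = (t["startedAt"] - t["createdAt"]).total_seconds()
    print(f"{t['taskArn'].split('/')[-1]}: image pull {pull:.1f}s, "
          f"provision-to-running {total:.1f}s")
```

In our case the image pull was a big slice of the total, which is also how we spotted the NAT Gateway problem above.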
Google Cloud Run - Great Until It Wasn't
Cloud Run looked amazing in demos. Deploy with one command, scale to zero, pay per request. Google's pricing model seemed reasonable until we hit production traffic.
The Concurrency Trap
Cloud Run defaults to 80 concurrent requests per instance. Sounds reasonable until you realize each request opens a new DB connection if your app isn't pooling. Our PostgreSQL server maxed out at 100 connections, so with 2 instances (2 x 80 = 160 potential connections) we were fucked. Lowering concurrency to 10 kept us under the cap but meant we needed 8x more instances during traffic spikes. Our bill went from $67 to $342 in one weekend. This Stack Overflow thread shows the same problem - Cloud Run's instance scaling math makes no sense for real apps.
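The fix that actually worked for us was per-instance pooling sized against the instance cap, not fighting the concurrency knob. A minimal sketch with psycopg2 - credentials and the host are placeholders, and the rule of thumb is: max instances x connections per instance <= Postgres max_connections:

```python
from psycopg2.pool import ThreadedConnectionPool

PG_MAX_CONNECTIONS = 100
MAX_INSTANCES = 8          # Cloud Run --max-instances
POOL_PER_INSTANCE = PG_MAX_CONNECTIONS // MAX_INSTANCES  # 12 connections each

# Module scope = created once per Cloud Run instance, shared by all
# concurrent requests on that instance.
pool = ThreadedConnectionPool(
    minconn=1,
    maxconn=POOL_PER_INSTANCE,
    host="10.0.0.5",       # placeholder; e.g. a Cloud SQL private IP
    dbname="app",
    user="app",
    password="secret",     # placeholder; use a secret manager in practice
)

def handle_request():
    conn = pool.getconn()  # reuse a pooled connection instead of opening one per request
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()
    finally:
        pool.putconn(conn)
```

With pooling in place we could leave concurrency at 80: requests queue briefly for a pooled connection instead of stampeding the database.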
Update - Multi-Container Support
As of May 2023, Cloud Run now supports sidecar containers. You can finally run database proxies, logging agents, and Nginx reverse proxies alongside your main app. But here's the catch - it adds complexity. Now you need to understand container dependencies, shared volumes, and network coordination. The simple "one container = one service" mental model is gone.
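To make the added complexity concrete, here's a hedged sketch of a two-container deploy using the google-cloud-run Python client (pip install google-cloud-run). The project, images, and the proxy sidecar are placeholders, not our actual setup:

```python
from google.cloud import run_v2

client = run_v2.ServicesClient()

service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[
            run_v2.Container(
                name="app",
                image="us-docker.pkg.dev/my-project/app/api:latest",  # placeholder
                ports=[run_v2.ContainerPort(container_port=8080)],    # only one container serves traffic
                depends_on=["db-proxy"],  # start order: proxy comes up first
            ),
            run_v2.Container(
                name="db-proxy",  # placeholder sidecar, e.g. a database proxy
                image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2",
            ),
        ],
    ),
)

operation = client.create_service(
    parent="projects/my-project/locations/us-central1",
    service=service,
    service_id="api-with-sidecar",
)
print(operation.result().uri)
```

Every one of those extra fields - which container gets the port, who depends on whom - is a new way to break a deploy that used to be one image and one command.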
Azure Container Apps - Microsoft's Expensive Experiment
Azure Container Apps launched in 2022 as Microsoft's "we can do serverless containers too" answer to Fargate and Cloud Run. The pricing is brutal - $62.47/month per vCPU at full utilization vs Fargate's $29.50. That's a 112% Microsoft tax for the privilege of running on Azure.
The Windows Tax
The only reason to choose Azure Container Apps is Windows containers. If you're stuck on .NET Framework (not .NET Core), this is your only serverless option. Azure cost comparison discussions consistently reach the same conclusion: Azure is significantly more expensive, and most teams avoid it for exactly that reason.
Free Tier Gotcha
Azure promises 180,000 vCPU-seconds free per month. Sounds generous until you do the math - that's 50 vCPU-hours total. Our dev environment's 1-vCPU container running 24/7 burned through the entire free tier in 2.08 days. Got a $47 surprise bill on day 3. This Stack Overflow thread shows we're not the only ones who fell for this.
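The back-of-the-envelope math, assuming a single always-on 1-vCPU container, in case you want to plug in your own numbers:

```python
FREE_VCPU_SECONDS = 180_000   # Azure Container Apps monthly free grant
VCPU = 1.0                    # vCPUs allocated to the always-on container
SECONDS_PER_DAY = 24 * 3600

free_hours = FREE_VCPU_SECONDS / 3600 / VCPU                   # 50.0 hours
days_until_billed = FREE_VCPU_SECONDS / (VCPU * SECONDS_PER_DAY)
print(f"{free_hours:.0f} free hours, exhausted in {days_until_billed:.2f} days")
# -> 50 free hours, exhausted in 2.08 days
```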
What Actually Works in Production
After burning through fifteen grand learning these platforms, here's what we'd do differently:
Fargate is good for: Steady-state workloads where you can predict costs. Use VPC endpoints and ARM instances to cut costs (there's an ARM task definition sketch after this list).
Cloud Run is good for: Spiky HTTP APIs with proper connection pooling. Use minimum instances if you need consistent latency.
Azure Container Apps is good for: Windows containers and nothing else. The Microsoft tax is real.
Skip them all for: High-throughput workloads. Regular EKS/GKE/AKS clusters are still cheaper at scale.
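On the ARM point above: switching Fargate to Graviton is one field in the task definition. A boto3 sketch with placeholder names, roles, and image (the image itself has to be built for ARM64 or be multi-arch):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="api-arm64",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",       # 0.5 vCPU
    memory="1024",   # 1 GiB
    runtimePlatform={
        "cpuArchitecture": "ARM64",        # the actual switch to Graviton pricing
        "operatingSystemFamily": "LINUX",
    },
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:arm64",  # placeholder
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)
```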