Here's how AWS gets you: they make you think you're paying for what you use, but you're actually paying for what you might use. Set a function to 512MB? You pay for 512MB even if your function sits there using 50MB waiting for a database response.
Lambda@Edge charges $0.00005001 per GB-second, which sounds like nothing until you realize you're paying for allocated memory during I/O waits. Your function spends 80% of its time waiting for external APIs? Congratulations, you're paying full freight for waiting around.
The real kicker? AWS ties CPU power to memory allocation. Give your function 128MB and it gets about 7% of a vCPU - so slow it might time out. You're forced to over-allocate memory to get decent CPU performance, then charged for memory you never use.
Production Reality Check: A function that needs 256MB of actual memory but requires decent CPU performance ends up allocated at 1GB. Duration: 3 seconds. Cost per execution: roughly $0.00015. Multiply by a million executions and you're paying about $150/month for compute that would cost under $40 if you could bill for the 256MB you actually use.
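Run the numbers yourself - here's a quick sketch using the Lambda@Edge rate quoted above (the 1GB / 3-second / one-million-invocation workload is just the example scenario, not a benchmark):

```typescript
// Sketch: Lambda@Edge duration cost = allocated GB x seconds x rate,
// billed on what you allocate, not what you actually touch.
const EDGE_RATE_PER_GB_SECOND = 0.00005001; // Lambda@Edge price quoted above

function monthlyDurationCost(allocatedMb: number, seconds: number, invocations: number): number {
  return (allocatedMb / 1024) * seconds * EDGE_RATE_PER_GB_SECOND * invocations;
}

// 1GB allocated (for the CPU), 3s duration, 1M invocations/month:
console.log(monthlyDurationCost(1024, 3, 1_000_000).toFixed(2)); // ~150.03
// The same workload if you could bill at the 256MB it actually uses:
console.log(monthlyDurationCost(256, 3, 1_000_000).toFixed(2));  // ~37.51
```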
The CloudFront Data Transfer Trap
Lambda@Edge requests go through CloudFront, and data transfer isn't free. Regional rates range from $0.085 to $0.16 per GB, which adds up fast when you're processing heavy payloads.
Worse: if your Lambda function runs in us-east-1 but processes files from S3 buckets in eu-west-1, you pay $0.09/GB just to move the data to where your function can work on it. And then you pay CloudFront rates to send responses back to users.
Gotcha Example: Image processing function that resizes photos from a European S3 bucket. Input: 500GB/month of images. Costs: $45 for cross-region data transfer + $42.50 for CloudFront delivery + compute costs. That's $87.50 in hidden transfer fees before you even start processing.
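Here's that example as a quick sketch, using the rates quoted above ($0.09/GB for the cross-region hop, $0.085/GB for CloudFront's cheapest regional tier):

```typescript
// Sketch of the hidden transfer fees in the image-resizing example above.
const CROSS_REGION_PER_GB = 0.09;   // eu-west-1 -> us-east-1 data transfer
const CLOUDFRONT_PER_GB = 0.085;    // cheapest CloudFront regional tier

function hiddenTransferFees(gbPerMonth: number) {
  const crossRegion = gbPerMonth * CROSS_REGION_PER_GB;
  const cloudfront = gbPerMonth * CLOUDFRONT_PER_GB;
  return { crossRegion, cloudfront, total: crossRegion + cloudfront };
}

console.log(hiddenTransferFees(500)); // { crossRegion: 45, cloudfront: 42.5, total: 87.5 }
```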
Vercel's Per-Seat Money Grab
Vercel's $20 per team member per month model is designed to fuck startups. Your team grows from 5 to 15 developers? Your seat bill jumps from $100 to $300 a month - an extra $200 just for platform access, before you deploy a single function.
The sneaky part: everyone with project access counts as a billable seat. Designer who needs to see staging deployments? $20/month. PM who reviews preview links? $20/month. QA engineer who tests branches? $20/month.
Real Team Scenario: 12-person startup with 8 engineers, 2 designers, 1 PM, 1 QA. Monthly seat cost: $240. For a pre-revenue startup, that's $2,880 yearly just for platform access. Meanwhile, Cloudflare Workers charges a flat $5/month regardless of team size.
The Bandwidth Overage Nightmare
Vercel includes 1TB of Fast Data Transfer per month on Pro plans, then charges usage-based rates for overages. Sounds reasonable until your Next.js app hits Reddit's front page.
Reality Check: A viral blog post with high-res images can burn through terabytes in hours. I've heard of developers getting $2,500-3,000 overage bills from a single day of viral traffic. That's not a typo - thousands of dollars because a post hit Hacker News and everyone decided to download your images.
The bandwidth counting is aggressive too. Every preview deployment burns bandwidth. Every image optimization counts against your quota. Every API request from your frontend to your serverless functions eats your allowance.
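One lever worth knowing: if you already ship pre-sized, pre-compressed images, you can opt them out of Vercel's optimizer so they stop generating image-optimization usage. A minimal sketch, assuming a Next.js version that accepts a TypeScript config file:

```typescript
// next.config.ts - minimal sketch. With `unoptimized: true`, next/image serves
// your files as-is, so pre-compressed assets stop flowing through Vercel's
// image optimizer. Transfer bandwidth is still billed either way.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    unoptimized: true, // you ship pre-sized, pre-compressed images yourself
  },
};

export default nextConfig;
```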
Preview Deployment Bandwidth Burn
Here's a gotcha they don't emphasize: preview deployments count against bandwidth limits. Active development teams can burn significant bandwidth just from branch previews and testing.
Scenario: 10 developers creating 3 preview deployments daily, each with 100MB of assets. Monthly preview bandwidth: 10 × 3 × 30 × 100MB = 90GB just from development previews. For high-asset projects, this easily hits hundreds of GB monthly.
Cloudflare Workers: The V8 Rewrite Tax
Cloudflare Workers pricing looks amazing until you try migrating existing Node.js code. The platform runs V8 isolates, not actual Node.js, which breaks tons of dependencies.
Migration Reality: Most Node.js applications require 2-6 weeks of rewrites to work with Workers. You're not just changing deployment targets - you're rewriting code to work around missing APIs like fs, child_process, and net.
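To make the rewrite concrete, here's a minimal sketch of the most common substitution: a handler that would have read a config file with fs pulling it from a KV namespace instead. The CONFIG binding and key name are placeholders; the KVNamespace type comes from @cloudflare/workers-types.

```typescript
// Workers has no fs module, so "read a file at startup" usually becomes
// "read a value from a KV namespace bound to the Worker".
interface Env {
  CONFIG: KVNamespace; // placeholder binding name
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // get() with the "json" type parses the stored value; returns null if the key is missing
    const settings = await env.CONFIG.get("app-settings", "json");
    if (!settings) {
      return new Response("config not found", { status: 500 });
    }
    return new Response(JSON.stringify(settings), {
      headers: { "content-type": "application/json" },
    });
  },
};
```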
Breaking Changes You'll Hit:
- No file system access - everything goes through fetch or KV storage
- No setTimeout for scheduling - use Durable Objects alarms instead
- Limited Node.js standard library - many packages just don't work
- 128MB memory limit - no configuration options
- Different error handling for async operations
The Real Cost: Developer time. A simple Express API that took 2 weeks to build might take 4 weeks to rewrite for Workers. At $150k average developer salary, that "cheap" migration just cost $12,000 in engineering time.
CPU Time vs Wall Clock Confusion
Workers bills for CPU milliseconds, not wall-clock time. Great for I/O-heavy workloads, terrible for CPU-intensive tasks.
The Gotcha: JSON parsing, image processing, or crypto operations eat CPU time fast. A function that takes 5 seconds wall-clock time but uses 50ms of CPU gets billed for 50ms. But heavy computation that maxes out CPU for 2 seconds gets billed for 2000ms.
Pricing Impact: $0.02 per million CPU milliseconds means CPU-heavy functions cost 10-100x more than simple API proxies. Image resizing that would cost $10 on Lambda might cost $200 on Workers.
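Here's a quick sketch of why that gap opens up, using the $0.02 per million CPU-milliseconds rate (per-request fees ignored; the per-request CPU figures are illustrative):

```typescript
// Sketch of CPU-time billing on Workers: you pay for CPU milliseconds consumed,
// not for wall-clock time spent waiting on I/O.
const RATE_PER_MILLION_CPU_MS = 0.02;

function workersCpuCost(cpuMsPerRequest: number, requests: number): number {
  return ((cpuMsPerRequest * requests) / 1_000_000) * RATE_PER_MILLION_CPU_MS;
}

// API proxy: waits ~5s on upstream calls but burns only ~5ms of CPU per request
console.log(workersCpuCost(5, 10_000_000));   // $1 for 10M requests
// Image resizing: ~500ms of CPU per request
console.log(workersCpuCost(500, 10_000_000)); // $100 for 10M requests
```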
The Hidden Operational Costs
Database Connection Pooling Nightmares
Traditional ORMs and database pools don't work in serverless environments. Every function invocation needs fresh connections, which creates connection overhead and potential database connection limit issues.
AWS Lambda Reality: A new connection per cold start takes 2-3 seconds. At roughly $0.0000063 per 128MB-second, that's around $0.000015-0.000020 per connection establishment. Doesn't sound like much until you're pushing 100 million requests a month and paying $1,500-2,000 just for database handshakes.
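The standard mitigation is to build the client once in module scope so warm invocations reuse the connection instead of re-handshaking. A minimal sketch with node-postgres (the connection string, query, and event shape are placeholders):

```typescript
// Module-scope client: runs once per container, not once per invocation.
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // one connection per concurrent Lambda container
});

export const handler = async (event: { userId: string }) => {
  // Only cold starts pay the 2-3 second handshake; warm invocations skip it.
  const { rows } = await pool.query("SELECT * FROM users WHERE id = $1", [event.userId]);
  return { statusCode: 200, body: JSON.stringify(rows[0] ?? null) };
};
```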
Vercel's Approach: Connection pooling is complex and often requires database proxy services like PlanetScale or Neon. These services have their own costs: PlanetScale charges $29/month minimum, Neon charges $19/month for production features.
Cold Start Performance Taxes
Cold starts aren't just about user experience - they cost money because you're paying for execution time during initialization.
Lambda@Edge: 2-5 second cold starts on complex functions mean you're paying for startup time on every cold invocation. For a 1GB function at $0.00005001 per GB-second, a 3-second cold start costs about $0.00015 before your code does any useful work.
Scaling Impact: 10,000 cold starts monthly is roughly $1.50 of pure initialization overhead. This compounds with traffic spikes when auto-scaling creates many new instances simultaneously.
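One partial mitigation is a scheduled keep-warm ping that keeps instances alive between real requests; the guard below short-circuits those pings so each one bills a few milliseconds instead of a full run. The { warmup: true } event shape is an assumption for this sketch:

```typescript
// Keep-warm sketch: a scheduled rule invokes the function every few minutes
// with a marker event so real traffic is more likely to hit a warm instance.
export const handler = async (event: { warmup?: boolean }) => {
  if (event.warmup) {
    return { statusCode: 204, body: "" }; // return immediately, pay almost nothing
  }

  // ...real request handling goes here...
  return { statusCode: 200, body: "ok" };
};
```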
Error Handling and Retry Costs
Failed function executions still cost money. Worse, automatic retries multiply costs when functions fail systematically.
The Compounding Problem: A function that fails after 5 seconds due to timeout gets billed for 5 seconds. If automatic retries mean it fails three times before finally succeeding, you've paid for 15 seconds of dead execution plus the eventual successful run.
Real Incident: A misconfigured function that failed database connections was failing after 29 seconds (just under timeout) and retrying. Something like 50,000 failed executions over 4 hours cost around $400-500 for literally nothing but error logs. That's a bill you have to explain to your manager.
The error? A typo in the database connection string that took 29 seconds to time out. The function was configured to retry automatically, so every failed request burned 87 seconds of billing time (29 seconds across three attempts) before giving up. I found this at 2am when our bill alert went off.
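If you want to make sure a failure like that can't multiply its own bill, retries for async invocations can be capped. A sketch using the AWS SDK v3 Lambda client (the function name is a placeholder):

```typescript
// Sketch: cap automatic retries for async invocations so a systematically
// failing function doesn't re-bill itself on every failure.
import {
  LambdaClient,
  PutFunctionEventInvokeConfigCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

await lambda.send(
  new PutFunctionEventInvokeConfigCommand({
    FunctionName: "orders-worker",   // placeholder
    MaximumRetryAttempts: 0,         // don't re-run an invocation that already failed
    MaximumEventAgeInSeconds: 60,    // drop stale events instead of retrying them
  })
);
```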
The Pricing Psychology
These platforms don't accidentally make pricing confusing. The complexity is designed to make cost optimization difficult until after you're committed to their platform.
AWS Strategy: Make memory allocation feel like a safety net ("allocate more to be safe") while hiding that CPU performance requires higher memory tiers.
Vercel Strategy: Lead with developer experience and deployment simplicity, treat team costs as a scaling problem you'll solve later.
Cloudflare Strategy: Undercut competitors on headline pricing while banking on migration friction keeping you locked in after you discover runtime limitations.
The real cost isn't just the monthly bill - it's the engineering time spent understanding, optimizing, and working around these pricing models. Every platform wants you to deploy first and optimize later. That's when they've got you.
Bottom Line: Nobody's trying to rip you off maliciously, but these platforms make money when developers don't fully understand the pricing implications of their architectural decisions. The gotchas aren't bugs - they're features.