Lambda lets you run code without managing servers. It's actually pretty great for APIs and small tasks, but there are gotchas that'll bite you in production.
AWS Lambda Architecture: Under the hood, Lambda is a multi-tier system: Frontend Services handle invocations, Worker Managers provision execution environments, and Firecracker microVMs provide secure, isolated sandboxes (microVMs, not containers). Each function runs in its own microVM with configurable memory and CPU allocation.
The Reality Check
Lambda works by running your code in response to event-driven triggers - HTTP requests, file uploads, database changes, whatever. Each function runs in its own Firecracker microVM with configurable memory (128 MB to 10 GB). You get CPU power proportional to the memory you allocate, which is weird but that's how AWS's pricing model works.
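Here's roughly what that looks like in practice - a minimal Python handler, assuming an API Gateway proxy event (other triggers hand you a completely different event shape):

```python
# handler.py - a minimal Lambda handler (Python runtime).
# Lambda calls this once per invocation; `event` carries the trigger
# payload and `context` exposes runtime metadata.
import json

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the HTTP body arrives as a string.
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "echo": body,
            "ms_left": context.get_remaining_time_in_millis(),
        }),
    }
```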
The catch? Cold starts. When Lambda hasn't run your function recently, it takes time to spin up a fresh execution environment. For Java this can stretch to 10+ seconds with heavy frameworks. For Node.js and Python, usually under 500 ms. For Go, pretty fast at roughly 100-300 ms. Exact numbers depend on your runtime, package size, and dependencies.
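The standard workaround is to do expensive setup at module scope, which runs once when the environment spins up and gets reused on every warm invocation. A sketch, assuming boto3 and a hypothetical users table:

```python
import boto3

# Module scope runs once per execution environment, so the client and
# any heavy setup survive across warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table name

def lambda_handler(event, context):
    # Only the per-request work happens here, on every invocation.
    return table.get_item(Key={"id": event["id"]}).get("Item")
```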
Languages supported: Node.js, Python, Java, Go, C#, Ruby, PowerShell. You can also use custom runtimes or container images up to 10 GB if you hate yourself and want to debug containers instead.
Why People Love It
No server management: You literally never SSH into anything or install security updates. AWS handles the infrastructure: OS patching, capacity provisioning, runtime updates.
Automatic scaling: Goes from 0 to 1,000 concurrent executions by default without you doing anything. Perfect for unpredictable traffic. Want more? Request a limit increase - AWS is usually pretty accommodating.
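If you want to cap a single function (or guarantee it a slice of the pool), reserved concurrency is one API call away - a sketch with boto3, function name hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 100 concurrent executions for this function: it can never use
# more, and the rest of the account can never starve it below that number.
lambda_client.put_function_concurrency(
    FunctionName="my-api-handler",  # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```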
Pay-per-use: Only pay when your code runs: $0.20 per million requests plus $0.0000166667 per GB-second (x86 pricing). Great for low-traffic APIs, terrible for high-traffic ones where the costs add up fast. The AWS free tier gives you 1 million requests and 400,000 GB-seconds monthly, forever.
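The math is simple enough to sanity-check yourself - a back-of-envelope model using those published rates (your region and architecture may shift the numbers):

```python
# Back-of-envelope Lambda cost model using the published x86 rates.
REQUEST_PRICE = 0.20 / 1_000_000   # $ per request
GB_SECOND_PRICE = 0.0000166667     # $ per GB-second

def monthly_cost(requests, avg_ms, memory_mb):
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    return requests * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# 10M requests/month at 200 ms on 512 MB: about $18.67/month before free tier.
print(f"${monthly_cost(10_000_000, 200, 512):.2f}")
```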
Why People Hate It
Cold starts ruin everything: Your API randomly becomes slow because Lambda decided to start fresh. Users notice. Your boss notices. You spend weekends optimizing cold starts and reading cold start optimization guides.
Debugging is a nightmare: Good luck stepping through code that only exists for milliseconds in some AWS data center. CloudWatch logs are better than nothing, but finding the actual problem in 10,000 log lines is like finding a needle in a haystack. X-Ray tracing helps but adds complexity.
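One trick that shrinks the haystack: log structured JSON instead of free-form prints, so CloudWatch Logs Insights can filter on fields. A minimal sketch - the field names here are just conventions, not anything Lambda requires:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # One JSON object per line: Logs Insights can then query fields
    # directly, e.g. `filter order_id = "1234"`, instead of grepping.
    logger.info(json.dumps({
        "msg": "order processed",
        "order_id": event.get("order_id"),     # hypothetical event field
        "request_id": context.aws_request_id,  # correlates with traces
    }))
    return {"ok": True}
```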
Vendor lock-in: Once you go Lambda, everything becomes AWS-specific. Your code calls DynamoDB, S3, SNS, SQS. Moving to another cloud? Good luck rewriting everything or dealing with multi-cloud complexity.
The 15-minute limit: Perfect until you need to process a file that takes 20 minutes. Then you're stuck splitting your job into chunks or moving to EC2 or AWS Batch.
Architecture Reality
Lambda's lifecycle has three phases: INIT (environment setup, including your module-scope code), INVOKE (your handler), and SHUTDOWN (cleanup). Today you generally pay only for the INVOKE phase on managed runtimes, but AWS has hinted at INIT billing changes that could make cold starts even more expensive.
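You can actually observe the phases: module scope runs during INIT, the handler during INVOKE, so a module-level flag tells you whether a given invocation was a cold start. A sketch:

```python
import time

# Runs during INIT, once per execution environment.
_init_time = time.time()
_cold = True

def lambda_handler(event, context):
    global _cold
    was_cold = _cold
    _cold = False  # every later invocation in this environment is warm
    return {
        "cold_start": was_cold,
        "env_age_seconds": round(time.time() - _init_time, 1),
    }
```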
Performance improvements: Graviton2 (arm64) functions are priced about 20% lower per GB-second, and AWS claims up to 34% better price/performance than x86. SnapStart for Java can take cold starts from multiple seconds down to a few hundred milliseconds, which is still noticeable but at least usable.
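SnapStart is a per-function configuration flag that takes effect on published versions - a sketch with boto3, function name hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# SnapStart snapshots the initialized environment when you publish a
# version, then resumes from the snapshot on cold start instead of
# re-running INIT from scratch.
lambda_client.update_function_configuration(
    FunctionName="my-java-api",  # hypothetical; SnapStart targets JVM functions
    SnapStart={"ApplyOn": "PublishedVersions"},
)
```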
Lambda integrates with 200+ AWS services. API Gateway, S3, DynamoDB, EventBridge - if it's AWS, it probably triggers Lambda. Which is convenient until you realize you're trapped in the AWS ecosystem forever.
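Each trigger hands you a different event shape. An S3 upload notification, for instance, arrives as a list of records - a sketch following the documented S3 event format:

```python
def lambda_handler(event, context):
    # An S3 trigger delivers one or more records per invocation,
    # each identifying the bucket and object key that changed.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"new object: s3://{bucket}/{key}")
```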