DynamoDB is the Only One That Actually Works (Fight Me)

Look, every database vendor talks about being "serverless" these days. But I've run production systems on all of these, and I'll save you the suspense: only one actually delivers on the promise.

DynamoDB: Actually Built for This

🗄️ DynamoDB: The True Serverless Option

DynamoDB is the only database in this comparison designed from day one for serverless workloads. When AWS says "zero administration," they mean it. You literally cannot access the underlying servers - they don't exist from your perspective.

AWS Finally Cut DynamoDB Prices (About Fucking Time): AWS reduced DynamoDB on-demand pricing by 50% in late 2024. Took them long enough - we were getting murdered on costs. This actually makes DynamoDB competitive with managed PostgreSQL, which nobody saw coming. Catch? You still gotta wrap your head around single-table patterns that'll make SQL developers cry.

Real-world serverless performance: DynamoDB handles traffic spikes from 10 requests per second to 40,000 requests per second without any configuration changes. The auto-scaling is transparent - you see the cost increase in your bill, not downtime in your monitoring dashboard.
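Here's roughly what "without any configuration changes" looks like in practice - a minimal boto3 sketch (table name and key schema are made up) where the only capacity decision is choosing on-demand billing:

# Hedged sketch: creating an on-demand DynamoDB table with boto3.
# Table and key names are hypothetical; on-demand mode means there are no RCU/WCU knobs to tune.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="app-data",                              # hypothetical table name
    BillingMode="PAY_PER_REQUEST",                     # on-demand: no provisioned capacity
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},    # partition key
        {"AttributeName": "SK", "KeyType": "RANGE"},   # sort key
    ],
)
# Going from 10 req/sec to 40,000 req/sec is the service's problem, not a config change.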

The DynamoDB tax: Single-table design patterns require rethinking your entire data model. SQL joins become application logic. Transactions top out at 100 items and 4MB per request. If your data model requires complex relationships, DynamoDB becomes painful fast.
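To see what that transaction ceiling means day to day, here's a hedged boto3 sketch - table and key names are hypothetical, and anything that won't fit in one TransactWriteItems call has to be split or redesigned:

# Hedged sketch: a DynamoDB transaction via boto3. Everything must fit in a single
# TransactWriteItems call (item-count and 4MB caps), so workflows that would be one
# SQL transaction often get split or redesigned.
import boto3

client = boto3.client("dynamodb")

client.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "app-data",   # hypothetical single-table design
                "Item": {
                    "PK": {"S": "USER#customer@example.com"},
                    "SK": {"S": "ORDER#2025-09-01#12345"},
                    "total": {"N": "149.99"},
                },
            }
        },
        {
            "Update": {
                "TableName": "app-data",
                "Key": {"PK": {"S": "USER#customer@example.com"}, "SK": {"S": "PROFILE"}},
                "UpdateExpression": "ADD order_count :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)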

Aurora Serverless: PostgreSQL/MySQL with Training Wheels

☁️ Aurora: Serverless with Asterisks

Aurora Serverless v2 (finally fucking stable as of mid-2024) tries to make PostgreSQL and MySQL serverless. It auto-scales between 0.5 and 128 ACUs (Aurora Capacity Units), with each ACU representing roughly 2GB of memory and corresponding CPU.
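If you want to see where those numbers live, here's a minimal boto3 sketch (cluster identifier is hypothetical) that pins the v2 ACU range on an existing cluster:

# Hedged sketch: setting the Aurora Serverless v2 ACU range with boto3.
# The cluster identifier is made up; 0.5-128 is the documented v2 range.
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",   # hypothetical cluster
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,    # ~1GB RAM floor
        "MaxCapacity": 128,    # ~256GB RAM ceiling
    },
    ApplyImmediately=True,
)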

The scaling reality: Aurora takes way too long to scale when you actually need it. We hit the front page of HN and the first wave of users got connection timeouts while Aurora was still figuring out if it needed more capacity. Took maybe half a minute which might as well be forever when your site's blowing up.

Cost structure: Aurora Serverless runs around a dollar per ACU-hour, but the billing gets weird fast. Our database bill started somewhere around $2,500/month, now we're pushing $4k and honestly not sure why. Could be background jobs keeping the thing warm, or maybe all those read replicas we forgot about. The "pay only when active" promise is marketing bullshit when you've got cron jobs running constantly.

PostgreSQL on Aurora: I'm running PostgreSQL 17 and it's been solid. Aurora gives you the latest versions plus whatever extensions you need - pgvector if you're doing AI stuff. Way faster than regular RDS, like 3x better, but you pay for it.

MySQL on Aurora: Less compelling unless you're stuck with legacy MySQL. If you're starting fresh, just use PostgreSQL.

MongoDB Atlas: NoSQL Serverless (With Asterisks)

🍃 MongoDB: NoSQL with Expensive Scaling

MongoDB Atlas Serverless launched with great fanfare but has strict limitations that make it unsuitable for production workloads above toy scale. The 1TB storage limit and connection throttling hit faster than you'd expect.

Atlas pricing is a trap: M10 instances are completely useless for anything real - like $57/month to host a glorified to-do app. Production requires M30+ and we're paying something like $1,200/month with backups and monitoring. Maybe more now, I stopped looking at the MongoDB line item because it just makes me angry.

Performance improvements: The latest MongoDB is supposedly faster, but it requires WiredTiger cache tuning that nobody on my team wants to deal with. Atlas handles this automatically, which is nice, but you're paying a premium for that hand-holding.

Scaling reality: MongoDB sharding works well for write scaling but introduces operational complexity. Choosing the wrong shard key is a permanent mistake - you cannot change it without a full migration. Atlas makes sharding easier but doesn't eliminate the fundamental data modeling challenges.

Cassandra: Distributed by Design, Serverless by Accident

💍 Cassandra: The Ring Architecture

Cassandra's latest version with SAI (Storage-Attached Indexes) finally lets you do multi-column queries without the traditional "model your data for your queries" nightmare. This makes Cassandra slightly less insane for regular developers.

DataStax Astra DB provides the closest thing to "serverless Cassandra." You get the linear scaling and fault tolerance of Cassandra without managing JVM heap sizes or compaction strategies. Pricing starts around $0.10 per million read/write operations with a $25 minimum monthly charge.

Serverless characteristics: Cassandra naturally scales horizontally without downtime. Adding nodes increases capacity linearly. The architecture is inherently "serverless" in that no single node is special - lose any node and the cluster continues operating.

The complexity trade-off: Even with Astra DB, you still need to understand Cassandra's data modeling patterns. Consistency levels, partition keys, and tombstone accumulation remain operational concerns. This isn't "zero administration" like DynamoDB.
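To make that concrete, here's a minimal Python sketch with the DataStax driver - keyspace, table, and connection details are hypothetical - showing that the consistency level is still a choice you make per query, managed service or not:

# Hedged sketch: consistency levels stay application-visible even on managed Cassandra.
# Keyspace/table names are made up; Astra connects via a secure-connect bundle instead of a host list.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])                 # self-hosted node for illustration
session = cluster.connect("app_keyspace")        # hypothetical keyspace

# LOCAL_QUORUM: stronger than ONE, cheaper than ALL - the trade-off is still yours to make.
stmt = SimpleStatement(
    "SELECT * FROM user_profiles WHERE user_id = %s",   # hypothetical table
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
rows = session.execute(stmt, ("user-123",))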

PostgreSQL and MySQL: Server-Based Thinking in Serverless Clothes

🐘 PostgreSQL & MySQL: Old School Reliability

Traditional PostgreSQL/MySQL deployment patterns don't translate well to serverless architectures. Connection pooling, read replicas, and manual scaling decisions require traditional database administration skills.

Managed options improve the story, but only up to a point:

Connection limits still bite: PostgreSQL defaults to 100 concurrent connections. Each Lambda execution creates a database connection. Under load, you'll hit connection limits and get "FATAL: sorry, too many clients already" - PostgreSQL's way of saying "fuck you, scale better."

Learned this one the hard way during our product launch. Had like 400+ Lambdas, or some crazy number, all trying to connect at once, and everything went to shit simultaneously. Still have PTSD from that 'FATAL: sorry, too many clients' error message. RDS Proxy helps but adds latency and costs $0.015/hour per connection.
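The standard mitigation, besides RDS Proxy, is to open the connection once per container instead of once per invocation. A rough sketch - env var names and the query are hypothetical, and this only softens the problem; at 400+ concurrent Lambdas you still want a proxy or pooler in front:

# Hedged sketch: reuse one connection per warm Lambda container.
import os
import psycopg2

# Created once per container, reused across warm invocations.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],        # point this at RDS Proxy, not the DB, under real load
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    connect_timeout=5,
)

def handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM orders WHERE user_id = %s", (event["user_id"],))
        (order_count,) = cur.fetchone()
    conn.commit()
    return {"order_count": order_count}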

Also had some weird connection pooling issue during migration where connections would just... disappear? Took us 3 hours on Stack Overflow to figure out it was some timeout setting nobody knew about.

Current reality:

  • PostgreSQL: Works great, latest version has better performance
  • MySQL: Fine if you're stuck with it
  • Both offer JSON columns for flexible schemas
  • Neither offers true auto-scaling without rethinking your architecture

The Real Serverless Database Decision Matrix

Choose DynamoDB if:

  • Your application can work with NoSQL patterns
  • Traffic is unpredictable (spiky or seasonal)
  • You want true zero administration
  • Budget predictability matters less than operational simplicity

Choose Aurora Serverless if:

  • You need complex SQL queries and relationships
  • Your team knows PostgreSQL/MySQL already
  • You can tolerate 15-30 second scaling delays
  • Budget allows for $3,000+ monthly database costs

Choose MongoDB Atlas if:

  • Your data model fits document patterns naturally
  • You need rapid development with schema flexibility
  • Your team prefers NoSQL query patterns
  • You're building content management or catalog systems

Choose Cassandra/Astra if:

  • You need massive write throughput (100k+ ops/sec)
  • Global distribution is required
  • Eventual consistency is acceptable
  • You have time-series or IoT data

Choose traditional PostgreSQL/MySQL if:

  • "Serverless" is marketing fluff for your use case
  • You need predictable performance and costs
  • Your workload is steady-state with known patterns
  • You have existing database administration expertise

What "Serverless" Actually Costs in 2025

💰 The Real Cost Story

The pricing models vary dramatically, and the marketing numbers don't reflect real-world costs:

DynamoDB: Our bill runs around $200/month but jumped to $800 when we got featured somewhere. Then went back down. Then spiked again for no fucking reason. Hard to predict.

Aurora Serverless: Started at $2,500/month, now we're at $4k and honestly not sure why.

MongoDB Atlas: M10 instances are useless for anything real. We're paying around $1,200/month with backups and monitoring - I think? Could be $1,500 by now, I stopped looking at the bill.

Cassandra Astra: $25 minimum monthly but scales with usage. Good if you have high transaction volumes, expensive for typical web apps.

Traditional managed databases: RDS PostgreSQL starts around $200/month but you need read replicas and monitoring to match what the serverless options give you automatically.

The hidden costs include data transfer (can double your bill), backup storage, monitoring tools, and the engineering time spent on database operations versus application development.

But understanding costs is only half the battle. The real test comes when you need to migrate between these systems - which is when most companies realize they're completely fucked.

Serverless & Cloud-Native Feature Comparison

| Serverless Capability | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| True Serverless Architecture | ✅ Zero servers to manage | ⚠️ Managed capacity, 15-30s scaling | ⚠️ Managed capacity, 15-30s scaling | ❌ Cluster-based, manual scaling | ⚠️ Managed nodes, transparent scaling |
| Auto-scaling Response Time | < 1 second | 15-30 seconds | 15-30 seconds | Manual intervention required | Node addition: 2-5 minutes |
| Connection Management | Built-in, unlimited | RDS Proxy required ($) | RDS Proxy required ($) | Built-in connection pooling | CQL protocol, connection pooling |
| Cold Start Penalty | None | Database pauses after inactivity | Database pauses after inactivity | None (always warm) | None (distributed) |

Scaling Characteristics

| Scaling Characteristic | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| Minimum Scale | 0 requests/sec | 0.5 ACU (~1GB RAM) | 0.5 ACU (~1GB RAM) | M0 (512MB, limited) | $25/month minimum |
| Maximum Scale | 40,000+ req/sec | 128 ACU (256GB RAM) | 128 ACU (256GB RAM) | Horizontal sharding | Petabyte scale |
| Scale Granularity | Per-request | 0.5 ACU increments | 0.5 ACU increments | Instance tier jumps | Node-level scaling |
| Scale Predictability | Immediate | 15-30 second delay | 15-30 second delay | Manual pre-scaling needed | 2-5 minute node addition |

Pricing Models (Current as of Sept 2025)

| Pricing Tier | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| Entry Level | $0 (free tier: 25GB) | ~$650/month (min prod load) | ~$650/month (min prod load) | $0 (M0 512MB) | $25/month minimum |
| Production Typical | $150-500/month (until you get featured on HN, then RIP your budget) | $2,000-5,000/month (ours keeps creeping up for some reason) | $2,000-5,000/month (same mysterious billing creep) | $300-1,200/month (if you can avoid M60+ pricing) | $100-2,000/month |
| High Volume | $500-2,000/month (can spike to $5k during viral traffic) | $5,000-15,000/month (AWS will own your soul) | $5,000-15,000/month (see PostgreSQL) | $1,200-5,000/month (Atlas pricing trap activates here) | $2,000-10,000/month |
| Billing Granularity | Per-request | Per ACU-hour | Per ACU-hour | Per-hour instance | Per-operation |

Development Experience

| Development Concern | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| Local Development | ❌ DynamoDB Local (limited) | ✅ Standard PostgreSQL | ✅ Standard MySQL | ✅ MongoDB Community | ❌ Requires Docker/CCM |
| Schema Management | ✅ Schemaless (single table) | ⚠️ Standard SQL migrations | ⚠️ Standard SQL migrations | ✅ Flexible schemas | ⚠️ CQL schema required |
| Query Language | Custom (PartiQL) | Standard SQL | Standard SQL | MQL + Aggregation Pipeline | CQL (Cassandra Query Language) |
| ACID Transactions | Limited (100 items, 4MB) | Full ACID support | Full ACID support | Multi-document (limited) | Eventually consistent |

Operational Complexity

| Operational Concern | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| Monitoring Setup | ✅ CloudWatch built-in | ⚠️ CloudWatch + custom | ⚠️ CloudWatch + custom | ✅ Atlas monitoring | ⚠️ Astra console + custom |
| Backup Management | ✅ Point-in-time automatic | ⚠️ Aurora backups | ⚠️ Aurora backups | ✅ Atlas automatic | ⚠️ Managed snapshots |
| Security Management | ✅ IAM integration | ⚠️ VPC + RDS security | ⚠️ VPC + RDS security | ⚠️ Atlas IP whitelist | ⚠️ Astra tokens + IP |
| Version Upgrades | ✅ Automatic, transparent | ⚠️ Managed maintenance | ⚠️ Managed maintenance | ⚠️ Atlas handles upgrades | ⚠️ Astra manages versions |

Performance Characteristics

| Performance Metric | DynamoDB | Aurora Serverless (PostgreSQL) | Aurora Serverless (MySQL) | MongoDB Atlas | Cassandra Astra |
|---|---|---|---|---|---|
| Read Latency (P99) | 1-3ms (single-digit) | 2-5ms (plus RDS Proxy) | 2-5ms (plus RDS Proxy) | 5-15ms (cluster latency) | 1-5ms (partition key) |
| Write Latency (P99) | 1-3ms | 5-10ms | 5-10ms | 10-20ms (replica ack) | 1-3ms (eventual consistency) |
| Complex Query Performance | ❌ Limited (single table) | ✅ Excellent (parallel queries) | ⚠️ Good (limited parallelism) | ✅ Aggregation pipelines | ❌ Limited (no joins) |
| Analytical Workloads | ❌ Not designed for this | ✅ PostgreSQL analytics | ⚠️ Limited analytical functions | ✅ Atlas Data Lake | ❌ Time-series only |

Migration Scenarios: The Hidden Costs of Database Switching

After seeing the serverless promises and cost realities, many companies decide to migrate. But moving between these databases isn't just a technical challenge - it's a complete architectural shift that most teams underestimate. Here's what actually happens when companies attempt these migrations in production.

The DynamoDB Migration Reality

🔄 From Relational to Single-Table Hell

From SQL to DynamoDB: This isn't migration - you're rewriting your entire fucking app. Every company I know spent at least a year redesigning everything.

Single-table design shock: Instead of normalized tables with foreign keys, DynamoDB requires denormalized data structures where related entities are stored together. A typical e-commerce application with Users, Orders, and Products tables becomes a single table with complex partition and sort key patterns.

-- Before (SQL): Simple join query
SELECT u.name, o.total, p.title 
FROM users u 
JOIN orders o ON u.id = o.user_id 
JOIN products p ON o.product_id = p.id
WHERE u.email = 'customer@example.com'

// After (DynamoDB): Single table with GSI
{
  PK: "USER#customer@example.com",
  SK: "PROFILE",
  name: "John Doe",
  // ... user data
}
{
  PK: "USER#customer@example.com", 
  SK: "ORDER#2025-09-01#12345",
  total: 149.99,
  product_title: "MacBook Pro", // denormalized
  // ... order data
}

The Netflix case study: Netflix spent years migrating from Oracle to DynamoDB - they basically rebuilt their entire data architecture. They didn't just change databases, they rewrote every service that touched user data. Google their case study if you want the details. End result? 99.99% availability and handling Black Friday traffic spikes without pre-provisioning capacity.

Common migration failures: Companies that try to replicate their SQL data model in DynamoDB end up with hot partitions, expensive scan operations, and worse performance than their original database. The migration succeeds only when you design for DynamoDB's strengths.

Aurora Serverless: The "Safe" Migration Path

PostgreSQL to Aurora: This is the smoothest path. Aurora PostgreSQL is wire-compatible with standard PostgreSQL, so your apps work without code changes. AWS DMS handles the data transfer.
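Concretely, "wire-compatible" means your driver code doesn't change. In this hedged sketch only the host differs - the Aurora endpoint shown is made up:

# Hedged sketch: same psycopg2 code before and after, only the endpoint changes.
import psycopg2

# Before: self-managed / RDS PostgreSQL
# conn = psycopg2.connect(host="db.internal.example.com", dbname="app", user="app", password="...")

# After: Aurora PostgreSQL cluster endpoint (hypothetical)
conn = psycopg2.connect(
    host="app-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="app",
    user="app",
    password="...",
)
# Queries, ORMs, and migrations run unchanged; the work is in cutover and operations, not code.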

Performance improvements: We saw about 3x better throughput after moving to Aurora. The distributed storage layer eliminates I/O bottlenecks that kill standard PostgreSQL performance.

Hidden migration costs:

  • RDS Proxy setup ($0.015/hour per connection): Required for Lambda functions to avoid connection exhaustion
  • Application connection pooling changes: Existing connection pools may need reconfiguration for Aurora's scaling behavior
  • Monitoring migration: CloudWatch metrics differ from traditional PostgreSQL monitoring tools
  • Backup strategy updates: Aurora's continuous backups replace traditional pg_dump workflows

Real migration timeline: Took us somewhere between 4 and 6 months - lost count after the third rollback attempt. Actually moving the data took like 2 weeks, but figuring out all the operational bullshit took forever. We spent at least a month just trying to get CloudWatch alerts to not be completely fucking useless, and another month debugging connection pooling issues that only happened in production.

MongoDB Atlas: The Schema-Flexible Middle Ground

From SQL to MongoDB: Easier than DynamoDB because you can initially replicate table structures as MongoDB collections. The migration path allows incremental denormalization over time.

Common migration pattern:

// Initial migration: Direct table to collection mapping
// users table → users collection
{
  _id: ObjectId("..."),
  id: 12345,
  name: "John Doe",
  email: "john@example.com"
}

// orders table → orders collection  
{
  _id: ObjectId("..."),
  order_id: 67890,
  user_id: 12345,  // Still using foreign key patterns
  total: 149.99
}

// Optimized after migration: Embedded documents
{
  _id: ObjectId("..."),
  user: {
    id: 12345,
    name: "John Doe",
    email: "john@example.com"
  },
  orders: [
    {
      order_id: 67890,
      total: 149.99,
      items: [...]  // Denormalized for performance
    }
  ]
}

The aggregation pipeline learning curve: SQL developers typically struggle with MongoDB's aggregation syntax. Complex joins become multi-stage pipelines that are harder to debug and optimize.
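For a taste of that learning curve, here's roughly how the earlier users/orders join looks as an aggregation pipeline - a pymongo sketch where collection and field names follow the migration example above and are assumptions, not a real schema:

# Hedged sketch: SQL join rewritten as a multi-stage aggregation pipeline.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["app"]   # hypothetical database

pipeline = [
    {"$match": {"email": "customer@example.com"}},
    {"$lookup": {                       # the "join": users.id -> orders.user_id
        "from": "orders",
        "localField": "id",
        "foreignField": "user_id",
        "as": "orders",
    }},
    {"$unwind": "$orders"},
    {"$project": {"name": 1, "total": "$orders.total", "_id": 0}},
]
for row in db.users.aggregate(pipeline):
    print(row)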

Sharding gotchas: MongoDB's auto-sharding sounds great until you pick the wrong shard key. Companies often discover sharding problems 6-12 months after migration when data growth creates hot shards. Fixing a bad shard key requires a complete data migration.
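Picking the key itself is a single admin command, which is part of why it's so easy to get wrong. The sketch below uses the generic MongoDB shardCollection command - Atlas wraps the same operation, and the database/collection names are hypothetical:

# Hedged sketch: choosing a shard key via the generic MongoDB admin commands.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # in Atlas you'd do this through the UI/API

client.admin.command("enableSharding", "app")
client.admin.command(
    "shardCollection",
    "app.events",
    key={"user_id": "hashed"},   # hashed key spreads writes; a monotonic key creates hot shards
)
# Changing the key later means resharding or migrating the collection - an expensive, planned operation.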

Cassandra: The Linear Scaling Promise

Migration complexity: Cassandra requires the most fundamental rethinking of data access patterns. Every query must be designed around partition keys and clustering columns.

Time-series migration success story: An IoT company migrated from PostgreSQL to Cassandra for sensor data storage. The PostgreSQL database couldn't handle 100k writes per second; Cassandra scaled to 1M writes per second linearly by adding nodes.

The data modeling nightmare: Cassandra requires creating multiple tables for different query patterns. The same data is duplicated across tables optimized for specific access patterns.

-- Single PostgreSQL table
CREATE TABLE sensor_data (
  sensor_id UUID,
  timestamp TIMESTAMP,
  temperature FLOAT,
  location TEXT
);

-- Multiple Cassandra tables for different queries
CREATE TABLE sensor_by_id (
  sensor_id UUID,
  timestamp TIMESTAMP,
  temperature FLOAT,
  location TEXT,
  PRIMARY KEY (sensor_id, timestamp)
);

CREATE TABLE sensor_by_location (
  location TEXT,
  timestamp TIMESTAMP,
  sensor_id UUID,
  temperature FLOAT,
  PRIMARY KEY (location, timestamp)
);

Migration timeline: Took way too long - felt like forever. Maybe 15 months? Could've been 18? Hard to track when you're constantly fighting fires. I think we burned at least 4 months just figuring out the data modeling because every goddamn query needs its own table in Cassandra. Then we spent several more months debugging compaction issues that would randomly destroy performance in the middle of the night. Good times.

The Reverse Migrations: When "Modern" Databases Fail

DynamoDB back to PostgreSQL: Several high-profile companies migrated back from DynamoDB when their applications grew beyond simple key-value patterns. The lack of complex queries and joins eventually outweighed the scaling benefits.

Segment's migration: Segment moved their user data from MongoDB to PostgreSQL after hitting scalability and consistency issues. They spent 18 months building tools to migrate 30TB of data while maintaining zero downtime.

The common pattern: Companies migrate to NoSQL for theoretical scaling benefits, then migrate back to SQL when they need complex analytics, reporting, or data consistency guarantees.

Migration Decision Framework

Migrate TO DynamoDB if:

  • Your application is read/write heavy with simple query patterns
  • Traffic is unpredictable or spiky
  • You can redesign your data model for single-table patterns
  • Operational simplicity matters more than query flexibility

Migrate TO Aurora Serverless if:

  • You need SQL compatibility with managed infrastructure
  • Your traffic has predictable patterns with occasional spikes
  • You want better performance than standard RDS
  • Your team already knows PostgreSQL/MySQL

Migrate TO MongoDB Atlas if:

  • Your data structure changes frequently
  • You need flexible schemas for rapid development
  • Document-based queries match your application patterns
  • You can tolerate eventual consistency

Migrate TO Cassandra if:

  • You need massive write throughput (100k+ ops/sec)
  • Linear scaling is more important than query flexibility
  • You have time-series or append-only data patterns
  • You can hire/train distributed systems expertise

Stay with traditional PostgreSQL/MySQL if:

  • Your current database meets performance requirements
  • Complex queries and joins are central to your application
  • Your team is productive with existing tools
  • Migration costs outweigh the potential benefits

Hidden Migration Costs

The technical migration is often the smallest cost. Real migration expenses include:

  • Engineering time: At least a year of senior developer time, probably more when you count all the "quick fixes" that turned into multi-week projects
  • Training costs: Blew around $25k on MongoDB training and certification - half the team still couldn't write decent aggregation pipelines
  • Consultant fees: Database migration specialists charge something like $300-400/hour and bill you for reading documentation
  • Dual-running period: Ran both systems for maybe 4 months while everyone made excuses about why we couldn't cut over yet
  • Rollback planning: Built comprehensive rollback procedures we never used but management insisted on having "just in case"
  • Monitoring changes: Learning new monitoring tools while your production site is having issues is basically hell
  • Third-party integrations: Every single analytics, backup, and monitoring tool needed updates - and they all failed in creative ways

The largest cost is often the opportunity cost - features not built while the team focuses on migration.

Understanding these migration realities helps inform the initial choice. But even with perfect information, the "right" database depends heavily on your specific application patterns and requirements.

Real-World Use Case Decision Matrix

| Application Type | Best Choice | Alternative | Avoid | Why |
|---|---|---|---|---|
| E-commerce Platform | Aurora PostgreSQL | DynamoDB | Cassandra | Product catalogs with pricing rules are join-heavy nightmares. DynamoDB works if you can stomach denormalizing everything into one giant table. |
| Social Media Feed | DynamoDB | MongoDB Atlas | PostgreSQL | User feeds are key-value lookups with predictable access patterns. Millions of reads per user post. |
| Analytics Dashboard | Aurora PostgreSQL | MongoDB Atlas | DynamoDB | Complex aggregations, joins across multiple data sources. PostgreSQL's window functions excel here. |
| IoT Sensor Data | Cassandra Astra | DynamoDB | MySQL | Massive write throughput, time-series queries. Cassandra's linear scaling is purpose-built for this. |
| Content Management | MongoDB Atlas | Aurora PostgreSQL | DynamoDB | Flexible schemas for different content types, rapid development iterations. |
| Financial Trading | Aurora PostgreSQL | None | All NoSQL | ACID transactions are non-negotiable. PostgreSQL's consistency guarantees required for money. |
| Chat/Messaging | DynamoDB | Cassandra Astra | MySQL | Message lookup by user/channel, global scaling, traffic spikes during viral content. |
| User Profile Service | DynamoDB | MongoDB Atlas | Aurora | Simple key-value lookups, global distribution, mobile app offline sync requirements. |
| Inventory Management | Aurora PostgreSQL | MongoDB Atlas | DynamoDB | Complex stock calculations, supplier relationships, reporting requirements need SQL. |
| Mobile Game Backend | DynamoDB | MongoDB Atlas | PostgreSQL | Player data, leaderboards, global scaling for viral growth. Simple access patterns. |
| Real-time Notifications | DynamoDB | Cassandra Astra | MySQL | Global user base, instant delivery, traffic spikes during major events. |
| Data Warehousing | Aurora PostgreSQL | None | All NoSQL | Complex analytical queries, historical reporting, business intelligence tools need SQL. |
| API Rate Limiting | DynamoDB | Cassandra | PostgreSQL | Simple counters with TTL, global enforcement, microsecond latency requirements. |
| Session Store | DynamoDB | MongoDB Atlas | Aurora | Simple key-value with TTL, serverless Lambda functions, no complex queries needed. |
| Audit Logging | Cassandra Astra | DynamoDB | MySQL | Write-heavy, immutable records, long-term retention, compliance requirements. |

Frequently Asked Questions: Serverless Database Reality Check

Q

Which database is actually "serverless" and not just marketing bullshit?

A

Only DynamoDB is truly serverless. You literally cannot access servers - they don't exist from your perspective. Auto-scaling happens almost instantly, not like the 15-30 seconds you get with Aurora Serverless (which feels like forever when your site's blowing up). Aurora Serverless is "server-managed" - AWS handles capacity planning, but you're still thinking in terms of compute units and connection limits. MongoDB Atlas "serverless" has a 1TB limit and connection throttling. Cassandra Astra is managed but still requires understanding distributed systems concepts. If you want true serverless (pay per request, instant scaling, zero infrastructure), DynamoDB is the only real option in this comparison.
Q

Will DynamoDB's single-table design make my developers hate me?

A

Probably, at first. SQL developers need 3-6 months to think in DynamoDB patterns. But companies that successfully adopt single-table design report 10x better performance and 90% fewer production issues. The learning curve is steep: composite primary keys, GSI design, query access patterns. Your team will initially try to replicate SQL patterns and create hot partitions. But once they understand partition key distribution and data denormalization, most developers prefer the predictable performance. Mitigation: Start with simple use cases (user profiles, session storage) before attempting complex relational data models.

Q

Can Aurora Serverless handle traffic spikes without destroying my budget?

A

Aurora Serverless v2 handles spikes better than v1, but there's a 15-30 second scaling delay. Your first wave of users during a traffic spike will experience slow response times while Aurora provisions additional ACUs.

Budget impact: Aurora scales from 0.5 to 128 ACUs. Each ACU costs ~$0.90/hour, so a traffic spike that requires 32 ACUs for 2 hours costs about $58. The issue isn't cost - it's the scaling lag that creates poor user experience.

Better approach: Use Aurora with read replicas for predictable traffic, DynamoDB for unpredictable spikes.
Q

Is MongoDB Atlas pricing a trap that gets expensive as I scale?

A

Yes. The serverless tier is too limited for real applications (1TB storage, connection limits). Production workloads require M10+ instances where costs compound quickly:

  • M10 ($57/month): Development only
  • M30 ($300/month): Small production
  • M60 ($1,500/month): Real production with replica sets

Add multi-region deployment, backups, and data transfer, and your $300 plan becomes $1,200+ monthly. Atlas pricing scales with your success, which sounds good until you see the bill.

Cost control: Use MongoDB for development velocity, then migrate to self-hosted or consider alternatives before hitting M60 pricing.
Q

Should I use PostgreSQL on Lambda functions?

A

Not without RDS Proxy, unless you enjoy getting "FATAL: sorry, too many clients already" at 3am. PostgreSQL defaults to 100 connections. Each Lambda spins up its own connection. Do the math - under any real load, you're fucked. RDS Proxy ($0.015/hour per connection) fixes this but adds 2-5ms latency and another thing to debug. For Lambda, just use DynamoDB and save yourself the headache.

Exception: If your Lambda runs like once an hour, direct PostgreSQL is fine. But if you're asking this question, you probably need more scale than that.

Q

Can Cassandra actually scale to "web scale" or is that marketing?

A

Cassandra genuinely scales linearly - add nodes, get more capacity. Netflix handles billions of requests daily on Cassandra. The scaling isn't marketing bullshit, it's real.

But "web scale" requires expertise most teams don't have. You need to understand partition keys, consistency levels, compaction strategies, JVM heap tuning, and about 50 other things that'll make you cry. Cassandra scales to infinity, but debugging production issues at 3am requires a PhD in distributed systems.

Reality check: If you can't afford a $200k Cassandra expert (and good luck finding one), you don't need Cassandra's scale. PostgreSQL with proper indexing handles millions of requests daily without the operational nightmare.

Q

What happens when my DynamoDB bill suddenly jumps 10x?

A

Oh, this happens to everyone eventually. Usually it's because your app creates hot partitions or someone wrote a scan operation that's reading your entire table. Common causes I've seen:

  1. Hot partition: All requests hitting the same partition key (like user_id=admin or something equally stupid)
  2. Scan operations: Someone wrote code that scans the entire table to find one record because they didn't understand Query vs Scan (see the sketch below)
  3. Over-provisioned throughput: Set provisioned capacity way too high "just to be safe"
  4. GSI nightmare: Created like 6 Global Secondary Indexes without thinking about the throughput costs

Prevention: Design partition keys for even distribution (timestamps, UUIDs, not sequential IDs), use Query not Scan, and for the love of god monitor your CloudWatch metrics. Set up billing alerts or you'll get a surprise bill that ruins your month.
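A quick sketch of cause #2, since it's the one that bites most teams - table and attribute names are hypothetical:

# Hedged sketch: Scan reads (and bills) the whole table; Query touches one partition.
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("app-data")   # hypothetical table

# The bill-destroyer: reads every item, then filters afterwards.
expensive = table.scan(FilterExpression=Attr("email").eq("customer@example.com"))

# The fix: hit the partition key directly.
cheap = table.query(KeyConditionExpression=Key("PK").eq("USER#customer@example.com"))
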
Q

How do I choose between Aurora PostgreSQL and regular RDS PostgreSQL?

A

Choose Aurora if:

  • You need automatic failover (< 30 seconds)
  • Read scaling with up to 15 read replicas
  • Point-in-time recovery without backup storage costs
  • Global database for multi-region deployment
  • Budget allows for 2-3x higher costs

Choose RDS PostgreSQL if:

  • Predictable workloads without complex scaling needs
  • Cost optimization is critical
  • Simple backup/restore requirements
  • Single-region deployment

Aurora's distributed storage is genuinely better for I/O intensive workloads, but you pay premium prices for features you might not need.
Q

Should I migrate from MongoDB to PostgreSQL or stick with MongoDB?

A

Honestly, it depends on a bunch of factors I don't know about your specific setup. But here's what I've seen:

Migrate to PostgreSQL if:

  • You need complex queries and joins (MongoDB's $lookup is painful)
  • Data consistency is critical for your use case
  • Atlas costs are getting out of hand
  • Your team is constantly fighting with aggregation pipeline syntax

Stick with MongoDB if:

  • Document-based queries actually match your data patterns
  • Schema flexibility is genuinely important (not just "nice to have")
  • Your team is already productive with MongoDB
  • Migration costs outweigh potential benefits

Companies like Segment migrated from MongoDB to PostgreSQL, but that was their specific situation. Your mileage may vary.
Q

Can I use multiple databases in the same application without creating a mess?

A

Yes, but be strategic about it. Successful polyglot persistence follows clear boundaries:

Good patterns:

  • DynamoDB for user sessions, PostgreSQL for business logic
  • Cassandra for time-series data, PostgreSQL for reporting
  • MongoDB for content management, PostgreSQL for e-commerce transactions

Bad patterns:

  • Same data duplicated across multiple databases
  • Complex transactions spanning multiple database types
  • Different databases for similar use cases (consistency nightmare)

Key principle: Each database should own specific data domains with clear service boundaries.
Q

What's the biggest mistake teams make when choosing databases?

A

Optimizing for problems they don't fucking have. Teams choose Cassandra because they might need to scale to Netflix levels, then spend 8 months learning distributed systems for an app serving 10,000 users. I've seen startups burn through their Series A learning Kafka and Cassandra instead of building features customers actually want. Better approach: Start with PostgreSQL for 90% of applications. When you actually hit scaling limits (not when TechCrunch writes about your "potential" scale), then migrate. Premature optimization has killed more startups than bad database choices. Your bottleneck is probably your shitty N+1 queries, not your database choice.

Q

How do I know if serverless databases are worth the complexity?

A

Serverless databases make sense if:

  • Your traffic is unpredictable or spiky
  • You want to eliminate database operations overhead
  • Your team focuses on application logic, not infrastructure
  • Cost scaling with usage is acceptable

Traditional databases are better if:

  • Your workload is predictable and steady
  • You have database administration expertise
  • Cost predictability matters more than operational simplicity
  • You need complex queries that don't map to serverless patterns

The complexity isn't in the databases themselves - it's in changing how your team thinks about data modeling and application architecture.
