The main Docker problems fall into three categories: security nightmares, licensing surprises, and random shit that breaks for no reason.
Docker Daemon Runs as Root (Security Hates This)
The Docker daemon runs as root, and by default root inside a container is the same UID 0 as root on your host. That's not a bug, that's how Docker works out of the box. Unless you remap user namespaces or drop privileges yourself, every container process is one bad mount or escape away from owning the machine.
Try explaining to security why your web app container can modify /etc/passwd on the production server. Spoiler: you can't.
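Docker does ship two mitigations, they're just off by default: run the container process as a non-root user, and turn on user namespace remapping so container root maps to a throwaway UID range on the host. A minimal sketch - my-web-app is a placeholder image name, and userns-remap breaks some volume and networking setups, so test it first:

```
# Run the app as an unprivileged UID:GID instead of root
# (assumes the image doesn't need root to start - many don't)
docker run --user 1000:1000 my-web-app

# Or remap container root to an unprivileged host UID range, daemon-wide.
# Add to /etc/docker/daemon.json and restart dockerd:
#
#   { "userns-remap": "default" }
#
# After that, UID 0 inside a container is a subordinate UID on the host,
# so "root in the container" can't rewrite /etc/passwd on the box anymore.
```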
NIST's container security guide (SP 800-190) spends dozens of pages, in government speak, basically saying "don't do this." Docker's own security docs admit the problem; rootless mode exists now, but it's opt-in and has limitations of its own. The CIS Docker Benchmark lists 100+ security configurations you're expected to apply by hand - I've seen teams spend months working through them.
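You don't have to tick off those 100+ items manually, though. Docker publishes docker-bench-security, a script that audits a host against the CIS Docker Benchmark and flags what's out of line - roughly:

```
# Audit the local Docker host against the CIS Docker Benchmark
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh    # prints a per-check PASS/WARN/INFO result
```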
I once had a junior dev accidentally wipe /var because a container mount went wrong and they had root permissions. Docker's security model makes every mistake potentially catastrophic.
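Mount discipline is the cheap insurance against that kind of accident: if a container only needs to read a host path, mount it read-only so nothing inside the container can delete it. A sketch with made-up paths and image name:

```
# Read-only bind mount: the container can read the host logs, not wipe them
docker run --rm \
  --user 1000:1000 \
  -v /var/log/myapp:/logs:ro \
  my-log-shipper
```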
Docker Desktop Isn't Free Anymore
Docker changed the Docker Desktop licensing in 2021. Companies with more than 250 employees or more than $10 million in annual revenue now need a paid subscription - roughly $5-21 per developer per month depending on the tier. The Docker Engine itself is still free; the change only hits Docker Desktop.
Not huge money, but annoying when free alternatives exist. Finance teams love asking why you're paying for container software when Podman does the same thing for free. I've been in three budget meetings where this came up, and every time someone asks "why can't we just use the free one?"
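Part of why that question is hard to argue with: Podman's CLI is deliberately docker-compatible, so for everyday commands the switch is close to mechanical. A rough sketch - compose workflows and anything that leans on the Docker daemon's socket need more care:

```
# Podman mirrors the docker CLI for the everyday stuff
podman pull nginx
podman run -d -p 8080:80 nginx
podman ps

# Plenty of teams just alias it and move on
alias docker=podman
```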
Memory Usage Gets Weird at Scale
Docker daemon uses memory even when no containers are running. Add containers and it uses more, in a pattern that doesn't track with what the containers themselves consume, and Docker's documentation doesn't explain why.
I've seen Docker daemons use 2-4GB just sitting there idle. Then when you spin up containers, memory usage jumps unpredictably. Good luck explaining that to monitoring.
Last month we had a production server with 32GB RAM run out of memory. Docker daemon was using 8GB with only three small containers running. Restarting the daemon freed up the memory, but we never figured out why it happened.
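If you're keeping the daemon anyway, at least put a fence around it so the next leak degrades Docker instead of the whole host. A sketch, assuming systemd manages the docker service - note that hitting the cap gets dockerd OOM-killed, so treat it as a circuit breaker, not a tuning knob:

```
# What the daemon itself is using, separate from the containers
systemctl status docker        # the service's own memory line
docker stats --no-stream       # per-container usage for comparison

# Cap the daemon with a systemd override:
#   sudo systemctl edit docker
# then add:
#   [Service]
#   MemoryMax=4G
# and restart the service:
#   sudo systemctl restart docker
```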
Kubernetes Dropped Docker Support
Kubernetes removed dockershim, its built-in adapter for the Docker Engine, in version 1.24. The official reason was maintenance burden, but the message is the same: the Docker daemon is extra weight the orchestrator doesn't want to carry.
Most cloud providers switched to containerd or CRI-O. Both are lighter and designed for orchestration, not developer laptops. Amazon EKS, Google GKE, and Azure AKS all use containerd now.
If you're still running the Docker Engine as your Kubernetes runtime, you're on deprecated tech propped up by an external shim (cri-dockerd). Your Docker-built images are fine - they're standard OCI images that containerd runs happily - but the daemon itself is out. I learned this the hard way when our EKS cluster stopped working after an upgrade - spent two days figuring out why all our pods were failing to start.
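Before a 1.24+ upgrade bites you the way it bit us, check what your nodes actually run - it takes a minute:

```
# The CONTAINER-RUNTIME column shows docker:// vs containerd:// per node
kubectl get nodes -o wide

# Or pull just the runtime string
kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```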
Network Configuration is a Mystery
Docker's bridge networking works fine for docker-compose up on your laptop. In production with multiple hosts and services, it becomes black magic.
The networking documentation assumes you understand Linux networking concepts that most developers don't know. When it breaks, the error messages don't help. I spent three days debugging a networking issue where containers couldn't talk to each other - turned out to be a subnet conflict that Docker never mentioned in the logs.
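The subnet-conflict failure mode is avoidable if you pin down the ranges Docker is allowed to grab instead of letting it guess. A sketch - 10.200.0.0/16 is just an example, pick a range that doesn't overlap your VPC or VPN:

```
# See which subnets Docker has already carved out
docker network ls
docker network inspect bridge --format '{{json .IPAM.Config}}'

# Pin the pool Docker allocates new networks from, in /etc/docker/daemon.json:
#   {
#     "default-address-pools": [
#       { "base": "10.200.0.0/16", "size": 24 }
#     ]
#   }
# Restart the daemon; new bridge networks then come out of that pool instead
# of colliding with whatever your VPN or VPC is already using.
```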
AWS charges extra for data transfer between availability zones. Docker's networking can trigger that accidentally if you don't configure it right. I've seen AWS bills jump by $500/month because of misconfigured Docker networking pulling data across zones.
Bottom line: Docker works great for development. Production is where it falls apart.