Docker's "no space left on device" error is misleading bullshit. You check df -h
and see tons of free space, but Docker still won't work. That's because Docker hides its data in places that df
doesn't show you properly, and Docker never cleans up its own mess.
Where Docker Actually Hides Your Missing Disk Space
/var/lib/docker/ is where all your space went. Docker dumps everything here: container layers, images, volumes, build cache. I've seen fresh Docker installs grow from nothing to 30GB+ in a couple months of normal dev work.
Check what's actually using space:
sudo du -sh /var/lib/docker/*
sudo du -sh /var/lib/docker/overlay2/* | sort -hr | head -10
What's eating your space in /var/lib/docker/:
- overlay2/ - Container filesystems (usually the biggest chunk)
- containers/ - Container logs and metadata (can get massive)
- image/ - Image data
- volumes/ - Your app data
- buildkit/ - Build cache (Docker never cleans this)
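Before digging through du output, it's worth cross-checking against Docker's own accounting. docker system df breaks usage down by images, containers, volumes, and build cache, and the verbose flag lists the individual offenders:
# Docker's own view of where the space went, item by item
docker system df -v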
Container logs are probably your biggest problem. Docker logs everything to /var/lib/docker/containers/ and never rotates the files unless you tell it to. I've seen single containers dump 40GB+ of logs over a weekend.
Find log-heavy containers:
sudo find /var/lib/docker/containers/ -name "*.log" -exec du -Sh {} + | sort -rh | head -5
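If one of those logs is already huge, a common stopgap is to truncate it in place instead of deleting it, since the running container keeps an open handle on the file. The container ID below is a placeholder for whatever the find command above turns up:
# Empty an oversized log without removing the file the json-file driver is writing to
sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log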
Docker image deduplication is oversold. Docker does share layers between images, but interrupted pulls and corrupted metadata regularly leave "shared" layers orphaned, eating gigabytes that nothing uses.
BuildKit cache is another space hog. Every docker build saves intermediate layers in /var/lib/docker/buildkit/ and Docker never expires them on its own. I've seen build caches hit 20GB+ on active dev machines.
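You don't have to wipe the whole cache to get that space back. docker builder prune takes an age filter, so a sketch like this drops the stale bulk while keeping recent layers warm:
# Remove build cache entries not used in the last week; shrink the window if needed
docker builder prune --filter until=168h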
Hidden Space Problems That Don't Show in df
Inode Exhaustion happens when you run out of file system inodes (file entries) rather than disk space. Docker creates thousands of tiny files for container layers, and older filesystems can exhaust inodes while having plenty of space left. This is particularly common with ext4 filesystems on older systems.
Check for inode problems:
df -i
# Look for high "IUse%" numbers
The Linux filesystem documentation explains inode allocation and management across different filesystem types.
This is common on systems using ext4 with default inode ratios, especially with the overlay2 storage driver, which creates many small files.
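If df -i shows IUse% climbing toward 100%, confirm Docker is actually the culprit. Every file and directory burns an inode, so a rough count under /var/lib/docker usually settles it:
# Rough inode count for Docker's data directory; -xdev stays on this filesystem
sudo find /var/lib/docker -xdev | wc -l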
Filesystem-Specific Issues
These vary by storage backend and Docker storage driver:
- ext4: Can hit inode limits with Docker's small files
- xfs: Generally more resilient but can fragment with heavy layer usage
- APFS (macOS): Docker Desktop creates a disk image that doesn't shrink automatically
- NTFS (Windows): Layer deduplication fails, consuming excessive space
Docker Desktop Virtual Disk
On macOS and Windows, Docker Desktop allocates space differently than Linux. The Docker.raw or ext4.vhdx file grows to accommodate containers but rarely shrinks when containers are deleted. You might have 10GB of containers but a 60GB virtual disk file. The Docker Desktop troubleshooting guide covers virtual disk management and space reclamation.
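On macOS you can see the gap for yourself: the virtual disk is a sparse file, so its apparent size and the blocks it actually occupies diverge. The path below is the default for recent Docker Desktop versions and may differ on your install:
# Apparent size (ls) vs blocks actually allocated on disk (du) for the sparse image
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
du -h ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw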
When "Free Space" Isn't Actually Free
Modern filesystems reserve 5% of disk space for root, so when df shows you have space, you might not have enough for Docker operations. The Docker daemon runs as root but creates files as various users, complicating space calculations.
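You can check how much ext4 holds back with tune2fs. The device below is just an example; use whatever device df says backs /var/lib/docker:
# Blocks ext4 reserves for root (default is roughly 5% of the filesystem)
sudo tune2fs -l /dev/sda1 | grep -i "reserved block count"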
Temporary Space During Operations
Docker needs additional free space for:
- Layer extraction during image pulls (2x image size temporarily)
- Build operations that create intermediate layers
- Container startup when copying files
- Log rotation when it actually works
Network Filesystem Issues
Putting /var/lib/docker on network storage (NFS, shared storage) can cause space reporting discrepancies. The local system might report free space, but the network filesystem is full.
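findmnt settles the question quickly by showing which mount actually backs Docker's data directory:
# Which filesystem (and which source device or export) backs /var/lib/docker?
findmnt --target /var/lib/docker
df -h /var/lib/docker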
LVM and Volume Management
LVM setups can show free space in the volume group that isn't allocated to the specific filesystem Docker uses. Your /var/lib/docker might be on a full logical volume while the system shows overall free space. The LVM documentation explains volume group and logical volume space management in detail.
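A quick sanity check is to compare free space at the volume group level against the logical volume and filesystem Docker actually lives on:
# Volume group free space vs logical volume size vs the filesystem Docker uses
sudo vgs
sudo lvs
df -h /var/lib/docker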
The Container Logging Disaster
By default, Docker uses the `json-file` logging driver with no size limits. A single container that logs verbosely can fill your entire disk over a weekend.
For more details on Docker logging problems, see the container logging best practices guide.
Real-world disasters I've personally dealt with:
Weekend from hell: Production API with debug logging dumped 50GB of request/response logs over a weekend. Nobody even reads those JSON dumps but they killed our entire stack.
The infinite retry nightmare: Container couldn't connect to DB, so it logged "connection failed" every 100ms. Monday morning: 60GB of error messages in like 8 different languages because someone enabled i18n on error logging. Brilliant.
Docker Desktop space black hole: Colleague's MacBook had a 15GB Docker.raw file containing maybe 2GB of actual containers. Docker Desktop allocated space for deleted shit and never gave it back to macOS. Only fix was nuking everything.
How fast logs grow:
- Dev containers with debug on: couple GB per day
- API services logging everything: 5-10GB daily
- Batch jobs that log every record: tens of GB per run
- Crash-looping containers: same error message over and over until disk dies
Log Location Investigation:
sudo du -sh /var/lib/docker/containers/*/
sudo du -sh /var/lib/docker/containers/*/*.log | sort -hr
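The lasting fix is giving the json-file driver limits in /etc/docker/daemon.json. A minimal sketch, assuming you don't already have a daemon.json to merge with, and noting it only applies to containers created after the daemon restarts:
# Cap each container at 3 log files of 10MB each.
# WARNING: this overwrites /etc/docker/daemon.json; merge by hand if you already have one.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker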
Unbounded container logging is probably the #1 cause of Docker space usage that catches teams off guard.
Docker Desktop vs Docker Engine Space Differences
Docker Desktop (Windows/Mac) uses a virtual machine approach where all Docker data lives inside a disk image file. This file grows dynamically but doesn't shrink automatically when you delete containers. The Desktop app shows container disk usage but doesn't account for image layer overhead or the virtual disk's allocated-but-unused space.
Docker Engine (Linux) stores everything directly in /var/lib/docker/ on the host filesystem. Space usage is more transparent, but Docker's metadata tracking can become inconsistent, leading to "phantom" space usage where deleted containers still consume disk.
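One way to spot phantom usage is to compare Docker's bookkeeping against what the filesystem actually holds; a big gap between the two numbers points at orphaned layers or leftover container directories:
# What Docker thinks it's using vs what's really on disk
docker system df
sudo du -sh /var/lib/docker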
Anyway, that's where Docker hides your disk space. Now you can stop randomly running cleanup commands hoping something works.
For comprehensive space management strategies, refer to the Docker storage overview and troubleshooting documentation.