Your codespace just crashed with ENOSPC: no space left on device during npm install. Docker layers are eating your disk space faster than you can delete them. Here's what actually works.
The Quick Fix (Nuclear Option)
When you're getting the disk space error, this will save your ass right now:
docker system prune -a --force
docker builder prune --all --force
This wipes every stopped container, unused image, unused network, and the entire build cache, freeing up 5-10GB instantly. You'll have to rebuild your containers, but at least you can work.
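If you want to see where the space actually went, before or after the nuke, both of these are standard commands:
# Overall usage of the root filesystem
df -h /
# What Docker itself is holding: images, containers, local volumes, build cache
docker system df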
Why This Keeps Happening
GitHub gives you 32GB of disk space per codespace, which sounds like plenty until you realize:
- Docker layers stack up: Every RUN command in your Dockerfile creates a new layer, and old layers linger in the build cache. Rebuild your container 5 times with changes and you can burn 15GB without realizing it.
- Node modules multiply: Multiple projects mean multiple node_modules directories, and each one can easily hit 500MB-1GB.
- Build artifacts accumulate: Webpack outputs, compiled binaries, and test coverage reports don't clean themselves up automatically. (The audit commands below show which of these is actually eating your disk.)
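A quick way to find the culprit, using nothing beyond Docker and coreutils (the /workspaces path is where Codespaces clones your repos by default):
# Per-image, per-container, per-volume breakdown of Docker's usage
docker system df -v
# Size of every node_modules directory in your workspaces
du -sh /workspaces/*/node_modules 2>/dev/null
# Biggest directories on the disk overall
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20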
The Real Solution: Devcontainer Optimization
Edit your .devcontainer/devcontainer.json to include cleanup commands:
{
  "initializeCommand": "docker system prune --force",
  "postCreateCommand": "npm ci && npm run build",
  "shutdownAction": "stopContainer"
}
The initializeCommand runs before your container starts, cleaning up leftover Docker garbage. This prevented me from hitting the disk limit for 3 months straight.
Storage Monitoring That Actually Works
Add this to your shell startup file (.bashrc or .zshrc):
# Check disk space on every terminal start
df -h | grep -E "Size|/$" && echo "Docker space:" && docker system df
Now you'll see exactly how much space you have left every time you open a terminal. When Docker is using more than 10GB, it's time to run docker system prune.
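If raw numbers aren't loud enough, you can turn it into a warning instead. This is a sketch for your .bashrc; the disk_warn name and the 80% threshold are my own choices, not anything Codespaces ships with:
# Warn when the root filesystem is over 80% full
disk_warn() {
  local used
  used=$(df / | tail -1 | awk '{print $5}' | tr -d '%')
  if [ "$used" -gt 80 ]; then
    echo "WARNING: disk ${used}% full - time for 'docker system prune'"
  fi
}
disk_warn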
Docker Layer Hell: The Specific Fix
If you're building Docker images inside Codespaces (yes, Docker-in-Docker), your .devcontainer/Dockerfile is probably full of layer-bloating anti-patterns:
BAD (creates 5 layers):
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get clean
GOOD (creates 1 layer):
RUN apt-get update \
    && apt-get install -y git curl vim \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
This single change took my container from 2.1GB to 800MB. Multiply that 1.3GB saving by 10 rebuilds sitting in the build cache and you've just saved 13GB.
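You can verify the effect layer by layer. docker history is a standard command; my-image:latest is just a placeholder for whatever your image is tagged as:
# Show every layer in the image and how much space each one adds
docker history my-image:latest
# Total size of every image currently on disk
docker image ls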
The npm/yarn Storage Bomb
Node.js projects are the worst offenders. Here's the pattern that kills storage:
- npm install downloads 300MB of dependencies
- Rebuild container → npm install again → another 300MB
- Switch branches → npm install → another 300MB
- Repeat until dead
Solution: stop re-downloading. Persist node_modules as a volume in your devcontainer:
{
  "mounts": [
    "source=node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
  ],
  "postCreateCommand": "npm ci"
}
This mounts node_modules as a Docker volume, so it persists between container rebuilds. One npm install, reused forever.
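To confirm the volume really survives a rebuild (standard Docker commands; node_modules is the volume name from the config above):
# Named volumes stick around even when containers are recreated
docker volume ls
# Where the volume lives and when it was created
docker volume inspect node_modules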
The Machine Type Trap
Upgrading to 4-core or 8-core machines gives you more disk space:
- 2-core: 32GB storage
- 4-core: 64GB storage
- 8-core: 128GB storage
But here's the gotcha: bigger machines don't fix wasteful Docker usage. I've seen developers burn through 128GB in a week by not cleaning up layers.
Fix your Docker hygiene first, then upgrade machine size if you legitimately need more space.
When Storage Monitoring Saves Your Ass
Set up automatic cleanup in your devcontainer configuration:
{
  "initializeCommand": [
    "bash",
    "-c",
    "USED=$(df / | tail -1 | awk '{print $5}' | sed 's/%//'); if [ \"$USED\" -gt 80 ]; then docker system prune -f; fi"
  ]
}
This automatically runs docker system prune whenever disk usage climbs above 80%. (Note the array form: bash, -c, and the script have to be separate elements, since the array variant is executed directly rather than through a shell.) It's saved me from the "no space left" error at least a dozen times.
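Before committing it to devcontainer.json, you can dry-run the same check in a Codespaces terminal; swapping the prune for an echo (my change) keeps the test harmless:
# Same logic, but prints instead of pruning
USED=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$USED" -gt 80 ]; then
  echo "Disk is ${USED}% full - would run: docker system prune -f"
else
  echo "Disk is ${USED}% full - nothing to do"
fi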