Your app works on your machine but breaks in production. The real problem isn't Node.js - it's all the shit around it. Different Node versions between your laptop and production. Missing system dependencies. Environment variables that work locally but don't exist on the server. The usual mess.
Docker fixes this (mostly)
Docker packages your app with everything it needs. Same Node version, same dependencies, same environment variables. No more "works on my machine" excuses.
What Docker gives you:
- Your app runs the same everywhere (mostly)
- Dependencies don't fuck with each other
- Rollbacks when you inevitably break something
- Scaling without manually configuring servers
Choosing Node.js Base Images
The official Node.js images come in different flavors. Most tutorials don't tell you which ones work in production.
The reality of Node.js images:
Image | Size | What It's Good For |
---|---|---|
node:22 | ~400MB | Development. Don't use it in prod - it's bloated |
node:22-slim | ~75MB | Production. This is what works |
node:22-alpine | ~40MB | Looks great until it breaks your native deps |
Tried Alpine first because smaller = better, right? Alpine kept breaking random shit. First bcrypt, then some image processing library, then something else I can't even remember. Gave up after wasting a weekend and just used slim. The root cause: Alpine uses musl instead of glibc, and a lot of native npm packages ship prebuilt binaries that assume glibc - on musl they either have to recompile from source or just fall over.
Alpine's fine if you want to test every npm package for musl compatibility, but slim images just work. Check the Node.js Docker best practices if you want the official recommendations.
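For what it's worth, if you insist on Alpine anyway, most native-module failures trace back to missing build tools at npm install time. A sketch, not a recommendation - you still have to verify every native dep against musl:
## Alpine builder stage - node-gyp needs python3/make/g++ to compile
## native addons from source instead of using glibc prebuilt binaries
FROM node:22-alpine AS builder
RUN apk add --no-cache python3 make g++
WORKDIR /app
COPY package*.json ./
RUN npm ci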
Multi-stage builds (they work)
Most Dockerfile tutorials online are garbage. They copy everything and wonder why their images are 400MB. Multi-stage builds separate your build crap from runtime:
## Stage 1: Build dependencies and compile application
FROM node:22-slim AS builder
WORKDIR /app
COPY package*.json ./
## Install ALL dependencies here - the build step needs the dev dependencies too
RUN npm ci
## Copy source and build the app
COPY . .
RUN npm run build
## Drop dev dependencies now that the build is done
RUN npm prune --omit=dev && npm cache clean --force
## Stage 2: Production runtime with minimal dependencies
FROM node:22-slim AS production
## curl is required by the HEALTHCHECK below - slim images don't ship it
RUN apt-get update && apt-get install -y --no-install-recommends curl && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
## Create non-root user for security
RUN groupadd -r nodejs && useradd -r -g nodejs nodejs
WORKDIR /app
RUN chown -R nodejs:nodejs /app
USER nodejs
## Copy only production files from builder stage
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./
## Health check for container orchestration
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:${PORT:-3000}/health || exit 1
EXPOSE 3000
CMD ["node", "dist/server.js"]
Why this works:
- Build stage has all your dev dependencies and build tools
- Production stage only gets the compiled code
- Non-root user, so a compromised app can't act as root inside the container (basic security)
- Health checks so Kubernetes or Docker Swarm can restart the container when shit breaks
- Final image is maybe 100MB instead of 400MB+
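Quick sanity check after a build (the myapp tag is a placeholder - use whatever you tag your image with):
docker build -t myapp .
docker images myapp   # confirm the final size actually dropped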
Security hardening (don't skip this)
Running as root in production is stupid. Here are the Docker security basics that keep you from getting pwned. Also check OWASP Docker Security and the CIS Docker Benchmark:
Essential Security Measures:
## Use specific image versions, not 'latest'
FROM node:22.8.0-slim AS production
## Update system packages for security patches
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y --no-install-recommends curl && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
## Create dedicated user with minimal privileges
RUN groupadd -r nodejs && useradd -r -g nodejs nodejs
## Set secure file permissions
WORKDIR /app
RUN chown -R nodejs:nodejs /app && chmod 750 /app
## Switch to non-root user
USER nodejs
## Drop dev dependencies - fewer packages, smaller attack surface
## (assumes node_modules was copied in earlier; --omit=dev replaces the deprecated --production)
RUN npm prune --omit=dev
Why this matters:
- Pin exact versions so a compromised or silently-changed 'latest' tag can't slip into your builds
- Non-root user limits damage if someone breaks out of your container
- Fewer packages = smaller attack surface
- Regular updates patch security holes
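The Dockerfile is only half the story - some hardening happens at runtime. A sketch using standard Docker flags (the myapp image name is a placeholder):
## Read-only root filesystem, all Linux capabilities dropped, no privilege escalation
docker run --read-only --tmpfs /tmp \
  --cap-drop=ALL --security-opt=no-new-privileges \
  -p 3000:3000 myapp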
Performance stuff that matters
Docker builds were taking 15 minutes and making me question my life choices. What actually speeds things up:
Build Optimization:
# syntax=docker/dockerfile:1
## The syntax directive above must be the literal FIRST line of the Dockerfile
## (single #) - it enables BuildKit features like the cache mount below
FROM node:22-slim AS builder
WORKDIR /app
## Copy package files first for better cache utilization
COPY package*.json ./
## Cache mount keeps the npm cache between builds, so installs go way faster
RUN --mount=type=cache,target=/root/.npm \
    npm ci
## Copy source code after dependencies are cached
COPY . .
RUN npm run build
Runtime Optimization:
## Use init system to handle zombie processes
FROM node:22-slim AS production
RUN apt-get update && apt-get install -y --no-install-recommends tini && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["tini", "--"]
## Configure Node.js performance settings
ENV NODE_ENV=production
ENV NODE_OPTIONS="--max-old-space-size=1024"
## Warning: experimental flags break randomly between versions
## Had --experimental-specifier-resolution=node working fine until it didn't
## Just use standard imports instead of fighting experimental bullshit
CMD ["node", "server.js"]
What this does:
- Build caching cut those 15-minute builds down to 2-3 minutes
- Init system handles zombie processes so they don't eat memory
- Memory limits prevent your app from crashing the container
- Don't use clustering in containers - let Docker Compose or Kubernetes handle scaling
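That last point in Compose terms, as a hedged sketch - the service name and numbers are placeholders, and deploy.resources needs a reasonably recent Docker Compose:
services:
  app:
    deploy:
      replicas: 3          # scale with copies of the container, not cluster.fork()
      resources:
        limits:
          memory: 1536M    # a bit above the 1024MB Node heap cap set via NODE_OPTIONS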
Environment config (don't hardcode secrets)
Performance is pointless if your app can't find its database password. Container configuration needs to be environment-aware without baking secrets into images.
Don't put secrets in your Dockerfile. Use environment variables and proper secrets management:
## Configure environment variables with defaults
ENV NODE_ENV=production
ENV PORT=3000
ENV LOG_LEVEL=info
## Health check endpoint configuration
ENV HEALTH_CHECK_PATH=/health
ENV HEALTH_CHECK_TIMEOUT=3000
## Application configuration through environment
ENV DATABASE_POOL_SIZE=10
ENV CACHE_TTL=300
ENV API_RATE_LIMIT=100
Environment Strategy:
- Default values in Dockerfile for development convenience
- Override values in production through orchestration platform
- Secrets injection via mounted volumes or secret management systems
- Configuration validation on container startup
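That last point deserves code. A minimal fail-fast sketch - the variable names are just this post's examples, swap in your own:
// src/config.js - crash early with a clear message instead of failing
// mysteriously at the first database call
const required = ['DATABASE_URL', 'REDIS_URL'];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  process.exit(1);
}
export const config = {
  port: Number(process.env.PORT ?? 3000),
  logLevel: process.env.LOG_LEVEL ?? 'info',
  databasePoolSize: Number(process.env.DATABASE_POOL_SIZE ?? 10),
};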
Container Orchestration and Deployment
Your Node.js app will crash in production unless you set up proper health checks and graceful shutdowns. Here's how to avoid getting paged at 3am.
Container orchestration sounds fancy, but it's just keeping your app alive when things break. Kubernetes, Docker Swarm, and cloud services all do the same thing - restart your containers when they die:
Health Check Implementation:
// routes/health.js - health check that catches real problems;
// assumes an Express router and a `db` client are in scope
router.get('/health', async (req, res) => {
  try {
    // Don't just return 200 blindly - actually touch the database the app depends on
    await db.query('SELECT 1');
    res.json({ status: 'ok', timestamp: Date.now() });
  } catch (error) {
    // This catches the real issues that bring down prod
    res.status(503).json({ status: 'unhealthy', error: error.message });
  }
});
Graceful Shutdown Handling:
// src/server.js
import express from 'express';

const app = express();
const server = app.listen(process.env.PORT || 3000);

// Handle SIGTERM from container orchestration
process.on('SIGTERM', () => {
  console.log('SIGTERM received, starting graceful shutdown');
  server.close(() => {
    console.log('HTTP server closed');
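    // If the app holds database pools or Redis clients, close them here
    // too before exiting - server.close() only drains HTTP connections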
    process.exit(0);
  });
  // Force shutdown after timeout
  setTimeout(() => {
    console.log('Force shutdown after timeout');
    process.exit(1);
  }, 30000);
});
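You can test this locally before an orchestrator does it for you - docker stop sends SIGTERM, then SIGKILLs after the timeout. This only works because the Dockerfiles above use exec-form CMD with node directly; use the shell form and PID 1 becomes a shell that never forwards the signal:
# container name is a placeholder; --time raises the default 10s grace period
docker stop --time=30 my-node-container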
Development Workflow with Docker Compose
Docker Compose fixes the "it works on my machine" bullshit. Everyone gets the same Postgres version, same Redis config, same environment variables. New devs can git clone && docker-compose up and start coding instead of spending two days installing dependencies:
## docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - db
      - redis
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
volumes:
  postgres_data:
  redis_data:
Development Benefits:
- Consistent environment for all developers
- External services (database, cache) run in containers
- Hot reloading for code changes during development
- Easy cleanup with docker-compose down -v
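One gotcha: this compose file targets a development stage that the production Dockerfile earlier never defines. A minimal sketch of that stage, assuming your package.json has a dev script (nodemon, node --watch, whatever):
## Development stage - pairs with 'target: development' in docker-compose.yml
FROM node:22-slim AS development
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
ENV NODE_ENV=development
EXPOSE 3000
CMD ["npm", "run", "dev"]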
Docker isn't magic, but it makes deployment way less of a mess. Your apps start the same way every time, scaling works without some random server being configured differently, and when things break at least you can reproduce it.
Worth the learning curve? Absolutely. The two weeks you spend learning Docker properly will save you months of 3am production debugging. But remember - Docker doesn't fix bad code, it just makes bad code fail consistently everywhere.
Next steps: Start with the multi-stage Dockerfile above, add proper health checks, and test it locally with Docker Compose. Once that works reliably, you're ready for production deployment with whatever orchestration platform your team prefers.