Why Docker Matters for Node.js


Your app works on your machine but breaks in production. The real problem isn't Node.js - it's all the shit around it. Different Node versions between your laptop and production. Missing system dependencies. Environment variables that work locally but don't exist on the server. The usual mess.

Docker fixes this (mostly)

Docker packages your app with everything it needs. Same Node version, same dependencies, same environment variables. No more "works on my machine" excuses.

What Docker gives you:

  • Your app runs the same everywhere (mostly)
  • Dependencies don't fuck with each other
  • Rollbacks when you inevitably break something
  • Scaling without manually configuring servers

Choosing Node.js Base Images

The official Node.js images come in different flavors. Most tutorials don't tell you which ones work in production.

The reality of Node.js images:

| Image | Size | What it's good for |
|---|---|---|
| node:22 | ~400MB | Development. Don't use in prod - it's bloated |
| node:22-slim | ~75MB | Production. This is what works |
| node:22-alpine | ~40MB | Looks great until it breaks your native deps |

Tried Alpine first because smaller = better, right? Alpine kept breaking random shit. First bcrypt, then some image processing library, then something else I can't even remember. Gave up after wasting a weekend and just used slim. The root cause: Alpine ships musl libc instead of glibc, and most prebuilt native npm binaries target glibc, so they either fail to load or have to be recompiled from source.

Alpine's fine if you want to test every npm package for musl compatibility, but slim images just work. Check the Node.js Docker best practices if you want the official recommendations.
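
If you do want to try Alpine anyway, at least smoke-test your native modules inside the image before betting a weekend on it. A throwaway script - bcrypt and sharp are stand-ins for whatever native deps your app actually uses:

// scripts/check-native-deps.cjs - run inside the candidate image after
// installing deps there, e.g.:
//   docker run --rm -v "$PWD":/app -w /app node:22-alpine \
//     sh -c "npm ci && node scripts/check-native-deps.cjs"
const modules = ['bcrypt', 'sharp']; // stand-ins - list your app's native deps

let failed = false;
for (const name of modules) {
  try {
    require(name); // loading is where musl/glibc mismatches blow up
    console.log(`${name}: loads fine`);
  } catch (err) {
    console.error(`${name}: FAILED - ${err.message}`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);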

Multi-stage builds (they work)

Most Dockerfile tutorials online are garbage. They copy everything and wonder why their images are 400MB. Multi-stage builds separate your build crap from runtime:

## Stage 1: Build dependencies and compile application
FROM node:22-slim AS builder

WORKDIR /app
COPY package*.json ./
## Install ALL deps here - the build step below needs devDependencies (tsc, bundlers, etc.)
RUN npm ci

## Copy source and build the app (make sure .dockerignore excludes node_modules and .git)
COPY . .
RUN npm run build

## Strip devDependencies so the production stage copies only runtime deps
RUN npm prune --omit=dev && npm cache clean --force

## Stage 2: Production runtime with minimal dependencies  
FROM node:22-slim AS production

## Create non-root user for security
RUN groupadd -r nodejs && useradd -r -g nodejs nodejs

WORKDIR /app
RUN chown -R nodejs:nodejs /app
USER nodejs

## Copy only production files from builder stage
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./

## Health check for container orchestration (curl isn't installed in slim,
## so use a Node one-liner instead)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:' + (process.env.PORT || 3000) + '/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

EXPOSE 3000
CMD ["node", "dist/server.js"]

Why this works:

  • Build stage has all your dev dependencies and build tools
  • Production stage only gets the compiled code
  • Non-root user so you're not running as root (basic security)
  • Health checks so Kubernetes or Docker Swarm can restart when shit breaks
  • Final image is maybe 100MB instead of 400MB+

Security hardening (don't skip this)

Running as root in production is stupid. These are the Docker security basics that keep you from getting pwned. Also check OWASP Docker Security and the CIS Docker Benchmark:

Essential Security Measures:

## Use specific image versions, not 'latest'
FROM node:22.8.0-slim AS production

## Update system packages for security patches
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y --no-install-recommends curl && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

## Create dedicated user with minimal privileges
RUN groupadd -r nodejs && useradd -r -g nodejs nodejs

## Set secure file permissions
WORKDIR /app
RUN chown -R nodejs:nodejs /app && chmod 750 /app

## Switch to non-root user
USER nodejs

## Remove devDependencies (assumes the app and node_modules were copied in earlier)
RUN npm prune --omit=dev

Why this matters:

  • Pinned image versions stop a surprise upstream change from breaking prod
  • Patched system packages close known CVEs sitting in the base image
  • A non-root user with tight file permissions limits the blast radius if the app gets compromised

Performance stuff that matters

Docker builds were taking 15 minutes and making me question my life choices. What actually speeds things up:

Build Optimization:

## syntax=docker/dockerfile:1
## The syntax line must be the FIRST line of the Dockerfile - it's what enables
## BuildKit features like the cache mounts below

## Use build cache mounts to speed up npm install
FROM node:22-slim AS builder
WORKDIR /app

## Copy package files first for better cache utilization
COPY package*.json ./
## Install everything - npm run build below needs devDependencies
RUN --mount=type=cache,target=/root/.npm \
    npm ci

## Copy source code after dependencies are cached
COPY . .
RUN npm run build

Runtime Optimization:

## Use init system to handle zombie processes
FROM node:22-slim AS production
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["tini", "--"]

## Configure Node.js performance settings
ENV NODE_ENV=production
ENV NODE_OPTIONS="--max-old-space-size=1024"

## Warning: experimental flags break randomly between versions
## Had --experimental-specifier-resolution=node working fine until it didn't
## Just use standard imports instead of fighting experimental bullshit
CMD ["node", "server.js"]

What this does:

  • tini reaps zombie processes and forwards signals, so SIGTERM actually reaches Node
  • NODE_ENV=production makes Express and most libraries skip dev-only code paths
  • --max-old-space-size caps the V8 heap below the container limit, so you hit a catchable OOM instead of the orchestrator's kill -9
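
To confirm the heap cap actually applied inside the container, ask V8 directly - a one-off check, nothing app-specific assumed:

// check-heap.mjs - run inside the container: node check-heap.mjs
import v8 from 'v8';

// heap_size_limit should land near the --max-old-space-size value (1024 above)
const limitMB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`V8 heap limit: ${Math.round(limitMB)}MB`);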

Environment config (don't hardcode secrets)

Performance is pointless if your app can't find its database password. Container configuration needs to be environment-aware without baking secrets into images.

Don't put secrets in your Dockerfile. Use environment variables and proper secrets management:

## Configure environment variables with defaults
ENV NODE_ENV=production
ENV PORT=3000
ENV LOG_LEVEL=info

## Health check endpoint configuration
ENV HEALTH_CHECK_PATH=/health
ENV HEALTH_CHECK_TIMEOUT=3000

## Application configuration through environment
ENV DATABASE_POOL_SIZE=10
ENV CACHE_TTL=300
ENV API_RATE_LIMIT=100
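
When secrets arrive as mounted files instead of env vars (Docker secrets, Kubernetes secret volumes), read them once at startup. A hedged sketch using the common *_FILE convention - the variable names are placeholders:

import { readFileSync } from 'fs';

// Prefer a FOO_FILE pointer to a mounted secret; fall back to a plain env var.
// e.g. DB_PASSWORD_FILE=/run/secrets/db_password (Docker Swarm's default mount path)
function readSecret(name) {
  const file = process.env[`${name}_FILE`];
  if (file) return readFileSync(file, 'utf8').trim();
  return process.env[name];
}

const dbPassword = readSecret('DB_PASSWORD');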

Environment Strategy:

  • Default values in Dockerfile for development convenience
  • Override values in production through orchestration platform
  • Secrets injection via mounted volumes or secret management systems
  • Configuration validation on container startup (see the sketch below)
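
That last bullet is cheap to implement: fail fast on boot so the orchestrator flags bad config instead of the app dying on its first query. A minimal sketch - the required list and the src/config.js path are assumptions for illustration:

// src/config.js - validate before anything else touches the environment
const required = ['DATABASE_URL', 'REDIS_URL'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  // Crash loudly on startup - a restart loop in the orchestrator is easier
  // to spot than a half-working app
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  process.exit(1);
}

export const config = {
  port: Number(process.env.PORT ?? 3000),
  logLevel: process.env.LOG_LEVEL ?? 'info',
  databaseUrl: process.env.DATABASE_URL,
  redisUrl: process.env.REDIS_URL,
};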

Container Orchestration and Deployment

Your Node.js app will crash in production unless you set up proper health checks and graceful shutdowns. Here's how to avoid getting paged at 3am.

Container orchestration sounds fancy, but it's just keeping your app alive when things break. Kubernetes, Docker Swarm, and cloud services all do the same thing - restart your containers when they die:

Health Check Implementation:

// Health check that catches real problems
router.get('/health', async (req, res) => {
  try {
    // Don't just check if the port is open - hit a real endpoint that uses your database
    await db.query('SELECT 1');
    res.json({ status: 'ok', timestamp: Date.now() });
  } catch (error) {
    // This catches the real issues that bring down prod
    res.status(503).json({ status: 'unhealthy', error: error.message });
  }
});

Graceful Shutdown Handling:

// src/server.js
import express from 'express';

const app = express();
const server = app.listen(process.env.PORT || 3000);

// Handle SIGTERM from container orchestration
process.on('SIGTERM', () => {
  console.log('SIGTERM received, starting graceful shutdown');
  
  server.close(() => {
    console.log('HTTP server closed');
    process.exit(0);
  });
  
  // Force shutdown after timeout
  setTimeout(() => {
    console.log('Force shutdown after timeout');
    process.exit(1);
  }, 30000);
});
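
The server.close callback is also where shared resources should be released, otherwise open DB connections die mid-query. A minimal sketch, assuming a pg Pool named pool and an ioredis client named redis - neither exists in the snippet above:

// Drop-in replacement for the server.close() callback above (hypothetical
// `pool` and `redis` clients - swap in whatever your app actually holds)
server.close(async () => {
  console.log('HTTP server closed, draining connections');
  await pool.end();    // pg: waits for checked-out clients, then closes the pool
  await redis.quit();  // ioredis: flushes pending commands, then disconnects
  process.exit(0);
});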

Development Workflow with Docker Compose


Docker Compose fixes the "it works on my machine" bullshit. Everyone gets the same Postgres version, same Redis config, same environment variables. New devs can git clone && docker-compose up and start coding instead of spending two days installing dependencies:

## docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    ports:
      - \"3000:3000\"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - db
      - redis

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

Development Benefits:

  • Consistent environment for all developers
  • External services (database, cache) run in containers
  • Hot reloading for code changes during development
  • Easy cleanup with docker-compose down -v

Docker isn't magic, but it makes deployment way less of a mess. Your apps start the same way every time, scaling works without some random server being configured differently, and when things break at least you can reproduce it.

Worth the learning curve? Absolutely. The two weeks you spend learning Docker properly will save you months of 3am production debugging. But remember - Docker doesn't fix bad code, it just makes bad code fail consistently everywhere.

Next steps: Start with the multi-stage Dockerfile above, add proper health checks, and test it locally with Docker Compose. Once that works reliably, you're ready for production deployment with whatever orchestration platform your team prefers.

Comparison Table

| Docker Image Strategy | Image Size | Security Level | Build Time | Production Suitability | Best Use Case | Known Issues |
|---|---|---|---|---|---|---|
| Single-stage (node:22) | ~400MB | Low (dev tools included) | Fast (5 min) | ❌ Not recommended | Quick prototyping only | Contains build tools, npm cache, dev dependencies in production |
| Multi-stage with slim base | ~75MB | High | Medium (8 min) | ✅ Recommended | Most production apps | This is what works |
| Alpine-based multi-stage | ~40MB | High | Medium (10 min) | ⚠️ Use carefully | Resource-constrained environments | Will randomly break your native deps |
| Distroless | ~30MB | Excellent | Slow (15 min) | ✅ Security-critical | Financial/healthcare apps | Good luck debugging without a shell |
| Scratch-based | ~20MB | Excellent | Complex setup | ✅ Expert use | Show-offs and masochists | Requires static compilation |

Advanced Docker Patterns for Node.js Production


Build optimization that works

Most Docker tutorials skip BuildKit - the optimization that cuts build time from 15 minutes to 2 minutes. Build caching and cache mounts make your CI/CD pipeline usable:

## syntax=docker/dockerfile:1.4
## Must be the first line - enables the BuildKit cache and bind mounts below

FROM node:22-slim AS base
## Install system dependencies once
RUN apt-get update && apt-get install -y \
    tini \
    && rm -rf /var/lib/apt/lists/*

## Development stage with dev dependencies
FROM base AS development
WORKDIR /app
## Cache mount for npm to persist between builds
RUN --mount=type=cache,target=/root/.npm \
    --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    npm ci

## Copy source code
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

## Build stage for compilation
FROM base AS builder
WORKDIR /app

## Copy dependency files first for better cache utilization
COPY package*.json ./

## Install everything with a persistent npm cache - the build needs devDependencies
RUN --mount=type=cache,target=/root/.npm \
    npm ci

## Copy source and build
COPY . .
RUN npm run build

## Drop devDependencies so the production stage copies only runtime deps
RUN npm prune --omit=dev && npm cache clean --force

## Production stage - minimal runtime
FROM base AS production
WORKDIR /app

## Create non-root user
RUN groupadd -r nodejs && useradd -r -g nodejs nodejs
RUN chown -R nodejs:nodejs /app
USER nodejs

## Copy only necessary files from builder
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./

## Health check for orchestration (no curl in slim - use a Node one-liner)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:' + (process.env.PORT || 3000) + '/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

EXPOSE 3000
ENTRYPOINT ["tini", "--"]
CMD ["node", "dist/server.js"]

Build Performance Results:

  • First build: 8-12 minutes (downloading dependencies)
  • Code-only changes: 30-60 seconds (cache hit on dependencies)
  • Dependency changes: 3-5 minutes (partial cache utilization)
  • Image size: 85-120MB (vs 400MB+ without optimization)

These performance improvements require proper BuildKit configuration and understanding Docker layer caching. You'll also want to read about multi-platform builds for ARM64 support and build secrets management for secure builds.

Container security beyond the basics

Running as non-root helps, but it's not enough. Real security needs defense in depth. I've seen apps get compromised through container escapes, privilege escalation, supply chain attacks, and malicious images:

Advanced Security Configuration:

FROM node:22-alpine AS production

## Update packages for security patches
RUN apk --no-cache upgrade

## Create dedicated user with locked account
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs -s /bin/false

## Set up application directory with proper permissions
WORKDIR /app
RUN chown -R nodejs:nodejs /app && chmod 750 /app

## Switch to non-root user early
USER nodejs

## Copy application files with proper ownership
COPY --from=builder --chown=nodejs:nodejs /app .

## Remove npm to prevent runtime package installation
## (npm ships with the node image itself, not apk, so delete its files directly)
USER root
RUN rm -rf /usr/local/lib/node_modules/npm /usr/local/bin/npm /usr/local/bin/npx
USER nodejs

## Run with read-only root filesystem
## Set in docker-compose.yml or Kubernetes:
## read_only: true
## tmpfs:
## - /tmp
## - /var/run

Security Hardening Checklist:

  • Non-root user with locked shell account
  • Read-only filesystem except for necessary writable directories
  • Dropped capabilities - remove CAP_NET_RAW, CAP_SYS_ADMIN
  • Security scanning in CI/CD pipeline using Trivy or Snyk
  • Minimal base image with updated packages following CIS benchmarks
  • Secrets via environment or mounted volumes, not in image layers
  • Package manager removal to prevent runtime modifications

Learn more about container security best practices, Docker security configuration, and supply chain security to protect against sophisticated attacks.

Environment-Specific Configuration Patterns

Production Node.js containers need different configurations for development, staging, and production. The configuration strategy must be secure, maintainable, and support easy environment promotion. Follow twelve-factor app methodology for environment configuration best practices:

Docker Compose Environment Strategy:

## docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: ${BUILD_TARGET:-production}
    image: myapp:${TAG:-latest}
    environment:
      - NODE_ENV=${NODE_ENV:-production}
      - LOG_LEVEL=${LOG_LEVEL:-info}
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
      - API_KEY=${API_KEY}
    ports:
      - "${PORT:-3000}:3000"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

## docker-compose.override.yml (development)
version: '3.8'

services:
  app:
    build:
      target: development
    environment:
      - NODE_ENV=development
      - LOG_LEVEL=debug
    volumes:
      - .:/app
      - /app/node_modules
    command: npm run dev

Environment Files:

## .env.development
NODE_ENV=development
LOG_LEVEL=debug
PORT=3000
DATABASE_URL=postgres://dev:dev@localhost:5432/myapp_dev
REDIS_URL=redis://localhost:6379

## .env.staging  
NODE_ENV=staging
LOG_LEVEL=info
PORT=3000
DATABASE_URL=postgres://staging_user:staging_pass@staging-db:5432/myapp_staging
REDIS_URL=redis://staging-redis:6379

## .env.production (example - use secrets management in real production)
NODE_ENV=production
LOG_LEVEL=warn
PORT=3000
## Actual values injected by Kubernetes secrets or AWS Parameter Store
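
Locally you can load these files with the standard dotenv package; in production the file shouldn't exist and real values come from the platform. A small sketch, assuming dotenv is installed as a dev dependency:

// Load .env.<NODE_ENV> in development only - production values are injected
// by the orchestrator, never baked into the image
import dotenv from 'dotenv';

if (process.env.NODE_ENV !== 'production') {
  dotenv.config({ path: `.env.${process.env.NODE_ENV || 'development'}` });
}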

Monitoring and Observability in Containers

Containerized applications require different monitoring approaches than traditional deployments. Container orchestration platforms provide infrastructure metrics, but your Node.js application needs proper logging, health checks, and performance monitoring:

Application Monitoring Setup:

// src/monitoring/health.js
import express from 'express';
import os from 'os';
import process from 'process';

const router = express.Router();

// Detailed health check for container orchestration
router.get('/health', async (req, res) => {
  const health = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    service: process.env.npm_package_name || 'unknown',
    version: process.env.npm_package_version || 'unknown',
    environment: process.env.NODE_ENV || 'unknown',
    uptime: Math.floor(process.uptime()),
    memory: {
      rss: `${Math.round(process.memoryUsage().rss / 1024 / 1024)}MB`,
      heapUsed: `${Math.round(process.memoryUsage().heapUsed / 1024 / 1024)}MB`,
      heapTotal: `${Math.round(process.memoryUsage().heapTotal / 1024 / 1024)}MB`,
      external: `${Math.round(process.memoryUsage().external / 1024 / 1024)}MB`
    },
    system: {
      loadAverage: os.loadavg(),
      cpuCount: os.cpus().length,
      platform: os.platform(),
      nodeVersion: process.version
    }
  };

  // Add dependency health checks
  try {
    // Check database connection
    await checkDatabase();
    health.database = 'healthy';
  } catch (error) {
    health.status = 'unhealthy';
    health.database = 'unhealthy';
    health.error = error.message;
    return res.status(503).json(health);
  }

  // Check external service connectivity
  try {
    await checkExternalServices();
    health.externalServices = 'healthy';
  } catch (error) {
    health.status = 'degraded';
    health.externalServices = 'unhealthy';
    health.warnings = [error.message];
  }

  const statusCode = health.status === 'healthy' ? 200 : 
                    health.status === 'degraded' ? 200 : 503;
  
  res.status(statusCode).json(health);
});

// Kubernetes-style readiness probe
router.get('/ready', (req, res) => {
  // Check if application is ready to receive requests
  const ready = {
    status: 'ready',
    timestamp: new Date().toISOString()
  };
  
  res.json(ready);
});

// Kubernetes-style liveness probe  
router.get('/live', (req, res) => {
  // Simple liveness check - if this responds, container is alive
  res.json({ 
    status: 'alive',
    timestamp: new Date().toISOString(),
    pid: process.pid
  });
});

export default router;
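
checkDatabase and checkExternalServices are referenced above but left to the app. A minimal sketch of what they might look like - the pg pool and UPSTREAM_HEALTH_URL are assumptions, not part of the original:

// src/monitoring/checks.js - hypothetical helpers used by the health route above
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

export async function checkDatabase() {
  // Cheap round-trip; throws if the pool can't reach Postgres
  await pool.query('SELECT 1');
}

export async function checkExternalServices() {
  // Node 18+ global fetch; 2s budget so the health check itself can't hang
  const res = await fetch(process.env.UPSTREAM_HEALTH_URL, {
    signal: AbortSignal.timeout(2000),
  });
  if (!res.ok) throw new Error(`upstream returned ${res.status}`);
}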

Structured Logging for Container Environments:
Production containers need proper logging that works with centralized logging systems, log aggregation tools, and monitoring platforms.

// src/utils/logger.js
import winston from 'winston';
import { randomUUID } from 'crypto';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: process.env.npm_package_name,
    version: process.env.npm_package_version,
    environment: process.env.NODE_ENV,
    containerId: process.env.HOSTNAME
  },
  transports: [
    // Console output for container logs - keep JSON in production so log
    // aggregators can parse it; pretty-print only for local dev
    new winston.transports.Console(
      process.env.NODE_ENV !== 'production'
        ? { format: winston.format.combine(winston.format.colorize(), winston.format.simple()) }
        : {}
    )
  ]
});

// Add request correlation ID middleware
export const correlationMiddleware = (req, res, next) => {
  req.correlationId = req.headers['x-correlation-id'] || randomUUID();
  
  res.setHeader('x-correlation-id', req.correlationId);
  
  // Add correlation ID to all log messages in this request
  req.logger = logger.child({ correlationId: req.correlationId });
  
  next();
};

export default logger;
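
Wiring the middleware in is one line per concern - a sketch assuming a plain Express app:

// src/app.js - hypothetical wiring for the logger and middleware above
import express from 'express';
import logger, { correlationMiddleware } from './utils/logger.js';

const app = express();
app.use(correlationMiddleware);

app.get('/orders', (req, res) => {
  // Every log line in this request carries the same correlationId
  req.logger.info('listing orders');
  res.json([]);
});

app.listen(3000, () => logger.info('listening on 3000'));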

Deployment Strategies and Rolling Updates

Container deployment isn't just docker run. Production applications require zero-downtime deployments, rollback capabilities, and traffic management. The deployment strategy depends on your orchestration platform but follows similar patterns:

Docker Swarm Rolling Update:

## docker-stack.yml
version: '3.8'

services:
  app:
    image: myapp:${TAG:-latest}
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 30s
        failure_action: rollback
        order: start-first
      rollback_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    ports:
      - "80:3000"

Kubernetes Deployment with Rolling Updates:

## k8s-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /live
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5

Blue-Green Deployment Script:

#!/bin/bash
## scripts/deploy.sh

set -e

IMAGE_TAG=${1:-latest}
ENVIRONMENT=${2:-production}

echo "Deploying ${IMAGE_TAG} to ${ENVIRONMENT}"

## Build and tag new image
docker build -t myapp:${IMAGE_TAG} .
docker tag myapp:${IMAGE_TAG} myapp:blue

## Health check function
check_health() {
  local service_name=$1   # informational only - the probe always hits localhost:3000
  local max_attempts=30
  local attempt=1
  
  while [ $attempt -le $max_attempts ]; do
    if curl -f http://localhost:3000/health > /dev/null 2>&1; then
      echo "Health check passed"
      return 0
    fi
    
    echo "Health check failed, attempt $attempt/$max_attempts"
    sleep 2
    ((attempt++))
  done
  
  echo "Health check failed after $max_attempts attempts"
  return 1
}

## Start blue environment
echo "Starting blue environment"
docker-compose -f docker-compose.blue.yml up -d

## Wait for blue environment to be healthy
if check_health blue; then
  echo "Blue environment is healthy, switching traffic"
  
  # Update load balancer to point to blue
  # This depends on your load balancer (nginx, HAProxy, etc.)
  ./scripts/switch-traffic.sh blue
  
  echo "Stopping green environment"
  docker-compose -f docker-compose.green.yml down
  
  # Tag blue as green for next deployment
  docker tag myapp:blue myapp:green
  
  echo "Deployment completed successfully"
else
  echo "Blue environment failed health check, rolling back"
  docker-compose -f docker-compose.blue.yml down
  exit 1
fi

These patterns make Docker deployment more reliable. Check out Docker production best practices, container orchestration patterns, monitoring containerized apps, Docker logging drivers, container security scanning, deployment automation strategies, and health check patterns for more advanced setups.

Frequently Asked Questions

Q

Alpine vs Slim vs Full images - which one doesn't suck?

A

Use node:22-slim. 75MB and doesn't randomly break your shit. Alpine looks tempting at 40MB but breaks native deps expecting glibc instead of musl. Full images are 400MB of dev tools you don't need in prod.

Use Alpine when: you really need to save space and have time to debug weird native dependency issues.

Use full images when: you're developing locally and need debugging tools, or your legacy app has weird dependencies that only work with the full image.

Q

My builds take forever, what's wrong?

A

You're copying files in the wrong order.

Docker rebuilds everything when any file changes. Copy package.json first, then install deps, then copy your code:

## ❌ Wrong - copies everything first, breaks cache on any file change
COPY . .
RUN npm install

## ✅ Right - dependencies cached separately from code
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

Use BuildKit with cache mounts for even faster builds:

RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev

Builds go from 15 minutes of me staring at logs wanting to throw my laptop to 2-3 minutes for code changes. Still takes forever when you change deps, but that happens way less often.

Q

Where do I put my API keys and database passwords?

A

Not in your Dockerfile.

Don't put secrets in environment variables either - docker inspect shows them. Use proper secrets management:

Docker Swarm:

echo "my-secret-value" | docker secret create db-password -

## Reference in docker-compose.yml
secrets:
  - db-password

Kubernetes:

kubectl create secret generic app-secrets --from-literal=db-password=secretvalue
## Mount as a volume or environment variable

For development with Docker Compose:

## Use a .env file (not committed to the repo)
env_file:
  - .env.local

Alternative: Use init containers to fetch secrets from AWS Parameter Store, HashiCorp Vault, or similar services at startup.

Q

My app can't connect to the database - ECONNREFUSED everywhere

A

You're trying to connect to localhost from inside a container.

That doesn't work because Docker networking isn't magic. Your database is in another container, so use the service name instead:

// ❌ Wrong - localhost doesn't work in containers
const connectionString = 'postgres://user:pass@localhost:5432/db'

// ✅ Right - use the Docker service name
const connectionString = 'postgres://user:pass@postgres:5432/db'

Or use environment variables:

## docker-compose.yml
environment:
  - DATABASE_HOST=postgres  # Service name becomes hostname

Q

My container runs fine locally but crashes in production with memory errors. Why?

A

Different memory limits between local Docker and production. Production orchestration platforms enforce memory limits that Docker Desktop ignores by default.

Check your production memory limits:

## Kubernetes
resources:
  limits:
    memory: "512Mi"  # This is enforced strictly

## Docker Compose
deploy:
  resources:
    limits:
      memory: 512M

Monitor memory usage in development:

docker stats  # Shows actual memory usage

Fix Node.js memory settings to match container limits:

ENV NODE_OPTIONS="--max-old-space-size=400"  # ~400MB heap for a 512MB container

Q

Should I run multiple Node.js processes in one container or use multiple containers?

A

One process per container. Docker containers work best with single processes. Use container orchestration for scaling, not clustering within containers.

❌ Don't do this:

CMD ["pm2-runtime", "start", "ecosystem.config.js"]  # Multiple processes in one container

✅ Do this instead:

## docker-compose.yml
services:
  app:
    image: myapp:latest
    deploy:
      replicas: 3  # Multiple containers, one process each

Exception: You can use PM2 in cluster mode within a container if your orchestration platform doesn't support horizontal scaling, but it's not recommended.

Q

How do I debug a Node.js app running in a Docker container?

A

Enable the Node.js inspector and expose the debug port:

## Development Dockerfile
EXPOSE 3000 9229
CMD ["node", "--inspect=0.0.0.0:9229", "server.js"]

## docker-compose.yml
ports:
  - "3000:3000"
  - "9229:9229"  # Debug port

Connect Chrome DevTools to localhost:9229 or use the VS Code debugger.

For production debugging (be careful):

## Get a shell inside the container (docker exec, not SSH)
docker exec -it container-name /bin/sh

## Or use kubectl for Kubernetes
kubectl exec -it pod-name -- /bin/sh

Q

Why is my containerized app slower than running locally?

A

Docker overhead is minimal (~2-5%).

The slowdown usually comes from:

  1. Resource limits: Production containers have CPU/memory restrictions
  2. Network routing: Extra network hops through Docker's networking stack
  3. Volume mounting: Development volume mounts are slower than copied files
  4. Wrong base image: Some Alpine packages are compiled for size, not speed

Benchmark with production-like resource limits:

## docker-compose.yml
deploy:
  resources:
    limits:
      cpus: '0.5'   # Half a CPU core
      memory: 512M  # 512MB RAM

Use multi-stage builds to avoid development overhead in production.

Q

How do I handle log files in Docker containers?

A

Don't write log files to disk in containers.

Write to stdout/stderr and let Docker handle log collection:

// ❌ Wrong - log files get lost when the container restarts
winston.add(new winston.transports.File({ filename: 'app.log' }));

// ✅ Right - log to console, Docker captures it
winston.add(new winston.transports.Console());

Configure log rotation at the Docker level:

## docker-compose.yml
logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"

For production, use centralized logging:

  • ELK Stack (Elasticsearch, Logstash, Kibana)
  • Fluentd with cloud providers
  • Structured JSON logs for better parsing

Q

My Docker build works on Intel Macs but fails on Apple Silicon (M1/M2). How do I fix this?

A

Multi-platform build issues with native dependencies.

Build for specific architectures:

## Build for Intel (x86_64)
docker buildx build --platform linux/amd64 -t myapp:latest .

## Build for Apple Silicon (arm64)
docker buildx build --platform linux/arm64 -t myapp:latest .

## Build for both platforms
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest .

For CI/CD, specify the platform in your build process:

## GitHub Actions example
- name: Build Docker image
  run: docker buildx build --platform linux/amd64 -t myapp:${{ github.sha }} .

Some native dependencies don't support ARM64 yet. Check package.json for platform-specific dependencies and find ARM-compatible alternatives.

Q

How do I update Node.js version in my existing containerized app?

A

Update the base image in your Dockerfile and test thoroughly:

## Before
FROM node:20-slim

## After
FROM node:22-slim

Check for breaking changes:

  1. Update the package.json engines field:

     "engines": { "node": ">=22.0.0" }

  2. Test locally first:

     docker build -t myapp:node22 .
     docker run --rm myapp:node22 npm test

  3. Update CI/CD pipelines to use the new Node version
  4. Deploy to staging before production

Major version changes (18→20→22) may require dependency updates. Run npm outdated and update packages for compatibility.

Q

Should I use Docker Compose or Kubernetes for Node.js apps?

A

Start with Docker Compose, migrate to Kubernetes when you need orchestration features.

Use Docker Compose when:

  • Small teams (1-5 developers)
  • Simple applications (1-5 services)
  • Single-server deployments
  • Development and testing environments

Migrate to Kubernetes when:

  • Multi-server deployments are required
  • You need advanced features: auto-scaling, service mesh, advanced networking
  • Team >10 people with dedicated DevOps
  • Compliance requirements demand managed container orchestration

Don't use Kubernetes just because "it's enterprise" - the complexity isn't worth it for simple applications.

Related Tools & Recommendations

tool
Similar content

Docker: Package Code, Run Anywhere - Fix 'Works on My Machine'

No more "works on my machine" excuses. Docker packages your app with everything it needs so it runs the same on your laptop, staging, and prod.

Docker Engine
/tool/docker/overview
100%
tool
Similar content

Node.js Performance Optimization: Boost App Speed & Scale

Master Node.js performance optimization techniques. Learn to speed up your V8 engine, effectively use clustering & worker threads, and scale your applications e

Node.js
/tool/node.js/performance-optimization
87%
tool
Similar content

Node.js Security Hardening Guide: Protect Your Apps

Master Node.js security hardening. Learn to manage npm dependencies, fix vulnerabilities, implement secure authentication, HTTPS, and input validation.

Node.js
/tool/node.js/security-hardening
83%
tool
Similar content

Node.js Production Deployment - How to Not Get Paged at 3AM

Optimize Node.js production deployment to prevent outages. Learn common pitfalls, PM2 clustering, troubleshooting FAQs, and effective monitoring for robust Node

Node.js
/tool/node.js/production-deployment
81%
integration
Similar content

Claude API Node.js Express: Advanced Code Execution & Tools Guide

Build production-ready applications with Claude's code execution and file processing tools

Claude API
/integration/claude-api-nodejs-express/advanced-tools-integration
79%
tool
Similar content

Podman: Rootless Containers, Docker Alternative & Key Differences

Runs containers without a daemon, perfect for security-conscious teams and CI/CD pipelines

Podman
/tool/podman/overview
79%
tool
Similar content

Node.js Microservices: Avoid Pitfalls & Build Robust Systems

Learn why Node.js microservices projects often fail and discover practical strategies to build robust, scalable distributed systems. Avoid common pitfalls and e

Node.js
/tool/node.js/microservices-architecture
77%
tool
Similar content

Node.js ESM Migration: Upgrade CommonJS to ES Modules Safely

How to migrate from CommonJS to ESM without your production apps shitting the bed

Node.js
/tool/node.js/modern-javascript-migration
73%
tool
Similar content

Node.js Production Troubleshooting: Debug Crashes & Memory Leaks

When your Node.js app crashes in production and nobody knows why. The complete survival guide for debugging real-world disasters.

Node.js
/tool/node.js/production-troubleshooting
73%
tool
Similar content

Express.js Production Guide: Optimize Performance & Prevent Crashes

I've debugged enough production fires to know what actually breaks (and how to fix it)

Express.js
/tool/express/production-optimization-guide
70%
integration
Similar content

Prometheus, Grafana, Alertmanager: Complete Monitoring Stack Setup

How to Connect Prometheus, Grafana, and Alertmanager Without Losing Your Sanity

Prometheus
/integration/prometheus-grafana-alertmanager/complete-monitoring-integration
70%
integration
Similar content

Claude API Node.js Express Integration: Complete Guide

Stop fucking around with tutorials that don't work in production

Claude API
/integration/claude-api-nodejs-express/complete-implementation-guide
68%
troubleshoot
Similar content

Fix MongoDB "Topology Was Destroyed" Connection Pool Errors

Production-tested solutions for MongoDB topology errors that break Node.js apps and kill database connections

MongoDB
/troubleshoot/mongodb-topology-closed/connection-pool-exhaustion-solutions
66%
howto
Similar content

Install Node.js & NVM on Mac M1/M2/M3: A Complete Guide

My M1 Mac setup broke at 2am before a deployment. Here's how I fixed it so you don't have to suffer.

Node Version Manager (NVM)
/howto/install-nodejs-nvm-mac-m1/complete-installation-guide
66%
tool
Similar content

Node.js Memory Leaks & Debugging: Stop App Crashes

Learn to identify and debug Node.js memory leaks, prevent 'heap out of memory' errors, and keep your applications stable. Explore common patterns, tools, and re

Node.js
/tool/node.js/debugging-memory-leaks
66%
tool
Similar content

Express.js - The Web Framework Nobody Wants to Replace

It's ugly, old, and everyone still uses it

Express.js
/tool/express/overview
64%
integration
Similar content

MongoDB Express Mongoose Production: Deployment & Troubleshooting

Deploy Without Breaking Everything (Again)

MongoDB
/integration/mongodb-express-mongoose/production-deployment-guide
62%
howto
Similar content

Mastering ML Model Deployment: From Jupyter to Production

Tired of "it works on my machine" but crashes with real users? Here's what actually works.

Docker
/howto/deploy-machine-learning-models-to-production/production-deployment-guide
60%
howto
Similar content

Mastering Docker Dev Setup: Fix Exit Code 137 & Performance

Three weeks into a project and Docker Desktop suddenly decides your container needs 16GB of RAM to run a basic Node.js app

Docker Desktop
/howto/setup-docker-development-environment/complete-development-setup
60%
tool
Similar content

Coolify: Self-Hosted PaaS Review & Heroku Alternative Savings

I've been using Coolify for 18 months and it's saved me $2,400 vs Heroku. Sure, I spent one Saturday debugging webhook timeouts, but most of the time it just wo

Coolify
/tool/coolify/overview
60%

Recommendations combine user behavior, content similarity, research intelligence, and SEO optimization