
Production Deployment Strategies That Won't Destroy Your Weekend

You've got Lerna working locally, publishing packages manually like a caveman. Time to automate this before you accidentally publish dev credentials to production like an idiot.

The Authentication Nightmare That Kills Every First Deployment

Your first production deploy fails with npm ERR! 401 Unauthorized and you spend 4 hours debugging npm tokens. The Artifactory E401 mystery hits everyone - Lerna moved from npm_config__auth to _authToken and nobody updated the docs.

GitHub Actions setup that actually works:

env:
  NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

steps:
- name: "Configure npm"
  run: |
    npm config set //registry.npmjs.org/:_authToken $NPM_TOKEN
    npm config set registry https://registry.npmjs.org/

Don't use npm_config__auth - it's deprecated and will silently fail in ways that waste your entire Tuesday. The authentication token setup needs to be _authToken or you're debugging phantom auth errors.
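
If you'd rather keep this in the repo instead of CI config, npm expands environment variables in .npmrc, so a checked-in file can reference the token without ever containing it - a minimal sketch, assuming the same NPM_TOKEN secret as above:

//registry.npmjs.org/:_authToken=${NPM_TOKEN}
registry=https://registry.npmjs.org/

The token value still comes from the environment at runtime; committing the literal token defeats the point.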

Docker Production Builds That Don't Suck

Building Docker images in a monorepo is where most teams fuck up caching. You change one comment in Package A and suddenly Package B rebuilds from scratch, because your Dockerfile copies the entire workspace like an amateur.

The problem: Standard Docker builds treat monorepos like a single codebase. Change anything, rebuild everything. Your build takes 15 minutes when it should take 2.

Multi-stage builds that work:

# Stage 1: Install all dependencies
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json lerna.json ./
# Heads up: wildcard COPY flattens directories, so nested package manifests may need
# to be copied individually (or staged with a script) to keep their paths intact
COPY packages/*/package*.json ./packages/
RUN npm ci --only=production

# Stage 2: Build specific package
FROM deps AS build
COPY packages/api ./packages/api
# Note: building may need devDependencies; adjust the install above if the build fails
RUN npx lerna run build --scope=@company/api

This leans on Docker layer caching: the dependency layers only rebuild when a package manifest changes, so editing your API code only rebuilds the API stage. The TurboRepo Docker guide has more examples.
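
If you want to go further, BuildKit cache mounts keep npm's download cache around even when the dependency layer itself gets invalidated - a minimal sketch, assuming BuildKit is enabled (DOCKER_BUILDKIT=1):

# syntax=docker/dockerfile:1
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json lerna.json ./
# Reuse npm's download cache across builds instead of re-fetching every tarball
RUN --mount=type=cache,target=/root/.npm \
    npm ci --only=production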

Package Publishing Order Hell

We fucked up the publishing order once and got like 30-45 seconds of broken installs - might've been a full minute, felt like forever when Slack was exploding with complaints.

The publishing sequence that breaks everything:

  1. Update @company/utils from 1.2.0 to 1.3.0
  2. Update @company/ui to depend on @company/utils@^1.3.0
  3. Publish @company/ui@2.1.0 first (WRONG)
  4. Users install UI 2.1.0, get dependency error for utils 1.3.0
  5. Your phone rings at 11pm

Lerna handles this automatically with dependency topology sorting, but only if you configure it right:

(Diagram: Lerna package dependency graph)

{
  "version": "independent",
  "npmClient": "npm",
  "command": {
    "publish": {
      "conventionalCommits": true,
      "message": "chore(release): publish",
      "registry": "https://registry.npmjs.org"
    }
  }
}

The lerna publish command calculates the dependency graph and publishes in order. Package A depends on Package B? Lerna publishes B first. This is the main reason you use Lerna instead of manual npm publish.
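
In CI this usually boils down to a single step after the build - a hedged sketch of a GitHub Actions step, assuming the npm auth setup from earlier (from-package publishes whatever versions exist in package.json but aren't on the registry yet; plain lerna publish also works if you bump versions in the same job):

- name: Publish changed packages in dependency order
  run: npx lerna publish from-package --yes
  env:
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}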

Environment-Specific Configuration Without Hardcoding

Your app needs different configs for staging vs prod, but hardcoding environment variables in Docker images is how you leak API keys to customers.

Use build args and runtime environment injection:

ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

# Don't copy .env files into images
# (COPY --exclude needs a recent Dockerfile syntax; a .dockerignore entry works everywhere)
COPY --exclude=*.env* . .

Configure secrets at deploy time, not build time. The 12-factor app methodology covers this in detail. Your CI should inject secrets as environment variables, never commit them to images.
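
What that looks like at deploy time, roughly - one image, and the environment decides the config (image name and variables here are illustrative):

# Same image, different runtime configuration per environment
docker run -d \
  -e NODE_ENV=production \
  -e DATABASE_URL="$PROD_DATABASE_URL" \
  -e API_KEY="$PROD_API_KEY" \
  registry.example.com/company/api:1.4.2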

Monitoring Deployments Without Going Insane

You publish 8 packages at once. Which one broke production? Without proper monitoring, you're grep'ing logs at 2am trying to figure out if the error came from @company/auth@1.2.3 or @company/api@2.1.1.

Tag deployments in your monitoring so errors can be traced back to the package versions that shipped:
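
The exact mechanics depend on your monitoring tool, but the idea is to push the release identifier as a deploy marker - a rough sketch using the git tags Lerna creates and an illustrative webhook URL:

# Collect the package versions Lerna tagged on this commit
RELEASE_TAGS=$(git tag --points-at HEAD | tr '\n' ' ')

# Send a deploy marker to your monitoring tool (endpoint is illustrative)
curl -X POST "https://monitoring.example.com/api/deploy-markers" \
  -H "Content-Type: application/json" \
  -d "{\"revision\": \"$(git rev-parse HEAD)\", \"packages\": \"$RELEASE_TAGS\"}"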

Lerna generates changelogs automatically if you use conventional commits. The output tells you exactly what changed in each package, which helps track down issues. This is why teams enforce commit message formats - not bureaucracy, but debugging sanity.

When package publishing fails mid-process, you end up with some packages published and others stuck. The lerna publish from-git command recovers from partial failures by reading git tags to determine what needs publishing.
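
A rough recovery sequence after a partial publish looks like this:

# See which versions were tagged on the release commit
git tag --points-at HEAD

# Publish the tagged versions that never made it to the registry
npx lerna publish from-git --yes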

Advanced CI/CD Patterns and Production Gotchas

Your basic pipeline works for 3 packages.

Scale to 15+ packages and suddenly everything is on fire. Here's how to build CI/CD that survives contact with real production workloads.

Selective Package Deployment (The Right Way)

Publishing all packages because one changed is wasteful and dangerous. You touch a README in Package A, suddenly Package B deploys to production for no reason. This is how you accidentally break working systems.

Path-based triggering with GitHub Actions:

name: Deploy Changed Packages

on:
  push:
    paths:
      - 'packages/api/**'
      - 'packages/auth/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Detect changes
        uses: dorny/paths-filter@v2
        id: changes
        with:
          filters: |
            api:
              - 'packages/api/**'
            auth:
              - 'packages/auth/**'

      - name: Deploy API
        if: steps.changes.outputs.api == 'true'
        run: npx lerna run deploy --scope=@company/api

This approach prevents deploying unchanged packages. The paths-filter action detects which packages actually changed and only triggers deployment for those. No more accidental prod deployments.

But here's the catch: dependency changes still require coordinated deployments. If Package A depends on Package B and you update B's API, both packages need deployment even if A's code didn't change. Lerna handles this with the --include-dependencies flag.

Nx Cloud Integration for Enterprise Scale

Once your monorepo hits 20+ packages, build times become a problem. Local builds take 15 minutes. CI builds take 25 minutes. Developers start grabbing coffee during test runs, which means they're not catching errors early.

Nx Cloud distributed caching actually helps here:

- name: Restore Nx cache
  uses: actions/cache@v3
  with:
    path: node_modules/.cache/nx
    key: nx-cache-${{ hashFiles('**/package-lock.json') }}

- name: Run builds with Nx Cloud
  run: |
    npx nx run-many --target=build --all --parallel=3
  env:
    NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}

The Nx Cloud distributed execution splits your build across multiple agents. Package A builds on agent 1 while Package B builds on agent 2. Build time went from maybe 20+ minutes down to under 10 on a good day.

But (big but): Nx Cloud costs money after the free tier. For most teams, the local caching is sufficient. Only upgrade to distributed builds when your team is actively losing productivity to slow CI.

Zero-Downtime Deployment Strategies

Deploying 8 packages simultaneously without downtime requires coordination. You can't just kubectl apply and hope for the best - that's how you get 5 minutes of 500 errors while services restart.

Blue-green deployments with package versioning:

  1. Deploy new package versions to the "green" environment
  2. Run smoke tests against the green environment
  3. Switch traffic from blue to green
  4. Keep the blue environment as a rollback option

- name: Deploy to staging slot
  run: |
    az webapp deployment slot create --name ${{ env.APP_NAME }} --slot staging
    az webapp deploy --name ${{ env.APP_NAME }} --slot staging

- name: Run health checks
  run: |
    curl -f https://${{ env.APP_NAME }}-staging.azurewebsites.net/health

- name: Swap slots
  run: |
    az webapp deployment slot swap --name ${{ env.APP_NAME }} --slot staging

This works for web apps, but microservices need canary deployments (https://martinfowler.com/bliki/CanaryRelease.html). Deploy the new version to 5% of traffic, monitor error rates, gradually increase to 100%. Tools like Istio or Linkerd handle the traffic splitting.
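
A hedged sketch of the Istio side of that - a VirtualService splitting traffic 95/5, assuming a DestinationRule already defines the stable and canary subsets (names are illustrative):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
    - api
  http:
    - route:
        - destination:
            host: api
            subset: stable
          weight: 95
        - destination:
            host: api
            subset: canary
          weight: 5

Ramp the canary weight up while error rates stay flat, and drop it back to 0 the moment they don't.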

(Diagram: Kubernetes blue-green deployment)

Handling Secrets and Environment Configuration

Every team fucks up secrets management.

API keys in git repos, database passwords in Docker images, auth tokens committed to package.json. Don't be that team.

Secrets injection at runtime, not build time:

- name: Deploy with secrets
  run: |
    kubectl create secret generic app-secrets \
      --from-literal=DATABASE_URL=${{ secrets.DATABASE_URL }} \
      --from-literal=API_KEY=${{ secrets.API_KEY }}

    npx lerna run deploy --scope=@company/api
  env:
    NODE_ENV: production

Use your platform's secret management: GitHub Secrets, AWS Secrets Manager, Azure Key Vault.

Never hardcode secrets in images or config files.

Environment-specific package configuration:

{
  "scripts": {
    "deploy:staging": "npm run build && deploy --env=staging",
    "deploy:prod": "npm run build && deploy --env=production"
  }
}

Different environments need different configurations.

Your staging API points to staging database, prod API points to prod database. Use environment variables for this, not separate codebases.

Rollback Strategies When Everything Goes Wrong

Your deployment passes all tests but breaks production anyway. Users can't log in, orders aren't processing, and your CEO is asking uncomfortable questions. You need rollback strategies that work in under 5 minutes.

Git-based rollbacks with Lerna:

# Find the last working version
git log --oneline --grep="chore(release): publish"

# Rollback to previous release
git revert <commit-hash>
npx lerna publish from-git --yes

This reverts the version bump and publishes the previous working versions. Fast but not instantaneous - npm takes around 30 seconds to propagate new versions.

Infrastructure rollbacks for immediate relief:

  • Container orchestrators: kubectl rollout undo deployment/api
  • App services: az webapp deployment slot swap (back to previous slot)
  • Load balancers: route traffic to the previous version's cluster

Keep previous deployment artifacts available for quick rollbacks. The deployment artifacts from your CI should be downloadable for emergency recovery.
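
In GitHub Actions, actions/upload-artifact is the cheap way to keep those around - a minimal sketch (paths and retention are illustrative):

- name: Archive build output for rollback
  uses: actions/upload-artifact@v3
  with:
    name: build-${{ github.sha }}
    path: packages/*/dist
    retention-days: 30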

Cost Optimization for Large Monorepos

Nx Cloud costs add up fast.

GitHub Actions minutes aren't free for private repos. Docker registry storage grows every deployment. Your AWS bill includes line items you don't recognize.

Optimize CI/CD costs without breaking functionality - and start by actually monitoring them. Our AWS bill went from a few hundred to over 2 grand because someone left debug logging on in production and nobody noticed for weeks.
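
One cheap win, as a sketch: cancel superseded runs so you stop paying for builds of commits nobody cares about anymore. In GitHub Actions that's a workflow-level concurrency group:

# Cancel in-flight runs for the same branch when a newer commit lands
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true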

CI/CD Deployment FAQ - The Problems You'll Actually Hit

Q: Why does `lerna publish` randomly fail with "Error: Command failed with exit code 128: git diff --name-only"?

A: Your CI checkout is shallow by default. Lerna needs full git history to figure out which packages changed. Add fetch-depth: 0 to your checkout action or you'll get cryptic git errors that make no sense.

- uses: actions/checkout@v3
  with:
    fetch-depth: 0  # Required for Lerna change detection

This happens because GitHub Actions shallow clones by default for performance. Lerna compares against previous releases using git tags, which aren't available in shallow clones.

Q: How do I deploy only packages that actually changed without breaking dependencies?

A: Use lerna changed to detect modified packages, then deploy them plus their dependents. Don't just deploy what changed - deploy what's affected.

# Wrong - misses dependent packages
npx lerna run deploy --since HEAD~1

# Right - includes dependencies
npx lerna run deploy --since HEAD~1 --include-dependencies

The dependency graph detection is why you use Lerna. If Package A depends on Package B and B changes, both need deployment even if A's code is unchanged.

Q: My Docker builds take forever because they rebuild everything when one file changes. How do I fix this?

A: Your Dockerfile is copying the entire workspace before installing dependencies. Fix the layer ordering to cache dependencies separately from source code:

# Bad - changes to any file invalidate the dependency cache
COPY . .
RUN npm install

# Good - dependency cache survives source changes
COPY package*.json lerna.json ./
COPY packages/*/package*.json ./packages/
RUN npm ci --only=production
COPY . .

Use Docker BuildKit and mount caches for even better performance. The TurboRepo Docker guide has production examples.

Q: Why do I get npm authentication errors in CI but not locally?

A: Your CI environment uses different npm configuration than your local machine. Most auth errors come from:

  1. Wrong token type: Use _authToken not npm_config__auth (deprecated)
  2. Missing registry config: Explicitly set registry URL in CI
  3. Token scope issues: Ensure your npm token has publish permissions

- name: Configure npm authentication
  run: |
    npm config set //registry.npmjs.org/:_authToken ${{ secrets.NPM_TOKEN }}
    npm config set registry https://registry.npmjs.org/
    npm whoami  # Verify authentication works

The npm token documentation explains the different token types. Automation tokens are write-only and can't read user info, which breaks some Lerna commands.

Q: How do I handle version conflicts when merging development to main?

A: Use semantic versioning pre-releases for development and graduate them on main. Don't manually merge version bumps - let Lerna manage the versions.

Development branch workflow:
# Creates 1.2.0-beta.0, 1.2.0-beta.1, etc.
npx lerna version --conventional-prerelease --preid beta --yes

Main branch workflow:
# Converts 1.2.0-beta.1 to 1.2.0
npx lerna version --conventional-graduate --yes

The prerelease documentation covers this workflow in detail. Don't merge package.json version changes between branches - Lerna handles version progression automatically.

Q: My deployment succeeds but applications crash on startup. How do I debug this?

A: Enable debug logging and check for common production deployment issues:

# Enable verbose Lerna logging
npx lerna run deploy --loglevel silly

# Check for missing environment variables
npx lerna run health-check --parallel

Common production startup failures:

  • Missing environment variables (database URLs, API keys)
  • Wrong Node.js version in production vs development
  • Dependencies installed with --only=production missing devDependencies used at runtime
  • File paths that work locally but break in containers

Use health check endpoints to verify each service starts correctly before switching traffic to new deployments.
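
A minimal sketch of that gate, assuming each service exposes a /health endpoint (URLs are illustrative):

for url in \
  https://api-staging.example.com/health \
  https://auth-staging.example.com/health; do
  # Fail the pipeline step if any service doesn't come up healthy
  curl --fail --silent --show-error --max-time 5 "$url" || exit 1
done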

Q: How do I rollback when a deployment breaks production?

A: Have multiple rollback strategies depending on the failure type:

Fast infrastructure rollback (< 2 minutes):
# Kubernetes
kubectl rollout undo deployment/api-service

# Docker containers
docker service rollback api-service

NPM package rollback (5-10 minutes):
# Revert git changes and republish
git revert <release-commit>
npx lerna publish from-git --yes

Keep previous deployment artifacts available. The GitHub Actions artifacts should include Docker images and build outputs for emergency recovery.

Q: Why does Nx Cloud cost so much and is it worth it?

A: Nx Cloud pricing hits hard when you exceed free tier limits. You pay for compute time, cache storage, and distributed task execution. Teams with large monorepos can hit $500+/month without breaking a sweat.

Worth it when:

  • Build times exceed 15+ minutes locally
  • CI builds block developer productivity
  • Team has 10+ developers working on monorepo daily
  • Build parallelization saves more than the cost

Not worth it when:

  • Total build time under 5 minutes
  • Small team (< 5 developers)
  • Local caching already provides adequate performance
  • Budget constraints outweigh time savings

Check Nx Cloud pricing and monitor your usage. Most teams start with local caching and only upgrade when build times become painful enough to justify the cost.
