Production vs Development: Critical Command Differences

Database Migration Workflow

The braindead mistake that'll get you fired faster than using Comic Sans in presentations: running `prisma migrate dev` in production. Mix these up and you'll be explaining to your boss why user data vanished into the digital void.

Development Workflow: prisma migrate dev

In your local development environment, prisma migrate dev automatically handles three operations:

  • Generates migration files from schema changes
  • Applies migrations to your database
  • Regenerates the Prisma client

## Development - generates AND applies migrations
npx prisma migrate dev --name add-user-preferences

This command creates migration files in prisma/migrations/ and immediately applies them to your development database. Perfect for iteration, absolutely fucking dangerous for production.

Production Workflow: prisma migrate deploy

Production uses `prisma migrate deploy`, which only applies existing migrations. It never generates new migration files and never prompts. No surprises. Just runs what you tested locally.

## Production - applies existing migrations only
npx prisma migrate deploy

I once used migrate dev in production because I was half-asleep during a Friday deployment. Deleted a column that was still in use and had to explain to HR why the site was down.

OK, here's why migrate dev was the wrong command there: migrate deploy only reads existing migration files from your repo and applies the ones that haven't run yet, while migrate dev will happily generate and apply brand-new migrations against whatever database it's pointed at. If migration files don't exist, migrate deploy fails and you get to explain to your team why the deploy is broken, but at least your data survives.

The Correct Production Deployment Process

Production Deployment Flow

First, create migrations locally without applying them:

## Generate migration without applying it
npx prisma migrate dev --create-only --name optimize-user-queries

This creates the migration file for review without touching your local database: `--create-only` generates the file but never applies it.

Next, review the SQL that Prisma generated. Migration files live in prisma/migrations/[timestamp]_[name]/migration.sql. Always check the SQL before deployment:

-- Generated migration example
-- Migration: 20250921000000_optimize_user_queries

-- CreateIndex
CREATE INDEX "User_email_verified_idx" ON "User"("email", "verified");

-- AlterTable
ALTER TABLE "User" ADD COLUMN "preferences" JSONB;
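If you want an automated first pass before the human review, a grep for obviously destructive statements catches the worst offenders. A sketch (the helper name and pattern list are mine, and deliberately not exhaustive; it does not replace reading the SQL):

```shell
## Hypothetical review helper - flags obviously destructive statements
## in a generated migration file. Non-empty output means look twice.
check_destructive() {
  grep -nEi 'DROP TABLE|DROP COLUMN|TRUNCATE|DELETE FROM' "$1"
}

## Usage:
## check_destructive prisma/migrations/20250921000000_optimize_user_queries/migration.sql
```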

Apply locally for testing:

## Apply the reviewed migration locally
npx prisma migrate dev

Commit migration files:

git add prisma/migrations/
git commit -m "Add user preferences and email index"

Deploy to production through your CI/CD pipeline:

npx prisma migrate deploy
npx prisma generate

Why This Process Matters

Production migrations are irreversible, much like your career after you DROP TABLE Users. The review step catches destructive changes before they reach user data.

The same migration files apply identically across staging and production environments. No surprises.

Migration files in version control give you complete schema change history. Blame is trackable.

Docker Container

Environment Variables for Production

Production deployments require specific environment setup:

## Production DATABASE_URL
DATABASE_URL="postgresql://user:pass@prod-host:5432/app?sslmode=require&connection_limit=20"

## Optional - skip automatic client generation during migrate dev
PRISMA_MIGRATE_SKIP_GENERATE=true  # Run prisma generate explicitly instead

Connection limits matter: Production databases need connection pooling. Without an explicit connection_limit, every application instance opens Prisma's default-sized pool, and enough instances together will exhaust Postgres's max_connections faster than your hopes of a peaceful deployment.

We learned this the hard way when our monitoring showed 500 dead connections and angry users flooding customer support. The migration succeeded, but the database was too overwhelmed to serve traffic for 20 minutes.
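If your deploy tooling assembles the URL, a small guard that appends a connection_limit when it's missing is cheap insurance. A sketch, assuming the URL contains at most one '?' (the function name is made up):

```shell
## Hypothetical helper: append connection_limit to a DATABASE_URL,
## handling "already set", "has query params", and "has none".
with_connection_limit() {
  case "$1" in
    *connection_limit=*) printf '%s\n' "$1" ;;          # already set, leave alone
    *\?*) printf '%s&connection_limit=%s\n' "$1" "$2" ;; # append to existing params
    *)    printf '%s?connection_limit=%s\n' "$1" "$2" ;; # first param
  esac
}

with_connection_limit "postgresql://user:pass@prod-host:5432/app?sslmode=require" 20
## -> postgresql://user:pass@prod-host:5432/app?sslmode=require&connection_limit=20
```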

CI/CD Integration Best Practices

PostgreSQL Database

Most deployment failures happen because teams run migrations manually. Automated deployment pipelines prevent human error:

## Example GitHub Actions deployment
- name: Run Database Migration
  run: npx prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

- name: Generate Prisma Client
  run: npx prisma generate

Deploy order matters: Always run migrate deploy before `prisma generate`. Generation reads schema.prisma rather than the database, but the client it produces expects tables and columns that only exist once the migration has run.

When Deployments Fail

The most common production deployment error:

Error: P3005 The database schema is not empty.
Read more about how to baseline an existing production database

This happens when deploying Prisma Migrate to an existing database. The solution is database baselining, covered in the troubleshooting section below.

Never run `prisma db push` in production. This command bypasses migration history and can cause irreversible schema drift between environments. You'll regret it.
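If you'd rather have the pipeline enforce that rule than trust memory, a grep over your deploy scripts works. A sketch (the function name and paths are made up):

```shell
## Hypothetical CI guard: fail fast if `prisma db push` appears anywhere
## in your deploy scripts. Returns non-zero (and prints matches) if found.
guard_no_db_push() {
  ! grep -rn "prisma db push" "$1"
}

## Usage in CI (scripts/deploy/ is an assumed path):
## guard_no_db_push scripts/deploy/ || { echo "db push found in deploy scripts" >&2; exit 1; }
```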

Common Production Deployment Issues

Q

"P3005: The database schema is not empty" - How do I baseline an existing database?

A

This error shows up when Prisma discovers your database isn't empty and freaks out. It's basically Prisma saying "WTF is all this stuff?" You need to baseline the database to establish migration history.

Solution:

## 1. Create initial migration representing current state
mkdir -p prisma/migrations/0_init
npx prisma migrate diff \
  --from-empty \
  --to-schema-datamodel prisma/schema.prisma \
  --script > prisma/migrations/0_init/migration.sql

## 2. Mark this migration as already applied
npx prisma migrate resolve --applied 0_init

## 3. Now you can run migrate deploy
npx prisma migrate deploy

Q

"Cannot find module '@prisma/client'" after deployment - Why does the client disappear?

A

Your build dies because someone forgot to run prisma generate. Yes, it's always this stupid. No, you're not the first person to make this mistake.

Solutions:

  • Always run prisma generate after migrate deploy in your deployment script
  • Add to package.json postinstall script: "postinstall": "prisma generate"
  • For Docker: Include RUN npx prisma generate in your Dockerfile after copying prisma files

Q

"Too many connections" errors during deployment - How do I fix connection pooling?

A

Prisma creates new connections during migration deployment like a drunk person ordering shots. Without connection limits, you'll exhaust your database connection pool and everything goes to shit.

Solution:

## Add connection_limit to your DATABASE_URL
DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=10"

For serverless deployments, consider Prisma Accelerate for connection pooling.

Q

"Migration failed with 'relation does not exist'" - Schema pointing to wrong location?

A

This happens when your DATABASE_URL points to the wrong schema, especially with PostgreSQL.

Check your connection string:

## Make sure schema parameter is correct
DATABASE_URL="postgresql://user:pass@host:5432/db?schema=public"

Verify schema in database:

-- Check which schema your tables are in
SELECT schemaname, tablename FROM pg_tables WHERE tablename = 'User';

Q

How do I handle migration failures mid-deployment?

A

If a migration fails partway through, your database is probably fucked and you're about to have a very bad day.

Recovery steps:

  1. Don't panic (easier said than done when Slack is blowing up) - most issues are recoverable
  2. Check migration status: npx prisma migrate status
  3. Mark the failed migration as rolled back: npx prisma migrate resolve --rolled-back [migration_name]
  4. Fix the issue (usually SQL syntax or constraint violations)
  5. Re-run deployment: npx prisma migrate deploy

Q

Can I rollback a migration that went wrong?

A

Prisma doesn't provide automatic rollback because apparently they hate us. You need to manually create a new migration that reverses the changes while crying into your coffee.

Manual rollback process:

## 1. Create rollback migration
npx prisma migrate dev --create-only --name rollback_user_preferences

## 2. Edit the generated migration to reverse changes
## For example, if you added a column, drop it:
## ALTER TABLE "User" DROP COLUMN "preferences";

## 3. Apply the rollback
npx prisma migrate deploy

Q

"Could not connect to the database" during deployment - Network issues?

A

Database connectivity during deployment often fails due to network policies or firewall rules.

Troubleshooting checklist:

  • Verify DATABASE_URL is accessible from deployment environment
  • Check security groups/firewall rules for database port access
  • Test connection with a simple database client before running migrations
  • For cloud providers: Ensure the deployment environment has database access permissions

Q

How do I safely deploy breaking schema changes?

A

For changes that could cause data loss or application downtime, use the expand and contract pattern.

Example - changing column type:

  1. Add new column with desired type
  2. Deploy code that writes to both old and new columns
  3. Migrate data from old to new column
  4. Switch reads to new column
  5. Remove old column in final deployment

This ensures zero downtime and no data loss during complex schema changes.

Team Development: Avoiding Migration Conflicts

Team Collaboration

Multiple developers changing schemas simultaneously creates migration hell that will consume your soul. Understanding team development workflows prevents hours of migration debugging and workplace violence.

The Migration Conflict Scenario

Common conflict situation:

  1. Developer A adds user.preferences field, creates migration 001_add_preferences
  2. Developer B adds post.published_at field, creates migration 001_add_published_at
  3. Both push to feature branches simultaneously
  4. Git merge creates naming conflicts and migration ordering issues

## Conflicting migration files
prisma/migrations/20250921120000_add_preferences/
prisma/migrations/20250921120001_add_published_at/  # Nearly identical timestamp, unrelated feature

Two devs pushed migrations at the same time. Spent 3 hours untangling the mess.

Git Branching

Branch-Based Development Strategy

Anyway, here's the workflow that prevents this shit:

## Feature branch development
git checkout -b feature/user-preferences

## Make schema changes
## Edit prisma/schema.prisma

## Create migration without applying
npx prisma migrate dev --create-only --name add_user_preferences

## Test migration locally
npx prisma migrate dev

## Commit migration files
git add prisma/migrations/
git commit -m "Add user preferences schema"

Before merging to main:

## Rebase on latest main to catch conflicts early
git checkout main && git pull
git checkout feature/user-preferences
git rebase main

## If migration conflicts exist, resolve them now

Resolving Migration Conflicts

When multiple developers create migrations simultaneously, you'll get conflicts in the `prisma/migrations` directory.

Conflict resolution process:

  1. Reset local database to clean state using `prisma migrate reset`:

npx prisma migrate reset

  2. Delete conflicting migration folders:

## Remove your feature branch migration
rm -rf prisma/migrations/20250921120000_add_preferences/

  3. Create new migration on top of merged changes:

## Schema.prisma now has both changes from main and your branch
npx prisma migrate dev --name add_user_preferences_v2

  4. Test the combined migration with your test suite:

## Ensure both schema changes work together
npm run test:db

Migration Squashing for Clean History

When developing complex features, you may create multiple migrations during development. Squash migrations before merging to main.

Example scenario:

## During feature development
001_add_user_table
002_add_user_email_index
003_add_user_preferences
004_fix_user_constraints

Squash into single migration:

## 1. Note your schema changes
git diff main...HEAD prisma/schema.prisma

## 2. Reset to main and create single migration
git checkout main
git checkout -b feature/user-system-squashed

## 3. Apply all schema changes at once
## Edit prisma/schema.prisma with final changes

## 4. Create single migration
npx prisma migrate dev --name add_complete_user_system

Database Seeding in Team Environments

Production-like test data prevents migration conflicts from breaking local development.

Create consistent seed data:

// prisma/seed.ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Create consistent test users; skipDuplicates makes re-runs safe
  // when email carries a unique constraint
  await prisma.user.createMany({
    data: [
      { email: 'alice@test.com', name: 'Alice' },
      { email: 'bob@test.com', name: 'Bob' }
    ],
    skipDuplicates: true
  })
}

main().finally(() => prisma.$disconnect())

Run seeding after migrations:

npx prisma migrate dev
npx prisma db seed

This gives your local database consistent test data after each migration.

GitHub Actions

CI/CD Pipeline for Team Development

Automated testing catches migration issues before they reach production.

GitHub Actions example:

name: Database Migration Test
on: [pull_request]

jobs:
  test-migrations:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Run migrations on clean database
        run: npx prisma migrate deploy
        env:
          DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/test"

      - name: Generate Prisma client
        run: npx prisma generate

      - name: Run tests
        run: npm test

Communication Strategies

Team coordination prevents murder:

  1. Slack bombing: Announce schema changes so nobody gets surprised by broken builds
  2. Naming that doesn't suck: Use descriptive migration names, not 'migration_001' garbage
  3. Daily rebase ritual: Rebase every morning or suffer merge conflict hell
  4. Schema freeze for sanity: Nobody touches the database during critical deploys unless they want to explain to the CEO why the site is down

Handling Large Team Migrations

For teams with 5+ developers making frequent schema changes:

  1. Designated migration days: Batch schema changes into scheduled migration windows
  2. Schema ownership: Assign database table ownership to prevent overlapping changes
  3. Staging environment testing: Always test merged migrations in staging before production
  4. Migration review process: Require senior developer review for complex schema changes

Database review checklist:

  • Migration includes appropriate indexes for new queries
  • Foreign key constraints maintain data integrity
  • Default values handle existing data appropriately
  • Migration is reversible (if required by team policy)
  • Performance impact analyzed for large tables

The key to successful team development is early conflict detection and clear communication about schema changes. Automated testing and consistent workflows prevent most migration disasters.

Advanced Production Deployment Scenarios

Q

How do I deploy schema changes that require downtime?

A

Some migrations require application downtime to prevent data corruption during the transition.

Downtime-required scenarios:

  • Changing column types that require data transformation
  • Adding NOT NULL constraints to existing columns
  • Removing columns that the application actively uses

Maintenance window deployment process:

## 1. Deploy application in "maintenance mode" (budget 30 minutes for this alone)
## 2. Stop all application instances (pray nothing is stuck)
## 3. Create database backup (this will take longer than you think)
pg_dump $DATABASE_URL > backup_$(date +%Y%m%d_%H%M%S).sql

## 4. Run migration (hold your breath)
npx prisma migrate deploy

## 5. Verify migration success (if you're lucky)
npx prisma migrate status

## 6. Restart application with new code (and hope everything still works)

Budget 2-3x your estimated downtime. If you think it'll take 30 minutes, block 2 hours and thank me later.
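Step 3 deserves its own guard: a backup that silently wrote zero bytes is worse than no backup, because you'll trust it. A minimal sanity check (size only, it does not validate the dump's contents):

```shell
## Refuse to proceed if the dump file is missing or empty
verify_backup() {
  [ -s "$1" ] || { echo "Backup $1 is missing or empty - aborting" >&2; return 1; }
}

backup="backup_$(date +%Y%m%d_%H%M%S).sql"
## pg_dump "$DATABASE_URL" > "$backup"
## verify_backup "$backup" || exit 1
```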

Database Performance

Q

How do I handle migrations with large datasets that timeout?

A

Large table alterations will timeout spectacularly, leaving your database in a half-migrated state that makes everyone panic.

Strategy for large datasets:

## 1. Use --create-only to generate migration without applying
npx prisma migrate dev --create-only --name optimize_large_table

## 2. Modify generated SQL for large table safety
## Replace generated SQL with:
-- Instead of a single blocking change that adds, backfills, and
-- constrains "new_field" in one shot, use a staged approach. Run these
-- as separate statements, NOT wrapped in one transaction:
-- CREATE INDEX CONCURRENTLY refuses to run inside a transaction block,
-- and batching only helps if each batch commits.

-- Add column as nullable first (metadata-only, fast)
ALTER TABLE "LargeTable" ADD COLUMN "new_field" TEXT;

-- Create index concurrently (doesn't lock the table)
CREATE INDEX CONCURRENTLY "LargeTable_new_field_idx" ON "LargeTable"("new_field");

-- Update in batches to avoid long locks
-- (COMMIT inside a DO block requires PostgreSQL 11+)
DO $$
DECLARE
    batch_size INT := 10000;
    processed INT := 0;
BEGIN
    LOOP
        UPDATE "LargeTable"
        SET "new_field" = 'default_value'
        WHERE "new_field" IS NULL
        AND "id" IN (
            SELECT "id" FROM "LargeTable"
            WHERE "new_field" IS NULL
            LIMIT batch_size
        );

        GET DIAGNOSTICS processed = ROW_COUNT;
        EXIT WHEN processed = 0;
        COMMIT;

        -- Brief pause between batches
        PERFORM pg_sleep(0.1);
    END LOOP;
END $$;

-- Make column NOT NULL after data migration
ALTER TABLE "LargeTable" ALTER COLUMN "new_field" SET NOT NULL;

We tried migrating a 50GB table during peak hours once. The database locked up for 4 hours and customer support got 2,000 angry emails. Don't be us.

Q

Can I run migrations on read replicas or during high traffic?

A

Read replicas cannot accept write operations (migrations). High traffic requires careful timing.

Read replica considerations:

  • Primary database only: Migrations must run on the primary/master database
  • Replication lag: Allow time for changes to propagate to replicas
  • Connection management: Ensure application connects to primary during migration
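One way to enforce the primary-only rule in a deploy script is to check the host in DATABASE_URL before migrating. A sketch under two assumptions: the URL contains a single '@', and you know your primary's hostname (the one below is made up):

```shell
## Extract the host portion of a postgres URL: everything between the
## last '@' and the next ':' or '/'.
url_host() {
  printf '%s\n' "$1" | sed -E 's#.*@([^:/]+).*#\1#'
}

PRIMARY_HOST="prod-primary.internal"   ## assumption: your primary's hostname
## [ "$(url_host "$DATABASE_URL")" = "$PRIMARY_HOST" ] || {
##   echo "Refusing to migrate: DATABASE_URL does not point at the primary" >&2
##   exit 1
## }
```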

High traffic deployment:

## 1. Deploy during lowest traffic period (use analytics to identify)
## 2. Scale down non-essential background jobs
## 3. Monitor database performance during migration
## 4. Have rollback plan ready

## Example monitoring during migration (re-runs every 2 seconds)
while true; do
  psql "$DATABASE_URL" -c "SELECT schemaname, tablename, n_tup_ins, n_tup_upd, n_tup_del FROM pg_stat_user_tables WHERE schemaname = 'public'"
  sleep 2
done

Schema Monitoring

Q

How do I handle schema drift between environments?

A

Schema drift occurs when databases get out of sync with migration history.

Detecting schema drift:

## Compare schemas between environments
npx prisma db pull --url $STAGING_DATABASE_URL --schema staging.prisma
npx prisma db pull --url $PRODUCTION_DATABASE_URL --schema production.prisma

## Compare files
diff prisma/schema.prisma staging.prisma
diff prisma/schema.prisma production.prisma

Resolving drift:

## 1. Identify missing migrations
npx prisma migrate status --url $PRODUCTION_DATABASE_URL

## 2. For manual changes made outside Prisma:
npx prisma migrate diff \
  --from-schema-datamodel production.prisma \
  --to-schema-datamodel prisma/schema.prisma \
  --script > fix_drift.sql

## 3. Apply drift correction
psql $PRODUCTION_DATABASE_URL < fix_drift.sql

## 4. Mark migrations as applied if needed
npx prisma migrate resolve --applied [migration_name]

Q

What's the safest way to rename columns or tables?

A

Direct renames can break running application instances that haven't been updated yet.

Safe rename strategy (expand & contract):

-- Step 1: Add new column/table
ALTER TABLE "User" ADD COLUMN "full_name" TEXT;

-- Step 2: Deploy code that writes to both old and new columns
-- Application writes to both "name" and "full_name"

-- Step 3: Migrate existing data
UPDATE "User" SET "full_name" = "name" WHERE "full_name" IS NULL;

-- Step 4: Deploy code that reads from new column only
-- Application now uses "full_name" exclusively

-- Step 5: Remove old column
ALTER TABLE "User" DROP COLUMN "name";
Q

How do I handle migrations in multi-tenant applications?

A

Multi-tenant applications require migrations across multiple databases.

Shared database approach:

## Single migration across all tenants
npx prisma migrate deploy

Database-per-tenant approach:

#!/bin/bash
## Script to migrate all tenant databases

TENANT_DBS=(
  "postgresql://user:pass@host/tenant1"
  "postgresql://user:pass@host/tenant2"
  "postgresql://user:pass@host/tenant3"
)

for db_url in "${TENANT_DBS[@]}"; do
  echo "Migrating $db_url"
  DATABASE_URL="$db_url" npx prisma migrate deploy

  if [ $? -ne 0 ]; then
    echo "Migration failed for $db_url"
    exit 1
  fi
done

echo "All tenant migrations completed successfully"

Q

How do I test complex migrations before production?

A

Comprehensive testing prevents production disasters.

Migration testing checklist:

## 1. Test on production data copy
pg_dump $PRODUCTION_URL | psql $TESTING_URL

## 2. Time the migration
time npx prisma migrate deploy

## 3. Verify data integrity
npm run test:data-integrity

## 4. Check application functionality
npm run test:integration

## 5. Measure performance impact
npm run benchmark:database

Load testing during migration:

## Simulate production load during schema changes
k6 run --vus 100 --duration 5m load-test.js &
LOAD_TEST_PID=$!

npx prisma migrate deploy

## Check if application remained responsive
kill $LOAD_TEST_PID

Emergency Recovery

Q

How do I recover from a catastrophic migration failure?

A

A migration that fails mid-deployment can leave the database in an inconsistent state that makes you question your life choices. Here's how to dig out.

Emergency recovery process:

## 1. Stop all application instances immediately
## 2. Assess the damage
npx prisma migrate status

## 3. Restore from backup (if available)
psql $DATABASE_URL < backup_20250921_120000.sql

## 4. If no backup, manual recovery:
## - Identify which tables/columns are in inconsistent state
## - Manually fix data integrity issues
## - Mark the failed migration as rolled back so it can be fixed and re-applied
npx prisma migrate resolve --rolled-back [migration_name]

## 5. Fix the migration file and redeploy
npx prisma migrate deploy

The key to handling advanced scenarios is thorough testing, proper monitoring, having rollback plans ready, and accepting that every migration will take 3x longer than you estimated. Budget your time accordingly and keep coffee nearby.
