Check if your binlogs are enabled first or you'll waste half a day wondering why nothing works. Took me maybe 3 hours to figure this out because I'm an idiot and didn't read the prereqs.
SHOW VARIABLES WHERE Variable_name IN ('log_bin', 'gtid_mode', 'binlog_format');
If log_bin returns OFF, you can't use PlanetScale's Import Tool. You're stuck with manual migration and downtime. If gtid_mode is OFF, same problem. And if binlog_format isn't ROW, fix it or you'll get weird replication bugs that make no sense.
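If binlog_format or GTID mode is the problem and you're on MySQL 8.0, you can usually fix both without a restart (log_bin itself still needs a config change plus a restart). This is a rough sketch of the standard online procedure, not something to paste into production blind - try it on a replica first:
-- MySQL 8.0 sketch; SET PERSIST survives restarts.
SET PERSIST binlog_format = 'ROW';  -- only affects new sessions
-- GTID mode has to be walked up in stages on a live server:
SET PERSIST enforce_gtid_consistency = WARN;  -- watch the error log for violations
SET PERSIST enforce_gtid_consistency = ON;
SET PERSIST gtid_mode = OFF_PERMISSIVE;
SET PERSIST gtid_mode = ON_PERMISSIVE;
-- wait for SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count' to hit 0
SET PERSIST gtid_mode = ON;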
Here's What Nobody Tells You About PlanetScale Migration
Your foreign keys are completely fucked. All of them. PlanetScale doesn't support foreign key constraints because Vitess can't handle them across shards. Found this out when constraint violations started flooding our error logs after the first test migration.
Your stored procedures are also dead. Triggers too. If you have business logic in the database, start extracting it now because PlanetScale won't run any of it.
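Since the constraints have to go anyway, let MySQL write the DROP statements for you. A minimal sketch - your_database is a placeholder, and you should eyeball the output before running any of it:
-- Generates one ALTER TABLE ... DROP FOREIGN KEY per constraint in your schema.
SELECT CONCAT('ALTER TABLE `', TABLE_NAME, '` DROP FOREIGN KEY `', CONSTRAINT_NAME, '`;') AS drop_stmt
FROM information_schema.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
  AND TABLE_SCHEMA = 'your_database';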
Option 1: Import Tool (Works Great Until It Doesn't)
The PlanetScale Import Tool is your best bet if your binlogs work. It's "zero downtime" but there are still connection drops during cutover. I've done this maybe 6 times now - budget 30 seconds of connection drops, not zero. Their marketing oversells the "zero downtime" part.
How it actually works:
- Copies your data while serving traffic (this part actually works if your binlog retention holds up)
- Keeps syncing changes via binlog replication (can lag if you have heavy writes)
- Switches traffic over in one scary moment (you'll hold your breath)
- Keeps your old database synced for rollback (which you'll probably need)
Real requirements nobody mentions:
- Binary logging enabled (log_bin = ON) - check with that query above
- GTID mode enabled (gtid_mode = ON) - pain in the ass to enable on existing replicas
- Row-based binlog format (binlog_format = ROW) - statement-based won't work with Vitess
- Binlog retention of at least 3 days (expire_logs_days >= 3, or binlog_expire_logs_seconds >= 259200 on MySQL 8.0) - learned this when migration failed due to log gap
- Database user with specific permissions (they don't tell you SUPER privilege is needed) - see the check below
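Here's the pre-flight check I run against the source before kicking anything off. The GRANT at the end is my guess at a reasonable privilege set, not PlanetScale's official list - verify it against their current docs (and the SUPER point above) before copying it:
-- Everything the Import Tool cares about, in one pass:
SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'gtid_mode', 'enforce_gtid_consistency', 'binlog_format',
   'expire_logs_days', 'binlog_expire_logs_seconds');
-- How much binlog history is actually sitting on disk right now:
SHOW BINARY LOGS;
-- Example migration user (the names and privilege list are assumptions):
CREATE USER 'ps_import'@'%' IDENTIFIED BY 'use-a-real-password';
GRANT SELECT, RELOAD, PROCESS, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'ps_import'@'%';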
When it'll break:
- Your binlog retention is too short and a gap appears during migration
- Network hiccups between your database and PlanetScale's ingest - their status page won't help you here
- Your database has weird MySQL features that confuse Vitess (like spatial indexes)
- You're using MyISAM tables (convert to InnoDB first or it won't work at all - see the queries below)
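The last two of those are cheap to catch up front with information_schema (your_database is a placeholder, same as the queries later in this post):
-- Tables that aren't InnoDB (MyISAM, MEMORY, whatever) - convert these first:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
  AND ENGINE <> 'InnoDB';
-- Spatial indexes that tend to confuse Vitess:
SELECT TABLE_NAME, INDEX_NAME
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'your_database'
  AND INDEX_TYPE = 'SPATIAL';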
Option 2: pscale CLI (For Control Freaks)
The pscale CLI is decent when the web interface isn't cutting it. You can script it, which means you can also break it in creative ways.
Why you'd use this:
- The Import Tool doesn't work for your edge case
- You need to migrate multiple databases and want to script it
- You're testing migrations and need to iterate quickly
- You want detailed logs when everything goes wrong
Reality check:
- pscale database dump is just mysqldump under the hood with some flags
- Progress reporting is optimistic at best
- Network interruptions will make you restart from scratch
- The restore process is slower than you think
Actual CLI workflow:
# Install CLI and authenticate (this part never works smoothly)
brew install planetscale/tap/pscale
pscale auth login
# Create target database (pick your region wisely)
pscale database create prod-migration --region us-east
# Dump your source (this will take forever)
pscale database dump source-db main > migration.sql
# Pro tip: pipe through gzip or you'll run out of disk space
# Restore to PlanetScale (pray nothing times out)
pscale database restore-dump prod-migration main migration.sql
# Check if it actually worked
pscale shell prod-migration main
Where this goes wrong:
- Authentication expires mid-migration
- Dump fails with Error 2013: Lost connection for large databases
- Restore times out after 2 hours (yes, there's a timeout)
- You realize you picked the wrong region and have to start over
Option 3: Manual Migration (Accept Your Downtime Fate)
Sometimes you just need to mysqldump everything and deal with the downtime. Usually because your binlogs aren't configured or your database is small enough that you don't care.
When you're stuck with this:
- Your database is under 10GB and downtime is acceptable
- Your MySQL is old and doesn't have GTID support
- You tried the Import Tool and it broke in a weird way
- It's 3 AM and you just want to get this done
Real timelines (not marketing bullshit):
- Export: maybe 10-20 minutes per GB, could be way longer if you have lots of indexes
- Transfer: However long it takes to upload a huge file (your internet sucks more than you think)
- Import: 15-30 minutes per GB on PlanetScale, sometimes longer for no obvious reason
- Testing: at least 2 hours if you're being responsible, probably more like 4-6 hours
The reality of manual migration:
# This will take longer than you think
mysqldump --single-transaction --routines --triggers \
  --opt --verbose --lock-tables=false \
  production_db > migration.sql
# Upload to PlanetScale however you can
# (Their web interface has a file size limit)
# Import and cross your fingers
mysql -h your-planetscale-host.psdb.cloud \
  -u username -p new_database < migration.sql
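Before flipping anything over, at least compare row counts between the two sides. Run this on the source, run it again through pscale shell against the new database, and diff the output - crude, but it catches a truncated import immediately. your_database and orders are placeholders:
-- TABLE_ROWS is an estimate for InnoDB, good enough for spotting gaps:
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY TABLE_NAME;
-- For the tables that actually matter, do exact counts:
SELECT COUNT(*) FROM orders;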
What'll definitely go wrong:
- Export hangs on a big table and you restart with --where="1 LIMIT 1000000"
- Import fails halfway through because of some weird character encoding (the query below catches the usual suspects)
- You forgot to drop foreign keys first and everything breaks
- The connection times out during import because PlanetScale has limits
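Encoding failures usually come down to latin1 or utf8mb3 columns getting mangled on the way in. Worth checking before you dump rather than after the import dies halfway:
-- Columns that aren't utf8mb4 - the usual source of mid-import encoding errors:
SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_database'
  AND CHARACTER_SET_NAME IS NOT NULL
  AND CHARACTER_SET_NAME <> 'utf8mb4';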
Which Migration Method Will Screw You Over Least
| Your Situation | What You Should Do | What'll Actually Happen |
|---|---|---|
| < 10GB, can take downtime, no binlogs | Manual migration | Works fine, maybe 2-3 hours total |
| < 10GB, need zero downtime, have binlogs | Import Tool | Works great, like 30-60 minutes |
| 10GB - 1TB, need zero downtime, have binlogs | Import Tool | Takes all day, probably works |
| 10GB - 1TB, can take downtime, no binlogs | CLI migration in chunks | Weekend project, test first |
| > 1TB, need zero downtime | Import Tool + prayer | Budget a week, have rollback plan |
| > 1TB, can take downtime | You're fucked, call PlanetScale support | Seriously, get professional help |
What You Need to Check Before You Start
Run this query to see what you're dealing with:
-- Count your foreign keys (these all need to be handled in code)
SELECT COUNT(*) as foreign_key_count
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME IS NOT NULL
  AND TABLE_SCHEMA = 'your_database';
-- Find your stored procedures (these need to be rewritten)
SELECT ROUTINE_TYPE, ROUTINE_NAME
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA = 'your_database';
-- Check for triggers (also need rewriting)
SELECT TRIGGER_NAME, EVENT_MANIPULATION, EVENT_OBJECT_TABLE
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'your_database';
-- Find FULLTEXT indexes (not supported, need external search)
SELECT TABLE_NAME, INDEX_NAME
FROM information_schema.STATISTICS
WHERE INDEX_TYPE = 'FULLTEXT' AND TABLE_SCHEMA = 'your_database';
If any of these return results, you have work to do before migration. Check what Vitess actually supports because it's not everything MySQL can do.
The Shit That Always Goes Wrong
Your connection string will change. Update your config management, environment variables, and any hardcoded connections. Test this in staging first.
Query performance will be different. Vitess adds routing overhead. Some queries get faster (especially with sharding), others get slower. Monitor everything for the first 48 hours.
Your monitoring will break. Database metrics work differently in PlanetScale. Set up their monitoring before you migrate, not after.
Someone will try to connect to the old database. Keep it running for at least a week after migration. Set up alerts when anything connects to it so you can hunt down the stragglers.
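A cheap way to find the stragglers, assuming performance_schema is on for the old server (it is by default since MySQL 5.6): ask it who has connected since its last restart.
-- Every account that has connected to the old server, with connection counts.
-- Anything still racking up TOTAL_CONNECTIONS after cutover is a straggler.
SELECT USER, HOST, CURRENT_CONNECTIONS, TOTAL_CONNECTIONS
FROM performance_schema.accounts
WHERE USER IS NOT NULL
ORDER BY TOTAL_CONNECTIONS DESC;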
The migration will take longer than estimated. Whatever timeline you give your boss, multiply by 2. If it's mission-critical, multiply by 3. If it's end-of-quarter crunch time, just quit now and save yourself the pain.