AWS Application Migration Service copies your servers to AWS. That's it. It installs an agent on your source machines, continuously replicates data to AWS, and eventually launches copies of your servers as EC2 instances.
Real talk: It's faster than doing it manually, but anyone selling you a "70% reduction in migration time" is probably measuring against someone doing file-by-file copies with SCP. Your mileage will vary dramatically based on how much legacy crap you're dragging along.
What Actually Works Well
The continuous replication approach is solid. While your production servers keep running, MGN quietly syncs changes to AWS. When you're ready to cut over, the downtime is usually measured in minutes, not hours. That part genuinely works as advertised.
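If you want to script the "are we actually ready to cut over" check instead of refreshing the console, the MGN API exposes per-server replication state. Here's a minimal boto3 sketch - the region and the filter are assumptions, and the field names follow the DescribeSourceServers API as documented, so verify against the current SDK before trusting it.

```python
# Minimal sketch: poll MGN for replication state before scheduling a cutover window.
# Field names follow the DescribeSourceServers API; double-check against current boto3 docs.
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")  # assumed region

def replication_summary():
    """Print hostname, replication state, and lag for every non-archived source server."""
    token = None
    while True:
        kwargs = {"filters": {"isArchived": False}, "maxResults": 50}
        if token:
            kwargs["nextToken"] = token
        page = mgn.describe_source_servers(**kwargs)
        for server in page.get("items", []):
            hints = server.get("sourceProperties", {}).get("identificationHints", {})
            repl = server.get("dataReplicationInfo", {})
            print(
                f"{hints.get('hostname', server['sourceServerID']):<30} "
                f"state={repl.get('dataReplicationState', 'UNKNOWN'):<12} "
                f"lag={repl.get('lagDuration', 'n/a')}"
            )
        token = page.get("nextToken")
        if not token:
            break

if __name__ == "__main__":
    replication_summary()
```

Run it before you book the cutover window; anything not sitting in CONTINUOUS with near-zero lag is a conversation you want to have before the maintenance window, not during it.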
For VMware shops, the agentless option via vCenter 6.7+ is a lifesaver if your security team freaks out about installing agents on production boxes. Though honestly, if they're that paranoid about a replication agent, wait until they see what happens when you try to explain egress charges.
OS Support (The Good and Bad News)
AWS MGN supports Windows Server 2016-2022 and most modern Linux distributions, but drops Windows Server 2003 support in 2026
MGN supports most operating systems you actually care about. Windows Server 2016-2022 works fine. Modern Linux distros (RHEL 7+, Ubuntu 18.04+, Amazon Linux 2) are solid.
The gotcha: Windows 2003 support is getting axed in 2026. If you're still running Server 2003 in production in 2025, migration is the least of your problems - that thing should have been put out of its misery years ago.
Network Configuration Hell
Network Requirements: outbound TCP 443 and 1500 to AWS endpoints, plus a staging area subnet with S3/EC2/IAM access
Here's where things get interesting. MGN needs to talk home during replication, which means opening specific ports (443 and 1500) to AWS endpoints. Your firewall team will want IP ranges. AWS will tell you to use FQDNs. Your firewall team will insist on IPs. AWS will change the IPs. You'll spend a Tuesday morning troubleshooting why replication broke.
Common network failures:
- ECONNREFUSED on 443 - firewall blocking AWS endpoints (again)
- EHOSTUNREACH - routing tables fucked up after "minor network changes"
- SSL_ERROR_SYSCALL - corporate proxy intercepting SSL traffic
- Agent log shows Unable to resolve mgn-dr-gateway-1234567890.us-east-1.elb.amazonaws.com - DNS issues
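Before the finger-pointing starts, run a quick preflight from the source network. This sketch just checks DNS resolution and TCP reachability on the ports MGN cares about - the hostnames and the replication server IP are placeholders, so swap in your region's actual MGN/S3 endpoints and your staging subnet address. It won't catch a proxy silently mangling TLS, but it rules out the dumb stuff.

```python
# Rough preflight check to run from the source network before opening a firewall ticket.
# Endpoints below are placeholders; substitute your region's MGN/S3 endpoints and the
# replication server address in your staging subnet.
import socket

CHECKS = [
    ("mgn.us-east-1.amazonaws.com", 443),   # MGN service endpoint (example region)
    ("s3.us-east-1.amazonaws.com", 443),    # staging-area S3 access
    ("10.0.42.15", 1500),                   # replication server in the staging subnet (placeholder IP)
]

def check(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        addr = socket.gethostbyname(host)   # catches the DNS failures first
    except socket.gaierror as exc:
        return f"DNS FAIL ({exc})"
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return f"OK ({addr})"
    except OSError as exc:                  # ECONNREFUSED, EHOSTUNREACH, timeouts
        return f"TCP FAIL ({exc})"

for host, port in CHECKS:
    print(f"{host}:{port} -> {check(host, port)}")
```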
The staging area lives in your AWS account, so you're paying for those instances while replication happens. Budget for it - that t3.micro staging server for your 500GB database is going to cost you about $15/month, plus EBS storage costs. MGN itself doesn't charge per-server fees, but you pay for all the AWS infrastructure during replication.
Integration Reality Check
MGN plugs into Migration Hub for tracking, which is useful if you have dozens of servers to migrate. The dashboard actually works, unlike some AWS consoles. CloudWatch integration gives you replication health metrics, though you'll probably set up alerts after the first time replication silently fails overnight.
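If you'd rather not discover a stalled server at cutover, one low-effort option (not official MGN tooling) is to poll the API yourself and push a count of servers that aren't in CONTINUOUS replication as a custom CloudWatch metric, then alarm on anything above zero. The namespace and metric name below are made up for this example; the state values follow the dataReplicationState field as documented.

```python
# Sketch: push a "servers not in CONTINUOUS replication" count to CloudWatch so a
# standard alarm (threshold > 0) pages you instead of you finding out at cutover.
# Custom/MGN and ServersNotContinuous are hypothetical names for this example.
import boto3

mgn = boto3.client("mgn")
cloudwatch = boto3.client("cloudwatch")

def publish_stalled_count() -> int:
    stalled = 0
    token = None
    while True:
        kwargs = {"filters": {"isArchived": False}}
        if token:
            kwargs["nextToken"] = token
        page = mgn.describe_source_servers(**kwargs)
        for server in page.get("items", []):
            state = server.get("dataReplicationInfo", {}).get("dataReplicationState")
            if state != "CONTINUOUS":
                stalled += 1
        token = page.get("nextToken")
        if not token:
            break
    cloudwatch.put_metric_data(
        Namespace="Custom/MGN",
        MetricData=[{"MetricName": "ServersNotContinuous", "Value": stalled, "Unit": "Count"}],
    )
    return stalled

if __name__ == "__main__":
    print(f"Servers not in CONTINUOUS replication: {publish_stalled_count()}")
```

Schedule it every few minutes (cron, Lambda, whatever you already have) and the overnight silent failure becomes a page instead of a morning surprise.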
Launch templates are better than the old blueprint system, but you'll still need to manually configure security groups, IAM roles, and probably fix whatever broke during the conversion process. Plan for manual cleanup work post-migration.
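Some of that cleanup can be scripted ahead of time. MGN's launch settings sit on top of an EC2 launch template, so you can pin the instance sizing behavior and bake in security groups and an instance profile before the first test launch. The IDs and names below are placeholders, and the field names follow the MGN and EC2 APIs as I understand them - check the current boto3 docs before trusting this sketch.

```python
# Sketch: wire a specific security group and instance profile into the EC2 launch
# template behind an MGN source server, so test/cutover instances come up with the
# right settings instead of defaults. All IDs and names below are placeholders.
import boto3

mgn = boto3.client("mgn")
ec2 = boto3.client("ec2")

SOURCE_SERVER_ID = "s-1234567890abcdef0"         # placeholder
SECURITY_GROUP_IDS = ["sg-0abc123def4567890"]    # placeholder
INSTANCE_PROFILE = "app-server-role"             # placeholder

# Keep the source instance type instead of letting MGN right-size it, and copy tags.
mgn.update_launch_configuration(
    sourceServerID=SOURCE_SERVER_ID,
    targetInstanceTypeRightSizingMethod="NONE",
    copyTags=True,
)

# MGN launch settings are backed by an EC2 launch template; add security groups and
# the instance profile there, then make the new version the default.
launch_cfg = mgn.get_launch_configuration(sourceServerID=SOURCE_SERVER_ID)
template_id = launch_cfg["ec2LaunchTemplateID"]

version = ec2.create_launch_template_version(
    LaunchTemplateId=template_id,
    SourceVersion="$Latest",
    LaunchTemplateData={
        "SecurityGroupIds": SECURITY_GROUP_IDS,
        "IamInstanceProfile": {"Name": INSTANCE_PROFILE},
    },
)
ec2.modify_launch_template(
    LaunchTemplateId=template_id,
    DefaultVersion=str(version["LaunchTemplateVersion"]["VersionNumber"]),
)
```

Doing this per server gets old fast, so loop it over describe_source_servers output once you trust the settings on one or two test launches.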
Useful Resources for Real Implementation:
- AWS MGN Troubleshooting Guide - When shit breaks at 2 AM
- Common Replication Errors - Error codes you'll actually see
- Network Requirements - Firewall rules that matter
- VMware Agentless Setup - When vCenter hates you
- AWS re:Post MGN Discussions - Real user experiences
- MGN Best Practices Blog - Actual implementation tips
- Migration Hub Setup - Progress tracking that works
- CloudWatch Metrics Guide - Monitoring replication health
- Launch Settings Configuration - Getting your instances right
- AWS Direct Connect for Migrations - When bandwidth matters