Google launched Storage Transfer Service in 2015 to solve the problem of moving massive amounts of data without losing your mind. You get two options: agentless transfers for cloud-to-cloud moves, and agent-based transfers for everything else.
Transfer Architecture Overview
The service operates in two modes: agentless cloud-to-cloud transfers that handle S3/Azure to GCS migrations directly through Google's infrastructure, and agent-based transfers that deploy Docker containers in your network to move on-premises data.
Cloud-to-Cloud Transfers (The Easy Ones)
Moving data from AWS S3 or Azure to Google Cloud Storage? This is the sweet spot. No agents to install, no firewall headaches. Just set it up and let Google handle the heavy lifting. Google doesn't even charge for the transfer itself, which is rare - though AWS or Azure will still bill you for egress on the way out.
The setup is straightforward if you know your way around IAM permissions. If you don't, plan for a couple of hours of "Access Denied" errors until you figure out which permissions you're missing - usually the Google-managed service agent lacking write access to the destination bucket, or your AWS credentials missing s3:ListBucket and s3:GetObject on the source. Stack Overflow has your back when Google's docs leave you hanging, and the troubleshooting guide covers most permission issues you'll encounter.
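If it helps to see the moving parts, here's a minimal sketch using the google-cloud-storage-transfer Python client. The project ID, bucket names, and credentials are all placeholders, and it assumes the service agent already has write access to the sink bucket:

```python
# Minimal sketch: one-off S3 -> GCS transfer job via the
# google-cloud-storage-transfer client (pip install google-cloud-storage-transfer).
# Project ID, bucket names, and AWS credentials below are placeholders.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

job = storage_transfer.TransferJob(
    project_id="my-gcp-project",
    description="one-off S3 to GCS migration",
    status=storage_transfer.TransferJob.Status.ENABLED,
    transfer_spec=storage_transfer.TransferSpec(
        aws_s3_data_source=storage_transfer.AwsS3Data(
            bucket_name="my-source-bucket",
            aws_access_key=storage_transfer.AwsAccessKey(
                access_key_id="<AWS_ACCESS_KEY_ID>",   # needs s3:ListBucket + s3:GetObject
                secret_access_key="<AWS_SECRET_KEY>",  # keep this out of source control
            ),
        ),
        gcs_data_sink=storage_transfer.GcsData(bucket_name="my-sink-bucket"),
    ),
)

created = client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(transfer_job=job)
)
# A job without a schedule doesn't start on its own; kick one run off explicitly.
client.run_transfer_job({"job_name": created.name, "project_id": "my-gcp-project"})
```

If the create call succeeds but the job stalls with permission errors, it's almost always the sink-bucket grant for the service agent, not your own account.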
Agent-Based Transfers (Where It Gets Complicated)
This is for moving data from your own servers, NFS shares, HDFS clusters, or anything that's not S3/Azure. You install a Docker container on your network that talks to Google's service. It costs $0.0125 per GB - so moving 100TB will run you $1,250 in service fees, plus whatever your source side charges for outbound data (AWS egress rates if you're pulling from S3-compatible storage, possibly nothing for on-prem).
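The arithmetic is worth sanity-checking before you commit. A quick sketch, where the $0.0125/GB rate comes from above and any egress rate is an assumption you should swap for your provider's actual pricing:

```python
# Back-of-the-envelope cost for an agent-based transfer, using the
# $0.0125/GB service rate quoted above. egress_per_gb is whatever your
# source side charges for outbound data (often $0 for on-prem; ~$0.09/GB
# is a common AWS figure but check current pricing -- it's an assumption).
def transfer_cost_usd(terabytes: float, egress_per_gb: float = 0.0) -> float:
    gigabytes = terabytes * 1000  # decimal TB, matching how the rate is quoted
    return gigabytes * (0.0125 + egress_per_gb)

print(transfer_cost_usd(100))        # 1250.0 -- service fee only
print(transfer_cost_usd(100, 0.09))  # 10250.0 -- egress dwarfs the service fee
```

Notice that at cloud egress rates, Google's fee is a rounding error next to what the source provider charges you on the way out.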
The agent setup is usually fine until your corporate firewall decides to block random ports. Then you're hunting down your network admin at 2am trying to figure out why your transfer died halfway through. Pro tip: the error messages are terrible, so good luck debugging what went wrong.
Real talk: agent version 1.18+ needs outbound HTTPS to *.googleapis.com on ports 443 and 80. Your security team will hate the wildcard domain and the broad Google IP ranges behind it - plan for three meetings with network ops before they'll open the ports. The Docker agent also gets memory-starved below 4GB of RAM and will silently crash after 72 hours of runtime without any useful error message.
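Before you burn a change-request cycle, it's worth a sixty-second preflight from the box that will run the agent. A stdlib-only sketch; the two hostnames are examples of *.googleapis.com endpoints, not an exhaustive list:

```python
# Quick preflight: can this host reach Google API endpoints on the ports
# the agent needs? Run it on the machine that will host the Docker agent.
import socket
import ssl

HOSTS = ["storagetransfer.googleapis.com", "storage.googleapis.com"]
PORTS = [443, 80]

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if port == 443:
                # Verify the TLS handshake too -- an intercepting proxy will
                # often fail here even though the raw TCP connect succeeds.
                ctx = ssl.create_default_context()
                with ctx.wrap_socket(sock, server_hostname=host):
                    pass
        return True
    except OSError:  # covers timeouts, DNS failures, and TLS errors
        return False

for host in HOSTS:
    for port in PORTS:
        status = "ok" if can_connect(host, port) else "BLOCKED"
        print(f"{host}:{port} -> {status}")
```

If 443 connects but the TLS check fails, that's usually a corporate MITM proxy - the agent will have the same problem, so raise it in meeting one, not meeting three.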
Performance Reality Check
Google says the service is "optimized for transfers over 1TB," which is marketing speak for "don't bother with small jobs." The parallel processing works well when it works, but transfer speeds are all over the map. Their time estimates are wildly optimistic - multiply by 3 for a realistic timeline based on what we've measured in production.
Production reality: on our 1Gbps connection, 10TB took 5 days instead of Google's estimated 2. Small files under 1MB transfer at roughly a tenth the speed of large files. The agent's memory usage spikes to 8GB+ during transfer startup, then it crashes with "disk full" errors even when you have plenty of space left - it's actually running out of inodes on systems with millions of tiny files.
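If you want a number to put in the project plan, here's a rough estimator built from our experience above. The 20% link-utilization default is simply the figure that reproduces our 10TB-in-5-days result on 1Gbps, and the 10x small-file penalty matches the under-1MB slowdown we saw - both are assumptions to tune, not gospel:

```python
# Rough transfer-timeline estimator. Defaults are fitted to the anecdote
# above (10 TB over 1 Gbps taking ~5 days implies ~20% link utilization);
# small_file_penalty models the ~10x slowdown for files under 1 MB.
def estimated_days(terabytes: float, link_gbps: float = 1.0,
                   utilization: float = 0.2,
                   small_file_penalty: float = 1.0) -> float:
    bits = terabytes * 1e12 * 8                                # decimal TB -> bits
    effective_bps = link_gbps * 1e9 * utilization / small_file_penalty
    return bits / effective_bps / 86_400                       # seconds -> days

print(f"{estimated_days(10):.1f} days")                         # ~4.6 -- close to our 5
print(f"{estimated_days(10, small_file_penalty=10):.1f} days")  # mostly tiny files: ~46
```

The takeaway from the second line: if your dataset is millions of small files, tar them up first or budget weeks, not days.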
Incremental transfers are decent if your data doesn't change much. But if you have a lot of small files or constantly changing data, you're better off with something else.
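For what it's worth, an incremental setup is just a recurring job - the service re-lists the source on each run and copies only objects that are new or changed. A sketch using the same Python client as above, with placeholder dates and buckets (auth omitted; wire in credentials as in the earlier S3 example):

```python
# Sketch: daily recurring transfer job for incremental syncs.
# Dates, project ID, and bucket names are placeholders.
from google.cloud import storage_transfer
from google.protobuf import duration_pb2
from google.type import date_pb2

client = storage_transfer.StorageTransferServiceClient()

job = storage_transfer.TransferJob(
    project_id="my-gcp-project",
    status=storage_transfer.TransferJob.Status.ENABLED,
    schedule=storage_transfer.Schedule(
        schedule_start_date=date_pb2.Date(year=2024, month=1, day=1),
        repeat_interval=duration_pb2.Duration(seconds=24 * 3600),  # once a day
    ),
    transfer_spec=storage_transfer.TransferSpec(
        # Auth omitted for brevity -- see the earlier example.
        aws_s3_data_source=storage_transfer.AwsS3Data(bucket_name="my-source-bucket"),
        gcs_data_sink=storage_transfer.GcsData(bucket_name="my-sink-bucket"),
    ),
)

client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(transfer_job=job)
)
```

The catch is that each run still pays the full listing cost on both sides - which is exactly why huge piles of small, churning files make this mode a bad fit.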