
Emergency Troubleshooting - Fix It Now

Q

Discovery Client won't start after Windows reboot - "connection refused" error?

A

Solution: The discovery client service has a startup dependency that fails. Run these commands in PowerShell as administrator:

Stop-Service "Migration Center Discovery Client"
## Set-Service only supports delayed start in PowerShell 7+ (-StartupType AutomaticDelayedStart);
## sc.exe works on every Windows version:
sc.exe config "Migration Center Discovery Client" start= delayed-auto
Start-Service "Migration Center Discovery Client"

The delayed start gives Windows time to initialize network dependencies. This is a known issue with version 6.2.0 that Google still hasn't bothered fixing. I've seen this kill entire Discovery Client deployments during Windows patching cycles - particularly painful when you've got 50+ discovery clients across multiple data centers all failing simultaneously.

Q

Authentication keeps failing with "roles/migrationcenter.discoveryClient" error?

A

Root cause: Your organization has disabled service account key creation at the org policy level.

Fix:

  1. Ask your Org Policy Administrator to check IAM > Organization Policies
  2. Find policy "Disable service account key creation"
  3. Set enforcement to Off for your project
  4. Alternative: Have admin create the service account manually and provide you the key file

Q

Discovery stops working after scanning exactly 2,000 servers?

A

The 2,000 server limit? That's not a recommendation, that's where everything dies. Discovery client becomes unstable beyond 2,000 servers in version 6.2.0. Solutions:

  • Immediate: Deploy multiple discovery clients, each scanning <2,000 servers
  • Proper fix: Upgrade to discovery client 6.3.7+ which removes this limitation
  • Workaround: Use IP range exclusions to stay under the limit

Q

VMware guest collection fails with timeout on Windows VMs?

A

Common cause: Windows Management Instrumentation (WMI) query failures with Win32_Service.

Solution steps:

  1. On target Windows VMs, run: services.msc
  2. Verify "Windows Management Instrumentation" service is running
  3. Run WMI repair: winmgmt /verifyrepository
  4. If corrupt: winmgmt /resetrepository
  5. Restart WMI service

This issue was fixed in version 6.3.6, but older clients still have problems.

Q

Cost estimates are 40-60% lower than actual bills?

A

Yeah, this is normal Migration Center being Migration Center. Their cost estimates are consistently optimistic, like a used car salesman promising this Honda has "never seen rain". Plan for 60% higher costs unless you enjoy budget meetings where you explain why your 'simple migration' costs twice what you promised:

  • Base estimate: Migration Center number (cute, but wrong)
  • Network reality: Add 60% for egress costs they never mention
  • Learning curve tax: Add $50K+ for training/consulting (minimum)
  • Initial over-provisioning: Add 25% because nobody optimizes on day one
  • Murphy's Law: Add 20% for the shit that always breaks

Real example: $47K estimate → $140K actual first cloud bill → 6 months of explaining to management.

Q

IP range scan stays "In Progress" forever?

A

Cause: Discovery client service was restarted during an active scan, leaving it in a stuck state.

Fix:

  1. Stop the discovery client service
  2. Delete scan database: C:\ProgramData\Google\mcdc\data\scans.db
  3. Restart discovery client service
  4. Recreate the IP range scan

Prevention: Never restart the service during active scans.

Q

Linux guest collection reports wrong disk space?

A

They've been slowly fixing this mess over several releases:

  • ZFS filesystems: Fixed in version 6.3.6
  • Mount points with spaces: Fixed in version 6.3.6
  • Network mounts included: Fixed in version 6.3.4
  • Logical volume miscalculation: Fixed in version 6.3.4

Solution: Upgrade to discovery client 6.3.7+ for accurate disk reporting.

Q

Discovery finds most servers but misses critical applications?

A

Expected behavior. Automated discovery has blind spots:

Always missed:

  • Custom applications on non-standard ports
  • Services behind complex load balancers
  • Applications that only run during specific time windows
  • Anything communicating through custom protocols

Solution: Use hybrid approach:

  1. Let automated discovery find 80% of infrastructure
  2. Manually document the 20% of critical custom applications
  3. Import additional data via CSV upload for complete inventory

Optimizing Discovery Performance for Large Environments

The Reality of Enterprise-Scale Discovery


Standard Migration Center setup breaks in predictable ways at enterprise scale. Here's what actually works after you've been burned twice by the default approach in environments with 5,000+ assets, complex networks, and legacy systems that haven't been touched since 2019.

Discovery Client Architecture for Scale

The discovery client 6.3.7 has performance improvements, but Google designed this thing for toy environments with 1,000 servers in a single data center. Real enterprises with multiple vCenters and legacy infrastructure? Yeah, you're on your own. The August 6, 2025 release fixed ZFS partition issues and floppy disk drives showing up in Linux scans (seriously, who still has floppy drives in 2025?), but they still haven't fixed the core scaling problems.


Deploy Multiple Discovery Clients Strategically

Instead of trying to scan everything from one discovery client, deploy multiple instances with clear boundaries:

  • One discovery client per major data center - Network latency and firewall rules make cross-data center scanning unreliable
  • Separate discovery clients for different credential domains - VMware environments with different service accounts, AWS accounts with different IAM roles
  • Dedicated discovery clients for performance-sensitive environments - Production databases and critical applications should be scanned during maintenance windows with separate agents

Each discovery client maintains its own database and scan schedule, but all upload to the same Migration Center project. This approach prevents the single-point-of-failure issues that plague large monolithic discovery deployments.

Optimize Scan Scheduling for Production Impact

The custom scheduling feature introduced in version 6.3.0 lets you define opt-out schedules per server. This is critical for production environments where even the lightweight guest collection scripts can impact performance during peak hours.

What actually works in production:

  • Development/test servers: Scan during business hours (9 AM - 5 PM)
  • Production applications: Scan only during maintenance windows (Sunday 2-6 AM)
  • Database servers: Scan during low-transaction periods (avoid month-end, quarter-end)
  • Legacy systems: Manual coordination with application owners before any scanning

The discovery client collects performance metrics over time, so inconsistent scanning schedules will result in incomplete utilization data and inaccurate rightsizing recommendations.
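The opt-out scheduling above is easy to prototype before you commit it to the client UI. A minimal Python sketch - the tier names and windows mirror the list above, not any real discovery client config format:

```python
from datetime import datetime

# Hypothetical opt-out windows per tier; mirrors the schedule above,
# not an actual Migration Center configuration schema.
SCAN_WINDOWS = {
    "dev":  {"days": range(0, 5), "hours": range(9, 17)},  # Mon-Fri, 9 AM-5 PM
    "prod": {"days": [6],         "hours": range(2, 6)},   # Sunday, 2-6 AM
}

def scan_allowed(tier: str, when: datetime) -> bool:
    """Return True if a scan of this tier may start at `when`."""
    w = SCAN_WINDOWS[tier]
    return when.weekday() in w["days"] and when.hour in w["hours"]
```

Run it against a proposed scan time before kicking off a cycle; extending it with month-end blackout dates for database servers is a one-line change.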

Network and Credential Management at Scale

Managing Credentials Without Losing Your Mind

Most large discoveries die here. Google's credential management works great if you have one AD domain and a simple setup. Got 12 different AD forests, legacy systems with local accounts, and a security team that changes passwords weekly? Good luck.

Segmentation approach:

  • Domain credentials: One set per AD domain/forest
  • Local admin accounts: Separate credentials for servers not joined to domain
  • Service accounts: Dedicated accounts for VMware vCenter, database instances
  • Cloud credentials: AWS IAM roles, Azure service principals

Store credentials in enterprise password managers, not in the discovery client interface. Use the credential reset functionality to rotate accounts monthly as part of security compliance.


Network Dependencies and Firewall Planning

The discovery client needs network access on multiple protocols, and enterprise firewalls rarely have the ports configured correctly from the start. Plan for these network requirements:

Outbound from discovery client:

  • TCP 443: HTTPS to Google Cloud APIs (required for uploading data)
  • TCP 443: HTTPS to vCenter APIs (for VMware discovery)
  • TCP 22: SSH to Linux servers (for guest-level collection)
  • TCP 135, 445: WMI to Windows servers (for guest-level collection)
  • TCP 3389: RDP for Windows troubleshooting (optional but recommended)

Inbound to target servers:

  • Network scanning requires ICMP ping responses
  • SSH daemon must accept connections from discovery client IP
  • Windows Firewall must allow WMI traffic from discovery client subnet

Most enterprise network teams will push back on opening these ports broadly. The compromise solution is to deploy discovery clients inside each network segment, minimizing firewall rule changes.
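Before filing firewall tickets, you can pre-flight the outbound ports from the discovery client host. A hedged Python sketch - plain TCP connect checks, nothing Migration Center-specific:

```python
import socket

# The outbound TCP ports listed above (ICMP can't be tested this way).
REQUIRED_TCP = {"ssh": 22, "wmi-epmap": 135, "smb": 445, "https": 443}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connect to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preflight(host: str) -> dict:
    """Check every required port against one target; returns name -> bool."""
    return {name: port_open(host, p) for name, p in REQUIRED_TCP.items()}
```

Running `preflight()` against one representative server per subnet tells you which firewall rules are actually missing before discovery silently times out.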

Data Collection Optimization

Performance Impact Mitigation

Even with the performance improvements in version 6.3.4, guest-level collection can still impact target systems while it runs. The discovery client sets Linux collection scripts to run at higher nice levels and optimizes Windows collection, but this still causes CPU spikes on resource-constrained systems.

What actually happens to your servers:

  • CPU impact: 5-15% CPU spike for 30-60 seconds (or until your monitoring explodes)
  • Memory usage: 50-100MB temporary increase (watch out on 2GB legacy boxes)
  • Disk I/O: "Minimal" until it hits that ancient Windows 2008 server with a dying disk
  • Network bandwidth: 1-5MB per server (multiply by 2,000 servers during peak hours)

I learned this the hard way when discovery scanning triggered a cascade failure on a production ERP system running at 95% CPU. That was a fun Saturday morning explaining to the CTO why payroll was down.

Storage and Database Performance

The discovery client stores collected data in a local SQLite database before uploading to Migration Center. For large environments, this database can grow to several gigabytes and become a performance bottleneck.

Database maintenance practices:

  • Monitor database size: Check C:\ProgramData\Google\mcdc\data\ folder size weekly
  • Storage requirements: Plan for 2-5MB per discovered server for historical data
  • Backup strategy: Export collected data to Migration Center before major discovery client upgrades
  • Database corruption: Use the recovery commands if database issues occur

The discovery client periodically cleans up old data, but environments with frequent re-scanning can accumulate significant data volumes.
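The 2-5MB-per-server planning figure turns into concrete disk numbers quickly. A quick Python sketch of the sizing rule:

```python
def db_size_estimate(servers: int, mb_per_server=(2, 5)) -> tuple:
    """Rough local-database footprint range in GB, using the
    2-5 MB-per-server planning figure above."""
    lo, hi = mb_per_server
    return (servers * lo / 1024, servers * hi / 1024)
```

At 2,000 servers that's roughly 4-10 GB of historical data - enough to justify putting the data folder on local SSD rather than whatever leftover volume the VM template came with.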

Integration with Enterprise Tools

CMDB and Asset Management Integration

Migration Center doesn't integrate directly with enterprise Configuration Management Databases (CMDBs), but you can bridge this gap with the CSV import functionality. Export server data from ServiceNow, BMC Remedy, or other CMDB tools and import into Migration Center to supplement automated discovery.

Common integration scenarios:

  • Asset ownership data: Application owners, business units, cost centers
  • Compliance classifications: PCI, HIPAA, SOX scope servers
  • Change management schedules: Maintenance windows, upgrade schedules
  • Business criticality: Tier 1/2/3 application classifications

This manual data enrichment is essential for meaningful migration wave planning and risk assessment.
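The enrichment itself is just a left-join on hostname. A Python sketch - the field names (owner, tier) are illustrative, not a Migration Center import schema:

```python
def enrich(discovered: list, cmdb_rows: list, key: str = "hostname") -> list:
    """Left-join CMDB attributes (ownership, tier, compliance scope)
    onto discovered assets. Assets missing from the CMDB pass through
    unchanged, which also makes CMDB gaps easy to spot."""
    index = {row[key]: row for row in cmdb_rows}
    return [{**asset, **index.get(asset[key], {})} for asset in discovered]
```

Feed it the discovery export on one side and the ServiceNow/Remedy CSV (parsed to dicts) on the other, then re-import the merged rows via CSV upload.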

Monitoring and Alerting Integration

The discovery client generates logs that you can forward to Cloud Logging, but enterprise environments need integration with existing monitoring tools. Export discovery client logs to SIEM platforms like Splunk or the Elastic Stack for centralized monitoring.

Key metrics to monitor:

  • Discovery success rates: Percentage of servers successfully scanned per cycle
  • Authentication failures: Credential expiration or permission changes
  • Network timeouts: Firewall or connectivity issues preventing discovery
  • Resource utilization: Discovery client CPU, memory, and storage usage

Set up alerts for discovery success rates below 95% - this indicates systematic issues that need immediate attention.
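That 95% threshold is trivial to wire into whatever monitoring pipeline ingests the logs. A Python sketch of the alert condition:

```python
def discovery_alert(succeeded: int, attempted: int,
                    threshold: float = 0.95) -> bool:
    """True when the per-cycle discovery success rate drops below
    the 95% line recommended above."""
    return attempted > 0 and succeeded / attempted < threshold
```

Note the comparison is strict: 1,900 successes out of 2,000 attempts sits exactly at 95% and does not fire; one more failure does.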


Advanced Troubleshooting and Edge Case Resolution

When Standard Solutions Don't Work


After getting burned by Migration Center in production more times than I can count, you develop a sixth sense for what's going to break at 2 AM during go-live. Google's docs are great if your environment was built yesterday by following best practices. The rest of us deal with legacy configurations that would make a consultant weep, custom security policies that block everything, and applications that were architected by someone who clearly hated their job.

Deep Troubleshooting with Log Analysis


Discovery Client Logging Architecture

Discovery client spits out logs in 4 different places because Google loves making troubleshooting fun. These logs are your only friend when things go sideways at 3 AM and you're trying to explain to management why the $2M migration project is stuck.

Log locations and purposes:

  • Application logs: C:\ProgramData\Google\mcdc\logs\application.log - High-level discovery operations
  • Collection logs: C:\ProgramData\Google\mcdc\logs\collection.log - Guest-level scanning details
  • Upload logs: C:\ProgramData\Google\mcdc\logs\upload.log - Migration Center API communication
  • Database logs: C:\ProgramData\Google\mcdc\logs\database.log - Local data storage operations

The log forwarding to Cloud Logging feature introduced in version 5.3.5.7 helps with centralized monitoring, but local log analysis is still required for detailed troubleshooting.

Decoding Cryptic Error Messages

Migration Center error messages are designed by someone who clearly never had to debug this shit in production. As of September 2025, they've been "improving" error messages but it's like putting lipstick on a pig. Here's what the errors actually mean:

"Authentication failed" - Multiple Root Causes:

  • Credential expiration: Domain passwords rotated without updating discovery client (happens every 90 days if you follow security best practices)
  • Permission creep: Service account permissions modified by security team without notification to migration team
  • Network timeouts: Firewall rules blocking authentication traffic (particularly UDP 88 for Kerberos)
  • Time skew: Domain controller time synchronization issues affecting Kerberos (more than 5 minutes of drift will kill authentication)

Debug approach: Check Windows Event Logs on both discovery client and target servers for security audit events. Look for event ID 4625 (failed logon) with detailed failure reasons. Pro tip: 90% of authentication failures are because someone changed the service account password and forgot to update the discovery client. The other 10% are time synchronization issues that will drive you insane - especially in virtualized environments where NTP drift is common.
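The 5-minute Kerberos tolerance is easy to check programmatically once you have both clocks. A Python sketch - fetching the domain controller's time is left to w32tm or your monitoring stack:

```python
from datetime import datetime, timedelta

# Kerberos' default clock-skew tolerance; a domain admin may have
# changed it, so treat 5 minutes as the common default, not a constant.
MAX_SKEW = timedelta(minutes=5)

def kerberos_skew_ok(client_time: datetime, dc_time: datetime) -> bool:
    """True if client/domain-controller drift is within tolerance."""
    return abs(client_time - dc_time) <= MAX_SKEW
```

Run it on the timestamps from both machines whenever authentication fails mysteriously - it rules the time-skew theory in or out in seconds.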

"Connection timeout" - Not Always Network Issues:

  • Resource exhaustion: Target server running out of memory or CPU
  • Antivirus interference: Endpoint protection blocking WMI or SSH connections
  • Windows Firewall changes: Group Policy updates modifying firewall rules
  • SSH configuration drift: Server hardening scripts disabling SSH access

Debug approach: Test connectivity manually from discovery client to target server using the same protocols. Use telnet server.domain.com 22 for SSH and Test-NetConnection PowerShell cmdlet for WMI ports.

Database and Storage Issues

Discovery Client Database Corruption

The SQLite database that stores discovery data can become corrupted in several scenarios, particularly in environments with frequent reboots or storage performance issues.

Corruption symptoms:

  • Discovery client UI shows "Database is locked" errors
  • Scans appear to complete but no data appears in Migration Center
  • Discovery client service fails to start with database-related errors
  • Export operations fail with SQLite error messages

Recovery procedure:

## Stop discovery client service
Stop-Service "Migration Center Discovery Client"

## Backup corrupted database
Copy-Item "C:\ProgramData\Google\mcdc\data\discovery.db" "C:\temp\discovery_corrupted.db"

## Use recovery command (available in CLI versions 2.0+)
& "C:\Program Files\Google\Migration Center Discovery Client\mcdc.exe" recover-db

## If recovery fails, delete database and rescan
Remove-Item "C:\ProgramData\Google\mcdc\data\*" -Force
Start-Service "Migration Center Discovery Client"
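Before deleting anything, confirm the database is actually corrupt. SQLite ships a built-in integrity check you can run from Python - this is standard SQLite, not a Migration Center feature, and the path is whatever your install uses:

```python
import sqlite3

def sqlite_healthy(db_path: str) -> bool:
    """Run SQLite's built-in integrity check; 'ok' means the file is
    structurally sound, anything else (or a DatabaseError on a file
    that isn't SQLite at all) means corruption."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            (result,) = conn.execute("PRAGMA integrity_check").fetchone()
        finally:
            conn.close()
        return result == "ok"
    except sqlite3.DatabaseError:
        return False
```

If this returns True, your missing data is an upload or authentication problem, not corruption - and wiping the database would destroy good scan data for nothing.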

When Your Storage Array Hates Discovery Client

Put discovery client on slow enterprise storage and watch it die a slow, painful death. The symptoms are subtle - UI gets sluggish, scans mysteriously incomplete, but no obvious errors. I've spent weeks debugging "network issues" that turned out to be a 7200 RPM drive from 2015. SQLite starts throwing "database is locked" (SQLITE_BUSY error code 5) when disk I/O latency exceeds 200ms. Fun fact: the September 15, 2025 release added AI-powered software detection, but if your storage can't keep up, you won't see any of those fancy insights anyway.

Performance indicators:

  • Database operations taking >5 seconds: Check disk I/O latency on discovery client server
  • Incomplete VM collection: Storage timeouts during large vCenter inventory operations
  • UI freezing during report generation: Database queries timing out due to storage latency

Mitigation strategies:

  • Move discovery client to local SSD storage instead of network-attached storage
  • Increase discovery client VM resources: 8GB RAM minimum for large environments
  • Separate database and application storage: Use different drives for logs vs. data

VMware Integration Edge Cases


vCenter API Limitations in Large Environments

VMware vCenter has API rate limiting and resource constraints that aren't well documented but become apparent when scanning large environments with thousands of VMs.

vCenter performance issues:

  • API throttling: vCenter limits concurrent API connections from discovery client
  • Memory exhaustion: Large inventory queries can consume significant vCenter memory
  • Database locks: Heavy API usage can cause vCenter database performance issues
  • Network timeouts: Complex cluster configurations with slow storage response

Optimization approach:

## Recommended vCenter scanning schedule for large environments
Peak Hours (8 AM - 6 PM):
  - Disable automated inventory collection
  - Manual scans only for critical issues
  
Off Hours (6 PM - 8 AM):
  - Enable automated collection
  - Stagger scanning across multiple vCenters
  - Limit concurrent VM guest collections to 10
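The "limit concurrent VM guest collections to 10" rule is an ordinary bounded-concurrency pattern. A Python sketch using a thread pool - `collect_fn` stands in for whatever per-VM collection you drive from an orchestration script; the discovery client itself doesn't expose this API:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_GUESTS = 10  # the ceiling recommended in the schedule above

def collect_guests(vms, collect_fn):
    """Run per-VM guest collection with at most 10 collections in
    flight, so a large vCenter inventory is never hit all at once."""
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_GUESTS) as pool:
        return list(pool.map(collect_fn, vms))
```

The same cap protects vCenter's API connection budget: ten in-flight collections keep you well under the throttling thresholds that kill scans of thousand-VM clusters.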

Custom vSphere Configurations Breaking Discovery

Enterprise VMware environments often have custom configurations that automated discovery tools don't handle gracefully.

Common configuration issues:

  • Distributed virtual switches: Complex network configurations confusing dependency mapping
  • Custom VM attributes: Non-standard metadata fields causing collection failures
  • Resource pools with strict limits: VM performance data skewed by resource constraints
  • vMotion policies: VM location changing during collection causing incomplete data

Workaround strategies:

  • Export RVTools data manually: Use RVTools import for complex environments
  • Supplement with CSV data: Import additional VM metadata through manual data tables
  • Schedule discovery during change freezes: Avoid scanning during maintenance windows with active vMotion

Network Dependency Mapping Challenges

The Reality of Network Discovery

The network dependencies feature introduced in version 6.3.0 collects network statistics and open ports, but it has significant limitations in enterprise environments with complex network architectures.

Network discovery blind spots:

  • Load balancer traffic: Connections through F5, Citrix NetScaler appear as single connections
  • Proxy server mediation: Applications communicating through proxy servers show proxy dependencies, not actual endpoints
  • Encrypted protocols: SSL/TLS connections don't reveal application-level dependencies
  • Batch job dependencies: Scheduled tasks that only run monthly or quarterly
  • Database connection pooling: Middleware masking actual database connection patterns

Enhanced dependency mapping approach:

  1. Combine automated discovery with application team interviews: Network data provides the foundation, but human knowledge fills gaps
  2. Review application documentation: Architecture diagrams, deployment guides, and configuration management databases
  3. Analyze application logs: Look for connection patterns, error messages, and integration points
  4. Monitor during different time windows: Capture weekend batch jobs, month-end processing, and seasonal workloads

Cost Estimation Accuracy Issues


Why Migration Center Cost Estimates Are Fantasy Fiction

Migration Center's cost estimates are consistently wrong by 40-60%, and it's not random - it's systematically optimistic in ways that will get you fired if you trust them for budget planning.

Systematic underestimation factors:

Network costs underestimated by 60%: Migration Center assumes minimal egress traffic, but enterprise applications have significant east-west traffic patterns. Database replication, backup transfers, and integration flows generate substantial network costs.

Learning curve tax not included: The estimate assumes you'll nail rightsizing on day one. In reality, everyone over-provisions by 30-50% because nobody wants to be the person who undersized the production database. Then it takes 18 months to optimize because other priorities always come up.

Migration tool costs excluded: Database Migration Service, VM Migration, consulting fees, and training costs add $50-100K to typical enterprise migrations.

Operational overhead underestimated: New monitoring tools, security controls, and compliance requirements increase operational costs beyond the infrastructure estimates.

What you should actually budget:

Migration Center says: $47,000/month (adorable)
+ Network reality check (1.6x): $75,200/month
+ Over-provisioning safety net (+40%): $105,280/month
+ Migration tools & consulting (+$25,000): $130,280/month
+ Operational shit they forgot: pushes you to ~$140,000/month
= Real budget: $140,000/month for the first year, then maybe $90K after optimization

September 2025 Update: Google finally added the "granular Compute Engine preferences" feature on September 15th. Now you can balance "latest technology vs cost" or "prioritize lowest price". Cool in theory, but it just gives you more knobs to turn while the underlying cost estimates are still fantasy numbers. I tested this last week - same broken estimates, just with prettier configuration options.

License Cost Calculation Errors

The Microsoft licensing calculator introduced in version 6.3.1 has assumptions that don't match enterprise licensing realities.

Common licensing miscalculations:

  • Enterprise Agreement discounts: Migration Center uses list pricing, not negotiated EA rates
  • SQL Server core licensing: Estimation assumes optimal core allocation, not actual deployment patterns
  • Windows Server datacenter licensing: Doesn't account for existing datacenter licenses that could be transferred
  • Software Assurance benefits: Migration Center doesn't factor in existing SA coverage

Accurate licensing approach: Export the detailed pricing CSV file and have your Microsoft licensing team review the assumptions. The generic calculations are useful for initial estimates, but enterprise licensing requires specialist review.

Security and Compliance Edge Cases

Discovery in Highly Regulated Environments

Financial services, healthcare, and government organizations have security requirements that standard Migration Center deployment doesn't address.

Common security challenges:

  • Air-gapped networks: Discovery client can't communicate with Google Cloud APIs
  • Privileged access management: Discovery credentials must be managed through enterprise PAM tools
  • Data sovereignty: Collected data must remain in specific geographic regions
  • Audit logging: All discovery activities must be logged for compliance review

Compliance-compatible deployment:

  • Use offline export functionality: Disconnected environment support for air-gapped networks
  • Deploy regional discovery clients: Keep data in required geographic boundaries
  • Integrate with SIEM platforms: Forward all discovery logs to enterprise security monitoring
  • Document discovery scope: Create detailed records of what data is collected and where it's stored


Advanced Issues and Complex Scenarios

Q

Our enterprise has 15,000+ servers across multiple data centers. How do we approach discovery at this scale?

A

Don't try to scan everything from one discovery client. Seriously. I've seen this kill entire Discovery Client deployments. Deploy multiple discovery clients with clear boundaries:

  • Regional deployment: One discovery client per major data center/region
  • Credential boundaries: Separate clients for different AD domains or AWS accounts
  • Performance tiers: Dedicated clients for production vs. development environments

What actually works for 15K+ servers:

Data Center 1 (5,000 servers): 3 discovery clients (1,667 servers each - any more and performance dies)
Data Center 2 (4,000 servers): 2 discovery clients (learned this after the first one choked)
AWS Environment (3,000 instances): 2 discovery clients by VPC (API rate limits are real)
Azure Environment (2,000 VMs): 1 discovery client (and pray Azure doesn't have an outage)
Legacy DMZ (1,000 servers): 1 discovery client (with lots of manual coordination and crying)

All discovery clients upload to the same Migration Center project, giving you unified reporting with distributed collection.
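The sizing rule behind that layout is just ceiling division against the per-client limit. A Python sketch:

```python
import math

CLIENT_LIMIT = 2000  # practical per-client ceiling discussed earlier

def clients_needed(server_count: int, limit: int = CLIENT_LIMIT) -> int:
    """How many discovery clients a site needs to keep each one at
    or under the per-client server limit."""
    return max(1, math.ceil(server_count / limit))
```

Plugging in the data centers above reproduces the layout: 5,000 servers needs 3 clients (1,667 each), 4,000 needs 2, and anything at 2,000 or under gets by with one.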

Q

Migration Center shows dependency connections that don't make sense. How do we get accurate dependency mapping?

A

Network discovery has fundamental limitations. It sees network connections, not business logic dependencies. Common issues:

Load balancer confusion: Connections through F5 or NetScaler show as dependencies on the load balancer, not the actual backend servers.

Proxy server masking: Applications using corporate proxy servers show dependencies on proxy servers instead of actual endpoints.

Database connection pooling: Middleware tools mask the actual database connection patterns.

Better dependency mapping approach:

  1. Use Migration Center network data as a starting point (finds 60-70% of connections)
  2. Interview application teams to fill gaps
  3. Review architecture documentation and CMDBs
  4. Monitor during different time periods (capture monthly batch jobs)
  5. Use application modernization assessment for code-level analysis

Q

vCenter inventory collection is extremely slow and sometimes fails. How do we optimize VMware discovery?

A

vCenter API has rate limits and resource constraints that aren't well documented. Large environments with 1,000+ VMs can overwhelm vCenter.

Optimization strategies:

  • Schedule during off-peak hours: Avoid vCenter collection during backup windows or peak usage
  • Reduce concurrent operations: Limit guest-level scanning to 10-15 VMs simultaneously
  • Stagger multiple vCenters: Don't scan all vCenter servers at the same time
  • Use RVTools for initial inventory: Import RVTools data for baseline, then use discovery client for guest-level details

vCenter performance monitoring:

  • Monitor vCenter CPU and memory during discovery operations
  • Check vCenter database performance (SQL Server or PostgreSQL)
  • Review vSphere events for API timeout errors
  • Consider upgrading vCenter if running older versions (6.5 or earlier)

Q

Discovery client database keeps getting corrupted. Is this normal?

A

Database corruption is not normal - it means something is fundamentally fucked with your setup. We tried ignoring it for months until we lost 3 weeks of scan data. Don't be us. SQLite corruption throws specific error codes like "database is locked" (SQLITE_BUSY) or "database disk image is malformed" (SQLITE_CORRUPT). When you see these, don't restart the service hoping it fixes itself.

Storage issues:

  • Discovery client on slow network storage (NAS/SAN with high latency)
  • Insufficient disk space causing SQLite write failures
  • Storage array issues during database operations

System stability issues:

  • Frequent Windows reboots during discovery operations
  • Memory pressure causing application crashes
  • Antivirus software interfering with database files

Prevention strategies:

  • Move to local SSD storage: Avoid network-attached storage for discovery client
  • Increase VM resources: 8GB RAM minimum, 16GB for large environments
  • Exclude from antivirus: Add C:\ProgramData\Google\mcdc\ to antivirus exclusions
  • Regular database maintenance: Export data and restart discovery client weekly

Recovery when corruption occurs:

## Stop service and backup corrupted database
Stop-Service "Migration Center Discovery Client"
Copy-Item "C:\ProgramData\Google\mcdc\data\*" "C:\backup\"

## Use built-in recovery (discovery client 6.3+)
& "C:\Program Files\Google\Migration Center Discovery Client\mcdc.exe" recover-db

## If recovery fails, delete and restart
Remove-Item "C:\ProgramData\Google\mcdc\data\*" -Force
Start-Service "Migration Center Discovery Client"

Q

Our security team won't allow discovery client to communicate with Google Cloud. Can we use Migration Center in air-gapped environments?

A

Yes, with the offline export feature introduced in version 6.3.7 on August 6, 2025. This enables discovery in completely disconnected environments. Took Google long enough - only 3 years of enterprise customers begging for this basic feature.

Air-gapped deployment process:

  1. Install discovery client in disconnected mode: No internet connectivity required for scanning
  2. Perform complete inventory and guest discovery: Collect all data locally
  3. Export data bundle: Generate encrypted data package containing all discovery results
  4. Transfer via approved method: USB, secure file transfer, or approved network bridge
  5. Import to Migration Center: Upload data bundle to connected Google Cloud project

Security considerations:

  • Data is encrypted during export and transfer
  • No Google Cloud credentials stored on air-gapped systems
  • Complete audit trail of what data was collected
  • Can be integrated with enterprise data loss prevention tools

Limitations of offline mode:

  • No real-time data upload or monitoring
  • Manual process for data transfer
  • Cannot use continuous performance monitoring features
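For the transfer step, hash the bundle before and after it crosses the air gap so you can prove the file arrived intact. A generic Python sketch - the bundle filename and format are whatever your export produced; this is plain file hashing, not a Migration Center feature:

```python
import hashlib

def bundle_sha256(path: str, chunk: int = 1 << 20) -> str:
    """SHA-256 of an exported data bundle, read in 1 MB chunks so
    multi-gigabyte exports don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()
```

Record the digest on the air-gapped side, recompute it on the connected side before import, and keep both values in your transfer audit trail.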

Q

Cost estimates are so far off they're useless. How do we get realistic migration budgets?

A

Migration Center's cost estimates are about as accurate as a weatherman in a hurricane. They assume perfect execution in a world that doesn't exist. I've never seen a Migration Center estimate that was within 40% of reality.

Step 1: Start with Migration Center estimate
Example: $50,000/month

Step 2: Apply reality multipliers

  • Network costs: Add 60% ($80,000/month)
  • Learning curve tax: Add 30% ($104,000/month)
  • Initial over-provisioning: Add 25% ($130,000/month)
  • Migration tools and services: Add $20,000/month ($150,000/month)

Step 3: Include hidden costs

  • Training and certification: $50,000 one-time
  • Consulting and professional services: $100,000-500,000
  • Extended parallel run period: 3-6 months of dual costs
  • Rollback insurance: Reserve 20% of total budget for potential rollback

Real-world example from my last gig:

  • Migration Center estimate: $47,000/month (so cute)
  • Actual first AWS bill: $89,000/month (panic meeting time)
  • Peak costs during learning curve: $140,000/month (CTO not happy)
  • Optimized costs after 18 months of tuning: $85,000/month (still 80% higher than the estimate)

Better cost modeling approach:

  1. Export detailed pricing CSV from Migration Center
  2. Have enterprise architects review sizing assumptions
  3. Have your Microsoft licensing team validate the license calculations
  4. Include 6-month parallel run period in budget
  5. Plan for 18-month optimization timeline to reach target costs
Q

We have compliance requirements (SOC 2, HIPAA, PCI). Can Migration Center meet these requirements?

A

Migration Center can support compliance environments with proper configuration, but it requires additional controls beyond the default setup.

SOC 2 Type II compliance considerations:

  • Data encryption: All data encrypted in transit and at rest by default
  • Access controls: Use Google Cloud IAM with principle of least privilege
  • Audit logging: Enable Cloud Audit Logs for all Migration Center activities
  • Data retention: Configure automatic data deletion after migration completion
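On the audit logging bullet: only Admin Activity logs are on by default in Google Cloud, so Data Access logs have to be enabled explicitly or your auditor will find a gap. A minimal `auditConfigs` stanza for the project IAM policy file, assuming the Migration Center API's service name (`migrationcenter.googleapis.com`) — verify the name against your own policy before applying:

```yaml
auditConfigs:
- service: migrationcenter.googleapis.com
  auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_READ
  - logType: DATA_WRITE
```

Merge this into the existing policy rather than replacing it — `gcloud projects set-iam-policy` overwrites whatever bindings were there before.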

HIPAA compliance factors:

  • Business Associate Agreement: Required with Google Cloud
  • Data minimization: Configure discovery to exclude PHI from collection
  • Geographic restrictions: Use VPC Service Controls to restrict data location
  • Access logging: Monitor all access to discovery data

PCI DSS considerations:

  • Network segmentation: Deploy discovery clients outside of cardholder data environment
  • Credential management: Use enterprise PAM tools for discovery credentials
  • Data classification: Tag servers in CDE separately from other infrastructure
  • Regular security testing: Include discovery infrastructure in penetration testing scope

Implementation recommendations:

  • Deploy discovery clients in management network segments, not production
  • Use service perimeters to control data egress
  • Integrate with enterprise SIEM for centralized logging
  • Document data flows for compliance auditors
Q

Network dependency mapping shows thousands of connections. How do we make sense of this data?

A

Raw network data is overwhelming without proper filtering and analysis. The network dependencies report needs post-processing to be useful for migration planning.

Data filtering strategies:

  1. Ignore administrative connections: RDP, SSH, SNMP monitoring traffic
  2. Focus on application ports: HTTP/HTTPS (80, 443), database ports (1433, 3306, 5432)
  3. Filter by traffic volume: Connections with <10MB/day are often monitoring or maintenance
  4. Time-based analysis: Look for connections that only occur during batch processing windows
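Filters 1–3 are mechanical enough to script against the raw export. A sketch — the field names (`dst_port`, `bytes_per_day`) are illustrative, so map them to whatever columns your actual dependency export uses:

```python
# Sketch: cut raw dependency data down to migration-relevant connections.
# Field names are illustrative -- match them to your real export columns.
ADMIN_PORTS = {22, 161, 3389}            # SSH, SNMP, RDP -- filter strategy 1
APP_PORTS = {80, 443, 1433, 3306, 5432}  # HTTP(S) + databases -- strategy 2
MIN_BYTES_PER_DAY = 10 * 1024 * 1024     # <10MB/day is usually noise -- strategy 3

def is_relevant(conn):
    if conn["dst_port"] in ADMIN_PORTS:
        return False
    if conn["bytes_per_day"] < MIN_BYTES_PER_DAY:
        return False
    return conn["dst_port"] in APP_PORTS

connections = [
    {"src": "web01", "dst": "db01",  "dst_port": 5432, "bytes_per_day": 2_000_000_000},
    {"src": "mon01", "dst": "db01",  "dst_port": 161,  "bytes_per_day": 5_000_000},
    {"src": "adm01", "dst": "web01", "dst_port": 3389, "bytes_per_day": 50_000_000},
]
relevant = [c for c in connections if is_relevant(c)]
print(relevant)  # only the web01 -> db01 database connection survives
```

Strategy 4 (time-based analysis) resists this kind of one-liner — batch-window connections need per-hour timestamps from the export, not daily aggregates.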

Dependency analysis workflow:

Raw dependency data (10,000+ connections)
↓ Filter administrative traffic
Relevant application connections (2,000 connections)
↓ Group by application function
Business-critical dependencies (500 connections)
↓ Validate with application teams
Migration-blocking dependencies (100 connections)

Making dependencies actionable:

  • Create dependency groups: Cluster related servers that must migrate together
  • Identify external dependencies: Services that cannot be migrated and require hybrid connectivity
  • Plan migration waves: Use dependency data to sequence migration order
  • Design network connectivity: Plan VPN, Interconnect, and firewall rules based on actual traffic patterns
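The dependency-group step above is just connected components over the filtered connection pairs: any servers linked by a migration-blocking dependency have to land in the same wave. A minimal union-find sketch (server names are made up):

```python
# Sketch: cluster servers that must migrate together, treating each filtered
# dependency pair as an edge. Simple union-find with path halving.
def migration_groups(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)          # union the two components

    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

deps = [("web01", "db01"), ("web02", "db01"), ("batch01", "ftp01")]
print(migration_groups(deps))  # two groups: the web/db cluster, the batch pair
```

Each resulting group is a candidate migration wave; anything in a group with a non-migratable external service is where your hybrid connectivity budget goes.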

Tool integration recommendations:

  • Export dependency data to network diagramming tools (Visio, Lucidchart)
  • Import into project management tools for migration wave planning
  • Integrate with CMDB to add business context to technical dependencies
  • Use application performance monitoring data to validate dependency importance
