The VirtioFS Reality Check
OrbStack's new filesystem hits 75-95% of native macOS performance, which sounds amazing until you realize that's still 5-25% slower than running natively. For most development work, this is fine. For database-heavy applications or builds with thousands of files, it's the difference between "fast enough" and "I want to throw my laptop."
What actually gets faster:
- pnpm install: 88% native speed (vs 40% on Docker Desktop)
- Large file operations: 87% native (vs 60% on Docker Desktop)
- Database operations: 76% native with proper fsyncing
What's still slow:
- Lots of tiny file operations (webpack, Go builds with many packages)
- File watching for hot reload (better but not perfect)
- Any workflow that creates thousands of temporary files
The key insight: OrbStack optimizes the VM boundary crossing, but can't make macOS filesystem calls faster than they are. If your workflow is filesystem-heavy, named volumes are still your friend.
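Here's a minimal sketch of that advice - my-node-app is a placeholder image, and the node_modules trick assumes your image already has dependencies baked into /app:
## Keep write-heavy data on a named volume instead of a bind mount
docker volume create pgdata
docker run -d -e POSTGRES_PASSWORD=dev -v pgdata:/var/lib/postgresql/data postgres:15
## Bind-mount only the source you edit; mask node_modules with an anonymous volume
docker run -d -v "$PWD":/app -v /app/node_modules my-node-app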
Memory Limits and the 8GB Problem
OrbStack defaults to using up to 8GB of system memory, which is reasonable until you're running multiple large containers on a 16GB machine. Unlike Docker Desktop's fixed allocation, OrbStack's memory is dynamic - it grows and shrinks based on actual usage.
Real-world memory consumption:
- Idle OrbStack: ~100MB
- Single Rails app container: ~1-2GB total
- Multiple microservices: 4-6GB is common (killed my 16GB MacBook Pro once with 6 containers running Redis, Postgres, and Elasticsearch - lesson learned)
- Large databases (Postgres, MongoDB): easily 3-4GB each
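One way to contain that pile-up: hard-cap the heavy services so a single runaway container can't take the whole VM. The limits below are illustrative, not recommendations:
## Cap the usual suspects (values are examples, tune per app)
docker run -d --name cache --memory=512m redis:7
docker run -d --name db --memory=2g -e POSTGRES_PASSWORD=dev postgres:15
## Verify the cap applied (reported in bytes)
docker inspect db --format '{{.HostConfig.Memory}}'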
When memory becomes a problem:
## Check actual usage, not Docker stats
docker system df
orb info
## See what's eating memory inside containers
docker exec -it [container] free -h
docker exec -it [container] ps aux --sort=-%mem
If you're hitting limits, don't just bump the OrbStack memory allocation. Profile your containers first - most memory issues are application-level, not virtualization overhead.
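If profiling points at the application, fix it there - two hedged examples, with placeholder names and values you'd tune yourself:
## Cap the V8 heap for a Node service instead of giving the VM more RAM
docker run -d -e NODE_OPTIONS="--max-old-space-size=1024" my-node-app
## Postgres memory is mostly shared_buffers plus work_mem per connection
docker run -d -e POSTGRES_PASSWORD=dev postgres:15 -c shared_buffers=256MB -c work_mem=16MB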
File Descriptor Limits and Container Sprawl
The file descriptor leak that hit OrbStack 1.6.0-1.6.1 is fixed, but it exposed a real problem: modern containerized applications open way more files than you expect.
What eats file descriptors:
- Each bind mount: 10-50 descriptors per mounted directory
- Database connections: 1-3 descriptors each
- Log files: 1 descriptor per log stream
- Network connections: 2 descriptors per active connection
Monitor your descriptor usage:
## Check system-wide limits
ulimit -n
## See OrbStack's current usage
lsof -p "$(pgrep OrbStack | paste -sd, -)" | wc -l
## Find descriptor leaks in containers
docker exec -it [container] ls /proc/1/fd | wc -l
Practical limits:
- macOS default: 256 per process (too low)
- Reasonable limit: 4096-8192
- When you need more: You probably have a different problem
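If you genuinely do need more, raise the limit where it matters instead of system-wide - a quick sketch, with 8192 and my-app as placeholders:
## Current macOS shell session only
ulimit -n 8192
## Per container: soft:hard nofile limit
docker run -d --ulimit nofile=8192:8192 my-app
## Confirm inside the container
docker exec -it [container] sh -c 'ulimit -n'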
CPU Usage Patterns That Matter
OrbStack's ARM64 optimization means native containers barely register CPU usage, but x86 emulation through Rosetta adds overhead. The performance hit varies wildly by workload.
CPU overhead by container type:
- Native ARM64: 0-2% overhead vs native
- x86 through Rosetta: 15-30% overhead
- Mixed architectures: Overhead stacks badly
Real bottlenecks:
- Build processes: ARM64 Docker builds are 2-3x faster than x86 - learned this the hard way migrating a Rails app that went from 8-minute builds to 3 minutes just switching architectures
- Database operations: Postgres ARM64 vs x86 shows 40% performance difference - our test suite went from 12 minutes to 7 minutes after switching to native arm64 postgres:15
- Node.js applications: V8 JIT optimization works better on native architecture - same app, 30% faster cold start times
Check what you're actually running:
## See architecture for all containers
docker ps --format '{{.Names}} {{.Image}}' | while read -r name image; do
  arch=$(docker image inspect --format '{{.Architecture}}' "$image" 2>/dev/null || echo unknown)
  echo "$name: $arch"
done
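Once you've spotted x86 stragglers, pin the native build explicitly - this assumes the image actually publishes an arm64 variant (the official postgres image does; my-app is a placeholder):
## Pull and run the arm64 variant explicitly
docker pull --platform linux/arm64 postgres:15
docker run -d --platform linux/arm64 -e POSTGRES_PASSWORD=dev postgres:15
## Build your own images natively on Apple silicon
docker build --platform linux/arm64 -t my-app .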
Network Performance and VPN Hell
OrbStack's network stack follows macOS routing exactly, which is great for consistency but terrible when your corporate VPN does stupid things. Unlike Docker Desktop's isolated network, OrbStack containers inherit all your macOS network quirks.
Common network performance killers:
- DNS resolution: Some VPNs add 100-500ms per lookup
- Proxy auto-discovery: Corporate networks that auto-configure proxies
- Split tunneling: When container traffic goes through VPN but shouldn't
Debug network performance:
## Test from container
docker exec -it [container] curl -sS -o /dev/null -w 'dns %{time_namelookup}s total %{time_total}s\n' https://httpbin.org/ip
## Check DNS timing
docker exec -it [container] sh -c 'time nslookup google.com'
## See actual routing
docker exec -it [container] ip route
VPN-specific fixes:
- Restart OrbStack after connecting to VPN (annoying but works)
- Configure split tunneling to exclude container subnets
- Use explicit DNS servers:
docker run --dns=8.8.8.8
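The --dns flag can be repeated, and it's worth verifying the container actually picked it up. Caveat: a public resolver won't resolve VPN-internal hostnames, so this only helps for external lookups:
## Quick check with a throwaway container
docker run --rm --dns=8.8.8.8 --dns=1.1.1.1 alpine nslookup google.com
## Confirm what an existing container is using
docker exec -it [container] cat /etc/resolv.conf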
The reality: if your VPN breaks Docker Desktop, it'll probably break OrbStack too. The difference is OrbStack fails in more predictable ways - you get actual network unreachable errors instead of Docker Desktop's mysterious 5-minute timeouts that make you question your sanity.