Minikube networking will ruin your day. You deploy a service, it gets a ClusterIP, it looks good in kubectl get svc, but you can't reach it. This is where people give up and go back to Docker Compose.
The minikube tunnel Problem
The number one networking headache is LoadBalancer services stuck in "pending" state. Your service manifest looks perfect, kubectl shows everything running, but EXTERNAL-IP just says <pending> forever.
The problem: Minikube doesn't have a real load balancer. Unlike cloud providers, which automatically provision an AWS ELB or GCP load balancer, Minikube needs manual intervention to make LoadBalancer services work.
The solution: minikube tunnel - but it's not as simple as the docs make it seem.
## Start tunnel (requires sudo on Linux/macOS)
sudo minikube tunnel
## Keep this running in separate terminal
## Your LoadBalancer services will get 10.x.x.x IPs
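While the tunnel is running, you can watch the service until the external IP actually shows up - a quick check, where myservice is a placeholder name:
## Watch EXTERNAL-IP flip from <pending> to a real address (myservice is a placeholder)
kubectl get svc myservice -w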
Reality check: minikube tunnel is flaky. It crashes every time my laptop sleeps, leaving me staring at services that worked 5 minutes ago. I spent half a demo trying to figure out why my API was unreachable before realizing the tunnel had died with a "route already exists" error.
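If a dead tunnel leaves a stale route behind, you can clear it before starting again - a quick sketch, assuming your Minikube version supports the --cleanup flag:
## Clean up leftover routes from a dead tunnel, then start a fresh one
sudo minikube tunnel --cleanup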
Better approach for development:
## Use NodePort instead of LoadBalancer
kubectl patch svc myservice -p '{"spec":{"type":"NodePort"}}'
## Get the URL directly
minikube service myservice --url
## See the service command docs: https://minikube.sigs.k8s.io/docs/commands/service/
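If you want the raw port rather than the generated URL, you can pull the assigned NodePort out with jsonpath and pair it with the node IP - a sketch using the same placeholder myservice:
## Show the node IP and the port Kubernetes assigned to the service
minikube ip
kubectl get svc myservice -o jsonpath='{.spec.ports[0].nodePort}'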
DNS Resolution Hell
Another classic: pods can't resolve service names. You're inside a pod, try to curl my-service, and get "Name or service not known".
Check DNS is working:
## Test from inside a pod
kubectl run test-pod --image=busybox --rm -it -- nslookup kubernetes.default
If that fails, your CoreDNS is probably broken (the DNS debugging guide covers this in more depth):
## Check CoreDNS pods
kubectl get pods -n kube-system | grep coredns
## Restart CoreDNS
kubectl delete pods -n kube-system -l k8s-app=kube-dns
CoreDNS breaks often, especially after restarts or driver changes.
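When CoreDNS does break, its logs usually say why before you resort to deleting pods - a quick check using the same label as above:
## Look for crash loops or upstream resolution errors in CoreDNS
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50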
Port Forward: The Nuclear Option
When all else fails, kubectl port-forward usually works:
## Forward pod port to localhost
kubectl port-forward pod/mypod 8080:80
## Forward service port (more reliable)
kubectl port-forward svc/myservice 8080:80
This bypasses all the Kubernetes networking magic and just creates a direct tunnel. Not elegant, but it works at 3 AM when you need to demo something. See the port forwarding documentation for more details.
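One flag worth knowing for demos: port-forward binds only to localhost by default, but it can listen on all interfaces so other machines can reach it - a sketch:
## Expose the forwarded port beyond localhost (e.g. to hit it from another device)
kubectl port-forward --address 0.0.0.0 svc/myservice 8080:80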
The Docker Driver Networking Gotcha
Using --driver=docker has a special networking quirk: your pods run inside Docker containers, which run inside a Docker network, which may or may not be reachable from your host. The Docker driver documentation explains this in more detail.
Symptoms:
- kubectl port-forward works
- minikube service URLs don't work
- Can't reach NodePort services from browser
Fix:
## Check what IP Minikube is using
minikube ip
## If it returns 127.0.0.1, you're in Docker mode
## Services are available at different ports on localhost
minikube service myservice --url
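You can also look at the Docker side directly to see which host ports the Minikube node container publishes - a sketch, assuming the default profile name:
## List the ports Docker publishes for the minikube node container (default container name: minikube)
docker port minikube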
IPv6 and DNS Shenanigans
If you're on a network with IPv6 (common in corporate environments), Minikube can get confused about which IP stack to use. This manifests as:
- Services unreachable intermittently
- DNS timeouts
- minikube tunnel connecting but traffic not flowing
Quick fix:
## Force IPv4 for DNS
minikube start --extra-config=kubelet.resolv-conf=/etc/resolv.conf
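To confirm what DNS configuration pods actually receive, check resolv.conf from inside a throwaway pod - a sketch, where resolv-test is a placeholder pod name:
## Inspect the nameserver a pod is really using (resolv-test is a placeholder)
kubectl run resolv-test --image=busybox --rm -it -- cat /etc/resolv.conf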
Network Policy Confusion
If you've enabled the Network Policy addon and things suddenly stop working, it's probably blocking your traffic.
## Check if NetworkPolicy is causing issues
kubectl get networkpolicies --all-namespaces
## Disable the addon
minikube addons disable networkpolicy
Network policies in Minikube are great for learning but terrible for debugging other issues. Disable them first, then re-enable once everything else works.
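If you'd rather keep policies enabled, describe the suspect one to see exactly what it allows - a sketch, where my-policy and default are placeholders:
## Inspect what a specific policy permits (policy name and namespace are placeholders)
kubectl describe networkpolicy my-policy -n default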
The Ultimate Networking Debug Sequence
When networking is completely broken and you can't figure out why:
## 1. Check basic connectivity
minikube status
## 2. Verify cluster IP ranges
kubectl cluster-info dump | grep -i cidr
## 3. Test pod networking (outbound connectivity and in-cluster DNS)
kubectl run test1 --image=busybox --rm -it -- ping 8.8.8.8
kubectl run test2 --image=busybox --rm -it -- nslookup kubernetes.default
## 4. Nuclear option - restart networking
minikube ssh 'sudo systemctl restart kubelet'
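If that sequence comes up clean and services still misbehave, kube-proxy - the component that programs service routing on the node - is the next suspect; a quick check:
## 5. Check kube-proxy health and logs
kubectl get pods -n kube-system -l k8s-app=kube-proxy
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=20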
The Minikube networking documentation is decent but doesn't cover these real-world failure modes. Most engineers learn this stuff the hard way - by debugging at inconvenient times.
Remember: in production Kubernetes, networking "just works" because cloud providers handle the complexity. Minikube exposes all the ugly details that normally stay hidden.