Why Docker Networking Breaks (And Why It's Always DNS)

Remember those four types of networking failures mentioned in the opening? Here they are in all their glory. Docker networking fails in predictable, stupid ways - I've debugged this shit for years, and it's always one of these four things, usually breaking overnight after something "harmless" like a system update.

DNS is Fucked (90% of Problems)

The most common disaster: containers suddenly can't reach the internet. You'll see DNS resolution failures during apt update, npm installs timing out, or the generic "could not resolve host" errors that tell you nothing useful. Recent users report the same shit - DNS working one minute, broken the next.

Here's what actually happens: your Ubuntu system updated systemd-resolved, and Docker now inherits broken DNS config from the host. I learned this the hard way - spent 6 hours debugging after a system update before realizing systemd-resolved was fighting with Docker's DNS. The conflict hits rootless Docker too, and lookups on private networks get completely fucked by systemd-resolved's caching.

The nuclear fix: Edit /etc/docker/daemon.json and force DNS servers:

{
  "dns": ["8.8.8.8", "1.1.1.1"]
}

Corporate networks are special hell - they block external DNS and you have to figure out their internal DNS servers. Good fucking luck.

Port Forwarding is a Lie

Your container runs fine on localhost:8080 but the outside world can't reach it. Docker says the port is published (docker port shows 0.0.0.0:8080->80/tcp) but it's a black hole.

This happens because Docker automatically fucks with iptables rules without asking. Your existing firewall rules block Docker's traffic and Docker doesn't tell you. UFW particularly hates Docker because Docker bypasses UFW completely.

WSL2 is even worse - port forwarding just doesn't work and you have to manually configure Windows firewall and network adapters. Docker containers are invisible from Windows, mirrored networking breaks everything, and VSCode can't reach containers. I've seen devs waste entire days on this. Recent WSL2 users still report port forwarding randomly stopping after Windows updates, and Docker Desktop + WSL2 makes networking even more unstable. Bridge networks don't work at all in some WSL2 setups.

Containers Can't Talk to Each Other

You put containers on the same network, so they should be able to talk to each other by container name. Except they can't, and you get "connection refused" or DNS resolution failures.

The default bridge network is garbage. It doesn't do DNS resolution between containers - automatic name resolution only works on user-defined networks. Everyone gets confused by this because the docs don't explain it clearly. The result: container networking that seems to break for no reason, with containers unable to reach each other even though they're supposedly on the same network.

Quick fix: Stop using the default bridge. Create your own:

docker network create myapp-network

User-defined networks actually work for container-to-container communication.

Host Access is Platform Hell

Containers trying to reach services on your host machine - database, API, whatever. The error is always "connection refused" even though the service is running.

host.docker.internal works on Docker Desktop but not Linux. Linux needs --add-host host.docker.internal:host-gateway but only on newer Docker versions. Older versions need the bridge gateway IP which changes randomly. Docker-to-host communication breaks constantly and bridge networks conflict with host networks in unpredictable ways. Networking connections time out randomly and WSL2 makes it worse.

It's a clusterfuck of platform-specific workarounds that break when you move between dev environments. Connecting containers to both host and bridge networks is impossible because Docker won't let you do it.

The Pattern: Docker Networking Always Breaks the Same Ways

After debugging this shit for years, the pattern is clear. Docker networking fails in four predictable categories, usually after something changes (system update, Docker update, network change, or just random Docker weirdness).

Current reality check (August 2025): Docker Engine 27.x and the newer 28.x versions still exhibit these same networking issues. If anything, Docker 28 introduced new networking problems with firewall integration that break container port access after firewalld reloads. The more things change, the more they break in the same predictable ways.

The good news? The fixes are also predictable once you know what you're doing.

How to Actually Fix This Shit

Now that you know the four ways Docker networking breaks, here's how to fix each one. Stop googling random solutions that worked for someone else's completely different setup. These are the battle-tested fixes that actually work when you're debugging at 3am and need shit to work now.

DNS is Broken: The Dumb Fixes First

Test if DNS is fucked (takes 30 seconds):

docker exec -it container_name nslookup google.com

If that fails, DNS is your problem. Here are the fixes that actually work, in order of success rate:

Nuclear option (works 90% of the time, takes 2 minutes):

## Edit /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}

## Restart Docker (this kills running containers)
sudo systemctl restart docker

Per-container DNS (when you can't nuke the daemon):

docker run --dns 8.8.8.8 --dns 1.1.1.1 your-image

For Docker Compose (add this to every fucking service because inheritance is broken):

services:
  web:
    image: nginx
    dns:
      - 8.8.8.8
      - 1.1.1.1

If the nuclear option doesn't work, the root cause is almost always systemd-resolved on the host - check what /etc/resolv.conf actually points to, because Docker builds each container's DNS config from it.

Corporate network hell (good luck, you'll need it):

First, figure out your company's DNS servers. IT won't tell you, so:

nmcli device show | grep DNS
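If nmcli isn't installed, you can scrape the servers out of a resolv.conf-style file instead - on systemd-resolved hosts the real upstream servers live in /run/systemd/resolve/resolv.conf, not the 127.0.0.53 stub in /etc/resolv.conf. A sketch (a sample file is inlined here so the parsing is obvious; point it at the real path):

```shell
## Pull nameserver IPs out of a resolv.conf-style file
## (sample inlined below - swap in /run/systemd/resolve/resolv.conf for the real thing)
cat > /tmp/sample-resolv.conf <<'EOF'
search corp.example.com
nameserver 192.168.1.1
nameserver 192.168.1.2
EOF
grep '^nameserver' /tmp/sample-resolv.conf | awk '{print $2}'
```

Whatever IPs come out go straight into the "dns" array in daemon.json.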

Then pray this works:

{
  "dns": ["192.168.1.1", "8.8.8.8"],
  "dns-search": ["your-corp.com"]
}

Port Forwarding: Why Your Ports Disappear Into the Void

Check if Docker actually published the port (5 seconds):

docker port container_name
## Should show: 0.0.0.0:8080->80/tcp

Test locally first (saves hours of debugging):

curl localhost:8080
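If the image is too minimal for curl, bash can probe a TCP port by itself through its /dev/tcp pseudo-device (port_open is a throwaway helper for this sketch, not a real tool):

```shell
## Probe a TCP port using bash's built-in /dev/tcp (no curl/telnet required)
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then echo open; else echo closed; fi
}
port_open 127.0.0.1 8080
```

This needs bash specifically - /dev/tcp is a bash feature, so it won't work under sh or Alpine's default ash.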

If local works but external fails, your firewall is cockblocking Docker.

Ubuntu/Debian firewall fix (works immediately):

## Allow Docker's subnet through your firewall
sudo ufw allow from 172.17.0.0/16
sudo ufw reload
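The flip side - Docker exposing ports that UFW thinks are blocked - has a supported fix: the DOCKER-USER iptables chain, which Docker consults before its own rules and never overwrites. A sketch (eth0 and the 203.0.113.0/24 allowed range are placeholders for your setup):

```shell
## Docker checks DOCKER-USER before its own forwarding rules,
## so restrictions here actually stick across daemon restarts
## (eth0 and 203.0.113.0/24 are placeholders - use your external interface and trusted range)
sudo iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
sudo iptables -L DOCKER-USER -n --line-numbers
```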

WSL2 port forwarding is completely fucked (3-hour fix minimum):

Windows blocks everything by default. Run this in PowerShell as Administrator:

## This is cursed but it works - Trim() strips the trailing newline from wsl's
## output and Split()[0] grabs the first IP when WSL2 reports several
$wslIp = (wsl hostname -I).Trim().Split()[0]
netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=$wslIp

You'll also need to configure Windows Defender Firewall because Windows assumes any network activity is malicious. Port forwarding randomly stops working after updates. WSL2 port forwarding is fundamentally broken and container connections from Windows fail randomly. Check SuperUser for more WSL2 port forwarding solutions that might work.

macOS Docker Desktop usually works but sometimes Docker Desktop randomly breaks and you have to restart it. That's the fix. Restart Docker Desktop.

Container Communication: Stop Using the Default Network

The default bridge network doesn't do DNS between containers. I don't know why Docker made this the default. It's useless.

Do this instead (30 seconds):

docker network create myapp
docker run --network myapp --name db postgres
docker run --network myapp --name app my-app

Now app can reach db by hostname. Fucking magic.

If it still doesn't work, check that both containers joined the network:

docker network inspect myapp

Look for your containers in the "Containers" section. If they're not there, you fucked up the --network flag.
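You also don't have to recreate a container that's already running - networks attach live. A sketch (legacy-app is a hypothetical container name):

```shell
## Attach a running container to the user-defined network, then verify it got an IP
docker network connect myapp legacy-app
docker inspect -f '{{json .NetworkSettings.Networks}}' legacy-app
```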

Connecting to Host Services: Platform Lottery

This is where Docker's "works on my machine" really shines.

Docker Desktop (Windows/macOS) - should work:

## Inside container, connect to host services on:
host.docker.internal:5432

Linux - requires extra bullshit:

## Recent Docker versions
docker run --add-host host.docker.internal:host-gateway your-image

## Older Docker versions (find bridge IP manually)
docker network inspect bridge | grep Gateway
## Use that IP (usually 172.17.0.1)
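For Compose on Linux, the same host-gateway trick goes under extra_hosts - needs Docker 20.10 or newer (service name and image here are placeholders):

```yaml
services:
  app:
    image: my-app
    extra_hosts:
      - "host.docker.internal:host-gateway"
```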

The service must bind to 0.0.0.0, not 127.0.0.1. If your database binds to localhost only, containers can't reach it. This fucks up so many people - half of StackOverflow is people trying to connect to localhost from containers.

Nuclear Option: When Everything is Fucked

When nothing works and you've tried everything (usually after 2-3 hours of debugging), nuke it from orbit:

## Nuclear reset (destroys everything)
docker stop $(docker ps -q)
docker system prune -af
docker network prune -f
sudo systemctl restart docker

This deletes all containers, networks, and cached data. You'll have to rebuild everything but it fixes 95% of persistent networking issues.

Time estimate: 15 minutes to nuke and rebuild simple setups, 2 hours if you have complex networking or didn't document your configs.

You're Not Crazy - Docker Networking Really is This Bad

Docker networking consistently breaks because it tries to do too much magic behind the scenes. The default bridge network sucks, DNS inheritance from the host is fragile, and WSL2 adds another layer of networking hell. But now you know the actual fixes that work, not the random Stack Overflow solutions that waste hours.

Remember the hierarchy: DNS first (always), then port publishing, then container communication, then host access. Most networking problems are DNS problems in disguise. When in doubt, nuke it and start clean - it's faster than debugging Docker's networking quirks.

The Same 10 Questions Everyone Asks

Q: My container can't reach the internet, what gives?

A: DNS is fucked. Run docker exec -it container_name nslookup google.com to confirm.

Quick fix: docker run --dns 8.8.8.8 your-image

Permanent fix: Add this to /etc/docker/daemon.json and restart Docker:

{"dns": ["8.8.8.8", "1.1.1.1"]}

If you're on a corporate network, you're screwed. Ask IT for DNS servers (they won't give them to you) or try nmcli device show | grep DNS to find them yourself.

Q: Port mapping says it's working but I can't connect from other machines

A: Your firewall is blocking Docker. Docker publishes the port (docker port shows it) but your system firewall kills the traffic.

Ubuntu fix: sudo ufw allow from 172.17.0.0/16

WSL2 fix: You'll need 3 hours and a bottle of whiskey. Start with this cursed PowerShell command and pray to whatever deity you believe in.

Q: Containers can't talk to each other using names

A: You're using the default bridge network. It's garbage for container communication.

Fix: Stop using default bridge. Create your own network:

docker network create myapp
docker run --network myapp --name db postgres  
docker run --network myapp --name web nginx

Now web can connect to db:5432. Revolutionary technology from 2016.

Q: How do I connect to services on my host machine?

A: Docker Desktop: Use host.docker.internal:5432 inside containers.

Linux: Add --add-host host.docker.internal:host-gateway when running containers.

Most important: Your host service must bind to 0.0.0.0:5432, not 127.0.0.1:5432. Localhost-only binding blocks container access.

Q: Docker says the port is mapped but nothing connects

A: The service inside the container isn't actually listening, or it's listening on 127.0.0.1 only.

Debug: docker exec -it container netstat -tlnp

Look for your port in the list. If it shows 127.0.0.1:8080, that's your problem. The service needs to bind to 0.0.0.0:8080 to accept external connections.
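Minimal images often don't ship netstat or ss at all, but /proc/net/tcp is always there - addresses are just in hex. A sketch of decoding it (a sample line is inlined so this runs anywhere; in real life pipe in docker exec container cat /proc/net/tcp):

```shell
## /proc/net/tcp shows local_address as HEXIP:HEXPORT; state 0A means LISTEN
## 0100007F:1F90 decodes to 127.0.0.1:8080
sample='  0: 0100007F:1F90 00000000:0000 0A'
hex=$(echo "$sample" | awk '{split($2, a, ":"); print a[2]}')
echo $((16#$hex))   ## 8080
```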

Q: "Network is unreachable" - what the fuck does that mean?

A: Usually means your Docker subnet conflicts with existing networks (VPN, corporate network, etc.). This got worse in 2025 with more corporate VPNs using Docker's default subnet ranges.

Check: docker network inspect bridge | grep Subnet

If it shows 172.17.0.0/16 and your VPN also uses 172.17.x.x, you're hosed. Either disconnect the VPN or change Docker's default subnet in daemon.json:

{
  "default-address-pools": [
    {"base": "192.168.100.0/24", "size": 28}
  ]
}
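Before rewriting daemon.json, you can confirm the collision: two IPv4 ranges overlap exactly when their network bits agree at the shorter prefix. A pure-bash sketch (ip2int and cidr_overlap are throwaway helpers, not real tools):

```shell
## Two CIDR ranges overlap iff their network bits match at the shorter prefix
ip2int() { local IFS=. a b c d; read -r a b c d <<<"$1"; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }
cidr_overlap() {
  local p1=${1#*/} p2=${2#*/} m mask
  m=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( m == 0 ? 0 : (0xFFFFFFFF << (32 - m)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "${1%/*}") & mask )) -eq $(( $(ip2int "${2%/*}") & mask )) ]
}
cidr_overlap 172.17.0.0/16 172.17.5.0/24 && echo conflict || echo clear   ## prints conflict
```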

Q: DNS works but takes forever

A: IPv6 is probably fucking things up, or your DNS search domains are broken.

Quick fix: docker run --sysctl net.ipv6.conf.all.disable_ipv6=1 your-image

Better fix: Add to daemon.json:

{
  "dns": ["8.8.8.8"],
  "dns-opts": ["ndots:1", "single-request-reopen"]
}

Q: It worked yesterday, today it's broken - nothing changed

A: Something changed. Docker updated, system updated, network config changed, someone "fixed" the firewall rules.

Check recent changes: journalctl -u docker.service --since yesterday

Nuclear option: Restart Docker daemon and recreate all networks. Yeah, it sucks.

Q: Containers randomly can't talk to each other

A: Container crashed and restarted with a different IP. Or the network got fucked.

Check: docker network inspect network_name - are both containers listed?

Fix: Restart both containers or recreate the network. Docker networking is fragile.

Q: Port forwarding works sometimes, fails other times

A: Your container is probably crashing under load or the health check is broken.

Check: docker logs container_name for crash/restart messages

Monitor: docker stats container_name for resource usage

Reality check: Most intermittent networking issues are actually application crashes.

Debugging Tools: What Actually Works

| Tool | Reality Check | When to Use | Pain Level |
|------|---------------|-------------|------------|
| docker network inspect | Actually useful for seeing WTF is going on | Always start here | Easy |
| docker exec -it container ping | Works when DNS is working | First connectivity test | Easy |
| netstat -tlnp | Shows what's actually listening (inside container) | Port binding issues | Easy |
| nslookup/dig | Proves DNS is fucked | DNS problems | Easy |
| curl | Tests HTTP but won't tell you why it fails | App-level debugging | Easy |
| telnet | Great for testing specific ports | Port connectivity | Easy but often not installed |
| docker port | Shows Docker's port mapping (may be lies) | Port publishing verification | Easy |
| ss | Better than netstat when available | Modern alternative to netstat | Easy |
| nicolaka/netshoot | Nuclear option with all the tools | When basic tools fail | Complex setup |
| tcpdump | Packet-level analysis (overkill usually) | Deep network debugging | Expert level |
| iptables -L | Shows firewall rules (host level) | Firewall conflicts | Moderate |
