Docker Networking Explained 2026: Bridge, Host & Overlay Networks

🟡Intermediate

Master Docker networking for sovereign deployments on Ubuntu 24.04. Covers bridge networks, container DNS, host networking, port publishing, multi-container communication, and network isolation.

Author: Marcus Thorne, Local-First AI Infrastructure Engineer

Reading time: 20 min · Build time: 20 min


Key Takeaways

  • Container name = DNS hostname on custom networks: On a custom bridge, ping db resolves to the db container’s IP automatically. On the default bridge, only IPs work.
  • 127.0.0.1:PORT:PORT, not PORT:PORT: Bind published ports to localhost, then proxy through Nginx (see the Nginx Reverse Proxy Tutorial). PORT:PORT exposes the container directly to the internet.
  • --network host removes isolation: The container shares the host’s network namespace — useful for performance-sensitive services but bypasses all network security controls.
  • Compose networks define communication boundaries: Services only communicate with services on the same network — use multiple networks to segment frontend/backend/database tiers (see the Docker Compose Tutorial 2026). Combine with Docker Volumes for persistent data.

Introduction

Direct Answer: How do I isolate self-hosted Docker services from the public internet?

By default, Docker binds published ports to 0.0.0.0, exposing every container to the internet. Sovereign deployments require 127.0.0.1 binding + internal networks + Tailscale/WireGuard for remote access. Use multiple networks (public/internal zones), bind frontend-only services to 127.0.0.1:PORT, route through Nginx on the public network, and keep database/cache services on internal-only networks unreachable from outside.


Why 127.0.0.1 Binding Matters

A Compose mapping like ports: ["80:80"] binds to 0.0.0.0:80 by default, which means:

  • 🔴 Bypasses ufw/nftables firewall rules
  • 🔴 Exposes the container directly to the internet
  • 🔴 No TLS, no rate limiting, no authentication layer

Prefixing with 127.0.0.1 forces traffic through your reverse proxy (Nginx, Caddy, Traefik), which provides:

  • 🟢 HTTPS/TLS termination
  • 🟢 Rate limiting and DDoS protection
  • 🟢 Request authentication and logging

Example:

# ❌ INSECURE: Binds to 0.0.0.0
ports:
  - "80:80"

# ✅ SECURE: Binds to localhost only
ports:
  - "127.0.0.1:80:80"

Sovereign Network Architecture: Public vs Internal Zones

Isolate services by trust level. This prevents attackers who compromise one container from accessing your entire infrastructure.

# docker-compose.yml — zero-trust network architecture
networks:
  sovereign-internal:
    driver: bridge
    internal: true  # 🔒 Blocks outbound internet access
  sovereign-public:
    driver: bridge

services:
  # ── Internal Database (no ports exposed) ────────────────────────────────
  postgres:
    image: postgres:17-alpine
    networks: [sovereign-internal]
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    # ⚠️ NO ports section — completely internal

  # ── Internal Cache (no ports exposed) ───────────────────────────────────
  redis:
    image: redis:7-alpine
    networks: [sovereign-internal]
    # ⚠️ NO ports section — completely internal

  # ── Application Backend (internal only) ──────────────────────────────────
  api:
    image: myapi:latest
    networks: [sovereign-internal]
    depends_on:
      - postgres
      - redis
    # ⚠️ NO ports section — only reachable by nginx on same network

  # ── Nginx Reverse Proxy (gateway between public & internal) ──────────────
  nginx:
    image: nginx:alpine
    networks: [sovereign-public, sovereign-internal]
    ports:
      - "127.0.0.1:80:80"    # 🔒 Localhost only
      - "127.0.0.1:443:443"  # 🔒 Localhost only
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - api

Result:

  • ✅ Only nginx bridges the two zones — any other container on sovereign-public can never reach postgres or redis
  • ✅ No internet access from sovereign-internal — internal: true blocks outbound traffic, limiting data exfiltration even if a backend service is compromised
  • ✅ All API traffic must traverse nginx first — TLS and rate limiting apply at the gateway

Sovereign Remote Access: Tailscale Mesh Routing

Instead of opening ports to the public internet, install Tailscale on your Docker host for secure, encrypted remote access:

# Install Tailscale on Docker host
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh

# Check your Tailscale IP (example: 100.123.45.67)
sudo tailscale ip -4

# Now access services via VPN:
# https://100.123.45.67/    → Nginx (with TLS cert)
# ssh user@100.123.45.67     → SSH access (Tailscale SSH, enabled by --ssh)

Benefits:

  • 🔒 Zero public port exposure
  • 🔐 Automatic WireGuard encryption
  • 🔑 ACL-based access control (who can connect)
  • 📱 Works from anywhere with Tailscale app
  • 🚀 No extra DNS or firewall configuration

Docker Compose with Tailscale:

services:
  nginx:
    # Still binds to 127.0.0.1 (not 0.0.0.0)
    ports:
      - "127.0.0.1:443:443"
    # Access via: https://<tailscale-ip>/
    # Nginx can still serve public content if you want,
    # but Tailscale users get authenticated access first

Part 1: Network Types and DNS

Custom bridge networks enable automatic DNS resolution between containers by service name. This is the foundation for reliable multi-container communication in Docker Compose.

# Create a custom network — enables DNS between containers
docker network create --driver bridge myapp-network

# Run containers on the custom network
docker run -d --name web    --network myapp-network nginx:alpine
docker run -d --name cache  --network myapp-network redis:7-alpine
docker run -d --name client --network myapp-network alpine sleep 3600

# DNS resolution by container name
docker exec client ping -c2 web    # Works!
docker exec client ping -c2 cache  # Works!

Expected output:

PING web (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.091 ms

# Inspect the network
docker network inspect myapp-network | python3 -c "
import json, sys
n = json.load(sys.stdin)[0]
print('Subnet:', n['IPAM']['Config'][0]['Subnet'])
print('Containers:')
for name, info in n['Containers'].items():
    print(f'  {info[\"Name\"]}: {info[\"IPv4Address\"]}')
"

Expected output:

Subnet: 172.20.0.0/16
Containers:
  web:    172.20.0.2/16
  cache:  172.20.0.3/16
  client: 172.20.0.4/16

# Cleanup
docker stop web cache client && docker rm web cache client
docker network rm myapp-network

Part 2: Network Isolation in Docker Compose

Multi-network configurations enforce the principle of least privilege. Services communicate only with services on the same network, preventing unnecessary exposure of database services to frontend containers.

# docker-compose.yml — multi-tier network isolation
name: webapp

services:
  nginx:
    image: nginx:alpine
    ports:
      - "127.0.0.1:80:80"    # Only to localhost — Nginx faces internet via UFW
    networks:
      - frontend              # Can reach api, cannot reach db

  api:
    image: myapi:latest
    networks:
      - frontend              # Reachable from nginx as "api:3000"
      - backend               # Can reach db

  db:
    image: postgres:17-alpine
    networks:
      - backend               # Only reachable from api, NOT from nginx
    environment:
      POSTGRES_PASSWORD: secret

networks:
  frontend:   # nginx ↔ api
  backend:    # api ↔ db

docker compose up -d

# Verify isolation: nginx CANNOT reach db directly
docker compose exec nginx ping -c1 db 2>&1 | head -2

Expected output:

ping: bad address 'db'   ← db not on frontend network — isolation works

# API CAN reach db
docker compose exec api ping -c1 db 2>&1 | head -2

Expected output:

PING db (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.089 ms

Part 3: Port Publishing Best Practices

# ── INSECURE: binds to all interfaces ─────────────────────────────────────
docker run -d -p 3000:3000 myapp   # Exposed to internet

# ── SECURE: binds to localhost only ──────────────────────────────────────
docker run -d -p 127.0.0.1:3000:3000 myapp   # Only Nginx can reach it

# ── NO PORT PUBLISH: only other containers can reach ──────────────────────
docker run -d myapp   # No -p flag — only containers on same network

# Verify what's exposed
sudo ss -tlnp | grep docker   # sudo needed to see process names

Expected output (secure setup):

LISTEN  127.0.0.1:3000   users:(("docker-proxy",pid=1234))

Only localhost — not 0.0.0.0:3000 — confirms port is not internet-exposed.
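If you want to audit a Compose file's port list mechanically, a small helper can flag mappings that default to 0.0.0.0. This is an illustration, not part of Docker's tooling, and it handles IPv4 specs only:

```python
import ipaddress

def is_internet_exposed(port_spec: str) -> bool:
    """Flag Compose-style IPv4 port mappings that bind to all interfaces.

    "80:80"           -> True  (no host IP given; Docker defaults to 0.0.0.0)
    "127.0.0.1:80:80" -> False (loopback only)
    """
    parts = port_spec.split(":")
    if len(parts) < 3:                     # "HOST:CONTAINER" with no bind IP
        return True
    return not ipaddress.ip_address(parts[0]).is_loopback

for spec in ["80:80", "127.0.0.1:80:80", "0.0.0.0:443:443"]:
    print(spec, "EXPOSED" if is_internet_exposed(spec) else "local-only")
```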


Part 4: Container-to-Host Communication

Containers often need to reach services running on the host machine (PostgreSQL on localhost, local Redis, development servers). Here’s how:

# ── On Linux: Gateway IP ─────────────────────────────────────────────────
# Containers on the default bridge see the host at 172.17.0.1 (docker0 gateway)
docker run -d --name app alpine sleep 3600
docker exec app ping -c2 172.17.0.1        # Reaches the host
docker exec app nc -zv 172.17.0.1 5432     # TCP check against a host service (e.g. PostgreSQL)

# Find the gateway IP programmatically
docker inspect app | python3 -c "
import json, sys
c = json.load(sys.stdin)[0]
print('Host from container:', c['NetworkSettings']['Gateway'])
"
# Output: Host from container: 172.17.0.1

# ── On macOS / Windows: Special Hostname ────────────────────────────────
# Use `host.docker.internal` instead (Docker Desktop provides this)
docker run --rm alpine ping -c2 host.docker.internal

# ── Docker Compose: Service Name ────────────────────────────────────────
# If the service can run as a container, put it in Compose and use its name:
docker compose exec app nc -zv db 5432

# Or attach a running container to an additional network:
# docker network connect backend app   # Add app to backend network

Common patterns:

Scenario | Command | Notes
---------|---------|------
App → PostgreSQL on host | nc -zv 172.17.0.1 5432 | Linux gateway IP; prefer a Compose service name
App → Local dev server | curl http://172.17.0.1:8000 | Linux; use host.docker.internal on macOS
App → Another container | nc -zv db 5432 | Requires same custom network
Host → Container | curl http://127.0.0.1:8080 | Port must be published (-p)

Debugging container-to-host connectivity:

# Inside a container — which IP reaches the host?
docker run -it alpine sh
  # Try these in order:
  ping -c1 172.17.0.1           # Linux docker0 gateway
  ping -c1 host.docker.internal # macOS/Windows (Docker Desktop)
  ping -c1 8.8.8.8              # External (verify the network works at all)

# From the host — can containers reach you?
# Start a simple server on the host
python3 -m http.server 8000 &   # Port 8000 on host

# Inside the container (busybox wget; stock alpine has no curl)
wget -qO- http://172.17.0.1:8000            # Linux
wget -qO- http://host.docker.internal:8000  # macOS/Windows

For Docker Compose, the simplest approach:

services:
  app:
    image: myapp
    networks:
      - app-net
    environment:
      DATABASE_URL: postgresql://apiuser:pwd@db:5432/dbname  # Service name, not IP

  db:
    image: postgres:17
    networks:
      - app-net                          # Same network as app

networks:
  app-net:    # Custom network enables DNS service discovery
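Note that the Compose service name simply becomes the hostname inside the connection URL; Docker's embedded DNS resolves it at container runtime. A quick illustration:

```python
from urllib.parse import urlparse

# The DATABASE_URL from the Compose file above
url = "postgresql://apiuser:pwd@db:5432/dbname"
parsed = urlparse(url)

print(parsed.hostname)  # "db" — the Compose service name, not an IP
print(parsed.port)      # 5432
```

Because the hostname is resolved at runtime, the same URL keeps working even when the db container restarts with a new IP.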

Part 5: Sovereign Network Architecture

For production self-hosted stacks, isolate services by trust level. This prevents attackers who compromise one container from accessing your entire infrastructure.

# Create networks by security zone
docker network create --internal sovereign-internal  # DB, cache, internal APIs
docker network create sovereign-public               # Reverse proxy, public endpoints

# Run database on internal network only
docker run -d --name postgres \
  --network sovereign-internal \
  postgres:17

# Run Nginx proxy on both networks (acts as gateway)
# Note: docker run accepts a single --network flag — attach the second afterwards
docker run -d --name nginx \
  --network sovereign-public \
  -p 127.0.0.1:443:443 \
  nginx:alpine
docker network connect sovereign-internal nginx

Result: Only the proxy bridges the two zones. Even if a container on the public network is compromised, it cannot reach your database directly — traffic must traverse the proxy, and --internal keeps the database network off the internet entirely.

In docker-compose.yml:

services:
  nginx:
    image: nginx:alpine
    networks:
      - public
      - internal
    ports:
      - "127.0.0.1:443:443"

  api:
    image: myapi:latest
    networks:
      - internal
    depends_on:
      - postgres

  postgres:
    image: postgres:17-alpine
    networks:
      - internal
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}

networks:
  public:
    driver: bridge
  internal:
    driver: bridge
    internal: true  # Blocks outbound internet access from this network


Part 6: Host Networking

# Host network: container shares host's network namespace
docker run -d --network host nginx:alpine

# Container binds to host's port 80 directly (no port mapping needed)
curl -sI http://localhost | head -2

# ── When to use --network host ────────────────────────────────────────────
# ✓ Performance-critical services (removes NAT overhead)
# ✓ Services needing raw socket access
# ✗ Never for multi-tenant or internet-facing services (no network isolation)

Part 7: Debugging Network Issues

# Which network is a container on?
docker inspect mycontainer | python3 -c "
import json, sys
c = json.load(sys.stdin)[0]
for net, info in c['NetworkSettings']['Networks'].items():
    print(f'{net}: {info[\"IPAddress\"]}')
"

# Can container A reach container B?
docker exec container-a curl -sf http://container-b:8080/health || echo "Cannot reach"

# Inspect traffic (install tcpdump in container temporarily)
docker exec -it mycontainer sh -c "apk add tcpdump && tcpdump -i eth0 port 5432"

# List all containers on a network
docker network inspect myapp-network --format '{{json .Containers}}' | python3 -m json.tool

Conclusion

Docker networking is now clear: containers on the same custom network communicate by name via Docker’s DNS, port bindings to 127.0.0.1 keep services off the internet, and multi-network compose setups enforce least-privilege communication between tiers.

See Docker Compose Tutorial 2026 for how networks integrate into a full multi-service stack, and Docker Security Best Practices 2026 for the security hardening that builds on this isolation.


People Also Ask

Why can’t containers on the default bridge network communicate by name?

Docker’s embedded DNS server only runs on user-defined (custom) networks, not on the default bridge network. The default bridge was designed before Docker had automatic service discovery — it uses the old /etc/hosts file approach which requires --link (deprecated) for name resolution. Custom networks created with docker network create automatically get the DNS resolver. In Docker Compose, every service is on a custom network by default.

How do I connect a container to multiple networks?

A container can be on multiple networks simultaneously, giving it access to services on all of them. In Docker Compose: networks: - frontend - backend in the service definition. With docker run: docker network connect second-network mycontainer adds the container to an additional network after creation. The container gets a separate IP address on each network.
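In Compose form, with a hypothetical api service, that looks like:

```yaml
services:
  api:
    image: myapi:latest   # hypothetical image
    networks:
      - frontend          # reachable by the proxy tier
      - backend           # can reach the database tier

networks:
  frontend:
  backend:
```

The api container gets one IP on frontend and another on backend, and is resolvable as "api" on both.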


Troubleshooting & Common Issues

Issue: Error response from daemon: Name does not resolve

Cause: Container can’t resolve service hostname (wrong network or DNS issue).

# Fix: Verify container is on same network as service
docker network ls | grep service-network
docker network inspect service-network | grep Containers

# Check the container's DNS configuration:
docker exec myapp cat /etc/resolv.conf  # Check nameserver
# nameserver 127.0.0.11 = Docker's embedded DNS is active

Issue: Connection refused: cannot reach localhost:5432 from container

Cause: Localhost inside container != host machine. Container sees its own localhost.

# Fix: Use host.docker.internal (macOS/Windows) or gateway IP (Linux)
# In container:
psql -h host.docker.internal -p 5432 -U postgres  # macOS/Windows
psql -h 172.17.0.1 -p 5432 -U postgres           # Linux

# Or better: use Docker Compose service name:
psql -h db -p 5432 -U postgres

Issue: Network is full (no more IP addresses available)

Cause: Network subnet too small for container count.

# Fix: Create network with larger subnet
docker network create --subnet=10.0.0.0/16 large-network
# /16 = 65,536 addresses; Docker's default pools hand out /16 (then /20) subnets

# Check current usage:
docker network inspect mynetwork | grep IPv4Address | wc -l
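To sanity-check how many containers a subnet can hold, Python's ipaddress module does the arithmetic. The helper below subtracts the network address, the broadcast address, and the gateway IP Docker reserves for itself:

```python
import ipaddress

def container_capacity(cidr: str) -> int:
    """Usable container addresses in a Docker bridge subnet.

    Subtracts the network address, broadcast address, and the
    gateway IP that Docker reserves.
    """
    return ipaddress.ip_network(cidr).num_addresses - 3

print(container_capacity("10.0.0.0/16"))    # 65533
print(container_capacity("172.20.0.0/24"))  # 253
```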

Issue: Container cannot reach external internet

Cause: Network created with internal: true, IP forwarding disabled, or a firewall blocks egress.

# Fix: Verify the network allows egress
docker network inspect mynetwork | grep -E '"Driver"|"Internal"'  # "Internal": true blocks egress

# Test connectivity:
docker exec mycontainer ping 8.8.8.8  # Google DNS
docker exec mycontainer curl -I https://example.com  # HTTPS request

# Enable IP forwarding (Linux):
sudo sysctl -w net.ipv4.ip_forward=1

Issue: Port binding fails: address already in use

Cause: Another container or host process using port.

# Fix: Find process using port
lsof -i :8080  # Linux/macOS
netstat -ano | findstr :8080  # Windows

# Kill process or change port mapping
docker run -p 8081:8080 myimage  # Use different host port

Network Type Comparison

Network | Use Case | Container DNS | Isolation | Performance
--------|----------|---------------|-----------|------------
Default bridge | Development, single-host | ❌ Requires --link (deprecated) | Moderate | Good
Custom bridge | Production, multi-service | ✅ Automatic | Strong | Good
host | Performance-critical | Via host | None | Best
overlay | Swarm, multi-host | ✅ Automatic | Strong | Good
macvlan | Legacy apps needing MAC | ✅ Automatic | Strong | Best

Docker Networking Decision Tree

What type of deployment?
├─ Single Docker host (Docker Compose)
│  └─ Use custom bridge network (automatic in Compose)
├─ Multi-host orchestration
│  └─ Use overlay network (Docker Swarm/Kubernetes)
├─ High-performance edge cases
│  └─ Use host network (lose container isolation)
├─ Legacy app needs real MAC address
│  └─ Use macvlan network
└─ Simple testing/learning
   └─ Default bridge (minimal security)

Frequently Asked Questions (FAQ)

Q: What’s the difference between container IP and host IP?

A:

  • Container IP (e.g., 172.17.0.2): Internal, seen within Docker network
  • Host IP (e.g., 192.168.1.100): External, reachable from other machines
  • Port mapping (e.g., -p 8080:3000): Maps host:8080 → container:3000

Use host IP if you need to access from another machine.

Q: Can I use DNS aliases for containers?

A: Yes, in Docker Compose:

services:
  db:
    image: postgres
    networks:
      mynet:
        aliases:
          - database
          - postgres-primary

Container can be reached as db, database, or postgres-primary.

Q: How do I monitor network traffic between containers?

A: Use tools inside container:

docker exec mycontainer tcpdump -i eth0 -n port 5432   # Install first if missing: apk add tcpdump
# Or from the host:
sudo tcpdump -i docker0 -n  # Default bridge interface

Q: Can I set static IP for a container?

A: Yes, in Docker Compose:

services:
  myapp:
    image: app
    networks:
      mynet:
        ipv4_address: 10.0.1.100

networks:
  mynet:
    ipam:
      config:
        - subnet: 10.0.1.0/24  # Static IPs require an explicitly defined subnet

In Docker CLI: --ip 10.0.1.100 with --network custom-network (the network must also be created with --subnet).

Q: How do I debug DNS resolution failures?

A: Test inside container:

docker exec myapp nslookup db  # Should resolve to service IP
docker exec myapp cat /etc/resolv.conf  # Check nameserver
docker exec myapp getent hosts db  # Another lookup method

Q: Can containers access services on the host machine?

A:

  • Linux: Use gateway IP 172.17.0.1 or find with ip route | grep default
  • macOS: Use host.docker.internal
  • Windows: Use host.docker.internal

Example:

docker run alpine curl http://host.docker.internal:8000

Q: What’s the performance impact of user-defined bridge vs. default bridge?

A: Negligible (under 1% in typical setups). User-defined networks are preferred because:

  • Built-in DNS resolution (no deprecated --link needed)
  • Better isolation — only containers on the same network can communicate
  • More flexible networking options (static IPs, aliases, custom subnets)

Q: How do I prevent containers from communicating?

A: Use separate networks or firewall rules:

# Separate networks: containers can't reach each other
docker network create frontend
docker network create backend
docker run --network frontend app1
docker run --network backend app2
# app1 and app2 can't reach each other

Q: Can I use IPv6 with Docker?

A: Yes, but disabled by default. Enable it in the daemon config (note: tee overwrites an existing /etc/docker/daemon.json, so merge by hand if you already have one):

echo '{"ipv6": true, "fixed-cidr-v6": "fd00::/64"}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# Create IPv6 network:
docker network create --ipv6 mynetwork

Q: How do I set MTU (Maximum Transmission Unit) for a network?

A: For performance optimization:

docker network create --opt com.docker.network.driver.mtu=1450 mynetwork
# Standard Ethernet MTU = 1500; VPN/overlay encapsulation often requires 1450; jumbo frames = 9000

Network Monitoring Checklist

  • ✅ Verify containers can resolve service names: nslookup db
  • ✅ Test inter-container connectivity: ping other-container
  • ✅ Check port bindings: docker port mycontainer
  • ✅ Monitor network stats: docker stats (includes network I/O)
  • ✅ Verify DNS server: cat /etc/resolv.conf inside container
  • ✅ Check network driver: docker network inspect mynetwork | grep Driver



Tested on: Ubuntu 24.04 LTS (Hetzner CX22). Docker CE 27.3.1. Last verified: May 16, 2026.
