
Docker Compose Tutorial 2026: Multi-Container Apps from Zero to Production

🟡 Intermediate

Master Docker Compose v2 on Ubuntu 24.04. Covers services, networks, volumes, health checks, environment variables, production patterns, and Compose Watch for development. Fully tested.

Author: Divya Prakash, AI Systems Architect & Founder

Reading: 19 min · Build: 25 min


Key Takeaways

  • v2 is the only version: docker compose (space) is Compose v2, bundled with Docker CE. The old docker-compose (hyphen) Python binary is end-of-life. If your scripts use docker-compose, update them to docker compose now.
  • Five primitives: Every compose file is built from services, networks, volumes, configs, and secrets. Master these five and you can model any multi-container architecture.
  • Health checks are mandatory: depends_on: condition: service_healthy is non-negotiable for database-backed services. Without it, your application container starts before the database is accepting connections.
  • One file, two environments: Use docker-compose.yml as a base and docker-compose.override.yml for development overrides — a single docker compose up merges them automatically, keeping production and dev configs clean and separate.

Introduction

Direct Answer: How do I use Docker Compose v2 to run multi-container applications in 2026?

Create a docker-compose.yml file in your project root that defines your services (each service becomes a container), networks (how services communicate), and volumes (persistent data). Run docker compose up -d to start all containers in detached mode, docker compose ps to check status, docker compose logs -f service-name to tail logs, and docker compose down to stop and remove containers. The minimal three-service stack for a web application is: an app service (your application), a db service (PostgreSQL or MySQL), and a cache service (Redis). Add depends_on: condition: service_healthy to guarantee the database is ready before your app starts. Docker Compose v2 ships with Docker CE 23.0+ — verify with docker compose version. No separate installation needed. On Ubuntu 24.04, install Docker CE via the official repository and Compose v2 is included automatically.
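That minimal three-service stack can be sketched in a single compose file. This is an illustrative skeleton, not a finished config: the image tags, ports, and build context are assumptions to adapt to your project (Part 1 annotates every field in detail):

```yaml
# docker-compose.yml — minimal three-service sketch (illustrative)
services:
  app:
    build: .                           # your application's Dockerfile
    ports:
      - "127.0.0.1:8000:8000"
    depends_on:
      db:
        condition: service_healthy     # wait until Postgres accepts connections
      cache:
        condition: service_started

  db:
    image: postgres:17-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5

  cache:
    image: redis:7.4-alpine

volumes:
  db-data:
```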

“Docker Compose is the fastest path from ‘it works on my machine’ to ‘it works on every machine’. One YAML file describes the entire environment — every container, every network, every volume.”

Docker Compose v2 reached maturity in 2024 and is the standard for local development and single-server deployments in 2026. This tutorial builds three progressively complex stacks: a simple web app, a production-grade API with PostgreSQL and Redis, and a development environment with Compose Watch for live reload.


Prerequisites

Docker CE must be installed. If you haven’t done this yet, follow How to Install Docker on Ubuntu 24.04 LTS first.

# Verify Docker and Compose v2 are installed
docker --version
docker compose version

Expected output:

Docker version 27.3.1, build ce12230
Docker Compose version v2.29.7

# Confirm Compose v2 is responding (not legacy v1)
docker compose version | grep -q "v2" && echo "Compose v2 confirmed" \
  || echo "WARNING: v1 detected — update Docker CE"

Part 1: The Compose File Anatomy

Every compose file follows the same structure. Here is the full schema with every common field annotated:

# docker-compose.yml
# Docker Compose v2 — all fields annotated

name: myapp          # Optional project name (defaults to directory name)

services:
  web:               # Service name — becomes the DNS hostname on internal networks
    image: nginx:1.27-alpine            # Use a pre-built image...
    build:                              # ...OR build from a Dockerfile
      context: ./frontend              # Build context (directory)
      dockerfile: Dockerfile           # Dockerfile name (default: Dockerfile)
      args:                            # Build-time ARG values
        NODE_ENV: production
    container_name: myapp-web          # Fixed container name (optional)
    restart: unless-stopped            # Restart policy: no|always|on-failure|unless-stopped
    ports:
      - "127.0.0.1:80:80"              # HOST:CONTAINER — bind to localhost only (secure)
      - "443:443"
    environment:                       # Environment variables (plain values)
      NODE_ENV: production
      API_URL: http://api:3000
    env_file:                          # Load variables from a file
      - .env
      - .env.production
    volumes:
      - ./static:/usr/share/nginx/html:ro   # Bind mount (host path:container path:options)
      - nginx-cache:/var/cache/nginx        # Named volume
    networks:
      - frontend
      - backend
    depends_on:
      api:
        condition: service_healthy     # Wait until api health check passes
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/health"]   # nginx:alpine ships busybox wget, not curl
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s               # Grace period before health checks begin
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M

  api:
    build: ./backend
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      DATABASE_URL: postgresql://appuser:${DB_PASSWORD}@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 15s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - backend

  db:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s
    networks:
      - backend
    expose:
      - "5432"                         # Expose to other services (not to host)

  cache:
    image: redis:7.4-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    networks:
      - backend

networks:
  frontend:                            # App-to-nginx network
  backend:                             # App-to-database network (isolated)

volumes:
  postgres-data:                       # Named volume — persists across container restarts
  redis-data:
  nginx-cache:

Part 2: Your First Compose Stack — Simple Web App

Build a real three-service stack step by step.

mkdir -p ~/compose-demo/{app,nginx}
cd ~/compose-demo

The application (Python/Flask):

cat > app/app.py << 'EOF'
from flask import Flask, jsonify
import redis
import os

app = Flask(__name__)
r = redis.from_url(os.environ.get("REDIS_URL", "redis://cache:6379"))

@app.route("/")
def index():
    visits = r.incr("visits")
    return jsonify({"message": "Hello from Docker Compose!", "visits": int(visits)})

@app.route("/health")
def health():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
EOF

cat > app/requirements.txt << 'EOF'
flask>=3.0.0
redis>=5.0.0
gunicorn>=22.0.0
EOF

cat > app/Dockerfile << 'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]
EOF

Nginx config:

cat > nginx/default.conf << 'EOF'
server {
    listen 80;

    location / {
        proxy_pass         http://app:5000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
    }

    location /health {
        access_log off;
        return 200 "OK\n";
        add_header Content-Type text/plain;
    }
}
EOF

The compose file:

cat > docker-compose.yml << 'EOF'
name: compose-demo

services:
  app:
    build: ./app
    restart: unless-stopped
    environment:
      REDIS_URL: redis://cache:6379
    depends_on:
      cache:
        condition: service_started
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')"]   # python:3.12-slim has no curl
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
    networks:
      - internal

  nginx:
    image: nginx:1.27-alpine
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - internal

  cache:
    image: redis:7.4-alpine
    restart: unless-stopped
    volumes:
      - redis-data:/data
    networks:
      - internal

networks:
  internal:

volumes:
  redis-data:
EOF

Start the stack:

docker compose up -d

Expected output:

[+] Running 4/4
 ✔ Network compose-demo_internal  Created    0.1s
 ✔ Container compose-demo-cache-1   Started  0.3s
 ✔ Container compose-demo-app-1     Started  0.8s
 ✔ Container compose-demo-nginx-1   Started  1.2s

Check all containers are healthy:

docker compose ps

Expected output:

NAME                    IMAGE              COMMAND                  SERVICE   CREATED         STATUS                   PORTS
compose-demo-app-1      compose-demo-app   "gunicorn --bind 0.0…"   app       12 seconds ago  Up 11 seconds (healthy)
compose-demo-cache-1    redis:7.4-alpine   "docker-entrypoint.s…"   cache     12 seconds ago  Up 12 seconds
compose-demo-nginx-1    nginx:1.27-alpine  "/docker-entrypoint.…"   nginx     10 seconds ago  Up 9 seconds             127.0.0.1:8080->80/tcp

Test the application:

curl -s http://localhost:8080/ | python3 -m json.tool

Expected output:

{
    "message": "Hello from Docker Compose!",
    "visits": 1
}

# Run again — visit counter increments via Redis
curl -s http://localhost:8080/ | python3 -m json.tool

Expected output:

{
    "message": "Hello from Docker Compose!",
    "visits": 2
}

Redis is persisting the visit counter across requests.


Part 3: Essential Compose Commands

# ── Start / Stop ──────────────────────────────────────────────────────────
docker compose up -d                  # Start all services detached
docker compose up -d --build          # Rebuild images before starting
docker compose up -d service-name     # Start only one service
docker compose start                  # Start stopped containers (no rebuild)
docker compose stop                   # Stop containers (keep them)
docker compose restart app            # Restart a single service
docker compose down                   # Stop AND remove containers and networks
docker compose down -v                # Also remove named volumes (⚠ deletes data)
docker compose down --rmi all         # Also remove built images

# ── Inspect ───────────────────────────────────────────────────────────────
docker compose ps                     # List containers and their status
docker compose ps --format json       # JSON output for scripting
docker compose logs app               # Print logs for service
docker compose logs -f app            # Follow/tail logs
docker compose logs -f --tail=50      # Last 50 lines, then follow
docker compose top                    # Show running processes inside containers

# ── Execute ───────────────────────────────────────────────────────────────
docker compose exec app bash          # Interactive shell in running container
docker compose exec app python manage.py migrate   # Run one-off command
docker compose run --rm app pytest    # Run command in NEW container, remove after

# ── Build ─────────────────────────────────────────────────────────────────
docker compose build                  # Build all services with build: directive
docker compose build --no-cache app   # Rebuild without layer cache
docker compose pull                   # Pull latest image versions

# ── Scale ─────────────────────────────────────────────────────────────────
docker compose up -d --scale app=3   # Run 3 instances of app service

# ── Config ────────────────────────────────────────────────────────────────
docker compose config                 # Print merged, validated config
docker compose config --services      # List service names
docker compose convert                # Convert to canonical format

Part 4: Environment Variables and Secrets

The .env file pattern

Docker Compose automatically loads a .env file in the same directory as docker-compose.yml. Variables in .env are available as ${VARIABLE} substitutions in the compose file.

# Create .env (never commit this to Git)
cat > .env << 'EOF'
# Application
APP_PORT=8080
APP_ENV=production

# Database
DB_NAME=myapp
DB_USER=appuser
DB_PASSWORD=change_me_to_a_strong_password_32chars

# Redis
REDIS_MAXMEM=256mb
EOF

# Add .env to .gitignore
echo ".env" >> .gitignore
echo ".env.*" >> .gitignore

# Create .env.example for teammates (safe to commit)
cat > .env.example << 'EOF'
APP_PORT=8080
APP_ENV=production
DB_NAME=myapp
DB_USER=appuser
DB_PASSWORD=REPLACE_WITH_STRONG_PASSWORD
REDIS_MAXMEM=256mb
EOF

Reference in compose file:

services:
  app:
    ports:
      - "${APP_PORT}:5000"        # Uses APP_PORT from .env
    environment:
      APP_ENV: ${APP_ENV}         # Substituted from .env
      DB_URL: postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}
  db:
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}

Verify variable substitution:

docker compose config | grep -A5 "environment:"

Expected output (secrets are substituted but visible in config — use Docker secrets for production):

    environment:
      APP_ENV: production
      DB_URL: postgresql://appuser:change_me_to_a_strong_password_32chars@db:5432/myapp
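Compose's substitution syntax also supports defaults: ${VAR:-fallback} uses the fallback when VAR is unset or empty, while ${VAR-fallback} uses it only when VAR is unset. A minimal Python sketch of these two rules (a toy model for understanding, not Compose's actual implementation):

```python
import re

def substitute(template: str, env: dict) -> str:
    """Expand ${VAR}, ${VAR:-default} and ${VAR-default} the way Compose does."""
    pattern = re.compile(r"\$\{(\w+)(?:(:?-)([^}]*))?\}")

    def repl(m):
        name, op, default = m.group(1), m.group(2), m.group(3)
        value = env.get(name)
        if op == ":-":                  # unset OR empty -> use default
            return value if value else default
        if op == "-":                   # only unset -> use default
            return value if value is not None else default
        return value or ""              # bare ${VAR}

    return pattern.sub(repl, template)

print(substitute("redis-server --maxmemory ${REDIS_MAXMEM:-256mb}", {}))
# -> redis-server --maxmemory 256mb
print(substitute("postgres://${DB_USER}@db/${DB_NAME:-myapp}", {"DB_USER": "appuser"}))
# -> postgres://appuser@db/myapp
```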

Docker secrets (production-safe)

For production deployments where you don’t want secrets appearing in docker inspect output:

# Create secret files (store securely, not in Git)
mkdir -p secrets/
echo "strong_db_password_32chars_here" > secrets/db_password.txt
echo "secrets/" >> .gitignore

# Reference in compose file
cat >> docker-compose.yml << 'EOF'

secrets:
  db_password:
    file: ./secrets/db_password.txt
EOF

# In the service that needs the secret:
services:
  db:
    image: postgres:17-alpine
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password  # Postgres reads this file

The secret is mounted at /run/secrets/db_password inside the container — never passed as an environment variable.


Part 5: Production-Grade Stack — FastAPI + PostgreSQL + Redis

A complete production-ready compose stack based on the stack from our Build a REST API with FastAPI guide:

mkdir -p ~/prod-stack/{app,nginx,scripts}
cd ~/prod-stack

# Minimal FastAPI app
cat > app/main.py << 'EOF'
from fastapi import FastAPI
import os

app = FastAPI()

@app.get("/health")
async def health():
    return {"status": "ok", "env": os.environ.get("APP_ENV", "unknown")}

@app.get("/")
async def root():
    return {"message": "Sovereign API — running locally"}
EOF

cat > app/requirements.txt << 'EOF'
fastapi>=0.115.0
uvicorn[standard]>=0.32.0
asyncpg>=0.30.0
redis>=5.0.0
EOF

cat > app/Dockerfile << 'EOF'
FROM python:3.12-slim
WORKDIR /app
RUN adduser --disabled-password --gecos "" appuser
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chown -R appuser:appuser /app
USER appuser
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]
EOF

# Nginx reverse proxy config
cat > nginx/api.conf << 'EOF'
upstream api_backend {
    server app:8000;
    keepalive 32;
}

server {
    listen 80;
    server_name _;

    client_max_body_size 10M;

    location / {
        proxy_pass         http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header   Connection "";
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 60s;
    }
}
EOF

# Production compose file
cat > docker-compose.yml << 'EOF'
name: prod-stack

services:
  app:
    build:
      context: ./app
      dockerfile: Dockerfile
    restart: unless-stopped
    environment:
      APP_ENV: ${APP_ENV:-production}
      DATABASE_URL: postgresql+asyncpg://appuser:${DB_PASSWORD}@db:5432/${DB_NAME}
      REDIS_URL: redis://cache:6379/0
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
      interval: 15s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M

  nginx:
    image: nginx:1.27-alpine
    restart: unless-stopped
    ports:
      - "127.0.0.1:80:80"
    volumes:
      - ./nginx/api.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - internal

  db:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME:-myapp}
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_INITDB_ARGS: "--encoding=UTF8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d ${DB_NAME:-myapp}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s
    networks:
      - internal

  cache:
    image: redis:7.4-alpine
    restart: unless-stopped
    command: >
      redis-server
      --maxmemory ${REDIS_MAXMEM:-256mb}
      --maxmemory-policy allkeys-lru
      --save 60 1
      --loglevel warning
    volumes:
      - redis-data:/data
    networks:
      - internal

networks:
  internal:
    driver: bridge

volumes:
  postgres-data:
  redis-data:
EOF

# Create .env
cat > .env << 'EOF'
APP_ENV=production
DB_NAME=myapp
DB_PASSWORD=sovereign_strong_password_change_me
REDIS_MAXMEM=256mb
EOF

echo ".env" >> .gitignore

Start and verify:

docker compose up -d

# Wait for health checks
sleep 15
docker compose ps

Expected output:

NAME                 IMAGE                  SERVICE   STATUS                    PORTS
prod-stack-app-1     prod-stack-app         app       Up 18 seconds (healthy)
prod-stack-cache-1   redis:7.4-alpine       cache     Up 20 seconds
prod-stack-db-1      postgres:17-alpine     db        Up 20 seconds (healthy)
prod-stack-nginx-1   nginx:1.27-alpine      nginx     Up 16 seconds             127.0.0.1:80->80/tcp

Test all services:

# API via nginx
curl -s http://localhost/

Expected output:

{"message":"Sovereign API — running locally"}

# Database connectivity
docker compose exec db psql -U appuser -d myapp -c "SELECT version();" | head -3

Expected output:

                                                version
-----------------------------------------------------------------------
 PostgreSQL 17.4 on x86_64-pc-linux-gnu, compiled by gcc 13.2.0, 64-bit

# Redis connectivity
docker compose exec cache redis-cli ping

Expected output:

PONG

Part 6: Development Overrides with Compose Watch

For development, you want live code reload without rebuilding the image. Docker Compose v2 solves this with two approaches:

Approach A: Override file (classic)

# docker-compose.override.yml — auto-merged with docker-compose.yml in development
cat > docker-compose.override.yml << 'EOF'
# Development overrides — NOT for production
services:
  app:
    build:
      target: development       # Use dev stage in multi-stage Dockerfile
    volumes:
      - ./app:/app              # Bind mount source code (live reload)
    environment:
      APP_ENV: development
      DEBUG: "true"
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - "127.0.0.1:8000:8000"  # Expose app port directly in dev
  db:
    ports:
      - "127.0.0.1:5432:5432"  # Expose DB port for local tools (TablePlus, pgAdmin)
  cache:
    ports:
      - "127.0.0.1:6379:6379"  # Expose Redis for local inspection
EOF

# Development: docker compose uses both files automatically
docker compose up -d
# Production: specify only the base file
docker compose -f docker-compose.yml up -d

Approach B: Compose Watch (v2.22+)

docker compose watch was added in Compose v2.22. It watches for file changes and syncs them into the running container — no bind mounts, no volume conflicts.

# Add to your service in docker-compose.yml:
services:
  app:
    develop:
      watch:
        - action: sync           # Copy changed files into container
          path: ./app
          target: /app
          ignore:
            - __pycache__/
            - "*.pyc"
        - action: rebuild        # Rebuild image when these files change
          path: ./app/requirements.txt
        - action: restart        # Restart container on config changes
          path: .env

# Start with watch mode
docker compose watch

# In another terminal — edit a file
echo "# comment" >> app/main.py

Expected output in watch terminal:

Watch enabled
  Watching: ./app → /app
  Watching: app/requirements.txt (rebuild)
  Watching: .env (restart)
  ...
Syncing app (app/main.py) 0/1
Syncing app (app/main.py) 1/1

File is live in the container within milliseconds — no rebuild, no restart.


Part 7: Useful Patterns

Wait for database with healthcheck

The most common Docker Compose mistake is not using condition: service_healthy. Here is the correct pattern:

services:
  app:
    depends_on:
      db:
        condition: service_healthy      # ← Correct: waits for DB to accept connections
      cache:
        condition: service_started      # ← Acceptable for Redis (fast start)

  db:
    image: postgres:17-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 15s               # Give Postgres time to initialise on first run
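Health checks solve startup ordering, but a connection can still fail later (a database restart, for example), so a small application-side retry loop is a common companion pattern. This generic wait_for helper is a sketch, not part of Compose; the psycopg call in the docstring is only an example:

```python
import time

def wait_for(connect, attempts: int = 10, delay: float = 1.0):
    """Call connect() until it succeeds, with linear backoff between tries.

    Mirrors what depends_on + healthcheck gives you at startup, but from
    inside the application, e.g. wait_for(lambda: psycopg.connect(dsn)).
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:            # retry on any connection failure
            last_error = exc
            time.sleep(delay * attempt)     # 1s, 2s, 3s, ...
    raise RuntimeError(f"gave up after {attempts} attempts") from last_error
```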

Run database migrations on startup

services:
  migrate:
    build: ./app
    command: alembic upgrade head      # Run Alembic migrations
    environment:
      DATABASE_URL: postgresql://appuser:${DB_PASSWORD}@db:5432/${DB_NAME}
    depends_on:
      db:
        condition: service_healthy
    restart: on-failure                # Retry if DB not ready yet
    networks:
      - internal

  app:
    depends_on:
      migrate:
        condition: service_completed_successfully  # Only start after migrations succeed

Multi-stage Dockerfile for dev/prod

# Dockerfile — two stages: development and production
FROM python:3.12-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

FROM base AS development
RUN pip install pytest pytest-asyncio httpx   # Dev-only deps
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]

FROM base AS production
RUN adduser --disabled-password --gecos "" appuser
COPY . .
RUN chown -R appuser:appuser /app
USER appuser
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Troubleshooting

dependency failed to start: container is unhealthy

Cause: The dependency’s health check is failing, so dependent services won’t start. Fix:

docker compose logs db          # Check why the dependency failed
docker compose ps               # See which container is unhealthy
# Fix the healthcheck issue (wrong credentials, port, or start_period too short)
# Increase start_period for slow-starting services like PostgreSQL

Error response from daemon: Ports are not available: address already in use

Cause: Another process is already using the host port. Fix:

sudo ss -tlnp | grep :80        # See what's using port 80
# Stop the conflicting process or change the host port in docker-compose.yml

Data is lost after docker compose down

Cause: docker compose down removes containers but not named volumes — unless you used -v. Fix:

# Check volumes still exist
docker volume ls | grep myapp
# Restart — data is still in the volume
docker compose up -d
# If you ran down -v, data is gone. Restore from backup.

exec /usr/local/bin/docker-entrypoint.sh: exec format error

Cause: Image was built for a different CPU architecture (e.g., amd64 image on ARM64). Fix:

# Build for both platforms with buildx
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
# Or explicitly pull the correct arch
docker pull --platform linux/arm64 postgres:17-alpine

Conclusion

Docker Compose v2 is now part of your workflow: declarative multi-container stacks, guaranteed startup order via health checks, environment management via .env files (with Docker secrets for production credentials), and a live development environment via Compose Watch. The production stack (FastAPI + PostgreSQL + Redis + Nginx) matches the architecture used throughout the Dev Corner guides.

The natural next step is securing these containers — see Docker Security Best Practices 2026 for the hardening checklist, or GitHub Actions CI/CD to deploy this stack automatically on every push to main.


People Also Ask

What is the difference between docker compose up and docker compose start?

docker compose up creates and starts containers — it pulls images, builds services with build:, creates networks and volumes, and starts everything. docker compose start only starts already-created containers that have been stopped with docker compose stop. Use up for first launch and after config changes; use start/stop to pause and resume a running stack without losing the container state.

How do I update a single service without restarting the whole stack?

docker compose build app          # Rebuild the service image
docker compose up -d --no-deps app  # Restart only that service, not its dependencies

The --no-deps flag prevents Compose from restarting the services listed in depends_on.

How do I back up a named volume in Docker Compose?

# Backup: dump volume contents to a tar archive
docker run --rm \
  -v prod-stack_postgres-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/postgres-backup-$(date +%Y%m%d).tar.gz -C /data .

# Restore:
docker run --rm \
  -v prod-stack_postgres-data:/data \
  -v $(pwd):/backup \
  alpine tar xzf /backup/postgres-backup-20260422.tar.gz -C /data

Should I use Docker Compose or Kubernetes for production?

Docker Compose is the right choice for: single-server deployments, development environments, small-to-medium applications (< 10 containers), and teams without Kubernetes expertise. Kubernetes is appropriate for: multi-server clusters requiring auto-scaling, high availability across nodes, and teams with dedicated platform engineering. For most sovereign self-hosted applications — personal projects, small SaaS, internal tools — Docker Compose on a single Hetzner or DigitalOcean VPS is simpler, cheaper, and easier to maintain. Kubernetes adds significant operational complexity; only add it when you genuinely need horizontal scaling across multiple nodes.


Tested on: Ubuntu 24.04 LTS (Hetzner CX22), macOS Sequoia 15.4 (Apple M3 Max). Docker CE 27.3.1, Docker Compose v2.29.7. Last verified: April 22, 2026.
