Vucense

Docker Volumes Guide 2026: Persistent Data Storage for Containers

🟢Beginner

Persist data in sovereign Docker deployments: named volumes, bind mounts, tmpfs mounts, volume backup strategies, and migrating data between containers safely on Ubuntu 24.04.

Author: Divya Prakash, AI Systems Architect & Founder

Reading time: 20 min · Build time: 20 min


Key Takeaways

  • Named volumes for production: Managed by Docker, persist independently of containers, easy to inspect and back up. See Docker Compose for orchestration.
  • Bind mounts for development: Direct host directory access, great for live reload, brittle in production.
  • docker compose down vs down -v: Without -v, your named volumes survive. With -v, all volumes are deleted and the data is gone permanently. Always back up before running it — Part 4 covers automating this with systemd timers.
  • Backup before deleting: docker run --rm -v volname:/data alpine tar czf - -C /data . > backup.tar.gz exports volume contents safely. Combine with Docker Networking to build resilient multi-container stacks. For databases, see PostgreSQL Performance Tuning.

Introduction

Direct Answer: How do I persist data with Docker volumes on Ubuntu 24.04 in 2026?

Use named volumes for production. Declare them in Docker Compose under a top-level volumes: key, then mount them per service with volumes: - postgres-data:/var/lib/postgresql/data. The volume persists across container restarts and docker compose down (without the -v flag). Back up with docker run --rm -v volname:/data alpine tar czf - -C /data . > backup.tar.gz and restore with docker run --rm -i -v volname:/data alpine tar xzf - -C /data < backup.tar.gz.


Part 1: Understanding Volume Types

Docker offers three persistence mechanisms: named volumes, bind mounts, and tmpfs. Each serves different use cases.

# ── Named Volumes (recommended for production) ────────────────────────────
docker volume create mydata
docker run --rm -v mydata:/data alpine sh -c "echo 'persistent' > /data/test.txt"
docker run --rm -v mydata:/data alpine cat /data/test.txt
# Output: persistent
# Data survives container deletion

# ── Bind Mounts (development) ─────────────────────────────────────────────
mkdir -p /tmp/myapp
echo "bind mount data" > /tmp/myapp/file.txt
docker run --rm -v /tmp/myapp:/data alpine cat /data/file.txt
# Output: bind mount data
# Edits on host immediately visible in container

# ── tmpfs (in-memory, not persistent) ────────────────────────────────────
docker run --rm --tmpfs /cache:size=100m alpine sh -c "
    echo 'in-memory only' > /cache/temp.txt && cat /cache/temp.txt
"
# Data lost when container stops

Part 2: Volume Management

# List all volumes
docker volume ls

# Inspect a volume (shows location on disk)
docker volume inspect mydata

Expected output:

[{
    "Name": "mydata",
    "Driver": "local",
    "Mountpoint": "/var/lib/docker/volumes/mydata/_data",
    "CreatedAt": "2026-04-29T14:00:00Z"
}]

# View volume contents
sudo ls /var/lib/docker/volumes/mydata/_data/

# Remove unused volumes (⚠ careful — check nothing needs them)
docker volume prune              # Remove all unused
docker volume rm mydata          # Remove specific volume

Part 2.5: Named Volumes in Docker Compose

Named volumes are Docker-managed storage. They persist independently of containers and survive docker compose down without the -v flag. This is the standard approach for production databases.

# docker-compose.yml — production volume patterns
name: myapp

services:
  db:
    image: postgres:17-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data      # Named volume — survives restarts
      - ./init-scripts:/docker-entrypoint-initdb.d:ro  # Bind mount — read-only init scripts

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data                             # Named volume for persistence

  app:
    image: myapp:latest
    volumes:
      - app-uploads:/app/uploads                     # Named volume for user uploads
      - /tmp/app-cache:/app/cache                    # Bind mount for temp cache

volumes:
  postgres-data:    # Declared here — survives 'docker compose down'
  redis-data:
  app-uploads:

Deploy and verify the volumes exist:

docker compose up -d
docker volume ls | grep myapp

Expected output:

local     myapp_postgres-data
local     myapp_redis-data
local     myapp_app-uploads

Part 3: Encrypting Docker Volumes

For sensitive data (credentials, personal documents, AI training data), encrypt volumes at rest to protect against physical disk theft or unauthorized access.

Option 1: Encrypted LVM (Linux — Most Secure)

# Create encrypted logical volume
sudo lvcreate -L 50G -n sovereign_vol vg0
sudo cryptsetup luksFormat /dev/vg0/sovereign_vol
sudo cryptsetup open /dev/vg0/sovereign_vol encrypted_vol
sudo mkfs.ext4 /dev/mapper/encrypted_vol

# Mount the encrypted volume (prompt for password)
sudo mkdir -p /mnt/encrypted_vol
sudo mount /dev/mapper/encrypted_vol /mnt/encrypted_vol

# Use in Docker Compose — bind mount to encrypted partition
services:
  db:
    image: postgres:17-alpine
    volumes:
      - /mnt/encrypted_vol/postgres_data:/var/lib/postgresql/data

Key principle: Data on disk is encrypted with AES-256. Even if the physical drive is stolen, attackers cannot read the data without the LUKS passphrase.
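For unattended reboots, the LUKS mapping can be opened from a root-only keyfile instead of an interactive passphrase — enroll one with sudo cryptsetup luksAddKey /dev/vg0/sovereign_vol /root/.luks-keyfile (chmod 600 the keyfile, and keep the passphrase as a fallback key). A sketch of the boot-time config, assuming the device and mountpoint from above and the illustrative keyfile path:

```
# /etc/crypttab — open the LUKS mapping at boot using the keyfile
# <name>         <device>                 <keyfile>            <options>
encrypted_vol    /dev/vg0/sovereign_vol   /root/.luks-keyfile  luks

# /etc/fstab — mount where the Compose bind mount above expects it
/dev/mapper/encrypted_vol  /mnt/encrypted_vol  ext4  defaults,nofail  0  2
```

The nofail option keeps the host bootable even if the volume cannot be opened; Docker services depending on the mount will simply fail to start instead of hanging the boot.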

Option 2: Application-Level Encryption

Encrypt data before writing to volume (language-agnostic, works with any storage):

# Example: encrypt secrets before saving to a volume
# (illustrative age-style API — bindings such as pyrage expose similar
#  encrypt/decrypt helpers; adapt the import and call names to your library)
import os
from pathlib import Path
from age import encrypt, decrypt  # hypothetical import; see note above

plaintext = os.environ['SECRET_API_KEY'].encode()
public_key = os.environ['ENCRYPTION_PUBLIC_KEY']

ciphertext = encrypt(plaintext, public_key)
Path("/data/vault.enc").write_bytes(ciphertext)

# On container restart, decrypt with the matching private key:
private_key = os.environ['ENCRYPTION_PRIVATE_KEY']
ciphertext = Path("/data/vault.enc").read_bytes()
plaintext = decrypt(ciphertext, private_key)

Trade-off: More flexible (works on any storage), but requires application changes.

Option 3: tmpfs for Ephemeral Sensitive Data

Use tmpfs volumes for data that should never hit disk:

services:
  app:
    image: myapp:latest
    volumes:
      - session-data:/app/sessions

volumes:
  session-data:        # Declared at the top level — driver_opts are not valid under a service
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=1g"

Use cases:

  • JWT tokens (in-memory only)
  • Session cache
  • Temporary encryption keys
  • Password hashes during processing

Key principle: Encryption at rest protects data if the physical disk is stolen — critical for sovereign deployments in untrusted locations. Combine with file permissions (chmod 600, run as non-root user) and encrypted network transport (TLS) for defense-in-depth.
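The file-permission layer of that defense-in-depth can be sketched as follows. This is a local illustration — mktemp directories stand in for the real paths from this guide (/mnt/encrypted_vol, /var/backups/docker-volumes):

```shell
# Owner-only permissions for volume data and backup archives.
DATA_DIR=$(mktemp -d)      # stand-in for the encrypted data mountpoint
BACKUP_DIR=$(mktemp -d)    # stand-in for the backup directory

# 700: only the owning user may enter or list these directories
chmod 700 "$DATA_DIR" "$BACKUP_DIR"

# umask 077 makes newly created files owner-only (600) by default
umask 077
touch "$BACKUP_DIR/example.tar.gz"

# Show the resulting octal modes
stat -c '%a %n' "$DATA_DIR" "$BACKUP_DIR" "$BACKUP_DIR/example.tar.gz"
```

Applied to the real paths (as root), this ensures backup tarballs — which contain full database contents — are never group- or world-readable.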


Part 4: Backup and Restore Strategy

Production databases require automated backups. The 3-2-1 rule is the industry standard: keep 3 copies of data, on 2 different media types, with 1 offsite copy. This protects against hardware failure, accidental deletion, and site disasters.
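A backup strategy is only as good as its restores. Before relying on the commands below in production, it is worth verifying that the tar round-trip reproduces the data exactly; a minimal local sketch, with temporary directories standing in for volume contents:

```shell
# Round-trip check: archive a directory, restore it elsewhere, compare.
SRC=$(mktemp -d)                      # stands in for the volume's _data directory
DST=$(mktemp -d)                      # stands in for the restored volume
BACKUP="$(mktemp -d)/volume-backup.tar.gz"

echo "row1" > "$SRC/table.dat"
mkdir -p "$SRC/sub"
echo "row2" > "$SRC/sub/index.dat"

tar czf "$BACKUP" -C "$SRC" .         # same flags as the guide's backup commands
tar xzf "$BACKUP" -C "$DST"           # same flags as the restore commands

diff -r "$SRC" "$DST" && echo "backup verified"
```

The same check works against a real restore: restore into a scratch volume and diff it against the live one before deleting anything.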

# Backup a named volume to a tar archive
VOLUME="myapp_postgres-data"
BACKUP_DIR="/var/backups"

docker run --rm \
    -v ${VOLUME}:/data \
    -v ${BACKUP_DIR}:/backup \
    alpine \
    tar czf /backup/${VOLUME}-$(date +%Y%m%d_%H%M).tar.gz -C /data .

ls -lh ${BACKUP_DIR}/${VOLUME}*.tar.gz

Expected output:

-rw-r--r-- 1 root root 847M Apr 29 14:00 myapp_postgres-data-20260429_1400.tar.gz

# Restore from backup
docker run --rm \
    -v ${VOLUME}:/data \
    -v ${BACKUP_DIR}:/backup \
    alpine \
    tar xzf /backup/${VOLUME}-20260429_1400.tar.gz -C /data

echo "Restore complete"

Automated Backups with Cron

For production deployments, automate backups using systemd timers or cron:

# Create backup script — Automated Docker Volume Backup Strategy
# This script backs up multiple Docker volumes on a schedule using tar + gzip compression
# Benefits: versioned backups, data retention policy, automated cleanup of old backups

sudo mkdir -p /usr/local/bin
sudo tee /usr/local/bin/docker-backup-volumes.sh << 'EOF'
#!/bin/bash
# ══════════════════════════════════════════════════════════════════════════════════════════════
# Docker Volume Backup Script — Production-Grade Automation
# Purpose: Back up named Docker volumes to gzip-compressed tarballs with retention policy
# ══════════════════════════════════════════════════════════════════════════════════════════════

set -e  # Exit on first error (prevents silent failures in cron jobs)

# ── Configuration ──────────────────────────────────────────────────────────────────────────────
# VOLUMES: array of Docker volume names to backup
# Docker Compose prefixes volume names with the project name: project 'myapp' → volume 'myapp_postgres-data'
# Find all volumes: docker volume ls | grep "myapp"
VOLUMES=("myapp_postgres-data" "myapp_redis-data")

# BACKUP_DIR: where to store compressed backup files (must have sufficient disk space)
# 1 GB database ×  7 days retention = 7 GB disk required
# Consider: /var/backups (system standard), /mnt/backups (external drive), or cloud storage
BACKUP_DIR="/var/backups/docker-volumes"

# RETENTION_DAYS: keep backups for N days, then delete old files
# 7 days: daily backups, week of history before cleanup
# For critical systems: 30+ days, or move old backups to archive storage
RETENTION_DAYS=7

# ── Create Backup Directory ────────────────────────────────────────────────────────────────────
# mkdir -p: create directory and parent paths if needed; don't error if already exists
mkdir -p "$BACKUP_DIR"

# ── Backup Each Volume ─────────────────────────────────────────────────────────────────────────
# Loop through VOLUMES array
for VOL in "${VOLUMES[@]}"; do
    # Generate timestamp: YYYYMMDD_HHMMSS format (sortable, unique, human-readable)
    # Example: 20260516_023000 = May 16, 2026 at 02:30:00 AM
    TIMESTAMP=$(date +%Y%m%d_%H%M%S)
    BACKUP_FILE="$BACKUP_DIR/${VOL}-${TIMESTAMP}.tar.gz"
    
    echo "[$(date)] Backing up $VOL to $BACKUP_FILE"
    
    # Docker backup approach: run temporary container with volume mounted
    # --rm: automatically remove container after backup (cleanup, no orphaned containers)
    # -v "$VOL":/data: mount Docker volume as /data directory in container
    # -v "$BACKUP_DIR":/backup: mount backup directory so container can write files
    # alpine: minimal image (~5 MB), includes tar and gzip
    # tar czf: create compressed (z), file (f) tarball; -C /data . backs up entire volume
    
    docker run --rm \
        -v "$VOL":/data \
        -v "$BACKUP_DIR":/backup \
        alpine \
        tar czf "/backup/$(basename $BACKUP_FILE)" -C /data .
    
    # $(basename $BACKUP_FILE): extracts just the filename (not the full path)
    # Example: /var/backups/docker-volumes/myapp_postgres-data-20260516_023000.tar.gz → myapp_postgres-data-20260516_023000.tar.gz
    
    echo "[$(date)] Backup complete: $BACKUP_FILE"
done

# ── Cleanup Old Backups (Data Retention Policy) ───────────────────────────────────────────────
# find: search for files in BACKUP_DIR
# -name "*.tar.gz": only backup files (not logs, temp files)
# -mtime +$RETENTION_DAYS: modified more than N days ago (older than retention policy)
# Example: RETENTION_DAYS=7 deletes files modified 8+ days ago, keeps 7-day sliding window
# -delete: remove matching files (be careful with this! consider testing with -print first)

find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "[$(date)] Cleaned up backups older than $RETENTION_DAYS days"
EOF

sudo chmod +x /usr/local/bin/docker-backup-volumes.sh

# ── Create systemd Service ─────────────────────────────────────────────────────────────────────
# Systemd service defines HOW to run the backup script
# Type=oneshot: script runs once and exits (not a long-running daemon)
# ExecStart: command to execute
# StandardOutput/Error: send logs to journalctl (systemd journal, not /var/log/syslog)

sudo tee /etc/systemd/system/docker-backup-volumes.service << 'EOF'
[Unit]
# Note: systemd does not allow inline comments after values — keep them on their own lines
Description=Docker Volume Backup
# Depend on Docker being available, and start only after it is running
Requires=docker.service
After=docker.service

[Service]
# oneshot: the script runs once and exits (cron-like task, not a daemon)
Type=oneshot
ExecStart=/usr/local/bin/docker-backup-volumes.sh
# Send stdout/stderr to the systemd journal
# (view with: journalctl -u docker-backup-volumes.service)
StandardOutput=journal
StandardError=journal
EOF

# Create systemd timer (runs daily at 2 AM)
sudo tee /etc/systemd/system/docker-backup-volumes.timer << 'EOF'
[Unit]
Description=Daily Docker Volume Backup
Requires=docker-backup-volumes.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable docker-backup-volumes.timer
sudo systemctl start docker-backup-volumes.timer

# Verify
sudo systemctl status docker-backup-volumes.timer
sudo systemctl list-timers docker-backup-volumes.timer

Expected output:

NEXT                        LEFT      LAST                        PASSED  UNIT                        ACTIVATES
Thu 2026-05-17 02:00:00 UTC 4h 22min  Wed 2026-05-16 02:00:00 UTC 19h ago docker-backup-volumes.timer docker-backup-volumes.service

Check logs: sudo journalctl -u docker-backup-volumes.service -f

Option 2: Cron (Traditional)

# Edit crontab
sudo crontab -e

# Add this line (runs daily at 2 AM)
0 2 * * * /usr/local/bin/docker-backup-volumes.sh >> /var/log/docker-backup.log 2>&1

Option 3: Scheduled Docker Container

# docker-compose.yml — backup sidecar
services:
  backup:
    image: alpine:latest
    volumes:
      - myapp_postgres-data:/source-data:ro
      - /var/backups/docker-volumes:/backups
    entrypoint: /bin/sh
    # '$$' escapes '$' so Compose does not try to interpolate it; the
    # semicolons keep the folded YAML scalar a valid shell one-liner
    command: -c '
      while true; do
        tar czf /backups/backup-$$(date +%Y%m%d-%H%M).tar.gz -C /source-data .;
        find /backups -name "backup-*.tar.gz" -mtime +7 -delete;
        sleep 86400;
      done'
    restart: unless-stopped

Restore from Automated Backup

# List available backups
ls -lh /var/backups/docker-volumes/

# Restore from a specific backup
BACKUP_FILE="/var/backups/docker-volumes/myapp_postgres-data-20260516_020000.tar.gz"
VOLUME="myapp_postgres-data"

# Stop containers using the volume
docker compose down

# Restore
docker run --rm \
    -v "$VOLUME":/data \
    -v "$(dirname $BACKUP_FILE)":/backup \
    alpine \
    tar xzf "/backup/$(basename $BACKUP_FILE)" -C /data

# Start containers
docker compose up -d

echo "Restored from $(basename $BACKUP_FILE)"

Monitoring backups:

#!/bin/bash
# Alert if the last backup is older than 25 hours (backup runs at 2 AM daily)
BACKUP_DIR="/var/backups/docker-volumes"
LATEST=$(ls -t "$BACKUP_DIR"/*.tar.gz 2>/dev/null | head -1)

if [ -z "$LATEST" ]; then
    echo "WARNING: No backups found in $BACKUP_DIR" | mail -s "Docker Backup Alert" [email protected]
    exit 1
fi

LAST_MOD=$(stat -c %Y "$LATEST" 2>/dev/null || stat -f %m "$LATEST")  # GNU stat, BSD fallback
NOW=$(date +%s)
AGE=$((NOW - LAST_MOD))

if [ "$AGE" -gt 90000 ]; then  # 25 hours in seconds
    echo "WARNING: Latest backup is $((AGE / 3600)) hours old" | mail -s "Docker Backup Alert" [email protected]
fi

Add to crontab: 0 4 * * * /usr/local/bin/check-backup-age.sh


Troubleshooting

Error: No such volume

Cause: Volume name mismatch. Docker Compose prefixes volume names with the project name. Fix: docker volume ls | grep KEYWORD to find the correct full name.

Data not persisting between deployments

Cause: Running docker compose down -v (the -v flag deletes volumes) or using a bind mount that points to a non-persistent path. Fix: Use named volumes, not bind mounts, for production data. Never run down -v in production.


Conclusion

Named volumes are the correct production storage mechanism — Docker-managed, persistent, independently backupable, and transferable between containers. Bind mounts belong in development environments for live code reloading.

See Docker Compose Tutorial 2026 for volumes in the context of a full multi-service stack.


People Also Ask

What happens to Docker volumes when I run docker compose down?

Named volumes survive docker compose down — they are preserved and will be reattached when you run docker compose up again. Only docker compose down -v deletes named volumes. Anonymous volumes (created without a name) are always deleted by down, but named volumes declared in the volumes: section at the bottom of your compose file are safe. Always use named volumes for any data you care about.

Can I share a volume between multiple containers?

Yes — multiple containers can mount the same named volume simultaneously. This is common for shared file storage (uploaded files, static assets). For databases, avoid concurrent writes from multiple containers to the same volume — most databases don’t support this. Read-only mounts for shared configuration are safe: volumes: - shared-config:/config:ro.
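A sketch of the shared-volume pattern in Compose (service and volume names here are illustrative):

```yaml
services:
  uploader:
    image: myapp:latest
    volumes:
      - shared-assets:/app/uploads                        # read-write producer

  web:
    image: nginx:alpine
    volumes:
      - shared-assets:/usr/share/nginx/html/uploads:ro    # read-only consumer

volumes:
  shared-assets:
```

One service writes uploads; the other serves them read-only. Because only one container writes, there is no concurrent-write hazard.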


Troubleshooting & Common Issues

Issue: ERROR: Volume mount failed: No such volume

Cause: Volume name mismatch or Docker Compose project prefix.

# Fix: Find the correct volume name
docker volume ls | grep myapp
# Output: myapp_postgres-data (Docker Compose prefixes with the project name)

# For plain docker run, use the full prefixed name:
docker run --rm -v myapp_postgres-data:/data alpine ls /data

# Inside the compose file itself, keep the short name — Compose adds the prefix:
volumes:
  postgres-data:  # Becomes myapp_postgres-data on disk

Issue: Permission denied: cannot write to volume

Cause: Container running as non-root user without write permissions.

# Fix: Set correct ownership
docker exec myapp_db chown -R postgres:postgres /var/lib/postgresql/data

# Or in Dockerfile
RUN mkdir -p /app/data && chown -R appuser:appuser /app/data

Issue: Disk full: cannot write to volume

Cause: Volume filled to capacity.

# Fix: Check disk usage
docker exec myapp_db du -sh /var/lib/postgresql/data
df -h /var/lib/docker/volumes/  # Check host disk

# Solution: Increase disk or clean old data
# Or: Move volume to larger disk

Issue: Data lost after container removed

Cause: Used an anonymous volume or a temporary bind mount instead of a named volume.

# ⚠ Note: docker rm -v deletes a container's *anonymous* volumes
docker rm -v container_name

# ✅ Named volumes persist until explicitly removed
docker volume ls                      # Volume still exists after docker rm
docker volume rm myapp_postgres-data  # Explicit removal only

# Or restore from backup (mount the directory holding the tarball, not the file):
docker volume create myapp_postgres-data
docker run --rm -v "$(pwd)":/backup -v myapp_postgres-data:/data \
  alpine tar xzf /backup/backup.tar.gz -C /data

Issue: Backup file corrupted or empty

Cause: Volume not flushed before backup or tar command error.

# Fix: For databases, prefer an application-level dump over a raw file copy
docker exec myapp_db pg_dump -U postgres mydb > dump.sql  # PostgreSQL (adjust user/database)
# Or flush filesystem write buffers before a file-level backup:
docker exec myapp_db sync

# Verify backup integrity:
tar tzf backup.tar.gz | head -20  # List contents

Issue: Automated backup not running (cron job)

Cause: Cron PATH doesn’t include docker, or environment variables not set.

# Fix: Use full paths in cron
0 2 * * * /usr/local/bin/docker-backup-volumes.sh >> /var/log/docker-backup.log 2>&1
# Include in script:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Volume Type Comparison

Type          Use Case                         Performance  Persistence          Backup
Named Volume  Production databases, app state  Fast         ✅ Survives removal  Easy (docker run)
Bind Mount    Dev code sync, config files      Medium       Depends on host      Host filesystem
tmpfs         Cache, temporary files           Fastest      ❌ Lost on stop      None

Docker Volume Backup Strategy Decision Tree

What are you backing up?
├─ Single service (PostgreSQL)
│  └─ Simple backup script (docker run --rm tar)
├─ Multiple services (app + db + redis)
│  └─ Automated systemd timer with retention
├─ Critical production data
│  └─ Offsite backup (S3, Azure, Backblaze)
└─ Development databases
   └─ Manual backup before major changes

Frequently Asked Questions (FAQ)

Q: What’s the difference between named volumes and bind mounts?

Named Volume                          Bind Mount
Managed by Docker                     Managed by host filesystem
Survives after docker rm              Deleted if host directory removed
Stored in /var/lib/docker/volumes/    Points to any host path
Easier to back up                     Harder to back up (need rsync)
Better for production                 Better for development

Use named volumes for databases, bind mounts for dev code.

Q: Can I access files in a volume from the host?

A: Named volumes are inside /var/lib/docker/volumes/, which requires root access. Better approach:

# Extract files without root
docker run --rm -v myapp_postgres-data:/data -v $(pwd):/export \
  alpine cp /data/file.txt /export/

# Or mount the export path
docker run --rm -v myapp_postgres-data:/data:ro \
  alpine cat /data/file.txt > local-copy.txt

Q: How do I resize a Docker volume?

A: Volumes can’t be resized directly. Solution:

# 1. Create new larger volume
docker volume create myapp_postgres-data-new

# 2. Copy data (cp -a preserves ownership/permissions and includes dotfiles)
docker run --rm \
  -v myapp_postgres-data:/old:ro \
  -v myapp_postgres-data-new:/new \
  alpine sh -c 'cp -a /old/. /new/'

# 3. Update compose to use new volume
# 4. Delete old volume
docker volume rm myapp_postgres-data

Q: What’s the maximum volume size?

A: Depends on the host filesystem. There is no Docker-imposed limit — volumes can grow as large as the disk. Monitor usage with docker system df -v or sudo du -sh /var/lib/docker/volumes/<name>/_data.

Q: Can I encrypt Docker volumes?

A: Docker doesn’t encrypt volumes natively. Options:

  1. dm-crypt: Encrypt filesystem before Docker
  2. ecryptfs: Transparent encryption per volume
  3. Cloud backup: Backup to encrypted S3/Azure blob
  4. Database encryption: PostgreSQL pgcrypto extension

Q: How often should I backup volumes?

A: Depends on your RPO (Recovery Point Objective) — how much data you can afford to lose:

  • Business-critical: Hourly backups
  • Production: Daily backups (2 AM)
  • Staging: Weekly backups
  • Development: Manual before major changes
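These cadences map directly onto the OnCalendar= line of the timer from Part 4 (values illustrative; comments on their own lines, as systemd requires):

```
[Timer]
# Business-critical: top of every hour
OnCalendar=hourly
# Production: daily at 02:00 — OnCalendar=*-*-* 02:00:00
# Staging: weekly — OnCalendar=Sun *-*-* 03:00:00
OnCalendar=hourly
# Run missed jobs after downtime
Persistent=true
```

After editing the timer, reload and re-check the schedule: sudo systemctl daemon-reload && systemctl list-timers docker-backup-volumes.timer.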

Q: Can I move a volume to another Docker host?

A: Volumes are host-specific. To move:

# On host A: Backup volume
docker run --rm -v myapp_postgres:/data -v $(pwd):/export \
  alpine tar czf /export/backup.tar.gz -C /data .

# Transfer backup.tar.gz to host B

# On host B: Restore volume
docker volume create myapp_postgres
docker run --rm -v myapp_postgres:/data -v $(pwd):/import \
  alpine tar xzf /import/backup.tar.gz -C /data

Q: What’s the performance impact of volumes vs. SSD?

A: Named volumes: nearly identical to direct disk access. Bind mounts: roughly 5–10% slower on some setups (mount translation overhead). tmpfs: fastest by far — RAM-backed, with no disk I/O at all.

Q: How do I prevent data loss in multi-container setups?

A: Use 3-2-1 backup rule:

  • 3 copies: Original + 2 backups
  • 2 different media: Host disk + cloud storage
  • 1 offsite: At least one backup off-premises

Example for production:

# Copy 1: Original volume (host)
docker-backup-volumes.sh  # Daily local backup

# Copy 2: Cloud backup
aws s3 cp /var/backups/postgres-data*.tar.gz s3://my-backups/

# Copy 3: Offsite (Glacier Instant Retrieval for long-term archive)
aws s3 cp /var/backups/postgres.tar.gz s3://my-backups/ \
  --storage-class GLACIER_IR


Further Reading

  • Vucense Guides
  • Official Docker Documentation
  • Backup & Disaster Recovery Tools
  • Advanced Volume Management

Tested on: Ubuntu 24.04 LTS (Hetzner CX22). Docker CE 27.3.1. Last verified: May 16, 2026.
