Docker Compose: Define and Run Multi-Container Applications
Move beyond single-container Docker commands. Define your entire stack in one file, bring it up with one command, and share it with your team.
Running a single container with docker run works fine for simple cases. Once your application needs a database, a cache, a background worker, and a reverse proxy, the command line becomes unmanageable. Docker Compose solves this with a single YAML file that describes the whole stack.
A minimal example
A Node.js API and a Postgres database:
# compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d mydb"]
      interval: 5s
      retries: 5

volumes:
  pgdata:
Start everything:
docker compose up
Add -d to run in the background:
docker compose up -d
Stop and remove the containers and the network (named volumes are kept):
docker compose down
Stop and remove containers and volumes:
docker compose down -v
How networking works
Compose creates a private network for your stack automatically. Services reach each other by their service name — db in the example above, not localhost and not an IP address. This is why the DATABASE_URL uses @db:5432 instead of @localhost:5432.
Port mappings (ports: - "3000:3000") expose a port from the container to your host machine. Traffic from outside the stack goes through the host port. Traffic between services goes directly over the internal network without touching the host.
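If a service should only be reachable from inside the stack, omit the ports mapping entirely. You can optionally use expose to document the internal port. A sketch (the worker service and its port are illustrative):

```yaml
services:
  worker:
    build: .
    # No ports: mapping — other services can reach this container
    # at worker:8080 over the internal network, but the host cannot.
    expose:
      - "8080"
```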
depends_on and healthchecks
depends_on with condition: service_healthy tells Compose not to start the api service until the db service passes its healthcheck. Without this, the API container starts immediately after Postgres does — but Postgres takes a few seconds to become ready to accept connections, causing the first database queries to fail.
The healthcheck runs pg_isready inside the container every 5 seconds. Once it returns success, the dependent service starts.
depends_on without a condition (the default) only waits for the container to start, not for the process inside it to be ready. The health condition is almost always what you actually want.
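The two forms side by side, as a sketch (the service names are from the earlier example):

```yaml
services:
  api:
    # Short form: only waits for the db container to be created and started.
    # Postgres may not yet accept connections when api launches.
    depends_on:
      - db

  # Long form: waits until db's healthcheck reports healthy.
  # depends_on:
  #   db:
  #     condition: service_healthy
```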
Environment variables and .env files
Hard-coding credentials in compose.yml is fine for local development and bad for everything else. Move them to a .env file:
# .env
POSTGRES_USER=app
POSTGRES_PASSWORD=secret
POSTGRES_DB=mydb
Compose picks up .env automatically. Reference the variables with ${VAR_NAME}:
db:
  image: postgres:16-alpine
  environment:
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: ${POSTGRES_DB}
Add .env to .gitignore. Commit a .env.example with placeholder values so teammates know which variables they need to set.
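A matching .env.example might look like this (placeholder values only):

```
# .env.example — committed to the repo; copy to .env and fill in real values
POSTGRES_USER=app
POSTGRES_PASSWORD=change-me
POSTGRES_DB=mydb
```

To verify that interpolation resolved the way you expect, run docker compose config, which prints the fully rendered configuration with all variables substituted.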
Useful day-to-day commands
docker compose ps # status of all services
docker compose logs api # logs for a specific service
docker compose logs -f api # follow logs in real time
docker compose exec api sh # open a shell inside a running container
docker compose run api npm run seed # run a one-off command in a new container
docker compose restart api # restart a single service
docker compose build # rebuild images without starting
docker compose pull # pull latest versions of all images
Multiple compose files
Override configurations per environment without duplicating the base file:
docker compose -f compose.yml -f compose.override.yml up
A common pattern: compose.yml holds the base definition, compose.override.yml adds volume mounts for live code reloading during development, and compose.prod.yml adds production-specific settings like resource limits and restart policies.
Compose merges the files in order, with later files taking precedence. How keys combine depends on their type: scalar values in later files replace earlier ones, mapping keys like environment are merged per variable (later files win on conflicts), and list keys like ports are merged by appending rather than replacing.
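A sketch of what the development override might contain, assuming the Node.js API from earlier (the mount paths are illustrative):

```yaml
# compose.override.yml — applied automatically by `docker compose up`
# whenever the file exists next to compose.yml
services:
  api:
    volumes:
      # Bind-mount the source tree for live code reloading.
      - ./:/app
      # Anonymous volume so container-installed node_modules
      # are not shadowed by the host directory.
      - /app/node_modules
    environment:
      NODE_ENV: development
```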
The shape of a production-ready service
services:
  api:
    image: registry.example.com/myapp:${IMAGE_TAG:-latest}
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
    deploy:
      resources:
        limits:
          memory: 512m
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
restart: unless-stopped means the container comes back automatically after a crash or a host reboot, but stays stopped if you explicitly brought it down with docker compose stop. That is the right behaviour for a persistent service.
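One caveat on the healthcheck above: curl has to exist inside the image, and slim images often ship without it. A common alternative, assuming the same /health endpoint and a BusyBox-style wget as found in Alpine-based images, is:

```yaml
healthcheck:
  # --spider makes a HEAD-style request and discards the body;
  # a non-2xx response makes wget exit non-zero, marking the check failed.
  test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000/health"]
  interval: 30s
  timeout: 5s
  retries: 3
```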