
Multi-Stage Docker Builds: Smaller Images, Cleaner Production

Use build stages to compile your code in a full environment and ship only the result — without build tools, source files, or test dependencies in the final image.

The compiler, your test suite, the dev dependencies, the .git folder — none of these belong in a production container. Multi-stage builds let you use one environment to build and a separate, minimal environment to run, without any manual cleanup.


The problem with single-stage builds

A typical Dockerfile for a Go service:

FROM golang:1.22
WORKDIR /app
COPY . .
RUN go build -o server .
CMD ["./server"]

This works. It also ships with the entire Go toolchain, the module download cache, and every source file. The image is 800 MB or more. A multi-stage build cuts this to under 20 MB.
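To put numbers on the difference, you can build both variants under throwaway tags and compare. This is a sketch; the Dockerfile names and image tags are placeholders:

```shell
# Build the single-stage and multi-stage variants under hypothetical tags,
# then compare the reported image sizes.
docker build -f Dockerfile.single -t myapp:single .
docker build -f Dockerfile.multi  -t myapp:multi  .
docker images --format "{{.Repository}}:{{.Tag}}  {{.Size}}" | grep myapp
```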


The fix: a build stage and a runtime stage

# Stage 1: build
FROM golang:1.22-alpine AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

# Stage 2: runtime
FROM alpine:3.19
WORKDIR /app
COPY --from=build /app/server .
USER nobody
EXPOSE 8080
CMD ["./server"]

What happens:

  1. Stage 1 (build) starts from the full Go image. It downloads modules first — this layer is cached as long as go.mod and go.sum do not change — then copies source and compiles the binary.
  2. Stage 2 (the final stage) starts from a minimal Alpine image. It copies only the compiled binary from stage 1 with COPY --from=build. The Go toolchain, source files, and module cache never touch this stage.
  3. CGO_ENABLED=0 produces a statically linked binary that runs without a C library. This matters when the binary is copied into a base image that lacks the libraries it was linked against — or, with FROM scratch, has no libraries at all.
  4. USER nobody runs the process as a non-root user. If the container is compromised, the attacker has minimal privileges.
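One way to confirm the binary really is static is to inspect it after compiling. A minimal sketch, assuming you run it locally or in an image that has the file and ldd utilities (Alpine does not ship file by default):

```shell
# Compile and inspect; paths are illustrative.
CGO_ENABLED=0 go build -o server .
file server        # should report "statically linked"
ldd server         # a static binary is reported as not a dynamic executable
```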

The final image contains: Alpine base (~7 MB) + the binary. Nothing else.
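You can verify this layer by layer with docker history; the image name here is a placeholder:

```shell
# List each layer of the final image with its size. Expect only the
# Alpine base layers plus one small COPY layer for the binary.
docker history --format "{{.Size}}\t{{.CreatedBy}}" myapp:latest
```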


The same pattern for Node.js

# Stage 1: install and build
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/package.json /app/package-lock.json ./
RUN npm ci --omit=dev
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]

npm ci --omit=dev in the runtime stage installs only production dependencies. Dev dependencies — test frameworks, type checkers, bundlers — are left behind.
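A quick way to check that dev dependencies really stayed behind is to probe the final image for one of them. In this sketch, "typescript" stands in for any dev dependency and myapp:latest is a placeholder tag:

```shell
# ls fails if the package directory is absent, which is what we want.
docker run --rm myapp:latest sh -c "ls node_modules/typescript" \
  && echo "dev dependency leaked into the image" \
  || echo "dev dependency absent"
```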


Use a named stage as a cache source

When a stage is expensive to build (running tests, compiling a large project), name it and use --target to build only up to that stage in CI:

FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM deps AS test
COPY . .
RUN npm test

FROM deps AS build
COPY . .
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=build /app/dist ./dist
# ...

In CI, run tests first:

docker build --target test -t myapp:test .

If tests fail, the build stage never runs. If tests pass, build the final image:

docker build -t myapp:latest .

The deps stage result is cached and reused by both test and build, so dependencies are only installed once.
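In a CI script, the two builds chain naturally. A minimal sketch with placeholder image names:

```shell
#!/bin/sh
set -e   # stop at the first failing command

# 1. Build up to the test stage; npm ci runs once and its layer is cached.
docker build --target test -t myapp:test .

# 2. Build the final image; the deps layers are reused from cache, and with
#    BuildKit the test stage is skipped because the runtime stage does not
#    depend on it.
docker build -t myapp:latest .
```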


FROM scratch: the smallest possible base

For a statically compiled binary with no system dependencies at all:

FROM golang:1.22-alpine AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM scratch
COPY --from=build /app/server /server
EXPOSE 8080
CMD ["/server"]

FROM scratch is a completely empty image — no shell, no package manager, no system libraries. The only thing in the container is your binary. The image is as small as the binary itself.

The tradeoff: no sh means you cannot docker exec into the container for debugging. FROM gcr.io/distroless/static-debian12 keeps a near-identical footprint while adding CA certificates, timezone data, and a non-root user; its :debug tag additionally includes a busybox shell for troubleshooting.
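Switching to distroless is a one-line change in the runtime stage. A sketch; the :nonroot tag is a published distroless variant that runs as an unprivileged user:

```dockerfile
# Same build stage as above; only the runtime stage changes.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app/server /server
EXPOSE 8080
CMD ["/server"]
```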


Check what ended up in the image

After any build of an image that still contains a shell:

docker run --rm your-image sh -c "du -sh /* 2>/dev/null | sort -h"

If you see node_modules/ at 400 MB in a production image, dev dependencies leaked through. If /app/src is present, the source copy step is in the wrong stage. If /usr/local/go is there, the build stage and the runtime stage got merged somehow.

The output of this command tells you exactly what landed in the image and where the next size reduction is.
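For images without a shell (scratch, distroless), the du command above cannot run inside the container. Instead, export the filesystem and inspect it from the host; container and image names here are placeholders:

```shell
# Create (but do not start) a container, stream its filesystem out as a tar,
# and list the 20 largest entries. Works even when the image has no shell.
docker create --name size-check myapp:latest
docker export size-check | tar -tvf - | sort -k3 -n | tail -20
docker rm size-check
```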