
Write a Dockerfile That Builds in Under 10 Seconds

Cache layers in the right order, keep context small, and stop rebuilding the world every time you change a source file.

The first Dockerfile most people write looks something like this:

FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/server.js"]

It works. It also rebuilds from scratch every time you touch a source file: COPY . . changes whenever any file in the repo changes, which invalidates that layer and every layer after it, including npm install. On a medium project that is a 90-second rebuild to change one line.

Here is the same image, rewritten for caching.


The fix: copy dependency manifests first

FROM node:20-alpine AS build
WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]

What changed:

  1. package.json and the lockfile are copied first. Docker caches each instruction. As long as these two files are unchanged, the npm ci layer is reused — no dependency reinstall when you change source code.
  2. npm ci instead of npm install. ci is deterministic, respects the lockfile strictly, and fails fast if the lockfile is out of sync. In a Docker build you never want install "helpfully" updating the lockfile.
  3. Multi-stage build. The final image does not need npm, your .git folder, your test fixtures, or any of the other build-time cruft. Stage 1 builds, stage 2 ships.
  4. Alpine base. node:20 is ~350 MB. node:20-alpine is ~50 MB. Same Node, smaller runtime (one caveat: Alpine ships musl instead of glibc, so watch for native modules that assume glibc).

A typical rebuild on the above — with only a source file changed — runs in 3 to 8 seconds, because the dependency layer is reused from cache.


Shrink the build context

The "build context" is the tarball Docker sends to the daemon before it even starts building. If your repo has a 2 GB node_modules/ or a .git/ folder full of history, that entire thing is uploaded every time. On remote builders it is painfully slow.

Create a .dockerignore next to the Dockerfile:

node_modules
.git
dist
coverage
*.log
.env
.env.*
.DS_Store
.vscode
.idea
Dockerfile
docker-compose.yml
README.md

Two reasons this matters beyond speed:

  • It keeps .env files out of the image, so secrets are never baked into layers that anyone who can pull the image can read
  • It prevents your host's node_modules/ from overwriting the one installed inside the container (a common source of "works on my machine but not in Docker" bugs tied to native modules)
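You can approximate what the excludes save without running a build, since the context is just a tarball. This is a made-up sketch using tar-side excludes; the paths and sizes are illustrative stand-ins for a real repo:

```shell
# Sketch: approximate the build-context tarball locally.
# /tmp/ctx-demo and its contents are illustrative, not real project data.
mkdir -p /tmp/ctx-demo/node_modules /tmp/ctx-demo/src
head -c 1048576 /dev/zero > /tmp/ctx-demo/node_modules/blob.bin  # fake 1 MB dependency
printf 'console.log("hi")\n' > /tmp/ctx-demo/src/index.js

# Full context, as Docker would see it with no .dockerignore:
tar -cf - -C /tmp/ctx-demo . | wc -c

# Context with .dockerignore-style excludes applied:
tar -cf - --exclude='node_modules' --exclude='.git' -C /tmp/ctx-demo . | wc -c
```

The second number should drop by roughly the size of everything excluded; BuildKit reports the real figure as "transferring context" in its build output.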

Use BuildKit cache mounts for package managers

BuildKit is the modern Docker builder, enabled by default on recent versions. It supports --mount=type=cache, which gives a directory that persists across builds without being baked into the image:

# syntax=docker/dockerfile:1.7
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build

The first line (# syntax=...) pins the Dockerfile frontend version; on older Docker releases it is what opts the build into these features, and it is cheap insurance everywhere. Now npm's global cache is reused across builds, so even when the lockfile does change, the download step is near-instant for unchanged packages.

The same pattern works for pnpm (/root/.local/share/pnpm/store), pip (/root/.cache/pip), Go modules (/root/go/pkg/mod), and Cargo (/root/.cargo/registry).
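As a sketch, the pip variant of the same trick might look like the following; the image tag, requirements file, and entrypoint are assumptions to adjust for your project:

```dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
# pip's download/wheel cache persists across builds but is never baked into the image
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```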


Pin base images by digest for reproducible builds

FROM node:20-alpine changes over time — a new patch release ships and your builds silently pick it up. Usually fine. Occasionally a nightmare.

For anything production-facing, pin by digest:

FROM node:20-alpine@sha256:bd2c44afc3b6fdcf7d2d6bce84b5aa934a94bcf3ba54dbc60c5e3b5c26c09b0f AS build

Get the digest from docker pull node:20-alpine, then copy the sha256:... value it prints. Now every build produces the same result until you explicitly update the pin.
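A mistyped or truncated digest only fails once you build, so a quick shape check before committing the pin is cheap. This sketch just validates the sha256:&lt;64 hex chars&gt; format; the digest string reuses the example above and is not one to copy into your own Dockerfile:

```shell
# Sanity-check that a copied digest has the expected sha256:<64 hex chars> shape.
digest="sha256:bd2c44afc3b6fdcf7d2d6bce84b5aa934a94bcf3ba54dbc60c5e3b5c26c09b0f"
if printf '%s\n' "$digest" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  echo "digest looks well-formed"
else
  echo "malformed digest: $digest" >&2
  exit 1
fi
```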


One-liner: audit what your image actually contains

After a build, check what ended up inside:

docker run --rm -it your-image sh -c "du -sh /* 2>/dev/null | sort -h"

If node_modules/ is 600 MB, it is time to look at devDependencies leaking in. If /app/.git is there, your .dockerignore is not doing its job. If /var/cache/apk/ is 80 MB, add --no-cache to your apk add calls.
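The du | sort -h pipeline itself is easy to try outside a container. This sketch fakes a small directory tree (all paths made up) so the heaviest entry sorts to the bottom, the same way a bloated node_modules/ would in the real audit:

```shell
# Sketch: same pipeline as the audit one-liner, against a scratch directory.
mkdir -p /tmp/du-demo/dist /tmp/du-demo/node_modules
head -c 2097152 /dev/zero > /tmp/du-demo/node_modules/blob.bin  # fake 2 MB of deps
head -c 4096 /dev/zero > /tmp/du-demo/dist/server.js            # fake 4 KB bundle
du -sh /tmp/du-demo/* 2>/dev/null | sort -h
```

sort -h understands human-readable suffixes (K, M, G), so the biggest offender is always the last line.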


The shape that works

After a few iterations, a well-tuned Dockerfile converges on something like:

# syntax=docker/dockerfile:1.7

# ── Build stage ────────────────────────────
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build

# ── Runtime stage ──────────────────────────
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/package.json ./
COPY --from=build /app/node_modules ./node_modules
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]

Small. Cached. Runs as a non-root user. Does not ship a compiler or a copy of git.

A change to a single source file re-runs only the final steps: the source COPY and npm run build in the build stage, plus the copies into the runtime stage. The dependency install layer is reused straight from cache. That is the whole point.