
Analyse Disk Usage on Linux with df, du, and ncdu

Find out what is consuming your disk space — from a quick filesystem overview down to the exact directory or file responsible.

A server hits 100% disk usage and writes start failing. The question is always the same: what is using all the space? Here is how to find out, from broad to specific.


df — filesystem overview

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        40G   37G  1.2G  97% /
tmpfs           2.0G  1.4M  2.0G   1% /run
/dev/sdb1       200G   89G  100G  47% /data

-h gives human-readable sizes. The Use% column is what matters when diagnosing a full disk.

A few things worth knowing:

  • tmpfs entries are RAM-backed — they do not touch physical disk. A full tmpfs means something is writing too many files into memory-backed storage.
  • / near 100% is the common emergency. Logs, caches, temp files, and application data all land here unless you have separate mounts.
  • df tells you which filesystem is full. It cannot tell you which directory inside that filesystem is responsible — that is du's job.
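The Use% check is easy to script for monitoring. A minimal sketch (the 90% threshold and the message format are choices for illustration, not a standard tool): `df -P` forces POSIX single-line output, so awk always sees the same columns.

```shell
# Flag any filesystem above 90% usage. With df -P, $5 is Use% and
# $6 is the mount point; $5+0 coerces "97%" to the number 97.
df -P | awk 'NR > 1 && $5+0 > 90 { print $6 " is at " $5 }'
```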

du — how much space a directory uses

du -sh /var
4.3G    /var

-s summarises (one line instead of recursing through every subdirectory), -h makes it human-readable. To find what inside /var is responsible:

du -sh /var/* | sort -h
12K     /var/backups
42M     /var/cache
180M    /var/lib
3.9G    /var/log

sort -h sorts by human-readable size, largest last. /var/log is the culprit. Go one level deeper:

du -sh /var/log/* | sort -h
4.0K    /var/log/auth.log
8.0K    /var/log/syslog
3.8G    /var/log/app.log

This is the full pattern: df to identify the full filesystem, then repeated du -sh ./* | sort -h, descending one level at a time until the specific file or folder appears.
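The descent can be wrapped in a small helper — a hypothetical convenience function, not a standard command — that shows the largest entries at each level:

```shell
# biggest: print the five largest entries directly under a directory,
# largest first. Defaults to the current directory.
biggest() {
  du -sh "${1:-.}"/* 2>/dev/null | sort -rh | head -n 5
}

biggest /var        # then biggest /var/log, and so on
```

sort -rh reverses the human-readable sort so the biggest entry comes first; rerun the function on whichever directory tops the list.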


Find large files directly

If a single oversized file is the cause, target it with find:

find / -type f -size +500M -exec ls -lh {} \; 2>/dev/null

Adjust +500M to taste. 2>/dev/null suppresses permission errors on directories you cannot read. Adding -mtime -7 narrows the search to files modified in the last seven days — useful when disk usage spiked recently and you want to know what changed.
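To get a ranked list instead of an unordered dump, print sizes and sort. Note this relies on GNU find's -printf, so it works on Linux but not with stock macOS/BSD find:

```shell
# Ten largest files under /var, biggest first. %s is the size in bytes,
# %p the path; sort -rn orders numerically, descending.
find /var -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -n 10
```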


ncdu — interactive disk usage browser

du output becomes hard to navigate once the tree gets deep. ncdu gives you an interactive browser:

# Install
sudo apt install ncdu      # Debian / Ubuntu
sudo dnf install ncdu      # RHEL / Fedora
brew install ncdu          # macOS

# Run from a specific directory
ncdu /var

ncdu scans the tree and shows a sorted, interactive list. Arrow keys navigate, Enter descends into a directory, d deletes with a confirmation prompt, and q quits. The biggest directories and files are always at the top, so you rarely have to scroll.

For remote servers where scanning from root is slow, scope it to the directory you already suspect:

ncdu /var/log

Common culprits

Application logs. A service that logs verbosely and never rotates will fill a disk in days. Check /var/log/ first. logrotate handles automatic rotation — confirm it is configured for your application's log files in /etc/logrotate.d/.
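As an illustration, a minimal logrotate entry might look like this — the path and the numbers are placeholders to adapt, not defaults:

```
/var/log/app.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```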

systemd journal. systemd-journald stores logs in /var/log/journal/ when persistent storage is configured. Trim them with journalctl:

sudo journalctl --vacuum-size=500M
sudo journalctl --vacuum-time=30d

Docker build cache. Images, stopped containers, and dangling layers accumulate silently under /var/lib/docker. Clean up:

docker system prune        # removes stopped containers, dangling images, unused networks
docker system prune -a     # also removes all images not used by any container

Package cache. apt keeps downloaded packages on disk. Free them:

sudo apt clean

Core dumps. A crashing process can write gigabytes to /var/crash or wherever kernel.core_pattern points. Find them:

find / -type f \( -name "core" -o -name "core.*" \) -exec ls -lh {} + 2>/dev/null
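If a crash-looping process keeps refilling the disk, core dumps can be disabled for the current shell as a stopgap (persistent limits belong in /etc/security/limits.conf or your service manager's configuration):

```shell
ulimit -c 0     # processes started from this shell write no core files
ulimit -c       # prints the limit now in effect
```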