
Back Up and Restore a PostgreSQL Database

The tools, commands, and approaches for taking reliable backups and recovering from them — from a single database to a full server restore.

PostgreSQL ships with three backup tools: pg_dump for logical backups of a single database (SQL or custom format), pg_dumpall for logical backups of every database plus global objects such as roles, and pg_basebackup for physical backups (a copy of the data directory). For most use cases, such as backing up a single database, automating nightly backups, or migrating between servers, pg_dump is the right tool.


pg_dump: back up a single database

pg_dump -U postgres -d mydb -F c -f mydb.dump

Flags:

  • -U postgres — connect as the postgres user
  • -d mydb — the database to back up
  • -F c — custom format (compressed, supports parallel restore)
  • -f mydb.dump — output file

The custom format (-F c) is the best default. It compresses the output, supports selective table restores, and works with pg_restore's parallel mode. Plain SQL (-F p) is human-readable and works with psql, but is larger and cannot be restored in parallel.
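The selective-restore capability is worth seeing in practice. A sketch, assuming a dump file named mydb.dump; the table name `orders` is illustrative, not from the original:

```shell
# List the contents of a custom-format dump without restoring anything
pg_restore --list mydb.dump

# Restore a single table (here a hypothetical "orders" table) into mydb
pg_restore -U postgres -d mydb -t orders mydb.dump
```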


Restore a custom-format dump

pg_restore -U postgres -d mydb -F c mydb.dump

If the database does not exist yet, create it first:

createdb -U postgres mydb
pg_restore -U postgres -d mydb -F c mydb.dump

Restore with parallel workers (faster for large databases):

pg_restore -U postgres -d mydb -j 4 -F c mydb.dump

-j 4 uses four parallel restore jobs. A good starting point is the number of CPU cores on the machine.
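To refresh an existing database from a dump without dropping and recreating it by hand, pg_restore can drop objects before recreating them. A sketch; run this only against a database you are prepared to overwrite:

```shell
# --clean drops existing objects before restoring them;
# --if-exists suppresses errors for objects that do not exist yet
pg_restore -U postgres -d mydb --clean --if-exists -j 4 mydb.dump
```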


pg_dumpall: back up all databases and roles

pg_dumpall -U postgres -f full_backup.sql

pg_dumpall outputs plain SQL and includes global objects — roles, tablespaces — that pg_dump skips. Restore with:

psql -U postgres -f full_backup.sql
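A common migration pattern combines both tools: dump only the global objects with pg_dumpall, and take per-database dumps with pg_dump. A sketch using pg_dumpall's --globals-only flag:

```shell
# Roles and tablespaces only, no table data
pg_dumpall -U postgres --globals-only -f globals.sql

# On the new server: restore globals first, then each database dump
psql -U postgres -f globals.sql
```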

Dump to stdout for piping

Pipe directly to a compressed file without writing a temporary uncompressed dump. Note that the custom format is already compressed, so pipe plain SQL through gzip instead:

pg_dump -U postgres -d mydb | gzip > mydb_$(date +%Y%m%d).sql.gz

Or pipe directly to a remote machine:

pg_dump -U postgres -d mydb -F c | ssh user@backup-server "cat > /backups/mydb_$(date +%Y%m%d).dump"

Automate with a cron job

A nightly backup that keeps 7 days of history:

# /etc/cron.d/pg-backup
0 2 * * *  postgres  pg_dump -d mydb -F c -f /backups/mydb_$(date +\%Y\%m\%d).dump && find /backups -name "mydb_*.dump" -mtime +7 -delete

The find ... -mtime +7 -delete removes dumps older than 7 days. Adjust the retention period based on your recovery requirements.
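The pruning behavior can be checked safely before trusting it in cron. A minimal sketch in an empty temp directory; no PostgreSQL needed, the dump files here are empty placeholders:

```shell
backup_dir=$(mktemp -d)

# Simulate one fresh dump and one 10-day-old dump
touch "$backup_dir/mydb_new.dump"
touch -d "10 days ago" "$backup_dir/mydb_old.dump"

# Same pruning command as the cron job: delete dumps older than 7 days
find "$backup_dir" -name "mydb_*.dump" -mtime +7 -delete

ls "$backup_dir"   # only mydb_new.dump remains
```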

Use SysEmperor's Cron Visual Editor to build and validate the schedule expression before adding it to crontab.


Verify a backup

A backup you have never tested is not a backup. Restore into a separate database periodically to confirm the dump is valid and complete:

createdb -U postgres mydb_test
pg_restore -U postgres -d mydb_test mydb.dump
psql -U postgres -d mydb_test -c "SELECT COUNT(*) FROM users;"
dropdb -U postgres mydb_test

A dump that restores without errors and has the expected row counts is a valid backup. Automate this check as part of your backup pipeline.
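The check above can be scripted. A sketch, assuming a local server, a dump at /backups/latest.dump, and a "users" table; adjust the paths and table names for your schema:

```shell
#!/usr/bin/env bash
set -euo pipefail

createdb -U postgres mydb_test
pg_restore -U postgres -d mydb_test /backups/latest.dump

# -At prints an unaligned, tuples-only result; fail if the table is empty
count=$(psql -U postgres -d mydb_test -At -c "SELECT COUNT(*) FROM users;")
[ "$count" -gt 0 ] || { echo "restore check failed: users is empty" >&2; exit 1; }

dropdb -U postgres mydb_test
echo "restore check passed ($count rows in users)"
```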


pg_basebackup: physical backup of the entire cluster

For point-in-time recovery and high availability setups, pg_basebackup takes a binary snapshot of the entire data directory:

pg_basebackup -U replication -h localhost -D /var/lib/postgresql/backup -P -Ft -z

This requires a replication role and a configured pg_hba.conf entry. It captures the entire cluster — all databases, roles, and configuration — in a format suitable for base restores and WAL replay. For single-database backups or migrations, pg_dump is simpler.
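The prerequisites can be set up roughly as follows. A sketch; the role name, password, and pg_hba.conf details are assumptions to adapt to your environment:

```shell
# Create a role with REPLICATION privilege (run as a superuser)
psql -U postgres -c "CREATE ROLE replication WITH REPLICATION LOGIN PASSWORD 'changeme';"

# Then allow replication connections in pg_hba.conf (path varies by install),
# e.g. by appending a line like:
#   host  replication  replication  127.0.0.1/32  scram-sha-256
# and reloading the config: pg_ctl reload  (or: SELECT pg_reload_conf();)
```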


Backup checklist

  • Dumps are landing in the expected location
  • The backup process runs without errors (check cron logs)
  • A recent dump restores successfully into a test database
  • Old backups are being pruned (disk does not fill up)
  • Backups are stored somewhere other than the database server (a separate machine, or object storage)

The last point matters most. Backups stored on the same machine as the database are destroyed in the same incident that takes down the database.