What’s the Difference Between .sql and .dump Files in PostgreSQL Backups

When you start working with PostgreSQL seriously, you will quickly meet two very common backup file types:

  • A plain .sql file
  • A binary or custom-format .dump file

They both come from tools like pg_dump, but they are very different in how they store data, how you restore them, and what you can inspect inside.

This blog walks step by step through:

  • What is inside a .sql file
  • What is inside a .dump file
  • Key differences between them
  • What metadata you can see from each
  • Which one is better in which scenario

The goal is that by the end, you can confidently decide which format to use for your PostgreSQL backups.

1. What is a .sql Backup File?

A .sql backup file from PostgreSQL is just a plain text file that contains SQL commands.

When you run a plain-text backup, PostgreSQL essentially generates all the SQL that would be needed to recreate your database:

  • CREATE TABLE, CREATE INDEX, constraints, sequences
  • ALTER TABLE statements
  • COPY statements (or INSERT statements, if you pass --inserts) containing all data rows
  • GRANT and REVOKE statements
  • Other object definitions like views, functions, triggers, etc.

Example: Creating a .sql backup

Plain-text backup with pg_dump:

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F p \
  -f backup_2025_11_27.sql

Explanation:

  • -F p selects “plain” format (text .sql). Plain is the default format, but using it explicitly makes the intent clearer.
  • -f specifies the output file name.
  • The resulting file is pure SQL.

What can you see inside a .sql file?

Because it is human-readable, you can open it and see, for example:

--
-- PostgreSQL database dump
--
\restrict Kzde5NWpXlN1LhvLycPoheB4Vons7GLfg19TKZuz477YnXMyPuplE284UoJVa0E
-- Dumped from database version 18.1 (Ubuntu 18.1-1.pgdg22.04+2)
-- Dumped by pg_dump version 18.1 (Ubuntu 18.1-1.pgdg22.04+2)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET transaction_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- Name: pg_trgm; Type: EXTENSION; Schema: -; Owner: -
--
CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;
--
-- Name: EXTENSION pg_trgm; Type: COMMENT; Schema: -; Owner: 
--
COMMENT ON EXTENSION pg_trgm IS 'text similarity measurement and index searching based on trigrams';
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: auth_totp_device; Type: TABLE; Schema: public; Owner: averigouser
--
CREATE TABLE public.auth_totp_device (
    id integer NOT NULL,
    name character varying NOT NULL,
    user_id integer NOT NULL,
    scope character varying,
    expiration_date timestamp without time zone,
    index character varying(8),
    key character varying,
    create_date timestamp without time zone DEFAULT (now() AT TIME ZONE 'utc'::text),
    CONSTRAINT auth_totp_device_index_check CHECK ((char_length((index)::text) = 8))
);

This makes .sql good for:

  • Auditing object definitions
  • Manually editing some parts (carefully)
  • Moving schema between environments
  • Debugging migration issues
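
For the auditing and schema-sharing cases above, a schema-only plain dump is a handy variation. This is only a sketch: the connection details and the output file name are placeholders, and --schema-only simply tells pg_dump to skip the data rows.

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  --schema-only \
  -F p \
  -f schema_only_2025_11_27.sql

The resulting file contains only object definitions, which keeps it small enough to commit to version control or attach to a code review.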

2. What is a .dump Backup File?

A .dump file usually refers to a non-plain backup created by pg_dump using formats like:

  • Custom format (-F c)
  • Directory format (-F d)
  • Tar format (-F t)

Custom Format (-F c)

Key Features

  • Produces a single compressed binary file (e.g., backup.dump).
  • Supports parallel restore using pg_restore -j.
  • Allows selective restore (specific tables, schemas, or functions).
  • Usually the smallest file size due to compression.
  • Most commonly used format.

When to Use

  • When you need a flexible, compressed backup.
  • When you want faster restoration using multiple CPU cores.

Directory Format (-F d)

Key Features

  • Creates a directory with multiple files instead of one file.
  • Supports parallel dump (pg_dump -j) and parallel restore.
  • This is the fastest format for both backup and restore.
  • Very good for very large databases (100GB+).

When to Use

  • When your database is huge and you want maximum speed.
  • When you want to run parallel jobs for backup and restore.
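
Parallel dump is what makes the directory format fast in practice. A minimal sketch, using the same placeholder connection details as the earlier examples and 4 worker jobs (pg_dump accepts -j only with the directory format):

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F d \
  -j 4 \
  -f backup_dir_2025_11_27

Pick the job count based on how many CPU cores and how much I/O bandwidth the server can spare while the dump runs.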

Tar Format (-F t)

Key Features

  • Produces a tar archive file.
  • Does not support parallel restore.
  • Does not support compression (unless you gzip it manually).
  • Allows selective restore.

When to Use

  • When you specifically need a tarball.
  • Rarely used today.
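
If you do need a compressed tarball, the usual workaround is to compress the archive after pg_dump finishes. A sketch with placeholder connection details; note that pg_restore expects the plain .tar back, so decompress before restoring:

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F t \
  -f backup_2025_11_27.tar
gzip backup_2025_11_27.tar        # produces backup_2025_11_27.tar.gz
gunzip backup_2025_11_27.tar.gz   # run this again before pg_restore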

Most people use .dump as a file extension for the custom format, but technically PostgreSQL does not force any extension. The important part is the format you choose in the pg_dump command.

A custom-format or tar-format dump is not human-readable. It is a binary structure or archive that pg_restore understands.

Example: Creating a .dump backup (custom format)

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F c \
  -f backup_2025_11_27.dump

Key points for custom format:

  • -F c selects custom format.
  • The resulting .dump file is compressed by default and structured.
  • You cannot simply cat or open it to see SQL. You need pg_restore to interpret it.

Example: Directory format dump

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F d \
  -f backup_directory_2025_11_27

This creates a directory containing multiple files that together represent the dump. pg_restore can read it.

Inside the backup directory you get files like this:

root@cybrosys:/tmp/backup_directory_2025_11_27# ls
4884.dat.gz  4895.dat.gz  4917.dat.gz  4932.dat.gz  4946.dat.gz  4957.dat.gz  4972.dat.gz  4986.dat.gz  5002.dat.gz  5016.dat.gz  5029.dat.gz  5045.dat.gz  5060.dat.gz  5074.dat.gz  5089.dat.gz  toc.dat

Purpose of toc.dat file

  • Stores metadata about the entire backup
  • Defines the order in which objects must be restored
  • Contains object IDs, dependencies, and types
  • pg_restore reads this file first
  • Without this file, the directory backup cannot be restored.

Purpose of .dat.gz files

  • Each numbered .dat.gz file holds the compressed data rows of one table (or the large objects, if any).
  • The definitions of indexes, sequences, functions, constraints, types, extensions, and comments are not stored in these files; they are described in toc.dat.

Each .dat.gz file corresponds to one data entry in the table of contents (toc.dat).
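
You can view that table of contents without restoring anything, because pg_restore -l also accepts a directory-format backup. Using the example directory created above:

pg_restore -l backup_directory_2025_11_27

This prints the same kind of listing you get for a custom-format .dump (shown in the next section), built entirely from toc.dat.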

3. What Is Inside .sql vs .dump?

Let us compare them in terms of content.

Inside a .sql file

  • Explicit SQL statements:
    • DDL: CREATE TABLE, CREATE INDEX, CREATE VIEW, CREATE FUNCTION, etc.
    • DML: data rows, written as COPY statements by default (or INSERT statements if the dump was taken with --inserts).
    • Privileges: GRANT and REVOKE.
    • Ownership: ALTER TABLE ... OWNER TO ....
  • Everything is linear: if you scroll from top to bottom, you see the order of restoration.
  • No internal compression; if you want a smaller file, you compress it externally (for example with gzip).
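
Because there is no built-in compression, piping the plain dump through gzip is a common pattern. A sketch with the same placeholder connection details; new_database is whatever database you restore into:

pg_dump -h your_host -p 5432 -U your_user -d your_database -F p \
  | gzip > backup_2025_11_27.sql.gz
gunzip -c backup_2025_11_27.sql.gz \
  | psql -h your_host -p 5432 -U your_user -d new_database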

Inside a .dump file

  • Structured representation of database objects:
    • Each object (table, index, function, etc.) has an internal ID.
    • Data and definitions are stored in sections.
  • Optional compression:
    • Custom format supports internal compression.
  • Not directly readable:
    • You need pg_restore to list, filter, and extract contents.

You can inspect a .dump file using pg_restore -l:

pg_restore -l backup_2025_11_27.dump

You get a result like this:

postgres@cybrosys:/tmp$ pg_restore -l /tmp/backup_2025_11_27.dump 
;
; Archive created at 2025-11-27 22:11:00 IST
;     dbname: testing_2
;     TOC Entries: 2315
;     Compression: gzip
;     Dump Version: 1.16-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 18.1 (Ubuntu 18.1-1.pgdg22.04+2)
;     Dumped by pg_dump version: 18.1 (Ubuntu 18.1-1.pgdg22.04+2)
;
;
; Selected TOC Entries:
;
5108; 0 0 ACL - SCHEMA public pg_database_owner
2; 3079 5506400 EXTENSION - pg_trgm 
5109; 0 0 COMMENT - EXTENSION pg_trgm 
411; 1259 5509359 TABLE public auth_totp_device averigouser
410; 1259 5509358 SEQUENCE public auth_totp_device_id_seq averigouser
5110; 0 0 SEQUENCE OWNED BY public auth_totp_device_id_seq averigouser
413; 1259 5509381 TABLE public auth_totp_wizard averigouser

This will print a list of items like:

  • TABLE DATA public.users
  • TABLE public.users
  • SEQUENCE public.users_id_seq
  • ACL public.users

This list is a kind of metadata for what is inside the .dump.
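
One practical use of that list is selective restore: save the list to a file, comment out the entries you do not want (lines starting with ;), and feed the edited list back to pg_restore. A sketch; the file name backup.list is arbitrary:

pg_restore -l backup_2025_11_27.dump > backup.list
# edit backup.list and put a ';' in front of entries you want to skip
pg_restore -h your_host -p 5432 -U your_user -d new_database \
  -L backup.list \
  backup_2025_11_27.dump

The -L (or --use-list) option restores only the entries that remain active in the list, in the order given.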

4. Metadata You Can See From .sql and .dump

From a .sql file

Since it is plain text, metadata is “implicit”:

  • Names of schemas, tables, indexes, sequences, constraints.
  • Data types of columns.
  • Default values and constraints.
  • Comments (if dumped with --comments).
  • Grants, roles, and ownership.
  • Function definitions and trigger definitions.

You can quickly search the file for:

  • CREATE TABLE to see schema.
  • CREATE INDEX to inspect indexes.
  • COMMENT ON to see descriptions.
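
A few grep commands are usually enough for this kind of inspection, using the example file from earlier:

grep -n "CREATE TABLE" backup_2025_11_27.sql
grep -n "CREATE INDEX" backup_2025_11_27.sql
grep -n "COMMENT ON" backup_2025_11_27.sql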

But there is no structured header you can query directly: details such as the server version and client encoding appear only as comments and SET statements near the top of the file.

From a .dump file

You cannot open it directly, but pg_restore gives structured metadata views.

For example, list contents:

pg_restore -l backup_2025_11_27.dump

You will see items like:

  • ; Archive created at 2025-11-27 20:30:40
  • ; dbname: your_database
  • ; TOC Entries: ...
  • ; Compression: ...

And then many lines representing individual objects.

You can also use flags to filter metadata, for example to restore only certain object types.
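
For example, --schema-only and --data-only limit the restore to definitions or data, and -t restores a single table. A sketch: your_table is a placeholder, and -t in pg_restore restores only the named table itself, not dependent objects such as its indexes:

pg_restore -h your_host -p 5432 -U your_user -d new_database \
  --schema-only \
  backup_2025_11_27.dump

pg_restore -h your_host -p 5432 -U your_user -d new_database \
  --data-only -t your_table \
  backup_2025_11_27.dump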

5. How to Restore .sql vs .dump

Restoring a .sql backup

Because it is plain text, you typically restore it using psql.

If your .sql includes CREATE DATABASE, you would first connect to postgres or another maintenance DB:

psql -h your_host -p 5432 -U your_user -d postgres -f backup_2025_11_27.sql

psql reads the SQL file and executes each statement in order.
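
When the dump does not create the database itself, a common hardening of that command is to connect straight to the target database, stop on the first error, and wrap the whole restore in a single transaction. A sketch, reusing the placeholder names from above:

psql -h your_host -p 5432 -U your_user -d new_database \
  -v ON_ERROR_STOP=1 \
  --single-transaction \
  -f backup_2025_11_27.sql

With --single-transaction, either every statement in the file succeeds or nothing is applied, which avoids ending up with a half-restored database.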

Restoring a .dump backup (custom or directory)

For non-plain formats you use pg_restore. There are two styles:

  1. Restore into an already created empty database (recommended for more control).
  2. Use pg_restore -C to create the database.

Example: Restore custom-format .dump into a new database.

First create the database:

createdb -h your_host -p 5432 -U your_user new_database

Then restore:

pg_restore -h your_host -p 5432 -U your_user \
  -d new_database \
  backup_2025_11_27.dump

Or, have pg_restore create the database automatically:

pg_restore -h your_host -p 5432 -U your_user \
  -C \
  -d postgres \
  backup_2025_11_27.dump

The -C option tells pg_restore to create the database before restoring into it.
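
If the target database already contains objects from an earlier restore, --clean together with --if-exists drops them before recreating. A sketch, again with placeholder connection details:

pg_restore -h your_host -p 5432 -U your_user \
  --clean --if-exists \
  -d new_database \
  backup_2025_11_27.dump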

6. Performance: Backup and Restore Speed

Now let us compare .sql and .dump in terms of speed and efficiency.

Backup speed

  • .sql (plain format):
    • Writes all data as plain text (COPY statements by default, or one INSERT per row with --inserts).
    • Generates more output bytes, especially for large tables.
    • Usually slower for very large databases because text generation and I/O cost are high.
  • .dump (custom or directory format):
    • Uses a structured binary format, with compression available.
    • Typically produces smaller files for large datasets.
    • Faster in many scenarios because less I/O is needed due to compression and efficient encoding.

For small databases, the difference may be small. For very large databases (tens or hundreds of GB), custom-format dumps are usually preferred for performance and storage reasons.
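
If storage is the main concern, you can also choose the compression level explicitly with -Z; higher levels produce smaller files at the cost of more CPU time during the dump. A sketch with the usual placeholder connection details:

pg_dump -h your_host -p 5432 -U your_user -d your_database \
  -F c \
  -Z 9 \
  -f backup_2025_11_27.dump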

Restore speed

  • Restoring from .sql:
    • psql reads and executes each SQL statement.
    • Dumps taken with --inserts are especially slow to restore, because every row is a separate INSERT statement.
    • Parallel restore is not available for plain .sql with psql.
  • Restoring from .dump:
    • pg_restore can perform parallel restore when using custom or directory format.
    • You can use -j to run multiple jobs in parallel.

Example:

pg_restore -h your_host -p 5432 -U your_user \
  -d new_database \
  -j 4 \
  backup_2025_11_27.dump

The -j 4 option runs up to 4 parallel jobs, speeding up restore significantly on a multi-core machine.

In practice:

  • For very large databases, custom-format dumps with parallel restore are often much faster than plain .sql restore.
  • Plain .sql is simpler but slower when data volume is high.

7. Which One Is Better?

There is no single “best” format. It depends on your use case.

Situations where .sql is better:

  • You want full transparency and readability.
  • You want to version-control schema in Git as SQL.
  • The database is small to medium and restore time is not critical.
  • You need a simple migration script that can be run on any PostgreSQL instance with psql.

Situations where .dump is better:

  • The database is large, and you care about backup size and restore speed.
  • You want to use parallel restore to speed up recovery.
  • You want flexible restore options (only one table, only schema, etc.).
  • You want built-in compression without separate tools.

When you compare .sql and .dump step by step, it becomes clear that they are designed for different priorities.

  • If you want human readability, simple restore steps, and smaller databases, a .sql backup is perfectly fine, especially for development, testing, and schema sharing.
  • If you are working with production-sized databases, need faster restores, parallelism, compression, and selective restores, custom or directory .dump formats are usually the better choice.

In many real-world PostgreSQL setups, teams use both:

  • .dump for daily automated backups and disaster recovery.
  • .sql for schema migrations, code reviews, or quick exports when debugging.