How to Benchmark PostgreSQL Query Performance Using pgbench

PostgreSQL is a robust and reliable database used widely in production environments. But as your database workload grows, it’s essential to understand how well your PostgreSQL server handles different types of queries and loads. Benchmarking your database helps you measure performance, identify bottlenecks, and optimize configurations.

In this blog, you will learn how to use pgbench, PostgreSQL’s built-in benchmarking tool, to simulate realistic workloads, run performance tests, and interpret the results. We'll walk through setup, running tests, customizing scenarios, and analyzing outputs — all with practical examples.

What is pgbench and Why Use It?

pgbench is a simple yet powerful benchmarking tool that comes packaged with PostgreSQL. It simulates multiple clients executing transactions on a test database, allowing you to:

* Measure transaction throughput (transactions per second)

* Analyze latency and response times

* Simulate concurrent load with multiple clients/threads

* Customize transaction scripts for complex benchmarking scenarios

pgbench is ideal for:

* Load testing new PostgreSQL setups

* Comparing performance before/after configuration changes

* Stress testing hardware and tuning parameters

* Testing how well your server handles concurrent connections

Setting Up pgbench for Benchmarking

Step 1: Install pgbench

pgbench comes bundled with PostgreSQL, so if you already have PostgreSQL installed, you likely have it as well.

To check:

pgbench --version

If it’s not installed, install the PostgreSQL client tools (Ubuntu example; if pgbench is still missing afterwards, it may be packaged separately as postgresql-contrib on your distribution):

sudo apt update
sudo apt install postgresql-client

Step 2: Initialize the pgbench Test Database

pgbench requires a dedicated test database with sample tables.

Create a test database:

createdb pgbench_test

Initialize the pgbench schema and data inside it:

pgbench -i -s 10 pgbench_test

*  -i initializes the database with pgbench's standard tables (pgbench_accounts, pgbench_branches, pgbench_tellers, pgbench_history)

*  -s 10 is the scale factor (controls the size of the dataset)

Each unit of scale adds 100,000 rows to pgbench_accounts (plus 10 tellers and 1 branch), so a scale factor of 10 generates roughly 1 million account rows.

You should see output like:

dropping old tables...
creating tables...
generating data (client-side)...
vacuuming...
creating primary keys...
done.
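
To confirm the data was generated at the expected scale, count the rows in the main table (this assumes the pgbench_test database created above; at scale 10 the count should be roughly 1,000,000):

psql -d pgbench_test -c "SELECT count(*) FROM pgbench_accounts;"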

Running Basic Benchmark Tests with pgbench

Step 3: Run a Simple Benchmark

Run the default benchmark (the built-in TPC-B-like transaction) with 10 client connections for 60 seconds:

pgbench -c 10 -T 60 pgbench_test

*  -c 10 = number of concurrent clients (connections)

*  -T 60 = duration of test in seconds

Sample output:

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
duration: 60 s
number of transactions actually processed: 12345
tps = 205.75 (including connections establishing)
tps = 210.12 (excluding connections establishing)

*  tps = transactions per second — higher is better.

Step 4: Increasing Load and Clients

Try increasing the number of clients and threads to simulate more realistic concurrent loads.

Example: 50 clients with 4 threads for 2 minutes:

pgbench -c 50 -j 4 -T 120 pgbench_test

*  -j 4 = number of worker threads

This spreads 50 simultaneous client connections across 4 pgbench worker threads, so the benchmark driver itself is less likely to become the bottleneck.
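
To see how throughput scales with concurrency, a small shell loop can repeat the test at several client counts and keep each report for comparison (a minimal sketch; the log file names are arbitrary):

for clients in 10 25 50 100; do
  pgbench -c "$clients" -j 4 -T 60 pgbench_test > "pgbench_c${clients}.log"
done
grep tps pgbench_c*.log

The point at which adding clients stops increasing tps is usually where the server, rather than pgbench, becomes the limiting factor.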

Customizing pgbench Tests with SQL Scripts

The default test simulates a simple TPC-B style transaction. You can write your own SQL scripts to benchmark custom queries.

Step 5: Create a Custom Transaction Script

Create a file named custom.sql with:

\set aid random(1, 100000 * :scale)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 10 WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
COMMIT;

* The \set line assigns :aid a random account ID for each transaction; :scale is set automatically by pgbench to the scale factor used at initialization. Unlike the built-in scripts, custom scripts must set their own variables.

Run pgbench with the custom script:

pgbench -c 10 -T 60 -M prepared -f custom.sql pgbench_test

* -c = number of client connections

* -T = duration of the test in seconds

* -M prepared = use the prepared query protocol, so each statement is parsed once per session rather than re-parsed on every execution, which reduces per-transaction overhead

This simulates clients running the update + select transaction you defined.
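
You can also mix several scripts in one run: pgbench accepts a weight after each -f file using the @ syntax. As a sketch, a hypothetical read_only.sql containing just a random single-row lookup:

\set aid random(1, 100000 * :scale)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

could be combined with custom.sql in roughly a 9:1 read/write mix:

pgbench -c 10 -T 60 -M prepared -f custom.sql@1 -f read_only.sql@9 pgbench_test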

Step 6: Interpreting Results

Key metrics to watch:

*  TPS (transactions per second): How many transactions the server can process per second.

*  Latency: Average time per transaction (add the -r flag for per-statement latencies).

*  Errors: Check for any failed transactions reported.

* Scaling: Observe how TPS changes as you increase clients (-c) or threads (-j).
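
A common workflow is a before/after comparison around a configuration change: capture a baseline, apply the change, rerun the same command, and compare the tps and latency lines. A minimal sketch (the file names and run length are arbitrary):

pgbench -c 20 -j 4 -T 120 pgbench_test | tee baseline.txt
# ...apply the configuration change and restart PostgreSQL...
pgbench -c 20 -j 4 -T 120 pgbench_test | tee tuned.txt
grep -E "tps|latency" baseline.txt tuned.txt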

Step 7: Advanced pgbench Usage Tips

To go beyond basic benchmarking, pgbench provides advanced options to help you analyze latency, simulate different workloads, and monitor progress during long test runs.

Show Per-Statement Latencies with -r

The -r flag reports the average latency of each SQL statement executed during the test, which helps you pinpoint the statements that consume the most time.

Example:

pgbench -c 20 -T 60 -r pgbench_test

*  -c 20: Simulates 20 concurrent clients

*  -T 60: Runs the benchmark for 60 seconds

*  -r: Enables per-statement latency reporting

Sample Output (Based on Real Results)

latency average = 6.007 ms
tps = 3329.52 (transactions per second)
statement latencies in milliseconds:
  0.188  BEGIN;
  0.321  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
  0.277  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  0.960  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
  2.674  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
  0.254  INSERT INTO pgbench_history (...)
  1.282  END;

As seen here, the UPDATE on pgbench_branches took the longest on average (~2.67 ms). In the default workload this is mostly row-lock contention, since pgbench_branches holds only one row per scale unit and every transaction updates it; in a real application, a statement that dominates the latency report like this would be your first candidate for tuning or indexing.

Skip the Initial Vacuum with -n

By default, pgbench vacuums the pgbench tables before each run (the "starting vacuum" line in the output above). To skip this step, add -n (--no-vacuum):

pgbench -c 20 -T 60 -n pgbench_test

Note: Disabling vacuum may lead to unrealistic results if dead tuples accumulate.

Run Only SELECT Queries with -S

To benchmark read-only workloads, use the -S flag. This skips updates/inserts and runs only SELECT statements.

pgbench -c 20 -T 60 -S pgbench_test

This is useful for testing reporting queries or read replicas.

Reduce Update Contention with -N (Simple-Update Mode)

To benchmark with less write contention, use -N (--skip-some-updates). This runs the built-in "simple update" script, which keeps the pgbench_accounts update, the SELECT, and the history insert, but skips the updates to pgbench_tellers and pgbench_branches, the small tables responsible for most row-lock contention.

pgbench -c 20 -T 60 -N pgbench_test

Track Progress with --progress

To monitor execution during long tests, add --progress=5 to get progress reports every 5 seconds:

pgbench -c 20 -T 300 --progress=5 pgbench_test

This is especially helpful for long-term performance monitoring or CI testing.
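
These options combine naturally in a single longer run; the values below are only illustrative:

pgbench -c 50 -j 4 -T 300 --progress=10 -r pgbench_test

This runs 50 clients across 4 threads for five minutes, prints a progress line every 10 seconds, and finishes with per-statement latency figures.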

Conclusion

pgbench is a powerful and flexible benchmarking tool designed specifically for PostgreSQL. It enables you to simulate concurrent users and transactions, helping you evaluate how your database performs under different workloads. Whether you're testing basic performance, tuning server configurations, or simulating your actual application logic, pgbench provides valuable insights into transaction throughput, query latency, and system stability.

Start with the built-in TPC-B test for quick analysis, then advance to custom SQL scripts to mirror real-world usage scenarios. By integrating pgbench into your performance testing workflow, you can proactively detect bottlenecks, optimize query behavior, and ensure your PostgreSQL deployment scales efficiently and reliably. Experiencing slow queries or high disk usage? Discover how to understand & manage PostgreSQL table & index bloat effectively and implement a proactive maintenance plan to keep your database healthy.
