How to Use the pg_statviz Extension in PostgreSQL 18

Modern PostgreSQL environments generate a large amount of runtime statistics through views like pg_stat_database, pg_stat_bgwriter, pg_stat_io, and others. However, these views only show the current state of the server. Database administrators often need historical visibility to understand trends, performance changes, and resource behavior over time.

The pg_statviz extension solves this problem by capturing periodic snapshots of PostgreSQL statistics and storing them inside dedicated tables. This allows DBAs to perform time-series analysis directly inside the database without external monitoring tools.

Installing pg_statviz on PostgreSQL 18

If PostgreSQL is already installed locally, the extension can be compiled and installed from source.

Clone the repository:

git clone https://github.com/vyruss/pg_statviz.git

Install the extension using PGXS:

cd pg_statviz
sudo make USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/18/bin/pg_config install

Switch to the postgres user and connect to the database (this example instance listens on port 5433):

sudo su postgres
psql -p 5433

Create the extension:

create extension pg_statviz;

Verify installation:

select * from pg_available_extensions where name = 'pg_statviz';

If installed correctly, PostgreSQL shows the extension version and description:

    name    | default_version | installed_version |                   comment                    
------------+-----------------+-------------------+----------------------------------------------
 pg_statviz | 0.9             | 0.9               | stats visualization and time series analysis
(1 row)

Understanding How pg_statviz Works

Unlike standard statistics views, pg_statviz does not automatically collect data. You must explicitly create snapshots using:

select pgstatviz.snapshot();

A snapshot is simply a record of the database’s current statistics at a specific moment in time.

Each snapshot records server metrics into multiple tables inside the pgstatviz schema. These tables represent different subsystems of PostgreSQL, such as WAL activity, connections, IO behavior, locks, replication, and configuration.

Without snapshots, all tables remain empty because no historical data exists.
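
Snapshots are usually scheduled rather than taken by hand. One possible sketch, assuming the pg_cron extension is installed (the job name and 15-minute interval below are illustrative choices, not part of pg_statviz itself):

```sql
-- Illustrative only: schedule a pg_statviz snapshot every 15 minutes via pg_cron.
SELECT cron.schedule('pg_statviz_snapshot', '*/15 * * * *',
                     'SELECT pgstatviz.snapshot()');
```

Any external scheduler, such as cron invoking psql, works equally well.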

Functions Provided by pg_statviz

Running \dx+ pg_statviz lists the objects provided by the extension, both the snapshot functions and the tables that store their results:

           Objects in extension "pg_statviz"
                     Object description                     
------------------------------------------------------------
 function pgstatviz.delete_snapshots()
 function pgstatviz.snapshot()
 function pgstatviz.snapshot_buf(timestamp with time zone)
 function pgstatviz.snapshot_conf(timestamp with time zone)
 function pgstatviz.snapshot_conn(timestamp with time zone)
 function pgstatviz.snapshot_db(timestamp with time zone)
 function pgstatviz.snapshot_io(timestamp with time zone)
 function pgstatviz.snapshot_lock(timestamp with time zone)
 function pgstatviz.snapshot_repl(timestamp with time zone)
 function pgstatviz.snapshot_slru(timestamp with time zone)
 function pgstatviz.snapshot_wait(timestamp with time zone)
 function pgstatviz.snapshot_wal(timestamp with time zone)
 table pgstatviz.buf
 table pgstatviz.conf
 table pgstatviz.conn
 table pgstatviz.db
 table pgstatviz.io
 table pgstatviz.lock
 table pgstatviz.repl
 table pgstatviz.slru
 table pgstatviz.snapshots
 table pgstatviz.wait
 table pgstatviz.wal

pgstatviz.snapshot()

This is the main function. It gathers statistics from multiple PostgreSQL system views and stores them in the extension tables using a single timestamp.

select * from pgstatviz.snapshot();

Example result:

NOTICE:  created pg_statviz snapshot
             snapshot             
----------------------------------
 2026-02-12 19:56:39.861724+05:30
(1 row)

Each call creates a new time point that allows comparison with previous snapshots.

pgstatviz.delete_snapshots()

select * from pgstatviz.delete_snapshots();

Result:

NOTICE:  truncating table "snapshots"
NOTICE:  truncate cascades to table "buf"
NOTICE:  truncate cascades to table "conf"
NOTICE:  truncate cascades to table "conn"
NOTICE:  truncate cascades to table "lock"
NOTICE:  truncate cascades to table "repl"
NOTICE:  truncate cascades to table "slru"
NOTICE:  truncate cascades to table "wait"
NOTICE:  truncate cascades to table "wal"
NOTICE:  truncate cascades to table "db"
NOTICE:  truncate cascades to table "io"
 delete_snapshots 
------------------
 
(1 row)

This function removes all collected history by truncating every pg_statviz table. It is useful when resetting monitoring data or cleaning storage.

Tables and Their Monitoring Purpose

Each table inside the extension represents a different subsystem of PostgreSQL. Below is a detailed explanation of what information you gain from each one.

pgstatviz.buf - Buffer and Checkpoint Statistics

select * from pgstatviz.buf;

Result:

-[ RECORD 1 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:56:39.861724+05:30
checkpoints_timed     | 779
checkpoints_req       | 16
checkpoint_write_time | 120231
checkpoint_sync_time  | 981
buffers_checkpoint    | 18512
buffers_clean         | 800
maxwritten_clean      | 8
buffers_backend       | 2
buffers_backend_fsync | 0
buffers_alloc         | 193485
stats_reset           | 2026-02-05 11:13:09.808183+05:30

This table stores metrics related to shared buffers and checkpoint activity.

Example information obtained:

  • Number of timed and requested checkpoints
  • Buffer writes triggered by checkpoints
  • Backend writes

What DBAs can learn:

  • If checkpoints are too frequent or too slow
  • Whether buffers are being written by backends instead of background processes
  • Memory pressure and write patterns

Useful for:

  • Diagnosing slow COMMIT operations
  • Tuning shared_buffers and checkpoint settings
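
As a sketch of the kind of analysis this enables, using only the columns shown above, the share of forced (requested) checkpoints can be tracked per snapshot:

```sql
-- A persistently high fraction of requested checkpoints usually means
-- max_wal_size is too small for the write workload.
SELECT snapshot_tstamp,
       checkpoints_timed,
       checkpoints_req,
       round(checkpoints_req::numeric
             / nullif(checkpoints_timed + checkpoints_req, 0), 3) AS req_fraction
FROM pgstatviz.buf
ORDER BY snapshot_tstamp;
```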

pgstatviz.conf - PostgreSQL Configuration Snapshot

select * from pgstatviz.conf;

Result:

-[ RECORD 1 ]
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
conf            | {"work_mem": "8192", "autovacuum": "on", "max_wal_size": "1024", "bgwriter_delay": "200", "shared_buffers": "16384", "max_connections": "100", "max_wal_senders": "10", "vacuum_cost_delay": "0", "vacuum_cost_limit": "200", "autovacuum_naptime": "60", "checkpoint_timeout": "300", "server_version_num": "180001", "autovacuum_work_mem": "-1", "maintenance_work_mem": "65536", "max_parallel_workers": "8", "bgwriter_lru_maxpages": "100", "max_replication_slots": "10", "autovacuum_max_workers": "3", "bgwriter_lru_multiplier": "2", "checkpoint_completion_target": "0.9", "max_parallel_maintenance_workers": "2"}

This table stores a JSON representation of important PostgreSQL configuration parameters at the moment of the snapshot.

Example values:

  • shared_buffers
  • work_mem
  • max_connections
  • checkpoint_timeout

Why this is important:

Configuration changes can drastically impact performance. By storing configuration history, administrators can correlate performance problems with configuration modifications.

Use cases:

  • Comparing performance before and after tuning
  • Auditing configuration changes
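
For example, assuming the conf column is stored as jsonb (as the output above suggests), snapshots where a given parameter changed can be isolated:

```sql
-- Illustrative sketch: find snapshots where shared_buffers differs from the
-- previous snapshot's value. The first snapshot always appears, since it has
-- no predecessor to compare against.
SELECT *
FROM (
    SELECT snapshot_tstamp,
           conf->>'shared_buffers' AS shared_buffers,
           lag(conf->>'shared_buffers')
               OVER (ORDER BY snapshot_tstamp) AS previous_value
    FROM pgstatviz.conf
) changes
WHERE shared_buffers IS DISTINCT FROM previous_value;
```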

pgstatviz.conn - Connection Statistics

select * from pgstatviz.conn;

Result:

-[ RECORD 1 ]-----------+-----------------------------------------
snapshot_tstamp         | 2026-02-12 19:56:39.861724+05:30
conn_total              | 1
conn_active             | 1
conn_idle               | 0
conn_idle_trans         | 0
conn_idle_trans_abort   | 0
conn_fastpath           | 0
conn_users              | [{"user": "postgres", "connections": 1}]
max_query_age_seconds   | 0.002387
max_xact_age_seconds    | 0.002389
max_backend_age_seconds | 1568.350966

This table captures backend activity and connection states.

Information available:

  • Total connections
  • Active and idle sessions
  • Idle transactions
  • Query age and transaction age
  • Per-user connection distribution

Benefits:

  • Detecting connection leaks
  • Monitoring long-running queries
  • Understanding workload patterns

For Odoo or ERP workloads, this is especially useful to detect idle transactions that may block VACUUM or cause locks.
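
A minimal sketch of that kind of check, using only the columns shown above:

```sql
-- Trend of idle-in-transaction sessions across snapshots; sustained non-zero
-- values often hold back VACUUM and are worth investigating.
SELECT snapshot_tstamp, conn_total, conn_active, conn_idle_trans
FROM pgstatviz.conn
ORDER BY snapshot_tstamp;
```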

pgstatviz.db - Database Level Activity

select * from pgstatviz.db;

Result:

-[ RECORD 1 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:56:39.861724+05:30
xact_commit           | 8202
xact_rollback         | 29
blks_read             | 5154
blks_hit              | 496585
tup_returned          | 5112495
tup_fetched           | 124958
tup_inserted          | 8111
tup_updated           | 1840
tup_deleted           | 1028
temp_files            | 0
temp_bytes            | 0
block_size            | 8192
stats_reset           | 
postmaster_start_time | 2026-02-12 09:06:10.72562+05:30
checksum_failures     | 0
checksum_last_failure | 

This table records aggregated statistics similar to pg_stat_database.

Metrics include:

  • Transaction commits and rollbacks
  • Block reads and cache hits
  • Tuple operations (insert, update, delete)
  • Temporary file usage

Insights gained:

  • Cache efficiency using hit vs read ratio
  • Write-heavy vs read-heavy workload patterns
  • Growth of table modifications

This helps administrators analyze application behavior over time.
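
The cache efficiency mentioned above can be computed directly from these counters, for example:

```sql
-- Buffer cache hit ratio per snapshot; values well below ~0.99 on a steady
-- workload may indicate that shared_buffers is undersized.
SELECT snapshot_tstamp,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
FROM pgstatviz.db
ORDER BY snapshot_tstamp;
```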

pgstatviz.io - Detailed IO Statistics

select * from pgstatviz.io;

Result:

snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
io_stats        | [{"hits": 254316, "reads": 6634, "fsyncs": 0, "object": "relation", "reuses": null, "writes": 2, "context": "normal", "extends": 448, "evictions": 542, "read_time": 0, "fsync_time": 0, "read_bytes": 75505664, "write_time": 0, "writebacks": 0, "extend_time": 0, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 16384, "backend_type": "client backend", "extend_bytes": 3686400, "writeback_time": 0}, {"hits": null, "reads": 0, "fsyncs": 98, "object": "wal", "reuses": null, "writes": 98, "context": "normal", "extends": null, "evictions": null, "read_time": 0, "fsync_time": 0, "read_bytes": 0, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 10436608, "backend_type": "client backend", "extend_bytes": null, "writeback_time": null}, {"hits": 17566, "reads": 14, "fsyncs": 0, "object": "relation", "reuses": null, "writes": 0, "context": "normal", "extends": null, "evictions": 1, "read_time": 0, "fsync_time": 0, "read_bytes": 114688, "write_time": 0, "writebacks": 0, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 0, "backend_type": "autovacuum launcher", "extend_bytes": null, "writeback_time": 0}, {"hits": 4340372, "reads": 2981, "fsyncs": 0, "object": "relation", "reuses": null, "writes": 0, "context": "normal", "extends": 16, "evictions": 0, "read_time": 0, "fsync_time": 0, "read_bytes": 77250560, "write_time": 0, "writebacks": 0, "extend_time": 0, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 0, "backend_type": "autovacuum worker", "extend_bytes": 147456, "writeback_time": 0}, {"hits": 1124, "reads": 115, "fsyncs": null, "object": "relation", "reuses": 173, "writes": 62, "context": "vacuum", "extends": 0, "evictions": 0, "read_time": 0, "fsync_time": null, "read_bytes": 5570560, "write_time": 0, "writebacks": 0, "extend_time": 0, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 507904, 
"backend_type": "autovacuum worker", "extend_bytes": 0, "writeback_time": 0}, {"hits": null, "reads": null, "fsyncs": 25, "object": "wal", "reuses": null, "writes": 27, "context": "normal", "extends": null, "evictions": null, "read_time": null, "fsync_time": 0, "read_bytes": null, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 4358144, "backend_type": "autovacuum worker", "extend_bytes": null, "writeback_time": null}, {"hits": null, "reads": null, "fsyncs": 0, "object": "relation", "reuses": null, "writes": 800, "context": "normal", "extends": null, "evictions": null, "read_time": null, "fsync_time": 0, "read_bytes": null, "write_time": 0, "writebacks": 768, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 6553600, "backend_type": "background writer", "extend_bytes": null, "writeback_time": 0}, {"hits": null, "reads": null, "fsyncs": 917, "object": "relation", "reuses": null, "writes": 18512, "context": "normal", "extends": null, "evictions": null, "read_time": null, "fsync_time": 0, "read_bytes": null, "write_time": 0, "writebacks": 18512, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 151650304, "backend_type": "checkpointer", "extend_bytes": null, "writeback_time": 0}, {"hits": null, "reads": 0, "fsyncs": 42, "object": "wal", "reuses": null, "writes": 42, "context": "normal", "extends": null, "evictions": null, "read_time": 0, "fsync_time": 0, "read_bytes": 0, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 352256, "backend_type": "checkpointer", "extend_bytes": null, "writeback_time": null}, {"hits": 306701, "reads": 21, "fsyncs": 0, "object": "relation", "reuses": null, "writes": 156749, "context": "normal", "extends": 0, "evictions": 157462, "read_time": 0, "fsync_time": 0, "read_bytes": 172032, "write_time": 0, "writebacks": 0, 
"extend_time": 0, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 1284087808, "backend_type": "startup", "extend_bytes": 0, "writeback_time": 0}, {"hits": null, "reads": null, "fsyncs": 1, "object": "wal", "reuses": null, "writes": 1, "context": "init", "extends": null, "evictions": null, "read_time": null, "fsync_time": 0, "read_bytes": null, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 16777216, "backend_type": "startup", "extend_bytes": null, "writeback_time": null}, {"hits": null, "reads": 130300, "fsyncs": 2, "object": "wal", "reuses": null, "writes": 2, "context": "normal", "extends": null, "evictions": null, "read_time": 0, "fsync_time": 0, "read_bytes": 1067417600, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 16384, "backend_type": "startup", "extend_bytes": null, "writeback_time": null}, {"hits": null, "reads": null, "fsyncs": 181, "object": "wal", "reuses": null, "writes": 184, "context": "normal", "extends": null, "evictions": null, "read_time": null, "fsync_time": 0, "read_bytes": null, "write_time": 0, "writebacks": null, "extend_time": null, "stats_reset": "2026-02-05T11:13:09.808183+05:30", "write_bytes": 5955584, "backend_type": "walwriter", "extend_bytes": null, "writeback_time": null}]
stats_reset     | 2026-02-05 11:13:09.808183+05:30

This is one of the most powerful tables in pg_statviz: it records IO metrics grouped by backend type and IO context.

Information stored:

  • Relation reads and writes
  • WAL IO activity
  • Background writer and checkpointer behavior
  • Autovacuum IO

What administrators can analyze:

  • Which process generates the most disk IO
  • WAL pressure during heavy transactions
  • Autovacuum efficiency

This is highly valuable for diagnosing slow storage performance and understanding write amplification.
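
As one sketch, assuming io_stats is stored as jsonb, the latest snapshot can be ranked by bytes written per backend type:

```sql
-- Which backend type wrote the most in the latest snapshot.
SELECT e->>'backend_type' AS backend_type,
       e->>'object'       AS object,
       sum((e->>'write_bytes')::bigint) AS write_bytes
FROM pgstatviz.io
CROSS JOIN LATERAL jsonb_array_elements(io_stats) AS e
WHERE snapshot_tstamp = (SELECT max(snapshot_tstamp) FROM pgstatviz.io)
  AND e->>'write_bytes' IS NOT NULL
GROUP BY 1, 2
ORDER BY write_bytes DESC;
```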

pgstatviz.lock - Lock Monitoring

select * from pgstatviz.lock;

Result:

-[ RECORD 1 ]---+---------------------------------
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
locks_total     | 0
locks           | []

This table captures lock statistics at snapshot time.

Metrics:

  • Total number of locks
  • Detailed lock data in JSON format

Usefulness:

  • Detecting blocking situations
  • Understanding concurrency bottlenecks

In busy ERP databases, lock monitoring is essential for identifying slow business operations caused by transaction conflicts.
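
Since most snapshots on a healthy system record zero locks, a simple filter surfaces the interesting ones:

```sql
-- Snapshots during which locks were observed, with the raw JSON detail.
SELECT snapshot_tstamp, locks_total, locks
FROM pgstatviz.lock
WHERE locks_total > 0
ORDER BY snapshot_tstamp;
```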

pgstatviz.repl - Replication Status

select * from pgstatviz.repl;

Result:

-[ RECORD 1 ]---+---------------------------------
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
standby_lag     | 
slot_stats      | 

This table contains replication and slot information.

Data stored:

  • Standby lag
  • Replication slot statistics

Ideal for:

  • Monitoring streaming replication health
  • Detecting replication delays before they become critical

pgstatviz.slru - SLRU Activity

select * from pgstatviz.slru;

Result:

-[ RECORD 1 ]
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
slru_stats      | [{"name": "commit_timestamp", "flushes": 36, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}, {"name": "multixact_member", "flushes": 51, "blks_hit": 911, "blks_read": 15, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 16}, {"name": "multixact_offset", "flushes": 51, "blks_hit": 926, "blks_read": 15, "truncates": 0, "blks_exists": 15, "blks_zeroed": 0, "blks_written": 16}, {"name": "notify", "flushes": 0, "blks_hit": 11, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 1, "blks_written": 0}, {"name": "serializable", "flushes": 0, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}, {"name": "subtransaction", "flushes": 36, "blks_hit": 0, "blks_read": 0, "truncates": 35, "blks_exists": 0, "blks_zeroed": 15, "blks_written": 14}, {"name": "transaction", "flushes": 36, "blks_hit": 2159, "blks_read": 24, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 24}, {"name": "other", "flushes": 0, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}]

SLRU (Simple Least Recently Used) structures store internal PostgreSQL metadata like transaction status and multixact information.

Metrics include:

  • Block hits and reads
  • Flushes and truncations
  • Writes to internal structures

Why it matters:

High SLRU activity can indicate heavy transactional workloads or contention issues.
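
To compare the SLRU caches against each other, the JSON array can be unnested; a sketch assuming slru_stats is stored as jsonb:

```sql
-- Per-SLRU block activity in the latest snapshot.
SELECT s->>'name' AS slru,
       (s->>'blks_hit')::bigint  AS blks_hit,
       (s->>'blks_read')::bigint AS blks_read
FROM pgstatviz.slru
CROSS JOIN LATERAL jsonb_array_elements(slru_stats) AS s
WHERE snapshot_tstamp = (SELECT max(snapshot_tstamp) FROM pgstatviz.slru)
ORDER BY blks_read DESC;
```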

pgstatviz.wait - Wait Events

select * from pgstatviz.wait;

Result:

-[ RECORD 1 ]-----+---------------------------------
snapshot_tstamp   | 2026-02-12 19:56:39.861724+05:30
wait_events_total | 0
wait_events       | []

This table records wait event statistics.

Information provided:

  • Number of wait events
  • Details about waiting backends

Administrators can use this to identify whether sessions are waiting on locks, IO, or background processes.

pgstatviz.wal - WAL Statistics

select * from pgstatviz.wal;

Result:

-[ RECORD 1 ]----+---------------------------------
snapshot_tstamp  | 2026-02-12 19:56:39.861724+05:30
wal_records      | 57195
wal_fpi          | 2021
wal_bytes        | 18403110
wal_buffers_full | 0
wal_write        | 354
wal_sync         | 349
wal_write_time   | 0
wal_sync_time    | 0
stats_reset      | 2026-02-05 11:13:09.808183+05:30

WAL activity is critical for performance and durability.

Metrics available:

  • WAL records generated
  • Full page images
  • WAL bytes written
  • WAL sync operations

What DBAs learn:

  • Write-ahead logging pressure
  • Commit overhead
  • Replication readiness

For write-heavy systems like Odoo, WAL analysis helps optimize checkpoint and durability settings.
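
Because these counters are cumulative, the interesting number is usually the delta between snapshots; for example:

```sql
-- WAL bytes generated between consecutive snapshots. Note that deltas are
-- only meaningful between snapshots sharing the same stats_reset time.
SELECT snapshot_tstamp,
       wal_bytes - lag(wal_bytes) OVER (ORDER BY snapshot_tstamp) AS wal_bytes_delta
FROM pgstatviz.wal
ORDER BY snapshot_tstamp;
```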

pgstatviz.snapshots - Snapshot History

select * from pgstatviz.snapshots;

Result:

        snapshot_tstamp          
----------------------------------
 2026-02-12 19:56:39.861724+05:30
(1 row)

This table stores timestamps for every collected snapshot.

Purpose:

  • Allows time-series queries
  • Enables performance comparison between different moments

Example:

Two snapshots taken at different times allow you to see growth in buffer writes or WAL usage.

Working With Multiple Snapshots

After taking another snapshot:

select * from pgstatviz.snapshot();

Querying a table such as pgstatviz.buf shows multiple records, each representing the server state at different times.

Result:

-[ RECORD 1 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:19:58.921983+05:30
checkpoints_timed     | 771
checkpoints_req       | 16
checkpoint_write_time | 95477
checkpoint_sync_time  | 957
buffers_checkpoint    | 18265
buffers_clean         | 800
maxwritten_clean      | 8
buffers_backend       | 2
buffers_backend_fsync | 0
buffers_alloc         | 193423
stats_reset           | 2026-02-05 11:13:09.808183+05:30
-[ RECORD 2 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:30:54.011441+05:30
checkpoints_timed     | 773
checkpoints_req       | 16
checkpoint_write_time | 114114
checkpoint_sync_time  | 969
buffers_checkpoint    | 18451
buffers_clean         | 800
maxwritten_clean      | 8
buffers_backend       | 2
buffers_backend_fsync | 0
buffers_alloc         | 193480
stats_reset           | 2026-02-05 11:13:09.808183+05:30

This enables historical analysis such as:

  • Comparing checkpoint activity over time
  • Tracking connection growth
  • Measuring workload spikes
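
The same window-function pattern applies to any of the tables; for instance, checkpoint activity between the two snapshots above:

```sql
-- Change in checkpoint buffer writes and write time between snapshots.
SELECT snapshot_tstamp,
       buffers_checkpoint
           - lag(buffers_checkpoint) OVER (ORDER BY snapshot_tstamp) AS buffers_delta,
       checkpoint_write_time
           - lag(checkpoint_write_time) OVER (ORDER BY snapshot_tstamp) AS write_time_ms_delta
FROM pgstatviz.buf
ORDER BY snapshot_tstamp;
```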

pg_statviz brings structured historical monitoring directly inside PostgreSQL. Instead of relying only on external monitoring systems, administrators can query performance history using SQL.

Key advantages:

  • Historical analysis without external tools
  • Deep insight into WAL, IO, buffers, and locks
  • Easy correlation between configuration changes and performance
  • Lightweight monitoring using standard PostgreSQL functions

For environments running ERP workloads or large transactional systems, pg_statviz provides valuable visibility into performance patterns and helps administrators make informed tuning decisions.

pg_statviz extends PostgreSQL monitoring by turning real-time statistics into historical data. By taking periodic snapshots, administrators gain long-term insight into system behavior, making it easier to diagnose performance issues, track workload evolution, and optimize database configuration.

Each table inside the extension focuses on a specific subsystem, like buffers, WAL, IO, connections, locks, or replication, and it gives a structured view of PostgreSQL internals.

When used consistently, pg_statviz becomes a powerful internal monitoring layer that helps database administrators understand how their system evolves over time and respond proactively to performance changes.

stats_reset     | 2026-02-05 11:13:09.808183+05:30

The pgstatviz.io table is one of the most powerful tables in pg_statviz. It records IO metrics grouped by backend type and context.

Information stored:

  • Relation reads and writes
  • WAL IO activity
  • Background writer and checkpointer behavior
  • Autovacuum IO

What administrators can analyze:

  • Which process generates the most disk IO
  • WAL pressure during heavy transactions
  • Autovacuum efficiency

This is highly valuable for diagnosing slow storage performance and understanding write amplification.
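As an illustration, the io_stats array can be unnested to rank backend types by bytes written at each snapshot. This is a sketch that assumes the column is stored as jsonb (substitute json_array_elements if your build stores plain json):

select snapshot_tstamp,
       elem->>'backend_type' as backend_type,
       elem->>'object'       as object,
       (elem->>'write_bytes')::bigint as write_bytes
from pgstatviz.io,
     jsonb_array_elements(io_stats) as elem
order by snapshot_tstamp, write_bytes desc nulls last;

Running this after several snapshots makes it easy to spot which backend type, such as the checkpointer or autovacuum workers, dominates write traffic over time.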

pgstatviz.lock — Lock Monitoring

select * from pgstatviz.lock;

Result:

-[ RECORD 1 ]---+---------------------------------
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
locks_total     | 0
locks           | []

This table captures lock statistics at snapshot time.

Metrics:

  • Total number of locks
  • Detailed lock data in JSON format

Usefulness:

  • Detecting blocking situations
  • Understanding concurrency bottlenecks

In busy ERP databases, lock monitoring is essential for identifying slow business operations caused by transaction conflicts.
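A simple trend query over the snapshot history can reveal when lock counts spike. This sketch uses only the columns shown above:

select snapshot_tstamp, locks_total
from pgstatviz.lock
order by snapshot_tstamp;

Sustained non-zero values across consecutive snapshots are a cue to inspect the locks JSON column for the blocking detail.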

pgstatviz.repl — Replication Status

select * from pgstatviz.repl;

Result:

-[ RECORD 1 ]---+---------------------------------
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
standby_lag     | 
slot_stats      | 

This table contains replication and slot information.

Data stored:

  • Standby lag
  • Replication slot statistics

Ideal for:

  • Monitoring streaming replication health
  • Detecting replication delays before they become critical
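To watch lag evolve across snapshots, the standby_lag column can be selected over time. This sketch assumes at least one standby is attached; on a primary with no standbys the column is empty, as in the output above:

select snapshot_tstamp, standby_lag
from pgstatviz.repl
where standby_lag is not null
order by snapshot_tstamp;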

pgstatviz.slru — SLRU Activity

select * from pgstatviz.slru;

Result:

-[ RECORD 1 ]---+---------------------------------
snapshot_tstamp | 2026-02-12 19:56:39.861724+05:30
slru_stats      | [{"name": "commit_timestamp", "flushes": 36, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}, {"name": "multixact_member", "flushes": 51, "blks_hit": 911, "blks_read": 15, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 16}, {"name": "multixact_offset", "flushes": 51, "blks_hit": 926, "blks_read": 15, "truncates": 0, "blks_exists": 15, "blks_zeroed": 0, "blks_written": 16}, {"name": "notify", "flushes": 0, "blks_hit": 11, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 1, "blks_written": 0}, {"name": "serializable", "flushes": 0, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}, {"name": "subtransaction", "flushes": 36, "blks_hit": 0, "blks_read": 0, "truncates": 35, "blks_exists": 0, "blks_zeroed": 15, "blks_written": 14}, {"name": "transaction", "flushes": 36, "blks_hit": 2159, "blks_read": 24, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 24}, {"name": "other", "flushes": 0, "blks_hit": 0, "blks_read": 0, "truncates": 0, "blks_exists": 0, "blks_zeroed": 0, "blks_written": 0}]

SLRU (Simple Least Recently Used) structures store internal PostgreSQL metadata like transaction status and multixact information.

Metrics include:

  • Block hits and reads
  • Flushes and truncations
  • Writes to internal structures

Why it matters:

High SLRU activity can indicate heavy transactional workloads or contention issues.
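For example, the slru_stats array can be unnested to compare hit and read counts per SLRU cache. As with io_stats, this sketch assumes the column is jsonb:

select snapshot_tstamp,
       s->>'name' as slru_cache,
       (s->>'blks_hit')::bigint  as blks_hit,
       (s->>'blks_read')::bigint as blks_read
from pgstatviz.slru,
     jsonb_array_elements(slru_stats) as s
order by snapshot_tstamp, slru_cache;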

pgstatviz.wait — Wait Events

select * from pgstatviz.wait;

Result:

-[ RECORD 1 ]-----+---------------------------------
snapshot_tstamp   | 2026-02-12 19:56:39.861724+05:30
wait_events_total | 0
wait_events       | []

This table records wait event statistics.

Information provided:

  • Number of wait events
  • Details about waiting backends

Administrators can use this to identify whether sessions are waiting on locks, IO, or background processes.
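A minimal trend query over wait_events_total shows whether waiting increases over time; the per-event breakdown lives in the wait_events JSON column:

select snapshot_tstamp, wait_events_total
from pgstatviz.wait
order by snapshot_tstamp;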

pgstatviz.wal — WAL Statistics

select * from pgstatviz.wal;

Result:

-[ RECORD 1 ]----+---------------------------------
snapshot_tstamp  | 2026-02-12 19:56:39.861724+05:30
wal_records      | 57195
wal_fpi          | 2021
wal_bytes        | 18403110
wal_buffers_full | 0
wal_write        | 354
wal_sync         | 349
wal_write_time   | 0
wal_sync_time    | 0
stats_reset      | 2026-02-05 11:13:09.808183+05:30

WAL activity is critical for performance and durability.

Metrics available:

  • WAL records generated
  • Full page images
  • WAL bytes written
  • WAL sync operations

What DBAs learn:

  • Write-ahead logging pressure
  • Commit overhead
  • Replication readiness

For write-heavy systems like Odoo, WAL analysis helps optimize checkpoint and durability settings.
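Because these counters are cumulative since the last stats reset, differencing consecutive snapshots with a window function yields per-interval WAL throughput. A sketch:

select snapshot_tstamp,
       wal_bytes   - lag(wal_bytes)   over w as wal_bytes_delta,
       wal_records - lag(wal_records) over w as wal_records_delta
from pgstatviz.wal
window w as (order by snapshot_tstamp)
order by snapshot_tstamp;

The first row returns NULL deltas because there is no earlier snapshot to compare against.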

pgstatviz.snapshots — Snapshot History

select * from pgstatviz.snapshots;

Result:

        snapshot_tstamp          
----------------------------------
 2026-02-12 19:56:39.861724+05:30
(1 row)

This table stores timestamps for every collected snapshot.

Purpose:

  • Allows time-series queries
  • Enables performance comparison between different moments

Example:

Two snapshots taken at different times allow you to see growth in buffer writes or WAL usage.
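For instance, the observation window and snapshot count can be read straight from this table:

select min(snapshot_tstamp) as first_snapshot,
       max(snapshot_tstamp) as last_snapshot,
       count(*)             as snapshots
from pgstatviz.snapshots;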

Working With Multiple Snapshots

After taking another snapshot:

select pgstatviz.snapshot();

Querying a table such as pgstatviz.buf shows multiple records, each representing the server state at different times.

Result:

-[ RECORD 1 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:19:58.921983+05:30
checkpoints_timed     | 771
checkpoints_req       | 16
checkpoint_write_time | 95477
checkpoint_sync_time  | 957
buffers_checkpoint    | 18265
buffers_clean         | 800
maxwritten_clean      | 8
buffers_backend       | 2
buffers_backend_fsync | 0
buffers_alloc         | 193423
stats_reset           | 2026-02-05 11:13:09.808183+05:30
-[ RECORD 2 ]---------+---------------------------------
snapshot_tstamp       | 2026-02-12 19:30:54.011441+05:30
checkpoints_timed     | 773
checkpoints_req       | 16
checkpoint_write_time | 114114
checkpoint_sync_time  | 969
buffers_checkpoint    | 18451
buffers_clean         | 800
maxwritten_clean      | 8
buffers_backend       | 2
buffers_backend_fsync | 0
buffers_alloc         | 193480
stats_reset           | 2026-02-05 11:13:09.808183+05:30

This enables historical analysis such as:

  • Comparing checkpoint activity over time
  • Tracking connection growth
  • Measuring workload spikes
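The two records above can be differenced the same way as any cumulative counter. This sketch computes per-interval checkpoint activity from pgstatviz.buf using the columns shown in the output:

select snapshot_tstamp,
       checkpoints_timed  - lag(checkpoints_timed)  over w as checkpoints_timed_delta,
       buffers_checkpoint - lag(buffers_checkpoint) over w as buffers_checkpoint_delta
from pgstatviz.buf
window w as (order by snapshot_tstamp)
order by snapshot_tstamp;

For the sample data above, the interval between 19:19:58 and 19:30:54 saw 2 additional timed checkpoints and 186 more buffers written at checkpoint.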

pg_statviz brings structured historical monitoring directly inside PostgreSQL. Instead of relying only on external monitoring systems, administrators can query performance history using SQL.

Key advantages:

  • Historical analysis without external tools
  • Deep insight into WAL, IO, buffers, and locks
  • Easy correlation between configuration changes and performance
  • Lightweight monitoring using standard PostgreSQL functions

For environments running ERP workloads or large transactional systems, pg_statviz provides valuable visibility into performance patterns and helps administrators make informed tuning decisions.

pg_statviz extends PostgreSQL monitoring by turning real-time statistics into historical data. By taking periodic snapshots, administrators gain long-term insight into system behavior, making it easier to diagnose performance issues, track workload evolution, and optimize database configuration.

Each table inside the extension focuses on a specific subsystem, such as buffers, WAL, IO, connections, locks, or replication, giving a structured view of PostgreSQL internals.

When used consistently, pg_statviz becomes a powerful internal monitoring layer that helps database administrators understand how their system evolves over time and respond proactively to performance changes.
