When learning PostgreSQL, many people focus on tables, indexes, queries, and performance tuning. But behind every fast query is something just as important: memory management. PostgreSQL depends heavily on memory to cache data, manage connections, and process workloads efficiently.
One memory-related topic that often surprises learners is Huge Pages. The name sounds technical, and many beginners are unsure whether it refers to PostgreSQL storage pages, operating system memory pages, or some advanced feature meant only for large production servers.
In reality, Huge Pages are an operating system feature that PostgreSQL can use to manage shared memory more efficiently. When configured properly, they can reduce overhead, improve memory handling, and help large PostgreSQL systems perform more smoothly.
If you have ever seen settings like huge_pages, huge_page_size, or huge_pages_status and wondered what they mean, this article will break the concept down step by step in a simple and practical way.
What Are Huge Pages?
To understand huge pages, we first need to understand how an operating system uses RAM.
The operating system does not manage memory byte by byte. It splits memory into equal-sized units known as pages.
A normal memory page is commonly 4 KB.
Some systems also support much larger pages called Huge Pages.
A common huge page size on Linux is 2 MB.
That means instead of managing many small 4 KB chunks, the operating system can manage memory using fewer large chunks.
Why Huge Pages Exist
Managing millions of small memory pages creates overhead.
The operating system must maintain:
- page tables
- address mappings
- translation metadata
The CPU must repeatedly translate virtual addresses to physical addresses.
Huge pages reduce that overhead because one large page covers much more memory than one small page.
This can improve efficiency on systems that use large amounts of RAM.
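You can inspect both page sizes directly on a Linux machine. The commands below are standard; the exact values they print vary by system:

```shell
# Normal OS page size in bytes (typically 4096 on x86-64)
getconf PAGE_SIZE

# Default huge page size reported by the kernel (typically 2048 kB)
grep Hugepagesize /proc/meminfo
```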
PostgreSQL and Memory
PostgreSQL uses memory heavily. One of the most important memory areas is:
shared_buffers
show shared_buffers ;
Result :
shared_buffers
----------------
128MB
(1 row)
You can check more details about the shared_buffers parameter in pg_settings like this:
Query :
select * from pg_settings where name = 'shared_buffers';
Result :
-[ RECORD 1 ]---+-------------------------------------------------------------
name | shared_buffers
setting | 16384
unit | 8kB
category | Resource Usage / Memory
short_desc | Sets the number of shared memory buffers used by the server.
extra_desc |
context | postmaster
vartype | integer
source | configuration file
min_val | 16
max_val | 1073741823
enumvals |
boot_val | 16384
reset_val | 16384
sourcefile | /etc/postgresql/18/main/postgresql.conf
sourceline | 131
pending_restart | f
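As a sanity check, the setting of 16384 buffers with a unit of 8 kB matches the 128MB reported by SHOW above; the arithmetic is easy to verify in a shell:

```shell
# 16384 buffers x 8 kB per buffer
echo $((16384 * 8))        # 131072 kB
echo $((16384 * 8 / 1024)) # 128 MB
```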
This is PostgreSQL’s main cache for database pages.
When queries run, PostgreSQL frequently reads and writes data through this shared memory area.
Because this memory region can become very large, huge pages may help PostgreSQL use it more efficiently.
PostgreSQL Pages vs Operating System Pages
The word page is used in two different ways.
PostgreSQL Page
PostgreSQL stores data in fixed-size units called pages.
Default PostgreSQL page size: 8 KB (8192 bytes).
This is the unit PostgreSQL uses for storage and buffering.
Operating System Page
The operating system manages RAM using its own page size.
Common sizes:
- 4 KB normal page
- 2 MB huge page
These are not the same thing.
How They Are Connected
Suppose PostgreSQL loads one database page of 8 KB into memory.
PostgreSQL sees one 8 KB page.
But the operating system maps that memory using its own page size.
If normal 4 KB pages are used, one PostgreSQL 8 KB page occupies two 4 KB operating system pages.
If 2 MB huge pages are used:
Many PostgreSQL pages fit inside one huge page.
Calculation:
- 2 MB / 8 KB = 256 PostgreSQL pages
So one huge page can hold 256 PostgreSQL pages.
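The calculation above can be checked in a shell:

```shell
# One 2 MB huge page divided into 8 KB PostgreSQL pages
echo $((2 * 1024 * 1024 / (8 * 1024)))  # prints 256
```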
PostgreSQL still thinks in 8 KB pages. Huge pages only change how the operating system manages the memory underneath.
Why Huge Pages Help PostgreSQL
Fewer Memory Mappings
Suppose:
set shared_buffers = 8GB
Using normal 4 KB pages:
- 8 GB / 4 KB = 2,097,152 memory pages
Using 2 MB huge pages:
- 8 GB / 2 MB = 4096 memory pages
The operating system manages far fewer memory entries.
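The same page-count arithmetic for an 8 GB shared memory area can be verified directly:

```shell
# Number of 4 KB pages needed to back 8 GB
echo $((8 * 1024 * 1024 * 1024 / 4096))               # prints 2097152

# Number of 2 MB huge pages needed to back 8 GB
echo $((8 * 1024 * 1024 * 1024 / (2 * 1024 * 1024)))  # prints 4096
```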
Better CPU Efficiency
Modern processors include a component known as the Translation Lookaside Buffer (TLB), which helps accelerate address translation.
The TLB is a small, high-speed cache within the CPU that temporarily holds recently used mappings from virtual addresses to their corresponding physical addresses.
Huge pages allow each TLB entry to cover more memory, reducing misses and improving performance under load.
Huge pages are most useful when PostgreSQL runs on:
- large RAM servers
- high concurrency systems
- production workloads
- systems with large shared_buffers
PostgreSQL Huge Page Settings
PostgreSQL provides three related settings.
- huge_pages
- huge_page_size
- huge_pages_status
Query :
select * from pg_settings where name = 'huge_pages';
Result :
name | huge_pages
setting | try
unit |
category | Resource Usage / Memory
short_desc | Use of huge pages on Linux or Windows.
extra_desc |
context | postmaster
vartype | enum
source | default
min_val |
max_val |
enumvals | {off,on,try}
boot_val | try
reset_val | try
sourcefile |
sourceline |
pending_restart | f
This controls whether PostgreSQL should request huge pages.
Possible values:
- off: Do not use huge pages.
- on: Require huge pages. If unavailable, PostgreSQL will fail to start.
- try: Try to use huge pages. If not available, continue with normal pages. This is the default value of huge_pages.
Query :
select * from pg_settings where name = 'huge_page_size';
Result :
name | huge_page_size
setting | 0
unit | kB
category | Resource Usage / Memory
short_desc | The size of huge page that should be requested.
extra_desc | 0 means use the system default.
context | postmaster
vartype | integer
source | default
min_val | 0
max_val | 2147483647
enumvals |
boot_val | 0
reset_val | 0
sourcefile |
sourceline |
pending_restart | f
We can check the huge page size on our system using the command below:
grep -i hugepage /proc/meminfo
Result :
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Meaning:
- 0 = use the operating system default huge page size
Hugepagesize: 2048 kB
So the default huge page size is 2 MB.
Query :
select * from pg_settings where name = 'huge_pages_status';
Result :
name | huge_pages_status
setting | off
unit |
category | Preset Options
short_desc | Indicates the status of huge pages.
extra_desc |
context | internal
vartype | enum
source | default
min_val |
max_val |
enumvals | {off,on,unknown}
boot_val | unknown
reset_val | off
sourcefile |
sourceline |
pending_restart | f
Here the value of setting is off, which means PostgreSQL is currently not using huge pages.
From this, the PostgreSQL setting is:
huge_pages = try
This means PostgreSQL attempts to use huge pages, but only if the operating system has them available.
The result from the above Linux memory info shows:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
This means your system has zero preallocated huge pages.
Since none are available, PostgreSQL falls back to normal memory pages.
Understanding Linux Output
cat /proc/meminfo | grep -i HugePage
Output:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
HugePages_Total
Total explicit huge pages configured in the kernel.
HugePages_Free
Huge pages currently unused.
HugePages_Rsvd
Huge pages reserved for processes but not yet consumed.
HugePages_Surp
Temporary extra huge pages beyond configured total.
Hugepagesize
The size of each huge page (here 2 MB).
AnonHugePages
Transparent huge pages used for anonymous memory.
ShmemHugePages
Huge pages used for shared memory segments.
FileHugePages
Huge pages used for file-backed mappings.
What Happens During an UPDATE
Suppose you run:
UPDATE employees SET salary = salary + 1000 WHERE id = 10;
PostgreSQL locates the row inside an 8 KB database page.
If that page is not already cached:
- PostgreSQL reads the 8 KB page into shared_buffers
- Modifies the row in memory
- Marks the page dirty
- Later writes it back to disk
If huge pages are enabled, the memory area holding that page may reside inside a 2 MB huge page.
If not enabled, it may be backed by two 4 KB normal pages.
PostgreSQL still treats the data as one 8 KB page.
Huge pages are worth evaluating when you have:
- dedicated database server
- large RAM
- large shared_buffers
- many concurrent users
- CPU-heavy workloads
How to Enable Huge Pages on Linux
If you simply change the PostgreSQL configuration parameter huge_pages to on in postgresql.conf without preparing the operating system first, you get an error like this.
postgres=# show config_file ;
config_file
-----------------------------------------
/etc/postgresql/18/main/postgresql.conf
(1 row)
Open the conf file and edit the value
sudo nano /etc/postgresql/18/main/postgresql.conf
Change the value to on
huge_pages = on # on, off, or try
Now when you restart the PostgreSQL server, the cluster goes down:
cybrosys@cybrosys:~$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
18 main 5432 down postgres /var/lib/postgresql/18/main /var/log/postgresql/postgresql-18-main.log
Now check the log file to find why the server went down:
cybrosys@cybrosys:~$ tail -f /var/log/postgresql/postgresql-18-main.log
2026-04-25 10:24:46.262 IST [1411] LOG: background worker "logical replication launcher" (PID 1507) exited with exit code 1
2026-04-25 10:24:46.264 IST [1464] LOG: shutting down
2026-04-25 10:24:46.266 IST [1464] LOG: checkpoint starting: shutdown immediate
2026-04-25 10:24:46.275 IST [1464] LOG: checkpoint complete: wrote 0 buffers (0.0%), wrote 0 SLRU buffers; 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.012 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB; lsn=31/A06A6310, redo lsn=31/A06A6310
2026-04-25 10:24:46.293 IST [1411] LOG: database system is shut down
2026-04-25 10:24:46.468 IST [127712] FATAL: could not map anonymous shared memory: Cannot allocate memory
2026-04-25 10:24:46.468 IST [127712] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 157286400 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing "shared_buffers" or "max_connections".
2026-04-25 10:24:46.468 IST [127712] LOG: database system is shut down
Now configure the operating system to preallocate huge pages:
sudo sysctl -w vm.nr_hugepages=100
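The value 100 here is just an example. Since PostgreSQL 15 you can ask the server itself exactly how many huge pages its shared memory needs, and persist the kernel setting across reboots. The paths below assume a Debian-style PostgreSQL 18 install; adjust them for your system:

```shell
# Must be run while the server is stopped; prints the number of
# huge pages required for the configured shared memory
sudo -u postgres /usr/lib/postgresql/18/bin/postgres \
    -D /var/lib/postgresql/18/main \
    -C shared_memory_size_in_huge_pages

# Make the kernel setting persistent across reboots
echo "vm.nr_hugepages = 100" | sudo tee /etc/sysctl.d/90-hugepages.conf
sudo sysctl -p /etc/sysctl.d/90-hugepages.conf
```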
Now restart the PostgreSQL server and check again; the cluster should come up, and huge_pages_status should report on.
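One way to restart and verify, sketched below; the cluster unit name postgresql@18-main is an assumption based on the Debian-style setup shown above:

```shell
# Restart the cluster
sudo systemctl restart postgresql@18-main

# PostgreSQL side: huge_pages_status should now report "on"
sudo -u postgres psql -c "show huge_pages_status;"

# Kernel side: HugePages_Free should now be below HugePages_Total,
# showing that PostgreSQL has taken some of the preallocated pages
grep -i hugepages /proc/meminfo
```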
Huge pages do not:
- make slow SQL automatically fast
- replace indexes
- replace VACUUM
- change PostgreSQL page size from 8 KB
They only reduce memory-management overhead at the operating system level.
Huge pages are a memory optimization feature that helps PostgreSQL use large shared memory regions more efficiently.
- PostgreSQL data pages are normally 8 KB
- Operating system memory pages are commonly 4 KB
- Huge pages are commonly 2 MB
- PostgreSQL still uses 8 KB pages internally
- Huge pages only change how RAM is managed underneath
- Most useful on large production servers
In the example above, with huge_pages = try and no huge pages configured, PostgreSQL tried to use huge pages, found none, and safely used normal memory pages instead.
Huge pages are not a PostgreSQL storage feature. They are an operating system memory feature that can improve PostgreSQL efficiency when the database server grows large enough to benefit from it.