Why Large OFFSET Queries Are Slow in PostgreSQL and What to Use Instead

If you’ve ever built a pagination feature using PostgreSQL, chances are you’ve reached for the trusty LIMIT ... OFFSET ... pattern. It’s simple, readable, and works perfectly at first. But as your data grows, you might notice something strange: pages toward the end of your dataset start loading painfully slowly.

This slowdown isn’t just bad luck or a slow disk; it’s baked into how PostgreSQL executes queries with large offsets. Under the hood, the database isn’t jumping directly to the starting row you asked for. Instead, it’s walking through every single row before your OFFSET, discarding them, and only then delivering the rows you actually want.

In this post, we’ll take a deep dive into:

  • Why large OFFSETs are slow in PostgreSQL.
  • What happens inside the executor (nodeLimit.c and ExecLimit).
  • What to use instead when you need deep pagination.

How PostgreSQL Handles OFFSET Internally

When you write:

SELECT * FROM orders
ORDER BY id
LIMIT 10 OFFSET 1000000;

PostgreSQL’s executor doesn’t have a “teleport” function that jumps to the millionth row. Instead, the query plan usually looks something like:

Limit
  -> Index Scan on orders

The Limit plan node is implemented in the source file nodeLimit.c. The key function is ExecLimit, which is called repeatedly to fetch rows for the parent node. (If there is no usable index on id, the child is typically a Sort over a Seq Scan, which is even more expensive, but the Limit behavior is the same.)
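You can check the plan shape yourself with EXPLAIN ANALYZE (this sketch assumes an orders table with an index on id). The telling detail is the actual row count on the scan node underneath Limit: it ends up being roughly OFFSET + LIMIT rows, not 10.

EXPLAIN ANALYZE
SELECT * FROM orders
ORDER BY id
LIMIT 10 OFFSET 1000000;
-- In the output, the scan node under Limit reports roughly
-- 1,000,010 actual rows: every skipped tuple was really fetched.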

Here’s what happens step-by-step:

Executor setup:

During execution, ExecLimit receives a stream of tuples (rows) from its child node, which could be a sequential scan, an index scan, or even a join. The child node produces rows starting from the very first qualifying one.

Skipping phase (OFFSET): 

Inside ExecLimit, PostgreSQL maintains counters for:

  • How many rows have been skipped so far.
  • How many rows have been returned.

Before it can return any rows to the client, ExecLimit repeatedly calls ExecProcNode on its child node, discarding each tuple until the number of skipped rows matches the OFFSET value.

This means that with an OFFSET of 1,000,000, the function will:

  • Call the child node one million times.
  • Discard each tuple immediately.
  • Never short-circuit the scan.

Returning phase (LIMIT): 

Once the skip counter reaches the OFFSET, ExecLimit begins returning rows to its caller. It will do this until it has returned the number specified by LIMIT.

No index fast-forward:

Even if there’s an index, PostgreSQL still iterates over the index entries in order to skip them. There’s no built-in “jump to the Nth index entry” operation; B-tree indexes don’t store the row counts that direct offsetting would require, so the executor still walks every entry.
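Contrast that with an equivalent query that uses a range condition on the indexed column (again assuming an index on orders.id, and taking 1000000 as a known boundary value): the B-tree can descend straight to the starting key, so nothing has to be walked and discarded.

EXPLAIN ANALYZE
SELECT * FROM orders
WHERE id > 1000000
ORDER BY id
LIMIT 10;
-- The Index Scan now shows an Index Cond on id and fetches only
-- about 10 rows, because the B-tree seeks to the boundary key
-- instead of stepping over a million entries.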

Costs propagate: 

If the child node is expensive, for example because it performs joins, filtering, or sorting, all of that work is done for the skipped rows too. The fact that they’re discarded at the ExecLimit stage doesn’t reduce the cost of producing them.
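For instance, in a query like the hypothetical one below (table and column names are illustrative), the join and the filter run for every one of the million skipped rows before ExecLimit throws them away:

SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'shipped'
ORDER BY o.id
LIMIT 10 OFFSET 1000000;
-- The join and the WHERE filter are evaluated for the skipped rows
-- as well; only the finished tuples are discarded by the Limit node.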

A Simplified View of ExecLimit

In pseudo-code, the logic inside ExecLimit looks like this:

for (;;) {
    slot = ExecProcNode(outerPlan);
    if (TupIsNull(slot)) {
        // The subplan ran out of tuples before any output could be produced
        node->lstate = LIMIT_EMPTY;
        return NULL;
    }
    if (node->limitOption == LIMIT_OPTION_WITH_TIES &&
        node->position - node->offset == node->count - 1) {
        // Remember the tuple at the LIMIT boundary for later tie comparison
        ExecCopySlot(node->last_slot, slot);
    }
    node->subSlot = slot;
    node->position++;
    if (node->position > node->offset) {
        // We've skipped enough tuples; start returning rows
        break;
    }
}

This explains why OFFSET grows linearly in cost; you’re making one call to the child executor for every skipped row.

The Performance Problem

For small offsets, the cost isn’t noticeable. But as the offset grows, so does the query time, roughly in proportion to the number of skipped rows.

For example, on a 1-million-row table:

  • OFFSET 0 might take under a millisecond.
  • OFFSET 500,000 might take tens or hundreds of milliseconds.
  • OFFSET 1,000,000 could take hundreds of milliseconds or more.
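
The exact figures depend on hardware and caching, but the trend is easy to reproduce with a throwaway table (the table name and row count below are arbitrary):

-- Build a disposable 1-million-row table with an index on id.
CREATE TABLE offset_demo AS
SELECT g AS id, md5(g::text) AS payload
FROM generate_series(1, 1000000) AS g;

CREATE INDEX ON offset_demo (id);
ANALYZE offset_demo;

-- Compare the execution times reported as OFFSET grows.
EXPLAIN ANALYZE SELECT * FROM offset_demo ORDER BY id LIMIT 10 OFFSET 0;
EXPLAIN ANALYZE SELECT * FROM offset_demo ORDER BY id LIMIT 10 OFFSET 500000;
EXPLAIN ANALYZE SELECT * FROM offset_demo ORDER BY id LIMIT 10 OFFSET 1000000;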

In complex queries, the slowdown can be dramatic enough to impact your entire application.

What to Use Instead

Large OFFSET values in PostgreSQL are expensive because the database must produce and then throw away every skipped row. If your application regularly needs deep pagination, switch to keyset pagination, which uses an indexed WHERE clause to continue from the last row of the previous page. You’ll keep queries fast, reduce server load, and improve scalability.
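Here is a minimal keyset pagination sketch, assuming pages are ordered by id and the application remembers the last id it showed (the value 12345 below is just a placeholder for that remembered key):

-- First page: no cursor yet.
SELECT *
FROM orders
ORDER BY id
LIMIT 10;

-- Every following page: seek past the last id of the previous page.
-- An index on id positions the scan directly at the first row of the
-- page, so page 100,000 costs about the same as page 2.
SELECT *
FROM orders
WHERE id > 12345
ORDER BY id
LIMIT 10;

The trade-off is that keyset pagination only steps forward (or backward) from a known row; it can’t jump straight to an arbitrary page number, and the sort key must be unique, or made unique by appending a tiebreaker column such as the primary key.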
