We’ve all been there. You spend days optimizing a PostgreSQL query for an Odoo dashboard. You add indexes, you tweak the config, you maybe even sprinkle in some materialized views. You get the query down from 5 seconds to 2 seconds. You feel like a wizard.
Then you run the same query on ClickHouse, and it finishes in 0.005 seconds.
That’s exactly what happened to me recently. I benchmarked 1 million Odoo sales records on both databases. The difference wasn’t just "faster"—it was instant. It made my optimized Postgres query look like it was moving in slow motion.
But this isn't a post about how "ClickHouse is better." (It’s terrible at things Postgres is great at). This is a post about how it pulls off that speed. It’s not magic; it’s just a completely different way of thinking about data.
1. The Library Analogy (Columnar Storage)
Imagine you walk into a library and want to know the average publication year of all 10,000 books on the shelves.
- PostgreSQL (Row-Store): You have to pull every single book off the shelf. You open it, ignore the Title, ignore the Author, find the Year, write it down, and put the book back. You do this 10,000 times. It’s exhausting.
- ClickHouse (Column-Store): Imagine if the library tore the "Year" page out of every book and glued them all into one giant binder called "Years." To get the average, you just grab that one binder and scroll down the list. You don't even touch the actual books.
In my benchmark, this meant ClickHouse ignored 90% of the data on the disk because I only asked for the amount_total. It didn't "read faster"—it just read less.
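The library analogy can be sketched in a few lines of Python. This is a toy model of the two layouts, not real Postgres or ClickHouse internals; the field names (like `amount_total`) just mirror the Odoo sales data from the benchmark.

```python
# Row-store: one record per sale; every field travels together.
rows = [
    {"id": 1, "partner": "Acme",    "date": "2024-01-05", "amount_total": 120.0},
    {"id": 2, "partner": "Globex",  "date": "2024-01-06", "amount_total": 80.0},
    {"id": 3, "partner": "Initech", "date": "2024-01-07", "amount_total": 400.0},
]

# Column-store: one array per field; the "Years binder" from the analogy.
columns = {
    "id":           [1, 2, 3],
    "partner":      ["Acme", "Globex", "Initech"],
    "date":         ["2024-01-05", "2024-01-06", "2024-01-07"],
    "amount_total": [120.0, 80.0, 400.0],
}

# Row-store average: we must touch every record, dragging along
# fields we never asked for.
row_avg = sum(r["amount_total"] for r in rows) / len(rows)

# Column-store average: grab the one array and never look at the rest.
col = columns["amount_total"]
col_avg = sum(col) / len(col)

assert row_avg == col_avg == 200.0
```

Same answer either way; the difference is that the columnar version only ever reads one of the four "files" on disk.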
2. The Art of Laziness (Granules & Sparse Indexing)
Postgres is a perfectionist. If you ask for records where amount > $2,000, it walks its B-tree index to find the exact location of every single match, then fetches each matching row from the table.
ClickHouse is lazy, in a good way. It splits data into "granules" (logical blocks of 8,192 rows by default) and keeps a tiny summary for each, like a sticky note on a bucket that reads: "Low: $100, High: $500."
When I ran my query for amounts over $2,000, ClickHouse looked at that sticky note and said, "Nope, nothing in this bucket for you," and skipped it entirely. It didn't even look inside. It only opened the buckets that might have the data.
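The sticky-note trick is easy to model. Below is a toy version of min/max granule pruning: the granule size is shrunk to 4 rows so the example fits on screen (the real default is 8,192), the $2,000 threshold comes from the post, and the data is made up.

```python
GRANULE_SIZE = 4  # toy size; ClickHouse defaults to 8,192 rows per granule

amounts = [100, 250, 400, 500,      # granule 0: max $500  -> skip entirely
           1800, 1900, 2100, 2500,  # granule 1: max $2500 -> must scan
           3000, 3200, 4000, 4100]  # granule 2: min $3000 -> must scan

granules = [amounts[i:i + GRANULE_SIZE]
            for i in range(0, len(amounts), GRANULE_SIZE)]

# The "sticky notes": one (min, max) pair per granule, tiny and cheap to read.
notes = [(min(g), max(g)) for g in granules]

threshold = 2000
matches, scanned = [], 0
for (lo, hi), granule in zip(notes, granules):
    if hi <= threshold:   # "Nope, nothing in this bucket for you."
        continue          # skipped without reading a single row
    scanned += 1
    matches.extend(v for v in granule if v > threshold)

print(scanned, "of", len(granules), "granules scanned")  # -> 2 of 3 granules scanned
print(matches)  # -> [2100, 2500, 3000, 3200, 4000, 4100]
```

One whole granule was eliminated by reading two numbers off its sticky note instead of 8,192 rows off disk, and that's the entire point.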
3. The Assembly Line (Vectorized Execution)
This is where the CPU magic happens.
Postgres processes data like an artisan: one item at a time.
"Okay, take Row 1. Add it to the total. Done. Now take Row 2. Add it. Done."
ClickHouse processes data like a factory assembly line. It uses SIMD (Single Instruction, Multiple Data): it grabs a block of thousands of numbers and yells at the CPU:
"ADD THESE ALL UP AT ONCE!"
The CPU loves this. Fewer function calls, fewer interruptions, better use of its caches, and drastically faster throughput.
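Here is the artisan versus the assembly line as a sketch. Real ClickHouse does this in C++ with SIMD instructions; in this Python stand-in, a single C-implemented `sum()` call over a whole block plays the role of "one instruction, many values". The block size of 65,536 is illustrative; ClickHouse processes blocks of tens of thousands of rows at a time.

```python
amounts = list(range(1_000_000))

def sum_row_at_a_time(values):
    """Artisan model: fetch one row, apply the operator, repeat."""
    total = 0
    for v in values:   # one "call" per row, with all the per-row overhead
        total += v
    return total

BLOCK = 65_536  # illustrative block size

def sum_block_at_a_time(values):
    """Assembly-line model: one tight operation over a whole block."""
    total = 0
    for i in range(0, len(values), BLOCK):
        total += sum(values[i:i + BLOCK])  # one call covers a whole block
    return total

# Both models compute the same answer; the second does it with
# ~15 operator invocations instead of 1,000,000.
assert sum_row_at_a_time(amounts) == sum_block_at_a_time(amounts) == 499_999_500_000
```

If you time the two functions, the block version wins comfortably even in Python, and the gap only widens when the per-block work is real vectorized machine code.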
4. The Squeeze (Compression)
Because ClickHouse stores data in columns (e.g., a whole file of just "Dates"), it can compress that data tightly. A list of dates looks very similar, so the compression algorithms (LZ4/ZSTD) can squash it down to almost nothing.
In my test, the ClickHouse table was 10x smaller than the Postgres table. That’s not just saving disk space—that’s speed. Reading 10MB from the disk is always faster than reading 100MB, no matter how fast your SSD is.
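You can see why a date column squashes so well with nothing but the standard library. Here `zlib` stands in for LZ4/ZSTD, and the ratio is illustrative, not a benchmark: a column file holding a month of near-identical date strings collapses to a tiny fraction of its raw size.

```python
import zlib

# One month of dates, 1,000 sales per day, laid out as a columnar
# file would store them: the same few strings over and over.
dates = "".join(f"2024-01-{d:02d}" for d in range(1, 31) for _ in range(1000))

compressed = zlib.compress(dates.encode())
ratio = len(dates) / len(compressed)
print(f"{len(dates):,} bytes -> {len(compressed):,} bytes ({ratio:.0f}x smaller)")
```

A row-store can't get ratios like this, because every date sits next to an ID, a partner name, and an amount, and that mixed neighborhood gives the compressor far less repetition to exploit.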
After seeing this, I realized we often ask PostgreSQL to do things it hates. We ask it to scan millions of rows for a monthly report, forcing it to work against its own architecture.
The "fix" for slow Odoo reports isn't always another Postgres index. Sometimes, the fix is realizing you're using a hammer to drive a screw.
PostgreSQL remains the right tool for transactional work: the writes, updates, and constraints that keep an Odoo instance running. ClickHouse is built, from the storage format up, for analytical speed.
When you pair them together? That’s when you stop optimizing queries and start building real features.