When working with Odoo 19, performance bottlenecks rarely come from “big features”—they come from small, repeated inefficiencies. One of the most common culprits is the N+1 query problem in the ORM. It’s subtle, easy to introduce, and can quietly degrade performance as your dataset grows.
Let’s break down what it is, why it happens in Odoo, and how to design your data model and code to avoid it.
Understanding the N+1 Query Problem
The N+1 problem occurs when:
- You execute 1 query to fetch records, and then
- Execute N additional queries—one per record—to fetch related data
Example (Problematic Code)
orders = self.env['sale.order'].search([])
for order in orders:
    print(order.partner_id.name)
On the surface, this looks perfectly fine. And in simple cases, Odoo's ORM actually handles it better than you might expect—but it's still a pattern you need to understand.
What Odoo Actually Does Under the Hood
Odoo's ORM includes a built-in prefetch cache. When you access partner_id on the first iteration, Odoo batch-fetches that relation for all records in the recordset at once. So even if you have 1,000 orders — which would naively mean 1,000 individual partner queries plus the original fetch — Odoo reduces this to roughly 2 queries total through batching.
However, this prefetch cache can break down in several common situations:
- Iterating over filtered sub-recordsets — orders.filtered(...) creates a smaller recordset that loses its prefetch context
- Re-browsing records inside a loop — calling self.env['res.partner'].browse(id) per iteration bypasses the cache entirely
- Chaining deeply nested relations — accessing order.partner_id.country_id.name across multiple levels can still trigger multiple round trips
- Using sudo() or with_context() inside a loop — these create a new ORM environment, busting the prefetch cache
This is when N+1 becomes a real production problem.
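To make the difference concrete, here is a framework-free sketch (no Odoo required, all data hypothetical) of a toy "ORM" that counts queries. It shows why batching the related fetch turns N+1 round trips into a constant number, which is exactly what Odoo's prefetch cache does when it is intact:

```python
# Toy data: partners and orders, with order_id -> partner_id links.
PARTNERS = {1: "Azure Interior", 2: "Deco Addict", 3: "Gemini Furniture"}
ORDERS = {100: 1, 101: 2, 102: 3}

query_count = 0

def fetch_partner(pid):
    """One query per call: the N+1 pattern."""
    global query_count
    query_count += 1
    return PARTNERS[pid]

def fetch_partners_batch(pids):
    """One query for the whole batch: what the prefetch cache does."""
    global query_count
    query_count += 1
    return {pid: PARTNERS[pid] for pid in pids}

# N+1: one lookup per order.
query_count = 0
names_naive = [fetch_partner(pid) for pid in ORDERS.values()]
naive_queries = query_count  # 3 queries for 3 orders

# Batched: a single lookup covers every order.
query_count = 0
cache = fetch_partners_batch(set(ORDERS.values()))
names_batched = [cache[pid] for pid in ORDERS.values()]
batched_queries = query_count  # 1 query regardless of order count

print(naive_queries, batched_queries)
```

The per-record version scales linearly with the number of orders; the batched version stays at one query no matter how large the recordset grows.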
Why This Happens in Odoo ORM
Odoo’s ORM uses lazy loading:
- Related fields (many2one, one2many, etc.) are fetched on demand
- If the prefetch cache is disrupted, each access can trigger a separate query
- Prefetching only works efficiently when fields are accessed in a batch-friendly way
Strategy 1: Leverage Prefetching Properly
Use mapped() — it's the idiomatic, cache-safe approach that guarantees batch fetching regardless of context.
Better Approach
orders = self.env['sale.order'].search([])
partner_names = orders.mapped('partner_id.name')
Why this works:
- mapped() triggers batch fetching
- Odoo fetches all related partners in one query
Strategy 2: Use read() for Bulk Data Access
If you only need raw data (not full recordsets), read() is faster and avoids ORM overhead.
orders_data = self.env['sale.order'].search([]).read(['partner_id'])
for data in orders_data:
    print(data['partner_id'])
This avoids instantiating full recordsets and skips per-field access overhead, making it ideal for read-only operations where you don't need recordset methods.
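One thing to know when consuming read() results: many2one fields come back as (id, display_name) tuples, and an empty many2one reads as False. A small sketch over a hand-written sample (hypothetical data, mirroring that shape) shows how to unpack them safely:

```python
# Hypothetical sample mirroring what read(['partner_id']) returns:
# a list of dicts where each many2one value is an (id, display_name)
# tuple, or False when the field is empty.
orders_data = [
    {'id': 100, 'partner_id': (1, "Azure Interior")},
    {'id': 101, 'partner_id': (2, "Deco Addict")},
    {'id': 102, 'partner_id': False},  # empty many2one reads as False
]

partner_names = [
    rec['partner_id'][1]     # index 1 is the display name
    for rec in orders_data
    if rec['partner_id']     # skip records with no partner set
]
print(partner_names)
```

Guarding against the False case matters in production data, where optional relations are routinely empty.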
Strategy 3: Avoid Loops That Trigger Queries
A common anti-pattern:
for order in orders:
    total = sum(line.price_total for line in order.order_line)
This can trigger one order_line fetch per order whenever the prefetch context has been lost (for example, after filtering or re-browsing).
Optimized Version
orders.mapped('order_line.price_total')
Or better, fetch all lines in a single batch query:
lines = self.env['sale.order.line'].search([
    ('order_id', 'in', orders.ids)
])
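Once all the lines are in memory from that single query, grouping them per order is a pure-Python pass with no further round trips. Since a real recordset needs a running Odoo server, this sketch stands in hypothetical (order_id, price_total) pairs for the fetched lines:

```python
from collections import defaultdict

# Hypothetical (order_id, price_total) pairs standing in for the
# sale.order.line records fetched by the single search() above.
lines = [(100, 50.0), (100, 25.0), (101, 10.0)]

# One pass groups every line under its order: no per-order queries.
totals = defaultdict(float)
for order_id, price_total in lines:
    totals[order_id] += price_total

print(dict(totals))
```

The query count stays at one regardless of how many orders are involved; only the in-memory grouping grows with the data.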
Strategy 4: Use read_group for Aggregations
Instead of looping and summing in Python:
for order in orders:
    total = sum(line.price_total for line in order.order_line)
Push the computation to PostgreSQL using read_group():
data = self.env['sale.order.line'].read_group(
    [('order_id', 'in', orders.ids)],
    ['order_id', 'price_total:sum'],
    ['order_id']
)
This is far more efficient — the database does the heavy lifting instead of Python.
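Consuming the result is straightforward once you know its shape: read_group returns a list of dicts, with the groupby many2one as an (id, display_name) tuple and the sum stored under the aggregated field's name. A sketch over hypothetical rows mirroring that shape:

```python
# Hypothetical rows mirroring the shape read_group() returns: the
# groupby many2one as an (id, display_name) tuple, the aggregate
# under the field's name, plus a per-group count.
data = [
    {'order_id': (100, 'SO100'), 'price_total': 75.0, 'order_id_count': 2},
    {'order_id': (101, 'SO101'), 'price_total': 10.0, 'order_id_count': 1},
]

# One dict lookup per order instead of one query per order.
totals = {row['order_id'][0]: row['price_total'] for row in data}
print(totals)
```

Downstream code can then look up each order's total in O(1) without touching the database again.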
Strategy 5: Use store=True on Computed Fields When Appropriate
Computed fields without store=True are recalculated on every access and can trigger N+1 issues.
Problem
total_amount = fields.Float(compute='_compute_total')
If not stored:
- Recomputed per record
- Can trigger multiple queries
Solution
total_amount = fields.Float(
    compute='_compute_total',
    store=True
)
With store=True, the value is computed once, persisted in the database, and only recomputed when the fields listed in its @api.depends decorator change. Use this for fields that don't need to be dynamically recalculated on every read.
Strategy 6: Batch Your Business Logic
Bad pattern:
for record in records:
    record._compute_something()
Better:
def _compute_something(self):
    for record in self:
        ...
Even better:
- Use set-based logic
- Minimize per-record queries
Strategy 7: Control Prefetch with with_context
In some cases, you want to limit unnecessary prefetching:
self.with_context(prefetch_fields=False)
Use carefully:
- Helps when dealing with very large datasets
- But it can increase query count if misused
Strategy 8: Drop to Raw SQL When the ORM Isn't Enough
For heavy reporting or large datasets, ORM abstraction can become a bottleneck. In those cases, raw SQL is the right tool:
self.env.cr.execute("""
    SELECT order_id, SUM(price_total)
    FROM sale_order_line
    WHERE order_id = ANY(%s)
    GROUP BY order_id
""", [orders.ids])
results = self.env.cr.fetchall()
Use SQL when:
- You need performance over abstraction.
- You’re handling large aggregations ORM can't express efficiently.
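fetchall() hands back plain tuples in SELECT-column order, so for the aggregation above each row is (order_id, sum). A small sketch over hypothetical rows shows the usual next step of turning them into a lookup table:

```python
# Hypothetical rows mirroring cr.fetchall() for the SQL above:
# plain tuples in SELECT-column order, i.e. (order_id, sum).
results = [(100, 75.0), (101, 10.0)]

totals = dict(results)       # order_id -> summed price_total
print(totals.get(100))       # total for a known order
print(totals.get(999, 0.0))  # default for orders with no lines
```

Note that orders with no lines simply don't appear in a GROUP BY result, so a defaulted lookup avoids KeyError surprises.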
Debugging N+1 Issues
Enable SQL logging to spot the problem:
--log-level=debug_sql
Watch for:
- Repeated SELECT statements
- Same query executed multiple times in loops
You can also use:
- Odoo profiler tools
- PostgreSQL EXPLAIN ANALYZE to inspect query execution plans
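The N+1 signature in a debug_sql log is the same SELECT shape repeated with different literal ids. A quick triage sketch (the log lines here are hypothetical) normalizes the literals away and counts the shapes:

```python
import re
from collections import Counter

# Hypothetical excerpt of a debug_sql log: the same SELECT repeated
# with different ids is the classic N+1 signature.
log_lines = [
    'SELECT "res_partner"."name" FROM "res_partner" WHERE id = 7',
    'SELECT "res_partner"."name" FROM "res_partner" WHERE id = 12',
    'SELECT "res_partner"."name" FROM "res_partner" WHERE id = 31',
    'SELECT "sale_order".* FROM "sale_order"',
]

def normalize(sql):
    """Replace literal numbers so identical query shapes collapse."""
    return re.sub(r'\d+', '?', sql)

shapes = Counter(normalize(line) for line in log_lines)
suspects = {shape: n for shape, n in shapes.items() if n > 1}
print(suspects)
```

Any shape with a high count inside a single request is a candidate for one of the batching strategies above.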
The N+1 query problem is one of those issues that hides in plain sight. Your code looks clean, your tests pass, and everything works fine — until your dataset grows and suddenly your application crawls. By understanding how Odoo's ORM prefetch cache works, and more importantly, when it breaks down, you can write code that stays fast at any scale.
The strategies covered in this post aren't workarounds or hacks. They are the correct way to work with Odoo's ORM: thinking in recordsets instead of loops, pushing computation to PostgreSQL where it belongs, and being deliberate about how and when related data is fetched. Apply these patterns consistently, and your Odoo modules will remain performant, maintainable, and production-ready no matter how large your data grows.
To go deeper, see our blog How to Improve Data Model Performance for Large Datasets in Odoo 19.