By: Najiya Rafi

Overview of Data Model Optimization in Odoo 19 (Avoiding ORM N+1 Queries)

Technical Odoo 19 Odoo Enterprise Odoo Community

When working with Odoo 19, performance bottlenecks rarely come from “big features”—they come from small, repeated inefficiencies. One of the most common culprits is the N+1 query problem in the ORM. It’s subtle, easy to introduce, and can quietly degrade performance as your dataset grows.

Let’s break down what it is, why it happens in Odoo, and how to design your data model and code to avoid it.

Understanding the N+1 Query Problem

The N+1 problem occurs when:

  • You execute 1 query to fetch records, and then
  • Execute N additional queries—one per record—to fetch related data

Example (Problematic Code)

orders = self.env['sale.order'].search([])
for order in orders:
  print(order.partner_id.name)

On the surface, this looks perfectly fine. And in simple cases, Odoo's ORM actually handles it better than you might expect—but it's still a pattern you need to understand.

What Odoo Actually Does Under the Hood

Odoo's ORM includes a built-in prefetch cache. When you access partner_id on the first iteration, Odoo batch-fetches that relation for all records in the recordset at once. So even if you have 1,000 orders — which would naively mean 1,000 individual partner queries plus the original fetch — Odoo reduces this to roughly 2 queries total through batching.

However, this prefetch cache can break down in several common situations:

  • Iterating over filtered sub-recordsets — orders.filtered(...) creates a smaller recordset that loses its prefetch context
  • Re-browsing records inside a loop — calling self.env['res.partner'].browse(id) per iteration bypasses the cache entirely
  • Chaining deeply nested relations — accessing order.partner_id.country_id.name across multiple levels can still trigger multiple round trips
  • Using sudo() or with_context() inside a loop — these create a new ORM environment, busting the prefetch cache

This is when N+1 becomes a real production problem.
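To see why batching matters, here is a toy plain-Python simulation (not Odoo code, just an illustration) that counts queries under the two access patterns:

```python
# Toy sketch of how a prefetch cache collapses N per-record lookups
# into a single batched query. FakeDB stands in for PostgreSQL.

class FakeDB:
    """Counts queries so the two access patterns can be compared."""
    def __init__(self, partners):
        self.partners = partners
        self.query_count = 0

    def fetch_one(self, pid):
        self.query_count += 1          # one query per record: the N+1 pattern
        return self.partners[pid]

    def fetch_many(self, pids):
        self.query_count += 1          # one batched query: the prefetch pattern
        return {p: self.partners[p] for p in pids}

db = FakeDB({i: f"Partner {i}" for i in range(1000)})

# N+1 style: one query per order
names_naive = [db.fetch_one(i) for i in range(1000)]
naive_queries = db.query_count       # 1000 queries

db.query_count = 0
# Prefetch style: batch everything on the first access
cache = db.fetch_many(range(1000))
names_batched = [cache[i] for i in range(1000)]
batched_queries = db.query_count     # 1 query
```

The data is identical in both cases; only the number of round trips changes, which is exactly the difference the prefetch cache makes.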

Why This Happens in Odoo ORM

Odoo’s ORM uses lazy loading:

  • Related fields (many2one, one2many, etc.) are fetched on demand
  • If the prefetch cache is disrupted, each access can trigger a separate query
  • Prefetching only works efficiently when fields are accessed in a batch-friendly way

Strategy 1: Leverage Prefetching Properly

Use mapped() — it's the idiomatic, cache-safe approach that guarantees batch fetching regardless of context.

Better Approach

orders = self.env['sale.order'].search([])
partner_names = orders.mapped('partner_id.name')

Why this works:

  • mapped() triggers batch fetching
  • Odoo fetches all related partners in one query

Strategy 2: Use read() for Bulk Data Access

If you only need raw data (not full recordsets), read() is faster and avoids ORM overhead.

orders_data = self.env['sale.order'].search([]).read(['partner_id'])
for data in orders_data:
  print(data['partner_id'])

This reduces ORM overhead: read() returns plain Python dictionaries instead of building full recordsets, making it ideal for read-only operations where you don't need recordset methods.
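Note that read() returns many2one fields as an (id, display_name) tuple rather than a recordset, and False when the field is unset. A minimal sketch, using made-up sample data shaped like the result of orders.read(['partner_id']), shows how to unpack it:

```python
# Illustrative sample rows mirroring the shape of read() output:
# many2one fields come back as (id, display_name) tuples, or False if empty.
orders_data = [
    {'id': 1, 'partner_id': (7, 'Azure Interior')},
    {'id': 2, 'partner_id': (9, 'Deco Addict')},
]

# Unpack the display name from each tuple, guarding against unset values.
partner_names = [
    row['partner_id'][1] if row['partner_id'] else None
    for row in orders_data
]
```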

Strategy 3: Avoid Loops That Trigger Queries

A common anti-pattern:

for order in orders:
  total = sum(line.price_total for line in order.order_line)

This can trigger:

  • One query per order_line fetch

Optimized Version

total = sum(orders.mapped('order_line.price_total'))

This fetches every line's total in one batched pass. Keep in mind it produces a grand total across all orders; for per-order totals, use the aggregation approach in Strategy 4.

Or better, fetch all lines in a single batch query:

lines = self.env['sale.order.line'].search([
  ('order_id', 'in', orders.ids)
])
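Once the lines are fetched in a single query, they can be grouped per order in plain Python. A minimal sketch, using illustrative sample rows of (order_id, price_total) standing in for the fetched line data:

```python
from collections import defaultdict

# Sample rows standing in for the fields of the lines fetched above:
# each tuple is (order_id, price_total).
line_rows = [
    (1, 100.0), (1, 50.0),
    (2, 200.0),
    (3, 25.0), (3, 75.0),
]

# Group the line totals under their order in one pass.
lines_by_order = defaultdict(list)
for order_id, price_total in line_rows:
    lines_by_order[order_id].append(price_total)

# Per-order totals without any extra queries.
totals = {order_id: sum(prices) for order_id, prices in lines_by_order.items()}
```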

Strategy 4: Use read_group for Aggregations

Instead of looping and summing in Python:

for order in orders:
  total = sum(line.price_total for line in order.order_line)

Push the computation to PostgreSQL using read_group():

data = self.env['sale.order.line'].read_group(
  [('order_id', 'in', orders.ids)],
  ['order_id', 'price_total:sum'],
  ['order_id']
)

This is far more efficient — the database does the heavy lifting instead of Python.
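The grouped result can then be turned into an {order_id: total} mapping. A sketch using illustrative sample data shaped like read_group() output (grouped many2one fields come back as (id, display_name) tuples; real results also carry extra keys such as __domain):

```python
# Abridged sample mirroring the shape of read_group() results.
data = [
    {'order_id': (1, 'S00001'), 'price_total': 150.0, 'order_id_count': 2},
    {'order_id': (2, 'S00002'), 'price_total': 200.0, 'order_id_count': 1},
]

# Map each order's database id to its aggregated total.
totals = {row['order_id'][0]: row['price_total'] for row in data}
```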

Strategy 5: Use store=True on Computed Fields When Appropriate

Computed fields without store=True are recalculated on every access and can trigger N+1 issues.

Problem

total_amount = fields.Float(compute='_compute_total')

If not stored:

  • The value is recomputed every time the field is read
  • Each recomputation can trigger extra queries

Solution

total_amount = fields.Float(
  compute='_compute_total',
  store=True
)

@api.depends('order_line.price_total')
def _compute_total(self):
  for order in self:
      order.total_amount = sum(order.order_line.mapped('price_total'))

With store=True, the value is computed once and stored in the database — no repeated computation on every read. Note that a stored computed field must declare its dependencies with @api.depends, otherwise the stored value is never refreshed when the underlying data changes. Use this for fields that don't need to be dynamically recalculated every time.

Strategy 6: Batch Your Business Logic

Bad pattern:

for record in records:
  record._compute_something()

Better:

def _compute_something(self):
  for record in self:
      ...

Even better:

  • Use set-based logic: a single write() on the whole recordset issues one UPDATE, while a per-record loop issues one per record
  • Minimize per-record queries by reading and writing in batches

Strategy 7: Control Prefetch with with_context

In some cases, you want to limit unnecessary prefetching:

self.with_context(prefetch_fields=False)

Use carefully:

  • Helps when dealing with very large datasets
  • But it can increase query count if misused

Strategy 8: Drop to Raw SQL When the ORM Isn't Enough

For heavy reporting or large datasets, ORM abstraction can become a bottleneck. In those cases, raw SQL is the right tool:

self.env.cr.execute("""
  SELECT order_id, SUM(price_total)
  FROM sale_order_line
  WHERE order_id = ANY(%s)
  GROUP BY order_id
""", [orders.ids])
results = self.env.cr.fetchall()
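fetchall() returns plain tuples, so a one-liner turns the result into a usable mapping. The sample rows below stand in for the cursor output of the query above:

```python
# Sample rows standing in for cr.fetchall() on the aggregation query:
# each tuple is (order_id, summed price_total).
results = [(1, 150.0), (2, 200.0)]

# dict() on (key, value) tuples gives an {order_id: total} mapping.
totals_by_order = dict(results)
```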

Use SQL when:

  • You need performance over abstraction.
  • You’re handling large aggregations ORM can't express efficiently.

Debugging N+1 Issues

Enable SQL logging to spot the problem, either globally:

odoo-bin --log-level=debug_sql

or selectively, for the SQL logger only:

odoo-bin --log-handlers=odoo.sql_db:DEBUG

Watch for:

  • Repeated SELECT statements
  • Same query executed multiple times in loops

You can also use:

  • Odoo profiler tools
  • PostgreSQL EXPLAIN ANALYZE to inspect query execution plans

The N+1 query problem is one of those issues that hides in plain sight. Your code looks clean, your tests pass, and everything works fine — until your dataset grows and suddenly your application crawls. By understanding how Odoo's ORM prefetch cache works, and more importantly, when it breaks down, you can write code that stays fast at any scale.

The strategies covered in this post aren't workarounds or hacks. They are the correct way to work with Odoo's ORM: thinking in recordsets instead of loops, pushing computation to PostgreSQL where it belongs, and being deliberate about how and when related data is fetched. Apply these patterns consistently, and your Odoo modules will remain performant, maintainable, and production-ready, no matter how large your data grows.

To read more, refer to our blog How to Improve Data Model Performance for Large Datasets in Odoo 19.


Frequently Asked Questions

Does the N+1 problem always cause performance issues in Odoo?

Not always. Odoo's built-in prefetch cache handles the problem automatically in simple cases, keeping query counts low even when you loop over recordsets. The real danger appears when the prefetch cache is disrupted — for example, when using sudo(), with_context(), or filtered() inside a loop. This is when N+1 quietly turns into a production bottleneck.

Is mapped() always better than a for loop?

For accessing related fields and traversing relations, yes — mapped() is almost always the better choice. It operates on the full recordset in a batch-friendly way and is less likely to disrupt the prefetch cache. However, for complex business logic that can't be expressed as a simple field traversal, a well-structured for loop inside a method that accepts a recordset (self) is still acceptable.

When should I use read() instead of mapped()?

Use read() when you only need raw field values and don't require any recordset methods or ORM behavior. It's faster because it skips ORM overhead and returns plain Python dictionaries. If you need to call methods, apply filters, or work with the records further, stick with mapped() and standard recordset operations.

Is it always safe to use store=True on computed fields?

Not always. store=True is a great optimization for fields that are read frequently but don't change often. However, if the computed field depends on many other fields that change frequently, storing it can cause excessive recomputation and trigger more write operations than it saves. Always evaluate the read-to-write ratio of the field before deciding to store it.

When is it appropriate to skip the ORM and write raw SQL in Odoo?

Raw SQL is appropriate when you are dealing with large aggregations, complex reporting queries, or performance-critical operations where the ORM abstraction adds unnecessary overhead. It should be a deliberate choice, not a default. Always ensure your SQL is parameterized to prevent injection vulnerabilities, and consider whether read_group() or other ORM tools can achieve the same result before dropping down to raw queries.
