Semantic Layer · April 23, 2026 · 8 min read

Omni's $120M Series C Is Not a BI Story. It Is a Semantic Layer Story.

By Wesley Nitikromo

Field notes — April 23, 2026

This morning, Omni announced a $120 million Series C at a $1.5 billion valuation, led by ICONIQ, with participation from Theory Ventures, First Round Capital, Redpoint Ventures, and GV. Omni's ARR grew 4x over the last year. The company turned profitable last month. Revenue has already tripled in 2026.

The headline is the number. But the number is not the story.

Read the announcement carefully — specifically the section Colin Zima titles "The context layer is the hard part" — and you will find one of the clearest public statements any well-funded company has made about where intelligence actually needs to live in the modern data stack. Not in the dashboards. Not in the AI agents. In the layer that translates raw data into governed, reusable business meaning.

The semantic layer.

What ICONIQ just bet $120 million on is a semantic layer moat thesis: that the most defensible position in enterprise AI is not the model layer, not the interface layer, but the governed context layer underneath everything. This matters beyond Omni. It is a market signal about which layer of the Intelligence Allocation Stack is becoming the actual battleground for enterprise AI — and most data teams are still underinvesting in it.

What Omni Actually Said

Zima's announcement does not read like a traditional Series C press release. There is no slide-deck language about "democratizing data" or "making insights accessible." Instead, there is a disarmingly honest diagnosis of why AI analytics keeps breaking.

The argument goes like this: natural language is the best interface we have ever had for querying data. It is faster than clicking through fields, more flexible than pre-built reports, more accessible than SQL. But it only works at scale if the system underneath it understands what you actually mean — not just what the database says.

"Every company has its own idiosyncratic definition of 'revenue,' 'customer,' and even 'last quarter,'" Zima writes. "The logic behind these definitions is typically tribal knowledge in people's heads and scattered documentation. Without this specific context, an LLM will confidently guess."

This is not a new problem. Data teams have been fighting this war for a decade. What is new is that the cost of getting it wrong has scaled dramatically. When a dashboard showed the wrong number, a human caught it eventually. When an AI agent gives a confident, wrong answer to a thousand employees simultaneously, the blast radius is entirely different.

Omni's response was architectural from the start. They built around a semantic layer because, as Zima put it, "this had to be architectural." One place to define metrics, encode business logic, and enforce permissions across every query. What they built for reliable business intelligence turned out to be exactly what AI agents need to operate with trust.
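To make that architectural idea concrete, here is a minimal sketch of what "one place to define metrics, encode business logic, and enforce permissions" can look like. Everything in it — the `GOVERNED_METRICS` registry, the `compile_query` function, the role names — is my own illustrative assumption, not Omni's actual implementation.

```python
# A toy semantic-layer registry: business terms resolve to one governed
# SQL definition, and every query path enforces the same permissions.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Metric:
    name: str                 # business term, e.g. "revenue"
    sql: str                  # the single governed definition
    allowed_roles: frozenset = field(default_factory=frozenset)


# One place to define metrics and encode business logic.
GOVERNED_METRICS = {
    "revenue": Metric(
        name="revenue",
        sql="SUM(amount) FILTER (WHERE status = 'closed_won')",
        allowed_roles=frozenset({"finance", "exec"}),
    ),
    "customer_count": Metric(
        name="customer_count",
        sql="COUNT(DISTINCT account_id)",
        allowed_roles=frozenset({"finance", "exec", "sales"}),
    ),
}


def compile_query(metric_name: str, role: str, table: str = "orders") -> str:
    """Resolve a business term to governed SQL, enforcing permissions.

    Every consumer -- dashboard, notebook, or AI agent -- goes through
    this single path, so all of them inherit the same definition.
    """
    metric = GOVERNED_METRICS.get(metric_name)
    if metric is None:
        raise KeyError(f"'{metric_name}' is not a governed metric")
    if role not in metric.allowed_roles:
        raise PermissionError(f"role '{role}' may not query '{metric_name}'")
    return f"SELECT {metric.sql} AS {metric.name} FROM {table}"


# A dashboard and an AI agent asking for "revenue" get identical SQL:
print(compile_query("revenue", role="finance"))
```

The point of the sketch is the shape, not the code: when the definition and the permission check live in one governed object, an LLM that "confidently guesses" has nothing to guess about — it can only ask for a metric by name and receive the organization's answer.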

That is the thesis they just raised $120 million on.

Where This Lives in the Intelligence Allocation Stack

I have written before about the Intelligence Allocation Stack as a framework for understanding where companies should be directing their attention and budget. Four layers, built from the bottom up:

  1. Data Foundation — governance, quality, pipelines, source of truth
  2. Semantic Layer — governed business logic, shared definitions, metric consistency
  3. Orchestration Layer — data movement, automation, real-time flows
  4. AI Layer — agents, conversational AI, autonomous systems

Most companies are trying to build their AI strategy starting at Layer 4. They are deploying agents on top of data infrastructure that was never designed to support them. The agents hallucinate. The answers contradict each other. The AI project gets quietly shelved.

Omni's entire product thesis is a bet on Layer 2. And ICONIQ just confirmed that bet is worth $1.5 billion.

What is notable is that Omni did not change their mind about the semantic layer when AI arrived. They built around it in 2022, before ChatGPT launched. The AI wave did not require them to pivot — it validated the architectural decision they had already made. "What we built for great business intelligence is exactly what AI needs," Zima writes. That sentence is worth sitting with.

The semantic layer moat is not something you can acquire your way into, or bolt on after the fact. It has to be designed in. Companies that made that architectural decision early — as Omni did, as the data teams quietly building dbt semantic layers and Cube deployments did — are now discovering that what looked like infrastructure investment was actually competitive positioning.

The Competitive Proof That Layer 2 Is the Battleground

Omni is not the only signal pointing at the semantic layer as the decisive layer. Look at what else is happening right now.

OpenAI launched Frontier in February, positioning it explicitly as "a semantic layer for the enterprise that all AI coworkers can reference." The company building the most capable LLMs in the world concluded that raw model power was not enough — you still need the governed translation layer underneath.

Snowflake has Semantic Views. Databricks has Metric Views. Both platforms are racing to bake semantic layer functionality into stacks enterprises are already paying for. The bundling threat to pure-play semantic layer vendors is real — which is why Omni's architectural depth matters. As ICONIQ partner Matt Jacobson noted, legacy players would have to rearchitect their entire products to match what Omni built from the ground up — he compared it to Snowflake's early architectural advantage over Amazon Redshift.

The BI software market sits at roughly $47 billion. The semantic layer sub-segment is projected to grow at 30% annually through 2031. That is not a niche. That is the fastest-growing layer in the entire data stack.

Everyone — from the biggest LLM labs to the major cloud platforms to the best-funded BI startups — is converging on the same conclusion: the semantic layer moat is real, and whoever builds it deepest wins.

What "Context Compounds" Actually Means for Your Data Team

There is a phrase in Zima's announcement that I keep returning to: "In Omni, context compounds."

What he means is that a semantic layer is not a static artifact. It is not a schema you write once and maintain reluctantly. When it is built well, it gets smarter as people interact with it. Every definition added encodes more institutional knowledge. Every query run through it reinforces what the business actually means by its own terms.

This is the opposite of what happens in most organizations right now. Instead of compounding context, they are accumulating confusion. Each new team that joins builds their own dashboard with their own definition of the key metrics. Each new AI tool pointed at the data warehouse gets a slightly different answer to the same question. The gap between what the data says and what the business means grows wider over time, not narrower.

Fixing this is not a tooling problem. It is an architectural decision that has to be made deliberately, early, and at the right layer of the stack.

The semantic layer is not a feature you add to your BI tool. It is Layer 2 of the Intelligence Allocation Stack — the layer that makes every layer above it trustworthy. Dashboards built on it show consistent numbers. AI agents running through it apply correct business logic. The entire system inherits the same vocabulary.

The Practitioner Angle: What to Do With This Signal

When a company raises $120 million on a specific architectural thesis, the signal is not just for investors. It is for every data team deciding where to invest their next quarter.

Here is what this means practically:

If you do not have a semantic layer, you have a context debt problem. Your data team knows this. Your AI agents will surface it loudly. The business logic that should be encoded in a governed, reusable layer is currently living in individual Looker explores, scattered dbt models, someone's spreadsheet, and the head of the analyst who built the original dashboard three years ago. Every new AI use case you try to build will hit this wall.

If you are deciding between platform-native and headless semantic layers, the architecture question matters more than the tool question. Snowflake Semantic Views and Databricks Metric Views will pull organizations toward their respective platforms. The right answer depends on whether your stack is genuinely converged or whether you need portability across tools and clouds. I have written about this comparison in detail — but the key question is not which tool wins, it is whether the business logic lives in one governed place or is fragmented across systems.

If you are already building a semantic layer, this is the moment to treat it as a first-class product. Not a data engineering side project. Not a dbt job that someone maintains between other work. The semantic layer is the product that everything else in your AI strategy runs on. Staff it accordingly. Version it. Govern it. Make it the organizational truth layer that it has to be.
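"Version it. Govern it." can be made operational with something as small as a CI check over your metric definitions. The definition format and field names below are illustrative assumptions, not any vendor's schema — the idea is simply that a metric without an owner, a description, and a version is a side project, not a product.

```python
# Hedged sketch: a CI-style lint that treats the semantic layer as a
# first-class product by rejecting metrics that lack product-grade
# metadata. The schema here is a made-up illustration.
REQUIRED_FIELDS = {"owner", "description", "version", "sql"}

metric_definitions = {
    "revenue": {
        "owner": "finance-data-team",
        "description": "Closed-won order amounts, net of refunds.",
        "version": "2.3.0",
        "sql": "SUM(amount) FILTER (WHERE status = 'closed_won')",
    },
    "active_customers": {
        # Missing owner and version: exactly the "side project" drift
        # this check is meant to catch before it ships.
        "description": "Accounts with an order in the last 90 days.",
        "sql": "COUNT(DISTINCT account_id)",
    },
}


def lint_semantic_layer(definitions: dict) -> list[str]:
    """Return one error per metric that lacks product-grade metadata."""
    errors = []
    for name, spec in definitions.items():
        missing = REQUIRED_FIELDS - spec.keys()
        if missing:
            errors.append(f"{name}: missing {sorted(missing)}")
    return errors


for problem in lint_semantic_layer(metric_definitions):
    print(problem)  # fail the CI build on any output
```

Run in CI, a check like this turns "govern it" from an aspiration into a merge gate: a definition without an accountable owner never reaches the layer that your dashboards and agents depend on.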

The Real Bet ICONIQ Made

Fortune described what Omni builds as "a living rulebook that defines what revenue means, who can see which numbers, and how key metrics should be calculated." That is a precise definition. But I would add one more dimension: it is a rulebook that gets more valuable the longer it runs.

That compounding quality is exactly what makes the semantic layer moat defensible. It is not just that Omni is hard to replace — it is that every week it runs, it encodes more institutional knowledge, and the cost of switching grows. The moat deepens with usage.

ICONIQ looked at the enterprise AI landscape and concluded that the most defensible position is not in the LLM layer, not in the orchestration layer, and not in the agent framework layer. It is in the layer that translates raw enterprise data into governed business context — reliably, at scale, in a way that compounds over time.

That is exactly Layer 2 of the Intelligence Allocation Stack.

The market is not wrong about AI being transformative. But the market has consistently underestimated how much of that transformation depends on getting the layer underneath it right. You do not win on AI. You win on the data architecture that makes AI trustworthy.

Omni's Series C is not a BI funding event. It is evidence that the smartest money in enterprise software has figured out where the semantic layer moat actually is.

The question is whether your data team has.


Allocating Intelligence is a field notes publication exploring where intelligence should live in the modern data stack. If this resonated, the Intelligence Allocation Stack framework is the best place to start.

Wesley Nitikromo

Founder of Unwind Data. Previously co-founded DataBright (acquired 2023). Data architect, analytics engineering specialist, and builder of AI-ready data infrastructure. Based in Amsterdam.