The question every AI project avoids asking

Where should intelligence live in your stack? Not which model to use, but where the logic, the governance, and the reasoning should actually sit. That question determines whether your AI project returns value or quietly fails.

The Problem

88% use AI. 39% see results.

For every dollar companies spend on AI, they should spend six on the data architecture underneath it. Almost none do. That single imbalance explains why most AI projects produce impressive demos and no measurable impact.

This is not a technology problem. It is an allocation problem. Companies are putting intelligence in the wrong layer — deploying agents on top of pipelines only one engineer understands, semantic layers that don't exist, and data foundations held together by institutional knowledge that lives in someone's head.

Every company that fails with AI builds from the top down. Every company that succeeds builds from the bottom up. The order is not negotiable. The Intelligence Allocation Stack is a framework for understanding why.

The Framework

The Intelligence Allocation Stack

01

Data Foundation

Where data enters, gets stored, and becomes governable. Clean ingestion, consistent warehousing, automated quality checks. The test: can three different people in your organisation run the same query and get the same answer? If not, Layer 1 is broken — and everything built on top of it will produce results nobody can trust.
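
As a sketch of what an automated check at this layer can look like (the table, columns, and both revenue definitions below are hypothetical), the "same question, same answer" test reduces to comparing the numbers that competing definitions produce:

```python
import sqlite3

# Two queries that both claim to be "revenue" and quietly drift apart.
# Table and column names are hypothetical placeholders.
FINANCE_REVENUE = "SELECT SUM(amount_eur) FROM orders WHERE status = 'completed'"
MARKETING_REVENUE = "SELECT SUM(amount_eur) FROM orders WHERE status != 'cancelled'"

def revenue_definitions_agree(conn: sqlite3.Connection, tolerance: float = 0.01) -> bool:
    """The Layer 1 test in code: if two queries that both claim to be
    revenue disagree, everything downstream inherits the ambiguity."""
    (finance,) = conn.execute(FINANCE_REVENUE).fetchone()
    (marketing,) = conn.execute(MARKETING_REVENUE).fetchone()
    return abs((finance or 0) - (marketing or 0)) <= tolerance
```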

02

Semantic Layer

Where business logic becomes machine-readable. "Revenue" means one thing in finance and another in marketing. "Active customer" has a different definition in every department. The semantic layer creates a single governed vocabulary that every downstream tool, dashboard, and AI agent can rely on. Without it, your AI doesn't understand your business — it understands your data. Those are not the same thing.
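
A minimal sketch of what a governed vocabulary can look like, using only the standard library; the metric names, SQL expressions, and owners are illustrative placeholders for what a real semantic layer tool would manage:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """One governed definition per business term, shared by every consumer."""
    name: str
    sql: str    # the single blessed expression
    owner: str  # who arbitrates disputes about the definition

# A toy registry: one vocabulary, machine-readable, with named owners.
SEMANTIC_LAYER = {
    "revenue": Metric(
        "revenue",
        "SUM(amount_eur) FILTER (WHERE status = 'completed')",
        "finance",
    ),
    "active_customer": Metric(
        "active_customer",
        "COUNT(DISTINCT customer_id) FILTER (WHERE last_order_date >= DATE('now', '-90 days'))",
        "growth",
    ),
}

def resolve(term: str) -> Metric:
    """Dashboards and AI agents look terms up here instead of guessing."""
    return SEMANTIC_LAYER[term]
```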

03

Orchestration Layer

The nervous system. CRM syncs, reverse ETL, workflow automation, event-driven pipelines, API integrations. This is where data gets connected, transformed, and routed to where it needs to go. Most organisations overspend on Layer 1 (collecting data) and underspend here (making it usable). The result: warehouses full of data that nobody can efficiently activate.
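
Reduced to its smallest honest sketch, the activation half of this layer is a loop that reads rows out of the warehouse and pushes them into an operational tool. The endpoint URL and payload shape below are hypothetical stand-ins for a real CRM API or reverse ETL product:

```python
import json
import urllib.request

CRM_URL = "https://crm.example.com/api/contacts/upsert"  # hypothetical endpoint

def sync_segment_to_crm(rows: list[dict]) -> int:
    """Push warehouse rows to the CRM so the data gets used, not just stored.

    Production orchestration is this loop, hardened: batching, retries,
    idempotency keys, and alerting when the sync and the warehouse disagree.
    """
    synced = 0
    for row in rows:
        req = urllib.request.Request(
            CRM_URL,
            data=json.dumps(row).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            if resp.status == 200:
                synced += 1
    return synced
```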

04

AI Layer

Models, agents, natural language interfaces, and autonomous decision systems. The most visible layer — the one executives get excited about and vendors sell hardest. It is also entirely dependent on the three layers beneath it. An AI agent querying a broken pipeline will confidently return wrong answers. An LLM without a semantic layer will interpret revenue five different ways in the same report. The AI doesn't hallucinate because the model is bad. It hallucinates because the foundation failed.
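
The dependency can be made concrete in a few lines. In this sketch the agent never invents its own definition of a business term: it resolves the term through the hypothetical semantic layer registry sketched under Layer 02, then runs the one governed expression against the warehouse:

```python
def answer_metric_question(term: str, conn) -> str:
    """An agent that leans on the layers below instead of guessing.
    Assumes the resolve() registry from the Layer 02 sketch."""
    metric = resolve(term)                    # Layer 2: one governed definition
    sql = f"SELECT {metric.sql} FROM orders"  # Layer 1: trusted, queryable tables
    (value,) = conn.execute(sql).fetchone()   # Layer 3: data routed to the caller
    return f"{metric.name} = {value} (definition owned by {metric.owner})"
```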

What We Cover

Three conversations the industry isn't having clearly enough

Every episode and article is grounded in verifiable, source-traceable facts — not analyst hype, not vendor narratives. The arguments are architectural. The evidence is empirical.

Layer 02

The Semantic Layer

The semantic layer is the most underdeveloped piece of modern data infrastructure — and the piece AI needs most. Only 5% of data teams have implemented one. We cover the tooling landscape (dbt Semantic Layer, Cube, AtScale, Omni, Snowflake Semantic Views), the emerging Open Semantic Interchange standard, and what it actually means to design business logic that an AI agent can reason over.

Layer 04

The Non-Deterministic Nature of AI

AI systems are probabilistic by design. They can produce different outputs given the same inputs, and they cannot tell you why they were wrong. This is not a bug to fix — it is an architectural constraint to design around. We explore what non-determinism means for data governance, audit trails, model monitoring, and the trust layer that organisations need to build before deploying AI in any decision-critical workflow.
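
One concrete shape the trust layer can take, as a sketch: wrap every model call so that the prompt, the output, and the model version are recorded before anyone acts on the answer. Here call_model is a hypothetical stand-in for whatever client your provider ships:

```python
import hashlib
import json
import time
from typing import Callable

def audited_call(
    prompt: str,
    call_model: Callable[[str], str],  # hypothetical stand-in for a real LLM client
    model_version: str,
    log_path: str = "llm_audit.jsonl",
) -> str:
    """Non-determinism means an answer cannot be replayed, only recorded.

    Every call appends an audit record (timestamp, model version, prompt
    hash, full output) before the output reaches a downstream workflow.
    """
    output = call_model(prompt)
    record = {
        "ts": time.time(),
        "model": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output
```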

Layer 01

Data Foundations with Fully Traceable Facts

The companies getting durable ROI from AI share one characteristic: they can trace every output back to a source of truth. Lineage, observability, data contracts, and automated quality checks are not compliance overhead — they are the mechanism by which you verify that the data your AI is acting on is the data you think it is. We cover the tooling, the architecture, and the organisational practices that make facts traceable end-to-end.
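
As an illustration of the mechanism (field names and types below are invented for the example), a data contract is a schema enforced loudly at the boundary, with each batch stamped with its source so every downstream output stays traceable:

```python
from dataclasses import dataclass

# A toy data contract: the agreed shape of every record at ingestion.
CONTRACT = {"order_id": str, "amount_eur": float, "status": str}

@dataclass
class Batch:
    source: str          # lineage: where this data came from
    records: list[dict]

def enforce_contract(batch: Batch) -> Batch:
    """Fail at ingestion, loudly, instead of downstream, silently."""
    for i, record in enumerate(batch.records):
        for field, expected in CONTRACT.items():
            if not isinstance(record.get(field), expected):
                raise TypeError(
                    f"{batch.source}[{i}]: field '{field}' violates the contract "
                    f"(expected {expected.__name__}, got {record.get(field)!r})"
                )
    return batch
```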

“Every claim on this platform is grounded in traceable evidence. Vendor announcements are verified against independent sources. Market figures are cited with methodology. Code examples are tested. We don't speculate dressed as analysis.”

Editorial standard, Allocating Intelligence

The Host

Wesley Nitikromo

Founder, Unwind Data · Amsterdam

Co-founded DataBright (acquired 2023) · Looker Solution Partner

Wesley Nitikromo has spent a decade building data infrastructure across fintech, e-commerce, sustainability, and SaaS — and has watched the same architectural mistake repeat itself in every era of the industry. Teams start at the AI layer and work down. The foundation cracks. The insights never materialise.

He co-founded DataBright, built it from zero to acquisition, and has since taken interim data leadership roles at companies ranging from early-stage startups to platforms handling millions of transactions daily. An early practitioner of analytics engineering in 2019 and a Looker Solution Partner, he now works at the convergence of semantic layers and AI-ready data architecture.

Allocating Intelligence is the platform where he thinks out loud about what actually works — with evidence, with architecture diagrams, and without vendor narratives.

Come on the podcast

If you are building in the semantic layer, data foundation, or AI-readiness space — as a practitioner, engineer, or CDO — and want to have an honest architectural conversation on record, get in touch.