Podcast · March 29, 2026 · 4 min read

Welcome to Allocating Intelligence

By Wesley Nitikromo

The most expensive mistake in enterprise AI is not picking the wrong model. It is starting at the wrong layer.

This is the first episode of the Allocating Intelligence podcast — a show about getting AI right by building from the ground up. Every episode is a field note, a conversation, or a provocation from inside the messy, high-stakes work of deploying AI in production environments.

What This Podcast Is About

My name is Wesley Nitikromo. I am a data and AI practitioner based in Amsterdam, founder of Unwind Data, and someone who has spent years in the engine room of enterprise AI deployments. Not the conference keynotes. The production incidents at 2 AM when the pipeline fails and the AI agent is reporting last week's numbers with full confidence.

That experience taught me something the industry does not talk about enough: the intelligence in "artificial intelligence" has to live somewhere. And most organizations are putting it in the wrong places.

They invest heavily in the AI layer — the agents, the LLMs, the orchestration frameworks — while neglecting the three layers underneath that determine whether any of that AI can actually be trusted. Models are procured. Prompts are engineered. And then the AI agent queries a raw database table, misinterprets a column called "rev_net_adj" as total revenue, and confidently reports a number that is off by 23%.

This is not an AI problem. It is an architecture problem. And it is entirely avoidable.

The Allocating Intelligence podcast exists to map the territory between the AI layer and the data layer — and to give data practitioners and data leaders a clear framework for building AI that actually works in production.

The Intelligence Allocation Stack

The framework at the center of this show is the Intelligence Allocation Stack: four layers that define where intelligence should live in a modern data and AI architecture.

Layer 1 is the data foundation. Data governance, data quality, ingestion pipelines, and a single source of truth. This is where everything starts — and where most enterprise AI initiatives quietly fail before they begin. If three people in your organization cannot run the same query and get the same answer, you are not ready for AI agents. You have a data problem wearing an AI costume. See The Intelligence Allocation Stack: Start at Layer One for the full breakdown.

Layer 2 is the semantic layer. Business logic translated for machines. Governed metric definitions, shared vocabulary, and a consistent, machine-readable meaning for terms like "revenue," "active user," and "gross margin" across every tool, team, and AI system that queries your data. The semantic layer is the highest-leverage investment in enterprise AI — and the most consistently skipped. Posts like The Truth Architect: Semantic Layer AI Strategy and How to Implement a Semantic Layer go deeper on why this layer changes everything.
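To make "governed metric definitions" concrete, here is a minimal sketch of what one might look like in code. Everything below is illustrative: the field names and the `resolve_metric` helper are hypothetical, not the schema of any particular semantic-layer tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """A governed, machine-readable metric definition.

    Illustrative only: these fields are hypothetical, not a
    specific semantic-layer product's schema.
    """
    name: str         # canonical business name
    sql: str          # the one blessed expression for this metric
    description: str  # plain-language meaning for humans and agents
    owner: str        # who governs changes to this definition

# One shared definition, so "revenue" means the same thing to every
# dashboard, notebook, and AI agent that asks for it.
REVENUE = MetricDefinition(
    name="total_revenue",
    sql="SUM(orders.amount_gross) - SUM(orders.refunds)",
    description="Gross order value minus refunds, in EUR.",
    owner="finance-data",
)

registry = {"total_revenue": REVENUE}

def resolve_metric(term: str, registry: dict[str, MetricDefinition]) -> MetricDefinition:
    """Resolve a business term to its governed definition instead of
    letting a caller guess at raw column names like `rev_net_adj`."""
    if term not in registry:
        raise KeyError(f"No governed definition for {term!r}; refusing to guess.")
    return registry[term]

print(resolve_metric("total_revenue", registry).sql)
```

The design point is the refusal path: an AI agent that can only resolve governed terms fails loudly on an unknown name, rather than confidently misreading a raw column.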

Layer 3 is the orchestration layer. Data pipelines, CRM syncs, reverse ETL, workflow automation, and real-time event processing. Making governed data flow reliably to the right places at the right time. In 2026, this layer is being extended by protocols like MCP (Model Context Protocol) that connect AI agents directly to governed data sources — turning orchestration from a data-moving infrastructure into the active interface between AI agents and your business definitions.
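The idea of orchestration as an "active interface" can be sketched schematically: rather than handing an agent raw table access, the layer exposes a narrow set of governed tools. All names below are hypothetical, the in-memory "warehouse" is a stand-in for real data, and this is deliberately far simpler than an actual MCP server.

```python
# Schematic sketch of a governed endpoint an AI agent might call
# through a protocol like MCP. Hypothetical names throughout; this
# is not MCP's actual API, just the shape of the idea.

GOVERNED_METRICS = {
    # canonical name -> (human description, computation over raw rows)
    "total_revenue": (
        "Gross order value minus refunds, in EUR",
        lambda rows: sum(r["amount_gross"] - r["refunds"] for r in rows),
    ),
}

def list_tools() -> list[dict]:
    """What the agent is allowed to ask for: governed metrics only."""
    return [{"name": name, "description": desc}
            for name, (desc, _) in GOVERNED_METRICS.items()]

def call_tool(name: str, rows: list[dict]) -> float:
    """Execute one governed metric; unknown names are rejected rather
    than left for the agent to improvise against raw columns."""
    if name not in GOVERNED_METRICS:
        raise ValueError(f"{name!r} is not a governed metric")
    _, compute = GOVERNED_METRICS[name]
    return compute(rows)

# A toy in-memory "warehouse" standing in for real orders data.
orders = [
    {"amount_gross": 120.0, "refunds": 20.0},
    {"amount_gross": 80.0, "refunds": 0.0},
]
print(call_tool("total_revenue", orders))  # 180.0
```

The agent never sees `rev_net_adj` or any other raw column; it sees a short menu of named, described, governed operations.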

Layer 4 is AI. The agents, conversational interfaces, autonomous systems, and predictive models. The layer that gets all the attention, attracts all the budget, and dominates every board-level conversation about technology strategy. Also the layer that produces nothing trustworthy unless the three layers underneath it are solid.

Companies that allocate intelligence correctly — that build the foundation before the agents — are the ones whose AI programs survive contact with production. The rest spend 18 months cycling through demos that cannot be deployed at scale.

Why This Show Is Different

There is no shortage of AI content. What is missing is practitioner content — field notes from people who have seen the failure modes up close and built systems that actually work in production.

This is not a vendor podcast. No sponsored segments. No "our tool solves this" conclusions. The Intelligence Allocation Stack is a framework, not a product.

Every episode is designed to be useful to data practitioners, analytics engineers, and data and AI leaders who are trying to make real decisions about where to invest, what to build, and how to sequence it. Whether you are evaluating semantic layer tools, debating whether to build agents on Snowflake or Databricks, or trying to explain to your CTO why the AI initiative failed — this show is for you.

What to Expect

Expect field notes from real production deployments. Conversations with practitioners who have stories about what actually broke and how they fixed it. Structured frameworks for decisions that most organizations are making ad hoc. And a consistent point of view: intelligence belongs in the layer where it can be governed, trusted, and reused — not in the layer where it produces impressive demos.

The Allocating Intelligence podcast publishes regularly. Subscribe wherever you get podcasts, or follow along at allocatingintelligence.com.

Where to Start

If you are new here, the best entry point depends on where you are in your AI journey. Start with The Intelligence Allocation Stack: Start at Layer One if you are still building the data foundation, or with The Truth Architect: Semantic Layer AI Strategy and How to Implement a Semantic Layer if your foundation is solid and you are deciding what comes next.

Each represents a different entry point into the Intelligence Allocation Stack — all connected by the same underlying thesis: build the foundation before the agents.

Follow and subscribe. There is a lot to cover.

Wesley Nitikromo

Founder of Unwind Data. Previously co-founded DataBright (acquired 2023). Data architect, analytics engineering specialist, and builder of AI-ready data infrastructure. Based in Amsterdam.