The Data Consulting Agentic Shift: Tony Zeljkovic on What Survives
The data consulting agentic shift is not a future concern. It is happening inside active engagements right now. Five years of consulting across life sciences, biotech, and finance. A book in progress on the discipline. And a question that is now impossible to ignore: when AI agents start handling the work that built your practice, which parts of your expertise survive, and which parts just become prompts?
Tony Zeljkovic joins Allocating Intelligence to work through exactly that. Not from a vendor stage and not with a product to sell. From the field, where the real decisions about data architecture, semantic layer ownership, and consulting value get made.
This is a long one. It earns the runtime.
Why This Conversation Now
The modern data stack delivered for a decade. dbt, Snowflake, Fivetran, Looker — the combination genuinely changed what was possible for data teams. Transformation pipelines that used to take months to build could be stood up in weeks. Analytics engineering became a real discipline. The stack worked.
Then it ballooned. What started as a tight toolkit sprawled into a standing negotiation across platforms. One-day asks turned into three-week scoping projects because every answer required touching five tools, three owners, and a stakeholder alignment meeting. Executives who had been told "just trust the data team" started asking why that team needed six months to answer a question a well-prompted LLM could handle in thirty seconds.
The semantic layer arrived as the architectural response — a way to centralize business logic so every tool and every query drew from the same definitions. Then AI agents arrived on top of it. And now the data consulting world is facing the oldest question in professional services: when the technology automates what you used to charge for, what is your actual value?
Tony has been sitting with that question longer than most. His consulting work spans domains — life sciences, biotech, finance — where the stakes for getting data wrong are high and the tolerance for "it depends" is low. That context shapes how he thinks about the agentic shift. It is not abstract for him.
The Semantic Layer Arc: From LookML to the Open Semantic Interchange
One of the most substantive threads in this episode runs from LookML's origins through to the Open Semantic Interchange debate happening right now.
LookML was the first serious attempt to encode business logic in a layer that sat between raw data and end-user queries. It made Looker possible, and it made a generation of analytics engineers think carefully about semantic definitions for the first time. The problem it solved — "whose metric is right?" — is still the problem the semantic layer solves today.
The OSI represents the next ambition: a standardized interchange format for semantic layer definitions that would let business logic move across tools without being rewritten every time. In theory, it is exactly what the ecosystem needs. In practice, Tony raises the harder question — whether the standardization debate is the wrong fight entirely.
The real issue is not whether we have a common format. It is whether organizations can agree on what the format should contain. Business logic is political. Metrics are contested. The fight over what "revenue" means in a given organization has nothing to do with the schema of the file that stores the definition. A standard interchange format solves a technical problem on top of an organizational one that most data teams have not resolved.
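To see why, it helps to make the collision concrete. Here is a minimal sketch in Python; the schema shape is hypothetical, not the actual OSI format. Two teams ship well-formed definitions of "revenue" that a shared interchange would happily carry, and nothing in the format arbitrates between them.

```python
# A minimal sketch, not the actual OSI schema: two teams' "revenue"
# definitions that both validate against the same interchange shape.
finance_revenue = {
    "name": "revenue",
    "owner": "finance",
    "expression": "SUM(invoices.amount)",                # recognized when invoiced
    "filters": ["invoices.status = 'posted'"],
}

sales_revenue = {
    "name": "revenue",
    "owner": "sales",
    "expression": "SUM(opportunities.contract_value)",   # recognized when signed
    "filters": ["opportunities.stage = 'closed_won'"],
}

# Both are well-formed. Neither is wrong. A shared format moves them
# between tools; it does not decide which one the company means.
assert finance_revenue["name"] == sales_revenue["name"]
assert finance_revenue["expression"] != sales_revenue["expression"]
```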
That tension — technical solutions arriving before organizational alignment — runs through the entire episode.
LookML Agents, MCP Servers, and What Actually Gets Commoditized
The conversation gets specific about what LookML agents and MCP servers actually do — and more importantly, what they do not do.
LookML agents can read semantic definitions, traverse metric relationships, and generate queries against a governed layer. MCP servers expose that same semantic context to AI models in a structured way. Together they represent a real shift: an AI that can query your data with business context already baked in, without someone having to explain what "ARR" means every time.
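As a concrete sketch of that pattern, here is what exposing governed definitions over MCP might look like using the Python MCP SDK's FastMCP helper. The METRICS store and lookup logic are hypothetical stand-ins for a real semantic layer backend.

```python
# A minimal sketch of exposing governed metric definitions over MCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("semantic-layer")

# Hypothetical governed metric store; a real server would read from
# the semantic layer itself rather than an in-memory dict.
METRICS = {
    "arr": {
        "description": "Annual recurring revenue, active subscriptions only.",
        "sql": "SUM(subscriptions.mrr) * 12",
        "grain": ["month", "region"],
    },
}

@mcp.tool()
def get_metric_definition(name: str) -> dict:
    """Return the governed definition of a metric, so the model
    never has to guess what a term like 'ARR' means."""
    metric = METRICS.get(name.lower())
    if metric is None:
        return {"error": f"No governed definition for '{name}'."}
    return metric

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```

The point of the shape is that the model asks for a definition instead of inferring one: the governed layer, not the prompt, carries the business context.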
What that does not replace is judgment about which metrics matter for a given decision, how to design a semantic layer that can accommodate a business that is actively changing, and what to do when the model returns something that is technically correct but strategically misleading. The commodity is the implementation layer. The durable value is the architectural thinking above it.
This is where Tony's framing of "moving up the ladder" lands with precision. When implementation gets cheap, the question is not whether your skills become obsolete — it is whether you have built up enough of the layer above to have somewhere to go. Data consultants who have been living at the implementation layer and calling it strategy are the ones with the most exposure.
Pricing Data Consulting Work in the Agent Era
The pricing conversation is one of the more honest exchanges in this episode. The agentic shift does not just change what you deliver — it changes how you justify what you charge.
Time-and-materials pricing made sense when the work was primarily execution. If an agent can do in two hours what a consultant used to do in two weeks, the billing model breaks before the skill set does. Tony works through what the alternatives look like: outcome-based pricing, retainer structures anchored to decision quality rather than delivery volume, advisory arrangements that are explicitly not implementation.
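The arithmetic behind that break is worth spelling out. A back-of-the-envelope sketch with illustrative numbers; the rate and hours are hypothetical, not figures from the episode.

```python
# Why time-and-materials breaks before the skill set does.
rate = 200          # consultant hourly rate, USD (illustrative)

manual_hours = 80   # two weeks of hands-on implementation
agent_hours = 2     # same deliverable with an agent in the loop

print(f"T&M, manual: ${rate * manual_hours:,}")   # $16,000
print(f"T&M, agent:  ${rate * agent_hours:,}")    # $400

# The deliverable is identical; the invoice shrinks 40x. Outcome-based
# pricing anchors the fee to the decision enabled, not hours burned.
```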
None of these are new models. What is new is the urgency. Data consulting practices that have been meaning to move upmarket have a shorter runway than they thought.
Context Anxiety and the Real Production Blockers
The part of this conversation that will probably age best is the discussion of what Tony calls "context anxiety" — the state that organizations find themselves in when they know they need to give AI agents access to their data but are not confident the context those agents will operate with is accurate or complete.
This is distinct from the hallucination problem that gets most of the press coverage. Hallucinations are model failures. Context anxiety is an organizational failure — the recognition that the business logic the AI will reason about has never been formally defined, is inconsistently applied across systems, and may actively contradict itself depending on which team you ask.
The non-determinism problem compounds this. In a deterministic SQL pipeline, the same inputs produce the same outputs every time. AI agent pipelines are not like that. The same question asked twice may produce different answers. For organizations in regulated industries — life sciences, finance, biotech, the exact domains Tony works in — that non-determinism is not a performance concern. It is a compliance one. And it is a concern that no amount of model improvement fully resolves.
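One practical response is to treat answer stability as a testable property. A minimal sketch, assuming `ask_agent` is a hypothetical stand-in for whatever invokes the full agent pipeline end to end:

```python
# Run the same question N times and flag divergent answers before
# anything ships. In a regulated domain, divergence is an audit
# finding, not a quirk.
from collections import Counter
from typing import Callable

def consistency_check(ask_agent: Callable[[str], str],
                      question: str, runs: int = 5) -> Counter:
    """Ask the same question `runs` times and count distinct answers."""
    answers = Counter(ask_agent(question) for _ in range(runs))
    if len(answers) > 1:
        print(f"Non-deterministic: {len(answers)} distinct answers "
              f"for {question!r}: {dict(answers)}")
    return answers
```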
The semantic layer is part of the answer here: governed definitions reduce the surface area of context that an agent can get wrong. But as Tony notes, a semantic layer is only as reliable as the organizational process that maintains it. This is where the consulting value persists: not in building the layer, but in building the governance around it that makes it trustworthy over time.
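What that governance might reduce to in practice, as a minimal sketch: every metric needs an accountable owner and a recent review. The field names and the 90-day review window are illustrative assumptions, not a real tool's API.

```python
# A sketch of the check that keeps a semantic layer trustworthy
# over time: no orphaned metrics, no stale definitions.
from datetime import date, timedelta

MAX_STALENESS = timedelta(days=90)  # illustrative review window

def audit_metric(metric: dict, today: date | None = None) -> list[str]:
    """Return the governance issues found on a single metric definition."""
    today = today or date.today()
    issues = []
    if not metric.get("owner"):
        issues.append(f"{metric['name']}: no accountable owner")
    reviewed = metric.get("last_reviewed")
    if reviewed is None or today - reviewed > MAX_STALENESS:
        issues.append(f"{metric['name']}: definition not reviewed "
                      f"in the last {MAX_STALENESS.days} days")
    return issues

# Example: a metric that would fail both checks.
print(audit_metric({"name": "arr", "owner": None,
                    "last_reviewed": date(2024, 1, 15)}))
```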
What Disappears, What Becomes More Valuable
By the final stretch of the conversation, the picture is fairly clear. Implementation work — data modeling, pipeline construction, dashboard development — is in structural decline as a consulting revenue source. Not because it disappears, but because it gets absorbed into platforms, automated by agents, and priced accordingly.
What becomes more valuable: the ability to identify where a business's intelligence should live, who should own it, and how to make it trustworthy enough that AI agents can act on it without a human checking every output. That is the allocation question. It is also the hardest question, because it requires understanding both the technology and the organization — and most people who are deep on one are shallow on the other.
Tony's book is, in his framing, an attempt to document the discipline before the agentic shift rewrites what the discipline means. Whether that window is a year or three years is unclear. That it is closing is not.
Episode Chapters
- 00:00 — Cold open: three takes from the conversation
- 01:03 — The agentic shift comes for data work
- 05:04 — Tony's path: five years of data consulting across life sciences, biotech, finance
- 06:45 — The Intelligence Allocation Stack
- 07:14 — Why "just slap Claude on a Snowflake stack" fails
- 11:54 — What ages worst in a data consulting book — and what doesn't
- 16:20 — Independent consultants vs vendor incentives
- 20:10 — The Open Semantic Interchange: threat or opportunity?
- 24:23 — Single source of truth: from LookML to today
- 25:40 — Where LookML agents fit (and where they don't)
- 29:10 — Moving up the ladder when implementation gets cheap
- 34:03 — Pricing data consulting work in the agent era
- 39:13 — The AI-native agency trap
- 43:34 — How to spot a real semantic layer in the wild
- 45:25 — Knowledge Catalog and the future of data work
- 47:54 — Why data people are coming for software engineers
- 50:14 — What disappears, what becomes more valuable
- 51:21 — Context anxiety, hallucinations, and the cost reckoning
- 55:29 — MCP everywhere: does it commoditize the work?
About Tony Zeljkovic
Tony Zeljkovic is an independent data consultant with five years of project experience across life sciences, biotech, and financial services. He is currently writing a book on the discipline of data consulting — what it is, what it should be, and what the agentic shift means for everyone who has built a practice in it. He brings an explicitly practitioner perspective: no vendor alignment, no platform to sell, just the view from the field.
Connect with Tony on LinkedIn: linkedin.com/in/tony-zeljkovic
About Allocating Intelligence
Allocating Intelligence is a podcast about where intelligence should live in the modern data and AI stack. It is hosted by Wesley Nitikromo, founder of Unwind Data and previously co-founder of DataBright (acquired). Each episode explores one layer of the Intelligence Allocation Stack: data foundation, semantic layer, orchestration, and AI agents.
Subscribe for new episodes and conversations with the people building the next generation of enterprise data and AI infrastructure.
Links: Allocating Intelligence · Unwind Data · Wesley on LinkedIn · Listen on Spotify