What Data Leaders Need to Build for the Age of AI

Sucheta Klein
Founder, Aquerius
10 min read
Published on May 12, 2026

Why Data Leaders Are Stuck

If you run data or analytics at an enterprise right now, you are probably getting pulled into conversations that feel both urgent and strangely imprecise. 

Your CEO is asking about agents because every leadership team now feels pressure to prove they are AI-native. Your board wants to know how AI will show up across the product roadmap and internal operations. Your existing vendors are quietly rebranding their products as AI, while new startups are pitching you on buzzwords you have barely had time to bother ChatGPT about.

Amid all of that noise, you are still a data or analytics leader and you still have a job to do. You probably have a team that knows SQL and is not particularly excited to learn OWL. They are also probably nervous that a new tool will mean more work for them, or eventually no work at all.

Understandably, that is a lot of chaos to navigate. We have been talking to data leaders who are trying to understand what this moment means for the work they have been doing for years.

The Stack You Already Have + What’s Missing

Most enterprises already have a semantic layer in some form. It sits on top of the data and makes business definitions operationally consistent. When finance and sales pull revenue, they should get the same answer, even if the underlying tables, joins, and filters look different.

That layer was built for a world where humans interpreted the answer. Agents change that equation because they consume the answer and use it to take action. That means alongside consistent definitions, systems also need to understand the structure of the business itself, including what exists, how things relate, what constraints govern those relationships, and what is true at a given point in time.

This is precisely where ontology enters the conversation.

An ontology is the layer that turns connected data into reasoned knowledge. This does not mean your data team needs to become semantic web experts overnight. But it does mean they need to start thinking in terms of typed entities, typed relationships, validity windows, constraints, and provenance that the system can rely on when it reasons.
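To make those terms concrete, here is a minimal sketch of what "typed entities, typed relationships, validity windows, and provenance" look like as data a system can actually check. All names here (entity types, the `supplies_under` predicate, the source string) are illustrative assumptions, not a real product's model.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Entity:
    id: str
    entity_type: str          # typed entity, e.g. "Supplier" or "Contract"

@dataclass(frozen=True)
class Relationship:
    subject: Entity
    predicate: str            # typed relationship, e.g. "supplies_under"
    obj: Entity
    valid_from: date          # a validity window, not just a created_at timestamp
    valid_to: Optional[date]  # None means "still true"
    source: str               # provenance: where this assertion came from

    def holds_on(self, day: date) -> bool:
        """Is this relationship true on the given day?"""
        return self.valid_from <= day and (self.valid_to is None or day <= self.valid_to)

acme = Entity("supplier-001", "Supplier")
c42 = Entity("contract-042", "Contract")
rel = Relationship(acme, "supplies_under", c42,
                   valid_from=date(2024, 1, 1), valid_to=date(2025, 6, 30),
                   source="erp.contracts_export")

print(rel.holds_on(date(2024, 6, 1)))   # True: inside the validity window
print(rel.holds_on(date(2026, 1, 1)))   # False: the contract has expired
```

The point is not the dataclasses; it is that every field above is something a reasoning system can verify before acting, where a bare warehouse row would force it to guess.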

Why This Distinction Matters in Your Role

This is the gap your CEO is intuiting when they ask about AI strategy, even if they cannot yet articulate it in architectural terms. They sense that something is missing between governed analytics and reliable autonomous systems. That missing piece is a substrate that allows AI to reason over enterprise knowledge rather than simply retrieve from it.

The work of translating that executive intuition into architectural decisions falls to the data leader. That is what makes this moment difficult. Your existing semantic layer is doing real and important work, so the natural instinct is to extend it, add AI features on top, and call that the strategy.

But the missing layer sits underneath the semantic layer. Treating ontology as a feature of your existing stack, rather than a distinct layer of the architecture, is one of the easiest mistakes to make right now.

Where Agents Fit in the Stack

Agents are being introduced into workflows where their outputs may influence real decisions. They may help prioritize accounts, evaluate exceptions, recommend next steps, trigger workflows, or surface risks that other teams act on, which is why the architecture underneath them matters so much.

If an agent works directly on raw warehouse data, it has to figure out definitions on its own. If it works only on a semantic layer, it may use the right definition in the wrong context. If it works on a loose graph, it may follow connections that exist in the data but do not matter to the decision.

The architecture that supports agents in production puts ontology underneath as the reasoning substrate, the semantic layer above it as the business contract, and the agent operating across both. In that setup, the agent is working from a structured model where relationships have meaning, validity has boundaries, and evidence can be traced. That is what allows an agent to know when a question is well-formed, when the evidence is strong enough to act, and when it should stop instead of guessing.
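That layering can be sketched in a few lines. Everything here is a simplified assumption, not a real system's API: a toy ontology checks whether a question is well-formed, a toy semantic layer supplies the governed definition, and the agent refuses rather than guesses when either layer cannot support the question.

```python
ONTOLOGY = {
    # entity_type -> relationships the model actually defines
    "Customer": {"owns", "located_in"},
    "Account": {"billed_under"},
}

SEMANTIC_LAYER = {
    # governed metric definitions: the business contract
    "revenue": "SUM(invoices.amount) WHERE status = 'paid'",
}

def answer(entity_type: str, relationship: str, metric: str) -> str:
    # 1. Ontology: is this question even well-formed for the model?
    if relationship not in ONTOLOGY.get(entity_type, set()):
        return f"REFUSE: '{entity_type}.{relationship}' is not defined in the ontology"
    # 2. Semantic layer: use the governed definition, never an ad-hoc one.
    definition = SEMANTIC_LAYER.get(metric)
    if definition is None:
        return f"REFUSE: no governed definition for metric '{metric}'"
    return f"OK: compute {metric} as [{definition}] over {entity_type}.{relationship}"

print(answer("Customer", "owns", "revenue"))          # OK: uses the governed definition
print(answer("Customer", "billed_under", "revenue"))  # REFUSE: malformed question
```

Note that the second call refuses before touching any data: the ontology caught a question the model does not define, which is exactly the "stop instead of guessing" behavior described above.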

For data leaders, the work is figuring out how to get there without breaking the systems that already serve the business well. The good news is that the path is more incremental than it looks, and much of what you have already built is part of the answer.

What to Look for in a Vendor

The vendor landscape right now is loud. Almost every company is using the same language around agents, context, graphs, reasoning, and AI infrastructure, regardless of whether their products actually do the work those words imply. The best way to separate real reasoning infrastructure from repackaged retrieval is to ask questions most pitches are not built to answer cleanly. These are the questions we work through with potential customers, and the ones worth bringing into your next vendor conversation.

1. Does the system understand the business meaning of relationships, or does it only connect records? The vendor should be able to define what each relationship means, what has to be true for it to hold, what the system can infer from it, and when it stops applying. Ask whether relationships are first-class parts of the model or simply labels between records. If they cannot explain the rules, constraints, validity conditions, and inference logic behind those connections, the system may be structured, but it is not yet reasoning-ready.
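A quick sketch of what "first-class relationship versus label between records" means in practice. The rule structure and names here are hypothetical: the contrast is that a label is just an edge name, while a first-class relationship declares the conditions the system must check before inferring anything from the connection.

```python
# A "label" approach: just an edge name. Nothing to validate, nothing to infer from.
label_edge = ("vendor-7", "approved_for", "category-risk")

# A first-class approach: the relationship type declares its own rules.
RELATIONSHIP_RULES = {
    "approved_for": {
        "subject_type": "Vendor",
        "object_type": "Category",
        "requires": ["active_contract", "passed_audit"],  # what must be true for it to hold
    }
}

def relationship_holds(predicate: str, subject_type: str, object_type: str,
                       facts: set) -> bool:
    rule = RELATIONSHIP_RULES[predicate]
    if rule["subject_type"] != subject_type or rule["object_type"] != object_type:
        return False  # wrong entity types: the edge is malformed, not merely missing
    return all(req in facts for req in rule["requires"])

print(relationship_holds("approved_for", "Vendor", "Category",
                         {"active_contract", "passed_audit"}))  # True: all conditions hold
print(relationship_holds("approved_for", "Vendor", "Category",
                         {"active_contract"}))                  # False: audit missing
```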

2. Can the system understand what was true, when it was true, and whether it is still true now? Business context changes all the time. A vendor should be able to explain how the system knows what is true now, what used to be true, and what has been replaced. If the answer is limited to adding timestamps, the system is tracking record history, not modeling temporal validity.
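The gap between "adding timestamps" and "modeling temporal validity" is easiest to see with an example. Record history tells you when a row was written; temporal validity tells you when the fact it describes was true in the business. A hypothetical sketch (the payment-terms example is invented):

```python
from datetime import date
from typing import Optional

# Each fact carries the window in which it was true, not just when it was written.
payment_terms = [
    # (value, valid_from, valid_to) -- None means "still true"
    ("NET-30", date(2023, 1, 1), date(2024, 3, 31)),
    ("NET-60", date(2024, 4, 1), None),
]

def terms_as_of(day: date) -> Optional[str]:
    """Which payment terms were actually in force on a given day?"""
    for value, start, end in payment_terms:
        if start <= day and (end is None or day <= end):
            return value
    return None  # no terms were valid: the system should say so, not guess

print(terms_as_of(date(2024, 2, 1)))   # NET-30: the old terms still applied
print(terms_as_of(date(2025, 1, 1)))   # NET-60: the replacement is in force
print(terms_as_of(date(2022, 6, 1)))   # None: before any terms existed
```

A timestamp-only system can tell you the NET-60 row was inserted in April 2024; it cannot answer "what terms governed this invoice in February 2024" without a human reconstructing the history.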

3. Can the system explain the evidence behind its recommendation? A recommendation that cannot be traced cannot be trusted. A vendor should be able to show how the system arrived at an answer, including which sources were used, which relationships were followed, which constraints applied, and which version of the data the system relied on. Auditability should be the standard. If the system cannot explain the reasoning path behind its output, it is not ready for decisions that require governance, review, or accountability.
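As a rough illustration of what an auditable answer looks like, here is a sketch of a recommendation that carries its own evidence. The account, sources, and snapshot names are all invented; the point is that every element a reviewer would ask about travels with the conclusion.

```python
from dataclasses import dataclass, field

@dataclass
class TracedAnswer:
    conclusion: str
    sources: list = field(default_factory=list)                 # which data was used
    relationships_followed: list = field(default_factory=list)  # which edges were walked
    constraints_checked: list = field(default_factory=list)     # which rules applied
    data_version: str = ""                                      # which snapshot was relied on

traced = TracedAnswer(
    conclusion="Escalate renewal for account-19",
    sources=["crm.accounts", "billing.invoices"],
    relationships_followed=["account-19 -billed_under-> contract-042"],
    constraints_checked=["contract is active", "invoice overdue > 60 days"],
    data_version="warehouse snapshot 2026-05-11",
)

def audit_report(a: TracedAnswer) -> str:
    """Render the reasoning path so a reviewer can retrace it."""
    lines = [f"Conclusion: {a.conclusion}", f"Data version: {a.data_version}"]
    lines += [f"Source: {s}" for s in a.sources]
    lines += [f"Followed: {r}" for r in a.relationships_followed]
    lines += [f"Checked: {c}" for c in a.constraints_checked]
    return "\n".join(lines)

print(audit_report(traced))
```

If a vendor cannot produce something equivalent to this report for any given answer, the output may be fluent but it is not governable.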

4. Can the system recognize when it does not have enough evidence to act? A vendor should be able to explain how the system handles incomplete evidence, invalid constraints, conflicting information, and malformed questions. The system should not be optimized to always produce an answer. It should know when the available evidence does not support a conclusion, when required conditions cannot be validated, or when a path through the data is technically available but not relevant to the decision. In production, refusal is a control mechanism. A system that cannot recognize uncertainty will eventually manufacture confidence where the business needs judgment.
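Refusal as a control mechanism can be sketched as an explicit gate in front of action. The thresholds, condition names, and evidence shape below are all hypothetical assumptions; the structure is what matters: the system acts only when required conditions are validated and evidence is consistent, and otherwise returns an explicit refusal instead of manufacturing confidence.

```python
def decide(evidence: list, required_conditions: list, min_sources: int = 2) -> str:
    satisfied = {e["condition"] for e in evidence if e["verified"]}
    conflicts = {e["condition"] for e in evidence if e.get("conflicts")}
    missing = [c for c in required_conditions if c not in satisfied]
    if missing:
        return f"REFUSE: cannot validate {missing}"
    if conflicts:
        return f"REFUSE: conflicting evidence on {sorted(conflicts)}"
    if len(evidence) < min_sources:
        return "REFUSE: not enough independent evidence to act"
    return "ACT: all required conditions validated"

print(decide(
    evidence=[{"condition": "budget_approved", "verified": True},
              {"condition": "vendor_in_policy", "verified": True}],
    required_conditions=["budget_approved", "vendor_in_policy"],
))  # ACT: all required conditions validated

print(decide(
    evidence=[{"condition": "budget_approved", "verified": True}],
    required_conditions=["budget_approved", "vendor_in_policy"],
))  # REFUSE: cannot validate ['vendor_in_policy']
```

The useful vendor question is which of these refusal branches their system actually implements, and what a refusal looks like to the downstream workflow.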

5. Can the system add reasoning without replacing the systems your business already depends on? A vendor should be able to explain how the system works with your warehouse, transformation layer, BI tools, and semantic definitions without forcing a rebuild of the stack. The approach needs to be additive. Preserve the systems that already work and introduce the layer that allows AI to reason over them safely. A vendor who wants to replace your existing semantic logic is creating an implementation nightmare and a political fight you do not need.

6. Can the vendor show you the actual model the system uses to reason? A vendor should be able to show the class definitions, relationship types, constraints, validity logic, and provenance model the system uses to reason. If the ontology is real, it should be inspectable. If the vendor cannot show the structure behind the reasoning, they may not have reasoning infrastructure. They may have a graph database with added rules, which is a different product than the one being marketed.

The Future of Your Current Tools

Ontology can be introduced without discarding the systems your team already depends on. Your warehouse remains the system of record. Your transformation layer still handles modeling, testing, documentation, and governance. Your BI and semantic layers still provide the shared definitions that keep reporting consistent. What changes is the layer beneath those semantic definitions.

That means the work your team has already done becomes more valuable. The people who understand dimensional models, metric governance, and data transformations are well-positioned to think in terms of typed entities, validity windows, and constrained relationships. There is a real skill shift involved, but it is not a total reinvention.

What This Means for Monday Morning

Start with a focused assessment of your current architecture. Look at what it already supports, where it reaches its limits, and what kind of reasoning layer your AI strategy will eventually require. Here is what we are advising data leaders to start thinking about:

Step 1: Identify Teams with Complex Data 

Before getting into the technical details, look across the teams whose work depends on complex, changing, or high-stakes data. These are usually teams dealing with layered relationships, changing states, regulatory exposure, or judgment-heavy decisions. For many enterprises, that means procurement, customer operations, compliance, supply chain, finance, or engineering. Start with the teams where a wrong answer would create real business risk.

Step 2: Identify the Questions Those Teams Actually Ask

Once you have identified the teams working with complex data, look at the questions they already ask when making decisions. Do not start with the obvious questions your current stack already handles. Look for questions that require context across systems, records, policies, exceptions, and team memory. These are the questions that reveal whether your architecture can support reasoning. If answering the question requires a human to stitch together context from multiple places, capture it. Build a small set of these questions so that when you evaluate a vendor, you are testing against the actual complexity of your business rather than the clean version shown in a demo.

Step 3: Map the Gaps in Your Stack

Once you have a working list of complex questions, map what it takes to answer each one today. Note which systems need to be checked, which records need to be reconciled, which policies or exceptions need interpretation, which teams need to be consulted, and how long the process takes. This turns an abstract architecture gap into something concrete. It also gives you a practical way to evaluate vendors against the real complexity your teams deal with every week. It helps with budget conversations too. If a vendor can reduce manual work, shorten time to answer, improve confidence, or reduce escalation, you have a clearer case for why the investment matters. Your CFO will care more about the operational drag removed than the architecture diagram.

Step 4: Bring Your Agenda to Vendor Conversations

By this point, you should have a clear view of what kind of AI support your business actually needs. Bring that perspective into every vendor conversation. Do not let a polished demo control the conversation. Bring your hardest questions into the room and ask how the vendor’s system would model the entities, relationships, constraints, validity windows, and provenance required to support them.

Step 5: Pilot Complex, Multivariate Cases

If the vendor conversation goes well, you will likely move from demo to pilot. A pilot is still a controlled environment, but it gives you a chance to see whether the tool is actually compatible with your employees, workflows, and operating reality. Avoid centering the pilot around easy use cases that look impressive but sidestep the real complexity of your business. Bring forward the hard questions your teams identified earlier and test cases with incomplete data, ambiguous relationships, changing terms, expired exceptions, competing sources, and real consequences if the system gets the answer wrong. If a system can handle those cases, it can likely handle the simpler ones underneath them. Our favorite problems to demo are headcount optimization and demand planning because they force the system to reason across multiple variables, constraints, assumptions, and business goals.

Step 6: Document Everything

Even though it feels like a lot has already happened in AI, we are still early in how enterprises consume it, build with it, and govern it. As you evaluate vendors, keep a clear record of what you tested, where your current stack held up, where it struggled, and what still required human judgment. Over time, those notes will help your team compare vendors more objectively, make stronger internal business cases, and avoid repeating the same evaluation cycles while the category is still taking shape.

Where Aquerius Fits

Aquerius is built on a simple belief. Retrieval is not reasoning, fluency is not correctness, and a graph without semantics is just connected data. That is why ontology sits at the foundation of what we build.

We are not focused on generating summaries, creating reports, or augmenting low-level day-to-day workflows. There are already plenty of vendors doing that well. Aquerius is for teams that want agents, workflows, and skills to reason through typed relationships, temporal validity, constraints, and provenance.

If you are thinking about how to make your existing semantic stack ready for AI, we should probably have a conversation.

See Reasoning in Action

See how Aquerius transforms raw data into trusted, verifiable enterprise logic.