There’s a conversation happening in every data org right now. It goes something like this:
“If AI can answer business questions in seconds, what exactly are we paying our data analysts to do?”
It’s a fair question. And if you’re asking it, you’re probably looking at the problem the wrong way.
I’ve spent the last few years working side by side with data teams at Fortune 500 companies like ConocoPhillips and Cisco. What I’ve watched unfold is not the obsolescence of the data analyst. It’s the beginning of their most important chapter yet, if they’re willing to step into it.
Let me explain. But first, I need to take you back to where we started.
The old world: BI as a relay race
For the better part of the last decade, Business Intelligence worked like a relay race. The baton passed through many hands before a business user ever got an answer.
It started with a request. A VP of Sales would send a Slack to the data team: “Can you build me a view of pipeline coverage by region, segmented by deal size and expected close date?” Simple enough, in theory.
What happened next was anything but simple.
The data analyst would first go spelunking in the data warehouse. Which tables held the CRM data? Was it in Salesforce, synced to Snowflake or still sitting in a legacy system? Were the field names consistent? Did close_date in one schema mean the same thing as expected_close in another? This data prep phase alone — cleaning, joining, validating — could consume two to three days before a single chart was drawn. Research has long confirmed what every analyst already knows in their bones: The preparation work can swallow the majority of their time, leaving precious little for actual analysis.
Then came the workbook. The analyst would build a Tableau dashboard or a Looker Explore, carefully constructing the logic. In Looker, this meant writing LookML: Defining views, dimensions, measures and the relationships between them. This is the semantic layer — the translation dictionary between raw database columns and business-friendly concepts like “pipeline coverage” or “at-risk deals.” It’s sophisticated work. It requires understanding both the technical data model and the intent of the business question.
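To make the idea concrete, here is a minimal sketch of what one semantic-layer entry might look like, written in plain Python rather than LookML. Every table, field, and metric name here is hypothetical — the point is the shape of the translation dictionary, not a real product schema.

```python
# An illustrative semantic-layer entry: business-friendly concepts mapped
# to the underlying data logic. All names are hypothetical.
SEMANTIC_LAYER = {
    "pipeline_coverage": {
        "description": "Open pipeline value divided by remaining quota.",
        "sql": "SUM(opportunities.amount) / SUM(quotas.remaining)",
        "dimensions": ["region", "deal_size_band", "expected_close_quarter"],
        "joins": {"quotas": "opportunities.owner_id = quotas.owner_id"},
    },
}

def resolve_metric(name: str) -> str:
    """Translate a business-friendly metric name into its SQL expression."""
    entry = SEMANTIC_LAYER.get(name)
    if entry is None:
        raise KeyError(f"No semantic definition for metric: {name}")
    return entry["sql"]
```

The value is not the lookup itself — it is that the mapping from “pipeline coverage” to specific tables, joins and formulas lives in one governed place instead of in each analyst’s head.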
Once the semantic model was right, the analyst would build the dashboard itself — choosing the right visualization, applying filters, establishing drill-down hierarchies. Then a round of review with the stakeholder. Revisions. More revisions. Finally, the dashboard was published.
The business user got their answer, often one to two weeks after they asked the question.
And then a slightly different question would come in, and the whole cycle would begin again.
This wasn’t a failure of the data team. It was a structural problem. The old model of BI was built for a world where data was scarce, questions were infrequent and business moved slowly enough that a two-week turnaround was acceptable. That world no longer exists.
The missing ingredient: context
Here’s what keeps getting overlooked in the “AI replaces analysts” conversation: AI doesn’t know your business.
A large language model is trained on the internet. It knows what “churn rate” means in the abstract. It does not know that at your company, “churn” excludes accounts that downgraded but didn’t cancel, per a decision made in Q3 2021 during a board-driven metric refresh. It does not know that your fiscal year ends in October, not December. It does not know that the anomaly in the Southeast region’s numbers last quarter was caused by a one-time restructuring of territory assignments, not a real decline in performance. It does not know that when your CFO asks about “revenue,” she means recognized revenue, not booked, and that your revenue recognition policy is tied to a specific contract milestone that lives in a field called milestone_event_type = 'GO_LIVE' in your ERP.
Without that context, even the most capable AI will produce answers that are technically correct and completely wrong.
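The churn example above is the kind of rule that has to be written down somewhere a machine can apply it. A minimal sketch, assuming a hypothetical account record with a `status` field, of how that company-specific definition might be encoded:

```python
def is_churned(account: dict) -> bool:
    """Apply a company-specific churn definition (hypothetical fields).

    Per the illustrative Q3 2021 decision described in the article, an
    account that downgraded but did not cancel is NOT counted as churn.
    """
    # Only a full cancellation counts as churn; a downgrade is contraction.
    return account.get("status") == "cancelled"

def churn_rate(accounts: list) -> float:
    """Share of accounts that churned under the definition above."""
    if not accounts:
        return 0.0
    churned = sum(1 for a in accounts if is_churned(a))
    return churned / len(accounts)
```

A generic model, asked for “churn rate,” would happily count the downgraded account too — technically defensible, and wrong by this company’s definition. Encoding the rule is what closes that gap.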
This is not a model problem. It is not a data quality problem, though data quality matters. It is a context problem. And it is the central architectural challenge of the AI analytics era.
Gartner reinforces this directly: Organizations that prioritize semantics in AI-ready data will increase their GenAI model accuracy by up to 80% and reduce costs by up to 60%. As they put it, poor semantics lead to greater hallucinations, more tokens required and higher costs. Context is not a nice-to-have. It is the lever.
Consider what “context” actually means in an enterprise setting. It has multiple layers:
- Data context is the structural knowledge: What tables exist, how they join, what the columns mean, where data comes from and what edge cases cause anomalies. This is what a veteran data engineer carries in their head after three years in a particular data warehouse. It’s the knowledge that customer_id in the CRM doesn’t always match customer_id in the billing system, and here’s the lookup table that reconciles them.
- Business context is the semantic knowledge: How the organization defines its metrics, which definitions have changed over time, what initiatives are underway that might affect the numbers and which data sources to trust for which questions. It’s knowing that “active user” means something different to the product team than it does to the finance team.
- Historical context is institutional memory: What questions have been asked before, what anomalies were investigated and why, what decisions were made and on what basis and what the AI should not learn from because it reflects a one-time event rather than a durable pattern.
- Presentational context is judgment about how to communicate: Which audiences need which level of detail, when a number needs a narrative, when a trend needs a benchmark and how to frame an insight so it drives action rather than confusion.
AI can process information at superhuman speed. It cannot originate context it was never given. One of the more clarifying observations I’ve encountered is from research at Tellius, which noted that “human analysts carry this context in their heads. They remember what they investigated before, what patterns they’ve seen and what explanations they’ve already validated or ruled out. They build institutional knowledge over time.” Today’s AI systems, by contrast, are largely stateless. Every query starts from zero.
That is the gap. And filling it is now the most important job in any data organization.
The rise of the AI context engineer
I want to propose a new title. Not because titles matter, but because names shape how we think about roles, and the role that is emerging deserves a name that captures its true significance: The AI context engineer (ACE).
The ACE is not a dashboard builder. The ACE is not a SQL writer. The ACE is the person who makes AI analytics actually work — by building, curating, governing and continuously refining the context layer that sits between raw enterprise data and intelligent AI responses.
Think about what this role actually requires.
The ACE must understand the business deeply enough to know what questions will be asked, what answers will matter and what edge cases will cause the AI to go wrong. They are, in some sense, the organizational ethnographer — the person who has absorbed years of institutional knowledge and can translate it into something a machine can act on.
The ACE must understand the data architecture well enough to model it accurately — to define not just what columns mean, but how they relate, what their lineage is, what their known quality issues are and when they should and shouldn’t be used.
The ACE must be a curator of history: Documenting past analyses, flagging one-time anomalies, preserving the reasoning behind metric definitions so that future AI-generated answers reflect the organization’s evolving understanding of itself.
The ACE must be a quality controller: Continuously evaluating the AI’s outputs, identifying where the context layer is incomplete or misleading, and closing the gaps before they propagate into bad decisions.
And the ACE must be a translator: Communicating to business stakeholders not just what the data shows, but why the AI answered a question a particular way, and when human judgment should override an automated insight.
This is not a less sophisticated role than the data analyst of the past. It is a vastly more sophisticated one. The data analyst used to be primarily a technical craftsperson — skilled at SQL, at visualization, at data modeling. The ACE is all of that, plus strategist, plus organizational psychologist, plus AI systems architect.
The companies we work with that are getting the most from AI analytics — the ones where adoption doubles month over month, where business users genuinely trust the outputs, where AI is changing how decisions get made — they all have people functioning in this role, even if they don’t call it that yet. They have someone who owns the context layer. Who champions it. Who treats it as a living system that needs ongoing investment.
What an ACE actually does: A day in the life
Let me make this concrete.
A new quarter begins at a mid-sized technology company. The CRO sends a message to the data team: The board wants a new way of looking at net revenue retention — one that breaks out expansion, contraction and churn separately, and accounts for the company’s recent shift from annual to monthly billing cycles.
In the old world, this was a two-week project: Schema discovery, SQL development, semantic model updates, dashboard build, review, revision, publish.
In the AI analytics world, the CRO can ask this question directly — if the context layer is ready to support it. The ACE’s job is to make sure it is.
First, understanding the business intent. That means sitting down with the CRO to understand not just the mechanics of the new metric, but the decision it will inform. What will the CRO do differently if expansion is trending up but contraction is also rising? What benchmark matters — industry average, internal historical trend, competitor proxy?
Second, translating intent into data logic. Where does billing cycle information live? How is a “contraction” event recorded in the system? Is there a field, or does it need to be inferred from a month-over-month delta in contract value? The ACE knows the data well enough to answer these questions without a weeks-long discovery sprint.
Third, encoding the context. Adding the new metric definition, its calculation logic, its relevant filters, its known edge cases and its relationships to adjacent metrics into the context layer. This is the equivalent of writing LookML in the old world — but richer, because it includes not just the formula but the intent, the history and the caveats.
Fourth, validating the AI’s output. Running a battery of test questions to ensure the AI returns the right answer for the right reasons. Not just “is the number correct?” but “does the AI understand when not to use this metric?”
Fifth, governing ongoing accuracy. As the company’s business model evolves, the ACE monitors the AI’s outputs for drift, flags questions that reveal gaps in the context layer and continuously updates the system.
The CRO gets answers in minutes instead of weeks. But only because the ACE did the upstream work to make that possible.
Why this is good news for data analysts
If you’re a data analyst reading this and wondering whether you have a future, the answer is yes — an extraordinary one. But it requires a shift in how you think about your value.
Your value was never really in writing SQL. It was in knowing which SQL to write. It was in the institutional knowledge that told you which table to trust, which definition to use and which anomaly to flag. It was in the judgment about how to frame a number for a CEO who thinks in stories, not schemas.
AI can write SQL. AI cannot originate the judgment that makes SQL meaningful.
The data points are encouraging. A 2025 Alteryx survey of 1,400 data analysts worldwide found that 87% say their role has become more strategically important in the past year, and 94% say AI is enhancing that strategic nature. Only 17% are worried about being replaced — a sharp reversal from just a year prior, when 65% of data leaders expected AI to take analyst jobs within two to three years. What changed? Analysts who leaned into AI found it made them more powerful, not less necessary.
What you carry in your head — the business context, the data context, the historical patterns, the organizational definitions — is precisely what AI needs and cannot generate on its own. Your job is not to compete with AI at tasks AI can now do faster. Your job is to feed AI the knowledge that makes it worth trusting.
The ACE role is the formalization of that value. Context is not a side effect of good analytics work. It is the core of it. The people who have spent years accumulating that context are exactly the right people to build and steward the systems that make AI analytics possible.
Data analysts are not being sidelined by AI. They are becoming the people who make AI work for everyone else in the organization. That is a remarkable elevation in status, if they’re willing to claim it.
The organizations getting this right
The companies seeing transformational results from AI analytics share a common pattern: They have invested in the context layer, and they have humans who own it.
The organizations still struggling with AI analytics are the ones that deployed a tool and then waited for it to figure out their business. It doesn’t work that way. The AI is the engine. Context is the fuel. Without the fuel, the engine goes nowhere.
A broader Alteryx survey reinforces this pattern: Only 23% of organizations have successfully scaled AI pilots into production, and just 28% fully trust AI to support decision-making. The diagnosis is consistent with what we see every day — trust breaks down when AI is deployed without the business context and logic needed to produce consistent, explainable results.
The lesson for data leaders is clear. The transition from traditional BI to AI analytics is not primarily a technology transition. It is an organizational one. The technology works. What determines whether it delivers value is whether your organization has people who understand that their job is now to build and maintain the context that makes AI trustworthy.
The inflection point
We are at an inflection point in the history of enterprise data. Gartner estimates that by 2028, over half the GenAI models used in enterprises will be domain-specific, with “context emerging as one of the most critical differentiators for successful agent deployments.” The era of static dashboards and reactive reporting is ending. The era of always-on, conversational, proactive AI analytics is here.
The question every data organization needs to answer is not “Will AI replace my analysts?” It’s “Are my analysts ready to become AI context engineers?”
The ones who are — the ones who lean into this shift, take ownership of the context layer and become the bridge between organizational knowledge and AI capability — will find themselves more valued, more strategic and more impactful than any data analyst has ever been.
The ones who wait for someone else to define their role may find that someone else already has.
The missing ingredient for AI to truly transform BI has never been model capability. It has been context. And the people best positioned to provide that context are the analysts who have been building it, living it and protecting it for years.
That’s not a threat to their career. It’s the foundation of their next one.
The missing piece is already on your payroll. Time to make them an ACE.
This article is published as part of the Foundry Expert Contributor Network.
Read More from This Article: The missing piece in every failed AI/BI rollout is already on your data team