Why the emerging tech conversation feels incomplete
I keep hearing the same emerging tech story repeated: better models, smarter copilots, more autonomous agents. The demos look impressive, but they often miss what is happening inside real organizations.
From where I sit, the challenge is no longer whether AI works. It is whether we can run it responsibly once it is embedded in day-to-day operations.
Many organizations have already moved beyond experimentation. AI is shaping customer interactions, internal workflows, operational decisions and risk exposure. That changes the conversation. It is not about potential anymore. It is about accountability.
When something goes wrong, the questions are simple and direct:
- Why did the system recommend this action?
- Why did it flag this person, case or transaction?
- Why did an automated workflow trigger that decision?
- Who owns the outcome when AI is part of the chain?
These questions rarely appear during a pilot. They surface during incidents, audits, escalations and senior stakeholder reviews, when trust is on the line. In those moments, “the model decided” is not a real answer. Neither is “we cannot see inside it”.
That is why I think the most important “emerging” topic is not the next wave of AI capability. It is the control layer that makes AI safe to adopt at scale.
Explainability sits at the centre of that. Not as a specialist feature, but as an enterprise requirement CIOs will increasingly be expected to own. If AI is going to influence decisions that affect customers, money or compliance, we need to explain what happened and why, in plain language, with evidence we can stand behind.
How AI quietly became a decision-maker
A lot of AI content still frames adoption as a future state, but in most organizations I work with, it is already here. Not as a single system labelled AI, but as a layer embedded into everyday processes.
AI rarely arrives with a big launch moment. It shows up in small ways that feel harmless at first: a recommendation that nudges an outcome, a model that prioritizes one case over another, a chatbot that changes what a customer does next.
Over time, those small decisions become operational reality.
In many businesses, AI is no longer just generating insights. It is influencing actions. Even when a human approves the final step, AI is often shaping the direction of the outcome.
That is where accountability starts to shift.
When AI sits inside workflows that touch customers, finance or compliance, the impact is no longer purely technical. It becomes a business outcome that leaders have to defend.
The pattern is familiar: a pilot proves value, adoption accelerates, AI spreads into more systems, and then the first serious challenge arrives through a complaint, an audit or an unexpected failure. That is when the organization realizes the AI is not simply supporting decisions; it is shaping them at scale.
CIOs are usually pulled in at the moment clarity is needed most. Stakeholders want to know what happened, why it happened and what controls exist to prevent a repeat. Traditional IT governance struggles here because AI does not behave like conventional software. It can degrade over time, behave inconsistently across edge cases and produce confident outputs even when it is wrong.
Deploying AI is not the hard part anymore. Operating it with discipline is. Explainability is one of the controls that makes that possible.
Why explainable AI is emerging now
Explainable AI has been around for years, but it is only recently becoming unavoidable at the leadership level. What has changed is not the idea. It is the pressure around it.
In the last 12 to 24 months, I have seen explainability shift from a “nice to have” to something stakeholders actively ask for. AI is no longer limited to isolated pilots. It is running in production, across multiple functions and often influencing customer outcomes. That creates a simple expectation: if a decision affects a person, a transaction or a business-critical process, the organization should be able to explain what happened and why.
Three forces are driving this.
- Governance and regulatory expectations are catching up. CIOs are being asked to demonstrate oversight, not just performance. Even when specific rules do not apply to a given use case, the direction is clear. Frameworks like the EU AI Act are pushing organizations towards stronger accountability and more defensible decision-making.
- AI systems are becoming more interconnected. Many deployments now combine models, retrieval, workflow automation, business rules and human approvals. When outcomes are shaped by multiple components, it becomes difficult to explain decisions after the fact unless explainability is designed in from the start.
- The tolerance for “mostly right” is shrinking. Early pilots can absorb a few errors. Production systems cannot. If AI influences customer service, fraud detection, HR screening, pricing or compliance workflows, small failure rates become serious issues at scale.
Explainability can no longer be treated as a technical detail to address after the model is built. When AI influences outcomes tied to customers, revenue, compliance or operational risk, the question stops being “does it work?” and becomes “can we stand behind it?” That shift makes explainability a leadership responsibility, not just a data science concern.
For CIOs, explainability now sits alongside core enterprise controls such as security, privacy, resilience and auditability, aligned with expectations around transparency and accountability reflected in the OECD AI Principles. In practice, it shapes what gets approved for production, how systems are governed across their lifecycle, and the decisions made in procurement and architecture. The priority is evidence that supports audits, investigations and stakeholder confidence, not just accuracy.
The agentic effect on opacity and risk
Agentic systems raise the stakes because they do more than generate answers. They plan, choose actions, call tools and move work forward across systems. That is powerful, but it also changes the risk profile.
With a traditional application, you can usually trace a failure back to a specific rule, input or component. With agentic systems, the path is rarely that clean. Decisions can be shaped by multiple steps, multiple prompts, changing context and external data sources. Two runs of the same workflow can produce different outcomes, even when the goal is the same.
This is where opacity becomes a real operational issue and why explainable AI has moved from academic interest to practical necessity, as outlined in work from the Alan Turing Institute.
If an agent triggers an action that affects a customer, a payment or a compliance workflow, you need to know what led to that decision. Not in vague terms, but with enough detail to investigate, explain and correct it. Without that, incidents become harder to diagnose, teams lose confidence and leaders end up limiting adoption to avoid risk.
I have also seen a second-order problem appear with agentic systems: responsibility gets blurred.
When an outcome is driven by a chain of automated steps, ownership can become unclear. Was it the model? The prompt? The retrieval layer? The workflow design? The data source? The integration? The person who approved the rollout? In an enterprise environment, those questions matter because they drive accountability, remediation and future governance.
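To make that concrete, here is one way those questions can become answerable after the fact. This is a minimal sketch in Python, not a reference design: the component names, owning teams and the refund-approval example are all hypothetical, and a real implementation would push these records into whatever logging or audit store the organization already runs. The point is only that every step in the chain records what it consumed, what it produced and who owns it.

```python
# Minimal sketch: per-step tracing for an agentic workflow.
# All names below (components, teams, the refund example) are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any
import json
import uuid


@dataclass
class AgentStep:
    """One step in an agentic run, recorded at the moment it happens."""
    component: str            # e.g. "retrieval", "model", "prompt-template", "workflow-rule"
    action: str               # what the component did, in plain language
    inputs: dict[str, Any]    # the data and context the step consumed
    output: Any               # what the step produced or triggered
    owner: str                # the team accountable for this component
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class AgentRunTrace:
    """The full decision chain for one run, kept as evidence."""
    workflow: str
    run_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    steps: list[AgentStep] = field(default_factory=list)

    def record(self, step: AgentStep) -> None:
        self.steps.append(step)

    def to_json(self) -> str:
        # Serialized so it can be shipped to an existing log or audit store.
        return json.dumps(asdict(self), indent=2, default=str)


# Hypothetical usage: a refund-approval agent run, traced end to end.
trace = AgentRunTrace(workflow="refund-approval")
trace.record(AgentStep(
    component="retrieval",
    action="fetched customer order history",
    inputs={"customer_id": "C-1042"},
    output={"orders": 3, "prior_refunds": 1},
    owner="data-platform-team",
))
trace.record(AgentStep(
    component="model",
    action="recommended approving the refund",
    inputs={"prompt_version": "v7", "context": "order history summary"},
    output={"recommendation": "approve", "confidence": 0.82},
    owner="ml-platform-team",
))
print(trace.to_json())
```

With a trace like this, “was it the model, the prompt or the retrieval layer?” stops being a debate and becomes a lookup.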
What I believe CIOs should prioritize next
If explainability is going to matter in practice, it has to show up in delivery, not just in governance documents. I do not think this is solved by one policy or one specialist team. It comes down to a few choices that shape what gets built, what gets approved and what gets scaled.
- First, I would separate systems that inform from systems that act. A tool that summarizes information is one thing. A system that triggers actions in production is another. The closer it gets to real outcomes, the higher the bar for control and evidence.
- Second, I would require traceability by default. When a system produces an output, I would want to know what it used to reach that conclusion. What data was involved, what context was pulled in, what logic was applied and what happened next. This is not about paperwork. It is about being able to investigate quickly when something looks wrong.
- Third, I would treat explainability as part of the build, not something to bolt on later. If you only start asking for explanations after an incident, you are already behind. It needs to be designed in, tested properly and checked over time as the environment changes.
- Fourth, I would make ownership explicit. Every workflow needs a named business owner and a named technical owner. When an outcome is challenged, it should be obvious who is accountable for the behavior, the controls and the fix. A minimal sketch of such an ownership register follows this list.
- Finally, I would focus on communication as much as engineering. Explainability is only useful if the people who manage risk can understand it. The goal is not to impress specialists. The goal is to give decision makers enough clarity to approve, challenge or stop a system with confidence.
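On the first and fourth points, the ownership piece can be as simple as a register that is checked before anything ships and consulted the moment something is challenged. The sketch below is Python under assumed names: the workflows, teams and tiers are hypothetical, and in practice the register might live in a service catalogue or governance tool rather than code. What matters is that the record exists and that the incident-time question has a single, fast answer.

```python
# Minimal sketch of an ownership register for AI-backed workflows.
# Workflow names, teams and tiers below are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    INFORMS = "informs"   # summarizes or recommends; a person decides
    ACTS = "acts"         # triggers actions in production systems


@dataclass(frozen=True)
class WorkflowRecord:
    name: str
    business_owner: str     # accountable for the outcome
    technical_owner: str    # accountable for the behavior, the controls and the fix
    tier: RiskTier
    approved_for_production: bool


REGISTER: dict[str, WorkflowRecord] = {}


def register(record: WorkflowRecord) -> None:
    REGISTER[record.name] = record


def who_owns(workflow: str) -> WorkflowRecord:
    # The question asked during an incident; it should have one answer.
    if workflow not in REGISTER:
        raise LookupError(f"No registered owner for workflow '{workflow}'")
    return REGISTER[workflow]


# Hypothetical entries.
register(WorkflowRecord("customer-email-summaries", "cx-operations",
                        "ml-platform", RiskTier.INFORMS, True))
register(WorkflowRecord("refund-approval-agent", "finance-operations",
                        "automation-engineering", RiskTier.ACTS, True))

print(who_owns("refund-approval-agent"))
```

The same register is a natural place to attach the higher bar for systems that act rather than inform: a workflow tagged as acting on production systems should not reach approval without the traceability described above.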
From AI capability to AI credibility
For the last couple of years, the focus has been on capability. Bigger models, faster delivery, more automation, more autonomy. That progress is real and it has unlocked serious value.
But I think the next phase will be defined by credibility.
CIOs will not be judged on whether they can deploy new systems. They will be judged on whether those systems can be trusted, governed and defended when challenged. That is a different standard and it changes what “good” looks like.
In that world, explainability is not a constraint. It is an enabler. It is what allows teams to scale adoption without relying on blind trust. It gives leaders a way to approve use cases with confidence, respond to incidents with evidence and improve systems based on facts rather than assumptions.
If I were advising my team as a CIO today, I would not tell them to slow down. I would tell them to build the control layer as they scale. That means designing for traceability, defining ownership and making sure decisions can be explained in plain language.
Because the real risk is not that AI will be too weak. The risk is that it will be deployed faster than the organization can control it.
And once trust is lost, it is very difficult to win back.
This article is published as part of the Foundry Expert Contributor Network.