Every board meeting I’ve sat in on over the past year gets to the same question, usually about 40 minutes in: “What is our AI risk exposure?”
It’s a reasonable question and the answers — model governance, data controls, vendor exposure and regulatory posture — are correct as far as they go.
Those answers don’t tell the complete story, though. The real risk isn’t in the models. (Or at least, that’s not where all the risk is.) It’s in the gap between what leadership thinks is happening with AI inside their organizations and what is actually happening.
That gap is widening fast.
A change in the structure of knowledge work
What’s going on underneath most leadership teams’ radar is that AI has quietly shifted from a tool employees use to a layer employees work inside. The distinction sounds subtle, but it isn’t. It makes a world of difference for your awareness of what your teams are doing, and ultimately for your ability to answer your board.
A tool gets deployed, adopted and measured. An operating layer changes the architecture of work itself, including who does what, how they sequence it and the level of human judgment involved. When AI becomes that layer, the unit of risk and value is no longer the model. It’s the workflow, which is far harder to measure.
I’ve watched this shift happen in real time across many companies and teams. For example, a sales team started using an AI agent to draft renewal briefs, not because IT sanctioned it, but because it was useful. A group of engineers worked alongside tools like Claude Code to scaffold and debug software as a daily practice, not as a pilot. (At this point, AI is perpetually reshaping engineering workflows.) And a customer support operations team used AI to draft responses before a human even read the ticket.
None of this shows up in traditional technology dashboards, and most of it wasn’t centrally deployed. Smart, high-functioning teams make their own additions to their workflows, and these behaviors spread, team by team and quarter by quarter, because they are genuinely useful.
These examples aren’t governance failures. This is what real adoption actually looks like. For many organizations, though, this shift means the structure of knowledge work has already changed, and leadership doesn’t have full visibility into how.
Changing the question on AI
The primary risk from this evolution is organizational blindness. That is, leadership making decisions about AI strategy, investment and governance based on a picture of adoption that is months behind reality.
Hallucinations, vendor lock-in and regulatory exposure are all worthy problems that deserve attention, but it’s important to zoom out to the bigger picture of what is shifting.
Two failure modes follow from organizational blindness, and both are understandable given how fast things are moving.
The first is overreaction. Companies tighten their AI usage policies, effectively pushing legitimate experimentation underground. Locking down feels like responsible management, but it deepens the blindness: usage continues, undocumented, invisible to governance and impossible to improve. (It’s also not what you want if one of your teams is developing effective solutions.)
The second failure mode is neglect. In these organizations, there’s widespread adoption with no understanding of which workflows are actually changing business outcomes, and human accountability is quietly disappearing from decisions that require it. This can look like progressive adoption. Sometimes it is. Sometimes it’s just unmanaged complexity accumulating, and that accumulation is its own risk.
The organizations navigating the changing landscape well are doing something simple. They’re shifting the question from “Are people using AI?” to something more nuanced: “Which workflows are changing, and what’s happening to outcomes?” That’s where CIOs should focus their time, rather than falling into either failure mode of over-restricting or neglecting.
The shift CIOs must understand
All of this change implies a shift in what boards should actually be asking CIOs.
It’s no longer just, “Are we in compliance with our AI policies?” It’s a range of questions:
- “Where are agents already being used in our workflows?”
- “Which business outcomes are moving?”
- “Where is human judgment still essential? Is it there?”
Even with these specific questions, a bigger one often gets ignored: “What capacity has AI created, and what are we doing with it?”
AI frequently generates organizational capacity before it generates financial results. A team that can produce proposals twice as fast has more time to sell. A support team resolving tickets more quickly can absorb higher volume. Whether that capacity becomes margin expansion, growth or simply absorbed idle time is a management decision. Many organizations are sitting in that gap right now: the capacity has arrived, but no deliberate choice has been made about it.
A starting framework for CIOs
This environment suggests a four-area framework CIOs can use to improve their own awareness and, ultimately, the visibility they give their boards:
- Look at workflow penetration. Understand which processes now have AI participating in them, even informally. This means going beyond IT-sanctioned deployments and actually asking teams how work gets done. A simple quarterly survey of department leads — which tasks are you using AI for, and which outputs go directly into decisions — will surface adoption that never appeared in a procurement record.
- Look for human accountability signals. Assess where human judgment should be in the loop and make sure it’s there. Approval rates and escalation patterns can offer early clues about whether accountability is holding or quietly eroding.
- Look at usage patterns. Are people working inside sanctioned tools and within defined guardrails? Understanding where governance gaps are forming before they become incidents matters more than enforcement.
- Look at capacity conversion. Where AI has created time or throughput, what is the organization doing with it? This is the metric most boards never ask for and most CIOs aren’t yet tracking — but it’s where the strategic value question actually lives. It’s also where regulators are beginning to probe. As AI enters consequential workflows, the question of whether freed capacity came at the cost of human oversight is one that governance frameworks in financial services, healthcare and federal contracting are starting to formalize. Getting ahead of that measurement is easier than retrofitting it.
None of these requires a mature AI program to start measuring, but each requires focus and discipline to surface. And if you are a CIO managing a board’s expectations, this is where you should put most of your energy.
Creating visibility to lead
Boards are good at governing what they can see: financial controls, capital allocation and strategic risk. The mechanisms to measure them exist because the visibility exists.
AI is forcing a new kind of visibility problem. (And, it’s important to note, boards are themselves wrestling with how they should evolve as AI advances, as this McKinsey and Co. research highlights.) The technology is moving at a pace that outruns traditional reporting cycles. The adoption is happening laterally, at the workflow level and in ways that don’t surface through normal IT channels. The decisions that matter most — where human judgment stays in the loop or what to do with open bandwidth — are exactly the ones that aren’t getting made explicitly.
The CIOs who will serve their boards best in the next few years are the ones building operational intelligence about how work is changing, not just system-level reporting about what’s deployed. They’re creating visibility.
The boards that will govern AI well are the ones demanding that intelligence and asking the question underneath the question.
Risk exposure is a start. Understanding how work itself is changing is where it leads.
This article is published as part of the Foundry Expert Contributor Network.