On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears, and the question of who is responsible becomes immediate.
As AI moves from experimentation into production, accountability is no longer just a technical concern; it's an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.
AI, after all, may diffuse decision-making, but not legal liability.
AI doesn’t show up in court — people do
Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, argues that when an AI system influences a consequential decision, the algorithm isn't what will show up in court. "It'll be the humans who developed it, deployed it, or used it," she says. For now, however, the deeper uncertainty is that there's very little case law to guide those decisions.
“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.
Jessica Eaves Mathews, founder, Leverage Legal Group
“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”
In most scenarios, responsibility will attach first and foremost to the deploying organization: the enterprise that chose to implement the system. "Saying that we bought it from a vendor isn't likely to be a defense," she adds.
The underlying legal principle is familiar, even if the technology isn't: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making. What changes isn't who's accountable, but how difficult it becomes to demonstrate that appropriate safeguards were in place.
CIO as the system’s last line of defense
If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.
“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.
The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”
Chris Drumgoole, president, global infrastructure services, DXC Technology
The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.
“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.
In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.
The illusion of distributed accountability
Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.
Ojas Rege, SVP and GM, privacy and data governance, OneTrust
Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”
But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”
In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — it often compresses to a single point of failure. "AI doesn't replace responsibility," says Simon Elcham, co-founder and chief AI officer (CAIO) at payment fraud platform Trustpair. "It increases the number of points where things can go wrong."
Simon Elcham, CAIO, Trustpair
And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.
Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that's distributed by design, but not always coherent in execution.
The emergence and limits of the CAIO
To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.
At Hi Marley, the conversational platform for the property and casualty (P&C) insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. "AI Ops owns how we build and run AI internally," he says. "But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances."
The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”
Jonathan Tushman, CTO, Hi Marley
This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.
When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. "In most organizations, that decision escalates, often to the CEO or CFO," says Tushman.
The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.
The widening gap between deployment and governance
If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”
The consequences are beginning to surface. Many organizations lack even a basic inventory of the AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.
The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.
At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.
Where the buck stops
For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.
When systems fail, or when regulators intervene, scrutiny lands on enterprise leadership and, in operational terms, on the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but it doesn't eliminate responsibility. If anything, it magnifies it.
AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.