For decades, structural engineers and IT teams have shared the same testing logic: apply controlled pressure, find where things give way, and fix it. In IT, that means a server that buckles at scale, a query that times out under load, or a process that degrades when pushed past its limits.
Agentic AI could upend the way we approach testing. When an agent stops, there is no bug to fix and no threshold to raise. The agent is at a dead end: a system it can’t reach, an approval with no interface, a data handoff that lived in someone’s morning routine instead of in the architecture. The failure is not a flaw in what was built, but in what wasn’t.
Humans filled those gaps without anyone noticing until now. An agent can’t. And every place it stops is a precise record of where the enterprise assumed a connection that was never made. These gaps were always load-bearing, patched up and held up by hand. Now you have a blueprint.
The workaround gaps, finally visible
The workarounds that keep enterprises running have never appeared on any org chart or job description. They live in people: the clinical coordinator who reconciles a patient’s discharge summary, current medications and specialist referrals across three separate systems, because those systems never shared data with each other. Or a procurement manager who shoots an email to get missing information, then manually approves a vendor payment because the automated workflow consistently breaks whenever a supplier invoice doesn’t match the purchase order format. Or a junior analyst at a financial institution who re-enters customer identity information from one platform into another during the account opening process, because the two systems store the same field in incompatible formats and no one ever built the translation layer.
This happens when an organization digitizes rather than transforms.
During the ERP rollouts of the 1990s, companies automated individual functions and gave each department its own system of record, leaving the gaps between those systems to be filled by people rather than resolved. The gap-filling became invisible, absorbed into roles, normalized into process and eventually undetectable to the decision-makers in the C-suite.
An AI agent assigned to any of those three tasks will stop at exactly the point where the human used to improvise. Or worse, it will press on, hallucinating its way past a gap that requires judgment, not guesswork.
That’s what makes agentic AI failures structurally different from other kinds of IT failures. Unlike IT failures that point at what the technology got wrong, these highlight what the organization never formalized. They mark exactly where a process should have been automated but was kept alive by human effort instead.
It shouldn’t come as a surprise that a recent study of 1,000 senior executives found that roughly 70% of the barriers to scaling AI trace back to people-and-process issues, compared to just 10% attributed to AI algorithms themselves. Agents didn’t create those gaps. They just happen to be the first tool that can accurately map and show you every place across your enterprise where the load-bearing walls are held up by hand.
The coordination tax
A blueprint helps only if you read it before breaking ground on the next phase. Once organizations see what well-deployed agents can deliver, the instinct is to accelerate: more agents, more functions, faster. That instinct, unexamined, is how structural problems compound.
PwC’s May 2025 survey concluded that deploying agents in isolation cannot deliver meaningful value; the real opportunity lies in orchestrating multiple agents across complex, cross-functional workflows.
When multiple agents operate independently, each optimizing for its own narrow task with no awareness of what the others are doing, they produce conflicting outputs, redundant work and contradictory decisions. Humans get reinserted as arbiters. The automation benefit reverses.
Researchers at Google DeepMind, MIT, the University of Washington and other institutions studying the scaling of multi-agent architectures found that in tool-heavy environments, coordination overhead can grow faster than the productivity gains from adding more agents. Industry analysts examining the work have described this dynamic as a “coordination tax.” When uncoordinated, scale works against you.
Without shared infrastructure connecting agents, organizations don’t resolve the coordination problem; they recreate it one level up, at machine speed, with humans reinserted to arbitrate between agents exactly as they once bridged the gaps between systems.
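The dynamic behind the coordination tax can be made concrete with a toy model: each agent adds a fixed amount of productive work, but every pair of uncoordinated agents adds a small reconciliation overhead. The constants below are illustrative assumptions, not figures from the cited research; the point is the shape of the curve, not the numbers.

```python
# Toy model of the "coordination tax": productive output grows linearly
# with agent count, while pairwise coordination overhead grows
# quadratically. Both constants are hypothetical.

WORK_PER_AGENT = 10.0      # value one agent produces in isolation
OVERHEAD_PER_PAIR = 0.5    # cost of reconciling each pair of agents

def net_value(n_agents: int) -> float:
    """Productive output minus pairwise coordination overhead."""
    gain = WORK_PER_AGENT * n_agents
    pairs = n_agents * (n_agents - 1) / 2
    return gain - OVERHEAD_PER_PAIR * pairs

# Net value rises, peaks, then turns negative as overhead outpaces gains.
for n in (5, 10, 20, 30, 40, 50):
    print(n, net_value(n))
```

Under these assumptions, value peaks around 20 agents and goes negative past 40: adding agents without shared infrastructure eventually destroys value rather than adding it.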
Where the value actually lies
Most organizations are sitting on failure intelligence they haven’t learned to decipher. Every time an agent stalls or halts, it’s pointing at something specific: a system boundary, a data gap, an approval process that exists only because two teams that depend on each other never had a formal connection. Each stopping point is a coordinate. Taken together, they’re a prioritized integration roadmap built from operational reality, not from a whiteboard exercise.
The timing matters. Agentic AI’s share of total AI business value is projected to nearly double by 2028. The integration decisions made now will determine how much of that compounding value any given organization is positioned to capture. It’s time to put on your CAP: catalog, assess, prioritize.
- Catalog where your agents stop, treating each stall as a data point rather than a quality control problem. What type of boundary is it? A system the agent can’t reach? A data field it can’t read? An approval process with no interface? Categorize it.
- Assess those failures by business impact. An agent stalling on a customer-facing handoff carries different weight than one blocked on an internal reporting task. Triage by the value that removing each obstacle would unlock.
- Prioritize investments based on where agent friction is costing the business the most today. Existing vendor relationships, quarterly planning cycles and in-flight infrastructure work all become sharper when they have a fault map to work from. This is your integration roadmap.
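The CAP loop above can be sketched in a few lines of code. This is a minimal illustration, not a reference to any specific agent platform; the event fields, boundary categories and impact weights are all hypothetical placeholders.

```python
# Sketch of the CAP loop: catalog agent stall events, assess each by
# business impact, prioritize workflows by total friction. All names
# and scores here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StallEvent:
    agent: str
    boundary: str        # catalog: e.g. "unreachable_system",
                         # "unreadable_field", "approval_no_interface"
    workflow: str
    customer_facing: bool

def assess(event: StallEvent) -> int:
    """Score business impact; customer-facing handoffs weigh heaviest."""
    base = {"unreachable_system": 3,
            "unreadable_field": 2,
            "approval_no_interface": 1}.get(event.boundary, 1)
    return base * (3 if event.customer_facing else 1)

def prioritize(events: list[StallEvent]) -> list[tuple[str, int]]:
    """Aggregate stall scores per workflow: the integration roadmap."""
    totals: dict[str, int] = {}
    for e in events:
        totals[e.workflow] = totals.get(e.workflow, 0) + assess(e)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

events = [
    StallEvent("billing-agent", "unreachable_system", "invoice-matching", False),
    StallEvent("onboarding-agent", "unreadable_field", "account-opening", True),
    StallEvent("onboarding-agent", "approval_no_interface", "account-opening", True),
]
print(prioritize(events))  # highest-friction workflow first
```

The output ranks workflows by accumulated agent friction, which is exactly the fault map the prioritize step feeds into planning cycles and integration investments.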
Transformation always sounds urgent and rarely feels concrete. Agent failures change that. They hand you a specific system, with specific gaps to be filled. Just as a structural engineer produces a stress report to find where the building needs reinforcement, your agents have been identifying exactly which walls to build and reinforce.
The CIOs who succeed won’t be those who deployed the most agents. They’ll be those who read what their agents reported back when they ran into a gap — and built from there.
This article is published as part of the Foundry Expert Contributor Network.

