Agentic AI has quickly become one of the most loaded terms in enterprise technology. Vendors promise systems that can make decisions and act autonomously, moving AI beyond assistance and into execution. For CIOs under pressure to deliver measurable returns from AI investments, the appeal is obvious. But behind the momentum, a growing number of enterprises are hitting the pause button.
Gartner predicts that more than 40% of agentic AI projects will be canceled by the end of 2027. The reasons aren’t mysterious, says Anushree Verma, senior director analyst at Gartner. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and often misapplied,” she says.
Part of the problem, according to Gartner, is that the market itself has been muddled by what Verma calls "agent washing." As enthusiasm for agentic AI has surged, many vendors have rebranded existing chatbots or gen AI assistants as agents without delivering meaningful outcomes. "Most agentic AI propositions lack significant value or ROI, as current models don't have the maturity and agency to autonomously achieve complex business goals, or follow nuanced instructions over time," she says.
That mismatch often doesn't become visible until projects move beyond pilots into complex operational settings. With so many pilots failing to reach real deployment, costs are rising, and so is pressure from leadership to justify continued investment. As a result, projects are increasingly being paused or canceled altogether.
The coming wave of cancellations, though, is less about the technology failing outright and more about a mismatch between expectations and operational reality. Enterprises are discovering that autonomy is far harder and more expensive to deploy than early demos suggest.
When pilots stop telling the truth
In early trials, agentic AI often looks promising. Narrow focus, clean data, and heavy human oversight create conditions where systems appear capable and efficient. But those conditions rarely survive first contact with production environments.
Verma points to value framing as an early warning sign. "If we're still talking about time savings and individual productivity, that's not justifiable for the investment clients are making," she says. Agentic systems, she argues, must be tied directly to functional business outcomes, showing value in areas like finance, HR, security, or operations, or they'll struggle to survive scrutiny from leadership teams.
Jeremy Ung, CTO at cloud-based software provider BlackLine, sees the same pattern from the vendor side. “Pilots are often really promising,” he says. “You get exciting results in an isolated environment.” The problem emerges at scale. Documents vary in structure. Exceptions multiply. Human users behave inconsistently. “Scaling is where I see most of them fail,” Ung adds.
Once agentic systems are embedded in real workflows, reversibility becomes difficult. If an autonomous process produces inconsistent results, enterprises need to understand not just what went wrong, but how the system reasoned its way there. Without that visibility, rollback is risky and slow.
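Reconstructing how a system "reasoned its way there" typically depends on a structured audit trail captured at every agent step. A minimal sketch of that idea, assuming each step can be recorded with illustrative field names (none of these come from a specific product):

```python
# Sketch of a decision audit trail for an autonomous workflow.
# All field names (agent, tool, rationale, etc.) are illustrative assumptions.
import json
import time

audit_log: list[dict] = []

def record_step(agent: str, tool: str, inputs: dict,
                output: str, rationale: str) -> None:
    """Append one reasoning/action step so a decision can be reconstructed later."""
    audit_log.append({
        "ts": time.time(),
        "agent": agent,
        "tool": tool,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    })

# Example: a hypothetical reconciliation agent flags a mismatch.
record_step("reconciler", "ledger_lookup", {"account": "AR-104"},
            "balance mismatch: $1,250", "flagged for exception review")

# Replaying the log is what makes rollback decisions defensible:
for step in audit_log:
    print(json.dumps({k: step[k] for k in ("agent", "tool", "rationale")}))
```

The design choice here is that the rationale is logged at the moment of action, not reconstructed after the fact, which is what makes the trail usable when a rollback is being debated.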
Change management compounds the challenge. As Ung puts it, this is the first time the workforce is managing humans and AI agents at the same time. Training people to supervise autonomous systems, and trust them appropriately, has proven harder than many organizations expected.
The cost models break first
Even when pilots deliver apparent value, economics often derail expansion. Agentic systems consume resources very differently from traditional enterprise software. Each autonomous task can trigger multiple reasoning steps, tool calls, retries, and validations. “As you get more complex workflows, multiple tokens are consumed in the process,” Ung explains. “And as you move toward agentic workflows, they consume more resources to do independent work.”
This makes costs volatile and difficult to forecast. Token-based pricing fluctuates with behavior, not capacity, confounding finance teams accustomed to predictable infrastructure spend. Boards also increasingly ask why AI costs resemble open-ended operating expenses rather than bounded investments with defined returns.
Verma notes that many enterprises miscalculate costs because they apply gen AI assumptions to agentic systems. “It’s still relying on simple LLM cost criteria, which isn’t true for agents,” she says. “When you add orchestrators, governance layers, and multiple agents, costs start escalating very quickly.”
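The escalation Verma describes can be made concrete with a back-of-the-envelope comparison. The prices and token counts below are purely illustrative assumptions, not vendor rates; the point is the multiplier, not the dollar figures:

```python
# Hypothetical cost sketch: one LLM call vs. a multi-agent workflow.
# The per-token price and all token counts are illustrative assumptions.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended input/output price, USD

def call_cost(tokens: int) -> float:
    """Cost of one model call at the assumed price."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# A simple gen AI assistant: one prompt, one response.
single_call = call_cost(2000)

# An agentic workflow: planner, tool-calling steps with a retry,
# a validator, and an orchestrator re-reading accumulated context.
steps = [
    3000,              # planner reasons over the task
    2500, 2500, 2500,  # three tool-calling steps
    2500,              # one retry after a failed tool call
    4000,              # validator re-checks the transcript
    5000,              # orchestrator summarizes growing context
]
agentic = sum(call_cost(t) for t in steps)

print(f"single call: ${single_call:.3f}")
print(f"agentic workflow: ${agentic:.3f} ({agentic / single_call:.0f}x)")
```

Even this toy workflow lands at roughly an order of magnitude more than a single call, which is why forecasting models built on "simple LLM cost criteria" break down.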
As a result, some organizations are narrowing scope deliberately, while others are freezing expansion altogether until cost controls mature.
When agentic AI reaches the boardroom
As agentic AI projects grow more visible and expensive, they’re also moving out of the IT silo and into board-level conversations. That shift is proving uncomfortable for many organizations.
Unlike earlier waves of automation, agentic AI introduces risks that are harder to delegate downward. Autonomous systems now make decisions, trigger actions, and interact with customers and financial systems in ways that directly affect enterprise liability. As a result, CIOs are increasingly being asked not just whether a system works, but whether it can be defended.
Gartner’s Verma notes that this is where many initiatives falter. “Governance and risk controls aren’t really designed precisely for agentic systems at this time,” she says, particularly when multiple agents interact and access different applications. As autonomy increases, so does the difficulty of answering basic governance questions like who approved this behavior, under what conditions, and with what safeguards.
Boards are also pressing for clarity on accountability. When an agent makes a poor decision, responsibility doesn’t disappear into the model. It lands with executives who approved deployment. That reality is forcing enterprises to treat agentic AI less like experimental innovation and more like core infrastructure subject to the same scrutiny as financial systems or cybersecurity controls.
For many organizations, this moment marks a turning point. Projects that can’t be explained clearly and justified economically are no longer quietly tolerated. They’re explicitly questioned, and often stopped.
Autonomy meets real-world complexity
Contrary to popular belief, model accuracy isn’t the primary constraint on agentic AI. The deeper challenge lies in deploying autonomous systems into environments defined by fragmentation, exceptions, and uncertainty.
“The hardest problem isn’t the modeling,” says Udo Sglavo, VP of applied AI and modeling at SAS. “It’s putting agents into the operational environment.” Enterprises, he notes, are full of partial failures, delayed integrations, and edge cases that compound quickly when systems act autonomously.
Humans handle these situations using judgment and experience. Agents don’t. “Humans have intuition,” Sglavo says. “An agent doesn’t have any sense that something feels off.” When agents encounter situations they’ve never seen before, the risk of hallucination increases, sometimes with serious consequences.
This is why human-in-the-loop design remains essential. “Most, if not all, implementations we’ve done require it,” says Sglavo. Autonomy works best when systems handle routine cases and surface exceptions, rather than make high-severity decisions independently.
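The pattern Sglavo describes, auto-handling routine cases while surfacing everything else, can be sketched in a few lines. The severity labels, the confidence field, and the 0.9 threshold are illustrative assumptions, not a real product API:

```python
# Minimal human-in-the-loop routing sketch.
# Thresholds, labels, and the `confidence` field are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str
    confidence: float  # agent's self-reported confidence, 0..1
    severity: str      # "routine" or "high"

def route(decision: AgentDecision) -> str:
    """Auto-approve only routine, high-confidence actions;
    everything else is surfaced to a human reviewer."""
    if decision.severity == "routine" and decision.confidence >= 0.9:
        return "auto-approve"
    return "escalate-to-human"

print(route(AgentDecision("match-invoice", 0.97, "routine")))  # auto-approve
print(route(AgentDecision("issue-refund", 0.95, "high")))      # escalate-to-human
```

Note that the high-severity action escalates even at high confidence: severity gates autonomy here, not the model's self-assessment, which is the point Sglavo is making about agents lacking a sense that "something feels off."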
Interpretability and auditability also become gating factors. "If we can't explain why a system acted and reconstruct how a decision unfolded, our customers won't use it," Sglavo says, particularly in regulated industries, where decisions must be defended long after they're made.
Governance becomes the real bottleneck
As agentic AI moves closer to production, governance, not intelligence, emerges as the decisive constraint. Ahmed Zaidi, CEO of AI services provider Accelirate, frames governance across people, process, and technology. On the technical side, enterprises struggle to apply access controls and guardrails to probabilistic systems. “We already have trouble figuring out access control for structured systems,” he says. “Now you’re giving tools to an LLM that may hallucinate.”
Process governance is equally challenging. Manual workflows often contain implicit checks that disappear when automated. Without redesign, automation can accelerate errors rather than reduce them. And people governance adds another layer: training employees, redefining accountability, and preparing organizations for new failure modes.
Zaidi emphasizes that mature governance includes the ability to stop projects. His teams routinely pause or cancel initiatives that combine high risk with unclear or eroding ROI. “Canceling a project doesn’t mean governance failed,” he says. “It means governance worked.”
One recurring pattern, he says, is that the mitigations required to manage risk — additional controls, validation layers, or human oversight — often wipe out the projected return. In those cases, canceling the project is the rational decision.
What actually survives
Despite the growing list of stalled projects, agentic AI isn’t retreating. It’s narrowing. The initiatives that survive share common traits. They focus on task-specific autonomy rather than generalized agents, and operate in constrained environments where inputs and outputs can be bounded. They also define success in terms of measurable business outcomes, not abstract productivity gains.
Verma sees this shift clearly. “We’re moving toward task-specific agents that are incrementally added into existing applications,” she says, adding that the projects that succeed are those that deliver tangible outcomes at the organizational level, not just individual efficiency.
Ung agrees. “It’s not about time saved,” he says. “It’s about outcomes for your business.” Mature deployments tie agent behavior to KPIs and executive dashboards, enabling leaders to assess value and course-correct when results fall short.
According to these experts, one principle stands out: autonomy is earned incrementally. Humans remain embedded at high-severity decision points, rollback paths are designed in advance, and governance is continuous, not reactive.
The next phase of agentic AI adoption will be quieter than the last, with fewer sweeping announcements, more paused initiatives, and more scrutiny from finance and boards. That shift shouldn’t be mistaken for disappointment. It marks the transition of agentic AI from experimentation to accountability.
As Zaidi puts it, enterprises are relearning an old lesson: systems are expected to be perfect, even when humans aren’t, and meeting that expectation requires discipline, not hype. So for CIOs, the question is no longer whether agents can act but whether the organization is prepared to govern, explain, and pay for the consequences when they do.

