Why most agentic AI projects stall before they scale

Agentic AI has quickly become one of the most loaded terms in enterprise technology. Vendors promise systems that can make decisions and act autonomously, moving AI beyond assistance and into execution. For CIOs under pressure to deliver measurable returns from AI investments, the appeal is obvious. But behind the momentum, a growing number of enterprises are hitting the pause button.

Gartner predicts that more than 40% of agentic AI projects will be canceled by the end of 2027. The reasons aren’t mysterious, says Anushree Verma, senior director analyst at Gartner. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and often misapplied,” she says.

Part of the problem, according to Gartner, is the market itself has become muddled by what Verma calls agent washing. As enthusiasm for agentic AI has surged, many vendors have rebranded existing chatbots or gen AI assistants as agents without delivering meaningful outcomes. “Most agentic AI propositions lack significant value or ROI, as current models don’t have the maturity and agency to autonomously achieve complex business goals, or follow nuanced instructions over time,” she says. 

Anushree Verma, senior director analyst, Gartner (image credit: Gartner)

That mismatch often doesn’t become visible until projects move beyond pilots into complex operational settings. With so many pilots failing to move to real deployment, costs are rising, and so is pressure from leadership to justify continued investment. Increasingly, as a result, projects are paused or canceled altogether.

The coming wave of cancellations, though, is less about the technology failing outright and more about a mismatch between expectations and operational reality. Enterprises are discovering that autonomy is far harder and more expensive to deploy than early demos suggest.

When pilots stop telling the truth

In early trials, agentic AI often looks promising. Narrow focus, clean data, and heavy human oversight create conditions where systems appear capable and efficient. But those conditions rarely survive first contact with production environments.

Verma points to value framing as an early warning sign. “If we’re still talking about time savings and individual productivity, that’s not justifiable for the investment clients are making,” she says. Agentic systems, she argues, must be tied directly to functional business outcomes, showing value in areas like finance, HR, security, or operations, or they’ll struggle to survive scrutiny from leadership teams.

Jeremy Ung, CTO at cloud-based software provider BlackLine, sees the same pattern from the vendor side. “Pilots are often really promising,” he says. “You get exciting results in an isolated environment.” The problem emerges at scale. Documents vary in structure. Exceptions multiply. Human users behave inconsistently. “Scaling is where I see most of them fail,” Ung adds.

Jeremy Ung, CTO, BlackLine (image credit: BlackLine)

Once agentic systems are embedded in real workflows, reversibility becomes difficult. If an autonomous process produces inconsistent results, enterprises need to understand not just what went wrong, but how the system reasoned its way there. Without that visibility, rollback is risky and slow.

Change management compounds the challenge. As Ung puts it, this is the first time the workforce is managing humans and AI agents at the same time. Training people to supervise autonomous systems, and trust them appropriately, has proven harder than many organizations expected.

The cost models break first

Even when pilots deliver apparent value, economics often derail expansion. Agentic systems consume resources very differently from traditional enterprise software. Each autonomous task can trigger multiple reasoning steps, tool calls, retries, and validations. “As you get more complex workflows, multiple tokens are consumed in the process,” Ung explains. “And as you move toward agentic workflows, they consume more resources to do independent work.”

This makes costs volatile and difficult to forecast. Token-based pricing fluctuates with behavior, not capacity, confounding finance teams accustomed to predictable infrastructure spend. Boards also increasingly ask why AI costs resemble open-ended operating expenses rather than bounded investments with defined returns.

Verma notes that many enterprises miscalculate costs because they apply gen AI assumptions to agentic systems. “It’s still relying on simple LLM cost criteria, which isn’t true for agents,” she says. “When you add orchestrators, governance layers, and multiple agents, costs start escalating very quickly.”
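Verma’s point can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: every number (step count, tokens per step, retry rate, price, overhead multiplier) is an assumption for the sake of the example, not a vendor figure. It simply shows why applying single-call LLM cost math to a multi-step agentic task understates spend by an order of magnitude or more.

```python
# Hedged sketch: why agentic cost models diverge from simple LLM cost math.
# All numbers below (token counts, retry rates, prices) are illustrative
# assumptions, not vendor figures.

def single_call_cost(tokens: int, price_per_1k: float) -> float:
    """Cost of one plain LLM call -- the 'gen AI assumption'."""
    return tokens / 1000 * price_per_1k

def agentic_task_cost(steps: int, tokens_per_step: int, retry_rate: float,
                      price_per_1k: float, overhead_factor: float = 1.5) -> float:
    """Cost of one agentic task: reasoning steps, tool calls, and retries,
    plus an assumed multiplier for orchestrator and governance layers."""
    effective_steps = steps * (1 + retry_rate)          # retries add steps
    base = effective_steps * tokens_per_step / 1000 * price_per_1k
    return base * overhead_factor                        # orchestration overhead

chat = single_call_cost(tokens=2_000, price_per_1k=0.01)
agent = agentic_task_cost(steps=12, tokens_per_step=3_000,
                          retry_rate=0.25, price_per_1k=0.01)
print(f"chat-style call: ${chat:.2f}, agentic task: ${agent:.2f}")
```

Because cost scales with steps, retries, and overhead rather than with a fixed request size, the same task can cost very different amounts on different runs, which is exactly the volatility finance teams struggle to forecast.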

As a result, some organizations are narrowing scope deliberately, while others are freezing expansion altogether until cost controls mature.

When agentic AI reaches the boardroom

As agentic AI projects grow more visible and expensive, they’re also moving out of the IT silo and into board-level conversations. That shift is proving uncomfortable for many organizations.

Unlike earlier waves of automation, agentic AI introduces risks that are harder to delegate downward. Autonomous systems now make decisions, trigger actions, and interact with customers and financial systems in ways that directly affect enterprise liability. As a result, CIOs are increasingly being asked not just whether a system works, but whether it can be defended.

Gartner’s Verma notes that this is where many initiatives falter. “Governance and risk controls aren’t really designed precisely for agentic systems at this time,” she says, particularly when multiple agents interact and access different applications. As autonomy increases, so does the difficulty of answering basic governance questions like who approved this behavior, under what conditions, and with what safeguards.

Boards are also pressing for clarity on accountability. When an agent makes a poor decision, responsibility doesn’t disappear into the model. It lands with executives who approved deployment. That reality is forcing enterprises to treat agentic AI less like experimental innovation and more like core infrastructure subject to the same scrutiny as financial systems or cybersecurity controls.

For many organizations, this moment marks a turning point. Projects that can’t be explained clearly and justified economically are no longer quietly tolerated. They’re explicitly questioned, and often stopped.

Autonomy meets real-world complexity

Contrary to popular belief, model accuracy isn’t the primary constraint on agentic AI. The deeper challenge lies in deploying autonomous systems into environments defined by fragmentation, exceptions, and uncertainty.

“The hardest problem isn’t the modeling,” says Udo Sglavo, VP of applied AI and modeling at SAS. “It’s putting agents into the operational environment.” Enterprises, he notes, are full of partial failures, delayed integrations, and edge cases that compound quickly when systems act autonomously.

Humans handle these situations using judgment and experience. Agents don’t. “Humans have intuition,” Sglavo says. “An agent doesn’t have any sense that something feels off.” When agents encounter situations they’ve never seen before, the risk of hallucination increases, sometimes with serious consequences.

Udo Sglavo, VP of applied AI and modeling, SAS (image credit: SAS)

This is why human-in-the-loop design remains essential. “Most, if not all, implementations we’ve done require it,” says Sglavo. Autonomy works best when systems handle routine cases and surface exceptions, rather than make high-severity decisions independently.
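One common shape for that design is a severity- and confidence-based gate in front of every agent action: routine, low-risk cases proceed automatically, and everything else is surfaced to a person. The sketch below is a minimal illustration under assumed thresholds and field names, not any vendor's implementation.

```python
# Hedged sketch of a human-in-the-loop gate: the agent acts alone only on
# routine, low-severity cases and escalates everything else. The thresholds
# and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str
    confidence: float   # agent's self-reported confidence, 0..1
    severity: str       # "low", "medium", or "high"

def route(decision: AgentDecision, min_confidence: float = 0.9) -> str:
    """Return 'auto' to let the agent proceed, 'human' to surface the case."""
    if decision.severity == "high":
        return "human"                      # high-severity is never autonomous
    if decision.confidence < min_confidence:
        return "human"                      # low confidence -> exception queue
    return "auto"

print(route(AgentDecision("approve_invoice", 0.97, "low")))   # auto
print(route(AgentDecision("wire_transfer", 0.99, "high")))    # human
```

The key property is that autonomy is the exception path that must be earned, not the default: a case reaches "auto" only by passing every check.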

Interpretability and auditability also become gating factors. “If we can’t explain why a system acted and reconstruct how a decision unfolded, our customers won’t use it,” Sglavo says, particularly in regulated industries where decisions must be defended long after they’re made.
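Reconstructing how a decision unfolded requires the agent to leave an append-only trace of every step it takes, recorded at the time of action rather than inferred afterward. A minimal sketch, with an assumed event schema:

```python
# Hedged sketch: an append-only decision trace so that "why did the agent
# act?" can be answered after the fact. The event schema is an illustrative
# assumption, not a standard.
import json
import time

class DecisionTrace:
    def __init__(self, task_id: str):
        self.task_id = task_id
        self.events = []

    def record(self, step: str, detail: dict) -> None:
        """Append one immutable event: plan, tool_call, escalate, etc."""
        self.events.append({
            "task": self.task_id,
            "ts": time.time(),
            "step": step,
            "detail": detail,
        })

    def export(self) -> str:
        """Serialize the full trace for auditors or regulators."""
        return json.dumps(self.events, indent=2)

trace = DecisionTrace("invoice-4711")
trace.record("plan", {"goal": "match invoice to PO"})
trace.record("tool_call", {"tool": "erp_lookup", "result": "no match"})
trace.record("escalate", {"reason": "exception: missing PO"})
print(trace.export())
```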

Governance becomes the real bottleneck

As agentic AI moves closer to production, governance, not intelligence, emerges as the decisive constraint. Ahmed Zaidi, CEO of AI services provider Accelirate, frames governance across people, process, and technology. On the technical side, enterprises struggle to apply access controls and guardrails to probabilistic systems. “We already have trouble figuring out access control for structured systems,” he says. “Now you’re giving tools to an LLM that may hallucinate.”
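A minimal technical guardrail for the problem Zaidi describes is a per-agent tool allowlist that fails closed: if the model requests a tool it was never granted, the call is refused rather than executed. The agent and tool names below are hypothetical, chosen only to illustrate the pattern.

```python
# Hedged sketch: a per-agent tool allowlist as a minimal guardrail, so a
# hallucinated tool request fails closed instead of executing. All agent
# and tool names are illustrative assumptions.

ALLOWED_TOOLS = {
    "finance_agent": {"read_ledger", "draft_journal_entry"},
    "hr_agent": {"read_directory"},
}

class ToolDenied(Exception):
    pass

def invoke_tool(agent: str, tool: str, payload: dict) -> dict:
    """Check the allowlist before dispatching any tool call the LLM requests."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise ToolDenied(f"{agent} may not call {tool}")
    # ... dispatch to the real tool implementation here ...
    return {"status": "ok", "tool": tool, "payload": payload}

print(invoke_tool("finance_agent", "read_ledger", {"account": "4000"}))
try:
    invoke_tool("finance_agent", "delete_records", {})   # hallucinated tool
except ToolDenied as e:
    print("blocked:", e)
```

Failing closed matters precisely because the requesting system is probabilistic: the guardrail assumes nothing about why the model asked for a tool, only about what it is permitted to touch.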

Process governance is equally challenging. Manual workflows often contain implicit checks that disappear when automated. Without redesign, automation can accelerate errors rather than reduce them. And people governance adds another layer: training employees, redefining accountability, and preparing organizations for new failure modes.

Ahmed Zaidi, CEO, Accelirate (image credit: Accelirate)

Zaidi emphasizes that mature governance includes the ability to stop projects. His teams routinely pause or cancel initiatives that combine high risk with unclear or eroding ROI. “Canceling a project doesn’t mean governance failed,” he says. “It means governance worked.”

One recurring pattern, he says, is that the mitigations required to manage risk — additional controls, validation layers, or human oversight — often wipe out the projected return. In those cases, canceling the project is the rational decision.

What actually survives

Despite the growing list of stalled projects, agentic AI isn’t retreating. It’s narrowing. The initiatives that survive share common traits. They focus on task-specific autonomy rather than generalized agents, and operate in constrained environments where inputs and outputs can be bounded. They also define success in terms of measurable business outcomes, not abstract productivity gains.

Verma sees this shift clearly. “We’re moving toward task-specific agents that are incrementally added into existing applications,” she says, adding that the projects that succeed are those that deliver tangible outcomes at the organizational level, not just individual efficiency.

Ung agrees. “It’s not about time saved,” he says. “It’s about outcomes for your business.” Mature deployments tie agent behavior to KPIs and executive dashboards, enabling leaders to assess value and course-correct when results fall short.

According to these experts, one principle stands out: autonomy is earned incrementally. Humans remain embedded at high-severity decision points, rollback paths are designed in advance, and governance is continuous, not reactive.

The next phase of agentic AI adoption will be quieter than the last, with fewer sweeping announcements, more paused initiatives, and more scrutiny from finance and boards. That shift shouldn’t be mistaken for disappointment. It marks the transition of agentic AI from experimentation to accountability.

As Zaidi puts it, enterprises are relearning an old lesson: systems are expected to be perfect, even when humans aren’t, and meeting that expectation requires discipline, not hype. So for CIOs, the question is no longer whether agents can act but whether the organization is prepared to govern, explain, and pay for the consequences when they do.



February 18, 2026
