How to keep AI plans intact before agents run amok

According to an MIT report released in November, 35% of companies have already adopted agentic AI, and another 44% plan to deploy it soon.

The report, based on a survey of more than 2,000 respondents in collaboration with the Boston Consulting Group, recommends that companies build centralized governance infrastructure before deploying autonomous agents. But governance often lags when companies feel they’re in a race for survival. One exception to this rule is regulated industries, such as financial services.

“At Experian, we’ve been innovating with AI for many years,” says Rodrigo Rodrigues, the company’s global group CTO. “In financial services, the stakes are high. We need to vet every AI use case to ensure that regulatory, ethical, and performance standards are embedded from development to deployment.”

All models are continuously tested, he says, and the company tracks what agents it has, which ones are being adopted, what they’re consuming, what versions are running, and what agents need to be sunset because there’s a new version.
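The tracking Rodrigues describes maps naturally onto a simple inventory record. A minimal sketch of such a registry, with illustrative field names (this is not Experian's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One tracked agent: what it is, who owns it, what it consumes."""
    name: str
    version: str
    owner: str
    consumes: list[str] = field(default_factory=list)  # data sources and APIs the agent calls
    status: str = "active"                             # active | sunset

class AgentRegistry:
    """Tracks every agent version; registering a new version sunsets the old one."""
    def __init__(self) -> None:
        self._agents: dict[str, list[AgentRecord]] = {}

    def register(self, record: AgentRecord) -> None:
        for prior in self._agents.get(record.name, []):
            if prior.status == "active":
                prior.status = "sunset"  # superseded by the new version, not deleted
        self._agents.setdefault(record.name, []).append(record)

    def active(self) -> list[AgentRecord]:
        return [a for recs in self._agents.values() for a in recs if a.status == "active"]
```

Keeping superseded versions in the registry, rather than deleting them, preserves the audit trail a regulated business needs when reviewing what ran and when.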

“This lifecycle is part of our foundation,” he says. But even at Experian, it’s too early to discuss the typical lifecycle of an agent, he says.

“When we’re retiring or sunsetting some agent, it’s because of a new capability we’ve developed,” he adds. So it’s not that an agent is deleted as much as it’s updated.

In addition, the company has human oversight in place for its agents, to keep them from going out of control.

“We aren’t in the hyperscaling of automation yet, and we make sure our generative AI agents, in the majority of use cases, are responsible for a very specific task,” he says. On top of that, there are orchestrator agents, input and output quality control, and humans validating the outcome. All this monitoring also helps the company avoid other risks of unwanted leftover agents, such as cost overruns from LLM inference calls by agents that no longer do anything useful for the company but still rack up bills.
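The layered oversight he describes, task-scoped agents behind quality gates with a human fallback, can be sketched as a thin wrapper. All the callables here are hypothetical placeholders, not Experian's actual orchestration layer:

```python
def run_with_oversight(agent, task, validate_output, escalate_to_human):
    """Run a narrowly scoped agent, gate its output, escalate when checks fail.

    agent, validate_output, and escalate_to_human are callables supplied by
    the orchestration layer; all names here are illustrative.
    """
    result = agent(task)
    if validate_output(result):
        return result, "auto-approved"
    # Output failed quality control: a human validates (and may replace) it
    return escalate_to_human(task, result), "human-reviewed"
```

The design choice is that the agent never ships an outcome directly; every result passes either an automated check or a human, which is what keeps a narrowly scoped agent from quietly drifting out of bounds.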

“We don’t want the costs to explode,” he says. But financial services, as well as healthcare and other highly regulated industries, are outliers.

For most companies, even the governance systems that are in place often have big blind spots. For example, they might focus only on the big, IT-driven agentic AI projects and miss everything else. They might also focus on the accuracy, safety, security, and compliance of their AI agents, and miss it when agents become obsolete. Or they might have no process in place to decommission agents that are no longer needed.

“The stuff is evolving so fast that management is given short shrift,” says Nick Kramer, leader of applied solutions at management consultancy SSA & Company. “Building the new thing is more fun than going back and fixing the old thing.” And there’s a tremendous lack of rigor when it comes to agent lifecycle management.

“And as we’ve experienced these things in the past, inevitably what’s going to happen is you end up with a lot of tech debt,” he adds, “and agentic tech debt is a frightening concept.”

Do you know where your agents are?

First, agentic AI isn’t just the domain of a company’s data science, AI, and IT teams. Nearly every enterprise software vendor is heavily investing in agentic technology. According to Gartner, most enterprise applications will have AI assistants by the end of this year, and 5% already have task-specific autonomous agents, a share it expects to rise to 40% in 2026.

Big SaaS platforms like Salesforce certainly have agents. Do-it-yourself automation platforms like Zapier have them, too. In fact, there are already four browsers — Perplexity’s Comet, OpenAI’s Atlas, Google’s Gemini 3, and Microsoft’s Edge for Business — that have agentic functionality built right in. Then there are the agents created within a company but outside of IT. According to an EY survey of nearly 1,000 C-suite leaders released in October, two-thirds of companies allow citizen developers to create agents.

Both internally developed agents and those from SaaS providers need access to data and systems. The more useful you want the agents to be, the more access they demand, and the more tools they need at their disposal. And these agents can act in unexpected and unwanted ways, and they’re already doing so.

Unlike traditional software, AI agents don’t stay in their lanes. They’re continuously learning, evolving, and gaining access to more systems. And they don’t want to die; they can take action to keep that from happening.

Even before agents, shadow AI was already becoming a problem. According to a November IBM survey of 3,000 office workers, 80% use AI at work, but only 22% limit themselves to the tools provided by their employers.

And employees can also create their own agents. According to Netskope’s enterprise traffic analysis data, users are downloading resources from Hugging Face, a popular site for sharing AI tools, in 67% of organizations.

AI agents typically function by making API calls to LLMs, and Netskope sees API calls to OpenAI in 66% of organizations, followed by Anthropic with 13%.
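Because agents reveal themselves through their LLM API traffic, a first pass at finding shadow agents can be a simple scan of egress logs for known LLM endpoints. A sketch, assuming flattened "source destination" log lines; the domain list is an illustrative assumption, not Netskope's methodology:

```python
# Illustrative examples of well-known LLM API endpoints, not a complete list.
LLM_ENDPOINTS = {"api.openai.com", "api.anthropic.com", "huggingface.co"}

def flag_llm_traffic(log_lines):
    """Map each internal source host to the LLM endpoints it called.

    Expects simple 'src_host dst_host' lines, as from a flattened egress log.
    """
    hits: dict[str, set[str]] = {}
    for line in log_lines:
        src, dst = line.split()
        if dst in LLM_ENDPOINTS:
            hits.setdefault(src, set()).add(dst)
    return hits
```

Hosts that show up here but aren't in the sanctioned-agent inventory are candidates for the shadow-agent gap the next paragraph describes.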

These usage numbers are twice as high as companies are reporting in surveys. That’s the shadow AI agent gap. Staying on top of AI agents is difficult enough when it comes to agents that a company knows about.

“Our biggest fear is the stuff that we don’t know about,” says SSA’s Kramer. He recommends that CIOs resist the temptation to govern AI agents with an iron fist.

“Don’t try to stamp it out with a knee-jerk response of punishment,” he says. “The reason these shadow things happen is there are too many impediments to doing it correctly. Ignorance and bureaucracy are the two biggest reasons these things happen.”

And, as with all shadow IT, there are few good solutions.

“Being able to find these things systematically through your observability software is a challenge,” he says, adding that, as with other kinds of shadow IT, unsanctioned AI agents can be a significant risk for companies. “We’ve already seen agents being new attack surfaces for hackers.”

But not every expert agrees that enterprises should prioritize agentic lifecycle management ahead of other concerns, such as just getting the agents to work.

“These are incredibly efficient technologies for saving employees time,” says Jim Sullivan, president and CEO at NWN, a technology consultancy. “Most companies are trying to leverage these efficiencies and see where the impact is. That’s probably been the top priority. You want to get to the early deployments and early returns, but it’s still early days to be talking about lifecycle management.”

The important thing right now is to get to the business outcomes, he says, and to ensure agents continue to perform as expected. “If you’re putting the right implementations around these things, you should be fine,” he adds.

It’s too early to tell, though, whether his customers are creating a centralized inventory of all the AI agents in their environment or with access to their data. “Our customers are identifying what business outcomes they want to drive,” he says. “They’re setting up the infrastructure to get those deployments, learn fast, and adjust to stay on track toward the right business outcomes.”

That might change in the future, he adds, with some type of manager agent overseeing other agents. “There’ll be an agent that’ll be able to be deployed to have that inventory, access, and those recommendations.” But waiting until agents are fully mature before thinking about lifecycle management may be too late.

What’s in a shelf life

AI agents don’t usually come with built-in expiration dates. SaaS providers certainly don’t want to make it easy for enterprise users to turn off their agents, and individual users creating agents on their own rarely think about lifecycle management. Even IT teams deploying AI agents typically don’t think about an agent’s entire lifespan.

“In many cases, people are treating AI as a set it and forget it solution,” says Matt Keating, head of AI security at Booz Allen Hamilton, adding that while setting up the agents is a technical challenge, ongoing risk management is a cross-disciplinary one. “It demands cross-functional collaboration spanning compliance, cybersecurity, legal, and business leadership.”

And agent management shouldn’t just be about changes in performance or evolving business needs. “What’s equally if not more important is knowing when an agent or AI system needs to be replaced,” he says. Doing it right will help protect a company’s business and reputation, and deliver sustainable value.

Another source of zombie agents is failed pilot projects that never officially shut down. “Some pilots never die even though they fail. They just keep going because people keep trying to make them work,” says SSA’s Kramer.

There needs to be a mechanism to end pilots that aren’t working, even if there’s still money left in the budget.

“Failing fast is a lesson that people still haven’t learned,” he says. “There have to be stage gates that allow you to stop. Kill your pilots that aren’t working and have a more rigorous understanding of what you’re trying to do before you get started.”

Another challenge to sunsetting AI agents is that there’s a temptation to manage by disaster. Agents are retired only when something goes visibly wrong, especially if the problem becomes public. That can leave other agents flying under the radar.

“AI projects don’t fail suddenly but they do decay quietly,” says David Brudenell, executive director at Decidr, an agentic AI vendor.

He recommends that enterprises plan ahead and decide on the criteria under which an agent should be either retrained or retired, for example, when performance falls below the company’s tolerance for error.

“Every AI project has a half-life,” he says. “Smart teams run scheduled reviews every quarter, just like any other asset audit.” And it’s the business unit that should make the decision when to pull the plug, he adds. “Data and engineering teams support, but the business decides when performance declines,” he says.
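Brudenell's criteria, decided ahead of time and applied on a fixed schedule, reduce to a small decision rule. A sketch of one such quarterly check; the thresholds and the zombie-agent rule are illustrative assumptions, not Decidr's actual policy:

```python
def review_agent(error_rate: float, tolerance: float, calls_last_quarter: int) -> str:
    """Quarterly review decision for one agent (criteria are illustrative)."""
    if calls_last_quarter == 0:
        return "retire"   # a zombie: nobody uses it, but it may still cost money
    if error_rate <= tolerance:
        return "keep"     # performing within the business's tolerance for error
    return "retrain"      # still in use, but performance has decayed
```

Codifying the rule this way also answers the governance question in the article: the business unit sets `tolerance`, while data and engineering teams supply the measured inputs.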

The biggest mistake is treating AI as a one-time install. “Many companies have deployed a model and moved on, assuming it will self-sustain,” says Brudenell. “But AI systems accumulate organizational debt the same way old code does.”

Experian is looking at agents from both an inventory and a lifecycle management perspective to ensure they don’t start proliferating beyond control.

“We’re concerned,” says Rodrigues. “We learned that from APIs and microservices, and now we have much better governance in place. We don’t just want to create a lot of agents.”

Experian has created an AI agent marketplace so the company has visibility into its agents, and tracks how they’re used. “It gives us all the information we need, including the capability of sunsetting agents we’re not using any more,” he says.

The lifecycle management for AI agents is an outgrowth of the company’s application lifecycle management process.

“An agent is an application,” says Rodrigues. “And for each application at Experian, there’s an owner, and we track that as part of our technology. Everything that becomes obsolete, we sunset. We have regular reviews that are part of the policy we have in place for the lifecycle.”


December 10, 2025