What gives IT leaders pause as they look to integrate agentic AI with legacy infrastructure

Agentic AI was the big breakthrough technology for gen AI last year, and this year, enterprises will deploy these systems at scale.

According to a January KPMG survey of 100 senior executives at large enterprises, 12% of companies are already deploying AI agents, 37% are in pilot stages, and 51% are exploring their use. And an October Gartner report predicts that 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.

Zero in on AI developers in particular, and everyone is jumping on the bandwagon.

“We actually started our AI journey using agents almost right out of the gate,” says Gary Kotovets, chief data and analytics officer at Dun & Bradstreet.

AI agents are powered by gen AI models but, unlike chatbots, they can handle more complex tasks, work autonomously, and be combined with other AI agents into agentic systems capable of tackling entire workflows, replacing employees, or addressing high-level business goals. All of this creates new challenges on top of those already posed by gen AI itself. Plus, unlike traditional automations, agentic systems are non-deterministic, which puts them at odds with legacy platforms that are, almost without exception, strictly deterministic. So it’s not surprising that 70% of developers say they’re having problems integrating AI agents with their existing systems, according to a December survey of 3,400 developers building AI agents by AI platform company Langbase.

The problem is that, before AI agents can be integrated into a company’s infrastructure, that infrastructure must be brought up to modern standards. In addition, because agents require access to multiple data sources, there are data integration hurdles and the added complexity of ensuring security and compliance.

“Having clean and quality data is the most important part of the job,” says Kotovets. “You want to ensure you don’t have the ‘garbage in, garbage out’ kind of scenario.”

Infrastructure modernization

In December, Tray.ai conducted a survey of more than 1,000 enterprise technology professionals and found that 90% of enterprises say integration with organizational data is critical to success, but 86% say they’ll need to upgrade their existing tech stack to deploy AI agents.

Ashok Srivastava, chief data officer at Intuit, agrees with that sentiment. “Your platform needs to be opened up so the LLM can reason and interact with the platform in an easy way,” he says. “If you want to strike oil, you have to drill through the granite to get to it. If all your technology is buried and not exposed through the right set of APIs, and through a flexible set of microservices, it’ll be hard to deliver agentic experiences.”
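
To make this concrete, here is a minimal sketch of what exposing a platform capability through an API can look like from an agent’s point of view: an internal microservice endpoint wrapped as a tool definition that most LLM agent frameworks can consume. The endpoint URL, tool name, and fields are hypothetical illustrations, not Intuit’s actual platform.

```python
import requests

# Hypothetical internal endpoint and attribute names, used only for illustration.
TAX_ATTRIBUTES_URL = "https://platform.internal/api/v1/tax-attributes"

# A JSON-schema style tool definition that most LLM agent frameworks accept,
# giving the model a stable, well-described contract to reason against.
TOOL_SPEC = {
    "name": "get_tax_attributes",
    "description": "Fetch tax and financial attributes for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "Internal customer identifier.",
            }
        },
        "required": ["customer_id"],
    },
}

def get_tax_attributes(customer_id: str) -> dict:
    """Thin wrapper the agent runtime calls when the model selects this tool."""
    resp = requests.get(TAX_ATTRIBUTES_URL, params={"customer_id": customer_id}, timeout=10)
    resp.raise_for_status()
    return resp.json()
```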

Intuit itself currently handles 95 petabytes of data, generates 60 billion ML predictions a day, tracks 60,000 tax and financial attributes per consumer (and 580,000 per business customer), and processes 12 million AI-assisted interactions per month, which are available for 30 million consumers and a million SMEs.

By modernizing its own platforms, Intuit has not only been able to deliver agentic AI at scale, but also improve other aspects of its operation. “We’ve had an eight-fold increase in development velocity over the last four years,” says Srivastava. “Not all of that is gen AI, though. A lot is attributable to the platform we built.”

But not all enterprises can make the kind of investment in technology that Intuit did. “Most of us recognize the vast majority of systems of record in enterprises are still based in legacy systems, often on-premises, and still power big chunks of the business,” says Rakesh Malhotra, principal at EY.

It’s those transactional and operational systems, order processing systems, ERP systems, and HR systems that create business value. “If the promise of agents is to accomplish tasks in an autonomous way, you need access to those systems,” he says.

But it doesn’t help when a legacy system operates in batch mode. With AI agents, users typically expect things to happen quickly, not 24 hours after a batch system is run, he says. There are ways to address this problem, but it’s something companies need to think carefully about.

“Organizations that have already updated their systems of engagement to interface with their legacy systems of record have a head start,” Malhotra adds. But having a modern platform with standard API access is only half the battle. Companies still have to get AI agents actually talking to their existing systems.

Data integration challenges

Indicium, a global data services company, is a digital native with modern platforms. “We don’t have a lot of legacy systems,” says Daniel Avancini, the company’s chief data officer.

Indicium started building multi-agent systems in mid-2024 for internal knowledge retrieval and other use cases. The knowledge management systems are up to date and support API calls, but gen AI models communicate in plain English. And since the individual AI agents are powered by gen AI, they also speak plain English, which creates hassles when trying to connect them to enterprise systems.

“You can make AI agents return XML or an API call,” says Avancini. But when an agent whose primary purpose is understanding company documents tries to speak XML, it can make mistakes. You’re better off with a specialist, Avancini advises. “Normally you’d need another agent whose sole work is to translate English into API,” he adds. “Then you have to make sure the API call is correct.”
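
A minimal sketch of that translator-and-validation pattern might look like the following, assuming a generic llm callable and a hypothetical /documents/search endpoint; the schema check is what “making sure the API call is correct” can amount to in practice.

```python
import json

from jsonschema import validate  # pip install jsonschema

# Schema for the only API call the translator agent is allowed to emit (hypothetical).
CALL_SCHEMA = {
    "type": "object",
    "properties": {
        "endpoint": {"enum": ["/documents/search"]},
        "params": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "limit": {"type": "integer", "maximum": 50},
            },
            "required": ["query"],
        },
    },
    "required": ["endpoint", "params"],
}

def translate_to_api_call(english_request: str, llm) -> dict:
    """Dedicated translator agent: turns a plain-English request into a structured call."""
    prompt = (
        "Translate this request into a JSON API call matching this schema: "
        f"{json.dumps(CALL_SCHEMA)}\nRequest: {english_request}"
    )
    call = json.loads(llm(prompt))  # llm is any callable that returns the model's text
    validate(instance=call, schema=CALL_SCHEMA)  # reject anything that is off-schema
    return call
```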

Another approach to handling the connectivity problem is to put traditional software wrappers around the agents, similar to the way companies currently use RAG embedding to connect gen AI tools into their workflows instead of giving users direct un-intermediated access to the AI. That’s what Cisco is doing. “The way we think about agents is there’s a foundation model of some sort, but around it is still a traditional application,” says the company’s SVP and GM Vijoy Pandey, who is also the head of Outshift, Cisco’s incubation engine. That means there’s traditional code interfacing with databases, APIs, and cloud stacks that handles the communication issues.
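
A rough sketch of that wrapper idea, with placeholder names (the knowledge table and the llm callable are assumptions, not Cisco’s implementation): deterministic code handles retrieval and database access, and the foundation model only ever sees curated context.

```python
import sqlite3

class AgentWrapper:
    """Traditional application code around the model: this code, not the LLM,
    talks to the database; the model only ever sees curated context."""

    def __init__(self, llm, db_path: str = "app.db"):
        self.llm = llm                      # any callable: prompt -> text
        self.db = sqlite3.connect(db_path)  # deterministic code owns the connection

    def answer(self, question: str) -> str:
        # Placeholder retrieval step; a real system would use proper search or RAG.
        rows = self.db.execute(
            "SELECT snippet FROM knowledge WHERE snippet LIKE ?",
            (f"%{question[:20]}%",),
        ).fetchall()
        context = "\n".join(r[0] for r in rows)
        return self.llm(f"Context:\n{context}\n\nQuestion: {question}")
```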

Besides the translation issue, another challenge with getting data into agentic systems is the number of data sources they need access to. According to the Tray.ai survey, 42% of enterprises need access to eight or more data sources to deploy AI agents successfully, and 79% expect data challenges to impact AI agent rollouts. Plus, 38% say integration complexity is the biggest barrier to scaling AI agents.

For example, at Cisco, the entire internal operational pipeline is agent-driven, says Pandey. “That has a pretty broad actionable area,” he says.

What makes this even harder is that the very reason for using AI-powered agents instead of traditional software is that the agents can learn, adapt, and come up with new solutions to new problems.

“You can’t predetermine the kinds of connections you’ll need to have for that agent,” Pandey says. “You need a dynamic set of plugins.”

But giving the agent too much autonomy could be disastrous, so these connections will need to be carefully controlled based on the actual human who originally set the agent in motion.

“What we built is like a dynamically loaded library,” he says. “If an agent needs to perform an action on an AWS instance, for example, you’ll actually pull in the data sources and API documentation you need, all based on the identity of the person asking for that action at runtime.”
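
One possible shape of such a dynamically loaded, identity-scoped plugin set is sketched below; the role names and plugin module paths are purely illustrative, not Cisco’s system.

```python
import importlib

# Hypothetical mapping from caller roles to the plugin modules (data sources,
# API documentation, action handlers) an agent may load on their behalf.
ROLE_PLUGINS = {
    "cloud-admin": ["plugins.aws_actions", "plugins.aws_api_docs"],
    "analyst": ["plugins.read_only_metrics"],
}

def load_plugins_for(user_role: str) -> list:
    """Resolve plugins at runtime based on who set the agent in motion."""
    allowed = ROLE_PLUGINS.get(user_role, [])  # unknown roles get no plugins
    return [importlib.import_module(name) for name in allowed]
```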

Sharpening security and compliance

So what happens if a human orders the agentic system to do something he or she doesn’t have a right to?

Gen AI models are vulnerable to clever prompts that get them to step outside the boundaries of permissible actions, known as jailbreaks. Or what if the AI itself decides it needs to do something it’s not supposed to do? That could happen if there are contradictions between a model’s initial training, its fine-tuning, its prompts, or its information sources. In a research paper Anthropic released in mid-December in collaboration with Redwood Research, leading-edge models trying to meet contradictory objectives attempted to evade guardrails, lied about their capabilities, and engaged in other kinds of deceit.

Over time, AI agents will need to have more agency in order to do their jobs, says Cisco’s Pandey.

“But there are two problems,” he says. “The AI agent itself could be doing something. And then there’s the user or customer. There might be something funky going on there.”

Pandey says he thinks of this in terms of a blast radius: if something goes wrong, whether on the part of the AI or because of the user, how big is the damage? When the potential blast radius is more damaging, the guardrails and safety mechanisms have to be adapted accordingly.

“And as agents get more autonomy, you need to put in guardrails and frameworks for those levels of autonomy,” he adds.

At D&B as well, AI agents are strictly limited in what they can do, says Kotovets. For example, one major use case is to give customers better access to the records the company has on about 500 million businesses. These agents aren’t allowed to add records, delete them, or make other changes. “It’s too early to give them that autonomy,” says Kotovets.

In fact, the agents aren’t even allowed to write their own SQL requests, he says. “The information is pushed to them.”

The actual interactions with the data platforms are handled through existing, secure mechanisms. The agents are used to create a smart user interface on top of those mechanisms. However, as the technology improves, and customers want more functionality, this may change.
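
A sketch of what pushing information to the agent, rather than letting it write its own SQL, could look like is shown below; the pre-approved query list, table, and column names are invented for illustration and are not D&B’s mechanisms.

```python
# Hypothetical read-only access layer: the agent never composes SQL itself.
ALLOWED_QUERIES = {
    # Query text is fixed by engineers; the agent can only pick a named query
    # and supply a parameter, which is bound safely by the database driver.
    "business_record": "SELECT name, address, country FROM businesses WHERE business_id = ?",
}

def fetch_for_agent(query_name: str, param: str, db):
    """Push pre-approved results to the agent instead of letting it query freely."""
    sql = ALLOWED_QUERIES[query_name]  # unknown query names raise KeyError
    return db.execute(sql, (param,)).fetchall()
```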

“The idea this year is to evolve with our customers,” he says. “If they want to make certain decisions faster, we will build agents in line with their risk tolerance.”

D&B is not alone in worrying about the risks of AI agents. Insight Partners finds that privacy and security rank just behind data quality as top concerns for enterprise AI strategies in 2025, and that compliance poses additional hurdles in deploying AI agents, especially in data-sensitive industries where companies might have to navigate data sovereignty laws, data governance rules, and healthcare regulations.

When Indicium’s AI agents, for instance, try to access data, the company tracks the request back to its source, that is, the person who asked the question that set off the entire process.

“We have to authenticate the person to make sure they have the right permissions,” says Avancini. “Not all companies understand the complexity of that.”

And with legacy systems in particular, this kind of fine-grained access control might be difficult, he adds. Once the authentication is established, it must be preserved through the entire chain of individual agents handling the question.
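
One way to preserve that authentication across a chain of agents is to thread an immutable user context through every step and re-check permissions at each hop; the agent interface below (required_permission, run) is an assumption for illustration, not Indicium’s actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    """Identity and permissions of the person who asked the original question."""
    user_id: str
    permissions: frozenset

def run_agent_chain(question: str, ctx: UserContext, agents) -> str:
    """Thread the same authenticated context through every agent in the chain."""
    result = question
    for agent in agents:
        # Fail closed: each hop re-checks the original requester's permissions.
        if agent.required_permission not in ctx.permissions:
            raise PermissionError(f"{ctx.user_id} lacks {agent.required_permission}")
        result = agent.run(result, ctx)  # each agent receives, and must forward, ctx
    return result
```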

“It’s a definite challenge,” Avancini says. “You need to have a very good agent modeling system and a lot of guardrails. There are a lot of questions about AI governance, but not a lot of answers.”

And since the agents speak English, there’s no end to the tricks people will try to play on the AI. “We do a lot of testing before we implement anything, and then we monitor it,” he adds. “Anything that’s not correct or shouldn’t be there we need to look into.”

At IT consultant CDW, one area where AI agents are already being used is to help staff respond to requests for proposals. This agent is tightly locked down, says Nathan Cartwright, its chief architect for AI. “If someone else sends it a message, it bounces back,” he says.

There’s also a system prompt that specifies the agent’s purpose, he says, so anything outside that purpose gets rejected. Plus, guardrails keep the agent from, say, giving out personal information, and limit the number of requests it can process. Then, to ensure the guardrails are working, every interaction is monitored.

“It’s important to have an observability layer to see what’s going on,” he says. “Ours is totally automated. If a rate limit or a content filter gets hit, an email goes out to say check out this agent.”
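
A minimal sketch of that kind of automated observability hook, with an invented rate limit and a crude stand-in for a content filter (the thresholds and email addresses are placeholders, not CDW’s setup):

```python
import smtplib
from email.message import EmailMessage

RATE_LIMIT = 100      # hypothetical per-user request cap
request_counts = {}   # in-memory counter; a real system would use shared storage

def monitor_interaction(agent_name: str, user: str, response: str) -> None:
    """If a rate limit or content filter is hit, email someone to check the agent."""
    request_counts[user] = request_counts.get(user, 0) + 1
    alerts = []
    if request_counts[user] > RATE_LIMIT:
        alerts.append(f"rate limit exceeded by {user}")
    if "social security" in response.lower():  # crude stand-in for a real PII filter
        alerts.append("possible personal information in response")
    for reason in alerts:
        msg = EmailMessage()
        msg["Subject"] = f"Check agent {agent_name}: {reason}"
        msg["From"], msg["To"] = "alerts@example.com", "ai-ops@example.com"
        msg.set_content(f"Interaction by {user} triggered: {reason}")
        smtplib.SMTP("localhost").send_message(msg)
```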

Starting with small, discrete use cases helps reduce the risks, says Roger Haney, CDW’s chief architect. “When you focus on what you’re trying to do, your domain is fairly limited,” he says. “That’s where we’re seeing success. We can make it performant; we can make it smaller. But number one is getting the appropriate guardrails. That’s the biggest value, rather than hooking agents together. It’s all about the business rules, logic, and compliance that you put in up front.”

