Agentic AI has replaced generative AI at the top of the technology hype cycle, but there’s one major problem: A standard definition of an AI agent doesn’t yet exist.
With dozens, if not hundreds, of vendors touting their agentic AI products, a lack of definition could lead to confusion as CIOs and other IT leaders seek to purchase and deploy the emerging technology.
Some AI experts define agentic AI as a tool that can make autonomous decisions within the enterprise, learn from past experiences, and adapt its responses, whereas others suggest that any AI with some decision-making functionality qualifies as agentic.
In most cases, vendors aren’t yet offering truly agentic AI with real autonomy, some critics say, but are instead pitching simpler AI chatbots, assistants, or add-ons to large language models (LLMs) as agentic AI. Many so-called agents are just LLM wrappers or “glorified LLM workflows,” says Zach Bartholomew, VP of product at Perigon, provider of an AI-powered and context-based search tool.
The agent bandwagon
There’s a lot of “agent-washing” in the IT industry right now, says Chris Shayan, head of AI at Backbase, a banking software vendor.
“I’ve sat through dozens of vendor pitches where basic automation was rebranded as autonomous agents,” he says. “Many solutions being marketed as agents are actually just traditional algorithms with better interfaces, and there’s a world of difference that CIOs and CTOs are struggling to navigate.”
In Shayan’s definition, true agents can reason through multiple steps and have some independent decision-making authority. For example, the banking industry has begun to implement AI agents that can detect unusual transaction patterns and take appropriate action without constant human supervision, he says.
“True autonomy in software means the ability to handle end-to-end processes independently — from gathering information, analyzing options, executing actions, to learning from outcomes,” Shayan adds. “What distinguishes a true agent from other AI systems is this ability to operate within defined guardrails while adapting to new situations they encounter.”
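To make that description concrete, here is a minimal, hypothetical sketch of the loop Shayan describes: gather information, analyze options, execute actions within guardrails, and learn from outcomes. The class names, thresholds, and fraud heuristic are illustrative assumptions, not any vendor's product or API.

```python
# Minimal sketch of Shayan's agent loop: gather -> analyze -> act -> learn,
# constrained by guardrails. All names and thresholds are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Guardrails:
    """Hard limits the agent may not exceed without escalating to a human."""
    max_autonomous_amount: float = 10_000.0

    def allows(self, action: dict) -> bool:
        return action.get("amount", 0.0) <= self.max_autonomous_amount


@dataclass
class TransactionAgent:
    guardrails: Guardrails
    history: list = field(default_factory=list)

    def gather(self, feed) -> list:
        # Step 1: pull new transactions from some upstream data source.
        return list(feed)

    def analyze(self, transactions: list) -> list:
        # Step 2: flag unusual patterns (placeholder heuristic: large amounts).
        return [t for t in transactions if t["amount"] > 5_000.0]

    def act(self, flagged: list) -> list:
        # Step 3: act inside the guardrails; escalate anything beyond them.
        outcomes = []
        for txn in flagged:
            action = {"type": "hold_transaction", "amount": txn["amount"]}
            status = "executed" if self.guardrails.allows(action) else "escalated_to_human"
            outcomes.append({"action": action, "status": status})
        return outcomes

    def learn(self, outcomes: list) -> None:
        # Step 4: record outcomes so future analysis can adapt (stubbed here).
        self.history.extend(outcomes)

    def run(self, feed) -> list:
        outcomes = self.act(self.analyze(self.gather(feed)))
        self.learn(outcomes)
        return outcomes


if __name__ == "__main__":
    agent = TransactionAgent(guardrails=Guardrails())
    demo_feed = [{"amount": 120.0}, {"amount": 7_500.0}, {"amount": 25_000.0}]
    for outcome in agent.run(demo_feed):
        print(outcome)
```

The point of the sketch is the shape of the loop, not the fraud logic: an agent, in this definition, owns all four steps end to end, whereas a chatbot or "LLM wrapper" typically stops after the analysis step and hands the decision back to a person.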
CIOs on the leading edge of this trend are also finding out that not all business processes are ripe for agentic AI, given the current state of the technology, their available data, and the ways various processes are enmeshed in their business.
This agent isn’t autonomous
Without a clear, standardized definition, IT leaders may purchase products that don’t work as advertised, critics say.
“When everything’s called an agent, CIOs can waste budget on software that doesn’t deliver true autonomy — leading to frustrated teams, wasted resources, and a loss of confidence in AI,” Bartholomew says. “We’re definitely headed towards that future of truly having agents, but I don’t think we’re quite there.”
The confusion can lead to misaligned expectations and poor purchasing decisions, Shayan adds. “When CIOs implement what they believe is an agent-based solution but get glorified automation instead, they miss out on the transformative potential of true agents while still paying the premium,” he says. “This leads to disappointing ROI and can undermine broader AI initiatives.”
The autonomy continuum
Just as there are differing definitions of AI agents, there is some disagreement among AI experts about how serious the problem is. Bartholomew believes true agents are about a year away from deployment, but David Lloyd, CAIO at human resources software vendor Dayforce, sees agentic AI as more of a spectrum of capabilities than a yes-or-no definition.
Many AI tools are beginning to include some level of autonomy, including AI assistants that learn from past user actions and then take actions or make recommendations based on that knowledge, Lloyd says.
“This is a continuum,” he adds. “It’s just that one end is very aspirational, and the other end is very practical.”
To Lloyd, defining agentic AI is less important than finding the right uses for the AI tools organizations are adopting.
“Let’s ask ourselves the question, ‘Does it drive business value or quantifiable value?’” he says. “Because if it doesn’t, then it’s all just wonderful conjecture.”
The overlap between agents and other AIs will continue to blur as LLMs add on functionality that looks more and more like agents, adds Ilia Badeev, head of data science at TrEvolution, a travel software and services provider.
Currently, “AI agent” is more of a marketing label than a well-defined term, Badeev contends, and many vendors are slapping the word “agent” on AI assistants and other tools to get in on the recent hype.
“There is no clear-cut difference between AI agents and assistants,” he says. “It’s nothing more than a marketing differentiation.”
Confused CIOs and IT procurement leaders shouldn’t focus on whether a product is labeled as an agent, but instead should look for the capabilities they need, Badeev recommends. In some cases, IT leaders may need agents, but many other AI tools can be useful.
“The only thing that matters is, what kind of functionality are you getting?” he says. “How accurate is AI within these functionalities? What is the price?”
Ask the right questions
Bartholomew and Lloyd both recommend that CIOs and IT procurement leaders ask a series of questions before they buy an AI agent from a vendor. Lloyd recommends organizations start small, with sequential capabilities, when they deploy agent-like technologies.
“The term I use when I’m talking to people is they need to be deliberate,” he says. “From a business point of view and a procurement point of view, do you have a portfolio of simple and maybe more complex use cases and tasks you have built up in the organization that you’d like to solve?”
If CIOs want an AI agent, they should ask the following questions, Bartholomew says:
- Can it plan and execute multi-step processes on its own?
- Does it learn or improve over time, or is it just running a script?
- What kind of decisions can it handle on its own?
- Can it take meaningful actions without someone hitting “approve”?
- Does it get better over time?
- How well does it integrate with the existing IT stack?
While agents are designed to make decisions on their own, CIOs will also want to retain the option to audit the agent’s actions, Bartholomew adds.
“For the foreseeable future, I think we’re going to have a human in the loop,” he says. “I don’t think it’s going to be that every single time you take an action, you need a human in the loop, but there’s ultimately going to be someone who is overseeing how these things are operating.”
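One way to read that advice in practice is an oversight wrapper around whatever the agent does: every action is written to an audit trail, and only low-impact actions run without a person signing off. The sketch below is a hypothetical illustration of that pattern; the threshold, file path, and function names are assumptions, not a reference to any specific product.

```python
# Minimal sketch of human-in-the-loop oversight: routine actions run
# autonomously, high-impact actions queue for approval, and everything
# is appended to an audit log. All names and limits are hypothetical.
import json
import time

AUTO_APPROVE_LIMIT = 1_000.0  # hypothetical impact threshold for autonomous action


def audit(entry: dict, log_path: str = "agent_audit.log") -> None:
    """Append every proposed action to an audit trail for later review."""
    record = {"timestamp": time.time(), **entry}
    with open(log_path, "a") as fh:
        fh.write(json.dumps(record) + "\n")


def execute_with_oversight(action: dict) -> str:
    """Run low-impact actions autonomously; queue high-impact ones for a human."""
    if action.get("impact", 0.0) <= AUTO_APPROVE_LIMIT:
        status = "executed_autonomously"
    else:
        status = "queued_for_human_approval"
    audit({"action": action, "status": status})
    return status


if __name__ == "__main__":
    print(execute_with_oversight({"name": "flag_transaction", "impact": 250.0}))
    print(execute_with_oversight({"name": "freeze_account", "impact": 50_000.0}))
```

The design choice matches Bartholomew's framing: the human isn't approving every step, but there is always a record to review and a gate on the actions that matter most.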