Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Agentic AI has big trust issues

Enterprises are deploying AI agents at a rapid pace, but serious doubts about agentic AI accuracy suggest potential disaster ahead, according to many experts.

The irony facing AI agents is that they need decision-making autonomy to provide full value, but many AI experts still see them as black boxes, with the reasoning behind their actions invisible to deploying organizations. This lack of decision-making transparency creates a potential roadblock to the full deployment of agents as autonomous tools that drive major efficiencies, they say.

The trust concerns voiced by many AI practitioners don’t seem to be reaching potential users, however, as many organizations have jumped on the agent hype train.

About 57% of B2B companies have already put agents into production, according to a survey released in October by software marketplace G2, and several IT analyst firms expect huge growth in the AI agent market in the coming years. For example, Grand View Research projects a compounded annual growth rate of nearly 46% between 2025 and 2030.
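For context, a 46% compound annual growth rate sustained from 2025 to 2030 implies roughly a 6.6x market expansion over the five-year span. The CAGR figure is Grand View Research's projection; the arithmetic below is purely illustrative:

```python
# Illustrative: what a 46% CAGR implies over 2025-2030 (five compounding years).
cagr = 0.46   # Grand View Research's projected compound annual growth rate
years = 5     # 2025 -> 2030

growth_factor = (1 + cagr) ** years
print(f"Total market expansion: {growth_factor:.1f}x")  # ~6.6x
```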

Many agentic customer organizations don’t yet grasp how opaque agents can be without the right safeguards in place, AI experts suggest. And, even as guardrails roll out, most current tools aren’t yet sufficient to stop agent misbehavior.

Misunderstood and misused

Widespread misunderstandings about the role and functionality of agents could hold back the technology, says Matan-Paul Shetrit, director of product management at agent-building platform Writer. Many organizations view agents as similar to straightforward API calls, with predictable outputs, when users should treat them more like junior interns, he says.

“Like junior interns, they need certain guardrails, unlike APIs, which are a relatively simple thing to control,” Shetrit adds. “Controlling an intern is actually much harder, because they can knowingly or unknowingly do damage, and they can access or reference pieces of information that they shouldn’t. They can hear Glenda talking to our CIO and hear something that is proprietary information.”

The challenge for AI agent developers and user enterprises will be managing the sheer number of agents likely to be deployed, he says.

“You can very easily imagine that an organization of 1,000 people deploys 10,000 agents,” Shetrit contends. “They’re no longer an organization of 1,000 people, they’re an organization of 11,000 ‘people,’ and that’s a very different organization to manage.”

For huge corporations like banks, the agent population could reach 500,000 over time, Shetrit surmises — a situation that would require entirely new approaches to organizational resource management and IT observability and supervision.

“That requires rethinking your whole org structure and the way you do business,” he says. “Until we as an industry solve that, I don’t believe that agent tech is going to be widespread and adopted in a way that delivers on the promise of agents.”

Many organizations deploying agents don’t yet realize there’s a problem that needs to be solved, adds Jon Morra, chief AI officer at advertising technology provider Zefr.

“It’s not well understood in the zeitgeist how many trust issues there are with agents,” Morra says. “The idea of AI agents is still relatively new to people, and a lot of times they’re a solution in need of a problem.”

In many cases, Morra argues, a simpler, more deterministic technology can be deployed instead of an agent. Many organizations deploying the large language models (LLMs) that power agents still appear to lack a basic understanding of the risks, he says.

“People have too much trust in the agents right now, and that’s blowing up in people’s faces,” he says. “I’ve been on a number of calls where people who are using LLMs are like, ‘Jon, have you ever noticed that they get math wrong or sometimes make up stats?’ And I’m like, ‘Yeah, that happens.’”

While many AI experts see faith in agents improving over the long term as AI models improve, Morra believes complete trust will never be warranted because AI will always have the potential to hallucinate.

Workflow friction from autonomy distrust

While Morra and Shetrit believe AI users don’t understand the agent transparency issue, G2’s October research report notes growing trust in agents to perform some tasks, such as autoblocking suspicious IPs or rolling back failed software deployments. Even so, 63% of respondents say their agents need more human supervision than expected. Fewer than half of those surveyed say they trust agents in general to make autonomous decisions, even with guardrails in place, and only 8% are comfortable giving agents total autonomy.

Tim Sanders, chief innovation officer at G2, disagrees with some of the warnings: he sees a lack of trust in agents as a bigger problem than a lack of transparency in the technology. While distrust of a new technology is natural, the promise of agents lies in their ability to act without human intervention, he says.

The survey shows nearly half of all B2B companies are buying agents but not giving them real autonomy, he notes. “This means human beings are having to evaluate and then approve every action,” Sanders says. “And that seems to defeat the entire purpose of adopting agents for the sake of efficiency, productivity, and velocity.”

This trust gap could be costly to organizations that are too cautious with agents, he contends. “They will miss out on billions of dollars of cost savings because they have too many humans in the loop, creating a bottleneck inside agentic workflows,” Sanders explains. “Trust is hard-earned and easily lost. However, the economic and operational promise of agents is actually pushing growth-minded enterprise leaders to extend trust rather than retreat.”
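One middle ground between approving every action and granting total autonomy is a risk-tiered approval gate: the low-risk actions the G2 survey mentions (autoblocking suspicious IPs, rolling back failed deployments) proceed automatically, while anything else queues for a human. The sketch below is hypothetical; the class, action names, and risk tiers are illustrative, not from any specific product:

```python
# Hypothetical sketch of a risk-tiered approval gate: low-risk agent actions
# are auto-approved, everything else queues for human review.
from dataclasses import dataclass, field

# Illustrative allowlist of actions considered safe to run unsupervised.
LOW_RISK_ACTIONS = {"block_suspicious_ip", "rollback_failed_deploy"}

@dataclass
class ApprovalGate:
    pending: list = field(default_factory=list)  # actions awaiting a human

    def submit(self, action: str, payload: dict) -> str:
        if action in LOW_RISK_ACTIONS:
            return f"auto-approved: {action}"
        self.pending.append((action, payload))
        return f"queued for human review: {action}"

gate = ApprovalGate()
print(gate.submit("block_suspicious_ip", {"ip": "203.0.113.7"}))  # runs on its own
print(gate.submit("wire_transfer", {"amount": 10_000}))           # needs a human
```

The design point is that the human bottleneck Sanders describes applies only to the high-risk tier, not to every action.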

Care required

Other AI experts caution enterprise IT leaders to be careful when deploying agents, given the transparency problem AI vendors still need to solve.

Tamsin Deasey-Weinstein, leader of the AI Digital Transformation Task Force for the Cayman Islands, says AI works best with a human in the loop and stringent governance applied, but a lot of AI agents are over-marketed and under-governed.

“Whilst agents are amazing because they take the human out of the loop, this also makes them hugely dangerous,” Deasey-Weinstein says. “We’re selling the prospects of autonomous agents when what we actually have are disasters waiting to happen without stringent guardrails.”

To combat this lack of transparency, she recommends limiting agents’ scope.

“The most trustworthy agents are boringly narrow in their ability,” Deasey-Weinstein says. “The broader and freer rein the agent has, the more that can go wrong with the output. The most trustworthy agents have small, clearly defined jobs and very stringent guardrails.”

She recognizes, however, that deploying highly targeted agents may not be appealing to some users. “This is neither saleable nor attractive to the ever-demanding consumer that wants more work done for less time and skill,” she says. “Just remember, if your AI agent can write every email, touch every document, and hit every API, with no human in the loop, you have something you have no control over. The choice is yours.”
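Deasey-Weinstein’s "boringly narrow" principle can be enforced mechanically by allowlisting the only tools an agent may call. This is a minimal sketch of that idea; the tool names and helper function are hypothetical:

```python
# Hypothetical sketch of a narrowly scoped agent: its entire job is defined
# by a small tool allowlist, and anything outside it is refused.
ALLOWED_TOOLS = {"lookup_invoice", "email_invoice_copy"}  # the agent's whole job

def call_tool(tool: str, **kwargs):
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is outside this agent's scope")
    return f"ran {tool} with {kwargs}"

print(call_tool("lookup_invoice", invoice_id="INV-42"))
try:
    call_tool("delete_customer_record", customer_id=7)  # refused by design
except PermissionError as e:
    print(e)
```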

Many AI experts also believe autonomous agents are best deployed to make low-risk decisions. “If a decision affects someone’s freedom, health, education, income, or future, AI should only be assisting,” Deasey-Weinstein says. “Every action has to be explainable, and with AI it is not.”

She recommends frameworks such as the OECD AI Principles and the US NIST AI Risk Management Framework as guides to help organizations understand AI risk.

Observe and orchestrate

Other AI practitioners point to the emerging practice of AI observability as a solution to agent misbehavior, though some caution that observability tools alone may not diagnose an agent’s underlying issues.

Organizations using agents can deploy an orchestration layer that manages lifecycle, context sharing, authentication, and observability, says James Urquhart, field CTO at AI orchestration vendor Kamiwaza AI.

Like Deasey-Weinstein, Urquhart advocates for agents to have limited roles, and he compares orchestration to a referee that can oversee a team of specialist agents. “Don’t use one ‘do-everything’ agent,” he says. “Treat agents like a pit crew and not a Swiss army knife.”

AI has a trust problem, but it’s an architectural issue, he says.

“Most enterprises today can stand up an agent but very few can explain, constrain, and coordinate a swarm of them,” he adds. “Enterprises are creating more chaos if they don’t have the control plane that makes scale, safety, and governance possible.”
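Urquhart’s "pit crew" control plane can be pictured as a router that dispatches each task to a narrow specialist agent and records every decision for observability. The sketch below is a hypothetical illustration; the agent names, routing table, and audit log are invented for the example:

```python
# Hypothetical sketch of an orchestration layer: a control plane routes each
# task to a registered specialist agent and logs every decision.
audit_log = []  # observability: every routing decision is recorded

SPECIALISTS = {
    "network": lambda task: f"network-agent handled: {task}",
    "deploy":  lambda task: f"deploy-agent handled: {task}",
}

def orchestrate(domain: str, task: str) -> str:
    if domain not in SPECIALISTS:
        audit_log.append(("rejected", domain, task))
        raise ValueError(f"no specialist registered for '{domain}'")
    audit_log.append(("dispatched", domain, task))
    return SPECIALISTS[domain](task)

print(orchestrate("network", "block suspicious IP"))
print(orchestrate("deploy", "roll back failed release"))
print(f"audit trail: {len(audit_log)} entries")
```

The point of the registry is the constraint Urquhart names: no single "do-everything" agent exists, and every action passes through a layer that can explain and constrain it.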


November 13, 2025
