Skip to content
Tiatra, LLCTiatra, LLC
Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact

Trust in the age of agentic AI systems

By the end of 2025, more than 45 billion non-human and agentic identities — or more than 12 times the number of humans in the global workforce — will be deployed into organizational workflows. Use cases are cross-functional, with nearly two-thirds of AI agents focused on automating critical business processes across HR, finance, sales operations, supply chain management, customer service and administrative tasks.

Agentic AI poses a two-way authentication threat: AI agents can both harvest credentials at scale and exploit them to impersonate legitimate services. 

This recent case is a preview of what could come: AI-enabled chat agents impersonated Salesforce’s Data Loader application by using the real software’s client ID to compromise administrators across multiple organizations, impacting more than a million customers, including those of major cybersecurity vendors. Because the client ID matched an already-approved application, the consent screen was skipped and attackers received valid access tokens invisibly. Traditional monitoring struggled to distinguish between legitimate AI usage and malicious exfiltration.

According to Okta, 23% of IT professionals reported that their AI agents have been tricked into revealing access credentials. Yet, as the earlier mentioned report from the World Economic Forum revealed, only 10% of organizations have a well-developed strategy for managing their non-human and agentic identities. The window to establish authentication safeguards is closing at a clip measured by months, not years.

Deconstructing the agentic AI playing field

Unlike traditional chatbots that simply respond to questions, AI agents are autonomous systems that can plan, make decisions and take actions across multiple systems with minimal human oversight. They are expected to work across networks on end-to-end business processes — processing payroll, approving refunds, managing supply chains, writing code and making financial decisions with access to your most sensitive systems and data.

AI agents are trained to not just advise. They are purposely integrated to be active decision-makers. And that’s where the problem begins. And who created them and is guiding their intent is needed to build trust.

Authenticate agents first, then understand intent

Much as email did before the emergence of DMARC/BIMI authentication standards, our current AI ecosystem needs some form of upfront authentication to establish trust. The key questions to ask before understanding what an agent is meant to do are:

  • Who sent you?
  • Who is allowed to tell you what to do and do I trust them?

In other words, we need to first establish that we trust the entity/people behind the agent before we explore what the agent is meant to accomplish.

Modern security systems have difficulty distinguishing between legitimate and malicious intents. An agent created by your supply chain partner, querying pricing and invoicing, is good. The same agent created by your competitor or a criminal is bad. Emerging standards such as the Linux Foundation’s (Google originated) A2A framework and its agent Cards are great for establishing what the agent is meant to do. Yet the higher-order question “who owns you, who is allowed to tell you what to do and do I trust them?” needs to be addressed upfront.

One way to think about this issue is to draw an analogy to the email space. You receive an email purporting to be from Wells Fargo. Only DMARC/BIMI makes it clear that the email is not from Wells Fargo. Does it really matter what the email says/requests? I would argue the best next step is to delete the email entirely — it is fake.

In a similar way, if an agent’s providence is faked or not trusted, does it even matter what it is meant to do? The agent should be stopped and contained immediately. The initial response should be to deny this agent access.

How attacks exploit the trust gap

The challenge compounds because AI agents operate differently from human users. They have dynamic lifespans, requiring specific permissions for limited periods and access to sensitive information, which forces organizations to rapidly provision and de-provision access.

The fundamental issue isn’t what the agent does, it’s who controls it. Using AI to screen resumes is a perfectly legitimate function. The threat emerges when you can’t verify the identity behind the agent. Just as an email’s content is irrelevant if it is not actually from a legitimate sender.

Attackers can also exploit agents without spoofing their identity — they can hide malicious instructions on web pages, in HTML comments or in invisible images accessible to AI systems. Business documents such as the PDFs your HR agent reviews, the screenshots your customer service agent processes or even routine emails can serve as vehicles for bad actors to gain network access.

What makes AI agents vulnerable is that they’re most threatening when working correctly.

Take this scenario: Your AI agent receives a resume and hidden within that PDF are invisible instructions. As AI evaluates the candidate, the embedded prompts influence the model’s response, resulting in a recommendation regardless of the candidate’s actual qualifications. No systems were breached and no passwords were stolen. The agent simply couldn’t distinguish between legitimate programming and the malicious commands embedded in the content it was processing. In a real case, albeit a less threatening scenario, one job candidate successfully redirected an AI agent’s instructions by embedding a recipe in a resume.

Traditional defenses focus on what the agent does rather than who authorized it. Both conditions are needed: authorized actions and providence. The agent performed its functions properly, but for the benefit of an adversary. This is why authentication must come first.

The authentication foundation we need

Trusting AI agents requires upfront authentication at a foundational level, establishing not only what an agent can do but also who is instructing it to act.

DNS, the internet’s distributed trust anchor, is well-suited to serve as this foundation. It is already a secure, non-hierarchical and globally distributed database: while only domain owners can modify DNS records, anyone can read them. PKI (public key infrastructure) complements this by providing cryptographic certificates that prove an agent’s identity — think of DNS as the registry that lists who owns what and PKI as the passport that proves you are who you claim to be. And every agent should be attached to a DNS record, creating a natural, efficient and upfront authentication mechanism.

An agent’s providence and ownership would be established via DNS prior to deployment. This answers the “who sent you and do I trust them” questions. Only if the agent passes the authentication test should the next level of authorization be investigated (via A2A or other protocols). This allows for the establishment of “who sent you and what do you want to do,” creating a secure and trusted path for agents to carry out their tasks. Fortunately, A2A’s agent cards enable this approach with efficient, low-cost upfront authentication and revocation of billions of agent identities within seconds.

The principle is straightforward. Our domains are our digital identity. Everything a domain issues, like emails, AI agents and IoT devices, should be authenticated at the DNS level before interaction. Just as email authentication protocols help verify senders, DNS-based authentication can authenticate AI agents before they execute actions.

Building trust with the non-human workforce

Any CIO’s go-forward strategy must address three critical requirements: security (verifying the source of instructions before agents act), traceability (maintaining clear audit trails of who instructed what) and scalability (handling billions of agent identities with sub-second authentication).

Before your HR agent processes a resume or your customer service agent reads a support ticket, first verify the request at the DNS level. Is the instruction from a trusted, authenticated source with authority to make this request? Without this step, a single malicious document uploaded by a customer could compromise your entire AI infrastructure without any warning signs.

Without clear insights into agent actions and access patterns, anomalous behaviors can go unnoticed. DNS-based authentication creates the audit trails and verification mechanisms that make agent behavior traceable and accountable.

The authentication challenge is operational and no longer theoretical. As organizations continue to integrate agentic AI into workflows, trust can be built on three fronts:

  • Extend zero-trust identity management principles to non-human actors. Cover AI agents with role-based access controls to ensure least-privileged access. The autonomous nature of AI agents means they can chain together permissions to access resources they shouldn’t have access to. Granular policies must prevent this.
  • Implement continuous authentication. AI agents must undergo real-time authentication checks using ephemeral credentials valid only for specific tasks, which automatically expire after completion. Agents have dynamic lifespans, requiring extremely specific permissions for limited periods and authentication must reflect this reality.
  • Establish DNS-based authentication upfront as the foundation. This isn’t another security layer; it’s the upfront foundational trust layer that makes all other controls effective. Domain trust must encompass everything your organization represents digitally.

The technology exists. Standards are emerging. However, the opportunity to act is narrowing. Organizations that integrate authentication into their AI foundations now will navigate the agentic future securely. Those who delay may face breaches at machine speed against systems designed for human-paced threats.

In an age where 45 billion non-human identities make human-driven decisions, we’re past the point of asking whether we need better authentication. The only question left is whether we’ll implement it before the next breach makes the case for us.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Read More from This Article: Trust in the age of agentic AI systems
Source: News

Category: NewsFebruary 11, 2026
Tags: art

Post navigation

PreviousPrevious post:Your dev team isn’t a cost center — it’s about to become a multiplierNextNext post:Workday vuelve a poner sus riendas en manos de Aneel Bhusri para hacer frente a los retos de la IA

Related posts

Data centers are costing local governments billions
April 17, 2026
Robot Zuckerberg shows how IT can free up CEOs’ time
April 17, 2026
UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap
April 17, 2026
Oracle delivers semantic search without LLMs
April 17, 2026
Secure-by-design: 3 principles to safely scale agentic AI
April 17, 2026
No sólo IA marca la transformación digital de los sectores clave
April 17, 2026
Recent Posts
  • Data centers are costing local governments billions
  • Robot Zuckerberg shows how IT can free up CEOs’ time
  • UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap
  • Oracle delivers semantic search without LLMs
  • Secure-by-design: 3 principles to safely scale agentic AI
Recent Comments
    Archives
    • April 2026
    • March 2026
    • February 2026
    • January 2026
    • December 2025
    • November 2025
    • October 2025
    • September 2025
    • August 2025
    • July 2025
    • June 2025
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.