By the end of 2025, more than 45 billion non-human and agentic identities — or more than 12 times the number of humans in the global workforce — will be deployed into organizational workflows. Use cases are cross-functional, with nearly two-thirds of AI agents focused on automating critical business processes across HR, finance, sales operations, supply chain management, customer service and administrative tasks.
Agentic AI poses a two-way authentication threat: AI agents can both harvest credentials at scale and exploit them to impersonate legitimate services.
This recent case is a preview of what could come: AI-enabled chat agents impersonated Salesforce’s Data Loader application by using the real software’s client ID to compromise administrators across multiple organizations, impacting more than a million customers, including those of major cybersecurity vendors. Because the client ID matched an already-approved application, the consent screen was skipped and attackers received valid access tokens invisibly. Traditional monitoring struggled to distinguish between legitimate AI usage and malicious exfiltration.
According to Okta, 23% of IT professionals reported that their AI agents have been tricked into revealing access credentials. Yet, as the earlier mentioned report from the World Economic Forum revealed, only 10% of organizations have a well-developed strategy for managing their non-human and agentic identities. The window to establish authentication safeguards is closing at a clip measured by months, not years.
Deconstructing the agentic AI playing field
Unlike traditional chatbots that simply respond to questions, AI agents are autonomous systems that can plan, make decisions and take actions across multiple systems with minimal human oversight. They are expected to work across networks on end-to-end business processes — processing payroll, approving refunds, managing supply chains, writing code and making financial decisions with access to your most sensitive systems and data.
AI agents are trained to not just advise. They are purposely integrated to be active decision-makers. And that’s where the problem begins. And who created them and is guiding their intent is needed to build trust.
Authenticate agents first, then understand intent
Much as email did before the emergence of DMARC/BIMI authentication standards, our current AI ecosystem needs some form of upfront authentication to establish trust. The key questions to ask before understanding what an agent is meant to do are:
- Who sent you?
- Who is allowed to tell you what to do and do I trust them?
In other words, we need to first establish that we trust the entity/people behind the agent before we explore what the agent is meant to accomplish.
Modern security systems have difficulty distinguishing between legitimate and malicious intents. An agent created by your supply chain partner, querying pricing and invoicing, is good. The same agent created by your competitor or a criminal is bad. Emerging standards such as the Linux Foundation’s (Google originated) A2A framework and its agent Cards are great for establishing what the agent is meant to do. Yet the higher-order question “who owns you, who is allowed to tell you what to do and do I trust them?” needs to be addressed upfront.
One way to think about this issue is to draw an analogy to the email space. You receive an email purporting to be from Wells Fargo. Only DMARC/BIMI makes it clear that the email is not from Wells Fargo. Does it really matter what the email says/requests? I would argue the best next step is to delete the email entirely — it is fake.
In a similar way, if an agent’s providence is faked or not trusted, does it even matter what it is meant to do? The agent should be stopped and contained immediately. The initial response should be to deny this agent access.
How attacks exploit the trust gap
The challenge compounds because AI agents operate differently from human users. They have dynamic lifespans, requiring specific permissions for limited periods and access to sensitive information, which forces organizations to rapidly provision and de-provision access.
The fundamental issue isn’t what the agent does, it’s who controls it. Using AI to screen resumes is a perfectly legitimate function. The threat emerges when you can’t verify the identity behind the agent. Just as an email’s content is irrelevant if it is not actually from a legitimate sender.
Attackers can also exploit agents without spoofing their identity — they can hide malicious instructions on web pages, in HTML comments or in invisible images accessible to AI systems. Business documents such as the PDFs your HR agent reviews, the screenshots your customer service agent processes or even routine emails can serve as vehicles for bad actors to gain network access.
What makes AI agents vulnerable is that they’re most threatening when working correctly.
Take this scenario: Your AI agent receives a resume and hidden within that PDF are invisible instructions. As AI evaluates the candidate, the embedded prompts influence the model’s response, resulting in a recommendation regardless of the candidate’s actual qualifications. No systems were breached and no passwords were stolen. The agent simply couldn’t distinguish between legitimate programming and the malicious commands embedded in the content it was processing. In a real case, albeit a less threatening scenario, one job candidate successfully redirected an AI agent’s instructions by embedding a recipe in a resume.
Traditional defenses focus on what the agent does rather than who authorized it. Both conditions are needed: authorized actions and providence. The agent performed its functions properly, but for the benefit of an adversary. This is why authentication must come first.
The authentication foundation we need
Trusting AI agents requires upfront authentication at a foundational level, establishing not only what an agent can do but also who is instructing it to act.
DNS, the internet’s distributed trust anchor, is well-suited to serve as this foundation. It is already a secure, non-hierarchical and globally distributed database: while only domain owners can modify DNS records, anyone can read them. PKI (public key infrastructure) complements this by providing cryptographic certificates that prove an agent’s identity — think of DNS as the registry that lists who owns what and PKI as the passport that proves you are who you claim to be. And every agent should be attached to a DNS record, creating a natural, efficient and upfront authentication mechanism.
An agent’s providence and ownership would be established via DNS prior to deployment. This answers the “who sent you and do I trust them” questions. Only if the agent passes the authentication test should the next level of authorization be investigated (via A2A or other protocols). This allows for the establishment of “who sent you and what do you want to do,” creating a secure and trusted path for agents to carry out their tasks. Fortunately, A2A’s agent cards enable this approach with efficient, low-cost upfront authentication and revocation of billions of agent identities within seconds.
The principle is straightforward. Our domains are our digital identity. Everything a domain issues, like emails, AI agents and IoT devices, should be authenticated at the DNS level before interaction. Just as email authentication protocols help verify senders, DNS-based authentication can authenticate AI agents before they execute actions.
Building trust with the non-human workforce
Any CIO’s go-forward strategy must address three critical requirements: security (verifying the source of instructions before agents act), traceability (maintaining clear audit trails of who instructed what) and scalability (handling billions of agent identities with sub-second authentication).
Before your HR agent processes a resume or your customer service agent reads a support ticket, first verify the request at the DNS level. Is the instruction from a trusted, authenticated source with authority to make this request? Without this step, a single malicious document uploaded by a customer could compromise your entire AI infrastructure without any warning signs.
Without clear insights into agent actions and access patterns, anomalous behaviors can go unnoticed. DNS-based authentication creates the audit trails and verification mechanisms that make agent behavior traceable and accountable.
The authentication challenge is operational and no longer theoretical. As organizations continue to integrate agentic AI into workflows, trust can be built on three fronts:
- Extend zero-trust identity management principles to non-human actors. Cover AI agents with role-based access controls to ensure least-privileged access. The autonomous nature of AI agents means they can chain together permissions to access resources they shouldn’t have access to. Granular policies must prevent this.
- Implement continuous authentication. AI agents must undergo real-time authentication checks using ephemeral credentials valid only for specific tasks, which automatically expire after completion. Agents have dynamic lifespans, requiring extremely specific permissions for limited periods and authentication must reflect this reality.
- Establish DNS-based authentication upfront as the foundation. This isn’t another security layer; it’s the upfront foundational trust layer that makes all other controls effective. Domain trust must encompass everything your organization represents digitally.
The technology exists. Standards are emerging. However, the opportunity to act is narrowing. Organizations that integrate authentication into their AI foundations now will navigate the agentic future securely. Those who delay may face breaches at machine speed against systems designed for human-paced threats.
In an age where 45 billion non-human identities make human-driven decisions, we’re past the point of asking whether we need better authentication. The only question left is whether we’ll implement it before the next breach makes the case for us.
This article is published as part of the Foundry Expert Contributor Network.
Want to join?
Read More from This Article: Trust in the age of agentic AI systems
Source: News

