A CISO walked out of the RSA conference last month and asked an honest question. “When does it make sense to create agents, sub-agents and swarms of agents versus digital twins?”
He wasn’t looking for a sales pitch. He had just sat through days of keynotes, breakouts and vendor pitches where AI got more airtime than anything else on the agenda, and he came away with less clarity than he arrived with.
That’s the thing about this moment. Every vendor has an AI story. Every session touches on agents. Very few are offering a working model for how to govern any of it once it’s inside your business.
Similar questions are surfacing in almost every conversation I have. Agents, swarms and digital twins are landing in customer experience, treasury management and executive decision support. That’s the CIO’s world. It’s the CFO’s world too, and the CEO’s. When AI entities act, decide and speak on your organization’s behalf, someone must answer for who they are and who controls them.
A taxonomy: Operational vs. perspective complexity
It’s easy to use agents, swarms and digital twins as if they’re different words for the same thing. They aren’t. Each demands a different governance model, and lumping them together is a governance mistake waiting to happen.
At the top of the frame, AI entities either solve operational complexity (how do we get this done?) or perspective complexity (how would our most experienced leader think about this?). Inside operational complexity, three distinct things are getting conflated:
- Synthetic agents are trained on the aggregated expertise of many practitioners. Think of a model trained on the combined knowledge of 100 pediatricians, validated by a pediatrician. It represents a domain, not a person. The expert grounding is there. Individual accountability is not.
- AI workers are task-specific single agents given foundational capability and turned loose to figure out the job. They’re often ephemeral, spinning up to execute a workflow and going away when it finishes. The person directing the worker may not be an expert in what the worker is doing. Attribution gets murky fast.
- Swarms are N instances of the above interacting. A swarm confined to a single trust level is one kind of problem. A swarm that mixes synthetic agents, AI workers and digital twins across trust levels is a different problem entirely, because a high-trust entity can spawn a low-trust one, and what comes back up isn’t reclassified to match its low-trust origin.
Digital twins sit on the perspective-complexity side. A digital twin isn’t a chatbot or a prompt persona. It’s a verified, governed representation of a specific human’s expertise or an organization’s unique institutional knowledge. The individual puts their judgment on the line. Every output traces back to an authorized source. Where AI workers are designed to act, a digital twin is designed to represent — which is why the governance model for one can’t be borrowed from the other.
You can’t manage a digital twin like a service account. You can’t manage an AI worker like an employee. And you can’t let cross-level swarms run without a registry that tracks what spawned what.
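To make the registry idea concrete, here is a minimal sketch in TypeScript. The entity kinds, trust levels and field names are my illustrative assumptions, not a standard; the point is that spawn lineage and trust inheritance are ordinary data problems once you decide to track them.

```typescript
// A minimal sketch of a spawn-aware entity registry. Entity kinds,
// trust levels and field names are illustrative assumptions.

type EntityKind = "synthetic-agent" | "ai-worker" | "digital-twin";
type TrustLevel = "low" | "medium" | "high";

const TRUST_ORDER: TrustLevel[] = ["low", "medium", "high"];

interface EntityRecord {
  id: string;
  kind: EntityKind;
  trust: TrustLevel;
  spawnedBy: string | null; // null when a human registered the entity directly
}

class EntityRegistry {
  private entities = new Map<string, EntityRecord>();

  register(record: EntityRecord): void {
    this.entities.set(record.id, record);
  }

  // Walk the spawn chain from an entity back to its root.
  lineage(id: string): EntityRecord[] {
    const chain: EntityRecord[] = [];
    let current = this.entities.get(id);
    while (current) {
      chain.push(current);
      current = current.spawnedBy ? this.entities.get(current.spawnedBy) : undefined;
    }
    return chain;
  }

  // Effective trust is the floor of the whole lineage: output from a
  // low-trust child never comes back up reclassified at its parent's level.
  effectiveTrust(id: string): TrustLevel {
    const levels = this.lineage(id).map((e) => TRUST_ORDER.indexOf(e.trust));
    return levels.length > 0 ? TRUST_ORDER[Math.min(...levels)] : "low";
  }
}

// Usage: a high-trust twin spawns a low-trust worker.
const registry = new EntityRegistry();
registry.register({ id: "twin:cto", kind: "digital-twin", trust: "high", spawnedBy: null });
registry.register({ id: "worker:scan", kind: "ai-worker", trust: "low", spawnedBy: "twin:cto" });
registry.effectiveTrust("worker:scan"); // "low", regardless of who spawned it
```

The effective-trust walk is the part most environments miss today: a high-trust twin that spawns a low-trust worker should not be able to launder that worker’s output back up at its own level.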
The dark side of the taxonomy: Governed vs. feral
Once you’ve got the taxonomy, a second axis shows up quickly. Governed versus feral. Authorized digital twins sit in the governed-perspective quadrant. Adversarial swarms sit in the feral-operational quadrant.
In January, a group of researchers led by Daniel Schroeder and Jonas Kunst published a policy forum in Science magazine on how malicious AI swarms can threaten democracy. The paper describes a technique they call LLM grooming, where swarms flood the web with fabricated content designed to be ingested by future AI training runs. Their warning is that AI swarms can rig the epistemic substrate on which future AI tools depend.
That’s a data integrity problem hiding inside a disinformation problem. If your organization relies on AI for pricing, market intelligence, competitive analysis or strategic planning, the content your models train on tomorrow is being shaped today. The upstream data feeding your downstream decisions is under active manipulation, and most enterprises have no visibility into any of it.
What makes the story more interesting is that the same researchers also see the other side. In a CXOTalk interview, one of the authors was asked whether AI swarms could ever be used for good. Schroeder affirmed, “Yes. They can fact check. They can collaborate. They can collaborate and just build digital twins of humans in order to process information in a way this particular human would understand.”
That’s the tension in one sentence. The same capability that can manufacture consensus can also preserve expertise. The difference comes down to whether the intelligence is governed or feral. Verified Intelligence becomes necessary because the threat and the solution share the same root.
Identity has become a question of authorship
If anyone can spin up a high-fidelity digital version of your CEO, your brand voice or your strategic reasoning, authentication has to answer a different set of questions than it used to. Access stops being the point. Authorship takes over.
Five questions now define the control plane, and they’re governance questions (the sketch after the list shows them as record fields):
- Who created this entity?
- Who trained it?
- Who authorized it?
- Who can revoke it?
- Who is it economically aligned to?
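One way to operationalize those questions is to treat them as fields in a provenance record that every entity must carry. This is a hypothetical shape in TypeScript; the field names are mine, not any vendor’s or standard’s schema.

```typescript
// A hypothetical provenance record: the five control-plane questions
// answered as attestable fields instead of after-the-fact forensics.

interface ProvenanceRecord {
  entityId: string;
  createdBy: string;             // Who created this entity?
  trainedBy: string[];           // Who trained it? (data owners, fine-tuners)
  authorizedBy: string;          // Who authorized it?
  revocableBy: string[];         // Who can revoke it?
  economicallyAlignedTo: string; // Whose incentives does it serve?
  attestation: string;           // signature over the fields above
}

// The enforcement rule is deliberately blunt: no verifiable record,
// no operation. Revocation becomes a set lookup, not an incident response.
function mayOperate(
  record: ProvenanceRecord | undefined,
  revoked: Set<string>
): boolean {
  return record !== undefined && !revoked.has(record.entityId);
}
```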
Digital twin forking isn’t a fringe risk. It’s inevitable. Unauthorized swarms acting in your organization’s likeness will be a normal threat vector by 2027. (The timeline will feel fast until it feels obvious.) The companies that win will track provenance the way finance tracks capital.
On April 1st, a colleague shared her “Retirement Certificate” from ReplacedByClawd, which lets anyone spin up a digital version of a named person in minutes. The tone is played for laughs. The capability underneath is serious business. Anyone with a browser can fork a likeness, train it on public content and set it loose with no tie back to the real human it mimics. Unfortunately, this was not an April Fool’s joke.
We need authorized versions of our digital twins, and we need them before the unauthorized ones become the norm. A twin your organization actually owns. A twin whose training data, scope and boundaries can be attested. A twin that can be revoked when a leader changes roles or leaves.
Once the humor wears off, the cognition layer becomes a social engineering playground. A convincing digital version of your CFO approving a wire. A cloned voice of a senior engineer pushing a late-night code review. Hackers are headed for this layer. Most security programs are still locked on the session.
The good news is that the framework is starting to take shape. On April 17th, the Coalition for Secure AI (CoSAI) published Agentic Identity and Access Management, a foundational reference that treats agents as first-class identities with their own lifecycle, delegation model and accountability. The paper introduces an agent registry as the system of record, scope attenuation at every hop in a delegation chain, and a “prove control on demand” standard for logging and lineage. It’s the clearest signal yet that the industry is moving past session-layer thinking and closer to the cognitive governance this moment requires.
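Scope attenuation is the piece worth internalizing. The sketch below is my illustration of the principle, not the CoSAI paper’s actual API: each hop in a delegation chain can only narrow the permissions it received, never widen them.

```typescript
// Scope attenuation, sketched: effective scopes are the running
// intersection down the delegation chain, so privileges only shrink.
// Scope strings and the chain shape are illustrative assumptions.

interface DelegationHop {
  delegator: string;
  delegatee: string;
  requestedScopes: string[];
}

function effectiveScopes(rootScopes: string[], chain: DelegationHop[]): string[] {
  return chain.reduce(
    (held, hop) => hop.requestedScopes.filter((scope) => held.includes(scope)),
    rootScopes
  );
}

// A twin holding read and summarize delegates to a worker that asks
// for read and write; write never survives the hop.
effectiveScopes(
  ["docs:read", "docs:summarize"],
  [{ delegator: "twin:cfo", delegatee: "worker:draft", requestedScopes: ["docs:read", "docs:write"] }]
); // => ["docs:read"]
```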
From identity perimeter to cognitive governance
The real shift happens at the control plane itself. Governance has to extend to the cognitive layer: to what an AI entity is authorized to know, say, decide and spawn.
On a recent a16z podcast, Box CEO Aaron Levie and former Microsoft executive Steven Sinofsky talked about what happens when agents become the primary users of enterprise software. Sinofsky made a point that should anchor every CIO’s next 18 months of planning. Enterprises will live in a read-only consumption layer for years before they allow agents to write, act or transact with full autonomy.
That’s a feature, not a bug. And it’s exactly where governed digital twins fit. They answer questions. They prepare context. They surface governance guidance. They rehearse decisions before the executive team commits, and they stress-test strategy before the market stress-tests the brand. They preserve institutional judgment when a senior leader retires or changes roles. This is the agentic enterprise maturing from experimentation into production, without handing the keys to a feral swarm.
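At the policy layer, that posture is easy to express. Under the same illustrative assumptions as the sketches above, a cognitive-governance policy is a permission set over knowledge, speech, decisions and spawning, and the read-only consumption layer Sinofsky describes is simply an empty decision set:

```typescript
// A hypothetical cognitive-layer policy: what an entity may know,
// say, decide and spawn. Every field name and value is illustrative.

interface CognitivePolicy {
  entityId: string;
  know: string[];   // knowledge domains it may draw on
  say: string[];    // audiences and channels it may address
  decide: string[]; // decision types it may take without a human
  spawn: string[];  // entity kinds it may create, if any
}

// A governed twin in the read-only layer: it advises, rehearses and
// prepares context, but transacts nothing and spawns nothing.
const cfoTwin: CognitivePolicy = {
  entityId: "twin:cfo",
  know: ["treasury", "board-materials"],
  say: ["internal:finance-leadership"],
  decide: [],
  spawn: [],
};
```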
Aysha Khan, CIO and CISO at Treasure Data, captured the human side of this shift when she told me recently: “By encoding legacy expertise into governed AI, we do not make ourselves irrelevant. We free ourselves from the maintenance of our past and tap into the possibilities of who we can become next.”
That framing matters. Cognitive governance scales a leader’s judgment while protecting their identity. The people get elevated. The work gets amplified.
The executive imperative
If your AI entities carry your judgment, your voice and your authority, identity governance stops being something the IT team owns in isolation. It becomes a leadership discipline that shapes what the organization can become. Treat AI lineage with the same discipline you bring to capital allocation — visibility, accountability and the ability to trace every decision back to a legitimate source.
Every AI entity you deploy carries a lineage. The companies that can trace that lineage will govern it. The ones that can’t will learn what they’ve lost after the fact.
This article is published as part of the Foundry Expert Contributor Network.

