Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
The death of identity as we know it

A CISO walked out of the RSA conference last month and asked an honest question. “When does it make sense to create agents, sub-agents and swarms of agents versus digital twins?”

He wasn’t looking for a sales pitch. He had just sat through days of keynotes, breakouts and vendor pitches where AI got more airtime than anything else on the agenda, and he walked out with less clarity than when he walked in.

That’s the thing about this moment. Every vendor has an AI story. Every session touches on agents. Very few are offering a working model for how to govern any of it once it’s inside your business.

Similar questions are surfacing in almost every conversation I have. Agents, swarms and digital twins are landing in customer experience, treasury management and executive decision support. That’s the CIO’s world. It’s the CFO’s world too, and the CEO’s. When AI entities act, decide and speak on your organization’s behalf, someone must answer for who they are and who controls them.

A taxonomy: Operational vs. perspective complexity

It’s easy to use agents, swarms and digital twins as if they’re different words for the same thing. They aren’t. Each demands a different governance model, and lumping them together is a governance mistake waiting to happen.

At the top of the frame, AI entities either solve operational complexity (how do we get this done?) or perspective complexity (how would our most experienced leader think about this?). Inside operational complexity, three distinct things are getting conflated:

  • Synthetic agents are trained on the aggregated expertise of many practitioners. Think of a model trained on the combined knowledge of 100 pediatricians, validated by a pediatrician. It represents a domain, not a person. The expert grounding is there. Individual accountability is not.
  • AI workers are task-specific single agents given foundational capability and turned loose to figure out the job. They’re often ephemeral, spinning up to execute a workflow and going away when it finishes. The person directing the worker may not be an expert in what the worker is doing. Attribution gets murky fast.
  • Swarms are N instances of the above interacting. A swarm inside a single level is one kind of problem. A swarm that mixes synthetic agents, AI workers and digital twins across trust levels is a different problem entirely, because a high-trust entity can spawn a low-trust one, and what comes back up doesn’t get reclassified to its origin.

Digital twins sit on the perspective-complexity side. A digital twin isn’t a chatbot or a prompt persona. It’s a verified, governed representation of a specific human’s expertise or an organization’s unique institutional knowledge. The individual puts their judgment on the line. Every output traces back to an authorized source. Where AI workers are designed to act, a digital twin is designed to represent — which is why the governance model for one can’t be borrowed from the other.

You can’t manage a digital twin like a service account. You can’t manage an AI worker like an employee. And you can’t let cross-level swarms run without a registry that tracks what spawned what.

The dark side of the taxonomy: Governed vs. feral

Once you’ve got the taxonomy, a second axis shows up quickly: governed versus feral. Authorized digital twins sit in the governed-perspective quadrant. Adversarial swarms sit in the feral-operational quadrant.

In January, a group of researchers led by Daniel Schroeder and Jonas Kunst published a policy forum in Science on how malicious AI swarms can threaten democracy. The paper describes a technique they call LLM grooming, where swarms flood the web with fabricated content designed to be ingested by future AI training runs. Their warning is that AI swarms can rig the epistemic substrate on which future AI tools depend.

That’s a data integrity problem hiding inside a disinformation problem. If your organization relies on AI for pricing, market intelligence, competitive analysis or strategic planning, the content your models train on tomorrow is being shaped today. The upstream data feeding your downstream decisions is under active manipulation, and most enterprises have no visibility into any of it.

What makes the story more interesting is that the same researchers also see the other side. In a CXOTalk interview, one of the authors was asked whether AI swarms could ever be used for good. Schroeder affirmed, “Yes. They can fact check. They can collaborate. They can collaborate and just build digital twins of humans in order to process information in a way this particular human would understand.”

That’s the tension in one sentence. The same capability that can manufacture consensus can also preserve expertise. The difference comes down to whether the intelligence is governed or feral. Verified Intelligence becomes necessary because the threat and the solution share the same root.

Identity has become a question of authorship

If anyone can spin up a high-fidelity digital version of your CEO, your brand voice or your strategic reasoning, authentication has to answer a different set of questions than it used to. Access stops being the point. Authorship takes over.

Five questions now define the control plane, and they’re governance questions:

  • Who created this entity?
  • Who trained it?
  • Who authorized it?
  • Who can revoke it?
  • Who is it economically aligned to?
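The five questions above can be captured as a provenance record attached to every AI entity. The sketch below is a hypothetical schema, with field names and the `can_revoke` check invented for illustration; the only substantive claim is that each question maps to a named, accountable party.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """One record per AI entity; fields mirror the five control-plane
    questions. Hypothetical schema for illustration, not a standard."""
    entity_id: str
    created_by: str     # who created this entity?
    trained_by: str     # who trained it?
    authorized_by: str  # who authorized it?
    revocable_by: str   # who can revoke it?
    aligned_to: str     # who is it economically aligned to?

def can_revoke(record: ProvenanceRecord, principal: str) -> bool:
    # Revocation is a governance act: only the named revoker may do it.
    return principal == record.revocable_by

twin = ProvenanceRecord(
    entity_id="twin:ceo",
    created_by="ai-platform-team",
    trained_by="ai-platform-team",
    authorized_by="board-ai-committee",
    revocable_by="ciso",
    aligned_to="acme-corp",
)
print(can_revoke(twin, "ciso"))      # True
print(can_revoke(twin, "outsider"))  # False
```

The record is frozen deliberately: provenance is an append-only fact about an entity's origin, not a mutable setting.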

Digital twin forking isn’t a fringe risk. It’s inevitable. Unauthorized swarms acting in your organization’s likeness will be a normal threat vector by 2027. (The timeline will feel fast until it feels obvious.) The companies that win will track provenance the way finance tracks capital.

On April 1st, a colleague shared her “Retirement Certificate” from ReplacedByClawd, which lets anyone spin up a digital version of a named person in minutes. The tone is played for laughs. The capability underneath is serious business. Anyone with a browser can fork a likeness, train it on public content and set it loose with no tie back to the real human it mimics. Unfortunately, this was not an April Fool’s joke.

We need authorized versions of our digital twins, and we need them before the unauthorized ones become the norm. A twin your organization actually owns. A twin whose training data, scope and boundaries can be attested. A twin that can be revoked when a leader changes roles or leaves.

Once the humor wears off, the cognition layer becomes a social engineering playground. A convincing digital version of your CFO approving a wire. A cloned voice of a senior engineer pushing a late-night code review. Hackers are headed for this layer. Most security programs are still locked on the session.

The good news is that the framework is starting to take shape. On April 17th, the Coalition for Secure AI (CoSAI) published Agentic Identity and Access Management, a foundational reference that treats agents as first-class identities with their own lifecycle, delegation model and accountability. The paper introduces an agent registry as the system of record, scope attenuation at every hop in a delegation chain, and a “prove control on demand” standard for logging and lineage. It’s the clearest signal yet that the industry is moving past session-layer thinking and closer to the cognitive governance this moment requires.
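One simple way to model scope attenuation at every hop, shown here as an illustrative sketch rather than CoSAI's actual specification, is set intersection: each delegate receives at most the scopes its delegator holds, so authority can only narrow as a chain grows and a downstream agent can never regain a scope that was dropped upstream.

```python
# Scopes can only narrow at each hop: each delegate receives the
# intersection of its requested scopes with what the delegator holds.
# Scope names are made up for illustration.
def delegate(held: frozenset[str], requested: frozenset[str]) -> frozenset[str]:
    return held & requested

root = frozenset({"read:contracts", "read:finance", "write:drafts"})
hop1 = delegate(root, frozenset({"read:contracts", "write:drafts"}))
# hop2 asks for read:finance, but hop1 no longer holds it, so it stays dropped.
hop2 = delegate(hop1, frozenset({"read:contracts", "read:finance"}))
print(sorted(hop2))  # ['read:contracts']
```

Because intersection is monotone, auditing a delegation chain reduces to checking that every hop's scopes are a subset of its parent's.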

From identity perimeter to cognitive governance

The real shift happens at the control plane itself. Governance has to extend to the cognitive layer. To what an AI entity is authorized to know, say, decide and spawn.

On a recent a16z podcast, Box CEO Aaron Levie and former Microsoft executive Steven Sinofsky talked about what happens when agents become the primary users of enterprise software. Sinofsky made a point that should anchor every CIO’s next 18 months of planning. Enterprises will live in a read-only consumption layer for years before they allow agents to write, act or transact with full autonomy.

That’s a feature, not a bug. And it’s exactly where governed digital twins fit. They answer questions. They prepare context. They surface governance guidance. They rehearse decisions before the executive team commits, and they stress-test strategy before the market stress-tests the brand. They preserve institutional judgment when a senior leader retires or changes roles. This is the agentic enterprise maturing from experimentation into production, without handing the keys to a feral swarm.
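The read-only consumption layer can be enforced as a policy gate in front of the twin. The sketch below is hypothetical (the verb names and `authorize` function are assumptions, not any vendor's API): read-style verbs pass by default, while write, act and transact verbs are denied until autonomy is explicitly granted.

```python
# Hypothetical policy gate: a governed digital twin may read, answer
# and rehearse, but write/act/transact verbs require an explicit grant.
READ_ONLY_VERBS = {"read", "answer", "summarize", "rehearse"}

def authorize(verb: str, autonomy_granted: bool = False) -> bool:
    if verb in READ_ONLY_VERBS:
        return True               # consumption-layer verbs pass by default
    return autonomy_granted       # write/act/transact need an explicit grant

print(authorize("answer"))                            # True
print(authorize("transact"))                          # False
print(authorize("transact", autonomy_granted=True))   # True
```

Defaulting the gate to deny for anything outside the read-only set is what keeps "experimentation to production" a deliberate step rather than a drift.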

Aysha Khan, CIO and CISO at Treasure Data, captured the human side of this shift when she told me recently that “By encoding legacy expertise into governed AI, we do not make ourselves irrelevant. We free ourselves from the maintenance of our past and tap into the possibilities of who we can become next.”

That framing matters. Cognitive governance scales a leader’s judgment while protecting their identity. The people get elevated. The work gets amplified.

The executive imperative

If your AI entities carry your judgment, your voice and your authority, identity governance stops being something the IT team owns in isolation. It becomes a leadership discipline that shapes what the organization can become. Treat AI lineage with the same discipline you bring to capital allocation — visibility, accountability and the ability to trace every decision back to a legitimate source.

Every AI entity you deploy carries a lineage. The companies that can trace that lineage will govern it. The ones that can’t will learn what they’ve lost after the fact.

This article is published as part of the Foundry Expert Contributor Network.

Category: News | May 13, 2026
Tags: art

Tiatra, LLC

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.


    Tiatra, LLC
    Copyright 2016. All rights reserved.