Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
The real AI risk isn’t AGI — it’s unregulated machine identity

AGI hype vs. today’s real risk

Artificial general intelligence (AGI) dominates the headlines. It is often painted as an existential risk: a system that sets its own objectives, operates without oversight and produces outputs we cannot explain. As a security leader, I share those concerns, but they are concerns about a future we have not reached yet.

The reality is that today’s most urgent problem looks very different. We do not yet have the tools to govern AGI, but we do have ways to build guardrails around the systems already in use. One of the most effective guardrails is controlling what AI agents can interact with: which services they communicate with, what data they access and under what conditions. That control ultimately comes down to managing their credentials, the non-human identities that underpin machine-to-machine communication.

And that is where today’s risk lies. Non-human identities — API keys, authentication tokens, certificates and cryptographic keys — already outnumber human identities. In some large-scale environments, the ratio of machine to human identities is 40,000 to 1. As someone who has led cryptography and enterprise security teams, I see this imbalance as the real battleground right now.

Why non-human identities are the weakest link

The data backs this up. According to the 2025 Verizon Data Breach Investigations Report, credential abuse is the top initial access vector, involved in 22% of breaches; in North America, credentials factored into nearly a quarter of incidents. Attackers are not breaking in; they are logging in.

Identity has become a critical extension of the security perimeter, and non-human identity is its newest, least-defended dimension. Recent events underscore how dangerous this blind spot is. In one widely reported incident, Lenovo’s chatbot was compromised when researchers demonstrated that a single malicious prompt could steal session cookies and reach customer support systems. The incident shows how quickly things go wrong when new technology is rolled out without the security rigor applied to other enterprise systems, and why the next major incident may stem from weaknesses in AI and non-human identities.

The analogy I often use with security leaders is the “hotel key” problem. When you issue a physical key to a guest, just as you issue a credential to an application or service, you immediately lose visibility and control. You do not know if the key has been copied, where it is being used or by whom. If a thief — the attacker — presents the same key, they are indistinguishable from the legitimate guest or trusted system.

And when you finally discover the problem, remediation is painful: you need to change the locks on every door, just as you would have to rotate thousands of credentials after a breach. That is exactly what it looks like when machine credentials are compromised.

Speedy AI adoption can be risky

At the same time, organizations are under pressure to accelerate AI adoption. According to a recent report, the number of S&P 500 companies disclosing board-level AI oversight increased by more than 84% between 2023 and 2024. Boards are paying attention and pushing for faster deployment.

But speed often comes at the expense of security. I have seen organizations strip away long-established controls to feed AI models more data. Tools to manage non-human identities are still maturing, which means many enterprises are running blind. And in security, a blind spot is not just a vulnerability; it is an open invitation for attackers.

The risks compound when you consider the scale. According to the SandboxAQ AI Security Benchmark Report 2025, only 6% of organizations have reached an AI-native security posture, with protections integrated across both IT and AI systems. That means very few have effective controls in place for governing the credentials their AI agents rely on, creating a massive and growing attack surface without guardrails.

Part of the problem is that regulations and frameworks have not kept pace with advances in AI. There are still no widely accepted standards for managing AI agents or machine identities, and basic questions remain unresolved. If an AI agent causes harm, who is responsible: the agent, the developer or the person who gave the prompt?

Without governance and identity management working hand in hand, enterprises are essentially gambling. We have seen this before: the 2017 Equifax breach was tied to a missed patch, and the more recent Storm-0558 attack exploited a signing key stolen from a crash dump. The lesson is consistent: credentials are a weak link, yet we continue to treat them as an afterthought.

What security leaders must do now

Get complete visibility

Build a real-time inventory of every key, certificate and secret. You cannot protect what you cannot see. Many organizations underestimate how many non-human identities they have, and those hidden identities often become the attacker’s entry point.

Visibility should cover not just the assets themselves but also the connections between them — which applications rely on which keys and where those secrets are stored. Without this understanding, security teams remain reactive instead of proactive.

Automate the lifecycle

Manual credential management cannot keep up with systems that live for only seconds. Provisioning and rotation must be automated to limit the window an attacker has to exploit a stolen credential. Short-lived credentials are effective only when they can be issued and replaced continuously. In practice, this requires integration with the same cloud and DevOps tools that development teams already use.
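A minimal sketch of the short-lived-credential idea, using only the Python standard library: tokens carry a fixed time-to-live, validation purges anything expired, and rotation is simply re-issuance. The class name and TTL mechanics are illustrative assumptions, not a specific vendor's API.

```python
import secrets
import time


class ShortLivedTokenIssuer:
    """Issues opaque tokens that expire after a fixed TTL.

    Rotation is just re-issuance: a consumer that needs continued access
    requests a fresh token before (or after) the old one lapses.
    """

    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._active: dict[str, float] = {}  # token -> expiry (monotonic clock)

    def issue(self) -> str:
        """Mint a new random token valid for the configured TTL."""
        token = secrets.token_urlsafe(32)
        self._active[token] = time.monotonic() + self.ttl
        return token

    def validate(self, token: str) -> bool:
        """Accept only known, unexpired tokens; purge expired ones on sight."""
        expiry = self._active.get(token)
        if expiry is None or time.monotonic() >= expiry:
            self._active.pop(token, None)
            return False
        return True
```

The security property is in the TTL: a stolen token is only useful until its expiry, so the attacker's window shrinks to seconds or minutes instead of the months an unrotated API key can survive.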

Protect the last mile

Isolate secrets from applications so they are never directly exposed. Even the best vault does not help if a secret can be pulled out of memory or logged in plain text once an application retrieves it. Last-mile protection shifts trust away from vulnerable endpoints and into hardened cryptographic services that can sign or verify without ever releasing the underlying key.
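The "never release the key" pattern can be sketched in a few lines of standard-library Python: a signing service holds the key privately and exposes only sign and verify operations, so callers get signatures but can never read the key material itself. This is a toy HMAC-based illustration of the principle, assuming a single in-process service; real deployments would put this behind an HSM or a hardened signing API.

```python
import hashlib
import hmac
import secrets


class SigningService:
    """Holds a key internally and exposes only sign/verify, never the key."""

    def __init__(self) -> None:
        # Name-mangled attribute: the key is generated inside the service
        # and is never returned to callers through any method.
        self.__key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        """Return an HMAC-SHA256 signature over the message."""
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        """Timing-safe check that the signature matches this service's key."""
        return hmac.compare_digest(self.sign(message), signature)
```

An application that needs to authenticate a request calls `sign`; if the application is later compromised, the attacker can forge nothing offline, because the key never crossed the service boundary into application memory or logs.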

In short: start with visibility, move to automation and finish with isolation.

Keep the real danger in sight

The battleground has shifted. It is no longer the network layer, and it is not yet AGI. It is the identity layer, especially the non-human identities that quietly outnumber us by tens of thousands to one.

Attackers have already adapted. They are not breaking down the walls; they are simply logging in with legitimate credentials. Until we catch up, we are gambling with our enterprises. AGI may dominate the conversation, but the immediate, clear and present danger is unsupervised non-human identity.

This article is published as part of the Foundry Expert Contributor Network.
October 21, 2025