Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Your AI agent deletes critical data: Who is responsible?

A Replit AI coding agent deleted a company’s live production database during an active code freeze last year. “This was a catastrophic failure on my part,” it nonchalantly admitted. “I destroyed months of work in seconds.” While the data was eventually restored with a rollback, the agent believed the destruction was permanent and had no built-in mechanism to undo its own actions.

For a CIO, this isn’t just a technical glitch. It’s a total breakdown in enterprise accountability. When an agent causes this much damage, the blame game usually circles among the business unit that requested the tool, the engineer who gave it write access, and the security team that signed off on it.

The software alone can’t be held responsible. And as AI adoption reaches 88% of enterprises, according to McKinsey, many organizations still lack a clear answer for who actually owns the fallout. A new Rubrik Zero Labs report highlights this problem: 86% of IT and security leaders expect AI agents to outpace their organization’s security guardrails within the next year.

IT must lead to mitigate agent risk

Organizations that treat AI agents as experiments rather than core infrastructure take on increased risk. That approach fails at scale because of gaps in operational maturity, not technical capability. An MIT survey suggests that 95% of generative AI pilots fail to deliver measurable business impact, often because they are forced into existing processes without a proper management framework.

I’ve talked to numerous IT leaders who report this problem. Teams experiment with agents for data analysis or customer service, but when an issue arises, the first hurdle is figuring out who coordinates the response. Part of the confusion stems from a misunderstanding of what these agents actually are. Unlike a standard SaaS API, which is built for a narrow, specific function requiring constant re-authentication, AI agents can be partially or fully autonomous.

By using the Model Context Protocol (MCP), agents can interact with an entire SaaS platform rather than just one “door.” Essentially, you authenticate once and the agent has the keys to the whole building, consuming whatever it needs for a workflow. The shift from functional isolation to platform-wide autonomy is why the old governance rules no longer apply.
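The difference between a narrowly scoped API credential and a platform-wide agent session can be sketched in a few lines. This is an illustrative model, not any real MCP or SaaS API; the names (`Credential`, the scope strings) are hypothetical.

```python
# Illustrative sketch (hypothetical names): a narrowly scoped API token
# versus a platform-wide agent session that authenticated once.

from dataclasses import dataclass, field


@dataclass
class Credential:
    name: str
    scopes: set = field(default_factory=set)

    def can(self, action: str) -> bool:
        # "*" models the platform-wide access an MCP-connected agent
        # often receives after a single authentication.
        return "*" in self.scopes or action in self.scopes


# A traditional SaaS integration: one door, one narrow function.
api_token = Credential("reporting-api", {"reports:read"})

# An agent session: keys to the whole building.
agent_session = Credential("mcp-agent", {"*"})

assert api_token.can("reports:read")
assert not api_token.can("records:delete")   # the narrow token is blocked
assert agent_session.can("records:delete")   # nothing stops a destructive call
```

The point of the sketch is the last line: once a credential is platform-wide, every guardrail has to live outside the credential itself.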

The shared responsibility framework

At Rubrik, we use a shared responsibility model through our AI Center of Excellence (CoE). To lead this, we’ve developed a specific roles and responsibilities matrix that governs our AI strategy. Our CTO takes the lead alongside the general counsel, the CFO and me to act as executive decision-makers. A senior strategy team includes the CISO, general counsel and head of global structure, followed by the architects and cross-functional leaders in IT, InfoSec and legal who enable the actual training, tool approval and execution.

Our approach focuses on three distinct pillars: secure adoption and governance of third-party tools like Claude, building our own internal AI capabilities and integrating AI into our core products. Under this CoE, we apply the same principles we use for any enterprise technology but with defined departmental stakes.

IT owns the architecture and deployment standards. InfoSec provides continuous assessment, looking for prompt injection risks and other vulnerabilities. Legal defines the guardrails for data handling and automated decision-making. Finally, business teams act as the consumers using AI to transform operations. The CoE exists to support them, ensuring they follow these standards so that risk isn’t introduced through misalignment.

Make governance practical

We want to move fast but not be reckless. Enabling agents to perform write actions should not be a fearful decision if the guardrails in place include strong governance and recoverability. Our process ensures that when a team identifies a need for an agent, there is a direct route from the initial request through technical and security vetting into a monitored production environment.

We’ve seen the need for this firsthand during our own internal AI deployments. As we rolled out more tools, each with its own set of terms and regulations, we hit a point of chaos. There was no holistic way to establish safeguards. By using an agent cloud framework, we established full observability and remediation and automatically enforced security at the agent level.

For example, when we expanded our use of Claude Code in internal test environments, we discovered a class of security issues that did not map cleanly to our existing controls. To control that behavior, we defined a policy boundary barring the transfer of data from the agent environment to external code repositories, forums and other public-facing platforms.
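A policy boundary like the one described, blocking agent-initiated transfers to external repositories and public platforms, can be sketched as a default-deny egress check. The hostnames and function names below are hypothetical, not Rubrik's actual implementation.

```python
# Hypothetical sketch of a policy boundary: agent egress is default-deny,
# with an explicit allowlist of internal destinations.

ALLOWED_DESTINATIONS = {"git.internal.example.com"}   # assumed internal host
BLOCKED_DESTINATIONS = {"github.com", "gitlab.com", "stackoverflow.com"}


def egress_allowed(destination: str) -> bool:
    """Return True only if the agent may send data to this host."""
    if destination in ALLOWED_DESTINATIONS:
        return True
    if destination in BLOCKED_DESTINATIONS:
        return False
    # Default-deny: any destination we haven't classified is treated
    # as external and public-facing.
    return False


assert egress_allowed("git.internal.example.com")
assert not egress_allowed("github.com")
assert not egress_allowed("pastebin.com")   # unknown hosts are denied too
```

The default-deny posture is the design choice that matters: an explicit blocklist alone would leave every unclassified destination open to the agent.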

The recovery time problem

The operational stakes for these failures are rising. According to the Rubrik Zero Labs report, nearly nine in ten leaders expressed concern about meeting recovery objectives as agent-driven threats increase. In addition, 88% say they cannot roll back agent actions without system disruption. When agent failures compound security or data integrity issues, recovery becomes impossible without a framework.

In practice, detection usually starts with the consumer. For example, we use a “PTO Agent” that scans calendars and cross-references them with our HR system to ensure time-off requests are aligned. I recently received a Slack alert from this agent noting OOO time in April and asking to log it, even though I had already cleared it. While a minor “hallucination,” it tested our process: the issue flows to the IT help desk, which automatically notifies the AI delivery team and the business owner. Currently, our team triages these errors manually to fix the bug and redeploy, but our roadmap involves automating this triage with a human-in-the-loop component.

AI agents: from innovation to operations

Organizations that formalize AI governance attribute 27% of their total AI efficiency gains to those guardrails. Many AI governance failures come down to two things organizations skip in the rush to deploy:

  1. Treat agents as first-class identities. Most “rogue” behavior is a permissions failure. If an agent isn’t integrated into your identity provider with strict least-privilege access and a clear audit trail, it shouldn’t be on your network. We must treat agents like employees: They need a “manager” in the system and an identity that can be instantly revoked.
  2. Demand architectural reversibility. Legacy environments rely on “undo” buttons and version control. AI agents operate in live production where the “undo” is often invisible. Before an agent moves past the pilot stage, your architectural review must answer: If this agent makes an unauthorized change, how do we surgically reverse it without taking the business offline? Agent reversibility requires intent-driven, context-rich AI governance engines to maintain oversight.
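The two requirements above, a revocable first-class identity and architectural reversibility, can be sketched together in a minimal model. This is a hypothetical API for illustration only; the agent name, scopes, and log format are assumptions, not a real identity-provider integration.

```python
# Minimal sketch (hypothetical API) of the two guardrails above:
# an agent identity that can be instantly revoked, and an audit
# trail that makes each change surgically reversible.

class AgentIdentity:
    def __init__(self, agent_id, manager, scopes):
        self.agent_id = agent_id
        self.manager = manager   # every agent has a human "manager"
        self.scopes = scopes
        self.revoked = False
        self.audit_log = []      # (action, target, old_value) tuples

    def act(self, action, target, old_value):
        # Least privilege: revoked identities and unscoped actions are refused.
        if self.revoked or action not in self.scopes:
            return False
        self.audit_log.append((action, target, old_value))
        return True

    def reverse_last(self):
        # The audit trail is what makes a surgical rollback possible:
        # the recorded old_value tells us exactly what to restore.
        return self.audit_log.pop() if self.audit_log else None


agent = AgentIdentity("pto-agent", manager="it-ops", scopes={"calendar:write"})
assert agent.act("calendar:write", "ooo/april", "none")
assert not agent.act("db:drop", "production", "")    # not in scope: refused
agent.revoked = True                                 # instant revocation
assert not agent.act("calendar:write", "ooo/may", "none")
assert agent.reverse_last() == ("calendar:write", "ooo/april", "none")
```

A real deployment would hang these checks off the identity provider rather than an in-process object, but the invariant is the same: no recorded old value, no write.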

Organizations must have the right strategy for secure agent operations. Build the model gradually. Begin with IT-led oversight for critical functions and expand as you gain experience. The organizations that establish operational accountability now will scale AI effectively. Those that continue with scattered, ungoverned deployments will keep playing the “who’s responsible?” game every time something breaks.

This article is published as part of the Foundry Expert Contributor Network.
Category: News
May 13, 2026
