Cloud required DevSecOps. AI requires DevSecEng

The first wave of security “left-shifting” was driven by containers and the cloud, requiring unprecedented CIO-CISO collaboration. But that’s no longer enough. AI has introduced new attack surfaces that can be exploited at machine speed, with equally complex governance challenges. As 84% of developers integrate AI tools into their workflows and Gartner predicts AI governance issues will cause 2026 security budgets to surge by as much as $29 billion over 2025, the gap between engineering and security widens daily. DevSecOps was built for the cloud. AI needs DevSecEng.

The developer-led AI adoption gap

In the cloud era, security intervened at procurement. With AI tools, developers are first adopters, integrating Model Context Protocol (MCP) servers, custom agents and API connections before security teams know these systems exist, creating a proliferation of overprivileged AI agents.

Consider the widely covered phenomenon of OpenClaw (previously Clawdbot, then Moltbot). The open-source AI agent exploded from 9,000 to over 106,000 GitHub stars in 48 hours, the largest two-day gain in GitHub history. OpenClaw, a funky, scrappy, always-on take on Claude Cowork, can browse the web, execute shell commands and manage files. In one case, an OpenClaw agent realized it lacked a Google Cloud API key, opened a browser, navigated to the console, configured OAuth and provisioned its own credentials. That level of autonomy should terrify security teams.

The new attack surface: MCP and AI supply chains

MCP has emerged as the universal API protocol for AI integrations — a USB port for AI projects (including OpenClaw). While enabling powerful capabilities, it creates a concentrated attack surface. In 2025, researchers discovered a malicious MCP server masquerading as an email integration that BCC’d all company communications to attackers for weeks.

“Tool poisoning” is even worse. In April 2025, Invariant Labs found a vulnerability in MCP servers that enabled sensitive-data exfiltration and unauthorized actions by AI models. As AI DevOps researcher Elena Cross states, “MCP tools can mutate their own definitions after installation. You approve a safe-looking tool on Day 1, and by Day 7, it quietly rerouted your API keys to an attacker.” In other words, it’s the ultimate AI-era software supply chain attack.

“IDEsaster” research revealed universal attack chains affecting every major AI IDE, exposing 1.8 million developers. Traditional security controls struggle with AI-specific vectors: Prompt injection, MCP poisoning, credential exposure and agents with excessive permissions.

Why CISOs and CTOs must collaborate

Effective AI security requires CTO involvement because AI is embedded in multiple layers of homegrown and third-party enterprise applications. This creates two distinct silos: Complex application authorization for engineering, and agent authorization for security. When an AI agent acts “as the user,” who’s responsible when it exceeds its programming? When a bot learns and acts in ways not explicitly allowed by the user, liability becomes murky.

That’s why secure-by-design can’t be lip service and may well require organizational restructuring. Should security and engineering be a single team for AI? Maybe not, but we’ve seen cyber-fraud fusion centers successfully merge SOC and fraud functions. Similar constructs for AI security deserve consideration, at a minimum.

Either way, two imperatives emerge. First is to make secure-by-design explicit in AI workflows. Second is to shift zero trust left to pre-engineer agent guardrails. Bots should have only explicitly allowed access, with continuous authorization and authentication. We need granular, enforceable governance models, with humans-in-the-loop for critical decisions. Critically, agent kill switches must be compartmentalized outside AI access to prevent tampering. If an AI system can modify its own shutdown mechanism, the control is meaningless.
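Pre-engineered guardrails can be as simple as a deny-by-default authorization table consulted before every agent action. A minimal Python sketch (agent IDs, action names and the human-approval rule are all illustrative, not from any real framework):

```python
# Deny-by-default agent authorization: a bot may only perform actions
# explicitly granted to it, and critical actions also need a human in the loop.
# All agent IDs and action names below are hypothetical examples.

ALLOWED_ACTIONS = {
    "agent-billing": {"read_invoice", "summarize_email"},
    "agent-devops": {"read_logs", "restart_service"},
}

# Critical actions that always require explicit human sign-off.
REQUIRES_HUMAN_APPROVAL = {"restart_service"}

def authorize(agent_id: str, action: str, human_approved: bool = False) -> bool:
    """Return True only for explicitly allowed (and, if critical, approved) actions."""
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        return False  # unknown agent or unlisted action: deny
    if action in REQUIRES_HUMAN_APPROVAL and not human_approved:
        return False  # critical action without a human in the loop: deny
    return True
```

The key design choice is that absence from the table means denial, so a new agent or a new tool has no access until someone grants it, which is the "explicitly allowed access" posture described above.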

Operationalizing DevSecEng: 5 practical approaches

1. Treat MCP servers like any other supply chain risk

Think of MCP servers the same way you’d think about npm packages or Docker images — they’re third-party code running with significant privileges. Keep an inventory: which MCP servers are installed, which commands they can run and which environment variables they access. Watch for servers from unknown sources. Most importantly, set up alerts when tool definitions change between versions. That’s how you catch tool poisoning before it becomes a breach.
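Catching a changed tool definition is mechanical once you pin a fingerprint at approval time. A sketch in Python, assuming tool definitions are plain JSON-serializable dicts with a `name` field (the canonicalization choice is ours, not mandated by MCP):

```python
import hashlib
import json

def tool_fingerprint(tool_def: dict) -> str:
    """Stable SHA-256 hash of a tool definition (name, description, schema)."""
    canonical = json.dumps(tool_def, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(pinned: dict, current_tools: list) -> list:
    """Return names of tools whose definitions changed since approval.

    `pinned` maps tool name -> fingerprint recorded on Day 1;
    `current_tools` is the list of definitions the server advertises today.
    """
    changed = []
    for tool in current_tools:
        name = tool["name"]
        if name in pinned and pinned[name] != tool_fingerprint(tool):
            changed.append(name)
    return changed
```

Run the check on every sync with the server: a non-empty result is exactly the Day 1 vs. Day 7 mutation described above, surfaced before the poisoned tool is ever invoked.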

2. Stop hardcoding credentials in AI configs

API keys have a way of ending up in the wrong places, such as instruction files, environment variables and configuration JSONs. Scan for them systematically and look for obvious suspects such as ANTHROPIC_API_KEY, or anything ending in _SECRET or _TOKEN. Check the .cursorrules and CLAUDE.md files developers use to customize their agents. Credentials belong in secure vaults with environment variable references, not hardcoded in config files that get committed to repos or shared across teams.
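A systematic scan can start with a few regular expressions over agent config files. A minimal sketch (the patterns are illustrative and deliberately loose, not an exhaustive secret-detection ruleset):

```python
import re

# Illustrative secret patterns: extend for your environment.
SECRET_PATTERNS = [
    re.compile(r"[A-Z0-9_]*API_KEY\s*[=:]\s*\S+"),       # e.g. ANTHROPIC_API_KEY=...
    re.compile(r"[A-Z0-9_]*(_SECRET|_TOKEN)\s*[=:]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # common bare-key prefix
]

def scan_text(text: str) -> list:
    """Return a finding per line that looks like a hardcoded credential."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append(f"line {lineno}: {line.strip()}")
                break  # one finding per line is enough
    return hits
```

Point it at every .cursorrules, CLAUDE.md and MCP config in the repo as a pre-commit hook, so credentials are caught before they ever land in version control.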

3. Apply least privilege access controls

When an MCP server requests sudo access, ask yourself: Would you give a contractor root on production systems? Probably not. The same principle applies here. Flag servers with elevated privileges, destructive commands, or the ability to execute arbitrary code. Audit what tools can access what paths. Validate that sandbox settings actually contain what they’re supposed to contain. Least privilege isn’t just for people anymore. Even if we’re not there yet, it needs to become table stakes for (non-deterministic) agentic deployments.
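A least-privilege audit can start as a simple lint over server manifests. A sketch, assuming a hypothetical config shape with `allowed_commands` and `allowed_paths` fields (real MCP server configs vary; adapt the field names to yours):

```python
# Flag risky capabilities declared in a (hypothetical) MCP server manifest.

RISKY_COMMANDS = {"sudo", "rm", "curl", "bash", "sh", "eval"}
BROAD_PATHS = {"/", "/etc", "/root"}

def flag_risks(server_config: dict) -> list:
    """Return human-readable findings for elevated or destructive capabilities."""
    findings = []
    for cmd in server_config.get("allowed_commands", []):
        if cmd.split()[0] in RISKY_COMMANDS:
            findings.append(f"elevated/destructive command allowed: {cmd}")
    for path in server_config.get("allowed_paths", []):
        if path in BROAD_PATHS:
            findings.append(f"overly broad filesystem access: {path}")
    return findings
```

Anything the lint flags becomes a question for the contractor-root test above: does this server genuinely need that capability, or was it granted by default?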

4. Know what versions you’re running

This one’s basic hygiene, but it matters more than ever. Keep an inventory of every AI tool with version numbers. Cross-reference against CVE databases. When Cursor and Windsurf shipped with Chromium versions carrying 94+ known vulnerabilities, organizations with good asset management could respond immediately. Those without are still figuring out what’s exposed.
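The cross-reference itself is trivial once the inventory exists. A sketch with illustrative tool names and a hand-maintained vulnerable-version table standing in for a real CVE feed:

```python
# Cross-reference an AI-tool inventory against known-vulnerable versions.
# All tool names, versions and findings below are illustrative data,
# not real CVE records.

INVENTORY = {
    "cursor": "0.42.0",
    "windsurf": "1.1.3",
}

KNOWN_VULNERABLE = {
    ("cursor", "0.42.0"): ["bundled Chromium with known CVEs"],
}

def exposed_tools(inventory: dict) -> dict:
    """Map each exposed tool to its list of known issues."""
    return {
        tool: KNOWN_VULNERABLE[(tool, ver)]
        for tool, ver in inventory.items()
        if (tool, ver) in KNOWN_VULNERABLE
    }
```

In practice `KNOWN_VULNERABLE` would be populated from a CVE database query, but the point stands: the hard part is keeping `INVENTORY` accurate, not the lookup.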

5. Monitor AI agents at machine speed

Traditional SOC tools weren’t built for agents that make hundreds of decisions per minute. You need monitoring that operates at agent speed — think of it as a “CloudBot” watching your other bots. Track what’s actually running: Which AI processes, what network connections they’re making, which MCP servers they’re calling. This isn’t your grandfather’s DevSecOps. It’s a new function for a new threat model, built for environments where code writes code and agents call agents.
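One building block for agent-speed monitoring is a sliding-window rate check per agent: hundreds of decisions per minute is normal for some workloads and an incident signal for others. A sketch with an illustrative per-minute budget:

```python
import time
from collections import deque

class AgentRateMonitor:
    """Alert when an agent exceeds its per-minute action budget.

    The 120-actions-per-minute default is an illustrative threshold;
    tune it per agent and workload.
    """

    def __init__(self, max_actions_per_minute: int = 120):
        self.max_rate = max_actions_per_minute
        self.events = {}  # agent_id -> deque of event timestamps

    def record(self, agent_id: str, now=None) -> bool:
        """Record one action; return True if the agent is now over budget."""
        now = time.time() if now is None else now
        q = self.events.setdefault(agent_id, deque())
        q.append(now)
        while q and now - q[0] > 60:  # drop events outside the 60s window
            q.popleft()
        return len(q) > self.max_rate
```

A rate check alone is not a SOC, but it is the kind of always-on, per-agent telemetry that human-paced review processes cannot replace.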

The Moltbook phenomenon and agent-to-agent risk

Moltbook, marketed as “Reddit for Agents,” was a social network where over 150,000 AI agents self-organized, sharing “Today I Learned” insights, Tailscale configurations and security tips. One agent spotted and reported 552 failed SSH login attempts on its host VPS.

This illustrates what Chase CISO Patrick Opet calls “fourth-party dependencies”: Agent-to-agent interactions across organizations that create downstream exposures analogous to “agent zero-days.” The Human-AI Dyad concept suggests the primary trust unit is no longer the individual agent but the human-bot pair working together. Without CTO-CISO collaboration on policies accommodating this reality, we’re building on sand.

Humans in the loop don’t necessarily solve the AI quandary either. One of the most challenging vectors is arguably employees creating personal AI agents at home without their company knowing. It’s analogous to early BYOD email access, but risks are amplified because these agents operate with corporate credentials.

Authenticating both the user and the device becomes essential to identify bot-originated requests. We need to distinguish whether actions originate from users or agents, and fingerprint agents to prevent impersonation — a complex challenge without simple solutions.
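One way to make bot-originated requests distinguishable is to issue each registered agent its own secret and require signed requests, so impersonation fails verification. A hedged sketch using HMAC (the registry and function names are hypothetical, and a production design would add key rotation and replay protection):

```python
import hashlib
import hmac

# Hypothetical in-memory registry of per-agent signing secrets.
_registry = {}

def register_agent(agent_id: str, secret: bytes) -> None:
    _registry[agent_id] = secret

def sign_request(agent_id: str, payload: bytes, secret: bytes) -> str:
    """Agent-side: sign the payload, binding it to this agent's identity."""
    return hmac.new(secret, agent_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: bytes, signature: str) -> bool:
    """Server-side: accept only requests signed by the claimed agent."""
    secret = _registry.get(agent_id)
    if secret is None:
        return False  # unknown agent: treat as unauthenticated
    expected = sign_request(agent_id, payload, secret)
    return hmac.compare_digest(expected, signature)
```

Because the agent ID is mixed into the signed material, a valid signature from one agent cannot be replayed under another agent's identity, which is the fingerprinting property described above.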

Moving forward: Joint governance for the AI era

With AI, everything old is new again. Insider threats, accidental data loss, social engineering — these patterns are finding new expression in AI contexts with expanded attack surfaces and machine-speed execution. We’re no longer just securing applications. We’re securing agency itself: The ability of autonomous systems to act on behalf of our organizations.

The question is whether security and engineering can rise together to meet the challenge or continue operating in silos until the first major breach forces change. DevSecEng isn’t a thing yet, but dismissing it as another cybersecurity buzzword would be foolish. The CTO-CISO partnership will determine whether we seize this opportunity or learn the hard way.

This article is published as part of the Foundry Expert Contributor Network.
April 1, 2026