Giving AI ‘hands’ in your SaaS stack

I love AI copilots. I also don’t trust them with write access.

That’s not a philosophical stance. It’s an operator scar. In high-growth startups, helpful automation has a habit of turning into a 2 a.m. incident when it gets too much privilege and not enough supervision. Now we’re wiring large language models to the same systems that run lead-to-cash, billing, provisioning and customer support.

I’ve spent the last 15 years building the commerce engine for some of Silicon Valley’s fastest-growing companies. From the early, scrappy days at Eventbrite to architecting the lead-to-cash systems that supported Slack’s IPO and now leading enterprise systems at Gusto, I’ve lived through every phase of the business technology maturity curve.

Moving from talk to action without going off the rails

For most of that time, my job was to build deterministic rails for deterministic trains. If a sales rep updated a contract in Salesforce, we built rigid, unforgiving integrations to ensure that data flowed perfectly into NetSuite and Snowflake. It was binary: it worked or it failed.

But we are now at an inflection point that scares me as much as it excites me. We are moving from the era of “chat with your data,” where AI is a passive oracle, to the era of “work with your data.” We are giving AI agents hands.

We are asking these probabilistic models not just to summarize a deal, but to update the record, provision the license and email the customer. We are connecting stochastic reasoning engines to mission-critical systems of record. As someone who has spent sleepless nights worrying about data integrity during financial audits, I can tell you that the potential for Excessive Agency, a vulnerability in the OWASP Top 10 for LLM applications where an agent does more than you intended, is the single biggest risk keeping IT leaders up at night.

I’m seeing a lot of leaders paralyzed by this risk, while others are recklessly handing out API keys. There is a middle ground. Drawing from my experience stabilizing complex stacks at Gusto and OneTrust, I want to lay out a pragmatic architecture for safe agency — how to let AI take the wheel without driving your GTM stack off a cliff.

The God-mode trap

In the early days of a startup, as I vividly recall from my time at Eventbrite, velocity is everything. You embed technical teams directly into sales ops or CX just to keep the lights on. In that environment, if you were deploying an AI agent today, the temptation would be to create a single Salesforce integration user with system administrator privileges and let the agent run wild.

I call this the God-mode anti-pattern and it is catastrophic.

If an attacker manages to use an indirect prompt injection — hiding malicious instructions in a calendar invite or a web page the agent reads — that agent essentially becomes a confused deputy. It has the keys to the kingdom. It can delete opportunities, export customer lists or modify pricing configurations.

When I joined Gusto to lead business technology, one of my priorities was data trust. You cannot have trust if your non-human actors have unfettered access. We moved away from the wild west of shared credentials toward a model of rigorous identity governance.

For AI agents, this means we must treat them as non-human identities (NHIs) with the same or greater scrutiny than we apply to employees.

Pillar 1: The tool gateway

At Gusto, we didn’t just wire systems together; we used middleware like Workato and MuleSoft to create a commerce engine that sanitized data flow. For AI agents, you need a similar architectural buffer. You should never connect an LLM directly to your raw system APIs.

Instead, you need a Tool Gateway.

Think of this as an air traffic controller. The agent doesn’t see your complex Salesforce schema or your NetSuite SOAP API. It sees a simplified, virtualized set of tools that you define.

  • Schema virtualization: Instead of giving the agent access to the user object, you give it a tool called onboard_new_customer(name, email). The gateway handles the translation.
  • Semantic validation: This is critical. The gateway checks if the action makes business sense before passing it to the backend. For example, “Is the start date before the end date?” or “Does this discount exceed the 20% threshold?”

The industry is coalescing around the model context protocol (MCP) as a standard for this layer. Think of it as a universal USB-C port for connecting AI models to your data sources. By using an MCP server as your gateway, you ensure the agent never sees the credentials or the full API surface area, only the tools you explicitly allow.
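To make the gateway concrete, here's a minimal sketch of what a virtualized, validated tool can look like. The tool name, the ToolResult shape and the 20% threshold are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    ok: bool
    detail: str

MAX_DISCOUNT_PCT = 20  # business rule lives in the gateway, not in the prompt

def apply_discount(opportunity_id: str, discount_pct: float) -> ToolResult:
    """Virtualized tool: the agent never sees the raw CRM schema or API."""
    # Semantic validation runs before anything touches the backend.
    if not 0 <= discount_pct <= MAX_DISCOUNT_PCT:
        return ToolResult(False, f"discount {discount_pct}% exceeds the {MAX_DISCOUNT_PCT}% threshold")
    # Only now would the gateway translate this into a real backend call.
    return ToolResult(True, f"discount of {discount_pct}% queued for {opportunity_id}")

# Explicit allowlist: the agent can only call tools registered by name.
ALLOWED_TOOLS = {"apply_discount": apply_discount}

def dispatch(tool_name: str, **kwargs) -> ToolResult:
    if tool_name not in ALLOWED_TOOLS:
        return ToolResult(False, f"unknown tool: {tool_name}")
    return ALLOWED_TOOLS[tool_name](**kwargs)
```

The key design choice is that the business rule is enforced deterministically in the gateway, where a prompt-injected model cannot talk its way around it.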

Pillar 2: Identity is context

When I was architecting systems at Slack, we constantly dealt with the concept of user context. If a sales rep in London searches for a contract, they should only see UK contracts.

AI agents often break this because they use a service account that sees everything.

To fix this, we need to lean on the OAuth 2.0 on-behalf-of (OBO) flow. When a user asks an AI agent to “update this deal,” the agent shouldn't act under its own all-seeing service identity; it should exchange the user's token and act as that specific user.

This means the underlying platform (Salesforce, Workday, etc.) enforces its existing permission rules. If the user doesn’t have permission to view executive compensation, the agent acting on their behalf won’t either. This simple architectural decision saves you from having to rebuild your entire authorization model inside the AI layer.
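The generic version of this pattern is the OAuth 2.0 token exchange defined in RFC 8693; individual identity providers implement OBO with their own grant types and parameters, so treat the scope and credential values below as illustrative:

```python
def build_obo_token_request(user_token: str, client_id: str,
                            client_secret: str, scope: str) -> dict:
    """Request body for an RFC 8693 token exchange: the agent trades the
    signed-in user's token for one that acts on that user's behalf,
    scoped down to least privilege. Field values here are placeholders."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_token,        # the user's existing access token
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "client_id": client_id,             # the agent's own non-human identity
        "client_secret": client_secret,
        "scope": scope,                     # e.g. only the deal-update scope
    }
```

The resulting token inherits the user's permissions, so the platform's existing row-level and field-level security does the authorization work for you.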

Pillar 3: The dry-run rule

One of the cultural shifts I drove at Gusto was improving devex (developer experience) and stabilizing our release pipelines with CI/CD tools like Gearset. We treated infrastructure changes with extreme caution.

We need to treat AI actions with the same reverence. My rule for autonomous agents is simple: If it can’t dry run, it doesn’t ship.

Every state-changing tool (POST, PUT, DELETE) exposed to an agent must support a dry_run=true mode. When the agent wants to update a record, it first calls the tool in dry-run mode. The system returns a diff — a preview of exactly what will change (e.g., “Status will change from Active to Churned”).

This allows us to implement a human-in-the-loop approval gate for high-risk actions. The agent proposes the change, the human confirms it and only then is the live transaction executed. This prevents the kind of nightmare scenario we have already seen in the wild: an AI agent recursively deleting a database because it lacked context awareness.
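Here's a minimal sketch of the pattern, with a hypothetical in-memory record store and an approval callback standing in for the real system of record and the real human reviewer:

```python
# Hypothetical record store standing in for a real system of record.
RECORDS = {"acct-42": {"status": "Active"}}

def update_status(record_id: str, new_status: str, dry_run: bool = True) -> dict:
    """State-changing tool: defaults to dry run and returns a diff either way."""
    current = RECORDS[record_id]["status"]
    diff = {"field": "status", "from": current, "to": new_status}
    if dry_run:
        return {"applied": False, "diff": diff}  # preview only; nothing changed
    RECORDS[record_id]["status"] = new_status
    return {"applied": True, "diff": diff}

def agent_update(record_id: str, new_status: str, approve) -> dict:
    """The agent proposes via dry run; a human callback gates the live write."""
    preview = update_status(record_id, new_status, dry_run=True)
    if not approve(preview["diff"]):
        return {"applied": False, "reason": "rejected by reviewer"}
    return update_status(record_id, new_status, dry_run=False)
```

Because the dry run returns the exact diff, the reviewer approves a concrete change (“Active” to “Churned”), not the agent's vague description of its intent.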

Pillar 4: Transactional safety nets

Finally, we have to accept that agents are probabilistic. They will hallucinate. They will retry due to network blips. They will get confused.

In the distributed systems we built at Slack and Ethos, we relied on two patterns that are non-negotiable for agentic AI:

  1. Idempotency keys: Every time an agent intends to take an action, it must generate a unique ID. If the agent gets confused and tries to create an invoice three times, the gateway sees the same key and ensures the action only happens once. This idempotency key pattern is common in payments, but it is now mandatory for AI.
  2. Compensating transactions (sagas): If an agent successfully creates a user in Salesforce but fails to create them in NetSuite, you have a data integrity failure. Since we can't easily run atomic transactions across different SaaS clouds, we need compensating transactions: effectively, an undo button for each step. If step 2 fails, the system automatically triggers a rollback of step 1.
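Both patterns fit in a few lines at the gateway layer. This sketch assumes an in-memory idempotency cache and hypothetical step functions; a production version would persist the cache and log every compensation:

```python
import uuid

_processed: dict[str, dict] = {}  # gateway-side idempotency cache (in-memory for illustration)

def create_invoice(idempotency_key: str, amount: float) -> dict:
    """If a confused agent retries with the same key, return the original
    result instead of creating a duplicate invoice."""
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    invoice = {"id": str(uuid.uuid4()), "amount": amount}
    _processed[idempotency_key] = invoice
    return invoice

def run_saga(steps: list) -> bool:
    """Minimal saga: each step is a (do, undo) pair. On failure, run the
    compensating undo for every completed step, in reverse order."""
    completed = []
    for do, undo in steps:
        try:
            do()
            completed.append(undo)
        except Exception:
            for compensate in reversed(completed):
                compensate()
            return False
    return True
```

Stripe popularized the idempotency-key pattern for payments; the saga pattern comes from distributed-transaction literature. Together they mean a retry-happy or half-failed agent leaves your systems of record consistent.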

The leadership perspective

Leadership isn’t about being perfect; it’s about being present and navigating challenges with resilience. The same applies to our technology strategy. We cannot wait for AI to be perfect before we use it.

At Gusto, by stabilizing our platform and putting these guardrails in place, we were able to ship AI-assisted automations that classify and route incoming tickets, removing manual drudgery for our CX teams and accelerating handling times. We didn’t do it by being reckless; we did it by building a trust architecture first.

As CIOs and IT leaders, our job isn’t to say “no” to AI. It’s to build the invisible rails that allow the business to say “yes” safely. By focusing on gateways, identity and transactional safety, we can give AI the hands it needs to do real work, without losing our grip on the wheel.

This article is published as part of the Foundry Expert Contributor Network.