Salesforce is recalibrating its enterprise AI strategy — and CIOs could be footing the bill.
The company has added deterministic controls to Agentforce through a new scripting layer called Agent Script, shifting responsibility for AI behavior back onto customers. Analysts warn the move will force CIOs to absorb new costs, revisit delivery timelines, and defend AI decisions that were once marketed as autonomous.
“Agentforce was pitched as a self-directed agent that could resolve customer issues end-to-end without needing to be micromanaged. CIOs budgeted, planned, and communicated internally based on that vision. What Salesforce is now saying, very clearly, is that autonomy without guardrails is unscalable. You need deterministic controls not just to govern AI behavior, but to defend it. That’s a major shift, and it throws existing roadmaps into question,” said Sanchit Vir Gogia, CEO of Greyhound Research.
The need to recalibrate
Salesforce embedded Agent Script into Agentforce in October as part of an effort to make AI agents viable in production, not just pilots.
Company executives, including Phil Mui, SVP of its AI research division, wrote that its “most sophisticated customers” were struggling to deploy autonomous agents into critical workflows because their unpredictable behavior drove up operational risk and downstream costs.
The problem, according to Gogia, was Agentforce’s heavy reliance on the Atlas Reasoning Engine, which itself relied on multiple LLMs to plan, sequence, and select actions in real time based on user input.
While that approach showed promise in theory, it faltered in enterprise production as agent behavior varied from session to session, with identical customer scenarios triggering different execution paths based on how the model interpreted intent in the moment, Gogia said.
When agents drifted or stalled, developers had few options beyond continuously rewriting prompts — a pattern Salesforce engineers later described as a “doom-prompting” cycle that failed to address the underlying issue, Gogia added.
In fact, Animesh Banerjee, managing director of Bong Bong Academy, a Salesforce workforce development partner, said that his firm faced issues while implementing Agentforce at multiple Indian enterprises: “The biggest friction points were high operating costs and unpredictable responses even after structured prompting, especially in customer support use cases.”
Separately, Jayanta Acharjee, senior Salesforce consultant at global software firm Sitetracker, said that enterprises faced similar challenges implementing Agentforce: variance in LLM-generated answers to user queries was hard to accept, particularly in finance and healthcare support use cases.
“Even low-frequency inaccuracies are unacceptable when responses go directly to customers. The failure mode is ‘confidently wrong,’ which creates reputational and legal exposure,” Acharjee, who also worked at professional services firm Huron as a senior Salesforce developer, said.
According to Salesforce’s Mui, Agent Script paves the way for enterprises to impose deterministic (rule-based) structure on agent execution — breaking tasks into governed steps with defined logic and state — so enterprises could better control outcomes, limit unnecessary compute, and make agent behavior auditable.
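Mui’s description suggests a step-based execution model: governed steps, explicit preconditions, and shared state that makes every run auditable. As an illustrative sketch only — this is not actual Agent Script syntax, and the workflow names are hypothetical — the same idea can be expressed in Python:

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch only: NOT Agent Script. It models the general idea of
# breaking an agent task into governed steps with defined logic and state,
# so the execution path depends on state, not on in-the-moment LLM planning.

@dataclass
class AgentState:
    data: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

@dataclass
class Step:
    name: str
    guard: Callable[[AgentState], bool]   # deterministic precondition
    action: Callable[[AgentState], None]  # deterministic (or bounded-LLM) work

def run_pipeline(steps: list[Step], state: AgentState) -> AgentState:
    """Run each step in order; record every decision for auditability."""
    for step in steps:
        if step.guard(state):
            step.action(state)
            state.audit_log.append(f"ran:{step.name}")
        else:
            state.audit_log.append(f"skipped:{step.name}")
    return state

# Hypothetical refund workflow: identical input always yields the same path.
steps = [
    Step("verify_order",
         guard=lambda s: "order_id" in s.data,
         action=lambda s: s.data.update(verified=True)),
    Step("issue_refund",
         guard=lambda s: s.data.get("verified") and s.data.get("amount", 0) <= 100,
         action=lambda s: s.data.update(refunded=True)),
    Step("escalate_to_human",
         guard=lambda s: not s.data.get("refunded"),
         action=lambda s: s.data.update(escalated=True)),
]

result = run_pipeline(steps, AgentState(data={"order_id": "A1", "amount": 50}))
print(result.audit_log)
```

The point of the sketch is the contrast with free-form LLM planning: because each transition is gated by explicit state checks, the same customer scenario can never trigger different execution paths between sessions, and the audit log documents exactly which rules fired.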
However, analysts see this shift from full autonomy to what Salesforce describes as “hybrid reasoning” as placing greater operational and governance responsibility on CIOs, requiring them to engineer and maintain AI controls that were once expected to be handled by the platform.
Challenges for CIOs
For one, Avasant research director Chandrika Dutt sees the recalibration as a burden on CIOs and their teams, both from a cost and skills perspective.
The change, according to Dutt, will force enterprises to now invest in workflow mapping, data modeling, and prompt efficiency management (to control token and consumption costs) — all of which are capabilities that are often outside the core skill sets of many enterprises, particularly those that adopted Agentforce expecting abstraction and simplicity.
“As a result, many enterprises may find themselves increasingly reliant on service providers to operationalize, govern, and optimize agent-driven workflows. What was initially perceived as a productivity shortcut can quickly evolve into a services-heavy engagement,” Dutt said.
Separately, Greyhound Research’s Gogia pointed out that the layering of deterministic controls also stretches the expected return-on-investment timeline for CIOs.
“If you planned to roll out a GenAI service desk in Q2 and let it learn on the job, you now need to add scripting, testing, versioning, and QA cycles. That slows things down,” Gogia said, adding that this creates a “political problem” for CIOs who evangelized AI internally and now must explain why the vendor changed direction and why the cost-benefit equation has shifted.
Broader industry reality
Analysts also say that Salesforce’s move underscores a broader industry reality: at scale, autonomy is proving harder to control, and more expensive to defend, than vendors initially suggested.
Gogia pointed out that his firm had tracked similar pivots across Microsoft, OpenAI, LinkedIn, ServiceNow, and even emerging AI toolchains: OpenAI, for example, released AgentKit specifically to give developers a way to choreograph agent behavior through explicit steps.
In fact, given the inherent weakness of LLMs as “terrible operators”, Dutt expects other software vendors to follow a similar path to Salesforce, especially in regulated industries, such as healthcare, BFSI, and the public sector.
“In these environments, deterministic automation is not optional; it is essential to ensure predictable behavior, auditability, regulatory compliance, and tightly controlled execution,” Dutt said.
How can CIOs respond?
CIOs facing the shift can only respond by altering the way they view their AI agents, treating them as embedded capabilities within business workflows instead of digital employees, analysts said.
That would mean more upfront investment.
“You must staff appropriately. That includes technical skills for scripting and debugging, but also design thinking for conversation flows, domain experts for policy design, and compliance leads for governance. AI is not a black box you license. It is a system you operate. If you don’t have the people to operate it, you don’t have a strategy. You have a pilot,” Gogia said.
Additionally, the analyst pointed out that CIOs would also have to align board expectations with the recalibration: Be clear about AI’s limits as well as its value and use Salesforce’s pivot to reset expectations, explain the investment required to reach ROI, and position AI as a long-term, scalable capability, not a quick win.
An alternative is to view the recalibration as an incremental step instead of a migration: CIOs should apply deterministic controls selectively, starting with workflows where predictability is non-negotiable, rather than rebuilding agents from scratch, said Akshay Sonawane, a former software engineering manager at Salesforce and currently a machine learning engineer at Apple.
Sonawane pointed out that CIOs should also view learning a new domain-specific language like Agent Script as a short-term issue because the alternative is prompt engineering, which is more difficult to debug, test, and maintain.
A Salesforce spokesperson also echoed Sonawane’s views on migration and said that enterprises facing issues with Agentforce implementation can reach out to its Professional Services team and other certified partners.
The caveat, as analysts pointed out, is that this help is not free.

