When financial tech vendor FIS announced its new AI agent for detecting financial crimes on Tuesday, it made much of the team of forward-deployed engineers (FDEs) from Anthropic embedded to make it happen. FIS is just one of the dozen or so companies working with Anthropic on developing agents for financial services using new connectors and so-called “ready-to-run” templates Anthropic announced the same day.
Enterprise CIOs are increasingly paying for the services of AI vendors’ FDEs, given their own data quality issues and the complexity of working with AI models.
But how and why such teams are brought in can make the difference between an enterprise that is helped to the next AI level and one that becomes a hostage to never-ending consulting costs.
FIS listed the Bank of Montreal (BMO) and Amalgamated Bank as the first two companies to deploy its agent, which it said will compress anti-money-laundering investigations from hours to minutes, assembling evidence across a bank’s core systems and surfacing the riskiest cases for review with full auditability and traceability of decisions. “Anthropic’s Applied AI team and forward-deployed engineers (FDEs) are embedded with FIS to co-design the Financial Crimes AI Agent and transfer knowledge so FIS can build and scale additional agents independently over time,” it said.
Aman Mahapatra, chief strategy officer for Tribeca Softtech, a New York City-based technology consulting firm, suggests CIOs follow the money when evaluating similar work with AI vendors.
“The structurally interesting thing about the FIS-Anthropic model is who actually pays the FDE cost. This is the question CIOs should be asking but mostly are not,” Mahapatra said.
The cost of FDEs could put some AI projects in jeopardy, according to a recent report by Alex Coqueiro, a senior director analyst with Gartner. He predicted that by 2028, “70% of enterprises will be forced to abandon agentic AI solutions from FDE-led engagements because of high vendor costs and lack of internal skills to evolve them independently.”
Service, not software
He argued that the problem is not entirely the fault of the AI vendor. Many IT operations don’t put in the necessary preparatory work to clean their data and make it AI-friendly. Internal corporate politics and personalities are another critical factor.
“The domain experts most critical to FDE success have the strongest incentive to undermine it. An expert who perceives the FDE as capturing their expertise for agentic automation will give the official process instead of the real one, and the AI agent built on it will fail on the exact edge cases they chose not to mention,” Coqueiro said in the report. “Flat FDE effort across successive deployments is the signal that an engagement has produced a dependency, not a capability. When effort does not decrease as use cases mature, the organization is paying consulting rates for operations it should own.”
In the case of FIS’s work with Anthropic, said Mahapatra, “BMO and Amalgamated are not writing direct checks to Anthropic for forward-deployed engineers at quarterly consulting rates. FIS is absorbing the FDE engagement and amortizing it across its banking customer base.”
That approach, he said, “is meaningfully better economics than direct Anthropic engagements where each bank funds its own embedded engineering team to redesign the same context boundaries, shadow autonomy controls, and the jailbreak resistance testing in isolation.”
Mahapatra said much of this problem stems from how generative and agentic AI have been marketed. The original ROI thesis, he said, was that AI enables enterprises to do more with fewer people, but that was “a marketing pitch that was never going to survive contact with regulated banking workflows.”
Nik Kale, a member of the Coalition for Secure AI (CoSAI) and of ACM’s AI Security (AISec) program committee, said that he sees FIS’s presentation of its work with Anthropic as “a concession that frontier AI isn’t a product yet. CIOs thought they were buying software. They’re actually buying a professional services engagement. That changes the cost model, the dependency model and the governance model for every enterprise AI deployment.”
Kale said the statement’s wording gives a clue about the agentic strategy.
“The FIS release says every agent decision is traceable and auditable. True statement, wrong sentence. The harder question isn’t auditing what the agent decided. It’s deciding which decisions are the agent’s to make in the first place. Banks have decades of decision-rights frameworks. They don’t translate cleanly to agent harnesses built by someone else’s engineers,” Kale said. “The CIO test is simple: after the forward-deployed team leaves, can your organization still operate, monitor, challenge, and safely modify the agentic workflow? If the answer is no, it’s not mature yet. It may be a successful implementation project, but it’s not yet an enterprise capability.”
Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey, agreed with Kale.
Human judgment pretending to be process
“The bigger risk isn’t the cost of these engagements. It’s the dependency they can create. Spending a few hundred thousand dollars to get something into production isn’t the issue,” Greis said. “Ending up with a system that only the vendor can operate, extend, or even fully understand is where things start to break down.”
The problem with some of these consulting arrangements is not so much that they hide IT deficiencies as that they enable AI shortcuts.
Enterprises paying FDE teams “do not undermine the ROI case for agentic AI. They undermine the lazy version of the ROI case. That distinction matters,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “For the past two years, too much of the enterprise AI narrative has been sold as a tidy labor-reduction story. Buy the model. Automate the work. Reduce the people. Capture the savings. It is neat, board-friendly, and deeply incomplete. Large enterprises are not collections of clean tasks waiting to be automated. They are collections of exceptions, legacy systems, fragile integrations, access controls, undocumented workarounds, compliance obligations, and human judgement pretending to be process. Forward deployed engineers are the invoice for making AI real. That is not transformation. That is dependency with better stationery.”
Another FDE concern is the potential conflict of interest that arises when the AI vendor being paid to fix the complexity is also the vendor that created much of that complexity in its model.
Carmi Levy, an independent technology analyst, said the business case can undermine enterprise objectives. “If AI agents are supposed to autonomously create, deploy, and manage super-capable workflows at all levels of the organization, their very capability threatens the future viability of vendors who have long attached lucrative support contracts to those very same deployments. If the FDE is going to be engaged to work alongside customers to make their AI agents come alive, where is the incentive for AI vendors to build agentic systems that are so capable that they don’t require ongoing support? The FDE business model influences up-front model design, and it’s entirely possible that AI platforms are being deliberately designed to require persistent FDE support.”

