At Davos in January, PwC’s CEO survey was hard to miss. PwC helps set the week’s boardroom agenda, so its findings carry weight: 56% of CEOs say AI hasn’t produced significant cost or revenue benefits yet, and only 12% say it has delivered both. That mismatch between investment and measurable results is why many enterprise AI programs stall.
Enterprise AI tools deliver uneven returns because most benefits live inside redesigned workflows, strong data foundations and disciplined governance. Vendors sell features but enterprises earn ROI only when they redesign how work gets done and measure value in production.
The CEO question that breaks most AI programs
Every AI initiative eventually faces the same moment: The CEO or CFO asks, “What changed in dollars?” The room offers a list of pilots, a stack of screenshots and a handful of anecdotes. Then comes the awkward pause.
The truth is that buying AI is easy, but capturing value from AI is hard work. It requires process design, data discipline, adoption and controls that survive contact with the real world.
At Davos, the tone sounded more pragmatic. EY’s Julie Teigland told Reuters that ROI requires changing job descriptions and redesigning how work gets done. She cited EY work suggesting 81 hours of training per employee, with role redesign, can translate into a 14% productivity gain. She also warned that too many pilots can become a “death trap.”
Why the ROI story splits into winners and wanderers
Across reputable research, the pattern looks consistent: Many organizations experiment widely, a smaller set scales a few use cases and a minority ties those deployments to P&L outcomes.
In July 2024, Gartner predicted that at least 30% of AI projects would be abandoned after proof of concept by the end of 2025, citing poor data quality, inadequate risk controls, escalating costs and unclear business value. By February 2025, Gartner had raised the alarm further, predicting that organizations will abandon 60% of AI projects through 2026 when those efforts aren’t supported by “AI-ready” data.
Winners exist, and they look different in two ways. First, they start where the work already has a measurable unit and a clear owner. McKinsey’s research from last year shows that organizations increasingly report revenue impact from AI applications inside business units, with service operations standing out as an early hotspot. Second, they treat foundations as part of the product.
What are common themes that dilute your ROI?
ROI dilution happens when costs show up immediately (licenses, tokens, integration, controls) while benefits arrive as small fragments (minutes saved, fewer errors, faster drafts) that never roll up into P&L impact.
The dilution is fixable, but it requires measurement design and workflow redesign early, before the tool spreads through the organization.
Here are five common factors diluting AI ROI across the enterprise:
1. Pilot sprawl with no production gate
A pilot can feel like progress because it creates output. Production requires reliability, security, data access, monitoring and support. Without a gate, pilots multiply and none mature.
2. Productivity savings that never become capacity
AI can shave minutes off tasks, but P&L only moves when that time converts into throughput, reduced overtime, fewer contractors, higher conversion or shorter cycle time with the same headcount. Without an operating plan, saved minutes can simply dissolve into the calendar.
Deloitte’s research frames the gap as a mix of fast-moving technology, hard-to-isolate impact and the human reality that tools only pay off when people change how they work.
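The conversion from saved minutes to P&L impact can be modeled explicitly. The sketch below is illustrative, with hypothetical figures throughout: Only the fraction of saved time backed by an operating plan (fewer contractor hours, higher throughput) should count toward ROI.

```python
# Hypothetical model: saved minutes only become P&L impact when a
# capacity plan converts them into fewer paid hours or more output.

def annual_pnl_impact(minutes_saved_per_task: float,
                      tasks_per_year: int,
                      loaded_hourly_cost: float,
                      capture_rate: float) -> float:
    """capture_rate is the fraction of saved time actually converted
    into reduced spend or added throughput (0.0 to 1.0).
    Without an operating plan, capture_rate is effectively zero."""
    hours_saved = minutes_saved_per_task * tasks_per_year / 60
    return hours_saved * loaded_hourly_cost * capture_rate

# Illustrative: 6 minutes saved on 50,000 tasks at a $90 loaded hourly cost
gross = annual_pnl_impact(6, 50_000, 90, capture_rate=1.0)  # vendor math
real = annual_pnl_impact(6, 50_000, 90, capture_rate=0.3)   # with a plan
print(f"vendor math: ${gross:,.0f}, captured: ${real:,.0f}")
```

The gap between the two numbers is the dilution the section describes: Vendor calculators implicitly assume a capture rate of 1.0.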
3. Data friction becomes “AI tax”
AI systems are hungry for high-quality, permissioned, current data. Many enterprises discover that the fastest path to better answers is a better data layer. That work adds cost and time, pushing the return out by several quarters.
IBM’s CEO study highlights that many leaders see proprietary data as the key to value while also admitting that rapid investment has created disconnected, piecemeal technology. That fragmentation turns every rollout into a bespoke integration project, which slows adoption and raises the cost of control.
4. Control overhead grows faster than usage
Security reviews, model risk assessments, privacy work, vendor approvals and legal reviews all consume scarce talent. When use cases are scattered, every team repeats the same work.
5. The accounting mismatch
CEOs and CFOs want clean attribution: Costs cut or revenue increased. Many first-wave gains show up as quality improvements, faster internal decisions or reduced risk in specific functions. Without disciplined instrumentation, organizations struggle to translate improvements into dollars for ROI math.
Bain’s executive survey reports that many AI use cases met or exceeded expectations, yet only 23% of respondents could tie the work directly to new revenue or lower costs.
How vendor narratives amplify the dilution
Vendors face a simple incentive: They get paid for adoption and expansion. However, enterprises get measured on outcomes. That misalignment shows up in predictable selling patterns:
- ‘Time-to-value’ promises that assume your data is clean, permissioned and searchable.
- Seat-based pricing for copilots that ignores whether employees actually change behavior.
- ROI calculators that treat ‘hours saved’ as ‘money saved’ without tying them to a staffing or throughput plan.
- Security claims that focus on the model while underplaying the application layer: Connectors, retrieval, agents and human workflows.
- Product roadmaps that move faster than your controls, pushing upgrades that reset validation and change outputs.
None of this means vendors act in bad faith. It means CIOs and CISOs need procurement muscle that is fluent in how value and risk actually show up.
7 questions CIOs can ask every AI vendor before renewal
Procurement checklists often focus on feature lists. You want the questions that expose hidden work and hidden risk:
- Where does your ‘hours saved’ number come from, and how did customers convert it into P&L impact? Ask for the workflow change story.
- What is the full cost model for our expected usage: Licenses, tokens, embeddings, retrieval, logging and any premium security features?
- What data do you store, for how long and for what purpose? Include prompts, outputs, metadata and admin logs.
- What controls exist for model drift and version changes? How do you notify customers, and how do you support re-validation?
- What security testing do you support for prompt injection, data exfiltration and connector abuse? Provide documentation, not slides.
- What is your incident process, including breach notification timelines and customer responsibilities?
- What does a clean exit look like? Data export, embeddings portability, audit logs and contract language on deletion.
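To make the second question concrete, here is a minimal sketch of a year-one cost model. Every line item and price is a hypothetical placeholder to be replaced with the vendor’s actual rates and your own usage forecast.

```python
# Hypothetical year-one cost model for an AI tool at expected usage.
# All prices below are illustrative placeholders, not any vendor's rates.

def year_one_cost(seats: int, seat_price_per_month: float,
                  tokens_per_month: float, price_per_million_tokens: float,
                  embedding_refresh: float, logging_and_security: float,
                  integration_one_time: float) -> float:
    recurring = 12 * (seats * seat_price_per_month
                      + tokens_per_month / 1e6 * price_per_million_tokens)
    return (recurring + embedding_refresh
            + logging_and_security + integration_one_time)

total = year_one_cost(seats=500, seat_price_per_month=30,
                      tokens_per_month=200e6, price_per_million_tokens=8,
                      embedding_refresh=40_000,
                      logging_and_security=60_000,
                      integration_one_time=120_000)
print(f"year-one cost: ${total:,.0f}")
```

Even with placeholder numbers, the structure makes the point: Seat licenses are often a minority of the true cost once tokens, data work and controls are counted.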
These questions turn sales calls into engineering conversations and give you the accurate information you need to predict the return on your investment.
Bottom line: Returns or retreat
AI programs tend to fail in two predictable ways: They stall in perpetual pilots, or they scale faster than governance can keep up. A better path is deliberate and disciplined: Roll out in measured waves, with value and security treated as one program from day one.
PwC’s CEO data underscores the mood in the market: Many leaders are still waiting for proof that shows up in the numbers. The organizations that win often look unglamorous early on. They run fewer use cases, define sharper success metrics, tighten controls and pause quickly when results don’t hold.
That discipline also changes the tone of vendor negotiations. When you can walk in with cost per task, adoption by role, error and rework rates and documented risk events, you negotiate from clarity. Instead of relying on promises, you are now buying outcomes that you can measure, verify and enforce.
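The negotiating metrics above roll up into a single figure worth bringing to a renewal conversation. The numbers in this sketch are hypothetical; the point is that cost per successfully completed task, net of rework, is the unit to negotiate on, not list price.

```python
# Hypothetical rollup: cost per good task, net of rework, is a
# concrete figure to anchor a renewal negotiation.

def cost_per_good_task(total_annual_cost: float,
                       tasks_completed: int,
                       rework_rate: float) -> float:
    """Tasks requiring rework are excluded from the denominator,
    so higher error rates raise the effective cost per task."""
    good_tasks = tasks_completed * (1 - rework_rate)
    return total_annual_cost / good_tasks

# Illustrative: $400k annual cost, 500k tasks, 10% rework
print(round(cost_per_good_task(400_000, 500_000, rework_rate=0.10), 2))
```

Tracking how this figure moves quarter over quarter also gives you an early signal to pause, as the preceding paragraph recommends, when results don’t hold.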
This article is published as part of the Foundry Expert Contributor Network.
Read More from This Article: The Davos reality check on AI ROI: Why tools don’t pay off until work changes

