Many organizations have launched dozens of AI proof-of-concept projects only to see a huge percentage fail, in part because CIOs don’t know whether the POCs are meeting key metrics, according to research firm IDC.
In a September IDC survey, 30% of CIOs acknowledged they didn’t know what percentage of their AI POCs met target KPI metrics or were considered successful.
Combined with an April IDC survey that found organizations launching an average of 37 AI POCs, the September survey suggests many CIOs have been throwing the proverbial spaghetti at the wall to see what sticks, says Daniel Saroff, global vice president for consulting and research services at IDC.
With an average of just five out of dozens of AI POCs going into production, and only three of those considered successful, the result is a generative AI “spin cycle” with organizations launching a lot of experiments with little impact, Saroff says.
“When they say they don’t know what their KPIs are, what they’re really saying is, ‘When we determine the proof of concept, we didn’t have a measure of success,’” he adds.
The potential cost can be huge, with some POCs costing millions of dollars, Saroff says.
Meanwhile, about 70% of those surveyed by IDC in September said nine of every 10 custom-built AI apps failed to clear the POC stage and go into production. Thirty-five percent of CIOs said none of their custom-built AI apps made it out of POC.
CIOs had a slightly better track record with vendor-built AI apps, but still, nearly two-thirds noted a 90% failure rate with vendor-led AI POCs.
Difficult to define success
Even when AI apps make it to production, many CIOs don’t have a clear idea of what success looks like. Nearly half of CIOs said they either didn’t know whether their AI production apps were successful or thought it was too early to tell.
In many cases, organizations appear to be launching POCs without enough preparation, Saroff says. Many organizations have launched gen AI projects without cleaning up and organizing their internal data, he adds.
“We’re seeing a lot of the lack of success in generative AI coming down to something which, in 20/20 hindsight, is obvious, which is bad data,” he says. “You have a new technology with a lot of hype around it, with people feeling they need to rush into it, and they’re not doing the preparatory setup.”
A lack of data management and inadequate access management appear to be two of the major roadblocks to AI POC success, adds Daniel Clydesdale-Cotter, CIO at EchoStor, a value-added reseller.
“A particular concern is that many enterprises may be rushing to implement AI without properly considering who owns the data, where it resides, and who can access it through AI models,” he says. “The high uncertainty rate around AI project success likely indicates that organizations haven’t established clear boundaries between proprietary information, customer data, and AI model training.”
Access control is important, Clydesdale-Cotter adds. An organization’s finance team shouldn’t have access to the data being used in an HR AI tool, and vice versa, he says. At the same time, data necessary for an AI tool to work is often siloed across organizations.
A lack of planning
In addition, the percentage of CIOs who can’t tell if their AI POCs are successful suggests a lack of strategic planning before the projects are launched, says Michael Stoyanovich, vice president and senior consultant at Segal, a consulting firm focused on human resources and employee benefits.
It “highlights a lack of clarity and measurement in evaluating AI project success,” he says. “This uncertainty can lead to wasted resources and even more importantly, missed opportunities for improvement.”
In too many cases, organizations appear to launch AI POCs without considering business impact. While some AI POCs can provide incremental improvements in internal productivity, these projects are rarely game-changers, he says.
“Organizations are just jumping in and not setting a strategic plan to integrate AI into their organization thoughtfully,” Stoyanovich adds. “It is not only appropriate, but probably a boon, to actually take a pause, take a deep breath, straighten your back, and then put in place a quick strategic plan.”
The IDC survey results are “alarming,” both that nearly a third of CIOs don’t understand the success metrics and that 90% or more of POCs are failing, adds David Curtis, CTO at RobobAI, a fintech using AI to help companies manage supply chains.
Many POCs appear to lack clear objectives and metrics, he says. He also agrees with IDC’s Saroff that many companies launch AI projects with insufficient or poor-quality data.
Too many people pushing organizations to adopt gen AI don’t understand the technology, Curtis says. Many executives have misconceptions about the amount of work needed to deploy AI, and some mistakenly think AI will replace many employees, he says.
“People think that AI is in some way magic, that it’s going to be a point that’s going to solve all the problems in one go,” he adds. “There is a reasonably significant amount of work in dealing with AI, depending on the use case. It isn’t just a case of picking something up off the shelf and running it.”
In some cases, a failed AI experiment may be educational and point organizations to better projects, Curtis says. But many organizations, after seeing the large majority of their AI POCs fail, may stop experimenting.
“A lot of financial services companies that I work with don’t have a risk culture,” he says. “If something fails and they spent millions of dollars on it, they’re likely not to do it again.”
For risk-averse companies, good planning up front may be a better alternative than launching dozens of POCs and failing fast.
“Try to remove some of the risk up front before you actually get started,” Curtis says. “Every place I’ve ever worked at, internal resources are just a premium. Rather than having 37 POCs, you get it down to two or three that are meaningful to start with.”
Start with strategic needs
EchoStor’s Clydesdale-Cotter advises CIOs to carefully consider strategic business needs before launching multiple AI POCs. Like Stoyanovich, he suggests companies focus more on AI projects that bring a competitive advantage than those that provide small efficiency upgrades.
One company he has worked with launched a project that used a large language model (LLM) to assist with internal IT service requests. The POC was able to cut operational expenses by using AI to answer many IT service queries.
“The customer really liked the results,” he says. “But the upshot of this was, ‘You’re going to have to spend upwards of a million dollars potentially to run this in your data center, just with the new hardware and software requirements.’
“And the business comes back and says, ‘Why would we spend a million dollars? We could hire five people.’”