Nearly every CEO on the planet has spent the last two years touting the potential of generative AI (GenAI), and increasingly agentic AI, as the future of efficiency and profitability. Rightly so: we are in the midst of an AI supercycle that will drive the biggest technological transformation businesses have seen in the past 25 years. Before that can happen, however, business leaders will need to find ways to convert the potential of these new technologies into tangible results.
Today, more often than not, that is not happening. In fact, according to BCG, about 70% of enterprise AI initiatives fail to achieve and scale value. Echoing these findings, our recent enterprise AI study found that 40% of senior executives say their AI initiatives cannot get past the pilot phase.
That means the lion’s share of companies that are using GenAI and experimenting with AI agents are doing so in small pockets or silos within their organizations. They may have a specialized tool or a narrow solution set, but very few are delivering the kind of end-to-end, AI-powered workflows that the world has been dreaming about for so long.
A problem of integration, not potential
What’s causing this growing gap between potential and real-world value? The common thread in these struggling AI projects is a lack of understanding of the critical interplay between data, domain expertise, and the ability to integrate AI into enterprise workflows. You can have the most powerful large language model (LLM) or the most capable agentic framework in the world, but if the technology cannot be fully integrated into each step of a complex process, it will not create value for end users.
This is particularly true in complex, heavily regulated industries like insurance, banking and finance, and healthcare, where the massive opportunity for AI to streamline processes and improve customer experience is often overshadowed by data restrictions, security challenges, and administrative roadblocks.
Let’s look at the insurance industry as an example. Each year, the $7 trillion global insurance industry spends about $350 billion on claims administration and underwriting. That includes everything from actuarial risk modeling to processing claims to making sure policy coverage details match up with consumer payouts. Each step in that chain of events is accompanied by thousands of data points, hand-offs between carrier personnel, and customer touchpoints. It is also rife with inefficiencies.
In the personal lines auto insurance business alone, insurers lose an estimated $30 billion each year to missing or erroneous underwriting information and other errors in the claims process. This, of course, is precisely the type of data-heavy, repetitive, labor-intensive work that AI was designed to simplify. Yet many insurers have struggled to integrate AI.
That’s because even the best off-the-shelf AI models and tools were not designed for this type of specialized use case, and few businesses have modernized their data estates to the point where all of the underlying information used in one area of the business is available to all of the other business functions. As a result, efforts to modernize with AI typically run into challenges with data silos and produce results that aren’t quite accurate enough to fully trust.
AI in the workflow, powered by domain, data, and AI
The solution is two-fold. First, it requires embedding AI directly into the workflow, anchored by the powerful combination of domain expertise, data accessibility, and the right AI technologies. Second, it demands the seamless orchestration of these elements—domain, data, and AI—at speed, enabling organizations to accelerate value realization and move quickly from experimentation to tangible business impact.
This means starting not with technology, but with expertise. Before any AI model can be successfully integrated into the workflow, it needs people who deeply understand the domain—all the nuances and components of that workflow—and know where to go to get all of the data they need to make things work. If you’re going to automate an insurance workflow, you need to know insurance intimately; this is not a place where generalist or tech-specific knowledge can substitute for experience. The same goes for banking and finance, healthcare, consumer packaged goods, and every other industry that has its own complex processes.
From there, data is key. Data fuels AI, and it is an especially critical ingredient when you want to embed AI in the workflow. Too often, however, companies think they need to engage in massive data migration exercises to get all of their data into a perfect centralized repository, or data lake, before it can be fully leveraged for AI applications. That’s not the case. Through modern data ontologies and APIs that connect to multiple applications, it is possible to extract only the data necessary for certain functions, marry it with other datasets, and create a complete data-driven workflow, as the sketch below illustrates.
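To make that concrete, here is a minimal sketch in Python of the pattern: pulling only the fields a claims workflow needs from two separate systems and joining them on a shared key, rather than migrating everything into a lake first. The record shapes, field names, and over-limit check are illustrative assumptions standing in for responses from real policy-admin and claims APIs mapped through a shared ontology.

```python
from dataclasses import dataclass

# Minimal sketch: assemble a workflow view by joining only the fields
# needed from two separate systems. The records below are stand-ins for
# responses from hypothetical policy-admin and claims APIs; in practice
# a shared ontology would define policy_id as the common key.

@dataclass
class ClaimView:
    claim_id: str
    policy_id: str
    coverage_limit: float
    claim_amount: float
    flagged: bool  # True when the claim exceeds the policy's coverage

policy_records = [
    {"policy_id": "P-100", "coverage_limit": 25_000.0},
    {"policy_id": "P-200", "coverage_limit": 50_000.0},
]
claim_records = [
    {"claim_id": "C-1", "policy_id": "P-100", "claim_amount": 31_000.0},
    {"claim_id": "C-2", "policy_id": "P-200", "claim_amount": 12_500.0},
]

def build_claim_views(policies, claims):
    """Join claims to policies on policy_id and flag over-limit claims."""
    limits = {p["policy_id"]: p["coverage_limit"] for p in policies}
    views = []
    for c in claims:
        limit = limits.get(c["policy_id"])
        if limit is None:
            continue  # no matching policy; route to manual review in practice
        views.append(ClaimView(
            claim_id=c["claim_id"],
            policy_id=c["policy_id"],
            coverage_limit=limit,
            claim_amount=c["claim_amount"],
            flagged=c["claim_amount"] > limit,
        ))
    return views

for view in build_claim_views(policy_records, claim_records):
    print(view)
```

The design point is that the join key comes from the ontology, so each function extracts only what it needs while staying consistent with data used elsewhere in the business.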
Then, once the right people are in place and the right data can be accessed, it’s time to start digging into AI use cases. Going back to the insurance claims example, this would be the time to start fine-tuning LLMs and testing data extraction tools to see how well they help underwriters get a faster, more accurate view of risk, spot data anomalies, and cut down on lags in the claims process. This is the phase where teams discover, for example, that an off-the-shelf GenAI solution may not be the most effective tool for scouring claims documents and surfacing key insights, while a custom agentic model may be better suited to claims analysis.
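As a hedged illustration of what such a test might look like, the sketch below prompts a model to extract structured JSON from a claim note and validates the reply before it enters the workflow. The call_llm function, prompt wording, field names, and sample note are hypothetical stand-ins, not any specific vendor’s API; the point is the validate-before-trusting pattern.

```python
import json

EXTRACTION_PROMPT = """Extract the following fields from the claim note
and return them as JSON: claim_date, loss_type, estimated_amount.
Return null for any field not present.

Claim note:
{note}
"""

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in your provider's SDK here.
    # Returns a canned response so the example runs end to end.
    return ('{"claim_date": "2024-03-14", "loss_type": "rear-end collision", '
            '"estimated_amount": 4200}')

def extract_claim_fields(note: str) -> dict:
    """Prompt the model, then check the reply parses as JSON with exactly
    the expected keys; anything else is flagged for human review."""
    raw = call_llm(EXTRACTION_PROMPT.format(note=note))
    try:
        fields = json.loads(raw)
    except json.JSONDecodeError:
        return {"needs_review": True, "raw": raw}
    if set(fields) != {"claim_date", "loss_type", "estimated_amount"}:
        return {"needs_review": True, "raw": raw}
    return fields

note = "Insured reports rear-end collision on 3/14/2024; body shop estimate $4,200."
print(extract_claim_fields(note))
```

In practice, replies that fail validation would be retried or routed to a human reviewer rather than silently corrupting downstream steps, which is exactly the kind of orchestration decision where a custom agentic pipeline tends to outperform an off-the-shelf tool.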
Orchestrating an AI-led workflow
That full process—from understanding the details of the workflow to navigating the data required to power AI solutions to integrating AI—needs to be orchestrated in a deliberate, highly choreographed way before the enterprise value of AI can be unlocked. The real winners in the AI arms race will be those that are able to first dissect the problem they are trying to solve, then cherry-pick the right datasets and AI solutions to achieve those results as quickly, inexpensively, and accurately as possible.
Our approach to helping organizations integrate AI into their workflows draws on the domain expertise, data, and AI capabilities we have built over the years. This allows us to orchestrate the right mix of horizontal and vertical solutions, and we continuously innovate across both stacks, so clients realize value quickly, accurately, and cost-effectively.
Today, the work we are doing to embed AI into the enterprise workflows of leading insurance, banking and finance, and healthcare payer organizations is not only adding efficiency and reducing costs; it is also improving customer experience and business outcomes. In healthcare, for example, where roughly $180 billion is wasted each year on erroneous claims payments, we have been able to deliver $2.2 billion in savings back to the healthcare system by using AI-powered algorithms to spot erroneous or fraudulent billing. That’s much bigger than an efficiency play. It gets to the core of a complex problem that has challenged the industry for decades. And we’re only just beginning.
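Production fraud models are proprietary, but one simple pattern behind this kind of detection can be sketched: compare each claim line against peer billing for the same procedure code and flag large deviations. The records, procedure code, and threshold below are illustrative assumptions, not EXL’s actual algorithm.

```python
from statistics import median

# Illustrative claim lines; the last amount is far above its peers.
claims = [
    {"provider": "A", "code": "99213", "amount": 120.0},
    {"provider": "B", "code": "99213", "amount": 115.0},
    {"provider": "C", "code": "99213", "amount": 118.0},
    {"provider": "D", "code": "99213", "amount": 480.0},
]

def flag_outliers(claims, threshold=3.0):
    """Flag claims whose amount deviates from the peer median for the
    same procedure code by more than `threshold` times the median
    absolute deviation (MAD), a robust measure of typical spread."""
    by_code = {}
    for c in claims:
        by_code.setdefault(c["code"], []).append(c["amount"])
    flagged = []
    for c in claims:
        amounts = by_code[c["code"]]
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts) or 1.0  # avoid divide-by-zero
        if abs(c["amount"] - med) / mad > threshold:
            flagged.append(c)
    return flagged

print(flag_outliers(claims))  # flags provider D's $480 line
```

Real systems layer many such signals, but even this single robust-statistics check shows why the data work described above matters: the flag is only as good as the peer data the workflow can reach.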
To learn more about how EXL is embedding AI in the workflows of some of the world’s leading businesses, please visit www.exlservice.com.
About the author:
Rohit Kapoor is chairman and CEO of EXL, a global data and AI company.