When it comes to AI, the fear of missing out is real. According to a survey conducted by Coleman Parkes Research on behalf of Riverbed, 91% of decision-makers at large companies are concerned that competitors who get ahead with AI will gain an advantage. So it’s no surprise that every respondent said they’ll either be using gen AI, testing it, or planning projects with it over the next 18 months.
Adoption of the new technology has been unprecedented, with generative AI now eclipsing all other AI applications in the enterprise, according to an S&P Global Market Intelligence survey released in September. Nearly one in four organizations (24%) already have gen AI as an integrated capability across the entire business, and another 37% have it in production but not yet fully scaled.
“FOMO is absolutely real, especially when it looks like every organization has some sort of AI strategy,” says Alla Valente, an analyst at Forrester Research. “And there are dangers of moving too fast,” including bad PR, compliance or cybersecurity risks, legal liability, or even class-action lawsuits.
Even if a gen AI failure doesn’t rise to the level of major public embarrassment or lawsuits, it can still depress a company’s risk appetite, rendering it hesitant to launch more AI projects.
“Those organizations not taking risks with generative AI, they’re not going to be able to grow or innovate as quickly, and will lose in the long term. Even in the medium term they’ll lose market share to competitors,” Valente says.
That doesn’t mean rolling out gen AI everywhere, immediately.
“Companies really need to think about the Goldilocks approach,” she says. “The ‘just right’ for them. That means considering their risk appetite, risk management maturity, and generative AI governance framework.”
Keeping AI away from the public
One area where companies should exercise caution is when it comes to adopting gen AI for public-facing projects.
Since ChatGPT launched in late 2022, many companies have gotten into trouble by deploying the technology too quickly. An airline chatbot gave a customer a discount it shouldn’t have, and a court held the company liable. Google’s AI told users to put glue on pizza to keep the cheese from sliding off. More recently, Elon Musk’s Grok AI was found spreading election misinformation, prompting five secretaries of state to issue an open letter to parent company X calling for the chatbot to be fixed.
This kind of behavior is making some companies shy about taking their gen AI public. Instead, they are focusing the technology on internal operations, where it can still make a meaningful difference without causing a huge PR disaster if something goes wrong.
For example, Fortune 1000 tech consulting firm Connection is using gen AI internally for a few projects, including a workflow, enabled by Fisent’s process automation solution BizAI, that compares customer purchase orders with sales records and recommends whether each order should be approved.
Connection uses Pega Platform to manage workflows in several areas of the company, applying business rules and logic to route work. With the addition of BizAI, Connection can now digitize and further automate key business processes.
“We get between 50,000 and 60,000 different customer purchase orders per year from lots of small- and medium-sized businesses that aren’t set up to integrate with us electronically,” says Jason Burns, Connection’s senior director of process optimization and transformation.
These customers might attach PDFs, spreadsheets, image files, or other types of documents to an email, for example, or paste the purchase order right into the body of the email.
“Before AI, the review was manual, and humans were manually comparing hard copies of purchase orders with entries in our system,” he says. About a dozen people did this work, and because documents piled up, the typical turnaround time between an order coming in and someone being able to look at it was up to four hours. With gen AI making the initial comparisons and recommendations, the turnaround time is now just two minutes.
For example, gen AI can help determine whether an order was actually intended for Connection; customers sometimes send purchase orders to the wrong vendor by mistake. It also checks whether addresses match, something that’s difficult for older types of AI, and matches the customer’s description of the product they want against Connection’s internal SKUs.
“Our customers don’t know about our internal SKUs,” Burns says. “They might describe the products one way, and we might describe them another way, and our AI is able to correlate them pretty effectively.”
The AI is set to be extra conservative, he adds, more conservative than a human would be. If anything is unclear, it defaults back to human review. The odds of hallucinations are also minimized by the fact that the AI is only working with the information presented to it and isn’t generating new content, just making a simple recommendation.
“So far, we’ve had zero instances where the AI recommended an order move downstream and the human disagreed,” Burns says. It’s more likely that the AI holds up an order for human review and the human says it’s okay to go forward. “We’re finding more reliability than we initially expected and are even considering how we can allow the AI to lighten up, to be a little less critical of the documentation.”
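Burns is describing a recommend-only pattern: the model extracts and classifies what’s in front of it, never creates order data, and anything unclear defaults to a person. As a rough sketch of that gate (not Fisent’s or Pega’s actual implementation; the field names, structure, and threshold are assumptions), the logic might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"            # safe to move downstream automatically
    HUMAN_REVIEW = "human_review"  # anything unclear falls back to a person

@dataclass
class Extraction:
    intended_vendor: str           # who the purchase order appears to be addressed to
    address_match: bool | None     # None means the model couldn't tell
    sku_match_confidence: float    # 0.0-1.0 score from description-to-SKU matching

def gate(po: Extraction, confidence_floor: float = 0.9) -> Decision:
    """Recommend-only gate: the model never edits or creates order data;
    it only classifies, and every uncertainty defaults to human review."""
    if po.intended_vendor.lower() != "connection":
        return Decision.HUMAN_REVIEW       # may have been sent to the wrong vendor
    if po.address_match is not True:       # unknown counts as a mismatch
        return Decision.HUMAN_REVIEW
    if po.sku_match_confidence < confidence_floor:
        return Decision.HUMAN_REVIEW       # deliberately conservative threshold
    return Decision.APPROVE

print(gate(Extraction("Connection", True, 0.97)))   # Decision.APPROVE
print(gate(Extraction("Connection", None, 0.97)))   # Decision.HUMAN_REVIEW
```

The salient design choice is that every unknown resolves to human review, which is also what makes it safe to later “lighten up” the threshold as confidence in the system grows.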
Connection next plans to deploy gen AI in a dozen other similar internal use cases, as well as to help with code generation, writing letters, and summarizing meetings. “The potential productivity enhancements are significant and we need to explore that,” Burns says.
But Connection isn’t working on customer-facing AI just yet given the additional risks. “Risk tolerance is really the order of the day when it comes to AI,” he says. “We recognize there’s tremendous potential, but our first priority is our customers, their data and security, and delivering outstanding outcomes. The technology will evolve over time, and we’ll evolve with it.”
Keeping humans in the loop
TaskUs, a business process outsourcer with about 50,000 employees, is likewise keeping gen AI within company walls, focusing on use cases where a human is in place to catch any problems.
“We don’t want the AI to go willy-nilly on its own,” says TaskUs CIO Chandra Venkataramani.
The company has built an internal platform called TaskGPT that helps its employees support customers, and has already seen a 15% to 35% improvement in efficiency. AI is also starting to be used for internal automation and other productivity benefits.
The Air Canada case, in which the airline’s chatbot promised a customer a discount that the company initially refused but was later forced to honor, is a cautionary example of why public-facing AI is so risky, says Venkataramani. At TaskUs, the tools are instead used to give employees suggestions and recommendations.
“That way, the teammate can control it,” he says. “They can say, ‘This doesn’t sound right. I’m not going to send it to my customer.’ The human intervention is so important.” So he’s pushing internal teams to use it more, but only to improve their efficiency. “We’re pushing our customers to adopt it, but we’re not using it recklessly,” he adds. “If we can get 20% improvement and be 100% safe, or a 30% or 40% improvement and not be safe, we’ll take the 20% improvement and safety. Safety and security is our No. 1 concern.”
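TaskUs hasn’t published TaskGPT’s internals, but the control Venkataramani describes has a simple shape: the model can only produce a suggestion, and the code path that actually reaches a customer is gated on explicit human sign-off. Here’s a minimal, hypothetical sketch of that pattern (the names and the stubbed model call are assumptions, not TaskGPT’s API):

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False  # nothing reaches a customer until a person flips this

def draft_reply(customer_message: str) -> Draft:
    # Stand-in for a real model call; hypothetical, not TaskGPT's actual API.
    return Draft(text=f"Suggested reply to {customer_message!r}: ...")

def send_to_customer(draft: Draft) -> None:
    # The send path itself enforces the sign-off, so a teammate can always
    # say "this doesn't sound right" and simply never approve the draft.
    if not draft.approved:
        raise PermissionError("draft has not been approved by a human agent")
    print("Sending:", draft.text)

draft = draft_reply("Where is my refund?")
# A teammate reviews the suggestion, edits it if needed, then explicitly approves:
draft.approved = True
send_to_customer(draft)
```

Putting the check inside the send function, rather than trusting callers to remember it, is what makes the human intervention structural rather than optional.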
In fact, many AI problems can be avoided simply by having human oversight. “Hallucinations happen,” says Christa Montagnino, VP of online operations at Champlain College. “AI is trained to please us and it’s not necessarily all accurate.” The college has been using gen AI to help instructional designers and subject matter experts create online courses. In the past, the process was cumbersome, she says: faculty members aren’t necessarily trained in course design, so they’re paired with instructional designers, and a single seven-week course took about 15 weeks to create. With gen AI, that time frame has been cut in half.
Still, the human element remains a critical part of the process. “We start with the generative AI now, and then we bring in the subject matter expert to work with the instructional designers,” she says. “They’re the editors of this information; they bring in what makes sense for the student and what other resources need to be included.”
Adding in the AI also reduces some of the routine administrative tasks and burdens, she says, enabling faculty to spend more time with students.
However, Company Nurse, which helps companies handle workplace injuries, learned the hard way what can happen when the human element is neglected. The company automated its QA process using AI, so that nurses who provided medical advice to employees of customer organizations got immediate, automated feedback about what they were doing wrong on these calls.
“We thought if we could give our agents more feedback on what they were doing wrong, they would make fewer mistakes,” says Henry Svendblad, the company’s CTO. Instead, the nurses started quitting. Turnover rates increased from the low teens to the high 30s. Some of that had to do with the start of the pandemic and the Great Resignation, but part of it was because the agents were getting so much negative feedback so quickly.
“We heard resoundingly from our agents that telling them every mistake they made on every interaction was not leading to positive job satisfaction,” he says. “We saw instances of sending new employees equipment and by the time they got it, they didn’t want the job. That never happened before.”
To solve the problem, Company Nurse brought humans back into the loop, hired a human development manager, and began looking at more of the positives in what the nurses were doing, not just the negatives. “And we definitely took the foot off the gas in terms of automating the QA,” he says.
Avoiding sensitive information
Champlain’s Montagnino says the college is willing to use gen AI to help develop course content or marketing materials because that doesn’t involve giving the AI access to sensitive information.
But that’s not the case when dealing with projects that involve student data, she says, so those types of initiatives will come later. “I feel like the best opportunities we have right now lie on the side of product development and attracting future students,” she adds.
Clinical trial company Fortrea, which recently spun off from LabCorp, is also careful to choose projects that offer the fewest privacy risks. “We have a tremendous opportunity to take clinical trials to the next level,” says CIO Alejandro Galindo. “We recently launched an ML and AI studio — an area we’re using to drive innovation.”
For example, Fortrea is deploying Microsoft’s Copilot assistant for its tech stack. “It’s starting to spread like wildfire because we’ve achieved some interesting results in the organization,” he says. “It’s an intelligence layer that we’re bringing to the organization.”
The company has already seen a 30% reduction in the time it takes to collect information for requests for proposals. “This has given us tremendous productivity,” he says. “And the quality of the product is substantially better than in the past.” That’s because the AI draws information from multiple siloed sources, he says. But, being a healthcare organization, Fortrea also must be extremely careful about the technology it deploys to avoid any compliance problems.
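Galindo doesn’t describe Copilot’s mechanics, but pulling RFP answers from multiple siloed sources generally means some form of retrieval: gather the relevant snippets from each repository, then hand the assistant one consolidated context. A deliberately naive, hypothetical sketch (a real system would use search indexes or embeddings, plus per-source access controls; the silo names and contents are invented):

```python
# Hypothetical silos; the names and contents are invented for illustration.
SOURCES = {
    "proposals_archive": ["Past RFP answer: uptime SLA is 99.9% ..."],
    "contracts_db":      ["Standard data-processing terms ..."],
    "sharepoint_wiki":   ["Clinical operations overview ..."],
}

def collect_context(query: str) -> str:
    """Naive keyword retrieval across silos, tagging each hit with its source."""
    hits = []
    for source, documents in SOURCES.items():
        for doc in documents:
            if any(word in doc.lower() for word in query.lower().split()):
                hits.append(f"[{source}] {doc}")
    return "\n".join(hits)

# One consolidated context block to feed into a drafting assistant:
print(collect_context("uptime SLA"))
```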
“We have to balance the speed of innovation with compliance and safety,” he says. “We’re fast followers.” For example, clinical trials are very paper intensive, he says. When a clinical research associate goes to a site, there’s a lot of information that could be collected. But the company is being very selective in what information the AI will handle first.
“We need to get clearance from the privacy officer that whatever we’re building is going to be in compliance,” he says. “And my chief security officer has a very strong voice in what we choose.”
For example, technology that can help scan documents, with filters to ensure patient information isn’t accidentally exposed, might be deployed in the future. But today, when it comes to clinical trial site visits, the company is focusing on non-sensitive types of information first, such as the physical equipment being used.
“We can take a picture of the refrigerator and scan for when the maintenance was done, the temperature it’s set at,” he says. “We want to make sure all the right conditions in the facility are in place.”
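Fortrea hasn’t detailed how those filters work, but a common way to keep sensitive data out of such a pipeline is an allowlist: only pre-approved, non-sensitive fields survive the extraction step, and everything else is dropped before it’s stored or sent anywhere. A sketch, with invented field names:

```python
# Only fields on an explicit allowlist ever leave the extraction step;
# anything else, including anything patient-related, is dropped.
ALLOWED_FIELDS = {"equipment_type", "last_maintenance_date", "set_temperature_c"}

def filter_extraction(raw: dict[str, str]) -> dict[str, str]:
    """Keep only pre-approved, non-sensitive fields from a document or photo scan."""
    return {key: value for key, value in raw.items() if key in ALLOWED_FIELDS}

# Suppose a (hypothetical) vision model returned this from a photo of the fridge:
raw = {
    "equipment_type": "refrigerator",
    "last_maintenance_date": "2024-03-12",
    "set_temperature_c": "4",
    "patient_name": "J. Doe",  # must never survive the filter
}
print(filter_extraction(raw))
# -> {'equipment_type': 'refrigerator', 'last_maintenance_date': '2024-03-12', 'set_temperature_c': '4'}
```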
Taking the time for groundwork
Besides public embarrassment, loss of customers or employees, or legal and compliance liabilities, there are also other, more technical risks of moving too fast with gen AI.
For example, companies that don’t do proper groundwork before rolling AI out might not have the right data foundation or proper guardrails in place, or they might move too quickly to put all their faith in a single vendor.
“There’s a lot of risk that organizations will lock themselves into with a multi-year spend or commitment, and it’ll turn out in a year or two that there’s a cheaper and better way to do things,” says David Guarrera, generative AI lead at EY Americas. And there are organizations that jump into AI without thinking about their enterprise-wide technology strategy.
“What’s happening in many places is that organizations are spinning up tens or hundreds of prototypes,” he says. “They might have a contract analyzer made by the tech shop, and a separate contract analyzer made by the CFO’s office, and they might not even know about each other. We might have a plethora of prototypes being spun up with nowhere to go and so they die.”
Then there’s the issue of wasted money. “Say an organization has FOMO and buys a bunch of GPUs without asking if they’re really needed,” he says. “There’s a risk that investing here might take away from what you actually need in the data space. Maybe what you actually need is more data governance or data cleaning.”
The rush to launch pilots and make hasty spending decisions is driven by everyone panicking and wanting to get on top of gen AI as quickly as possible. “But there are ways to approach this technology to minimize the regrets going forward,” he adds.
“Move fast and break things” might be a fine slogan for a tiny startup, but it doesn’t work for larger organizations. “You don’t want to put billions of dollars and your markets at risk,” Guarrera says.