AI adoption is on the rise. According to a recent McKinsey survey, 55% of companies use artificial intelligence in at least one function, and 27% attribute at least 5% of earnings before interest and taxes to AI, much of that in the form of cost savings.
Because AI promises to dramatically transform nearly every industry it touches, it’s no surprise that vendors and enterprises are looking for opportunities to deploy it everywhere they can. But not every project can benefit from AI, and attempting to apply AI inappropriately can not only cost time and money but also sour employees, customers, and corporate leaders on future AI projects.
The key factors for determining whether a project is suitable for AI are business value, availability of training data, and cultural readiness for change. Here’s a look at how to ensure those criteria are in line for your proposed AI project before your foray into artificial intelligence becomes a sunk cost.
Start with the simplest solution possible
Data scientists in particular gravitate toward an AI-first approach, says Zack Fragoso, data science and AI manager at pizza chain Domino’s, which has more than 18,000 locations in over 90 countries around the world. But you can’t apply AI everywhere.
Despite operating in a very traditional line of business, Domino’s has been embracing change, especially during the pandemic. Customers now have 13 digital ways to order pizzas, and the company generated more than 70% of sales through digital ordering channels in 2020. That has opened up a lot of opportunity for making good on the promise of AI.
The key for Domino’s in applying AI, Fragoso says, has been taking a simple approach. “At the end of the day, the simple solution runs faster, performs better, and we can explain it to our business partners,” he says. “Explainability is a big part of it — the more people understand the tools and methods we use, the easier it is to gain adoption.”
The approach itself is simple: If there’s a business problem that needs solving, Domino’s looks at the simplest, most traditional solution, and then, “if we go up from there, there needs to be a value-add in the performance of the model,” Fragoso says.
For example, predicting how long it takes to cook a pizza and put it in a box is simple. “We pull that right from our operations research. You can plug in the oven times,” he says. But there are some problems that can only be solved with AI, he adds, such as those requiring image recognition or natural language processing.
For example, last year, Domino’s ran a loyalty program that rewarded customers for eating pizza — any pizza, from any pizza maker. “We built a pizza classifier using millions of pictures of different kinds of pizza and put it into an app,” Fragoso says.
That project offered two types of business value. First, it enhanced the customer experience, he says. Second, it created a collection of pizza images that the company then used to detect pizza quality and temperature. “It was a really great full-circle AI project,” he says.
A more practical AI project Domino’s undertook was a predictor aimed at improving the accuracy of its pizza tracker, as customers want to know exactly when to come to the store to pick up their food, or when to expect their delivery to arrive, Fragoso says. Adding machine learning to the traditional if-then coding of Domino’s pizza tracker resulted in a 100% increase in accuracy, he says.
In building the model, Domino’s stuck with its simplest-first principle. “The first iteration was a simple regression model,” he says. “That got us close. Then a decision tree model, where we could look at more facets. Then we actually moved to a neural net because we could capture some of the same variables as in the decision tree but the neural net produces the answer faster. We want our customer experience on the website to be really fast.”
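To make that progression concrete, here is a minimal sketch, not Domino’s actual code, of a simplest-first comparison on invented order-timing data: start with a linear regression, and promote a decision tree or a small neural network only if the extra complexity clearly pays off. The features, data, and model settings are all assumptions for illustration.

```python
# Hypothetical "simplest model first" workflow on invented order-timing data.
# Nothing here reflects Domino's actual tracker model or features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5_000
# Invented features: items in the order, orders already in the queue, hour of day.
X = np.column_stack([
    rng.integers(1, 6, n),
    rng.integers(0, 20, n),
    rng.integers(10, 23, n),
])
# Synthetic "minutes until the order is ready" target with noise.
y = 8 + 2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "neural net": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: mean absolute error = {mae:.2f} minutes")

# Promote a more complex model only if it adds clear value in accuracy
# or prediction latency over the simpler one.
```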
There is a place for machine learning, says Sanjay Srivastava, chief digital officer at Genpact, particularly when a company is looking to build processes that are continually improving based on experience. But sometimes all that’s needed is a simple correlation, which can be obtained from basic statistical modeling.
“Ten-year-old practices around random forests and other statistical tool kits can get you the answer much faster and much cheaper than building a whole MLOps team around it,” Srivastava says. “You have to know when to fall back to existing techniques that are much simpler and much more effective.”
One common area in which AI is often pitched as a solution but is usually overkill is chatbots, he says: “In some scenarios, it makes sense. But in 90% of scenarios you know the questions that are going to be asked because you can look at the questions that have been asked in the last three years and you know the answer to every question. Turns out, 90% of chatbots can get away with simple question-and-answer pairs.”
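A minimal sketch of the question-and-answer-pair approach Srivastava describes, with invented questions and answers and a naive string-similarity matcher standing in for whatever lookup a real deployment would use:

```python
# Simple question-and-answer-pair "chatbot"; no machine learning involved.
# The questions, answers, and matching rule are invented for illustration.
from difflib import SequenceMatcher

QA_PAIRS = {
    "what are your opening hours": "We are open 9am to 6pm, Monday through Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the canned answer whose stored question best matches the input."""
    q = question.lower().strip("?! .")
    best_answer, best_score = None, 0.0
    for stored_q, stored_a in QA_PAIRS.items():
        score = SequenceMatcher(None, q, stored_q).ratio()
        if score > best_score:
            best_answer, best_score = stored_a, score
    if best_score >= threshold:
        return best_answer
    return "Let me connect you with a person who can help."

print(answer("Where is my order?"))        # matched from the stored pairs
print(answer("Can you write me a poem?"))  # unmatched, falls back to a human
```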
Historic data: AI’s key to predicting future results
Any finite set of data can be fitted to a curve. For example, you can take previous years’ winning lottery numbers and come up with a model that would have predicted them all perfectly. But that model will be no better than chance at predicting future winning numbers, because the underlying process is completely random.
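A short illustration of that trap, using made-up numbers: a high-degree polynomial can fit ten past draws almost exactly, yet its extrapolation to the next draw carries no information because there is no signal to learn.

```python
# Overfitting a curve to random "lottery" numbers; purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(10)
past_draws = rng.integers(1, 60, size=10)   # ten past winning numbers (random)

# A degree-9 polynomial passes through all ten points almost exactly...
coeffs = np.polyfit(weeks, past_draws, deg=9)
in_sample_error = np.max(np.abs(np.polyval(coeffs, weeks) - past_draws))
print("max in-sample error:", in_sample_error)   # essentially zero

# ...but its "prediction" for week 10 is meaningless, because the
# underlying process is random and carries nothing to extrapolate.
print("predicted next draw:", np.polyval(coeffs, 10))
print("actual next draw:", rng.integers(1, 60))
```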
The COVID-19 pandemic has been a prime example of how this happens in real life. There was no way to predict where lockdowns were going to lead to factory shutdowns, for example. As a result, fewer companies saw revenue gains from AI in many areas, according to McKinsey’s state of AI survey.
For example, 73% of respondents saw revenue increases in strategy and corporate finance last year, while only 67% did so this year. The difference was even more stark in supply chain management. Last year, 72% saw revenue increases in this area but only 54% did this year.
“The fundamental characteristic of AI or machine learning is that you’re using history to inform,” says Donncha Carroll, partner in the revenue growth practice at Axiom Consulting Partners. “You are wedded, chained, handcuffed by history. AI is good in circumstances where history is likely to repeat itself — and you’re okay with history repeating itself.”
For example, he says, some of his clients have tried to use AI to predict future revenue. But often, revenue is influenced by factors that can’t be predicted, that can’t be controlled, and that the company doesn’t have any data for. And if some of those factors have an outsize impact on results, they can throw off the entire model.
“Then it makes no sense to choose AI,” he says. “Are you going to invest hundreds of thousands of dollars in a solution that can be immediately made irrelevant by a change in one variable?”
AI can still have a role here, he says, in helping to model various scenarios, or in surfacing insights that might not be otherwise apparent. “Your likelihood of success goes up if your focus is more narrow.”
AI will also fall short if the very presence of the AI changes the behavior of the system. For example, if AI is used to filter out hateful speech, people quickly learn what patterns the AI looks for and word things so that they get through the filters.
“The best minds in the world have been trying to solve these problems and they have not been successful,” Carroll says.
Kearney partner Bharath Thota once worked with a $30-billion-plus global consumer products and goods conglomerate. The CFO leadership team wanted better visibility into the conglomerate’s financial metrics so they could see whether growth was trending up or down. Under the existing process, they received PDF reports 30 days after each reporting period closed.
The data science team applied AI to forecast what the numbers would look like. “They had good intent,” says Thota. “They wanted to provide the leadership with a futuristic view.”
The mistake they made was in the financial data they fed into the algorithm. The analysts supplying that data had to make a lot of assumptions, so the data set wound up containing many individual biases.
“The leadership was excited,” says Thota. “They had something forward-facing, not rearview-facing. But when the quarter ended, and they looked back at those predictions, they were completely off.”
The entire project took months, says Thota. “They had to figure out how to build this thing, do the architecture, research AI platforms, get everything to work together.”
When a project like that fails, people lose interest and confidence in AI, he says. For this particular company, the solution was simply to build the CFO leadership team a financial dashboard that gave them the metrics they needed, when they needed them.
Eventually, Thota says, some AI was used as well, in the form of natural language generation, to automatically surface key insights from the data for the executives in plain English.
“It was a visibility problem,” he says. “And there was a simple solution to provide that visibility.”
The data challenge
Most AI projects require data. Good data, relevant data, data that’s properly labeled and without biases that would skew the results.
For example, a company looking to keep cats out of a hen house might choose to install a camera and image recognition technology to spot cats coming in. But success hinges on having an adequate training set.
“You’ll need to have lots of pictures, and those pictures will need to have labels on them about whether they have cats in them or not,” says Gartner analyst Whit Andrews, adding that collecting this data is time consuming and expensive. And once it’s all gathered, will the company be able to reuse the same data set for other projects?
But what if it turns out that the business actually needs to know how many cats are coming into the hen house? Then that original data set of pictures will need to be relabeled with the number of cats in each picture as well.
“Maybe one cat is not that expensive, but a herd of cats is a problem,” Andrews says.
Plus, if only a small percentage of images contain multiple cats, then getting an accurate model will be substantially more difficult.
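A small illustration of both problems, with invented label records: the original yes/no labels cannot answer the new counting question without re-annotation, and once counts exist, the multi-cat cases the business cares about are the rarest ones in the data.

```python
# Invented label records showing why labels gathered for one question
# rarely answer the next one, and why rare cases are hard to learn.
from collections import Counter

labels_v1 = [  # original binary labels: "is a cat present?"
    {"image": "hen_house_001.jpg", "has_cat": True},
    {"image": "hen_house_002.jpg", "has_cat": False},
    {"image": "hen_house_003.jpg", "has_cat": True},
    {"image": "hen_house_004.jpg", "has_cat": False},
]

# New business question: "how many cats?" Every positive image must be re-annotated.
to_relabel = [r["image"] for r in labels_v1 if r["has_cat"]]
print(f"{len(to_relabel)} of {len(labels_v1)} images need new count labels")

# After (hypothetical) re-annotation, the count distribution is heavily skewed:
labels_v2 = {"hen_house_001.jpg": 1, "hen_house_003.jpg": 4}
print(Counter(labels_v2.values()))  # multi-cat images are rare, so the model
                                    # has little to learn the "herd" case from
```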
This situation comes up frequently in marketing applications, when companies try to segment the market to the point that the data sets become infinitesimally small.
“Almost every company I know of uses segmentation for customer targeting,” says Anand Rao, partner and global AI leader at PricewaterhouseCoopers.
If they collect data expecting it to be used for one purpose, and wind up using it for another, the data sets might not meet the new requirements.
For example, if the data collection is set up so that there’s a balance of data points from each region of the United States, but the business question winds up being about the needs of a very narrow demographic segment, all the inferences will be useless. Say the company is interested in the purchasing habits of Asian-American women in a particular age range, and there are only a couple of such respondents in the sample.
“Be very clear about what decision you want to make with your segmentation,” Rao says. “Try to make sure that the sampling you’re doing is both representative, but also it captures your questions.”
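One way to catch that mismatch before any modeling is simply to count how many respondents fall into the segment the question is actually about. The survey records and target segment below are invented for illustration.

```python
# Check whether the sample can support the question before building anything.
# All survey records and the target segment are invented for illustration.
import pandas as pd

survey = pd.DataFrame({
    "region":    ["Northeast", "South", "West", "Midwest", "West", "South"],
    "gender":    ["F", "M", "F", "F", "M", "F"],
    "ethnicity": ["Asian", "White", "Asian", "Black", "Hispanic", "White"],
    "age":       [34, 51, 29, 45, 38, 62],
})

# The business question concerns Asian-American women aged 25 to 44.
segment = survey[(survey.gender == "F")
                 & (survey.ethnicity == "Asian")
                 & survey.age.between(25, 44)]

print(f"respondents in target segment: {len(segment)} of {len(survey)}")
# With only a couple of respondents, any inference about this segment is noise,
# however balanced the overall sample looks by region.
```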
The sample problem occurs in any system trying to predict rare events. For example, if a company is looking for examples of fraudulent behavior in a data set of a million transactions, there may be only a handful of known fraudulent ones, plus an equal or larger number of fraudulent transactions that have been missed.
“That’s not very useful for inferencing,” Rao says, adding that this happens a lot with business process automation when a company has many people doing particular tasks each day, but doesn’t capture data about how those tasks are being done, or doesn’t capture the right data necessary to train an AI on how to do it.
“In those cases, you should go and build a system to capture that information,” he says. “Then, a few months later, come back and build the model.”
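To put rough numbers on the rare-event problem in the fraud example above (the rates here are invented), a model that never flags anything can look highly accurate while catching no fraud at all:

```python
# Why rare events make naive models look deceptively good; all numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
fraud_rate = 0.0002                  # a couple hundred known fraud cases per million
is_fraud = rng.random(n) < fraud_rate

# A "model" that simply predicts "not fraud" for every transaction...
predictions = np.zeros(n, dtype=bool)

accuracy = np.mean(predictions == is_fraud)
print(f"known fraud cases: {is_fraud.sum()} of {n:,}")
print(f"accuracy of always predicting 'not fraud': {accuracy:.4%}")
print("fraud caught: 0%")            # it never catches a single fraudulent transaction

# With so few positive examples, and an unknown number of missed ones,
# there is little reliable signal for a model to learn from.
```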
And for projects that don’t need data, AI is not the right way to go. For example, some business processes, such as insurance and underwriting, are rules based, Rao says. “You can build a rules-based system by interviewing experts and pulling together traditional formulas. But if you can do it with rules and scripts, you don’t need AI. It would be overkill.”
Using AI for such a project can take more time, and the accuracy might be no better, or only slightly better; the improved performance might not even be needed.
“So you won’t have the ROI because you’re spending time on a problem that you could have already solved,” he says.
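As a rough sketch of the rules-and-scripts alternative Rao describes, the example below encodes expert-style rules directly; every rule and threshold is invented rather than drawn from any real underwriting manual.

```python
# Minimal rules-based underwriting sketch; all rules and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    prior_claims: int
    annual_mileage: int

def quote_premium(a: Applicant, base: float = 500.0) -> float:
    """Apply expert-authored rules to compute a premium; no model training needed."""
    premium = base
    if a.age < 25:
        premium *= 1.5                         # young-driver surcharge
    if a.prior_claims > 0:
        premium *= 1.0 + 0.2 * a.prior_claims  # loading per prior claim
    if a.annual_mileage > 20_000:
        premium *= 1.1                         # high-mileage loading
    return round(premium, 2)

print(quote_premium(Applicant(age=22, prior_claims=1, annual_mileage=25_000)))
```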
A $300 million AI mistake
In November, real estate company Zillow announced that it was writing down $304 million worth of homes that it purchased based on the recommendation of its AI-powered Zillow Offers service.
The company may also need to write down another $240 million to $265 million next quarter, in addition to laying off a quarter of its workforce.
“In our short tenure operating Zillow Offers, we’ve experienced a series of extraordinary events: a global pandemic, a temporary freezing of the housing market, and then a supply-demand imbalance that led to a rise in home prices at a rate that was without precedent,” Zillow CEO Rich Barton said in a conference call with investors. “We have been unable to accurately forecast future home prices. … We could blame this outsized volatility on exogenous black swan events, tweak our models based on what we’ve learned and press on. But based on our experience to date, it would be naïve to assume unpredictable price forecasting and disruption events will not happen in the future.”
AI learns from the past, says Tim Fountaine, senior partner at McKinsey. “If something hasn’t happened in the past, then it’s impossible for an algorithm to predict it.”
And AIs don’t have common sense, he adds. “An AI algorithm designed to predict the output of a factory that has never seen a fire before won’t predict that the output would plummet if there’s a fire.”
Predicting property prices is an interesting use of AI, he says. “But you can see everyone becoming a little gun-shy of that type of application.”