JPMorgan Chase president Daniel Pinto says the bank expects to see up to $2 billion in value from its AI use cases, up from a $1.5 billion estimate in May. Speaking at the Barclays Global Financial Services conference in September, he said gen AI will have a big impact on improving processes and efficiencies. The company has already rolled out a gen AI assistant and is looking to use AI and LLMs to optimize every process.
“We’re doing two things,” he says. “One is going through the big areas where we have operational services and look at every process to be optimized using artificial intelligence and large language models. And the second is deploying what we call LLM Suite to almost every employee. At the moment it’s being deployed to 140,000 employees to help them do their jobs.”
Operational efficiencies, he says, will be the biggest impact of gen AI in the short to medium term.
He’s not the only one who’s bullish on gen AI. According to a new IDC report, 98% of business leaders view AI as a priority for their organization, and the research firm expects AI to add $20 trillion to the global economy through 2030. In August, OpenAI said ChatGPT now has more than 200 million weekly users, double what it had last November, and that 92% of Fortune 500 companies use its products. Use of its API has also doubled since GPT-4o mini was released in July.
According to research by Coleman Parkes Research conducted on behalf of Riverbed and released this month, 59% of decision makers at large companies say AI projects have met their expectations, and 18% have exceeded them.
“AI has moved out of the IT function and is being pushed out more widely in the organization,” says Ian Beston, director at Coleman Parkes Research. “Generally, there’s optimism and a positive mindset when heading into AI.” But a substantial 23% of respondents say AI has underperformed expectations, as models can prove unreliable and projects fail to scale. So for all its vaunted benefits to efficiency, gen AI doesn’t always reduce workloads. Sometimes it creates more work than it saves due to legal and compliance complications, hallucinations, and other problems.
More time saved, more wasted time
When gen AI helps employees do their jobs faster, companies assume the free time will be used for higher-value activities. That’s not necessarily the case, says Christina Janzer, SVP of research and analytics at Slack. According to the company’s latest global survey of desk workers, employees spend 37% more time on routine administrative tasks instead. “There’s a lot of potential, though,” says Janzer. “Even though it’s still early, and we’re still figuring it out, we’re seeing some incredible results for productivity and in the way it’s improving the work-life balance and passion for the job.”
The problem, she says, is that people are programmed to fill time with certain tasks, so when AI frees up time, people fill it up with more administrative work. “There’s a never-ending list of busywork that has to get done,” she says.
The solution is to rethink how companies give employees incentives. “Managers tend to incentivize activity metrics and measure inputs versus outputs,” she adds. “Instead of looking at the value the employee brings to the company, they look at the number of emails they send out, or the hours they spend at the office.”
Out of control inboxes
All this increased busywork creates more work for other employees as well, says Janzer. If gen AI can help an employee craft a well-written email 10 times faster, they might respond to 10 times as many emails as they did before — emails someone else will now have to read and maybe respond to as well.
Or instead of writing one article for the company knowledge base on a topic that matters most to them, they might submit a dozen articles, on less worthwhile topics. Employees who need to submit reports to their managers might be able to get those reports done faster, and increase the number and length of those reports.
“These technologies can produce more content that everyone needs to consume and be aware of,” says Anita Woolley, professor at Carnegie Mellon University. There’s already more low-quality AI content flooding search results, and this can hurt employees looking for information both on the public web and in enterprise knowledge repositories. Finding a result that’s actually useful can be like looking for a needle in a haystack. “The information volume piece is definitely one of the areas where productivity could go down,” says Woolley.
Attention fragmentation
Another potential negative impact on employee productivity from gen AI is attention fragmentation, says CMU’s Woolley. “The AI can go to meetings for you and take notes so you can be in four places at once,” she says. “And some people try to do that. But there’s only so many projects we can meaningfully contribute to, and conversations we can be part of.”
Using AI to help juggle more tasks just contributes to the sense that there’s more work to do, she says. “And we’re at risk of being burned out.”
In addition, while gen AI can help us manage our time and workflow, it can also surface more issues that need urgent attention. “It can trigger alerts so you might be pulled away from what you’re doing to attend to other things,” she says.
With their attention split too many ways, people might start making bad decisions, she adds. “It gets beyond what we can manage.”
Some companies put limits in place on how many projects employees can be involved with at once. “Everyone is concerned about their career and trying to do more,” she says. “Nobody is really sure what’s really going to drive their evaluation, and that’s where people try to take on more.”
The solution, she says, is for companies to set clear objectives and performance criteria, and avoid an explosion in projects, initiatives, and teams that don’t add value but create work. “Especially in a distributed environment, it’s more important than ever to get away from having meetings just to see that you’re working,” Woolley says.
The high price of FOMO
New AI tools are coming out seemingly every week, each one promising to revolutionize some area of work. In September, for example, OpenAI released a new model that claims to have unprecedented reasoning abilities in math and science. There were new releases for AI video and image generation, too. Workday announced new AI agents to transform HR and finance processes, and Google issued more AI-powered advertising and marketing tools.
There are so many, and each one has a learning curve and takes time before it actually starts to deliver value. With too many tools, you’re always playing catch-up.
Woolley recommends that companies consolidate around the minimum number of tools they need to get things done, and set up a sandbox process for testing and evaluating new tools that doesn’t get in the way of people doing actual work. But it’s also nice for employees to have some personal autonomy.
“If there are tools that are vetted, safe, and don’t pose security risks, and I can play around with them at my discretion, and if it helps me do my job better — great,” Woolley says. “But you have to think ahead to what the consequences will be.”
Hallucinations and inaccuracies
According to the Slack survey, only 7% of desk workers say AI outputs are totally trustworthy for work-related tasks, and 35% say AI results are only slightly or not at all trustworthy. Other research supports this. For example, in a recent paper from researchers at Cornell, the universities of Washington and Waterloo, and the nonprofit research institute AI2, even the best-performing models were able to offer completely accurate responses only a third of the time.
That means AI output will require additional oversight, review, editing, correction, or rework. If the employee using the AI doesn’t notice the problem, the job of cleaning up the mess falls to other employees. And if the AI is allowed to work autonomously, as when a customer-service chatbot answers questions on a company website, that could create significant problems down the line once bad advice starts coming to light.
Steve Ross, director of cybersecurity for the Americas at S-RM Intelligence and Risk Consulting, says gen AI can reduce a day’s worth of research to a single hour, but not without a catch.
“It can give me the top six oil and gas companies in a particular metro region, and the CEO, CFO, and CTO of each organization, and their background,” he says. “The AI can go deeper than a Google search.” But when he logged the information into Salesforce, he found that one of the outputs had completely fabricated the people’s names and credentials. “Now we have to go back and audit everything,” he says.
Fortunately, this problem was caught in time. “It all goes back to having a mindful and strategic approach as we go to roll these things out,” he says.
Too much data science for too little gain
There are so many clients who just want to do AI, any AI, and haven’t carefully thought through the use cases. A company might wind up with an AI that saves a couple of workers a couple of hours, but creates a huge amount of work for a team of data scientists who have to collect and prep the training data, create and test the models, integrate them into the enterprise workflow, and then monitor performance to make sure the AI continues to work well.
According to ZipRecruiter, the average starting salary for an entry-level US data scientist in October was $165,000 per year. “Hold off,” says Ross. “Don’t hire data scientists just to write some emails. First, let’s figure out your use case.” And without a clear one, there’s a good chance the AI project won’t even come out of the proof-of-concept stage, according to Gartner.
At least 30% of gen AI projects will be abandoned by the end of 2025, the research firm predicts, due to unclear business value — as well as poor data quality, inadequate risk controls, and escalating costs. Customizing AI models can cost more than $5 million, and building a custom model from scratch can cost a company up to $20 million.
Expectations of immediacy
For many companies, even when gen AI does create more work, the pain is worth it. It’s just part of the learning process.
Champlain College has been using gen AI to help instructional designers and subject matter experts create online courses, and, though the AI cut the time it took to create a course in half overall, it wasn’t always smooth sailing.
“The content that was generated, with uncanny images and things like that, how is this going to be seen by students and faculty?” asks Christa Montagnino, VP of the college’s online operations. “You need people who are trained to see that. You have to take the content, read it, and understand it, and add that human element.”
In fact, the AI didn’t save any time initially, she says. Not only did people have to learn how to fix AI outputs, but also how to engineer prompts to make those outputs better in the first place.
“We had to figure this out and get our team trained,” she says. “And they get better, and it starts to come naturally to them. But it takes months or years for some to learn how to use this well.”
Champlain College started looking at gen AI in mid-2023. Before AI, it took 15 weeks to create a course, and it still took 15 weeks immediately after the AI was rolled out. The process improved, but it took a full year to get down to seven weeks.
“Some people got there sooner than others, though,” she adds.
Similarly, higher education marketing company Education Dynamics is using gen AI to help with marketing campaigns. And for some tasks, there isn’t really a big productivity boost, says Sarah Russell, the company’s VP of marketing.
“From a creative editing and revision standpoint, we’ve really replaced any time savings from the initial creation and moved that into editing and revisions,” she says. “We want to avoid any output that sounds like it’s AI-generated, devoid of personality, or sounds overwrought. For us, it’s less of a time savings and more of a shift in where you’re spending the time.”
But adopting the technology is helping the company move forward, she says.
“We’re dedicated to being industry leaders in a really dynamic space,” she says. “And even if today it doesn’t really save us time, there will be a point where it’s necessary — and where others would just be starting.”
When it comes to gen AI, there’s a gap between what executives expect it to do and what the actual experiences of employees are, says Ashok Krish, head of advisory and consulting for AI at Tata Consultancy Services. After all, today’s generative AI tools are general-purpose, and in their early stages.
“What’s available today barely scratches the surface of what generative AI will do to transform knowledge work in the near future,” he says. “This is a necessary stage of adoption we all have to go through. It’s like the early stages of the internet where only a small group of engineers and tech enthusiasts knew how to get value out of it.”
So in the short term, employees will have to deal with getting used to a new, limited technology, and companies will have to deal with uncertain ROI. “Because if they don’t, they’ll be left behind when AI inevitably transforms all types of work in the coming years,” he says.
Still, there are some things companies can do to speed things up.
“We’re seeing that the most productivity increases and ROI from generative AI come from highly targeted, industry-specific applications,” he says. It also helps, he adds, when companies get more employees involved and give them access to AI tools so they can develop their own ways to transform their jobs.