Generative AI has been adopted faster and more widely than just about any technology before it, and many companies are already seeing ROI and scaling early use cases into broad deployment.
Vendors are adding gen AI across the board to enterprise software products, and AI developers haven’t been idle this year either. We’ve also seen the emergence of agentic AI, multi-modal AI, reasoning AI, and open-source AI projects that rival those of the biggest commercial vendors.
According to a September Bank of America survey of global research analysts and strategists, 2024 was the year of determining ROI, and 2025 will be the year of enterprise AI adoption.
“Over the next five to 10 years, BofA Global Research expects gen AI to catalyze an evolution in corporate efficiency and productivity that may transform the global economy, as well as our lives,” says Vanessa Cook, content strategist for Bank of America Institute.
Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,” says Andrew Rabinovich, head of AI and ML at Upwork. LLMs aren’t just expensive, they’re also very broad, and not always relevant to specific industries, he says.
“Smaller models, on the other hand, are more tailored, allowing businesses to create AI systems that are precise, efficient, robust, and built around their unique needs,” he adds. Plus, they can be more easily trained on a company’s own data, so Upwork is starting to embrace this shift, training its own small language models on more than 20 years of interactions and behaviors on its platform. “Our custom models are already starting to power experiences that aid freelancers in creating better proposals, or businesses in evaluating candidates,” he says.
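For teams weighing a similar move, the mechanics are fairly standard: take a small open-weight model and continue training it on domain text. The sketch below assumes a Hugging Face-style workflow; the base model, the corpus file, and the hyperparameters are illustrative placeholders, not details of Upwork’s system.

```python
# Minimal sketch: adapting a small open-weight language model to domain text.
# The model name, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "microsoft/phi-2"  # stand-in for any small open model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Domain corpus: e.g., historical proposals or support transcripts, one document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because the base model is small, a run like this can often fit on a single GPU, which is a large part of the cost argument for going custom.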
Small language models are also better for edge and mobile deployments, as with Apple’s recent mobile AI announcements. Anshu Bhardwaj, SVP and COO at Walmart Global Technology, says that consumers aren’t the only ones who stand to benefit from mobile AI.
“Enterprises, especially those with large employee and customer bases, will set the standard for on-device AI adoption,” she says. “And we’re likely to see an increase of tech providers keeping large enterprises top of mind when developing the on-device technologies.”
AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much the way a person would. The company says the models can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, on a qualifying exam for the International Mathematics Olympiad, the previous best model, GPT-4o, solved only 13% of the problems, while the new reasoning model solved 83%.
“It’s extremely good at reasoning through logic-type problems,” says Sheldon Monteiro, chief product officer at Publicis Sapient. That means companies can use it on tough coding problems or large-scale project planning where risks have to be weighed against each other.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,” he adds. “I’m not suggesting that all of this will happen in 2025, but it’s the long-term direction.”
According to Gartner’s most recent hype cycle for AI, artificial general intelligence is still more than a decade away.
Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,” he says. “In these use cases, we have enough reference implementations to point to and say, ‘There’s value to be had here.’”
He expects the same to happen in all areas of software development, starting with user requirements research through project management and all the way to testing and quality assurance. “We’ve seen so many reference implementations, and we’ve done so many reference implementations, that we’re going to see massive adoption.”
The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de facto standard for how modern software development works,” says Monteiro. But agile is organized around human limitations: not just on how fast we can code, but on how teams are organized and managed, and how dependencies are scheduled.
Today, gen AI is an adjunct, used to boost productivity of individual team members. But the entire process will need to be reinvented in order to make full use of the technology, says Monteiro. “We have to look at how we interact with colleagues and how we interact with AI,” he adds. “There’s too much attention on AI for code development, which is actually just a fraction of the whole software development process.”
Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026, meaning they’ll have a little over a year to put systems in place to track the provenance of their training data.
“As a practical matter, a lot of people do have a nexus in California, particularly in AI,” says Vivek Mohan, co-chair of the AI practice at law firm Gibson, Dunn & Crutcher LLP. “Many of the world’s leading technology companies are headquartered here, and many of them make their tools available here,” he says. But there are already many other regulations on the books, both in the US and abroad, that touch on issues like data privacy and algorithmic decision making that would also apply to gen AI.
Take, for example, the use of AI in deciding whether to approve a loan or a medical procedure, pay an insurance claim, or make an employment recommendation. “That’s an area where there’s a reasonably broad consensus that this is something we should think critically about,” says Mohan. “Nobody wants to be hired or fired by a machine that has no accountability. That’s one use case you probably want to run by your lawyers.”
There are also regulations on the use of deepfakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, requires compliance starting in mid-2026, so, again, 2025 is the year when companies will need to get ready.
“There’s a high probability that the EU AI Act will lead to more regulations in other parts of the world,” says Gartner’s Chandrasekaran. “It’s a step forward in terms of governance, trying to make sure AI is being used in a socially beneficial way.”
AI will become accessible and ubiquitous
When the internet first arrived, early adopters needed to learn HTML if they wanted to have a website, recalls Rakesh Malhotra, principal at Ernst & Young. Users needed modems and special software and accounts with internet providers. “Now you just type in the word you’re looking for,” he says. With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
Today, AI agents are relatively expensive, and inference costs can add up quickly for companies looking to deploy massive systems. “But that’s going to shift,” he says. “And as this gets less expensive, the use cases will explode.”
The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations. Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction. We’re doing this work for our clients now — a less advanced version of it, but next year it becomes a very normal thing.”
And it’s not just keeping up with regulatory changes. Say a vendor releases a new software product. Enterprise customers need to be sure it complies with their requirements. That could happen in an automated way, with the vendor’s agent talking to the customer’s agent. “Today this happens with meetings and reports,” says Malhotra. “But soon it’s all going to happen digitally once we get past some of this newness.”
Soon, showing up to a meeting without an AI assistant will be like an accountant trying to do their work without Excel, he adds. “If you’re not using the proper tools, that’s your first indication you aren’t the right person for the job.”
It’s still early days for AI agents, says Carmen Fontana, IEEE member, and cloud and emerging tech practice lead at Augment Therapy, a digital health company. “But I’ve found them immensely useful in trimming down busy work.” The next step for agents, she says, is pulling together communications from all the different channels, including email, chat, texts, social media, and more.
“Making better spreadsheets doesn’t make for great headlines, but the reality is that productivity gains from workplace AI agents can have a bigger impact than some of the more headline-grabbing AI applications,” she says.
Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
First, an agent has to be able to recognize whether it’s capable of carrying out a task, and whether a task is within its purview. Today’s AIs often fail in this regard, but companies can build guardrails, supplemented with human oversight, to ensure agents only do what they’re allowed to do, and only when they can do it well. Second, companies will need systems in place to monitor the execution of those tasks, so they stay within legal and ethical boundaries. Third, companies will need to be able to measure how confident the agents are in their performance, so that other systems, or humans, can be brought in when confidence is low.
“If it goes through all of those gates, only then do you let the agent do it autonomously,” says Hodjat. He recommends that companies keep each individual agent as small as possible. “If you have one agent and tell it to do everything in the sales department, it’ll fail a lot,” he adds. “But if you have lots of agents, and give them smaller responsibilities, you’ll see more work being automated.”
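Hodjat’s gates translate naturally into a thin control layer wrapped around each agent. The sketch below is only an illustration of that pattern; the agent methods, the confidence threshold, and the escalation reasons are hypothetical, not part of any vendor’s product.

```python
# Minimal sketch of the three gates described above: scope, policy, and confidence.
# The agent's methods and the threshold value are hypothetical placeholders.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff

@dataclass
class Decision:
    run_autonomously: bool
    reason: str

def gate(agent, task) -> Decision:
    # Gate 1: is the task within this agent's declared purview?
    if not agent.can_handle(task):
        return Decision(False, "task outside the agent's purview; route to a human")

    # Gate 2: would executing it breach legal, ethical, or policy guardrails?
    if not agent.passes_policy_checks(task):
        return Decision(False, "guardrail triggered; escalate for review")

    # Gate 3: is the agent confident enough in its own ability to do this well?
    confidence = agent.estimate_confidence(task)
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision(False, f"low confidence ({confidence:.2f}); hand off to a human")

    return Decision(True, "all gates passed; the agent may act autonomously")
```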
Companies such as Sailes and Salesforce are already developing multi-agent workflows, says Rahul Desai, GM at Chief of Staff Network, a professional development organization. “Combine this with chain-of-thought reasoning, or the ability for an AI agent to reason through a problem in multiple steps — recently incorporated into the new ChatGPT-o1 model — and we’ll likely see the rise of domain expert AI that’s available to everyone,” he says.
Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
“When people want to do speech generation, they go to a specialized model that does text to speech,” says Chandrasekaran. “Or a specialized model for image generation.” To have a full understanding of how the world works, for true general intelligence, an AI has to function across all the different modalities. Some of this is available today, though usually the multi-modality is an illusion and the actual work is handled behind the scenes by different specialized, single-mode models.
“Architecturally, these models are separate and the vendor is using a mixture-of-experts architecture,” says Chandrasekaran. Next year, however, he expects multi-modality to be an important trend. Multi-modal AI can be more accurate and more resilient to noise and missing data, and can enhance human-computer interaction. Gartner, in fact, predicts that 40% of gen AI solutions will be multi-modal by 2027, up from 1% in 2023.
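From the caller’s point of view, a single multi-modal request can already look like one model today, even when specialized models do the work behind the scenes. A minimal sketch, assuming the OpenAI Python SDK, with the model name and image URL as placeholders:

```python
# Minimal sketch: one request that mixes text and an image.
# Model name and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # a model that accepts both text and images
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the damage visible in this product photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/product.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```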
Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the risk of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change. Today, most companies building AI systems in-house tend to start with just one vendor, since juggling multiple providers is much more difficult. But as they build out scalable architecture next year, having “model gardens” with a selection of vetted, customized, and fine-tuned systems of different sizes and capabilities will be critical to getting maximum performance and highest price efficiency out of their AI.
Jeffrey Hammond, head of WW ISV product management transformation at AWS, says he expects to see more companies build internal platforms that provide a common set of services to their development teams, including multi-model routing.
“It helps developers quickly test different LLMs to find the best combination of performance, low cost, and accuracy for the particular task they’re trying to automate,” he says.
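A routing layer like the one Hammond describes can start small: a registry of available models with their rough strengths, costs, and latencies, and a function that picks the cheapest acceptable option for each request. The sketch below is illustrative only; the model names, prices, and latency figures are assumptions, not benchmarks.

```python
# Minimal sketch of multi-model routing across a "model garden".
# Names, costs, and latencies are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative
    typical_latency_ms: int
    good_at: set

MODEL_GARDEN = [
    ModelProfile("small-local-slm", 0.0001, 80, {"classification", "extraction"}),
    ModelProfile("mid-tier-llm", 0.002, 400, {"summarization", "drafting"}),
    ModelProfile("frontier-llm", 0.03, 1500, {"reasoning", "code", "planning"}),
]

def route(task_type: str, latency_budget_ms: int) -> ModelProfile:
    """Return the cheapest model that covers the task and meets the latency budget."""
    candidates = [m for m in MODEL_GARDEN
                  if task_type in m.good_at and m.typical_latency_ms <= latency_budget_ms]
    if not candidates:
        return MODEL_GARDEN[-1]  # fall back to the most capable model
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("classification", latency_budget_ms=200).name)  # -> small-local-slm
print(route("reasoning", latency_budget_ms=2000).name)      # -> frontier-llm
```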
Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
“Imagine an agent watching you work for a couple of weeks and designing a custom desktop just for you,” he says. “Companies build custom software all the time, but now AI is making this accessible to everyone. We’re going to start seeing it. Having the ability to get custom software made for me without having to hire someone to do it is awesome.”