Small language models (SLMs) are giving CIOs greater opportunities to develop specialized, business-specific AI applications that are less expensive to run than those reliant on general-purpose large language models (LLMs).
By 2027, usage of smaller, context-specific models will outpace that of LLMs by at least three times, according to a recent report from Gartner, which also claims LLM response accuracy declines for tasks requiring specific business context.
“The variety of tasks in business workflows and the need for greater accuracy are driving the shift towards specialized models fine-tuned on specific functions or domain data,” says Sumit Agarwal, an analyst at Gartner who helped author the report. “These smaller, task-specific models provide quicker responses and use less computational power, reducing operational and maintenance costs.”
Dr. Magesh Kasthuri, a member of the technical staff at Wipro in India, says he doesn’t think LLMs are more error-prone than SLMs but agrees that LLM hallucinations can be a concern. “For domain-centric solutions such as in the banking or energy sector, SLM is the way to go for agility, cost-effective resources, rapid prototype and development, security, and privacy of organizational data,” Kasthuri says.
Despite the spotlight on general-purpose LLMs that perform a broad array of functions, such as OpenAI’s GPT models, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok, a growing fleet of small, specialized models is emerging as cost-effective alternatives for task-specific applications, including Meta’s Llama 3.1, Microsoft’s Phi, and Google’s Gemma SLMs.
For example, Google claims its recently introduced Gemma 3 SLM can run on just one Nvidia GPU.
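To make that claim concrete, the sketch below shows roughly what running a small open model on a single GPU looks like with the Hugging Face Transformers library. The model ID, precision setting, and prompt are illustrative assumptions, not a configuration endorsed by Google.

```python
# A minimal sketch of running a small open model on a single GPU with
# Hugging Face Transformers. The model ID and settings are illustrative
# assumptions, not a vendor-recommended configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-1b-it"  # assumed instruction-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory within one GPU
    device_map="cuda",           # place the whole model on a single device
)

prompt = "Summarize the warranty terms in two sentences: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```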
SLMs catch the eye of the enterprise
Nicholas Colisto, CIO at Avery Dennison, cites the rise of agentic AI as one factor fueling greater interest in SLMs among CIOs today.
“Selecting the right foundation models for gen AI and agentic AI apps is one of the more complex decisions CIOs and CAIOs face today. It’s not just about performance benchmarks — it’s about balancing cost, security, explainability, scalability, and time to value,” Colisto says. “The landscape is shifting from large, general-purpose models to smaller, domain-specific ones that better serve industry needs while reducing risk and cost.”
Gartner maintains that SLMs offer increased control over sensitive business data, while reducing operational costs and improving performance in specific domains. This more targeted approach to AI — for customer service automation, market trend analysis, product innovation, and sentiment analysis — also offers privacy and copyright protection benefits, Gartner claims.
“That’s 100% accurate,” says Patrick Buell, chief innovation officer at Hakkoda, an IBM company. “Tuned, open-source small language models run behind firewalls solve many of the security, governance, and cost concerns.”
Tom Richer, a former CIO and founder of Intelagen, a Google Partner that develops and deploys specialized vertical AI solutions, says the Gartner report aligns with what he is seeing in the field.
“General-purpose LLMs have their place, but for specific business problems, smaller, fine-tuned models deliver better results with greater efficiency, especially in regulated industries,” Richer says. “The main driver towards SLMs is the hallucination risk of LLMs. The tendency of general-purpose LLMs to generate inaccurate or nonsensical information, especially when dealing with specific or nuanced business contexts, is a significant barrier.”
Richer adds: “For enterprise applications where accuracy and reliability are paramount, this inherent risk makes relying solely on a general LLM a non-starter in many cases. It’s a fundamental reason why a more targeted, specialized approach is often the more prudent and dependable solution. Can’t run the risk of a hallucination in a healthcare use case.”
A fluid future
Last fall, Microsoft announced adapted AI models leveraging its Phi portfolio of SLMs to expand its industry capabilities and enable enterprises to address custom needs more accurately and effectively. The company announced it was developing fine-tuned models, pre-trained with industry-specific data for common business use cases with enterprise partners Bayer, Rockwell Automation, Siemens Digital Industries Software, and others.
Microsoft CEO Satya Nadella recently lauded an SLM-based application developed by a major airline, which he saw demonstrated in Tokyo. “With our SLM Phi, flight attendants at Japan Airlines are spending less time on paperwork and more time with passengers,” Nadella posted on LinkedIn.
Microsoft also claims its Orca and Orca 2 models demonstrate how synthetic data can be used to post-train small language models, enabling them to perform better on specialized tasks.
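As a rough illustration of that idea, the sketch below generates synthetic prompt-and-response pairs from a larger “teacher” model and saves them for supervised fine-tuning of a smaller model. It is a simplified sketch of the general technique, not Microsoft’s actual Orca pipeline; call_teacher_model() is a hypothetical stand-in for any LLM API.

```python
# A minimal sketch of building synthetic post-training data from a larger
# "teacher" model. call_teacher_model() is a hypothetical placeholder, not a
# real API; in practice it would call a hosted large model.
import json

def call_teacher_model(prompt: str) -> str:
    # Hypothetical placeholder for a call to a large general-purpose model.
    return f"[teacher response to: {prompt}]"

# Domain-specific task prompts the small model should learn to handle.
task_prompts = [
    "Classify this support ticket as billing, technical, or account: ...",
    "Extract the renewal date from this contract clause: ...",
]

# Store prompt/response pairs as JSONL for supervised fine-tuning of an SLM.
with open("synthetic_sft_data.jsonl", "w") as f:
    for prompt in task_prompts:
        record = {"prompt": prompt, "response": call_teacher_model(prompt)}
        f.write(json.dumps(record) + "\n")
```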
Google’s Gemma 3, based on Gemini 2.0, is a collection of lightweight, state-of-the-art open models designed to run fast, directly on devices — from phones and laptops to workstations. “Gemma SLM models utilize business domain-specific intelligence rather than broad generalization and are tailored for industries such as healthcare, legal, and finance, outperforming LLMs in their respective domains,” according to a company statement issued to CIO.
Gartner says enterprises can customize LLMs for specific tasks by employing retrieval-augmented generation (RAG) or other fine-tuning techniques to create specialized models. But Gartner’s prediction that SLMs will outpace LLMs within two years illustrates an accelerating industry trend toward AI that is more task-specific and adheres to governance and regulatory compliance requirements, says Naveen Sharma, vice president and global head of AI and analytics at Cognizant.
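The sketch below shows the basic RAG pattern Gartner is referring to: retrieve the most relevant in-house documents for a query, then ground the model’s prompt in them so answers draw on business context rather than general knowledge. The TF-IDF retrieval and the sample documents are simplifying assumptions; production systems typically use embedding models, a vector store, and a hosted or local model for the final generation step.

```python
# A minimal retrieval-augmented generation (RAG) sketch. TF-IDF retrieval is
# a simplifying assumption standing in for an embedding model and vector store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 10 business days of return approval.",
    "Enterprise support contracts renew automatically every 12 months.",
    "All customer data is stored in EU data centers per policy DP-7.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by TF-IDF cosine similarity to the query.
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    # Constrain the model to answer only from retrieved business context,
    # which is how RAG reduces hallucination on company-specific questions.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```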
“As organizations look to scale AI more effectively, smaller, task-specific models are proving to be faster, more efficient, and easier to integrate into real business workflows,” Sharma says. “When it comes to AI models, smaller may be smarter but the future is not either-or: It’s orchestration, with large models providing the base and smaller, targeted models delivering against precise business needs.”
Sharma also says the increasing development of SLMs does not mean large foundation models are going away.
“If anything, their role is becoming more strategic. Instead of being the end product, they’re becoming the starting point — providing the core capabilities that teams can build on, adapt, and fine-tune for specific use cases,” Sharma adds. “We’re shifting from using them as general-purpose tools to using them as core infrastructure for more tailored, efficient AI systems that better serve the business.”