When Microsoft posted its quarterly earnings last week, its CFO Amy Hood said that customers wanted more cloud compute for their AI workloads than the company could supply.
“Near-term AI demand is a bit higher than our available capacity,” Hood said Thursday, during a conference call to discuss results for the quarter ended March 31.
Microsoft posted revenue of $61.9 billion for the quarter, up 17% year on year, of which $26.7 billion (up 21%) came from its Intelligent Cloud segment, consisting of Azure and other public, private, and hybrid server products and cloud services. The cloud figure excludes Bing Search and Xbox Cloud Gaming, which are part of its More Personal Computing segment, where revenue rose 17% to $15.6 billion, and Microsoft 365 and Dynamics 365, part of its Productivity and Business Processes segment, where revenue rose 12% to $19.6 billion.
Within the Intelligent Cloud segment, revenue from Azure and other cloud services grew 31% year on year, ahead of expectations, with AI services contributing 7 points of growth, Hood said.
But Azure’s growth could have been even greater: The shortfall in AI cloud capacity held back revenue in the quarter and will do so again in the next, Hood said.
Balancing supply and demand
Microsoft must balance cloud demand each quarter against its infrastructure investment plans for the next, and is already planning to spend more. “We expect capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments,” Hood said.
This balancing act, according to Dylan Patel, chief analyst at semiconductor research firm Semianalysis, can be seen in the company’s phased rollout plan for AI copilots across various service offerings.
“Many features have been developed for Windows and Office copilots without having been deployed due to the lack of compute. GitHub Copilot is still using a much smaller model,” Patel said.
When suppliers can’t meet demand in other domains, they may raise prices until demand falls off, or customers may seek alternative sources or substitute different products.
Customers are unlikely to go elsewhere, though, as AI workloads are often fueled by large amounts of enterprise data from applications running in the same cloud. The sheer cost and complexity of the migration process would discourage existing enterprise customers from moving to another cloud provider over short-term capacity issues.
“Sometimes the level of technical debt incurred in migrating a well-established project can outweigh any concerns over latency,” said Bradley Shimmin, chief analyst at research firm Omdia, adding that enterprises using AI for mission-critical applications would be even less likely to change providers.
And although Microsoft may have lost out on a little revenue growth as a result of the capacity constraints, analysts said it is at no risk of losing new customers to its rivals.
“The demand for AI is so large that there simply isn’t enough capacity to put large language models (LLMs) into every application that enterprises want to. AWS and Google both have capacity issues as well. Microsoft is not unique,” said Semianalysis’ Patel.
It may even have an edge because it started building out its AI stack before its rivals, he said.
“AWS and Google still do not properly serve models with GPT-4 Turbo capabilities, for example,” Patel said, adding that his firm’s analysis shows that Microsoft is no longer losing money on API calls and has a healthy margin, giving it yet another advantage against rivals.
Price hikes unlikely
Even with a largely captive customer base, Microsoft is unlikely to resort to price hikes as a way to manage demand, at least in the short term, analysts said.
With immature technologies such as generative AI, Microsoft and its competitors face challenges accurately forecasting changes in demand, according to Dhaval Moogimane, who leads the high-tech and software practice at digital services firm West Monroe. While occasional discrepancies between capacity and demand may persist, they are unlikely to become a protracted or systemic issue that would lead to price hikes, Moogimane said.
Instead, said Shimmin, Microsoft and other hyperscalers will likely resort to other tactics to manage demand, such as downgrading response times for customers paying less or making use of batch inferencing, a process in which predictions are made, stored, and later presented on request. This can be more efficient than online or dynamic inferencing, where predictions are generated in real time.
Batch inferencing, especially in support of API calls, is rapidly becoming “a thing” among model hosting providers, according to Shimmin.
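The distinction Shimmin draws can be illustrated with a minimal sketch (hypothetical function and variable names, not any particular provider’s API): a batch job precomputes predictions when capacity is free and stores them, so serving a request later is just a lookup, whereas online inferencing spends compute while the user waits.

```python
# Minimal sketch of batch vs. online inferencing.
# predict() and prediction_store are placeholders, not a real cloud API.
from typing import Dict, List

def predict(prompt: str) -> str:
    # Stand-in for an expensive model call (e.g., an LLM completion).
    return f"answer for: {prompt}"

# Online (dynamic) inferencing: the model runs at request time,
# consuming GPU capacity while the user waits.
def serve_online(prompt: str) -> str:
    return predict(prompt)

# Batch inferencing: predictions are made ahead of time (e.g., off-peak),
# stored, and later presented on request.
prediction_store: Dict[str, str] = {}

def run_batch_job(prompts: List[str]) -> None:
    for p in prompts:
        prediction_store[p] = predict(p)

def serve_from_batch(prompt: str) -> str:
    # Request time is only a lookup; no model capacity is needed.
    return prediction_store.get(prompt, "not yet computed")

if __name__ == "__main__":
    run_batch_job(["summarize Q3 earnings", "draft a status email"])
    print(serve_from_batch("summarize Q3 earnings"))
```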
Customers shouldn’t be surprised at imbalances between demand and supply in cloud computing, according to IDC analyst Rijo George Thomas: They’re not new and enterprises have been complaining about them since the beginning of the Covid pandemic. “IDC’s Wave surveys have revealed that supply chain constraints were one of the top concerns, at least for Asia-Pacific IT leaders, affecting their tech strategies and budgets,” Thomas said.