When Accenture on Wednesday rolled out a new partnership with Nvidia, including the creation of a 30,000-person Nvidia business unit, it could have been seen as just another partnership expansion. A couple of years ago, that would have been fair. But with today's IT world being rewritten by generative artificial intelligence (genAI), the deal illustrates a new IT reality.
The essence of the deal: the creation of a new business unit, called the Nvidia Business Group, which will use an Accenture AI Refinery platform with agentic AI that leverages the full Nvidia AI stack. It will work with what Accenture describes as a “network of Accenture AI refinery engineering hubs serving 57,000 Accenture AI practitioners to open in Europe, Asia and North America, supporting large-scale operations, agentic architecture and foundation model development with Nvidia AI.”
Given that Accenture has been an Nvidia partner for years, why is this a big deal? Mostly because the enterprise IT landscape now depends on genAI development to an extent that crowds out almost everything else.
CIOs have historically worried about vendor lock-in. But Nvidia so dominates AI chip development today, to the point of near-monopoly, that enterprises have no choice but to secure their AI graphics processing units (GPUs) from Nvidia. Given that reality, enterprise CIOs can't worry about vendor lock-in with Nvidia, because they have no viable alternative.
With that established, enterprises must customize their AI efforts, often making them domain-specific, and only one decision remains: do they do that work with salaried in-house teams, or do they outsource? Most are reluctantly concluding that they need to outsource, mostly for reasons of efficiency and speed. The question then becomes with whom: a major player, such as Accenture, Deloitte, IBM, Ernst & Young, or Wipro, or a boutique genAI house.
That’s the context for evaluating the Accenture deal, said Ted Schadler, a VP and principal analyst for Forrester.
“This is a huge commitment from Accenture to sell Nvidia [offerings]. They are now a 360-degree partner because of the business unit,” Schadler said. “Given that Nvidia is the only game in town, you have to ask, ‘Who has the most commitment and skills with Nvidia?’ If you are the CIO of Nabisco, you are asking yourself, ‘Who is going to run this thing, to be your model builder and operator?’ You don’t have an answer today.”
“You want to host it in a shared platform that gives you scale. There is more here than meets the eye,” Schadler said. “You need to build proprietary models. You need to own your own model infrastructure. The future of AI models is proprietary and not generic. This announcement sets the table for that.”
Schadler spoke of the realities of vendor lock-in: today it can't be avoided in AI, but CIOs still need to reduce the risk as much as possible.
“You face lock-in challenges, you absolutely do. Service providers [such as Accenture] are not known for building products that you license and pay for. They are known for selling labor. Who do you want to be locked in with? A boutique? The world has changed in this way.”
However, Schadler said one of the elements of Accenture's AI strategy he found most intriguing was barely mentioned during Wednesday's rollout: an announcement Accenture made in July, in which it said it would help clients “build custom LLM models with the Llama 3.1 collection of openly available models” on top of the Nvidia AI Foundry. Llama is an open-source offering from Meta.
Schadler said that, from the enterprise CIO perspective, the Llama partnership could be quite attractive. “OpenAI is not a core model because it is not yours, but Llama could be,” he said.
OpenAI may become more proprietary if it ends up morphing fully into a for-profit company, but Llama's commitment to open source makes it more likely to stay flexible.
Jason Andersen, a VP and principal analyst at Moor Insights & Strategy, agreed with Schadler’s perspective, arguing that enterprise CIOs must focus on their value-adds atop genAI developments, and how, and with whom, they are going to make them happen.
“Increasingly, enterprises want to take a foundational model and make it specific to their business. They need to take that model and make their own derivative,” Andersen said.
Sometimes, he predicted, it won't even be enterprise-specific so much as vertical-specific. “Some healthcare companies might make an oncology model,” Andersen said.
That means CIOs “have to pick somebody, they have to partner. Accenture is saying, ‘We now have 30,000 people already doing this.’ The most laborious part of AI is data preparation. Accenture already has people doing this kind of work and preparing those models.”
Another argument in favor of Accenture, Andersen said, is the law of supply and demand. “Nvidia is in a position now where they can’t meet all of the demands [for their products]. It might be easier for a major partner to source them. In other words, it might be a case of, ‘Accenture customer, you might not have to wait that long for your GPUs.’”
He also doesn't expect the genAI chip pecking order to change any time soon. “Nvidia is a pretty safe bet. They are going to be the big dog for a long time. They have pretty darn good software tools.”
Andersen said a bigger concern of his, though, is the likely dramatic change in how almost everything in AI is priced: not necessarily the amount, but the mechanism by which services and products are charged.
“The bigger issue here is that these companies [Accenture and its primary competitors] are built on time and materials. They sell human skill sets. Their business model will change. They have to reconsider their strategy and business models,” Andersen said. “AI is changing the rules, particularly in the world of professional services. People are going to want to pay based on performance.”