The tech world is abuzz with the news that Amazon is in talks to pump a second multi-billion dollar investment into AI startup Anthropic — with the catch that it must use the tech giant’s chips (not Nvidia’s) when training its models.
Anthropic is the creator of the Claude family of large language models (LLMs) and a direct competitor to OpenAI, whose ChatGPT rocked the world when it was released just under two years ago. The three-year-old, San Francisco-based startup has raised close to $10 billion to date and claims a $40 billion valuation.
While the companies have neither confirmed nor denied the rumors, such an agreement, and the precedent it could set for the broader market, raises many concerns. Experts point to the constraints that come with vendor lock-in, the possibility of stifled innovation, and potential antitrust scrutiny.
“Amazon’s push for Anthropic to adopt its in-house AI chips over Nvidia’s is a strategic maneuver to assert dominance in the AI infrastructure space,” said Kaveh Vahdat, founder and president of marketing consultancy RiseOpp.
Increased lock-in, reduced competition and interoperability obstacles
The rumored Amazon investment is purportedly contingent on Anthropic using chips developed in-house by Amazon Web Services (AWS) and hosted on its cloud computing platform. Anthropic currently trains its models on a mix of Nvidia hardware and systems built on AWS’s Trainium and Inferentia chips.
Nvidia is undoubtedly the dominant vendor in AI hardware, and is increasingly moving up the AI stack as it explores software products, but other major companies including Google and Microsoft, as well as smaller niche players, are catching up (or at least trying to). Amazon has invested significantly in its chips to directly compete with the trillion-dollar-valued Nvidia, releasing its Trainium2 and Graviton4 products about a year ago at AWS re:Invent.
Amazon and Anthropic already have a deep partnership, with AWS making a $4 billion investment in the startup in the spring of 2024. This made AWS Anthropic’s primary cloud provider. This week, the two companies announced they were teaming up with data analytics firm Palantir to provide US intelligence and defense agencies access to Claude.
“The reported negotiations highlight the complex dynamics at play in the evolving AI ecosystem,” said Sidharth Ramsinghaney, a corporate strategist at Twilio.
Yes, deeper integration between a major cloud provider like Amazon and an innovative AI research company like Anthropic could drive further advancements and adoption of LLMs and generative AI, he noted. However, this type of partnership raises concerns about increasing vendor lock-in, reduced pricing competition, and “obstacles to portability and interoperability.”
“A turf war between Amazon and other major chip providers like Nvidia could lead to fragmentation, as customers may be forced to choose between optimized versions of AI models for different hardware environments,” he said.
This, he noted, would only add complexity and friction for enterprises looking to deploy AI solutions at scale across an increasingly heterogeneous IT landscape.
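The fragmentation Ramsinghaney describes can be sketched in code: frameworks that target multiple chip families typically dispatch work through a backend registry, and every additional hardware target adds a parallel code path to build, tune, and keep in sync. The sketch below is purely illustrative; the backend names and the `train_step` helper are invented for this example and do not correspond to any real vendor API.

```python
# Illustrative sketch (hypothetical names): a backend registry that
# dispatches the same workload to different accelerator families.
BACKENDS = {}

def register_backend(name):
    """Register a hardware backend class under a given name."""
    def wrap(cls):
        BACKENDS[name] = cls()
        return cls
    return wrap

@register_backend("cuda")
class CudaBackend:
    def run(self, model, batch):
        return f"{model} on Nvidia GPU: {len(batch)} samples"

@register_backend("trainium")
class TrainiumBackend:
    # A second code path to test and maintain -- this duplication is the
    # ongoing cost of a fragmented hardware ecosystem.
    def run(self, model, batch):
        return f"{model} on AWS Trainium: {len(batch)} samples"

def train_step(model, batch, device="cuda"):
    try:
        backend = BACKENDS[device]
    except KeyError:
        raise ValueError(f"no backend for {device!r}; available: {sorted(BACKENDS)}")
    return backend.run(model, batch)

print(train_step("claude-sketch", [1, 2, 3], device="trainium"))
```

Each entry in the registry stands in for a separate stack of compiled kernels, compilers, and runtime tooling, which is why enterprises forced to support several such stacks face the added complexity and friction described above.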
A transition requiring ‘significant engineering efforts’
Locking itself into Amazon’s chips could severely limit Anthropic, industry watchers say. Nvidia’s CUDA platform is considered more robust, and shifting entirely to AWS infrastructure could mean less flexibility with other cloud providers.
“Anthropic may face technical hurdles in adopting Amazon’s chip technology, as the associated software is less mature than Nvidia’s widely used CUDA platform,” said Ramsinghaney.
Clearly, Amazon is looking to create a vertically integrated ecosystem that could potentially reduce costs and enhance performance for its cloud customers, Vahdat pointed out. This reflects a broader trend among tech giants using their own proprietary hardware to gain a competitive edge.
In fact, said Forrester Senior Analyst Alvin Nguyen, “The impact of Anthropic using AWS’s chips exclusively as part of a deal for funding is providing validation for AWS’s chips. Having a well-known AI company like Anthropic utilizing their technology is great publicity and should drive other organizations to consider AWS AI services and products.”
Raising antitrust concerns
Then there’s the pure antitrust question — such a lock-in could bring scrutiny from regulators, who are already eyeing potential monopolies in the AI space. In fact, Amazon’s previous $4 billion investment did prompt an investigation by the UK’s Competition and Markets Authority (CMA), though that probe was dropped within a month.
Similarly, Nvidia is under investigation by the US Department of Justice (DOJ). However, at least in the US, antitrust could become less of a concern in the very near future, with the transition to the Trump administration in January.
Ultimately, said Ramsinghaney, the positive or negative impact of such an expanded Amazon-Anthropic partnership will depend on how the companies choose to structure their arrangement and how much they prioritize (or don’t) open standards, cross-platform support, and customer choice.
“Maintaining a healthy, competitive AI ecosystem should be a key consideration as these types of strategic alliances evolve,” he said.