Nvidia chips sold out? Cut back on AI plans, or look elsewhere

Nvidia CFO Colette Kress’s claim that “The clouds are sold out, and our GPU-installed base […] is fully utilized,” may have thrilled shareholders listening to the company’s earnings call on Wednesday, but it’s bad news for CIOs and data center managers who were counting on Nvidia for increased AI computing capacity, as they will have to change suppliers — or change their plans.

CEO Jensen Huang, asked during the same earnings call whether he saw a realistic path for supply to catch up with demand over the next 12-18 months, claimed that all was going to plan, saying, “We’ve done a really good job planning our supply chain. Nvidia’s supply chain basically includes every technology company in the world.”

Not everyone is convinced. Among the doubters is Forrester senior analyst Alvin Nguyen, who said Thursday that clients have been asking him what to do about the shortage of Nvidia GPUs, since demand far exceeds supply.

There are, he said, “other options for both on-premises (AMD, Intel, custom ASICs, CPUs) and in the cloud (TPUs, custom ASICs), although demand means these options may not be enough for everyone’s AI ambitions.”

Nguyen added, “for enterprises and their CIOs, not being able to get the AI infrastructure needed to achieve your full AI vision means re-evaluating those ambitions and paring them back to what is possible. Leveraging AI platforms and services others (Salesforce, ServiceNow, and so forth) provide can help mitigate some needs.”

‘Everything is moving so fast’

Senior IT executives, he said, should also consider working with smaller models that have reduced infrastructure needs, and experimenting with them to inform future AI decisions.

The constant innovations in this space, said Nguyen, “may help enterprises build or mitigate technical debt depending on what and when they decide on their infrastructure. I know this is an ‘it depends’ answer, but everything is moving so fast that the answers are only clear in hindsight.”

Matt Kimball, principal analyst at Moor Insights & Strategy, said the question about Nvidia and the availability of GPUs is a good one. “Some of the challenges I see organizations face are quite avoidable if some thought exercise around right-sizing of infrastructure were to take place,” he noted.

He pointed out, “Nvidia chips (or any chips for that matter) have different performance profiles, as well as different performance per watt and performance per dollar profiles. The latest [Nvidia] GB300 is not always the right fit for the job. And when we split between training and inference, this approach to rightsizing the solution for the need is even more critical.”

By doing this, said Kimball, “organizations will find they are less reliant on a latest generation chip that has a long line of (much larger) customers queued up and waiting on delivery. The other exercise is to consider whether the Nvidia chip is always necessary for your needs. I know this may sound like tech heresy. Still, especially as it pertains to inference, it is beneficial to understand what the inference environment looks like, where the infrastructure is being deployed, and what the workload entails.”

An ASIC-based solution may well be a better fit for a real-time, sensor-driven environment on an oil rig, for example, he observed.
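Kimball’s right-sizing argument can be made concrete with a back-of-the-envelope calculation. The sketch below is a hypothetical illustration, not from the article: the bytes-per-parameter table and the 20% overhead figure are rough assumptions, and real requirements vary with batch size and context length. It estimates the GPU memory needed to serve a model for inference at different numeric precisions, the kind of exercise that shows when a workload does not need the scarcest top-end parts.

```python
# Illustrative right-sizing sketch (assumptions, not figures from the article):
# estimate GPU memory needed to hold a model's weights for inference at
# different precisions, plus an assumed 20% overhead for activations/KV cache.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def inference_memory_gb(params_billions: float, precision: str,
                        overhead: float = 0.2) -> float:
    """Approximate memory (GB) to serve a model at the given precision."""
    weights_gb = params_billions * BYTES_PER_PARAM[precision]
    return round(weights_gb * (1 + overhead), 1)

# A 70B-parameter model: fp16 needs multi-GPU capacity, while 4-bit
# quantization may fit on a single large-memory accelerator.
print(inference_memory_gb(70, "fp16"))  # 168.0 GB
print(inference_memory_gb(70, "int4"))  # 42.0 GB
```

A workload that fits in roughly 42 GB after quantization can run on a much wider range of accelerators than one that demands 168 GB at full precision, which is the practical point of right-sizing before joining the queue for the latest generation of chips.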

Kimball added, “I’m certainly not saying not to acquire Nvidia. Still, regardless of supply chain issues, it is a very good exercise to think about your AI needs holistically and align the right-sized acceleration to your needs.” 

In addition to that, he said, “the cloud is always a choice. These are the first-in-line customers for Nvidia silicon, and it is perfectly natural to leverage the cloud for AI needs.” 

The key for CIOs: Be proactive

Gaurav Gupta, VP analyst at Gartner, had this advice: “While Nvidia continues to say that they have a strong control of their supply chain, the complexity is so high that it should be a priority for CIOs to monitor.”

He added, “not only are there potential shortages on some of the big known aspects, like leading-edge wafers, advanced packaging, and HBM [high bandwidth memory], but in my opinion it is the unrecognized constraints for smaller components and precision machinery parts for thermal management, liquid cooling, and server racks that could be bottlenecks. Plus, everyone needs to plan for power availability to run these data centers.”

The key, said Gupta, is to be proactive, plan ahead, and “not be the last in the queue” when it comes to ordering compute resources such as GPUs.

Scott Bickley, advisory fellow at Info-Tech Research Group, noted, “the world is starting to question how Nvidia is going to move from an [about] $250 billion annualized run rate to $350 billion, and eventually to $500 billion plus. Yet they clearly stated every available chip is sold out; in fact, that they could have sold more if they had the inventory. It is reasonable to apply scrutiny to this incredibly complex and fragile supply chain and ask the question, ‘What happens if there is a material failure that limits GPU shipments?’”

Supply chain risks, he said, “are numerous in nature; however, it is clear that Nvidia is customer Number One with all of their suppliers, which drives an inordinate allocation of resources to ensure that production flows. Any disruption would likely be materials-based as opposed to a process or labor issue from their vendor base.”

He added, “geopolitical events would be the most likely origin of any type of medium to long term disruption, think China-Taiwan, expansion of the Russia-Ukraine conflict, or escalation in the US-China trade war.”

For lower impact events, he said, “[Nvidia] does a nice job of setting conservative shipment goals and targets for Wall Street, which they almost invariably beat quarter after quarter. This provides some cushion for them to absorb a labor, process, or geopolitical hiccup and still meet their stated goals. Shipment volumes may not exceed targets, but shipments would continue to flow; the spice must flow after all.”

In a worst-case scenario where shipments are materially impacted, there is little recourse for enterprises that are not large-scale cloud consumers with clout with the limited providers in the space, Bickley added.

Enterprises joining a ‘very long queue’

According to Sanchit Vir Gogia, the chief analyst at Greyhound Research, the Nvidia earnings call “confirms that the bottleneck in enterprise AI is no longer imagination or budget. It is capacity. Nvidia reported $57 billion in quarterly revenue, with more than $51 billion from data center customers alone, yet still described itself as supply-constrained at record levels.”

Blackwell and Blackwell Ultra, he said, have become the default currency of AI infrastructure, yet even at a build rate of roughly 1,000 GPU racks per week, the company cannot meet demand.

Long-term supply and capacity commitments, said Gogia, “now stand at around $50.3 billion, and multi-year cloud service agreements have jumped to $26 billion, implying that much of the next wave of capacity has already been pre-booked by hyperscalers and frontier labs. Enterprises are not stepping into an open market. They are joining the back of a very long queue.”

The supply imbalance, he said, “is not just about chips. It is about everything that has to wrap around them. The filings point to long manufacturing lead times, tight availability of advanced packaging and high bandwidth memory, and significant prepayments and non-cancellable commitments to secure future capacity.”

Gogia also suggested that the single biggest decision for CIOs now “is whether to design their AI strategy around Nvidia or around the risk of Nvidia. These are not equivalent positions. To design around Nvidia is to accept that the platform is the gold standard and lean into it by placing orders 12 months ahead, using multiple OEMs for the same configuration, coordinating with finance teams on prepayments, and building programme timelines that can absorb shipment shifts.”

To design around the risk, he said, “is to recognize that Nvidia is essential but cannot be the only path, and to treat diversification as a resilience measure rather than a philosophical debate.”


Source: News
November 21, 2025
