The post-cloud data center: Back in fashion, but not like before

For most of the last decade, I watched enterprise infrastructure strategy follow a simple arc: abstract complexity, speed up provisioning, move as much as possible into the cloud. That shift delivered real value. It shortened deployment cycles, empowered product teams and removed capital friction that had slowed change.

Cloud did not eliminate the need for physical infrastructure. It only postponed the moment when we would need to think about it again.

That moment is here.

In conversations with platform leaders, executive sponsors and IT leadership, questions have shifted from “how fast can we migrate?” to “where should this run and what risk are we taking if we choose wrong?” The catalyst is not nostalgia for owned data centers. It is the collision of AI, energy constraints, sovereignty expectations and GPU economics.

What appears to be a return is actually architectural maturity. We have entered the post-cloud data center era.

Why “back” is the wrong word

If you describe this moment as a reversal, you miss what changed. The first wave of cloud strategy optimized for velocity and elasticity. We wanted to escape procurement cycles, scale on demand and give more control to the teams shipping software. That model is still the right answer for many workloads.

AI exposes the assumptions behind that universal default.

When models move from experimentation to daily operations, elasticity is no longer the dominant variable: GPU usage stabilizes and data volumes grow rapidly. The cost curve becomes less forgiving. At the same time, boards and regulators ask more pointed questions: Where is data processed? Where are models trained? Who controls the infrastructure beneath it? What evidence exists when an auditor asks?

This is why I do not frame the shift as “cloud repatriation.” It is replatforming at the infrastructure layer. I am not arguing for an on-prem or colocation expansion in isolation. I am arguing for a deliberate placement model where cloud, colocation and on-prem each have defined roles, decision gates and an evidence package when you deviate. Placement is becoming situational, based on density, locality and governance, not ideology.

The survey data support the direction, even if every enterprise will land differently. In its 2024 Global Data Center Survey, the Uptime Institute reports that 64% of enterprise operators are growing their data center capacity, even as colocation and public cloud expand. That is not a mass retreat from the cloud. It is a signal that hybrid is hardening into a long-term operating model, especially as AI workloads mature.

In my architecture work, I see two triggers that bring physical infrastructure back into scope. First, sustained utilization changes the math. A steady, always-on inference pipeline behaves differently from spiky batch processing. If the workload is stable, the economic advantage shifts toward locations where you can control the unit costs of power, cooling and amortization.
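To make that concrete, here is a back-of-the-envelope break-even sketch. Every figure in it is an illustrative assumption (hardware price, power rate, colocation fee, cloud rate), not a quote from any vendor; the point is the shape of the curve. At high, steady utilization the owned or colocated node wins on unit cost, and at low utilization cloud elasticity wins.

```python
# Back-of-envelope placement math: steady vs. bursty GPU workloads.
# All figures below are illustrative assumptions, not vendor quotes.

HOURS_PER_YEAR = 8760

def owned_cost_per_gpu_hour(
    capex_per_node: float = 250_000.0,          # assumed 8-GPU node, hardware + integration
    gpus_per_node: int = 8,
    amortization_years: float = 4.0,            # assumed depreciation schedule
    node_power_kw: float = 10.0,                # assumed draw including cooling overhead
    power_cost_per_kwh: float = 0.10,           # assumed blended colo power rate
    colo_fees_per_node_month: float = 2_000.0,  # assumed space/ops fee
    utilization: float = 1.0,                   # fraction of hours doing useful work
) -> float:
    """Effective cost per *useful* GPU-hour in owned or colocated infrastructure."""
    annual_capex = capex_per_node / amortization_years
    annual_power = node_power_kw * HOURS_PER_YEAR * power_cost_per_kwh
    annual_colo = colo_fees_per_node_month * 12
    useful_gpu_hours = gpus_per_node * HOURS_PER_YEAR * utilization
    return (annual_capex + annual_power + annual_colo) / useful_gpu_hours

CLOUD_RATE = 4.00  # assumed on-demand price per GPU-hour

for util in (0.9, 0.5, 0.2):
    owned = owned_cost_per_gpu_hour(utilization=util)
    winner = "owned/colo" if owned < CLOUD_RATE else "cloud"
    print(f"utilization {util:4.0%}: owned ~${owned:5.2f}/GPU-hr "
          f"vs cloud ${CLOUD_RATE:.2f}/GPU-hr -> {winner}")
```

With these assumed inputs, a 90% utilized pipeline lands around $1.50 per useful GPU-hour in owned infrastructure, while a 20% utilized workload climbs close to $7, well above the cloud rate. The exact crossover moves with your numbers; what is stable is that the crossover exists.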

Second, data gravity and accountability show up late in the cloud conversation and then dominate it. A proof of concept can run anywhere. A production system tied to regulated data, proprietary IP, customer confidence and board scrutiny rarely can.

Edge is now accountability

The biggest mindset shift I have had to make is changing what “edge” means.

Historically, edge computing meant physical distance from the core: factories, stores, remote sites. In the AI era, edge now means proximity to accountability. Compute is moving closer to proprietary data, regulatory boundaries and operational decision-making. In practice, that often means enterprise facilities and colocation sites within defined legal and governance zones.

You can see the policy pressure building. Public sector briefings now treat data centers as part of national resilience and sustainability planning, not just private infrastructure. AI policy is also tied to governance and trust, which increases the burden of demonstrating where and how sensitive processing occurs, as reflected in the European Commission’s approach to AI.

This is where colocation has evolved from “outsourced real estate” to a deliberate architecture move. In several programs I have been pulled into, colocation is where the enterprise anchors GPU-dense clusters near power-rich regions, keeps sovereignty-bound workloads inside a controlled footprint and connects to multiple clouds without turning every workload into a single-provider dependency.

The key is control, not location.

If you are building AI capabilities that touch customer data, pricing models, supply chain enhancement or proprietary process know-how, the question is rarely “can the cloud do it?” The question is “can we prove, continuously, that we are operating inside the boundaries our risk owners will accept?” For many organizations, the cleanest proof lives in infrastructure they can audit end-to-end, whether owned or in colocation facilities.

This is also why “data locality” discussions are starting to sound like “data center” discussions again. Once you accept that some data cannot move freely and some models cannot train outside certain jurisdictions, placement becomes a design decision, not just a hosting preference.

What I ask leadership now

AI has pulled infrastructure decisions back onto the executive agenda. I am seeing senior technology leaders and steering committees ask detailed questions about topics they delegated for years: rack density, power topology, cooling strategy, site selection and long-term capacity planning. That is not because they want to run facilities; it is because these constraints now shape business outcomes.

GPU density is the forcing function. NVIDIA’s GPU-ready data center guidance suggests that liquid cooling and AI-optimized designs can enable roughly two-to-five times higher compute density than traditional air-cooled approaches, depending on GPU generation, cooling method and utilization targets. Treat that as a planning range, not a promise: it only holds if power delivery, cooling and rack design are engineered together. Legacy enterprise sites were not built for that profile. Power, not square footage, becomes the limiting factor, changing which sites are viable and how quickly capacity can scale.
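A simple worked example shows why the constraint flips. Assume a legacy site with 200 rack positions but only 2 MW of deliverable IT power (both figures, and the per-rack densities, are illustrative assumptions): at traditional air-cooled densities the floor fills up first, while at AI-class densities the power budget is exhausted with most of the floor still empty.

```python
# Illustrative check of "power, not square footage, is the limit".
# All numbers are assumptions for the sketch, not a facility design.

FLOOR_RACK_POSITIONS = 200   # assumed physical rack positions on the floor
SITE_POWER_KW = 2_000.0      # assumed deliverable IT power (2 MW)

for name, rack_kw in [("air-cooled, ~10 kW/rack", 10.0),
                      ("liquid-cooled AI, ~80 kW/rack", 80.0)]:
    powerable = int(SITE_POWER_KW // rack_kw)
    usable = min(FLOOR_RACK_POSITIONS, powerable)
    binding = "power" if powerable < FLOOR_RACK_POSITIONS else "floor space"
    print(f"{name}: can power {powerable} of {FLOOR_RACK_POSITIONS} positions "
          f"-> {usable} usable racks (binding constraint: {binding})")
```

At 10 kW per rack the site powers every position; at 80 kW per rack the same power envelope supports only 25 racks. Filling the remaining floor space requires new power capacity, which is exactly the site-selection and lead-time problem described above.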

Energy is the second pressure point. In the cloud era, energy was bundled into pricing models. With AI, energy shows up as a hard constraint and a reporting obligation. Forecasting, securing and governing power capacity is now part of the technology plan, not a facilities footnote. The World Resources Institute has been explicit about the challenge of forecasting electricity needs amid the data center boom, which is exactly the problem AI workloads amplify.

Uptime Institute also highlights how operational constraints, resiliency and sustainability reporting are becoming first-class issues for data center operators, not optional extras. That matters because boards now treat AI as both an engine of growth and a source of risk, which means they will ask for evidence and discipline.

When I help executive sponsors make this practical, I use a short set of questions that forces clarity without turning the discussion into a cloud-versus-on-prem debate:

  1. Which AI workloads are steady and which are bursty? If utilization is stable, treat the cost curve like a utility problem. If demand is spiky, cloud elasticity may still win.
  2. What data can move, and what data cannot? Define the non-negotiables with risk and legal early. If you cannot move data, do not design as if you can.
  3. What is the density plan? Document target rack power, cooling approach and upgrade path. If the answer is “we will figure it out later,” AI will arrive before the infrastructure can support it.
  4. What is the evidence plan? By evidence, I mean the artifacts that survive audits and incidents: reference architecture, power and capacity model, security control mapping, runbooks, disaster recovery test evidence and cost telemetry.
  5. What is the exit plan? Avoid permanent placement decisions where possible. Design for movement between cloud, colocation and on-prem as requirements evolve.

One more test I use is to ask whether our AI roadmap assumes power and cooling scale as fast as software. They do not. That mismatch creates the most expensive technical debt: business commitments built on infrastructure that cannot arrive in time.

To keep this from becoming an ad hoc fight between cloud and facilities, I push for a simple governance pattern: classify workloads by density and data sensitivity, map each class to approved landing zones (cloud, colocation or on-prem) and require an evidence package for exceptions. That keeps decisions fast and defensible, which sponsors and steering committees need as AI adoption grows.
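To show what that gate might look like in practice, here is a minimal sketch. The workload classes, density threshold and zone mapping are all hypothetical placeholders; a real implementation would be driven by your own risk taxonomy and approved landing zones.

```python
# A minimal sketch of the placement gate described above: classify a
# workload by density and data sensitivity, map it to an approved
# landing zone, and flag anything else as an exception that needs an
# evidence package. Names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    rack_kw: float    # target rack power density
    data_class: str   # "public", "internal", or "regulated"
    bursty: bool      # True if demand is spiky rather than steady

APPROVED_ZONES = {
    # (density band, data class) -> approved landing zone
    ("low",  "public"):    "cloud",
    ("low",  "internal"):  "cloud",
    ("high", "internal"):  "colocation",
    ("low",  "regulated"): "on-prem",
    ("high", "regulated"): "on-prem",
}

def place(w: Workload) -> str:
    band = "high" if w.rack_kw > 20 else "low"  # assumed density threshold
    zone = APPROVED_ZONES.get((band, w.data_class))
    if zone is None:
        return f"{w.name}: EXCEPTION - route to review with evidence package"
    if zone != "cloud" and w.bursty:
        # Bursty work may still justify cloud elasticity; record the deviation.
        return f"{w.name}: {zone} approved; bursty profile - document trade-off"
    return f"{w.name}: {zone}"

for w in [Workload("batch-scoring", 8, "public", bursty=True),
          Workload("inference-api", 45, "regulated", bursty=False),
          Workload("research-training", 60, "public", bursty=True)]:
    print(place(w))
```

The value is not in the code; it is that the mapping is written down, so a placement decision either matches an approved pattern or visibly becomes an exception with an owner and an evidence requirement.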

This is the posture I push for in reviews: cloud where it is elastic and safe; on-prem or colocation where density, locality and governance demand it; and a purposeful design for reversibility across all of it.

On-prem is back in fashion, but not like before. The story is not a return to the past. This is the moment infrastructure stopped being abstract and became a strategic constraint again.

This article is published as part of the Foundry Expert Contributor Network.

