Cloud modernization is advancing. Utilization isn’t

At Datadog, an observability and security platform for cloud applications, I work on research studies that analyze anonymized infrastructure telemetry from thousands of production environments running Kubernetes, managed container platforms and serverless services. The datasets span multiple cloud providers and billions of workload hours. Much of that work goes into our annual reports on container and serverless adoption, where we examine how organizations run workloads in modern cloud environments.

Over the past few years, one question kept coming up as we updated these reports: As cloud platforms become more granular and autoscaling adoption increases, does resource efficiency improve?

Going into this work, I didn’t have a formal hypothesis about utilization improving over time. But there was an implicit assumption—one that felt reasonable: as platforms became more granular and autoscaling adoption increased, resource efficiency would improve at least incrementally.

It didn’t.

When we compared successive editions of the research, including the 2023 Container Report and the 2025 State of Containers and Serverless, the answer was less straightforward than expected. The share of Kubernetes workloads running well below their requested CPU and memory levels remained broadly consistent between reports.

That persistence raises an uncomfortable question: If modernization alone doesn’t improve utilization, what does?

Rapid evolution in cloud infrastructure

Cloud environments today look markedly different from even three years ago.

In the 2023 Container Report, we found that over 65% of Kubernetes workloads were using less than 50% of their requested CPU and memory. That report examined container telemetry across thousands of production environments to understand how teams run Kubernetes workloads.

Two years later, the 2025 State of Containers and Serverless expanded the scope of the research to look at broader compute patterns, including the growing mix of containers and serverless, while continuing to analyze Kubernetes workloads.

Using the same <50% threshold for comparison, the overall utilization pattern remained similar between the two editions: even as organizations adopted newer compute models and expanded autoscaling, most workloads continued to run well below their requested capacity.

At a surface level, the modernization between those report cycles is obvious: More granular compute models, broader instance diversification, increased use of managed services and deeper abstraction.

Looking only at platform capabilities and adoption trends, this appears to be steady operational maturity, the kind often discussed in CIO.com’s own coverage of cloud strategy.

If modernization alone were enough, we would expect to see measurable improvement in utilization patterns. The data suggests otherwise.

The utilization baseline barely moved

Using the same <50% threshold for comparison, the 2025 data shows a familiar pattern. In October 2025, 72% of Kubernetes workloads were using less than 50% of their requested CPU, and 62% (vs. 65% in 2023) were using less than 50% of their requested memory.

In other words, most workloads still operate well below their provisioned capacity.

Looking even closer, the distribution becomes more pronounced. In October 2025, 57% of workloads were using less than 25% of requested CPU, and 37% were using less than 25% of requested memory.

This is not marginal inefficiency at the edges. It reflects a large share of workloads running far below their requested baseline.

When I saw those updated numbers in the 2025 report, I was a little surprised. Not because I expected perfection, since cloud systems are inherently uneven, but because I expected at least some measurable drift toward tighter provisioning as platform sophistication increased.

Instead, the overall distribution remained remarkably persistent.

To be clear, this does not imply that teams are careless or that modernization efforts failed. It suggests something more structural. Utilization behaves less like a short-term tuning issue and more like a stable characteristic of how systems are configured and operated over time.

A longitudinal comparison between the 2023 and 2025 data shows that individual workloads churn, clusters scale and instance types diversify, yet the aggregate distribution remains comparatively steady. That persistence stood out more than any single annual trend.

Importantly, the longitudinal data does not explain why that persistence exists. It only shows that modernization at the platform layer does not automatically reshape the utilization distribution.

At scale, persistent underutilization also has cost implications. Even if individual workloads appear inexpensive, conservative provisioning raises the baseline against which budgets are set.

Over time, that baseline becomes normalized, shaping cloud forecasts, contract negotiations and infrastructure investment priorities.

Averages hide persistence

Infrastructure data is rarely evenly distributed; it is long-tailed.

A relatively small number of workloads drive sustained utilization. A much larger number are bursty, intermittently active or lightly used. When averaged together, the system appears stable even when individual components are dynamic.

Averaging utilization metrics can therefore be misleading. An average implies symmetry. In practice, resource usage is asymmetric. Extreme values often drive cost and capacity exposure, while the median workload remains comparatively quiet. When those extremes are averaged away, the signals that matter most are softened.
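A small sketch makes the point concrete. The numbers below are invented for illustration, but they mirror the long-tailed shape described above: a few hot workloads pull the mean up while most of the fleet sits far below 50% of its requested CPU.

```python
import statistics

# Hypothetical utilization sample: each value is a workload's CPU usage
# as a fraction of its request. Two hot workloads, eight quiet ones.
samples = [0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.25, 0.30, 0.90, 0.95]

mean = statistics.mean(samples)        # pulled upward by the two hot workloads
median = statistics.median(samples)    # reflects the quiet majority
below_half = sum(s < 0.5 for s in samples) / len(samples)

print(f"mean={mean:.2f} median={median:.3f} share<50%={below_half:.0%}")
# mean=0.31 median=0.175 share<50%=80%
```

The mean (0.31) looks almost twice as healthy as the median (0.175), and neither number alone reveals that 80% of the sample runs below the 50% threshold—which is why distribution cuts, not averages, carry the signal.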

Partial instrumentation adds another layer. Not every workload produces the same performance and utilization data at the same level of detail. As organizations mix legacy systems with newer managed services, visibility gaps are common. Those gaps can skew aggregate metrics and create a false sense of stability or efficiency.

CIOs encounter similar issues when interpreting other aggregate metrics such as average latency, mean time to recovery or blended cloud spend. As CIO.com has noted in discussions of meaningful metrics, aggregation can obscure operational reality.

In infrastructure, that obscurity can persist for years.

What “utilization” measures

Before interpreting the trend, it is important to clarify what these utilization metrics measure.

In Kubernetes environments, utilization is typically measured relative to requested resources rather than raw machine capacity. Requests influence scheduling and reserve capacity on a node, shaping the baseline against which utilization is measured. But they also encode human judgment. Sometimes that judgment is based on load testing. Sometimes it reflects historical spikes. Sometimes it is simply conservative.

Two teams can run similar services and choose very different request baselines. The utilization metric will faithfully reflect that configuration choice.
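A minimal sketch, with invented workload names and numbers, shows why the denominator matters: the same usage looks efficient or wasteful purely depending on the request it is measured against.

```python
# Hypothetical workloads: (name, cpu_used_cores, cpu_requested_cores).
# Utilization here is usage relative to the *request*, not node capacity,
# so a conservative request alone drives the ratio down.
workloads = [
    ("api", 0.4, 2.0),     # same 0.4 cores of real usage...
    ("worker", 0.4, 0.5),  # ...but very different utilization figures
    ("batch", 0.2, 4.0),
]

def utilization(used: float, requested: float) -> float:
    """Usage as a fraction of the requested baseline."""
    return used / requested

low = [name for name, used, req in workloads
       if utilization(used, req) < 0.5]
print(low)  # ['api', 'batch'] fall below the 50% threshold
```

Here "api" and "worker" consume identical CPU, yet only "api" lands in the under-50% bucket—the metric faithfully reflects the request choice, not just the work done.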

That is one reason I am cautious about treating utilization as a moral signal. It is a technical metric, but it is also a reflection of configuration decisions embedded over time.

Looking at it over time shows what changes and what stubbornly does not, even as platforms evolve.

Autoscaling isn’t the same as precision

One obvious question is whether autoscaling adoption should materially change these patterns.

Horizontal Pod Autoscaling (HPA) is common across Kubernetes environments and widely supported across platforms. This reflects broader ecosystem trends described in the CNCF Annual Survey.

But elasticity is not the same as precision.

Many autoscaling configurations still center on CPU and memory signals. More context-aware scaling, based on queue depth or application-level indicators, remains less prevalent. Vertical scaling is comparatively rare and often used in advisory modes rather than actively reshaping requests.

Workloads can scale up and down without necessarily altering their baseline request posture or the broader utilization distribution we observe.
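For reference, the core Horizontal Pod Autoscaler rule documented by Kubernetes makes this concrete: it scales the replica count proportionally to an observed-versus-target metric ratio, and never touches the per-pod request itself. A sketch of that rule:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core HPA scaling rule from the Kubernetes documentation:
    desired = ceil(current * observed_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# A deployment at 4 replicas averaging 80% of its CPU *request*,
# with a 50% target, scales out to 7 replicas...
print(desired_replicas(4, 0.80, 0.50))  # 7
# ...but each pod's request baseline is left exactly as configured.
```

Elasticity, in other words, multiplies the existing request; only vertical scaling (or a human) revises the request itself, which is why autoscaling adoption can rise without the utilization distribution moving.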

Enabling elasticity is straightforward. Sustaining precision over time is much harder.

Technical debt doesn’t disappear with new platforms

Another pattern that surfaced in the Container Reports is version lag. In both the 2022 and 2023 editions, a significant share of Kubernetes clusters were running versions approaching end-of-life even as newer releases were widely available.

Production systems rarely upgrade at the same pace as new platform capabilities are released. End-of-life versions persist. Premium support tiers extend. Older runtimes remain embedded even when more efficient versions are available.

Upgrades compete with feature delivery. Stability is prioritized. Risk is managed conservatively.

Version adoption does not directly determine utilization levels. But it reflects a broader dynamic: Configuration and upgrade decisions change more slowly than platform capabilities. When analyzed at scale, that inertia becomes visible. New tools layer onto existing systems, but earlier configuration assumptions often remain intact.

In practice, modern platforms often inherit older provisioning choices.

Capability is not outcome

Seeing the same utilization pattern persist across report cycles shifted my thinking. It was not about Kubernetes, serverless or autoscaling in isolation. It was about separating capability from outcome.

Cloud platforms today offer far more granularity than they did a few years ago. We can allocate resources in smaller increments. We can autoscale pods and nodes. We can mix execution models. We can diversify architectures.

None of that automatically changes the empirical shape of infrastructure usage.

Modernization creates new possibilities, but it does not automatically change how resources are used.

Across report cycles, it became clear that architecture was evolving faster than the underlying usage patterns.

That distinction has significant implications for how infrastructure performance—and investment decisions—are interpreted.

When platform evolution isn’t enough

If multiple years of visible platform evolution do not materially shift the utilization baseline, the constraint likely extends beyond feature availability.

What makes this pattern interesting is not that utilization is low in any single snapshot. It is that it remains low even as surrounding variables change. Platform capabilities evolve. Adoption curves shift. Workload composition becomes more heterogeneous. Yet the aggregate distribution remains comparatively stable.

That stability suggests something important: Modernization changes what is possible, but it does not automatically change how systems are configured or revisited over time.

For CIOs and senior technology leaders, the implication is not to pursue the next abstraction layer. It is to examine the decision frameworks that shape provisioning, headroom and risk tolerance year after year.

Cloud platforms will continue to evolve quickly. Whether utilization patterns change will depend less on new capabilities and more on how deliberately organizations revisit the assumptions embedded in their configurations.

This article is published as part of the Foundry Expert Contributor Network.
May 5, 2026