Composable infrastructure and build-to-fit IT: From standard stacks to policy-defined intent

For years, many of us built infrastructure the same way we built data centers in the 2000s: Pick a “standard stack,” stamp it out and treat exceptions like a paperwork problem. It worked, until it didn’t.

Retail made the breaking point obvious. Demand patterns stopped being “seasonal” and became “event-driven.” A product drop goes viral. A weather system reroutes delivery windows. A supply chain delay changes the entire inventory story overnight. Meanwhile, customer expectations keep climbing: real-time visibility, accurate pickup promises, personalized offers, fraud-resistant payments and consistent performance from the mobile app to the store lane to the fulfillment center.

In that world, fixed stacks turn into friction. They are either too heavy for small workloads or too rigid for fast-changing ones. Teams start to fork the standard build “just this once,” and suddenly the exception becomes the default. That is how sprawl begins.

Composable infrastructure is the most practical way I have found to break that cycle, but only if we stop defining “composable” as modular hardware. The differentiator is not the pool of compute, storage or fabric. The differentiator is the control plane: the policy, automation and governance that make composition safe, repeatable and reversible.

Gartner’s 2026 Infrastructure and Operations trends point to hybrid computing and “a composable and extensible compute fabric” as a way to orchestrate across diverse mechanisms while future-proofing investments. That framing matches what I see in practice: composability is about the operating model more than the equipment.

Why “reference architecture” alone no longer holds

Reference architectures are valuable. They create shared language, predictable security patterns and operational consistency. The problem is that they often assume stable boundaries: one environment, one platform, one dominant workload shape.

Retail environments do not behave that way anymore. We run mixed workloads across stores, fulfillment nodes, edge appliances, private cloud and multiple public clouds. We ship constantly. We experiment constantly. We also carry compliance obligations that cannot be negotiated at sprint speed.

What happens next is painfully familiar:

  • Teams build shadow patterns to move faster.
  • Security tries to bolt guardrails on after the fact.
  • Operations inherits a zoo of one-off configurations.
  • Finance sees spend drift, but can’t trace it back to intent.

This is why composable infrastructure must be paired with policy-defined infrastructure. Without policy, composability becomes a sprawl engine.

Composable infrastructure, defined like we actually run it

I like the “composable disaggregated infrastructure” description that treats compute, storage and networking resources as services that can be assembled as required, then returned to the pool when the work is complete. That is the operational heart of the idea: assemble, run, disassemble and recycle.

But “assemble” cannot mean “everyone builds whatever they want.”

In a modern enterprise, composition needs four things:

  1. A catalog of building blocks (compute, storage, network, security, data services).
  2. A declaration of intent (what the workload needs, not how to wire it manually).
  3. A policy engine that evaluates intent against guardrails.
  4. Automation that provisions, enforces, observes and retires resources consistently.

This is where platform engineering becomes the bridge. CNCF’s platform engineering work emphasizes internal platforms as a way to deliver reusable capabilities and reduce cognitive load. Composable infrastructure is one of the clearest places to apply that thinking.

The control plane is the product

The moment you move from “stacks” to “building blocks,” the control plane becomes the product you operate.

At a minimum, I expect the control plane to do the following:

  • Translate intent into infrastructure using declarative definitions (infrastructure as code) and reusable compositions.
  • Enforce policy as code consistently across pipelines and runtime.
  • Prevent drift and continuously reconcile desired state.
  • Measure outcomes: availability, latency, change failure rate, security posture and cost.

Open Policy Agent (OPA) is a common example of a policy engine that lets teams specify policy as code and enforce it across Kubernetes, CI/CD, API gateways and microservices. In practice, that means I can write rules like “no public load balancers without approved tags,” “all data stores containing customer identifiers must use encryption and approved key management,” or “no privileged containers,” and have those rules evaluated automatically.
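Real OPA policies are written in Rego, but the shape of those three example rules is easy to see in any language. Here is a hedged Python sketch of the same deny-rule pattern; the resource fields are assumptions for illustration, not an actual admission-controller schema.

```python
# Policy-as-code sketch in Python (OPA itself uses Rego).
# Each rule inspects a resource description and returns a denial message or None.

def deny_untagged_public_lb(resource):
    if resource.get("type") == "load_balancer" and resource.get("public"):
        if "approved" not in resource.get("tags", []):
            return "no public load balancers without approved tags"

def deny_unencrypted_customer_store(resource):
    if resource.get("type") == "datastore" and resource.get("customer_ids"):
        if not resource.get("encrypted"):
            return "customer data stores must use encryption"

def deny_privileged_container(resource):
    if resource.get("type") == "container" and resource.get("privileged"):
        return "no privileged containers"

RULES = [deny_untagged_public_lb,
         deny_unencrypted_customer_store,
         deny_privileged_container]

def evaluate_rules(resource):
    """Run every rule; an empty result means the resource is admitted."""
    return [msg for rule in RULES if (msg := rule(resource)) is not None]
```

Because each rule is a small pure function, the rule set can be versioned, peer-reviewed and unit-tested like any other code, which is the whole point of policy as code.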

For GitOps-style reconciliation, the CNCF ecosystem has made the “desired state in Git” model mainstream with tools like Flux and Argo CD. Flux, for example, is explicitly positioned as declarative delivery where Git is the source of truth and the system continuously syncs the live environment to match. That reconciliation loop is what keeps composability from turning into configuration drift.

For cross-cloud composition, projects like Crossplane take it further by treating Kubernetes as a control plane framework for platform engineering, letting you design APIs and abstractions for your users. The point is not the specific tool choice. The point is the pattern: abstract complexity, enforce policy and keep the system converging back to a governed state.

A retail use case: “intent-built” infrastructure for peak-week resilience

Here is a pattern I have used in retail because it forces composability to prove its value in the real world.

Scenario: It is the week of a major promotional event. Digital traffic spikes. Store pickup volumes surge. Fraud attempts rise in parallel. Business wants rapid experimentation on offers and checkout flows, but reliability cannot regress.

If I run this on fixed stacks, I end up overprovisioning everything “just in case” or negotiating every exception manually.

With composable, policy-defined infrastructure, I can express this as intent and let the control plane assemble the right building blocks:

Intent: “Create a peak-week commerce lane that is globally distributed, supports real-time inventory reservations, isolates payment services, emits events for fraud scoring and scales predictably within budget.”

Building blocks assembled by policy

  • Compute: Autoscaled microservices tier for cart, checkout and pickup promise.
  • Network: Segmented service connectivity with explicit ingress and egress controls, plus per-service identities.
  • Security: Enforced workload identity, secrets management, mandatory encryption and least privilege access patterns aligned to zero trust principles. NIST’s Zero Trust Architecture highlights continuous authentication and authorization per request and the idea of narrowing defenses to resources rather than perimeter assumptions.
  • Data services: A short-lived event streaming pipeline for clickstream and order events, a low-latency cache for pickup promises and a governed analytics sink for post-event learning.
  • Observability: SLO-based dashboards for checkout latency, pickup promise accuracy and payment authorization success rate, wired automatically as part of the composition.
  • FinOps guardrails: Budget ceilings, tagging and cost allocation enforced at provisioning time and monitored continuously, using a shared accountability model consistent with FinOps practices.
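To make the "intent in, building blocks out" step concrete, the peak-week intent above might be declared as data and mapped onto catalog blocks. Every field name, capability string and threshold here is hypothetical, chosen only to mirror the scenario.

```python
# Hedged sketch: the peak-week intent as a declarative document.
peak_week_intent = {
    "name": "peak-week-commerce-lane",
    "distribution": "global",
    "capabilities": ["real-time-inventory-reservations", "fraud-event-stream"],
    "isolation": ["payment-services"],
    "scaling": {"mode": "predictive", "budget_ceiling_usd": 50_000},  # illustrative ceiling
    "ttl_days": 14,
}

def required_blocks(intent):
    """Map declared intent onto catalog building blocks (simplified)."""
    blocks = ["compute:autoscaled-microservices", "network:segmented"]
    if "fraud-event-stream" in intent["capabilities"]:
        blocks.append("data:event-streaming")
    if intent["isolation"]:
        blocks.append("security:workload-identity")
    if intent["scaling"].get("budget_ceiling_usd"):
        blocks.append("finops:budget-guardrail")
    return blocks
```

The mapping function is where the control plane earns its keep: the team states outcomes, and policy decides which blocks (and only which approved blocks) satisfy them.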

The “sprawl prevention” mechanisms that matter

  • Every composed environment has a time-to-live by default. If it is not renewed by policy, it is retired automatically.
  • Policies require standard tags (application, owner, cost center, data classification). If tags are missing, provisioning fails early.
  • Network exposure is deny-by-default. Public endpoints require explicit approval paths and documented intent.
  • Data services are tiered by classification, with policy deciding which storage classes and encryption profiles are allowed.
  • Drift is corrected by reconciliation. Manual changes are reverted unless policy allows them.
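The first two mechanisms above (default TTL and fail-early tag enforcement) are simple enough to sketch directly. The tag names and request shape are assumptions for illustration.

```python
from datetime import date, timedelta

# Illustrative required-tag set; real platforms define their own.
REQUIRED_TAGS = {"application", "owner", "cost_center", "data_classification"}

def admit(request, today=None):
    """Fail provisioning early if required tags or a TTL are missing."""
    missing = REQUIRED_TAGS - set(request.get("tags", {}))
    if missing:
        raise ValueError(f"missing required tags: {sorted(missing)}")
    if "ttl_days" not in request:
        raise ValueError("every composed environment needs a time-to-live")
    today = today or date.today()
    return {"expires": today + timedelta(days=request["ttl_days"])}

def sweep(environments, today):
    """Return environments whose TTL has lapsed without renewal, for retirement."""
    return [env for env in environments if env["expires"] <= today]
```

Failing at admission time, rather than discovering untagged or orphaned resources months later, is what turns these from audit findings into non-events.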

The outcome is not just faster provisioning. It is safer provisioning. Teams can move quickly without quietly creating long-term operational debt.

The governance model that keeps composability from becoming chaos

I have learned to treat governance as a product feature, not a compliance tax. If governance slows teams down, they route around it. If governance is embedded into the platform, it becomes the fastest path.

This is the model I aim for:

  1. Policy-defined guardrails, not human gates. Rules are versioned, tested, peer-reviewed and rolled out like any other code.
  2. Golden paths that are flexible. Developers should be able to request “an event-driven service with private ingress, managed database and audit logging” without learning every underlying primitive.
  3. Reversibility by design. Every composed stack must be easy to unwind, and rollback must be part of the orchestration.
  4. Continuous compliance, not quarterly scramble. Compliance is evaluated at build time and runtime, with evidence generated automatically.
  5. Outcome-based telemetry. If I cannot tie composition back to reliability, security posture and unit cost, I am just moving complexity around.
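"Versioned, tested, peer-reviewed and rolled out like any other code" implies guardrails ship with their own tests. A minimal sketch of what that looks like, with a hypothetical rule and pytest-style test functions that would run in CI before the rule reaches production:

```python
# Guardrail under test: public endpoints need an explicit, documented approval.
def deny_public_endpoint(resource):
    if resource.get("public") and not resource.get("approval_ticket"):
        return "public endpoint requires a documented approval"
    return None

# Tests live beside the rule and gate its rollout, like any other code change.
def test_blocks_unapproved_public_endpoint():
    assert deny_public_endpoint({"public": True}) is not None

def test_allows_approved_public_endpoint():
    assert deny_public_endpoint({"public": True, "approval_ticket": "SEC-123"}) is None

def test_ignores_private_endpoint():
    assert deny_public_endpoint({"public": False}) is None
```

A policy change that breaks a workload then fails in a pipeline, not in production, which is exactly the difference between a guardrail and a gate.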

What leaders should ask before calling it “composable”

When I talk to peers about adopting composable infrastructure, I ask a few questions that cut through vendor messaging:

  • Can we express infrastructure by intent and have the platform translate that intent into consistent builds?
  • Do we have a policy engine that enforces guardrails across provisioning and runtime, not just documentation?
  • How do we prevent orphaned resources and environment sprawl, automatically?
  • How do we measure business outcomes (conversion performance, pickup accuracy, fraud loss avoidance) and not just cluster health?
  • Can we run this across hybrid environments without multiplying operating models?

If the answer is “we will standardize later,” composability will likely amplify your current inconsistencies.

The real shift: from building infrastructure to operating a control system

Composable infrastructure is a story about maturity. It is the shift from handcrafted stacks to configurable building blocks, assembled by intent and governed by policy.

When it is done well, it changes the daily experience of IT:

  • Teams stop fighting over one-size-fits-all reference architectures.
  • Security stops chasing exceptions and starts shipping enforceable policies.
  • Operations stops inheriting snowflakes and starts running a reconciling system.
  • Finance gets visibility into spend tied directly to intent, not guesswork.

That is what “build-to-fit IT” means to me: the enterprise gets flexibility without losing control, because the controls are part of the platform, not an afterthought.

This article is published as part of the Foundry Expert Contributor Network.
March 3, 2026
