Q & A: Strategy and core architecture of AI infrastructure

In this Q&A, we speak with Datacom AI and infrastructure experts – Matt Neil (Director – Data Centres), Mike Walls (Director – Cloud), and Daniel Bowbyes (Associate Director – Strategy) – to explore how enterprises can design a right-sized AI infrastructure. They share practical insights on balancing use cases, governance, data sovereignty, latency, and cost across on-prem, colocation, private cloud, and public cloud options.

Why has “where artificial intelligence (AI) runs” become such a critical strategic question for enterprise infrastructure leaders?

Daniel Bowbyes, Associate Director – Strategy: AI isn’t a single system that you deploy once and tick the completed box on your backlog. Almost all vendors are rushing to infuse AI into their software to enhance value and increase stickiness. The future will see AI distributed across the enterprise landscape with important strategic decisions needing to be made as to where and when AI capability is turned on (and licensed) balancing the value it delivers to the enterprise versus cost, security and risk.  

Matt Neil, Director – Data Centres: Where AI runs has become a critical strategic question because it directly shapes cost, risk, performance, data sovereignty and long-term competitive advantage, not just technology architecture. AI workloads place different demands on infrastructure than traditional IT. High-density GPU environments can drive rack power requirements into the 50–100 kW range, pushing power, cooling and facility design well beyond what most standard on-prem environments can safely or economically support. Without purpose-built data centre capability, including advanced cooling such as liquid cooling, organisations increase their exposure to availability, resilience and operational risk.
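The rack-density point above can be made concrete with a back-of-envelope estimate. The sketch below is illustrative only: the GPU count, TDP and overhead factor are assumptions chosen for the example, not vendor specifications.

```python
# Illustrative sketch: estimating total rack power for a dense GPU rack.
# All figures are assumptions for illustration, not vendor specs.

def rack_power_kw(gpus_per_server: int, servers_per_rack: int,
                  gpu_tdp_w: float, overhead_factor: float = 1.4) -> float:
    """Estimate total rack draw in kW.

    overhead_factor accounts for CPUs, memory, NICs, fans and PSU losses
    on top of raw GPU TDP (assumed ~1.4x here).
    """
    gpu_draw_w = gpus_per_server * servers_per_rack * gpu_tdp_w
    return gpu_draw_w * overhead_factor / 1000.0

# Hypothetical dense rack: 4 servers x 8 GPUs at ~700 W TDP each.
kw = rack_power_kw(gpus_per_server=8, servers_per_rack=4, gpu_tdp_w=700)
print(f"Estimated rack draw: {kw:.0f} kW")
```

Even this moderately dense configuration lands far above the 5–15 kW per rack that typical air-cooled enterprise rooms are designed for, which is why purpose-built facilities and liquid cooling enter the conversation quickly.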

Because power and thermal constraints now dictate physical footprint, the choice of where AI runs – on-prem, colocation, private cloud or public cloud – becomes a strategic decision. Each option carries materially different implications for unit cost, scalability, resilience, time to value and control over critical workloads.

Beyond the physical layer, organisations must be clear on the type of AI workload they are deploying, including training, inference or generative AI, as each drives very different requirements for architecture, connectivity, storage performance, latency and data movement. These choices directly affect operating cost, user experience and the ability to scale sustainably.

Critically, where AI runs also determines data sovereignty, regulatory exposure and control of intellectual property. As AI becomes embedded in core business processes and customer-facing services, enterprises need confidence that sensitive data, models and outcomes remain governed, auditable and aligned to jurisdictional requirements.

Bowbyes: There are a number of factors that infrastructure leaders need to consider and weigh up when making strategic decisions regarding where AI runs. These include: 

  • Organisational data and core systems location
  • Data sovereignty and residency
  • Intellectual-property protection and regulatory compliance
  • Physical hosting, power and cooling capacity
  • Latency and performance requirements 
  • Vendor/platform lock-in risks

All of these must be weighed up against the cost and inference economics (the cost and value trade-off of running model inference) associated with AI, the desired return on investment and the customer experience to be delivered.
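One way to make this weighing-up explicit is a simple scoring model over the factors listed above. The weights, factor scores and option names below are hypothetical inputs a team would set for its own workload; this is a sketch of the trade-off exercise, not a Datacom methodology.

```python
# Illustrative sketch: weighing placement factors into a per-option score.
# Weights and scores (0-10) are hypothetical example inputs.

FACTORS = ["data_adjacency", "sovereignty", "ip_and_compliance",
           "power_and_cooling", "latency", "lock_in_risk", "cost"]

def placement_score(weights: dict, scores: dict) -> float:
    """Weighted sum of 0-10 factor scores; higher means a better fit."""
    return sum(weights[f] * scores[f] for f in FACTORS)

# Hypothetical example: a latency-sensitive, sovereignty-bound workload.
weights = {"data_adjacency": 0.2, "sovereignty": 0.25, "ip_and_compliance": 0.15,
           "power_and_cooling": 0.1, "latency": 0.15, "lock_in_risk": 0.05,
           "cost": 0.1}

options = {
    "public_cloud": {"data_adjacency": 5, "sovereignty": 4, "ip_and_compliance": 6,
                     "power_and_cooling": 9, "latency": 6, "lock_in_risk": 4,
                     "cost": 6},
    "colocation":   {"data_adjacency": 8, "sovereignty": 8, "ip_and_compliance": 8,
                     "power_and_cooling": 7, "latency": 8, "lock_in_risk": 7,
                     "cost": 5},
}

best = max(options, key=lambda o: placement_score(weights, options[o]))
print(best)  # for these example inputs, colocation scores highest
```

The value of writing the trade-off down this way is less the number it produces than the conversation it forces about which factors actually dominate for a given workload.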

Mike Walls, Director – Cloud: Leaders also weigh whether the private cloud, public cloud or on‑prem data centre best supports governance, data locality, security and the ability to provide the needed performance and scalability. You really need to align infrastructure with your organisation’s defined use cases and maturity level, and Datacom can help you make these decisions, considering not only the infrastructure but also governance and applications.

Neil: Infrastructure leaders are not choosing a single ‘best’ location for AI. They are balancing cost, risk, performance, sovereignty and competitive advantage across a portfolio of deployment models, aligning each AI workload to the environment that best supports its purpose, scale, and business impact. This is why AI workload placement has moved from an architectural decision to a core strategic concern.

What do we mean by “right-sized AI infrastructure”, and how does this differ from traditional cloud-first or centralised data centre models?

Walls: Right‑sized AI infrastructure means tailoring the footprint and capabilities to the actual AI use case and maturity of an organisation, rather than applying a one‑size‑fits‑all model. Different AI models perform better at different use cases, so a practical approach is a funnel that starts with the desired AI activities (the business use case, then activities such as inferencing, training or code generation) and then maps to the appropriate environment, service model and tooling. This contrasts with a singular cloud‑first or centralised data centre mindset by prioritising the specific use case, data governance needs, latency requirements and orchestration needs before choosing a deployment locus. Right-sized AI infrastructure also broadens the case for what has previously been described as hybrid cloud architecture: different AI use cases and models will require a hybrid mix of edge, data centre, private and public cloud infrastructure and platforms.
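The funnel described here can be sketched as a simple lookup from AI activity to a likely deployment environment. The mapping table below is a hypothetical example to show the shape of the exercise, not a prescriptive framework; a real assessment would also weigh governance, data location and maturity.

```python
# Illustrative sketch of a use-case funnel: AI activity -> dominant
# constraint -> typical environment. The table is hypothetical.

FUNNEL = {
    "training":        ("GPU density and cost", "colocation or GPU-as-a-service"),
    "inference":       ("latency and data adjacency", "private cloud or edge"),
    "generative":      ("platform tooling", "public cloud platform services"),
    "code_generation": ("developer workflow", "SaaS or public cloud"),
}

def place(activity: str) -> str:
    """Describe the placement suggested by the example funnel."""
    constraint, environment = FUNNEL[activity]
    return f"{activity}: constrained by {constraint} -> {environment}"

print(place("inference"))
```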

Why are one-size-fits-all AI deployment models starting to break down at enterprise scale?

Neil: Because AI workloads and organisational needs are not uniform. AI means very different things across organisations – from large-scale model training and high-throughput inferencing to agent-based automation or coding assistance. Each of these use cases places distinct demands on infrastructure, governance, data handling, security and regulatory compliance. 

A single deployment model can’t simultaneously optimise for:

  • Different workload characteristics and scale profiles
  • Varying data residency, sovereignty and security requirements
  • Latency-sensitive and edge-based use cases
  • Different consumption and operating models (such as GPU as a service, AI infrastructure as a service or platform and tooling enablement) 

As AI adoption scales, forcing all workloads into a single cloud-first or centralised model leads to unnecessary cost, performance constraints, elevated risk or regulatory friction.

Datacom recommends an AI readiness assessment, which uses a structured framework to anchor infrastructure decisions in clearly defined use cases, maturity and risk profiles. This helps ensure AI environments are designed to fit real business needs, rather than generic AI aspirations or one-size-fits-all deployment assumptions.

How does data gravity influence AI architecture decisions, particularly in regulated or data-intensive industries?

Walls: Data gravity matters because where data resides often dictates where processing can or should occur. Data regulations may require certain data types to stay within specific jurisdictions, which can constrain cross‑border processing and influence whether AI workloads run in a particular country or region. Security concerns and anticipated regulatory increases also shape decisions about data location, data handling and the systems that can access and process the data. In regulated or data‑intensive industries, these data‑location requirements can be as decisive as performance or cost considerations.

Bowbyes: For AI agents to have organisational context and support accurate, real-time decision-making, they need to be able to access organisational data and core systems. Moving large amounts of data between platforms and geographic locations is time-consuming, costly and complicated, and often introduces a new set of regulatory and security challenges and risks. It’s often less complex to build out AI infrastructure adjacent to the existing enterprise systems that originate the data the AI agent is using.
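The cost and time penalties of moving data can be put in rough numbers. The sketch below is a back-of-envelope illustration: the dataset size, link speed, utilisation and per-GB egress rate are assumptions, not any specific provider's figures.

```python
# Illustrative sketch of data gravity: time and egress cost to move a
# dataset between platforms. All rates below are assumptions.

def transfer_time_hours(dataset_tb: float, link_gbps: float,
                        utilisation: float = 0.7) -> float:
    """Hours to move dataset_tb over a link_gbps link at given utilisation."""
    bits = dataset_tb * 8e12  # TB -> bits (decimal units)
    return bits / (link_gbps * 1e9 * utilisation) / 3600

def egress_cost_usd(dataset_tb: float, usd_per_gb: float = 0.09) -> float:
    """Egress cost assuming a flat, hypothetical per-GB rate."""
    return dataset_tb * 1000 * usd_per_gb

tb = 500  # hypothetical enterprise dataset size
hours = transfer_time_hours(tb, link_gbps=10)
cost = egress_cost_usd(tb)
print(f"~{hours:.0f} hours and ~${cost:,.0f} to relocate {tb} TB")
```

At these example rates, relocating half a petabyte takes the better part of a week of sustained transfer plus a meaningful egress bill, before any re-validation or compliance work, which is why placing AI compute next to the data is often the simpler option.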

What role does sovereignty and local control play in AI infrastructure decisions for organisations in Australia and New Zealand?

Neil: Sovereignty and local control play an increasingly important – but nuanced – role in AI infrastructure decisions, and the emphasis varies by market, sector and workload.

In some regions and industries, there is a strong preference for AI environments that are onshore and locally controlled, with infrastructure, data and operational accountability residing within national borders. This is particularly relevant for regulated industries and public sector use cases, where data residency, jurisdictional control and auditability are critical.

In practice, however, sovereignty is not a binary concept. Many vendors make broad ‘sovereign’ claims based on in-market presence, even when ownership, operational control or parts of the technology stack remain offshore. As a result, organisations are increasingly looking beyond marketing labels and asking more precise questions about where data is processed, who operates the environment and which legal frameworks apply.

For Australia and New Zealand, this has led to a more pragmatic approach. Datacom’s New Zealand ownership allows customers to place AI workloads fully onshore in New Zealand, where sovereignty and local control are required, while also supporting Australian-based deployments where proximity, scale or regulatory alignment makes that more appropriate. This flexibility enables organisations to balance sovereignty requirements with cost, performance and operational needs, rather than forcing all AI workloads into a single jurisdiction or deployment model.

Learn how Datacom’s local expertise and integrated approach are helping organisations design, scale, and govern AI environments to meet their business and compliance needs.

Glossary

  • Right-sized AI infrastructure: tailoring the footprint and tooling to actual AI use cases and organisational maturity, not a one-size-fits-all approach.
  • Data gravity: the principle that data location and control affect where processing should occur and what architectural choices are feasible.
  • AI workloads: categories such as training, inference and generative AI, each with different latency, storage and governance needs.
  • Governance: policies and controls around security, data handling and regulatory compliance.
  • Data residency versus data sovereignty: residency refers to where data physically resides, while sovereignty relates to legal authority and regulatory obligations over that data.


Category: News · March 26, 2026