In this Q&A, we speak with Datacom AI and infrastructure experts – Matt Neil (Director – Data Centres), Mike Walls (Director – Cloud), and Daniel Bowbyes (Associate Director – Strategy) – to explore how enterprises can design a right-sized AI infrastructure. They share practical insights on balancing use cases, governance, data sovereignty, latency, and cost across on-prem, colocation, private cloud, and public cloud options.
Why has “where artificial intelligence (AI) runs” become such a critical strategic question for enterprise infrastructure leaders?
Daniel Bowbyes, Associate Director – Strategy: AI isn’t a single system that you deploy once and tick off as complete on your backlog. Almost all vendors are rushing to infuse AI into their software to enhance value and increase stickiness. The future will see AI distributed across the enterprise landscape, with important strategic decisions to be made about where and when AI capability is turned on (and licensed), balancing the value it delivers to the enterprise against cost, security and risk.
Matt Neil, Director – Data Centres: Where AI runs has become a critical strategic question because it directly shapes cost, risk, performance, data sovereignty and long-term competitive advantage, not just technology architecture. AI workloads place different demands on infrastructure than traditional IT. High-density GPU environments can drive rack power requirements into the 50–100 kW range, pushing power, cooling and facility design well beyond what most standard on-prem environments can safely or economically support. Without purpose-built data centre capability, including advanced cooling such as liquid cooling, organisations increase their exposure to availability, resilience and operational risk.
Because power and thermal constraints now dictate physical footprint, the choice of where AI runs – on-prem, colocation, private cloud or public cloud – becomes a strategic decision. Each option carries materially different implications for unit cost, scalability, resilience, time to value and control over critical workloads.
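The rack-density arithmetic behind these power figures can be sketched with a few lines of code. All numbers below are illustrative assumptions for a dense GPU rack, not vendor specifications:

```python
# Illustrative sketch: estimating power density for a GPU server rack.
# GPU wattage, server overhead and rack layout are assumed figures.

def rack_power_kw(servers_per_rack, gpus_per_server, gpu_watts, overhead_watts_per_server):
    """Total rack draw in kW: GPU power plus per-server overhead (CPU, fans, NICs)."""
    per_server = gpus_per_server * gpu_watts + overhead_watts_per_server
    return servers_per_rack * per_server / 1000

# Example: 8 servers per rack, each with 8 GPUs drawing ~700 W,
# plus ~2 kW of non-GPU overhead per server.
density = rack_power_kw(servers_per_rack=8, gpus_per_server=8,
                        gpu_watts=700, overhead_watts_per_server=2000)
print(f"{density:.0f} kW per rack")  # prints "61 kW per rack"
```

Even with these modest assumptions the result lands in the 50–100 kW range quoted above, several times what a typical air-cooled enterprise rack is designed for, which is why liquid cooling and purpose-built facilities enter the conversation.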
Beyond the physical layer, organisations must be clear on the type of AI workload they are deploying, including training, inference or generative AI, as each drives very different requirements for architecture, connectivity, storage performance, latency and data movement. These choices directly affect operating cost, user experience and the ability to scale sustainably.
Critically, where AI runs also determines data sovereignty, regulatory exposure and control of intellectual property. As AI becomes embedded in core business processes and customer-facing services, enterprises need confidence that sensitive data, models and outcomes remain governed, auditable and aligned to jurisdictional requirements.
Bowbyes: There are a number of factors that infrastructure leaders need to consider and weigh up when making strategic decisions regarding where AI runs. These include:
- Organisational data and core systems location
- Data sovereignty and residency
- Intellectual-property protection and regulatory compliance
- Physical hosting, power and cooling capacity
- Latency and performance requirements
- Vendor/platform lock-in risks
All of these must be weighed up against the cost and inference economics (the cost and value trade-off of running model inference) associated with AI, the desired return on investment and the customer experience to be delivered.
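The inference-economics trade-off mentioned above can be made concrete with a simple break-even comparison. The prices and volumes here are hypothetical placeholders, not real vendor rates:

```python
# Hedged sketch of inference economics: per-request cost of a dedicated
# GPU environment vs a pay-per-token API. All figures are assumptions.

def dedicated_cost_per_request(monthly_gpu_cost, requests_per_month):
    """Fixed-capacity cost amortised over request volume."""
    return monthly_gpu_cost / requests_per_month

def api_cost_per_request(tokens_per_request, price_per_1k_tokens):
    """Pure usage-based cost, scaling linearly with tokens."""
    return tokens_per_request / 1000 * price_per_1k_tokens

# Assumed: $20,000/month of dedicated capacity, ~1,000 tokens per request,
# and an API priced at $0.01 per 1,000 tokens.
volume = 2_500_000
dedicated = dedicated_cost_per_request(20_000, volume)
api = api_cost_per_request(1_000, 0.01)
print(dedicated < api)  # prints "True": at this volume, dedicated capacity wins
```

The point of the sketch is the shape of the curve, not the numbers: usage-based pricing is cheaper at low volume, while dedicated capacity wins once request volume pushes the amortised cost below the per-token rate.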
Mike Walls, Director – Cloud: Leaders also weigh whether private cloud, public cloud or an on-prem data centre best supports governance, data locality, security, and the required performance and scalability. You really need to align infrastructure with your organisation’s defined use cases and maturity level, and Datacom can help you make these decisions with an eye to not only the infrastructure, but governance and applications as well.
Neil: Infrastructure leaders are not choosing a single ‘best’ location for AI. They are balancing cost, risk, performance, sovereignty and competitive advantage across a portfolio of deployment models, aligning each AI workload to the environment that best supports its purpose, scale, and business impact. This is why AI workload placement has moved from an architectural decision to a core strategic concern.
What do we mean by “right-sized AI infrastructure”, and how does this differ from traditional cloud-first or centralised data centre models?
Walls: Right-sized AI infrastructure is about tailoring the footprint and capabilities to the actual AI use case and maturity of an organisation, rather than applying a one-size-fits-all model. Different AI models perform better at different use cases, so a practical approach is to use a funnel that starts with the desired AI activities (the business use case, then activities such as inferencing, training or code generation) and maps them to the appropriate environment, service model and tooling. This contrasts with a singular cloud-first or centralised data centre mindset by prioritising the specific use case, data governance, latency and orchestration requirements before choosing a deployment location. Right-sized AI infrastructure broadens what has previously been described as hybrid cloud architecture: different AI use cases and models will require a hybrid architecture of edge, data centre, private and public cloud infrastructure and platforms.
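The funnel described above can be sketched as a small decision function. The mapping rules here are hypothetical examples of the kind of logic such a funnel encodes, not a published Datacom framework:

```python
# Minimal sketch of a use-case funnel: map an AI activity plus its
# governance and latency constraints to a candidate deployment environment.
# The rules are illustrative assumptions only.

def place_workload(activity, sovereign_data=False, latency_sensitive=False):
    """Return a candidate deployment environment for an AI workload."""
    if activity == "training":
        # Large-scale training favours purpose-built GPU capacity.
        return "colocation or GPU-as-a-service"
    if sovereign_data:
        # Jurisdictional control outweighs elasticity for regulated data.
        return "onshore private cloud or on-prem"
    if latency_sensitive:
        return "edge or in-country cloud region"
    # No special constraints: default to public cloud elasticity.
    return "public cloud"

print(place_workload("inference", sovereign_data=True))
# prints "onshore private cloud or on-prem"
```

In practice each branch would weigh cost, maturity and orchestration needs as well, but the structure — activity first, constraints second, environment last — is the essence of the funnel.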
Why are one-size-fits-all AI deployment models starting to break down at enterprise scale?
Neil: Because AI workloads and organisational needs are not uniform. AI means very different things across organisations – from large-scale model training and high-throughput inferencing to agent-based automation or coding assistance. Each of these use cases places distinct demands on infrastructure, governance, data handling, security and regulatory compliance.
A single deployment model can’t simultaneously optimise for:
- Different workload characteristics and scale profiles
- Varying data residency, sovereignty and security requirements
- Latency-sensitive and edge-based use cases
- Different consumption and operating models (such as GPU as a service, AI infrastructure as a service or platform and tooling enablement)
As AI adoption scales, forcing all workloads into a single cloud-first or centralised model leads to unnecessary cost, performance constraints, elevated risk or regulatory friction.
Datacom recommends an AI readiness assessment, which uses a structured framework to anchor infrastructure decisions in clearly defined use cases, maturity and risk profiles. This helps ensure AI environments are designed to fit real business needs, rather than generic AI aspirations or one-size-fits-all deployment assumptions.
How does data gravity influence AI architecture decisions, particularly in regulated or data-intensive industries?
Walls: Data gravity matters because where data resides often dictates where processing can or should occur. Data regulations may require certain data types to stay within specific jurisdictions, which can constrain cross‑border processing and influence whether AI workloads run in a particular country or region. Security concerns and anticipated regulatory increases also shape decisions about data location, data handling and the systems that can access and process the data. In regulated or data‑intensive industries, these data‑location requirements can be as decisive as performance or cost considerations.
Bowbyes: For AI agents to have organisational context and support accurate, real-time decision-making, they need access to organisational data and core systems. Moving large amounts of data between platforms and geographic locations is time-consuming, costly and complicated, and often introduces a new set of regulatory and security challenges and risks. It’s often less complex to build out AI infrastructure adjacent to the existing enterprise systems that originate the data the AI agent is using.
What role does sovereignty and local control play in AI infrastructure decisions for organisations in Australia and New Zealand?
Neil: Sovereignty and local control play an increasingly important – but nuanced – role in AI infrastructure decisions, and the emphasis varies by market, sector and workload.
In some regions and industries, there is a strong preference for AI environments that are onshore and locally controlled, with infrastructure, data and operational accountability residing within national borders. This is particularly relevant for regulated industries and public sector use cases, where data residency, jurisdictional control and auditability are critical.
In practice, however, sovereignty is not a binary concept. Many vendors make broad ‘sovereign’ claims based on in-market presence, even when ownership, operational control or parts of the technology stack remain offshore. As a result, organisations are increasingly looking beyond marketing labels and asking more precise questions about where data is processed, who operates the environment and which legal frameworks apply.
For Australia and New Zealand, this has led to a more pragmatic approach. Datacom’s New Zealand ownership allows customers to place AI workloads fully onshore in New Zealand, where sovereignty and local control are required, while also supporting Australian-based deployments where proximity, scale or regulatory alignment makes that more appropriate. This flexibility enables organisations to balance sovereignty requirements with cost, performance and operational needs, rather than forcing all AI workloads into a single jurisdiction or deployment model.
Learn how Datacom’s local expertise and integrated approach are helping organisations design, scale, and govern AI environments to meet their business and compliance needs.
Glossary
- Right-sized AI infrastructure: tailoring the footprint and tooling to actual AI use cases and organisational maturity, not a one-size-fits-all approach.
- Data gravity: the principle that data location and control affect where processing should occur and what architectural choices are feasible.
- AI workloads: categories such as training, inference and generative AI, each with different latency, storage and governance needs.
- Governance: policies and controls around security, data handling and regulatory compliance.
- Data residency versus data sovereignty: residency refers to where data physically resides, while sovereignty relates to legal authority and regulatory obligations over that data.

