Datacom’s AI and infrastructure leaders – Matt Neil (Director – Data Centres), Mike Walls (Director – Cloud) and Daniel Bowbyes (Associate Director – Strategy) – discuss where edge and near-edge deployments add value and how to orchestrate AI across distributed environments while maintaining governance and cost discipline.
Latency is often cited as a driver for edge or regional deployments. In practice, what types of artificial intelligence (AI) workloads are most sensitive to latency?
Matt Neil, Director – Data Centres: Workloads requiring real-time processing and immediate feedback are the most latency‑sensitive. These include applications that need sub-10 millisecond responses, such as vehicle telematics, control systems, robotics and real-time decision-making in autonomous or semi-autonomous contexts.
Edge computing reduces the need to transmit data to core data centres, enabling faster responses and lower network dependency. As AI adoption grows, edge will become more important for delivering services close to end users, especially when the data volume or the speed of response makes central processing impractical.
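To make that latency budget concrete, here is a minimal Python sketch using assumed, illustrative figures rather than measured ones. It shows how round-trip network time consumes a sub-10 millisecond response budget and rules out a distant central site for this class of workload:

```python
# Minimal latency-budget sketch with illustrative numbers (assumptions, not
# measurements): compares an edge deployment against a central data centre
# for a control loop that must respond within 10 ms.

RESPONSE_BUDGET_MS = 10.0   # hard deadline for the control loop
INFERENCE_MS = 3.0          # assumed model inference time

def round_trip_ms(one_way_network_ms: float, queueing_ms: float = 1.0) -> float:
    """Total response time: network there and back, plus queueing and inference."""
    return 2 * one_way_network_ms + queueing_ms + INFERENCE_MS

for site, one_way_ms in [("on-site edge node", 0.5),
                         ("near-edge facility", 2.0),
                         ("central data centre", 15.0)]:
    total = round_trip_ms(one_way_ms)
    verdict = "OK" if total <= RESPONSE_BUDGET_MS else "misses deadline"
    print(f"{site}: {total:.1f} ms ({verdict})")
```

With these assumed figures, only the edge and near-edge options stay inside the budget; the central round trip alone exceeds the deadline before inference even begins.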
Can you share examples of AI use cases that are best suited to edge or near-edge environments?
Neil: Edge or near‑edge deployments are best suited to AI workloads that require real‑time processing and local decision‑making, where latency, bandwidth or reliability constraints make sending data back to a central data centre impractical.
These use cases prioritise speed, autonomy and the ability to process large volumes of data close to where it is generated. By running AI at or near the edge, organisations can deliver immediate outcomes, reduce network dependency and enable decision‑making in scenarios where milliseconds matter.
Key examples include:
- Gaming and interactive experiences that demand immediate response
- Remote farming and agricultural tech, where local processing supports autonomous or semi‑autonomous operations
- Vehicle telematics and control systems, including safety‑critical or instant‑driven decisions
- Self‑driving vehicles and other autonomous mobility applications
- Warehouse and manufacturing automation, where on‑site AI can optimise operations without round‑trip network delays
Energy consumption and sustainability are becoming board-level concerns. How does AI change the energy equation for infrastructure teams?
Neil: AI workloads are more energy‑intensive than traditional data workloads, driving a need for more, and sometimes newer, data centre capacity. This intensifies the trade‑off between sustainability and cost: organisations aim for efficiency and lower carbon footprints, but must balance those goals against budget realities and time to value.
Location also matters: naturally cooler climates can improve energy efficiency, which makes places like New Zealand attractive for organisations seeking data centres that offer sustainable operations at scale. When comparing options, organisations will favour locations offering comparable service levels at similar prices but with lower energy use or carbon intensity. The practical takeaway is to pursue the most energy‑efficient, cost‑effective option that still meets performance, resilience and regulatory needs, rather than chasing sustainability targets in isolation.
As AI becomes more distributed, what new challenges does this introduce for orchestration and management?
Mike Walls, Director – Cloud: Distributed AI adds complexity in orchestration, lifecycle management and governance across multiple environments. Challenges include:
- Keeping models and data in sync across edge and central environments (a minimal sketch of this check follows the list)
- Managing access and security consistently, across both human users and digital (AI) agents
- Maintaining transparency and control over where and how data is processed
- Ensuring data residency and compliance across jurisdictions
- Managing fluctuating AI model, platform and token use and costs
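On the first of those challenges, here is a minimal sketch of what a sync check might look like, with illustrative site names, versions and hashes; a central orchestrator compares each environment's deployed model artefact against a release of record:

```python
# Hypothetical sketch of the "models in sync" check: each environment reports
# the version and hash of the model artefact it is serving, and the
# orchestrator flags sites that have drifted from the release of record.
# Site names, versions and hashes below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DeployedModel:
    site: str       # "edge", "near-edge", "cloud", ...
    version: str    # model release version
    sha256: str     # hash of the deployed artefact

RELEASE_OF_RECORD = ("2.4.1", "ab12cd34")  # version and hash from the registry

def out_of_sync(deployments: list[DeployedModel]) -> list[DeployedModel]:
    """Return every site whose artefact differs from the release of record."""
    version, digest = RELEASE_OF_RECORD
    return [d for d in deployments
            if d.version != version or d.sha256 != digest]

fleet = [
    DeployedModel("warehouse-edge-01", "2.4.1", "ab12cd34"),
    DeployedModel("near-edge-akl", "2.4.0", "9f3c7e21"),   # stale release
    DeployedModel("cloud-primary", "2.4.1", "ab12cd34"),
]

for stale in out_of_sync(fleet):
    print(f"{stale.site} is running {stale.version}; redeploy required")
```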
Daniel Bowbyes, Associate Director – Strategy: It’s no longer enough to be able to identify a person or an AI agent when it accesses a system. Management solutions will also have to track agent intent, validating the purpose behind each action to protect against prompt injection attacks and intent hijacking.
Where systems are using external large language models (LLMs), organisations will need to continually probe and test them for potential supply chain attacks leading to drift and undesirable AI agent decision-making. As agents become more powerful, they will require access to multiple organisational systems and data sources. It’s critical that organisations have the right management systems in place to be able to easily report on which agents have access to which data sources and replay agent actions in the event of an investigation.
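As a concrete illustration of that reporting and replay capability (not a description of any specific product), here is a minimal Python sketch of an append-only agent audit log; the field names and structure are assumptions:

```python
# Illustrative sketch of the audit capability described above: an append-only
# log of agent actions that supports two queries -- "which agents have touched
# which data sources?" and "replay one agent's actions in order" for an
# investigation. All names and fields are assumptions.

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AgentAction:
    agent_id: str        # authenticated identity of the AI agent
    intent: str          # declared purpose, validated before execution
    data_source: str     # system or dataset the agent accessed
    action: str          # e.g. "read", "write", "summarise"
    timestamp: datetime

class AgentAuditLog:
    def __init__(self) -> None:
        self._entries: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self._entries.append(action)   # append-only: never mutate history

    def access_report(self) -> dict[str, set[str]]:
        """Map each agent to every data source it has touched."""
        report: dict[str, set[str]] = {}
        for e in self._entries:
            report.setdefault(e.agent_id, set()).add(e.data_source)
        return report

    def replay(self, agent_id: str) -> list[AgentAction]:
        """Return one agent's actions in the order they occurred."""
        return sorted((e for e in self._entries if e.agent_id == agent_id),
                      key=lambda e: e.timestamp)
```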
Just as organisations harvest unused licences and regularly review system usage to ensure staff are on the most appropriate licence for their needs, they will have to evolve their AI FinOps capability to track not only whether staff are using AI features in a system, but whether their usage is appropriate for the system and the LLM employed by that system.
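A minimal sketch of what that FinOps check might look like, with hypothetical tier names and thresholds, follows; it mirrors licence harvesting by flagging premium-tier assignments that usage doesn't justify:

```python
# Hypothetical FinOps sketch of the licence-harvesting analogy: flag staff
# who are assigned a premium LLM tier but whose usage suggests a cheaper
# tier would do. Tier names and the threshold are illustrative assumptions.

PREMIUM_TIER = "frontier-model"
MIN_MONTHLY_REQUESTS = 50   # below this, the premium assignment is wasted

def harvest_candidates(usage: dict[str, dict]) -> list[str]:
    """Return users whose premium assignment their usage doesn't justify."""
    return [user for user, u in usage.items()
            if u["tier"] == PREMIUM_TIER
            and u["monthly_requests"] < MIN_MONTHLY_REQUESTS]

usage = {
    "alice": {"tier": "frontier-model", "monthly_requests": 420},
    "bob":   {"tier": "frontier-model", "monthly_requests": 7},    # candidate
    "carol": {"tier": "standard-model", "monthly_requests": 310},
}
print(harvest_candidates(usage))   # -> ['bob']
```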
What operational issues are organisations often under-prepared for when running AI at scale?
Bowbyes: AI at scale can be a significant financial cost to the organisation. For those organisations looking to deploy and manage their own infrastructure, there are significant supply chain challenges in just getting hold of compute hardware, let alone the complexity and cost of the hosting, power and cooling systems needed to run the hardware. There are also the ongoing infrastructure running and licensing costs and the significant rate of hardware technology change.
Walls: There’s also a general preparedness gap around:
- The need for mature governance and policy frameworks
- Understanding use cases against the right models and infrastructure
- The challenges of building the right infrastructure to support AI at scale
- Heightened data governance and regulatory pressures
- The risk that current AI security, and certainly traditional security, is insufficient
All of these collectively need proactive planning, investment in processes and flexible, scalable infrastructure strategies.
Bowbyes: Organisations that are looking to consume AI infrastructure as a service shield themselves from large capital outlays but potentially face high running costs, GPU availability risks, latency risks and the need to have very tight financial controls to ensure AI processing costs don’t blow out, causing cashflow issues.
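One way to picture those financial controls is a hard spend cap checked before any metered request is dispatched; the sketch below uses an assumed cap and an assumed blended token price:

```python
# Illustrative guardrail for the "tight financial controls" point: check a
# running monthly spend counter against a hard cap before dispatching work
# to a metered AI service. The cap and price are assumed values.

class BudgetExceeded(RuntimeError):
    pass

MONTHLY_CAP_USD = 20_000.0
PRICE_PER_1K_TOKENS_USD = 0.03   # assumed blended rate

spent_this_month_usd = 0.0

def charge_for(tokens: int) -> None:
    """Reserve budget for a request, failing closed at the cap."""
    global spent_this_month_usd
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS_USD
    if spent_this_month_usd + cost > MONTHLY_CAP_USD:
        raise BudgetExceeded(f"request of {tokens} tokens would breach the cap")
    spent_this_month_usd += cost
```

Failing closed at the cap trades a refused request for cashflow certainty, which is the point of the control.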
How important is consistency across environments (cloud, data centre, edge) when it comes to deploying and operating AI models?
Walls: It’s hugely important for governance, security and reliability when leveraging or moving models and data across environments. A recurring topic in our conversations is that different models for different use cases drive different platform needs, which implies a need for coherent policy, tooling and architecture across environments. While edge deployments address latency and locality, centralised environments provide scale and governance. But even within a single environment, different model use may drive different infrastructure needs. A managed, consistent approach across all environments helps reduce risk, simplify operations and improve the ability to deploy and maintain AI models at scale.
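As an illustration of that consistency, the sketch below, with assumed rule names and fields, evaluates a single declarative policy identically regardless of whether the deployment target is cloud, data centre or edge:

```python
# A minimal sketch of "one policy, every environment": the same declarative
# rules are evaluated before any model deployment, whatever the target.
# Rule names, fields and jurisdictions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DeploymentRequest:
    model: str
    environment: str         # "cloud" | "datacentre" | "edge"
    data_residency: str      # jurisdiction where inputs are processed
    encrypted_at_rest: bool

POLICY = {
    "allowed_residencies": {"NZ", "AU"},
    "require_encryption": True,
}

def violations(req: DeploymentRequest) -> list[str]:
    """Apply the same policy regardless of target environment."""
    problems = []
    if req.data_residency not in POLICY["allowed_residencies"]:
        problems.append(f"data residency {req.data_residency} not permitted")
    if POLICY["require_encryption"] and not req.encrypted_at_rest:
        problems.append("encryption at rest is required")
    return problems

req = DeploymentRequest("vision-qc", "edge", "NZ", encrypted_at_rest=True)
assert not violations(req)   # the identical check would run for cloud targets
```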
Find out how Datacom is supporting organisations in optimising and governing AI workloads across distributed environments, while advancing sustainability and compliance goals.
Glossary
- Edge: Computing resources located close to data sources or end users to reduce latency.
- Near-edge: A compute layer located close to edge devices, often networked directly to edge data sources.
- Distributed AI: AI workloads spread across multiple environments (edge, data centre, cloud) with coordinated governance.
- Governance: Policies and controls governing data, models, security and compliance.
- FinOps: Financial operations practices for cloud/AI spend, including costing, budgeting and optimisation.
- Data residency and sovereignty: Regulations governing where data is stored and processed and by whom.
- Orchestration: Managing the end-to-end workflow of AI models and data across environments.
- Prompt injection / supply-chain risk: Security concerns around manipulated prompts or compromised models/data sources.