As the race to deploy artificial intelligence (AI) reaches a fever pitch across enterprises, the savviest organizations are already looking ahead to artificial consciousness, a pinnacle of technological and theoretical exploration. This undertaking, however, requires unprecedented hardware and software capabilities, and while such systems are under construction, enterprises still have a long way to go to understand the demands, and even longer before they can deploy these systems. This piece is the first in a series of three articles outlining the parameters for artificial consciousness.
The hardware requirements span massive amounts of compute, control, and storage. These enterprise IT categories are not new, but the performance demands are unprecedented. Enterprises have experience deploying compute, control, and storage for Software-as-a-Service (SaaS) applications in a mobile-first, cloud-first world, but they are still learning to scale that hardware for AI environments and, ultimately, for systems capable of delivering artificial consciousness.
It all starts with compute capacity
As Lenovo’s third annual global CIO report revealed, CIOs are developing their AI roadmaps now, assessing everything from organizational support to capacity building to future-forward technology investments. The first requirement CIOs must meet when considering artificial consciousness is compute capacity, which falls under capacity building. The compute needed far exceeds that of conventional AI or even generative AI (GenAI), given the sheer volume of data required to enable systems fully capable of learning and reasoning.
This higher processing power is achieved through a compute fabric composed of sophisticated server clusters, an approach familiar to CIOs who have deployed high-performance computing (HPC) infrastructure. These clusters tightly integrate advanced hardware to deliver exceptional processing power and efficiency.
At the heart of this cluster-based infrastructure is the concept of a “pod,” meticulously organized to maximize computing density and thermal efficiency. Each pod comprises 16 racks, and each rack houses eight water-cooled servers, a configuration that delivers not only optimal performance but also environmental sustainability through advanced cooling. Each server carries 2TB of DDR5 registered ECC DIMM system memory for rapid data access and pairs direct water cooling with a rear-door heat exchanger that captures residual waste heat. These servers can be configured with the latest GPUs or AI processors from Nvidia, AMD, or Intel, providing massive parallel computing power for this extremely demanding application.
Each 16-rack pod also includes a Vertiv end-of-row coolant distribution unit, an innovative component designed to efficiently manage the thermal dynamics of high-density computing environments and keep this high-powered hardware within safe thermal thresholds. The result is a system that delivers high performance and reliability while significantly boosting energy efficiency: by reducing overall cooling power requirements, each pod is both powerful and environmentally conscious.
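To make the density figures concrete, here is a quick back-of-the-envelope sizing sketch using only the numbers stated above (16 racks per pod, eight servers per rack, 2TB of system memory per server). Per-server GPU counts and power draw vary by configuration, so they are deliberately left out.

```python
# Back-of-the-envelope sizing for one pod, based on the figures in the article.
RACKS_PER_POD = 16        # racks in one pod
SERVERS_PER_RACK = 8      # water-cooled servers per rack
MEMORY_PER_SERVER_TB = 2  # DDR5 registered ECC DIMM system memory per server

servers_per_pod = RACKS_PER_POD * SERVERS_PER_RACK
memory_per_pod_tb = servers_per_pod * MEMORY_PER_SERVER_TB

print(f"Servers per pod: {servers_per_pod}")             # 128
print(f"System memory per pod: {memory_per_pod_tb} TB")  # 256 TB
```

Multiplying out across pods in the same way gives a first-order estimate of total cluster capacity, before accounting for networking, storage, or cooling overhead.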
Laying the foundation for artificial consciousness
The quest to build artificial consciousness is ambitious: running its groundbreaking algorithms at scale introduces a whole new set of hardware infrastructure requirements, the first of which is compute power. Once an enterprise scales its processing power, it must also scale its control and storage hardware before it can activate the advanced software stacks and strategic services that will operationalize artificial consciousness. The next article in this series will look at how to build capacity for the higher control and storage hardware requirements.
Read More from This Article: Beyond AI: Building toward artificial consciousness – Part I