AMD Unveils “Helios” Open Rack Platform for Scalable AI Infrastructure
Showcased at OCP Global Summit 2025, the Helios platform supports gigawatt-scale AI data centers with open, liquid-cooled, double-wide racks.

At the Open Compute Project (OCP) Global Summit in San Jose, AMD publicly showcased a static display of its “Helios” rack-scale platform for the first time. Built on the new Open Rack Wide (ORW) specification introduced by Meta, “Helios” extends the AMD open hardware philosophy from silicon to system to rack, representing a major step forward in open, interoperable AI infrastructure.
Extending AMD's leadership in AI and high-performance computing, “Helios” provides the foundation for the open, scalable infrastructure needed to power the world’s growing AI demands. The new ORW specification, designed for gigawatt-scale data centers, defines an open, double-wide rack optimized for the power, cooling, and serviceability needs of next-generation AI systems. By adopting ORW and OCP standards, “Helios” gives the industry a unified, standards-based foundation for developing and deploying efficient, high-performance AI infrastructure at scale.
“Open collaboration is key to scaling AI efficiently,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. “With ‘Helios,’ we’re turning open standards into real, deployable systems — combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads.”
Built for Open, Efficient, and Sustainable AI Infrastructure
The AMD “Helios” rack-scale platform integrates open compute standards, including OCP DC-MHS, UALink, and Ultra Ethernet Consortium (UEC) architectures, supporting both open scale-up and scale-out fabrics. The rack features quick-disconnect liquid cooling for sustained thermal performance, a double-wide layout for improved serviceability, and standards-based Ethernet for multi-path resiliency.
As a reference design, “Helios” enables OEMs, ODMs, and hyperscalers to adopt, extend, and customize open AI systems quickly, reducing deployment time, improving interoperability, and supporting efficient scaling for AI and HPC workloads. The Helios platform reflects AMD's ongoing collaboration across the OCP community to enable open, scalable infrastructure for AI deployments worldwide.
www.amd.com