electronics-journal.com
Written on 19 '26
Samsung Advances AI Infrastructure with HBM4 Memory Collaboration
Partnership with AMD focuses on high-bandwidth memory and DDR5 solutions to improve performance, efficiency, and scalability in AI and data center systems.
www.amd.com

High-performance computing, AI training, and hyperscale data center applications are driving demand for higher memory bandwidth and better energy efficiency. In response, Samsung Electronics Co., Ltd. is expanding its collaboration with AMD on next-generation AI memory solutions, including HBM4 and DDR5 technologies for upcoming GPU and CPU platforms.
The agreement, formalized through a Memorandum of Understanding signed at Samsung’s semiconductor complex in Pyeongtaek, Korea, outlines joint development of high-bandwidth memory and DRAM solutions tailored for AI workloads. The collaboration targets integration with AMD’s Instinct GPU accelerators, EPYC processors, and rack-scale architectures such as the Helios platform, addressing system-level performance constraints in AI infrastructure.
HBM4 memory targets bandwidth-intensive AI workloads
Samsung’s HBM4 memory introduces a new generation of stacked DRAM designed to meet the increasing bandwidth requirements of AI model training and inference. Built on a 6th-generation 10 nm-class (1c) DRAM process and combined with a 4 nm logic base die, the architecture delivers per-pin data transfer speeds of up to 13 Gbps and memory bandwidth of up to 3.3 TB/s per stack.
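The quoted figures are consistent with a simple back-of-envelope check. Assuming a 2048-bit interface per stack, which is the width defined for HBM4 by JEDEC but is not stated in the article itself, the 13 Gbps per-pin rate yields roughly the 3.3 TB/s figure. A minimal sketch of that arithmetic:

```python
# Back-of-envelope check of the quoted HBM4 bandwidth figure.
# Assumption (not from the article): a 2048-bit interface per stack,
# as defined for HBM4 by JEDEC.

PIN_SPEED_GBPS = 13          # per-pin data rate quoted for Samsung HBM4
INTERFACE_WIDTH_BITS = 2048  # assumed JEDEC HBM4 stack interface width

def stack_bandwidth_tbps(pin_gbps: float, width_bits: int) -> float:
    """Peak per-stack bandwidth in TB/s (decimal units): Gbps x bits / 8 / 1000."""
    return pin_gbps * width_bits / 8 / 1000

bw = stack_bandwidth_tbps(PIN_SPEED_GBPS, INTERFACE_WIDTH_BITS)
print(f"{bw:.2f} TB/s per stack")  # ~3.33 TB/s, matching the quoted 3.3 TB/s
```

An accelerator package carrying multiple such stacks multiplies this per-stack figure accordingly, which is how rack-scale platforms reach aggregate bandwidths well beyond a single stack.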
These specifications position HBM4 as a key enabler for reducing data bottlenecks between memory and compute units in AI accelerators. In practical terms, higher bandwidth allows GPUs to process larger datasets more efficiently, which is critical for large language models and real-time inference systems deployed in cloud and enterprise environments.
The memory will be integrated as a primary component in AMD’s next-generation Instinct MI455X GPU, which is designed for high-performance AI workloads. Its role within the broader Helios rack-scale platform highlights the shift toward tightly integrated compute architectures where memory, processors, and interconnects are co-optimized.
DDR5 optimization for next-generation EPYC processors
In parallel, the collaboration extends to DDR5 DRAM solutions optimized for 6th-generation AMD EPYC processors, codenamed “Venice.” These processors are expected to support advanced AI and data center workloads where memory capacity, latency, and energy efficiency are critical factors.
By aligning DDR5 memory design with processor architecture, the companies aim to improve system throughput and reduce power consumption at the server level. This is particularly relevant for hyperscale data centers, where incremental efficiency gains translate into significant operational cost reductions.
System-level integration for AI infrastructure
The collaboration reflects a broader trend toward system-level co-design in AI infrastructure, where performance is increasingly dependent on how memory, compute, and packaging technologies interact. Samsung’s capabilities in advanced memory fabrication, foundry services, and packaging are expected to support AMD’s roadmap for integrated AI systems.
Beyond memory supply, the agreement also includes discussions on foundry services for future AMD products, indicating potential expansion into chip manufacturing and advanced packaging integration.
Positioning within the AI memory ecosystem
The introduction of HBM4 marks a progression from earlier high-bandwidth memory generations such as HBM3 and HBM3E, which are already deployed in current AI accelerators. Compared to these, HBM4 increases both data rates and total bandwidth, addressing the growing gap between compute performance and memory throughput.
While other semiconductor manufacturers are also developing next-generation HBM solutions, the collaboration between Samsung and AMD focuses on close integration between memory and processing platforms, which can influence real-world system performance more than component-level improvements alone.
By combining high-bandwidth memory with optimized DDR5 solutions and rack-scale architectures, the partnership aims to support scalable AI infrastructure capable of handling increasingly complex workloads across cloud computing, enterprise AI, and high-performance computing environments.
Edited by Industrial Journalist, Natania Lyngdoh — Adapted by AI.