
electronics-journal.com

AMD reveals open AI ecosystem with new silicon, software, & systems at Advancing AI 2025

AMD powers full-spectrum AI with top GPUs, CPUs, networking, and open software, backed by key partners like Meta and Microsoft.


At its 2025 Advancing AI event, AMD presented its vision for a comprehensive, end-to-end integrated AI platform and introduced its open, scalable rack-scale AI infrastructure built on industry standards.

AMD and its partners showcased:
  • How they are building the open AI ecosystem with the new AMD Instinct™ MI350 Series accelerators
  • The continued growth of the AMD ROCm™ ecosystem
  • The company’s powerful, new, open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027
“AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD ‘Helios’ rack-scale solutions, and growing momentum for our ROCm open software stack,” said Dr. Lisa Su, AMD chair and CEO. “We are entering the next phase of AI, driven by open standards, shared innovation and AMD’s expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI.”

AMD Delivers Leadership Solutions to Accelerate an Open AI Ecosystem
AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:
  • AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation increase in AI compute [i] and a 35x generational leap in inferencing [ii], paving the way for transformative AI solutions across industries. MI355X also delivers significant price-performance gains, generating up to 40% more tokens-per-dollar compared to competing solutions [iii]. More details are available in this blog from Vamsi Boppana, AMD SVP, AI.
     
  • AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure—already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC™ processors and AMD Pensando™ Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI) and set for broad availability in 2H 2025.
     
  • AMD also previewed its next-generation AI rack, called “Helios.” It will be built on next-generation AMD Instinct MI400 Series GPUs – expected to deliver up to 10x more performance than the previous generation when running inference on Mixture of Experts models [iv] – alongside “Zen 6”-based AMD EPYC “Venice” CPUs and AMD Pensando “Vulcano” NICs. More details are available in this blog post.
     
  • The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads—while dramatically improving developer experience across the board. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility and new development tools, drivers, APIs and libraries to accelerate AI development and deployment. More details are available in this blog post from Anush Elangovan, AMD CVP of AI Software Development.
     
  • The Instinct MI350 Series exceeded AMD’s five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement [v]. AMD also unveiled a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year [vi], enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity [vii]. More details are available in this blog post from Sam Naffziger, AMD SVP and Corporate Fellow.
     
  • AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started with AI projects – and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.
Broad Partner Ecosystem Showcases AI Progress Powered by AMD
Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among them are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD on AI solutions to train today’s leading AI models, power inference at scale and accelerate AI exploration and development:
  • Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta shared excitement for MI350 and its compute power, performance-per-TCO and next-generation memory. Meta continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.
     
  • OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms and OpenAI’s close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on MI400 Series platforms.
     
  • Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs, enabling customers to build and train AI models and run inference at scale.
     
  • HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.
     
  • Microsoft announced Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
     
  • Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.
     
  • Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
     
  • Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.
     
  • Marvell joined AMD to highlight its collaboration as part of the UALink Consortium developing an open interconnect, bringing the ultimate flexibility for AI infrastructure.
