INTEL GAUDI AI ACCELERATOR GAINS 2X PERFORMANCE LEAP ON GPT-3 WITH FP8 SOFTWARE
The latest MLPerf results for Intel Gaudi2 and 4th Gen Intel Xeon demonstrate how Intel is raising the bar for AI performance with cost-effective and high-performance AI solutions.
MLCommons published results of the industry-standard MLPerf Training v3.1 benchmark for training AI models, with Intel submitting results for Intel® Gaudi®2 accelerators and 4th Gen Intel® Xeon® Scalable processors with Intel® Advanced Matrix Extensions (Intel® AMX). Intel Gaudi2 demonstrated a significant 2x performance leap with the implementation of the FP8 data type on the v3.1 training GPT-3 benchmark. The benchmark submissions reinforced Intel's commitment to bring AI everywhere with competitive AI solutions.
The newest MLCommons MLPerf results build on Intel's strong AI performance in the previous MLPerf Training round from June. The Intel Xeon processor remains the only CPU with reported MLPerf results, and Intel Gaudi2 is one of only three accelerator solutions with submitted results, and one of only two that are commercially available.
Intel Gaudi2 and 4th Gen Xeon processors demonstrate compelling AI training performance in a variety of hardware configurations to address the increasingly broad array of customer AI compute requirements.
Gaudi2 continues to be the only viable alternative to NVIDIA's H100 for AI compute needs, delivering compelling price-performance. MLPerf results for Gaudi2 demonstrated the AI accelerator's increasing training performance:
- Gaudi2 demonstrated a 2x performance leap with the implementation of the FP8 data type on the v3.1 training GPT-3 benchmark, reducing time-to-train by more than half compared to the June MLPerf benchmark, completing the training in 153.58 minutes on 384 Intel Gaudi2 accelerators. The Gaudi2 accelerator supports FP8 in both E5M2 and E4M3 formats, with the option of delayed scaling when necessary.
- Intel Gaudi2 demonstrated training on the Stable Diffusion multi-modal model with 64 accelerators in 20.2 minutes, using BF16. In future MLPerf training benchmarks, Stable Diffusion performance will be submitted on the FP8 data type.
- On eight Intel Gaudi2 accelerators, benchmark results were 13.27 and 15.92 minutes for BERT and ResNet-50, respectively, using BF16.
- Intel submitted results for ResNet-50, RetinaNet, BERT and DLRM-DCNv2. The 4th Gen Intel Xeon Scalable processors' results for ResNet-50, RetinaNet and BERT were similar to the strong out-of-the-box performance results submitted for the June 2023 MLPerf benchmark.
- DLRM-DCNv2 is a new model since June's submission, with the CPU demonstrating a time-to-train of 227 minutes using only four nodes.
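The delayed-scaling option mentioned for Gaudi2's FP8 support can be illustrated numerically. The sketch below is an assumption for illustration only, not Gaudi2's actual implementation: the scale factor for a tensor is derived from the maximum absolute values (amax) recorded in recent iterations rather than the current one, so quantization can proceed without an extra pass over the data. Mantissa rounding onto the FP8 grid is omitted; only range scaling and clamping are shown.

```python
E4M3_MAX = 448.0     # largest finite value representable in FP8 E4M3
E5M2_MAX = 57344.0   # largest finite value representable in FP8 E5M2

def update_scale(amax_history, fp8_max=E4M3_MAX):
    """Choose a scale so the historically largest value maps near fp8_max."""
    amax = max(amax_history)
    return fp8_max / amax if amax > 0 else 1.0

def fp8_quant_dequant(x, scale, fp8_max=E4M3_MAX):
    """Scale up, clamp to the FP8 representable range, scale back down.
    (Rounding to the discrete FP8 grid is omitted for brevity.)"""
    clipped = max(-fp8_max, min(fp8_max, x * scale))
    return clipped / scale

# With an amax history of [100.0], the scale becomes 448 / 100; values within
# the historical range round-trip, while an outlier like 1000.0 is clamped.
scale = update_scale([100.0])
in_range = fp8_quant_dequant(50.0, scale)
outlier = fp8_quant_dequant(1000.0, scale)
```
The trade-off this shows: because the scale reflects past iterations, a sudden outlier is clipped until the amax history catches up, which is the price delayed scaling pays for avoiding a per-step amax computation.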
With software updates and optimizations, Intel anticipates further advances in AI performance in forthcoming MLPerf benchmarks. Intel's AI products give customers more choice of AI solutions to meet dynamic requirements for performance, efficiency and usability.