
NVIDIA Dominates MLPerf Training v5.1 with Blackwell Ultra GPUs



Rongchai Wang
Nov 13, 2025 02:42

NVIDIA swept the MLPerf Training v5.1 benchmarks, showcasing superior AI training performance with its Blackwell Ultra GPU architecture across multiple AI model categories.

NVIDIA has once again showcased its dominance in AI training performance by sweeping all seven tests in the latest MLPerf Training v5.1 benchmarks. The company demonstrated the fastest training times across various AI model categories, including large language models (LLMs), image generation, recommender systems, computer vision, and graph neural networks, according to NVIDIA’s blog.

Blackwell Ultra’s Impressive Debut

The highlight of this round was the debut of the GB300 NVL72 rack-scale system, powered by NVIDIA’s Blackwell Ultra GPU architecture. The system delivered groundbreaking results, pretraining the Llama 3.1 405B model more than four times faster and fine-tuning the Llama 2 70B model nearly five times faster than its predecessor, the Hopper architecture.

These performance gains were driven by Blackwell Ultra’s advanced architectural features, including new Tensor Cores capable of 15 petaflops of NVFP4 AI compute and 279 GB of HBM3e memory. The company also introduced new training methods to leverage the architecture’s NVFP4 compute capabilities effectively.

Advancements in AI Training Precision

NVIDIA’s success in this benchmark can be attributed to its pioneering use of NVFP4 precision in AI training—a first in MLPerf’s history. This approach allows the architecture to perform calculations on data with fewer bits, significantly enhancing computational speed while maintaining accuracy. This innovation is part of NVIDIA’s broader strategy to optimize AI models for faster training times.
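The article doesn’t spell out what “fewer bits” looks like in practice. As a rough illustration: NVFP4 is built around a 4-bit floating-point element (an E2M1-style layout of 1 sign, 2 exponent, and 1 mantissa bit), whose representable magnitudes are just {0, 0.5, 1, 1.5, 2, 3, 4, 6}, paired with a shared scale factor per small block of values. The toy sketch below shows round-to-nearest quantization with a simple per-block scale; the real NVFP4 training recipe (FP8 block scales, rounding strategies, which layers stay in higher precision) is considerably more involved, and the helper names here are my own.

```python
# Toy sketch of 4-bit (E2M1-style) block quantization. This is an
# illustration of the low-precision idea, not NVIDIA's actual recipe.

E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # magnitudes representable in E2M1

def quantize_e2m1(x, scale):
    """Map a float to the nearest representable E2M1 value after scaling."""
    s = x / scale
    sign = -1.0 if s < 0 else 1.0
    mag = min(abs(s), 6.0)  # clamp to the largest E2M1 magnitude
    nearest = min(E2M1_VALUES, key=lambda v: abs(v - mag))
    return sign * nearest * scale

def quantize_block(values):
    """Per-block scaling: choose a scale so the block's max maps to 6.0."""
    scale = max(abs(v) for v in values) / 6.0 or 1.0
    return [quantize_e2m1(v, scale) for v in values]

weights = [0.12, -0.57, 0.33, 0.91]
print(quantize_block(weights))  # each weight snaps to (±E2M1 value) * shared scale
```

The speed win comes from the hardware side of this trade: 4-bit operands quarter the memory traffic of FP16 and let the Tensor Cores pack far more multiply-accumulates per cycle, while block-wise scaling keeps the quantization error small enough to preserve accuracy.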

Record-Breaking Performance Metrics

NVIDIA’s Blackwell GPUs achieved a new record by training the Llama 3.1 405B model in just 10 minutes, thanks to efficient scaling across over 5,000 GPUs. This feat marked a 2.7x improvement over previous results. Additionally, NVIDIA set new benchmarks with the Llama 3.1 8B and FLUX.1 models, underscoring its commitment to continuous innovation in AI training.
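For a sense of scale, the reported figures support some back-of-envelope arithmetic. The assumptions are loose: “over 5,000 GPUs” is treated as exactly 5,000, and the 2.7x figure is applied directly to wall-clock time (the previous submission’s GPU count isn’t given here).

```python
# Back-of-envelope arithmetic from the reported figures.
new_time_min = 10.0   # reported Llama 3.1 405B time-to-train
speedup = 2.7         # reported improvement over the previous result
gpus = 5000           # lower bound on GPUs used

implied_prior_min = new_time_min * speedup  # previous record implied to be ~27 min
gpu_hours = new_time_min * gpus / 60.0      # total compute spent, in GPU-hours

print(f"implied previous record: ~{implied_prior_min:.0f} min")
print(f"compute spent: ~{gpu_hours:.0f} GPU-hours")
```

The headline number is wall-clock time-to-train, so it rewards both per-GPU throughput and near-linear scaling efficiency across the cluster; a ten-minute run still represents on the order of 800 GPU-hours of aggregate compute.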

Industry Collaboration and Future Prospects

NVIDIA’s ecosystem of partners, including major tech companies such as Dell Technologies and Hewlett Packard Enterprise, played a vital role in achieving these results. This widespread collaboration highlights the robust support and scalability of NVIDIA’s technology, fostering rapid advancements in AI capabilities.

As NVIDIA continues to innovate at a rapid pace, it is setting the stage for unprecedented growth in AI adoption and intelligence, paving the way for future breakthroughs in AI training and inference.

Image source: Shutterstock

Source: https://blockchain.news/news/nvidia-dominates-mlperf-training-v5-1-blackwell-ultra-gpus
