
NVIDIA Dominates MLPerf Training v5.1 with Blackwell Ultra GPUs



Rongchai Wang
Nov 13, 2025 02:42

NVIDIA swept the MLPerf Training v5.1 benchmarks, showcasing superior AI training performance with its Blackwell Ultra GPU architecture across multiple AI model categories.

NVIDIA has once again demonstrated its dominance in AI training by sweeping all seven tests in the latest MLPerf Training v5.1 benchmarks. The company posted the fastest training times across every AI model category, including large language models (LLMs), image generation, recommender systems, computer vision, and graph neural networks, according to NVIDIA’s blog.

Blackwell Ultra’s Impressive Debut

The highlight of this round was the debut of the GB300 NVL72 rack-scale system, powered by NVIDIA’s Blackwell Ultra GPU architecture. The system delivered more than four times the pretraining speed on the Llama 3.1 405B model and nearly five times the fine-tuning speed on the Llama 2 70B model compared with its Hopper-based predecessor.

These gains were driven by Blackwell Ultra’s architectural advances, including new Tensor Cores capable of 15 petaflops of NVFP4 AI compute and 279 GB of HBM3e memory. NVIDIA also introduced new training methods to take full advantage of the architecture’s NVFP4 compute capabilities.

Advancements in AI Training Precision

NVIDIA’s success in this round can be attributed to its pioneering use of NVFP4 precision in AI training, a first in MLPerf’s history. The format represents data with fewer bits per value, significantly increasing computational throughput while preserving accuracy. This innovation is part of NVIDIA’s broader strategy of optimizing numerical precision for faster training times.
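NVIDIA has not published NVFP4's exact encoding here, so the sketch below only illustrates the general idea behind such formats: block-scaled 4-bit quantization, where a group of values shares one scale factor and each value is snapped to a small grid of representable magnitudes (here the FP4 E2M1 grid). The function name and block size are illustrative assumptions, not NVIDIA's implementation.

```python
import numpy as np

# Representable magnitudes of an FP4 (E2M1) value; each value
# additionally carries a sign bit.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_block(x, block_size=16):
    """Simulate block-scaled 4-bit quantization: each block of values
    shares one scale factor, and each scaled value is snapped to the
    nearest representable FP4 magnitude."""
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    for start in range(0, x.size, block_size):
        block = x[start:start + block_size]
        # Per-block scale maps the largest magnitude onto the FP4 maximum (6.0)
        scale = max(np.max(np.abs(block)) / FP4_GRID[-1], 1e-12)
        scaled = block / scale
        # Snap each scaled value to the nearest grid point, keeping its sign
        idx = np.argmin(np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]), axis=1)
        out[start:start + block_size] = np.sign(scaled) * FP4_GRID[idx] * scale
    return out

weights = np.random.default_rng(0).normal(size=64)
deq = quantize_fp4_block(weights)
mean_err = np.abs(weights - deq).mean()
```

The trade-off this illustrates is the one the benchmark exploits: 4-bit operands quarter the memory traffic relative to 16-bit formats, while the per-block scale keeps the quantization error small enough for training to converge.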

Record-Breaking Performance Metrics

NVIDIA’s Blackwell GPUs achieved a new record by training the Llama 3.1 405B model in just 10 minutes, thanks to efficient scaling across over 5,000 GPUs. This feat marked a 2.7x improvement over previous results. Additionally, NVIDIA set new benchmarks with the Llama 3.1 8B and FLUX.1 models, underscoring its commitment to continuous innovation in AI training.
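As a back-of-the-envelope check on the reported figures (illustrative arithmetic only, using the numbers above), a 2.7x improvement over a 10-minute result implies the prior record was roughly 27 minutes:

```python
record_minutes = 10.0        # reported Llama 3.1 405B training time
speedup = 2.7                # reported improvement over the previous result
implied_prior = record_minutes * speedup  # implied previous record in minutes
```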

Industry Collaboration and Future Prospects

NVIDIA’s ecosystem of partners, including major tech companies such as Dell Technologies and Hewlett Packard Enterprise, played a vital role in achieving these results. This widespread collaboration highlights the robust support and scalability of NVIDIA’s technology, fostering rapid advancements in AI capabilities.

As NVIDIA continues to innovate at a rapid pace, it is setting the stage for unprecedented growth in AI adoption and intelligence, paving the way for future breakthroughs in AI training and inference.

Image source: Shutterstock

Source: https://blockchain.news/news/nvidia-dominates-mlperf-training-v5-1-blackwell-ultra-gpus

