The post Nvidia Drops Nemotron 3 Super Amid $26 Billion Open-Model AI Bet—America’s Answer to Qwen? appeared on BitcoinEthereumNews.com.

Nvidia Drops Nemotron 3 Super Amid $26 Billion Open-Model AI Bet—America’s Answer to Qwen?


In brief

  • Nvidia launched Nemotron 3 Super, a 120B open-weight AI model optimized for autonomous agents and ultra-long context tasks.
  • The hybrid Mamba-Transformer MoE architecture delivers faster reasoning and over 5× throughput while running at 4-bit precision.
  • Nvidia’s $26 billion investment in open-weight AI aims to counter China’s rise in the field.

Nvidia just shipped Nemotron 3 Super, a 120-billion-parameter open-weight model built to do one thing well: run autonomous AI agents without bleeding your compute budget dry.

That’s not a small problem. Multi-agent systems generate far more tokens than a normal chat: every tool call, reasoning step, and slice of context gets re-sent from scratch. As a result, costs explode, models drift, and agents gradually lose the thread of what they were supposed to be doing, or at least get less accurate at it.
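To see why the costs explode, consider a toy model of an agent run (illustrative numbers, not from Nvidia): if every step re-sends the full history so far, the total tokens processed grow quadratically with the number of steps.

```python
def total_tokens(steps: int, tokens_per_step: int) -> int:
    """Total tokens processed when each step re-sends the full history.

    Step k re-sends all k * tokens_per_step tokens accumulated so far,
    so the total is tokens_per_step * (1 + 2 + ... + steps).
    """
    return tokens_per_step * steps * (steps + 1) // 2

# A 10-step agent run vs. a 100-step run, adding 1,000 tokens per step:
print(total_tokens(10, 1_000))   # 55,000 tokens
print(total_tokens(100, 1_000))  # 5,050,000 tokens: 10x the steps, ~92x the cost
```

That quadratic blow-up is why inference efficiency, not raw benchmark scores, is the axis this model competes on.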

Nemotron 3 Super is Nvidia’s answer to all of that. The model runs 12 billion active parameters out of 120 billion total, using a mixture-of-experts (MoE) design that keeps inference cheap while retaining the reasoning depth complex workflows need. It packs a 1-million-token context window, enough for agents to hold an entire codebase, or nearly 750,000 words, in memory before the context fills up.

To build its model, Nvidia combined three components that rarely appear together in the same architecture: Mamba-2 state-space layers—a faster, memory-efficient alternative to attention for handling long token streams—along with Transformer attention layers for precise recall, and a new “Latent MoE” design that compresses token embeddings before routing them to experts. That allows the model to activate four times as many specialists at the same compute cost.
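In a mixture-of-experts layer, a router sends each token to a small subset of “expert” sub-networks, which is how a model can run 12 billion active parameters out of 120 billion total. The sketch below is a deliberately simplified illustration, not Nvidia’s actual design: the expert count, the router scores, and the truncation standing in for a learned latent down-projection are all invented for the example.

```python
def moe_forward(token: list[float], experts, router_scores, top_k: int = 2):
    """Route one token to its top-k experts and mix their outputs.

    Only top_k of len(experts) experts run, so active compute is a
    fraction of total parameters. A "latent MoE" additionally compresses
    the token before routing; truncation mimics that here.
    """
    latent = token[: len(token) // 2]  # toy stand-in for a learned down-projection
    ranked = sorted(range(len(experts)), key=lambda i: router_scores[i], reverse=True)
    chosen = ranked[:top_k]
    total = sum(router_scores[i] for i in chosen)
    # Weighted sum of the chosen experts' outputs (softmax omitted for brevity).
    out = [0.0] * len(latent)
    for i in chosen:
        y = experts[i](latent)
        w = router_scores[i] / total
        out = [o + w * v for o, v in zip(out, y)]
    return out, chosen

# Eight toy experts that just scale their input by 1..8:
experts = [lambda x, s=s: [s * v for v in x] for s in range(1, 9)]
scores = [0.1, 0.9, 0.2, 0.8, 0.1, 0.1, 0.1, 0.1]
out, chosen = moe_forward([1.0, 2.0, 3.0, 4.0], experts, scores)
print(chosen)  # [1, 3]: only 2 of 8 experts actually ran
```

Routing over the compressed latent is cheaper per expert, which is the lever that lets more experts be activated at the same compute cost.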

The model was also pretrained natively in NVFP4, Nvidia’s 4-bit floating-point format. In practice, that means the system learned to operate accurately within 4-bit arithmetic from the very first gradient update, rather than being trained at high precision and compressed afterward, which often causes models to lose accuracy.

For context, a model’s precision is measured in bits. Full precision, known as FP32, is the gold standard—but it is also extremely expensive to run at scale. Developers often reduce precision to save compute while trying to preserve useful performance.

Think of it like shrinking a 4K image down to 1080p: The picture still looks the same at a glance, just with less detail. Normally, dropping from 32-bit precision all the way to 4-bit would cripple a model’s reasoning ability. Nemotron avoids that problem by learning to operate at low precision from the start, instead of being squeezed into it later.
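A rough sense of what 4-bit precision means: a 4-bit integer grid has only 16 levels, so nearby weights get snapped to the same value. The toy symmetric quantizer below (not NVFP4, which is a floating-point format with its own scaling scheme) shows the round-trip error that post-training compression introduces.

```python
def quantize_dequantize(values: list[float], bits: int = 4) -> list[float]:
    """Snap values to a symmetric integer grid and map them back.

    With 4 bits the grid spans levels -7..7 (one code reserved), so fine
    differences between nearby weights are rounded away.
    """
    qmax = 2 ** (bits - 1) - 1                  # 7 for 4-bit
    scale = max(abs(v) for v in values) / qmax  # map largest value to the top level
    return [round(v / scale) * scale for v in values]

weights = [0.91, 0.88, 0.12, -0.35]
print(quantize_dequantize(weights))
# 0.91 and 0.88 collapse onto the same grid point. That lost detail is
# the accuracy risk of quantizing after training, and what training
# natively in 4-bit is meant to absorb.
```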

Compared to its own predecessor, Nemotron 3 Super delivers more than five times the throughput. Against external rivals, it’s 2.2x faster than OpenAI’s GPT-OSS 120B on inference throughput, and 7.5x faster than Alibaba’s Qwen3.5-122B.

We ran our own quick test. The reasoning held up well, including on prompts that were deliberately vague, badly worded, or based on wrong information. The model caught small errors in context without being asked to, handled math and logic problems cleanly, and didn’t fall apart when the question itself was slightly off.

The full training pipeline is public: weights on Hugging Face, 10 trillion curated pretraining tokens (out of 25 trillion total seen during training), 40 million post-training samples, and reinforcement-learning recipes across 21 environment configurations. Perplexity, Palantir, Cadence, and Siemens are already integrating the model into their workflows.

The $26 billion bet

The model may be one piece of a larger strategy. A 2025 financial filing shows Nvidia plans to spend $26 billion over the next five years building open-weight AI models. Executives confirmed it, too.

Bryan Catanzaro, VP of applied deep learning research, told Wired the company recently finished pretraining a 550-billion-parameter model. Nvidia released its first Nemotron model back in November 2023, but that filing makes clear this is no longer a side project.

The investment is strategic, considering Nvidia’s chips are still the default infrastructure for training and running frontier models. Models tuned to its hardware give customers a built-in reason to stay on Nvidia, despite rivals’ efforts to lure them onto other chips. But there’s a more urgent pressure behind the move: America is losing the open-source AI race, and losing it fast.

Chinese open models went from barely 1.2% of global open-model usage in late 2024 to roughly 30% by the end of 2025, according to research by OpenRouter and Andreessen Horowitz. Alibaba’s Qwen overtook Meta’s Llama as the most-used self-hosted open-source model, according to Runpod. American companies including Airbnb adopted it for customer service. Startups worldwide are building on top of it. Beyond market share, that kind of adoption creates infrastructure dependencies that are hard to reverse.

While U.S. giants like OpenAI, Anthropic, and Google keep their best models locked behind APIs, Chinese labs from DeepSeek to Alibaba have been flooding the open ecosystem. Meta was the one major American player competing in open source with Llama, but Zuckerberg recently signaled the company might not make future models fully open.

The gap between “best proprietary model” and “best open model” used to be massive—and in America’s favor. That gap is now very small, and the open side of the ledger is increasingly Chinese.

There’s also a hardware threat underneath all of this. A new DeepSeek model is widely expected to drop soon, and it’s rumored to have been trained entirely on chips made by Huawei—a sanctioned Chinese company. If confirmed, it would give developers around the world, particularly in China, a concrete reason to start testing Huawei’s hardware. China’s Zhipu AI is already doing that.

That’s the scenario Nvidia most needs to prevent: Chinese open models and Chinese chips building an ecosystem that doesn’t need Nvidia at all.


Source: https://decrypt.co/360929/nvidia-drops-nemotron-3-super-26-billion-open-model-ai-bet
