
What to know about DeepSeek's new V3.2-Exp model

2025/09/30 20:42

China’s tech wonder kid DeepSeek has launched a new experimental model, V3.2-Exp, as part of its attempt to challenge American dominance in AI. The release came on Monday and was first made public through a post on Hugging Face, the popular AI model-hosting platform.

DeepSeek claims that this latest version builds on its current model, V3.1-Terminus, but with a stronger emphasis on speed, cost, and memory handling.

According to Hugging Face’s Chinese community lead Adina Yakefu, the model features something called DeepSeek Sparse Attention, or DSA, which she said “makes the AI better at handling long documents and conversations” while also cutting operating costs in half.

If you recall, around a year ago, DeepSeek shook things up by releasing its first model, R1, without warning. That model showed it was possible to train a large language model using fewer chips and much less computing power. No one expected a Chinese team to pull that off under those constraints. With V3.2-Exp, the goal hasn’t changed: less hardware, more performance.

Adds DeepSeek Sparse Attention and reduces AI running cost

DSA is the big feature in this model. It changes how the AI picks which information to look at. Instead of scanning everything, DeepSeek trains the model to focus only on what seems useful for the task. Adina explained that the benefit here is twofold: “efficiency” and “cost reduction.”

By skipping irrelevant data, the model moves faster and requires less energy. She said the model was designed with open-source collaboration in mind.
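To make the idea concrete, here is a minimal sketch of the difference between dense attention (score every position) and a simple top-k sparse variant (keep only the highest-scoring positions). This is an illustration of the general sparse-attention concept, not DeepSeek’s actual DSA mechanism, whose selection logic has not been detailed here; the function names and toy vectors are ours.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dense_attention(query, keys, values):
    # Dense: score every key, then take a softmax-weighted
    # average over ALL values -- cost grows with sequence length.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

def sparse_attention(query, keys, values, top_k):
    # Sparse: keep only the top_k highest-scoring positions and
    # ignore the rest -- that skipped work is where the savings come from.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    weights = softmax([scores[i] for i in keep])
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, keep)) for d in range(dim)]
```

When `top_k` equals the sequence length, the sparse version reduces to dense attention; the risk Almasque describes below is exactly what happens when the dropped positions turn out to matter.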

Nick Patience, who leads AI research at The Futurum Group, told CNBC the model has the potential to open up powerful AI tools to developers who can’t afford to use more expensive models. “It should make the model faster and more cost-effective to use without a noticeable drop in performance,” Nick said. But that doesn’t mean there aren’t risks.

The way DeepSeek uses sparse attention is like how airlines pick flight routes. There might be hundreds of ways to get from one place to another, but only a few make sense. The model filters through the noise and focuses on what matters — or at least what it thinks matters.

But this comes with concerns. Ekaterina Almasque, who cofounded BlankPage Capital, explained it simply: “So basically, you cut out things that you think are not important.” But the issue, she said, is that there’s no guarantee the model is cutting the right things.

Ekaterina, who has backed companies like Dataiku, Darktrace, and Graphcore, warned that cutting corners might create problems later. “They [sparse attention models] have lost a lot of nuances,” she said. “And then the real question is, did they have the right mechanism to exclude not important data, or is there a mechanism excluding really important data, and then the outcome will be much less relevant?”

Connects to Chinese chips and releases open code

Despite those concerns, DeepSeek insists that V3.2-Exp performs just as well as V3.1-Terminus. The model can also run directly on domestic Chinese chips like Ascend and Cambricon, with no extra configurations required. That’s key in China’s broader effort to build AI on homegrown hardware and reduce dependency on foreign tech. “Right out of the box,” Adina said, DeepSeek works with these chips.

The company also made the model’s full code and tools public. That means anyone can download, run, modify, or build on top of V3.2-Exp. This move aligns with DeepSeek’s open-source strategy, but it raises another issue: patents. Since the model is open and the core idea, sparse attention, has been around since 2015, DeepSeek can’t lock it down legally.

“The approach is not super new,” said Ekaterina. For her, the only defensible part of the tech is how DeepSeek chooses what to keep and what to ignore.

That’s where the real competition lies now. Not just in making smarter models, but making them faster, cheaper, and leaner — without screwing up results. Even DeepSeek called this version “an intermediate step toward our next-generation architecture,” which suggests they’re already working on something bigger.

Nick said the model shows that efficiency is now just as important as raw power. And Adina believes the company has a long-term play in mind. “DeepSeek is playing the long game to keep the community invested in their progress,” she said. “People will always go for what is cheap, reliable, and effective.”
