
DeepSeek Introduces mHC Architecture to Improve Large Model Training

TLDR

  • DeepSeek introduced Manifold-Constrained Hyper-Connections (mHC) to improve large-model training scalability and efficiency.
  • The mHC method was tested on 3B, 9B, and 27B parameter models, showing stable performance without added computational cost.
  • mHC builds on ByteDance’s 2024 hyper-connection architecture by adding a manifold constraint to reduce memory overhead.
  • CEO Liang Wenfeng co-authored and uploaded the paper, reaffirming his direct involvement in DeepSeek’s technical development.
  • Industry observers expect a new DeepSeek model release ahead of Spring Festival 2026, based on the company’s publication patterns.

DeepSeek has released a new AI training method, Manifold-Constrained Hyper-Connections (mHC), in a paper uploaded to arXiv by CEO Liang Wenfeng. The architecture aims to improve training scalability for large models while keeping computational costs low. Researchers tested the method on models with 3, 9, and 27 billion parameters, showing consistent training efficiency. This comes as the company is expected to launch a new model before the Spring Festival in February 2026.

DeepSeek Builds on ResNet and Hyper-Connection Foundations

According to a report by SCMP, the mHC method refines the hyper-connection (HC) design ByteDance proposed in 2024 as an improvement on ResNet. ResNet's residual (skip) connections make very deep networks trainable by preserving signal strength across layers, but the approach struggles to keep learning efficient at large scale. ByteDance's HC improved signal flow by widening the residual stream, but did not fully address memory usage in larger models.
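The stream-widening idea behind hyper-connections can be sketched in a few lines. This is a minimal illustrative sketch, not DeepSeek's or ByteDance's actual implementation: a classic residual block adds a layer's output back to its input, while a hyper-connection block keeps `n` parallel copies of the hidden state and mixes them with learnable weights (the names `f`, `w_in`, `W_res`, and `w_out` are assumptions for illustration).

```python
import numpy as np

def f(x):
    """Stand-in for a transformer sub-layer (attention or MLP)."""
    return np.tanh(x)

def residual_block(h):
    # Classic ResNet skip connection: output = input + layer(input).
    return h + f(h)

def hyper_connection_block(H, w_in, W_res, w_out):
    """Simplified hyper-connection block over n parallel residual streams.

    H:     (n, d) array -- n copies of the d-dimensional hidden state
    w_in:  (n,)   weights combining the streams into the layer input
    W_res: (n, n) weights mixing the streams with each other
    w_out: (n,)   weights scattering the layer output back to the streams
    """
    x = w_in @ H                               # read the layer input from the streams
    return W_res @ H + np.outer(w_out, f(x))   # mix streams, write the output back

# With n = 1 and identity weights this reduces exactly to a residual block.
h = np.linspace(-1.0, 1.0, 4)
single = hyper_connection_block(h[None, :], np.array([1.0]),
                                np.eye(1), np.array([1.0]))
assert np.allclose(single[0], residual_block(h))
```

The extra streams are what improve signal flow, and also what inflate activation memory in larger models, which is the gap the article says mHC targets.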

DeepSeek added a manifold constraint that limits this expansion, keeping memory and compute costs under control during training. The adjustment preserves the benefits of HC while making the network practical for larger training runs. The researchers report that mHC maintained performance at scale without increasing per-unit computational overhead.
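The article does not spell out what form DeepSeek's constraint takes. As a purely illustrative sketch of the general idea, restricting mixing weights to a well-behaved set so signals neither blow up nor vanish, here is Sinkhorn normalization, which projects a weight matrix toward the set of doubly stochastic matrices (every row and column sums to 1). The function name and the choice of manifold are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def sinkhorn_project(W, n_iters=100):
    """Illustrative manifold constraint: alternately rescale rows and
    columns so W approaches a doubly stochastic matrix. Such a mixing
    matrix keeps the total signal mass across residual streams constant,
    which is one way to bound growth across many layers."""
    W = np.exp(W)  # make all entries positive first
    for _ in range(n_iters):
        W /= W.sum(axis=1, keepdims=True)  # rows sum to 1
        W /= W.sum(axis=0, keepdims=True)  # columns sum to 1
    return W

rng = np.random.default_rng(0)
P = sinkhorn_project(rng.standard_normal((4, 4)))
assert np.allclose(P.sum(axis=0), 1.0, atol=1e-6)
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-6)
```

The appeal of a projection like this is that it adds only a cheap reparameterization of the mixing weights, consistent with the paper's claim of minimal infrastructure changes.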

Lead authors Zhenda Xie, Yixuan Wei, and Huanqi Cao explained that the system enables stable deep learning without collapse. They confirmed mHC works with minimal infrastructure adjustments, making it efficient for broader deployment. The architecture was tested across multiple model sizes, confirming the technique’s adaptability and reliability. DeepSeek reported that the method handled signal preservation and scalability better than previous HC-based frameworks.

Liang Wenfeng Directly Leads Technical Advancement

CEO Liang Wenfeng was listed as the final author and uploaded the paper himself, continuing his role in major DeepSeek research. He has consistently shared technical papers tied to the company’s flagship models, such as R1 and V3, on arXiv. Other researchers typically upload supporting studies not directly tied to product development.

His involvement in this paper signals continued leadership in the company’s core AI work. The release underscores DeepSeek’s approach of linking internal research closely with future product direction. Florian Brand, a PhD researcher at Trier University, said DeepSeek papers often indicate what models are coming next.

He noted that the R1 model followed a similar publish-then-launch pattern. Liang’s involvement has again drawn attention from analysts watching DeepSeek’s release schedule. The company has not announced a date or shared details, but its publication cadence has become predictable, and the latest research upload suggests a new system is in development.

The post DeepSeek Introduces mHC Architecture to Improve Large Model Training appeared first on Blockonomi.

