By combining the advantages of state space models (SSMs) with attention mechanisms, SAMBA presents a hybrid neural architecture that enables effective, scalable language modeling with an almost infinite context length. SAMBA surpasses both pure attention-based and SSM-based models on a variety of reasoning, comprehension, and coding benchmarks when trained on SlimPajama with consistent setups. The model processes sequences of up to 256K tokens with little fine-tuning, achieving exceptional speed and extrapolation capacity.

How Hybrid AI Models Balance Memory and Efficiency


Abstract and 1. Introduction

  2. Methodology

  3. Experiments and Results

    3.1 Language Modeling on Textbook Quality Data

    3.2 Exploration on Attention and Linear Recurrence

    3.3 Efficient Length Extrapolation

    3.4 Long-Context Understanding

  4. Analysis

  5. Conclusion, Acknowledgement, and References

A. Implementation Details

B. Additional Experiment Results

C. Details of Entropy Measurement

D. Limitations


A Implementation Details

For the GLA layer in the Sliding GLA architecture, we use d_m/384 heads (where d_m is the model width), a key expansion ratio of 0.5, and a value expansion ratio of 1. For the RetNet layer, we use a number of heads equal to half the number of attention query heads, a key expansion ratio of 1, and a value expansion ratio of 2. The GLA and RetNet implementations are from the Flash Linear Attention repository[3] [YZ24]. We use the FlashAttention-based implementation for Self-Extend extrapolation[4]. The Mamba 432M model has a model width of 1024, and the Mamba 1.3B model has a model width of 2048. All models trained on SlimPajama use the same training configuration and MLP intermediate size as Samba, unless otherwise specified. The training infrastructure on SlimPajama is based on a modified version of the TinyLlama codebase[5].
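As an illustration only, the sketch below shows how these ratios translate into concrete head counts and projection widths for a given model width; the helper names and example widths are ours and are not part of the SAMBA codebase.

```python
# Illustrative sketch of the layer dimensioning described above; helper names
# and example widths are our own assumptions, not the authors' code.

def gla_dims(d_model: int):
    """Sliding GLA layer: d_model/384 heads, key expansion 0.5, value expansion 1."""
    num_heads = d_model // 384
    key_dim = int(d_model * 0.5)   # total key projection width
    value_dim = d_model * 1        # total value projection width
    return num_heads, key_dim, value_dim

def retnet_dims(d_model: int, num_attn_query_heads: int):
    """RetNet layer: half the attention query heads, key expansion 1, value expansion 2."""
    num_heads = num_attn_query_heads // 2
    key_dim = d_model * 1
    value_dim = d_model * 2
    return num_heads, key_dim, value_dim

if __name__ == "__main__":
    # Hypothetical 1536-wide model with 12 attention query heads.
    print(gla_dims(1536))         # -> (4, 768, 1536)
    print(retnet_dims(1536, 12))  # -> (6, 1536, 3072)
```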

Table 10: Detailed hyper-parameters of the SAMBA models trained at different scales. We only show the optimization settings for the first training phase of the 3.8B model.

In the generation configurations for the downstream tasks, we use greedy decoding for GSM8K, and nucleus sampling [HBD+19] with a temperature of τ = 0.2 and top-p = 0.95 for HumanEval. For MBPP and SQuAD, we set τ = 0.01 and top-p = 0.95.
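As a minimal sketch, these decoding settings could be expressed as Hugging Face `generate()` keyword arguments; the task-to-argument mapping below is our illustration under that assumption, not the authors' evaluation harness.

```python
# Hedged illustration: map each task to decoding arguments matching the
# settings above, using Hugging Face-style generate() kwargs.
GENERATION_CONFIGS = {
    "gsm8k":     dict(do_sample=False),                               # greedy decoding
    "humaneval": dict(do_sample=True, temperature=0.2,  top_p=0.95),  # nucleus sampling
    "mbpp":      dict(do_sample=True, temperature=0.01, top_p=0.95),
    "squad":     dict(do_sample=True, temperature=0.01, top_p=0.95),
}

def generate_answer(model, tokenizer, prompt: str, task: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt` with the task-specific decoding settings."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, **GENERATION_CONFIGS[task]
    )
    # Strip the prompt tokens and return only the newly generated text.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```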

B Additional Experiment Results

Figure 6: Training loss curves of the Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning on Passkey Retrieval with a 4K sequence length. We plot the loss curves for both models using a simple moving average with a window size of 10.
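For reference, this kind of smoothing is a plain simple moving average; a minimal NumPy sketch (our illustration, not the authors' plotting code) is:

```python
import numpy as np

def moving_average(losses, window: int = 10):
    """Smooth a noisy training-loss curve with a simple moving average."""
    kernel = np.ones(window) / window
    # mode="valid" keeps only positions where the full window fits.
    return np.convolve(np.asarray(losses, dtype=float), kernel, mode="valid")
```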


Figure 7: Overall passkey retrieval accuracy at a 256K document length for the Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning.


C Details of Entropy Measurement


D Limitations

Although Samba demonstrates promising memory retrieval performance through instruction tuning, its pre-trained base model has retrieval performance similar to that of the SWA-based model, as shown in Figure 7. This opens up a future direction of further improving Samba's retrieval ability without compromising its efficiency and extrapolation ability. In addition, the hybridization strategy of Samba is not consistently better than the alternatives on all tasks. As shown in Table 2, Mamba-SWA-MLP shows improved performance on tasks such as WinoGrande, SIQA, and GSM8K. This suggests the potential of investing in more sophisticated approaches that perform input-dependent dynamic combination of SWA-based and SSM-based models.


:::info Authors:

(1) Liliang Ren, Microsoft and University of Illinois at Urbana-Champaign (liliangren@microsoft.com);

(2) Yang Liu†, Microsoft (yaliu10@microsoft.com);

(3) Yadong Lu†, Microsoft (yadonglu@microsoft.com);

(4) Yelong Shen, Microsoft (yelong.shen@microsoft.com);

(5) Chen Liang, Microsoft (chenliang1@microsoft.com);

(6) Weizhu Chen, Microsoft (wzchen@microsoft.com).

:::


:::info This paper is available on arXiv under a CC BY 4.0 license.

:::

[3] https://github.com/sustcsonglin/flash-linear-attention

[4] https://github.com/datamllab/LongLM/blob/master/selfextendpatch/Llama.py

[5] https://github.com/jzhang38/TinyLlama
