This study examines how Sparse Spectral Training (SST) improves compression and efficiency in large language models compared to GaLore. Results show that SST retains lower perplexity across high pruning ratios and consistently outperforms GaLore in memory-efficient training experiments on datasets such as IWSLT’14 and OpenWebText. By concentrating essential information into fewer singular values, SST enables lighter, faster, and more capable models, making it a promising approach for scalable, high-performance AI inference.

SST vs. GaLore: The Battle for the Most Efficient AI Brain

Abstract and 1. Introduction

  2. Related Work

  3. Low Rank Adaptation

    3.1 LoRA and 3.2 Limitation of LoRA

    3.3 ReLoRA*

  4. Sparse Spectral Training

    4.1 Preliminaries and 4.2 Gradient Update of U, VT with Σ

    4.3 Why SVD Initialization is Important

    4.4 SST Balances Exploitation and Exploration

    4.5 Memory-Efficient Implementation for SST and 4.6 Sparsity of SST

  5. Experiments

    5.1 Machine Translation

    5.2 Natural Language Generation

    5.3 Hyperbolic Graph Neural Networks

  6. Conclusion and Discussion

  7. Broader Impacts and References

Supplementary Information

A. Algorithm of Sparse Spectral Training

B. Proof of Gradient of Sparse Spectral Layer

C. Proof of Decomposition of Gradient of Weight

D. Proof of Advantage of Enhanced Gradient over Default Gradient

E. Proof of Zero Distortion with SVD Initialization

F. Experiment Details

G. Singular Value Pruning

H. Evaluating SST and GaLore: Complementary Approaches to Memory Efficiency

I. Ablation Study

G Singular Value Pruning

We further conduct an analysis of the potential for using the SST model for further compression. The results, shown in Figure 3, indicate that the SST model retains lower perplexity across a wider range of pruning ratios compared to the full-rank model. This suggests that SST effectively concentrates the informational content of the weights into fewer singular values, making it more suitable for further compression.

This enhanced performance underscores the potential of SST in maintaining essential model characteristics even under significant compression, making it a promising approach for developing lightweight yet powerful language models for inference.

Figure 3: Singular Value Pruning. We conduct singular value pruning on full-rank and SST pretrained OPT-125M models. After performing singular value decomposition on the weight matrices, we preserve the top-k singular values so that the cumulative sum of preserved singular values ranges over [100%, 99%, 98%, …, 93%, 90%] of the original cumulative sum. The pruned ratio of singular values is plotted along the x-axis.
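For clarity, below is a minimal sketch of the pruning procedure described in the caption, assuming PyTorch; the function name and the reconstruction step are illustrative and not taken from the released code.

```python
import torch

def prune_singular_values(weight, keep_ratio=0.95):
    """Prune a 2-D weight matrix by dropping its smallest singular values.

    Keeps the top-k singular values whose cumulative sum reaches
    `keep_ratio` of the total (the 100%..90% targets in Figure 3)
    and reconstructs a lower-rank approximation of the weight.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    cumulative = torch.cumsum(S, dim=0) / S.sum()
    # Smallest k such that the first k singular values reach the target.
    k = int((cumulative < keep_ratio).sum().item()) + 1
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]
```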


H Evaluating SST and GaLore: Complementary Approaches to Memory Efficiency

Recently, a new approach named Gradient Low-Rank Projection (GaLore) has been proposed to address the memory challenges associated with training large language models. By projecting gradients into a low-rank subspace, GaLore improves training efficiency without compromising the training dynamics in the way that traditional low-rank adaptation methods, such as LoRA, often do.
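To make the comparison concrete, here is a minimal sketch of the gradient low-rank projection idea, assuming a PyTorch setting. The function name, the use of plain SGD in place of Adam, and the per-call SVD are illustrative simplifications; GaLore itself keeps optimizer state in the projected space and refreshes the projection only periodically.

```python
import torch

def galore_style_step(weight, grad, rank=64, lr=1e-3, scale=1.0, proj=None):
    """Sketch of a GaLore-style update for one 2-D weight matrix.

    The gradient is projected onto its top-`rank` left singular vectors,
    the (small) projected gradient is what an optimizer would track, and
    the resulting update is projected back to the full weight shape.
    """
    if proj is None:
        # GaLore recomputes this projection only every few hundred steps.
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        proj = U[:, :rank]                       # shape (m, r)
    low_rank_grad = proj.T @ grad                # shape (r, n): memory-light statistics
    update = proj @ low_rank_grad                # back-project to shape (m, n)
    with torch.no_grad():
        weight -= lr * scale * update            # plain SGD stands in for Adam here
    return proj                                  # reuse until the next refresh
```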

Using the released code of GaLore [2], we conducted comparative experiments on the IWSLT’14 dataset with Transformer models, employing the same configurations as the other low-rank methods. We set the scale factor α = 1 in these experiments because α = 0.25, the value used in the GaLore article, performs much worse than α = 1. As illustrated in Table 9, the SST method consistently outperformed GaLore across various model dimensions and ranks, except for d = 256, r = 32.

In addition, we evaluated validation perplexity on the OpenWebText dataset with OPT-125M models. We tested GaLore with scale factor α = 0.25 (as used in the article) and α = 1. As shown in Table 10, SST surpassed GaLore under both settings of α.


Table 9: BLEU scores on IWSLT’14 for the Euclidean Transformer, compared with GaLore. Values highlighted in bold represent the highest performance among the low-rank methods, while those marked with an “*” denote performance that exceeds that of the full-rank variants.

Table 10: Validation perplexity on the OpenWebText dataset with OPT-125M, compared with GaLore, along with the number of trainable parameters of each method (r = 64). Values highlighted in bold represent the best performance among the low-rank methods.


I Ablation Study

We conduct an ablation study to evaluate the impact of various components and configurations within SST on the IWSLT’14 dataset, using a Euclidean Transformer with a dimension of 128 and a rank r of 4. The results of this study are summarized in Table 11, which highlights the contributions of specific elements to the overall performance measured in BLEU score.


Table 11: Ablation study on the IWSLT’14 dataset with the Euclidean Transformer. Dimension is 128 and r is 4.

Figure 4: Singular Value Distribution. This visualization depicts the distribution of singular values for the OPT-125M model trained with full-rank training, LoRA, and SST (r = 64). The x-axis represents the index of singular values, sorted from largest to smallest, while the y-axis shows the magnitude of each value. It highlights how LoRA predominantly captures and overestimates the top-r singular values, in contrast to SST, whose distribution is much closer to that of full-rank training.
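A plot like Figure 4 can be reproduced with a short script of the following kind (a sketch assuming PyTorch and matplotlib; the function name and styling are illustrative):

```python
import torch
import matplotlib.pyplot as plt

def plot_singular_value_spectra(model, title):
    """Plot sorted singular values of every 2-D weight matrix in a model."""
    for name, param in model.named_parameters():
        if param.dim() != 2:
            continue  # skip biases and other non-matrix parameters
        s = torch.linalg.svdvals(param.detach().float().cpu())  # sorted descending
        plt.plot(s.numpy(), alpha=0.4)
    plt.xlabel("singular value index (largest to smallest)")
    plt.ylabel("magnitude")
    plt.title(title)
    plt.show()
```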


:::info Authors:

(1) Jialin Zhao, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(2) Yingtao Zhang, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(3) Xinghang Li, Department of Computer Science;

(4) Huaping Liu, Department of Computer Science;

(5) Carlo Vittorio Cannistraci, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Department of Computer Science, and Department of Biomedical Engineering, Tsinghua University, Beijing, China.

:::


:::info This paper is available on arXiv under a CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
