This study addresses the crucial problem of instability in hyperbolic deep learning, particularly when the curvature of the manifold is itself learned. The authors point out a fundamental weakness of naive techniques: performance deteriorates when the curvature parameter is updated before the model parameters, invalidating the Riemannian gradients and projections. They address this with a new ordered projection schema that first projects the model parameters to a stable tangent space, then updates the curvature, and finally re-projects the parameters onto the new manifold.

Understanding Training Stability in Hyperbolic Neural Networks


Abstract and 1. Introduction

  2. Related Work

  3. Methodology

    3.1 Background

    3.2 Riemannian Optimization

    3.3 Towards Efficient Architectural Components

  4. Experiments

    4.1 Hierarchical Metric Learning Problem

    4.2 Standard Classification Problem

  5. Conclusion and References

3.1 Background


3.2 Riemannian Optimization

Optimizers for Learned Curvatures In their hyperbolic learning library GeoOpt, Kochurov et al. [21] make the curvature of the hyperbolic space a learnable parameter. However, we have found no further work that makes proper use of this feature, and our empirical tests show that this approach often results in greater instability and degraded performance. We attribute these issues to the naive implementation of curvature updates, which fails to incorporate the updated hyperbolic operations into the learning algorithm. Specifically, Riemannian optimizers rely on Riemannian projections of Euclidean gradients and momenta onto the tangent spaces at the hyperbolic parameters. These operations depend on the current properties of the manifold that houses the parameters being updated. From this, we can identify one main issue with the naive curvature learning approach.

The order in which parameters are updated is crucial. Specifically, if the curvature of the space is updated before the hyperbolic parameters, the Riemannian projections and tangent projections of the gradients and momenta become invalid. This happens because the projection operations begin using the new curvature value even though the hyperbolic parameters, hyperbolic gradients, and momenta have not yet been re-projected onto the new manifold.
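To make this dependence concrete, the sketch below shows a single momentum-free Riemannian SGD step written against a GeoOpt-style API. Both manifold calls use the manifold's current curvature, so changing the curvature first leaves the parameter on a manifold that the calls no longer describe. The function is illustrative, not GeoOpt's implementation.

```python
import torch
import geoopt

def riemannian_sgd_step(param: geoopt.ManifoldParameter, lr: float):
    manifold = param.manifold  # e.g. geoopt.Lorentz with learnable curvature
    # Project the Euclidean gradient onto the tangent space at the
    # parameter; this projection depends on the current curvature.
    rgrad = manifold.egrad2rgrad(param, param.grad)
    with torch.no_grad():
        # The exponential map likewise depends on the current curvature.
        param.copy_(manifold.expmap(param, -lr * rgrad))
```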

To resolve this issue, we propose a projection schema and an ordered parameter-update process. To sequentialize the optimization, we first update all manifold and Euclidean parameters, and only then update the curvatures. Next, we parallel transport all Riemannian gradients and project all hyperbolic parameters to the tangent space at the origin using the old curvature value. Since this tangent space remains invariant when the manifold curvature changes, the points can be treated as lying on the tangent space at the origin of the new manifold as well. We then re-project the hyperbolic tensors back onto the manifold using the new curvature value and parallel transport the Riemannian gradients to their respective parameters. This process is illustrated in Algorithm 1.
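A minimal sketch of this ordered schema follows, assuming a GeoOpt-style Lorentz manifold with `logmap0`/`expmap0` and a curvature tensor `manifold.k`; optimizer-state transport is indicated in comments only, and all names are illustrative rather than the paper's implementation.

```python
import torch

def ordered_curvature_update(manifold, hyp_params, curvature_step):
    with torch.no_grad():
        # 1. All manifold and Euclidean parameters have already been
        #    updated under the OLD curvature at this point.
        # 2. Anchor every hyperbolic parameter in the tangent space at
        #    the origin, which is shared across curvatures.
        tangents = [manifold.logmap0(p) for p in hyp_params]
        #    (Riemannian gradients and momenta would likewise be
        #    parallel transported to the origin here.)
        # 3. Only now update the curvature.
        manifold.k.add_(curvature_step)
        # 4. Re-project everything onto the new manifold and transport
        #    the optimizer state back to each parameter.
        for p, u in zip(hyp_params, tangents):
            p.copy_(manifold.expmap0(u))
```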


Riemannian AdamW Optimizer Recent works, especially with transformers, rely on the AdamW optimizer proposed by Loshchilov and Hutter [26] for training. At present, there is no established Riemannian variant of this optimizer. We attempt to derive AdamW for the Lorentz manifold and argue that a similar approach could be generalized to the Poincaré ball. The main difference between AdamW and Adam is the direct weight regularization, which is more difficult to perform in the Lorentz space given the lack of an intuitive subtraction operation on the manifold. To resolve this, we instead model the regularized parameter as a weighted centroid with the origin. The regularization schema becomes:
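The display below is a hedged reconstruction of that schema. Euclidean AdamW decay, $\theta \leftarrow (1-\eta\lambda)\theta$, is a convex combination of the parameter with the origin, so a Lorentz analogue can be written as a weighted Lorentzian centroid of the parameter $x$ and the hyperboloid origin $\mathbf{o}$ (standard Lorentzian-centroid normalization; the exact expression in the paper may differ):

$$x \;\leftarrow\; \operatorname{Cent}\big(\{x, \mathbf{o}\},\ \{1-\eta\lambda,\ \eta\lambda\}\big), \qquad \operatorname{Cent}\big(\{x_i\},\ \{\nu_i\}\big) \;=\; \sqrt{K}\,\frac{\sum_i \nu_i x_i}{\big\lvert\, \lVert \sum_i \nu_i x_i \rVert_{\mathcal{L}} \,\big\rvert},$$

where $\eta$ is the learning rate, $\lambda$ the weight-decay coefficient, $-1/K$ the manifold curvature, and $\lVert\cdot\rVert_{\mathcal{L}}$ the Lorentzian norm.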


As such, we propose a maximum-distance rescaling function on the tangent space at the origin to conform to the representational capacity of hyperbolic manifolds.
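One plausible form of this rescaling, assuming the tanh-based soft clamp that the following paragraphs call TanhScaling (the radius $r$ is a hyperparameter; this is a sketch of the general shape, not necessarily the paper's exact formula), is:

$$\tilde{u} \;=\; \tanh\!\left(\frac{\lVert u \rVert}{r}\right) r\,\frac{u}{\lVert u \rVert},$$

which leaves small tangent vectors $u$ at the origin approximately unchanged while capping their norm at $r$.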


Specifically, we apply it when moving parameters across different manifolds. This includes moving from Euclidean space to the Lorentz space and moving between Lorentz spaces of different curvatures. We also apply the scaling after Lorentz Boosts and direct Lorentz concatenations [31]. Additionally, we add this operation after the variance-based rescaling in the batchnorm layer, because we run into situations where adjusting to the variance pushes the points outside the representational radius during the operation.
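As an illustration, the snippet below applies a tanh-based clamp of the form sketched above when moving a point between Lorentz manifolds of different curvatures; `tanh_rescale` is an assumed form of TanhScaling and the manifold API is GeoOpt-style, not the paper's code.

```python
import torch

def tanh_rescale(u: torch.Tensor, max_dist: float) -> torch.Tensor:
    # Soft-clamp tangent-space norms to max_dist (assumed TanhScaling form).
    norm = u.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.tanh(norm / max_dist) * max_dist * u / norm

def transfer(x, src_manifold, dst_manifold, max_dist=4.0):
    # Log-map at the source origin, clamp, exp-map onto the target manifold.
    u = src_manifold.logmap0(x)
    return dst_manifold.expmap0(tanh_rescale(u, max_dist))
```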

3.3 Towards Efficient Architectural Components

Lorentz Convolutional Layer In their work, Bdeir et al. [1] dissect the convolution operation into a window-unfolding followed by a modified version of the Lorentz linear layer of Chen et al. [3]. However, an alternative definition of the Lorentz linear layer is offered by Dai et al. [5], based on a direct decomposition of the operation into a Lorentz boost and a Lorentz rotation. We follow the dissection scheme of Bdeir et al. [1] but rely on Dai et al.'s [5] alternative definition of the Lorentz linear transformation. The core change is moving from a matrix multiplication on the spatial dimensions followed by a re-projection to learning an individual rotation operation and a Lorentz Boost:


out = LorentzBoost(TanhScaling(RotationConvolution(x)))

where TanhScaling is the rescaling operation described in 2, and RotationConvolution is a standard convolution parameterized through the procedure in 2, in which Orthogonalize is a Cayley transformation similar to [16]. We use the Cayley transformation in particular because it always results in an orthonormal matrix with a positive determinant, which prevents the rotated point from being carried to the lower sheet of the hyperboloid.
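For illustration, a generic Cayley-transform orthogonalization is sketched below; this is the standard construction with the stated property (a skew-symmetric input yields an orthogonal output with determinant +1), not the authors' code.

```python
import torch

def cayley_orthogonalize(W: torch.Tensor) -> torch.Tensor:
    """Map an unconstrained square matrix to a rotation matrix.

    A = (W - W^T)/2 is skew-symmetric, so I + A is invertible and
    Q = (I + A)^{-1} (I - A) is orthogonal with det(Q) = +1; the positive
    determinant keeps rotated points on the upper hyperboloid sheet.
    """
    A = 0.5 * (W - W.transpose(-1, -2))
    I = torch.eye(W.shape[-1], dtype=W.dtype, device=W.device)
    return torch.linalg.solve(I + A, I - A)
```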

Lorentz-Core Bottleneck Block In an effort to expand on the idea of hybrid hyperbolic encoders [1], we design the Lorentz-Core Bottleneck block for hyperbolic ResNet-based models. It is similar to a standard Euclidean bottleneck block, except that we replace the internal 3x3 convolutional layer with our efficient convolutional layer, as seen in Figure 1. We thereby benefit from a hyperbolic structuring of the embeddings in each block while maintaining the flexibility and speed of Euclidean models. We interpret this integration as a form of hyperbolic bias that can be adopted into ResNets without strict hyperbolic modeling.
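A minimal sketch of such a block follows, with `LorentzConv2d`, `to_lorentz`, and `to_euclidean` as hypothetical stand-ins for the paper's Lorentz convolution and manifold projections (normalization and activations omitted):

```python
import torch.nn as nn

class LorentzCoreBottleneck(nn.Module):
    # Euclidean bottleneck whose 3x3 core is hyperbolic (sketch; assumes
    # in_ch == out_ch so the identity shortcut is valid).
    def __init__(self, in_ch, mid_ch, out_ch, manifold):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False)
        self.core = LorentzConv2d(mid_ch, mid_ch, kernel_size=3,
                                  padding=1, manifold=manifold)  # hypothetical
        self.expand = nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        h = self.reduce(x)
        h = to_euclidean(self.core(to_lorentz(h)))  # hyperbolic 3x3 core
        return self.expand(h) + x                   # Euclidean residual path
```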



:::info Authors:

(1) Ahmad Bdeir, Data Science Department, University of Hildesheim (bdeira@uni-hildesheim.de);

(2) Niels Landwehr, Data Science Department, University of Hildesheim (landwehr@uni-hildesheim.de).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::
