This article explores the implementation of gradient descent algorithms for minimizing global loss functions in neural networks, particularly in problems governed by Rankine-Hugoniot conditions. While gradient descent reliably converges, scalability issues arise when handling large domains with many coupled networks. To address this, a domain decomposition method (DDM) is introduced, enabling parallel optimization of local loss functions. The result is faster convergence, improved scalability, and a more efficient framework for training complex AI models.

Why Gradient Descent Converges (and Sometimes Doesn’t) in Neural Networks


Abstract and 1. Introduction

1.1. Introductory remarks

1.2. Basics of neural networks

1.3. About the entropy of direct PINN methods

1.4. Organization of the paper

  2. Non-diffusive neural network solver for one dimensional scalar HCLs

    2.1. One shock wave

    2.2. Arbitrary number of shock waves

    2.3. Shock wave generation

    2.4. Shock wave interaction

    2.5. Non-diffusive neural network solver for one dimensional systems of CLs

    2.6. Efficient initial wave decomposition

  3. Gradient descent algorithm and efficient implementation

    3.1. Classical gradient descent algorithm for HCLs

    3.2. Gradient descent and domain decomposition methods

  4. Numerics

    4.1. Practical implementations

    4.2. Basic tests and convergence for 1 and 2 shock wave problems

    4.3. Shock wave generation

    4.4. Shock-Shock interaction

    4.5. Entropy solution

    4.6. Domain decomposition

    4.7. Nonlinear systems

  5. Conclusion and References

3. Gradient descent algorithm and efficient implementation

In this section we discuss the implementation of gradient descent algorithms for solving the minimization problems (11), (20) and (35). We note that these problems involve a global loss functional measuring the residue of the HCL in the whole domain, as well as the Rankine-Hugoniot conditions, which results in the joint training of a number of neural networks. In all the tests we have performed, the gradient descent method converges and provides accurate results. We also note that in problems with a large number of DLs, the global loss functional couples a large number of networks, and the gradient descent algorithm may converge slowly. For these problems we present a domain decomposition method (DDM).
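
As a brief reminder of the Rankine-Hugoniot condition that enters these loss functionals, the minimal Python snippet below computes the shock speed it prescribes, using the classical Burgers flux f(u) = u^2/2. The flux and function names are chosen purely for illustration and are not taken from the problems (11), (20) or (35).

```python
# Rankine-Hugoniot condition for a scalar conservation law u_t + f(u)_x = 0:
# a shock separating states u_L and u_R must travel at speed
#     s = (f(u_R) - f(u_L)) / (u_R - u_L).
# Illustration with the Burgers flux f(u) = u^2 / 2 (an assumption for this sketch).

def rankine_hugoniot_speed(f, u_left, u_right):
    """Shock speed prescribed by the Rankine-Hugoniot jump condition."""
    return (f(u_right) - f(u_left)) / (u_right - u_left)

f_burgers = lambda u: 0.5 * u**2

u_left, u_right = 1.0, 0.0                       # entropic shock: u_left > u_right
s = rankine_hugoniot_speed(f_burgers, u_left, u_right)
print(s)                                         # 0.5 = (u_left + u_right) / 2 for Burgers
```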

3.1. Classical gradient descent algorithm for HCLs

All the problems (11), (20) and (35) being similar, we will demonstrate in detail the algorithm for problem (20). We assume that the solution is initially composed of i) D ∈ {1, 2, . . .} entropic shock waves emanating from x_1, . . . , x_D, ii) an arbitrary number of rarefaction waves, and that iii) there is no shock generation for t ∈ [0, T].
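
As a concrete, deliberately simplified sketch of such a classical gradient descent, the PyTorch code below jointly trains D + 1 networks representing the solution between consecutive shocks and D networks parameterizing the shock curves, by descending a single global loss built from the PDE residual and the Rankine-Hugoniot jump conditions. The Burgers flux, the architectures, the sampling and the Adam optimizer are assumptions made for illustration; the precise form of the loss (21) is not reproduced here.

```python
import torch

# Hypothetical sketch of the classical (global) gradient descent:
# D + 1 networks u_d(x, t) represent the solution between consecutive shocks,
# D networks parameterize the shock curves x_d(t), and one global loss couples
# them through the PDE residual and the Rankine-Hugoniot conditions.

D = 2                                    # number of shock waves (assumption)
f = lambda u: 0.5 * u**2                 # Burgers flux, for illustration only

def mlp(n_in):
    return torch.nn.Sequential(torch.nn.Linear(n_in, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

u_nets = [mlp(2) for _ in range(D + 1)]  # u_d(x, t), one network per region
s_nets = [mlp(1) for _ in range(D)]      # x_d(t), one network per shock curve

params = [p for net in u_nets + s_nets for p in net.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)  # plain gradient descent would also fit

def pde_residual(net, x, t):
    """Residual u_t + f(u)_x, computed by automatic differentiation."""
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    fu_x = torch.autograd.grad(f(u).sum(), x, create_graph=True)[0]
    return u_t + fu_x

for it in range(5000):                   # classical gradient descent loop
    opt.zero_grad()
    x = (torch.rand(256, 1) * 2 - 1).requires_grad_(True)   # space collocation
    t = torch.rand(256, 1).requires_grad_(True)             # time collocation
    # PDE residual of every piece (in practice each u_d is sampled in its own region).
    loss = sum((pde_residual(net, x, t) ** 2).mean() for net in u_nets)
    # Rankine-Hugoniot penalty along each shock curve x_d(t).
    for d, s_net in enumerate(s_nets):
        xs = s_net(t)                                        # shock position x_d(t)
        xs_t = torch.autograd.grad(xs.sum(), t, create_graph=True)[0]
        uL = u_nets[d](torch.cat([xs, t], dim=1))
        uR = u_nets[d + 1](torch.cat([xs, t], dim=1))
        loss = loss + ((xs_t * (uR - uL) - (f(uR) - f(uL))) ** 2).mean()
    loss.backward()
    opt.step()
```

In the full formulation the initial data and the initial shock positions x_1, . . . , x_D would be enforced as well; those terms are left out of the sketch for brevity.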


3.2. Gradient descent and domain decomposition methods

Rather than minimizing the global loss function (21) (or (12), (36)), we propose here to decouple the optimization of the neural networks and make it scalable. The approach is closely connected to domain decomposition methods (DDMs), more specifically Schwarz Waveform Relaxation (SWR) methods [21, 22, 23]. The resulting algorithm allows for the embarrassingly parallel minimization of local loss functions.
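
A minimal sketch of this decoupled optimization is given below, under assumed ingredients (two overlapping subdomains, Dirichlet-type transmission of the neighbour's previous iterate, the Burgers flux and small networks); none of these choices is claimed to be the paper's actual SWR formulation. Within each outer iteration, the local minimizations are independent and can be run in parallel.

```python
import copy
import torch

# Hypothetical Schwarz-type DDM sketch: each overlapping subdomain owns a local
# network, local losses are minimized independently (embarrassingly parallel),
# and subdomains exchange boundary data taken from the neighbour's PREVIOUS
# outer iterate.  Flux, subdomains and architectures are illustrative only.

f = lambda u: 0.5 * u**2

def mlp():
    return torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

subdomains = [(-1.0, 0.1), (-0.1, 1.0)]   # overlapping subdomains of (-1, 1)
interfaces = [0.1, -0.1]                  # artificial boundary of each subdomain
neighbour  = [1, 0]                       # index of the neighbouring subdomain
nets = [mlp() for _ in subdomains]

def local_loss(net, xlim, x_iface, neigh_prev):
    a, b = xlim
    x = (torch.rand(128, 1) * (b - a) + a).requires_grad_(True)
    t = torch.rand(128, 1).requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    fu_x = torch.autograd.grad(f(u).sum(), x, create_graph=True)[0]
    loss = ((u_t + fu_x) ** 2).mean()                # local PDE residual
    # Transmission condition: match the neighbour's previous iterate on the
    # artificial boundary x = x_iface (initial-data terms omitted for brevity).
    tb = torch.rand(64, 1)
    xb = torch.full_like(tb, x_iface)
    loss = loss + ((net(torch.cat([xb, tb], dim=1))
                    - neigh_prev(torch.cat([xb, tb], dim=1))) ** 2).mean()
    return loss

for k in range(20):                                  # outer DDM (Schwarz) iterations
    prev = [copy.deepcopy(n).requires_grad_(False) for n in nets]
    for d, net in enumerate(nets):                   # embarrassingly parallel loop
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(200):                         # local gradient descent
            opt.zero_grad()
            local_loss(net, subdomains[d], interfaces[d],
                       prev[neighbour[d]]).backward()
            opt.step()
```

Because each local problem only sees its own network and frozen copies of its neighbours, the inner loops map naturally onto separate processes or GPUs.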

In conclusion, the DDM becomes relevant thanks to its scalability, and whenever k_DDM k_Local < k_Global, where k_DDM denotes the number of DDM iterations, k_Local the number of gradient descent iterations per local loss, and k_Global the number of gradient descent iterations for the global loss; this is expected to hold for large D.
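
With purely hypothetical iteration counts (not taken from the paper), the comparison reads as follows and only illustrates when the DDM pays off.

```python
# Hypothetical iteration counts, purely to illustrate the criterion above.
k_global = 20_000     # gradient iterations needed on the global coupled loss
k_ddm    = 15         # outer Schwarz/DDM iterations
k_local  = 400        # gradient iterations per local loss, run in parallel

print(k_ddm * k_local < k_global)   # True: the per-worker local work is smaller
```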


:::info Authors:

(1) Emmanuel LORIN, School of Mathematics and Statistics, Carleton University, Ottawa, Canada, K1S 5B6 and Centre de Recherches Mathématiques, Université de Montréal, Montreal, Canada, H3T 1J4 (elorin@math.carleton.ca);

(2) Arian NOVRUZI, Corresponding Author, Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada (novruzi@uottawa.ca).

:::


:::info This paper is available on arXiv under CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
