Drawing from Barron, Hornik, and Telgarsky, it proves neural networks yield superior efficiency in higher-dimensional pricing tasks.

Mathematics of Differential Machine Learning in Derivative Pricing and Hedging: Choice of Basis

2025/11/04 23:18

Table of Links

Abstract

  1. Keywords and 2. Introduction

  3. Set up

  4. From Classical Results into Differential Machine Learning

    4.1 Risk Neutral Valuation Approach

    4.2 Differential Machine Learning: building the loss function

  5. Example: Digital Options

  6. Choice of Basis

    6.1 Limitations of the Fixed-basis

    6.2 Parametric Basis: Neural Networks

  7. Simulation-European Call Option

    7.1 Black-Scholes

    7.2 Hedging Experiment

    7.3 Least Squares Monte Carlo Algorithm

    7.4 Differential Machine Learning Algorithm

  8. Numerical Results

  9. Conclusion

  10. Conflict of Interests Statement and References

Notes

6 Choice of Basis

A parametric basis can be thought of as a set of functions made up of linear combinations of relatively few basis functions with a simple structure that depend non-linearly on a set of “inner” parameters (e.g., feed-forward neural networks with one hidden layer and linear output units). In contrast, classical approximation schemes do not use inner parameters but employ fixed basis functions, and the corresponding approximators depend only linearly on the external parameters.
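To make the distinction concrete, here is a minimal sketch (the function names are illustrative, not from the paper) contrasting a fixed monomial basis, which is linear in its outer coefficients, with a one-hidden-layer network whose basis functions depend non-linearly on inner parameters:

```python
import numpy as np

def fixed_basis_model(x, c):
    """Fixed monomial basis {1, x, x^2, ...}: linear in the outer weights c."""
    return sum(c_k * x**k for k, c_k in enumerate(c))

def one_layer_net(x, W, b, c):
    """Parametric basis: each tanh(W_k * x + b_k) depends non-linearly on the
    inner parameters (W_k, b_k); only the output layer is linear in c."""
    hidden = np.tanh(np.outer(x, W) + b)  # shape (len(x), width)
    return hidden @ c
```

Fitting the first model is an ordinary least-squares problem in c; fitting the second requires non-convex optimization over (W, b, c) jointly.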

However, experience has shown that optimizing functionals over a variable basis, such as feed-forward neural networks, often yields surprisingly good suboptimal solutions.

A well-known functional-analytic fact is that, by the Stone-Weierstrass theorem, it is possible to construct several examples of a fixed basis, such as the monomial basis: a set that is dense in the space of continuous functions and whose completion is L2. The limitations of the fixed basis are well studied and can be summarized as follows.


6.1 Limitations of the Fixed-basis

The bias-variance trade-off can be translated into two major problems:


  1. Underfitting occurs when high bias causes an algorithm to miss the relevant relations between features and target outputs. This happens with a small number of parameters; in the previous terminology, it corresponds to a low d value (see Equation 4).


  2. Variance is an error of sensitivity to small fluctuations in the training set; it measures the spread of our predictions. High variance can cause an algorithm to model the random noise in the training data rather than the intended outputs, which is known as overfitting. This, in turn, happens with a high number of parameters; in the previous terminology, it corresponds to a high d value (see Equation 4).
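A hedged numerical sketch of this trade-off for a fixed monomial basis (the target function, noise level, and sample sizes are my own illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target on [-1, 1].
x_train = np.sort(rng.uniform(-1.0, 1.0, 20))
y_train = np.sin(3.0 * x_train) + 0.1 * rng.normal(size=20)
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(3.0 * x_test)

def fixed_basis_fit(degree):
    """Least-squares fit in the monomial basis {1, x, ..., x^degree}."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for d in (1, 5, 15):
    tr, te = fixed_basis_fit(d)
    print(f"d = {d:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

A low degree (d = 1) underfits: both errors are high. A high degree (d = 15) nearly interpolates the 20 noisy points: training error collapses while generalization suffers relative to a moderate choice.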

The following result summarizes the problem discussed. I state it as in Barron, 1993; the proof can be found in Barron, 1993 and Gnecco et al., 2012.

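In rough terms, with constants and technical conditions omitted (see Barron, 1993 and Gnecco et al., 2012 for the precise statements), the contrast is the following: for targets $f$ in the Barron class $\Gamma_C$ (functions whose Fourier transform satisfies $\int \|\omega\|\,|\hat f(\omega)|\,d\omega \le C$), any fixed basis suffers the curse of dimensionality in the input dimension $n$, while networks do not.

```latex
% (i) Fixed basis h_1, ..., h_d: the worst-case L2 error over \Gamma_C of the
%     best linear combination decays only like d^{-1/n}:
\sup_{f \in \Gamma_C} \, \min_{c \in \mathbb{R}^d}
  \Big\| f - \sum_{k=1}^{d} c_k h_k \Big\|_{L^2}
  \;\gtrsim\; C \, d^{-1/n}
% (ii) One-hidden-layer sigmoidal network with d nodes: a dimension-independent
%      rate is achievable (cf. Section 6.2.2):
\inf_{f_d \in \mathrm{NN}_d} \| f - f_d \|_{L^2}
  \;\lesssim\; \frac{C}{\sqrt{d}}
```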

So there is a need to study classes of bases that can adapt to the data. That is the case with the parametric basis.


6.2 Parametric Basis: Neural Networks

From Hornik et al., 1989, we have the following relevant result: multi-layer feed-forward networks with as few as one hidden layer and a suitable squashing activation are universal approximators, i.e., they can approximate any Borel-measurable function on a compact set to arbitrary accuracy, given sufficiently many hidden units.

The flexibility and approximation power of neural networks make them an excellent choice as the parametric basis.

6.2.1 Depth

In practical applications, it has been noted that a multi-layer neural network outperforms a single-layer neural network. This is still a question under investigation, since state-of-the-art mathematical theories cannot fully account for the comparative success of multi-layer networks. However, it is possible to construct counter-examples where a single-layer neural network cannot approach the target function, as in the following proposition:

Therefore, it is beneficial, or at least risk-averse, to select a multi-layer feed-forward neural network instead of a single-layer feed-forward neural network.
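A small numerical illustration of this depth separation, in the spirit of Telgarsky's triangle-map construction (the grid size and piece-counting procedure are my own illustrative choices): composing a two-piece "tent" map k times yields a piecewise-linear function with 2^k pieces, whereas a one-hidden-layer ReLU network with m nodes can produce at most m + 1 pieces, so matching depth with width requires exponentially many nodes.

```python
import numpy as np

def tent(x):
    """Two-piece 'tent' map on [0, 1]; expressible with two ReLU units."""
    return np.where(x < 0.5, 2.0 * x, 2.0 - 2.0 * x)

def iterated_tent(x, k):
    """Compose the tent map k times (representable by a depth-k ReLU net)."""
    for _ in range(k):
        x = tent(x)
    return x

x = np.linspace(0.0, 1.0, 100_001)
pieces_by_depth = {}
for k in (1, 3, 6):
    y = iterated_tent(x, k)
    # Count linear pieces via sign changes of the discrete slope, dropping
    # zero slopes at grid intervals that exactly straddle a kink.
    s = np.sign(np.diff(y))
    s = s[s != 0]
    pieces_by_depth[k] = 1 + int(np.count_nonzero(np.diff(s)))
    print(f"depth {k}: {pieces_by_depth[k]} linear pieces (2**k = {2**k})")
```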

6.2.2 Width

This section draws inspiration from the works of Barron, 1993 and Telgarsky, 2020. Its primary objective is to investigate the approximating capabilities of a neural network as a function of the number of nodes, or neurons. I elaborate on this result because it is not so well known and, unlike Barron, 1994, it does not require any assumption on the activation function.


This sampling procedure represents the target in the mean: the expectation of the d-node empirical average recovers the target function exactly, while its variance decays at rate 1/d.


As the number of nodes, d, increases, the approximation capability improves. This result, contrary to Proposition 5.1, establishes an upper bound that is independent of the dimension of the target function. Comparing both theorems, it can be argued that feed-forward neural networks have a clear advantage when d > 2, d ∈ N.
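The dimension-independent 1/d rate can be checked empirically. The following sketch uses my own illustrative target, a Gaussian random-feature representation rather than the paper's example: f(x) = E_w[cos(w·x)] with w ~ N(0, σ²I), which has the closed form exp(-σ²‖x‖²/2), is approximated by averaging d sampled cosine "neurons":

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                     # input dimension; the observed rate does not depend on it
sigma = 1.0 / np.sqrt(n)

# Evaluation points and the exact target f(x) = exp(-sigma^2 ||x||^2 / 2).
X = rng.uniform(-1.0, 1.0, size=(500, n))
f_true = np.exp(-0.5 * sigma**2 * (X**2).sum(axis=1))

def sampled_network(d):
    """Average of d 'neurons' cos(w_i . x) with w_i ~ N(0, sigma^2 I_n)."""
    W = rng.normal(scale=sigma, size=(d, n))
    return np.cos(X @ W.T).mean(axis=1)

def mean_sq_error(d, reps=200):
    """MSE of the d-node approximation, averaged over re-sampled networks."""
    return float(np.mean([np.mean((sampled_network(d) - f_true) ** 2)
                          for _ in range(reps)]))

errors = {d: mean_sq_error(d) for d in (10, 40, 160)}
for d, e in errors.items():
    print(f"d = {d:3d}: MSE ≈ {e:.5f}")
```

Quadrupling d cuts the mean squared error by roughly a factor of four, and re-running with a different n leaves the rate essentially unchanged, consistent with the dimension-free bound.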


:::info Author:

(1) Pedro Duarte Gomes, Department of Mathematics, University of Copenhagen.

:::


:::info This paper is available on arXiv under a CC BY 4.0 Deed (Attribution 4.0 International) license.

:::

