Blockchain transaction fees fluctuate due to limited block capacity and network congestion. The Fee Estimation based on Neural Network (FENN) framework tackles this challenge by combining three data sources: transaction features, mempool states, and network characteristics. Using deep learning methods like LSTM and attention mechanisms, FENN predicts future block behaviors and network trends to estimate optimal transaction fees. This dual-layer model, feature extraction plus prediction, helps improve accuracy and efficiency in confirming blockchain transactions.

The Future of Crypto Transactions? AI That Predicts Network Congestion

2025/10/22 14:30

Abstract and 1. Introduction

  2. Preliminaries
  3. Problem definition
  4. BtcFlow
  5. Bitcoin Core (BCore)
  6. Mempool state and linear perceptron machine learning (MSLP)
  7. Fee estimation based on neural network (FENN)
  8. Experiments
  9. Conclusion, Acknowledgements, and References

7 Fee estimation based on neural network (FENN)

Due to the low block capacity, the majority of submitted transactions may experience various confirmation delays. After submission, transactions are selected and added to the miner’s mempool, where they compete for confirmation in the next block. A transaction is considered complete when it is recorded in a block on the blockchain. In the confirmation process, transaction fees serve as an incentive for miners to confirm transactions into the blockchain. We summarize three groups of features that may influence transaction confirmation:

– Transaction features, which describe the submitted transaction.

– Mempool states, which record the distribution of feerates of unconfirmed transactions in the mempool, implicitly modelling the competition among unconfirmed transactions.

– Network features, which reflect the characteristics of the mined blocks, including block size, block generation speed, etc.

These three groups of features correspond to the three types of information fed to the estimation function F in Section 3. Although transaction features are readily available in the submitted transaction, future network features and mempool states are not known at submission time. However, such features are desirable: if we knew how many transactions future blocks would contain, how fast those blocks would be generated, and how competitive the submitted transaction would be in future mempools, we could predict the confirmation fee more accurately. Consequently, the main idea of FENN is to predict network features and mempool states from historical state sequences using sequence learning models, and then to combine the three groups of features to perform the estimation.

The prediction procedure can be formulated based on its data resources:

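As a hedged illustration only (the symbols x_tx, M, N, g_θ and F below are assumed notation, not quoted from the paper): future network features and mempool states are predicted from their historical sequences, and the predictions are then combined with the transaction’s own features to estimate the fee.

```latex
% Assumed notation -- a sketch of the idea, not the paper's exact formulation.
\hat{N},\ \hat{M} \;=\; g_{\theta}\big(N_{t-k:t},\, M_{t-k:t}\big),
\qquad
\widehat{\mathrm{feerate}}(tx) \;=\; F\big(x_{tx},\ \hat{M},\ \hat{N}\big)
```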

7.1 Estimation procedure

The estimation framework can be divided into two layers: a feature extraction layer that extracts patterns from the network features, the mempool states and the submitted transaction itself, and a prediction layer that analyzes the relationship between the transaction fee and the extracted features. Fig. 4 shows the framework.

Fig. 4: FENN framework.

7.1.1 Feature extraction layer

It includes three parts. Besides modelling the submitted transaction itself, the feature extraction layer also predicts the future characteristics of block states and models the mempool competition states of the unconfirmed transactions.

1. Transaction features contain information on the transaction that has been submitted and is awaiting confirmation. We pick features that we believe may affect a transaction’s validation and confirmation (a minimal sketch of the resulting feature vectors is given after this list). The transaction vector contains:

– number of inputs, number of outputs. Miners need to look up the source transactions referenced by a new transaction’s inputs when confirming it, so the number of inputs and outputs affects the verification complexity.

– transaction version, transaction size and weight. We use both the raw transaction size and the transaction weight to characterize transactions.

– transaction first-seen time, confirmation timestamp and confirmation block height. The first-seen time refers to the time at which a transaction is first observed; because it is difficult to determine the precise submission time of a historical transaction, we use the publicly available first-seen time.

2. Mempool states record the feerate distribution of the unconfirmed transactions currently waiting in the mempool, implicitly modelling the competition that the submitted transaction faces. As with network features, historical mempool states are learned as a sequence.

3. Network features are expected to encode future block size and generation speed, which can affect a transaction’s confirmation time. Historical network features are learned as a sequence to predict future network features (the per-block vector is sketched after this list).

– block size, block weight and transaction count. We use three factors to characterize the size of a block, namely the overall size of its transactions (in bytes), the overall weight of its transactions and the number of transactions in the block.

– difficulty. It reflects the mining difficulty in the Bitcoin system, which is tuned to maintain an average block interval of about ten minutes.

– block time. The mining time of this block, which reveals the block generation speed.

– average feerate in block. The average feerate of all the transactions in the block; this indicator is designed to reveal the feerate trend across consecutive blocks.

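A minimal sketch (not the authors’ code) of the feature vectors described in the list above. All field names, types and the helper `to_vector` are assumptions chosen for illustration; the confirmation timestamp and block height are only available for historical (training) transactions.

```python
from dataclasses import dataclass, astuple

@dataclass
class TxFeatures:
    """Part 1: features of the submitted transaction awaiting confirmation."""
    n_inputs: int       # number of inputs (miners must look up each referenced source transaction)
    n_outputs: int      # number of outputs
    version: int        # transaction version
    size_bytes: int     # raw transaction size
    weight: int         # transaction weight
    first_seen: float   # first-seen timestamp, used in place of the true submission time

@dataclass
class BlockFeatures:
    """Part 3: per-block network features; a window of recent blocks forms the input sequence."""
    size_bytes: int     # overall size of the transactions in the block
    weight: int         # overall weight of the transactions in the block
    tx_count: int       # number of transactions in the block
    difficulty: float   # mining difficulty when the block was found
    block_time: float   # mining time of the block (block generation speed)
    avg_feerate: float  # average feerate of the block's transactions

def to_vector(features) -> list[float]:
    """Flatten a feature dataclass into the numeric vector fed to the feature extraction layer."""
    return [float(v) for v in astuple(features)]
```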

Approach 1: LSTM [16] extracts patterns by aggregating information token by token in sequential order and summarizes the sequence into a context vector. To be specific, at each time step the LSTM maintains a hidden vector h and a memory vector c responsible for state updates and output prediction [18], and the final state is used as the pattern extracted from the sequence in our models.

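A hedged PyTorch sketch of Approach 1, assuming PyTorch is used (the paper does not state the framework): an LSTM reads the historical sequence step by step, and its final hidden state is taken as the extracted pattern (context) vector. The class name and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    """Encodes a historical feature sequence into a single context vector."""
    def __init__(self, input_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, seq_len, input_dim); h_n: (num_layers, batch, hidden_dim)
        _, (h_n, _) = self.lstm(seq)
        return h_n[-1]  # final hidden state used as the extracted pattern

# Example: encode a window of 10 historical blocks with 6 features each -> (1, 64) context vector.
context = LSTMEncoder(input_dim=6)(torch.randn(1, 10, 6))
```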

Approach 2: Attention is another popular time-series processing technique. It simulates the cognitive process of selective concentration on different parts of an input, as studied in psychology. In other words, it returns a new representation vector weighted by the importance of the various positions in the sequence. Three state-of-the-art attention modules are applied below:

(a) Additive attention [3] computes the compatibility function using a feed-forward network with a single hidden layer, where W is a weight matrix applied to the hidden states h produced by the preceding LSTM stage.
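A hedged sketch of additive (Bahdanau-style) attention over the LSTM hidden states, in the same assumed PyTorch setting: a single-hidden-layer feed-forward network scores each position, the scores are normalised with a softmax, and the weighted states are summed into one representation. The exact form used in the paper may differ; W and the scoring projection here are illustrative.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Scores each LSTM hidden state and returns an importance-weighted summary."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim)  # the weight matrix W from the text
        self.v = nn.Linear(hidden_dim, 1)           # projects the hidden layer to a scalar score

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim) hidden states from the preceding LSTM stage
        scores = self.v(torch.tanh(self.W(h)))      # (batch, seq_len, 1) compatibility scores
        weights = torch.softmax(scores, dim=1)      # attention weights over sequence positions
        return (weights * h).sum(dim=1)             # (batch, hidden_dim) attended representation
```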


7.1.2 Prediction layer

After aggregating the inputs from the feature extraction layer, FENN applies a fully-connected neural network. By learning the relationship among historical block information, mempool data and transaction details, FENN can provide a specific estimated feerate for each transaction. The test instance for an estimated transaction consists of three parts: the block sequence, the current mempool states and the transaction itself.
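A hedged sketch of the prediction layer under the same assumptions: the block-sequence encoding, the mempool-state encoding and the transaction vector are concatenated and passed through a small fully-connected network that outputs one estimated feerate. Layer widths and names are illustrative, not the authors’ configuration.

```python
import torch
import torch.nn as nn

class FeePredictionLayer(nn.Module):
    """Maps the aggregated feature-extraction outputs to a single estimated feerate."""
    def __init__(self, block_dim: int, mempool_dim: int, tx_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(block_dim + mempool_dim + tx_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # estimated feerate for the submitted transaction
        )

    def forward(self, block_vec: torch.Tensor, mempool_vec: torch.Tensor,
                tx_vec: torch.Tensor) -> torch.Tensor:
        x = torch.cat([block_vec, mempool_vec, tx_vec], dim=-1)
        return self.mlp(x)

# Example test instance: block-sequence encoding, current mempool encoding, transaction vector.
fee = FeePredictionLayer(64, 64, 6)(torch.randn(1, 64), torch.randn(1, 64), torch.randn(1, 6))
```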


:::info Authors:

(1) Limeng Zhang, Swinburne University of Technology, Melbourne, Australia (limengzhang@swin.edu.au);

(2) Rui Zhou, Swinburne University of Technology, Melbourne, Australia (rzhou@swin.edu.au);

(3) Qing Liu, Data61, CSIRO, Hobart, Australia (q.liu@data61.csiro.au);

(4) Chengfei Liu, Swinburne University of Technology, Melbourne, Australia (cliu@swin.edu.au);

(5) M. Ali Babar, The University of Adelaide, Adelaide, Australia (ali.babar@adelaide.edu.au).

:::


:::info This paper is available on arxiv under CC0 1.0 UNIVERSAL license.

:::

