This paper presents the first framework to deliberately train neural networks for accuracy and agreement between feature attribution techniques: PEAR (Post hoc Explainer Agreement Regularizer). In addition to the conventional task loss, PEAR incorporates a correlation-based consensus loss that combines Pearson and Spearman correlation measures, promoting alignment across explainers like Grad and Integrated Gradients. By using a soft ranking approximation to address differentiability issues, the loss function is completely trainable by backpropagation. Tested on three OpenML tabular datasets, multilayer perceptrons trained using PEAR surpass linear baselines in accuracy and explanation consensus, and in certain instances, even compete with XGBoost. The findings advance reliable and interpretable AI by showing that consensus-aware training successfully reduces explanation disagreement while maintaining prediction performance.

Notes on Training Neural Networks for Consensus


Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

2. Related Work

3. PEAR: Post Hoc Explainer Agreement Regularizer

4. The Efficacy of Consensus Training

    4.1 Agreement Metrics

    4.2 Improving Consensus Metrics

    4.3 Consistency At What Cost?

    4.4 Are the Explanations Still Valuable?

    4.5 Consensus and Linearity

    4.6 Two Loss Terms

5. Discussion

    5.1 Future Work

    5.2 Conclusion, Acknowledgements, and References

Appendix

3 PEAR: POST HOC EXPLAINER AGREEMENT REGULARIZER

Our contribution is the first effort to train models both to be accurate and to be explicitly regularized toward consensus between local explainers. When neural networks are trained naturally (i.e., with a single task-specific loss term like cross-entropy), disagreement between post hoc explainers often arises. Therefore, we include an additional loss term that measures the amount of explainer disagreement during training to encourage consensus between explanations. Since human-aligned notions of explanation consensus can be captured by more than one agreement metric (listed in Appendix A.3), we aim to improve several agreement metrics with one loss function.[2]

Our consensus loss term is a convex combination of the Pearson and Spearman correlation measurements between the vectors of attribution scores (Spearman correlation is just the Pearson correlation on the ranks of a vector).
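As a concrete sketch (not the released PEAR implementation), this consensus term could be written in PyTorch roughly as follows. Negating the correlations so that stronger agreement lowers the loss, and the `soft_rank_fn` argument standing in for the differentiable ranking discussed in Section 3.2, are illustrative assumptions.

```python
import torch

def pearson_corr(a, b, eps=1e-8):
    """Row-wise Pearson correlation between (batch, features) attribution matrices."""
    a = a - a.mean(dim=-1, keepdim=True)
    b = b - b.mean(dim=-1, keepdim=True)
    return (a * b).sum(dim=-1) / (a.norm(dim=-1) * b.norm(dim=-1) + eps)

def consensus_loss(attr_1, attr_2, soft_rank_fn, mu=0.5):
    """Convex combination of Pearson and (soft) Spearman correlation between two
    explainers' attributions, negated so that stronger agreement means lower loss.
    soft_rank_fn is any differentiable ranking approximation (see Section 3.2)."""
    pearson = pearson_corr(attr_1, attr_2)
    spearman = pearson_corr(soft_rank_fn(attr_1), soft_rank_fn(attr_2))
    return -((1.0 - mu) * pearson + mu * spearman).mean()
```

Negating the correlation (rather than, say, using 1 − correlation) is one of several equivalent ways to turn agreement into a minimizable term.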

To paint a clearer picture of the need for two terms in the loss, consider the examples shown in Figure 3. In the upper example, the raw feature scores are very similar and the Pearson correlation coefficient is in fact 1 (to machine precision). However, when we rank these scores by magnitude, there is a large difference in their ranks, as indicated by the Spearman value. Likewise, in the lower portion of Figure 3 we show that two explanations with identical magnitude rankings can still show a low Pearson correlation coefficient. Since some of the metrics we use to measure disagreement involve ranking and others do not, we conclude that a mixture of these two terms in the loss is appropriate.

While the example in Figure 3 shows two explanation vectors with similar scale, different explanation methods do not always align. Some explainers have the sums of their attribution scores constrained by various rules, whereas others have no such constraints. The correlation measurements we use in our loss therefore provide more latitude when comparing explainers than a direct difference measurement like mean absolute error or mean squared error, which would penalize differences in scale even when the explanations agree in shape.

Figure 2: Our loss function measures the task loss between the model outputs and ground truth (task loss), as well as the disagreement between explainers (consensus loss). The weight given to the consensus loss term is controlled by a hyperparameter 𝜆. The consensus loss term is a convex combination of the Spearman and Pearson correlation measurements between feature importance scores, since increasing both rank correlation (Spearman) and raw-score correlation (Pearson) helps improve explainer consensus across our many agreement metrics.

Figure 3: Example feature attribution vectors where Pearson and Spearman show starkly different scores. Recall that both Pearson and Spearman correlation range from −1 to +1. Both of these pairs of vectors satisfy some human-aligned notion of consensus, but in each case one of the correlation metrics gives a low similarity score. Thus, to successfully encourage explainer consensus (by all of our metrics), we use both types of correlation in our consensus loss term.


We refer to the first term in the loss function as the task loss, or ℓtask, and for our classification tasks we use cross-entropy loss. A graphical depiction of the flow from data to loss value is shown in Figure 2. Formally, our complete loss function can be expressed as follows with two hyperparameters 𝜆, 𝜇 ∈ [0, 1]. We weight the influence of our consensus term with 𝜆, so lower values give more priority to the task loss. We weight the influence between the two explanation correlation terms with 𝜇, so lower values give more weight to Pearson correlation and higher values give more weight to Spearman correlation.
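The equation itself did not survive in this excerpt; one plausible form consistent with the description of 𝜆 and 𝜇 above (a reconstruction, not necessarily the authors' exact expression) is:

```latex
\ell \;=\; (1-\lambda)\,\ell_{\mathrm{task}}
      \;+\; \lambda\Big[(1-\mu)\big(1-\rho_{\mathrm{Pearson}}(e_1, e_2)\big)
      \;+\; \mu\big(1-\rho_{\mathrm{Spearman}}(e_1, e_2)\big)\Big]
```

where e₁ and e₂ denote the two explainers' attribution vectors for a given input.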


3.1 Choosing a Pair of Explainers

The consensus loss term is defined for any two explainers in general, but since we train with standard backpropagation we need these explainers to be differentiable. With this constraint in mind, and with some intuition about the objective of improving agreement metrics, we choose to train for consensus between Grad and IntGrad. If Grad and IntGrad align, then the function should become more locally linear in logit space. IntGrad computes the average gradient along a path in input space toward each point being explained. So, if we train the model to have a local gradient at each point (Grad) closer to the average gradient along a path to the point (IntGrad), then perhaps an easy way for the model to accomplish that training objective would be for the gradient along the whole path to equal the local gradient from Grad. This may push the model to be more similar to a linear model. This is something we investigate with qualitative and quantitative analysis in Section 4.
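For reference, a minimal PyTorch sketch of these two explainers could look like the following; the zero baseline, the 32-step Riemann approximation, and the per-sample target-class selection are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def grad_attribution(model, x, target, create_graph=False):
    """Grad: gradient of each sample's target-class logit w.r.t. its input.
    target is a (batch,) tensor of class indices."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    selected = logits.gather(1, target.view(-1, 1)).sum()
    return torch.autograd.grad(selected, x, create_graph=create_graph)[0]

def intgrad_attribution(model, x, target, baseline=None, steps=32, create_graph=False):
    """IntGrad: average gradient along a straight path from a baseline to x,
    scaled elementwise by (x - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        total = total + grad_attribution(model, point, target, create_graph)
    return (x - baseline) * total / steps
```

When these attributions appear inside the consensus loss, create_graph=True is needed so that training can backpropagate through them.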

3.2 Differentiability

On the note of differentiability, the ranking function 𝑅 is not differentiable. We substitute a soft ranking function from the torchsort package [3]. This provides a floating-point approximation of a vector's ordering rather than an exact integer computation, which allows for differentiation.
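As a small illustration of that substitution (assuming the torchsort interface, which exposes soft_rank on 2-D inputs with a regularization_strength knob):

```python
import torch
import torchsort

scores = torch.tensor([[0.30, -1.20, 0.70, 0.05]], requires_grad=True)

# Exact integer ranks: gradients cannot flow through argsort.
hard_ranks = scores.argsort(dim=-1).argsort(dim=-1) + 1

# Soft ranks: a floating-point approximation that admits gradients.
soft_ranks = torchsort.soft_rank(scores, regularization_strength=0.1)

soft_ranks.sum().backward()  # works; the equivalent call on hard_ranks would not
```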

4 THE EFFICACY OF CONSENSUS TRAINING

In this section we present each experiment with the hypothesis it is designed to test. The datasets we use for our experiments are Bank Marketing, California Housing, and Electricity, three binary classification datasets available on the OpenML database [39]. For each dataset, we use a linear model’s performance (logistic regression) as a lower bound of realistic performance because linear models are considered inherently explainable.

The models we train to study the impact of our consensus loss term are multilayer perceptrons (MLPs). While the field of tabular deep learning is still growing, and MLPs may be an unlikely choice for most data scientists on tabular data, deep networks provide the flexibility to adapt training loops for multiple objectives [1, 10, 17, 28, 31, 35]. We also verify that our MLPs outperform linear models on each dataset, because if deep models trained to reach consensus are less accurate than a linear model, we would be better off using the linear model.
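To make that flexibility concrete, a single training step on the combined objective might look like the sketch below, reusing the illustrative consensus_loss, grad_attribution, and intgrad_attribution helpers from Section 3 above (none of these names come from the released PEAR package, and the hyperparameter defaults are placeholders):

```python
import torch
import torch.nn.functional as F
import torchsort

def training_step(model, optimizer, x, y, lam=0.5, mu=0.5):
    """One gradient step on (1 - lam) * task loss + lam * consensus loss."""
    optimizer.zero_grad()
    task = F.cross_entropy(model(x), y)

    # Attributions are kept in the autograd graph (create_graph=True)
    # so the consensus term can itself be backpropagated through.
    soft_rank_fn = lambda t: torchsort.soft_rank(t, regularization_strength=0.1)
    e1 = grad_attribution(model, x, y, create_graph=True)
    e2 = intgrad_attribution(model, x, y, create_graph=True)
    consensus = consensus_loss(e1, e2, soft_rank_fn, mu=mu)

    loss = (1 - lam) * task + lam * consensus
    loss.backward()
    optimizer.step()
    return loss.item()
```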

We include XGBoost [6] as a point of comparison for our approach, as it has become a widely popular method with high performance and strong consensus metrics on many tabular datasets (figures in Appendix A.7). There are cases where we achieve more explainer consensus than XGBoost, but this point is tangential as we are invested in exploring a loss for training neural networks.

For further details on our datasets and model training hyperparameters, see Appendices A.1 and A.2.

:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA and Work completed while working at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA (john@arthur.ai).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[2] The PEAR package will be publicly available for download via the Package Installer for Python (pip), and it is also available upon request from the authors.

[3] When more than one of the entries have the same magnitude, they get a common ranking value equal to the average rank if they were ordered arbitrarily.
