The results highlight ICR's vulnerability to interference and motivate the need for more robust, distraction-mitigating approaches like RECKONING.

Multi-Task vs. Single-Task ICR: Quantifying the High Sensitivity to Distractor Facts in Reasoning

2025/10/29 23:11

Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

B In-context Reasoning with Distractors

To motivate RECKONING's advantage in mitigating interference from distractors, we analyze how the performance of fine-tuned in-context reasoning changes with and without distractors present in the questions' contexts. We define distractors as additional facts or rules in a question's context that are not directly relevant to the question; a model should not be able to answer a question correctly using these distractors alone. For an example of distractors in a question's context, please see Table 9. We evaluate the baseline on the ProofWriter dataset since its contexts naturally contain distractors (Table 9). Recall that we have two training objectives: the single-task objective only trains the model to predict an answer for each question given its context, while the multi-task (MT) objective trains the model not only to predict an answer but also to reproduce the correct facts and rules (in contrast to the distractors) from the context. We evaluate the baseline on the 2-, 3-, and 5-hop datasets with both training objectives and report the average label accuracy across hops in Figure 7. Compared to the baseline's performance without distractors in the context, performance with distractors decreases significantly: the single-task objective drops 23.2% when distractors are added to the contexts, and the multi-task objective drops 28.6%. These results highlight in-context reasoning's high sensitivity to interference from irrelevant information in the context.
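The evaluation protocol described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual code: the `Example` dataclass, `build_context`, and `label_accuracy` names are illustrative, and the toy predictor stands in for a fine-tuned model. The key idea is that the same questions are evaluated twice, once with contexts containing only the supporting facts and rules, and once with distractors appended.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    """One ProofWriter-style item (illustrative structure)."""
    supporting: List[str]   # facts/rules actually needed for the answer
    distractors: List[str]  # irrelevant facts/rules mixed into the context
    question: str
    label: str              # gold answer, e.g. "true" / "false"


def build_context(ex: Example, with_distractors: bool) -> str:
    # The question is identical in both conditions; only the context changes.
    facts = ex.supporting + (ex.distractors if with_distractors else [])
    return " ".join(facts)


def label_accuracy(
    predict: Callable[[str, str], str],
    examples: List[Example],
    with_distractors: bool,
) -> float:
    # Fraction of questions answered correctly under the chosen condition.
    correct = sum(
        predict(build_context(ex, with_distractors), ex.question) == ex.label
        for ex in examples
    )
    return correct / len(examples)
```

Comparing `label_accuracy(model, data, False)` against `label_accuracy(model, data, True)` yields the performance drop reported in Figure 7; the multi-task variant would additionally supervise the model to reproduce `ex.supporting` (but not `ex.distractors`) during training.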

Figure 7: Label accuracy of fine-tuned in-context reasoning on questions with and without distractors in the context. With the same questions, adding distractors to the contexts significantly lowers the performance of in-context reasoning, both in the single-task and multi-task settings.


:::info Authors:

(1) Zeming Chen, EPFL (zeming.chen@epfl.ch);

(2) Gail Weiss, EPFL;

(3) Eric Mitchell, Stanford University (eric.mitchell@cs.stanford.edu);

(4) Asli Celikyilmaz, Meta AI Research (aslic@meta.com);

(5) Antoine Bosselut, EPFL (antoine.bosselut@epfl.ch).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

