This article outlines the implementation details for RECKONING, which uses a GPT-2-base model and runs on NVIDIA A100 GPUs.

Technical Setup for RECKONING: Inner Loop Gradient Steps, Learning Rates, and Hardware Specification


Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

C Implementation Details

We select GPT-2-base [59] as the model for our method and all the baselines. We use the version implemented by the Huggingface Transformers library [78]. All the experiments for RECKONING are conducted on a cluster with NVIDIA A100 (40GB) GPUs. All the baseline experiments are conducted on a local machine with an NVIDIA RTX 3090 GPU (24GB).

Table 6: Dataset splits and statistics for our experiments

Table 7: An example from the dataset ProofWriter. There are six facts and six rules mapped to three question-answer pairs. Each question can be answered based on the given facts and rules.

Fine-tuned In-context Reasoning We set the training batch size to 16 and train the model for 6 epochs with early stopping based on the validation label accuracy. We set the learning rate to 3e-5 and use the AdamW optimizer with ϵ set to 1e-8. We validate the model on the development set after every epoch and select the best checkpoint using the validation accuracy as the metric.
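
As a concrete illustration, here is one way this baseline configuration could be wired up with the Huggingface `Trainer`. This is a sketch under the stated hyperparameters, not the authors' code: the dataset objects, the `compute_label_accuracy` metric hook, and the early-stopping patience value are placeholders not specified in the paper, and tokenization and collation details are omitted.

```python
# Hypothetical training setup for the fine-tuned in-context reasoning
# baseline: batch size 16, 6 epochs, lr 3e-5, AdamW with eps 1e-8,
# per-epoch validation, early stopping on validation label accuracy.
from transformers import (GPT2LMHeadModel, Trainer, TrainingArguments,
                          EarlyStoppingCallback)

model = GPT2LMHeadModel.from_pretrained("gpt2")

train_dataset = ...  # hypothetical tokenized training split
dev_dataset = ...    # hypothetical tokenized development split

def compute_label_accuracy(eval_pred):
    # Hypothetical metric hook: should return {"accuracy": ...} computed
    # from the model's predicted answers vs. the gold labels.
    ...

args = TrainingArguments(
    output_dir="icr-baseline",
    per_device_train_batch_size=16,
    num_train_epochs=6,
    learning_rate=3e-5,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",       # validate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,       # keep the best checkpoint
    metric_for_best_model="accuracy",  # validation label accuracy
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=dev_dataset,
    compute_metrics=compute_label_accuracy,
    # Patience of 2 is an assumption; the paper only states that early
    # stopping is used.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```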

RECKONING In the inner loop, we generally perform 4 gradient steps for lower-hop questions (2, 3, and 4-hop) and 5 gradient steps for higher-hop questions (5 and 6-hop). We select AdamW [46] as the inner-loop optimizer since the main task is language modeling. The inner-loop learning rate is initialized to 3e-5 before training, and the algorithm dynamically learns a set of optimal learning rates as training converges. In our experiments and analysis, we only report results from RECKONING with the multi-task objective, since it performs better than the single-task objective. In the outer loop, we also use AdamW with a learning rate of 3e-5. For both optimizers, we set ϵ to 1e-8. We set the training batch size to 2 due to memory limitations and apply gradient accumulation with an accumulation step of 2. We train the model for 6 epochs with early stopping. For each epoch, we validate the model twice: once in the middle and once at the end. We select the best model checkpoint based on the validation label accuracy.
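
The following is a minimal PyTorch sketch of this bi-level update, reconstructed from the description above rather than taken from the authors' code. For simplicity, the inner loop here uses plain differentiable SGD with per-parameter learnable learning rates; differentiating through AdamW, as the paper does, would require a meta-learning library such as `higher`. The batches `knowledge_ids` and `question_ids` are hypothetical pre-tokenized inputs.

```python
# Hypothetical reconstruction of one RECKONING training step (not the
# authors' code). Inner loop: adapt a copy of the weights to the knowledge
# via the LM objective; outer loop: update the original weights and the
# learnable inner-loop learning rates so the adapted weights answer the
# questions.
import torch
from torch.func import functional_call
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
params = dict(model.named_parameters())

# One learnable inner-loop learning rate per parameter tensor, initialized
# to 3e-5 as stated above (Appendix D describes the adaptive scheme).
inner_lrs = {n: torch.tensor(3e-5, requires_grad=True) for n in params}

# Outer-loop optimizer: AdamW over both the model parameters and the
# inner learning rates, lr 3e-5, eps 1e-8.
outer_opt = torch.optim.AdamW(
    list(params.values()) + list(inner_lrs.values()), lr=3e-5, eps=1e-8
)

def lm_loss(cur_params, input_ids):
    # Causal LM loss computed with the (possibly adapted) parameter set.
    out = functional_call(model, cur_params, args=(input_ids,),
                          kwargs={"labels": input_ids})
    return out.loss

def reckoning_step(knowledge_ids, question_ids, n_inner_steps=4,
                   multi_task=True):
    # Inner loop: n_inner_steps differentiable gradient steps on the
    # knowledge (4 for 2/3/4-hop questions, 5 for 5/6-hop questions).
    # create_graph=True keeps the adaptation differentiable so the outer
    # loss can backpropagate through it.
    fast = dict(params)
    for _ in range(n_inner_steps):
        grads = torch.autograd.grad(lm_loss(fast, knowledge_ids),
                                    list(fast.values()), create_graph=True)
        fast = {n: p - inner_lrs[n] * g
                for (n, p), g in zip(fast.items(), grads)}

    # Outer loop: question-answering loss on the adapted weights; the
    # multi-task objective additionally asks the adapted weights to
    # recall the knowledge itself.
    outer_loss = lm_loss(fast, question_ids)
    if multi_task:
        outer_loss = outer_loss + lm_loss(fast, knowledge_ids)
    outer_loss.backward()  # gradients flow back to params and inner_lrs
    return outer_loss.item()
```

Under the accumulation scheme above, `outer_opt.step()` and `outer_opt.zero_grad()` would be called after every second `reckoning_step`, so the batch size of 2 with 2 accumulation steps yields an effective outer-loop batch size of 4.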


:::info Authors:

(1) Zeming Chen, EPFL (zeming.chen@epfl.ch);

(2) Gail Weiss, EPFL;

(3) Eric Mitchell, Stanford University (eric.mitchell@cs.stanford.edu);

(4) Asli Celikyilmaz, Meta AI Research (aslic@meta.com);

(5) Antoine Bosselut, EPFL (antoine.bosselut@epfl.ch).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


