No Free Lunch: Rethinking Internal Feedback for LLM Reasoning

(2025)

Paper Information
arXiv ID: 2506.17219

Abstract

Reinforcement learning has emerged as a powerful paradigm for post-training large language models (LLMs) to improve reasoning. Approaches like Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) have shown strong results, but they require extensive external supervision. We investigate an alternative class of methods, Reinforcement Learning from Internal Feedback (RLIF), which relies solely on intrinsic, model-derived signals instead of external rewards. In particular, we leverage unsupervised reward proxies such as token-level entropy, trajectory-level entropy, and self-certainty. Our theoretical analysis shows these internal objectives are partially equivalent, and we empirically evaluate various RLIF strategies on challenging math reasoning benchmarks. Experimental results demonstrate that RLIF can boost the reasoning performance of base LLMs in the early phase of training, matching or surpassing RLVR techniques on these tasks. However, as training progresses, performance degrades, eventually falling below that of the model before training. Moreover, we find that RLIF yields little improvement for instruction-tuned models, indicating diminishing returns of intrinsic feedback once an LLM is already instruction-tuned. We further analyze this limitation by mixing model weights and explain the reasons behind RLIF's training behavior, providing practical guidelines for integrating internal feedback signals into LLM training. We hope our analysis of internal feedback will inform more principled and effective strategies for LLM post-training.

Summary

This paper investigates Reinforcement Learning from Internal Feedback (RLIF), an approach for post-training large language models (LLMs) that uses intrinsic signals rather than external rewards to enhance reasoning capabilities. The authors present a theoretical analysis suggesting that the internal feedback signals considered (self-certainty, token-level entropy, and trajectory-level entropy) are partially equivalent, and they empirically evaluate these methods on challenging mathematical reasoning benchmarks, finding that RLIF can improve performance early in training but degrades it in later stages. The study offers practical guidelines for integrating internal feedback into LLM training while highlighting limitations, such as little to no improvement for instruction-tuned models. The paper also discusses the effect of transitional words on model performance and reasoning capability during training.
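As a concrete illustration of the three intrinsic signals, the sketch below computes them from the per-token logits of a single sampled trajectory. The function name, tensor shapes, and exact formulations (in particular, self-certainty as the mean KL divergence from a uniform distribution over the vocabulary) are assumptions made for illustration; the paper's precise definitions and normalizations may differ.

```python
import math
import torch
import torch.nn.functional as F

def intrinsic_rewards(logits: torch.Tensor, token_ids: torch.Tensor) -> dict:
    """Illustrative intrinsic reward proxies for one sampled trajectory.

    logits:    (T, V) next-token logits recorded while generating the trajectory
    token_ids: (T,)   the tokens that were actually sampled
    """
    log_probs = F.log_softmax(logits, dim=-1)  # (T, V)
    probs = log_probs.exp()
    vocab_size = logits.size(-1)

    # Token-level entropy: average entropy of the next-token distribution.
    token_entropy = -(probs * log_probs).sum(dim=-1).mean()

    # Trajectory-level entropy: negative mean log-probability of the sampled
    # tokens, i.e. a Monte Carlo estimate of sequence-level entropy.
    sampled_logp = log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    trajectory_entropy = -sampled_logp.mean()

    # Self-certainty: mean KL(uniform || p_theta) per token; larger values mean
    # the predictive distribution is farther from uniform, i.e. more confident.
    self_certainty = (-log_probs.mean(dim=-1) - math.log(vocab_size)).mean()

    return {
        "token_entropy": token_entropy.item(),
        "trajectory_entropy": trajectory_entropy.item(),
        "self_certainty": self_certainty.item(),
    }
```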

Methods

This paper employs the following methods; a sketch after the list contrasts how RLIF and RLVR derive their reward signals:

  • Reinforcement Learning from Internal Feedback (RLIF)
  • Reinforcement Learning from Human Feedback (RLHF)
  • Reinforcement Learning with Verifiable Rewards (RLVR)
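To make the distinction between the reward sources concrete, the minimal sketch below contrasts them: RLVR scores a rollout against a verifiable ground-truth answer, while RLIF scores it with a model-derived quantity such as self-certainty (computed as in the earlier sketch). The GRPO-style group-normalized advantage and the exact-match verifier are illustrative assumptions, not necessarily the paper's exact training recipe.

```python
from typing import List

def group_normalized_advantages(rewards: List[float]) -> List[float]:
    """Normalize rewards across a group of rollouts for the same prompt
    (zero mean, unit variance), as in GRPO-style policy-gradient updates."""
    mean = sum(rewards) / len(rewards)
    std = max((sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5, 1e-8)
    return [(r - mean) / std for r in rewards]

def rlvr_reward(predicted_answer: str, gold_answer: str) -> float:
    """RLVR: external, verifiable supervision (e.g., exact match on the final answer)."""
    return 1.0 if predicted_answer.strip() == gold_answer.strip() else 0.0

def rlif_reward(self_certainty: float) -> float:
    """RLIF: intrinsic supervision derived from the model's own confidence; no labels needed."""
    return self_certainty
```

In both cases the normalized advantages weight an otherwise standard policy-gradient objective; only the source of the reward changes.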

Models Used

  • Qwen2.5-3B
  • Qwen3-1.7B
  • Qwen3-4B
  • DeepSeek-R1

Datasets

The following datasets were used in this research:

  • AIME2025
  • MATH500
  • GSM8K

Evaluation Metrics

  • pass@1
  • pass@N (a standard estimator for pass@k is sketched below)
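pass@1 and pass@N measure whether at least one of the model's sampled answers to a problem is correct. A standard way to estimate pass@k from n samples with c correct answers is the unbiased estimator of Chen et al. (2021), sketched below; whether the paper uses this estimator or plain per-sample averaging is an assumption here.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: the probability that at least one of k answers
    chosen from n samples (of which c are correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples per problem, 5 of them correct.
print(pass_at_k(n=16, c=5, k=1))  # 0.3125
print(pass_at_k(n=16, c=5, k=8))  # ~0.987
```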

Results

  • RLIF improves reasoning performance in early training stages, but performance degrades in later stages.
  • RLIF is less effective for instruction-tuned models.

Limitations

The authors identified the following limitations:

  • Performance degradation of RLIF as training progresses.
  • Limited improvement for instruction-tuned models.

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
  • Compute Requirements: None specified
