
ORPO: Monolithic Preference Optimization without Reference Model

Jiwoo Hong, Noah Lee, James Thorne (KAIST AI, 2024)

Paper Information
  • arXiv ID: 2403.07691
  • Venue: Conference on Empirical Methods in Natural Language Processing
  • Domain: Natural Language Processing
  • SOTA Claim: Yes
  • Code: Yes (released by the authors)
  • Reproducibility: 8/10

Abstract

While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval 2.0 (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B).
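
The core idea described in the abstract, adding a minor odds-ratio penalty for the disfavored response on top of the standard SFT loss, can be summarized in a short PyTorch sketch. This is a minimal illustration, not the authors' released implementation: it assumes `chosen_logps` and `rejected_logps` are length-normalized sequence log-probabilities under the policy being fine-tuned, `chosen_nll` is the usual SFT negative log-likelihood of the favored response, and the function name and `lambda_` default are illustrative.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps: torch.Tensor,
              rejected_logps: torch.Tensor,
              chosen_nll: torch.Tensor,
              lambda_: float = 0.1) -> torch.Tensor:
    """Sketch of the ORPO objective: SFT loss plus a weighted odds-ratio penalty."""
    # odds(y|x) = p(y|x) / (1 - p(y|x)); computed in log space for stability.
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # Relative ratio term: -log sigmoid(log odds of the favored response minus the disfavored one).
    ratio_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    # Monolithic objective: no reference model and no separate alignment phase.
    return (chosen_nll + lambda_ * ratio_term).mean()
```

Because the penalty depends only on the policy's own probabilities, a single fine-tuning pass over (prompt, chosen, rejected) triples suffices; there is no frozen reference model to keep in memory.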

Summary

The paper introduces ORPO (Odds Ratio Preference Optimization), a novel monolithic preference alignment method for fine-tuning language models without requiring a reference model. It emphasizes the importance of supervised fine-tuning (SFT) in preference alignment and adds an odds-ratio-based penalty to the SFT objective, so that favored and disfavored generation styles are contrasted during a single training stage. The authors provide empirical results showing that models fine-tuned with ORPO outperform traditional SFT and reinforcement-learning-based alignment methods on instruction-following tasks across various model sizes. The method is validated on benchmarks such as AlpacaEval and IFEval, where it shows significant performance improvements, indicating ORPO's potential for making preference-aligned language models more efficient to train. Code and model checkpoints are made publicly available to facilitate reproducibility.

Methods

This paper employs the following methods:

  • Odds Ratio Preference Optimization (ORPO)
  • Reinforcement Learning with Human Feedback (RLHF)
  • Direct Preference Optimization (DPO)
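
For contrast with the reference-model-free formulation sketched above, below is a hedged sketch of the DPO baseline listed here. Unlike ORPO, DPO scores each response relative to a frozen reference policy, so its loss needs two extra sets of log-probabilities; removing that dependency is exactly what makes ORPO reference-model-free. The function and argument names are illustrative, and `beta` is the usual DPO temperature hyperparameter.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO objective, which requires a frozen reference model."""
    # Log-ratios of the trained policy against the frozen reference policy.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry style preference loss on the difference of log-ratios.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```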

Models Used

  • Phi-2 (2.7B)
  • Llama-2 (7B)
  • Mistral (7B)

Datasets

The following datasets were used in this research:

  • HH-RLHF
  • UltraFeedback

Evaluation Metrics

  • AlpacaEval
  • IFEval
  • MT-Bench
  • Win Rate in Preference Alignment

Results

  • Mistral-ORPO-α achieved an 11.33% win rate on AlpacaEval 2.0
  • Mistral-ORPO-β achieved a 12.20% win rate on AlpacaEval 2.0
  • ORPO raised the instruction-following performance of Phi-2 (2.7B) above that of Llama-2 Chat (7B)
  • Mistral-ORPO-α and Mistral-ORPO-β scored 7.23 and 7.32 on MT-Bench, respectively

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: 2
  • GPU Type: NVIDIA A100

Keywords

Preference optimization, Monolithic training, Reinforcement learning, Supervised fine-tuning, Language models
