
LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

(2024)

Paper Information
arXiv ID: 2403.07974
Venue: International Conference on Learning Representations

Abstract

Large Language Models (LLMs) applied to code-related applications have emerged as a prominent field, attracting significant interest from both academia and industry. However, as new and improved LLMs are developed, existing evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient for assessing their capabilities. In this work, we propose LiveCodeBench, a comprehensive and contamination-free evaluation of LLMs for code, which collects new problems over time from contests across three competition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our benchmark also focuses on a broader range of code-related capabilities, such as self-repair, code execution, and test output prediction, beyond just code generation. Currently, LiveCodeBench hosts over five hundred coding problems that were published between May 2023 and May 2024. We have evaluated 18 base LLMs and 34 instruction-tuned LLMs on LiveCodeBench. We present empirical findings on contamination, holistic performance comparisons, potential overfitting in existing benchmarks as well as individual model comparisons. We will release all prompts and model completions for further community analysis, along with a general toolkit for adding new scenarios and models.

Summary

This paper introduces LiveCodeBench, a new benchmark for evaluating large language models (LLMs) on coding tasks. It addresses the limitations of existing benchmarks such as HumanEval and MBPP, which focus mainly on code generation and are at risk of data contamination because their problems may appear in LLM training data. LiveCodeBench takes a holistic approach, evaluating self-repair, code execution, and test output prediction alongside code generation. Problems are sourced from competitive programming platforms (LeetCode, AtCoder, CodeForces) and are collected continuously over time, so that models can be evaluated only on problems released after their training cutoffs. The benchmark currently includes over 500 problems, and evaluations of a broad set of LLMs reveal findings on contamination effects and on variance in model performance across scenarios. The authors argue for holistic evaluation that goes beyond code generation.
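The contamination-avoidance idea is straightforward: only score a model on problems released after its training cutoff. Below is a minimal sketch of this date-window filtering; the class fields and example problems are hypothetical, and the official release provides its own metadata and toolkit.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Problem:
    title: str
    platform: str       # "LeetCode", "AtCoder", or "CodeForces"
    release_date: date  # date the contest problem was first published


def contamination_free_subset(problems: list[Problem], cutoff: date) -> list[Problem]:
    """Keep only problems released after a model's training cutoff,
    so the evaluation set cannot overlap with the training data."""
    return [p for p in problems if p.release_date > cutoff]


# Hypothetical example: a model with a 2023-09-01 training cutoff is
# evaluated only on problems released afterwards.
problems = [
    Problem("example-array-problem", "LeetCode", date(2023, 6, 11)),
    Problem("example-graph-problem", "AtCoder", date(2023, 11, 25)),
]
recent = contamination_free_subset(problems, date(2023, 9, 1))
print([p.title for p in recent])  # ['example-graph-problem']
```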

Methods

This paper employs the following methods:

  • Live updates (continuous collection of new contest problems, filtered by release date to avoid contamination)
  • Holistic evaluation (code generation, self-repair, code execution, and test output prediction; see the judging sketch after this list)
  • Quality assurance on problem sets
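In the code generation and self-repair scenarios, a completion counts as correct only if it passes the problem's hidden tests. The sketch below is a hypothetical judging harness for stdin/stdout-style problems, written under assumed test-case formats; it is an illustration, not the official LiveCodeBench runner.

```python
import subprocess


def passes_tests(program: str, tests: list[tuple[str, str]], timeout: float = 10.0) -> bool:
    """Run a candidate Python program against (stdin, expected_stdout) pairs,
    mirroring stdin/stdout-style competition judging. Hypothetical harness."""
    for stdin, expected in tests:
        try:
            result = subprocess.run(
                ["python3", "-c", program],
                input=stdin,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # treat timeouts as failures
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return False
    return True


# Example: a trivial "print the sum of two integers" program.
program = "a, b = map(int, input().split()); print(a + b)"
print(passes_tests(program, [("1 2", "3"), ("10 -4", "6")]))  # True
```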

Models Used

  • DeepSeek
  • GPT-4
  • Claude-3-Opus
  • Mistral-Large
  • LLaMA-3 (base)
  • StarCoder2
  • CodeLLaMa
  • Gemini-Pro

Datasets

The following datasets were used in this research:

  • LeetCode
  • AtCoder
  • CodeForces

Evaluation Metrics

  • Pass@1 (see the estimator sketch below)
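Pass@1 measures functional correctness: the fraction of problems solved when a single sample is drawn per problem, estimated from n generations. A minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021), of which Pass@1 is the k = 1 case:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct,
    passes all hidden tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Pass@1 reduces to the fraction of correct generations for a problem.
# Example: 10 generations for one problem, 3 pass the hidden tests.
print(pass_at_k(n=10, c=3, k=1))  # ≈ 0.3, i.e. c / n
```

The benchmark-level score is this quantity averaged over problems.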

Results

  • LiveCodeBench exposes contamination weaknesses in existing benchmarks.
  • Assessments across diverse capabilities show strong correlations in model performance across coding scenarios.
  • The evaluations indicate potential overfitting to HumanEval, showing that some models may not generalize well to broader coding tasks.

Limitations

The authors identified the following limitations:

  • The benchmark currently only focuses on Python.
  • The limited number of problems can introduce noise into performance estimates.
  • Constraints on how many new problems can be collected in each update cycle may reduce the diversity of the test sets.

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
  • Compute Requirements: None specified
