
MMBench: Is Your Multi-modal Model an All-around Player?

Yuan Liu (Shanghai AI Laboratory), Haodong Duan (Shanghai AI Laboratory), Yuanhan Zhang (Nanyang Technological University), Bo Li (Nanyang Technological University), Songyang Zhang (Shanghai AI Laboratory), Wangbo Zhao (National University of Singapore), Yike Yuan (Zhejiang University), Jiaqi Wang (Shanghai AI Laboratory), Conghui He (Shanghai AI Laboratory), Ziwei Liu (Nanyang Technological University), Kai Chen (Shanghai AI Laboratory), Dahua Lin (Shanghai AI Laboratory; The Chinese University of Hong Kong) (2023)

Paper Information
arXiv ID
2307.06281
Venue
European Conference on Computer Vision
Domain
artificial intelligence, computer vision, natural language processing
SOTA Claim
Yes
Reproducibility
7/10

Abstract

Large vision-language models (VLMs) have recently achieved remarkable progress, exhibiting impressive multimodal perception and reasoning abilities. However, effectively evaluating these large VLMs remains a major challenge, hindering future development in this domain. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but lack fine-grained ability assessment and robust evaluation metrics. Meanwhile, subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, which is not scalable and may display significant bias. In response to these challenges, we propose MMBench, a bilingual benchmark for assessing the multi-modal capabilities of VLMs. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of the following key features: 1. MMBench is meticulously curated with well-designed quality control schemes, surpassing existing similar benchmarks in terms of the number and variety of evaluation questions and abilities; 2. MMBench introduces a rigorous CircularEval strategy and incorporates large language models to convert free-form predictions into pre-defined choices, which helps to yield accurate evaluation results for models with limited instruction-following capabilities; 3. MMBench incorporates multiple-choice questions in both English and Chinese versions, enabling an apples-to-apples comparison of VLMs' performance under a bilingual context. To summarize, MMBench is a systematically designed objective benchmark for a robust and holistic evaluation of vision-language models. We hope MMBench will assist the research community in better evaluating their models and facilitate future progress in this area. The evaluation code of MMBench has been integrated into VLMEvalKit: https://github.com/open-compass/VLMEvalKit. (This is a revised version released in April 2024. It describes MMBench v1.1, a refined version of MMBench with better data quality. Please refer to https://arxiv.org/pdf/2307.06281v3 for the previous version, which was released in August 2023.)

Summary

The paper introduces MMBench, a bilingual benchmark designed to robustly evaluate the multimodal capabilities of large vision-language models (VLMs). It addresses the shortcomings of existing benchmarks, which either focus on traditional quantitative metrics or rely on subjective, potentially biased human evaluations. MMBench contains over 3,000 multiple-choice questions covering 20 distinct ability dimensions, including object localization and social reasoning, enabling a fine-grained assessment of model capabilities. Key innovations include a careful quality control scheme, the CircularEval strategy, which queries each question multiple times with circularly shifted answer choices, and the use of large language models (LLMs) such as GPT-4 to extract choices from free-form VLM outputs. The benchmark aims to provide direct comparisons among different VLMs, offering insights for future model improvements and better evaluation methodology in the multimodal research community.
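The core of CircularEval can be pictured with the minimal sketch below. It assumes a hypothetical `ask_model` callable that returns the index of the option a VLM selects; the official implementation lives in VLMEvalKit and differs in detail.

```python
from typing import Callable, List


def circular_eval(
    ask_model: Callable[[str, List[str]], int],  # hypothetical: returns index of chosen option
    question: str,
    choices: List[str],
    answer_idx: int,
) -> bool:
    """Pass a question only if the model picks the correct option under
    every circular shift of the choice list (N passes for N choices)."""
    n = len(choices)
    for shift in range(n):
        shifted = choices[shift:] + choices[:shift]   # rotate the options
        shifted_answer = (answer_idx - shift) % n     # where the correct option moved
        if ask_model(question, shifted) != shifted_answer:
            return False                              # a single wrong pass fails the question
    return True


# Overall CircularEval accuracy is then the fraction of questions that pass:
# accuracy = sum(circular_eval(ask_model, q, c, a) for q, c, a in dataset) / len(dataset)
```

Because a question only counts when every shifted pass is answered correctly, CircularEval penalizes models that rely on positional biases in the answer options.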

Methods

This paper employs the following methods:

  • MMBench
  • CircularEval
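The LLM-assisted choice extraction used alongside CircularEval can be sketched as below. The matching heuristics and prompt wording are illustrative only, and `query_llm` is a hypothetical wrapper around a ChatGPT/GPT-4 client, not MMBench's exact implementation.

```python
import string
from typing import List, Optional


def extract_choice(prediction: str, choices: List[str]) -> Optional[str]:
    """Map a free-form VLM answer to an option letter (A, B, C, ...)."""
    letters = string.ascii_uppercase[: len(choices)]
    pred = prediction.strip()
    # 1) The answer is already a bare option letter such as "B" or "B.".
    if pred and pred[0] in letters and (len(pred) == 1 or not pred[1].isalnum()):
        return pred[0]
    # 2) The answer repeats the text of exactly one option.
    hits = [letters[i] for i, c in enumerate(choices) if c.lower() in pred.lower()]
    if len(hits) == 1:
        return hits[0]
    # 3) Otherwise fall back to an LLM (e.g. ChatGPT / GPT-4) to pick the closest option.
    prompt = (
        "Given a model's answer and a list of options, reply with the single "
        "letter of the option that best matches the answer.\n"
        f"Answer: {prediction}\nOptions: "
        + " ".join(f"({letters[i]}) {c}" for i, c in enumerate(choices))
    )
    return query_llm(prompt)  # hypothetical wrapper around a chat-completion API


def query_llm(prompt: str) -> Optional[str]:
    # Placeholder: plug in your own ChatGPT / GPT-4 client here.
    raise NotImplementedError
```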

Models Used

  • GPT-4
  • ChatGPT
  • GPT-4V
  • Gemini-Pro-V
  • LLaVA
  • MiniGPT-4

Datasets

The following datasets were used in this research:

  • VQAv2
  • COCO Caption
  • GQA
  • OK-VQA
  • OwlEval
  • MMBench

Evaluation Metrics

  • Accuracy
  • BLEU
  • CIDEr

Results

  • MMBench offers over 3,000 evaluation questions across 20 ability dimensions.
  • The CircularEval strategy improves robustness by requiring a model to answer each question correctly under all circular shifts of its answer choices before the question is counted as correct.

Limitations

The authors identified the following limitations:

  • Existing benchmarks exhibit bias and limit the capacity to perform fine-grained assessments. Subjective evaluations are not scalable due to the reliance on human annotators.

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

multimodal benchmarking, vision-language models, MMBench, evaluation strategy, bilingual assessment, circular evaluation
