InfiMM-Eval

Complex Open-ended Reasoning Evaluation for Multi-Modal Language Models

Dataset Information
Modalities: Images
Languages: English
Introduced: 2023

Overview

Multi-modal Large Language Models (MLLMs) are increasingly prominent in the field of artificial intelligence. Although many benchmarks attempt to evaluate MLLMs holistically, they typically concentrate on basic reasoning tasks and yield only simple yes/no or multiple-choice responses, which makes it difficult to conclusively determine the reasoning capabilities of MLLMs. To mitigate this issue, we manually curate the CORE-MM benchmark, designed specifically to assess MLLMs on complex reasoning tasks. The benchmark covers three key reasoning categories: deductive, abductive, and analogical reasoning. The queries are intentionally constructed to engage the reasoning capabilities of MLLMs during answer generation, and, for a fair comparison across MLLMs, intermediate reasoning steps are incorporated into the evaluation criteria.

The CORE-MM benchmark consists of 279 manually curated reasoning questions associated with a total of 342 images: 49 questions pertain to abductive reasoning, 181 require deductive reasoning, and 49 involve analogical reasoning. The dataset is further divided into two subsets by reasoning complexity, with 108 questions classified as "High" and 171 as "Moderate".
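
The breakdown above can be verified programmatically once the question files are downloaded. The sketch below is illustrative only: the file name ("core_mm.json") and field names ("reasoning_category", "complexity") are assumptions, not the benchmark's official schema.

```python
# Minimal sketch: tally CORE-MM (InfiMM-Eval) questions by reasoning
# category and complexity level. File and field names are hypothetical.
import json
from collections import Counter

with open("core_mm.json", encoding="utf-8") as f:
    questions = json.load(f)  # expected: a list of question records

by_category = Counter(q["reasoning_category"] for q in questions)
by_complexity = Counter(q["complexity"] for q in questions)

# With the released split this should report 49 abductive, 181 deductive,
# and 49 analogical questions, and 108 "High" / 171 "Moderate" complexity.
print(by_category)
print(by_complexity)
```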

Variants: InfiMM-Eval

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task | Model | Paper | Date
Visual Question Answering (VQA) | SPHINX v2 | SPHINX: The Joint Mixing of … | 2023-11-13
Visual Question Answering (VQA) | mPLUG-Owl2 | mPLUG-Owl2: Revolutionizing Multi-modal Large Language … | 2023-11-07
Visual Question Answering (VQA) | CogVLM-Chat | CogVLM: Visual Expert for Pretrained … | 2023-11-06
Visual Question Answering (VQA) | LLaVA-1.5 | Improved Baselines with Visual Instruction … | 2023-10-05
Visual Question Answering (VQA) | InternLM-XComposer-VL | InternLM-XComposer: A Vision-Language Large Model … | 2023-09-26
Visual Question Answering (VQA) | Qwen-VL-Chat | Qwen-VL: A Versatile Vision-Language Model … | 2023-08-24
Visual Question Answering (VQA) | OpenFlamingo-v2 | OpenFlamingo: An Open-Source Framework for … | 2023-08-02
Visual Question Answering (VQA) | Emu | Emu: Generative Pretraining in Multimodality | 2023-07-11
Visual Question Answering (VQA) | InstructBLIP | InstructBLIP: Towards General-purpose Vision-Language Models … | 2023-05-11
Visual Question Answering (VQA) | Otter | Otter: A Multi-Modal Model with … | 2023-05-05
Visual Question Answering (VQA) | LLaMA-Adapter V2 | LLaMA-Adapter V2: Parameter-Efficient Visual Instruction … | 2023-04-28
Visual Question Answering (VQA) | MiniGPT-v2 | MiniGPT-4: Enhancing Vision-Language Understanding with … | 2023-04-20
Visual Question Answering (VQA) | GPT-4V | GPT-4 Technical Report | 2023-03-15
Visual Question Answering (VQA) | BLIP-2-OPT2.7B | BLIP-2: Bootstrapping Language-Image Pre-training with … | 2023-01-30

Research Papers

Recent papers with results on this dataset: