HallusionBench

Dataset Information
Modalities
Images, Videos, Texts
Languages
English, Chinese
Introduced
2023
License
BSD 3-Clause License
Homepage

Overview

Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvements to image reasoning tasks, as demonstrated by the recently released GPT-4V(ision), LLaVA-1.5, and others. However, the strong language prior in these state-of-the-art VLMs can be a double-edged sword: the models may ignore the image context and rely solely on a (possibly contradictory) language prior for reasoning. Conversely, the vision modules in VLMs are weaker than their LLMs and can produce misleading visual representations, which the LLMs then translate into confident mistakes.

To study these two types of VLM mistakes, i.e., language hallucination and visual illusion, we curated HallusionBench, an image-context reasoning benchmark that remains challenging even for GPT-4V and LLaVA-1.5. We provide a detailed analysis of examples in HallusionBench, which offers novel insights into the illusions and hallucinations of VLMs and how to improve them in the future.
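HallusionBench is commonly scored as binary (yes/no) visual question answering, with per-question accuracy reported for questions posed with or without their accompanying image. Below is a minimal evaluation sketch under that assumption; the JSON field names (`question`, `visual_input`, `gt_answer`) and the `ask_model` callable are illustrative placeholders, not the dataset's confirmed schema or the official scorer.

```python
import json


def normalize(answer: str) -> str:
    """Map a free-form model response to 'yes' or 'no' (assumed answer space)."""
    answer = answer.strip().lower()
    return "yes" if answer.startswith("yes") else "no"


def evaluate(questions_path: str, ask_model) -> float:
    """Compute per-question accuracy.

    `ask_model(question, image_path)` is any callable that queries a VLM and
    returns its textual answer; `image_path` may be None for text-only questions.
    """
    with open(questions_path) as f:
        questions = json.load(f)  # assumed: a JSON list of question records

    correct = 0
    for q in questions:
        prediction = ask_model(q["question"], q.get("visual_input"))
        if normalize(prediction) == normalize(q["gt_answer"]):
            correct += 1
    return correct / len(questions)
```

In practice, more fine-grained metrics (e.g., accuracy grouped over paired original and edited images) can be built on top of the same loop by aggregating per question group rather than per question.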

Variants: HallusionBench

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task | Model | Paper | Date
Visual Question Answering (VQA) | GPT-4V | HallusionBench: An Advanced Diagnostic Suite … | 2023-10-23
Visual Question Answering (VQA) | LRV-Instruct | Mitigating Hallucination in Large Multi-Modal … | 2023-06-26
Visual Question Answering (VQA) | mPLUG-Owl | mPLUG-Owl: Modularization Empowers Large Language … | 2023-04-27

Research Papers

Recent papers with results on this dataset: