Domain
computer vision, natural language processing
In the quest for artificial general intelligence, Multi-modal Large Language Models (MLLMs) have emerged as a focal point of recent advancements. However, the predominant focus remains on developing their capabilities in static image understanding. The potential of MLLMs in processing sequential visual data is still insufficiently explored, highlighting the absence of a comprehensive, high-quality assessment of their performance. In this paper, we introduce Video-MME, the first-ever full-spectrum, Multi-Modal Evaluation benchmark of MLLMs in Video analysis. Our work is distinguished from existing benchmarks by four key features: 1) Diversity in video types, spanning 6 primary visual domains with 30 subfields to ensure broad scenario generalizability; 2) Duration in the temporal dimension, encompassing short-, medium-, and long-term videos, ranging from 11 seconds to 1 hour, for robust contextual dynamics; 3) Breadth in data modalities, integrating multi-modal inputs beyond video frames, including subtitles and audio, to unveil the all-round capabilities of MLLMs; 4) Quality in annotations, utilizing rigorous manual labeling by expert annotators to facilitate precise and reliable model assessment. 900 videos totaling 254 hours are manually selected and annotated by repeatedly viewing all the video content, resulting in 2,700 question-answer pairs. With Video-MME, we extensively evaluate various state-of-the-art MLLMs, including the GPT-4 series and Gemini 1.5 Pro, as well as open-source image models like InternVL-Chat-V1.5 and video models like LLaVA-NeXT-Video. Our experiments reveal that Gemini 1.5 Pro is the best-performing commercial model, significantly outperforming the open-source models with an average accuracy of 75%, compared to 71.9% for GPT-4o. The results also demonstrate that Video-MME is a universal benchmark that applies to both image and video MLLMs. Further analysis indicates that subtitle and audio information can significantly enhance video understanding. In addition, a decline in MLLM performance is observed for all models as video duration increases. Our dataset, along with these findings, underscores the need for further improvements in handling longer sequences and multi-modal data, shedding light on future MLLM development. Project page: https://video-mme.github.io.
This paper introduces Video-MME, a comprehensive evaluation benchmark for Multi-modal Large Language Models (MLLMs) in video analysis, motivated by the observation that MLLMs' handling of sequential visual data remains far less explored than static image understanding. The benchmark comprises 900 videos spanning six visual domains and 2,700 expertly annotated question-answer pairs designed to evaluate MLLMs' video understanding capabilities. Key findings show that Gemini 1.5 Pro significantly outperforms the other evaluated models with an average accuracy of 75%, and that including subtitles and audio consistently enhances performance, especially on longer videos. The study also uncovers a decline in model performance as video length increases, emphasizing the need for continued advances in handling longer sequences and multi-modal data.
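As a rough illustration of how a multiple-choice video benchmark like this is typically scored, the sketch below computes overall and per-duration accuracy from a list of QA items. It is a minimal sketch under assumed conventions: the field names (`duration_category`, `question`, `options`, `answer`) and the `predict` callable are illustrative placeholders, not the actual Video-MME data schema or evaluation tooling.

```python
# Minimal sketch of multiple-choice accuracy scoring for a Video-MME-style QA set.
# Field names and the predict() interface are assumptions for illustration only.
import json
from collections import defaultdict


def evaluate(qa_path: str, predict) -> dict:
    """Compute overall and per-duration accuracy over a JSON list of QA items.

    `predict(item)` is assumed to return an option letter (e.g. "A"-"D") produced
    by an MLLM given the video frames and, optionally, subtitles or audio.
    """
    with open(qa_path, encoding="utf-8") as f:
        items = json.load(f)

    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        bucket = item["duration_category"]   # e.g. "short", "medium", "long"
        total[bucket] += 1
        if predict(item) == item["answer"]:  # exact match on the option letter
            correct[bucket] += 1

    per_bucket = {b: correct[b] / total[b] for b in total}
    overall = sum(correct.values()) / sum(total.values())
    return {"overall": overall, "by_duration": per_bucket}
```

Reporting accuracy per duration bucket, as in this sketch, is what surfaces the paper's observation that performance drops on longer videos.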
This paper evaluates the following models:
- Gemini 1.5 Pro
- GPT-4 series
- InternVL-Chat-V1.5
- LLaVA-NeXT-Video
The following datasets were used in this research:
- Video-MME: 900 manually selected and annotated videos totaling 254 hours, with 2,700 question-answer pairs.
Key findings include:
- Gemini 1.5 Pro achieved the highest average accuracy at 75%, compared to 71.9% for GPT-4o.
- Performance declines with increasing video duration for all models.
- Adding subtitle and audio information consistently improves video understanding (a minimal comparison is sketched after this list).
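To make the subtitle finding concrete, here is a hypothetical sketch of evaluating the same QA set with and without subtitles, reusing the `evaluate` helper from the earlier sketch. The `query_mllm` function, the prompt format, and the file name `video_mme_qa.json` are placeholders, not part of any real Video-MME tooling or model API.

```python
def query_mllm(prompt: str) -> str:
    """Placeholder for a real MLLM call; expected to return an option letter such as 'A'."""
    raise NotImplementedError("plug in an actual MLLM client here")


def make_predictor(use_subtitles: bool):
    """Build a predict(item) callable that formats the prompt with or without subtitles."""
    def predict(item):
        prompt = f"Question: {item['question']}\nOptions: {item['options']}"
        if use_subtitles and item.get("subtitles"):
            prompt = f"Subtitles: {item['subtitles']}\n{prompt}"
        return query_mllm(prompt)
    return predict


# "video_mme_qa.json" is an assumed file name for the annotated QA items.
frames_only = evaluate("video_mme_qa.json", make_predictor(use_subtitles=False))
with_subs = evaluate("video_mme_qa.json", make_predictor(use_subtitles=True))
print(f"accuracy without subtitles: {frames_only['overall']:.3f}")
print(f"accuracy with subtitles:    {with_subs['overall']:.3f}")
```

Running the two conditions over identical questions isolates the contribution of the subtitle modality, which is the kind of comparison behind the paper's subtitle/audio finding.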
The authors identified the following limitations:
- Current MLLMs' performance degrades on longer videos, and they do not yet fully exploit multi-modal inputs such as subtitles and audio, underscoring the need for further improvement.
Compute details reported:
- Number of GPUs: not specified
- GPU Type: not specified
Keywords
multi-modal large language models, video analysis, benchmark evaluation, video understanding, temporal reasoning