Venue
Neural Information Processing Systems
Domain
artificial intelligence, computer vision, natural language processing
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures (self-supervised, strongly supervised, or combinations thereof) based on experiments with over 20 vision encoders. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks, and introduce a new vision-centric benchmark, CV-Bench. To further improve visual grounding, we propose the Spatial Vision Aggregator (SVA), a dynamic and spatially-aware connector that integrates high-resolution vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of data source balancing and distribution ratio. Collectively, Cambrian-1 not only achieves state-of-the-art performance but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.
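The abstract describes the Spatial Vision Aggregator only at a high level. The PyTorch sketch below illustrates one plausible form of a spatially-aware cross-attention connector that compresses a high-resolution vision feature grid into a smaller set of LLM-dimension tokens. The class name, shapes, and single-encoder setup are illustrative assumptions, not the authors' implementation (the actual SVA is more general, e.g. it can aggregate features from multiple vision encoders).

```python
# Minimal sketch (assumption: PyTorch) of a spatially-aware cross-attention
# connector in the spirit of the Spatial Vision Aggregator described above.
# Names and dimensions are illustrative only.
import torch
import torch.nn as nn


class SpatialAggregatorSketch(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096, grid=24, out_grid=8, heads=8):
        super().__init__()
        assert grid % out_grid == 0
        self.grid, self.out_grid = grid, out_grid
        self.win = grid // out_grid  # local window each query attends to
        # One learnable query token per output spatial location.
        self.queries = nn.Parameter(torch.randn(out_grid * out_grid, llm_dim) * 0.02)
        self.kv_proj = nn.Linear(vis_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, heads, batch_first=True)

    def forward(self, vis_feats):
        # vis_feats: (B, grid*grid, vis_dim) high-resolution encoder features.
        B = vis_feats.size(0)
        kv = self.kv_proj(vis_feats).view(B, self.grid, self.grid, -1)
        tokens = []
        for i in range(self.out_grid):
            for j in range(self.out_grid):
                # Each query attends only to its local spatial window, preserving
                # locality while reducing grid*grid tokens to out_grid*out_grid.
                window = kv[:, i * self.win:(i + 1) * self.win,
                               j * self.win:(j + 1) * self.win, :].reshape(B, -1, kv.size(-1))
                q = self.queries[i * self.out_grid + j].expand(B, 1, -1)
                pooled, _ = self.attn(q, window, window)
                tokens.append(pooled)
        return torch.cat(tokens, dim=1)  # (B, out_grid*out_grid, llm_dim)


# Example: 576 (24x24) vision tokens reduced to 64 (8x8) LLM-dimension tokens.
feats = torch.randn(2, 24 * 24, 1024)
print(SpatialAggregatorSketch()(feats).shape)  # torch.Size([2, 64, 4096])
```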
This paper introduces Cambrian-1, a family of multimodal large language models (MLLMs) designed with a vision-centric approach to improve visual understanding. The authors highlight the gap between language modeling and visual representation learning in MLLMs, arguing that current benchmarks and methodologies do not adequately assess multimodal performance. They propose a new vision-centric benchmark, CV-Bench, and introduce the Spatial Vision Aggregator (SVA), a connector that integrates high-resolution vision features with the LLM while reducing the number of vision tokens. The paper emphasizes the importance of high-quality visual instruction-tuning data and carefully curates it from publicly available sources to enhance model performance. Through extensive experimentation, Cambrian-1 achieves state-of-the-art performance across various benchmarks, underscoring its strength on vision-centric tasks. The release of model weights, tools, datasets, and detailed tuning recipes is intended to support and inspire further advancements in multimodal systems and visual representation learning.
This paper employs the following methods:
- Multimodal large language models
- Visual instruction tuning
- Dynamic spatial vision feature integration
- Data curation techniques with per-source balancing (see the sketch after this list)
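The abstract stresses data source balancing and distribution ratios during curation. Below is a minimal, hypothetical Python sketch of per-source capping; the `source` field, cap value, and function name are illustrative assumptions rather than the authors' exact recipe.

```python
# Hypothetical sketch of per-source balancing for instruction-tuning data.
# The cap value and field names are assumptions for illustration only.
import random
from collections import defaultdict


def balance_sources(examples, cap_per_source=250_000, seed=0):
    """Group examples by their 'source' field and cap each source's contribution,
    so that no single data source dominates the instruction-tuning mixture."""
    rng = random.Random(seed)
    by_source = defaultdict(list)
    for ex in examples:
        by_source[ex["source"]].append(ex)
    balanced = []
    for source, items in by_source.items():
        rng.shuffle(items)
        balanced.extend(items[:cap_per_source])
    rng.shuffle(balanced)
    return balanced


# Usage: examples = [{"source": "coco_vqa", "conversation": ...}, ...]
# mixture = balance_sources(examples, cap_per_source=250_000)
```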
The following datasets were used in this research:
- ImageNet-1K
- COCO
- ADE20K
- Cambrian-10M
The key results reported include:
- Cambrian-1 achieves state-of-the-art performance across various multimodal benchmarks.
- CV-Bench, a new vision-centric benchmark for evaluating MLLMs, is introduced.
- Visual grounding is improved through the Spatial Vision Aggregator (SVA) and carefully curated instruction-tuning data.
The authors identified the following limitations:
- The potential for over-optimization on benchmarks that do not reflect real-world performance.
- Dependence on high-quality, balanced datasets for effective model tuning.
- Number of GPUs: None specified
- GPU Type: NVIDIA A6000, A100, H100
multimodal large language models
vision-centric evaluation
visual representation learning
instruction tuning
benchmarking
vision encoders