
MINIGPT-4: ENHANCING VISION-LANGUAGE UNDERSTANDING WITH ADVANCED LARGE LANGUAGE MODELS

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny (King Abdullah University of Science and Technology, 2023)

Paper Information
arXiv ID: 2304.10592
Venue: International Conference on Learning Representations
Domain: artificial intelligence, computer vision, natural language processing
SOTA Claim: Yes
Code: Available (https://minigpt-4.github.io/)
Reproducibility: 7/10

Abstract

The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the use of sophisticated large language models (LLMs). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using a single projection layer. Our work, for the first time, shows that properly aligning the visual features with an advanced large language model can yield numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. We also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images and teaching users how to cook based on food photos. In our experiments, we found that a model trained on short image caption pairs can produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in a second stage to fine-tune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.
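
The second-stage fine-tuning mentioned above turns each curated detailed description into an instruction-following example for the frozen LLM. The sketch below shows one way such an example could be assembled; the prompt template and helper names are illustrative assumptions, not necessarily the exact format used in the released code.

```python
# Illustrative sketch of building a second-stage fine-tuning example, assuming
# projected image features are spliced in where the image placeholder appears.
# The template and helper names are assumptions made for illustration only.
PROMPT_TEMPLATE = (
    "###Human: <Img><ImageHere></Img> "
    "Describe this image in detail. ###Assistant: "
)

def build_finetune_example(detailed_description: str) -> dict:
    """Pair the instruction prompt with a curated detailed description,
    which serves as the target text the model learns to generate."""
    return {
        "prompt": PROMPT_TEMPLATE,       # image features replace <ImageHere> at runtime
        "target": detailed_description,  # loss would typically be computed on target tokens
    }
```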

Summary

The paper presents MiniGPT-4, a vision-language model that aligns a frozen visual encoder with a frozen large language model, Vicuna, through a single projection layer. The model achieves advanced multi-modal capabilities such as detailed image description generation and website creation from handwritten drafts, capabilities not demonstrated by earlier vision-language models like Kosmos-1 and BLIP-2. Training proceeds in two stages: an initial pretraining stage on a large corpus of aligned image-text pairs, followed by a second fine-tuning stage on a curated dataset of detailed image descriptions to improve language generation. Experiments show that MiniGPT-4 produces richer and more coherent language outputs than previous models, demonstrating the effectiveness of combining high-quality visual features with an advanced language model.
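
The alignment idea itself is compact: visual features from the frozen encoder are mapped by one trainable linear layer into the LLM's token-embedding space and prepended to the text embeddings. The snippet below is a minimal sketch of that step, not the authors' implementation; module names and feature dimensions are assumptions.

```python
# Minimal sketch of the MiniGPT-4 alignment idea (not the released code).
# Assumes a frozen visual encoder producing a sequence of visual tokens and a
# frozen decoder-only LLM (Vicuna in the paper); dimensions are illustrative.
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Single trainable linear layer mapping visual features into the
    LLM's token-embedding space (the only trained component)."""
    def __init__(self, vision_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_visual_tokens, vision_dim)
        return self.proj(visual_feats)  # -> (batch, num_visual_tokens, llm_dim)

def build_llm_inputs(image_embeds: torch.Tensor,
                     text_token_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend projected image tokens to the text embeddings so the frozen
    LLM attends to the image as if it were a prefix of the prompt."""
    return torch.cat([image_embeds, text_token_embeds], dim=1)
```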

Methods

This paper employs the following methods:

  • Frozen Visual Encoder
  • Frozen Advanced LLM (Vicuna)
  • Single Trainable Projection Layer (aligns visual features with the LLM's embedding space)
  • Two-Stage Training: pretraining on aligned image-text pairs, then fine-tuning on detailed image descriptions (see the training-setup sketch below)
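
Because both backbone models stay frozen, each training stage reduces to optimizing the projection layer alone. The sketch below illustrates that setup under the same assumptions as the architecture snippet above; the optimizer choice and learning rate are placeholders, not the paper's reported values.

```python
# Hedged sketch of the training setup implied by the paper: the visual encoder
# and the LLM stay frozen, and only the projection layer receives gradients.
# `vision_encoder`, `llm`, and `projector` are placeholders for the actual modules.
import torch

def freeze(module: torch.nn.Module) -> None:
    """Disable gradients for every parameter of a frozen backbone."""
    for p in module.parameters():
        p.requires_grad = False

def configure_training(vision_encoder, llm, projector):
    freeze(vision_encoder)  # frozen in both stages
    freeze(llm)             # frozen in both stages
    # Only the linear projection layer is updated; the same recipe is reused
    # in stage 2 on the curated detailed-description data.
    return torch.optim.AdamW(projector.parameters(), lr=1e-4)  # lr is illustrative
```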

Models Used

  • Vicuna (frozen LLM backbone)
  • BLIP-2 (provides the pretrained visual encoder; also a comparison baseline)
  • Kosmos-1 (comparison baseline)

Datasets

The following datasets were used in this research (a minimal data-loading sketch follows the list):

  • LAION
  • Conceptual Captions
  • SBU
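
For orientation, the first-stage pretraining only requires aligned image-caption pairs drawn from these corpora. The sketch below shows one hypothetical way such pairs could be iterated from a local manifest; the `load_pairs` helper and manifest format are assumptions, not part of the released data pipeline, which streams the original LAION, Conceptual Captions, and SBU shards.

```python
# Illustrative sketch (not the released pipeline): yield (image, caption) pairs
# from a tab-separated manifest of "image_path<TAB>caption" lines.
from typing import Iterator, Tuple
from PIL import Image

def load_pairs(manifest_path: str) -> Iterator[Tuple[Image.Image, str]]:
    """Iterate over image-caption pairs listed in a local manifest file."""
    with open(manifest_path) as f:
        for line in f:
            image_path, caption = line.rstrip("\n").split("\t", 1)
            yield Image.open(image_path).convert("RGB"), caption
```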

Evaluation Metrics

  • Accuracy
  • Hallucination Rate

Results

  • MiniGPT-4 generates detailed image descriptions
  • MiniGPT-4 explains memes and creates websites from hand-drawn drafts
  • Second-stage fine-tuning on detailed descriptions improves generation reliability and overall usability

Limitations

The authors identified the following limitations:

  • Language hallucination: descriptions may include objects or details not present in the image
  • Limited fine-grained perception, such as difficulty with spatial localization in images

Technical Requirements

  • Number of GPUs: 4
  • GPU Type: NVIDIA A100 80GB

Keywords

MiniGPT-4, vision-language understanding, large language models, multi-modal capabilities, AI-generated content

External Resources

  • Project page, code, pre-trained model, and collected dataset: https://minigpt-4.github.io/