Venue
International Conference on Learning Representations
Domain
artificial intelligence, computer vision, natural language processing
The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLMs). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using a single projection layer. Our work, for the first time, uncovers that properly aligning visual features with an advanced large language model can yield numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. Furthermore, we observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images and teaching users how to cook based on food photos. In our experiments, we found that a model trained only on short image-caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset for a second fine-tuning stage, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.
The paper presents MiniGPT-4, a vision-language model that aligns a frozen visual encoder with a frozen large language model, Vicuna, through a single projection layer. The model achieves advanced multi-modal capabilities such as detailed image description generation and website creation from handwritten drafts, capabilities not present in prior vision-language models such as Kosmos-1 and BLIP-2. The approach involves two training stages: an initial pretraining stage on a large dataset of aligned image-text pairs, followed by a second-stage fine-tuning on a curated dataset of detailed image descriptions to improve language generation. Experiments show that MiniGPT-4 generates richer and more coherent language outputs than previous models, demonstrating the effectiveness of pairing high-quality visual features with an advanced language model.
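To make the alignment concrete, below is a minimal sketch of the described design, assuming a ViT-style visual encoder and a decoder-only LLM with a Hugging Face-style interface; the class, argument, and variable names are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ProjectionAlignedVLM(nn.Module):
    """Frozen visual encoder + frozen LLM, bridged by one trainable projection."""

    def __init__(self, visual_encoder: nn.Module, llm: nn.Module,
                 vis_dim: int, llm_dim: int):
        super().__init__()
        self.visual_encoder = visual_encoder
        self.llm = llm
        # The only trainable component: a single linear layer mapping visual
        # features into the LLM's input embedding space.
        self.proj = nn.Linear(vis_dim, llm_dim)

        # Keep both the visual encoder and the LLM frozen.
        for module in (self.visual_encoder, self.llm):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor):
        # Extract visual features without tracking gradients for the encoder.
        with torch.no_grad():
            vis_feats = self.visual_encoder(images)   # (B, N, vis_dim)
        vis_tokens = self.proj(vis_feats)             # (B, N, llm_dim)
        # Prepend the projected visual tokens to the text embeddings and let
        # the frozen LLM model the combined sequence (assumes the LLM accepts
        # precomputed input embeddings, as Hugging Face decoders do).
        inputs = torch.cat([vis_tokens, text_embeds], dim=1)
        outputs = self.llm(inputs_embeds=inputs)
        return outputs.logits                         # next-token logits
```

In this sketch only `self.proj` receives gradients, mirroring the paper's claim that a single projection layer is sufficient to align the two frozen components.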
This paper employs the following methods:
- Frozen Visual Encoder
- Advanced LLM (Vicuna)
- Single Projection Layer
- Fine-tuning
The following datasets were used in this research:
- LAION
- Conceptual Captions
- SBU
The following metrics were used in the evaluation:
- Accuracy
- Hallucination Rate
The paper reports the following key findings:
- MiniGPT-4 generates detailed image descriptions
- MiniGPT-4 explains memes and creates websites from hand-drawn drafts
- Second-stage fine-tuning on detailed descriptions improves generation reliability and usability (see the sketch after this list)
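The reliability gain comes from the two-stage schedule described above. The following is a hedged sketch of that schedule under the same hypothetical model as earlier: stage one pretrains the projection layer on large-scale image-caption pairs, and stage two fine-tunes it on the small curated set of detailed descriptions. Loss shapes, learning rates, and step counts are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F


def train_projection(model, dataloader, lr: float, num_steps: int, device="cuda"):
    """Optimize only the projection layer; the encoder and LLM stay frozen."""
    optimizer = torch.optim.AdamW(model.proj.parameters(), lr=lr)
    data_iter = iter(dataloader)
    for _ in range(num_steps):
        images, text_embeds, targets = next(data_iter)
        # Assumes the model's forward returns next-token logits (B, T, V) and
        # that `targets` aligns with them, with -100 marking positions (e.g.,
        # visual tokens) excluded from the loss.
        logits = model(images.to(device), text_embeds.to(device))
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.to(device).reshape(-1),
            ignore_index=-100,
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Stage 1: large-scale aligned image-caption pairs (e.g., LAION, Conceptual
# Captions, SBU) to ground visual features in the LLM's embedding space.
# train_projection(model, short_caption_loader, lr=1e-4, num_steps=20_000)

# Stage 2: the small curated detailed-description set, which the paper reports
# fixes repetition and fragmentation in the generated text.
# train_projection(model, detailed_caption_loader, lr=3e-5, num_steps=200)
```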
The authors identified the following limitations:
The following computational resources were used:
- Number of GPUs: 4
- GPU Type: NVIDIA A100 80GB
Keywords
MiniGPT-4, vision-language understanding, large language models, multi-modal capabilities, AI-generated content