
KOSMOS-2: Grounding Multimodal Large Language Models to the World

Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei (Microsoft Research, https://aka.ms/GeneralAI), 2023

Paper Information
  • arXiv ID: 2306.14824
  • Venue: International Conference on Learning Representations
  • Domain: Not specified

Abstract

We introduce KOSMOS-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent referring expressions as links in Markdown, i.e., "[text span](bounding boxes)", where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GRIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), KOSMOS-2 integrates the grounding capability into downstream applications. We evaluate KOSMOS-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at
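
The "[text span](bounding boxes)" notation is easiest to see in code. Below is a minimal sketch, not the released implementation: it assumes a 32×32 grid for quantizing boxes into discrete location tokens and uses hypothetical <loc_i> token names, so the actual vocabulary and serialization in KOSMOS-2 may differ in detail.

```python
# Minimal sketch (illustration only) of serializing a bounding box into
# discrete location tokens and a Markdown-style grounded link, following the
# "[text span](bounding boxes)" format described in the abstract.
# The 32x32 grid and the "<loc_i>" token names are assumptions.

def box_to_location_tokens(box, image_width, image_height, num_bins=32):
    """Quantize an (x1, y1, x2, y2) box into two location tokens:
    one for its top-left grid cell and one for its bottom-right grid cell."""
    x1, y1, x2, y2 = box

    def cell_index(x, y):
        col = min(int(x / image_width * num_bins), num_bins - 1)
        row = min(int(y / image_height * num_bins), num_bins - 1)
        return row * num_bins + col  # flatten (row, col) into one bin id

    return f"<loc_{cell_index(x1, y1)}><loc_{cell_index(x2, y2)}>"


def grounded_link(text_span, boxes, image_width, image_height):
    """Render a text span and its box(es) as "[text span](location tokens)"."""
    tokens = "".join(
        box_to_location_tokens(b, image_width, image_height) for b in boxes
    )
    return f"[{text_span}]({tokens})"


# Example: ground the phrase "a snowman" to one box in a 640x480 image.
print(grounded_link("a snowman", [(120, 60, 380, 420)], 640, 480))
# -> [a snowman](<loc_134><loc_915>)
```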

Summary

This paper introduces KOSMOS-2, a multimodal large language model (MLLM) that integrates grounding capabilities. The model enables users to refer directly to objects in images using bounding boxes, improving human-AI interaction in vision-language tasks. KOSMOS-2 is based on the Transformer architecture and utilizes a large-scale dataset of grounded image-text pairs known as GRIT, which is constructed by linking text spans in captions to corresponding image regions. The model exhibits enhanced performance in multimodal grounding, referring expression comprehension and generation, and various perception-language tasks. Results indicate that KOSMOS-2 achieves competitive performance on existing benchmarks while also demonstrating new capabilities in grounding and referring tasks.
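
The GRIT construction mentioned above (linking caption spans to image regions) can be sketched as a two-stage pipeline: extract noun chunks from the caption, then associate each chunk with boxes from an open-vocabulary grounding detector (the paper uses spaCy and GLIP for these roles). The code below is an illustrative approximation of that idea, not the authors' pipeline; `detect_regions` is a hypothetical placeholder rather than a real GLIP API.

```python
# Hedged sketch of a GRIT-style grounding pipeline for image-text pairs:
# noun chunks are extracted from the caption and linked to boxes predicted
# for each phrase by a grounding detector supplied by the caller.

import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline for noun chunking


def extract_noun_chunks(caption):
    """Return the noun chunks of a caption, e.g. 'a dog' and 'a frisbee'."""
    return [chunk.text for chunk in nlp(caption).noun_chunks]


def ground_caption(image, caption, detect_regions, score_threshold=0.65):
    """Link each noun chunk of the caption to bounding boxes in the image.

    detect_regions(image, phrase) is assumed to return (box, score) pairs;
    low-confidence boxes are discarded via score_threshold.
    """
    grounded = []
    for phrase in extract_noun_chunks(caption):
        boxes = [box for box, score in detect_regions(image, phrase)
                 if score >= score_threshold]
        if boxes:
            grounded.append((phrase, boxes))
    return grounded  # list of (text span, boxes) pairs ready for serialization
```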

Methods

This paper employs the following methods:

  • Transformer

Models Used

  • KOSMOS-2
  • KOSMOS-1

Datasets

The following datasets were used in this research:

  • GRIT
  • LAION-2B
  • COYO-700M
  • Flickr30k Entities
  • RefCOCO
  • RefCOCO+
  • RefCOCOg

Evaluation Metrics

  • IoU (see the computation sketch after this list)
  • R@1
  • R@5
  • R@10
  • CIDEr
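
For referring expression comprehension, the IoU metric is typically reported as Acc@0.5: a predicted box counts as correct when its intersection-over-union with the ground-truth box is at least 0.5. The sketch below shows that computation, assuming boxes given as (x1, y1, x2, y2) coordinates.

```python
# Minimal sketch of IoU and the Acc@0.5 criterion used for referring
# expression comprehension. Boxes are (x1, y1, x2, y2) with x2 > x1, y2 > y1.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    intersection = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0


def accuracy_at_05(predicted_boxes, ground_truth_boxes):
    """Fraction of predictions whose IoU with the ground truth is >= 0.5."""
    hits = sum(iou(p, g) >= 0.5
               for p, g in zip(predicted_boxes, ground_truth_boxes))
    return hits / len(ground_truth_boxes)
```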

Results

  • KOSMOS-2 achieves competitive performance on language and vision-language tasks
  • KOSMOS-2 performs strongly on multimodal grounding tasks
  • KOSMOS-2 outperforms comparable models on phrase grounding (Flickr30k Entities)
  • KOSMOS-2 shows notable results on referring expression comprehension and generation (RefCOCO, RefCOCO+, RefCOCOg)

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: 256
  • GPU Type: V100
