← ML Research Wiki / 2401.14159

Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks

Promptable Segmentation (2024)

Paper Information

arXiv ID: 2401.14159
Venue: arXiv.org
Domain: computer vision
SOTA Claim: Yes
Reproducibility: 6/10

Abstract

Figure 1: Grounded SAM can simultaneously detect and segment corresponding regions within images based on arbitrary text inputs provided by users, and it can seamlessly integrate with other open-world models to accomplish more intricate visual tasks.

Summary

Grounded SAM proposes a framework that assembles an open-set detector (Grounding DINO) with a promptable segmentation model (SAM) to tackle complex open-world visual tasks. The paper reviews current methodologies in open-world visual perception, including Unified Models, LLMs as Controllers, and Ensemble Foundation Models, and positions Grounded SAM as a flexible solution that enables efficient assembly of diverse expert models. Key capabilities include open-set segmentation, automatic image annotation via RAM-Grounded-SAM, and highly controllable image editing via Grounded-SAM-SD. The effectiveness of Grounded SAM is validated on the Segmentation in the Wild (SGinW) benchmark, where it shows significant performance improvements over previous approaches. Future prospects include enhancing annotation processes, leveraging large language models to execute computer vision tasks, and creating new datasets.
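The core assembly is a two-stage pipeline: the open-set detector turns a free-text prompt into bounding boxes, and the promptable segmenter turns those boxes into masks. A minimal sketch of this composition is below; the function names, signatures, and placeholder return values are illustrative stubs, not the actual Grounding DINO or SAM APIs.

```python
from dataclasses import dataclass
from typing import List

# Illustrative stubs: names and signatures here are assumptions for
# demonstration, not the real Grounded SAM / Grounding DINO / SAM interfaces.

@dataclass
class Box:
    xyxy: tuple   # (x0, y0, x1, y1) in pixel coordinates
    label: str    # phrase from the text prompt that matched
    score: float  # detection confidence

def grounding_dino_detect(image, text_prompt: str,
                          box_threshold: float = 0.3) -> List[Box]:
    """Open-set detection: return boxes for regions matching the text prompt."""
    # Stub: pretend the detector found one confident region for the prompt.
    return [Box(xyxy=(0, 0, 10, 10), label=text_prompt, score=0.9)]

def sam_segment(image, boxes: List[Box]) -> List[str]:
    """Promptable segmentation: each detected box becomes a prompt for SAM."""
    # Stub: return a placeholder mask identifier per box.
    return [f"mask_for_{b.label}" for b in boxes]

def grounded_sam(image, text_prompt: str):
    """Assembly: detector output is fed as the segmenter's prompt."""
    boxes = grounding_dino_detect(image, text_prompt)
    masks = sam_segment(image, boxes)
    return boxes, masks

boxes, masks = grounded_sam(image=None, text_prompt="dog")
```

The point of the sketch is the interface, not the models: because the two stages communicate only through boxes, either component can be swapped for another expert model without changing the other.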

Methods

This paper employs the following methods:

  • Ensemble Foundation Models
  • Unified Models
  • LLM as Controller

Models Used

  • Grounded SAM
  • Stable Diffusion
  • Recognize Anything Model (RAM)
  • Grounding DINO
  • Segment Anything Model (SAM)
  • OSX
  • BLIP
  • ChatGPT
  • GPT-4
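RAM-Grounded-SAM, the automatic annotation capability, chains a tagging model in front of the detect-and-segment pipeline: RAM proposes tags with no user input, and the tags become the text prompt. A self-contained sketch follows; the stub functions and the `" . "` phrase separator are assumptions for illustration (Grounding DINO commonly accepts `.`-separated phrases, but this is not a guarantee about the repo's exact API).

```python
# Illustrative stubs only: these are not the real RAM or Grounded SAM APIs.

def ram_tag(image):
    """Stub for the Recognize Anything Model: predict tags with no text input."""
    return ["dog", "grass"]

def grounded_sam_stub(image, text_prompt):
    """Stub standing in for the Grounding DINO -> SAM chain:
    one (box, mask) annotation per '.'-separated phrase in the prompt."""
    phrases = [p.strip() for p in text_prompt.split(".") if p.strip()]
    return [{"label": p, "box": (0, 0, 1, 1), "mask": f"mask_{p}"}
            for p in phrases]

def auto_annotate(image):
    """RAM-Grounded-SAM: tags from RAM drive grounded detection + segmentation,
    yielding labeled masks with no human-written prompt."""
    tags = ram_tag(image)
    prompt = " . ".join(tags)  # assumed phrase-separator convention
    return grounded_sam_stub(image, prompt)

annotations = auto_annotate(image=None)
```

Run over a dataset, this loop produces labeled boxes and masks without manual prompting, which is what makes it usable as an annotation engine.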

Datasets

The following datasets were used in this research:

  • SAM-1B
  • V3Det
  • SGinW

Evaluation Metrics

  • None specified

Results

  • Significant performance improvements on SGinW benchmark compared to previous models

Limitations

The authors identified the following limitations:

  • Lack of robust pipelines for complex open-world tasks
  • Limited scope in data for complex tasks like open-set segmentation

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

grounded segmentation, open-set detection, open-world models, multimodal models, image annotation, image editing
