VideoMamba: State Space Model for Efficient Video Understanding

(2024)

Paper Information
arXiv ID: 2403.06977
Venue: European Conference on Computer Vision (ECCV)

Abstract

Addressing the dual challenges of local redundancy and global dependencies in video understanding, this work innovatively adapts the Mamba to the video domain. The proposed VideoMamba overcomes the limitations of existing 3D convolution neural networks and video transformers. Its linear-complexity operator enables efficient long-term modeling, which is crucial for high-resolution long video understanding. Extensive evaluations reveal VideoMamba's four core abilities: (1) Scalability in the visual domain without extensive dataset pretraining, thanks to a novel self-distillation technique; (2) Sensitivity for recognizing short-term actions even with fine-grained motion differences; (3) Superiority in long-term video understanding, showcasing significant advancements over traditional feature-based models; and (4) Compatibility with other modalities, demonstrating robustness in multi-modal contexts. Through these distinct advantages, VideoMamba sets a new benchmark for video understanding, offering a scalable and efficient solution for comprehensive video understanding. All the code and models are available.

Summary

The paper introduces VideoMamba, a novel State Space Model (SSM) designed for efficient video understanding. It addresses challenges related to local redundancy and global dependencies in video processing, outperforming traditional 3D convolutional neural networks and video transformers. VideoMamba features a linear-complexity operator that facilitates long-term modeling, crucial for high-resolution, long videos. The paper reports four core abilities of VideoMamba: (1) scalability without extensive dataset pretraining, enabled by self-distillation; (2) sensitivity in recognizing short-term actions, even with fine-grained motion differences; (3) superiority in long-term video understanding compared to feature-based models; and (4) compatibility with other modalities. Extensive evaluations demonstrate the model's robust performance across various video understanding tasks.
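
The linear-complexity operator referred to above is the selective SSM (Mamba) recurrence, applied to video by flattening spatiotemporal patches into one token sequence. The sketch below is a minimal, illustrative PyTorch version of that recurrence; the function name, tensor shapes, and the plain Python loop are assumptions for clarity, not the paper's actual code (the released implementation relies on Mamba's fused CUDA scan).

```python
# Minimal sketch of a linear-complexity selective SSM scan over flattened video tokens.
# Shapes, names, and the Python loop are illustrative; this is not the official code.
import torch

def selective_scan(x, A, B, C):
    """Recurrence h_t = A_t * h_{t-1} + B_t * x_t,  y_t = <C_t, h_t>.

    x: (L, D)     flattened spatiotemporal tokens (L = T * H * W patches, D channels)
    A: (L, D, N)  input-dependent state decay (the "selective" part)
    B: (L, D, N)  input-dependent projection into the N-dimensional state
    C: (L, N)     input-dependent readout of the state
    Returns y of shape (L, D); cost grows as O(L), unlike O(L^2) self-attention.
    """
    L, D = x.shape
    N = A.shape[-1]
    h = torch.zeros(D, N)
    ys = []
    for t in range(L):                       # single left-to-right pass
        h = A[t] * h + B[t] * x[t, :, None]  # update the hidden state per channel
        ys.append((h * C[t]).sum(-1))        # read the state back out to D channels
    return torch.stack(ys)

# Toy usage: 8 frames of 14x14 patches, 192 channels, 16-dimensional state.
L, D, N = 8 * 14 * 14, 192, 16
y = selective_scan(torch.randn(L, D),
                   torch.rand(L, D, N), torch.randn(L, D, N), torch.randn(L, N))
assert y.shape == (L, D)
```

In the paper, tokens are additionally scanned bidirectionally, which keeps the cost linear in sequence length while giving every token a global receptive field.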

Methods

This paper employs the following methods:

  • Selective State Space Model
  • Self-Distillation
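
For the self-distillation listed above, the paper uses a smaller, already-trained model to guide a larger one so that it scales without extensive pretraining data. The snippet below is only a schematic of that idea; the wrapper class, the projection head, the assumed (features, logits) output convention, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Schematic of self-distillation: a frozen, smaller pretrained model acts as teacher,
# and the larger student aligns its features to it alongside the classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillWrapper(nn.Module):
    def __init__(self, student: nn.Module, teacher: nn.Module, dim_s: int, dim_t: int):
        super().__init__()
        self.student = student
        self.teacher = teacher.eval()        # frozen, smaller pretrained model
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.proj = nn.Linear(dim_s, dim_t)  # align feature dimensions (assumption)

    def forward(self, video, labels):
        feat_s, logits = self.student(video) # assumed to return (features, logits)
        with torch.no_grad():
            feat_t, _ = self.teacher(video)
        loss_cls = F.cross_entropy(logits, labels)
        loss_distill = F.mse_loss(self.proj(feat_s), feat_t)
        return loss_cls + loss_distill       # equal weighting here is a guess
```

In the paper, this mechanism is what allows the larger VideoMamba variants to keep improving on ImageNet-1K instead of overfitting.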

Models Used

  • VideoMamba
  • S4
  • Mamba
  • TimeSformer
  • ViViT
  • UniFormer

Datasets

The following datasets were used in this research:

  • K400 (Kinetics-400)
  • SthSthV2 (Something-Something V2)
  • Breakfast
  • COIN
  • LVU
  • ImageNet-1K

Evaluation Metrics

  • Accuracy

Results

  • Improvements in scalability without extensive dataset pretraining
  • Superior performance in short-term action recognition
  • Efficiency in long-video processing, with faster inference and lower memory use
  • Robustness in multi-modal integration and video-text retrieval

Technical Requirements

  • Number of GPUs: 1
  • GPU Type: NVIDIA A100-80G
  • Compute Requirements: batch size of 128 (see the benchmarking sketch below)
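
A minimal sketch of how the single-A100, batch-size-128 efficiency setting could be measured is shown below. The input layout (B, C, T, H, W) and the model argument are assumptions; in practice, swap in the released VideoMamba code and checkpoints.

```python
# Illustrative throughput / peak-memory benchmark for the setting listed above
# (one NVIDIA A100-80G, batch size 128). Model and input layout are assumptions.
import time
import torch

@torch.no_grad()
def benchmark(model, batch_size=128, frames=8, size=224, warmup=5, iters=20):
    model = model.cuda().eval()
    clips = torch.randn(batch_size, 3, frames, size, size, device="cuda")
    torch.cuda.reset_peak_memory_stats()
    for _ in range(warmup):          # warm up CUDA kernels before timing
        model(clips)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(clips)
    torch.cuda.synchronize()
    elapsed = time.time() - start
    print(f"throughput: {iters * batch_size / elapsed:.1f} clips/s  |  "
          f"peak memory: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```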
