
WORLDSIMBENCH: TOWARDS VIDEO GENERATION MODELS AS WORLD SIMULATORS

Yiran Qin (The Chinese University of Hong Kong, Shenzhen; Shanghai Artificial Intelligence Laboratory), Zhelun Shi (Beihang University), Jiwen Yu (The University of Hong Kong), Xijun Wang (Shanghai Artificial Intelligence Laboratory), Enshen Zhou (Beihang University), Lijun Li (Shanghai Artificial Intelligence Laboratory), Zhenfei Yin (Shanghai Artificial Intelligence Laboratory), Xihui Liu (The University of Hong Kong), Lu Sheng (Beihang University), Jing Shao (Shanghai Artificial Intelligence Laboratory), Lei Bai (Shanghai Artificial Intelligence Laboratory), Wanli Ouyang (Shanghai Artificial Intelligence Laboratory), Ruimao Zhang (The Chinese University of Hong Kong, Shenzhen) (2024)

Paper Information

arXiv ID: 2410.18072
Venue: arXiv.org
Domain: computer vision and artificial intelligence
SOTA Claim: Yes
Reproducibility: 6/10

Abstract

Recent advancements in predictive models have demonstrated exceptional capabilities in predicting the future state of objects and scenes. However, the lack of categorization based on inherent characteristics continues to hinder the progress of predictive model development. Additionally, existing benchmarks are unable to effectively evaluate higher-capability, highly embodied predictive models from an embodied perspective. In this work, we classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench. WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks, covering three representative embodied scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. In the Explicit Perceptual Evaluation, we introduce the HF-Embodied Dataset, a video assessment dataset based on fine-grained human feedback, which we use to train a Human Preference Evaluator that aligns with human perception and explicitly assesses the visual fidelity of World Simulators. In the Implicit Manipulative Evaluation, we assess the video-action consistency of World Simulators by evaluating whether the generated situation-aware video can be accurately translated into the correct control signals in dynamic environments. Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence.

Summary

This paper presents a dual evaluation framework called WorldSimBench aimed at assessing video generation models that function as World Simulators. It categorizes predictive models into a hierarchy based on their degree of embodiment and proposes two evaluation approaches: Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, focusing on embodied scenarios such as Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. The paper introduces the HF-Embodied Dataset for video assessment based on fine-grained human feedback, and evaluates the models' capabilities in generating situation-aware videos and translating them into actions. Through extensive testing, the authors highlight the strengths and weaknesses of current video generation models, providing insights for future developments in embodied artificial intelligence.
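For intuition, the Explicit Perceptual Evaluation amounts to scoring each generated video along fine-grained human-preference dimensions and aggregating the scores per model. The sketch below is illustrative only: the dimension names and the mean aggregation are assumptions, not the paper's exact protocol.

```python
# Hedged sketch: aggregating per-dimension human-preference scores per model.
# Dimension names and mean aggregation are illustrative assumptions.

from collections import defaultdict
from statistics import mean

def aggregate_scores(ratings):
    """ratings: list of (model_name, {dimension: score}) pairs.
    Returns {model_name: {dimension: mean score across ratings}}."""
    per_model = defaultdict(lambda: defaultdict(list))
    for model, dims in ratings:
        for dim, score in dims.items():
            per_model[model][dim].append(score)
    return {
        m: {d: mean(scores) for d, scores in dims.items()}
        for m, dims in per_model.items()
    }

# Hypothetical ratings for two models on two assumed dimensions.
ratings = [
    ("ModelA", {"visual_quality": 4, "embodiment": 3}),
    ("ModelA", {"visual_quality": 2, "embodiment": 5}),
    ("ModelB", {"visual_quality": 5, "embodiment": 4}),
]
print(aggregate_scores(ratings))  # per-dimension means for each model
```

A trained Human Preference Evaluator would replace the human-provided scores with predicted ones; the aggregation step stays the same.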

Methods

This paper employs the following methods:

  • WorldSimBench
  • Explicit Perceptual Evaluation
  • Implicit Manipulative Evaluation
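The Implicit Manipulative Evaluation can be pictured as a closed loop: the World Simulator generates a situation-aware video from the current observation and instruction, a video-to-action policy decodes control signals from it, and the environment executes them. The interfaces below (`WorldSimulator`, `Video2ActionPolicy`, `run_episode`) are hypothetical placeholders, not the paper's actual API.

```python
# Hedged sketch of the Implicit Manipulative Evaluation loop.
# All class and method names are hypothetical placeholders.

class WorldSimulator:
    def generate_video(self, observation, instruction):
        """Predict a short future video conditioned on obs + instruction."""
        raise NotImplementedError

class Video2ActionPolicy:
    def decode_actions(self, video):
        """Translate a generated video into low-level control signals."""
        raise NotImplementedError

def run_episode(env, simulator, policy, instruction, max_steps=100):
    """Roll out: generate video -> decode actions -> step the environment.
    Aggregating success over episodes gives the action-level evaluation."""
    obs = env.reset()
    for _ in range(max_steps):
        video = simulator.generate_video(obs, instruction)
        for action in policy.decode_actions(video):
            obs, done = env.step(action)
            if done:
                return env.task_succeeded()
    return False
```

The key point the benchmark probes is video-action consistency: if the generated video does not faithfully depict a correct continuation, the decoded actions fail in the environment even when the video looks plausible.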

Models Used

  • Open-Sora-Plan (T2V)
  • LaVie
  • ModelScope
  • OpenSora
  • AnimateDiff
  • DynamiCrafter
  • EasyAnimate

Datasets

The following datasets were used in this research:

  • HF-Embodied Dataset
  • OpenAI Contractor Gameplay Dataset
  • nuScenes
  • RH20T-P
  • CALVIN

Evaluation Metrics

  • Accuracy
  • Pearson linear correlation coefficient (PLCC)
  • Route Completion (RC)
  • Infraction Score (IS)
  • Driving Score (DS)
  • Vehicle Collisions (VC)
  • Pedestrian Collisions (PC)
  • Layout Collisions (LC)
  • Red Light Violations (RV)
  • Offroad Infractions (OI)
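Two of these metrics are commonly computed as follows; the exact definitions in the paper are not reproduced here, and the Driving Score convention below (DS = RC × IS, per the usual CARLA-style aggregate) is an assumption.

```python
# Hedged sketch: common formulas for PLCC and a CARLA-style Driving Score.
# The paper's exact definitions may differ; treat this as illustrative only.

import math

def plcc(xs, ys):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def driving_score(route_completion, infraction_score):
    """Assumed CARLA-style aggregate: DS = RC * IS."""
    return route_completion * infraction_score

# Perfectly linearly related scores give PLCC = 1.0.
print(plcc([1, 2, 3, 4], [2, 4, 6, 8]))
print(driving_score(0.8, 0.9))
```

PLCC is typically used to check how well the Human Preference Evaluator's scores track human ratings, while the driving metrics (RC, IS, DS, and the infraction counts) come from closed-loop autonomous-driving rollouts.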

Results

  • Identified hierarchical categorization for Predictive Models based on embodiment
  • Evaluated World Simulators under two distinct evaluation methods
  • Provided insights into the performance of various video generation models

Limitations

The authors identified the following limitations:

  • Evaluation focused mainly on three specific embodied scenarios, requiring further exploration for broader applications.

Technical Requirements

  • Number of GPUs: 4
  • GPU Type: A100 80GB

Keywords

video generation, world simulation, embodied AI, benchmark, evaluation, predictive models
