V2X-SIM

Dataset Information
Modalities
Point cloud, RGB video
Introduced
2022
License
Custom
Homepage

Overview

V2X-Sim, short for vehicle-to-everything simulation, is a synthetic collaborative perception dataset for autonomous driving developed by the AI4CE Lab at NYU and the MediaBrain Group at SJTU. Data is collected from both roadside infrastructure and vehicles present near the same intersection; by combining these viewpoints, the dataset aims to encourage research on collaborative perception tasks between multiple vehicles and roadside infrastructure.

Although not collected from the real world, the dataset is generated with highly realistic traffic simulation software so that it remains representative of real-world driving scenarios. Specifically, the traffic flow in the recordings is managed by CARLA-SUMO co-simulation, and three CARLA town maps are currently used to increase the dataset's diversity.

Here is a tutorial showing how to load the dataset: https://ai4ce.github.io/V2X-Sim/tutorial.html
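As a minimal sketch of what loading the LiDAR data might look like: V2X-Sim is distributed in a nuScenes-style format, where point clouds are commonly stored as flat binary files of float32 values (assumed here to be x, y, z, intensity per point). The helper below and the demo file it reads are illustrative only; consult the tutorial above for the authoritative loader and the actual per-point field layout.

```python
import os
import tempfile

import numpy as np


def load_lidar_bin(path, fields=4):
    """Load a nuScenes-style LiDAR .bin file into an (N, fields) float32 array.

    Assumes the file is a flat sequence of float32 values with `fields`
    values per point (x, y, z, intensity in the common nuScenes layout).
    """
    pts = np.fromfile(path, dtype=np.float32)
    return pts.reshape(-1, fields)


# Demo with a synthetic file; real files would live inside the V2X-Sim release.
demo = np.array([[1.0, 2.0, 0.5, 0.9],
                 [3.0, -1.0, 0.2, 0.4]], dtype=np.float32)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    path = f.name
demo.tofile(path)

cloud = load_lidar_bin(path)
os.remove(path)
print(cloud.shape)  # (2, 4)
```

The same pattern (read a flat float32 buffer, reshape by the per-point field count) applies to each agent's sweep in a multi-agent frame; only the number of fields may differ between datasets.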

Variants: V2X-SIM

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task Model Paper Date
3D Object Detection QUEST QUEST: Query Stream for Practical … 2023-08-03
3D Object Detection Where2comm Where2comm: Communication-Efficient Collaborative Perception via … 2022-09-26
3D Object Detection V2X-ViT V2X-ViT: Vehicle-to-Everything Cooperative Perception with … 2022-03-20
3D Object Detection DiscoNet Learning Distilled Collaboration Graph for … 2021-11-01
3D Object Detection V2VNet V2VNet: Vehicle-to-Vehicle Communication for Joint … 2020-08-17

Research Papers

Recent papers with results on this dataset: