RoCoG-v2

Robot Control Gestures

Dataset Information
Modalities: Videos
Introduced: 2023
License: Unknown
Homepage:
Overview

RoCoG-v2 (Robot Control Gestures) is a dataset intended to support the study of synthetic-to-real and ground-to-air video domain adaptation. It contains over 100K synthetically generated videos of human avatars performing gestures from seven classes, along with videos of real humans performing the same gestures, recorded from both ground and air perspectives.
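Because the data is split by domain (synthetic vs. real) and by viewpoint (ground vs. air), a common setup is to train on one split and evaluate on another. The sketch below shows one way such a download might be indexed for a synthetic-to-real experiment; the directory layout (domain/class/clip.mp4) and the class names are illustrative assumptions, not the official release structure.

```python
# Minimal sketch of indexing a RoCoG-v2-style download for training.
# The directory layout (domain/class/clip.mp4) and the class names below
# are assumptions for illustration; check the official release for the
# actual structure and gesture vocabulary.
from pathlib import Path

GESTURE_CLASSES = [
    "advance", "attention", "follow_me", "halt",
    "move_forward", "move_in_reverse", "rally",
]  # placeholder names; the dataset defines seven gesture classes

def index_clips(root: str, domain: str):
    """Return (video_path, class_index) pairs for one domain split
    (e.g. synthetic, real ground-view, or real air-view)."""
    samples = []
    for idx, cls in enumerate(GESTURE_CLASSES):
        for clip in sorted((Path(root) / domain / cls).glob("*.mp4")):
            samples.append((clip, idx))
    return samples

if __name__ == "__main__":
    # Typical synthetic-to-real setup: train on synthetic clips,
    # evaluate on real air-view clips.
    train = index_clips("RoCoG-v2", "synthetic")
    test = index_clips("RoCoG-v2", "real_air")
    print(f"{len(train)} synthetic clips, {len(test)} real air-view clips")
```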

Variants: RoCoG-v2

Associated Benchmarks

This dataset is used in 1 benchmark: Action Recognition.

Recent Benchmark Submissions

Task: Action Recognition
Model: AZTR
Paper: AZTR: Aerial Video Action Recognition …
Date: 2023-03-02

Research Papers

Recent papers with results on this dataset: