Robot Control Gestures
RoCoG-v2 (Robot Control Gestures) is a dataset intended to support the study of synthetic-to-real and ground-to-air video domain adaptation. It contains over 100K synthetically generated videos of human avatars performing gestures from seven (7) classes. It also provides videos of real humans performing the same gestures from both ground and air perspectives.
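As a minimal sketch (not the dataset's official tooling), the two cross-domain protocols described above could be organized as follows. The domain labels (`synthetic`, `real_ground`, `real_air`), sample tuples, and gesture names are hypothetical placeholders, not the dataset's actual identifiers:

```python
from collections import defaultdict

def split_by_domain(samples):
    """Group (clip_id, gesture_class, domain) tuples by domain.

    `domain` is one of "synthetic", "real_ground", or "real_air"
    (hypothetical labels mirroring the three video sources above).
    """
    groups = defaultdict(list)
    for clip_id, gesture, domain in samples:
        groups[domain].append((clip_id, gesture))
    return dict(groups)

def adaptation_split(groups, protocol):
    """Return (source, target) sample lists for a given protocol.

    "syn2real":    train on synthetic, evaluate on real ground video.
    "ground2air":  train on real ground video, evaluate on real air video.
    """
    if protocol == "syn2real":
        return groups.get("synthetic", []), groups.get("real_ground", [])
    if protocol == "ground2air":
        return groups.get("real_ground", []), groups.get("real_air", [])
    raise ValueError(f"unknown protocol: {protocol}")

# Hypothetical sample list; real clips would be enumerated from disk.
samples = [
    ("clip_0001", "gesture_0", "synthetic"),
    ("clip_0002", "gesture_0", "real_ground"),
    ("clip_0003", "gesture_1", "real_air"),
]
groups = split_by_domain(samples)
source, target = adaptation_split(groups, "ground2air")
```

Keeping the domain split explicit like this makes it easy to swap protocols without touching the training loop.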
Variants: RoCoG-v2
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Action Recognition | AZTR | AZTR: Aerial Video Action Recognition … | 2023-03-02 |