The UTD-MHAD dataset consists of 27 actions performed by 8 subjects. Each subject performed each action 4 times; after removing three corrupted sequences from the full 864, 861 action sequences remain. Four modalities were recorded for each sequence: RGB video, depth, skeleton joint positions, and inertial sensor signals (3-axis acceleration and 3-axis rotation from a wearable sensor).
Source: Skepxels: Spatio-temporal Image Representation of Human Skeleton Joints for Action Recognition
Image Source: https://www.researchgate.net/figure/Sample-shots-of-the-27-actions-in-the-UTD-MHAD-database_fig12_283090976
Variants: UTD-MHAD
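The skeleton and inertial streams are distributed as MATLAB `.mat` files, one file per sequence, named by action, subject, and trial. Below is a minimal Python sketch for loading one skeleton sequence with `scipy.io.loadmat`; the `a{action}_s{subject}_t{trial}_skeleton.mat` naming and the `d_skel` variable key follow the official release, but verify both against your local copy, and the directory layout in `root` is a placeholder.

```python
# Minimal sketch: load one UTD-MHAD skeleton sequence from the .mat
# distribution. Assumes the a{action}_s{subject}_t{trial}_skeleton.mat
# file naming and the 'd_skel' variable key; verify against your copy.
import numpy as np
from scipy.io import loadmat

def load_skeleton(root, action, subject, trial):
    """Return a (num_frames, 20, 3) array of Kinect joint positions."""
    path = f"{root}/a{action}_s{subject}_t{trial}_skeleton.mat"
    mat = loadmat(path)
    skel = mat["d_skel"]                  # (20 joints, 3 coords, num_frames)
    return np.transpose(skel, (2, 0, 1))  # -> (frames, joints, coords)

# Example: first trial of action 1 performed by subject 1
# frames = load_skeleton("UTD-MHAD/Skeleton", 1, 1, 1)
# print(frames.shape)
```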
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Action Recognition | Action Machine (RGB only) | Action Machine: Rethinking Action Recognition … | 2018-12-14 |