UTD-MHAD

Dataset Information
Modalities
Images, Videos
Introduced
2015
License
Unknown
Homepage

Overview

The UTD-MHAD dataset consists of 27 different actions performed by 8 subjects. Each subject performed each action 4 times; after removal of three corrupted sequences, the dataset contains 861 action sequences in total. Four modalities were recorded for each sequence: RGB video, depth maps, skeleton joint positions, and inertial sensor signals.
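Since every sequence is indexed by action, subject, and trial, a small helper for parsing file names is often the first step when working with the data. The sketch below assumes the commonly used naming convention `a<action>_s<subject>_t<trial>_<modality>.mat` (or `.avi` for RGB); verify the exact convention against the downloaded archive before relying on it.

```python
import re

# Assumed naming convention: "a1_s1_t1_inertial.mat", where the fields
# encode action (a1-a27), subject (s1-s8), and trial (t1-t4). The final
# field names the modality (color, depth, skeleton, inertial).
FILENAME_RE = re.compile(r"a(\d+)_s(\d+)_t(\d+)_(\w+)\.(mat|avi)$")

def parse_utd_mhad_name(filename):
    """Return (action, subject, trial, modality) parsed from a file name."""
    m = FILENAME_RE.search(filename)
    if m is None:
        raise ValueError(f"unrecognized UTD-MHAD file name: {filename}")
    action, subject, trial = (int(g) for g in m.groups()[:3])
    return action, subject, trial, m.group(4)

# Example: a depth-map file for action 7, subject 3, trial 2
print(parse_utd_mhad_name("a7_s3_t2_depth.mat"))  # (7, 3, 2, 'depth')
```

The `.mat` files themselves can then be read with `scipy.io.loadmat`; the internal variable names depend on the modality, so inspect one file per modality first.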

Source: Skepxels: Spatio-temporal Image Representation of Human Skeleton Joints for Action Recognition
Image Source: https://www.researchgate.net/figure/Sample-shots-of-the-27-actions-in-the-UTD-MHAD-database_fig12_283090976

Variants: UTD-MHAD

Associated Benchmarks

This dataset is used in 1 benchmark.

Recent Benchmark Submissions

Task                Model                      Paper                                            Date
Action Recognition  Action Machine (RGB only)  Action Machine: Rethinking Action Recognition …  2018-12-14

Research Papers

Recent papers with results on this dataset: