The PKU-MMD dataset is a large-scale skeleton-based action detection dataset. It contains 1,076 long untrimmed video sequences performed by 66 subjects in three camera views. 51 action categories are annotated, resulting in almost 20,000 action instances and 5.4 million frames in total. Similar to NTU RGB+D, two evaluation protocols are recommended: cross-subject and cross-view.
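The following is a minimal, hypothetical sketch of how the two evaluation protocols could be applied when partitioning samples into train and test sets. The sample-ID convention (`"<sequence>-<view>"`), the `subject_of` metadata mapping, and the `TRAIN_SUBJECTS`/`TRAIN_VIEWS` sets are all illustrative assumptions, not the official split definitions from the PKU-MMD release.

```python
# Illustrative sketch of cross-subject vs. cross-view splitting.
# Sample IDs, subject IDs, and view labels below are placeholders;
# the official splits are defined by the dataset release.

from typing import Dict, List, Tuple

TRAIN_SUBJECTS = {1, 2, 3, 5, 8}   # hypothetical training-subject IDs
TRAIN_VIEWS = {"L", "R"}           # cross-view: train on two views, test on the third

def split_samples(
    sample_ids: List[str],
    subject_of: Dict[str, int],
    protocol: str = "cross-subject",
) -> Tuple[List[str], List[str]]:
    """Partition samples into (train, test) under one of the two protocols."""
    train, test = [], []
    for sid in sample_ids:
        seq, view = sid.rsplit("-", 1)
        if protocol == "cross-subject":
            in_train = subject_of[seq] in TRAIN_SUBJECTS
        elif protocol == "cross-view":
            in_train = view in TRAIN_VIEWS
        else:
            raise ValueError(f"unknown protocol: {protocol}")
        (train if in_train else test).append(sid)
    return train, test

if __name__ == "__main__":
    # Toy example: one sequence seen from three views plus one extra sequence.
    ids = ["0001-L", "0001-M", "0001-R", "0002-L"]
    subjects = {"0001": 1, "0002": 9}
    print(split_samples(ids, subjects, protocol="cross-view"))
```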
Source: Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation
Image Source: https://www.icst.pku.edu.cn/struct/Projects/PKUMMD.html
Variants: PKU-MMD
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Action Recognition In Videos | EPAM-Net | EPAM-Net: An Efficient Pose-driven Attention-guided … | 2024-08-10 |
| Action Recognition In Videos | DVANet (RGB only) | DVANet: Disentangling View and Action … | 2023-12-10 |
Recent papers with results on this dataset: