Okutama-Action is a video dataset for aerial-view concurrent human action detection. It consists of 43 minute-long, fully annotated sequences covering 12 action classes. Okutama-Action features many challenges missing in earlier datasets, including dynamic transitions of actions, significant changes in scale and aspect ratio, abrupt camera movement, and multi-labeled actors.
Source: Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
Variants: Okutama-Action
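To illustrate what "multi-labeled actors" means in practice, the sketch below models a single actor in a single frame carrying several concurrent action labels and encodes them as a 12-way multi-hot vector, which is how multi-label action detection targets are commonly represented. This is a minimal illustration only: the class names, the `ActorAnnotation` structure, and the bounding-box field layout are assumptions for the example, not the dataset's official annotation format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

# Illustrative 12-class list; the exact names and ordering are assumptions,
# not the official Okutama-Action label file specification.
ACTION_CLASSES = [
    "Walking", "Running", "Standing", "Sitting", "Lying",
    "Carrying", "Pushing/Pulling", "Reading", "Drinking",
    "Calling", "Hand Shaking", "Hugging",
]
CLASS_TO_IDX = {name: i for i, name in enumerate(ACTION_CLASSES)}


@dataclass
class ActorAnnotation:
    """One actor in one frame: a bounding box plus one or more action labels."""
    track_id: int
    frame: int
    bbox: Tuple[int, int, int, int]          # (xmin, ymin, xmax, ymax) in pixels
    actions: List[str] = field(default_factory=list)

    def multi_hot(self) -> np.ndarray:
        """Encode the actor's concurrent actions as a 12-dim multi-hot vector."""
        vec = np.zeros(len(ACTION_CLASSES), dtype=np.float32)
        for action in self.actions:
            vec[CLASS_TO_IDX[action]] = 1.0
        return vec


# Example: one actor walking while carrying an object (a multi-labeled actor).
ann = ActorAnnotation(track_id=3, frame=120, bbox=(410, 220, 450, 300),
                      actions=["Walking", "Carrying"])
print(ann.multi_hot())
```

A multi-hot target like this is what distinguishes concurrent action detection from single-label action recognition: a detector must predict a per-actor set of labels rather than a single class.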
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Action Recognition | PLAR with bbox (Ours) | SCP: Soft Conditional Prompt Learning … | 2023-05-21 |
| Action Recognition | PLAR without bbox (Ours) | SCP: Soft Conditional Prompt Learning … | 2023-05-21 |