InterHuman is a multimodal dataset of diverse two-person interactions. It consists of about 107M frames with accurate skeletal motions and 16,756 natural language descriptions.
Source: InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions
Image Source: GitHub Repo: InterGen
Variants: InterHuman
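To make the dataset's composition concrete, below is a minimal sketch of how a single two-person interaction clip could be represented in code. This is an illustrative assumption, not the dataset's actual schema: the class name, field names, joint count, and frame rate are all hypothetical placeholders; consult the official InterGen repository for the real data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class InterHumanSample:
    """Hypothetical container for one two-person interaction clip (not the official schema)."""
    motion_a: np.ndarray      # (num_frames, num_joints, 3) skeletal motion of person A
    motion_b: np.ndarray      # (num_frames, num_joints, 3) skeletal motion of person B
    descriptions: list[str]   # natural language descriptions of the interaction
    fps: int = 30             # assumed frame rate; check the official release

# Example: a dummy 120-frame clip with 22 joints per person
sample = InterHumanSample(
    motion_a=np.zeros((120, 22, 3)),
    motion_b=np.zeros((120, 22, 3)),
    descriptions=["Two people shake hands."],
)
print(sample.motion_a.shape, len(sample.descriptions))
```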
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Motion Synthesis | InterMask | InterMask: 3D Human Interaction Generation … | 2024-10-13 |
| Motion Synthesis | FreeMotion | FreeMotion: A Unified Framework for … | 2024-05-24 |
| Motion Synthesis | in2IN | in2IN: Leveraging individual Information to … | 2024-04-15 |
| Motion Synthesis | MoMat-MoGen | Digital Life Project: Autonomous 3D … | 2023-12-07 |
| Motion Synthesis | InterGen | InterGen: Diffusion-based Multi-human Motion Generation … | 2023-04-12 |
| Motion Synthesis | ComMDM | Human Motion Diffusion as a … | 2023-03-02 |
| Motion Synthesis | MDM | Human Motion Diffusion Model | 2022-09-29 |
| Motion Synthesis | TEMOS | TEMOS: Generating diverse human motions … | 2022-04-25 |