Vision-and-Language Navigation in Continuous Environments
Vision-and-Language Navigation in Continuous Environments (VLN-CE) is an instruction-guided navigation task featuring crowdsourced instructions, realistic environments, and unconstrained agent navigation. The dataset consists of 4,475 trajectories converted from the Room-to-Room train and validation splits. Each trajectory is paired with multiple natural language instructions from Room-to-Room and a pre-computed shortest path that follows the trajectory's waypoints via low-level actions.
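To make the episode format above more concrete, the sketch below shows how converted trajectories might be loaded and inspected in Python. The file name, field names (`episode_id`, `instruction`, `reference_path`), and the discrete low-level action set are assumptions for illustration, not the dataset's exact schema.

```python
# Minimal sketch of loading VLN-CE-style episode records.
# Field names, the file layout, and the action set are assumptions for
# illustration; consult the dataset's own documentation for the real schema.
import gzip
import json
from typing import Dict, List

# Hypothetical discrete low-level actions used to follow the shortest path.
ACTIONS = ["STOP", "MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT"]


def load_episodes(path: str) -> List[Dict]:
    """Load episode records from a gzipped JSON file (assumed layout)."""
    with gzip.open(path, "rt") as f:
        return json.load(f)["episodes"]


def summarize(episode: Dict) -> str:
    """Return a one-line summary of an episode for quick inspection."""
    instruction = episode["instruction"]["instruction_text"]
    n_waypoints = len(episode["reference_path"])
    return f"{episode['episode_id']}: {n_waypoints} waypoints, \"{instruction}\""


if __name__ == "__main__":
    episodes = load_episodes("train.json.gz")  # hypothetical file name
    for ep in episodes[:3]:
        print(summarize(ep))
```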
Variants: VLN-CE
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Image Generation | GAUDI | GAUDI: A Neural Architect for … | 2022-07-27 |
| Image Generation | GSN | GAUDI: A Neural Architect for … | 2022-07-27 |
| Image Generation | GRAF | GAUDI: A Neural Architect for … | 2022-07-27 |
| Image Generation | π-GAN | GAUDI: A Neural Architect for … | 2022-07-27 |
Recent papers with results on this dataset: