RLBench

Dataset Information

  • Modalities: Environment
  • Introduced: 2019
  • Homepage:
Overview

RLBench is an ambitious large-scale benchmark and learning environment for vision-guided manipulation research, spanning reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and, in particular, few-shot learning.

Variants: RLBench, The COLOSSEUM, GEMBench

Associated Benchmarks

This dataset is used in 1 benchmark:

  • Robot Manipulation

Recent Benchmark Submissions

| Task | Model | Paper | Date |
| --- | --- | --- | --- |
| Robot Manipulation | Mini Diffuser | Mini Diffuser: Fast Multi-task Diffusion … | 2025-05-14 |
| Robot Manipulation | ARP+ | Autoregressive Action Sequence Learning for … | 2024-10-04 |
| Robot Manipulation | 3D-LOTUS | Towards Generalizable Vision-Language Robotic Manipulation: … | 2024-10-02 |
| Robot Manipulation | RVT-2 | RVT-2: Learning Precise Manipulation from … | 2024-06-12 |
| Robot Manipulation | SAM-E | SAM-E: Leveraging Visual Foundation Model … | 2024-05-30 |
| Robot Manipulation | 3D Diffuser Actor | 3D Diffuser Actor: Policy Diffusion … | 2024-02-18 |
| Robot Manipulation | PolarNet | PolarNet: 3D Point Clouds for … | 2023-09-27 |
| Robot Manipulation | Act3D | Act3D: 3D Feature Field Transformers … | 2023-06-30 |
| Robot Manipulation | RVT | RVT: Robotic View Transformer for … | 2023-06-26 |
| Robot Manipulation | PerAct (Evaluated in RVT) | Perceiver-Actor: A Multi-Task Transformer for … | 2022-09-12 |
| Robot Manipulation | PerAct | Perceiver-Actor: A Multi-Task Transformer for … | 2022-09-12 |
| Robot Manipulation | Image-BC ViT | Perceiver-Actor: A Multi-Task Transformer for … | 2022-09-12 |
| Robot Manipulation | Image-BC CNN | Perceiver-Actor: A Multi-Task Transformer for … | 2022-09-12 |
| Robot Manipulation | Hiveformer | Instruction-driven history-aware policies for robotic … | 2022-09-11 |
| Robot Manipulation | Auto-λ | Auto-Lambda: Disentangling Dynamic Task Relationships | 2022-02-07 |
| Robot Manipulation | C2FARM-BC (Evaluated in PerAct) | Coarse-to-Fine Q-attention: Efficient Learning for … | 2021-06-23 |

Research Papers

Recent papers with results on this dataset: