(2025)
This paper introduces Sparse-Reg, a sparsity-based regularization technique aimed at improving sample complexity in offline reinforcement learning (RL) when only limited training data are available. The authors examine the challenges posed by small datasets in offline RL, showing that existing algorithms often overfit and consequently perform poorly. Sparse-Reg mitigates this overfitting by inducing sparsity in the neural network parameters, allowing the model to capture essential patterns while still generalizing beyond the training data. In experiments on continuous control tasks from the MuJoCo-based D4RL benchmark, the authors show that Sparse-Reg outperforms state-of-the-art baselines across a range of sample sizes, demonstrating its effectiveness in limited-data settings. The work is a significant step toward making offline RL applicable in real-world scenarios where collecting high-quality data is expensive or impractical.
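To make the core idea concrete, the sketch below shows one common way to induce parameter sparsity in an offline RL value network: an L1 penalty added to the temporal-difference loss. The `Critic` architecture, the coefficient `l1_coef`, and the choice of a plain L1 penalty are illustrative assumptions for this summary, not the paper's exact Sparse-Reg formulation.

```python
# Minimal sketch (not the authors' code): a critic loss with a sparsity-inducing
# L1 penalty on the network weights. Sizes and the l1_coef value are assumptions.
import torch
import torch.nn as nn


class Critic(nn.Module):
    """Simple Q-network for continuous control (state-action -> scalar value)."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def critic_loss_with_sparsity(critic, obs, act, td_target, l1_coef=1e-4):
    """TD (Bellman) regression loss plus an L1 penalty that drives many weights toward zero."""
    q_pred = critic(obs, act)
    td_loss = nn.functional.mse_loss(q_pred, td_target)
    l1_penalty = sum(p.abs().sum() for p in critic.parameters())
    return td_loss + l1_coef * l1_penalty
```

In this kind of setup, the penalty weight trades off fitting the offline dataset against keeping the network sparse; with small datasets, a stronger penalty typically reduces overfitting, which is the behavior the paper attributes to Sparse-Reg.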
This paper employs the following methods:
- Sparse-Reg (sparsity-based regularization applied to offline RL algorithms)
The following datasets were used in this research:
- D4RL benchmark (MuJoCo continuous control tasks)