Abstract

Offline Reinforcement Learning (RL) faces challenges such as distributional shift and unreliable value estimation, especially for out-of-distribution (OOD) actions. Existing uncertainty-based methods address these issues by penalizing the value function with an uncertainty estimate, but they typically require large ensembles of networks, which raises computational cost and can yield suboptimal results. In this paper, we introduce a novel strategy that employs diverse randomized value functions to estimate the posterior distribution of Q-values. This approach provides robust uncertainty quantification and estimates lower confidence bounds (LCB) of Q-values. By applying moderate value penalties to OOD actions, our method yields a provably pessimistic value estimate. We further emphasize diversity among the randomized value functions and introduce a diversity regularization method that improves efficiency by reducing the number of networks required. Together, these components enable reliable value estimation and efficient policy learning from offline data. Theoretical analysis shows that our method recovers the provably efficient LCB penalty under linear MDP assumptions. Extensive empirical results demonstrate that the proposed method significantly outperforms baseline methods in both performance and parametric efficiency.
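
To make the two main ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation: an ensemble of Q-networks whose mean-minus-deviation serves as an LCB estimate, together with one simple variance-based diversity regularizer. The class and function names (`QNetwork`, `lcb_q_value`, `diversity_regularizer`) and the hyperparameter `beta` are illustrative assumptions; the paper's randomized value functions and regularizer may differ in form.

```python
# Sketch only: LCB from an ensemble of Q-networks plus a diversity term.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """One member of the randomized Q-ensemble (illustrative architecture)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def lcb_q_value(q_ensemble, state, action, beta: float = 1.0) -> torch.Tensor:
    """LCB of Q(s, a): ensemble mean minus beta times ensemble std."""
    qs = torch.stack([q(state, action) for q in q_ensemble], dim=0)  # (N, B, 1)
    return qs.mean(dim=0) - beta * qs.std(dim=0)


def diversity_regularizer(q_ensemble, state, action) -> torch.Tensor:
    """Encourage disagreement among members by rewarding prediction variance
    (negated so that minimizing this term increases diversity). One simple
    choice; the paper's regularizer may be defined differently."""
    qs = torch.stack([q(state, action) for q in q_ensemble], dim=0)  # (N, B, 1)
    centered = qs - qs.mean(dim=0, keepdim=True)
    return -centered.pow(2).mean()
```

In such a setup, the LCB value would replace the raw Q-value in the policy-improvement and Bellman targets, so actions on which the ensemble disagrees (typically OOD actions) receive a larger penalty, while the diversity term keeps the ensemble informative with fewer networks.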
