Abstract

Purpose
Offline reinforcement learning (RL) acquires effective policies from large-scale, previously collected data. In some scenarios, however, collecting data is difficult because it is time-consuming, expensive or dangerous (e.g. health care, autonomous driving), which calls for a more sample-efficient offline RL method. The purpose of this study is to introduce an algorithm that samples high-value transitions from a prioritized buffer and samples uniformly from a normal experience buffer, improving the sample efficiency of offline RL and alleviating the "extrapolation error" that commonly arises in offline RL.

Design/methodology/approach
The authors propose a new experience replay architecture consisting of two experience replays, a prioritized experience replay and a normal experience replay, which supply samples for policy updates in different training phases. In the first training stage, the authors sample from the prioritized experience replay according to the calculated priority of each transition. In the second training stage, the authors sample uniformly from the normal experience replay. Both experience replays are initialized from the same offline data set.

Findings
The proposed method eliminates the out-of-distribution problem in the offline RL regime and promotes training by leveraging a new, efficient experience replay. The authors evaluate their method on the D4RL benchmark, and the results reveal that the algorithm achieves superior performance over a state-of-the-art offline RL algorithm. An ablation study shows that the authors' experience replay architecture plays an important role in improving final performance, data efficiency and training stability.

Research limitations/implications
Because of the additional prioritized experience replay, the proposed method increases the computational burden and risks shifting the data distribution due to the combined sampling strategy. Therefore, researchers are encouraged to investigate how to use the experience replay block more effectively and efficiently.

Practical implications
Offline RL is sensitive to the quality and coverage of the pre-collected data, which may not be easy to obtain for a specific environment, requiring practitioners to handcraft a behavior policy that interacts with the environment to gather data.

Originality/value
The proposed approach focuses on the experience replay architecture for offline RL and empirically demonstrates the algorithm's superiority in data efficiency and final performance over conservative Q-learning across diverse D4RL tasks. In particular, the authors compare different variants of their experience replay block, and the experiments show that the stage at which samples are drawn from the priority buffer plays an important role in the algorithm. The algorithm is easy to implement and can be combined with any Q-value approximation-based offline RL method with minor adjustments.
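
The dual-buffer, phase-based sampling described in the Design/methodology/approach section above lends itself to a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: the class name DualReplay, the phase_switch_step hyperparameter, the priority exponent alpha and the use of absolute TD error as the priority signal are all assumptions made for clarity; the abstract only states that priorities are computed per transition and that sampling switches from the prioritized buffer to uniform sampling in a later stage.

```python
import numpy as np

class DualReplay:
    """Sketch of a two-buffer sampling scheme: prioritized sampling during an
    early training phase, uniform sampling afterwards. Both views are
    initialized from the same offline dataset."""

    def __init__(self, dataset, phase_switch_step, alpha=0.6):
        # dataset: dict of numpy arrays, e.g. obs, actions, rewards, next_obs, dones
        self.dataset = dataset
        self.size = len(dataset["rewards"])
        self.phase_switch_step = phase_switch_step  # hypothetical hyperparameter
        self.alpha = alpha  # priority exponent, assumed as in standard PER
        # Priorities start uniform; refreshing them from TD errors is an
        # assumption -- the abstract does not pin down the priority signal.
        self.priorities = np.ones(self.size, dtype=np.float64)

    def update_priorities(self, indices, td_errors, eps=1e-6):
        # Refresh priorities of recently sampled transitions.
        self.priorities[indices] = np.abs(td_errors) + eps

    def sample(self, batch_size, step):
        if step < self.phase_switch_step:
            # Phase 1: prioritized sampling over the offline dataset.
            probs = self.priorities ** self.alpha
            probs /= probs.sum()
            idx = np.random.choice(self.size, batch_size, p=probs)
        else:
            # Phase 2: uniform sampling from the same offline data.
            idx = np.random.randint(0, self.size, batch_size)
        batch = {k: v[idx] for k, v in self.dataset.items()}
        return idx, batch
```

In this sketch, update_priorities would be called after each Q-update during the first phase with the sampled indices and the new TD errors; in the second phase, the stored priorities are simply ignored.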
