Abstract
Exploration in sparse-reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly, the intrinsic and extrinsic rewards are simply summed. However, intrinsic rewards are non-stationary, which contaminates the extrinsic environmental reward and changes the policy's optimization objective to maximizing the sum of intrinsic and extrinsic rewards. This can lead the agent to a mixture policy that pursues neither exploration nor task performance resolutely. This paper adopts a simple and generic perspective in which extrinsic and intrinsic rewards are explicitly disentangled. Through a multiple-sampling mechanism, our method, State Novelty Sampling Exploration (SNSE), decouples the intrinsic and extrinsic rewards so that each can play its own role: intrinsic rewards directly guide the agent toward novel samples during the exploration phase, while the policy optimization objective remains the maximization of extrinsic rewards. In sparse-reward environments, our experiments show that SNSE improves the efficiency of exploring unknown states and improves the final performance of the policy. Under dense rewards, SNSE does not bias policy optimization or cause performance loss.
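The abstract does not specify how the multiple-sampling mechanism or the novelty signal is implemented, so the following is only a minimal, speculative sketch of the general idea of decoupling the two reward streams. It assumes a count-based novelty proxy and hypothetical helpers (`policy_sample`, `visit_counts`); it is not the authors' implementation of SNSE.

```python
import numpy as np

def count_novelty(vector, visit_counts, precision=1):
    """Count-based novelty stand-in: rarely visited (discretised) vectors score higher."""
    key = tuple(np.round(np.asarray(vector, dtype=float), precision))
    return 1.0 / np.sqrt(visit_counts.get(key, 0) + 1.0)

def novelty_guided_action(policy_sample, state, visit_counts, num_candidates=8):
    """Draw several candidate actions from the stochastic policy and execute the one
    whose (state, action) pair is judged most novel. The intrinsic score only biases
    which sample is executed; it is never added to the extrinsic reward."""
    candidates = [policy_sample(state) for _ in range(num_candidates)]
    scores = [
        count_novelty(np.concatenate([np.atleast_1d(state), np.atleast_1d(a)]), visit_counts)
        for a in candidates
    ]
    return candidates[int(np.argmax(scores))]

def record_visit(vector, visit_counts, precision=1):
    """Update the visitation statistics used by the novelty proxy."""
    key = tuple(np.round(np.asarray(vector, dtype=float), precision))
    visit_counts[key] = visit_counts.get(key, 0) + 1

# Policy optimisation (e.g. any standard actor-critic update) would then use the
# extrinsic environment reward alone, leaving the optimisation objective unchanged.
```

Under this reading, the intrinsic signal influences only which samples the agent collects, so the data distribution is shaped by novelty while the learning target stays purely extrinsic.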