Abstract

Exploration in sparse-reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly, the intrinsic and extrinsic rewards are simply summed. However, intrinsic rewards are non-stationary, which contaminates the extrinsic environmental reward and changes the policy's optimization objective to maximizing the sum of intrinsic and extrinsic rewards. This can lead the agent to a mixture policy that pursues neither exploration nor task performance resolutely. This work adopts a simple and generic perspective in which extrinsic and intrinsic rewards are explicitly disentangled. Through a multiple sampling mechanism, our method, State Novelty Sampling Exploration (SNSE), decouples the intrinsic and extrinsic rewards so that each can play its own role: intrinsic rewards directly guide the agent toward novel samples during the exploration phase, while the policy optimization objective remains the maximization of extrinsic rewards. In sparse-reward environments, our experiments show that SNSE improves the efficiency of exploring unknown states and the final performance of the policy. Under dense rewards, SNSE does not bias the policy's optimization or cause performance loss.
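
To make the decoupling idea concrete, the toy sketch below (not taken from the paper) ranks sampled candidate actions by a count-based state-novelty score while updating a tabular Q-function from the extrinsic reward alone. The chain environment, the count-based novelty measure, the candidate count K, and all function names are illustrative assumptions rather than SNSE's actual algorithm.

    # Illustrative sketch only: intrinsic novelty steers which sampled action is
    # executed, but the learning update uses the extrinsic reward exclusively.
    import numpy as np

    rng = np.random.default_rng(0)

    N_STATES, N_ACTIONS, K = 20, 2, 4          # chain length, actions, candidates per step
    Q = np.zeros((N_STATES, N_ACTIONS))        # tabular action values (extrinsic only)
    visits = np.zeros(N_STATES)                # state visitation counts for novelty

    def step(s, a):
        """Sparse-reward chain: only reaching the far-right state pays off."""
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r_ext = 1.0 if s_next == N_STATES - 1 else 0.0
        return s_next, r_ext

    def novelty(s):
        """Count-based novelty: less-visited states score higher (assumed measure)."""
        return 1.0 / np.sqrt(visits[s] + 1.0)

    def select_action(s):
        """Sample K candidate actions and execute the one leading to the most novel
        successor. Here the toy dynamics stand in for whatever model or rollout a
        real agent would use; novelty never enters the learning target."""
        candidates = rng.integers(0, N_ACTIONS, size=K)
        scores = [novelty(step(s, a)[0]) for a in candidates]
        return int(candidates[int(np.argmax(scores))])

    for episode in range(200):
        s = 0
        for t in range(50):
            a = select_action(s)
            s_next, r_ext = step(s, a)
            visits[s_next] += 1
            # Q-learning target built from the extrinsic reward only.
            Q[s, a] += 0.1 * (r_ext + 0.95 * Q[s_next].max() - Q[s, a])
            s = s_next

    print("greedy value at the start state:", Q[0].max())

Because novelty only influences which candidate action is executed and never appears in the TD target, the learned values remain estimates of extrinsic return, mirroring the disentanglement described above.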
