Abstract

Reinforcement learning algorithms rely on carefully engineered reward functions that are extrinsic to the agent. However, environments with dense extrinsic rewards are rare, motivating the development of reward signals that are intrinsic to the agent. Curiosity is a successful class of intrinsic reward that uses prediction error as the reward signal. In prior work, the prediction problem used to generate intrinsic rewards is optimized in pixel space rather than in a learnable feature space, in order to avoid the randomness introduced by changing features. However, such methods ignore small but important elements of the state, such as those associated with the location of the game character, and therefore cannot produce intrinsic rewards that are accurate enough for efficient exploration. In this article, we first demonstrate the effectiveness of introducing pre-learned features into existing prediction-based exploration methods. We then design an attention-map mechanism that discretizes the learned features, which allows the features to be updated while reducing the impact of the randomness that feature learning introduces into the intrinsic rewards. We evaluate our method on video games from the standard Atari reinforcement learning benchmark and achieve clear improvements over random network distillation, one of the most advanced exploration methods, on almost all games.
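To make the prediction-error idea concrete, the sketch below illustrates a random-network-distillation-style intrinsic reward, the baseline referred to in the abstract: a frozen, randomly initialized target network and a trained predictor network, with the predictor's error on a state serving as the novelty bonus. This is only an illustration of the baseline, not the attention-based method proposed in the article; the network architecture, observation shape, and optimizer settings are assumptions.

```python
# Minimal sketch of a prediction-error intrinsic reward in the style of
# random network distillation (RND). Illustrative only; sizes and
# hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small convolutional encoder mapping an 84x84 grayscale frame to features."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, out_dim),
        )

    def forward(self, obs):
        return self.net(obs)

target = FeatureNet()      # fixed, randomly initialized target network
predictor = FeatureNet()   # trained to imitate the target's outputs
for p in target.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs):
    """Prediction error between predictor and frozen target acts as the intrinsic reward."""
    with torch.no_grad():
        tgt = target(obs)
    pred = predictor(obs)
    error = ((pred - tgt) ** 2).mean(dim=1)  # per-state novelty signal
    # Train the predictor on the same batch so frequently visited states become less novel.
    loss = error.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return error.detach()                    # added to the extrinsic reward during training

# Example: a batch of 4 observations shaped (batch, channel, height, width).
obs = torch.rand(4, 1, 84, 84)
print(intrinsic_reward(obs))
```

Because the target network is fixed, the prediction target for any given state never changes; the article's contribution is to retain this stability while working in a learned, attention-discretized feature space instead of raw pixels.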
