Abstract

Improving the generalization ability of an agent is an important and challenging task in deep reinforcement learning (RL). Procedurally generated environments are an important benchmark for testing generalization in deep RL. In this benchmark, each game consists of multiple levels, and each level is an algorithmically created environment instance with a unique configuration of its factors of variation. Existing methods for improving the generalization of RL agents (e.g., regularization, data augmentation) do not learn representations that are invariant across multiple levels well. Moreover, existing methods that learn invariant representations in RL via adversarial training can only capture invariant information between two levels. To address these problems, we propose Adversarial Discriminative Feature Separate (ADFS). First, ADFS introduces a new discriminator that distinguishes whether two observations belong to the same level, which encourages the policy encoder to learn information that is invariant across multiple levels. Second, ADFS separates the representation of each observation into level-invariant features and level-discriminative features, which corrects the optimization direction of the discriminator. The discriminative features are learned with a contrastive objective that increases their similarity for observations from the same level and decreases it for observations from different levels. Experimental results demonstrate that our method is competitive with existing state-of-the-art methods on the Procgen Benchmark.
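
Since the abstract only sketches the two components, the following is a minimal PyTorch-style sketch of how a same-level discriminator and the invariant/discriminative feature split could look. All module names, dimensions, and loss forms here (`Encoder`, `SameLevelDiscriminator`, `feat_dim`, the cosine-based contrastive loss) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of ADFS's two components (all names and shapes assumed):
# (1) a discriminator trained to predict whether two observations share a level,
#     played adversarially against the policy encoder's invariant features, and
# (2) a split of each embedding into level-invariant / level-discriminative halves.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Policy encoder; splits its output into invariant and discriminative parts."""
    def __init__(self, obs_dim: int, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * feat_dim))

    def forward(self, obs):
        z = self.net(obs)
        z_inv, z_disc = z.chunk(2, dim=-1)  # level-invariant / level-discriminative
        return z_inv, z_disc

class SameLevelDiscriminator(nn.Module):
    """Predicts whether two feature vectors come from the same level."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=-1)).squeeze(-1)  # logit

def discriminator_loss(disc, z1, z2, same_level):
    """BCE on 'same level?' labels (1.0 = same level, 0.0 = different levels)."""
    return F.binary_cross_entropy_with_logits(disc(z1, z2), same_level)

def encoder_adversarial_loss(disc, z_inv1, z_inv2, same_level):
    """Encoder tries to fool the discriminator, so its invariant features
    carry no level-identity information."""
    return -discriminator_loss(disc, z_inv1, z_inv2, same_level)

def separation_loss(z_disc1, z_disc2, same_level):
    """Contrastive objective on discriminative features: pull together pairs
    from the same level, push apart pairs from different levels."""
    sim = F.cosine_similarity(z_disc1, z_disc2, dim=-1)
    return (same_level * (1 - sim) + (1 - same_level) * sim.clamp(min=0)).mean()
```

One plausible reading of the design, reflected in this sketch, is that splitting the embedding lets the adversarial gradient shape only the invariant half, while the contrastive term gives the discriminative half an explicit level-identity signal, so the two objectives do not fight over the same features.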
