Abstract

Data augmentation is a promising technique for improving exploration and convergence speed in deep reinforcement learning. In this work, we propose a data augmentation framework based on generative models for creating entirely novel states and increasing data diversity. For this purpose, a diffusion model learns the distribution of the originally collected states and is used to generate artificial states, while an additional model is trained to predict the action executed between two consecutive states. These models are combined to create synthetic data for cases of high and low immediate rewards, which are encountered less frequently during the agent's interaction with the environment. During training, the synthetic samples are mixed with the actually observed data to speed up agent learning. The proposed methodology is tested on the Atari 2600 framework, producing realistic and diverse synthetic data that improve training in most cases. Specifically, the agent is evaluated on three heterogeneous games, achieving a reward increase of up to 31%, although the results indicate performance variance among the different environments. The augmentation models are independent of the learning process and can be integrated into different algorithms, as well as different environments, with minor adaptations.
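The abstract does not give implementation details, so the following is only a hypothetical sketch of the mixing step it describes: a trained diffusion model supplies synthetic state pairs, an inverse-dynamics model labels them with actions, and a fraction of each training batch is replaced with these synthetic transitions. All names (`diffusion_sample`, `predict_action`, `mixed_batch`, the `synth_ratio` and state shape) are assumptions, not the authors' API.

```python
# Hypothetical sketch, not the authors' code: mixing diffusion-generated
# transitions with real replay data, assuming the augmentation models are
# already trained elsewhere.
import numpy as np

rng = np.random.default_rng(0)

def diffusion_sample(n, state_shape=(84, 84, 4)):
    # Placeholder: a trained diffusion model would return n pairs of
    # consecutive synthetic states drawn from the learned state distribution.
    s = rng.random((n, *state_shape), dtype=np.float32)
    s_next = rng.random((n, *state_shape), dtype=np.float32)
    return s, s_next

def predict_action(s, s_next, n_actions=6):
    # Placeholder: the inverse-dynamics model infers the action executed
    # between two consecutive states.
    return rng.integers(0, n_actions, size=len(s))

def mixed_batch(real_batch, synth_ratio=0.25, reward_value=1.0):
    """Replace a fraction of a real batch with synthetic transitions
    targeting rare high- (or low-) immediate-reward cases."""
    states, actions, rewards, next_states = real_batch
    n_synth = int(len(states) * synth_ratio)
    if n_synth == 0:
        return real_batch
    s, s_next = diffusion_sample(n_synth, state_shape=states.shape[1:])
    a = predict_action(s, s_next)
    r = np.full(n_synth, reward_value, dtype=np.float32)
    idx = rng.choice(len(states), size=n_synth, replace=False)
    states[idx], actions[idx] = s, a
    rewards[idx], next_states[idx] = r, s_next
    return states, actions, rewards, next_states

# Usage example with a random "real" batch of 32 Atari-style transitions.
batch = (
    rng.random((32, 84, 84, 4), dtype=np.float32),
    rng.integers(0, 6, size=32),
    rng.random(32).astype(np.float32),
    rng.random((32, 84, 84, 4), dtype=np.float32),
)
print(mixed_batch(batch)[2][:8])  # rewards after mixing
```

Because the augmentation models only produce transitions, a scheme like this stays independent of the learning algorithm, which matches the abstract's claim that the framework can be integrated into different algorithms with minor adaptations.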

