Abstract

Frame skipping is widely used when training agents with deep reinforcement learning (DRL): the action chosen by the agent is repeated for a fixed number of frames. This makes action decisions sparse and reduces the number of inferences, which improves computational efficiency. However, the intermediate frames produced during frame skipping are conventionally hidden from the agent and discarded, so they never influence its action decisions. This can hurt the performance of trained agents in settings where performance matters more than computational efficiency. To alleviate this issue, we propose a new framework, called "hidden observation", that exploits these hidden frames to improve the trained agent's performance. The proposed framework retrieves all observations hidden during frame skipping, then combines batch inference with an exponentially weighted sum to compute and merge the network outputs for these hidden observations. Experiments validate the effectiveness of the proposed method in terms of both performance and stability, with only a marginal increase in computation.
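To make the mechanism concrete, the sketch below illustrates one plausible reading of the pipeline: step the environment under frame skipping while retaining every intermediate frame, run a single batched forward pass over those frames, and merge the per-frame outputs with an exponentially weighted sum. The names `policy_net`, `env`, `num_skip`, and `decay` are illustrative assumptions, not the paper's actual interface.

import numpy as np
import torch

def act_with_hidden_observations(policy_net, env, action, num_skip=4, decay=0.8):
    """Repeat `action` for `num_skip` frames, keeping the normally
    discarded intermediate frames so they inform the next decision.

    A minimal sketch under assumed interfaces: `env.step` follows the
    classic Gym 4-tuple API, and `policy_net` maps a batch of
    observations to per-action scores.
    """
    hidden_obs, total_reward, done = [], 0.0, False
    for _ in range(num_skip):
        obs, reward, done, _ = env.step(action)  # frames usually hidden by skipping
        hidden_obs.append(obs)
        total_reward += reward
        if done:
            break

    # Batch inference: one forward pass over all retained hidden observations.
    batch = torch.as_tensor(np.stack(hidden_obs), dtype=torch.float32)
    with torch.no_grad():
        outputs = policy_net(batch)  # shape: (num_frames, num_actions)

    # Exponentially weighted sum: more recent frames receive larger weight
    # (the last frame gets decay**0 = 1, the oldest decay**(n-1)).
    n = len(hidden_obs)
    weights = decay ** torch.arange(n - 1, -1, -1, dtype=torch.float32)
    weights = weights / weights.sum()
    merged = (weights.unsqueeze(1) * outputs).sum(dim=0)

    next_action = int(merged.argmax())
    return next_action, total_reward, done

Relative to plain frame skipping, the only extra cost here is the single batched forward pass over the skipped frames, which is consistent with the abstract's claim of a marginal increase in computation.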
