Abstract

In this article, we propose a method for adapting the stepsize parameters used in reinforcement learning in non-stationary environments. When the environment is non-stationary, the learning agent must continually adapt learning parameters such as the stepsize to changes in the environment. We present several theorems on the higher-order derivatives, with respect to the stepsize parameter, of the exponential moving average, which is the base schema underlying major reinforcement learning methods. We also derive a systematic mechanism for computing these derivatives recursively. Based on this mechanism, we construct a precise and flexible adaptation method for the stepsize parameter that maximizes a given criterion. The proposed method is validated by several experimental results.
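To make the recursive computation alluded to above concrete, the following is a minimal sketch, assuming the standard exponential-moving-average update x ← x + α(r − x). Under that assumption, the first and second derivatives of the estimate with respect to the stepsize α obey simple recursions and can be maintained alongside the estimate itself. This is a hypothetical illustration of that recursion only, not the paper's actual adaptation algorithm or its optimization criterion; the class and parameter names are invented for the example.

```python
# Sketch (assumed formulation, not the paper's algorithm): an exponential
# moving average (EMA) together with recursively maintained first- and
# second-order derivatives of its estimate with respect to the stepsize alpha.
#
# The update x <- x + alpha * (r - x) implies
#   dx/dalpha   <- (1 - alpha) * dx/dalpha + (r - x)
#   d2x/dalpha2 <- (1 - alpha) * d2x/dalpha2 - 2 * dx/dalpha
# where the quantities on the right-hand side are pre-update values.

class EMAWithStepsizeDerivatives:
    def __init__(self, alpha: float, x0: float = 0.0):
        self.alpha = alpha   # stepsize parameter
        self.x = x0          # current EMA estimate
        self.d1 = 0.0        # first derivative of x w.r.t. alpha
        self.d2 = 0.0        # second derivative of x w.r.t. alpha

    def update(self, r: float) -> float:
        """Incorporate a new observation r and propagate the derivatives."""
        error = r - self.x
        # Propagate derivatives using pre-update values (order matters).
        self.d2 = (1.0 - self.alpha) * self.d2 - 2.0 * self.d1
        self.d1 = (1.0 - self.alpha) * self.d1 + error
        # Standard EMA update of the estimate itself.
        self.x += self.alpha * error
        return self.x


if __name__ == "__main__":
    import random

    ema = EMAWithStepsizeDerivatives(alpha=0.1)
    for t in range(1000):
        # A non-stationary target: the mean shifts halfway through.
        mean = 1.0 if t < 500 else 3.0
        ema.update(mean + random.gauss(0.0, 0.5))
    print(ema.x, ema.d1, ema.d2)
```

Such derivatives could, for instance, feed a gradient-based rule that adjusts α online toward some performance criterion; the specific criterion and update rule used in the paper are not reproduced here.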
