Abstract

The optimistic nature of the Q-learning target leads to an overestimation bias, which is an inherent problem associated with standard Q-learning. Such a bias fails to account for the possibility of low returns, particularly in risky scenarios. However, the existence of biases, whether overestimation or underestimation, is not necessarily undesirable. In this paper, we analytically examine the utility of biased learning, and show that specific types of biases may be preferable, depending on the scenario. Based on this finding, we design a novel reinforcement learning algorithm, Balanced Q-learning, in which the target is modified to be a convex combination of a pessimistic and an optimistic term, whose associated weights are determined online, analytically. Such a balanced target inherently promotes risk-averse behavior, which we examine through the lens of the agent's exploration. We prove the convergence of this algorithm in a tabular setting, and empirically demonstrate its consistently good learning performance in various environments.
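
To make the idea of a balanced target concrete, the following is a minimal tabular sketch of the kind of update the abstract describes. The specific pessimistic and optimistic bootstrap terms (a min and a max over next-state action values) and the fixed mixing weight `beta` are illustrative assumptions; the paper's algorithm determines the weight online and analytically, which is not reproduced here.

```python
import numpy as np

# Sketch of a balanced Q-learning target: a convex combination of a pessimistic
# and an optimistic bootstrap term. The choice of max/min as the two terms and
# the fixed weight `beta` are assumptions made for illustration only.

def balanced_target(Q, reward, next_state, gamma=0.99, beta=0.5):
    """Target mixing an optimistic and a pessimistic bootstrap value.

    beta: weight on the optimistic term (fixed here for illustration; the
    algorithm described in the paper computes this weight online).
    """
    optimistic = np.max(Q[next_state])   # standard Q-learning bootstrap (optimistic)
    pessimistic = np.min(Q[next_state])  # an illustrative pessimistic bootstrap
    return reward + gamma * (beta * optimistic + (1.0 - beta) * pessimistic)

def balanced_q_update(Q, state, action, reward, next_state, alpha=0.1, **kwargs):
    """One tabular update of Q[state, action] toward the balanced target."""
    target = balanced_target(Q, reward, next_state, **kwargs)
    Q[state, action] += alpha * (target - Q[state, action])
    return Q
```

With `beta = 1` this reduces to the standard (optimistic) Q-learning update; smaller values of `beta` place more weight on the pessimistic term, which is the mechanism through which risk-averse behavior is promoted.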
