This paper reviews and discusses the relationship between Reinforcement Learning (RL) and the recently developed Dual Control for Exploitation and Exploration (DCEE). It is argued that there are two related but quite distinct approaches, namely control and machine learning, to tackling the intractability arising in optimal decision-making/control problems. In the control approach, the original (infinite-horizon) problems are approximated by finite-horizon problems and solved online, taking advantage of the availability of computing power. In the machine learning approach, the optimal solutions are approximated through iteration, or through (offline) training by trial when models are not available. When dealing with unknown environments, DCEE, as a technique developed from the control approach, can potentially solve problems similar to those addressed by RL while offering a number of advantages, most notably the ability to cope with uncertainty in environments/tasks, high learning efficiency through balancing exploitation and exploration, and the potential to establish formal properties such as stability. The links between DCEE and other relevant methods, such as dual control, Model Predictive Control and, in particular, Active Inference in neuroscience, are discussed. The latter provides a strong biological endorsement for DCEE. The methods and discussions are illustrated by autonomous source search using a robot. It is concluded that DCEE provides a promising, complementary approach to RL, and that more research is required to develop it into a generic theory and fully realise its potential. The relationships revealed in this paper provide insights into these related methods and facilitate cross-fertilisation among control, machine learning and neuroscience for developing autonomous control in uncertain environments.