Abstract

This paper overviews and discusses the relationship between Reinforcement Learning (RL) and the recently developed Dual Control for Exploitation and Exploration (DCEE). It is argued that there are two related but quite distinct approaches, namely control and machine learning, to tackling the intractability that arises in optimal decision-making/control problems. In the control approach, the original problems (of an infinite horizon) are approximated by finite-horizon problems and solved online by taking advantage of the availability of computing power. In the machine learning approach, the optimal solutions are approximated through iterations, or through (offline) training by trials when models are not available. When dealing with unknown environments, DCEE, as a technique developed from the control approach, could potentially solve similar problems to RL while offering a number of advantages, most notably: coping with uncertainty in environments/tasks, high efficiency in learning through balancing exploitation and exploration, and potential for establishing formal properties such as stability. The links between DCEE and other relevant methods such as dual control, Model Predictive Control and, particularly, Active Inference in neuroscience are discussed; the latter provides a strong biological endorsement for DCEE. The methods and discussions are illustrated by autonomous source search using a robot. It is concluded that DCEE provides a promising, complementary approach to RL, and that more research is required to develop it as a generic theory and fully realise its potential. The relationships revealed in this paper provide insights into these relevant methods and facilitate cross-fertilisation between control, machine learning and neuroscience for developing autonomous control under uncertain environments.
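As a rough illustration of the exploitation–exploration balance the abstract describes, the sketch below is our own toy construction, not the paper's algorithm: a 1-D source-search agent holds a discrete belief over the unknown source location and scores each candidate move by the expected squared distance to the source under the *predicted* posterior, which splits into an exploitation term (distance to the believed source) plus an exploration term (expected remaining uncertainty after measuring there). The grid, the distance-dependent sensor-noise model, and all constants are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = np.linspace(0.0, 10.0, 101)  # hypothetical candidate source positions

def likelihood(y, x, grid):
    # Assumed sensor model: a reading y = s + noise, noisier far from the source.
    sigma = 0.5 + 0.3 * np.abs(x - grid)
    return np.exp(-0.5 * ((y - grid) / sigma) ** 2) / sigma

def posterior(prior, y, x):
    # Bayesian belief update over the grid after a reading y taken at position x.
    post = prior * likelihood(y, x, GRID)
    return post / post.sum()

def predicted_cost(x, a, belief, n_samples=200):
    # DCEE-style one-step-ahead cost J(a) = E[(x+a - s)^2] under the predicted
    # posterior: (x+a - mean)^2 captures exploitation, the posterior variance
    # captures remaining uncertainty (exploration). Averaged by Monte Carlo
    # over imagined sources s ~ belief and imagined readings y.
    xa = x + a
    total = 0.0
    for _ in range(n_samples):
        s = rng.choice(GRID, p=belief)                    # imagined true source
        y = s + rng.normal(0.0, 0.5 + 0.3 * abs(xa - s))  # imagined reading
        post = posterior(belief, y, xa)
        mean = (GRID * post).sum()
        var = ((GRID - mean) ** 2 * post).sum()
        total += (xa - mean) ** 2 + var
    return total / n_samples

def dcee_step(x, belief, actions=(-1.0, 0.0, 1.0)):
    # Pick the move with the lowest predicted cost (receding-horizon, N = 1).
    costs = [predicted_cost(x, a, belief) for a in actions]
    return actions[int(np.argmin(costs))]
```

Because the predicted posterior variance depends on where the agent measures, the single objective exhibits the dual effect: the chosen action both drives towards the believed source and reduces uncertainty about it, without a hand-tuned exploration bonus.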
