Abstract

Background: There has been a surge of interest in the development of advanced Reinforcement Learning (RL) systems as intelligent approaches that learn optimal control policies directly from an agent's interactions with its environment. Objectives: In a model-free RL method with a continuous state space, the value function of the states typically needs to be approximated. In this regard, Deep Neural Networks (DNNs) provide an attractive mechanism for approximating the value function from sample transitions. DNN-based solutions, however, are highly sensitive to parameter selection, prone to overfitting, and not very sample efficient. A Kalman-based methodology, on the other hand, could serve as an efficient alternative. Such an approach, however, commonly requires a priori information about the system (such as noise statistics) to perform well. The main objective of this paper is to address this issue. Methods: As a remedy to the aforementioned problems, this paper proposes an innovative Multiple Model Kalman Temporal Difference (MM-KTD) framework, which adapts the parameters of the filter using the observed states and rewards. Moreover, an active learning method is proposed to enhance the sampling efficiency of the system. More specifically, the estimated uncertainty of the value function is exploited to form the behaviour policy, leading to more visits to states whose values are less certain and, therefore, improving the overall sample efficiency of learning. As a result, the proposed MM-KTD framework can learn the optimal policy with a significantly reduced number of samples compared to its DNN-based counterparts. Results: To evaluate the performance of the proposed MM-KTD framework, we performed a comprehensive set of experiments on three RL benchmarks: Inverted Pendulum, Mountain Car, and Lunar Lander. Experimental results show the superiority of the proposed MM-KTD framework over its state-of-the-art counterparts.
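The abstract gives no implementation details, but the mechanics it names (a Kalman filter over value-function weights, a bank of filters with different noise hypotheses, and an uncertainty-aware behaviour policy) can be illustrated with a short sketch. The following is a minimal illustration in that spirit, not the paper's actual method: it assumes a linear value function over hand-crafted features, and all names (LinearKTD, make_bank, mm_update) and the specific noise grids are hypothetical.

```python
import numpy as np

class LinearKTD:
    """Minimal Kalman Temporal Difference sketch with V(s) = phi(s) @ w.

    The weight vector w is treated as the hidden state of a Kalman filter;
    each observed reward is a scalar measurement of w through the TD relation
    r ~ (phi(s) - gamma * phi(s')) @ w + noise.
    """

    def __init__(self, n_features, gamma=0.99, q=1e-4, r=1.0):
        self.w = np.zeros(n_features)      # value-function weights (filter state)
        self.P = np.eye(n_features)        # weight covariance
        self.gamma = gamma
        self.Q = q * np.eye(n_features)    # process-noise hypothesis
        self.R = r                         # observation-noise hypothesis

    def update(self, phi_s, phi_next, reward, terminal=False):
        """One filter step on the transition (s, r, s')."""
        h = phi_s - (0.0 if terminal else self.gamma) * phi_next
        self.P = self.P + self.Q                    # predict: random-walk weights
        S = float(h @ self.P @ h) + self.R          # innovation variance
        K = self.P @ h / S                          # Kalman gain
        e = reward - float(h @ self.w)              # TD innovation
        self.w = self.w + K * e                     # correct the weights
        self.P = self.P - np.outer(K, h @ self.P)   # shrink the covariance
        return e, S

    def value_and_std(self, phi):
        """Value estimate and its uncertainty, for an exploratory policy."""
        return float(phi @ self.w), float(np.sqrt(phi @ self.P @ phi))

# Multiple-model layer: a bank of filters under different (Q, R) hypotheses,
# re-weighted online by the Gaussian likelihood of each filter's innovation.
# This mimics adapting the noise statistics from observed states and rewards.
def make_bank(n_features):
    filters = [LinearKTD(n_features, q=q, r=r)
               for q in (1e-5, 1e-3) for r in (0.1, 1.0)]
    weights = np.full(len(filters), 1.0 / len(filters))
    return filters, weights

def mm_update(filters, weights, phi_s, phi_next, reward, terminal=False):
    likelihoods = np.empty(len(filters))
    for i, f in enumerate(filters):
        e, S = f.update(phi_s, phi_next, reward, terminal)
        likelihoods[i] = np.exp(-0.5 * e * e / S) / np.sqrt(2.0 * np.pi * S)
    weights = weights * likelihoods + 1e-12         # avoid total collapse
    return weights / weights.sum()
```

An active behaviour policy in this spirit would score each candidate action by the (weight-averaged) value estimate plus a bonus proportional to the returned standard deviation, so that states with less certain values are visited more often, matching the sample-efficiency argument above.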

Highlights

  • Inspired by the exceptional learning capabilities of human beings, Reinforcement Learning (RL) systems have emerged, aiming to form optimal control policies merely from knowledge of an agent's past interactions with its environment

  • Experimental results evaluate the performance of the proposed Multiple Model Kalman Temporal Difference (MM-KTD) framework

  • One benefit of the proposed MM-KTD framework is its superior ability to deal with scenarios where full information about the underlying system parameters is not available

Summary

Introduction

Inspired by the exceptional learning capabilities of human beings, Reinforcement Learning (RL) systems have emerged, aiming to form optimal control policies merely from knowledge of an agent's past interactions with its environment. Such a learning approach is beneficial because, unlike supervised learning methods, an RL system does not require a labelled training set; it learns directly from the reward signal generated by its interactions with the environment.
