Abstract
This paper proposes a robust control design method that uses reinforcement learning to control partially unknown dynamical systems under uncertain conditions. The method extends an optimal reinforcement learning algorithm with a new learning technique based on robust control theory. By learning from data, the algorithm proposes actions that guarantee the stability of the closed-loop system within uncertainty bounds that are also estimated from the data. Control policies are computed by solving a set of linear matrix inequalities. The controller was evaluated in simulations on a blood glucose model for patients with Type 1 diabetes. Simulation results show that the proposed methodology can safely regulate blood glucose within a healthy range under measurement and process noise, and that it significantly reduces post-meal fluctuations in blood glucose. A comparison between the proposed algorithm and the existing optimal reinforcement learning algorithm shows the improved robustness of the closed-loop system under the proposed method.
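The abstract states that control policies are computed by solving linear matrix inequalities but does not reproduce the inequalities here. The following is a minimal sketch of LMI-based state-feedback synthesis for a nominal linear system using CVXPY; the matrices A and B, the tolerance eps, and the classical Lyapunov condition A P + P Aᵀ + B Y + Yᵀ Bᵀ ≺ 0 with K = Y P⁻¹ are illustrative assumptions, not the authors' robust formulation or their glucose model.

# Minimal illustrative sketch: compute a stabilizing state-feedback gain by
# solving a standard Lyapunov-based LMI with CVXPY. The system matrices and
# the condition used here are assumptions for illustration only.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # illustrative (unstable) system matrix
B = np.array([[0.0], [1.0]])              # illustrative input matrix
n, m = B.shape

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
Y = cp.Variable((m, n))                   # change of variables, Y = K P

eps = 1e-6
lyap = A @ P + P @ A.T + B @ Y + Y.T @ B.T
constraints = [P >> eps * np.eye(n), lyap << -eps * np.eye(n)]

cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(P.value)      # stabilizing feedback u = K x
print("Closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))

Under these assumptions, any feasible P certifies stability of the closed loop with u = K x; the paper's conditions additionally account for the uncertainty bounds estimated from the data.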
Highlights
Control of unknown dynamic systems with uncertainties is a challenge because exact mathematical models are often required
As a reinforcement learning (RL) framework, the proposed robust control algorithm consists of an agent that takes actions and learns their consequences in an unknown environment (see the sketch after these highlights)
To evaluate the performance of the robust RL controller, we implemented it on the glucose kinetics model described in the paper under a daily scenario for patients with Type 1 diabetes
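As an illustration of the agent-environment loop referred to in the highlights, the code below shows a minimal interaction loop; the GlucoseEnv class, its scalar dynamics, and the fixed placeholder policy are hypothetical stand-ins, not the paper's glucose kinetics model or the proposed learning algorithm.

# Minimal sketch of an agent-environment interaction loop. GlucoseEnv is a toy
# stand-in with a scalar "glucose deviation" state, not the paper's model.
import numpy as np

class GlucoseEnv:
    def __init__(self, noise_std=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.noise_std = noise_std
        self.state = 1.0                      # initial deviation from the target level

    def step(self, action):
        # First-order response plus process noise (illustrative only).
        self.state = 0.95 * self.state + 0.1 * action + self.noise_std * self.rng.normal()
        reward = -(self.state ** 2 + 0.01 * action ** 2)   # penalize deviation and effort
        return self.state, reward

env = GlucoseEnv()
gain = -0.5                                   # placeholder linear policy u = gain * x
for k in range(100):
    x = env.state
    u = gain * x                              # agent takes an action
    x_next, r = env.step(u)                   # environment returns the consequence
    # A learning agent would update its policy here from (x, u, r, x_next).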
Summary
Control of unknown dynamic systems with uncertainties is challenging because exact mathematical models are often required. A common approach is to approximate the unknown dynamics with a function approximator and design the control algorithm from the parameters of that approximator. Based on this approach, many control techniques have been proposed using machine learning models such as neural networks and fuzzy logic. Goyal et al. [2] proposed a robust sliding mode controller that can be designed from Chebyshev neural networks. Chadli and Guerra [3] introduced a robust static output feedback controller for Takagi-Sugeno fuzzy models. Ngo and Shin [4] proposed a method to model unstructured uncertainties and a new Takagi-Sugeno fuzzy controller using type-2 fuzzy neural networks.
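The summary refers to designing controllers from the parameters of a learned approximator. The code below is a minimal sketch of that pattern: it fits a linear discrete-time approximator x[k+1] ≈ A x[k] + B u[k] to logged data by least squares, a simple stand-in for the neural-network and fuzzy approximators cited above; the data, the fit_linear_model helper, and the true system used to generate samples are hypothetical.

# Minimal sketch: fit a linear approximator to logged trajectories, then hand
# the estimated (A, B) to a model-based design step such as the LMI sketch
# shown earlier. Purely illustrative; not the approximators of the cited works.
import numpy as np

def fit_linear_model(X, U, X_next):
    # X, U, X_next hold one column per sample: x[k], u[k], x[k+1].
    Z = np.vstack([X, U])                      # stacked regressors [x; u]
    Theta = X_next @ np.linalg.pinv(Z)         # [A B] = X_next Z^+
    n = X.shape[0]
    return Theta[:, :n], Theta[:, n:]          # split into A_hat, B_hat

# Hypothetical logged data: 200 samples of a 2-state, 1-input system.
rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])
X = rng.standard_normal((2, 200))
U = rng.standard_normal((1, 200))
X_next = A_true @ X + B_true @ U + 0.01 * rng.standard_normal((2, 200))

A_hat, B_hat = fit_linear_model(X, U, X_next)
print("Estimated A:\n", A_hat, "\nEstimated B:\n", B_hat)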