Abstract

Many expert systems have been developed for self-adaptive PID controllers of mobile robots. However, the expert-system layers developed for tuning PID controllers are computationally demanding: they still require prior expert knowledge and highly efficient algorithmic and software execution for real-time applications. To address these problems, in this paper we propose an expert agent-based system, built around a reinforcement learning agent, for self-adapting multiple low-level PID controllers in mobile robots. To formulate the artificial expert agent, we develop an incremental, model-free version of the double Q-learning algorithm for fast on-line adaptation of multiple low-level PID controllers. Fast learning and high on-line adaptability of the artificial expert agent are achieved through a proposed incremental active-learning exploration-exploitation procedure for non-uniform state-space exploration, together with an experience replay mechanism for multiple value-function updates in the double Q-learning algorithm. A comprehensive comparative simulation study and experiments on a real mobile robot demonstrate the high performance of the proposed algorithm for real-time, simultaneous tuning of multiple adaptive low-level PID controllers of mobile robots under real-world conditions.
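For readers unfamiliar with the mechanism named in the abstract, the sketch below illustrates, under placeholder assumptions, how a tabular double Q-learning update with experience replay could drive discrete adjustments of a PID gain. The state discretization, action set, reward, and the hooks `measure_tracking_error` and `discretize` are hypothetical illustrations, not the paper's actual design or implementation.

```python
import random
from collections import deque

import numpy as np

# Illustrative sketch only: tabular double Q-learning with experience replay
# for selecting discrete PID-gain increments. All constants below are
# placeholder assumptions, not values from the paper.

N_STATES = 100                    # assumed number of discretized error states
GAIN_DELTAS = [-0.1, 0.0, 0.1]    # assumed increments applied to one PID gain
N_ACTIONS = len(GAIN_DELTAS)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q_A = np.zeros((N_STATES, N_ACTIONS))
Q_B = np.zeros((N_STATES, N_ACTIONS))
replay = deque(maxlen=5000)       # experience buffer of (s, a, r, s') tuples


def select_action(state):
    """Epsilon-greedy over the combined estimate, as in double Q-learning."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(Q_A[state] + Q_B[state]))


def double_q_update(s, a, r, s_next):
    """One double Q-learning update: the greedy action of one table is
    evaluated with the other table, reducing maximization bias."""
    if random.random() < 0.5:
        a_star = int(np.argmax(Q_A[s_next]))
        Q_A[s, a] += ALPHA * (r + GAMMA * Q_B[s_next, a_star] - Q_A[s, a])
    else:
        a_star = int(np.argmax(Q_B[s_next]))
        Q_B[s, a] += ALPHA * (r + GAMMA * Q_A[s_next, a_star] - Q_B[s, a])


def replay_updates(batch_size=32):
    """Re-apply stored transitions so both value tables get extra updates."""
    if len(replay) < batch_size:
        return
    for s, a, r, s_next in random.sample(list(replay), batch_size):
        double_q_update(s, a, r, s_next)


def tuning_step(state, kp, measure_tracking_error, discretize):
    """One on-line tuning step: adjust the gain, observe the tracking error,
    reward smaller error, learn from the transition, then replay experience."""
    action = select_action(state)
    kp = max(0.0, kp + GAIN_DELTAS[action])    # apply the gain increment
    error = measure_tracking_error(kp)         # assumed environment hook
    reward = -abs(error)
    next_state = discretize(error)             # assumed state mapping
    replay.append((state, action, reward, next_state))
    double_q_update(state, action, reward, next_state)
    replay_updates()
    return next_state, kp
```

In this sketch the replay buffer lets a single observed transition be reused for several value-table updates, which is the role the abstract attributes to the experience replay mechanism; the paper's incremental active-learning exploration procedure is not reproduced here and is replaced by plain epsilon-greedy exploration for brevity.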
