Abstract
Path planning is the task of computing a motion sequence that allows a robot to move autonomously from a start position to a final destination without human intervention. It is a typical task affected by imprecision and uncertainty, and it has been widely studied using fuzzy logic systems (FLS). Constructing a well-performing fuzzy controller, however, is not always easy: finding appropriate membership functions and fuzzy rules is a difficult problem. The design of fuzzy rules often relies on heuristic experience and lacks a systematic methodology, so the resulting rules may be incorrect or inconsistent. Because of the large number of parameters to determine, the design can be long and delicate and may lead to a controller with poor performance. To cope with this difficulty, many researchers have worked on learning algorithms for fuzzy controller design; such automatic methods make it possible to extract the required information when expert knowledge is unavailable. The most popular approach to optimizing fuzzy logic controllers is arguably supervised learning, which requires training data. In real applications, however, training data is not always easy to obtain and may be prohibitively expensive to collect. For such problems, reinforcement learning (RL) is more suitable than supervised learning. A control strategy with a learning capability can be obtained by using Q-learning to tune fuzzy logic controllers: the robot receives only a scalar reinforcement signal as feedback, and this signal is used to adjust the robot's behavior in order to improve its performance. The basic idea of the Q-learning algorithm is to maximize the cumulative reward received through interaction with the environment. In this chapter, the Q-learning algorithm is used to optimize Takagi-Sugeno fuzzy logic controllers for the autonomous path planning of a mobile robot. The optimized fuzzy controllers are applied to three robot tasks: goal seeking, obstacle avoidance, and wall following. The results show significant improvements in the robot's behaviors.
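To make the abstract's idea concrete, the sketch below shows the generic fuzzy Q-learning scheme it alludes to: each fuzzy rule holds a set of candidate consequents with associated q-values, an epsilon-greedy choice is made per rule, the Takagi-Sugeno output blends the chosen consequents by firing strength, and the temporal-difference error credits each rule in proportion to how strongly it fired. This is a minimal illustration under stated assumptions, not the chapter's actual controller; the rule centers, candidate set, `firing_strengths` function, and all parameter values are hypothetical.

```python
import numpy as np

# Hypothetical setup: 9 fuzzy rules over a 2-D state, each rule choosing
# among 5 candidate consequents. None of these values come from the chapter.
N_RULES, K = 9, 5
CANDIDATES = np.linspace(-1.0, 1.0, K)   # candidate consequents per rule
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1        # learning rate, discount, exploration

rng = np.random.default_rng(0)
CENTERS = rng.random((N_RULES, 2))       # hypothetical rule centers
q = np.zeros((N_RULES, K))               # one q-value per (rule, candidate)

def firing_strengths(state):
    """Placeholder antecedent evaluation: Gaussian-like memberships
    around fixed rule centers, normalized to sum to 1."""
    raw = np.exp(-np.sum((state - CENTERS) ** 2, axis=1))
    return raw / raw.sum()

def select():
    """Epsilon-greedy choice of one candidate consequent per rule."""
    greedy = q.argmax(axis=1)
    explore = rng.integers(K, size=N_RULES)
    return np.where(rng.random(N_RULES) < EPS, explore, greedy)

def q_update(phi, chosen, reward, phi_next):
    """Fuzzy Q-learning update: the TD error uses the blended Q-value of
    the fired rules, and each rule is credited by its firing strength."""
    idx = np.arange(N_RULES)
    q_sa = np.sum(phi * q[idx, chosen])            # Q of the taken fuzzy action
    v_next = np.sum(phi_next * q.max(axis=1))      # blended value of next state
    delta = reward + GAMMA * v_next - q_sa         # scalar reinforcement drives learning
    q[idx, chosen] += ALPHA * delta * phi

# One illustrative learning step on a toy state and placeholder reward:
s = rng.random(2)
phi = firing_strengths(s)
a = select()
u = np.sum(phi * CANDIDATES[a])                    # blended TS control output
s_next, r = rng.random(2), -abs(u)                 # dummy transition and reward
q_update(phi, a, r, firing_strengths(s_next))
```

In a goal-seeking, obstacle-avoidance, or wall-following task, the state would be the robot's sensor readings, the output `u` a steering or velocity command, and the reward shaped by the task (e.g., progress toward the goal or distance kept from obstacles).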