Abstract

In this paper, self-learning approaches are applied to an obstacle avoidance task for a mobile robot. In contrast to conventional reinforcement learning (RL) and fuzzy RL (FRL), a novel approach termed dynamic self-generated fuzzy Q-Learning (DSGFQL) and its extended version, enhanced dynamic self-generated fuzzy Q-Learning (EDSGFQL), are proposed. Both methods are capable of generating a fuzzy inference system (FIS) without any a priori knowledge. In the DSGFQL approach, the structure and precondition parts of the FIS are generated according to the partitioning of the input space and the reinforcement signals of the system. An extended self-organizing map (SOM) algorithm is combined with the DSGFQL approach so that the resulting EDSGFQL algorithm can also update the centers of the membership functions (MFs). In both the DSGFQL and EDSGFQL approaches, the consequent parts of the FIS are updated by fuzzy Q-Learning, a widely used RL method. As a consequence, the proposed DSGFQL and EDSGFQL methodologies can automatically create, delete, and adjust fuzzy rules without any a priori knowledge or supervision. Simulation studies of an obstacle avoidance task for a mobile robot show that the proposed DSGFQL and EDSGFQL approaches are superior to existing RL methods.
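To illustrate the consequent-part update that the abstract refers to, the following is a minimal sketch of generic fuzzy Q-Learning, not the paper's full DSGFQL/EDSGFQL procedure: rule structure, MF centers, the reward function, and all numeric parameters here are hypothetical stand-ins. Each fuzzy rule holds q-values for a set of candidate actions, the global action value is the firing-strength-weighted sum of the rules' chosen q-values, and a temporal-difference error adjusts each rule's consequent in proportion to its firing strength.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 1-D input, 3 fuzzy rules with Gaussian MFs,
# each rule holding q-values for 2 candidate discrete actions.
# (In DSGFQL the rules themselves are created/deleted online; in
# EDSGFQL the centers would also be adapted by the extended SOM.)
centers = np.array([-1.0, 0.0, 1.0])     # MF centers, fixed in this sketch
sigma = 0.6                              # common MF width
n_rules, n_actions = len(centers), 2
q = np.zeros((n_rules, n_actions))       # consequent q-values, one row per rule

def firing(x):
    """Normalized firing strengths of the fuzzy rules for input x."""
    phi = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    return phi / phi.sum()

def select(x, eps=0.1):
    """Per-rule epsilon-greedy action choice; returns strengths, actions, global Q."""
    phi = firing(x)
    greedy = q.argmax(axis=1)
    explore = rng.random(n_rules) < eps
    acts = np.where(explore, rng.integers(0, n_actions, n_rules), greedy)
    Q = float((phi * q[np.arange(n_rules), acts]).sum())
    return phi, acts, Q

def update(phi, acts, Q, reward, x_next, alpha=0.5, gamma=0.9):
    """TD update of the consequents, credited by firing strength."""
    Q_next = float((firing(x_next) * q.max(axis=1)).sum())  # greedy value of next state
    td = reward + gamma * Q_next - Q
    q[np.arange(n_rules), acts] += alpha * td * phi

# Toy episode: reward = firing-weighted fraction of rules choosing action 1.
for _ in range(200):
    x = rng.uniform(-1.5, 1.5)
    phi, acts, Q = select(x)
    r = float((phi * (acts == 1)).sum())
    update(phi, acts, Q, r, x)
```

The per-rule epsilon-greedy selection and firing-strength credit assignment follow the standard fuzzy Q-Learning scheme the abstract cites; how rules are generated and pruned is the paper's contribution and is not shown here.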
