Abstract

This paper proposes a novel methodology, named dynamic self-generated fuzzy Q-learning (DSGFQL), for generating fuzzy reinforcement learning systems without prior knowledge or expert effort. Compared with the authors' previous work on dynamic fuzzy Q-learning (DFQL), DSGFQL offers an automatic generation method for fuzzy reinforcement learning that can both create and prune fuzzy rules. As in DFQL, an ε-completeness criterion is applied to recruit new fuzzy rules. At the same time, global and local reward criteria are adopted to modify the parameters of fuzzy rules that pass the ε-completeness criterion. In DSGFQL, the local reward and local firing strength are used to delete unsatisfactory and unnecessary fuzzy rules. In this paper, DSGFQL is applied to a wall-following task of a mobile robot. Experimental results and comparative studies between DSGFQL and DFQL demonstrate that the proposed DSGFQL is superior to DFQL in both overall performance and computational efficiency: the number of failures is smaller, the reward is higher, and the number of fuzzy rules is fewer. Moreover, the proposed framework can also be applied to generate fuzzy inference systems (FIS) automatically for other reinforcement learning methods.
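To illustrate the two structural mechanisms the abstract describes — recruiting a new fuzzy rule when no existing rule covers the current state well enough (the ε-completeness test), and deleting rules whose firing strength stays persistently low — the following is a minimal sketch, not the paper's implementation. The Gaussian membership functions, the threshold values, and the running-mean decay factor are all illustrative assumptions.

```python
import numpy as np

class FuzzyRuleBase:
    """Sketch of rule recruitment and pruning in the spirit of DSGFQL.

    Assumptions (not from the paper): Gaussian membership functions of
    fixed width, and illustrative values for `epsilon` (completeness
    threshold) and `strength_min` (pruning threshold).
    """

    def __init__(self, epsilon=0.5, width=0.3, strength_min=0.05):
        self.epsilon = epsilon          # epsilon-completeness threshold
        self.width = width              # Gaussian width of each rule
        self.strength_min = strength_min
        self.centers = []               # rule centers in state space
        self.mean_strength = []         # running mean firing strength per rule

    def firing_strengths(self, x):
        # Gaussian firing strength of every existing rule for state x
        return [np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def observe(self, x):
        """Recruit a new rule if no existing rule covers x well enough."""
        s = self.firing_strengths(x)
        if not s or max(s) < self.epsilon:   # epsilon-completeness test
            self.centers.append(np.asarray(x, dtype=float))
            self.mean_strength.append(1.0)
            s = self.firing_strengths(x)
        # track each rule's long-run firing strength (decay 0.9 assumed)
        for i, si in enumerate(s):
            self.mean_strength[i] = 0.9 * self.mean_strength[i] + 0.1 * si
        return s

    def prune(self):
        """Delete rules whose mean firing strength has fallen too low."""
        keep = [i for i, m in enumerate(self.mean_strength)
                if m >= self.strength_min]
        self.centers = [self.centers[i] for i in keep]
        self.mean_strength = [self.mean_strength[i] for i in keep]
```

For example, observing two well-separated states recruits two rules, while revisiting a covered state recruits none; a full DSGFQL agent would additionally attach Q-values to each rule and use the global/local reward criteria to adjust them.

```python
rb = FuzzyRuleBase()
rb.observe(np.array([0.0, 0.0]))   # empty base: first rule recruited
rb.observe(np.array([1.0, 1.0]))   # far from rule 1: second rule recruited
rb.observe(np.array([0.0, 0.0]))   # covered by rule 1: no new rule
print(len(rb.centers))             # 2
```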
