Abstract

This paper proposes a new reinforcement neural fuzzy surrogate (RNFS)-assisted multiobjective evolutionary optimization (RNFS-MEO) algorithm to boost the learning efficiency of data-driven fuzzy controllers (FCs). The RNFS-MEO is applied to evolve a population of FCs for a multiobjective robot wall-following control problem, with the aim of reducing the number of time-consuming control trials and the overall learning time. In the RNFS-MEO, the RNFS is incorporated into a typical multiobjective continuous ant colony optimization algorithm to improve its learning efficiency. The RNFS estimates the accumulated multiobjective function values of the FCs in a colony without applying them to control the process, which reduces the number of control trials. The RNFS is trained online through structure and parameter learning based on the reinforcement signals obtained from controlling the process. To account for the influence of the current control signals on the future states of the controlled process, the temporal difference technique is used in the RNFS training so that the surrogate estimates not only the current but also the future objective function values. The colony of FCs in the RNFS-MEO is repeatedly evolved based on the RNFS-estimated values or the objective function values from real evaluations until a colony of successful FCs is found. Simulations and experiments on the robot wall-following control application verify the effectiveness and efficiency of the RNFS-MEO-based FC learning approach.
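
The abstract describes a loop in which a surrogate, trained online from temporal difference (TD) signals, screens candidate controllers so that only a fraction of them require costly real control trials. The Python sketch below illustrates that general idea only; it is not the authors' implementation. The linear surrogate, the toy scalar "control trial", the feature encoding, and the resampling step are all assumed placeholders, whereas the paper itself uses a neural fuzzy surrogate with structure/parameter learning, real robot wall-following trials, and a multiobjective continuous ant colony optimizer.

```python
"""Minimal sketch of a TD-trained surrogate assisting an evolutionary loop.
All concrete details (model form, trial dynamics, selection scheme) are
simplified assumptions, not the method from the paper."""
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 0.95             # discount factor for accumulated objective values
POP_SIZE = 20
REAL_EVALS_PER_GEN = 5   # only a few candidates get costly real trials


def real_control_trial(params):
    """Stand-in for running an FC on the robot: returns visited states and
    per-step reinforcement signals (here, a toy regulation cost)."""
    states, rewards = [], []
    x = rng.normal()
    for _ in range(30):
        u = params @ np.array([x, 1.0])      # toy linear "controller"
        x = 0.9 * x + 0.1 * u + 0.05 * rng.normal()
        states.append(np.array([x]))
        rewards.append(-x**2)                # reward: keep the state near zero
    return states, rewards


class Surrogate:
    """Tiny linear surrogate trained online with a TD(0)-style target.
    The RNFS in the paper is a neural fuzzy network; a linear model is
    used here only to keep the sketch short."""
    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, feat):
        return float(self.w @ feat)

    def td_update(self, feat, reward, next_feat, terminal):
        # TD target: current reward plus discounted estimate of future value
        target = reward + (0.0 if terminal else GAMMA * self.predict(next_feat))
        self.w += self.lr * (target - self.predict(feat)) * feat


def features(params, state):
    return np.concatenate([params, state, [1.0]])


surrogate = Surrogate(dim=2 + 1 + 1)
colony = rng.normal(size=(POP_SIZE, 2))      # candidate controller parameters

for generation in range(10):
    # A few candidates are evaluated with real trials; the surrogate is
    # trained online from the resulting reinforcement signals via TD updates.
    real_scores = {}
    for i in rng.choice(POP_SIZE, REAL_EVALS_PER_GEN, replace=False):
        states, rewards = real_control_trial(colony[i])
        real_scores[i] = sum(GAMMA**t * r for t, r in enumerate(rewards))
        for t in range(len(states) - 1):
            surrogate.td_update(features(colony[i], states[t]), rewards[t],
                                features(colony[i], states[t + 1]),
                                terminal=(t == len(states) - 2))

    # Remaining candidates are ranked by the surrogate's estimate alone,
    # avoiding additional control trials.
    scores = np.array([real_scores.get(i,
                       surrogate.predict(features(colony[i], np.zeros(1))))
                       for i in range(POP_SIZE)])

    # Simplified variation step (the paper uses multiobjective continuous
    # ant colony optimization): resample around the best-ranked candidates.
    elite = colony[np.argsort(scores)[-POP_SIZE // 2:]]
    colony = (elite[rng.integers(len(elite), size=POP_SIZE)]
              + 0.1 * rng.normal(size=(POP_SIZE, 2)))

print("best estimated score in final generation:", scores.max())
```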
