Abstract
Reinforcement learning refers to a class of learning tasks and algorithms in which the learning system learns an associative mapping by maximizing a scalar evaluation function through interaction with its environment. Fuzzy actor-critic learning (FACL) is a reinforcement learning method based on the dynamic programming principle. Conventionally, a priori knowledge of the process, in the form of either models or experts, is required for tuning the conclusion part of a fuzzy inference system (FIS). This paper proposes a novel algorithm based on FACL that tunes the conclusion part of the FIS automatically. The only information available for learning is the system feedback, which describes the degree of reward or punishment for the action performed in the previous state. Reinforcement learning problems are classically discrete-time dynamic problems in which the learner perceives discrete states and triggers only discrete actions; here, the same framework is extended to the control of continuous processes. The generality of the proposed methods allows the system to learn a wide range of reinforcement learning problems. Experimental studies have also shown the superiority of these methods over related reinforcement learning methods reported in the literature. In this paper, the proposed reinforcement learning algorithm was first applied to a boiler drum level system and its performance was studied. To demonstrate the generality of the methods, they were also applied to a number of linear processes.
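The core idea summarized above, using only a scalar reinforcement signal to tune the conclusion (consequent) parameters of an FIS for both an actor and a critic, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the membership functions, learning rates, exploration scheme, and function names (`firing_strengths`, `facl_step`) are all assumptions introduced here for illustration.

```python
import numpy as np

def firing_strengths(x, centers, width):
    """Normalized triangular membership degrees of scalar input x over the rule base."""
    phi = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    s = phi.sum()
    return phi / s if s > 0 else phi

def facl_step(x, x_next, reward, theta, w, centers, width,
              alpha=0.05, beta=0.05, gamma=0.95, sigma=0.1, rng=None):
    """One fuzzy actor-critic update: the TD error, computed from the scalar
    reinforcement alone, tunes the conclusion parameters of both FISs in place.
    theta: actor rule conclusions, w: critic rule conclusions (hypothetical names)."""
    rng = np.random.default_rng() if rng is None else rng
    phi = firing_strengths(x, centers, width)
    noise = sigma * rng.standard_normal()
    action = float(theta @ phi) + noise            # actor output plus exploration
    v = float(w @ phi)                             # critic's value of current state
    v_next = float(w @ firing_strengths(x_next, centers, width))
    delta = reward + gamma * v_next - v            # temporal-difference error
    w += alpha * delta * phi                       # tune critic conclusions
    theta += beta * delta * noise * phi            # tune actor conclusions
    return action, delta
```

In this sketch the only learning signal is `reward`; no process model or expert rule conclusions are supplied, which mirrors the model-free tuning the abstract describes.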