Abstract

The fuzzy min–max neural network is a neural architecture based on hyperbox fuzzy sets that can be trained incrementally by adjusting the number of hyperboxes and their corresponding volumes. Recently, an extension of this network based on the notion of random hyperboxes was proposed, suitable for reinforcement learning problems with a discrete action space. In this work, we elaborate further on the random hyperbox idea and propose the stochastic fuzzy min–max neural network, in which each hyperbox is associated with a stochastic learning automaton. Experimental results on the pole balancing problem indicate that employing this model as an action selection network in reinforcement learning schemes yields superior learning performance compared with the traditional approach in which a multilayer perceptron is employed.
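
The following is a minimal illustrative sketch, not the authors' exact formulation: it combines the standard Simpson-style hyperbox membership function with a linear reward–inaction (L_R-I) learning automaton, which is one common stochastic automaton scheme; the particular pairing of a winning hyperbox with its automaton for action selection is an assumption made here for illustration.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=1.0):
    """Degree to which input x lies inside the hyperbox with min point v
    and max point w (Simpson-style membership, sensitivity gamma)."""
    over = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    under = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean((over + under) / 2.0))

class LearningAutomaton:
    """Stochastic learning automaton over a discrete action set (L_R-I update)."""
    def __init__(self, n_actions, lr=0.1, rng=None):
        self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
        self.lr = lr
        self.rng = rng or np.random.default_rng()

    def select(self):
        # Sample an action according to the current probability vector.
        return int(self.rng.choice(len(self.p), p=self.p))

    def reward(self, action):
        # Linear reward-inaction: shift probability mass toward the rewarded action.
        self.p *= (1.0 - self.lr)
        self.p[action] += self.lr

# Hypothetical usage: pick the hyperbox with the highest membership for the
# current state, then let its automaton choose the discrete action.
if __name__ == "__main__":
    state = np.array([0.4, 0.6])
    boxes = [(np.array([0.0, 0.0]), np.array([0.5, 0.5]), LearningAutomaton(2)),
             (np.array([0.3, 0.3]), np.array([1.0, 1.0]), LearningAutomaton(2))]
    memberships = [hyperbox_membership(state, v, w) for v, w, _ in boxes]
    winner = boxes[int(np.argmax(memberships))][2]
    action = winner.select()
    winner.reward(action)  # reinforce when the environment returns a reward signal
```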
