Abstract

Summary form only given. A novel recurrent neuro-fuzzy network is proposed in this paper. More specifically, we generalize the recurrent neuro-fuzzy network structure proposed by Ballini et al. (2001), which in turn is an improvement of the feedforward structure introduced by Caminhas et al. (1999). The network is composed of two parts: a fuzzy inference system and a neural network. The fuzzy inference system contains fuzzy neurons modeled with the aid of logic operations processed via t-norms and s-norms. The neural network is composed of nonlinear elements placed in series with the preceding logical elements. The network model implicitly encodes a set of if-then rules, and its recurrent multilayer structure performs fuzzy inference. The recurrent fuzzy neural network is particularly suitable for modeling nonlinear dynamic systems and for learning sequences. Network learning involves three main phases: 1) a convenient modification of the vector quantization approach granulates the input universes; 2) the network connections are set and their weights initialized with randomly chosen values; and 3) two main paradigms update the network weights: gradient descent and associative reinforcement learning. The performance of the recurrent neuro-fuzzy network is verified with an example. Computational experiments show that the fuzzy neural model learned is simpler than its counterpart and that learning is faster.
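As an illustration of the logical fuzzy neurons mentioned in the abstract, the sketch below shows one common AND/OR neuron formulation in which inputs and weights are combined through a t-norm and an s-norm. The abstract does not specify which norms or neuron types the authors use, so the choice of the product t-norm, the probabilistic-sum s-norm, and the function names here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of logical fuzzy neurons, not the authors' implementation.
# Assumes the product t-norm and the probabilistic-sum s-norm.

def t_norm(a, b):
    """Product t-norm (fuzzy AND)."""
    return a * b

def s_norm(a, b):
    """Probabilistic-sum s-norm (fuzzy OR)."""
    return a + b - a * b

def and_neuron(x, w):
    """AND neuron: each input is combined with its weight via the s-norm,
    and the results are aggregated via the t-norm."""
    out = 1.0
    for xi, wi in zip(x, w):
        out = t_norm(out, s_norm(xi, wi))
    return out

def or_neuron(x, w):
    """OR neuron: each input is combined with its weight via the t-norm,
    and the results are aggregated via the s-norm."""
    out = 0.0
    for xi, wi in zip(x, w):
        out = s_norm(out, t_norm(xi, wi))
    return out

# Example with two fuzzified inputs and weights, all in [0, 1]
x = np.array([0.8, 0.3])
w = np.array([0.9, 0.5])
print(and_neuron(x, w), or_neuron(x, w))
```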
