Abstract

This study is concerned with the genetic development of logic-based fuzzy models (logic networks). The models are constructed with the aid of AND and OR fuzzy neurons. These neurons are logic-driven processing units realizing s–t and t–s composition operations of fuzzy sets and fuzzy connections, with “t” and “s” being triangular norms and co-norms, respectively. The fundamental architecture of the network comprises a hidden layer formed by a collection of AND neurons, followed by an output layer consisting of OR neurons. We show that this structure of the network translates directly into a collection of “if–then” statements. The learning procedure in such networks consists of two main development phases. First, a blueprint of the network is constructed genetically (through the use of a genetic algorithm) with the underlying intent of capturing the structural (logical) essence of the data and encapsulating it into the network. At this development phase the network is kept binary (Boolean), with the anticipation that these binary connections form a sound blueprint of the logical nature of the data. The second phase of the design is aimed at the refinement of the network, in which we move away from the binary connections and optimize them to reflect the details of the data. There are two main optimization mechanisms applied in this sequence: the structural binary development uses a genetic algorithm (GA), while the parametric refinement of the network is realized through gradient-based learning. This organization of the optimization process exhibits several evident advantages. The GA concentrates on a preliminary, rough binary optimization and in this way helps combat a potential curse of dimensionality. This curse is inherent to high-dimensional problems, to which gradient-based learning is quite vulnerable. Once the blueprint has been developed, further refinement guided by gradient-based computing becomes more effective. An interesting optimization aspect addressed in the genetic optimization environment deals with the construction of optimal subspaces of the overall feature space: in high-dimensional spaces we can envision that only a small subset of features is essential, and reducing the feature set may help develop a compact network. We introduce the concept of network connectivity to quantify this aspect of feature reduction. A carefully presented numerical example serves as a detailed illustration of the development approach.
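
To make the logic of the AND/OR neurons and the two-layer architecture concrete, the following minimal sketch shows one possible realization. The choice of the product as the t-norm and the probabilistic sum as the co-norm (s-norm) is an assumption for illustration only, as is every numeric value of the connections; the paper admits any pair of triangular norms and co-norms, and the binary connection matrix shown here merely stands in for a GA-produced blueprint.

```python
import numpy as np

# Assumed norm pair (one common choice, not prescribed by the paper):
def t_norm(a, b):           # product t-norm
    return a * b

def s_norm(a, b):           # probabilistic sum (co-norm / s-norm)
    return a + b - a * b

def and_neuron(x, w):
    """s-t composition: each input is combined with its connection via the
    s-norm, and the results are aggregated across inputs via the t-norm.
    A connection w_i = 0 keeps the input fully relevant; w_i = 1 removes it."""
    out = 1.0
    for zi in s_norm(x, w):
        out = t_norm(out, zi)
    return out

def or_neuron(x, w):
    """t-s composition: each input is combined with its connection via the
    t-norm, and the results are aggregated across inputs via the s-norm.
    A connection w_i = 1 keeps the input fully relevant; w_i = 0 removes it."""
    out = 0.0
    for zi in t_norm(x, w):
        out = s_norm(out, zi)
    return out

def logic_network(x, W_and, W_or):
    """Hidden layer of AND neurons followed by an output layer of OR neurons."""
    hidden = np.array([and_neuron(x, w) for w in W_and])
    return np.array([or_neuron(hidden, w) for w in W_or])

# Illustrative binary (Boolean) blueprint: 3 inputs, 2 AND neurons, 1 OR output.
x = np.array([0.7, 0.2, 0.9])
W_and = np.array([[0.0, 1.0, 0.0],   # AND neuron 1 uses x1 and x3 (w = 0 keeps an input)
                  [1.0, 0.0, 1.0]])  # AND neuron 2 uses x2 only
W_or = np.array([[1.0, 1.0]])        # OR output aggregates both hidden neurons
print(logic_network(x, W_and, W_or))
```

Read logically, each AND neuron corresponds to the condition part of an “if–then” rule over a subset of features, and the OR output neuron aggregates the rules; the subsequent gradient-based phase would then relax these 0/1 connections toward values in the unit interval.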
