Abstract

Although flexible neural networks (FNNs) have been used more successfully than classical neural networks (CNNs), nothing is rigorously known about their properties. In fact, they are not even well known to the systems and control community. In this paper, theoretical evidence is given for their superiority over CNNs. Following an overview of flexible bipolar sigmoid functions (FBSFs), several fundamental properties of feedforward and recurrent FNNs are established. For the feedforward case, it is proven that, like CNNs, FNNs with as few as a single hidden layer (SHL) are universal approximators. It is also proven that, unlike irreducible SHL classical bipolar sigmoid neural networks (CBSNNs), irreducible SHL flexible bipolar sigmoid neural networks (FBSNNs) are nonuniquely determined by their input-output (I-O) maps, up to a finite group of symmetries. Recurrent FNNs are then introduced; it is observed that they can be interpreted as a generalization of the conventional state-space framework. For the recurrent case, it is substantiated that, like CBSNNs, FBSNNs are universal approximators. Necessary and sufficient conditions for the controllability and observability of a generic class of them are established. For a subclass of this class, it is proven that, unlike CBSNNs, FBSNNs are nonuniquely determined by their I-O maps, up to a finite group of symmetries, and that every system in this subclass is minimal. Finally, a new class of FNNs, namely flexible bipolar radial basis neural networks (FBRBNNs), is introduced. It is proven that, as in the case of classical radial basis neural networks (CRBNNs), feedforward SHL FBRBNNs are universal approximators.
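To make the objects named above concrete, the minimal Python/NumPy sketch below (not taken from the paper) uses one common form of the flexible bipolar sigmoid from the flexible-neural-network literature, f(x, a) = tanh(a x)/a with a trainable shape parameter a, and wires it into a single-hidden-layer feedforward FNN and a recurrent FNN written as a generalized state-space update. The exact FBSF used in the paper, and all weights, dimensions, and names here (flexible_bipolar_sigmoid, shl_fnn, recurrent_fnn_step), are illustrative assumptions.

import numpy as np

def flexible_bipolar_sigmoid(x, a):
    # One common flexible bipolar sigmoid: tanh(a*x)/a, with shape parameter a != 0
    # trained alongside the ordinary weights (an assumption; the paper's FBSF may differ).
    return np.tanh(a * x) / a

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_state = 3, 8, 2, 4

# Feedforward SHL FNN: y = C f(A u + b, a), with a per-unit shape parameter a_i.
A = rng.standard_normal((n_hidden, n_in))   # input-to-hidden weights
b = rng.standard_normal(n_hidden)           # hidden biases
a = np.full(n_hidden, 0.5)                  # flexible shape parameters (one per hidden unit)
C = rng.standard_normal((n_out, n_hidden))  # hidden-to-output weights

def shl_fnn(u):
    return C @ flexible_bipolar_sigmoid(A @ u + b, a)

# Recurrent FNN read as a generalized state-space model:
#   x[t+1] = f(W x[t] + B u[t], a_x),   y[t] = H x[t]
# (with f the identity this collapses to the conventional linear state-space form).
W = 0.3 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_in))
H = rng.standard_normal((n_out, n_state))
a_x = np.full(n_state, 0.5)

def recurrent_fnn_step(x, u):
    x_next = flexible_bipolar_sigmoid(W @ x + B @ u, a_x)
    return x_next, H @ x

y_ff = shl_fnn(np.ones(n_in))                        # feedforward output
x1, y0 = recurrent_fnn_step(np.zeros(n_state), np.ones(n_in))  # one recurrent step

In training, the shape parameters a and a_x would be updated by gradient descent together with the weight matrices, which is what distinguishes a flexible network from its classical counterpart with a fixed activation.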
