Abstract

There are two basic types of artificial neural networks: the Multi-Layer Perceptron (MLP) and the Radial Basis Function network (RBF). The first type (MLP) consists of a single type of neuron, which can be decomposed into a linear and a sigmoid part. The second type (RBF) consists of two types of neurons: radial and linear ones. The radial basis function is analyzed and then used to decompose the RBF network. The resulting Perceptron Radial Basis Function network (PRBF) consists of two types of neurons: linear ones and extended sigmoid ones. Any RBF network can be converted directly to a four-layer PRBF network, while any three-layer MLP network can be approximated by a five-layer PRBF network. The new PRBF network thus generalizes the capabilities of both MLP and RBF networks. Learning strategies are also discussed. The new type of PRBF network and its learning via repeated local optimization are demonstrated on a numerical example, together with RBF and MLP networks for comparison. This paper is organized as follows: the basic properties of MLP and RBF neurons are summarized in the first two chapters. The third chapter presents a novel relationship between sigmoidal and radial functions, which is useful for RBF decomposition and generalization. The description of the new PRBF network, together with its properties, is the subject of the fourth chapter. Numerical experiments with a PRBF network and their results are given in the last chapters.
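The two neuron types contrasted above can be sketched as follows. This is a minimal illustration, not the paper's own formulation: the function names, the logistic choice of sigmoid, and the Gaussian choice of radial function are assumptions made for the example.

```python
import math

def mlp_neuron(x, w, b):
    """Perceptron neuron: a linear part (weighted sum plus bias)
    followed by a logistic sigmoid part."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear part
    return 1.0 / (1.0 + math.exp(-s))              # sigmoid part

def rbf_neuron(x, c, width):
    """Radial neuron: activation depends only on the distance of
    the input x from the centre c (Gaussian basis assumed here)."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2.0 * width ** 2))

def linear_neuron(h, v, b):
    """Linear output neuron, common to both network types."""
    return sum(vi * hi for vi, hi in zip(v, h)) + b
```

Note that `mlp_neuron` depends on the input through an inner product, while `rbf_neuron` depends on it through a distance; relating these two forms is exactly the sigmoidal-radial connection the paper exploits.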
