Abstract

A nonparametric neural architecture called the Sigma-Pi Cascade extended Hybrid Neural Network (σπ-CHNN) is proposed to extend the approximation capabilities of neural architectures such as Projection Pursuit Learning (PPL) and Hybrid Neural Networks (HNN). Like PPL and HNN, σπ-CHNN uses distinct activation functions in its neurons but, unlike these earlier architectures, it may employ multiplicative operators in its hidden neurons, enabling it to extract higher-order information from the data. σπ-CHNN also allows arbitrary connectivity patterns among neurons. An evolutionary learning algorithm, combined with a conjugate gradient algorithm, is proposed to automatically design the topology and weights of σπ-CHNN. The performance of σπ-CHNN is evaluated on five benchmark regression problems. Results show that σπ-CHNN is competitive with PPL and HNN in most problems, both in the computational requirements of the architecture and in approximation accuracy. In some problems, σπ-CHNN reduces the approximation error by an order of magnitude (10⁻¹) compared to PPL and HNN, whereas in other cases it achieves the same approximation error as these architectures while using fewer hidden neurons (usually one fewer than PPL and HNN).
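
To make the abstract's central idea concrete, the following is a minimal sketch of a generic sigma-pi ("sum of products") unit, the kind of multiplicative hidden neuron the abstract refers to. This is an illustration of the general sigma-pi concept, not the paper's exact σπ-CHNN formulation; the function name `sigma_pi_unit` and the `subsets` parameterization are illustrative choices.

```python
import numpy as np

def sigma_pi_unit(x, subsets, weights, bias=0.0):
    """Generic sigma-pi unit (illustrative, not the paper's exact model).

    x       : 1-D input vector
    subsets : list of index tuples; each tuple selects the inputs
              multiplied together in one product term (the "pi" part)
    weights : one weight per product term (the "sigma" part is the
              weighted sum of the products)
    """
    products = np.array([np.prod(x[list(s)]) for s in subsets])
    return bias + np.dot(weights, products)

# Example with a second-order term x0*x1 and a first-order term x2:
# y = 0.5*x0*x1 + 2.0*x2
x = np.array([1.0, 3.0, -2.0])
y = sigma_pi_unit(x, subsets=[(0, 1), (2,)], weights=np.array([0.5, 2.0]))
print(y)  # 0.5*1*3 + 2.0*(-2) = -2.5
```

Because the product terms couple several inputs at once, a single such unit can represent interactions (higher-order information) that a purely additive neuron would need several hidden units and a nonlinearity to approximate.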
