Abstract

This paper presents the development of a novel feed-forward artificial neural network paradigm. In its formulation, the hidden neurons are defined by sample activation functions with three parameters: amplitude, width, and translation. The hidden neurons are further classified as low- and high-resolution neurons, with global and local approximation properties, respectively. The gradient method is applied to derive simple recursive relations for training the paradigm. The application results demonstrate the paradigm's attractive properties: (i) easy choice of neural network size; (ii) fast training; (iii) strong ability to perform complicated function approximation and nonlinear modeling.
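The abstract does not specify the exact form of the sample activation functions, only that each hidden neuron carries amplitude, width, and translation parameters updated by the gradient method. The following minimal sketch assumes a Gaussian-bump activation purely for illustration, and fits a toy 1-D target with plain batch gradient descent on all three parameter sets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden unit with amplitude a, width w, translation t.
# A Gaussian bump is assumed here; the paper's actual sample
# activation function may differ.
def hidden(x, a, w, t):
    return a * np.exp(-((x - t) / w) ** 2)

# Toy 1-D target to approximate.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)

# One layer of K such units, summed at a linear output.
K = 8
a = rng.normal(size=K) * 0.5
w = np.full(K, 0.5)
t = np.linspace(-1.0, 1.0, K)

mse0 = float(((hidden(x[:, None], a, w, t).sum(axis=1) - y) ** 2).mean())

lr = 0.05
for _ in range(2000):
    h = hidden(x[:, None], a, w, t)          # (200, K)
    err = h.sum(axis=1) - y                  # (200,)
    u = (x[:, None] - t) / w
    # Partial derivatives of each unit w.r.t. its own parameters.
    da = np.exp(-u ** 2)                     # dh/da
    dw = h * 2 * u ** 2 / w                  # dh/dw
    dt = h * 2 * u / w                       # dh/dt
    a -= lr * (err[:, None] * da).mean(axis=0)
    w -= lr * (err[:, None] * dw).mean(axis=0)
    t -= lr * (err[:, None] * dt).mean(axis=0)

mse = float(((hidden(x[:, None], a, w, t).sum(axis=1) - y) ** 2).mean())
```

Localized, translated bumps of this kind give each unit a local approximation region, loosely corresponding to the "high-resolution" neurons described above; wider units would cover the input range more globally.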
