Abstract

The variable projection (VP) method is a classical and effective approach to separable nonlinear least squares (SNLLS) problems. Training a radial basis function neural network (RBFNN) with a single output neuron by minimizing the sum of squared errors (SSE) is an SNLLS problem, so the classical VP method has been applied to such networks. However, the one-output RBFNN (ORBFNN) is only one type of RBFNN; this paper therefore proposes a new VP method for the general radial basis function neural network (GRBFNN), which places no restriction on the number of output neurons. The new VP method transforms the problem of minimizing the SSE of a GRBFNN into a lower-dimensional optimization problem. We prove theoretically that the set of stationary points of the reduced objective function is equivalent to that of the original objective function. In addition, the lower dimension means fewer parameters must be guessed when choosing an initial point for the new problem. Numerical experiments indicate that, with the same algorithm, minimizing the new objective function converges in fewer iterations and yields both a smaller training error and a smaller testing error than minimizing the original objective function.
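The core idea behind variable projection can be illustrated with a short sketch. The sketch below is not the paper's implementation; it only shows, under common assumptions (Gaussian RBF units, nonlinear parameters being the centers and widths), how the linear output weights of a multi-output RBFNN can be eliminated in closed form via linear least squares, leaving a reduced objective over the nonlinear parameters alone. The function names `rbf_design_matrix` and `vp_objective` are illustrative choices, not names from the paper.

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """Gaussian RBF activations: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def vp_objective(theta, X, Y, n_centers):
    """Reduced (projected) SSE objective over the nonlinear parameters only.

    theta packs the RBF centers and widths; the linear output-weight
    matrix W (one column per output neuron) is eliminated by solving the
    linear least-squares problem min_W ||Phi W - Y||_F^2 in closed form.
    """
    d = X.shape[1]
    centers = theta[: n_centers * d].reshape(n_centers, d)
    widths = theta[n_centers * d :]
    Phi = rbf_design_matrix(X, centers, widths)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # optimal linear weights
    residual = Phi @ W - Y
    return 0.5 * np.sum(residual ** 2)
```

A generic optimizer (e.g. `scipy.optimize.minimize`) can then be run on `vp_objective` directly; because the linear weights are projected out, the search space contains only the centers and widths, which is the dimension reduction the abstract refers to.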
