Abstract

Many parametric approximation problems can be cast as nonlinear least squares problems, and in many practical settings these problems are ill-posed. Unlike classical nonlinear least squares methods, including trust region methods, the algorithm developed in this paper aims not only at high approximation accuracy but also at low model complexity and low computational complexity. The algorithm minimizes a generalized objective function subject to a special trust region constraint and, at the same time, exploits Jacobian rank deficiency to remove redundant parameters from the approximation models. Some convergence properties of the algorithm are qualitatively evaluated, and the effectiveness of the algorithm and its variants is demonstrated on two neural network approximation examples. Comparisons are made with the Levenberg-Marquardt algorithm.
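The abstract does not specify the paper's update rule, but the following minimal sketch illustrates the general idea of a damped Gauss-Newton (Levenberg-Marquardt-style) step that handles Jacobian rank deficiency with a truncated SVD, discarding directions associated with redundant parameters. All function names, the test problem, and the parameter choices (`lam`, `tol`) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def truncated_gn_step(residual, jacobian, x, lam=1e-3, tol=1e-8):
    """One damped Gauss-Newton step (illustrative, not the paper's method).

    Singular values below tol * (largest singular value) are treated as
    zero, so directions tied to redundant parameters are dropped.
    """
    r = residual(x)          # residual vector, shape (m,)
    J = jacobian(x)          # Jacobian, shape (m, n)
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tol * s[0]    # mask out near-zero singular values
    # Damped least squares: minimize ||J p + r||^2 + lam ||p||^2
    d = s[keep] / (s[keep] ** 2 + lam)
    p = -Vt[keep].T @ (d * (U[:, keep].T @ r))
    return x + p

# Hypothetical usage: fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])

x = np.array([1.0, -1.0])
for _ in range(50):
    x = truncated_gn_step(residual, jacobian, x)
print(x)  # approaches [2.0, -1.5]
```

A production trust region method would additionally adapt the damping or step radius based on the ratio of actual to predicted reduction; the fixed `lam` here keeps the sketch short.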
