Abstract

For separable nonlinear least squares models, a variable projection algorithm based on matrix factorization is studied, and the ill-conditioning of the model parameters is considered in the solution process. When the linear parameters are estimated, the Tikhonov regularization method is used to solve the ill-conditioned problems. When the nonlinear parameters are estimated, QR decomposition, Gram–Schmidt orthogonalization, and SVD are applied to the Jacobian matrix. These methods are then compared with the method in which the variables are not separated. Numerical experiments are performed using RBF neural network data, and the experimental results are analyzed in terms of both qualitative and quantitative indicators. The results show that the proposed algorithms are effective and robust.
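The Tikhonov-regularized estimate of the linear parameters described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the regularization parameter `lam`, and the use of the regularized normal equations are assumptions for the example.

```python
import numpy as np

def tikhonov_linear_params(Phi, y, lam=1e-3):
    """Estimate linear parameters a in y ~ Phi @ a with Tikhonov regularization.

    Solves min_a ||Phi a - y||^2 + lam ||a||^2 via the regularized
    normal equations (Phi^T Phi + lam I) a = Phi^T y, which remain
    solvable even when Phi^T Phi is ill-conditioned.
    """
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)
```

For a well-conditioned basis matrix and a small `lam`, the result is close to the ordinary least squares solution; the regularization term matters precisely when the columns of `Phi` are nearly dependent.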

Highlights

  • Regarding the SNLLS problem, there are few studies on the potential ill-conditioning of the parameters to be estimated

  • The Levenberg–Marquardt (LM) algorithm is used to estimate the nonlinear parameters, and the Jacobian matrix in the algorithm is in the form given by Kaufman

  • The singular value decomposition (SVD), QR decomposition, and Gram–Schmidt orthogonalization (GSO) decomposition are applied to the matrix to improve the iteration efficiency
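The three factorizations named in the highlights can each be used to solve a linear least squares subproblem. The sketch below is illustrative only (the helper names are mine, and NumPy's built-in routines stand in for the paper's formulations): QR and SVD via `numpy.linalg`, and a thin QR built by modified Gram–Schmidt orthogonalization.

```python
import numpy as np

def qr_solve(A, b):
    """Least squares via QR: A = Q R, then solve R x = Q^T b."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

def svd_solve(A, b, rcond=1e-12):
    """Least squares via SVD: A = U S V^T, x = V S^+ U^T b,
    truncating singular values below rcond * s_max for stability."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

def gso_qr(A):
    """Thin QR factorization via modified Gram-Schmidt orthogonalization."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):            # subtract projections onto earlier columns
            R[i, j] = Q[:, i] @ v
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

All three routes give the same minimizer on a full-rank problem; they differ in cost and in how gracefully they handle near rank deficiency, which is why the paper compares them.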


Summary

Journal of Mathematics

Regarding the SNLLS problem, there are few studies on the potential ill-conditioning of the parameters to be estimated. When the linear parameters are estimated, the Tikhonov regularization method is used to solve the potential ill-conditioned problems. The LM iterative algorithm is used to estimate the nonlinear parameters, with the iteration θ_(k+1) = θ_k + β_k d_k. By combining the linear and nonlinear parameter estimation methods, three algorithms are obtained. The Jacobian matrix takes three forms, J_SVD, J_GSO, and J_QR, for the SVD, GSO, and QR methods, respectively. The first 200 points were used to train the model and estimate the linear and nonlinear parameters. The results show that these methods yield accurate parameter estimates and prediction curves. As the number of prediction points increases, the three methods exhibit a higher error growth rate in the early stages and a lower error growth rate in the later stages.
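The LM iteration θ_(k+1) = θ_k + β_k d_k can be sketched as below. This is a generic Levenberg–Marquardt loop, not the paper's algorithm: the step length is fixed at β_k = 1, the Jacobian is approximated by forward differences rather than the Kaufman form, and the damping update rule is a common simple choice.

```python
import numpy as np

def lm_fit(residual, theta0, max_iter=50, mu=1e-2, tol=1e-10):
    """Minimal Levenberg-Marquardt loop: theta_{k+1} = theta_k + beta_k d_k,
    with step d_k = -(J^T J + mu I)^{-1} J^T r and beta_k = 1.

    residual: function theta -> residual vector r(theta).
    The Jacobian is approximated by forward differences (illustration only).
    """
    theta = np.asarray(theta0, dtype=float)
    r = residual(theta)
    for _ in range(max_iter):
        eps = 1e-7
        J = np.column_stack([
            (residual(theta + eps * e) - r) / eps
            for e in np.eye(theta.size)
        ])
        d = -np.linalg.solve(J.T @ J + mu * np.eye(theta.size), J.T @ r)
        theta_new = theta + d
        r_new = residual(theta_new)
        if r_new @ r_new < r @ r:
            theta, r = theta_new, r_new   # accept step, relax damping
            mu *= 0.5
        else:
            mu *= 2.0                     # reject step, increase damping
        if np.linalg.norm(d) < tol:
            break
    return theta
```

In the variable projection setting, `residual` would be the projected residual after the linear parameters have been eliminated, so only the nonlinear parameters θ are iterated.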

