Abstract

An iterative solver for nonlinear least squares problems with real-valued, sufficiently smooth functions is considered. The algorithm is based on the successive solution of orthogonal projections of the linearized equation onto a sequence of appropriately chosen low-dimensional subspaces, whose bases are constructed using only the first-order derivatives of the function. A technique based on the concept of the limiting stepsize along a normalized direction (developed earlier by the author) is used to guarantee a monotone decrease of the nonlinear residual norm. Under rather mild conditions, convergence to zero is proved for the gradient and residual norms. Results of numerical testing are presented, including not only small standard test problems but also larger and harder examples, such as algebraic problems associated with the canonical decomposition of dense and sparse 3D tensors, as well as finite-difference discretizations of 2D nonlinear boundary value problems for second-order partial differential equations.
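The abstract's core idea (solve a linearized least squares problem at each step, then damp the step so the nonlinear residual norm decreases monotonically) can be illustrated with a simplified sketch. The code below is not the author's method: it replaces the projected subspace solve and the limiting-stepsize rule with a plain Gauss-Newton step plus backtracking, applied to the Rosenbrock residual mentioned in the Results section.

```python
import numpy as np

def rosenbrock_residual(x):
    # Residual vector f(x) for the classic Rosenbrock test problem:
    # minimize ||f(x)||^2 with f = (10*(x2 - x1^2), 1 - x1)
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def rosenbrock_jacobian(x):
    # Only first-order derivatives are used, as in the paper's setting
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])

def damped_gauss_newton(f, J, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton iteration with backtracking that enforces a strict
    monotone decrease of the residual norm (a crude stand-in for the
    limiting-stepsize technique described in the abstract)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        # Linearized least squares subproblem: J(x) p ~ -f(x)
        p, *_ = np.linalg.lstsq(J(x), -r, rcond=None)
        alpha = 1.0
        # Halve the stepsize until the residual norm strictly decreases
        while (np.linalg.norm(f(x + alpha * p)) >= np.linalg.norm(r)
               and alpha > 1e-12):
            alpha *= 0.5
        x = x + alpha * p
    return x

x_star = damped_gauss_newton(rosenbrock_residual, rosenbrock_jacobian,
                             [-1.2, 1.0])
```

Starting from the standard point (-1.2, 1.0), the damped iteration converges to the minimizer (1, 1); the paper's contribution lies in choosing cheaper low-dimensional subspaces and a sharper stepsize bound than this full-Jacobian backtracking sketch.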

Highlights

  • Application areas of nonlinear least squares are numerous and include, for instance, the numerical solution of nonlinear equations arising as discrete models of physical problems, acceleration of neural network learning using Levenberg-Marquardt type algorithms, pattern recognition, signal processing, nonlinear system modeling and control, design of new fast matrix algorithms, etc.

  • A technique based on the concept of the limiting stepsize along a normalized direction is used to guarantee a monotone decrease of the nonlinear residual norm.

  • Results of numerical testing are presented, including small standard test problems as well as larger and harder examples, such as algebraic problems associated with the canonical decomposition of dense and sparse 3D tensors, and finite-difference discretizations of 2D nonlinear boundary value problems for second-order partial differential equations.

Summary

Introduction

Application areas of nonlinear least squares are numerous and include, for instance, the numerical solution of nonlinear equations arising as discrete models of physical problems, acceleration of neural network learning using Levenberg-Marquardt type algorithms, pattern recognition, signal processing, nonlinear system modeling and control, design of new fast matrix algorithms, etc. This explains the need for further development of robust and efficient nonlinear least squares solvers.

Kaporin
General Estimate for Residual Norm
Practical method for choosing the stepsize
Bounding the limiting stepsize α*
Maximizing θ²: Link to the Gauss-Newton Method
Convergence estimates
Description of the Computational Algorithm
Results
Rosenbrock Function
Chained Rosenbrock Function
Approximate Canonical Decomposition of Dense 3D Tensor
Canonical Decomposition of Matrix Multiplication Tensor
Concluding Remarks