Abstract
We propose solving nonlinear systems of equations by function optimization, and we give an optimal algorithm based on a special canonical form of gradient descent. The algorithm applies under certain assumptions on the function to be optimized: the norm of the Hessian must be bounded from above, while the norm of the gradient must be bounded from below. Due to its intrinsic structure, the algorithm is particularly appealing for parallel implementation. As a particular case, more specific results are given for linear systems. We prove that reaching a solution with precision ε takes Θ(n²k² log(k/ε)), where k is the condition number of A and n the problem dimension. Related results hold for systems of quadratic equations, for which an estimate of the required bounds can be derived. Finally, we report numerical results that establish the actual computational burden of the proposed method and assess its performance against classical algorithms for solving linear and quadratic equations.
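To make the underlying idea concrete, below is a minimal sketch of the optimization reformulation for the linear case: the system Ax = b is recast as minimizing f(x) = ½‖Ax − b‖², which plain gradient descent then solves. This is an illustration under standard assumptions, not the paper's canonical-form algorithm; the fixed step size 1/L (with L an upper bound on the Hessian norm, as the abstract requires) and the stopping rule on the gradient norm are choices made here for the sketch. Each iteration costs Θ(n²), and the iteration count scales with the squared condition number, which is the flavor of the Θ(n²k² log(k/ε)) bound above.

```python
import numpy as np

def gradient_descent_linear(A, b, eps=1e-8, max_iter=100_000):
    """Gradient descent on f(x) = 0.5 * ||Ax - b||^2 (illustrative sketch).

    Each step needs two matrix-vector products, i.e. Theta(n^2) work,
    and convergence speed degrades with the condition number of A.
    """
    n = A.shape[1]
    x = np.zeros(n)
    # L = ||A||_2^2 bounds the Hessian A^T A from above, giving a safe
    # fixed step size 1/L (an assumption of this sketch).
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)          # gradient of f at x
        if np.linalg.norm(grad) <= eps:   # stop when the gradient is small
            break
        x -= grad / L
    return x

# Usage: a small, well-conditioned system.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
x = gradient_descent_linear(A, b)
print(np.linalg.norm(A @ x - b))  # residual close to zero
```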