Abstract
We introduce an effective iterative method for solving rectangular linear systems, based on gradients together with steepest-descent optimization. We show that the proposed method converges from any initial vector as long as the coefficient matrix has full column rank. Convergence analysis yields error estimates and the asymptotic convergence rate of the algorithm, which is governed by the factor √(1 − κ⁻²), where κ is the condition number of the coefficient matrix. Moreover, we apply the proposed method to a sparse linear system arising from a discretization of the one-dimensional Poisson equation. Numerical simulations illustrate the capability and effectiveness of the proposed method in comparison with well-known and recent methods.
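The iteration described above can be sketched as a steepest-descent scheme on the least-squares objective f(x) = ½‖Ax − b‖². This is a generic sketch: the function name, stopping rule, and tolerances are illustrative choices, and the paper's exact sequence of convergent factors may differ from the exact line-search factor used here.

```python
import numpy as np

def steepest_descent_ls(A, b, x0, tol=1e-10, max_iter=10_000):
    """Solve the rectangular system Ax = b in the least-squares sense by
    steepest descent on f(x) = 0.5 * ||Ax - b||^2 (A of full column rank).

    Each step moves along the negative gradient g = A^T (A x - b) with the
    exact line-search factor tau = ||g||^2 / ||A g||^2, which minimizes the
    error along the descent direction at every iteration.
    """
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)          # gradient of the error function
        if np.linalg.norm(g) < tol:    # gradient vanishes at the LS solution
            break
        Ag = A @ g
        tau = (g @ g) / (Ag @ Ag)      # optimal convergent factor for this step
        x -= tau * g
    return x
```

Because the iteration targets the normal equations, it applies to any full-column-rank rectangular system, with the unique least-squares solution as its fixed point.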
Highlights
Linear systems play an essential role in modern applied mathematics, including numerical analysis, statistics, mathematical physics/biology, and engineering.
Proposing the algorithm: we introduce a new method for solving rectangular linear systems based on gradients, and we provide an appropriate sequence of convergent factors that minimizes the error at each iteration.
We report the comparison of TauOpt, our proposed algorithm, with the existing algorithms presented in the introduction, namely gradient-based iterative (GI) (Proposition 1.1), least-squares iterative (LS) (Proposition 1.2), BB1 (5), and BB2 (6).
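For context on the Barzilai–Borwein baselines named above, a hedged sketch follows. It assumes the paper's equations (5) and (6) are the classical BB1 and BB2 step sizes applied to the normal equations of the least-squares problem; the function name and loop structure are illustrative, not the paper's.

```python
import numpy as np

def bb_ls(A, b, x0, variant=1, n_iter=200, tol=1e-12):
    """Barzilai-Borwein iteration for min ||Ax - b||^2, sketching the
    classical BB1/BB2 step sizes on the normal equations."""
    x = x0.astype(float).copy()
    g = A.T @ (A @ x - b)
    if np.linalg.norm(g) < tol:       # x0 already solves the system
        return x
    Ag = A @ g
    tau = (g @ g) / (Ag @ Ag)         # first step: exact line search
    for _ in range(n_iter):
        x_new = x - tau * g
        g_new = A.T @ (A @ x_new - b)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g   # displacement and gradient change
        # BB1 and BB2 step sizes built from the secant pair (s, y)
        tau = (s @ s) / (s @ y) if variant == 1 else (s @ y) / (y @ y)
        x, g = x_new, g_new
    return x
```

Unlike the exact line-search factor, the BB step sizes reuse information from the previous iterate, which often accelerates convergence but makes the error decrease non-monotone.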
Summary
Linear systems play an essential role in modern applied mathematics, including numerical analysis, statistics, mathematical physics/biology, and engineering. Many researchers have developed gradient-based iterative algorithms for solving matrix equations based on the techniques of hierarchical identification and minimization of associated norm-error functions; see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Convergence analysis for such algorithms relies on the Frobenius norm ‖·‖_F, the spectral norm ‖·‖_2, and the condition number, respectively defined for each A ∈ M_{m,n}(ℝ) by ‖A‖_F = √(tr(AᵀA)), ‖A‖_2 = σ_max(A), and κ(A) = σ_max(A)/σ_min(A), where σ_max(A) and σ_min(A) are the largest and smallest singular values of A. We propose a new gradient-based iterative algorithm with a sequence of optimal convergent factors for solving rectangular linear systems. We can conclude that Algorithm 2.1 gives the fastest convergence.
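The one-dimensional Poisson test problem mentioned above can be reproduced with a standard central-difference discretization; the sketch below builds the tridiagonal system, solves it directly for reference, and evaluates the condition number κ and the convergence factor √(1 − κ⁻²) from the abstract. The grid size and right-hand side are illustrative choices, not necessarily those of the paper's experiments.

```python
import numpy as np

def poisson_1d(n):
    """Discretize -u'' = f on (0, 1) with zero boundary values using central
    differences on n interior points; returns the matrix A and the grid."""
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    x = np.linspace(h, 1.0 - h, n)
    return A, x

# Test problem with known solution u(x) = sin(pi x), so f = pi^2 sin(pi x).
A, x = poisson_1d(50)
f = np.pi**2 * np.sin(np.pi * x)
u = np.linalg.solve(A, f)            # reference direct solve
kappa = np.linalg.cond(A)            # condition number of the system
rate = np.sqrt(1.0 - kappa**-2)      # asymptotic convergence factor
```

Because κ grows like h⁻² as the grid is refined, the factor √(1 − κ⁻²) approaches 1, which is exactly the regime where the choice of convergent factors matters most for an iterative solver.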