Abstract

We introduce an effective iterative method for solving rectangular linear systems, based on gradients combined with steepest descent optimization. We show that the proposed method converges for any initial vector as long as the coefficient matrix has full column rank. Convergence analysis yields error estimates and the asymptotic convergence rate of the algorithm, which is governed by the factor √(1 − κ⁻²), where κ is the condition number of the coefficient matrix. Moreover, we apply the proposed method to a sparse linear system arising from a discretization of the one-dimensional Poisson equation. Numerical simulations illustrate the capability and effectiveness of the proposed method in comparison with well-known and recent methods.
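The core idea described above can be sketched as steepest descent on the least-squares objective f(x) = ½‖Ax − b‖², where the step size at each iteration is chosen to minimize the error along the gradient direction. The sketch below is a minimal illustration of that idea, not the paper's exact TauOpt algorithm; the function name and stopping rule are our own assumptions.

```python
import numpy as np

def steepest_descent_ls(A, b, x0, tol=1e-10, max_iter=10_000):
    """Steepest descent on f(x) = 0.5*||Ax - b||^2.

    Illustrative sketch only (not the paper's TauOpt method).
    Assumes A has full column rank, so f has a unique minimizer.
    """
    x = x0.astype(float).copy()
    for k in range(max_iter):
        g = A.T @ (A @ x - b)       # gradient of f at the current iterate
        if np.linalg.norm(g) < tol:
            return x, k
        Ag = A @ g
        tau = (g @ g) / (Ag @ Ag)   # exact line search: optimal factor along -g
        x -= tau * g
    return x, max_iter

# Overdetermined (rectangular) 3x2 system with a known exact solution
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
x, iters = steepest_descent_ls(A, b, np.zeros(2))
```

The optimal step τₖ = ‖gₖ‖²/‖Agₖ‖² comes from minimizing f(xₖ − τgₖ) over τ, which is the "sequence of optimal convergent factors" idea highlighted below.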

Highlights

  • Linear systems play an essential role in modern applied mathematics, including numerical analysis, statistics, mathematical physics/biology, and engineering

  • We introduce a new method for solving rectangular linear systems based on gradients, and we provide a sequence of optimal convergent factors that minimizes the error at each iteration

  • We compare TauOpt, our proposed algorithm, with the existing algorithms presented in the introduction: the gradient-based iterative (GI) method (Proposition 1.1), the least-squares iterative (LS) method (Proposition 1.2), BB1 (5), and BB2 (6)


Summary

Introduction

Linear systems play an essential role in modern applied mathematics, including numerical analysis, statistics, mathematical physics/biology, and engineering. Many researchers have developed gradient-based iterative algorithms for solving matrix equations based on the techniques of hierarchical identification and minimization of associated norm-error functions; see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Convergence analysis for such algorithms relies on the Frobenius norm ‖·‖_F, the spectral norm ‖·‖_2, and the condition number, defined for each A ∈ M_{m,n}(ℝ) by ‖A‖_F = √(tr(AᵀA)), ‖A‖_2 = σ_max(A), and κ(A) = σ_max(A)/σ_min(A), respectively, where σ_max(A) and σ_min(A) denote the largest and smallest singular values of A. We propose a new gradient-based iterative algorithm with a sequence of optimal convergent factors for solving rectangular linear systems. We conclude that Algorithm 2.1 gives the fastest convergence.
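These three quantities, and the asymptotic rate √(1 − κ⁻²) quoted in the abstract, can all be computed from the singular values of A. A small NumPy check (the matrix here is an arbitrary example of ours):

```python
import numpy as np

# Example rectangular matrix (our own illustration, not from the paper)
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

fro = np.linalg.norm(A, 'fro')             # ||A||_F = sqrt(trace(A^T A))
spec = np.linalg.norm(A, 2)                # ||A||_2 = largest singular value
sigma = np.linalg.svd(A, compute_uv=False) # singular values, descending
kappa = sigma[0] / sigma[-1]               # kappa(A) = sigma_max / sigma_min

# Asymptotic convergence rate from the abstract: sqrt(1 - kappa^{-2})
rate = np.sqrt(1.0 - kappa**-2)
```

Since κ ≥ 1 for any full-column-rank A, the rate √(1 − κ⁻²) always lies in [0, 1): well-conditioned systems (κ near 1) converge fast, while ill-conditioned ones (large κ) have a rate near 1 and converge slowly.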

Method
Conclusion