Abstract

In this paper, the convergence analysis of the conventional conjugate gradient method is reviewed, and the convergence analysis of the modified conjugate gradient method (CGM) is carried out with our extension of preconditioning the algorithm. Convergence of the algorithm is a function of the condition number of M⁻¹A. This work broadens our understanding that the modified CGM yields the exact result after n iterations, and further proves that the CGM algorithm is quicker if there are duplicated eigenvalues. Given infinite floating-point precision, the number of iterations required to compute an exact solution is at most the number of distinct eigenvalues. It was discovered that the modified CGM algorithm converges more quickly when eigenvalues are clustered together than when they are irregularly distributed over a given interval. The effectiveness of a preconditioner is determined by the condition number of the matrix and occasionally by its clustering of eigenvalues. For large-scale applications, CGM should always be used with a preconditioner to improve convergence.

KEYWORDS: Convergence, Conjugate Gradient, eigenvalue, preconditioning.
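To make the distinct-eigenvalue claim concrete, the following is a minimal sketch, not taken from the paper: plain CG applied to a synthetic symmetric positive definite matrix whose spectrum contains only three distinct values, so the residual should drop to near machine precision after three iterations. The matrix construction, the eigenvalues {1, 4, 9}, and the tolerance are illustrative assumptions.

```python
# Illustrative sketch: CG terminates in as many iterations as there are
# distinct eigenvalues (here 3), up to floating-point round-off.
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Build A = Q diag(lam) Q^T, SPD with eigenvalues drawn from {1, 4, 9}.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = rng.choice([1.0, 4.0, 9.0], size=n)
A = Q @ np.diag(lam) @ Q.T
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x            # initial residual (x = 0)
d = r.copy()             # initial search direction
for i in range(n):
    Ad = A @ d
    rr = r @ r
    alpha = rr / (d @ Ad)          # exact line-search step length
    x += alpha * d
    r -= alpha * Ad
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        print(f"converged in {i + 1} iterations")   # expect 3
        break
    d = r + ((r @ r) / rr) * d     # beta = r_new.r_new / r_old.r_old
```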

Highlights

  • Optimization theory is aimed at solving the problem under investigation with a high degree of precision and within a highly restrictive operation time, so as to minimize computing cost

  • We find that the modified conjugate gradient method (CGM) converges more quickly when eigenvalues are clustered together than when they are irregularly distributed between the minimum and maximum, because it is easier for the algorithm to choose a polynomial that makes equation (10) small

  • The problem remains of finding a preconditioner that approximates A well enough to improve convergence enough to make up for the cost of computing the product M⁻¹rᵢ once per iteration; see the sketch after this list
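As a hedged illustration of that cost, the sketch below implements preconditioned CG with a Jacobi (diagonal) preconditioner M = diag(A). The preconditioner choice, the function name pcg, and the tolerance are assumptions for illustration, not the paper's method; the structural point is that M⁻¹ is applied to the residual exactly once per iteration.

```python
# Sketch of preconditioned CG; the only extra work per iteration is one
# application of M^{-1} to the residual. Jacobi (M = diag(A)) is a cheap
# illustrative choice, not the paper's preconditioner.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    M_inv = 1.0 / np.diag(A)      # Jacobi: M^{-1} applied elementwise
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r                 # preconditioner solve, once per iteration
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r             # the per-iteration M^{-1} r product
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x
```

A stronger preconditioner such as incomplete Cholesky makes each application of M⁻¹ more expensive, but can reduce the iteration count far more, which is exactly the trade-off the highlight describes.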


Summary

INTRODUCTION

Optimization theory is aimed at solving the problem under investigation with a high degree of precision and within a highly restrictive operation time, so as to minimize computing cost. In practice, accumulated floating-point round-off error causes the residual to gradually lose accuracy, and cancellation error causes the search vectors to lose A-orthogonality. This convergence analysis is important because the modified CGM algorithm is used for a large class of problems for which it is not feasible to run even n iterations. We find that the modified CGM converges more quickly when eigenvalues are clustered together than when they are irregularly distributed between the minimum and maximum, because it is easier for the algorithm to choose a polynomial that makes equation (10) small. Setting i = 1 in equation (11), we obtain the convergence result for the steepest descent method of our earlier work (Omorogbe and Osagiede, 2008b), i.e. ‖e₁‖_A ≤ ((κ − 1)/(κ + 1)) ‖e₀‖_A, where κ is the condition number λmax/λmin. Incomplete Cholesky preconditioning is not always stable (Gilbert and Nocedal, 1992).
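The clustering claim can be checked numerically. The sketch below, our illustrative construction rather than the paper's experiment, runs plain CG on two synthetic SPD systems with the same condition number: one whose eigenvalues form two tight clusters and one whose eigenvalues are spread uniformly over [1, 100]. The problem size, spectra, and tolerance are assumptions.

```python
# Illustrative comparison (not from the paper): CG iteration counts for two
# SPD matrices with the same condition number but differently shaped spectra.
import numpy as np

def cg_iters(lam, tol=1e-10, seed=0):
    """Run plain CG on A = Q diag(lam) Q^T and return the iteration count."""
    rng = np.random.default_rng(seed)
    n = len(lam)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(lam) @ Q.T
    b = rng.standard_normal(n)
    x, r = np.zeros(n), b.copy()
    d = r.copy()
    for i in range(n):
        Ad = A @ d
        rr = r @ r
        alpha = rr / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return i + 1
        d = r + ((r @ r) / rr) * d
    return n

n = 200
rng = np.random.default_rng(1)
clustered = np.concatenate([1 + 0.01 * rng.random(n // 2),
                            100 - 0.01 * rng.random(n // 2)])
spread = np.linspace(1.0, 100.0, n)
print("clustered spectrum:", cg_iters(clustered))   # few iterations
print("spread spectrum:   ", cg_iters(spread))      # many more iterations
```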


