Abstract

The Jacobi–Davidson (JD) iteration method is very efficient for solving Hermitian eigenvalue problems. If the correction equation involved in the Jacobi–Davidson iteration is solved accurately, the simplified Jacobi–Davidson iteration is equivalent to the Rayleigh quotient iteration (RQI), which achieves a cubic convergence rate locally. When the involved linear system is solved by an iterative method, these two methods are also equivalent. In this paper, we present a convergence analysis of the simplified Jacobi–Davidson method and an estimate of the number of iterations required for the inner correction equation. Furthermore, the convergence factor shows how the accuracy of the inner iteration controls the outer iteration.
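As a quick illustration of this equivalence, the following minimal sketch (assuming only numpy and a random Hermitian test matrix; it is not the paper's implementation) performs one Rayleigh quotient step and one simplified JD step in which the projected correction equation is solved exactly, and checks that the two updates span the same direction.

```python
import numpy as np

# Random Hermitian test matrix (an assumption for illustration only).
rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

def rqi_step(A, u):
    """One Rayleigh quotient iteration step: w = (A - theta*I)^{-1} u."""
    theta = np.real(u.conj() @ A @ u)               # Rayleigh quotient
    w = np.linalg.solve(A - theta * np.eye(len(u)), u)
    return w / np.linalg.norm(w)

def jd_step_exact(A, u):
    """One simplified JD step; the projected correction equation
    (I - uu*)(A - theta*I)(I - uu*) t = -r, t perpendicular to u,
    is solved exactly."""
    n = len(u)
    theta = np.real(u.conj() @ A @ u)
    r = A @ u - theta * u                           # eigenvalue residual
    P = np.eye(n) - np.outer(u, u.conj())           # projector onto u-perp
    M = P @ (A - theta * np.eye(n)) @ P
    # M is singular on span{u}; the minimum-norm least-squares solution
    # lies in u-perp and solves the correction equation exactly.
    t, *_ = np.linalg.lstsq(M, -r, rcond=None)
    v = u + t
    return v / np.linalg.norm(v)

u0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u0 /= np.linalg.norm(u0)
u_rqi, u_jd = rqi_step(A, u0), jd_step_exact(A, u0)
print(abs(u_rqi.conj() @ u_jd))                     # close to 1.0: same direction
```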

Highlights

  • Let A be a large, sparse Hermitian matrix. Then, we want to compute the smallest eigenvalue λ of A and the associated eigenvector x, i.e., Ax = λx, with ‖x‖ = 1. (1) Here and in the following, ‖ · ‖ denotes the Euclidean norm of a vector and the induced norm of a matrix.

  • The main framework of subspace methods is to generate a sequence of expanding subspaces Vk, which contain more and more information about the desired eigenvalue or eigenvector of the matrix A. The central task of this approach, accomplished by the Rayleigh–Ritz procedure, is to extract the approximation to the desired eigenvalue or eigenvector from the projection subspace (a small sketch of this extraction follows this list).

  • In each step of the iteration, an ill-conditioned linear system of equations must be solved exactly, which becomes expensive as the approximation gets close to the target eigenvalue. The idea of replacing the exact solution with a cheaper approximate solution leads to the inexact Rayleigh quotient iteration (IRQI) [8, 9]; this replacement, however, may destroy the local convergence property of the RQI.
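The Rayleigh–Ritz extraction mentioned above can be sketched as follows. This is a hypothetical numpy helper, not the paper's code: given an orthonormal basis V of the current subspace, the small projected matrix V*AV is diagonalized and the Ritz pair closest to the target is lifted back to the full space.

```python
import numpy as np

def rayleigh_ritz_smallest(A, V):
    """Extract the Ritz pair approximating the smallest eigenpair of A from
    the subspace spanned by the columns of V (orthonormal columns assumed)."""
    H = V.conj().T @ A @ V                  # small projected matrix V*AV
    evals, evecs = np.linalg.eigh(H)        # eigenpairs of the small problem
    theta = evals[0]                        # smallest Ritz value
    x = V @ evecs[:, 0]                     # corresponding Ritz vector
    return theta, x

# Tiny usage example with a random symmetric matrix and a random subspace.
rng = np.random.default_rng(1)
n, k = 100, 10
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
V, _ = np.linalg.qr(rng.standard_normal((n, k)))    # orthonormal basis of V_k
theta, x = rayleigh_ritz_smallest(A, V)
print(theta, np.linalg.norm(A @ x - theta * x))     # Ritz value, residual norm
```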


Summary

Introduction

Let A be a large, sparse Hermitian matrix. Then, we want to compute the smallest eigenvalue λ of A and the associated eigenvector x, i.e., Ax = λx, with ‖x‖ = 1. In the last part of this paper, we give the convergence analysis in terms of the residual norms, from which we can see that these results are asymptotically identical to those derived in the preceding section; we also give an analysis of the iteration number of the inner linear system. Bai and Miao [11] presented a convergence analysis of the JD iteration method and proved that it attains quadratic convergence locally when the involved correction equation is solved by a Krylov subspace method, and a cubic convergence rate when the correction equation is solved to a prescribed precision proportional to the norm of the current residual vector.
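To make the role of such a prescribed precision concrete, here is a minimal inner–outer sketch. It assumes numpy, a hand-rolled conjugate gradient inner solver, and a starting vector already in the local convergence regime; the function name jd_smallest and the stopping rule eta·‖r‖ are illustrative choices, not the algorithm of [11] or of this paper.

```python
import numpy as np

def jd_smallest(A, u0, outer_tol=1e-10, eta=0.1, max_outer=50, max_inner=200):
    """Simplified JD iteration for the smallest eigenpair of a Hermitian A.
    The projected correction equation is solved by plain CG, stopped once the
    inner residual drops below eta * ||outer residual|| (an illustrative rule)."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(max_outer):
        theta = np.real(u.conj() @ A @ u)           # Rayleigh quotient
        r = A @ u - theta * u                       # outer (eigenvalue) residual
        if np.linalg.norm(r) < outer_tol:
            break

        def op(t):                                  # t -> (I-uu*)(A-theta*I)(I-uu*) t
            t = t - u * (u.conj() @ t)
            y = A @ t - theta * t
            return y - u * (u.conj() @ y)

        # CG on the projected system; it is positive definite on u-perp once
        # theta lies below the rest of the spectrum (local convergence regime).
        t = np.zeros_like(u)
        res = -r                                    # right-hand side, orthogonal to u
        p = res.copy()
        inner_tol = eta * np.linalg.norm(r)
        for _ in range(max_inner):
            if np.linalg.norm(res) <= inner_tol:
                break
            Ap = op(p)
            alpha = (res.conj() @ res) / (p.conj() @ Ap)
            t = t + alpha * p
            res_new = res - alpha * Ap
            beta = (res_new.conj() @ res_new) / (res.conj() @ res)
            res, p = res_new, res_new + beta * p
        u = u + t
        u = u / np.linalg.norm(u)
    theta = np.real(u.conj() @ A @ u)
    return theta, u

# Usage: start near the true eigenvector so the local analysis applies.
rng = np.random.default_rng(2)
n = 300
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
w, X = np.linalg.eigh(A)
theta, u = jd_smallest(A, X[:, 0] + 0.05 * rng.standard_normal(n))
print(theta - w[0])                                 # eigenvalue error, close to 0
```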

Preliminaries
Convergence Analysis of the JD Iteration
Estimate for the Iteration Number
Findings
Concluding Remarks