Abstract

The least-squares method can be used to approximate an eigenvector for a matrix when only an approximation is known for the corresponding eigenvalue. In this paper, this technique is analyzed and error estimates are established proving that if the error in the eigenvalue is sufficiently small, then the error in the approximate eigenvector produced by the least-squares method is also small. Also reported are some empirical results based on using the algorithm.

1. Notation. We use upper case, bold letters to represent complex matrices, and lower case bold letters to represent vectors in C^k. We consider a vector v to be a column, and so its adjoint v* is a row vector. Hence v1*v2 yields the complex dot product v2 · v1. The vector e_i is the vector having 1 in its ith coordinate and 0 elsewhere, and I_n is the n × n identity matrix. We use ‖v‖ to represent the 2-norm on vectors; that is, ‖v‖^2 = v*v. Also, |||F||| represents the spectral matrix norm of a square matrix F, and so ‖Fv‖ ≤ |||F||| ‖v‖ for every vector v. Finally, for an n × n Hermitian matrix F, we will write each of the n (not necessarily distinct) real eigenvalues of F as λ_i(F), where λ1(F) ≤ λ2(F) ≤ ··· ≤ λn(F).

2. The Method and Our Goal. Suppose M is an arbitrary n × n matrix having λ as an eigenvalue, and let A = λI_n − M. Generally, one can find an eigenvector for M corresponding to λ by solving the homogeneous system Ax = 0. However, the computation of an eigenvalue does not always result in an exact answer, either because a numerical technique was used for its computation, or due to roundoff error. Suppose λ' is the approximate, known value for the actual eigenvalue λ. If λ ≠ λ', then the known matrix K = λ'I_n − M is most likely nonsingular, and so the homogeneous system Kx = 0 has only the trivial solution. This situation occurs frequently when attempting to solve small eigenvalue problems on calculators. Let ε = λ' − λ. Then K = A + εI_n.
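The effect described above, where the perturbed matrix K = λ'I_n − M loses the nontrivial kernel that A = λI_n − M has, can be illustrated numerically. The following is a minimal sketch (the 2 × 2 matrix and the size of the perturbation ε are illustrative choices, not taken from the paper):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, eigenvalues 1 and 3
lam_exact = 3.0                     # an exact eigenvalue of M
lam_approx = 3.0 + 1e-8             # lambda' = lambda + eps, eps = 1e-8

A = lam_exact * np.eye(2) - M       # A = lambda I - M, singular by construction
K = lam_approx * np.eye(2) - M      # K = A + eps*I, (most likely) nonsingular

print(np.linalg.det(A))             # essentially zero
print(np.linalg.det(K))             # small but nonzero, so Kx = 0 forces x = 0
```

Because det(K) ≠ 0, solving Kx = 0 directly returns only the trivial solution, which is exactly the difficulty the least-squares construction in the next section is meant to overcome.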
Our goal is to approximate a vector in the kernel of A when only the matrix K is known. We assume that M has no other eigenvalues within |ε| units of λ, so that K is nonsingular and thus has trivial kernel. Let u be a unit vector in ker(A). Although we know that u exists, u is unknown. Let v be an arbitrarily chosen unit vector in C^n such that w = v*u ≠ 0. In practice, when choosing v, the value of w is unknown, but if v is chosen at random, the probability that w = 0 is zero. Let B be the (n + 1) × n matrix formed by appending the row v*
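The text is cut off before the least-squares system is fully specified, but the setup so far (append the row v* to K, forming the (n + 1) × n matrix B) suggests the standard construction: solve the overdetermined system Bx ≈ e_{n+1} in the least-squares sense, so that Kx ≈ 0 while v*x ≈ 1 pins down a nontrivial solution. The sketch below assumes that right-hand side; the function name and the random choice of v are illustrative:

```python
import numpy as np

def approx_eigenvector(M, lam_approx, rng=None):
    """Least-squares approximation of an eigenvector of M, given an
    approximate eigenvalue lam_approx.

    The right-hand side e_{n+1} is an assumption: the source text is
    truncated before stating it.
    """
    n = M.shape[0]
    rng = np.random.default_rng(rng)
    K = lam_approx * np.eye(n) - M           # K = lambda' I_n - M
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)                   # random unit vector; v*u != 0 a.s.
    B = np.vstack([K, v.conj()])             # append the row v* to K
    rhs = np.zeros(n + 1, dtype=complex)
    rhs[n] = 1.0                             # want K x ~ 0 and v* x ~ 1
    x, *_ = np.linalg.lstsq(B, rhs, rcond=None)
    return x / np.linalg.norm(x)             # normalize the approximation

# Example: eigenvalue 3 of the matrix below, known only approximately.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = approx_eigenvector(M, 3.0 + 1e-6, rng=0)
print(np.linalg.norm(M @ x - 3.0 * x))       # small residual ||Mx - 3x||
```

The condition v*u ≠ 0 matters because the least-squares solution is, roughly, a multiple of u plus an error term; if v were orthogonal to u, the constraint v*x ≈ 1 could not be met along the direction of u.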
