Abstract

A numerical method for the symmetric matrix eigenvalue problem is developed by reducing it to a sequence of matrix–matrix multiplications. For matrices of size n, the number of such multiplications is on the order of $\log_2 n$. On high performance parallel computers it is important to minimize memory references, since the movement of data between memory and registers can dominate overall performance. Matrix–matrix multiplication is more efficient than matrix–vector or vector–vector operations because it performs $O(n^3)$ floating point operations while generating only $O(n^2)$ data movements. Traditional methods based on reduction to tridiagonal form require $O(n^3)$ data movements, while our method requires $O(n^2 \log_2 n)$. Asymptotically, there are fast algorithms for matrix multiplication that require fewer than $O(n^3)$ floating point operations; one example is the $O(n^{2.376})$ method of Coppersmith and Winograd [Proc. 19th Ann. ACM Symp. Theory Comput., 1987, pp. 1–6]. Therefore, in principle, our method for the symmetric matrix eigenvalue problem requires only $O(n^{2.376} \log_2 n)$ operations.
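
The cost argument above can be made concrete with a small sketch. The abstract does not describe the paper's actual multiplication-based iteration, so the Python/NumPy example below uses repeated matrix squaring purely as a hypothetical stand-in that also consumes exactly $\log_2 n$ matrix–matrix products; the function names and the $2n^3$-flop / $3n^2$-word constants per product are assumptions of this sketch, not figures from the paper.

```python
import numpy as np

def cost_model(n):
    """Tally the cost model from the abstract: each n x n matrix product
    costs O(n^3) flops but only O(n^2) data movement, and the method
    needs on the order of log2(n) such products.
    (The 2n^3 and 3n^2 constants are idealized assumptions.)
    """
    num_mults = int(np.ceil(np.log2(n)))   # O(log2 n) products
    flops = num_mults * 2 * n**3           # ~2n^3 flops per n x n product
    traffic = num_mults * 3 * n**2         # idealized: read A, read B, write C
    return num_mults, flops, traffic

def repeated_squaring(A, k):
    """Compute A^(2^k) with exactly k matrix-matrix multiplications.
    NOTE: an illustrative stand-in for a log2(n)-multiplication scheme,
    not the iteration used in the paper.
    """
    B = A.copy()
    for _ in range(k):
        B = B @ B  # one O(n^3)-flop, O(n^2)-traffic product
    return B

if __name__ == "__main__":
    n = 512
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                      # symmetric test matrix
    A /= np.linalg.norm(A, 2)              # scale so repeated powers stay bounded
    mults, flops, traffic = cost_model(n)
    B = repeated_squaring(A, mults)        # mults = ceil(log2 n) products
    print(f"n={n}: {mults} products, ~{flops:.2e} flops, ~{traffic:.2e} words moved")
    # Tridiagonalization-based methods move O(n^3) words instead:
    print(f"tridiagonal-reduction traffic ~ {n**3:.2e} words")
```

For n = 512 the sketch counts 9 matrix products, so the data movement scales like $3n^2 \log_2 n \approx 7 \times 10^6$ words versus $n^3 \approx 1.3 \times 10^8$ for a tridiagonalization-based reduction, which is the ratio the abstract's $O(n^2 \log_2 n)$ versus $O(n^3)$ comparison refers to.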
