In this work, an improved and scalable implementation of Cuppen’s algorithm for diagonalizing symmetric tridiagonal matrices is presented. The approach uses a hybrid-heterogeneous parallelization technique that exploits both GPUs and CPUs in a distributed hardware architecture. Cuppen’s algorithm is a powerful tool in many scientific and engineering applications: it computes the eigenvalues and eigenvectors of symmetric tridiagonal matrices efficiently and in a numerically stable way, making it a key component of matrix diagonalization in methods such as Density Functional Theory (DFT) and spectral clustering. One of the challenges in parallelizing algorithms for GPUs is their limited memory capacity. We overcome this limitation by using multiple nodes equipped with both CPUs and GPUs: subproblems that fit within the memory of each device are solved in parallel and then combined to obtain the complete solution. The proposed hybrid-heterogeneous approach outperforms state-of-the-art libraries while maintaining a high degree of accuracy in the orthogonality and quality of the computed eigenvectors. Furthermore, the sequential version of the algorithm implemented with our approach also shows superior performance and potential for practical use. Our experiments verify that the implementation scales by 2× when two graphics cards are used in the same node. Notably, symmetric tridiagonal eigenvalue solvers are fundamental to solving more general eigenvalue problems, and the divide-and-conquer approach employed in this implementation can be extended to singular value solvers. Given the wide range of eigenvalue problems encountered in scientific and engineering domains, this work advances computational methods for efficient and accurate matrix diagonalization.
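To make the divide-and-combine structure described above concrete, the following is a minimal, sequential sketch of Cuppen’s divide-and-conquer scheme in Python/NumPy. It is illustrative only and is not the authors’ hybrid CPU/GPU implementation: the function name `cuppen_eigh` is hypothetical, and the merge step diagonalizes the rank-one-modified system densely with `numpy.linalg.eigh` instead of solving the secular equation as a production solver would.

```python
import numpy as np

def cuppen_eigh(d, e):
    """Sketch of Cuppen's divide-and-conquer for a symmetric tridiagonal
    matrix with diagonal d (length n) and off-diagonal e (length n-1).
    Returns (eigenvalues, eigenvectors). Illustrative: the merge uses a
    dense eigensolver rather than the secular equation."""
    n = len(d)
    if n == 1:
        return d.copy(), np.ones((1, 1))

    # Divide: tear T at position m via a rank-one modification
    #   T = diag(T1, T2) + beta * v v^T, with v having ones at rows m-1, m
    m = n // 2
    beta = e[m - 1]
    d1 = d[:m].copy(); d1[-1] -= beta
    d2 = d[m:].copy(); d2[0] -= beta

    # Conquer: solve the two independent subproblems (these are the pieces
    # that fit in the memory of each device and can run in parallel)
    lam1, Q1 = cuppen_eigh(d1, e[:m - 1])
    lam2, Q2 = cuppen_eigh(d2, e[m:])

    # Combine: eigendecompose D + beta * z z^T, where z stacks the last row
    # of Q1 and the first row of Q2 (done densely here for clarity)
    D = np.concatenate([lam1, lam2])
    z = np.concatenate([Q1[-1, :], Q2[0, :]])
    lam, W = np.linalg.eigh(np.diag(D) + beta * np.outer(z, z))

    # Back-transform: eigenvectors of T are block-diag(Q1, Q2) @ W
    Q = np.zeros((n, n))
    Q[:m, :] = Q1 @ W[:m, :]
    Q[m:, :] = Q2 @ W[m:, :]
    return lam, Q
```

In the hybrid-heterogeneous setting described in the abstract, the two recursive subproblems are distributed across CPU and GPU devices and the combine step stitches their partial spectra back together, which is what allows problems larger than a single GPU’s memory to be solved.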