Abstract

Computing the eigenvalues of matrices is a common problem in scientific computation. As the order of matrices increases, traditional sequential algorithms can no longer meet computation-time requirements. Although cluster systems can compute the eigenvalues of large-scale matrices quickly, they increase equipment cost and power consumption. This paper proposes a parallel algorithm, Jacobi on GPU, implemented with CUDA (Compute Unified Device Architecture) on a GPU (Graphics Processing Unit) to compute the eigenvalues of symmetric matrices. The experimental environment consists of an Intel Core i5-760 quad-core CPU, an NVIDIA GeForce GTX 460 graphics card, and 64-bit Windows 7. For a 10240×10240 matrix and 10000 iterations, the speedup ratio is 13.71. As the matrix size increases, the speedup ratio increases correspondingly; moreover, the speedup ratio remains stable as the number of iterations increases. For an 8192×8192 matrix with 1000, 2000, 4000, 8000, and 16000 iterations, the standard deviation of the speedup ratio is 0.1161. The experimental results show that the Jacobi on GPU algorithm runs substantially faster than the traditional sequential algorithm, with speedup ratios of 3.02 to 13.71. Therefore, the time required to compute matrix eigenvalues is reduced significantly compared with traditional sequential algorithms.
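The abstract does not include the kernel itself, but the core parallel step of the classical Jacobi method is easy to illustrate: every thread updates one row/column pair of the symmetric matrix under a single plane rotation. The sketch below is a minimal CUDA illustration under our own assumptions (the names jacobi_rotate and apply_rotation, row-major single-precision storage, one rotation per kernel launch are illustrative); it is not the authors' implementation.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Apply one Jacobi rotation in the (p, q) plane to a symmetric n x n
// matrix A stored row-major in device memory. c = cos(theta), s = sin(theta).
__global__ void jacobi_rotate(float *A, int n, int p, int q, float c, float s)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n) return;

    if (k == p) {
        // One thread updates the 2x2 pivot block; the chosen angle
        // annihilates the off-diagonal pivot A[p][q].
        float app = A[p * n + p], aqq = A[q * n + q], apq = A[p * n + q];
        A[p * n + p] = c * c * app - 2.0f * s * c * apq + s * s * aqq;
        A[q * n + q] = s * s * app + 2.0f * s * c * apq + c * c * aqq;
        A[p * n + q] = 0.0f;
        A[q * n + p] = 0.0f;
        return;
    }
    if (k == q) return;

    // Every other thread rotates its own row/column against the (p, q) plane.
    float akp = A[k * n + p];
    float akq = A[k * n + q];
    A[k * n + p] = c * akp - s * akq;
    A[p * n + k] = A[k * n + p];   // keep the matrix symmetric
    A[k * n + q] = s * akp + c * akq;
    A[q * n + k] = A[k * n + q];
}

// Host side (illustrative): read back the pivot entries, form the angle from
// tan(2*theta) = 2*a_pq / (a_pp - a_qq), then launch one rotation.
void apply_rotation(float *d_A, int n, int p, int q)
{
    float app, aqq, apq;
    cudaMemcpy(&app, d_A + p * n + p, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&aqq, d_A + q * n + q, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&apq, d_A + p * n + q, sizeof(float), cudaMemcpyDeviceToHost);

    float theta = 0.5f * atan2f(2.0f * apq, app - aqq);
    float c = cosf(theta), s = sinf(theta);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    jacobi_rotate<<<blocks, threads>>>(d_A, n, p, q, c, s);
    cudaDeviceSynchronize();
}
```

A full sweep would call apply_rotation over all pivot pairs (p, q) until the off-diagonal norm falls below a tolerance; how the paper selects pivots, batches rotations, or uses shared memory is not stated in the abstract.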
