Abstract

We propose an efficient implementation of tridiagonalization (TRD) for small matrices on manycore CPUs. Tridiagonalization is a matrix decomposition used as a preprocessing step for eigenvalue computations, and TRD of such small matrices also arises in HPC environments as a subproblem of larger computations. To exploit the large cache memory of recent manycore CPUs, we reconstructed all parts of the implementation around a systematic code generator, achieving performance portability and future extensibility. The flexibility of this system allows us to incorporate the BLAS+X approach, which improves the data reusability of the TRD algorithm, as well as batching. The performance results indicate that our system outperforms library implementations of TRD by nearly a factor of two (or more for small matrices) on three different manycore CPUs: Fujitsu SPARC64, Intel Xeon, and Intel Xeon Phi. As an extension, we also implemented batched execution of TRD with a cache-aware scheduler on top of our system. It not only doubles the peak performance for small matrices of n = O(100), but also improves performance significantly up to n = O(1,000), which is our target.
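To make the problem concrete, the following is a minimal, unblocked sketch of Householder tridiagonalization, the standard algorithm underlying TRD. This is purely illustrative: the paper's actual implementation is blocked, cache-aware, and code-generated, none of which is reflected here. The function name `tridiagonalize` and the reference use of NumPy are assumptions for illustration.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix A to tridiagonal form T = Q^T A Q
    via Householder reflections (unblocked reference sketch, not the
    paper's optimized implementation)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        # Build a Householder reflector that zeroes column k below the
        # subdiagonal.
        x = A[k + 1:, k]
        alpha = -np.copysign(np.linalg.norm(x), x[0]) if x[0] != 0 else -np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0:
            continue  # column already in the desired form
        v /= nv
        # Apply H = I - 2 v v^T from the left and right (two-sided update).
        A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
        A[k:, k + 1:] -= 2.0 * np.outer(A[k:, k + 1:] @ v, v)
    return A
```

Because the transformation is a similarity transform, the tridiagonal result has the same eigenvalues as the input, which is why TRD serves as a preprocessing step for symmetric eigensolvers.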

