Abstract
We aim to model the performance of linear algebra algorithms without executing them, in whole or in part. The performance of an algorithm can be expressed in terms of the time spent on CPU execution and on memory stalls. The main concern of this paper is to build analytical models that accurately predict memory stalls. We consider the scenario in which data resides in the L2 cache; under this assumption, only L1 cache misses occur. We construct an analytical formula for modeling the L1 cache misses of fundamental linear algebra operations such as those included in the Basic Linear Algebra Subprograms (BLAS) library. The number of cache misses occurring in higher-level algorithms, such as a matrix factorization, is then predicted by combining the models for the appropriate BLAS subroutines. As case studies, we consider GER, a BLAS level-2 operation, and the LU factorization. The models are validated on both Intel and AMD processors, attaining remarkably accurate performance predictions.
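To illustrate the flavor of such a model (this is a hedged sketch, not the paper's actual formula), one can count the L1 misses of GER (A ← A + αxyᵀ, with A of size m × n) under the abstract's scenario that all operands reside in L2, so every cache miss is an L1 miss. The line size, element size, and the assumption that x remains cached across column updates while A and y are streamed are all illustrative choices, not taken from the paper.

```python
from math import ceil

# Assumed machine parameters (illustrative, not from the paper):
LINE_BYTES = 64    # cache-line size in bytes
ELEM_BYTES = 8     # double-precision element size
ELEMS_PER_LINE = LINE_BYTES // ELEM_BYTES  # 8 doubles per line

def ger_l1_misses(m: int, n: int) -> int:
    """Naive predicted L1 miss count for an m x n GER (A <- A + alpha*x*y^T),
    assuming x fits in L1 and is reused across all n column updates,
    while A and y are streamed in from L2."""
    misses_A = ceil(m / ELEMS_PER_LINE) * n  # each column of A is touched once
    misses_x = ceil(m / ELEMS_PER_LINE)      # x loaded once, then reused
    misses_y = ceil(n / ELEMS_PER_LINE)      # y streamed once
    return misses_A + misses_x + misses_y
```

A model for a higher-level algorithm such as the LU factorization would then be assembled, as the abstract describes, by summing such per-call predictions over the BLAS invocations the factorization performs.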