Abstract
As multicore processors become more common in today's computing systems and parallel programming models grow richer, programmers must consider how to choose an appropriate parallel programming model when writing parallel code. The purpose of this paper is to compare and analyze the performance gap between different C++ parallel programming models, namely C++ standard library threads, OpenMP, and Pthreads, on matrix operations. The experiments implement matrix multiplication with each library and then analyze its performance. The experimental data show that data size has a significant impact on the performance of the different models. For very small matrices, whose dimensions are less than or close to the number of threads, the parallel implementations perform far worse than the serial implementation. For small matrices whose dimensions exceed the number of threads, C++ standard library threads outperform Pthreads and OpenMP thanks to their lightweight thread handling on relatively small matrices. Pthreads shows the best performance on very large matrices due to its fine-grained control over thread management, communication, and synchronization. OpenMP's performance is less stable than that of the other two libraries, especially on smaller matrices. This paper provides a comparative analysis that can help programmers choose the most appropriate library for their specific computational needs.
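The abstract does not include the authors' source listings; as a rough illustration of the kind of implementation being compared, the following is a minimal sketch of a row-partitioned matrix multiplication using C++ standard library threads. All names, the partitioning scheme, and the thread count are assumptions for illustration, not the paper's actual code.

```cpp
// Minimal sketch (not the authors' code): row-partitioned matrix
// multiplication C = A * B using C++ standard library threads.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Each worker computes a contiguous block of rows of C.
void multiply_rows(const Matrix& A, const Matrix& B, Matrix& C,
                   std::size_t row_begin, std::size_t row_end) {
    const std::size_t n = B.size();      // inner dimension
    const std::size_t m = B[0].size();   // columns of B and C
    for (std::size_t i = row_begin; i < row_end; ++i)
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t j = 0; j < m; ++j)
                C[i][j] += A[i][k] * B[k][j];
}

Matrix parallel_multiply(const Matrix& A, const Matrix& B,
                         unsigned num_threads) {
    Matrix C(A.size(), std::vector<double>(B[0].size(), 0.0));
    std::vector<std::thread> workers;
    const std::size_t rows = A.size();
    const std::size_t chunk = (rows + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(rows, begin + chunk);
        if (begin >= end) break;  // fewer rows than threads
        workers.emplace_back(multiply_rows, std::cref(A), std::cref(B),
                             std::ref(C), begin, end);
    }
    for (auto& w : workers) w.join();
    return C;
}
```

An OpenMP version of the same kernel would typically keep the serial triple loop and add `#pragma omp parallel for` to the outer row loop, while a Pthreads version would spawn workers explicitly with `pthread_create`/`pthread_join`; the abstract's comparison concerns how these approaches scale with matrix size.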