Abstract
Modern processors include a cache to reduce the access latency to off-chip memory. In shared-memory multiprocessors, the same data can be stored in multiple processor-local caches. These private copies reduce contention on the memory system but incur a replication overhead: multiple copies consume valuable cache capacity and thus increase the likelihood of capacity misses. Multiple copies also complicate cache coherence. In particular, setting a cache line to the exclusive state in one cache requires invalidating all other shared copies, which can significantly stress the processor interconnect. Furthermore, loading data from a remote cache incurs a large overhead. Even without source code or data layout modifications, rearranging a parallel application's threads can often reduce cache line replication significantly. By mapping threads that frequently access the same cache lines to the same processor node, redundant copies and excessive invalidations can be minimized. In this paper, we devise a closed queuing network model that compares the performance of different thread arrangements across the nodes of a multiprocessor system and predicts the expected optimal arrangement. The inputs to the model are obtained through a single profiling run. The outputs of the queuing network are performance indices such as throughput, utilization, and latency for the different components of the memory system. Based on these metrics, we compute the memory stall time of individual cores and predict application runtime. Evaluated on a 72-core, 4-node Intel Xeon architecture, the presented model identifies the best thread arrangement from a set of six configurations for 20 out of 21 parallel applications from various benchmark suites.
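As a hedged illustration (not part of the paper's method), the C sketch below shows one way a chosen thread arrangement could be enforced in practice: each worker thread is pinned to the cores of a single NUMA node with pthread_setaffinity_np, so threads that frequently share cache lines stay on the same node. The 18-cores-per-node split and the contiguous core numbering per node are assumptions made for this example; actual core numbering varies by platform.

/*
 * Hypothetical sketch: pin a worker thread to all cores of one NUMA node.
 * Assumes 4 nodes with 18 cores each (72 cores total) and contiguous
 * core numbering per node -- both are assumptions, not taken from the paper.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define CORES_PER_NODE 18   /* assumption: 72 cores / 4 nodes */

/* Restrict the calling thread to every core of the given node. */
static int pin_to_node(int node)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = node * CORES_PER_NODE; c < (node + 1) * CORES_PER_NODE; c++)
        CPU_SET(c, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    int node = *(int *)arg;   /* node chosen by the thread-arrangement policy */
    if (pin_to_node(node) != 0)
        fprintf(stderr, "failed to pin thread to node %d\n", node);
    /* ... application work that shares cache lines with co-located threads ... */
    return NULL;
}

Pinning a group of threads to one node in this way keeps their shared cache lines in that node's caches, avoiding cross-node invalidations and remote cache loads; the same effect can also be obtained externally, e.g. with numactl or OpenMP affinity settings.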