Abstract
With advances in electronic technology, the uniprocessor is being replaced by the chip multiprocessor (CMP). Because a CMP can execute multi-threaded programs efficiently, many researchers study multi-threading, including extracting threads from legacy single-threaded programs and standardizing future multi-threaded programs. Data communication among threads is an unavoidable issue in multi-threaded programs, and efficient data sharing is an important factor in program performance. However, existing work on data sharing focuses on memory organization and the relationships among threads, with little attention to what happens inside the processor. In this paper, we develop a thread assignment method for a group-shared L2 cache architecture that allocates threads to core groups according to the data-sharing relationships among the threads. In our experiment, we simulate four threads with different degrees of data sharing running on a four-core CMP whose cores are divided into two groups. Comparing the program execution traces, we find that the main difference between the two simulations is the L2 cache hit rate, and that our thread assignment brings a 6.25% improvement in running time. The L2 cache hit rates of the two groups are 91.0% and 87.1% with our proposed thread assignment, but only 77.0% and 75.4% with random thread assignment, a drop of 14.0 and 11.7 percentage points, respectively.
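To illustrate the idea behind the assignment step, the following minimal sketch groups threads by their pairwise data sharing so that threads that share the most data are placed on cores behind the same group-shared L2 bank. The sharing matrix, group layout, and greedy enumeration here are illustrative assumptions for exposition, not the method or data reported in the paper.

    # Hypothetical sketch: place threads that share the most data on cores
    # that sit behind the same group-shared L2 bank.
    # The sharing matrix values and group layout below are assumptions for
    # illustration only; they are not taken from the paper's simulation.
    from itertools import combinations

    NUM_THREADS = 4
    NUM_GROUPS = 2
    GROUP_SIZE = NUM_THREADS // NUM_GROUPS

    # share[i][j]: assumed relative amount of data shared between threads i and j.
    share = [
        [0, 9, 1, 2],
        [9, 0, 2, 1],
        [1, 2, 0, 8],
        [2, 1, 8, 0],
    ]

    def intra_group_sharing(group):
        """Total pairwise sharing among the thread ids inside one core group."""
        return sum(share[a][b] for a, b in combinations(group, 2))

    best = None
    # Enumerate the ways to split the four threads into two groups of two and
    # keep the split that maximizes data sharing inside each group (i.e., the
    # sharing served by a single group-shared L2 bank).
    for first in combinations(range(NUM_THREADS), GROUP_SIZE):
        rest = tuple(t for t in range(NUM_THREADS) if t not in first)
        score = intra_group_sharing(first) + intra_group_sharing(rest)
        if best is None or score > best[0]:
            best = (score, first, rest)

    print("core group 0 gets threads", best[1], "; core group 1 gets threads", best[2])

With the assumed matrix, threads 0 and 1 (and likewise 2 and 3) share heavily, so the sketch assigns each pair to the same core group, which is the situation in which a group-shared L2 can serve most sharing locally.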