Abstract
As more cores are integrated on a single die, chip multiprocessors suffer higher on-chip communication latency and linearly increasing directory overhead. A hierarchical cache architecture recursively partitions the on-chip caches into multi-level regions, reducing communication latency by replicating data blocks into the regions that contain the requestors, and alleviating the directory storage overhead by using a multi-level directory. Based on the data distribution in the last-level cache, we improve the data placement policy and propose an enhanced hierarchical cache directory (EHCD). EHCD places an incoming off-chip data block directly in the lowest-level region that contains the requestor, reducing access latency and guaranteeing that only one replica of private data is kept in the last-level cache. EHCD improves the capacity utilization of the last-level cache and offers good scalability. Simulation results on a 16-core CMP show that EHCD reduces execution time by 24% compared with a shared cache organization, and by 15% compared with the original hierarchical cache design.
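The placement decision summarized above can be illustrated with a minimal sketch. The region layout below (a 16-core CMP split into 4 lowest-level regions of 4 cores each) and all names such as lowest_region_of and place_offchip_block are illustrative assumptions, not details taken from the paper.

```c
/*
 * Minimal sketch of the EHCD placement idea described in the abstract.
 * Assumed layout: 16 cores partitioned into 4 lowest-level regions of
 * 4 cores each; region sizes and function names are hypothetical.
 */
#include <stdio.h>

#define CORES_PER_REGION 4   /* assumed lowest-level region size */

/* Index of the lowest-level region that contains the requesting core. */
static int lowest_region_of(int core_id)
{
    return core_id / CORES_PER_REGION;
}

/*
 * On an off-chip fill, place the block directly in the lowest region
 * that contains the requestor, so private data keeps a single replica
 * in the last-level cache.
 */
static int place_offchip_block(int requestor_core)
{
    int region = lowest_region_of(requestor_core);
    printf("block placed in LLC slice of region %d (requestor core %d)\n",
           region, requestor_core);
    return region;
}

int main(void)
{
    place_offchip_block(6);   /* -> region 1 under the assumed layout */
    place_offchip_block(13);  /* -> region 3 under the assumed layout */
    return 0;
}
```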