Abstract
Cache-only memory architecture (COMA), even with its additional memory overhead, can incur longer inter- and intra-node communication latency than cache-coherent non-uniform memory access (CC-NUMA). Some studies of COMA suggest that the inclusion property enforced between the processor cache and its local memory is a major cause of this less-than-desirable performance: inclusion generates extra accesses to the slow local memory. We consider the time at which a data block's address is bound to the local memory to be an important factor in COMA's long latency. This paper examines the inclusion property in COMA and introduces a COMA variant, dubbed Dynamic Memory Architecture (DYMA), in which the local memory serves as a backing store for blocks discarded from the processor cache. By delaying the binding time, the long latency caused by the inclusion property can be avoided. The paper then evaluates the potential performance of DYMA relative to COMA and CC-NUMA.
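The contrast between COMA's inclusion property and DYMA's delayed binding can be sketched as a toy model. This is an illustration only, with hypothetical class and method names not taken from the paper; real designs involve coherence protocols, directories, and replacement policies that are omitted here.

```python
# Toy model contrasting block placement in COMA vs. DYMA.
# Assumption: a "block" is just a tag; memories are modeled as sets.

class COMANode:
    """COMA with inclusion: a block brought into the processor cache is
    also allocated in the local (attraction) memory at fetch time."""
    def __init__(self):
        self.cache = set()
        self.local_memory = set()

    def fetch(self, block):
        self.cache.add(block)
        self.local_memory.add(block)   # inclusion: bind to local memory now

    def evict(self, block):
        self.cache.discard(block)      # a copy already sits in local memory


class DYMANode:
    """DYMA: binding to local memory is delayed until the block is
    discarded from the cache, so local memory acts as a backing store."""
    def __init__(self):
        self.cache = set()
        self.local_memory = set()

    def fetch(self, block):
        self.cache.add(block)          # no local-memory access on fetch

    def evict(self, block):
        self.cache.discard(block)
        self.local_memory.add(block)   # bind to local memory only on eviction


coma, dyma = COMANode(), DYMANode()
coma.fetch("B0")
dyma.fetch("B0")
print("B0" in coma.local_memory)   # COMA binds the block immediately
print("B0" in dyma.local_memory)   # DYMA's local memory is still untouched
dyma.evict("B0")
print("B0" in dyma.local_memory)   # bound only after the cache discards it
```

In this model, every COMA fetch touches the slow local memory, while DYMA touches it only on eviction, which is the latency saving the abstract attributes to delayed binding.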