Abstract

Progress in device design and implementation technologies will change the structure of bus-connected multiprocessors and allow them to be implemented on a single chip. In such an implementation, the on-chip bus is far faster than a backplane bus, so data transfer between the chip and external devices becomes the bottleneck. Many studies of snoop cache protocols have been conducted, but they generally assume that the cache memory resides on a printed circuit board. In this paper, we first classify snoop cache protocols in terms of their accesses to off-chip shared memory, which will be the principal cause of performance degradation, and then compare them quantitatively. Evaluations are made with an instruction-level multiprocessor simulator and practical parallel applications, varying the cache size or the access latency of the shared memory. The results show that an I/N/C protocol that actively uses line transfers between caches achieves the highest performance under all conditions. © 1997 Scripta Technica, Inc. Syst Comp Jpn, 28(2): 62–72, 1997
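To make the classification criterion concrete, the following is a minimal sketch (not the authors' simulator) of how one might count off-chip shared-memory accesses for read misses under two policies: fetching every missed line from off-chip memory, versus supplying the line cache-to-cache when another on-chip cache already holds it. The cache geometry, the synthetic access trace, and the restriction to read misses (writes, invalidations, and the actual protocol states are ignored) are all illustrative assumptions, not details taken from the paper.

```cpp
// Sketch: compare off-chip fetch counts with and without cache-to-cache supply.
// All parameters below are assumed for illustration only.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kProcs     = 4;    // assumed number of on-chip processors
constexpr int kLines     = 64;   // lines per direct-mapped cache (assumed)
constexpr int kLineBytes = 32;   // line size in bytes (assumed)

struct Cache {
    std::vector<int64_t> tag;                // -1 marks an empty slot
    Cache() : tag(kLines, -1) {}
    bool holds(int64_t line) const { return tag[line % kLines] == line; }
    void fill(int64_t line)        { tag[line % kLines] = line; }
};

int main() {
    Cache cache[kProcs];
    long offchip_memory_only   = 0;  // every miss goes off-chip
    long offchip_with_c2c      = 0;  // misses left after cache-to-cache supply

    // Synthetic read trace over a shared region larger than one cache,
    // so misses persist and lines are often resident in another cache.
    for (int iter = 0; iter < 10000; ++iter) {
        int p = iter % kProcs;
        int64_t line = ((int64_t)iter * 7919 % (512 * kLineBytes)) / kLineBytes;
        if (!cache[p].holds(line)) {
            ++offchip_memory_only;           // baseline policy: always leave the chip
            bool supplied = false;
            for (int q = 0; q < kProcs && !supplied; ++q)
                supplied = (q != p) && cache[q].holds(line);
            if (!supplied) ++offchip_with_c2c;  // only then access off-chip memory
            cache[p].fill(line);
        }
    }
    std::printf("off-chip fetches, memory only    : %ld\n", offchip_memory_only);
    std::printf("off-chip fetches, cache-to-cache : %ld\n", offchip_with_c2c);
    return 0;
}
```

Under these assumptions the second counter is never larger than the first, which is the intuition behind ranking protocols by how aggressively they use inter-cache line transfers instead of off-chip shared memory.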
