Abstract

Modern processors no longer work mostly in isolation, but rather communicate continuously with peer processors and other devices; communication is now at least as important as computation. Older architectures kept the communication medium (the network) far from the processor, interfacing to it through the I/O bus, which was acceptable when the network was slower than the processor. New architectures need to bring the network close to the processors, at a latency and throughput level equal to that of the cache memories. Coherent caches are good at supporting Implicit Communication, where the communicating threads do not know in advance which input data will be needed or who produced them. On the other hand, in the case of Explicit Communication, when the input data set is known ahead of time, prefetching yields the best performance; furthermore, when the users-to-be of an output data set are known, eager send works even better. Prefetching (pull communication) works either on top of coherent caches with prefetch engines, or on top of local stores (scratchpad memories) with remote DMA, but consumes much less network traffic, and hence energy, in the latter case. Eager send (push communication) works essentially only with remote DMA; there, the traffic and energy advantages are even more pronounced. Recent advances in parallel programming support explicit communication efficiently: the programmer only identifies the input and output data sets, and the compiler and runtime system do the rest by appropriately placing the data and scheduling the transfers. We conclude that future chip multiprocessors should have local SRAM blocks that are configurable to operate partly as coherent caches and partly as local (scratchpad) memories; it should then be possible and advantageous to merge the cache controller and network interface functions into a single unit. These combined hardware mechanisms will most efficiently support both implicit and explicit communication, leading to a unification of the two traditional camps: shared memory and message passing.
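
To make the programming-model point concrete, the sketch below (not taken from the paper) uses OpenMP task dependences, one existing mechanism in which the programmer only declares each task's input and output data sets and the runtime orders the tasks accordingly; on an architecture with local stores, the same annotations would also let a runtime place the data and schedule the DMA transfers ahead of each task's execution. It requires an OpenMP 4.5 or later compiler (for array sections in the depend clause), e.g. gcc -fopenmp.

    #include <stdio.h>

    #define N 1024

    /* Each task declares only its input (depend(in:)) and output (depend(out:))
     * data sets; the runtime derives the producer/consumer ordering from these
     * annotations instead of discovering the communication through cache misses. */
    static void pipeline(double *a, double *b, double *c)
    {
        #pragma omp task depend(out: a[0:N])                     /* producer of a */
        for (int i = 0; i < N; i++) a[i] = 0.5 * i;

        #pragma omp task depend(in: a[0:N]) depend(out: b[0:N])  /* consumes a, produces b */
        for (int i = 0; i < N; i++) b[i] = 2.0 * a[i];

        #pragma omp task depend(in: a[0:N], b[0:N]) depend(out: c[0:N])
        for (int i = 0; i < N; i++) c[i] = a[i] + b[i];

        #pragma omp taskwait   /* wait for the whole dependence chain */
    }

    int main(void)
    {
        static double a[N], b[N], c[N];

        #pragma omp parallel
        #pragma omp single     /* one thread creates the tasks; any thread may run them */
        pipeline(a, b, c);

        printf("c[10] = %f\n", c[10]);
        return 0;
    }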
