Abstract

Data compression has been proposed to increase the utility of on-chip memory space and Network-on-Chip (NoC) bandwidth in energy-efficient processors. However, such techniques usually add compression and decompression latency to the critical path of memory access, which is one of the major factors limiting their adoption in processors. In contrast to prior work that deals with either cache compression or network compression separately, this study proposes a unified on-chip DIStributed data COmpressor, DISCO, to enable near-zero-latency cache/NoC compression for chip multiprocessors (CMPs) adopting Non-Uniform Cache Access (NUCA). DISCO integrates data compressors into NoC routers and seeks opportunities to overlap the de/compression latency with the NoC queuing delay through a coordinated NoC scheduling and cache compression mechanism. With the support of DISCO, which unifies the solutions for on-chip data compression, the evaluation shows that DISCO significantly boosts the efficiency of on-chip data caching and data movement.
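The latency-hiding idea behind DISCO can be illustrated with a back-of-the-envelope model: if a packet must wait in a router's input queue anyway, de/compression work done during that wait adds only the portion of its latency that exceeds the queuing delay. The sketch below is illustrative only, with assumed cycle counts and function names not taken from the paper.

```python
# Illustrative sketch (assumed model, not from the paper): compression that
# overlaps with NoC queuing only adds the latency that exceeds the queue wait.

def effective_added_latency(compression_cycles: int, queuing_cycles: int) -> int:
    """Extra cycles added once de/compression is overlapped with queuing."""
    return max(0, compression_cycles - queuing_cycles)

def total_access_latency(base_cycles: int, compression_cycles: int,
                         queuing_cycles: int, overlapped: bool) -> int:
    """Memory-access latency with compression either on the critical path
    (overlapped=False) or hidden behind NoC queuing delay (overlapped=True)."""
    if overlapped:
        return (base_cycles + queuing_cycles
                + effective_added_latency(compression_cycles, queuing_cycles))
    return base_cycles + queuing_cycles + compression_cycles

# With a 5-cycle compressor and a 7-cycle queue occupancy, overlapping
# hides the compression latency entirely.
naive = total_access_latency(20, 5, 7, overlapped=False)    # 32 cycles
overlap = total_access_latency(20, 5, 7, overlapped=True)   # 27 cycles
```

Under these assumed numbers, compression on the critical path costs 5 extra cycles, while the overlapped scheme costs none, which is the "near-zero latency" effect the abstract claims.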
