Abstract

Hardware-based DRAM cache techniques for GPGPUs propose to use GPU DRAM as a cache of the host (system) memory. However, these approaches do not exploit the opportunity to allocate store-before-load data (data that is written before being read by GPU cores) directly in GPU DRAM, which would save multiple CPU-GPU transactions. In this context, we propose ReDRAM, a novel memory allocation strategy for GPGPUs that reconfigures the GPU DRAM cache as a heterogeneous unit: it allows store-before-load data to be allocated directly in GPU DRAM while still using that DRAM as a cache of the host memory. Our simulation results using a modified version of GPGPU-Sim show that, for applications that use store-before-load data, ReDRAM improves performance by 57.6 percent on average (up to 4.85x) compared with existing proposals for state-of-the-art GPU DRAM caches.
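The core idea can be illustrated with a small sketch. The following Python snippet is not the paper's actual hardware mechanism; it is a hypothetical software model of the allocation decision: pages whose first GPU access is a store are classified as store-before-load and can be allocated directly in GPU DRAM (no host-memory fill needed), while other pages go through the DRAM cache as usual.

```python
# Hypothetical model of ReDRAM's allocation decision (illustrative only):
# classify each page by its first GPU access, then place
# store-before-load pages directly in GPU DRAM, skipping the
# CPU-GPU transfer a pure DRAM cache would perform to fill them.

def classify_pages(trace):
    """trace: list of (page_id, op) tuples with op in {"load", "store"}.
    Returns {page_id: "store-before-load" | "load-before-store"}
    based on the first access to each page."""
    kind = {}
    for page, op in trace:
        if page not in kind:
            kind[page] = "store-before-load" if op == "store" else "load-before-store"
    return kind

def placement(kinds):
    """Store-before-load pages get direct GPU DRAM allocation;
    all other pages are served by the DRAM cache of host memory."""
    return {
        page: "direct GPU DRAM" if k == "store-before-load"
        else "DRAM cache of host memory"
        for page, k in kinds.items()
    }

# Example access trace: page 0 is written first (no host fill needed),
# page 1 is read first (must be fetched from host memory on a miss).
trace = [(0, "store"), (1, "load"), (0, "load"), (1, "store")]
print(placement(classify_pages(trace)))
```

In this model, avoiding the fill transfer for page 0 is where the saved CPU-GPU transactions come from; the actual classification and placement in ReDRAM are performed by the proposed allocation strategy described in the paper.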
