Abstract

Memory access latency is a persistent bottleneck for the performance of data-intensive applications. Exploiting the memory access patterns of Data-Level Parallelism (DLP) is a promising way for loop accelerators to reduce this latency significantly. This paper proposes two DLP-oriented data provisioning mechanisms to alleviate memory access latency: 1) DLP-oriented Memory Access (DoMA), which efficiently utilizes the available memory bandwidth, and 2) a data-access-pattern-aware on-chip buffer (PABUF), which exploits data reuse in a user-transparent manner. Unlike loop accelerators that use traditional DMA to access global memory, DoMA reduces the transmission of useless data by intelligently adjusting the size of memory requests. In addition, PABUF, which manages data according to DLP memory access patterns without software engineering effort, allows the loop accelerators to access data in parallel. Experiments show that when our mechanisms are integrated into a loop accelerator based on the Rocket Chip Coprocessor (RoCC) interface, it achieves 4.20x-10.65x (6.81x on average) speedups over an L1 cache baseline, with negligible power and area overhead.
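To illustrate the kind of saving DoMA targets, the following sketch compares the bytes transferred for a strided access pattern by a fixed-burst DMA against a pattern-aware engine that sizes requests to the useful data. All names and parameters here are hypothetical and for illustration only; they are not taken from the paper's implementation.

```python
# Hypothetical model: a loop accelerator reads one 4-byte element out of
# every 64-byte region of memory (a common strided DLP pattern).

ELEM_BYTES = 4        # size of each useful element (assumed)
STRIDE_BYTES = 64     # distance between consecutive elements (assumed)
NUM_ELEMS = 1024      # elements consumed by the loop body (assumed)
BURST_BYTES = 64      # fixed burst granularity of a traditional DMA (assumed)

def dma_bytes_transferred():
    # A fixed-burst DMA must fetch a whole burst covering each element,
    # so every 4 useful bytes drag along 60 useless ones.
    return NUM_ELEMS * BURST_BYTES

def pattern_aware_bytes_transferred():
    # A pattern-aware engine that knows the stride can shrink each
    # request to the element size, transferring only useful data.
    return NUM_ELEMS * ELEM_BYTES

print(dma_bytes_transferred())            # 65536 bytes
print(pattern_aware_bytes_transferred())  # 4096 bytes: a 16x reduction
```

With these illustrative parameters, request-size adjustment cuts traffic by the stride-to-element ratio (64/4 = 16x); the actual reduction depends on the access pattern of each loop.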
