Abstract

Processing-in-memory (PIM) proposes to move computational components inside memory units to alleviate the high cost of data movement in big data processing. This approach has recently been utilized to reach high performance and energy efficiency in large-scale graph processing. This paper analyzes a state-of-the-art PIM accelerator for graph processing and identifies message queue management as a significant bottleneck for system efficiency. We introduce two metrics for representing waiting time and processor utilization. We then present a lightweight solution for reducing the waiting time caused by the message queue while increasing resource utilization in the system. Our simulation results on a set of real-world graphs indicate that the enhanced graph processing system achieves a 40% reduction in overall execution time and 15% system energy savings over the baseline PIM-based accelerator.
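The paper's exact metric definitions are not given in the abstract, but the relationship it studies — messages queuing up at a processing unit, inflating waiting time and capping utilization — can be illustrated with a minimal single-server queue sketch. All names and the fixed-service-time assumption below are illustrative, not taken from the paper:

```python
def simulate_queue(arrivals, service_time):
    """Toy single-server message-queue model.

    arrivals: sorted list of message arrival times.
    service_time: fixed time to process one message (an assumption
    for illustration; real per-message costs vary).
    Returns (average waiting time, processor utilization).
    """
    clock = 0.0        # time the server becomes free
    busy = 0.0         # total time spent processing
    total_wait = 0.0   # total time messages sat in the queue
    for t in arrivals:
        start = max(clock, t)          # wait if the server is still busy
        total_wait += start - t        # queueing delay for this message
        clock = start + service_time   # server occupied until here
        busy += service_time
    makespan = clock - arrivals[0] if arrivals else 0.0
    avg_wait = total_wait / len(arrivals) if arrivals else 0.0
    utilization = busy / makespan if makespan else 0.0
    return avg_wait, utilization

# Messages arriving faster than they are served accumulate queueing delay:
print(simulate_queue([0, 1, 2, 3], service_time=2.0))  # → (1.5, 1.0)
```

Reducing the time messages spend blocked in such a queue is exactly the lever the paper's lightweight solution targets: lower waiting time lets the same hardware keep its processing units busier on useful work.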

Full Text

Paper version not known.