Abstract

The past decade has seen the development of many shared-memory graph processing frameworks intended to reduce the effort of developing high-performance parallel applications. However, many of these frameworks, based on vertex-centric or edge-centric paradigms, suffer from several issues, such as poor cache utilization, irregular memory accesses, heavy use of synchronization primitives, or theoretical inefficiency, that degrade overall performance and scalability. Recently, we proposed a cache- and memory-efficient partition-centric paradigm for computing PageRank [26]. In this article, we generalize this approach to develop a novel Graph Processing Over Parts (GPOP) framework that is cache efficient, scalable, and work efficient. GPOP induces locality in memory accesses by increasing the granularity of execution to vertex subsets called “parts,” thereby dramatically improving the cache performance of a variety of graph algorithms. It achieves high scalability by enabling completely lock- and atomic-free computation. GPOP’s built-in analytical performance model enables it to use a hybrid of source-centric and part-centric communication modes in a way that ensures work efficiency in each iteration, while simultaneously leveraging high-bandwidth sequential memory accesses. Finally, the GPOP framework is designed with programmability in mind. It completely abstracts away the underlying parallelism and programming model details from the user and provides an easy-to-program set of APIs with the ability to selectively continue the active vertex set across iterations. Such functionality is useful for many graph algorithms but is not intrinsically supported by current frameworks. We extensively evaluate the performance of GPOP for a variety of graph algorithms, using several large datasets. We observe that GPOP incurs up to 9×, 6.8×, and 5.5× fewer L2 cache misses compared to Ligra, GraphMat, and Galois, respectively. In terms of execution time, GPOP is up to 19×, 9.3×, and 3.6× faster than Ligra, GraphMat, and Galois, respectively.
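To make the part-centric idea described above concrete, the following is a minimal C++ sketch of a scatter/gather-style iteration over vertex "parts." It is an illustrative serial sketch under stated assumptions, not GPOP's actual API: the graph layout, part assignment, and message bins are hypothetical, and in the real framework each part would be processed by a separate thread. The key point it illustrates is that each destination part exclusively consumes its own message bin, so updates need no locks or atomics across parts.

```cpp
// Sketch of a part-centric scatter/gather BFS-style iteration (not GPOP's API).
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

struct Graph {
    uint32_t numVertices;
    std::vector<std::vector<uint32_t>> outNeighbors;  // adjacency lists
};

int main() {
    // Toy graph: 0 -> {1,2}, 1 -> {3}, 2 -> {3}, 3 -> {}
    Graph g{4, {{1, 2}, {3}, {3}, {}}};

    const uint32_t numParts = 2;                       // hypothetical partitioning
    const uint32_t partSize = g.numVertices / numParts;
    auto partOf = [&](uint32_t v) { return v / partSize; };

    std::vector<uint32_t> dist(g.numVertices, UINT32_MAX);
    dist[0] = 0;
    std::vector<uint32_t> frontier = {0};

    while (!frontier.empty()) {
        // Scatter phase: active vertices append (dst, value) messages into one
        // bin per destination part; writes into each bin are sequential.
        std::vector<std::vector<std::pair<uint32_t, uint32_t>>> bins(numParts);
        for (uint32_t u : frontier)
            for (uint32_t v : g.outNeighbors[u])
                bins[partOf(v)].push_back({v, dist[u] + 1});

        // Gather phase: each destination part applies only its own bin, so
        // updates to dist[] never race across parts (no locks or atomics).
        std::vector<uint32_t> next;
        for (uint32_t p = 0; p < numParts; ++p)
            for (auto [v, d] : bins[p])
                if (d < dist[v]) { dist[v] = d; next.push_back(v); }

        frontier = std::move(next);  // active set carried to the next iteration
    }

    for (uint32_t v = 0; v < g.numVertices; ++v)
        std::cout << "dist[" << v << "] = " << dist[v] << "\n";
    return 0;
}
```

In a parallel setting, assigning one part per thread in both phases preserves this exclusivity: threads write to disjoint bins in the scatter phase and read only their own bin in the gather phase, which is the property the abstract refers to as lock- and atomic-free computation.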
