Abstract

Graph edge partition models have recently become an appealing alternative to graph vertex partition models for parallel and distributed computing, owing to their flexibility in balancing loads and their effectiveness in reducing communication cost [1, 3]. In this paper, we introduce a simple yet effective graph edge partitioning model for GPU computing. In practice, our model yields high partition quality (better than or comparable to state-of-the-art edge partition approaches, at least for power-law graphs) with low partition overhead. In theory, previous work [1] showed that an approximation factor of O(d_max √(log n log k)) applies to graphs with m = Ω(k²) edges, where k is the number of partitions; our model extends this result to all graphs. We demonstrate how the graph edge partition model can be applied to GPU computing, drawing our examples from GPU programs for locality enhancement both over time and across (processor) space. For the first time, we demonstrate the effectiveness of edge partitioning for modeling data reuse on many-core processors, both in theory and in practice.
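To make the edge-partitioning idea concrete, the following is a minimal sketch of a greedy edge partitioner in Python. It is illustrative only, not the paper's algorithm: each edge is assigned to one of k parts, vertices incident to edges in multiple parts are replicated, and quality is summarized by the average replication factor (the quantity edge partitioners typically try to minimize alongside load balance). All names in the sketch are hypothetical.

```python
from collections import defaultdict

def greedy_edge_partition(edges, k):
    """Assign each edge to one of k parts, preferring parts that already
    contain one of the edge's endpoints (to reduce vertex replication)
    and breaking ties by current part size (to balance load).

    Illustrative baseline only; not the algorithm from the paper.
    """
    part_sizes = [0] * k
    # vertex -> set of parts in which a copy of the vertex already exists
    vertex_parts = defaultdict(set)
    assignment = []

    for u, v in edges:
        candidates = vertex_parts[u] | vertex_parts[v]
        if candidates:
            # Among parts already holding u or v, pick the least loaded.
            p = min(candidates, key=lambda i: part_sizes[i])
        else:
            # Otherwise start the edge in the globally least loaded part.
            p = min(range(k), key=lambda i: part_sizes[i])
        assignment.append(p)
        part_sizes[p] += 1
        vertex_parts[u].add(p)
        vertex_parts[v].add(p)

    # Replication factor: average number of parts each vertex is copied to.
    replication = sum(len(s) for s in vertex_parts.values()) / len(vertex_parts)
    return assignment, replication

# Toy usage: a small edge list split into 2 parts.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
assignment, rf = greedy_edge_partition(edges, k=2)
print(assignment, rf)
```

Under this framing, a replication factor close to 1 means little communication or duplicated state across parts, which is the intuition behind why edge partitioning suits power-law graphs: high-degree vertices are replicated instead of forcing all their edges into one overloaded part.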
