Abstract

This paper presents MST, a communication-efficient message library for fast graph traversal on exascale clusters. The key idea is to follow the multi-level network topology and perform topology-aware message aggregation, in which small messages are gathered and scattered at each level of the domain hierarchy. To facilitate message aggregation, we equip MST with flexible buffer management, including active buffer switching and dynamic buffer expansion. We implement MST on the newest-generation Tianhe supercomputer and evaluate its performance using various traversal-centric algorithms on both synthetic trillion-scale graphs and real-world big graphs. The results show that MST-based graph traversal is orders of magnitude faster than traversal based on the Active Messages Library (AML). For the Graph500-BFS benchmark, the MST-based Tianhe system (with 77.2K nodes) outperforms the Fugaku supercomputer (with 148.5K nodes) by 18.53%, even though Fugaku is ranked No. 1 in the latest Graph500-BFS ranking (June 2023). MST also greatly improves graph processing performance on other commercial large-scale computing systems at the National Supercomputing Center in Changsha (NSCC) and WuzhenLight.
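
To make the hierarchical aggregation idea concrete, the sketch below shows a minimal two-level variant (intra-node gather, then inter-node exchange between node leaders) written in plain MPI. This is not MST's API: the communicator layout, the assumption of a uniform number of ranks per node, and the use of MPI collectives in place of MST's internal gather/scatter scheme and buffer management are simplifications for illustration only.

/*
 * Sketch of two-level, topology-aware message aggregation (NOT the MST API).
 * Small per-rank messages are first gathered at a node leader, and the
 * leaders then exchange the combined buffers across nodes, so only large
 * messages cross the network. Assumes every node hosts the same number of
 * MPI ranks.
 */
#include <mpi.h>
#include <stdint.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Level 1: discover the node-level domain (ranks sharing memory). */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Level 2: a communicator containing one leader rank per node. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    /* Each rank produces one small message (here, a single 64-bit value). */
    uint64_t my_msg = (uint64_t)world_rank;

    /* Intra-node aggregation: gather small messages at the node leader. */
    uint64_t *node_buf = NULL;
    if (node_rank == 0)
        node_buf = malloc(node_size * sizeof(uint64_t));
    MPI_Gather(&my_msg, 1, MPI_UINT64_T,
               node_buf, 1, MPI_UINT64_T, 0, node_comm);

    /* Inter-node level: leaders exchange the aggregated buffers. An
       allgather stands in for MST's point-to-point traversal messages. */
    if (node_rank == 0) {
        int nleaders;
        MPI_Comm_size(leader_comm, &nleaders);
        uint64_t *all = malloc((size_t)nleaders * node_size * sizeof(uint64_t));
        MPI_Allgather(node_buf, node_size, MPI_UINT64_T,
                      all, node_size, MPI_UINT64_T, leader_comm);
        /* ... a real traversal would now scatter 'all' back to local ranks ... */
        free(all);
        free(node_buf);
        MPI_Comm_free(&leader_comm);
    }

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}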
