Abstract

Breadth-first search (BFS) is a representative in-memory graph computation with complicated and frequent memory accesses. Existing single-server graph computing systems fail to exploit the access pattern of BFS for performance optimization; they therefore incur substantial extra memory latency by accessing data elements of a big graph that are no longer useful, and waste considerable computing resources processing them. In this article, we propose FastBFS, a new approach that accelerates breadth-first graph search on a single server by leveraging the access pattern observed while iterating over a big graph. First, FastBFS uses an edge-centric graph processing model to obtain the high bandwidth of sequential memory and/or disk access without expensive data preprocessing. Second, with a dynamic and asynchronous trimming mechanism, FastBFS efficiently reduces the size of a big graph by eliminating useless edges in parallel with the computation. Third, FastBFS schedules I/O streams efficiently and attains greater parallelism when an additional disk is available. We implement FastBFS by modifying the X-Stream system developed at EPFL. Our experimental results show that FastBFS achieves speedups of up to 7.9× and 10.4× over X-Stream and GraphChi, respectively. With an additional disk, performance improves further.
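
To make the idea of combining edge-centric streaming with edge trimming concrete, here is a minimal single-threaded sketch in C++. It is an illustrative assumption of how such a scheme could look, not FastBFS's actual implementation: the real system streams edges from disk, trims asynchronously and in parallel with the computation, and schedules I/O across disks, none of which is shown here. In each level, the sketch scans the edge list sequentially (the edge-centric pattern) and drops edges whose destination has already been discovered, since they can never contribute to later levels.

```cpp
// Minimal sketch of edge-centric BFS with per-level edge trimming.
// The data layout, function names, and trimming rule are illustrative
// assumptions; FastBFS's on-disk, asynchronous design is more involved.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Edge { uint32_t src, dst; };

// One BFS level: stream sequentially over the edge list ("edge-centric"),
// discover new vertices, and keep only edges that may still be useful,
// i.e., edges whose destination is not yet visited.
std::vector<Edge> bfs_level(const std::vector<Edge>& edges,
                            std::vector<int32_t>& level,
                            int32_t cur_level,
                            bool& discovered_any) {
    std::vector<Edge> kept;               // trimmed edge list for the next pass
    kept.reserve(edges.size());
    for (const Edge& e : edges) {         // sequential scan => high bandwidth
        if (level[e.src] == cur_level && level[e.dst] < 0) {
            level[e.dst] = cur_level + 1; // discover dst at the next level
            discovered_any = true;
        }
        if (level[e.dst] < 0)             // dst still undiscovered: keep edge
            kept.push_back(e);            // otherwise the edge is trimmed
    }
    return kept;
}

int main() {
    // Tiny example graph: 0 -> 1, 1 -> 2, 0 -> 2, 2 -> 3
    std::vector<Edge> edges = {{0, 1}, {1, 2}, {0, 2}, {2, 3}};
    std::vector<int32_t> level(4, -1);    // -1 means "not yet visited"
    level[0] = 0;                         // BFS source vertex

    bool discovered_any = true;
    for (int32_t l = 0; discovered_any; ++l) {
        discovered_any = false;
        edges = bfs_level(edges, level, l, discovered_any);
    }
    for (size_t v = 0; v < level.size(); ++v)
        std::printf("vertex %zu: level %d\n", v, level[v]);
    return 0;
}
```

The point of the trimming step is that the edge list shrinks as the search proceeds, so each later sequential pass touches less data, which is the effect the abstract attributes to eliminating useless edges.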
