Abstract

Decycling and dismantling of complex networks underlie many important applications in network science. Recently these two closely related problems were tackled by several algorithms: simple and considerably sub-optimal heuristics on the one hand, and involved and accurate message-passing methods that evaluate single-node marginal probabilities on the other. In this paper we propose a simple and extremely fast algorithm, CoreHD, which recursively removes nodes of the highest degree from the 2-core of the network. CoreHD performs much better than all existing simple algorithms. When applied to real-world networks, it achieves solutions as good as those obtained by the state-of-the-art iterative message-passing algorithms at greatly reduced computational cost, suggesting that CoreHD should be the algorithm of choice for many practical purposes.
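As an illustration of the greedy loop described above, here is a minimal sketch in Python using networkx. The function name corehd_decycle is ours, and the sketch assumes a simple graph (no self-loops); the authors' actual implementation maintains the 2-core incrementally after each removal rather than recomputing it, which keeps the running time near-linear.

```python
import networkx as nx

def corehd_decycle(G):
    """Greedy CoreHD decycling (sketch): repeatedly remove a
    highest-degree node from the current 2-core until the 2-core
    is empty, at which point the remaining graph is a forest."""
    H = G.copy()
    removed = []
    core = nx.k_core(H, k=2)      # every node in the 2-core has degree >= 2
    while core.number_of_nodes() > 0:
        # pick a node of maximum degree inside the 2-core
        v = max(core.degree, key=lambda nd: nd[1])[0]
        H.remove_node(v)
        removed.append(v)
        # recompute the 2-core; a real implementation updates it locally
        core = nx.k_core(H, k=2)
    return removed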

Highlights

  • Decycling and dismantling of complex networks underlie many important applications in network science

  • The authors of ref. 4 claimed that a heuristic based on the so-called collective influence (CI) measure can be a perfect candidate for this purpose

  • We compare to Belief Propagation guided Decimation (BPD)[5] and the Collective Influence method (CI)[4] (CI results are obtained using the original code of ref. 4)


Introduction

Decycling and dismantling of complex networks underlie many important applications in network science. Recent theoretical and algorithmic progress on both problems[1,2,3,5,6] came from the fact that, on sparse random networks whose degree distributions have a finite second moment, methods from the physics of spin glasses provide accurate algorithms for both decycling and dismantling. These sparse random networks are locally tree-like and do not contain many short loops. Even on real-world networks, which typically do contain many short loops, the best dismantling is currently achieved by first finding a decycling solution and then re-inserting nodes that close short loops but do not increase the size of the largest component too much[5,6]. Both the algorithms of refs 5 and 6 achieve performance extremely close to the theoretically optimal values computed on random networks.
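To make the re-insertion step concrete, here is a slow but transparent sketch of the idea: greedily put back the removed node whose return creates the smallest merged component, stopping once any re-insertion would push the largest component above a size budget. The function and parameter names (greedy_reinsertion, max_size) are ours, and this is only an illustration of the principle, not the exact procedure of refs 5 and 6, which use more efficient bookkeeping (e.g. union-find) instead of recomputing components.

```python
import networkx as nx

def greedy_reinsertion(G, removed, max_size):
    """Sketch: re-insert removed nodes one by one, always choosing
    the node whose return yields the smallest merged component, and
    stop before the largest component would exceed max_size."""
    H = G.copy()
    H.remove_nodes_from(removed)
    remaining = set(removed)
    while remaining:
        best_v, best_size = None, None
        for v in remaining:
            # distinct components of v's already-present neighbours
            comps = {frozenset(nx.node_connected_component(H, u))
                     for u in G.neighbors(v) if u in H}
            size = 1 + sum(len(c) for c in comps)  # component formed if v returns
            if best_size is None or size < best_size:
                best_v, best_size = v, size
        if best_size > max_size:
            break  # any further re-insertion would exceed the budget
        H.add_node(best_v)
        H.add_edges_from((best_v, u) for u in G.neighbors(best_v) if u in H)
        remaining.discard(best_v)
    return list(remaining)  # the nodes that stay removed: the dismantling set
```

The min-size greedy choice is what lets nodes that close short loops return first, since putting them back typically reconnects only small pieces.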
