Abstract

Link prediction is a paradigmatic and challenging problem in network science that attempts to uncover missing links or predict future links based on the known topology. A fundamental but still unsolved issue is how to choose proper metrics to fairly evaluate prediction algorithms. The area under the receiver operating characteristic curve (AUC) and balanced precision (BP) were the two most popular metrics in early studies, but their effectiveness has recently come under debate. Meanwhile, the area under the precision–recall curve (AUPR) has become increasingly popular, especially in biological studies. Based on a toy model with tunable noise and predictability, we propose a method to measure the discriminating ability of any given metric. Applying this method to the three threshold-free metrics above, we show that AUC and AUPR are markedly more discriminating than BP, and that AUC is slightly more discriminating than AUPR. These results suggest that AUC and AUPR should be used together when evaluating link prediction algorithms, and warn that an evaluation based only on BP may be unreliable. This article provides a starting point towards a comprehensive picture of the effectiveness of evaluation metrics for link prediction.
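To make the three threshold-free metrics concrete, the following is a minimal sketch (not the paper's code) of how AUC, BP, and AUPR can be computed from the scores a link predictor assigns to positive (missing) and negative (nonexistent) candidate links. All names and the toy data are illustrative assumptions, not taken from the article.

```python
def auc(pos, neg):
    """Probability that a random positive link outscores a random
    negative one (ties count 0.5) -- the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def balanced_precision(pos, neg):
    """Precision among the top-L ranked candidates with L = |positives|;
    at this cutoff precision equals recall, hence 'balanced'."""
    L = len(pos)
    ranked = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg],
                    reverse=True)  # note: ties sort positives first
    return sum(label for _, label in ranked[:L]) / L

def aupr(pos, neg):
    """Area under the precision-recall curve, accumulated stepwise
    as the score threshold sweeps down the ranked list."""
    ranked = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg],
                    reverse=True)
    tp, area, prev_recall = 0, 0.0, 0.0
    for k, (_, label) in enumerate(ranked, start=1):
        if label:
            tp += 1
            recall = tp / len(pos)
            area += (recall - prev_recall) * (tp / k)  # precision = tp/k
            prev_recall = recall
    return area

# Toy example: positives tend to (but do not always) score higher.
pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2, 0.1]
print(round(auc(pos, neg), 3))                 # -> 0.917
print(round(balanced_precision(pos, neg), 3))  # -> 0.667
print(round(aupr(pos, neg), 3))                # -> 0.917
```

In a real evaluation the positive set would be the held-out (missing) links and the negative set the nonexistent links, with scores produced by the algorithm under test; library implementations such as scikit-learn's `roc_auc_score` handle ties and large candidate sets more carefully.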
