Abstract

CluEval is a Python tool designed to assess the clustering performance of named entity disambiguation methods. It lets users apply five clustering evaluation metrics commonly used in entity disambiguation research. With newly developed, comprehensive, and fast algorithms, CluEval handles large-scale computations and helps users systematically interpret the clustering performance of named entity disambiguation methods. The tool can serve as stand-alone evaluation code or be integrated as a module into any named entity disambiguation framework that produces clusters as its final disambiguation output, such as author name disambiguation.
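To illustrate the kind of cluster-based evaluation the abstract describes, here is a minimal sketch of one metric widely used in entity disambiguation research, B-cubed precision/recall/F1. This is not CluEval's actual API (which the abstract does not specify); the function name and input format are assumptions for illustration only.

```python
from collections import defaultdict

def b_cubed(true_labels, pred_labels):
    """B-cubed precision, recall, and F1 for a clustering.

    true_labels / pred_labels: dicts mapping each item (e.g. a name
    mention) to its gold / predicted cluster id.
    Note: illustrative sketch only, not CluEval's real interface.
    """
    # Group items by cluster id so overlaps can be computed per item.
    def clusters(labels):
        groups = defaultdict(set)
        for item, cid in labels.items():
            groups[cid].add(item)
        return groups

    gold, pred = clusters(true_labels), clusters(pred_labels)
    p_sum = r_sum = 0.0
    for item in true_labels:
        g = gold[true_labels[item]]        # gold cluster of this item
        c = pred[pred_labels[item]]        # predicted cluster of this item
        overlap = len(g & c)
        p_sum += overlap / len(c)          # per-item precision
        r_sum += overlap / len(g)          # per-item recall
    n = len(true_labels)
    precision, recall = p_sum / n, r_sum / n
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: three mentions; the method wrongly splits {a, b}.
gold = {"a": 1, "b": 1, "c": 2}
pred = {"a": 1, "b": 2, "c": 2}
p, r, f = b_cubed(gold, pred)  # each evaluates to 2/3
```

A stand-alone metric like this can be dropped into any disambiguation pipeline whose final output is a mention-to-cluster mapping, which is the integration pattern the abstract describes.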
