Abstract

Collective Classification (CC) is the process of simultaneously inferring the class labels of a set of inter-linked nodes, such as the topics of publications in a citation graph. Recently, Markov Logic Networks (MLNs) have attracted significant attention because of their ability to combine first-order logic with probabilistic reasoning. A few authors have used this ability of MLNs to perform CC over linked data, but the relative advantages of MLNs vs. other CC techniques remain unknown. In response, this paper compares a wide range of MLN learning and inference algorithms to the best previously studied CC algorithms. We find that MLN accuracy is highly dependent on the type of learning and the input rules that are used, which is not unusual given MLNs’ flexibility. More surprisingly, we find that even the best MLN performance generally lags that of the best previously studied CC algorithms. However, MLNs do excel on the one dataset that exhibited the most complex linking patterns. Ultimately, we find that MLNs may be worthwhile for CC tasks involving data with complex relationships, but that MLN learning for such data remains a challenge.
