Abstract
Learning parameters associated with propositions is one of the main tasks of probabilistic logic programming (PLP), and learning algorithms for PLP have been developed primarily on the basis of maximum likelihood estimation or the optimization of discriminative criteria. This paper explores an alternative approach to parameter learning, learning to rank (rank learning), which has been studied mainly in the field of preference learning. We combine learning to rank with techniques developed in PLP to make the latter applicable to a variety of ranking problems such as information retrieval. We implement our approach in PRISM, a PLP system based on the distribution semantics. PRISM efficiently supports many parameter learning algorithms, such as the expectation-maximization algorithm, the variational Bayes algorithm, and an algorithm for Viterbi training, by mapping them onto a single data structure called an explanation graph. To achieve the same efficiency for parameter learning by learning to rank as in the current PRISM, we introduce a gradient-based learning method that takes advantage of dynamic programming on the explanation graph. This paper also presents three experimental results. The first uses synthetic data to examine the learning behavior of the proposed approach. The second uses a knowledge base (knowledge graph) and applies rank learning to a DistMult model for the task of deciding whether relations between entities exist. The last tackles parsing with a probabilistic context-free grammar whose parameters are learned from a tree corpus by rank learning. These experiments demonstrate the potential and effectiveness of learning to rank in PLP. We plan to release a new version of PRISM augmented with learning-to-rank capability in the near future.
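The abstract mentions applying rank learning to a DistMult model for deciding whether relations between entities hold. As a rough illustration of what a pairwise learning-to-rank objective looks like in that setting, and not the paper's PRISM-based method, the NumPy sketch below scores triples with DistMult and takes gradient steps on a logistic pairwise loss. The toy data, the specific loss, and all function names are assumptions made for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# pairwise learning-to-rank on a DistMult-style triple scorer.
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 8, 5, 2

# DistMult represents each entity and relation as a vector; the score of a
# triple (h, r, t) is the trilinear product sum_k E[h,k] * R[r,k] * E[t,k].
E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

def score(E, R, h, r, t):
    return float(np.sum(E[h] * R[r] * E[t]))

def pairwise_step(E, R, pos, neg, lr=0.05):
    """One SGD step on a logistic pairwise ranking loss
    L = log(1 + exp(-(score(pos) - score(neg)))), updating E and R in place."""
    (hp, rp, tp), (hn, rn, tn) = pos, neg
    margin = score(E, R, hp, rp, tp) - score(E, R, hn, rn, tn)
    g = -1.0 / (1.0 + np.exp(margin))      # dL/dmargin
    # Accumulate gradients first, since the two triples may share embeddings.
    gE, gR = np.zeros_like(E), np.zeros_like(R)
    gE[hp] += g * R[rp] * E[tp]            # d score(pos) / d E[hp], scaled by g
    gE[tp] += g * R[rp] * E[hp]
    gR[rp] += g * E[hp] * E[tp]
    gE[hn] -= g * R[rn] * E[tn]            # negative triple enters with -g
    gE[tn] -= g * R[rn] * E[hn]
    gR[rn] -= g * E[hn] * E[tn]
    E -= lr * gE                           # in-place parameter updates
    R -= lr * gR

# Train the model to rank an observed triple above a corrupted one.
positive, negative = (0, 0, 1), (0, 0, 3)
for _ in range(200):
    pairwise_step(E, R, positive, negative)
print(score(E, R, *positive) > score(E, R, *negative))  # expect True
```

The design choice here mirrors the ranking idea in the abstract: instead of maximizing the likelihood of observed triples, the loss only asks that a true triple be scored above a corrupted one, which is the pairwise form of learning to rank.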