Abstract

Complex question answering (CQA) over knowledge bases (KBs) is a challenging task that has attracted increasing attention in recent years. Semantic parsing-based methods face challenges such as poor adaptability to incomplete KBs, large search spaces, and the high cost of annotating logical forms. For information retrieval-based methods, the gap between knowledge graph representations and question token embeddings leads to poor generalizability and uninterpretable reasoning. We propose an interpretable and scalable system called Predict, Pretrained, Select and Answer (PPSA) to solve CQA tasks over KBs. Our system first trains a language model to predict the reasoning paths required to answer a question. We select only the entities that the predicted reasoning paths pass through in the knowledge graph as candidate entities, reducing the amount of distracting information. The paths connecting the topic entity and the selected candidate entities, together with the question, are then fed into another language model for answer prediction. Before training, the answer prediction module loads the parameters of the trained path prediction module to improve accuracy. The system reduces the search space by predicting paths and does not require expensive logical form annotation. Because the textual paths serve as input to the language model, the gap between graph representations and token embeddings is bridged. We analyse the system's reasoning ability over knowledge graphs with different degrees of sparsity and evaluate its generalizability. Experimental results on the ComplexWebQuestions and WebQuestionsSP datasets demonstrate the effectiveness of our approach for the CQA task.
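To make the described pipeline concrete, here is a minimal sketch of a PPSA-style flow: a language model predicts a textual reasoning path, candidate entities are collected by walking that path from the topic entity, and a second language model initialised with the path predictor's weights produces the answer. The model name, prompt formats, and the `kb.neighbors` interface are assumptions for illustration, not the paper's actual implementation.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-base"  # assumption: any seq2seq LM could play both roles
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# 1) Path prediction: generate a textual reasoning path for the question.
path_model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def predict_path(question: str) -> str:
    inputs = tokenizer(question, return_tensors="pt")
    out = path_model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# 2) Candidate selection: keep only entities reached by following the predicted
#    relation path from the topic entity (kb.neighbors is a hypothetical KB API).
def select_candidates(kb, topic_entity: str, relation_path: list) -> list:
    frontier = {topic_entity}
    for relation in relation_path:
        frontier = {t for h in frontier for t in kb.neighbors(h, relation)}
    return sorted(frontier)

# 3) Answer prediction: initialise from the trained path predictor's parameters,
#    then read question + textual path to a candidate and generate the answer.
answer_model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
answer_model.load_state_dict(path_model.state_dict())  # parameter transfer

def answer(question: str, topic_entity: str, candidate: str, path_text: str) -> str:
    prompt = f"question: {question} path: {topic_entity} {path_text} {candidate}"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = answer_model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

In this sketch the parameter transfer in step 3 mirrors the abstract's statement that the answer prediction module loads the trained path prediction module's weights before its own training; parsing the generated path string into a relation list is omitted.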
