Abstract

Entity linking (EL) is a natural language processing task that links mentions of entities in text to their corresponding entities in a knowledge base. Potential applications include question-answering systems, information extraction, and knowledge base population (KBP). The key to building a high-quality EL system lies in careful representations of words and entities. However, most previous methods assume that all words carry the same weight within their context, which biases the learned meanings of words. In this paper, a novel approach that links words and knowledge-base entities using attention-based bilinear joint learning is proposed. First, the approach designs a novel encoding method to model entities and words in EL: it learns words and entities jointly and uses an attention mechanism to assign different importance values to words in the context. Second, the approach introduces a weighted summation method to form the textual context feature and applies the same reasoning to model entity coherence, improving the ranking features. Finally, the approach employs a pairwise boosting regression tree (PBRT) to rank the candidate entities, taking as input both the features constructed with the weighted summation model and conventional EL features. Experiments demonstrate that, compared with other state-of-the-art methods, the proposed model learns embeddings efficiently and improves EL performance, achieving improved results on the CoNLL and TAC 2010 datasets.
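To make the weighted summation concrete, the following is a minimal illustrative sketch (not the authors' implementation) of an attention-weighted textual-context feature: each context word is scored against a candidate entity through an assumed bilinear form, the scores are normalized with a softmax, and the feature is the weighted sum of the context word embeddings. All dimensions, names, and the scoring form are assumptions for illustration.

# Hedged sketch of the attention-weighted summation described in the
# abstract; the bilinear attention matrix A and all shapes are
# illustrative assumptions, not the paper's exact parameterization.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def textual_context_feature(context_word_vecs, entity_vec, A):
    """context_word_vecs: (n, d) embeddings of the context words.
    entity_vec: (d,) candidate entity embedding.
    A: (d, d) assumed bilinear attention matrix (learned in training)."""
    scores = context_word_vecs @ A @ entity_vec   # one score per context word
    weights = softmax(scores)                     # per-word importance values
    return weights @ context_word_vecs            # weighted summation -> (d,)

rng = np.random.default_rng(0)
d, n = 50, 8
ctx = rng.normal(size=(n, d))                     # toy context embeddings
ent = rng.normal(size=d)                          # toy entity embedding
A = rng.normal(size=(d, d)) * 0.1
print(textual_context_feature(ctx, ent, A).shape) # (50,)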

Highlights

  • Entity linking (EL) is highly important for building the Semantic Web and is an important part of knowledge discovery in text

  • The features of textual context and entity coherence were studied in EL, and both were modeled in a new weighting way with the trained embeddings, which benefits the EL algorithm

  • The bilinear joint learning model (BJLM) [12] uses a mapping-matrix method to jointly learn the embeddings of entities and words defined in different spaces, as sketched below
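A minimal sketch of the bilinear idea behind the highlight above, under the assumption that words and entities are embedded in separate spaces and a learned mapping matrix scores compatibility between them; the function name, shapes, and scoring form are illustrative, not BJLM's actual definition.

# Hypothetical bilinear compatibility score between a word embedding
# and an entity embedding living in different spaces; the mapping
# matrix W is an assumed stand-in for the learned mapping in [12].
import numpy as np

def bilinear_score(word_vec, entity_vec, W):
    """word_vec: (d_w,), entity_vec: (d_e,), W: (d_w, d_e) mapping matrix.
    Returns the scalar compatibility s = w^T W e."""
    return word_vec @ W @ entity_vec

rng = np.random.default_rng(1)
d_w, d_e = 100, 64
w = rng.normal(size=d_w)                 # toy word embedding
e = rng.normal(size=d_e)                 # toy entity embedding
W = rng.normal(size=(d_w, d_e)) * 0.05   # assumed learned mapping
print(bilinear_score(w, e, W))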

Summary

INTRODUCTION

Entity linking (EL) is highly important for building the Semantic Web and is an important part of knowledge discovery in text. Yamada et al. [11] introduced the hypothesis that words and entities are distributed in the same space and used a joint embedding model to learn both. When training embeddings of target words or mentions, different levels of influence should be assigned to each context word to better capture its meaning. This paper proposes an attention-based bilinear joint learning (ABJL) model to embed words and entities for EL. On both datasets, the proposed method outperforms current advanced methods. In addition, the features of textual context and entity coherence were studied, and both were modeled in a new weighting way with the trained embeddings, which benefits the EL algorithm.
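The entity-coherence feature can be pictured in the same weighted way: under the assumption that coherence is an attention-weighted aggregate of a candidate entity's similarity to the other entities mentioned in the document, a hypothetical sketch follows (the names and the aggregation form are illustrative, not the paper's exact formulation).

# Illustrative entity-coherence score: similarities between the
# candidate and the document's other entities are softmax-weighted
# and summed, mirroring the weighted textual-context feature.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coherence_feature(candidate_vec, doc_entity_vecs):
    """candidate_vec: (d,) candidate entity embedding.
    doc_entity_vecs: (m, d) embeddings of other entities in the document."""
    sims = doc_entity_vecs @ candidate_vec   # raw similarity per entity
    weights = softmax(sims)                  # per-entity importance values
    return float(weights @ sims)             # weighted coherence score

rng = np.random.default_rng(2)
cand = rng.normal(size=32)                   # toy candidate embedding
others = rng.normal(size=(5, 32))            # toy document entities
print(coherence_feature(cand, others))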

RELATED WORK
SKIP-GRAM MODEL
SIMPLIFIED JOINT LEARNING MODEL
ENTITY LINKING USING LEARNED EMBEDDING
MODELING TEXTUAL CONTEXT INFORMATION
EXPERIMENT
Findings
CONCLUSION