Abstract

Neural context-aware models for slot tagging in language understanding have achieved state-of-the-art performance, especially deep contextualized models such as ELMo and BERT. However, the presence of out-of-vocabulary (OOV) words significantly degrades the performance of neural models, especially in few-shot scenarios. In this paper, we propose a novel knowledge-aware slot tagging model that integrates the contextual representation of the input text with large-scale lexical background knowledge. In addition, we use multi-level graph attention to reason explicitly over lexical relations. We aim to leverage both the linguistic regularities captured by deep language models (LMs) and the high-quality background knowledge derived from curated knowledge bases (KBs). Consequently, our model can infer rare and unseen words in the test set by combining contextual semantics learned from the training data with lexical relations from the ontology. Experiments show that the proposed knowledge integration mechanism achieves consistent improvements across training-set sizes on two public benchmark datasets. Detailed analysis further shows that incorporating background knowledge effectively alleviates data scarcity.
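To make the knowledge-integration idea concrete, the sketch below shows one plausible way to attend from a token's contextual embedding over the embeddings of KB concepts linked to that token, producing a knowledge-enriched representation for the slot classifier. This is a minimal illustration under assumed names and dimensions, not the paper's actual model: the `KnowledgeAttention` class, the retrieval of concepts, and the single-level attention step are all simplifications of the multi-level graph attention described in the abstract.

```python
# Hypothetical sketch of knowledge-aware token enrichment (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeAttention(nn.Module):
    def __init__(self, hidden_dim: int, concept_dim: int):
        super().__init__()
        self.proj = nn.Linear(concept_dim, hidden_dim)   # map KB concept embeddings into the LM space
        self.score = nn.Linear(2 * hidden_dim, 1)        # attention scorer over candidate concepts

    def forward(self, token_vec: torch.Tensor, concept_vecs: torch.Tensor) -> torch.Tensor:
        # token_vec:    (hidden_dim,)              contextual embedding of one token (e.g. from BERT)
        # concept_vecs: (num_concepts, concept_dim) embeddings of KB concepts retrieved for the token
        concepts = self.proj(concept_vecs)                         # (num_concepts, hidden_dim)
        expanded = token_vec.unsqueeze(0).expand_as(concepts)      # repeat token for pairwise scoring
        logits = self.score(torch.cat([expanded, concepts], -1))   # (num_concepts, 1)
        weights = F.softmax(logits, dim=0)                         # attention over concepts
        knowledge = (weights * concepts).sum(dim=0)                # weighted knowledge summary
        return torch.cat([token_vec, knowledge], dim=-1)           # knowledge-enriched token vector

# Usage with random stand-in embeddings:
layer = KnowledgeAttention(hidden_dim=768, concept_dim=300)
token = torch.randn(768)           # e.g. a BERT token embedding
concepts = torch.randn(5, 300)     # e.g. 5 ontology concepts linked to the token
enriched = layer(token, concepts)  # (1536,) would feed the slot-tagging classifier
```

Intuitively, even when a token is rare or unseen in training, its linked KB concepts (and their relational neighbors, in the multi-level case) supply semantics the classifier can exploit, which is the mechanism the abstract credits for the few-shot gains.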
