Abstract

Sequence labeling is a widely used approach to the CCG supertagging task, in which a supertag (lexical category) is assigned to each word in an input sentence. The major challenge in CCG supertagging is the large number of lexical categories. To address this, machine learning and deep learning methods have been applied and have achieved promising results. However, these models either rely on many hand-crafted features, in the case of machine learning methods, or use sentence-level representations that process a sequence without modeling the correlations between neighboring labels, which strongly influence the prediction of the current label, in the case of deep learning models. More recently, machine learning and deep learning models have been combined. In this paper, we use a combination of the Conditional Random Field and Bidirectional Long Short-Term Memory models. The model first learns a sentence representation that draws on both past and future input features, thanks to the Bidirectional Long Short-Term Memory architecture. The model then exploits sentence-level tag information, thanks to the Conditional Random Field. Combining Bidirectional Long Short-Term Memory and Conditional Random Field (BLSTM-CRF) models, we evaluate our approach on in-domain and out-of-domain datasets and, in both cases, achieve results at or close to the state of the art on the CCG supertagging task.
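To make the described architecture concrete, below is a minimal PyTorch sketch of a BLSTM-CRF tagger: the bidirectional LSTM produces per-token emission scores from both left and right context, and the CRF layer scores entire tag sequences so that transitions between neighboring supertags inform each prediction. The layer sizes, vocabulary size, and the use of the third-party pytorch-crf package are illustrative assumptions, not the authors' actual configuration.

```python
# A minimal BLSTM-CRF sketch, assuming the third-party pytorch-crf package
# (pip install pytorch-crf). All hyperparameters here are placeholders.
import torch
import torch.nn as nn
from torchcrf import CRF


class BLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM: each position sees both past and future context.
        self.blstm = nn.LSTM(embed_dim, hidden_dim // 2,
                             bidirectional=True, batch_first=True)
        # Per-token emission scores over the (large) supertag inventory.
        self.emissions = nn.Linear(hidden_dim, num_tags)
        # CRF layer: scores whole tag sequences via learned tag transitions.
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, words, tags, mask):
        feats, _ = self.blstm(self.embedding(words))
        # Negative sentence-level log-likelihood under the CRF.
        return -self.crf(self.emissions(feats), tags, mask=mask)

    def predict(self, words, mask):
        feats, _ = self.blstm(self.embedding(words))
        # Viterbi decoding returns the highest-scoring tag sequence.
        return self.crf.decode(self.emissions(feats), mask=mask)


# Toy usage: a batch of 2 sentences of length 5, with 10 hypothetical supertags.
model = BLSTMCRFTagger(vocab_size=1000, num_tags=10)
words = torch.randint(0, 1000, (2, 5))
tags = torch.randint(0, 10, (2, 5))
mask = torch.ones(2, 5, dtype=torch.bool)
print(model.loss(words, tags, mask))  # training objective
print(model.predict(words, mask))     # decoded supertag sequences
```

In this setup, the LSTM alone would pick each tag independently from its emission scores; the CRF replaces that per-token decision with a sequence-level one, which is exactly the sentence-level tag information the abstract refers to.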
