Abstract
The growing interaction between humans and machines increases the need for more sophisticated natural language understanding tools. Dependency parsing is crucial for capturing the semantics of a sentence. Although graph-based dependency parsers outperform transition-based ones because they are not exposed to the error propagation that affects their counterparts, their feature space is comparatively limited. Thus, the main challenge in graph-based parsing is expanding the set of features to improve performance. In this research, we propose to expand the feature space of graph-based parsers. To benefit from the global meaning of the entire sentence, we employ the sentence representation as an additional token feature. Also, to highlight the local word interactions that build sub-tree structures, we apply convolutional neural network layers over token embeddings. We achieve state-of-the-art results for Turkish, English, Hungarian, and Korean, with unlabeled and labeled attachment scores on the test sets of 82.64% and 76.35% on the Turkish IMST, 93.36% and 91.34% on the English EWT, 90.85% and 87.39% on the Hungarian Szeged, and 92.44% and 89.58% on the Korean GSD treebanks, respectively. Our experimental findings show that these augmented global and local features improve the performance of graph-based dependency parsers.
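The feature augmentation described above can be sketched in a minimal form. This is an illustrative NumPy toy, not the paper's implementation: the kernel width, dimensions, mean-pooling as the sentence representation, and ReLU activation are all assumptions made for the sketch. It shows the two augmentations side by side: a convolution over token embeddings for local context, and a pooled sentence vector appended to every token for global context.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(tokens, kernel):
    # tokens: (seq_len, d_in); kernel: (width, d_in, d_out).
    # Same-length 1D convolution with zero padding, followed by ReLU.
    w, d_in, d_out = kernel.shape
    pad = w // 2
    padded = np.pad(tokens, ((pad, pad), (0, 0)))
    out = np.empty((tokens.shape[0], d_out))
    for i in range(tokens.shape[0]):
        window = padded[i:i + w]  # (w, d_in) slice around token i
        out[i] = np.tensordot(window, kernel, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def augment(tokens, kernel):
    # Local features: CNN over neighboring token embeddings.
    local = conv1d(tokens, kernel)
    # Global feature: a sentence representation (mean pooling here,
    # purely for illustration) tiled onto every token.
    sent = np.tile(tokens.mean(axis=0), (tokens.shape[0], 1))
    return np.concatenate([tokens, local, sent], axis=1)

tokens = rng.normal(size=(7, 16))   # 7 tokens, 16-dim embeddings (hypothetical)
kernel = rng.normal(size=(3, 16, 8))  # width-3 convolution with 8 filters
feats = augment(tokens, kernel)
print(feats.shape)  # (7, 40): original 16 + 8 local + 16 global dims per token
```

The augmented token features would then feed the graph-based parser's scoring of candidate head-dependent arcs.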