Abstract

A sentence is composed of linguistically linked units, such as words or phrases. The dependencies between these units form the linguistic structure of a sentence, which conveys the meanings of the units and encodes the syntactic and semantic relationships between them. Learning the linguistic structure of a sentence is therefore important for entity relation extraction and other natural language processing (NLP) tasks. Related work usually adopts manual rules or dependency trees to capture linguistic structures; these methods depend heavily on prior knowledge or external toolkits. In this paper, we introduce a Supervised Graph Autoencoder Network (SGAN) model that automatically learns the linguistic structure of a sentence. Unlike traditional graph neural networks, which use a fixed adjacency matrix initialized with prior knowledge, the SGAN model contains a learnable adjacency matrix that is dynamically tuned by a task-relevant learning objective, allowing it to learn linguistic structures directly from raw input sentences. Evaluated on seven public datasets, the SGAN achieves state-of-the-art (SOTA) performance, outperforming all compared models. The results show that automatically learned linguistic structures outperform manually designed linguistic patterns, exhibiting great potential for entity relation extraction and other NLP tasks.
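To make the core idea concrete, the sketch below contrasts a fixed adjacency matrix with a learnable one. This is not the paper's implementation; it is a minimal illustrative forward pass in NumPy, where `adj_scores` stands in for the learnable adjacency parameters (in the actual model they would be tuned by backpropagation through a task-relevant loss) and all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 5 tokens, 8-dim input features, 4-dim output.
n_tokens, d_in, d_out = 5, 8, 4

# In a traditional GNN, the adjacency matrix is fixed, e.g. built from a
# dependency parse. Here the adjacency *scores* are a free parameter that a
# task-relevant objective could tune, which is the idea the abstract describes.
adj_scores = rng.normal(size=(n_tokens, n_tokens))  # learnable (illustrative)
W = rng.normal(size=(d_in, d_out))                  # layer weight matrix
H = rng.normal(size=(n_tokens, d_in))               # token representations

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Row-normalize the learned scores into a soft adjacency matrix, then
# propagate token features along the learned edges (one graph-conv step).
A = softmax(adj_scores, axis=-1)
H_out = np.maximum(A @ H @ W, 0.0)  # ReLU activation

print(H_out.shape)  # (5, 4): one updated vector per token
```

Because `A` is produced from trainable scores rather than a parser's output, gradients from the downstream task can reshape the sentence graph itself, which is what removes the dependence on prior knowledge or external toolkits.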

