Abstract
Syntactic dependency structures are commonly used as language-agnostic features to address word-order differences in zero-shot cross-lingual relation and event extraction. However, since the same meaning can be expressed in many different forms, syntactic structures may vary considerably across sentences. Semantics, by contrast, offer a more consistent analysis of sentences and can serve as another bridge between languages, yet they are rarely considered. In this article, we therefore introduce the Syntax and Semantic Driven Network (SSDN) to exploit syntactic and semantic knowledge across languages simultaneously. Specifically, predicate–argument structures from semantic role labelling are explicitly incorporated into word representations. A semantic-aware relational graph convolutional network and a transformer-based encoder then model the semantic dependency and syntactic dependency structures, respectively. Finally, a fusion module adaptively integrates the output representations. We conduct experiments on the widely used Automatic Content Extraction (ACE) 2005 English, Chinese, and Arabic datasets. The evaluation results demonstrate that the proposed method achieves state-of-the-art performance. Further analysis also indicates that SSDN produces robust representations that facilitate transfer across languages.