Abstract

Determining the temporal relationship between events has long been a challenging natural language understanding task. Previous research mainly relies on neural networks to learn effective features, or on hand-crafted linguistic features, to extract temporal relations; these approaches often fail when the context between two events is complex or extensive. In this paper, we propose JSSA (Joint Semantic and Syntactic Attention), a model that combines coarse-grained information at the semantic level with fine-grained information at the syntactic level. We use the neighbor triples of events on syntactic dependency trees, together with the event triples themselves, to construct a syntactic attention that serves as clue information and prior guidance for analyzing contextual information. Experimental results on the TB-Dense and MATRES datasets demonstrate the effectiveness of our approach.
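The abstract's notion of "neighbor triples of events on syntactic dependency trees" can be illustrated with a minimal sketch. Everything below (the edge representation, the `neighbor_triples` helper, and the toy parse) is an illustrative assumption, not the paper's actual implementation:

```python
# Hypothetical sketch: collect the dependency triples adjacent to an
# event token, which could then feed a syntactic attention component.
# Data structures and names are illustrative assumptions.

def neighbor_triples(edges, event):
    """Return the (head, relation, dependent) triples touching `event`.

    edges: list of (head_index, relation_label, dependent_index)
    event: token index of the event word
    """
    return [(h, rel, d) for (h, rel, d) in edges if h == event or d == event]

# Toy dependency parse of "The court delayed the hearing yesterday."
# Tokens indexed 0..5; the event word is "delayed" at index 2.
edges = [
    (2, "nsubj",  1),  # delayed -> court
    (2, "obj",    4),  # delayed -> hearing
    (2, "advmod", 5),  # delayed -> yesterday
    (4, "det",    3),  # hearing -> the
    (1, "det",    0),  # court -> the
]

print(neighbor_triples(edges, 2))
```

Restricting attention to these event-adjacent triples gives a compact, fine-grained view of the syntactic context around each event, as opposed to attending over the full sentence.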
