Abstract

Implicit discourse relation recognition is a challenging task because the informative clues provided by explicit connectives are absent. An implicit discourse relation recognizer must carefully model the semantics of sentence pairs while coping with severe data sparsity. In this article, we learn token embeddings that encode the dependency structure of a sentence in their representations and use them to initialize a baseline model, making it substantially stronger. We then propose a novel memory component that tackles the data sparsity issue by allowing the model to master the entire training set, which yields further performance gains. The memory mechanism memorizes information by pairing the representations and discourse relations of all training instances, thereby alleviating the data-hungry nature of current implicit discourse relation recognizers. The proposed memory component can be attached to any suitable baseline to improve its performance. Experiments show that our full model, which memorizes the entire training set, achieves excellent results on the PDTB and CDTB datasets, outperforming the baselines by a clear margin.
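To make the memory idea concrete, the sketch below illustrates one plausible reading of such a component: a key-value store that pairs encoder representations of training instances with their discourse-relation labels and, at inference time, attends over all stored pairs to produce a relation distribution. This is a minimal, hypothetical illustration in Python/NumPy, not the authors' implementation; the class name `MemoryComponent`, the cosine-similarity retrieval, and the interpolation with the baseline classifier are assumptions.

```python
import numpy as np

class MemoryComponent:
    """Hypothetical sketch: a memory pairing training-instance
    representations (keys) with discourse-relation labels (values)."""

    def __init__(self, num_relations, temperature=1.0):
        self.keys = []        # sentence-pair representations from the encoder
        self.values = []      # one-hot discourse-relation labels
        self.num_relations = num_relations
        self.temperature = temperature

    def write(self, representation, relation_id):
        """Store one training instance as a (representation, relation) pair."""
        self.keys.append(np.asarray(representation, dtype=np.float32))
        one_hot = np.zeros(self.num_relations, dtype=np.float32)
        one_hot[relation_id] = 1.0
        self.values.append(one_hot)

    def read(self, query):
        """Attend over all memorized instances and return a relation
        distribution weighted by cosine similarity to the query."""
        keys = np.stack(self.keys)                      # (N, d)
        values = np.stack(self.values)                  # (N, R)
        query = np.asarray(query, dtype=np.float32)
        sims = keys @ query / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        weights = np.exp(sims / self.temperature)
        weights /= weights.sum()
        return weights @ values                         # (R,)

# Hypothetical usage: blend the memory's prediction with the baseline output.
# memory = MemoryComponent(num_relations=4)
# for rep, rel in training_pairs:      # (vector, label) pairs from training
#     memory.write(rep, rel)
# blended = 0.5 * baseline_probs + 0.5 * memory.read(test_rep)
```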
