Abstract

Tree-structured neural networks, such as TreeLSTM and its variants, have proven effective for learning semantic representations of sentences, which are useful for a variety of natural language processing tasks such as text categorisation, text semantic matching and machine translation. These models take as input the parse trees of sentences, which are produced by a syntactic parser. However, most existing tree-structured neural network models cannot distinguish between different syntactic compositions, which limits their expressive power. Moreover, the syntactic knowledge provided by Part-of-Speech (POS) tags in a parse tree has not been fully utilised in existing tree-structured neural network models. Such syntactic knowledge is expected to help distinguish syntactic compositions and should therefore yield better semantic representations. This paper proposes a novel neural network model, TagHyperTreeLSTM, which consists of two components: a tag-aware hypernetwork and a sentence encoder. The tag-aware hypernetwork takes POS tags as input and dynamically generates the parameters of the sentence encoder, enabling the model to distinguish different syntactic compositions. The sentence encoder takes words as input and produces the final sentence representation. Experimental results on six benchmark datasets show that the proposed model achieves superior or competitive performance on text classification and text semantic matching when compared against previous tree-structured models.
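To make the two-component design concrete, below is a minimal sketch of how a tag-aware hypernetwork can generate the parameters of a binary TreeLSTM composition step. This is an illustrative assumption, not the paper's actual implementation: the class and function names (TagHypernetwork, compose), the dimensions, and the choice to generate the full gate matrix directly from the tag embedding are all hypothetical; the paper's parameterisation may differ.

```python
import torch
import torch.nn as nn

class TagHypernetwork(nn.Module):
    """Hypothetical sketch: maps a POS-tag embedding to the parameters
    of one binary TreeLSTM composition step."""
    def __init__(self, num_tags, tag_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.tag_embed = nn.Embedding(num_tags, tag_dim)
        # Five gate blocks: input, output, update, and two forget gates,
        # each transforming the concatenated child hidden states.
        out_size = 5 * hidden_dim
        self.gen_W = nn.Linear(tag_dim, out_size * 2 * hidden_dim)
        self.gen_b = nn.Linear(tag_dim, out_size)

    def forward(self, tag_id):
        t = self.tag_embed(tag_id)  # (tag_dim,)
        W = self.gen_W(t).view(5 * self.hidden_dim, 2 * self.hidden_dim)
        b = self.gen_b(t)
        return W, b

def compose(W, b, h_l, c_l, h_r, c_r, hidden_dim):
    """Binary TreeLSTM composition using tag-generated parameters."""
    g = W @ torch.cat([h_l, h_r]) + b
    i, o, u, f_l, f_r = g.split(hidden_dim)
    c = (torch.sigmoid(i) * torch.tanh(u)
         + torch.sigmoid(f_l) * c_l
         + torch.sigmoid(f_r) * c_r)
    h = torch.sigmoid(o) * torch.tanh(c)
    return h, c

# Usage: compose two child nodes under a hypothetical tag id (e.g. "NP" = 7).
hyper = TagHypernetwork(num_tags=45, tag_dim=16, hidden_dim=64)
h_l, c_l = torch.zeros(64), torch.zeros(64)
h_r, c_r = torch.zeros(64), torch.zeros(64)
W, b = hyper(torch.tensor(7))
h, c = compose(W, b, h_l, c_l, h_r, c_r, hidden_dim=64)
```

In this sketch, the sentence encoder's composition weights are not shared across all tree nodes, as in a standard TreeLSTM, but are conditioned on the node's POS tag, which is the mechanism the abstract describes for distinguishing syntactic compositions.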
