Abstract

Sentiment classification has attracted considerable attention in natural language processing (NLP) and understanding in recent years. Recurrent neural networks (RNNs) are widely used to classify variable-length sentences, but the standard RNN can access only the preceding context of a sentence. In this paper, a new architecture termed Comprehensive Attention Recurrent Neural Networks (CA-RNN), which can store the preceding, succeeding, and local contexts of any position in a sequence, is developed. A bidirectional recurrent neural network (BRNN) is used to access past and future information, while a convolutional layer is employed to capture local information. The standard RNN is also replaced by two recently emerged variants, long short-term memory (LSTM) and the gated recurrent unit (GRU), to enhance the effectiveness of the new architecture. Another salient feature of the proposed model is that it can be trained end-to-end without human intervention and is easy to implement. We conduct experiments on several sentiment-labeled datasets and analysis tasks. The results demonstrate that capturing comprehensive contextual information significantly improves classification accuracy over standard recurrent models, and that the new models achieve competitive performance compared with state-of-the-art approaches.
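
To make the architecture concrete, below is a minimal sketch of a CA-RNN-style classifier, assuming PyTorch (the abstract names no framework): a bidirectional GRU supplies the preceding and succeeding contexts, a 1-D convolution supplies the local context, and the two are fused by concatenation followed by mean pooling. The fusion strategy, pooling, and all hyperparameters here are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class CARNNSketch(nn.Module):
    """Illustrative sketch of a CA-RNN-style sentence classifier (assumptions noted above)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 conv_channels=128, kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional GRU: forward pass covers preceding context,
        # backward pass covers succeeding context.
        self.birnn = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # 1-D convolution over the token dimension: a fixed window of
        # local context around each position.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size,
                              padding=kernel_size // 2)
        self.classifier = nn.Linear(2 * hidden_dim + conv_channels, num_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.embed(tokens)                    # (batch, seq_len, embed_dim)
        rnn_out, _ = self.birnn(x)                # (batch, seq_len, 2*hidden_dim)
        conv_out = self.conv(x.transpose(1, 2))   # (batch, channels, seq_len)
        conv_out = conv_out.transpose(1, 2)       # (batch, seq_len, channels)
        # Concatenate global (bidirectional) and local (convolutional) features,
        # then average over positions to obtain a sentence representation.
        features = torch.cat([rnn_out, conv_out], dim=-1)
        pooled = features.mean(dim=1)
        return self.classifier(pooled)            # (batch, num_classes) logits

# Example usage: 4 sentences of 20 token ids each.
model = CARNNSketch(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 20)))
```

Because the whole pipeline is differentiable, the model can be trained end-to-end with a standard cross-entropy loss, matching the abstract's claim that no human intervention is required during training.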
