Abstract

Sentiment analysis is a well-studied research direction in computational linguistics. Deep neural network models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), yield promising results on text classification tasks. RNN-based architectures such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks can process sequences of arbitrary length. However, using them in the feature extraction layer of a deep neural network increases the dimensionality of the feature space, and such models weight all features equally. To address these issues, we propose a bidirectional convolutional recurrent neural network architecture that uses two separate bidirectional LSTM and GRU layers to derive both past and future contexts by connecting two hidden layers of opposite directions to the same output. A group-wise enhancement mechanism is applied to the features extracted by the bidirectional layers: it divides the features into multiple groups, enhancing the important features in each group while weakening the less important ones. The proposed scheme employs convolution and pooling layers to extract high-level features and to reduce the dimensionality of the feature space. The experimental results indicate that the proposed bidirectional convolutional recurrent neural network architecture with the group-wise enhancement mechanism outperforms state-of-the-art results on sentiment analysis.
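The group-wise enhancement step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the mechanism splits the feature dimension of the bidirectional layers' output into groups, scores each timestep by its similarity to the group's mean feature, normalizes the scores, and applies a sigmoid gate so that important features are enhanced relative to less important ones. The function name and its parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_wise_enhance(x, num_groups, eps=1e-5):
    """Sketch of a group-wise enhancement mechanism (assumed form).

    x: (timesteps, features) array, e.g. concatenated BiLSTM/BiGRU outputs.
    Splits the feature axis into `num_groups` groups, scores each timestep
    by similarity to the group's mean feature, normalizes the scores over
    time, and gates the features with a sigmoid of those scores.
    """
    t, f = x.shape
    assert f % num_groups == 0, "feature dim must be divisible by num_groups"
    g = x.reshape(t, num_groups, f // num_groups)        # (t, G, f/G)
    mean = g.mean(axis=0, keepdims=True)                 # per-group global context
    score = (g * mean).sum(axis=-1)                      # (t, G) similarity scores
    score = (score - score.mean(axis=0)) / (score.std(axis=0) + eps)
    gate = sigmoid(score)[..., None]                     # (t, G, 1) gate in (0, 1)
    return (g * gate).reshape(t, f)                      # enhanced features
```

Because the gate lies in (0, 1), each feature's magnitude is scaled up or down relative to the others within its group, which matches the stated goal of strengthening important features and weakening the rest.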
