Abstract

Deep Neural Network (DNN) models have shown remarkable results in Natural Language Processing (NLP) and Computer Vision (CV) applications. Convolutional Neural Networks (CNN), variants of Recurrent Neural Networks (RNN), and Transformer models are the architectures commonly used for these tasks. RNNs can capture long-term dependencies but suffer from the vanishing gradient problem. Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU) were introduced to overcome the limitations of traditional recurrent networks. Bidirectional processing combined with the recurrent structure captures both past and future context. Not all components of the input sequence are relevant and essential for generating the target, so an attention mechanism is used to identify the crucial chunks of the input; visualizing the attention weights illustrates the interpretability of what the model has learned. Suggestion mining is treated as a text classification task: classifying a given sentence as a suggestion or a non-suggestion. This paper presents explainable systems for suggestion mining using different RNNs and hybrid models, with and without attention components. Experimental results on standard datasets demonstrate that the models with local and global attention outperform the others. Heat map visualization over the input sentence also shows that the Bi-LSTM model with an attention mechanism correctly captures the words carrying suggestive intent.
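As a rough illustration of the kind of model the abstract describes, the sketch below implements a Bi-LSTM suggestion classifier with a simple additive attention layer in PyTorch, returning the per-token attention weights so they can be plotted as a heat map. This is not the authors' exact architecture; the class name, hyperparameters, and vocabulary size are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's exact model) of a Bi-LSTM
# classifier with additive attention for suggestion mining.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuggestionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Additive attention: score each time step, then softmax over steps.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # Two output classes: suggestion vs. non-suggestion.
        self.classifier = nn.Linear(2 * hidden_dim, 2)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)     # (batch, seq, embed)
        states, _ = self.bilstm(embedded)        # (batch, seq, 2*hidden)
        scores = self.attn(states).squeeze(-1)   # (batch, seq)
        weights = F.softmax(scores, dim=-1)      # per-token attention weights
        # Weighted sum of the Bi-LSTM states gives the sentence representation.
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        logits = self.classifier(context)
        # Returning `weights` allows heat-map visualization over the tokens.
        return logits, weights

# Toy usage with random token ids; each row of `weights` can be rendered
# as a heat map to inspect which words the model attends to.
model = SuggestionClassifier(vocab_size=5000)
batch = torch.randint(1, 5000, (2, 12))          # 2 sentences, 12 tokens each
logits, weights = model(batch)
print(logits.shape, weights.shape)               # (2, 2) and (2, 12)
```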
