Abstract

Text classification has long been one of the major problems in natural language processing. With the advent of deep learning, convolutional neural networks (CNNs) have become a popular solution to this task. However, CNNs, which were first proposed for images, face several crucial challenges in the context of text processing, namely in their elementary building blocks: convolution filters and max pooling. These challenges have largely been overlooked by most existing CNN models proposed for text classification. In this paper, we present an experimental study of the fundamental building blocks of CNNs in text categorization. Based on this critique, we propose the Sequential Convolutional Attentive Recurrent Network (SCARN). The proposed SCARN model exploits the advantages of both recurrent and convolutional structures more efficiently than previously proposed recurrent convolutional models. We test our model on text classification datasets spanning tasks such as sentiment analysis and question classification. Extensive experiments establish that SCARN outperforms other recurrent convolutional architectures with significantly fewer parameters. Furthermore, SCARN achieves better performance than various equally large deep CNN and LSTM architectures.

Highlights

  • Text classification is one of the major applications of Natural Language Processing (NLP)

  • Sentiment analysis of product reviews, language detection, and topic classification of news articles are typical text classification problems

  • Very deep Convolutional Neural Network (CNN) architectures based on character-level features (Zhang et al., 2015) and word-level features (Conneau et al., 2016) significantly improved performance in text classification


Summary

Introduction

Text classification is one of the major applications of Natural Language Processing (NLP). Frequency-based feature selection techniques such as Bag-of-Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) have long been used to extract features, which are then fed to machine learning classifiers such as Logistic Regression (LR) or Naive Bayes (NB). These approaches provide strong baselines for text classification (Wang and Manning, 2012). Proposed CNN architectures apply a max pooling operation over the convolution outputs to capture the most important features (Kim, 2014). This raises the question of whether the feature selected by max pooling is always the most important feature of the input. Based on this critique, we propose the Sequential Convolutional Attentive Recurrent Network (SCARN) model for effective text classification. The results show that the proposed model achieves better results with fewer parameters than other recurrent convolutional architectures.
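
To make the frequency-based baseline concrete, here is a minimal sketch, not code from the paper, of a TF-IDF plus Logistic Regression classifier in the spirit of Wang and Manning (2012); the tiny corpus and labels are invented purely for illustration.

```python
# A minimal sketch (not code from the paper) of the frequency-based
# baseline described above: TF-IDF features fed into Logistic Regression,
# in the spirit of Wang and Manning (2012). The corpus and labels below
# are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great movie, loved the acting",
    "terrible plot and weak characters",
    "what a wonderful film",
    "boring and badly written",
]
labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

# TF-IDF turns each document into a weighted term-frequency vector;
# the linear classifier is then trained on those vectors.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved this wonderful film"]))  # expected: [1]
```

Similarly, the convolution-plus-max-pooling block that the paper critiques can be sketched as follows. This is an illustrative PyTorch reconstruction of a Kim (2014)-style layer, not the authors' implementation; the vocabulary size, dimensions, and filter count are assumed for the example.

```python
# An illustrative reconstruction of the Kim (2014)-style block the paper
# critiques: a 1D convolution over word embeddings followed by max pooling
# over time. All sizes here are assumptions made for the sketch.
import torch
import torch.nn as nn

vocab_size, embed_dim, num_filters, kernel_size = 5000, 100, 64, 3

embedding = nn.Embedding(vocab_size, embed_dim)
conv = nn.Conv1d(in_channels=embed_dim, out_channels=num_filters,
                 kernel_size=kernel_size)

tokens = torch.randint(0, vocab_size, (1, 20))  # one 20-word "sentence"
x = embedding(tokens).transpose(1, 2)           # (1, embed_dim, seq_len)
features = torch.relu(conv(x))                  # (1, num_filters, seq_len - 2)

# Max pooling over time: each filter contributes exactly one value, the
# maximum activation anywhere in the sentence, and its position is lost.
pooled = features.max(dim=2).values             # (1, num_filters)
print(pooled.shape)                             # torch.Size([1, 64])
```

Because pooling keeps a single maximum per filter and discards where it occurred, whether that value is really the most important feature of the input is exactly the question the paper's experimental study raises.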

Recurrent Convolutional Networks
Attention Mechanism
Convolution operation
Max pooling operation
Overview
Convolution Recurrent subnetwork
Datasets: we tested our model on standard benchmark datasets
Baselines
Implementation
Results and Discussion
Conclusion
Appendix A. Max Pooling: misclassified examples
