Abstract

Classification of text documents is commonly carried out using bag-of-words models built with feature selection methods. In these models, the selected features are used as input to well-known classifiers such as Support Vector Machines (SVMs) and neural networks. In recent years, word embeddings have been developed for text mining, and deep learning models using word embeddings have become popular for sentiment classification. However, no extensive study has been conducted to compare these approaches for sentiment classification. In this paper, we present an in-depth comparative study of the two types of approaches, feature selection based approaches and deep learning models, for document-level sentiment classification. Experiments were conducted on four datasets with varying characteristics. To investigate the effectiveness of word embedding features, feature sets combining selected bag-of-words features with averaged word embedding features were used in sentiment classification. To analyze deep learning models, we implemented three different architectures: a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a long-term recurrent convolutional network. Our experimental results show that deep learning models performed better on three of the four datasets, while a combination of selected bag-of-words features and averaged word embedding features gave the best performance on the remaining dataset. In addition, we show that a deep learning model initialized with either one-hot vectors or fine-tuned word embeddings performed better than the same model initialized with word embeddings that were not fine-tuned.
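The combined feature set described above can be illustrated with a minimal sketch: each document's averaged word embedding is concatenated with its counts over a selected bag-of-words vocabulary. The toy embedding table, vocabulary, and function names here are illustrative assumptions, not the paper's actual setup, which would use pretrained embeddings (e.g. word2vec or GloVe) and a feature selection method to choose the vocabulary.

```python
import numpy as np

# Hypothetical toy embedding table; real experiments would load
# pretrained vectors such as word2vec or GloVe.
EMBEDDINGS = {
    "good": np.array([0.9, 0.1]),
    "movie": np.array([0.2, 0.3]),
    "bad": np.array([-0.8, 0.2]),
}
DIM = 2  # embedding dimensionality of the toy table

def averaged_embedding(tokens):
    """Average the embedding vectors of tokens present in the table."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vecs:
        return np.zeros(DIM)
    return np.mean(vecs, axis=0)

def combined_features(tokens, selected_vocab):
    """Concatenate selected bag-of-words counts with the averaged embedding."""
    bow = np.array([tokens.count(w) for w in selected_vocab], dtype=float)
    return np.concatenate([bow, averaged_embedding(tokens)])

# Example document and a (hypothetical) selected vocabulary.
doc = ["good", "movie", "good"]
features = combined_features(doc, ["good", "bad"])
```

The resulting vector (here 2 bag-of-words counts plus a 2-dimensional averaged embedding) would then be fed to a classifier such as an SVM.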

