Abstract

Convolutional Neural Networks (CNNs) effectively extract local features from input data. However, CNNs built on word embeddings and convolution layers display poor performance on text classification tasks when compared with traditional baseline methods. We address this problem and propose a model named NNGN, which simplifies the CNN by replacing its convolution layer with a pooling layer that extracts n-gram embeddings more simply and obtains document representations via linear computation. We implement two settings in our model to extract n-gram features: in the first, seq-NNGN, we consider word order within each n-gram; in the second, BoW-NNGN, we do not. We compare the performance of these two settings with that of other models on different classification tasks. The experimental results show that our proposed model outperforms state-of-the-art models.
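
The abstract gives no implementation details, so the following is only a minimal sketch of the idea it describes: replace the convolution layer with linear n-gram embedding plus pooling, with a flag switching between the seq-NNGN (order-aware) and BoW-NNGN (order-free) settings. PyTorch, the class name NNGNSketch, the use of max pooling, and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NNGNSketch(nn.Module):
    """Hypothetical sketch of the NNGN idea from the abstract: pooled
    n-gram embeddings and a linear classifier instead of convolution."""

    def __init__(self, vocab_size, embed_dim, n, num_classes, use_word_order=True):
        super().__init__()
        self.n = n
        self.use_word_order = use_word_order  # True -> seq-NNGN, False -> BoW-NNGN
        if use_word_order:
            # seq-NNGN (assumed reading): one embedding table per position
            # inside the n-gram, so word order within each n-gram matters.
            self.embeds = nn.ModuleList(
                nn.Embedding(vocab_size, embed_dim) for _ in range(n)
            )
        else:
            # BoW-NNGN (assumed reading): a single shared table, so the
            # positions inside an n-gram are interchangeable.
            self.embeds = nn.ModuleList([nn.Embedding(vocab_size, embed_dim)])
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        batch, seq_len = tokens.shape
        grams = []
        for i in range(self.n):
            table = self.embeds[i] if self.use_word_order else self.embeds[0]
            # Embedding of the i-th word of every n-gram window.
            grams.append(table(tokens[:, i : seq_len - self.n + 1 + i]))
        # n-gram embedding = sum over the n positions: a linear computation
        # standing in for the CNN's convolution filter.
        ngram_embed = torch.stack(grams, dim=0).sum(dim=0)  # (batch, windows, embed_dim)
        # Pooling over all n-gram embeddings yields the document representation
        # (max pooling is an assumption here).
        doc_repr = ngram_embed.max(dim=1).values
        return self.classifier(doc_repr)

# Toy usage: seq-NNGN with trigrams on a batch of 4 length-20 documents.
model = NNGNSketch(vocab_size=10000, embed_dim=128, n=3, num_classes=2)
logits = model(torch.randint(0, 10000, (4, 20)))  # shape (4, 2)
```

Note the design point the abstract emphasizes: everything up to the pooling step is linear in the word embeddings, so the model avoids the trained convolution filters of a standard text CNN.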
