Abstract

In spite of the state-of-the-art performance of deep neural networks, shallow neural networks remain the choice in applications with limited computing and memory resources. The convolutional neural network (CNN), in particular the one-convolutional-layer CNN, is a widely used shallow neural network in natural language processing tasks such as text classification. However, CNNs have been found to misfit to task-irrelevant words in the dataset, which in turn leads to unsatisfactory performance. An attention mechanism can be integrated into the CNN to alleviate this problem, but it consumes part of the limited resources. In this paper, we address the misfitting problem from a novel angle: pruning task-irrelevant words from the dataset. The proposed method evaluates each convolutional filter based on the discriminative power of the feature it generates at the pooling layer, and prunes the words captured by poorly performing filters. Experimental results show that our model significantly outperforms the CNN baseline. Moreover, it matches or exceeds the performance of the benchmark models (attention-integrated CNNs) while requiring fewer parameters and FLOPs, making it a suitable choice for resource-limited scenarios such as mobile applications.
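To make the pipeline concrete, below is a minimal sketch of the two steps the abstract describes: scoring each filter by how well its max-pooled feature separates classes, then collecting the words "captured" (via the max-pooling argmax) by low-scoring filters. The F-ratio-style separability score, max-over-time pooling, and the threshold-based pruning rule are assumptions for illustration; the abstract does not specify the paper's exact criterion.

```python
import numpy as np

def filter_discriminative_power(pooled, labels):
    """Score each filter by how well its max-pooled feature separates classes.

    pooled: (n_samples, n_filters) max-pooled feature values per document
    labels: (n_samples,) integer class labels
    Returns a per-filter ratio of between-class to within-class variance
    (a one-way-ANOVA-style F score). This scoring choice is an assumption,
    not necessarily the paper's exact measure of discriminative power.
    """
    classes = np.unique(labels)
    overall_mean = pooled.mean(axis=0)
    between = np.zeros(pooled.shape[1])
    within = np.zeros(pooled.shape[1])
    for c in classes:
        group = pooled[labels == c]
        between += len(group) * (group.mean(axis=0) - overall_mean) ** 2
        within += ((group - group.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)  # avoid division by zero

def words_to_prune(tokens, activations, filter_scores, score_threshold):
    """Collect the words captured by poorly performing filters in one document.

    tokens: list of tokens for the document
    activations: (n_positions, n_filters) convolutional feature map
    With max-over-time pooling, the position of a filter's maximum
    activation marks the word (or n-gram start) that filter captured.
    """
    prune = set()
    for f, score in enumerate(filter_scores):
        if score < score_threshold:
            pos = int(activations[:, f].argmax())
            prune.add(tokens[pos])
    return prune
```

In this reading, `filter_discriminative_power` is computed once over the training set's pooled features, and `words_to_prune` is applied per document; pruned words are then removed from the dataset before retraining, so no extra parameters or FLOPs are added at inference time.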
