Abstract

Convolutional Neural Networks (CNNs) have become ubiquitous in many NLP tasks, yet understanding their inner workings remains an open problem. In this paper, we introduce a method for studying the interpretability of CNNs used for text classification. More specifically, we examine the interpretability of the convolutional filters in the context of sentiment analysis. The framework presented here makes it possible to understand the mechanics of the network when applied to this task. Our experiments reveal that certain part-of-speech (POS) tags are more relevant than others to the classification of a sentence. Furthermore, we observe a preference for shorter $n$-grams when classifying sequences with negative sentiment. Additionally, we detect a degree of redundancy among the convolutional filters, leading us to conclude that a smaller architecture would have sufficed for this particular task. These findings were obtained by computing metrics that measure the influence of a given property on the desired class; in our case, the concepts considered were POS tags that carry semantic information, and the properties were associated with the convolutional filters.