Abstract

Convolutional Neural Networks (CNNs) have achieved excellent results in tasks such as face verification and image classification. The softmax loss, a typical loss function in CNNs, is widely used as the supervision signal for training multi-class classification models and forces the learned features to be separable. Unfortunately, features learned this way are not discriminative enough. To efficiently encourage intra-class compactness and inter-class separability of the learned features, this paper proposes an H-contrastive loss, based on the contrastive loss, for multi-class classification tasks. Under the joint supervision of the softmax loss, the H-contrastive loss, and the center loss, we can train a robust CNN that enhances the discriminative power of the deeply learned features across classes. It is encouraging to see that this joint supervision achieves state-of-the-art accuracy on several multi-class classification datasets, including MNIST, CIFAR-10, and CIFAR-100.
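The joint supervision described above can be sketched as a weighted sum of the three losses. The abstract does not specify the H-contrastive formulation or the weights, so the sketch below uses the generic margin-based contrastive loss and hypothetical weights `lam1` and `lam2` purely for illustration:

```python
import numpy as np

def softmax_loss(logits, labels):
    # standard cross-entropy over softmax probabilities
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # squared distance between each feature and its class center
    # (encourages intra-class compactness)
    return 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()

def contrastive_loss(f1, f2, same, margin=1.0):
    # pull same-class pairs together, push different-class pairs apart
    # up to a margin; the paper's H-contrastive variant is not given
    # in the abstract, so this is the generic contrastive form
    d = np.linalg.norm(f1 - f2, axis=1)
    return np.where(same, d ** 2, np.maximum(margin - d, 0.0) ** 2).mean()

def joint_loss(logits, features, labels, centers, pairs,
               lam1=1.0, lam2=0.003):
    # weighted combination of the three supervision signals;
    # lam1 and lam2 are hypothetical weights for illustration
    f1, f2, same = pairs
    return (softmax_loss(logits, labels)
            + lam1 * contrastive_loss(f1, f2, same)
            + lam2 * center_loss(features, labels, centers))
```

In such schemes the class centers are usually updated alongside the network parameters during training; only the combined scalar loss is backpropagated.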
