Abstract

Convolutional neural networks (CNNs) have shown great advantages in computer vision, and loss functions are of great significance to their gradient descent algorithms. Softmax loss, a combination of cross-entropy loss and the Softmax function, is the most commonly used loss for CNNs. However, it can hardly continuously increase the discriminability of sample features in classification tasks. Intuitively, to promote the discrimination of CNNs, the learned features are most desirable when inter-class separability and intra-class compactness are maximized simultaneously. Since Softmax loss does not explicitly and simultaneously encourage both properties, we propose a new method to achieve this simultaneous maximization. The method minimizes the distance between features of homogeneous samples alongside Softmax loss and thus improves CNNs' performance on vision-related tasks. Experiments on both visual classification and face verification datasets validate the effectiveness and advantages of our method.
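The joint objective described above — Softmax loss plus a penalty on the distance between features of same-class samples — can be sketched in NumPy. This is an illustrative assumption about the formulation (the function names, the class-mean formulation of the compactness term, and the weight `lam` are hypothetical, not taken from the paper):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def intra_class_distance(features, labels):
    # Mean squared distance of each feature to its class mean:
    # a compactness penalty on homogeneous (same-class) samples.
    total = 0.0
    for c in np.unique(labels):
        members = features[labels == c]
        center = members.mean(axis=0)
        total += ((members - center) ** 2).sum()
    return total / len(labels)

def joint_loss(logits, features, labels, lam=0.1):
    # Softmax loss plus a weighted intra-class compactness term
    # (lam is a hypothetical trade-off hyperparameter).
    return softmax_cross_entropy(logits, labels) + lam * intra_class_distance(features, labels)
```

In this sketch the compactness term is zero exactly when all same-class features coincide, so minimizing the joint loss pulls homogeneous samples together while the cross-entropy term keeps classes separable.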
