Abstract

The cross-entropy loss (CEL) is widely used for training multi-class classification deep convolutional neural networks (DCNNs). While CEL has been applied successfully to image classification tasks, it focuses only on the posterior probability of the correct class when the training labels are one-hot; it cannot directly discriminate against the classes other than the correct one (the wrong classes). The negative log likelihood ratio loss (NLLR) was proposed to better separate the correct class from the competing wrong classes. However, loss optimization is normally posed as a minimization problem, and during DCNN training the value of NLLR is neither constantly positive nor constantly negative, which adversely affects its convergence. We therefore propose the competing ratio loss (CRL), which computes the posterior probability ratio between the correct class and the competing wrong classes. CRL widens the gap between the probability of the correct class and the probabilities of the wrong classes while guaranteeing that its value remains positive. Extensive experiments demonstrate the effectiveness and robustness of CRL on deep convolutional neural networks: CRL outperforms both CEL and NLLR on the CIFAR-10/100 datasets.
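To make the idea concrete, the sketch below shows one ratio-based loss consistent with the abstract's description. This is a minimal illustration, not the authors' exact formulation: the function name `competing_ratio_loss` and the offset `beta` are hypothetical, introduced here only so that the ratio stays at most 1 and the loss stays non-negative (unlike NLLR, whose sign can flip during training); the precise definition of CRL is given in the full paper.

```python
import torch
import torch.nn.functional as F

def competing_ratio_loss(logits, targets, beta=1.0):
    """Illustrative ratio-based loss in the spirit of CRL (hypothetical form).

    logits:  (N, C) raw class scores from the network
    targets: (N,)   integer class labels
    beta:    offset (>= 1) assumed here to keep the loss non-negative
    """
    probs = F.softmax(logits, dim=1)                        # posterior probabilities
    p_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_wrong = 1.0 - p_correct                               # sum over competing wrong classes
    # -log(p_y / (beta + sum_{j != y} p_j)): since p_y <= 1 <= beta + p_wrong,
    # the ratio is <= 1 and the loss is always >= 0, unlike NLLR, which is
    # -log(p_y / sum_{j != y} p_j) and turns negative once p_y exceeds p_wrong.
    return (-torch.log(p_correct / (beta + p_wrong))).mean()

# Usage: a drop-in replacement for F.cross_entropy during training
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = competing_ratio_loss(logits, targets)
loss.backward()
```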
