Abstract

The real-world data distribution is essentially long-tailed, which poses a significant challenge to deep models. Although cross-entropy training is successful on balanced data, classification models that minimize cross-entropy loss struggle to classify the tail classes. We reveal that minimizing cross-entropy loss under a long-tailed distribution leads to the Tail Collapse phenomenon, which fundamentally limits the performance of neural networks. To correct the optimization behavior of cross-entropy training, we propose a new Curricular Balanced Loss (CurB Loss) to alleviate the imbalance. The CurB Loss has two factors: a re-weighting factor and a curriculum learning factor. We design the re-weighting factor based on margin-based training, which can theoretically reach the optimum of the network. We then incorporate the idea of curriculum learning into the re-weighted loss in an adaptive manner: the curriculum learning factor makes the model gradually emphasize the hard classes. Empirical results demonstrate that the two factors are complementary. Our method outperforms previous state-of-the-art methods by 0.9%, 2.7%, and 1.2% on CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT, respectively, demonstrating the effectiveness of CurB Loss for long-tailed visual recognition.
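The abstract does not give the exact formulation of CurB Loss. As a purely illustrative sketch of how a class re-weighting factor can be combined with a curriculum schedule, the snippet below assumes an effective-number-style re-weighting factor and a linear curriculum ramp; both choices, the function name, and its parameters are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F


def curricular_balanced_loss(logits, targets, samples_per_class,
                             epoch, total_epochs, beta=0.9999):
    """Illustrative re-weighted cross-entropy with a curriculum factor.

    NOTE: this is NOT the paper's exact CurB Loss; it assumes an
    effective-number re-weighting factor and a linear curriculum
    schedule purely for illustration.
    """
    # Re-weighting factor: inverse effective number of samples per class,
    # so rare (tail) classes receive larger weights.
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(samples_per_class)  # mean weight ~1

    # Curriculum factor: ramp from uniform weighting (head-dominated, "easy")
    # toward the full re-weighting that emphasizes the hard/tail classes.
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    curr_weights = (1.0 - t) * torch.ones_like(weights) + t * weights

    return F.cross_entropy(logits, targets, weight=curr_weights)


# Example usage: 3 classes with counts [5000, 500, 50] at epoch 10 of 200.
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = curricular_balanced_loss(logits, targets,
                                torch.tensor([5000, 500, 50]),
                                epoch=10, total_epochs=200)
```

Under this sketch, early epochs behave like plain cross-entropy, and the tail-class emphasis grows as training progresses, mirroring the abstract's idea of gradually emphasizing the hard classes.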
