Abstract
Real-world datasets are often imbalanced. Such an imbalance can limit the performance of modern deep-learning-based solutions by introducing a bias into the trained model. In particular, the trained model shows weak performance on sub-tasks or classes where data availability is sparse. In this work, we address this data-imbalance problem and propose a novel modification to the existing cross-entropy loss function to mitigate the issue. Our proposed loss function amplifies the loss gradients generated during the back-propagation step. In particular, we penalize the model's predictions such that poorly predicted samples yield higher loss values and gradients. We compare our proposed loss function with several recently proposed approaches and show superior performance. Our experiments show that the proposed approach achieves state-of-the-art performance on long-tailed image classification on the CIFAR-100/10-LT and ImageNet-LT datasets, and on the semantic segmentation task on the Cityscapes dataset.
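The abstract does not specify the exact form of the modified loss. As a rough, non-authoritative illustration of the general idea, the PyTorch sketch below reweights per-sample cross-entropy so that high-loss (poorly predicted) samples contribute super-linearly, thereby amplifying their gradients during back-propagation. The function name `amplified_cross_entropy` and the exponent `gamma` are assumptions for illustration, not the paper's notation or method.

```python
import torch
import torch.nn.functional as F

def amplified_cross_entropy(logits, targets, gamma=1.5):
    """Hypothetical sketch of a gradient-amplifying cross-entropy variant.

    Raising the per-sample loss to a power gamma > 1 amplifies both the
    loss value and its gradient for hard samples, since
    d(ce**gamma)/d(ce) = gamma * ce**(gamma - 1).
    gamma = 1 recovers plain cross-entropy. `gamma` is an assumed
    hyperparameter, not taken from the paper.
    """
    # Per-sample cross-entropy, with no reduction so we can reweight.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Amplify high-loss samples before averaging over the batch.
    return (ce ** gamma).mean()

# Toy usage: batch of 8 samples, 100 classes.
logits = torch.randn(8, 100, requires_grad=True)
targets = torch.randint(0, 100, (8,))
loss = amplified_cross_entropy(logits, targets)
loss.backward()
```

Under this sketch, samples from sparsely represented classes, which tend to incur higher cross-entropy, dominate the gradient signal, which is one plausible way to realize the amplification behavior the abstract describes.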