Abstract

Detecting Out-of-Distribution (OOD) inputs is essential for reliable deep learning in the open world. However, most existing OOD detection methods are developed on training sets with balanced class distributions, making them brittle when the training set follows a long-tailed distribution. To alleviate this problem, we propose an effective three-branch training framework that incorporates an extra rejection class along with auxiliary outlier training data for OOD detection in long-tailed image classification. In our framework, every outlier training sample is assigned the label of the rejection class. We employ an inlier loss, an outlier loss, and a Tail-class prototype induced Supervised Contrastive Loss (TSCL) to train both the in-distribution classifier and the OOD detector within a single network. During inference, the OOD detector is constructed from the rejection class. Extensive experiments demonstrate the superior OOD detection performance of our method in long-tailed image classification. For example, in the more challenging case where CIFAR100-LT is used as the in-distribution dataset, our method improves the average AUROC by 1.23% and reduces the average FPR95 by 3.18% compared to the baseline method using Outlier Exposure (OE). Code is available at github.
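The abstract's core mechanism can be sketched in a few lines: outliers are trained toward an extra (K+1)-th rejection class, and at inference the softmax probability of that class serves as the OOD score. The sketch below is a minimal NumPy illustration under stated assumptions; the loss weights `w_out` and `w_tscl`, and the `tscl` placeholder for the tail-class prototype contrastive term, are hypothetical names, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the target labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def three_branch_loss(in_logits, in_labels, out_logits, num_classes,
                      w_out=0.5, tscl_term=0.0, w_tscl=0.1):
    """Sketch of the combined objective: logits have num_classes + 1 columns,
    the last one being the rejection class. Weights are illustrative."""
    # Inlier loss: standard CE on in-distribution samples.
    l_in = cross_entropy(in_logits, in_labels)
    # Outlier loss: all auxiliary outliers are labeled with the rejection class.
    rejection_labels = np.full(len(out_logits), num_classes)
    l_out = cross_entropy(out_logits, rejection_labels)
    # tscl_term stands in for the tail-class prototype contrastive loss (TSCL).
    return l_in + w_out * l_out + w_tscl * tscl_term

def ood_score(logits):
    # At inference, the rejection-class probability is the OOD score.
    return softmax(logits)[:, -1]
```

A sample whose rejection logit dominates receives a higher `ood_score` and can be flagged by thresholding that probability, while in-distribution predictions are read off the first K logits as usual.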
