Abstract
Recently, deep neural networks have achieved remarkable progress in class-balanced instance segmentation. However, most real-world applications exhibit a long-tailed distribution, i.e., the majority of classes have only limited training examples. This long-tailed setting leads to a catastrophic drop in instance segmentation performance because the gradients of the head classes suppress those of the tail classes, biasing the model towards the head classes. We propose LiCAM, a novel framework for long-tailed instance segmentation. It features an adaptive loss function named Moac Loss, which is adjusted during training according to the monitored classification accuracy. LiCAM also cooperates with an oversampling technique, RFS (Repeat Factor Sampling), which alleviates the severe imbalance between head and tail classes. We conducted extensive experiments on the LVIS v1 dataset to evaluate LiCAM. With a coherent end-to-end training pipeline, LiCAM significantly outperforms the baselines.
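Below is a minimal Python sketch of the two ingredients the abstract mentions. The `repeat_factors` function implements the standard Repeat Factor Sampling rule (category repeat factor r_c = max(1, sqrt(t / f_c)), image repeat factor = max over its categories), which is what RFS commonly denotes in the LVIS literature. The `accuracy_driven_weights` function is a hypothetical illustration of an accuracy-monitored class reweighting; the actual form of the Moac Loss is not given in the abstract, so the function name, `floor`, and `power` parameters are assumptions for illustration only.

```python
import math
from collections import Counter

def repeat_factors(image_categories, t=0.001):
    """Repeat Factor Sampling (RFS): oversample images containing rare categories.

    image_categories: list of sets, one set of category ids per image.
    t: frequency threshold below which oversampling kicks in.
    """
    num_images = len(image_categories)
    # f_c: fraction of training images that contain category c
    image_count = Counter()
    for cats in image_categories:
        image_count.update(cats)
    freq = {c: n / num_images for c, n in image_count.items()}
    # category-level repeat factor: r_c = max(1, sqrt(t / f_c))
    cat_repeat = {c: max(1.0, math.sqrt(t / f)) for c, f in freq.items()}
    # image-level repeat factor: max over the categories present in the image
    return [max(cat_repeat[c] for c in cats) if cats else 1.0
            for cats in image_categories]

def accuracy_driven_weights(per_class_accuracy, floor=0.05, power=1.0):
    """Hypothetical accuracy-monitored reweighting (NOT the paper's Moac Loss):
    classes with low monitored accuracy get a larger loss weight so their
    gradients are not drowned out by the head classes.
    """
    return {c: (1.0 - acc + floor) ** power
            for c, acc in per_class_accuracy.items()}

if __name__ == "__main__":
    # Toy example: category 0 is a head class, category 7 a rare class.
    images = [{0}, {0, 7}, {0}, {7}]
    print(repeat_factors(images, t=0.6))          # images with class 7 repeat more
    print(accuracy_driven_weights({0: 0.9, 7: 0.3}))  # class 7 gets a larger weight
```

In practice the repeat factors would drive a sampler over the training set, and the class weights would rescale the per-class terms of the classification loss; both are shown here only to make the mechanisms described in the abstract concrete.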