Abstract
Imbalanced classification is a common and difficult task in many medical image analysis applications. However, most existing approaches focus on balancing the feature distribution and classifier weights between classes, while ignoring intra-class heterogeneity and the individuality of each sample. In this paper, we propose a sample-specific fine-grained prototype learning (SFPL) method that learns a fine-grained representation of the majority class and a cosine classifier tailored to each sample, so that the classification model is highly tuned to each individual's characteristics. SFPL first builds multiple prototypes to represent the majority class, and then updates the prototypes through a mixture weighting strategy. Moreover, we propose a uniform loss based on set representations that makes the fine-grained prototypes distribute uniformly. To associate the fine-grained prototypes with the cosine classifier, we propose a selective attention aggregation module that selects the effective fine-grained prototypes for the final classification. Extensive experiments on three different tasks demonstrate that SFPL outperforms state-of-the-art (SOTA) methods. Importantly, as the imbalance ratio increases from 10 to 100, the improvement of SFPL over SOTA methods grows from 2.2% to 2.4%; as the training data decreases from 800 to 100 samples, the improvement grows from 2.2% to 3.8%.
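The core mechanics named in the abstract (multiple unit-norm prototypes for the majority class, a soft mixture-weighting update of those prototypes, and cosine-similarity classification against them) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the softmax temperature, the momentum-style update rule, and the max-over-prototypes decision are all assumptions introduced here.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Project vectors onto the unit sphere (cosine geometry)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cosine_scores(feats, protos):
    """Cosine similarity between each feature and each prototype."""
    return l2norm(feats) @ l2norm(protos).T

def mixture_update(protos, feat, momentum=0.9, tau=0.1):
    """Move every majority-class prototype toward an incoming feature in
    proportion to its soft (mixture) assignment weight.
    NOTE: a plausible stand-in for the paper's mixture weighting strategy."""
    sims = cosine_scores(feat[None, :], protos)[0]   # (K,) similarities
    w = np.exp(sims / tau)
    w /= w.sum()                                     # mixture weights over K prototypes
    protos = momentum * protos + (1 - momentum) * w[:, None] * feat[None, :]
    return l2norm(protos)                            # keep prototypes unit-norm

def classify(feat, majority_protos, minority_proto):
    """Cosine classifier: compare the best-matching fine-grained majority
    prototype against the (single, illustrative) minority prototype."""
    maj = cosine_scores(feat[None, :], majority_protos).max()
    mino = cosine_scores(feat[None, :], minority_proto[None, :])[0, 0]
    return 0 if maj >= mino else 1                   # 0 = majority, 1 = minority
```

For example, with two majority prototypes on orthogonal axes and a minority prototype on a third, a feature near a majority prototype is assigned class 0, and `mixture_update` keeps all prototypes on the unit sphere after each update.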