Abstract

In traditional visual recognition tasks, the training set is usually assumed to be manually balanced. In nature, however, data tends to follow long-tailed distributions. In recent years, many plug-and-play methods based on data augmentation or representation learning have been proposed to tackle long-tailed visual recognition. Although these methods are effective, we find that when different plug-and-play methods are applied to the same long-tailed recognition model, they sometimes fail to promote each other. The likely reason is that the overall performance of the model is constrained by the insufficient capability of a traditional feature extractor. Motivated by this observation, we first propose the Hierarchical Block Aggregation Network (HBAN), a network structure with stronger feature extraction capability. We then design a Quantity-Aware Balanced (QAB) loss and a decoupled training paradigm to optimize HBAN. Extensive experiments demonstrate the effectiveness of HBAN: it achieves significant improvements over our baseline on three benchmark datasets and outperforms state-of-the-art methods on CIFAR-100-LT.
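The abstract does not spell out the QAB loss, so as a hedged illustration only, the sketch below shows one common way a "quantity-aware" loss can reweight cross-entropy by per-class sample counts, using the effective-number weighting of Cui et al. (CVPR 2019) as a stand-in. The function names, the `beta` parameter, and the normalization choice are all assumptions for illustration, not the paper's actual formulation.

```python
import math

def quantity_aware_weights(class_counts, beta=0.999):
    # Effective-number-of-samples weighting (Cui et al., 2019),
    # used here as a stand-in for the unspecified QAB loss:
    # rarer classes receive larger weights.
    effective = [(1.0 - beta ** n) / (1.0 - beta) for n in class_counts]
    weights = [1.0 / e for e in effective]
    total = sum(weights)
    # Normalize so the weights sum to the number of classes.
    return [w * len(class_counts) / total for w in weights]

def weighted_cross_entropy(logits, label, weights):
    # Standard softmax cross-entropy, scaled by the class weight.
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    log_prob = math.log(exps[label] / sum(exps))
    return -weights[label] * log_prob

# Long-tailed class counts: head, medium, and tail classes.
counts = [5000, 500, 50]
w = quantity_aware_weights(counts)
loss = weighted_cross_entropy([2.0, 0.5, -1.0], 2, w)
```

Under this weighting, the tail class (50 samples) contributes far more per example to the loss than the head class (5000 samples), which is the balancing behavior a quantity-aware loss aims for.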
