Abstract

Severe class imbalance poses a significant challenge in many real-world applications, particularly when accurate classification and generalization of minority classes are crucial. One key source of model bias is the inadequate intra-class diversity of samples from tail classes. To tackle this challenge, transfer learning strategies have emerged as a viable approach to enrich the data distribution of tail classes by transferring features learned from head classes. However, existing methods often assume that transferability exists between any head class and any tail class, and transfer information randomly during training, which can be problematic when the head and tail classes have uncorrelated semantics. To address this problem, we propose a novel feature transfer strategy that promotes the learning of discriminative representations for tail classes. First, we compute semantic similarity in a hybrid manner by combining local visual features with global semantic features. Second, we use a Gaussian mixture model to represent each class with multiple anisotropic prototypes, which allows us to fit the actual feature distribution more accurately. We then perform distribution-aware head-to-tail feature transfer, guided by an optimization objective that admits a computationally efficient closed-form upper bound. This procedure mitigates representation biases in the feature space and achieves state-of-the-art results on the CIFAR100-LT, ImageNet-LT, and iNaturalist long-tailed benchmarks.
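
To make the pipeline concrete, the following minimal Python sketch illustrates the three steps described above: per-class Gaussian mixtures with anisotropic (full-covariance) components, a hybrid similarity score, and head-to-tail feature transfer. This is not the authors' implementation; the function names (fit_class_gmms, hybrid_similarity, transfer_head_to_tail), the mixing weight alpha, and all hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features_by_class, n_components=3):
    """Fit one GMM per class; full covariances yield anisotropic prototypes.

    features_by_class maps a class label to an array of shape (n_i, d)."""
    gmms = {}
    for cls, feats in features_by_class.items():
        k = min(n_components, len(feats))  # tail classes may have very few samples
        gmms[cls] = GaussianMixture(n_components=k, covariance_type="full").fit(feats)
    return gmms

def hybrid_similarity(local_sim, global_sim, alpha=0.5):
    """Blend a local visual similarity score with a global semantic one.
    alpha is an assumed mixing weight, not a value from the paper."""
    return alpha * local_sim + (1.0 - alpha) * global_sim

def transfer_head_to_tail(gmms, head_cls, tail_cls, n_new=32, seed=0):
    """Synthesize tail features: head-class covariance spread re-centered
    on tail-class component means (one plausible reading of the transfer)."""
    rng = np.random.default_rng(seed)
    head, tail = gmms[head_cls], gmms[tail_cls]
    samples, comp_ids = head.sample(n_new)       # draw from the head mixture
    residuals = samples - head.means_[comp_ids]  # zero-mean, head-shaped spread
    # Place each residual around a tail prototype chosen by mixture weight.
    tail_ids = rng.choice(len(tail.weights_), size=n_new, p=tail.weights_)
    return residuals + tail.means_[tail_ids]

In practice, head-tail pairs would first be selected by the hybrid similarity score, and the synthesized features would then be mixed into training batches for the tail class.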
