Abstract

Few-shot classification is a challenging computer-vision task and is critical in data-sparse scenarios such as rare disease diagnosis. Feature augmentation is a straightforward way to alleviate the data-sparsity issue in few-shot classification. However, mimicking the original feature distribution from a small amount of data is difficult. Existing augmentation-based methods are task-agnostic: the augmented features lack optimal intra-class diversity and inter-class discriminability for a given task. To address this drawback, we propose a novel Task-adaptive Feature Disentanglement and Hallucination framework, dubbed TaFDH. Concretely, we first exploit the task information to disentangle the original feature into two components: a class-irrelevant feature and a class-specific feature. Additional class-irrelevant features are then decoded from a learned variational distribution and fused with the class-specific feature to obtain the augmented features. Finally, we meta-learn a generalized prior distribution over a quadratic classifier, which can be rapidly adapted to a class-specific posterior; by the nature of Bayesian inference, this further alleviates the inadequacy and uncertainty of feature hallucination. In this way, we construct a more discriminative embedding space with reasonable intra-class diversity, instead of simply restoring the original embedding space, which leads to a more precise decision boundary. The augmented features gain enhanced inter-class discriminability by highlighting the most discriminative part, and boosted intra-class diversity by fusing in the diverse generated class-irrelevant parts. Experiments on five multi-grained few-shot classification datasets demonstrate the superiority of our method.
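To make the disentangle-then-hallucinate flow concrete, below is a minimal PyTorch sketch of the two augmentation stages the abstract describes: a task-conditioned gate that splits a feature into class-specific and class-irrelevant parts, and a VAE-style head that samples fresh class-irrelevant parts to fuse back in. All module names, dimensions, the mean-pooled task embedding, and the fusion-by-addition choice are our assumptions for illustration, not the authors' implementation; the meta-learned Bayesian quadratic classifier is omitted.

```python
# Hypothetical sketch of TaFDH-style feature disentanglement and hallucination.
import torch
import torch.nn as nn


class TaskAdaptiveDisentangler(nn.Module):
    """Split a backbone feature into class-specific and class-irrelevant
    parts, conditioned on a task embedding (here, the support-set mean)."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, feat, task_emb):
        # Gate in [0, 1] selects the dimensions most discriminative for this task.
        g = self.gate(torch.cat([feat, task_emb.expand_as(feat)], dim=-1))
        return g * feat, (1.0 - g) * feat  # class-specific, class-irrelevant


class IrrelevantHallucinator(nn.Module):
    """VAE-style head: encode the class-irrelevant part into a Gaussian,
    then decode multiple reparameterized samples to diversify augmentation."""

    def __init__(self, dim, z_dim=64):
        super().__init__()
        self.to_mu = nn.Linear(dim, z_dim)
        self.to_logvar = nn.Linear(dim, z_dim)
        self.decode = nn.Linear(z_dim, dim)

    def forward(self, irrel, n_samples=5):
        mu, logvar = self.to_mu(irrel), self.to_logvar(irrel)
        std = (0.5 * logvar).exp()
        # Reparameterized draws: shape (n_samples, batch, z_dim).
        z = mu + std * torch.randn(n_samples, *std.shape)
        return self.decode(z)


# Toy usage: augment the support features of one 5-way, 1-shot task.
dim = 128
support = torch.randn(5, dim)                 # one feature per class
task_emb = support.mean(dim=0, keepdim=True)  # crude task summary (assumption)

spec, irrel = TaskAdaptiveDisentangler(dim)(support, task_emb)
fake_irrel = IrrelevantHallucinator(dim)(irrel)    # (5, 5, 128) hallucinated parts
augmented = spec.unsqueeze(0) + fake_irrel         # fuse: specific + new irrelevant
print(augmented.shape)                             # torch.Size([5, 5, 128])
```

In this sketch each class-specific part is reused across all hallucinated class-irrelevant samples, which is one plausible way to boost intra-class diversity while preserving the discriminative component, consistent with the abstract's description.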
