Abstract
Few-shot learning aims to recognize novel concepts from only a few labeled samples. Recently, significant progress has been made in addressing the overfitting caused by data scarcity, especially through methods that model the distribution of novel categories given a single data point. However, these methods often rely heavily on prior knowledge from the base set, which is generally hard to define, and its selection can easily bias learning. A popular pipeline is to pretrain a feature extractor on the base set and generate statistics from its features as prior information. Yet a pretrained feature extractor cannot produce accurate representations for categories it has never seen, and with only 1 or 5 support images per novel category, it is hard to acquire accurate priors, especially when the support samples lie far from the class center. To address these issues, in this paper we base our network on maximum a posteriori (MAP) estimation and propose a strategy for better prior selection from the base set. In particular, we introduce semantic information, which is learned from unsupervised text corpora and is easily available, to alleviate the bias caused by unrepresentative support samples. Our intuition is that when the support provided by visual information is biased, semantics can supply strong prior knowledge to assist learning. Experimental results on four few-shot benchmarks show that our method outperforms state-of-the-art methods by a large margin, improving on the best prior results by roughly 2.08%–12.68% on each dataset for both 1- and 5-shot tasks.
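To make the MAP intuition concrete, the sketch below shows one standard way a semantic prior can be combined with scarce visual evidence. It is not the paper's actual model: the function name, the isotropic Gaussian assumptions, and the variance parameters are all illustrative, and the semantic prior is assumed to be a word embedding already projected into the visual feature space.

```python
import numpy as np

def map_prototype(support_feats, semantic_prior, sigma_lik=1.0, sigma_prior=1.0):
    """Hypothetical MAP estimate of a class prototype (illustrative only).

    Assumes an isotropic Gaussian likelihood centered on the true class
    prototype and an isotropic Gaussian prior centered on a semantic
    embedding (e.g., a word vector mapped into the visual feature space).

    support_feats:  (n, d) array of support features (n = 1 or 5 shots)
    semantic_prior: (d,)  array, the semantic prior mean
    """
    n = support_feats.shape[0]
    x_bar = support_feats.mean(axis=0)       # visual evidence: support mean
    prec_lik = n / sigma_lik ** 2            # precision contributed by the shots
    prec_prior = 1.0 / sigma_prior ** 2      # precision of the semantic prior
    # Closed-form posterior mean of a Gaussian mean with a Gaussian prior:
    return (prec_lik * x_bar + prec_prior * semantic_prior) / (prec_lik + prec_prior)
```

Under these assumptions, the precision weighting captures the abstract's claim: in the 1-shot case (n = 1) the semantic prior pulls a possibly biased support sample back toward a plausible class center, while in the 5-shot case the accumulated visual evidence naturally dominates.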