Abstract

Deep neural networks have surpassed human performance in some tasks, such as image recognition and image classification, when numerous labeled training samples are available. However, many tasks cannot provide enough labeled samples, and training a neural network with only a few labeled samples is challenging. Meta-learning addresses this problem: it aims to summarize knowledge from few-shot tasks and to adapt quickly to new categories. However, it is difficult to ensure that a feature extractor trained during meta-training can adapt to the novel classes in meta-testing. In this paper, we propose a subspace learning module to deal with this feature-mismatch problem. Specifically, we embed a linear discriminant analysis (LDA) module into the few-shot learning framework, which makes the feature embeddings in each few-shot task more discriminative by increasing the inter-class distance and reducing the intra-class variance. We conduct experiments on four benchmark few-shot learning datasets, namely mini-Imagenet, CIFAR-FS, tiered-ImageNet, and CUB, to demonstrate the effectiveness of the proposed module. Experimental results show that our method outperforms state-of-the-art approaches.
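The core idea of the LDA module described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it computes the within-class and between-class scatter matrices for a toy few-shot support set and projects the features onto the directions that maximize between-class separation relative to within-class variance. The function name `lda_project`, the feature dimensions, and the regularization constant are all assumptions made for this sketch.

```python
import numpy as np

def lda_project(features, labels, n_components=2, eps=1e-4):
    """Project features onto LDA directions that increase inter-class
    distance while reducing intra-class variance (illustrative sketch)."""
    classes = np.unique(labels)
    mean_total = features.mean(axis=0)
    d = features.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = features[labels == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_total)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Solve Sb v = lambda Sw v; eps regularizes Sw, which is
    # typically singular in the few-shot (few samples) regime.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw + eps * np.eye(d)) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:n_components]
    W = eigvecs[:, order].real
    return features @ W

# Toy 5-way 5-shot support set with 64-dimensional features.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 5)
features = rng.normal(size=(25, 64)) + labels[:, None]
projected = lda_project(features, labels, n_components=4)
print(projected.shape)  # (25, 4)
```

In a few-shot pipeline, such a projection would be computed per episode from the support set and then applied to query features before classification, which is consistent with the task-wise discriminative embedding the abstract describes.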
