Abstract
Few-shot learning has attracted increasing attention recently, yet it remains difficult because obtaining a large number of training samples in real applications is often infeasible. Meta-learning is a prominent approach to this problem: it aims to adapt predictors, serving as base-learners, to new tasks quickly. However, a key challenge of meta-learning is its limited expressive capacity, which stems from the difficulty of extracting general information from only a few training samples. As a result, the generalizability of meta-learners trained in high-dimensional parameter spaces is frequently limited. To learn a better representation, we propose a graph-complemented latent representation (GCLR) network for few-shot image classification. In particular, we embed the representation into a latent space, in which the latent codes are reconstructed using variational information to enrich the representation. In this way, the latent representation achieves better generalizability. Another benefit is that, because the latent space is constructed via variational inference, it cooperates well with various base-learners, improving robustness. To make full use of the relations among samples in each category, a graph neural network (GNN) is also incorporated to improve relation mining. Consequently, our end-to-end framework delivers competitive performance on three few-shot learning benchmarks for image classification.
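To make the two main ingredients of the abstract concrete, the following is a minimal sketch of how a variational latent head and a relation-mining GNN could be combined in an episode. All module names, dimensions, and the single-round message-passing design are illustrative assumptions, not the authors' actual GCLR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLatentHead(nn.Module):
    """Map backbone features to latent codes via the reparameterization trick (assumed design)."""
    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # sampled latent code
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

class SimpleRelationGNN(nn.Module):
    """One round of message passing over a fully connected graph of episode samples (assumed design)."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, 1))
        self.node_update = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, z):                             # z: (n_samples, latent_dim)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        adj = torch.softmax(self.edge_mlp(torch.cat([zi, zj], dim=-1)).squeeze(-1), dim=-1)
        msg = adj @ z                                 # aggregate neighbor information
        return F.relu(self.node_update(torch.cat([z, msg], dim=-1)))

if __name__ == "__main__":
    feats = torch.randn(25, 640)                      # e.g. a 5-way 5-shot support set of backbone features
    head = VariationalLatentHead(640, 128)
    gnn = SimpleRelationGNN(128)
    z, kl = head(feats)
    refined = gnn(z)                                  # relation-enriched latent codes for a base-learner
    print(refined.shape, kl.item())
```

In this sketch, the KL term would be added to the episode loss alongside the base-learner's classification loss, and the refined latent codes would feed whichever base-learner (e.g. a nearest-prototype or logistic-regression classifier) the framework pairs with; the exact losses and base-learners used in the paper are not specified here.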