Abstract

Graph neural networks have shown an impressive ability to capture relations among support (labeled) and query (unlabeled) instances in a few-shot task. A feasible approach is to extract features with a pre-trained backbone network and later adjust them in the few-shot scenario with an episodically meta-trained graph network. However, these adjusted features cannot represent the few-shot data characteristics well, owing to the feature distribution mismatch caused by the different optimization objectives of the backbone and the graph network (multi-class pre-training vs. episodic meta-training). Additionally, learning from the limited support instances fails to depict the true data distribution and thus causes incorrect class allocation. In this paper, we propose to transform the features extracted by a pre-trained self-supervised feature extractor into a Gaussian-like distribution to reduce the feature distribution mismatch, which significantly benefits the subsequent meta-training of the graph network. To tackle the incorrect class allocation, we propose to leverage both support and query instances to estimate class centers by computing an optimal class allocation matrix. Extensive experiments on few-shot benchmarks demonstrate that our graph-based few-shot learning pipeline outperforms the baseline by 12% and surpasses state-of-the-art results by a large margin under both fully-supervised and semi-supervised settings.
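The two ideas in the abstract can be illustrated with a minimal sketch. The Gaussianization step is shown here as an element-wise power transform (a common choice for making post-ReLU features more Gaussian-like), and the optimal class allocation matrix as a Sinkhorn-normalized transport plan between query instances and class centers. Both the transform exponent and the balanced-class assumption are illustrative choices, not the paper's exact method:

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    # Hypothetical Gaussianization: element-wise power transform,
    # assuming non-negative (e.g. post-ReLU) backbone features,
    # followed by L2 normalization.
    f = np.power(features + eps, beta)
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def sinkhorn_allocation(query, centers, n_iters=50, temperature=0.1):
    # Cost: squared Euclidean distance from each query to each class center.
    cost = ((query[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    P = np.exp(-cost / temperature)
    # Alternating normalization: each query carries one unit of mass (rows),
    # each class receives n_query / n_class mass (balanced-class assumption).
    col_target = query.shape[0] / centers.shape[0]
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)               # row normalize
        P *= col_target / P.sum(axis=0, keepdims=True)  # column normalize
    return P  # soft class-allocation matrix, shape (n_query, n_class)

# Toy 2-way 1-shot episode with 4 query instances.
rng = np.random.default_rng(0)
support = np.abs(rng.normal(size=(2, 8)))  # one prototype per class
query = np.abs(rng.normal(size=(4, 8)))
P = sinkhorn_allocation(power_transform(query), power_transform(support))
print(P.shape)  # (4, 2)
```

The allocation matrix `P` can then be used to re-estimate class centers as allocation-weighted means of the query features, which is how the support-plus-query center estimation described above could be realized.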
