Abstract

Few-Shot Learning (FSL) is essential for visual recognition. Many methods tackle this challenging problem by learning an embedding function on seen classes and transferring it to unseen classes with only a few labeled instances. Researchers have recently found it beneficial to incorporate task-specific feature adaptation into FSL models, which produces the most representative features for each task. However, these methods ignore the diversity of classes within a task and apply a single global transformation to the whole task. In this paper, we propose the Global and Local Feature Adaptor (GLoFA), a unifying framework that tailors instance representations to specific tasks with both global and local feature adaptors. We argue that class-specific local transformations help improve the representation ability of the feature adaptor: global masks tend to capture sketchy, task-level patterns, while local masks focus on detailed, class-level characteristics. A strategy that adaptively measures the relationship between instances based on the characteristics of both tasks and classes endows GLoFA with the ability to handle mix-grained tasks. GLoFA outperforms other methods on a heterogeneous task distribution and achieves competitive results on benchmark datasets.
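To make the mask-based adaptation idea concrete, the sketch below illustrates in plain NumPy how a task-level (global) mask and per-class (local) masks might be combined to re-weight embedding dimensions before a nearest-prototype comparison. The function and variable names (`adapt_and_classify`, `global_mask`, `local_masks`) are hypothetical, and the masks are random stand-ins for learned ones; this is an assumed illustration of the general idea, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_and_classify(support, support_labels, query, global_mask, local_masks):
    """Hypothetical sketch: re-weight embedding dimensions with a task-level
    (global) mask and class-level (local) masks, then classify queries by
    distance to the adapted class prototypes.

    support:        (N*K, d) support-set embeddings
    support_labels: (N*K,)   integer class labels in [0, N)
    query:          (Q, d)   query embeddings
    global_mask:    (d,)     task-level mask (assumed predicted elsewhere)
    local_masks:    (N, d)   one mask per class (assumed predicted elsewhere)
    """
    n_classes = local_masks.shape[0]
    # Class prototypes: mean support embedding per class.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])
    # Combine the coarse global mask with each fine-grained local mask.
    masks = global_mask[None, :] * local_masks            # (N, d)
    adapted_protos = prototypes * masks                   # (N, d)
    # Apply the same per-class mask to queries when comparing to each class:
    # dists[q, c] = || mask_c * query_q - mask_c * proto_c ||^2
    diffs = query[:, None, :] * masks[None, :, :] - adapted_protos[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)                      # (Q, N)
    return softmax(-dists, axis=-1)                        # class probabilities

# Toy usage with random features: a 5-way 1-shot task with 64-d embeddings.
rng = np.random.default_rng(0)
N, K, d, Q = 5, 1, 64, 10
support = rng.normal(size=(N * K, d))
labels = np.repeat(np.arange(N), K)
queries = rng.normal(size=(Q, d))
g_mask = rng.uniform(size=d)        # stand-in for a learned global mask
l_masks = rng.uniform(size=(N, d))  # stand-in for learned local masks
probs = adapt_and_classify(support, labels, queries, g_mask, l_masks)
print(probs.shape)  # (10, 5)
```

In this toy setup the global mask re-weights dimensions shared across the whole task, while each local mask further emphasizes the dimensions that distinguish its own class, mirroring the coarse-versus-detailed split described in the abstract.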
