Abstract
Recently, meta-learning has been shown to be a promising way to solve few-shot learning. In this paper, inspired by the human cognition process, which draws on both prior knowledge and visual attention when learning new concepts, we present a novel meta-learning paradigm that capitalizes on three developments to introduce an attention mechanism and prior knowledge into meta-learning. In our approach, prior knowledge helps the meta-learner express the input data in a high-level representation space, and the attention mechanism enables the meta-learner to focus on key features in that space. Compared with existing meta-learning approaches, which pay little attention to prior knowledge and visual attention, our approach alleviates the meta-learner's few-shot cognition burden. Furthermore, we identify a Task-Over-Fitting (TOF) problem: when tested on J-shot classification tasks, a meta-learner trained on K-shot tasks does not perform as well as one trained on J-shot tasks (where K and J are distinct positive integers denoting the number of shots), indicating that the meta-learner generalizes poorly across different K-shot learning tasks. To quantify the TOF problem, we propose a novel Cross-Entropy across Tasks (CET) metric, which measures the extent to which a meta-learning method suffers from TOF. Extensive experiments demonstrate that our techniques bring the meta-learner to state-of-the-art performance on several few-shot learning benchmarks while also substantially alleviating the TOF problem.
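The TOF problem described above can be made concrete with a cross-shot evaluation grid: train a meta-learner on K-shot tasks, then measure its accuracy on J-shot tasks for several values of J. The sketch below is not the paper's code; `toy_evaluate` and all accuracy values are fabricated placeholders that merely mimic the reported pattern (accuracy peaks when J equals the training shot count K).

```python
# Hedged sketch of a Task-Over-Fitting (TOF) check, assuming a hypothetical
# evaluate(train_k, test_j) callable that returns test accuracy. Not the
# paper's implementation; the numbers below are illustrative only.

def cross_shot_accuracy_matrix(train_shots, test_shots, evaluate):
    """Build acc[K][J]: accuracy of the K-shot-trained meta-learner
    when evaluated on J-shot classification tasks."""
    return {k: {j: evaluate(k, j) for j in test_shots} for k in train_shots}

def toy_evaluate(train_k, test_j):
    # Placeholder stub: accuracy degrades as |K - J| grows, mimicking TOF.
    return round(0.9 - 0.05 * abs(train_k - test_j), 2)

matrix = cross_shot_accuracy_matrix([1, 5], [1, 5, 10], toy_evaluate)
# Each K-shot-trained learner scores highest on J = K tasks,
# mirroring the generalization gap the TOF problem describes.
```

A metric like CET would then summarize how sharply the off-diagonal entries of this matrix fall away from the diagonal; its exact definition is given in the paper body, not here.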