Abstract

• We propose a new setting, closer to 'infant learning': Few-Shot Learning with Multiple and Complex Semantics (FSL-MCS).
• In this context, we propose a new benchmark for FSL-MCS, together with an associated training and evaluation protocol.
• We introduce a new multi-branch architecture that provides the first batch of encouraging results on the proposed FSL-MCS benchmark.

Learning from one or a few visual examples is one of the key capabilities of humans from early infancy, but it remains a significant challenge for modern AI systems. While considerable progress has been achieved in few-shot learning from a few image examples, much less attention has been given to the verbal descriptions that are usually provided to infants when they are presented with a new object. In this paper, we focus on the role of additional semantics that can significantly facilitate few-shot visual learning. Building upon recent advances in few-shot learning with additional semantic information, we demonstrate that further improvements are possible by combining multiple and richer semantics (category labels, attributes, and natural language descriptions). Using these ideas, we offer the community new results on the popular miniImageNet and CUB few-shot benchmarks, comparing favorably to the previous state-of-the-art results for both visual-only and visual-plus-semantics approaches. We also perform an ablation study investigating the components and design choices of our approach. Code is available at github.com/EliSchwartz/mutiple-semantics.
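
To make the idea of combining multiple semantics concrete, below is a minimal, hedged sketch (not the authors' released code) of a multi-branch prototype-refinement module in PyTorch: each semantic branch (e.g., label embedding, attributes, description embedding) is projected into the visual feature space and mixed into the class prototype via a learned gate. All layer shapes, the branch ordering, and the gating form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiSemanticPrototypes(nn.Module):
    """Sketch: refine visual class prototypes with several semantic modalities."""

    def __init__(self, feat_dim, sem_dims):
        super().__init__()
        # One projection into the visual feature space and one gating head
        # per semantic modality (dimensions in sem_dims are assumptions).
        self.projs = nn.ModuleList([nn.Linear(d, feat_dim) for d in sem_dims])
        self.gates = nn.ModuleList([nn.Linear(feat_dim + d, 1) for d in sem_dims])

    def forward(self, visual_proto, semantics):
        # visual_proto: (n_classes, feat_dim), e.g. mean of support embeddings per class
        # semantics: list of tensors, one per modality, each (n_classes, sem_dim_i)
        proto = visual_proto
        for proj, gate, sem in zip(self.projs, self.gates, semantics):
            # Learned mixing coefficient conditioned on the current prototype
            # and the semantic vector, then a convex combination of the two.
            lam = torch.sigmoid(gate(torch.cat([proto, sem], dim=-1)))
            proto = lam * proto + (1 - lam) * proj(sem)
        return proto
```

In a setup like this, query images would be classified by (negative) distance to the refined prototypes, and the gates let the model down-weight a semantic branch when enough visual shots are available.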
