Abstract

Humans excel at learning and recognizing objects, swiftly adapting to new concepts from just a few samples. However, current few-shot learning studies in computer vision have not yet matched human performance in integrating prior knowledge during the learning process. Humans exploit a hierarchical structure of object categories built from past experience to facilitate learning and classification. We therefore propose a method named n-Hierarchy SEmantic Guided Attention (nHi-SEGA) that acquires abstract superclasses, allowing the model to associate with and attend to objects at different levels of the class hierarchy using both semantic and visual features (e.g., house finch-bird-animal, goldfish-fish-animal, rose-flower-plant), resembling human cognition. We constructed an nHi-Tree using WordNet and GloVe and devised two methods to extract hierarchical semantic features, which were then fused with visual features to improve sample feature prototypes.
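
To illustrate the hierarchy-construction step, the sketch below shows one plausible way to extract a superclass path for a class name from WordNet hypernym links, which could then be paired with GloVe word vectors at each level. This is a minimal sketch under assumptions: the function name `hierarchy_path`, the fixed level count, and the choice of following the first hypernym edge are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): build an n-level superclass path
# for a class name using WordNet hypernyms, e.g. "house finch" -> a bird-like
# superclass -> a more abstract superclass, as in the nHi-Tree idea above.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def hierarchy_path(class_name, n_levels=2):
    """Return the class name plus up to `n_levels` hypernym (superclass) names."""
    synsets = wn.synsets(class_name.replace(" ", "_"), pos=wn.NOUN)
    if not synsets:
        return [class_name]          # no WordNet entry; keep the leaf label only
    path = [class_name]
    synset = synsets[0]              # take the most common noun sense
    for _ in range(n_levels):
        hypernyms = synset.hypernyms()
        if not hypernyms:
            break                    # reached the top of the WordNet graph
        synset = hypernyms[0]        # follow the first hypernym edge
        path.append(synset.lemma_names()[0].replace("_", " "))
    return path

# Usage: hierarchy_path("house_finch") returns the leaf plus its superclasses
# as given by WordNet; each name in the path would then be mapped to a GloVe
# vector and fused with the visual prototype.
```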
