Abstract

Zero-shot learning (ZSL) methods mainly associate global or region features with semantic vectors within a single image in order to transfer semantic knowledge from seen classes to unseen ones. However, interactive region learning across a group of images from different categories, which can enhance the discrimination of region features and thus enable a more desirable knowledge transfer between seen and unseen classes, is seldom considered. To address this challenge, we propose a group-wise interactive region learning (GIRL) model that guarantees comprehensive and explicit region interaction. Specifically, GIRL consists of an attentive region interaction (ARI) module and a holistic semantic embedding (HSE) module. ARI exploits the semantic commonalities and differences among group regions to produce refined region features. HSE holistically maps these region features to the semantic space for a more stable semantic transfer. We also present a semantic consistency loss, which distills knowledge between the refined and original region features, and a relation alignment loss, which introduces unseen-class semantic vectors during training. Extensive experiments demonstrate the effectiveness of GIRL over other methods, achieving Generalized ZSL (GZSL) harmonic-mean (H) scores of 68.9%, 42.9%, 75.5%, and 47.8% on CUB, SUN, AWA2, and APY, respectively. The code is publicly available at https://github.com/TingML/GIRL.
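To make the two-module pipeline concrete, the following is a minimal NumPy sketch of the data flow described above; it is inferred only from the abstract, so the attention form, the pooling in HSE, and all shapes and names are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of GIRL's data flow (assumptions, not the paper's code):
# a group of B images, each with R region features of dimension D, is refined
# by group-wise attention (ARI), then mapped holistically to an S-dim
# semantic space (HSE).

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_region_interaction(regions):
    """ARI (assumed form): every region attends over all regions in the
    group, so features are refined using cross-image commonalities and
    differences. regions: (B, R, D) -> refined: (B, R, D)."""
    B, R, D = regions.shape
    flat = regions.reshape(B * R, D)             # pool regions across the group
    attn = softmax(flat @ flat.T / np.sqrt(D))   # group-wise attention weights
    refined = attn @ flat                        # mix features across images
    return refined.reshape(B, R, D)

def holistic_semantic_embedding(regions, W):
    """HSE (assumed form): map all region features of an image jointly to
    one semantic vector by averaging regions, then projecting with W."""
    return regions.mean(axis=1) @ W              # (B, R, D) -> (B, S)

B, R, D, S = 4, 6, 32, 16                        # group size, regions, dims
group = rng.standard_normal((B, R, D))           # toy region features
W = rng.standard_normal((D, S)) / np.sqrt(D)     # toy projection matrix

refined = attentive_region_interaction(group)
semantics = holistic_semantic_embedding(refined, W)
print(semantics.shape)  # (4, 16)
```

In a real GZSL setting, the resulting semantic vectors would be compared against class attribute vectors of both seen and unseen classes; the semantic consistency and relation alignment losses mentioned above would then constrain the refined features and the unseen-class semantics during training.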
