Abstract
In zero-shot learning, knowledge transfer is the central challenge, and it can be addressed by exploring the relationship between the visual and semantic spaces. However, aligning only global visual features with semantic vectors may overlook discriminative differences. Local region features are not only implicitly related to semantic vectors but also carry more discriminative information. Moreover, most previous methods consider only first-order statistical features, which may fail to capture the complex relations between categories. In this paper, we propose a semantic-guided high-order region attention embedding model that leverages the second-order statistics of both global features and local region features via different attention modules in an end-to-end fashion. First, we devise an encoder-decoder component that reconstructs the visual feature maps under the guidance of semantic attention. Then, the original and reconstructed feature maps are simultaneously fed into their respective branches to compute region-attentive and global-attentive features. Finally, a second-order pooling module is integrated to form higher-order features. Comprehensive experiments on four popular datasets (CUB, AWA2, SUN, and aPY) demonstrate the effectiveness of our model for the zero-shot learning task and show a considerable improvement over state-of-the-art methods under the generalized zero-shot learning setting.
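To make the notion of second-order features concrete, the sketch below shows a common form of second-order (covariance-style) pooling over a convolutional feature map. This is an illustrative assumption, not the paper's exact module: the function name `second_order_pooling` and the plain covariance formulation are ours, and the paper's integrated module may differ in normalization and attention weighting.

```python
import numpy as np

def second_order_pooling(feature_map):
    """Compute second-order (covariance-style) statistics of a feature map.

    feature_map: array of shape (C, H, W), channel-first conv features.
    Returns a (C, C) matrix capturing pairwise channel interactions,
    i.e. second-order statistics rather than a first-order mean.
    """
    C = feature_map.shape[0]
    X = feature_map.reshape(C, -1)           # flatten spatial dims: (C, H*W)
    X = X - X.mean(axis=1, keepdims=True)    # center each channel
    return X @ X.T / X.shape[1]              # (C, C) second-order features

# Example: a hypothetical 64-channel 7x7 feature map
feats = np.random.randn(64, 7, 7)
pooled = second_order_pooling(feats)         # symmetric (64, 64) matrix
```

In contrast, first-order pooling (e.g. global average pooling) would reduce the same map to a single 64-dimensional mean vector, discarding the channel-interaction structure that the second-order matrix retains.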