Abstract

Zero-Shot Learning (ZSL) aims to generalize a pre-trained classification model to unseen classes with the help of auxiliary semantic information. Recent generative methods follow the paradigm of synthesizing visual data for unseen classes from their attributes: a generative adversarial network is trained to map semantic attributes to visual features extracted by a pre-trained backbone such as ResNet101. Considering the domain-shift problem between the pre-trained backbone and the target ZSL dataset, as well as the information asymmetry between images and attributes, this manuscript argues that the visual-semantic balance should be learned separately from the ZSL models. In particular, we propose a plug-and-play Attribute Representation Transformation (ART) framework that pre-processes visual features with a contrastive regression module and an attribute place-holder module. The contrastive regression loss is tailored to the visual-attribute transformation and inherits favorable properties from both classification and regression losses. In the attribute place-holder module, an end-to-end mapping loss builds the relationship between the transformed features and the semantic attributes. Experiments on five popular benchmarks demonstrate that the proposed ART framework significantly benefits existing generative models in both ZSL and generalized ZSL settings.
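The abstract does not give the exact loss formulations, so the following is only a minimal sketch of what such a plug-and-play pre-processing step could look like in PyTorch. The module names (ARTTransform, contrastive_regression_loss), the InfoNCE-style contrastive term over attribute similarities, the MSE mapping loss, and all dimensions are hypothetical assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARTTransform(nn.Module):
    """Hypothetical ART-style pre-processor: transforms backbone features
    into an attribute-aligned space and regresses class attributes."""
    def __init__(self, feat_dim: int, attr_dim: int):
        super().__init__()
        self.transform = nn.Linear(feat_dim, feat_dim)  # feature transformation
        self.regressor = nn.Linear(feat_dim, attr_dim)  # attribute place-holder head

    def forward(self, x):
        h = F.relu(self.transform(x))   # transformed visual features
        a_pred = self.regressor(h)      # predicted semantic attributes
        return h, a_pred

def contrastive_regression_loss(a_pred, class_attrs, labels, tau=0.1):
    """Assumed contrastive regression: softmax over cosine similarities
    between predicted attributes (B, A) and per-class attributes (C, A),
    pulling each sample toward its own class attribute vector."""
    sims = F.cosine_similarity(
        a_pred.unsqueeze(1), class_attrs.unsqueeze(0), dim=-1)  # (B, C)
    return F.cross_entropy(sims / tau, labels)

# Usage sketch: pre-process ResNet101 features before a generative ZSL model.
feat_dim, attr_dim, num_classes, batch = 2048, 85, 50, 32
model = ARTTransform(feat_dim, attr_dim)
feats = torch.randn(batch, feat_dim)              # backbone features (dummy)
class_attrs = torch.randn(num_classes, attr_dim)  # class attribute vectors (dummy)
labels = torch.randint(0, num_classes, (batch,))

h, a_pred = model(feats)
loss_cr = contrastive_regression_loss(a_pred, class_attrs, labels)
loss_map = F.mse_loss(a_pred, class_attrs[labels])  # end-to-end mapping loss
loss = loss_cr + loss_map
```

In this reading, the contrastive term supplies class discrimination while the mapping term enforces attribute regression, which is one plausible way to combine "favorable properties from both classification and regression losses"; the transformed features h would then be handed to the downstream generative model.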
