Abstract

Zero-Shot Learning (ZSL) has made significant progress driven by deep learning and is being advanced further with the advent of generative models. Despite the success of these methods, the type and number of unseen categories are fixed in the generative models, which makes it challenging to recognize unseen categories in an incremental manner; moreover, the gains of some high-performing algorithms largely stem from their advanced feature-extraction backbones, such as Transformers. This paper strictly follows the assumptions of conventional ZSL and proposes a visual feature filtering method based on a semantic mapping model: visual features are passed through class-specific filters to effectively remove class-agnostic information. Extensive experiments on four benchmark datasets demonstrate highly competitive performance.
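The abstract does not specify the form of the class-specific filters, but the idea of deriving a per-class filter from semantic attributes via a mapping model can be sketched minimally as follows. All names, dimensions, and the choice of a single linear layer with a sigmooid gate are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: attribute vectors and visual features
# (e.g. 312-d attributes and 2048-d CNN features are common in ZSL benchmarks).
attr_dim, feat_dim = 312, 2048

# Stand-in semantic mapping model: a single random linear layer that maps a
# class's attribute vector to a filter over visual-feature dimensions.
# In practice this would be a learned network.
W = rng.normal(scale=0.01, size=(attr_dim, feat_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def class_filter(attributes):
    """Generate a class-specific gate in (0, 1) from semantic attributes."""
    return sigmoid(attributes @ W)

def filter_features(visual_features, attributes):
    """Suppress class-agnostic dimensions by gating the visual features."""
    return visual_features * class_filter(attributes)

# Example: filter one image's features with one candidate class's attributes.
x = rng.normal(size=feat_dim)       # visual feature of an image
a = rng.uniform(size=attr_dim)      # semantic attribute vector of a class
filtered = filter_features(x, a)
```

Under this sketch, recognition would score an image against each candidate class using that class's filtered features, so class-irrelevant dimensions contribute less to the comparison.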
