Abstract

Zero-shot learning seeks to learn useful patterns in a source domain and identify novel concepts in a target domain. This transfer-learning paradigm has recently gained immense popularity given the inherent limitations of data acquisition and annotation for a task (or domain). While typical zero-shot learning methods use all the classes (and their instances) in the source domain passively, our work actively uses only a handful of relevant classes for learning in the source domain. With this intelligently chosen data subset, we jointly learn the source- and target-domain parameters using coupled semantic autoencoders; this joint learning reduces the projection domain shift problem. We further extend the model to a word-embedding-based semantic space. For classes with no word embedding, we address the prototype sparsity problem by training a neural network, on all classes that do have one, to map the attribute space to the word-embedding space. Experiments on the AWA2 and CUB (Caltech-UCSD Birds) datasets confirm the superiority of our hybrid approach over state-of-the-art methods by up to 16% and 8% in the attribute and word-embedding spaces, respectively.
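The abstract builds on semantic autoencoders, which project visual features into a semantic (attribute or word-embedding) space under a tied-decoder reconstruction constraint. As a minimal single-domain sketch (not the paper's coupled formulation), the SAE objective min_W ||X - WᵀS||² + λ||WX - S||² has a closed-form solution via a Sylvester equation; the variable names and λ value below are illustrative:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def fit_sae(X, S, lam=0.5):
    """Fit a semantic autoencoder projection W (k x d).

    X   : d x n matrix of visual features (one column per instance)
    S   : k x n matrix of semantic codes (attributes or embeddings)
    lam : reconstruction trade-off (illustrative default)

    Setting the gradient of ||X - W^T S||^2 + lam * ||W X - S||^2
    to zero gives the Sylvester equation
        (S S^T) W + W (lam X X^T) = (1 + lam) S X^T,
    which scipy solves directly.
    """
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)

# Usage: project an unseen instance's features into the semantic
# space and classify by nearest class prototype (cosine or L2).
```

At test time, W maps target-domain features into the semantic space, where novel classes are recognized by nearest-prototype matching; the paper's coupled variant learns source and target parameters jointly to reduce the projection domain shift this single-domain version suffers from.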
