Abstract

Most current zero-shot learning methods focus on applying knowledge learnt from seen images to unseen images. However, there is a large distribution gap between the seen and unseen data, also called the source and target domains, so many seen samples are irrelevant to the unseen samples. We aim to transfer seen samples to the target domain only partially, by selecting the relevant ones. In this paper, we propose a method, zero-shot learning by partial transfer from the source domain with an L2,1-norm constraint (ZSLPT), which embeds visual similarity and semantic similarity to transfer partial source samples. The relevant source samples are selected, while the irrelevant ones are eliminated. Moreover, we train the source classification model used for transfer to the target domain on the selected source samples, making the transferred target model more accurate. Experiments on standard zero-shot learning benchmark datasets demonstrate that ZSLPT performs well.
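The core idea behind an L2,1-norm constraint is row sparsity: when a coefficient matrix over source samples is regularized by the sum of its row-wise L2 norms, irrelevant samples are driven to all-zero rows and can be discarded. The sketch below illustrates this selection step only; the function names, the threshold, and the toy matrix are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def l21_norm(W):
    # L2,1 norm: sum of the L2 norms of the rows of W.
    return np.linalg.norm(W, axis=1).sum()

def select_relevant_samples(W, threshold=1e-3):
    # Rows whose L2 norm is (near) zero correspond to irrelevant
    # source samples; keep only the indices of the remaining rows.
    # The threshold is an illustrative choice, not from the paper.
    row_norms = np.linalg.norm(W, axis=1)
    return np.where(row_norms > threshold)[0]

# Toy example: 4 source samples with 3-dim coefficients;
# rows 1 and 3 are (numerically) zero, i.e. irrelevant.
W = np.array([[0.5, -0.2, 0.1],
              [1e-6, 0.0, 0.0],
              [0.3, 0.4, -0.1],
              [0.0, 1e-5, 0.0]])
relevant = select_relevant_samples(W)
print(relevant)  # indices of the relevant source samples: [0 2]
```

A source classifier would then be trained only on the rows indexed by `relevant`, which is the "partial transfer" step the abstract describes.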
