Existing cross-media retrieval methods are mainly based on the condition that the training set covers all categories in the testing set, which limits their extensibility to retrieving data of new categories. Zero-shot cross-media retrieval has therefore become a promising direction for practical applications: it aims to retrieve data of new (unseen) categories using only data of limited known (seen) categories for training. This task is challenging because both the heterogeneous distributions across different media types and the inconsistent semantics across seen and unseen categories must be handled. To address these issues, we propose the dual adversarial distribution network (DADN), which learns common embeddings and exploits the knowledge in word-embeddings of different categories. The main contributions are as follows. First, a zero-shot cross-media dual generative adversarial networks architecture is proposed, in which two kinds of generative adversarial networks (GANs), one for common embedding generation and one for representation reconstruction, form dual processes. The dual GANs mutually promote each other to model semantic and underlying structure information, which generalizes across categories under heterogeneous distributions and boosts correlation learning. Second, distribution matching with the maximum mean discrepancy (MMD) criterion is proposed in combination with the dual GANs, which enhances distribution matching between common embeddings and category word-embeddings. Finally, an adversarial inter-media metric constraint is proposed, consisting of an inter-media loss and a quadruplet loss, which further models inter-media correlation and improves semantic ranking ability. Experiments on four widely used cross-media datasets demonstrate the effectiveness of our DADN approach.
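The MMD criterion mentioned above measures the discrepancy between the distribution of common embeddings and that of category word-embeddings. As a minimal sketch only (not the paper's implementation), the biased sample estimate of squared MMD with a Gaussian kernel could look as follows; the function names, the kernel choice, and the bandwidth `sigma` are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=4.0):
    # Pairwise Gaussian kernel k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
    # between rows of a (n x d) and rows of b (m x d).
    d2 = (np.sum(a ** 2, axis=1)[:, None]
          + np.sum(b ** 2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=4.0):
    # Biased estimate of squared MMD between samples x and y:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()
```

In a training loop such a term would be added to the generator objective so that generated common embeddings are pulled toward the word-embedding distribution; samples drawn from the same distribution yield a value near zero, while shifted distributions yield a larger value.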