Abstract
Existing cross-media retrieval methods are usually trained in the supervised setting, which requires large amounts of annotated training data. Annotating cross-media data is extremely labor-intensive, so unsupervised cross-media retrieval is highly desirable; it is also very challenging, because heterogeneous distributions across different media types must be handled without any annotated information. To address this challenge, this paper proposes the Domain Adaptation with Scene Graph (DASG) approach, which transfers knowledge from the source domain to improve cross-media retrieval in the target domain. Our DASG approach takes Visual Genome as the source domain, which contains image knowledge in the form of scene graphs. The main contributions of this paper are as follows. First, we propose to address unsupervised cross-media retrieval by domain adaptation. Instead of relying on labor-intensive annotations of cross-media data during training, our DASG approach learns cross-media correlation knowledge from Visual Genome and then transfers this knowledge to cross-media retrieval through media alignment and distribution alignment. Second, our DASG approach utilizes fine-grained information via scene graph representation to enhance generalization capability across domains. The generated scene graph representation builds (subject$\rightarrow$relationship$\rightarrow$object) triplets by exploiting objects and relationships within images and text, which makes the cross-media correlation more precise and promotes unsupervised cross-media retrieval. Third, we exploit the related tasks of object and relationship detection to learn more discriminative features across domains. Leveraging the semantic information of objects and relationships improves cross-media correlation learning for retrieval. Experiments on two widely used cross-media retrieval datasets, Flickr-30K and MS-COCO, show the effectiveness of our DASG approach.
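The triplet representation referred to above can be illustrated with a small sketch. The following is a minimal, hypothetical Python example of how (subject$\rightarrow$relationship$\rightarrow$object) triplets extracted from an image's scene graph and from a text description might be stored and compared; it is not the paper's implementation, which learns cross-media correlations through media and distribution alignment rather than literal triplet overlap. All names here are illustrative assumptions.

```python
# Illustrative sketch (not the DASG implementation): scene-graph content from
# image and text represented as (subject, relationship, object) triplets, with
# a toy overlap score standing in for a learned cross-media correlation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triplet:
    subject: str
    relationship: str
    object: str


def triplet_overlap(image_triplets: set, text_triplets: set) -> float:
    """Jaccard overlap between the two triplet sets (toy similarity measure)."""
    if not image_triplets and not text_triplets:
        return 0.0
    shared = image_triplets & text_triplets
    return len(shared) / len(image_triplets | text_triplets)


# Hypothetical triplets parsed from an image's scene graph and a caption.
image_sg = {Triplet("man", "riding", "horse"), Triplet("horse", "on", "beach")}
text_sg = {Triplet("man", "riding", "horse"), Triplet("man", "wearing", "hat")}
print(triplet_overlap(image_sg, text_sg))  # 1 shared of 3 distinct -> 0.333...
```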