Abstract

Multimedia data in various modalities, such as image and text, are abundant yet inconsistent in distribution and representation. Much work has been done to bridge the gap between image and text and measure their correlation. However, existing methods focus on either the transformation into a common subspace or the unidirectional generation from one modality to the other, which cannot fully exploit their interactions. Notably, bidirectional generation between image and text provides complementary hints that mutually boost cross-modal correlation learning, while cross-modal correlation learning can in turn feed back comprehensive clues that promote the generation process. We therefore argue that information transmission between image and text should be treated as a circular process, which aims to fully understand their latent correlation and further realize cross-modal generation, producing both realistic images and text descriptions in a unified framework. In this paper, we propose a cross-modal circular correlation learning approach that performs cross-modal correlation learning and generation simultaneously through an efficient circular training procedure. First, we propose a cross-modal circular learning model that performs image captioning and text-to-image synthesis circularly and learns a common representation as a round-trip bridge, which realizes efficient interactions to fully exploit latent cross-modal correlations. Second, we propose a unified bidirectional framework that conducts cross-modal mutual generation and is trained in an efficient circular process to enhance the generative ability of the common representation, which feeds back circularly to further promote cross-modal correlation learning. In summary, we simultaneously perform cross-modal retrieval, image captioning, and text-to-image synthesis in a unified framework with the circular learning process, which has high scalability and generality toward universal cognition of cross-modal data. We conduct extensive experiments on the MS-COCO dataset, not only evaluating correlation performance via cross-modal retrieval but also demonstrating the generation effectiveness of both image captioning and image synthesis.
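To make the circular procedure concrete, below is a minimal PyTorch sketch of one round-trip training step, assuming toy linear encoders and decoders, a shared common representation, and simple MSE-based correlation and cycle losses. All module names, dimensions, and loss terms here are hypothetical placeholders chosen for illustration; they are not the paper's actual architecture or objectives.

```python
# Illustrative sketch only: a round-trip (circular) training step for joint
# image captioning and text-to-image synthesis through a shared common space.
# Every module below is a toy linear stand-in, not the paper's model.
import torch
import torch.nn as nn

EMB = 256  # dimensionality of the shared common representation (assumed)

class ImageEncoder(nn.Module):          # image -> common representation
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, EMB))
    def forward(self, img):
        return self.net(img)

class TextEncoder(nn.Module):           # token ids -> common representation
    def __init__(self, vocab=1000):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, EMB)  # mean-pooled bag of words
    def forward(self, tokens):
        return self.emb(tokens)

class ImageDecoder(nn.Module):          # common representation -> image
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(EMB, 3 * 64 * 64)
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class TextDecoder(nn.Module):           # common representation -> token logits
    def __init__(self, vocab=1000):
        super().__init__()
        self.net = nn.Linear(EMB, vocab)
    def forward(self, z):
        return self.net(z)

enc_i, enc_t = ImageEncoder(), TextEncoder()
dec_i, dec_t = ImageDecoder(), TextDecoder()
params = (list(enc_i.parameters()) + list(enc_t.parameters())
          + list(dec_i.parameters()) + list(dec_t.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def circular_step(img, tokens):
    """One circular training step on a paired (image, caption) batch."""
    z_i, z_t = enc_i(img), enc_t(tokens)

    # Correlation: pull paired image/text representations together.
    corr_loss = (z_i - z_t).pow(2).mean()

    # Forward half of the circle: image -> common space -> caption logits.
    cap_logits = dec_t(z_i)
    # Backward half of the circle: text -> common space -> synthesized image.
    img_hat = dec_i(z_t)

    # Round-trip consistency: map each generation back to the common space
    # and compare with the representation it started from, closing the
    # circle. Text is re-embedded softly (softmax over logits times the
    # embedding table) to keep the path differentiable.
    z_i_back = enc_i(img_hat)
    z_t_back = torch.softmax(cap_logits, dim=-1) @ enc_t.emb.weight
    cycle_loss = ((z_i_back - z_t).pow(2).mean()
                  + (z_t_back - z_i).pow(2).mean())

    loss = corr_loss + cycle_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random data: a batch of 8 images and 12-token captions.
img = torch.randn(8, 3, 64, 64)
tokens = torch.randint(0, 1000, (8, 12))
print(circular_step(img, tokens))
```

The design point the sketch illustrates is that a single common space serves both directions: the caption decoder consumes image embeddings, the image decoder consumes text embeddings, and the cycle term re-encodes each generation, so gradients from the generation tasks flow back into correlation learning, which is the feedback effect the circular process exploits.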
