Abstract
With the development of computer networks, multimedia, and digital transmission technology in recent years, information dissemination has shifted from a traditional, mainly text-based form to a multimedia form including text, images, video, and audio. To meet users' growing demand for access to multimedia information, cross-media retrieval has become a key problem in research and application. Given a query of any media type, cross-media retrieval returns semantically relevant results of all media types. To measure similarity between different media types, it is important to learn a good shared representation for multimedia data. Existing methods mainly extract a single-modal representation for each media type and then learn cross-media correlations with a pairwise similarity constraint; they cannot make full use of the rich information within each media type and ignore dissimilarity constraints between different media types. To address these problems, this paper proposes a deep multimodal learning method (DML) for cross-media shared representation learning. First, we adopt two different deep networks for each media type with multimodal learning, obtaining high-level semantic representations of single media. Then, a two-pathway network is constructed that jointly models the pairwise similar and dissimilar constraints with a contrastive loss to obtain the shared representation. Experiments on two widely used cross-media datasets show the effectiveness of the proposed method.
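The pairwise similar/dissimilar constraints mentioned above are commonly modeled with a contrastive loss of the following general form. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the Euclidean distance and the margin value are illustrative choices.

```python
import math

def contrastive_loss(x, z, similar, margin=1.0):
    """Contrastive loss for a pair of shared representations x and z.

    similar=True  -> a cross-media pair with the same semantics: the loss
                     pulls the two representations together (squared distance).
    similar=False -> a dissimilar pair: the loss pushes the representations
                     apart until their distance exceeds the margin.

    The margin=1.0 default is an illustrative assumption.
    """
    # Euclidean distance between the two representation vectors.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, z)))
    if similar:
        return d ** 2                      # similar constraint: minimize distance
    return max(0.0, margin - d) ** 2       # dissimilar constraint: enforce margin
```

In a two-pathway network, x and z would be the outputs of the two media-specific pathways, and this loss would be summed over a batch containing both similar and dissimilar cross-media pairs.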