Abstract

In real-world applications, a cross-modal retrieval model trained on multimodal instances without accounting for differences in data distributions among users, termed user domain shift, usually cannot generalize well to unknown user domains. In this paper, we define a new task of user-generalized cross-modal retrieval and propose a novel Meta-Learning Multimodal User Generalization (MLMUG) method to solve it. MLMUG simulates the user domain shift with meta-optimization, which aims to embed multimodal data effectively and generalize the cross-modal retrieval model to any unknown user domain. We design a cross-modal embedding network with a learnable meta covariant attention module to encode transferable knowledge across different user domains. A user-adaptive meta-optimization scheme is proposed to adaptively aggregate gradients and meta-gradients for fast and stable meta-optimization. We build two benchmarks for evaluating user-generalized cross-modal retrieval. Experiments on the proposed benchmarks validate the generalization ability of our method compared with several state-of-the-art methods.
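
The abstract only sketches the meta-optimization idea at a high level. As a rough illustration of the general pattern it describes, the snippet below shows a generic MAML-style episode that simulates user domain shift: the model is virtually adapted on a batch from one user domain (meta-train) and evaluated on a held-out user domain (meta-test), and the resulting gradient and meta-gradient are aggregated into a single update. Everything here (the toy `CrossModalEmbedder`, `retrieval_loss`, and the `inner_lr`/`meta_weight` hyperparameters) is a hypothetical stand-in, not the authors' MLMUG implementation, which additionally includes the meta covariant attention module and a user-adaptive gradient aggregation scheme not reproduced here.

```python
# Minimal sketch of MAML-style meta-optimization over user domains (illustrative only).
import torch
import torch.nn as nn
from torch.func import functional_call


class CrossModalEmbedder(nn.Module):
    """Toy two-branch network that maps image and text features into a shared space."""
    def __init__(self, img_dim=512, txt_dim=300, emb_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def forward(self, img, txt):
        return self.img_proj(img), self.txt_proj(txt)


def retrieval_loss(img_emb, txt_emb, temperature=0.07):
    """Simple contrastive loss over matched image-text pairs (stand-in objective)."""
    img_emb = nn.functional.normalize(img_emb, dim=-1)
    txt_emb = nn.functional.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return nn.functional.cross_entropy(logits, targets)


def meta_step(model, optimizer, meta_train_batch, meta_test_batch,
              inner_lr=1e-2, meta_weight=1.0):
    """One episode: adapt on a meta-train user domain, evaluate on a meta-test
    user domain, then aggregate the plain gradient and the meta-gradient."""
    img_a, txt_a = meta_train_batch   # batch from one (seen) user domain
    img_b, txt_b = meta_test_batch    # batch from a held-out user domain

    params = dict(model.named_parameters())

    # Loss on the meta-train user domain with the current parameters.
    loss_train = retrieval_loss(*functional_call(model, params, (img_a, txt_a)))

    # One virtual inner step; keep the graph so meta-gradients can flow back.
    grads = torch.autograd.grad(loss_train, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # Loss on the unseen user domain using the adapted parameters.
    loss_test = retrieval_loss(*functional_call(model, adapted, (img_b, txt_b)))

    # Aggregate gradient (from loss_train) and meta-gradient (from loss_test).
    total = loss_train + meta_weight * loss_test
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return loss_train.item(), loss_test.item()


if __name__ == "__main__":
    model = CrossModalEmbedder()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    # Random stand-in features for two user domains.
    batch_a = (torch.randn(8, 512), torch.randn(8, 300))
    batch_b = (torch.randn(8, 512), torch.randn(8, 300))
    print(meta_step(model, opt, batch_a, batch_b))
```

In this sketch the fixed scalar `meta_weight` takes the place of the paper's user-adaptive aggregation of gradients and meta-gradients; the paper's contribution is precisely in making that combination adaptive per user domain.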
