Abstract

With the recent prevalence of online fashion-oriented communities and advances in multimedia processing, growing research attention has been paid to fashion compatibility modeling, which automatically assesses the compatibility between complementary fashion items (e.g., a top and a bottom). Existing fashion compatibility modeling techniques mainly focus on measuring the compatibility preference between fashion items with Deep Neural Networks (DNNs), but overlook generative compatibility modeling. In contrast, in this paper we explore the potential of the Generative Adversarial Network (GAN) for fashion compatibility modeling and propose a Multi-modal Generative Compatibility Modeling (MGCM) scheme. In particular, we introduce a multi-modal enhanced compatible template generation network, regularized by pixel-wise consistency and template compatibility, to sketch a compatible template that serves as an auxiliary link between fashion items. Accordingly, MGCM measures the compatibility between complementary fashion items comprehensively from both item-item and item-template perspectives. Experimental results on two real-world datasets demonstrate the superiority of the proposed scheme over state-of-the-art methods.
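The two-perspective scoring idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the cosine similarity, L1 pixel-wise consistency term, embedding shapes, and the fusion weight `alpha` are all assumptions introduced here for clarity.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors (illustrative choice).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def pixelwise_consistency(template, target):
    # L1 pixel-wise consistency between the generated compatible template
    # and the target item image (one plausible regularizer, assumed here).
    return float(np.abs(template - target).mean())

def compatibility(top_emb, bottom_emb, template_emb, alpha=0.5):
    # Fuse the item-item and item-template perspectives; the generated
    # template acts as the auxiliary link. `alpha` is a hypothetical weight.
    item_item = cosine(top_emb, bottom_emb)
    item_template = cosine(template_emb, bottom_emb)
    return alpha * item_item + (1 - alpha) * item_template
```

For example, a bottom embedding identical to the top embedding but orthogonal to the generated template yields a fused score of about 0.5 with `alpha=0.5`, since the item-item term contributes fully and the item-template term contributes nothing.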
