Abstract

With the recent prevalence of online fashion-oriented communities and advances in multimedia processing, growing research attention has been devoted to fashion compatibility modeling, which automatically assesses the compatibility between complementary fashion items (e.g., a top and a bottom). Existing fashion compatibility modeling techniques mainly measure the compatibility preference between fashion items with Deep Neural Networks (DNNs) but overlook generative compatibility modeling. In contrast, in this paper we explore the potential of the Generative Adversarial Network (GAN) for fashion compatibility modeling and propose a Multi-modal Generative Compatibility Modeling (MGCM) scheme. In particular, we introduce a multi-modal enhanced compatible template generation network, regularized by pixel-wise consistency and template compatibility, which sketches a compatible template as an auxiliary link between fashion items. Accordingly, MGCM measures the compatibility between complementary fashion items comprehensively from both item-item and item-template perspectives. Experimental results on two real-world datasets demonstrate the superiority of the proposed scheme over state-of-the-art methods.
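The two-perspective scoring idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding vectors, the cosine similarity measure, the `alpha` fusion weight, and the function names are all hypothetical stand-ins; the actual MGCM scheme generates the template image with a GAN and learns its own compatibility metrics.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (a stand-in compatibility measure)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mgcm_score(top_emb, bottom_emb, template_emb, alpha=0.5):
    """Fuse item-item and item-template compatibility (hypothetical fusion).

    top_emb      -- embedding of the given top
    bottom_emb   -- embedding of the candidate bottom
    template_emb -- embedding of the generated compatible template
    alpha        -- hypothetical weight balancing the two perspectives
    """
    item_item = cosine(top_emb, bottom_emb)          # item-item perspective
    item_template = cosine(bottom_emb, template_emb)  # item-template perspective
    return alpha * item_item + (1.0 - alpha) * item_template

# A candidate bottom close to both the top and the generated template
# scores higher than one that matches neither.
print(mgcm_score([1.0, 0.0], [1.0, 0.0], [1.0, 0.0]))  # both perspectives agree
print(mgcm_score([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]))  # neither perspective agrees
```

The generated template acts as an auxiliary link: even when two items look dissimilar, a candidate that resembles the template sketched for the given item can still receive a high fused score.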
