Abstract

Fashion recommendation is an essential component of online shopping, as it selects and presents appealing items to customers. Human choices among fashion items are known to be inconsistent, owing to the visual aesthetics and fine-grained differences of those items. Previous research on fashion recommendation has focused mainly on sequential models; most of these consider only complex similarity relationships in fashion compatibility while neglecting the real-world compatibility information often required in practical applications. To learn fashion compatibility and generate outfits, we propose an approach that jointly learns latent fashion concepts in a visual-semantic space to measure compatibility between items. These fashion concepts are shaped by design elements such as color, material, and silhouette. Accordingly, we build a unified representation that captures different notions of similarity by mapping text descriptors and images into a latent space where high-level representations are learned. Experimental results show that our method achieves the intended results on the fill-in-the-blank and outfit compatibility tasks.
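To make the described approach concrete, below is a minimal sketch (not the authors' implementation) of a joint visual-semantic embedding with a pairwise compatibility score, written in PyTorch. All names and hyperparameters here (embed_dim=128, the feature dimensions, the triplet margin) are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of a joint visual-semantic embedding for item
    # compatibility; assumes precomputed image features (e.g., CNN pooled
    # vectors) and text features (e.g., averaged word embeddings).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisualSemanticEmbedding(nn.Module):
        """Maps image and text-descriptor features into a shared latent
        ("fashion concept") space; compatibility is the cosine
        similarity of the resulting embeddings."""

        def __init__(self, img_feat_dim=2048, txt_feat_dim=300, embed_dim=128):
            super().__init__()
            # Linear projections into the shared latent space.
            self.img_proj = nn.Linear(img_feat_dim, embed_dim)
            self.txt_proj = nn.Linear(txt_feat_dim, embed_dim)

        def embed_item(self, img_feat, txt_feat):
            # Fuse the two modalities and L2-normalize so that dot
            # products behave as cosine similarities.
            z = self.img_proj(img_feat) + self.txt_proj(txt_feat)
            return F.normalize(z, dim=-1)

        def compatibility(self, item_a, item_b):
            # Higher score -> the two items are predicted more compatible.
            za = self.embed_item(*item_a)
            zb = self.embed_item(*item_b)
            return (za * zb).sum(dim=-1)

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """Hinge loss pushing compatible pairs closer in the latent
        space than incompatible ones (a common training objective for
        this kind of embedding)."""
        pos = (anchor * positive).sum(dim=-1)
        neg = (anchor * negative).sum(dim=-1)
        return F.relu(margin + neg - pos).mean()

    if __name__ == "__main__":
        model = VisualSemanticEmbedding()
        img_a, txt_a = torch.randn(4, 2048), torch.randn(4, 300)
        img_b, txt_b = torch.randn(4, 2048), torch.randn(4, 300)
        scores = model.compatibility((img_a, txt_a), (img_b, txt_b))
        print(scores.shape)  # torch.Size([4]) -- one score per item pair

With such a scorer, the fill-in-the-blank task reduces to ranking candidate items by their average compatibility with the rest of a partial outfit, and outfit compatibility to aggregating pairwise scores over all items in the outfit.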
