Abstract
Fashion recommendation has become a prominent focus in online shopping, with a variety of tasks explored to enhance the customer experience. Recent research has emphasized fashion recommendation based on body shape, yet a critical aspect, the relevance among multimodal data, has been overlooked. In this paper, we present the Contrastive Multimodal Cross-Attention Network, a novel approach designed for fashion recommendation across diverse body shapes. By incorporating multimodal representation learning and leveraging contrastive learning, our method captures both inter- and intra-sample relationships, improving the accuracy of fashion recommendations tailored to individual body types. We further propose a locality-aware cross-attention module that aligns local preferences between body shapes and clothing items, enhancing the matching process. Experiments on a diverse dataset demonstrate that our approach achieves state-of-the-art performance, underscoring its potential to improve the personalized online shopping experience for consumers with varying body shapes and preferences.
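To make the two components named in the abstract concrete, the sketch below illustrates one plausible realization in PyTorch: a cross-attention module with a simple locality mask between body-shape tokens and clothing-item tokens, paired with a standard InfoNCE contrastive loss over matched (body, item) embeddings. The abstract does not specify the paper's actual architecture, so every name, dimension, and the windowed locality mechanism here is a hypothetical assumption, not the authors' implementation; the contrastive loss shown is the generic symmetric formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalityAwareCrossAttention(nn.Module):
    """Illustrative sketch of cross-attention between body-shape features
    and clothing-item features. The locality bias (a fixed attention window
    over item regions) is an assumption; the paper's module may differ."""

    def __init__(self, dim: int, num_heads: int = 4, window: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.window = window  # assumed radius of the local attention window

    def forward(self, body_tokens: torch.Tensor, item_tokens: torch.Tensor):
        # body_tokens: (B, Nb, dim) body-shape region features (assumed layout)
        # item_tokens: (B, Ni, dim) clothing-item region features (assumed layout)
        Nb, Ni = body_tokens.size(1), item_tokens.size(1)
        idx_b = torch.arange(Nb).unsqueeze(1)            # (Nb, 1)
        idx_i = torch.arange(Ni).unsqueeze(0)            # (1, Ni)
        # Map each body position onto the item axis and block attention
        # outside a local window around that position (True = blocked).
        centers = (idx_b * (Ni / max(Nb, 1))).round()
        mask = (idx_i - centers).abs() > self.window     # (Nb, Ni) bool
        out, _ = self.attn(body_tokens, item_tokens, item_tokens,
                           attn_mask=mask.to(body_tokens.device))
        return out  # body tokens enriched with locally aligned item context


def info_nce(z_body: torch.Tensor, z_item: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched (body, item) pairs:
    each body embedding should score highest against its paired item."""
    z_body = F.normalize(z_body, dim=-1)
    z_item = F.normalize(z_item, dim=-1)
    logits = z_body @ z_item.t() / temperature           # (B, B) similarities
    targets = torch.arange(z_body.size(0), device=z_body.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Under this reading, the contrastive loss handles inter-sample relationships (pulling matched body/item pairs together across the batch) while the masked cross-attention handles intra-sample alignment of local regions; how the paper actually combines the two objectives is not stated in the abstract.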