Abstract

In this article, we develop an end-to-end clothing collocation learning framework based on a bidirectional long short-term memory (Bi-LSTM) model and propose new feature extraction and fusion modules. The feature extraction module uses Inception V3 to extract low-level image features and the segmentation branch of Mask Region-based Convolutional Neural Network (Mask R-CNN) to extract high-level semantic features, while the feature fusion module creates a new reference vector for each image that fuses the two types of feature. As a result, each feature vector carries both low-level image information and high-level semantic information, which enhances the performance of the Bi-LSTM. Extensive experiments are conducted on the Polyvore and DeepFashion2 datasets, and the results verify the effectiveness of the proposed method compared with other state-of-the-art clothing collocation methods.
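The abstract does not specify the exact fusion operator, but the idea of combining a low-level image feature (e.g. an Inception V3 pooled vector) and a high-level semantic feature (e.g. from a Mask R-CNN mask branch) into a single reference vector can be sketched as a learned projection of each into a shared space. The following NumPy snippet is a minimal illustration under that assumption; the dimensions, weights, and the additive tanh fusion are hypothetical, not the paper's method.

```python
import numpy as np

def fuse_features(low_level, high_level, w_low, w_high):
    """Project low-level image features and high-level semantic features
    into a shared space and combine them into one reference vector.
    (One plausible fusion scheme; the paper's exact operator is not
    given in the abstract.)"""
    return np.tanh(low_level @ w_low + high_level @ w_high)

rng = np.random.default_rng(0)
low = rng.standard_normal(2048)           # e.g. Inception V3 pooled features (assumed size)
high = rng.standard_normal(256)           # e.g. Mask R-CNN mask-branch features (assumed size)
w_low = rng.standard_normal((2048, 512)) * 0.01   # hypothetical learned projections
w_high = rng.standard_normal((256, 512)) * 0.01
ref = fuse_features(low, high, w_low, w_high)     # reference vector fed to the Bi-LSTM
print(ref.shape)
```

In a full pipeline, one such reference vector per garment image would form the input sequence to the Bi-LSTM, which scores the compatibility of the outfit in both directions.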
