Abstract

Cross-domain clothing retrieval is an active research topic because of its potential applications in the fashion industry. Owing to the large number of garment categories and styles, and to appearance variations caused by camera angle, shooting conditions, cluttered backgrounds, and the pose of the dressed human body, the retrieval accuracy of traditional consumer-to-shop schemes is low. In this paper, a novel cross-domain clothing retrieval method, named ClothingNet, is proposed within a deep convolutional neural network framework by combining feature fusion with a quadruplet loss function. First, the pre-trained ResNet-50 network is adopted to extract feature maps from clothing images. The extracted high-level features are then merged with middle-level features, and the final representation of a clothing image is obtained by constraining the fused feature values to a fixed range via L2 normalization. This fused feature provides a comprehensive description of the differences between clothing images. To train ClothingNet effectively, the cross-domain clothing images are organized into quadruplets for computing the loss function, and the network parameters are optimized by back-propagation with stochastic gradient descent on this loss. The proposed method is validated on two public clothing-retrieval datasets, DARN and DeepFashion, achieving top-50 retrieval accuracies of 35.67% and 53.52%, respectively. Experimental results illustrate the effectiveness of our clothing retrieval method.
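
A minimal sketch of this fusion-and-normalization step is given below, assuming a PyTorch/torchvision implementation. The use of ResNet-50's layer3 and layer4 outputs as the middle- and high-level feature sources, the average pooling, and the embedding size are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of feature fusion with L2 normalization (assumptions: PyTorch /
# torchvision >= 0.13; layer3 = mid-level source, layer4 = high-level source;
# pooling and embedding size are illustrative, not the paper's exact design).
import torch
import torch.nn.functional as F
import torchvision.models as models

class FusionNet(torch.nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Trunk up to (and including) layer3 produces the mid-level feature map.
        self.stem = torch.nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3)
        self.layer4 = backbone.layer4
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        # Project the concatenated (1024 + 2048)-d descriptor to a compact embedding.
        self.fc = torch.nn.Linear(1024 + 2048, embed_dim)

    def forward(self, x):
        mid = self.stem(x)               # (B, 1024, H, W) mid-level feature map
        high = self.layer4(mid)          # (B, 2048, H', W') high-level feature map
        mid_v = self.pool(mid).flatten(1)
        high_v = self.pool(high).flatten(1)
        fused = torch.cat([mid_v, high_v], dim=1)
        # L2-normalize so every fused descriptor lies on the unit hypersphere.
        return F.normalize(self.fc(fused), p=2, dim=1)
```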

Highlights

  • Online clothes shopping has become an increasingly popular fashion shopping manner among young people

  • In addition to an item ID label for each clothing image, the training data provide several additional labels, including category, color, length, and so on

  • DeepFashion dataset [4]: a dataset released by the Multimedia Laboratory of the Chinese University of Hong Kong in 2016

  • These clothing images are divided into 4 sub-categories, which can be adopted for garment attribute prediction, keypoint localization, cross-domain clothing retrieval, clothing alignment, and so on

Summary

INTRODUCTION

Online clothes shopping has become an increasingly popular fashion shopping manner among young people. The concept of cross-domain clothing retrieval was first introduced by Liu et al. [1], who obtained local features of clothing images by extracting 30 human-body regions according to human pose estimation, thereby reducing the image differences caused by the varying postures of the dressed human body across domains. In our task of cross-domain clothing retrieval, it is necessary to capture both the high-level semantic information and the middle-level feature information of clothing images to determine whether two garments match. Motivated by these two issues, a novel cross-domain retrieval method for clothing images, named ClothingNet, is proposed in this paper; it is based upon feature fusion and a quadruplet loss function. A cross-domain image retrieval framework built on feature fusion and the quadruplet loss is introduced, and the validity and accuracy of the retrieval method are verified on public datasets
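
As a point of reference, one widely used quadruplet formulation (in the style of Chen et al., 2017) extends the triplet loss with a second negative so that the positive pair is pulled closer than both the anchor-negative and negative-negative pairs. Whether ClothingNet uses exactly this variant, and with which margins, is an assumption; the sketch below only illustrates the general idea:

```python
# Hedged sketch of a quadruplet loss:
#   L = [d(a,p) - d(a,n1) + m1]_+ + [d(a,p) - d(n1,n2) + m2]_+
# a/p: a matching consumer/shop pair; n1/n2: two distinct negatives.
# The margins m1, m2 and this exact formulation are assumptions, not
# necessarily the paper's definition.
import torch

def quadruplet_loss(a, p, n1, n2, m1=1.0, m2=0.5):
    """a, p, n1, n2: (B, D) L2-normalized embedding batches."""
    d_ap = (a - p).pow(2).sum(dim=1)    # squared distance anchor-positive
    d_an = (a - n1).pow(2).sum(dim=1)   # squared distance anchor-negative
    d_nn = (n1 - n2).pow(2).sum(dim=1)  # squared distance between the negatives
    # Hinge terms: the positive pair must beat both negative pairings by a margin.
    loss = torch.relu(d_ap - d_an + m1) + torch.relu(d_ap - d_nn + m2)
    return loss.mean()
```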

CLOTHING RETRIEVAL NETWORK WITH FEATURE FUSION AND QUADRUPLET LOSS
QUADRUPLET LOSS FUNCTION
CONCLUSION