Abstract

The difficulty of describing with text a shoe item seen on the street for online shopping demands an image-based retrieval solution. We call this problem street-to-shop shoe retrieval, whose goal is to find exactly the same shoe in an online shop image (shop scenario), given a daily shoe image (street scenario) as the query. We propose an improved Multi-Task View-invariant Convolutional Neural Network (MTV-CNN+) to handle the large visual discrepancy between images of the same shoe in different scenarios. A novel definition of shoe style is introduced based on combinations of part-aware semantic shoe attributes, and a corresponding style identification loss is developed. Furthermore, a new loss function is proposed to minimize the distances between images of the same shoe captured from different viewpoints. To train MTV-CNN+ efficiently, we develop an attribute-based weighting scheme for the conventional triplet loss function that places more emphasis on hard triplets, and incorporate a three-stage process to progressively select hard negative examples and anchor images. To validate the proposed method, we build a multi-view shoe dataset with semantic attributes (MVShoe) from daily-life photos and online shopping websites, and investigate how different triplet loss functions affect performance. Experimental results show the advantage of MTV-CNN+ over existing approaches.
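To make the two training objectives concrete, the sketch below gives a minimal, hypothetical PyTorch rendering of an attribute-weighted triplet loss combined with a view-invariance term. It is not the authors' implementation, and every name and constant (weighted_triplet_loss, attr_weight, margin, the 0.5 weighting) is an illustrative assumption.

# Minimal sketch (not the paper's code) of the two loss ideas in the
# abstract: a triplet loss re-weighted per triplet by attribute
# agreement, plus a term pulling together embeddings of the same shoe
# seen from different viewpoints. All names are assumptions.
import torch
import torch.nn.functional as F

def weighted_triplet_loss(anchor, positive, negative, attr_weight, margin=0.2):
    """Triplet loss scaled per triplet by an attribute-based weight,
    so harder triplets (e.g. negatives sharing many semantic
    attributes with the anchor) contribute more to the gradient."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # anchor-positive distance
    d_an = (anchor - negative).pow(2).sum(dim=1)  # anchor-negative distance
    return (attr_weight * F.relu(d_ap - d_an + margin)).mean()

def view_invariance_loss(emb_view_a, emb_view_b):
    """Minimize the squared Euclidean distance between embeddings of
    the same shoe captured from two different viewpoints."""
    return (emb_view_a - emb_view_b).pow(2).sum(dim=1).mean()

# Toy usage with random vectors standing in for CNN embeddings.
if __name__ == "__main__":
    a, p, n = (torch.randn(8, 128) for _ in range(3))
    w = torch.rand(8)  # e.g. fraction of attributes shared by anchor and negative
    total = weighted_triplet_loss(a, p, n, w) + 0.5 * view_invariance_loss(a, p)
    print(total.item())

In this sketch the view-invariance term reuses the anchor-positive pair for brevity; in practice it would be computed over same-shoe image pairs drawn from different viewpoints.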
