Abstract

Clothing retrieval and clothing style recognition are important and practical problems that have drawn considerable attention in recent years. However, the clothing photos in existing datasets are mostly front or near-front views, and no dataset has been designed to study the influence of viewing angle on clothing retrieval performance. To address the view-invariant clothing retrieval problem properly, we construct a challenging clothing dataset, called the Multi-View Clothing dataset. This dataset not only provides four different views for each clothing item, but also annotates 264 attributes describing clothing appearance. We adopt a state-of-the-art deep learning method to establish baseline results for attribute prediction and clothing retrieval. We also evaluate the method in a more difficult setting, cross-view exact clothing item retrieval. Our dataset will be made publicly available to support further studies toward view-invariant clothing retrieval.
