Abstract

Automatic personal clothing retrieval in photo collections, i.e., searching for the same clothes worn by the same person, is a nontrivial problem because photos are usually taken under completely uncontrolled, realistic imaging conditions. Typically, the captured clothing images exhibit large variations due to geometric deformation, occlusion, cluttered backgrounds, and photometric variability from illumination and viewpoint, all of which pose significant challenges to text-based or reranking-based visual search methods. In this paper, a novel framework is presented to tackle these issues by leveraging both low-level features (e.g., color) and high-level features (attributes) of clothing. First, a content-based image retrieval (CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothing attributes, including the type of clothing, sleeves, patterns, etc. Compared to low-level features, attributes are more robust to clothing variations and carry semantic meaning as high-level image representations. Individual visual attribute detectors are learned from large amounts of training data to extract the corresponding attributes. The codebook construction and attribute-classifier training are performed offline, which leads to fast online search performance. Extensive experiments on photo collections show that the attribute-based reranking algorithm, combined with the proposed baseline, significantly improves retrieval performance; even the color-based baseline alone outperforms previous CBIR-based search approaches. The experiments also demonstrate that our approach is robust to the large variations of images taken in unconstrained environments.
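
To make the pipeline described above concrete, the following is a minimal Python sketch of a BOW color-codebook baseline with attribute-based reranking. All names and choices here (the functions build_codebook, bow_histogram, and rerank, the use of k-means clustering, histogram-intersection similarity, and the fusion weight alpha) are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch: BOW color codebook + attribute reranking.
# Illustrative only; parameters and similarity measures are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(color_patches, n_words=256, seed=0):
    """Cluster dominant-color patch descriptors into a visual codebook (offline)."""
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    kmeans.fit(color_patches)  # color_patches: (N, d) array of patch color features
    return kmeans

def bow_histogram(codebook, patch_descriptors):
    """Quantize one image's patch descriptors into a normalized BOW histogram."""
    words = codebook.predict(patch_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1e-9)

def rerank(query_hist, query_attrs, gallery_hists, gallery_attrs, alpha=0.5):
    """Fuse low-level BOW similarity with high-level attribute agreement.

    query_attrs / gallery_attrs are binary attribute vectors produced offline
    by per-attribute classifiers (clothing type, sleeve length, pattern, ...).
    """
    # Histogram-intersection similarity for the color-based baseline.
    bow_sim = np.minimum(gallery_hists, query_hist).sum(axis=1)
    # Fraction of attributes on which the query and each candidate agree.
    attr_sim = (gallery_attrs == query_attrs).mean(axis=1)
    score = (1 - alpha) * bow_sim + alpha * attr_sim
    return np.argsort(-score)  # gallery indices, best match first

In this sketch, build_codebook and the attribute classifiers would run offline, mirroring the offline/online split the abstract describes; only bow_histogram and rerank execute at query time, which is what keeps online search fast.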
