Abstract

The visual codebook is a fundamental component of many state-of-the-art visual search and object recognition systems. While most existing codebooks are built solely by unsupervised patch quantization, few works have exploited image labels to supervise codebook construction. The key challenge is that image labels are global, whereas patch supervision must be local. Such imbalanced supervision is beyond the scope of most existing supervised codebooks [9,10,12–15,29]. In this paper, we propose a weakly supervised codebook learning framework that integrates image labels into codebook building in two steps: a Label Propagation step propagates image labels to local patches via multiple instance learning and instance selection [20,21], and a Graph Quantization step integrates the resulting patch labels to build the codebook using Mean Shift. Both steps are co-optimized in an Expectation Maximization framework: the E-phase selects the patches that minimize the semantic distortion of quantization and propagates image labels to them, while the M-phase groups visually similar patches with semantically related labels (modeled by WordNet [18]), minimizing the visual distortion of quantization. In quantitative experiments on benchmark datasets, our codebook outperforms state-of-the-art unsupervised and supervised codebooks [1,10,11,25,29].
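To make the interplay of the two phases concrete, the following is a minimal, illustrative Python sketch, not the authors' implementation: it assumes a single codeword per label, plain Euclidean distance as the distortion measure, and a fixed selection ratio, and it omits the WordNet-based label similarity and the multiple-instance learning machinery described above.

```python
import numpy as np

def mean_shift_mode(points, bandwidth, n_iters=20):
    """Flat-kernel mean-shift mode seeking on a set of descriptors."""
    mode = points.mean(axis=0)
    for _ in range(n_iters):
        dists = np.linalg.norm(points - mode, axis=1)
        neighbors = points[dists < bandwidth]
        if len(neighbors) == 0:
            break
        new_mode = neighbors.mean(axis=0)
        if np.allclose(new_mode, mode):
            break
        mode = new_mode
    return mode

def build_codebook(patches, image_ids, image_labels,
                   bandwidth=1.0, keep_ratio=0.5, n_em_iters=5):
    """EM-style co-optimization sketch (illustrative simplification).

    patches      : (N, D) array of local patch descriptors
    image_ids    : length-N sequence, image index of each patch
    image_labels : dict mapping image index -> class label
    """
    labels = sorted(set(image_labels.values()))
    # Weak supervision: initially propagate each image label to all
    # of that image's patches.
    patch_labels = np.array([image_labels[i] for i in image_ids])
    selected = np.ones(len(patches), dtype=bool)
    codebook = {}
    for _ in range(n_em_iters):
        # M-phase: quantize the currently selected patches of each label
        # with mean shift (minimizes visual distortion within the group).
        for lbl in labels:
            mask = selected & (patch_labels == lbl)
            if mask.any():
                codebook[lbl] = mean_shift_mode(patches[mask], bandwidth)
        # E-phase: instance selection -- per label, keep only the patches
        # closest to their codeword (low semantic distortion) and
        # propagate the image label to those instances only.
        for lbl in labels:
            idx = np.where(patch_labels == lbl)[0]
            if len(idx) == 0 or lbl not in codebook:
                continue
            dists = np.linalg.norm(patches[idx] - codebook[lbl], axis=1)
            k = max(1, int(keep_ratio * len(idx)))
            keep = idx[np.argsort(dists)[:k]]
            selected[idx] = False
            selected[keep] = True
    return codebook
```

In the paper, the M-phase additionally merges patches whose labels are semantically close under WordNet and can yield multiple codewords per label; here each label is quantized independently for brevity.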
