Abstract

With the recent advancement of web search ranking frameworks, commonly known as learning to rank, it is natural to ask whether they remain applicable in large-scale content-based image retrieval settings. Moreover, given the complex structure of image representations, it is also challenging to design visual ranking features that not only scale well but also model multiple visual modalities and the spatial distributions of local features. In this paper, we address these two questions by investigating the performance of learning to rank on the large-scale content-based image retrieval problem and by proposing several scalable visual ranking features to improve it. Specifically, we first adopt several well-performing ad-hoc ranking models to generate Bag-of-Visual-Words-based ranking features. In addition, to preserve the spatial information of local image descriptors, we split images into blocks from coarse to fine and extract ranking features hierarchically in a spatial-pyramid manner. Finally, global image features are quantized via locality-sensitive hashing (LSH) and concatenated with the existing ranking features. Experimental results on both the Oxford and ImageNet databases demonstrate the effectiveness and efficiency of the proposed ranking model, as well as the complementarity of the individual ranking features.
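The abstract names three feature families without fixing their exact formulation. A minimal sketch of how they might be realized is given below, assuming BM25 as the ad-hoc ranking model over visual words, a 1x1/2x2/4x4 spatial pyramid, and random-projection LSH for the global feature; all function names and parameters are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the three ranking-feature families described in the
# abstract; the concrete choices (BM25, 3-level pyramid, sign LSH) are
# assumptions, not the paper's exact formulation.
import numpy as np

def bovw_histogram(word_ids, vocab_size):
    """Bag-of-Visual-Words histogram: count occurrences of each visual word."""
    hist = np.zeros(vocab_size)
    for w in word_ids:
        hist[w] += 1
    return hist

def bm25_score(query_hist, doc_hist, doc_len, avg_doc_len,
               doc_freq, n_docs, k1=1.2, b=0.75):
    """BM25 over visual words: treats an image as a 'document' of visual words.

    query_hist, doc_hist, doc_freq are length-vocab_size arrays.
    """
    idf = np.log((n_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    denom = doc_hist + k1 * (1 - b + b * doc_len / avg_doc_len)
    # Sum contributions only over visual words present in the query.
    return float(np.sum((query_hist > 0) * idf * doc_hist * (k1 + 1) / denom))

def spatial_pyramid_histograms(keypoints, word_ids, width, height,
                               vocab_size, levels=3):
    """Coarse-to-fine block histograms (1x1, 2x2, 4x4 grids)."""
    feats = []
    for level in range(levels):
        cells = 2 ** level
        hists = np.zeros((cells, cells, vocab_size))
        for (x, y), w in zip(keypoints, word_ids):
            cx = min(int(x / width * cells), cells - 1)
            cy = min(int(y / height * cells), cells - 1)
            hists[cy, cx, w] += 1
        feats.append(hists.reshape(-1, vocab_size))  # one histogram per cell
    return feats

def lsh_bits(global_feature, projections):
    """Random-projection LSH: quantize a global feature into binary codes."""
    return (projections @ global_feature > 0).astype(np.uint8)
```

Under these assumptions, each pyramid cell's histogram can be scored against the query with the same ad-hoc model, yielding one ranking feature per (model, pyramid cell) pair, and the LSH bits of the global feature are appended to that feature vector before it is fed to the learning-to-rank model.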
