Abstract
With the rapid development of deep learning, researchers in content-based image retrieval (CBIR) have gradually shifted their focus from hand-crafted features to deep features. Considerable attention has been paid to aggregating the features extracted from the convolutional layers of a deep convolutional neural network (CNN) into a global representation vector for CBIR. In this paper, we propose a simple but effective method, called Strong-Response-Stack-Contribution (SRSC), to generate such a global representation vector for object retrieval. For object retrieval, the features that matter most are those extracted from the region of interest (ROI); we therefore exploit spatial and channel contributions to focus on the ROI and make the global image representation more discriminative. SRSC first generates the spatial contribution according to the intensity of the channel responses. It then generates the channel contribution by combining sparsity information with element-value information. Finally, the global representation vector is formed from the spatial and channel contributions and used for image retrieval. Experiments on the Oxford and Paris buildings datasets demonstrate the effectiveness of the proposed approach.
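To make the cross-dimensional weighting described above concrete, the following minimal sketch shows one way a spatial weight map and a channel weight vector could be combined over a CNN feature map to produce a global descriptor. The abstract does not give the exact SRSC formulas, so every detail below (the function name srsc_like_aggregation, the sum-over-channels spatial weights, the log-based sparsity term, and the final weighted sum-pooling) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def srsc_like_aggregation(feature_map, eps=1e-8):
    """Illustrative spatial- and channel-weighted aggregation over a CNN
    feature map of shape (C, H, W); the weighting formulas are assumed,
    not taken from the SRSC paper."""
    C, H, W = feature_map.shape

    # Spatial contribution (assumed form): sum responses across channels at
    # each location and normalize, so strongly responding regions (likely
    # the ROI) receive larger weights.
    spatial = feature_map.sum(axis=0)                          # (H, W)
    spatial = spatial / (np.linalg.norm(spatial) + eps)

    # Channel contribution (assumed form): combine a sparsity term (channels
    # that fire rarely are up-weighted) with an element-value term (mean
    # activation magnitude per channel).
    active_ratio = (feature_map > 0).reshape(C, -1).mean(axis=1)   # (C,)
    sparsity_w = np.log(1.0 / (active_ratio + eps) + 1.0)          # rarer channels weigh more
    value_w = feature_map.reshape(C, -1).mean(axis=1)              # (C,)
    channel = sparsity_w * value_w
    channel = channel / (np.linalg.norm(channel) + eps)

    # Weighted sum-pooling: apply the spatial weights per location, then
    # scale each pooled channel by its channel weight, and L2-normalize.
    pooled = (feature_map * spatial[None, :, :]).sum(axis=(1, 2))  # (C,)
    global_vec = pooled * channel
    return global_vec / (np.linalg.norm(global_vec) + eps)

# Example usage on a random activation map (e.g., a conv-layer output):
descriptor = srsc_like_aggregation(np.random.rand(512, 14, 14))
```

Descriptors produced this way can be compared with cosine similarity (dot products of the L2-normalized vectors) to rank database images against a query.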