Abstract

In large-scale image retrieval, deep features extracted by a Convolutional Neural Network (CNN) express image content more effectively than features produced by traditional hand-crafted methods. However, the deep features obtained from a Deep Convolutional Neural Network (DCNN) are high-dimensional and redundant, which lowers retrieval efficiency. We propose a novel image retrieval method that combines deep feature selection on an improved DCNN with a hash transform for high-dimensional feature reduction, yielding low-dimensional deep features and enabling efficient retrieval. First, the improved network, named DFS-Net (Deep Feature Selection Network), extends an existing deep model into a deeper and wider network by adding multiple groups of different branches; the features it learns adaptively alleviate over-fitting and better express image content. Second, the information gain rate method filters the extracted deep features, reducing their dimensionality while keeping the information loss small. Third, a hash transform sparsifies and binarizes this representation to reduce computation and storage costs while maintaining retrieval accuracy. Finally, the scheme is instantiated on the well-known ResNet50, InceptionV3, and MobileNetV2 models and evaluated in depth on the CIFAR10 and Caltech256 datasets. The experimental results show that the method learns deep features with stronger discriminative ability from limited training samples and effectively improves the accuracy and efficiency of image retrieval.
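To make the selection-and-hashing pipeline concrete, the sketch below shows one plausible reading of the last two steps: scoring each deep-feature dimension by its information gain rate (gain ratio), keeping the top-scoring dimensions, and binarizing them by median thresholding. This is a minimal illustration, not the paper's implementation: the equal-width binning used to discretize continuous features, the median-based binarization, and all function names (`gain_ratio`, `select_and_hash`) are assumptions for demonstration, since the abstract does not specify these details.

```python
import numpy as np

def gain_ratio(feature, labels, n_bins=10):
    """Information gain rate (gain ratio) of one feature dimension.

    Assumption: the continuous feature is discretized into equal-width
    bins; gain ratio = information gain / intrinsic (split) entropy.
    """
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    binned = np.digitize(feature, edges[1:-1])  # bin index per sample

    def entropy(x):
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    h_labels = entropy(labels)
    # Conditional entropy H(labels | binned feature).
    h_cond = sum((binned == v).mean() * entropy(labels[binned == v])
                 for v in np.unique(binned))
    info_gain = h_labels - h_cond
    split_info = entropy(binned)
    return info_gain / split_info if split_info > 0 else 0.0

def select_and_hash(features, labels, k=128):
    """Keep the k highest-gain-ratio dimensions, then binarize.

    Assumption: each kept dimension is thresholded at its median so the
    resulting codes are balanced bit strings; the paper's actual hash
    transform may differ.
    """
    scores = np.array([gain_ratio(features[:, j], labels)
                       for j in range(features.shape[1])])
    keep = np.argsort(scores)[::-1][:k]          # top-k dimensions
    reduced = features[:, keep]
    thresholds = np.median(reduced, axis=0)
    codes = (reduced > thresholds).astype(np.uint8)
    return codes, keep, thresholds

# Toy usage: 500 "images" with 2048-dim deep features (e.g., ResNet50
# pooling output) and 10 classes; the random data is for illustration only.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2048))
labels = rng.integers(0, 10, size=500)
codes, keep, thr = select_and_hash(features, labels, k=64)

# Retrieval: rank gallery items by Hamming distance to a query code.
query = codes[0]
dists = np.count_nonzero(codes ^ query, axis=1)
nearest = np.argsort(dists)[:10]
```

With binary codes, similarity search reduces to Hamming-distance ranking (bitwise XOR plus a popcount), which is the source of the computation and storage savings the abstract claims relative to searching the raw high-dimensional features.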
