Abstract

Semantic-based image retrieval, which aims to find images with similar content, plays an important role in many practical applications. Extracting discriminative image representations is the crux of this task. Directly using the outputs of the fully connected layers of convolutional neural networks (CNNs) is among the most effective feature extraction methods. However, the fully connected layer is supervised only by the softmax loss, which aims solely at maximizing classification accuracy and pays little attention to the spatial distribution of features, in particular the intra-class and inter-class feature distances that are crucial for semantic-based image retrieval. To compensate for the resulting performance degradation, we address the spatial distribution of the features with two different loss functions. The first jointly combines the softmax loss and the center loss during CNN training, simultaneously ensuring that features of images with different contents are separable and that features of images in the same class stay close. The second is an improved center loss that not only penalizes the distance between a deep feature and the feature center of its own class, but also encourages a large distance between the feature and the feature centers of all other classes. Experiments on both the ILSVRC and Caltech256 data sets demonstrate that the deep features obtained by our approaches outperform other methods in semantic-based image retrieval, and that the improved center loss further outperforms the joint supervision of the softmax loss and the center loss.
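The two supervision signals described above can be sketched numerically. Below is a minimal NumPy illustration of the standard center loss (the intra-class pull term) and of an *assumed* margin-based form of the improved center loss that additionally pushes each feature away from the centers of other classes. The hinge-with-margin formulation and the function names are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Standard center loss: half the mean squared distance between each
    deep feature and the center of its own class."""
    own = centers[labels]  # (N, D): each sample's class center
    return 0.5 * np.mean(np.sum((features - own) ** 2, axis=1))

def improved_center_loss(features, labels, centers, margin=1.0):
    """Sketch of the improved center loss: pull each feature toward its own
    class center, and push it away from every other class center.
    The hinge term with `margin` is an assumption for illustration."""
    n, k = features.shape[0], centers.shape[0]
    # intra-class pull: squared distance to own center
    pull = 0.5 * np.sum((features - centers[labels]) ** 2, axis=1)
    # squared distances from each feature to all K centers: (N, K)
    d2 = np.sum((features[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    mask = np.ones((n, k), dtype=bool)
    mask[np.arange(n), labels] = False  # exclude each sample's own center
    # inter-class push: penalize other centers closer than `margin`
    push = np.sum(np.maximum(0.0, margin - d2) * mask, axis=1)
    return np.mean(pull + push)
```

In joint supervision, either loss would be added to the softmax loss with a weighting factor during training; here the functions only evaluate the geometric terms on fixed feature vectors.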
