Abstract

Image retrieval is a challenging problem in the computer vision domain. Traditional content-based image retrieval (CBIR) systems retrieve images based on low-level content representations such as color, texture, and shape. These domain-specific handcrafted features have performed well in various image retrieval applications, but the choice of features greatly affects system performance, and selecting the right features requires a deep understanding of the domain. Recent advances in image retrieval therefore focus on features that are domain independent. Machine learning can help learn important representations directly from images. Convolutional neural networks (CNNs) are an important class of machine learning models that can derive high-level multi-scale features from image data, and deep CNNs are widely used in image classification problems. However, creating a new effective deep CNN model requires substantial training time, computing resources, and large datasets. Many deep CNN models, such as VGG16, ResNet, and AlexNet, are pre-trained on huge datasets, and their weights are shared so that the learnt knowledge can be transferred to new domains. Pre-trained CNNs can be applied to the image retrieval problem by extracting features from the fully connected layers of the model before the output layer. In this work, two leading pre-trained CNN models, VGG16 and ResNet, are used to create a CBIR method: the features learnt by these pre-trained models are combined into a fusion feature, which is then used for image retrieval. The proposed CBIR framework is applied to image retrieval in a different domain, satellite images.
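The fusion-and-retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the real VGG16 and ResNet fully-connected-layer features (4096-d and 2048-d in the actual models), each model's feature is L2-normalized before concatenation so neither dominates the fused vector, and retrieval ranks database images by cosine similarity to the query.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # Scale a vector (or each row of a matrix) to unit length.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def fuse(vgg_feat, resnet_feat):
    # Normalize each model's feature separately, then concatenate.
    return np.concatenate(
        [l2_normalize(vgg_feat), l2_normalize(resnet_feat)], axis=-1
    )

# Placeholder "database" of 5 images; in practice these would be features
# extracted from the last fully connected layers of pre-trained CNNs.
db_vgg = rng.standard_normal((5, 4096))
db_res = rng.standard_normal((5, 2048))
db = l2_normalize(fuse(db_vgg, db_res))

# A query resembling database image 2, with a little noise added.
query = l2_normalize(
    fuse(db_vgg[2] + 0.1 * rng.standard_normal(4096),
         db_res[2] + 0.1 * rng.standard_normal(2048))
)

# Cosine similarity ranking: highest-scoring image is the best match.
scores = db @ query
ranking = np.argsort(-scores)
print(ranking[0])  # the query's near-duplicate, image 2, ranks first
```

In practice the placeholder vectors would be replaced by activations from the penultimate layers of VGG16 and ResNet (e.g. via a deep learning framework), but the fusion and similarity-ranking logic stays the same.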
