Abstract

Digital image retrieval applications rely on content representations to measure image similarity during image search. Using inappropriate features widens the semantic gap and leads to poor results, so automatic feature extraction that does not depend on domain understanding is very important in CBIR. Convolutional neural networks (CNN) can learn expressive features automatically from input image data. However, creating and training a deep CNN model from scratch requires very large datasets, enormous computing resources, and long execution times. To tackle this, researchers are leveraging established deep CNN architectures: the knowledge learnt by an established convolutional neural network can be transferred to new domains to address new challenges. This work proposes a new technique that extracts a fusion of features from an ensemble of two deep CNN models (VGG16 and ResNet50) for image retrieval. The VGG16 model is customized with our own classification layer and retrained on the target dataset. Retrained features from the VGG16 model are then combined with pre-trained features from the ResNet50 model. The proposed method is evaluated on the Swedish leaf image dataset.
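The following is a minimal sketch of the feature-fusion idea described above, assuming a TensorFlow/Keras setup; the class count, training call, and retrieval helper are illustrative placeholders, not the paper's actual configuration or hyperparameters.

```python
# Hypothetical sketch: retrained VGG16 features fused with pre-trained
# ResNet50 features for retrieval. Not the authors' exact pipeline.
import numpy as np
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras import layers, Model

NUM_CLASSES = 15              # assumption: Swedish leaf dataset spans 15 species
INPUT_SHAPE = (224, 224, 3)   # standard input size for both backbones

# 1) VGG16 customized with our own classification layer, then retrained.
vgg_base = VGG16(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=INPUT_SHAPE)
head = layers.Dense(NUM_CLASSES, activation="softmax")(vgg_base.output)
vgg_clf = Model(vgg_base.input, head)
vgg_clf.compile(optimizer="adam", loss="categorical_crossentropy")
# vgg_clf.fit(train_images, train_labels, epochs=...)  # retrain on target data

# Feature extractor taken from the retrained VGG16 (512-d pooled features).
vgg_feat = Model(vgg_clf.input, vgg_base.output)

# 2) ResNet50 used as-is with pre-trained ImageNet weights (2048-d features).
resnet_feat = ResNet50(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=INPUT_SHAPE)

def fused_features(images: np.ndarray) -> np.ndarray:
    """Concatenate retrained VGG16 and pre-trained ResNet50 features.

    Per-model input preprocessing (each backbone's preprocess_input)
    is omitted here for brevity.
    """
    v = vgg_feat.predict(images)            # shape (n, 512)
    r = resnet_feat.predict(images)         # shape (n, 2048)
    return np.concatenate([v, r], axis=1)   # shape (n, 2560)

def retrieve(query_feat: np.ndarray, db_feats: np.ndarray, k: int = 10):
    """Rank database images by Euclidean distance to the query's fused features."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]
```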
