In content-based image retrieval (CBIR), primitive image signatures are critical because they represent an image's visual characteristics. Image signatures, which are algorithmically descriptive and accurately recognized visual components, are used to index images and retrieve comparable results. To distinguish an image among a set of qualifying candidates, feature vectors must capture image information such as colour, objects, shape, and spatial viewpoints. Previous methods such as sketch-based image retrieval by salient contour (SBIR) and greedy learning of deep Boltzmann machines (GDBM) used spatial information to distinguish between image categories; however, they require interest points, and their feature analysis gave rise to image-detection problems. A model that overcomes this issue by predicting the repeating patterns and series of pixels that establish similarity has therefore become necessary. In this study, a technique called CBIR via a similarity measure and artificial neural network interpolation (CBIR-SMANN) is presented. After the datasets are collected, the images are resized and subjected to Gaussian filtering in the pre-processing stage; interest points are then gathered by passing the images to a Hessian detector. Features based on the mean, standard deviation, skewness and kurtosis are extracted and given to an ANN for interpolation, and the interpolated results are stored in a database for retrieval. In the testing stage, a query image is pre-processed, and its extracted features are fed to the similarity measurement function; the ANN then helps retrieve similar images from the database. CBIR-SMANN was implemented in Python and evaluated for its performance. Results show that CBIR-SMANN achieved a high recall of 78% with a minimum retrieval time of 980 ms, indicating that the proposed model outperforms the previous ones.
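The feature-extraction and retrieval steps described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the Gaussian smoothing width `sigma`, the synthetic 64x64 images, and the use of plain Euclidean distance as the similarity measurement function are all assumptions, and the Hessian interest-point detection and ANN interpolation stages are omitted. Only the four statistical features (mean, standard deviation, skewness, kurtosis) come directly from the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew, kurtosis

def extract_features(image, sigma=1.0):
    """Gaussian-filter an image and return the four statistical features
    named in the abstract: mean, standard deviation, skewness, kurtosis.
    The smoothing width sigma is an illustrative assumption."""
    smoothed = gaussian_filter(np.asarray(image, dtype=float), sigma=sigma)
    flat = smoothed.ravel()
    return np.array([flat.mean(), flat.std(), skew(flat), kurtosis(flat)])

def retrieve(query_features, database, k=3):
    """Rank stored feature vectors by Euclidean distance to the query
    (a stand-in for the paper's similarity measurement function) and
    return the indices of the k closest database images."""
    dists = np.linalg.norm(database - query_features, axis=1)
    return np.argsort(dists)[:k]

# Synthetic grayscale images as stand-ins for a collected dataset
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64)) for _ in range(5)]
database = np.stack([extract_features(im) for im in images])

# Querying with an image from the database returns that image first
nearest = retrieve(extract_features(images[2]), database, k=1)
```

In the full method, the ANN would sit between `extract_features` and the database, interpolating the feature vectors before they are stored and compared.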