Abstract

Search engine features on the Internet are becoming increasingly sophisticated. Beyond keywords, information can now be retrieved by submitting an image through an image-search feature, which returns results related to the query image, its available sizes, and the sites that host it. This method is called Reverse Image Search (RIS), a type of Content-Based Image Retrieval (CBIR). A query image often yields several other images that are similar to and correlated with it, so a machine is required that can distinguish images and find correlations between them. In this paper, the authors extract image features from RIS results by implementing a CNN (Convolutional Neural Network) model trained for feature extraction, and perform RIS on a database. We compare the content retrieved as similar and relevant to the query image, measuring performance by the distance between extracted feature vectors, for the pre-trained CNN model against conventional methods such as perceptual hashing. The pre-trained CNN method succeeded in handling orientation information; however, the Euclidean distances between the feature vectors of several distinct images were quite close.
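As a minimal sketch of the two comparison measures named in the abstract: a conventional perceptual hash (here, the common average-hash variant, compared by Hamming distance) and the Euclidean distance between feature vectors (which, in the paper's pipeline, would come from a pre-trained CNN; any actual CNN backbone is omitted here, so the feature vectors are placeholders). The rotation example illustrates why a plain perceptual hash is sensitive to orientation:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Conventional perceptual hash (aHash): downscale to hash_size x hash_size,
    then threshold each cell at the mean. `img` is a 2-D grayscale array.
    Real implementations resize with interpolation; this dependency-free sketch
    uses simple block averaging instead."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = (img[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw)
             .mean(axis=(1, 3)))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Bit distance between two perceptual hashes (smaller = more similar)."""
    return int(np.count_nonzero(h1 != h2))

def euclidean(v1, v2):
    """Euclidean distance between feature vectors, e.g. ones extracted
    by a pre-trained CNN (smaller = more similar)."""
    return float(np.linalg.norm(np.asarray(v1) - np.asarray(v2)))

# A synthetic grayscale gradient stands in for a real image.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
h_orig = average_hash(img)
h_rot = average_hash(np.rot90(img))  # 90-degree rotation
print(hamming(h_orig, h_orig))  # identical image: distance 0
print(hamming(h_orig, h_rot))   # rotated image: hash changes substantially
```

A CNN trained with augmentation can produce feature vectors that stay close under such rotations, which is the orientation robustness the abstract reports; the trade-off observed there is that Euclidean distances between different images' CNN features can also end up quite close.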
