In this study, a two-stage approach is proposed for developing an Ontology-supported Semantic-Based Image Retrieval system. In the first stage, an Object Detection process identifies the objects within an image, and the developed Bi-directional Recurrent Neural Network (Bi-RNN) model then predicts the predicate describing the relationship between each pair of detected objects. In the second stage, the relations, expressed as <subject-predicate-object> triples, are transformed into Ontologies and used to search for semantically similar images. To address the Semantic Gap, the primary challenge in Semantic-Based Image Retrieval, the proposed solution measures the number of shared relationships between two images using entropy. Specifically, the Semantic Gap between two images is computed with the Joint Entropy method, based on the number of relationships (X) identified in the query image and the total number of relationships (Y) in a retrieved image that shares similar relationships. The proposed approach differs from other Ontology-based methods employed in Semantic-Based Image Retrieval and constitutes a novel contribution to this field. In the performance evaluation, the developed model achieved 91% accuracy according to the Recall@100 (Top-5 accuracy) result.
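For reference, a minimal sketch of the joint-entropy measure invoked above is given here; the joint probability term p(x, y) over the relationship counts X and Y is an assumption of this sketch, since the abstract does not specify how it is estimated from the two images:

H(X, Y) = -\sum_{x \in X} \sum_{y \in Y} p(x, y) \log_2 p(x, y)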