Abstract

Colour images are rich in visual information. Searching a large-scale database for the images most similar to a query image, based on the query's visual features, remains a challenge in Content-Based Image Retrieval (CBIR) because of the semantic gap. In this paper, we propose a fused retrieval method that narrows the gap between high-level and low-level meanings in two ways. The first is to increase the effectiveness of the image representation: we suggest data-level fusion of features, combining local features from the Discrete Cosine Transform (DCT) and Local Binary Patterns (LBP), in the frequency and spatial domains respectively, grouped by a graph-based spectral clustering algorithm, alongside a global weighted LBP feature. The second is to fuse the multiple retrieval similarity measures (scores/evidences) obtained from the above global (LBP) and local (DCT-LBP) features through score-level fusion. The method is evaluated on the standard, publicly available WANG dataset.
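The score-level fusion step can be illustrated with a minimal sketch. The abstract does not specify a normalization scheme or fusion weights, so the min-max normalization, the equal default weight `w=0.5`, and the function names below are assumptions for illustration only, not the paper's exact method:

```python
import numpy as np

def minmax_normalize(scores):
    """Scale a score vector to [0, 1]; guard against a zero range."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_scores(global_scores, local_scores, w=0.5):
    """Weighted sum of two normalized similarity-score vectors
    (one per database image), e.g. from a global LBP matcher and
    a local DCT-LBP matcher."""
    g = minmax_normalize(global_scores)
    l = minmax_normalize(local_scores)
    return w * g + (1 - w) * l

# Toy example: distances of 4 database images to one query,
# from the hypothetical global (LBP) and local (DCT-LBP) matchers.
g = [0.9, 0.1, 0.5, 0.3]
l = [0.8, 0.2, 0.4, 0.6]
fused = fuse_scores(g, l)
# If the scores are distances, ascending order ranks most-similar first.
ranking = np.argsort(fused)
```

In a real system each score vector would come from a separate feature-matching stage, and the weight `w` would typically be tuned on held-out queries.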
