Abstract

Content-based image retrieval can assist radiologists by finding similar images in databases as a means of providing decision support. In general, images are indexed using low-level features, and given a new query image, a distance function is used to find the best matches in the feature space. However, capturing the appearance of diseases with low-level features is challenging, and the semantic gap between these features and the high-level visual concepts of radiology may impair system performance. In addition, the results of these systems are fixed and cannot be updated to reflect the user's intent. We present a new framework that retrieves similar images based on high-level semantic image annotations and user feedback. In this framework, database images are automatically annotated with semantic terms. Image retrieval is then performed by computing the similarity between image annotations using a new similarity measure that takes into account both image-based and ontological inter-term similarities. Finally, a relevance feedback mechanism allows the user to iteratively mark the returned answers, indicating which images are relevant to the query. This information is used to infer user-defined inter-term similarities, which are then injected into the image similarity measure to produce a new set of retrieved images. We validated this approach for the retrieval of liver lesions from CT images annotated with terms of the RadLex ontology.

Keywords

Image retrieval; Riesz wavelets; Image annotation; RadLex; Semantic gap; Relevance feedback; Computed tomographic (CT) images
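To make the retrieval step concrete, the following is a minimal sketch, not the authors' implementation, of how an annotation-based image similarity might blend ontological and image-based inter-term similarities and let user-defined similarities inferred from relevance feedback override them. The names sim_onto, sim_image, user_sim, alpha, and the best-match averaging scheme are illustrative assumptions rather than the measure defined in the paper.

```python
# Illustrative sketch only: not the similarity measure defined in the paper.
# Each inter-term similarity blends an ontological component with an
# image-based component; user-defined similarities inferred from relevance
# feedback, when available, override the blend. Image-level similarity is the
# average of each query term's best match among the database image's terms
# (an assumption made for this sketch).

from typing import Dict, List, Tuple

TermSim = Dict[Tuple[str, str], float]  # symmetric inter-term similarity table


def _lookup(sim: TermSim, t1: str, t2: str, default: float) -> float:
    """Symmetric lookup with a default for unlisted term pairs."""
    return sim.get((t1, t2), sim.get((t2, t1), default))


def term_similarity(t1: str, t2: str, sim_onto: TermSim, sim_image: TermSim,
                    user_sim: TermSim, alpha: float = 0.5) -> float:
    """Blend ontological and image-based similarity; feedback-derived values win."""
    default = 1.0 if t1 == t2 else 0.0
    user = _lookup(user_sim, t1, t2, -1.0)   # -1.0 marks "no feedback for this pair"
    if user >= 0.0:
        return user
    onto = _lookup(sim_onto, t1, t2, default)
    img = _lookup(sim_image, t1, t2, default)
    return alpha * onto + (1.0 - alpha) * img


def image_similarity(query_terms: List[str], db_terms: List[str], **sims) -> float:
    """Average best-match similarity between query annotations and a database image."""
    if not query_terms or not db_terms:
        return 0.0
    best = [max(term_similarity(q, d, **sims) for d in db_terms) for q in query_terms]
    return sum(best) / len(best)
```

Under these assumptions, retrieval would rank database images by image_similarity against the query annotations; after each feedback round, entries of user_sim could be re-estimated from the images the user marked relevant and the ranking recomputed.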
