Abstract

This paper presents a unified medical image retrieval method that integrates visual features and text keywords through multimodal classification and filtering. For content-based image search, concepts derived from visual features are modeled using support vector machine (SVM)-based classification of patches sampled from local image regions. Text keywords from the associated metadata provide context and are indexed using the vector space model of information retrieval. The concept and context vectors are combined and used to train a global-level SVM classifier for image modality detection (e.g., CT, MR, x-ray). The probabilistic outputs of this modality categorization are then used to filter images so that the search is performed only on a candidate subset. An evaluation of the method on the ImageCLEFmed 2010 dataset of 77,000 images, XML annotations, and topics yields a mean average precision (MAP) score of 0.1125, demonstrating the effectiveness and efficiency of the proposed multimodal framework compared with using a single modality alone or no classification information.

Keywords: Support Vector Machine, Image Retrieval, Query Image, Mean Average Precision, Visual Concept (these keywords were added by machine and not by the authors; this process is experimental and the keywords may be updated as the learning algorithm improves).
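The modality-filtering idea described in the abstract can be sketched as follows. This is not the authors' code; it is a minimal illustration assuming scikit-learn, synthetic data, and hypothetical vector dimensions: a visual-concept vector and a text-context vector are concatenated, a global SVM with probabilistic outputs predicts the image modality, and retrieval is then restricted to images whose probability for the query's modality exceeds a threshold.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical dimensions: 20-D visual-concept vector + 50-D text-context vector.
n_images, d_concept, d_context = 200, 20, 50
concept = rng.random((n_images, d_concept))   # stand-in for patch-level SVM outputs
context = rng.random((n_images, d_context))   # stand-in for TF-IDF keyword vectors
X = np.hstack([concept, context])             # combined concept + context vector

# Synthetic modality labels (the paper's classes include CT, MR, x-ray, etc.).
modalities = np.array(["CT", "MR", "x-ray"])
y = modalities[rng.integers(0, 3, size=n_images)]

# Global-level SVM with probabilistic outputs.
clf = SVC(kernel="rbf", probability=True).fit(X, y)

# Filtering step: keep only candidate images whose probability of belonging
# to the query's modality (say, "CT") exceeds a threshold; the search is then
# performed only on this candidate subset.
proba = clf.predict_proba(X)
ct_col = list(clf.classes_).index("CT")
candidates = np.flatnonzero(proba[:, ct_col] > 0.3)
```

The 0.3 threshold here is arbitrary; in practice it would be tuned so that the filter prunes aggressively without discarding relevant images of the query's modality.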
