Abstract
The medical automatic annotation task issued by the Cross-Language Evaluation Forum (CLEF) aims at a fair comparison of state-of-the-art algorithms for medical content-based image retrieval (CBIR). The contribution of this work is twofold: first, a logical decomposition of the CBIR task is presented, and key elements to support the relevant steps are identified: (i) implementation of algorithms for feature extraction, feature comparison, and classifier combination; (ii) visualization of extracted features and retrieval results; (iii) generic evaluation of retrieval algorithms; and (iv) optimization of the parameters for the retrieval algorithms and their combination. Data structures and tools addressing these key elements are integrated into an existing framework for Image Retrieval in Medical Applications (IRMA). Second, baseline results for the CLEF annotation tasks 2005–2007 are provided by applying the IRMA framework, where global features and corresponding distance measures are combined within a nearest-neighbor approach. Using identical classifier parameters and combination weights for each year shows that the task difficulty decreases over the years. The declining rank of the baseline submission also indicates the overall advances in CBIR concepts. Furthermore, a rough comparison between participants who submitted in only one of the years becomes possible.
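To make the combination scheme concrete, the following Python sketch illustrates how per-feature distance measures might be combined with fixed weights inside a nearest-neighbor classifier, as the abstract describes. It is a minimal illustration, not the authors' actual IRMA implementation; all function and parameter names (`combined_distance`, `knn_classify`, `weights`, etc.) are hypothetical.

```python
import numpy as np

def euclidean(a, b):
    """One possible distance measure for a global feature (hypothetical choice)."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def combined_distance(query_feats, ref_feats, distance_fns, weights):
    """Weighted sum of per-feature distances (classifier combination by weighting).

    query_feats / ref_feats: one feature vector per global feature type.
    distance_fns: one distance measure per feature type.
    weights: fixed combination weights, e.g. held constant across the years
             as in the baseline described above.
    """
    return sum(w * d(q, r)
               for w, d, q, r in zip(weights, distance_fns, query_feats, ref_feats))

def knn_classify(query_feats, references, distance_fns, weights, k=1):
    """Assign the majority label among the k nearest reference images."""
    dists = [(combined_distance(query_feats, ref_feats, distance_fns, weights), label)
             for ref_feats, label in references]
    dists.sort(key=lambda t: t[0])
    top_labels = [label for _, label in dists[:k]]
    return max(set(top_labels), key=top_labels.count)
```

Keeping `weights` and `k` identical for every evaluation year, as in the baseline submission, isolates changes in task difficulty from changes in the classifier itself.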