Abstract

In this paper we report our work on visual feature fusion for the medical image retrieval and annotation tasks of the ImageCLEF 2005 benchmark. In the retrieval task, we use visual features alone, without text information and without relevance feedback. Both local and global features, of both structural and statistical nature, are captured. We first identify visually similar images manually and form templates for each query topic. A pre-filtering process performs a coarse retrieval. In the fine retrieval, two similarity-measuring channels based on different visual features run in parallel and are then combined at the decision level to produce a final score for image ranking. Our approach is evaluated over all 25 query topics, each containing example image(s) and a textual topic statement. Over 50,000 images we achieved a mean average precision of 14.6%, one of the best-performing runs. In the annotation task, visual features are fused at an early stage by concatenation with normalization. We use support vector machines (SVM) with RBF kernels for classification. Our approach is trained on a 9,000-image training set and tested on the given test set of 1,000 images across 57 classes, with a correct classification rate of about 80%.

Keywords: Support Vector Machine; Visual Feature; Image Retrieval; Principal Component Analysis; Training Image
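The decision-level fusion described for the retrieval task can be sketched as follows: each channel scores database images against a query in its own feature space, the scores are normalized so the channels are comparable, and a weighted sum yields the final ranking score. The equal weights, negative-Euclidean-distance similarity, and min-max normalization below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def channel_score(query_feat, db_feats):
    """Similarity of each database image to the query in one feature
    space: negative Euclidean distance, min-max normalized to [0, 1]
    so the two channels are comparable at the decision level."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    s = -d
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_and_rank(q1, db1, q2, db2, w1=0.5, w2=0.5):
    """Combine the two channel scores by a weighted sum and return
    database indices sorted best-first."""
    fused = w1 * channel_score(q1, db1) + w2 * channel_score(q2, db2)
    return np.argsort(-fused)

# Toy example: 4 database images described in two feature spaces.
rng = np.random.default_rng(0)
db1, db2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
q1, q2 = db1[2] + 0.01, db2[2] + 0.01  # query nearly identical to image 2
ranking = fuse_and_rank(q1, db1, q2, db2)
```

In a real run the two feature spaces would hold the different visual descriptors the paper extracts, and the coarse pre-filtering stage would shrink the candidate set before this fine-ranking step is applied.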
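The early fusion used in the annotation task can likewise be sketched: each feature block is normalized so no descriptor dominates, the blocks are concatenated into one vector per image, and an RBF kernel over the fused vectors is what an SVM classifier would operate on. The z-score normalization and the `gamma` value are assumptions for illustration; the paper does not specify them here.

```python
import numpy as np

def zscore(X):
    """Normalize each feature dimension to zero mean and unit variance
    so no feature type dominates the concatenated vector."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def early_fuse(*feature_blocks):
    """Concatenate normalized feature blocks into one fused vector
    per image (early fusion)."""
    return np.hstack([zscore(F) for F in feature_blocks])

def rbf_kernel(A, B, gamma=0.1):
    """K(x, y) = exp(-gamma * ||x - y||^2), the kernel an RBF-SVM
    evaluates on the fused feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy example: 6 images with two hypothetical descriptor types.
rng = np.random.default_rng(1)
texture, shape = rng.normal(size=(6, 12)), rng.normal(size=(6, 4))
fused = early_fuse(texture, shape)  # one 16-dim vector per image
K = rbf_kernel(fused, fused)        # 6x6 Gram matrix for the SVM
```

In practice the Gram matrix (or the fused vectors directly) would be handed to an off-the-shelf SVM trainer; the paper reports using RBF kernels over the 57-class annotation problem.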
