Abstract
Content-based visual image retrieval is a research domain whose goal is to allow efficient access to the large amount of visual information being produced in medical institutions. Currently, visual retrieval is most often strictly separated from textual information extraction and retrieval in medical records. The complementary nature of the two methods, by contrast, invites their use together in an integrated fashion. We use a visual retrieval system (medGIFT) and a textual search engine powered by biomedical terminological resources (easyIR) together on a data set presented at the ImageCLEF image retrieval competition. Both systems are available free of charge as open source, and the dataset is publicly available, making the results reproducible and comparable. Results show that a simple combination of visual and textual features for retrieval significantly improves performance, both for fully automatic retrieval and for runs with manual relevance feedback. The techniques currently applied are fairly simple combinations, and better results can be expected when the combined weighting is optimized on learning data. Visual and textual features should be used together for information retrieval whenever both are available, to allow optimal access to varied data sources.
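The abstract does not detail the fusion scheme, but a "simple combination" of visual and textual retrieval results is commonly realized as a linear late fusion of normalized per-document scores. The following is a minimal sketch under that assumption; the function names, the weighting parameter `alpha`, and the min-max normalization are illustrative choices, not the method reported in the paper.

```python
from typing import Dict


def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Min-max normalize one system's scores so visual and textual runs are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}


def fuse(visual: Dict[str, float], textual: Dict[str, float], alpha: float = 0.5) -> Dict[str, float]:
    """Linear late fusion: alpha weights the visual score, (1 - alpha) the textual score.

    Documents returned by only one system keep their single weighted score.
    """
    v, t = normalize(visual), normalize(textual)
    docs = set(v) | set(t)
    return {doc: alpha * v.get(doc, 0.0) + (1 - alpha) * t.get(doc, 0.0) for doc in docs}


if __name__ == "__main__":
    # Toy scores from a hypothetical visual run (medGIFT-style) and textual run (easyIR-style).
    visual_run = {"img_001": 0.82, "img_002": 0.40, "img_003": 0.10}
    textual_run = {"img_002": 3.1, "img_004": 2.5}
    ranked = sorted(fuse(visual_run, textual_run).items(), key=lambda kv: kv[1], reverse=True)
    print(ranked)
```

In such a scheme, the weight `alpha` is exactly the kind of parameter the abstract suggests could be optimized on learning data rather than fixed by hand.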