Abstract

Generally, medical content-based image retrieval (CBIR) systems select low-level visual features as image descriptors. However, these descriptors fail to provide clues for understanding the content of medical images the way a human expert does, which makes the retrieval results inconsistent with the user's intention. To address this problem, we propose a closed-loop brain tumor retrieval system for MR images with an eye-tracking-based relevance feedback mechanism. In our method, we first model the user's intention by training a convolutional neural network on the temporal and spatial features extracted from his/her eye-tracking data, collected while the user inspects the relevance between different images. Using visual features as a bridge, the relevancy degree of each database image to the query image is then computed with the user's intention model by transferring to it the eye movement data from the most visually similar image among the images iteratively accumulated in the canvas. The proposed retrieval system runs iteratively: in each round, the user's eye movement data are collected while he/she inspects the system returns, and the canvas collection is updated by appending the inspected returns to it. With the updated canvas collection, the relevancy degrees of the database images are recomputed and the system begins a new search round for the most relevant images. Extensive experiments were performed on a publicly available T1-weighted contrast-enhanced magnetic resonance image (CE-MRI) dataset consisting of three types of brain tumors (glioma, meningioma, and pituitary tumor) collected from 233 patients, with a total of 3064 images across the axial, coronal, and sagittal views. Experimental results from 22 volunteers (11 males and 11 females, average age 24.4 years) from our medical school show that, with the implicit involvement of users in the brain tumor retrieval process, our proposed system significantly outperforms state-of-the-art methods, achieving a Prec@10 of 99.94% and an mAP of 97.95% after the third round of iteration.
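
To make the closed-loop feedback process concrete, the sketch below outlines one plausible implementation of the iterative loop described above. All helper names (intention_model, visual_feat, gaze_of, cosine) are hypothetical stand-ins for components the abstract names only at a high level; this is a minimal sketch of the control flow under those assumptions, not the authors' implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two visual feature vectors (the "bridge").
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve(query, database, intention_model, visual_feat, gaze_of,
             rounds=3, k=10):
    """Closed-loop retrieval sketch. Hypothetical helpers:
    - intention_model(gaze_features) -> relevancy score
    - visual_feat(image)             -> visual feature vector
    - gaze_of(image)                 -> recorded eye-movement features
    """
    canvas = [query]  # images the user has already inspected
    returned = []
    for _ in range(rounds):
        scores = []
        for img in database:
            # Transfer eye-movement data from the most visually similar
            # canvas image, then score relevancy with the intention model.
            nearest = max(canvas,
                          key=lambda c: cosine(visual_feat(img),
                                               visual_feat(c)))
            scores.append(intention_model(gaze_of(nearest)))
        # Return the top-k most relevant images for user inspection.
        returned = [database[i] for i in np.argsort(scores)[::-1][:k]]
        # The user inspects the returns; their gaze is recorded and the
        # canvas grows, so relevancy degrees are recomputed next round.
        canvas.extend(returned)
    return returned
```

In this reading, each round only appends the inspected returns to the canvas and rescores the database, so the user steers the search implicitly through gaze rather than through explicit relevance labels.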
