Abstract
Computational modeling of visual attention is an active area of research. These models have been successfully employed in applications such as robotics. However, most computational models of visual attention are developed in the context of natural scenes, and their applicability to medical images has not been well investigated. Because radiologists interpret a large number of clinical images in limited time, an efficient strategy for deploying their visual attention is essential. Visual saliency maps, which highlight image regions that differ markedly from their surroundings, are expected to predict where radiologists fixate their gaze. We compared 16 state-of-the-art saliency models across three medical imaging modalities and evaluated the estimated saliency maps against radiologists' eye movements. The results show that the models achieved competitive accuracy on three evaluation metrics, but the rank order of the models varied substantially across the three modalities. Moreover, the model rankings on the medical images all differed considerably from the rankings on the benchmark MIT300 dataset of natural images. Thus, modality-specific tuning of saliency models is necessary to make them valuable for applications such as medical image compression and radiology education.
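The abstract does not name the three evaluation metrics used; on the MIT300 benchmark, normalized scanpath saliency (NSS) is one common fixation-based metric. The minimal sketch below (Python/NumPy, purely illustrative and not the authors' evaluation code) shows how a predicted saliency map could be scored against a binary fixation map under that assumption: the map is z-scored, and the mean value at fixated pixels is reported.

```python
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    at observer fixation locations (higher = better agreement).

    saliency_map : 2-D array of model-predicted saliency values.
    fixation_map : 2-D binary array, 1 where an observer fixated.
    """
    s = saliency_map.astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12)  # z-score the prediction
    return float(s[fixation_map.astype(bool)].mean())

# Toy example: a prediction that peaks exactly at the single fixation.
pred = np.zeros((8, 8)); pred[4, 4] = 1.0
fix = np.zeros((8, 8), dtype=int); fix[4, 4] = 1
print(f"NSS = {nss(pred, fix):.2f}")  # well above 0 => good prediction
```

Comparing models by such a metric separately per modality would expose exactly the rank-order shifts the abstract reports.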