Abstract

A physician’s experience strongly influences how the content of medical images is interpreted. Over time, physicians refine their ability to examine images, and this is usually reflected in the gaze patterns they follow to observe the visual cues that lead them to diagnostic decisions. In the context of gaze prediction, graph-based and machine learning methods have been proposed for visual saliency estimation on generic images. In this work we present a novel and robust gaze estimation methodology based on physicians’ eye fixations, using convolutional neural networks (CNNs) trained according to a novel co-operative scheme on medical images acquired during Wireless Capsule Endoscopy (WCE). The proposed training approach considers both the reconstruction accuracy of the estimated saliency maps and their contribution to the classification of normal and abnormal findings. The model trained with the proposed co-operative procedure achieved an average score of 0.76 in terms of Judd’s Area Under the receiver operating characteristic Curve (AUC-J).
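The co-operative training objective described in the abstract combines a saliency-map reconstruction term with a classification term. A minimal sketch of such a combined loss is shown below; the `alpha` trade-off weight, the mean-squared-error reconstruction term, and the cross-entropy classification term are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

def cooperative_loss(pred_saliency, true_saliency, pred_probs, true_label, alpha=0.5):
    """Weighted combination of saliency reconstruction and classification losses.

    alpha is a hypothetical trade-off weight (not taken from the paper).
    """
    # Reconstruction term: mean squared error between the estimated and
    # reference (fixation-derived) saliency maps.
    recon = np.mean((pred_saliency - true_saliency) ** 2)
    # Classification term: cross-entropy for the normal/abnormal decision.
    eps = 1e-12
    ce = -np.log(pred_probs[true_label] + eps)
    return alpha * recon + (1 - alpha) * ce
```

Under this formulation, a perfect saliency reconstruction with a confident correct classification drives the loss toward zero, while errors in either task increase it, so the saliency estimator is rewarded for maps that also help the classifier.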
