Abstract

With the increased focus on visual attention (VA) over the last decade, a large number of computational visual saliency models have been developed. These models are evaluated using performance evaluation metrics that measure how well a predicted saliency map matches eye-tracking data obtained from human observers. Although a number of performance evaluation metrics exist, there is no clear consensus on which one is best. This work presents a subjective study in which human observers rate saliency maps computed by existing VA models by visually comparing them with ground-truth maps obtained from eye-tracking data. The subjective ratings are then correlated, using several correlation measures, with the scores produced by existing objective VA performance evaluation metrics as well as by a proposed metric. The correlation results show that the proposed objective VA metric outperforms the existing metrics.
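As a rough illustration of the correlation step described above, the sketch below compares subjective ratings against objective metric scores with a few common correlation measures. It is not taken from the paper: the specific measures (Pearson, Spearman, Kendall) and the example values are assumptions for illustration only.

# Hypothetical sketch (not the paper's code): correlating subjective ratings of
# predicted saliency maps with scores from an objective evaluation metric.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

# Assumed inputs, one entry per (model, image) pair:
# subjective_ratings - mean observer rating of how well the predicted map
#                      matches the ground-truth (eye-tracking) map
# metric_scores      - score from an objective VA evaluation metric for the
#                      same predicted map (values here are made up)
subjective_ratings = np.array([3.8, 2.1, 4.5, 1.7, 3.2])
metric_scores = np.array([0.71, 0.42, 0.88, 0.35, 0.64])

# Several correlation measures, as in the evaluation protocol described above.
plcc, _ = pearsonr(subjective_ratings, metric_scores)    # linear correlation
srocc, _ = spearmanr(subjective_ratings, metric_scores)  # rank-order correlation
krocc, _ = kendalltau(subjective_ratings, metric_scores)

print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}, KROCC={krocc:.3f}")

A metric whose scores correlate more strongly with the subjective ratings is taken to better reflect human judgment of saliency-map quality.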
