Abstract

Features of varying pattern complexity, extracted from multiple context scopes, all provide important information for saliency prediction. Our model assumes that saliency can essentially be described by a combination of relatively simple features whose scales vary. Accordingly, we propose improvements to the saliency prediction model in three aspects to capture multiple contexts, and the final saliency prediction combines these contexts in a comprehensive way. The flexibility of the proposed approach makes it possible to convert many existing convolutional neural networks into saliency prediction models. Experiments on two benchmark datasets show that the proposed approach converts a VGG base model into a competitive saliency prediction model.
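A minimal sketch of the general idea the abstract describes, not the authors' exact architecture: features from several stages of a VGG-16 backbone (different context scopes) are reduced, resized to a common resolution, and fused into a single saliency map. The module names, tap points, and channel sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class MultiContextSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        features = vgg16(weights=None).features
        # Split the VGG-16 feature extractor into three stages so that features
        # with increasing receptive field (context scope) can be tapped.
        self.stage1 = features[:16]    # up to conv3_3 (256 channels)
        self.stage2 = features[16:23]  # up to conv4_3 (512 channels)
        self.stage3 = features[23:30]  # up to conv5_3 (512 channels)
        # Reduce each stage to a common channel count before fusion.
        self.reduce = nn.ModuleList(
            nn.Conv2d(c, 64, kernel_size=1) for c in (256, 512, 512)
        )
        # Combine the multi-scale features into one saliency map.
        self.fuse = nn.Conv2d(3 * 64, 1, kernel_size=1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Bring all stages to the resolution of the shallowest tapped feature.
        size = f1.shape[-2:]
        feats = [r(f) for r, f in zip(self.reduce, (f1, f2, f3))]
        feats = [F.interpolate(f, size=size, mode="bilinear",
                               align_corners=False) for f in feats]
        sal = torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))
        # Upsample the prediction back to the input resolution.
        return F.interpolate(sal, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

# Example: a 224x224 RGB image yields a 224x224 saliency map.
model = MultiContextSaliency()
saliency = model(torch.randn(1, 3, 224, 224))
print(saliency.shape)  # torch.Size([1, 1, 224, 224])
```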
