Abstract

Most existing deep learning-based salient object detection (SOD) models adopt multi-level feature fusion strategies and have achieved remarkable progress. However, current SOD models still suffer from an uncertainty dilemma when predicting the salient probabilities of pixels surrounding the contour of salient objects. To solve this issue, we propose a novel uncertainty-aware SOD model, where multiple supervision signals, i.e., an internal contour uncertainty map, a saliency map and an external contour uncertainty map, guide the network to not only focus on the pixels inside the salient object but also shift part of its attention to the pixels surrounding the contour of salient objects. Furthermore, we introduce a new feature interaction module to aggregate internal contour uncertainty features, saliency features and external contour uncertainty features in the decoding stage, aiming to enhance the model's ability to deal with the "uncertain" pixels. Extensive experiments on four public benchmark datasets demonstrate the superiority of the proposed method over existing state-of-the-art SOD methods. In addition, the proposed method shows better attribute-based performance on the SOC dataset, suggesting that it can also handle challenging scenarios in SOD.
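To make the supervision signals concrete, the sketch below shows one plausible way to derive internal and external contour uncertainty maps from a binary ground-truth saliency mask: pixels near the object contour receive high uncertainty that decays with distance to the boundary. The Gaussian falloff, the band width, and the helper name `contour_uncertainty_maps` are illustrative assumptions; the abstract does not specify the paper's exact construction.

```python
import numpy as np
from scipy import ndimage


def contour_uncertainty_maps(gt_mask: np.ndarray, band: int = 5, sigma: float = 2.0):
    """Derive internal/external contour uncertainty maps from a binary GT saliency mask.

    Pixels close to the object contour get high uncertainty; the weight decays with
    the Euclidean distance to the boundary (Gaussian falloff is an assumption here).
    """
    mask = gt_mask.astype(bool)

    # Distance of every pixel to the object boundary, measured separately
    # inside the object (distance to background) and outside (distance to foreground).
    dist_inside = ndimage.distance_transform_edt(mask)      # > 0 inside the object
    dist_outside = ndimage.distance_transform_edt(~mask)    # > 0 outside the object

    # Gaussian falloff: uncertainty peaks at the contour and fades away from it.
    weight_in = np.exp(-(dist_inside ** 2) / (2 * sigma ** 2)) * mask
    weight_out = np.exp(-(dist_outside ** 2) / (2 * sigma ** 2)) * (~mask)

    # Keep only a narrow band of `band` pixels around the contour.
    weight_in[dist_inside > band] = 0.0
    weight_out[dist_outside > band] = 0.0
    return weight_in.astype(np.float32), weight_out.astype(np.float32)


if __name__ == "__main__":
    # Toy 64x64 mask with a filled square as the "salient object".
    gt = np.zeros((64, 64), dtype=np.uint8)
    gt[16:48, 16:48] = 1
    internal_u, external_u = contour_uncertainty_maps(gt)
    print(internal_u.max(), external_u.max())  # both peak near 1 at the contour
```

Under this assumed construction, the three maps (internal contour uncertainty, saliency, external contour uncertainty) can each serve as a separate supervision target for the corresponding decoder branch.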
