Abstract

With the rise of fully convolutional networks (FCNs) in recent years, more and more deep-learning-based salient object detection models have used or modified FCNs to perform pixel-level saliency prediction. However, many existing algorithms produce inaccurate saliency edges because they make insufficient use of edge information. In this paper, we address this problem by focusing on the complementarity between salient object information and salient edge information. Because salient edge features contain more detail, the boundaries of salient objects become more accurate when fused with edge information; conversely, salient object features help edge features better locate the foreground object and filter out background interference. Accordingly, we propose a dual-information progressive optimization network (DIPONet) for salient object detection. In the first step, we extract salient edge features from the low-level layers of a VGG or ResNet backbone and salient object features from the subsequent layers separately, and send them to group optimization fusion modules (GOFMs), which achieve self-optimization of these two kinds of features. In the second step, we design a dual-stream information optimization (DSIO) module based on the logical relationship between salient object information and salient edge information, which jointly optimizes the two kinds of features in a progressive fusion manner. Benefiting from the rich edge information in salient edge features and the location information in salient object features, the fused features help detect salient objects with more accurate boundaries. Extensive experimental results on five benchmark datasets show that our proposed model surpasses state-of-the-art models and produces accurate salient object predictions with sharp details.
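The complementary fusion idea at the heart of the abstract can be sketched as follows. This is a minimal toy illustration with hypothetical names (`dsio_fuse` is not from the paper); the actual GOFM and DSIO modules are learned convolutional blocks, whereas here object features simply gate edge features to suppress background edges, and the surviving edge detail is added back to sharpen boundaries:

```python
import numpy as np

def sigmoid(x):
    """Standard logistic function, used here as a soft gate."""
    return 1.0 / (1.0 + np.exp(-x))

def dsio_fuse(object_feat, edge_feat):
    """Toy sketch of dual-stream fusion (hypothetical, not the paper's layers).

    Object features gate the edge features, filtering out edges that
    belong to the background; the gated edge detail is then added back
    to the object features to refine object boundaries.
    """
    gated_edge = edge_feat * sigmoid(object_feat)  # object info filters background edges
    fused = object_feat + gated_edge               # edge detail sharpens boundaries
    return fused
```

In the real network this exchange happens progressively across multiple decoder stages rather than in a single element-wise step.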
