Abstract

Due to the development of deep learning and Fully Convolutional Networks (FCNs), research on salient object detection has made great progress in recent years. However, such FCN-based models are often affected by the scale-space problem, which reduces saliency detection accuracy and leads to blurred object boundaries. In this paper, we propose a novel salient object detection method, which predicts saliency by incorporating both pixel-level and region-level predictions. First, in order to alleviate the scale-space problem, modified dilated convolution layers and short connections are integrated into the FCN model, and a pixel-level saliency map is generated by a pixel-wise saliency classifier. Then, we employ a superpixel-based manifold learning algorithm to obtain a better salient object boundary, from which a region-level saliency map with a clearer object boundary is generated. Finally, a simple fusion method is utilized to fuse the two saliency maps into a unified saliency map, followed by a DenseCRF post-refinement module to further optimize the final results. Experiments are conducted on two benchmark datasets to demonstrate the effectiveness of our method.
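To illustrate why dilated convolutions help with the scale-space problem, note that stacking them enlarges the receptive field without pooling or losing resolution. The sketch below computes the receptive field of a stack of stride-1 dilated convolutions; the 3×3 kernels and dilation rates 1, 2, 4 are illustrative choices, not the paper's actual configuration.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) pairs.
    """
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three 3x3 convolutions with dilation rates 1, 2, 4 (illustrative):
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
# The same three layers without dilation would only reach:
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
```

The same depth of network thus covers a much larger context when dilation is used, which is the property the modified dilated convolution layers exploit.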
