Abstract

Recently, deep learning techniques have been introduced to saliency detection and have achieved promising results, but most of them rely on superpixel algorithms. Consequently, their performance and efficiency depend largely on the output of the underlying segmentation algorithm. Instead of classifying superpixels, we treat salient object detection as a dense prediction task. Fully convolutional networks show strong potential for dense prediction, but the resolution and quality of their output maps need improvement owing to the loss of location information. To obtain high‐quality saliency maps, we propose a very efficient method that uses two Fully Convolutional Networks (FCNs) to extract global and local information, respectively, through different receptive fields. The global model produces accurate but coarse saliency maps, while the refinement model produces full‐sized, fine results. We evaluate our method on eight public datasets and find that it outperforms other state‐of‐the‐art methods. Moreover, our method runs much faster than existing deep learning methods.
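
The abstract only outlines the two-network design, so the following is a minimal sketch (not the authors' code) of that idea: a global FCN with a large effective receptive field predicts a coarse saliency map, and a refinement FCN operating at full resolution fuses the upsampled coarse map with the image to produce a fine map. All layer widths, depths, and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalFCN(nn.Module):
    """Downsampling stream: large receptive field, coarse but semantically accurate saliency."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/2 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/4 resolution
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/8 resolution -> large receptive field
        )
        self.score = nn.Conv2d(256, 1, 1)           # coarse saliency logits

    def forward(self, x):
        return self.score(self.features(x))         # H/8 x W/8 coarse map

class RefineFCN(nn.Module):
    """Full-resolution stream: refines the upsampled coarse map with local cues."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),                    # fine saliency logits
        )

    def forward(self, x, coarse):
        coarse_up = F.interpolate(coarse, size=x.shape[-2:],
                                  mode='bilinear', align_corners=False)
        return self.refine(torch.cat([x, coarse_up], dim=1))

# Usage: image -> coarse map (global context) -> full-sized refined map.
img = torch.randn(1, 3, 224, 224)
coarse = GlobalFCN()(img)
fine = torch.sigmoid(RefineFCN()(img, coarse))       # shape (1, 1, 224, 224)
```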
