Abstract

Deep convolutional neural networks (CNNs) have recently achieved great improvements in salient object detection. Most existing CNN-based models adopt cross entropy loss to optimize the network because of its capability for probability prediction. In salient object detection, cross entropy loss can be seen as a pixel-wise label classification that predicts whether each pixel is salient or non-salient. However, cross entropy loss treats each pixel independently when classifying its label and does not consider its relationship with other pixels. In this paper, we propose an additional loss function, called group loss, to address this limitation of cross entropy loss. In our model, the group loss and the cross entropy loss work together to optimize the network for better saliency detection performance. The purpose of the group loss is to make the differences among salient pixels small while keeping the distance between salient and non-salient pixels as large as possible. Meanwhile, because pixel-wise comparisons are computationally expensive, we design a superpixel pooling layer that computes the group loss at the superpixel level without introducing additional parameters. The experimental results show that introducing the group loss improves the performance of the CNN in salient object detection and makes the boundaries of salient objects more distinct.
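To illustrate the idea described above, the following is a minimal sketch (not the authors' implementation) of a group loss computed on superpixel-pooled saliency values. The pooling, the variance/margin formulation, and the margin parameter are assumptions introduced here for illustration; the abstract only specifies that intra-salient differences should be small, the salient/non-salient gap large, and that the computation is moved to the superpixel level by a parameter-free pooling layer.

import torch

def superpixel_pool(saliency, sp_labels, num_superpixels):
    # Parameter-free pooling: average the predicted saliency of the pixels
    # belonging to each superpixel (saliency and sp_labels are flattened to H*W).
    sums = torch.zeros(num_superpixels).scatter_add_(0, sp_labels, saliency)
    counts = torch.zeros(num_superpixels).scatter_add_(0, sp_labels, torch.ones_like(saliency))
    return sums / counts.clamp(min=1.0)

def group_loss(sp_saliency, sp_gt, margin=1.0):
    # sp_gt: binary ground truth per superpixel (1 = salient, 0 = non-salient).
    salient = sp_saliency[sp_gt == 1]
    non_salient = sp_saliency[sp_gt == 0]
    # Intra-group term: keep salient superpixels close to each other.
    intra = salient.var() if salient.numel() > 1 else salient.new_zeros(())
    # Inter-group term: push the salient and non-salient groups apart,
    # up to a (hypothetical) margin.
    inter = (salient.mean() - non_salient.mean()).abs()
    return intra + torch.relu(margin - inter)

In training, this term would be added to the standard pixel-wise cross entropy loss (e.g. total_loss = ce_loss + lambda * group_loss(...)), with the weighting factor again being an assumption rather than a value taken from the paper.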
