Abstract

As a widely used technology, visual saliency detection has attracted considerable attention over the past decades. Although a large number of methods, especially fully convolutional network (FCN) based approaches, have been proposed and achieve remarkable performance, extending representative architectures to the visual saliency detection task remains valuable. In this paper, we propose an improved U-Net-like network, the pyramid feature attention-based U-Net-like network (PFAU-Net), for visual saliency detection. To enable the network to extract features with stronger representational power, we introduce a context-aware feature extraction (CFE) module and a channel attention module into the U-shaped backbone to obtain valuable multiscale features, and we add a feature pyramid path in the decoder to take advantage of multilevel information. Moreover, we construct the loss function from three terms: a pixel-level cross-entropy term, an image-level intersection-over-union (IoU) term, and a structural similarity term, which together encourage the model to learn more saliency-related knowledge. To verify the effectiveness of the proposed model, we conduct extensive experiments on six widely used public datasets. The results indicate that (1) our improvements significantly boost the performance of the backbone network on all test datasets, and (2) the proposed model outperforms both competing FCN-based networks and non-neural-network approaches. Both quantitative and qualitative evaluations confirm the effectiveness of the proposed model.
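The three-term loss described above (pixel-level cross-entropy, image-level IoU, and structural similarity) can be sketched as follows. This is an illustrative NumPy version under assumed equal term weights and a simplified global SSIM computed over the whole map; the paper's exact formulation (e.g., patch-wise SSIM and any per-term weighting) may differ.

```python
import numpy as np

def bce_loss(pred, gt, eps=1e-7):
    # Pixel-level binary cross-entropy, averaged over all pixels.
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p)))

def iou_loss(pred, gt, eps=1e-7):
    # Image-level soft IoU loss: 1 - intersection / union.
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def ssim_loss(pred, gt, c1=0.01 ** 2, c2=0.03 ** 2):
    # Structural similarity term, simplified to global statistics
    # (in practice SSIM is usually computed over local windows).
    mx, my = pred.mean(), gt.mean()
    vx, vy = pred.var(), gt.var()
    cov = ((pred - mx) * (gt - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return float(1.0 - ssim)

def hybrid_loss(pred, gt):
    # Sum of the three terms; equal weights are an assumption here.
    return bce_loss(pred, gt) + iou_loss(pred, gt) + ssim_loss(pred, gt)
```

With a perfect prediction (`pred == gt`) all three terms are near zero, while a fully inverted prediction is heavily penalized, mainly by the cross-entropy term.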

