Abstract

Fully convolutional networks (FCNs) play a significant role in salient object detection tasks due to their capability of extracting abundant multi-level and multi-scale features. However, most FCN-based models utilize multi-level features in a single, indiscriminate manner, which makes it difficult to accurately predict saliency maps. To address this problem, in this article we propose a recurrent network that uses hierarchical attention features as guidance for salient object detection. First, we divide multi-level features into low-level features and high-level features. Multi-scale features are extracted from the high-level features using atrous convolutions with different receptive fields to obtain contextual information. Meanwhile, the low-level features are refined as a supplement to add detailed information to the convolutional features. We observe that the attention focus of hierarchical features differs considerably because of their distinct information representations. For this reason, a two-stage attention module is introduced to guide the generation of saliency maps from hierarchical features. Effective hierarchical attention features are obtained by aggregating the low-level and high-level features, but the attention of the integrated features may be biased, leading to deviations in the detected salient regions. Therefore, we design a recurrent guidance network to correct the biased salient regions, which effectively suppresses distractions in the background and progressively refines salient object boundaries. Experimental results show that our method exhibits superior performance in both quantitative and qualitative assessments on several widely used benchmark datasets.
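The multi-scale extraction described above relies on atrous (dilated) convolutions: the same kernel applied with larger dilation rates covers a larger receptive field without adding parameters. The following is a minimal 1-D sketch of that idea only; the kernel, signal, and helper name are illustrative and not the paper's implementation.

```python
# Minimal 1-D sketch of atrous (dilated) convolution. Illustrative only:
# the same 3-tap kernel, applied with increasing dilation rates, gathers
# context from increasingly large receptive fields.

def atrous_conv1d(signal, kernel, rate):
    """Valid-mode 1-D convolution with dilation `rate`.

    Effective receptive field = (len(kernel) - 1) * rate + 1.
    """
    k = len(kernel)
    span = (k - 1) * rate  # distance between first and last sampled tap
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * rate] for j in range(k)))
    return out

signal = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
kernel = [1, 1, 1]  # simple box filter for illustration

# Same kernel, three dilation rates -> three scales of context,
# with no increase in the number of kernel weights.
for rate in (1, 2, 3):
    print(rate, atrous_conv1d(signal, kernel, rate))
```

In an FCN this is applied in 2-D over feature maps; stacking several such branches with different rates and concatenating their outputs is the standard way to capture multi-scale contextual information.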

Highlights

  • As a common preprocessing step for various computer vision tasks, salient object detection aims to locate the most prominent areas in an image, and is widely used in image segmentation [1], visual tracking [2], image retrieval [3], and video compression [4], etc.

  • Multi-level features can be divided into two parts: low-level features encode detailed information and high-level features contain semantic information, both of which are essential for saliency detection

  • We design a recurrent guidance network to further improve the performance of salient object detection.


Summary

INTRODUCTION

As a common preprocessing step for various computer vision tasks, salient object detection aims to locate the most prominent areas in an image, and is widely used in image segmentation [1], visual tracking [2], image retrieval [3], and video compression [4], etc. Multi-level features can be divided into two parts: low-level features encode detailed information and high-level features contain semantic information, both of which are essential for saliency detection. FCN-based models [9], [10] predict saliency maps using high-level features while ignoring low-level features, resulting in coarse salient object boundaries. Gate control structures have the ability to keep essential features and discard useless ones, which contributes to learning long-term dependencies. Inspired by this structure, we propose a recurrent guidance network that utilizes the gate control structure of LSTM [16] to progressively refine attention features. Considering the effectiveness of hierarchical processing and the advantages of recurrent networks, in this paper we propose a recurrent guidance network with hierarchical attention features for salient object detection, which can accurately detect salient objects.
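The gate control mechanism mentioned above can be sketched as an element-wise sigmoid gate that decides how much of the previous state to keep and how much of a new candidate update to let through. This is a toy illustration of the keep/discard idea behind LSTM gates, under assumed names (`gated_update`, `gate_logits`); it is not the paper's network.

```python
import math

# Toy sketch of an LSTM-style gate (illustrative, not the paper's network):
# a sigmoid gate decides, per element, how much of the previous state to
# keep versus how much of a new candidate update to accept.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_update(prev_state, candidate, gate_logits):
    """Element-wise gated blend: g * candidate + (1 - g) * prev_state."""
    out = []
    for p, c, z in zip(prev_state, candidate, gate_logits):
        g = sigmoid(z)                     # g near 1 -> accept the candidate
        out.append(g * c + (1.0 - g) * p)  # g near 0 -> keep previous state
    return out

prev = [0.2, 0.8, 0.5]       # previous saliency/attention estimate
cand = [0.9, 0.1, 0.5]       # new candidate features
logits = [10.0, -10.0, 0.0]  # open gate, closed gate, neutral gate

refined = gated_update(prev, cand, logits)
# First element follows the candidate, second keeps the previous state,
# third is an even blend of the two.
```

Applied recurrently, such gating lets a network suppress elements it judges to be background distraction while gradually committing to elements it judges salient, which is the intuition behind progressive refinement.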

RELATED WORK
HIERARCHICAL FEATURE EXTRACTION
RECURRENT GUIDANCE MODULE
2) Evaluation Metrics
LIMITATIONS
CONCLUSION

