Abstract
Current deep-learning-based visual saliency detection algorithms suffer from reduced detection performance in complex scenes owing to ineffective feature representation and poor generalization. The present study addresses this issue by proposing a recurrent residual network based on dense aggregated features. First, dense convolutional features at different levels are extracted from the ResNeXt101 network. Then, the features of all layers are aggregated via an atrous spatial pyramid pooling operation, which makes comprehensive use of all available saliency cues. Finally, residuals are learned recurrently under a deep supervision mechanism to continuously refine the saliency map. Experiments on publicly available datasets demonstrate that the dense aggregation of features not only strengthens the fusion of effective information within a single layer, but also enhances the interaction between information at different feature levels. As a result, the proposed algorithm provides better detection ability than current state-of-the-art algorithms.
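The sketch below illustrates, in PyTorch, the pipeline the abstract outlines: multi-level features from a ResNeXt101 backbone, dense aggregation through an atrous spatial pyramid pooling (ASPP) module, and recurrent residual refinement with deeply supervised intermediate outputs. The class names, channel widths, dilation rates, and number of refinement steps are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the described pipeline (assumed hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnext101_32x8d


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling over the aggregated feature map."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(
            torch.cat([F.relu(b(x)) for b in self.branches], dim=1))


class DenseAggregationSaliency(nn.Module):
    """Hypothetical model: dense feature aggregation + recurrent residual refinement."""
    def __init__(self, feat_ch=64, steps=3):
        super().__init__()
        backbone = resnext101_32x8d()  # pretrained weights would be loaded in practice
        # Reuse the stem and the four residual stages as multi-level extractors.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        # 1x1 convolutions compress each stage to a common channel width.
        self.compress = nn.ModuleList([
            nn.Conv2d(c, feat_ch, 1) for c in (256, 512, 1024, 2048)])
        self.aspp = ASPP(feat_ch * 4, feat_ch)
        self.initial = nn.Conv2d(feat_ch, 1, 3, padding=1)
        # Shared head predicts a residual correction to the current saliency map.
        self.refine = nn.Conv2d(feat_ch + 1, 1, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        h, w = x.shape[-2:]
        feats, f = [], self.stem(x)
        for stage, comp in zip(self.stages, self.compress):
            f = stage(f)
            feats.append(comp(f))
        # Densely aggregate all levels at the highest feature resolution.
        size = feats[0].shape[-2:]
        agg = torch.cat([F.interpolate(t, size=size, mode="bilinear",
                                       align_corners=False) for t in feats], dim=1)
        agg = self.aspp(agg)
        # Recurrent residual refinement; every intermediate map is kept so
        # each step can receive its own loss (deep supervision).
        sal = self.initial(agg)
        outputs = [sal]
        for _ in range(self.steps):
            sal = sal + self.refine(torch.cat([agg, sal], dim=1))
            outputs.append(sal)
        return [F.interpolate(o, size=(h, w), mode="bilinear",
                              align_corners=False) for o in outputs]
```

During training, a per-step loss (e.g. binary cross-entropy against the ground-truth mask) would be summed over all returned maps to implement the deep supervision mentioned in the abstract; at inference only the final map is used.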