Abstract

With the development of deep learning, salient object detection has seen significant improvements and optimizations. However, many salient object detection methods still have limitations, such as insufficient context-information extraction, limited interaction between features at different levels, and potential information loss caused by relying on a single interaction mode. To address these issues, we propose a dual-stream aggregation network based on multi-scale features, which consists of two main modules: a residual context information extraction (RCIE) module and a dense dual-stream aggregation (DDA) module. First, the RCIE module fully extracts context information by connecting features from different receptive fields via residual connections, where convolutional groups composed of asymmetric convolutions and dilated convolutions extract features at different receptive fields. Second, the DDA module strengthens the relationships between features at different levels by leveraging dense connections to obtain high-quality feature information. Finally, two interaction modes are used for dual-stream aggregation to generate saliency maps. Extensive experiments on 5 benchmark datasets show that the proposed model performs favorably against 15 state-of-the-art methods.
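To make the RCIE idea concrete, the following is a minimal PyTorch sketch of a residual block that combines asymmetric (1×3 and 3×1) convolutions with dilated convolutions of different dilation rates and fuses the resulting multi-receptive-field features through a residual connection. The class name, layer ordering, and dilation rates are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class RCIEBlock(nn.Module):
    """Hypothetical sketch of a residual context information extraction block.

    Parallel branches pair asymmetric convolutions with dilated convolutions
    to cover different receptive fields; their outputs are concatenated,
    fused by a 1x1 convolution, and added back to the input (residual
    connection). All specifics here are assumptions for illustration.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # Asymmetric convolution pair: 1x3 followed by 3x1.
                nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
                nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)),
                # Dilated 3x3 convolution enlarges the receptive field.
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 convolution fuses the concatenated branch outputs.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual connection preserves the input features.
        return x + self.fuse(ctx)
```

A quick shape check: passing a `(1, 32, 64, 64)` tensor through `RCIEBlock(32)` returns a tensor of the same shape, since every branch uses "same" padding and the residual addition requires matching dimensions.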
