Abstract

Salient object detection is a challenging task in complex compositions that depict multiple objects at different scales. Despite recent progress driven by convolutional neural networks, state-of-the-art salient object detection methods still fall short in such real-life scenarios. In this paper, we propose a new method, MP-SOD, that exploits both Multi-Scale feature fusion and Pyramid spatial pooling to detect salient object regions of varying sizes. Our framework consists of a front-end network and two multi-scale fusion modules. The front-end network learns an end-to-end mapping from the input image to a saliency map, incorporating pyramid spatial pooling to aggregate rich context information from different spatial receptive fields. The multi-scale fusion module integrates saliency cues across layers, from low-level detail patterns to high-level semantic information, by concatenating feature maps to segment out salient objects at multiple scales. Extensive experiments on eight benchmark datasets demonstrate the superior performance of our method compared with existing methods.
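The pyramid spatial pooling described above aggregates context by pooling the same feature map over grids of several sizes and concatenating the results. A minimal NumPy sketch of this idea is shown below; the grid sizes, the use of average pooling, and nearest-neighbour upsampling are illustrative assumptions, since the abstract does not specify the paper's exact configuration.

```python
import numpy as np

def pyramid_spatial_pooling(feat, grid_sizes=(1, 2, 4)):
    """Illustrative pyramid spatial pooling (assumed configuration):
    average-pool the feature map over several grids, upsample each
    result back to the input resolution (nearest neighbour), and
    concatenate along the channel axis.

    feat: array of shape (C, H, W); H, W assumed divisible by each grid size.
    """
    C, H, W = feat.shape
    branches = [feat]  # keep the original features alongside pooled context
    for g in grid_sizes:
        bh, bw = H // g, W // g
        # average over each (bh, bw) cell -> (C, g, g)
        cells = feat.reshape(C, g, bh, g, bw).mean(axis=(2, 4))
        # nearest-neighbour upsample back to (C, H, W)
        up = np.repeat(np.repeat(cells, bh, axis=1), bw, axis=2)
        branches.append(up)
    # output has C * (1 + len(grid_sizes)) channels
    return np.concatenate(branches, axis=0)
```

For example, an 8-channel 32x32 feature map with grids (1, 2, 4) yields a 32-channel output, where the g=1 branch carries the global average of each channel and the finer grids carry progressively more local context.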


