Abstract
The U-shape structure has shown its advantage in salient object detection because it efficiently combines multi-scale features. However, most existing U-shape-based methods focus on improving the bottom-up and top-down pathways while ignoring the connections between them. This paper shows that cross-scale information interaction can be achieved by centralizing these connections, thereby obtaining semantically stronger and positionally more precise features. To fully exploit the potential of the newly proposed strategy, we further design a relative global calibration module that can simultaneously process multi-scale inputs without spatial interpolation. Our approach aggregates features more effectively while introducing only a few additional parameters, and it can cooperate with various existing U-shape-based salient object detection methods by substituting the connections between the bottom-up and top-down pathways. Experimental results demonstrate that our proposed approach performs favorably against previous state-of-the-art methods on five widely used benchmarks with lower computational complexity. The source code will be publicly available.
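As a rough illustration of the idea summarized above, the sketch below shows one possible way to centralize the connections between the bottom-up and top-down pathways so that features from all scales interact jointly without spatial interpolation: each scale is flattened into tokens, all tokens are calibrated together by a shared lightweight block, and the results are restored to their original resolutions. This is a minimal sketch under our own assumptions, not the paper's implementation; the class name `CentralizedConnection` and all hyperparameters are hypothetical.

```python
# Illustrative sketch only; NOT the paper's relative global calibration module.
import torch
import torch.nn as nn


class CentralizedConnection(nn.Module):
    """Jointly calibrates multi-scale encoder features as one token sequence."""

    def __init__(self, channels_per_scale, embed_dim=64):
        super().__init__()
        # Project every scale to a common channel width (1x1 convs add few parameters).
        self.proj_in = nn.ModuleList(
            nn.Conv2d(c, embed_dim, kernel_size=1) for c in channels_per_scale
        )
        # A single lightweight block, shared by all scales, performs the
        # cross-scale interaction on the concatenated token sequence.
        self.norm = nn.LayerNorm(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.proj_out = nn.ModuleList(
            nn.Conv2d(embed_dim, c, kernel_size=1) for c in channels_per_scale
        )

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) tensors from the bottom-up pathway.
        shapes, tokens = [], []
        for f, proj in zip(feats, self.proj_in):
            f = proj(f)                                   # (B, D, H_i, W_i)
            _, _, h, w = f.shape
            shapes.append((h, w))
            tokens.append(f.flatten(2).transpose(1, 2))   # (B, H_i*W_i, D)
        x = torch.cat(tokens, dim=1)                      # all scales as one sequence
        x = self.norm(x)
        x = x + self.attn(x, x, x, need_weights=False)[0]  # cross-scale interaction
        # Split the sequence back and restore each original resolution (no interpolation).
        outs, start = [], 0
        for (h, w), proj in zip(shapes, self.proj_out):
            n = h * w
            part = x[:, start:start + n].transpose(1, 2).reshape(-1, x.size(-1), h, w)
            outs.append(proj(part))                       # back to C_i channels
            start += n
        return outs                                       # fed to the top-down pathway


if __name__ == "__main__":
    feats = [torch.randn(2, c, s, s) for c, s in [(64, 56), (128, 28), (256, 14)]]
    outs = CentralizedConnection([64, 128, 256])(feats)
    print([tuple(o.shape) for o in outs])
```

Because every scale keeps its own spatial size and only passes through 1x1 projections plus one shared calibration block, such a drop-in replacement for the skip connections adds relatively few parameters, which is consistent with the abstract's claim.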