Abstract
Precise salient object detection (SOD) in realistic scenarios relies heavily on multi-scale context. Although existing SOD methods have made significant advances by incorporating contextual information, they often overlook the correlation among contexts at different scales during feature extraction, making it difficult to produce precise saliency maps. To mitigate these challenges, we propose the Context Exploration and Multi-level Interaction Network (CEMINet). Specifically, we first develop a progressive multi-scale context extraction (PMCE) module, which gradually captures strongly correlated multi-scale context through multi-receptive-field convolution operations. We then design a hierarchical feature hybrid interaction (HFHI) module that effectively aggregates multi-level features via a hybrid interaction strategy applied in a top-down manner. Finally, a stereoscopic attention enhancement (SAE) module refines the multi-level features from HFHI through two parallel attention branches combined in a stereoscopic structure, yielding precise predictions. Comprehensive experiments on five popular datasets show that CEMINet, without any post-processing, outperforms 16 state-of-the-art SOD models. To substantiate the effectiveness and generality of our model, we also apply it to camouflaged object detection (COD), where it outperforms the corresponding state-of-the-art models.
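The abstract gives no implementation details, but the multi-receptive-field idea behind PMCE can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering, assuming 3x3 dilated convolutions with illustrative dilation rates (1, 2, 4, 8) and a progressive branch-to-branch feed; the class name, channel sizes, and fusion scheme are our assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class PMCE(nn.Module):
    """Sketch of a progressive multi-scale context extraction block.

    Each branch applies a 3x3 convolution with a growing dilation rate
    (hence a growing receptive field) and also receives the previous
    branch's output, so contexts at adjacent scales stay correlated.
    Branch outputs are concatenated and fused by a 1x1 convolution.
    Dilation rates and channel widths are illustrative guesses, not
    the configuration used in the paper.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = []
        prev = torch.zeros_like(x)
        for branch in self.branches:
            # Progressive feed: each branch sees the input enriched by
            # the previous (smaller-receptive-field) branch's context.
            prev = branch(x + prev)
            outs.append(prev)
        # Fuse all scales and keep a residual path to the input.
        return self.fuse(torch.cat(outs, dim=1)) + x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 44, 44)   # a backbone feature map
    print(PMCE(64)(feat).shape)          # torch.Size([1, 64, 44, 44])
```

Feeding each branch the sum of the input and the previous branch's output is one simple way to realize the "gradual capture of strongly correlated multi-scale context" the abstract attributes to PMCE; the actual module may differ.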