Abstract

Salient object detection (SOD) is a critical task in computer vision that involves accurately identifying and segmenting visually significant objects in an image. To address the gridding issue and feature-dilution effect commonly encountered in SOD, we propose a context-aware middle-layer guidance network (CMGNet). CMGNet incorporates the context-aware central-layer guidance module (CCGM), which applies cost-effective large-kernel depth-wise convolutions with embedded parallel channel attention and squeeze-and-excitation (SE) attention mechanisms, enabling the model to effectively perceive objects of varying scales in complex scenes. In addition, an adjacent-to-central-layers paradigm enriches the model's ability to capture structural and contextual information. To further enhance performance, we introduce the dual-phase central-layer refinement module (DCRM), which removes minute blurry residuals in complex scenes and sharpens object segmentation. Finally, we propose a novel hybrid loss function that handles hard pixels at or near object boundaries through a weighting formula; it combines binary cross-entropy (BCE), intersection-over-union (IoU), and consistency-enhanced loss (CEL), yielding smoother and more precise saliency maps. Extensive evaluations on challenging benchmark datasets demonstrate that our approach outperforms 15 state-of-the-art salient object detection methods.
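Although the abstract only summarizes the design, two short sketches can make the described components concrete. The first is a minimal PyTorch sketch of a CCGM-style building block: a cost-effective large-kernel depth-wise convolution followed by squeeze-and-excitation channel attention. The class name, kernel size, and wiring are illustrative assumptions; the exact CCGM (including its parallel channel-attention branches and the adjacent-to-central-layers fusion) is not specified in the abstract.

import torch
import torch.nn as nn

class LargeKernelSEBlock(nn.Module):
    # Hypothetical CCGM-style block. A depth-wise convolution (groups ==
    # channels) keeps the cost of a large kernel low while enlarging the
    # receptive field without the gridding artifacts of dilated
    # convolutions; squeeze-and-excitation then re-weights channels.
    def __init__(self, channels, kernel_size=7, reduction=4):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)
        self.pwconv = nn.Conv2d(channels, channels, 1)
        self.se = nn.Sequential(           # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                  # excitation: per-channel gates
        )

    def forward(self, x):
        y = self.pwconv(self.dwconv(x))
        return x + y * self.se(y)          # channel-gated residual update

The second sketch illustrates the hybrid loss. The abstract names BCE, IoU, and CEL with a boundary-oriented weighting formula but gives no equations, so the sketch substitutes common SOD stand-ins: a local-contrast boundary weight in the style of F3Net and a MINet-style CEL term. Both choices are assumptions, not the paper's definitions.

import torch
import torch.nn.functional as F

def hybrid_loss(logits, gt, eps=1e-6):
    # logits, gt: float tensors of shape (B, 1, H, W); gt in {0, 1}.
    pred = torch.sigmoid(logits)
    # Assumed boundary weight: pixels whose 31x31 neighbourhood disagrees
    # with the ground truth (i.e. pixels at or near boundaries) receive a
    # larger weight, emphasizing hard pixels.
    w = 1.0 + 5.0 * torch.abs(F.avg_pool2d(gt, 31, stride=1, padding=15) - gt)
    # Weighted binary cross-entropy.
    bce = F.binary_cross_entropy_with_logits(logits, gt, reduction="none")
    bce = (w * bce).sum(dim=(2, 3)) / w.sum(dim=(2, 3))
    # Weighted IoU loss.
    inter = (w * pred * gt).sum(dim=(2, 3))
    union = (w * (pred + gt)).sum(dim=(2, 3))
    iou = 1.0 - (inter + 1.0) / (union - inter + 1.0)
    # Consistency-enhanced loss (MINet-style stand-in):
    # (|P| + |G| - 2|P ∩ G|) / (|P| + |G|).
    p_sum, g_sum = pred.sum(dim=(2, 3)), gt.sum(dim=(2, 3))
    cel = (p_sum + g_sum - 2.0 * (pred * gt).sum(dim=(2, 3))) / (p_sum + g_sum + eps)
    return (bce + iou + cel).mean()

In practice such a loss would be applied to the saliency logits at one or more decoder stages; the equal weighting of the three terms here is likewise an assumption.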
