Abstract

Camouflaged Object Detection (COD) remains a challenging task because concealed objects are visually very similar to their surroundings, which makes them difficult to distinguish. Existing methods do not adequately exploit the subtle differences between camouflaged objects and their backgrounds, leading to suboptimal performance, particularly when locating small-scale objects. To address this issue, we introduce a novel Dual-branch Fusion and Dual Self-similarity Network (DSNet) comprising three modules. The first, a dual-branch fusion module, mimics how humans observe camouflaged objects and extracts information from multiple perspectives. The dual-branch features are then decoded by a symmetric joint decoder module that performs channel interaction through multi-stage inter-group interaction. Inspired by the self-similarity found in natural organisms, the self-similarity constraint module employs global and mutual constraints to identify subtle foreground-background differences. DSNet achieves superior performance in our experiments, and the constraint module can also be applied to other models as a plug-and-play component, further boosting their performance.
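
The abstract does not detail how the self-similarity constraints are computed. As a rough illustration only, a minimal sketch of one common way to build a spatial self-similarity map from backbone features is shown below (pairwise cosine similarity between all spatial positions); the function name and shapes are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def self_similarity_map(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between all spatial positions of a feature map.

    feat: (B, C, H, W) backbone features (hypothetical input shape).
    returns: (B, H*W, H*W) self-similarity matrix.
    """
    b, c, h, w = feat.shape
    x = feat.flatten(2)                      # (B, C, H*W): one C-dim vector per position
    x = F.normalize(x, dim=1)                # unit-normalize along the channel dimension
    return torch.bmm(x.transpose(1, 2), x)   # cosine similarity between every pair of positions
```

Under this assumed formulation, such a map could, for example, be contrasted between predicted foreground and background regions to expose subtle differences; how DSNet's global and mutual constraints actually use it is defined in the paper itself.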
