Camouflaged Object Detection (COD) remains challenging because concealed objects are highly similar in appearance to their surroundings. Existing methods do not adequately exploit the subtle similarity differences between camouflaged objects and their backgrounds, leading to suboptimal performance, particularly when locating small objects. To address this, we introduce a novel Dual-branch Fusion and Dual Self-similarity Network (DSNet) comprising three modules. First, a dual-branch fusion module mimics how humans observe camouflaged objects and extracts multi-angle information. The dual-branch features are then decoded by a symmetric joint decoder module, which performs channel interaction through multi-stage inter-group interaction. Finally, inspired by the self-similarity of natural organisms, a self-similarity constraint module applies global and mutual constraints to capture subtle foreground-background differences. DSNet achieves superior performance in experiments, and the constraint module can be attached to other models as a plug-and-play component to further boost their performance.
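To make the idea of the self-similarity constraints concrete, the snippet below is a minimal PyTorch sketch of how a global self-similarity map and a mutual foreground-background constraint could be computed. The tensor names (`feat`, `mask`), shapes, and the exact loss form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): a global self-similarity map over
# spatial features and a mutual foreground/background contrast term.
import torch
import torch.nn.functional as F


def self_similarity(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between all spatial positions.

    feat: (B, C, H, W) feature map -> returns a (B, H*W, H*W) similarity matrix.
    """
    x = feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
    x = F.normalize(x, dim=-1)               # unit-norm feature vectors
    return x @ x.transpose(1, 2)             # cosine similarity between positions


def mutual_constraint(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Encourage foreground and background prototypes to be dissimilar.

    feat: (B, C, H, W) features; mask: (B, 1, H, W) soft foreground map in [0, 1].
    Minimizing the returned value pushes the two prototypes apart.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    fg = (feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp_min(1e-6)
    bg = (feat * (1 - mask)).sum(dim=(2, 3)) / (1 - mask).sum(dim=(2, 3)).clamp_min(1e-6)
    return F.cosine_similarity(fg, bg, dim=1).mean()
```

Because both functions take only a feature map and a predicted mask, a constraint of this kind can be attached to other segmentation models as an auxiliary loss, which is consistent with the plug-and-play use described in the abstract.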