Abstract

Camouflaged object detection (COD) is the task of segmenting objects that blend subtly into their surroundings. Edge and texture information can be used to reveal the boundaries of camouflaged objects and to detect texture differences between camouflaged objects and their surroundings. However, existing methods often fail to fully exploit these two types of information. Considering this, our paper proposes an innovative Dual Cross Perception Network (DCPNet) with texture and boundary guidance for camouflaged object detection. DCPNet consists of two essential modules, namely the Dual Cross Fusion Module (DCFM) and the Subgroup Aggregation Module (SAM). DCFM uses attention to emphasize edge and texture cues by cross-fusing edge, texture, and basic RGB features, which strengthens the network's ability to capture edge information and texture details. SAM assigns different weights to low-level and high-level features to improve the comprehension of objects and scenes at various scales. Experiments demonstrate that DCPNet outperforms 13 state-of-the-art methods on four widely used evaluation metrics.
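The abstract only sketches the two modules at a high level, so the snippet below is a minimal PyTorch-style illustration of the ideas it describes, not the authors' implementation: DCFM-style cross-fusion of RGB, edge, and texture features via attention, and SAM-style weighted aggregation of low- and high-level features. The specific attention form (channel attention), channel sizes, and class names are assumptions made purely for illustration.

```python
# Illustrative sketch of the two modules described in the abstract.
# All layer choices here are assumptions; the paper's exact design may differ.
import torch
import torch.nn as nn


class DualCrossFusionSketch(nn.Module):
    """Cross-fuses RGB, edge, and texture features using channel attention
    (a stand-in for the attention used by DCFM)."""

    def __init__(self, channels: int):
        super().__init__()
        self.edge_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.texture_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1)

    def forward(self, rgb, edge, texture):
        # Re-weight the RGB stream by edge- and texture-derived attention,
        # then concatenate all three cues and fuse them.
        edge_enhanced = rgb * self.edge_attn(edge)
        texture_enhanced = rgb * self.texture_attn(texture)
        return self.fuse(torch.cat([edge_enhanced, texture_enhanced, rgb], dim=1))


class SubgroupAggregationSketch(nn.Module):
    """Aggregates multi-level features with learned scalar weights,
    mimicking SAM's weighting of low- and high-level features."""

    def __init__(self, channels: int, num_levels: int = 2):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_levels))
        self.proj = nn.Conv2d(channels, 1, kernel_size=1)  # segmentation logits

    def forward(self, features):
        # `features` is a list of maps already resized to a common resolution.
        w = torch.softmax(self.weights, dim=0)
        fused = sum(wi * f for wi, f in zip(w, features))
        return self.proj(fused)


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 88, 88)
    edge = torch.randn(1, 64, 88, 88)
    texture = torch.randn(1, 64, 88, 88)
    fused = DualCrossFusionSketch(64)(rgb, edge, texture)
    mask_logits = SubgroupAggregationSketch(64)([fused, torch.randn(1, 64, 88, 88)])
    print(mask_logits.shape)  # torch.Size([1, 1, 88, 88])
```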
