Abstract
Camouflaged object detection (COD) is more challenging than traditional object detection because of the high similarity in color and texture between a camouflaged object and its background. Current COD models employ deep neural networks and concentrate on deep semantic information, neglecting edge boundaries and making poor use of the information in high-resolution feature maps. To this end, we propose an edge fusion network that fuses low-resolution but high-level semantic information with high-resolution but low-level spatial information. The low-resolution features are first fused at multiple scales, which enhances the high-resolution feature map. The two maps are then combined by an edge-aware fusion module that better exploits both semantic and spatial information to produce the final prediction map. Our models achieve competitive results on public datasets.
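The abstract does not specify the internals of the edge-aware fusion module, but the idea of combining upsampled semantic features with high-resolution spatial features under edge guidance can be sketched as below. All module and variable names here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of edge-aware fusion (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareFusion(nn.Module):
    """Fuse upsampled low-resolution semantic features with
    high-resolution spatial features, weighted by a predicted edge map."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict a single-channel edge attention map from spatial features.
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        # Merge the two edge-weighted streams back to `channels` maps.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, low_res_sem, high_res_spa):
        # Upsample semantic features to the spatial map's resolution.
        sem_up = F.interpolate(low_res_sem, size=high_res_spa.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Edge attention in [0, 1] from the high-resolution stream.
        edge = torch.sigmoid(self.edge_head(high_res_spa))
        # Emphasize boundary regions in both streams before fusing.
        fused = self.fuse(torch.cat([sem_up * edge, high_res_spa * edge], dim=1))
        return fused, edge

low = torch.randn(1, 32, 16, 16)   # low-resolution, high-level semantics
high = torch.randn(1, 32, 64, 64)  # high-resolution, low-level spatial detail
out, edge = EdgeAwareFusion(32)(low, high)
print(out.shape, edge.shape)
```

The fused map retains the spatial resolution of the high-resolution stream, while the edge map could also be supervised with a boundary loss if ground-truth edges are available.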