Abstract

Camouflaged object detection (COD) is more challenging than traditional object detection because of the high similarity in color and texture between a camouflaged object and its background. Current COD models employ deep neural networks and focus mainly on deep semantic information, ignoring edge boundaries and underusing the information in high-resolution feature maps. To this end, we propose an edge fusion network that fuses low-resolution but high-level semantic information with high-resolution but low-level spatial information. The low-resolution features are first fused at multiple scales, which enhances the high-resolution feature map. The two maps are then combined by an edge-aware fusion module that better exploits the semantic and spatial information to obtain the final prediction map. Our models achieve competitive results on public datasets.
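The fusion idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's actual module: the upsampling, the gradient-based edge response, and the blending rule below are all simplifying assumptions standing in for the learned multi-scale fusion and edge-aware fusion components.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling of a (H, W) feature map
    # (a stand-in for learned upsampling in the network).
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def edge_map(x):
    # Gradient-magnitude edge response, normalized to [0, 1]
    # (a stand-in for a learned edge branch).
    gy, gx = np.gradient(x)
    e = np.sqrt(gx ** 2 + gy ** 2)
    return e / (e.max() + 1e-8)

def edge_aware_fuse(high_res, low_res):
    # Upsample the low-resolution semantic map to the high-resolution
    # grid, then blend: near edges the fusion favors high-resolution
    # spatial detail, in flat regions the upsampled semantic response.
    sem = upsample2x(low_res)
    w = edge_map(high_res)
    return w * high_res + (1 - w) * sem

high = np.random.rand(8, 8)  # high-resolution, low-level spatial features
low = np.random.rand(4, 4)   # low-resolution, high-level semantic features
pred = edge_aware_fuse(high, low)
print(pred.shape)  # (8, 8)
```

The blend is a convex combination at every pixel, so the fused map stays within the range of its two inputs; in the actual network the fusion weights would be learned rather than derived from a fixed gradient filter.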
