Abstract

Compared with traditional object segmentation and detection, camouflaged object detection is much more difficult due to the ill-defined boundaries and the high intrinsic similarity between camouflaged regions and the background. Although various algorithms have been proposed for this task, existing methods still suffer from coarse boundaries and struggle to separate camouflaged objects from the background in complex scenes. In this paper, we propose a novel boundary-guided network (BgNet) to address this challenging problem in a coarse-to-fine manner. Specifically, we design a locating module that infers the initial location of camouflaged objects by exploiting local detail cues and global contextual information. Moreover, a boundary-guided fusion module is proposed to explore the complementary relationship between camouflaged regions and their boundaries. By leveraging the boundary features, we can not only generate prediction maps with sharper boundaries but also effectively suppress background noise. Equipped with these two key modules, our BgNet segments camouflaged regions accurately and quickly. Extensive experimental results on four widely used benchmark datasets demonstrate that the proposed BgNet runs in real time (36 FPS) on a single NVIDIA Titan XP GPU and outperforms 17 state-of-the-art competing algorithms in terms of six standard evaluation metrics. Source code will be publicly available at https://github.com/clelouch/BgNet upon paper acceptance.
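
To make the boundary-guided fusion idea concrete, the following is a minimal PyTorch-style sketch of one possible fusion step, in which a boundary feature map is turned into an attention map that re-weights the region features. All layer names and the exact attention formulation here are illustrative assumptions, not the authors' actual BgNet design.

```python
# Hypothetical sketch of a boundary-guided fusion block (not the official BgNet code).
import torch
import torch.nn as nn


class BoundaryGuidedFusion(nn.Module):
    """Fuses region features with boundary features via a boundary attention map."""

    def __init__(self, channels: int):
        super().__init__()
        # Project boundary features to a single-channel attention map in [0, 1].
        self.boundary_attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Refine the fused features with a small convolutional block.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, region_feat: torch.Tensor, boundary_feat: torch.Tensor) -> torch.Tensor:
        attn = self.boundary_attn(boundary_feat)   # B x 1 x H x W
        # Emphasize responses near object boundaries while keeping a residual path.
        fused = region_feat * (1 + attn)
        return self.refine(fused)


if __name__ == "__main__":
    fusion = BoundaryGuidedFusion(channels=64)
    region = torch.randn(2, 64, 44, 44)
    boundary = torch.randn(2, 64, 44, 44)
    print(fusion(region, boundary).shape)  # torch.Size([2, 64, 44, 44])
```

The residual form `region_feat * (1 + attn)` is one common way to inject boundary guidance without suppressing interior responses; the paper's actual module may combine the two feature streams differently.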
