Abstract

We propose a new framework for camouflaged object detection (COD), named FLCNet, which comprises three modules: an underlying feature mining module (UFM), a texture-enhanced module (TEM), and a neighborhood feature fusion module (NFFM). Because the foreground and background of a camouflaged object differ only slightly, existing models that overlook the analysis of underlying features extract low-level features whose texture information is not prominent enough and contains considerable interference. To address this issue, we designed the UFM, which combines convolutions with various dilation rates, max-pooling, and average-pooling to deeply mine the texture information of underlying features and suppress interference. Motivated by the traits passed down through biological evolution, we designed the NFFM, which fuses neighboring features primarily through element-wise multiplication and concatenation followed by an addition operation. To obtain precise prediction maps, our model adopts a top-down strategy to gradually combine high-level and low-level information. On four benchmark COD datasets, the proposed framework outperforms 21 deep-learning-based models on seven frequently used metrics, demonstrating the effectiveness of our methodology.
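The NFFM fusion described above (element-wise multiplication and concatenation followed by an addition) can be sketched as follows. This is a minimal illustration of that operation pattern only, not the paper's implementation: the function name `nffm_fuse`, the use of a channel-mean in place of a learned 1x1 convolution, and all shapes are assumptions for the sake of a self-contained example.

```python
import numpy as np

def nffm_fuse(f_low, f_high):
    """Hypothetical sketch of neighborhood feature fusion: element-wise
    multiplication of two adjacent-level feature maps, channel-wise
    concatenation, and a final addition. Both inputs are (C, H, W)."""
    # Element-wise multiplication emphasizes regions where both levels respond.
    interaction = f_low * f_high
    # Concatenate along the channel axis, then average the two halves
    # (a stand-in for the learned channel-reduction a real model would use)
    # so the final addition is shape-compatible.
    concat = np.concatenate([f_low, f_high], axis=0)
    reduced = concat.reshape(2, *f_low.shape).mean(axis=0)
    # Addition combines the multiplicative and concatenative branches.
    return interaction + reduced
```

In a top-down decoder, such a fusion would be applied repeatedly, feeding each fused map back in as the "high-level" input for the next, shallower level.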
