Abstract
We present the first systematic work on Military High-level Camouflage object Detection (MHCD), which aims to identify objects visually embedded in chaotic backgrounds. The high intrinsic similarities (e.g., texture, intensity, and color) between the object of attention and its background make the task far more challenging than general object detection. In this paper, we construct a benchmark dataset, MHCD2022, which consists of 3000 densely annotated images covering 5 categories from multiple real-world scenes. Based on the observation that biological vision usually first obtains perception through a global search and then strives to recover the complete object, we propose a novel Military High-level detection Network, called MHNet, which is characterized by four modules: Subject Perception Gathering (SPG), Part-object Relationships Mining (PRM), Concept Recovery/Feature Clue Supplement (CR/FCS), and Springboard Selection (SS). First, the SPG is designed for rough global foreground perception by exploiting depth information. Second, the PRM mines potential part-object relations in diverse environments. Third, the CR/FCS and SS are proposed to enhance the degraded instance-level representation and to suppress the domain imbalance problem, respectively. Extensive experiments show that previous methods perform poorly on high-level camouflaged objects, whereas MHNet significantly outperforms camouflage baselines and competing methods on MHCD2022. Finally, we present the practical application value of this research and highlight several future directions.