Abstract
The camouflaged object segmentation (COS) task aims to segment objects visually embedded within the background. Existing models usually rely on prior information as an auxiliary means to identify camouflaged objects. However, low-quality priors and a singular form of guidance hinder the effective utilization of prior information. To address these issues, we propose a novel approach to prior generation and guidance, named the prior-guided transformer (PGT). For prior generation, we design a prior generation subnetwork consisting of a Transformer backbone and simple convolutions to obtain higher-quality priors at a lower cost. In addition, to fully exploit the backbone's ability to understand camouflage characteristics, we propose a novel two-stage training method that provides deep supervision of the backbone. For prior guidance, we design prior guidance modules (PGMs) with distinct spatial token mixers that respectively capture the global dependencies of location priors and the local details of boundary priors. Additionally, we introduce a cross-level prior in the form of features to facilitate inter-level communication among backbone features. Extensive experiments demonstrate the effectiveness and superiority of our method. The code is available at https://github.com/Ray3417/PGT.
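To make the guidance idea concrete, the following is a minimal NumPy sketch of how a prior guidance module might combine the two priors: a global token mixer (plain self-attention here) operates on features gated by a location prior, while a local token mixer (a 3×3 mean filter here) smooths a boundary prior into a residual cue. All function names and the specific mixer choices are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_mix(tokens):
    # tokens: (N, C). Plain self-attention as a stand-in global token mixer.
    attn = softmax(tokens @ tokens.T / np.sqrt(tokens.shape[1]), axis=-1)
    return attn @ tokens

def local_mix(fmap):
    # fmap: (H, W). A 3x3 mean filter as a stand-in local token mixer.
    padded = np.pad(fmap, 1, mode="edge")
    out = np.zeros_like(fmap)
    H, W = fmap.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def prior_guided_fuse(feat, loc_prior, bnd_prior):
    # feat: (H, W, C) backbone features; priors: (H, W) maps in [0, 1].
    H, W, C = feat.shape
    # Location prior gates the features; the global mixer then models
    # long-range dependencies over the gated tokens.
    gated = (feat * loc_prior[..., None]).reshape(H * W, C)
    g = global_mix(gated).reshape(H, W, C)
    # Boundary prior passes through the local mixer and re-weights the
    # original features as a residual, detail-preserving cue.
    b = local_mix(bnd_prior)[..., None]
    return g + feat * b
```

A usage example: with `feat` of shape `(32, 32, 64)` and two `(32, 32)` prior maps, `prior_guided_fuse` returns a fused feature map of the same shape as `feat`, which could then be passed to the next decoder level.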