Abstract
The purpose of co-salient object detection (CoSOD) is to detect the salient objects that co-occur in a group of relevant images. CoSOD has advanced significantly with recent progress in convolutional neural networks (CNNs). However, CNNs show general limitations in modeling long-range feature dependencies, which are crucial for CoSOD. In vision transformers, the self-attention mechanism captures global dependencies but unfortunately weakens local spatial details, which are also essential for CoSOD. To address these issues, we propose a dual network structure, called TCNet, which can efficiently excavate both local information and global representations for co-saliency learning via the parallel interaction of Transformers and CNNs. Specifically, it contains three critical components, i.e., the mutual consensus module (MCM), the consensus complementary module (CCM), and the group consistent progressive decoder (GCPD). MCM captures the global consensus from the high-level features of the two branches, which then guides the integration of consensus cues from both branches at each level. CCM is designed to effectively fuse the consensus of local information and global contexts from different levels of the two branches. Finally, GCPD is developed to maintain group feature consistency and predict accurate co-saliency maps. The proposed TCNet is evaluated on five challenging CoSOD benchmark datasets using six widely used metrics, and the results show that it outperforms existing cutting-edge methods for co-salient object detection.
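To make the dual-branch layout concrete, the following is a minimal structural sketch in PyTorch. It only mirrors the data flow named in the abstract (two parallel backbones, MCM on top-level features, level-wise CCM fusion, GCPD decoding); the module names follow the paper's terminology, but every interface and internal detail here is an assumption for illustration, not the authors' implementation.

import torch.nn as nn

class TCNetSketch(nn.Module):
    """Hypothetical skeleton of the dual-branch interaction described above."""

    def __init__(self, cnn_backbone, transformer_backbone, mcm, ccms, gcpd):
        super().__init__()
        self.cnn = cnn_backbone                   # local spatial details, multi-level
        self.transformer = transformer_backbone   # global dependencies, multi-level
        self.mcm = mcm                            # mutual consensus module
        self.ccms = nn.ModuleList(ccms)           # one consensus complementary module per level
        self.gcpd = gcpd                          # group consistent progressive decoder

    def forward(self, group_images):
        # Each branch extracts a feature pyramid for the whole image group.
        cnn_feats = self.cnn(group_images)            # [f_1, ..., f_L]
        trans_feats = self.transformer(group_images)  # [g_1, ..., g_L]

        # MCM: derive a group-wise global consensus from the highest-level
        # features of both branches; it guides fusion at every level.
        consensus = self.mcm(cnn_feats[-1], trans_feats[-1])

        # CCM: fuse local information and global contexts level by level,
        # conditioned on the shared consensus.
        fused = [ccm(f, g, consensus)
                 for ccm, f, g in zip(self.ccms, cnn_feats, trans_feats)]

        # GCPD: progressively decode the fused features into co-saliency maps
        # while keeping predictions consistent across the image group.
        return self.gcpd(fused)

In this reading, the consensus acts as a fixed guide shared by all fusion levels, which is one plausible way to realize "a guide for the following integration of consensus cues of both branches at each level."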