Abstract

The critical challenge for co-salient object detection (CoSOD) is to extract common saliency information from a group of relevant images. Most existing CoSOD methods do not fully explore the semantic commonality of co-salient objects, which can provide strong guidance for collaborative feature learning, and do not take full advantage of the rich hierarchical features of different layers, resulting in inferior performance. To this end, we propose a Group Semantic-guided Neighbor interaction Network (GSNNet) for co-salient object detection. Specifically, the proposed network contains a group semantic module (GSM), a neighbor interaction module (NIM), and a feature enhancement module (FEM). The network first learns a semantic consensus from a group of relevant images via the GSM, which uses a reverse guidance strategy and a group-wise combination strategy to distill group semantic cues from the forward and complementary features. Under the guidance of the group semantics, the NIM conducts neighbor feature interaction between adjacent layers to excavate contextual information and enhance the feature representation. The FEM then refines the critical cues with an attention mechanism, which improves the compactness of the feature representation. The proposed GSNNet is evaluated on three challenging CoSOD benchmark datasets using four widely used metrics, and the results demonstrate that it is superior to twelve other cutting-edge methods for co-salient object detection.
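To make the idea of a group semantic consensus concrete, the following is a minimal, hypothetical sketch (not the paper's actual GSM): per-image features are pooled and averaged across the group to form a consensus vector, which then gates each image's feature channels as a stand-in for semantic guidance. All function names, shapes, and the gating scheme here are illustrative assumptions.

```python
import numpy as np

def group_semantic_consensus(feats):
    """Distill a group-level semantic vector from per-image feature maps.

    feats: array of shape (N, C, H, W) -- features of N relevant images.
    Returns a (C,) consensus vector (a simplified stand-in for the GSM).
    """
    per_image = feats.mean(axis=(2, 3))   # global average pooling -> (N, C)
    return per_image.mean(axis=0)         # average over the group -> (C,)

def guide_features(feats, consensus):
    """Re-weight each image's feature channels by the group consensus
    (a toy form of group-semantic guidance, not the paper's method)."""
    gates = 1.0 / (1.0 + np.exp(-consensus))      # sigmoid channel gates, (C,)
    return feats * gates[None, :, None, None]     # broadcast over N, H, W

# Toy usage on random features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 16, 16))       # 4 images, 8 channels
consensus = group_semantic_consensus(feats)
guided = guide_features(feats, consensus)
```

In the actual network the consensus would be learned and combined with reverse guidance and group-wise combination strategies; this sketch only illustrates the data flow of pooling a group statistic and using it to modulate individual features.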
