Abstract

In this article, we tackle saliency detection from a complementary perspective: rather than detecting only salient regions (foreground), we detect both salient and nonsalient regions (background), and propose a novel complementarity-aware attention network. It is a unified framework with two branches, a positive attention module (PAM) and a negative attention module (NAM), for foreground and background detection, respectively. Specifically, the PAM exploits a position self-attention mechanism to enhance the discriminative power of the feature representation and detects most salient object regions. Meanwhile, the NAM is designed to detect background regions, aiming to recover the object parts and details missed in the prediction map produced by the PAM. By fusing the two attention modules, the NAM provides complementary cues that assist the PAM in precise object detection. Furthermore, to capture more multiscale contextual information, we introduce a bidirectional structure with multisupervision into the proposed complementarity-aware attention module, which further improves performance. Experiments on five benchmark datasets show that the proposed framework achieves results comparable to those of state-of-the-art saliency detection methods.
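To make the position self-attention mechanism referenced for the PAM concrete, the following PyTorch sketch computes spatial (position-wise) attention over a convolutional feature map and adds the attended features back through a learnable residual scale. The class name, channel-reduction ratio, and residual scaling are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionSelfAttention(nn.Module):
    """Illustrative position (spatial) self-attention block.

    A minimal sketch of the kind of mechanism the abstract attributes to
    the PAM; the exact layer sizes and hyperparameters are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                      # B x C' x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)               # B x HW x HW affinity
        v = self.value(x).view(b, -1, h * w)                    # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                             # residual connection

# Example usage on a dummy feature map
features = torch.randn(2, 64, 32, 32)
enhanced = PositionSelfAttention(64)(features)
print(enhanced.shape)  # torch.Size([2, 64, 32, 32])
```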
