Abstract

As an active topic in computer vision, RGB-D salient object detection has witnessed substantial progress. Although existing methods achieve appreciable performance, challenges remain. Because convolutional neural networks are inherently local, a model must be sufficiently deep to obtain a global receptive field; conversely, transformers are strongly global but capture local detail poorly. In addition, the information shared among contextual features is usually overlooked. To address these bottlenecks, we propose a novel Group Transformer Network (GroupTransNet), which learns long-range dependencies across cross-layer features to promote richer feature expression between high-level and low-level features. In particular, we softly group the features of the middle and last three levels so that each group absorbs semantic information from the features of the slightly earlier level. First, the input features are adaptively purified by element-wise operations and a sequential attention mechanism. Next, the intermediate features of different layers are uniformly fused and then processed by several transformers in multiple groups. Finally, the output features are clustered by group and combined with the low-level features. Extensive experiments demonstrate that GroupTransNet outperforms the competitors and achieves new state-of-the-art performance.
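The paper's implementation is not reproduced on this page. As a rough illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch of one group: cross-modal features are purified by a sequential attention mechanism, fused element-wise, and passed through a shared transformer encoder. The CBAM-style channel-then-spatial attention, the fusion rule x*y + x + y, the overlapping two-level groups, and all module names and hyperparameters are assumptions for illustration, not the authors' actual design.

import torch
import torch.nn as nn

class SequentialAttention(nn.Module):
    # Channel attention followed by spatial attention (CBAM-style);
    # a hypothetical stand-in for the paper's "sequential attention mechanism".
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_fc(x)  # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(stats)  # reweight spatial positions

class GroupTransBlock(nn.Module):
    # Purify and fuse RGB/depth features element-wise, then run a shared
    # transformer encoder over the fused tokens of one soft group.
    def __init__(self, channels, num_layers=2, num_heads=4):
        super().__init__()
        self.attn = SequentialAttention(channels)
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=channels * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, rgb_feat, depth_feat):
        # Element-wise purification/fusion (assumed form).
        fused = self.attn(rgb_feat * depth_feat + rgb_feat + depth_feat)
        b, c, h, w = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)  # (B, HW, C)
        tokens = self.encoder(tokens)              # long-range dependencies
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Illustrative soft grouping over backbone levels 3-5: groups {3,4} and {4,5}
# overlap at level 4, so level-4 features are processed alongside both
# neighbours and can absorb semantics from the slightly earlier level.
block_a = GroupTransBlock(channels=64)  # shared by levels 3 and 4
block_b = GroupTransBlock(channels=64)  # shared by levels 4 and 5
rgb = [torch.randn(2, 64, 16, 16) for _ in range(3)]  # levels 3, 4, 5
dep = [torch.randn(2, 64, 16, 16) for _ in range(3)]
out_a = [block_a(rgb[i], dep[i]) for i in (0, 1)]
out_b = [block_b(rgb[i], dep[i]) for i in (1, 2)]

In this sketch, weight sharing within each overlapping group is what makes the grouping "soft": no level belongs exclusively to one group, so the clustered outputs of adjacent groups remain mutually informed before being combined with the low-level features.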
