Abstract

Convolutional neural network (CNN)-based salient object detection (SOD) models have achieved promising performance on optical remote sensing images (ORSIs) in recent years. However, the local sliding-window operation of CNNs restricts their receptive field, so many existing CNN-based ORSI SOD models still struggle to learn long-range relationships. To this end, inspired by the powerful global dependency modeling of transformer networks, we propose a novel transformer framework for ORSI SOD. This is the first attempt to explore both global and local details with a transformer architecture for SOD in ORSIs. Concretely, we design an adaptive spatial tokenization transformer encoder to extract global-local features; it accurately sparsifies the tokens for each input image and achieves competitive performance on ORSI SOD tasks. We then propose a dense token aggregation decoder (DTAD) to generate saliency results, comprising three cascaded decoders that integrate the global-local tokens and contextual dependencies. Extensive experiments show that the proposed model clearly surpasses twenty state-of-the-art (SOTA) SOD approaches on two standard ORSI SOD datasets under seven evaluation metrics. We also report comparison results on the latest challenging ORSI datasets to demonstrate generalization capacity. In addition, we validate the contributions of the individual modules through a series of ablation analyses, especially the proposed adaptive spatial tokenization module (ASTM), which can halve the computational budget.
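The abstract does not specify how the ASTM sparsifies tokens; the following is a minimal, hypothetical PyTorch sketch of one common realization of adaptive spatial tokenization, in which a learned scorer ranks spatial tokens and only the top fraction is kept, shrinking the token count (and hence the quadratic attention cost) seen by the transformer encoder. All names here (AdaptiveSpatialTokenizer, keep_ratio) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn


class AdaptiveSpatialTokenizer(nn.Module):
    """Hypothetical token sparsifier: keep the top-scoring spatial tokens."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.scorer = nn.Linear(dim, 1)  # per-token importance score

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim), e.g. flattened ViT patch embeddings
        b, n, d = tokens.shape
        scores = self.scorer(tokens).squeeze(-1)      # (b, n)
        k = max(1, int(n * self.keep_ratio))          # tokens to retain
        idx = scores.topk(k, dim=1).indices           # (b, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, d)     # (b, k, d)
        return tokens.gather(1, idx)                  # sparsified token set


# Usage: with keep_ratio=0.5, the encoder attends over half the tokens,
# which is consistent with the abstract's claim of halving the budget.
tokens = torch.randn(2, 196, 256)  # e.g. 14x14 patches, 256-dim embeddings
sparse = AdaptiveSpatialTokenizer(256, keep_ratio=0.5)(tokens)
print(sparse.shape)  # torch.Size([2, 98, 256])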
