Abstract

Deep learning-based object detectors have been widely adopted for remote sensing imagery interpretation. These detectors depend heavily on expensive large-scale labeled datasets, and the scarcity of labeled remote sensing data limits their performance. Domain adaptive object detection can alleviate this problem; however, it struggles to align confusing features, which degrades domain generalization performance, especially in remote sensing scenes with sparse objects and diverse backgrounds. To address this, a semisynthetic data generator (SDG) is proposed to automatically generate a remote sensing dataset at low cost and replace the real-world training dataset, and a feature aligned domain adaptive object detector (FADA) is proposed to enhance domain adaptation across cross-domain remote sensing images. FADA adds two proposed modules to the base detector: an adversarial-based foreground alignment (AFA) and a prototype-based confusing feature alignment (PCFA). The AFA aligns cross-domain foreground features through adversarial training (AT) and filters out noisy background features that are unsuitable for transfer. The PCFA then adaptively aligns the confusing background and foreground features, further improving domain adaptation performance. Comprehensive experiments validate the effectiveness of the proposed method. Compared with the baseline model trained on the semisynthetic source dataset, FADA improves generalization on the real-world target dataset, the large-scale Dataset for Object deTection in Aerial images (DOTA), by 15.7% average precision (AP) and achieves state-of-the-art results.
