Abstract

Tiny object detection in remote sensing images has attracted growing attention in recent years but remains a challenging task. Because of the mismatch between the feature scales on which tiny objects rely and the interference from complex surroundings in remote sensing images, traditional object detection algorithms still perform poorly on tiny objects. Motivated by these observations, this paper proposes a cross-scale spatial perception guided network (CSPGNet) for tiny object detection in remote sensing images. Specifically, we first design a cross-scale hierarchical perception module (CSHPM) at the topmost level of the Faster R-CNN backbone to integrate contextual information from multiple levels and scales, thereby improving the feature-scale representation of tiny objects. Furthermore, to address the information loss that occurs when low-resolution feature layers are fused with the features produced by this module, we develop an adaptive spatial alignment unit (ASAU) that uses deformable convolution to adaptively align the spatial information of neighboring feature layers. Finally, we present an attention-guided information integration module (AGIIM), which employs large kernel attention to guide feature aggregation, enhancing the global and local information of tiny objects across feature layers and mitigating the influence of complex environments on the detection task. Extensive experiments on two publicly available tiny object datasets, AI-TOD and VisDrone2019, demonstrate that our approach achieves higher accuracy than the majority of state-of-the-art methods.
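To make the two key operations in the abstract concrete, the PyTorch sketch below illustrates (i) how a deformable-convolution alignment unit in the spirit of ASAU might align a low-resolution feature layer to its higher-resolution neighbor before fusion, and (ii) a standard decomposed large kernel attention block of the kind AGIIM builds on. This is a minimal sketch under our own assumptions, not the authors' implementation; all class and parameter names here are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.ops import DeformConv2d

    # Hypothetical ASAU-style alignment unit: the offsets of a deformable
    # convolution are predicted from the concatenated neighboring features,
    # so the sampling grid adapts to their spatial misalignment.
    class AdaptiveSpatialAlign(nn.Module):
        def __init__(self, channels, kernel_size=3):
            super().__init__()
            self.offset_conv = nn.Conv2d(2 * channels,
                                         2 * kernel_size * kernel_size,
                                         kernel_size, padding=kernel_size // 2)
            self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                            padding=kernel_size // 2)

        def forward(self, fine, coarse):
            # Upsample the low-resolution map to the high-resolution map's
            # size, then resample it with offsets conditioned on both inputs.
            coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                      mode='bilinear', align_corners=False)
            offset = self.offset_conv(torch.cat([fine, coarse_up], dim=1))
            return fine + self.deform_conv(coarse_up, offset)

    # Standard decomposed large kernel attention: a 5x5 depthwise conv, a 7x7
    # depthwise conv with dilation 3, and a 1x1 conv together approximate a
    # large-kernel receptive field; AGIIM presumably uses such a block to
    # reweight fused features.
    class LargeKernelAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.dw = nn.Conv2d(channels, channels, 5, padding=2,
                                groups=channels)
            self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                        dilation=3, groups=channels)
            self.pw = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            # The computed attention map multiplicatively modulates the input.
            return x * self.pw(self.dw_dilated(self.dw(x)))

    # Example: align a stride-32 level to a stride-16 level, then reweight.
    fine, coarse = torch.randn(1, 256, 64, 64), torch.randn(1, 256, 32, 32)
    fused = AdaptiveSpatialAlign(256)(fine, coarse)
    out = LargeKernelAttention(256)(fused)

Predicting the deformable-convolution offsets from both neighboring layers, rather than from the upsampled layer alone, is what lets the alignment adapt to the misalignment between them, which is the property the abstract attributes to ASAU.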
