Abstract

Remote sensing and deep learning are widely combined in tasks such as urban planning and disaster prevention. However, owing to interference caused by density, overlap, and coverage, tiny object detection in remote sensing images has long been a difficult problem. We therefore propose a novel TO–YOLOX (Tiny Object–You Only Look Once) model. TO–YOLOX features a MiSo (Multiple-in-Single-out) feature fusion structure with a spatial-shift design; the model balances positive and negative samples and enhances information interaction among local patches of remote sensing images. TO–YOLOX utilizes an adaptive IOU-T (Intersection over Union–Tiny) loss to improve the localization accuracy of tiny objects, and it applies a Group-CBAM (group convolutional block attention module) attention mechanism to strengthen the perception of tiny objects in remote sensing images. To verify the effectiveness and efficiency of TO–YOLOX, we evaluated it on three aerial-photography tiny-object detection datasets, namely VisDrone2021, Tiny Person, and DOTA–HBB, obtaining mean average precision (mAP) values of 45.31% (+10.03%), 28.9% (+9.36%), and 63.02% (+9.62%), respectively. TO–YOLOX recognizes tiny objects more accurately than Faster R-CNN, RetinaNet, YOLOv5, YOLOv6, YOLOv7, and YOLOX, and the proposed model remains computationally fast.
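The IOU-T loss mentioned above builds on the standard Intersection over Union between a predicted and a ground-truth box. The abstract does not specify how the tiny-object adaptation modifies this quantity, so the sketch below shows only the common baseline IoU computation on which such a loss would be based; the function name and box format (x1, y1, x2, y2 corners) are assumptions for illustration.

```python
def iou(box_a, box_b):
    """Standard IoU between two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates. This is the baseline
    quantity that IoU-style localization losses (such as the
    paper's IOU-T) are built from; the tiny-object adaptation
    itself is not detailed in the abstract."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the overlap.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For tiny objects, plain IoU is very sensitive to small localization shifts (a few pixels of error can drive the overlap to zero), which is the practical motivation for adaptive IoU-based losses in this setting.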
