Abstract

Reliable and accurate object detection in high-resolution remote sensing images still faces significant challenges, such as variations in color, aspect ratio, and scale, as well as complex backgrounds. Even detectors built on the latest convolutional neural network (CNN) methods do not yield satisfactory results. To obtain more accurate detections from large-scale remote sensing images, we propose a multiscale object detection algorithm with adaptive attentional feature fusion, built on the YOLOX algorithm. First, a multiscale attentional feature fusion (MSAFF) structure was added to the YOLOX network to enlarge the receptive field and aggregate contextual information, yielding feature maps with richer semantic information. Second, an adaptively spatial feature fusion (ASFF) structure was introduced to process the fused feature maps: spatial feature weights were assigned at different levels to enhance the feature representation of remote sensing objects and reduce feature loss. Finally, an aligned convolutional network was used for the object classification and localization tasks to localize densely arranged, arbitrarily oriented objects more accurately. The proposed algorithm was evaluated extensively on the PASCAL VOC and DIOR datasets, reaching average accuracies of 89.2% and 75.3%, respectively. Compared with current mainstream two-stage and one-stage object detection algorithms, the experimental results demonstrate that our method performs well in both accuracy and speed. It also reduces the miss rate for remote sensing objects to a certain extent.
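The ASFF step summarized above can be illustrated with a minimal sketch: same-resolution feature maps from different pyramid levels are combined with per-pixel weights that are softmax-normalized across levels. This is only an illustration of the fusion rule, not the authors' implementation; in ASFF the weight logits come from learned 1×1 convolutions, whereas here they are assumed to be given.

```python
import numpy as np

def asff_fuse(features, weight_logits):
    """Fuse feature maps with softmax-normalized per-pixel level weights.

    features: list of L arrays, each of shape (C, H, W), already resized
        to a common resolution.
    weight_logits: array of shape (L, H, W) with spatial weight logits
        (assumed given here; in ASFF they are produced by 1x1 convolutions).
    Returns the fused (C, H, W) map and the (L, H, W) weight tensor.
    """
    logits = np.asarray(weight_logits, dtype=np.float64)
    # Softmax over the level axis so the weights at each pixel sum to 1.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    alpha = e / e.sum(axis=0, keepdims=True)            # (L, H, W)
    fused = sum(a[None, :, :] * f for a, f in zip(alpha, features))
    return fused, alpha

# Toy check: two 1-channel 2x2 maps with equal logits get equal weights,
# so the fused map is their pixel-wise mean.
f1 = np.ones((1, 2, 2))
f2 = np.full((1, 2, 2), 3.0)
fused, alpha = asff_fuse([f1, f2], np.zeros((2, 2, 2)))
```

With equal logits the weights are 0.5 at every pixel, so the fused map equals the mean of the two inputs; in the trained network the learned logits instead emphasize whichever level represents each object best.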
