Abstract

In recent years, techniques for generating remotely sensed images have advanced considerably, and remote sensing object detection has received increasing attention. To detect objects at different scales in high-resolution remote sensing images, existing remote sensing object detection methods commonly use feature pyramid networks to extract features. However, different levels of the feature pyramid contain objects at different scales, so a prediction identified as a positive sample at one level may be treated as background at other levels, which reduces detection accuracy. To fuse multi-scale features more effectively, we propose a remote sensing object detection method based on multi-scale feature fusion (MSFF), which assigns targets to different scales for localization and classification through adaptive pooling. Our method improves the structure of the feature pyramid network and recalculates the fusion weights to reduce the semantic gap between levels. Experimental results show that the proposed method achieves 75.6% mAP on the DOTA dataset.
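
To illustrate the kind of fusion the abstract describes, the sketch below shows a minimal PyTorch module that resizes feature-pyramid levels to a common resolution (using adaptive pooling for downscaling) and combines them with learnable, softmax-normalized weights. The module name, layer choices, and resizing strategy are assumptions for illustration only, not the paper's actual MSFF implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedMultiScaleFusion(nn.Module):
    """Illustrative multi-scale fusion block (assumed design, not the paper's exact module).

    Pyramid levels are resized to a common spatial size and combined with
    softmax-normalized learnable weights, approximating the idea of
    re-weighting features to reduce the semantic gap between scales.
    """

    def __init__(self, num_levels: int, channels: int):
        super().__init__()
        # One scalar weight per pyramid level, normalized with softmax in forward().
        self.level_weights = nn.Parameter(torch.ones(num_levels))
        # 3x3 convolution to smooth the fused feature map.
        self.fuse_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, features, out_size):
        # features: list of FPN outputs, each of shape [B, C, H_i, W_i].
        # out_size: (H, W) of the target level the fused map should match.
        resized = []
        for f in features:
            if f.shape[-2:] == tuple(out_size):
                resized.append(f)
            elif f.shape[-2] > out_size[0]:
                # Downsample larger maps with adaptive average pooling.
                resized.append(F.adaptive_avg_pool2d(f, out_size))
            else:
                # Upsample smaller maps with bilinear interpolation.
                resized.append(F.interpolate(f, size=out_size,
                                             mode="bilinear", align_corners=False))
        weights = torch.softmax(self.level_weights, dim=0)
        fused = sum(w * r for w, r in zip(weights, resized))
        return self.fuse_conv(fused)


if __name__ == "__main__":
    # Toy FPN outputs at three scales with 256 channels each.
    feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
    fusion = WeightedMultiScaleFusion(num_levels=3, channels=256)
    out = fusion(feats, out_size=(32, 32))
    print(out.shape)  # torch.Size([1, 256, 32, 32])
```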
