Abstract

Vehicle detection in remote sensing images is of great significance to urban traffic intelligence. Although existing vehicle detection methods for remote sensing images, such as the fully convolutional regression network, the spatial density building net, and the pretraining and random-initialized fusion network, have made considerable progress on network structural optimization, they remain weak in feature anti-interference and contextual information utilization, and they neglect the loss of feature information during down-sampling. In this article, we propose the feature anti-interference and adaptive residual attention network (FICLAR-Net), a remote sensing image object detection algorithm based on feature anti-interference and adaptive residual attention. First, a feature interference module is constructed: fed with shallow features and random noise, it generates interference during detection, so that adversarial training improves the detector's robustness to such disturbances. Second, a novel adaptive residual attention module is introduced into the network to adaptively extract contextual features and enhance weak features. Finally, a cross-level fusion module is designed to enhance collaboration between multiscale feature layers and reduce the loss of small-target feature information. The effectiveness of the proposed method is verified by comparing it with other mainstream methods on the UCAS-AOD, CARPK, and OVDS datasets. The code is freely available at: https://github.com/hel2020/FICLAR-Net.
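The feature interference idea described above can be illustrated with a minimal sketch: a block that perturbs shallow feature maps with random noise during training so the detector learns to tolerate the disturbance. The module name, noise scale, and 1x1 mixing convolution below are illustrative assumptions, not the authors' actual FICLAR-Net implementation.

```python
import torch
import torch.nn as nn


class FeatureInterference(nn.Module):
    """Sketch of a feature-interference block: injects random noise into
    shallow feature maps at training time (hypothetical design, not the
    paper's exact module)."""

    def __init__(self, channels: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        # 1x1 convolution that mixes the perturbed features back into the
        # original channel space (an assumed design choice).
        self.mix = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, shallow_feat: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Additive Gaussian noise acts as the interference signal.
            noise = torch.randn_like(shallow_feat) * self.noise_std
            shallow_feat = shallow_feat + noise
        return self.mix(shallow_feat)


# Usage: perturb a batch of shallow feature maps during training.
feat = torch.randn(2, 64, 128, 128)           # N x C x H x W shallow features
block = FeatureInterference(channels=64).train()
out = block(feat)                             # same shape, noise-injected
print(out.shape)                              # torch.Size([2, 64, 128, 128])
```

In an adversarial-training setup of this kind, the noise injection is typically enabled only in training mode so that inference sees the clean features.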
