Abstract

The detection of airports from Synthetic Aperture Radar (SAR) images is of great significance in various research fields. However, it is challenging to distinguish airports from surrounding objects in SAR images. In this paper, a new framework, the multi-level and densely dual attention (MDDA) network, is proposed to extract airport runway areas (runways, taxiways, and parking lots) from SAR images to achieve automatic airport detection. The framework consists of three parts: down-sampling of the original SAR images, the MDDA network for feature extraction and classification, and up-sampling of the airport extraction results. First, down-sampling is employed to obtain a medium-resolution SAR image from the high-resolution SAR images, ensuring that the samples (500 × 500) contain adequate information about airports. The dataset is then input to the MDDA network, which contains an encoder and a decoder. The encoder uses ResNet_101 to extract four levels of features with different resolutions, and the decoder performs fusion and further feature extraction on these features. The decoder integrates the chained residual pooling network (CRP_Net) and the dual attention fusion and extraction (DAFE) module. The CRP_Net module mainly uses chained residual pooling and multi-feature fusion to extract advanced semantic features. In the DAFE module, the position attention module (PAM) and channel attention mechanism (CAM) are combined with weighted filtering. The entire decoding network is constructed in a densely connected manner to enhance gradient transmission among features and take full advantage of them. Finally, the airport results extracted by the decoding network are up-sampled by bilinear interpolation to accomplish airport extraction from high-resolution SAR images. To verify the proposed framework, experiments were performed using Gaofen-3 SAR images with 1 m resolution, and three different airports were selected for accuracy evaluation.
The results showed that the mean pixel accuracy (MPA) and mean intersection over union (MIoU) of the MDDA network were 0.98 and 0.97, respectively, much higher than those of RefineNet and DeepLabV3. Therefore, MDDA can achieve automatic airport extraction from high-resolution SAR images with satisfactory accuracy.
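The abstract reports MPA and MIoU but does not define them. For reference, both are standard segmentation metrics computed from a per-class confusion matrix; the sketch below (a minimal NumPy illustration, not the paper's evaluation code — the example label arrays are invented) shows how they are typically derived for a two-class runway/background task.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix.

    Rows index the ground-truth class, columns the predicted class.
    """
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mpa_miou(cm):
    """Mean pixel accuracy and mean intersection-over-union from a confusion matrix."""
    tp = np.diag(cm).astype(float)
    per_class_acc = tp / cm.sum(axis=1)                      # per-class recall
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)        # per-class IoU
    return per_class_acc.mean(), iou.mean()

# Illustrative two-class example: runway = 1, background = 0
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
cm = confusion_matrix(pred.ravel(), gt.ravel(), 2)
mpa, miou = mpa_miou(cm)  # MPA 0.9, MIoU 0.775 for this toy example
```

Averaging per-class accuracy and IoU (rather than pooling all pixels) prevents the large background class from dominating the score, which matters when runway pixels are a small fraction of the scene.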

Highlights

  • Synthetic Aperture Radar (SAR) can acquire images all day and all night without being affected by the weather and light conditions [1], which is a tremendous advantage that optical remote sensing images cannot offer

  • The runway area, which includes runways, taxiways, and parking lots, is marked red and other targets are regarded as background

  • For the multi-level and densely dual attention (MDDA) network, transmission between features is enhanced by introducing dense connections; redundant features are discarded and useful features are retained via the dual attention mechanism
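The page does not give the implementation details of the dual attention mechanism, so the following is only a minimal NumPy sketch of the generic idea behind position and channel attention: position attention re-weights each spatial location by its similarity to all other locations, while channel attention does the same across channels. The learned projection layers and the weighted filtering described in the abstract are omitted, and `gamma` stands in for the learnable residual weight.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat, gamma=1.0):
    """Each spatial position aggregates features from all positions,
    weighted by pairwise similarity (learned projections omitted)."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                    # (C, N) with N = H*W
    energy = x.T @ x                              # (N, N) pairwise similarity
    attn = softmax(energy, axis=-1)               # each row sums to 1
    out = x @ attn.T                              # re-weighted spatial features
    return (gamma * out + x).reshape(c, h, w)     # residual connection

def channel_attention(feat, gamma=1.0):
    """Channel-to-channel similarity re-weights the feature channels."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)
    attn = softmax(x @ x.T, axis=-1)              # (C, C) channel similarity
    out = attn @ x
    return (gamma * out + x).reshape(c, h, w)

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
fused = position_attention(feat) + channel_attention(feat)  # simple sum fusion
```

With `gamma = 0` both branches reduce to the identity, which is why such residual attention can be initialized to a no-op and gradually learn how much global context to inject.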


Summary

Introduction

Synthetic Aperture Radar (SAR) can acquire images all day and all night without being affected by weather and light conditions [1], a tremendous advantage that optical remote sensing images cannot offer. It plays an increasingly important role in military and civilian applications. This work helps reduce the false alarms generated by aircraft detection by excluding specious targets from SAR images. The experimental results showed that RefineNet and DeepLabV3 cannot adequately learn airport features or distinguish runway areas from similar areas, resulting in poor detection integrity and false alarms. In contrast, the proposed network's ability to learn features is improved, yielding runway extraction results that are free of false alarms and highly complete. Compared with the other two networks, MDDA could almost completely extract the entire runway edge line.

