Abstract
The water and shadow areas in SAR images contain rich information for various applications, yet at present they cannot be extracted automatically and precisely. To address this problem, a new framework called the Multi-Resolution Dense Encoder and Decoder (MRDED) network is proposed, which integrates the Convolutional Neural Network (CNN), Residual Network (ResNet), Dense Convolutional Network (DenseNet), Global Convolutional Network (GCN), and Convolutional Long Short-Term Memory (ConvLSTM). MRDED consists of three parts: the Gray Level Gradient Co-occurrence Matrix (GLGCM), the Encoder network, and the Decoder network. GLGCM extracts low-level features, which are further processed by the Encoder. The Encoder network employs ResNet to extract features at different resolutions. The Decoder network has two components: Multi-level Features Extraction and Fusion (MFEF) and Score maps Fusion (SF). Two versions of MFEF are implemented, named MFEF1 and MFEF2, each generating its own score map. They differ in one module: MFEF2 uses the Chained Residual Pooling (CRP) module, whereas MFEF1 replaces it with the Improved Chained Residual Pooling (ICRP) module, built by adopting ConvLSTM. The two score maps are fused with different weights to produce a fused score map, which is then passed through a Softmax function to generate the final extraction results for water and shadow areas. To evaluate the proposed framework, MRDED is trained and tested on large SAR images, and its classification performance is compared with that of eight other classification frameworks. MRDED outperforms them all, reaching 80.12% Pixel Accuracy (PA) and 73.88% Intersection over Union (IoU) for water, 88% PA and 77.11% IoU for shadow, and 95.16% PA and 90.49% IoU for background.
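The final fusion step of the decoder can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the example weights, and the map shapes are hypothetical, and the paper does not report its fusion weights.

```python
import numpy as np

def fuse_score_maps(score1, score2, w1=0.5, w2=0.5):
    """Fuse two per-class score maps with fixed weights, then apply a
    per-pixel softmax over the class axis to obtain class probabilities.

    score1, score2: arrays of shape (num_classes, H, W), e.g. the score
    maps produced by the MFEF1 and MFEF2 branches.
    Returns probabilities of the same shape, summing to 1 over classes.
    """
    fused = w1 * score1 + w2 * score2
    # Numerically stable softmax: subtract the per-pixel max first
    fused = fused - fused.max(axis=0, keepdims=True)
    exp = np.exp(fused)
    return exp / exp.sum(axis=0, keepdims=True)

# Example: 3 classes (water, shadow, background) on a 4x4 map,
# with hypothetical weights favoring the first branch
s1 = np.random.randn(3, 4, 4)
s2 = np.random.randn(3, 4, 4)
probs = fuse_score_maps(s1, s2, w1=0.6, w2=0.4)
labels = probs.argmax(axis=0)  # per-pixel predicted class index
```

Taking the argmax of the softmax output yields the per-pixel class label, which is how a fused score map becomes an extraction mask.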
Highlights
Synthetic Aperture Radar (SAR) is an active imaging radar capable of all-day, all-weather observation
Two sets of comparative experiments were conducted to validate the performance of the proposed framework
Encoder_MFEF1 and Encoder_MFEF2 denote frameworks that combine only the Encoder network with the MFEF1 or MFEF2 network, respectively (Figure 1); they are used to compare the classification abilities of the two branches on SAR images
Summary
Synthetic Aperture Radar (SAR) is an active imaging radar capable of all-day, all-weather observation. Ranjani and Thiruvengadam [14] proposed a classification method based on the multi-level ratio of exponentially weighted means to compute the optimal classification threshold. This thresholding approach still required manual tuning, which was very difficult when the pixel values of different classes were close. Classification has been an important component of water and shadow extraction from SAR images. In the original FCN-8s framework, the VGG-19 network is used to extract features, while ResNet-101_FCN, Large_Kernel_Matters, and the frameworks proposed in this paper all use the ResNet-101 network. Replacing VGG-19 with ResNet-101 improves classification accuracy significantly, especially for water extraction, with the improvement from