Abstract
It is difficult for a convolutional neural network (CNN) to capture the detailed features of synthetic aperture radar (SAR) images simply by increasing the network depth. To capture sufficient information for reconstructing image details, the authors propose a multidirectional and multiscale convolutional neural network (MMCNN) in which each wavelet subband is fed into an independent subnetwork for training. Each subnetwork has a few convolution layers and its own loss function. When the loss functions reach their optimal values, all subbands are integrated through the inverse wavelet transform to produce the despeckled SAR image. Consisting of multiple subnetworks, the proposed MMCNN extracts detailed features and suppresses speckle noise across different directions and scales; its performance is thus improved by broadening the network width rather than increasing its depth. Experiments on synthetic and real SAR images show that the proposed method outperforms state-of-the-art methods in both quantitative assessment and subjective visual quality, especially under strong speckle noise.
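The abstract does not specify which wavelet family or normalization the MMCNN uses, so the following is only a minimal pure-Python sketch of a single-level 2D Haar transform. It illustrates the subband decomposition (one approximation subband LL and three directional detail subbands LH, HL, HH) that each independent subnetwork would receive as input, and the inverse transform that merges the despeckled subbands back into an image. The function names and the Haar choice are illustrative assumptions, not the paper's implementation.

```python
def haar_dwt2(x):
    """Single-level 2D Haar DWT (illustrative; the paper's wavelet is unspecified).
    x: 2D list with even dimensions. Returns (LL, LH, HL, HH) subbands,
    each half the size of x in both dimensions."""
    cols = len(x[0])
    # Row pass: split each row into per-pair averages (low) and differences (high).
    lo = [[(r[2*i] + r[2*i + 1]) / 2 for i in range(cols // 2)] for r in x]
    hi = [[(r[2*i] - r[2*i + 1]) / 2 for i in range(cols // 2)] for r in x]

    def col_split(m):
        # Column pass: averages give the approximation rows, differences the details.
        a = [[(m[2*i][j] + m[2*i + 1][j]) / 2 for j in range(len(m[0]))]
             for i in range(len(m) // 2)]
        d = [[(m[2*i][j] - m[2*i + 1][j]) / 2 for j in range(len(m[0]))]
             for i in range(len(m) // 2)]
        return a, d

    LL, HL = col_split(lo)   # approximation + vertical detail
    LH, HH = col_split(hi)   # horizontal + diagonal detail
    return LL, LH, HL, HH


def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: with a=(p+q)/2 and d=(p-q)/2, we recover
    p=a+d and q=a-d, first along columns, then along rows."""
    def col_merge(a, d):
        m = []
        for i in range(len(a)):
            m.append([a[i][j] + d[i][j] for j in range(len(a[0]))])
            m.append([a[i][j] - d[i][j] for j in range(len(a[0]))])
        return m

    lo = col_merge(LL, HL)
    hi = col_merge(LH, HH)
    out = []
    for lo_row, hi_row in zip(lo, hi):
        row = []
        for a, d in zip(lo_row, hi_row):
            row.extend([a + d, a - d])
        out.append(row)
    return out
```

In the MMCNN pipeline sketched by the abstract, each of the four subbands would be despeckled by its own small subnetwork before `haar_idwt2` recombines them; the transform above is exactly invertible, so the recombination step itself loses no information.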