Abstract

The visualization of synthetic aperture radar (SAR) images requires mapping high dynamic range (HDR) amplitude values to the gray levels of lower dynamic range (LDR) display devices. This dynamic range compression determines how much detail is visible in the displayed result and therefore plays a critical role in remote sensing applications. Existing methods suffer from poor adaptability, loss of detail, and an imbalance between contrast enhancement and noise suppression. To produce images suitable for human observation and subsequent interpretation, we introduce a self-adaptive SAR image dynamic range compression method based on deep learning. Its objective is to preserve the maximum amount of information in the displayed image while resolving the conflict between contrast and noise. To this end, we propose a decomposition-fusion framework. The input SAR image is rescaled to a fixed size and fed into a bilateral feature enhancement module that remaps high- and low-frequency features to suppress noise and enhance contrast. A feature fusion module then integrates and optimizes the bilateral features to achieve a more precise reconstruction. Visual and quantitative experiments on synthesized and real-world SAR images show that the proposed method outperforms several statistical methods, adapts well to diverse inputs, and improves the contrast of SAR images for interpretation.
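To make the decomposition-fusion idea concrete, the following is a minimal classical sketch of HDR-to-LDR compression via base/detail decomposition. All function names, gain parameters, and the box-blur decomposition here are illustrative assumptions: the paper's actual method replaces these fixed gains with a learned bilateral feature enhancement and feature fusion network.

```python
import numpy as np

def compress_sar_dynamic_range(amplitude, base_gain=0.6, detail_gain=1.5, radius=4):
    """Hypothetical base/detail sketch of SAR dynamic range compression.

    NOT the paper's method: the learned bilateral enhancement and fusion
    modules are stood in for by a fixed blur and fixed per-layer gains.
    """
    # Log transform tames the heavy-tailed SAR amplitude distribution.
    log_img = np.log1p(amplitude.astype(np.float64))

    # Low-frequency "base" layer via a simple box blur (a stand-in for a
    # bilateral or learned decomposition); "detail" is the residual.
    k = 2 * radius + 1
    pad = np.pad(log_img, radius, mode="reflect")
    base = np.zeros_like(log_img)
    for dy in range(k):
        for dx in range(k):
            base += pad[dy:dy + log_img.shape[0], dx:dx + log_img.shape[1]]
    base /= k * k
    detail = log_img - base

    # Compress the base (shrinks dynamic range) and boost the detail
    # (raises local contrast), then fuse and rescale to 8-bit display range.
    fused = base_gain * base + detail_gain * detail
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()
    return (255 * fused).astype(np.uint8)
```

In this sketch, shrinking the base layer while amplifying the detail layer is what decouples dynamic range compression from contrast: the two gains can be tuned independently, which is the tension the paper's learned modules are designed to resolve adaptively.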
