Abstract
Long-distance imaging is often blurred and geometrically distorted by atmospheric turbulence, which degrades the performance of optoelectronic systems. Real-time restoration from a single turbulence-degraded image remains a challenging and widely studied problem. The approach presented here optimizes a convolutional neural network using residual learning and smoothed dilated convolutions, which enlarges the receptive field under limited GPU memory. To evaluate the model, the authors employ training and test data at strong, medium, and weak turbulence levels synthesized with the Fried kernel, real-time data captured by a Ritchey–Chrétien telescope, the Open Turbulent Images Set, and real comparative data. Furthermore, the proposed model is compared with previous state-of-the-art approaches. The experimental results demonstrate that the proposed model restores turbulence-degraded images more effectively.
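The receptive-field trick the abstract relies on can be illustrated with a minimal sketch. The functions below are hypothetical, simplified 1-D illustrations (the paper operates on 2-D images inside a CNN): a dilated convolution covers a span of (k − 1)·d + 1 inputs with only k weights, and the "smoothed" variant applies a shared box filter to the input first, which is one common way to suppress the gridding artifacts of plain dilation.

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1-D dilated convolution (correlation form).

    A kernel of size k with dilation d covers a span of
    (k - 1) * d + 1 inputs, so the receptive field grows with d
    while the parameter count stays at k.
    """
    k = len(w)
    n_out = len(x) - (k - 1) * dilation
    return [
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(n_out)
    ]


def box_smooth(x):
    """Shared 3-tap box filter (edge-replicated padding) applied
    to the input before the dilated convolution."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(x))]


def smoothed_dilated_conv1d(x, w, dilation=1):
    """Sketch of a 'smoothed' dilated convolution: smooth first,
    then apply the dilated kernel, so neighbouring outputs no
    longer depend on fully disjoint input subsets."""
    return dilated_conv1d(box_smooth(x), w, dilation)
```

For example, with a 3-tap kernel and dilation 2, each output of `dilated_conv1d` depends on inputs five positions apart, while the smoothed variant additionally mixes in the immediate neighbours of each sampled position.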