Abstract

Ground-based remote observation systems are vulnerable to atmospheric turbulence, which degrades image quality. While some methods can mitigate this turbulence distortion, many suffer from long processing times and unstable restoration quality. Furthermore, the physics of turbulence is often not fully integrated into the image reconstruction algorithms, which weakens their theoretical foundations. In this paper, we propose a method for atmospheric turbulence mitigation using optical flow and a convolutional neural network (CNN). We first employ robust principal component analysis (RPCA) to extract a reference frame from the image sequence. With the help of optical flow and the reference frame, the geometric tilt can be effectively corrected. Once the tilt is corrected, the turbulence mitigation problem reduces to a deblurring problem, and we use a trained CNN to remove the residual blur. By (i) training on a dataset that conforms to the physical turbulence model, which ensures the restoration quality of the CNN, and (ii) exploiting the CNN's efficient parallel computation to reduce processing time, we achieve better results than existing methods. Experimental results on real observed turbulence-degraded images demonstrate the effectiveness of our method. In the future, with further improvements to the algorithm and advances in GPU technology, we expect even better performance.
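To make the pipeline concrete, the sketch below illustrates the tilt-correction stage under stated assumptions: the reference frame is approximated here by a simple per-pixel temporal median (a stand-in for the RPCA low-rank extraction described above), dense optical flow is estimated with OpenCV's Farneback method, and each frame is warped toward the reference with cv2.remap. The deblurring CNN and its weights are not specified in the abstract, so that stage is only indicated in comments; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np
import cv2


def reference_frame(frames):
    """Approximate a turbulence-free reference frame.

    The paper uses RPCA to separate the low-rank (static scene) component;
    a per-pixel temporal median is used here as a simple stand-in.
    """
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)


def correct_tilt(frames, ref):
    """Warp each frame toward the reference using dense optical flow,
    removing the geometric (tilt) component of the turbulence."""
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    corrected = []
    for frame in frames:
        # Dense flow from the reference to the distorted frame.
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        # Pull pixels back along the flow field to undo local tilts.
        corrected.append(cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR))
    return corrected


# Usage: frames is a list of grayscale (uint8) short-exposure images.
# ref = reference_frame(frames)
# stabilized = correct_tilt(frames, ref)
# The stabilized stack would then be passed to the trained deblurring CNN,
# which is trained on data generated from the physical turbulence model.
```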
