Abstract

Long-range imaging with visible or infrared observation systems is typically hampered by atmospheric turbulence. Software-based turbulence mitigation methods aim to stabilize and sharpen such recorded image sequences based on the image data only. Although successful restoration has been achieved on static scenes in the past, a significant challenge remains in accounting for moving objects such that they remain visible as moving objects in the output. Here, we investigate a new approach for turbulence mitigation on background as well as large moving objects under moderate turbulence conditions. In our method, we apply and compare different optical flow algorithms to locally estimate both the apparent and true object motion in image sequences and subsequently apply dynamic super-resolution, image sharpening, and newly developed local stabilization methods to the aligned images. We assess the use of these stabilization methods as well as a new method for occlusion compensation for these conditions. The proposed methods are qualitatively evaluated on several visible light recordings of real-world scenes. We demonstrate that our methods achieve a similar image quality on background elements as our prior methods for static scenes, but at the same time obtain a substantial improvement in image quality and reduction in image artifacts on moving objects. In addition, we show that our stabilization and occlusion compensation methods can be robustly used for turbulence mitigation in imagery featuring complex backgrounds and occlusion effects, without compromising the performance in less challenging conditions.
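To make the processing pipeline described above more concrete, the sketch below shows one possible way to implement the local motion estimation and frame alignment steps, using OpenCV's Farnebäck dense optical flow as a stand-in for the flow algorithms compared in the paper. This is a minimal illustration only: the temporal averaging at the end is a simplified surrogate for the dynamic super-resolution, sharpening, and local stabilization steps, and all function names and parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch (not the authors' implementation): dense optical flow estimates
# per-pixel motion, frames are warped onto a reference grid, and the aligned
# frames are averaged. OpenCV's Farneback method stands in for the optical flow
# algorithms compared in the paper; all parameter values are illustrative.
import cv2
import numpy as np


def estimate_flow(ref_gray, frame_gray):
    """Dense displacement field mapping reference-grid pixels into frame_gray."""
    return cv2.calcOpticalFlowFarneback(
        ref_gray, frame_gray, None,
        0.5,   # pyramid scale
        3,     # pyramid levels
        15,    # averaging window size
        3,     # iterations per pyramid level
        5,     # pixel neighborhood for polynomial expansion
        1.2,   # Gaussian sigma for polynomial expansion
        0)     # flags


def warp_to_reference(frame, flow):
    """Resample a frame onto the reference grid using the estimated flow."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)


def align_and_average(frames):
    """Align a short burst of BGR frames to the first frame and average them;
    averaging suppresses the (roughly zero-mean) turbulence-induced shifts.
    This is only a simplified surrogate for the dynamic super-resolution,
    sharpening, and local stabilization steps described in the abstract."""
    ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    aligned = [frames[0].astype(np.float32)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = estimate_flow(ref_gray, gray)
        aligned.append(warp_to_reference(frame, flow).astype(np.float32))
    return np.mean(aligned, axis=0).astype(np.uint8)
```

Because object motion is not zero-mean, a straight average like this blurs moving objects; the paper's contribution is precisely to separate the true object motion from the turbulence-induced apparent motion so that moving objects remain sharp.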

Highlights

  • Atmospheric turbulence often hampers long-range imaging with visible or infrared observation systems

  • While successful restoration has been achieved on static scenes in the past [3–7], a significant challenge remains in restoring imagery depicting moving objects

  • We investigate whether our proposed dynamic turbulence mitigation can improve the image quality on moving objects compared to the previous global turbulence mitigation approach, while retaining a similar image quality on the background



Introduction

Atmospheric turbulence often hampers long-range imaging with visible or infrared observation systems. This turbulence causes random spatiotemporal variations in the density and the local refractive index of the air between the observer and the imaged scene [1, 2]. Software-based turbulence mitigation methods aim to restore such recorded image sequences based on the image data only [3–14]. The goal of these algorithms is to produce sharp and stable imagery of the observed scene, enabling visual detection, recognition, and identification at larger distances. Under the medium turbulence conditions considered here, the turbulence-induced apparent shifts have magnitudes and length scales similar to those of the observed frame-to-frame object motion, which makes the two difficult to separate. Medium turbulence conditions are roughly defined here as having 0.2 < r0/D < 1, with D the diameter of the observation system's aperture and r0 the Fried parameter for the image formation.
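As a worked example of this criterion, the snippet below evaluates r0/D for an assumed horizontal path with constant Cn^2, using the standard plane-wave approximation r0 = (0.423 k^2 Cn^2 L)^(-3/5) with k = 2*pi/wavelength. Neither this approximation nor the example numbers appear in the excerpt above, so they should be read as illustrative assumptions only.

```python
# Illustrative check of the medium-turbulence criterion 0.2 < r0/D < 1.
# Uses the standard plane-wave Fried parameter for a constant Cn^2 along the
# path, r0 = (0.423 * k^2 * Cn2 * L)^(-3/5) with k = 2*pi/wavelength. Neither
# this approximation nor the example numbers come from the paper excerpt.
import math


def fried_parameter(wavelength_m, cn2, path_length_m):
    """Plane-wave Fried parameter [m] for a uniform Cn^2 over the path."""
    k = 2.0 * math.pi / wavelength_m  # optical wavenumber [rad/m]
    return (0.423 * k ** 2 * cn2 * path_length_m) ** (-3.0 / 5.0)


wavelength = 550e-9   # visible light [m]            (assumed)
cn2 = 5e-16           # structure constant [m^-2/3]  (assumed)
path_length = 2000.0  # horizontal path [m]          (assumed)
aperture_d = 0.10     # aperture diameter D [m]      (assumed)

r0 = fried_parameter(wavelength, cn2, path_length)
ratio = r0 / aperture_d
regime = "medium turbulence" if 0.2 < ratio < 1.0 else "outside the medium regime"
print(f"r0 = {100 * r0:.1f} cm, r0/D = {ratio:.2f} -> {regime}")
# Expected output (approximately): r0 = 9.0 cm, r0/D = 0.90 -> medium turbulence
```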

