Abstract

The potential benefits of real-time, or near-real-time, image processing hardware that corrects turbulence-induced image defects in long-range surveillance and weapons targeting are sufficient to motivate a significant commitment of resources to its development. Quantitative comparisons between candidate algorithms are necessary to select a preferred processing approach. We begin by comparing the mean-square-error (MSE) performance of speckle imaging (SI) methods and multiframe blind deconvolution (MFBD), applied to long-path horizontal imaging of a static scene under anisoplanatic seeing conditions. Both methods are used to reconstruct a scene from three sets of 1000 simulated images featuring low, moderate, and severe turbulence-induced aberrations. The comparison shows that SI techniques can reduce the MSE by up to 47% using 15 input frames under daytime conditions, while the MFBD method provides up to a 40% improvement in MSE under the same conditions. The performance comparison is repeated under three diminished-light conditions (30, 15, and 8 photons per pixel on average), where improvements of up to 39% are achieved using SI methods with 25 input frames and up to 38% using the MFBD method with 150 input frames. The MFBD estimator is applied to three sets of field data, and representative results are presented. Finally, the performance of a hybrid bispectrum-MFBD estimator that uses a rapid bispectrum estimate as the starting point for the MFBD image reconstruction algorithm is examined.
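The MSE figure of merit used throughout the comparison can be stated compactly. The sketch below is a minimal illustration of how a normalized MSE against a diffraction-limited reference, and the corresponding percent improvement over the uncorrected frame average, might be computed; the function names, the normalization choices, and the use of the mean raw frame as the uncorrected baseline are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalized_mse(estimate, reference):
    # Remove overall gain differences, then compare to the
    # diffraction-limited reference on a per-pixel basis.
    e = estimate / estimate.sum()
    r = reference / reference.sum()
    return np.mean((e - r) ** 2) / np.mean(r ** 2)

def percent_improvement(raw_frames, reconstruction, reference):
    # Percent reduction in MSE relative to the average of the
    # uncorrected input frames (assumed baseline).
    mse_raw = normalized_mse(raw_frames.mean(axis=0), reference)
    mse_rec = normalized_mse(reconstruction, reference)
    return 100.0 * (mse_raw - mse_rec) / mse_raw
```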

Highlights

  • There is interest in the development of human-portable surveillance systems capable of observing a wide field of view over long horizontal or near-horizontal paths

  • For the severe turbulence case, C_n^2 = 5.25 × 10^−14 m^(−2/3), the improvement in MSE available by including additional input frames reaches a maximum of ∼36% of full scale at N_f = 14; neither the MSE nor the standard deviation improves significantly as additional input images are added to the processing stack (a sketch of this frame-count bookkeeping follows this list)

  • The performance of a speckle imaging (SI) estimator and an unconstrained optimization-based multiframe blind deconvolution (MFBD) estimator was compared in terms of the MSE between the reconstructed object and a diffraction-limited image
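The second highlight concerns the number of input frames beyond which additional data stops improving the reconstruction. A minimal sketch of that bookkeeping is given below, assuming a `reconstruct` callable that stands in for either the SI or the MFBD estimator and a 0.5%-per-frame threshold for declaring a plateau; neither is taken from the paper.

```python
import numpy as np

def nmse(estimate, reference):
    # Normalized MSE against a diffraction-limited reference image.
    e = estimate / estimate.sum()
    r = reference / reference.sum()
    return np.mean((e - r) ** 2) / np.mean(r ** 2)

def improvement_vs_frames(frames, reference, reconstruct, max_frames=25):
    # Percent MSE improvement as frames are added to the processing stack,
    # plus the frame count after which the marginal gain drops below 0.5%.
    mse_raw = nmse(frames.mean(axis=0), reference)
    gains = []
    for n in range(1, max_frames + 1):
        estimate = reconstruct(frames[:n])   # SI or MFBD estimator (assumed interface)
        gains.append(100.0 * (mse_raw - nmse(estimate, reference)) / mse_raw)
    marginal = np.diff(gains)
    below = np.nonzero(marginal < 0.5)[0]
    plateau = int(below[0]) + 2 if below.size else max_frames
    return np.array(gains), plateau
```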



Introduction

There is interest in the development of human-portable surveillance systems capable of observing a wide field of view over long horizontal or near-horizontal paths. Currently fielded surveillance systems compress images prior to transmission.[1] An image processing system that could reconstruct images corrupted by turbulence prior to compression and transmission would allow such systems to make more efficient use of available bandwidth. Under all but the most benign conditions, temperature inhomogeneities in the atmosphere result in turbulence and variations in the index of refraction along the imaging path. Even small variations in the index of refraction cause changes in the optical path length that result in phase aberrations at the aperture.[2] As the optical path length increases, or the turbulence strength increases, the aberrations become stronger, and the isoplanatic angle decreases. The atmospheric coherence radius, r0, is commonly used to define the effective aperture radius of an imaging system.
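For a horizontal path, the coherence radius can be related directly to the turbulence strength and the path length. The sketch below evaluates the standard plane-wave Fried parameter, r0 = (0.423 k^2 Cn^2 L)^(−3/5), for a constant Cn^2 along the path; the wavelength and path length in the example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fried_parameter(cn2, path_length, wavelength):
    # Plane-wave Fried parameter r0 (metres) for a constant Cn^2 profile:
    # r0 = (0.423 * k^2 * Cn^2 * L)^(-3/5), with k = 2*pi / wavelength.
    k = 2.0 * np.pi / wavelength
    return (0.423 * k**2 * cn2 * path_length) ** (-3.0 / 5.0)

# Severe-turbulence strength quoted in the highlights, with an assumed
# 0.5 um wavelength and 1 km horizontal path (illustrative values only).
r0 = fried_parameter(cn2=5.25e-14, path_length=1.0e3, wavelength=0.5e-6)
print(f"r0 = {r0 * 1e3:.1f} mm")   # a few millimetres for this case
```

As the path length or Cn^2 grows, r0 shrinks, which is the sense in which it sets the effective aperture size of the imaging system.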
