Abstract

Modeling of atmospheric turbulence, which plays a critical role in the training of neural network wavefront sensors, is discussed in the framework of an adaptive optics program for the Multiple Mirror Telescope. The accuracy of the wavefront correction achievable with a neural network depends directly on how closely the training images resemble those seen at the telescope. The image simulations used to train the neural network wavefront sensors are based on a random mid-point displacement (RMD) algorithm and on sine-wave summation algorithms. The RMD algorithm is an extremely fast method of wavefront generation, suited to very large arrays and to image sequences without time evolution. Multiple-turbulent-layer atmospheric models based on the sine-wave summation algorithm create image sequences whose temporal structure functions closely match measured structure-function data.
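The two generation strategies named above can be illustrated with a minimal sketch. The function names, parameter values, and Kolmogorov-like scaling exponents below are assumptions for illustration only; the abstract does not specify the paper's actual parameters. The first routine builds a static 1-D phase screen by random mid-point displacement; the second sums random sine waves and evolves them in time under a frozen-flow (wind-translation) assumption, which is one common way to obtain a temporal structure function.

```python
import numpy as np

def rmd_phase_screen(levels, hurst=5 / 6, sigma=1.0, rng=None):
    """1-D fractal phase screen via random mid-point displacement.

    hurst=5/6 is an illustrative Kolmogorov-like roughness exponent,
    not a value taken from the paper.
    """
    rng = np.random.default_rng(rng)
    n = 2 ** levels + 1
    screen = np.zeros(n)
    screen[0], screen[-1] = rng.normal(0.0, sigma, 2)
    step, scale = n - 1, sigma
    while step > 1:
        half = step // 2
        scale *= 2.0 ** (-hurst)  # displacement shrinks at each finer level
        mids = np.arange(half, n - 1, step)
        # midpoint = mean of neighbours + scaled Gaussian displacement
        screen[mids] = 0.5 * (screen[mids - half] + screen[mids + half]) \
            + rng.normal(0.0, scale, mids.size)
        step = half
    return screen

def sine_sum_phase(x, t, n_waves=64, v_wind=10.0, rng=None):
    """Phase along positions x at time t from a sum of random sine waves.

    Temporal evolution comes from frozen flow: the whole pattern
    translates with wind speed v_wind (illustrative value).
    """
    rng = np.random.default_rng(rng)
    freqs = rng.uniform(0.05, 5.0, n_waves)
    amps = freqs ** (-11 / 6)  # assumed Kolmogorov-like amplitude spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    arg = 2.0 * np.pi * freqs[:, None] * (x[None, :] - v_wind * t) \
        + phases[:, None]
    return (amps[:, None] * np.cos(arg)).sum(axis=0)

screen = rmd_phase_screen(levels=8, rng=0)
print(screen.shape)  # (257,)
```

Because the sine-wave pattern simply translates, the screen at time `t` equals the screen at time zero sampled at positions shifted by `v_wind * t`, which is what gives the sequence a well-defined temporal structure function.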
