Abstract

Fourier-based wavefront sensors, such as the Pyramid Wavefront Sensor (PWFS), are currently preferred for high-contrast imaging due to their high sensitivity. However, these wavefront sensors have intrinsic nonlinearities that constrain the range over which conventional linear reconstruction methods can accurately estimate the incoming wavefront aberrations. We propose to use Convolutional Neural Networks (CNNs) for the nonlinear reconstruction of the wavefront sensor measurements. We demonstrate that a CNN can accurately reconstruct the nonlinearities in both simulations and a lab implementation. We show that using a CNN alone for the reconstruction leads to suboptimal closed-loop performance under simulated atmospheric turbulence. However, using a CNN to estimate the nonlinear error term on top of a linear model results in an improved effective dynamic range of a simulated adaptive optics system. The larger effective dynamic range results in a higher Strehl ratio under conditions where the nonlinear error is relevant. This will allow the current and future generation of large astronomical telescopes to work in a wider range of atmospheric conditions and therefore reduce costly downtime of such facilities.

Highlights

  • The past few decades have seen an increase in the use of adaptive optics (AO) to correct for wavefront errors in imaging systems

  • We have demonstrated the use of Convolutional Neural Networks for nonlinear wavefront reconstruction

  • We have compared the closed-loop performance of the reconstructors for simulated atmospheric turbulence. This showed that using a Convolutional Neural Network (CNN) alone results in suboptimal performance

Introduction

The past few decades have seen an increase in the use of adaptive optics (AO) to correct for wavefront errors in imaging systems. Different modulation schemes, and even focal plane masks other than the pyramidal prism, can be described in the general formalism of Fourier-based wavefront sensing [13], where a two-lens system with a focal plane mask in its intermediate focus is used to measure wavefront errors. This formalism shows that there is a linear trade-off between the sensitivity of the wavefront sensor and its linear range. The g-ODWFS is similar to Fourier-based sensors but uses two filters in the focal plane instead of a single mask. In this work we propose to use a Convolutional Neural Network (CNN) as a reconstructor to extend the effective dynamic range of Fourier-based wavefront sensors well into the nonlinear regime, where conventional linear reconstructors degrade substantially in quality.
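The reconstruction scheme described above can be sketched in a few lines. This is a minimal illustrative example, not the paper's code: the interaction matrix, its toy dimensions, and the `cnn_correction` stand-in are all assumptions. It shows the standard linear step (a least-squares pseudo-inverse of a calibrated interaction matrix) and how a learned nonlinear error term would be added on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: sensor pixels and DM modes (illustrative assumption)
n_pix, n_modes = 64, 8

# Calibration: poke each DM mode and record the sensor response,
# giving the interaction matrix D (one column per mode)
D = rng.standard_normal((n_pix, n_modes))

# Linear reconstructor: least-squares pseudo-inverse of D
R = np.linalg.pinv(D)

def linear_reconstruct(s):
    """Estimate modal coefficients from sensor measurements s."""
    return R @ s

def hybrid_reconstruct(s, cnn_correction):
    """Linear estimate plus a learned nonlinear error term.

    cnn_correction is a placeholder for a trained network mapping
    sensor measurements to the residual of the linear model.
    """
    return linear_reconstruct(s) + cnn_correction(s)

# In the linear (small-aberration) regime the pseudo-inverse
# recovers the input modes, so the correction term is near zero.
a_true = rng.standard_normal(n_modes)
s = D @ a_true
a_hat = linear_reconstruct(s)
print(np.allclose(a_hat, a_true))
```

In this toy setting the sensor is exactly linear, so the pseudo-inverse alone suffices; the point of the paper's hybrid scheme is that for a real Fourier-based sensor the measurement becomes nonlinear at large aberrations, and the CNN supplies the residual that the linear model misses.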

Reconstruction methods
Simulation setup
Calibration
Simulation results: reconstruction of DM modes
Simulation results: closed loop performance
Experimental setup
Lab results: reconstruction of DM modes
Conclusions & discussion
