Abstract

The theory of the differential image motion monitor (DIMM), a standard and widespread method of measuring astronomical seeing, is reviewed and extended. More accurate coefficients for computing the Fried parameter from the measured variance of image motion are given. They are tested by numerical simulations, which show that a DIMM measures Zernike tilts, not image centroids as generally assumed. The contribution of CCD readout noise to the image-motion variance is modeled; if left unsubtracted, it can substantially bias DIMM results. The second most important DIMM bias comes from the finite exposure time, which is typically not short enough to freeze image motion completely. This effect is studied quantitatively for real turbulence and wind profiles, and its correction by interlaced short and long exposures is validated. Finally, the finite turbulence outer scale reduces image size in large telescopes by 10% or more compared with the standard theory; new formulae for the FWHM and half-energy diameter of the atmospheric point-spread function that take outer scale into account are provided.
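As a rough illustration of the relation the abstract refers to, the classical DIMM analysis (Sarazin & Roddier 1990) connects the longitudinal variance of differential image motion to the Fried parameter r0. The sketch below uses those classical coefficients, not the more accurate values derived in the paper itself; the function names and the default wavelength are illustrative assumptions.

```python
import numpy as np

def fried_parameter(var_long, D, B, wavelength=500e-9):
    """Estimate the Fried parameter r0 [m] from the longitudinal variance
    of differential image motion [rad^2].

    Uses the classical Sarazin & Roddier (1990) coefficient
    (the paper abstracted above derives more accurate values):
        sigma_l^2 = 2 * lambda^2 * r0^(-5/3) * (0.179 D^(-1/3) - 0.0968 B^(-1/3))

    D: DIMM aperture diameter [m]; B: baseline between apertures [m].
    """
    K_l = 2.0 * (0.179 * D ** (-1 / 3) - 0.0968 * B ** (-1 / 3))
    return (K_l * wavelength ** 2 / var_long) ** (3 / 5)

def seeing_fwhm(r0, wavelength=500e-9):
    """Seeing FWHM [arcsec] for Kolmogorov turbulence with infinite outer
    scale; the paper's outer-scale correction would reduce this value."""
    return 0.98 * wavelength / r0 * 206265.0
```

For a finite outer scale, the paper shows this FWHM overestimates the true image size in large telescopes, which is why the corrected formulae matter in practice.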
