Abstract

Single-molecule localization microscopy (SMLM) forms super-resolution images with a resolution of several to tens of nanometers by detecting isolated single-molecule emission patterns and localizing the centers of individual probes. However, inhomogeneous refractive indices within the specimen distort and blur single-molecule emission patterns, reduce the information content carried by each detected photon, increase localization uncertainty, and thus cause significant resolution loss that is irreversible by post-processing. To compensate tissue-induced aberrations, conventional sensorless adaptive optics methods rely on iterative mirror changes guided by image-quality metrics. These metrics, however, produce inconsistent, and sometimes opposite, responses, which fundamentally limits the efficacy of such approaches for aberration correction in tissues. Bypassing this iterative trial-then-evaluate process, we developed deep-learning-driven adaptive optics (DL-AO) for SMLM, allowing direct inference of wavefront distortion and near-real-time compensation. Our trained deep neural network (DNN) monitors individual emission patterns from single-molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic (Kalman) filter, and drives a deformable mirror to compensate sample-induced aberrations. The method simultaneously estimates and compensates 28 wavefront deformation modes, restores single-molecule emission patterns to approach those obtained in the absence of specimen-induced aberrations, and improves the resolution and fidelity of 3D SMLM through brain tissues over 130 µm, with as few as 3-20 mirror changes.
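The control loop sketched in the abstract can be illustrated with a minimal simulation: per-frame DNN outputs are treated as noisy observations of a quasi-static aberration, fused by a Kalman filter with identity dynamics, and the running estimate would be negated and sent to the deformable mirror. The mode count (28) follows the abstract; all other names, noise levels, and the identity-dynamics model are illustrative assumptions, not the published implementation.

```python
import numpy as np

N_MODES = 28  # number of wavefront deformation modes, per the abstract


def kalman_update(x, P, z, R, Q):
    """One Kalman step with identity dynamics (assumed quasi-static
    aberration): prediction leaves the state unchanged and only
    inflates the per-mode uncertainty by the process noise Q."""
    P_pred = P + Q                 # predict: uncertainty grows
    K = P_pred / (P_pred + R)      # per-mode Kalman gain
    x_new = x + K * (z - x)        # blend prediction with new estimate z
    P_new = (1.0 - K) * P_pred
    return x_new, P_new


# Toy usage: simulated noisy per-frame estimates of a fixed wavefront.
rng = np.random.default_rng(0)
true_w = rng.normal(0.0, 0.5, N_MODES)  # hypothetical ground-truth coefficients
x = np.zeros(N_MODES)                   # running wavefront estimate
P = np.ones(N_MODES)                    # initial per-mode uncertainty
for _ in range(50):
    z = true_w + rng.normal(0.0, 0.3, N_MODES)  # simulated noisy DNN output
    x, P = kalman_update(x, P, z, R=0.09, Q=1e-4)
# x now tracks true_w; -x would be the correction applied to the mirror.
```

The identity-dynamics form is the simplest filter consistent with a sample-induced aberration that drifts slowly relative to the frame rate; a real system would tune R and Q to the DNN's estimation noise and the sample's drift.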
