Abstract

The inhomogeneous refractive indices of biological tissues blur and distort single-molecule emission patterns, generating image artifacts and decreasing the achievable resolution of single-molecule localization microscopy (SMLM). Conventional sensorless adaptive optics methods rely on iterative mirror changes and image-quality metrics. However, these metrics respond inconsistently in tissues and thus fundamentally limit the efficacy of such methods for aberration correction. To bypass this iterative trial-then-evaluate process, we developed deep learning-driven adaptive optics for SMLM, allowing direct inference of wavefront distortion and near real-time compensation. Our trained deep neural network monitors the individual emission patterns from single-molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic filter and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrated that our method simultaneously estimates and compensates for 28 wavefront deformation shapes and improves the resolution and fidelity of three-dimensional SMLM through >130-µm-thick brain tissue specimens.
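
The closed-loop pipeline described above can be summarized as: infer a shared wavefront estimate from single-molecule emission patterns, temporally filter it, and apply the negated correction to the deformable mirror. The Python sketch below is a minimal illustration of that loop, not the authors' implementation: `infer_zernike_coeffs`, `DynamicFilter`, `camera.acquire_emitter_crops`, `mirror.apply_zernike`, and the Keras-style `model.predict` call are all hypothetical names, and the dynamic filter is stood in for by a simple exponential moving average.

```python
import numpy as np

NUM_MODES = 28  # number of wavefront deformation shapes estimated, per the abstract


def infer_zernike_coeffs(model, emission_patterns):
    """Average the network's per-emitter wavefront estimates.

    `model` is assumed to map a batch of single-molecule emission-pattern
    crops of shape (N, H, W) to per-emitter coefficient vectors of shape
    (N, NUM_MODES); averaging yields the shared distortion estimate.
    """
    per_emitter = model.predict(emission_patterns)  # (N, NUM_MODES)
    return per_emitter.mean(axis=0)


class DynamicFilter:
    """Placeholder temporal filter (exponential moving average).

    The paper's actual dynamic filter may differ; this only illustrates
    smoothing of noisy per-batch estimates before actuating the mirror.
    """

    def __init__(self, num_modes, alpha=0.3):
        self.state = np.zeros(num_modes)
        self.alpha = alpha

    def update(self, estimate):
        self.state = (1 - self.alpha) * self.state + self.alpha * estimate
        return self.state


def correction_loop(model, camera, mirror, n_iters=50):
    """Closed-loop compensation: infer aberration, filter, negate, apply."""
    filt = DynamicFilter(NUM_MODES)
    for _ in range(n_iters):
        crops = camera.acquire_emitter_crops()         # hypothetical acquisition API
        estimate = infer_zernike_coeffs(model, crops)  # shared wavefront distortion
        smoothed = filt.update(estimate)
        mirror.apply_zernike(-smoothed)                # hypothetical DM interface
```

Because the network infers the distortion directly from the emission patterns already being collected, each loop iteration replaces what would otherwise be many trial mirror shapes and metric evaluations.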
