Abstract

Current 3D localization microscopy approaches are fundamentally limited in their ability to image thick, densely labeled specimens. Here, we introduce a hybrid optical-electronic computing approach that jointly optimizes an optical encoder (a set of multiple, simultaneously imaged 3D point spread functions) and an electronic decoder (a neural-network-based localization algorithm) to maximize 3D localization performance under these conditions. With extensive simulations and biological experiments, we demonstrate that our deep-learning-based microscope achieves significantly higher 3D localization accuracy than existing approaches, especially in challenging scenarios with high molecular density over large depth ranges.
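
The core idea described above is end-to-end differentiable design: a physical optical encoder (a learnable point-spread-function-shaping element) and a neural-network decoder are trained together by backpropagating a localization loss through both stages. The following is a minimal, hypothetical PyTorch sketch of that idea only; the class names, the simplified Fourier-optics PSF model, and the toy training step are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch: jointly optimizing a learnable pupil-plane phase mask
# (optical encoder) and a CNN (electronic decoder) for 3D localization.
import torch
import torch.nn as nn


class PhaseMaskEncoder(nn.Module):
    """Differentiable optical encoder: a learnable pupil-plane phase mask
    rendered into depth-dependent PSFs via a crude scalar Fourier-optics model."""

    def __init__(self, n_pixels=64):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(n_pixels, n_pixels))  # learned mask
        yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, n_pixels),
            torch.linspace(-1, 1, n_pixels),
            indexing="ij",
        )
        rho2 = xx**2 + yy**2
        self.register_buffer("pupil", (rho2 <= 1.0).float())  # circular aperture
        self.register_buffer("defocus", rho2)                 # crude defocus phase term

    def forward(self, z):
        # z: (B,) emitter depths in arbitrary units -> PSFs of shape (B, N, N)
        phi = self.phase + z.view(-1, 1, 1) * self.defocus
        pupil_field = self.pupil * torch.exp(1j * phi)
        field = torch.fft.fftshift(torch.fft.fft2(pupil_field), dim=(-2, -1))
        psf = field.abs() ** 2
        return psf / psf.sum(dim=(-2, -1), keepdim=True)


class CNNDecoder(nn.Module):
    """Electronic decoder: maps a camera frame to per-pixel depth-plane logits."""

    def __init__(self, n_z_planes=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_z_planes, 3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)


# Toy end-to-end training step; a realistic pipeline would simulate dense
# emitter fields, camera noise, and use a detection-style localization loss.
encoder, decoder = PhaseMaskEncoder(), CNNDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

z_true = torch.rand(4)                        # random emitter depths in [0, 1)
frames = encoder(z_true).unsqueeze(1)         # (B, 1, N, N) simulated camera frames
target = torch.zeros(4, 16, 64, 64)
target[torch.arange(4), (z_true * 15).long(), 32, 32] = 1.0  # one emitter at center

loss = nn.functional.binary_cross_entropy_with_logits(decoder(frames), target)
opt.zero_grad()
loss.backward()   # gradients flow through the decoder and into the phase mask
opt.step()
```

Because the simulated frames are never detached, the same backward pass that trains the decoder also updates the phase mask, which is what makes the optics and the localization network co-designed rather than designed separately.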
