Abstract
Fluorescence emission difference microscopy (FED) obtains super-resolution images by extracting the intensity difference between solid and doughnut confocal images. Because the outer contours of the solid and doughnut point spread functions do not match, FED suffers from information loss. Here, we present a framework for deep-learning-enhanced FED (DL-FED) microscopy based on image reconstruction with a cycle-consistent generative adversarial network (CycleGAN). Using this framework, we effectively avoid the information loss and enhance the spatial resolution of fluorescence images acquired by standard FED. We also demonstrate that standard FED images can be transformed to match the results of Airyscan-based FED and saturated FED, which effectively enhances the signal-to-noise ratio and avoids photobleaching. The validity of DL-FED is demonstrated by simulations and by experiments on fluorescent nanoparticles and biological cells. With the potential to realize high imaging speed, this approach may be widely applied in live-cell investigations.
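As a rough illustration of the intensity-difference step mentioned above, the sketch below (not taken from the paper) forms a FED image by subtracting a weighted doughnut confocal image from the solid confocal image and clipping negative residuals; the subtraction factor `r` and its default value are assumptions for illustration only.

```python
import numpy as np

def fed_difference(solid_img: np.ndarray, doughnut_img: np.ndarray,
                   r: float = 0.9) -> np.ndarray:
    """Weighted difference of solid and doughnut confocal images.

    Negative residuals, which arise where the doughnut signal exceeds
    the scaled solid signal, are clipped to zero. The factor ``r`` is a
    hypothetical default; in practice it is tuned per data set.
    """
    diff = solid_img.astype(float) - r * doughnut_img.astype(float)
    return np.clip(diff, 0.0, None)

# Usage example with synthetic 64x64 frames
rng = np.random.default_rng(0)
solid = rng.random((64, 64))
doughnut = rng.random((64, 64))
fed = fed_difference(solid, doughnut, r=0.9)
```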