Fluorescence emission difference microscopy (FED) obtains super-resolution images by extracting the intensity difference between solid and doughnut confocal images. Because the outer contours of the solid and doughnut point spread functions do not match, FED suffers from information loss. Here, we present a framework for deep-learning-enhanced FED (DL-FED) microscopy based on cycle-consistent generative adversarial network (CycleGAN) image reconstruction. Using this framework, we avoid the information loss and enhance the spatial resolution of fluorescence images acquired by standard FED. We also demonstrate that standard FED images can be transformed to match the results of Airyscan-based FED and saturated FED, which enhances the signal-to-noise ratio and avoids photobleaching. The validity of DL-FED is demonstrated by simulations and by experiments on fluorescent nanoparticles and biological cells. With the potential for high imaging speed, this approach may find wide application in live-cell investigations.
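As context for the difference operation described above, the following is a minimal sketch of a conventional FED subtraction. The function name, the subtraction factor gamma (0.7 is a commonly cited value, not taken from this work), and the zero-clipping of negative residuals are illustrative assumptions, not the authors' implementation; the clipping step is the usual source of the information loss the framework addresses.

```python
import numpy as np

def fed_difference(solid, doughnut, gamma=0.7):
    """Weighted difference of solid and doughnut confocal images.

    Negative residuals are clipped to zero, as is typical in FED
    reconstruction; gamma is an assumed subtraction factor.
    """
    diff = solid - gamma * doughnut
    return np.clip(diff, 0.0, None)

# Usage with stand-in arrays (placeholders, not real acquisition data)
rng = np.random.default_rng(0)
solid = rng.random((256, 256))
doughnut = rng.random((256, 256))
fed_image = fed_difference(solid, doughnut)
```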