Abstract

Spatial-frequency shift (SFS) imaging microscopy can break the diffraction limit for both fluorescently labeled and label-free samples by shifting high spatial-frequency information into the passband of the microscope. However, the resolution improvement comes at the cost of reduced temporal resolution, since dozens of raw SFS images are needed to expand the frequency spectrum. Although several deep learning methods have been proposed to address this problem, no neural network compatible with both labeled and label-free SFS imaging has been reported. Here, we propose the joint spatial-Fourier channel attention network (JSFCAN), which learns the general connection between the spatial domain and the Fourier frequency domain from complex samples. We demonstrate that JSFCAN achieves a resolution similar to that of the traditional algorithm using nearly 1/4 of the raw images, while increasing the reconstruction speed by two orders of magnitude. We then show that JSFCAN can be applied to both fluorescently labeled and label-free samples without architectural changes. We also demonstrate that, compared with U-net, a typical spatial-domain optimization network, JSFCAN is more robust when dealing with deep-SFS images and noisy images. The proposed JSFCAN provides an alternative route to fast SFS imaging reconstruction, enabling future applications in real-time live-cell research.
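The abstract's core idea of joint spatial-Fourier channel attention can be illustrated with a minimal sketch: pool a channel descriptor from the spatial feature map, pool a second descriptor from its Fourier magnitude spectrum, fuse the two into per-channel attention weights, and rescale the channels. This is not the authors' implementation; the function name, weight matrices, and fusion rule below are hypothetical stand-ins for the learned layers in JSFCAN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_channel_attention(x, w_spatial, w_fourier):
    """Toy joint spatial-Fourier channel attention (illustrative only).

    x          : feature map of shape (C, H, W)
    w_spatial  : (C, C) weights for the spatial-domain descriptor (hypothetical)
    w_fourier  : (C, C) weights for the Fourier-domain descriptor (hypothetical)
    Returns the channel-reweighted feature map of shape (C, H, W).
    """
    # Spatial branch: global average pooling per channel.
    d_spatial = x.mean(axis=(1, 2))                           # (C,)
    # Fourier branch: pool the log-magnitude spectrum per channel,
    # capturing how much high-frequency content each channel carries.
    spectrum = np.abs(np.fft.fft2(x, axes=(1, 2)))
    d_fourier = np.log1p(spectrum).mean(axis=(1, 2))          # (C,)
    # Fuse both descriptors into per-channel attention weights in (0, 1).
    a = sigmoid(w_spatial @ d_spatial + w_fourier @ d_fourier)
    return x * a[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))       # 4 channels, 8x8 features
w_s = 0.1 * rng.standard_normal((4, 4))
w_f = 0.1 * rng.standard_normal((4, 4))
y = joint_channel_attention(x, w_s, w_f)
print(y.shape)
```

In the actual network the descriptors and fusion would be learned end-to-end; the sketch only shows why the Fourier branch matters for SFS data, where the relevant information lives in the expanded frequency spectrum.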
