Abstract

Structured illumination microscopy (SIM) surpasses the optical diffraction limit, offering a two-fold enhancement in resolution over diffraction-limited microscopy. However, it requires both intense illumination and multiple acquisitions to produce a single high-resolution image. Using deep learning to augment SIM, we obtain a five-fold reduction in the number of raw images required for super-resolution SIM and generate images under extreme low-light conditions (at least 100× fewer photons). We validate the performance of deep neural networks on different cellular structures and achieve multi-color, live-cell super-resolution imaging with greatly reduced photobleaching.

Highlights

  • Structured illumination microscopy (SIM) surpasses the optical diffraction limit and offers a two-fold enhancement in resolution over diffraction-limited microscopy

  • We accomplish this by reconstructing images using deep neural networks that have been trained on real images, enabling us to visualize specific complex cellular structures and address complicated cellular or instrument-dependent backgrounds

  • We show that U-Net can be trained to directly reconstruct super-resolution images from SIM raw data using fewer raw images


Introduction

Structured illumination microscopy (SIM) surpasses the optical diffraction limit and offers a two-fold enhancement in resolution over diffraction-limited microscopy. SIM applies varying, nonuniform illumination to samples and uses dedicated computational algorithms to derive super-resolution information from nine or fifteen sequentially acquired images, for 2D or 3D imaging, respectively. Since it was first introduced by the laboratories of Heintzmann [1] and Gustafsson [2] two decades ago, SIM has evolved constantly to improve speed and resolution and to decrease the required light dosage. We apply deep learning to increase the speed of SIM by reducing the number of raw images, and to retrieve super-resolution information from low-light samples. We accomplish this by reconstructing images using deep neural networks trained on real images, enabling us to visualize specific complex cellular structures (mitochondria, actin networks, etc.) and to handle complicated cellular or instrument-dependent backgrounds (e.g., out-of-focus light).
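The principle that lets SIM recover super-resolution information from patterned illumination is the moiré effect: multiplying the sample by a structured pattern mixes high sample frequencies down into the microscope's passband. The following is a minimal one-dimensional NumPy sketch of this frequency mixing; the specific frequencies and the notion of a passband cutoff are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative 1D moiré demonstration (parameters are hypothetical):
# a sample frequency k_sample beyond an assumed passband is mixed down
# to the difference frequency |k_sample - k_ill| by the illumination.
n = 256
x = np.arange(n)
k_ill = 40      # illumination pattern frequency (cycles per n pixels)
k_sample = 50   # sample frequency, assumed outside the passband

sample = 1 + np.cos(2 * np.pi * k_sample * x / n)
illumination = 1 + np.cos(2 * np.pi * k_ill * x / n)
moire = sample * illumination  # detected intensity pattern

# Product-to-sum: the spectrum now contains |k_sample - k_ill| = 10,
# a low frequency the optics can transmit even if k_sample cannot pass.
spectrum = np.abs(np.fft.rfft(moire))
print(spectrum[abs(k_sample - k_ill)])  # strong difference-frequency peak
```

In practice SIM acquires several such images with shifted and rotated patterns, then computationally unmixes and reassembles the down-modulated components in frequency space; the deep-learning approach described here replaces part of that reconstruction pipeline.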

