Abstract
Fluorescence nanoscopy, in both its far-field (STED) and near-field (NSOM) versions, allows imaging of biological samples with exquisite resolution and high signal-to-noise ratio, representing a powerful tool to address biological questions at the nanometer scale. Yet, despite the greatly improved resolution and quality compared with conventional confocal microscopy, Poisson noise, background noise and cell autofluorescence remain limiting factors for the automated quantification of molecular spatial organization, as well as for the recognition of distinct features such as clusters and aggregates. For these reasons, the analysis of nanoscopy images, when performed at all, is often carried out in a semi-automatic fashion, requiring the manual identification of image features and the supply of user-defined parameters. Such approaches are time-consuming and may suffer from non-objective evaluation, which is critical when analyzing samples with high emitter density.

Here we propose a method that allows fully automated reconstruction of fluorescence nanoscopy images and quantitative analysis of protein spatial organization. Our approach combines a maximum likelihood algorithm with an expectation-maximization step, by means of which the image is denoised and decomposed into point-spread functions. The subsequent application of a local-maxima identification routine and Voronoi tessellation allows us to extract the maximum information from the fluorescence images without compromising the optical resolution. The performance of the method has been tested on simulated images at varying emitter density and signal-to-noise ratio, demonstrating its applicability to standard STED and NSOM images. Furthermore, we have successfully applied this algorithm to quantitatively analyze fluorescence nanoscopy images of densely packed membrane receptors on mammalian cells. Our results show faithful retrieval of receptor positions, stoichiometry and lateral distribution on the cell membrane.
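To make the pipeline concrete, below is a minimal Python sketch of the three stages the abstract describes, assuming a known PSF and using Richardson-Lucy deconvolution as a standard instance of the maximum likelihood / expectation-maximization (MLE-EM) step under a Poisson noise model. All function names, parameters, and the synthetic data are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter
from scipy.spatial import Voronoi

def richardson_lucy(image, psf, n_iter=50):
    """MLE-EM deconvolution under a Poisson noise model (Richardson-Lucy)."""
    estimate = np.full(image.shape, image.mean())
    psf_flip = psf[::-1, ::-1]  # mirrored PSF for the adjoint step
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # guard against division by zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

def local_maxima(image, window=5, threshold=0.0):
    """Return (row, col) coordinates of local maxima above a threshold."""
    peaks = (image == maximum_filter(image, size=window)) & (image > threshold)
    return np.argwhere(peaks)

# Synthetic example: a Gaussian PSF standing in for the microscope's PSF,
# sparse emitters, Poisson noise plus a constant background level.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((128, 128))
truth[tuple(rng.integers(10, 118, size=(30, 2)).T)] = 100.0
raw = rng.poisson(fftconvolve(truth, psf, mode="same") + 2.0)

deconv = richardson_lucy(raw.astype(float), psf)
peaks = local_maxima(deconv, window=5,
                     threshold=deconv.mean() + 3 * deconv.std())

# Voronoi tessellation of the detected emitter positions, e.g. to quantify
# local density and the lateral organization of membrane receptors.
if len(peaks) >= 4:  # Voronoi needs at least 4 non-degenerate points in 2D
    vor = Voronoi(peaks)
    print(f"{len(peaks)} emitters detected; {len(vor.regions)} Voronoi regions")
```

The choice of Richardson-Lucy here reflects its equivalence to EM for Poisson-distributed counts; the paper's combined MLE/EM step and its treatment of background and stoichiometry may well differ in detail.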