Abstract
Fluorescence-microscopy-based cell painting profiles the morphological characteristics of specific cell organelles at high resolution. However, phototoxicity, photobleaching, and the need for advanced instrumentation limit its utility for comprehensively annotating cell structure. Generating cell-painted organelles from simple, minimally invasive transmitted-light microscopy provides a surrogate for clinical applications. In this study, the applicability of a semantic segmentation model, UNet++, for delineating nuclei from composite images is investigated. A public dataset of 3456 composite images from the Broad Bioimage Benchmark Collection is considered. Binary masks of the endoplasmic reticulum (ER), nuclei, and cytoplasm are generated for pixel-wise labelling. The composite images and their labelled masks are fed to the UNet++ model to segment the cell-painted nuclei. The performance of the deep semantic network is analysed over 50 epochs, and the segmentation results are validated with mean intersection-over-union (IoU). The UNet++ model achieves an accuracy of 95.8% with a minimum loss of 0.1, and a mean IoU of 0.91 for the prediction of nuclei from the composite images. These results suggest that this approach could be employed to predict subcellular components from transmitted-light microscopy without fluorescent labelling.
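The abstract reports validation with mean intersection-over-union (IoU). As an illustration only (the paper's own implementation is not given here), a minimal NumPy sketch of mean IoU over integer label maps, averaging the per-class ratio of intersection to union, might look like:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection-over-union across classes for integer label maps.

    Illustrative sketch: classes absent from both prediction and ground
    truth are skipped rather than counted as IoU = 0.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent in both maps; skip it
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example: class 0 gives IoU 1/2, class 1 gives IoU 2/3
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, target)  # (1/2 + 2/3) / 2
```

In practice the predicted map would come from thresholding or argmax-ing the UNet++ output, and the target from the binary nuclei masks described above.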