Abstract

Recent innovations in tissue clearing and light sheet microscopy allow rapid acquisition of three-dimensional, micron-resolution images of fluorescently labeled brain samples. These data allow the observation of every cell in the brain, necessitating an accurate and high-throughput cell segmentation method in order to perform basic operations like counting the number of cells within a region; however, this poses large computational challenges given the noise in the data and the sheer number of features to identify. Inspired by the success of deep learning techniques in medical imaging, we propose a supervised learning approach using a convolutional neural network (CNN) to learn the non-linear relationship between local image appearance (within an image patch) and manual segmentations (cell or background at the center of the underlying patch). In order to improve the segmentation accuracy, we further integrate high-level contextual features with low-level image appearance features. Specifically, we extract contextual features from the probability map of cells (the output of the current CNN) and train the next CNN on both patch-wise image appearance and contextual features, extending previous methods into a cascaded approach. Using (a) high-level contextual features extracted from the cell probability map and (b) the spatial information of cell-to-cell locations, our cascaded CNN progressively improves the segmentation accuracy. We have evaluated the segmentation results on mouse brain images and compared them with conventional image processing approaches. Our cascaded CNN method achieves more accurate and robust segmentation results, indicating the promising potential of our proposed cell segmentation method for use on large tissue-cleared images.
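The cascade described above can be sketched as a loop in which each stage classifies image patches using both raw appearance and context patches drawn from the previous stage's probability map. The following is a minimal, hypothetical illustration in NumPy; the paper's stages are CNNs, whereas here each stage is an arbitrary callable mapping patch features to a cell probability, and all function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def extract_patches(image, coords, r):
    """Extract (2r+1) x (2r+1) patches centered at each (y, x) coordinate."""
    return np.stack([image[y - r:y + r + 1, x - r:x + r + 1]
                     for y, x in coords])

def cascade_predict(image, coords, stages, r=2):
    """Cascaded patch classification (illustrative sketch).

    Stage k sees the raw appearance patch concatenated with the
    probability-map patch produced by stage k-1, so high-level context
    is progressively refined across stages.
    """
    prob_map = np.full(image.shape, 0.5)            # uninformative prior
    for stage in stages:
        appearance = extract_patches(image, coords, r)
        context = extract_patches(prob_map, coords, r)
        features = np.concatenate([appearance, context], axis=-1)
        probs = stage(features)                     # stage: features -> P(cell)
        for (y, x), p in zip(coords, probs):        # update the probability map
            prob_map[y, x] = p
    return prob_map
```

In the actual method, `stage` would be a trained CNN and the probability map would cover the full volume; the sketch only shows how the output of one stage becomes a contextual input to the next.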
