Abstract

Background: Quantifying cells in a defined region of biological tissue is critical for many clinical and preclinical studies, especially in the fields of pathology, toxicology, cancer and behavior. As part of a program to develop accurate, precise and more efficient automatic approaches for quantifying morphometric changes in biological tissue, we have shown that both deep learning-based and hand-crafted algorithms can estimate the total number of histologically stained cells at their maximal profile of focus in Extended Depth of Field (EDF) images. Deep learning-based approaches achieve accuracy comparable to manual counts on EDF images, with significantly greater reproducibility, higher throughput efficiency and reduced error from human factors. However, the majority of automated counting methods are designed for single-immunostained tissue sections.

New Method: To extend automatic counting to more complex dual-staining protocols, we developed an adaptive method to separate stain color channels in images of tissue sections stained with a primary immunostain and a secondary counterstain.

Comparison with Existing Methods: The proposed method overcomes the limitations of state-of-the-art stain-separation methods, such as the requirement of a pure stain color basis as a prerequisite, or the need to learn the stain color basis for each image.

Results: Experimental results are presented for automatic counts using deep learning-based and hand-crafted algorithms on sections immunostained for neurons (Neu-N) or microglial cells (Iba-1) with cresyl violet counterstain.

Conclusion: Our findings show more accurate counts by deep learning methods compared to the hand-crafted method. Thus, stain-separated images can serve as input for automatic deep learning-based quantification methods designed for single-stained tissue sections.
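For readers unfamiliar with stain separation, the sketch below illustrates the general idea with classical Ruifrok-Johnston color deconvolution using a fixed stain basis; this is only a baseline for the kind of separation the abstract describes, not the paper's adaptive method, which avoids a fixed or per-image-learned basis. The input file name and the use of scikit-image's built-in H-E-DAB basis (rather than an immunostain plus cresyl violet basis) are assumptions for demonstration.

```python
# Minimal stain-separation sketch (assumed setup, not the paper's adaptive method).
import numpy as np
from skimage import io
from skimage.color import separate_stains, hed_from_rgb

# Hypothetical EDF image of a dual-stained section; keep only the RGB channels.
rgb = io.imread("edf_section.png")[..., :3].astype(np.float64) / 255.0

# Project pixel optical densities onto a fixed stain basis (H, E, DAB here);
# each output channel approximates the concentration of one stain per pixel.
stains = separate_stains(rgb, hed_from_rgb)

counterstain_channel = stains[..., 0]  # hematoxylin-like counterstain (stand-in for cresyl violet)
immunostain_channel = stains[..., 2]   # DAB-like primary immunostain (e.g., Neu-N or Iba-1)

# The isolated immunostain channel can then be passed to a counting pipeline
# designed for single-stained sections.
```

The design point the abstract makes is that a fixed basis like the one above (or a basis learned separately on every image) is a limitation; the proposed method adapts the separation without either requirement.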
