Abstract

Cell segmentation and classification are critical tasks in spatial omics data analysis. We introduce CelloType, an end-to-end model designed for cell segmentation and classification of biomedical microscopy images. Unlike the traditional two-stage approach of segmentation followed by classification, CelloType adopts a multi-task learning strategy that connects the segmentation and classification tasks and simultaneously boosts the performance of both. CelloType leverages Transformer-based deep learning techniques for enhanced accuracy of object detection, segmentation, and classification. It outperforms existing segmentation methods on ground-truth annotations from public databases. In terms of classification, CelloType outperforms a baseline model comprising state-of-the-art methods for the individual tasks. Using multiplexed tissue images, we further demonstrate the utility of CelloType for multi-scale segmentation and classification of both cellular and non-cellular elements in a tissue. The enhanced accuracy and multi-task-learning ability of CelloType facilitate automated annotation of rapidly growing spatial omics data.
