Abstract

Cancer poses a serious threat to human health, and the morphological structure of cells is an important basis for cancer diagnosis and grading. Automatic cell segmentation based on deep learning has therefore become an important means of computer-aided pathological diagnosis. To address the rough boundaries and inaccurate results produced by existing cell image segmentation methods, this paper designs a cell image segmentation network (ERF-TransUNet) based on edge feature residual fusion, exploiting the mutual complementarity and constraint between edge features and object features. The model uses a hybrid CNN-Transformer architecture to extract multi-scale features from cell images, and adds an independent edge feature extraction module and a residual fusion module to strengthen edge feature extraction and to constrain the fusion of edge features with cell object features, thereby improving the accuracy of cell contour localization. In experiments on two gland cell datasets, CRAG and GlaS, comparing segmentation quality against current popular deep learning models, the proposed network achieves good performance on both the Dice coefficient and the Hausdorff distance and effectively improves cell image segmentation.
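
The abstract reports results in terms of the Dice coefficient and the Hausdorff distance but does not specify the exact evaluation protocol (e.g., whether an object-level Dice or a percentile Hausdorff is used, as is common for GlaS and CRAG). The sketch below shows one standard per-mask computation of these two metrics on binary segmentation masks; the helper names and the symmetric, full (non-percentile) Hausdorff variant are assumptions for illustration, not the authors' exact protocol.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks (1 = gland/cell, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel sets of two masks."""
    pred_pts = np.argwhere(pred.astype(bool))
    target_pts = np.argwhere(target.astype(bool))
    if len(pred_pts) == 0 or len(target_pts) == 0:
        return float("inf")  # distance is undefined when one mask is empty
    d_forward, _, _ = directed_hausdorff(pred_pts, target_pts)
    d_backward, _, _ = directed_hausdorff(target_pts, pred_pts)
    return max(d_forward, d_backward)


if __name__ == "__main__":
    # Toy 8x8 masks: the prediction misses one boundary pixel of the target region.
    target = np.zeros((8, 8), dtype=np.uint8)
    target[2:6, 2:6] = 1
    pred = target.copy()
    pred[5, 5] = 0
    print(f"Dice:      {dice_coefficient(pred, target):.4f}")
    print(f"Hausdorff: {hausdorff_distance(pred, target):.4f}")
```

A lower Hausdorff distance together with a higher Dice coefficient is what the paper uses to argue that sharper edge features translate into better-localized cell contours rather than only better region overlap.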
