Cancer is a disease with a high mortality rate that threatens patients' physical and mental health and imposes heavy medical costs and emotional burdens. With the continuing development of artificial intelligence, deep learning-based cancer image segmentation is becoming increasingly important for cancer detection and accurate diagnosis. In segmentation tasks, however, accuracy differs between large and small objects, and objects at certain scales are segmented poorly; previous segmentation frameworks still leave room for improvement in multi-scale collaboration. This paper proposes training a deep learning segmentation framework that uses a feature pyramid to process the dataset, improving the average precision (AP) metric and achieving multi-scale cooperation in object segmentation. The Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification (PanNuke), which contains approximately 7,500 pathology images with cells from 19 tissue types annotated in five classes (cancer, non-cancer, inflammation, death, and connective tissue), was selected for the experiments. First, whole-slide images from PanNuke are combined with the Mask Region-based Convolutional Neural Network (Mask R-CNN) segmentation framework and an improved loss function to segment and detect each cellular structure in cancerous sections. Second, to address the lack of synergy when segmenting objects of different scales in cancerous tissue, a feature pyramid is used to process the dataset as part of the feature extraction module. Extensive experiments on this dataset show that the proposed method yields 0.269 AP, an improvement of about 4% over the original Mask R-CNN framework, demonstrating that using a feature pyramid to process the dataset is an effective and feasible way to improve medical image segmentation.
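As a rough illustration of the setup the abstract describes, the following sketch instantiates a Mask R-CNN whose backbone already incorporates a feature pyramid network (FPN). This is not the authors' code: the class count (five PanNuke categories plus background) and the input patch size are assumptions, and the improved loss function from the paper is not reproduced here.

```python
# Minimal sketch of a Mask R-CNN with an FPN backbone, configured for
# nucleus instance segmentation in the spirit of the abstract.
# Assumptions: 5 PanNuke classes + 1 background class; 256x256 patches.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# ResNet-50 + FPN backbone; num_classes includes the background class.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=6)
model.eval()

# Dummy pathology patch; real inputs would be PanNuke image tiles.
images = [torch.rand(3, 256, 256)]
with torch.no_grad():
    outputs = model(images)

# Each output dict holds per-instance boxes, labels, scores, and masks.
print(outputs[0]["boxes"].shape, outputs[0]["masks"].shape)
```

The FPN merges high-resolution, low-level feature maps with low-resolution, high-level ones, which is what allows nuclei of very different sizes to be detected from a single forward pass.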