The aim of this study was to segment whole slide images with an unsupervised method as an alternative to manual labeling. A total of 100 whole slide images (WSIs) of HE-stained and Pap-stained slides were selected as the research and test material, comprising 70 breast slides, 20 lung slides and 10 thyroid slides. To ensure data diversity, the breast slides covered normal tissue, inflammation and tumor; the lung slides were mainly lower-lobe neoplasms, including inflammation and tumor; and the thyroid slides consisted of benign cells obtained by fine-needle aspiration. The maximum total (original) magnification of each image was 400×, and the file format was NDPI. Each WSI was manually labeled over more than 10 fields of view, and the labeled information was used for validity verification.

An unsupervised image segmentation technique based on superpixel and fully convolutional neural network algorithms was constructed and used to segment any region of interest (ROI) of an unlabeled WSI. The method was compared with region adjacency graph (RAG) merging: segmentation quality was assessed with the under-segmentation error, the boundary recall and the mean Intersection-over-Union (mIoU), and execution efficiency was also compared. The timing measurements included the superpixel preprocessing time but excluded the time needed to load the deep learning engine.

Unsupervised automatic segmentation of any WSI ROI was achieved according to texture and color. The results on the breast, lung and thyroid slides differed only slightly, and repeated tests yielded stable results; however, the method was only moderately able to differentiate inflammation from tumor. The under-segmentation error, boundary recall and mIoU of the proposed method were 19.10%, 82.06% and 45.06%, respectively, versus 21.52%, 78.39% and 44.81% for RAG merging. The average processing time of the whole pipeline was 0.27 s in GPU mode and 1.30 s in CPU mode; RAG merging averaged 10.5 s in CPU mode, as it was not implemented in GPU mode.

This method produced satisfactory pixel-level labeling results through simple human-computer interaction, which could effectively reduce the cost of labeling digital pathology slide data. Compared with RAG merging, it handled image texture better and ran faster.
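The abstract does not give implementation details of the superpixel plus fully convolutional network technique. As a rough illustration only, the sketch below shows one common way such unsupervised, superpixel-regularized segmentation is built: a small convolutional network is trained against its own superpixel-refined predictions. All names, hyperparameters and library choices (PyTorch, scikit-image SLIC) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of superpixel-regularized unsupervised segmentation.
# All hyperparameters and names are illustrative, not the authors' code.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic


class SmallFCN(nn.Module):
    """Tiny fully convolutional network mapping an RGB patch to per-pixel class scores."""
    def __init__(self, channels=64, n_classes=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(channels),
            nn.Conv2d(channels, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)


def unsupervised_segment(rgb, n_segments=400, n_iter=50, lr=0.1):
    """Cluster the pixels of one ROI by color/texture without any labels.

    rgb: H x W x 3 uint8 array (a region of interest cropped from a WSI).
    Returns an H x W integer label map.
    """
    superpixels = slic(rgb, n_segments=n_segments, compactness=10)  # pre-segmentation
    x = torch.from_numpy(rgb.transpose(2, 0, 1)[None].astype(np.float32) / 255.0)
    model = SmallFCN()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(n_iter):
        scores = model(x)[0]                        # C x H x W class scores
        labels = scores.detach().argmax(0).numpy()  # current per-pixel assignment
        # Refinement step: force every superpixel to its majority label so that
        # spatially coherent regions keep a single label.
        refined = labels.copy()
        for sp in np.unique(superpixels):
            mask = superpixels == sp
            vals, counts = np.unique(labels[mask], return_counts=True)
            refined[mask] = vals[counts.argmax()]
        target = torch.from_numpy(refined.astype(np.int64))
        loss = loss_fn(scores.permute(1, 2, 0).reshape(-1, scores.shape[0]),
                       target.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return model(x)[0].argmax(0).numpy()
```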
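The three reported quality measures are standard in the superpixel and segmentation literature. The snippet below gives common formulations of them, assuming an integer label map `seg` (prediction) and `gt` (manual labels); the exact definitions used by the authors may differ.

```python
# Illustrative implementations of the reported metrics; not the authors' exact formulas.
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.segmentation import find_boundaries


def under_segmentation_error(seg, gt):
    """Pixels that 'leak' across ground-truth region borders, as a fraction of the image."""
    leak = 0
    for g in np.unique(gt):
        g_mask = gt == g
        for s in np.unique(seg[g_mask]):
            s_mask = seg == s
            inside = np.logical_and(s_mask, g_mask).sum()
            outside = s_mask.sum() - inside
            leak += min(inside, outside)  # "corrected" under-segmentation error variant
    return leak / gt.size


def boundary_recall(seg, gt, tol=2):
    """Share of ground-truth boundary pixels within `tol` pixels of a predicted boundary."""
    gt_b = find_boundaries(gt, mode='thick')
    seg_b = binary_dilation(find_boundaries(seg, mode='thick'), iterations=tol)
    return np.logical_and(gt_b, seg_b).sum() / max(gt_b.sum(), 1)


def mean_iou(seg, gt):
    """Mean IoU after matching each ground-truth region to its best-overlapping segment."""
    ious = []
    for g in np.unique(gt):
        g_mask = gt == g
        best = 0.0
        for s in np.unique(seg[g_mask]):
            s_mask = seg == s
            inter = np.logical_and(g_mask, s_mask).sum()
            union = np.logical_or(g_mask, s_mask).sum()
            best = max(best, inter / union)
        ious.append(best)
    return float(np.mean(ious))
```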