Abstract

In recent years, artificial intelligence has been applied to 3D COVID-19 medical image diagnosis, reducing detection costs and missed-diagnosis rates while improving predictive accuracy and diagnostic efficiency. However, the limited size and low quality of clinical 3D medical image samples have hindered the segmentation performance of 3D models. We therefore propose a 3D medical image segmentation model based on semi-supervised co-training. Multi-view and multi-modal images are generated using spatial flipping and windowing techniques to enhance the spatial diversity of 3D image samples. A confidence-weighted pseudo-label generation module produces reliable pseudo labels for unannotated data, increasing the effective sample size and reducing overfitting. The approach uses a three-stage training process: first, a single network is trained on annotated data; second, unannotated data are incorporated to train a dual-modal network and generate pseudo labels; finally, six models across three dimensions are trained jointly on annotated labels and on pseudo labels generated from multi-view and multi-modal images, to improve segmentation accuracy and generalization. A consistency regularization loss is applied to reduce noise and accelerate training convergence. In addition, a heatmap visualization method highlights the features attended to at each training stage, providing an effective reference for clinical diagnosis. Experiments were conducted on an open dataset of 3D COVID-19 CT samples and an unannotated dataset from TCIA, comprising 771 NIfTI-format CT images from 661 COVID-19 patients. Five-fold cross-validation shows that the proposed model achieves Dice = 73.30%, ASD = 10.633, Sensitivity = 63.00%, and Specificity = 99.60%. Compared with typical semi-supervised 3D segmentation models, it delivers better segmentation accuracy and generalization performance.
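The confidence-weighted pseudo-labeling described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the per-model weighting by mean top-class confidence, and the voxel-level confidence threshold are all assumptions about how such a module might work, using NumPy arrays in place of real model outputs.

```python
import numpy as np

def confidence_weighted_pseudo_labels(prob_maps, threshold=0.8):
    """Fuse per-model probability maps into pseudo labels for unannotated volumes.

    prob_maps : list of arrays, each of shape (C, D, H, W) -- per-class softmax
        probabilities from one co-trained model (hypothetical interface).
    Returns (pseudo_label, keep_mask): voxel-wise argmax labels and a boolean
    mask of voxels whose fused confidence exceeds `threshold`; low-confidence
    voxels would be excluded from the pseudo-label loss.
    """
    # Weight each model by its mean top-class confidence (an assumed scheme).
    weights = np.asarray([p.max(axis=0).mean() for p in prob_maps])
    weights = weights / weights.sum()          # normalize weights to sum to 1

    # Confidence-weighted average of the models' probability maps.
    fused = sum(w * p for w, p in zip(weights, prob_maps))

    pseudo_label = fused.argmax(axis=0)        # hard label per voxel
    keep_mask = fused.max(axis=0) >= threshold # keep only confident voxels
    return pseudo_label, keep_mask
```

With two models that agree on the labeling, the fused map follows the more confident model more strongly, and voxels where the fused confidence is still low are masked out rather than used as noisy supervision.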
