Abstract

The cross-teaching of a Convolutional Neural Network (CNN) and a Transformer has proved successful in semi-supervised learning; however, the information interaction between local and global relations ignores medium-scale semantic features, and the information produced during feature encoding is not fully exploited. To address these problems, we propose a new semi-supervised segmentation network. Based on the principle that convolutions with different kernel sizes model complementary information, we design a cross-supervised network of two CNNs with different kernel sizes under the cross-teaching framework. We introduce global feature contrastive learning and use the dual-CNN architecture to generate contrast samples, making efficient use of the encoded features. Supervised learning is performed on the labeled data, and dual-CNN cross-teaching supervision is performed on the unlabeled data; all data are mapped by the two CNNs to features, which are then used for contrastive learning to optimize the parameters. We conducted extensive experiments on the Automated Cardiac Diagnosis Challenge (ACDC) dataset to evaluate our approach. With 10% labeled data, our method achieves an average Dice Similarity Coefficient (DSC) of 87.2% and a Hausdorff distance of 6.1 mm, a significant improvement over many current popular models.
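The two core ideas of the abstract, cross-teaching between two CNN branches with different kernel sizes and contrastive learning on their encoded features, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the `SmallCNN` architecture, the hard-pseudo-label cross-teaching loss, and the InfoNCE-style contrastive loss over globally pooled features are all assumptions chosen to make the mechanism concrete.

```python
# Minimal sketch (assumed architecture and losses, not the paper's code) of
# dual-CNN cross-teaching with global feature contrastive learning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Tiny segmenter; kernel_size differentiates the two branches."""
    def __init__(self, kernel_size=3, in_ch=1, num_classes=4, width=8):
        super().__init__()
        pad = kernel_size // 2
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size, padding=pad), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        feat = self.encoder(x)          # encoded features, reused for contrastive learning
        return self.head(feat), feat

def cross_teaching_loss(logits_a, logits_b):
    """Each branch is supervised by the other's hard pseudo-labels (stop-grad)."""
    pseudo_a = logits_a.argmax(dim=1).detach()
    pseudo_b = logits_b.argmax(dim=1).detach()
    return F.cross_entropy(logits_a, pseudo_b) + F.cross_entropy(logits_b, pseudo_a)

def contrastive_loss(feat_a, feat_b, temperature=0.1):
    """InfoNCE over globally pooled features: the two branches' views of the
    same image are positives; other images in the batch are negatives."""
    za = F.normalize(feat_a.mean(dim=(2, 3)), dim=1)   # (B, C)
    zb = F.normalize(feat_b.mean(dim=(2, 3)), dim=1)
    logits = za @ zb.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(za.size(0))                 # diagonal entries are positives
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    net3, net5 = SmallCNN(kernel_size=3), SmallCNN(kernel_size=5)
    x = torch.randn(4, 1, 32, 32)                      # a batch of unlabeled images
    logits3, feat3 = net3(x)
    logits5, feat5 = net5(x)
    loss = cross_teaching_loss(logits3, logits5) + contrastive_loss(feat3, feat5)
    loss.backward()                                    # gradients flow to both CNNs
```

On labeled data, a standard supervised loss (e.g. cross-entropy against ground-truth masks) would be added for both branches; the cross-teaching and contrastive terms above apply to all data.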
