Abstract
Collaborative learning is a fundamental component of consistency learning and has been extensively utilized in semi-supervised medical image segmentation, primarily through multiple models learning from each other. However, existing semi-supervised collaborative segmentation methods face two primary issues. First, they fail to fully exploit the hidden knowledge within the models during knowledge exchange, resulting in inefficient knowledge sharing and limited generalization. To address this, we propose a novel approach, termed ‘fusion teacher’, which merges the knowledge of two models at the feature level, improving the efficiency of knowledge exchange between models and generating more accurate pseudo-labels for consistency learning. Second, the initial and intermediate stages of collaborative learning suffer from a significant performance gap between the fusion teacher and the student models, which impairs effective knowledge transfer. To mitigate this, we gradually increase the dropout rate, which improves the efficiency of knowledge transfer from the fusion teacher to the student models. To demonstrate the efficacy of our method, we conduct experiments on the ISIC, ACDC, and AbdomenCT-1K datasets, achieving Dice scores of 87.4%, 84.8%, and 84.5%, respectively, with 10% labelled data. Compared with current state-of-the-art (SOTA) methods, our method demonstrates strong competitiveness.
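The gradually increasing dropout rate mentioned above could be scheduled in several ways; the abstract does not specify one. The following is a minimal sketch under the assumption of a linear ramp, with all names (`dropout_rate`, `p_init`, `p_max`, `ramp_frac`) hypothetical rather than taken from the paper:

```python
def dropout_rate(step, total_steps, p_init=0.05, p_max=0.5, ramp_frac=0.5):
    """Hypothetical schedule: linearly ramp the dropout probability from
    p_init to p_max over the first ramp_frac of training, then hold it
    at p_max. A low early rate keeps the student close to the fusion
    teacher while the performance gap is large; the rate rises as the
    student improves."""
    ramp_steps = max(1, int(total_steps * ramp_frac))
    t = min(step / ramp_steps, 1.0)  # progress through the ramp, clamped to [0, 1]
    return p_init + t * (p_max - p_init)
```

At each training step, the returned value would be assigned to the student model's dropout layers before the forward pass; the exact initial and final rates would need to be tuned per dataset.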