Abstract

Medical image segmentation is a fundamental task in many clinical applications, yet current automated segmentation methods rely heavily on manual annotations, which are inherently subjective and prone to annotation bias. Modeling annotator preference has recently attracted considerable interest, and several methods have been proposed in the past two years. However, existing methods overlook the potential correlations between annotations, such as the complementary and discriminative information they carry. In this work, the Adaptive annotation CorrelaTion based multI-annOtation LearNing (ACTION) method is proposed for calibrated medical image segmentation. ACTION employs consensus feature learning to leverage complementary information across annotations and dynamic adaptive weighting to emphasize discriminative information within each annotation, both guided by the correlations between annotations. Meanwhile, memory accumulation-replay is proposed to accumulate prior knowledge and integrate it into the model, enabling it to accommodate the multi-annotation setting. Two medical image benchmarks with different modalities are used to evaluate ACTION, and extensive experimental results demonstrate that it achieves superior performance compared to several state-of-the-art methods.
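The general idea of weighting multiple annotations by their mutual agreement and forming a consensus label can be illustrated with a minimal sketch. Note that this is not the paper's actual formulation: the Dice-agreement weighting, the function names, and the soft-consensus construction below are all illustrative assumptions standing in for ACTION's learned consensus features and dynamic adaptive weights.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks (eps avoids division by zero)."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def annotation_weights(masks):
    """Weight each annotation by its mean Dice agreement with the others.

    Illustrative stand-in for dynamic adaptive weighting: annotations that
    agree more with the rest of the set receive higher weight.
    """
    n = len(masks)
    scores = np.array([
        np.mean([dice(masks[i], masks[j]) for j in range(n) if j != i])
        for i in range(n)
    ])
    return scores / scores.sum()

def soft_consensus(masks):
    """Agreement-weighted soft label across annotations (values in [0, 1])."""
    w = annotation_weights(masks)
    return sum(wi * m.astype(float) for wi, m in zip(w, masks))
```

In this toy setup, an outlier annotation that overlaps little with the others is automatically down-weighted, so the consensus label is dominated by the annotations that agree; a learned model would instead estimate such weights from features rather than raw mask overlap.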
