Abstract

Deep learning techniques for medical image analysis often encounter domain shifts between source and target data. Most existing approaches address this with unsupervised domain adaptation (UDA). In practice, however, the source-domain data are often inaccessible, typically for privacy reasons: data from different hospitals exhibit domain shifts due to equipment discrepancies, yet the two domains cannot be accessed simultaneously. This setting, known as source-free UDA, limits the effectiveness of previous UDA methods for medical imaging. Although various medical source-free unsupervised domain adaptation (MSFUDA) methods have been introduced, they tend to suffer from an over-fitting problem summarized as "longer training, worse performance". To address this issue, we propose the Stable Learning (SL) strategy, a method that can be integrated with other approaches and consists of two modules: weight consolidation and entropy increase. Weight consolidation helps retain domain-invariant knowledge, while entropy increase prevents over-learning. We validated the strategy through experiments with three MSFUDA methods on two public datasets. On the abdominal dataset, applying SL enables the MSFUDA methods to effectively address the domain shift, improving the Dice coefficient from 0.5167 to 0.7006 for CT-to-MRI adaptation and from 0.6474 to 0.7188 for MRI-to-CT adaptation. Similar improvements are observed on the cardiac dataset. Additionally, ablation studies on the two modules demonstrate the effectiveness of the SL strategy.
