Abstract

A well-known problem in medical imaging is the performance degradation that occurs when a model trained on source data is applied at a new site. Supervised Domain Adaptation (SDA) strategies that address this challenge assume the availability of a limited number of annotated samples from the new site. A typical SDA approach is to pre-train the model on the source site and then fine-tune it on the target site, and current research has therefore mainly focused on which layers should be fine-tuned. Our approach additionally transfers the gradient history of the pre-training phase to the fine-tuning phase. We present two schemes for transferring this gradient information so that the generalization achieved during pre-training is preserved while fine-tuning the model. We show that our methods outperform the state of the art across multiple datasets and tasks, under different levels of data scarcity at the target site.
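The abstract does not detail the two transfer schemes, but a minimal sketch of the underlying idea, assuming PyTorch and an Adam optimizer (whose moment estimates accumulate the gradient history), might look like the following. The model, data, and hyperparameters are purely illustrative and are not taken from the paper.

    # Sketch: carry Adam's accumulated gradient history (exp_avg / exp_avg_sq)
    # from source-site pre-training into target-site fine-tuning.
    import torch
    import torch.nn as nn

    def pretrain(model: nn.Module, steps: int = 10) -> dict:
        """Pre-train on (synthetic) source-site data; return the optimizer state."""
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(steps):
            x = torch.randn(32, 16)      # stand-in for source-site batches
            y = torch.randn(32, 1)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Adam's state dict holds the moment estimates, i.e. the gradient history.
        return opt.state_dict()

    def finetune(model: nn.Module, pretrain_opt_state: dict, steps: int = 5) -> None:
        """Fine-tune on scarce target-site data, starting from the source gradient history."""
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        opt.load_state_dict(pretrain_opt_state)   # transfer the gradient history
        for g in opt.param_groups:                # restore the fine-tuning learning rate
            g["lr"] = 1e-4
        loss_fn = nn.MSELoss()
        for _ in range(steps):
            x = torch.randn(8, 16)                # stand-in for the few target-site samples
            y = torch.randn(8, 1)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    state = pretrain(model)
    finetune(model, state)

In this sketch, the fine-tuning optimizer starts from warmed-up moment estimates instead of zeros, so early target-site updates are regularized by the source-site gradient statistics; the paper's actual schemes may differ.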
