Abstract

The recent prevalence of deep neural networks has led semantic segmentation networks to achieve human-level performance in the medical field, provided they are given sufficient training data. However, these networks often fail to generalize when tasked with creating semantic maps for out-of-distribution images, necessitating re-training on new distributions. This labor-intensive process requires expert knowledge for generating training labels. In the medical field, distribution shifts can naturally occur due to the choice of imaging devices, such as MRI or CT scanners. To mitigate the need for labeling images in a target domain after successful model training in a fully annotated source domain with a different data distribution, unsupervised domain adaptation (UDA) can be employed. Most UDA approaches ensure target generalization by generating a shared source/target latent feature space, allowing a source-trained classifier to maintain performance in the target domain. However, such approaches necessitate joint source and target data access, potentially leading to privacy leaks with respect to patient information. We propose a UDA algorithm for medical image segmentation that does not require access to source data during adaptation, thereby preserving patient data privacy. Our method relies on approximating the source latent features at the time of adaptation and creates a joint source/target embedding space by minimizing a distributional distance metric based on optimal transport. We demonstrate that our approach is competitive with recent UDA medical segmentation works, even with the added requirement of privacy. Early partial results of this work were presented at the 2022 British Machine Vision Conference (Stan and Rostami, 2022a).
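
As a rough illustration of the adaptation objective described above, the sketch below is a hypothetical implementation, not the authors' released code: it fits a Gaussian mixture to source latent features before the source data is discarded, then aligns target encoder features to samples drawn from that mixture by minimizing a sliced Wasserstein distance, one common optimal-transport-based distributional metric. The encoder, feature dimension, mixture size, and random feature tensors are illustrative assumptions.

```python
# Hypothetical sketch: source-free feature alignment via an
# optimal-transport (sliced Wasserstein) distance. All names and sizes
# below are illustrative assumptions, not the paper's implementation.
import torch
from sklearn.mixture import GaussianMixture


def sliced_wasserstein_distance(x: torch.Tensor, y: torch.Tensor,
                                num_projections: int = 64) -> torch.Tensor:
    """Monte Carlo estimate of the sliced Wasserstein-2 distance between
    two equally sized batches of feature vectors of shape (N, D)."""
    d = x.shape[1]
    proj = torch.randn(d, num_projections, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)       # random unit directions
    x_proj = (x @ proj).sort(dim=0).values             # sorted 1-D projections
    y_proj = (y @ proj).sort(dim=0).values
    return ((x_proj - y_proj) ** 2).mean()             # average over slices


# Step 1 (source side, before source data is discarded): fit a GMM to
# source latent features so the source distribution can be re-sampled
# during adaptation without retaining any patient images.
feat_dim, batch = 128, 256
source_feats = torch.randn(2000, feat_dim)             # stand-in for real source features
gmm = GaussianMixture(n_components=8).fit(source_feats.numpy())

# Step 2 (target side, source-free adaptation): push unlabeled target
# features toward the approximated source feature distribution.
encoder = torch.nn.Linear(feat_dim, feat_dim)           # stand-in for a segmentation encoder
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

target_batch = torch.randn(batch, feat_dim)              # unlabeled target inputs
z_target = encoder(target_batch)                          # target latent features
z_source = torch.from_numpy(gmm.sample(batch)[0]).float() # surrogate source features
loss = sliced_wasserstein_distance(z_target, z_source)    # OT-based alignment loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch, only the fitted mixture parameters (not the source images) are carried over to the adaptation stage, which is what allows alignment without source data access.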
