Abstract

The performance of convolutional neural networks (CNNs) often drops when they encounter a domain shift. Recently, unsupervised domain adaptation (UDA) and domain generalization (DG) techniques have been proposed to address this problem. However, UDA and DG approaches require access to source domain data, which may not be available in practice due to data privacy. In this paper, we propose a novel test-time adaptation framework for volumetric medical image segmentation that requires neither source domain data for adaptation nor target domain data for offline training. Specifically, our framework needs only CNNs pre-trained on the source domain and the target image itself. Our method aligns the target image to the source domain at both the image and latent feature levels at test time. The framework has three parts: (1) a multi-task segmentation network (Seg), (2) autoencoders (AEs), and (3) a translation network (T). Seg and the AEs are pre-trained with source domain data. At test time, the weights of these pre-trained CNNs (the decoders of Seg and the AEs) are fixed, and T is trained to align the target image to the source domain at the image level, guided by the autoencoders, which optimize the similarity between their input and reconstructed output. The encoder of Seg is also updated with self-supervised tasks to increase the generalizability of the model toward the source domain at the feature level. We evaluate our method on healthy controls, adult Huntington's disease (HD) patients, and pediatric Aicardi-Goutières syndrome (AGS) patients, imaged with different scanners and MRI protocols. The results indicate that our proposed method improves the performance of CNNs in the presence of domain shift at test time.

Keywords: Self-supervised, Test-time training, Test-time adaptation, Segmentation
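As a rough illustration of the test-time loop described above, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the module names (seg_enc, seg_dec, ae, t_net), the toy architectures, and the use of a single AE on the segmentation output are illustrative assumptions, and the multi-task self-supervised objectives that update the Seg encoder are collapsed into the same reconstruction loss for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in networks; in the real pipeline these would be
# volumetric CNNs pre-trained on the source domain.
def conv3d_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU())

seg_enc = conv3d_block(1, 8)          # Seg encoder: updated at test time
seg_dec = nn.Conv3d(8, 4, 1)          # Seg decoder: frozen at test time
ae = nn.Sequential(                   # one AE standing in for the set of AEs
    conv3d_block(4, 8), nn.Conv3d(8, 4, 1))
t_net = nn.Conv3d(1, 1, 3, padding=1) # translation network T, trained per image

# Freeze the pre-trained decoders and autoencoder, as the abstract specifies.
for p in list(seg_dec.parameters()) + list(ae.parameters()):
    p.requires_grad = False

# Only T and the Seg encoder are optimized at test time.
optimizer = torch.optim.Adam(
    list(t_net.parameters()) + list(seg_enc.parameters()), lr=1e-4)

target_volume = torch.randn(1, 1, 32, 32, 32)  # one unlabeled target-domain image

for step in range(50):
    optimizer.zero_grad()
    translated = t_net(target_volume)       # image-level alignment via T
    seg_out = seg_dec(seg_enc(translated))  # segmentation prediction
    recon = ae(seg_out)                     # frozen AE reconstructs the prediction
    # If the prediction looks source-like, the source-trained AE reconstructs
    # it well; the reconstruction error therefore serves as the adaptation loss.
    loss = F.mse_loss(recon, seg_out)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    final_pred = seg_dec(seg_enc(t_net(target_volume))).argmax(dim=1)
```

The key design point the sketch preserves is that gradients flow only into T and the Seg encoder, so adaptation happens at the image and feature levels while the frozen, source-trained decoders and AEs act as an anchor to the source domain.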
