Generalizable medical image segmentation aims to enable models to generalize to unseen target domains under domain shift. Recent progress demonstrates that the shape of the segmentation target, being highly consistent and robust across domains, can serve as a reliable regularizer for better cross-domain performance; existing methods typically adopt a shared framework that produces the segmentation maps and the shape prior concurrently. However, because modern deep neural networks are inherently biased toward texture and style, the edges and silhouettes of the extracted shapes are inevitably corrupted by the domain-specific texture and style of medical images under domain shift. To address this limitation, we devise a novel framework that separates shape regularization from segmentation map prediction. Specifically, we first design a whitening transform-based probabilistic shape regularization extractor, termed WT-PSE, to suppress undesirable domain-specific texture and style interference, yielding more robust and higher-quality shape representations. Second, we introduce a Wasserstein distance-guided knowledge distillation scheme that enables WT-PSE to perform more flexible shape extraction at inference time. Finally, by incorporating domain knowledge of medical images, we propose a novel instance-domain whitening transform that stabilizes training and further improves performance. Experiments on both multi-domain and single-domain generalization benchmarks demonstrate the effectiveness of the proposed method.
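For readers unfamiliar with whitening transforms on deep features, the sketch below illustrates the general idea of per-instance channel whitening, which removes style statistics carried by channel correlations. This is only a minimal, generic illustration of the underlying operation; the function `instance_whitening`, the tensor shapes, and all hyperparameters are assumptions for demonstration and are not the paper's actual WT-PSE or instance-domain whitening transform.

```python
import torch

def instance_whitening(x, eps=1e-5):
    """Whiten each instance's channel covariance over spatial positions.

    x: (B, C, H, W) feature map. Returns a tensor of the same shape whose
    per-instance channel covariance is (approximately) the identity, which
    suppresses style/texture statistics encoded in channel correlations.
    NOTE: illustrative sketch only, not the paper's WT-PSE module.
    """
    B, C, H, W = x.shape
    feat = x.reshape(B, C, H * W)                      # (B, C, N)
    mean = feat.mean(dim=2, keepdim=True)              # per-channel mean
    feat = feat - mean
    cov = feat @ feat.transpose(1, 2) / (H * W - 1)    # (B, C, C) covariance
    cov = cov + eps * torch.eye(C, device=x.device)    # regularize for stability

    # Inverse square root of the covariance via eigendecomposition.
    eigval, eigvec = torch.linalg.eigh(cov)
    inv_sqrt = (eigvec
                @ torch.diag_embed(eigval.clamp_min(eps).rsqrt())
                @ eigvec.transpose(1, 2))

    whitened = inv_sqrt @ feat                         # decorrelate channels
    return whitened.reshape(B, C, H, W)


# Example: whiten a batch of encoder features before shape extraction.
feats = torch.randn(2, 64, 32, 32)
out = instance_whitening(feats)
```

In this generic form, whitening is applied per instance; the paper's instance-domain variant additionally incorporates domain-level statistics, which is not reproduced here.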