Abstract

Deep learning has shown remarkable promise in medical imaging, reaching expert-level performance for some diseases. However, these models often fail to generalize to data not seen during training, which is a major roadblock to successful clinical deployment. This paper proposes a generalization enhancement approach that narrows the gap between source and unseen data in deep learning-based segmentation models without requiring ground-truth masks for the target domain. Using the subset of the unseen domain's CT slices for which the model trained on the source data yields its most confident predictions, together with their predicted masks as pseudo-labels, the model learns useful features of the unseen data through a retraining process. We evaluated the effectiveness of the introduced method over three rounds of experiments on three open-access COVID-19 lesion segmentation datasets, and the results show consistent improvements in segmentation performance on datasets not seen during training.
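The selection step the abstract describes — ranking the unseen domain's slices by prediction confidence and keeping the most confident ones with their predicted masks as pseudo-labels for retraining — can be sketched as follows. This is an illustrative sketch only, not the paper's exact implementation: the confidence measure (mean distance of pixel probabilities from 0.5), the `top_fraction` parameter, and the function name are assumptions.

```python
import numpy as np

def select_confident_slices(prob_maps, top_fraction=0.2):
    """Rank unseen-domain slices by mean prediction confidence and keep
    the top fraction together with their binarized pseudo-label masks.

    Note: this confidence heuristic is an assumption for illustration;
    the paper does not specify its exact selection criterion here.
    """
    # Confidence per slice: how far pixel probabilities sit from the
    # maximally uncertain value 0.5, averaged over the slice.
    confidences = np.array([np.mean(np.abs(p - 0.5)) for p in prob_maps])
    k = max(1, int(len(prob_maps) * top_fraction))
    # Indices of the k most confident slices, highest confidence first.
    chosen = np.argsort(confidences)[::-1][:k]
    # Threshold the probability maps to obtain pseudo-label masks.
    pseudo_masks = [(prob_maps[i] > 0.5).astype(np.uint8) for i in chosen]
    return chosen, pseudo_masks

# Toy example: three fake probability maps for unseen-domain CT slices.
maps = [np.full((4, 4), 0.9),   # confidently lesion everywhere
        np.full((4, 4), 0.55),  # uncertain
        np.full((4, 4), 0.05)]  # confidently background
chosen, masks = select_confident_slices(maps, top_fraction=0.67)
```

The selected slices and their pseudo-masks would then be mixed into the training set for the retraining round; the slice-level confidence measure could equally be entropy-based without changing the overall scheme.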
