Abstract

Background: Medical image analysis pipelines often involve segmentation, which requires large amounts of annotated training data that are time-consuming and costly to produce. To address this issue, we proposed leveraging generative models to achieve few-shot image segmentation.

Methods: We trained a denoising diffusion probabilistic model (DDPM) on 480,407 pelvis radiographs to generate 256 × 256 px synthetic images. The DDPM was conditioned on demographic and radiologic characteristics and was rigorously validated by domain experts and by objective image quality metrics (Fréchet inception distance [FID] and inception score [IS]). Next, three landmarks (greater trochanter [GT], lesser trochanter [LT], and obturator foramen [OF]) were annotated on 45 real-patient radiographs: 25 for training and 20 for testing. To extract features, each image was passed through the pre-trained DDPM at three timesteps, and for each pass features from specific blocks were extracted. These features were concatenated with the real image to form a 4225-channel feature map. The feature set was split into random patches, which were fed to a U-Net. The Dice similarity coefficient (DSC) was used to compare performance against a vanilla U-Net trained directly on the radiographs.

Results: Expert accuracy in distinguishing real from generated images was 57.5%, while the model reached an FID of 7.2 and an IS of 210. The segmentation U-Net trained on the 20 feature sets achieved DSCs of 0.90, 0.84, and 0.61 for OF, GT, and LT segmentation, respectively, at least 0.30 points higher than the naively trained model.

Conclusion: We demonstrated the applicability of DDPMs as feature extractors, facilitating medical image segmentation with few annotated samples.
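The following is a minimal sketch, not the authors' code, of the feature-extraction idea described in the Methods: forward-diffuse an image to a few timesteps, pass it through a pretrained DDPM denoiser, harvest intermediate activations via forward hooks, upsample them to image resolution, and concatenate them with the raw image to build a many-channel input for a downstream segmentation U-Net. The block names, timestep choices, and the assumption that the denoiser is callable as ddpm_unet(x_t, t) are illustrative, not values taken from the paper; a simple Dice coefficient is included for the evaluation step.

```python
# Hedged sketch of DDPM-based feature extraction for few-shot segmentation.
# Assumes: ddpm_unet is a pretrained PyTorch denoiser callable as ddpm_unet(x_t, t),
# and alphas_cumprod is the scheduler's 1-D tensor of cumulative alpha products.

import torch
import torch.nn.functional as F


def ddpm_feature_map(ddpm_unet, alphas_cumprod, image, timesteps, block_names):
    """Build a pixel-aligned feature map from DDPM decoder activations.

    image       : (1, C, H, W) tensor scaled to [-1, 1]
    timesteps   : diffusion steps to probe (e.g. three of them)
    block_names : names of sub-modules whose outputs are captured (assumed names)
    """
    captured = {}

    def make_hook(name):
        def hook(_module, _inputs, output):
            captured[name] = output
        return hook

    modules = dict(ddpm_unet.named_modules())
    handles = [modules[name].register_forward_hook(make_hook(name))
               for name in block_names]

    h, w = image.shape[-2:]
    feats = [image]  # keep the raw image as the first channels
    try:
        for t in timesteps:
            # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
            a_bar = alphas_cumprod[t]
            noise = torch.randn_like(image)
            x_t = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * noise

            captured.clear()
            with torch.no_grad():
                ddpm_unet(x_t, torch.tensor([t], device=image.device))

            # Upsample each captured activation to image resolution and stack it.
            for name in block_names:
                feats.append(F.interpolate(captured[name], size=(h, w),
                                           mode="bilinear", align_corners=False))
    finally:
        for handle in handles:
            handle.remove()

    return torch.cat(feats, dim=1)  # (1, n_channels, H, W)


def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```

In this sketch the many-channel output of ddpm_feature_map would be cropped into random patches and used to train a small segmentation U-Net, with dice_coefficient applied to the held-out test masks; the specific channel count (4225 in the paper) depends on which blocks and timesteps are chosen.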
