Abstract

Rationale and objectives
Quantification of 129Xe MRI relies on accurate segmentation of the thoracic cavity, typically performed manually using a combination of 1H and 129Xe scans. This process can be accelerated by using convolutional neural networks (CNNs) that segment only the 129Xe scan. However, this task is complicated by peripheral ventilation defects, which require training CNNs with large, diverse datasets. Here, we accelerate the creation of training data by synthesizing 129Xe images with a variety of defects, and we use these data to train a 3D model that provides thoracic cavity segmentation from 129Xe ventilation MRI alone.

Materials and methods
Training and testing data consisted of 22 and 33 3D 129Xe ventilation images, respectively. The training data were expanded to 484 images using template-based augmentation, and an additional 298 images were synthesized using the Pix2Pix model. These data were used to train both a 2D U-net and a 3D V-net-based segmentation model using a combination of Dice-Focal and anatomical-constraint loss functions. Segmentation performance was compared using Dice coefficients calculated over the entire lung and within ventilation defects.

Results
The performance of both the U-net and the 3D model was improved by including synthetic training data. The 3D models performed significantly better than the U-net, and the 3D model trained with synthetic 129Xe images exhibited the highest overall Dice score of 0.929. Moreover, adding synthetic training data improved the Dice score in ventilation-defect regions from 0.545 to 0.588 for the U-net and from 0.739 to 0.765 for the 3D model.

Conclusion
It is feasible to obtain high-quality segmentations from the 129Xe scan alone using 3D models trained with additional synthetic images.
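To make the evaluation metric concrete, the sketch below shows how a Dice coefficient could be computed over the whole lung and restricted to ventilation-defect regions, as reported above. This is a minimal NumPy sketch under assumed inputs, not the authors' code: the arrays pred_mask, true_mask, and defect_mask are hypothetical binary 3D volumes standing in for the predicted cavity mask, the manual (ground-truth) cavity mask, and the defect regions.

import numpy as np

def dice(pred, truth, eps=1e-8):
    # Dice coefficient between two binary 3D masks:
    # 2 * |pred AND truth| / (|pred| + |truth|)
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy stand-ins for the real volumes (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
true_mask = rng.random((64, 64, 64)) > 0.5
pred_mask = true_mask ^ (rng.random((64, 64, 64)) > 0.95)  # ~5% disagreement
defect_mask = rng.random((64, 64, 64)) > 0.9               # sparse defect voxels

# Whole-lung Dice, as in the overall scores (e.g. 0.929 for the 3D model).
dice_whole = dice(pred_mask, true_mask)

# Dice restricted to ventilation-defect regions, as in the defect-region
# scores (e.g. 0.739 to 0.765 for the 3D model with synthetic data).
dice_defect = dice(pred_mask & defect_mask, true_mask & defect_mask)

print(f"Dice (whole lung):     {dice_whole:.3f}")
print(f"Dice (defect regions): {dice_defect:.3f}")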
