Abstract

Automated cardiac segmentation from two-dimensional (2D) echocardiographic images is a crucial step toward improving clinical diagnosis. Anatomical heterogeneity and inherent noise, however, present technical challenges and reduce segmentation accuracy. The objective of this study is to propose a method for the automatic segmentation of the left ventricular endocardium, the myocardium, and the left atrium, in order to accurately determine clinical indices. Specifically, we propose using the recently introduced pixel-to-pixel Generative Adversarial Network (Pix2Pix GAN) model for accurate segmentation. To this end, we build the Pix2Pix GAN with a PatchGAN backbone as the discriminator and a U-Net as the generator. The resulting model produces precisely segmented images, thanks to the U-Net's capacity for precise segmentation and the PatchGAN's capacity for fine-grained discrimination. For the experimental validation, we use the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which consists of echocardiographic images from 500 patients in 2-chamber (2CH) and 4-chamber (4CH) views at the end-diastolic (ED) and end-systolic (ES) phases. We follow the same train-test splits as state-of-the-art studies on this dataset. Our results demonstrate that the proposed GAN-based technique improves segmentation performance on clinical and geometrical indices compared with state-of-the-art methods. More precisely, at the ED and ES phases, the mean Dice values for the left ventricular endocardium reached 0.961 and 0.930 for 2CH, and 0.959 and 0.950 for 4CH, respectively. Furthermore, the ejection fraction correlation and Mean Absolute Error obtained were 0.95 and 3.2 ml for 2CH, and 0.98 and 2.1 ml for 4CH, outperforming the state-of-the-art results.
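
The abstract describes the architecture as a U-Net generator paired with a PatchGAN discriminator. The following is a minimal PyTorch sketch of that pairing, not the authors' implementation: the layer widths, 256x256 input resolution, and four-class mask output (background plus three cardiac structures) are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the authors' code): a small U-Net generator
# that maps an echo frame to a segmentation mask, and a PatchGAN discriminator that
# scores (image, mask) pairs patch-by-patch instead of with a single scalar.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    # Tiny two-level U-Net: encoder -> bottleneck -> decoder with a skip connection.
    def __init__(self, in_ch=1, out_ch=4):  # out_ch: background + 3 cardiac structures (assumed)
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

class PatchGANDiscriminator(nn.Module):
    # Conditional discriminator: concatenates the input image and a candidate mask
    # and outputs a grid of real/fake logits, one per receptive-field patch.
    def __init__(self, in_ch=1 + 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

if __name__ == "__main__":
    gen, disc = UNetGenerator(), PatchGANDiscriminator()
    echo = torch.randn(2, 1, 256, 256)         # dummy echocardiographic frames
    fake_mask = gen(echo)                      # generator proposes a segmentation
    patch_scores = disc(echo, fake_mask)       # discriminator scores each patch
    print(fake_mask.shape, patch_scores.shape)
```

In a Pix2Pix-style setup, the generator would be trained against both the per-patch adversarial loss from the discriminator and a pixel-wise segmentation loss on the ground-truth masks.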
