Abstract

Purpose: Deep learning (DL) techniques are widely used in medical imaging, in particular for segmentation. Manual segmentation of organs at risk (OARs) is time-consuming and suffers from inter- and intra-observer variability, whereas segmentation using DL has given very promising results. In this work, we present and compare the segmentation of OARs and a clinical target volume (CTV) in thoracic CT images using three DL models.

Materials and methods: We used CT images of 52 patients with breast cancer from a public dataset. Automatic segmentation of the lungs, the heart and a CTV was performed with three models based on the U-Net architecture. Three metrics were used to quantify and compare the segmentation results obtained with these models: the Dice similarity coefficient (DSC), the Jaccard coefficient (J) and the Hausdorff distance (HD).

Results: The values of DSC, J and HD are reported for each segmented organ and for each of the three models. Examples of automatic segmentations are presented and compared with the corresponding ground-truth delineations, and our values are compared with recent results obtained by other authors.

Conclusion: The performance of three DL models was evaluated for the delineation of the lungs, the heart and a CTV. This study shows that these 2D models based on the U-Net architecture can delineate organs in CT images with good performance compared with other models, and that the three models perform similarly overall. With a larger dataset of CT images, the three models should yield better results.
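For reference, the three metrics named above are standard overlap and boundary measures. The following is a minimal illustrative sketch (not the authors' code) of how they can be computed from binary segmentation masks, assuming NumPy arrays and the directed Hausdorff distance available in SciPy:

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    # Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def jaccard_coefficient(pred, gt):
    # Jaccard coefficient: |A intersect B| / |A union B|
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union

def hausdorff_distance(pred, gt):
    # Symmetric Hausdorff distance, here taken over all foreground voxel
    # coordinates of each mask (in practice it is often restricted to contour points).
    pred_pts = np.argwhere(pred.astype(bool))
    gt_pts = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])

In such a sketch, pred and gt would be the binary mask predicted by a U-Net model and the corresponding ground-truth delineation for one organ; the distances returned are in voxel units unless the coordinates are first scaled by the CT voxel spacing.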
