Abstract

Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning that reduces human effort and bias. Deep convolutional neural networks (DCNNs) have shown great success in many medical image segmentation applications, but handling large 3D images remains challenging for optimal results. The purpose of this study is to develop a novel DCNN method for thoracic OAR segmentation using cropped 3D images. To segment the five organs (left and right lungs, heart, esophagus, and spinal cord) from thoracic CT scans, preprocessing was first performed to unify the voxel spacing and intensity. A 3D U-Net was then trained on the resampled thoracic images to localize each organ, after which the original images were cropped to contain a single organ and fed to an individual segmentation network for that organ. The resulting segmentation maps were merged to produce the final result. The network structures were optimized for each step, as were the training and testing strategies, and a novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge and tested on the challenge testing dataset as well as a private dataset. The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using the newly developed testing augmentation approach. On average, it outperformed human experts in terms of Dice scores (spinal cord: 0.893±0.044, right lung: 0.972±0.021, left lung: 0.979±0.008, heart: 0.925±0.015, esophagus: 0.726±0.094), mean surface distance (spinal cord: 0.662±0.248mm, right lung: 0.933±0.574mm, left lung: 0.586±0.285mm, heart: 2.297±0.492mm, esophagus: 2.341±2.380mm), and 95% Hausdorff distance (spinal cord: 1.893±0.627mm, right lung: 3.958±2.845mm, left lung: 2.103±0.938mm, heart: 6.570±1.501mm, esophagus: 8.714±10.588mm). It also performed well on the private dataset and reduced the manual editing time after automatic segmentation to 7.5 minutes per patient. The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. Its improved accuracy and reduced cost for OAR segmentation support the eventual clinical adoption of deep learning in radiation treatment planning.
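Since only the abstract is available here, the following Python sketch is purely illustrative of the "preprocess, localize, crop, segment per organ, merge" pipeline it describes, with test-time augmentation by repeated cropping. All function names, the intensity window, the jitter range, and the dummy networks are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch of a two-stage crop-then-segment OAR pipeline.
# The "networks" below are placeholders so the example runs end to end;
# a real implementation would call trained 3D U-Nets instead.
import numpy as np

ORGANS = ["left_lung", "right_lung", "heart", "esophagus", "spinal_cord"]

def resample_and_normalize(volume):
    """Placeholder preprocessing: clip CT intensities to an assumed HU window
    and scale to [0, 1]. Voxel resampling is omitted for brevity."""
    v = np.clip(volume, -1000.0, 1000.0)
    return (v + 1000.0) / 2000.0

def localize_organs(volume):
    """Stand-in for the first (localization) 3D U-Net: returns a coarse
    bounding box per organ. Here we simply return the full extent."""
    full = tuple(slice(0, s) for s in volume.shape)
    return {organ: full for organ in ORGANS}

def segment_organ(cropped, organ):
    """Stand-in for the per-organ 3D U-Net: returns a binary mask of the crop."""
    return (cropped > 0.5).astype(np.float32)

def predict_with_crop_augmentation(volume, box, organ, n_iter=3, rng=None):
    """Test-time augmentation by repeated cropping: jitter the crop window,
    segment each crop, and average the predictions (one plausible reading of
    the 'multiple iterations of image cropping' mentioned in the abstract)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    votes = np.zeros(volume.shape, dtype=np.float32)
    counts = np.zeros(volume.shape, dtype=np.float32)
    for _ in range(n_iter):
        jittered = tuple(
            slice(max(s.start + int(rng.integers(-2, 3)), 0),
                  min(s.stop + int(rng.integers(-2, 3)), dim))
            for s, dim in zip(box, volume.shape))
        votes[jittered] += segment_organ(volume[jittered], organ)
        counts[jittered] += 1.0
    return (votes / np.maximum(counts, 1.0)) > 0.5

def segment_oars(ct_volume):
    """Full pipeline: preprocess, localize, crop + segment each organ, merge."""
    pre = resample_and_normalize(ct_volume)
    boxes = localize_organs(pre)
    return {organ: predict_with_crop_augmentation(pre, boxes[organ], organ)
            for organ in ORGANS}

if __name__ == "__main__":
    masks = segment_oars(np.random.uniform(-1000, 1000, size=(32, 64, 64)))
    print({organ: int(mask.sum()) for organ, mask in masks.items()})
```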
