Abstract
Cancer in the head and neck area is commonly treated with radiotherapy. A key step for low-risk treatment is the accurate delineation of organs at risk in the planning imagery. The success of deep learning in image segmentation has led to automated algorithms that achieve human expert performance on certain datasets. However, such algorithms require large datasets for training and fail to segment previously unseen pathologies, where human experts still succeed. As pathologies are rare and large datasets are costly to generate, we investigate the effects of reduced training data, of the batch size, and of incorporating prior knowledge.

The small-data problem is studied by training a full-volume segmentation network on reduced amounts of data from the MICCAI 2015 head and neck segmentation challenge. To improve the segmentation, we evaluate the batch size as a hyper-parameter, and we first study and then incorporate a stacked autoencoder as a shape prior into the training process.

We found that using half of the training data (12 of 25 images) results in an accuracy drop of only 3% for the segmentation of organs at risk. The batch size also turns out to be relevant for the quality of the segmentation when training with less than half of the data. By applying PCA to the autoencoder's latent space, we obtain a compact and accurate shape model, which is used as a regularizer and significantly improves the segmentation results.

Training sets as small as 12 images are enough to train accurate head and neck segmentation models. Regularization with a shape prior significantly improves segmentation performance on the full dataset. When training on fewer than 12 images, the batch size becomes relevant and models have to be trained much longer until convergence.
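The abstract does not spell out how the PCA-compressed latent space acts as a regularizer, so the following is a minimal sketch of one plausible reading, written in PyTorch. It assumes a frozen autoencoder encoder that maps a segmentation mask to a flat latent code, PCA components fitted offline on the latent codes of the training masks, and a penalty on how far a predicted mask's code lies from the PCA shape subspace. All names (`ShapePriorLoss`, `encoder`, `components`, `mean`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ShapePriorLoss(nn.Module):
    """Penalize predicted masks whose latent codes leave the PCA shape space.

    Assumes `encoder` maps a mask batch to latent codes of shape (B, d),
    `components` is the (k, d) matrix of leading PCA directions, and
    `mean` is the (d,) mean latent code, both fitted on training masks.
    """

    def __init__(self, encoder: nn.Module, components: torch.Tensor,
                 mean: torch.Tensor, weight: float = 0.1):
        super().__init__()
        self.encoder = encoder.eval()  # frozen pretrained autoencoder encoder
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.register_buffer("components", components)  # (k, d) PCA basis
        self.register_buffer("mean", mean)               # (d,) latent mean
        self.weight = weight

    def forward(self, pred_mask: torch.Tensor) -> torch.Tensor:
        z = self.encoder(pred_mask)                      # (B, d) latent codes
        # Project onto the k leading PCA directions, then reconstruct.
        coeffs = (z - self.mean) @ self.components.T    # (B, k)
        z_hat = coeffs @ self.components + self.mean    # (B, d)
        # Distance to the compact shape model, added to the training loss.
        return self.weight * ((z - z_hat) ** 2).mean()


# Hypothetical usage during segmentation training (seg_net and dice_loss
# are assumed to exist elsewhere):
#   pred = seg_net(volume)
#   loss = dice_loss(pred, target) + shape_prior(pred)
```

A design note on this reading: because the encoder is frozen and the PCA basis is fixed, the penalty only pushes the segmentation network toward anatomically plausible shapes and adds no trainable parameters of its own.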