Abstract

The segmentation of Organs At Risk (OAR) in Computed Tomography (CT) images is an essential part of the radiotherapy planning phase, as it helps avoid adverse effects of cancer radiotherapy treatment. Accurate segmentation is a tedious task in the head and neck region because of the large number of small, sensitive organs and the low contrast of CT images. Deep learning-based automatic contouring algorithms can ease this task even when the organs have irregular shapes and large size variations. This paper proposes a fully automatic, self-supervised, deep learning-based 3D Residual UNet architecture with CBAM (Convolutional Block Attention Module) for organ segmentation in head and neck CT images. The Model Genesis structure and image context restoration techniques are used for self-supervision, which helps the network learn image features from unlabeled data and thus mitigates the scarcity of annotated medical data for deep networks. A new loss function is applied for training by integrating Focal loss, Tversky loss, and Cross-entropy loss. The proposed model outperforms state-of-the-art methods in terms of the Dice similarity coefficient when segmenting the organs. Our self-supervised model achieves a 4% increase in the Dice score for the Chiasm, a small organ that appears in only a few CT slices. The proposed model exhibits better accuracy for 5 out of 7 OARs than recent state-of-the-art models, and it can simultaneously segment all seven organs in an average time of 0.02 s. The source code of this work is available at https://github.com/seeniafrancis/SABOSNet.
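The abstract states only that the training loss integrates Focal loss, Tversky loss, and Cross-entropy loss. The sketch below is one plausible PyTorch formulation of such a combined loss, not the authors' released code; the term weights (lambdas), the Tversky alpha/beta, the focal gamma, and the softmax-over-channels layout are all assumptions made for illustration.

```python
# Hedged sketch of a combined Focal + Tversky + Cross-entropy loss for
# multi-class 3D segmentation. All hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F


def combined_loss(logits, target_onehot, alpha=0.7, beta=0.3,
                  gamma=2.0, lambdas=(1.0, 1.0, 1.0), eps=1e-6):
    """logits: (N, C, D, H, W) raw scores; target_onehot: same shape, one-hot labels."""
    probs = torch.softmax(logits, dim=1)

    # Cross-entropy: per-voxel negative log-likelihood, averaged.
    ce = -(target_onehot * torch.log(probs + eps)).sum(dim=1).mean()

    # Focal loss: down-weights well-classified voxels by (1 - p)^gamma.
    focal = -((1.0 - probs) ** gamma * target_onehot
              * torch.log(probs + eps)).sum(dim=1).mean()

    # Tversky loss: generalizes Dice with separate false-positive / false-negative penalties.
    dims = (0, 2, 3, 4)  # sum over batch and spatial dimensions, keep classes
    tp = (probs * target_onehot).sum(dim=dims)
    fp = (probs * (1.0 - target_onehot)).sum(dim=dims)
    fn = ((1.0 - probs) * target_onehot).sum(dim=dims)
    tversky = 1.0 - ((tp + eps) / (tp + alpha * fn + beta * fp + eps)).mean()

    l_focal, l_tversky, l_ce = lambdas
    return l_focal * focal + l_tversky * tversky + l_ce * ce
```

In this assumed form, the Tversky term handles the class imbalance of small organs such as the Chiasm, while the focal term concentrates the voxel-wise loss on hard examples; how the paper actually weights the three terms is not specified in the abstract.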
