Abstract

Radiotherapy is a mainstay of clinical cancer treatment. A high-quality radiotherapy treatment plan relies on a well-designed dose distribution map, which is traditionally produced through repeated manual trial and error by experienced experts. To accelerate radiotherapy planning, many automatic dose distribution prediction methods have been proposed recently and have achieved considerable success. Nevertheless, these methods require auxiliary inputs beyond CT images, such as segmentation masks of the tumor and organs at risk (OARs), which limits their prediction efficiency and applicability. To address this issue, we propose TransDose, a novel approach to dose distribution prediction that takes CT images as its only input. Specifically, instead of providing prior anatomical information through segmentation masks, we employ a super-pixel-based graph convolutional network (GCN) to extract category-specific features, thereby supplying the network with the necessary anatomical knowledge. In addition, considering the strong continuity between adjacent CT slices and between adjacent dose maps, we embed a Transformer into the backbone and exploit its long-range sequence modeling ability to endow the input features with inter-slice continuity information. To our knowledge, this is the first network specifically designed to predict dose distributions from CT images alone without discarding the necessary anatomical structure. Finally, we evaluate our model on two real-world datasets; extensive experiments demonstrate the generalizability and advantages of our method.
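
To make the two ideas in the abstract concrete, the following is a minimal, self-contained PyTorch sketch of (1) a simple graph-convolution branch over super-pixel nodes that stands in for the anatomical prior, and (2) a Transformer encoder over per-slice features that models inter-slice continuity. All module names, feature sizes, and the fusion strategy here are illustrative assumptions, not the authors' actual TransDose architecture.

```python
# Illustrative sketch only: not the authors' TransDose implementation.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj_norm):
        # node_feats: (num_nodes, in_dim); adj_norm: normalized adjacency (num_nodes, num_nodes)
        return torch.relu(adj_norm @ self.linear(node_feats))


class ToyDosePredictor(nn.Module):
    """Toy model: CNN stem per slice + Transformer across slices + GCN over super-pixels."""
    def __init__(self, slice_feat_dim=256, node_feat_dim=64):
        super().__init__()
        # Per-slice CNN feature extractor (placeholder backbone)
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_slice_feat = nn.Linear(64, slice_feat_dim)
        # Transformer over the slice axis: injects inter-slice continuity
        enc_layer = nn.TransformerEncoderLayer(
            d_model=slice_feat_dim, nhead=8, batch_first=True)
        self.slice_transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        # GCN branch over super-pixel nodes: surrogate for the anatomical prior
        self.gcn = SimpleGCNLayer(node_feat_dim, slice_feat_dim)
        # Head: one coarse dose value per slice (placeholder for a dose-map decoder)
        self.head = nn.Linear(2 * slice_feat_dim, 1)

    def forward(self, ct_slices, node_feats, adj_norm):
        # ct_slices: (batch, num_slices, 1, H, W)
        b, s, c, h, w = ct_slices.shape
        x = self.stem(ct_slices.reshape(b * s, c, h, w)).flatten(1)   # (b*s, 64)
        x = self.to_slice_feat(x).reshape(b, s, -1)                   # (b, s, d)
        x = self.slice_transformer(x)                                 # inter-slice context
        # Pool super-pixel node features into one anatomical context vector
        g = self.gcn(node_feats, adj_norm).mean(dim=0)                # (d,)
        g = g.unsqueeze(0).unsqueeze(0).expand(b, s, -1)              # broadcast per slice
        return self.head(torch.cat([x, g], dim=-1))                   # (b, s, 1)


if __name__ == "__main__":
    model = ToyDosePredictor()
    ct = torch.randn(2, 16, 1, 64, 64)   # dummy CT volume: 2 patients, 16 slices
    nodes = torch.randn(50, 64)          # 50 super-pixel nodes with 64-d features
    adj = torch.eye(50)                  # trivial normalized adjacency for the demo
    print(model(ct, nodes, adj).shape)   # torch.Size([2, 16, 1])
```

In this sketch the GCN output is simply pooled and concatenated with the slice features; the actual fusion in the paper may differ.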
