Abstract

Radiotherapy is a mainstay treatment for many patients with cancer. During radiotherapy planning, it is essential to generate a clinically acceptable dose distribution map. In practice, dosimetrists must adjust the plan iteratively by trial and error according to their experience, making the traditional workflow time-consuming and subjective. Although several deep learning-based dose prediction models have been proposed, they typically require additional handcrafted inputs, such as planning target volume (PTV) and organs-at-risk (OAR) segmentation maps. To overcome these limitations, we propose a multi-task attention adversarial network (MtAA-NET) that completes dose planning automatically using only CT images. Specifically, our framework consists of a main task that predicts the dose distribution map and an auxiliary segmentation task that provides additional anatomical information about the PTV and OARs for the main task, implemented with a shared encoder and two task-specific decoders. To integrate dosimetric and anatomical features effectively, we introduce an attention-embedded cross-task feature fusion (CtFF) module that fuses features from the two tasks in a deeply supervised manner. Notably, the generated attention maps reveal where the model attends within the PTV region of the intermediate features, offering an interpretable assessment of the quality of the predicted dose maps. Experimental results on an in-house cervical cancer dataset and a public head and neck cancer dataset demonstrate the superiority of the proposed method over state-of-the-art methods.
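
To make the described architecture concrete (a shared encoder, a dose-prediction decoder, an auxiliary segmentation decoder, and an attention-based cross-task fusion step), the following PyTorch sketch illustrates one possible arrangement. It is not the authors' implementation: the module names (SharedEncoder-style blocks, CtFFBlock), layer widths, and the single fusion scale are assumptions, and the adversarial discriminator and deep-supervision losses are omitted for brevity.

```python
# Minimal sketch of a shared-encoder, two-decoder multi-task network with an
# attention-based cross-task feature fusion (CtFF-like) block. All layer sizes
# and module names are illustrative assumptions, not the published MtAA-NET.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class CtFFBlock(nn.Module):
    """Fuse segmentation features into the dose branch via a spatial attention map."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.fuse = conv_block(channels * 2, channels)

    def forward(self, dose_feat, seg_feat):
        attn_map = self.attn(seg_feat)           # where the seg branch focuses (e.g. PTV)
        weighted = seg_feat * attn_map           # emphasize anatomically salient regions
        fused = self.fuse(torch.cat([dose_feat, weighted], dim=1))
        return fused, attn_map                   # attention map can be inspected for QA


class MultiTaskDoseNet(nn.Module):
    def __init__(self, n_struct_classes=4):
        super().__init__()
        # Shared encoder over the CT image
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        # Auxiliary segmentation decoder (PTV + OARs)
        self.seg_up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.seg_dec = conv_block(32, 32)
        self.seg_head = nn.Conv2d(32, n_struct_classes, 1)
        # Main dose-prediction decoder with cross-task fusion
        self.dose_up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.ctff = CtFFBlock(32)
        self.dose_dec = conv_block(32, 32)
        self.dose_head = nn.Conv2d(32, 1, 1)

    def forward(self, ct):
        f1 = self.enc1(ct)
        f2 = self.enc2(self.pool(f1))
        # Segmentation branch
        s = self.seg_dec(self.seg_up(f2))
        seg_logits = self.seg_head(s)
        # Dose branch, fused with segmentation features
        d = self.dose_up(f2)
        d, attn_map = self.ctff(d, s)
        dose_map = self.dose_head(self.dose_dec(d))
        return dose_map, seg_logits, attn_map


if __name__ == "__main__":
    model = MultiTaskDoseNet()
    ct = torch.randn(2, 1, 128, 128)             # batch of single-channel CT slices
    dose_map, seg_logits, attn_map = model(ct)
    print(dose_map.shape, seg_logits.shape, attn_map.shape)
```

In a fuller version of this idea, the encoder would be deeper, CtFF-style fusion would be applied at several decoder scales with deep supervision, and an adversarial discriminator would additionally penalize unrealistic dose maps, as the abstract describes.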
