Abstract

Adaptive radiation therapy (ART) aims to deliver radiotherapy accurately and precisely in the presence of anatomical changes, and the synthesis of computed tomography (CT) images from cone-beam CT (CBCT) is an important step in this process. However, because of severe motion artifacts, CBCT-to-CT synthesis remains a challenging task for breast-cancer ART. Existing synthesis methods usually ignore motion artifacts, which limits their performance on chest CBCT images. In this paper, we decompose CBCT-to-CT synthesis into two subtasks, artifact reduction and intensity correction, and introduce breath-hold CBCT images to guide both. To achieve superior synthesis performance, we propose a multimodal unsupervised representation disentanglement (MURD) learning framework that disentangles content, style, and artifact representations from CBCT and CT images in the latent space. MURD synthesizes different forms of images by recombining the disentangled representations. We also propose a multipath consistency loss to improve structural consistency during synthesis and a multidomain generator to improve synthesis performance. Experiments on our breast-cancer dataset show that MURD achieves strong performance, with a mean absolute error of 55.23±9.94 HU, a structural similarity index measure of 0.721±0.042, and a peak signal-to-noise ratio of 28.26±1.93 dB on synthetic CT. The results show that, compared to state-of-the-art unsupervised synthesis methods, our method produces synthetic CT images that are better in both accuracy and visual quality.
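To make the recombination idea concrete, below is a minimal, illustrative PyTorch sketch of disentangled synthesis. It is not the authors' implementation: the encoder and generator architectures, the zeroed artifact code used to suppress motion artifacts, and the L1-based consistency term are all assumptions chosen only to show the mechanism described in the abstract.

```python
# Illustrative sketch (not the paper's code): three encoders split an image
# into content, style, and artifact codes; a generator recombines a chosen
# set of codes to synthesize a CT-style, artifact-free image. All module
# names, layer sizes, and loss forms here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny convolutional encoder producing one latent code map."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_dim, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a concatenation of content, style, and artifact codes."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(3 * code_dim, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, content, style, artifact):
        return self.net(torch.cat([content, style, artifact], dim=1))

enc_content, enc_style, enc_artifact = Encoder(), Encoder(), Encoder()
gen = Generator()

cbct = torch.randn(1, 1, 128, 128)  # motion-corrupted CBCT slice (dummy data)
ct = torch.randn(1, 1, 128, 128)    # planning CT slice, unpaired (dummy data)

# Disentangle, then recombine: content from CBCT, style from CT, and a
# zeroed artifact code (i.e., the artifact is "removed") yield a synthetic CT.
c = enc_content(cbct)
s = enc_style(ct)
a = torch.zeros_like(enc_artifact(cbct))
synthetic_ct = gen(c, s, a)

# One plausible flavor of multipath consistency: two synthesis paths that
# should agree on anatomical structure, encouraged by an L1 penalty.
path_a = gen(c, s, a)
path_b = gen(enc_content(synthetic_ct), s, a)
consistency_loss = F.l1_loss(path_a, path_b)
```

In this sketch, swapping which codes feed the generator is what lets one network family produce "different forms of images" from the same latent factors; the consistency term penalizes structural drift between alternative synthesis paths.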
