Abstract

Cone-beam computed tomography (CBCT) often has suboptimal image quality compared with standard fan-beam CT because of its increased sensitivity to scatter and motion artifacts. In radiation treatment of cancers, because image acquisition is fully incorporated into the treatment station, CBCT plays a significant role in treatment plan re-evaluation and adaptation in adaptive radiation therapy, but its usefulness is limited by image quality. In this paper, we propose a deep convolutional neural network (DCNN) based method for head and neck (HN) CBCT enhancement, in which the network learns to transform low-quality CBCT images into high-quality images.

CT and CBCT image pairs from 50 HN cancer patients who received radiotherapy were used for model training, validation and testing. An additional 22 raw CBCT images from another vendor were also used for testing. Pre-processing procedures included rigid registration, center-cropping, and Otsu's thresholding to segment the anatomical region. Data quality control was applied before training a 2D U-Net to reduce the adverse impact of unmatched pairs. A patch-based strategy was employed during training, since randomly extracted patches reduce the receptive field and thus the probability of overfitting to the global anatomy. We explored the impact of patch size by varying it over 64 × 64, 128 × 128, 192 × 192 and 256 × 256. After fixing the optimal patch size, a Pix2Pix network combining L1 loss and conditional generative adversarial network (GAN) loss was also trained.

The U-Net trained with a patch size of 128 × 128 reached the best performance, improving the mean absolute error (MAE) from 57.91 to 34.78 Hounsfield units (HU), the peak signal-to-noise ratio (PSNR) from 30.56 to 32.84 dB, and the structural similarity (SSIM) from 0.7546 to 0.8455 relative to the original CBCT. With the addition of the conditional GAN loss, the trained Pix2Pix network better preserved image details, especially on the external testing set.

In this work, we developed a DCNN-based method to improve CBCT image quality. Because it operates purely in the image domain, the method offers high adaptability, vendor neutrality and low cost. Our data quality control pipeline automatically removed unmatched CT and CBCT image pairs from the training set. The proposed DCNN-based method with the patch-based training strategy showed promising results for fast CBCT image enhancement.
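As a concrete illustration of the pre-processing described above, the short Python sketch below centre-crops a slice and segments the anatomical region with Otsu's threshold. It is a minimal sketch assuming NumPy and scikit-image; the function names, the crop size of 256, and the background fill value are illustrative choices, not details taken from the paper.

import numpy as np
from skimage.filters import threshold_otsu

def center_crop(slice_2d, size=256):
    # Crop a size x size window around the slice centre.
    h, w = slice_2d.shape
    top, left = (h - size) // 2, (w - size) // 2
    return slice_2d[top:top + size, left:left + size]

def anatomy_mask(cbct_slice):
    # Otsu's method picks the intensity threshold that best separates the
    # dark background (air) from the brighter patient anatomy.
    return cbct_slice > threshold_otsu(cbct_slice)

# Usage on a rigidly registered CBCT/CT slice pair (hypothetical arrays):
# cbct, ct = center_crop(cbct), center_crop(ct)
# mask = anatomy_mask(cbct)
# cbct = np.where(mask, cbct, cbct.min())   # suppress out-of-body voxels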
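The patch-based training strategy can be sketched in the same spirit: because the CBCT and CT slices are rigidly aligned, the same randomly chosen window yields an input/target patch pair. The sampler below is an assumed implementation for illustration only; the number of patches per slice and the random-number handling are not specified in the paper.

import numpy as np

def sample_paired_patches(cbct, ct, patch_size=128, n_patches=8, rng=None):
    # Draw co-located patches from an aligned CBCT/CT slice pair.
    rng = rng if rng is not None else np.random.default_rng()
    h, w = cbct.shape
    pairs = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        window = (slice(top, top + patch_size), slice(left, left + patch_size))
        pairs.append((cbct[window], ct[window]))  # (network input, target)
    return pairs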
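Finally, the reported metrics (MAE in HU, PSNR in dB, SSIM) can be reproduced with scikit-image as sketched below. The data_range argument required by PSNR and SSIM is an assumed value for demonstration, not one stated in the paper.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred_hu, ct_hu, data_range=2000.0):
    # Compare an enhanced CBCT (in HU) against the registered planning CT.
    mae = float(np.mean(np.abs(pred_hu - ct_hu)))
    psnr = peak_signal_noise_ratio(ct_hu, pred_hu, data_range=data_range)
    ssim = structural_similarity(ct_hu, pred_hu, data_range=data_range)
    return {"MAE_HU": mae, "PSNR_dB": psnr, "SSIM": ssim}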
