Abstract

3D dose distribution measurement is crucial for precise radiotherapy. Radiation-excited fluorescence imaging has potential for 3D dosimetry with high spatial resolution, but analytical reconstruction techniques require multiple fluorescence images from different view angles. Furthermore, the imaging data are contaminated by anisotropic Cherenkov light emission and statistical noise. This project aims to establish a novel deep learning-based model that predicts 3D dose distributions from a single-view 2D fluorescence image while simultaneously removing the adverse effects of Cherenkov signals and other noise sources. A total of 124 single-aperture static photon beams were delivered to an acrylic tank containing a 1 g/L quinine hemisulfate water solution, with varying aperture shapes and collimator angles. The emitted optical signals were detected by a low-cost CMOS camera for 20 seconds, and image pre-processing was performed to obtain input 2D fluorescence images with 0.3 × 0.3 mm spatial resolution. 3D back-projected dose distribution images were also calculated from the input fluorescence images. Ground-truth 3D dose distributions and 2D field map images were obtained from a clinical treatment planning system with 1.4 × 1.4 × 1.4 mm spatial resolution. The proposed deep learning-based dose reconstruction method involved three steps. First, 2D fluence map images at the bottom plane of the tank were predicted from the fluorescence images using a customized convolutional neural network (CNN). Second, the predicted fluence map images were transformed into 2D field map images on the isocenter plane by applying a perspective transformation. Finally, 2D dose distributions at a given radiological depth were calculated using a shallow CNN that takes the predicted field map images, the back-projected dose distribution images, and the radiological depth value as inputs.
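The perspective transformation in the second step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an ideal point source, a nominal source-axis distance of 1000 mm, and nearest-neighbour resampling, and the function name and parameters are hypothetical. For a divergent beam, a plane at source-to-plane distance `d_plane` maps to the isocenter plane by the magnification factor `sad / d_plane`.

```python
import numpy as np

def project_to_isocenter(fluence_map, d_plane_mm, sad_mm=1000.0):
    """Rescale a 2D fluence map from the tank-bottom plane onto the
    isocenter plane of a divergent beam (nearest-neighbour sampling).

    For a point source, a plane at distance d_plane_mm maps to the
    isocenter plane (distance sad_mm) with magnification sad_mm / d_plane_mm.
    """
    scale = sad_mm / d_plane_mm  # < 1 when the plane lies below isocenter
    h, w = fluence_map.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: each output pixel samples the bottom-plane image.
    src_y = np.round(cy + (ys - cy) / scale).astype(int)
    src_x = np.round(cx + (xs - cx) / scale).astype(int)
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(fluence_map)
    out[valid] = fluence_map[src_y[valid], src_x[valid]]
    return out
```

In practice the published method may use a full 3×3 homography (e.g. to account for camera obliquity) rather than the pure central-axis scaling shown here.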
Both CNN models were trained separately, and the 3D dose distributions were predicted by concatenating the output 2D dose distributions at various radiological depths. The proposed CNN model yielded accurate 2D field map images: the averaged Dice similarity coefficient and mean absolute error of the field maps in the test data were 92.0% ± 4.6% and 0.0132 ± 0.0113, respectively. Moreover, our deep learning-based approach was able to predict accurate 3D dose distributions from the 2D fluorescence images. The mean squared error and averaged 3D gamma passing ratio (3%/3 mm) were 9.55 ± 6.8 mGy and 86.3% ± 9.86%, respectively. The proposed deep learning-based method calculated accurate 3D dose distributions from a single-view 2D fluorescence image. Since this technique requires only a single CMOS camera image and a fluorescent material, it can be readily applied to any external-beam radiotherapy modality, including SRS/SBRT with small fields. This method is useful for acquiring 3D dose distribution data for precise dose verification within a few seconds.
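The field-map agreement metrics reported above (Dice similarity coefficient and mean absolute error) can be computed as in this minimal sketch. The function names, the binarisation threshold, and the empty-map convention are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, truth, threshold=0.5):
    """Dice similarity coefficient between two field maps after
    binarising each at `threshold` (1.0 = perfect overlap)."""
    p = np.asarray(pred) >= threshold
    t = np.asarray(truth) >= threshold
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # convention: two empty maps agree perfectly
    return 2.0 * np.logical_and(p, t).sum() / denom

def mean_absolute_error(pred, truth):
    """Mean absolute pixel-wise error between two (normalised) maps."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(truth))))
```

For example, two 10 × 10 squares that overlap over half their area give a Dice coefficient of 0.5.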
