Abstract

Accurate delineation of the Gross Target Volume (GTV) and Organs at Risk (OARs) in medical images is an essential but challenging step in radiotherapy. Deep-learning based automated delineation methods, which learn from manual annotations, are currently prevalent in academic research. However, the limited resolution of medical images and the fuzzy boundaries of lesions and organs limit the precision of manual annotations. Leveraging the complementary information in multi-modal medical images, this study proposes a novel method to generate objective boundaries of the GTV and OARs. The method is inspired by image matting, which has primarily been used for 2D RGB natural images, and extends it to 3D grayscale medical images. The proposed method has the following advantages. 1) It allows flexible input modalities and assigns a weight to each modality according to its relative significance when computing information flows in the matting algorithm. 2) It computes 3D spatial information flow among voxels, which offers advantages over its 2D counterpart. 3) It has a closed-form solution that generates deterministic results. To evaluate the characteristics of the generated boundaries, patients with stage I nasopharyngeal carcinoma (NPC) were studied, with CT images and multi-modal MR images (T1, T1C, T2) aligned using deformable registration. Regions of Interest (ROIs), i.e., the GTV and the parotid gland, were used, with a rough trimap marking a small number of foreground voxels, many background voxels, and a large unknown region. The proposed algorithm leverages the connection between each voxel and its nearest neighbors in feature space to propagate opacity information. We evaluated the results both qualitatively and quantitatively. In the qualitative evaluation, experienced clinicians confirmed that the results agreed with the input data, especially in areas where boundaries were visible in most modalities (e.g., between air and tumor). In more challenging regions, where boundaries were unclear in the images, the results displayed fine-grained opacity transitions indicating each voxel's confidence of belonging to the ROI. Compared with the delineations made by clinicians, our results are usually more compact. We define a precision metric as the ratio of the matted foreground lying inside the clinicians' delineations to the entire matted foreground. With a binarization threshold of 0.4, our results scored 0.95 for the GTV and 0.92 for the parotid gland. The proposed method demonstrated satisfactory results on challenging ROIs. The objective boundaries generated by this method can benefit many aspects of the workflow, including improving delineation protocols, enhancing the consistency of manual annotations, and increasing the accuracy of deep-learning based automated delineation.
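
To make the described pipeline more concrete, the sketch below illustrates one plausible reading of the abstract: opacity propagation among nearest neighbors in a per-voxel feature space via a closed-form linear solve constrained by the trimap, followed by binarization at 0.4 and the precision metric defined above. The feature construction, the affinity weighting, and the regularization weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import NearestNeighbors


def knn_matting_3d(features, trimap, k=10, lam=100.0):
    """Propagate opacity through a KNN graph in feature space (sketch).

    features: (N, F) per-voxel feature vectors, e.g. normalized intensities
              from the registered modalities plus spatial coordinates
              (assumed feature design, possibly weighted per modality).
    trimap:   (N,) array with 1 = foreground seed, 0 = background seed,
              any other value = unknown.
    Returns a per-voxel opacity alpha in [0, 1].
    """
    n = features.shape[0]
    # k nearest neighbors in feature space (drop the self-match).
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, idx = nbrs.kneighbors(features)
    dist, idx = dist[:, 1:], idx[:, 1:]

    # Simple affinity from feature-space distance (illustrative choice).
    w = 1.0 - dist / (dist.max() + 1e-8)
    rows = np.repeat(np.arange(n), k)
    A = coo_matrix((w.ravel(), (rows, idx.ravel())), shape=(n, n))
    A = (A + A.T) / 2.0                                  # symmetrize
    L = diags(np.asarray(A.sum(axis=1)).ravel()) - A     # graph Laplacian

    # Closed-form solve: min alpha^T L alpha + lam * ||alpha - seeds||^2
    # over the known (trimap-labeled) voxels.
    known = (trimap == 0) | (trimap == 1)
    D = diags(known.astype(float))
    b = lam * (trimap == 1).astype(float)
    alpha = spsolve((L + lam * D).tocsr(), b)
    return np.clip(alpha, 0.0, 1.0)


def precision_at_threshold(alpha, clinician_mask, thr=0.4):
    """Fraction of the binarized matted foreground that lies inside the
    clinicians' delineation, i.e. the precision metric from the abstract."""
    fg = alpha >= thr
    return (fg & clinician_mask).sum() / max(fg.sum(), 1)
```

As a usage note, `alpha` can be reshaped back to the original volume grid for visual inspection of the opacity transitions, and `precision_at_threshold` compares the binarized result against a boolean mask of the clinicians' delineation.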
