Abstract

Purpose: We developed a feasible method to evaluate deformable image registration using deep learning (DL)–based segmentation.

Methods: Eighty patients with nasopharyngeal carcinoma (NPC) were enrolled in this study. Two sets of fixed and moving computed tomography images acquired from each patient were input into the DL segmentation model to generate nine anatomic regions of interest (ROIs) separately and automatically. The ROIs generated on the moving images were propagated to the fixed images using the registration transformation. The registration evaluation indexes, including the Dice similarity coefficient, derived from 60 well-registered cases were then used to develop decision-making criteria. A double-blind study was performed to test the proposed method for quality assurance (QA) of image registration on a new test set of 20 cases.

Results: The evaluation index values generated by the automated method were highly consistent with those from the manual method, while the proposed method saved about 116 min per patient on average. The QA method achieved promising registration error detection, with the following metrics across the nine ROIs: balanced accuracy, 0.946 ± 0.029; sensitivity, 0.959 ± 0.021; and specificity, 0.933 ± 0.050.

Conclusions: The proposed method could potentially evaluate deformable registration accuracy in specific anatomic regions. The preliminary NPC results show that it performs consistently with the conventional evaluation method while being more efficient.
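As an illustration of the evaluation indexes reported above, the minimal sketch below shows how a Dice similarity coefficient for a propagated ROI and the detection metrics (sensitivity, specificity, balanced accuracy) could be computed. The function names, array shapes, flag values, and the 0.8 flagging threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary ROI masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def detection_metrics(flags_pred: np.ndarray, flags_true: np.ndarray) -> dict:
    """Sensitivity, specificity, and balanced accuracy for
    registration-error detection (1 = error flagged, 0 = acceptable)."""
    pred = flags_pred.astype(bool)
    true = flags_true.astype(bool)
    tp = np.logical_and(pred, true).sum()
    tn = np.logical_and(~pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "balanced_accuracy": 0.5 * (sensitivity + specificity),
    }

if __name__ == "__main__":
    # Toy ROI on the fixed image and a propagated ROI offset by a
    # simulated registration shift (purely synthetic data).
    fixed_roi = np.zeros((32, 32, 32), dtype=bool)
    fixed_roi[8:24, 8:24, 8:24] = True
    propagated_roi = np.roll(fixed_roi, shift=2, axis=0)
    dsc = dice_coefficient(fixed_roi, propagated_roi)
    # An ROI might be flagged if its Dice falls below a per-ROI threshold
    # (0.8 here is an illustrative value, not from the paper).
    print(f"Dice = {dsc:.3f}, flagged = {dsc < 0.8}")

    # Hypothetical per-case error flags vs. manual ground truth.
    pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1])
    truth = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1])
    print(detection_metrics(pred, truth))
```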
