Purpose
To develop an automatic segmentation model for the surgical marks (titanium clips) used in target volume delineation for breast cancer radiotherapy after lumpectomy.

Methods
A two-stage deep-learning model is used to segment titanium clips from CT images. The first network, Location Net, detects the region containing all clips in the CT volume; the second network, Segmentation Net, then segments the individual clips within that detected region. Ablation studies are performed to evaluate the impact of different inputs for both networks. The two-stage model is also compared with existing one-stage deep-learning methods, including U-Net, V-Net and UNETR. Segmentation accuracy is evaluated with three metrics: Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (HD95), and Average Surface Distance (ASD).

Results
The two-stage model achieves a DSC of 0.844, an HD95 of 2.008 mm, and an ASD of 0.333 mm, compared with 0.681, 2.494 mm and 0.785 mm for U-Net; 0.767, 2.331 mm and 0.497 mm for V-Net; and 0.714, 2.660 mm and 0.772 mm for UNETR. The proposed two-stage model achieves the best performance among the four models.

Conclusion
The two-stage searching strategy improves the accuracy of titanium clip detection compared with existing one-stage deep-learning models. The proposed segmentation model can facilitate delineation of the tumor bed and the subsequent target volume for breast cancer radiotherapy after lumpectomy.
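The DSC reported above is a standard overlap measure between a predicted and a reference binary mask. As a minimal sketch of how it is computed (the function name and toy masks are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally treated as perfect overlap
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3x3 masks: each has 3 foreground voxels, 2 of which overlap
a = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
b = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(round(dice_coefficient(a, b), 3))  # 2*2 / (3+3) = 0.667
```

HD95 and ASD are surface-distance metrics and are typically computed with a dedicated library rather than by hand; the sketch above covers only the overlap term.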