Abstract

Urethral injury (UI) is a significant complication of transanal total mesorectal excision (TaTME). Identifying the prostate during TaTME is important for preventing UI, and intraoperative image navigation could be useful in this regard. This study aimed to develop a deep learning model for real-time automatic prostate segmentation in intraoperative video during TaTME and to evaluate the model's performance. This was a single-institution retrospective feasibility study. Semantic segmentation of the prostate area was performed with a convolutional neural network (CNN); DeepLab v3+ was used as the CNN model. The Dice coefficient (DC), computed from the overlap between the ground-truth and predicted areas, served as the evaluation metric. Five hundred prostate images were randomly extracted from 17 TaTME videos, and the prostate area was manually annotated in each image. In fivefold cross-validation, the average DC was 0.71 ± 0.04, with a maximum of 0.77. In addition, the model ran at 11 fps, which is acceptable real-time performance. To the best of the authors' knowledge, this is the first step toward computer-assisted TaTME, and the results suggest that the proposed deep learning model can be used for real-time automatic prostate segmentation. Future work will improve the model's accuracy and performance for practical use and verify its ability to reduce the risk of UI during TaTME.
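The Dice coefficient used for evaluation can be sketched as follows. This is a generic NumPy illustration of the standard definition, 2|A∩B| / (|A| + |B|), for binary segmentation masks, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    denom = gt.sum() + pred.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(gt, pred).sum() / denom

# Toy 4x4 masks: the predicted area overlaps the ground truth in 3 pixels
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(gt, pred), 3))  # 2*3 / (4 + 3) ≈ 0.857
```

A DC of 1.0 indicates perfect overlap between the predicted and ground-truth prostate areas, and 0.0 indicates no overlap, so the reported average of 0.71 reflects substantial but imperfect agreement.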
