Abstract

Semantic segmentation of anatomical structures in laparoscopic videos is a crucial task for enabling new computer-assisted systems that support surgeons during surgery. However, it is a difficult task due to artifacts and the similar visual characteristics of anatomical structures in laparoscopic videos. Recently, deep learning algorithms have shown promising results for the segmentation of laparoscopic instruments. However, due to the lack of large public datasets for semantic segmentation of anatomical structures, there are only a few studies on this task. In this work, we evaluate the performance of five networks, namely U-Net, U-Net++, DynUNet, UNETR, and DeepLabV3+, for the segmentation of laparoscopic cholecystectomy images from the recently released CholecSeg8k dataset. To the best of our knowledge, this is the first benchmark performed on this dataset. Training was performed with the Dice loss. The networks were evaluated on the segmentation of 8 anatomical structures and instruments, and performance was quantified through the Dice coefficient, intersection over union, recall, and precision. Apart from the U-Net, all networks obtained scores similar to each other, with the U-Net++ achieving the best overall score with a mean Dice value of 0.62. Overall, the results show that there is still room for improvement in the segmentation of anatomical structures from laparoscopic videos.

Clinical Relevance— The results of this study show the potential of deep neural networks for the segmentation of anatomical structures in laparoscopic images, which can later be incorporated into computer-aided systems for surgeons.
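As a minimal sketch of the evaluation metrics named in the abstract, the Dice coefficient and intersection over union (IoU) for a single binary class mask can be computed as below. The function names and the small smoothing term are illustrative assumptions, not taken from the paper's code; the Dice loss used for training is commonly defined as 1 minus this Dice coefficient.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks: prediction covers 4 pixels, ground truth 3, overlap 3
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

print(dice_coefficient(pred, target))  # 2*3/(4+3) ≈ 0.857
print(iou(pred, target))               # 3/4 = 0.75
```

In a multi-class setting such as CholecSeg8k's 8 evaluated classes, these metrics would be computed per class on the one-hot masks and then averaged to obtain the mean Dice reported in the results.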

