Abstract

Surgical tool segmentation is a challenging and crucial task for computer- and robot-assisted surgery. Supervised learning approaches have shown great success for this task, but they require large amounts of paired training data. Unpaired image-to-image translation (I2I) techniques based on Generative Adversarial Networks (GANs), such as CycleGAN and dualGAN, have been proposed to avoid the requirement of paired data and have been employed for surgical tool segmentation; they remove the need to annotate images for each domain change. Rather than applying these techniques directly to the segmentation task, we propose new GAN-based methods for unpaired I2I that embed a constraint specific to segmentation, namely that each pixel of the input image belongs either to the background or to the surgical tool. This constraint allows our methods to simplify the architectures of existing unpaired I2I approaches, reducing the number of generators and discriminators. Compared with dualGAN, the proposed strategies train faster without reducing segmentation accuracy. Moreover, we show that, by using textured tool images as annotated samples to train the discriminators, unpaired I2I methods (including ours) can achieve simultaneous tool segmentation and image repair (such as reflection removal and tool inpainting). The proposed strategies are validated on segmentation of a flexible tool and on in vivo images from the EndoVis dataset.
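The sketch below is a minimal illustration, not the authors' implementation, of how the stated pixel constraint can shrink an unpaired I2I architecture to a single generator and a single discriminator: the generator predicts a soft tool/background mask, and the "translated" image is composited from that mask before being scored by the discriminator. The names MaskGenerator, Discriminator, composite, and the tool-texture input are hypothetical and chosen for illustration only; this is written in PyTorch under those assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's released code):
# one generator predicts a per-pixel mask, one discriminator scores the
# composited image against annotated textured-tool samples.
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Predicts a soft mask in [0, 1]; each pixel is tool (~1) or background (~0)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a composited image resembles an annotated textured-tool sample."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite(image, mask, tool_texture):
    # The segmentation constraint in action: every pixel is either tool or
    # background, so the translated image is a mask-weighted blend of the two.
    return mask * tool_texture + (1 - mask) * image

# Usage sketch with random tensors standing in for real data:
gen, disc = MaskGenerator(), Discriminator()
frame = torch.rand(1, 3, 64, 64)         # unlabeled surgical frame
texture = torch.rand(1, 3, 64, 64)       # hypothetical textured-tool sample
mask = gen(frame)                        # soft segmentation mask
fake = composite(frame, mask, texture)   # translated image
score = disc(fake)                       # adversarial patch scores
```

Under these assumptions, the mask itself is the segmentation output, which is why one generator/discriminator pair can suffice where CycleGAN and dualGAN each train two of each.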
