Abstract

The segmentation of continuum robots in medical images is of interest for analyzing surgical procedures or for controlling the robots. However, automatically segmenting continuous and flexible shapes is not an easy task. On the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, deep-learning techniques have shown interesting capabilities but require large numbers of manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images that requires no prior on the instrument's visual appearance and no manual annotation of images. The method combines the kinematic and differential kinematic models of the robot with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, from optical flow, and from robot encoders is optimized using particle swarm optimization, providing estimated pose parameters of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both on benchtop acquisitions and on an in vivo video. The results show the ability of the technique to correctly segment the instruments without prior appearance information, even in challenging conditions. The obtained segmentation can be used for several applications, for instance providing automatic labels for machine learning techniques.
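The abstract describes optimizing a cost function over pose parameters with particle swarm optimization. The sketch below shows a minimal global-best PSO loop on a toy quadratic cost; in the paper the cost would instead aggregate image, optical-flow, and encoder residuals for a candidate instrument pose. The function `pso` and its hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in cost: in the paper this would aggregate image, optical-flow,
# and encoder terms evaluated at the candidate pose parameters.
best, best_f = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

PSO is well suited here because the cost is evaluated by rendering and comparing images, so no gradients are available.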

Highlights

  • Continuum robots, unlike industrial robots, do not present a succession of joints and rigid links

  • In order to relate the robot differential kinematics to the optical flow F, we introduce the notion of virtual optical flow maps

  • The consistency check brought higher Precision (p < 0.01) and Intersection over Union (IoU) (p < 0.05) values. These results show that if the image selection procedure fails, the consistency check helps filter out low-quality optimization results
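The highlights above report Precision and Intersection over Union (IoU) on segmentation masks. As a reference, these two metrics can be computed from binary masks as sketched below; the function name `precision_iou` is our own, not from the paper.

```python
import numpy as np

def precision_iou(pred, gt):
    """Precision and Intersection over Union for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # true-positive pixels
    precision = tp / max(pred.sum(), 1)        # fraction of predicted pixels that are correct
    union = np.logical_or(pred, gt).sum()
    iou = tp / max(union, 1)                   # overlap relative to the combined area
    return precision, iou

# Small worked example: two 6-pixel masks overlapping on 4 pixels.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
gt = np.zeros((4, 4), dtype=bool);   gt[1:3, 0:3] = True
p, i = precision_iou(pred, gt)  # tp = 4, precision = 4/6, union = 8, IoU = 0.5
```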


Summary

INTRODUCTION

Continuum robots, contrarily to industrial robots, do not present a succession of joints and rigid links. In order to detect instruments in endoscopic images using marker-less techniques, several approaches make use of color information from the instrument (Doignon et al., 2005), or restrict the search space using constraints related to the medical setup, such as a rigid instrument passing through a trocar in laparoscopy (Voros et al., 2007). Those approaches are not directly applicable to continuum robots, and/or require human intervention for an initialization step (Pezzementi et al., 2009). In learning-based approaches, a training set composed of endoscopic images is manually labeled by an expert; an algorithm then learns the links between the labels and some visual features and generalizes them to other surgeries. Such approaches have shown promising results for segmentation as well as for 3D pose estimation (Allan et al., 2018).

Continuum Robot Kinematics
DIFFERENTIAL KINEMATICS AND OPTICAL FLOW
Optical Flow
Virtual Optical Flow Rendering
OPTIMIZATION-BASED AUTOMATIC LABELING
Optimization
After Optimization
System and Images
Implementation Details
Metrics and Evaluation
Qualitative Results
Validation of the Cost Function
Optimization Results
DISCUSSION AND CONCLUSION
ETHICS STATEMENT
