Purpose

This paper investigates vision-based autonomous laparoscope control, which can serve as a primary function in semi-autonomous minimally invasive surgical robot systems. Surgical gesture recognition is a fundamental component for enabling intelligent, context-aware assistance in the autonomous laparoscope control task. Despite significant advances in recent years, the efficient integration of surgical gesture recognition and autonomous laparoscope control algorithms in robot-assisted minimally invasive surgical systems remains an open and challenging topic.

Design/methodology/approach

The authors present a novel surgeon-in-the-loop, semi-autonomous robot-assisted minimally invasive surgery framework that integrates surgical gesture recognition and autonomous laparoscope control. Specifically, they use a transformer-based deep convolutional neural network to recognize the current surgical gesture. They then propose an autonomous laparoscope control model that provides an optimal field of view consistent with the surgeon's intraoperative preferences.

Findings

The effectiveness of the surgical gesture recognition method is demonstrated on the public JIGSAWS and Cholec80 data sets, where it outperforms comparable state-of-the-art methods. Furthermore, the authors validate the proposed semi-autonomous framework on the developed HUAQUE surgical robot platforms.

Originality/value

This study demonstrates the feasibility of cognitive-assistant human–robot shared control for semi-autonomous robot-assisted minimally invasive surgery, providing a reference for further surgical intelligence in computer-assisted intervention systems.
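As a rough illustration of the kind of architecture the Design/methodology/approach section describes, the sketch below pairs a per-frame CNN feature extractor with a transformer encoder for clip-level gesture classification. This is a minimal, hypothetical sketch in PyTorch, not the authors' published model: the ResNet-18 backbone, clip length, gesture count and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a transformer-based gesture recognizer of the kind the
# abstract describes: a CNN backbone extracts per-frame features, and a
# transformer encoder models temporal context before classification.
# All names, dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class GestureRecognizer(nn.Module):
    def __init__(self, num_gestures: int = 10, d_model: int = 512,
                 nhead: int = 8, num_layers: int = 2):
        super().__init__()
        # ResNet-18 as a per-frame feature extractor (512-d features)
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer,
                                              num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_gestures)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W) laparoscopic video frames
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        feats = self.temporal(feats)               # temporal self-attention
        return self.classifier(feats.mean(dim=1))  # clip-level gesture logits


# Example: classify two 16-frame clips into one of 10 surgical gestures
model = GestureRecognizer()
logits = model(torch.randn(2, 16, 3, 224, 224))  # -> shape (2, 10)
```

In a deployed pipeline, the predicted gesture would presumably be fed to the laparoscope control model as the context signal for selecting the field of view; that coupling is specific to the paper and is not reproduced here.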