Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. Because countertraction is both technically demanding and frequently required, automating it has the potential to significantly reduce surgeons' workload. Although several methods have been proposed for automation, achieving optimal tissue visibility and tension for incision remains unrealized. We therefore propose a method for autonomous countertraction that enhances tissue surface planarity and visibility. We constructed a neural network that integrates a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. This network continuously controls the forceps position based on the tissue surface shape observed by a camera and the current forceps position. RL is conducted in a physical simulation environment, with verification experiments performed in both simulation and phantom environments. The evaluation was based on plane error, the average distance between the tissue surface and its least-squares plane, and angle error, the angle between the tissue surface normal vector and the camera's optical axis vector. The plane error decreased under all conditions in both the simulation and phantom environments, with 93.3% of cases showing a reduction in angle error. In simulations, the plane error decreased from to , and the angle error from to . In the phantom environment, the plane error decreased from to , and the angle error from to . The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction using the proposed model.
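The two evaluation metrics can be computed directly from a tissue point cloud. The sketch below is an illustrative implementation, not the authors' code: it fits the least-squares plane via SVD of the centered points, takes the mean absolute point-to-plane distance as the plane error, and measures the angle error between the fitted surface normal and an assumed camera optical axis (here the +z direction, a placeholder).

```python
import numpy as np

def plane_and_angle_error(points, optical_axis=np.array([0.0, 0.0, 1.0])):
    """Return (plane_error, angle_error_deg) for a tissue point cloud.

    points: (N, 3) array of surface points observed by the camera.
    optical_axis: camera optical axis direction (assumed +z here).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right-singular vector with the smallest singular value is the
    # normal of the least-squares plane through the centroid.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Plane error: average absolute distance of points to the fitted plane.
    plane_error = np.abs(centered @ normal).mean()
    # Angle error: angle between surface normal and the optical axis
    # (sign of the normal is arbitrary, so take the absolute cosine).
    cos_a = abs(normal @ optical_axis) / (
        np.linalg.norm(normal) * np.linalg.norm(optical_axis)
    )
    angle_error = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return plane_error, angle_error
```

For a perfectly flat patch facing the camera, both errors go to zero; stretching the tissue into such a configuration is exactly what the proposed controller rewards.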