Abstract

Beyond reliable robotic hardware and sensing technologies, a successful transition from teleoperation to autonomous, safe minimally invasive robotic surgery on unknown Deformable Tissues (U-DTs) requires several challenges to be tackled simultaneously. These include, but are not limited to, online modeling and reliable tracking of a U-DT with integrated critical tissues, as well as the development of reliable and fast control algorithms that enable safe, accurate, and autonomous surgical procedures. To address these challenges collectively, and toward performing autonomous and safe minimally invasive robotic surgery in a confined environment, this paper presents a surgical robotic framework with (i) a real-time vision-based detection algorithm, built on a Convolutional Neural Network (CNN) architecture, that tracks the time-varying deformation of a critical tissue located within a U-DT, and (ii) a complementary data-driven adaptive constrained optimization approach that learns the deformation behavior of the U-DT while autonomously manipulating it within a time-varying constrained environment defined by the output of the CNN detection algorithm. To thoroughly evaluate the proposed framework, we used the da Vinci Research Kit (dVRK) and performed experiments on a custom-designed U-DT phantom with an arbitrary deformable vessel embedded within the phantom's body, serving as the U-DT's integrated critical space. The experiments demonstrate the performance of the proposed framework and its robustness and safety while performing an autonomous surgical procedure.
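The paper's algorithms are not reproduced in the abstract, but the two ingredients it names, an online-learned deformation model and a constrained manipulation step that respects a detected critical region, can be loosely illustrated as follows. This is a minimal sketch under strong assumptions: the Broyden-style rank-1 update, the circular keep-out region standing in for the CNN-detected vessel, and all function and variable names are illustrative choices, not the authors' method.

```python
import numpy as np

def update_jacobian(J, dx, dq, alpha=0.1):
    """Broyden-style rank-1 update (illustrative): learn the mapping from
    gripper motion dq to observed tissue-point displacement dx online."""
    denom = dq @ dq
    if denom > 1e-9:
        J = J + alpha * np.outer(dx - J @ dq, dq) / denom
    return J

def constrained_step(J, x, x_goal, obstacle, radius, step=0.5):
    """One control step: move the manipulated tissue point toward x_goal
    while keeping it outside a circular keep-out region (standing in for
    the vessel detected by the vision pipeline)."""
    dq = step * np.linalg.pinv(J) @ (x_goal - x)   # unconstrained correction
    x_pred = x + J @ dq                             # predicted tissue motion
    d = x_pred - obstacle
    dist = np.linalg.norm(d)
    if dist < radius:                               # project out of the region
        x_pred = obstacle + d / max(dist, 1e-9) * radius
    return x_pred

# Usage: drive a tissue point toward a goal with a keep-out circle in the way.
J = np.eye(2)
x, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
x_new = constrained_step(J, x, goal, obstacle=np.array([0.5, 0.1]), radius=0.2)
```

A real implementation would replace the fixed circle with the time-varying region reported by the detector at each control cycle and solve a proper constrained optimization rather than a post-hoc projection; the sketch only conveys the structure of combining an adapted model with a safety constraint.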
