Abstract
Robotic surgeries offer many benefits, but they do not allow simultaneous control of the endoscopic camera and the surgical instruments. This leads to frequent interruptions as surgeons adjust their viewpoint. Autonomous camera control could help overcome this challenge. We propose a predictive approach that uses artificial neural networks to anticipate when camera movements will occur. Using kinematic data of the surgical instruments recorded during robotic surgical training, we split the data into segments and labeled each segment according to whether or not it immediately preceded a camera movement. Because of the large class imbalance, we trained an ensemble of networks on balanced sub-sets of the data. We found that the instruments’ kinematics can be used to predict when camera movements will occur, and we evaluated the performance across different segment durations and ensemble sizes. We also studied how far in advance upcoming camera movements can be predicted, and found that predicting camera movements up to 0.5 s in advance led to only a small decrease in performance relative to predicting imminent camera movements. These results serve as a proof of concept for predicting the timing of camera movements in robotic surgeries and suggest that an autonomous camera controller for robotic surgery may someday be feasible.
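The ensemble-over-balanced-sub-sets strategy mentioned in the abstract can be sketched as follows. This is a minimal illustration on synthetic data: the simple nearest-centroid classifier stands in for the paper's neural networks, and every name, feature dimension, and class ratio below is a hypothetical assumption, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for instrument-kinematics segments (values hypothetical):
# 1000 majority-class segments (no upcoming camera movement) and 60
# minority-class segments (segment immediately precedes a camera movement).
X_neg = rng.normal(0.0, 1.0, size=(1000, 6))
X_pos = rng.normal(1.5, 1.0, size=(60, 6))

def train_centroid_classifier(pos, neg):
    """Deliberately simple model: store the two class centroids."""
    return pos.mean(axis=0), neg.mean(axis=0)

def predict_proba(model, X):
    """Score in [0, 1]: closer to the positive centroid -> higher score."""
    c_pos, c_neg = model
    d_pos = np.linalg.norm(X - c_pos, axis=1)
    d_neg = np.linalg.norm(X - c_neg, axis=1)
    return d_neg / (d_pos + d_neg)

# Ensemble over balanced sub-sets: each member is trained on every
# minority-class segment plus an equally sized random sample of the
# majority class, so each member sees a balanced training set.
n_members = 11
models = []
for _ in range(n_members):
    idx = rng.choice(len(X_neg), size=len(X_pos), replace=False)
    models.append(train_centroid_classifier(X_pos, X_neg[idx]))

def ensemble_predict(X):
    # Average the members' scores, then threshold.
    scores = np.mean([predict_proba(m, X) for m in models], axis=0)
    return scores > 0.5  # True = "camera movement predicted"

sensitivity = ensemble_predict(X_pos).mean()
specificity = 1.0 - ensemble_predict(X_neg).mean()
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Averaging the members' scores lets every member see a balanced view of the data while the ensemble as a whole still uses all majority-class segments, which is the usual motivation for this kind of under-sampling ensemble.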