Abstract

Artificial intelligence (AI) has demonstrated superior performance in various fields, including computer vision and medical imaging. Further advances in AI and medical imaging can improve the quality of service in both robotic and non-robotic surgical domains by providing enhanced perception and predictive assistance. In this regard, we propose a method for recognizing the current surgical action and predicting future surgical actions from robotic surgical scenes. We introduce an online robotic tool detection method that extracts visual features focused on each robotic surgical tool without prior learning. Based on these features, we develop an encoder-decoder framework that recognizes the current surgical action and predicts the sequence of surgical actions to be performed next. Experiments on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset verify that the proposed method recognizes the current surgical action with high accuracy and effectively predicts the sequence of future surgical actions.
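
To illustrate the kind of encoder-decoder architecture the abstract describes (recognizing the ongoing action and autoregressively predicting future action labels from per-frame visual features), here is a minimal sketch. It is not the authors' implementation: the use of PyTorch, GRU units, and all module names, dimensions, and the prediction horizon are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's model): a GRU encoder consumes
# per-frame visual feature vectors; one head classifies the current surgical
# action, and a GRUCell decoder autoregressively predicts future action labels.
import torch
import torch.nn as nn

class ActionEncoderDecoder(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_actions=15, horizon=5):
        super().__init__()
        self.horizon = horizon                                   # future steps to predict (assumed)
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.current_head = nn.Linear(hidden_dim, num_actions)   # current-action classifier
        self.decoder = nn.GRUCell(num_actions, hidden_dim)       # feeds back predicted label distributions
        self.future_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) visual features of the observed frames
        _, h = self.encoder(feats)                # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        current_logits = self.current_head(h)     # recognize the ongoing action
        future_logits = []
        prev = torch.softmax(current_logits, dim=-1)
        for _ in range(self.horizon):             # autoregressive future prediction
            h = self.decoder(prev, h)
            logits = self.future_head(h)
            future_logits.append(logits)
            prev = torch.softmax(logits, dim=-1)
        return current_logits, torch.stack(future_logits, dim=1)

# Example usage: batch of 2 clips, 30 frames of 512-d tool-focused features each
model = ActionEncoderDecoder()
cur, fut = model(torch.randn(2, 30, 512))
print(cur.shape, fut.shape)   # torch.Size([2, 15]) torch.Size([2, 5, 15])
```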
