Abstract

Background: To perform accurate laparoscopic hepatectomy (LH) without injury, novel intraoperative systems of computer-assisted surgery (CAS) for LH are anticipated. Automated surgical workflow identification is a key component for developing CAS systems. This study aimed to develop a deep-learning model for automated surgical step identification in LH.

Materials and methods: We constructed a dataset comprising 40 cases of pure LH videos; 30 and 10 cases were used for the training and testing datasets, respectively. Each video was split into static images at 30 frames per second. LH was divided into nine surgical steps (Steps 0–8), and each frame in the training set was annotated as belonging to one of these steps. After extracorporeal actions (Step 0) were excluded from the videos, two deep-learning models for automated surgical step identification, an 8-step model (Model 1) and a 6-step model (Model 2), were developed using a convolutional neural network. Each frame in the testing dataset was classified by the constructed models in real time.

Results: More than 8 million frames from the pure LH videos were annotated for surgical step identification. The overall accuracy of Model 1 was 0.891, which increased to 0.947 in Model 2. The median and mean per-case accuracy in Model 2 were 0.927 (range, 0.884–0.997) and 0.937 ± 0.04 (standard deviation), respectively. Real-time automated surgical step identification was performed at 21 frames per second.

Conclusions: We developed a highly accurate deep-learning model for surgical step identification in pure LH. Our model could be applied to intraoperative CAS systems.
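The abstract describes frame-level classification of laparoscopic video into surgical steps with a convolutional neural network. The sketch below illustrates that general approach only; the paper does not specify its architecture or preprocessing, so the ResNet-50 backbone, input size, normalization, and the example file name are all assumptions, not the authors' implementation.

```python
# Minimal sketch of per-frame surgical step classification, assuming a
# ResNet-50 backbone and standard ImageNet preprocessing (both assumptions;
# the study only states that a convolutional neural network was used).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_STEPS = 8  # 8-step model (Model 1); the 6-step model would use 6

# ImageNet-pretrained backbone with the classifier replaced by a
# surgical-step head (assumed design choice for illustration).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_STEPS)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_video(path: str):
    """Yield a predicted surgical step index for every frame of an LH video."""
    cap = cv2.VideoCapture(path)
    with torch.no_grad():
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            x = preprocess(frame_rgb).unsqueeze(0)  # (1, 3, 224, 224)
            logits = model(x)
            yield int(logits.argmax(dim=1))
    cap.release()

# Hypothetical usage on one case video:
# for step in classify_video("lh_case_01.mp4"):
#     print(step)
```

In this per-frame setup, real-time identification simply means the loop above keeps pace with the incoming video stream (the paper reports 21 frames per second on its hardware).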
