The widespread use of laparoscopic intervention produces large amounts of video data that are difficult to review for surgeons wishing to evaluate and improve their skills. A need therefore exists for computer-based analysis of laparoscopic video to accelerate surgical training and assessment. We developed a surgical instrument detection system for video recordings of laparoscopic gastrectomy procedures. This system, which may increase the efficiency of video review, is based on the open-source neural network platform YOLOv3. A total of 10,716 images extracted from 52 laparoscopic gastrectomy videos were included in the training and validation data sets. We performed 200,000 training iterations. Video recordings of 10 laparoscopic gastrectomies, independent of the training and validation data sets, were analyzed by our system, and heat maps visualizing trends in surgical instrument usage were drawn. Three skilled surgeons evaluated whether each heat map represented the features of the corresponding operation. After training, precision and sensitivity (recall) on the testing data set were 0.87 and 0.83, respectively. The heat maps perfectly represented the devices used during each operation. Without reviewing the video recordings, the surgeons accurately recognized from the heat maps the type of anastomosis, the time taken to initiate duodenal and gastric dissection, and whether any irregular procedure was performed (correct answer rates ≥ 90%). A new automated system that detects the manipulation of surgical instruments in video recordings of laparoscopic gastrectomies, based on the open-source neural network platform YOLOv3, was developed and validated successfully.
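The abstract does not include implementation details, so the following is a minimal sketch of how such a pipeline could be assembled: a per-frame detector feeding an instrument-by-time usage matrix that is rendered as a heat map. The instrument class list, the `detect_instruments` stub, and the frame-sampling interval are illustrative assumptions, not the authors' code; the actual system uses a YOLOv3 model trained on the annotated gastrectomy images.

```python
# Hedged sketch of a detection-to-heat-map pipeline, assuming a trained
# YOLOv3 detector is available behind detect_instruments(). Class names
# and sampling rate are placeholders, not the published configuration.
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Assumed instrument classes for illustration only.
INSTRUMENTS = ["grasper", "dissector", "scissors", "clip_applier", "stapler"]

def detect_instruments(frame):
    """Placeholder for a YOLOv3 forward pass (e.g. via cv2.dnn or darknet).

    Should return the set of instrument class names detected in the frame.
    """
    raise NotImplementedError  # plug in the trained YOLOv3 model here

def build_usage_matrix(video_path, sample_every_n=30):
    """Sample frames and record which instruments are visible in each.

    Returns a binary matrix with one row per instrument and one column
    per sampled frame (1 = instrument detected in that frame).
    """
    cap = cv2.VideoCapture(video_path)
    columns = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every_n == 0:
            present = detect_instruments(frame)
            columns.append([1 if name in present else 0 for name in INSTRUMENTS])
        idx += 1
    cap.release()
    return np.array(columns).T  # rows: instruments, columns: time

def plot_heat_map(usage, out_path="heatmap.png"):
    """Render the usage matrix as an instrument-versus-time heat map."""
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.imshow(usage, aspect="auto", cmap="hot", interpolation="nearest")
    ax.set_yticks(range(len(INSTRUMENTS)))
    ax.set_yticklabels(INSTRUMENTS)
    ax.set_xlabel("sampled frame index (operation time)")
    fig.tight_layout()
    fig.savefig(out_path)
```

Sampling every Nth frame (here, an assumed 30, roughly one frame per second at 30 fps) rather than every raw frame keeps the heat map legible for multi-hour operations while preserving the usage trends the surgeons read from it, such as when stapling devices first appear.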