Abstract

Objective: Detection and segmentation of surgical instruments is an indispensable technology in robot-assisted surgery, enabling surgeons to obtain more comprehensive visual information and further improving surgical safety. However, detection results are easily degraded by environmental factors such as instrument shaking, partially visible instruments, and insufficient lighting. To overcome these issues, we designed a hybrid deep CNN model (SINet) for real-time surgical instrument detection and segmentation.

Methods: The framework employs YOLOv5 as the object detection model and introduces a GAM attention mechanism to improve its feature extraction ability. During training, the SiLU activation function is adopted to avoid gradient explosion and unstable training. Specifically, the vector angle relationship between the ground-truth boxes and the predicted boxes is used in the SIoU loss function to reduce the degrees of freedom of the regression and accelerate network convergence. Finally, a semantic segmentation head segments the surgical instruments in parallel with detection.

Results: The proposed method was evaluated on the public m2cai16-tool-locations dataset and achieved 97.9% mean average precision (mAP), 133 frames per second (FPS), 85.7% mean intersection over union (MIoU) and 86.6% Dice. Experiments on a simulated surgery platform also show satisfactory detection performance.

Conclusion: The experimental results demonstrate that SINet can effectively detect the pose of surgical instruments and achieves better performance than most current algorithms. The method has the potential to help perform a series of surgical operations efficiently and safely.
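For illustration only, the sketch below shows the two components named in the Methods: the SiLU activation and the angle-cost term of the SIoU loss that uses the vector angle between ground-truth and predicted box centres. This is not the authors' code; the function names, the (N, 2) centre-coordinate layout, and the standalone formulation of the angle cost are assumptions made for this example.

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    """SiLU activation: x * sigmoid(x). Smooth and non-monotonic, it is used
    here (as in the abstract) to mitigate gradient explosion and unstable training."""
    return x * torch.sigmoid(x)

def siou_angle_cost(pred_cxcy: torch.Tensor, gt_cxcy: torch.Tensor,
                    eps: float = 1e-7) -> torch.Tensor:
    """Angle-cost term of an SIoU-style loss (illustrative, standard formulation).

    pred_cxcy, gt_cxcy: (N, 2) tensors of predicted and ground-truth box centres.
    Returns a per-box cost that is 0 when the centres are aligned with an axis
    and 1 when the centre offset is at 45 degrees, constraining one degree of
    freedom of the box regression.
    """
    dx = gt_cxcy[:, 0] - pred_cxcy[:, 0]
    dy = gt_cxcy[:, 1] - pred_cxcy[:, 1]
    sigma = torch.sqrt(dx ** 2 + dy ** 2) + eps        # distance between centres
    sin_alpha = torch.abs(dy) / sigma                   # sine of the offset angle
    # Lambda = 1 - 2 * sin^2(arcsin(sin_alpha) - pi/4) == sin(2 * alpha)
    sin_alpha = sin_alpha.clamp(0.0, 1.0 - eps)
    return 1 - 2 * torch.sin(torch.arcsin(sin_alpha) - torch.pi / 4) ** 2
```

In the full SIoU loss this angle cost is combined with distance, shape and IoU terms; the sketch isolates only the vector-angle component mentioned in the abstract.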
