Abstract

In robot-assisted surgery, precise surgical instrument segmentation can provide surgeons with accurate location and pose information, helping them perform surgical operations efficiently and safely. However, several interfering factors remain, such as instruments being occluded by tissue, multiple instruments interlacing with one another, and instrument shaking during surgery. To address these issues, an effective surgical instrument segmentation network called InstrumentNet is proposed, which adopts YOLOv7 as the object detection framework to achieve real-time detection. Specifically, a multiscale feature fusion network is constructed to mitigate feature redundancy and feature loss and to enhance generalization ability. Furthermore, an adaptive feature-weighted fusion mechanism is introduced to regulate network learning and convergence. Finally, a semantic segmentation head is added to integrate the detection and segmentation functions, and a multitask learning loss function is designed to optimize surgical instrument segmentation performance. The proposed segmentation model is validated on a dataset of intracranial surgical instruments provided by seven experts from Beijing Tiantan Hospital, achieving an mAP of 93.5%, a Dice score of 82.49%, and an MIoU of 85.48%, demonstrating its universality and superiority. The experimental results show that the proposed model achieves better segmentation performance on surgical instruments than other advanced models and can serve as a reference for developing intelligent medical robots.
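
To make the adaptive feature-weighted fusion idea concrete, the sketch below shows one common way such a mechanism can be realized: each pyramid level predicts a per-pixel weight, the weights are softmax-normalized across levels, and the features are combined as a weighted sum, with the overall training objective formed as a weighted sum of detection and segmentation losses. This is only an illustrative assumption based on the abstract; the class name, channel sizes, weighting scheme, and loss weighting are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveWeightedFusion(nn.Module):
    """Illustrative adaptive feature-weighted fusion (hypothetical design).

    Each input scale predicts a per-pixel weight via a 1x1 convolution;
    the weights are softmax-normalized across scales and used to form a
    weighted sum of the feature maps.
    """

    def __init__(self, channels: int, num_inputs: int = 3):
        super().__init__()
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_inputs)
        )

    def forward(self, feats):
        # feats: list of tensors resized to a common (B, C, H, W) shape.
        logits = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1
        )
        weights = F.softmax(logits, dim=1)  # (B, num_inputs, H, W)
        return sum(weights[:, i:i + 1] * feats[i] for i in range(len(feats)))


def multitask_loss(det_loss, seg_loss, lambda_seg=1.0):
    # Hypothetical multitask objective: detection loss plus a weighted
    # semantic segmentation loss from the added segmentation head.
    return det_loss + lambda_seg * seg_loss


# Usage example: fuse three pyramid levels upsampled to a common size.
f1 = torch.randn(2, 256, 64, 64)
f2 = torch.randn(2, 256, 64, 64)
f3 = torch.randn(2, 256, 64, 64)
fusion = AdaptiveWeightedFusion(channels=256, num_inputs=3)
fused = fusion([f1, f2, f3])  # (2, 256, 64, 64)
```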
