Abstract

The shortage of operating room (OR) technicians has led to a growing demand for automated systems in the OR to maintain the quality of care. Robotic scrub nurse (RSN) systems, which perform tasks such as handling instruments and documenting the surgery, are increasingly being developed. While research has focused on detecting instruments in the hands of surgical staff or on recognizing surgical phases, there is a lack of research on detecting instruments on the instrument tray. This study therefore proposes and evaluates two distinct methodologies for instrument detection on the OR table using the deep learning approaches YOLOv5 and Mask R-CNN. The performance of the two approaches was evaluated across 18 YOLOv5 models and 12 Mask R-CNN models, differing mainly in model size. Two sets of instruments were used to assess the generalizability of the models. The results show a mean average precision (mAP) of 0.978 for YOLOv5 and 0.846 for Mask R-CNN on the test dataset comprising three classes; on the test dataset comprising six classes, mAPs of 0.874 and 0.707 were computed, respectively. The study compares the performance of two suitable approaches for instrument detection on the instrument tray in the OR to support the development of RSN systems.
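To make the reported metric concrete, the following is a minimal, self-contained sketch of average precision (AP) for a single class at an IoU threshold of 0.5, the per-class building block of the mAP scores quoted above. The box format, threshold, and rectangle-rule integration are illustrative assumptions, not the paper's actual evaluation code.

```python
def iou(a, b):
    # Intersection-over-union for axis-aligned boxes (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    # preds: list of (confidence, box); gts: list of ground-truth boxes,
    # all for one image and one class. Greedy matching by confidence.
    preds = sorted(preds, key=lambda p: -p[0])
    matched, tps = set(), []
    for _, box in preds:
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) > best:
                best, best_i = iou(box, g), i
        if best >= iou_thr:
            matched.add(best_i)
            tps.append(1)
        else:
            tps.append(0)
    # Integrate precision over recall (rectangle rule, no interpolation).
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for rank, t in enumerate(tps, start=1):
        cum_tp += t
        recall = cum_tp / len(gts)
        precision = cum_tp / rank
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

The mAP is then the mean of such per-class AP values over all instrument classes (three or six in the datasets above); COCO-style mAP additionally averages over several IoU thresholds.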
