Abstract

Practicing open-wound suturing requires feedback, but evaluating each practice attempt takes instructors' time and is often subjective. Recent work showed that deep-learning models can automate the evaluation by classifying an image or a video of suturing as either pass or fail. However, these models lack the fine-grained feedback offered by earlier systems, which required specialized devices or manual annotations. This work introduces a system that automatically extracts geometric measurements from an image of a suture-practice end product for further evaluation. We propose the suture instance segmentation task and a hand-crafted algorithm that derives interpretable metrics from an image. We collected two new simple-suture datasets consisting of 240 images with instance segmentation and physical-measurement annotations. The experimental results show that a current deep-learning model can accurately segment suture instances and that the measurements extracted by our system are highly correlated with physical measurements taken by humans.
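To illustrate the kind of interpretable geometric metrics such a pipeline can derive from suture instance masks, the following is a minimal sketch, not the paper's actual hand-crafted algorithm: the chosen metrics (per-suture length and inter-suture spacing), the pixel-to-millimetre scale factor, and the assumption that the wound runs left to right are all illustrative assumptions.

```python
import numpy as np

def suture_metrics(masks, mm_per_px=0.1):
    """Derive simple geometric measurements from suture instance masks.

    `masks` is a list of boolean arrays, one per segmented suture.
    `mm_per_px` is an assumed pixel-to-millimetre scale factor.
    Returns per-suture lengths and spacings between adjacent sutures.
    """
    centroids, lengths = [], []
    for m in masks:
        ys, xs = np.nonzero(m)
        centroids.append((xs.mean(), ys.mean()))
        # Approximate suture length as the extent along the mask's
        # principal axis (found via covariance eigen-decomposition).
        pts = np.stack([xs, ys], axis=1).astype(float)
        pts -= pts.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(pts.T))
        major = vecs[:, -1]          # eigenvector of largest eigenvalue
        proj = pts @ major
        lengths.append((proj.max() - proj.min()) * mm_per_px)
    # Spacing: distance between consecutive suture centroids, ordered
    # along the wound (assumed here to run left to right).
    centroids.sort(key=lambda c: c[0])
    spacings = [
        float(np.hypot(x2 - x1, y2 - y1)) * mm_per_px
        for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])
    ]
    return lengths, spacings
```

With masks produced by an instance segmentation model, metrics like these can be compared against physical ground-truth measurements, which is the kind of correlation the experiments evaluate.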
