Abstract

Scene perception is key to developing autonomy and intelligence in surgical robots. This study enables stereotactic surgical robots to detect and segment key objects in unstructured surgical scenes. First, we construct a dataset of neurosurgery robot working scenes. Next, we propose a 2D image scene-awareness pipeline that integrates Mask R-CNN (mask region-based convolutional neural network) with a conditional random field and a superpixel method; the pipeline detects and segments key objects such as the patient’s head, the head frame, and the patient’s body. Then, we establish a multiview projection voting and supervoxel fusion pipeline that extracts further information from the 3D point cloud of the scene. The proposed method was tested in different clinical scenarios, and the results show that it detects and segments the target surgical objects with comparable accuracy and stability on both 2D images and 3D point cloud data. For the patient’s head, the average precision (AP) and the average 2D and 3D Dice scores were 97.65, 91.6, and 92.6, respectively. Segmentation performance improves further when the learning-based neural network is combined with traditional color- and contour-based image processing methods. The proposed solution allows stereotactic surgical robots to better understand their surroundings, provides semantic information useful for subsequent tasks, and lays a foundation for autonomous stereotactic surgical robots.
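The multiview projection voting step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simple pinhole camera model and hypothetical helper names (`project_points`, `vote_labels`); each 3D point is projected into every 2D segmentation mask that sees it, and its semantic label is decided by majority vote across views.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation (assumed setup).
    Returns integer pixel coordinates and per-point depth in the camera frame.
    """
    cam = points @ R.T + t                   # world -> camera frame
    uv = cam @ K.T                           # apply intrinsics
    px = (uv[:, :2] / uv[:, 2:3]).astype(int)
    return px, cam[:, 2]

def vote_labels(points, views, num_labels):
    """views: list of (K, R, t, mask) tuples, where mask is an HxW label image.

    Each visible point collects one vote per view; the per-point label is
    the majority over all views that see it.
    """
    votes = np.zeros((len(points), num_labels), dtype=int)
    for K, R, t, mask in views:
        px, depth = project_points(points, K, R, t)
        h, w = mask.shape
        # Keep only points in front of the camera and inside the image.
        ok = (depth > 0) & (px[:, 0] >= 0) & (px[:, 0] < w) \
             & (px[:, 1] >= 0) & (px[:, 1] < h)
        labels = mask[px[ok, 1], px[ok, 0]]  # note: mask indexed as (row, col)
        votes[np.flatnonzero(ok), labels] += 1
    return votes.argmax(axis=1)              # per-point majority label
```

A supervoxel fusion stage would then smooth these per-point labels by assigning each supervoxel the majority label of the points it contains, which is the same voting idea applied one level up.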
