Abstract

Robotic systems for surgery of the inner ear must enable highly precise movement in relation to the patient. To allow for a suitable collaboration between surgeon and robot, these systems should not interrupt the surgical workflow and should integrate well into existing processes. Because the surgical microscope is a standard tool, present in almost every microsurgical intervention and positioned close to the situs, it is well suited to be extended by assistive robotic systems, for instance a microscope-mounted laser for ablation. As both patient and microscope are subject to movement during surgery, a well-integrated robotic system must be able to compensate for these movements. To solve the problem of on-line registration of an assistance system to the situs, the standard of care often relies on marker-based technologies, which require markers to be rigidly attached to the patient. This not only takes preparation time but also increases the invasiveness of the procedure, and the tracking system's line of sight must remain unobstructed. This work aims to utilize the existing imaging system to detect relative movements between the surgical microscope and the patient; the resulting data allow registration to be maintained. No artificial markers or landmarks are used; instead, an approach for feature-based tracking tailored to the surgical environment in otology is presented. The images for tracking are obtained from the two-dimensional RGB stream of a surgical microscope. Due to the bony structure of the surgical site, the recorded cochleostomy scene moves nearly rigidly. The goal of the tracking algorithm is to estimate motion from the given image stream alone. After preprocessing, features are detected in two consecutive images and the affine transformation between them is computed with a random sample consensus (RANSAC) algorithm. The proposed method can provide movement feedback with a precision of up to 93.2 μm without any additional hardware in the operating room or fiducials attached to the situs. Over long tracking periods, however, a cumulative error occurs.
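The abstract does not name a specific feature detector or software library, so the following minimal sketch only illustrates the described pipeline, detecting and matching features in two consecutive frames and robustly fitting an affine transformation with RANSAC, using OpenCV in Python. The choice of ORB features and all parameter values are assumptions for illustration, not the paper's actual implementation.

```python
import cv2
import numpy as np

def estimate_frame_motion(prev_gray, curr_gray):
    """Estimate the affine motion between two consecutive (grayscale) microscope frames.

    Returns a 2x3 affine matrix mapping prev_gray coordinates to curr_gray
    coordinates, or None if too few features could be matched.
    """
    # Detect and describe features in both frames (ORB is assumed here for speed;
    # the paper does not state which detector is used).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    # Match descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 3:  # an affine transform needs at least 3 correspondences
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly fit an affine transform with RANSAC; mismatches and features on
    # non-rigidly moving structures are rejected as outliers.
    M, inliers = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0
    )
    return M
```

In use, the per-frame transforms would be chained over time to keep the registration up to date, which is also where the cumulative drift mentioned in the abstract arises.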

Highlights

  • Otologic microsurgery requires the surgeon to work at the limit of their visuo-tactile feedback and dexterity

  • The evaluation yields an error distribution, which is plotted for each inner ear model

  • Images from a surgical microscope are processed to derive pose changes between patient and microscope; this information can serve as input for motion compensation of a microscope-mounted robotic system


Introduction

Otologic microsurgery requires the surgeon to work at the limit of their visuo-tactile feedback and dexterity. Cochlear implantation, for example, traditionally consists of a manually drilled, nearly cone-shaped access that begins on the outer surface of the skull with a diameter of around 30 mm and tapers to a narrow 2 mm opening to the middle ear (posterior tympanotomy). After visualization of the round window, the cochlea can be opened through the round window or through a cochleostomy, an artificial opening drilled by the surgeon. The surgeon then has to guide a 0.3–1 mm thin electrode array through the posterior tympanotomy into the even narrower cochlea. Robotic systems can exceed human precision by multiple orders of magnitude, so otologic microsurgery stands to benefit greatly from robotic assistance.
