Abstract
Accurate medical Augmented Reality (AR) rendering requires two calibrations: estimation of the camera intrinsic matrix and of the hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both transformations during surgery. For camera calibration, we calibrate pre-operatively at multiple distances from the endoscope to parametrize the camera intrinsic matrix as a function of distance. Intra-operatively, we retrieve the camera parameters by estimating the distance of the surgical site from the endoscope in less than 1 s. Unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require the identification of a marker, we make use of a tool-tip rendered in 3D. As the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool-tip. To evaluate the hand-eye calibration, 5 subjects carried out the calibration procedure on a da Vinci robot. An average Target Registration Error of approximately 7 mm was achieved with just three data points.
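The distance-parametrized intrinsic calibration described above could be sketched as follows: pre-operative calibrations at several known distances give a lookup table of intrinsic parameters, and the intra-operative distance estimate selects (here, linearly interpolates) the matrix to use. This is a minimal illustrative sketch, not the paper's implementation; the table values and the choice of linear interpolation are assumptions.

```python
import numpy as np

# Hypothetical pre-operative calibration table: focal lengths and principal
# point measured at several known endoscope-to-target distances (mm).
# All numeric values below are illustrative placeholders.
distances = np.array([20.0, 40.0, 60.0, 80.0])   # mm
fx = np.array([900.0, 870.0, 850.0, 840.0])      # px
fy = np.array([905.0, 872.0, 851.0, 842.0])      # px
cx = np.array([640.0, 638.0, 636.0, 635.0])      # px
cy = np.array([360.0, 361.0, 362.0, 362.0])      # px

def intrinsics_at(distance_mm: float) -> np.ndarray:
    """Interpolate a 3x3 camera intrinsic matrix for the given distance."""
    k = [np.interp(distance_mm, distances, p) for p in (fx, fy, cx, cy)]
    return np.array([[k[0], 0.0,  k[2]],
                     [0.0,  k[1], k[3]],
                     [0.0,  0.0,  1.0]])

# Intra-operatively: estimate distance of the surgical site, then look up K.
K = intrinsics_at(50.0)
```

A smoother model (e.g. a low-order polynomial fit per parameter) could replace `np.interp` if the parameters vary non-linearly with distance.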
Highlights
Augmented reality (AR) and mixed reality (MR) are valuable technologies for medical applications
In the past, multiple self-calibration methods have been proposed for intra-operative use, where the intrinsic camera calibration parameters can be estimated by using feature correspondences in the surgical scene [3, 4]
The root mean square error was calculated for each camera parameter
Summary
Augmented reality (AR) and mixed reality (MR) are valuable technologies for medical applications. MR/AR improves hand-eye coordination for the surgeon [1], but requires two calibration steps to map pre-operative medical data to the intra-operative camera/endoscope feed. In the past, multiple self-calibration methods have been proposed for intra-operative use, where the intrinsic camera calibration parameters can be estimated by using feature correspondences in the surgical scene [3, 4]. Such methods cannot account for changes in lens distortion. The inability of new procedures to integrate seamlessly into existing surgical workflows is a major roadblock in translating AR/MR surgical guidance systems to the operating theatre [6].
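The hand-eye step evaluated in the abstract collects paired 3D points (rendered tool-tip positions vs. actual tip positions). One standard way to recover a rigid transform from such pairs, and to score it with a Target Registration Error, is a least-squares SVD alignment (the Arun/Kabsch method). The sketch below shows that generic technique; it is an assumption that the paper's pipeline uses it, and the function names are ours.

```python
import numpy as np

def rigid_transform(A: np.ndarray, B: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points A onto B.

    A, B: (N, 3) arrays of corresponding 3D points. Uses the SVD-based
    Arun/Kabsch solution with a reflection guard.
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def tre(A: np.ndarray, B: np.ndarray, R: np.ndarray, t: np.ndarray) -> float:
    """Target Registration Error: RMS distance after applying (R, t) to A."""
    residuals = (A @ R.T + t) - B
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```

With noise-free correspondences, three non-collinear point pairs already determine the transform, which is consistent with the abstract's report of useful accuracy from just three data points.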