Abstract

PURPOSE: Navigation in visually complex endoscopic environments requires an accurate and robust localisation system. This paper presents a single-image, deep-learning-based camera localisation method for orthopedic surgery. METHODS: The approach combines image information, deep learning techniques and bone-tracking data to estimate camera poses relative to bone markers. We collected one arthroscopic video sequence at each of four knee flexion angles, on both a synthetic phantom knee model and a cadaveric knee joint. RESULTS: Experimental results are shown for both the synthetic knee model and the cadaveric knee joint, with mean localisation errors of 9.66 mm / 0.85° and 9.94 mm / 1.13°, respectively. We found no correlation between the localisation errors achieved on synthetic and cadaveric images, and hence we infer that arthroscopic image artefacts play a minor role in camera pose estimation compared to the constraints introduced by the presented setup. We also found that images acquired at 90° and 0° knee flexion are, respectively, the most and least informative for visual localisation. CONCLUSION: This study shows that deep learning performs well in the visually challenging, feature-poor environment of knee arthroscopy, which suggests such techniques can bring further improvements to localisation in minimally invasive surgery.
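
To make the reported error metrics concrete, the sketch below shows one common way to compute a translation error in millimetres and a rotation error in degrees between a predicted camera pose and a ground-truth pose, both expressed in the bone-marker coordinate frame. The function name, the use of 4x4 homogeneous transforms, and the example poses are illustrative assumptions; the abstract does not specify the paper's actual evaluation code.

```python
import numpy as np

def pose_errors(T_pred: np.ndarray, T_gt: np.ndarray):
    """Translation (mm) and rotation (deg) error between two 4x4 camera poses.

    Both poses are assumed to be expressed in the bone-marker frame with
    translations in millimetres. Illustrative metric only, not the paper's
    evaluation pipeline.
    """
    # Translation error: Euclidean distance between the camera positions.
    t_err_mm = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])

    # Rotation error: geodesic angle of the relative rotation R_pred^T R_gt.
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err_deg = np.degrees(np.arccos(cos_angle))
    return t_err_mm, r_err_deg

# Example: ground-truth pose vs. a slightly perturbed prediction.
T_gt = np.eye(4)
angle = np.radians(1.0)
T_pred = np.eye(4)
T_pred[:3, :3] = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0,            0.0,           1.0]])
T_pred[:3, 3] = [5.0, 3.0, 7.0]  # millimetres
print(pose_errors(T_pred, T_gt))  # approx. (9.11 mm, 1.0 deg)
```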
