Abstract

Traditional soft tissue registration methods require direct intraoperative visualization of a significant portion of the target anatomy in order to produce acceptable surface alignment. Image guidance is therefore generally not available during the robotic exposure of structures such as the kidneys, which are not immediately visible upon entry into the abdomen. This paper proposes guiding surgical exposure with an iterative state estimator that assimilates small visual cues into an a priori anatomical model as exposure progresses, thereby evolving pose estimates for the occluded structures of interest. Intraoperative surface observations of a right kidney are simulated using endoscope tracking and preoperative tomography from a representative robotic partial nephrectomy case. Clinically relevant random perturbations of the true kidney pose are corrected using this sequence of observations in a particle filter framework, which estimates an optimal similarity transform for fitting a patient-specific kidney model at each step. The temporal response of registration error is compared against that of serial rigid coherent point drift (CPD) in both static and simulated dynamic surgical fields, and for varying levels of observation persistence. In the static case, both particle filtering and persistent CPD achieved sub-5 mm accuracy, with CPD processing observations 75% faster. Particle filtering outperformed CPD in the dynamic case under equivalent computation times because it requires only minimal persistence. This proof-of-concept simulation study suggests that Bayesian state estimation may provide a viable pathway to image guidance for surgical exposure in the abdomen, especially in the presence of dynamic intraoperative tissue displacement and deformation.
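The particle filter framework described above can be illustrated with a minimal sketch. This is not the authors' implementation: the "kidney" model is a hypothetical point cloud sampled from an ellipsoid, the similarity transform is restricted to scale, a single rotation angle, and translation for brevity, and all noise levels, particle counts, and jitter scales are illustrative assumptions. Each simulated observation is a small noisy patch of the transformed surface, standing in for the partial visual cues available during exposure; particle weights come from nearest-surface residuals, followed by resampling and diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot_z(theta):
    """Rotation about z by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical patient-specific model: 200 points on an ellipsoid surface (mm).
u = rng.uniform(0, 2 * np.pi, 200)
v = rng.uniform(0, np.pi, 200)
model = np.stack([30 * np.sin(v) * np.cos(u),
                  15 * np.sin(v) * np.sin(u),
                  50 * np.cos(v)], axis=1)

def apply_sim(params, pts):
    """Apply similarity transform (s, theta, tx, ty, tz) to a point cloud."""
    s, th, tx, ty, tz = params
    return s * pts @ rot_z(th).T + np.array([tx, ty, tz])

# Unknown ground-truth pose perturbation to be recovered (illustrative values).
true_params = np.array([1.05, 0.2, 8.0, -5.0, 3.0])

def observe(n=30, noise=0.5):
    """A small noisy patch of the transformed surface: one visual cue."""
    idx = rng.choice(len(model), n, replace=False)
    return apply_sim(true_params, model[idx]) + rng.normal(0, noise, (n, 3))

def loglik(params, obs, sigma=1.0):
    """Log-likelihood from nearest-surface residuals of the observed patch."""
    pred = apply_sim(params, model)
    d2 = ((obs[:, None, :] - pred[None, :, :]) ** 2).sum(-1).min(axis=1)
    return -0.5 * d2.sum() / sigma**2

# Initial particles drawn around the unperturbed pose (clinically plausible spread).
n_p = 500
particles = np.hstack([rng.normal(1.0, 0.1, (n_p, 1)),    # scale
                       rng.normal(0.0, 0.3, (n_p, 1)),    # rotation (rad)
                       rng.normal(0.0, 10.0, (n_p, 3))])  # translation (mm)

jitter = np.array([0.01, 0.01, 0.5, 0.5, 0.5])  # diffusion per step
for step in range(20):
    obs = observe()
    logw = np.array([loglik(p, obs) for p in particles])
    w = np.exp(logw - logw.max())          # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(n_p, n_p, p=w)        # resample by weight
    particles = particles[idx] + rng.normal(0.0, jitter, (n_p, 5))

estimate = particles.mean(axis=0)
```

With sharp likelihoods such as this one, most weight collapses onto a few particles each step; the post-resampling jitter is what lets the filter keep refining the pose as further cues arrive, which is the behavior the abstract credits for robustness in the dynamic case.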
