Abstract

Bronchoscopy is a standard technique for airway examination, providing a minimally invasive approach for both diagnosis and treatment of pulmonary diseases. To target lesions identified pre-operatively, the location of the bronchoscope must be registered to the CT bronchial model during the examination. Existing vision-based techniques rely on registration between virtually rendered endobronchial images and video frames based on image intensity or surface geometry. However, intensity-based approaches are sensitive to illumination artefacts, while gradient-based approaches are vulnerable to surface texture. In this paper, depth information is employed in a novel way to achieve continuous and robust camera localisation. Surface shading is used to recover depth from endobronchial images. The pose of the bronchoscopic camera is estimated by maximising the similarity between the depth recovered from a video image and that captured from a virtual camera projection of the CT model. Normalised cross-correlation and mutual information have both been used and compared as the similarity measure. The proposed depth-based tracking approach has been validated on both phantom and in vivo data. It outperforms existing vision-based registration methods, resulting in a smaller pose-estimation error for the bronchoscopic camera, and is shown to be more robust to illumination artefacts and surface texture and less sensitive to camera pose initialisation. In summary, a reliable camera localisation technique based on depth information has been proposed for bronchoscopic navigation. Qualitative and quantitative performance evaluations demonstrate the clinical value of the proposed framework.
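The two similarity measures named in the abstract, normalised cross-correlation (NCC) and mutual information (MI), can both be computed directly between a depth map recovered from a video frame and one rendered from the CT model. The sketch below is illustrative only, not the authors' implementation: the function names, the histogram bin count, and the use of dense (per-pixel) depth maps are assumptions.

```python
import numpy as np

def normalised_cross_correlation(d_video: np.ndarray, d_virtual: np.ndarray) -> float:
    """Zero-mean NCC between two depth maps of the same shape.

    Returns 1.0 for identical (up to affine scaling) depth maps,
    values near 0 for uncorrelated ones.
    """
    a = d_video - d_video.mean()
    b = d_virtual - d_virtual.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def mutual_information(d_video: np.ndarray, d_virtual: np.ndarray, bins: int = 32) -> float:
    """MI estimated from a joint histogram of depth values (in nats).

    The bin count (32) is an illustrative choice, not taken from the paper.
    """
    joint, _, _ = np.histogram2d(d_video.ravel(), d_virtual.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(x, y)
    px = pxy.sum(axis=1)               # marginal p(x)
    py = pxy.sum(axis=0)               # marginal p(y)
    nz = pxy > 0                       # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

In the tracking framework described above, either function would serve as the objective maximised over the virtual camera's pose parameters, with the CT model re-rendered at each candidate pose.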
