Abstract

Physicians use endoscopic navigation systems during bronchoscopy to reduce the risk of getting lost in the complex tree-like structure of the bronchi. Most existing navigation systems rely on camera poses estimated by bronchoscope tracking and/or deep learning. However, bronchoscope tracking-based methods suffer from tracking errors, and pre-training deep models requires massive amounts of data. This paper describes an improved bronchoscope tracking procedure that adopts an image domain translation technique to improve tracking performance. Specifically, our scheme consists of three modules: an RGB-D image domain translation module, an anatomical structure classification module, and a structure-aware bronchoscope tracking module. The RGB-D image domain translation module translates a real bronchoscope (RB) image into its corresponding virtual bronchoscope (VB) image and depth image. The anatomical structure classification module classifies the current scene into two categories: structureless and structure-rich. The structure-aware bronchoscope tracking module uses a modified video-CT registration approach to estimate the camera pose. Experimental results show that the proposed method achieves higher tracking accuracy than current state-of-the-art bronchoscope tracking methods.
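
To make the three-module pipeline concrete, the following is a minimal Python sketch of how such a structure-aware tracking loop could be organized. All names used here (DomainTranslator, StructureClassifier, VideoCTTracker, track_frame) are hypothetical placeholders introduced for illustration only; they are not taken from the authors' implementation, and each module body is left abstract.

```python
# Minimal sketch of the three-module pipeline described above. All class and
# function names are hypothetical placeholders, not the authors' code.
from __future__ import annotations

from dataclasses import dataclass

import numpy as np


@dataclass
class Pose:
    """Camera pose, assumed here to be a 4x4 homogeneous transform."""
    matrix: np.ndarray


class DomainTranslator:
    """Module 1: translates a real bronchoscope (RB) frame into a virtual
    bronchoscope (VB) image and a depth image, e.g. with a learned
    image-to-image translation model."""

    def translate(self, rb_frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        raise NotImplementedError("model inference goes here")


class StructureClassifier:
    """Module 2: classifies the current scene as 'structureless' or
    'structure-rich' from the translated VB/depth images."""

    def classify(self, vb_image: np.ndarray, depth: np.ndarray) -> str:
        raise NotImplementedError


class VideoCTTracker:
    """Module 3: modified video-CT registration that aligns the translated
    VB/depth images with renderings of the patient's CT airway model."""

    def estimate_pose(self, vb_image: np.ndarray, depth: np.ndarray,
                      scene_type: str, prev_pose: Pose) -> Pose:
        raise NotImplementedError


def track_frame(rb_frame: np.ndarray, prev_pose: Pose,
                translator: DomainTranslator,
                classifier: StructureClassifier,
                tracker: VideoCTTracker) -> Pose:
    """One iteration of the structure-aware tracking loop."""
    vb_image, depth = translator.translate(rb_frame)   # RGB-D domain translation
    scene_type = classifier.classify(vb_image, depth)  # anatomical classification
    return tracker.estimate_pose(vb_image, depth, scene_type, prev_pose)
```

In a real system the classifier output would presumably decide how strongly the tracker relies on structural image cues versus the previous pose, but that decision logic is left abstract in this sketch.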
