Augmenting X-ray (XR) fluoroscopy with 3D anatomic overlays is an essential technique for improving guidance during catheterization procedures. Unfortunately, cardiac and respiratory motion compromises the accuracy of the augmented fluoroscopy. Motion compensation methods can be applied to update the overlay of a static model with regard to respiratory and cardiac motion. We investigate the feasibility of motion detection between two fluoroscopic frames by applying a convolutional neural network (CNN). Its integration into the existing open-source software framework 3D-XGuide is demonstrated, thus extending its functionality to automatic motion detection and compensation.

The CNN is trained on reference data generated by tracking the rapid pacing catheter tip using template matching with normalized cross-correlation (CC). The developed CNN motion compensation model is packaged as a standalone web service, allowing for independent use via a REST API. For testing and demonstration purposes, we have extended the 3D-XGuide navigation framework with an additional motion compensation module, which uses the displacement predictions of the standalone CNN model service to compensate the motion of the static 3D model overlay. We provide the source code on GitHub under the BSD license.

The performance of the CNN motion compensation model was evaluated on a total of 1690 fluoroscopic image pairs from ten clinical datasets. The CNN model-based motion compensation method clearly outperformed CC-based tracking of the rapid pacing catheter tip, with prediction frame rates suitable for live application in the clinical setting.

A novel CNN model-based method for automatic motion compensation during fusion of 3D anatomic models with XR fluoroscopy is introduced, and its integration into a real software application is demonstrated. Automatic motion extraction from 2D XR images using a CNN model appears to be a substantial improvement for reliable augmentation during catheter interventions.
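Because the CNN model runs as a standalone web service, any client can request a displacement prediction for a pair of fluoroscopic frames over its REST API and apply the result to the projected overlay. The following is a minimal client sketch, assuming a hypothetical endpoint and payload schema (the URL, the field names frame_prev/frame_curr, and the response keys dx/dy are illustrative assumptions, not the documented interface of the published service):

```python
import numpy as np
import requests

# Hypothetical endpoint of the standalone CNN motion compensation service;
# the actual route, image encoding, and response schema are defined by the
# service implementation and may differ.
SERVICE_URL = "http://localhost:5000/predict"


def predict_displacement(frame_prev: np.ndarray, frame_curr: np.ndarray) -> tuple[float, float]:
    """Send two consecutive fluoroscopic frames to the CNN service and
    return the predicted in-plane displacement (dx, dy) in pixels."""
    payload = {
        # JSON-encoded pixel arrays keep the sketch self-contained; a real
        # service would more likely accept a compressed image format.
        "frame_prev": frame_prev.tolist(),
        "frame_curr": frame_curr.tolist(),
    }
    response = requests.post(SERVICE_URL, json=payload, timeout=1.0)
    response.raise_for_status()
    result = response.json()
    return result["dx"], result["dy"]


def shift_overlay(overlay_points_2d: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Translate the projected points of the static 3D model overlay by the
    detected motion to keep the augmentation aligned with the live image."""
    return overlay_points_2d + np.array([dx, dy])
```

In the motion compensation module added to 3D-XGuide, the displacement predicted for each new frame is used in this way to update the static 3D model overlay, keeping CNN inference decoupled from the navigation framework itself.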