Abstract

Vascular diseases are often treated minimally invasively. The interventional material (stents, guidewires, etc.) used during such percutaneous interventions is visualized by some form of image guidance. Today, this image guidance is usually provided by 2D X-ray fluoroscopy, that is, a live 2D image. 3D X-ray fluoroscopy, that is, a live 3D image, could accelerate existing interventions and enable new ones. However, existing algorithms for the 3D reconstruction of interventional material either require too many X-ray projections, and therefore dose, or are only capable of reconstructing single, curvilinear structures. Using only two new X-ray projections per 3D reconstruction, we aim to reconstruct more complex arrangements of interventional material than was previously possible. This is achieved by improving a previously presented deep learning-based reconstruction pipeline, which assumes that the X-ray images are acquired by a continuously rotating biplane system, in two ways: (a) separation of the reconstruction of different object types, and (b) motion compensation using spatial transformer networks. Our pipeline achieves submillimeter accuracy on measured data of a stent and two guidewires inside an anthropomorphic phantom with respiratory motion. In an ablation study, we find that the aforementioned algorithmic changes improve our two figures of merit by 75% (1.76 mm → 0.44 mm) and 59% (1.15 mm → 0.47 mm), respectively. A comparison of our measured dose area product (DAP) rate to DAP rates of 2D fluoroscopy indicates a roughly similar dose burden. This dose efficiency, combined with the ability to reconstruct complex arrangements of interventional material, makes the presented algorithm a promising candidate to enable 3D fluoroscopy.
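To make the motion-compensation idea concrete, the following is a minimal sketch of a 3D spatial transformer module in the style of Jaderberg et al.'s spatial transformer networks, assuming a PyTorch implementation. The module name `SpatialTransformer3D`, the localization-network architecture, and all layer sizes are illustrative assumptions, not the authors' actual pipeline: a small localization network regresses an affine transform from the current volume, which is then resampled so that, for example, respiratory motion can be compensated before or during reconstruction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialTransformer3D(nn.Module):
    """Illustrative 3D spatial transformer: regresses an affine motion estimate
    from a feature volume and warps the volume accordingly (motion compensation).
    Architecture and sizes are assumptions, not the paper's actual network."""

    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Localization network: predicts 12 affine parameters (a 3x4 matrix).
        self.loc = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(16, 12),
        )
        # Initialize to the identity transform so training starts from "no motion".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0], dtype=torch.float)
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (N, C, D, H, W)
        theta = self.loc(volume).view(-1, 3, 4)  # per-sample affine matrix
        grid = F.affine_grid(theta, volume.size(), align_corners=False)
        return F.grid_sample(volume, grid, align_corners=False)


if __name__ == "__main__":
    # Usage sketch: warp a toy volume toward the currently observed motion state.
    vol = torch.rand(1, 1, 32, 64, 64)
    stn = SpatialTransformer3D(in_channels=1)
    compensated = stn(vol)
    print(compensated.shape)  # torch.Size([1, 1, 32, 64, 64])
```

Because both `affine_grid` and `grid_sample` are differentiable, such a module can be trained end-to-end inside a larger reconstruction network, which is the general appeal of spatial transformer layers for motion compensation.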

