Abstract

An essential technique for spine surgery guidance is the registration of intraoperative 2D X-ray images with preoperative 3D CT, which enables real-time imaging to be correlated with the surgical plan. Previous deep-learning-based methods generally require converting the 3D CT into a 2D projection before registration, which discards spatial information and fails to satisfy the clinical requirements of a large adaptation range and high precision. In this paper, a novel transformer-based two-step registration network is proposed that directly regresses the transformation parameters without dimension reduction of the 3D CT. Spine information is extracted by reconstruction and segmentation modules and is further exploited by the registration network, which operates on both the original images and the spine features. In addition, an adaptive multi-dimensional loss function combining a parameter-domain loss and a graph-domain loss is designed to better match the registration mechanism. Together, these improvements expand the range of acceptable deformations and increase registration accuracy. We demonstrate the validity and generalizability of the proposed method by achieving state-of-the-art performance on both synthesized and clinical data, with average mTREs of 0.96 mm and 2.32 mm, respectively. Moreover, the high registration performance under large deformations reflects the method's robustness in complex scenarios. The proposed method demonstrates the tremendous potential of deep learning in spinal surgery navigation.
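Since the full text is not included here, the following is only a minimal PyTorch-style sketch of how a combined parameter-domain and graph-domain loss might look. It assumes the parameter-domain term is an L1 error on the regressed 6-DoF rigid pose and the graph-domain term is a normalized cross-correlation between projections rendered at the predicted and ground-truth poses; the function name, signature, and fixed weighting `alpha` are illustrative assumptions, not the authors' implementation (in particular, the paper's "adaptive" weighting scheme is not specified in the abstract).

```python
# Hypothetical sketch of a multi-dimensional registration loss:
# a parameter-domain term plus a graph (image)-domain term.
import torch
import torch.nn.functional as F

def multi_dimensional_loss(pred_pose, gt_pose, pred_drr, gt_drr, alpha=0.5):
    """pred_pose, gt_pose: (B, 6) rigid transformation parameters.
    pred_drr, gt_drr: (B, 1, H, W) projections (e.g., DRRs) rendered
    at the predicted and ground-truth poses. `alpha` is a placeholder
    for the adaptive weighting described in the paper."""
    # Parameter-domain loss: direct regression error on pose parameters.
    param_loss = F.l1_loss(pred_pose, gt_pose)

    # Graph-domain loss: 1 - normalized cross-correlation between the
    # zero-mean flattened projections (higher NCC = better alignment).
    p = pred_drr.flatten(1)
    g = gt_drr.flatten(1)
    p = p - p.mean(dim=1, keepdim=True)
    g = g - g.mean(dim=1, keepdim=True)
    ncc = (p * g).sum(dim=1) / (p.norm(dim=1) * g.norm(dim=1) + 1e-8)
    graph_loss = (1.0 - ncc).mean()

    # Combine the two domains; the paper adapts this balance during training.
    return alpha * param_loss + (1.0 - alpha) * graph_loss
```

The design intuition matches the abstract: the parameter-domain term supervises the regressed transformation directly, while the graph-domain term penalizes misalignment as it actually manifests in projection space, making the loss more consistent with the registration mechanism.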
