Image-to-patient registration aligns preoperative images with intra-operative anatomical structures and is a critical step in image-guided surgery (IGS). The accuracy and speed of this step strongly influence the performance of IGS systems. Rigid registration based on paired points has been widely used in IGS, but studies have shown its limitations in terms of cost, accuracy, and registration time. Rigid registration of point clouds representing human anatomical surfaces has therefore become an alternative approach to image-to-patient registration in IGS systems.

We propose a novel correspondence-based rigid point cloud registration method that achieves global registration without pose initialization. The proposed method is less sensitive to outliers than the widely used RANSAC-based registration methods and attains high accuracy at high speed, which makes it particularly suitable for image-to-patient registration in IGS. We represent the rigid spatial transformation between two coordinate systems by a rotation axis and a rotation angle. Given a set of correspondences between two point clouds in two coordinate systems, we first construct a 3D correspondence cloud (CC) from the inlier correspondences and prove that the CC lies on a plane, the correspondence plane (CP), whose normal is the rotation axis between the two point clouds. The rotation axis can thus be estimated by fitting the CP. We then show that when the normals of a pair of corresponding points are projected onto the CP, the angle between the projected normals equals the rotation angle, so the rotation angle can be estimated from an angle histogram. Moreover, this two-stage estimation also produces a high-quality correspondence subset with a high inlier rate. With the estimated rotation axis, rotation angle, and correspondence subset, the spatial transformation can be computed directly or estimated with RANSAC in a fast and robust way within only 100 iterations.

To validate the proposed registration method, we conducted experiments on the CT-Skull dataset. We first performed a simulation experiment in which the initial inlier rate of the correspondence set was controlled; the results showed that the proposed method can effectively obtain a correspondence subset with a much higher inlier rate. We then compared our method with traditional approaches such as ICP, Go-ICP, and RANSAC, as well as recently proposed methods including TEASER, SC2-PCR, and MAC. Our method outperformed all traditional methods in terms of registration accuracy and speed. While achieving registration accuracy comparable to the recently proposed methods, it was substantially faster, almost three times faster than TEASER.

The experiments on the CT-Skull dataset demonstrate that the proposed method effectively obtains a high-quality correspondence subset with a high inlier rate, and that a tiny RANSAC with 100 iterations is sufficient to estimate the optimal transformation for point cloud registration. Our method achieves higher registration accuracy and faster speed than existing widely used methods, demonstrating great potential for image-to-patient registration, where a rigid spatial transformation is needed to align preoperative images to intra-operative patient anatomy.
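The sketch below illustrates the axis/angle estimation pipeline described above. It is a minimal illustration, not the paper's reference implementation: it assumes the CC is formed from the difference vectors q_i - p_i (for an inlier pair q = Rp + t with rotation axis a, the projection a·(q - p) = a·t is constant, so the differences lie on a plane with normal a), that surface normals are available for both point clouds, and that the CP is fitted with a simple PCA; the function name and histogram settings are illustrative.

```python
# Hedged sketch of CC/CP-based axis-angle estimation (illustrative, not the authors' code).
import numpy as np

def estimate_axis_angle(P, Q, Np, Nq, n_bins=360):
    """P, Q: (N, 3) corresponding points; Np, Nq: (N, 3) their unit surface normals."""
    # Correspondence cloud (assumed construction): for inliers q = R p + t,
    # a . (q - p) = a . t is constant, so the CC lies on a plane with normal a.
    cc = Q - P

    # Fit the correspondence plane (CP) by PCA; the direction of least variance
    # is the plane normal, i.e. the estimated rotation axis.
    centered = cc - cc.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[-1]

    # Project corresponding normals onto the CP; for inliers, the signed angle
    # between the projections equals the rotation angle about the axis.
    proj_p = Np - np.outer(Np @ axis, axis)
    proj_q = Nq - np.outer(Nq @ axis, axis)
    proj_p /= np.linalg.norm(proj_p, axis=1, keepdims=True) + 1e-12
    proj_q /= np.linalg.norm(proj_q, axis=1, keepdims=True) + 1e-12
    angles = np.arctan2(np.cross(proj_p, proj_q) @ axis,
                        np.sum(proj_p * proj_q, axis=1))

    # Take the rotation angle as the peak of the angle histogram; correspondences
    # voting for the peak bin form the high-inlier-rate subset.
    hist, edges = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    peak = np.argmax(hist)
    theta = 0.5 * (edges[peak] + edges[peak + 1])
    subset = np.where((angles >= edges[peak]) & (angles < edges[peak + 1]))[0]

    # Rodrigues' formula turns (axis, theta) into a rotation matrix; the translation
    # is then computed from the subset (or refined with a tiny RANSAC).
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    t = (Q[subset] - P[subset] @ R.T).mean(axis=0)
    return R, t, subset
```

Note that flipping the sign of the fitted axis also flips the signed angles, and R(a, θ) = R(-a, -θ), so the recovered transformation is unaffected by the sign ambiguity of the plane normal.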