Background
The purpose of this study was to reconstruct 3-dimensional (3D) computed tomography (CT) images from single anteroposterior (AP) postoperative total hip arthroplasty (THA) X-ray images using a deep learning approach known as generative adversarial networks (GANs) and to validate the accuracy of cup angle measurements on the GAN-generated CT images.

Methods
We used 2 GAN-based models, CycleGAN and X2CT-GAN, to generate 3D CT images from X-ray images of 386 patients who underwent primary THA with a cementless cup. The training dataset consisted of 522 CT images and 2,282 X-ray images. Image quality was validated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Cup anteversion and inclination measured on the GAN-generated CT images were compared with the measurements on the actual CT images. Statistical analyses of the absolute measurement errors were performed using Mann–Whitney U tests and nonlinear regression analyses.

Results
The study achieved 3D reconstruction from single AP postoperative THA X-ray images using GANs, with an excellent PSNR (37.40 dB) and SSIM (0.74). The median absolute differences in radiographic anteversion and inclination were 3.45° and 3.25°, respectively. Absolute measurement errors tended to be larger in cases with cup malposition than in those with optimal cup orientation.

Conclusions
This study demonstrates the potential of GANs for 3D reconstruction from single AP postoperative THA X-ray images to evaluate cup orientation. Further investigation and refinement of the model are required to improve its performance.
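The abstract reports PSNR/SSIM for image quality and Mann–Whitney U tests on absolute angle errors but does not show how these quantities are computed. The following Python sketch illustrates one way such an evaluation could be set up, assuming scikit-image and SciPy are available; the function name, the intensity normalization to [0, 1], and the example error arrays are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: PSNR/SSIM image-quality metrics and a
# Mann-Whitney U comparison of absolute angle errors between groups.
# The data values below are hypothetical placeholders, not study results.
import numpy as np
from scipy.stats import mannwhitneyu
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def volume_quality(real_ct: np.ndarray, generated_ct: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) between a real and a GAN-generated CT volume.

    Both volumes are assumed to be co-registered, identically shaped,
    and normalized to the intensity range [0, 1].
    """
    psnr = peak_signal_noise_ratio(real_ct, generated_ct, data_range=1.0)
    ssim = structural_similarity(real_ct, generated_ct, data_range=1.0)
    return psnr, ssim


# Hypothetical per-patient absolute cup-angle errors (degrees), split by
# whether the cup orientation was optimal or malpositioned.
errors_optimal = np.array([2.1, 3.0, 1.8, 3.4, 2.7])
errors_malpositioned = np.array([4.9, 6.2, 3.8, 7.1, 5.5])

# Two-sided Mann-Whitney U test comparing the two error distributions.
stat, p_value = mannwhitneyu(errors_optimal, errors_malpositioned,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```

In practice, the PSNR/SSIM values would be averaged over the test set and the error groups would be defined by the study's own cup-orientation criteria; this sketch only shows the shape of the computation.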