Abstract

Objective: Constructing a 3D model from 2D images, known as 2D-3D reconstruction, is a challenging task. Conventionally, a parametric 3D model such as a statistical shape model (SSM) is deformed to match the shapes observed in 2D images through a series of processes, including calibration, 2D-3D registration, and optimization for nonrigid deformation. To overcome this complicated procedure, a streamlined 2D-3D reconstruction using a single X-ray image is developed in this study.

Methods: We propose 2D-3D reconstruction of a femur using a deep neural network, in which the deformation parameters of the SSM that determine the 3D shape of the femur are predicted from a single X-ray image by a deep transfer-learning network. To train the network on distinct features that represent 3D shape information in the X-ray image, a specific proximal part of the femur, imaged from a unique X-ray pose that allows accurate prediction of the 3D femur shape, is designated and used for training. The corresponding proximal or distal 3D femur model is then reconstructed from only a single X-ray image acquired at the designated position.

Results: Experiments were conducted on actual X-ray images of a femur phantom and on X-ray images of a patient's femur derived from computed tomography to verify the proposed method. The average errors of the 3D shapes of the proximal and distal femurs reconstructed by the proposed method were 1.20 mm and 1.08 mm, respectively, in terms of root mean squared point-to-surface distance.

Conclusion: The proposed method presents an innovative approach to simplifying 2D-3D reconstruction using deep neural networks, with performance comparable to that of existing methodologies.
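
The Methods section above outlines the core pipeline: a transfer-learning network regresses the SSM deformation parameters from one X-ray image, and the 3D femur surface is recovered by deforming the SSM mean shape with those parameters. The snippet below is a minimal sketch of that idea under stated assumptions, not the authors' implementation; the ResNet-18 backbone, the number of modes, and all tensor sizes are illustrative choices.

```python
# Minimal sketch (not the paper's code): a transfer-learning CNN regresses SSM
# deformation parameters from a single X-ray image, and the 3D femur surface is
# reconstructed as the SSM mean shape plus a weighted sum of principal modes.
# Backbone, mode count, and tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models


class SSMParameterRegressor(nn.Module):
    """Predicts n_modes SSM deformation parameters from one X-ray image."""

    def __init__(self, n_modes: int = 20):
        super().__init__()
        # Transfer learning: start from an ImageNet-pretrained backbone.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Replace the classification head with a regression head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_modes)

    def forward(self, xray: torch.Tensor) -> torch.Tensor:
        # xray: (batch, 3, H, W) -- a grayscale X-ray replicated to 3 channels.
        return self.backbone(xray)


def reconstruct_shape(mean_shape: torch.Tensor,
                      modes: torch.Tensor,
                      weights: torch.Tensor) -> torch.Tensor:
    """Rebuild 3D vertices from predicted SSM parameters.

    mean_shape: (n_vertices, 3) mean femur shape
    modes:      (n_modes, n_vertices, 3) principal deformation modes
    weights:    (n_modes,) predicted deformation parameters
    """
    return mean_shape + torch.einsum("m,mvc->vc", weights, modes)


if __name__ == "__main__":
    model = SSMParameterRegressor(n_modes=20)
    xray = torch.randn(1, 3, 256, 256)        # placeholder X-ray image
    params = model(xray)[0]                   # (20,) deformation parameters
    mean = torch.zeros(5000, 3)               # placeholder mean shape
    modes = torch.randn(20, 5000, 3) * 1e-3   # placeholder SSM modes
    surface = reconstruct_shape(mean, modes, params)
    print(surface.shape)                      # torch.Size([5000, 3])
```

In a setup like this, the network would be trained with a regression loss on ground-truth SSM parameters (or directly on vertex positions), and accuracy would be reported, as in the abstract, by a root mean squared point-to-surface distance between the reconstructed and reference femur surfaces.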
