Abstract

Computed tomography (CT) and magnetic resonance imaging (MRI) scanners measure three-dimensional (3D) images of patients. During surgery or radiotherapy, however, typically only local two-dimensional (2D) images can be acquired. Although computer vision techniques have shown that 3D shapes can be estimated from multiple 2D images, shape reconstruction from a single 2D image, such as an endoscopic or X-ray image, remains a challenge. In this study, we propose X-ray2Shape, a deep learning-based method that reconstructs a 3D organ mesh from a single 2D projection image. The method learns a mesh deformation from a mean template, conditioned on deep features computed from the individual projection image. Experiments with organ meshes and digitally reconstructed radiograph (DRR) images of abdominal regions were performed to confirm the estimation performance of the proposed method.
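To make the template-deformation idea concrete, the following is a minimal PyTorch sketch under our own assumptions, not the authors' published implementation: a small CNN encodes the DRR into a global feature vector, and a per-vertex MLP regresses a 3D displacement for every vertex of the mean template mesh. All class names, layer sizes, and the MLP deformation head are illustrative; the actual X-ray2Shape architecture (its encoder, deformation module, and losses) may differ.

```python
import torch
import torch.nn as nn


class XRay2ShapeSketch(nn.Module):
    """Illustrative sketch: deform a mean template mesh using features
    extracted from a single 2D projection (DRR) image."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Image encoder: maps a single-channel DRR to a global feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # Deformation head: concatenates the image feature with each template
        # vertex coordinate and regresses a per-vertex 3D displacement.
        self.deform = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, drr, template_vertices):
        # drr: (B, 1, H, W) projection image
        # template_vertices: (V, 3) vertices of the mean template mesh
        feat = self.encoder(drr)                                   # (B, feat_dim)
        v = template_vertices.unsqueeze(0).expand(feat.size(0), -1, -1)   # (B, V, 3)
        f = feat.unsqueeze(1).expand(-1, v.size(1), -1)            # (B, V, feat_dim)
        offsets = self.deform(torch.cat([v, f], dim=-1))           # (B, V, 3)
        return v + offsets                                         # deformed vertices


if __name__ == "__main__":
    template = torch.rand(1000, 3)            # placeholder mean template vertices
    model = XRay2ShapeSketch()
    pred = model(torch.rand(2, 1, 128, 128), template)
    print(pred.shape)                         # torch.Size([2, 1000, 3])
```

In training, such a model would typically be supervised with a vertex-wise loss (for example, L1 or Chamfer distance) between the deformed template and the ground-truth organ mesh, possibly with a smoothness regularizer on the predicted displacements; the losses used in the paper are not reproduced here.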
