Abstract

Photoacoustic tomography (PAT) is a non-ionizing hybrid imaging technology of clinical importance that combines the high contrast of optical imaging with the high penetration depth of ultrasonic imaging. However, two-dimensional (2D) tomographic images provide only the cross-sectional structure of the imaging target rather than its overall spatial morphology. This work proposes a deep learning framework for reconstructing the three-dimensional (3D) surface of an object of interest from a series of 2D images. The framework achieves end-to-end mapping from a series of 2D images to a 3D image, visually displaying the overall morphology of the object. It consists of four modules: a segmentation module, a point cloud generation module, a point cloud completion module, and a mesh conversion module, which respectively segment the region of interest, generate a sparse point cloud, complete the sparse point cloud, and reconstruct the 3D surface. The network model is trained on simulation data sets and verified on simulation, phantom, and in vivo data sets. The results show superior 3D reconstruction performance, both visually and in terms of quantitative evaluation metrics, compared with state-of-the-art non-learning and learning approaches. This method potentially enables high-precision 3D surface reconstruction from the tomographic images output by a preclinical PAT system without changing the imaging system, and it provides a general deep learning scheme for 3D reconstruction from tomographic scanning data.

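To make the four-stage pipeline described in the abstract more concrete, the sketch below chains segmentation, point cloud generation, and point cloud completion for a stack of 2D slices, with mesh conversion noted as a final step. It is a minimal illustration under assumptions: the use of PyTorch, the names PATSurfacePipeline, segmenter, and completer, the thresholding of segmentation outputs, and all tensor shapes are hypothetical and are not details taken from the paper.

```python
import torch
import torch.nn as nn


class PATSurfacePipeline(nn.Module):
    """Hypothetical end-to-end pipeline: stack of 2D PAT slices -> dense 3D point cloud.

    Each stage stands in for one of the four modules named in the abstract;
    the concrete network architectures and tensor shapes are assumptions.
    """

    def __init__(self, segmenter: nn.Module, completer: nn.Module):
        super().__init__()
        self.segmenter = segmenter  # 2D region-of-interest segmentation network
        self.completer = completer  # point cloud completion network

    def masks_to_point_cloud(self, masks: torch.Tensor, z_spacing: float) -> torch.Tensor:
        """Stack ROI pixels from each slice into a sparse (N, 3) point cloud."""
        points = []
        for z, mask in enumerate(masks):  # masks: (num_slices, H, W), binary
            ys, xs = torch.nonzero(mask, as_tuple=True)
            zs = torch.full_like(xs, z, dtype=torch.float32) * z_spacing
            points.append(torch.stack([xs.float(), ys.float(), zs], dim=1))
        return torch.cat(points, dim=0)

    def forward(self, slices: torch.Tensor, z_spacing: float = 1.0) -> torch.Tensor:
        # 1) Segment the region of interest in every 2D slice.
        masks = (self.segmenter(slices) > 0.5).squeeze(1)  # (num_slices, H, W)
        # 2) Generate a sparse point cloud from the stacked slice masks.
        sparse = self.masks_to_point_cloud(masks, z_spacing)
        # 3) Complete the sparse point cloud with a learned completion network.
        dense = self.completer(sparse.unsqueeze(0))
        # 4) Mesh conversion (e.g. Poisson or ball-pivoting surface
        #    reconstruction) would then be applied to `dense` outside this module.
        return dense
```

In a design of this kind, the first two stages operate slice by slice while completion and mesh conversion operate on the assembled 3D point cloud, which matches the 2D-to-3D mapping described above.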