Abstract

Background: Dual-energy computed tomography (DECT) is widely used because the additional spectral information improves substance identification. The quality of the material-specific images produced by DECT depends heavily on the design of the basis material decomposition method.
Objective: The aim of this work is to develop and validate a data-driven algorithm for the image-based decomposition problem.
Methods: A deep neural network, consisting of a fully convolutional net (FCN) and a fully connected net, is proposed to solve the material decomposition problem. The former extracts a feature representation of the input reconstructed images, and the latter calculates the decomposed basis material coefficients from the joint feature vector. The whole model was trained and tested on a modified clinical dataset.
Results: The proposed FCN delivers images with about 60% smaller bias and 70% lower standard deviation than the competing algorithms, suggesting better material separation capability. Moreover, the FCN retains excellent performance in the presence of photon noise.
Conclusions: Our deep cascaded network achieves high decomposition accuracy and is robust to noise. The experimental results demonstrate the strong function-fitting ability of deep neural networks. The deep learning paradigm could be a promising way to solve the nonlinear problem in DECT.
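The bias and standard deviation figures reported above are standard error statistics over a decomposed image. As a minimal sketch (with a synthetic array, not the paper's data), they can be computed from the signed error map between an estimated coefficient image and its ground truth:

```python
import numpy as np

def bias_and_std(decomposed, ground_truth):
    """Per-image error statistics: bias is the mean signed error,
    std is the standard deviation of the error map."""
    err = decomposed - ground_truth
    return float(err.mean()), float(err.std())

# Hypothetical example: a noisy, slightly offset estimate of a
# constant-coefficient region.
rng = np.random.default_rng(0)
truth = np.full((64, 64), 0.5)
estimate = truth + 0.01 + rng.normal(0.0, 0.02, truth.shape)
b, s = bias_and_std(estimate, truth)
```

Here the recovered bias is close to the injected offset (0.01) and the std close to the injected noise level (0.02).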

Highlights

  • Conventional single-energy X-ray techniques provide information about the examined object that is not sufficient to characterize it precisely

  • A fully convolutional net (FCN) is one kind of convolutional neural network (CNN), first proposed and used for semantic segmentation [29]. The standard CNN is generally composed of pooling layers and convolutional layers that are alternately connected. The convolutional layers learn features of the input. The pooling layers guarantee that the deeper layers can extract higher scale-level features through downsampling

  • We have designed a cascaded neural network for the material decomposition problem. The reconstructed images are pixel-wise mapped to decomposed images via several convolutional layers and a fully connected layer. The size of the input layer is 65 × 65, based on the hypothesis that the value of the material coefficient depends largely on the local region in the reconstructed images. The proposed fully convolutional net (FCN) processes data in an end-to-end way, without any need for precorrected images or other prior knowledge. The experimental results show its strong performance in capturing localized structural information and suppressing image noise. The decomposed images generated by matrix inversion and iterative decomposition contain a relatively large amount of artifacts
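The highlights above describe alternating convolution/pooling feature extraction followed by a fully connected mapping from a 65 × 65 patch to basis material coefficients. The following is a toy numpy forward-pass sketch of that structure, not the authors' trained network: kernel sizes, layer counts, the single-channel input, and all weights are illustrative assumptions.

```python
import numpy as np

def conv2d(x, k):
    # 'valid' single-channel 2D cross-correlation
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    # 2x2 max pooling (truncates odd trailing rows/columns)
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def fcn_forward(patch, k1, k2, w, b):
    """Toy cascaded net: two conv+ReLU+pool stages extract features
    from a 65x65 patch; a fully connected layer maps the flattened
    features to two basis material coefficients."""
    h = np.maximum(conv2d(patch, k1), 0)   # 65 -> 63
    h = maxpool2(h)                        # 63 -> 31
    h = np.maximum(conv2d(h, k2), 0)       # 31 -> 29
    h = maxpool2(h)                        # 29 -> 14
    f = h.ravel()                          # joint feature vector
    return w @ f + b                       # two material coefficients

rng = np.random.default_rng(1)
patch = rng.random((65, 65))
k1 = rng.normal(0.0, 0.1, (3, 3))
k2 = rng.normal(0.0, 0.1, (3, 3))
w = rng.normal(0.0, 0.01, (2, 14 * 14))
b = np.zeros(2)
coeffs = fcn_forward(patch, k1, k2, w, b)
```

In the end-to-end setting described above, such a mapping would be applied at every pixel location and the weights learned from training data rather than drawn at random.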

Introduction

Conventional single-energy X-ray techniques provide information about the examined object that is not sufficient to characterize it precisely. Dual-energy computed tomography (DECT) provides additional information by scanning the object with two different energy spectra and has been presented as a valid alternative to conventional single-energy X-ray imaging. Projection-based methods pass the projection data through a decomposition function, followed by image reconstruction such as filtered backprojection (FBP). Image-based methods use linear combinations of reconstructed images to obtain an image that contains material-selective DECT information. Image-based decomposition is an approximative technique, and the resulting images are less quantitative than those of projection-based methods. However, image-based methods can handle mismatched projection datasets and are applicable to the decomposition of three or more constituent materials, which is more expedient in practice. The deep learning paradigm could be a promising way to solve the nonlinear problem in DECT.
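The "linear combinations of reconstructed images" approach mentioned above can be sketched for the two-material case: each pixel's pair of attenuation values (low/high kVp) is mapped to basis material coefficients by inverting a 2 × 2 mixing matrix. The matrix entries below are illustrative, not calibrated values from any scanner.

```python
import numpy as np

# A[i, j]: effective attenuation of basis material j at energy spectrum i
# (hypothetical numbers for illustration only).
A = np.array([[0.30, 0.50],
              [0.20, 0.25]])
A_inv = np.linalg.inv(A)

def decompose(low_img, high_img):
    """Image-based two-material decomposition by matrix inversion:
    each pixel's (low, high) attenuation pair is linearly mapped to
    basis material coefficients."""
    stacked = np.stack([low_img, high_img])      # shape (2, H, W)
    return np.tensordot(A_inv, stacked, axes=1)  # basis images, (2, H, W)

# Synthetic check: pixels made of pure material 1 should decompose to (1, 0).
c = np.array([1.0, 0.0])
low = np.full((4, 4), A[0] @ c)
high = np.full((4, 4), A[1] @ c)
basis = decompose(low, high)
```

This linear inversion is what the paper compares against; because it treats each pixel independently, noise in the input images is amplified directly into the basis images, which motivates the learned decomposition.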

