Abstract
In X-ray microtomography, flat field images are usually needed to normalize the collected sample projections. Owing to the high brightness of synchrotron radiation facilities, dynamic CT imaging of in-situ or in-operando processes is broadly employed to investigate three-dimensional microstructure evolution. However, fast, continuous data acquisition and heavy, bulky in-situ devices usually prevent the collection of accurate flat field images, so conventional flat field correction struggles to efficiently remove artefacts from X-ray microtomography data. We report a deep-learning-based artefact correction method for X-ray microtomography that uses a flat field generated from each CT projection by an improved pix2pixHD model. Experimental results demonstrate that, for the flat field correction of projection images, the proposed method has a significant advantage over both the conventional method and an existing deep-learning-based flat field correction method. The CT imaging results show that the proposed method efficiently reduces the systematic error introduced during intensity normalization, and the CT reconstruction improves significantly. The method is therefore applicable to the flat field correction of dynamic CT. Furthermore, experiments with a set of low-Z material samples verified that the deep-learning-based method generalizes to samples never used for network training. In conclusion, the method developed in this paper is practicable for flat field correction in in-situ CT imaging of dynamic processes and is also applicable to new samples, provided the neural network model is effectively trained.
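For background, the conventional flat field correction that the abstract refers to divides each dark-subtracted projection by the dark-subtracted flat field. A minimal NumPy sketch of this normalization, with all array names and values illustrative rather than taken from the paper:

```python
import numpy as np

def flat_field_correct(projection, flat, dark, eps=1e-6):
    """Conventional flat field correction:
    normalized = (projection - dark) / (flat - dark).
    `eps` guards against division by zero where the
    flat field barely exceeds the dark field."""
    num = projection.astype(np.float64) - dark
    den = np.clip(flat.astype(np.float64) - dark, eps, None)
    return num / den

# Synthetic example: the dark-subtracted projection is half the
# dark-subtracted flat field, so the normalized image is 0.5.
proj = np.full((4, 4), 150.0)
flat = np.full((4, 4), 250.0)
dark = np.full((4, 4), 50.0)
out = flat_field_correct(proj, flat, dark)  # → array of 0.5
```

The proposed method replaces the measured `flat` array with one generated from each projection by the network, precisely because an accurate measured flat field is unavailable during dynamic, in-situ acquisition.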