Abstract

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. Processing is often complicated by the high rate of data collection required to capture events of interest in a time-series at sufficient time resolution, which compels researchers to collect a low number of projections for each tomogram in order to achieve the desired `frame rate'. It is common practice to collect a representative tomogram with many projections, before or after the time-critical portion of the experiment, without detrimentally affecting the time-series, to aid the analysis process. In this paper, these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach based on deep learning is proposed (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using behaviour learned from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows an upscaling process with better accuracy than existing interpolation techniques. This upscaling subsequently increases the quality of each tomogram's reconstruction, especially in situations where only a limited number of projections can be captured, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as it enables, for example, easier downstream segmentation of the tomograms in areas of interest.
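To make "upscaling the θ-axis of the sinograms" concrete, the following is a minimal baseline sketch, not the UDNN itself, showing the kind of plain interpolation along the angular axis that the proposed network aims to outperform. The function name and the factor of 2 are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_theta(sinogram, factor=2):
    """Upsample a sinogram along its angular (theta) axis only.

    sinogram: 2D array of shape (n_projections, n_detector_pixels).
    Returns an array of shape (factor * n_projections, n_detector_pixels).
    """
    # Cubic interpolation along theta; detector axis left untouched.
    return zoom(sinogram, (factor, 1), order=3)

# Example: 30 projections of a 64-pixel detector row -> 60 projections.
sino_low = np.random.rand(30, 64)
sino_up = upscale_theta(sino_low)
```

Such interpolation treats neighbouring projection angles generically; the point of a learned upscaler is to exploit sample-specific structure that interpolation cannot.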
The method itself comprises a convolutional neural network that, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach deliberately uses a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of the technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, the accompanying real-world experimental datasets have been released, in the form of two 80 GB tomograms depicting a metallic pin that undergoes corrosion from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
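As an illustrative sketch only (the actual UDNN architecture is described in the paper), an end-to-end sinogram-upscaling network in the SRCNN tradition can be expressed as interpolation along θ followed by a stack of small convolution-plus-ReLU layers. The kernels below are random placeholders standing in for trained weights; all names and sizes are hypothetical.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import convolve2d

def srcnn_style_upscale(sino_low, kernels, factor=2):
    """Hypothetical SRCNN-style forward pass: upscale the theta axis by
    interpolation, then refine with a stack of 2D convolutional layers."""
    x = zoom(sino_low, (factor, 1), order=3)      # interpolation stage
    for i, k in enumerate(kernels):
        x = convolve2d(x, k, mode="same")         # learned filtering stage
        if i < len(kernels) - 1:
            x = np.maximum(x, 0.0)                # ReLU between hidden layers
    return x

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]  # stand-ins for trained weights
out = srcnn_style_upscale(rng.random((30, 64)), kernels)
```

In a real training setup the kernels would be fitted, e.g. by minimising the pixel-wise loss between the network output and the corresponding highly sampled sinogram; keeping the stack shallow is what makes per-sample retraining cheap.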

Highlights

  • Over recent decades, X-ray computed tomography has become increasingly popular, allowing researchers to capture the hidden inner structure of many different systems

  • There are cases where the time of exposure of the imaged sample must be limited, allowing only for a small number of projections to be captured. One of these cases is during the acquisition of time-resolved volumetric tomography data collections (4D datasets), where multiple tomograms with smaller numbers of component projections are collected to enable capture of fast-occurring temporal events

  • New improved variations have been proposed (Kim et al, 2015; Ledig et al, 2016), with the main focus being better upscaling of natural images using deeper or more complex convolutional neural network (CNN) architectures. Inspired by this previous work, we propose here a super-resolution deep learning approach that aims to upscale the sinograms of X-ray tomograms (Section 2.2)

Introduction

X-ray computed tomography has become increasingly popular, allowing researchers to capture the hidden inner structure of many different systems. There are cases where the exposure time of the imaged sample must be limited, allowing only a small number of projections to be captured. One of these cases is the acquisition of time-resolved volumetric tomography data collections (4D datasets), where multiple tomograms with smaller numbers of component projections are collected to enable capture of fast-occurring temporal events. This series of projections comprises a representation of the tomogram, which is later reconstructed, using specialized software (Atwood et al, 2015), into a 3D representation consisting of voxels. It is common practice for facilities such as Diamond Light Source to capture these tomograms with a low number of projections, allowing for greater time resolution of the experimental process being measured. The resulting undersampling can sometimes be addressed using iterative reconstruction methods such as SIRT (Trampert & Leveque, 1990), SART (Andersen & Kak, 1984) and CGLS (Zhu et al, 1997), or more complex methods such as model-building approaches, but these are often very computationally expensive and therefore not practical for application to large volumes of data, such as those seen here
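For concreteness, SIRT's update rule, x_{k+1} = x_k + C Aᵀ R (b − A x_k), with R and C the inverse row and column sums of the system matrix A, can be sketched in a few lines. This is a toy dense-matrix illustration under assumed notation, not a production implementation: real tomographic system matrices are far too large to form explicitly, which is part of why such methods are expensive at scale.

```python
import numpy as np

def sirt(A, b, n_iter=100):
    """Minimal SIRT iteration: x <- x + C * A^T (R * (b - A x)),
    where R and C hold the inverse row/column sums of A (assumed nonnegative)."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy consistent system: recover x_true from measurements b = A @ x_true.
A = np.eye(4) + 0.1 * np.ones((4, 4))
x_true = np.array([1.0, 0.5, 2.0, 0.0])
x_rec = sirt(A, A @ x_true)
```

Even this simple scheme needs one forward and one backprojection per iteration, which makes the cost on terabyte-scale 4D datasets clear.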
