Abstract

Regularization-based iterative CT reconstruction has demonstrated tremendous power. Formulated as an optimization problem, it minimizes an objective function containing a data-fidelity term and a regularization term. With the latter introduced to enforce image quality, high-quality images can be reconstructed under clinically desirable conditions, such as reduced imaging dose to patients. Yet the problem always contains parameters that govern the tradeoff between the two terms and affect the quality of the solution image. Manual parameter tuning is not only tedious but becomes impractical when many parameters are involved. Motivated by recent advances in deep learning that accomplish tasks in a human-like manner, such as playing computer games, we propose to employ deep reinforcement learning (DRL) to develop a system that adjusts parameters automatically with human-level intelligence, relieving the intensive effort of manual parameter tuning. We consider an example problem of CT reconstruction with pixel-wise total-variation (TV) regularization. We set up a parameter tuning policy network (PTPN) that maps an image patch to a decision specifying the direction and amplitude of the parameter adjustment for the pixel at the patch center. To train PTPN, a reward function is defined that favors improvement in image quality, quantified by the difference between the solution image and the ground-truth image, which is known at the training stage. We train PTPN via end-to-end DRL with the experience replay technique. Once PTPN is trained, we test it on simulated and real experimental cases. During a reconstruction process, the parameters are initialized randomly. PTPN continuously observes the intermediate reconstructed image and makes decisions to adjust the parameters until the image quality can no longer be improved. In simulation studies, PTPN intelligently guides the reconstruction process, yielding images with 3% lower error than those obtained under the randomly initialized parameters. The resulting image is slightly better than that obtained under manually tuned parameters (∼0.5% lower error). Similar behavior is observed in experimental data. For the problem of CT reconstruction with pixel-wise TV regularization, a closed-form solution for the optimal parameter values can be derived. The parameters tuned by PTPN appear similar to the optimal parameters, demonstrating the effectiveness of PTPN. Using DRL, we have developed a PTPN that adjusts the parameters of an iterative CT reconstruction problem in a human-like manner. The resulting images are similar to or better than those obtained under manual parameter tuning.
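To make the idea concrete, below is a minimal sketch (not the authors' released code) of how a PTPN-style agent could be organized: a small CNN scores a discrete set of "adjust the TV weight" actions for an image patch, and the training reward is the reduction in error with respect to the ground-truth image. The patch size, multiplicative action set, and network architecture shown here are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a PTPN-style Q-network for pixel-wise TV-weight tuning (PyTorch).
# Assumptions: 33x33 patches, 5 multiplicative actions on lambda, a small CNN.
import torch
import torch.nn as nn

ACTIONS = torch.tensor([1.5, 1.1, 1.0, 0.9, 0.5])  # assumed scaling factors for lambda


class PTPN(nn.Module):
    def __init__(self, patch_size: int = 33, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer feature size from a dummy patch
            n_feat = self.features(torch.zeros(1, 1, patch_size, patch_size)).shape[1]
        self.q_head = nn.Sequential(
            nn.Linear(n_feat, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (B, 1, H, W) -> Q-value for each candidate adjustment
        return self.q_head(self.features(patch))


def reward(err_before: float, err_after: float) -> float:
    """Reward favors improvement in image quality (error vs. ground truth)."""
    return err_before - err_after


def adjust_lambda(net: PTPN, patch: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Greedy use at test time: rescale the pixel-wise TV weight at the patch center
    by the action with the highest Q-value."""
    with torch.no_grad():
        a = net(patch).argmax(dim=1)
    return lam * ACTIONS[a]
```

During training one would alternate a few iterations of the regularized reconstruction solver with PTPN updates, storing (patch, action, reward, next-patch) tuples in a replay buffer and fitting the Q-network with a standard DQN-style loss; at test time only the greedy `adjust_lambda` step is needed.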
