Abstract

A fast scan with a high helical pitch is desirable for many CT exams, such as pediatric, chest, and some cardiovascular exams, to suppress patient motion artifacts. However, on a single-source scanner, the pitch typically cannot exceed ~1.5 without generating image distortion across the entire scanning field of view, because the data acquired in a fast-pitch mode are insufficient. In this work, we developed a deep convolutional neural network-based approach to reduce artifacts in images reconstructed from insufficient data acquired in an ultra-fast-pitch mode (p ≥ 2.0). This custom-designed network, referred to as the ultra-fast-pitch image reconstruction neural network (UFP-net), consists of functional modules that use both local and non-local operators, as well as the z-coordinate of each image, to effectively suppress the location- and structure-dependent artifacts induced by the fast-pitch mode. The UFP-net was trained using a customized loss function that combines an image-gradient-correlation loss and a feature reconstruction loss. Projection data at a regular pitch (p = 1.0) and a fast pitch (p = 3.0) were simulated from 10 patient CT cases to generate training and validation datasets. Compared with filtered back-projection (FBP), UFP-net largely suppressed image artifacts and restored anatomical details. The structural similarity index (SSIM) was significantly improved (mean SSIM: UFP-net 0.9, FBP 0.6), and the root-mean-square error (RMSE) was largely reduced (mean RMSE: UFP-net 57 HU, FBP 273 HU). The proposed method has the potential to enable ultra-fast-pitch data acquisition on single-source CT scanners, improving scanning speed while maintaining image quality.
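
The abstract states that UFP-net is trained with a customized loss combining an image-gradient-correlation term and a feature reconstruction term, but does not give the exact formulation. The PyTorch sketch below shows one plausible way such a combination could be implemented; the Pearson-style gradient correlation, the choice of feature extractor, the function names (ufp_training_loss, gradient_correlation_loss), and all loss weights are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def image_gradients(img):
    """Finite-difference gradients of (N, C, H, W) CT slices along the in-plane axes."""
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    return dx, dy

def gradient_correlation_loss(pred, target, eps=1e-8):
    """1 - Pearson correlation between the image gradients of prediction and reference.

    This is one common definition of a gradient-correlation loss; the paper's
    exact definition may differ (assumption).
    """
    loss = 0.0
    for gp, gt in zip(image_gradients(pred), image_gradients(target)):
        gp = gp.flatten(1) - gp.flatten(1).mean(dim=1, keepdim=True)
        gt = gt.flatten(1) - gt.flatten(1).mean(dim=1, keepdim=True)
        corr = (gp * gt).sum(dim=1) / (gp.norm(dim=1) * gt.norm(dim=1) + eps)
        loss = loss + (1.0 - corr).mean()
    return loss / 2.0

def feature_reconstruction_loss(pred, target, feature_extractor):
    """MSE between feature maps from a fixed feature extractor (e.g., a pretrained CNN)."""
    with torch.no_grad():
        f_target = feature_extractor(target)
    f_pred = feature_extractor(pred)
    return F.mse_loss(f_pred, f_target)

def ufp_training_loss(pred, target, feature_extractor,
                      w_pix=1.0, w_gc=0.1, w_feat=0.01):
    """Hypothetical combined training loss: pixel-wise MSE + gradient correlation
    + feature reconstruction. The weights are placeholders, not values from the paper."""
    return (w_pix * F.mse_loss(pred, target)
            + w_gc * gradient_correlation_loss(pred, target)
            + w_feat * feature_reconstruction_loss(pred, target, feature_extractor))
```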
