For PET/CT, the CT transmission data are used to correct the PET emission data for attenuation. However, subject motion between the consecutive scans can introduce spatial misalignment that degrades the PET reconstruction. A method to match the CT to the PET would reduce the resulting artifacts in the reconstructed images. This work presents a deep learning technique for inter-modality, elastic registration of PET/CT images to improve PET attenuation correction (AC). The feasibility of the technique is demonstrated for two applications: general whole-body (WB) imaging and cardiac myocardial perfusion imaging (MPI), with a specific focus on respiratory and gross voluntary motion. A convolutional neural network (CNN) was developed and trained for the registration task, comprising two distinct modules: a feature extractor and a displacement vector field (DVF) regressor. The network took as input a non-attenuation-corrected PET/CT image pair and returned the relative DVF between them; it was trained in a supervised fashion using simulated inter-image motion. The 3D motion fields produced by the network were used to resample the CT image volumes, elastically warping them to spatially match the corresponding PET distributions. Performance of the algorithm was evaluated on independent sets of WB clinical subject data: for recovering deliberate misregistrations imposed on motion-free PET/CT pairs and for reducing reconstruction artifacts in cases with actual subject motion. The efficacy of the technique is also demonstrated for improving PET AC in cardiac MPI applications. A single registration network was found to be capable of handling a variety of PET tracers. It demonstrated state-of-the-art performance in the PET/CT registration task and significantly reduced the effects of simulated motion imposed on motion-free clinical data. Registering the CT to the PET distribution was also found to reduce various types of AC artifacts in the reconstructed PET images of subjects with actual motion. In particular, liver uniformity was improved in subjects with significant observable respiratory motion. For MPI, the proposed approach was advantageous for correcting artifacts in myocardial activity quantification and potentially for reducing the rate of associated diagnostic errors. This study demonstrated the feasibility of using deep learning to register the anatomical image and thereby improve AC in clinical PET/CT reconstruction. Most notably, the approach mitigated common respiratory artifacts occurring near the lung/liver border, misalignment artifacts due to gross voluntary motion, and quantification errors in cardiac PET imaging.
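To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the two-module design: a shared feature extractor applied to the stacked non-AC PET and CT volumes, a DVF regressor producing a dense 3D displacement field, and a resampling step that warps the CT with the predicted DVF. The layer configuration, class names, and warping details here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; architecture and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """Shared 3D convolutional encoder for the stacked non-AC PET/CT pair."""
    def __init__(self, in_channels=2, base=16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)


class DVFRegressor(nn.Module):
    """Decodes the features into a dense 3-component displacement vector field."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 3, 3, padding=1),  # 3 channels: (dx, dy, dz) per voxel
        )

    def forward(self, feats, out_shape):
        dvf = self.layers(feats)
        # Upsample the coarse DVF back to the full image grid.
        return F.interpolate(dvf, size=out_shape, mode="trilinear", align_corners=False)


class RegistrationNet(nn.Module):
    """Feature extractor + DVF regressor, as in the two-module design."""
    def __init__(self):
        super().__init__()
        self.encoder = FeatureExtractor()
        self.regressor = DVFRegressor()

    def forward(self, nac_pet, ct):
        pair = torch.cat([nac_pet, ct], dim=1)          # (B, 2, D, H, W)
        feats = self.encoder(pair)
        return self.regressor(feats, pair.shape[2:])    # (B, 3, D, H, W)


def warp_ct(ct, dvf):
    """Resample the CT with the predicted DVF (displacements in voxel units)."""
    b, _, d, h, w = ct.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=torch.float32),
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    # Add displacements to the identity grid and normalize to [-1, 1] for grid_sample.
    x = (xx + dvf[:, 0]) / (w - 1) * 2 - 1
    y = (yy + dvf[:, 1]) / (h - 1) * 2 - 1
    z = (zz + dvf[:, 2]) / (d - 1) * 2 - 1
    grid = torch.stack([x, y, z], dim=-1)               # (B, D, H, W, 3), xyz order
    return F.grid_sample(ct, grid, mode="bilinear", align_corners=True)


if __name__ == "__main__":
    net = RegistrationNet()
    pet = torch.rand(1, 1, 32, 64, 64)   # toy non-AC PET volume
    ct = torch.rand(1, 1, 32, 64, 64)    # toy CT volume
    dvf = net(pet, ct)                   # predicted displacement field
    ct_warped = warp_ct(ct, dvf)         # CT elastically matched to the PET
    print(dvf.shape, ct_warped.shape)
```

In a supervised setup like the one the abstract describes, such a network could be trained by applying simulated deformations to motion-free CT volumes and regressing the known displacement field (e.g., with an L2 loss on the DVF); the warped CT would then feed the attenuation correction step of the PET reconstruction.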